While these samples are representative of the content of Science.gov, they are not comprehensive nor are they the most current set. We encourage you to perform a real-time search of Science.gov to obtain the most current and comprehensive results.

Last update: August 15, 2014.

1

We argue that there is a need for culture-specific measures of identity that delineate the factors that most make sense for specific cultural groups. One such measure, recently developed specifically for Māori peoples, is the Multi-Dimensional Model of Māori Identity and Cultural Engagement (MMM-ICE). Māori are the indigenous peoples of New Zealand. The MMM-ICE is a 6-factor measure that assesses the following aspects of identity and cultural engagement as Māori: (a) group membership evaluation, (b) socio-political consciousness, (c) cultural efficacy and active identity engagement, (d) spirituality, (e) interdependent self-concept, and (f) authenticity beliefs. This article examines the scale properties of the MMM-ICE using item response theory (IRT) analysis in a sample of 492 Māori. The MMM-ICE subscales showed reasonably even levels of measurement precision across the latent trait range. Analysis of age (cohort) effects further indicated that most aspects of Māori identification tended to be higher among older Māori, and these cohort effects were similar for both men and women. This study provides novel support for the reliability and measurement precision of the MMM-ICE. The study also provides a first step in exploring change and stability in Māori identity across the life span. A copy of the scale, along with recommendations for scale scoring, is included. PMID:23356361

Sibley, Chris G; Houkamau, Carla A

2013-01-01

2

NASA Astrophysics Data System (ADS)

Current practice in regional-scale shallow landslide hazard assessment is to adopt a one-dimensional slope stability representation. Such a representation cannot produce discrete landslides and thus cannot make predictions on landslide size. Furthermore, one-dimensional approaches cannot include lateral effects, which are known to be important in defining instability. Here we derive an alternative model that accounts for lateral resistance by representing the forces acting on each margin of an unstable block of soil. We model boundary frictional resistances using 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure, using the log-spiral method, on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion declines exponentially with soil depth. We test our model's ability to predict failure of an observed landslide where the relevant parameters are relatively well constrained and find that our model predicts failure at the observed location and predicts that larger or smaller failures conformal to the observed shape are indeed more stable. We use a sensitivity analysis of the model to show that lateral reinforcement sets a minimum landslide size, and that the additional strength at the downslope boundary results in optimal shapes that are longer in the downslope direction. However, reinforcement effects alone cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of the spatial pattern of key parameters (e.g. pore water pressure and soil depth) at the watershed scale. The application of the model at this scale requires an efficient method to find unstable shapes among an exponential number of candidates. In this context, the model allows a more extensive examination of the controls on landslide size, shape and location.

Milledge, David; Bellugi, Dino; McKean, Jim; Dietrich, William E.

2013-04-01

3

NASA Astrophysics Data System (ADS)

The infinite slope model is the basis for almost all watershed scale slope stability models. However, it assumes that a potential landslide is infinitely long and wide. As a result, it cannot represent resistance at the margins of a potential landslide (e.g. from lateral roots), and is unable to predict the size of a potential landslide. Existing three-dimensional models generally require computationally expensive numerical solutions and have previously been applied only at the hillslope scale. Here we derive an alternative analytical treatment that accounts for lateral resistance by representing the forces acting on each margin of an unstable block. We apply 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure using a log-spiral method on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion is an exponential function of soil depth. We benchmark this treatment against other more complete approaches (Finite Element (FE) and closed form solutions) and find that our model: 1) converges on the infinite slope predictions as length / depth and width / depth ratios become large; 2) agrees with the predictions from state-of-the-art FE models to within +/- 30% error, for the specific cases in which these can be applied. We then test our model's ability to predict failure of an actual (mapped) landslide where the relevant parameters are relatively well constrained. We find that our model predicts failure at the observed location with a nearly identical shape and predicts that larger or smaller shapes conformal to the observed shape are indeed more stable. Finally, we perform a sensitivity analysis using our model to show that lateral reinforcement sets a minimum landslide size, while the additional strength at the downslope boundary means that the optimum shape for a given size is longer in a downslope direction. 
However, reinforcement effects cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of spatial patterns of key parameters (e.g. pore water pressure) and motivating the model's watershed scale application. This watershed scale application requires an efficient method to find the least stable shapes among an almost infinite set. However, when applied in this context, it allows a more complete examination of the controls on landslide size, shape and location.
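This abstract and the one above both take the one-dimensional infinite slope model as their point of departure. As a reference for what that baseline computes, here is a minimal sketch of the standard infinite-slope factor of safety; the formula is the textbook one-dimensional version (no lateral or boundary resistance), and all parameter values are illustrative rather than drawn from the study:

```python
import math

def infinite_slope_fos(c_eff, phi_deg, gamma, depth, slope_deg, pore_pressure):
    """Factor of safety for the classic infinite-slope stability model.

    c_eff: effective cohesion (Pa); phi_deg: friction angle (degrees);
    gamma: soil unit weight (N/m^3); depth: vertical soil depth (m);
    slope_deg: slope angle (degrees); pore_pressure: pore water
    pressure at the failure plane (Pa). Values below are illustrative.
    """
    beta = math.radians(slope_deg)
    phi = math.radians(phi_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)
    resisting = c_eff + (normal_stress - pore_pressure) * math.tan(phi)
    return resisting / shear_stress  # FoS < 1 indicates predicted failure

# Wetter soil (higher pore pressure) lowers the factor of safety.
dry = infinite_slope_fos(2000.0, 35.0, 18000.0, 1.0, 30.0, 0.0)
wet = infinite_slope_fos(2000.0, 35.0, 18000.0, 1.0, 30.0, 5000.0)
```

The model described in the abstract augments this per-column balance with resistance terms on each margin of a finite soil block, which is what allows it to predict landslide size and shape.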

Milledge, D.; Bellugi, D.; McKean, J. A.; Dietrich, W.

2012-12-01

4

NASA Astrophysics Data System (ADS)

We investigate cross-correlations between typical Japanese stocks collected through the Yahoo! Japan website ( http://finance.yahoo.co.jp/ ). By applying multi-dimensional scaling (MDS) to the cross-correlation matrices, we draw two-dimensional scatter plots in which each point corresponds to a stock. To cluster these data points, we fit the data set with a mixture of several Gaussian densities. By minimizing the so-called Akaike Information Criterion (AIC) with respect to the parameters of the mixture, we attempt to specify the best possible mixture of Gaussians. It is natural to assume that the two-dimensional data points of all stocks shrink into a single small region when an economic crisis takes place. We check this assumption numerically for empirical Japanese stock data, for instance, around 11 March 2011.
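The MDS-plus-AIC pipeline described above can be sketched as follows. This is a hedged illustration, not the authors' code: the two-sector return structure, the number of stocks, and the correlation-to-distance conversion are all invented stand-ins for the Yahoo! Japan data.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-in for daily returns of 20 "stocks" in two sectors.
factors = rng.normal(size=(2, 250))
loadings = np.repeat(np.eye(2), 10, axis=0)      # 10 stocks per sector
returns = loadings @ factors + 0.3 * rng.normal(size=(20, 250))

# Cross-correlation matrix -> distance matrix -> 2-D MDS embedding.
corr = np.corrcoef(returns)
dist = np.sqrt(np.maximum(0.0, 2.0 * (1.0 - corr)))
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dist)

# Fit mixtures of Gaussians and keep the component count minimizing AIC.
aics = {k: GaussianMixture(n_components=k, random_state=0)
           .fit(coords).aic(coords)
        for k in range(1, 5)}
best_k = min(aics, key=aics.get)
```

Under the abstract's crisis hypothesis, the embedded points would collapse toward one region and the AIC-selected component count would drop accordingly.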

Ibuki, Takero; Suzuki, Sei; Inoue, Jun-ichi

5

Exploring perceptually similar cases with multi-dimensional scaling

NASA Astrophysics Data System (ADS)

Retrieving a set of known lesions similar to the one being evaluated might be of value for assisting radiologists to distinguish between benign and malignant clustered microcalcifications (MCs) in mammograms. In this work, we investigate how perceptually similar cases with clustered MCs may relate to one another in terms of their underlying characteristics (from disease condition to image features). We first conduct an observer study to collect similarity scores from a group of readers (five radiologists and five non-radiologists) on a set of 2,000 image pairs, which were selected from 222 cases based on their image features. We then explore the potential relationship among the different cases as revealed by their similarity ratings. We apply the multi-dimensional scaling (MDS) technique to embed all the cases in a 2-D plot, in which perceptually similar cases are placed in close vicinity of one another based on their level of similarity. Our results show that cases having different characteristics in their clustered MCs are accordingly placed in different regions in the plot. Moreover, cases of the same pathology tend to be clustered together locally, and neighboring cases (which are more similar) tend also to be similar in their clustered MCs (e.g., cluster size and shape). These results indicate that subjective similarity ratings from the readers are well correlated with the image features of the underlying MCs of the cases, and that perceptually similar cases could be of diagnostic value for discriminating between malignant and benign cases.

Wang, Juan; Yang, Yongyi; Wernick, Miles N.; Nishikawa, Robert M.

2014-03-01

6

MultiDimensional Analysis of Data Streams Using Stream Cubes

Large volumes of dynamic stream data pose great challenges for analysis. Besides its dynamic and transient behavior, stream data has another important characteristic: multi-dimensionality. Much stream data resides in a multi-dimensional space and at a rather low level of abstraction, whereas most analysts are interested in relatively high-level dynamic changes in some combination of dimensions. To discover high-level dynamic and

Jiawei Han; Y. Cai; Yixin Chen; Guozhu Dong; Jian Pei; Benjamin Wah; Jianyong Wang

7

Development of a Multi-Dimensional Scale for PDD and ADHD

ERIC Educational Resources Information Center

A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…

Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya

2011-01-01

8

Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, supports knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.

Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Cris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng

2013-01-01

9

An LMI framework for analysis and design of multi-dimensional haptic systems

This paper presents a convenient framework based on passivity and Linear Matrix Inequalities (LMIs) for stability analysis and controller design for haptic systems involving multiple devices and human operators interacting with a common virtual environment. The proposed approach addresses peculiar features of the multi-dimensional scenario such as different operator-device configurations, and allows for taking into account structural constraints such as

Gianni Bianchini; Marcello Orlandesi; Domenico Prattichizzo

2008-01-01

10

Polaris: A System for Query, Analysis and Visualization of MultiDimensional Relational Databases

In the last several years, large multi-dimensional databases have become common in a variety of applications such as data warehousing and scientific computing. Analysis and exploration tasks place significant demands on the interfaces to these databases. Because of the size of the data sets, dense graphical representations are more effective for exploration than spreadsheets and charts. Furthermore, because

Chris Stolte; Pat Hanrahan

2000-01-01

11

ERIC Educational Resources Information Center

This study proposes a multi-dimensional approach to investigate, represent, and categorize students' in-depth understanding of complex physics concepts. Clinical interviews were conducted with 30 undergraduate physics students to probe their understanding of heat conduction. Based on the data analysis, six aspects of the participants' responses…

Chiou, Guo-Li; Anderson, O. Roger

2010-01-01

12

Global optimization using interval analysis — the multi-dimensional case

We show how interval analysis can be used to compute the global minimum of a twice continuously differentiable function of n variables over an n-dimensional parallelepiped with sides parallel to the coordinate axes. Our method provides infallible bounds on both the globally minimum value of the function and the point(s) at which the minimum occurs.
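The core idea, interval bounds used to discard subregions that cannot contain the global minimum, can be sketched with a toy branch-and-bound loop. This is a simplification: Hansen's method also exploits interval bounds on derivatives, which are omitted here, and the interval arithmetic below is hand-coded for just the one example function.

```python
# Interval arithmetic sufficient for f(x, y) = (x - 1)^2 + (y + 0.5)^2.
def sqr(lo, hi):
    """Interval enclosure of t^2 for t in [lo, hi]."""
    if lo <= 0.0 <= hi:
        return 0.0, max(lo * lo, hi * hi)
    return min(lo * lo, hi * hi), max(lo * lo, hi * hi)

def f_interval(box):
    (xlo, xhi), (ylo, yhi) = box
    alo, ahi = sqr(xlo - 1.0, xhi - 1.0)
    blo, bhi = sqr(ylo + 0.5, yhi + 0.5)
    return alo + blo, ahi + bhi

def f_point(x, y):
    return (x - 1.0) ** 2 + (y + 0.5) ** 2

def interval_minimize(box, tol=1e-6):
    """Discard boxes whose interval lower bound exceeds the best
    midpoint value found so far; bisect the survivors."""
    best = f_point(sum(box[0]) / 2, sum(box[1]) / 2)
    queue, result = [box], box
    while queue:
        b = queue.pop()
        lo, _ = f_interval(b)
        if lo > best:
            continue                      # cannot contain the global minimum
        val = f_point(sum(b[0]) / 2, sum(b[1]) / 2)
        if val < best:
            best, result = val, b
        widths = [b[0][1] - b[0][0], b[1][1] - b[1][0]]
        if max(widths) < tol:
            continue
        d = widths.index(max(widths))     # bisect along the widest side
        l, h = b[d]
        m = (l + h) / 2
        for half in ((l, m), (m, h)):
            nb = list(b)
            nb[d] = half
            queue.append(tuple(nb))
    return best, result

best, box = interval_minimize(((-4.0, 4.0), (-4.0, 4.0)))
```

The "infallible bounds" property comes from the enclosure: `f_interval` is guaranteed to bracket every value f takes on the box, so pruned boxes provably cannot hold the minimizer.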

Eldon Hansen

1980-01-01

13

Method of multi-dimensional moment analysis for the characterization of signal peaks

A method of multi-dimensional moment analysis for the characterization of signal peaks can be used to optimize the operation of an analytical system. With a two-dimensional Peclet analysis, the quality and signal fidelity of peaks in a two-dimensional experimental space can be analyzed and scored. This method is particularly useful in determining optimum operational parameters for an analytical system which requires the automated analysis of large numbers of analyte data peaks. For example, the method can be used to optimize analytical systems including an ion mobility spectrometer that uses a temperature stepped desorption technique for the detection of explosive mixtures.
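Moment analysis of a peak can be sketched in one dimension as follows. The Gaussian test peak and the "Peclet-like" sharpness score are invented stand-ins for illustration; the patent's actual two-dimensional Peclet analysis and scoring are not reproduced here.

```python
import numpy as np

# Illustrative 1-D signal peak: a noise-free Gaussian on a fine grid.
x = np.linspace(0.0, 10.0, 2001)
mu, sigma = 4.2, 0.6
signal = np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Statistical moments of the peak:
# M0 (area) normalizes, M1 gives the centroid, M2 the spread.
m0 = signal.sum()
centroid = (x * signal).sum() / m0
variance = ((x - centroid) ** 2 * signal).sum() / m0

# A Peclet-like sharpness score: squared position over variance
# (hypothetical stand-in for the patent's peak-quality metric).
peclet = centroid ** 2 / variance
```

For an automated system, each detected peak would be scored this way and peaks with poor sharpness or drifting centroids flagged, which is the optimization loop the abstract alludes to.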

Pfeifer, Kent B; Yelton, William G; Kerr, Dayle R; Bouchier, Francis A

2012-10-23

14

Background: High throughput microarray technologies have afforded the investigation of genomes, epigenomes, and transcriptomes at unprecedented resolution. However, software packages to handle, analyze, and visualize data from these multiple 'omics disciplines have not been adequately developed. Results: Here, we present SIGMA2, a system for the integrative genomic multi-dimensional analysis of cancer genomes, epigenomes, and transcriptomes. Multi-dimensional datasets can be simultaneously

Raj Chari; Bradley P. Coe; Craig Wedseltoft; Marie Benetti; Ian M. Wilson; Emily A. Vucic; Calum Macaulay; Raymond T. Ng; Wan L. Lam

2008-01-01

15

This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.

Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.

2012-05-01

16

Effective use of metadata in the integration and analysis of multi-dimensional optical data

NASA Astrophysics Data System (ADS)

Data discovery and integration relies on adequate metadata. However, creating and maintaining metadata is time consuming and often poorly addressed or avoided altogether, leading to problems in later data analysis and exchange. This is particularly true for research fields in which metadata standards do not yet exist or are under development, or within smaller research groups without enough resources. Vegetation monitoring using in-situ and remote optical sensing is an example of such a domain. In this area, data are inherently multi-dimensional, with spatial, temporal and spectral dimensions usually being well characterized. Other equally important aspects, however, might be inadequately translated into metadata. Examples include equipment specifications and calibrations, field/lab notes and field/lab protocols (e.g., sampling regimen, spectral calibration, atmospheric correction, sensor view angle, illumination angle), data processing choices (e.g., methods for gap filling, filtering and aggregation of data), quality assurance, and documentation of data sources, ownership and licensing. Each of these aspects can be important as metadata for search and discovery, but they can also be used as key data fields in their own right. If each of these aspects is also understood as an "extra dimension," it is possible to take advantage of them to simplify the data acquisition, integration, analysis, visualization and exchange cycle. Simple examples include selecting data sets of interest early in the integration process (e.g., only data collected according to a specific field sampling protocol) or applying appropriate data processing operations to different parts of a data set (e.g., adaptive processing for data collected under different sky conditions). More interesting scenarios involve guided navigation and visualization of data sets based on these extra dimensions, as well as partitioning data sets to highlight relevant subsets to be made available for exchange. 
The DAX (Data Acquisition to eXchange) Web-based tool uses a flexible metadata representation model and takes advantage of multi-dimensional data structures to translate metadata types into data dimensions, effectively reshaping data sets according to available metadata. With that, metadata is tightly integrated into the acquisition-to-exchange cycle, allowing for more focused exploration of data sets while also increasing the value of, and incentives for, keeping good metadata. The tool is being developed and tested with optical data collected in different settings, including laboratory, field, airborne, and satellite platforms.

Pastorello, G. Z.; Gamon, J. A.

2012-12-01

17

Seismic fragility analysis of highway bridges considering multi-dimensional performance limit state

NASA Astrophysics Data System (ADS)

Fragility analysis for highway bridges has become increasingly important in the risk assessment of highway transportation networks exposed to seismic hazards. This study introduces a methodology to calculate fragility that considers multi-dimensional performance limit state parameters and makes a first attempt to develop fragility curves for a multispan continuous (MSC) concrete girder bridge considering two performance limit state parameters: column ductility and transverse deformation in the abutments. The main purpose of this paper is to show that the performance limit states, which are compared with the seismic response parameters in the calculation of fragility, should be properly modeled as randomly interdependent variables instead of deterministic quantities. The sensitivity of fragility curves is also investigated when the dependency between the limit states is different. The results indicate that the proposed method can be used to describe the vulnerable behavior of bridges which are sensitive to multiple response parameters and that the fragility information generated by this method will be more reliable and likely to be implemented into transportation network loss estimation.

Wang, Qi'ang; Wu, Ziyan; Liu, Shukui

2012-03-01

18

Within the realm of sport management, team identification has been examined as single dimensional construct (Wann & Branscombe, 1993). However, research in social psychology has examined group identity as a multi-dimensional concept. To accurately measure and to more fully understand the implications of team identification, the construct should be studied as a multi-dimensional construct. This study is a first attempt

Bob Heere

2005-01-01

19

CTH: A software family for multi-dimensional shock physics analysis.

National Technical Information Service (NTIS)

CTH is a family of codes developed at Sandia National Laboratories for modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algor...

E. S. Hertel; R. L. Bell; M. G. Elrick; A. V. Farnsworth; G. I. Kerley

1992-01-01

20

Finite volume scheme for multi-dimensional drift-diffusion equations and convergence analysis

We introduce a finite volume scheme for multi-dimensional drift-diffusion equations. Such equations arise from the theory of semiconductors and are composed of two continuity equations coupled with a Poisson equation. In the case that the continuity equations are non-degenerate, we prove the convergence of the scheme and then the existence of solutions to the problem. The key point of
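As a hedged illustration of the conservative finite-volume idea only, here is a toy one-dimensional explicit scheme with upwind advective and central diffusive face fluxes on a periodic grid. The paper's scheme is multi-dimensional and coupled to a Poisson equation, neither of which is reproduced; all parameters below are invented.

```python
import numpy as np

# Explicit finite-volume step for u_t + (v u)_x = D u_xx, periodic in x.
n, L = 200, 1.0
dx = L / n
v, D = 1.0, 0.01
dt = 0.4 * min(dx / abs(v), dx * dx / (2 * D))   # stable explicit step

x = (np.arange(n) + 0.5) * dx
u = np.exp(-200.0 * (x - 0.3) ** 2)              # initial blob

def step(u):
    # F[i] is the flux through the right face of cell i:
    # upwind advection (valid for v > 0) plus central diffusion.
    F = v * u - D * (np.roll(u, -1) - u) / dx
    # Update each cell from the difference of its two face fluxes;
    # because faces are shared, total mass is conserved exactly.
    return u - dt / dx * (F - np.roll(F, 1))

mass0 = u.sum() * dx
for _ in range(500):
    u = step(u)
```

The shared-face construction is what makes finite-volume schemes attractive for continuity equations: whatever leaves one cell enters its neighbor, so the discrete analogue of charge conservation holds to rounding error.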

Claire Chainais-Hillairet; Jian-Guo Liu; Yue-Jun Peng

2003-01-01

21

NASA Astrophysics Data System (ADS)

We have developed an interactive web-based scheme for data-mining the spatio-temporal patterns of many earthquakes. This novel technique is based on cluster analysis of the multi-resolutional structures of earthquakes. The interactive scheme is based on a client-server paradigm in which we have used the off-screen rendering technique to facilitate the visual interrogation. A powerful 3-D visualization package, Amira ( www.amiravis.com ), is also used to visualize the complex cluster patterns in a reduced dimensional space. We have applied our method to observed and synthetic seismic catalogs. The observed data represent seismic activities situated around the Japanese islands in the 1997-2003 time interval. The synthetic data were generated by numerical simulations for various cases of a heterogeneous fault governed by quasi-analytical 3-D elastic dislocation models. At the highest resolution, we analyze the local cluster structure in the data space of seismic events for the two types of catalogs by using an agglomerative clustering algorithm. We demonstrate that small magnitude events produce local spatio-temporal patches corresponding to neighboring large events. Seismic events, quantized in space and time, generate the multi-dimensional feature space of the earthquake parameters. Using a non-hierarchical clustering algorithm and multi-dimensional scaling, we explore the multitudinous earthquakes by real-time 3-D visualization and inspection of multivariate clusters. At the resolutions characteristic of the earthquake parameters, all of the ongoing seismicity before and after the largest events accumulates into a global structure consisting of a few separate clusters in the feature space. We show that by combining the clustering results from low- and high-resolution spaces, we can recognize precursory events more precisely. We will discuss how this WEB-IS (Web-Interrogative System) would work.
One can also access the system at http://boy.msi.umn.edu/web-is/. Its implementation and deployment in light of future GRID computing will be discussed in terms of the recently developed Narada-Brokering (distributed messaging) system of publishing and subscribing. This will provide a scalable infrastructure for several applications involving a set of nodes communicating with each other.

Yuen, D. A.; Dzwinel, W.; Bollig, E. F.; Kadlec, B. F.; Ben-Zion, Y.; Yoshioka, S.

2003-12-01

22

ERIC Educational Resources Information Center

Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as…

Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.

2007-01-01

23

CTH: A software family for multi-dimensional shock physics analysis

CTH is a family of codes developed at Sandia National Laboratories for modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH includes models for material strength, fracture, porous materials, and high explosive detonation and initiation. Viscoplastic

E. S. Jr. Hertel; R. L. Bell; M. G. Elrick; A. V. Farnsworth; G. I. Kerley; J. M. McGlaun; S. V. Petney; S. A. Silling; P. A. Taylor; L. Yarrington

1992-01-01

24

ERIC Educational Resources Information Center

Attempts to design a multiple-item, multiple-dimension organization/public relationship scale. Finds that organizations and key publics have three types of relationships: professional, personal, and community. Provides an instrument that can be used to measure the influence that perceptions of the organization/public relationship have on consumer…

Bruning, Stephen D.; Ledingham, John A.

1999-01-01

25

CTH: A software family for multi-dimensional shock physics analysis

CTH is a family of codes developed at Sandia National Laboratories for modeling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH includes models for material strength, fracture, porous materials, and high explosive detonation and initiation. Viscoplastic or rate-dependent models of material strength have been added recently. The formulations of Johnson-Cook, Zerilli-Armstrong, and Steinberg-Guinan-Lund are standard options within CTH. These models rely on using an internal state variable to account for the history dependence of material response. The implementation of internal state variable models will be discussed and several sample calculations will be presented. Comparison with experimental data will be made among the various material strength models. The advancements made in modelling material response have significantly improved the ability of CTH to model complex large-deformation, plastic-flow dominated phenomena. Detonation of energetic material under shock loading conditions has been of great interest. A recently developed model of reactive burn for high explosives (HE) has been added to CTH. This model along with newly developed tabular equations-of-state for the HE reaction by-products has been compared to one- and two-dimensional explosive detonation experiments. These comparisons indicate excellent agreement of CTH predictions with experimental results. The new reactive burn model coupled with the advances in equation-of-state modeling make it possible to predict multi-dimensional burn phenomena without modifying the model parameters for different dimensionality. Examples of the features of CTH will be given. The emphasis in simulations shown will be in comparison with well characterized experiments covering key phenomena of shock physics.

Hertel, E.S. Jr.; Bell, R.L.; Elrick, M.G.; Farnsworth, A.V.; Kerley, G.I.; McGlaun, J.M.; Petney, S.V.; Silling, S.A.; Taylor, P.A.; Yarrington, L.

1992-12-31

26

Background Lack of social support is an important risk factor for antenatal depression and anxiety in low- and middle-income countries. We translated, adapted and validated the Multi-dimensional Scale of Perceived Social Support (MSPSS) in order to study the relationship between perceived social support, intimate partner violence and antenatal depression in Malawi. Methods The MSPSS was translated and adapted into Chichewa and Chiyao. Five hundred and eighty-three women attending an antenatal clinic were administered the MSPSS, depression screening measures, and a risk factor questionnaire including questions about intimate partner violence. A sub-sample of participants (n = 196) were interviewed using the Structured Clinical Interview for DSM-IV to diagnose major depressive episode. Validity of the MSPSS was evaluated by assessment of internal consistency, factor structure, and correlation with Self Reporting Questionnaire (SRQ) score and major depressive episode. We investigated associations between perception of support from different sources (significant other, family, and friends) and major depressive episode, and whether intimate partner violence was a moderator of these associations. Results In both Chichewa and Chiyao, the MSPSS had high internal consistency for the full scale and significant other, family, and friends subscales. MSPSS full scale and subscale scores were inversely associated with SRQ score and major depression diagnosis. Using principal components analysis, the MSPSS had the expected 3-factor structure in analysis of the whole sample. On confirmatory factor analysis, goodness-of-fit indices were better for a 3-factor model than for a 2-factor model, and met standard criteria when correlation between items was allowed.
Lack of support from a significant other was the only MSPSS subscale that showed a significant association with depression on multivariate analysis, and this association was moderated by experience of intimate partner violence. Conclusions The MSPSS is a valid measure of perceived social support in Malawi. Lack of support by a significant other is associated with depression in pregnant women who have experienced intimate partner violence in this setting.
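The internal consistency reported above is conventionally measured with Cronbach's alpha. A minimal sketch on synthetic data follows; the 4-item structure, sample size, and noise level are invented stand-ins, not the MSPSS data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic responses for a hypothetical 4-item subscale:
# one latent trait per respondent plus independent item noise.
n_respondents, n_items = 583, 4
latent = rng.normal(size=(n_respondents, 1))
items = latent + 0.4 * rng.normal(size=(n_respondents, n_items))

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

alpha = cronbach_alpha(items)
```

When items share a common latent trait, the variance of the summed score exceeds the sum of item variances, pushing alpha toward 1; uncorrelated items drive it toward 0.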

2014-01-01

27

National Technical Information Service (NTIS)

Petri nets have become a mature modeling and analysis tool applicable in many application domains. Many extensions of the classical Petri net have been proposed. The multi-dimensional Petri net model presented in this paper deals with these extensions in ...

W. M. P. van der Aalst

1993-01-01

28

NASA Astrophysics Data System (ADS)

The integration of satellite data with physically based models can enable the characterization of earth systems and lead to improved management of natural resources at the catchment and regional scales. The reliability of simulations from physically based models depends on the accuracy of the forcing data and the model parameters. Forcing data obtained from satellites or other sources are often plagued with uncertainties and the model parameters require updates to capture the ever-changing environmental conditions. Although comprehensive data assimilation schemes for dual state and parameter updating have been proposed for improving the reliability of model simulations, their computational cost is sometimes prohibitively high. In this contribution, we propose a cost-effective and efficient alternative to handling complex multi-dimensional parameter and state improvement at the catchment scale. Our approach demystifies the complex multi-dimensional parameter estimation and state improvement problem by combining 1-dimensional exhaustive gridding with sensitivity-pushing, Newton-Raphson based guided random sampling and feedback from historical inverse estimates. In a numerical case study in the joint Rur and Erft Catchments in Germany, we apply our novel partial grid search approach to the estimation of soil surface roughness and vegetation opacity from disaggregated SMOS (Soil Moisture and Ocean Salinity Satellite) brightness temperature using the Community Microwave Emission Modeling platform (CMEM). Besides plausibly good estimates of the soil surface roughness and vegetation opacity at the catchment scale, our method also leads to improvement of the system states like soil surface moisture and soil temperature profile. Our method therefore has data assimilation capabilities without the associated computational cost incurred in ensemble-based data assimilation approaches. 
The partial grid search approach to parameter estimation is therefore a promising tool for multi-dimensional parameter estimation and state improvement in earth systems.

Mboh, Cho Miltin; Montzka, Carsten; Baatz, Roland; Vereecken, Harry

2014-05-01

29

Sorting multiple classes in multi-dimensional ROC analysis: parametric and nonparametric approaches.

In large-scale data analysis, such as in a microarray study to identify the most differentially expressed genes, diagnostic tests are frequently used to classify and predict subjects into their different categories. Frequently, these categories do not have an intrinsic natural order even though the quantitative test results have a relative order. As identifying the correct order for a proper definition of accuracy measures is important for a high-dimensional receiver operating characteristic (ROC) analysis, we propose rigorous and automated approaches to sort out the multiple categories using simple summary statistics such as means and relative effects. We discuss the hypervolume under the ROC manifold (HUM), its dependence on the order of the test results and the minimum acceptable HUM values in a general multi-category classification problem. Using a leukemia data set and a liver cancer data set, we show how our approaches provide accurate screening results when we have a large number of tests. PMID:24329017
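A minimal sketch of the idea, assuming the empirical HUM for three classes is estimated as the fraction of cross-class triples in the correct order, with the category order fixed by sorting class means (one of the simple summary statistics the authors suggest); the data below are synthetic:

```python
import numpy as np

def empirical_hum(scores_by_class):
    """Empirical hypervolume under the ROC manifold for three classes:
    the fraction of cross-class triples (x, y, z) with x < y < z under
    the assumed category order."""
    a, b, c = scores_by_class
    correct = sum(1 for x in a for y in b for z in c if x < y < z)
    return correct / (len(a) * len(b) * len(c))

def order_by_means(scores_by_class):
    """Fix the category order by sorting class means."""
    return sorted(range(len(scores_by_class)),
                  key=lambda i: float(np.mean(scores_by_class[i])))

# synthetic test scores for three diagnostic categories
rng = np.random.default_rng(0)
classes = [rng.normal(4, 1, 50), rng.normal(0, 1, 50), rng.normal(2, 1, 50)]
order = order_by_means(classes)          # recovers the order 1, 2, 0
hum = empirical_hum([classes[i] for i in order])
# chance level for 3 classes is 1/3! ~ 0.167; separated classes score higher
```

Choosing a wrong category order would drive the HUM toward (or below) the chance level, which is why the sorting step matters.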

Li, Jialiang; Chow, Yanyu; Wong, Weng Kee; Wong, Tien Yin

2014-02-01

30

NASA Astrophysics Data System (ADS)

The broad goal of this study is to represent the linguistic variation of textbooks and lectures, the primary input for student learning---and sometimes the sole input in the large introductory classes which characterize General Education at many state universities. Computer techniques are used to analyze a corpus of textbooks and lectures from first-year university classes in macroeconomics and biology. These spoken and written variants are compared to each other as well as to benchmark texts from other multi-dimensional studies in order to examine their patterns, relations, and functions. A corpus consisting of 147,000 words was created from macroeconomics and biology lectures at a medium-large state university and from a set of nationally "best-selling" textbooks used in these same introductory survey courses. The corpus was analyzed using multi-dimensional methodology (Biber, 1988). The analysis consists of both empirical and qualitative phases. Quantitative analyses are undertaken on the linguistic features, their patterns of co-occurrence, and on the contextual elements of classrooms and textbooks. The contextual analysis is used to functionally interpret the statistical patterns of co-occurrence along five dimensions of textual variation, demonstrating patterns of difference and similarity with reference to text excerpts. Results of the analysis suggest that academic discourse is far from monolithic. Pedagogic discourse in introductory classes varies by modality and discipline, but not always in the directions expected. In the present study the most abstract texts were biology lectures---more abstract than written genres of academic prose and more abstract than introductory textbooks. Academic lectures in both disciplines, monologues which carry a heavy informational load, were extremely interactive, more like conversation than academic prose. 
A third finding suggests that introductory survey textbooks differ from those used in upper division classes by being relatively less marked for information density, abstraction, and non-overt argumentation. In addition to the findings mentioned here, numerous other relationships among the texts exhibit complex patterns of variation related to a number of situational variables. Pedagogical implications are discussed in relation to General Education courses, differing student populations, and the reading and listening demands which students encounter in large introductory classes in the university.

Carkin, Susan

31

NASA Astrophysics Data System (ADS)

In this paper, a novel superpixel-based approach is introduced for unsupervised change detection using remote sensing images. The proposed approach contains three steps. 1) Superpixel segmentation: the simple linear iterative clustering (SLIC) algorithm is applied to obtain lattice-like homogeneous superpixels. To avoid discordance between the superpixel boundaries obtained from the bi-temporal images, the two images are first fused using principal component analysis, and the SLIC algorithm is then applied to the first three principal components, which contain the main information of the two images. 2) For each superpixel, which is considered the basic unit of the image space, a multi-dimensional change vector is computed from spectral, textural and structural features. 3) The superpixels are classified as changed or unchanged through two progressive classification processes: they are first catalogued into three types (changed, unchanged and undefined) by thresholding the change vectors and a voting process; the undefined superpixels are then further classified into changed and unchanged using an SVM-based classifier trained on the changed and unchanged superpixels derived in the former step. An experiment using an Indonesian data set confirmed that the proposed approach is able to detect changes automatically by exploiting multiple change features.
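The PCA fusion in step 1 can be sketched in a few lines of NumPy (an illustrative stand-in, not the authors' implementation; the SLIC segmentation itself is omitted):

```python
import numpy as np

def pca_fuse(img_t1, img_t2, n_components=3):
    """Fuse two co-registered multi-band images (H x W x B each) by stacking
    their bands and projecting every pixel onto the leading principal
    components, so a single SLIC segmentation can be run on the fused image."""
    h, w, b = img_t1.shape
    stacked = np.concatenate([img_t1, img_t2], axis=2).reshape(-1, 2 * b)
    centered = stacked - stacked.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, eigvec = np.linalg.eigh(cov)              # eigenvalues ascending
    leading = eigvec[:, ::-1][:, :n_components]  # top principal directions
    return (centered @ leading).reshape(h, w, n_components)

# synthetic bi-temporal scene: 3-band 32x32 images, the second slightly changed
rng = np.random.default_rng(1)
t1 = rng.random((32, 32, 3))
t2 = t1 + 0.1 * rng.standard_normal((32, 32, 3))
fused = pca_fuse(t1, t2)                         # (32, 32, 3): first three PCs
```

Segmenting the fused components, rather than each date separately, guarantees a single common superpixel lattice for the change-vector step.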

Wu, Z.; Hu, Z.; Fan, Q.

2012-07-01

32

Frequency analysis for multi-dimensional systems. Global dynamics and diffusion

Frequency analysis is a new method for analyzing the stability of orbits in a conservative dynamical system. It was first devised in order to study the stability of the solar system [J. Laskar, Icarus 88 (1990) 266-291] and then applied to the 2D standard mapping [Laskar et al., Physica D 56 (1992) 253-269]. It is a powerful method for analyzing

Jacques Laskar

1993-01-01

33

Graph OLAP: a multi-dimensional framework for graph data analysis

Databases and data warehouse systems have been evolving from handling normalized spreadsheets stored in relational databases, to managing and analyzing diverse application-oriented data with complex interconnecting structures. Responding to this emerging trend, graphs have been growing rapidly and showing their critical importance in many applications, such as the analysis of XML, social networks, Web, biological data, multimedia data and spatiotemporal

Chen Chen; Xifeng Yan; Feida Zhu; Jiawei Han; Philip S. Yu

2009-01-01

34

Multi-dimensional analysis of combustion instabilities in liquid rocket motors

NASA Astrophysics Data System (ADS)

A three-dimensional analysis of combustion instabilities in liquid rocket engines is presented based on a mixed finite difference/spectral solution methodology for the gas phase and a discrete droplet tracking formulation for the liquid phase. Vaporization is treated by a simplified model based on an infinite thermal conductivity assumption for spherical liquid droplets of fuel in a convective environment undergoing transient heating. A simple two-parameter phenomenological combustion response model is employed for validation of the results in the small amplitude regime. The computational procedure is demonstrated to capture the phenomena of wave propagation within the combustion chamber accurately. Results demonstrate excellent amplitude and phase agreement with analytical solutions for properly selected grid resolutions under both stable and unstable operating conditions. Computations utilizing the simplified droplet model demonstrate stable response to arbitrary pulsing. This is possibly due to the assumption of uniform droplet temperature, which removes the thermal inertia time-lag response of the vaporization process. The mixed-character scheme is sufficiently efficient to allow solutions on workstations at a modest increase in computational time over that required for two-dimensional solutions.

Grenda, Jeffrey M.; Venkateswaran, Sankaran; Merkle, Charles L.

1992-07-01

35

Multi-dimensional analysis of combustion instabilities in liquid rocket motors

NASA Technical Reports Server (NTRS)

A three-dimensional analysis of combustion instabilities in liquid rocket engines is presented based on a mixed finite difference/spectral solution methodology for the gas phase and a discrete droplet tracking formulation for the liquid phase. Vaporization is treated by a simplified model based on an infinite thermal conductivity assumption for spherical liquid droplets of fuel in a convective environment undergoing transient heating. A simple two-parameter phenomenological combustion response model is employed for validation of the results in the small amplitude regime. The computational procedure is demonstrated to capture the phenomena of wave propagation within the combustion chamber accurately. Results demonstrate excellent amplitude and phase agreement with analytical solutions for properly selected grid resolutions under both stable and unstable operating conditions. Computations utilizing the simplified droplet model demonstrate stable response to arbitrary pulsing. This is possibly due to the assumption of uniform droplet temperature, which removes the thermal inertia time-lag response of the vaporization process. The mixed-character scheme is sufficiently efficient to allow solutions on workstations at a modest increase in computational time over that required for two-dimensional solutions.

Grenda, Jeffrey M.; Venkateswaran, Sankaran; Merkle, Charles L.

1992-01-01

36

ERIC Educational Resources Information Center

This study examined the reliability and validity of self-reported survey data on instructional practices. It was based on a nationwide survey of more than 25,000 teachers in more than 1,000 schools across 5 years. The survey instrument was the Classroom Instructional Practice Scale (CIPS), which was based on the Classroom Information Sheet…

Shim, Minsuk K.; Felner, Robert D.; Shim, Eunjae; Noonan, Nancy

37

NASA Astrophysics Data System (ADS)

Over the last five years, UNIDATA has developed an extensible and flexible software framework for analyzing and visualizing geoscience data and models. The Integrated Data Viewer (IDV), initially developed for visualization and analysis of atmospheric data, has broad interdisciplinary application across the geosciences including atmospheric, ocean, and most recently, earth sciences. As part of the NSF-funded GEON Information Technology Research project, UNAVCO has enhanced the IDV to display earthquakes, GPS velocity vectors, and plate boundary strain rates. These and other geophysical parameters can be viewed simultaneously with three-dimensional seismic tomography and mantle geodynamic model results. Disparate data sets of different formats, variables, geographical projections and scales can automatically be displayed in a common projection. The IDV is efficient and fully interactive allowing the user to create and vary 2D and 3D displays with contour plots, vertical and horizontal cross-sections, plan views, 3D isosurfaces, vector plots and streamlines, as well as point data symbols or numeric values. Data probes (values and graphs) can be used to explore the details of the data and models. The IDV is a freely available Java application using Java3D and VisAD and runs on most computers. UNIDATA provides easy-to-follow instructions for download, installation and operation of the IDV. The IDV primarily uses netCDF, a self-describing binary file format, to store multi-dimensional data, related metadata, and source information. The IDV is designed to work with OPeNDAP-equipped data servers that provide real-time observations and numerical models from distributed locations. Users can capture and share screens and animations, or exchange XML "bundles" that contain the state of the visualization and embedded links to remote data files. 
A real-time collaborative feature allows groups of users to remotely link IDV sessions via the Internet and simultaneously view and control the visualization. A Jython-based formulation facility allows computations on disparate data sets using simple formulas. Although the IDV is an advanced tool for research, its flexible architecture has also been exploited for educational purposes with the Virtual Geophysical Exploration Environment (VGEE) development. The VGEE demonstration added physical concept models to the IDV and curricula for atmospheric science education intended for the high school to graduate student levels.

Meertens, C. M.; Murray, D.; McWhirter, J.

2004-12-01

38

Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations

NASA Technical Reports Server (NTRS)

We present new, efficient central schemes for multi-dimensional Hamilton-Jacobi equations. These non-oscillatory, non-staggered schemes are first- and second-order accurate and are designed to scale well with an increasing dimension. Efficiency is obtained by carefully choosing the location of the evolution points and by using a one-dimensional projection step. First-and second-order accuracy is verified for a variety of multi-dimensional, convex and non-convex problems.
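As a one-dimensional illustration of the class of schemes involved, here is a first-order monotone central (Lax-Friedrichs-type) discretization of u_t + H(u_x) = 0 with a convex model Hamiltonian; this is a textbook sketch, not the paper's second-order multi-dimensional scheme:

```python
import numpy as np

def lax_friedrichs_hj(u0, dx, dt, steps, alpha, H):
    """First-order monotone central scheme for u_t + H(u_x) = 0 on a
    periodic 1-D grid; alpha should bound |H'| over the relevant slopes."""
    u = u0.copy()
    nu = alpha * dt / (2.0 * dx)          # artificial-viscosity coefficient
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        p = (up - um) / (2.0 * dx)        # central approximation of u_x
        u = u - dt * H(p) + nu * (up - 2.0 * u + um)
    return u

H = lambda p: 0.5 * p ** 2                # convex model Hamiltonian
n = 200
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
u0 = -np.cos(x)                           # smooth periodic initial data
u = lax_friedrichs_hj(u0, dx, dt=0.4 * dx, steps=200, alpha=1.0, H=H)
```

Because H >= 0 here and the viscosity coefficient satisfies nu <= 1/2, the discrete maximum of u is non-increasing, mirroring the non-oscillatory behavior such schemes are designed for.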

Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

2002-01-01

39

The purpose of the paper is to review critical issues concerning the economic dimensions of cultural heritage, in order to show that—tangible and intangible—“cultural economic” goods and services, as provided by cultural institutions, may be analysed and valued in a multi-dimensional, multi-attribute and multi-value socio-economic environment. On this multi-dimensional and multi-attribute setting, a conceptual framework for analysing cultural services and

Massimiliano Mazzanti

2002-01-01

40

A selective and sensitive multiresidue analysis method, comprising 47 pesticides, was developed and validated in tobacco matrix. The optimized sample preparation procedure in combination with gas chromatography-mass spectrometry in selected-ion-monitoring (GC-MS/SIM) mode offered limits of detection (LOD) and quantification (LOQ) in the range of 3-5 and 7.5-15 ng/g, respectively, with recoveries between 70 and 119% at 50-100 ng/g fortifications. In comparison to the modified QuEChERS (Quick-Easy-Cheap-Effective-Rugged-Safe) method (2 g tobacco + 10 ml water + 10 ml acetonitrile, 30 min vortexing, followed by dispersive solid phase extraction cleanup), the method performed better in minimizing matrix co-extractives, e.g. nicotine and megastigmatrienone. Ambiguity in analysis due to co-elution of target analytes (e.g. transfluthrin-heptachlor) and with matrix co-extractives (e.g. ?-HCH-neophytadiene, 2,4-DDE-linolenic acid) could be resolved by selective multi-dimensional (MD) GC heart-cuts. The method holds promise in routine analysis owing to a noticeable efficiency of 27 samples/person/day. PMID:24746872

Khan, Zareen S; Ghosh, Rakesh Kumar; Girame, Rushali; Utture, Sagar C; Gadgil, Manasi; Banerjee, Kaushik; Reddy, D Damodar; Johnson, Nalli

2014-05-23

41

A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system

NASA Astrophysics Data System (ADS)

The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin upstream of the confluence of the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding the changes in the URG-SLV that led to these events, and the interactions among the various drivers of change, is a difficult task. One reason is that complex social-ecological systems are adaptive and contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspects) relevant to the system, except water policy, for which only discrete events were present. Change point and statistical analyses were applied to the longitudes to identify quantifiable changes, to allow detection of cross-scale linkages between drivers, and to detect the presence of feedback cycles. Agriculture was identified as the driver signal.
Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979--2007. Changes in streamflow, water allocations and water policy were observed in all agriculture periods. Cross-scale linkages were also evident between climate and streamflow; policy and water rights; and agriculture, groundwater pumping and streamflow.
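The abstract does not specify the change point method used; as a hedged illustration, a single change point in a driver time series can be located by minimizing the two-segment squared error (the series below is synthetic):

```python
import numpy as np

def single_change_point(series):
    """Index that best splits a series into two constant-mean segments
    (minimum total squared error) -- the simplest change point criterion."""
    best_k, best_cost = None, np.inf
    for k in range(2, len(series) - 1):   # at least two points per segment
        left, right = series[:k], series[k:]
        cost = ((left - left.mean()) ** 2).sum() \
             + ((right - right.mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# synthetic driver series (e.g. irrigated acreage) with a step change at 60
rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(10, 1, 60), rng.normal(15, 1, 40)])
k = single_change_point(series)           # detected break, near index 60
```

Applying such a split recursively to each segment yields the multiple breakpoints needed to delimit distinct periods like those reported above.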

Mix, Ken

42

This study analyzed spending in a community college to determine how dollars were aligned to the institution's mission. Using the Coopers & Lybrand LLP computerized reporting tool called the Finance Analysis Model (FAM), this study developed the FAM into a Higher Education Finance Model (HEFM). Data from 1996–1998 were collected from the colleges' general statement of accounts and was submitted

Joseph Angelo Vasti

1999-01-01

43

ERIC Educational Resources Information Center

We ask whether failing one or more of the state-mandated high-school exit examinations affects whether students graduate from high school. Using a new multi-dimensional regression-discontinuity approach, we examine simultaneously scores on mathematics and English language arts tests. Barely passing both examinations, as opposed to failing them,…

Papay, John P.; Willett, John B.; Murnane, Richard J.

2011-01-01

44

Perceptual evaluation of multi-dimensional spatial audio reproduction

NASA Astrophysics Data System (ADS)

Perceptual differences between sound reproduction systems with multiple spatial dimensions have been investigated. Two blind studies were performed using system configurations involving 1-D, 2-D, and 3-D loudspeaker arrays. Various types of source material were used, ranging from urban soundscapes to musical passages. Experiment I consisted of collecting subjects' perceptions in a free-response format to identify relevant criteria for multi-dimensional spatial sound reproduction of complex auditory scenes by means of linguistic analysis. Experiment II utilized both free response and scale judgments for seven parameters derived from Experiment I. Results indicated a strong correlation between the source material (sound scene) and the subjective evaluation of the parameters, making the notion of an "optimal" reproduction method difficult for arbitrary source material.

Guastavino, Catherine; Katz, Brian F. G.

2004-08-01

45

Scaling discourse analysis: Experiences from Hermanus, South Africa and Walvis Bay, Namibia1

Scaling discourse analysis refers to the need to treat environmental discourse as a multi-dimensional and diversified practice. Depending on the various levels of state and society at which environmental policies are applied, and depending on the geographical scale at which their solution is sought, we have to differentiate both policy processes and outcomes in environmental politics. We introduce the importance of

Roger Keil; Anne-Marie Debbané

2005-01-01

46

New Bounds for Multi-Dimensional Packing.

National Technical Information Service (NTIS)

New upper and lower bounds are presented for a multi-dimensional generalization of bin packing called box packing. Several variants of this problem, including bounded space box packing, square packing, variable sized box packing and resource augmented box...

R. Van Stee; S. Seiden

2001-01-01

47

Progress in multi-dimensional upwind differencing

NASA Technical Reports Server (NTRS)

Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.

Van Leer, Bram

1992-01-01

48

A new measure of orthogonality for multi-dimensional chromatography.

Multi-dimensional chromatographic techniques, such as (comprehensive) two-dimensional liquid chromatography and (comprehensive) two-dimensional gas chromatography, are increasingly popular for the analysis of complex samples, such as protein digests or mineral oils. The reason behind the popularity of these techniques is the superior performance, in terms of peak-production rate (peak capacity per unit time), that multi-dimensional separations offer compared to their one-dimensional counterparts. However, to fully utilize the potential of multi-dimensional chromatography it is essential that the separation mechanisms used in each dimension be independent of each other. In other words, the two separation mechanisms need to be orthogonal. A number of algorithms have been proposed in the literature for measuring chromatographic orthogonality. However, these methods have their limitations, such as reliance on the division of the separation space into bins, the need for specialist software, or the requirement of advanced programming skills. In addition, some of the existing methods for measuring orthogonality include regions of the separation space that do not feature peaks. In this paper we introduce a number of equations which provide information on the spread of the peaks within the separation space in addition to measuring orthogonality, without the need for complex computations or division of the separation space into bins. PMID:25064248
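For context, a common bin-free surrogate for orthogonality is one minus the absolute Pearson correlation of the retention times in the two dimensions; this is an illustrative baseline, not the specific equations introduced in the paper:

```python
import numpy as np

def correlation_orthogonality(rt1, rt2):
    """1 - |Pearson r| of the retention times in the two dimensions:
    1 for fully independent mechanisms, 0 for fully correlated ones.
    (Illustrative baseline only, not the cited paper's equations.)"""
    r = np.corrcoef(rt1, rt2)[0, 1]
    return 1.0 - abs(float(r))

# synthetic retention times for 100 peaks
rng = np.random.default_rng(3)
rt_dim1 = rng.random(100)
rt_dim2_corr = rt_dim1 + 0.05 * rng.standard_normal(100)   # similar mechanisms
rt_dim2_indep = rng.random(100)                            # independent mechanisms
o_low = correlation_orthogonality(rt_dim1, rt_dim2_corr)   # close to 0
o_high = correlation_orthogonality(rt_dim1, rt_dim2_indep) # close to 1
```

Note that a correlation-based measure says nothing about how the peaks spread across the plane, which is precisely the gap the paper's equations aim to close.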

Camenzuli, Michelle; Schoenmakers, Peter J

2014-08-01

49

Femtosecond laser induced surface deformation in multi-dimensional data storage

NASA Astrophysics Data System (ADS)

We investigate the surface deformation in two-photon induced multi-dimensional data storage. Both experimental evidence and theoretical analysis are presented to demonstrate the surface characteristics and formation mechanism in azo-containing material. The deformation reveals strong polarization dependence and has a topographic effect on multi-dimensional encoding. Different stages of data storage process are finally discussed taking into consideration the surface deformation formation.

Hu, Yanlei; Chen, Yuhang; Li, Jiawen; Hu, Daqiao; Chu, Jiaru; Zhang, Qijin; Huang, Wenhao

2012-12-01

50

Multi-dimensional image reconstruction

US Patent & Trademark Office Database

Apparatus for radiation based imaging of a non-homogenous target area having distinguishable regions therein comprises: an imaging unit configured to obtain radiation intensity data from a target region in the spatial dimensions and at least one other dimension, and an image four-dimension analysis unit that analyzes the intensity data in the spatial dimensions and said at least one other dimension in order to map the distinguishable regions. The system typically detects rates of change over time in signals from radiopharmaceuticals and uses the rates of change to identify the tissues. In a preferred embodiment, two or more radiopharmaceuticals are used, the results of one being used as a constraint on the other.

2014-03-18

51

On Multi-Dimensional Unstructured Mesh Adaption

NASA Technical Reports Server (NTRS)

Anisotropic unstructured mesh adaption is developed for a truly multi-dimensional upwind fluctuation splitting scheme, as applied to scalar advection-diffusion. The adaption is performed locally using edge swapping, point insertion/deletion, and nodal displacements. Comparisons are made versus the current state of the art for aggressive anisotropic unstructured adaption, which is based on a posteriori error estimates. Demonstration of both schemes to model problems, with features representative of compressible gas dynamics, show the present method to be superior to the a posteriori adaption for linear advection. The performance of the two methods is more similar when applied to nonlinear advection, with a difference in the treatment of shocks. The a posteriori adaption can excessively cluster points to a shock, while the present multi-dimensional scheme tends to merely align with a shock, using fewer nodes. As a consequence of this alignment tendency, an implementation of eigenvalue limiting for the suppression of expansion shocks is developed for the multi-dimensional distribution scheme. The differences in the treatment of shocks by the adaption schemes, along with the inherently low levels of artificial dissipation in the fluctuation splitting solver, suggest the present method is a strong candidate for applications to compressible gas dynamics.

Wood, William A.; Kleb, William L.

1999-01-01

52

Multi-dimensional MHD simple waves

NASA Technical Reports Server (NTRS)

In this paper we consider a formalism for multi-dimensional simple MHD waves using ideas developed by Boillat. For simple wave solutions one assumes that all the physical variables (the density rho, gas pressure p, fluid velocity V, gas entropy S, and magnetic induction B in the MHD case) depend on a single phase function phi(r,t). The simple wave solution ansatz and the MHD equations then require that the phase function has the form phi = r . n(phi) - lambda(phi) t, where n(phi) = grad phi / |grad phi| is the wave normal and lambda(phi) = omega/k = -phi_t / |grad phi| is the normal speed of the wave front. The formalism allows for more general simple waves than those usually dealt with, in which n(phi) is a constant unit vector that does not vary along the wave front. The formalism has implications for shock formation for multi-dimensional waves.
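In standard notation, the simple-wave ansatz described in the abstract reads:

```latex
\phi(\mathbf{r},t) = \mathbf{r}\cdot\mathbf{n}(\phi) - \lambda(\phi)\,t,
\qquad
\mathbf{n}(\phi) = \frac{\nabla\phi}{|\nabla\phi|},
\qquad
\lambda(\phi) = \frac{\omega}{k} = -\frac{\phi_t}{|\nabla\phi|},
```

where n(phi) is the unit wave normal and lambda(phi) the normal speed of the wave front; the generalization lies in letting n vary with phi rather than being a fixed unit vector.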

Webb, G. M.; Ratkiewicz, R.; Brio, M.; Zank, G. P.

1995-01-01

53

Multi-dimensional particle sizing techniques

NASA Astrophysics Data System (ADS)

Two techniques for multi-dimensional sizing of spherical particles are discussed and compared with one another: interferometric particle imaging (IPI) and a novel technique known as global phase Doppler (GPD). Whereas the IPI technique is known from various previous studies and uses a laser light sheet illumination of the particle field, the GPD technique is a new method and employs two intersecting laser light sheets. The resulting far-field interference pattern arises from the interference of like scattering orders from the particle, similar to the phase Doppler technique. A description of this far-field interference is given for both techniques. Both multi-dimensional particle sizing techniques sample the scattered light in the far-field by means of a defocused imaging system. The diameter of each droplet illuminated by the laser light sheet(s) is determined by measuring the angular frequency of the interference fringes in the defocused images. Combined with a pulsed laser, the technique also allows the velocity of the particle to be determined, similar to particle tracking velocimetry (PTV). However, the size of the defocused image of each particle also depends on the position of the particle perpendicular to the laser sheet, hence, with appropriate calibration, the third component of velocity is also accessible. The two techniques, IPI and GPD, are compared to one another in terms of implementation and expected accuracy. Possibilities of combining the two techniques are also discussed. Some novel approaches for the signal processing have been introduced and demonstrated with simulated and real signals.

Damaschke, Nils; Nobach, Holger; Nonn, Thomas I.; Semidetnov, Nikolay; Tropea, Cameron

2005-08-01

54

The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets

NASA Technical Reports Server (NTRS)

The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g. thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different from those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
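The sensitivity to the choice of one-dimensionalization can be seen in a toy example: area-weighted and mass-flux-weighted means of the same total pressure field generally differ (all numbers below are invented for illustration):

```python
import numpy as np

def area_average(q, dA):
    """Area-weighted mean of quantity q over cell areas dA."""
    return float((q * dA).sum() / dA.sum())

def mass_flux_average(q, rho, u, dA):
    """Mass-flux-weighted mean: cells count in proportion to rho*u*dA."""
    mdot = rho * u * dA
    return float((q * mdot).sum() / mdot.sum())

# invented duct cross-section: equal-area cells, boundary-layer-like profile
n = 50
dA = np.full(n, 1.0 / n)
u = np.linspace(0.2, 1.0, n)        # slow near the wall, fast in the core
rho = np.full(n, 1.0)
pt = 100.0 + 20.0 * u               # total pressure higher in the fast core

pt_area = area_average(pt, dA)
pt_flux = mass_flux_average(pt, rho, u, dA)   # larger: core weighted more
```

Because the flux average weights the fast core more heavily, pt_flux exceeds pt_area here; which value is "correct" depends entirely on the figure of merit being computed.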

Baurle, R. A.; Gaffney, R. L.

2007-01-01

55

Alkyl-methoxypyrazines (MPs) are important odour-active constituents of many grape cultivars and their wines. Recently, a new MP, 2,5-dimethyl-3-methoxypyrazine (DMMP), has been reported as a possible constituent of wine. This study sought to develop a rapid and reliable method for quantifying DMMP, isopropyl methoxypyrazine (IPMP), sec-butyl methoxypyrazine (SBMP) and isobutyl methoxypyrazine (IBMP) in wine. The proposed method is able to rapidly and accurately resolve all 4 MPs in a range of wine styles, with limits of detection between 1 and 2 ng L(-1) for IPMP, SBMP and IBMP and 5 ng L(-1) for DMMP. Analysis of a set of 11 commercial wines agrees with previously published values for IPMP, SBMP and IBMP, and shows for the first time that DMMP may be an important and somewhat common odorant in red wines. To our knowledge, this is the first analytical method developed for the quantification of DMMP in wine. PMID:24799220

Botezatu, Andreea; Pickering, Gary J; Kotseridis, Yorgos

2014-10-01

56

Background Genomics has substantially changed our approach to cancer research. Gene expression profiling, for example, has been utilized to delineate subtypes of cancer, and facilitated derivation of predictive and prognostic signatures. The emergence of technologies for the high resolution and genome-wide description of genetic and epigenetic features has enabled the identification of a multitude of causal DNA events in tumors. This has afforded the potential for large scale integration of genome and transcriptome data generated from a variety of technology platforms to acquire a better understanding of cancer. Results Here we show how multi-dimensional genomics data analysis would enable the deciphering of mechanisms that disrupt regulatory/signaling cascades and downstream effects. Since not all gene expression changes observed in a tumor are causal to cancer development, we demonstrate an approach based on multiple concerted disruption (MCD) analysis of genes that facilitates the rational deduction of aberrant genes and pathways, which otherwise would be overlooked in single genomic dimension investigations. Conclusions Notably, this is the first comprehensive study of breast cancer cells by parallel integrative genome wide analyses of DNA copy number, LOH, and DNA methylation status to interpret changes in gene expression pattern. Our findings demonstrate the power of a multi-dimensional approach to elucidate events which would escape conventional single dimensional analysis and as such, reduce the cohort sample size for cancer gene discovery.

2010-01-01

57

Computer Aided Data Analysis in Sociometry

ERIC Educational Resources Information Center

A computer program which analyzes sociometric data is presented. The SDAS program provides classical sociometric analysis. Multi-dimensional scaling and cluster analysis techniques may be combined with the MSP program. (JKS)

Langeheine, Rolf

1978-01-01

58

Multi-dimensionally encoded magnetic resonance imaging

Magnetic resonance imaging typically achieves spatial encoding by measuring the projection of a q-dimensional object over q-dimensional spatial bases created by linear spatial encoding magnetic fields (SEMs). Recently, imaging strategies using nonlinear SEMs have demonstrated potential advantages for reconstructing images with higher spatiotemporal resolution and reducing peripheral nerve stimulation. In practice, nonlinear SEMs and linear SEMs can be used jointly to further improve the image reconstruction performance. Here we propose the multi-dimensionally encoded (MDE) MRI to map a q-dimensional object onto a p-dimensional encoding space where p > q. MDE MRI is a theoretical framework linking imaging strategies using linear and nonlinear SEMs. Using a system of eight surface SEM coils with an eight-channel RF coil array, we demonstrate the five-dimensional MDE MRI for a two-dimensional object as a further generalization of PatLoc imaging and O-space imaging. We also present a method of optimizing spatial bases in MDE MRI. Results show that MDE MRI with a higher dimensional encoding space can reconstruct images more efficiently and with a smaller reconstruction error when the k-space sampling distribution and the number of samples are controlled.
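In the discretized view, mapping a q-dimensional object into a p-dimensional encoding space amounts to a linear system y = E x, and reconstruction can be posed as least squares. The sketch below illustrates only that linear-algebra view with a synthetic encoding matrix; it is not the authors' MDE reconstruction code, and all sizes are arbitrary assumptions.

```python
# Sketch: generalized-encoding MRI reconstruction as least squares.
# All encoding functions (linear/nonlinear SEM phase patterns, coil
# sensitivities) are stacked into one matrix E; data y = E @ x are
# inverted for the object x. Toy sizes, random E; illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 16                       # pixels in a tiny 1-D "object"
m = 48                       # encoded samples (over-determined)

x_true = rng.standard_normal(n)      # unknown object
E = rng.standard_normal((m, n))      # generic encoding matrix
y = E @ x_true                       # noiseless acquisition

# Least-squares reconstruction; lstsq handles non-square E.
x_hat, *_ = np.linalg.lstsq(E, y, rcond=None)
print(np.allclose(x_hat, x_true))    # → True
```

With more encoding dimensions than strictly needed (m > n), the system is over-determined and the recovery is exact in this noiseless toy; the paper's point about efficiency concerns how the choice of encoding bases conditions E.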

Lin, Fa-Hsuan

2013-01-01

59

This paper aims to evaluate the validation of Schalock's quality of life multi-dimensional model (1996) in the Portuguese context. We also analyze the quality of life of disabled people by adding a political dimension (adapted from the Minorities' Rights Support Scale by Nata & Menezes, 2007) to this construct and seeking to understand the impact of discrimination. The sample is composed of 217 participants, most of whom have a physical disability, aged 16 to 81. Validation procedures of the Quality of Life Questionnaire (Schalock & Keith, 1993) and descriptive statistics and correlation analysis were conducted. Confirmatory Factor Analysis revealed good local and global fit indices, and the internal consistency of the scales was satisfactory. An adapted version of the instrument composed of five scales-satisfaction, competence, empowerment, equality of rights and positive discrimination-is proposed. The results reveal the importance of rights and empowerment for the quality of life of disabled people and indicate a strong critical consciousness concerning the experience of discrimination in different contexts. Taken together, the findings indicate the strong need for social and political changes in this domain. PMID:23866209

Loja, Ema; Costa, Maria Emília; Menezes, Isabel

2013-01-01

60

Towards Optimal Multi-Dimensional Query Processing with Bitmap Indices.

National Technical Information Service (NTIS)

Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. This paper studies strategies for minimizing the acce...
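The query mechanism that makes bitmap indices attractive for multi-dimensional queries can be sketched in a few lines: one bitmap per attribute value, with a conjunctive query answered by bitwise AND. The table, column names, and helper below are hypothetical; production systems additionally compress the bitmaps (e.g. with word-aligned schemes).

```python
# Sketch: answering a multi-dimensional query with bitmap indices.
# Each distinct attribute value gets a bitmap; bit i is set when row i
# has that value. A conjunctive query is a bitwise AND of bitmaps.
# Illustrative only; real systems add bitmap compression.

def build_bitmaps(column):
    """Map each distinct value to a Python int used as a bitmap."""
    bitmaps = {}
    for row, value in enumerate(column):
        bitmaps[value] = bitmaps.get(value, 0) | (1 << row)
    return bitmaps

# A tiny table with two attribute columns.
energy = ["low", "high", "high", "low", "high"]
region = ["north", "north", "south", "south", "north"]

energy_idx = build_bitmaps(energy)
region_idx = build_bitmaps(region)

# Query: energy == "high" AND region == "north"
hits = energy_idx["high"] & region_idx["north"]
matching_rows = [i for i in range(len(energy)) if hits >> i & 1]
print(matching_rows)  # → [1, 4]
```

Each extra query dimension costs one more AND over the candidate bitmaps, which is why access-cost minimization across bitmaps is the interesting optimization problem.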

D. Rotem; K. Stockinger; K. Wu

2005-01-01

61

Optimal Data Scheduling for Uniform MultiDimensional Applications

Uniform nested loops are broadly used in scientific and multi-dimensional digital signal processing applications. Due to the amount of data handled by such applications, on-chip memory is required to improve the data access and overall system performance. In this study, a static data scheduling method, carrot-hole data scheduling, is proposed for multi-dimensional applications, in order to control the data traffic

Qingyan Wang; Edwin H.-M. Sha; Nelson L. Passos

62

Towards a genuinely multi-dimensional upwind scheme

NASA Technical Reports Server (NTRS)

Methods of incorporating multi-dimensional ideas into algorithms for the solution of Euler equations are presented. Three schemes are developed and tested: a scheme based on a downwind distribution, a scheme based on a rotated Riemann solver and a scheme based on a generalized Riemann solver. The schemes show an improvement over first-order, grid-aligned upwind schemes, but the higher-order performance is less impressive. An outlook for the future of multi-dimensional upwind schemes is given.

Powell, Kenneth G.; Vanleer, Bram; Roe, Philip L.

1990-01-01

63

A multi-dimensional solver for the steady Euler equations

In this paper a numerical algorithm for the solution of the multi-dimensional steady Euler equations in conservative and non-conservative form is presented. Most existing standard and multi-dimensional schemes use flux balances with assumed constant distribution of variables along each cell edge, which interfaces two grid cells. This assumption is believed to be one of the main reasons for the limited

R Schwane

2003-01-01

64

Multi-dimensional modelling of gas turbine combustion using a flame sheet model in KIVA II

NASA Technical Reports Server (NTRS)

A flame sheet model for heat release is incorporated into a multi-dimensional fluid mechanical simulation for gas turbine application. The model assumes that the chemical reaction takes place in thin sheets compared to the length scale of mixing, which is valid for the primary combustion zone in a gas turbine combustor. In this paper, the details of the model are described and computational results are discussed.

Cheng, W. K.; Lai, M.-C.; Chue, T.-H.

1991-01-01

65

The internationalization of small and medium-sized enterprises (SMEs) is explored by focusing on a clarification of the internationalization construct. A set of hypotheses is conceptually developed, including the main dimensions of internationalization (operation mode, market, product, time and performance). The proposed multi-dimensional internationalization construct is empirically tested. Questionnaire data were collected from a sample of 161 Slovenian SMEs. Scales were tested

Mitja Ruzzier; Bostjan Antoncic; Robert D. Hisrich

2007-01-01

66

Multi-dimensional hybrid-simulation techniques in plasma physics

Multi-dimensional hybrid simulation models have been developed for use in studying plasma phenomena on extended time and distance scales. The models make fundamental use of the small Debye length or quasi-neutrality assumption. The ions are modeled by particle-in-cell (PIC) techniques while the electrons are considered a collision-dominated fluid. The fields are calculated in the nonradiative Darwin limit. Some electron inertial effects are retained in the Finite Electron Mass model (FEM). In this model, the quasi-neutral counterpart of Poisson's equation is obtained by first summing the electron and ion momentum equations and then taking the quasi-neutral limit. In the Zero Electron Mass (ZEM) model explicit use is made of the axisymmetric properties of the model to decouple the components of the model equations. Equations to self-consistently advance the electron temperature have recently been added to the scheme. The model equations which result from these considerations are two coupled, nonlinear, second order partial differential equations.

Hewett, D.W.

1982-01-01

67

Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806
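The conventional PCA baseline mentioned in the abstract can be sketched for synthetic EEM data: flatten each excitation-emission matrix to a vector, center the features, and extract components via SVD. All shapes and names below are illustrative assumptions; MEEMD itself is not shown.

```python
# Sketch: the conventional PCA step for EEM fluorescence data.
# Each measurement is an excitation x emission matrix; flatten each
# to a vector, center, and take principal components via SVD.
# Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_ex, n_em = 20, 8, 10
eems = rng.random((n_samples, n_ex, n_em))

X = eems.reshape(n_samples, -1)      # flatten each EEM to a row
Xc = X - X.mean(axis=0)              # center features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                       # per-sample component scores
explained = s**2 / np.sum(s**2)      # variance fraction per component
print(scores.shape)                  # → (20, 20)
```

The study's argument is that such components can mix overlapping constituents, which is what motivates the mode-decomposition alternative.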

Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien

2013-01-01

69

Multi-Dimensional Calibration of Impact Dynamic Models

NASA Technical Reports Server (NTRS)

NASA Langley, under the Subsonic Rotary Wing Program, recently completed two helicopter tests in support of an in-house effort to study crashworthiness. As part of this effort, work is on-going to investigate model calibration approaches and calibration metrics for impact dynamics models. Model calibration of impact dynamics problems has traditionally assessed model adequacy by comparing time histories from analytical predictions to test at only a few critical locations. Although this approach provides for a direct measure of the model predictive capability, overall system behavior is only qualitatively assessed using full vehicle animations. In order to understand the spatial and temporal relationships of impact loads as they migrate throughout the structure, a more quantitative approach is needed. In this work impact shapes derived from simulated time history data are used to recommend sensor placement and to assess model adequacy using time based metrics and orthogonality multi-dimensional metrics. An approach for model calibration is presented that includes metric definitions, uncertainty bounds, parameter sensitivity, and numerical optimization to estimate parameters to reconcile test with analysis. The process is illustrated using simulated experiment data.
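One widely used orthogonality measure for correlating test and analysis shapes is the modal assurance criterion (MAC); the sketch below illustrates that family of metrics and is not necessarily the exact multi-dimensional metric of the study. The shape vectors are made-up numbers.

```python
# Sketch: shape-correlation via the modal assurance criterion (MAC),
# a common orthogonality metric when comparing test-derived and
# analysis-derived shape vectors. Values: 1 = same shape, 0 = orthogonal.
import numpy as np

def mac(phi_a, phi_b):
    """MAC between two real shape vectors."""
    num = abs(phi_a @ phi_b) ** 2
    return num / ((phi_a @ phi_a) * (phi_b @ phi_b))

# Hypothetical impact shapes sampled at four sensor locations.
test_shape = np.array([1.0, 0.8, 0.3, -0.2])
analysis_shape = np.array([0.9, 0.85, 0.25, -0.15])

print(round(mac(test_shape, analysis_shape), 3))  # → 0.992
```

A full MAC matrix over many shape pairs gives the kind of quantitative, system-wide adequacy picture the abstract contrasts with point-wise time-history checks.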

Horta, Lucas G.; Reaves, Mercedes C.; Annett, Martin S.; Jackson, Karen E.

2011-01-01

70

High-value energy storage for the grid: a multi-dimensional look

The conceptual attractiveness of energy storage in the electrical power grid has grown in recent years with Smart Grid initiatives. But cost is a problem, interwoven with the complexity of quantifying the benefits of energy storage. This analysis builds toward a multi-dimensional picture of storage that is offered as a step toward identifying and removing the gaps and "friction" that permeate the delivery chain from research laboratory to grid deployment.

Culver, Walter J.

2010-12-15

71

A singular multi-dimensional piston problem in compressible flow

NASA Astrophysics Data System (ADS)

This paper concerns the multi-dimensional piston problem, which is a special initial boundary value problem for the multi-dimensional unsteady potential flow equation. The problem is defined in a domain bounded by two conical surfaces, one of which is a shock whose location is also to be determined. By introducing self-similar coordinates, the problem can be reduced to a free boundary value problem for an elliptic equation. The existence of a solution is proved by using a partial hodograph transformation and nonlinear alternating iteration. The result also shows the stability of the structure of the shock front in the symmetric case under small perturbation.

Chen, Shuxing

72

Scaling in sensitivity analysis

Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
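The quantities being compared can be sketched for a toy projection matrix: the dominant eigenvalue λ, the sensitivities built from the left and right eigenvectors, and the elasticities (which always sum to 1). The 2×2 matrix below is an illustrative assumption, not the killer-whale data.

```python
# Sketch: sensitivity s_ij = v_i * w_j / <v, w> and elasticity
# e_ij = (a_ij / lambda) * s_ij for a small projection matrix A,
# where w and v are the dominant right and left eigenvectors.
import numpy as np

A = np.array([[0.0, 1.5],     # fecundities (toy values)
              [0.6, 0.8]])    # survival/transition rates

evals, W = np.linalg.eig(A)
lam = evals.real.max()                      # finite rate of increase
w = W[:, np.argmax(evals.real)].real        # stable stage structure

evalsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(evalsT.real)].real       # reproductive values

S = np.outer(v, w) / (v @ w)    # sensitivities d(lambda)/d(a_ij)
E = A * S / lam                 # elasticities (proportional scale)
print(round(float(lam), 3), round(float(E.sum()), 3))  # → 1.43 1.0
```

The elasticity matrix rescales each sensitivity by a_ij/λ, which is exactly the proportional-change comparison whose inconsistencies the abstract discusses.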

Link, W. A.; Doherty, P.F., Jr.

2002-01-01

73

Image matrix processor for fast multi-dimensional computations

An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.

Roberson, George P. (Tracy, CA); Skeate, Michael F. (Livermore, CA)

1996-01-01

74

Image matrix processor for fast multi-dimensional computations

An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.

Roberson, G.P.; Skeate, M.F.

1996-10-15

75

Visualization of multi-dimensional data with vector-fusion

Multi-dimensional entities are modeled, displayed and understood with a new algorithm vectorizing data of any dimensionality. This algorithm is called SBP; it is a vectorized generalization of parallel coordinates. Classic geometries of any dimensionality can be demonstrated to facilitate perception and understanding of the shapes generated by this algorithm. SBP images of a 4D line, a circle and 3D and

R. R. Johnson

2000-01-01

76

Data Fusion 'Cube': A Multi-Dimensional Perspective.

National Technical Information Service (NTIS)

In the classical sense, data fusion can be viewed as a one- dimensional entity having five distinct levels. However, this view does not convey the multi-dimensional aspect of data fusion. This paper argues that data fusion is not one-dimensional, but rath...

I. Plonisch; P. W. Phister

2002-01-01

77

This article examines alternate vibration isolation measures for a multi-dimensional system. The isolator and receiver are modelled by the continuous system theory. The source is assumed to be rigid and both force and moment excitations are considered. Our analysis is limited to a linear time-invariant system, and the mobility synthesis method is adopted to describe the overall system behavior. Inverted

Rajendra Singh; Seungbo Kim

2003-01-01

78

Towards Semantic Web Services on Large, Multi-Dimensional Coverages

NASA Astrophysics Data System (ADS)

Observed and simulated data in the Earth Sciences often come as coverages, the general term for space-time varying phenomena as set forth by standardization bodies like the Open GeoSpatial Consortium (OGC) and ISO. Among such data are 1-d time series, 2-D surface data, 3-D surface data time series as well as x/y/z geophysical and oceanographic data, and 4-D metocean simulation results. With increasing dimensionality the data sizes grow exponentially, up to Petabyte object sizes. Open standards for exploiting coverage archives over the Web are available to a varying extent. The OGC Web Coverage Service (WCS) standard defines basic extraction operations: spatio-temporal and band subsetting, scaling, reprojection, and data format encoding of the result - a simple interoperable interface for coverage access. More processing functionality is available with products like Matlab, Grid-type interfaces, and the OGC Web Processing Service (WPS). However, these often lack properties known as advantageous from databases: declarativeness (describe results rather than the algorithms), safe in evaluation (no request can keep a server busy infinitely), and optimizable (enable the server to rearrange the request so as to produce the same result faster). WPS defines a geo-enabled SOAP interface for remote procedure calls. This allows to webify any program, but does not allow for semantic interoperability: a function is identified only by its function name and parameters while the semantics is encoded in the (only human readable) title and abstract. Hence, another desirable property is missing, namely an explicit semantics which allows for machine-machine communication and reasoning a la Semantic Web. The OGC Web Coverage Processing Service (WCPS) language, which has been adopted as an international standard by OGC in December 2008, defines a flexible interface for the navigation, extraction, and ad-hoc analysis of large, multi-dimensional raster coverages. 
It is abstract in that it does not anticipate any particular protocol. One such protocol is given by the OGC Web Coverage Service (WCS) Processing Extension standard, which ties WCPS into WCS. Another protocol, which makes WCPS an OGC Web Processing Service (WPS) Profile, is under preparation. Thereby, WCPS bridges WCS and WPS. The conceptual model of WCPS relies on the coverage model of WCS, which in turn is based on ISO 19123. WCS currently addresses raster-type coverages, where a coverage is seen as a function mapping points from a spatio-temporal extent (its domain) into values of some cell type (its range). A retrievable coverage has an identifier associated, further the CRSs supported and, for each range field (aka band, channel), the interpolation methods applicable. The WCPS language offers access to one or several such coverages via a functional, side-effect free language. The following example, which derives the NDVI (Normalized Difference Vegetation Index) from given coverages C1, C2, and C3 within the regions identified by the binary mask R, illustrates the language concept: for c in ( C1, C2, C3 ), r in ( R ) return encode( (char) (c.nir - c.red) / (c.nir + c.red), "HDF-EOS" ). The result is a list of three HDF-EOS encoded images containing masked NDVI values. Note that the same request can operate on coverages of any dimensionality. The expressive power of WCPS includes statistics, image, and signal processing up to recursion, to maintain safe evaluation. As both syntax and semantics of any WCPS expression are well known, the language is Semantic Web ready: clients can construct WCPS requests on the fly, servers can optimize such requests (this has been investigated extensively with the rasdaman raster database system) and automatically distribute them for processing in a WCPS-enabled computing cloud. The WCPS Reference Implementation is being finalized now that the standard is stable; it will be released in open source once ready.
Among the future tasks is to extend WCPS to general meshes, in synchronization with the WCS standard. In this talk WCPS is presented in the context
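The numerical core of the NDVI expression (c.nir - c.red) / (c.nir + c.red) can be mirrored pixel-wise with NumPy, including a binary region mask in the spirit of R. The arrays below are synthetic; this is an illustration of the computation a WCPS server would perform, not a WCPS client.

```python
# Sketch: pixel-wise NDVI with a binary region mask, mirroring the
# WCPS expression (c.nir - c.red) / (c.nir + c.red). Synthetic bands.
import numpy as np

nir = np.array([[0.8, 0.6], [0.4, 0.2]])
red = np.array([[0.1, 0.2], [0.3, 0.4]])
mask = np.array([[1, 1], [0, 1]], dtype=bool)  # region mask "R"

ndvi = (nir - red) / (nir + red)
masked = np.where(mask, ndvi, 0.0)   # keep values inside the region
print(masked.round(2))
```

Because the arithmetic is expressed over whole arrays, the same code shape works for coverages of any dimensionality, which is the property the abstract emphasizes for WCPS requests.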

Baumann, P.

2009-04-01

79

Multi-dimensional Liquid Chromatography in Proteomics

Proteomics is the large-scale study of proteins, particularly their expression, structures and functions. This still-emerging combination of technologies aims to describe and characterize all expressed proteins in a biological system. Because of upper limits on mass detection of mass spectrometers, proteins are usually digested into peptides and the peptides are then separated, identified and quantified from this complex enzymatic digest. The problem in digesting proteins first and then analyzing the peptide cleavage fragments by mass spectrometry is that huge numbers of peptides are generated that overwhelm direct mass spectral analyses. The objective in the liquid chromatography approach to proteomics is to fractionate peptide mixtures to enable and maximize identification and quantification of the component peptides by mass spectrometry. This review will focus on existing multidimensional liquid chromatographic (MDLC) platforms developed for proteomics and their application in combination with other techniques such as stable isotope labeling. We also provide some perspectives on likely future developments.

Zhang, Xiang; Fang, Aiqin; Riley, Catherine P.; Wang, Mu; Regnier, Fred E.; Buck, Charles

2010-01-01

80

Multi-dimensional Indoor Location Information Model

NASA Astrophysics Data System (ADS)

Aiming at the increasing requirements of seamless indoor and outdoor navigation and location services, a Chinese standard, the Multidimensional Indoor Location Information Model, is being developed, which defines an ontology of indoor location. The model is complementary to 3D concepts like CityGML and IndoorGML. The goal of the model is to provide an exchange GML-based format for the location information needed for indoor routing and navigation. An elaborated user requirements analysis and an investigation of state-of-the-art technology in expressing indoor location at home and abroad were completed to identify the manner in which humans specify location. The ultimate goal is to provide an ontology that allows absolute and relative specification of location, such as "in room 321" and "on the second floor", as well as "two meters from the second window" and "12 steps from the door".

Xiong, Q.; Zhu, Q.; Zlatanova, S.; Huang, L.; Zhou, Y.; Du, Z.

2013-11-01

81

Advanced numerics for multi-dimensional fluid flow calculations

In recent years, there has been a growing interest in the development and use of mathematical models for the simulation of fluid flow, heat transfer and combustion processes in engineering equipment. The equations representing the multi-dimensional transport of mass, momenta and species are numerically solved by finite-difference or finite-element techniques. However, despite the multitude of differencing schemes and solution algorithms, and the advancement of computing power, the calculation of multi-dimensional flows, especially three-dimensional flows, remains a mammoth task. The following discussion is concerned with the author's recent work on the construction of accurate discretization schemes for the partial derivatives, and the efficient solution of the set of nonlinear algebraic equations resulting after discretization. The present work has been jointly supported by the Ramjet Engine Division of the Wright Patterson Air Force Base, Ohio, and the NASA Lewis Research Center.

Vanka, S.P.

1984-04-01

82

Nucleosynthesis in multi-dimensional SN Ia explosions

We present the results of nucleosynthesis calculations based on multi-dimensional (2D and 3D) hydrodynamical simulations of the thermonuclear burning phase in type Ia supernovae (hereafter SN Ia). The detailed nucleosynthetic yields of our explosion models are calculated by post-processing the ejecta, using passively advected tracer particles. The nuclear reaction network employed in computing the explosive nucleosynthesis contains 383 nuclear species,

C. Travaglio; W. Hillebrandt; M. Reinecke; F.-K. Thielemann

2004-01-01

83

Improved Concurrency Control Techniques For MultiDimensional Index Structures

Multi-dimensional index structures such as R-trees enable fast searching in high-dimensional spaces. They differ from uni-dimensional structures in the following aspects: index regions in the tree may be modified during ordinary insert and delete operations; and node splits during inserts are quite expensive. Both these characteristics may lead to reduced concurrency of update and query operations. We examine how to

Kothuri Venkata Ravi Kanth; David Serena; Ambuj K. Singh

1998-01-01

84

Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes

NASA Technical Reports Server (NTRS)

The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.
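The one-dimensional building block described above can be sketched, under simple assumptions, as a first-order finite-volume update for linear advection in which each interface flux comes from the local Riemann solution (upwinding for positive wave speed). This toy is illustrative of the building-block idea, not the lecture's code; grid size and CFL number are arbitrary choices.

```python
# Sketch: first-order finite-volume scheme for u_t + a*u_x = 0 on a
# periodic grid, with the interface flux taken from the exact Riemann
# solution (the upwind state, since a > 0).
import numpy as np

a = 1.0                    # advection speed
n, cfl = 100, 0.5
dx = 1.0 / n
dt = cfl * dx / a

x = (np.arange(n) + 0.5) * dx
u = np.where((x > 0.25) & (x < 0.5), 1.0, 0.0)   # square pulse

for _ in range(50):
    # Riemann solution at each interface: upwind (left) state for a > 0.
    flux = a * np.roll(u, 1)        # flux through each cell's left face
    u = u - dt / dx * (np.roll(flux, -1) - flux)

print(round(float(u.sum() * dx), 3))  # mass is conserved: → 0.25
```

The conservative update guarantees exact mass conservation on the periodic grid; the first-order scheme's numerical diffusion is what higher-order data reconstruction is meant to reduce.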

Barth, Timothy J.; Kwak, Dochan (Technical Monitor)

1995-01-01

85

Portable laser synthesizer for high-speed multi-dimensional spectroscopy

Portable, field-deployable laser synthesizer devices designed for multi-dimensional spectrometry and time-resolved and/or hyperspectral imaging include a coherent light source which simultaneously produces a very broad, energetic, discrete spectrum spanning through or within the ultraviolet, visible, and near infrared wavelengths. The light output is spectrally resolved and each wavelength is delayed with respect to each other. A probe enables light delivery to a target. For multidimensional spectroscopy applications, the probe can collect the resulting emission and deliver this radiation to a time gated spectrometer for temporal and spectral analysis.

Demos, Stavros G. (Livermore, CA); Shverdin, Miroslav Y. (Sunnyvale, CA); Shirk, Michael D. (Brentwood, CA)

2012-05-29

86

Multi-dimensional Hermite polynomials in quantum optics

NASA Astrophysics Data System (ADS)

We study a class of optical circuits with vacuum input states consisting of Gaussian sources without coherent displacements such as down-converters and squeezers, together with photo-detectors and passive interferometry (beamsplitters, polarization rotations, phase-shifters, etc). We show that the outgoing state leaving the optical circuit can be expressed in terms of so-called multi-dimensional Hermite polynomials and give their recursion and orthogonality relations. We show how quantum teleportation of single-photon polarization states can be modelled using this description.

Kok, Pieter; Braunstein, Samuel L.

2001-08-01

87

A multi-dimensional sampling method for locating small scatterers

NASA Astrophysics Data System (ADS)

A multiple signal classification (MUSIC)-like multi-dimensional sampling method (MDSM) is introduced to locate small three-dimensional scatterers using electromagnetic waves. The indicator is built with the most stable part of signal subspace of the multi-static response matrix on a set of combinatorial sampling nodes inside the domain of interest. It has two main advantages compared to the conventional MUSIC methods. First, the MDSM is more robust against noise. Second, it can work with a single incidence even for multi-scatterers. Numerical simulations are presented to show the good performance of the proposed method.
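A MUSIC-style indicator of the kind the abstract builds on can be sketched as follows: form a multi-static response matrix for point scatterers, split off the signal subspace by SVD, and evaluate 1/||P_noise g(r)|| over candidate points. The geometry, wavenumber, and steering model below are illustrative assumptions, not the paper's configuration or its MDSM sampling strategy.

```python
# Sketch: noise-subspace (MUSIC-type) indicator for locating small
# scatterers from a multi-static response matrix K. Illustrative
# 2-D geometry with a linear receiver array at y = 0.
import numpy as np

rng = np.random.default_rng(2)
k = 2 * np.pi                          # wavenumber (unit wavelength)
receivers = np.linspace(-2, 2, 12)     # array element x-positions

def steering(point):
    """Toy steering vector from a 2-D point to the array."""
    d = np.hypot(receivers - point[0], point[1])
    return np.exp(1j * k * d) / d

scatterers = [(0.5, 1.0), (-0.7, 1.5)]
K = sum(np.outer(steering(p), steering(p)) for p in scatterers)
K = K + 1e-6 * rng.standard_normal(K.shape)   # small perturbation

U, s, Vh = np.linalg.svd(K)
Us = U[:, :len(scatterers)]                   # signal subspace
P_noise = np.eye(len(receivers)) - Us @ Us.conj().T

def indicator(point):
    g = steering(point)
    return 1.0 / np.linalg.norm(P_noise @ g)  # peaks at scatterers

print(indicator((0.5, 1.0)) > indicator((1.5, 2.0)))  # → True
```

The indicator diverges where the steering vector lies in the signal subspace; the paper's contribution is to stabilize this construction against noise and to make it work with a single incidence.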

Song, Rencheng; Zhong, Yu; Chen, Xudong

2012-11-01

88

Multi-dimensional modeling of the XPAL system

NASA Astrophysics Data System (ADS)

The exciplex pumped alkali laser (XPAL) system was recently demonstrated in mixtures of Cs vapor, Ar, and ethane, by pumping Cs-Ar atomic collision pairs and subsequent dissociation of diatomic, electronically-excited CsAr molecules (exciplexes or excimers). Because of the addition of atomic collision pairs and exciplex states, modeling of the XPAL system is far more complicated than classic diode pumped alkali laser (DPAL) modeling. In this paper we discuss BLAZE-V multi-dimensional modeling of this new laser system and compare with experiments.

Palla, Andrew D.; Carroll, David L.; Verdeyen, Joseph T.; Readle, Jason D.; Spinka, Thomas M.; Wagner, Clark J.; Eden, J. Gary; Heaven, Michael C.

2010-02-01

89

Temporal abstraction (TA) provides the means to instil domain knowledge into data analysis processes and allows transformation of low level numeric data to high level qualitative narratives. TA mechanisms have been primarily applied to uni-dimensional data sources equating to single patients in the clinical context. This paper presents a framework for multi-dimensional TA (MDTA) enabling analysis of data emanating from numerous patients to detect multiple conditions within the environment of neonatal intensive care. Patient agents which perform temporal reasoning upon patient data streams are based on the Event Calculus and an active ontology provides a central knowledge core where rules are stored and agent responses accumulated, thus permitting a level of multi-dimensionality within data abstraction processes. Facilitation of TA across a ward of patients offers the potential for early detection of debilitating conditions such as Sepsis, Pneumothorax and Periventricular Leukomalacia (PVL), which have been shown to exhibit advance indicators in physiological data. Preliminary prototyping for patient agents has begun with promising results and a schema for the active rule repository outlined. PMID:18002814

Stacey, Michael; McGregor, Carolyn; Tracy, Mark

2007-01-01

90

Construction of Multi-Dimensional Periodic Complementary Array Sets

NASA Astrophysics Data System (ADS)

Multi-dimensional (MD) periodic complementary array sets (CASs) with impulse-like MD periodic autocorrelation functions are a natural generalization of (one-dimensional) periodic complementary sequence sets, and such array sets are widely applied in communication, radar, sonar, coded aperture imaging, and so forth. In this letter, a method for constructing MD periodic CASs based on multi-dimensional perfect arrays (MD PAs) is presented, which is carried out by sampling MD PAs. It is particularly worth mentioning that the numbers and sizes of sub-arrays in the proposed MD periodic CASs can be chosen freely within the admissible range. In particular, for arbitrarily given positive integers M and L, two-dimensional periodic polyphase CASs with M^2 sub-arrays of size L × L can be produced by the proposed method. Analogously, pseudo-random MD periodic CASs can be obtained when pseudo-random MD arrays are sampled. Finally, the validity of the proposed method is confirmed by a worked example.
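The defining property (not the letter's sampling construction, which we sketch here with a different, elementary example) can be checked numerically: the periodic autocorrelations of all sub-arrays in a CAS must sum to an impulse. Building M^2 = 4 two-dimensional arrays as outer products of the Golay pair (1, 1), (1, -1) gives a small 2D periodic CAS with L = 2.

```python
import numpy as np

# Illustrative 2D periodic CAS from the Golay pair via outer products
# (an elementary example of the defining property, not the paper's method).

def periodic_acf2d(x):
    """2D periodic autocorrelation via the FFT (Wiener-Khinchin)."""
    X = np.fft.fft2(x)
    return np.real(np.fft.ifft2(X * np.conj(X)))

a = np.array([1.0, 1.0])     # Golay pair: their 1D periodic ACFs
b = np.array([1.0, -1.0])    # sum to the impulse (4, 0)

# M^2 = 4 sub-arrays of size L x L = 2 x 2
arrays = [np.outer(u, v) for u in (a, b) for v in (a, b)]
total = sum(periodic_acf2d(x) for x in arrays)
# total is 16 at the zero shift and 0 at every other shift: impulse-like
```

The identity behind this is that the periodic ACF of an outer product factorizes, so the four ACFs sum to (R_a + R_b) ⊗ (R_a + R_b), an impulse in both dimensions.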

Zeng, Fanxin; Zhang, Zhenyu

91

Multi-Dimensional Damage Detection for Surfaces and Structures

NASA Technical Reports Server (NTRS)

Current designs for inflatable or semi-rigidized structures for habitats and space applications use a multiple-layer construction, alternating thin layers with thicker, stronger layers, which produces a layered composite structure that is much better at resisting damage. Even though such composite structures or layered systems are robust, they can still be susceptible to penetration damage. The ability to detect damage to surfaces of inflatable or semi-rigid habitat structures is of great interest to NASA. Damage caused by impacts of foreign objects such as micrometeorites can rupture the shell of these structures, causing loss of critical hardware and/or the life of the crew. While not all impacts will have a catastrophic result, it is very important to identify and locate areas of the exterior shell that have been damaged by impacts so that repairs (or other provisions) can be made to reduce the probability of shell wall rupture. This disclosure describes a system that provides real-time data regarding the health of the inflatable shell or rigidized structures, along with information on the location and depth of impact damage. The innovation described here is a method of determining the size, location, and direction of damage in a multilayered structure. In the multi-dimensional damage detection system, two-dimensional thin-film detection layers are used to form a layered composite, with non-detection layers separating the detection layers. The non-detection layers may be either thicker or thinner than the detection layers. The thin-film damage detection layers are thin films of materials with a conductive grid or striped pattern. The conductive pattern may be applied by several methods, including printing, plating, sputtering, photolithography, and etching, and the composite can include as many detection layers as are necessary for the structure's construction or to afford the required level of detection detail.
The damage is detected using a detector or sensory system, which may include a time domain reflectometer, resistivity monitoring hardware, or other resistance-based systems. To begin, a layered composite consisting of thin-film damage detection layers separated by non-detection layers is fabricated. The damage detection layers are attached to a detector that provides details regarding the physical health of each detection layer individually. If damage occurs to any of the detection layers, a change in the electrical properties of the damaged detection layers occurs, and a response is generated. Real-time analysis of these responses provides details regarding the depth, location, and estimated size of the damage. Multiple damage events can be detected, and the extent (depth) of the damage can be used to generate prognostic information related to the expected lifetime of the layered composite system. The detection system can be fabricated very easily using off-the-shelf equipment, and the detection algorithms can be written and updated (as needed) to provide the level of detail needed based on the system being monitored. Connecting to the thin-film detection layers is very easy as well. The truly unique feature of the system is its flexibility; the system can be designed to gather as much (or as little) information as the end user feels necessary. Individual detection layers can be turned on or off as necessary, and algorithms can be used to optimize performance. The system can be used to generate both diagnostic and prognostic information related to the health of layered composite structures, which will be essential if such systems are utilized for space exploration. The technology is also applicable to other in-situ health monitoring systems for structural integrity.
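The detection logic above can be sketched as a toy model (our illustration, not NASA's implementation): treat each detection layer as a grid of traces, let an impact break traces down to its penetration depth, and recover location and depth from which traces read open.

```python
import numpy as np

# Toy layered damage detection: True = trace intact, False = open circuit.
# Grid size, impact position, and depth are hypothetical.

def apply_impact(layers, row, col, depth):
    """Break the trace at (row, col) in the top `depth` detection layers."""
    for layer in layers[:depth]:
        layer[row, col] = False

def locate_damage(layers):
    """Return {(row, col): depth} for every damaged trace position."""
    report = {}
    for depth, layer in enumerate(layers, start=1):
        for row, col in zip(*np.where(~layer)):
            report[(int(row), int(col))] = depth   # deepest broken layer wins
    return report

layers = [np.ones((4, 4), dtype=bool) for _ in range(3)]  # 3 intact layers
apply_impact(layers, row=1, col=2, depth=2)               # impact reaches layer 2
damage = locate_damage(layers)                            # {(1, 2): 2}
```

Reading the layers top-down like this is what lets a single scan return both the (row, col) location and the penetration depth of each damage event.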

Williams, Martha; Lewis, Mark; Roberson, Luke; Medelius, Pedro; Gibson, Tracy; Parks, Steen; Snyder, Sarah

2013-01-01

92

Recent progress in hardware and software technology opens up vistas where flexible services on large, multi-dimensional coverage data become a commodity. Interactive data browsing like with Virtual Globes, selective download, and ad-hoc analysis services are about to become available routinely, as several sites already demonstrate. However, for easy access and true machine-machine communication, Semantic Web concepts as being investigated for

P. Baumann

2009-01-01

93

Importance of multi-dimensional morphodynamics for habitat evolution: Danube River 1715-2006

NASA Astrophysics Data System (ADS)

Human-unimpaired braided and anabranched river systems are characterized by manifold multi-dimensional exchange processes. The intensity of hydrological surface/subsurface connectivity of riverine habitats depends on more than regular or episodic water level fluctuations due to the hydrological regime. Morphodynamic changes are also a basic underlying factor. In order to provide new insights into the long-term habitat configuration of large rivers prior to channelization, this study discusses the hydromorphological alterations of an alluvial section of the Austrian Danube based on historical records from 1715 to 2006. The study combines the analysis of habitat patterns and intensity of hydrological connectivity over the long term with the reconstruction of short-term morphodynamic processes between 1812 and 1821. The main research questions are (1) whether the intensive morphodynamics prior to channelization are reflected by a marked variation in habitat patterns or whether the variation remained within a small range, and (2) which fluvial processes contributed to the evolution of the habitat configuration identified. The study reveals that the mean variations in the habitat patterns and the intensity of hydrological connectivity were only between 3% and 10% before 1821, although the river landscape was subject to intensive fluvial disturbances. An exception was the expansion of aquatic habitats between low and mean flow, which deviated by 15%. Habitat evolution was affected by morphodynamic processes occurring across different temporal scales. Both gradual channel changes such as incision or migration and sudden processes such as avulsions (cut-offs) contributed to the patterns identified. Locally, sudden channel changes extensively altered the habitat conditions with regard to hydrological surface/subsurface connectivity. Such alterations foster or restrain the potential evolution and the ecological succession of the riparian vegetation at the respective sites. 
On a larger spatial and temporal scale, however, the changes in the intensity of hydrological connectivity were largely balanced. The results support the hypothesis that a resilient “shifting mosaic steady-state” existed over the long term as long as the framework conditions (e.g. climate) did not significantly change. The habitat mosaic representing different types and different ages potentially allowed many riverine species to co-exist in an environment with frequent perturbations. From 1821 onwards, river engineering measures significantly altered habitat patterns and severely truncated the potential of the system to recover from disturbances.

Hohensinner, Severin; Jungwirth, Mathias; Muhar, Susanne; Schmutz, Stefan

2014-06-01

94

A Multi-Dimensional Classification Model for Scientific Workflow Characteristics

Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows have increasingly been used to orchestrate science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedicine, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to a high-level understanding of scientific workflows.

Ramakrishnan, Lavanya; Plale, Beth

2010-04-05

95

This paper addresses the problem of multi-scale region-oriented image analysis. It first deals with a representational issue: how to represent all the solutions of a multi-scale partitioning algorithm, i.e. of an algorithm returning ordered partitions with respect to a one-dimensional 'scale' parameter? To achieve this, we propose the scale-sets representation, which can be viewed as a region-oriented version

Laurent Guigues; Hervé Le Men

2003-01-01

96

Profile Analysis: Multidimensional Scaling Approach.

ERIC Educational Resources Information Center

Outlines an exploratory multidimensional scaling-based approach to profile analysis called Profile Analysis via Multidimensional Scaling (PAMS) (M. Davison, 1994). The PAMS model has the advantages of being applied to samples of any size easily, classifying persons on a continuum, and using person profile index for further hypothesis studies, but…

Ding, Cody S.

2001-01-01

97

A one-dimensional shock capturing finite element method and multi-dimensional generalizations

NASA Technical Reports Server (NTRS)

Multi-dimensional generalizations of a one-dimensional finite element shock capturing scheme are proposed. A scalar model problem is used to emphasize that 'preferred directions' are important in multi-dimensional applications. Schemes are developed for the two-dimensional Euler equations. One, based upon characteristics, employs the Mach lines and streamlines as preferred directions.

Hughes, T. J. R.; Mallet, M.; Zanutta, R.; Taki, Y.; Tezduyar, T. E.

1985-01-01

98

Development of a Scale Measuring Trait Anxiety in Physical Education

ERIC Educational Resources Information Center

The aim of the present study was to examine the validity and reliability of a multi-dimensional measure of trait anxiety specifically designed for the physical education lesson. The Physical Education Trait Anxiety Scale was initially completed by 774 high school students during regular school classes. A confirmatory factor analysis supported the…

Barkoukis, Vassilis; Rodafinos, Angelos; Koidou, Eirini; Tsorbatzoudis, Haralambos

2012-01-01

99

Scaling analysis of transient heating

NSDL National Science Digital Library

This problem is a simple case designed to show the power of scaling analysis to estimate the behavior of variables of interest without doing a detailed analysis. Here, internal heat generation heats a square part and the student is asked to find the dependence of the maximum temperature on time. The use of a scaling analysis encourages the student to think about the physics of the problem more than just solving the differential equation.
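The kind of scaling estimate the problem asks for can be sketched as follows (our notation, with hypothetical volumetric generation rate $q'''$, density $\rho$, specific heat $c$, conductivity $k$, and part size $L$):

```latex
% Early times (before conduction to the boundary matters): storage balances generation
\rho c \,\frac{\partial T}{\partial t} \sim q'''
\quad\Rightarrow\quad
T_{\max} - T_0 \sim \frac{q'''\, t}{\rho c}
% Long times: conduction to the boundary balances generation, and T_max saturates
k\,\frac{\Delta T}{L^2} \sim q'''
\quad\Rightarrow\quad
\Delta T_{\mathrm{ss}} \sim \frac{q'''\, L^2}{k},
\qquad
t_{\mathrm{crossover}} \sim \frac{\rho c\, L^2}{k}
```

The point of the exercise is that the linear-in-time rise, the steady-state plateau, and the crossover time all follow from balancing terms in the heat equation, without solving it.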

Krane, Matthew J.

2008-10-14

100

Wildfire Detection Using a Multi-Dimensional Histogram in Boreal Forest

NASA Astrophysics Data System (ADS)

Early detection of wildfires is important for reducing damage to the environment and to humans. There have been several attempts to detect wildfires using satellite imagery, mainly classified into three methods: the Dozier method (1981-), the threshold method (1986-), and the contextual method (1994-). However, the accuracy of these methods is insufficient: the detected results include commission and omission errors. In addition, analyzing satellite imagery with high accuracy is difficult because of insufficient ground truth data. Kudoh and Hosoi (2003) developed a detection method using a three-dimensional (3D) histogram built from past fire data in NOAA-AVHRR imagery, but their method is impractical because it relies on manual work to pick out past fire data from a huge dataset. Therefore, the purpose of this study is to collect fire points as hot spots efficiently from satellite imagery and to improve the method for detecting wildfires with the collected data. In our method, we collect past fire data using the Alaska Fire History data compiled by the Alaska Fire Service (AFS). We select points that are expected to be wildfires and pick out the points inside the fire areas of the AFS data. Next, we build a 3D histogram from the past fire data. In this study, we use Bands 1, 21, and 32 of MODIS and calculate the likelihood of wildfire from the three-dimensional histogram. As a result, we select wildfires effectively with the 3D histogram and can detect a toroidally spreading wildfire, which is evidence of good wildfire detection. However, areas surrounding glaciers tend to show elevated brightness temperature, producing false alarms. Burnt areas and bare ground are also sometimes flagged as false alarms, so the method needs further improvement. Additionally, we are trying various combinations of MODIS bands to detect wildfires more effectively.
To adapt our method to other areas, we are applying it to tropical forest in Kalimantan, Indonesia and around Chiang Mai, Thailand, but the ground truth data in these areas are sparser than in Alaska. Our method needs a large amount of accurately observed data in the same area to build the multi-dimensional histogram. In this study, we show a system that selects wildfire data efficiently from satellite imagery. Furthermore, building the multi-dimensional histogram from past fire data makes it possible to detect wildfires accurately.
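The histogram-likelihood idea can be sketched in a few lines (synthetic numbers, not real MODIS radiances): accumulate known past-fire pixels into a 3D histogram over three bands, normalize it to a likelihood table, and score new pixels by bin lookup.

```python
import numpy as np

# Sketch of 3D-histogram fire scoring; all band values are synthetic.
rng = np.random.default_rng(0)
# Hypothetical 3-band values of known past-fire pixels, scaled to [0, 1)
fires = np.clip(rng.normal(loc=[0.8, 0.7, 0.6], scale=0.05, size=(500, 3)), 0, 0.999)

bins = 10
hist, edges = np.histogramdd(fires, bins=bins, range=[(0, 1)] * 3)
likelihood = hist / hist.sum()            # empirical fire likelihood per bin

def fire_score(pixel):
    """Look up the fire likelihood of a 3-band pixel."""
    idx = tuple(np.minimum((np.asarray(pixel) * bins).astype(int), bins - 1))
    return likelihood[idx]

hot = fire_score([0.8, 0.7, 0.6])    # near the past-fire cluster
cold = fire_score([0.1, 0.2, 0.3])   # typical background
# hot > cold, so thresholding the score flags fire-like pixels
```

Glacier margins and burnt ground would land in bins that happen to overlap the fire cluster, which is exactly how the false alarms described above arise in this scheme.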

Honda, K.; Kimura, K.; Honma, T.

2008-12-01

101

A global inversion method for multi-dimensional NMR logging

NASA Astrophysics Data System (ADS)

We describe a general global inversion methodology of multi-dimensional NMR logging for pore fluid typing and quantification in petroleum exploration. Although higher dimensions are theoretically possible, for practical reasons we limit our discussion to proton density distributions as a function of two (2D) or three (3D) independent variables. The 2D variables can be the diffusion coefficient and T2 relaxation time (D-T2), and the 3D variables can be the diffusion coefficient, T2, and T1 relaxation times (D-T2-T1) of the saturating fluids in rocks. Using the contrast between the diffusion coefficients of the fluids (oil and water), the oil and water phases within the rocks can be clearly identified. This 2D or 3D proton density distribution function can be obtained from either two-window or regular-type multiple CPMG echo trains encoded with diffusion, T1, and T2 relaxation by varying echo spacing and wait time. From this 2D/3D proton density distribution function, not only can the saturations of water and oil be determined, but the viscosity of the oil and the gas-oil ratio can also be estimated based on a previously experimentally determined D-T2 relationship.
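A reduced 1D illustration of the inversion step (the paper works in 2D/3D and with its own global method; the kernel and values here are synthetic): recover a T2 distribution from a simulated CPMG echo train by Tikhonov-regularized non-negative least squares.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic CPMG inversion sketch; times, grid, and noise level are hypothetical.
t = np.linspace(0.001, 1.0, 200)                 # echo times (s)
T2 = np.logspace(-3, 0, 40)                      # T2 grid (s)
K = np.exp(-t[:, None] / T2[None, :])            # forward kernel K(t, T2)

f_true = np.exp(-0.5 * ((np.log10(T2) + 1.0) / 0.15) ** 2)   # peak at T2 = 0.1 s
data = K @ f_true + 0.001 * np.random.default_rng(1).normal(size=t.size)

lam = 0.1                                        # regularization weight
K_aug = np.vstack([K, lam * np.eye(T2.size)])    # append lam*I rows (Tikhonov)
d_aug = np.concatenate([data, np.zeros(T2.size)])
f_est, _ = nnls(K_aug, d_aug)                    # non-negative regularized fit

peak_T2 = T2[np.argmax(f_est)]                   # should land near 0.1 s
```

The augmented system minimizes ||Kf - d||^2 + lam^2 ||f||^2 subject to f >= 0; the same structure, with a 2D or 3D kernel in diffusion and relaxation, underlies multi-dimensional NMR inversion.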

Sun, Boqin; Dunn, Keh-Jim

2005-01-01

102

Entanglement Entropy of Fermi Liquids via Multi-dimensional Bosonization

NASA Astrophysics Data System (ADS)

Logarithmic violations of the area law, i.e. an 'area law' with logarithmic correction of the form S ~ L^{d-1} log L, for entanglement entropy are found both in 1D gapless systems and for high-dimensional free fermions. The purpose of this work is to show that both violations are of the same origin, and that in the presence of Fermi liquid interactions such behavior persists for 2D fermion systems. In this paper we first consider the entanglement entropy of a toy model, namely a set of decoupled 1D chains of free spinless fermions, to relate both violations in an intuitive way. We then use multi-dimensional bosonization to re-derive the formula by Gioev and Klich [Phys. Rev. Lett. 96, 100503 (2006)] for free fermions through a low-energy effective Hamiltonian, and explicitly show that the logarithmic corrections to the area law in both cases share the same origin: the discontinuity at the Fermi surface (points). In the presence of Fermi liquid (forward scattering) interactions, the bosonized theory remains quadratic in terms of the original local degrees of freedom, and after regularizing the theory with a mass term we are able to calculate the entanglement entropy perturbatively up to second order in powers of the coupling parameter for a special geometry via the replica trick.

Ding, Wenxin; Seidel, Alexander; Yang, Kun

2012-02-01

103

CASTRO: Multi-dimensional Eulerian AMR Radiation-hydrodynamics Code

NASA Astrophysics Data System (ADS)

CASTRO is a multi-dimensional Eulerian AMR radiation-hydrodynamics code that includes stellar equations of state, nuclear reaction networks, and self-gravity. Initial target applications for CASTRO include Type Ia and Type II supernovae. CASTRO supports calculations in 1-d, 2-d and 3-d Cartesian coordinates, as well as 1-d spherical and 2-d cylindrical (r-z) coordinate systems. Time integration of the hydrodynamics equations is based on an unsplit version of the piecewise parabolic method (PPM) with new limiters that avoid reducing the accuracy of the scheme at smooth extrema. CASTRO can follow an arbitrary number of isotopes or elements. The atomic weights and amounts of these elements are used to calculate the mean molecular weight of the gas required by the equation of state. CASTRO supports several different approaches to solving for self-gravity. The most general is a full Poisson solve for the gravitational potential. CASTRO also supports a monopole approximation for gravity, and a constant gravity option is also available. The CASTRO software is written in C++ and Fortran, and is based on the BoxLib software framework developed by CCSE.

Center for Computational Sciences and Engineering (Berkeley); Howell, Louis; Singer, Mike

2011-05-01

104

Data sets resulting from physical simulations typically contain a multitude of physical variables. It is, therefore, desirable that visualization methods take into account the entire multi-field volume data rather than concentrating on one variable. We present a visualization approach based on surface extraction from multi-field particle volume data. The surfaces segment the data with respect to the underlying multi-variate function. Decisions on segmentation properties are based on the analysis of the multi-dimensional feature space. The feature space exploration is performed by an automated multi-dimensional hierarchical clustering method, whose resulting density clusters are shown in the form of density level sets in a 3D star coordinate layout. In the star coordinate layout, the user can select clusters of interest. A selected cluster in feature space corresponds to a segmenting surface in object space. Based on the segmentation property induced by the cluster membership, we extract a surface from the volume data. Our driving applications are Smoothed Particle Hydrodynamics (SPH) simulations, where each particle carries multiple properties. The data sets are given in the form of unstructured point-based volume data. We directly extract our surfaces from such data without prior resampling or grid generation. The surface extraction computes individual points on the surface, which is supported by an efficient neighborhood computation. The extracted surface points are rendered using point-based rendering operations. Our approach combines methods in scientific visualization for object-space operations with methods in information visualization for feature-space operations. PMID:18989000

Linsen, Lars; Van Long, Tran; Rosenthal, Paul; Rosswog, Stephan

2008-01-01

105

Multi-dimensional Multiphase Modeling of Sediment Transport

NASA Astrophysics Data System (ADS)

Sediment transport driven by waves and currents is of great significance to further predict coastal morphodynamics. Eulerian two-phase models have been shown effective to study sheet flow sediment transport, though most of them are limited to Reynolds-averaged one-dimensional-vertical formulation. Hence, bedform, plug flow and turbulence cannot be resolved. Our goal is to develop four-way coupled multiphase models for multi-dimensional sediment transport under the numerical framework of OpenFOAM for Eulerian modeling and CFDEM for Euler-Lagrangian modeling. In the Eulerian modeling, particle-particle interaction is modeled using the kinetic theory for granular flow for binary collision and phenomenological closure for stresses of enduring contact. To improve the capability of the model for a range of grain sizes, a new closure for the fluid-particle velocity fluctuation correlation in the k-ε equations is proposed. The model is validated by comparing the numerical results with laboratory experiments under steady flow and oscillatory flow for grain sizes ranging from 0.13-0.51 mm. To improve the closure of particle stress and studying poly-dispersed sediment transport processes, an Euler-Lagrangian solver called CFDEM, which couples OpenFOAM for the fluid phase and LIGGGHTS for the particle phase, is modified for sand transport in oscillatory flow. Preliminary investigation suggests that even under sheet flow condition, small bed irregularities are observed during flow reversal. These small irregularities later encourage the formation of large sediment clouds during peak flow. 2D/3D simulation of the recent U-tube experiments at Naval Research Laboratory will be carried out to study instabilities in sheet flow and the poly-dispersed effects.

Cheng, Z.; d'Albignac, S.; Yu, X.; Hsu, T.; Sou, I.; Calantoni, J.

2012-12-01

106

Hyper/J: multi-dimensional separation of concerns for Java

Hyper/J™ supports flexible, multi-dimensional separation of concerns for Java™ software. This demonstration shows how to use Hyper/J in some important development and evolution scenarios, emphasizing the software engineering benefits it provides.

Harold Ossher; Peri L. Tarr

2000-01-01

107

NASA Astrophysics Data System (ADS)

We use computational tools to study assortativity patterns in multi-dimensional inter-organizational networks on the basis of different node attributes. In the case study of an inter-organizational network in the humanitarian relief sector, we consider not only macro-level topological patterns, but also assortativity on the basis of micro-level organizational attributes. Unlike assortative social networks, this inter-organizational network exhibits disassortative or random patterns on three node attributes. We believe organizations' pursuit of complementarity is one of the main reasons for these special patterns. Our analysis also provides insights into how to promote collaboration among humanitarian relief organizations.

Zhao, Kang; Ngamassi, Louis-Marie; Yen, John; Maitland, Carleen; Tapia, Andrea

108

Solution to n-dimensional Sturm-Liouville-like equations using multi-dimensional Schwarzian

NASA Astrophysics Data System (ADS)

In the present work we produce the solution to the n-dimensional Sturm-Liouville-like equations in R^n. To do so, we define the multi-dimensional Schwarzian derivative of a real-valued function of n variables and show that its basic properties, related to its invariance under the action of a group of multi-dimensional Möbius transformations defined in R^n, correspond to a straightforward generalization of those of the one-dimensional Schwarzian.
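For reference, the one-dimensional Schwarzian being generalized is the standard textbook object (the well-known definition, not a formula taken from this paper):

```latex
% One-dimensional Schwarzian derivative
S(f)(x) \;=\; \frac{f'''(x)}{f'(x)} \;-\; \frac{3}{2}\left(\frac{f''(x)}{f'(x)}\right)^{\!2}
% Mobius invariance: for g(x) = \frac{ax+b}{cx+d},\ ad - bc \neq 0,
S\!\left(g \circ f\right) \;=\; S(f)
```

The invariance under Möbius maps is the property whose multi-dimensional analogue the authors establish for their generalized Schwarzian.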

Erkut, M. H.; Borisenok, S. V.; Çağlar, M.; Polatoğlu, Y.

109

Landscape scale soil pollen analysis

NASA Astrophysics Data System (ADS)

Whatever the potential of soil pollen analysis, the medium has not been exploited to the same extent as conventional approaches involving lake and peat deposits. Problems endemic to soil pollen studies may, however, be preventing the realisation of investigations which might otherwise be carried out, especially those at the landscape scale. Spatially-based pilot studies focusing upon sub-peat podsols in Sussex (England) and Jura (Scotland) provide encouragement for the use of soil pollen analyses in broader scale inquiries of settlement and the wider landscape. A newly-instigated application of the approach is introduced for part of Shetland (Scotland).

Whittington, Graeme; Edwards, Kevin J.

1999-10-01

110

All-optical multi-dimensional imaging of energy-materials beyond the diffraction limit

NASA Astrophysics Data System (ADS)

Efficient, environmentally friendly harvesting, storage, transport and conversion of energy are some of the foremost challenges now facing mankind. An important facet of this challenge is the development of new materials with improved electronic and photonic properties. Nano-scale metrology will be important in developing these materials, and optical methods have many advantages over electrons or proximal probes. To surpass the diffraction limit, near-field methods can be used. Alternatively, the concept of imaging in a multi-dimensional space is employed, where, in addition to spatial dimensions, the added dimensions of energy and time allow one to distinguish objects which are closely spaced, in effect increasing the achievable resolution of optical microscopy towards the molecular level. We have employed these methods in the study of materials relevant to renewable energy processes. Specifically, we image the position and orientation of single carbohydrate-binding modules and visualize their interaction with cellulose with ~10 nm resolution, an important step in identifying the molecular underpinnings of bio-processing and the development of low-cost alternative fuels, and we describe our current work implementing these concepts to characterize the ultrafast carrier dynamics (~100 fs) in a new class of nano-structured solar cells, predicted to have theoretical efficiencies exceeding 60%, using femtosecond laser spectroscopy.

Smith, Steve; Dagel, D. J.; Zhong, L.; Kolla, P.; Ding, S.-Y.

2011-09-01

111

Multi-dimensional Modeling of Fullerene (C60) Nanoparticle Transport in the Subsurface Environment

NASA Astrophysics Data System (ADS)

The escalating production and consumption of engineered nanomaterials may lead to increased release into groundwater. A number of studies have revealed the potential human health effects and aquatic toxicity of nanomaterials. Understanding the fate and transport of engineered nanomaterials is therefore very important for evaluating their potential risks to human and ecological health. While much effort has been devoted to this area, limited work has evaluated engineered nanomaterial transport in multiple dimensions and at the field scale. In this work, we simulate the transport of fullerene aggregates (nC60), a widely used engineered nanomaterial, in a multi-dimensional environment. The Modular Three-Dimensional Multispecies Transport Model (MT3DMS) was modified to incorporate the transport and retention of nC60. The modified MT3DMS was validated by comparison with analytical solutions and one-dimensional numerical simulation results. The validated simulator was then used to simulate nC60 transport in two- and three-dimensional field sites. Hypothetical scenarios for nanomaterial entering the subsurface environment, including injection through a well and release from a waste site, were investigated. The influences of injection rate, groundwater velocity, groundwater recharge rate, subsurface heterogeneity, and nanomaterial size and surface properties were evaluated. Insights gained from this work will be discussed.
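The transport-and-retention physics can be sketched in one dimension (our minimal sketch with made-up parameters; the actual work modifies the 3D MT3DMS code): advection-dispersion of aqueous concentration C with first-order attachment to an immobile retained phase S.

```python
import numpy as np

# 1D explicit finite-difference sketch of nanoparticle transport:
#   dC/dt = D d2C/dx2 - v dC/dx - k_att C,    dS/dt = k_att C
# All parameters are hypothetical.
nx, dx, dt = 200, 0.05, 0.01          # grid spacing (m), time step (d)
v, D, k_att = 0.5, 0.01, 0.2          # velocity (m/d), dispersion (m2/d), attachment (1/d)
C = np.zeros(nx)
S = np.zeros(nx)
C[0] = 1.0                            # constant-concentration inlet

for _ in range(2000):                 # simulate 20 days
    dC = (D * (np.roll(C, -1) - 2 * C + np.roll(C, 1)) / dx**2
          - v * (C - np.roll(C, 1)) / dx      # upwind advection
          - k_att * C)
    S += dt * k_att * C               # retained (attached) mass
    C = C + dt * dC
    C[0], C[-1] = 1.0, C[-2]          # inlet / free-outflow boundaries

# The plume decays downstream; retained mass S accumulates near the inlet.
```

The chosen steps satisfy the explicit stability limits (D dt/dx^2 = 0.04, v dt/dx = 0.1), and the steady profile decays roughly exponentially with distance, as attachment removes particles along the flow path.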

Bai, C.; Li, Y.

2011-12-01

112

Nucleosynthesis in multi-dimensional SN Ia explosions

NASA Astrophysics Data System (ADS)

We present the results of nucleosynthesis calculations based on multi-dimensional (2D and 3D) hydrodynamical simulations of the thermonuclear burning phase in Type Ia supernovae (hereafter SN Ia). The detailed nucleosynthetic yields of our explosion models are calculated by post-processing the ejecta, using passively advected tracer particles. The nuclear reaction network employed in computing the explosive nucleosynthesis contains 383 nuclear species, ranging from neutrons, protons, and α-particles to 98Mo. Our models follow the common assumption that SN Ia are the explosions of white dwarfs that have approached the Chandrasekhar mass (M_Ch ≈ 1.39 M_⊙), and are disrupted by thermonuclear fusion of carbon and oxygen. But in contrast to 1D models, which adjust the burning speed to reproduce light curves and spectra, the thermonuclear burning model applied in this paper does not contain adjustable parameters. Therefore variations of the explosion energies and nucleosynthesis yields depend on changes of the initial conditions only. Here we discuss the nucleosynthetic yields obtained in 2D and 3D models with two different choices of ignition conditions (centrally ignited, in which the spherical initial flame geometry is perturbed with toroidal rings, and bubbles, in which multi-point ignition conditions are simulated), but keeping the initial composition of the white dwarf unchanged. Constraints imposed on the hydrodynamical models by nucleosynthesis as well as by the radial velocity distribution of the elements are discussed in detail. We show that in our simulations unburned C and O varies typically from ~40% to ~50% of the total ejected material. Some of the unburned material remains between the flame plumes and is concentrated in low-velocity regions at the end of the simulations. This effect is more pronounced in 2D than in 3D and in models with a small number of (large) ignition spots.
The main differences between all our models and standard 1D computations are, besides the higher mass fraction of unburned C and O, the C/O ratio (which in our case is typically a factor of 2.5 higher than in 1D computations), and somewhat lower abundances of certain intermediate-mass nuclei such as S, Cl, Ar, K, and Ca, and of 56Ni. We also demonstrate that the amount of 56Ni produced in the explosion is a very sensitive function of density and temperature. Because explosive C and O burning may produce the iron-group elements and their isotopes in rather different proportions, one can get different 56Ni fractions (and thus supernova luminosities) without changing the kinetic energy of the explosion. Finally, we show that we need the high-resolution multi-point ignition (bubbles) model to burn most of the material in the center (demonstrating that high resolution coupled with a large number of ignition spots is crucial to get rid of unburned material in a pure deflagration SN Ia model). Tables 1 and 2 are only available in electronic form at http://www.edpsciences.org

Travaglio, C.; Hillebrandt, W.; Reinecke, M.; Thielemann, F.-K.

2004-10-01

113

Steps Toward a Large-Scale Solar Image Data Analysis to Differentiate Solar Phenomena

NASA Astrophysics Data System (ADS)

We detail the first application of several dissimilarity measures for large-scale solar image data analysis. Using a solar-domain-specific benchmark dataset that contains multiple types of phenomena, we analyzed combinations of image parameters with different dissimilarity measures to determine the combinations that allow us to differentiate between the multiple solar phenomena from both intra-class and inter-class perspectives, where by class we refer to the same type of solar phenomenon. We also investigate the problem of reducing data dimensionality by applying multi-dimensional scaling to the dissimilarity matrices produced with the previously mentioned combinations. As an early investigation into dimensionality reduction, we apply multidimensional scaling (MDS) to determine how many MDS components are needed to maintain a good representation of our data (in a new artificial data space) and how many can be discarded to enhance querying performance. Finally, we present a comparative analysis of several classifiers to determine the quality of the dimensionality reduction achieved with this combination of image parameters, similarity measures, and MDS.
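Classical (Torgerson) MDS, one standard way to embed a precomputed dissimilarity matrix, can be sketched as follows; the 4-point distance matrix below is a made-up stand-in for an image-parameter dissimilarity matrix, not the paper's data.

```python
import numpy as np

def classical_mds(Dist, k):
    """Embed an n x n distance matrix into k dimensions (classical MDS)."""
    n = Dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (Dist ** 2) @ J               # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    w, V = w[::-1][:k], V[:, ::-1][:, :k]        # keep the top-k components
    return V * np.sqrt(np.maximum(w, 0)), w

# Four points on a unit square in the plane (synthetic example)
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
Dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

X, eigvals = classical_mds(Dist, k=2)
# The embedding reproduces Dist up to rotation/reflection; the eigenvalue
# spectrum tells you how many components carry real structure.
```

Inspecting the decay of `eigvals` is precisely how one decides how many MDS components to keep and how many to discard for faster querying.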

Banda, J. M.; Angryk, R. A.; Martens, P. C. H.

2013-11-01

114

Scaling analysis of stock markets.

In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series. LSDFA reveals more local properties by using local scale exponents. DCCA is a further development that quantifies the cross-correlation of two non-stationary time series. We report the auto-correlation and cross-correlation behaviors of three Western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the global financial crisis), and 2010-2012 (after the global financial crisis) using the DFA, LSDFA, and DCCA methods. We find that stock correlations are influenced by the economic systems of different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in Western stocks in every period, and stronger auto-correlations after the global financial crisis for every stock except Shen Cheng. LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reveals the economic integration of China with the world after the global financial crisis. Turning to cross-correlations, the six stock markets show different properties, and the three Chinese stocks reach their weakest cross-correlations during the global financial crisis. PMID:24985421
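The DFA procedure used above is standard and compact: integrate the mean-subtracted series, detrend it in windows of varying size, and read the scaling exponent off a log-log fit of the fluctuation function. A minimal sketch on synthetic white noise (not the paper's stock data), for which the exponent should come out near 0.5:

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis: fluctuation F(n) per window size n."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        n_win = len(y) // n
        f2 = []
        for i in range(n_win):
            seg = y[i * n:(i + 1) * n]
            t = np.arange(n)
            coef = np.polyfit(t, seg, 1)       # local linear trend
            f2.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        F.append(np.sqrt(np.mean(f2)))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)                  # white noise: expect alpha ~ 0.5
scales = np.array([8, 16, 32, 64, 128])
F = dfa(x, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]   # scaling exponent
```

LSDFA replaces the single global fit with fits over local ranges of scales, and DCCA applies the same windowed detrending to the covariance of two integrated series.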

Bu, Luping; Shang, Pengjian

2014-06-01

115

NASA Astrophysics Data System (ADS)

While the scattering phase for several one-dimensional potentials can be exactly derived, less is known in multi-dimensional quantum systems. This work provides a method to extend the one-dimensional phase knowledge to multi-dimensional quantization rules. The extension is illustrated in the example of Bogomolny's transfer operator method applied in two quantum wells bounded by step potentials of different heights. This generalized semiclassical method accurately determines the energy spectrum of the systems, which indicates the substantial role of the proposed phase correction. Theoretically, the result can be extended to other semiclassical methods, such as Gutzwiller trace formula, dynamical zeta functions, and semiclassical Landauer-Büttiker formula. In practice, this recipe enhances the applicability of semiclassical methods to multi-dimensional quantum systems bounded by general soft potentials.

Huang, Wen-Min; Mou, Chung-Yu; Chang, Cheng-Hung

2010-02-01

116

Multi-dimensional high-order numerical schemes for Lagrangian hydrodynamics

An approximate solver for multi-dimensional Riemann problems at grid points of unstructured meshes, and a numerical scheme for multi-dimensional hydrodynamics, have been developed in this paper. The solver is simple, and is developed only for use in numerical schemes for hydrodynamics. The scheme is truly multi-dimensional, is second-order accurate in both space and time, and satisfies conservation laws exactly for mass, momentum, and total energy. The scheme has been tested through numerical examples involving strong shocks. It has been shown that the scheme offers the principal advantage of high-order Godunov schemes: robust operation in the presence of very strong shocks and thin shock fronts.

Dai, William W [Los Alamos National Laboratory; Woodward, Paul R [Los Alamos National Laboratory

2009-01-01

117

Towards Optimal Multi-Dimensional Query Processing with Bitmap Indices

Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. This paper studies strategies for minimizing the access costs for processing multi-dimensional queries using bitmap indices with binning. Innovative features of our algorithm include (a) optimally placing the bin boundaries and (b) dynamically reordering the evaluation of the query terms. In addition, we derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
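The binning idea above can be sketched in a few lines: rows whose bin lies entirely inside the query range are answered from the bitmaps alone, and only the edge bins require a candidate check against the raw values. This is a toy illustration of binned bitmap evaluation, not the paper's optimal bin-placement algorithm:

```python
import numpy as np

def build_binned_bitmaps(values, bin_edges):
    """One bitmap (boolean array) per bin; bin i covers [edges[i], edges[i+1])."""
    bins = np.digitize(values, bin_edges[1:-1])     # bin index for each row
    return [(bins == i) for i in range(len(bin_edges) - 1)]

def range_query(values, bitmaps, bin_edges, lo, hi):
    """Rows with lo <= value < hi: OR fully covered bins, re-check edge bins."""
    result = np.zeros(len(values), dtype=bool)
    for i, bm in enumerate(bitmaps):
        b_lo, b_hi = bin_edges[i], bin_edges[i + 1]
        if lo <= b_lo and b_hi <= hi:
            result |= bm                            # bin fully inside the range
        elif b_hi > lo and b_lo < hi:               # edge bin: candidate check
            result |= bm & (values >= lo) & (values < hi)
    return result

rng = np.random.default_rng(1)
vals = rng.uniform(0, 100, 1000)
edges = np.linspace(0, 100, 11)                     # 10 equal-width bins
bms = build_binned_bitmaps(vals, edges)
hits = range_query(vals, bms, edges, 25.0, 63.0)
```

The candidate checks on edge bins are exactly the I/O cost the paper's bin-boundary placement and query-term reordering aim to minimize.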

Rotem, Doron; Stockinger, Kurt; Wu, Kesheng

2005-09-30

118

NASA Astrophysics Data System (ADS)

Recent progress in hardware and software technology opens up vistas where flexible services on large, multi-dimensional coverage data become a commodity. Interactive data browsing like with Virtual Globes, selective download, and ad-hoc analysis services are about to become available routinely, as several sites already demonstrate. However, for easy access and true machine-machine communication, Semantic Web concepts as being investigated for vector and meta data need to be extended to raster data and other coverage types. It will then be even more important to rely on open standards for data and service interoperability. The Open GeoSpatial Consortium (OGC), following a modular approach to specifying geo service interfaces, has issued the Web Coverage Service (WCS) Implementation Standard for accessing coverages or parts thereof. In contrast to the Web Map Service (WMS), which delivers imagery, WCS preserves data semantics and, thus, allows further processing. Together with the Web Catalog Service (CS-W) and the Web Feature Service (WFS), WCS completes the classical triad of meta, vector, and raster data. As such, they represent the core data services on which other services build. The current version of WCS is 1.1 with Corrigendum 2, also referred to as WCS 1.1.2. The WCS Standards Working Group (WCS.SWG) is continuing development of WCS in various directions. One work item is to extend WCS, which currently is confined to regularly gridded data, with support for further coverage types, such as those specified in ISO 19123. Two recently released extensions to WCS are WCS-T ("T" standing for "transactional") which adds upload capabilities to coverage servers and WCPS (Web Coverage Processing Service) which offers a coverage processing language, thereby bridging the gap to the generic WPS (Web Processing Service). 
All this is embedded into OGC's current initiative to achieve modular topical specification suites through so-called "extensions" which add focused capabilities to some minimal "core" specification. In this talk the current status of WCS, ongoing work, and directions under consideration are outlined. Further, embedding of WCS in the larger context of OGC's modular specification framework and into SOA concepts is discussed. The author, who is co-chair of OGC's WCS Working Group (WG) and Coverages WG, presents facts and personal views on the future of large-scale coverage services.

Baumann, P.

2009-04-01

119

Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests

The Steven Test was developed to determine the relative impact sensitivity of metal-encased solid high explosives and to be amenable to two-dimensional modeling. Low-level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times.

A. M. Niles; J. W. Forbes; C. M. Tarver; S. K. Chidester; F. Garcia; D. W. Greenwood; R. G. Garza; L L Swizter

2001-01-01

120

Most direct volume renderings produced today employ one-dimensional transfer functions, which assign color and opacity to the volume based solely on the single scalar quantity which comprises the dataset. Though they have not received widespread attention, multi-dimensional transfer functions are a very effective way to extract specific material boundaries and convey subtle surface properties. However, identifying good transfer functions is

Joe Kniss; Gordon L. Kindlmann; Charles D. Hansen

2001-01-01

121

Minimizing I/O Costs of Multi-Dimensional Queries with Bitmap Indices

Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for

Doron Rotem; Kurt Stockinger; Kesheng Wu

2006-01-01

122

Parameter estimation of multi-dimensional hidden Markov models - a scalable approach

Parameter estimation is a key computational issue in all statistical image modeling techniques. In this paper, we explore a computationally efficient parameter estimation algorithm for multi-dimensional hidden Markov models. 2-D HMM has been applied to supervised aerial image classification and comparisons have been made with the first proposed estimation algorithm. An extensive parametric study has been performed with 3-D HMM

Dhiraj Joshi; Jia Li; James Ze Wang

2005-01-01

123

Existence and Asymptotic Behavior of MultiDimensional Quantum Hydrodynamic Model for Semiconductors

This paper is devoted to the study of the existence and the time-asymptotic behavior of multi-dimensional quantum hydrodynamic equations for the electron particle density, the current density and the electrostatic potential in a spatial periodic domain. The equations are formally analogous to classical hydrodynamics but differ in the momentum equation, which is forced by an additional nonlinear dispersion term, (due to the

Hailiang Li; Pierangelo Marcati

2004-01-01

124

ERIC Educational Resources Information Center

Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These…

Liu, Gi-Zen; Liu, Zih-Hui; Hwang, Gwo-Jen

2011-01-01

125

Knowledge discovery in multi-dimensional data is a challenging problem in engineering design. For example, in trade space exploration of large design data sets, designers need to select a subset of data of interest and examine data from different data dimensions and within data clusters at different granularities. This exploration is a process that demands both humans, who can heuristically decide

Xiaolong Zhang; Tim Simpson; Mary Frecker; George Lesieutre

2010-01-01

126

ERIC Educational Resources Information Center

The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…

Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd

2012-01-01

127

Kullback-Leibler Information and Its Applications in Multi-Dimensional Adaptive Testing

ERIC Educational Resources Information Center

This paper first discusses the relationship between Kullback-Leibler information (KL) and Fisher information in the context of multi-dimensional item response theory and is further interpreted for the two-dimensional case, from a geometric perspective. This explication should allow for a better understanding of the various item selection methods…
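For a concrete feel of KL-based item selection, here is a sketch in the simpler unidimensional two-parameter logistic (2PL) case; the paper treats the multi-dimensional extension. The item parameters and the evaluation offset below are hypothetical, chosen only to illustrate the selection rule:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL item response function: probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def kl_item_info(theta_hat, theta, a, b):
    """KL divergence between the response distributions of one 2PL item
    at the current ability estimate theta_hat and a candidate theta."""
    p0, p1 = p_2pl(theta_hat, a, b), p_2pl(theta, a, b)
    return p0 * np.log(p0 / p1) + (1 - p0) * np.log((1 - p0) / (1 - p1))

# adaptive-testing style selection: prefer the item whose response
# distribution changes most between theta_hat and theta_hat + delta
items = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.4)]     # hypothetical (a, b) pairs
theta_hat, delta = 0.3, 0.5
scores = [kl_item_info(theta_hat, theta_hat + delta, a, b) for a, b in items]
best = int(np.argmax(scores))
```

In the two-dimensional case discussed in the paper, theta becomes a vector, and the geometric interpretation concerns how this divergence varies with direction in the ability space.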

Wang, Chun; Chang, Hua-Hua; Boughton, Keith A.

2011-01-01

128

A combined discontinuous Galerkin and finite volume scheme for multi-dimensional VPFP system

We construct a numerical scheme for the multi-dimensional Vlasov-Poisson-Fokker-Planck system based on a combined finite volume (FV) method for the Poisson equation in the spatial domain and streamline diffusion (SD) and discontinuous Galerkin (DG) finite element methods in the time and phase-space variables for the Vlasov-Fokker-Planck equation.

Asadzadeh, M.; Bartoszek, K. [Department of Mathematics, Chalmers University of Technology and University of Gothenburg SE-412 96 Goeteborg (Sweden)

2011-05-20

129

Efficient Organization And Access Of Multi-Dimensional

This paper addresses the problem of urgently needed data management techniques for efficiently retrieving requested subsets of large datasets from mass storage devices. This problem is especially critical for scientific investigators who need ready access to the large volume of data generated by large-scale supercomputer simulations and physical experiments as well as the automated collection of observations by monitoring devices and satellites. This problem

Ling Tony Chen; R. Drach; M. Keating; S. Louis; Doron Rotem; Arie Shoshani

1995-01-01

130

Playing Tetris on Meshes and Multi-Dimensional SHEARSORT

Shearsort is a classical sorting algorithm working in rounds on 2-dimensional meshes of processors. Its elementary and elegant runtime analysis can be found in various textbooks. There is a straightforward generalization of Shearsort to multi-dimensional meshes. As experiments turn out, it works fast. However, no method has yet been shown strong enough to provide a tight analysis of this algorithm. In this paper, we
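The 2-dimensional Shearsort being generalized here is easy to sketch: alternately sort rows in snake order (even rows ascending, odd rows descending) and columns in ascending order; the textbook zero-one-principle argument shows roughly log n + 1 such rounds suffice, after which one final row phase leaves the mesh sorted in row-snake order. A small sketch of the 2D case (illustrative; not the multi-dimensional variant analyzed in the paper):

```python
import numpy as np

def shearsort(grid):
    """Shearsort on an n x n mesh: alternating row (snake) and column phases."""
    a = grid.copy()
    n = a.shape[0]
    rounds = int(np.ceil(np.log2(n))) + 1           # classic sufficient bound
    for _ in range(rounds):
        for i in range(a.shape[0]):                 # row phase, snake order
            a[i] = np.sort(a[i]) if i % 2 == 0 else np.sort(a[i])[::-1]
        for j in range(a.shape[1]):                 # column phase, ascending
            a[:, j] = np.sort(a[:, j])
    for i in range(a.shape[0]):                     # final row phase
        a[i] = np.sort(a[i]) if i % 2 == 0 else np.sort(a[i])[::-1]
    return a

rng = np.random.default_rng(2)
m = rng.permutation(64).reshape(8, 8)
s = shearsort(m)
```

On a real mesh each row/column sort is itself a parallel odd-even transposition sort; `np.sort` stands in for that here.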

Miroslaw Kutylowski; Rolf Wanka

1997-01-01

131

Scale Freeness in Factor Analysis.

ERIC Educational Resources Information Center

The notion of scale freeness does not seem to have been well understood in the factor analytic literature. Misconceptions concerning scale freeness are clarified, and a theorem that ensures scale freeness in the orthogonal factor model is given in this paper. (Author/JKS)

Swaminathan, Hariharan; Algina, James

1978-01-01

132

Multi-scale Analysis with Python

NASA Astrophysics Data System (ADS)

This talk will discuss the SphereBlur package, written in Python and available in the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT) environment. SphereBlur provides a flexible multi-scale analysis toolkit for climate data based on linear scale space. Scale space methods, common in image processing, draw upon the well-studied physics of diffusion to obtain a multi-scale representation of data. A simple extension of these methods to the sphere provides flexible analysis tools for climate data. We use this framework to evaluate model performance at multiple spatial scales and to design filters to isolate scales of interest. We show how these methods can be used to simultaneously detect points and scales of interest in data, and to track the appearance and evolution of features such as corners, edges, and "blobs" in observational and model data. Prepared by LLNL under Contract DE-AC52-07NA27344.
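Linear scale space, the basis of the toolkit described above, is simply the data convolved with Gaussians of increasing width; differences between adjacent scale levels act as band-pass filters that isolate features at a chosen scale. A flat 1D sketch of the idea (SphereBlur itself works on the sphere; the toy "field" below is illustrative):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Discrete 1D Gaussian, truncated at 4 sigma and normalized."""
    r = int(np.ceil(4 * sigma))
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(signal, sigmas):
    """Linear scale space: the signal smoothed at a range of Gaussian widths."""
    return [np.convolve(signal, gaussian_kernel(s), mode="same") for s in sigmas]

# toy field: a narrow spike (near x = 2) riding on a broad bump
x = np.linspace(-10, 10, 401)
field = np.exp(-x**2 / 8.0) + 0.5 * np.exp(-(x - 2)**2 / 0.02)
levels = scale_space(field, sigmas=[0.5, 2.0, 8.0])
band = levels[0] - levels[1]     # band-pass: isolates the fine-scale spike
```

The difference of two scale levels suppresses the broad bump and leaves the narrow spike, which is the mechanism behind scale-selective feature detection.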

Marvel, K. D.

2012-12-01

133

NASA Astrophysics Data System (ADS)

The solution of the polarized line radiative transfer (RT) equation in multi-dimensional geometries has been rarely addressed, and only under the approximation that the changes of frequencies at each scattering are uncorrelated (complete frequency redistribution). With the increase in the resolution power of telescopes, being able to handle RT in multi-dimensional structures becomes absolutely necessary. In the present paper, our first aim is to formulate the polarized RT equation for resonance scattering in multi-dimensional media, using the elegant technique of irreducible spherical tensors T^K_Q(i, Ω). Our second aim is to develop a numerical method of solution based on the polarized approximate lambda iteration (PALI) approach. We consider both complete frequency redistribution and partial frequency redistribution (PRD) in the line scattering. In a multi-dimensional geometry, the radiation field is non-axisymmetric even in the absence of a symmetry-breaking mechanism such as an oriented magnetic field. We generalize here to the three-dimensional (3D) case the decomposition technique developed for the Hanle effect in a one-dimensional (1D) medium, which allows one to represent the Stokes parameters I, Q, U by a set of six cylindrically symmetrical functions. The scattering phase matrix is expressed in terms of T^K_Q(i, Ω) (i = 0, 1, 2; K = 0, 1, 2; -K ≤ Q ≤ +K), with Ω being the direction of the outgoing ray. Starting from the definition of the source vector, we show that it can be represented in terms of six components S^K_Q independent of Ω. The formal solution of the multi-dimensional transfer equation shows that the Stokes parameters can also be expanded in terms of T^K_Q(i, Ω). Because of the 3D geometry, the expansion coefficients I^K_Q remain Ω-dependent. 
We show that each I^K_Q satisfies a simple transfer equation with a source term S^K_Q, and that this transfer equation provides an efficient approach for handling the polarized transfer in multi-dimensional geometries. A PALI method for 3D, associated with a core-wing separation method for treating PRD, is developed. It is tested by comparison with 1D solutions, and several benchmark solutions in the 3D case are given.

Anusha, L. S.; Nagendra, K. N.

2011-01-01

134

The solution of the polarized line radiative transfer (RT) equation in multi-dimensional geometries has been rarely addressed, and only under the approximation that the changes of frequencies at each scattering are uncorrelated (complete frequency redistribution). With the increase in the resolution power of telescopes, being able to handle RT in multi-dimensional structures becomes absolutely necessary. In the present paper, our first aim is to formulate the polarized RT equation for resonance scattering in multi-dimensional media, using the elegant technique of irreducible spherical tensors T^K_Q(i, Ω). Our second aim is to develop a numerical method of solution based on the polarized approximate lambda iteration (PALI) approach. We consider both complete frequency redistribution and partial frequency redistribution (PRD) in the line scattering. In a multi-dimensional geometry, the radiation field is non-axisymmetric even in the absence of a symmetry-breaking mechanism such as an oriented magnetic field. We generalize here to the three-dimensional (3D) case the decomposition technique developed for the Hanle effect in a one-dimensional (1D) medium, which allows one to represent the Stokes parameters I, Q, U by a set of six cylindrically symmetrical functions. The scattering phase matrix is expressed in terms of T^K_Q(i, Ω) (i = 0, 1, 2; K = 0, 1, 2; -K ≤ Q ≤ +K), with Ω being the direction of the outgoing ray. Starting from the definition of the source vector, we show that it can be represented in terms of six components S^K_Q independent of Ω. The formal solution of the multi-dimensional transfer equation shows that the Stokes parameters can also be expanded in terms of T^K_Q(i, Ω). Because of the 3D geometry, the expansion coefficients I^K_Q remain Ω-dependent. 
We show that each I^K_Q satisfies a simple transfer equation with a source term S^K_Q, and that this transfer equation provides an efficient approach for handling the polarized transfer in multi-dimensional geometries. A PALI method for 3D, associated with a core-wing separation method for treating PRD, is developed. It is tested by comparison with 1D solutions, and several benchmark solutions in the 3D case are given.

Anusha, L. S.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bangalore 560 034 (India)

2011-01-01

135

Asynchronous Multi-Dimensional Hybrid Simulations of Magnetized Plasmas

NASA Astrophysics Data System (ADS)

Hybrid simulations provide important insight into the physics of magnetized plasmas with energetic ion components. Ion-driven processes are crucial for understanding the behavior of complex plasma systems such as the Earth's magnetosphere and the Field-Reversed Configuration (FRC). Widely varying time and length scales often prevent simulating these systems with adequate resolution. To resolve this issue we developed an asynchronous, multi-dimensional hybrid code, HYPERS. Instead of stepping all simulation variables uniformly in time, HYPERS tracks meaningful changes to individual particles and cell-based electromagnetic fields via discrete events. HYPERS has recently been parallelized with the Preemptive Event Processing (PEP) technique. The parallel algorithm enables arbitrary domain decompositions and processor configurations on restarts. This is a critical prerequisite for implementing full load-balancing functionality. We validate HYPERS by simulating the interaction of streaming plasmas with dipole magnetospheres and show that our approach yields superior numerical metrics (stability, accuracy and speed) compared to conventional techniques. As a first step towards simulating the FRC, we apply HYPERS to study magnetically driven plasma compression in two dimensions.

Omelchenko, Y. A.; Karimabadi, H.; Brown, M.; Catalyurek, U. V.; Saule, E.

2011-11-01

136

Rapid prediction of multi-dimensional NMR data sets.

We present a computational environment for Fast Analysis of multidimensional NMR DAta Sets (FANDAS) that allows assembling multidimensional data sets from a variety of input parameters and facilitates comparing and modifying such "in silico" data sets during the various stages of NMR data analysis. The input parameters can vary from (partial) NMR assignments obtained directly from experiments to values retrieved from in silico prediction programs. The resulting predicted data sets enable a rapid evaluation of sample labeling in light of spectral resolution and structural content, using standard NMR software such as Sparky. In addition, direct comparison to experimental data sets can be used to validate NMR assignments, distinguish different molecular components, and refine structural models or other parameters derived from NMR data. The method is demonstrated in the context of solid-state NMR data obtained for the cyclic nucleotide binding domain of a bacterial cyclic nucleotide-gated channel and on membrane-embedded sensory rhodopsin II. FANDAS is freely available as a web portal under WeNMR ( http://www.wenmr.eu/services/FANDAS ). PMID:23143278

Gradmann, Sabine; Ader, Christian; Heinrich, Ines; Nand, Deepak; Dittmann, Marc; Cukkemane, Abhishek; van Dijk, Marc; Bonvin, Alexandre M J J; Engelhard, Martin; Baldus, Marc

2012-12-01

137

Multi-dimensional hydrocode analyses of penetrating hypervelocity impacts.

The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.

Saul, W. Venner; Reinhart, William Dodd; Thornhill, Tom Finley, III; Lawrence, Raymond Jeffery Jr.; Chhabildas, Lalit Chandra; Bessette, Gregory Carl

2003-08-01

138

Multi-Dimensional Hydrocode Analyses of Penetrating Hypervelocity Impacts

NASA Astrophysics Data System (ADS)

The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.

Bessette, G. C.; Lawrence, R. J.; Chhabildas, L. C.; Reinhart, W. D.; Thornhill, T. F.; Saul, W. V.

2004-07-01

139

Confirmatory Factor Analysis and Profile Analysis via Multidimensional Scaling

ERIC Educational Resources Information Center

This paper describes the Confirmatory Factor Analysis (CFA) parameterization of the Profile Analysis via Multidimensional Scaling (PAMS) model to demonstrate validation of profile pattern hypotheses derived from multidimensional scaling (MDS). Profile Analysis via Multidimensional Scaling (PAMS) is an exploratory method for identifying major…

Kim, Se-Kang; Davison, Mark L.; Frisby, Craig L.

2007-01-01

140

Multi-dimensional hybrid Fourier continuation–WENO solvers for conservation laws

NASA Astrophysics Data System (ADS)

We introduce a multi-dimensional point-wise multi-domain hybrid Fourier-Continuation/WENO technique (FC–WENO) that enables high-order and non-oscillatory solution of systems of nonlinear conservation laws, and essentially dispersionless, spectral, solution away from discontinuities, as well as mild CFL constraints for explicit time stepping schemes. The hybrid scheme conjugates the expensive, shock-capturing WENO method in small regions containing discontinuities with the efficient FC method in the rest of the computational domain, yielding a highly effective overall scheme for applications with a mix of discontinuities and complex smooth structures. The smooth and discontinuous solution regions are distinguished using the multi-resolution procedure of Harten [A. Harten, Adaptive multiresolution schemes for shock computations, J. Comput. Phys. 115 (1994) 319–338]. We consider a WENO scheme of formal order nine and a FC method of order five. The accuracy, stability and efficiency of the new hybrid method for conservation laws are investigated for problems with both smooth and non-smooth solutions. The Euler equations for gas dynamics are solved for the Mach 3 and Mach 1.25 shock wave interaction with a small, plain, oblique entropy wave using the hybrid FC–WENO, the pure WENO and the hybrid central difference–WENO (CD–WENO) schemes. We demonstrate considerable computational advantages of the new FC-based method over the two alternatives. Moreover, in solving a challenging two-dimensional Richtmyer–Meshkov instability (RMI), the hybrid solver results in seven-fold speedup over the pure WENO scheme. Thanks to the multi-domain formulation of the solver, the scheme is straightforwardly implemented on parallel processors using message passing interface as well as on Graphics Processing Units (GPUs) using CUDA programming language. 
The performance of the solver on parallel CPUs yields almost perfect scaling, illustrating the minimal communication requirements of the multi-domain strategy. For the same RMI test, the hybrid computation on a single GPU, in double-precision arithmetic, displays a five- to six-fold speedup over the hybrid computation on a single CPU. The relative speedup of the hybrid computation over the WENO computation on GPUs is similar to that on CPUs, demonstrating the advantage of the hybrid-scheme technique on both CPUs and GPUs.

Shahbazi, Khosro; Hesthaven, Jan S.; Zhu, Xueyu

2013-11-01

141

EnBiD: Fast Multi-dimensional Density Estimation

NASA Astrophysics Data System (ADS)

We present a method to numerically estimate the densities of discretely sampled data based on a binary space partitioning tree. We start with a root node containing all the particles and then recursively divide each node into two nodes, each containing roughly equal numbers of particles, until each node contains only one particle. The volume of such a leaf node provides an estimate of the local density, and its shape provides an estimate of the variance. We implement an entropy-based node-splitting criterion that results in a significant improvement in the estimation of densities compared to earlier work. The method is completely metric free and can be applied to an arbitrary number of dimensions. We use this method to determine the appropriate metric at each point in space and then use kernel-based methods for calculating the density. The kernel-smoothed estimates were found to be more accurate and to have lower dispersion. We apply this method to determine the phase-space densities of dark matter haloes obtained from cosmological N-body simulations. We find that, contrary to earlier studies, the volume distribution function v(f) of phase-space density f does not have a constant slope but rather a small hump at high phase-space densities. We demonstrate that a model in which a halo is made up of a superposition of Hernquist spheres is not capable of explaining the shape of the v(f) versus f relation, whereas a model which takes into account the contribution of the main halo separately roughly reproduces the behaviour seen in simulations. The use of the presented method is not limited to the calculation of phase-space densities: it can be used as a general-purpose data-mining tool, and due to its speed and accuracy it is ideally suited for the analysis of large multidimensional data sets.
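The core binary-space-partitioning construction described above is easy to sketch: recursively split each node at the median along its widest dimension until each leaf holds one particle, then take 1/(leaf volume) as the local density there. A 2D toy sketch (without the paper's entropy-based splitting criterion or the subsequent kernel smoothing):

```python
import numpy as np

def bsp_density(points, lo, hi):
    """Estimate density at each point as 1 / (volume of its BSP leaf).
    Nodes are split at the median along their widest dimension."""
    n, _ = points.shape
    dens = np.empty(n)

    def recurse(ids, lo, hi):
        if len(ids) == 1:
            dens[ids[0]] = 1.0 / np.prod(hi - lo)
            return
        dim = int(np.argmax(hi - lo))                  # widest dimension
        order = ids[np.argsort(points[ids, dim])]
        half = len(order) // 2
        cut = 0.5 * (points[order[half - 1], dim] + points[order[half], dim])
        left_hi = hi.copy(); left_hi[dim] = cut
        right_lo = lo.copy(); right_lo[dim] = cut
        recurse(order[:half], lo, left_hi)
        recurse(order[half:], right_lo, hi)

    recurse(np.arange(n), np.asarray(lo, float), np.asarray(hi, float))
    return dens

rng = np.random.default_rng(3)
# dense cluster near the origin plus a sparse background in [0, 1]^2
pts = np.vstack([0.05 * rng.random((200, 2)), rng.random((50, 2))])
rho = bsp_density(pts, lo=[0, 0], hi=[1, 1])
```

Clustered particles end up in tiny leaves and hence get high density estimates, which is the behavior the abstract's phase-space application relies on.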

Sharma, Sanjib; Steinmetz, Matthias

2011-09-01

142

Multi-dimensional option pricing using radial basis functions and the generalized Fourier transform

NASA Astrophysics Data System (ADS)

We show that the generalized Fourier transform can be used for reducing the computational cost and memory requirements of radial basis function methods for multi-dimensional option pricing. We derive a general algorithm, including a transformation of the Black-Scholes equation into the heat equation, that can be used in any number of dimensions. Numerical experiments in two and three dimensions show that the gain is substantial even for small problem sizes. Furthermore, the gain increases with the number of dimensions.
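The transformation of the Black-Scholes equation into the heat equation used by the authors is a standard change of variables; one common route, sketched here for the one-dimensional case, is:

```latex
% Black-Scholes PDE for an option value V(S,t) with volatility sigma, rate r:
\frac{\partial V}{\partial t}
  + \tfrac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2}
  + r S \frac{\partial V}{\partial S} - rV = 0 .
% Substituting S = K e^{x},\; t = T - \frac{2\tau}{\sigma^2},\; V = K\,v(x,\tau)
% gives a constant-coefficient equation with k = 2r/\sigma^2:
\frac{\partial v}{\partial \tau}
  = \frac{\partial^2 v}{\partial x^2}
  + (k-1)\frac{\partial v}{\partial x} - k v .
% The further substitution
% v = e^{-\frac{1}{2}(k-1)x - \frac{1}{4}(k+1)^2 \tau}\, u(x,\tau)
% reduces it to the heat equation:
\frac{\partial u}{\partial \tau} = \frac{\partial^2 u}{\partial x^2}.
```

In higher dimensions, an analogous log-price substitution in each asset variable yields a multi-dimensional heat equation, which is the setting in which the generalized Fourier transform is applied.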

Larsson, Elisabeth; Ahlander, Krister; Hall, Andreas

2008-12-01

143

Spectral Localization by Gaussian Random Potentials in Multi-Dimensional Continuous Space

A detailed mathematical proof is given that the energy spectrum of a non-relativistic quantum particle in multi-dimensional Euclidean space under the influence of suitable random potentials has almost surely a pure-point component. The result applies in particular to a certain class of zero-mean Gaussian random potentials, which are homogeneous with respect to Euclidean translations. More precisely, for these Gaussian random

Werner Fischer; Hajo Leschke; Peter Müller

2000-01-01

144

Efficient array architectures for multi-dimensional lifting-based discrete wavelet transforms

Efficient array architectures for multi-dimensional (m-D) discrete wavelet transform (DWT), e.g. m = 2, 3, are presented, in which the lifting scheme of the DWT is used to efficiently reduce hardware complexity. The parallelism of the 2^m subband transforms in lifting-based m-D DWT is explored, which efficiently increases the throughput rate of separable m-D DWT with little additional hardware overhead. The proposed architecture is composed
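The lifting scheme referenced above factors a wavelet transform into alternating predict and update steps on the even- and odd-indexed samples; this factorization is what keeps the hardware complexity low. A minimal software sketch of the simplest (Haar) case, not the array architecture itself:

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of the Haar wavelet via lifting: split into even/odd samples,
    predict the odds from the evens (detail), then update the evens
    (approximation = pairwise means)."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    detail = odd - even           # predict step
    approx = even + detail / 2    # update step
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Invert the lifting steps in reverse order and re-interleave."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

x = np.array([2.0, 4.0, 6.0, 8.0, 5.0, 5.0, 1.0, 3.0])
a, d = haar_lifting_forward(x)
y = haar_lifting_inverse(a, d)
```

A separable m-D DWT applies this 1D step along each of the m axes in turn, producing the 2^m subbands whose parallelism the architecture exploits.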

Cheng-yi Xiong; Jian-hua Hou; Jin-wen Tian; Jian Liu

2007-01-01

145

Visualizing multi-dimensional clusters, trends, and outliers using star coordinates

Interactive visualizations are effective tools in mining scientific, engineering, and business data to support decision-making activities. Star Coordinates is proposed as a new multi-dimensional visualization technique, which supports various interactions to stimulate visual thinking in the early stages of the knowledge discovery process. In Star Coordinates, coordinate axes are arranged on a two-dimensional surface, where each axis shares the same origin point.
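A minimal sketch of the Star Coordinates mapping described above, assuming min-max normalisation and evenly spaced axes (hypothetical names; the published technique adds interactive axis scaling and rotation on top of this projection):

```python
import math

def star_coordinates(data):
    """Project d-dimensional records onto 2-D: axis j is a unit vector at
    angle 2*pi*j/d from a shared origin, and each record is the vector sum
    of its min-max normalised values along those axis vectors."""
    d = len(data[0])
    axes = [(math.cos(2 * math.pi * j / d), math.sin(2 * math.pi * j / d))
            for j in range(d)]
    los = [min(r[j] for r in data) for j in range(d)]
    his = [max(r[j] for r in data) for j in range(d)]
    out = []
    for r in data:
        x = y = 0.0
        for j, (ax, ay) in enumerate(axes):
            span = (his[j] - los[j]) or 1.0   # guard constant columns
            v = (r[j] - los[j]) / span
            x += v * ax
            y += v * ay
        out.append((x, y))
    return out
```

A record at the minimum of every dimension maps to the shared origin, and identical records coincide, so clusters in the data appear as clusters of points.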

Eser Kandogan

2001-01-01

146

Low back pain in adolescent female rowers: a multi-dimensional intervention study

The aim of this study was to determine whether a multi-dimensional intervention programme was effective in reducing the incidence of low back pain (LBP) and the associated levels of pain and disability in schoolgirl rowers. This non-randomised controlled trial involved an intervention (INT) group consisting of 90 schoolgirl rowers from one school and a control (CTRL) group consisting of 131

Debra Perich; Angus Burnett; Peter O’Sullivan; Chris Perkin

2011-01-01

147

Metarule-Guided Mining of Multi-Dimensional Association Rules Using Data Cubes

In this paper, we employ a novel approach to metarule-guided, multi-dimensional association rule mining which explores a data cube structure. We propose algorithms for metarule-guided mining: given a metarule containing p predicates, we compare mining on an n-dimensional (n-D) cube structure (where p < n) with mining on multiple smaller p-dimensional cubes. In addition, we propose an efficient method

Micheline Kamber; Jiawei Han; Jenny Chiang

1997-01-01

148

NASA Astrophysics Data System (ADS)

Using the reform documents of the National Council of Teachers of Mathematics (NCTM) (NCTM, 1989, 1991, 1995), a theory-based multi-dimensional assessment framework (the "SEA" framework) which should help expand the scope of assessment in mathematics is proposed. This framework uses a context based on mathematical reasoning and has components that comprise mathematical concepts, mathematical procedures, mathematical communication, mathematical problem solving, and mathematical disposition.

Anku, Sitsofe E.

1997-09-01

149

Long-Time Stability of Multi-Dimensional Noncharacteristic Viscous Boundary Layers

We establish long-time stability of multi-dimensional noncharacteristic boundary layers of a class of hyperbolic-parabolic systems including the compressible Navier-Stokes equations with inflow [outflow] boundary conditions, under the assumption of strong spectral, or uniform Evans, stability. Evans stability has been verified for small-amplitude layers by Guès, Métivier, Williams, and Zumbrun. For large-amplitude layers, it may be efficiently checked numerically, as done

Toan Nguyen; Kevin Zumbrun

2010-01-01

150

In this paper, we summarise our recent research interest in the hydrothermal synthesis and structural characterisation of multi-dimensional coordination polymers. The use of N-(phosphonomethyl)iminodiacetic acid (also referred to as H4pmida) in the literature as a versatile chelating organic ligand is briefly reviewed. This molecule plays an important role in the formation of centrosymmetric dimeric [V2O2(pmida)2]4- anionic units, which were first

Filipe A. Almeida Paz; João Rocha; Jacek Klinowski; Tito Trindade; Fa-Nian Shi; Luís Mafra

2005-01-01

151

A multi-dimensional HLL-Riemann solver for Euler equations of gas dynamics

This article presents a numerical model that makes it possible to solve, on unstructured triangular meshes and with a high order of accuracy, a multi-dimensional Riemann problem that appears when solving hyperbolic problems. For this purpose, we use a MUSCL-like procedure in a "cell-vertex" finite-volume framework. In the first part of this procedure, we devise a four-state bi-dimensional HLL solver (HLL-2D). This solver is based

G. Capdeville

2011-01-01

152

A univariate dimension-reduction method for multi-dimensional integration in stochastic mechanics

This paper presents a new, univariate dimension-reduction method for calculating statistical moments of response of mechanical systems subject to uncertainties in loads, material properties, and geometry. The method involves an additive decomposition of a multi-dimensional response function into multiple one-dimensional functions, an approximation of response moments by moments of single random variables, and a moment-based quadrature rule for numerical integration.
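The additive decomposition described above can be illustrated with a small sketch: the mean of an n-dimensional response is approximated by n one-dimensional Gauss quadratures, here for i.i.d. standard normal inputs (hypothetical names; the paper develops general moment-based quadrature rules for arbitrary moments):

```python
import math

# 3-point Gauss quadrature nodes/weights for a standard normal variable
NODES = [-math.sqrt(3.0), 0.0, math.sqrt(3.0)]
WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def univariate_mean(y, dim, mu=0.0):
    """Mean of y(X1..Xn) with Xi ~ iid N(0,1) via the additive decomposition
    y(x) ~ sum_i y(mu,..,x_i,..,mu) - (n-1)*y(mu,..,mu): only n one-dimensional
    quadratures are needed instead of a full n-dimensional one."""
    base = [mu] * dim
    total = 0.0
    for i in range(dim):
        for node, w in zip(NODES, WEIGHTS):
            x = list(base)
            x[i] = node          # vary one coordinate, hold the rest at mu
            total += w * y(x)
    return total - (dim - 1) * y(base)
```

For responses without strong interaction terms the decomposition is exact; three Gauss points per dimension already integrate polynomials up to degree five exactly.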

S. Rahman; H. Xu

2004-01-01

153

Minimizing I/O Costs of Multi-Dimensional Queries with Bitmap Indices

Bitmap indices have been widely used in scientific applications and commercial systems for processing complex, multi-dimensional queries where traditional tree-based indices would not work efficiently. A common approach for reducing the size of a bitmap index for high-cardinality attributes is to group ranges of values of an attribute into bins and then build a bitmap for each bin rather than a bitmap for each value of the attribute. Binning reduces storage costs; however, results of queries based on bins often require additional filtering to discard false positives, i.e., records in the result that do not satisfy the query constraints. This additional filtering, also known as "candidate checking," requires access to the base data on disk and involves significant I/O costs. This paper studies strategies for minimizing the I/O costs of "candidate checking" for multi-dimensional queries. This is done by determining the number of bins allocated for each dimension and then placing bin boundaries in optimal locations. Our algorithms use knowledge of the data distribution and query workload. We derive several analytical results concerning optimal bin allocation for a probabilistic query model. Our experimental evaluation with real-life data shows an average I/O cost improvement of at least a factor of 10 for multi-dimensional queries on datasets from two different applications. Our experiments also indicate that the speedup increases with the number of query dimensions.
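A toy sketch of binning and candidate checking as described above, using Python sets as stand-ins for compressed bitmaps (hypothetical names, not the paper's optimal bin-placement algorithms):

```python
def build_binned_bitmaps(values, edges):
    """One bitmap (here: a set of row ids) per bin; bin b covers
    the half-open interval [edges[b], edges[b+1])."""
    bitmaps = [set() for _ in range(len(edges) - 1)]
    for rid, v in enumerate(values):
        for b in range(len(edges) - 1):
            if edges[b] <= v < edges[b + 1]:
                bitmaps[b].add(rid)
                break
    return bitmaps

def range_query(values, edges, bitmaps, lo, hi):
    """Answer lo <= v < hi. Bins fully inside the query need no base-data
    access; partially covered edge bins yield candidates that must be
    checked against the base data (the I/O cost the paper minimizes)."""
    hits, candidates = set(), set()
    for b in range(len(edges) - 1):
        blo, bhi = edges[b], edges[b + 1]
        if lo <= blo and bhi <= hi:
            hits |= bitmaps[b]             # bin entirely inside query
        elif blo < hi and bhi > lo:
            candidates |= bitmaps[b]       # edge bin: candidate check needed
    hits |= {rid for rid in candidates if lo <= values[rid] < hi}
    return hits, len(candidates)
```

The second return value counts candidate checks; placing bin boundaries well (the paper's subject) shrinks that number for a given query workload.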

Rotem, Doron; Stockinger, Kurt; Wu, Kesheng

2006-03-30

154

The recently-proposed multi-dimensional digital predistortion (DPD) technique is experimentally investigated in terms of nonlinearity order, memory length, oversampling rate, digital-to-analog conversion resolution, carrier frequency dependence and RF input power tolerance, in both directly-modulated and externally-modulated multi-band radio-over-fiber (RoF) systems. Similar characteristics of the multi-dimensional digital predistorter are identified in directly-modulated and externally-modulated RoF systems. The experimental results suggest implementing a memory-free multi-dimensional digital predistorter involving nonlinearity orders up to 5 at 2 × oversampling rate for practical multi-band RoF systems. Using the suggested parameters, the multi-dimensional DPD is able to improve the RF input power tolerance by greater than 3dB for each band in a two-band RoF system, indicating an enhancement of RF power transmitting efficiency. PMID:24663783

Chen, Hao; Li, Jianqiang; Xu, Kun; Pei, Yinqing; Dai, Yitang; Yin, Feifei; Lin, Jintong

2014-02-24

155

National Technical Information Service (NTIS)

This report summarizes the major findings of a research program in the area of convective heat transfer modeling for internal combustion engine flows. The modeling effort has been performed in the context of multi-dimensional flow calculation procedures w...

1989-01-01

156

NASA Astrophysics Data System (ADS)

This paper addresses the extension of one-dimensional filters to two and three space dimensions. A new multi-dimensional extension is proposed for explicit and implicit generalized Shapiro filters. We introduce a definition of explicit and implicit generalized Shapiro filters that leads to very simple formulas for the analyses in two and three space dimensions. We show that many filters used for weather forecasting and for high-order aerodynamic and aeroacoustic computations match the proposed definition. Consequently, the new multi-dimensional extension can be easily implemented in existing solvers. The new multi-dimensional extension and the two commonly used methods are compared in terms of compactness, robustness, accuracy and computational cost. Benefits of the genuinely multi-dimensional extension are assessed for various computations using the compressible Euler equations.
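For orientation, here is a minimal sketch of a second-order explicit Shapiro filter and the commonly used direction-by-direction extension that the paper compares its genuinely multi-dimensional extension against, assuming periodic boundaries (hypothetical names):

```python
def shapiro_1d(u):
    """Second-order explicit Shapiro filter with periodic ends:
    u_i <- u_i + (1/4)(u_{i-1} - 2 u_i + u_{i+1}).
    It annihilates the two-point (grid-to-grid) oscillation and
    leaves constants untouched."""
    n = len(u)
    return [u[i] + 0.25 * (u[(i - 1) % n] - 2.0 * u[i] + u[(i + 1) % n])
            for i in range(n)]

def shapiro_2d(field):
    """Direction-by-direction extension: filter every row, then every column."""
    rows = [shapiro_1d(r) for r in field]
    cols = [shapiro_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

A checkerboard field (the shortest resolvable wave in both directions) is removed entirely, while a constant field passes through unchanged.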

Falissard, F.

2013-11-01

157

Scaling analysis of affinity propagation

NASA Astrophysics Data System (ADS)

We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide-and-conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide-and-conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d = 2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/((h+1)d)). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent phase (for s > s*) of the underlying hidden cluster structure. At this precise point a self-similarity property holds which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set.

Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang

2010-06-01

158

Scaling analysis of affinity propagation.

We analyze and exploit some scaling properties of the affinity propagation (AP) clustering algorithm proposed by Frey and Dueck [Science 315, 972 (2007)]. Following a divide-and-conquer strategy we set up an exact renormalization-based approach to address the question of clustering consistency, in particular, how many clusters are present in a given data set. We first observe that the divide-and-conquer strategy, used on a large data set, hierarchically reduces the complexity O(N^2) to O(N^((h+2)/(h+1))), for a data set of size N and a depth h of the hierarchical strategy. For a data set embedded in a d-dimensional space, we show that this is obtained without notably damaging the precision except in dimension d = 2. In fact, for d larger than 2 the relative loss in precision scales as N^((2-d)/((h+1)d)). Finally, under some conditions we observe that there is a value s* of the penalty coefficient, a free parameter used to fix the number of clusters, which separates a fragmentation phase (for s < s*) from a coalescent phase (for s > s*) of the underlying hidden cluster structure. At this precise point a self-similarity property holds which can be exploited by the hierarchical strategy to actually locate its position, as a result of an exact decimation procedure. From this observation, a strategy based on AP can be defined to find out how many clusters are present in a given data set. PMID:20866473

Furtlehner, Cyril; Sebag, Michèle; Zhang, Xiangliang

2010-06-01

159

Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits a maximum bias of 2.6 x 10^-4 samples in range and 2.2 x 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 x 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. 
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications. PMID:18807190

Viola, Francesco; Coe, Ryan L; Owen, Kevin; Guenther, Drake A; Walker, William F

2008-12-01

160

Linearized stability for a multi-dimensional free boundary problem modelling two-phase tumour growth

NASA Astrophysics Data System (ADS)

This paper is concerned with a multi-dimensional free boundary problem modelling the growth of a tumour with two species of cells: proliferating cells and quiescent cells. This free boundary problem has a unique radial stationary solution. By using the Fourier expansion of functions on unit sphere via spherical harmonics, we establish some decay estimates for the solution of the linearized system of this tumour model at the radial stationary solution, so proving that the radial stationary solution is linearly asymptotically stable when neglecting translations.

Cui, Shangbin

2014-05-01

161

Investigation of multi-dimensional computational models for calculating pollutant transport

A performance study of five numerical solution algorithms for multi-dimensional advection-diffusion prediction on mesoscale grids was made. Test problems include transport of point and distributed sources, and a simulation of a continuous source. In all cases, analytical solutions are available to assess relative accuracy. The particle-in-cell and second-moment algorithms, both of which employ sub-grid resolution coupled with Lagrangian advection, exhibit superior accuracy in modeling a point source release. For modeling of a distributed source, algorithms based upon the pseudospectral and finite element interpolation concepts, exhibit improved accuracy on practical discretizations.

Pepper, D W; Cooper, R E; Baker, A J

1980-01-01

162

NASA Astrophysics Data System (ADS)

In this study, we consider the high dimensional unipolar hydrodynamic model for semiconductors in the form of Euler-Poisson equations. Based on the results that we have obtained in the first part (Huang, et al., 2011 [16]) for the 1-D case, we can further show the stability of planar stationary waves in multi-dimensional case. Utilizing the energy method, we obtain the global existence of the solutions of high dimensional Euler-Poisson equations for the unipolar hydrodynamic model, and prove that the solutions converge to the planar stationary waves time-exponentially.

Huang, Feimin; Mei, Ming; Wang, Yong; Yu, Huimin

163

2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme

NASA Technical Reports Server (NTRS)

A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigenvalue limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1 to 17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.

Wood, William A.; Kleb, William L.

2001-01-01

164

NASA Astrophysics Data System (ADS)

The solution of polarized radiative transfer equation with angle-dependent (AD) partial frequency redistribution (PRD) is a challenging problem. Modeling the observed, linearly polarized strong resonance lines in the solar spectrum often requires the solution of the AD line transfer problems in one-dimensional or multi-dimensional (multi-D) geometries. The purpose of this paper is to develop an understanding of the relative importance of the AD PRD effects and the multi-D transfer effects and particularly their combined influence on the line polarization. This would help in a quantitative analysis of the second solar spectrum (the linearly polarized spectrum of the Sun). We consider both non-magnetic and magnetic media. In this paper we reduce the Stokes vector transfer equation to a simpler form using a Fourier decomposition technique for multi-D media. A fast numerical method is also devised to solve the concerned multi-D transfer problem. The numerical results are presented for a two-dimensional medium with a moderate optical thickness (effectively thin) and are computed for a collisionless frequency redistribution. We show that the AD PRD effects are significant and cannot be ignored in a quantitative fine analysis of the line polarization. These effects are accentuated by the finite dimensionality of the medium (multi-D transfer). The presence of magnetic fields (Hanle effect) modifies the impact of these two effects to a considerable extent.

Anusha, L. S.; Nagendra, K. N.

2012-02-01

165

Multi-dimensional NMR without coherence transfer: Minimizing losses in large systems

Most multi-dimensional solution NMR experiments connect one dimension to another using coherence transfer steps that involve evolution under scalar couplings. While experiments of this type have been a boon to biomolecular NMR the need to work on ever larger systems pushes the limits of these procedures. Spin relaxation during transfer periods for even the most efficient 15N–1H HSQC experiments can result in more than an order of magnitude loss in sensitivity for molecules in the 100 kDa range. A relatively unexploited approach to preventing signal loss is to avoid coherence transfer steps entirely. Here we describe a scheme for multi-dimensional NMR spectroscopy that relies on direct frequency encoding of a second dimension by multi-frequency decoupling during acquisition, a technique that we call MD-DIRECT. A substantial improvement in sensitivity of 15N–1H correlation spectra is illustrated with application to the 21 kDa ADP ribosylation factor (ARF) labeled with 15N in all alanine residues. Operation at 4 °C mimics observation of a 50 kDa protein at 35 °C.

Liu, Yizhou; Prestegard, James H.

2011-01-01

166

A rigorous theoretical investigation has been made of the Zakharov-Kuznetsov (ZK) equation for ion-acoustic (IA) solitary waves (SWs) and their multi-dimensional instability in a magnetized degenerate plasma which consists of inertialess electrons, inertial ions, and negatively and positively charged stationary heavy ions. The ZK equation is derived by the reductive perturbation method, and the multi-dimensional instability of these solitary structures is studied by the small-k (long-wavelength plane wave) perturbation expansion technique. The effects of the external magnetic field are found to significantly modify the basic properties of small but finite-amplitude IA SWs. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable IA SWs. The basic features (viz., amplitude, width, instability, etc.) and the underlying physics of the IA SWs, which are relevant to space and laboratory plasma situations, are briefly discussed.

Haider, M. M. [Department of Physics, Mawlana Bhashani Science and Technology University, Santosh, Tangail 1902 (Bangladesh); Mamun, A. A. [Department of Physics, Jahangirnagar University, Savar, Dhaka 1342 (Bangladesh)

2012-10-15

167

On the Global Existence and Stability of a Multi-Dimensional Supersonic Conic Shock Wave

NASA Astrophysics Data System (ADS)

We establish the global existence and stability of a three-dimensional supersonic conic shock wave for a compactly perturbed steady supersonic flow past an infinitely long circular cone with a sharp angle. The flow is described by a 3-D steady potential equation, which is multi-dimensional, quasilinear, and hyperbolic with respect to the supersonic direction. Making use of the geometric properties of the pointed shock surface together with the Rankine-Hugoniot conditions on the conic shock surface and the boundary condition on the surface of the cone, we obtain a global uniform weighted energy estimate for the nonlinear problem by finding an appropriate multiplier and establishing a new Hardy-type inequality on the shock surface. Based on this, we prove that a multi-dimensional conic shock attached to the vertex of the cone exists globally when the Mach number of the incoming supersonic flow is sufficiently large. Moreover, the asymptotic behavior of the 3-D supersonic conic shock solution, which is shown to approach the corresponding background shock solution in the downstream domain for the uniform supersonic constant flow past the sharp cone, is also explicitly given.

Li, Jun; Witt, Ingo; Yin, Huicheng

2014-07-01

168

The U.S. Geological Survey Multi-dimensional Surface Water Modeling System

NASA Astrophysics Data System (ADS)

The U.S. Geological Survey (USGS) Multi-Dimensional Surface Water Modeling System is a generic Graphical User Interface (GUI) for computational models of flow and transport in channels. The modeling system is intended to provide the operations program of the USGS the tools necessary to study and evaluate surface water issues including: TMDLs, water rights, channel restoration and habitat assessment. The GUI is a standard graphical modeling interface that provides the user with interactive graphical tools for grid generation, managing model-specific attributes and boundary conditions and visualization of results. We use an OpenGL based graphics package to render high-level visualizations both on and off screen. The GUI is generic in the sense that it prescribes a fixed input and output data structure that is sufficiently general to be used by any model of flow or transport from 1-dimensional to multi-dimensional. The generic data structure allows easy incorporation of additional models into the framework. We will present progress on the modeling system to date and discuss future directions and goals.

McDonald, R. R.; Bennett, J. P.; Nelson, J. M.

2001-05-01

169

Scaling, dimensional analysis, and indentation measurements

We provide an overview of the basic concepts of scaling and dimensional analysis, followed by a review of some of the recent work on applying these concepts to modeling instrumented indentation measurements. Specifically, we examine conical and pyramidal indentation in elastic–plastic solids with power-law work-hardening, in power-law creep solids, and in linear viscoelastic materials. We show that the scaling approach

Yang-Tse Cheng; Che-Min Cheng

2004-01-01

170

Scale-Specific Multifractal Medical Image Analysis

Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value.
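The integral-image speedup for box counting mentioned above can be sketched as follows; the scale-dependent dimension is then the least-squares slope of log N(s) against log(1/s) over a chosen range of box sizes (hypothetical names, not the study's code):

```python
import math

def integral_image(img):
    """Summed-area table: I[y][x] = sum of img over rows < y and cols < x."""
    h, w = len(img), len(img[0])
    I = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            I[y + 1][x + 1] = (img[y][x] + I[y][x + 1]
                               + I[y + 1][x] - I[y][x])
    return I

def box_count(img, size):
    """Number of size x size boxes containing at least one foreground pixel;
    each box sum costs four table lookups instead of size*size pixel reads."""
    I = integral_image(img)
    h, w = len(img), len(img[0])
    n = 0
    for y in range(0, h, size):
        for x in range(0, w, size):
            y1, x1 = min(y + size, h), min(x + size, w)
            if I[y1][x1] - I[y][x1] - I[y1][x] + I[y][x] > 0:
                n += 1
    return n

def fractal_dimension(img, sizes):
    """Least-squares slope of log N(s) vs log(1/s) over the given scale
    range: the scale-dependent box-counting dimension discussed above."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(img, s)) for s in sizes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
            / sum((a - mx) ** 2 for a in xs))
```

A filled square recovers dimension 2 and a single straight line recovers dimension 1, the two Euclidean sanity checks for any box-counting implementation.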

Braverman, Boris

2013-01-01

171

Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

NASA Astrophysics Data System (ADS)

The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which only provides non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. 
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

Liu, Jiwen; Tiwari, Surendra N.

1994-10-01

172

Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations

NASA Technical Reports Server (NTRS)

The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between the transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which only provides non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. 
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.

Liu, Jiwen; Tiwari, Surendra N.

1994-01-01

173

A multi-dimensional cell-vertex upwind discretization technique for the Navier-Stokes equations on unstructured grids is presented. The grids are composed of linear triangles in two and linear tetrahedra in three space dimensions. The nonlinear upwind schemes for the inviscid part can be viewed as a multi-dimensional generalization of the Roe scheme, but also as a special class of Petrov-Galerkin schemes. They share

E. van der Weide; H. Deconinck; E. Issman; G. Degrez

1999-01-01

174

Measurement of Low Level Explosives Reaction in Gauged Multi-dimensional Steven Impact Tests

NASA Astrophysics Data System (ADS)

The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 µs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosive discs and a 0.60 kb peak shock in an LX-04 disc. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). These modeling results are compared to experimental data.

Niles, A. M.; Garcia, F.; Greenwood, D. W.; Forbes, J. W.; Tarver, C. M.; Chidester, S. K.; Garza, R. G.; Swizter, L. L.

2002-07-01

175

Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests

NASA Astrophysics Data System (ADS)

The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 350 ms after projectile impact, creating 0.5-0.6 kb peak shocks centered in PBX 9501 explosive discs. Steven Test calculations based on ignition and growth criteria predict low level reactions occurring at 335 ms, which agrees well with experimental data. Additional gauged experiments simulating the Steven Test have been performed and will be discussed. * This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory, under contract No. W-7405-Eng-48.

Niles, A. M.; Forbes, J. W.; Tarver, C. M.; Chidester, S. K.; Garcia, F.; Greenwood, D. W.; Garza, R. G.

2001-06-01

176

Racial-ethnic self-schemas: Multi-dimensional identity-based motivation

Prior self-schema research focuses on the benefits of being schematic vs. aschematic in stereotyped domains. The current studies build on this work, examining racial-ethnic self-schemas as multi-dimensional, containing multiple, conflicting, and non-integrated images. A multi-dimensional perspective captures this complexity; examining net effects of the dimensions predicts within-group differences in academic engagement and well-being. When racial-ethnic self-schemas focus attention on membership in both the in-group and broader society, engagement with school should increase, since school is not seen as out-group defining. When racial-ethnic self-schemas focus attention on inclusion (not obstacles to inclusion) in broader society, the risk of depressive symptoms should decrease. Support for these hypotheses was found in two separate samples (8th graders, n = 213; 9th graders followed to 12th grade, n = 141).

Oyserman, Daphna

2008-01-01

177

Improved radial basis function methods for multi-dimensional option pricing

NASA Astrophysics Data System (ADS)

In this paper, we have derived a radial basis function (RBF) based method for the pricing of financial contracts by solving the Black-Scholes partial differential equation. As an example of a financial contract that can be priced with this method we have chosen the multi-dimensional European basket call option. We have shown numerically that our scheme is second-order accurate in time and spectrally accurate in space for constant shape parameter. For other non-optimal choices of shape parameter values, the resulting convergence rate is algebraic. We propose an adapted node point placement that improves the accuracy compared with a uniform distribution. Compared with an adaptive finite difference method, the RBF method is 20-40 times faster in one and two space dimensions and has approximately the same memory requirements.
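As a hedged illustration of the interpolation machinery that underlies such RBF methods, the sketch below performs plain 1-D Gaussian RBF interpolation of a smooth function; it is not the paper's Black-Scholes solver, and all names and parameter values are invented, but it shows the role of the shape parameter `eps` the abstract's convergence results depend on.

```python
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, eps):
    """Interpolate scattered data with Gaussian radial basis functions
    phi(r) = exp(-(eps*r)^2); eps is the shape parameter.  Solves the
    symmetric collocation system for the RBF weights, then evaluates
    the interpolant at x_eval."""
    A = np.exp(-(eps * (x_nodes[:, None] - x_nodes[None, :]))**2)
    w = np.linalg.solve(A, f_nodes)                      # RBF weights
    B = np.exp(-(eps * (x_eval[:, None] - x_nodes[None, :]))**2)
    return B @ w

# Interpolate a smooth test function from 15 nodes and measure the error.
x = np.linspace(0.0, 1.0, 15)
xe = np.linspace(0.0, 1.0, 200)
f = lambda s: np.exp(-s) * np.sin(3.0 * s)
err = float(np.max(np.abs(rbf_interpolate(x, f(x), xe, eps=8.0) - f(xe))))
```

Smaller `eps` (flatter basis functions) tends to improve accuracy for smooth data but worsens the conditioning of the collocation matrix, which is the trade-off behind the paper's "optimal" versus "non-optimal" shape-parameter choices.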

Pettersson, Ulrika; Larsson, Elisabeth; Marcusson, Gunnar; Persson, Jonas

2008-12-01

178

A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
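The dimension-by-dimension routing described above can be sketched as follows. This is a hypothetical simplification (a plain grid with no torus wrap-around and no subrectangle edges, so "reaching the predefined coordinate" reduces to coordinate equality), not the patented implementation.

```python
def route_to_root(start, root, dims):
    """Dimension-ordered routing sketch: move the packet one hop at a
    time along dimension 0 until its coordinate matches the root's in
    that dimension, then repeat for each further dimension until the
    root node is reached."""
    path = [tuple(start)]
    pos = list(start)
    for d in range(dims):
        while pos[d] != root[d]:
            pos[d] += 1 if root[d] > pos[d] else -1   # one hop along dim d
            path.append(tuple(pos))
    return path

# Route a packet from (3, 1, 0) to the root node at (0, 4, 2).
path = route_to_root((3, 1, 0), (0, 4, 2), dims=3)
```

Each hop changes exactly one coordinate by one, so the path length equals the Manhattan distance between source and root.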

Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip

2013-01-29

179

High-Order Central WENO Schemes for Multi-Dimensional Hamilton-Jacobi Equations

NASA Technical Reports Server (NTRS)

We present new third- and fifth-order Godunov-type central schemes for approximating solutions of the Hamilton-Jacobi (HJ) equation in an arbitrary number of space dimensions. These are the first central schemes for approximating solutions of the HJ equations with an order of accuracy that is greater than two. In two space dimensions we present two versions for the third-order scheme: one scheme that is based on a genuinely two-dimensional Central WENO reconstruction, and another scheme that is based on a simpler dimension-by-dimension reconstruction. The simpler dimension-by-dimension variant is then extended to a multi-dimensional fifth-order scheme. Our numerical examples in one, two and three space dimensions verify the expected order of accuracy of the schemes.
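A first-order monotone Lax-Friedrichs scheme is the baseline that such higher-order central schemes improve upon. The 1-D sketch below is only that baseline, under assumed periodic boundaries and a Burgers-type Hamiltonian; the function and parameter names are invented for illustration.

```python
import numpy as np

def lax_friedrichs_hj(u0, H, alpha, dx, dt, steps):
    """First-order monotone Lax-Friedrichs scheme for the 1-D
    Hamilton-Jacobi equation u_t + H(u_x) = 0 on a periodic grid.
    alpha must bound |H'(u_x)| over the slopes that occur, and the
    CFL condition dt*alpha/dx <= 1 keeps the scheme monotone."""
    u = u0.copy()
    for _ in range(steps):
        up = (np.roll(u, -1) - u) / dx      # forward slope  u_x^+
        um = (u - np.roll(u, 1)) / dx       # backward slope u_x^-
        # central numerical Hamiltonian with Lax-Friedrichs dissipation
        u = u - dt * (H(0.5 * (up + um)) - 0.5 * alpha * (up - um))
    return u

# Burgers-type Hamiltonian H(p) = p^2/2 with smooth initial data.
n = 200
dx = 2.0 * np.pi / n
x = dx * np.arange(n)
u = lax_friedrichs_hj(np.sin(x), lambda p: 0.5 * p**2,
                      alpha=1.5, dx=dx, dt=0.5 * dx, steps=50)
```

The central WENO schemes in the paper replace the simple slope averages above with high-order reconstructions while keeping the same staggered-central, Riemann-solver-free structure.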

Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

2002-01-01

180

Evaluation of multi-dimensional flux models for radiative transfer in combustion chambers: A review

NASA Astrophysics Data System (ADS)

In recent years, flux methods have been widely employed as alternative, albeit intrinsically less accurate, procedures to the zone or Monte Carlo methods in complete prediction procedures. Flux models of radiation fields take the form of partial differential equations, which can conveniently and economically be solved simultaneously with the equations representing flow and reaction. The flux models are usually tested and evaluated from the point of view of predictive accuracy by comparing their predictions with "exact" values produced using the zone or Monte Carlo models. Evaluations of various multi-dimensional flux-type models, such as De Marco and Lockwood, Discrete-Ordinate, Schuster-Schwarzschild and moment, are reviewed from the points of view of both accuracy and computational economy. A six-flux model of the Schuster-Schwarzschild type, with angular subdivisions related to the enclosure geometry, is recommended for incorporation into existing procedures for complete mathematical modelling of rectangular combustion chambers.

Selcuk, N.

1984-01-01

181

X-ray Signatures of AGN Outflows: Multi-Dimensional Radiative Transfer Simulations

NASA Astrophysics Data System (ADS)

Outflows have been proposed to explain a variety of features in the X-ray and UV spectra of active galactic nuclei. The origin and launching mechanism for such outflows remain unclear, but they may be associated with winds blown off the accretion flow/accretion disc. In general, such winds are not spherically symmetric and they can affect observed spectra in a variety of ways that depend on the observer's inclination. We summarize results of theoretical multi-dimensional radiative transfer simulations that provide detailed synthetic spectra for plausible wind geometries. These both illustrate the range of spectral signatures that winds can produce and help guide the interpretation of observations in the context of wind models. We discuss both absorption and broad emission features, and highlight the importance of scattered radiation in the spectrum.

Sim, S. A.; Long, K. S.; Proga, D.; Miller, L.; Turner, T. J.; Reeves, J. N.

2012-08-01

182

Since our last comprehensive review on multi-dimensional mass spectrometry-based shotgun lipidomics (Mass Spectrom. Rev. 24 (2005), 367), many new developments in the field of lipidomics have occurred. These developments include new strategies and refinements for shotgun lipidomic approaches that use direct infusion, including novel fragmentation strategies, identification of multiple new informative dimensions for mass spectrometric interrogation, and the development of new bioinformatic approaches for enhanced identification and quantitation of the individual molecular constituents that comprise each cell’s lipidome. Concurrently, advances in liquid chromatography-based platforms and novel strategies for quantitative matrix-assisted laser desorption/ionization mass spectrometry for lipidomic analyses have been developed. Through the synergistic use of this repertoire of new mass spectrometric approaches, the power and scope of lipidomics has been greatly expanded to accelerate progress toward the comprehensive understanding of the pleiotropic roles of lipids in biological systems.

Han, Xianlin; Yang, Kui; Gross, Richard W.

2011-01-01

183

NASA Astrophysics Data System (ADS)

The Hirota method is applied to construct exact analytical solitary wave solutions of the system of multi-dimensional nonlinear wave equations for an n-component vector with modified background. The nonlinear part is a third-order polynomial, determined by three distinct constant vectors. These solutions have not previously been obtained by any analytic technique. The bilinear representation is derived by extracting one of the vector roots (unstable in general), which makes it possible to reduce the cubic nonlinearity to a quadratic one. The transition between the other two, stable, roots gives a vector shock solitary wave solution. In our approach, the velocity of the solitary wave is fixed by truncating the Hirota perturbation expansion, and it is found in terms of all three roots. Simulations of solutions for the one-component, one-dimensional case are also illustrated.

Tanoğlu, Gamze

2007-10-01

184

A dynamic nuclear polarization strategy for multi-dimensional Earth's field NMR spectroscopy.

Dynamic nuclear polarization (DNP) is introduced as a powerful tool for polarization enhancement in multi-dimensional Earth's field NMR spectroscopy. Maximum polarization enhancements, relative to thermal equilibrium in the Earth's magnetic field, are calculated theoretically and compared to the more traditional prepolarization approach for NMR sensitivity enhancement at ultra-low fields. Signal enhancement factors on the order of 3000 are demonstrated experimentally using DNP with the nitroxide free radical TEMPO, whose unpaired electron is strongly coupled to a neighboring ¹⁴N nucleus via the hyperfine interaction. A high-quality 2D ¹⁹F-¹H COSY spectrum acquired in the Earth's magnetic field with DNP enhancement is presented and compared to simulation. PMID:18926746

Halse, Meghan E; Callaghan, Paul T

2008-12-01

185

Ionizing shocks in argon. Part II: Transient and multi-dimensional effects

We extend the computations of ionizing shocks in argon to unsteady and multi-dimensional cases, using a collisional-radiative model and a single-fluid, two-temperature formulation of the conservation equations. It is shown that the fluctuations of the shock structure observed in shock-tube experiments can be reproduced by the numerical simulations and explained on the basis of the coupling of the nonlinear kinetics of the collisional-radiative model with wave propagation within the induction zone. The mechanism is analogous to instabilities of detonation waves and also produces a cellular structure commonly observed in gaseous detonations. We suggest that detailed simulations of such unsteady phenomena can yield further information for the validation of nonequilibrium kinetics.

Kapper, M. G.; Cambier, J.-L. [Air Force Research Laboratory, Edwards AFB, CA 93524 (United States)

2011-06-01

186

RF pulse schemes for the simultaneous acquisition of heteronuclear multi-dimensional chemical shift correlation spectra, such as {HA(CA)NH & HA(CACO)NH}, {HA(CA)NH & H(N)CAHA} and {H(N)CAHA & H(CC)NH}, that are commonly employed in the study of moderately sized protein molecules, have been implemented using dual sequential ¹H acquisitions in the direct dimension. Such an approach is not only beneficial in terms of the reduction of experimental time as compared to data collection via two separate experiments, but also facilitates the unambiguous sequential linking of the backbone amino acid residues. The potential of the sequential ¹H data acquisition procedure in the study of RNA is also demonstrated here.

Bellstedt, Peter; Ihle, Yvonne; Wiedemann, Christoph; Kirschstein, Anika; Herbst, Christian; Gorlach, Matthias; Ramachandran, Ramadurai

2014-01-01

187

Multi-dimensional optical and laser-based diagnostics of low-temperature ionized plasma discharges

NASA Astrophysics Data System (ADS)

A review of work centered on the utilization of multi-dimensional optical diagnostics to study phenomena arising in radiofrequency plasma discharges is given. The diagnostics range from passive techniques such as optical emission to more active techniques utilizing nanosecond lasers capable of both high temporal and spatial resolution. In this review, emphasis is placed on observations that would have been more difficult, if not impossible, to make without the use of such diagnostic techniques. Examples include the sheath structure around an electrode consisting of two different metals, double layers that arise in magnetized hydrogen discharges, or a large region of depleted argon 1s4 levels around a biased probe in an rf discharge.

Barnat, E. V.

2011-10-01

188

High-Level Waste Tanks Multi-Dimensional Contaminant Transport Model Development

A suite of multi-dimensional computer models was developed to analyze the transport of residual contamination from high-level waste tanks through the subsurface to seeplines. Cases analyzed ranged from all the tanks in the F- and H-tank farms, for an overall look; to the Tank 17-20 4-pack, to study plume interaction; to individual tanks, such as Tanks 17 and 20, for comparison with one-dimensional modeling. The main purpose of this work was to develop and test the models, so only two relatively conservative contaminants were examined, Tc-99 and I-129. More complex analyses, such as solubility-limited species and radionuclides that head a decay chain, were not addressed in this study.

Collard, L.B.

1999-11-15

189

Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae

We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (≲40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10⁵¹ ergs. Properties of off-axis hypernovae are also discussed briefly.

Tanaka, Masaomi [Department of Astronomy, Graduate School of Science, University of Tokyo, Tokyo (Japan); Maeda, Keiichi [Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa (Japan); Max-Planck-Institut fuer Astrophysik, Garching (Germany); Mazzali, Paolo A. [Max-Planck-Institut fuer Astrophysik, Garching bei Muenchen (Germany); Istituto Nazionale di Astrofisica, OATs, Trieste (Italy); Nomoto, Ken'ichi [Department of Astronomy, Graduate School of Science, University of Tokyo, Tokyo (Japan); Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa (Japan)

2008-05-21

190

Background: The authors present a procedural extension of the popular Implicit Association Test (IAT; [1]) that allows for indirect measurement of attitudes on multiple dimensions (e.g., safe–unsafe; young–old; innovative–conventional, etc.) rather than on a single evaluative dimension only (e.g., good–bad). Methodology/Principal Findings: In two within-subjects studies, attitudes toward three automobile brands were measured on six attribute dimensions. Emphasis was placed on evaluating the methodological appropriateness of the new procedure, providing strong evidence for its reliability, validity, and sensitivity. Conclusions/Significance: This new procedure yields detailed information on the multifaceted nature of brand associations that can add up to a more abstract overall attitude. Just as the IAT, its multi-dimensional extension/application (dubbed md-IAT) is suited for reliably measuring attitudes consumers may not be consciously aware of, able to express, or willing to share with the researcher [2], [3].

Gattol, Valentin; Saaksjarvi, Maria; Carbon, Claus-Christian

2011-01-01

191

Quantitative local structure refinement from XANES: multi-dimensional interpolation approach.

A new method to determine local structure in terms of a few structural parameters is proposed and realised in the FitIt software. It is based on fitting of X-ray absorption near-edge structure (XANES) spectra using a combination of full multiple-scattering calculations and multi-dimensional interpolation of spectra as a function of structural parameters. The procedure is divided into two steps: the construction of an interpolation polynomial, and the fitting of experimental spectra using the interpolation polynomial. During the construction of the polynomial, multiple-scattering calculations for certain sets of structural parameters are needed; a strategy for selecting the most important expansion terms and the corresponding sets of structural parameters is proposed. Fitting of the spectrum using multi-dimensional interpolation is very fast (a few seconds) because multiple-scattering calculations are unnecessary during this step. This approach also allows the development of a visual interface in which the spectrum corresponding to any set of structural parameters can be seen immediately. Thus, using a very limited number of multiple-scattering calculations, which are the most time-consuming step, it is possible to fit XANES. The interpolation polynomial construction procedure is demonstrated for three model molecules: FeS4, FeO6 and Ni(CN)4. An additional test has been performed for the last, most complex example to check the assumption that a minimum of the discrepancy between theoretical and experimental spectra corresponds only to the correct structure of the complex. A comparison with another XANES fitting software package, MXAN, is given. PMID:16371705
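The two-step procedure above can be sketched with a one-parameter toy problem. The "expensive" spectrum below is a stand-in Gaussian, not a real multiple-scattering calculation, and all names and parameter ranges are invented for illustration.

```python
import numpy as np

def expensive_spectrum(r, energy):
    """Stand-in for a full multiple-scattering XANES calculation with a
    single structural parameter r (a bond length, say): a Gaussian peak
    whose position depends on r.  Purely illustrative."""
    return np.exp(-((energy - 10.0 * r) / 2.0)**2)

energy = np.linspace(0.0, 40.0, 200)

# Step 1: run the "expensive" calculation at a few parameter values...
r_nodes = np.linspace(1.8, 2.4, 7)
samples = np.stack([expensive_spectrum(r, energy) for r in r_nodes])
# ...and build an interpolating polynomial in r at every energy point.
coeffs = np.polynomial.polynomial.polyfit(r_nodes, samples, deg=6)

def surrogate(r):
    """Cheap interpolated spectrum: no new multiple-scattering run."""
    return np.polynomial.polynomial.polyval(r, coeffs)

# Step 2: fit the "experimental" spectrum by scanning the fast surrogate.
target = expensive_spectrum(2.13, energy)          # pretend measurement
r_grid = np.linspace(1.8, 2.4, 601)
misfit = [float(np.sum((surrogate(r) - target)**2)) for r in r_grid]
r_best = float(r_grid[int(np.argmin(misfit))])
```

The scan over 601 candidate parameter values costs only polynomial evaluations, which is why, as the abstract notes, the fit itself takes seconds even though each underlying simulation is expensive.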

Smolentsev, Grigory; Soldatov, Alexander

2006-01-01

192

Multi-dimensional multi-species modeling of transient electrodeposition in LIGA microfabrication.

This report documents the efforts and accomplishments of the LIGA electrodeposition modeling project which was headed by the ASCI Materials and Physics Modeling Program. A multi-dimensional framework based on GOMA was developed for modeling time-dependent diffusion and migration of multiple charged species in a dilute electrolyte solution with reduction electro-chemical reactions on moving deposition surfaces. By combining the species mass conservation equations with the electroneutrality constraint, a Poisson equation that explicitly describes the electrolyte potential was derived. The set of coupled, nonlinear equations governing species transport, electric potential, velocity, hydrodynamic pressure, and mesh motion were solved in GOMA, using the finite-element method and a fully-coupled implicit solution scheme via Newton's method. By treating the finite-element mesh as a pseudo solid with an arbitrary Lagrangian-Eulerian formulation and by repeatedly performing re-meshing with CUBIT and re-mapping with MAPVAR, the moving deposition surfaces were tracked explicitly from start of deposition until the trenches were filled with metal, thus enabling the computation of local current densities that potentially influence the microstructure and frictional/mechanical properties of the deposit. The multi-dimensional, multi-species, transient computational framework was demonstrated in case studies of two-dimensional nickel electrodeposition in single and multiple trenches, without and with bath stirring or forced flow. Effects of buoyancy-induced convection on deposition were also investigated. To further illustrate its utility, the framework was employed to simulate deposition in microscreen-based LIGA molds. Lastly, future needs for modeling LIGA electrodeposition are discussed.

Evans, Gregory Herbert (Sandia National Laboratories, Livermore, CA); Chen, Ken Shuang

2004-06-01

193

NASA Astrophysics Data System (ADS)

The term 'Convected Scheme' (CS) refers to a family of algorithms, most usually applied to the solution of Boltzmann's equation, which uses a method of characteristics in an integral form to project an initial cell forward to a group of final cells. As such the CS is a 'forward-trajectory' semi-Lagrangian scheme. For multi-dimensional simulations of neutral gas flows, the cell-centered version of this semi-Lagrangian (CCSL) scheme has advantages over other options due to its implementation simplicity, low memory requirements, and easier treatment of boundary conditions. The main drawback of the CCSL-CS to date has been its high numerical diffusion in physical space, because of the 2nd order remapping that takes place at the end of each time step. By means of a modified equation analysis, it is shown that a high-order estimate of the remapping error can be obtained a priori, and a small correction to the final position of the cells can be applied upon remapping, in order to achieve full compensation of this error. The resulting scheme is 4th order accurate in space while retaining the desirable properties of the CS: it is conservative and positivity-preserving, and the overall algorithm complexity is not appreciably increased. Two monotone (i.e. non-oscillating) versions of the fourth order CCSL-CS are also presented: one uses a common flux-limiter approach; the other uses a non-polynomial reconstruction to evaluate the derivatives of the density function. The method is illustrated in simple one- and two-dimensional examples, and in a fully 3D solution of the Boltzmann equation describing expansion of a gas into vacuum through a cylindrical tube.
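The forward-projection-plus-remapping idea can be sketched for 1-D constant-velocity advection. This bare version still has the remapping diffusion that the paper's correction removes; the function name and parameters are hypothetical, and a periodic uniform grid is assumed.

```python
import numpy as np

def ccsl_advect(rho, v, dx, dt, steps):
    """Forward-trajectory semi-Lagrangian sketch of a cell-centered
    Convected Scheme for 1-D constant-velocity advection: each cell's
    content is projected v*dt downstream and split linearly between
    the two cells it overlaps (the remapping step).  Conservative and
    positivity-preserving by construction, since every new cell value
    is a convex combination of old ones."""
    shift = v * dt / dx                  # displacement in cell widths
    k = int(np.floor(shift))             # whole-cell part of the shift
    f = shift - k                        # fractional overlap with cell k+1
    for _ in range(steps):
        rho = (1.0 - f) * np.roll(rho, k) + f * np.roll(rho, k + 1)
    return rho

n = 128
dx = 1.0 / n
x = dx * (np.arange(n) + 0.5)
rho0 = np.exp(-200.0 * (x - 0.3)**2)     # initial Gaussian pulse
rho1 = ccsl_advect(rho0, v=1.0, dx=dx, dt=0.4 * dx, steps=100)
```

Mass is conserved exactly and no new extrema appear, but the pulse peak decays step by step; that decay is precisely the remapping diffusion the abstract's a-priori error correction is designed to cancel.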

Güçlü, Y.; Hitchon, W. N. G.

2012-04-01

194

Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10⁻⁴ samples in range and 2.2 × 10⁻³ samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10⁻³ samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz.
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications.
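A 1-D sketch of the spline-based estimation idea follows, with a SciPy spline and numerical minimization standing in for MUSE's analytical matching function; the signal, names, and search range are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize_scalar

def spline_delay(ref, delayed, search=(-2.0, 2.0)):
    """Sub-sample time-delay estimation in the spirit of MUSE's 1-D
    ancestor: fit a cubic spline to the reference signal, then find the
    continuous shift tau minimizing the sum of squared differences
    against the delayed signal.  Only interior samples are used, so the
    spline is never extrapolated."""
    n = ref.size
    spline = CubicSpline(np.arange(n), ref)
    idx = np.arange(5, n - 5)
    ssd = lambda tau: float(np.sum((spline(idx - tau) - delayed[idx])**2))
    return minimize_scalar(ssd, bounds=search, method="bounded").x

# Synthetic check: a smooth pulse delayed by 0.37 samples.
t = np.arange(64, dtype=float)
pulse = lambda s: np.exp(-0.05 * (s - 32.0)**2) * np.cos(0.8 * (s - 32.0))
tau_hat = float(spline_delay(pulse(t), pulse(t - 0.37)))
```

Because the spline gives a continuous model of the sampled signal, the estimate is not restricted to integer sample shifts, which is the property MUSE extends to multi-dimensional displacement fields.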

Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.

2008-01-01

195

Copper plays an important role in numerous biological processes across all living systems, predominantly because of its versatile redox behavior. Cellular copper homeostasis is tightly regulated, and disturbances lead to severe disorders such as Wilson disease (WD) and Menkes disease. Age-related changes of copper metabolism have been implicated in other neurodegenerative disorders such as Alzheimer’s disease (AD). The role of copper in these diseases has been the topic of mostly bioinorganic research efforts for more than a decade; metal-protein interactions have been characterized and cellular copper pathways have been described. Despite these efforts, crucial aspects of how copper is associated with AD, for example, are still only poorly understood. To take metal-related disease research to the next level, emerging multi-dimensional imaging techniques are now revealing the copper metallome as the basis for better understanding disease mechanisms. This review describes how recent advances in X-ray fluorescence microscopy and fluorescent copper probes have started to contribute to this field, specifically to WD and AD. It furthermore provides an overview of current developments and future applications of X-ray microscopic methodologies.

Vogt, Stefan; Ralle, Martina

2012-01-01

196

Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

NASA Technical Reports Server (NTRS)

We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

Biegel, Bryan A.; Ancona, Mario G.; Rafferty, Conor S.; Yu, Zhiping

2000-01-01

197

Efficient Multi-Dimensional Simulation of Quantum Confinement Effects in Advanced MOS Devices

NASA Technical Reports Server (NTRS)

We investigate the density-gradient (DG) transport model for efficient multi-dimensional simulation of quantum confinement effects in advanced MOS devices. The formulation of the DG model is described as a quantum correction to the classical drift-diffusion model. Quantum confinement effects are shown to be significant in sub-100nm MOSFETs. In thin-oxide MOS capacitors, quantum effects may reduce gate capacitance by 25% or more. As a result, the inclusion of quantum effects in simulations dramatically improves the match between C-V simulations and measurements for oxide thickness down to 2 nm. Significant quantum corrections also occur in the I-V characteristics of short-channel (30 to 100 nm) n-MOSFETs, with current drive reduced by up to 70%. This effect is shown to result from reduced inversion charge due to quantum confinement of electrons in the channel. Also, subthreshold slope is degraded by 15 to 20 mV/decade with the inclusion of quantum effects via the density-gradient model, and short channel effects (in particular, drain-induced barrier lowering) are noticeably increased.

Biegel, Bryan A.; Rafferty, Conor S.; Ancona, Mario G.; Yu, Zhi-Ping

2000-01-01

198

Atherosclerosis results from inflammatory processes involving biomarkers such as lipid profile, haemoglobin A1C, oxidative stress, coronary artery calcium score, and flow-mediated endothelial response through nitric oxide. This paper proposes a health status coefficient, which combines molecular and clinical measurements concerning atherosclerosis to provide a measure of arterial health. An arterial health status map is produced to map the multi-dimensional measurements to the health status coefficient. The mapping is modeled by a fuzzy system embedded with health-domain expert knowledge. The measurements obtained from the pilot study are used to tune the fuzzy system. The inferred arterial health coefficients are stored in the data cubes of a multi-dimensional database. Owing to the adaptability and transparency of the fuzzy system, the health status map can easily be updated when the fuzzy rule base needs refinement or new measurements are obtained. PMID:21347120

Chan, Lawrence W C; Benzie, Iris F F; Lau, Thomas Y H; Zheng, Yongping; Wong, Alex K S; Liu, Y; Chan, Phoebe S T

2008-01-01

199

Atherosclerosis results from inflammatory processes involving biomarkers such as lipid profile, haemoglobin A1C, oxidative stress, coronary artery calcium score, and flow-mediated endothelial response through nitric oxide. This paper proposes a health status coefficient, which combines molecular and clinical measurements concerning atherosclerosis to provide a measure of arterial health. An arterial health status map is produced to map the multi-dimensional measurements to the health status coefficient. The mapping is modeled by a fuzzy system embedded with health-domain expert knowledge. The measurements obtained from the pilot study are used to tune the fuzzy system. The inferred arterial health coefficients are stored in the data cubes of a multi-dimensional database. Owing to the adaptability and transparency of the fuzzy system, the health status map can easily be updated when the fuzzy rule base needs refinement or new measurements are obtained.
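A minimal Mamdani-style sketch of such a fuzzy mapping follows. The membership shapes, rules, thresholds, and output values below are invented for illustration only; the paper's actual rule base comes from domain experts and is not reproduced here.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def arterial_health(ldl, calcium_score):
    """Two-input, one-output fuzzy sketch: fuzzify the measurements,
    fire two illustrative rules (min for AND), and defuzzify with a
    weighted average to get a health coefficient in [0, 1].  Units and
    ranges (mg/dL, Agatston score) are hypothetical."""
    ldl_high = triangular(ldl, 100.0, 190.0, 280.0)
    ldl_low = triangular(ldl, -40.0, 70.0, 160.0)
    cac_high = triangular(calcium_score, 50.0, 400.0, 800.0)
    cac_low = triangular(calcium_score, -100.0, 0.0, 120.0)
    # Rule 1: low LDL AND low CAC -> healthy (output 1.0)
    # Rule 2: high LDL AND high CAC -> unhealthy (output 0.1)
    w_healthy = min(ldl_low, cac_low)
    w_unhealthy = min(ldl_high, cac_high)
    if w_healthy + w_unhealthy == 0.0:
        return 0.5                      # no rule fires: neutral coefficient
    return (1.0 * w_healthy + 0.1 * w_unhealthy) / (w_healthy + w_unhealthy)

good = arterial_health(ldl=80.0, calcium_score=10.0)
bad = arterial_health(ldl=200.0, calcium_score=500.0)
```

The transparency the abstract mentions is visible here: tuning the map means editing readable membership ranges and rules rather than retraining an opaque model.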

Chan, Lawrence W.C.; Benzie, Iris F.F.; Lau, Thomas Y.H.; Zheng, Yongping; Wong, Alex K.S.; Liu, Y.; Chan, Phoebe S.T.

2008-01-01

200

We develop a family of Eulerian-Lagrangian localized adjoint methods for the solution of the initial-boundary value problems for first-order advection-reaction equations on general multi-dimensional domains. Different tracking algorithms, including the Euler and Runge-Kutta algorithms, are used. The derived schemes naturally incorporate inflow boundary conditions into their formulations and do not need any artificial outflow boundary condition. They are fully mass conservative. Moreover, they...
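The Euler and Runge-Kutta characteristic-tracking options mentioned above can be sketched for x'(t) = v(x, t). The function names are hypothetical, and the test velocity field v(x, t) = x is chosen because its characteristic is exactly x(t) = x0·e^t.

```python
import math

def track_euler(x0, v, t0, t1, steps):
    """Track a characteristic of x'(t) = v(x, t) with forward Euler,
    the simplest of the tracking algorithms."""
    x, t = x0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        x += h * v(x, t)
        t += h
    return x

def track_rk4(x0, v, t0, t1, steps):
    """The same characteristic tracked with classical 4th-order
    Runge-Kutta, trading extra velocity evaluations for accuracy."""
    x, t = x0, t0
    h = (t1 - t0) / steps
    for _ in range(steps):
        k1 = v(x, t)
        k2 = v(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = v(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = v(x + h * k3, t + h)
        x += h * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += h
    return x

# Compare both trackers against the exact endpoint e^1.
x_euler = track_euler(1.0, lambda x, t: x, 0.0, 1.0, 100)
x_rk4 = track_rk4(1.0, lambda x, t: x, 0.0, 1.0, 100)
```

In an Eulerian-Lagrangian scheme the tracking error feeds directly into where mass is deposited, so the higher-order tracker pays off when the velocity field varies strongly along characteristics.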

Hong Wang; Richard E. Ewing; Guan Qin; Stephen L. Lyons; Shushuang Man

1998-01-01

201

A high-order multi-dimensional HLL-Riemann solver for non-linear Euler equations

This article presents a numerical model that solves, on unstructured triangular meshes and with a high order of accuracy, a multi-dimensional Riemann problem that appears when solving hyperbolic problems. For this purpose, we use a MUSCL-like procedure in a “cell-vertex” finite-volume framework. In the first part of this procedure, we devise a four-state bi-dimensional HLL solver (HLL-2D). This solver is

G. Capdeville

2011-01-01

202

Multi-dimensional upwind fluctuation splitting scheme with mesh adaption for hypersonic viscous flow

NASA Astrophysics Data System (ADS)

A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. The scalar test cases include advected shear, circular advection, non-linear advection with coalescing shock and expansion fans, and advection-diffusion. For all scalar cases the fluctuation splitting scheme is more accurate, and the primary mechanism for the improved fluctuation splitting performance is shown to be the reduced production of artificial dissipation relative to DMFDSFV. The most significant scalar result is for combined advection-diffusion, where the present fluctuation splitting scheme is able to resolve the physical dissipation from the artificial dissipation on a much coarser mesh than DMFDSFV is able to, allowing order-of-magnitude reductions in solution time. Among the inviscid test cases the converging supersonic streams problem is notable in that the fluctuation splitting scheme exhibits superconvergent third-order spatial accuracy. For the inviscid cases of a supersonic diamond airfoil, supersonic slender cone, and incompressible circular bump the fluctuation splitting drag coefficient errors are typically half the DMFDSFV drag errors. However, for the incompressible inviscid sphere the fluctuation splitting drag error is larger than for DMFDSFV. 
A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. A viscous Mach 17.6 (perfect gas) cylinder case demonstrates solution monotonicity and heat transfer capability with the fluctuation splitting scheme. While fluctuation splitting is recommended over DMFDSFV, the difference in performance between the schemes is not so great as to obsolete DMFDSFV. The second half of the dissertation develops a local, compact, anisotropic unstructured mesh adaption scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. This alignment behavior stands in contrast to the curvature clustering nature of the local, anisotropic unstructured adaption strategy based upon a posteriori error estimation that is used for comparison. The characteristic alignment is most pronounced for linear advection, with reduced improvement seen for the more complex non-linear advection and advection-diffusion cases. The adaption strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization. The system test case for the adaption strategy is a sting mounted capsule at Mach-10 wind tunnel conditions, considered in both two-dimensional and axisymmetric configurations. For this complex flowfield the adaption results are disappointing since feature alignment does not emerge from the local operations. Aggressive adaption is shown to result in a loss of robustness for the solver, particularly in the bow shock/stagnation point interaction region. 
Reducing the adaption strength maintains solution robustness but fails to produce significant improvement in the surface heat transfer predictions.

Wood, William Alfred, III

203

Overview of NASA Multi-Dimensional Stirling Convertor Code Development and Validation Effort

NASA Technical Reports Server (NTRS)

A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this

Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David

2002-01-01

204

ALEGRA-HEDP Multi-Dimensional Simulations of Z-pinch Related Physics

NASA Astrophysics Data System (ADS)

The marriage of experimental diagnostics and computer simulations continues to enhance our understanding of the physics and dynamics associated with current-driven wire arrays. Early models that assumed the formation of an unstable, cylindrical shell of plasma due to wire merger have been replaced with a more complex picture involving wire material ablating non-uniformly along the wires, creating plasma pre-fill interior to the array before the bulk of the array collapses due to magnetic forces. Non-uniform wire ablation leads to wire breakup, which provides a mechanism for some wire material to be left behind as the bulk of the array stagnates onto the pre-fill. Once the bulk of the material has stagnated, electrical current can then shift back to the material left behind and cause it to stagnate onto the already collapsed bulk array mass. These complex effects impact the total radiation output from the wire array, which is very important for the application of that radiation to inertial confinement fusion. A detailed understanding of the formation and evolution of wire array perturbations is needed, especially for those which are three-dimensional in nature. Sandia National Laboratories has developed a multi-physics research code tailored to simulate high energy density physics (HEDP) environments. ALEGRA-HEDP has begun to simulate the evolution of wire arrays and has produced the highest-fidelity, two-dimensional simulations of wire-array precursor ablation to date. Our three-dimensional code capability now provides us with the ability to solve for the magnetic field and current density distribution associated with the wire array and the evolution of three-dimensional effects seen experimentally. The insight obtained from these multi-dimensional simulations of wire arrays will be presented and specific simulations will be compared to experimental data.

Garasi, Christopher J.

2003-10-01

205

NASA Astrophysics Data System (ADS)

In recent decades, the Global Navigation Satellite System (GNSS) has turned into a promising tool for probing the ionosphere. The classical input data for developing Global Ionosphere Maps (GIM) are obtained from dual-frequency GNSS observations. Simultaneous observations of GNSS code or carrier phase at each frequency are used to form a geometry-free linear combination which contains only the ionospheric refraction term and the differential inter-frequency hardware delays. To relate the ionospheric observable to the electron density, a model is used that represents an altitude-dependent distribution of the electron density. This study aims at developing a global multi-dimensional model of the electron density using simulated GNSS observations from about 150 International GNSS Service (IGS) ground stations. Due to the fact that IGS stations are inhomogeneously distributed around the world and the accuracy and reliability of the developed models are considerably lower in areas not well covered with IGS ground stations, the International Reference Ionosphere (IRI) model has been used as a background model. The correction term is estimated by applying a spherical harmonics expansion to the GNSS ionospheric observable. Within this study this observable is related to the electron density using different functions for the bottom-side and top-side ionosphere. The bottom-side ionosphere is represented by an alpha-Chapman function and the top-side ionosphere is represented using the newly proposed Vary-Chap function. [Figure captions: maximum electron density, IRI background model (elec/m3), day 202 - 2010, 0 UT; height of maximum electron density, IRI background model (km), day 202 - 2010, 0 UT]

Alizadeh, M.; Schuh, H.; Schmidt, M. G.

2012-12-01

206

Multi-dimensional impurity transport code by Monte Carlo method including gyro-orbit effects

NASA Astrophysics Data System (ADS)

We are developing a new 3D Monte Carlo transport code 'IMPGYRO' for the analysis of heavy impurities in fusion edge plasmas. The code directly solves the 3D equations of motion for the test impurity ions to take into account their gyro motion. Most of the important processes, such as the multi-step ionization process and Coulomb scattering, etc., are also included in the model. The results for the prompt redeposition rate of tungsten ions agree well with the analytic results. In addition, 2D density profiles for tungsten ions of each charge state in a simple slab geometry have been calculated for given background plasma profiles typical of a detached plasma state. Although the code is still under development, these initial results show that it has potential as a useful tool, not only for the analysis of prompt redeposition very close to the wall, but also for the analysis of larger-scale impurity transport processes.

Hyodo, I.; Hirano, M.; Miyamoto, K.; Hoshino, K.; Hatayama, A.

2003-03-01
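The core idea above, directly integrating the full gyro motion of a test ion rather than only its guiding center, can be illustrated with the standard Boris pusher. This is a generic sketch of that class of integrator, not IMPGYRO's implementation; the field, charge-to-mass ratio, and step sizes are illustrative assumptions.

```python
# Gyro-orbit integration sketch: Boris scheme for an ion in a static
# magnetic field (E = 0). All parameter values are illustrative.
import math

def boris_push(x, v, qm, B, dt, steps):
    """Advance position x and velocity v (3-vectors as lists) with the
    Boris rotation, which conserves |v| exactly in a pure B field."""
    for _ in range(steps):
        t = [0.5 * dt * qm * b for b in B]            # half-step rotation vector
        t2 = sum(c * c for c in t)
        s = [2.0 * c / (1.0 + t2) for c in t]
        vp = [v[0] + (v[1] * t[2] - v[2] * t[1]),     # v' = v + v x t
              v[1] + (v[2] * t[0] - v[0] * t[2]),
              v[2] + (v[0] * t[1] - v[1] * t[0])]
        v = [v[0] + (vp[1] * s[2] - vp[2] * s[1]),    # v_new = v + v' x s
             v[1] + (vp[2] * s[0] - vp[0] * s[2]),
             v[2] + (vp[0] * s[1] - vp[1] * s[0])]
        x = [xi + vi * dt for xi, vi in zip(x, v)]
    return x, v

# Proton-like test particle gyrating in B = (0, 0, 1) T:
qm = 9.58e7                  # charge-to-mass ratio (C/kg), ~proton
v0 = [1.0e5, 0.0, 0.0]       # m/s, perpendicular to B
omega = qm * 1.0             # gyro frequency (rad/s)
r_larmor = v0[0] / omega     # expected gyro radius (~1 mm)
x, v = boris_push([0.0, r_larmor, 0.0], v0, qm, [0.0, 0.0, 1.0],
                  dt=1e-10, steps=100)
speed = math.sqrt(sum(c * c for c in v))
```

The particle stays on a circle of roughly the Larmor radius and its speed is preserved, which is why this family of pushers is suited to resolving prompt redeposition within a gyro radius of the wall.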

207

Stream Cube: An Architecture for Multi-Dimensional Analysis of Data Streams

Real-time surveillance systems, telecommunication systems, and other dynamic environments often generate a tremendous (potentially infinite) volume of stream data: the volume is too huge to be scanned multiple times. Much of this data resides at a rather low level of abstraction, whereas most analysts are interested in relatively high-level dynamic changes (such as trends and outliers). To discover such high-level characteristics, one

Jiawei Han; Yixin Chen; Guozhu Dong; Jian Pei; Benjamin W. Wah; Jianyong Wang; Y. Dora Cai

2005-01-01
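The single-pass, multi-level aggregation the abstract motivates can be sketched very simply: each arriving tuple is rolled up into cells at several granularities as it streams by, so high-level trends can be queried without rescanning the raw data. The granularities, keys, and measures below are illustrative assumptions, not the Stream Cube design itself.

```python
# Toy multi-level stream aggregation: per-minute and per-hour cells
# maintained incrementally over (timestamp, region, value) tuples.
from collections import defaultdict

class StreamCube:
    def __init__(self):
        # (granularity, time bucket, region) -> [running sum, count]
        self.cells = defaultdict(lambda: [0.0, 0])

    def insert(self, ts, region, value):
        """Update every granularity level in one pass over the tuple."""
        for gran, width in (("minute", 60), ("hour", 3600)):
            cell = self.cells[(gran, ts // width, region)]
            cell[0] += value
            cell[1] += 1

    def mean(self, gran, bucket, region):
        s, n = self.cells[(gran, bucket, region)]
        return s / n if n else None

cube = StreamCube()
for ts, val in [(10, 2.0), (70, 4.0), (130, 6.0)]:
    cube.insert(ts, "east", val)
```

Querying the hour-level cell answers a high-level trend question without touching the minute-level detail, mirroring the low-level/high-level abstraction split described above.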

208

A Multi-Dimensional Analysis of Feedback by Tutors and Teacher-Educators to Their Students.

ERIC Educational Resources Information Center

This study analyzed guidance given to student teachers in Flanders, Belgium, by their tutors and teacher-educators, tutors' and teacher educators' written accounts, and teacher-educators' guidance discussions. The study included 895 guidance reports written by 29 tutors, which included 7,589 remarks. There were also 135 reports written by 13…

Van Looy, Linda; Vrijsen, Mike

209

CTH: A Software Family for Multi-Dimensional Shock Physics Analysis

CTH is a family of codes developed at Sandia National Laboratories for modelling complex multi-dimensional, multi-material problems that are characterized by large deformations and/or strong shocks. A two-step, second-order accurate Eulerian solution algorithm is used to solve the mass, momentum, and energy conservation equations. CTH includes models for material strength, fracture, porous materials, and high explosive detonation and initiation. Viscoplastic or rate-dependent models of...

E. S. Hertel

1993-01-01

210

Cross-Scale Analysis of Fire Regimes

Cross-scale spatial and temporal perspectives are important for studying contagious landscape disturbances such as fire, which are controlled by myriad processes operating at different scales. We examine fire regimes in forests of western North America, focusing on how observed patterns of fire frequency change across spatial scales. To quantify changes in fire frequency across spatial scale, we derive the event-area

Donald A. Falk; Carol Miller; Donald McKenzie; Anne E. Black

2007-01-01

211

Dynamical scaling analysis of plant callus growth

We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in

212

Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data

NASA Astrophysics Data System (ADS)

Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical and, therefore, interpretation is commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but there are areas of higher dimension that are not recovered correctly by 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values as well as amongst various two-dimensional (2-D) and 3-D codes is essential. At the center of a large structure (a few hundred meters edge length) and above the background structure at some distance from the anomaly, 2-D and 3-D values should match the 1-D solution. Higher-dimensional conditions are present at the edges of the anomaly and, therefore, only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree (especially codes based on different methods and/or written by different programmers), the more reliable the obtained synthetic data set is. Very simple structures such as a conductive or resistive block embedded in a homogeneous or layered half-space, without any topography and using a constant sensor height, were chosen to calculate synthetic data. For the comparison one finite element 2-D code and numerous 3-D codes, which are based on finite difference, finite element and integral equation approaches, were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps to approach more realistic field data settings for forward modeling will be discussed.
As the driving engine of an inversion algorithm is its forward solver, applying inversion codes to HEM data is only sensible once the forward modeling results are reliable (and their limits and weaknesses are known and manageable).

Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.

2013-12-01

213

A number of model-based scaling methods have been developed that apply to asymmetric proximity matrices. A flexible data analysis approach is proposed that combines two psychometric procedures, seriation and multidimensional scaling (MDS). The method uses seriation to define an empirical ordering of the stimuli, and then uses MDS to scale the two separate triangles of the proximity matrix

Joseph Lee Rodgers; Tony D. Thompson

1992-01-01
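The MDS half of the approach above can be made concrete with classical (Torgerson) multidimensional scaling, which embeds a symmetric distance matrix via double-centering and an eigendecomposition. This is the textbook classical MDS algorithm, shown on assumed toy data; it is not the authors' asymmetric-matrix procedure, which additionally scales the two triangles separately after seriation.

```python
# Classical (Torgerson) MDS: recover coordinates from a distance matrix.
import numpy as np

def classical_mds(D, k=1):
    """Embed a symmetric distance matrix D in k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]            # top-k eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Stimuli truly lying on a line are recovered up to sign and shift:
pos = np.array([0.0, 1.0, 3.0, 6.0])
D = np.abs(pos[:, None] - pos[None, :])
X = classical_mds(D, k=1).ravel()
recovered = np.abs(X[:, None] - X[None, :])
```

For genuinely Euclidean input the recovered pairwise distances reproduce the originals exactly, which is why MDS configurations can be compared across judgmental tasks.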

214

Uncertainty analysis of basin scale compaction processes

NASA Astrophysics Data System (ADS)

The dynamic evolution of porosity distribution in sedimentary basins has been typically interpreted by assuming that mechanical compaction is the dominant process. While mechanical compaction is particularly relevant during the early burial phase and has been often assumed to play a key role in the diagenesis even at the largest depths, temperature-activated geochemical compaction has been recognized as a major component driving the evolution of the basin characteristics and of the compaction process at least within the deepest layers. As a consequence, modeling basin evolution requires solving a coupled system involving partial differential equations and algebraic relationships between state variables. In this framework, quartz cementation and smectite-illite transformation are recognized to be the most relevant processes affecting sedimentary basins evolution. Spatial and temporal scales of basin evolution are intrinsically very large and it is often difficult to provide reliable estimates for the parameters included in the selected geochemical and compaction models. In this study we focus on the effects that the coupling between the quartz cementation process and mechanical compaction have on the distribution of porosity, pressure and temperature in the evolving sedimentary basin in the presence of uncertain model parameters and boundary conditions. We quantify uncertainty associated with the system state variables by means of a Global Sensitivity Analysis (GSA). The methodology is framed within the context of a generalized Polynomial Chaos Expansion (GPCE) approximation of a basin-scale evolution scenario. Sparse grids sampling techniques are employed to improve the computational efficiency of the methodology. The theoretical and computational framework adopted allows an efficient computation of the variance-based Sobol indices, exploiting a polynomial interpolation over the sparse grid collocation points. 
An additional advantage of the GPCE is that it yields a surrogate model of the system behavior. This can be exploited within the context of uncertainty propagation studies, e.g., based on numerical Monte Carlo simulations. It allows observing the space-time evolution of the probability density distribution (and its statistical moments) of target problem variables. The approach is illustrated through a one-dimensional example involving the process of quartz cementation in sandstones and the resulting effects on the dynamics of porosity, temperature and pressure.

Formaggia, L.; Guadagnini, A.; Imperiali, I.; Lever, V.; Porta, G.; Riva, M.; Scotti, A.; Tamellini, L.

2012-04-01
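The variance-based Sobol indices at the heart of the GSA described above can be illustrated on a toy model. This sketch uses a plain pick-and-freeze Monte Carlo estimator rather than the paper's sparse-grid GPCE machinery; the model, sample size, and inputs are assumptions chosen so the exact indices are known.

```python
# First-order Sobol indices by pick-and-freeze Monte Carlo (toy model).
import numpy as np

def first_order_sobol(f, n_inputs, n=20000, seed=0):
    """Estimate S_i = V[E(Y|X_i)] / V(Y) for independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_inputs))
    B = rng.random((n, n_inputs))
    fA, fB = f(A), f(B)
    var = fA.var()
    S = []
    for i in range(n_inputs):
        ABi = B.copy()
        ABi[:, i] = A[:, i]              # freeze input i from sample A
        S.append(np.mean(fA * (f(ABi) - fB)) / var)
    return np.array(S)

# Additive model y = 3*x1 + x2: exact first-order indices 9/10 and 1/10.
f = lambda X: 3.0 * X[:, 0] + X[:, 1]
S = first_order_sobol(f, 2)
```

A GPCE surrogate yields the same indices analytically from the expansion coefficients; the Monte Carlo form here just makes the variance-decomposition idea explicit.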

215

Multidimensional Scaling Analysis of Interior, Designed Spaces

Multidimensional scaling configurations were obtained for the same set of 11 stimuli (commercial office foyers) using data from two different judgmental tasks completed by the same group of subjects. The degree of fit between the two configurations suggests that similarity judgments analysed by multidimensional scaling methods provide useful information about the way in which subjects perceive their environment.

Rob Hall; A. Terrance Purcell; Ross Thorne; John Metcalfe

1976-01-01

216

Developmental Work Personality Scale: An Initial Analysis.

ERIC Educational Resources Information Center

The research reported in this article involved using the Developmental Model of Work Personality to create a scale to measure work personality, the Developmental Work Personality Scale (DWPS). Overall, results indicated that the DWPS may have potential applications for assessing work personality prior to client involvement in comprehensive…

Strauser, David R.; Keim, Jeanmarie

2002-01-01

217

Scale space methods for climate model analysis

NASA Astrophysics Data System (ADS)

In this paper, we introduce methods for evaluating climate model performance across spatial scales. These techniques are based on the "scale space" framework widely used in the image processing and computer vision communities. We discuss why the diffusion equation on the sphere provides a particularly attractive means of smoothing two-dimensional maps of global climate data. We establish that no structure is introduced into a map as an artifact of the smoothing procedure. This allows for the comparison of models and observations at multiple scales. As a test case for these methods, we compare the ability of high- and low-resolution versions of the Community Climate System Model (CCSM) to simulate the seasonal climatologies of surface air temperature (TAS), sea level pressure (PSL), and total precipitation rate (PR). For TAS, we find that the high-resolution model is better able to capture the boreal summer (JJA) climatological pattern at fine scales, although there is no such improvement in winter (DJF). We find the performances of the high- and low-resolution models to be similarly capable of capturing the summertime sea level pressure climatology at all scales. However, the high-resolution model PSL climatology is degraded for DJF, especially at larger scales. For both JJA and DJF precipitation climatologies, we find larger precipitation errors in the high-resolution model at the finest scales; however, performance at larger scales is improved.

Marvel, K.; Ivanova, D.; Taylor, K. E.

2013-06-01
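The key scale-space property the abstract relies on, that diffusion smoothing introduces no new structure, can be demonstrated on a 1-D periodic grid standing in for the sphere. This is an illustrative sketch under that simplifying assumption, not the paper's spherical-diffusion implementation.

```python
# Scale-space smoothing sketch: explicit heat-equation steps on a
# periodic 1-D grid; smoothing should only destroy local extrema.
import numpy as np

def diffuse(u, steps, nu=0.2):
    """Explicit diffusion steps (stable for nu <= 0.5); conserves the mean."""
    u = u.copy()
    for _ in range(steps):
        u = u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))
    return u

def count_maxima(u):
    """Number of strict local maxima on the periodic grid."""
    return int(np.sum((u > np.roll(u, 1)) & (u > np.roll(u, -1))))

rng = np.random.default_rng(1)
field = rng.standard_normal(256)     # stand-in for a noisy climate map
coarse = diffuse(field, steps=200)   # the same field at a coarser scale
```

Comparing models and observations then amounts to comparing `field` and `coarse`-style versions of each map at matched diffusion scales.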

218

Future CAD in multi-dimensional medical images--project on multi-organ, multi-disease CAD system.

A large research project on the subject of computer-aided diagnosis (CAD) entitled "Intelligent Assistance in Diagnosis of Multi-dimensional Medical Images" was initiated in Japan in 2003. The objective of this research project is to develop a multi-organ, multi-disease CAD system that incorporates anatomical knowledge of the human body and diagnostic knowledge of various types of diseases. The present paper provides an overview of the project and clarifies the trend of future CAD technologies in Japan. PMID:17382515

Kobatake, Hidefumi

2007-01-01

219

Dynamical scaling analysis of plant callus growth

NASA Astrophysics Data System (ADS)

We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in time. We expect our work to be relevant for the understanding and characterization of other systems that undergo growth due to cell division and differentiation, such as, for example, tumor development.

Galeano, J.; Buceta, J.; Juarez, K.; Pumariño, B.; de la Torre, J.; Iriondo, J. M.

2003-07-01

220

Detection of crossover time scales in multifractal detrended fluctuation analysis

NASA Astrophysics Data System (ADS)

Fractal is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic. They may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.

Ge, Erjia; Leung, Yee

2013-04-01
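The statistical idea above, replacing eyeball detection of a crossover with a fitted model, can be sketched as a two-segment regression on the log-log fluctuation function, choosing the breakpoint that minimises the combined residual sum of squares. The synthetic data (true crossover at log-scale 3.0, slopes 0.9 and 0.4) are assumptions for illustration, and this simple grid search omits the paper's inference and confidence intervals.

```python
# Crossover detection sketch: two-segment least-squares fit on log F(s).
import numpy as np

def fit_segment(x, y):
    """Least-squares line fit; returns the residual sum of squares."""
    A = np.vstack([x, np.ones_like(x)]).T
    _, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
    return rss[0] if rss.size else 0.0

def find_crossover(logs, logF):
    """Log-scale of the breakpoint minimising the combined two-segment RSS."""
    best = None
    for k in range(2, len(logs) - 2):    # at least 2 points per segment
        rss = fit_segment(logs[:k], logF[:k]) + \
              fit_segment(logs[k:], logF[k:])
        if best is None or rss < best[0]:
            best = (rss, k)
    return logs[best[1]]

logs = np.linspace(1.0, 5.0, 41)
true_cross = 3.0
logF = np.where(logs < true_cross,
                0.9 * logs,                          # small-scale regime
                0.9 * true_cross + 0.4 * (logs - true_cross))
crossover = find_crossover(logs, logF)
```

On noisy fluctuation functions the same objective is fitted within a regression framework, which is what allows confidence intervals for the crossover location.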

221

Multi-scale Analysis of Heterogeneous Media.

National Technical Information Service (NTIS)

The project develops new mathematical tools for the description of multi-scale stress transfer inside composite materials. The first research activity provides new mathematical methods to characterize the extreme local stress excursions inside linear elas...

R. Lipton

2008-01-01

222

A scaling analysis to characterize thermomagnetic convection

Thermomagnetic convection is characterized using scaling arguments. We consider a square enclosure filled with a ferrofluid that is under the influence of an external magnetic field created by a line dipole. The height-averaged Nusselt number scales with the magnetic Rayleigh number as Nu ∝ Ra_m^0.25. This result is in excellent agreement with predictions obtained from detailed numerical simulations. Use of the Langevin

Achintya Mukhopadhyay; Ranjan Ganguly; Swarnendu Sen; Ishwar K. Puri

2005-01-01
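A power law like the Nu ∝ Ra_m^0.25 correlation above is usually verified by a log-log fit: the exponent is the slope of log Nu versus log Ra_m. The paired values below are synthetic assumptions consistent with that scaling (with an assumed prefactor), not simulation results from the paper.

```python
# Recover a power-law exponent from (Ra_m, Nu) pairs via a log-log fit.
import math

ra = [1e4, 1e5, 1e6, 1e7]
nu = [0.8 * r ** 0.25 for r in ra]   # assumed prefactor 0.8

lx = [math.log(r) for r in ra]
ly = [math.log(v) for v in nu]
n = len(ra)
mx = sum(lx) / n
my = sum(ly) / n
# Ordinary least-squares slope of log(Nu) against log(Ra_m):
slope = sum((x - mx) * (y - my) for x, y in zip(lx, ly)) / \
        sum((x - mx) ** 2 for x in lx)
```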

223

Convective scale weather analysis and forecasting

NASA Technical Reports Server (NTRS)

How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to observe frequently the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.

Purdom, J. F. W.

1984-01-01

224

NASA Technical Reports Server (NTRS)

With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

2013-01-01

225

Cepstral analysis synthesis on the mel frequency scale

This paper presents a new technique of cepstral analysis synthesis on the mel frequency scale, the log spectrum on the mel frequency scale (the mel log spectrum) is considered to be an effective representation of the spectral envelope of speech. This analysis synthesis system uses the mel log spectrum approximation (MLSA) filter which was devised for the cepstral synthesis on

S. Imai

1983-01-01

226

NASA Astrophysics Data System (ADS)

A common attribute of electric-powered aerospace vehicles and systems such as unmanned aerial vehicles, hybrid- and fully-electric aircraft, and satellites is that their performance is usually limited by the energy density of their batteries. Although lithium-ion batteries offer distinct advantages such as high voltage and low weight over other battery technologies, they are a relatively new development, and thus significant gaps in the understanding of the physical phenomena that govern battery performance remain. As a result of this limited understanding, batteries must often undergo a cumbersome design process involving many manual iterations based on rules of thumb and ad-hoc design principles. A systematic study of the relationship between operational, geometric, morphological, and material-dependent properties and performance metrics such as energy and power density is non-trivial due to the multiphysics, multiphase, and multiscale nature of the battery system. To address these challenges, two numerical frameworks are established in this dissertation: a process for analyzing and optimizing several key design variables using surrogate modeling tools and gradient-based optimizers, and a multi-scale model that incorporates more detailed microstructural information into the computationally efficient but limited macro-homogeneous model. In the surrogate modeling process, multi-dimensional maps for the cell energy density with respect to design variables such as the particle size, ion diffusivity, and electron conductivity of the porous cathode material are created. A combined surrogate- and gradient-based approach is employed to identify optimal values for cathode thickness and porosity under various operating conditions, and quantify the uncertainty in the surrogate model. The performance of multiple cathode materials is also compared by defining dimensionless transport parameters. 
The multi-scale model makes use of detailed 3-D FEM simulations conducted at the particle-level. A monodisperse system of ellipsoidal particles is used to simulate the effective transport coefficients and interfacial reaction current density within the porous microstructure. Microscopic simulation results are shown to match well with experimental measurements, while differing significantly from homogenization approximations used in the macroscopic model. Global sensitivity analysis and surrogate modeling tools are applied to couple the two length scales and complete the multi-scale model.

Du, Wenbo

227

Scientific design of Purdue University Multi-Dimensional Integral Test Assembly (PUMA) for GE SBWR

The scaled facility design was based on the three-level scaling method: the first level is based on the well-established approach obtained from the integral response function, namely integral scaling. This level ensures that the steady-state as well as dynamic characteristics of the loops are scaled properly. The second level of scaling is for the boundary flow of mass and energy between components; this ensures that the flow and inventory are scaled correctly. The third level is focused on key local phenomena and constitutive relations. The facility has 1/4 height and 1/100 area ratio scaling; this corresponds to a volume scale of 1/400. Power scaling is 1/200 based on the integral scaling. Time runs twice as fast in the model, as predicted by the present scaling method. PUMA is scaled for full pressure and is intended to operate at and below 150 psia following scram. The facility models all the major components of the SBWR (Simplified Boiling Water Reactor), including safety and non-safety systems of importance to the transients. The model component designs and detailed instrumentation are presented in this report.
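The quoted ratios can be cross-checked with a few lines of arithmetic: under this kind of kinematic scaling, time scales as the square root of the height ratio, and power as volume over time. The sketch below is illustrative arithmetic only, not the facility design method itself.

```python
# Consistency check of the PUMA scaling ratios quoted above.
# Time ratio = sqrt(height ratio); power ratio = volume ratio / time ratio.
import math

height_ratio = 1 / 4
area_ratio = 1 / 100

volume_ratio = height_ratio * area_ratio   # 1/400, as stated
time_ratio = math.sqrt(height_ratio)       # 1/2 -> model time runs twice as fast
power_ratio = volume_ratio / time_ratio    # 1/200, matching the integral scaling

print(volume_ratio, time_ratio, power_ratio)
```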

Ishii, M.; Ravankar, S.T.; Dowlati, R. [Purdue Univ., Lafayette, IN (United States). School of Nuclear Engineering] [and others]

1996-04-01

228

NASA Astrophysics Data System (ADS)

A rigorous theoretical investigation has been made on multi-dimensional instability of obliquely propagating electrostatic dust-ion-acoustic (DIA) solitary structures in a magnetized dusty electronegative plasma which consists of Boltzmann electrons, nonthermal negative ions, cold mobile positive ions, and arbitrarily charged stationary dust. The Zakharov-Kuznetsov (ZK) equation is derived by the reductive perturbation method, and its solitary wave solution is analyzed for the study of the DIA solitary structures, which are found to exist in such a dusty plasma. The multi-dimensional instability of these solitary structures is also studied by the small-k (long-wavelength plane wave) perturbation expansion technique. The combined effects of the external magnetic field, obliqueness, and nonthermal distribution of negative ions, which are found to significantly modify the basic properties of small but finite-amplitude DIA solitary waves, are examined. The external magnetic field and the propagation directions of both the nonlinear waves and their perturbation modes are found to play a very important role in changing the instability criterion and the growth rate of the unstable DIA solitary waves. The basic features (viz. speed, amplitude, width, instability, etc.) and the underlying physics of the DIA solitary waves, which are relevant to many astrophysical situations (especially, auroral plasma, Saturn's E-ring and F-ring, Halley's comet, etc.) and laboratory dusty plasma situations, are briefly discussed.

Kundu, N. R.; Masud, M. M.; Ashrafi, K. S.; Mamun, A. A.

2013-01-01

229

For many biological applications, a macroscopic (deterministic) treatment of reaction-drift-diffusion systems is insufficient. Instead, one has to properly handle the stochastic nature of the problem and generate true sample paths of the underlying probability distribution. Unfortunately, stochastic algorithms are computationally expensive and, in most cases, the large number of participating particles renders the relevant parameter regimes inaccessible. In an attempt to address this problem we present a genuine stochastic, multi-dimensional algorithm that solves the inhomogeneous, non-linear, drift-diffusion problem on a mesoscopic level. Our method improves on existing implementations in being multi-dimensional and handling inhomogeneous drift and diffusion. The algorithm is well suited for an implementation on data-parallel hardware architectures such as general-purpose graphics processing units (GPUs). We integrate the method into an operator-splitting approach that decouples chemical reactions from the spatial evolution. We demonstrate the validity and applicability of our algorithm with a comprehensive suite of standard test problems that also serve to quantify the numerical accuracy of the method. We provide a freely available, fully functional GPU implementation. Integration into Inchman, a user-friendly web service, that allows researchers to perform parallel simulations of reaction-drift-diffusion systems on GPU clusters is underway. PMID:22506001

Vigelius, Matthias; Meyer, Bernd

2012-01-01

231

Multidimensional Scaling Analysis of Interior, Designed Spaces

ERIC Educational Resources Information Center

Multidimensional scaling configurations were obtained for the same set of 11 stimuli using data from two different judgmental tasks completed by the same group of subjects. The procedure was felt to be a way of obtaining useful information regarding environmental perception. (RH)

Hall, Rob; And Others

1976-01-01

232

Background A common characteristic of environmental epidemiology is the multi-dimensional aspect of exposure patterns, frequently reduced to a cumulative exposure for simplicity of analysis. By adopting a flexible Bayesian clustering approach, we explore the risk function linking exposure history to disease. This approach is applied here to study the relationship between different smoking characteristics and lung cancer in the framework of a population-based case control study. Methods Our study includes 4658 males (1995 cases, 2663 controls) with full smoking history (intensity, duration, time since cessation, pack-years) from the ICARE multi-centre study conducted from 2001 to 2007. We extend Bayesian clustering techniques to explore predictive risk surfaces for covariate profiles of interest. Results We were able to partition the population into 12 clusters with different smoking profiles and lung cancer risk. Our results confirm that when compared to intensity, duration is the predominant driver of risk. On the other hand, using pack-years of cigarette smoking as a single summary leads to a considerable loss of information. Conclusions Our method estimates the disease risk associated with a specific exposure profile by robustly accounting for the different dimensions of exposure and will be helpful in general to give further insight into the effect of exposures that are accumulated through different time patterns.

2013-01-01

233

Analysis of a municipal wastewater treatment plant using a neural network-based pattern analysis

This paper addresses the problem of how to capture the complex relationships that exist between process variables and to diagnose the dynamic behaviour of a municipal wastewater treatment plant (WTP). Due to the complex biological reaction mechanisms and the highly time-varying, multivariable nature of the real WTP, diagnosis of the WTP is still difficult in practice. The application of intelligent techniques, which can analyse the multi-dimensional process data using a sophisticated visualisation technique, can be useful for analysing and diagnosing the activated-sludge WTP. In this paper, the Kohonen Self-Organising Feature Map (KSOFM) neural network is applied to analyse the multi-dimensional process data, and to diagnose the inter-relationships of the process variables in a real activated-sludge WTP. By using component planes, some detailed local relationships between the process variables, e.g., responses of the process variables under different operating conditions, as well as the global information, are discovered. The operating condition and the inter-relationships among the process variables in the WTP have been diagnosed and extracted from the information obtained by clustering analysis of the maps. It is concluded that the KSOFM technique provides an effective analysis and diagnosis tool to understand the system behaviour and to extract knowledge contained in multi-dimensional data of a large-scale WTP. © 2003 Elsevier Science Ltd. All rights reserved.

Hong, Y. -S. T.; Rosen, M. R.; Bhamidimarri, R.

2003-01-01

234

Scaling of quasibrittle fracture: asymptotic analysis

Fracture of quasibrittle materials such as concrete, rock, ice, tough ceramics and various fibrous or particulate composites exhibits complex size effects. An asymptotic theory of scaling governing these size effects is presented, while its extension to fractal cracks is left to a companion paper [1] which follows. The energy release from the structure is assumed to depend on its size

Z. P. BAŽANT

1997-01-01

235

Local variance for multi-scale analysis in geomorphometry.

Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
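The four LV steps listed above reduce to simple window statistics. The following is a schematic sketch on a toy grid, not the authors' implementation; `local_variance` and `roc_lv` are hypothetical helper names.

```python
# Sketch of the local variance (LV) method described above:
# LV = average standard deviation within a 3x3 moving window, and
# ROC-LV = relative change of LV from one scale level to the next.
import statistics

def local_variance(grid):
    """Mean 3x3-window standard deviation over all interior cells."""
    rows, cols = len(grid), len(grid[0])
    sds = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            window = [grid[i][j]
                      for i in range(r - 1, r + 2)
                      for j in range(c - 1, c + 2)]
            sds.append(statistics.pstdev(window))
    return sum(sds) / len(sds)

def roc_lv(lv_by_scale):
    """Rate of change of LV between consecutive scale levels (percent)."""
    return [100.0 * (b - a) / a for a, b in zip(lv_by_scale, lv_by_scale[1:])]

# Toy surfaces standing in for up-scaled land-surface parameter grids.
flat  = [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]
rough = [[1, 5, 1, 5], [5, 1, 5, 1], [1, 5, 1, 5], [5, 1, 5, 1]]

print(local_variance(flat))   # homogeneous surface -> zero LV
print(local_variance(rough))  # heterogeneous surface -> positive LV
```

Peaks in the `roc_lv` sequence would then mark the characteristic scale levels discussed in the abstract.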

Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas

2011-07-15

237

Purpose – The purpose of this paper is to develop a measurement scale to assess over-the-road commercial motor vehicle operators' attitudes toward safety regulations. Design/methodology/approach – A literature review of the current USA motor carrier safety literature and general safety literature is conducted to determine the existence of a construct and measurement scale suitable for assessing truck drivers' attitudes toward

Matthew A. Douglas; Stephen M. Swartz

2009-01-01

238

NASA Astrophysics Data System (ADS)

We propose to describe the variety of galaxies from the Sloan Digital Sky Survey by using only one affine parameter. To this aim, we construct the principal curve (P-curve) passing through the spine of the data point cloud, considering the eigenspace derived from Principal Component Analysis (PCA) of morphological, physical, and photometric galaxy properties. Thus, galaxies can be labeled, ranked, and classified by a single arc-length value of the curve, measured at the unique closest projection of the data points on the P-curve. We find that the P-curve has a "W" letter shape with three turning points, defining four branches that represent distinct galaxy populations. This behavior is controlled mainly by two properties, namely u - r and star formation rate (from blue young at low arc length to red old at high arc length), while most other properties correlate well with these two. We further present the variations of several important galaxy properties as a function of arc length. Luminosity functions vary from steep Schechter fits at low arc length to double power law and ending in lognormal fits at high arc length. Galaxy clustering shows increasing autocorrelation power at large scales as arc length increases. Cross correlation of galaxies with different arc lengths shows that the probability of two galaxies belonging to the same halo decreases as their distance in arc length increases. PCA analysis allows us to find peculiar galaxy populations located apart from the main cloud of data points, such as small red galaxies dominated by a disk, of relatively high stellar mass-to-light ratio and surface mass density. On the other hand, the P-curve helped us understand the average trends, encoding 75% of the available information in the data. 
The P-curve allows not only dimensionality reduction but also provides supporting evidence for the following relevant physical models and scenarios in extragalactic astronomy: (1) The hierarchical merging scenario in the formation of a selected group of red massive galaxies. These galaxies present a lognormal r-band luminosity function, which might arise from multiplicative processes involved in this scenario. (2) A connection between the onset of active galactic nucleus activity and star formation quenching as mentioned in Martin et al., which appears in green galaxies transitioning from blue to red populations.

Taghizadeh-Popp, M.; Heinis, S.; Szalay, A. S.

2012-08-01

239

The Attitudes to Ageing Questionnaire: Mokken Scaling Analysis

Background Hierarchical scales are useful in understanding the structure of underlying latent traits in many questionnaires. The Attitudes to Ageing Questionnaire (AAQ) explored the attitudes to ageing of older people themselves, and originally described three distinct subscales: (1) Psychosocial Loss (2) Physical Change and (3) Psychological Growth. This study aimed to use Mokken analysis, a method of Item Response Theory, to test for hierarchies within the AAQ and to explore how these relate to underlying latent traits. Methods Participants in a longitudinal cohort study, the Lothian Birth Cohort 1936, completed a cross-sectional postal survey. Data from 802 participants were analysed using Mokken Scaling analysis. These results were compared with factor analysis using exploratory structural equation modelling. Results Participants were 51.6% male, mean age 74.0 years (SD 0.28). Three scales were identified from 18 of the 24 items: two weak Mokken scales and one moderate Mokken scale. (1) ‘Vitality’ contained a combination of items from all three previously determined factors of the AAQ, with a hierarchy from physical to psychosocial; (2) ‘Legacy’ contained items exclusively from the Psychological Growth scale, with a hierarchy from individual contributions to passing things on; (3) ‘Exclusion’ contained items from the Psychosocial Loss scale, with a hierarchy from general to specific instances. All of the scales were reliable and statistically significant with ‘Legacy’ showing invariant item ordering. The scales correlate as expected with personality, anxiety and depression. Exploratory SEM mostly confirmed the original factor structure. Conclusions The concurrent use of factor analysis and Mokken scaling provides additional information about the AAQ. The previously-described factor structure is mostly confirmed. 
Mokken scaling identifies a new factor relating to vitality, and a hierarchy of responses within three separate scales, referring to vitality, legacy and exclusion. This shows what older people themselves consider important regarding their own ageing.

Shenkin, Susan D.; Watson, Roger; Laidlaw, Ken; Starr, John M.; Deary, Ian J.

2014-01-01

240

Multi-scale analysis of heavy rainfall systems

NASA Astrophysics Data System (ADS)

The aim of this work is to study cross-scale interactions, with a focus on mesoscale convective systems. A multi-scale analysis of a heavy rainfall event is carried out by dividing the responsible systems into large, middle and small scales. The three distinctive scales correspond respectively to upper- and low-level jets, meso-scale convective systems and convective cells. The governing equations for the three scales are derived and then simplified to their bare essence to illustrate the cross-scale interactions. In particular, the cross-scale transfers of momentum and heat are retained in the equations to illustrate the interactions of the large and small scale motions with the mesoscale system. WRF has been used to simulate a heavy rainfall event in southeast China, and the model results are used to test the theory of the multi-scale interactions. Overall, the theory shows a plausible mechanism: the meso-scale convective system is responsible for the vertical momentum transfer from the upper-level jet to the lower-level jet, which maintains the low-level positive vorticity of the convective system. The low-level jet also carries large quantities of moisture from the South China Sea to Southeast China, which are necessary for the small-scale convection.

TSUI, Chi Yan; Shao, Yaping

2014-05-01

241

Application of Multidimensional Fuzzy Analysis to Decision Making

The goal of multi-dimensional fuzzy analysis is to discover different properties in multi-dimensional fuzzy distributions represented either extensionally (database) or intensionally (knowledge base). In this paper we show how this approach can be applied to such problems as decision making and knowledge discovery in databases. For uniform and efficient representation of fuzzy knowledge and data we propose a technique of

Alexandr A. Savinov

242

Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems.

National Technical Information Service (NTIS)

The Source Code Analysis Laboratory (SCALe) is an operational capability that tests software applications for conformance to one of the CERT(registered name) secure coding standards. CERT secure coding standards provide a detailed enumeration of coding er...

McCurley, J.; Miller, P.; Stoddard, R.; Seacord, R. C.; Dormann, W.

2010-01-01

243

High-order semi-discrete central-upwind schemes for multi-dimensional Hamilton-Jacobi equations

NASA Astrophysics Data System (ADS)

We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Noelle-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

Bryson, Steve; Levy, Doron

2003-07-01

244

High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

NASA Technical Reports Server (NTRS)

We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Noelle-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.

Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)

2002-01-01

245

NASA Astrophysics Data System (ADS)

In this paper we deal with an alternative technique to study global dynamics in Hamiltonian systems, the mean exponential growth factor of nearby orbits (MEGNO), that proves to be efficient to investigate both regular and stochastic components of phase space. It provides a clear picture of resonance structures, location of stable and unstable periodic orbits as well as a measure of hyperbolicity in chaotic domains which coincides with that given by the Lyapunov characteristic number. Here the MEGNO is applied to a rather simple model, the 3D perturbed quartic oscillator, in order to visualize the structure of its phase space and obtain a quite clear picture of its resonance structure. Examples of application to multi-dimensional canonical maps are also included.

Cincotta, P. M.; Giordano, C. M.; Simó, C.

2003-08-01

246

High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations

NASA Technical Reports Server (NTRS)

We present high-order semi-discrete central-upwind numerical schemes for approximating solutions of multi-dimensional Hamilton-Jacobi (HJ) equations. This scheme is based on the use of fifth-order central interpolants like those developed in [1], in the fluxes presented in [3]. These interpolants use the weighted essentially non-oscillatory (WENO) approach to avoid spurious oscillations near singularities, and become "central-upwind" in the semi-discrete limit. This scheme provides numerical approximations whose error is as much as an order of magnitude smaller than those of previous WENO-based fifth-order methods [2, 1]. These results are discussed via examples in one, two and three dimensions. We also present explicit N-dimensional formulas for the fluxes, discuss their monotonicity and the connection between this method and that in [2].

Bryson, Steve; Levy, Doron; Biegel, Bryan R. (Technical Monitor)

2002-01-01

247

NASA Astrophysics Data System (ADS)

Holography offers a versatile, rapid and volume scalable approach for making large area, multi-dimensional, organic PBGs; however, the small refractive index contrast of organics prevents formation of a complete band-gap. The introduction of inorganic nanoparticles to the structure provides a possible solution. In contrast to the multiple steps (exposure, development and infiltration) necessitated by lithographic-based holography (e.g. photoresists), holographic photopolymerization of monomer-nanoparticle suspensions enables one-step fabrication of multidimensional organic-inorganic photonic band gap (PBG) structures with high refractive index contrast. The PBGs are formed by segregation of semiconductor nanocrystals during polymerization of the polymer network. A model describing the migration of the nanoparticles into three-dimensional patterns, encompassing elements of Kogelnik's coupled wave theory for volume holograms, mass transport and polymerization kinetics, was utilized to select writing conditions and polymerization rates to obtain optimal morphologies for optical gain.

Jakubiak, Rachel; Vaia, Richard; Bunning, Timothy; Brown, Dean; Tondiglia, Vincent; Natarajan, Lalgudi; Tomlin, David

2003-03-01

248

Protein analysis on a genomic scale

Methods for protein analysis, such as chromatography, electrophoresis, enzyme tests, receptor assays and immunological tests, have always been aimed in a classical reductionistic manner at investigating single proteins isolated from the complex protein composition of biological compartments. The complexity of the protein composition in biological systems was first visualized by two-dimensional electrophoresis (2-DE). Using 2-DE like a molecular microscope, protein

Peter Jungblut; Brigitte Wittmann-Liebold

1995-01-01

249

ERIC Educational Resources Information Center

A flexible data analysis approach is proposed that combines the psychometric procedures seriation and multidimensional scaling. The method, which is particularly appropriate for analysis of proximities containing temporal information, is illustrated using a matrix of cocitations in publications by 18 presidents of the Psychometric Society.

Rodgers, Joseph Lee; Thompson, Tony D.

1992-01-01

250

Multiple time scale analysis of macrotransport processes

NASA Astrophysics Data System (ADS)

Convective and diffusive transport of a Brownian tracer corpuscle is analyzed within a multidimensional space decomposed into local (internal) and global (external) subspaces. Multiple time scale methods are employed to successively eliminate from the kinetic equation governing the multidimensional microtransport process its dependence upon each of the internal coordinates. The resulting long-time equation describing the residual global-space macrotransport process derived by this systematic perturbation procedure is shown to accord with the well-established results of generalized Taylor dispersion theory, heretofore obtained by ad hoc arguments. By way of example, this macrotransport description is used to analyze the Taylor dispersion of a solute in a Poiseuille-type solvent flow occurring within a rectangular duct of small, but non-zero, aspect ratio. Elementary dispersion results obtained by ignoring the side walls apply only for relatively short times; for longer times the perturbing presence of the side walls acts to substantially increase the axial dispersivity, even in the limit of zero aspect ratio, where intuition would strongly suggest otherwise.
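For comparison, the "elementary dispersion results obtained by ignoring the side walls" referenced above are of the classical Taylor-Aris form; for pressure-driven flow in a circular tube of radius $a$, mean speed $\bar U$ and molecular diffusivity $D$ (standard notation, not taken from the abstract), the effective axial dispersivity is

$$\bar D_{\mathrm{eff}} = D + \frac{a^{2}\,\bar U^{2}}{48\,D},$$

and the abstract's point is that the side walls of a rectangular duct eventually raise the axial dispersivity well above such elementary estimates.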

Pagitsas, M.; Nadim, A.; Brenner, H.

1986-04-01

251

Bayesian inference in the scaling analysis of critical phenomena

NASA Astrophysics Data System (ADS)

To determine the universality class of critical phenomena, we propose a method of statistical inference in the scaling analysis of critical phenomena. The method is based on Bayesian statistics, most specifically, the Gaussian process regression. It assumes only the smoothness of a scaling function, and it does not need a form. We demonstrate this method for the finite-size scaling analysis of the Ising models on square and triangular lattices. Near the critical point, the method is comparable in accuracy to the least-square method. In addition, it works well for data to which we cannot apply the least-square method with a polynomial of low degree. By comparing the data on triangular lattices with the scaling function inferred from the data on square lattices, we confirm the universality of the finite-size scaling function of the two-dimensional Ising model.
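For reference, the finite-size scaling relation whose scaling function the Gaussian process infers can be written in standard notation (not spelled out in the abstract) as

$$A(T, L) = L^{c/\nu}\, F\!\left[(T - T_c)\, L^{1/\nu}\right],$$

where $A$ is the observable, $L$ the system size, $T_c$ the critical temperature, $\nu$ and $c$ critical exponents, and $F$ the smooth scaling function; the Bayesian approach treats $F$ nonparametrically via Gaussian process regression rather than fitting it with a low-degree polynomial.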

Harada, Kenji

2011-11-01

252

Multiple time scale complexity analysis of resting state FMRI.

The present study explored multi-scale entropy (MSE) analysis to investigate the entropy of resting state fMRI signals across multiple time scales. MSE analysis was developed to distinguish random noise from complex signals, since the entropy of the former decreases with longer time scales while the latter maintains its entropy due to a "self-resemblance" across time scales. A long resting state BOLD fMRI (rs-fMRI) scan with 1000 data points was performed on five healthy young volunteers to investigate the spatial and temporal characteristics of entropy across multiple time scales. A shorter rs-fMRI scan with 240 data points was performed on a cohort of subjects consisting of healthy young (age 23 ± 2 years, n = 8) and aged volunteers (age 66 ± 3 years, n = 8) to investigate the effect of healthy aging on the entropy of rs-fMRI. The results showed that the MSE of gray matter, rather than white matter, closely resembles that of 1/f noise over multiple time scales. By filtering out high frequency random fluctuations, MSE analysis is able to reveal enhanced contrast in entropy between gray and white matter, as well as between age groups, at longer time scales. Our data support the use of MSE analysis as a validation metric for quantifying the complexity of rs-fMRI signals. PMID:24242271
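The coarse-grain-then-entropy procedure behind MSE can be sketched in a few lines. This is a simplified illustration: the helper names are hypothetical, and the parameter choices (m = 2, r = 0.15·SD) are common MSE conventions rather than values taken from this paper.

```python
# Sketch of multi-scale entropy (MSE): coarse-grain the series at each
# scale, then compute sample entropy of the coarse-grained series.
import math
import statistics

def coarse_grain(x, scale):
    """Non-overlapping window averages of x at the given scale."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): A = length-(m+1) template matches, B = length-m."""
    def count_matches(k):
        # Same number of templates for both lengths, for comparable counts.
        templates = [x[i:i + k] for i in range(len(x) - m)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    a, b = count_matches(m + 1), count_matches(m)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

def mse(x, scales, m=2, r_factor=0.15):
    """Entropy of x at each time scale; the MSE curve discussed above."""
    r = r_factor * statistics.pstdev(x)
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

For white noise the resulting curve falls with scale, while complex signals hold their entropy, which is the contrast the abstract exploits.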

Smith, Robert X; Yan, Lirong; Wang, Danny J J

2014-06-01

253

Large scale analysis of signal reachability

Motivation: Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem, and thus solving it for very large TRNs remains a challenge. Results: We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. It follows a divide-and-conquer strategy: we break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork. These operators reduce the size of the resulting polynomial and thus the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples.
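For intuition on why scalability matters here, the exact computation enumerates all 2^|E| subsets of uncertain edges; a brute-force version (illustrative only, feasible for toy networks, and precisely the exponential cost the divide-and-conquer polynomial method is designed to avoid) might look like:

```python
from itertools import product

def reach_probability(edges, sources, targets):
    """Exact probability that some target is reachable from some source when
    each directed edge (u, v, p) is independently present with probability p.
    Brute force over all 2^|E| edge subsets -- exponential in the edge count."""
    total = 0.0
    for present in product([False, True], repeat=len(edges)):
        weight = 1.0
        adj = {}
        for (u, v, p), on in zip(edges, present):
            weight *= p if on else (1.0 - p)
            if on:
                adj.setdefault(u, []).append(v)
        # depth-first search from all sources in this realization of the network
        seen = set(sources)
        stack = list(sources)
        while stack:
            u = stack.pop()
            for v in adj.get(u, []):
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        if seen & set(targets):
            total += weight
    return total
```

On a single edge with p = 0.5 this returns 0.5; two edges in series give 0.25 and two in parallel give 0.75, matching the elementary probability calculations.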
Availability: All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm. Contact: atodor@cise.ufl.edu Supplementary information: Supplementary data are available at Bioinformatics online.

Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer

2014-01-01

254

The Multi-Dimensional Surface-Water Modeling System (MD_SWMS) is a Graphical User Interface for surface-water flow and sediment-transport models. The capabilities of MD_SWMS for developing models include: importing raw topography and other ancillary data; building the numerical grid and defining initial and boundary conditions; running simulations; visualizing results; and comparing results with measured data.

McDonald, Richard; Nelson, Jonathan; Kinzel, Paul; Conaway, Jeff

2006-01-01

255

In this paper, a statistical electromagnetic (EM) interference study is constructed, mainly from a theoretical viewpoint, on the basis of the N-dimensional random walk problem in multi-dimensional signal space. First, a characteristic function of the Hankel transform type matched to this EM environmental study is introduced, in an extended form of D. Middleton's basic result. Then, the probability density

M. Ohta; Y. Mitani; N. Nakasako

1998-01-01

256

NASA Astrophysics Data System (ADS)

An analog of the Gelfand-Levitan-Marchenko integral equations for multi-dimensional Delsarte transmutation operators is constructed by means of studying their differential-geometric structure based on the classical Lagrange identity for a formally conjugated pair of differential operators. An extension of the method to the case of affine pencils of differential operators is suggested.

Golenia, Jolanta; Prykarpatsky, Anatolij K.; Prykarpatsky, Yarema A.

257

Efficient High Order Central Schemes for Multi-Dimensional Hamilton-Jacobi Equations: Talk Slides

NASA Technical Reports Server (NTRS)

This viewgraph presentation presents information on the attempt to produce high-order, efficient, central methods that scale well to high dimension. The central philosophy is that the equations should evolve to the point where the data is smooth. This is accomplished by a cyclic pattern of reconstruction, evolution, and re-projection. One-dimensional and two-dimensional representational methods are also detailed.

Bryson, Steve; Levy, Doron; Biegel, Brian R. (Technical Monitor)

2002-01-01

258

Efficient organization and access of multi-dimensional datasets on tertiary storage systems

This paper addresses the problem of urgently needed data management techniques for efficiently retrieving requested subsets of large datasets from mass storage devices. This problem is especially critical for scientific investigators who need ready access to the large volume of data generated by large-scale supercomputer simulations and physical experiments as well as the automated collection of observations by monitoring devices

Ling Tony Chen; R. Drach; M. Keating; S. Louis; Doron Rotem; Arie Shoshani

1995-01-01

259

Mathematical Analysis of Multi-Scale Models of Complex Fluids

The state of the art of the mathematical and numerical analysis of multi-scale models of complex fluids is reviewed. Issues addressed include well-posedness of the models, convergence analysis of the numerical methods, and the structure of stationary solutions of the Doi-Onsager equation.

Tiejun Li; Pingwen Zhang

2007-01-01

260

Multiscale Analysis of Landscape Heterogeneity: Scale Variance and Pattern Metrics

A major goal of landscape ecology is to understand the formation, dynamics, and maintenance of spatial heterogeneity. Spatial heterogeneity is the most fundamental characteristic of all landscapes, and scale multiplicity is inherent in spatial heterogeneity. Thus, multiscale analysis is imperative for understanding the structure, function and dynamics of landscapes. Although a number of methods have been used for multiscale analysis
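One simple form of the multiscale analysis mentioned above, computing the variance of a landscape map aggregated at successively coarser block sizes (scale variance), can be sketched as follows (an illustrative construction, not the authors' specific metrics):

```python
import numpy as np

def block_aggregate(grid, b):
    """Aggregate a 2-D landscape grid by averaging non-overlapping b x b blocks."""
    n, m = (grid.shape[0] // b) * b, (grid.shape[1] // b) * b
    g = grid[:n, :m]
    return g.reshape(n // b, b, m // b, b).mean(axis=(1, 3))

def scale_variance(grid, block_sizes):
    """Variance of the aggregated map at each scale; how it decays with
    increasing block size is one simple descriptor of spatial heterogeneity."""
    return [float(block_aggregate(grid, b).var()) for b in block_sizes]
```

For spatially uncorrelated noise the variance of block means falls as 1/b^2, so departures from that decay signal structure at particular scales.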

Jianguo Wu; Dennis E. Jelinski; Matt Luck; Paul T. Tueller

2000-01-01

261

Scale, population, and spatial analysis: a methodological investigation

The analysis of aggregated data introduces challenges into spatial analysis that have been described as the modifiable areal unit problem (MAUP). The MAUP consists of two distinct but related research problems: scale effects and zoning effects. Various challenges accompany the science of aggregating phenomena into units that can be analyzed; the most salient of these challenges encompass theory, data sources, and

Darren M. Ruddell; Elizabeth A. Wentz

2007-01-01

262

Large-Scale Vehicle Sharing Systems: Analysis of Vélib'

A quantitative analysis of the pioneering large-scale bicycle sharing system, Vélib' in Paris, France is presented. This system involves a fleet of bicycles strategically located across the network. Users are free to check out a bicycle from close to their origin and drop it off close to their destination to complete their trip. The analysis provides key insights on the

Rahul Nair; Elise Miller-Hooks; Robert C. Hampshire; Ana Bušić

2012-01-01

263

An Analysis of Model Scale Data Transformation to Full Scale Flight Using Chevron Nozzles

NASA Technical Reports Server (NTRS)

Ground-based model scale aeroacoustic data is frequently used to predict the results of flight tests while saving time and money. The value of a model scale test is therefore dependent on how well the data can be transformed to the full scale conditions. In the spring of 2000, a model scale test was conducted to prove the value of chevron nozzles as a noise reduction device for turbojet applications. The chevron nozzle reduced noise by 2 EPNdB at an engine pressure ratio of 2.3 compared to that of the standard conic nozzle. This result led to a full scale flyover test in the spring of 2001 to verify these results. The flyover test confirmed the 2 EPNdB reduction predicted by the model scale test one year earlier. However, further analysis of the data revealed that the spectra and directivity, both on an OASPL and PNL basis, do not agree in either shape or absolute level. This paper explores these differences in an effort to improve the data transformation from model scale to full scale.

Brown, Clifford; Bridges, James

2003-01-01

264

Numerical analysis of scalar dissipation length-scales and their scaling properties

NASA Astrophysics Data System (ADS)

Scalar dissipation rate, χ, is fundamental to the description of scalar mixing in turbulent non-premixed combustion. Most contributions to the statistics of χ come from the finest turbulent mixing scales, and thus its adequate characterisation requires good resolution. Reliable χ-measurement is complicated by the trade-off between higher resolution and greater signal-to-noise ratio. Thus, the present numerical study utilises the error-free mixture fraction, Z, and fluid mechanical data from the turbulent reacting jet DNS of Pantano (2004). The aim is to quantify the resolution requirements for χ-measurement in terms of easily measurable properties of the flow, such as the integral-scale Reynolds number, Re_t, using spectral and spatial-filtering [cf. Barlow and Karpetis (2005)] analyses. Analysis of the 1-D cross-stream dissipation spectra enables the estimation of the dissipation length scales. It is shown that these spectrally-computed scales follow the expected Kolmogorov scaling with Re_t^{-0.75}. The work also involves local smoothing of the instantaneous χ-field over non-overlapping spatial intervals (filter-width, wf), to study the smoothed χ-value as a function of wf, as wf is extrapolated to the smallest scale of interest. The dissipation length scales thus captured show a more stringent Re_t^{-1} scaling, compared to the usual Kolmogorov type. This concurs with the criterion of 'resolution adequacy' of the DNS, as set out by Sreenivasan (2004) using the theory of multi-fractals.
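The spatial-filtering analysis described above, smoothing the dissipation field over non-overlapping windows of width wf, can be sketched in 1-D (a simplified illustration; the gradient-squared form chi = 2 D (dZ/dy)^2 and the uniform grid are assumptions, not details taken from the study):

```python
import numpy as np

def scalar_dissipation(Z, dy, D=1.0):
    """chi = 2 D (dZ/dy)^2 for a 1-D cross-stream mixture-fraction profile Z(y)."""
    dZdy = np.gradient(Z, dy)
    return 2.0 * D * dZdy**2

def box_filter(chi, wf):
    """Smooth chi over non-overlapping windows of wf samples (the
    spatial-filtering step: the filtered value as a function of filter width)."""
    n = len(chi) // wf
    return chi[:n * wf].reshape(n, wf).mean(axis=1)
```

Sweeping wf toward the grid spacing and watching where the filtered chi saturates is the essence of the resolution-requirement estimate.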

Vaishnavi, Pankaj; Kronenburg, Andreas

2006-11-01

265

Light water reactor fuel performance modeling and multi-dimensional simulation

NASA Astrophysics Data System (ADS)

Light water reactor fuel is a multicomponent system required to produce thermal energy through the fission process, efficiently transfer the thermal energy to the coolant system, and provide a barrier to fission product release by maintaining structural integrity. The operating conditions within a reactor induce complex multi-physics phenomena that occur over time scales ranging from less than a microsecond to years and act over distances ranging from the inter-atomic spacing to meters. These conditions impose challenging and unique modeling, simulation, and verification data requirements in order to accurately determine the state of the fuel during its lifetime in the reactor. The capabilities and limitations of the current engineering-scale one-dimensional and two-dimensional fuel performance codes are discussed, and the challenges of employing higher-fidelity atomistic modeling techniques such as molecular dynamics and phase-field simulations are presented.

Rashid, Joseph Y. R.; Yagnik, Suresh K.; Montgomery, Robert O.

2011-08-01

266

Four flux-type models for radiative heat transfer in cylindrical configurations were applied to the prediction of the radiative flux density and source term of a cylindrical enclosure problem, based on data reported previously on a pilot-scale experimental combustor with steep temperature gradients. The models, which are a Schuster-Hamaker type four-flux model derived by Lockwood and Spalding, two Schuster-Schwarzschild type four-flux models derived

Nevin Selcuk

1993-01-01

267

On Multi-dimensional Steady Subsonic Flows Determined by Physical Boundary Conditions

NASA Astrophysics Data System (ADS)

In this thesis, we investigate an inflow-outflow problem for subsonic gas flows in a nozzle of finite length, aiming at finding intrinsic (physically acceptable) boundary conditions on the upstream and downstream ends. We first characterize a set of physical boundary conditions that ensure the existence and uniqueness of a subsonic irrotational flow in a rectangle. Our results show that if we prescribe the horizontal incoming flow angle at the inlet and an appropriate pressure at the exit, there exist two positive constants m0 and m1 with m0 < m1, such that a global subsonic irrotational flow exists uniquely in the nozzle, provided that the incoming mass flux m ∈ [m0, m1). The maximum speed approaches the sonic speed as the mass flux m tends to m1. The new difficulties arise from the nonlocal term involved in the mass flux and the pressure condition at the exit. We first introduce an auxiliary problem with the Bernoulli's constant as a parameter to localize the nonlocal term, and then establish a monotonic relation between the mass flux and the Bernoulli's constant to recover the original problem. To deal with the loss of obliqueness induced by the pressure condition at the exit, we employ a formulation in terms of the angular velocity and the density. A Moser iteration is applied to obtain the L∞ estimate of the angular velocity, which guarantees that the flow possesses a positive horizontal velocity in the whole nozzle. As a continuation, we investigate the influence of the incoming flow angle and the geometric structure of the nozzle walls on subsonic flows in a finitely long curved nozzle. It turns out, interestingly, that the incoming flow angle and the angles of inclination of the nozzle walls play the same role as the end pressure, and that the curvatures of the nozzle walls play an important role. We also extend our results to subsonic Euler flows in the 2-D and 3-D asymmetric cases.
Then it comes to the most interesting and difficult case: the 3-D subsonic Euler flow in a bounded nozzle, which is also the essential part of this thesis. The boundary conditions we imposed in the 2-D case have a natural extension to the 3-D case. These important clues help us develop a new formulation that offers some insight into the coupling structure between the hyperbolic and elliptic modes in the Euler equations. The key idea in our new formulation is to use Bernoulli's law to reduce the dimension of the velocity field by defining the new variables (1, b2 = u2/u1, b3 = u3/u1) and replacing u1 by the Bernoulli's function B through u1^2 = 2(B - h(ρ))/(1 + b2^2 + b3^2). In this way, we can explore the role of Bernoulli's law in greater depth and hope to simplify the Euler equations somewhat. We find a new conserved quantity for flows with a constant Bernoulli's function, which behaves like the scaled vorticity in the 2-D case. More surprisingly, a system of new conservation laws can be derived, which has never been observed before, even in the two-dimensional case. We employ this formulation to construct a smooth subsonic Euler flow in a rectangular cylinder by assigning the incoming flow angles and the Bernoulli's function at the inlet and the end pressure at the exit, which is also required to be adjacent to some special subsonic states. The same idea can be applied to obtain similar information for the incompressible Euler equations, the self-similar Euler equations, the steady Euler equations with damping, the steady Euler-Poisson equations and the steady Euler-Maxwell equations. Last, we are concerned with the structural stability of some steady subsonic solutions for the Euler-Poisson system.
A steady subsonic solution with a subsonic background charge is proven to be structurally stable with respect to small perturbations of the background charge, the incoming flow angles and the end pressure, provided the background solution has a low Mach number and a small electric field. The new ingredient in our mathematical analysis is the solvability of a new second-order elliptic system supplemented with oblique derivative conditions.
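The dimension-reduction step quoted in the abstract follows directly from Bernoulli's law; written out (with h the enthalpy and notation as in the abstract):

```latex
% Bernoulli's law along streamlines: B = \tfrac{1}{2}\lvert u\rvert^2 + h(\rho).
% With b_2 = u_2/u_1 and b_3 = u_3/u_1, the speed satisfies
% \lvert u\rvert^2 = u_1^2\,(1 + b_2^2 + b_3^2), hence u_1 is recovered
% from (B, b_2, b_3, \rho):
\[
  u_1^2 = \frac{2\bigl(B - h(\rho)\bigr)}{1 + b_2^2 + b_3^2}.
\]
```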

Weng, Shangkun

268

A survey over X-ray absorption methods in homogeneous catalysis research is given with the example of the iron-catalyzed Michael addition reaction. A thorough investigation of the catalytic cycle was possible by combination of conventional X-ray absorption spectroscopy (XAS), resonant inelastic X-ray scattering (RIXS) and multi-dimensional spectroscopy. The catalytically active compound formed in the first step of the Michael reaction of methyl vinyl ketone with 2-oxocyclopentanecarboxylate (1) could be elucidated in situ by RIXS spectroscopy, and the reduced catalytic activity of FeCl(3) x 6 H(2)O (2) compared to Fe(ClO(4))(3) x 9 H(2)O (3) could be further explained by the formation of a [Fe(III)Cl(4)(-)](3)[Fe(III)(1-H)(2)(H(2)O)(2)(+)][H(+)](2) complex. Chloride was identified as catalyst poison with a combined XAS-UV/vis study, which revealed that Cl(-) binds quantitatively to the available iron centers that are deactivated by formation of [FeCl(4)(-)]. Operando studies in the course of the reaction of methyl vinyl ketone with 1 by combined XAS-Raman spectroscopy allowed the exclusion of changes in the oxidation state and the octahedral geometry at the iron site; a reaction order of two with respect to methyl vinyl ketone and a rate constant of k = 1.413 min(-2) were determined by analysis of the C=C and C=O vibration band. Finally, a dedicated experimental set-up for three-dimensional spectroscopic studies (XAS, UV/vis and Raman) of homogeneous catalytic reactions under laboratory conditions, which emerged from the discussed investigations, is presented. PMID:20405080

Bauer, Matthias; Gastl, Christoph

2010-06-01

269

Scheme-scale ambiguity in analysis of QCD observable

NASA Astrophysics Data System (ADS)

The scheme-scale ambiguity that has plagued perturbative analysis in QCD remains an obstacle to making precise tests of the theory. Many attempts have been made to resolve the scale ambiguity; in this regard the BLM, EC, PMS and CORGI approaches are the most prominent. We employ these methods to fix the scale ambiguity at NLO, NNLO and even higher-order approximations. By optimizing the renormalization scale, it becomes possible to predict higher-order terms. We present general results for the predicted terms at any order, using the different optimization methods. Some observables are used as specific examples to indicate the validity of scale fixing for predicting the higher-order terms.

Mirjalili, A.; Kniehl, B. A.

2010-09-01

270

Flux Coupling Analysis of Genome-Scale Metabolic Network Reconstructions

In this paper, we introduce the Flux Coupling Finder (FCF) framework for elucidating the topological and flux connectivity features of genome-scale metabolic networks. The framework is demonstrated on genome-scale metabolic reconstructions of Helicobacter pylori, Escherichia coli, and Saccharomyces cerevisiae. The analysis allows one to determine whether any two metabolic fluxes, v1 and v2, are (1) directionally coupled, if a non-zero flux for v1

Anthony P. Burgard; Evgeni V. Nikolaev; Christophe H. Schilling; Costas D. Maranas

2004-01-01

271

Analysis and modeling of large-scale steam explosion experiments

This paper describes current analysis and modeling results of large-scale steam explosion experiments. For the large-scale experiments, a transient one-dimensional explosion model was developed that can qualitatively predict the trends in the experimental data. The model employs a description of vapor film collapse and subsequent fuel fragmentation by thermal and mechanical means. In addition, a simple empirical explosion model was developed and incorporated into a two-dimensional hydrodynamic computer program. This combination can be used to investigate the two-dimensional characteristics of the propagation and expansion phases for large-scale explosions.

Corradini, M.L.

1982-12-01

272

Using Qualitative Methods to Inform Scale Development

ERIC Educational Resources Information Center

This article describes the process by which one study utilized qualitative methods to create items for a multi dimensional scale to measure twelve step program affiliation. The process included interviewing fourteen addicted persons while in twelve step focused treatment about specific pros (things they like or would miss out on by not being…

Rowan, Noell; Wulff, Dan

2007-01-01

273

Multi-Dimensional Quantum Tunneling and Transport Using the Density-Gradient Model

NASA Technical Reports Server (NTRS)

We show that quantum effects are likely to significantly degrade the performance of MOSFETs (metal oxide semiconductor field effect transistor) as these devices are scaled below 100 nm channel length and 2 nm oxide thickness over the next decade. A general and computationally efficient electronic device model including quantum effects would allow us to monitor and mitigate these effects. Full quantum models are too expensive in multi-dimensions. Using a general but efficient PDE solver called PROPHET, we implemented the density-gradient (DG) quantum correction to the industry-dominant classical drift-diffusion (DD) model. The DG model efficiently includes quantum carrier profile smoothing and tunneling in multi-dimensions and for any electronic device structure. We show that the DG model reduces DD model error from as much as 50% down to a few percent in comparison to thin oxide MOS capacitance measurements. We also show the first DG simulations of gate oxide tunneling and transverse current flow in ultra-scaled MOSFETs. The advantages of rapid model implementation using the PDE solver approach will be demonstrated, as well as the applicability of the DG model to any electronic device structure.

Biegel, Bryan A.; Yu, Zhi-Ping; Ancona, Mario; Rafferty, Conor; Saini, Subhash (Technical Monitor)

1999-01-01

274

The multi-dimensional stability of weak-heat-release detonations

NASA Astrophysics Data System (ADS)

The stability of an overdriven planar detonation wave is examined for a one-step Arrhenius reaction model with an order-one post-shock temperature-scaled activation energy θ in the limit of a small post-shock temperature-scaled heat release β. The ratio of specific heats γ is taken such that (γ - 1) = O(1). Under these assumptions, which cover a wide range of realistic physical situations, the steady detonation structure can be evaluated explicitly, with the reactant mass fraction described by an exponentially decaying function. The analytical representation of the steady structure allows a normal-mode description of the stability behaviour to be obtained via a two-term asymptotic expansion in β. The resulting dispersion relation predicts that for a finite overdrive f, the detonation is always stable to two-dimensional disturbances. For large overdrives, the identification of regimes of stability or instability is found to depend on a choice of distinguished limit between the heat release β and the detonation propagation Mach number D*. Regimes of instability are found to be characterized by the presence of a single unstable oscillatory mode over a finite range of wavenumbers.

Short, Mark; Stewart, D. Scott

1999-03-01

275

Full-scale system impact analysis: Digital document storage project

NASA Technical Reports Server (NTRS)

The Digital Document Storage Full Scale System can provide cost-effective electronic document storage, retrieval, hard copy reproduction, and remote access for users of NASA Technical Reports. The desired functionality of the DDS system is highly dependent on the assumed requirements for remote access used in this Impact Analysis. It is highly recommended that NASA proceed with a phased communications requirements analysis to ensure that adequate communications service can be supplied at a reasonable cost, in order to validate the recent working assumptions upon which the success of the DDS Full Scale System depends.

1989-01-01

276

NASA Astrophysics Data System (ADS)

Four flux-type models for radiative heat transfer in cylindrical configurations were applied to the prediction of the radiative flux density and source term of a cylindrical enclosure problem, based on data reported previously on a pilot-scale experimental combustor with steep temperature gradients. The models, namely the Schuster-Hamaker type four-flux model derived by Lockwood and Spalding, the two Schuster-Schwarzschild type four-flux models derived by Siddall and Selcuk and by Richter and Quack, and the spherical harmonics approximation, were evaluated from the viewpoint of predictive accuracy by comparing their predictions with exact solutions produced previously. The comparisons showed that the spherical harmonics approximation produces more accurate results than the other models with respect to the radiative energy source term, and that the four-flux models of Lockwood and Spalding and of Siddall and Selcuk for an isotropic radiation field are more accurate with respect to the prediction of the radiative flux density to the side wall.

Selcuk, Nevin

1993-02-01

277

Voice dysfunction in dysarthria: application of the MultiDimensional Voice Program™

Phonatory dysfunction is a frequent component of dysarthria and often is a primary feature noted in clinical assessment. But the vocal impairment can be difficult to assess because (a) the analysis of voice disorder of any kind can be challenging, and (b) the voice disorder in dysarthria often occurs along with other impairments affecting articulation, resonance, and respiration. A promising

R. D. Kent; H. K. Vorperian; J. F. Kent; J. R. Duffy

2003-01-01

278

New enhancements to SCALE for criticality safety analysis

As the speed, available memory, and reliability of computer hardware increases and the cost decreases, the complexity and usability of computer software will increase, taking advantage of the new hardware capabilities. Computer programs today must be more flexible and user friendly than those of the past. Within available resources, the SCALE staff at Oak Ridge National Laboratory (ORNL) is committed to upgrading its computer codes to keep pace with the current level of technology. This paper examines recent additions and enhancements to the criticality safety analysis sections of the SCALE code package. These recent additions and enhancements made to SCALE can be divided into nine categories: (1) new analytical computer codes, (2) new cross-section libraries, (3) new criticality search sequences, (4) enhanced graphical capabilities, (5) additional KENO enhancements, (6) enhanced resonance processing capabilities, (7) enhanced material information processing capabilities, (8) portability of the SCALE code package, and (9) other minor enhancements, modifications, and corrections to SCALE. Each of these additions and enhancements to the criticality safety analysis capabilities of the SCALE code system are discussed below.

Hollenbach, D.F.; Bowman, S.M.; Petrie, L.M.; Parks, C.V. [Oak Ridge National Lab., TN (United States). Computational Physics and Engineering Div.

1995-09-01

279

Distribution-based similarity measures for multi-dimensional point set retrieval applications

Effective and efficient methods of similarity assessment continue to be one of the most fundamental problems in multimedia data analysis. In the case of retrieving relevant items from a collection of objects based on series of multivariate observations (e.g., searching a repository for video clips similar to a query example), satisfactory performance cannot be expected using many conventional

Jie Shao; Zi Huang; Heng Tao Shen; Jialie Shen; Xiaofang Zhou

2008-01-01

280

Using Data Cubes for Metarule-Guided Mining of MultiDimensional Association Rules

Metarule-guided mining is an interactive approach to data mining, where users probe the data under analysis by specifying hypotheses in the form of metarules, or pattern templates. Previous methods for metarule-guided mining of association rules have primarily used a transaction/relation table-based structure. Such approaches require costly, multiple scans of the data in order to find all the large itemsets. In this paper, we employ a novel approach to
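The cube-based idea, aggregating the data once and answering metarule-template queries from cell counts instead of rescanning the raw transactions, can be sketched as follows (a toy illustration with hypothetical function names and thresholds, not the paper's algorithm):

```python
from collections import Counter

def build_cube(rows, dims):
    """One pass over the raw tuples builds a count cube keyed by dimension values."""
    cube = Counter()
    for row in rows:
        cube[tuple(row[d] for d in dims)] += 1
    return cube

def rules_from_cube(cube, dims, antecedent_dims, consequent_dim,
                    min_support, min_conf):
    """Mine rules matching the metarule template antecedent_dims -> consequent_dim
    directly from the cube's cell counts, with no rescan of the raw data."""
    total = sum(cube.values())
    idx = {d: i for i, d in enumerate(dims)}
    ante, full = Counter(), Counter()
    for cell, c in cube.items():
        a = tuple(cell[idx[d]] for d in antecedent_dims)
        ante[a] += c
        full[(a, cell[idx[consequent_dim]])] += c
    rules = []
    for (a, v), c in full.items():
        support, conf = c / total, c / ante[a]
        if support >= min_support and conf >= min_conf:
            rules.append((dict(zip(antecedent_dims, a)), {consequent_dim: v},
                          support, conf))
    return rules
```

The cube plays the role of the pre-aggregated structure: every template instantiation is answered from marginal sums over its cells.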

Micheline Kamber; Jiawei Han; Jenny Y. Chiang

1997-01-01

281

A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations

A scheme of solving the two-dimensional Euler equations is developed. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative and third-order accurate in space. The derivation and stability analysis of the scheme

Kenneth G. Powell; Bram van Leer

1989-01-01

282

Instrumentation development for multi-dimensional two-phase flow modeling

A multi-faceted instrumentation approach is described which has played a significant role in obtaining fundamental data for two-phase flow model development. This experimental work supports the development of a three-dimensional, two-fluid, four field computational analysis capability. The goal of this development is to utilize mechanistic models and fundamental understanding rather than rely on empirical correlations to describe the interactions in

G. J. Kirouac; T. A. Trabold; P. F. Vassallo; W. E. Moore; R. Kumar

1999-01-01

283

In the previous paper of this series, we presented a formulation of the polarized radiative transfer equation for resonance scattering with partial frequency redistribution (PRD) in multi-dimensional media for a two-level atom model with an unpolarized ground level, using the irreducible spherical tensors T^K_Q(i, Ω) for polarimetry. We also presented a polarized approximate lambda iteration method to solve this equation using the Jacobi iteration scheme. The formal solution used was based on a simple finite volume technique. In this paper, we develop a faster and more efficient method which uses projection techniques applied to the radiative transfer equation (the Stabilized Preconditioned Bi-Conjugate Gradient method). We now use a more accurate formal solver, namely the well-known two-dimensional (2D) short characteristics method. Using the numerical method developed in Paper I, we could consider only the simpler cases of finite 2D slabs, due to computational limitations. Using the method developed in this paper, we could compute PRD solutions in 2D media in the more difficult context of semi-infinite 2D slabs as well. We present several solutions which may serve as benchmarks in future studies in this area.

Anusha, L. S.; Nagendra, K. N. [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bengaluru 560 034 (India); Paletou, F. [Laboratoire d'Astrophysique de Toulouse-Tarbes, Universite de Toulouse, CNRS, 14 Avenue E. Belin, F-31400 Toulouse (France)

2011-01-10

284

NASA Astrophysics Data System (ADS)

Many systems and processes, both natural and artificial, may be described by parameter-driven mathematical and physical models. We introduce a generally applicable Stochastic Optimization Framework (SOF) that can be interfaced to or wrapped around such models to optimize model outcomes by effectively "inverting" them. The Visual and Autonomous Exploration Systems Research Laboratory (http://autonomy.caltech.edu) at the California Institute of Technology (Caltech) has long-term experience in the optimization of multi-dimensional systems and processes. Several examples of successful application of a SOF are reviewed and presented, including biochemistry, robotics, device performance, mission design, parameter retrieval, and fractal landscape optimization. Applications of a SOF are manifold, such as in science, engineering, industry, defense & security, and reconnaissance/exploration. Keywords: Multi-parameter optimization, design/performance optimization, gradient-based steepest-descent methods, local minima, global minimum, degeneracy, overlap parameter distribution, fitness function, stochastic optimization framework, Simulated Annealing, Genetic Algorithms, Evolutionary Algorithms, Genetic Programming, Evolutionary Computation, multi-objective optimization, Pareto-optimal front, trade studies
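As one concrete example of the stochastic optimizers such a framework can wrap around a forward model, here is a minimal simulated-annealing loop (an illustrative sketch with assumed hyperparameters, not the SOF implementation):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, iters=2000, seed=0):
    """Minimize a scalar fitness function f by simulated annealing:
    propose Gaussian moves, always accept improvements, and accept
    uphill moves with Boltzmann probability at a geometrically cooled
    temperature. Returns the best point and fitness found."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        cand = x + rng.gauss(0.0, step)
        fc = f(cand)
        # downhill moves always accepted; uphill with probability exp(-df/t)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest
```

Wrapping a forward model means f evaluates the model at the candidate parameters and returns a misfit, which the loop then drives down.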

Fink, Wolfgang

2008-05-01

285

NASA Astrophysics Data System (ADS)

Numerical modeling has proven an efficient predictive tool for testing, comparing and contrasting different geological hypotheses. Computer models simulating fluid flow and mass transport in complex hydrothermal systems can provide considerable insight into how these systems operate to produce economic concentrations of metals. This paper presents a finite element algorithm that fully couples transient, multi-dimensional fluid flow, heat and mass (solute) transport in discretely fractured porous media. The numerical method employs non-orthogonal quadrilateral meshes whose geometry, size and orientation can be adjusted freely to best fit complex earth structures, such as uneven surface topography, arbitrarily shaped geological units, and freely oriented fractures and faults. The McArthur basin hosting the HYC deposit in northern Australia is used as a field example. Its salinity conditions are considered first, followed by other scenarios simulating the Irish-type and the US Gulf Coast-type salinity conditions. Numerical results indicate that salinity plays an important role in controlling hydrothermal ore-forming fluid migration. High salinity at the basin floor (evaporitic conditions) strengthens the thermally induced buoyancy force and hence promotes free convection of basinal fluids, whereas high salinity at the basin bottom (sedimentary brines) counteracts the thermal buoyancy and thus suppresses the development of hydrothermal fluid circulation. Numerical experiments also identify the favorable hydrological conditions for the formation of the HYC deposit and evaluate the similarities and differences between two-dimensional and three-dimensional simulation results.

Yang, J.

2003-12-01

286

Significant progress has been achieved in recent years with the development of high-dimensional permutationally invariant analytic Born-Oppenheimer potential-energy surfaces, making use of polynomial invariant theory. In this work, we have developed a generalization of this approach which is suitable for the construction of multi-sheeted multi-dimensional potential-energy surfaces exhibiting seams of conical intersections. The method avoids the nonlinear optimization problem which is encountered in the construction of multi-sheeted diabatic potential-energy surfaces from ab initio electronic-structure data. The key to the method is the expansion of the coefficients of the characteristic polynomial in polynomials which are invariant with respect to the point group of the molecule or the permutation group of like atoms. The multi-sheeted adiabatic potential-energy surface is obtained from the Frobenius companion matrix which contains the fitted coefficients. A three-sheeted nine-dimensional adiabatic potential-energy surface of the ²T₂ electronic ground state of the methane cation has been constructed as an example of the application of this method. PMID:23781779
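The companion-matrix step the abstract describes can be sketched concretely: once the coefficients of the monic characteristic polynomial have been fitted at a given geometry, the adiabatic sheet energies are the eigenvalues of the Frobenius companion matrix built from them. The cubic with known roots below is a toy example, not fitted data from the paper.

```python
import numpy as np

def adiabatic_energies(coeffs):
    """Roots of the monic characteristic polynomial
    p(E) = E^n + c[0]*E^(n-1) + ... + c[n-1],
    obtained as eigenvalues of its Frobenius companion matrix."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)              # sub-diagonal of ones
    C[:, -1] = -np.asarray(coeffs)[::-1]    # last column holds -coefficients
    return np.sort(np.linalg.eigvals(C).real)

# Toy cubic with known roots 1, 2, 3: (E-1)(E-2)(E-3) = E^3 - 6E^2 + 11E - 6.
energies = adiabatic_energies([-6.0, 11.0, -6.0])
```

The advantage exploited in the paper is that the polynomial coefficients, unlike the adiabatic energies themselves, are smooth, symmetry-adapted functions suitable for fitting.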

Opalka, Daniel; Domcke, Wolfgang

2013-06-14

287

NASA Astrophysics Data System (ADS)

Significant progress has been achieved in recent years with the development of high-dimensional permutationally invariant analytic Born-Oppenheimer potential-energy surfaces, making use of polynomial invariant theory. In this work, we have developed a generalization of this approach which is suitable for the construction of multi-sheeted multi-dimensional potential-energy surfaces exhibiting seams of conical intersections. The method avoids the nonlinear optimization problem which is encountered in the construction of multi-sheeted diabatic potential-energy surfaces from ab initio electronic-structure data. The key to the method is the expansion of the coefficients of the characteristic polynomial in polynomials which are invariant with respect to the point group of the molecule or the permutation group of like atoms. The multi-sheeted adiabatic potential-energy surface is obtained from the Frobenius companion matrix which contains the fitted coefficients. A three-sheeted nine-dimensional adiabatic potential-energy surface of the ²T₂ electronic ground state of the methane cation has been constructed as an example of the application of this method.

Opalka, Daniel; Domcke, Wolfgang

2013-06-01

288

Recent technological advancements in mass spectrometry facilitate the detection of chemical-induced posttranslational modifications (PTMs) that may alter cell signaling pathways or alter the structure and function of the modified proteins. To identify such protein adducts (Kleiner et al., Chem Res Toxicol 11:1283–1290, 1998), multi-dimensional protein identification technology (MuDPIT) has been utilized. MuDPIT was first described by Link et al. as a new technique useful for protein identification from a complex mixture of proteins (Link et al., Nat Biotechnol 17:676–682, 1999). MuDPIT utilizes two different HPLC columns to further enhance peptide separation, increasing the number of peptide hits and protein coverage. The technology is extremely useful for proteomes, such as the urine proteome, samples from immunoprecipitations, and 1D gel bands resolved from a tissue homogenate or lysate. In particular, MuDPIT has enhanced the field of adduct hunting for adducted peptides, since it is more capable of identifying lesser abundant peptides, such as those that are adducted, than the more standard LC–MS/MS. The site-specific identification of covalently adducted proteins is a prerequisite for understanding the biological significance of chemical-induced PTMs and the subsequent toxicological response they elicit.

Labenski, Matthew T.; Fisher, Ashley A.; Monks, Terrence J.; Lau, Serrine S.

2014-01-01

289

Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

NASA Technical Reports Server (NTRS)

The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
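The heart of any ENO reconstruction is adaptive stencil selection: among the candidate stencils, use the one on which the data are smoothest, so that interpolation never crosses a discontinuity. The paper's scheme is high order and two-dimensional; the sketch below shows only the second-order one-dimensional version of that idea, as an illustration.

```python
import numpy as np

def eno2_face_values(u):
    """Second-order ENO reconstruction at the right face of each interior
    cell: of the two one-sided differences, keep the one with the smaller
    magnitude (the smoother stencil) as the slope, then extrapolate half
    a cell to the face."""
    u = np.asarray(u, dtype=float)
    dl = u[1:-1] - u[:-2]            # backward difference
    dr = u[2:] - u[1:-1]             # forward difference
    slope = np.where(np.abs(dl) < np.abs(dr), dl, dr)
    return u[1:-1] + 0.5 * slope

# Smooth (linear) data is reconstructed exactly.
faces = eno2_face_values([0.0, 1.0, 2.0, 3.0, 4.0])
```

Near a jump the same selection rule switches to the one-sided stencil that avoids the discontinuity, which is what makes the reconstruction essentially non-oscillatory.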

Casper, Jay; Dorrepaal, J. Mark

1990-01-01

290

NASA Astrophysics Data System (ADS)

The stepwise gray method is the simplest method for incorporating the effects of absorption-emission bands in radiative heat transfer calculations. The non-gray character of the mixture is replaced by an 'equivalent' stepwise gray character. A method to obtain this equivalent gray character by utilizing the mean beam length is outlined. This method is coupled with the P-1 approximation, resulting in a simple model to deal with radiation from gas-particulate media in multi-dimensional geometries. To validate the accuracy of the method, it is applied to a few simple 1D problems, i.e., to isothermal plane-parallel and spherical media as well as to plane-parallel media at radiative equilibrium. Absorption and scattering behavior are studied for both black and gray walls. Results are compared with those obtained from Monte Carlo calculations incorporating the exponential-wide-band model, indicating good accuracy for all conditions studied, within the limitations of the P-1 approximation.
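The mean beam length used to build the equivalent gray character is a standard quantity: the geometric value is 4V/A, usually multiplied by Hottel's correction factor of about 0.9. A small sketch (illustrative, not from the paper):

```python
import math

def mean_beam_length(volume, surface_area, correction=0.9):
    """Average mean beam length L_m ~ 0.9 * 4 V / A (Hottel's correction)."""
    return correction * 4.0 * volume / surface_area

# Sphere of diameter D: geometric value 4V/A = 2D/3, corrected ~0.6 D.
D = 1.0
Lm = mean_beam_length(math.pi * D**3 / 6.0, math.pi * D**2)
```

With L_m in hand, an equivalent gray absorption coefficient for each spectral step can be chosen so that the gray medium reproduces the band emissivity over that path length.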

Modest, Michael F.; Sikka, Kamal K.

1992-08-01

291

Confirmatory factor analysis of DOSPERT scale with Chinese university students.

The factor structure of the 30-item Domain-Specific Risk-Taking (DOSPERT) scale (Blais & Weber, 2006) was examined with a convenience sample of 205 Chinese undergraduate students from Macao. A comparison of five competing models via confirmatory factor analysis yielded empirical support for the perspective that risk-taking attitude is content-dependent. After removing the items in the Financial subscale of the DOSPERT scale and some post hoc modifications, a reasonably good fit to the four-correlated-factor model was achieved, in concordance with the theoretical framework. However, items in some scales needed further revision to purify their factor structure so that the DOSPERT scale would be a more psychometrically sound measure for investigating one's risk-taking attitudes in different life domains. PMID:24765720

Wu, Joseph; Cheung, Hoi Yan

2014-02-01

292

Rasch Analysis of the Fullerton Advanced Balance (FAB) Scale

Purpose: This cross-sectional study explores the psychometric properties and dimensionality of the Fullerton Advanced Balance (FAB) Scale, a multi-item balance test for higher-functioning older adults. Methods: Participants (n=480) were community-dwelling adults able to ambulate independently. Data gathering consisted of survey and balance performance assessment. Psychometric properties were assessed using Rasch analysis. Results: Mean age of participants was 76.4 (SD=7.1) years. Mean FAB Scale scores were 24.7/40 (SD=7.5). Analyses for scale dimensionality showed that 9 of the 10 items fit a unidimensional measure of balance. Item 10 (Reactive Postural Control) did not fit the model. The reliability of the scale to separate persons was 0.81 out of 1.00; the reliability of the scale to separate items in terms of their difficulty was 0.99 out of 1.00. Cronbach's alpha for a 10-item model was 0.805. Items of differing difficulties formed a useful ordinal hierarchy for scaling patterns of expected balance ability scoring for a normative population. Conclusion: The FAB Scale appears to be a reliable and valid tool to assess balance function in higher-functioning older adults. The test was found to discriminate among participants of varying balance abilities. Further exploration of concurrent validity of Rasch-generated expected item scoring patterns should be undertaken to determine the test's diagnostic and prescriptive utility.
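For readers unfamiliar with Rasch analysis: the model places persons and items on one common logit scale. The FAB items are polytomous, but the core idea is clearest in the dichotomous case, sketched below as an illustration (not the authors' analysis).

```python
import math

def rasch_prob(theta, difficulty):
    """Dichotomous Rasch model: probability that a person of ability
    theta passes an item of the given difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

p_equal = rasch_prob(0.0, 0.0)   # ability matches difficulty -> 0.5
p_hard = rasch_prob(0.0, 2.0)    # item 2 logits harder -> ~0.12
```

Item-fit statistics and the person/item separation reliabilities quoted in the abstract are all derived from residuals against these model-expected probabilities.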

Fiedler, Roger C.; Rose, Debra J.

2011-01-01

293

The science of visual analysis at extreme scale

NASA Astrophysics Data System (ADS)

Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms, and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.

Nowell, Lucy T.

2011-01-01

294

ECG scaling properties of cardiac arrhythmias using detrended fluctuation analysis

We applied detrended fluctuation analysis to characterize the raw electrocardiogram (ECG) waveform at very short time scales during episodes of cardiac arrhythmias, aiming to gain a global insight into its dynamical behaviour in patients who experienced sudden death. We found that in 15 recordings involving different types of arrhythmias (taken from PhysioNet's Sudden Cardiac Death Holter Database), the ECG waveform,
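Detrended fluctuation analysis (DFA) itself is a standard algorithm: integrate the mean-removed series, detrend it window by window, and examine how the residual fluctuation grows with window size. A minimal sketch on synthetic white noise (for which the DFA exponent is about 0.5), as an illustration rather than the study's ECG pipeline:

```python
import numpy as np

def dfa(x, scales):
    """Return the RMS fluctuation F(s) of the integrated, per-window
    linearly detrended profile of x at each window size s. The DFA
    exponent alpha is the slope of log F(s) versus log s."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        msq = []
        for i in range(n):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            msq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(msq)))
    return np.array(F)

rng = np.random.default_rng(0)
scales = np.array([8, 16, 32, 64])
F = dfa(rng.standard_normal(4096), scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

Uncorrelated noise yields alpha near 0.5, 1/f-type dynamics near 1.0; the study's interest is in how such exponents behave at very short scales around arrhythmic episodes.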

E. Rodriguez; C. Lerma; J. C. Echeverria; J. Alvarez-Ramirez

2008-01-01

295

Clustering for Visual Analogue Scale Data in Symbolic Data Analysis

We propose a hierarchical clustering for the visual analogue scale (VAS) in the framework of Symbolic Data Analysis(SDA). The VAS is a method that can be readily understood by most people to measure a characteristic or attitude that cannot be directly measured. VAS is of most value when looking at change within people, and is of less value for comparing

Kotoe Katayama; Rui Yamaguchi; Seiya Imoto; Keiko Matsuura; Kenji Watanabe; Satoru Miyano

2011-01-01

296

Crater ejecta scaling laws: fundamental forms based on dimensional analysis

A model of crater ejecta is constructed using dimensional analysis and a recently developed theory of energy and momentum coupling in cratering events. General relations are derived that provide a rationale for scaling laboratory measurements of ejecta to larger events. Specific expressions are presented for ejection velocities and ejecta blanket profiles in two limiting regimes of crater formation: the so-called

K. R. Housen; R. M. Schmidt; K. A. Holsapple

1983-01-01

297

The Dyadic Adjustment Scale: A Reliability Generalization Meta-Analysis

ERIC Educational Resources Information Center

We conducted a reliability generalization meta-analysis to examine the internal consistency of Dyadic Adjustment Scale (DAS; Spanier, 1976) scores across 91 published studies with 128 samples and 25,035 participants. The DAS was found to produce total and Dyadic Cohesion, Consensus, and Satisfaction scores of acceptable internal consistency,…

Graham, James M.; Liu, Yenling J.; Jeziorski, Jennifer L.

2006-01-01

298

Visualization for the Large Scale Data Analysis Project.

National Technical Information Service (NTIS)

In this paper the authors overview the visualization approach used as part of the Large Scale Data Analysis project. This project used the AVS5 software to create interface tools for the public domain database management system POSTGRES. This work utilize...

R. E. Flanery J. M. Donato

1996-01-01

299

A Factor Analysis of the Discipline Efficacy Scale.

ERIC Educational Resources Information Center

The Discipline Efficacy Scale (DES) was designed to measure personal and general teacher efficacy beliefs about student discipline. A confirmatory factor analysis of the proposed two-factor model was carried out using a sample of 206 junior- and senior-level preservice teacher education students. Goodness of fit measures did not suggest a good fit…

Giles, Rebecca McMahon; Kazelskis, Richard; Reeves-Kazelskis, Carolyn

300

A Confirmatory Factor Analysis of the Professional Opinion Scale

ERIC Educational Resources Information Center

The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…

Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.

2007-01-01

301

Evaluating Factor Analysis Decisions for Scale Design in Communication Research

Factor analysis techniques offer the communication researcher a set of important tools for the development and substantiation of attitude and opinion scales. Often, however, crucial factoring decisions are based on convention or misunderstanding, leading to questionable conclusions. The purpose of this article is to offer the nontechnical communication researcher a foundation for making and defending rational factoring decisions that can

John T. Morrison

2009-01-01

302

Exploratory Factor Analysis of African Self-Consciousness Scale Scores

ERIC Educational Resources Information Center

This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…

Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.

2012-01-01

303

Validation of inelastic analysis by full-scale component testing

This paper compares theoretical and experimental results for full-scale, prototypical components tested at elevated temperatures to provide validation for inelastic analysis methods, material models, and design limits. Results are discussed for piping elbow plastic and creep buckling, creep ratcheting, and creep relaxation; nozzle creep ratcheting and weld cracking; and thermal striping fatigue. Comparisons between theory and test confirm the adequacy of

D. S. Griffin; A. K. Dhalla; W. S. Woodward

1987-01-01

304

Evidence for a Multi-Dimensional Latent Structural Model of Externalizing Disorders

Strong associations between conduct disorder (CD), antisocial personality disorder (ASPD) and substance use disorders (SUD) seem to reflect a general vulnerability to externalizing behaviors. Recent studies have characterized this vulnerability on a continuous scale, rather than as distinct categories, suggesting that the revision of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) take into account the underlying continuum of externalizing behaviors. However, most of this research has not included measures of disorders that appear in childhood [e.g., attention-deficit/hyperactivity disorder (ADHD) or oppositional defiant disorder (ODD)], nor has it considered the full range of possibilities for the latent structure of externalizing behaviors, particularly factor mixture models, which allow for a latent factor to have both continuous and categorical dimensions. Finally, the majority of prior studies have not tested multidimensional models. Using lifetime diagnoses of externalizing disorders from participants in the Fast Track Project (n = 715), we analyzed a series of latent variable models ranging from fully continuous factor models to fully categorical mixture models. Continuous models provided the best fit to the observed data and also suggested that a two-factor model of externalizing behavior, defined as (1) ODD+ADHD+CD and (2) SUD with adult antisocial behavior sharing common variance with both factors, was necessary to explain the covariation in externalizing disorders. The two-factor model of externalizing behavior was then replicated using a nationally representative sample drawn from the National Comorbidity Survey-Replication data (n = 5,692). These results have important implications for the conceptualization of externalizing disorders in DSM-5.

King, Kevin; McMahon, Robert J.; Wu, Johnny; Luk, Jeremy

2012-01-01

305

The overall goal of this project has been to develop desktop capability for 3-D EM inversion as a complement or alternative to existing massively parallel platforms. We have been fortunate in having a uniquely productive cooperative relationship with Kyushu University (Y. Sasaki, P.I.), which supplied a base-level 3-D inversion source code for MT data over a half-space based on staggered grid finite differences. Storage efficiency was greatly increased in this algorithm by implementing a symmetric L-U parameter step solver, and by loading the parameter step matrix one frequency at a time. Rules were established for achieving sufficient Jacobian accuracy versus mesh discretization, and regularization was much improved by scaling the damping terms according to the influence of parameters upon the measured response. The modified program was applied to 101 five-channel MT stations taken over the Coso East Flank area supported by the DOE and the Navy. Inversion of these data on a 2 GB desktop PC using a half-space starting model recovered the main features of the subsurface resistivity structure seen in a massively parallel inversion which used a series of stitched 2-D inversions as a starting model. In particular, a steeply west-dipping, N-S trending conductor was resolved under the central-west portion of the East Flank. It may correspond to a highly saline magmatic fluid component, residual fluid from boiling, or less likely cryptic acid sulphate alteration, all in a steep fracture mesh. This work earned student Virginia Maris the Best Student Presentation award at the 2006 GRC annual meeting.

Philip E. Wannamaker

2007-12-31

306

Conservative-variable average states for equilibrium gas multi-dimensional fluxes

NASA Technical Reports Server (NTRS)

Modern split component evaluations of the flux vector Jacobians are thoroughly analyzed for equilibrium-gas average-state determinations. It is shown that all such derivations satisfy a fundamental eigenvalue consistency theorem. A conservative-variable average state is then developed for arbitrary equilibrium-gas equations of state and curvilinear-coordinate fluxes. Original expressions for eigenvalues, sound speed, Mach number, and eigenvectors are then determined for a general average Jacobian, and it is shown that the average eigenvalues, Mach number, and eigenvectors may not coincide with their classical pointwise counterparts. A general equilibrium-gas equation of state is then discussed for conservative-variable computational fluid dynamics (CFD) Euler formulations. The associated derivations lead to unique compatibility relations that constrain the pressure Jacobian derivatives. Thereafter, alternative forms for the pressure variation and average sound speed are developed in terms of two average pressure Jacobian derivatives. Significantly, no additional degree of freedom exists in the determination of these two average partial derivatives of pressure. Therefore, they are simultaneously computed exactly without any auxiliary relation, hence without any geometric solution projection or arbitrary scale factors. Several alternative formulations are then compared and key differences highlighted with emphasis on the determination of the pressure variation and average sound speed. The relevant underlying assumptions are identified, including some subtle approximations that are inherently employed in published average-state procedures. Finally, a representative test case is discussed for which an intrinsically exact average state is determined. This exact state is then compared with the predictions of recent methods, and their inherent approximations are appropriately quantified.

Iannelli, G. S.

1992-01-01

307

A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations

NASA Technical Reports Server (NTRS)

The solution of the two-dimensional Euler equations is based on the two-dimensional linear convection equation and the Euler-equation decomposition developed by Hirsch et al. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative, and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.

Powell, Kenneth G.; Vanleer, Bram

1989-01-01

308

NASA Technical Reports Server (NTRS)

This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.

Shu, Chi-Wang

1998-01-01

309

Multi-dimensional models of circumstellar shells around evolved massive stars

NASA Astrophysics Data System (ADS)

Context. Massive stars shape their surrounding medium through the force of their stellar winds, which collide with the circumstellar medium. Because the characteristics of these stellar winds vary over the course of the evolution of the star, the circumstellar matter becomes a reflection of the stellar evolution and can be used to determine the characteristics of the progenitor star. In particular, whenever a fast wind phase follows a slow wind phase, the fast wind sweeps up its predecessor in a shell, which is observed as a circumstellar nebula. Aims: We make 2D and 3D numerical simulations of fast stellar winds sweeping up their slow predecessors to investigate whether numerical models of these shells have to be 3D, or whether 2D models are sufficient to reproduce the shells correctly. Methods: We use the MPI-AMRVAC code, using hydrodynamics with optically thin radiative losses included, to make numerical models of circumstellar shells around massive stars in 2D and 3D and compare the results. We focus on those situations where a fast Wolf-Rayet star wind sweeps up the slower wind emitted by its predecessor, being either a red supergiant or a luminous blue variable. Results: As the fast Wolf-Rayet wind expands, it creates a dense shell of swept up material that expands outward, driven by the high pressure of the shocked Wolf-Rayet wind. These shells are subject to a fair variety of hydrodynamic-radiative instabilities. If the Wolf-Rayet wind is expanding into the wind of a luminous blue variable phase, the instabilities will tend to form a fairly small-scale, regular filamentary lattice with thin filaments connecting knotty features. If the Wolf-Rayet wind is sweeping up a red supergiant wind, the instabilities will form larger interconnected structures with less regularity. The numerical resolution must be high enough to resolve the compressed, swept-up shell and the evolving instabilities, which otherwise may not even form. 
Conclusions: Our results show that 3D models, when translated to observed morphologies, give realistic results that can be compared directly to observations. The 3D structure of the nebula will help to distinguish different progenitor scenarios.

van Marle, A. J.; Keppens, R.

2012-11-01

310

SAMNet: a network-based approach to integrate multi-dimensional high throughput datasets

The rapid development of high throughput biotechnologies has led to an onslaught of data describing genetic perturbations and changes in mRNA and protein levels in the cell. Because each assay provides a one-dimensional snapshot of active signaling pathways, it has become desirable to perform multiple assays (e.g. mRNA expression and phospho-proteomics) to measure a single condition. However, as experiments expand to accommodate various cellular conditions, proper analysis and interpretation of these data have become more challenging. Here we introduce a novel approach called SAMNet, for Simultaneous Analysis of Multiple Networks, that is able to interpret diverse assays over multiple perturbations. The algorithm uses a constrained optimization approach to integrate mRNA expression data with upstream genes, selecting edges in the protein-protein interaction network that best explain the changes across all perturbations. The result is a putative set of protein interactions that succinctly summarizes the results from all experiments, highlighting the network elements unique to each perturbation. We evaluated SAMNet in both yeast and human datasets. The yeast dataset measured the cellular response to seven different transition metals, and the human dataset measured cellular changes in four different lung cancer models of Epithelial-Mesenchymal Transition (EMT), a crucial process in tumor metastasis. SAMNet was able to identify canonical yeast metal-processing genes unique to each commodity in the yeast dataset, as well as human genes such as β-catenin and TCF7L2/TCF4 that are required for EMT signaling but escaped detection in the mRNA and phospho-proteomic data. Moreover, SAMNet also highlighted drugs likely to modulate EMT, identifying a series of less canonical genes known to be affected by the BCR-ABL inhibitor imatinib (Gleevec), suggesting a possible influence of this drug on EMT.

Gosline, Sara JC; Spencer, Sarah J; Ursu, Oana; Fraenkel, Ernest

2012-01-01

311

Quantitative analysis of scale of aeromagnetic data raises questions about geologic-map scale

A recently published study has shown that small-scale geologic map data can reproduce mineral assessments made with considerably larger scale data. This result contradicts conventional wisdom about the importance of scale in mineral exploration, at least for regional studies. In order to formally investigate aspects of scale, a weights-of-evidence analysis using known gold occurrences and deposits in the Central Lapland Greenstone Belt of Finland as training sites provided a test of the predictive power of the aeromagnetic data. These orogenic-mesothermal-type gold occurrences and deposits have strong lithologic and structural controls associated with long (up to several kilometers), narrow (up to hundreds of meters) hydrothermal alteration zones with associated magnetic lows. The aeromagnetic data were processed using conventional geophysical methods of successive upward continuation simulating terrain clearance or 'flight height' from the original 30 m to an artificial 2000 m. The analyses show, as expected, that the predictive power of aeromagnetic data, as measured by the weights-of-evidence contrast, decreases with increasing flight height. Interestingly, the Moran autocorrelation of aeromagnetic data representing differing flight heights, that is, spatial scales, decreases with decreasing resolution of the source data. The Moran autocorrelation coefficient seems to be another measure of the quality of aeromagnetic data for predicting exploration targets. © Springer Science+Business Media, LLC 2007.
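The weights-of-evidence contrast used as the yardstick above has a simple form: C = W⁺ − W⁻, where W⁺ and W⁻ are log-likelihood ratios for a binary predictor pattern being present or absent at deposit sites. A sketch with made-up toy counts (not the Lapland data):

```python
import math

def woe_contrast(n_bd, n_d, n_b, n_total):
    """Weights of evidence for a binary predictor pattern B and deposits D.
    n_bd: deposits inside B; n_d: total deposits;
    n_b: cells covered by B; n_total: total study-area cells."""
    p_b_given_d = n_bd / n_d                           # P(B | deposit)
    p_b_given_nd = (n_b - n_bd) / (n_total - n_d)      # P(B | no deposit)
    w_plus = math.log(p_b_given_d / p_b_given_nd)
    w_minus = math.log((1 - p_b_given_d) / (1 - p_b_given_nd))
    return w_plus - w_minus                            # the contrast C

# Toy example: 8 of 10 deposits fall inside a pattern covering 100 of 1000 cells.
C = woe_contrast(n_bd=8, n_d=10, n_b=100, n_total=1000)
```

A larger positive C means the pattern discriminates deposit sites more strongly, which is why the contrast decays as upward continuation smooths away the short-wavelength anomalies tied to the alteration zones.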

Nykanen, V.; Raines, G. L.

2006-01-01

312

Heteronuclear NMR spectroscopy is an extremely powerful tool for determining the structures of organic molecules and is of particular significance in the structural analysis of proteins. In order to leverage the method's potential for structural investigations, obtaining high-resolution NMR spectra is essential and this is generally accomplished by using very homogeneous magnetic fields. However, there are several situations where magnetic field distortions and thus line broadening is unavoidable, for example, the samples under investigation may be inherently heterogeneous, and the magnet's homogeneity may be poor. This line broadening can hinder resonance assignment or even render it impossible. We put forth a new class of pulse sequences for obtaining high-resolution heteronuclear spectra in magnetic fields with unknown spatial variations based on distant dipolar field modulations. This strategy's capabilities are demonstrated with the acquisition of high-resolution 2D gHSQC and gHMBC spectra. These sequences' performances are evaluated on the basis of their sensitivities and acquisition efficiencies. Moreover, we show that by encoding and decoding NMR observables spatially, as is done in ultrafast NMR, an extra dimension containing J-coupling information can be obtained without increasing the time necessary to acquire a heteronuclear correlation spectrum. Since the new sequences relax magnetic field homogeneity constraints imposed upon high-resolution NMR, they may be applied in portable NMR sensors and studies of heterogeneous chemical and biological materials. PMID:24607822

Zhang, Zhiyong; Huang, Yuqing; Smith, Pieter E S; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong

2014-05-01

313

Multi-dimensional combustor flowfield analyses in gas-gas rocket engine

NASA Technical Reports Server (NTRS)

The objectives of the present research are to improve design capabilities for low thrust rocket engines through understanding of the detailed mixing and combustion processes. Of particular interest is a small gaseous hydrogen-oxygen thruster which is considered a coordinated part of an on-going experimental program at NASA LeRC. Detailed computational modeling requires the application of the full three-dimensional Navier-Stokes equations, coupled with species diffusion equations. The numerical procedure employs both time-marching and time-accurate algorithms, using an LU approximate factorization in time and flux-split upwind differencing in space. The emphasis in this paper is on using numerical analysis to understand detailed combustor flowfields, including the shear layer dynamics created between the fuel film cooling and the core gas in the vicinity of the combustor wall; the integrity and effectiveness of the coolant film; three-dimensional fuel jet injection/mixing/combustion characteristics; and their impact on global engine performance.

Tsuei, Hsin-Hua; Merkle, Charles L.

1994-01-01

314

NASA Astrophysics Data System (ADS)

Heteronuclear NMR spectroscopy is an extremely powerful tool for determining the structures of organic molecules and is of particular significance in the structural analysis of proteins. In order to leverage the method’s potential for structural investigations, obtaining high-resolution NMR spectra is essential and this is generally accomplished by using very homogeneous magnetic fields. However, there are several situations where magnetic field distortions and thus line broadening is unavoidable, for example, the samples under investigation may be inherently heterogeneous, and the magnet’s homogeneity may be poor. This line broadening can hinder resonance assignment or even render it impossible. We put forth a new class of pulse sequences for obtaining high-resolution heteronuclear spectra in magnetic fields with unknown spatial variations based on distant dipolar field modulations. This strategy’s capabilities are demonstrated with the acquisition of high-resolution 2D gHSQC and gHMBC spectra. These sequences’ performances are evaluated on the basis of their sensitivities and acquisition efficiencies. Moreover, we show that by encoding and decoding NMR observables spatially, as is done in ultrafast NMR, an extra dimension containing J-coupling information can be obtained without increasing the time necessary to acquire a heteronuclear correlation spectrum. Since the new sequences relax magnetic field homogeneity constraints imposed upon high-resolution NMR, they may be applied in portable NMR sensors and studies of heterogeneous chemical and biological materials.

Zhang, Zhiyong; Huang, Yuqing; Smith, Pieter E. S.; Wang, Kaiyu; Cai, Shuhui; Chen, Zhong

2014-05-01

315

SINEX: SCALE shielding analysis GUI for X-Windows

SINEX (SCALE Interface Environment for X-windows) is an X-Windows graphical user interface (GUI) that is being developed for performing SCALE radiation shielding analyses. SINEX enables the user to generate input for the SAS4/MORSE and QADS/QAD-CGGP shielding analysis sequences in SCALE. The code features will facilitate the use of both analytical sequences with a minimum of additional user input. Included in SINEX is the capability to check the geometry model by generating two-dimensional (2-D) color plots of the geometry model using a new version of the SCALE module, PICTURE. The most sophisticated feature, however, is the 2-D visualization display that provides a graphical representation on screen as the user builds a geometry model. This capability to interactively build a model will significantly increase user productivity and reduce user errors. SINEX will perform extensive error checking and will allow users to execute SCALE directly from the GUI. The interface will also provide direct on-line access to the SCALE manual.

Browman, S.M.; Barnett, D.L.

1997-12-01

316

Handbook of Scaling Methods in Aquatic Ecology: Measurement, Analysis, Simulation

NASA Astrophysics Data System (ADS)

Researchers in aquatic sciences have long been interested in describing temporal and biological heterogeneities at different observation scales. During the 1970s, scaling studies received a boost from the application of spectral analysis to ecological sciences. Since then, new insights have evolved in parallel with advances in observation technologies and computing power. In particular, during the last 2 decades, novel theoretical achievements were facilitated by the use of microstructure profilers, the application of mathematical tools derived from fractal and wavelet analyses, and the increase in computing power that allowed more complex simulations. The idea of publishing the Handbook of Scaling Methods in Aquatic Ecology arose out of a special session of the 2001 Aquatic Science Meeting of the American Society of Limnology and Oceanography. The publication of the book is timely because it compiles much of the work done in these last 2 decades. The book comprises three sections: measurements, analysis, and simulation. Each contains some review chapters and a number of more specialized contributions. The contents are multidisciplinary and focus on biological and physical processes and their interactions over a broad range of scales, from micro-layers to ocean basins. The handbook topics include high-resolution observation methodologies, as well as applications of different mathematical tools for analysis and simulation of spatial structures, time variability of physical and biological processes, and individual organism behavior. The scientific background of the authors is highly diverse, ensuring broad interest for the scientific community.

Marrasé, Celia

2004-03-01

317

NASA Astrophysics Data System (ADS)

SPECT3D is a multi-dimensional collisional-radiative code used to post-process the output from radiation-hydrodynamics (RH) and particle-in-cell (PIC) codes to generate diagnostic signatures (e.g. images, spectra) that can be compared directly with experimental measurements. This ability to post-process simulation code output plays a pivotal role in assessing the reliability of RH and PIC simulation codes and their physics models. SPECT3D has the capability to operate on plasmas in 1D, 2D, and 3D geometries. It computes a variety of diagnostic signatures that can be compared with experimental measurements, including: time-resolved and time-integrated spectra, space-resolved spectra and streaked spectra; filtered and monochromatic images; and X-ray diode signals. Simulated images and spectra can include the effects of backlighters, as well as the effects of instrumental broadening and time-gating. SPECT3D also includes a drilldown capability that shows where frequency-dependent radiation is emitted and absorbed as it propagates through the plasma towards the detector, thereby providing insights on where the radiation seen by a detector originates within the plasma. SPECT3D has the capability to model a variety of complex atomic and radiative processes that affect the radiation seen by imaging and spectral detectors in high energy density physics (HEDP) experiments. LTE (local thermodynamic equilibrium) or non-LTE atomic level populations can be computed for plasmas. Photoabsorption rates can be computed using either escape probability models or, for selected 1D and 2D geometries, multi-angle radiative transfer models. The effects of non-thermal (i.e. non-Maxwellian) electron distributions can also be included. To study the influence of energetic particles on spectra and images recorded in intense short-pulse laser experiments, the effects of both relativistic electrons and energetic proton beams can be simulated. 
SPECT3D is a user-friendly software package that runs on Windows, Linux, and Mac platforms. A parallel version of SPECT3D is supported for Linux clusters for large-scale calculations. We will discuss the major features of SPECT3D, and present example results from simulations and comparisons with experimental data.

MacFarlane, J. J.; Golovkin, I. E.; Wang, P.; Woodruff, P. R.; Pereyra, N. A.

2007-05-01

318

NASA Astrophysics Data System (ADS)

The low-cost and minimum health risks associated with ultrasound (US) have made ultrasonic imaging a widely accepted method to perform diagnostic and image-guided procedures. Despite the existence of 3D ultrasound probes, most analysis and diagnostic procedures are done by studying the B-mode images. Currently, multiple ultrasound probes include 6-DOF sensors that can provide positioning information. Such tracking information can be used to reconstruct a 3D volume from a set of 2D US images. Recent advances in ultrasound imaging have also shown that, directly from the streaming radio frequency (RF) data, it is possible to obtain additional information of the anatomical region under consideration including the elasticity properties. This paper presents a generic framework that takes advantage of current graphics hardware to create a low-latency system to visualize streaming US data while combining multiple tissue attributes into a single illustration. In particular, we introduce a framework that enables real-time reconstruction and interactive visualization of streaming data while enhancing the illustration with elasticity information. The visualization module uses two-dimensional transfer functions (2D TFs) to more effectively fuse and map B-mode and strain values into specific opacity and color values. On commodity hardware, our framework can simultaneously reconstruct, render, and provide user interaction at over 15 fps. Results with phantom and real-world medical datasets show the advantages and effectiveness of our technique with ultrasound data. In particular, our results show how two-dimensional transfer functions can be used to more effectively identify, analyze and visualize lesions in ultrasound images.
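A two-dimensional transfer function of the kind described can be sketched as a lookup table indexed by B-mode intensity and strain; the colour assignments below are invented for illustration, not those used in the paper:

```python
import numpy as np

# Illustrative 2D transfer function: rows index B-mode intensity (0-255),
# columns index strain (0-255); each cell stores an RGBA tuple.
tf = np.zeros((256, 256, 4), dtype=np.float32)
ii, jj = np.meshgrid(np.arange(256), np.arange(256), indexing="ij")
tf[..., 0] = jj / 255.0        # red channel rises with strain
tf[..., 2] = 1.0 - jj / 255.0  # blue channel for low-strain (soft) tissue
tf[..., 3] = ii / 255.0        # opacity follows echo intensity

def classify(bmode, strain):
    """Map co-registered B-mode and strain images to RGBA via the 2D TF."""
    return tf[bmode.astype(int), strain.astype(int)]
```

Because classification is a single table lookup per pixel, it maps naturally onto the graphics hardware the framework targets.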

Mann, David; Caban, Jesus J.; Stolka, Philipp J.; Boctor, Emad M.; Yoo, Terry S.

2011-03-01

319

Instrumentation development for multi-dimensional two-phase flow modeling

A multi-faceted instrumentation approach is described which has played a significant role in obtaining fundamental data for two-phase flow model development. This experimental work supports the development of a three-dimensional, two-fluid, four field computational analysis capability. The goal of this development is to utilize mechanistic models and fundamental understanding rather than rely on empirical correlations to describe the interactions in two-phase flows. The four fields (two dispersed and two continuous) provide a means for predicting the flow topology and the local variables over the full range of flow regimes. The fidelity of the model development can be verified by comparisons of the three-dimensional predictions with local measurements of the flow variables. Both invasive and non-invasive instrumentation techniques and their strengths and limitations are discussed. A critical aspect of this instrumentation development has been the use of a low pressure/temperature modeling fluid (R-134a) in a vertical duct which permits full optical access to visualize the flow fields in all two-phase flow regimes. The modeling fluid accurately simulates boiling steam-water systems. Particular attention is focused on the use of a gamma densitometer to obtain line-averaged and cross-sectional averaged void fractions. Hot-film anemometer probes provide data on local void fraction, interfacial frequency, bubble and droplet size, as well as information on the behavior of the liquid-vapor interface in annular flows. A laser Doppler velocimeter is used to measure the velocity of liquid-vapor interfaces in bubbly, slug and annular flows. Flow visualization techniques are also used to obtain a qualitative understanding of the two-phase flow structure, and to obtain supporting quantitative data on bubble size. Examples of data obtained with these various measurement methods are shown.

Kirouac, G.J.; Trabold, T.A.; Vassallo, P.F.; Moore, W.E.; Kumar, R. [Lockheed Martin Corp., Schenectady, NY (United States)

1999-06-01

320

Time scale analysis of a digital flight control system

NASA Technical Reports Server (NTRS)

In this paper, consideration is given to the fifth order discrete model of an aircraft (longitudinal) control system which possesses three slow (velocity, pitch angle and altitude) and two fast (angle of attack and pitch angular velocity) modes and exhibits a two-time scale property. Using the recent results of the time scale analysis of discrete control systems, the high-order discrete model is decoupled into low-order slow and fast subsystems. The results of the decoupled system are found to be in excellent agreement with those of the original system.
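The decoupling of a discrete two-time-scale system can be sketched with the standard quasi-steady-state reduction; the matrices below are invented illustrative values, not the fifth-order aircraft model of the paper:

```python
import numpy as np

# Hypothetical 3-slow / 2-fast discrete system (illustrative values only).
A11 = np.array([[0.99, 0.01, 0.00],
                [0.00, 0.98, 0.02],
                [0.01, 0.00, 0.97]])   # slow block: eigenvalues near 1
A12 = np.array([[0.05, 0.00],
                [0.00, 0.05],
                [0.02, 0.01]])
A21 = np.array([[0.01, 0.00, 0.00],
                [0.00, 0.01, 0.00]])
A22 = np.array([[0.10, 0.05],
                [0.00, 0.10]])         # fast block: eigenvalues near 0

# Quasi-steady-state approximation of the fast states,
# z_k ~ (I - A22)^-1 A21 x_k, gives the reduced slow subsystem.
As = A11 + A12 @ np.linalg.solve(np.eye(2) - A22, A21)

x_full = np.array([1.0, 0.5, -0.3])
z = np.zeros(2)
x_red = x_full.copy()
for _ in range(50):
    x_full, z = A11 @ x_full + A12 @ z, A21 @ x_full + A22 @ z
    x_red = As @ x_red
# The reduced slow trajectory tracks the full simulation closely.
```

The low-order slow model reproduces the slow states of the full system, mirroring the "excellent agreement" reported in the abstract.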

Naidu, D. S.; Price, D. B.

1986-01-01

321

Objective To evaluate the psychometric properties and clinical utility of the Chinese Multidimensional Health Assessment Questionnaire (MDHAQ-C) in patients with rheumatoid arthritis (RA) in China. Methods 162 RA patients were recruited in the evaluation process. The reliability of the questionnaire was tested by internal consistency and item analysis. Convergent validity was assessed by correlations of MDHAQ-C with the Health Assessment Questionnaire (HAQ), the 36-item Short-Form Health Survey (SF-36) and the Hospital Anxiety and Depression Scale (HAD). Discriminant validity was tested in groups of patients with varied disease activities and functional classes. To evaluate the clinical values, correlations were calculated between MDHAQ-C and indices of clinical relevance and disease activity. Agreement with the Disease Activity Score (DAS28) and Clinical Disease Activity Index (CDAI) was estimated. Results The Cronbach's alpha was 0.944 in the Function scale (FN) and 0.768 in the scale of psychological status (PS). The item analysis indicated that all the items of FN and PS are correlated at an acceptable level. MDHAQ-C correlated significantly with the questionnaires in most scales, and scores of scales differed significantly in groups of different disease activity and functional status. MDHAQ-C has moderate to high correlation with most clinical indices and high correlation with disease activity, with a Spearman coefficient of 0.701 for DAS28 and 0.843 for CDAI. The overall agreement of categories was satisfactory. Conclusion MDHAQ-C is a reliable, valid instrument for functional measurement and a feasible, informative quantitative index for busy clinical settings in Chinese RA patients.

Song, Yang; Zhu, Li-an; Wang, Su-li; Leng, Lin; Bucala, Richard; Lu, Liang-Jing

2014-01-01

322

NASA Astrophysics Data System (ADS)

The method of anchored distributions (MAD, Rubin et al., Water Resour. Res., 2010) is a Bayesian inversion technique that combines geostatistical concepts with a strategy for localization of data that is indirectly related to the target variables, using anchors. Anchors are statistical distributions of the target variables (e.g., the hydraulic conductivity) at specific locations. The variable field is described by the statistical distributions of structural parameters that characterize global features and by anchor distributions that are intended to capture local effects. The posterior distributions of structural and anchor parameter sets are used to update the approximate spatial distribution of the target variable and are generated by re-sampling the parameter sets using their normalized likelihood estimates as the probability of being selected. Increasing the dimension of the data, to include additional information in the likelihood estimate, increases the computational burden. Two measures are taken to accommodate the advantageous additional data without spurious side effects: (1) partitioning parameter sets into hypercubes, based upon the similarity of the structural parameter values; (2) principal component analysis, to reduce the dimensionality by discarding a certain percentage of principal components. As an additional feature for large sample sets, or faster calculation, a ‘bundling’ regime can be implemented. Bundling is employed immediately after partitioning the parameter sets into hypercubes. Bundling identifies spatial patterns amongst the realizations generated from the distributions defining the anchor parameters. The added organizational step allows data with reduced sample sizes to be passed to the PCA algorithm. The division of the data set allows for simple parallelization of the computation, and our case study achieved a three-fold dimension reduction.
Because of the high dimension involved in the calculation, without absurdly large sample sizes, it is reasonable to assume that the data sparsely populates the hyperspace. In order to avoid using an interpolation scheme that would average and smooth the likelihood distribution over extensive regions of unpopulated hyperspace, the data is scanned for clusters using the HOPACH algorithm authored by M. Van der Laan. The density is estimated, over the clusters, non-parametrically. The cluster approximations are summed up using a mixture model to achieve the final likelihood estimate.
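The PCA-based dimension reduction step can be sketched as follows; the variance threshold and the test data are illustrative assumptions, not the study's actual parameter sets:

```python
import numpy as np

def pca_reduce(X, keep=0.95):
    """Project rows of X onto the leading principal components that
    together explain at least `keep` of the total variance."""
    Xc = X - X.mean(axis=0)                 # centre the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)     # variance fraction per component
    k = int(np.searchsorted(np.cumsum(explained), keep)) + 1
    return Xc @ Vt[:k].T                    # scores in the reduced space
```

Discarding trailing components trades a controlled loss of variance for a much smaller space in which to estimate the likelihood.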

Over, M. W.; Murakami, H.; Hahn, M. S.; Yang, Y.; Rubin, Y.

2010-12-01

323

New Criticality Safety Analysis Capabilities in SCALE 5.1

Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeno.

Bowman, Stephen M [ORNL; DeHart, Mark D [ORNL; Dunn, Michael E [ORNL; Goluoglu, Sedat [ORNL; Horwedel, James E [ORNL; Petrie Jr, Lester M [ORNL; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL

2007-01-01

324

MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 A

Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 A using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed by a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with the observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, while in the line core only the line-center polarization can be reproduced using the Hanle effect.
For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with inhomogeneities in the chromosphere are required.

Anusha, L. S.; Nagendra, K. N., E-mail: bhasari@mps.mpg.de, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bangalore 560 034 (India)

2013-04-20

325

In longitudinal studies, a quantitative outcome (such as blood pressure) may be altered during follow-up by the administration of a non-randomized, non-trial intervention (such as anti-hypertensive medication) that may seriously bias the study results. Current methods mainly address this issue for cross-sectional studies. For longitudinal data, the current methods are either restricted to a specific longitudinal data structure or are valid only under special circumstances. We propose two new methods for estimation of covariate effects on the underlying (untreated) general longitudinal outcomes: a single imputation method employing a modified expectation-maximization (EM)-type algorithm and a multiple imputation (MI) method utilizing a modified Monte Carlo EM-MI algorithm. Each method can be implemented as one-step, two-step, and full-iteration algorithms. They combine the advantages of the current statistical methods while reducing their restrictive assumptions and generalizing them to realistic scenarios. The proposed methods replace intractable numerical integration of a multi-dimensionally censored MVN posterior distribution with a simplified, sufficiently accurate approximation. It is particularly attractive when outcomes reach a plateau after intervention due to various reasons. Methods are studied via simulation and applied to data from the Diabetes Control and Complications Trial/Epidemiology of Diabetes Interventions and Complications study of treatment for type 1 diabetes. Methods proved to be robust to high dimensions, large amounts of censored data, low within-subject correlation, and when subjects receive non-trial intervention to treat the underlying condition only (with high Y), or for treatment in the majority of subjects (with high Y) in combination with prevention for a small fraction of subjects (with normal Y). PMID:24258796

Sun, Wanjie; Larsen, Michael D; Lachin, John M

2014-04-15

326

NASA Astrophysics Data System (ADS)

The multiconfiguration time-dependent Hartree (MCTDH) method is an algorithm for propagating multi-dimensional wavepackets [U. Manthe, H.-D. Meyer, and L. S. Cederbaum, J. Chem. Phys. 97, 3199 (1992); M. H. Beck, A. Jäckle, G. A. Worth, and H.-D. Meyer, Phys. Rep. 324, 1 (2000)] or density operators [A. Raab and H.-D. Meyer, J. Chem. Phys. 112, 10718 (2000)]. This algorithm is briefly introduced and its efficiency is explained. In short, the efficiency originates from using a variationally optimized time-dependent basis set. The efficiency of MCTDH is best demonstrated through the calculations on the absorption spectrum of pyrazine. There the correlated motion of all 24 modes evolving on two coupled electronic surfaces is treated with high accuracy [G. Worth, H.-D. Meyer, and L. S. Cederbaum, J. Chem. Phys. 109, 3518 (1998); A. Raab, G. Worth, H.-D. Meyer, and L. S. Cederbaum, J. Chem. Phys. 110, 936 (1999)]. To emphasize the very large gain obtained here we note that the underlying primitive basis consists of 10^21 points, whereas only 3.8 x 10^6 configurations are needed for convergence. MCTDH has been applied to a wide range of molecular processes, including photodissociation, surface scattering, reactive scattering and the computation of vibrational spectra. Recently MCTDH has been used to study the well known spin-boson model [H. Wang, J. Chem. Phys. 113, 9948 (2000)], including up to 80 vibrational modes. The present talk will discuss an application of MCTDH to a high dimensional spin system. Further information on MCTDH can be found on the web site http://www.pci.uni-heidelberg.de/tc/usr/mctdh/

Meyer, Hans-Dieter

2003-03-01

327

Scaling Laws in Canopy Flows: A Wind-Tunnel Analysis

NASA Astrophysics Data System (ADS)

An analysis of velocity statistics and spectra measured above a wind-tunnel forest model is reported. Several measurement stations downstream of the forest edge have been investigated and it is observed that, while the mean velocity profile adjusts quickly to the new canopy boundary condition, the turbulence lags behind and shows a continuous penetration towards the free stream along the canopy model. The statistical profiles illustrate this growth and do not collapse when plotted as a function of the vertical coordinate. However, when the statistics are plotted as a function of the local mean velocity (normalized with a characteristic velocity scale), they do collapse, independently of the streamwise position and freestream velocity. A new scaling for the spectra of all three velocity components is proposed based on the velocity variance and integral time scale. This normalization improves the collapse of the spectra compared to existing scalings adopted in atmospheric measurements, and allows the determination of a universal function that provides the velocity spectrum. Furthermore, a comparison of the proposed scaling laws for two different canopy densities is shown, demonstrating that the vertical velocity variance is the statistical quantity most sensitive to the characteristics of the canopy roughness.

Segalini, Antonio; Fransson, Jens H. M.; Alfredsson, P. Henrik

2013-08-01

328

Language pyramid and multi-scale text analysis

The classical Bag-of-Word (BOW) model represents a document as a histogram of word occurrence, losing the spatial information that is invaluable for many text analysis tasks. In this paper, we present the Language Pyramid (LaP) model, which casts a document as a probabilistic distribution over the joint semantic-spatial space and motivates a multi-scale 2D local smoothing framework for nonparametric text
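The bag-of-word baseline that the LaP model improves upon can be stated in a few lines; the vocabulary handling below is a generic sketch, not the paper's implementation:

```python
from collections import Counter

def bow(tokens, vocab):
    """Bag-of-words: a histogram of word occurrences over a fixed vocabulary.
    All spatial (positional) information in the token sequence is discarded."""
    counts = Counter(t for t in tokens if t in vocab)
    return [counts[w] for w in vocab]
```

Any permutation of the tokens yields the same histogram, which is exactly the loss of spatial information the abstract points out.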

Shuang-Hong Yang; Hongyuan Zha

2010-01-01

329

Analysis of spatial variability of hydraulic conductivity at field scale

Gupta, N., Rudra, R.P. and Parkin, G. 2006. Analysis of spatial variability of hydraulic conductivity at field scale. Canadian Biosystems Engineering/Le génie des biosystèmes au Canada 48: 1.55 - 1.62. Hydraulic conductivity, a vital input parameter in physically based rainfall-runoff modeling approaches, determined by double ring infiltrometer and Guelph permeameter along and across the slope of a field, was analyzed

N. Gupta; R. P. Rudra; G. Parkin

330

Crime Analysis at Multiple Scales of Aggregation: A Topological Approach

Patterns in crime vary quite substantially at different scales of aggregation, in part because data tend to be organized around standardized, artificially defined units of measurement such as the census tract, the city boundary, or larger administrative or political boundaries. The boundaries that separate units of data often obscure the detailed spatial patterns and muddy analysis. These aggregation units have

Patricia L. Brantingham; Paul J. Brantingham; Mona Vajihollahi; Kathryn Wuschke

331

Renormalisation of Noncommutative φ4 Theory by Multi-Scale Analysis

In this paper we give a much more efficient proof that the real Euclidean φ4-model on the four-dimensional Moyal plane is renormalisable to all orders. We prove rigorous bounds on the propagator which complete the previous renormalisation proof based on renormalisation group equations for non-local matrix models. On the other hand, our bounds permit a powerful multi-scale analysis of the

Vincent Rivasseau; Fabien Vignes-Tourneret; Raimar Wulkenhaar

2006-01-01

332

The scale analysis sequence for LWR fuel depletion

The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs.
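The alternation SAS2H performs between spectral (cross-section) updates and point depletion can be caricatured in a toy two-nuclide loop; all numbers and the feedback form below are invented for illustration and have no relation to real nuclear data:

```python
import numpy as np

# Toy coupling: a mock "spectral" update produces a burnup-dependent
# one-group cross section, which drives the next depletion step.
N = np.array([1.0, 0.0])   # densities: fissile nuclide, absorber product
phi, dt = 1.0, 0.5         # flux and time step (arbitrary units)
for step in range(10):
    sigma = 0.05 / (1.0 + 0.1 * N[1])   # cross section updated with burnup
    burn = sigma * phi * N[0]           # reaction rate on the fissile nuclide
    N = N + dt * np.array([-burn, burn])
# Mass is conserved while the fissile inventory depletes.
```

The essential structure is the same: each depletion step consumes cross sections produced from the composition of the previous step.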

Hermann, O.W.; Parks, C.V.

1991-01-01

333

A Bayesian analysis of scale-invariant processes

NASA Astrophysics Data System (ADS)

We have demonstrated that the Maximum Entropy (ME) principle in the context of Bayesian probability theory can be used to derive the probability distributions of those processes characterized by their scaling properties, including multiscaling moments and geometric mean. We started from a proof-of-concept case of a power-law probability distribution, followed by the general case of multifractality aided by the wavelet representation of the cascade model. The ME formalism leads to the probability distribution of the multiscaling parameter and those of incremental multifractal processes at different scales. Compared to other methods, the ME method significantly reduces computational cost by leaving out unimportant details. The ME distributions have been evaluated against the empirical histograms derived from the drainage area of river networks, soil moisture and topography. This analysis supports the assertion that the ME principle is a universal and unified framework for modeling processes governed by scale-invariant laws. The ME theory opens new possibilities of extracting information of multifractal processes beyond the scales of observation.
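The power-law proof-of-concept case corresponds to a standard constrained entropy maximization; a sketch of the derivation, assuming the constrained quantity is the mean of ln x:

```latex
\max_{p}\; -\int p(x)\,\ln p(x)\,dx
\quad\text{subject to}\quad
\int p(x)\,dx = 1, \qquad \int p(x)\,\ln x\,dx = \mu .
```

Setting the functional derivative of the Lagrangian to zero gives

```latex
p(x) = e^{-1-\lambda_0}\, e^{-\lambda_1 \ln x} = C\, x^{-\lambda_1},
```

i.e. a power law whose exponent is fixed by the constraint value $\mu$.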

Nieves, Veronica; Wang, Jingfeng; Bras, Rafael L.

2012-05-01

334

Quantitative analysis of scale sensitivity in geographic cellular automata

NASA Astrophysics Data System (ADS)

The Geographical Cellular Automata (GCA) approach is based on complexity theory and is widely used in geospatial modeling. A reason for the increasing attention given to GCA models is that they can easily be integrated with a raster-based GIS environment. However, the behavior of the GCA models is affected by uncertainties arising from the interaction between model elements, structures, and the quality of data sources used as model input. The objective of this study is to examine the impacts of model elements on the generated outputs of a GIS-based GCA land-use growth model using a sensitivity analysis (SA) approach. The proposed SA method consists of the KAPPA index with different spatial metrics. A stochastic GCA model was built to model land use change in the Changsha region (Hunan, China). The transition rules were empirically derived from four Landsat-TM (30m resolution) images taken in 1996, 1999, 2002 and 2005 that have been resampled to four resolutions (30, 60, 90, 120m). Five different neighbourhood configurations were considered (Moore, Von Neumann, and circular approximations of 2, 3 and 4 cell radii). Simulations were performed for each of the twenty spatial scale scenarios. Results show that spatial scale has a considerable impact on simulation dynamics in terms of both land use area and spatial structure. The spatial scale domains present in the results reveal the nonlinear relationships that link the spatial scale components to the simulation results.
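The KAPPA agreement index underlying such a sensitivity analysis can be sketched as Cohen's kappa computed from a confusion matrix of two categorical rasters; the toy maps in the test are illustrative, not the study's land-use data:

```python
import numpy as np

def kappa(map_a, map_b, n_classes):
    """Cohen's kappa agreement between two categorical raster maps."""
    conf = np.zeros((n_classes, n_classes))
    for i, j in zip(map_a.ravel(), map_b.ravel()):
        conf[i, j] += 1                      # confusion matrix
    conf /= conf.sum()
    po = np.trace(conf)                      # observed agreement
    pe = conf.sum(axis=1) @ conf.sum(axis=0) # agreement expected by chance
    return (po - pe) / (1.0 - pe)
```

Identical maps score 1, chance-level agreement scores near 0, and systematic disagreement is negative.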

Zhang, Honghui; Zeng, Yongnian; Yin, Changlin; Jin, Xiaobin; Chen, Guanghui; You, Shenjin; Zou, Bin

2008-11-01

335

Scaling analysis for the investigation of slip mechanisms in nanofluids.

The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene glycol-based nanofluids is analyzed by considering shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are dominant in particles of cylindrical shape as compared to that of spherical and sheet particles. The magnitudes of slip mechanisms are found to be higher for particles of size between 10 and 80 nm. The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fraction. However, the drag force is found to dominate in smaller particles below 10 nm and at higher volume fraction. The effect of thermophoresis and Magnus forces is found to increase with the particle size and concentration. In terms of time scales, the Brownian and gravity forces act considerably over a longer duration than the other forces. For copper-water-based nanofluid, the effective contribution of slip mechanisms leads to a heat transfer augmentation which is approximately 36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it. PMID:21791036
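The strong size dependence of the Brownian slip mechanism can be illustrated with the Stokes-Einstein relation, D = k_B T / (3 pi mu d); this back-of-envelope sketch (nominal water properties, not the paper's full scaling analysis) shows why Brownian motion dominates for particles below about 10 nm:

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def brownian_diffusivity(d_m, T=300.0, mu=8.9e-4):
    """Stokes-Einstein diffusivity of a sphere of diameter d_m in a fluid
    of dynamic viscosity mu (defaults: water near 300 K, assumed values)."""
    return KB * T / (3.0 * math.pi * mu * d_m)

D_5nm = brownian_diffusivity(5e-9)
D_80nm = brownian_diffusivity(80e-9)
ratio = D_5nm / D_80nm   # diffusivity scales as 1/d, so 80/5 = 16
```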

Savithiri, S; Pattamatta, Arvind; Das, Sarit K

2011-01-01

337

Analysis of the Spatial Scaling Characteristics of Snow Depth

NASA Astrophysics Data System (ADS)

Directional spectral analyses were conducted for LIDAR (LIght Detection And Ranging) snow depths measured in six of the nine 1-km² Intensive Study Areas (ISAs) of NASA's Cold Land Processes Experiment (CLPX) in the spring of 2003 (April 8-9, 2003). The six study areas analyzed are located in the Fraser and Rabbit Ears Mesoscale Study Areas of the project in the state of Colorado. The snow depth power spectra were compared to the spectra of bare ground elevations (topography) and elevations filtered to the top of vegetation (topography + vegetation). The log power spectral density of snow depth versus log of frequency (f) presents two distinct slopes, with scale breaks at wavelengths between 6 m and 45 m. The average fractal dimensions for the study areas range between 1.80 and 2.30 for the low-frequency intervals, and between 0.79 and 1.03 for the high-frequency intervals, indicating spatial self-similarity in the snow depth fields. The scale breaks observed in the power spectra of snow depth are not present in the power spectra of topography and/or topography + vegetation, and the slopes of the snow depth spectra differ from those of topography and topography + vegetation. The observed breaks in the power spectra of snow depth are therefore not explained by the underlying topography and vegetation; they must be the product of a switch in the dominant process(es) driving the variability of the snow cover properties at these scales. Potential physical causes of the scale breaks will be presented based on further analysis of snow depth data and additional variables. The spatial variability of snow depth at scales smaller than the observed scale breaks is controlled, among other factors, by the interaction of wind, vegetation, and small topographic features. At larger scales, this variability is controlled by precipitation patterns, short- and long-wave radiation, aspect, slope, and wind, among others. These differences are analyzed to explain the characteristics observed in the power spectra of snow depth.
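The connection between a log-log spectral slope beta and a fractal dimension can be sketched on a synthetic transect; the sketch below assumes the common self-affine convention D = (5 - beta)/2 for a 1-D profile, which may differ from the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta_true = 2**14, 2.0

# Synthesize a profile with power spectrum ~ f**(-beta): deterministic
# amplitudes with random phases, then inverse-transform to real space.
freqs = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(freqs)
amp[1:] = freqs[1:] ** (-beta_true / 2.0)
phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
profile = np.fft.irfft(amp * np.exp(1j * phases), n)

# Estimate the spectral slope by least squares in log-log space,
# excluding the DC and Nyquist bins.
psd = np.abs(np.fft.rfft(profile)) ** 2
slope, _ = np.polyfit(np.log(freqs[1:-1]), np.log(psd[1:-1]), 1)
beta_hat = -slope
fractal_dim = (5.0 - beta_hat) / 2.0
```

For beta = 2 the assumed relation gives D = 1.5, midway between a smooth curve (D = 1) and a plane-filling one (D = 2).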

Trujillo, E.; Ramírez, J. A.; Elder, K. J.

2005-12-01

338

Multi-Scale Fractal Analysis of Image Texture and Pattern

NASA Technical Reports Server (NTRS)

Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
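One common way to measure fractal dimension from imagery is box counting; this generic sketch (not the ICAMS implementation) counts occupied boxes at several scales and fits the log-log slope:

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16, 32)):
    """Box-counting dimension of a binary 2-D mask."""
    counts = []
    h, w = mask.shape
    for s in sizes:
        # Trim so the grid tiles evenly, then count boxes with any occupancy.
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(
            trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    # N(s) ~ s**(-D), so D is the slope of log N against log(1/s).
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

filled = np.ones((256, 256), dtype=bool)   # a filled plane has D = 2
dim = box_count_dimension(filled)
```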

Emerson, Charles W.

1998-01-01

339

The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft³/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough.
Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would likely fill in during high flows. Extending Bridge 339 would accommodate higher discharges and re-align flow to the bridge.

Brabets, Timothy P.; Conaway, Jeffrey S.

2009-01-01

340

A multi-dimensional hydrodynamic model was applied to aid in the assessment of the potential hazard posed to the uranium mill tailings near Moab, Utah, by flooding in the Colorado River as it flows through Moab Valley. Discharge estimates for the 100- and 500-year recurrence interval and for the Probable Maximum Flood (PMF) were evaluated with the model for the existing channel geometry. These discharges also were modeled for three other channel-deepening configurations representing hypothetical scour of the channel at the downstream portal of Moab Valley. Water-surface elevation, velocity distribution, and shear-stress distribution were predicted for each simulation. The hydrodynamic model was developed from measured channel topography and over-bank topographic data acquired from several sources. A limited calibration of the hydrodynamic model was conducted. The extensive presence of tamarisk or salt cedar in the over-bank regions of the study reach presented challenges for determining roughness coefficients. Predicted water-surface elevations for the current channel geometry indicated that the toe of the tailings pile would be inundated by about 4 feet by the 100-year discharge and 25 feet by the PMF discharge. A small area at the toe of the tailings pile was characterized by velocities of about 1 to 2 feet per second for the 100-year discharge. Predicted velocities near the toe for the PMF discharge increased to between 2 and 4 feet per second over a somewhat larger area. The manner to which velocities progress from the 100-year discharge to the PMF discharge in the area of the tailings pile indicates that the tailings pile obstructs the over-bank flow of flood discharges. The predicted path of flow for all simulations along the existing Colorado River channel indicates that the current distribution of tamarisk in the over-bank region affects how flood-flow velocities are spatially distributed. 
Shear-stress distributions were predicted throughout the study reach for each discharge and channel geometry examined. Material transport was evaluated by applying these shear-stress values to empirically determined critical shear-stress values for grain sizes ranging from very fine sands to very coarse gravels.
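Comparing predicted shear stresses against critical values can be sketched with the depth-slope product and a Shields-type threshold; the Shields parameter of 0.06 and the flow values below are illustrative assumptions, not numbers from the report:

```python
RHO_W = 1000.0   # water density, kg/m^3
RHO_S = 2650.0   # quartz sediment density, kg/m^3
G = 9.81         # gravity, m/s^2

def bed_shear_stress(depth_m, slope):
    """Reach-averaged boundary shear stress from the depth-slope product."""
    return RHO_W * G * depth_m * slope

def critical_shear_stress(d_m, shields=0.06):
    """Shields-type critical shear stress for grain diameter d_m;
    shields=0.06 is a commonly quoted nominal value, not from the report."""
    return shields * (RHO_S - RHO_W) * G * d_m

tau = bed_shear_stress(depth_m=3.0, slope=0.001)   # hypothetical flow
tau_c_sand = critical_shear_stress(0.5e-3)         # medium sand grain
mobile = tau > tau_c_sand                          # grain is entrained
```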

Kenney, Terry A.

2005-01-01

341

Exploratory factor analysis of African Self-Consciousness Scale scores.

This study replicates and extends prior studies of the dimensionality, convergent validity, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large, gender-balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly and were labeled "Embracing African Heritage." The second subscale of the two-factor solution was labeled "Refusal to Deny African Heritage." Only the structural validity of the first factor of the two-factor solution was fully consistent with prior findings. Partial evidence of convergent validity was found for all factors, and only the second factor of the two-factor solution received external validity support. Implications for usage of the African Self-Consciousness Scale and recommendations for further investigation are discussed. PMID:21393316
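A first-pass factor extraction of the kind underlying EFA can be sketched from a correlation matrix via an eigendecomposition. This is a simplified principal-axis step; production EFA iterates communality estimates and applies rotation, and the correlation matrix below is hypothetical, not the study's data:

```python
import numpy as np

def principal_factor_loadings(R, n_factors):
    """First-pass principal-axis loadings from a correlation matrix R:
    loadings are the leading eigenvectors scaled by sqrt(eigenvalue)."""
    vals, vecs = np.linalg.eigh(R)          # ascending order
    order = np.argsort(vals)[::-1]          # sort descending
    vals, vecs = vals[order], vecs[:, order]
    return vecs[:, :n_factors] * np.sqrt(vals[:n_factors])

# Hypothetical correlation matrix of four items loading on one factor.
R = np.array([
    [1.0, 0.6, 0.5, 0.5],
    [0.6, 1.0, 0.5, 0.5],
    [0.5, 0.5, 1.0, 0.6],
    [0.5, 0.5, 0.6, 1.0],
])
loadings = principal_factor_loadings(R, 1)
```

For this matrix the leading eigenvalue is 2.6 with eigenvector proportional to (1, 1, 1, 1), so each item loads about 0.81 (sign is arbitrary).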

Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C

2012-03-01

342

SCALE 6: Comprehensive Nuclear Safety Analysis Code System

Version 6 of the Standardized Computer Analyses for Licensing Evaluation (SCALE) computer software system developed at Oak Ridge National Laboratory, released in February 2009, contains significant new capabilities and data for nuclear safety analysis and marks an important update for this software package, which is used worldwide. This paper highlights the capabilities of the SCALE system, including continuous-energy flux calculations for processing multigroup problem-dependent cross sections, ENDF/B-VII continuous-energy and multigroup nuclear cross-section data, continuous-energy Monte Carlo criticality safety calculations, Monte Carlo radiation shielding analyses with automated three-dimensional variance reduction techniques, one- and three-dimensional sensitivity and uncertainty analyses for criticality safety evaluations, two- and three-dimensional lattice physics depletion analyses, fast and accurate source terms and decay heat calculations, automated burnup credit analyses with loading curve search, and integrated three-dimensional criticality accident alarm system analyses using coupled Monte Carlo criticality and shielding calculations.

Bowman, Stephen M [ORNL

2011-01-01

343

Underground tank vitrification: Field scale experiments and computational analysis

In situ vitrification (ISV) is a thermal waste remediation process developed by researchers at Pacific Northwest Laboratory (PNL) for stabilization and treatment of soils contaminated with hazardous, radioactive or mixed wastes. Many underground tanks containing radioactive and hazardous chemical wastes at US Department of Energy (DOE) sites will soon require remediation. Recent development activities have been pursued to determine if the ISV process is applicable to underground storage tanks. As envisioned, ISV will convert the tank, tank contents, and associated contaminated soil to a glass and crystalline block. Development activities include testing and demonstration on three scales, and computational modeling and evaluation. A description of engineering solutions implemented at the field scale to mitigate unique problems posed by ISV of a confined underground structure, along with the associated computational analysis, is given in the paper.

Tixier, J.S.; Jeffs, J.T.; Thompson, L.E.

1992-06-01

344

Multidimensional multi-scale image enhancement algorithm

To address low-contrast, dark images caused by uneven illumination, this paper proposes a multi-dimensional multi-scale image processing algorithm based on multi-scale Retinex theory combined with a constructed conduction function. Simulation results show that the algorithm performs much better than classical image processing methods such ...

Liu Yong; Yang Ping Xian

2010-01-01

345

National Technical Information Service (NTIS)

With an objective of developing advanced materials by making the materials micronized and multi-layered up to mesoscopic scales, studies were made on nanometer scale analysis of ultra-fine structure. The mesoscopic scales include sizes from nanometers to ...

1994-01-01

346

Hopf Bifurcation Analysis for Depth Control of Submersible Vehicles.

National Technical Information Service (NTIS)

Control of a modern submarine is a multi-dimensional problem coupling initial stability, hydrodynamic and control system response. The loss of stability at moderate to high speeds is examined using a nonlinear Hopf bifurcation analysis. Complete linear st...

C. A. Bateman

1993-01-01

347

Small scale wind perturbation analysis for vertically rising launch vehicles

NASA Technical Reports Server (NTRS)

This paper discusses the determination of small-scale vertical wind spectra used with space flight and ballistic technology. In particular, Jimsphere, a precision balloon wind sensor with high radar reflectivity is considered. Gross wind velocity data is analyzed to subtract the steady-state wind and wind change-shear effects. A residue of small wind perturbations is left in the horizontal (scalar) along the vertical direction. An analysis leading to formulation of the covariance function with altitude is presented. The function is decoupled to yield an almost periodic representation of the vertical wind perturbations. Forcing functions are determined when the representation is coupled with the vehicle velocity characteristics.

Chenoweth, H. B.

1980-01-01

348

Large-scale analysis of network bistability for human cancers.

Protein-protein interaction and gene regulatory networks are likely to be locked in a state corresponding to a disease by the behavior of one or more bistable circuits exhibiting switch-like behavior. Sets of genes could be over-expressed or repressed when anomalies due to disease appear, and the circuits responsible for this over- or under-expression might persist for as long as the disease state continues. This paper shows how a large-scale analysis of network bistability for various human cancers can identify genes that can potentially serve as drug targets or diagnosis biomarkers. PMID:20628618
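Switch-like bistability of the kind invoked here can be sketched with a minimal self-activating circuit: a Hill-type production term plus linear degradation yields two stable expression states separated by an unstable one. The parameters are illustrative, not fitted to any cancer network:

```python
import numpy as np

def dxdt(x, basal=0.2, vmax=4.0, K=1.0, n=4, deg=1.0):
    """Self-activating gene with Hill kinetics: a classic bistable circuit.
    dx/dt = basal + vmax * x^n / (K^n + x^n) - deg * x."""
    return basal + vmax * x**n / (K**n + x**n) - deg * x

# Locate fixed points as sign changes of dx/dt on a fine grid.
xs = np.linspace(0.0, 10.0, 100_001)
f = dxdt(xs)
sign_changes = np.where(np.diff(np.sign(f)) != 0)[0]
fixed_points = xs[sign_changes]
# Stable fixed points are where dx/dt crosses from + to -.
stable = fixed_points[f[sign_changes] > 0]
```

With these parameters the circuit has three fixed points: a low "off" state, an unstable threshold, and a high "on" state, so a transient stimulus can lock expression on.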

Shiraishi, Tetsuya; Matsuyama, Shinako; Kitano, Hiroaki

2010-01-01

349

Dimensional analysis of small-scale steam explosion experiments

Dimensional analysis is applied to Nelson's small-scale steam explosion experiments to determine the qualitative effect of each relevant parameter for triggering a steam explosion. According to experimental results, the liquid entrapment model seems to be a consistent explanation for the steam explosion triggering mechanism. The three-dimensional oscillatory wave motion of the vapor/liquid interface is analyzed to determine the necessary conditions for local condensation and production of a coolant microjet to be entrapped in fuel. It is proposed that different contact modes between fuel and coolant may involve different initiation mechanisms of steam explosions.

Huh, K.; Corradini, M.L.

1986-05-01

350

A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system

The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin upstream of the confluence of the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the ...

Ken Mix

2010-01-01

351

The Seasonal Thermal Energy Storage Program is being conducted for the Department of Energy by Pacific Northwest Laboratory. A major thrust of this program has been the study of natural aquifers as hosts for thermal energy storage and retrieval. Numerical simulation of the nonisothermal response of the host media is fundamental to the evaluation of proposed experimental designs and field test results. This report represents the primary documentation for the coupled fluid, energy and solute transport (CFEST) code. Sections of this document are devoted to the conservation equations and their numerical analogues, the input data requirements, and the verification studies completed to date.

Gupta, S.K.; Kincaid, C.T.; Meyer, P.R.; Newbill, C.A.; Cole, C.R.

1982-08-01

352

ERIC Educational Resources Information Center

Primary education is essential for the economic development in any country. Most studies give more emphasis to the final output (such as literacy, enrolment etc.) rather than the delivery of the entire primary education system. In this paper, we study the school level data from an Indian district, collected under the official DISE statistics. We…

Sengupta, Atanu; Pal, Naibedya Prasun

2012-01-01

353

In the limit $\varepsilon \to 0$, a spike-layer solution is constructed for the reaction-diffusion equation $\varepsilon^2 \Delta u + Q(u) = 0$, $x \in D \subset \mathbb{R}^N$; $\varepsilon \partial_n u + b u = 0$, $x \in \partial D$, where $b > 0$ and $D$ is a bounded convex domain. Here $Q(u)$ is such that there exists a unique radially symmetric function $u_c(\varepsilon^{-1} r)$ ...

Michael J. Ward

1995-01-01

354

NASA Technical Reports Server (NTRS)

An enthalpy transforming scheme is proposed to convert the energy equation into a nonlinear equation with the enthalpy, E, being the single dependent variable. The existing control-volume finite-difference approach is modified so it can be applied to the numerical performance of Stefan problems. The model is tested by applying it to a three-dimensional freezing problem. The numerical results are in agreement with those existing in the literature. The model and its algorithm are further applied to a three-dimensional moving heat source problem showing that the methodology is capable of handling complicated phase-change problems with fixed grids.

Cao, Yiding; Faghri, Amir; Chang, Won Soon

1989-01-01

355

Three-dimensional flow measurements, obtained from high-resolution synchrotron-based X-ray phase contrast images of blood in vitro, are presented. Using data collected on beamline BL20XU at the SPring-8 synchrotron in Hyogo, Japan, we demonstrate the benefits to be gained by preprocessing of speckled X-ray phase contrast images prior to PIV analysis. Such preprocessing techniques include use of a ...

S. C. Irvine; D. M. Paganin; S. Dubsky; R. A. Lewis; A. Fouras

356

Understanding large scale HPC systems through scalable monitoring and analysis.

As HPC systems grow in size and complexity, diagnosing problems and understanding system behavior, including failure modes, becomes increasingly difficult and time consuming. At Sandia National Laboratories we have developed a tool, OVIS, to facilitate large scale HPC system understanding. OVIS incorporates an intuitive graphical user interface, an extensive and extendable data analysis suite, and a 3-D visualization engine that allows visual inspection of both raw and derived data on a geometrically correct representation of an HPC system. This talk will cover system instrumentation, data collection (including log files and the complications of meaningful parsing), analysis, visualization of both raw and derived information, and how data can be combined to increase system understanding and efficiency.

Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre; Wong, Matthew H.; Thompson, David; Gentile, Ann C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.

2010-09-01

357

A Multi-scale Approach to Urban Thermal Analysis

NASA Technical Reports Server (NTRS)

An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.

Gluch, Renne; Quattrochi, Dale A.

2005-01-01

358

Investigation of Biogrout processes by numerical analysis at pore scale

NASA Astrophysics Data System (ADS)

Biogrout is a soil improving process that aims to improve the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification facilitated by bacteria indigenous to the soil, using substrates that can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost-effective and environmentally friendly process. The goal of this research is to improve the understanding of the process by numerical analysis so that it may be improved and applied properly for varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process there are many phases present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms and gas bubbles. Due to the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions, making the process very complex. To understand this complexity, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, and crystal nucleation and growth in 2D at pore scale. The flow will be described by the Navier-Stokes equations. Initial studies and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next, biofilms and, lastly, gas bubbles may be added to the model.
From the results of these pore scale models the behaviour of the system may be studied and eventually observations may be extrapolated to a larger continuum scale.
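The level set idea mentioned for tracking crystal interfaces can be sketched in 1-D: the interface is the zero contour of a signed-distance function phi advected by phi_t + F |phi_x| = 0. This toy version assumes a constant normal growth speed F, unlike the study's multi-dimensional, concentration-dependent growth:

```python
import numpy as np

# 1-D level-set sketch: an interface at phi = 0 advances with normal
# speed F (a stand-in for crystal growth between sand grains).
n = 400
dx = 1.0 / n
x = np.arange(n) * dx
phi = x - 0.2          # signed distance; interface initially at x = 0.2
F = 0.1                # constant normal growth speed (assumed)
dt = 0.5 * dx / F      # CFL-stable time step
steps = 200

for _ in range(steps):
    # Upwind gradient for F > 0: backward difference.
    grad = np.empty_like(phi)
    grad[1:] = (phi[1:] - phi[:-1]) / dx
    grad[0] = grad[1]
    phi = phi - dt * F * np.abs(grad)

# Interface position = zero crossing of phi; should sit at 0.2 + F*t.
interface = x[np.argmin(np.abs(phi))]
expected = 0.2 + F * dt * steps
```

Because phi starts as an exact signed distance (|grad phi| = 1), the interface advances by F per unit time to within grid resolution.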

Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.

2013-04-01

359

Statistical analysis of large-scale neuronal recording data

Relating stimulus properties to the response properties of individual neurons and neuronal networks is a major goal of sensory research. Many investigators implant electrode arrays in multiple brain areas and record from chronically implanted electrodes over time to answer a variety of questions. Technical challenges related to analyzing large-scale neuronal recording data are not trivial. Several analysis methods traditionally used by neurophysiologists do not account for dependencies in the data that are inherent in multi-electrode recordings. In addition, when neurophysiological data are not best modeled by the normal distribution and when the variables of interest may not be linearly related, extensions of the linear modeling techniques are recommended. A variety of methods exist to analyze correlated data, even when data are not normally distributed and the relationships are nonlinear. Here we review expansions of the Generalized Linear Model designed to address these data properties. Such methods are used in other research fields, and the application to large-scale neuronal recording data will enable investigators to determine the variable properties that convincingly contribute to the variances in the observed neuronal measures. Standard measures of neuron properties such as response magnitudes can be analyzed using these methods, and measures of neuronal network activity such as spike timing correlations can be analyzed as well. We have done just that in recordings from 100-electrode arrays implanted in the primary somatosensory cortex of owl monkeys. Here we illustrate how one example method, Generalized Estimating Equations analysis, is a useful method to apply to large-scale neuronal recordings.
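With an independence working correlation and an identity link, a GEE for clustered data reduces to least squares with a cluster-robust "sandwich" covariance. A minimal numpy sketch on simulated clustered data (hypothetical, not the monkey recordings):

```python
import numpy as np

def ols_cluster_robust(X, y, groups):
    """OLS coefficients with a cluster-robust sandwich covariance: the
    independence-working-correlation special case of a GEE."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):            # sum score outer products per cluster
        s = X[groups == g].T @ resid[groups == g]
        meat += np.outer(s, s)
    return beta, bread @ meat @ bread

rng = np.random.default_rng(2)
n_groups, per = 25, 20
groups = np.repeat(np.arange(n_groups), per)
x = rng.normal(size=n_groups * per)
# Group-level random effects induce within-cluster correlation.
y = 1.0 + 2.0 * x + rng.normal(size=n_groups)[groups] + rng.normal(size=x.size)
X = np.column_stack([np.ones_like(x), x])
beta, cov = ols_cluster_robust(X, y, groups)
```

Ignoring the clustering here would understate the standard errors, which is exactly the pitfall the abstract warns about for multi-electrode data.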

Reed, Jamie L.; Kaas, Jon H.

2010-01-01

360

Tera-scale astronomical data analysis and visualization

NASA Astrophysics Data System (ADS)

We present a high-performance, graphics processing unit (GPU) based framework for the efficient analysis and visualization of (nearly) terabyte (TB) sized 3D images. Using a cluster of 96 GPUs, we demonstrate for a 0.5 TB image (1) volume rendering using an arbitrary transfer function at 7-10 frames per second, (2) computation of basic global image statistics such as the mean intensity and standard deviation in 1.7 s, (3) evaluation of the image histogram in 4 s and (4) evaluation of the global image median intensity in just 45 s. Our measured results correspond to a raw computational throughput approaching 1 teravoxel per second, and are 10-100 times faster than the best possible performance with traditional single-node, multi-core CPU implementations. A scalability analysis shows that the framework will scale well to images sized 1 TB and beyond. Other parallel data analysis algorithms can be added to the framework with relative ease, and accordingly we present our framework as a possible solution to the image analysis and visualization requirements of next-generation telescopes, including the forthcoming Square Kilometre Array Pathfinder radio telescopes.
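Global statistics over an image too large for one node are typically computed from mergeable per-chunk partial results; this sketch of (count, sum, sum-of-squares) accumulators mirrors that idea in plain numpy, though the paper's GPU implementation is of course different:

```python
import numpy as np

def chunk_stats(chunk):
    """Partial statistics for one chunk: (count, sum, sum of squares)."""
    c = np.asarray(chunk, dtype=np.float64)
    return c.size, c.sum(), (c * c).sum()

def merge_stats(parts):
    """Combine per-chunk partials into the global mean and std."""
    n = sum(p[0] for p in parts)
    s = sum(p[1] for p in parts)
    ss = sum(p[2] for p in parts)
    mean = s / n
    return mean, np.sqrt(ss / n - mean * mean)

rng = np.random.default_rng(3)
image = rng.normal(10.0, 2.0, size=(64, 64, 64))   # stand-in "image"
# 96 chunks, echoing the 96-GPU decomposition in the abstract.
parts = [chunk_stats(c) for c in np.array_split(image.ravel(), 96)]
mean, std = merge_stats(parts)
```

The merge is associative, so the same partials could be reduced tree-fashion across nodes; the median, by contrast, is not mergeable this way, which is why it is the slowest statistic quoted in the abstract.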

Hassan, A. H.; Fluke, C. J.; Barnes, D. G.; Kilborn, V. A.

2013-03-01

361

Surface Roughness from Point Clouds - A Multi-Scale Analysis

NASA Astrophysics Data System (ADS)

Roughness is a physical parameter of surfaces that should capture surface complexity in geophysical models. In hydrodynamic modeling, e.g., roughness should estimate the resistance caused by the surface on the flow; in remote sensing, how the signal is scattered. Roughness needs to be estimated as a parameter of the model. This has been identified as a main source of uncertainty in model prediction, mainly due to the errors that follow traditional roughness estimation, e.g. from surface profiles, or by visual interpretation and manual delineation from aerial photos. Currently, roughness estimation is shifting towards point clouds of surfaces, which primarily come from laser scanning and image matching techniques. However, those data sets are also not free of errors, and this may affect roughness estimation. Our study focuses on the estimation of roughness indices from different point clouds, and the uncertainties that follow such a procedure. The analysis is performed on a graveled surface of a river bed in Eastern Austria, using point clouds acquired by a triangulating laser scanner (Minolta Vivid 910), photogrammetry (DSLR camera), and a terrestrial laser scanner (Riegl FWF scanner). To enable their comparison, all the point clouds are transformed to a superior coordinate system. Then, different roughness indices are calculated and compared at different scales, including stochastic and feature-based indices such as RMS of elevation, std.dev., peak-to-valley height, and openness. The analysis is additionally supported with the spectral signatures (frequency domain) of the different point clouds. The selected techniques provide point clouds of different resolution (0.1-10 cm) and coverage (0.3-10 m), which also justifies the multi-scale roughness analysis. By doing this, it becomes possible to differentiate between the measurement errors and the roughness of the object at the resolutions of the point clouds.
Parts of this study have been funded by the project NEWFOR in the framework of European Territorial Cooperation Alpine Space.
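
The roughness indices named above (RMS of elevation, standard deviation, peak-to-valley height) can be illustrated with a minimal sketch. This is not the authors' code; the profile data, window sizes, and function names are invented for illustration, and a 1-D elevation profile stands in for a point cloud.

```python
# Hypothetical sketch: simple roughness indices from a 1-D elevation profile,
# averaged over sliding windows of several sizes to mimic a multi-scale analysis.
import math

def roughness_indices(z):
    """RMS height about the mean, standard deviation, and peak-to-valley of z."""
    n = len(z)
    mean = sum(z) / n
    rms = math.sqrt(sum((v - mean) ** 2 for v in z) / n)  # RMS of detrended heights
    peak_to_valley = max(z) - min(z)
    return {"rms": rms, "std": rms, "p2v": peak_to_valley}

def multiscale(z, window_sizes):
    """Average each index over non-overlapping windows of the given sizes."""
    out = {}
    for w in window_sizes:
        vals = [roughness_indices(z[i:i + w]) for i in range(0, len(z) - w + 1, w)]
        out[w] = {k: sum(v[k] for v in vals) / len(vals) for k in vals[0]}
    return out

profile = [0.0, 0.2, 0.1, 0.4, 0.3, 0.6, 0.2, 0.5]   # synthetic elevations
print(multiscale(profile, [4, 8]))
```

Comparing the indices across window sizes is what makes the analysis multi-scale: short windows capture grain-scale roughness, long windows capture form.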

Milenković, Milutin; Ressl, Camillo; Hollaus, Markus; Pfeifer, Norbert

2013-04-01

362

Reliability analysis of a utility-scale solar power plant

NASA Astrophysics Data System (ADS)

This paper presents the results of a reliability analysis for a solar central receiver power plant that employs a salt-in-tube receiver. Because reliability data for a number of critical plant components have only recently been collected, this is the first time a credible analysis can be performed. This type of power plant will be built by a consortium of western US utilities led by the Southern California Edison Company. The 10 MW plant is known as Solar Two and is scheduled to be on-line in 1994. It is a prototype which should lead to the construction of 100 MW commercial-scale plants by the year 2000. The availability calculation was performed with the UNIRAM computer code. The analysis predicted a forced outage rate of 5.4 percent and an overall plant availability, including scheduled outages, of 91 percent. The code also identified the most important contributors to plant unavailability. Control system failures were identified as the most important cause of forced outages. Receiver problems were rated second, with turbine outages third. The overall plant availability of 91 percent exceeds the goal identified by the US utility study. This paper discusses the availability calculation and presents evidence why the 91 percent availability is a credible estimate.
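
The arithmetic behind such an availability figure can be sketched for a simple series system. This is not UNIRAM and the component forced-outage rates below are invented; the sketch only shows how per-component forced outages and a scheduled-outage fraction combine into an overall availability.

```python
# Illustrative sketch (not UNIRAM): overall availability of a series system
# from hypothetical component forced-outage rates plus a scheduled-outage fraction.
def series_availability(forced_outage_rates, scheduled_fraction):
    a = 1.0
    for f in forced_outage_rates:
        a *= (1.0 - f)            # series system: every component must be up
    return a * (1.0 - scheduled_fraction)

# Invented FORs for control, receiver, and turbine subsystems.
fors = [0.025, 0.018, 0.012]
print(round(series_availability(fors, 0.037), 3))
```

With these assumed numbers the result lands near the 91 percent reported in the abstract, showing how a ~5 percent combined forced outage rate plus scheduled maintenance yields that figure.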

Kolb, G. J.

1992-10-01

363

Numerical analysis of field-scale transport of bromacil

NASA Astrophysics Data System (ADS)

Field-scale transport of bromacil (5-bromo-3-sec-butyl-6-methyluracil) was analyzed using two different model processes for local description of the transport. The first was the classical, one-region convection dispersion equation (CDE) model, while the second was the two-region, mobile-immobile (MIM) model. The analyses were performed by means of detailed three-dimensional, numerical simulations of the flow and the transport [Russo, D., Zaidel, J. and Laufer, A., Numerical analysis of flow and transport in a three-dimensional partially saturated heterogeneous soil. Water Resour. Res., 1998, in press], employing local soil hydraulic property parameters from field measurements and local adsorption/desorption coefficients and the first-order degradation rate coefficient from laboratory measurements. Results of the analyses suggest that, for a given flow regime, mass exchange between the mobile and the immobile regions retards the bromacil degradation, considerably affects the distribution of the bromacil resident concentration, c, at relatively large travel times, slightly affects the spatial moments of the distribution of c, and increases the skewing of the bromacil breakthrough and the uncertainty in its prediction, compared with the case in which the soil contained only a single (mobile) region. Mean and standard deviation of the simulated concentration profiles at various elapsed times were compared with measurements from a field-scale transport experiment [Tauber-Yasur, I., Hadas, A., Russo, D. and Yaron, B., Leaching of terbuthylazine and bromacil through field soils. Water, Air, Soil Pollut., 1998, in press] conducted at the Bet Dagan site. Given the limitations of the present study (e.g. the lack of detailed field data on the spatial variability of the soil chemical properties), the main conclusion is that the field-scale transport of bromacil at the Bet Dagan site is better quantified with the MIM model than with the CDE model.
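
The one-region CDE with first-order degradation that the study takes as its baseline can be sketched in one dimension with explicit finite differences. The study itself used detailed 3-D simulations; every parameter value below is illustrative only, and no MIM exchange term is included.

```python
# Minimal 1-D convection-dispersion-degradation sketch (explicit upwind scheme).
# dc/dt = -v dc/dx + D d2c/dx2 - k c, with illustrative parameters.
def cde_step(c, v, D, k, dx, dt):
    n = len(c)
    new = c[:]
    for i in range(1, n - 1):
        adv = -v * (c[i] - c[i - 1]) / dx                  # upwind convection
        disp = D * (c[i + 1] - 2 * c[i] + c[i - 1]) / dx ** 2
        decay = -k * c[i]                                  # first-order degradation
        new[i] = c[i] + dt * (adv + disp + decay)
    return new

c = [0.0] * 50
c[5] = 1.0                                # initial solute pulse
for _ in range(40):
    c = cde_step(c, v=0.5, D=0.1, k=0.01, dx=1.0, dt=0.5)
peak = max(range(len(c)), key=lambda i: c[i])
print(peak, sum(c))
```

The pulse advects downstream while mass decays; a MIM variant would add a second, immobile concentration array exchanging mass with the mobile one, which is what retards degradation in the study's results.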

Russo, David; Tauber-Yasur, Inbar; Laufer, Asher; Yaron, Bruno

364

NASA Technical Reports Server (NTRS)

A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate v-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.

Wood, William A., III

2002-01-01

365

Though HIV/AIDS poses serious risks to economic security, there is very little economics literature quantifying awareness and knowledge of this disease and their principal socioeconomic determinants. This is what the present study attempts to do in the context of India, which faces a significant threat from HIV/AIDS. The study is based on India's National Family Health Surveys covering the period of economic reforms and beyond. The contribution is both methodological and empirical. The study shows that the recent multi-dimensional deprivation approach to poverty can also be used to measure and analyse awareness and lack of knowledge of HIV/AIDS. The use of decomposable multi-dimensional measures helps in identifying regions, socioeconomic groups and aspects of HIV knowledge that should be targeted in policy interventions. The study identifies the importance of safe sex practices as an area that needs to be targeted in future information campaigns. The study also explores the impact of increased female autonomy in health and economic decision-making on their and their partners' knowledge of the disease, along with a host of other economic and demographic determinants. PMID:21756415

Ray, Ranjan; Sinha, Kompal

2011-11-01

366

Large-scale Biomedical Image Analysis in Grid Environments

Digital microscopy scanners are capable of capturing multi-Gigapixel images from single slides, thus producing images of sizes up to several tens of Gigabytes each, and a research study may have hundreds of slides from a specimen. The sheer size of the images and the complexity of image processing operations create roadblocks to effective integration of large-scale imaging data in research. This paper presents the application of a component-based Grid middleware system for processing extremely large images obtained from digital microscopy devices. We have developed parallel, out-of-core techniques for different classes of data processing operations commonly employed on images from confocal microscopy scanners. These techniques are combined into data pre-processing and analysis pipelines using the component-based middleware system. The experimental results show that 1) our implementation achieves good performance and can handle very large (terabyte-scale) datasets on high-performance Grid nodes, consisting of computation and/or storage clusters, and 2) it can take advantage of multiple Grid nodes connected over high-bandwidth wide-area networks by combining task- and data-parallelism.

Kumar, Vijay S.; Rutt, Benjamin; Kurc, Tahsin; Catalyurek, Umit; Pan, Tony; Saltz, Joel; Chow, Sunny; Lamont, Stephan; Martone, Maryann

2012-01-01

367

Regional scale analysis of the altimetric stream network evolution

NASA Astrophysics Data System (ADS)

Floods result from the limited carrying capacity of stream channels when compared to the discharge peak value. The transit of flood waves - with the associated erosion and sedimentation processes - often modifies local stream geometry. In some cases this results in a reduction of the stream carrying capacity, and consequently in an enhancement of the flooding risk. A mathematical model for the prediction of potential altimetric stream network evolution due to erosion and sedimentation processes is here formalized. It works at the regional scale, identifying the tendency of river segments toward sedimentation, stability, or erosion. The model builds on geomorphologic concepts, and derives its parameters from extensive surveys. As a case study, tendencies of rivers in the Valle d'Aosta region are analyzed. Validation is provided at both regional and local scales of analysis. Local validation is performed both through a mathematical model able to simulate the temporal evolution of the stream profile, and through comparison of the predictions with pre- and post-event river surveys, where available. Overall results are strongly encouraging. Possible use of the information derived from the model in the context of flood and landslide hazard mitigation is briefly discussed.

Ghizzoni, T.; Lomazzi, M.; Roth, G.; Rudari, R.

2006-01-01

368

Parallel Index and Query for Large Scale Data Analysis

Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
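
The bitmap-index idea underlying FastBit can be sketched in a few lines. This toy is not FastBit's actual API or encoding: it uses one Python integer per value bin as a bitmask over record IDs, and answers a range query by OR-ing bins and verifying the edge bin.

```python
# Toy binned bitmap index in the spirit of FastBit (not its real interface).
def build_index(values, bin_edges):
    bitmaps = [0] * (len(bin_edges) + 1)
    for rid, v in enumerate(values):
        b = sum(1 for e in bin_edges if v >= e)   # bin that v falls in
        bitmaps[b] |= 1 << rid                    # set this record's bit
    return bitmaps

def query_ge(values, bitmaps, bin_edges, threshold):
    """Record IDs whose value is >= threshold."""
    first = sum(1 for e in bin_edges if threshold >= e)
    mask = 0
    for b in range(first, len(bitmaps)):
        mask |= bitmaps[b]                        # OR the qualifying bins
    # The lowest qualifying bin may hold false positives; verify raw values.
    return [rid for rid in range(len(values))
            if (mask >> rid) & 1 and values[rid] >= threshold]

vals = [1.2, 7.5, 3.3, 9.1, 0.4, 6.6]
idx = build_index(vals, bin_edges=[2.0, 5.0, 8.0])
print(query_ge(vals, idx, [2.0, 5.0, 8.0], 6.0))
```

The point of the structure is that the OR over bins touches compressed bitmaps rather than raw records, which is what makes searches over billions of particles fast.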

Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

2011-07-18

369

Scaling law analysis of paraffin thin films on different surfaces

NASA Astrophysics Data System (ADS)

The dynamics of paraffin deposit formation on different surfaces was analyzed based on scaling laws. Carbon-based films were deposited onto silicon (Si) and stainless steel substrates from methane (CH4) gas using radio frequency plasma enhanced chemical vapor deposition. The different substrates were characterized with respect to their surface energy by contact angle measurements, surface roughness, and morphology. Paraffin thin films were obtained by the casting technique and were subsequently characterized by an atomic force microscope in noncontact mode. The results indicate that the morphology of paraffin deposits is strongly influenced by the substrates used. Scaling law analysis for the coated substrates presents two distinct dynamics: a local roughness exponent (αlocal) associated with short-range surface correlations and a global roughness exponent (αglobal) associated with long-range surface correlations. The local dynamics is described by the Wolf-Villain model, and the global dynamics is described by the Kardar-Parisi-Zhang model. A local correlation length (Llocal) defines the transition between the local and global dynamics, with Llocal approximately 700 nm, in accordance with the spacing of planes measured from atomic force micrographs. For uncoated substrates, the growth dynamics is related to the Edwards-Wilkinson model.
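
A roughness exponent such as αlocal is typically extracted as the slope of a log-log fit of interface width against window size. The sketch below shows only that fitting step, on synthetic widths obeying an assumed power law; the data and the 0.75 exponent are invented, not the study's values.

```python
# Sketch: estimate a roughness exponent alpha from interface widths w(l)
# at several window sizes l, via least squares on log w = alpha*log l + const.
import math

def fit_exponent(ls, ws):
    xs = [math.log(l) for l in ls]
    ys = [math.log(w) for w in ws]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den                      # slope = roughness exponent

# Synthetic widths obeying w ~ l^0.75 (an assumed exponent for the demo).
ls = [50, 100, 200, 400, 700]
ws = [0.9 * l ** 0.75 for l in ls]
print(round(fit_exponent(ls, ws), 3))
```

In practice the fit is done separately below and above the crossover length Llocal, yielding the local and global exponents that are then compared against growth models such as Wolf-Villain or KPZ.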

Dotto, M. E. R.; Camargo, S. S.

2010-01-01

370

VLSI structures and iterative analysis for large-scale computation

Problems of computation and development of VLSI structures are considered in relation to each other. In particular, two issues are addressed: (a) development of components and algorithms for standard operations, suitable for VLSI implementation; (b) large-scale computation, in this case the iterative solution of large least-squares problems in a limited-size VLSI architecture. On standard operations, improved and new adders are presented that can be implemented in VLSI. The adders so designed are shown to be superior when compared to other existing ones. Moreover, an iterative multiplier that uses carry-save adders is also presented. On large-scale computation, analysis of iterative techniques for least-squares problems is first addressed. New convergence results are obtained, and explicit expressions for the optimal parameters, as well as for their corresponding optimal asymptotic rate of convergence, are derived for the family of iterative schemes known as Accelerated Overrelaxation (AOR). Moreover, partitioning of the iterative algorithm and time-space expansion are used so that a parallel implementation of the iterative scheme is obtained, in a way that computation can be performed in a fixed-size VLSI architecture, independent of the size of the problem.

Papadopoulou, E.P.

1986-01-01

371

Scaling fluctuation analysis and statistical hypothesis testing of anthropogenic warming

NASA Astrophysics Data System (ADS)

Although current global warming may have a large anthropogenic component, its quantification relies primarily on complex General Circulation Model (GCM) assumptions and codes; it is desirable to complement this with empirically based methodologies. Previous attempts to use the recent climate record have concentrated on "fingerprinting" or otherwise comparing the record with GCM outputs. By using CO2 radiative forcing as a linear surrogate for all anthropogenic effects, we estimate the total anthropogenic warming and (effective) climate sensitivity, finding ΔTanth = 0.87 ± 0.11 K. These are close to the IPCC AR5 values ΔTanth = 0.85 ± 0.20 K and (equilibrium) climate sensitivity, and are independent of GCM models, radiative transfer calculations and emission histories. We statistically formulate the hypothesis of warming through natural variability by using centennial-scale probabilities of natural fluctuations estimated using scaling fluctuation analysis on multiproxy data. We take into account two nonclassical statistical features: long-range statistical dependencies and "fat-tailed" probability distributions (both of which greatly amplify the probability of extremes). Even in the most unfavourable cases, we may reject the natural variability hypothesis at confidence levels >99%.
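
Using CO2 forcing as a linear surrogate amounts to regressing the temperature record on a forcing series. The sketch below uses the widely cited simplified expression F = 5.35 ln(C/C0) and a noise-free synthetic temperature series; the CO2 values, the assumed sensitivity of 0.45 K per W/m², and the regression itself are illustrative, not the paper's data or method.

```python
# Sketch: OLS of temperature anomaly on CO2 radiative forcing
# F = 5.35 * ln(C/C0) (common simplified forcing expression; all data synthetic).
import math

def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / sum((x - mx) ** 2 for x in xs)

c0 = 280.0                                       # pre-industrial CO2, ppm
co2 = [290, 300, 315, 340, 370, 400]             # synthetic concentration history
forcing = [5.35 * math.log(c / c0) for c in co2]
lam_true = 0.45                                  # assumed K per (W m^-2)
temps = [lam_true * f for f in forcing]          # noise-free anomalies for the demo
lam_hat = ols_slope(forcing, temps)
print(round(lam_hat * forcing[-1], 2))           # implied warming at 400 ppm
```

The recovered slope is the effective sensitivity, and slope times current forcing gives the total anthropogenic warming estimate, which is the quantity the abstract reports as ΔTanth.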

Lovejoy, S.

2014-05-01

372

Anomaly Detection in Multiple Scale for Insider Threat Analysis

We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided by semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to events that are more significant and suspicious. Then we build personal activity profiles in the form of score tables. Profiles are created at multiple scales, where the low-level profiles are aggregated toward more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created at different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally the subjects are ranked by their anomaly scores to help the analysts focus on high-scored subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user and including any additional insights to help the analysts. In this paper we show the framework of the proposed system and the operational algorithms.
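
The "compare current profile to historical profiles, then rank" step can be sketched with a z-score-style measure. The features, scoring rule, and subject names below are invented for illustration and are not the paper's operational algorithms.

```python
# Sketch: rank subjects by a mean-absolute-z-score anomaly measure comparing
# a current daily activity profile against historical profiles (invented data).
import math

def anomaly_score(history, current):
    """Mean absolute z-score of current feature values vs. history."""
    score = 0.0
    feats = list(current)
    for f in feats:
        vals = [h[f] for h in history]
        mu = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals)) or 1.0
        score += abs(current[f] - mu) / sd
    return score / len(feats)

history = [{"logins": 5, "files": 20}, {"logins": 6, "files": 22},
           {"logins": 4, "files": 18}]
normal = {"logins": 5, "files": 21}
suspicious = {"logins": 40, "files": 300}
scores = {"alice": anomaly_score(history, normal),
          "bob": anomaly_score(history, suspicious)}
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)
```

Ranking by the score surfaces the subject whose current behavior departs most from their own history, which is the triage behavior the dashboard described above provides to analysts.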

Kim, Yoohwan [ORNL; Sheldon, Frederick T [ORNL; Hively, Lee M [ORNL

2012-01-01

373

Multidimensional Scaling Analysis of the Dynamics of a Country Economy

This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process.

Mata, Maria Eugenia

2013-01-01

374

Multidimensional scaling analysis of the dynamics of a country economy.

This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, namely, in identifying similarities and dissimilarities that identify periods of prosperity and crises, growth, and stagnation. Such features are major aspects of collective national achievement, to which can be associated the impact of international problems such as the World Wars, the Great Depression, or the current global financial crisis, as well as national events in the context of broad political blueprints for the Portuguese society in the rising globalization process. PMID:24294132

Tenreiro Machado, J A; Mata, Maria Eugénia

2013-01-01

375

Global Mapping Analysis: Stochastic Gradient Algorithm in Multidimensional Scaling

NASA Astrophysics Data System (ADS)

In order to implement multidimensional scaling (MDS) efficiently, we propose a new method named “global mapping analysis” (GMA), which applies stochastic approximation to minimizing MDS criteria. GMA can solve MDS more efficiently in both the linear case (classical MDS) and the non-linear one (e.g., ALSCAL), provided the MDS criteria are polynomial. GMA separates the polynomial criteria into local factors and global ones. Because the global factors need to be calculated only once in each iteration, GMA is of linear order in the number of objects. Numerical experiments on artificial data verify the efficiency of GMA. It is also shown that GMA can find out various interesting structures from massive document collections.

Matsuda, Yoshitatsu; Yamaguchi, Kazunori

376

NASA Technical Reports Server (NTRS)

Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full-scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scaled equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.

Krishnamurthy, Thiagarajan

2010-01-01

377

NASA Astrophysics Data System (ADS)

In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013), obtained from linear discriminant analysis (LDA) of natural logarithm-transformed ratios of major elements and immobile major and trace elements in acid magmas, to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a more objective statistical method than the commonly used subjective method of determining the boundaries by eye judgment. The use of major element data readjusted to 100% on an anhydrous basis from the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlight the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was also clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), which were all previously suggested to have been emplaced in post-collision to within-plate settings.
The seventh case study, on the Cassiterita-Tabuões, Ritápolis, and São Tiago-Rezende Costa granites (south of São Francisco craton, Minas Gerais), showed a collision setting, which agrees reasonably well with the syn-collision tectonic setting indicated in the literature. A within-plate setting is suggested for the Serrinha magmatic suite, Mineiro belt (south of São Francisco craton, Minas Gerais), contrasting markedly with the arc setting suggested in the literature. The ninth case study, on the Rio Itapicuru granites and Rio Capim dacites (north of São Francisco craton, Serrinha block, Bahia), showed a continental arc setting. The tenth case study indicated a within-plate setting for the Rio dos Remédios volcanic rocks (São Francisco craton, Bahia), which is compatible with these rocks being the initial, rift-related igneous activity associated with the Chapada Diamantina cratonic cover. The eleventh, twelfth and thirteenth case studies, on the Bom Jesus-Areal granites, Rio Diamante-Rosilha dacite-rhyolite and Timbozal-Cantão granites (São Luís craton), showed continental arc, within-plate and collision settings, respectively. Finally, the last two case studies, the fourteenth and fifteenth, showed a collision setting for the Caicó Complex and a continental arc setting for Algodões (Borborema province).

Verma, Sanjeet K.; Oliveira, Elson P.

2013-08-01

378

Spatial data analysis for exploration of regional scale geothermal resources

NASA Astrophysics Data System (ADS)

Defining a comprehensive conceptual model of the resources sought is one of the most important steps in geothermal potential mapping. In this study, Fry analysis as a spatial distribution method and 5% well existence, distance distribution, weights of evidence (WofE), and evidential belief function (EBF) methods as spatial association methods were applied comparatively to known geothermal occurrences, and to publicly-available regional-scale geoscience data in Akita and Iwate provinces within the Tohoku volcanic arc, in northern Japan. Fry analysis and rose diagrams revealed similar directional patterns of geothermal wells and volcanoes, NNW-, NNE-, NE-trending faults, hotsprings and fumaroles. Among the spatial association methods, WofE defined a conceptual model consistent with real-world conditions, as confirmed with the aid of expert opinion. The results of the spatial association analyses quantitatively indicated that the known geothermal occurrences are strongly spatially associated with geological features such as volcanoes, craters, and NNW-, NNE-, NE-trending faults, and with geochemical features such as hotsprings, hydrothermal alteration zones and fumaroles. The geophysical evidence layers include temperature gradients over 100 °C/km and heat flow over 100 mW/m2. In general, geochemical and geophysical data were better evidence layers than geological data for exploring geothermal resources. The spatial analyses of the case study area suggested that quantitative knowledge of hydrothermal geothermal resources is significantly useful for further exploration and for geothermal potential mapping in the region. The results can also be extended to regions with broadly similar characteristics.
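
The weights-of-evidence measure used above has a compact closed form. The sketch computes W+ and W- for a single binary evidence layer against known occurrences; the unit-cell counts are invented for illustration and are not the study's tallies.

```python
# Sketch: weights of evidence for one binary layer (e.g., "near a fault").
# n_bd: cells with evidence AND a known occurrence; n_b: cells with evidence;
# n_d: cells with an occurrence; n_total: all cells. Counts are invented.
import math

def weights(n_bd, n_b, n_d, n_total):
    p_b_d = n_bd / n_d                          # P(evidence | occurrence)
    p_b_nd = (n_b - n_bd) / (n_total - n_d)     # P(evidence | no occurrence)
    w_plus = math.log(p_b_d / p_b_nd)
    w_minus = math.log((1 - p_b_d) / (1 - p_b_nd))
    return w_plus, w_minus, w_plus - w_minus    # contrast C = W+ - W-

wp, wm, contrast = weights(n_bd=18, n_b=400, n_d=25, n_total=10000)
print(round(wp, 2), round(wm, 2), round(contrast, 2))
```

A large positive contrast marks a layer strongly spatially associated with the occurrences, which is how the study's faults, hotsprings, and alteration zones were ranked as evidence.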

Moghaddam, Majid Kiavarz; Noorollahi, Younes; Samadzadegan, Farhad; Sharifi, Mohammad Ali; Itoi, Ryuichi

2013-10-01

379

Multi-scale analysis of a household level agent-based model of landcover change

Scale issues have significant implications for the analysis of social and biophysical processes in complex systems. These same scale implications are likewise considerations for the design and application of models of landcover change. Scale issues have wide-ranging effects from the representativeness of data used to validate models to aggregation errors introduced in the model structure. This paper presents an analysis

Tom P. Evans; Hugh Kelley

2004-01-01

380

Confirmatory Factor Analysis of the Educators' Attitudes toward Educational Research Scale

ERIC Educational Resources Information Center

This article reports results of a confirmatory factor analysis performed to cross-validate the factor structure of the Educators' Attitudes Toward Educational Research Scale. The original scale had been developed by the author and revised based on the results of an exploratory factor analysis. In the present study, the revised scale was given to…

Ozturk, Mehmet Ali

2011-01-01

381

Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.

ERIC Educational Resources Information Center

Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)

Faulstich, Michael E.; And Others

1985-01-01

382

In situ vitrification large-scale operational acceptance test analysis

A thermal treatment process is currently under study to provide possible enhancement of in-place stabilization of transuranic and chemically contaminated soil sites. The process is known as in situ vitrification (ISV). In situ vitrification is a remedial action process that destroys solid and liquid organic contaminants and incorporates radionuclides into a glass-like material that renders contaminants substantially less mobile and less likely to impact the environment. A large-scale operational acceptance test (LSOAT) was recently completed in which more than 180 t of vitrified soil were produced in each of three adjacent settings. The LSOAT demonstrated that the process conforms to the functional design criteria necessary for the large-scale radioactive test (LSRT) to be conducted following verification of the performance capabilities of the process. The energy requirements and vitrified block size, shape, and mass are sufficiently equivalent to those predicted by the ISV mathematical model to confirm its usefulness as a predictive tool. The LSOAT demonstrated an electrode replacement technique, which can be used if an electrode fails, and techniques have been identified to minimize air oxidation, thereby extending electrode life. A statistical analysis was employed during the LSOAT to identify graphite collars and an insulative surface as successful cold cap subsidence techniques. The LSOAT also showed that even under worst-case conditions, the off-gas system exceeds the flow requirements necessary to maintain a negative pressure on the hood covering the area being vitrified. The retention of simulated radionuclides and chemicals in the soil and off-gas system exceeds requirements so that projected emissions are one to two orders of magnitude below the maximum permissible concentrations of contaminants at the stack.

Buelt, J.L.; Carter, J.G.

1986-05-01

383

NASA Astrophysics Data System (ADS)

In our recent studies on the analysis of bone texture in the context of osteoporosis, we have already demonstrated the great potential of the topological evaluation of bone architecture based on the Minkowski Functionals (MF) in 2D and 3D for the prediction of the mechanical strength of cubic bone specimens depicted by high-resolution MRI. In contrast to our earlier work, we now assess the mechanical characteristics of whole hip bone specimens imaged by multi-detector computed tomography. Due to the specific properties of the imaging modality and of the bone tissue in the proximal femur, this requires the introduction of a new analysis method. The internal architecture of the hip is functionally highly specialized to withstand the complex pattern of external and internal forces associated with human gait. Since the direction, connectivity and distribution of the trabeculae change considerably within narrow spatial limits, it seems most reasonable to evaluate the femoral bone structure on a local scale. The Minkowski Functionals are a set of morphological descriptors for the topological characterization of binarized, multi-dimensional, convex objects with respect to shape, structure, and the connectivity of their components. The MF are usually used as global descriptors and may react very sensitively to minor structural variations, which presents a major limitation in a number of applications. The objective of this work is to assess the mechanical competence of whole hip bone specimens using parameters based on the MF. We introduce an algorithm that considers the local topological aspects of the bone architecture of the proximal femur, allowing identification of regions within the bone that contribute more to the overall mechanical strength than others.
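
In 2D the three Minkowski functionals are area, perimeter, and the Euler characteristic. The sketch below computes them from a binarized pixel set via the pixel complex (pixels as faces, shared edges and vertices); it is a generic illustration on toy shapes, not the authors' local analysis algorithm.

```python
# Sketch: 2-D Minkowski functionals of a binary image given as a set of
# (x, y) grid cells: area, perimeter, and Euler characteristic V - E + F.
def minkowski_2d(pixels):
    area = len(pixels)
    edges, verts = set(), set()
    perimeter = 0
    for (x, y) in pixels:
        # Four boundary edges; shared edges are de-duplicated by the set.
        for e in (("h", x, y), ("h", x, y + 1), ("v", x, y), ("v", x + 1, y)):
            edges.add(e)
        # Sides not shared with another object pixel are exposed.
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in pixels:
                perimeter += 1
        for vx in (x, x + 1):
            for vy in (y, y + 1):
                verts.add((vx, vy))
    euler = len(verts) - len(edges) + area     # components minus holes
    return area, perimeter, euler

block = {(0, 0), (1, 0), (0, 1), (1, 1)}       # solid 2x2 block, no holes
ring = {(x, y) for x in range(3) for y in range(3)} - {(1, 1)}  # one hole
print(minkowski_2d(block), minkowski_2d(ring))
```

The Euler characteristic distinguishes the solid block (one component, no holes, χ = 1) from the ring (one component, one hole, χ = 0), which is the kind of connectivity information the MF contribute beyond plain density.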

Boehm, H. F.; Link, T. M.; Monetti, R. A.; Kuhn, V.; Eckstein, F.; Raeth, C. W.; Reiser, M.

2006-03-01

384

Rasch analysis and item reduction of the hypomanic personality scale

The aim of the current study was to reduce the number of items in the 48-item hypomanic personality scale (HPS) and determine whether a unidimensional scale of the hypomanic trait could be derived. Previously collected HPS data from University students (n=318) were applied to the Rasch model (one-parameter item response theory). Overall scale and individual item fit statistics were used

David M. Meads; Richard P. Bentall

2008-01-01

385

Measuring Mathematics Anxiety: Psychometric Analysis of a Bidimensional Affective Scale

ERIC Educational Resources Information Center

The purpose of this study is to develop a theoretically and methodologically sound bidimensional affective scale measuring mathematics anxiety with high psychometric quality. The psychometric properties of a 14-item Mathematics Anxiety Scale-Revised (MAS-R) adapted from Betz's (1978) 10-item Mathematics Anxiety Scale were empirically analyzed on a…

Bai, Haiyan; Wang, LihShing; Pan, Wei; Frey, Mary

2009-01-01

386

Diffusion entropy analysis on the scaling behavior of financial markets

NASA Astrophysics Data System (ADS)

In this paper the diffusion entropy technique is applied to investigate the scaling behavior of financial markets. The scaling behaviors of four representative stock markets, the Dow Jones Industrial Average, the Standard & Poor's 500, the Hang Seng Index, and the Shanghai Stock Synthetic Index, are almost the same, with scale-invariance exponents all in the interval [0.92, 0.95]. We also estimate the local scaling exponents, which indicate that the financial time series are perfectly homogeneous. In addition, a parsimonious percolation model for stock markets is proposed, whose scaling behavior agrees well with the real-life markets.
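The diffusion entropy technique summarized above admits a compact numerical sketch (an illustration, not the authors' exact implementation): window sums of the series build diffusion trajectories whose differential entropy grows as S(t) = A + δ ln t for scale-invariant series, and the fitted slope δ is the scaling exponent.

```python
import numpy as np

def diffusion_entropy_exponent(series, t_max=100):
    """Diffusion entropy analysis: sum the series over windows of length t,
    estimate the differential entropy S(t) of the window sums, and fit
    S(t) = A + delta*ln(t); the slope delta is the scaling exponent."""
    c = np.concatenate(([0.0], np.cumsum(series)))
    ts, entropies = [], []
    for t in range(2, t_max + 1):
        x = c[t:] - c[:-t]                      # all window sums of length t
        dens, edges = np.histogram(x, bins=50, density=True)
        widths = np.diff(edges)
        mask = dens > 0
        # differential entropy: -sum p(x) ln p(x) dx
        entropies.append(-np.sum(dens[mask] * np.log(dens[mask]) * widths[mask]))
        ts.append(t)
    delta, _ = np.polyfit(np.log(ts), entropies, 1)
    return delta

rng = np.random.default_rng(0)
noise = rng.standard_normal(50000)
print(round(diffusion_entropy_exponent(noise), 2))  # ~0.5 for uncorrelated noise
```

Exponents near 0.93, as reported for the four indices, would indicate much stronger persistence than the 0.5 of uncorrelated noise.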

Cai, Shi-Min; Zhou, Pei-Ling; Yang, Hui-Jie; Yang, Chun-Xia; Wang, Bing-Hong; Zhou, Tao

2006-07-01

387

Scaling analysis of the variability of the rain drop size distribution at small scale

NASA Astrophysics Data System (ADS)

Like precipitation, the raindrop size distribution (DSD) is strongly variable in space and time. Understanding this variability is important for quantifying and minimizing some of the uncertainties in radar measurements and their interpretation in terms of rain rate. At the typical operational radar pixel scale (i.e., 1 × 1 km²), the variability of the DSD is not well documented and understood. A network of 16 identical disdrometers deployed over a 1 × 1 km² area provides an adequate data set to investigate this small-scale variability of the DSD. The single-moment and double-moment DSD scaling approaches are used to analyze the DSD variability for a set of 36 rain events of various types. At fine temporal resolutions, neither the single-moment nor the double-moment normalization captures all the DSD variability, and the scaled DSDs appear different at the point and at the pixel scales. The double-moment normalization can however be used to obtain reliable estimates of the DSD moments at the pixel scale from point measurements, providing a way to upscale DSD moments. At coarser temporal resolutions, the spatial variability within the pixel becomes negligible, and the scaled DSDs are similar at the two spatial scales.
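As an illustration of the double-moment normalization discussed above, here is a sketch with a hypothetical exponential spectrum. The choice of the third and fourth moments, giving the mass-weighted mean diameter Dm = M4/M3 as the scaling diameter, is one common convention and is not necessarily the one used by the authors.

```python
import numpy as np

# hypothetical disdrometer spectrum: exponential DSD N(D) = N0 exp(-Lambda*D)
D  = np.arange(0.1, 5.0, 0.2)        # drop diameter bin centres (mm)
dD = 0.2                             # bin width (mm)
N  = 8000.0 * np.exp(-3.0 * D)       # concentration density (m^-3 mm^-1)

def moment(k):
    """k-th moment of the DSD: M_k = sum over bins of N(D) * D^k * dD."""
    return np.sum(N * D ** k * dD)

M3, M4 = moment(3), moment(4)
Dm = M4 / M3                         # mass-weighted mean diameter (mm)

# double-moment normalization: h(x) with x = D/Dm is dimensionless, so
# spectra of similar shape collapse onto a single curve
x = D / Dm
h = N * Dm ** 4 / M3

# consistency check implied by the normalization: third moment of h equals 1
print(round(np.sum(h * x ** 3 * (dD / Dm)), 6))  # 1.0 by construction
```

Comparing the collapsed h(x) curves from point and pixel-scale spectra is the kind of test the abstract describes.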

Berne, A.; Jaffrain, J.; Schleiss, M.

2012-09-01

388

MIXREGLS: A Program for Mixed-Effects Location Scale Analysis

MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured over many observations and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages.
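The mean and variance structure of the location scale model can be illustrated with a small simulation. This is a sketch of the kind of model MIXREGLS fits, not of its estimator, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n_subj, n_obs = 500, 10

# a covariate (e.g. a group indicator) affecting both the mean and the
# log within-subject variance
grp = rng.integers(0, 2, n_subj)
beta = (2.0, 1.0)        # location (mean) model: intercept, group effect
tau  = (0.0, 0.8)        # log within-subject variance: intercept, group effect
sd_between = 0.5         # SD of the random subject (location) effect

subj_eff = rng.normal(0.0, sd_between, n_subj)
within_sd = np.exp(0.5 * (tau[0] + tau[1] * grp))

y = (beta[0] + beta[1] * grp + subj_eff)[:, None] \
    + rng.normal(0.0, 1.0, (n_subj, n_obs)) * within_sd[:, None]

# the within-subject variance ratio between groups should be near exp(0.8) ~ 2.2
ratio = y[grp == 1].var(axis=1, ddof=1).mean() / y[grp == 0].var(axis=1, ddof=1).mean()
print(round(ratio, 2))
```

Recovering beta, tau, and the between-subject variance from such data is precisely what the maximum likelihood machinery in MIXREGLS does.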

Hedeker, Donald; Nordgren, Rachel

2013-01-01

389

Acoustic modal analysis of a full-scale annular combustor

NASA Technical Reports Server (NTRS)

An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine. Previously announced in STAR as N83-21896

Karchmer, A. M.

1983-01-01

390

Acoustic modal analysis of a full-scale annular combustor

NASA Technical Reports Server (NTRS)

An acoustic modal decomposition of the measured pressure field in a full scale annular combustor installed in a ducted test rig is described. The modal analysis, utilizing a least squares optimization routine, is facilitated by the assumption of randomly occurring pressure disturbances which generate equal amplitude clockwise and counter-clockwise pressure waves, and the assumption of statistical independence between modes. These assumptions are fully justified by the measured cross spectral phases between the various measurement points. The resultant modal decomposition indicates that higher order modes compose the dominant portion of the combustor pressure spectrum in the range of frequencies of interest in core noise studies. A second major finding is that, over the frequency range of interest, each individual mode which is present exists in virtual isolation over significant portions of the spectrum. Finally, a comparison between the present results and a limited amount of data obtained in an operating turbofan engine with the same combustor is made. The comparison is sufficiently favorable to warrant the conclusion that the structure of the combustor pressure field is preserved between the component facility and the engine.

Karchmer, A. M.

1982-01-01

391

Full-scale testing and analysis of fuselage structure

NASA Technical Reports Server (NTRS)

This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon where the crack trajectory is curved creating a 'flap' and resulting in a controlled decompression is discussed.

Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.

1994-01-01

392

MIXREGLS: A Program for Mixed-Effects Location Scale Analysis.

MIXREGLS is a program which provides estimates for a mixed-effects location scale model assuming a (conditionally) normally-distributed dependent variable. This model can be used for analysis of data in which subjects may be measured over many observations and interest is in modeling the mean and variance structure. In terms of the variance structure, covariates can be specified to have effects on both the between-subject and within-subject variances. Another use is for clustered data in which subjects are nested within clusters (e.g., clinics, hospitals, schools, etc.) and interest is in modeling the between-cluster and within-cluster variances in terms of covariates. MIXREGLS was written in Fortran and uses maximum likelihood estimation, utilizing both the EM algorithm and a Newton-Raphson solution. Estimation of the random effects is accomplished using empirical Bayes methods. Examples illustrating stand-alone usage and features of MIXREGLS are provided, as well as use via the SAS and R software packages. PMID:23761062

Hedeker, Donald; Nordgren, Rachel

2013-03-01

393

NASA Astrophysics Data System (ADS)

A multi-dimensional zinc oxide (ZnO) hybrid structure was successfully grown on a glass substrate by using metal organic chemical vapor deposition (MOCVD). The ZnO hybrid structure was composed of nanorods grown continuously on the ZnO film without any catalysts. The growth mode could be changed from a two-dimensional (2D) film to one-dimensional (1D) nanorods by simply controlling the substrate's temperature. The ZnO with a hybrid structure showed improved electrical and optical properties. The ZnO hybrid structure grown by using MOCVD has excellent potential for applications in opto-electronic devices and solar cells as anti-reflection coatings (ARCs), transparent conductive oxides (TCOs) and transparent thin-film transistors (TTFTs).

Kim, Dae-Sik; Lee, Dohan; Lee, Je-Haeng; Byun, Dongjin

2014-05-01

394

Analysis of geomechanical behavior for the drift scale test

The Yucca Mountain Site Characterization Project is conducting a drift scale heater test, known as the Drift Scale Test (DST), in an alcove of the Exploratory Studies Facility at Yucca Mountain, Nevada. The DST is a large-scale, long-term thermal test designed to investigate coupled thermal-mechanical-hydrological-chemical behavior in a fractured, welded tuff rock mass. The general layout of the DST is

S C Blair; S R Carlson; J L Wagoner

2000-01-01

395

Rating scales are commonly used as measurement instruments in various kinds of questionnaires and quality of life assessments but also in clinical trials for evaluation of outcome or functioning. There is also an ongoing development of statistical methods for analysis of ordinal data. Hence, the users of rating scales and the statisticians need each other. The information delay concerning methodological

Elisabeth Svensson

2002-01-01

396

Principles of Adult Learning Scale: Followup and Factor Analysis.

ERIC Educational Resources Information Center

In 1978 the Principles of Adult Learning Scale (PALS) was developed to measure the degree of practitioner support of the principles of the collaborative teaching-learning mode for teaching adults. Although the original study with a field test group of 57 produced a valid and reliable 44-item summated rating scale, the stability of the normative…

Conti, Gary J.

397

Confirmatory factor analysis of the multidimensional Students’ Life Satisfaction Scale

The assessment of children’s life satisfaction (LS) is a relatively new area of research. To date, one of the most comprehensive investigations in this area has culminated in the development of the Multidimensional Students’ Life Satisfaction Scale [MSLSS; Huebner, E. S. (1994). Preliminary development and validation of a multidimensional life satisfaction scale for children. Psychological Assessment, 6, 149–158]. The first

Peter J Greenspoon; Donald H Saklofske

1998-01-01

398

Scaling and the design of miniaturized chemical-analysis systems

Micrometre-scale analytical devices are more attractive than their macroscale counterparts for various reasons. For example, they use smaller volumes of reagents and are therefore cheaper, quicker and less hazardous to use, and more environmentally appealing. Scaling laws compare the relative performance of a system as the dimensions of the system change, and can predict the operational success of miniaturized chemical

Dirk Janasek; Joachim Franzke; Andreas Manz

2006-01-01

399

A biomechanical analysis of applied pinch force during periodontal scaling

One of the factors associated with the high prevalence of upper extremity musculoskeletal disorders, such as carpal tunnel syndrome, among dental practitioners is the repeated high pinch force applied during periodontal scaling. The goal of this study was to determine the relationship between the pinch force applied during periodontal scaling and the forces generated at the tip of the tool.

Alfredo Villanueva; Hui Dong; David Rempel

2007-01-01

400

Analysis of small scale turbulent structures and the effect of spatial scales on gas transfer

NASA Astrophysics Data System (ADS)

The exchange of gases through the air-sea interface strongly depends on environmental conditions such as wind stress and waves, which in turn generate near-surface turbulence. Near-surface turbulence is a main driver of surface divergence, which has been shown to cause highly variable transfer rates on relatively small spatial scales. Due to the cool skin of the ocean, heat can be used as a tracer to detect areas of surface convergence and thus gather information about the size and intensity of a turbulent process. We use infrared imagery to visualize near-surface aqueous turbulence and determine the impact of turbulent scales on exchange rates. Through the high temporal and spatial resolution of these types of measurements, spatial scales as well as surface dynamics can be captured. The surface heat pattern is formed by distinct structures on two scales: small-scale, short-lived structures termed fish scales, and larger-scale cold streaks that are consistent with the footprints of Langmuir circulations. There are two key characteristics of the observed surface heat patterns: 1. The surface heat patterns show characteristic features at distinct scales. 2. The structure of these patterns changes with increasing wind stress and surface conditions. In [2], turbulent cell sizes have been shown to systematically decrease with increasing wind speed until a saturation at u* = 0.7 cm/s is reached. The results suggest a saturation in the tangential stress. Similar behaviour has been observed by [1] for gas transfer measurements at higher wind speeds. In this contribution, a new model to estimate the heat flux is applied which is based on the measured turbulent cell size and surface velocities. This approach allows the direct comparison of the net effect on heat flux of eddies of different sizes and a comparison to gas transfer measurements. Linking transport models with thermographic measurements, transfer velocities can be computed.
In this contribution, we will quantify the effect of small scale processes on interfacial transport and relate it to gas transfer. References [1] T. G. Bell, W. De Bruyn, S. D. Miller, B. Ward, K. Christensen, and E. S. Saltzman. Air-sea dimethylsulfide (DMS) gas transfer in the North Atlantic: evidence for limited interfacial gas exchange at high wind speed. Atmos. Chem. Phys. , 13:11073-11087, 2013. [2] J Schnieders, C. S. Garbe, W.L. Peirson, and C. J. Zappa. Analyzing the footprints of near surface aqueous turbulence - an image processing based approach. Journal of Geophysical Research-Oceans, 2013.

Schnieders, Jana; Garbe, Christoph

2014-05-01

401

Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS)

ERIC Educational Resources Information Center

Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via…

Kim, Se-Kang; Frisby, Craig L.; Davison, Mark L.

2004-01-01

402

Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
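The shared-harmonic condition described above is easy to verify numerically. A small sketch follows; the frequencies assume just intonation built on C4 ≈ 261.63 Hz.

```python
def shared_harmonic(f1, f2, n_max=16, tol=1e-6):
    """Lowest harmonic indices (i, j) with i*f1 == j*f2, if any exist
    within the first n_max harmonics of each note."""
    for i in range(1, n_max + 1):
        for j in range(1, n_max + 1):
            if abs(i * f1 - j * f2) < tol:
                return i, j
    return None

f_c = 261.63          # C4
f_g = f_c * 3 / 2     # G, a just perfect fifth above C
f_e = f_c * 5 / 4     # E, a just major third above C

print(shared_harmonic(f_c, f_g))  # (3, 2): 3rd harmonic of C = 2nd harmonic of G
print(shared_harmonic(f_c, f_e))  # (5, 4): 5th harmonic of C = 4th harmonic of E
```

The simpler the integer ratio, the lower the shared harmonic, which is one way to state why a fifth sounds more consonant than a third.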

403

Murray Gibson

2007-04-27

404

Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain — a Pythagorean interval of overlapping notes. This synergy would suggest less difference between the working of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).

Murray Gibson

2010-01-08

405

Scaling properties of the Arctic sea ice Deformation from Buoy Dispersion Analysis

A temporal and spatial scaling analysis of Arctic sea ice deformation is performed over time scales from 3 hours to 3 months and over spatial scales from 300 m to 300 km. The deformation is derived from the dispersion of pairs of drifting buoys, using the IABP (International Arctic Buoy Program) buoy data sets. This study characterizes the deformation of

J. Weiss; P. Rampal; D. Marsan; R. Lindsay; H. Stern

2007-01-01

406

A Factor Analysis of the Laurelton Self-Concept Scale. Volume 1, Number 14.

ERIC Educational Resources Information Center

Items from the Laurelton Self Concept Scale (LSCS) and the Locus of Control Scale for Children were administered to 172 male and female educable mental retardates to examine the LSCS by R factor analysis. It was found that the Self Concept Scale is factor analyzable when appropriately administered to educables. The small factors grouped into…

Harrison, Robert H.; Budoff, Milton

407

ERIC Educational Resources Information Center

TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)

Redfield, Joel

1978-01-01

408

National Technical Information Service (NTIS)

Estimates of the uncertainties attached to full-scale predictions of submarine propulsion based on model tests in the Large Cavitation Channel (LCC) are obtained by means of a global uncertainty analysis. The analysis takes into account all the component ...

F. Noblesse; J. R. Lee; W. Brewer; M. R. Pfeifer; R. B. Hurwitz

1998-01-01

409

Markov Chain Analysis for Large-Scale Grid Systems.

National Technical Information Service (NTIS)

In large-scale grid systems with decentralized control, the interactions of many service providers and consumers will likely lead to emergent global system behaviors that result in unpredictable, often detrimental, outcomes. This possibility argues for de...
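As an illustration of the kind of computation a Markov chain analysis of system behavior rests on, the sketch below finds the long-run state occupancy of a small chain. The three-state model and its transition probabilities are hypothetical, not taken from the report.

```python
import numpy as np

# hypothetical 3-state model of a grid service provider: Idle, Busy, Failed
P = np.array([
    [0.90, 0.09, 0.01],   # transitions from Idle
    [0.50, 0.45, 0.05],   # transitions from Busy
    [0.20, 0.00, 0.80],   # transitions from Failed (recovery back to Idle)
])

# stationary distribution pi with pi P = pi: left eigenvector for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi /= pi.sum()
print(pi.round(3))  # long-run fraction of time spent in each state
```

Emergent global behavior in a large grid corresponds to properties of a much larger chain of interacting providers and consumers, but the stationary-distribution calculation is the same in principle.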

C. Dabrowski; F. Hunt

2009-01-01

410

Modulation analysis of large-scale discrete vortices

The behavior of large-scale vortices governed by the discrete nonlinear Schrödinger equation is studied. Using a discrete version of modulation theory, it is shown how vortices are trapped and stabilized by the self-consistent Peierls-Nabarro potential that they generate in the lattice. Large-scale circular and polygonal vortices are studied away from the anticontinuum limit, which is the limit considered in previous

Luis A. Cisneros; Antonmaria A. Minzoni; Panayotis Panayotaros; Noel F. Smyth

2008-01-01

411

MEMS-based resonant heat engine: scaling analysis

In this article, a scaling model of a MEMS-based resonant heat engine is presented. The engine is an external combustion engine made of a cavity encapsulated between two thin membranes. The cavity is filled with a saturated liquid–vapor mixture working fluid. Both model and experiment are used to investigate issues related to scaling of the engine. The results of the

H. Bardaweel; B. S. Preetham; R. Richards; C. Richards; M. Anderson

412

ERIC Educational Resources Information Center

The diagnostic criteria for attention deficit hyperactivity disorder have evolved over time with current versions of the "Diagnostic and Statistical Manual", (4th edition), text revision, ("DSM-IV-TR") suggesting that two constellations of symptoms may be present alone or in combination. The SCALES instrument for diagnosing attention deficit…

Ryser, Gail R.; Campbell, Hilary L.; Miller, Brian K.

2010-01-01

413

CFD analysis of wind climate from human scale to urban scale

The rapid growth of computational wind engineering (CWE) has led to an expansion of the research fields of wind engineering. CWE has made it possible to analyze various physical processes associated with wind climate around humans and in urban areas. This paper reviews recent achievements in CWE and its application to wind climate in scales ranging from human to urban

Shuzo Murakami; Ryozo Ooka; Akashi Mochida; Shinji Yoshida; Sangjin Kim

1999-01-01

414

Coordinate Dependence of Variability Analysis

Analysis of motor performance variability in tasks with redundancy affords insight into the synergies underlying central nervous system (CNS) control. Preferential distribution of variability in ways that minimally affect task performance suggests sophisticated neural control. Unfortunately, in the analysis of variability, the choice of coordinates used to represent multi-dimensional data may profoundly affect the analysis, introducing an arbitrariness that compromises its conclusions.
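The coordinate dependence at issue can be made concrete with a toy two-joint arm (all numbers hypothetical): the same trial-to-trial variability shows a very different anisotropy when expressed in joint space versus endpoint (task) space.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical two-joint planar arm: shoulder and elbow angles (rad) with
# correlated trial-to-trial variability
mean = [0.8, 1.6]
cov = [[0.010, -0.009], [-0.009, 0.010]]
angles = rng.multivariate_normal(mean, cov, size=5000)

l1 = l2 = 0.3  # link lengths (m)
# forward kinematics: the same variability expressed in endpoint coordinates
ex = l1 * np.cos(angles[:, 0]) + l2 * np.cos(angles[:, 0] + angles[:, 1])
ey = l1 * np.sin(angles[:, 0]) + l2 * np.sin(angles[:, 0] + angles[:, 1])

def anisotropy(data):
    """Ratio of largest to smallest eigenvalue of the covariance matrix."""
    eig = np.linalg.eigvalsh(np.cov(data))
    return eig.max() / eig.min()

# the distribution of variability looks very different in the two frames
print(round(anisotropy(angles.T), 1), round(anisotropy(np.stack([ex, ey])), 1))
```

Any statement about how variability is "structured" therefore has to name the coordinate frame in which it was measured.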

Dagmar Sternad; Se-Woong Park; Hermann Müller; Neville Hogan

2010-01-01

415

A biomechanical analysis of applied pinch force during periodontal scaling

One of the factors associated with the high prevalence of upper extremity musculoskeletal disorders, such as carpal tunnel syndrome, among dental practitioners is the repeated high pinch force applied during periodontal scaling. The goal of this study was to determine the relationship between the pinch force applied during periodontal scaling and the forces generated at the tip of the tool. A linear biomechanical model that incorporated tool reaction forces and a calculated safety margin was created to predict the pinch force applied by experienced and inexperienced dentists during periodontal scaling. Six dentists and six dental students used an instrumented scaling tool while performing periodontal scaling on patients. Thumb pinch force was measured by a pressure sensor, while the forces developed at the instrument tip were measured by a six-axis load cell. A biomechanical model was used to calculate a safety factor and to predict the applied pinch force. For experienced dentists, the model was moderately successful in predicting pinch force (R2 = 0.59). For inexperienced dentists, the model failed to predict peak pinch force (R2 = 0.01). The mean safety margin was higher for inexperienced (4.88±1.58) than experienced (3.35±0.55) dentists, suggesting that students apply excessive force during scaling.
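The R2 values quoted above are coefficients of determination between measured and model-predicted forces. A minimal, self-contained sketch with hypothetical force values:

```python
import numpy as np

def r_squared(measured, predicted):
    """Coefficient of determination between measured and model-predicted values."""
    ss_res = np.sum((measured - predicted) ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# hypothetical pinch-force data (N): a model tracking the measurements closely
measured  = np.array([18.0, 22.5, 25.1, 30.4, 34.9, 40.2])
predicted = np.array([17.2, 23.8, 26.0, 29.1, 36.0, 39.0])
print(round(r_squared(measured, predicted), 2))  # -> 0.98
```

An R2 near 0.59, as reported for experienced dentists, means the model explains roughly 59% of the variance in applied pinch force; the near-zero value for students means their peak forces were essentially unpredictable from the model.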

Villanueva, Alfredo; Dong, Hui; Rempel, David

2009-01-01

416

A biomechanical analysis of applied pinch force during periodontal scaling.

One of the factors associated with the high prevalence of upper extremity musculoskeletal disorders, such as carpal tunnel syndrome, among dental practitioners is the repeated high pinch force applied during periodontal scaling. The goal of this study was to determine the relationship between the pinch force applied during periodontal scaling and the forces generated at the tip of the tool. A linear biomechanical model that incorporated tool reaction forces and a calculated safety margin was created to predict the pinch force applied by experienced and inexperienced dentists during periodontal scaling. Six dentists and six dental students used an instrumented scaling tool while performing periodontal scaling on patients. Thumb pinch force was measured by a pressure sensor, while the forces developed at the instrument tip were measured by a six-axis load cell. A biomechanical model was used to calculate a safety factor and to predict the applied pinch force. For experienced dentists, the model was moderately successful in predicting pinch force (R2 = 0.59). For inexperienced dentists, the model failed to predict peak pinch force (R2 = 0.01). The mean safety margin was higher for inexperienced (4.88±1.58) than experienced (3.35±0.55) dentists, suggesting that students apply excessive force during scaling. PMID:17052721

Villanueva, Alfredo; Dong, Hui; Rempel, David

2007-01-01

417

Multiscale Modeling and Time-Scale Analysis of a Human Limb

A multi-scale modeling approach is proposed in this paper that assists the user in constructing musculoskeletal system models from sub-models describing various mechanisms at different levels of the length scale. In addition, a dynamic time-scale analysis has been performed on the developed multi-scale models of various parts of a human limb: the wrist, elbow and shoulder, characterized by different maximal muscle

Csaba Fazekas; György Kozmann; Katalin M. Hangos

2007-01-01

418

Scale Development Research: A Content Analysis and Recommendations for Best Practices

ERIC Educational Resources Information Center

The authors conducted a content analysis on new scale development articles appearing in the "Journal of Counseling Psychology" during 10 years (1995 to 2004). The authors analyze and discuss characteristics of the exploratory and confirmatory factor analysis procedures in these scale development studies with respect to sample characteristics,…

Worthington, Roger L.; Whittaker, Tiffany A.

2006-01-01

419

A wafer scale fail bit analysis system for VLSI memory yield improvement

A wafer-scale fail bit analysis system which outputs an entire wafer fail bit map (FBM) by using a data compaction technique and testing structure is developed. With this system, process defect locations on a wafer can be quickly and easily recognized electrically. The processing time of wafer-scale fail bit analysis is reduced to only 2% of that required by the conventional

Y. Sakai; J. Sawada; W. Sakamoto; J. Murato; H. Kawamoto; K. Sakai; K. Nakamuta

1990-01-01

420

Grading Scales of Virginia High Schools and Admissions Action at Virginia Tech: A Research Analysis.

ERIC Educational Resources Information Center

Descriptive statistics of grading scales used in 371 Virginia high schools are discussed in light of the admissions decisions at Virginia Polytechnic Institute and State University. An analysis of the effect of grading scales on admissions decisions is presented. Nontraditional analysis is recommended as an alternative for interpreting the complex…

Carson, E. W.; And Others

1990-01-01

421

ERIC Educational Resources Information Center

Empathy is an essential building block for successful interpersonal relationships. Atypical empathic development is implicated in a range of developmental psychopathologies. However, assessment of empathy in children is constrained by a lack of suitable measurement instruments. This article outlines the development of the Kids' Empathic…

Reid, Corinne; Davis, Helen; Horlin, Chiara; Anderson, Mike; Baughman, Natalie; Campbell, Catherine

2013-01-01

422

Two new experimental technologies enabled realization of the Break-Out Afterburner (BOA): the high-quality Trident laser and free-standing nm-scale carbon targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction. Predictions from VPIC have been validated for the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates the computation) has been optimized extensively for many different supercomputers. Laser-driven ions bring promising applications closer to realization: ion-based fast ignition, active interrogation, and hadron therapy.

Wu, Hui-Chun [Los Alamos National Laboratory; Hegelich, B.M. [Los Alamos National Laboratory; Fernandez, J.C. [Los Alamos National Laboratory; Shah, R.C. [Los Alamos National Laboratory; Palaniyappan, S. [Los Alamos National Laboratory; Jung, D. [Los Alamos National Laboratory; Yin, L [Los Alamos National Laboratory; Albright, B.J. [Los Alamos National Laboratory; Bowers, K. [Guest Scientist of XCP-6; Huang, C. [Los Alamos National Laboratory; Kwan, T.J. [Los Alamos National Laboratory

2012-06-19

423

NASA Astrophysics Data System (ADS)

The AMMA program contributed to acquire relevant ground observations in the West African monsoon area to validate satellite rainfall estimation products at several temporal and spatial scales. A comparison of rainfall estimates retrieved from several IR/MW combined algorithms with precipitation accumulations derived from rain gauge measurements was performed at the seasonal scale (pre and post onset of the 2006 monsoon) in the Sahelian band, and at daily time scales. A diurnal cycle analysis over three regions of Western Africa to help understanding strengths and weaknesses of rainfall algorithms according to regional rainfall characteristics is presented. Furthermore, a power spectrum study is carried out in those three regions over the whole rainy season for a qualitative comparison of the validation datasets with the three rainfall estimate products. Finally a selected meso-scale convective event is analyzed in details. We used the datasets of three satellite rainfall products: TMPA (0.25°-3 hours), GsMap-MVK (0.1°-1h) and EPSAT-SG (0.1°-15mn). The various scales of this study have required several validation datasets : - gridded rainfall estimates from the CILSS rain gauge network over the Sahelian band at the 1°-10-day scale; a dataset of high resolution (0.01°-1h) gridded precipitation estimates over three regions of Western Africa elaborated from dense gauge networks near Niamey (Niger), Kopargo (Benin) and Dakar (Senegal). Two additional fine scale validation datasets have been used for the case study: the 5-minute krigged rain gauge dataset and the 7-minute rainfall estimates dataset at 1.5 km of altitude retrieved from the Ronsard radar observations during the AMMA