Sibley, Chris G; Houkamau, Carla A
2013-01-01
We argue that there is a need for culture-specific measures of identity that delineate the factors that make the most sense for specific cultural groups. One such measure, recently developed specifically for Māori peoples, is the Multi-Dimensional Model of Māori Identity and Cultural Engagement (MMM-ICE). Māori are the indigenous peoples of New Zealand. The MMM-ICE is a 6-factor measure that assesses the following aspects of identity and cultural engagement as Māori: (a) group membership evaluation, (b) socio-political consciousness, (c) cultural efficacy and active identity engagement, (d) spirituality, (e) interdependent self-concept, and (f) authenticity beliefs. This article examines the scale properties of the MMM-ICE using item response theory (IRT) analysis in a sample of 492 Māori. The MMM-ICE subscales showed reasonably even levels of measurement precision across the latent trait range. Analysis of age (cohort) effects further indicated that most aspects of Māori identification tended to be higher among older Māori, and these cohort effects were similar for both men and women. This study provides novel support for the reliability and measurement precision of the MMM-ICE. The study also provides a first step in exploring change and stability in Māori identity across the life span. A copy of the scale, along with recommendations for scale scoring, is included. PMID:23356361
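The "even measurement precision across the latent trait range" claim above is an IRT statement about item information. The abstract does not say which IRT model was fitted; assuming a two-parameter logistic (2PL) model purely for illustration, an item's Fisher information peaks where the latent trait equals the item's difficulty:

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at latent trait theta.

    a: discrimination, b: difficulty.
    I(theta) = a^2 * p * (1 - p), where p is the 2PL response probability.
    """
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Information is maximal where theta equals the item difficulty b.
peak = item_information(0.0, a=1.5, b=0.0)      # p = 0.5 here
off_peak = item_information(2.0, a=1.5, b=0.0)
```

A scale with "even" precision spreads item difficulties across the trait range so that the summed information curve stays flat.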
NASA Astrophysics Data System (ADS)
Milledge, D.; Bellugi, D.; McKean, J. A.; Dietrich, W.
2012-12-01
The infinite slope model is the basis for almost all watershed scale slope stability models. However, it assumes that a potential landslide is infinitely long and wide. As a result, it cannot represent resistance at the margins of a potential landslide (e.g. from lateral roots), and is unable to predict the size of a potential landslide. Existing three-dimensional models generally require computationally expensive numerical solutions and have previously been applied only at the hillslope scale. Here we derive an alternative analytical treatment that accounts for lateral resistance by representing the forces acting on each margin of an unstable block. We apply 'at rest' earth pressure on the lateral sides, and 'active' and 'passive' pressure using a log-spiral method on the upslope and downslope margins. We represent root reinforcement on each margin assuming that root cohesion is an exponential function of soil depth. We benchmark this treatment against other more complete approaches (Finite Element (FE) and closed form solutions) and find that our model: 1) converges on the infinite slope predictions as length / depth and width / depth ratios become large; 2) agrees with the predictions from state-of-the-art FE models to within +/- 30% error, for the specific cases in which these can be applied. We then test our model's ability to predict failure of an actual (mapped) landslide where the relevant parameters are relatively well constrained. We find that our model predicts failure at the observed location with a nearly identical shape and predicts that larger or smaller shapes conformal to the observed shape are indeed more stable. Finally, we perform a sensitivity analysis using our model to show that lateral reinforcement sets a minimum landslide size, while the additional strength at the downslope boundary means that the optimum shape for a given size is longer in a downslope direction. 
However, reinforcement effects cannot fully explain the size or shape distributions of observed landslides, highlighting the importance of spatial patterns of key parameters (e.g. pore water pressure) and motivating the model's watershed scale application. This watershed scale application requires an efficient method to find the least stable shapes among an almost infinite set. However, when applied in this context, it allows a more complete examination of the controls on landslide size, shape and location.
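The infinite slope model that this paper generalizes has a standard closed form for the factor of safety, the baseline against which the lateral-resistance terms converge. A minimal sketch of that baseline (parameter values are illustrative, not taken from the study):

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, gamma_w, z, m, beta_deg):
    """Factor of safety of the classic infinite slope model.

    c: effective cohesion (Pa), phi_deg: friction angle (deg),
    gamma / gamma_w: unit weights of soil and water (N/m^3),
    z: vertical soil depth (m), m: saturated fraction of z,
    beta_deg: slope angle (deg).
    """
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    normal = gamma * z * math.cos(beta) ** 2          # normal stress on slide plane
    pore = gamma_w * m * z * math.cos(beta) ** 2      # pore water pressure
    resisting = c + (normal - pore) * math.tan(phi)
    driving = gamma * z * math.sin(beta) * math.cos(beta)
    return resisting / driving

# A slope stable when dry (m=0) can fail when fully saturated (m=1).
fs_dry = infinite_slope_fs(c=2000, phi_deg=33, gamma=18000,
                           gamma_w=9810, z=1.0, m=0.0, beta_deg=35)
fs_wet = infinite_slope_fs(c=2000, phi_deg=33, gamma=18000,
                           gamma_w=9810, z=1.0, m=1.0, beta_deg=35)
```

The paper's contribution is to add margin terms (lateral earth pressure, root cohesion) to the resisting force, which this one-dimensional form omits.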
Grey-Scale Measurements in Multi-Dimensional Digitized Images
van Vliet, Lucas J.
Doctoral dissertation (proefschrift).
Reconstruction of network structures from marked point processes using multi-dimensional scaling
NASA Astrophysics Data System (ADS)
Kuroda, Kaori; Hashiguchi, Hiroki; Fujiwara, Kantaro; Ikeguchi, Tohru
2014-12-01
We propose a method of estimating network structures solely from observed marked point processes, using multi-dimensional scaling. In this method, we first compute a spike-time metric that quantifies the distance between the observed marked point processes. Next, to represent the relationships among the point processes in Euclidean space, we apply multi-dimensional scaling to the metric distances between point processes. We then apply partialization analysis to the coordinate vectors obtained by multi-dimensional scaling. As a result, we can estimate the network structure from multiple point processes even when the elements share many common spurious inputs from other elements.
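The embedding step described above, placing point processes in Euclidean space from pairwise metric distances, is classical (Torgerson) MDS. A hedged sketch with a toy distance matrix standing in for the spike-time metric (e.g. a Victor-Purpura-style distance, which is not implemented here):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Classical (Torgerson) MDS: embed points in a Euclidean space
    of the given dimension from a matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]          # keep the largest eigenvalues
    w = np.clip(w[idx], 0.0, None)
    return V[:, idx] * np.sqrt(w)

# Toy check: distances of three collinear points are recovered exactly in 1D.
D = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
X = classical_mds(D, dim=1)
```

The coordinate vectors returned here are what the paper's partialization analysis would then operate on.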
Development of multi-dimensional body image scale for Malaysian female adolescents.
Chin, Yit Siew; Taib, Mohd Nasir Mohd; Shariff, Zalilah Mohd; Khor, Geok Lin
2008-01-01
The present study was conducted to develop a Multi-dimensional Body Image Scale for Malaysian female adolescents. Data were collected among 328 female adolescents from a secondary school in Kuantan district, state of Pahang, Malaysia by using a self-administered questionnaire and anthropometric measurements. The self-administered questionnaire comprised multiple measures of body image, Eating Attitude Test (EAT-26; Garner & Garfinkel, 1979) and Rosenberg Self-esteem Inventory (Rosenberg, 1965). The 152 items from selected multiple measures of body image were examined through factor analysis and for internal consistency. Correlations between Multi-dimensional Body Image Scale and body mass index (BMI), risk of eating disorders and self-esteem were assessed for construct validity. A seven factor model of a 62-item Multi-dimensional Body Image Scale for Malaysian female adolescents with construct validity and good internal consistency was developed. The scale encompasses 1) preoccupation with thinness and dieting behavior, 2) appearance and body satisfaction, 3) body importance, 4) muscle increasing behavior, 5) extreme dieting behavior, 6) appearance importance, and 7) perception of size and shape dimensions. In addition, a multidimensional body image composite score was proposed to screen for negative body image risk in female adolescents. The results showed that body image was correlated with BMI, risk of eating disorders and self-esteem in female adolescents. In short, the present study supports a multi-dimensional concept for body image and provides a new insight into its multi-dimensionality in Malaysian female adolescents with preliminary validity and reliability of the scale. The Multi-dimensional Body Image Scale can be used to identify female adolescents who are potentially at risk of developing body image disturbance through future intervention programs. PMID:20126371
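The internal consistency reported above is conventionally summarized by Cronbach's alpha. A small illustrative sketch of the computation (toy scores, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# Perfectly parallel items yield alpha = 1 (the theoretical maximum).
scores = np.array([[1, 1], [2, 2], [3, 3], [4, 4]], dtype=float)
alpha = cronbach_alpha(scores)
```

In practice alpha is computed per subscale after the factor structure has been fixed, as in the seven-factor model above.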
Development of a Multi-Dimensional Scale for PDD and ADHD
ERIC Educational Resources Information Center
Funabiki, Yasuko; Kawagishi, Hisaya; Uwatoko, Teruhisa; Yoshimura, Sayaka; Murai, Toshiya
2011-01-01
A novel assessment scale, the multi-dimensional scale for pervasive developmental disorder (PDD) and attention-deficit/hyperactivity disorder (ADHD) (MSPA), is reported. Existing assessment scales are intended to establish each diagnosis. However, the diagnosis by itself does not always capture individual characteristics or indicate the level of…
Rubel, Oliver; Ahern, Sean; Bethel, E. Wes; Biggin, Mark D.; Childs, Hank; Cormier-Michel, Estelle; DePace, Angela; Eisen, Michael B.; Fowlkes, Charless C.; Geddes, Cameron G. R.; Hagen, Hans; Hamann, Bernd; Huang, Min-Yu; Keranen, Soile V. E.; Knowles, David W.; Hendriks, Chris L. Luengo; Malik, Jitendra; Meredith, Jeremy; Messmer, Peter; Prabhat; Ushizima, Daniela; Weber, Gunther H.; Wu, Kesheng
2010-06-08
Knowledge discovery from large and complex scientific data is a challenging task. With the ability to measure and simulate more processes at increasingly finer spatial and temporal scales, the growing number of data dimensions and data objects presents tremendous challenges for effective data analysis and data exploration methods and tools. The combination and close integration of methods from scientific visualization, information visualization, automated data analysis, and other enabling technologies, such as efficient data management, support knowledge discovery from multi-dimensional scientific data. This paper surveys two distinct applications, in developmental biology and accelerator physics, illustrating the effectiveness of the described approach.
An amino acid map of inter-residue contact energies using metric multi-dimensional scaling
Sourav Rakshit; G. K. Ananthasuresh
2008-01-01
We present a map of the amino acids based on their inter-residue contact energies, using the Miyazawa–Jernigan (MJ) matrix. This work is based on the method of metric multi-dimensional scaling (MMDS). The MMDS map shows, among other things, that the MJ contact energies imply the hydrophobic–hydrophilic nature of the amino acid residues. With the help of the map we are able to compare and
Multi-Dimensional Regression Analysis of Time-Series Data Streams
Yixin Chen; Guozhu Dong; Jiawei Han; Benjamin W. Wah; Jianyong Wang
2002-01-01
Real-time production systems and other dynamic environments often generate tremendous (potentially infinite) amounts of stream data; the volume of data is too large to be stored on disks or scanned multiple times. Can we perform on-line, multi-dimensional analysis and data mining of such data to alert people about dramatic changes of situations and to initiate timely, high-quality responses? This is
Zhanhua, Cui; Gan, Jacob Gah-Kok; Lei, Li; Mathura, Venkatarajan Subramanian; Sakharkar, Meena Kishore; Kangueane, Pandjassarame
2005-01-01
Protein subunit dimers are either homodimers (consisting of identical polypeptides) or heterodimers (consisting of different polypeptides). Protein dimers are involved in several cellular processes, and an understanding of the molecular principles of their complexation (subunit-subunit interaction) is essential. This is generally studied using 3D structures of homodimers and heterodimers determined by X-ray crystallography. However, the current knowledge on subunit interaction is limited due to the lack of sufficient 3D dimer structures. Our interest is in studying heterodimers using 3D structures to identify interaction parameters that would help in developing a model to predict heterodimer interaction sites from protein sequences alone. The efficiency of such models depends on the weighted contribution of numerous parameters characterizing heterodimer interfaces. Therefore, we studied the salient features of 111 interface parameters in 65 heterodimer structures. In this study, we applied multi-dimensional scaling for dimensionality reduction on these parameters to select the most critical ones that best characterize heterodimer interfaces. The significance of these parameters in subunit interaction is discussed. PMID:15569623
Convergence Analysis of On-Policy LSPI for Multi-Dimensional Continuous State and Action-Space
Powell, Warren B.
Convergence Analysis of On-Policy LSPI for Multi-Dimensional Continuous State and Action-Space MDPs. We provide a formal convergence analysis of the algorithm under the assumption that value functions are spanned by finitely many known basis functions. Furthermore, the convergence result extends
Multi-dimensional PARAFAC2 component analysis of multi-channel EEG data including temporal tracking.
Weis, Martin; Jannek, Dunja; Roemer, Florian; Guenther, Thomas; Haardt, Martin; Husar, Peter
2010-01-01
The identification of signal components in electroencephalographic (EEG) data originating from neural activities is a long-standing problem in neuroscience. This area has regained new attention due to the possibilities of multi-dimensional signal processing. In this work we analyze measured visual-evoked potentials on the basis of the time-varying spectrum for each channel. Recently, parallel factor (PARAFAC) analysis has been used to identify the signal components in the space-time-frequency domain. However, the PARAFAC decomposition is not able to cope with components appearing time-shifted over the different channels. Furthermore, it is not possible to track PARAFAC components over time. In this contribution we derive how to overcome these problems by using the PARAFAC2 model, which renders it an attractive approach for processing EEG data with highly dynamic (moving) sources. PMID:21096263
Lee, Hyun Jung; McDonnell, Kevin T.; Zelenyuk, Alla; Imre, D.; Mueller, Klaus
2014-03-01
Although the Euclidean distance does well in measuring data distances within high-dimensional clusters, it does poorly when it comes to gauging inter-cluster distances. This significantly impacts the quality of global, low-dimensional space embedding procedures such as the popular multi-dimensional scaling (MDS) where one can often observe non-intuitive layouts. We were inspired by the perceptual processes evoked in the method of parallel coordinates which enables users to visually aggregate the data by the patterns the polylines exhibit across the dimension axes. We call the path of such a polyline its structure and suggest a metric that captures this structure directly in high-dimensional space. This allows us to better gauge the distances of spatially distant data constellations and so achieve data aggregations in MDS plots that are more cognizant of existing high-dimensional structure similarities. Our MDS plots also exhibit similar visual relationships as the method of parallel coordinates which is often used alongside to visualize the high-dimensional data in raw form. We then cast our metric into a bi-scale framework which distinguishes far-distances from near-distances. The coarser scale uses the structural similarity metric to separate data aggregates obtained by prior classification or clustering, while the finer scale employs the appropriate Euclidean distance.
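One simple way to capture the "structure" of a parallel-coordinates polyline, i.e. the pattern it traces across the axes rather than its absolute position, is to compare segment slopes. This is an illustrative stand-in for the idea, not the paper's actual metric:

```python
import numpy as np

def structure_distance(x, y):
    """Distance between the *shapes* of two parallel-coordinate polylines.

    Compares axis-to-axis slopes instead of raw values, so points with
    the same up/down pattern across dimensions are close even when they
    are far apart in plain Euclidean terms.
    """
    sx, sy = np.diff(x), np.diff(y)   # segment slopes of each polyline
    return float(np.linalg.norm(sx - sy))

a = np.array([0.1, 0.5, 0.2, 0.8])
b = a + 3.0                            # same zig-zag pattern, offset by 3
c = np.array([0.8, 0.2, 0.5, 0.1])     # a different pattern
```

Under plain Euclidean distance, `a` and `b` would be far apart; under a structure-aware metric they coincide, which is the effect the paper exploits for MDS layouts.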
ERIC Educational Resources Information Center
Power, Thomas J.; Dombrowski, Stefan C.; Watkins, Marley W.; Mautone, Jennifer A.; Eagle, John W.
2007-01-01
Efforts to develop interventions to improve homework performance have been impeded by limitations in the measurement of homework performance. This study was conducted to develop rating scales for assessing homework performance among students in elementary and middle school. Items on the scales were intended to assess student strengths as well as…
Zeng, Wei; Zeng, An; Liu, Hao; Shang, Ming-Sheng; Zhang, Yi-Cheng
2014-01-01
Recommender systems are designed to assist individual users to navigate through the rapidly growing amount of information. One of the most successful recommendation techniques is collaborative filtering, which has been extensively investigated and has already found wide applications in e-commerce. One of the challenges in this algorithm is how to accurately quantify the similarities of user pairs and item pairs. In this paper, we employ the multidimensional scaling (MDS) method to measure the similarities between nodes in user-item bipartite networks. The MDS method can extract the essential similarity information from the networks by smoothing out noise, which provides a graphical display of the structure of the networks. With the similarity measured from MDS, we find that the item-based collaborative filtering algorithm can outperform the diffusion-based recommendation algorithms. Moreover, we show that this method tends to recommend unpopular items and increase the global diversification of the networks in the long term. PMID:25343243
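Once an item-item similarity matrix is available (in the paper, derived from MDS coordinates of the bipartite network), item-based collaborative filtering scores each item for a user by summing its similarities to the items the user has already collected. A minimal sketch with a hand-made similarity matrix (the values are illustrative, not MDS-derived):

```python
import numpy as np

def item_based_scores(R, S):
    """Item-based collaborative filtering scores.

    R: binary (n_users, n_items) user-item matrix.
    S: symmetric (n_items, n_items) item-item similarity matrix,
       e.g. a decreasing function of distance in an MDS embedding.
    Score of item j for user u = sum of similarities between j and
    the items u has collected.
    """
    return R @ S

R = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
S = np.array([[1.0, 0.5, 0.1],
              [0.5, 1.0, 0.4],
              [0.1, 0.4, 1.0]])
scores = item_based_scores(R, S)   # rank unseen items by these scores
```

In a real system the diagonal and already-collected items are masked out before ranking; that bookkeeping is omitted here.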
Multi-Dimensional Scaling and MODELLER-Based Evolutionary Algorithms for Protein Model Refinement
Chen, Yan; Shang, Yi; Xu, Dong
2015-01-01
Protein structure prediction, i.e., computationally predicting the three-dimensional structure of a protein from its primary sequence, is one of the most important and challenging problems in bioinformatics. Model refinement is a key step in the prediction process, where improved structures are constructed based on a pool of initially generated models. Since the refinement category was added to the biennial Critical Assessment of Structure Prediction (CASP) in 2008, CASP results show that it is a challenge for existing model refinement methods to improve model quality consistently. This paper presents three evolutionary algorithms for protein model refinement, in which multidimensional scaling (MDS), the MODELLER software, and a hybrid of both are used as crossover operators, respectively. The MDS-based method takes a purely geometrical approach and generates a child model by combining the contact maps of multiple parents. The MODELLER-based method takes a statistical and energy minimization approach, and uses the remodeling module in the MODELLER program to generate new models from multiple parents. The hybrid method first generates models using the MDS-based method and then runs them through the MODELLER-based method, aiming to combine the strengths of both. Promising results have been obtained in experiments using CASP datasets. The MDS-based method improved the best of a pool of predicted models in terms of the global distance test score (GDT-TS) in 9 out of 16 test targets. PMID:25844403
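The MDS-based crossover described above can be sketched as: average the parents' pairwise-distance maps, then recover child coordinates by classical MDS. The toy below combines full distance maps rather than thresholded contact maps, a simplification of the paper's operator:

```python
import numpy as np

def combine_parents(distance_maps):
    """Average the residue-pair distance maps of several parent models
    (a simplified stand-in for the paper's contact-map combination)."""
    return np.mean(np.stack(distance_maps), axis=0)

def embed_3d(D):
    """Recover 3D coordinates from a distance map via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:3]            # three largest eigenvalues
    return V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))

# Two "parent" geometries for a 4-residue toy chain, one slightly expanded.
rng = np.random.default_rng(0)
X = rng.random((4, 3))
D1 = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
D2 = 1.1 * D1
child = embed_3d(combine_parents([D1, D2]))   # geometry between the parents
```

Because the averaged map here is an exact Euclidean distance matrix, the child reproduces it exactly; with real, inconsistent parents the embedding is a least-squares compromise.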
Nitrogen deposition and multi-dimensional plant diversity at the landscape scale
Roth, Tobias; Kohli, Lukas; Rihm, Beat; Amrhein, Valentin; Achermann, Beat
2015-01-01
Estimating effects of nitrogen (N) deposition is essential for understanding human impacts on biodiversity. However, studies relating atmospheric N deposition to plant diversity are usually restricted to small plots of high conservation value. Here, we used data on 381 randomly selected 1-km² plots covering most habitat types of Central Europe and an elevational range of 2900 m. We found that high atmospheric N deposition was associated with low values of six measures of plant diversity. The weakest negative relation to N deposition was found in the traditionally measured total species richness. The strongest relation to N deposition was in phylogenetic diversity, with an estimated loss of 19% due to atmospheric N deposition as compared with a homogeneously distributed historic N deposition without human influence, or of 11% as compared with a spatially varying N deposition for the year 1880, during industrialization in Europe. Because phylogenetic plant diversity is often related to ecosystem functioning, we suggest that atmospheric N deposition threatens functioning of ecosystems at the landscape scale.
magHD: a new approach to multi-dimensional data storage, analysis, display and exploitation
NASA Astrophysics Data System (ADS)
Angleraud, Christophe
2014-06-01
The ever increasing amount of data and processing capabilities - following the well-known Moore's law - is challenging the way scientists and engineers currently exploit large datasets. Scientific visualization tools, although quite powerful, are often too generic and provide abstract views of phenomena, thus preventing cross-discipline fertilization. On the other hand, Geographic Information Systems allow nice and visually appealing maps to be built, but these often become cluttered as more layers are added. Moreover, the introduction of time as a fourth analysis dimension to allow analysis of time-dependent phenomena such as meteorological or climate models is encouraging real-time data exploration techniques that allow spatial-temporal points of interest to be detected by integration of moving images by the human brain. Magellium has been involved in high performance image processing chains for satellite image processing as well as scientific signal analysis and geographic information management since its creation (2003). We believe that recent work on big data, GPU and peer-to-peer collaborative processing can open a new breakthrough in data analysis and display that will serve many new applications in collaborative scientific computing, environment mapping and understanding. The magHD (for Magellium Hyper-Dimension) project aims at developing software solutions that will bring highly interactive tools for complex dataset analysis and exploration to commodity hardware, targeting small to medium scale clusters with expansion capabilities to large cloud-based clusters.
Riordan, Daniel P.; Varma, Sushama; West, Robert B.; Brown, Patrick O.
2015-01-01
Characterization of the molecular attributes and spatial arrangements of cells and features within complex human tissues provides a critical basis for understanding processes involved in development and disease. Moreover, the ability to automate steps in the analysis and interpretation of histological images that currently require manual inspection by pathologists could revolutionize medical diagnostics. Toward this end, we developed a new imaging approach called multidimensional microscopic molecular profiling (MMMP) that can measure several independent molecular properties in situ at subcellular resolution for the same tissue specimen. MMMP involves repeated cycles of antibody or histochemical staining, imaging, and signal removal, which ultimately can generate information analogous to a multidimensional flow cytometry analysis on intact tissue sections. We performed a MMMP analysis on a tissue microarray containing a diverse set of 102 human tissues using a panel of 15 informative antibody and 5 histochemical stains plus DAPI. Large-scale unsupervised analysis of MMMP data, and visualization of the resulting classifications, identified molecular profiles that were associated with functional tissue features. We then directly annotated H&E images from this MMMP series such that canonical histological features of interest (e.g. blood vessels, epithelium, red blood cells) were individually labeled. By integrating image annotation data, we identified molecular signatures that were associated with specific histological annotations and we developed statistical models for automatically classifying these features. The classification accuracy for automated histology labeling was objectively evaluated using a cross-validation strategy, and significant accuracy (with a median per-pixel rate of 77% per feature from 15 annotated samples) for de novo feature prediction was obtained. 
These results suggest that high-dimensional profiling may advance the development of computer-based systems for automatically parsing relevant histological and cellular features from molecular imaging data of arbitrary human tissue samples, and can provide a framework and resource to spur the optimization of these technologies. PMID:26176839
Park, Ji-Won; Jeong, Hyobin; Kang, Byeongsoo; Kim, Su Jin; Park, Sang Yoon; Kang, Sokbom; Kim, Hark Kyun; Choi, Joon Sig; Hwang, Daehee; Lee, Tae Geol
2015-01-01
Time-of-flight secondary ion mass spectrometry (TOF-SIMS) emerges as a promising tool to identify the ions (small molecules) indicative of disease states from the surface of patient tissues. In TOF-SIMS analysis, an enhanced ionization of surface molecules is critical to increase the number of detected ions. Several methods have been developed to enhance ionization capability. However, how these methods improve identification of disease-related ions has not been systematically explored. Here, we present a multi-dimensional SIMS (MD-SIMS) that combines conventional TOF-SIMS and metal-assisted SIMS (MetA-SIMS). Using this approach, we analyzed cancer and adjacent normal tissues first by TOF-SIMS and subsequently by MetA-SIMS. In total, TOF- and MetA-SIMS detected 632 and 959 ions, respectively. Among them, 426 were commonly detected by both methods, while 206 and 533 were detected uniquely by TOF- and MetA-SIMS, respectively. Of the 426 commonly detected ions, 250 increased in their intensities by MetA-SIMS, whereas 176 decreased. The integrated analysis of the ions detected by the two methods resulted in an increased number of discriminatory ions leading to an enhanced separation between cancer and normal tissues. Therefore, the results show that MD-SIMS can be a useful approach to provide a comprehensive list of discriminatory ions indicative of disease states. PMID:26046669
NASA Astrophysics Data System (ADS)
Merritt, Elizabeth; Doss, Forrest; Loomis, Eric; Flippo, Kirk; Devolder, Barbara; Welser-Sherrill, Leslie; Fincke, James; Kline, John
2014-10-01
The counter-propagating shear campaign is examining instability growth and its transition to turbulence relevant to mix in ICF capsules. Experimental platforms on both OMEGA and NIF use anti-symmetric flows about a shear interface to examine isolated Kelvin-Helmholtz instability growth. Measurements of interface (an Al or Ti tracer layer) dynamics are used to benchmark the LANL RAGE hydrocode with the BHR turbulence model. The tracer layer does not expand uniformly, but breaks up into multi-dimensional structures that are initially quasi-2D due to the target geometry. We are developing techniques to analyze the multi-D structure growth along the tracer surface with a focus on characterizing the time-dependent structures' spectrum of scales in order to appraise a transition to turbulence in the system and potentially provide tighter constraints on initialization schemes for the BHR model. To this end, we use a wavelet-based analysis to diagnose single-time radiographs of the tracer layer surface (with low and amplified roughness for random-noise seeding) with observed spatially non-repetitive features, in order to identify spatial and temporal trends in radiographs taken at different times across several experimental shots. This work was conducted under the auspices of the U.S. Department of Energy by LANL under Contract DE-AC52-06NA25396.
Mai, Junyu; Sommer, Gregory Jon; Hatch, Anson V.
2010-10-01
We report on advancements of our microscale isoelectric fractionation (µIEFr) methodology for fast on-chip separation and concentration of proteins based on their isoelectric points (pI). We establish that proteins can be fractionated depending on posttranslational modifications into different pH-specific bins, from where they can be efficiently transferred to downstream membranes for additional processing and analysis. This technology can enable on-chip multidimensional glycoproteomics analysis, as a new approach to expedite biomarker identification and verification.
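Conceptually, isoelectric fractionation sorts proteins into pH bins that bracket their isoelectric points: each protein stops migrating where the local pH equals its pI. A toy sketch of the binning logic (the protein names, pI values and bin edges are illustrative, not from the paper):

```python
def fractionate_by_pi(proteins, bin_edges):
    """Assign proteins to pH-specific bins by isoelectric point (pI).

    proteins: dict of name -> pI.
    bin_edges: ascending pH boundaries; bin i spans
               [bin_edges[i], bin_edges[i+1]).
    """
    bins = {i: [] for i in range(len(bin_edges) - 1)}
    for name, pi in proteins.items():
        for i in range(len(bin_edges) - 1):
            if bin_edges[i] <= pi < bin_edges[i + 1]:
                bins[i].append(name)   # protein focuses in this pH range
                break
    return bins

sample = {"albumin": 4.7, "hemoglobin": 6.8, "lysozyme": 11.35}
result = fractionate_by_pi(sample, [3.0, 5.0, 8.0, 12.0])
```

Posttranslational modifications such as glycosylation shift a protein's pI, which is why differently modified forms of the same protein land in different bins.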
ERIC Educational Resources Information Center
Shim, Minsuk K.; Felner, Robert D.; Shim, Eunjae; Noonan, Nancy
This study examined the reliability and validity of self-reported survey data on instructional practices. It was based on a nationwide survey of more than 25,000 teachers in more than 1,000 schools across 5 years. The survey instrument was the Classroom Instructional Practice Scale (CIPS), which was based on the Classroom Information Sheet…
Yang, Hyun-Jin; Ratnapriya, Rinki; Cogliati, Tiziana; Kim, Jung-Woong; Swaroop, Anand
2015-05-01
Genomics and genetics have invaded all aspects of biology and medicine, opening uncharted territory for scientific exploration. The definition of "gene" itself has become ambiguous, and the central dogma is continuously being revised and expanded. Computational biology and computational medicine are no longer intellectual domains of the chosen few. Next generation sequencing (NGS) technology, together with novel methods of pattern recognition and network analyses, has revolutionized the way we think about fundamental biological mechanisms and cellular pathways. In this review, we discuss NGS-based genome-wide approaches that can provide deeper insights into retinal development, aging and disease pathogenesis. We first focus on gene regulatory networks (GRNs) that govern the differentiation of retinal photoreceptors and modulate adaptive response during aging. Then, we discuss NGS technology in the context of retinal disease and develop a vision for therapies based on network biology. We should emphasize that basic strategies for network construction and analyses can be transported to any tissue or cell type. We believe that specific and uniform guidelines are required for generation of genome, transcriptome and epigenome data to facilitate comparative analysis and integration of multi-dimensional data sets, and for constructing networks underlying complex biological processes. As cellular homeostasis and organismal survival are dependent on gene-gene and gene-environment interactions, we believe that network-based biology will provide the foundation for deciphering disease mechanisms and discovering novel drug targets for retinal neurodegenerative diseases. PMID:25668385
A multi-dimensional analysis of the upper Rio Grande-San Luis Valley social-ecological system
NASA Astrophysics Data System (ADS)
Mix, Ken
The Upper Rio Grande (URG), located in the San Luis Valley (SLV) of southern Colorado, is the primary contributor of streamflow to the Rio Grande Basin, upstream of the confluence of the Rio Conchos at Presidio, TX. The URG-SLV includes a complex irrigation-dependent agricultural social-ecological system (SES), which began development in 1852 and today generates more than 30% of the SLV revenue. The diversions of Rio Grande water for irrigation in the SLV have had a disproportionate impact on the downstream portion of the river. These diversions caused the flow to cease at Ciudad Juarez, Mexico in the late 1880s, creating international conflict. Similarly, low flows in New Mexico and Texas led to interstate conflict. Understanding the changes in the URG-SLV that led to this event, and the interactions among various drivers of change in the URG-SLV, is a difficult task. One reason is that complex social-ecological systems are adaptive and contain feedbacks, emergent properties, cross-scale linkages, large-scale dynamics and non-linearities. Further, most analyses of SES to date have been qualitative, utilizing conceptual models to understand driver interactions. This study utilizes both qualitative and quantitative techniques to develop an innovative approach for analyzing driver interactions in the URG-SLV. Five drivers were identified for the URG-SLV social-ecological system: water (streamflow), water rights, climate, agriculture, and internal and external water policy. The drivers contained several longitudes (data aspects) relevant to the system, except water policy, for which only discrete events were present. Change point and statistical analyses were applied to the longitudes to identify quantifiable changes, to allow detection of cross-scale linkages between drivers, and presence of feedback cycles. Agriculture was identified as the driver signal. 
Change points for agricultural expansion defined four distinct periods: 1852--1923, 1924--1948, 1949--1978 and 1979--2007. Changes in streamflow, water allocations and water policy were observed in all agriculture periods. Cross-scale linkages were also evident between climate and streamflow; policy and water rights; and agriculture, groundwater pumping and streamflow.
Progress in multi-dimensional upwind differencing
NASA Technical Reports Server (NTRS)
Van Leer, Bram
1992-01-01
Multi-dimensional upwind-differencing schemes for the Euler equations are reviewed. On the basis of the first-order upwind scheme for a one-dimensional convection equation, the two approaches to upwind differencing are discussed: the fluctuation approach and the finite-volume approach. The usual extension of the finite-volume method to the multi-dimensional Euler equations is not entirely satisfactory, because the direction of wave propagation is always assumed to be normal to the cell faces. This leads to smearing of shock and shear waves when these are not grid-aligned. Multi-directional methods, in which upwind-biased fluxes are computed in a frame aligned with a dominant wave, overcome this problem, but at the expense of robustness. The same is true for the schemes incorporating a multi-dimensional wave model not based on multi-dimensional data but on an 'educated guess' of what they could be. The fluctuation approach offers the best possibilities for the development of genuinely multi-dimensional upwind schemes. Three building blocks are needed for such schemes: a wave model, a way to achieve conservation, and a compact convection scheme. Recent advances in each of these components are discussed; putting them all together is the present focus of a worldwide research effort. Some numerical results are presented, illustrating the potential of the new multi-dimensional schemes.
Miyamoto, Yuya; Nishimura, Shigenori; Inoue, Katsuaki; Shimamoto, Shigeru; Yoshida, Takuya; Fukuhara, Ayano; Yamada, Mao; Urade, Yoshihiro; Yagi, Naoto; Ohkubo, Tadayasu; Inui, Takashi
2010-02-01
Lipocalin-type prostaglandin D synthase (L-PGDS) acts as both a PGD2 synthase and an extracellular transporter for small lipophilic molecules. From a series of biochemical studies, it has been found that L-PGDS has the ability to bind a variety of lipophilic ligands such as biliverdin, bilirubin and retinoids in vitro. Therefore, we considered it necessary to clarify the molecular structure of L-PGDS upon ligand binding in order to understand the physiological relevance of L-PGDS as a transporter protein. We investigated the molecular structure of the L-PGDS/biliverdin complex by small-angle X-ray scattering (SAXS) and multi-dimensional NMR measurements, and characterized the binding mechanism in detail. SAXS measurements revealed that L-PGDS has a globular shape and becomes compact by 1.3 Å in radius of gyration on binding biliverdin. NMR experiments revealed that L-PGDS possessed an eight-stranded antiparallel beta-barrel forming a central cavity. Upon titration with biliverdin, some cross-peaks for residues surrounding the cavity and the EF-loop and H2-helix above the beta-barrel shifted, and the intensity of other cross-peaks decreased with signal broadening in ¹H-¹⁵N heteronuclear single quantum coherence spectra. These results demonstrate that L-PGDS holds biliverdin within the beta-barrel, and the conformation of the loop regions above the beta-barrel changes upon binding biliverdin. Through such a conformational change, the whole molecule of L-PGDS becomes compact. PMID:19833210
Sensitivity of Multi-dimensional Bayesian Classifiers
Utrecht, Universiteit
Sensitivity of Multi-dimensional Bayesian Classifiers. Janneke H. Bolt, Silja Renooij. Technical Report, Utrecht University, P.O. Box 80.089, 3508 TB Utrecht, The Netherlands. Results that support this observation were substantiated by a study of the sensitivity properties of naive
T. Downar
2009-03-31
The overall objective of this work has been to eliminate the approximations used in current resonance treatments by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding calculations. The work builds on the existing resonance treatment capabilities in the ORNL SCALE code system.
The Art of Extracting One-Dimensional Flow Properties from Multi-Dimensional Data Sets
NASA Technical Reports Server (NTRS)
Baurle, R. A.; Gaffney, R. L.
2007-01-01
The engineering design and analysis of air-breathing propulsion systems relies heavily on zero- or one-dimensional properties (e.g., thrust, total pressure recovery, mixing and combustion efficiency, etc.) for figures of merit. The extraction of these parameters from experimental data sets and/or multi-dimensional computational data sets is therefore an important aspect of the design process. A variety of methods exist for extracting performance measures from multi-dimensional data sets. Some of the information contained in the multi-dimensional flow is inevitably lost when any one-dimensionalization technique is applied. Hence, the unique assumptions associated with a given approach may result in one-dimensional properties that are significantly different than those extracted using alternative approaches. The purpose of this effort is to examine some of the more popular methods used for the extraction of performance measures from multi-dimensional data sets, reveal the strengths and weaknesses of each approach, and highlight various numerical issues that result when mapping data from a multi-dimensional space to a space of one dimension.
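As a minimal illustration of why different one-dimensionalization choices disagree, the sketch below (hypothetical profile data, not from the paper) compares an area-weighted average with a mass-flux-weighted average of a total-pressure proxy over a duct cross-section:

```python
import numpy as np

def area_average(q, dA):
    """Area-weighted average of quantity q over cells of area dA."""
    return np.sum(q * dA) / np.sum(dA)

def mass_flux_average(q, rho, u, dA):
    """Mass-flux-weighted average: each cell weighted by its mass flow rho*u*dA."""
    mdot = rho * u * dA
    return np.sum(q * mdot) / np.sum(mdot)

# Hypothetical exit-plane profile with a low-momentum region near the wall (y = 0).
y = np.linspace(0.0, 1.0, 201)           # normalized duct height
dA = np.full_like(y, 1.0 / len(y))       # uniform cell areas
u = 1.0 - 0.9 * np.exp(-10.0 * y)        # velocity deficit near the wall
rho = np.ones_like(y)                    # uniform density for simplicity
p0 = 1.0 + 0.5 * rho * u**2              # a simple proxy for total pressure

p0_area = area_average(p0, dA)
p0_mass = mass_flux_average(p0, rho, u, dA)
# The two one-dimensionalizations disagree because mass-flux weighting
# discounts the low-momentum fluid near the wall.
```

Both averages lie between the profile's extremes, but they differ whenever the flow is non-uniform, which is exactly the ambiguity the paper examines.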
Peng, Chen; Shen, Yi; Ge, Mengqu; Wang, Minghui; Li, Ao
2015-07-14
Glioblastoma (GBM) is the most common malignant brain cancer in adults. Investigating the regulatory mechanisms underlying GBM is effective for the in-depth study of GBM. The Cancer Genome Atlas (TCGA) project is producing large-scale data and makes the comprehensive study of the diverse regulatory mechanisms underlying GBM possible. Although there have been research studies on GBM with large-scale data, distinguishing different regulatory mechanisms and identifying the key regulation types remain challenging. In this study, we integrated multi-dimensional data of differentially expressed genes in GBM: copy number variation (CNV), gene expression, miRNA expression and methylation, by performing partial correlation analysis with the Lasso technique. Our results showed that there were single-factor and multi-factor regulatory mechanisms in GBM. In further analysis of the regulation subtypes, we discovered that single-factor and multi-factor regulations are potentially distinct in functionality. Moreover, multi-factor regulations especially the key regulation subtypes may be more relevant to GBM and affect many GBM-related genes such as ERBB2 and MAPK1. This study not only verifies the utility of multi-dimensional data integration into GBM research but also distinguishes the key multi-factor regulatory subtypes that may drive pathogenesis of GBM from various regulatory mechanisms. PMID:26091184
NASA Astrophysics Data System (ADS)
Zhong, Zhaopeng
In the past twenty years considerable progress has been made in developing new methods for solving the multi-dimensional transport problem. However, the effort devoted to the resonance self-shielding calculation has lagged, and much less progress has been made in enhancing resonance-shielding techniques for generating problem-dependent multi-group cross sections (XS) for multi-dimensional transport calculations. In several applications, the error introduced by self-shielding methods exceeds that due to uncertainties in the basic nuclear data, and it can often be the limiting factor on the accuracy of the final results. This work aims to improve the accuracy of the resonance self-shielding calculation by developing continuous-energy multi-dimensional transport calculations for problem-dependent self-shielding. A new method has been developed that can calculate the continuous-energy neutron fluxes for the whole two-dimensional domain; these fluxes can be used as a weighting function to process the self-shielded multi-group cross sections for reactor analysis and criticality calculations, and during this process the two-dimensional heterogeneous effect in the resonance self-shielding calculation is fully included. A new code, GEMINEWTRN (Group and Energy-Pointwise Methodology Implemented in NEWT for Resonance Neutronics), has been developed in the development version of SCALE [1]; it combines the energy-pointwise (PW) capability of CENTRM [2] with the two-dimensional discrete-ordinates transport capability of the lattice physics code NEWT [14].
Considering the large number of energy points in the resonance region (typically more than 30,000), the computational burden and memory requirement of GEMINEWTRN are substantial. Several efforts have been made to improve computational efficiency: parallel computation has been implemented in GEMINEWTRN, which greatly reduces the computation time and the memory requirement per processor, and energy-point reduction techniques have been developed that improve computational efficiency while preserving accuracy. These efforts make the new method much more feasible for practical use.
Vlasov multi-dimensional model dispersion relation
Lushnikov, Pavel M., E-mail: plushnik@math.unm.edu [Department on Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico 87131 (United States); Rose, Harvey A. [Theoretical Division, Los Alamos National Laboratory, MS-B213, Los Alamos, New Mexico 87545 (United States); New Mexico Consortium, Los Alamos, New Mexico 87544 (United States); Silantyev, Denis A.; Vladimirova, Natalia [Department on Mathematics and Statistics, University of New Mexico, Albuquerque, New Mexico 87131 (United States); New Mexico Consortium, Los Alamos, New Mexico 87544 (United States)
2014-07-15
A hybrid model of the Vlasov equation in multiple spatial dimensions D > 1 [H. A. Rose and W. Daughton, Phys. Plasmas 18, 122109 (2011)], the Vlasov multi-dimensional model (VMD), consists of standard Vlasov dynamics along a preferred direction, the z direction, and N flows. At each z, these flows are in the plane perpendicular to the z axis. They satisfy Eulerian-type hydrodynamics with coupling by self-consistent electric and magnetic fields. Every solution of the VMD is an exact solution of the original Vlasov equation. We show approximate convergence of the VMD Langmuir wave dispersion relation in thermal plasma to that of Vlasov-Landau as N increases. Departure from strict rotational invariance about the z axis for small perpendicular wavenumber Langmuir fluctuations in 3D goes to zero like θ^N, where θ is the polar angle and flows are arranged uniformly over the azimuthal angle.
Multi-Dimensional Storage Virtualization. Lan Huang
Chiueh, Tzi-cker
Multi-Dimensional Storage Virtualization. Lan Huang, 650 Harry Rd., IBM Almaden Research Center. State-of-the-art commercial storage virtualization systems focus only on one particular storage attribute, capacity. This paper describes the design, implementation and evaluation of a multi-dimensional storage virtualization
Jörg A. Walter
2004-01-01
We introduce a novel projection-based visualization method for high-dimensional datasets by combining concepts from MDS and the geometry of hyperbolic spaces. This approach, Hyperbolic Multi-Dimensional Scaling (H-MDS), is a synthesis of two important concepts for explorative data analysis and visualization: (i) multi-dimensional scaling uses proximity or pair-distance data to generate a low-dimensional, spatial presentation of the data.
Star-ND (Multi-Dimensional Star-Identification)
Spratling, Benjamin
2012-07-16
Star-ND (Multi-Dimensional Star-Identification). A dissertation by Benjamin Barnett Spratling IV, B.S., Auburn University, submitted to the Office of Graduate Studies of Texas A... for the degree of Doctor of Philosophy, May 2011. Major Subject: Aerospace Engineering. Chair of Advisory Committee: Dr. Daniele Mortari.
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F., Jr.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value λ, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in λ; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
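The sensitivity and elasticity quantities discussed above have a standard closed form: sensitivities are s_ij = v_i w_j / ⟨v, w⟩ (v, w the left and right eigenvectors of the dominant eigenvalue λ) and elasticities are e_ij = (a_ij / λ) s_ij. A sketch with a hypothetical 3-stage projection matrix (illustrative values, not the killer-whale data analysed by the authors):

```python
import numpy as np

# Hypothetical 3-stage population projection matrix.
A = np.array([[0.0, 1.5, 2.0],
              [0.5, 0.0, 0.0],
              [0.0, 0.4, 0.8]])

# Dominant eigenvalue (lambda) with its right eigenvector w (stable stage
# distribution) and left eigenvector v (reproductive values).
vals, W = np.linalg.eig(A)
k = np.argmax(vals.real)
lam = vals.real[k]
w = np.abs(W[:, k].real)

vals_t, V = np.linalg.eig(A.T)
kt = np.argmax(vals_t.real)
v = np.abs(V[:, kt].real)

S = np.outer(v, w) / (v @ w)   # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S              # elasticities (proportional sensitivities)
# Elasticities of lambda always sum to one, which is what makes them
# comparable across demographic rates measured on different scales --
# and also the source of the scaling issues the paper discusses.
```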
Multi-dimensional Mass Spectrometry-based Shotgun Lipidomics
Wang, Miao; Han, Xianlin
2014-01-01
Summary Multi-dimensional mass spectrometry-based shotgun lipidomics (MDMS-SL) has become a foundation analytical technology platform among current lipidomics practices due to its high efficiency, sensitivity, and reproducibility, as well as its broad coverage. This platform has been broadly used to determine the altered content and/or composition of lipid classes, subclasses, and individual molecular species induced by diseases, genetic manipulations, drug treatments, and aging, among others. Herein, we briefly discussed the principles underlying this technology and presented a protocol for routine analysis of many of the lipid classes and subclasses covered by MDMS-SL directly from lipid extracts of biological samples. In particular, lipid sample preparation from a variety of biological materials, which is one of the key components of MDMS-SL, was described in detail. The protocol of mass spectrometric analysis can readily be expanded for analysis of other lipid classes not mentioned, as long as appropriate sample preparation is conducted. It is our sincere hope that this protocol can aid researchers in the field to better understand and manage the technology for analysis of cellular lipidomes. PMID:25270931
Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien
2013-01-01
Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806
Multi-dimensional hybrid-simulation techniques in plasma physics
Hewett, D.W.
1982-01-01
Multi-dimensional hybrid simulation models have been developed for use in studying plasma phenomena on extended time and distance scales. The models make fundamental use of the small Debye length or quasi-neutrality assumption. The ions are modeled by particle-in-cell (PIC) techniques while the electrons are considered a collision-dominated fluid. The fields are calculated in the nonradiative Darwin limit. Some electron inertial effects are retained in the Finite Electron Mass model (FEM). In this model, the quasi-neutral counterpart of Poisson's equation is obtained by first summing the electron and ion momentum equations and then taking the quasi-neutral limit. In the Zero Electron Mass (ZEM) model explicit use is made of the axisymmetric properties of the model to decouple the components of the model equations. Equations to self-consistently advance the electron temperature have recently been added to the scheme. The model equations which result from these considerations are two coupled, nonlinear, second order partial differential equations.
SHANE DARKE; ALEX WODAK; NICK HEATHER; JEFF WARD
This article presents a new instrument with which to assess the effects of opiate treatment. The Opiate Treatment Index (OTI) is multi-dimensional in structure, with scales measuring six independent outcome domains: drug use; HIV risk-taking behaviour; social functioning; criminality; health; and psychological adjustment. Psychometric properties of the Index are excellent, suggesting that the OTI is a relatively quick, efficient
Fully automated online multi-dimensional protein profiling system for complex mixtures
Kiyonaga Fujii; Tomoyo Nakano; Hiroshi Hike; Fumihiko Usui; Yasuhiko Bando; Hiromasa Tojo; Toshihide Nishimura
2004-01-01
For high throughput proteome analysis of highly complex protein mixtures, we have constructed a fully automated online system for multi-dimensional protein profiling, which utilizes a combination of two-dimensional liquid chromatography and tandem mass spectrometry (2D-LC–MS–MS), based on our well-established offline system described previously [K. Fujii, T. Nakano, T. Kawamura, F. Usui, Y. Bando, R. Wang, T. Nishimura, J. Proteome Res.
Novel optimization method for multi-dimensional breast photoacoustic tomography
NASA Astrophysics Data System (ADS)
Cao, Meng; Feng, Ting; Yuan, Jie; Du, Sidan; Liu, Xiaojun; Wang, Xueding; Carson, Paul L.
2014-11-01
Photoacoustic tomography (PAT) is an effective optical biomedical imaging method that is nonionizing and noninvasive, presenting good soft-tissue contrast with excellent spatial resolution. To build a multi-dimensional breast PAT image, more ultrasound sensors are needed, which complicates data acquisition. The time complexity of multi-dimensional breast PAT image reconstruction also rises tremendously. Compressive sensing (CS) theory breaks the restriction of the Nyquist sampling theorem and can rebuild signals from fewer measurements. In this contribution, we propose an effective optimization method for multi-dimensional breast PAT, which combines CS theory with an unevenly, adaptively distributed data-acquisition algorithm. With this method, the quality of our reconstructed breast PAT images is better than that of existing multi-dimensional breast PAT systems. To build breast PAT images of the same quality, the required number of ultrasound transducers is decreased by using our proposed method. We have verified our method on simulation data and achieved the expected results in both two-dimensional and three-dimensional PAT image reconstruction. In the future, our method can be applied to various aspects of biomedical PAT imaging such as early-stage tumor detection and in vivo imaging monitoring.
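The abstract does not specify the CS solver used; as a generic sketch of the underlying idea, an iterative soft-thresholding (ISTA) solver can recover a sparse signal from fewer measurements than unknowns, which is what lets the transducer count drop below the Nyquist requirement (all names and sizes here are illustrative):

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the data-fit term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
n, m, k = 64, 28, 3                        # unknowns, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(2.0, 0.2, size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
y = A @ x_true                             # fewer measurements than unknowns
x_hat = ista(A, y)                         # sparse reconstruction
```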
A multi-dimensional constitutive model for shape memory alloys
C. Liang; C. A. Rogers
1992-01-01
This paper presents a multi-dimensional thermomechanical constitutive model for shape memory alloys (SMAs). This constitutive relation is based upon a combination of both micromechanics and macromechanics. The martensite fraction is introduced as a variable in this model to reflect the martensitic transformation that determines the unique characteristics of shape memory alloys. This constitutive relation can be used to study the
Differential Privacy for Protecting Multi-dimensional Contingency Table Data
Differential Privacy for Protecting Multi-dimensional Contingency Table Data: extensions from a multi-way contingency table. Privacy protection in such settings implicitly focuses on small... The proposed differential-privacy mechanism allows for sensible inferences from the released data.
Synchronous Circuit Optimization via Multi-Dimensional Retiming
Sha, Edwin
Synchronous Circuit Optimization via Multi-Dimensional Retiming. Nelson Luiz Passos, Edwin Hsing... The goal is to improve the circuit performance by achieving full parallelism among all operations in the circuit, producing a circuit capable of executing all its operations in parallel.
Multi-dimensional Navigation Spaces for Software Evolution Michele Lanza
Lanza, Michele
EvoSpaces: Multi-dimensional Navigation Spaces for Software Evolution. Michele Lanza, REVEAL, Switzerland; Philippe Dugerdil, HEG UAS Geneva, Switzerland. EvoSpaces is a Swiss-wide research project; this paper describes the particularities of the project and the results obtained so far.
Image matrix processor for fast multi-dimensional computations
Roberson, G.P.; Skeate, M.F.
1996-10-15
An apparatus for multi-dimensional computation is disclosed which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination. 10 figs.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Baak, Max; Harrington, Robert; Verkerke, Wouter
2014-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
NASA Astrophysics Data System (ADS)
Baak, M.; Gadatsch, S.; Harrington, R.; Verkerke, W.
2015-01-01
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates are often required to model the impact of systematic uncertainties.
Interpolation between multi-dimensional histograms using a new non-linear moment morphing method
Max Baak; Stefan Gadatsch; Robert Harrington; Wouter Verkerke
2014-10-27
A prescription is presented for the interpolation between multi-dimensional distribution templates based on one or multiple model parameters. The technique uses a linear combination of templates, each created using fixed values of the model's parameters and transformed according to a specific procedure, to model a non-linear dependency on model parameters and the dependency between them. By construction the technique scales well with the number of input templates used, which is a useful feature in modern day particle physics, where a large number of templates is often required to model the impact of systematic uncertainties.
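A minimal one-dimensional sketch of the moment-morphing idea described in the abstracts above (linearly interpolated moments, linearly combined transformed templates; the full prescription covers multi-dimensional templates and multiple model parameters):

```python
import numpy as np

def morph_pdf(pdfs, mus, sigmas, alpha):
    """1-D moment morphing between two template pdfs.

    Each template is shifted/scaled so its first two moments match the
    linearly interpolated moments, then the transformed templates are
    linearly combined with weights (1 - alpha, alpha)."""
    mu_a = (1 - alpha) * mus[0] + alpha * mus[1]
    sig_a = (1 - alpha) * sigmas[0] + alpha * sigmas[1]
    weights = (1 - alpha, alpha)

    def morphed(x):
        total = np.zeros_like(x, dtype=float)
        for wgt, pdf, mu, sig in zip(weights, pdfs, mus, sigmas):
            # Map evaluation points back into the template's own coordinates.
            u = mu + (x - mu_a) * sig / sig_a
            total += wgt * pdf(u) * sig / sig_a   # Jacobian keeps normalization
        return total
    return morphed

def gauss(mu, sig):
    return lambda x: np.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))

# Morph halfway between N(0, 1) and N(2, 2) templates: the result has
# mean 1.0 and standard deviation 1.5, the interpolated moments.
morphed = morph_pdf([gauss(0, 1), gauss(2, 2)], [0.0, 2.0], [1.0, 2.0], alpha=0.5)
```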
Michael Glencross
2009-01-01
In many research studies, respondents' beliefs and opinions about various concepts are often measured by means of five-, six- and seven-point scales. The widely used five-point scale is commonly known as a Likert scale (Likert, 1932).
A superfast algorithm for multi-dimensional Padé systems
NASA Astrophysics Data System (ADS)
Cabay, Stan; Labahn, George
1992-06-01
For a vector of k+1 matrix power series, a superfast algorithm is given for the computation of multi-dimensional Padé systems. The algorithm provides a method for obtaining matrix Padé, matrix Hermite Padé and matrix simultaneous Padé approximants. When the matrix power series is normal or perfect, the algorithm is shown to calculate multi-dimensional matrix Padé systems of type (n0, ..., nk) in O(‖n‖ · log²‖n‖) block-matrix operations, where ‖n‖ = n0 + ... + nk. When k = 1 and the power series is scalar, this is the same complexity as that of other superfast algorithms for computing Padé systems. When k > 1, the fastest methods presently compute these matrix Padé approximants with a complexity of O(‖n‖²). The algorithm succeeds also in the non-normal and non-perfect case, but with a possible increase in the cost complexity.
Study of multi-dimensional radiative energy transfer in molecular gases
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, S. N.
1993-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. Consideration of spectral correlation results in some distinguishing features of the Monte Carlo formulations. Validation of the Monte Carlo formulations has been conducted by comparing results of this method with other solutions. Extension of a one-dimensional problem to a multi-dimensional problem requires some special treatments in the Monte Carlo analysis. Use of different assumptions results in different sets of Monte Carlo formulations. The nongray narrow band formulations provide the most accurate results.
Numerical Solution of Multi-Dimensional Hyperbolic Conservation Laws on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Kwak, Dochan (Technical Monitor)
1995-01-01
The lecture material will discuss the application of one-dimensional approximate Riemann solutions and high order accurate data reconstruction as building blocks for solving multi-dimensional hyperbolic equations. This building block procedure is well-documented in the nationally available literature. The relevant stability and convergence theory using positive operator analysis will also be presented. All participants in the minisymposium will be asked to solve one or more generic test problems so that a critical comparison of accuracy can be made among differing approaches.
Null geodesics in multi-dimensional homogeneous cosmologies
NASA Astrophysics Data System (ADS)
Biesiada, Marek; Szydlowski, Marek; Szczęsny, Jerzy
1988-08-01
The problem of null geodesics in multi-dimensional homogeneous cosmologies is investigated. General expressions for redshift relations in a simple Bianchi I cosmological model are given and analysed. These relations could be used to test multidimensional world models. It is also shown that there is a finite probability for a photon to be emitted, in a random way, and to cause an observable effect associated with extra dimensions. This probability increases with time.
Metastable dynamics and exponential asymptotics in multi-dimensional domains
Michael J. Ward
Certain singularly perturbed partial differential equations exhibit a phenomenon known as dynamic metastability, whereby the solution evolves on an asymptotically exponentially long time interval as the singular perturbation parameter ε tends to zero. This article illustrates a technique to analyze metastable behavior for a range of problems in multi-dimensional domains. The problems considered include the exit problem for diffusion in
Yang, Kui; Cheng, Hua; Gross, Richard W.; Han, Xianlin
2009-01-01
This report presents the strategies underlying the automated identification and quantification of individual lipid molecular species through array analysis of multi-dimensional mass spectrometry-based shotgun lipidomics (MDMS-SL) data which are acquired directly from lipid extracts after direct infusion and intrasource separation. The automated analyses of individual lipid molecular species in the program employ a strategy where MDMS-SL data from building block analyses using precursor-ion and/or neutral loss scans are used to identify individual molecular species followed by quantitation. Through this strategy, the program screens and identifies species in a high throughput fashion from a built-in database of over 36,000 potential lipid molecular species constructed employing known building blocks. The program then uses a two-step procedure for quantitation of the identified species possessing a linear dynamic range over three orders of magnitude and re-verifies the results when necessary through redundant quantification of multi-dimensional mass spectra. This program is designed to be easily adaptable for other shotgun lipidomics approaches which are currently used for mass spectrometric analysis of lipids in the field. Accordingly, the development of this program should greatly accelerate high throughput analysis of lipids using MDMS-based shotgun lipidomics. PMID:19408941
Trading-off Multi-Dimensional NAS Performance and Equity Metrics Using Data Envelopment Analysis. David... Data Envelopment Analysis (DEA) overcomes these limitations and provides an assessment of the overall... are discussed. Keywords: Data Envelopment Analysis; Equity; Rationing rules. The growth in demand
Gina Iberri-Shea
2011-01-01
This study explores the language variation in university student public speech across two academic disciplines: business administration and education. A corpus of university student public speech, made up of 102 classroom presentations (approximately 215,000 words), was designed, constructed and analysed using both quantitative and qualitative methods. Using multi-dimensional analysis, the co-occurrence of linguistic variables was functionally interpreted in order to
Multi-dimensional cosmological models in effective string gravitation. I
NASA Astrophysics Data System (ADS)
Saaryan, A. A.
1995-04-01
We consider multi-dimensional cosmological models in the low-energy field theory of strings with a bosonic gravitational sector containing a metric, a dilaton field, and an antisymmetric Kalb-Ramond field. We study the conformal properties of the action and show that in the general conformal representation the theory is equivalent to a generalized scalar-tensor theory with a Lagrangian of nongravitating matter dependent on the dilaton. We find exact solutions of the flat homogeneous anisotropic model with structure R×M1×...×Mn and with equation of state p_i = a_i ε in the space M_i. We discuss the picture of cosmological evolution in different conformal representations.
Multifunction myoelectric control using multi-dimensional dynamic time warping.
AbdelMaseeh, Meena; Tsu-Wei Chen; Stashuk, Daniel
2014-01-01
Myoelectric control can be used for a variety of applications, including powered prostheses and various human-computer interface systems. The aim of this study is to investigate the formulation of myoelectric control as multi-class, distance-based classification of multi-dimensional sequences. More specifically, we investigate (1) estimation of multi-muscle activation sequences from multi-channel electromyographic signals in an online manner, and (2) classification using a distance metric based on multi-dimensional dynamic time warping. Subject-specific results across 5 subjects executing 10 different hand movements showed an accuracy of 95% using offline extracted trajectories and an accuracy of 84% using online extracted trajectories. PMID:25570959
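Multi-dimensional DTW in its common "dependent" form uses the Euclidean distance across all channels as the local cost; a minimal sketch of that generic formulation follows (it is one standard choice, not necessarily the exact metric of the paper).

```python
# A minimal multi-dimensional dynamic time warping (DTW) distance, assuming
# the "dependent" formulation: the local cost between two time points is the
# Euclidean distance across all channels.
import math

def md_dtw(a, b):
    """DTW distance between two sequences of equal-dimension vectors."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])  # Euclidean across channels
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# Identical sequences have distance 0; a repeated sample also aligns at 0 cost.
s1 = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
s2 = [(0.0, 0.0), (1.0, 1.0), (1.0, 1.0), (2.0, 0.0)]
print(md_dtw(s1, s1))  # → 0.0
```

For classification, each test sequence would be assigned the label of its nearest training sequence under this distance.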
Fourier transform assisted deconvolution of skewed peaks in complex multi-dimensional chromatograms.
Hanke, Alexander T; Verhaert, Peter D E M; van der Wielen, Luuk A M; Eppink, Michel H M; van de Sandt, Emile J A X; Ottens, Marcel
2015-05-15
Lower-order peak moments of individual peaks in heavily fused peak clusters can be determined by fitting peak models to the experimental data. The success of such an approach depends on two main aspects: the generation of meaningful initial estimates of the number and positions of the peaks, and the choice of a suitable peak model. For the detection of meaningful peaks in multi-dimensional chromatograms, a fast data-scanning algorithm was combined with prior resolution enhancement through the reduction of column and system broadening effects with the help of two-dimensional fast Fourier transforms. To capture the shape of skewed peaks in multi-dimensional chromatograms, a formalism for the accurate calculation of exponentially modified Gaussian peaks, one of the most popular models for skewed peaks, was extended for direct fitting of two-dimensional data. The method is demonstrated to successfully identify and deconvolute peaks hidden in strongly fused peak clusters. Automatic analysis and reporting of the statistics of the fitted peak parameters and calculated properties makes it easy to identify the regions of the chromatograms in which additional resolution is required for robust quantification. PMID:25841612
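The exponentially modified Gaussian (EMG) mentioned above has a standard closed form: a Gaussian convolved with an exponential decay. A one-dimensional sketch with the parameter names common in chromatography is shown below; the separable 2-D product at the end is one simple way to extend the model to two dimensions and may differ from the paper's actual 2-D formalism.

```python
import math

def emg(t, h, mu, sigma, tau):
    """Exponentially modified Gaussian: a Gaussian of mean mu and width sigma
    convolved with an exponential decay of time constant tau (right tail
    for tau > 0). Standard closed form using the complementary error function."""
    z = (sigma / tau - (t - mu) / sigma) / math.sqrt(2.0)
    return (h * sigma / tau) * math.sqrt(math.pi / 2.0) \
        * math.exp(0.5 * (sigma / tau) ** 2 - (t - mu) / tau) * math.erfc(z)

def emg2d(t1, t2, h, p1, p2):
    """A separable two-dimensional skewed peak as a product of 1-D EMGs;
    p1 and p2 are (mu, sigma, tau) for each dimension. This separable form
    is an assumption for illustration."""
    return h * emg(t1, 1.0, *p1) * emg(t2, 1.0, *p2)

# The skew shows up as a heavier right tail:
print(emg(1.0, 1.0, 0.0, 1.0, 1.0) > emg(-1.0, 1.0, 0.0, 1.0, 1.0))  # → True
```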
Multi-dimensional Longwave Forcing of Boundary Layer Cloud Systems
Mechem, David B.; Kogan, Y. L.; Ovtchinnikov, Mikhail; Davis, Anthony B; Evans, K. F.; Ellingson, Robert G.
2008-12-20
The importance of multi-dimensional (MD) longwave radiative effects on cloud dynamics is evaluated in a large eddy simulation (LES) framework employing multi-dimensional radiative transfer (Spherical Harmonics Discrete Ordinate Method, SHDOM). Simulations are performed for a case of unbroken, marine boundary layer stratocumulus and a broken field of trade cumulus. "Snapshot" calculations of MD and IPA (independent pixel approximation, 1D) radiative transfer applied to LES cloud fields show that the total radiative forcing changes only slightly, although the MD effects significantly modify the spatial structure of the radiative forcing. Interactive simulations of each cloud type employing MD and IPA radiative transfer, however, differ little. For the solid cloud case, the MD simulation exhibits a slight reduction in entrainment rate and boundary layer TKE relative to the IPA simulation. This reduction is consistent with both the slight decrease in net radiative forcing and a negative correlation between local vertical velocity and radiative forcing, which implies a damping of boundary layer eddies. Snapshot calculations of the broken cloud case suggest a slight increase in radiative cooling, though few systematic differences are noted in the interactive simulations. We attribute this result to the fact that radiative cooling is a relatively minor contribution to the total energetics. For the cloud systems in this study, the use of IPA longwave radiative transfer is sufficiently accurate to capture the dynamical behavior of BL clouds. Further investigations are required in order to generalize this conclusion to other cloud types and longer time integrations.
A G-FDTD Method for Solving the Multi-Dimensional Time-Dependent Schrodinger Equation
Moxley, Frederick Ira, III; Dai, Weizhong
2012-01-01
The Finite-Difference Time-Domain (FDTD) method is a well-known technique for the analysis of quantum devices. It solves a discretized Schrodinger equation in an explicitly iterative process. However, the method requires that the spatial grid size and time step satisfy a very restrictive condition to prevent the numerical solution from diverging. In this article, we present a generalized FDTD (G-FDTD) method for solving the multi-dimensional time-dependent Schrodinger equation and obtain a more relaxed stability condition when finite-difference approximations for the spatial derivatives are employed. As such, a larger time step may be chosen, which is particularly important for quantum computations. The new G-FDTD method is tested by simulating a particle moving in 2-D free space and then hitting an energy potential. Numerical results coincide with those obtained from the theoretical analysis.
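The explicit FDTD scheme that G-FDTD generalizes can be sketched in 1-D by splitting the wavefunction into real and imaginary parts updated in a staggered fashion (a standard formulation, with hbar = m = 1). The grid and step sizes below are illustrative; the stability restriction on dt that this scheme carries is exactly what the abstract's method relaxes.

```python
# Minimal explicit FDTD sketch for the 1-D free-particle Schrodinger equation:
# dR/dt = H I, dI/dt = -H R, with H = -1/2 d^2/dx^2 (hbar = m = 1).
# Stability requires a small dt relative to dx^2; values here respect that.
import math

def hamiltonian(u, dx):
    """Apply H = -1/2 d^2/dx^2 with zero (Dirichlet) boundary conditions."""
    n = len(u)
    out = [0.0] * n
    for j in range(1, n - 1):
        out[j] = -0.5 * (u[j + 1] - 2.0 * u[j] + u[j - 1]) / dx**2
    return out

def step(R, I, dx, dt):
    """One staggered update of the real and imaginary parts."""
    HI = hamiltonian(I, dx)
    R = [r + dt * hi for r, hi in zip(R, HI)]
    HR = hamiltonian(R, dx)
    I = [i - dt * hr for i, hr in zip(I, HR)]
    return R, I

# Gaussian wave packet; with dt well below the stability limit the norm
# stays approximately constant over the integration.
n, dx, dt = 101, 0.1, 0.002
xs = [(j - n // 2) * dx for j in range(n)]
R = [math.exp(-x * x) for x in xs]
I = [0.0] * n
for _ in range(100):
    R, I = step(R, I, dx, dt)
norm = sum(r * r + i * i for r, i in zip(R, I)) * dx
print(round(norm, 3))
```

Doubling dt past the stability threshold (here roughly dx^2/2) makes this explicit update diverge, which is the restriction the G-FDTD construction is designed to loosen.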
Acceleration of multi-dimensional propagator measurements with compressed sensing
NASA Astrophysics Data System (ADS)
Paulsen, Jeffrey L.; Cho, HyungJoon; Cho, Gyunggoo; Song, Yi-Qiao
2011-12-01
NMR can probe the microstructures of anisotropic materials such as liquid crystals, stretched polymers and biological tissues through measurement of the diffusion propagator, where internal structures are indicated by restricted diffusion. Multi-dimensional measurements can probe the microscopic anisotropy, but full sampling can then quickly become prohibitively time consuming. However, for incompletely sampled data, compressed sensing is an effective reconstruction technique to enable accelerated acquisition. We demonstrate that with a compressed sensing scheme, one can greatly reduce the sampling and the experimental time with minimal effect on the reconstruction of the diffusion propagator with an example of anisotropic diffusion. We compare full sampling down to 64× sub-sampling for the 2D propagator measurement and reduce the acquisition time for the 3D experiment by a factor of 32, from ~80 days to ~2.5 days.
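Compressed-sensing reconstruction is commonly posed as l1-regularized least squares; the abstract does not specify a solver, so the sketch below uses ISTA (iterative soft thresholding), one standard algorithm, on a toy problem with made-up sizes.

```python
# Illustrative ISTA (iterative soft-thresholding) reconstruction: recover a
# sparse vector x from undersampled linear measurements y = A x by minimizing
# 0.5*||Ax - y||^2 + lam*||x||_1. Problem sizes and the signal are toy values.
import random

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def ista(A, y, lam, iters=1000):
    n = len(A[0])
    L = sum(a * a for row in A for a in row)  # >= spectral norm^2 (safe step)
    x = [0.0] * n
    for _ in range(iters):
        r = [yi - ri for yi, ri in zip(y, matvec(A, x))]   # residual y - Ax
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        z = [xj + gj / L for xj, gj in zip(x, g)]          # gradient step
        x = [max(abs(zj) - lam / L, 0.0) * (1 if zj > 0 else -1) for zj in z]
    return x

random.seed(0)
m, n = 12, 24                                   # fewer measurements than unknowns
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
x_true = [0.0] * n
x_true[3], x_true[17] = 2.0, -1.5               # 2-sparse signal
y = matvec(A, x_true)
x_hat = ista(A, y, lam=0.05)
print(max(abs(a - b) for a, b in zip(x_hat, x_true)))
```

Because the step size uses the Frobenius norm as a conservative bound on the Lipschitz constant, the objective decreases monotonically, at the cost of slower convergence than an optimally tuned step.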
VBI-Tree: A Peer-to-Peer Framework for Supporting Multi-Dimensional Indexing Schemes
Jagadish, H. V.; Ooi, Beng Chin
Index structures are central to efficient search in database systems. … Our paper makes the following contributions: we present a framework that is capable of supporting multi-dimensional query processing and a variety of index structures…
Balance properties of multi-dimensional words
Berthé, Valérie; Tijdeman, Robert
A word u is called 1-balanced if for any two factors v and w of u of equal length, we have ||v|1 − |w|1| ≤ 1, where |v|1 denotes the number of occurrences of the letter 1 in v. The aim of this paper is to extend the notion of balance to multi-dimensional words. We first…
Bootstrapping for Significance of Compact Clusters in Multi-dimensional Datasets
Maitra, Ranjan
A bootstrap procedure is developed for assessing significance in the clustering of multi-dimensional datasets. The developed procedure compares two models and declares the more … of the procedure is illustrated on two well-known classification datasets and comprehensively evaluated in terms…
Out-of-core tensor approximation of multi-dimensional matrices of visual data
Hongcheng Wang; Qing Wu; Lin Shi; Yizhou Yu; Narendra Ahuja
2005-01-01
Tensor approximation is necessary to obtain compact multilinear models for multi-dimensional visual datasets. Traditionally, each multi-dimensional data item is represented as a vector. Such a scheme flattens the data and partially destroys the internal structures established throughout the multiple dimensions. In this paper, we retain the original dimensionality of the data items to more effectively exploit existing spatial redundancy and
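The basic operation this line of work builds on is mode-n unfolding (matricization), which keeps track of which dimension each index came from instead of flattening every data item into one long vector. A minimal sketch for a third-order tensor stored as nested lists (the column ordering below is one common convention):

```python
def unfold3(T, mode):
    """Mode-n unfolding of an I x J x K tensor (nested lists): rows are
    indexed by the chosen mode, columns enumerate the remaining two indices."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    if mode == 0:
        return [[T[i][j][k] for k in range(K) for j in range(J)] for i in range(I)]
    if mode == 1:
        return [[T[i][j][k] for k in range(K) for i in range(I)] for j in range(J)]
    return [[T[i][j][k] for j in range(J) for i in range(I)] for k in range(K)]

T = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]   # 2 x 2 x 2 toy tensor
print(unfold3(T, 0))  # → [[1, 3, 2, 4], [5, 7, 6, 8]]
```

Tucker-style tensor approximation applies a truncated SVD to each such unfolding to obtain per-mode factor matrices, which is how the spatial redundancy along each dimension is exploited rather than destroyed.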
Wildfire Detection Using a Multi-Dimensional Histogram in Boreal Forest
NASA Astrophysics Data System (ADS)
Honda, K.; Kimura, K.; Honma, T.
2008-12-01
Early detection of wildfires is important for reducing damage to the environment and humans. There have been several attempts to detect wildfires using satellite imagery, mainly classified into three methods: the Dozier method (1981-), threshold methods (1986-) and contextual methods (1994-). However, the accuracy of these methods is insufficient: the detected results include commission and omission errors. In addition, analyzing satellite imagery with high accuracy is difficult because of insufficient ground truth data. Kudoh and Hosoi (2003) developed a detection method using a three-dimensional (3D) histogram built from past fire data with NOAA-AVHRR imagery, but their method is impractical because it depends on handwork to pick out past fire data from a huge dataset. The purpose of this study is therefore to collect fire points as hot spots efficiently from satellite imagery and to improve the detection method with the collected data. We collect past fire data using the Alaska Fire History data obtained from the Alaska Fire Service (AFS): we select points that are expected to be wildfires and pick out the points inside the fire areas of the AFS data. Next, we build a 3D histogram from the past fire data, using Bands 1, 21 and 32 of MODIS, and calculate the likelihood of wildfire from the histogram. As a result, we select wildfires with the 3D histogram effectively and can detect a toroidally spreading wildfire, which indicates good detection performance. However, areas surrounding glaciers tend to show elevated brightness temperature and produce false alarms; burnt areas and bare ground are also sometimes flagged, so the method needs further improvement. Additionally, we are trying various combinations of MODIS bands to detect wildfires more effectively.
To adapt our method to other areas, we are applying it to tropical forest in Kalimantan, Indonesia and around Chiang Mai, Thailand, but the ground truth data in these areas are sparser than in Alaska. Our method needs a large amount of accurately observed data from the same area to build the multi-dimensional histogram. In this study, we show a system that selects wildfire data efficiently from satellite imagery. Furthermore, building a multi-dimensional histogram from past fire data makes it possible to detect wildfires accurately.
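The histogram-likelihood idea described above can be sketched generically: brightness values from three bands at known past-fire pixels fill a 3-D histogram, which is normalized and used to score new pixels. The band values, bin edges, and sample points below are invented for illustration and are not MODIS data.

```python
# Likelihood-based detection with a 3-D histogram over three band values.
# All numbers are illustrative placeholders.
from collections import Counter

def bin_index(value, lo, hi, nbins):
    """Map a band value into a bin index in [0, nbins - 1]."""
    i = int((value - lo) / (hi - lo) * nbins)
    return min(max(i, 0), nbins - 1)

def build_histogram(samples, lo=0.0, hi=1.0, nbins=4):
    """Normalized 3-D histogram of (band1, band2, band3) fire samples."""
    counts = Counter(
        tuple(bin_index(v, lo, hi, nbins) for v in s) for s in samples
    )
    total = sum(counts.values())
    return {k: c / total for k, c in counts.items()}

def fire_likelihood(hist, pixel, lo=0.0, hi=1.0, nbins=4):
    """Score a new pixel by looking up its 3-D bin in the fire histogram."""
    key = tuple(bin_index(v, lo, hi, nbins) for v in pixel)
    return hist.get(key, 0.0)

past_fires = [(0.8, 0.7, 0.9), (0.85, 0.75, 0.95), (0.8, 0.8, 0.9)]
hist = build_histogram(past_fires)
print(fire_likelihood(hist, (0.82, 0.74, 0.92)))   # falls in a fire bin
print(fire_likelihood(hist, (0.1, 0.2, 0.1)))      # background pixel
```

The false-alarm problem the abstract notes (glaciers, burnt areas) corresponds to background pixels that happen to land in high-likelihood bins, which is why the authors experiment with different band combinations.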
Fiber optic sensor array for multi-dimensional strain measurement
NASA Astrophysics Data System (ADS)
Grossmann, Barry G.; Huang, Li-Tien
1998-04-01
A planar fiber optic strain sensor array (FOSSA) able to measure multi-dimensional strains inside a material has been developed and tested. Three-dimensional strain measurement is a developing technology that can eventually be employed for many applications including monitoring the strain field inside composite parts and structures, sensing for adaptive structures and intelligent vehicle highway systems and health monitoring systems for civil structures. A planar configuration was chosen to reduce the manufacturing difficulty and structural degradation of embedding optical sensors in more than one plane. Two extrinsic Fabry-Pérot interferometric sensors (EFPIs) and one polarimetric sensor were used to form the planar sensor array. The two EFPIs extract two normal strain components along the x and y axes. A polarimetric sensor in the same plane was used to extract the third normal strain acting on the z axis. The sensor array was embedded in an epoxy resin block and a load of 1000 lbs was applied normal to one face with a loading machine. The strains extracted from the embedded optical fiber sensors compared well with strains measured with surface bonded electrical strain gages. The difference in measured strain between the electrical strain gages and the fiber optic sensors was typically less than 3.4% on all three principal strain axes.
Scaling analysis of stock markets
NASA Astrophysics Data System (ADS)
Bu, Luping; Shang, Pengjian
2014-06-01
In this paper, we apply detrended fluctuation analysis (DFA), local scaling detrended fluctuation analysis (LSDFA), and detrended cross-correlation analysis (DCCA) to investigate correlations in several stock markets. DFA detects long-range correlations in time series; LSDFA reveals more local properties by using local scaling exponents; and DCCA quantifies the cross-correlation of two non-stationary time series. We report auto-correlation and cross-correlation behaviors for three western and three Chinese stock markets in the periods 2004-2006 (before the global financial crisis), 2007-2009 (during the crisis), and 2010-2012 (after the crisis). The findings are that stock correlations are influenced by the economic systems of different countries and by the financial crisis. The results indicate stronger auto-correlations in Chinese stocks than in western stocks in every period, and stronger auto-correlations after the crisis for every stock except Shen Cheng. The LSDFA shows more comprehensive and detailed features than the traditional DFA method, and reflects the economic integration of China and the world after the crisis. The cross-correlations show different properties for the six stock markets; for the three Chinese stocks, they are weakest during the global financial crisis.
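Plain DFA, as summarized above, integrates the series, removes a linear trend per window, and regresses log F(s) on log s to estimate the scaling exponent. A compact sketch follows; the window sizes and the white-noise test signal are arbitrary choices for illustration.

```python
# Detrended fluctuation analysis (DFA) sketch: white noise should give an
# exponent alpha near 0.5, long-range correlated series give alpha > 0.5.
import math
import random

def linfit(xs, ys):
    """Least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def dfa(series, scales):
    # 1) integrate the mean-subtracted series into a profile
    mean = sum(series) / len(series)
    profile, total = [], 0.0
    for v in series:
        total += v - mean
        profile.append(total)
    logs, logF = [], []
    for s in scales:
        # 2) per window of size s, subtract a linear fit and accumulate residuals
        sq, nwin = 0.0, len(profile) // s
        for w in range(nwin):
            seg = profile[w * s:(w + 1) * s]
            xs = list(range(s))
            b, a = linfit(xs, seg)
            sq += sum((y - (a + b * x)) ** 2 for x, y in zip(xs, seg))
        logs.append(math.log(s))
        logF.append(math.log(math.sqrt(sq / (nwin * s))))
    # 3) the slope of log F(s) vs log s is the scaling exponent alpha
    alpha, _ = linfit(logs, logF)
    return alpha

random.seed(1)
noise = [random.gauss(0, 1) for _ in range(2000)]
print(round(dfa(noise, [8, 16, 32, 64, 128]), 2))
```

DCCA follows the same window-and-detrend recipe but accumulates the covariance of two detrended profiles instead of one squared residual.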
Mokken Scale Analysis Using Hierarchical Clustering Procedures
ERIC Educational Resources Information Center
van Abswoude, Alexandra A. H.; Vermunt, Jeroen K.; Hemker, Bas T.; van der Ark, L. Andries
2004-01-01
Mokken scale analysis (MSA) can be used to assess and build unidimensional scales from an item pool that is sensitive to multiple dimensions. These scales satisfy a set of scaling conditions, one of which follows from the model of monotone homogeneity. An important drawback of the MSA program is that the sequential item selection and scale…
Multi-dimensional ultra-high frequency passive radio frequency identification tag antenna designs
Delichatsios, Stefanie Alkistis
2006-01-01
In this thesis, we present the design, simulation, and empirical evaluation of two novel multi-dimensional ultra-high frequency (UHF) passive radio frequency identification (RFID) tag antennas, the Albano-Dipole antenna ...
Clustering Very Large Multi-dimensional Datasets with MapReduce
Cordeiro, Robson L. F.; Traina, Caetano, Jr. (CS Department, ICMC, University of São Paulo, Brazil)
Kang, InHan
2006-01-01
In this thesis, we present a system for visualizing hierarchical, multi-dimensional, memory-intensive datasets. Specifically, we designed an interactive system to visualize data collected by high-throughput microscopy and ...
The multi-dimensional measure of informed choice: a validation study.
Michie, Susan; Dormandy, Elizabeth; Marteau, Theresa M
2002-09-01
The aim of this prospective study is to assess the reliability and validity of a multi-dimensional measure of informed choice (MMIC). Participants were 225 pregnant women in two general hospitals in the UK who received low-risk results following serum screening for Down syndrome. The MMIC was administered before testing, and the Ottawa Decisional Conflict Scale was administered 6 weeks later. The component scales of the MMIC, knowledge and attitude, were internally consistent (alpha values of 0.68 and 0.78, respectively). Those whose choice was categorised as informed using the MMIC rated their decision 6 weeks later as being more informed, better supported and of higher quality than women whose choice was categorised as uninformed. This provides evidence of predictive validity, whilst the lack of association between the MMIC and anxiety shows construct (discriminant) validity. Thus, the MMIC has been shown to be psychometrically robust in pregnant women offered the choice to undergo prenatal screening for Down syndrome and receiving a low-risk result. Replication of this finding in other groups, facing other decisions, with other outcomes, should be assessed in future research. PMID:12220754
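The informed-choice logic the MMIC operationalizes, as commonly described for the measure, combines adequate knowledge with a decision consistent with the respondent's attitude. The sketch below encodes that logic only; the threshold and field names are illustrative assumptions, not the published scoring rules.

```python
# Sketch of informed-choice classification in the MMIC's spirit: a choice is
# "informed" when knowledge is adequate AND behaviour matches attitude.
# The knowledge threshold here is a made-up placeholder.

def classify_choice(knowledge_score, attitude_positive, underwent_test,
                    knowledge_threshold=4):
    """Return 'informed' or 'uninformed' for one respondent."""
    adequate = knowledge_score >= knowledge_threshold
    consistent = attitude_positive == underwent_test
    return "informed" if adequate and consistent else "uninformed"

print(classify_choice(6, True, True))    # knowledge + consistency → informed
print(classify_choice(6, True, False))   # attitude-behaviour mismatch
print(classify_choice(2, True, True))    # inadequate knowledge
```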
A Novel Multi-Dimensional Mapping of 8-PSK for BICM-ID
Nghi H. Tran; Ha H. Nguyen
2007-01-01
Employing multi-dimensional constellations and mappings to improve the error performance of bit-interleaved coded modulation with iterative decoding (BICM-ID) has recently received a lot of attention, in both single-antenna and multiple-antenna systems. To date, except for the cases of BPSK and QPSK constellations, good multi-dimensional mappings have only been found by computer search techniques. This paper introduces an explicit algorithm…
Multiple time scale analysis of runaway phenomena
Lucio Demeio
1998-01-01
The analysis of runaway phenomena with the use of the Boltzmann equation shows that the time evolution of the distribution function and of the other quantities of interest, for example the average velocity, occurs on two time scales: the short time scale of the collisional equilibrium and the long time scale of the runaway flux. Under suitable conditions on the
Fessler, Jeffrey A.
A Constrained Minimization Approach to Designing Multi-dimensional, Spatially Selective RF Pulses (Ann Arbor, MI, United States)
The design of multi-dimensional, spatially selective RF pulses can … approach to RF pulse design using constrained minimization. This new approach can be used to address many…
Scale-PC shielding analysis sequences
Bowman, S.M.
1996-05-01
The SCALE computational system is a modular code system for analyses of nuclear fuel facility and package designs. With the release of SCALE-PC Version 4.3, the radiation shielding analysis community now has the capability to execute the SCALE shielding analysis sequences contained in the control modules SAS1, SAS2, SAS3, and SAS4 on an MS-DOS personal computer (PC). In addition, SCALE-PC includes two new sequences, QADS and ORIGEN-ARP. The capabilities of each sequence are presented, along with example applications.
ERIC Educational Resources Information Center
Ibrahim, Mohammed Sani; Mujir, Siti Junaidah Mohd
2012-01-01
The purpose of this study is to determine if the multi-dimensional leadership orientation of the heads of departments in Malaysian polytechnics affects their leadership effectiveness and the lecturers' commitment to work as perceived by the lecturers. The departmental heads' leadership orientation was determined by five leadership dimensions…
A Computational Market for Information Filtering in MultiDimensional Spaces
Grigoris J. Karakoulas; Innes A. Ferguson
1995-01-01
This paper presents the computational market of SIGMA (System of Information Gathering Market-based Agents) as a model of decentralized decision making for the task of information filtering in multi-dimensional spaces such as the Usenet netnews. Different learning and adaptation techniques are integrated within SIGMA for creating a robust network-based application which adapts to both changes in…
Multi-dimensional SLA-based Resource Allocation for Multi-tier Cloud Computing Systems
Pedram, Massoud
SLA-based resource allocation for multi-tier applications in cloud computing is considered. An upper bound on the total profit … in the cloud computing system. There are two types of SLA contracts. For the Gold SLA class, response time…
Existence and Asymptotic Behavior of MultiDimensional Quantum Hydrodynamic Model for Semiconductors
Hailiang Li; Pierangelo Marcati
2004-01-01
This paper is devoted to the study of the existence and time-asymptotic behavior of solutions of the multi-dimensional quantum hydrodynamic equations for the electron particle density, the current density and the electrostatic potential in a spatial periodic domain. The equations are formally analogous to those of classical hydrodynamics but differ in the momentum equation, which is forced by an additional nonlinear dispersion term (due to the…
Address Decomposition for the Shaping of Multi-dimensional Signal Constellations
Kabal, Peter
A. K. … constellation. This scheme, called address decomposition, is based on decomposing the addressing. … This is called a signal constellation. The constellation points are usually selected as a finite subset…
DRAFT DETC2009-87045 Supporting Trade Space Exploration of Multi-Dimensional Data
Zhang, Xiaolong "Luke"
For example, in trade space exploration of large design data sets, designers need to select a subset of data … this prototype system. By using visual tools during trade space exploration, this research suggests a new…
Multi-Dimensional Modeling of the XPAL System
Palla, Andrew D.
Carroll, David L.
(… University, Atlanta, GA 30322) The exciplex pumped alkali laser (XPAL) system was recently … than classic diode pumped alkali laser (DPAL) modeling. In this paper we discuss BLAZE-V multi-dimensional modeling of this new laser system and compare with experiments. Keywords: XPAL, exciplex pumped alkali…
Multi-Dimensional Screening Device (MDSD) for the Identification of Gifted/Talented Children.
ERIC Educational Resources Information Center
Kranz, Bella
The monograph presents a model for identifying gifted/talented children which is based on a multidimensional concept of intelligence, designed to include the less accepted school population in its initial search, and tied to a staff development program for teachers who must be part of the screening process. Rationale for the Multi-Dimensional…
Multi-dimensional Range Queries in Sensor Networks
Gao, Jie
Li, Xin; Kim, Young Jin; Govindan, Ramesh; Hong, Wei (Computer Science Department, University of Southern California; Intel Research Lab)
Gi-Zen Liu; Zih-Hui Liu; Gwo-Jen Hwang
2011-01-01
Many English learning websites have been developed worldwide, but little research has been conducted concerning the development of comprehensive evaluation criteria. The main purpose of this study is thus to construct a multi-dimensional set of criteria to help learners and teachers evaluate the quality of English learning websites. These evaluation guidelines are based on web usability, learning materials, functionality of
A multi-dimensional view of entrepreneurship : Towards a research agenda on organisation emergence
Jin-ichiro Yamada
2004-01-01
This paper attempts to synthesise the theoretical research on entrepreneurship and social capital undertaken in previous studies, and presents a multi-dimensional view of entrepreneurship. In examining overviews of past single perspective entrepreneurship research, this study shows that the primary role of entrepreneurs in organisation emergence is to acquire knowledge and create social capital properly. This process is necessarily accompanied by
A Multi-Dimensional Service Chain Ecosystem Model
Paris-Sud XI, Université de
Frédérique Biennier, Régis Aubry, Youakim …
… to dynamic added-value networks instead of Porter's traditional value chain model [10]. In order to improve … solutions to support and favor innovative business networks on the basis of an internet of services…
Stability of shock waves for multi-dimensional hyperbolic-parabolic conservation laws
NASA Astrophysics Data System (ADS)
Li, Dening
1988-01-01
The uniform linear stability of shock waves is considered for quasilinear hyperbolic-parabolic coupled conservation laws in multi-dimensional space. As an example, the stability condition and its dynamic meaning for an isothermal shock wave in radiative hydrodynamics are analyzed.
Potma, Eric Olaf
Multi-dimensional differential imaging with FE-CARS microscopy (Vishnu Vardhan Krishnamachari and Eric O. Potma)
… coherent anti-Stokes Raman scattering (CARS) microscopy [9-11], provide high resolution images with contrast based … to CARS microscopy and was shown to add a series of new contrast mechanisms to the existing palette…
Assessment of the RELAP5 multi-dimensional component model using data from LOFT test L2-5
Davis, C.B.
1998-01-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the LOFT L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident. Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L2-5 experiment were performed using the RELAP5-3D Version BF02 computer code. The calculated thermal-hydraulic responses of the LOFT primary and secondary coolant systems were generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP/MOD3.
SCALE DRAM subsystem power analysis
Bhalodia, Vimal
2005-01-01
To address the needs of the next generation of low-power systems, DDR2 SDRAM offers a number of low-power modes with various performance and power consumption tradeoffs. The SCALE DRAM Subsystem is an energy-aware DRAM ...
Automatic Adaptive MultiDimensional Particle In Cell
Giovanni Lapenta
2007-01-01
Kinetic Particle In Cell (PIC) methods can greatly extend their range of applicability if implicit time differencing and spatial adaption are used to address the wide range of time and length scales typical of plasmas. For implicit differencing, we refer the reader to our recent summary of the implicit moment PIC method implemented in our CELESTE3D code (G. Lapenta,…
Multi-Dimensional Data Visualization
Pastizzo, Matthew J.
Erbacher, Robert F.
… exploratory data analysis (EDA) and graphical data analysis (GDA) in statistics handbooks lends further support to the notion … statistical measures (for a related argument, see Loftus, 1993). The inclusion of exploratory data analysis … scatter plots). Available software packages (e.g., Data Desk 6.1, MatLab 6.1, SAS JMP 4.04, SPSS 10…
Reply to Adams: Multi-Dimensional Edge Interference
Eagle, Nathan N.
We completely agree with Adams that, in social network analysis, the particular research question should drive the definition of what constitutes a tie (1). However, we believe that even studies of inherently social …
Fernandes, Michelle; Stein, Alan; Newton, Charles R.; Cheikh-Ismail, Leila; Kihara, Michael; Wulff, Katharina; de León Quintana, Enrique; Aranzeta, Luis; Soria-Frisch, Aureli; Acedo, Javier; Ibanez, David; Abubakar, Amina; Giuliani, Francesca; Lewis, Tamsin; Kennedy, Stephen; Villar, Jose
2014-01-01
Background The International Fetal and Newborn Growth Consortium for the 21st Century (INTERGROWTH-21st) Project is a population-based, longitudinal study describing early growth and development in an optimally healthy cohort of 4607 mothers and newborns. At 24 months, children are assessed for neurodevelopmental outcomes with the INTERGROWTH-21st Neurodevelopment Package. This paper describes neurodevelopment tools for preschoolers and the systematic approach leading to the development of the Package. Methods An advisory panel shortlisted project-specific criteria (such as multi-dimensional assessments and suitability for international populations) to be fulfilled by a neurodevelopment instrument. A literature review of well-established tools for preschoolers revealed 47 candidates, none of which fulfilled all the project's criteria. A multi-dimensional assessment was, therefore, compiled using a package-based approach by: (i) categorizing desired outcomes into domains, (ii) devising domain-specific criteria for tool selection, and (iii) selecting the most appropriate measure for each domain. Results The Package measures vision (Cardiff tests); cortical auditory processing (auditory evoked potentials to a novelty oddball paradigm); and cognition, language skills, behavior, motor skills and attention (the INTERGROWTH-21st Neurodevelopment Assessment) in 35–45 minutes. Sleep-wake patterns (actigraphy) are also assessed. Tablet-based applications with integrated quality checks and automated, wireless electroencephalography make the Package easy to administer in the field by non-specialist staff. The Package is in use in Brazil, India, Italy, Kenya and the United Kingdom. Conclusions The INTERGROWTH-21st Neurodevelopment Package is a multi-dimensional instrument measuring early child development (ECD). Its developmental approach may be useful to those involved in large-scale ECD research and surveillance efforts. PMID:25423589
A volume-based multi-dimensional population balance approach for modelling high shear granulation
Anders Darelius; Henric Brage; Anders Rasmuson; Ingela Niklasson Björn; Staffan Folestad
2006-01-01
A volume-based multi-dimensional population balance model, based on the approach used by Verkoeijen et al. [2002. Population balances for particulate processes—a volume approach. Chemical Engineering Science 57, 2287–2303], is further developed and applied to a wet granulation process of pharmaceutically relevant material, performed in a high shear mixer. The model is improved by a generalization that accounts for initial non-uniformly
Development of a multi-dimensional thermal-hydraulic system code, MARS 1.3.1
J.-J. Jeong; K. S. Ha; B. D. Chung; W. J. Lee
1999-01-01
A multi-dimensional thermal-hydraulic system code MARS has been developed by consolidating and restructuring the RELAP5/MOD3.2.1.2 and COBRA-TF codes. The two codes were adopted to take advantage of the very general, versatile features of RELAP5 and the realistic three-dimensional hydrodynamic module of COBRA-TF. In the course of code development, major features of each code were consolidated into a single code first.
Exploring Bi-Criteria versus Multi-Dimensional Lower Partial Moment Portfolio Models
Olivier Brandouy; Kristiaan Kerstens; Ignace Van de Woestyne
2009-01-01
This contribution explores how multi-dimensional lower partial moment portfolio models differ from their bi-criteria counterparts. In particular, the mean, semi-variance and semi-skewness model, which seems little used in practice, is contrasted with the rather popular bi-criteria mean semi-variance and mean semi-skewness models. The difference between these models is illustrated via a geometric reconstruction of the multi-dimensional efficient portfolio choice set.
Skip-Webs: Efficient Distributed Data Structures for Multi-Dimensional Data Sets
Goodrich, Michael T.
Skip-webs support linear (one-dimensional) data, such as sorted sets, as well as multi-dimensional data. We show how to perform a query over such a set of n items spread among n hosts using O(log n / log log n) messages for one-dimensional data, or O(log n) messages for fixed-dimensional data, while using only O(log
Metarule-Guided Mining of MultiDimensional Association Rules Using Data Cubes
Micheline Kamber; Jiawei Han; Jenny Chiang
1997-01-01
In this paper, we employ a novel approach to metarule-guided, multi-dimensional association rule mining which explores a data cube structure. We propose algorithms for metarule-guided mining: given a metarule containing p predicates, we compare mining on an n-dimensional (n-D) cube structure (where p < n) with mining on smaller multiple p-dimensional cubes. In addition, we propose an efficient method
Multi-Dimensional Hydrocode Analyses of Penetrating Hypervelocity Impacts
NASA Astrophysics Data System (ADS)
Bessette, G. C.; Lawrence, R. J.; Chhabildas, L. C.; Reinhart, W. D.; Thornhill, T. F.; Saul, W. V.
2004-07-01
The Eulerian hydrocode, CTH, has been used to study the interaction of hypervelocity flyer plates with thin targets at velocities from 6 to 11 km/s. These penetrating impacts produce debris clouds that are subsequently allowed to stagnate against downstream witness plates. Velocity histories from this latter plate are used to infer the evolution and propagation of the debris cloud. This analysis, which is a companion to a parallel experimental effort, examined both numerical and physics-based issues. We conclude that numerical resolution and convergence are important in ways we had not anticipated. The calculated release from the extreme states generated by the initial impact shows discrepancies with related experimental observations, and indicates that even for well-known materials (e.g., aluminum), high-temperature failure criteria are not well understood, and that non-equilibrium or rate-dependent equations of state may be influencing the results.
Hitchhiker's guide to multi-dimensional plant pathology.
Saunders, Diane G O
2015-02-01
Filamentous pathogens pose a substantial threat to global food security. One central question in plant pathology is how pathogens cause infection and manage to evade or suppress plant immunity to promote disease. With many technological advances over the past decade, including DNA sequencing technology, an array of new tools has become embedded within the toolbox of next-generation plant pathologists. By employing a multidisciplinary approach, plant pathologists can fully leverage these technical advances to answer key questions in plant pathology aimed at achieving global food security. This review discusses the impact of: cell biology and genetics on progressing our understanding of infection structure formation on the leaf surface; biochemical and molecular analysis of how pathogens subdue plant immunity and manipulate plant processes through effectors; genomics and DNA sequencing technologies on all areas of plant pathology; and new forms of collaboration on accelerating the exploitation of big data. As we embark on the next phase in plant pathology, the integration of systems biology promises to provide a holistic perspective of plant–pathogen interactions from big data; only once we fully appreciate these complexities can we design truly sustainable solutions to preserve our resources. PMID:25729800
Scale-specific multifractal medical image analysis.
Braverman, Boris; Tambasco, Mauro
2013-01-01
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. In this study, we treat the nonfractal behaviour of medical images over large-scale ranges by considering their box-counting fractal dimension as a scale-dependent parameter rather than a single number. We describe this approach in the context of the more generalized Rényi entropy, in which we can also compute the information and correlation dimensions of images. In addition, we describe and validate a computational improvement to box-counting fractal analysis. This improvement is based on integral images, which allows the speedup of any box-counting or similar fractal analysis algorithm, including estimation of scale-dependent dimensions. Finally, we applied our technique to images of invasive breast cancer tissue from 157 patients to show a relationship between the fractal analysis of these images over certain scale ranges and pathologic tumour grade (a standard prognosticator for breast cancer). Our approach is general and can be applied to any medical imaging application in which the complexity of pathological image structures may have clinical value. PMID:24023588
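The scale-dependent box-counting idea described above can be illustrated in a few lines: count occupied boxes at two scales and take the local slope of log N against log(1/box size). This is a toy sketch on a binary grid, not the authors' integral-image implementation; the grid and box sizes are invented for illustration.

```python
# Sketch: scale-dependent box-counting dimension of a binary image,
# treating the dimension as a local slope between two scales rather
# than a single global number (illustrative only).
import math

def box_counts(grid, box):
    """Count boxes of side `box` containing at least one set pixel."""
    n = len(grid)
    count = 0
    for i in range(0, n, box):
        for j in range(0, n, box):
            if any(grid[a][b]
                   for a in range(i, min(i + box, n))
                   for b in range(j, min(j + box, n))):
                count += 1
    return count

def local_dimension(grid, b1, b2):
    """Slope of log N(b) versus log(1/b) between two box sizes."""
    n1, n2 = box_counts(grid, b1), box_counts(grid, b2)
    return (math.log(n2) - math.log(n1)) / (math.log(1 / b2) - math.log(1 / b1))

# A filled 8x8 square is 2-dimensional at every scale.
grid = [[1] * 8 for _ in range(8)]
print(local_dimension(grid, 1, 2))  # → 2.0
```

Repeating the slope computation over many scale pairs gives the dimension as a function of scale, which is the scale-specific behaviour the study exploits.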
George Mavrotas; José Rui Figueira; Alexandros Antoniadis
2011-01-01
We propose a methodology for obtaining the exact Pareto set of Bi-Objective Multi-Dimensional Knapsack Problems, exploiting the concept of core expansion. The core concept is effectively used in single objective multi-dimensional knapsack problems and it is based on the “divide and conquer” principle. Namely, instead of solving one problem with n variables we solve several sub-problems with a fraction of
Cao, Jialan; Kürsten, Dana; Schneider, Steffen; Knauer, Andrea; Günther, P Mike; Köhler, J Michael
2012-02-01
The technique of microsegmented flow was applied for the generation of two- and higher dimensional concentration spaces for the screening of toxic effects of selected substances on the bacterium Escherichia coli at the nanolitre scale. Up to about 5000 distinct experiments with different combinations of effector-concentrations could be realized in a single experimental run. This was done with the help of a computer program controlling the flow rates of effector-containing syringe pumps and resulted in the formation of multi-dimensional concentration spaces in segment sequences. Prior to the application of this technique for toxicological studies on E. coli the accuracy of this method was tested by simulation experiments with up to five dissolved dyes with different spectral properties. Photometric microflow-through measurement of dye distribution inside the concentration spaces allowed the monitoring of microfluid segment compositions. Finally, we used this technique for the investigation of interferences of the antibiotics ampicillin and chloramphenicol towards E. coli cultures and their modulation by silver nanoparticles by measuring bacterial autofluorescence. Each concentration point in this three-dimensional concentration space was represented by 4 or 5 single segments. Thus, a high reliability of the measured dose/response relations was achieved. As a result, a complex response pattern was discovered including synergistic and compensatory effects as well as the modulation of the range of stimulation of bacterial growth by a sublethal dose of chloramphenicol by silver nanoparticles. PMID:22080187
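The enumeration of a multi-dimensional concentration space by programmed flow rates can be sketched as a Cartesian product of per-effector levels, with a carrier stream topping each segment up to a constant total flow. The pump set-up, level values, and constant-total-flow constraint here are hypothetical illustrations, not the authors' control software.

```python
# Sketch: enumerate segment compositions for a multi-dimensional
# concentration space. Each effector contributes one flow-rate level per
# segment; a hypothetical carrier stream keeps the total flow constant.
from itertools import product

def concentration_grid(levels_per_effector, total_flow=10.0):
    """Yield per-segment flow rates (effector rates + carrier make-up)."""
    for combo in product(*levels_per_effector):
        carrier = total_flow - sum(combo)
        if carrier >= 0:          # skip infeasible combinations
            yield combo + (carrier,)

# Two effectors, three levels each → up to 9 segment compositions.
grid = list(concentration_grid([[0, 2, 4], [0, 1, 3]]))
print(len(grid))   # → 9
print(grid[0])     # → (0, 0, 10.0)
```

Scaling the same product to more effectors and finer level grids yields the thousands of distinct segment compositions per run described in the abstract.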
NASA Technical Reports Server (NTRS)
Darmofal, David L.
2003-01-01
The use of computational simulations in the prediction of complex aerodynamic flows is becoming increasingly prevalent in the design process within the aerospace industry. Continuing advancements in both computing technology and algorithmic development are ultimately leading to attempts at simulating ever-larger, more complex problems. However, by increasing the reliance on computational simulations in the design cycle, we must also increase the accuracy of these simulations in order to maintain or improve the reliability and safety of the resulting aircraft. At the same time, large-scale computational simulations must be made more affordable so that their potential benefits can be fully realized within the design cycle. Thus, a continuing need exists for increasing the accuracy and efficiency of computational algorithms such that computational fluid dynamics can become a viable tool in the design of more reliable, safer aircraft. The objective of this research was the development of an error estimation and grid adaptive strategy for reducing simulation errors in integral outputs (functionals) such as lift or drag from multi-dimensional Euler and Navier-Stokes simulations. In this final report, we summarize our work during this grant.
Adaptive mesh refinement for conservative systems: multi-dimensional efficiency evaluation
R. Keppens; M. Nool; G. Toth; J. P. Goedbloed
2004-03-04
Obtainable computational efficiency is evaluated when using an Adaptive Mesh Refinement (AMR) strategy in time accurate simulations governed by sets of conservation laws. For a variety of 1D, 2D, and 3D hydro- and magnetohydrodynamic simulations, AMR is used in combination with several shock-capturing, conservative discretization schemes. Solution accuracy and execution times are compared with static grid simulations at the corresponding high resolution, and time spent on AMR overhead is reported. Our examples reach corresponding efficiencies of 5 to 20 in multi-dimensional calculations and only 1.5–8% overhead is observed. For AMR calculations of multi-dimensional magnetohydrodynamic problems, several strategies for controlling the $\
Investigation of multi-dimensional computational models for calculating pollutant transport
Pepper, D W; Cooper, R E; Baker, A J
1980-01-01
A performance study of five numerical solution algorithms for multi-dimensional advection-diffusion prediction on mesoscale grids was made. Test problems include transport of point and distributed sources, and a simulation of a continuous source. In all cases, analytical solutions are available to assess relative accuracy. The particle-in-cell and second-moment algorithms, both of which employ sub-grid resolution coupled with Lagrangian advection, exhibit superior accuracy in modeling a point source release. For modeling of a distributed source, algorithms based upon the pseudospectral and finite element interpolation concepts, exhibit improved accuracy on practical discretizations.
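The numerical diffusion that motivates sub-grid and Lagrangian treatments of point sources can be seen in the simplest scheme of all: first-order upwind advection smears a point source even while conserving mass. This is an illustrative toy on a periodic 1-D grid, far simpler than the five algorithms the study compares; the grid size and CFL number are invented.

```python
# Sketch: first-order upwind advection of a point source on a periodic
# 1-D grid (velocity > 0). Low-order Eulerian schemes like this one
# conserve mass but smear sharp features, which is why sub-grid and
# Lagrangian methods do better on point-source releases.
def upwind_advect(c, courant, steps):
    """Advance the concentration field `c` with CFL number `courant`."""
    for _ in range(steps):
        # c[i-1] wraps at i = 0, giving a periodic boundary.
        c = [c[i] - courant * (c[i] - c[i - 1]) for i in range(len(c))]
    return c

c0 = [0.0] * 20
c0[2] = 1.0                          # unit point source
c1 = upwind_advect(c0, 0.5, 10)
print(abs(sum(c1) - 1.0) < 1e-9)     # mass conserved → True
print(max(c1) < 1.0)                 # peak smeared by numerical diffusion → True
```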
NASA Astrophysics Data System (ADS)
Zeng, Wen; Xie, Maozhao
2006-12-01
The detailed surface reaction mechanism of methane on a rhodium catalyst was analyzed. Comparisons between numerical simulation and experiments showed basic agreement. The combustion process of a homogeneous charge compression ignition (HCCI) engine whose piston surface had been coated with catalyst (rhodium and platinum) was numerically investigated using a multi-dimensional model with detailed chemical kinetics. The effects of catalytic combustion on the ignition timing, the temperature and CO concentration fields, and the HC, CO and NOx emissions of the HCCI engine were discussed. The results showed that the catalysis advanced the ignition timing of the HCCI engine and decreased HC and CO emissions.
Multi-dimensional spline-based non-rigid image registration
NASA Astrophysics Data System (ADS)
Viola, Francesco; Guenther, Drake A.; Coe, Ryan L.; Walker, William F.
2007-03-01
Image registration, or equivalently motion estimation, plays a central role in a broad range of ultrasound applications including elastography, estimation of blood or tissue motion, radiation force imaging, and extended field of view imaging. Because of its central significance, motion estimation accuracy, precision, and computational cost are of critical importance. Furthermore, since motion estimation is typically performed on sampled signals, while estimates are usually desired over a continuous domain, performance should be considered in conjunction with associated interpolation. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous time representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minimum of this function yields estimates of the time delay. In this paper we describe a MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multi-dimensional displacements/strain components from multi-dimensional data sets. We describe the mathematical formulation for three-dimensional (3D) motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm, comparing it to currently available multi-dimensional estimators. In 1,000 noise-free simulations we found that 2D MUSE exhibits maximum bias errors of 4.8 nm and 297 nm in range and azimuth, respectively. The maximum simulated standard deviation of estimates in both dimensions was comparable at 0.0026 samples (corresponding to 54 nm axially and 378 nm laterally). These results are two to three orders of magnitude lower than currently used 2D tracking methods. Simulation of performance in 3D yielded results similar to those observed in 2D. We also performed experiments using 2D MUSE on an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. With this experimental data we found that bias errors were significantly smaller than geometric errors induced by machining of the transducer mount.
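The core idea of spline-based sub-sample delay estimation can be shown with a toy 1-D analogue: interpolate the reference with a cubic (here Catmull-Rom) and find the delay minimizing the squared mismatch. The published MUSE estimator locates the minimum of an analytical matching function; this sketch grid-searches it instead, and all signals and parameters are invented for illustration.

```python
# Toy 1-D analogue of spline-based sub-sample time-delay estimation:
# a Catmull-Rom cubic gives a continuous version of the reference, and a
# fine grid search finds the delay minimizing the sum of squared errors.
import math

def catmull_rom(y, t):
    """Cubic interpolation of samples y at fractional position t."""
    i = int(t)
    f = t - i
    p0, p1, p2, p3 = y[i - 1], y[i], y[i + 1], y[i + 2]
    return p1 + 0.5 * f * (p2 - p0 + f * (2 * p0 - 5 * p1 + 4 * p2 - p3
                                          + f * (3 * (p1 - p2) + p3 - p0)))

def estimate_delay(ref, delayed, max_delay=2.0, step=0.001):
    """Return the sub-sample delay minimizing the SSD over a fine grid."""
    lo, hi = 3, len(ref) - 3          # keep the cubic stencil in range
    best, best_err = 0.0, float("inf")
    for s in range(int(round(max_delay / step)) + 1):
        d = s * step
        err = sum((catmull_rom(ref, i - d) - delayed[i]) ** 2
                  for i in range(lo, hi))
        if err < best_err:
            best, best_err = d, err
    return best

# Reference sine and a copy delayed by 0.3 samples.
ref = [math.sin(0.3 * n) for n in range(64)]
delayed = [math.sin(0.3 * (n - 0.3)) for n in range(64)]
print(abs(estimate_delay(ref, delayed) - 0.3) < 0.02)  # → True
```

Solving for the minimum analytically, as MUSE does, removes the grid-search cost and extends naturally to multiple dimensions.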
2-D/Axisymmetric Formulation of Multi-dimensional Upwind Scheme
NASA Technical Reports Server (NTRS)
Wood, William A.; Kleb, William L.
2001-01-01
A multi-dimensional upwind discretization of the two-dimensional/axisymmetric Navier-Stokes equations is detailed for unstructured meshes. The algorithm is an extension of the fluctuation splitting scheme of Sidilkover. Boundary conditions are implemented weakly so that all nodes are updated using the base scheme, and eigenvalue limiting is incorporated to suppress expansion shocks. Test cases for Mach numbers ranging from 0.1 to 17 are considered, with results compared against an unstructured upwind finite volume scheme. The fluctuation splitting inviscid distribution requires fewer operations than the finite volume routine, and is seen to produce less artificial dissipation, leading to generally improved solution accuracy.
MXA: a customizable HDF5-based data format for multi-dimensional data sets
NASA Astrophysics Data System (ADS)
Jackson, M.; Simmons, J. P.; De Graef, M.
2010-09-01
A new digital file format is proposed for the long-term archival storage of experimental data sets generated by serial sectioning instruments. The format is known as the multi-dimensional eXtensible Archive (MXA) format and is based on the public domain Hierarchical Data Format (HDF5). The MXA data model, its description by means of an eXtensible Markup Language (XML) file with associated Document Type Definition (DTD) are described in detail. The public domain MXA package is available through a dedicated web site (mxa.web.cmu.edu), along with implementation details and example data files.
On algebraic damping close to inhomogeneous Vlasov equilibria in multi-dimensional spaces
NASA Astrophysics Data System (ADS)
Barré, Julien; Yamaguchi, Yoshiyuki Y.
2013-06-01
We investigate the asymptotic damping of a perturbation around inhomogeneous stable stationary states of the Vlasov equation in spatially multi-dimensional systems. We show that branch singularities of the Fourier-Laplace transform of the perturbation yield algebraic dampings, even for a smooth stationary state and perturbation. In two spatial dimensions, we classify the singularities and compute the associated damping rate and frequency. This 2D setting also applies to spherically symmetric self-gravitating systems. We validate the theory on an advection equation associated with the isochrone model, a model of spherical self-gravitating systems.
Barth, Jens; Oberndorfer, Cäcilia; Pasluosta, Cristian; Schülein, Samuel; Gassner, Heiko; Reinfelder, Samuel; Kugler, Patrick; Schuldhaus, Dominik; Winkler, Jürgen; Klucken, Jochen; Eskofier, Björn M.
2015-01-01
Changes in gait patterns provide important information about individuals’ health. To perform sensor based gait analysis, it is crucial to develop methodologies to automatically segment single strides from continuous movement sequences. In this study we developed an algorithm based on time-invariant template matching to isolate strides from inertial sensor signals. Shoe-mounted gyroscopes and accelerometers were used to record gait data from 40 elderly controls, 15 patients with Parkinson’s disease and 15 geriatric patients. Each stride was manually labeled from a straight 40 m walk test and from a video monitored free walk sequence. A multi-dimensional subsequence Dynamic Time Warping (msDTW) approach was used to search for patterns matching a pre-defined stride template constructed from 25 elderly controls. F-measure of 98% (recall 98%, precision 98%) for 40 m walk tests and of 97% (recall 97%, precision 97%) for free walk tests were obtained for the three groups. Compared to conventional peak detection methods up to 15% F-measure improvement was shown. The msDTW proved to be robust for segmenting strides from both standardized gait tests and free walks. This approach may serve as a platform for individualized stride segmentation during activities of daily living. PMID:25789489
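The subsequence matching at the heart of the method above can be sketched in one dimension: standard DTW forces the template to span the whole signal, whereas the subsequence variant lets the match start anywhere by zeroing the first row of the cost matrix. The actual msDTW operates on multi-dimensional inertial signals with a template built from 25 controls; this tiny integer example is purely illustrative.

```python
# Sketch: subsequence Dynamic Time Warping. Row 0 of the cost matrix is
# all zeros, so the warped match of the template may begin at any
# position in the longer signal (1-D illustration of the msDTW idea).
def subsequence_dtw(template, signal):
    """Return (cost, end_index) of the best warped match of template in signal."""
    n, m = len(template), len(signal)
    INF = float("inf")
    prev = [0.0] * (m + 1)            # free starting position
    for i in range(1, n + 1):
        cur = [INF] * (m + 1)
        for j in range(1, m + 1):
            d = abs(template[i - 1] - signal[j - 1])
            cur[j] = d + min(prev[j], cur[j - 1], prev[j - 1])
        prev = cur
    end = min(range(1, m + 1), key=lambda j: prev[j])
    return prev[end], end - 1

template = [0, 2, 0]
signal = [5, 5, 0, 2, 2, 0, 5]
print(subsequence_dtw(template, signal))  # → (0.0, 5)
```

Stride segmentation repeats this search along the recording, cutting out each region whose warped distance to the stride template falls below a threshold.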
Optimizing threshold for extreme scale analysis
NASA Astrophysics Data System (ADS)
Maynard, Robert; Moreland, Kenneth; Ayachit, Utkarsh; Geveci, Berk; Ma, Kwan-Liu
2013-01-01
As the HPC community starts focusing its efforts towards exascale, it becomes clear that we are looking at machines with a billion way concurrency. Although parallel computing has been at the core of the performance gains achieved until now, scaling over 1,000 times the current concurrency can be challenging. As discussed in this paper, even the smallest memory access and synchronization overheads can cause major bottlenecks at this scale. As we develop new software and adapt existing algorithms for exascale, we need to be cognizant of such pitfalls. In this paper, we document our experience with optimizing a fairly common and parallelizable visualization algorithm, threshold of cells based on scalar values, for such highly concurrent architectures. Our experiments help us identify design patterns that can be generalized for other visualization algorithms as well. We discuss our implementation within the Dax toolkit, which is a framework for data analysis and visualization at extreme scale. The Dax toolkit employs the patterns discussed here within the framework's scaffolding to make it easier for algorithm developers to write algorithms without having to worry about such scaling issues.
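The synchronization pitfalls discussed above are commonly avoided with a two-pass, data-parallel pattern: pass one counts how many cells pass the threshold in each chunk, a prefix sum converts those counts into independent write offsets, and pass two scatters the surviving cells with no locking. The serial sketch below illustrates that pattern with invented data; it is not the Dax toolkit code.

```python
# Sketch of the data-parallel threshold pattern: count, prefix-sum,
# then scatter. Each chunk owns a disjoint slice of the output, so the
# second pass needs no synchronization when run in parallel.
def threshold_two_pass(values, lo, hi, chunks=4):
    n = len(values)
    bounds = [(k * n // chunks, (k + 1) * n // chunks) for k in range(chunks)]
    # Pass 1: per-chunk pass counts.
    counts = [sum(1 for v in values[a:b] if lo <= v <= hi) for a, b in bounds]
    # Exclusive prefix sum → per-chunk write offsets.
    offsets, total = [], 0
    for c in counts:
        offsets.append(total)
        total += c
    # Pass 2: each chunk writes into its own slice of the output.
    out = [None] * total
    for (a, b), off in zip(bounds, offsets):
        for v in values[a:b]:
            if lo <= v <= hi:
                out[off] = v
                off += 1
    return out

print(threshold_two_pass([3, 9, 1, 7, 5, 2, 8, 6], 5, 9))  # → [9, 7, 5, 8, 6]
```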
On the Global Existence and Stability of a Multi-Dimensional Supersonic Conic Shock Wave
NASA Astrophysics Data System (ADS)
Li, Jun; Witt, Ingo; Yin, Huicheng
2014-07-01
We establish the global existence and stability of a three-dimensional supersonic conic shock wave for a compactly perturbed steady supersonic flow past an infinitely long circular cone with a sharp angle. The flow is described by a 3-D steady potential equation, which is multi-dimensional, quasilinear, and hyperbolic with respect to the supersonic direction. Making use of the geometric properties of the pointed shock surface together with the Rankine-Hugoniot conditions on the conic shock surface and the boundary condition on the surface of the cone, we obtain a global uniform weighted energy estimate for the nonlinear problem by finding an appropriate multiplier and establishing a new Hardy-type inequality on the shock surface. Based on this, we prove that a multi-dimensional conic shock attached to the vertex of the cone exists globally when the Mach number of the incoming supersonic flow is sufficiently large. Moreover, the asymptotic behavior of the 3-D supersonic conic shock solution, which is shown to approach the corresponding background shock solution in the downstream domain for the uniform supersonic constant flow past the sharp cone, is also explicitly given.
Multi-dimensional NMR without coherence transfer: Minimizing losses in large systems
Liu, Yizhou; Prestegard, James H.
2011-01-01
Most multi-dimensional solution NMR experiments connect one dimension to another using coherence transfer steps that involve evolution under scalar couplings. While experiments of this type have been a boon to biomolecular NMR, the need to work on ever larger systems pushes the limits of these procedures. Spin relaxation during transfer periods for even the most efficient 15N–1H HSQC experiments can result in more than an order of magnitude loss in sensitivity for molecules in the 100 kDa range. A relatively unexploited approach to preventing signal loss is to avoid coherence transfer steps entirely. Here we describe a scheme for multi-dimensional NMR spectroscopy that relies on direct frequency encoding of a second dimension by multi-frequency decoupling during acquisition, a technique that we call MD-DIRECT. A substantial improvement in sensitivity of 15N–1H correlation spectra is illustrated with application to the 21 kDa ADP ribosylation factor (ARF) labeled with 15N in all alanine residues. Operation at 4 °C mimics observation of a 50 kDa protein at 35 °C. PMID:21835658
NASA Astrophysics Data System (ADS)
Schaerer, Joël; Fassi, Aurora; Riboldi, Marco; Cerveri, Pietro; Baroni, Guido; Sarrut, David
2012-01-01
Real-time optical surface imaging systems offer a non-invasive way to monitor intra-fraction motion of a patient's thorax surface during radiotherapy treatments. Due to lack of point correspondence in dynamic surface acquisition, such systems cannot currently provide 3D motion tracking at specific surface landmarks, as available in optical technologies based on passive markers. We propose to apply deformable mesh registration to extract surface point trajectories from markerless optical imaging, thus yielding multi-dimensional breathing traces. The investigated approach is based on a non-rigid extension of the iterative closest point algorithm, using a locally affine regularization. The accuracy in tracking breathing motion was quantified in a group of healthy volunteers, by pair-wise registering the thoraco-abdominal surfaces acquired at three different respiratory phases using a clinically available optical system. The motion tracking accuracy proved to be maximal in the abdominal region, where breathing motion mostly occurs, with average errors of 1.09 mm. The results demonstrate the feasibility of recovering multi-dimensional breathing motion from markerless optical surface acquisitions by using the implemented deformable registration algorithm. The approach can potentially improve respiratory motion management in radiation therapy, including motion artefact reduction or tumour motion compensation by means of internal/external correlation models.
Irregularities and Scaling in Signal and Image Processing: Multifractal Analysis
Paris-Sud XI, Université de
P. Abry, S. Contents include: multifractal analysis; and the data analysis and signal processing underlying multifractal analysis. The work then reformulates these theoretical tools into a wavelet framework
Large-Scale Parametric Survival Analysis†
Mittal, Sushil; Madigan, David; Cheng, Jerry; Burd, Randall S.
2013-01-01
Survival analysis has been a topic of active statistical research in the past few decades with applications spread across several areas. Traditional applications usually consider data with only small numbers of predictors with a few hundreds or thousands of observations. Recent advances in data acquisition techniques and computation power have led to considerable interest in analyzing very high-dimensional data where the number of predictor variables and the number of observations range between 10^4 and 10^6. In this paper, we present a tool for performing large-scale regularized parametric survival analysis using a variant of the cyclic coordinate descent method. Through our experiments on two real data sets, we show that application of regularized models to high-dimensional data avoids overfitting and can provide improved predictive performance and calibration over corresponding low-dimensional models. PMID:23625862
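The optimization pattern behind such large-scale regularized fits, cyclic coordinate descent, updates one coefficient at a time with a closed-form soft-threshold step. The sketch below applies it to an L1-regularized least-squares problem with invented data; the paper itself optimizes parametric survival likelihoods, not plain least squares.

```python
# Minimal sketch of cyclic coordinate descent for L1-regularized
# least squares (lasso). Each coordinate update is a one-dimensional
# soft-threshold solve, which is what makes the method scale.
def soft_threshold(z, g):
    return (z - g) if z > g else (z + g) if z < -g else 0.0

def lasso_cd(X, y, lam, iters=200):
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        for j in range(p):               # cycle through coordinates
            # Partial residual excluding feature j.
            r = [y[i] - sum(X[i][k] * beta[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / norm
    return beta

# y depends only on the first feature; L1 drives the noise feature to 0.
X = [[1, 1], [2, -1], [3, 1], [4, -1]]
y = [2, 4, 6, 8]
b = lasso_cd(X, y, lam=0.5)
print(round(b[0], 2), b[1])  # → 1.98 0.0
```

In the high-dimensional regime the residual is maintained incrementally rather than recomputed, so each coordinate update costs O(n).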
Singh, Brajesh K.; Srivastava, Vineet K.
2015-01-01
The main goal of this paper is to present a new approximate series solution of the multi-dimensional (heat-like) diffusion equation with time-fractional derivative in Caputo form using a semi-analytical approach: fractional-order reduced differential transform method (FRDTM). The efficiency of FRDTM is confirmed by considering four test problems of the multi-dimensional time fractional-order diffusion equation. FRDTM is a very efficient, effective and powerful mathematical tool which provides exact or very close approximate solutions for a wide range of real-world problems arising in engineering and natural sciences, modelled in terms of differential equations. PMID:26064639
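The transform idea can be seen in its integer-order form: for the heat equation u_t = u_xx, the reduced differential transform turns the PDE into the recurrence U_{k+1} = (d²U_k/dx²)/(k+1), and the solution is the power series Σ U_k(x) tᵏ. The sketch below applies this simplified, non-fractional analogue of FRDTM to polynomial initial data; FRDTM replaces the factorial weights with Gamma-function ratios for the Caputo derivative.

```python
# Sketch: integer-order reduced differential transform for u_t = u_xx,
# using polynomials stored as coefficient lists [c0, c1, c2, ...].
# (A simplified analogue of FRDTM; the fractional case changes only the
# t-weights, not the structure of the recurrence.)
def diff2(poly):
    """Second x-derivative of a polynomial given by its coefficients."""
    return [c * k * (k - 1) for k, c in enumerate(poly)][2:] or [0.0]

def rdtm_heat(u0, terms):
    """Transformed components U_0..U_{terms-1}: U_{k+1} = U_k'' / (k+1)."""
    U = [list(u0)]
    for k in range(terms - 1):
        U.append([c / (k + 1) for c in diff2(U[k])])
    return U

def eval_series(U, x, t):
    """Evaluate u(x, t) = sum_k U_k(x) * t**k."""
    return sum(sum(c * x**i for i, c in enumerate(Uk)) * t**k
               for k, Uk in enumerate(U))

# u(x,0) = x^2 gives the exact solution u = x^2 + 2t in two terms.
U = rdtm_heat([0.0, 0.0, 1.0], terms=3)
print(eval_series(U, x=3.0, t=0.5))  # → 10.0
```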
Akhter, T.; Hossain, M. M.; Mamun, A. A. [Department of Physics, Jahangirnagar University, Savar, Dhaka-1342 (Bangladesh)
2012-09-15
Dust-acoustic (DA) solitary structures and their multi-dimensional instability in a magnetized dusty plasma (containing inertial negatively and positively charged dust particles, and Boltzmann electrons and ions) have been theoretically investigated by the reductive perturbation method, and the small-k perturbation expansion technique. It has been found that the basic features (polarity, speed, height, thickness, etc.) of such DA solitary structures, and their multi-dimensional instability criterion or growth rate are significantly modified by the presence of opposite polarity dust particles and external magnetic field. The implications of our results in space and laboratory dusty plasma systems have been briefly discussed.
Scaling analysis of negative differential thermal resistance
NASA Astrophysics Data System (ADS)
Chan, Ho-Kei; He, Dahai; Hu, Bambi
2014-05-01
Negative differential thermal resistance (NDTR) can be generated for any one-dimensional heat flow with a temperature-dependent thermal conductivity. In a system-independent scaling analysis, the general condition for the occurrence of NDTR is found to be an inequality with three scaling exponents: n1·n2 < -(1 + n3), where n1 ∈ (-∞, +∞) describes a particular way of varying the temperature difference, and n2 and n3 describe, respectively, the dependence of the thermal conductivity on an average temperature and on the temperature difference. For cases with a temperature-dependent thermal conductivity, i.e. n2 ≠ 0, NDTR can always be generated with a suitable choice of n1 such that this inequality is satisfied. The results explain the illusory absence of a NDTR regime in certain lattices and predict new ways of generating NDTR, where such predictions have been verified numerically. The analysis will provide insights for the design of thermal devices, and for the manipulation of heat flow in experimental systems, such as nanotubes.
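A toy numeric example shows the effect: take a conductivity that rises steeply with temperature, k(T) = T³, and widen the temperature difference by lowering the cold end. The flux J = k(T_avg)·dT first rises, then falls as the dropping mean temperature suppresses the conductivity. The functional form and numbers here are illustrative only, and the mapping onto the paper's exponents n1, n2, n3 is loose rather than exact.

```python
# Numeric sketch of negative differential thermal resistance (NDTR):
# with k(T) = T**3 and the cold end lowered to widen dT, the flux
# J = k(T_avg) * dT eventually DECREASES as dT grows.
def flux(t_hot, dt, exponent=3.0):
    """Heat flux J = k(T_avg) * dT with k(T) = T**exponent."""
    t_avg = t_hot - dt / 2.0          # mean temperature falls as dT widens
    return t_avg ** exponent * dt

fluxes = [flux(1.0, dt) for dt in (0.2, 0.4, 0.6, 0.8)]
print([round(j, 3) for j in fluxes])  # → [0.146, 0.205, 0.206, 0.173]
print(fluxes[3] < fluxes[2])          # flux drops while dT grows → True
```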
Radiative interactions in multi-dimensional chemically reacting flows using Monte Carlo simulations
NASA Technical Reports Server (NTRS)
Liu, Jiwen; Tiwari, Surendra N.
1994-01-01
The Monte Carlo method (MCM) is applied to analyze radiative heat transfer in nongray gases. The nongray model employed is based on the statistical narrow band model with an exponential-tailed inverse intensity distribution. The amount and transfer of the emitted radiative energy in a finite volume element within a medium are considered in an exact manner. The spectral correlation between transmittances of two different segments of the same path in a medium makes the statistical relationship different from the conventional relationship, which provides only the non-correlated results for nongray methods. Validation of the Monte Carlo formulations is conducted by comparing results of this method with other solutions. In order to further establish the validity of the MCM, a relatively simple problem of radiative interactions in laminar parallel plate flows is considered. One-dimensional correlated Monte Carlo formulations are applied to investigate radiative heat transfer. The nongray Monte Carlo solutions are also obtained for the same problem, and they essentially match the available analytical solutions. The exact correlated and non-correlated Monte Carlo formulations are very complicated for multi-dimensional systems. However, by introducing the assumption of an infinitesimal volume element, approximate correlated and non-correlated formulations are obtained which are much simpler than the exact formulations. Consideration of different problems and comparison of different solutions reveal that the approximate and exact correlated solutions agree very well, and so do the approximate and exact non-correlated solutions. However, the two non-correlated solutions have no physical meaning because they significantly differ from the correlated solutions. An accurate prediction of radiative heat transfer in any nongray and multi-dimensional system is possible by using the approximate correlated formulations. 
Radiative interactions are investigated in chemically reacting compressible flows of premixed hydrogen and air in an expanding nozzle. The governing equations are based on the fully elliptic Navier-Stokes equations. Chemical reaction mechanisms were described by a finite rate chemistry model. The correlated Monte Carlo method developed earlier was employed to simulate multi-dimensional radiative heat transfer. Results obtained demonstrate that radiative effects on the flowfield are minimal but radiative effects on the wall heat transfer are significant. Extensive parametric studies are conducted to investigate the effects of equivalence ratio, wall temperature, inlet flow temperature, and nozzle size on the radiative and conductive wall fluxes.
Multi-Dimensional, Non-Contact Metrology using Trilateration and High Resolution FMCW Ladar
Mateo, Ana Baselga
2015-01-01
Here we propose, describe, and provide experimental proof-of-concept demonstrations of a multi-dimensional, non-contact length metrology system design based on high resolution (millimeter to sub-100 micron) frequency modulated continuous wave (FMCW) ladar and trilateration based on length measurements from multiple, optical fiber-connected transmitters. With an accurate FMCW ladar source, the trilateration-based design provides 3D resolution inherently independent of stand-off range and allows self-calibration for flexible setup of a field system. A proof-of-concept experimental demonstration was performed using a highly-stabilized, 2 THz bandwidth chirped laser source, two emitters, and one scanning emitter/receiver providing 1D surface profiles (2D metrology) of diffuse targets. The measured coordinate precision of < 200 microns was determined to be limited by laser speckle issues caused by diffuse scattering from the targets.
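Independent of the ladar details, the trilateration step can be sketched as a linearized least-squares position fix from the measured ranges. This is the generic textbook method, not the authors' implementation, and the anchor/target values below are made up for illustration:

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Least-squares position from ranges to known anchor (emitter)
    positions.  Subtracting the first sphere equation from the others
    linearizes the problem: 2*(a_i - a_0) . p = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2."""
    anchors = np.asarray(anchors, float)
    ranges = np.asarray(ranges, float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (ranges[0]**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # hypothetical emitter layout
target = np.array([0.3, 0.4])
ranges = [np.linalg.norm(target - np.array(a)) for a in anchors]
print(trilaterate(anchors, ranges))  # ~ [0.3, 0.4]
```

With noisy ranges the same least-squares solve simply returns the best linear fit, which is why adding emitters improves the coordinate precision.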
NASA Astrophysics Data System (ADS)
Nardin, Gaël; Autry, Travis M.; Moody, Galan; Singh, Rohan; Li, Hebin; Cundiff, Steven T.
2015-03-01
We review our recent work on multi-dimensional coherent optical spectroscopy (MDCS) of semiconductor nanostructures. Two approaches, appropriate for the study of semiconductor materials, are presented and compared. A first method is based on a non-collinear geometry, where the Four-Wave-Mixing (FWM) signal is detected in the form of a radiated optical field. This approach works for samples with translational symmetry, such as Quantum Wells (QWs) or large and dense ensembles of Quantum Dots (QDs). A second method detects the FWM in the form of a photocurrent in a collinear geometry. This second approach extends the horizon of MDCS to sub-diffraction nanostructures, such as single QDs, nanowires, or nanotubes, and small ensembles thereof. Examples of experimental results obtained on semiconductor QW structures are given for each method. In particular, it is shown how MDCS can assess coupling between excitons confined in separated QWs.
Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests
Niles, A M; Garcia, F; Greenwood, D W; Forbes, J W; Tarver, C M; Chidester, S K; Garza, R G; Swizter, L L
2001-05-31
The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and also be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 200-540 µs after projectile impact, creating 0.39-2.00 kb peak shocks centered in PBX 9501 explosives discs and a 0.60 kb peak shock in a LX-04 disc. Steven Test modeling results, based on ignition and growth criteria, are presented for two PBX 9501 scenarios: one with projectile impact velocity just under threshold (51 m/s) and one with projectile impact velocity just over threshold (55 m/s). Modeling results are presented and compared to experimental data.
Multi-Dimensional Simulations of Radiative Transfer in Aspherical Core-Collapse Supernovae
Tanaka, Masaomi [Department of Astronomy, Graduate School of Science, University of Tokyo, Tokyo (Japan); Maeda, Keiichi [Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa (Japan); Max-Planck-Institut fuer Astrophysik, Garching (Germany); Mazzali, Paolo A. [Max-Planck-Institut fuer Astrophysik, Garching bei Muenchen (Germany); Istituto Nazionale di Astrofisica, OATs, Trieste (Italy); Nomoto, Ken'ichi [Department of Astronomy, Graduate School of Science, University of Tokyo, Tokyo (Japan); Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa (Japan)
2008-05-21
We study optical radiation of aspherical supernovae (SNe) and present an approach to verify the asphericity of SNe with optical observations of extragalactic SNe. For this purpose, we have developed a multi-dimensional Monte-Carlo radiative transfer code, SAMURAI (SupernovA Multidimensional RAdIative transfer code). The code can compute the optical light curve and spectra both at early phases (≲40 days after the explosion) and late phases (~1 year after the explosion), based on hydrodynamic and nucleosynthetic models. We show that all the optical observations of SN 1998bw (associated with GRB 980425) are consistent with polar-viewed radiation of the aspherical explosion model with kinetic energy 20×10^51 ergs. Properties of off-axis hypernovae are also discussed briefly.
Measurement of Low Level Explosives Reaction in Gauged Multi-Dimensional Steven Impact Tests
NASA Astrophysics Data System (ADS)
Niles, A. M.; Forbes, J. W.; Tarver, C. M.; Chidester, S. K.; Garcia, F.; Greenwood, D. W.; Garza, R. G.
2001-06-01
The Steven Test was developed to determine relative impact sensitivity of metal encased solid high explosives and be amenable to two-dimensional modeling. Low level reaction thresholds occur at impact velocities below those required for shock initiation. To assist in understanding this test, multi-dimensional gauge techniques utilizing carbon foil and carbon resistor gauges were used to measure pressure and event times. Carbon resistor gauges indicated late time low level reactions 350 µs after projectile impact, creating 0.5-0.6 kb peak shocks centered in PBX 9501 explosives discs. Steven Test calculations based on ignition and growth criteria predict low level reactions occurring at 335 µs, which agrees well with experimental data. Additional gauged experiments simulating the Steven Test have been performed and will be discussed. * This work was performed under the auspices of the U.S. Department of Energy by the University of California, Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.
Differential proteomics incorporating iTRAQ labeling and multi-dimensional separations.
Collins, Ben C; Lau, Thomas Y K; Pennington, Stephen R; Gallagher, William M
2011-01-01
Considerable effort is currently being expended to integrate newly developed "omics"-based approaches (proteomics, transcriptomics, and metabonomics) into preclinical safety evaluation workflows in the hope that more sensitive prediction of toxicology can be achieved as reported by Waters and Fostel (Nat. Rev. Genet. 5(12):936-948, 2004) and Craig et al. (J. Proteome Res. 5(7):1586-1601, 2006). Proteomic approaches are well placed to contribute to this effort as (a) proteins are the metabolically active products of genes and, as such, may provide more sensitive and direct predictive information on drug-induced liabilities and (b) they have the potential to determine tissue leakage markers in peripheral fluids. Here, we describe a workflow for proteomic semi-quantitative expression profiling of liver from rats treated with a known hepatotoxicant using a multiplexed isobaric labeling strategy and multi-dimensional liquid chromatography. PMID:20972766
A G-FDTD scheme for solving multi-dimensional open dissipative Gross-Pitaevskii equations
NASA Astrophysics Data System (ADS)
Moxley, Frederick Ira; Byrnes, Tim; Ma, Baoling; Yan, Yun; Dai, Weizhong
2015-02-01
Behaviors of dark soliton propagation, collision, and vortex formation in the context of a non-equilibrium condensate are interesting to study. This can be achieved by solving open dissipative Gross-Pitaevskii equations (dGPEs) in multiple dimensions, which are a generalization of the standard Gross-Pitaevskii equation that includes effects of the condensate gain and loss. In this article, we present a generalized finite-difference time-domain (G-FDTD) scheme, which is explicit, stable, and permits an accurate solution with simple computation, for solving the multi-dimensional dGPE. The scheme is tested by solving a steady state problem in the non-equilibrium condensate. Moreover, it is shown that the stability condition for the scheme offers a more relaxed time step restriction than the popular pseudo-spectral method. The G-FDTD scheme is then employed to simulate the dark soliton propagation, collision, and the formation of vortex-antivortex pairs.
Chen, Dong; Eisley, Noel A.; Steinmacher-Burow, Burkhard; Heidelberger, Philip
2013-01-29
A computer implemented method and a system for routing data packets in a multi-dimensional computer network. The method comprises routing a data packet among nodes along one dimension towards a root node, each node having input and output communication links, said root node not having any outgoing uplinks, and determining at each node if the data packet has reached a predefined coordinate for the dimension or an edge of the subrectangle for the dimension, and if the data packet has reached the predefined coordinate for the dimension or the edge of the subrectangle for the dimension, determining if the data packet has reached the root node, and if the data packet has not reached the root node, routing the data packet among nodes along another dimension towards the root node.
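A rough sketch of the claimed routing logic, with hypothetical names and a simplified rule (advance along one dimension toward the root's coordinate, bounded by the subrectangle edge, then switch dimensions); the patent itself covers more, such as communication links and uplink handling:

```python
def route_to_root(start, root, edges):
    """Dimension-ordered routing toward a root node.  Advance one hop
    at a time along dimension 0 until the root's coordinate for that
    dimension (or the subrectangle edge, given as (lo, hi) per
    dimension) is reached, then move on to the next dimension."""
    path = [tuple(start)]
    node = list(start)
    for dim in range(len(start)):
        while node[dim] != root[dim]:
            step = 1 if root[dim] > node[dim] else -1
            nxt = node[dim] + step
            lo, hi = edges[dim]
            if nxt < lo or nxt > hi:   # hit the edge of the subrectangle
                break
            node[dim] = nxt
            path.append(tuple(node))
    return path

path = route_to_root(start=(2, 3), root=(0, 0), edges=[(0, 4), (0, 4)])
print(path)  # [(2, 3), (1, 3), (0, 3), (0, 2), (0, 1), (0, 0)]
```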
A dynamic nuclear polarization strategy for multi-dimensional Earth's field NMR spectroscopy.
Halse, Meghan E; Callaghan, Paul T
2008-12-01
Dynamic nuclear polarization (DNP) is introduced as a powerful tool for polarization enhancement in multi-dimensional Earth's field NMR spectroscopy. Maximum polarization enhancements, relative to thermal equilibrium in the Earth's magnetic field, are calculated theoretically and compared to the more traditional prepolarization approach for NMR sensitivity enhancement at ultra-low fields. Signal enhancement factors on the order of 3000 are demonstrated experimentally using DNP with a nitroxide free radical, TEMPO, whose unpaired electron is strongly coupled to a neighboring ¹⁴N nucleus via the hyperfine interaction. A high-quality 2D ¹⁹F-¹H COSY spectrum acquired in the Earth's magnetic field with DNP enhancement is presented and compared to simulation. PMID:18926746
Two-dimensional Core-collapse Supernova Models with Multi-dimensional Transport
NASA Astrophysics Data System (ADS)
Dolence, Joshua C.; Burrows, Adam; Zhang, Weiqun
2015-02-01
We present new two-dimensional (2D) axisymmetric neutrino radiation/hydrodynamic models of core-collapse supernova (CCSN) cores. We use the CASTRO code, which incorporates truly multi-dimensional, multi-group, flux-limited diffusion (MGFLD) neutrino transport, including all relevant O(v/c) terms. Our main motivation for carrying out this study is to compare with recent 2D models produced by other groups who have obtained explosions for some progenitor stars and with recent 2D VULCAN results that did not incorporate O(v/c) terms. We follow the evolution of 12, 15, 20, and 25 solar-mass progenitors to approximately 600 ms after bounce and do not obtain an explosion in any of these models. Though the reason for the qualitative disagreement among the groups engaged in CCSN modeling remains unclear, we speculate that the simplifying "ray-by-ray" approach employed by all other groups may be compromising their results. We show that "ray-by-ray" calculations greatly exaggerate the angular and temporal variations of the neutrino fluxes, which we argue are better captured by our multi-dimensional MGFLD approach. On the other hand, our 2D models also make approximations, making it difficult to draw definitive conclusions concerning the root of the differences between groups. We discuss some of the diagnostics often employed in the analyses of CCSN simulations and highlight the intimate relationship between the various explosion conditions that have been proposed. Finally, we explore the ingredients that may be missing in current calculations that may be important in reproducing the properties of the average CCSNe, should the delayed neutrino-heating mechanism be the correct mechanism of explosion.
Scale analysis and the measurement of social attitudes
Allen L. Edwards; Franklin P. Kilpatrick
1948-01-01
This paper discusses and compares the methods of attitude scale construction of Thurstone (method of equal-appearing intervals), Likert (method of summated ratings), and Guttman (method of scale analysis), with special emphasis on the latter as one of the most recent and significant contributions to the field. Despite a certain lack of methodological precision, scale analysis provides a means of evaluating
Nelson, James
The goal of the Applied Multi-dimensional Fusion Project is to investigate the benefits of data fusion and related techniques: spectral synthetic data generation, super-resolution, joint fusion and blind image restoration, multi... Keywords: multidimensional fusion; video fusion; pixel level fusion; super resolution; normalised...
Dong, Guozhu
Mining Multi-Dimensional Constrained Gradients in Data Cubes. Guozhu Dong, Jiawei Han, Joyce Lam. ... associated with big changes in measure in a data cube. Cells are considered similar if they are related and describe sectors of the business modeled by the data cube. The problem of mining changes of sophisticated...
Fiat, Amos
... the surplus maximization mechanism is dominant strategy incentive compatible and maximizes the social surplus ... One of the most important example environments for multi-dimensional mechanism design ... The multi-dimensional-agent surplus maximization mechanism (Mechanism 3.1) generalizes this and is optimal. In this generalization, agents...
Dynamical scaling analysis of plant callus growth
J. Galeano; J. Buceta; K. Juarez; B. Pumariño; J. de la Torre; J. M. Iriondo
2003-01-01
We present experimental results for the dynamical scaling properties of the development of plant calli. We have assayed two different species of plant calli, Brassica oleracea and Brassica rapa, under different growth conditions, and show that their dynamical scalings share a universality class. From a theoretical point of view, we introduce a scaling hypothesis for systems whose size evolves in
NASA Astrophysics Data System (ADS)
Lau, Chun Sing
This thesis studies two types of problems in financial derivatives pricing. The first type is the free boundary problem, which can be formulated as a partial differential equation (PDE) subject to a set of free boundary conditions. Although the functional form of the free boundary condition is given explicitly, the location of the free boundary is unknown and can only be determined implicitly by imposing continuity conditions on the solution. Two specific problems are studied in detail, namely the valuation of fixed-rate mortgages and CEV American options. The second type is the multi-dimensional problem, which involves multiple correlated stochastic variables and their governing PDE. One typical problem we focus on is the valuation of basket-spread options, whose underlying asset prices are driven by correlated geometric Brownian motions (GBMs). Analytic approximate solutions are derived for each of these three problems. For each of the two free boundary problems, we propose a parametric moving boundary to approximate the unknown free boundary, so that the original problem transforms into a moving boundary problem which can be solved analytically. The governing parameter of the moving boundary is determined by imposing the first derivative continuity condition on the solution. The analytic form of the solution allows the price and the hedging parameters to be computed very efficiently. When compared against the benchmark finite-difference method, the computational time is significantly reduced without compromising the accuracy. The multi-stage scheme further allows the approximate results to systematically converge to the benchmark results as one recasts the moving boundary into a piecewise smooth continuous function. For the multi-dimensional problem, we generalize the Kirk (1995) approximate two-asset spread option formula to the case of multi-asset basket-spread option. 
Since the final formula is in closed form, all the hedging parameters can also be derived in closed form. Numerical examples demonstrate that the pricing and hedging errors are in general less than 1% relative to the benchmark prices obtained by numerical integration or Monte Carlo simulation. By exploiting an explicit relationship between the option price and the underlying probability distribution, we further derive an approximate distribution function for the general basket-spread variable. It can be used to approximate the transition probability distribution of any linear combination of correlated GBMs. Finally, an implicit perturbation is applied to reduce the pricing errors by factors of up to 100. When compared against the existing methods, the basket-spread option formula coupled with the implicit perturbation turns out to be one of the most robust and accurate approximation methods.
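For reference, the two-asset Kirk (1995) approximation that the thesis generalizes can be written in a few lines. This is the standard textbook form on forward prices (not the thesis's multi-asset extension), and the parameter values are illustrative only:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kirk_spread_call(F1, F2, K, sigma1, sigma2, rho, T, r):
    """Kirk (1995) approximation for a European call on the spread
    S1 - S2 with strike K: treat F2 + K as a lognormal asset and
    apply the Black formula with an effective volatility."""
    a = F2 / (F2 + K)
    sig = math.sqrt(sigma1**2 - 2.0 * rho * sigma1 * sigma2 * a
                    + (sigma2 * a)**2)
    d1 = (math.log(F1 / (F2 + K)) + 0.5 * sig**2 * T) / (sig * math.sqrt(T))
    d2 = d1 - sig * math.sqrt(T)
    return math.exp(-r * T) * (F1 * norm_cdf(d1) - (F2 + K) * norm_cdf(d2))

price = kirk_spread_call(F1=110.0, F2=100.0, K=5.0, sigma1=0.2,
                         sigma2=0.15, rho=0.5, T=1.0, r=0.05)
print(round(price, 4))  # exceeds the discounted intrinsic value e^{-rT}(F1 - F2 - K)
```

Because the formula is closed form, the hedging parameters follow by direct differentiation, which is the property the thesis exploits in its multi-asset generalization.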
Viola, Francesco; Coe, Ryan L.; Owen, Kevin; Guenther, Drake A.; Walker, William F.
2008-01-01
Image registration and motion estimation play central roles in many fields, including RADAR, SONAR, light microscopy, and medical imaging. Because of its central significance, estimator accuracy, precision, and computational cost are of critical importance. We have previously presented a highly accurate, spline-based time delay estimator that directly determines sub-sample time delay estimates from sampled data. The algorithm uses cubic splines to produce a continuous representation of a reference signal and then computes an analytical matching function between this reference and a delayed signal. The location of the minima of this function yields estimates of the time delay. In this paper we describe the MUlti-dimensional Spline-based Estimator (MUSE) that allows accurate and precise estimation of multidimensional displacements/strain components from multidimensional data sets. We describe the mathematical formulation for two- and three-dimensional motion/strain estimation and present simulation results to assess the intrinsic bias and standard deviation of this algorithm and compare it to currently available multi-dimensional estimators. In 1000 noise-free simulations of ultrasound data we found that 2D MUSE exhibits maximum bias of 2.6 × 10^-4 samples in range and 2.2 × 10^-3 samples in azimuth (corresponding to 4.8 and 297 nm, respectively). The maximum simulated standard deviation of estimates in both dimensions was comparable at roughly 2.8 × 10^-3 samples (corresponding to 54 nm axially and 378 nm laterally). These results are between two and three orders of magnitude better than currently used 2D tracking methods. Simulation of performance in 3D yielded similar results to those observed in 2D. We also present experimental results obtained using 2D MUSE on data acquired by an Ultrasonix Sonix RP imaging system with an L14-5/38 linear array transducer operating at 6.6 MHz. 
While our validation of the algorithm was performed using ultrasound data, MUSE is broadly applicable across imaging applications. PMID:18807190
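A much simpler relative of MUSE's spline matching function is sub-sample refinement of the cross-correlation peak. The sketch below uses a 3-point parabolic fit and is only an illustration of the sub-sample delay estimation problem, not the authors' spline estimator; the test signals are invented:

```python
import numpy as np

def subsample_delay(ref, sig):
    """Sub-sample time-delay estimate between two sampled signals:
    locate the cross-correlation peak, then refine it by fitting a
    parabola through the peak and its two neighbors."""
    cc = np.correlate(sig, ref, mode='full')
    k = int(np.argmax(cc))
    y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
    frac = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # parabola vertex offset
    return (k - (len(ref) - 1)) + frac

n = np.arange(256)
true_delay = 3.35
ref = np.sin(2 * np.pi * 0.05 * n)
sig = np.sin(2 * np.pi * 0.05 * (n - true_delay))
print(subsample_delay(ref, sig))  # close to 3.35
```

Spline-based estimators such as MUSE exist precisely because this parabolic shortcut carries an interpolation bias; fitting a continuous spline model of the signal reduces that bias by orders of magnitude.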
Minimum Sample Size Requirements for Mokken Scale Analysis
ERIC Educational Resources Information Center
Straat, J. Hendrik; van der Ark, L. Andries; Sijtsma, Klaas
2014-01-01
An automated item selection procedure in Mokken scale analysis partitions a set of items into one or more Mokken scales, if the data allow. Two algorithms are available that pursue the same goal of selecting Mokken scales of maximum length: Mokken's original automated item selection procedure (AISP) and a genetic algorithm (GA). Minimum…
Scale Analysis of Deep and Shallow Convection in the Atmosphere
Yoshimitsu Ogura; Norman A. Phillips
1962-01-01
The approximate equations of motion derived by Batchelor in 1953 are derived by a formal scale analysis, with the assumption that the percentage range in potential temperature is small and that the time scale is set by the Brunt-Väisälä frequency. Acoustic waves are then absent. If the vertical scale is small compared to the depth of an adiabatic atmosphere, the
Detection of crossover time scales in multifractal detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Ge, Erjia; Leung, Yee
2013-04-01
Fractal analysis is employed in this paper as a scale-based method for the identification of the scaling behavior of time series. Many spatial and temporal processes exhibiting complex multi(mono)-scaling behaviors are fractals. One of the important concepts in fractals is the crossover time scale(s) that separates distinct regimes having different fractal scaling behaviors. A common method is multifractal detrended fluctuation analysis (MF-DFA). The detection of crossover time scale(s) is, however, relatively subjective, since it has been made without rigorous statistical procedures and has generally been determined by eyeballing or subjective observation. Crossover time scales so determined may be spurious and problematic and may not reflect the genuine underlying scaling behavior of a time series. The purpose of this paper is to propose a statistical procedure to model complex fractal scaling behaviors and reliably identify the crossover time scales under MF-DFA. The scaling-identification regression model, grounded on a solid statistical foundation, is first proposed to describe multi-scaling behaviors of fractals. Through the regression analysis and statistical inference, we can (1) identify the crossover time scales that cannot be detected by eyeballing observation, (2) determine the number and locations of the genuine crossover time scales, (3) give confidence intervals for the crossover time scales, and (4) establish the statistically significant regression model depicting the underlying scaling behavior of a time series. To substantiate our argument, the regression model is applied to analyze the multi-scaling behaviors of avian-influenza outbreaks, water consumption, daily mean temperature, and rainfall of Hong Kong. Through the proposed model, we can have a deeper understanding of fractals in general and a statistical approach to identify multi-scaling behavior under MF-DFA in particular.
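Stripped of the inference machinery, the core of crossover identification is a breakpoint search over two-segment regressions in log-log coordinates: each candidate crossover splits the fluctuation function into two OLS fits, and the split with the smallest total squared error wins. This minimal sketch (synthetic data, not the authors' model, which additionally provides confidence intervals and significance tests) illustrates the idea:

```python
import numpy as np

def find_crossover(log_s, log_F):
    """Grid-search the breakpoint of a two-segment linear regression
    of log F(s) on log s; returns the log-scale of the crossover."""
    best_sse, best_i = np.inf, None
    for i in range(3, len(log_s) - 3):        # keep >= 3 points per segment
        sse = 0.0
        for xs, ys in ((log_s[:i], log_F[:i]), (log_s[i:], log_F[i:])):
            coef = np.polyfit(xs, ys, 1)
            sse += float(np.sum((np.polyval(coef, xs) - ys) ** 2))
        if sse < best_sse:
            best_sse, best_i = sse, i
    return log_s[best_i]

# Synthetic fluctuation function: scaling exponent 1.2 below the
# crossover at log s = 2.5, and 0.5 above it (continuous at the join).
log_s = np.linspace(1.0, 4.0, 31)
log_F = np.where(log_s < 2.5, 1.2 * log_s, 0.5 * log_s + 0.7 * 2.5)
print(find_crossover(log_s, log_F))  # ~ 2.5
```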
Three faces of self-face recognition: potential for a multi-dimensional diagnostic tool.
Sugiura, Motoaki
2015-01-01
The recognition of self-face is a unique and complex phenomenon in many aspects, including its associated perceptual integration process, its emergence during development, and its socio-motivational effect. This may explain the failure of classical attempts to identify the cortical areas specifically responsive to self-face and designate them as a unique system related to 'self'. Neuroimaging findings regarding self-face recognition seem to be explained comprehensively by a recent forward-model account of the three categories of self: the physical, interpersonal, and social selves. Self-face-specific activation in the sensory and motor association cortices may reflect cognitive scrutiny due to prediction error or task-induced top-down attention in the physical internal schema related to the self-face. Self-face-specific deactivation in some amodal association cortices in the dorsomedial frontal and lateral posterior cortices may reflect adaptive suppression of the default recruitment of the social-response system during face recognition. Self-face-specific activation under a social context in the ventral aspect of the medial prefrontal cortex and the posterior cingulate cortex may reflect cognitive scrutiny of the internal schema related to the social value of the self. The multi-facet nature of self-face-specific activation may hold potential as the basis for a multi-dimensional diagnostic tool for the cognitive system. PMID:25450313
Amado, Diana; Del Villar, Fernando; Leo, Francisco Miguel; Sánchez-Oliva, David; Sánchez-Miguel, Pedro Antonio; García-Calvo, Tomás
2014-01-01
This study aims to verify the effect on the motivation of physical education students of a multi-dimensional programme applied in dance teaching sessions. This programme incorporates the application of teaching skills directed towards supporting the needs of autonomy, competence and relatedness. A quasi-experimental design was carried out with two natural groups of 4th year Secondary Education students (control and experimental), delivering 12 dance teaching sessions. A prior training programme was carried out with the teacher in the experimental group to support these needs. An initial and final measurement was taken in both groups, and the results revealed that the students from the experimental group showed an increase in the perception of autonomy and, in general, in the level of self-determination towards the curricular content of corporal expression focused on dance in physical education. We therefore highlight the programme's usefulness in increasing the students' motivation towards this content, which is so complicated for teachers of this area to develop. PMID:24454831
New methodology for multi-dimensional spinal joint testing with a parallel robot.
Walker, Matthew R; Dickey, James P
2007-03-01
Six degree-of-freedom (6DOF) robots can be used to examine joints and their mechanical properties with the spatial freedom encountered physiologically. Parallel robots are capable of 6DOF motion under large payloads making them ideal for joint testing. This study developed and assessed novel methods for spinal joint testing with a custom-built parallel robot implementing hybrid load-position control. We hypothesized these methods would allow multi-dimensional control of joint loading scenarios, resulting in physiological joint motions. Tests were performed in 3DOF and 6DOF. 3DOF methods controlled the forces and the principal moment within +/-10 N and 0.25 N m under combined bending and compressive loads. 6DOF tests required larger tolerances for convergence due to machine compliance, however expected motion patterns were still observed. The unique mechanism and control approaches show promise for enabling complex three-dimensional loading patterns for in vitro joint biomechanics, and could facilitate research using specimens with unknown, changing, or nonlinear load-deformation properties. PMID:17235615
A scale analysis of the D region winter anomaly
D. Offermann; H. G. K. Brueckelmann; J. J. Barnett; K. Labitzke; K. M. Torkar; H. U. Widdel
1982-01-01
A scale analysis, i.e., an estimation of typical values of important atmospheric and ionospheric parameters and time scales is performed for the meteorological type of winter anomaly at medium latitudes (40 deg N). An interpretative model of the winter anomaly is developed by a correlation analysis on the basis of data obtained from the Western European Winter Anomaly Campaign 1975/76
Chan, Lawrence W.C.; Benzie, Iris F.F.; Lau, Thomas Y.H.; Zheng, Yongping; Wong, Alex K.S.; Liu, Y.; Chan, Phoebe S.T.
2008-01-01
Atherosclerosis results from inflammatory processes involving biomarkers, such as lipid profile, haemoglobin A1C, oxidative stress, coronary artery calcium score and flow-mediated endothelial response through nitric oxide. This paper proposes a health status coefficient, which integrates molecular and clinical measurements concerning atherosclerosis to provide a measure of arterial health. An arterial health status map is produced to map the multi-dimensional measurements to the health status coefficient. The mapping is modeled by a fuzzy system embedded with health domain expert knowledge. The measurements obtained from the pilot study are used to tune the fuzzy system. The inferred arterial health coefficients are stored in the data cubes of a multi-dimensional database. Due to the adaptability and transparency of the fuzzy system, the health status map can be easily updated when refinement of the fuzzy rule base is needed or new measurements are obtained. PMID:21347120
EL-Shamy, E. F., E-mail: emadel-shamy@hotmail.com [Department of Physics, Faculty of Science, Damietta University, New Damietta 34517, Egypt and Department of Physics, College of Science, King Khalid University, Abha P.O. 9004 (Saudi Arabia)
2014-08-15
The solitary structures of multi-dimensional ion-acoustic solitary waves (IASWs) in magnetoplasmas consisting of electrons, positrons, and ions, with high-energy (superthermal) electrons and positrons, are investigated. Using a reductive perturbation method, a nonlinear Zakharov-Kuznetsov equation is derived. The multi-dimensional instability of obliquely propagating (with respect to the external magnetic field) IASWs has been studied by the small-k (long-wavelength plane wave) expansion perturbation method. The instability condition and the growth rate of the instability have been derived. It is shown that the instability criterion and the growth rate depend on the parameter measuring the superthermality, the ion gyrofrequency, the unperturbed positron-to-ion density ratio, the direction cosine, and the ion-to-electron temperature ratio. The study of this model is helpful for explaining the propagation and the instability of IASWs in space observations of magnetoplasmas with superthermal electrons and positrons.
Changhong Bai; Fujun Lai; Ye Chen; Joe Hutchinson
2008-01-01
Based on the data relevant to four public utility services (water, natural gas, electricity and thermoelectricity) collected by a personally administered on-site survey, the authors develop a model to assess the perceived service quality of public utility services. In the model, the perceived service quality of public utility services has a multi-level, multi-dimensional structure with three primary dimensions: outcome, environment
Yakovlev, Sergey V
2011-01-01
We investigate an anisotropic metric of higher-dimensional space-time with only a cosmological term and a scalar field. We show that the presence of a scalar field is equivalent to an anisotropic metric in multi-dimensional space-time, and propose the idea of dimension generation by a scalar field. Einstein's equations are solved for a higher-dimensional space-time of Kasner type, expressions are derived for the energy density of the scalar field that generates the additional dimensions, and a procedure for renormalization of the metric is proposed.
Enrico Gerlach; Siegfried Eggl; Charalampos Skokos
2011-01-01
We study the problem of efficient integration of variational equations in multi-dimensional Hamiltonian systems. For this purpose, we consider a Runge-Kutta-type integrator, a Taylor series expansion method and the so-called `Tangent Map' (TM) technique based on symplectic integration schemes, and apply them to the Fermi-Pasta-Ulam $\beta$ (FPU-$\beta$) lattice of $N$ nonlinearly coupled oscillators, with $N$ ranging from 4 to 20.
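The 'Tangent Map' idea, evolving a deviation vector with the exact linearization of each symplectic integration step, can be sketched on a far simpler Hamiltonian than the FPU-β lattice. The toy below (pendulum, symplectic Euler) is an illustrative sketch, not the authors' integrator; since the pendulum is integrable, the estimated maximal Lyapunov characteristic exponent (mLCE) should decay toward zero.

```python
import math

def tangent_map_mlce(q, p, dq, dp, h=0.01, steps=200_000):
    """Integrate the pendulum H = p**2/2 - cos(q) with symplectic Euler
    and propagate the deviation vector (dq, dp) with the tangent map of
    the same step; renormalization gives a running mLCE estimate."""
    s = 0.0  # accumulated log-growth of the deviation vector
    for _ in range(steps):
        p -= h * math.sin(q)            # momentum kick
        dp -= h * math.cos(q) * dq      # tangent of the kick
        q += h * p                      # position drift
        dq += h * dp                    # tangent of the drift
        norm = math.hypot(dq, dp)
        s += math.log(norm)
        dq /= norm                      # renormalize so the deviation
        dp /= norm                      # vector never over/underflows
    return s / (steps * h)

# librating orbit of the integrable pendulum: estimate tends to 0
print(tangent_map_mlce(1.0, 0.0, 1.0, 0.0))
```

For a chaotic orbit the same estimator converges to a positive value; the point of the TM technique is that the tangent dynamics inherits the stability of the symplectic scheme, which matters for the long integrations needed in lattices like FPU-β.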
William G. Fateley; Radoslaw Sobczynski; Joseph V. Paukstelis; A. Norman Mortensen; Edward A. Orr; Robert M. Hammaker
1995-01-01
Our current interests in Hadamard transform spectrometry include multi-dimensional spectrometry, the operation of an acousto-optic tunable filter (AOTF) in a simultaneous multi-wavelength mode, and development of a high throughput near-infrared spectrometer using a multi-element light source. The three spatial dimensions are accessed via a two-dimensional Hadamard encoding mask for the two surface dimensions (x and y) and a photoacoustic detection
Zheng, Junnian
2012-07-16
THE POTENTIAL OF USING NATURAL GAS IN HCCI ENGINES: RESULTS FROM ZERO- AND MULTI-DIMENSIONAL SIMULATIONS. A Dissertation by Junnian Zheng, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of DOCTOR OF PHILOSOPHY. Approved by: Chair of Committee, Jerald A. Caton; Committee Members, Timothy J. Jacobs, Eric L...
Kim, K. S.; Boyer, L. L.; Degelman, L. O.
1985-01-01
DAYLIGHTING ANALYSIS THROUGH SCALE MODEL, FULL SCALE MEASUREMENTS AND COMPUTER ANALYSIS FOR A TEXAS A&M UNIVERSITY CAMPUS BUILDING. Kang-Soo Kim, Doctoral Student of Architecture, Texas A&M University, College Station, Texas; Dr. Lester L...
Convective scale weather analysis and forecasting
NASA Technical Reports Server (NTRS)
Purdom, J. F. W.
1984-01-01
How satellite data can be used to improve insight into the mesoscale behavior of the atmosphere is demonstrated with emphasis on the GOES-VAS sounding and image data. This geostationary satellite has the unique ability to observe frequently the atmosphere (sounders) and its cloud cover (visible and infrared) from the synoptic scale down to the cloud scale. These uniformly calibrated data sets can be combined with conventional data to reveal many of the features important in mesoscale weather development and evolution.
Scale-Specific Multifractal Medical Image Analysis
Braverman, Boris
2013-01-01
Fractal geometry has been applied widely in the analysis of medical images to characterize the irregular complex tissue structures that do not lend themselves to straightforward analysis with traditional Euclidean geometry. ...
Ono, Junichi; Ando, Koji
2012-11-01
A semiquantal (SQ) molecular dynamics (MD) simulation method based on an extended Hamiltonian formulation has been developed using multi-dimensional thawed Gaussian wave packets (WPs), and applied to an analysis of hydrogen-bond (H-bond) dynamics in liquid water. A set of Hamilton's equations of motion in an extended phase space, which includes variance-covariance matrix elements as auxiliary coordinates representing anisotropic delocalization of the WPs, is derived from the time-dependent variational principle. The present theory allows us to perform real-time and real-space SQMD simulations and analyze nuclear quantum effects on dynamics in large molecular systems in terms of anisotropic fluctuations of the WPs. Introducing the Liouville operator formalism in the extended phase space, we have also developed an explicit symplectic algorithm for the numerical integration, which can provide greater stability in the long-time SQMD simulations. The application of the present theory to H-bond dynamics in liquid water is carried out under a single-particle approximation in which the variance-covariance matrix and the corresponding canonically conjugate matrix are reduced to block-diagonal structures by neglecting the interparticle correlations. As a result, it is found that the anisotropy of the WPs is indispensable for reproducing the disordered H-bond network compared to the classical counterpart with the use of the potential model providing competing quantum effects between intra- and intermolecular zero-point fluctuations. In addition, the significant WP delocalization along the out-of-plane direction of the jumping hydrogen atom associated with the concerted breaking and forming of H-bonds has been detected in the H-bond exchange mechanism. The relevance of the dynamical WP broadening to the relaxation of H-bond number fluctuations has also been discussed. 
The present SQ method provides the novel framework for investigating nuclear quantum dynamics in the many-body molecular systems in which the local anisotropic fluctuations of nuclear WPs play an essential role. PMID:23145735
Overview of NASA Multi-dimensional Stirling Convertor Code Development and Validation Effort
NASA Technical Reports Server (NTRS)
Tew, Roy C.; Cairelli, James E.; Ibrahim, Mounir B.; Simon, Terrence W.; Gedeon, David
2002-01-01
A NASA grant has been awarded to Cleveland State University (CSU) to develop a multi-dimensional (multi-D) Stirling computer code with the goals of improving loss predictions and identifying component areas for improvements. The University of Minnesota (UMN) and Gedeon Associates are teamed with CSU. Development of test rigs at UMN and CSU and validation of the code against test data are part of the effort. The one-dimensional (1-D) Stirling codes used for design and performance prediction do not rigorously model regions of the working space where abrupt changes in flow area occur (such as manifolds and other transitions between components). Certain hardware experiences have demonstrated large performance gains by varying manifolds and heat exchanger designs to improve flow distributions in the heat exchangers. 1-D codes were not able to predict these performance gains. An accurate multi-D code should improve understanding of the effects of area changes along the main flow axis, sensitivity of performance to slight changes in internal geometry, and, in general, the understanding of various internal thermodynamic losses. The commercial CFD-ACE code has been chosen for development of the multi-D code. This 2-D/3-D code has highly developed pre- and post-processors, and moving boundary capability. Preliminary attempts at validation of CFD-ACE models of MIT gas spring and "two space" test rigs were encouraging. Also, CSU's simulations of the UMN oscillating-flow rig compare well with flow visualization results from UMN. A complementary Department of Energy (DOE) Regenerator Research effort is aiding in development of regenerator matrix models that will be used in the multi-D Stirling code. This paper reports on the progress and challenges of this effort.
Mihaljević, Bojan; Bielza, Concha; Benavides-Piccione, Ruth; DeFelipe, Javier; Larrañaga, Pedro
2014-01-01
Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists' classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels. Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features. 
PMID:25505405
MULTI-DIMENSIONAL FEATURES OF NEUTRINO TRANSFER IN CORE-COLLAPSE SUPERNOVAE
Sumiyoshi, K. [Numazu College of Technology, Ooka 3600, Numazu, Shizuoka 410-8501 (Japan); Takiwaki, T. [National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588 (Japan); Matsufuru, H. [Computing Research Center, High Energy Accelerator Research Organization 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Yamada, S., E-mail: sumi@numazu-ct.ac.jp, E-mail: takiwaki.tomoya@nao.ac.jp, E-mail: hideo.matsufuru@kek.jp, E-mail: shoichi@heap.phys.waseda.ac.jp [Science and Engineering and Advanced Research Institute for Science and Engineering, Waseda University, Okubo, 3-4-1, Shinjuku, Tokyo 169-8555 (Japan)
2015-01-01
We study the multi-dimensional properties of neutrino transfer inside supernova cores by solving the Boltzmann equations for neutrino distribution functions in genuinely six-dimensional phase space. Adopting representative snapshots of the post-bounce core from other supernova simulations in three dimensions, we solve the temporal evolution to stationary states of neutrino distribution functions using our Boltzmann solver. Taking advantage of the multi-angle and multi-energy feature realized by the S_n method in our code, we reveal the genuine characteristics of spatially three-dimensional neutrino transfer, such as nonradial fluxes and nondiagonal Eddington tensors. In addition, we assess the ray-by-ray approximation, turning off the lateral-transport terms in our code. We demonstrate that the ray-by-ray approximation tends to propagate fluctuations in thermodynamical states around the neutrino sphere along each radial ray and overestimate the variations between the neutrino distributions on different radial rays. We find that the difference in the densities and fluxes of neutrinos between the ray-by-ray approximation and the full Boltzmann transport becomes ~20%, which is also the case for the local heating rate, whereas the volume-integrated heating rate in the Boltzmann transport is found to be only slightly larger (~2%) than the counterpart in the ray-by-ray approximation due to cancellation among different rays. These results suggest that we should carefully assess the possible influences of various approximations in the neutrino transfer employed in current simulations of supernova dynamics. Detailed information on the angle and energy moments of neutrino distribution functions will be profitable for the future development of numerical methods in neutrino-radiation hydrodynamics.
Local variance for multi-scale analysis in geomorphometry
Drăguţ, Lucian; Eisank, Clemens; Strasser, Thomas
2011-01-01
Increasing availability of high resolution Digital Elevation Models (DEMs) is leading to a paradigm shift regarding scale issues in geomorphometry, prompting new solutions to cope with multi-scale analysis and detection of characteristic scales. We tested the suitability of the local variance (LV) method, originally developed for image analysis, for multi-scale analysis in geomorphometry. The method consists of: 1) up-scaling land-surface parameters derived from a DEM; 2) calculating LV as the average standard deviation (SD) within a 3 × 3 moving window for each scale level; 3) calculating the rate of change of LV (ROC-LV) from one level to another, and 4) plotting values so obtained against scale levels. We interpreted peaks in the ROC-LV graphs as markers of scale levels where cells or segments match types of pattern elements characterized by (relatively) equal degrees of homogeneity. The proposed method has been applied to LiDAR DEMs in two test areas different in terms of roughness: low relief and mountainous, respectively. For each test area, scale levels for slope gradient, plan, and profile curvatures were produced at constant increments with either resampling (cell-based) or image segmentation (object-based). Visual assessment revealed homogeneous areas that convincingly associate into patterns of land-surface parameters well differentiated across scales. We found that the LV method performed better on scale levels generated through segmentation as compared to up-scaling through resampling. The results indicate that coupling multi-scale pattern analysis with delineation of morphometric primitives is possible. This approach could be further used for developing hierarchical classifications of landform elements. PMID:21779138
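Steps 1-3 of the LV method above can be sketched in a few lines. The toy random grid and the crude subsampling used for up-scaling are illustrative stand-ins for a real land-surface parameter layer and for proper resampling or segmentation; only the LV and ROC-LV computations follow the description literally.

```python
import numpy as np

def local_variance(surface, window=3):
    """LV statistic: average standard deviation within a moving window."""
    h = window // 2
    rows, cols = surface.shape
    sds = [
        surface[r - h:r + h + 1, c - h:c + h + 1].std()
        for r in range(h, rows - h)
        for c in range(h, cols - h)
    ]
    return float(np.mean(sds))

def roc_lv(lv_values):
    """Rate of change of LV from one scale level to the next, in percent."""
    lv = np.asarray(lv_values, dtype=float)
    return 100.0 * (lv[1:] - lv[:-1]) / lv[:-1]

# toy multi-scale pyramid: coarser levels by subsampling a random grid
rng = np.random.default_rng(0)
dem = rng.random((64, 64))
levels = [dem[::f, ::f] for f in (1, 2, 4, 8)]
lvs = [local_variance(g) for g in levels]
print(roc_lv(lvs))   # peaks in this curve mark characteristic scales
```

Plotting `roc_lv(lvs)` against scale level and reading off its peaks is step 4 of the method.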
Multi-dimensional analysis of the chemical and physical properties of spiral galaxies
Rosales Ortega, Fernando Fabián
2010-02-09
from high-latitude aircraft or space telescopes. The source of energy that enables normal emission nebulae to radiate is ultraviolet radiation from stars within or near the nebula. There should be one or more stars with effective surface temperature... ; however, the main excitation process responsible for the observed strengths of such lines with the same spin or multiplicity as the ground term is resonance fluorescence by photons, which is much less effective for H and He lines because the resonance...
A multi-dimensional perspective on social capital and economic development: an exploratory analysis
Soogwan Doh; Connie L. McNeely
While various types of capital have been identified and studied as drivers of economic development, recent arguments surrounding social capital in particular have led to calls for greater attention to its role in economic development within a country. Thus, we examine social capital relative to its impact on economic development at the country level. We employ a generalized social capital
Computation of a (min,+) multi-dimensional convolution for end-to-end performance analysis
Anne Bouillard; Laurent Jouhet; Eric Thierry
Network Calculus is an attractive theory to derive deterministic bounds on end-to-end performance measures. Nevertheless, bounding tightly and quickly the worst-case delay or backlog of a flow over a path with cross-traffic remains a challenging problem. This paper carries on with the study of configurations where a main flow encounters some cross-traffic flows which inter-
Correlation network analysis for multi-dimensional data in stocks market
NASA Astrophysics Data System (ADS)
Kazemilari, Mansooreh; Djauhari, Maman Abdurachman
2015-07-01
This paper shows how the concept of vector correlation can appropriately measure the similarity among multivariate time series in stocks network. The motivation of this paper is (i) to apply the RV coefficient to define the network among stocks where each of them is represented by a multivariate time series; (ii) to analyze that network in terms of topological structure of the stocks of all minimum spanning trees, and (iii) to compare the network topology between univariate correlation based on r and multivariate correlation network based on RV coefficient.
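A minimal sketch of the RV coefficient between two multivariate series, the pairwise similarity on which the network above is built. The random matrices stand in for the stocks' multivariate records (e.g. several price/volume series per stock); the MST construction over the resulting similarity matrix is not shown.

```python
import numpy as np

def rv_coefficient(X, Y):
    """RV coefficient between two data matrices with the same number of
    rows (time points); a multivariate analogue of the squared Pearson r,
    ranging from 0 (no relation) to 1."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    sxy = Xc.T @ Yc          # cross-product matrix
    sxx = Xc.T @ Xc
    syy = Yc.T @ Yc
    return np.trace(sxy @ sxy.T) / np.sqrt(np.trace(sxx @ sxx) * np.trace(syy @ syy))

rng = np.random.default_rng(1)
A = rng.normal(size=(250, 3))          # 250 time points, 3 variables
print(rv_coefficient(A, A))            # identical series -> 1.0
print(rv_coefficient(A, 2.0 * A + 5))  # invariant to overall scaling/shift
```

In a correlation-network setting one would convert RV similarities to distances (for instance d = sqrt(2(1 - RV)), mirroring the univariate case) before extracting the minimum spanning tree.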
Multi-dimensional, multiphase flow analysis of flamespreading in a stick propellant charge
NASA Astrophysics Data System (ADS)
Horst, A. W.; Robbins, F. W.; Gough, P. S.
1983-10-01
The interior ballistic performance of propelling charges employing stick propellant often cannot be simulated using either lumped-parameter or two-phase flow models. Much of this disparity is usually attributed to enhanced burning within the long perforations, perhaps accompanied by splitting or fracture of the stick to yield additional burning surface. Unusually low (or even reversed) sensitivity of performance to propellant conditioning temperature has also been noted, a factor that, if controllable, may have significant impact on the acceptability of new stick propellant charges. Moreover, the mechanisms responsible for all the above behavior may well be exploitable as high-progressivity, high-density (HPD) propelling charge concepts. A state-of-the-art version of TDNOVA, a two-dimensional, two-phase flow interior ballistic code, is employed to probe the ignition and flamespreading processes in stick propellant charges. Calculations of flame propagation on exterior and interior surfaces, as well as pressurization profiles both within the perforations and in the interstices, are described for typical and simplified stick charge configurations. Reconciliation of predicted behavior with experimental observation is discussed, and further specific studies using TDNOVA are identified in order to verify a postulated explanation for ballistic data exhibiting an anomalous temperature sensitivity.
David L. Meier
1998-11-10
A new field of numerical astrophysics is introduced which addresses the solution of large, multidimensional structural or slowly-evolving problems (rotating stars, interacting binaries, thick advective accretion disks, four dimensional spacetimes, etc.). The technique employed is the Finite Element Method (FEM), commonly used to solve engineering structural problems. The approach developed herein has the following key features: 1. The computational mesh can extend into the time dimension, as well as space, perhaps only a few cells, or throughout spacetime. 2. Virtually all equations describing the astrophysics of continuous media, including the field equations, can be written in a compact form similar to that routinely solved by most engineering finite element codes. 3. The transformations that occur naturally in the four-dimensional FEM possess both coordinate and boost features, such that (a) although the computational mesh may have a complex, non-analytic, curvilinear structure, the physical equations still can be written in a simple coordinate system independent of the mesh geometry. (b) if the mesh has a complex flow velocity with respect to coordinate space, the transformations will form the proper arbitrary Lagrangian-Eulerian advective derivatives automatically. 4. The complex difference equations on the arbitrary curvilinear grid are generated automatically from encoded differential equations. This first paper concentrates on developing a robust and widely-applicable set of techniques using the nonlinear FEM and presents some examples.
Multi-dimensional analysis of HDL: an approach to understanding atherogenic HDL
Johnson, Jr., Jeffery Devoyne
2009-05-15
the early onset of coronary artery disease (CAD). The research presented here focuses on the pairing of DGU with post-separatory techniques including matrix-assisted laser desorption mass spectrometry (MALDI-MS), liquid chromatography mass spectrometry (LC-MS...
Stochastic analysis on large scale interacting systems
Tadahisa Funaki
The evolution laws in physical phenomena like the dynamics of fluids are, in general, described by nonlinear partial differential equations. Behind such physical phenomena, a microscopic world composed of atoms or molecules exists. It is a system with an enormous degree of freedom, and evolves in time making very complex interactions among them. We call it a large scale
Analysis of Reynolds number scaling for viscous vortex reconnection
NASA Astrophysics Data System (ADS)
Ni, Qionglin; Hussain, Fazle; Wang, Jianchun; Chen, Shiyi
2011-11-01
A theoretical analysis of viscous vortex reconnection based on scale separation is developed, in which the Reynolds number (Re = circulation/viscosity) scaling for the reconnection time T_rec is derived. The scaling varies from T_rec ~ Re^{-1} to T_rec ~ Re^{-0.5}, and the direct numerical simulation (DNS) data from Garten [Garten et al., J. Fluid Mech. 426, 1 (2001), cited hereinafter as GW] and Hussain [Hussain et al., Phys. Fluids 23, 021701 (2011), cited hereinafter as HD] collapse well within the range of the asymptotic scalings. Moreover, our analysis predicts two Reynolds numbers: a characteristic Re_theory in [O(10^2), O(10^3)] for the T_rec ~ Re^{-0.75} scaling given by HD, and the critical Reynolds number Re_c ~ O(10^4) beyond which large-scale vortex reconnection no longer occurs.
Genetic Analysis of Invasive Plant Populations at Different Spatial Scales
Sarah Ward
2006-01-01
Measuring genetic diversity requires selection of a spatial scale of analysis. Different levels of genetic structuring are revealed at different spatial scales, however, and the relative importance of factors driving genetic structuring varies along the spatial scale continuum. Unequal gene flow is a major factor determining genetic structure in plant populations at the local level, while the effect of selection
The Attitudes to Ageing Questionnaire: Mokken Scaling Analysis
Shenkin, Susan D.; Watson, Roger; Laidlaw, Ken; Starr, John M.; Deary, Ian J.
2014-01-01
Background Hierarchical scales are useful in understanding the structure of underlying latent traits in many questionnaires. The Attitudes to Ageing Questionnaire (AAQ) explored the attitudes to ageing of older people themselves, and originally described three distinct subscales: (1) Psychosocial Loss (2) Physical Change and (3) Psychological Growth. This study aimed to use Mokken analysis, a method of Item Response Theory, to test for hierarchies within the AAQ and to explore how these relate to underlying latent traits. Methods Participants in a longitudinal cohort study, the Lothian Birth Cohort 1936, completed a cross-sectional postal survey. Data from 802 participants were analysed using Mokken Scaling analysis. These results were compared with factor analysis using exploratory structural equation modelling. Results Participants were 51.6% male, mean age 74.0 years (SD 0.28). Three scales were identified from 18 of the 24 items: two weak Mokken scales and one moderate Mokken scale. (1) ‘Vitality’ contained a combination of items from all three previously determined factors of the AAQ, with a hierarchy from physical to psychosocial; (2) ‘Legacy’ contained items exclusively from the Psychological Growth scale, with a hierarchy from individual contributions to passing things on; (3) ‘Exclusion’ contained items from the Psychosocial Loss scale, with a hierarchy from general to specific instances. All of the scales were reliable and statistically significant with ‘Legacy’ showing invariant item ordering. The scales correlate as expected with personality, anxiety and depression. Exploratory SEM mostly confirmed the original factor structure. Conclusions The concurrent use of factor analysis and Mokken scaling provides additional information about the AAQ. The previously-described factor structure is mostly confirmed. 
Mokken scaling identifies a new factor relating to vitality, and a hierarchy of responses within three separate scales, referring to vitality, legacy and exclusion. This shows what older people themselves consider important regarding their own ageing. PMID:24892302
The Fermi-Ulam Accelerator Model Under Scaling Analysis
Edson D. Leonel; P. V. E. McClintock; J. Kamphorst Leal da Silva
2004-06-24
The chaotic low energy region of the Fermi-Ulam simplified accelerator model is characterised by use of scaling analysis. It is shown that the average velocity and the roughness (variance of the average velocity) obey scaling functions with the same characteristic exponents. The formalism is widely applicable, including to billiards and to other chaotic systems.
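The simplified Fermi-Ulam accelerator model is usually written as the static-wall map v' = |v + ε sin φ|, φ' = φ + 2/v' (mod 2π). The sketch below measures the ensemble-averaged velocity in the chaotic low-energy region; the parameter values, ensemble sizes, and the comparison of two ε values are illustrative, not the paper's actual scaling study.

```python
import math
import random

def average_velocity(eps, v0=1e-4, n_iter=10_000, n_ensemble=100, seed=2):
    """Ensemble- and time-averaged velocity of the simplified Fermi-Ulam
    map v' = |v + eps*sin(phi)|, phi' = phi + 2/v' (mod 2*pi), starting
    deep in the chaotic low-energy region (v0 << sqrt(eps))."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_ensemble):
        v, phi = v0, rng.uniform(0.0, 2.0 * math.pi)
        acc = 0.0
        for _ in range(n_iter):
            v = abs(v + eps * math.sin(phi))
            if v == 0.0:          # guard against an (improbable) exact zero
                v = 1e-12
            phi = (phi + 2.0 / v) % (2.0 * math.pi)
            acc += v
        total += acc / n_iter
    return total / n_ensemble

# larger wall-oscillation amplitude eps -> larger saturated velocities
print(average_velocity(1e-3), average_velocity(4e-3))
```

Repeating this over a range of ε and iteration counts, and doing the same for the variance of the average velocity (the roughness), is how the scaling exponents described in the abstract would be extracted.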
Analysis of Small-Scale Hydraulic Systems
Xia, Jicheng; Durfee, William K.
Department of Mechanical Engineering, MN, 55455. Email: wkdurfee@umn.edu. We investigated small-scale hydraulic power systems using a system-level model to determine whether the high power density advantage of hydraulic power systems holds at small sizes. Hydraulic
Functional Analysis of Large-scale DNA Strand Displacement Circuits
Hamadi, Yousseff
Boyan Yordanov, Christoph M. We verify properties of large-scale DNA strand displacement (DSD) circuits based on Satisfiability Modulo Theories, which enables us to prove the functional correctness of DNA circuit designs for arbitrary inputs
A multiple-scale analysis of grating-assisted couplers
Brent E. Little; Weiping P. Huang; S. K. Chaudhari
1991-01-01
Optical directional couplers with longitudinal periodic perturbations or gratings are analyzed by a multiple-scale solution to the coupled-mode equations. The use of two length scales in the analysis becomes the key to obtaining globally valid analytic solutions, which are shown to be in excellent agreement with the exact numerical solutions. Because of the nonorthogonality of the coupled modes in the
Analysis of universities' scale economies based on DEA
Zenglian Zhang; Min Cao
2011-01-01
Universities constitute an input-output system whose size is an important factor in the efficiency of educational investment. Using the 2006 financial report data of 76 institutions directly under the Ministry of Education, we apply the CCR, BCC and NIRS models of DEA in an empirical analysis, and find that many universities exhibit scale diseconomies, although the scale economies of different kinds of universities differ.
Mokken Scale Analysis for Dichotomous Items Using Marginal Models
ERIC Educational Resources Information Center
van der Ark, L. Andries; Croon, Marcel A.; Sijtsma, Klaas
2008-01-01
Scalability coefficients play an important role in Mokken scale analysis. For a set of items, scalability coefficients have been defined for each pair of items, for each individual item, and for the entire scale. Hypothesis testing with respect to these scalability coefficients has not been fully developed. This study introduces marginal modelling…
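For a single dichotomous item pair, the scalability coefficient referred to above is Loevinger's H: one minus the ratio of observed Guttman errors (failing the easier item while passing the harder one) to the count expected under marginal independence. The toy response patterns below are illustrative; this sketch covers only the item-pair coefficient, not the item-level or scale-level versions.

```python
import numpy as np

def h_pair(x, y):
    """Loevinger's H for a dichotomous item pair (arrays of 0/1 scores).
    The more popular item is treated as the 'easier' one. Assumes both
    items have nonzero variance so the expected error count is positive."""
    x, y = np.asarray(x), np.asarray(y)
    if x.mean() < y.mean():                 # make x the easier item
        x, y = y, x
    n = len(x)
    observed = np.sum((x == 0) & (y == 1))  # Guttman errors: fail easy, pass hard
    expected = n * (1 - x.mean()) * y.mean()  # errors expected under independence
    return 1.0 - observed / expected

# a perfect Guttman pattern has no errors, so H = 1
easy = np.array([1, 1, 1, 1, 0, 0])
hard = np.array([1, 1, 0, 0, 0, 0])
print(h_pair(easy, hard))   # -> 1.0
```

Hypothesis tests on such coefficients, the subject of the abstract above, ask for instance whether H exceeds some lower bound (commonly 0.3) in the population.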
Multidimensional data scaling - dynamical cascade approach
Milan Jovovic; Geoffrey Fox
In this report a multi-dimensional data scaling approach is proposed for data mining and knowledge discovery applications. We derive the method based on an analogy to the physical computation of signal distortion. Dynamical cascade computation diagrams result from the statistical physics model computation in the free energy decomposition. We assess the scale invariance of various data sets, such as
SCALE ANALYSIS OF CONVECTIVE MELTING WITH INTERNAL HEAT GENERATION
John Crepeau
2011-03-01
Using a scale analysis approach, we model phase change (melting) for pure materials which generate internal heat for small Stefan numbers (approximately one). The analysis considers conduction in the solid phase and natural convection, driven by internal heat generation, in the liquid regime. The model is applied for a constant surface temperature boundary condition where the melting temperature is greater than the surface temperature in a cylindrical geometry. We show the time scales in which conduction and convection heat transfer dominate.
Large-scale latent semantic analysis
Andrew McGregor Olney
2011-01-01
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has been widely used for making semantic similarity judgments between words, sentences, and documents. In order to perform an LSA analysis, an LSA space is created in a two-stage procedure, involving the construction of a word frequency matrix and the dimensionality reduction of that matrix through singular
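The two-stage procedure (word-frequency matrix construction, then dimensionality reduction through singular value decomposition) can be sketched in a few lines. The four toy "documents" and the choice of k = 2 dimensions are illustrative; real LSA spaces use large corpora, weighting schemes, and a few hundred dimensions.

```python
import numpy as np

docs = [
    "judge word meaning by company",
    "word meaning from word company",
    "stars orbit the galaxy",
    "the galaxy contains many stars",
]
vocab = sorted({w for d in docs for w in d.split()})

# stage 1: word-by-document frequency matrix
A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

# stage 2: dimensionality reduction via truncated SVD
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the k-dim LSA space

def cos(a, b):
    """Cosine similarity, the usual LSA similarity judgment."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cos(doc_vecs[0], doc_vecs[1]))  # same topic: high similarity
print(cos(doc_vecs[0], doc_vecs[2]))  # different topic: near zero
```

Words get vectors the same way from `U @ np.diag(s[:k])`, so word-word, word-document, and document-document similarities all live in the same reduced space.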
Metal analysis of scales taken from Arctic grayling.
Farrell, A P; Hodaly, A H; Wang, S
2000-11-01
This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca, Ni, Zn, As, Se, Cd, Sb, Hg, and Pb) were measured in 10 to 16 ablation sites (5 microm radius) on each scale sample from Arctic grayling (Thymallus arcticus) (n = 10 fish). Ca, Mg, and Zn were at physiological levels in all scale samples. Se, Hg, and As were also detected in all scale samples. Only Cd was below detection limits of the LA-ICPMS for all samples, but some of the samples were below detection limits for Sb, Pb, and Ni. The mean scale concentrations for Se, Hg, and Pb were not significantly different from the muscle concentrations and individual fish values were within fourfold of each other. Cd was not detected in either muscle or scale tissue, whereas Sb was detected at low levels in some scale samples but not in any of the muscle samples. Similarly, As was detected in all scale samples but not in muscle, and Ni was detected in almost all scale samples but only in one of the muscle samples. Therefore, there were good qualitative and quantitative agreements between the metal concentrations in scale and muscle tissues, with LA-ICPMS analysis of scales appearing to be a more sensitive method of detecting the body burden of Ni and As when compared with muscle tissue. Correlation analyses, performed for Pb, Hg, and Se concentrations, revealed that the scale concentrations for these three metals generally exceeded those of the muscle at low muscle concentrations. The LA-ICPMS analysis of scales had the capability to resolve significant spatial differences in metal concentrations within a fish scale. 
We conclude that metal analysis of fish scales using LA-ICPMS shows considerable promise as a nonlethal analytical tool for assessing metal body burden in fish, one that could possibly generate a historical record of metal exposure. However, comprehensive validation experiments are still needed. PMID:11031313
Multi-dimensional forward modeling of frequency-domain helicopter-borne electromagnetic data
NASA Astrophysics Data System (ADS)
Miensopust, M.; Siemon, B.; Börner, R.; Ansari, S.
2013-12-01
Helicopter-borne frequency-domain electromagnetic (HEM) surveys are used for fast, high-resolution, three-dimensional (3-D) resistivity mapping. Nevertheless, 3-D modeling and inversion of an entire HEM data set is in many cases impractical, and interpretation is therefore commonly based on one-dimensional (1-D) modeling and inversion tools. Such an approach is valid for environments with horizontally layered targets and for groundwater applications, but there are areas of higher dimensionality that are not recovered correctly by 1-D methods. The focus of this work is multi-dimensional forward modeling. As there is no analytic solution to verify (or falsify) the obtained numerical solutions, comparison with 1-D values, as well as among various two-dimensional (2-D) and 3-D codes, is essential. At the center of a large structure (a few hundred meters edge length), and above the background structure at some distance from the anomaly, 2-D and 3-D values should match the 1-D solution. Higher-dimensional conditions are present at the edges of the anomaly, and therefore only a comparison of different 2-D and 3-D codes gives an indication of the reliability of the solution. The more codes agree, especially codes based on different methods and/or written by different programmers, the more reliable the obtained synthetic data set. Very simple structures, such as a conductive or resistive block embedded in a homogeneous or layered half-space without any topography and using a constant sensor height, were chosen to calculate synthetic data. For the comparison, one finite element 2-D code and numerous 3-D codes based on finite difference, finite element, and integral equation approaches were applied. Preliminary results of the comparison will be shown and discussed. Additionally, challenges that arose from this comparative study will be addressed, and further steps toward more realistic field data settings for forward modeling will be discussed. 
As the driving engine of an inversion algorithm is its forward solver, applying inversion codes to HEM data is only sensible once the forward modeling results are reliable (and their limits and weaknesses are known and manageable).
Scaling analysis of multi-variate intermittent time series
NASA Astrophysics Data System (ADS)
Kitt, Robert; Kalda, Jaan
2005-08-01
The scaling properties of the time series of asset prices and trading volumes of stock markets are analysed. It is shown that, similar to the asset prices, the trading volume data obey a multi-scaling length distribution of low-variability periods. In the case of asset prices, such scaling behaviour can be used for risk forecasts: the probability of observing a large price movement the next day is (super-universally) inversely proportional to the length of the ongoing low-variability period. Finally, a method is devised for a multi-factor scaling analysis. We apply the simplest, two-factor model to equity index and trading volume time series.
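The "low-variability periods" central to this analysis can be made concrete with a short sketch. The snippet below is purely illustrative (the toy return series and the threshold are our assumptions, not the paper's data): it extracts the lengths of maximal runs during which absolute returns stay below a threshold, the quantity whose length distribution the abstract reports to be multi-scaling.

```python
import numpy as np

def low_variability_periods(returns, threshold):
    """Lengths of maximal runs where |return| stays below `threshold`.

    These run lengths are the 'low-variability periods'; the paper studies
    the scaling of their length distribution for prices and volumes.
    """
    quiet = np.abs(returns) < threshold
    lengths, run = [], 0
    for q in quiet:
        if q:
            run += 1
        elif run:
            lengths.append(run)
            run = 0
    if run:
        lengths.append(run)
    return lengths

# Toy daily returns (in percent); large moves interrupt the quiet runs.
returns = np.array([0.1, 0.2, 2.5, 0.1, 0.1, 0.3, 3.0, 0.2, 2.8, 0.1])
print(low_variability_periods(returns, 1.0))
```

On real data one would then histogram these lengths across many thresholds and look for power-law (multi-scaling) behaviour.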
Lamani, Xolelwa; Horst, Simeon; Zimmermann, Thomas; Schmidt, Torsten C
2015-01-01
Aromatic amines are an important class of harmful components of cigarette smoke. Nevertheless, only a few of them have been reported to occur in urine, which raises questions about the fate of these compounds in the human body. Here we report on the results of a new analytical method, in situ derivatization solid phase microextraction (SPME) multi-dimensional gas chromatography mass spectrometry (GCxGC-qMS), that allows for a comprehensive fingerprint analysis of the substance class in complex matrices. Due to the high polarity of amino compounds, the complex urine matrix, and the prevalence of conjugated anilines, pretreatment steps such as acidic hydrolysis, liquid-liquid extraction (LLE), and derivatization of amines to their corresponding aromatic iodine compounds are necessary. Prior to detection, the derivatives were enriched by headspace SPME, with the extraction efficiency of the SPME fiber ranging between 65 % and 85 %. The measurements were carried out in full scan mode with conservatively estimated limits of detection (LOD) in the range of several ng/L and relative standard deviations (RSD) of less than 20 %. More than 150 aromatic amines were identified in the urine of a smoker, including alkylated and halogenated amines as well as substituted naphthylamines. A number of aromatic amines were also identified in the urine of a non-smoker, which suggests that the detection of biomarkers in urine samples using a more comprehensive analysis, as detailed in this report, may be an essential complement to the use of classic biomarkers. PMID:25142049
Large-scale latent semantic analysis.
Olney, Andrew McGregor
2011-06-01
Latent semantic analysis (LSA) is a statistical technique for representing word meaning that has been widely used for making semantic similarity judgments between words, sentences, and documents. In order to perform an LSA analysis, an LSA space is created in a two-stage procedure, involving the construction of a word frequency matrix and the dimensionality reduction of that matrix through singular value decomposition (SVD). This article presents LANSE, an SVD algorithm specifically designed for LSA, which allows extremely large matrices to be processed using off-the-shelf computer hardware. PMID:21302024
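The two-stage procedure described above (construct a word-frequency matrix, then reduce its dimensionality with SVD) can be sketched in a few lines. This is a toy illustration of the general LSA recipe, not the LANSE algorithm itself; the corpus and the choice of k = 2 latent dimensions are our own assumptions.

```python
import numpy as np

# Toy corpus: two topics (dogs, markets); illustrative data only.
docs = [
    "dog barks loud",
    "dog bites man",
    "man walks dog",
    "stock market falls",
    "market prices rise",
]

# Stage 1: build the word-frequency (term-document) matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
A = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        A[index[w], j] += 1

# Stage 2: dimensionality reduction via (truncated) SVD.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                                     # latent dimensions to keep
doc_vecs = (np.diag(s[:k]) @ Vt[:k]).T    # documents in the k-dim LSA space

def cos(u, v):
    """Cosine similarity, the standard LSA judgment of semantic relatedness."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Same-topic documents end up close; cross-topic documents do not.
print(cos(doc_vecs[0], doc_vecs[1]), cos(doc_vecs[0], doc_vecs[3]))
```

The point of LANSE is that for realistic corpora the matrix `A` has millions of rows and columns, so this dense SVD is infeasible and a specialized sparse algorithm is needed.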
Simple scaling analysis of active channel patterns in Fiumara environment
NASA Astrophysics Data System (ADS)
De Bartolo, Samuele; Fallico, Carmine; Ferrari, Ennio
2015-03-01
A simple scaling analysis was performed on experimental data relative to a riverbed reach of the Allaro Fiumara, a fluvial environment typical of Southern Italy. For this purpose, a simplified geometrical approach was followed to determine the spatial distribution of the number of active channels for the river stretch considered. In particular, section lines crossing the braided network skeleton at distances ranging from 5 to 200 m were considered. Firstly, a probabilistic analysis of the experimental data was carried out using a truncated Poisson distribution to characterize the examined river morphologically. Afterward, a scaling analysis was performed to investigate the existence of a possible multimodal behaviour of the number of active channels and to identify the corresponding cutoff values. For this second approach, based on so-called standard coarse-graining analysis, we defined a power law giving the probability distribution of the number of active channels as the spatial partition (the distance between consecutive sections) varies. In this way, it was possible to verify the existence of a bimodal scaling behaviour. Moreover, the cutoff limits that characterize the bimodal behaviour of the active channels were found for all partition distances from 5 to 100 m, and the corresponding shape and scale parameters were also determined. A comparison of the results obtained by the statistical approach and by the scaling analysis was carried out. The variability of the characteristic parameters of the Poisson and power-type laws with scale was also investigated.
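As a rough illustration of the first, probabilistic step: a zero-truncated Poisson (counts start at 1, since a section with no active channels is not observed) can be fitted by maximum likelihood by inverting the relation mean = lambda / (1 - exp(-lambda)). The counts below are hypothetical, not the Allaro data.

```python
import numpy as np

def fit_truncated_poisson(counts, iters=100):
    """MLE of lambda for a zero-truncated Poisson via fixed-point iteration.

    For this distribution the sample mean m satisfies
        m = lam / (1 - exp(-lam)),
    which rearranges to the contraction lam = m * (1 - exp(-lam)).
    """
    m = np.mean(counts)
    lam = m  # starting guess; the true lam is always below m
    for _ in range(iters):
        lam = m * (1.0 - np.exp(-lam))
    return lam

# Hypothetical active-channel counts per cross-section (all >= 1).
counts = np.array([1, 1, 2, 1, 3, 2, 1, 4, 2, 1, 2, 3])
lam = fit_truncated_poisson(counts)
print(round(float(lam), 3))
```

A goodness-of-fit check against the empirical count frequencies would then indicate whether a single truncated Poisson suffices or whether the bimodal scaling regime described above takes over.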
NASA Astrophysics Data System (ADS)
Kaethner, Christian; Ahlborg, Mandy; Knopp, Tobias; Sattel, Timo F.; Buzug, Thorsten M.
2014-01-01
Magnetic Particle Imaging (MPI) is a tomographic imaging modality capable of visualizing tracers using magnetic fields. A high magnetic gradient strength is mandatory to achieve reasonable image quality. Therefore, power optimization of the coil configuration is essential. In order to realize a multi-dimensional, efficient gradient field generator, the following improvements over conventionally used Maxwell coil configurations are proposed: (i) curved rectangular coils, (ii) interleaved coils, and (iii) multi-layered coils. Combining these adaptations results in a total power reduction of three orders of magnitude, which is an essential step toward the feasibility of building full-body human MPI scanners.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multi-core parallel supercomputers, next-generation numerical simulations of flow physics are focusing on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the development effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
Developing Assessment Scales for Large-Scale Speaking Tests: A Multiple-Method Approach
ERIC Educational Resources Information Center
Galaczi, Evelina D.; ffrench, Angela; Hubbard, Chris; Green, Anthony
2011-01-01
The process of constructing assessment scales for performance testing is complex and multi-dimensional. As a result, a number of different approaches, both empirically and intuitively based, are open to developers. In this paper we outline the approach taken in the revision of a set of assessment scales used with speaking tests, and present the…
Genome-scale DNA methylation analysis
Fouse, Shaun D; Nagarajan, Raman P; Costello, Joseph F
2010-01-01
The haploid human genome contains approximately 29 million CpGs that exist in a methylated, hydroxymethylated or unmethylated state, collectively referred to as the DNA methylome. The methylation status of cytosines in CpGs and occasionally in non-CpG cytosines influences protein–DNA interactions, gene expression, and chromatin structure and stability. The degree of DNA methylation at particular loci may be heritable transgenerationally and may be altered by environmental exposures and diet, potentially contributing to the development of human diseases. For the vast majority of normal and disease methylomes, however, less than 1% of the CpGs have been assessed, revealing the formative stage of methylation mapping techniques. Thus, there is significant discovery potential in new genome-scale platforms applied to methylome mapping, particularly oligonucleotide arrays and the transformative technology of next-generation sequencing. Here, we outline the currently used methylation detection reagents and their application to microarray and sequencing platforms. A comparison of the emerging methods is presented, highlighting their degrees of technical complexity, methylome coverage and precision in resolving methylation. Because there are hundreds of unique methylomes to map within one individual and interindividual variation is likely to be significant, international coordination is essential to standardize methylome platforms and to create a full repository of methylome maps from tissues and unique cell types. PMID:20657796
Assessment of RELAP5-3D multi-dimensional component model using data from LOFT Test L2-5
Davis, C.B.
1998-07-01
The capability of the RELAP5-3D computer code to perform multi-dimensional analysis of a pressurized water reactor (PWR) was assessed using data from the Loss-of-Fluid Test (LOFT) L2-5 experiment. The LOFT facility was a 50 MW PWR that was designed to simulate the response of a commercial PWR during a loss-of-coolant accident (LOCA). Test L2-5 simulated a 200% double-ended cold leg break with an immediate primary coolant pump trip. A three-dimensional model of the LOFT reactor vessel was developed. Calculations of the LOFT L21-5 experiment were performed using the RELAP5-3D computer code. The calculations simulated the blowdown, refill, and reflood portions of the transient. The calculated thermal-hydraulic response of the primary coolant system was generally in reasonable agreement with the test. The calculated results were also generally as good as or better than those obtained previously with RELAP5/MOD3.
Andreas A. J. Wismeijer; Klaas Sijtsma; Ad J. J. M. Vingerhoets
2008-01-01
We discuss and contrast 2 methods for investigating the dimensionality of data from tests and questionnaires: the popular principal components analysis (PCA) and the more recent Mokken scale analysis (MSA; Mokken, 1971). First, we discuss the theoretical similarities and differences between both methods. Then, we use both methods to analyze data collected by means of Larson and Chastain's (1990) Self-Concealment
Large scale analysis of signal reachability
Todor, Andrei; Gabr, Haitham; Dobra, Alin; Kahveci, Tamer
2014-01-01
Motivation: Major disorders, such as leukemia, have been shown to alter the transcription of genes. Understanding how gene regulation is affected by such aberrations is of utmost importance. One promising strategy toward this objective is to compute whether signals can reach the transcription factors through the transcription regulatory network (TRN). Due to the uncertainty of the regulatory interactions, this is a #P-complete problem, and thus solving it for very large TRNs remains a challenge. Results: We develop a novel and scalable method to compute the probability that a signal originating at any given set of source genes can arrive at any given set of target genes (i.e., transcription factors) when the topology of the underlying signaling network is uncertain. Our method tackles this problem for large networks while providing a provably accurate result. Our method follows a divide-and-conquer strategy. We break down the given network into a sequence of non-overlapping subnetworks such that reachability can be computed autonomously and sequentially on each subnetwork. We represent each interaction using a small polynomial. The product of these polynomials expresses the different scenarios in which a signal can or cannot reach the target genes from the source genes. We introduce polynomial collapsing operators for each subnetwork. These operators reduce the size of the resulting polynomial and thus reduce the computational complexity dramatically. We show that our method scales to entire human regulatory networks in only seconds, while existing methods fail beyond a few tens of genes and interactions. We demonstrate that our method can successfully characterize key reachability characteristics of the entire transcription regulatory networks of patients affected by eight different subtypes of leukemia, as well as those from healthy control samples. 
Availability: All the datasets and code used in this article are available at bioinformatics.cise.ufl.edu/PReach/scalable.htm. Contact: atodor@cise.ufl.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24932011
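To see why this problem is hard, note that an exact answer requires summing over every realization of the uncertain interactions. The brute-force sketch below (our own toy example, not the PReach code linked above) enumerates all 2^|E| edge subsets, which is exactly the exponential blow-up the paper's divide-and-conquer polynomial method is designed to avoid on large networks.

```python
from itertools import product

def reach_probability(edges, source, target):
    """Exact probability that `target` is reachable from `source` when each
    directed edge (u, v, p) exists independently with probability p.

    Brute force over all 2^|E| edge subsets; tractable only for tiny graphs.
    """
    total = 0.0
    for present in product([False, True], repeat=len(edges)):
        prob = 1.0
        adj = {}
        for (u, v, p), on in zip(edges, present):
            prob *= p if on else (1.0 - p)
            if on:
                adj.setdefault(u, []).append(v)
        # Depth-first reachability on this particular network realization.
        seen, stack = {source}, [source]
        while stack:
            n = stack.pop()
            for m in adj.get(n, []):
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        if target in seen:
            total += prob
    return total

# Two edge-disjoint paths s->a->t and s->b->t, each edge present with p = 0.9:
# each path works with probability 0.81, so P = 1 - (1 - 0.81)^2 = 0.9639.
edges = [("s", "a", 0.9), ("a", "t", 0.9), ("s", "b", 0.9), ("b", "t", 0.9)]
print(reach_probability(edges, "s", "t"))
```

Representing each edge by a small polynomial in an indicator variable and collapsing per subnetwork, as the paper does, avoids ever materializing this exponential sum.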
Chen, Hao; Li, Jianqiang; Yin, Chunjing; Xu, Kun; Dai, Yitang; Yin, Feifei
2014-08-25
A multi-dimensional crest factor reduction (MD-CFR) technique is proposed to improve the performance and efficiency of multi-band radio-over-fiber (RoF) links. Cooperating with multi-dimensional digital predistortion (MD-DPD), MD-CFR increases the performance of both directly-modulated and externally-modulated RoF links in terms of error vector magnitude (EVM) and adjacent channel power ratio (ACPR). For the directly-modulated RoF link, more than 5 dB of output ACPR reduction is obtained, and output EVMs are reduced from 11.83% and 12.47% to 7.51% and 7.26% for the two bands, respectively, while only a slight improvement to 11.58% and 10.78% is obtained using MD-DPD alone. Similar results are achieved in the externally-modulated RoF link. Given a threshold on EVM or ACPR, the RF power transmission efficiency is also further enhanced. PMID:25321299
Scaling Analysis of the Galaxy Distribution in the SSRS Catalog
A. Campos; R. Dominguez-Tenreiro; G. Yepes
1994-07-14
A detailed analysis of the galaxy distribution in the Southern Sky Redshift Survey (SSRS) by means of the multifractal or scaling formalism is presented. It is shown that galaxies cluster in different ways according to their morphological type as well as their size. Ellipticals are more clustered than spirals, even at scales up to 15 h$^{-1}$ Mpc, whereas no clear segregation between early and late spirals is found. It is also shown that smaller galaxies distribute more homogeneously than larger galaxies.
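The multifractal (scaling) formalism used above rests on estimating generalized dimensions from how pair counts grow with separation. A minimal sketch of one such quantity, the correlation dimension D2, on synthetic points (not the SSRS catalog; the radii and sample size are our assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def correlation_dimension(points, radii):
    """Estimate the correlation dimension D2 as the slope of
    log C(r) versus log r, where C(r) is the fraction of point
    pairs separated by less than r. D2 is one of the generalized
    dimensions probed by the multifractal formalism."""
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    iu = np.triu_indices(len(points), k=1)   # each pair counted once
    pair_d = dists[iu]
    C = np.array([(pair_d < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)
    return slope

# Points uniform in a 2-D square should give D2 near 2 (toy check,
# slightly reduced by boundary effects); points on a line give ~1.
square = rng.uniform(0, 1, size=(800, 2))
radii = np.array([0.02, 0.04, 0.08, 0.16])
print(round(float(correlation_dimension(square, radii)), 2))
```

A clustered (fractal) galaxy distribution yields D2 below the embedding dimension, and a spectrum of such dimensions distinguishes, e.g., ellipticals from spirals as in the abstract.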
Flux Coupling Analysis of Genome-Scale Metabolic Network Reconstructions
Anthony P. Burgard; Evgeni V. Nikolaev; Christophe H. Schilling; Costas D. Maranas
2004-01-01
In this paper, we introduce the Flux Coupling Finder (FCF) framework for elucidating the topological and flux connectivity features of genome-scale metabolic networks. The framework is demonstrated on genome-scale metabolic reconstructions of Helicobacter pylori, Escherichia coli, and Saccharomyces cerevisiae. The analysis allows one to determine whether any two metabolic fluxes, v1 and v2, are (1) directionally coupled, if a non-zero flux for v1
Multiple-length-scale deformation analysis in a thermoplastic polyurethane
NASA Astrophysics Data System (ADS)
Sui, Tan; Baimpas, Nikolaos; Dolbnya, Igor P.; Prisacariu, Cristina; Korsunsky, Alexander M.
2015-03-01
Thermoplastic polyurethane elastomers enjoy an exceptionally wide range of applications due to their remarkable versatility. These block co-polymers are used here as an example of a structurally inhomogeneous composite containing nano-scale gradients, whose internal strain differs depending on the length scale of consideration. Here we present a combined experimental and modelling approach to the hierarchical characterization of block co-polymer deformation. Synchrotron-based small- and wide-angle X-ray scattering and radiography are used for strain evaluation across the scales. Transmission electron microscopy image-based finite element modelling and fast Fourier transform analysis are used to develop a multi-phase numerical model that achieves agreement with the combined experimental data using a minimal number of adjustable structural parameters. The results highlight the importance of fuzzy interfaces, that is, regions of nanometre-scale structure and property gradients, in determining the mechanical properties of hierarchical composites across the scales.
Rasch analysis of the Multiple Sclerosis Impact Scale (MSIS-29)
Ramp, Melina; Khan, Fary; Misajon, Rose Anne; Pallant, Julie F
2009-01-01
Background Multiple Sclerosis (MS) is a degenerative neurological disease that causes impairments, including spasticity, pain, fatigue, and bladder dysfunction, which negatively impact on quality of life. The Multiple Sclerosis Impact Scale (MSIS-29) is a disease-specific health-related quality of life (HRQoL) instrument, developed using the patient's perspective on disease impact. It consists of two subscales assessing the physical (MSIS-29-PHYS) and psychological (MSIS-29-PSYCH) impact of MS. Although previous studies have found support for the psychometric properties of the MSIS-29 using traditional methods of scale evaluation, the scale has not been subjected to a detailed Rasch analysis. Therefore, the objective of this study was to use Rasch analysis to assess the internal validity of the scale, and its response format, item fit, targeting, internal consistency and dimensionality. Methods Ninety-two persons with definite MS residing in the community were recruited from a tertiary hospital database. Patients completed the MSIS-29 as part of a larger study. Rasch analysis was undertaken to assess the psychometric properties of the MSIS-29. Results Rasch analysis showed overall support for the psychometric properties of the two MSIS-29 subscales; however, it was necessary to reduce the response format of the MSIS-29-PHYS to a 3-point response scale. Both subscales were unidimensional, had good internal consistency, and were free from item bias for sex and age. Dimensionality testing indicated it was not appropriate to combine the two subscales to form a total MSIS score. Conclusion In this first study to use Rasch analysis to fully assess the psychometric properties of the MSIS-29, support was found for the two subscales but not for the use of the total scale. Further use of Rasch analysis on the MSIS-29 in larger and broader samples is recommended to confirm these findings. PMID:19545445
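For readers unfamiliar with the model underlying the analysis: the Rasch model places persons (ability theta) and items (difficulty b) on a common logit scale. The sketch below uses the simplest dichotomous form (the MSIS-29 itself has polytomous items), and the item difficulties are hypothetical, not the study's estimates.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of endorsing an item of
    difficulty b for a person located at theta on the latent trait."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Hypothetical item difficulties (logits) for a 5-item subscale.
item_difficulty = np.array([-1.5, -0.5, 0.0, 0.8, 2.0])

# Expected raw score rises monotonically with the latent trait; 'targeting'
# asks whether the item difficulties cover the range of person locations.
for theta in (-2.0, 0.0, 2.0):
    expected_score = rasch_prob(theta, item_difficulty).sum()
    print(theta, round(float(expected_score), 2))
```

Item-fit statistics then compare observed responses against these model-expected probabilities, which is the kind of check the study reports.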
NASA Astrophysics Data System (ADS)
Taghizadeh-Popp, M.; Heinis, S.; Szalay, A. S.
2012-08-01
We propose to describe the variety of galaxies from the Sloan Digital Sky Survey by using only one affine parameter. To this aim, we construct the principal curve (P-curve) passing through the spine of the data point cloud, considering the eigenspace derived from Principal Component Analysis (PCA) of morphological, physical, and photometric galaxy properties. Thus, galaxies can be labeled, ranked, and classified by a single arc-length value of the curve, measured at the unique closest projection of the data points on the P-curve. We find that the P-curve has a "W" shape with three turning points, defining four branches that represent distinct galaxy populations. This behavior is controlled mainly by two properties, namely u - r and star formation rate (from blue young at low arc length to red old at high arc length), while most other properties correlate well with these two. We further present the variations of several important galaxy properties as a function of arc length. Luminosity functions vary from steep Schechter fits at low arc length to double power law and ending in lognormal fits at high arc length. Galaxy clustering shows increasing autocorrelation power at large scales as arc length increases. Cross correlation of galaxies with different arc lengths shows that the probability of two galaxies belonging to the same halo decreases as their distance in arc length increases. PCA allows us to find peculiar galaxy populations located apart from the main cloud of data points, such as small red galaxies dominated by a disk, of relatively high stellar mass-to-light ratio and surface mass density. On the other hand, the P-curve helped us understand the average trends, encoding 75% of the available information in the data. 
The P-curve not only allows dimensionality reduction but also provides supporting evidence for the following relevant physical models and scenarios in extragalactic astronomy: (1) The hierarchical merging scenario in the formation of a selected group of red massive galaxies. These galaxies present a lognormal r-band luminosity function, which might arise from multiplicative processes involved in this scenario. (2) A connection between the onset of active galactic nucleus activity and star formation quenching, as mentioned in Martin et al., which appears in green galaxies transitioning from blue to red populations.
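The idea of ranking objects by a single parameter along the spine of a point cloud can be illustrated with plain PCA, the first step the authors take before fitting the principal curve. In this sketch (synthetic data; a true P-curve fit would bend to follow the cloud, while PC1 is only its straight-line approximation), projection onto the first principal component recovers a hidden one-dimensional sequence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical galaxy table: rows are objects, columns are standardized
# properties (colour-, SFR- and mass-like); illustrative data only.
n = 200
t = rng.uniform(0, 1, n)                   # hidden 1-D "sequence" parameter
X = np.column_stack([
    2.0 * t + 0.05 * rng.normal(size=n),   # colour-like property
    -1.5 * t + 0.05 * rng.normal(size=n),  # SFR-like property
    0.8 * t + 0.05 * rng.normal(size=n),   # mass-like property
])

# PCA via SVD of the centred data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
score = Xc @ Vt[0]   # projection on the first principal component

# The single ordering parameter should track the hidden sequence closely.
corr = abs(float(np.corrcoef(score, t)[0, 1]))
print(round(corr, 3))
```

When the cloud is curved (as the "W"-shaped P-curve indicates for real galaxies), arc length along the fitted curve replaces the PC1 score as the ordering parameter.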
Analysis of information management for large-scale bridge construction
CHAOSHENG TANG; BAOJUN WANG; WEI GAO; BIN SHI
This paper describes the development of a standard for managing geotechnical engineering practice in a large-scale bridge engineering investigation and survey. It is important to establish an information management system in such a project to aid decision making and to enhance management quality and work efficiency. Based on the analysis of information characteristics and management deficiencies in
Taxometric Analysis of the Levenson Self-Report Psychopathy Scale
Glenn D. Walters; Chad A. Brinkley; Philip R. Magaletta; Pamela M. Diamond
2008-01-01
Levenson's Self-Report Psychopathy scale (Levenson, Kiehl, & Fitzpatrick, 1995) was administered to 1,972 male and female federal prison inmates, the results of which were subjected to taxometric analysis. We employed 4 taxometric procedures in this study: mean above minus below a cut (Meehl & Yonce, 1994), maximum slope (Grove & Meehl, 1993), maximum eigenvalue (Waller & Meehl, 1998), and latent-mode
Scaling, Granulation, and Fuzzy Attributes in Formal Concept Analysis
Belohlavek, Radim
analysis (FCA) of data with fuzzy attributes. In ordinary FCA, the input is a data table with yes/no attributes. Scaling is a process of transformation of data tables with general attributes, e.g. nominal, ordinal, etc., to data tables with yes/no attributes. This way, data tables with general attributes can
An investigation of returns to scale in data envelopment analysis
Lawrence M. Seiford; Joe Zhu
1999-01-01
This paper discusses the determination of returns to scale (RTS) in data envelopment analysis (DEA). Three basic RTS methods and their modifications are reviewed and the equivalence between these different RTS methods is presented. The effect of multiple optimal DEA solutions on the RTS estimation is studied. It is shown that possible alternate optimal solutions only affect the estimation of
Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth
Smyth, Padhraic
Data Mining: Data Analysis on a Grand Scale? Padhraic Smyth, Information and Computer Science. For Statistical Methods in Medical Research, September 2000. Abstract: Modern data mining has evolved largely as a result of efforts by computer scientists to address the needs of "data owners" in extracting useful
Exploratory Factor Analysis of African Self-Consciousness Scale Scores
ERIC Educational Resources Information Center
Bhagwat, Ranjit; Kelly, Shalonda; Lambert, Michael C.
2012-01-01
This study replicates and extends prior studies of the dimensionality, convergent, and external validity of African Self-Consciousness Scale scores with appropriate exploratory factor analysis methods and a large gender balanced sample (N = 348). Viable one- and two-factor solutions were cross-validated. Both first factors overlapped significantly…
High-Order Semi-Discrete Central-Upwind Schemes for Multi-Dimensional Hamilton-Jacobi Equations
NASA Technical Reports Server (NTRS)
Bryson, Steve; Levy, Doron; Biegel, Bryan (Technical Monitor)
2002-01-01
We present the first fifth-order, semi-discrete central-upwind method for approximating solutions of multi-dimensional Hamilton-Jacobi equations. Unlike most of the commonly used high-order upwind schemes, our scheme is formulated as a Godunov-type scheme. The scheme is based on the fluxes of Kurganov-Tadmor and Kurganov-Tadmor-Petrova, and is derived for an arbitrary number of space dimensions. A theorem establishing the monotonicity of these fluxes is provided. The spatial discretization is based on a weighted essentially non-oscillatory reconstruction of the derivative. The accuracy and stability properties of our scheme are demonstrated in a variety of examples. A comparison between our method and other fifth-order schemes for Hamilton-Jacobi equations shows that our method exhibits smaller errors without any increase in the complexity of the computations.
Rose, Donald; Bodor, J. Nicholas; Hutchinson, Paul L.; Swalm, Chris M.
2010-01-01
Research on neighborhood food access has focused on documenting disparities in the food environment and on assessing the links between the environment and consumption. Relatively few studies have combined in-store food availability measures with geographic mapping of stores. We review research that has used these multi-dimensional measures of access to explore the links between the neighborhood food environment and consumption or weight status. Early research in California found correlations between red meat, reduced-fat milk, and whole-grain bread consumption and shelf space availability of these products in area stores. Subsequent research in New York confirmed the low-fat milk findings. Recent research in Baltimore has used more sophisticated diet assessment tools and store-based instruments, along with controls for individual characteristics, to show that low availability of healthy food in area stores is associated with low-quality diets of area residents. Our research in southeastern Louisiana has shown that shelf space availability of energy-dense snack foods is positively associated with BMI after controlling for individual socioeconomic characteristics. Most of this research is based on cross-sectional studies. To assess the direction of causality, future research testing the effects of interventions is needed. We suggest that multi-dimensional measures of the neighborhood food environment are important to understanding these links between access and consumption. They provide a more nuanced assessment of the food environment. Moreover, given the typical duration of research project cycles, changes to in-store environments may be more feasible than changes to the overall mix of retail outlets in communities. PMID:20410084
NASA Astrophysics Data System (ADS)
Ren, Xiaodong; Xu, Kun; Shyy, Wei; Gu, Chunwei
2015-07-01
This paper presents a high-order discontinuous Galerkin (DG) method based on a multi-dimensional gas kinetic evolution model for viscous flow computations. Generally, DG methods for equations with higher-order derivatives must transform the equations into a first-order system in order to avoid the so-called "non-conforming problem". In the traditional DG framework, the inviscid and viscous fluxes are treated numerically in different ways. Unlike traditional DG approaches, the current method adopts a kinetic evolution model for both inviscid and viscous flux evaluations uniformly. By using a multi-dimensional gas kinetic formulation, we can obtain a spatially and temporally dependent gas distribution function for the flux integration inside the cell and at the cell interface, which distinguishes this approach from the Gaussian quadrature point flux evaluation in the traditional DG method. Besides the initial higher-order non-equilibrium states inside each control volume, a Linear Least Square (LLS) method is used for the reconstruction of smooth distributions of macroscopic flow variables around each cell interface in order to construct the corresponding equilibrium state. Instead of separating the space and time integrations and using the multistage Runge-Kutta time stepping method for time accuracy, the current method integrates the flux function in space and time analytically, which subsequently saves computational time. Many test cases in two and three dimensions, which include high Mach number compressible viscous and heat conducting flows and low speed high Reynolds number laminar flows, are presented to demonstrate the performance of the current scheme.
Amadi, Ovid Charles
2013-01-01
The requirement that individual cells be able to communicate with one another over a range of length scales is a fundamental prerequisite for the evolution of multicellular organisms. Often diffusible chemical molecules ...
SINEX: SCALE shielding analysis GUI for X-Windows
Browman, S.M.; Barnett, D.L.
1997-12-01
SINEX (SCALE Interface Environment for X-windows) is an X-Windows graphical user interface (GUI) being developed for performing SCALE radiation shielding analyses. SINEX enables the user to generate input for the SAS4/MORSE and QADS/QAD-CGGP shielding analysis sequences in SCALE. The code's features will facilitate the use of both analytical sequences with a minimum of additional user input. Included in SINEX is the capability to check the geometry model by generating two-dimensional (2-D) color plots of the geometry model using a new version of the SCALE module PICTURE. The most sophisticated feature, however, is the 2-D visualization display that provides an on-screen graphical representation as the user builds a geometry model. This capability to interactively build a model will significantly increase user productivity and reduce user errors. SINEX will perform extensive error checking and will allow users to execute SCALE directly from the GUI. The interface will also provide direct on-line access to the SCALE manual.
SCALE system cross-section validation for criticality safety analysis
Hathout, A.M.; Westfall, R.M.; Dodds, H.L. Jr.
1980-01-01
The purpose of this study is to test selected data from three cross-section libraries for use in the criticality safety analysis of UO2 fuel rod lattices. The libraries, which are distributed with the SCALE system, are used to analyze potential criticality problems which could arise in the industrial fuel cycle for PWR and BWR reactors. Fuel lattice criticality problems could occur in pool storage, dry storage with accidental moderation, shearing and dissolution of irradiated elements, and in fuel transport and storage due to inadequate packing and shipping cask design. The data were tested by using the SCALE system to analyze 25 recently performed critical experiments.
Scaled-particle theory analysis of cylindrical cavities in solution
NASA Astrophysics Data System (ADS)
Ashbaugh, Henry S.
2015-04-01
The solvation of hard spherocylindrical solutes is analyzed within the context of scaled-particle theory, which takes the view that the free energy of solvating an empty cavitylike solute is equal to the pressure-volume work required to inflate a solute from nothing to the desired size and shape within the solvent. Based on our analysis, an end cap approximation is proposed to predict the solvation free energy as a function of the spherocylinder length from knowledge regarding only the solvent density in contact with a spherical solute. The framework developed is applied to extend Reiss's classic implementation of scaled-particle theory and a previously developed revised scaled-particle theory to spherocylindrical solutes. To test the theoretical descriptions developed, molecular simulations of the solvation of infinitely long cylindrical solutes are performed. In hard-sphere solvents classic scaled-particle theory is shown to provide a reasonably accurate description of the solvent contact correlation and resulting solvation free energy per unit length of cylinders, while the revised scaled-particle theory fitted to measured values of the contact correlation provides a quantitative free energy. Applied to the Lennard-Jones solvent at a state-point along the liquid-vapor coexistence curve, however, classic scaled-particle theory fails to correctly capture the dependence of the contact correlation. Revised scaled-particle theory, on the other hand, provides a quantitative description of cylinder solvation in the Lennard-Jones solvent with a fitted interfacial free energy in good agreement with that determined for purely spherical solutes. The breakdown of classical scaled-particle theory does not result from the failure of the end cap approximation, however, but is indicative of neglected higher-order curvature dependences on the solvation free energy.
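The "inflation work" picture above has a simple exact limit that can be sketched numerically: when the cavity's exclusion volume is small enough (or the solvent dilute enough) that multiple occupancy of the exclusion volume is negligible, the insertion probability is 1 - rho*V_excl and the cavity work is W = -kT ln(1 - rho*V_excl). The densities and dimensions below are illustrative assumptions, not values from the paper:

```python
import math

# Small-cavity limit of scaled-particle theory for a spherocylindrical
# cavity of radius R and cylinder length L: the exclusion volume is
# (4/3)*pi*R^3 + pi*R^2*L, and W/kT = -ln(1 - rho*V_excl).
def cavity_work_kT(rho, radius, length):
    v_excl = (4.0 / 3.0) * math.pi * radius**3 + math.pi * radius**2 * length
    x = rho * v_excl
    if x >= 1.0:
        raise ValueError("outside the small-cavity regime")
    return -math.log(1.0 - x)

rho = 0.8  # solvent number density (reduced units, assumed)
w_sphere = cavity_work_kT(rho, 0.2, 0.0)  # pure sphere (end caps only)
w_rod = cavity_work_kT(rho, 0.2, 1.0)     # spherocylinder of length 1
# The work grows with length, mirroring the free energy per unit length
# of cylinders discussed in the abstract.
print(w_sphere, w_rod)
```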
Bhattacharyya, Shuvra S.
Title of dissertation: INTEGRATED SOFTWARE SYNTHESIS FOR SIGNAL PROCESSING APPLICATIONS. ABSTRACT: Signal processing applications usually encounter multi-dimensional real-time performance requirements in software implementation. In this thesis, a number of important memory ...
M. Ohta; Y. Mitani; N. Nakasako
1998-01-01
In this paper, a statistical electromagnetic (EM) interference study is constructed, mainly from a theoretical viewpoint, on the basis of the N-dimensional random walk problem in multi-dimensional signal space. First, a characteristic function of the Hankel transform type matched to this EM environmental study is introduced, especially in an extended form of D. Middleton's basic result. Then, the probability density
NASA Astrophysics Data System (ADS)
Xu, Fuyi; Yuan, Jia
2015-04-01
This paper is dedicated to the study of the Cauchy problem for a multi-dimensional (N ≥ 2) compressible viscous liquid-gas two-phase flow model. We prove the local well-posedness of the system for large data in critical Besov spaces based on the L^p framework, under the sole assumption that the initial liquid mass is bounded away from zero.
New Criticality Safety Analysis Capabilities in SCALE 5.1
Bowman, Stephen M [ORNL; DeHart, Mark D [ORNL; Dunn, Michael E [ORNL; Goluoglu, Sedat [ORNL; Horwedel, James E [ORNL; Petrie Jr, Lester M [ORNL; Rearden, Bradley T [ORNL; Williams, Mark L [ORNL
2007-01-01
Version 5.1 of the SCALE computer software system developed at Oak Ridge National Laboratory, released in 2006, contains several significant enhancements for nuclear criticality safety analysis. This paper highlights new capabilities in SCALE 5.1, including improved resonance self-shielding capabilities; ENDF/B-VI.7 cross-section and covariance data libraries; HTML output for KENO V.a; analytical calculations of KENO-VI volumes with GeeWiz/KENO3D; new CENTRMST/PMCST modules for processing ENDF/B-VI data in TSUNAMI; SCALE Generalized Geometry Package in NEWT; KENO Monte Carlo depletion in TRITON; and plotting of cross-section and covariance data in Javapeno.
Analysis of Reynolds number scaling for viscous vortex reconnection
NASA Astrophysics Data System (ADS)
Ni, Qionglin; Hussain, Fazle; Wang, Jianchun; Chen, Shiyi
2012-10-01
A theoretical analysis of viscous vortex reconnection is developed based on scale separation, and the Reynolds number, Re (= circulation/viscosity), scaling for the reconnection time T_rec is derived. The scaling varies continuously as Re increases, from T_rec ~ Re^{-1} to T_rec ~ Re^{-1/2}. This theoretical prediction agrees well with direct numerical simulations by Garten et al. [J. Fluid Mech. 426, 1 (2001)], 10.1017/S0022112000002251 and Hussain and Duraisamy [Phys. Fluids 23, 021701 (2011)], 10.1063/1.3532039. Moreover, our analysis yields two Reynolds numbers, namely, a characteristic Re_{0.75} in [O(10^2), O(10^3)] for the T_rec ~ Re^{-0.75} scaling given by Hussain and Duraisamy, and the critical Re_c ~ O(10^4) for the transition after which the first reconnection is completed. For Re > Re_c, a quiescent state follows, and then a second reconnection may occur.
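The continuous drift of the reconnection-time exponent between the two limits can be illustrated with a toy calculation. The additive blend of the two limiting laws below is our own illustrative assumption, not the paper's derived scaling function:

```python
import numpy as np

def t_rec(Re, a=1.0, b=1.0):
    # Hypothetical blend of the two limiting scalings:
    # T_rec ~ Re^-1 at low Re and T_rec ~ Re^-1/2 at high Re.
    return a / Re + b / np.sqrt(Re)

def local_exponent(Re, eps=1e-4):
    # Effective exponent d ln(T_rec) / d ln(Re) by finite differences.
    return (np.log(t_rec(Re * (1 + eps))) - np.log(t_rec(Re))) / np.log(1 + eps)

# The effective exponent drifts from -1 toward -1/2 as Re grows.
for Re in (1e-3, 1e1, 1e6):
    print(f"Re = {Re:.0e}: exponent = {local_exponent(Re):.3f}")
```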
Multiscale Thermal Analysis for Nanometer-Scale Integrated Circuits
Zyad Hassan; Nicholas Allec; Li Shang; Robert P. Dick; Vishak Venkatraman; Ronggui Yang
2009-01-01
Thermal analysis has long been essential for designing reliable high-performance cost-effective integrated circuits (ICs). Increasing power densities are making this problem more important. Characterizing the thermal profile of an IC quickly enough to allow feedback on the thermal effects of tentative design changes is a daunting problem, and its complexity is increasing. The move to nanometer-scale fabrication processes is increasing
Global stability analysis of multitime-scale neural networks
Zhanshan Wang; Enlin Zhang; Huaguang Zhang; Zhengyun Ren
Global asymptotic stability is studied for a class of recurrent neural networks with multiple time scales. The network involves two coupling terms, i.e., long-term memory and short-term memory, which complicates the dynamics analysis, especially in the case of multiple time-varying delays. Some novel stability criteria are proposed on the basis of linear matrix inequality
Comparative Analysis of Multiple Genome-Scale Data Sets
Margaret Werner-Washburne; Brian Wylie; Kevin Boyack; Edwina Fuge; Judith Galbraith; Jose Weber; George Davidson
2002-01-01
The ongoing analyses of published genome-scale data sets are evidence that different approaches are required to completely mine these data. We report the use of novel tools for both visualization and data set comparison to analyze yeast gene-expression (cell cycle and exit from stationary phase/G0) and protein-interaction studies. This analysis led to new insights about each data set. For example,
The Self-Talk Scale: Development, Factor Analysis, and Validation
Thomas M. Brinthaupt; Michael B. Hein; Tracey E. Kramer
2009-01-01
Researchers and theorists have argued that self-talk plays an important role in everyday behavior and self-regulation. To facilitate research on this role, we developed a new measure of self-talk for use with nonclinical adult populations. The Self-Talk Scale (STS) measures one's frequency of self-talk. Analysis indicated a factor structure consisting of Social Assessment, Self-Criticism, Self-Reinforcement, and Self-Management factors. In 5
Microbial community analysis of a full-scale DEMON bioreactor.
Gonzalez-Martinez, Alejandro; Rodriguez-Sanchez, Alejandro; Muñoz-Palazon, Barbara; Garcia-Ruiz, Maria-Jesus; Osorio, Francisco; van Loosdrecht, Mark C M; Gonzalez-Lopez, Jesus
2015-03-01
Full-scale applications of autotrophic nitrogen removal technologies for the treatment of digested sludge liquor have proliferated during the last decade. Among these technologies, the aerobic/anoxic deammonification process (DEMON) is one of the major applied processes. This technology achieves nitrogen removal from wastewater through anammox metabolism inside a single bioreactor due to alternating cycles of aeration. To date, the microbial community composition of full-scale DEMON bioreactors has never been reported. In this study, the bacterial community structure of a full-scale DEMON bioreactor located at the Apeldoorn wastewater treatment plant was analyzed using pyrosequencing. This technique provided a higher-resolution study of the bacterial assemblage of the system compared to other techniques used in lab-scale DEMON bioreactors. Results showed that the DEMON bioreactor was a complex ecosystem where ammonium oxidizing bacteria, anammox bacteria and many other bacterial phylotypes coexist. The potential ecological role of all phylotypes found is discussed. Thus, metagenomic analysis through pyrosequencing offered new perspectives on the functioning of the DEMON bioreactor by exhaustive identification of microorganisms, which play a key role in the performance of bioreactors. In this way, pyrosequencing has been proven a helpful tool for the in-depth investigation of the functioning of bioreactors at the microbiological scale. PMID:25245398
Efficient organization and access of multi-dimensional datasets on tertiary storage systems
Ling Tony Chen; R. Drach; M. Keating; S. Louis; Doron Rotem; Arie Shoshani
1995-01-01
This paper addresses the problem of urgently needed data management techniques for efficiently retrieving requested subsets of large datasets from mass storage devices. This problem is especially critical for scientific investigators who need ready access to the large volume of data generated by large-scale supercomputer simulations and physical experiments as well as the automated collection of observations by monitoring devices
Efficient Processing of Top-k Dominating Queries on Multi-Dimensional Data Man Lung Yiu
Yiu, Man Lung
in the 2D space, where the dimensions correspond to (preference) attribute values; traveling time by users; and (iii) the result is independent of the scales at different dimensions. ... in a d-dimensional space R^d. Given a (monotone) ranking function F : R^d → R, a top-k query [14, 9
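A top-k dominating query can be sketched in a few lines: score each point by how many other points it dominates, then return the k highest scorers. The naive quadratic scan below (using a "smaller is better" convention on every dimension, an assumption for illustration) shows the query semantics only, not the paper's efficient index-based algorithms:

```python
def dominates(p, q):
    # p dominates q if p is no worse in every dimension and strictly
    # better in at least one (here: smaller is better).
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def top_k_dominating(points, k):
    # Naive O(n^2) scoring; the point of the paper is avoiding this cost.
    scores = [(sum(dominates(p, q) for q in points), p) for p in points]
    scores.sort(key=lambda s: -s[0])
    return [p for _, p in scores[:k]]

pts = [(1, 2), (2, 1), (3, 3), (4, 4), (2, 2)]
print(top_k_dominating(pts, 2))  # → [(1, 2), (2, 1)]
```

Note that, unlike a skyline query, the dominance *count* gives a total order, so exactly k points are returned and the result is insensitive to the scales of individual dimensions.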
The scale analysis sequence for LWR fuel depletion
Hermann, O.W.; Parks, C.V.
1991-01-01
The SCALE (Standardized Computer Analyses for Licensing Evaluation) code system is used extensively to perform away-from-reactor safety analysis (particularly criticality safety, shielding, heat transfer analyses) for spent light water reactor (LWR) fuel. Spent fuel characteristics such as radiation sources, heat generation sources, and isotopic concentrations can be computed within SCALE using the SAS2 control module. A significantly enhanced version of the SAS2 control module, which is denoted as SAS2H, has been made available with the release of SCALE-4. For each time-dependent fuel composition, SAS2H performs one-dimensional (1-D) neutron transport analyses (via XSDRNPM-S) of the reactor fuel assembly using a two-part procedure with two separate unit-cell-lattice models. The cross sections derived from a transport analysis at each time step are used in a point-depletion computation (via ORIGEN-S) that produces the burnup-dependent fuel composition to be used in the next spectral calculation. A final ORIGEN-S case is used to perform the complete depletion/decay analysis using the burnup-dependent cross sections. The techniques used by SAS2H and two recent applications of the code are reviewed in this paper. 17 refs., 5 figs., 5 tabs.
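The two-part procedure described above (a transport calculation at each time step producing collapsed cross sections, which feed a point-depletion calculation whose output composition drives the next spectral step) can be sketched schematically. The one-group, single-nuclide loop below is a toy stand-in: `transport_solve` and `deplete` are hypothetical placeholders for the XSDRNPM-S and ORIGEN-S stages, with made-up numbers:

```python
import math

def transport_solve(n_fuel):
    # Hypothetical spectral step: the effective one-group cross section
    # shifts as the fuel depletes (a crude self-shielding analogue).
    return 1.0 / (1.0 + 0.5 * n_fuel)

def deplete(n_fuel, sigma, flux, dt):
    # Point depletion dN/dt = -sigma * flux * N, integrated exactly.
    return n_fuel * math.exp(-sigma * flux * dt)

n = 1.0               # initial relative fuel concentration
flux, dt = 1.0, 0.2   # illustrative flux level and burnup step
history = [n]
for step in range(5):
    sigma = transport_solve(n)       # spectral calculation at this burnup
    n = deplete(n, sigma, flux, dt)  # burnup-dependent cross section used
    history.append(n)
print(history)
```

The essential structure is the alternation: each depletion step uses cross sections consistent with the current composition, which is what distinguishes this coupling from a one-shot depletion with fixed cross sections.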
On Multi-dimensional Steady Subsonic Flows Determined by Physical Boundary Conditions
NASA Astrophysics Data System (ADS)
Weng, Shangkun
In this thesis, we investigate an inflow-outflow problem for subsonic gas flows in a nozzle of finite length, aiming at finding intrinsic (physically acceptable) boundary conditions on the upstream and downstream ends. We first characterize a set of physical boundary conditions that ensure the existence and uniqueness of a subsonic irrotational flow in a rectangle. Our results show that if we prescribe the horizontal incoming flow angle at the inlet and an appropriate pressure at the exit, then there exist two positive constants m0 and m1 with m0 < m1 such that a global subsonic irrotational flow exists uniquely in the nozzle, provided that the incoming mass flux m ∈ [m0, m1). The maximum speed approaches the sonic speed as the mass flux m tends to m1. The new difficulties arise from the nonlocal term involved in the mass flux and from the pressure condition at the exit. We first introduce an auxiliary problem with the Bernoulli constant as a parameter to localize the nonlocal term, and then establish a monotonic relation between the mass flux and the Bernoulli constant to recover the original problem. To deal with the loss of obliqueness induced by the pressure condition at the exit, we employ a formulation in terms of the angular velocity and the density. A Moser iteration is applied to obtain the L∞ estimate of the angular velocity, which guarantees that the flow possesses a positive horizontal velocity in the whole nozzle. As a continuation, we investigate the influence of the incoming flow angle and the geometry of the nozzle walls on subsonic flows in a finitely long curved nozzle. It turns out, interestingly, that the incoming flow angle and the angles of inclination of the nozzle walls play the same role as the end pressure. The curvatures of the nozzle walls play an important role. We also extend our results to subsonic Euler flows in the 2-D and 3-D asymmetric cases.
Then comes the most interesting and difficult case: the 3-D subsonic Euler flow in a bounded nozzle, which is also the essential part of this thesis. The boundary conditions we imposed in the 2-D case have a natural extension to the 3-D case. These important clues help us develop a new formulation that gives some insight into the coupling structure between the hyperbolic and elliptic modes in the Euler equations. The key idea in our new formulation is to use Bernoulli's law to reduce the dimension of the velocity field by defining new variables (1, b2 = u2/u1, b3 = u3/u1) and replacing u1 by the Bernoulli function B through u1^2 = 2(B - h(ρ))/(1 + b2^2 + b3^2). In this way, we can explore the role of Bernoulli's law in greater depth, in the hope of simplifying the Euler equations. We find a new conserved quantity for flows with a constant Bernoulli function, which behaves like the scaled vorticity in the 2-D case. More surprisingly, a system of new conservation laws can be derived, which has never been observed before, even in the two-dimensional case. We employ this formulation to construct a smooth subsonic Euler flow in a rectangular cylinder by assigning the incoming flow angles and the Bernoulli function at the inlet and the end pressure at the exit, which is also required to be adjacent to some special subsonic states. The same idea can be applied to obtain similar information for the incompressible Euler equations, the self-similar Euler equations, the steady Euler equations with damping, the steady Euler-Poisson equations, and the steady Euler-Maxwell equations. Last, we are concerned with the structural stability of some steady subsonic solutions for the Euler-Poisson system.
A steady subsonic solution with subsonic background charge is proven to be structurally stable with respect to small perturbations of the background charge, the incoming flow angles and the end pressure, provided the background solution has a low Mach number and a small electric field. The new ingredient in our mathematical analysis is the solvability of a new second order elliptic system supplemented with oblique derivative conditio
Scaling analysis for the investigation of slip mechanisms in nanofluids.
Savithiri, S; Pattamatta, Arvind; Das, Sarit K
2011-01-01
The primary objective of this study is to investigate the effect of slip mechanisms in nanofluids through scaling analysis. The role of nanoparticle slip mechanisms in both water- and ethylene glycol-based nanofluids is analyzed by considering shape, size, concentration, and temperature of the nanoparticles. From the scaling analysis, it is found that all of the slip mechanisms are dominant in particles of cylindrical shape as compared to that of spherical and sheet particles. The magnitudes of slip mechanisms are found to be higher for particles of size between 10 and 80 nm. The Brownian force is found to dominate in smaller particles below 10 nm and also at smaller volume fraction. However, the drag force is found to dominate in smaller particles below 10 nm and at higher volume fraction. The effect of thermophoresis and Magnus forces is found to increase with the particle size and concentration. In terms of time scales, the Brownian and gravity forces act considerably over a longer duration than the other forces. For copper-water-based nanofluid, the effective contribution of slip mechanisms leads to a heat transfer augmentation which is approximately 36% over that of the base fluid. The drag and gravity forces tend to reduce the Nusselt number of the nanofluid while the other forces tend to enhance it. PMID:21791036
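One of the size trends reported above (Brownian effects dominating for particles below about 10 nm) can be illustrated with the standard Stokes-Einstein relation, D = kT/(3*pi*mu*d): diffusive mobility grows as the particle shrinks. The fluid properties below are assumed values for water near room temperature, not parameters taken from the study:

```python
import math

def stokes_einstein_D(d_particle, mu, T):
    # Stokes-Einstein diffusion coefficient for a sphere of diameter
    # d_particle in a fluid of viscosity mu at temperature T.
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return kB * T / (3.0 * math.pi * mu * d_particle)

mu_water = 1.0e-3  # dynamic viscosity of water, Pa*s (assumed)
T = 300.0          # temperature, K (assumed)
D = {d_nm: stokes_einstein_D(d_nm * 1e-9, mu_water, T) for d_nm in (5, 10, 80)}
for d_nm, val in D.items():
    # Smaller particles diffuse faster, so Brownian slip dominates there.
    print(f"{d_nm:3d} nm: D = {val:.2e} m^2/s")
```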
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.
1998-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely independent of scale. Self-similarity is defined as a property of curves or surfaces where each part is indistinguishable from the whole, or where the form of the curve or surface is invariant with respect to scale. An ideal fractal (or monofractal) curve or surface has a constant dimension over all scales, although it may not be an integer value. This is in contrast to Euclidean or topological dimensions, where discrete one, two, and three dimensions describe curves, planes, and volumes. Theoretically, if the digital numbers of a remotely sensed image resemble an ideal fractal surface, then due to the self-similarity property, the fractal dimension of the image will not vary with scale and resolution. However, most geographical phenomena are not strictly self-similar at all scales, but they can often be modeled by a stochastic fractal in which the scaling and self-similarity properties of the fractal have inexact patterns that can be described by statistics. Stochastic fractal sets relax the monofractal self-similarity assumption and measure many scales and resolutions in order to represent the varying form of a phenomenon as a function of local variables across space. In image interpretation, pattern is defined as the overall spatial form of related features, and the repetition of certain forms is a characteristic pattern found in many cultural objects and some natural features. Texture is the visual impression of coarseness or smoothness caused by the variability or uniformity of image tone or color. A potential use of fractals concerns the analysis of image texture. In these situations it is commonly observed that the degree of roughness or inexactness in an image or surface is a function of scale and not of experimental technique. 
The fractal dimension of remote sensing data could yield quantitative insight on the spatial complexity and information content contained within these data. A software package known as the Image Characterization and Modeling System (ICAMS) was used to explore how fractal dimension is related to surface texture and pattern. The ICAMS software was verified using simulated images of ideal fractal surfaces with specified dimensions. The fractal dimension for areas of homogeneous land cover in the vicinity of Huntsville, Alabama was measured to investigate the relationship between texture and resolution for different land covers.
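The box-counting estimate of fractal dimension that underlies this kind of image analysis can be sketched as follows; this is a generic minimal implementation for a binary image, not the ICAMS software (which works on continuous remotely sensed surfaces):

```python
import numpy as np

def box_count_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    # Box-counting dimension of a binary image: count occupied boxes N(s)
    # at each box size s, then fit log N(s) = -D log s + c.
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]  # grid must tile exactly
        boxes = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a completely filled region behaves like a plane, D = 2.
filled = np.ones((64, 64), dtype=bool)
print(round(box_count_dimension(filled), 2))  # → 2.0
```

For real imagery the log-log plot is rarely a perfect line, which is exactly the multi-scale, stochastic-fractal behavior the abstract describes.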
Reactor Physics Methods and Analysis Capabilities in SCALE
DeHart, Mark D [ORNL; Bowman, Stephen M [ORNL
2011-01-01
The TRITON sequence of the SCALE code system provides a powerful, robust, and rigorous approach for performing reactor physics analysis. This paper presents a detailed description of TRITON in terms of its key components used in reactor calculations. The ability to accurately predict the nuclide composition of depleted reactor fuel is important in a wide variety of applications. These applications include, but are not limited to, the design, licensing, and operation of commercial/research reactors and spent-fuel transport/storage systems. New complex design projects such as next-generation power reactors and space reactors require new high-fidelity physics methods, such as those available in SCALE/TRITON, that accurately represent the physics associated with both evolutionary and revolutionary reactor concepts as they depart from traditional and well-understood light water reactor designs.
SCALE 6: Comprehensive Nuclear Safety Analysis Code System
Bowman, Stephen M [ORNL
2011-01-01
Version 6 of the Standardized Computer Analyses for Licensing Evaluation (SCALE) computer software system developed at Oak Ridge National Laboratory, released in February 2009, contains significant new capabilities and data for nuclear safety analysis and marks an important update for this software package, which is used worldwide. This paper highlights the capabilities of the SCALE system, including continuous-energy flux calculations for processing multigroup problem-dependent cross sections, ENDF/B-VII continuous-energy and multigroup nuclear cross-section data, continuous-energy Monte Carlo criticality safety calculations, Monte Carlo radiation shielding analyses with automated three-dimensional variance reduction techniques, one- and three-dimensional sensitivity and uncertainty analyses for criticality safety evaluations, two- and three-dimensional lattice physics depletion analyses, fast and accurate source terms and decay heat calculations, automated burnup credit analyses with loading curve search, and integrated three-dimensional criticality accident alarm system analyses using coupled Monte Carlo criticality and shielding calculations.
Scaling and dimensional analysis of acoustic streaming jets
Moudjed, B.; Botton, V.; Henry, D.; Ben Hadid, H. [Laboratoire de Mécanique des Fluides et d’Acoustique, CNRS/Université de Lyon, Ecole Centrale de Lyon/Université Lyon 1/INSA de Lyon, ECL, 36 Avenue Guy de Collongue, 69134 Ecully Cedex (France); Garandet, J.-P. [CEA, Laboratoire d’Instrumentation et d’Expérimentation en Mécanique des Fluides et Thermohydraulique, DEN/DANS/DM2S/STMF/LIEFT, CEA-Saclay, F-91191 Gif-sur-Yvette Cedex (France)
2014-09-15
This paper focuses on acoustic streaming free jets. This is to say that progressive acoustic waves are used to generate a steady flow far from any wall. The derivation of the governing equations under the form of a nonlinear hydrodynamics problem coupled with an acoustic propagation problem is made on the basis of a time scale discrimination approach. This approach is preferred to the usually invoked amplitude perturbations expansion since it is consistent with experimental observations of acoustic streaming flows featuring hydrodynamic nonlinearities and turbulence. Experimental results obtained with a plane transducer in water are also presented together with a review of the former experimental investigations using similar configurations. A comparison of the shape of the acoustic field with the shape of the velocity field shows that diffraction is a key ingredient in the problem though it is rarely accounted for in the literature. A scaling analysis is made and leads to two scaling laws for the typical velocity level in acoustic streaming free jets; these are both observed in our setup and in former studies by other teams. We also perform a dimensional analysis of this problem: a set of seven dimensionless groups is required to describe a typical acoustic experiment. We find that a full similarity is usually not possible between two acoustic streaming experiments featuring different fluids. We then choose to relax the similarity with respect to sound attenuation and to focus on the case of a scaled water experiment representing an acoustic streaming application in liquid metals, in particular, in liquid silicon and in liquid sodium. We show that small acoustic powers can yield relatively high Reynolds numbers and velocity levels; this could be a virtue for heat and mass transfer applications, but a drawback for ultrasonic velocimetry.
Multi-scale analysis and simulation of powder blending in pharmaceutical manufacturing
Ngai, Samuel S. H
2005-01-01
A Multi-Scale Analysis methodology was developed and carried out for gaining fundamental understanding of the pharmaceutical powder blending process. Through experiment, analysis and computer simulations, microscopic ...
The Effects of Multi-Dimensional Competition on Education Market Outcomes
Karakaplan, Mustafa
2012-10-19
Abbreviations (truncated): ... private schools; BLS: U.S. Bureau of Labor Statistics; CBSA: Core Based Statistical Area; CCD: Common Core of Data, National Center for Education Statistics; CWI: Comparable wage index; DEA: Data envelopment analysis; f.o.b.: Free-on-board; GMM: Generalized...
Multi-dimensional Cosmological Radiative Transfer with a Variable Eddington Tensor Formalism
Nickolay Y. Gnedin; Tom Abel
2001-06-15
We present a new approach to numerically model continuum radiative transfer based on the Optically Thin Variable Eddington Tensor (OTVET) approximation. Our method ensures the exact conservation of the photon number and flux (in the explicit formulation) and automatically switches from the optically thick to the optically thin regime. It scales as N log N with the number of hydrodynamic resolution elements and is independent of the number of sources of ionizing radiation (i.e., it works equally fast for an arbitrary source function). We also describe an implementation of the algorithm in a Softened Lagrangian Hydrodynamics code (SLH) and a multi-frequency approach appropriate for hydrogen and helium continuum opacities. We present extensive tests of our method for single and multiple sources in homogeneous and inhomogeneous density distributions, as well as a realistic simulation of cosmological reionization.
Genome-Scale Analysis of Saccharomyces cerevisiae Metabolism and Ethanol Production
Mountziaris, T. J.
Genome-Scale Analysis of Saccharomyces cerevisiae Metabolism and Ethanol Production in Fed-Batch Culture. A model based on a genome-scale metabolic network reconstruction is developed for in silico analysis of Saccharomyces cerevisiae ... KEYWORDS: Saccharomyces cerevisiae; dynamic flux balance analysis; genome-scale metabolic ...
Nonlinearity analysis of model-scale jet noise
NASA Astrophysics Data System (ADS)
Gee, Kent L.; Atchley, Anthony A.; Falco, Lauren E.; Shepherd, Micah R.
2012-09-01
This paper describes the use of a spectrally-based "nonlinearity indicator" to complement ordinary spectral analysis of jet noise propagation data. The indicator, which involves the cross spectrum between the temporal acoustic pressure and the square of the acoustic pressure, stems directly from ensemble averaging the generalized Burgers equation. The indicator is applied to unheated model-scale jet noise from subsonic and supersonic nozzles. The results demonstrate how the indicator can be used to interpret the evolution of power spectra in the transition from the geometric near to far field. Geometric near-field and nonlinear effects can be distinguished from one another, thus lending additional physical insight into the propagation.
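The indicator described above, a cross spectrum between the acoustic pressure and its square, can be sketched with standard tools. The synthetic signal and processing parameters below are illustrative assumptions, not the authors' measurement or analysis chain:

```python
import numpy as np
from scipy.signal import csd

# Cross spectral density between p and p^2: significant values at a
# frequency f flag nonlinear energy transfer involving f, which is the
# idea behind the spectrally-based nonlinearity indicator.
fs = 10_000.0                                   # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(0)
p = np.sin(2 * np.pi * 500 * t) + 0.1 * rng.standard_normal(t.size)

f, Q = csd(p, p**2, fs=fs, nperseg=1024)        # Welch-averaged cross spectrum
print(f.shape, np.abs(Q).max() > 0)
```

In practice one examines Q(f) alongside the ordinary power spectrum as the waveform propagates, which is how near-field geometric effects are separated from cumulative nonlinear distortion.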
NASA Astrophysics Data System (ADS)
Fink, Wolfgang
2008-04-01
Many systems and processes, both natural and artificial, may be described by parameter-driven mathematical and physical models. We introduce a generally applicable Stochastic Optimization Framework (SOF) that can be interfaced to or wrapped around such models to optimize model outcomes by effectively "inverting" them. The Visual and Autonomous Exploration Systems Research Laboratory (http://autonomy.caltech.edu) at the California Institute of Technology (Caltech) has long-term experience in the optimization of multi-dimensional systems and processes. Several examples of successful application of a SOF are reviewed and presented, including biochemistry, robotics, device performance, mission design, parameter retrieval, and fractal landscape optimization. Applications of a SOF are manifold, such as in science, engineering, industry, defense & security, and reconnaissance/exploration. Keywords: multi-parameter optimization, design/performance optimization, gradient-based steepest-descent methods, local minima, global minimum, degeneracy, overlap parameter distribution, fitness function, stochastic optimization framework, Simulated Annealing, Genetic Algorithms, Evolutionary Algorithms, Genetic Programming, Evolutionary Computation, multi-objective optimization, Pareto-optimal front, trade studies
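The model-inversion idea can be sketched with Simulated Annealing, one of the techniques the abstract names: perturb the parameters, always keep improvements, and accept occasional uphill moves with probability exp(-dF/T) to escape local minima. The rugged two-parameter objective, cooling schedule, and step size below are toy assumptions, not one of the laboratory's actual models:

```python
import math, random

def anneal(objective, x0, n_steps=20_000, t0=1.0, step=0.1, seed=1):
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best, fbest = list(x), fx
    for k in range(n_steps):
        T = t0 * (1.0 - k / n_steps) + 1e-9        # linear cooling schedule
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = objective(cand)
        # Metropolis rule: downhill always, uphill with prob exp(-(fc-fx)/T).
        if fc < fx or rng.random() < math.exp((fx - fc) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    return best, fbest

def rugged(p):
    # Toy objective with many local wells; global minimum 0 at the origin.
    return sum(pi * pi + 2.0 * (1.0 - math.cos(5.0 * pi)) for pi in p)

best, fbest = anneal(rugged, [2.0, -2.0])
print(best, fbest)
```

A gradient-based steepest-descent run from the same starting point would typically stall in a nearby well, which is the failure mode the stochastic framework is designed to avoid.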
Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems
NASA Technical Reports Server (NTRS)
Casper, Jay; Dorrepaal, J. Mark
1990-01-01
The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
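The core ENO idea the abstract builds on, choosing the least-oscillatory interpolation stencil, can be shown in one dimension. This is a second-order sketch under assumed simplifications (uniform grid, 1D, no characteristic decomposition), not the paper's two-dimensional reconstruction operator:

```python
def eno2_interface_values(vbar):
    """Second-order ENO reconstruction of the value at each interior
    cell's right interface from cell averages vbar on a uniform grid.
    The stencil grows toward the side with the smaller divided
    difference: the essentially-non-oscillatory stencil choice that
    avoids interpolating across a discontinuity."""
    n = len(vbar)
    out = []
    for i in range(1, n - 1):
        dl = vbar[i] - vbar[i - 1]      # left divided difference
        dr = vbar[i + 1] - vbar[i]      # right divided difference
        if abs(dl) <= abs(dr):          # smoother data on the left
            out.append(vbar[i] + 0.5 * dl)
        else:                           # smoother data on the right
            out.append(vbar[i] + 0.5 * dr)
    return out

# a linear field is reconstructed exactly: cell averages i give
# interface values i + 1/2
vals = eno2_interface_values([0.0, 1.0, 2.0, 3.0, 4.0])
```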
Schoenberg, Poppy L A; Speckens, Anne E M
2015-02-01
To illuminate candidate neural working mechanisms of Mindfulness-Based Cognitive Therapy (MBCT) in the treatment of recurrent depressive disorder, parallel to the potential interplays between modulations in electro-cortical dynamics and depressive symptom severity and self-compassionate experience. Linear and nonlinear α and γ EEG oscillatory dynamics were examined concomitant to an affective Go/NoGo paradigm, pre-to-post MBCT or natural wait-list, in 51 recurrent depressive patients. Specific EEG variables investigated were: (1) induced event-related (de-) synchronisation (ERD/ERS), (2) evoked power, and (3) inter-/intra-hemispheric coherence. Secondary clinical measures included depressive severity and experiences of self-compassion. MBCT significantly downregulated α and γ power, reflecting increased cortical excitability. Enhanced α-desynchronisation/ERD was observed for negative material, as opposed to attenuated α-ERD towards positively valenced stimuli, suggesting activation of neural networks usually hypoactive in depression, related to positive emotion regulation. MBCT-related increase in left-intra-hemispheric α-coherence of the fronto-parietal circuit aligned with these synchronisation dynamics. Ameliorated depressive severity and increased self-compassionate experience pre-to-post MBCT correlated with α-ERD change. The multi-dimensional neural mechanisms of MBCT pertain to task-specific linear and non-linear neural synchronisation and connectivity network dynamics. We propose MBCT-related modulations in differing cortical oscillatory bands have discrete excitatory (enacting positive emotionality) and inhibitory (disengaging from negative material) effects, where mediation in the α and γ bands relates to the former. PMID:26052359
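The ERD/ERS index used above has a standard textbook definition: the percentage change of band power during an event relative to a baseline interval. The sketch below is that generic formula, not code from this study:

```python
import numpy as np

def erd_percent(trial_power, baseline_power):
    """Classical event-related (de)synchronisation index:
    ERD/ERS % = (event power - baseline power) / baseline power * 100.
    Negative values indicate desynchronisation (ERD), positive values
    synchronisation (ERS). Generic formula, not the study's own code."""
    a = float(np.mean(trial_power))
    r = float(np.mean(baseline_power))
    return (a - r) / r * 100.0

# band power dropping from 10 to 7 (arbitrary units) during the event
# is a 30% desynchronisation
erd = erd_percent(trial_power=[7.0, 7.0], baseline_power=[10.0, 10.0])
```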
Labenski, Matthew T; Fisher, Ashley A; Monks, Terrence J; Lau, Serrine S
2011-01-01
Recent technological advancements in mass spectrometry facilitate the detection of chemical-induced posttranslational modifications (PTMs) that may alter cell signaling pathways or alter the structure and function of the modified proteins. To identify such protein adducts (Kleiner et al., Chem Res Toxicol 11:1283-1290, 1998), multi-dimensional protein identification technology (MuDPIT) has been utilized. MuDPIT was first described by Link et al. as a new technique useful for protein identification from a complex mixture of proteins (Link et al., Nat Biotechnol 17:676-682, 1999). MuDPIT utilizes two different HPLC columns to further enhance peptide separation, increasing the number of peptide hits and protein coverage. The technology is extremely useful for proteomes, such as the urine proteome, samples from immunoprecipitations, and 1D gel bands resolved from a tissue homogenate or lysate. In particular, MuDPIT has enhanced the field of adduct hunting for adducted peptides, since it is more capable of identifying less abundant peptides, such as those that are adducted, than the more standard LC-MS/MS. The site-specific identification of covalently adducted proteins is a prerequisite for understanding the biological significance of chemical-induced PTMs and the subsequent toxicological response they elicit. PMID:20972764
Zhang, Ming-Hua; Gu, Jun-Fei; Feng, Liang; Jia, Xiao-Bin
2013-11-01
Quality control is one of the key problems in the modernization and internationalization of traditional Chinese medicines (TCMs). Because TCMs are integral, systematic preparations, their effect in disease prevention and treatment results from multi-component synergy. Currently, the quality of TCMs is mostly assessed with a single index, which cannot reflect their integrity. Since TCM components act through multiple targets and pathways, only by elucidating the specific composition and structural relations among the inherent components, and by determining the optimal composition and structure ratio for preventing and treating disease with the best efficacy, safety and stability, can we move beyond the conventional quantitative model and achieve scientific, integral quality control in a real sense. On the basis of the component structure theory, we propose a "multi-dimensional structure and process dynamics quality control system" in this article, and systematically expound the optimal efficacy of TCMs, in order to provide a theoretical basis for improving the efficacy of TCM preparation products. PMID:24494540
Florian Roemer; Hanna Becker; Martin Haardt
2010-01-01
Subspace-based high-resolution parameter estimation algorithms such as ESPRIT, MUSIC, or RARE are known as efficient and versatile tools in various signal processing applications including radar, sonar, medical imaging, or the analysis of MIMO channel sounder measurements. Since these techniques are based on the singular value decomposition (SVD), their performance can be analyzed with the help of SVD-based perturbation theory. Recently we
Understanding large scale HPC systems through scalable monitoring and analysis.
Mayo, Jackson R.; Chen, Frank Xiaoxiao; Pebay, Philippe Pierre; Wong, Matthew H.; Thompson, David; Gentile, Ann C.; Roe, Diana C.; De Sapio, Vincent; Brandt, James M.
2010-09-01
As HPC systems grow in size and complexity, diagnosing problems and understanding system behavior, including failure modes, becomes increasingly difficult and time consuming. At Sandia National Laboratories we have developed a tool, OVIS, to facilitate large scale HPC system understanding. OVIS incorporates an intuitive graphical user interface, an extensive and extendable data analysis suite, and a 3-D visualization engine that allows visual inspection of both raw and derived data on a geometrically correct representation of an HPC system. This talk will cover system instrumentation, data collection (including log files and the complications of meaningful parsing), analysis, visualization of both raw and derived information, and how data can be combined to increase system understanding and efficiency.
Irregularities and scaling in signal and image processing: multifractal analysis
NASA Astrophysics Data System (ADS)
Abry, Patrice; Jaffard, Stéphane; Wendt, Herwig
2015-03-01
B. Mandelbrot gave a new birth to the notions of scale invariance, self-similarity and non-integer dimensions, gathering them as the founding corner-stones used to build up fractal geometry. The first purpose of the present contribution is to review and relate together these key notions, explore their interplay and show that they are different facets of a single intuition. Second, we will explain how these notions lead to the derivation of the mathematical tools underlying multifractal analysis. Third, we will reformulate these theoretical tools into a wavelet framework, hence enabling their better theoretical understanding as well as their efficient practical implementation. B. Mandelbrot used his concept of fractal geometry to analyze real-world applications of very different natures. As a tribute to his work, applications of various origins, and where multifractal analysis proved fruitful, are revisited to illustrate the theoretical developments proposed here.
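The scaling analysis sketched above asks whether the structure-function exponents ζ(q) are linear in q (monofractal) or concave (multifractal). The increment-based estimator below is the simplest illustration of that question; the wavelet-leader framework the abstract describes is the robust version of the same idea, and all names here are illustrative:

```python
import numpy as np

def scaling_exponents(x, qs=(1, 2, 3), scales=(1, 2, 4, 8, 16)):
    """Estimate structure-function scaling exponents zeta(q) from
    increments of a signal: S_q(s) = mean |x(t+s) - x(t)|**q ~ s**zeta(q).
    Multifractal analysis examines whether zeta(q) is linear in q
    (monofractal) or strictly concave (multifractal)."""
    x = np.asarray(x, dtype=float)
    zetas = []
    for q in qs:
        logS = [np.log(np.mean(np.abs(x[s:] - x[:-s]) ** q)) for s in scales]
        # slope of log S_q(s) against log s estimates zeta(q)
        zetas.append(np.polyfit(np.log(scales), logS, 1)[0])
    return dict(zip(qs, zetas))

# Brownian motion is monofractal with zeta(q) ~ q/2, i.e. linear in q
rng = np.random.default_rng(1)
bm = np.cumsum(rng.standard_normal(20_000))
z = scaling_exponents(bm)
```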
A Multi-scale Approach to Urban Thermal Analysis
NASA Technical Reports Server (NTRS)
Gluch, Renne; Quattrochi, Dale A.
2005-01-01
An environmental consequence of urbanization is the urban heat island effect, a situation where urban areas are warmer than surrounding rural areas. The urban heat island phenomenon results from the replacement of natural landscapes with impervious surfaces such as concrete and asphalt and is linked to adverse economic and environmental impacts. In order to better understand the urban microclimate, a greater understanding of the urban thermal pattern (UTP), including an analysis of the thermal properties of individual land covers, is needed. This study examines the UTP by means of thermal land cover response for the Salt Lake City, Utah, study area at two scales: 1) the community level, and 2) the regional or valleywide level. Airborne ATLAS (Advanced Thermal Land Applications Sensor) data, a high spatial resolution (10-meter) dataset appropriate for an environment containing a concentration of diverse land covers, are used for both land cover and thermal analysis at the community level. The ATLAS data consist of 15 channels covering the visible, near-IR, mid-IR and thermal-IR wavelengths. At the regional level Landsat TM data are used for land cover analysis while the ATLAS channel 13 data are used for the thermal analysis. Results show that a heat island is evident at both the community and the valleywide level where there is an abundance of impervious surfaces. ATLAS data perform well in community level studies in terms of land cover and thermal exchanges, but other, more coarse-resolution data sets are more appropriate for large-area thermal studies. Thermal response per land cover is consistent at both levels, which suggests potential for urban climate modeling at multiple scales.
Song, Yang; Zhu, Li-an; Wang, Su-li; Leng, Lin; Bucala, Richard; Lu, Liang-Jing
2014-01-01
Objective To evaluate the psychometric properties and clinical utility of the Chinese Multidimensional Health Assessment Questionnaire (MDHAQ-C) in patients with rheumatoid arthritis (RA) in China. Methods 162 RA patients were recruited in the evaluation process. The reliability of the questionnaire was tested by internal consistency and item analysis. Convergent validity was assessed by correlations of MDHAQ-C with the Health Assessment Questionnaire (HAQ), the 36-item Short-Form Health Survey (SF-36) and the Hospital Anxiety and Depression Scale (HAD). Discriminant validity was tested in groups of patients with varied disease activities and functional classes. To evaluate the clinical values, correlations were calculated between MDHAQ-C and indices of clinical relevance and disease activity. Agreement with the Disease Activity Score (DAS28) and Clinical Disease Activity Index (CDAI) was estimated. Results The Cronbach's alpha was 0.944 in the Function scale (FN) and 0.768 in the scale of psychological status (PS). The item analysis indicated all the items of FN and PS are correlated at an acceptable level. MDHAQ-C correlated significantly with the other questionnaires on most scales, and scale scores differed significantly across groups of different disease activity and functional status. MDHAQ-C has moderate to high correlation with most clinical indices and high correlation with a Spearman coefficient of 0.701 for DAS28 and 0.843 for CDAI. The overall agreement of categories was satisfactory. Conclusion MDHAQ-C is a reliable, valid instrument for functional measurement and a feasible, informative quantitative index for busy clinical settings in Chinese RA patients. PMID:24848431
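The Cronbach's alpha figures reported above (0.944 for FN, 0.768 for PS) come from the standard internal-consistency formula, which is short enough to state in code. This is the generic statistic, not the study's own analysis script:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    the internal-consistency statistic reported for the MDHAQ-C scales."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = X.sum(axis=1).var(ddof=1)    # variance of the total score
    return k / (k - 1) * (1.0 - item_var / total_var)

# perfectly parallel items give the maximum alpha of 1
a = cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]])
```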
NASA Astrophysics Data System (ADS)
Zhou, Ning; Kolobov, Vladimir; Kudriavtsev, Vladimir
2001-10-01
The commercial CFD-ACE+ software has been extended to account for ion energy dependent surface reactions. The ion energy distribution function and the mean ion energy at a biased wafer were obtained using the Riley sheath model extended by the NASA group (Bose et al., J. Appl. Phys. v.87, 7176(2000)). The plasma chemistry model (by P. Ho et al., SAND2001-1292) consisting of 132-step gas-phase reactions and 55-step ion energy dependent surface reactions, was implemented to simulate the C2F6 plasma etching of silicon dioxide in an Inductively Coupled Plasma. Validation studies have been performed against the experimental data by Anderson et al. of UNM for a lab-scale GEC reactor. For a wide range of operating conditions (pressure: 5-25 mTorr; plasma power: 205-495 Watts; bias power: 22-148 Watts), the average etch rate calculated by CFD-ACE+ 2-D simulations agrees very well with those by 0-D AURORA predictions and the experimental data. The CFD-ACE+ simulations allow one to study the radial uniformity of the etch rate depending on discharge conditions.
Large-scale analysis of microRNA evolution
2012-01-01
Background In animals, microRNAs (miRNA) are important genetic regulators. Animal miRNAs appear to have expanded in conjunction with an escalation in complexity during early bilaterian evolution. Their small size and high degree of similarity make them challenging for phylogenetic approaches. Furthermore, genomic locations encoding miRNAs are not clearly defined in many species. A number of studies have looked at the evolution of individual miRNA families. However, we currently lack resources for large-scale analysis of miRNA evolution. Results We addressed some of these issues in order to analyse the evolution of miRNAs. We perform syntenic and phylogenetic analysis for miRNAs from 80 animal species. We present synteny maps, phylogenies and functional data for miRNAs across these species. These data represent the basis of our analyses and also act as a resource for the community. Conclusions We use these data to explore the distribution of miRNAs across phylogenetic space, characterise their birth and death, and examine functional relationships between miRNAs and other genes. These data confirm a number of previously reported findings on a larger scale and also offer novel insights into the evolution of the miRNA repertoire in animals, and its genomic organization. PMID:22672736
Psychometric analysis of the Ten-Item Perceived Stress Scale.
Taylor, John M
2015-03-01
Although the 10-item Perceived Stress Scale (PSS-10) is a popular measure, a review of the literature reveals 3 significant gaps: (a) There is some debate as to whether a 1- or a 2-factor model best describes the relationships among the PSS-10 items, (b) little information is available on the performance of the items on the scale, and (c) it is unclear whether PSS-10 scores are subject to gender bias. These gaps were addressed in this study using a sample of 1,236 adults from the National Survey of Midlife Development in the United States II. Based on self-identification, participants were 56.31% female, 77% White, 17.31% Black and/or African American, and the average age was 54.48 years (SD = 11.69). Findings from an ordinal confirmatory factor analysis suggested the relationships among the items are best described by an oblique 2-factor model. Item analysis using the graded response model provided no evidence of item misfit and indicated both subscales have a wide estimation range. Although t tests revealed a significant difference between the means of males and females on the Perceived Helplessness Subscale (t = 4.001, df = 1234, p < .001), measurement invariance tests suggest that PSS-10 scores may not be substantially affected by gender bias. Overall, the findings suggest that inferences made using PSS-10 scores are valid. However, this study calls into question inferences where the multidimensionality of the PSS-10 is ignored. PMID:25346996
NASA Astrophysics Data System (ADS)
Tsai, Chin-Chung; Liu, Shiang-Yao
2005-10-01
The purpose of this study was to describe the development and validation of an instrument to identify various dimensions of scientific epistemological views (SEVs) held by high school students. The instrument included five SEV dimensions (subscales): the role of social negotiation on science, the invented and creative reality of science, the theory-laden exploration of science, the cultural impacts on science, and the changing features of science. Six hundred and thirteen high school students in Taiwan responded to the instrument. Data analysis indicated that the instrument developed in this study had satisfactory validity and reliability measures. Correlation analysis and in-depth interviews supported the legitimacy of using multiple dimensions in representing student SEVs. Significant differences were found between male and female students, and between students’ and their teachers’ responses on some SEV dimensions. Suggestions were made about the use of the instrument to examine complicated interplays between SEVs and science learning, to evaluate science instruction, and to understand the cultural differences in epistemological views of science.
Roberto Muscedere; Vassil S. Dimitrov; Graham A. Jullien; William C. Miller
2002-01-01
The Multi-Dimensional Logarithmic Number System (MDLNS), with similar properties to the Logarithmic Number System (LNS), provides more degrees of freedom than the LNS by virtue of having two orthogonal bases and the ability to use multiple digits. Unlike the LNS, there is no direct functional relationship between the binary/floating-point representation and the MDLNS representation. Traditionally look-up tables (LUTs) were
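The "two orthogonal bases, multiple digits" representation described above can be illustrated with a greedy double-base approximation. This sketch assumes bases 2 and 3 and a bounded exponent search range; it is an illustration of the representation, not the hardware conversion method the paper discusses:

```python
def mdlns_digits(x, n_digits=2, rng=8):
    """Greedy multi-digit, double-base (2, 3) approximation:
    x ~ sum_i s_i * 2**a_i * 3**b_i  with signs s_i in {+1, -1}.
    The exponent search range `rng` is an illustrative choice; MDLNS
    hardware constrains the exponents by design instead."""
    residual = x
    digits = []
    for _ in range(n_digits):
        # exhaustively pick the signed two-base term closest to the residual
        best = min(((s * 2**a * 3**b, s, a, b)
                    for s in (1, -1)
                    for a in range(-rng, rng + 1)
                    for b in range(-rng, rng + 1)),
                   key=lambda t: abs(residual - t[0]))
        digits.append(best[1:])             # store (sign, a, b)
        residual -= best[0]
    return digits, residual                 # residual is the final error

# two digits already approximate an arbitrary constant closely
digits, err = mdlns_digits(0.7)
approx = sum(s * 2**a * 3**b for s, a, b in digits)
```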
Investigation of Biogrout processes by numerical analysis at pore scale
NASA Astrophysics Data System (ADS)
Bergwerff, Luke; van Paassen, Leon A.; Picioreanu, Cristian; van Loosdrecht, Mark C. M.
2013-04-01
Biogrout is a soil-improving process that aims to improve the strength of sandy soils. The process is based on microbially induced calcite precipitation (MICP). In this study the main process is based on denitrification facilitated by bacteria indigenous to the soil using substrates, which can be derived from pretreated waste streams containing calcium salts of fatty acids and calcium nitrate, making it a cost-effective and environmentally friendly process. The goal of this research is to improve the understanding of the process by numerical analysis so that it may be improved and applied properly for varying applications, such as borehole stabilization, liquefaction prevention, levee fortification and mitigation of beach erosion. During the denitrification process there are many phases present in the pore space, including a liquid phase containing solutes, crystals, bacteria forming biofilms and gas bubbles. Due to the number of phases and their dynamic changes (multiphase flow with (non-linear) reactive transport), there are many interactions, making the process very complex. To understand this complexity in the system, the interactions between these phases are studied in a reductionist approach, increasing the complexity of the system by one phase at a time. The model will initially include flow, solute transport, crystal nucleation and growth in 2D at pore scale. The flow will be described by Navier-Stokes equations. Initial studies and simulations have revealed that describing crystal growth for this application on a fixed grid can introduce significant fundamental errors. Therefore a level set method will be employed to better describe the interface of developing crystals in between sand grains. Afterwards, the model will be expanded to 3D to provide more realistic flow, nucleation and clogging behaviour at pore scale. Next, biofilms and, lastly, gas bubbles may be added to the model.
From the results of these pore scale models the behaviour of the system may be studied and eventually observations may be extrapolated to a larger continuum scale.
Philip E. Wannamaker
2007-12-31
The overall goal of this project has been to develop desktop capability for 3-D EM inversion as a complement or alternative to existing massively parallel platforms. We have been fortunate in having a uniquely productive cooperative relationship with Kyushu University (Y. Sasaki, P.I.) who supplied a base-level 3-D inversion source code for MT data over a half-space based on staggered grid finite differences. Storage efficiency was greatly increased in this algorithm by implementing a symmetric L-U parameter step solver, and by loading the parameter step matrix one frequency at a time. Rules were established for achieving sufficient Jacobian accuracy versus mesh discretization, and regularization was much improved by scaling the damping terms according to the influence of parameters upon the measured response. The modified program was applied to 101 five-channel MT stations taken over the Coso East Flank area supported by the DOE and the Navy. Inversion of these data on a 2 Gb desktop PC using a half-space starting model recovered the main features of the subsurface resistivity structure seen in a massively parallel inversion which used a series of stitched 2-D inversions as a starting model. In particular, a steeply west-dipping, N-S trending conductor was resolved under the central-west portion of the East Flank. It may correspond to a highly saline magmatic fluid component, residual fluid from boiling, or less likely cryptic acid sulphate alteration, all in a steep fracture mesh. This work earned student Virginia Maris the Best Student Presentation award at the 2006 GRC annual meeting.
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang
1998-01-01
This project is about the development of high order, non-oscillatory type schemes for computational fluid dynamics. Algorithm analysis, implementation, and applications are performed. Collaborations with NASA scientists have been carried out to ensure that the research is relevant to NASA objectives. The combination of the ENO finite difference method with a spectral method in two space dimensions is considered, jointly with Cai [3]. The resulting scheme behaves nicely for the two dimensional test problems with or without shocks. Jointly with Cai and Gottlieb, we have also considered one-sided filters for spectral approximations to discontinuous functions [2]. We proved theoretically the existence of filters to recover spectral accuracy up to the discontinuity. We also constructed such filters for practical calculations.
A genuinely multi-dimensional upwind cell-vertex scheme for the Euler equations
NASA Technical Reports Server (NTRS)
Powell, Kenneth G.; Van Leer, Bram
1989-01-01
The solution of the two-dimensional Euler equations is based on the two-dimensional linear convection equation and the Euler-equation decomposition developed by Hirsch et al. The scheme is genuinely two-dimensional. At each iteration, the data are locally decomposed into four variables, allowing convection in appropriate directions. This is done via a cell-vertex scheme with a downwind-weighted distribution step. The scheme is conservative, and third-order accurate in space. The derivation and stability analysis of the scheme for the convection equation, and the derivation of the extension to the Euler equations are given. Preconditioning techniques based on local values of the convection speeds are discussed. The scheme for the Euler equations is applied to two channel-flow problems. It is shown to converge rapidly to a solution that agrees well with that of a third-order upwind solver.
Mascagni, Michael
Analysis of Large-scale Grid-based Monte Carlo Applications
Yaohang Li; Michael Mascagni (Department of Computer Science and School of Computational Science)
NASA Technical Reports Server (NTRS)
Krishnamurthy, Thiagarajan
2010-01-01
Equivalent plate analysis is often used to replace the computationally expensive finite element analysis in initial design stages or in conceptual design of aircraft wing structures. The equivalent plate model can also be used to design a wind tunnel model to match the stiffness characteristics of the wing box of a full-scale aircraft wing model while satisfying strength-based requirements. An equivalent plate analysis technique is presented to predict the static and dynamic response of an aircraft wing with or without damage. First, a geometric scale factor and a dynamic pressure scale factor are defined to relate the stiffness, load and deformation of the equivalent plate to the aircraft wing. A procedure using an optimization technique is presented to create scaled equivalent plate models from the full scale aircraft wing using geometric and dynamic pressure scale factors. The scaled models are constructed by matching the stiffness of the scaled equivalent plate with the scaled aircraft wing stiffness. It is demonstrated that the scaled equivalent plate model can be used to predict the deformation of the aircraft wing accurately. Once the full equivalent plate geometry is obtained, any other scaled equivalent plate geometry can be obtained using the geometric scale factor. Next, an average frequency scale factor is defined as the average ratio of the frequencies of the aircraft wing to the frequencies of the full-scale equivalent plate. The average frequency scale factor combined with the geometric scale factor is used to predict the frequency response of the aircraft wing from the scaled equivalent plate analysis. A procedure is outlined to estimate the frequency response and the flutter speed of an aircraft wing from the equivalent plate analysis using the frequency scale factor and geometric scale factor. The equivalent plate analysis is demonstrated using an aircraft wing without damage and another with damage.
Both of the problems show that the scaled equivalent plate analysis can be successfully used to predict the frequencies and flutter speed of a typical aircraft wing.
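The average frequency scale factor defined above is a simple averaged ratio over paired modes. The sketch below implements that definition as stated in the abstract; the function name is illustrative, and mode pairing between wing and plate models is assumed to be established already:

```python
def average_frequency_scale_factor(wing_freqs, plate_freqs):
    """Average ratio of aircraft-wing modal frequencies to the
    full-scale equivalent-plate frequencies, as defined in the
    abstract. Used (with the geometric scale factor) to map the
    plate model's frequency response back to the wing."""
    if len(wing_freqs) != len(plate_freqs):
        raise ValueError("wing and plate mode lists must be paired")
    ratios = [w / p for w, p in zip(wing_freqs, plate_freqs)]
    return sum(ratios) / len(ratios)

# wing modes uniformly 10% above the plate model's -> factor 1.1
sf = average_frequency_scale_factor([11.0, 22.0, 33.0], [10.0, 20.0, 30.0])
```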
The vulnerability cube: a multi-dimensional framework for assessing relative vulnerability.
Lin, Brenda B; Morefield, Philip E
2011-09-01
The diversity and abundance of information available for vulnerability assessments can present a challenge to decision-makers. Here we propose a framework to aggregate and present socioeconomic and environmental data in a visual vulnerability assessment that will help prioritize management options for communities vulnerable to environmental change. Socioeconomic and environmental data are aggregated into distinct categorical indices across three dimensions and arranged in a cube, so that individual communities can be plotted in a three-dimensional space to assess the type and relative magnitude of the communities' vulnerabilities based on their position in the cube. We present an example assessment using a subset of the USEPA National Estuary Program (NEP) estuaries: coastal communities vulnerable to the effects of environmental change on ecosystem health and water quality. Using three categorical indices created from a pool of publicly available data (socioeconomic index, land use index, estuary condition index), the estuaries were ranked based on their normalized averaged scores and then plotted along the three axes to form a vulnerability cube. The position of each community within the three-dimensional space communicates the types of vulnerability endemic to each estuary and allows estuaries with like vulnerabilities to be clustered into typologies. The typologies highlight specific vulnerability descriptions that may be helpful in creating specific management strategies. The data used to create the categorical indices are flexible depending on the goals of the decision makers, as different data should be chosen based on availability or importance to the system. Therefore, the analysis can be tailored to specific types of communities, allowing a data rich process to inform decision-making. PMID:21638079
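The cube construction above amounts to binning three normalized indices along orthogonal axes. A minimal sketch, assuming normalized 0-1 indices and an illustrative three-class binning per axis (the abstract does not specify the number of classes):

```python
def cube_position(socioeconomic, land_use, estuary_condition, n_bins=3):
    """Place a community in the vulnerability cube: each normalized
    index in [0, 1] is binned along one axis, and the resulting cell
    conveys the type and relative magnitude of vulnerability.
    Index names follow the abstract; n_bins=3 is an assumption."""
    def to_bin(v):
        return min(int(v * n_bins), n_bins - 1)  # clamp v = 1.0 into top bin
    return (to_bin(socioeconomic), to_bin(land_use), to_bin(estuary_condition))

# a community high on all three indices lands in the most-vulnerable corner
cell = cube_position(0.9, 0.8, 0.95)
```

Communities sharing a cell (or neighboring cells) form the like-vulnerability typologies the abstract describes.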
MULTI-DIMENSIONAL RADIATIVE TRANSFER TO ANALYZE HANLE EFFECT IN Ca II K LINE AT 3933 Å
Anusha, L. S.; Nagendra, K. N., E-mail: bhasari@mps.mpg.de, E-mail: knn@iiap.res.in [Indian Institute of Astrophysics, Koramangala, 2nd Block, Bangalore 560 034 (India)
2013-04-20
Radiative transfer (RT) studies of the linearly polarized spectrum of the Sun (the second solar spectrum) have generally focused on line formation, with an aim to understand the vertical structure of the solar atmosphere using one-dimensional (1D) model atmospheres. Modeling spatial structuring in the observations of the linearly polarized line profiles requires the solution of the multi-dimensional (multi-D) polarized RT equation and a model solar atmosphere obtained by magnetohydrodynamical (MHD) simulations of the solar atmosphere. Our aim in this paper is to analyze the chromospheric resonance line Ca II K at 3933 Å using multi-D polarized RT with the Hanle effect and partial frequency redistribution (PRD) in line scattering. We use an atmosphere that is constructed by a two-dimensional snapshot of the three-dimensional MHD simulations of the solar photosphere, combined with columns of a 1D atmosphere in the chromosphere. This paper represents the first application of polarized multi-D RT to explore the chromospheric lines using multi-D MHD atmospheres, with PRD as the line scattering mechanism. We find that the horizontal inhomogeneities caused by MHD in the lower layers of the atmosphere are responsible for strong spatial inhomogeneities in the wings of the linear polarization profiles, while the use of a horizontally homogeneous chromosphere (FALC) produces spatially homogeneous linear polarization in the line core. The introduction of different magnetic field configurations modifies the line core polarization through the Hanle effect and can cause spatial inhomogeneities in the line core. A comparison of our theoretical profiles with the observations of this line shows that the MHD structuring in the photosphere is sufficient to reproduce the line wings, while in the line core only the line-center polarization can be reproduced, using the Hanle effect.
For a simultaneous modeling of the line wings and the line core (including the line center), MHD atmospheres with inhomogeneities in the chromosphere are required.
Pincus, T; Yazici, Y; Bergman, M
2005-01-01
The HAQ has become the pre-eminent patient questionnaire used in rheumatology. It is easily completed by patients, but not easily reviewed and scored in standard clinical care, and has some minor psychometric limitations, as do all questionnaires. Modifications of the HAQ have been made to facilitate use in standard care, particularly to include 8-10 activities of daily living, along with scores for pain and global status and other information on one side of one page for rapid review by the clinician. A patient questionnaire for standard care should be limited to 2 sides of 1 page, in a format amenable to "eyeball" review by the clinician in 5 seconds or less. It can be scored formally in 15-20 seconds or less, and is useful in patients with all rheumatic diseases. The current version of a multi-dimensional HAQ (MDHAQ) includes scoring templates on the questionnaire to allow formal scoring in less than 15 seconds by a rheumatologist or an assistant, for possible entry onto a paper and/or computerized flow sheet. Various versions of the MDHAQ may also include a "constant" region of physical function, pain and patient global status, and "variable" regions of fatigue, morning stiffness, psychological distress, change in status, a review of systems, a rheumatoid arthritis disease activity self-report joint count (RADAI), review of recent health events, and review of medications. The MDHAQ can be used in the infrastructure of rheumatology care to include quantitative data in standard care of all patients with all rheumatic diseases. PMID:16273781
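The 15-second template arithmetic described above is essentially an average-and-rescale of a block of 0-3 items. The sketch below shows one common scoring convention of that kind; it is an illustration only, and the printed questionnaire's own templates are the authoritative method:

```python
def mdhaq_function_score(item_scores):
    """Score a block of 0-3 activity-of-daily-living items onto a 0-10
    scale by averaging and rescaling, mimicking the quick template
    arithmetic the abstract describes. One common convention for
    HAQ-family instruments, not the official MDHAQ template itself."""
    if not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("each item must be scored 0-3")
    return round(sum(item_scores) / len(item_scores) * 10 / 3, 1)

# ten items all scored 1 ("with some difficulty") -> 3.3 on the 0-10 scale
fn = mdhaq_function_score([1] * 10)
```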
NASA Astrophysics Data System (ADS)
Pandarinath, Kailasa
2014-12-01
Several new multi-dimensional tectonomagmatic discrimination diagrams employing log-ratio variables of chemical elements and a probability-based procedure have been developed during the last 10 years for basic-ultrabasic, intermediate, and acid igneous rocks. Numerous extensive evaluations of these newly developed diagrams have indicated their successful application in determining the original tectonic setting of younger and older, as well as sea-water and hydrothermally altered, volcanic rocks. In the present study, these diagrams were applied to Precambrian rocks of Mexico (southern and north-eastern) and Argentina. The study indicated the original tectonic setting of Precambrian rocks from the Oaxaca Complex of southern Mexico as follows: (1) a dominant rift (within-plate) setting for rocks of 1117-988 Ma age; (2) a dominant rift and less-dominant arc setting for rocks of 1157-1130 Ma age; and (3) a combined tectonic setting of collision and rift for the Etla Granitoid Pluton (917 Ma age). The diagrams indicated the original tectonic setting of the Precambrian rocks from north-eastern Mexico as: (1) a dominant arc tectonic setting for the rocks of 988 Ma age; and (2) an arc and collision setting for the rocks of 1200-1157 Ma age. Similarly, the diagrams indicated the dominant original tectonic setting for the Precambrian rocks from Argentina as: (1) within-plate (continental rift-ocean island) and continental rift (CR) settings for the rocks of 800 Ma and 845 Ma age, respectively; and (2) an arc setting for the rocks of 1174-1169 Ma and 1212-1188 Ma age. The inferred tectonic settings for these Precambrian rocks are, in general, in accordance with the tectonic settings reported in the literature, though some of the diagrams yield inconsistent inferences of tectonic setting. 
The present study confirms the importance of these newly developed discriminant-function based diagrams in inferring the original tectonic setting of Precambrian rocks.
Scaling law analysis of paraffin thin films on different surfaces
Dotto, M. E. R.; Camargo, S. S. Jr. [Engenharia Metalurgica e de Materials, Universidade Federal do Rio de Janeiro, Cx. Postal 68505, Rio de Janeiro, RJ, CEP 21945-970 (Brazil)
2010-01-15
The dynamics of paraffin deposit formation on different surfaces was analyzed based on scaling laws. Carbon-based films were deposited onto silicon (Si) and stainless steel substrates from methane (CH{sub 4}) gas using radio-frequency plasma-enhanced chemical vapor deposition. The different substrates were characterized with respect to their surface energy by contact angle measurements, surface roughness, and morphology. Paraffin thin films were obtained by the casting technique and were subsequently characterized by atomic force microscopy in noncontact mode. The results indicate that the morphology of the paraffin deposits is strongly influenced by the substrates used. Scaling-law analysis for the coated substrates reveals two distinct dynamics: a local roughness exponent ({alpha}{sub local}) associated with short-range surface correlations and a global roughness exponent ({alpha}{sub global}) associated with long-range surface correlations. The local dynamics is described by the Wolf-Villain model, and the global dynamics is described by the Kardar-Parisi-Zhang model. A local correlation length (L{sub local}) defines the transition between the local and global dynamics, with L{sub local} approximately 700 nm, in accordance with the spacing of planes measured from atomic force micrographs. For uncoated substrates, the growth dynamics is related to the Edwards-Wilkinson model.
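As a concrete illustration of the roughness-exponent estimation underlying this kind of scaling-law analysis (a sketch, not the authors' procedure: the profile here is a synthetic random walk, whose roughness exponent is known to be 0.5), the local exponent can be read off as the slope of log w(l) versus log l, where w(l) is the interface width at window size l:

```python
import math
import random

def local_width(h, l):
    """RMS deviation of the profile from its local mean, averaged over windows of size l."""
    w2, n = 0.0, 0
    for start in range(0, len(h) - l + 1, l):
        seg = h[start:start + l]
        mean = sum(seg) / l
        w2 += sum((x - mean) ** 2 for x in seg) / l
        n += 1
    return math.sqrt(w2 / n)

def roughness_exponent(h, scales):
    """Least-squares slope of log w(l) vs log l over the given window sizes."""
    xs = [math.log(l) for l in scales]
    ys = [math.log(local_width(h, l)) for l in scales]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

random.seed(0)
# Brownian (random-walk) profile: known roughness exponent alpha = 0.5
h = [0.0]
for _ in range(1 << 14):
    h.append(h[-1] + random.gauss(0, 1.0))
alpha = roughness_exponent(h, [8, 16, 32, 64, 128])
print(round(alpha, 2))
```

The fitted slope comes out close to the theoretical 0.5; on experimental AFM profiles the same fit, applied below and above L_local, would yield the local and global exponents, respectively.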
Characterization of P2P IPTV Traffic: Scaling Analysis
Silverston, Thomas; Salamatian, Kave
2007-01-01
P2P IPTV applications are emerging on the Internet and will be massively used in the future. It is expected that P2P IPTV will contribute to an increase in overall Internet traffic. In this context, it is important to measure the impact of P2P IPTV on networks and to characterize this traffic. During the 2006 FIFA World Cup, we performed an extensive measurement campaign. We measured the network traffic generated by broadcasting soccer games with the most popular P2P IPTV applications, namely PPLive, PPStream, SOPCast, and TVAnts. From the collected data, we characterized the P2P IPTV traffic structure at different time scales. To the best of our knowledge, this is the first work to present a complete multiscale analysis of P2P IPTV traffic. Our observations show that the network traffic does not exhibit the same scaling behavior depending on whether the applications use TCP or UDP. For all the applications, the download traffic differs from the upload traffic, and the signaling traffic has an impact on the download traffic.
Anomaly Detection in Multiple Scale for Insider Threat Analysis
Kim, Yoohwan; Sheldon, Frederick T.; Hively, Lee M. (ORNL)
2012-01-01
We propose a method to quantify malicious insider activity with statistical and graph-based analysis aided by semantic scoring rules. Different types of personal activities or interactions are monitored to form a set of directed weighted graphs. The semantic scoring rules assign higher scores to more significant and suspicious events. We then build personal activity profiles in the form of score tables. Profiles are created at multiple scales, where low-level profiles are aggregated into more stable higher-level profiles within the subject or object hierarchy. Further, the profiles are created at different time scales such as day, week, or month. During operation, the insider's current activity profile is compared to the historical profiles to produce an anomaly score. For each subject with a high anomaly score, a subgraph of connected subjects is extracted to look for any related score movement. Finally, the subjects are ranked by their anomaly scores to help analysts focus on high-scoring subjects. The threat-ranking component supports the interaction between the User Dashboard and the Insider Threat Knowledge Base portal. The portal includes a repository for historical results, i.e., adjudicated cases containing all of the information first presented to the user, along with any additional insights to help the analysts. In this paper we present the framework of the proposed system and the operational algorithms.
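The comparison of a current activity profile against historical profiles can be sketched as a per-category z-score sum (illustrative only: the paper's semantic scoring rules and graph analysis are not reproduced here, and the category names and counts are made up):

```python
from statistics import mean, pstdev

def anomaly_score(history, current):
    """Sum of per-category z-scores of the current activity counts
    against the subject's historical daily profiles."""
    score = 0.0
    for cat, value in current.items():
        past = [day.get(cat, 0) for day in history]
        mu, sigma = mean(past), pstdev(past)
        # if a category never varied historically, any deviation scores 1
        score += abs(value - mu) / sigma if sigma > 0 else (1.0 if value != mu else 0.0)
    return score

# hypothetical per-day activity counts for one subject
history = [{"logins": 5, "files": 20},
           {"logins": 6, "files": 22},
           {"logins": 4, "files": 18}]
normal = anomaly_score(history, {"logins": 5, "files": 21})
spike = anomaly_score(history, {"logins": 30, "files": 500})
print(normal < spike)
```

A sudden bulk download scores far higher than a typical day, which is what lets the ranking step surface it to an analyst.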
Parallel Index and Query for Large Scale Data Analysis
Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie
2011-07-18
Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize the underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to the processing of a massive 50TB dataset generated by a large-scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
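FastBit's core idea is the bitmap index: one bitset per distinct value, so a multi-attribute query reduces to bitwise AND/OR over precomputed bitmaps. A minimal uncompressed sketch (real FastBit adds WAH compression and binning for continuous variables; the toy particle table here is illustrative):

```python
class BitmapIndex:
    """Equality-encoded bitmap index: value -> Python int used as a bitset."""

    def __init__(self, values):
        self.n = len(values)
        self.bitmaps = {}
        for row, v in enumerate(values):
            self.bitmaps[v] = self.bitmaps.get(v, 0) | (1 << row)

    def eq(self, v):
        """Bitset of rows where the column equals v."""
        return self.bitmaps.get(v, 0)

def rows(bitset, n):
    """Expand a bitset back into a list of row ids."""
    return [i for i in range(n) if bitset >> i & 1]

# toy particle table: each column indexed independently,
# a conjunctive query is answered by a single bitwise AND
species = BitmapIndex(["e", "p", "e", "e", "p", "e"])
energy = BitmapIndex(["hi", "lo", "hi", "lo", "hi", "hi"])
hits = rows(species.eq("e") & energy.eq("hi"), species.n)
print(hits)  # → [0, 2, 5]
```

Because the AND touches only the bitmaps for the queried values, query cost scales with the number of matching bitmaps rather than with the number of columns, which is what makes the approach attractive for wide scientific tables.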
Analysis of thorium-uranium fuel using MOCUP and SCALE
Weaver, K.D.; Herring, J.S.; Kullberg, C.; MacDonald, P.E.
1999-07-01
A renewed interest in mixed thorium-uranium fuels has arisen lately based on the need for proliferation resistance, longer fuel cycles, higher burnup, and improved wasteform characteristics. Thorium fuel cycles have been studied in the past, most notably in the development of the light water breeder reactor at Shippingport, Pennsylvania, but these cycles were directed toward the production, reprocessing, and reuse of {sup 233}U through reactors having a seed and blanket configuration. The fuel used in this analysis is a mixed thoria-urania, in which the {sup 233}U is used in situ, replacing the {sup 235}U that is burned. Current UO{sub 2} fuel is burned to ~45 MWd/kg, while recent calculations with SCALE 4.3 suggest that ThO{sub 2}-UO{sub 2} fuel could be burned to >70 MWd/kg. A ThO{sub 2}-UO{sub 2} fuel would decrease the total amount of plutonium produced by a factor of 5 and the amount of {sup 239}Pu by a factor of 6.5. Furthermore, the plutonium that is produced is high in {sup 238}Pu, a strong source of spontaneous neutrons and decay heat, adding to proliferation resistance. This paper focuses on the reactivity trends and minor actinide production in mixed ThO{sub 2}-UO{sub 2} fuel by using MOCUP and comparing the results with SCALE 4.3.
A scaling analysis of thermoacoustic convection in a zero-gravity environment
Krane, R.J.; Parang, M.
1982-01-01
This paper presents a scaling analysis of a one-dimensional thermoacoustic convection heat transfer process in a zero-gravity environment. The relative importance of the terms in the governing equations is discussed for different time scales without attempting to solve the equations. The scaling analysis suggests certain generalizations that can be made in this class of heat transfer problems.
Generalized singular value decomposition for comparative analysis of genome-scale expression
Utah, University of
Generalized singular value decomposition for comparative analysis of genome-scale expression data. We describe a comparative mathematical framework for two genome-scale expression data sets. Comparative analysis of these data among two or more model organisms promises to enhance
Large-scale quantitative analysis of painting arts.
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-01-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances. PMID:25501877
Global Mapping Analysis: Stochastic Gradient Algorithm in Multidimensional Scaling
NASA Astrophysics Data System (ADS)
Matsuda, Yoshitatsu; Yamaguchi, Kazunori
In order to implement multidimensional scaling (MDS) efficiently, we propose a new method named "global mapping analysis" (GMA), which applies stochastic approximation to the minimization of MDS criteria. GMA can solve MDS more efficiently in both the linear case (classical MDS) and the non-linear case (e.g., ALSCAL), provided the MDS criteria are polynomial. GMA separates the polynomial criteria into local factors and global ones. Because the global factors need to be calculated only once in each iteration, GMA is of linear order in the number of objects. Numerical experiments on artificial data verify the efficiency of GMA. It is also shown that GMA can uncover various interesting structures in massive document collections.
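The stochastic-approximation idea can be illustrated on the simplest polynomial MDS criterion, raw stress, minimized by gradient steps on randomly sampled pairs of objects (a generic SGD sketch, not the GMA algorithm itself, which additionally caches the global factors to achieve linear cost per iteration):

```python
import math
import random

def stress(X, D):
    """Raw MDS stress: sum over pairs of squared distance errors."""
    n = len(D)
    return sum((math.dist(X[i], X[j]) - D[i][j]) ** 2
               for i in range(n) for j in range(i + 1, n))

def sgd_mds(D, dim=2, steps=20000, lr=0.05, seed=0):
    """Stochastic gradient descent on raw stress: each step samples one pair
    and pulls the two points toward their target dissimilarity D[i][j]."""
    rng = random.Random(seed)
    n = len(D)
    X = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n)]
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        diff = [a - b for a, b in zip(X[i], X[j])]
        d = math.sqrt(sum(c * c for c in diff)) or 1e-9
        g = lr * (d - D[i][j]) / d  # gradient factor for this pair
        for k in range(dim):
            X[i][k] -= g * diff[k]
            X[j][k] += g * diff[k]
    return X

# corners of a unit square: these dissimilarities are exactly embeddable in 2-D
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
D = [[math.dist(p, q) for q in pts] for p in pts]
# a few random restarts guard against poor local minima
X = min((sgd_mds(D, seed=s) for s in range(5)), key=lambda X: stress(X, D))
print(round(stress(X, D), 4))
```

For an exactly embeddable configuration the stress converges toward zero; the recovered layout is the original square up to rotation, reflection, and translation, which is all MDS can determine.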
Confirmatory Factor Analysis of the Minnesota Nicotine Withdrawal Scale
Toll, Benjamin A.; O’Malley, Stephanie S.; McKee, Sherry A.; Salovey, Peter; Krishnan-Sarin, Suchitra
2008-01-01
The authors examined the factor structure of the Minnesota Nicotine Withdrawal Scale (MNWS) using confirmatory factor analysis in clinical research samples of smokers trying to quit (n = 723). Three confirmatory factor analytic models, based on previous research, were tested with each of the 3 study samples at multiple points in time. A unidimensional model including all 8 MNWS items was found to be the best explanation of the data. This model produced fair to good internal consistency estimates. Additionally, these data revealed that craving should be included in the total score of the MNWS. Factor scores derived from this single-factor, 8-item model showed that increases in withdrawal were associated with poor smoking outcome for 2 of the clinical studies. Confirmatory factor analyses of change scores showed that the MNWS symptoms cohere as a syndrome over time. Future investigators should report a total score using all of the items from the MNWS. PMID:17563141
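Internal-consistency estimates of the kind reported above are commonly computed as Cronbach's alpha; a minimal sketch on toy data (not the study's MNWS responses):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
    items: one list of respondent scores per questionnaire item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(it) for it in items) / pvariance(totals))

# toy data: 4 respondents x 3 items that covary strongly -> alpha near 1
items = [[1, 2, 3, 4],
         [1, 2, 3, 4],
         [2, 2, 4, 4]]
alpha = cronbach_alpha(items)
print(alpha)
```

High alpha is consistent with a unidimensional total score of the kind the authors recommend reporting, though alpha alone does not establish unidimensionality; that is what the confirmatory factor models test.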
Large-Scale Quantitative Analysis of Painting Arts
NASA Astrophysics Data System (ADS)
Kim, Daniel; Son, Seung-Woo; Jeong, Hawoong
2014-12-01
Scientists have made efforts to understand the beauty of painting art in their own languages. As digital image acquisition of painting arts has made rapid progress, researchers have come to a point where it is possible to perform statistical analysis of a large-scale database of artistic paints to make a bridge between art and science. Using digital image processing techniques, we investigate three quantitative measures of images - the usage of individual colors, the variety of colors, and the roughness of the brightness. We found a difference in color usage between classical paintings and photographs, and a significantly low color variety of the medieval period. Interestingly, moreover, the increment of roughness exponent as painting techniques such as chiaroscuro and sfumato have advanced is consistent with historical circumstances.
Multidimensional Scaling Analysis of the Dynamics of a Country Economy
Mata, Maria Eugénia
2013-01-01
This paper analyzes the Portuguese short-run business cycles over the last 150 years and presents the multidimensional scaling (MDS) for visualizing the results. The analytical and numerical assessment of this long-run perspective reveals periods with close connections between the macroeconomic variables related to government accounts equilibrium, balance of payments equilibrium, and economic growth. The MDS method is adopted for a quantitative statistical analysis. In this way, similarity clusters of several historical periods emerge in the MDS maps, identifying similarities and dissimilarities that distinguish periods of prosperity and crisis, growth and stagnation. Such features are major aspects of collective national achievement, with which can be associated the impact of international events such as the World Wars, the Great Depression, and the current global financial crisis, as well as national events in the context of broad political blueprints for Portuguese society in the rising globalization process. PMID:24294132
Metal Analysis of Scales Taken from Arctic Grayling
A. P. Farrell; A. H. Hodaly; S. Wang
This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca,
The Use of Factor Analysis to Salvage Poor Guttman Scales: Can It Really Work?
ERIC Educational Resources Information Center
Kayser, Brian D.
The Guttman model of scale analysis has found continued use in sociological analysis despite criticisms leveled against it. An empirical example is provided of the use of factor analysis with Guttman scaling, taking into account criticisms of both the very restricted number of items and the dichotomous responses. Data came from a questionnaire using…
Analysis of Brain Activation Patterns Using a 3-D Scale-Space Primal Sketch
Lindeberg, Tony
Dept. of Numerical Analysis and Computing Science, KTH, S-100 44 Stockholm, Sweden (http://www.nada.kth.se/tony). The method overcomes the limitations of performing the analysis at a single scale or assuming specific models
Sensitivity analysis and scale issues in landslide susceptibility mapping
NASA Astrophysics Data System (ADS)
Catani, Filippo; Lagomarsino, Daniela; Segoni, Samuele; Tofani, Veronica
2013-04-01
Despite the large number of recent advances and developments in landslide susceptibility mapping (LSM), there is still a lack of studies focusing on specific aspects of LSM model sensitivity. For example, the influence of factors of paramount importance such as the survey scale of the landslide conditioning variables (LCVs), the resolution of the mapping unit (MUR), and the optimal number and ranking of LCVs has never been investigated analytically, especially on large datasets. In this paper we attempt this experimentation, concentrating on the impact of model tuning choices on the final result rather than on the comparison of methodologies. To this end, we adopt a simple implementation of the random forest (RF) classification family to produce an ensemble of landslide susceptibility maps for a set of different model settings, input data types, and scales. RF classification and regression methods offer a very flexible environment for testing model parameters and mapping hypotheses, allowing for a direct quantification of variable importance. The model choice is, in itself, quite innovative since it is the first time that such a technique, widely used in remote sensing for image classification, is used in this form for the production of an LSM. Random forest is a combination of (usually binary) tree predictors that relates a set of contributing factors to actual landslide occurrence. Being a nonparametric model, it can incorporate a range of numeric or categorical data layers, and there is no need to select unimodal training data. Many classical and widely acknowledged landslide predisposing factors have been taken into account, mainly related to: the lithology, the land use, the land surface geometry (derived from the DTM), and the structural and anthropogenic constraints. In addition, for each factor we also included in the parameter set the standard deviation (for numerical variables) or the variety (for categorical ones). 
The use of random forest enables estimation of the relative importance of the single input parameters and selection of the optimal configuration of the regression model. The model was initially applied using the complete set of input parameters, then with progressively smaller subsamples of the parameter space. Considering the best set of parameters, we also studied the impact of the scale and accuracy of input variables and the influence of the RF model's random component on the susceptibility results. We apply the model to a test area in central Italy, the hydrographic basin of the Arno River (ca. 9000 km2), and present and discuss the results. We also use the outcomes of the parameter sensitivity analysis to investigate the different roles of environmental factors in the test area.
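The variable-importance idea can be illustrated with permutation importance: shuffle one predictor column and measure the resulting drop in accuracy. The sketch below uses a 1-nearest-neighbour classifier as a lightweight stand-in for a random forest, on made-up data where only the first "conditioning variable" is informative:

```python
import random

def nn_predict(train, labels, x):
    """1-nearest-neighbour prediction by squared Euclidean distance."""
    i = min(range(len(train)),
            key=lambda j: sum((a - b) ** 2 for a, b in zip(train[j], x)))
    return labels[i]

def accuracy(train, labels, test, test_labels):
    return sum(nn_predict(train, labels, x) == y
               for x, y in zip(test, test_labels)) / len(test)

def permutation_importance(train, labels, test, test_labels, rng):
    """Importance of each feature = drop in accuracy when that test column is shuffled."""
    base = accuracy(train, labels, test, test_labels)
    imps = []
    for f in range(len(test[0])):
        shuffled = [row[:] for row in test]
        col = [row[f] for row in shuffled]
        rng.shuffle(col)
        for row, v in zip(shuffled, col):
            row[f] = v
        imps.append(base - accuracy(train, labels, shuffled, test_labels))
    return imps

rng = random.Random(42)
# synthetic data: the class depends only on feature 0 (think "slope");
# feature 1 is pure noise
X = [[rng.random(), rng.random()] for _ in range(300)]
y = [int(x[0] > 0.5) for x in X]
imp = permutation_importance(X[:200], y[:200], X[200:], y[200:], rng)
print(imp)
```

Shuffling the informative feature destroys most of the accuracy while shuffling the noise feature barely changes it, so the importance ranking recovers which variable drives the prediction, which is the same diagnostic the RF variable-importance scores provide at basin scale.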
Age Differences on Alcoholic MMPI Scales: A Discriminant Analysis Approach.
ERIC Educational Resources Information Center
Faulstich, Michael E.; And Others
1985-01-01
Administered the Minnesota Multiphasic Personality Inventory to 91 male alcoholics after detoxification. Results indicated that the Psychopathic Deviant and Paranoia scales declined with age, while the Responsibility scale increased with age. (JAC)
Metal Analysis of Scales Taken from Arctic Grayling
A. P. Farrell; A. H. Hodaly; S. Wang
2000-01-01
This study examined concentrations of metals in fish scales taken from Arctic grayling using laser ablation-inductively coupled plasma mass spectrometry (LA-ICPMS). The purpose was to assess whether scale metal concentrations reflected whole muscle metal concentrations and whether the spatial distribution of metals within an individual scale varied among the growth annuli of the scales. Ten elements (Mg, Ca, Ni, Zn,
Brabets, Timothy P.; Conaway, Jeffrey S.
2009-01-01
The Copper River Basin, the sixth largest watershed in Alaska, drains an area of 24,200 square miles. This large, glacier-fed river flows across a wide alluvial fan before it enters the Gulf of Alaska. Bridges along the Copper River Highway, which traverses the alluvial fan, have been impacted by channel migration. Due to a major channel change in 2001, Bridge 339 at Mile 36 of the highway has undergone excessive scour, resulting in damage to its abutments and approaches. During the snow- and ice-melt runoff season, which typically extends from mid-May to September, the design discharge for the bridge often is exceeded. The approach channel shifts continuously, and during our study it has shifted back and forth from the left bank to a course along the right bank nearly parallel to the road. Maintenance at Bridge 339 has been costly and will continue to be so if no action is taken. Possible solutions to the scour and erosion problem include (1) constructing a guide bank to redirect flow, (2) dredging approximately 1,000 feet of channel above the bridge to align flow perpendicular to the bridge, and (3) extending the bridge. The USGS Multi-Dimensional Surface Water Modeling System (MD_SWMS) was used to assess these possible solutions. The major limitation of modeling these scenarios was the inability to predict ongoing channel migration. We used a hybrid dataset of surveyed and synthetic bathymetry in the approach channel, which provided the best approximation of this dynamic system. Under existing conditions and at the highest measured discharge and stage of 32,500 ft3/s and 51.08 ft, respectively, the velocities and shear stresses simulated by MD_SWMS indicate scour and erosion will continue. Construction of a 250-foot-long guide bank would not improve conditions because it is not long enough. 
Dredging a channel upstream of Bridge 339 would help align the flow perpendicular to Bridge 339, but because of the mobility of the channel bed, the dredged channel would likely fill in during high flows. Extending Bridge 339 would accommodate higher discharges and re-align flow to the bridge.
Dream Intensity Scale: Factors in the Phenomenological Analysis of Dreams
Calvin Kai-Ching Yu
2010-01-01
The present study aimed to develop a comprehensive assessment tool for measuring subjective dream intensity by revising the original probes and response scales of the Dream Intensity Inventory and incorporating new variables. The factor analyses suggested that 18 items of the new instrument, Dream Intensity Scale, could form four scales and six subscales. The revision of the probes and response
NASA Technical Reports Server (NTRS)
Cao, Yiding; Faghri, Amir; Chang, Won Soon
1989-01-01
An enthalpy transforming scheme is proposed to convert the energy equation into a nonlinear equation with the enthalpy, E, being the single dependent variable. The existing control-volume finite-difference approach is modified so it can be applied to the numerical performance of Stefan problems. The model is tested by applying it to a three-dimensional freezing problem. The numerical results are in agreement with those existing in the literature. The model and its algorithm are further applied to a three-dimensional moving heat source problem showing that the methodology is capable of handling complicated phase-change problems with fixed grids.
S. C. Irvine; D. M. Paganin; S. Dubsky; R. A. Lewis; A. Fouras
Three-dimensional flow measurements, obtained from high-resolution synchrotron-based X-ray phase contrast images of blood in vitro, are presented. Using data collected on beamline BL20XU at the SPring-8 synchrotron in Hyogo, Japan, we demonstrate the benefits to be gained by preprocessing of speckled X-ray phase contrast images prior to PIV analysis. Such preprocessing techniques include use of a
Multi-Dimensional K-Factor Analysis for V2V Radio Channels in Open Sub-urban Street Crossings
Zemen, Thomas
in a typical open suburban street crossing. The channel conditions vary from non-line-of-sight (NLOS) to line-of-sight (LOS): strong fading (Rayleigh distributed) corresponds to a small K-factor, and a large K-factor value is related to less variation (Ricean
Jiang, Boyang
2012-02-14
environmental conditions at a given area is of great importance for naval exercises and operations. This capability has progressed beyond simple reduced-dimension models (e.g., Navy Standard Surf Model; Earle 1989) to more sophisticated comprehensive three... where the use of a one-dimensional nearshore model (e.g. the Navy Standard Surf Model) would be inappropriate (Morris 2001). In this study we will use data from the Duck94 experiment to establish the accuracy of the basic model. 2.1 Model Parameters...
Gupta, S.K.; Kincaid, C.T.; Meyer, P.R.; Newbill, C.A.; Cole, C.R.
1982-08-01
The Seasonal Thermal Energy Storage Program is being conducted for the Department of Energy by Pacific Northwest Laboratory. A major thrust of this program has been the study of natural aquifers as hosts for thermal energy storage and retrieval. Numerical simulation of the nonisothermal response of the host media is fundamental to the evaluation of proposed experimental designs and field test results. This report represents the primary documentation for the coupled fluid, energy and solute transport (CFEST) code. Sections of this document are devoted to the conservation equations and their numerical analogues, the input data requirements, and the verification studies completed to date.
Merritt, Cullen
2014-05-31
abuse treatment facilities collected from the 2011 National Survey of Substance Abuse Treatment Services (N-SSATS) provides the basis for conducting a series of confirmatory factor analyses (CFA). In addition, interviews with 21 senior managers of mental...
Technology Transfer Automated Retrieval System (TEKTRAN)
Recent advances in technology have led to the collection of high-dimensional data not previously encountered in many scientific environments. As a result, scientists are often faced with the challenging task of including these high-dimensional data into statistical models. For example, data from sen...
Santor, D A; Zuroff, D C; Fielding, A
1997-08-01
Revisions of the Depressive Experiences Questionnaire (DEQ; Blatt, D'Afflitti, & Quinlan, 1976) have failed to replicate the degree of orthogonality routinely observed with the original Dependency and Self-Criticism scales. Item performance on the DEQ was examined by computing correlation coefficients between factor-derived scores and unit-weighted composite scores for Dependency and Self-Criticism as a function of (a) the importance of individual items in predicting factor-derived Dependency and Self-Criticism scores and (b) scale length. Analyses identified sets of unit-weighted items that optimally preserve the psychometric properties of the original DEQ scales, including between-scale orthogonality, while reducing the number of items used to measure Dependency and Self-Criticism. Findings were replicated in college (N = 172) and clinical (N = 83) samples. Limitations of exploratory principal components and confirmatory factor analysis as tools for revising scales are discussed. PMID:9306686
MicroScale Thermophoresis: Interaction analysis and beyond
NASA Astrophysics Data System (ADS)
Jerabek-Willemsen, Moran; André, Timon; Wanner, Randy; Roth, Heide Marie; Duhr, Stefan; Baaske, Philipp; Breitsprecher, Dennis
2014-12-01
MicroScale Thermophoresis (MST) is a powerful technique to quantify biomolecular interactions. It is based on thermophoresis, the directed movement of molecules in a temperature gradient, which strongly depends on a variety of molecular properties such as size, charge, hydration shell or conformation. Thus, this technique is highly sensitive to virtually any change in molecular properties, allowing for a precise quantification of molecular events independent of the size or nature of the investigated specimen. During a MST experiment, a temperature gradient is induced by an infrared laser. The directed movement of molecules through the temperature gradient is detected and quantified using either covalently attached or intrinsic fluorophores. By combining the precision of fluorescence detection with the variability and sensitivity of thermophoresis, MST provides a flexible, robust and fast way to dissect molecular interactions. In this review, we present recent progress and developments in MST technology and focus on MST applications beyond standard biomolecular interaction studies. By using different model systems, we introduce alternative MST applications - such as determination of binding stoichiometries and binding modes, analysis of protein unfolding, thermodynamics and enzyme kinetics. In addition, we demonstrate the capability of MST to quantify high-affinity interactions with dissociation constants (Kds) in the low picomolar (pM) range as well as protein-protein interactions in pure mammalian cell lysates.
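A dissociation constant of the kind quantified by MST is extracted by fitting a binding isotherm to the titration signal. A minimal sketch with a 1:1 binding model and a grid-search fit on synthetic, noise-free data (illustrative only; real MST analysis fits normalized fluorescence traces and must handle measurement noise and ligand depletion):

```python
def fraction_bound(conc, kd):
    """Simple 1:1 binding isotherm (ligand in large excess): f = [L] / (Kd + [L])."""
    return conc / (kd + conc)

def fit_kd(concs, signal, grid):
    """Pick the Kd candidate minimizing the sum of squared residuals."""
    return min(grid, key=lambda kd: sum((fraction_bound(c, kd) - s) ** 2
                                        for c, s in zip(concs, signal)))

true_kd = 2e-9                                  # 2 nM: a high-affinity interaction
concs = [1e-10 * 2 ** i for i in range(14)]     # titration from 0.1 nM to ~0.8 uM
signal = [fraction_bound(c, true_kd) for c in concs]
grid = [1e-10 * 1.05 ** i for i in range(200)]  # log-spaced Kd candidates
kd = fit_kd(concs, signal, grid)
print(f"{kd:.2e}")
```

The log-spaced titration and candidate grid mirror standard practice: binding curves are sigmoidal on a log-concentration axis, so sampling densely around the expected Kd is what gives the fit its precision.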
Full-scale testing and analysis of fuselage structure
NASA Astrophysics Data System (ADS)
Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.
1994-09-01
This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon where the crack trajectory is curved creating a 'flap' and resulting in a controlled decompression is discussed.
Full-scale testing and analysis of fuselage structure
NASA Technical Reports Server (NTRS)
Miller, M.; Gruber, M. L.; Wilkins, K. E.; Worden, R. E.
1994-01-01
This paper presents recent results from a program in the Boeing Commercial Airplane Group to study the behavior of cracks in fuselage structures. The goal of this program is to improve methods for analyzing crack growth and residual strength in pressurized fuselages, thus improving new airplane designs and optimizing the required structural inspections for current models. The program consists of full-scale experimental testing of pressurized fuselage panels in both wide-body and narrow-body fixtures and finite element analyses to predict the results. The finite element analyses are geometrically nonlinear with material and fastener nonlinearity included on a case-by-case basis. The analysis results are compared with the strain gage, crack growth, and residual strength data from the experimental program. Most of the studies reported in this paper concern the behavior of single or multiple cracks in the lap joints of narrow-body airplanes (such as 727 and 737 commercial jets). The phenomenon where the crack trajectory is curved creating a 'flap' and resulting in a controlled decompression is discussed.
Large-scale fault kinematic analysis in Noctis Labyrinthus (Mars)
NASA Astrophysics Data System (ADS)
Bistacchi, Nicola; Massironi, Matteo; Baggio, Paolo
2004-01-01
Noctis Labyrinthus (Mars) is characterized by many tectonic features, which represent brittle deformation of the crust. This tectonic setting was analysed by remote sensing of the Viking Mars Digital Image Model (MDIM) mosaic and Mars Orbiter Camera (MOC) global mosaic, in order to identify deformational events. The main features are normal faults producing horst-graben structures, strike-slip faults, and related en-echelon and pull-apart basins. Using the criterion of cross-cutting relationships and analysis of secondary structures, to infer sense of movement of faults, two deformational phases were identified in the Noctis Labyrinthus area. The first, D1, located mainly in the northern part, is characterized by transtensional faults (Noachian). The second, D2, recorded in the southern part of the Noctis Labyrinthus by an orthorhombic extensional fault pattern along NNE and WNW trends, is related to the Valles Marineris formation (Late Noachian-Early Hesperian). A third tectonic event, D3, represented by the partly known dextral NW strike-slip faults cross-cutting the Valles Marineris Canyon System (Late Hesperian?-Amazonian?), was not found in Noctis Labyrinthus at the scale and resolution considered.
ERIC Educational Resources Information Center
Redfield, Joel
1978-01-01
TMFA, a FORTRAN program for three-mode factor analysis and individual-differences multidimensional scaling, is described. Program features include a variety of input options, extensive preprocessing of input data, and several alternative methods of analysis. (Author)
Large-Scale Variability-Aware Type Checking and Dataflow Analysis
Apel, Sven
Jörg Liebig, Alexander von Rhein … and Mathematics, University of Passau, Germany. November 2012. … real-world, large-scale systems: the Busybox tool suite and the Linux kernel. We report on our experience
Nanometer to Centimeter Scale Analysis and Modeling of Pore Structures
NASA Astrophysics Data System (ADS)
Wesolowski, D. J.; Anovitz, L.; Vlcek, L.; Rother, G.; Cole, D. R.
2011-12-01
The microstructure and evolution of pore space in rocks is a critically important factor controlling fluid flow. The size, distribution and connectivity of these confined geometries dictate how fluids, including H2O and CO2, migrate into and through these micro- and nano-environments, and wet and react with the solid. (Ultra)small-angle neutron scattering and autocorrelations derived from BSE imaging provide a method of quantifying pore structures in a statistically significant manner from the nanometer to the centimeter scale. Multifractal analysis provides additional constraints. These methods were used to characterize the pore features of a variety of potential CO2 geological storage formations and geothermal systems such as the shallow buried quartz arenites from the St. Peter Sandstone and the deeper Mt. Simon quartz arenite in Ohio as well as the Eau Claire shale and mudrocks from the Cranfield MS CO2 injection test and the normal temperature and high-temperature vapor-dominated parts of the Geysers geothermal system in California. For example, analyses of samples of St. Peter sandstone show total porosity correlates with changes in pore structure, including pore size ratios, surface fractal dimensions, and lacunarity. These samples contain significant large-scale porosity, modified by quartz overgrowths, and neutron scattering results show significant sub-micron porosity, which may make up fifty percent or more of the total pore volume. While previous scattering data from sandstones suggest scattering is dominated by surface fractal behavior, our data are both fractal and pseudo-fractal. The scattering curves are composed of steps, modeled as polydispersed assemblages of pores with log-normal distributions. In some samples a surface-fractal overprint is present. There are also significant changes in the mono- and multifractal dimensions of the pore structure as the pore fraction decreases.
There are strong positive correlations between D(0) and image and total scattering porosities, and strong negative correlations between these and multifractality, which increases as pore fraction decreases and the percent of (U)SANS porosity increases. Individual fractal dimensions at all q values from the BSE images decrease during silcrete formation. These data suggest that microporosity is more prevalent and may play a much more important role than previously thought in fluid/rock interactions in coarse-grained sandstone. Preliminary results from shale and mudrocks indicate there are dramatic differences not only in terms of total micro- to nano-porosity, but also in terms of pore surface fractal (roughness) and mass fractal (pore distributions) dimensions as well as size distributions. Information from imaging and scattering data can also be used to constrain computer-generated, random, three-dimensional porous structures. The results integrate various sources of experimental information and are statistically compatible with the real rock. This allows a more detailed multiscale analysis of structural correlations in the material. Acknowledgements. Research sponsored by the Division of Chemical Sciences, Geosciences and Biosciences, Office of Basic Energy Sciences, U.S. Department of Energy.
Estimating Cognitive Profiles Using Profile Analysis via Multidimensional Scaling (PAMS)
ERIC Educational Resources Information Center
Kim, Se-Kang; Frisby, Craig L.; Davison, Mark L.
2004-01-01
Two of the most popular methods of profile analysis, cluster analysis and modal profile analysis, have limitations. First, neither technique is adequate when the sample size is large. Second, neither method will necessarily provide profile information in terms of both level and pattern. A new method of profile analysis, called Profile Analysis via…
GAS MIXING ANALYSIS IN A LARGE-SCALED SALTSTONE FACILITY
Lee, S
2008-05-28
Computational fluid dynamics (CFD) methods have been used to estimate the flow patterns mainly driven by temperature gradients inside the vapor space in a large-scale Saltstone vault facility at the Savannah River Site (SRS). The purpose of this work is to examine the gas motions inside the vapor space under the current vault configurations by taking a three-dimensional transient momentum-energy coupled approach for the vapor space domain of the vault. The modeling calculations were based on prototypic vault geometry and expected normal operating conditions as defined by Waste Solidification Engineering. The modeling analysis was focused on the air flow patterns near the ventilated corner zones of the vapor space inside the Saltstone vault. The turbulence behavior and natural convection mechanism used in the present model were benchmarked against the literature information and theoretical results. The verified model was applied to the Saltstone vault geometry for the transient assessment of the air flow patterns inside the vapor space of the vault region using the potential operating conditions. The baseline model considered two cases for the estimations of the flow patterns within the vapor space. One is the reference nominal case. The other is for the negative temperature gradient between the roof inner and top grout surface temperatures intended for the potential bounding condition. The flow patterns of the vapor space calculated by the CFD model demonstrate that the ambient air comes into the vapor space of the vault through the lower-end ventilation hole, and it gets heated up by the Bénard-cell type circulation before leaving the vault via the higher-end ventilation hole. The calculated results are consistent with the literature information. Detailed results and the cases considered in the calculations are discussed here.
QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web
Caverlee, James
QA-Pagelet: Data Preparation Techniques for Large-Scale Data Analysis of the Deep Web. James Caverlee … the QA-Pagelet as a fundamental data preparation technique for large-scale data analysis of the Deep Web. … QA-Pagelets from the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering
QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web
Liu, Ling
QA-Pagelet: Data Preparation Techniques for Large Scale Data Analysis of the Deep Web. James Caverlee … data preparation technique for large scale data analysis of the Deep Web. To support QA … the Deep Web. Two unique features of the Thor framework are (1) the novel page clustering for grouping
Menut, Laurent
Bayesian Monte Carlo analysis applied to regional-scale inverse emission modeling for reactive … The inversion method is based on Bayesian Monte Carlo analysis applied to a regional-scale chemistry transport model. … are attributed to individual Monte Carlo simulations by comparing them with observations from the AIRPARIF
Scaling analysis applied to the NORVEX code development and thermal energy flight experiment
J. R. L. Skarda; David Namkoong; Douglas Darling
1991-01-01
A scaling analysis is used to study the dominant flow processes that occur in molten phase change material (PCM) under 1 g and microgravity conditions. Results of the scaling analysis are applied to the development of the NORVEX (NASA Oak Ridge Void Experiment) computer program and the preparation of the Thermal Energy Storage (TES) flight experiment. The NORVEX computer program
Lévy scaling: The diffusion entropy analysis applied to DNA sequences
Scafetta, Nicola
We compare the diffusion entropy analysis (DEA) of time series generated by complex dynamics with the traditional methods of scaling detection, and prove that the DEA detects the real scaling of a time series without requiring any form of detrending. We show
HIGH PERFORMANCE DIRECT-ITERATIVE HYBRID LINEAR SOLUTION METHOD FOR LARGE SCALE STRUCTURAL ANALYSIS
Seung Jo Kim; Min Ki Kim
In this paper, a hybrid direct-iterative linear solution method is proposed to solve large-scale structural analysis problems. The direct solution method is common in finite element structural analysis, but it has some disadvantages compared with iterative methods. Iterative methods perform well on large-scale problems, so the two methods can be combined to obtain good performance to solve
Ernest O. Watkins; Michael J. Wiebe
1980-01-01
A sample of 240 preschool children was used to assess the construct validity of each of the McCarthy Scales of Children's Abilities (MSCA). Individual subtests were used in a stepwise regression analysis of MSCA scales. This analysis supported McCarthy's construct of the General Cognitive Index (GCI) as well as the Verbal and Perceptual-Performance Scales. Construct of the Quantitative
NASA Technical Reports Server (NTRS)
Wood, William A., III
2002-01-01
A multi-dimensional upwind fluctuation splitting scheme is developed and implemented for two-dimensional and axisymmetric formulations of the Navier-Stokes equations on unstructured meshes. Key features of the scheme are the compact stencil, full upwinding, and non-linear discretization which allow for second-order accuracy with enforced positivity. Throughout, the fluctuation splitting scheme is compared to a current state-of-the-art finite volume approach, a second-order, dual mesh upwind flux difference splitting scheme (DMFDSFV), and is shown to produce more accurate results using fewer computer resources for a wide range of test cases. A Blasius flat plate viscous validation case reveals a more accurate upsilon-velocity profile for fluctuation splitting, and the reduced artificial dissipation production is shown relative to DMFDSFV. Remarkably, the fluctuation splitting scheme shows grid converged skin friction coefficients with only five points in the boundary layer for this case. The second half of the report develops a local, compact, anisotropic unstructured mesh adaptation scheme in conjunction with the multi-dimensional upwind solver, exhibiting a characteristic alignment behavior for scalar problems. The adaptation strategy is extended to the two-dimensional and axisymmetric Navier-Stokes equations of motion through the concept of fluctuation minimization.
A Critical Analysis of the Concept of Scale Dependent Macrodispersivity
NASA Astrophysics Data System (ADS)
Zech, Alraune; Attinger, Sabine; Cvetkovic, Vladimir; Dagan, Gedeon; Dietrich, Peter; Fiori, Aldo; Rubin, Yoram; Teutsch, Georg
2015-04-01
Transport by groundwater occurs over the different scales encountered by moving solute plumes. Spreading of plumes is often quantified by the longitudinal macrodispersivity αL (half the rate of change of the second spatial moment divided by the mean velocity). It was found that generally αL is scale dependent, increasing with the travel distance L of the plume centroid, stabilizing eventually at a constant value (Fickian regime). It was surmised in the literature that αL scales up with travel distance L following a universal scaling law. Attempts to define the scaling law were pursued by several authors (Arya et al., 1988; Neuman, 1990; Xu and Eckstein, 1995; Schulze-Makuch, 2005), by fitting a regression line in the log-log representation of results from an ensemble of field experiments, primarily those included in the compendium of experiments summarized by Gelhar et al., 1992. Despite concerns raised about the universality of scaling laws (e.g., Gelhar, 1992; Anderson, 1991), such relationships are being employed by practitioners for modeling multiscale transport (e.g., Fetter, 1999), because they, presumably, offer a convenient prediction tool, with no need for detailed site characterization. Several attempts were made to provide theoretical justifications for the existence of a universal scaling law (e.g., Neuman, 1990 and 2010; Hunt et al., 2011). Our study revisited the concept of universal scaling through detailed analyses of field data (including the most recent tracer tests reported in the literature), coupled with a thorough re-evaluation of the reliability of the reported αL values. Our investigation concludes that transport, and particularly αL, is formation-specific, and that modeling of transport cannot be relegated to a universal scaling law. Instead, transport requires characterization of aquifer properties, e.g. the spatial distribution of hydraulic conductivity, and the use of adequate models.
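The moment-based definition of αL quoted above (half the rate of change of the second spatial moment divided by the mean velocity) can be sketched numerically. The function and the synthetic Fickian plume below are our own illustration, not from the study:

```python
# Illustrative sketch: longitudinal macrodispersivity from plume moments,
#   alpha_L = (1 / (2 v)) * d(sigma^2)/dt,
# with v the mean plume velocity and sigma^2 the second spatial moment.
# For a synthetic Fickian plume (x = v t, sigma^2 = 2 D t) this recovers D/v.

def macrodispersivity(times, centroids, variances):
    """Estimate alpha_L from time series of plume centroid and variance."""
    n = len(times)
    # Mean velocity and d(sigma^2)/dt from averaged finite differences.
    vs = [(centroids[i+1] - centroids[i]) / (times[i+1] - times[i])
          for i in range(n - 1)]
    dvar = [(variances[i+1] - variances[i]) / (times[i+1] - times[i])
            for i in range(n - 1)]
    v = sum(vs) / len(vs)
    ds2dt = sum(dvar) / len(dvar)
    return ds2dt / (2.0 * v)

v_true, D_true = 1.5, 0.3
times = [1.0 + 0.5 * i for i in range(20)]
centroids = [v_true * t for t in times]
variances = [2.0 * D_true * t for t in times]
alpha_L = macrodispersivity(times, centroids, variances)
print(round(alpha_L, 3))  # recovers D/v = 0.2 for this Fickian example
```

A scale-dependent αL, as discussed in the abstract, would show up here as ds2dt growing with travel distance instead of staying constant.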
NASA Astrophysics Data System (ADS)
Verma, Sanjeet K.; Oliveira, Elson P.
2013-08-01
In the present work, we applied two sets of new multi-dimensional geochemical diagrams (Verma et al., 2013) obtained from linear discriminant analysis (LDA) of natural logarithm-transformed ratios of major elements and immobile major and trace elements in acid magmas to decipher plate tectonic settings and corresponding probability estimates for Paleoproterozoic rocks from the Amazonian craton, São Francisco craton, São Luís craton, and Borborema province of Brazil. The robustness of LDA minimizes the effects of petrogenetic processes and maximizes the separation among the different tectonic groups. The probability-based boundaries further provide a better objective statistical method in comparison to the commonly used subjective method of determining the boundaries by eye judgment. The use of major element data readjusted to 100% on an anhydrous basis from the SINCLAS computer program also helps to minimize the effects of post-emplacement compositional changes and analytical errors on these tectonic discrimination diagrams. Fifteen case studies of acid suites highlighted the application of these diagrams and probability calculations. The first case study, on the Jamon and Musa granites, Carajás area (Central Amazonian Province, Amazonian craton), shows a collision setting (previously thought anorogenic). A collision setting was clearly inferred for the Bom Jardim granite, Xingú area (Central Amazonian Province, Amazonian craton). The third case study, on the Older São Jorge, Younger São Jorge and Maloquinha granites, Tapajós area (Ventuari-Tapajós Province, Amazonian craton), indicated a within-plate setting (previously transitional between volcanic arc and within-plate). We also recognized a within-plate setting for the next three case studies, on the Aripuanã and Teles Pires granites (SW Amazonian craton) and the Pitinga area granites (Mapuera Suite, NW Amazonian craton), which were all previously suggested to have been emplaced in post-collision to within-plate settings.
The seventh case study, on the Cassiterita-Tabuões, Ritápolis and São Tiago-Rezende Costa granites (south of São Francisco craton, Minas Gerais), showed a collision setting, which agrees reasonably well with the syn-collision tectonic setting indicated in the literature. A within-plate setting is suggested for the Serrinha magmatic suite, Mineiro belt (south of São Francisco craton, Minas Gerais), contrasting markedly with the arc setting suggested in the literature. The ninth case study, on the Rio Itapicuru granites and Rio Capim dacites (north of São Francisco craton, Serrinha block, Bahia), showed a continental arc setting. The tenth case study indicated a within-plate setting for the Rio dos Remédios volcanic rocks (São Francisco craton, Bahia), which is compatible with these rocks being the initial, rift-related igneous activity associated with the Chapada Diamantina cratonic cover. The eleventh, twelfth and thirteenth case studies, on the Bom Jesus-Areal granites, Rio Diamante-Rosilha dacite-rhyolite and Timbozal-Cantão granites (São Luís craton), showed continental arc, within-plate and collision settings, respectively. Finally, the last two case studies, the fourteenth and fifteenth, showed a collision setting for the Caicó Complex and a continental arc setting for Algodões (Borborema province).
Time and scale Hurst exponent analysis for financial markets
NASA Astrophysics Data System (ADS)
Matos, José A. O.; Gama, Sílvio M. A.; Ruskin, Heather J.; Sharkasi, Adel Al; Crane, Martin
2008-06-01
We use a new method of studying the Hurst exponent with time and scale dependency. This new approach allows us to recover the major events affecting worldwide markets (such as the September 11th terrorist attack) and analyze the way those effects propagate through the different scales. The time-scale dependence of the referred measures demonstrates the relevance of entropy measures in distinguishing the several characteristics of market indices: “effects” include early awareness, patterns of evolution as well as comparative behaviour distinctions in emergent/established markets.
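A minimal global rescaled-range (R/S) estimate of the Hurst exponent, the quantity tracked above, can be sketched as follows. This is a plain textbook R/S over non-overlapping windows, not the authors' time- and scale-resolved variant, and all names and the test signal are ours:

```python
# Rescaled-range (R/S) sketch of Hurst-exponent estimation: H is the slope
# of log(R/S) against log(window size n). Illustrative code only.
import math
import random

def rescaled_range(x):
    """R/S statistic of one window: range of the cumulative deviations
    from the mean, rescaled by the window's standard deviation."""
    m = sum(x) / len(x)
    dev, acc = [], 0.0
    for v in x:
        acc += v - m
        dev.append(acc)
    r = max(dev) - min(dev)
    s = math.sqrt(sum((v - m) ** 2 for v in x) / len(x))
    return r / s

def hurst(x, sizes):
    """Hurst exponent from a least-squares fit of log(R/S) vs log(n)."""
    pts = []
    for n in sizes:
        rs = [rescaled_range(x[i*n:(i+1)*n]) for i in range(len(x) // n)]
        pts.append((math.log(n), math.log(sum(rs) / len(rs))))
    lm = sum(p[0] for p in pts) / len(pts)
    fm = sum(p[1] for p in pts) / len(pts)
    return (sum((l - lm) * (f - fm) for l, f in pts)
            / sum((l - lm) ** 2 for l, f in pts))

random.seed(1)
returns = [random.gauss(0, 1) for _ in range(4096)]
H = hurst(returns, [16, 32, 64, 128, 256])
print(round(H, 2))  # uncorrelated returns: H near 0.5 (R/S is biased
                    # slightly upward at small window sizes)
```

Computing H over a sliding time window and a restricted range of scales, as the paper does, would turn this single number into the time- and scale-dependent map the authors analyze.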
Glaspey, Amy M; Macleod, Andrea A N
2010-01-01
The purpose of the current study is to document phonological change from a multidimensional perspective for a 3-year-old boy with phonological disorder by comparing three measures: (1) accuracy of consonant productions, (2) dynamic assessment, and (3) acoustic analysis. The methods included collecting a sample of the targets /s, [symbol: see the text], t[symbol: see the text], and d[symbol: see the text]/ produced in single words and repeated over time. The samples were analysed using phonetic transcription and acoustic measures of duration, spectral mean, and spectral variance. A dynamic assessment was administered that showed change in response to cues and linguistic environments using a 15-point scale. The results from the three measures were compared for gradient change and evidence of covert contrasts. In conclusion, gradient change was illustrated across the three measures and contrasts were evident in the child's phonological system. Acoustic measures were most sensitive, followed by dynamic assessment; however, accuracy scores based on phonetic transcription showed little to no change for some targets. PMID:20345258
Kilometer Scale Roughness Analysis of Lunar Digital Terrain Model
Y. Yokota; J. Haruyama; M. Ohtake; T. Matsunaga; C. Honda; T. Morota; H. Demura; N. Hirata
2007-01-01
We demonstrate the root mean square deviation method as an indicator of topographic roughness on a kilometer scale, using stereo images from an Apollo Mapping Camera in a Digital Terrain Model, and compare three regions in the lunar highlands.
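The root mean square deviation method named above can be sketched for a one-dimensional elevation profile; the function, parameter names, and synthetic profile below are illustrative only:

```python
# RMS deviation roughness sketch: for a profile z(x) sampled at uniform
# spacing dx, the RMS deviation at lag L = k*dx is
#   nu(L) = sqrt( mean( (z[i+k] - z[i])**2 ) ).
import math

def rms_deviation(z, k):
    """RMS height difference at a lag of k samples."""
    diffs = [(z[i + k] - z[i]) ** 2 for i in range(len(z) - k)]
    return math.sqrt(sum(diffs) / len(diffs))

# Sanity check: a plane of constant slope s has nu(L) = s * L exactly.
s, dx = 0.1, 1.0
profile = [s * dx * i for i in range(100)]
for k in (1, 5, 10):
    print(k, round(rms_deviation(profile, k), 6))
```

Evaluating nu over a range of lags, for kilometer-scale windows of a digital terrain model, gives the roughness indicator the abstract describes.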
Data mining techniques for large-scale gene expression analysis
Palmer, Nathan Patrick
2011-01-01
Modern computational biology is awash in large-scale data mining problems. Several high-throughput technologies have been developed that enable us, with relative ease and little expense, to evaluate the coordinated expression ...
Crossover scaling evaluation in mixed correlated signals by means of Detrended Fluctuation Analysis
NASA Astrophysics Data System (ADS)
Martínez-García, C. R.; Reyes-Ramírez, I.; Angulo-Brown, F.; Guzmán-Vargas, L.
2015-01-01
In this work we study the scaling behavior of signals constructed by a class of processes with "intrinsic trends" and correlated noises. We focus our attention on evaluating the appearance of a crossover in the scaling exponents obtained by means of the Detrended Fluctuation Analysis. In particular, we evaluate the conditions on the trend which lead to a crossover where the scaling exponent from small scales (αs) is smaller or larger than the corresponding exponent from large scales (αl). We find that a decreasing trend leads to αs > αl, whereas an increasing trend results in αs < αl. We also find that when one introduces correlated noise into a decreasing or increasing trend map, there is a crossover which separates two regions: for decreasing trends the exponents are related mainly to the trend over short scales, while over large scales the scaling exponent resembles the behavior of the increasing trend.
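A bare-bones version of the Detrended Fluctuation Analysis used above can be sketched in pure Python. This is a generic first-order DFA, not the authors' code, and the white-noise test signal is our own:

```python
# Minimal DFA-1 sketch: integrate the series, detrend each window of size s
# with a linear least-squares fit, and read the scaling exponent alpha from
# the log-log slope of the fluctuation function F(s).
import math
import random

def dfa_fluctuation(x, s):
    """RMS fluctuation F(s) of the integrated, window-detrended profile."""
    mean = sum(x) / len(x)
    y, acc = [], 0.0
    for v in x:              # integrated profile of mean-removed series
        acc += v - mean
        y.append(acc)
    n_win = len(y) // s
    total = 0.0
    for w in range(n_win):
        seg = y[w*s:(w+1)*s]
        t = list(range(s))
        tm, sm = sum(t) / s, sum(seg) / s
        b = (sum((ti - tm) * (si - sm) for ti, si in zip(t, seg))
             / sum((ti - tm) ** 2 for ti in t))
        a = sm - b * tm      # linear detrend within the window
        total += sum((si - (a + b * ti)) ** 2 for ti, si in zip(t, seg))
    return math.sqrt(total / (n_win * s))

def dfa_exponent(x, scales):
    """Scaling exponent alpha from a log-log fit of F(s) vs s."""
    pts = [(math.log(s), math.log(dfa_fluctuation(x, s))) for s in scales]
    lm = sum(p[0] for p in pts) / len(pts)
    fm = sum(p[1] for p in pts) / len(pts)
    return (sum((l - lm) * (f - fm) for l, f in pts)
            / sum((l - lm) ** 2 for l, f in pts))

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_exponent(noise, [8, 16, 32, 64, 128, 256])
print(round(alpha, 2))  # white noise should give alpha close to 0.5
```

The crossover studied in the paper corresponds to fitting separate slopes over the small-scale and large-scale portions of the (log s, log F(s)) points instead of a single global slope.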
A quality assessment of 3D video analysis for full scale rockfall experiments
NASA Astrophysics Data System (ADS)
Volkwein, A.; Glover, J.; Bourrier, F.; Gerber, W.
2012-04-01
The main goal of full-scale rockfall experiments is to retrieve the 3D trajectory of a boulder along the slope. Such trajectories can then be used to calibrate rockfall simulation models. This contribution presents the application of video analysis techniques to capture the rock fall velocity in free-fall full-scale rockfall experiments along a rock face with an inclination of about 50 degrees. Different scaling methodologies have been evaluated. They differ mainly in the way the scaling factors between the movie frames and reality are determined. For this purpose, scale bars and targets with known dimensions were distributed in advance along the slope. The individual scaling approaches are briefly described as follows: (i) The image raster is scaled to the distant fixed scale bar and then recalibrated to the plane of the passing rock boulder by taking the measured position of the nearest impact as the distance to the camera. The distances between the camera, scale bar, and passing boulder are surveyed. (ii) The image raster was scaled using the four targets nearest to the trajectory to be analyzed (identified using the frontal video). The average of the scaling factors was taken as the scaling factor. (iii) The image raster was scaled using the four targets nearest to the trajectory to be analyzed. The scaling factor for one trajectory was calculated by balancing the mean scaling factors associated with the two nearest and the two farthest targets in relation to their mean distance to the analyzed trajectory. (iv) As in the previous method, but with scaling factors varying along the trajectory. It was found that a direct measure of the scaling target and nearest impact zone is the most accurate. If a constant plane is assumed, the method does not account for lateral deviations of the rock boulder from the fall line, consequently adding error to the analysis. Thus a combination of scaling methods (i) and (iv) is considered to give the best results.
For best results regarding the lateral rough positioning along the slope, the frontal video must also be scaled. The error in scaling the video images can be evaluated by comparing the vertical trajectory component over time with the theoretical polynomial trend expected under gravity. The different tracking techniques used to plot the position of the boulder's center of gravity all generated positional data with minimal error, acceptable for trajectory analysis. However, when calculating instantaneous velocities, an amplification of this error becomes unacceptable. A regression analysis of the data is helpful to optimize trajectory and velocity, respectively.
Kim, K. S.; Boyer, L. L.; Degelman, L. O.
1985-01-01
In the first part of this study, daylighting levels in an actual classroom are compared to scale model measurements and to computer program predictions. Secondly, the daylighting effects in the building atrium are examined through the studies...
Comparative analysis of broad-scale landscape patterns
Gardner, R.H.; Turner, M.G.; Milne, B.T.; O'Neill, R.
1987-07-01
The effects of ecological processes on observed landscape patterns can be studied by using models which are neutral to the processes of interest. When USGS land use data (LUDA) were compared to data from neutral landscape models, significant differences in the number, size distribution of patches, and the area/perimeter (fractal) indices were found. Results have shown that landscape patterns are dependent on the scales at which they are measured. Therefore, it is critical to identify the appropriate scales and map resolution at which disturbance and landscape processes interact.
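The area/perimeter (fractal) index mentioned above can be illustrated with a short sketch: for patches whose perimeter scales as P ~ A^(D/2), the fractal dimension D is twice the slope of log P against log A. The square-patch example is a sanity check of our own, not the LUDA data:

```python
# Area-perimeter fractal index sketch: D = 2 * slope of log(P) vs log(A)
# over a population of landscape patches. Illustrative code only.
import math

def fractal_dimension(areas, perims):
    """D from a least-squares fit of log(P) against log(A)."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(p) for p in perims]
    xm, ym = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
             / sum((x - xm) ** 2 for x in xs))
    return 2.0 * slope

# Sanity check: square patches have P = 4*sqrt(A), so D should be 1.
sides = [1, 2, 4, 8, 16]
areas = [s * s for s in sides]
perims = [4 * s for s in sides]
print(round(fractal_dimension(areas, perims), 3))  # 1.0 for square patches
```

Irregular, convoluted patch boundaries push D toward 2, which is what makes the index useful for comparing observed maps against neutral landscape models.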
B. Müller; H. -Th. Janka
2014-06-26
Considering general relativistic, two-dimensional (2D) supernova (SN) explosion models of progenitor stars between 8.1 and 27 solar masses, we systematically analyze the properties of the neutrino emission from core collapse and bounce to the post-explosion phase. The models were computed with the Vertex-CoCoNuT code, using three-flavor, energy-dependent neutrino transport in the ray-by-ray-plus approximation. Our results confirm the close similarity of the mean energies of electron antineutrinos and heavy-lepton neutrinos and even their crossing during the accretion phase for stars with M>10 M_sun as observed in previous 1D and 2D simulations with state-of-the-art neutrino transport. We establish a roughly linear scaling of the electron antineutrino mean energy with the proto-neutron star (PNS) mass, which holds in time as well as for different progenitors. Convection inside the PNS affects the neutrino emission on the 10-20% level, and accretion continuing beyond the onset of the explosion prevents the abrupt drop of the neutrino luminosities seen in artificially exploded 1D models. We demonstrate that a wavelet-based time-frequency analysis of SN neutrino signals in IceCube will offer sensitive diagnostics for the SN core dynamics up to at least ~10kpc distance. Strong, narrow-band signal modulations indicate quasi-periodic shock sloshing motions due to the standing accretion shock instability (SASI), and the frequency evolution of such "SASI neutrino chirps" reveals shock expansion or contraction. The onset of the explosion is accompanied by a shift of the modulation frequency below 40-50Hz, and post-explosion, episodic accretion downflows will be signaled by activity intervals stretching over an extended frequency range in the wavelet spectrogram.
Validation of Normalizations, Scaling, and Photofading Corrections for FRAP Data Analysis
Kang, Minchul; Andreani, Manuel; Kenworthy, Anne K.
2015-01-01
Fluorescence Recovery After Photobleaching (FRAP) has been a versatile tool to study transport and reaction kinetics in live cells. Since the fluorescence data generated by fluorescence microscopy are on a relative scale, a wide variety of scalings and normalizations are used in quantitative FRAP analysis. Scaling and normalization are often required to account for inherent properties of the diffusing biomolecules of interest or photochemical properties of the fluorescent tag, such as the mobile fraction or photofading during image acquisition. In some cases, scaling and normalization are also used for computational simplicity. However, to the best of our knowledge, the validity of those various forms of scaling and normalization has not been studied in a rigorous manner. In this study, we investigate the validity of various scalings and normalizations that have appeared in the literature to calculate mobile fractions and correct for photofading, and assess their consistency with FRAP equations. As a test case, we consider linear or affine scaling of normal or anomalous diffusion FRAP equations in combination with scaling for immobile fractions. We also consider exponential scaling of either FRAP equations or FRAP data to correct for photofading. Using a combination of theoretical and experimental approaches, we show that compatible scaling schemes should be applied in the correct sequential order; otherwise, erroneous results may be obtained. We propose a hierarchical workflow to carry out FRAP data analysis and discuss the broader implications of our findings for FRAP data analysis using a variety of kinetic models. PMID:26017223
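Two of the corrections discussed above, photofading correction and mobile-fraction normalization, can be sketched as follows. The exponential fading model, function names, and numbers are illustrative assumptions, not the authors' workflow; note that the fading correction is applied before the mobile fraction is read off, in line with the paper's point about sequential order:

```python
# FRAP correction sketch: (1) undo exponential acquisition photofading,
# (2) compute the mobile fraction Mf = (F_inf - F_0) / (F_pre - F_0) from
# pre-bleach, post-bleach, and plateau intensities. Synthetic values only.
import math

def correct_photofading(f, times, k):
    """Undo exponential acquisition photofading F_obs(t) = F(t) * exp(-k t)."""
    return [fi * math.exp(k * t) for fi, t in zip(f, times)]

def mobile_fraction(f_pre, f_post0, f_inf):
    """Fraction of fluorophores free to diffuse back into the bleach spot."""
    return (f_inf - f_post0) / (f_pre - f_post0)

# Synthetic recovery: plateau at 0.8 of pre-bleach, observed with fading.
k, f_pre, f0, f_inf = 0.05, 1.0, 0.2, 0.8
times = [0.5 * i for i in range(10)]
true = [f_inf - (f_inf - f0) * math.exp(-0.7 * t) for t in times]
observed = [fi * math.exp(-k * t) for fi, t in zip(true, times)]
recovered = correct_photofading(observed, times, k)
print(round(mobile_fraction(f_pre, f0, f_inf), 2))  # 0.75
```

Reversing the order, reading the mobile fraction off the still-fading curve, underestimates the plateau, which is one instance of the order-dependence the study formalizes.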
NASA Astrophysics Data System (ADS)
Pachepsky, Y. A.
2009-12-01
Advances in sensor physics and technology create opportunities for explicit consideration of patterns in soil-vegetation-atmosphere systems (SVAS). The purpose of this talk is to provoke discussion on the current status of pattern analysis and interpretation in SVAS. The explicit consideration of patterns requires observations and analysis at scales that are both coarser and finer than the scale of interest. Within-scale scaling relationships are often observed in SVAS components. However, direct scaling relationships have not been discovered between scales, possibly because the different scales provide different types of information about the SVAS, use different variables to characterize SVAS, and exhibit different variability of the system. To transcend the scales, models are needed that explicitly treat the fine-scale heterogeneity and rare occurrences that control processes at the coarser scale. As patterns are generated from simulations and/or observations, methods are needed for pattern characterization and comparison. One promising direction here is the symbolic representation of patterns, which leads to the exploitation of methods developed in the bioinformatics community. Examples drawn from soil hydrology and micrometeorology will be used as illustrations to make the argument that observation and analysis of patterns is an important part of understanding and quantifying relationships between structure, functioning and self-organization in SVAS and their components.
NASA Technical Reports Server (NTRS)
Beard, Daniel A.; Liang, Shou-Dan; Qian, Hong; Biegel, Bryan (Technical Monitor)
2001-01-01
Predicting the behavior of large-scale biochemical metabolic networks represents one of the greatest challenges of bioinformatics and computational biology. Approaches, such as flux balance analysis (FBA), that account for the known stoichiometry of the reaction network while avoiding implementation of detailed reaction kinetics are perhaps the most promising tools for the analysis of large complex networks. As a step towards building a complete theory of biochemical circuit analysis, we introduce energy balance analysis (EBA), which complements the FBA approach by introducing fundamental constraints based on the first and second laws of thermodynamics. Fluxes obtained with EBA are thermodynamically feasible and provide valuable insight into the activation and suppression of biochemical pathways.
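The thermodynamic constraint that EBA adds on top of FBA can be illustrated with a toy network. The stoichiometry and free-energy numbers below are our own, and the checks are a sketch of the feasibility conditions, not the authors' implementation:

```python
# Sketch of the two constraint layers: FBA requires steady state (S v = 0);
# EBA additionally requires every nonzero flux to run "downhill",
# i.e. v_i * delta_mu_i < 0. A two-reaction cycle A -> B -> A satisfies
# mass balance but can never be thermodynamically feasible, because the
# reaction free energies around a closed loop sum to zero.

def mass_balanced(S, v, tol=1e-9):
    """FBA-style steady-state check: S v = 0 for every metabolite row."""
    return all(abs(sum(S[m][i] * v[i] for i in range(len(v)))) < tol
               for m in range(len(S)))

def thermodynamically_feasible(v, delta_mu):
    """EBA-style check: no flux may run against its reaction free energy."""
    return all(vi == 0 or vi * dmu < 0 for vi, dmu in zip(v, delta_mu))

S = [[-1, 1],   # metabolite A: consumed by R1 (A->B), produced by R2 (B->A)
     [1, -1]]   # metabolite B: produced by R1, consumed by R2
v = [1.0, 1.0]          # a futile cycle flux
dmu = [-5.0, 5.0]       # loop law: free energies around the cycle cancel

print(mass_balanced(S, v))                 # True: FBA alone accepts the loop
print(thermodynamically_feasible(v, dmu))  # False: EBA rejects it
```

Shutting one leg of the cycle off (v = [1, 0]) satisfies the EBA condition, which is the kind of pathway suppression the abstract refers to.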
Analysis of subjective knee complaints using visual analog scales
Fred Flandry; Jon P. Hunt; Glenn C. Terry; Jack C. Hughston
1991-01-01
A questionnaire using a system of visual analog scales was developed for analyzing subjective knee complaints. This system was tested on 117 consecutive patients who had undergone knee surgery and 65 patients at their initial office evaluation of a knee disorder. The validity of and patient affinity for this type of questionnaire was compared with that of three
Consumer appraisal of drinking water: Multidimensional scaling analysis
Marie Falahee; A. W. MacRae
1995-01-01
Two non-quantitative, non-descriptive procedures, similarity sorting and preference ranking, were used to compare the tastes of 13 different water types. Each procedure was conducted with a different set of completely untrained assessors. A total of 25 assessors took part in the sorting procedure and 87 in the ranking procedure. After multidimensional scaling there was good agreement between the spatial configurations
A Rasch Analysis of the Teachers Music Confidence Scale
ERIC Educational Resources Information Center
Yim, Hoi Yin Bonnie; Abd-El-Fattah, Sabry; Lee, Lai Wan Maria
2007-01-01
This article presents a new measure of teachers' confidence to conduct musical activities with young children: the Teachers Music Confidence Scale (TMCS). The TMCS was developed using a sample of 284 in-service and pre-service early childhood teachers in Hong Kong Special Administrative Region (HKSAR). The TMCS consisted of 10 musical activities.…
Analysis of stock market indices through multidimensional scaling
NASA Astrophysics Data System (ADS)
Machado, J. Tenreiro; Duarte, Fernando B.; Duarte, Gonçalo Monteiro
2011-12-01
We propose a graphical method to visualize possible time-varying correlations between fifteen stock market values. The method is useful for observing stable or emerging clusters of stock markets with similar behaviour. The graphs, originated from applying multidimensional scaling techniques (MDS), may also guide the construction of multivariate econometric models.
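Multidimensional scaling of this kind starts from a matrix of pairwise dissimilarities between markets and recovers low-dimensional coordinates. A minimal sketch of classical (Torgerson) MDS, using an illustrative three-point distance matrix rather than actual market data:

```python
# Classical (Torgerson) multidimensional scaling from a distance matrix,
# the kind of technique used to map market (dis)similarities to a plane.
# The 3-point distance matrix below is an illustrative assumption.
import numpy as np

def classical_mds(D, k=2):
    """Embed n points in k dimensions from a symmetric distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ (D ** 2) @ J           # double-centered Gram matrix
    w, V = np.linalg.eigh(B)              # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:k]         # keep the top-k components
    L = np.sqrt(np.clip(w[idx], 0, None))
    return V[:, idx] * L                  # coordinates, up to rotation/reflection

# Three collinear "markets" at positions 0, 1, 3 on a line.
D = np.array([[0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0],
              [3.0, 2.0, 0.0]])
X = classical_mds(D, k=2)
```

For exact Euclidean distances the embedding reproduces the input distances; with real dissimilarity data the configuration is only an approximation.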
Bohr model and dimensional scaling analysis of atoms and molecules
Urtekin, Kerim
2007-04-25
, which can be derived from quantum mechanics using the well-known dimensional scaling technique, is used to yield potential energy curves of H2 and several more complicated molecules, such as LiH, Li2, BeH, He2 and H3, with accuracies strikingly comparable...
Lunar topography: Statistical analysis of roughness on a kilometer scale
Y. Yokota; J. Haruyama; C. Honda; T. Morota; M. Ohtake; H. Kawasaki; S. Hara; K. Hioki
2008-01-01
Quantifying surface roughness may contribute significantly to the stratigraphic study of the Moon. We demonstrate the root mean square (RMS) deviation method as an indicator of topographic roughness on a kilometer scale, using a Digital Terrain Model derived from Apollo Mapping Camera stereo images, and compare three regions in the lunar highlands.
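The RMS-deviation indicator can be sketched as follows: detrend an elevation profile and take the root mean square of the residuals. The profile values below are illustrative, not Apollo data:

```python
# RMS deviation as a roughness indicator: remove a best-fit linear trend
# from an elevation profile, then take the RMS of the residuals.
# Profile values are illustrative assumptions.
import numpy as np

def rms_roughness(x, h):
    """RMS deviation of heights h about a best-fit linear trend."""
    slope, intercept = np.polyfit(x, h, 1)
    residual = h - (slope * x + intercept)
    return np.sqrt(np.mean(residual ** 2))

x = np.linspace(0.0, 10.0, 101)               # distance along profile, km
smooth = 0.5 * x + 2.0                        # tilted but perfectly smooth surface
rough = smooth + 0.3 * np.sin(2 * np.pi * x)  # add kilometer-scale undulation
```

Detrending first matters: a tilted but smooth surface should report near-zero roughness, while superposed undulation should not.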
Scaling Acoustic Data Analysis through Collaboration and Automation
Jason Wimmer; Michael Towsey; Birgit Planitz; Paul Roe; Ian Williamson
2010-01-01
Monitoring and assessing environmental health is becoming increasingly important as human activity and climate change place greater pressure on global biodiversity. Acoustic sensors provide the ability to collect data passively, objectively and continuously across large areas for extended periods of time. While these factors make acoustic sensors attractive as autonomous data collectors, there are significant issues associated with large-scale data
Picomole scale stereochemical analysis of sphingosines and dihydrosphingosines
Akira Kawamura; Nina Berova; Verena Dirsch; Alfonso Mangoni; Koji Nakanishi; Gary Schwartz; Alicja Bielawska; Yusuf Hannun; Isao Kitagawa
1996-01-01
We have developed a simple picomole (low nanogram) scale HPLC scheme which can separate all eight isomers of sphingosine and dihydrosphingosine thus leading to the identification of their relative and absolute configurations. The amino group of the sample is derivatized to its fluorescent N-naphthimide which is analyzed by normal and chiral phase HPLC, coupled with fluorescence peak detection. If necessary,
SUSY Parameter Analysis at TeV and Planck Scales
B. C. Allanach; G. A. Blair; A. Freitas; S. Kraml; H. -U. Martyn; G. Polesello; W. Porod; P. M. Zerwas
2004-07-06
Coherent analyses at future LHC and LC experiments can be used to explore the breaking mechanism of supersymmetry and to reconstruct the fundamental theory at high energies, in particular at the grand unification scale. This will be exemplified for minimal supergravity.
Animating vernaculars, wired: critical discourse analysis on an awkward scale
Katie Vann
2009-01-01
In recent years critical discourse analysts have increasingly pointed to the World Wide Web as a distinctive site of discursive practice, and have urged that more research work be conducted with specifically web-based corpora. While the conduct of ‘wired CDA’ presents new possibilities for CDA, it also entails apparent dilemmas that stem from the scale of web-specific corpora and CDA's
THE USEFULNESS OF SCALE ANALYSIS: EXAMPLES FROM EASTERN MASSACHUSETTS
Many water system managers and operators are curious about the value of analyzing the scales of drinking water pipes. Approximately 20 sections of lead service lines were removed in 2002 from various locations throughout the greater Boston distribution system, and were sent to ...
Analysis of Small-Scale Hydraulic Actuation
Jicheng Xia
Durfee, William K.
of force and power while at the same time being relatively lightweight compared to an equivalent electromechanical system comprised of off-the-shelf components. Calculation results revealed that high operating pressures are needed for small-scale hydraulics to be lighter than the equivalent
An Exploratory Factor Analysis of the Differential Ability Scales.
ERIC Educational Resources Information Center
Dunham, Mardis D.; McIntosh, David E.
The primary goal of this study was to investigate the underlying structure of the Differential Ability Scales (DAS) using Exploratory Principal Axis Factoring (PAF) with 62 nonclinical preschoolers. While previous factor analyses of the DAS Core subtests yielded two distinct factors, the current results revealed only one factor,…
Performance analysis of small-scale experimental facility of TWDEC
Ryoh Kawana; Motoo Ishikawa; Hiromasa Takeno; Takayoshi Yamamoto; Yasuyoshi Yasaka
2008-01-01
The objective of the present paper is to analyze small-scale experimental facilities of TWDEC (Travelling Wave type Direct Energy Converter) and to propose a modification to a measuring device of the facilities by means of numerical simulation with an axisymmetric two-dimensional approximation (a PIC method). The numerical simulation has given the following results: (1) tendency of the numerical
Introducing Scale Analysis by Way of a Pendulum
ERIC Educational Resources Information Center
Lira, Ignacio
2007-01-01
Empirical correlations are a practical means of providing approximate answers to problems in physics whose exact solution is otherwise difficult to obtain. The correlations relate quantities that are deemed to be important in the physical situation to which they apply, and can be derived from experimental data by means of dimensional and/or scale…
Qualitative Differences of Divalent Salts: Multidimensional Scaling and Cluster Analysis
Juyun Lim; Harry T. Lawless
2005-01-01
Sensations from salts of iron, calcium, magnesium, and zinc with different anions were studied using a sorting task and multidimensional scaling (MDS). Ten divalent salts were adjusted in concentrations such that the mean intensity ratings were approximately equal. Stimuli were sorted on the basis of similarity to minimize any semantic influence and were examined with and without nasal occlusion to
Williams, Dean [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Doutriaux, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Patchett, John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Sean [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shipman, Galen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Ross [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Steed, Chad [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Krishnan, Harinarayan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Silva, Claudio [NYU Polytechnic School of Engineering, New York, NY (United States); Chaudhary, Aashish [Kitware, Inc., Clifton Park, NY (United States); Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pugmire, David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Childs, Hank [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Prabhat, Mr. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States); Bauer, Andrew [Kitware, Inc., Clifton Park, NY (United States); Pletzer, Alexander [Tech-X Corp., Boulder, CO (United States); Poco, Jorge [NYU Polytechnic School of Engineering, New York, NY (United States); Ellqvist, Tommy [NYU Polytechnic School of Engineering, New York, NY (United States); Santos, Emanuele [Federal Univ. of Ceara, Fortaleza (Brazil); Potter, Gerald [NASA Johnson Space Center, Houston, TX (United States); Smith, Brian [Oak Ridge National Lab. 
(ORNL), Oak Ridge, TN (United States); Maxwell, Thomas [NASA Johnson Space Center, Houston, TX (United States); Kindig, David [Tech-X Corp., Boulder, CO (United States); Koop, David [NYU Polytechnic School of Engineering, New York, NY (United States)
2013-05-01
To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.
Doutriaux, Charles [Lawrence Livermore National Laboratory (LLNL); Patchett, John [Los Alamos National Laboratory (LANL); Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL); Miller, Ross G [ORNL; Steed, Chad A [ORNL; Krishnan, Harinarayan [Lawrence Berkeley National Laboratory (LBNL); Silva, Claudio T. [New York University, Center for Urban Sciences; Chaudhary, Aashish [Kitware; Bremer, Peer-Timo [Lawrence Livermore National Laboratory (LLNL); Pugmire, Dave [ORNL; Bethel, E Wes [Lawrence Berkeley National Laboratory (LBNL); Childs, Hank [Lawrence Berkeley National Laboratory (LBNL); Prabhat, [Lawrence Berkeley National Laboratory (LBNL); Geveci, Berk [Kitware; Bauer, Andy [Kitware; Pletzer, Alexander [Tech-X Corporation; Poco, Jorge [Polytechnic Institute of New York University; Ellqvist, Tommy [New York University; Santos, Emanuele [Universidade Federal do Ceara, Ceara, Brazil; Potter, Gerald [National Aeronautics and Space Administration (NASA); Smith, Brian E [ORNL; Maxwell, Thomas P. [National Aeronautics and Space Administration (NASA); Kindig, Dave [Tech-X Corporation; Koop, David [New York University
2013-01-01
To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in interactive visual analysis of distributed large-scale data for the climate community.
Williams, Dean N. [Lawrence Livermore National Laboratory (LLNL); Bremer, Peer-Timo [Lawrence Livermore National Laboratory (LLNL); Doutriaux, Charles [Lawrence Livermore National Laboratory (LLNL); Patchett, John [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Shipman, Galen M [ORNL; Miller, Ross G [ORNL; Pugmire, Dave [ORNL; Smith, Brian E [ORNL; Steed, Chad A [ORNL; Bethel, E Wes [Lawrence Berkeley National Laboratory (LBNL); Childs, Hank [Lawrence Berkeley National Laboratory (LBNL); Krishnan, Harinarayan [Lawrence Berkeley National Laboratory (LBNL); Silva, Claudio T. [New York University, Center for Urban Sciences; Santos, Emanuele [Universidade Federal do Ceara, Ceara, Brazil; Koop, David [New York University; Ellqvist, Tommy [New York University; Poco, Jorge [Polytechnic Institute of New York University; Geveci, Berk [Kitware; Chaudhary, Aashish [Kitware; Bauer, Andy [Kitware; Pletzer, Alexander [Tech-X Corporation; Kindig, Dave [Tech-X Corporation; Potter, Gerald [National Aeronautics and Space Administration (NASA); Maxwell, Thomas P. [National Aeronautics and Space Administration (NASA)
2013-01-01
To support interactive visualization and analysis of complex, large-scale climate data sets, UV-CDAT integrates a powerful set of scientific computing libraries and applications to foster more efficient knowledge discovery. Connected through a provenance framework, the UV-CDAT components can be loosely coupled for fast integration or tightly coupled for greater functionality and communication with other components. This framework addresses many challenges in the interactive visual analysis of distributed large-scale data for the climate community.
Computational methods for time-scale analysis of nonlinear dynamical systems
Shawn Iravanchy
2003-01-01
Knowledge of the time-scale structure of a smooth finite dimensional nonlinear dynamical system provides the opportunity for model decomposition, if there are two or more disparate time-scales. A few benefits of such model decomposition are simplified control design and analysis and reduced computational effort in simulation. Singular perturbation theory provides the tools necessary to analyze and decompose a multiple time-scale
An analysis of a large scale habitat monitoring application
Robert Szewczyk; Alan M. Mainwaring; Joseph Polastre; John Anderson; David E. Culler
2004-01-01
Habitat and environmental monitoring is a driving application for wireless sensor networks. We present an analysis of data from second-generation sensor networks deployed during the summer and autumn of 2003. During a 4-month deployment, these networks, consisting of 150 devices, produced unique datasets for both systems and biological analysis. This paper focuses on nodal and network performance,
Combinatorial motif analysis and hypothesis generation on a genomic scale
Yuh-jyh Hu; Suzanne B. Sandmeyer; Calvin Mclaughlin; Dennis F. Kibler
2000-01-01
Motivation: Computer-assisted methods are essential for the analysis of biosequences. Gene activity is regulated in part by the binding of regulatory molecules (transcription factors) to combinations of short motifs. The goal of our analysis is the development of algorithms to identify regu- latory motifs and to predict the activity of combinations of those motifs. Approach: Our research begins with a
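At its simplest, motif analysis begins with counting candidate motif occurrences in a sequence. A toy sketch of this first step (the sequence and motifs are illustrative, not the paper's algorithm):

```python
# Toy motif scan: count (possibly overlapping) occurrences of short
# regulatory motifs in a promoter sequence. Sequence and motifs are
# illustrative assumptions.
def count_motif(seq, motif):
    """Count overlapping occurrences of motif in seq."""
    return sum(1 for i in range(len(seq) - len(motif) + 1)
               if seq[i:i + len(motif)] == motif)

promoter = "TATATAGCGCTATATA"
counts = {m: count_motif(promoter, m) for m in ("TATA", "GCGC")}
```

Overlapping matches are counted deliberately: tandem motif repeats are common in regulatory regions and a non-overlapping scan would undercount them.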
Bing-Nan Lu; En-Guang Zhao; Shan-Gui Zhou
2012-01-04
For the first time the potential energy surfaces of actinide nuclei in the $(\beta_{20}, \beta_{22}, \beta_{30})$ deformation space are obtained from a multi-dimensional constrained covariant density functional theory. With this newly developed theory we are able to explore the importance of the triaxial and octupole shapes simultaneously along the whole fission path. It is found that besides the octupole deformation, the triaxiality also plays an important role at the second fission barriers. Both the outer and the inner barriers are lowered by the triaxial deformation compared with axially symmetric results. This lowering effect for the reflection-asymmetric outer barrier is 0.5 $\sim$ 1 MeV, accounting for $10 \sim 20\%$ of the barrier height. With the inclusion of the triaxial deformation, good agreement with the data for the outer barriers of actinide nuclei is achieved.
Scale Construction on the Basis of Components Analysis: A Comparison of Three Strategies.
ERIC Educational Resources Information Center
ten Berge, Jos M. F.; Knol, Dirk L.
1985-01-01
Constructing scales on the basis of components analysis by assigning weights 1 to variables with high positive loadings on the components and -1 to variables with high negative loadings was compared with other strategies of scale construction, which assign weights 1 or -1 to variables with high weights for the components. (Author/BW)
The Religious Orientation Scale: Review and Meta-Analysis of Social Desirability Effects.
ERIC Educational Resources Information Center
Trimble, Douglas E.
1997-01-01
Studies of the reliability and validity of scores on the Religious Orientation Scale (G. Allport and J. Ross, 1967) were reviewed with respect to social desirability. Meta-analysis shows that one scale correlates with social desirability, but another does not, suggesting that partialing out this variance is not recommended. (SLD)
A Philosophical Item Analysis of the Right-Wing Authoritarianism Scale.
ERIC Educational Resources Information Center
Eigenberger, Marty
Items of Altemeyer's 1986 version of the "Right-Wing Authoritarianism Scale" (RWA Scale) were analyzed as philosophical propositions in an effort to establish each item's suggestive connotation and denotation. The guiding principle of the analysis was the way in which the statements reflected authoritarianism's defining characteristics of…
Pre-site Characterization Risk Analysis for Commercial-Scale Carbon Sequestration
Lu, Zhiming
Zhenxue Dai
A probability framework is developed to evaluate subsurface risks associated with commercial-scale carbon sequestration. The Big Sky Carbon Sequestration Partnership (BSCSP) is one of seven partnerships tasked
Experimental and theoretical analysis of a small scale thermoacoustic cooler driven by two sources
Paris-Sud XI, Université de
This work is motivated by scaling down thermoacoustic coolers to provide practical solutions for thermal heat management, in particular finding the acoustic field which optimizes thermoacoustic effects. Moreover, the working frequency is not related
Large-Scale Gene Expression Data Analysis: A New Challenge to Computational Biologists
Michael Q
The use of arrays to monitor gene expression at a genome-wide scale constitutes a fundamental advance in biology. In particular, the expression pattern of all genes in Saccharomyces cerevisiae can be interrogated using
Paris-Sud XI, Université de
approach is only valid in the low frequency range, and we have bypassed the propagation of acoustic waves. Keywords: triple scale expansion; periodic solutions; nonlinear vibrations; normal modes. In this article, we perform a triple scale analysis of small periodic solutions
Multi-scale Complexity Analysis on the Sequence of E. coli Complete Genome
Ren, Kui
Jin Wang; Qidong Zhang (Mechanics and Engineering Science, Beijing University, Beijing 100871)
The complete sequencing of the human genome and other genome sequences from model organisms motivates this study. We have analyzed the multi-scale
Analysis of Rainfall records in India: Self Organized Criticality and Scaling
Sarkar, A
2005-01-01
The time series data of the monthly rainfall records (for the time period 1871-2002) in All India and different regions of India are analyzed. It is found that the distributions of the rainfall intensity exhibit perfect power law behavior. The scaling analysis revealed two distinct scaling regions in the rainfall time series.
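A standard way to estimate such a scaling exponent is linear regression in log-log space, since p(x) ~ x^(-alpha) becomes a straight line of slope -alpha. A minimal sketch on exact synthetic data (not the rainfall records):

```python
# Power-law check via log-log regression: a distribution p(x) ~ x**(-alpha)
# plots as a straight line of slope -alpha in log-log space.
# The data here are exact synthetic values, not rainfall intensities.
import numpy as np

x = np.arange(1.0, 101.0)
p = x ** -2.0                                     # exact power law, alpha = 2
slope, intercept = np.polyfit(np.log(x), np.log(p), 1)
alpha = -slope                                    # estimated scaling exponent
```

With real data one would fit each scaling region separately, since the abstract reports two distinct regimes.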
Analysis of Likert Scale Data in Disability and Medical Rehabilitation Research
Michael J. Nanna; Shlomo S. Sawilowsky
1998-01-01
Many clinical evaluations are subjective, resulting in ordinal level measurements. A widely used example in medical rehabilitation is the Functional Independence Measure (FIM), which provides a measure of disability. The FIM is an 18-item, 7-point Likert scale ranging from complete dependence to complete independence. Parametric statistics are commonly used for the analysis of ordinal data. However, Likert scales often lead
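For ordinal Likert data, a rank-based alternative to the parametric t-test is the Mann-Whitney U test, which uses only the ordering of the ratings. A minimal sketch with illustrative ratings (not FIM data):

```python
# Ordinal-data sketch: compare two groups of 7-point Likert ratings with a
# rank-based test (Mann-Whitney U) rather than a parametric t-test.
# The ratings below are illustrative, not FIM data.
from scipy.stats import mannwhitneyu

group_a = [2, 3, 3, 4, 2, 3]   # e.g. ratings at admission
group_b = [5, 6, 5, 7, 6, 5]   # e.g. ratings at discharge
stat, p_value = mannwhitneyu(group_a, group_b, alternative="two-sided")
```

Because every rating in the first group is below every rating in the second, the U statistic is 0, its most extreme possible value.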
ERIC Educational Resources Information Center
Smits, Iris A. M.; Timmerman, Marieke E.; Meijer, Rob R.
2012-01-01
The assessment of the number of dimensions and the dimensionality structure of questionnaire data is important in scale evaluation. In this study, the authors evaluate two dimensionality assessment procedures in the context of Mokken scale analysis (MSA), using a so-called fixed lowerbound. The comparative simulation study, covering various…
Robust Mokken Scale Analysis by Means of the Forward Search Algorithm for Outlier Detection
ERIC Educational Resources Information Center
Zijlstra, Wobbe P.; van der Ark, L. Andries; Sijtsma, Klaas
2011-01-01
Exploratory Mokken scale analysis (MSA) is a popular method for identifying scales from larger sets of items. As with any statistical method, in MSA the presence of outliers in the data may result in biased results and wrong conclusions. The forward search algorithm is a robust diagnostic method for outlier detection, which we adapt here to…
Initial Economic Analysis of Utility-Scale Wind Integration in Hawaii
Not Available
2012-03-01
This report summarizes an analysis, conducted by the National Renewable Energy Laboratory (NREL) in May 2010, of the economic characteristics of a particular utility-scale wind configuration project that has been referred to as the 'Big Wind' project.
Large-Scale Analysis of Formant Frequency Estimation Variability in Conversational Telephone Speech*
Reva Schwartz (United States Secret Service)
We quantitatively investigate how the telephone channel affects formant frequency estimation. The telephone channel and regional dialect are important factors in forensic
Modeling and Analysis of Large-Scale On-Chip Interconnects
Feng, Zhuo
2010-07-14
As IC technologies scale to the nanometer regime, efficient and accurate modeling and analysis of VLSI systems with billions of transistors and interconnects becomes increasingly critical and difficult. VLSI systems impacted ...
An analysis of exhaustion hardening in micron-scale plasticity
A. A. Benzerga
2008-01-01
The rate-dependent behavior of micron-scale model planar crystals is investigated using the framework of mechanism-based discrete dislocation plasticity. Long-range interactions between dislocations are accounted for through elasticity. Mechanism-based constitutive rules are used to represent the short-range interactions between dislocations, including dislocation multiplication and dislocation escape at free surfaces. Emphasis is laid on circumstances where the deformed samples are not statistically
On the Choice of Scales for Task Analysis
Juan I. Sanchez; Scott L. Fraser
1992-01-01
One hundred one incumbents of 25 service jobs rated their respective tasks on relative time spent, difficulty of learning, criticality, and overall importance. Although scale convergence varied as a function of job title, task criticality and importance ratings were similar and presented low to moderate levels of convergence with both time-spent and difficulty-of-learning ratings. Different composites of task importance were
Smectic ordering in liquid crystal - aerosil dispersions II. Scaling analysis
Germano S. Iannacchione; Sungil Park; Carl W. Garland; Robert J. Birgeneau; Robert L. Leheny
2002-08-14
Liquid crystals offer many unique opportunities to study various phase transitions with continuous symmetry in the presence of quenched random disorder (QRD). The QRD arises from the presence of porous solids in the form of a random gel network. Experimental and theoretical work support the view that for fixed (static) inclusions, quasi-long-range smectic order is destroyed for arbitrarily small volume fractions of the solid. However, the presence of porous solids indicates that finite-size effects could play some role in limiting long-range order. In an earlier work, the nematic - smectic-A transition region of octylcyanobiphenyl (8CB) and silica aerosils was investigated calorimetrically. A detailed x-ray study of this system is presented in the preceding Paper I, which indicates that pseudo-critical scaling behavior is observed. In the present paper, the role of finite-size scaling and two-scale universality aspects of the 8CB+aerosil system are presented and the dependence of the QRD strength on the aerosil density is discussed.
Cook, Robert Annan
1972-01-01
SCALE DEPENDENCIES IN STRUCTURAL ANALYSIS AS ILLUSTRATED BY CHEVRON FOLDS ALONG THE BEARTOOTH FRONT, WYOMING. A Thesis by Robert Annan Cook, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 1972. Major Subject: Geology.
Two scale analysis applied to low permeability sandstones
NASA Astrophysics Data System (ADS)
Davy, Catherine; Song, Yang; Nguyen Kim, Thang; Adler, Pierre
2015-04-01
Low permeability materials are often composed of several pore structures of various scales, superposed one on another. It is often impossible to measure and to determine the macroscopic properties in one step. In the low permeability sandstones that we consider, the pore space is essentially made of micro-cracks between grains. These fissures are two-dimensional structures whose aperture is roughly on the order of one micron. On the grain scale, i.e., on the scale of 1 mm, the fissures form a network. These two structures can be measured by using two different tools [1]. The density of the fissure networks is estimated by trace measurements on the two-dimensional images provided by classical 2D Scanning Electron Microscopy (SEM) with a pixel size of 2.2 microns. The three-dimensional geometry of the fissures is measured by X-ray micro-tomography (micro-CT) in the laboratory, with a voxel size of 0.6 x 0.6 x 0.6 microns. The macroscopic permeability is calculated in two steps. On the small scale, the fracture transmissivity is calculated by solving the Stokes equation on several portions of the fissures measured by micro-CT. On the large scale, the density of the fissures is estimated by three different means, based on the number of intersections with scanlines, on the surface density of fissures, and on the number of intersections between fissures per unit surface. These three means show that the network is relatively isotropic, and they provide very close estimations of the density. Then, a general formula derived from systematic numerical computations [2] is used to derive the macroscopic dimensionless permeability, which is proportional to the fracture transmissivity. The combination of the two previous results yields the dimensional macroscopic permeability, which is found to be in acceptable agreement with the experimental measurements. Some extensions of this preliminary work will be presented as a tentative conclusion. References: [1] Z. Duan, C. A. Davy, F. Agostini, L. Jeannin, D. Troadec, F. Skoczylas, Hydraulic cut-off and gas recovery potential of sandstones from Tight Gas Reservoirs: a laboratory investigation, International Journal of Rock Mechanics and Mining Sciences, Vol. 65, pp. 75-85, 2014. [2] P. M. Adler, J.-F. Thovert, V. V. Mourzenko, Fractured Porous Media, Oxford University Press, 2012.
ERIC Educational Resources Information Center
Emons, Wilco H. M.; Sijtsma, Klaas; Pedersen, Susanne S.
2012-01-01
The Hospital Anxiety and Depression Scale (HADS) measures anxiety and depressive symptoms and is widely used in clinical and nonclinical populations. However, there is some debate about the number of dimensions represented by the HADS. In a sample of 534 Dutch cardiac patients, this study examined (a) the dimensionality of the HADS using Mokken…
Murray Gibson
2007-04-27
Musical scales involve notes that, sounded simultaneously (chords), sound good together. The result is the left brain meeting the right brain over Pythagorean intervals of overlapping notes. This synergy would suggest less difference between the workings of the right brain and the left brain than common wisdom would dictate. The pleasing sound of harmony comes when two notes share a common harmonic, meaning that their frequencies are in simple integer ratios, such as 3/2 (G/C) or 5/4 (E/C).
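The shared-harmonic idea is easy to make concrete: for fundamentals in an integer ratio, the first common harmonic is their least common multiple. A sketch with illustrative integer frequencies:

```python
# Shared-harmonic sketch: two notes whose fundamental frequencies are in a
# simple integer ratio share a low harmonic, which the ear hears as consonant.
# Frequencies are illustrative integers (Hz), not a specific tuning standard.
from math import lcm

def first_common_harmonic(f1, f2):
    """Lowest frequency that is an integer multiple of both fundamentals."""
    return lcm(f1, f2)

# A 3/2 ratio (a perfect fifth): harmonics 200, 400, 600, ... and 300, 600, ...
shared = first_common_harmonic(200, 300)
```

For the 3/2 fifth the third harmonic of the lower note coincides with the second harmonic of the upper note; simpler ratios share a lower, and hence stronger, common harmonic.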
Large-scale analysis of phylogenetic search behavior
Park, Hyun Jung
2009-05-15
Phylogenetic analysis, which infers evolutionary trees, is used in all branches of biology. Applications include designing more effective drugs, tracing the transmission of deadly viruses, and guiding conservation and biodiversity efforts. Most...
ERIC Educational Resources Information Center
Staik, Irene M.
A study was undertaken to provide a factor analysis of the Omega Scale, a 25-item, Likert-type scale developed in 1984 to assess attitudes toward death and funerals and other body disposition practices. The Omega Scale was administered to 250 students enrolled in introductory psychology classes at two higher education institutions in Alabama.…
A. I. Liapis; R. Bruttini
1995-01-01
Dynamic and spatially multi-dimensional mathematical models of the primary and secondary drying stages of the freeze-drying of pharmaceutical crystalline and amorphous solutes in vials are constructed and presented in this work. The models account for the removal of free and bound water and could also provide the geometric shape of the moving interface and its position. It is proved that
Stochastic analysis of a field-scale unsaturated transport experiment
NASA Astrophysics Data System (ADS)
Severino, G.; Comegna, A.; Coppola, A.; Sommella, A.; Santini, A.
2010-10-01
Modelling of field-scale transport of chemicals is of deep interest to public as well as private sectors, and it represents an area of active theoretical research in many environmentally-based disciplines. However, the experimental data needed to validate field-scale transport models are very limited, due to the numerous logistic difficulties that one faces. In the present paper, the migration of a tracer (Cl-) was monitored during its movement in the unsaturated zone beneath the surface of an 8 m × 50 m sandy soil. A flux-controlled, steady-state water flow (Jw = 10 mm/day) was achieved by bidaily sprinkler irrigation. A pulse of 105 g/m2 KCl was applied uniformly to the surface, and subsequently leached downward by the same (chloride-free) flux Jw over the successive two months. Chloride concentration monitoring was carried out in seven measurement campaigns (each one corresponding to a given time) along seven parallel transects. The mass recovery was near 100%, underlining the very good quality of the concentration data set. The chloride concentrations are used to test two field-scale models of unsaturated transport: (i) the Advection-Dispersion Equation (ADE), which models transport far from the zone of solute entry, and (ii) the Stochastic-Convective Lognormal (CLT) transfer function model, which instead accounts for transport near the release zone. Both models provided an excellent representation of the solute spreading at z > 0.45 m (z = 0.45 m being the calibration depth). As a consequence, by the depth z ≈ 50 cm one can regard transport as Fickian. The ADE model dramatically underestimates solute spreading at shallow depths. This is due to the boundary effects which are not captured by the ADE. The CLT model appears to be a more robust tool to mimic transport at every depth.
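Far from the inlet, the 1-D ADE response to a surface pulse is a Gaussian travelling at the mean pore-water velocity. A sketch checking mass recovery for an instantaneous pulse (v, D, and the grid are illustrative assumptions; only the 105 g/m2 applied mass comes from the abstract):

```python
# 1-D advection-dispersion (ADE) pulse sketch: an instantaneous surface pulse
# of mass M advects at velocity v and spreads as a Gaussian.
# v, D, t, and the grid are illustrative assumptions, not the field values.
import numpy as np

def ade_pulse(z, t, M=105.0, v=0.05, D=0.001):
    """Concentration profile of a Dirac pulse of mass M under the 1-D ADE."""
    return M / np.sqrt(4 * np.pi * D * t) * np.exp(-(z - v * t) ** 2 / (4 * D * t))

z = np.linspace(-5.0, 10.0, 20001)   # depth grid, m (wide enough to hold the plume)
t = 30.0                             # days since application
c = ade_pulse(z, t)

# Trapezoidal integration of the profile should recover the applied mass,
# mirroring the near-100% mass recovery reported for the field data.
recovered_mass = float(np.sum(0.5 * (c[1:] + c[:-1]) * np.diff(z)))
```

The plume centre sits at z = v t, so peak depth gives the mean velocity and the spread gives the dispersion coefficient.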
Scale Analysis of Convective Activity on Titan: Is the Surface Setting the Time Scale?
NASA Astrophysics Data System (ADS)
Ingersoll, A. P.; Roe, H. G.; Schaller, E. L.; Brown, M. E.
2005-08-01
A fundamental number for convective activity is the time tau needed to re-humidify the atmosphere. This is M/E, where M is the mass of condensate per unit area (50 kg/m2 for Earth) and E is the evaporation rate (1.5 m of liquid water/yr). Alternatively, tau is ML/F, where L is the latent heat of vaporization and F is the surface heat flux (125 W/m2 for Earth). With these numbers, tau = 12 days for Earth. This number also controls the time that a parcel spends in the descending branch of the Hadley cell, since the parcel must radiate away the heat of vaporization that it gained in the ascending branch. Tropical convective activity fluctuates on comparable time scales. Equatorial wave disturbances propagate to the west with a period of 4-5 days (Holton, 2004, p. 375). The equatorial intraseasonal oscillation, also known as the Madden-Julian oscillation (MJO), propagates to the east on a timescale of 30-60 days (Holton, 2004, p. 385). These bracket the time scale tau. Other oscillations like the El Niño Southern Oscillation (ENSO) and the quasi-biennial oscillation (QBO) involve the oceans and the stratosphere, and are less relevant to the timescales of tropical convection. The value of tau for Titan is hundreds of times greater than for Earth, because the surface heat flux is much less and the latent heat content is about the same as on Earth. The polar cloud outbreaks at intervals of months around Titan's southern summer solstice are then a puzzle, as are the short-term variations of mid-latitude clouds observed after the solstice. One possibility (Roe et al., 2005, submitted; Schaller et al., 2005, submitted) is that the clouds originate from eruptions at the surface, which then control the variability of the atmosphere.
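The two equivalent estimates of tau quoted above can be checked directly; the latent heat of vaporization L = 2.5e6 J/kg is a standard value assumed here, not stated in the abstract:

```python
# Recompute the re-humidification time scale tau for Earth from the two
# equivalent expressions in the text: tau = M/E and tau = M*L/F.
RHO_WATER = 1000.0                       # kg/m^3
M = 50.0                                 # kg/m^2, column condensate mass
E = 1.5 * RHO_WATER / (365.25 * 86400)   # evaporation rate, kg/m^2/s (1.5 m/yr)
L = 2.5e6                                # J/kg, latent heat of vaporization (assumed)
F = 125.0                                # W/m^2, surface heat flux

tau_me = M / E / 86400.0       # tau = M/E, in days
tau_mlf = M * L / F / 86400.0  # tau = M*L/F, in days
```

Both routes land near the quoted 12 days, confirming that the two expressions are consistent for Earth's parameters.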
Development of a statistical sampling method for uncertainty analysis with SCALE
Williams, M.; Wiarda, D.; Smith, H.; Jessee, M. A.; Rearden, B. T. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, TN 37831-6354 (United States)]; Zwermann, W.; Klein, M.; Pautz, A.; Krzykacz-Hausmann, B.; Gallner, L. [Gesellschaft fuer Anlagen- und Reaktorsicherheit GRS, Forschungszentrum, Boltzmannstrasse 14, 85748 Garching (Germany)]
2012-07-01
A new statistical sampling sequence called Sampler has been developed for the SCALE code system. Random values for the input multigroup cross sections are determined by using the XSUSA program to sample uncertainty data provided in the SCALE covariance library. Using these samples, Sampler computes perturbed self-shielded cross sections and propagates the perturbed nuclear data through any specified SCALE analysis sequence, including those for criticality safety, lattice physics with depletion, and shielding calculations. Statistical analysis of the output distributions provides uncertainties and correlations in the desired responses. (authors)
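Sampler and XSUSA themselves are not reproduced here, but the underlying idea of statistical sampling against a covariance library can be sketched with a toy two-variable example; all numbers below are hypothetical, and the response function is a stand-in, not a SCALE sequence:

```python
import math
import random
import statistics

random.seed(0)

# Toy 2x2 relative covariance for two "cross sections": 3% and 5% standard
# deviations with correlation 0.4 (hypothetical values, not SCALE library data).
s1, s2, rho = 0.03, 0.05, 0.4
# Hand-computed Cholesky factor of [[s1^2, rho*s1*s2], [rho*s1*s2, s2^2]].
l11 = s1
l21 = rho * s2
l22 = s2 * math.sqrt(1.0 - rho ** 2)

def sample_response():
    """Draw one correlated perturbation and propagate it through a toy model."""
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    x1 = 1.0 + l11 * z1              # perturbed cross section 1 (nominal 1.0)
    x2 = 1.0 + l21 * z1 + l22 * z2   # perturbed cross section 2 (nominal 1.0)
    return x1 / (x1 + 0.5 * x2)      # stand-in for a k-eff-like response

runs = [sample_response() for _ in range(2000)]
mean = statistics.mean(runs)
std = statistics.stdev(runs)         # propagated response uncertainty
```

Statistical analysis of the output distribution (here just mean and standard deviation) is exactly the step Sampler performs on the responses of the perturbed SCALE runs.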
Manufacturing Cost Analysis for YSZ-Based FlexCells at Pilot and Full Scale Production Scales
Scott Swartz; Lora Thrun; Robin Kimbrell; Kellie Chenault
2011-05-01
Significant reductions in cell costs must be achieved in order to realize the full commercial potential of megawatt-scale SOFC power systems. The FlexCell designed by NexTech Materials is a scalable SOFC technology that offers particular advantages over competitive technologies. In this updated topical report, NexTech analyzes its FlexCell design and fabrication process to establish manufacturing costs at both pilot-scale (10 MW/year) and full-scale (250 MW/year) production levels and benchmarks this against estimated anode-supported cell costs at the 250 MW scale. This analysis will show that even with conservative assumptions for yield, materials usage, and cell power density, a cost of $35 per kilowatt can be achieved at high volume. Through advancements in cell size and membrane thickness, NexTech has identified paths for achieving cell manufacturing costs as low as $27 per kilowatt for its FlexCell technology. Also in this report, NexTech analyzes the impact of raw material costs on cell cost, showing the significant increases that result if target raw material costs cannot be achieved at this volume.
NASA Astrophysics Data System (ADS)
Suteanu, Cristian
2015-04-01
Many natural objects and processes have been shown to exhibit scale symmetry, i.e. they are characterized by invariance under changes of scale; it is also well known that for real-world features, scaling holds only over limited scale intervals. At the same time, many natural patterns exhibit other symmetry properties. This paper presents an approach to natural patterns based on the coupling of scale symmetry with three other forms of symmetry: translation, reflection, and rotation. The first is assessed using isopersistence diagrams based on multiscale time series analysis (detrended fluctuation analysis and Haar wavelet analysis); the second evaluates time series temporal irreversibility as a function of scale; the third considers the impact of rotation on scaling properties found in data from vector fields. The paper shows that characterizing the way, and the extent to which, these three forms of symmetry are coupled to scale symmetry can effectively support the evaluation of strongly variable natural patterns. The methodology is illustrated with a wide range of application examples, including air temperature, wind speed and direction, river discharge, and earthquakes.
Small-Scale Smart Grid Construction and Analysis
NASA Astrophysics Data System (ADS)
Surface, Nicholas James
The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. The objectives and most useful concepts have been investigated extensively in economic, environmental and engineering research by applying statistical knowledge and established theories to develop simulations, without constructing physical models. In this study, a small-scale version (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show that data acquisition was three times more expensive than the grid itself, largely because 70% of data acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure-sine-wave power in future SSSG iterations. Findings can be projected to the full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations and pumped hydroelectric storage could also be researched on future iterations of the SSSG.
Crack detection in beams in noisy conditions using scale fractal dimension analysis of mode shapes
NASA Astrophysics Data System (ADS)
Bai, R. B.; Ostachowicz, W.; Cao, M. S.; Su, Z.
2014-06-01
Fractal dimension analysis of mode shapes has been actively studied in the area of structural damage detection. The most prominent features of fractal dimension analysis are high sensitivity to damage and instant determination of damage location. However, an intrinsic deficiency is its susceptibility to measurement noise, which is likely to obscure the features of damage. To address this deficiency, this study develops a novel damage detection method, scale fractal dimension (SFD) analysis of mode shapes, based on combining the complementary merits of a stationary wavelet transform (SWT) and Katz's fractal dimension in damage characterization. With this method, the SWT is used to decompose a mode shape into a set of scale mode shapes at successive scale levels, with damage information and noise separated into distinct scale mode shapes because of their dissimilar scale characteristics; Katz's fractal dimension is then computed for each scale mode shape, under the noise-adaptive conditions provided by the SWT, to identify damage. Proof of concept for the SFD analysis is performed on cracked beams simulated by the spectral finite element method; the reliability of the method is assessed using Monte Carlo simulation to mimic the operational variability in realistic damage diagnosis. The proposed method is further experimentally validated on a cracked aluminum beam with mode shapes acquired by a scanning laser vibrometer. The results show that the SFD analysis of mode shapes provides a new strategy for damage identification in noisy conditions.
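A minimal sketch of the Katz fractal dimension component of the method, assuming unit sampling intervals; the SWT decomposition step is omitted, so this is only the final measurement stage, not the full SFD pipeline:

```python
import math

def katz_fd(y):
    """Katz's fractal dimension of a 1-D curve sampled at unit intervals.
    L: total curve length; d: maximum distance from the first point;
    n: number of steps. D = log10(n) / (log10(n) + log10(d/L))."""
    pts = list(enumerate(y))
    L = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    d = max(math.dist(pts[0], p) for p in pts[1:])
    n = len(pts) - 1
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

# A straight segment has dimension exactly 1; an irregular (e.g. zigzag)
# curve drifts above 1, which is what flags local damage in a mode shape.
line = [0.5 * i for i in range(100)]
zigzag = [i % 2 for i in range(100)]
```

Running the dimension over short windows of a (scale) mode shape, rather than the whole curve, is what localizes the damage peak in methods of this family.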
GECKO: a complete large-scale gene expression analysis platform
Joachim Theilhaber; Anatoly Ulyanov; Anish Malanthara; Jack Cole; Dapeng Xu; Robert Nahf; Michael Heuer; Christoph Brockel; Steven Bushnell
2004-01-01
Background: Gecko (Gene Expression: Computation and Knowledge Organization) is a complete, high-capacity centralized gene expression analysis system, developed in response to the needs of a distributed user community. Results: Based on a client-server architecture, with a centralized repository of typically many tens of thousands of Affymetrix scans, Gecko includes automatic processing pipelines for uploading data from remote sites, a data
Scaling analysis of biogeochemical parameters in coastal waters
Sylvie Zongo; François Schmitt
2010-01-01
Monitoring data are very useful for rapidly providing quality-controlled measurements of many environmental aquatic variables, and thus for understanding the spatio-temporal structure that governs the dynamics. We consider here long biogeochemical time series from automatic continuous monitoring. These biogeochemical time series come from the Eastern English Channel: coastal waters, estuarine waters and river waters. In the first analysis, we consider
Meta-Analysis of Scale Reliability Using Latent Variable Modeling
ERIC Educational Resources Information Center
Raykov, Tenko; Marcoulides, George A.
2013-01-01
A latent variable modeling approach is outlined that can be used for meta-analysis of reliability coefficients of multicomponent measuring instruments. Important limitations of efforts to combine composite reliability findings across multiple studies are initially pointed out. A reliability synthesis procedure is discussed that is based on…
TIME-MASS SCALING IN SOIL TEXTURE ANALYSIS
Technology Transfer Automated Retrieval System (TEKTRAN)
Data on texture are used in the majority of inferences about soil functioning and use. The model of fractal fragmentation has attracted attention as a possible source of minimum set of parameters to describe observed particle size distributions. Popular techniques of textural analysis employ the rel...
Landscape structure analysis of Kansas at three scales
Jerry A. Griffith; Edward A. Martinko; Kevin P. Price
2000-01-01
Recent research in landscape ecology has sought to define the underlying structure of landscape pattern as quantified by landscape pattern metrics. One method used by researchers to address this question involves statistical data reduction techniques. In this study, principal components analysis (PCA) was performed on 27 landscape pattern metrics derived from a Kansas land cover data base at three spatial
Paris-Sud XI, Université de
MULTI-TIME SCALE ANALYSIS OF SUGARCANE WITHIN-FIELD VARIABILITY: IMPROVED CROP DIAGNOSIS USING … condition. To test this hypothesis, we analyzed the within-field variability of a sugarcane crop at seasonal … and soil depth) and cropping (harvest date) factors. The analysis was based on a sugarcane field vegetation
EARs in the Wild: Large-Scale Analysis of Execution After Redirect Vulnerabilities
Kruegel, Christopher
EARs in the Wild: Large-Scale Analysis of Execution After Redirect Vulnerabilities. Pierre Payet … a research paper that incorrectly modeled the redirect semantics, causing their static analysis to miss EAR vulnerabilities. To understand the breadth and scope of EARs in the real world, we performed a large
Factorial Structure and Invariance Analysis of the Sense of Belonging Scales
ERIC Educational Resources Information Center
Tovar, Esau; Simon, Merril A.
2010-01-01
Using a diverse sample of university students, this article describes outcomes of a confirmatory factor analysis and a group invariance analysis conducted to validate the factorial structure of the Sense of Belonging Scales. Accordingly, a modified factor structure departing significantly from that of the original authors is proposed. (Contains 5…
R. Salvador; J. Piñol; S. Tarantola; E. Pla
2001-01-01
A Global Sensitivity Analysis (GSA) and an analysis of scale effects have been carried out over the equations given by Rothermel (1972), with some additional modifications. Data mainly derived from Mediterranean shrublands and a spatially close meteorological station have been used to derive probability distribution functions for the variables involved. In spite of the abundant non-linearities contained in the equations
Scaling Analysis and Evolution Equation of the North Atlantic Oscillation Index Fluctuations
C. Collette; M. Ausloos
2004-01-01
The North Atlantic Oscillation (NAO) monthly index is studied from 1825 to 2002 in order to identify the scaling ranges of its fluctuations over different delay times and to find out whether or not it can be regarded as a Markov process. A Hurst rescaled range analysis and a detrended fluctuation analysis both indicate the existence of weakly persistent long
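The rescaled-range (Hurst) analysis mentioned above can be sketched as follows; the input here is seeded white noise rather than the NAO index, so the estimate should land near H = 0.5 (uncorrelated), with H > 0.5 indicating persistence:

```python
import math
import random

def hurst_rs(x, window_sizes=(8, 16, 32, 64, 128)):
    """Estimate the Hurst exponent of series x by rescaled-range analysis:
    the slope of log(R/S) against log(window size)."""
    logs, logrs = [], []
    for w in window_sizes:
        rs_vals = []
        for start in range(0, len(x) - w + 1, w):
            seg = x[start:start + w]
            m = sum(seg) / w
            dev = [v - m for v in seg]
            cum, s = [], 0.0
            for v in dev:                      # cumulative departures from mean
                s += v
                cum.append(s)
            r = max(cum) - min(cum)            # range of cumulative departures
            sd = math.sqrt(sum(v * v for v in dev) / w)
            if sd > 0:
                rs_vals.append(r / sd)
        logs.append(math.log(w))
        logrs.append(math.log(sum(rs_vals) / len(rs_vals)))
    # Least-squares slope of log(R/S) vs log(w) is the Hurst estimate.
    n = len(logs)
    mx, my = sum(logs) / n, sum(logrs) / n
    return sum((a - mx) * (b - my) for a, b in zip(logs, logrs)) / \
        sum((a - mx) ** 2 for a in logs)

random.seed(1)
white = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_rs(white)
```

Small-window R/S estimates carry a known upward bias, which is one reason studies such as this one cross-check against detrended fluctuation analysis.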
Analysis of the load density of small-scale general selenographic maps of the moon
K. B. Shingareva; V. P. Shashkina
1975-01-01
Comparative analysis of the load density of small-scale lunar maps is carried out. Crater counts were made over a normalized area. Analysis has shown that there exist only local differences in load density between the Lunar Astronautical Chart (U.S.) and the Soviet map of the equatorial zone of the visible hemisphere of the moon. The load density of the maps
Principal Component Analysis for Large Scale Problems with Lots of Missing Values
Tapani Raiko; Alexander Ilin; Juha Karhunen
2007-01-01
Principal component analysis (PCA) is a well-known classical data analysis technique. There are a number of algorithms for solving the problem, some scaling better than others to problems with high dimensionality. They also differ in their ability to handle missing values in the data. We study a case where the data are high-dimensional and a majority of the
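A minimal sketch of one common approach to PCA with missing values, iterative low-rank imputation via SVD; this illustrates the problem setting, not the authors' specific algorithm:

```python
import numpy as np

def pca_impute(x, rank=1, iters=50):
    """Iteratively impute missing entries (NaN) of x with a rank-`rank` SVD
    reconstruction: fill with column means, then alternate between fitting a
    low-rank model and refilling the missing cells from it."""
    miss = np.isnan(x)
    filled = np.where(miss, np.nanmean(x, axis=0, keepdims=True), x)
    for _ in range(iters):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled = np.where(miss, approx, x)   # observed entries stay fixed
    return filled

# A rank-1 matrix with one deleted entry is recovered almost exactly.
u_vec = np.array([1.0, 2.0, 3.0, 4.0])
v_vec = np.array([2.0, 1.0, 0.5])
m = np.outer(u_vec, v_vec)
m_missing = m.copy()
m_missing[2, 1] = np.nan        # true value is 3.0
rec = pca_impute(m_missing)
```

For data where a majority of values are missing, as in the paper's setting, more careful regularized or probabilistic variants are needed, but the alternating structure is the same.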
NASA Astrophysics Data System (ADS)
Lim, Joohyun; Kim, Youngouk; Paik, Joonki
In this paper, we present a comparative analysis of scale-invariant feature extraction using different wavelet bases. The main advantage of the wavelet transform is multi-resolution analysis. Furthermore, wavelets enable localization in both the space and frequency domains, as well as high-frequency salient feature detection. Wavelet transforms can use various basis functions. This research aims at a comparative analysis of Daubechies, Haar and Gabor wavelets for scale-invariant feature extraction. Experimental results show that Gabor wavelets outperform Daubechies and Haar wavelets in terms of both objective and subjective measures.
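A single level of the Haar transform, the simplest of the bases compared above, can be sketched as follows; the orthonormal form preserves signal energy between the approximation and detail bands:

```python
import math

def haar_dwt(signal):
    """One level of the orthonormal Haar transform for an even-length signal:
    pairwise sums (approximation) and differences (detail), scaled by 1/sqrt(2)."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal), 2)]
    return approx, detail

sig = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_dwt(sig)   # recursing on `a` gives the multi-resolution pyramid
```

Recursively transforming the approximation band yields the multi-resolution pyramid; Daubechies and Gabor bases replace the two-tap pair above with longer, smoother filters.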
Cross-range scaling for ISAR via optical flow analysis
Chun-mao Yeh; Jia Xu; Ying-Ning Peng; Xiu-Tan Wang; Jian Yang; Xiang-Gen Xia
2012-01-01
For high resolution imaging applications, ISAR can be viewed as a kind of camera which linearly maps the scattering centers from the physical plane to the RD image plane. Thus, the concept and methods of optical flow in computer vision may be introduced for the rotation estimation in ISAR. Furthermore, three kinds of optical flow analysis-based estimators, namely triangle-based, line-based,
NASA Astrophysics Data System (ADS)
Petroy, S. B.; Leisso, N.; Hinckley, E. S.; Meier, C. L.; Barnett, D.
2013-12-01
The National Ecological Observatory Network (NEON) is a continental-scale ecological observation platform designed to collect and disseminate data that contribute to understanding and forecasting the impacts of climate change, land-use change, and invasive species on ecology. NEON will collect in-situ and airborne data over 60 sites across the US, including Alaska, Hawaii, and Puerto Rico. The NEON vegetation sampling protocol currently directs the collection of foliar samples from dominant species at each site; field spectra are collected from the samples, which are further analyzed for bulk and isotopic carbon and nitrogen content. By employing consistent sampling and analysis strategies, NEON will provide a unique, rich, and varied data collection to support studies of foliar traits within species at specific sites and across/between regions. When combined with the NEON airborne hyperspectral and LiDAR imagery, these data will be key to supporting validation efforts for existing algorithms that derive canopy-scale nitrogen, carbon, and other foliar traits, as well as supporting development of data products that are informed by, and include, the ground data specifically, thereby potentially reducing uncertainties in the observational data products. Presented here are prototype datasets collected at NEON Domain 1 (Harvard Forest, summer 2012) and Domain 17 (San Joaquin Experimental Range, summer 2013). Lessons learned from the field campaigns are discussed, along with preliminary results from the Harvard Forest campaign, which combine the field and laboratory data in support of current algorithm validation efforts. Extension of these protocols to future NEON Domain characterization activities is also presented.
Physical Analysis and Scaling of a Jet and Vortex Actuator
NASA Technical Reports Server (NTRS)
Lachowicz, Jason T.; Yao, Chung-Sheng; Joslin, Ronald D.
2004-01-01
Our previous studies have shown that the Jet and Vortex Actuator generates free-jet, wall-jet, and near-wall vortex flow fields. That is, the actuator can be operated in different modes simply by varying the driving frequency and/or amplitude. For this study, variations are made in the actuator plate and wide-slot widths and in sine/asymmetrical actuator plate input forcing (drivers) to further study the actuator-induced flow fields. Laser-sheet flow visualization, particle-image velocimetry, and laser velocimetry are used to measure and characterize the actuator-induced flow fields. Laser velocimetry measurements indicate that the vortex strength increases with the driver repetition rate for a fixed actuator geometry (wide-slot and plate width). For a given driver repetition rate, the vortex strength increases as the plate width decreases, provided the wide-slot to plate-width ratio is fixed. Using an asymmetric plate driver, a stronger vortex is generated for the same actuator geometry and a given driver repetition rate. The nondimensional scaling provides the approximate ranges for operating the actuator in the free-jet, wall-jet, or vortex flow regimes. Finally, phase-locked velocity measurements from particle-image velocimetry indicate that the vortex structure is stationary, confirming previous computations. Both the computations and the particle-image velocimetry measurements show (as expected) unsteadiness near the wide-slot opening, which is indicative of mass ejection from the actuator.
Large-scale complex physical modeling and precision analysis
NASA Astrophysics Data System (ADS)
Wu, Man-Sheng; Di, Bang-Rang; Wei, Jian-Xin; Liang, Xiang-Hao; Zhou, Yi; Liu, Yi-Mou; Kong, Zhao-Ju
2014-06-01
Large-scale 3D physical models of complex structures can be used to simulate hydrocarbon exploration areas. The high-fidelity simulation of actual structures poses challenges to model building and quality control. Such models can be used to collect wide-azimuth, multi-azimuth, and full-azimuth seismic data that can be used to verify various 3D processing and interpretation methods. Faced with nonideal imaging problems owing to the extensive complex surface conditions and subsurface structures in the oil-rich foreland basins of western China, we designed and built the KS physical model based on the complex subsurface structure. This is the largest and most complex 3D physical model built to date. The physical modeling technology advancements mainly involve 1) the model design method, 2) the model casting process, and 3) data acquisition. A 3D velocity model of the physical model was obtained for the first time, and the model building precision was quantitatively analyzed. The absolute error was less than 3 mm, which satisfies the experimental requirements. The 3D velocity model obtained from 3D measurements of the model layers is the basis for testing various imaging methods. Furthermore, the model is considered a standard in seismic physical modeling technology.
Adapting and Validating a Scale to Measure Sexual Stigma among Lesbian, Bisexual and Queer Women
Logie, Carmen H.; Earnshaw, Valerie
2015-01-01
Lesbian, bisexual and queer (LBQ) women experience pervasive sexual stigma that harms wellbeing. Stigma is a multi-dimensional construct and includes perceived stigma, awareness of negative attitudes towards one’s group, and enacted stigma, overt experiences of discrimination. Despite its complexity, sexual stigma research has generally explored singular forms of sexual stigma among LBQ women. The study objective was to develop a scale to assess perceived and enacted sexual stigma among LBQ women. We adapted a sexual stigma scale for use with LBQ women. The validation process involved 3 phases. First, we held a focus group where we engaged a purposively selected group of key informants in cognitive interviewing techniques to modify the survey items to enhance relevance to LBQ women. Second, we implemented an internet-based, cross-sectional survey with LBQ women (n=466) in Toronto, Canada. Third, we administered an internet-based survey at baseline and 6-week follow-up with LBQ women in Toronto (n=24) and Calgary (n=20). We conducted an exploratory factor analysis using principal components analysis and descriptive statistics to explore health and demographic correlates of the sexual stigma scale. Analyses yielded one scale with two factors: perceived and enacted sexual stigma. The total scale and subscales demonstrated adequate internal reliability (total scale alpha coefficient: 0.78; perceived sub-scale: 0.70; enacted sub-scale: 0.72), test-retest reliability, and construct validity. Perceived and enacted sexual stigma were associated with higher rates of depressive symptoms and lower self-esteem, social support, and self-rated health scores. Results suggest this sexual stigma scale adapted for LBQ women has good psychometric properties and addresses enacted and perceived stigma dimensions. The overwhelming majority of participants reported experiences of perceived sexual stigma. 
This underscores the importance of moving beyond a singular focus on discrimination to explore perceptions of social judgment, negative attitudes and social norms. PMID:25679391
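The reported internal-reliability coefficients are Cronbach's alpha values; the computation can be sketched as follows, using hypothetical item scores rather than the study's data:

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]   # per-respondent total score
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

# Hypothetical 4-item, 6-respondent example (not the LBQ survey data).
items = [
    [3, 4, 2, 5, 4, 3],
    [3, 5, 2, 4, 4, 2],
    [2, 4, 3, 5, 5, 3],
    [3, 4, 2, 4, 5, 3],
]
alpha = cronbach_alpha(items)
```

Values around 0.7 or higher, as reported for the total scale and both subscales, are conventionally read as adequate internal consistency.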
MRS Spring Meeting, April 26, 2000, San Francisco, CA Using Wafer-Scale Patterns for CMP Analysis
Boning, Duane S.
ABSTRACT: A new set of wafer-scale patterns has been designed for analysis and modeling of key … the planarization capability of a CMP process using simple measurements on wafer-scale patterns. We examine means
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Quattrochi, Dale A.; Luvall, Jeffrey C.
1997-01-01
Fractals embody important ideas of self-similarity, in which the spatial behavior or appearance of a system is largely scale-independent. Self-similarity is a property of curves or surfaces where each part is indistinguishable from the whole. The fractal dimension D of remote sensing data yields quantitative insight into the spatial complexity and information content contained within these data. Analyses of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). The forested scene behaves as one would expect: larger pixel sizes decrease the complexity of the image as individual clumps of trees are averaged into larger blocks. The increased complexity of the agricultural image with increasing pixel size results from the loss of homogeneous groups of pixels in the large fields to mixed pixels composed of varying combinations of NDVI values that correspond to roads and vegetation. The same process occurs in the urban image to some extent, but the lack of large, homogeneous areas in the high-resolution NDVI image means the initially high D value is maintained as pixel size increases. The slope of the fractal dimension-resolution relationship provides indications of how image classification or feature identification will be affected by changes in sensor resolution.
Scaling Analysis for the Direct Reactor Auxiliary Cooling System for AHTRs
Yoder Jr, Graydon L [ORNL]; Wilson, Dane F [ORNL]; Wang, X. [Ohio State University]; Lv, Q. [Ohio State University]; Sun, X. [Ohio State University]; Christensen, R. N. [Ohio State University]; Blue, T. E. [Ohio State University]; Subharwall, Piyush [Idaho National Laboratory (INL)]
2011-01-01
The Direct Reactor Auxiliary Cooling System (DRACS), shown in Fig. 1 [1], is a passive heat removal system proposed for the Advanced High-Temperature Reactor (AHTR). It features three coupled natural circulation/convection loops relying entirely on buoyancy as the driving force. A prototypic design of the DRACS employed in a 20-MWth AHTR has been discussed in our previous work [2]. The total height of the DRACS is usually more than 10 m, and the required heating power would be large (on the order of 200 kW), both of which make a full-scale experiment infeasible in our laboratory. This motivates us to perform a scaling analysis for the DRACS to obtain a scaled-down model. In this paper, the theory and methodology for such a scaling analysis are presented.
Lead pipe scale analysis using broad-beam argon ion milling to elucidate drinking water corrosion.
Nadagouda, Mallikarjuna N; White, Colin; Lytle, Darren
2011-04-01
Herein, we characterized lead pipe scale removed from a drinking water distribution system using argon ion beam etching and a variety of solids analysis approaches. Specifically, pipe scale cross sections and solids were analyzed using scanning electron microscopy, energy dispersive X-ray analysis, X-ray diffraction, and acid digestion followed by inductively coupled plasma mass spectrometry analyses. The pipe scale consisted of at least five layers that contained Pb(II) and Pb(IV) minerals, and magnesium, aluminum, manganese, iron, and silicon solids. The outer layer was enriched with crystalline and amorphous manganese and iron, giving it a dark orange to red color. The middle layers consisted of hydrocerussite and plattnerite, and the bottom layer consisted primarily of litharge. Over the litharge layer, hydrocerussite crystals were grown vertically away from the pipe wall, which included formations of plattnerite. Significant amounts of the trace contaminant vanadium, likely in the form of vanadinite, and copper accumulated in the scale as well. PMID:21281551
Multi-Scale Fractal Analysis of Image Texture and Pattern
NASA Technical Reports Server (NTRS)
Emerson, Charles W.; Lam, Nina Siu-Ngan; Quattrochi, Dale A.
1999-01-01
Analyses of the fractal dimension of Normalized Difference Vegetation Index (NDVI) images of homogeneous land covers near Huntsville, Alabama revealed that the fractal dimension of an image of an agricultural land cover indicates greater complexity as pixel size increases, a forested land cover gradually grows smoother, and an urban image remains roughly self-similar over the range of pixel sizes analyzed (10 to 80 meters). A similar analysis of Landsat Thematic Mapper images of the East Humboldt Range in Nevada taken four months apart shows a more complex relation between pixel size and fractal dimension. The major visible difference between the spring and late summer NDVI images is the absence of high-elevation snow cover in the summer image. This change significantly alters the relation between fractal dimension and pixel size. The slope of the fractal dimension-resolution relation provides indications of how image classification or feature identification will be affected by changes in sensor spatial resolution.
Automated Large-Scale Shoreline Variability Analysis From Video
NASA Astrophysics Data System (ADS)
Pearre, N. S.
2006-12-01
Land-based video has been used to quantify changes in nearshore conditions for over twenty years. By combining the ability to track rapid, short-term shoreline change and changes associated with longer-term or seasonal processes, video has proved to be a cost-effective and versatile tool for coastal science. Previous video-based studies of shoreline change have typically examined the position of the shoreline along a small number of cross-shore lines as a proxy for the continuous coast. The goal of this study is twofold: (1) to further develop automated shoreline-extraction algorithms for continuous shorelines, and (2) to track the evolution of a nourishment project at Rehoboth Beach, DE that was concluded in June 2005. Seven cameras are situated approximately 30 meters above mean sea level and 70 meters from the shoreline. Time-exposure and variance images are captured hourly during daylight and transferred to a local processing computer. After correcting for lens distortion and geo-rectifying to a shore-normal coordinate system, the images are merged to form a composite planform image of 6 km of coast. Automated extraction algorithms establish shoreline and breaker positions throughout a tidal cycle on a daily basis. Short- and long-term variability in the daily shoreline will be characterized using empirical orthogonal function (EOF) analysis. Periodic sediment-volume information will be extracted by incorporating the results of monthly ground-based LIDAR surveys and by correlating the hourly shorelines to the corresponding tide level under conditions with minimal wave activity. The Delaware coast in the area downdrift of the nourishment site is intermittently interrupted by short groins. An even/odd analysis of the shoreline response around these groins will be performed. The impact of groins on the sediment-volume transport along the coast during periods of accretive and erosive conditions will be discussed.
[This work is being supported by DNREC and the Henlopen Hotel
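The EOF analysis of daily shorelines described in the abstract above can be sketched with a plain SVD of the demeaned data matrix. This is a minimal illustration on synthetic data; the matrix shape and variable names (`shorelines`, `anomaly`) are assumptions, not values from the study:

```python
import numpy as np

# Hypothetical data: daily cross-shore shoreline positions (days x alongshore points).
rng = np.random.default_rng(0)
days, points = 200, 120
shorelines = rng.normal(50.0, 2.0, (days, points))

# EOF analysis: remove the time mean, then take the SVD of the anomaly matrix.
anomaly = shorelines - shorelines.mean(axis=0)
U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)

eofs = Vt                            # spatial patterns (modes x alongshore points)
pcs = U * s                          # temporal amplitudes (days x modes)
variance_frac = s**2 / np.sum(s**2)  # fraction of variance explained per mode

# Reconstruction from all modes recovers the anomaly field exactly.
assert np.allclose(pcs @ eofs, anomaly)
```

Short- and long-term variability would then be read off the leading `pcs` time series and their associated `eofs` patterns.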
Mayunga, Joseph S.
2010-07-14
Over the past decades, coastal areas in the United States have experienced exponential increases in economic losses due to flooding, hurricanes, and tropical storms. This in part is due to increasing concentrations of human ...
Wu, Hui-Chun [Los Alamos National Laboratory; Hegelich, B.M. [Los Alamos National Laboratory; Fernandez, J.C. [Los Alamos National Laboratory; Shah, R.C. [Los Alamos National Laboratory; Palaniyappan, S. [Los Alamos National Laboratory; Jung, D. [Los Alamos National Laboratory; Yin, L [Los Alamos National Laboratory; Albright, B.J. [Los Alamos National Laboratory; Bowers, K. [Guest Scientist of XCP-6; Huang, C. [Los Alamos National Laboratory; Kwan, T.J. [Los Alamos National Laboratory
2012-06-19
Two new experimental technologies enabled realization of the Break-out afterburner (BOA): the high-quality Trident laser and free-standing carbon nanometer-scale targets. VPIC is a powerful tool for fundamental research on relativistic laser-matter interaction. Predictions from VPIC have been validated, including the novel BOA and solitary ion acceleration mechanisms. VPIC is a fully explicit Particle-In-Cell (PIC) code: it models plasma as billions of macro-particles moving on a computational mesh. The VPIC particle advance (which typically dominates the computation) has been optimized extensively for many different supercomputers. Laser-driven ions bring promising applications closer to realization: ion-based fast ignition, active interrogation, and hadron therapy.
A multi-scale segmentation/object relationship modelling methodology for landscape analysis
C. Burnett; Thomas Blaschke
2003-01-01
Natural complexity can best be explored using spatial analysis tools based on concepts of landscape as process continuums that can be partially decomposed into objects or patches. We introduce a five-step methodology based on multi-scale segmentation and object relationship modelling. Hierarchical patch dynamics (HPD) is adopted as the theoretical framework to address issues of heterogeneity, scale, connectivity and quasi-equilibriums in
High-throughput generation, optimization and analysis of genome-scale metabolic models
Matthew DeJongh; Aaron A Best; Paul M Frybarger; Ben Linsay; Rick L Stevens; Christopher S Henry
2010-01-01
Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to
A Practical Guide to Genome-Scale Metabolic Models and Their Analysis
Filipe Santos; Joost Boele; Bas Teusink
2011-01-01
Genome-scale metabolic reconstructions and their analysis with constraint-based modeling techniques have gained enormous momentum. It is a natural next step after sequencing of a genome, as a technique that links top-down systems biology analyses at genome scale with bottom-up systems biology modeling scrutiny. This chapter is aimed at (systems) biologists who have an interest in, but no extensive knowledge of, applying
Multi-scale analysis of disturbance regimes in the northern Chihuahuan Desert
Stephen R. Yool
1998-01-01
Remote sensing facilitates cross-scale validation, enables analysis of processes and patterns in time and space, and is thus viable for the conduct of earth system science. Multi-scale analyses of natural vegetation patterns and processes in the northern Chihuahuan Desert show that natural vegetation is capable of recovering from short-term, high intensity disturbances such as an atomic bomb blast. In contrast,
Spatial scaling: Its analysis and effects on animal movements in semiarid landscape mosaics
Wiens, J.A.
1992-09-01
The research conducted under this agreement focused in general on the effects of environmental heterogeneity on movements of animals and materials in semiarid grassland landscapes, on the form of scale-dependency of ecological patterns and processes, and on approaches to extrapolating among spatial scales. The findings are summarized in a series of published and unpublished papers that are included as the main body of this report. We demonstrated the value of 'experimental model systems' employing observations and experiments conducted in small-scale microlandscapes to test concepts relating to flows of individuals and materials through complex, heterogeneous mosaics. We used fractal analysis extensively in this research, and showed how fractal measures can produce insights and lead to questions that do not emerge from more traditional scale-dependent measures. We developed new concepts and theory to deal with scale-dependency in ecological systems and with integrating individual movement patterns into considerations of population and ecosystem dynamics.
Kalkan, Erol; Chopra, Anil K.
2010-01-01
Earthquake engineering practice is increasingly using nonlinear response history analysis (RHA) to demonstrate performance of structures. This rigorous method of analysis requires selection and scaling of ground motions appropriate to design hazard levels. Presented herein is a modal-pushover-based scaling (MPS) method to scale ground motions for use in nonlinear RHA of buildings and bridges. In the MPS method, the ground motions are scaled to match (to a specified tolerance) a target value of the inelastic deformation of the first-'mode' inelastic single-degree-of-freedom (SDF) system whose properties are determined by first-'mode' pushover analysis. Appropriate for first-'mode'-dominated structures, this approach is extended to structures with significant contributions of higher modes by considering the elastic deformation of the second-'mode' SDF system in selecting a subset of the scaled ground motions. Based on results presented for two bridges, covering single- and multi-span 'ordinary standard' bridge types, and six buildings, covering low-, mid-, and tall building types in California, the accuracy and efficiency of the MPS procedure are established and its superiority over the ASCE/SEI 7-05 scaling procedure is demonstrated.
Persian adaptation of Foreign Language Reading Anxiety Scale: a psychometric analysis.
Baghaei, Purya; Hohensinn, Christine; Kubinger, Klaus D
2014-04-01
The validity and psychometric properties of a new Persian adaptation of the Foreign Language Reading Anxiety Scale were investigated. The scale was translated into Persian and administered to 160 undergraduate students (131 women, 29 men; M age = 23.4 yr., SD = 4.3). Rasch model analysis on the scale's original 20 items revealed that the data do not fit the partial credit model. Principal components analysis identified three factors: one related to feelings of anxiety about reading, the second reflected the reverse-worded items, and the third related to general ideas about reading in a foreign language. In a re-analysis, the 12 items that loaded on the first factor showed a good fit with the partial credit model. PMID:24897892
MULTI-SCALE MORPHOLOGICAL ANALYSIS OF SDSS DR5 SURVEY USING THE METRIC SPACE TECHNIQUE
Wu Yongfeng; Batuski, David J. [Department of Physics and Astronomy, University of Maine, Orono, ME 04469 (United States); Khalil, Andre, E-mail: yongfeng.wu@umit.maine.ed [Department of Mathematics and Statistics and Institute for Molecular Biophysics, University of Maine, Orono, ME 04469 (United States)
2009-12-20
Following the novel development and adaptation of the Metric Space Technique (MST), a multi-scale morphological analysis of the Sloan Digital Sky Survey (SDSS) Data Release 5 (DR5) was performed. The technique was adapted to perform a space-scale morphological analysis by filtering the galaxy point distributions with a smoothing Gaussian function, thus giving quantitative structural information on all size scales between 5 and 250 Mpc. The analysis was performed on a dozen slices of a volume of space containing many newly measured galaxies from the SDSS DR5 survey. Using the MST, observational data were compared to galaxy samples taken from N-body simulations with current best estimates of cosmological parameters and from random catalogs. By using the maximal ranking method among MST output functions, we also develop a way to quantify the overall similarity of the observed samples with the simulated samples.
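The space-scale filtering step described above (smoothing the galaxy point distribution with Gaussians of increasing width) can be sketched as follows. The grid, point counts, and the summary statistic are illustrative stand-ins, not the MST output functions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical galaxy point distribution binned onto a 2-D grid (counts per cell).
rng = np.random.default_rng(1)
grid = np.zeros((256, 256))
xy = rng.integers(0, 256, size=(5000, 2))
np.add.at(grid, (xy[:, 0], xy[:, 1]), 1.0)

# Space-scale analysis: filter the same field with Gaussians of increasing width,
# then measure a simple morphological statistic at each scale (here, the fraction
# of cells above the field mean; the MST uses richer output functions).
scales = [1, 2, 4, 8, 16]  # smoothing lengths in grid cells
stats = []
for sigma in scales:
    smoothed = gaussian_filter(grid, sigma)
    stats.append((smoothed > smoothed.mean()).mean())
```

Comparing such scale-indexed statistics between the observed and simulated catalogs is the essence of the multi-scale morphological comparison.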
Finite resolution effects in the analysis of the scaling behavior of rough surfaces
Buceta; Pastor; Rubio; de la Rubia FJ
2000-05-01
We investigate the influence of finite spatial resolution on the analysis of the scaling behavior of rough surfaces. We analyze this effect for two common measurement methods: the local width and the height-height correlation function. We show that while the correlation function is insensitive to finite-resolution effects for practical purposes, the local width presents correction terms to the scaling law, leading to an effective value of the local roughness exponent smaller than theoretically expected. We also show that a functional scaling relation can only be properly formulated in terms of the height-height correlation function. PMID:11031673
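The two measurement methods compared above can be sketched for a synthetic 1-D profile. A random walk is used because its roughness exponent is known (alpha = 0.5, so C(r) ~ r); all names and parameters here are illustrative:

```python
import numpy as np

# Hypothetical 1-D rough profile: a random walk (roughness exponent alpha = 0.5).
rng = np.random.default_rng(2)
h = np.cumsum(rng.normal(size=2**14))

def height_height_correlation(h, r):
    """C(r) = <[h(x + r) - h(x)]^2>, averaged over x."""
    d = h[r:] - h[:-r]
    return np.mean(d**2)

def local_width(h, ell):
    """w(ell): rms height fluctuation inside windows of size ell, window-averaged."""
    n = len(h) // ell
    windows = h[: n * ell].reshape(n, ell)
    return np.sqrt(np.mean(np.var(windows, axis=1)))

# For a random walk C(r) ~ r, so C(2r)/C(r) should be close to 2.
ratio = height_height_correlation(h, 256) / height_height_correlation(h, 128)
```

Downsampling `h` before measuring would mimic finite resolution; per the abstract, the effect shows up in `local_width` but not, for practical purposes, in the correlation function.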
An analysis of space scales for sea ice drift
Carrieres, T. [Ice Centre Environment Canada, Ottawa, Ontario (Canada)
1994-12-31
Sea ice presents a hazard to navigation off Canada's east coast from January to June. The Ice Centre Environment Canada (ICEC), which is part of the Atmospheric Environment Service, monitors ice conditions in order to assist safe and efficient operations through or around the ice. The ice program depends on an advanced data acquisition, analysis and forecasting effort. Support for the latter is provided by kinematic models as well as a fairly simple dynamic sea ice model. In order to improve ICEC's forecasting capabilities, the Department of Fisheries and Oceans (DFO) conducts ice modelling research and regular field experiments. The experiments provide a better understanding of the ice and also allow models to be validated and refined. The Bedford Institute of Oceanography (BIO, part of DFO) regularly deploys beacons on ice floes off the Labrador and Newfoundland coasts. These beacons provide environmental as well as location information through Service ARGOS. The accuracy and specifications of the sensors are documented in Prinsenberg, 1993. The beacon locations are used here to infer a relatively unbiased representation of sea ice drift.
Intrinsic multi-scale analysis: a multi-variate empirical mode decomposition framework
Looney, David; Hemakom, Apit; Mandic, Danilo P.
2015-01-01
A novel multi-scale approach for quantifying both inter- and intra-component dependence of a complex system is introduced. This is achieved using empirical mode decomposition (EMD), which, unlike conventional scale-estimation methods, obtains a set of scales reflecting the underlying oscillations at the intrinsic scale level. This enables the data-driven operation of several standard data-association measures (intrinsic correlation, intrinsic sample entropy (SE), intrinsic phase synchrony) and, at the same time, preserves the physical meaning of the analysis. The utility of multi-variate extensions of EMD is highlighted, both in terms of robust scale alignment between system components, a pre-requisite for inter-component measures, and in the estimation of feature relevance. We also show that the properties of EMD scales can be used to decouple amplitude and phase information, a necessary step for accurately quantifying signal dynamics through correlation and SE analyses that are otherwise not possible. Finally, the proposed multi-scale framework is applied to detect directionality, and higher-order features such as coupling and regularity, in both synthetic and biological systems. PMID:25568621
A new approach for modeling and analysis of molten salt reactors using SCALE
Powers, J. J.; Harrison, T. J.; Gehin, J. C. [Oak Ridge National Laboratory, PO Box 2008, Oak Ridge, TN 37831-6172 (United States)
2013-07-01
The Office of Fuel Cycle Technologies (FCT) of the DOE Office of Nuclear Energy is performing an evaluation and screening of potential fuel cycle options to provide information that can support future research and development decisions based on the more promising fuel cycle options. [1] A comprehensive set of fuel cycle options is sorted into evaluation groups based on physics and fuel cycle characteristics. Representative options for each group are then evaluated to provide the quantitative information needed to support the evaluation of criteria and metrics used for the study. Included in this set of representative options are Molten Salt Reactors (MSRs), the analysis of which requires several capabilities that are not adequately supported by the current version of SCALE or other neutronics depletion software packages (e.g., continuous online feed and removal of materials). A new analysis approach was developed for MSR analysis using SCALE by taking user-specified MSR parameters and performing a series of SCALE/TRITON calculations to determine the resulting equilibrium operating conditions. This paper provides a detailed description of the new analysis approach, including the modeling equations and radiation transport models used. Results for an MSR fuel cycle option of interest are also provided to demonstrate the application to a relevant problem. The current implementation is through a utility code that uses the two-dimensional (2D) TRITON depletion sequence in SCALE 6.1 but could be readily adapted to three-dimensional (3D) TRITON depletion sequences or other versions of SCALE. (authors)
Compte, Emilio J; Sepúlveda, Ana R; de Pellegrin, Yolanda; Blanco, Miriam
2015-06-01
Several studies have demonstrated that men express body dissatisfaction differently than women. Although specific instruments that address body dissatisfaction in men have been developed, only a few have been validated in Latin-American male populations. The aim of this study was to reassess the factor structure of the Spanish versions of the Drive for Muscularity Scale (DMS-S) and the Male Body Attitudes Scale (MBAS-S) in an Argentinian sample. A cross-sectional study was conducted among 423 male students to examine: the factorial structure (confirmatory factor analysis), the internal consistency reliability, and the concurrent, convergent and discriminant validity of both scales. Results replicated the two factor structures for the DMS-S and MBAS-S. Both scales showed excellent levels of internal consistency, and various measures of construct validity indicated that the DMS-S and MBAS-S were acceptable and valid instruments to assess body dissatisfaction in Argentinian males. PMID:25828841
Large-scale moisture flux analysis for the United States
NASA Astrophysics Data System (ADS)
Wang, Sheng-Hung
The seasonal and annual United States moisture budget (precipitation minus evaporation, P-E) is calculated using monthly NCEP/NCAR reanalysis data. Statistical methodologies, primarily Rotated Principal Component Analysis (RPCA), are used to evaluate the variability in the moisture budget. RPCA was performed on monthly moisture budget data, and each pattern identifies an individual region of the data domain exhibiting unique variability characteristics. Time series of the RPC patterns are correlated and compared to station precipitation records, and composites of atmospheric circulation fields are obtained during sets of years when extreme values of RPC scores occur. RPCA was performed on the annual mean values of total P-E to examine the inter-annual and inter-decadal variability of the moisture budget changes. The seasonal RPCA analyses of P-E fields revealed patterns with similar characteristics. For example, the first principal component was centered over the continental interior, indicating high seasonal P-E variation in that area, while subsequent patterns are located near coastal areas including the Gulf of Mexico. The RPC scores were correlated to atmospheric and oceanic teleconnection indices, and the links between the teleconnections and the P-E patterns were further evaluated. In winter, for example, the Pacific/North American teleconnection is best linked to continental interior moisture flux. The study indicates the possibility of detecting Great Plains low-level jet events in the NCEP/NCAR reanalysis by using spatial patterns of wind speed, locations of vertical maximum winds and the nocturnal characteristics of the low-level wind maximum. RPCA was performed on the monthly April-May-June P-E data in order to evaluate the low-level jet's role in U.S. moisture flux.
One RPC pattern in particular seems associated with low-level jet events in the central United States, having an association to drought and flood events in the Midwest and central United States.
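The RPCA step described above (PCA of the anomaly field followed by an orthogonal varimax rotation of the leading loadings) can be sketched as follows. The P-E anomaly matrix is synthetic and this is a generic textbook varimax, not the study's exact configuration:

```python
import numpy as np

def varimax(Phi, gamma=1.0, max_iter=100, tol=1e-8):
    """Varimax rotation of a (gridpoints x components) loadings matrix."""
    p, k = Phi.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        L = Phi @ R
        u, s, vt = np.linalg.svd(
            Phi.T @ (L**3 - (gamma / p) * L @ np.diag(np.sum(L**2, axis=0)))
        )
        R = u @ vt
        d_new = np.sum(s)
        if d_new < d * (1 + tol):  # converged
            break
        d = d_new
    return Phi @ R

# Hypothetical monthly P-E anomalies: (months x gridpoints), time-demeaned.
rng = np.random.default_rng(3)
pe = rng.normal(size=(480, 300))
pe -= pe.mean(axis=0)

# Unrotated PCA via SVD, then varimax rotation of the leading loadings.
U, s, Vt = np.linalg.svd(pe, full_matrices=False)
k = 4
loadings = Vt[:k].T * s[:k]   # gridpoints x k
rotated = varimax(loadings)   # RPC spatial patterns
```

Rotation redistributes variance among the retained components to produce regionally localized patterns, which is why each RPC pattern in the study picks out an individual region of the domain.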
Scaling CFL-Reachability-Based Points-To Analysis Using Context-Sensitive Must-Not-Alias Analysis
Guoqing Xu; Atanas Rountev; Manu Sridharan
2009-01-01
Pointer analyses derived from a Context-Free-Language (CFL) reachability formulation achieve very high precision, but they do not scale well to compute the points-to solution for an entire large program. Our goal is to increase significantly the scalability of the currently most precise points-to analysis for Java. This CFL-reachability analysis depends on determining whether two program variables may be aliases. We
Tescione, Lia; Lambropoulos, James; Paranandi, Madhava Ram; Makagiansar, Helena; Ryll, Thomas
2015-01-01
A bench scale cell culture model representative of manufacturing scale (2,000 L) was developed based on oxygen mass transfer principles, for a CHO-based process producing a recombinant human protein. Cell culture performance differences across scales are characterized most often by sub-optimal performance in manufacturing scale bioreactors. By contrast in this study, reduced growth rates were observed at bench scale during the initial model development. Bioreactor models based on power per unit volume (P/V), volumetric mass transfer coefficient (kLa), and oxygen transfer rate (OTR) were evaluated to address this scale performance difference. Lower viable cell densities observed for the P/V model were attributed to higher sparge rates and reduced oxygen mass transfer efficiency (kLa) of the small scale hole spargers. Increasing the sparger kLa by decreasing the pore size resulted in a further decrease in growth at bench scale. Due to sensitivity of the cell line to gas sparge rate and bubble size that was revealed by the P/V and kLa models, an OTR model based on oxygen enrichment and increased P/V was selected that generated endpoint sparge rates representative of 2,000 L scale. This final bench scale model generated similar growth rates as manufacturing. In order to take into account other routinely monitored process parameters besides growth, a multivariate statistical approach was applied to demonstrate validity of the small scale model. After the model was selected based on univariate and multivariate analysis, product quality was generated and verified to fall within the 95% confidence limit of the multivariate model. PMID:25042258
Norman, Matthew R [ORNL
2014-01-01
The novel ADER-DT time discretization is applied to two-dimensional transport in a quadrature-free, WENO- and FCT-limited, Finite-Volume context. Emphasis is placed on (1) the serial and parallel computational properties of ADER-DT and this framework and (2) the flexibility of ADER-DT and this framework in efficiently balancing accuracy with other constraints important to transport applications. This study demonstrates a range of choices for the user when approaching their specific application while maintaining good parallel properties. In this method, genuine multi-dimensionality, single-step and single-stage time stepping, strict positivity, and a flexible range of limiting are all achieved with only one parallel synchronization and data exchange per time step. In terms of parallel data transfers per simulated time interval, this improves upon multi-stage time stepping and post-hoc filtering techniques such as hyperdiffusion. This method is evaluated with standard transport test cases over a range of limiting options to demonstrate quantitatively and qualitatively what a user should expect when employing this method in their application.
Multi-scale analysis of the spatial variability of soil organic carbon
NASA Astrophysics Data System (ADS)
Stevens, François; Bogaert, Patrick; van Wesemael, Bas
2014-05-01
Information on soil properties and state is required for food security, global environmental management and climate change mitigation. Important efforts are therefore put into the collection of soil data of many types and at very different spatial scales. Besides, soil organic carbon dynamics models at regional or global level and integrated soil policies require predicting soil properties over extensive areas while keeping a resolution of a few meters. However, predicting soil properties at fine resolution over large areas is challenging, since soil properties are generally the result of a large number of soil processes that may act at very different spatial scales. Indeed, both the strength and the nature of the link between soil properties and environmental factors depend on the scale at which we look. Therefore, the characterization of the link between a soil property and a given controlling factor may be complicated by variability in the soil property resulting from additional processes acting at other spatial scales. We propose a method of geostatistical analysis to decompose the spatial information on a soil property into multiple scale components. The variogram of soil properties is modeled by a function which is the sum of multiple sub-models with different ranges. Each sub-model can be used separately to predict the soil property at a particular scale. The analysis was performed in the Belgian Loess Belt with the legacy dataset Aardewerk. The method allowed us to highlight relationships between soil properties at particular spatial scales that were hardly observable without spatial decomposition. In particular, the link between texture and organic carbon, or between topsoil and subsoil organic carbon, appeared more clearly at the coarsest scale. Besides allowing a better understanding of the controls on soil variables, the method provides a way to improve prediction of soil variables when different covariates are available at different scales.
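The nested variogram described in the abstract above (a sum of sub-models with different ranges, plus a nugget) can be sketched with spherical components. The sills and ranges below are made-up illustrations, not the fit to the Aardewerk dataset:

```python
import numpy as np

def spherical(h, sill, rg):
    """Spherical variogram sub-model with a given sill and range rg."""
    h = np.asarray(h, dtype=float)
    g = sill * (1.5 * h / rg - 0.5 * (h / rg) ** 3)
    return np.where(h < rg, g, sill)

def nested_variogram(h, nugget, components):
    """Sum of sub-models with different ranges, one per spatial scale."""
    return nugget + sum(spherical(h, sill, rg) for sill, rg in components)

# Hypothetical parameters: a short-range (field-scale) and a long-range
# (regional) component of soil organic carbon variability.
lags = np.linspace(0.0, 50.0, 101)  # lag distance, km
gamma = nested_variogram(lags, nugget=0.1,
                         components=[(0.4, 2.0), (0.6, 30.0)])
```

Kriging with only one of the fitted components is what allows the property to be predicted at that particular scale.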
Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario
2014-01-01
Background: Selecting the correct statistical test and data mining method depends highly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined using several medical examples. We present two ordinal-variable clustering examples, as a more challenging variable type in analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed by two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively. Their specificity was comparable. Conclusion: By using an appropriate clustering algorithm based on the measurement scale of the variables in the study, high performance is achieved. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
A substrate noise analysis methodology for large-scale mixed-signal ICs
Wen Kung Chu; Nishath Verghese; Heayn-Jun Chol; Kenji Shimazaki; Hiroyuki Tsujikawa; Shouzou Hirano; Shirou Doushoh; Makoto Nagata; Atsushi Iwata; Takafumi Ohmoto
2003-01-01
A substrate noise analysis methodology is described that simulates substrate noise waveforms at sensitive locations of large-scale mixed-signal ICs. Simulation results for a 7.3 mm×7.3 mm chip with 500 k devices, obtained in a few hours on an engineering server, show good correlation with silicon measurements as testing conditions are varied. An analysis of the substrate and package reveals the
Analysis of Large-Scale mRNA Expression Data Sets by Genetic Algorithms
Chia Huey Ooi; Patrick Tan
DNA microarray experiments typically produce large-scale data sets comprising thousands of mRNA expression values measured across multiple biological samples. A common problem in the analysis of this data is the 'curse of dimensionality', where the number of available samples is often insufficient for reliable analysis due to the large number of individual measurements made per sample. Genetic algorithms (GAs) are
SCALE TSUNAMI Analysis of Critical Experiments for Validation of 233U Systems
Mueller, Don [ORNL]; Rearden, Bradley T [ORNL]
2009-01-01
Oak Ridge National Laboratory (ORNL) staff used the SCALE TSUNAMI tools to provide a demonstration evaluation of critical experiments considered for use in validation of current and anticipated operations involving 233U at the Radiochemical Development Facility (RDF). This work was reported in ORNL/TM-2008/196, issued in January 2009. This paper presents the analysis of two representative safety analysis models provided by RDF staff.
Large-scale analysis of the yeast proteome by multidimensional protein identification technology
Michael P. Washburn; Dirk Wolters; John R. Yates III
2001-01-01
We describe a largely unbiased method for rapid and large-scale proteome analysis by multidimensional liquid chromatography, tandem mass spectrometry, and database searching by the SEQUEST algorithm, named multidimensional protein identification technology (MudPIT). MudPIT was applied to the proteome of the Saccharomyces cerevisiae strain BJ5460 grown to mid-log phase and yielded the largest proteome analysis to date. A total of 1,484
Multi-scale enveloping spectrogram for vibration analysis in bearing defect diagnosis
Ruqiang Yan; Robert X. Gao
2009-01-01
This paper presents a new signal processing algorithm, termed multi-scale enveloping spectrogram (MuSEnS), for vibration signal analysis in the condition monitoring and health diagnosis of rolling bearings. Compared to the conventional enveloping spectral analysis technique in which the bandwidth of the signal components of interest needs to be known a priori to obtain consistent results under varying machine operating conditions,
Large-scale nonlinear structural analysis simulation on CRAY parallel/vector supercomputers
Eugene Poole; John Bauer; Troy Stratton; Tom Weidner
1993-01-01
Two large-scale nonlinear structural analysis simulations of a nine degree model of the Space Shuttle Redesigned Solid Rocket Motor (RSRM) factory joint are described. The first analysis simulates a burst pressure test where a pressure load was incrementally applied from 0 to 1500 p.s.i. This simulation was used to assist in evaluating and defining an error criterion based on correlation
NASA Astrophysics Data System (ADS)
Jain, G.; Sharma, S.; Vyas, A.; Rajawat, A. S.
2014-11-01
This study attempts to measure and characterize urban sprawl along its multiple dimensions in the city of Jamnagar, India. The study utilized multi-date satellite images acquired by the CORONA, IRS 1D PAN & LISS-3, IRS P6 LISS-4 and Resourcesat-2 LISS-4 sensors. The extent of urban growth in the study area was mapped at 1:25,000 scale for the years 1965, 2000, 2005 and 2011. The growth of urban areas was further categorized into infill growth, expansion and leapfrog development. The city witnessed spatial growth of 1.60 % per annum during the period 2000-2011, whereas population growth during the same period was below 1.0 % per annum. New development in the city during the 2000-2005 period comprised 22 % infill development, 60 % extension of the peripheral urbanized areas, and 18 % leapfrogged development. During 2005-2011, however, the proportion of leapfrog development increased to 28 %, whereas, owing to the decreasing availability of developable area within the city, infill development declined to 9 %. Urban sprawl in Jamnagar was further characterised on the basis of five dimensions of sprawl, viz. population density, continuity, clustering, concentration and centrality, by integrating population data with sprawl for the years 2001 and 2011. The study characterised the growth of Jamnagar as low-density, low-concentration outward expansion.
The Genomic HyperBrowser: an analysis web server for genome-scale data.
Sandve, Geir K; Gundersen, Sveinung; Johansen, Morten; Glad, Ingrid K; Gunathasan, Krishanthi; Holden, Lars; Holden, Marit; Liestøl, Knut; Nygård, Ståle; Nygaard, Vegard; Paulsen, Jonas; Rydbeck, Halfdan; Trengereid, Kai; Clancy, Trevor; Drabløs, Finn; Ferkingstad, Egil; Kalas, Matús; Lien, Tonje; Rye, Morten B; Frigessi, Arnoldo; Hovig, Eivind
2013-07-01
The immense increase in availability of genomic scale datasets, such as those provided by the ENCODE and Roadmap Epigenomics projects, presents unprecedented opportunities for individual researchers to pose novel falsifiable biological questions. With this opportunity, however, researchers are faced with the challenge of how best to analyze and interpret their genome-scale datasets. A powerful way of representing genome-scale data is as feature-specific coordinates relative to reference genome assemblies, i.e. as genomic tracks. The Genomic HyperBrowser (http://hyperbrowser.uio.no) is an open-ended web server for the analysis of genomic track data. Through the provision of several highly customizable components for processing and statistical analysis of genomic tracks, the HyperBrowser opens up a range of genomic investigations, related to, e.g., gene regulation, disease association or epigenetic modifications of the genome. PMID:23632163
Multi-scale dynamical analysis (MSDA) of sea level records versus PDO, AMO, and NAO indexes
Scafetta, Nicola
2013-01-01
Herein I propose a multi-scale dynamical analysis to facilitate the physical interpretation of tide gauge records. The technique uses graphical diagrams. It is applied to six secular-long tide gauge records representative of the world oceans: Sydney, Pacific coast of Australia; Fremantle, Indian Ocean coast of Australia; New York City, Atlantic coast of USA; Honolulu, U.S. state of Hawaii; San Diego, U.S. state of California; and Venice, Mediterranean Sea, Italy. For comparison, an equivalent analysis is applied to the Pacific Decadal Oscillation (PDO) index and to the Atlantic Multidecadal Oscillation (AMO) index. Finally, a global reconstruction of sea level and a reconstruction of the North Atlantic Oscillation (NAO) index are analyzed and compared: both sequences cover about three centuries from 1700 to 2000. The proposed methodology quickly highlights oscillations and teleconnections among the records at the decadal and multidecadal scales. At the secular time scales tide gauge records present relatively...
Relaxation Mode Analysis and Scale-Dependent Energy Landscape Statistics in Liquids
NASA Astrophysics Data System (ADS)
Cai, Zhikun; Zhang, Yang
2015-03-01
In contrast to the prevailing focus on short-lived classical phonon modes in liquids, we propose a classical treatment of the relaxation modes in liquids under a framework analogous to the normal mode analysis in solids. Our relaxation mode analysis is built upon the experimentally measurable two-point density-density correlation function (e.g., using quasi-elastic and inelastic scattering experiments). We show that in the Laplace-inverted relaxation frequency z-domain, the eigen relaxation modes are readily decoupled. From here, important statistics of the scale-dependent activation energy in the energy landscape, as well as the scale-dependent relaxation time distribution function, can be obtained. We first demonstrate this approach in the case of supercooled liquids, where dynamic heterogeneity emerges in the landscape-influenced regime. We then show that, using this framework, we are able to extract the scale-dependent energy landscape statistics from neutron scattering measurements.
Numerical analysis of the scaling parameter of adaptive compensation for thermal blooming effects
NASA Astrophysics Data System (ADS)
Huang, Yinbo; Wang, Yingjian; Gong, Zhiben
2002-09-01
By using the time-dependent propagation computer code, adaptive compensation for thermal blooming effects, which are induced by collimated high-energy laser (HEL) beam propagation through the atmosphere, is numerically calculated and analyzed under different conditions. The numerical results show that, with the definite adaptive optics (AO) system, the scaling parameter ND/NFB is available to evaluate the effect of adaptive compensation efficiently. Moreover, we obtain the scaling relation between the scaling parameter ND/NFB and the far-field Strehl ratio, which can be described by Strehl = 1/[1 + A(ND/NFB) + B(ND/NFB)^C], where A, B and C are fitting parameters. We also obtain the threshold of adaptive phase compensation instability (PCI) through analysis of the scaling relation above. In addition, we discuss the difference between adaptive compensation and whole-beam compensation.
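The reported scaling relation lends itself to a straightforward nonlinear least-squares fit. The sketch below, in Python with SciPy, fits Strehl = 1/[1 + A(ND/NFB) + B(ND/NFB)^C] to synthetic data; the parameter values (0.5, 0.1, 2.0) are illustrative, not the paper's fitted values.

```python
import numpy as np
from scipy.optimize import curve_fit

def strehl(x, A, B, C):
    """Far-field Strehl ratio vs x = ND/NFB (the scaling relation from the abstract)."""
    return 1.0 / (1.0 + A * x + B * x**C)

# Synthetic data with illustrative parameter values.
x = np.linspace(0.1, 10.0, 50)
y = strehl(x, 0.5, 0.1, 2.0)

popt, _ = curve_fit(strehl, x, y, p0=[1.0, 1.0, 1.0], maxfev=10000)
print(popt)  # recovered (A, B, C), close to (0.5, 0.1, 2.0) for noiseless data
```

With noisy simulation output, the same call recovers A, B and C in the least-squares sense; only the initial guess `p0` and `maxfev` may need adjusting.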
NASA Astrophysics Data System (ADS)
Bramich, D. M.; Bachelet, E.; Alsubai, K. A.; Mislis, D.; Parley, N.
2015-05-01
Context. Understanding the source of systematic errors in photometry is essential for their calibration. Aims: We investigate how photometry performed on difference images can be influenced by errors in the photometric scale factor. Methods: We explore the equations for difference image analysis (DIA), and we derive an expression describing how errors in the difference flux, the photometric scale factor and the reference flux are propagated to the object photometry. Results: We find that the error in the photometric scale factor is important, and while a few studies have shown that it can be at a significant level, it is currently neglected by the vast majority of photometric surveys employing DIA. Conclusions: Minimising the error in the photometric scale factor, or compensating for it in a post-calibration model, is crucial for reducing the systematic errors in DIA photometry.
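The abstract does not reproduce the derived expression, but the conventional DIA flux model, f_tot = f_ref + f_diff/p, illustrates the propagation being described. The sketch below applies first-order error propagation assuming uncorrelated errors; all numbers are hypothetical.

```python
import numpy as np

def dia_flux_and_error(f_ref, f_diff, p, s_ref, s_diff, s_p):
    """Total object flux from difference image analysis, f_tot = f_ref + f_diff / p,
    with first-order propagation of uncorrelated errors in the three inputs."""
    f_tot = f_ref + f_diff / p
    # Partial derivatives: d/df_ref = 1, d/df_diff = 1/p, d/dp = -f_diff / p**2
    var = s_ref**2 + (s_diff / p)**2 + (f_diff * s_p / p**2)**2
    return f_tot, np.sqrt(var)

# Hypothetical numbers: a 1% error in the scale factor p adds a term that grows
# with the difference flux, which is why neglecting it biases bright variables most.
f, s = dia_flux_and_error(f_ref=1000.0, f_diff=200.0, p=1.0,
                          s_ref=5.0, s_diff=3.0, s_p=0.01)
print(f, s)
```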
Developing Data Analysis Infrastructure to Support Regional and Global Scale Synthesis
NASA Astrophysics Data System (ADS)
Agarwal, D.; van Ingen, C.; Humphrey, M.; Goode, M.; Li, J.; Ryu, Y.; Shoshani, A.; Faybishenko, B.; Romosan, A.; Hunt, J. R.; Moran, T.
2010-12-01
Scientists have been building rich networks of measurement sites that span a wide range of ecosystems and environmental conditions. The data span conditions from the satellite-measured down to the subsurface. These data are now available from individual researchers, national and international research networks, and agencies, and have the potential to allow researchers to discern large-scale patterns and disturbances in the combined data. The challenge in bringing these data into a common dataset is one of heterogeneity and scale. Informatics is critical to managing, curating, and making the data accessible in a form in which they can be used and interpreted accurately. We have been working with FLUXNET researchers, the ASCEM project, and various watershed groups. This talk will discuss our experiences and the next challenges we are facing in managing environmental data and in developing informatics infrastructure to enable researchers to easily access and use integrated regional- and global-scale data to address large-scale questions and publish analysis results.
Comparing ODEP and DEP forces for micro/nano scale manipulation: A theoretical analysis
Shu-E Wang; Ming-Lin Li; Zai-Li Dong; Yan-Li Qu; W. J. Li
2010-01-01
This paper presents a theoretical investigation to evaluate and compare the capabilities of using dielectrophoresis (DEP) and optical image-driven dielectrophoresis (ODEP) forces for micro/nano scale manipulation. A simplified model of particle velocity as a function of electrode size and particle radius was derived based on Stokes' law. Then, electric field analysis of two typical electrode configurations for DEP and ODEP
An atomic-scale analysis of catalytically-assisted chemical vapor deposition of carbon nanotubes
Grujicic, Mica
Growth of carbon nanotubes during transition-metal particles catalytically-assisted thermal decomposition of the transition-metal particles and onto the surface of carbon nanotubes, carbon atom attachment to the growing
Multiscale analysis of depth images from natural scenes: Scaling in the depth of the woods
Chapeau-Blondeau, François
We analyze an ensemble of images from outdoor natural scenes, consisting of pairs
A Computational Pipeline for Protein Structure Prediction and Analysis at Genome Scale
In this paper, we present an automated pipeline for protein structure prediction. The centerpiece of the pipeline is a threading-based protein structure
Maggie's Day: A Small-Scale Analysis of English Education Policy
ERIC Educational Resources Information Center
Thomson, Pat; Hall, Christine; Jones, Ken
2010-01-01
Policy sociologists typically research at large scale. This paper presents an example of a policy analysis which illuminates how policy is embedded in single incidents, lives and places. The case in point concerns the policy fetish for "closing the gap and raising the bar". This rhetoric is taken to mean improving the learning of all students,…
Explore the Usefulness of Person-Fit Analysis on Large-Scale Assessment
ERIC Educational Resources Information Center
Cui, Ying; Mousavi, Amin
2015-01-01
The current study applied the person-fit statistic, l[subscript z], to data from a Canadian provincial achievement test to explore the usefulness of conducting person-fit analysis on large-scale assessments. Item parameter estimates were compared before and after the misfitting student responses, as identified by l[subscript z], were removed. The…
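The l[z] statistic referenced above can be sketched for a dichotomous Rasch model as follows; the ability value, item difficulties, and response patterns here are illustrative, not taken from the provincial test data.

```python
import numpy as np

def lz_statistic(responses, theta, b):
    """Standardized log-likelihood person-fit statistic l_z for a dichotomous
    Rasch model. responses: 0/1 array; theta: person ability; b: item difficulties."""
    responses = np.asarray(responses, float)
    b = np.asarray(b, float)
    p = 1.0 / (1.0 + np.exp(-(theta - b)))            # Rasch response probabilities
    l0 = np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    mean = np.sum(p * np.log(p) + (1 - p) * np.log(1 - p))
    var = np.sum(p * (1 - p) * np.log(p / (1 - p))**2)
    return (l0 - mean) / np.sqrt(var)

# A Guttman-consistent pattern (easy items right, hard item wrong) yields lz > 0,
# while a reversed (misfitting) pattern yields a markedly negative lz.
print(lz_statistic([1, 1, 0], theta=0.0, b=[-1.0, 0.0, 1.0]))
print(lz_statistic([0, 0, 1], theta=0.0, b=[-1.0, 0.0, 1.0]))
```

Large negative lz values flag aberrant respondents, whose records can then be removed before re-estimating item parameters, as in the study.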
Analysis of Stability, Response and LQR Controller Design of a Small Scale Helicopter Dynamics
Dharmayanda, Hardian Reza; Lee, Young Jae; Sung, Sangkyung
2008-01-01
This paper presents how to use feedback controller with helicopter dynamics state space model. A simplified analysis is presented for controller design using LQR of small scale helicopters for axial and forward flights. Our approach is simple and gives the basic understanding about how to develop controller for solving the stability of linear helicopter flight dynamics.
Technical efficiency and economies of scale: A non-parametric analysis of REIT operating efficiency
Randy I. Anderson; Robert Fok; Thomas Springer; James Webb
2002-01-01
This study measures technical efficiency and economies of scale for real estate investment trusts (REITs) by employing data envelopment analysis (DEA), a linear-programming technique. Using data from the National Association of Real Estate Investment Trusts (NAREIT) for the years 1992–1996, we find that REITs are technically inefficient, and the inefficiencies are a result of both poor input utilization and failure
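Measuring technical efficiency with DEA reduces to one small linear program per firm. Below is a minimal sketch of the input-oriented CCR envelopment model using SciPy's linprog; the two-firm data set is hypothetical, not the NAREIT data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (m inputs x n DMUs), Y: (s outputs x n DMUs). Vars: [theta, lam_1..lam_n].
    min theta  s.t.  X @ lam <= theta * x_o,  Y @ lam >= y_o,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-X[:, o], X]            # sum_j lam_j x_ij - theta x_io <= 0
    b_in = np.zeros(m)
    A_out = np.c_[np.zeros(s), -Y]       # -sum_j lam_j y_rj <= -y_ro
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# Two hypothetical firms: B uses twice A's input for the same output, so B scores 0.5.
X = np.array([[2.0, 4.0]])   # one input
Y = np.array([[2.0, 2.0]])   # one output
print(dea_ccr_input(X, Y, 0), dea_ccr_input(X, Y, 1))
```

An efficiency score below 1 means the firm could produce its outputs with proportionally less input, which is the sense in which the study calls REITs "technically inefficient".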
Multivariate statistical analysis of vibration signals from industrial scale ball grinding
Yigen Zeng; K. S. E. Forssberg
1995-01-01
Multivariate statistical modelling based on vibration signal analysis was performed at a commercial-scale grinding operation. The source digital signals consist of three channels of mechanical vibrations obtained in the axial, horizontal and vertical directions. The feed rate, power draw, and pulp temperature were collected automatically by the control system while samples of the feed material and ground product of the ball mill
Analysis of Heating Systems and Scale of Natural Gas-Condensing Water Boilers in Northern Zones
Wu, Y.; Wang, S.; Pan, S.; Shi, Y.
2006-01-01
In this paper, various heating systems and scale of the natural gas-condensing water boiler in northern zones are discussed, based on a technical-economic analysis of the heating systems of natural gas condensing water boilers in northern zones...
Analysis of Thermoelectric Properties of Scaled Silicon Nanowires Using an Atomistic Tight-Binding
Low-dimensional materials provide the possibility of improved thermoelectric performance. As a result of suppressed phonon conduction, large improvements in the thermoelectric figure of merit, ZT
Information and Knowledge Assisted Analysis and Visualization of Large-Scale Data
Wang, Chaoli
While hardware enables faster rendering, achieving interactive visualization of large data must also rely on the indexing of large data for more efficient and effective visualization. Most of the solutions introduced
ERIC Educational Resources Information Center
Ng, Kok-Mun; Wang, Chuang; Kim, Do-Hong; Bodenhorn, Nancy
2010-01-01
The authors investigated the factor structure of the Schutte Self-Report Emotional Intelligence (SSREI) scale on international students. Via confirmatory factor analysis, the authors tested the fit of the models reported by Schutte et al. and five other studies to data from 640 international students in the United States. Results show that…
ERIC Educational Resources Information Center
Davison, Mark L.; Kim, Se-Kang; Ding, Shuai
A model for test scores called the profile analysis via multidimensional scaling (PAMS) model is described. The model reparameterizes the linear latent variable model in such a way that the latent variables can be interpreted in terms of profile patterns, rather than factors. The model can serve as the basis for exploratory multidimensional…
ERIC Educational Resources Information Center
Walters, Glenn D.; Diamond, Pamela M.; Magaletta, Philip R.; Geyer, Matthew D.; Duncan, Scott A.
2007-01-01
The Antisocial Features (ANT) scale of the Personality Assessment Inventory (PAI) was subjected to taxometric analysis in a group of 2,135 federal prison inmates. Scores on the three ANT subscales--Antisocial Behaviors (ANT-A), Egocentricity (ANT-E), and Stimulus Seeking (ANT-S)--served as indicators in this study and were evaluated using the…
The Columbia Impairment Scale: Factor Analysis Using a Community Mental Health Sample
ERIC Educational Resources Information Center
Singer, Jonathan B.; Eack, Shaun M.; Greeno, Catherine M.
2011-01-01
Objective: The objective of this study was to test the factor structure of the parent version of the Columbia Impairment Scale (CIS) in a sample of mothers who brought their children for community mental health (CMH) services (n = 280). Method: Confirmatory factor analysis (CFA) was used to test the fit of the hypothesized four-factor structure…
Multi-Scale Entropy Analysis of Different Spontaneous Motor Unit Discharge Patterns
Zhang, Xu; Chen, Xiang; Barkhaus, Paul E.; Zhou, Ping
2013-01-01
This study explores a novel application of multi-scale entropy (MSE) analysis for characterizing different patterns of spontaneous electromyogram (EMG) signals including sporadic, tonic and repetitive spontaneous motor unit discharges, and normal surface EMG baseline. Two algorithms for MSE analysis, namely the standard MSE and the intrinsic mode entropy (IMEn) (based on the recently developed multivariate empirical mode decomposition (MEMD) method), were applied to different patterns of spontaneous EMG. Significant differences were observed in multiple scales of the standard MSE and IMEn analyses (p < 0.001) for any two of the spontaneous EMG patterns, while such significance may not be observed from the single scale entropy analysis. Compared to the standard MSE, the IMEn analysis facilitates usage of a relatively low scale number to discern entropy difference among various patterns of spontaneous EMG signals. The findings from this study contribute to our understanding of the nonlinear dynamic properties of different spontaneous EMG patterns, which may be related to spinal motoneuron or motor unit health. PMID:24235117
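The standard MSE algorithm referenced above coarse-grains the signal and computes sample entropy at each scale. The following is a minimal sketch with conventional parameters (m = 2, r = 0.15 SD of the original series) on a synthetic signal, not EMG data.

```python
import numpy as np

def coarse_grain(x, tau):
    """Average non-overlapping windows of length tau (the MSE coarse-graining step)."""
    x = np.asarray(x, float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) = -ln(A/B), counting template matches of
    length m+1 (A) and m (B) under the Chebyshev distance, self-matches excluded."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.15 * x.std()
    def matches(k):
        t = np.array([x[i:i + k] for i in range(len(x) - k)])
        count = 0
        for i in range(len(t)):
            d = np.max(np.abs(t - t[i]), axis=1)
            count += np.sum(d <= r) - 1   # exclude the self-match
        return count
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Standard MSE: entropy of the coarse-grained series over scales 1..5, with r
# fixed from the original series as in the usual recipe.
rng = np.random.default_rng(0)
signal = rng.standard_normal(600)
r = 0.15 * signal.std()
mse = [sample_entropy(coarse_grain(signal, tau), r=r) for tau in range(1, 6)]
print(mse)
```

Signals are then compared across the whole entropy-vs-scale curve, which is what allows patterns that look alike at a single scale to be separated, as the study reports.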
ERIC Educational Resources Information Center
Donovan, Phillip Raymond
2009-01-01
This study focuses on the analysis of the behavior of unbound aggregates to offset wheel loads. Test data from full-scale aircraft gear loading conducted at the National Airport Pavement Test Facility (NAPTF) by the Federal Aviation Administration (FAA) are used to investigate the effects of wander (offset loads) on the deformation behavior of…
Large-scale flow phenomena in axial compressors: Modeling, analysis, and control with air injectors
Gregory Scott Hagen
2001-01-01
This thesis presents a large scale model of axial compressor flows that is detailed enough to describe the modal and spike stall inception processes, and is also amenable to dynamical systems analysis and control design. The research presented here is based on the model derived by Mezic, which shows that the flows are dominated by the competition between the blade
An ICF-CY-Based Content Analysis of the Vineland Adaptive Behavior Scales-II
ERIC Educational Resources Information Center
Gleason, Kara; Coster, Wendy
2012-01-01
Background: The International Classification of Functioning, Disability and Health (ICF), and its version for children and youth (ICF-CY), has been increasingly adopted as a system to describe function and disability. A content analysis of the Vineland Adaptive Behavior Scales-II (VABS-II) was conducted to examine congruence with the functioning…
Approximate Group Analysis and Multiple Time Scales Method for the Approximate Boussinesq Equation
Svetlana A. Kordyukova
2006-01-01
This paper is devoted to investigation of the approximate Boussinesq equation by methods of the approximate symmetry analysis of partial differential equations with a small parameter developed by Baikov, Gazizov and Ibragimov. We combine these methods with the method of multiple time scales to extend the domain of definition of approximate group invariant solutions of the approximate Boussinesq equation.
Multiple scales analysis of the Fokker–Planck equation for simple shear flow
G. Subramanian; J. F. Brady
2004-01-01
We extend the multiple time scales formalism originally introduced for the Fokker–Planck equation by Wycoff and Balazs (Physica A 146 (1987) 175) to the case of simple shear flow. The analysis is carried out for small values of the Stokes number, St, a dimensionless measure of the inertia of a Brownian particle. The probability density is expanded in terms of
Yao Shen; Parthasarathy Guturu; Thyagaraju Damarla; Bill P. Buckles; Kameswara Rao Namuduri
2009-01-01
This paper presents a novel approach to digital video stabilization that uses adaptive particle filter for global motion estimation. In this approach, dimensionality of the feature space is first reduced by the principal component analysis (PCA) method using the features obtained from a scale invariant feature transform (SIFT), and hence the resultant features may be termed the PCA-SIFT features.
A Polytomous Item Response Theory Analysis of Social Physique Anxiety Scale
ERIC Educational Resources Information Center
Fletcher, Richard B.; Crocker, Peter
2014-01-01
The present study investigated the social physique anxiety scale's factor structure and item properties using confirmatory factor analysis and item response theory. An additional aim was to identify differences in response patterns between groups (gender). A large sample of high school students aged 11-15 years (N = 1,529) consisting of n =…
Frequency and Scale Domain Analysis of Complex Quadrature Embolic Doppler Ultrasound Signals
Arslan, Tughrul
Asymptomatic circulating emboli can be detected by transcranial Doppler ultrasound using fast Fourier transform (FFT) processing, which is the standard processing used by Doppler ultrasound
Wavelet Analysis for a New Multiresolution Model for Large-Scale Textured Terrains
Universitat de les Illes Balears
Transmission of terrain data over slow networks is still worrying. Multiresolution models allow progressive transmission. We describe a new multiresolution model called Geometric-Textured Bitree (GTB) that enables progressive
LARGE SCALE DISASTER ANALYSIS AND MANAGEMENT: SYSTEM LEVEL STUDY ON AN INTEGRATED MODEL
The increasing intensity and scale of human activity across the globe, leading to severe depletion and deterioration of the Earth's natural resources, has meant that sustainability has emerged as a new paradigm of analysis and management. Sustainability, conceptually defined by the...
ERIC Educational Resources Information Center
Ackermann, Margot Elise; Morrow, Jennifer Ann
2008-01-01
The present study describes the development and initial validation of the Coping with the College Environment Scale (CWCES). Participants included 433 college students who took an online survey. Principal Components Analysis (PCA) revealed six coping strategies: planning and self-management, seeking support from institutional resources, escaping…
Texture Classification via Area-Scale Analysis of Raking Light Images
Klein, Andrew G.
Andrew G. Klein (Dept. of Engineering and Design, Western Washington University, Bellingham, WA 98225; andy.klein@wwu.edu); Anh H. Do and Christopher A. Brown (Dept. of Mechanical Engineering, Worcester Polytechnic Institute)
Profile analysis and the abbreviated Wechsler Adult Intelligence Scale: A multivariate approach
Ronald A. Goebel; Paul Satz
1975-01-01
The abbreviated WAIS by P. Satz and S. Mogel, while yielding high correlations with the standard WAIS scales, has been criticized for introducing sufficient subtest unreliability to prohibit profile interpretations. Using multivariate profile analytic techniques (R. B. Cattell's rp and hierarchical grouping analysis) and sampling from both brain-injured and psychiatric populations (ns = 118 and 173, respectively), these forms were
EXPERIMENTAL MODAL ANALYSIS OF A SCALED CAR BODY FOR METRO VEHICLES
C. Benatzky; C. Bilik; M. Kozek; A. Stribersky; J. Wassermann
This contribution deals with the investigation of an approximately 1\\/10-scaled model of a metro vehicle car body concerning the low structural eigenmodes. The model has been devel- oped by means of finite element calculations in such a way that the eigenfrequ encies of the model lie close together. An experimental modal analysis has been carried out to determine the structural
FISA: Fast Iterative Signature Algorithm for the analysis of large-scale gene expression data
Gupta, Neelima
The algorithm groups genes showing similar expression patterns into what are called transcription modules (TM). A TM is defined as a set of genes and a set of conditions under which these genes are most tightly co-expressed
Scaling Java Points-To Analysis using SPARK Ondrej Lhotak and Laurie Hendren
Lhotak, Ondrej
We introduce SPARK, a flexible framework for experimenting with points-to analyses for Java. SPARK supports equality- and subset-based analyses, variations in field sensitivity, respect
ADVANCES IN MODAL ANALYSIS USING A ROBUST AND MULTI-SCALE METHOD
Picard, Cécile; Frisson, Christian
Université Paris-Sud XI
This article presents a new approach to modal synthesis for rendering sounds of virtual objects, which permits the construction of plausible lower-resolution approximations of the modal model
Attitudes towards induced abortion in Peninsular Malaysia--a Guttman scale analysis.
Takeshita, Y J; Tan Boon Ann; Arshat, H
1986-12-01
The data for this study are from the 1974 Malaysian Fertility and Family Survey. The analysis focuses on the responses given by about 6000 women to the question whether they would approve or disapprove of induced abortion under each of the following conditions: poor health, contraceptive failure, unwanted pregnancy, lack of finances, rape, and unmarried status. There was substantial endorsement of induced abortion if the pregnancy is due to rape (71%) but a progressively diminishing amount of endorsement of all other conditions: unmarried (54.3%), health (52.2%), lack of finances (34.5%), contraceptive failure (19%), and unwanted pregnancy (12.3%). A Guttman scale analysis is applied to the responses, coded "1" if "yes" or "depends" and "0" if "no". A set of attitude items is said to form a Guttman scale if the items fall along a continuum in a cumulative manner such that an endorsement of any 1 item implies endorsement of all items falling below it. The application of Guttman scale analysis reveals that the 6 items do arrange themselves in this order. This study demonstrates that there was in the mid-1970s a fairly consistent pattern of attitudes with respect to induced abortion in Peninsular Malaysia. This study also demonstrates the usefulness of the Guttman scale analysis. A replication of this study with more recent data would be useful in documenting any changes in attitudes towards induced abortion in Malaysia. PMID:12314887
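The cumulative property that defines a Guttman scale can be checked with a coefficient of reproducibility. The sketch below uses a simple Goodenough-style error count; the 3-item data are illustrative, and the original study's exact error-counting convention may differ.

```python
import numpy as np

def coefficient_of_reproducibility(data):
    """Guttman scale check: order items by popularity; under a perfect scale a
    respondent endorsing s items endorses exactly the s most popular ones.
    CR = 1 - (scale errors) / (total responses); CR >= ~0.90 is the usual criterion."""
    data = np.asarray(data, int)
    order = np.argsort(-data.sum(axis=0))        # items, most- to least-endorsed
    ranked = data[:, order]
    totals = ranked.sum(axis=1)
    # Ideal Guttman pattern per respondent: 1s on their `totals` easiest items.
    ideal = (np.arange(ranked.shape[1]) < totals[:, None]).astype(int)
    errors = np.sum(ranked != ideal)
    return 1.0 - errors / data.size

# Perfect cumulative data (endorsing a harder item implies endorsing all easier ones).
perfect = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(coefficient_of_reproducibility(perfect))  # 1.0

# One deviant response pattern lowers CR below 1.
print(coefficient_of_reproducibility([[1, 0, 0], [0, 0, 1], [1, 1, 1]]))
```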
Reconciliation of Genome-Scale Metabolic Reconstructions for Comparative Systems Analysis
Matthew A. Oberhardt; Jacek Pucha?ka; Vítor A. P. Martins dos Santos; Jason A. Papin
2011-01-01
In the past decade, over 50 genome-scale metabolic reconstructions have been built for a variety of single- and multi- cellular organisms. These reconstructions have enabled a host of computational methods to be leveraged for systems-analysis of metabolism, leading to greater understanding of observed phenotypes. These methods have been sparsely applied to comparisons between multiple organisms, however, due mainly to the
COMPUTATIONAL ANALYSIS AND MODELING OF GENOME-SCALE AVIDITY DISTRIBUTION OF TRANSCRIPTION FACTOR
Wong, Limsoon
We analyze the genome-scale avidity distribution of transcription factor binding sites in the human genome. We propose a mixture probabilistic model and develop computational programs. Keywords: binding sites, human genome, mixture probabilistic model, Kolmogorov-Waring process, Monte Carlo
A multi-scale electro-thermo-mechanical analysis of single walled carbon nanotubes
Tarek Ragab
2010-01-01
Carbon nanotubes are formed by folding a graphene sheet. They have gained a lot of attention during the last decade due to their extraordinary mechanical, thermal and electrical properties. Molecular dynamics simulations have been used extensively for studying the mechanical properties of carbon nanotubes. In this thesis, a quantum mechanics and molecular dynamics level multi-scale modeling and analysis of
Large-Scale Analysis of Formant Frequency Estimation Variability in Conversational Telephone Speech*
We quantify how the telephone channel and regional dialect affect formant frequency estimation variability in conversational speech. Keywords: conversational speech, telephone channel, American English, forensic phonetics
Large scale estimation of arterial traffic and structural analysis of traffic patterns using probe vehicles
Estimating and analyzing traffic conditions on large arterial networks is an inherently difficult task. The first goal of this article is to demonstrate how arterial traffic conditions can be estimated using sparsely sampled GPS
Rasch model analysis of the Depression, Anxiety and Stress Scales (DASS)
Shea, Tracey L; Tennant, Alan; Pallant, Julie F
2009-01-01
Background There is a growing awareness of the need for easily administered, psychometrically sound screening tools to identify individuals with elevated levels of psychological distress. Although support has been found for the psychometric properties of the Depression, Anxiety and Stress Scales (DASS) using classical test theory approaches it has not been subjected to Rasch analysis. The aim of this study was to use Rasch analysis to assess the psychometric properties of the DASS-21 scales, using two different administration modes. Methods The DASS-21 was administered to 420 participants with half the sample responding to a web-based version and the other half completing a traditional pencil-and-paper version. Conformity of DASS-21 scales to a Rasch partial credit model was assessed using the RUMM2020 software. Results To achieve adequate model fit it was necessary to remove one item from each of the DASS-21 subscales. The reduced scales showed adequate internal consistency reliability, unidimensionality and freedom from differential item functioning for sex, age and mode of administration. Analysis of all DASS-21 items combined did not support its use as a measure of general psychological distress. A scale combining the anxiety and stress items showed satisfactory fit to the Rasch model after removal of three items. Conclusion The results provide support for the measurement properties, internal consistency reliability, and unidimensionality of three slightly modified DASS-21 scales, across two different administration methods. The further use of Rasch analysis on the DASS-21 in larger and broader samples is recommended to confirm the findings of the current study. PMID:19426512
Scale-4 Analysis of Pressurized Water Reactor Critical Configurations: Volume 3-Surry Unit 1 Cycle 2
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using selected critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations in this report is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and to provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k_eff) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of two reactor critical configurations for Surry Unit 1 Cycle 2. This unit and cycle were chosen for a previous analysis using a different methodology because detailed isotopics from multidimensional reactor calculations were available from the Virginia Power Company.
These data permitted a direct comparison of criticality calculations using the utility-calculated isotopics with those using the isotopics generated by the SCALE-4 SAS2H sequence. These reactor critical benchmarks have been reanalyzed using the methodology described above. The two benchmark critical calculations were the beginning-of-cycle (BOC) startup at hot zero-power (HZP) conditions and an end-of-cycle (EOC) critical at hot full-power (HFP) conditions. These calculations were used to check for consistency in the calculated results for different burnup, downtime, temperature, xenon, and boron conditions. The k_eff results were 1.0014 and 1.0113, respectively, with a standard deviation of 0.0005.
Fine-Scale Analysis Reveals Cryptic Landscape Genetic Structure in Desert Tortoises
Latch, Emily K.; Boarman, William I.; Walde, Andrew; Fleischer, Robert C.
2011-01-01
Characterizing the effects of landscape features on genetic variation is essential for understanding how landscapes shape patterns of gene flow and spatial genetic structure of populations. Most landscape genetics studies have focused on patterns of gene flow at a regional scale. However, the genetic structure of populations at a local scale may be influenced by a unique suite of landscape variables that have little bearing on connectivity patterns observed at broader spatial scales. We investigated fine-scale spatial patterns of genetic variation and gene flow in relation to features of the landscape in desert tortoise (Gopherus agassizii), using 859 tortoises genotyped at 16 microsatellite loci with associated data on geographic location, sex, elevation, slope, and soil type, and spatial relationship to putative barriers (power lines, roads). We used spatially explicit and non-explicit Bayesian clustering algorithms to partition the sample into discrete clusters, and characterize the relationships between genetic distance and ecological variables to identify factors with the greatest influence on gene flow at a local scale. Desert tortoises exhibit weak genetic structure at a local scale, and we identified two subpopulations across the study area. Although genetic differentiation between the subpopulations was low, our landscape genetic analysis identified both natural (slope) and anthropogenic (roads) landscape variables that have significantly influenced gene flow within this local population. We show that desert tortoise movements at a local scale are influenced by features of the landscape, and that these features are different than those that influence gene flow at larger scales. Our findings are important for desert tortoise conservation and management, particularly in light of recent translocation efforts in the region. 
More generally, our results indicate that recent landscape changes can affect gene flow at a local scale and that their effects can be detected almost immediately. PMID:22132143