Thermocapillary Bubble Migration: Thermal Boundary Layers for Large Marangoni Numbers
NASA Technical Reports Server (NTRS)
Balasubramaniam, R.; Subramanian, R. S.
1996-01-01
The migration of an isolated gas bubble in an immiscible liquid possessing a temperature gradient is analyzed in the absence of gravity. The driving force for the bubble motion is the shear stress at the interface, which is a consequence of the temperature dependence of the surface tension. The analysis is performed under conditions for which the Marangoni number is large, i.e., energy is transferred predominantly by convection. Velocity fields in the limits of both small and large Reynolds numbers are used. The thermal problem is treated by standard boundary layer theory. The outer temperature field is obtained in the vicinity of the bubble. A similarity solution is obtained for the inner temperature field. For both small and large Reynolds numbers, the asymptotic values of the scaled migration velocity of the bubble in the limit of large Marangoni numbers are calculated. The results show that the migration velocity has the same scaling for both small and large Reynolds numbers, but with a different coefficient. Higher-order thermal boundary layers are analyzed for the large Reynolds number flow field and the higher-order corrections to the migration velocity are obtained. Results are also presented for the momentum boundary layer and the thermal wake behind the bubble, for large Reynolds number conditions.
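For orientation, the dimensionless groups that govern this problem are conventionally defined as follows; this is the standard nondimensionalization in the thermocapillary-migration literature, written in our notation rather than necessarily the paper's:

```latex
% Reference thermocapillary velocity and dimensionless groups.
% R: bubble radius; \mu, \nu: dynamic and kinematic viscosity of the liquid;
% \kappa: thermal diffusivity; \sigma_T = |d\sigma/dT|; \nabla T_\infty: imposed gradient.
v_0 = \frac{\sigma_T \,|\nabla T_\infty|\, R}{\mu}, \qquad
\mathrm{Re} = \frac{v_0 R}{\nu}, \qquad
\mathrm{Ma} = \frac{v_0 R}{\kappa}.
```

Large Ma is then precisely the statement that energy is transported predominantly by convection, and the scaled migration velocity referred to above is the bubble velocity measured in units of v_0.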
Very Large Data Volumes Analysis of Collaborative Systems with Finite Number of States
ERIC Educational Resources Information Center
Ivan, Ion; Ciurea, Cristian; Pavel, Sorin
2010-01-01
The collaborative system with a finite number of states is defined. A very large database is structured. Operations on large databases are identified. Repetitive procedures for collaborative systems operations are derived. The efficiency of such procedures is analyzed. (Contains 6 tables, 5 footnotes and 3 figures.)
TIME DISTRIBUTIONS OF LARGE AND SMALL SUNSPOT GROUPS OVER FOUR SOLAR CYCLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kilcik, A.; Yurchyshyn, V. B.; Abramenko, V.
2011-04-10
Here we analyze solar activity by focusing on time variations of the number of sunspot groups (SGs) as a function of their modified Zurich class. We analyzed data for solar cycles 20-23 by using Rome (cycles 20 and 21) and Learmonth Solar Observatory (cycles 22 and 23) SG numbers. All SGs recorded during these time intervals were separated into two groups. The first group includes small SGs (A, B, C, H, and J classes by Zurich classification), and the second group consists of large SGs (D, E, F, and G classes). We then calculated small and large SG numbers from their daily mean numbers as observed on the solar disk during a given month. We report that the time variations of small and large SG numbers are asymmetric except for solar cycle 22. In general, large SG numbers appear to reach their maximum in the middle of the solar cycle (phases 0.45-0.5), while the international sunspot numbers and the small SG numbers generally peak much earlier (solar cycle phases 0.29-0.35). Moreover, the 10.7 cm solar radio flux, the facular area, and the maximum coronal mass ejection speed show better agreement with the large SG numbers than they do with the small SG numbers. Our results suggest that the large SG numbers are more likely to shed light on solar activity and its geophysical implications. Our findings may also influence our understanding of long-term variations of the total solar irradiance, which is thought to be an important factor in the Sun-Earth climate relationship.
Holographic turbulence in a large number of dimensions
NASA Astrophysics Data System (ADS)
Rozali, Moshe; Sabag, Evyatar; Yarom, Amos
2018-04-01
We consider relativistic hydrodynamics in the limit where the number of spatial dimensions is very large. We show that under certain restrictions, the resulting equations of motion simplify significantly. Holographic theories in a large number of dimensions satisfy the aforementioned restrictions and their dynamics are captured by hydrodynamics with a naturally truncated derivative expansion. Using analytic and numerical techniques we analyze two- and three-dimensional turbulent flows of such fluids in various regimes and their relation to geometric data.
Analyzing Prosocial Content on T.V.
ERIC Educational Resources Information Center
Davidson, Emily S.; Neale, John M.
To enhance knowledge of television content, a prosocial code was developed by watching a large number of potentially prosocial television programs and making notes on all the positive acts. The behaviors were classified into a workable number of categories. The prosocial code is largely verbal and contains seven categories which fall into two…
Rare Cell Detection by Single-Cell RNA Sequencing as Guided by Single-Molecule RNA FISH.
Torre, Eduardo; Dueck, Hannah; Shaffer, Sydney; Gospocic, Janko; Gupte, Rohit; Bonasio, Roberto; Kim, Junhyong; Murray, John; Raj, Arjun
2018-02-28
Although single-cell RNA sequencing can reliably detect large-scale transcriptional programs, it is unclear whether it accurately captures the behavior of individual genes, especially those that express only in rare cells. Here, we use single-molecule RNA fluorescence in situ hybridization as a gold standard to assess trade-offs in single-cell RNA-sequencing data for detecting rare cell expression variability. We quantified the gene expression distribution for 26 genes that range from ubiquitous to rarely expressed and found that the correspondence between estimates across platforms improved with both transcriptome coverage and increased number of cells analyzed. Further, by characterizing the trade-off between transcriptome coverage and number of cells analyzed, we show that when the number of genes required to answer a given biological question is small, then greater transcriptome coverage is more important than analyzing large numbers of cells. More generally, our report provides guidelines for selecting quality thresholds for single-cell RNA-sequencing experiments aimed at rare cell analyses.
NASA Astrophysics Data System (ADS)
Gilra, D. P.; Pwa, T. H.; Arnal, E. M.; de Vries, J.
1982-06-01
In order to process and analyze high-resolution IUE data on a large number of interstellar lines in a large number of images for a large number of stars, computer programs were developed for 115 lines in the short-wavelength range and 40 in the long-wavelength range. The programs include extraction, processing, plotting, averaging, and profile fitting. Wavelength calibration in high-resolution spectra, fixed-pattern noise, the instrument profile and resolution, and the background problem in the region where orders crowd together are discussed. All the expected lines are detected in at least one spectrum.
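As an illustration of the profile-fitting step, a single interstellar absorption line can be modeled as a Gaussian dip on a flat continuum and fit by least squares. This is a generic sketch, not the actual IUE reduction software, and the line parameters are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def absorption_line(w, continuum, depth, center, sigma):
    """Flat continuum with a Gaussian absorption dip."""
    return continuum - depth * np.exp(-0.5 * ((w - center) / sigma) ** 2)

# Synthetic spectrum around a hypothetical line near 2600 A
wave = np.linspace(2595.0, 2605.0, 200)
flux = absorption_line(wave, 1.0, 0.4, 2600.1, 0.15)
flux += np.random.default_rng(0).normal(0.0, 0.01, wave.size)  # detector noise

p0 = (1.0, 0.3, 2600.0, 0.2)                       # initial guess
popt, _ = curve_fit(absorption_line, wave, flux, p0=p0)
print("center = %.3f A, FWHM = %.3f A" % (popt[2], 2.3548 * popt[3]))
```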
The Influence of the Number of Different Stocks on the Levy-Levy-Solomon Model
NASA Astrophysics Data System (ADS)
Kohl, R.
The Levy-Levy-Solomon stock market model is simulated with more than one stock to analyze its behavior for a large number of investors. Small markets can produce realistic-looking prices for one or more stocks. With a large number of investors, the simulated price of a single stock evolves in a semi-regular fashion. With many stocks, three of the stocks become semi-regular and dominant while the rest behave chaotically. In addition, we changed the utility function and checked the results.
Urban land expansion in Quanzhou City, China, 1995-2010
USDA-ARS?s Scientific Manuscript database
With its phenomenal development in recent decades, urbanization in China has been covered in a large number of studies. These studies have focused on large cities, with smaller and lesser known cities largely overlooked. This study analyzed the spatiotemporal changes of land use in Quanzhou, a histo...
NASA Technical Reports Server (NTRS)
Wheeler, A. A.; Mcfadden, G. B.; Murray, B. T.; Coriell, S. R.
1991-01-01
The effect of vertical, sinusoidal, time-dependent gravitational acceleration on the onset of solutal convection during directional solidification is analyzed in the limit of large modulation frequency. When the unmodulated state is unstable, the modulation amplitude required to stabilize the system is determined by the method of averaging. When the unmodulated state is stable, resonant modes of instability occur at large modulation amplitude. These are analyzed using matched asymptotic expansions to elucidate the boundary-layer structure for both the Rayleigh-Benard and directional solidification configurations. Based on these analyses, a thorough examination of the dependence of the stability criteria on the unmodulated Rayleigh number, Schmidt number, and distribution coefficient, is carried out.
Maximizing User Satisfaction With Office Practice Data Processing Systems
O'Flaherty, Thomas; Jussim, Judith
1980-01-01
Significant numbers of physicians are using data processing services and a large number of firms are offering an increasing variety of services. This paper quantifies user dissatisfaction with office practice data processing systems and analyzes factors affecting dissatisfaction in large group practices. Based on this analysis, a proposal is made for a more structured approach to obtaining data processing services in order to lower the risks and increase satisfaction with data processing.
Hierarchical Nearest-Neighbor Gaussian Process Models for Large Geostatistical Datasets.
Datta, Abhirup; Banerjee, Sudipto; Finley, Andrew O; Gelfand, Alan E
2016-01-01
Spatial process models for analyzing geostatistical data entail computations that become prohibitive as the number of spatial locations becomes large. This article develops a class of highly scalable nearest-neighbor Gaussian process (NNGP) models to provide fully model-based inference for large geostatistical datasets. We establish that the NNGP is a well-defined spatial process providing legitimate finite-dimensional Gaussian densities with sparse precision matrices. We embed the NNGP as a sparsity-inducing prior within a rich hierarchical modeling framework and outline how computationally efficient Markov chain Monte Carlo (MCMC) algorithms can be executed without storing or decomposing large matrices. The number of floating point operations (flops) per iteration of this algorithm is linear in the number of spatial locations, thereby rendering substantial scalability. We illustrate the computational and inferential benefits of the NNGP over competing methods using simulation studies and also analyze forest biomass from a massive U.S. Forest Inventory dataset at a scale that precludes alternative dimension-reducing methods. Supplementary materials for this article are available online.
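A minimal sketch of the nearest-neighbor conditioning that underlies the NNGP (a Vecchia-type factorization): each location is conditioned only on a few nearest preceding locations, which is what yields sparse precision matrices and per-iteration cost linear in the number of locations. The exponential covariance and m = 10 neighbors below are illustrative choices, not the paper's settings:

```python
import numpy as np

def nngp_factors(coords, m=10, phi=1.0, sigma2=1.0):
    """Vecchia/NNGP-style factorization: condition each location on at
    most m nearest *preceding* locations, giving kriging weights B[i]
    and conditional variances F[i]."""
    def cov(a, b):
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
        return sigma2 * np.exp(-phi * d)          # exponential covariance

    n = coords.shape[0]
    B, N, F = [np.empty(0)], [np.empty(0, dtype=int)], np.empty(n)
    F[0] = sigma2
    for i in range(1, n):
        d = np.linalg.norm(coords[:i] - coords[i], axis=1)
        nbr = np.argsort(d)[: min(m, i)]          # nearest preceding points
        C_nn = cov(coords[nbr], coords[nbr])
        C_in = cov(coords[i : i + 1], coords[nbr])[0]
        b = np.linalg.solve(C_nn, C_in)           # conditional (kriging) weights
        B.append(b); N.append(nbr)
        F[i] = sigma2 - C_in @ b                  # conditional variance
    return B, N, F

# The joint density then factors into n univariate Gaussian terms,
# which is the source of the linear per-iteration flop count.
coords = np.random.default_rng(0).random((500, 2))
B, N, F = nngp_factors(coords, m=10)
```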
Turbulent Superstructures in Rayleigh-Bénard convection at different Prandtl number
NASA Astrophysics Data System (ADS)
Schumacher, Jörg; Pandey, Ambrish; Ender, Martin; Westermann, Rüdiger; Scheel, Janet D.
2017-11-01
Large-scale patterns of the temperature and velocity fields in horizontally extended cells can be considered as turbulent superstructures in Rayleigh-Bénard convection (RBC). These structures are obtained once the turbulent fluctuations are removed by a finite-time average. Their existence has been reported, for example, in Bailon-Cuba et al. This large-scale order bears a strong similarity to the well-studied patterns of the weakly nonlinear regime at lower Rayleigh numbers in RBC. In the present work we analyze the superstructures of RBC at Prandtl numbers between Pr = 0.005 (liquid sodium) and Pr = 7 (water). The characteristic evolution time scales, the typical spatial extension of the rolls, and the properties of the defects of the resulting superstructure patterns are analyzed. Data are obtained from well-resolved spectral element direct numerical simulations. The work is supported by the Priority Programme SPP 1881 of the Deutsche Forschungsgemeinschaft.
Rapid quantification of proanthocyanidins (condensed tannins) with a continuous flow analyzer
James K. Nitao; Bruce A. Birr; Muraleedharan G. Nair; Daniel A. Herms; William J. Mattson
2001-01-01
Proanthocyanidins (condensed tannins) frequently need to be quantified in large numbers of samples in food, plant, and environmental studies. An automated colorimetric method to quantify proanthocyanidins with sulfuric acid (H2SO4) was therefore developed for use in a continuous flow analyzer. Assay conditions were...
Investigation of estimators of probability density functions
NASA Technical Reports Server (NTRS)
Speed, F. M.
1972-01-01
Four research projects are summarized which include: (1) the generation of random numbers on the IBM 360/44, (2) statistical tests used to check out random number generators, (3) Specht density estimators, and (4) use of estimators of probability density functions in analyzing large amounts of data.
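Item (2), statistical testing of random number generators, is easy to illustrate: a chi-square goodness-of-fit test against the uniform distribution is one of the classic checks. A minimal sketch using NumPy/SciPy:

```python
import numpy as np
from scipy.stats import chi2

def chi_square_uniformity(u, bins=64):
    """Bin the samples and compare observed bin counts to the
    expected count under a uniform distribution on [0, 1)."""
    counts, _ = np.histogram(u, bins=bins, range=(0.0, 1.0))
    expected = len(u) / bins
    stat = ((counts - expected) ** 2 / expected).sum()
    p_value = chi2.sf(stat, df=bins - 1)
    return stat, p_value

u = np.random.default_rng(1).random(100_000)
stat, p = chi_square_uniformity(u)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")   # p well above 0.05: no evidence of bias
```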
HierarchicalTopics: visually exploring large text collections using topic hierarchies.
Dou, Wenwen; Yu, Li; Wang, Xiaoyu; Ma, Zhiqiang; Ribarsky, William
2013-12-01
Analyzing large textual collections has become increasingly challenging given the size of the data available and the rate at which more data is being generated. Topic-based text summarization methods coupled with interactive visualizations have presented promising approaches to address the challenge of analyzing large text corpora. As the text corpora and vocabulary grow larger, more topics need to be generated in order to capture the meaningful latent themes and nuances in the corpora. However, it is difficult for most current topic-based visualizations to represent a large number of topics without becoming cluttered or illegible. To facilitate the representation and navigation of a large number of topics, we propose a visual analytics system--HierarchicalTopics (HT). HT integrates a computational algorithm, Topic Rose Tree, with an interactive visual interface. The Topic Rose Tree constructs a topic hierarchy based on a list of topics. The interactive visual interface is designed to present the topic content as well as the temporal evolution of topics in a hierarchical fashion. User interactions are provided for users to make changes to the topic hierarchy based on their mental model of the topic space. To qualitatively evaluate HT, we present a case study that showcases how HierarchicalTopics aids expert users in making sense of a large number of topics and discovering interesting patterns of topic groups. We have also conducted a user study to quantitatively evaluate the effect of the hierarchical topic structure. The study results reveal that HT leads to faster identification of a large number of relevant topics. We have also solicited user feedback during the experiments and incorporated some suggestions into the current version of HierarchicalTopics.
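The Topic Rose Tree algorithm itself is not reproduced here, but the general idea of organizing flat topics into a navigable hierarchy can be sketched by agglomeratively merging the most similar topic-word distributions; a generic sketch under that assumption:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

# Each row: one topic's distribution over the vocabulary (rows sum to 1).
rng = np.random.default_rng(0)
topics = rng.dirichlet(alpha=np.ones(1000) * 0.1, size=50)

# Jensen-Shannon distance between topic distributions, then
# average-linkage agglomeration into a browsable hierarchy.
dist = pdist(topics, metric="jensenshannon")
tree = linkage(dist, method="average")
# 'tree' records which topics merge at which height; a UI can expose
# coarse themes near the root and finer nuances near the leaves.
print(tree[:5])
```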
Identification of copy number variants in whole-genome data using Reference Coverage Profiles
Glusman, Gustavo; Severson, Alissa; Dhankani, Varsha; Robinson, Max; Farrah, Terry; Mauldin, Denise E.; Stittrich, Anna B.; Ament, Seth A.; Roach, Jared C.; Brunkow, Mary E.; Bodian, Dale L.; Vockley, Joseph G.; Shmulevich, Ilya; Niederhuber, John E.; Hood, Leroy
2015-01-01
The identification of DNA copy numbers from short-read sequencing data remains a challenge for both technical and algorithmic reasons. The raw data for these analyses are measured in tens to hundreds of gigabytes per genome; transmitting, storing, and analyzing such large files is cumbersome, particularly for methods that analyze several samples simultaneously. We developed a very efficient representation of depth of coverage (150–1000× compression) that enables such analyses. Current methods for analyzing variants in whole-genome sequencing (WGS) data frequently miss copy number variants (CNVs), particularly hemizygous deletions in the 1–100 kb range. To fill this gap, we developed a method to identify CNVs in individual genomes, based on comparison to joint profiles pre-computed from a large set of genomes. We analyzed depth of coverage in over 6000 high quality (>40×) genomes. The depth of coverage has strong sequence-specific fluctuations only partially explained by global parameters like %GC. To account for these fluctuations, we constructed multi-genome profiles representing the observed or inferred diploid depth of coverage at each position along the genome. These Reference Coverage Profiles (RCPs) take into account the diverse technologies and pipeline versions used. Normalization of the scaled coverage to the RCP followed by hidden Markov model (HMM) segmentation enables efficient detection of CNVs and large deletions in individual genomes. Use of pre-computed multi-genome coverage profiles improves our ability to analyze each individual genome. We make available RCPs and tools for performing these analyses on personal genomes. We expect the increased sensitivity and specificity for individual genome analysis to be critical for achieving clinical-grade genome interpretation.
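The final segmentation step can be illustrated with a tiny three-state HMM over the normalized coverage signal; a minimal sketch with Gaussian emissions on the log2(sample/RCP) ratio, using illustrative parameters rather than the authors' trained values:

```python
import numpy as np

def viterbi_cnv(log2_ratio, means=(-1.0, 0.0, 0.58), sd=0.3, stay=0.999):
    """Three-state Viterbi segmentation (deletion / diploid / duplication).
    means: expected log2 ratios, e.g. log2(1/2), 0, log2(3/2)."""
    k = len(means)
    logA = np.full((k, k), np.log((1 - stay) / (k - 1)))
    np.fill_diagonal(logA, np.log(stay))
    loge = -0.5 * ((log2_ratio[:, None] - np.array(means)) / sd) ** 2
    v = loge[0] + np.log(1.0 / k)
    back = np.zeros((len(log2_ratio), k), dtype=int)
    for t in range(1, len(log2_ratio)):
        scores = v[:, None] + logA          # scores[i, j]: prev i -> cur j
        back[t] = scores.argmax(axis=0)
        v = scores.max(axis=0) + loge[t]
    path = np.empty(len(log2_ratio), dtype=int)
    path[-1] = v.argmax()
    for t in range(len(log2_ratio) - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path                             # 0=deletion, 1=diploid, 2=duplication

# Synthetic genome with a hemizygous deletion in the middle
rng = np.random.default_rng(0)
depth = np.r_[np.full(200, 1.0), np.full(50, 0.5), np.full(200, 1.0)]
log2_ratio = np.log2(depth) + 0.2 * rng.standard_normal(depth.size)
states = viterbi_cnv(log2_ratio)
print(np.bincount(states, minlength=3))    # ~50 positions called as deletion
```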
Segmentation and Quantitative Analysis of Epithelial Tissues.
Aigouy, Benoit; Umetsu, Daiki; Eaton, Suzanne
2016-01-01
Epithelia are tissues that regulate exchanges with the environment. They are very dynamic and can acquire virtually any shape; at the cellular level, they are composed of cells tightly connected by junctions. Most often epithelia are amenable to live imaging; however, the large number of cells composing an epithelium and the absence of informatics tools dedicated to epithelial analysis have largely prevented tissue-scale studies. Here we present Tissue Analyzer, a free tool that can be used to segment and analyze epithelial cells and monitor tissue dynamics.
ERIC Educational Resources Information Center
Zacharakis, Jeff; Wang, Haiyan; Patterson, Margaret Becker; Andersen, Lori
2015-01-01
This research analyzed linked high-quality state data from K-12, adult education, and postsecondary state datasets in order to better understand the association between student demographics and successful completion of a postsecondary program. Due to the relatively small sample size compared to the large number of features, we analyzed the data…
Heidema, A Geert; Boer, Jolanda M A; Nagelkerke, Nico; Mariman, Edwin C M; van der A, Daphne L; Feskens, Edith J M
2006-04-21
Genetic epidemiologists have taken the challenge to identify genetic polymorphisms involved in the development of diseases. Many have collected data on large numbers of genetic markers but are not familiar with available methods to assess their association with complex diseases. Statistical methods have been developed for analyzing the relation between large numbers of genetic and environmental predictors to disease or disease-related variables in genetic association studies. In this commentary we discuss logistic regression analysis, neural networks, including the parameter decreasing method (PDM) and genetic programming optimized neural networks (GPNN) and several non-parametric methods, which include the set association approach, combinatorial partitioning method (CPM), restricted partitioning method (RPM), multifactor dimensionality reduction (MDR) method and the random forests approach. The relative strengths and weaknesses of these methods are highlighted. Logistic regression and neural networks can handle only a limited number of predictor variables, depending on the number of observations in the dataset. Therefore, they are less useful than the non-parametric methods to approach association studies with large numbers of predictor variables. GPNN on the other hand may be a useful approach to select and model important predictors, but its performance to select the important effects in the presence of large numbers of predictors needs to be examined. Both the set association approach and random forests approach are able to handle a large number of predictors and are useful in reducing these predictors to a subset of predictors with an important contribution to disease. The combinatorial methods give more insight in combination patterns for sets of genetic and/or environmental predictor variables that may be related to the outcome variable. As the non-parametric methods have different strengths and weaknesses we conclude that to approach genetic association studies using the case-control design, the application of a combination of several methods, including the set association approach, MDR and the random forests approach, will likely be a useful strategy to find the important genes and interaction patterns involved in complex diseases.
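Of the non-parametric approaches discussed, the random forests approach is straightforward to sketch with scikit-learn; the causal-SNP indices, interaction structure, and noise level below are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy case-control data: 1000 subjects, 500 SNPs coded 0/1/2,
# with two interacting causal SNPs (indices 10 and 42 are made up).
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(1000, 500))
y = ((X[:, 10] > 0) & (X[:, 42] > 0)).astype(int)
y = y ^ (rng.random(1000) < 0.1)              # 10% label noise

rf = RandomForestClassifier(n_estimators=500, random_state=0)
rf.fit(X, y)

# Variable-importance ranking reduces 500 predictors to a short list.
top = np.argsort(rf.feature_importances_)[::-1][:10]
print("top-ranked predictors:", top)          # causal SNPs should rank high
```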
Solar concentration properties of flat fresnel lenses with large F-numbers
NASA Technical Reports Server (NTRS)
Cosby, R. M.
1978-01-01
The solar concentration performances of flat, line-focusing, sun-tracking Fresnel lenses with selected f-numbers between 0.9 and 2.0 were analyzed. Lens transmittance was found to have a weak dependence on f-number, with a 2% increase occurring as the f-number is increased from 0.9 to 2.0. The geometric concentration ratio for perfectly tracking lenses peaked for an f-number near 1.35. Intensity profiles were more uniform over the image extent for large f-number lenses when compared to the f/0.9 lens results. Substantial decreases in geometric concentration ratios were observed for transverse tracking errors at or below 1 degree for all f-number lenses. With respect to tracking errors, the solar performance is optimum for f-numbers between 1.25 and 1.5.
Robust failure detection filters. M.S. Thesis
NASA Technical Reports Server (NTRS)
Sanmartin, A. M.
1985-01-01
The robustness of detection filters applied to the detection of actuator failures on a free-free beam is analyzed. This analysis is based on computer simulation tests of the detection filters in the presence of different types of model mismatch, and on frequency response functions of the transfers corresponding to the model mismatch. The robustness of detection filters based on a model of the beam containing a large number of structural modes varied dramatically with the placement of some of the filter poles. The dynamics of these filters were very hard to analyze. The design of detection filters with a number of modes equal to the number of sensors was trivial. They can be configured to detect any number of actuator failure events. The dynamics of these filters were very easy to analyze and their robustness properties were much improved. A change of the output transformation allowed the filter to perform satisfactorily with realistic levels of model mismatch.
NASA Astrophysics Data System (ADS)
Taniguchi, Shigeru; Arima, Takashi; Ruggeri, Tommaso; Sugiyama, Masaru
2018-05-01
The shock wave structure in rarefied polyatomic gases is analyzed based on extended thermodynamics (ET). In particular, the case with a large relaxation time for the dynamic pressure, which corresponds to a large bulk viscosity, is considered by adopting the simplest version of extended thermodynamics with only 6 independent fields (ET6): the mass density, the velocity, the temperature, and the dynamic pressure. Recently, the validity of the theoretical predictions by ET was confirmed by numerical analysis based on the kinetic theory in [S. Kosuge and K. Aoki: Phys. Rev. Fluids, Vol. 3, 023401 (2018)]. It was shown that numerical results using the polyatomic version of the ellipsoidal statistical model agree with the theoretical predictions by ET for small or moderately large Mach numbers. In the present paper, first, we compare the theoretical predictions by ET6 with the ones by kinetic theory for large Mach number under the same assumptions, that is, the gas is polytropic and the bulk viscosity is proportional to the temperature. Second, the shock wave structure for large Mach number in a non-polytropic gas is analyzed with particular interest in the effect of the temperature dependence of the specific heat and the bulk viscosity on the shock wave structure. Through the analysis of the case of a rarefied carbon dioxide (CO2) gas, it is shown that these temperature dependences play important roles in the precise analysis of the structure of strong shock waves.
Text Me! Interpersonal Discourse Analysis of Egyptian Mobile Operators' SMSs
ERIC Educational Resources Information Center
El-Falaky, Mai Samir
2016-01-01
The present study examines the discourse of a number of Short Messaging Service (SMS). The selected data is analyzed according to the lexico-grammatical choices reflected in the interpersonal metafunction. Results are, then, interpreted for the purpose of deciding how service providers use language to convince a large number of customers of their…
NASA Technical Reports Server (NTRS)
Gauthier, M. K.; Miller, E. L.; Shumka, A.
1980-01-01
Laser-Scanning System pinpoints imperfections in solar cells. Entire solar panels containing large numbers of cells can be scanned. Although the technique is similar to the use of a scanning electron microscope (SEM) to locate microscopic imperfections, it differs in that large areas may be examined, including entire solar panels, and it is not necessary to remove cover glass or encapsulants.
The large scale microelectronics Computer-Aided Design and Test (CADAT) system
NASA Technical Reports Server (NTRS)
Gould, J. M.
1978-01-01
The CADAT system consists of a number of computer programs written in FORTRAN that provide the capability to simulate, lay out, analyze, and create the artwork for large scale microelectronics. The function of each software component of the system is described with references to specific documentation for each software component.
The application of waste fly ash and construction-waste in cement filling material in goaf
NASA Astrophysics Data System (ADS)
Chen, W. X.; Xiao, F. K.; Guan, X. H.; Cheng, Y.; Shi, X. P.; Liu, S. M.; Wang, W. W.
2018-01-01
As urbanization accelerates, large amounts of waste fly ash and construction waste are produced, occupying farmland and polluting the environment. In this paper, large amounts of construction waste and waste fly ash are mixed into the filling material for goafs; the best formula for a filling material containing a large amount of waste fly ash and construction waste is obtained, and the performance of the filling material is analyzed. The experimental results show that the cost of the filling material is very low while its performance is good, giving it good prospects for goaf applications.
Gooding, Thomas Michael [Rochester, MN]
2011-04-19
An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
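The grouping-and-neighbor heuristic described here can be sketched compactly; the canonicalized traceback strings and the neighbor map below are stand-ins for the real system's data:

```python
from collections import Counter

def suspect_nodes(traceback, neighbors):
    """traceback: node -> canonicalized call-return stack string.
    neighbors: node -> list of adjacent nodes (e.g. torus topology).
    Nodes whose stack differs from the majority group are ranked by how
    many of their neighbors *do* belong to the group: an outlier
    surrounded by in-group neighbors is a likely locality of error."""
    groups = Counter(traceback.values())
    majority, _ = groups.most_common(1)[0]
    scores = {}
    for node, stack in traceback.items():
        if stack == majority:
            continue                      # node behaves like its peers
        in_group = sum(traceback[n] == majority for n in neighbors[node])
        scores[node] = in_group           # more in-group neighbors => more suspicious
    return sorted(scores, key=scores.get, reverse=True)
```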
Statistical physics of community ecology: a cavity solution to MacArthur’s consumer resource model
NASA Astrophysics Data System (ADS)
Advani, Madhu; Bunin, Guy; Mehta, Pankaj
2018-03-01
A central question in ecology is to understand the ecological processes that shape community structure. Niche-based theories have emphasized the important role played by competition for maintaining species diversity. Many of these insights have been derived using MacArthur’s consumer resource model (MCRM) or its generalizations. Most theoretical work on the MCRM has focused on small ecosystems with a few species and resources. However, theoretical insights derived from small ecosystems may not scale up to large ecosystems with many resources and species, because large systems with many interacting components often display new emergent behaviors that cannot be understood or deduced from analyzing smaller systems. To address these shortcomings, we develop a statistical physics inspired cavity method to analyze the MCRM when both the number of species and the number of resources is large. Unlike previous work in this limit, our theory addresses resource dynamics and resource depletion and demonstrates that species generically and consistently perturb their environments and significantly modify available ecological niches. We show how our cavity approach naturally generalizes niche theory to large ecosystems by accounting for the effect of collective phenomena on species invasion and ecological stability. Our theory suggests that such phenomena are a generic feature of large, natural ecosystems and must be taken into account when analyzing and interpreting community structure. It also highlights the important role that statistical-physics inspired approaches can play in furthering our understanding of ecology.
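For concreteness, MacArthur's consumer resource model is commonly written as the coupled species-resource dynamics below (a standard form with self-renewing resources; the statistical-physics treatment draws the consumption coefficients at random):

```latex
\frac{\mathrm{d}s_i}{\mathrm{d}t}
  = s_i\Big(\sum_{\alpha} c_{i\alpha}\, w_\alpha R_\alpha - m_i\Big),
\qquad
\frac{\mathrm{d}R_\alpha}{\mathrm{d}t}
  = R_\alpha\big(K_\alpha - R_\alpha\big) - \sum_i s_i\, c_{i\alpha} R_\alpha ,
```

with species abundances s_i, resource abundances R_α, consumption rates c_{iα}, resource values w_α, maintenance costs m_i, and carrying capacities K_α. The cavity method treats the limit in which the numbers of species and resources both grow large with the c_{iα} as quenched random variables.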
The Renewed Primary School in Belgium: Analysis of the Local Innovation Policy.
ERIC Educational Resources Information Center
Vandenberghe, Roland
The Renewed Primary School project in Belgium is analyzed in this paper in terms of organizational response to a large-scale innovation, which is characterized by its multidimensionality, by the large number of participating schools, and by a complex support structure. Section 2 of the report presents an elaborated description of these…
Learning Style Patterns among Special Needs Adult Students at King Saud University
ERIC Educational Resources Information Center
Alshuaibi, Abdulrahman
2017-01-01
Few studies of learning styles among adults with special needs exist worldwide. Even though there are large numbers of adults with special needs, this population in university education has been largely ignored in educational research. Therefore, this study aimed to gather and analyze learning styles of adult special needs students and to provide…
Stable amplitude chimera states in a network of locally coupled Stuart-Landau oscillators
NASA Astrophysics Data System (ADS)
Premalatha, K.; Chandrasekar, V. K.; Senthilvelan, M.; Lakshmanan, M.
2018-03-01
We investigate the occurrence of collective dynamical states such as transient amplitude chimera, stable amplitude chimera, and imperfect breathing chimera states in a locally coupled network of Stuart-Landau oscillators. In an imperfect breathing chimera state, the synchronized group of oscillators exhibits oscillations with large amplitudes, while the desynchronized group of oscillators oscillates with small amplitudes, and this behavior of coexistence of synchronized and desynchronized oscillations fluctuates with time. Then, we analyze the stability of the amplitude chimera states under various circumstances, including variations in system parameters and coupling strength, and perturbations in the initial states of the oscillators. For an increase in the value of the system parameter, namely, the nonisochronicity parameter, the transient chimera state becomes a stable chimera state for a sufficiently large value of coupling strength. In addition, we also analyze the stability of these states by perturbing the initial states of the oscillators. We find that while a small perturbation allows one to perturb a large number of oscillators resulting in a stable amplitude chimera state, a large perturbation allows one to perturb a small number of oscillators to get a stable amplitude chimera state. We also find the stability of the transient and stable amplitude chimera states and traveling wave states for an appropriate number of oscillators using Floquet theory. In addition, we also find the stability of the incoherent oscillation death states.
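A locally coupled Stuart-Landau ring of this general kind can be integrated in a few lines; a generic sketch in which the nonisochronicity parameter c2, coupling strength eps, and initial noise are illustrative rather than the paper's values:

```python
import numpy as np

def stuart_landau_ring(N=100, c2=2.5, eps=0.3, dt=0.01, steps=200_000):
    """dz_j/dt = z_j - (1 + i c2)|z_j|^2 z_j + eps (z_{j-1} + z_{j+1} - 2 z_j)
    with nearest-neighbor (local) coupling on a ring; explicit Euler steps."""
    rng = np.random.default_rng(1)
    z = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    for _ in range(steps):
        lap = np.roll(z, 1) + np.roll(z, -1) - 2.0 * z   # discrete Laplacian
        z += dt * (z - (1 + 1j * c2) * np.abs(z) ** 2 * z + eps * lap)
    return z

z = stuart_landau_ring()
# An amplitude chimera shows up as a block of oscillators with amplitudes
# far from 1 coexisting with a nearly uniform-amplitude synchronized block.
print("amplitude range:", np.abs(z).min().round(2), "to", np.abs(z).max().round(2))
```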
CYLINDRICAL WAVES OF FINITE AMPLITUDE IN DISSIPATIVE MEDIUM (in Russian)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naugol'nykh, K.A.; Soluyan, S.I.; Khokhlov, R.V.
1962-07-01
Propagation of diverging and converging cylindrical waves in a nonlinear, viscous, heat-conducting medium is analyzed using approximation methods. The Krylov-Bogolyubov method was used for small Reynolds numbers, and the method of S. I. Soluyan et al. (Vest. Mosk. Univ. ser. phys. and astronomy 3, 52-81, 1981) was used for large Reynolds numbers. The formation and dissipation of shock fronts and the spatial dimensions of shock phenomena were analyzed. It is shown that the problem of finite-amplitude cylindrical wave propagation is identical to the problem of plane wave propagation in a medium with variable viscosity. (tr-auth)
APPHi: Automated Photometry Pipeline for High Cadence Large Volume Data
NASA Astrophysics Data System (ADS)
Sánchez, E.; Castro, J.; Silva, J.; Hernández, J.; Reyes, M.; Hernández, B.; Alvarez, F.; García T.
2018-04-01
APPHi (Automated Photometry Pipeline) carries out aperture and differential photometry of TAOS-II project data. It is computationally efficient and can be used also with other astronomical wide-field image data. APPHi works with large volumes of data and handles both FITS and HDF5 formats. Due to the large number of stars that the software has to handle in an enormous number of frames, it is optimized to automatically find the best parameter values for the photometry, such as the mask size for the aperture, the size of the window for extraction of a single star, and the count threshold for detecting a faint star. Although intended to work with TAOS-II data, APPHi can analyze any set of astronomical images and is a robust and versatile tool for performing stellar aperture and differential photometry.
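The core aperture-with-sky-annulus operation that such pipelines automate can be sketched as follows; APPHi's automatic tuning of mask size, extraction window, and detection threshold is not shown, and all radii here are illustrative:

```python
import numpy as np

def aperture_photometry(img, x0, y0, r_ap=4.0, r_in=8.0, r_out=12.0):
    """Sum counts in a circular aperture and subtract the sky level
    estimated from the median of a surrounding annulus."""
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - x0, yy - y0)
    aperture = r <= r_ap
    annulus = (r >= r_in) & (r <= r_out)
    sky = np.median(img[annulus])                  # robust background estimate
    return img[aperture].sum() - sky * aperture.sum()

# Synthetic star (Gaussian PSF) on a flat background
img = np.full((64, 64), 100.0)
yy, xx = np.indices(img.shape)
img += 5000.0 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 1.5 ** 2))
flux = aperture_photometry(img, 32, 32)
print(f"flux = {flux:.0f} counts")
# Differential photometry then divides this flux by that of nearby
# comparison stars to cancel transparency and airmass variations.
```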
Exploration of multiphoton entangled states by using weak nonlinearities
He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting
2016-01-01
We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes, the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons.
Disordered quivers and cold horizons
Anninos, Dionysios; Anous, Tarek; Denef, Frederik
2016-12-15
We analyze the low temperature structure of a supersymmetric quiver quantum mechanics with randomized superpotential coefficients, treating them as quenched disorder. These theories describe features of the low energy dynamics of wrapped branes, which in large number backreact into extremal black holes. We show that the low temperature theory, in the limit of a large number of bifundamentals, exhibits a time reparametrization symmetry as well as a specific heat linear in the temperature. Both these features resemble the behavior of black hole horizons in the zero temperature limit. We demonstrate similarities between the low temperature physics of the random quiver model and a theory of large N free fermions with random masses.
Parallel/distributed direct method for solving linear systems
NASA Technical Reports Server (NTRS)
Lin, Avi
1990-01-01
A new family of parallel schemes for directly solving linear systems is presented and analyzed. It is shown that these schemes exhibit near-optimal performance and enjoy several important features: (1) for large enough linear systems, the design of the appropriate parallel algorithm is insensitive to the number of processors, as its performance grows monotonically with them; (2) it is especially good for large matrices, with dimensions large relative to the number of processors in the system; (3) it can be used in both distributed parallel computing environments and tightly coupled parallel computing systems; and (4) this set of algorithms can be mapped onto any parallel architecture without any major programming difficulties or algorithmic changes.
Radial sets: interactive visual analysis of large overlapping sets.
Alsallakh, Bilal; Aigner, Wolfgang; Miksch, Silvia; Hauser, Helwig
2013-12-01
In many applications, data tables contain multi-valued attributes that often store the memberships of the table entities to multiple sets, such as which languages a person masters, which skills an applicant documents, or which features a product comes with. With a growing number of entities, the resulting element-set membership matrix becomes very rich in information about how these sets overlap. Many analysis tasks targeted at set-typed data are concerned with these overlaps as salient features of such data. This paper presents Radial Sets, a novel visual technique to analyze set memberships for a large number of elements. Our technique uses frequency-based representations to enable quickly finding and analyzing different kinds of overlaps between the sets, and relating these overlaps to other attributes of the table entities. Furthermore, it enables various interactions to select elements of interest, find out if they are over-represented in specific sets or overlaps, and if they exhibit a different distribution for a specific attribute compared to the rest of the elements. These interactions allow formulating highly expressive visual queries on the elements in terms of their set memberships and attribute values. As we demonstrate via two usage scenarios, Radial Sets enable revealing and analyzing a multitude of overlapping patterns between large sets, beyond the limits of state-of-the-art techniques.
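The frequency-based summaries such a technique is built on are simple to compute from the element-set membership matrix; a minimal sketch with synthetic memberships:

```python
import numpy as np
from itertools import combinations

# Element-set membership matrix: rows = elements, columns = sets (bool).
rng = np.random.default_rng(0)
M = rng.random((10_000, 6)) < 0.2
set_names = [f"s{i}" for i in range(6)]

# Degree histogram: how many elements belong to exactly k sets --
# the kind of frequency summary a Radial-Sets-style view aggregates.
degree = M.sum(axis=1)
print("elements in exactly k sets:", np.bincount(degree, minlength=7))

# Pairwise overlap sizes between sets
for a, b in combinations(range(6), 2):
    print(set_names[a], set_names[b], int((M[:, a] & M[:, b]).sum()))
```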
Extension of electronic speckle correlation interferometry to large deformations
NASA Astrophysics Data System (ADS)
Sciammarella, Cesar A.; Sciammarella, Federico M.
1998-07-01
The process of fringe formation under simultaneous illumination in two orthogonal directions is analyzed. Procedures to extend the applicability of this technique to large deformation and high density of fringes are introduced. The proposed techniques are applied to a number of technical problems. Good agreement is obtained when the experimental results are compared with results obtained by other methods.
NASA Astrophysics Data System (ADS)
Ichino, Shinya; Mawaki, Takezo; Teramoto, Akinobu; Kuroda, Rihito; Park, Hyeonwoo; Wakashima, Shunichi; Goto, Tetsuya; Suwa, Tomoyuki; Sugawa, Shigetoshi
2018-04-01
Random telegraph noise (RTN), which occurs in in-pixel source follower (SF) transistors, has become one of the most critical problems in high-sensitivity CMOS image sensors (CIS) because it is a limiting factor of dark random noise. In this paper, the behaviors of RTN toward changes in SF drain current conditions were analyzed using a low-noise array test circuit measurement system with a floor noise of 35 µV rms. In addition to statistical analysis by measuring a large number of transistors (18048 transistors), we also analyzed the behaviors of RTN parameters such as amplitude and time constants in the individual transistors. It is demonstrated that the appearance probability of RTN becomes small under a small drain current condition, although large-amplitude RTN tends to appear in a very small number of cells.
Feldman, Daniel; Liu, Zuowei; Nath, Pran
2007-12-21
The minimal supersymmetric standard model with soft breaking has a large landscape of supersymmetric particle mass hierarchies. This number is reduced significantly in well-motivated scenarios such as minimal supergravity and alternatives. We carry out an analysis of the landscape for the first four lightest particles and identify at least 16 mass patterns, and provide benchmarks for each. We study the signature space for the patterns at the CERN Large Hadron Collider by analyzing the lepton + (>= 2 jets) + missing P_T signals with 0, 1, 2, and 3 leptons. Correlations in missing P_T are also analyzed. It is found that even with 10 fb^-1 of data a significant discrimination among patterns emerges.
Multiplex-Ready Technology for mid-throughput genotyping of molecular markers.
Bonneau, Julien; Hayden, Matthew
2014-01-01
Screening molecular markers across large populations in breeding programs is generally time consuming and expensive. The Multiplex-Ready Technology (MRT) (Hayden et al., BMC Genomics 9:80, 2008) was created to optimize polymorphism screening and genotyping using standardized PCR reaction conditions. The flexibility of this method maximizes the number of markers (up to 24 SSR or SNP markers, ideally highly polymorphic with small PCR products <500 bp) by using fluorescent dyes (VIC, FAM, NED, and PET) and semiautomated capillary electrophoresis on a DNA fragment analyzer (ABI3730) for large numbers of DNA samples (96 or 384 samples).
Markov-modulated Markov chains and the covarion process of molecular evolution.
Galtier, N; Jean-Marie, A
2004-01-01
The covarion (or site specific rate variation, SSRV) process of biological sequence evolution is a process by which the evolutionary rate of a nucleotide/amino acid/codon position can change in time. In this paper, we introduce time-continuous, space-discrete, Markov-modulated Markov chains as a model for representing SSRV processes, generalizing existing theory to any model of rate change. We propose a fast algorithm for diagonalizing the generator matrix of relevant Markov-modulated Markov processes. This algorithm makes phylogeny likelihood calculation tractable even for a large number of rate classes and a large number of states, so that SSRV models become applicable to amino acid or codon sequence datasets. Using this algorithm, we investigate the accuracy of the discrete approximation to the Gamma distribution of evolutionary rates, widely used in molecular phylogeny. We show that a relatively large number of classes is required to achieve accurate approximation of the exact likelihood when the number of analyzed sequences exceeds 20, both under the SSRV and among site rate variation (ASRV) models.
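The generator of the modulated chain lives on the product space (rate class, nucleotide state) and, under the usual assumption that rate switching is independent of the sequence state, assembles from Kronecker products; a minimal sketch with a Jukes-Cantor substitution generator and two rate classes:

```python
import numpy as np

def modulated_generator(Q, S, rates):
    """Generator on (rate class, state): within class k the chain runs
    at rates[k] * Q; classes switch according to S, independent of state."""
    n = Q.shape[0]
    return np.kron(np.diag(rates), Q) + np.kron(S, np.eye(n))

# Jukes-Cantor 4-state substitution generator and a 2-class rate switch
Q = 0.25 * (np.ones((4, 4)) - 4 * np.eye(4))
S = np.array([[-0.1, 0.1],
              [0.1, -0.1]])
G = modulated_generator(Q, S, rates=[0.2, 1.8])

assert np.allclose(G.sum(axis=1), 0.0)   # rows of a generator sum to zero
# Transition probabilities over a branch of length t: scipy.linalg.expm(G * t);
# diagonalizing G once, as the paper's algorithm does, makes repeated
# evaluations over many branch lengths cheap.
```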
HUMAN EXPOSURE ASSESSMENT USING IMMUNOASSAY
The National Exposure Research Laboratory-Las Vegas is developing analytical methods for human exposure assessment studies. Critical exposure studies generate a large number of samples which must be analyzed in a reliable, cost-effective and timely manner. TCP (3,5,6-trichlor...
Impact of Medicare on the Use of Medical Services by Disabled Beneficiaries, 1972-1974
Deacon, Ronald W.
1979-01-01
The extension of Medicare coverage in 1973 to disabled persons receiving cash benefits under the Social Security Act provided an opportunity to examine the impact of health insurance coverage on utilization and expenses for Part B services. Data on medical services used both before and after coverage, collected through the Current Medicare Survey, were analyzed. Results indicate that access to care (as measured by the number of persons using services) increased slightly, while the rate of use did not. The large increase in the number of persons eligible for Medicare reflected the large increase in the number of cash beneficiaries. Significant increases also were found in the amount charged for medical services. The absence of large increases in access and service use may be attributed, in part, to the already existing source of third party payment available to disabled cash beneficiaries in 1972, before Medicare coverage. PMID:10316939
Substructure coupling in the frequency domain
NASA Technical Reports Server (NTRS)
1985-01-01
Frequency domain analysis was found to be a suitable method for determining the transient response of systems subjected to a wide variety of loads. However, since a large number of calculations are performed within the discrete frequency loop, the method loses its computational efficiency if the loads must be represented by a large number of discrete frequencies. It was also discovered that substructure coupling in the frequency domain works particularly well for analyzing structural systems with a small number of interface and loaded degrees of freedom. It was discovered that substructure coupling in the frequency domain can lead to an efficient method of obtaining natural frequencies of undamped structures. It was also found that the damped natural frequencies of a system may be determined using frequency domain techniques.
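The core frequency-domain procedure is: transform the load, multiply by the frequency response function, and transform back. A minimal single-degree-of-freedom sketch (all parameters illustrative):

```python
import numpy as np

# Transient response of a damped oscillator via the frequency domain:
# X(w) = H(w) F(w), then inverse FFT. The record is padded long enough
# that the response decays within it (circular-convolution guard).
m, c, k = 1.0, 0.5, 100.0        # mass, damping, stiffness
dt, n = 0.01, 8192

f = np.zeros(n)
f[:50] = 1.0                     # short rectangular pulse load

w = 2.0 * np.pi * np.fft.rfftfreq(n, dt)
H = 1.0 / (-m * w**2 + 1j * c * w + k)        # receptance FRF
x = np.fft.irfft(np.fft.rfft(f) * H, n)       # displacement history

print(f"peak displacement: {x.max():.4f}")
```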
Vector computer memory bank contention
NASA Technical Reports Server (NTRS)
Bailey, D. H.
1985-01-01
A number of vector supercomputers feature very large memories. Unfortunately the large capacity memory chips that are used in these computers are much slower than the fast central processing unit (CPU) circuitry. As a result, memory bank reservation times (in CPU ticks) are much longer than on previous generations of computers. A consequence of these long reservation times is that memory bank contention is sharply increased, resulting in significantly lowered performance rates. The phenomenon of memory bank contention in vector computers is analyzed using both a Markov chain model and a Monte Carlo simulation program. The results of this analysis indicate that future generations of supercomputers must either employ much faster memory chips or else feature very large numbers of independent memory banks.
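A toy model in the spirit of the Monte Carlo analysis described can reproduce the effect; the bank count, reservation time, and access strides below are illustrative, not those of any particular machine:

```python
import numpy as np

def bank_contention(n_banks=64, reserve=8, stride=1, n_access=100_000):
    """One memory access per CPU tick; an access to a still-reserved
    bank stalls until the bank frees up. Returns the slowdown factor
    relative to contention-free operation (1.0 = no stalls)."""
    free_at = np.zeros(n_banks)              # tick at which each bank frees
    tick = 0.0
    for i in range(n_access):
        bank = (i * stride) % n_banks
        tick = max(tick + 1.0, free_at[bank])  # stall if bank still reserved
        free_at[bank] = tick + reserve
    return tick / n_access

for stride in (1, 8, 32, 64):
    print(f"stride {stride:2d}: slowdown x{bank_contention(stride=stride):.1f}")
# Unit stride cycles through all banks and never stalls; a stride equal
# to the bank count hits one bank repeatedly and pays the full
# reservation time on every access.
```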
Operational Evaluation of the Rapid Viability PCR Method for ...
This research work has a significant impact on the use of the RV-PCR method to analyze post-decontamination environmental samples during an anthrax event. The method has shown 98% agreement with the traditional culture-based method. With such success, this method, upon validation, will significantly increase the laboratory throughput/capacity to analyze a large number of anthrax event samples in a relatively short time.
Global bending quantum number and the absence of monodromy in the HCN⇌CNH molecule
DOE Office of Scientific and Technical Information (OSTI.GOV)
Efstathiou, K.; Sadovskii, D.A.; Joyeux, M.
We introduce and analyze a model system based on a deformation of a spherical pendulum that can be used to reproduce large amplitude bending vibrations of flexible triatomic molecules with two stable linear equilibria. On the basis of our model and the recent vibrational potential [J. Chem. Phys. 115, 3706 (2001)], we analyze the HCN/CNH isomerizing molecule. We find that HCN/CNH has no monodromy and introduce the second global bending quantum number for this system at all energies where the potential is expected to work. We also show that LiNC/LiCN is a qualitatively different system with monodromy.
Characterization of glycoprotein biopharmaceutical products by Caliper LC90 CE-SDS gel technology.
Chen, Grace; Ha, Sha; Rustandi, Richard R
2013-01-01
Over the last decade, science has greatly improved in the area of protein sizing and characterization. Efficient high-throughput methods are now available to substitute for the traditional labor-intensive SDS-PAGE methods, which take days to analyze a very limited number of samples. Currently, PerkinElmer® (Caliper) has designed an automated chip-based fluorescence detection method capable of analyzing proteins in minutes with sensitivity similar to standard SDS-PAGE. Here, we describe the use and implementation of this technology to characterize and screen a large number of formulations of target glycoproteins in the 14-200 kDa molecular weight range.
High energy behavior of gravity at large N
DOE Office of Scientific and Technical Information (OSTI.GOV)
Canfora, F.
2006-09-15
A first step in the analysis of the renormalizability of gravity at large N is carried out. Suitable resummations of planar diagrams give rise to a theory in which there is only a finite number of primitive, superficially divergent, Feynman diagrams. The mechanism is similar to the one which makes the 3D Gross-Neveu model renormalizable at large N. The connections with gravitational confinement and Kawai-Lewellen-Tye relations are briefly analyzed. Some potential problems in fulfilling the Zinn-Justin equations are pointed out.
Profiling 976 ToxCast chemicals across 331 enzymatic and receptor signaling assays
Understanding potential health risks is a significant challenge for large numbers of diverse chemicals with poorly characterized exposures and mechanisms of toxicities. The present study analyzes chemical-target activity profiles of 976 chemicals (including failed pharmaceuticals...
Winiecki, A.L.; Kroop, D.C.; McGee, M.K.; Lenkszus, F.R.
1984-01-01
An analytical instrument and particularly a time-of-flight-mass spectrometer for processing a large number of analog signals irregularly spaced over a spectrum, with programmable masking of portions of the spectrum where signals are unlikely in order to reduce memory requirements and/or with a signal capturing assembly having a plurality of signal capturing devices fewer in number than the analog signals for use in repeated cycles within the data processing time period.
Molecular inversion probe assay for allelic quantitation
Ji, Hanlee; Welch, Katrina
2010-01-01
Molecular inversion probe (MIP) technology has been demonstrated to be a robust platform for large-scale dual genotyping and copy number analysis. Applications in human genomic and genetic studies include the possibility of running dual germline genotyping and combined copy number variation ascertainment. MIPs analyze large numbers of specific genetic target sequences in parallel, relying on interrogation of a barcode tag rather than direct hybridization of genomic DNA to an array. The MIP approach does not replace, but is complementary to, many of the copy number technologies being used today. Some specific advantages of MIP technology include: less DNA required (37 ng vs. 250 ng), DNA quality less important, more dynamic range (amplifications detected up to copy number 60), "cleaner" allele-specific information (less SNP crosstalk/contamination), and better marker quality (fewer individual MIPs versus SNPs needed to identify copy number changes). MIPs can be considered a candidate gene (targeted whole genome) approach and can find specific areas of interest that otherwise may be missed with other methods.
Stability analysis of a Vlasov-Wave system describing particles interacting with their environment
NASA Astrophysics Data System (ADS)
De Bièvre, Stephan; Goudon, Thierry; Vavasseur, Arthur
2018-06-01
We study a kinetic equation of the Vlasov-Wave type, which arises in the description of the behavior of a large number of particles interacting weakly with an environment, composed of an infinite collection of local vibrational degrees of freedom, modeled by wave equations. We use variational techniques to establish the existence of large families of stationary states for this system, and analyze their stability.
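Schematically, systems of this Vlasov-Wave type couple a kinetic equation for the particle distribution f(t, x, v) to wave equations for the environment's vibrational field; in rough form (our schematic rendering, not the paper's exact equations or kernels):

```latex
% f(t,x,v): particle distribution; \psi(t,x,z): vibrational field;
% V: external confining potential; \sigma_1, \sigma_2: coupling form factors.
\partial_t f + v\cdot\nabla_x f
  - \nabla_x\bigl(V(x) + \Phi[\psi](t,x)\bigr)\cdot\nabla_v f = 0,
\qquad
\partial_t^2 \psi - c^2\,\Delta_z \psi
  = -\,\sigma_2(z)\,\bigl(\sigma_1 \ast_x \rho_f\bigr)(t,x),
```

where ρ_f(t, x) = ∫ f dv is the spatial particle density and Φ[ψ] is the potential the vibrational field exerts back on the particles, obtained by smearing ψ with the same coupling functions.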
A vectorization of the Hess McDonnell Douglas potential flow program NUED for the STAR-100 computer
NASA Technical Reports Server (NTRS)
Boney, L. R.; Smith, R. E., Jr.
1979-01-01
The computer program NUED, which analyzes potential flow about arbitrary three-dimensional lifting bodies using the panel method, was modified to use vector operations and run on the STAR-100 computer. High computational speed and the ability to approximate the body surface with a large number of panels are characteristics of the new program, NUEDV. The new program shows that vector operations can be readily implemented in programs of this type to increase computational speed on the STAR-100 computer. The virtual-memory architecture of the STAR-100 facilitates the use of large numbers of panels to approximate the body surface.
USAF solar thermal applications overview
NASA Technical Reports Server (NTRS)
Hauger, J. S.; Simpson, J. A.
1981-01-01
Process heat applications were compared with solar thermal technologies. The generic process heat applications were analyzed for solar thermal technology utilization, using SERI's PROSYS/ECONOMAT model in an end-use matching analysis, and a separate analysis was made for solar ponds. Solar technologies appear attractive in a large number of applications. Low temperature applications at sites with high insolation and high fuel costs were found to be most attractive. No single solar thermal technology emerges as clearly universal or preferred; however, solar ponds offer a potential high payoff in a few selected applications. It was shown that troughs and flat plate systems are cost effective in a large number of applications.
Analysis of a Distributed Pulse Power System Using a Circuit Analysis Code
1979-06-01
A sophisticated computer code (SCEPTRE), used to analyze electronic circuits, was used to evaluate the performance of a large flash X-ray machine. The computed dose rate was integrated to give a number that could be compared with measurements made using thermoluminescent dosimeters (TLDs).
Role of Brazilian zoos in ex situ bird conservation: from 1981 to 2005.
Azevedo, Cristiano S; Young, Robert J; Rodrigues, Marcos
2011-01-01
Zoos may play an important role in conservation when they maintain and breed large numbers of animals that are threatened with extinction. Bird conservation is in a privileged situation owing to the extensive biological information available about this class. Annual inventories produced by the "Sociedade de Zoológicos do Brasil" in the years 1981, 1990, 2000, and 2005 were analyzed. Variables, such as the number of zoos per geographic region; number of birds held; number of bird species in each IUCN threat category; number of exotic and native bird species; number of potentially breeding bird species; number of bird species in each order; and number of threatened bird species breeding, were analyzed. Brazilian zoos kept more than 350 bird species. The number of bird species and specimens held by the Brazilian Zoos increased from 1981 to 2000, but decreased in 2005. The same pattern was observed for the number of species in each IUCN threat category. Results showed that the potential of the Brazilian zoos in bird conservation needs to be enhanced because they maintain threatened species but do not implement systematic genetic, reproductive, or behavioral management protocols for most species. © 2010 Wiley Periodicals, Inc.
Amplified emission and lasing in a plasmonic nanolaser with many three-level molecules
NASA Astrophysics Data System (ADS)
Zhang, Yuan; Mølmer, Klaus
2018-01-01
Steady-state plasmonic lasing is studied theoretically for a system consisting of many dye molecules arranged regularly around a gold nanosphere. A three-level model with realistic molecular dissipation is employed to analyze the performance as a function of the pump field amplitude and number of molecules. Few molecules and moderate pumping produce a single narrow emission peak because the excited molecules transfer energy to a single dipole plasmon mode by amplified spontaneous emission. Under strong pumping, the single peak splits into broader and weaker emission peaks because two molecular excited levels interfere with each other through coherent coupling with the pump field and with the dipole plasmon field. A large number of molecules gives rise to a Poisson-like distribution of plasmon number states with a large mean number characteristic of lasing action. These characteristics of lasing, however, deteriorate under strong pumping because of the molecular interference effect.
Zhao, Min; Wang, Qingguo; Wang, Quan; Jia, Peilin; Zhao, Zhongming
2013-01-01
Copy number variation (CNV) is a prevalent form of critical genetic variation that leads to an abnormal number of copies of large genomic regions in a cell. Microarray-based comparative genome hybridization (arrayCGH) or genotyping arrays have been standard technologies to detect large regions subject to copy number changes in genomes until most recently high-resolution sequence data can be analyzed by next-generation sequencing (NGS). During the last several years, NGS-based analysis has been widely applied to identify CNVs in both healthy and diseased individuals. Correspondingly, the strong demand for NGS-based CNV analyses has fuelled development of numerous computational methods and tools for CNV detection. In this article, we review the recent advances in computational methods pertaining to CNV detection using whole genome and whole exome sequencing data. Additionally, we discuss their strengths and weaknesses and suggest directions for future development. PMID:24564169
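As a concrete illustration of the read-depth signal such methods exploit, the toy sketch below compares per-window read counts with a genome-wide baseline; real tools add GC-bias correction, proper segmentation, and matched-sample logic, and all numbers here are simulated.

```python
# Toy read-depth CNV call: flag windows whose depth departs from the
# diploid baseline. Window counts and the duplication are simulated.
import numpy as np

rng = np.random.default_rng(2)
depth = rng.poisson(100, size=200).astype(float)   # 200 windows, 2 copies
depth[120:140] *= 1.5                              # simulated duplication

copy_ratio = depth / np.median(depth)              # ~1.0 at 2 copies
est_copies = np.round(2 * copy_ratio)
for start in np.flatnonzero(np.diff(est_copies, prepend=2) != 0):
    print(f"window {start}: estimated copy number {int(est_copies[start])}")
```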
Heidelberg Retina Tomography Analysis in Optic Disks with Anatomic Particularities
Alexandrescu, C; Pascu, R; Ilinca, R; Popescu, V; Ciuluvica, R; Voinea, L; Celea, C
2010-01-01
Due to its objectivity, reproducibility, and predictive value confirmed by many large-scale statistical clinical studies, Heidelberg Retina Tomography has become one of the most widely used computerized image analyses of the optic disc in glaucoma. It has been noted, though, that the diagnostic value of the Moorfields Regression Analysis and the Glaucoma Probability Score decreases when analyzing optic discs with extreme sizes. The number of false positive results increases in cases of megalopapillae, and the number of false negative results increases in cases of small optic discs. The present paper is a review of the aspects one should take into account when analyzing a HRT result of an optic disc with anatomic particularities. PMID:21254731
Photon number amplification/duplication through parametric conversion
NASA Technical Reports Server (NTRS)
Dariano, G. M.; Macchiavello, C.; Paris, M.
1993-01-01
The performance of parametric conversion in achieving number amplification and duplication is analyzed. It is shown that the effective maximum gains G_* remain well below their integer ideal values, even for large signals. Correspondingly, one has output Fano factors F_* which are increasing functions of the input photon number. On the other hand, in the inverse (deamplifier/recombiner) operating mode, quasi-ideal gains G_* and small Fano factors F_* approximately equal to 10 percent are obtained. Output noise and non-ideal gains are ascribed to spontaneous parametric emission.
Sampling large random knots in a confined space
NASA Astrophysics Data System (ADS)
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and complicated knot invariants (such as those observed experimentally), in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order of O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
ERIC Educational Resources Information Center
Gagnon, Douglas; Mattingly, Marybeth; Connelly, Vincent J.
2014-01-01
In 2013, Carsey released a brief that analyzed rates of restraint and seclusion using a large, nationally representative data set of U.S. school districts. This brief, which analyzes a more comprehensive data set and the most current Civil Rights Data Collection, serves as a follow-up to the previous brief. Authors Douglas Gagnon, Marybeth…
ImatraNMR: Novel software for batch integration and analysis of quantitative NMR spectra
NASA Astrophysics Data System (ADS)
Mäkelä, A. V.; Heikkilä, O.; Kilpeläinen, I.; Heikkinen, S.
2011-08-01
Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to the traditional quantitative 1D 1H and 13C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or few spectra, and are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present a novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to the capability of analyzing large numbers of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs, such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; currently, however, it has only been tested with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request.
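To make the batch-integration workflow concrete, here is a minimal sketch in the same spirit (not ImatraNMR itself, which is Java): integrate fixed ppm windows across many 1-D spectra and emit one CSV row per spectrum; the windows and synthetic spectra are invented for the example.

```python
# Batch integration of the same ppm regions across many 1-D spectra,
# written out as one CSV row per spectrum. All data are synthetic.
import csv
import numpy as np

regions = [(1.10, 1.30), (3.30, 3.70)]         # illustrative ppm windows

def integrate(ppm, intensity, lo, hi):
    mask = (ppm >= lo) & (ppm <= hi)
    return np.trapz(intensity[mask], ppm[mask])

ppm = np.linspace(0, 10, 2**14)
spectra = {f"sample_{i}": (i + 1) * np.exp(-((ppm - 1.2) ** 2) / 0.001)
           for i in range(3)}                   # synthetic 1-D spectra

with open("integrals.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["spectrum"] + [f"{lo}-{hi} ppm" for lo, hi in regions])
    for name, inten in spectra.items():
        writer.writerow([name] + [f"{integrate(ppm, inten, lo, hi):.4g}"
                                  for lo, hi in regions])
print("wrote integrals.csv")
```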
Analyzing Predictors of High Opioid Use in the U.S. Navy
2016-09-01
minimalist approach to treating chronic pain patients (Levy, Netzer, & Pikulin, 2014). In 2015, the CDC published suggested guidelines for prescribing...appropriate number of trees as overfitting can occur if this parameter is too large. For the shrinkage parameter, we used a recommended starting
USING THE AIR QUALITY MODEL TO ANALYZE THE CONCENTRATIONS OF AIR TOXICS OVER THE CONTINENTAL U.S.
The U.S. Environmental Protection Agency is examining the concentrations and deposition of hazardous air pollutants (HAPs), a large number of chemicals ranging from nonreactive (e.g., carbon tetrachloride) to reactive (e.g., formaldehyde) that exist in gas, aqueous, and...
Genetic structure of populations and differentiation in forest trees
Raymond P. Guries; F. Thomas Ledig
1981-01-01
Electrophoretic techniques permit population biologists to analyze genetic structure of natural populations by using large numbers of allozyme loci. Several methods of analysis have been applied to allozyme data, including chi-square contingency tests, F-statistics, and genetic distance. This paper compares such statistics for pitch pine (Pinus rigida...
Veal, Colin D.; Xu, Hang; Reekie, Katherine; Free, Robert; Hardwick, Robert J.; McVey, David; Brookes, Anthony J.; Hollox, Edward J.; Talbot, Christopher J.
2013-01-01
Motivation: Genomic copy number variation (CNV) can influence susceptibility to common diseases. High-throughput measurement of gene copy number on large numbers of samples is a challenging, yet critical, stage in confirming observations from sequencing or array Comparative Genome Hybridization (CGH). The paralogue ratio test (PRT) is a simple, cost-effective method of accurately determining copy number by quantifying the amplification ratio between a target and reference amplicon. PRT has been successfully applied to several studies analyzing common CNV. However, its use has not been widespread because of difficulties in assay design. Results: We present PRTPrimer (www.prtprimer.org) software for automated PRT assay design. In addition to stand-alone software, the web site includes a database of pre-designed assays for the human genome at an average spacing of 6 kb and a web interface for custom assay design. Other reference genomes can also be analyzed through local installation of the software. The usefulness of PRTPrimer was tested within known CNV, and showed reproducible quantification. This software and database provide assays that can rapidly genotype CNV, cost-effectively, on a large number of samples and will enable the widespread adoption of PRT. Availability: PRTPrimer is available in two forms: a Perl script (version 5.14 and higher) that can be run from the command line on Linux systems and as a service on the PRTPrimer web site (www.prtprimer.org). Contact: cjt14@le.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:23742985
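The core PRT arithmetic is simple enough to sketch: the target-to-reference amplification ratio, multiplied by the known reference copy number, estimates the target copy number. The peak areas below are invented for illustration and are not drawn from the PRTPrimer database.

```python
# Paralogue ratio test (PRT) arithmetic: copy number is inferred from
# the ratio of target to reference amplicon intensities, scaled by the
# known reference copy number (2 for a diploid reference locus).
def prt_copy_number(target_peak_area, reference_peak_area, reference_copies=2):
    ratio = target_peak_area / reference_peak_area
    return reference_copies * ratio

# Invented peak areas for three samples
for target, ref in [(980.0, 1010.0), (1490.0, 1000.0), (520.0, 990.0)]:
    cn = prt_copy_number(target, ref)
    print(f"ratio {target / ref:.2f} -> estimated copy number {round(cn)}")
```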
Gomes, Chandima
2012-11-01
This paper addresses a concurrent multidisciplinary problem: animal safety against lightning hazards. In regions where lightning is prevalent, either seasonally or throughout the year, a considerable number of wild, captive and tame animals are injured due to lightning generated effects. The paper discusses all possible injury mechanisms, focusing mainly on animals with commercial value. A large number of cases from several countries have been analyzed. Economically and practically viable engineering solutions are proposed to address the issues related to the lightning threats discussed.
Zhu, Dan; Zhou, Gang; Xu, Caiguo; Zhang, Qifa
2016-02-20
Utilization of heterosis has greatly contributed to rice productivity in China and many Asian countries. Superior hybrids usually show heterosis at two stages, canopy development at the vegetative stage and panicle development at the reproductive stage, resulting in heterosis in yield. Although the genetic basis of heterosis in rice has been extensively investigated, all the previous studies focused on yield traits at the maturity stage. In this study, we analyzed the genetic basis of heterosis at the seedling stage making use of an "immortalized F2" population composed of 105 hybrids produced by intercrossing recombinant inbred lines (RILs) from a cross between Zhenshan 97 and Minghui 63, the parents of Shanyou 63, an elite hybrid widely grown in China. Eight seedling traits (seedling height, tiller number, leaf number, root number, maximum root length, root dry weight, shoot dry weight, and total dry weight) were investigated using hydroponic culture. We analyzed single-locus and digenic genetic effects at the whole-genome level using an ultrahigh-density SNP bin map obtained by population re-sequencing. The analysis revealed large numbers of heterotic effects for seedling traits, including dominance, overdominance, and digenic dominance (epistasis), in both positive and negative directions. Overdominance effects were prevalent for all the traits, and digenic dominance effects also accounted for a large portion of the genetic effects. The results suggested that cumulative small advantages of the single-locus effects and two-locus interactions, most of which could not be detected statistically, could explain the genetic basis of seedling heterosis of the F1 hybrid. Copyright © 2016 Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and Genetics Society of China. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Yiǧit, Erdal; Kilcik, Ali; Elias, Ana Georgina; Dönmez, Burçin; Ozguc, Atila; Yurchshyn, Vasyl; Rozelot, Jean-Pierre
2018-06-01
The long-term solar activity dependencies of the ionospheric F1 and F2 regions' critical frequencies (f0F1 and f0F2) are analyzed for the last four solar cycles (1976-2015). We show that the ionospheric F1 and F2 regions have different solar activity dependencies in terms of sunspot group (SG) numbers: the F1 region critical frequency (f0F1) peaks at the same time as the small SG numbers, while f0F2 reaches its maximum at the same time as the large SG numbers, especially during solar cycle 23. The observed differences in the sensitivity of ionospheric critical frequencies to SG numbers provide a new insight into the solar activity effects on the ionosphere and space weather. While the F1 layer is influenced by the slow solar wind, which is largely associated with small SGs, the ionospheric F2 layer is more sensitive to Coronal Mass Ejections (CMEs) and fast solar winds, which are mainly produced by large SGs and coronal holes. The SG numbers maximize during the peak of the solar cycle, and the number of coronal holes peaks during the sunspot declining phase. During solar minimum there are relatively few large SGs, hence reduced CME and flare activity. These results provide a new perspective for assessing how the different regions of the ionosphere respond to space weather effects.
Scalable Performance Measurement and Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gamblin, Todd
2009-01-01
Concurrency levels in large-scale, distributed-memory supercomputers are rising exponentially. Modern machines may contain 100,000 or more microprocessor cores, and the largest of these, IBM's Blue Gene/L, contains over 200,000 cores. Future systems are expected to support millions of concurrent tasks. In this dissertation, we focus on efficient techniques for measuring and analyzing the performance of applications running on very large parallel machines. Tuning the performance of large-scale applications can be a subtle and time-consuming task because application developers must measure and interpret data from many independent processes. While the volume of the raw data scales linearly with the number of tasks in the running system, the number of tasks is growing exponentially, and data for even small systems quickly becomes unmanageable. Transporting performance data from so many processes over a network can perturb application performance and make measurements inaccurate, and storing such data would require a prohibitive amount of space. Moreover, even if it were stored, analyzing the data would be extremely time-consuming. In this dissertation, we present novel methods for reducing performance data volume. The first draws on multi-scale wavelet techniques from signal processing to compress systemwide, time-varying load-balance data. The second uses statistical sampling to select a small subset of running processes to generate low-volume traces. A third approach combines sampling and wavelet compression to stratify performance data adaptively at run-time and to reduce further the cost of sampled tracing. We have integrated these approaches into Libra, a toolset for scalable load-balance analysis. We present Libra and show how it can be used to analyze data from large scientific applications scalably.
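As a rough illustration of the wavelet-compression idea (not Libra's actual pipeline), the sketch below transforms a synthetic per-task load trace, keeps only the largest 5% of coefficients, and measures the reconstruction error. It assumes the third-party PyWavelets package, and the signal, wavelet choice, and threshold are all invented.

```python
# Compress a per-task load trace by thresholding wavelet coefficients,
# then reconstruct and report the error. Requires PyWavelets (pywt).
import numpy as np
import pywt

rng = np.random.default_rng(0)
tasks = np.arange(4096)
load = np.sin(tasks / 200.0) + 0.05 * rng.normal(size=tasks.size)  # toy trace

coeffs = pywt.wavedec(load, "db4", level=6)
flat = np.concatenate(coeffs)
threshold = np.quantile(np.abs(flat), 0.95)        # keep top 5% of coeffs
kept = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]

recon = pywt.waverec(kept, "db4")[: load.size]
rms = np.sqrt(np.mean((recon - load) ** 2))
n_kept = sum(int((np.abs(c) >= threshold).sum()) for c in coeffs)
print(f"kept {n_kept} of {flat.size} coefficients, RMS error {rms:.3f}")
```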
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mankey, G J; Morton, S A; Tobin, J G
A spin- and angle-resolved x-ray photoelectron spectrometer for the study of magnetic materials will be discussed. It consists of a turntable with electron lenses connected to a large hemispherical analyzer. A mini-Mott spin detector is fitted to the output of the hemispherical analyzer. This system, when coupled to a synchrotron radiation source, will allow determination of a complete set of quantum numbers of a photoelectron. This instrument will be used to study ferromagnetic, antiferromagnetic, and nonmagnetic materials. Some prototypical materials systems to be studied with this instrument will be proposed.
NASA Astrophysics Data System (ADS)
Ahmed, Bilal; Javed, Tariq; Ali, N.
2018-01-01
This paper analyzes the MHD flow of a micropolar fluid induced by peristaltic waves passing through a porous saturated channel at large Reynolds number. The flow model is formulated without the assumptions of lubrication theory, which yields a non-linear set of coupled partial differential equations and allows the peristaltic mechanism to be studied at non-zero Reynolds and wave numbers. The influence of the other involved parameters on velocity, stream function, and microrotation is discussed through graphs plotted using Galerkin's finite element method. Besides that, the phenomena of pumping and trapping are analyzed in the later part of the paper. To ensure the accuracy of the developed code, the obtained results are compared with results available in the literature and found to be in excellent agreement. It is found that peristaltic mixing can be enhanced by increasing the Hartmann number, while it reduces with increasing permeability of the porous medium.
Microbial community analysis using MEGAN.
Huson, Daniel H; Weber, Nico
2013-01-01
Metagenomics, the study of microbes in the environment using DNA sequencing, depends upon dedicated software tools for processing and analyzing very large sequencing datasets. One such tool is MEGAN (MEtaGenome ANalyzer), which can be used to interactively analyze and compare metagenomic and metatranscriptomic data, both taxonomically and functionally. To perform a taxonomic analysis, the program places the reads onto the NCBI taxonomy, while functional analysis is performed by mapping reads to the SEED, COG, and KEGG classifications. Samples can be compared taxonomically and functionally, using a wide range of different charting and visualization techniques. PCoA analysis and clustering methods allow high-level comparison of large numbers of samples. Different attributes of the samples can be captured and used within analysis. The program supports various input formats for loading data and can export analysis results in different text-based and graphical formats. The program is designed to work with very large samples containing many millions of reads. It is written in Java and installers for the three major computer operating systems are available from http://www-ab.informatik.uni-tuebingen.de. © 2013 Elsevier Inc. All rights reserved.
Chiu, Chi-yang; Jung, Jeesun; Wang, Yifan; Weeks, Daniel E.; Wilson, Alexander F.; Bailey-Wilson, Joan E.; Amos, Christopher I.; Mills, James L.; Boehnke, Michael; Xiong, Momiao; Fan, Ruzong
2016-01-01
In this paper, extensive simulations are performed to compare two statistical methods to analyze multiple correlated quantitative phenotypes: (1) approximate F-distributed tests of multivariate functional linear models (MFLM) and additive models of multivariate analysis of variance (MANOVA), and (2) Gene Association with Multiple Traits (GAMuT) for association testing of high-dimensional genotype data. It is shown that approximate F-distributed tests of MFLM and MANOVA have higher power and are more appropriate for major gene association analysis (i.e., scenarios in which some genetic variants have relatively large effects on the phenotypes); GAMuT has higher power and is more appropriate for analyzing polygenic effects (i.e., effects from a large number of genetic variants each of which contributes a small amount to the phenotypes). MFLM and MANOVA are very flexible and can be used to perform association analysis for: (i) rare variants, (ii) common variants, and (iii) a combination of rare and common variants. Although GAMuT was designed to analyze rare variants, it can be applied to analyze a combination of rare and common variants and it performs well when (1) the number of genetic variants is large and (2) each variant contributes a small amount to the phenotypes (i.e., polygenes). MFLM and MANOVA are fixed effect models which perform well for major gene association analysis. GAMuT can be viewed as an extension of sequence kernel association tests (SKAT). Both GAMuT and SKAT are more appropriate for analyzing polygenic effects and they perform well not only in the rare variant case, but also in the case of a combination of rare and common variants. Data analyses of European cohorts and the Trinity Students Study are presented to compare the performance of the two methods. PMID:27917525
Rietschel, Marcella; Mattheisen, Manuel; Breuer, René; Schulze, Thomas G.; Nöthen, Markus M.; Levinson, Douglas; Shi, Jianxin; Gejman, Pablo V.; Cichon, Sven; Ophoff, Roel A.
2012-01-01
Recent studies suggest that variation in complex disorders (e.g., schizophrenia) is explained by a large number of genetic variants with small effect size (Odds Ratio∼1.05–1.1). The statistical power to detect these genetic variants in Genome Wide Association (GWA) studies with large numbers of cases and controls (∼15,000) is still low. As it will be difficult to further increase sample size, we decided to explore an alternative method for analyzing GWA data in a study of schizophrenia, dramatically reducing the number of statistical tests. The underlying hypothesis was that at least some of the genetic variants related to a common outcome are collocated in segments of chromosomes at a wider scale than single genes. Our approach was therefore to study the association between relatively large segments of DNA and disease status. An association test was performed for each SNP and the number of nominally significant tests in a segment was counted. We then performed a permutation-based binomial test to determine whether this region contained significantly more nominally significant SNPs than expected under the null hypothesis of no association, taking linkage into account. Genome Wide Association data of three independent schizophrenia case/control cohorts with European ancestry (Dutch, German, and US) using segments of DNA with variable length (2 to 32 Mbp) was analyzed. Using this approach we identified a region at chromosome 5q23.3-q31.3 (128–160 Mbp) that was significantly enriched with nominally associated SNPs in three independent case-control samples. We conclude that considering relatively wide segments of chromosomes may reveal reliable relationships between the genome and schizophrenia, suggesting novel methodological possibilities as well as raising theoretical questions. PMID:22723893
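A toy version of this counting scheme is sketched below: count nominally significant SNPs in a segment and ask, by permutation, whether that count exceeds expectation. This sketch shuffles p-values freely, which ignores linkage disequilibrium, whereas the study's permutation scheme takes linkage into account; all data here are simulated.

```python
# Segment-enrichment test: is a chromosome segment enriched with
# nominally significant SNPs (p < alpha) relative to random draws?
import random

def segment_enrichment(pvals, seg_start, seg_end, alpha=0.05, n_perm=10000):
    seg_len = seg_end - seg_start
    observed = sum(p < alpha for p in pvals[seg_start:seg_end])
    hits = 0
    for _ in range(n_perm):
        perm = random.sample(pvals, seg_len)       # LD-free null draw
        if sum(p < alpha for p in perm) >= observed:
            hits += 1
    return observed, hits / n_perm                 # count, empirical p

random.seed(1)
pvals = [random.random() for _ in range(5000)]
pvals[100:150] = [p / 20 for p in pvals[100:150]]  # simulated enrichment
print(segment_enrichment(pvals, 100, 150))
```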
Financial performance and managed care trends of health centers.
Martin, Brian C; Shi, Leiyu; Ward, Ryan D
2009-01-01
Data were analyzed from the 1998-2004 Uniform Data System (UDS) to identify trends and predictors of financial performance (costs, productivity, and overall financial health) for health centers (HCs). Several differences were noted regarding revenues, self-sufficiency, service offerings, and urban/rural setting. Urban centers with larger numbers of clients, centers that treated high numbers of patients with chronic diseases, and centers with large numbers of prenatal care users were the most fiscally sound. Positive financial performance can be targeted through strategies that generate positive revenue, strive to decrease costs, and target services that are in demand.
NASA Technical Reports Server (NTRS)
Rock, M.; Kunigahalli, V.; Khan, S.; Mcnair, A.
1984-01-01
Sealed nickel cadmium cells having undergone a large number of cycles were discharged using the Hg/HgO reference electrode. The negative electrode exhibited the second plateau. SEM of negative plates of such cells shows clusters of large crystals of cadmium hydroxide. These large crystals on the negative plates disappear after continuous overcharging in flooded cells. Atomic Absorption Spectroscopy and standard wet chemical methods are being used to determine the cell materials, viz. nickel, cadmium, cobalt, potassium, and carbonate. The anodes and cathodes are analyzed after careful examination, and the condition of the separator material is evaluated.
Occurrence of 1153 organic micropollutants in the aquatic environment of Vietnam.
Chau, H T C; Kadokami, K; Duong, H T; Kong, L; Nguyen, T T; Nguyen, T Q; Ito, Y
2018-03-01
The rapid increase in the number and volume of chemical substances being used in modern society has been accompanied by a large number of potentially hazardous chemicals being found in environmental samples. In Vietnam, the monitoring of chemical substances is mainly limited to a small number of known pollutants in spite of rapid economic growth and urbanization, and there is an urgent need to examine a large number of chemicals to prevent impacts from expanding environmental pollution. However, it is difficult to analyze a large number of chemicals using existing methods, because they are time consuming and expensive. In the present study, we determined 1153 substances to obtain an overall picture of microcontaminant pollution in the aquatic environment. To achieve this objective, we used two comprehensive analytical methods: (1) solid-phase extraction (SPE) and LC-TOF-MS analysis, and (2) SPE and GC-MS analysis. We collected 42 samples from northern (the Red River and Hanoi), central (Hue and Danang), and southern (Ho Chi Minh City and Saigon-Dongnai River) Vietnam. One hundred and sixty-five compounds were detected at least once. The compounds detected most frequently (>40% of samples) at μg/L concentrations were sterols (cholesterol, beta-sitosterol, stigmasterol, coprostanol), phthalates (bis(2-ethylhexyl) phthalate and di-n-butyl phthalate), and pharmaceutical and personal care products (caffeine, metformin). These contaminants were detected at almost the same detection frequency as in developed countries. The results reveal that surface waters in Vietnam, particularly in the centers of large cities, are polluted by a large number of organic micropollutants, with households and business activities as the major sources. In addition, risk quotients (MEC/PNEC values) for nonylphenol, sulfamethoxazole, ampicillin, acetaminophen, erythromycin, and clarithromycin were higher than 1, which indicates a possibility of adverse effects on aquatic ecosystems.
[Telephone consultations on exposure to nuclear disaster radiation].
Yashima, Sachiko; Chida, Koichi
2014-03-01
The Fukushima nuclear disaster occurred on March 11, 2011. For about six weeks, I worked as a counselor for phone consultations regarding radiation risk. I analyzed the number of consultations, the telephone consultations themselves, and their changing patterns over time, to assist with future consultations about risk. There were a large number of questions regarding the effects of radiation, particularly with regard to children. We believe that counseling and risk communication are the key to effectively informing the public about radiation risks.
Investigation of the Large Scale Evolution and Topology of Coronal Mass Ejections in the Solar Wind
NASA Technical Reports Server (NTRS)
Riley, Pete
2001-01-01
This investigation is concerned with the large-scale evolution and topology of coronal mass ejections (CMEs) in the solar wind. During the course of this three-year investigation, we have undertaken a number of studies that are discussed in more detail in this report. For example, we conducted an analysis of all CMEs observed by the Ulysses spacecraft during its in-ecliptic phase between 1 and 5 AU. In addition to studying the properties of the ejecta, we also analyzed the shocks that could be unambiguously associated with the fast CMEs. We also analyzed a series of 'density holes' observed in the solar wind that bear many similarities with CMEs. To complement this analysis, we conducted a series of 1-D and 2 1/2-D fluid, MHD, and hybrid simulations to address a number of specific issues related to CME evolution in the solar wind. For example, we used fluid simulations to address the interpretation of negative electron temperature-density relationships often observed within CME/cloud intervals. As part of this investigation, a number of fruitful international collaborations were forged. Finally, the results of this work were presented at nine scientific meetings and communicated in eight scientific, refereed papers.
Mach Number effects on turbulent superstructures in wall bounded flows
NASA Astrophysics Data System (ADS)
Kaehler, Christian J.; Bross, Matthew; Scharnowski, Sven
2017-11-01
Planar and three-dimensional flow field measurements along a flat plate boundary layer in the Trisonic Wind Tunnel Munich (TWM) are examined with the aim to characterize the scaling, spatial organization, and topology of large scale turbulent superstructures in compressible flow. This facility is ideal for this investigation, as the ratio of boundary layer thickness to test section spanwise extent is around 1/25, ensuring minimal sidewall and corner effects on turbulent structures in the center of the test section. A major difficulty in the experimental investigation of large scale features is the size of the superstructures, which can extend over many boundary layer thicknesses. Using multiple PIV systems, it was possible to capture the full spatial extent of large-scale structures over a range of Mach numbers from Ma = 0.3 - 3. To calculate the average large-scale structure length and spacing, the acquired vector fields were analyzed by statistical multi-point methods, which show large scale structures with a correlation length of around 10 boundary layer thicknesses over the range of Mach numbers investigated. Furthermore, the average spacing between high and low momentum structures is on the order of a boundary layer thickness. This work is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the Deutsche Forschungsgemeinschaft.
ImatraNMR: novel software for batch integration and analysis of quantitative NMR spectra.
Mäkelä, A V; Heikkilä, O; Kilpeläinen, I; Heikkinen, S
2011-08-01
Quantitative NMR spectroscopy is a useful and important tool for the analysis of various mixtures. Recently, in addition to the traditional quantitative 1D (1)H and (13)C NMR methods, a variety of pulse sequences aimed at quantitative or semiquantitative analysis have been developed. To obtain usable results from quantitative spectra, they must be processed and analyzed with suitable software. Currently, there are many processing packages available from spectrometer manufacturers and third-party developers, and most of them are capable of analyzing and integrating quantitative spectra. However, they are mainly aimed at processing single or few spectra, and are slow and difficult to use when large numbers of spectra and signals are being analyzed, even when using pre-saved integration areas or custom scripting features. In this article, we present a novel software, ImatraNMR, designed for batch analysis of quantitative spectra. In addition to the capability of analyzing large numbers of spectra, it provides results in text and CSV formats, allowing further data analysis using spreadsheet programs or general analysis programs, such as Matlab. The software is written in Java, and thus it should run on any platform capable of providing Java Runtime Environment version 1.6 or newer; currently, however, it has only been tested with Windows and Linux (Ubuntu 10.04). The software is free for non-commercial use, and is provided with source code upon request. Copyright © 2011 Elsevier Inc. All rights reserved.
Comparison of the NCI open database with seven large chemical structural databases.
Voigt, J H; Bienfait, B; Wang, S; Nicklaus, M C
2001-01-01
Eight large chemical databases have been analyzed and compared to each other. Central to this comparison is the open National Cancer Institute (NCI) database, consisting of approximately 250 000 structures. The other databases analyzed are the Available Chemicals Directory ("ACD," from MDL, release 1.99, 3D-version); the ChemACX ("ACX," from CamSoft, Version 4.5); the Maybridge Catalog and the Asinex database (both as distributed by CamSoft as part of ChemInfo 4.5); the Sigma-Aldrich Catalog (CD-ROM, 1999 Version); the World Drug Index ("WDI," Derwent, version 1999.03); and the organic part of the Cambridge Crystallographic Database ("CSD," from Cambridge Crystallographic Data Center, 1999 Version 5.18). The database properties analyzed are internal duplication rates; compounds unique to each database; cumulative occurrence of compounds in an increasing number of databases; overlap of identical compounds between two databases; similarity overlap; diversity; and others. The crystallographic database CSD and the WDI show somewhat less overlap with the other databases than those with each other. In particular the collections of commercial compounds and compilations of vendor catalogs have a substantial degree of overlap among each other. Still, no database is completely a subset of any other, and each appears to have its own niche and thus "raison d'être". The NCI database has by far the highest number of compounds that are unique to it. Approximately 200 000 of the NCI structures were not found in any of the other analyzed databases.
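The overlap bookkeeping described above reduces to set operations once each database is represented by canonical structure identifiers (however computed: canonical SMILES, InChI, or similar); a toy sketch follows, with miniature invented "databases" in place of the real catalogs.

```python
# Pairwise overlap and per-database unique compounds, given sets of
# canonical structure identifiers. The miniature sets are invented.
from itertools import combinations

dbs = {
    "NCI": {"a", "b", "c", "d"},
    "ACD": {"b", "c", "e"},
    "WDI": {"c", "f"},
}

for name, ids in dbs.items():
    others = set.union(*(v for k, v in dbs.items() if k != name))
    print(f"{name}: {len(ids)} compounds, {len(ids - others)} unique")

for (n1, s1), (n2, s2) in combinations(dbs.items(), 2):
    print(f"{n1} & {n2}: {len(s1 & s2)} shared")
```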
USDA-ARS?s Scientific Manuscript database
With the rapid development of small imaging sensors and unmanned aerial vehicles (UAVs), remote sensing is undergoing a revolution with greatly increased spatial and temporal resolutions. While more relevant detail becomes available, it is a challenge to analyze the large number of images to extract...
Helping Young Children Understand Graphs: A Demonstration Study.
ERIC Educational Resources Information Center
Freeland, Kent; Madden, Wendy
1990-01-01
Outlines a demonstration lesson showing third graders how to make and interpret graphs. Includes descriptions of purpose, vocabulary, and learning activities in which students graph numbers of students with dogs at home and analyze the contents of M&M candy packages by color. Argues process helps students understand large amounts of abstract…
A Survey of the 1986 Canadian Library Systems Marketplace.
ERIC Educational Resources Information Center
Merilees, Bobbie
1987-01-01
This analysis of trends in the Canadian library systems marketplace in 1986 compares installations of large integrated systems and microcomputer-based systems by relative market share, and number of installations by type of library. Canadian vendors' sales in international markets are also analyzed, and a directory of vendors is provided. (Author/CLB)
NASA Technical Reports Server (NTRS)
Park, Steve
1990-01-01
A large and diverse set of computational techniques is routinely used to process and analyze remotely sensed data. These techniques include: univariate statistics; multivariate statistics; principal component analysis; pattern recognition and classification; other multivariate techniques; geometric correction; registration and resampling; radiometric correction; enhancement; restoration; Fourier analysis; and filtering. Each of these techniques will be considered, in order.
A Survey of Speech Programs in Community Colleges.
ERIC Educational Resources Information Center
Meyer, Arthur C.
The rapid growth of community colleges in the last decade resulted in large numbers of students enrolled in programs previously unavailable to them in a single comprehensive institution. The purpose of this study was to gather and analyze data to provide information about the speech programs that community colleges created or expanded as a result…
Demand for University Continuing Education in Canada: Who Participates and Why?
ERIC Educational Resources Information Center
Adamuti-Trache, Maria; Schuetze, Hans G.
2009-01-01
The demand for and participation in continuing education by Canadian university graduates who completed bachelor and/or first professional degrees in 1995 are analyzed in this article. Within five years of completing their first degree, in addition to participating in graduate programs, a large number of those graduates participated in non-degree…
USDA-ARS?s Scientific Manuscript database
Large datasets containing single nucleotide polymorphisms (SNPs) are used to analyze genome-wide diversity in a robust collection of cultivars from representative accessions, across the world. The extent of linkage disequilibrium (LD) within a population determines the number of markers required fo...
ERIC Educational Resources Information Center
Jobe, LaWanda D.
2013-01-01
African American women are enrolling and returning to college in large numbers across many community college campuses, especially those women who would be characterized as nontraditional students. This qualitative study examined and analyzed the experiences, stresses, and coping mechanisms of first generation, nontraditional, single parent,…
Application and Analysis of Measurement Model for Calibrating Spatial Shear Surface in Triaxial Test
NASA Astrophysics Data System (ADS)
Zhang, Zhihua; Qiu, Hongsheng; Zhang, Xiedong; Zhang, Hang
2017-12-01
The discrete element method has great advantages in simulating the contacts, fractures, large displacements, and deformations between particles. In order to analyze the spatial distribution of the shear surface in the three-dimensional triaxial test, a measurement model is inserted into the numerical triaxial model, which is generated by a weighted average assembling method. Because the internal shear surface is not visible in the laboratory, judging its trend only from the superficial cracks of the sheared sample is largely insufficient; the measurement model is therefore introduced. The trend of the internal shear zone is analyzed according to the variations of porosity, coordination number, and volumetric strain in each layer. As a case study at a confining stress of 0.8 MPa, the spatial shear surface is calibrated against the rotated particle distribution and the theoretical value, showing the specific characteristics of increased porosity, decreased coordination number, and increased volumetric strain, which indicates that the measurement model is applicable in the three-dimensional model.
Doubly robust matching estimators for high dimensional confounding adjustment.
Antonelli, Joseph; Cefalu, Matthew; Palmer, Nathan; Agniel, Denis
2018-05-11
Valid estimation of treatment effects from observational data requires proper control of confounding. If the number of covariates is large relative to the number of observations, then controlling for all available covariates is infeasible. In cases where a sparsity condition holds, variable selection or penalization can reduce the dimension of the covariate space in a manner that allows for valid estimation of treatment effects. In this article, we propose matching on both the estimated propensity score and the estimated prognostic scores when the number of covariates is large relative to the number of observations. We derive asymptotic results for the matching estimator and show that it is doubly robust in the sense that only one of the two score models need be correct to obtain a consistent estimator. We show via simulation its effectiveness in controlling for confounding and highlight its potential to address nonlinear confounding. Finally, we apply the proposed procedure to analyze the effect of gender on prescription opioid use using insurance claims data. © 2018, The International Biometric Society.
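A hedged sketch of the proposed matching idea follows: estimate a propensity score and a prognostic score, then nearest-neighbor match treated units to controls in the two-score space. Plain logistic and linear fits stand in for the paper's high-dimensional penalized estimators, and the data are simulated, so this shows the mechanics only, not the authors' implementation.

```python
# Match treated units to controls on (propensity score, prognostic score)
# and average the matched outcome differences to estimate the ATT.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n, p = 500, 10
X = rng.normal(size=(n, p))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment
y = X[:, 0] + 2 * treat + rng.normal(size=n)          # true effect = 2

ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
prog = LinearRegression().fit(X[treat == 0], y[treat == 0]).predict(X)

scores = np.column_stack([ps, prog])                  # 2-D matching space
nn = NearestNeighbors(n_neighbors=1).fit(scores[treat == 0])
_, idx = nn.kneighbors(scores[treat == 1])
matched_controls = y[treat == 0][idx.ravel()]
att = (y[treat == 1] - matched_controls).mean()
print(f"matched ATT estimate: {att:.2f} (truth: 2)")
```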
Analysis of volatile organic compounds. [trace amounts of organic volatiles in gas samples
NASA Technical Reports Server (NTRS)
Zlatkis, A. (Inventor)
1977-01-01
An apparatus and method are described for reproducibly analyzing trace amounts of a large number of organic volatiles existing in a gas sample. Direct injection of the trapped volatiles into a cryogenic precolumn provides a sharply defined plug. Applications of the method include: (1) analyzing the headspace gas of body fluids and comparing a profile of the organic volatiles with standard profiles for the detection and monitoring of disease; (2) analyzing the headspace gas of foods and beverages and comparing the profile with standard profiles to monitor and control flavor and aroma; and (3) analyses for determining the organic pollutants in air or water samples.
Rogue wave in coupled electric transmission line
NASA Astrophysics Data System (ADS)
Duan, J. K.; Bai, Y. L.
2018-03-01
Distributed electrical transmission lines that consist of a large number of identical sections are studied theoretically in the present paper. The rogue wave is analyzed and predicted using the nonlinear Schrodinger equation (NLSE). The results indicate that, in the continuum limit, the voltage on the transmission line is described in some cases by the NLSE, which is obtained using the traditional perturbation technique. The dependence of the rogue wave characteristics on the parameters of the coupled electric transmission line is shown in the paper. As is well known, rogue waves are implicated in a large number of oceanic disasters, and such waves may be destructive. However, the results of the present paper for coupled electric transmission lines may be useful.
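For orientation, the rogue-wave prototype usually invoked in NLSE-based analyses of this kind is the Peregrine soliton; one common normalization of the focusing NLSE and this rational solution are sketched below (the scaling actually used in the paper may differ).

```latex
% Focusing NLSE in one common normalization
i\,\frac{\partial \psi}{\partial t}
  + \frac{1}{2}\,\frac{\partial^2 \psi}{\partial x^2}
  + |\psi|^2\,\psi = 0
% Peregrine soliton: localized in both x and t, with peak amplitude
% three times the unit background -- a standard rogue-wave model
\psi(x,t) = \left[\,1 - \frac{4\,(1 + 2it)}{1 + 4x^2 + 4t^2}\,\right] e^{it}
```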
Inflation in random Gaussian landscapes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masoumi, Ali; Vilenkin, Alexander; Yamada, Masaki, E-mail: ali@cosmos.phy.tufts.edu, E-mail: vilenkin@cosmos.phy.tufts.edu, E-mail: Masaki.Yamada@tufts.edu
2017-05-01
We develop analytic and numerical techniques for studying the statistics of slow-roll inflation in random Gaussian landscapes. As an illustration of these techniques, we analyze small-field inflation in a one-dimensional landscape. We calculate the probability distributions for the maximal number of e-folds and for the spectral index of density fluctuations n_s and its running α_s. These distributions have a universal form, insensitive to the correlation function of the Gaussian ensemble. We outline possible extensions of our methods to a large number of fields and to models of large-field inflation. These methods do not suffer from potential inconsistencies inherent in the Brownian motion technique, which has been used in most of the earlier treatments.
NASA Technical Reports Server (NTRS)
Bainum, P. M.; Sellappan, R.
1978-01-01
Attitude control techniques for the pointing and stabilization of very large, inherently flexible spacecraft systems were investigated. The attitude dynamics and control of a long, homogeneous flexible beam whose center of mass is assumed to follow a circular orbit was analyzed. First order effects of gravity gradient were included. A mathematical model which describes the system rotations and deflections within the orbital plane was developed by treating the beam as a number of discretized mass particles connected by massless, elastic structural elements. The uncontrolled dynamics of the system are simulated and, in addition, the effects of the control devices were considered. The concept of distributed modal control, which provides a means for controlling a system mode independently of all other modes, was examined. The effects of varying the number of modes in the model, as well as the number and location of the control devices, were also considered.
Reynolds number dependence of large-scale friction control in turbulent channel flow
NASA Astrophysics Data System (ADS)
Canton, Jacopo; Örlü, Ramis; Chin, Cheng; Schlatter, Philipp
2016-12-01
The present work investigates the effectiveness of the control strategy introduced by Schoppa and Hussain [Phys. Fluids 10, 1049 (1998), 10.1063/1.869789] as a function of Reynolds number (Re). The skin-friction drag reduction method proposed by these authors, consisting of streamwise-invariant, counter-rotating vortices, was analyzed by Canton et al. [Flow, Turbul. Combust. 97, 811 (2016), 10.1007/s10494-016-9723-8] in turbulent channel flows for friction Reynolds numbers (Reτ) corresponding to the value of the original study (i.e., 104) and 180. For these Re, a slightly modified version of the method proved to be successful and was capable of providing a drag reduction of up to 18%. The present study analyzes the Reynolds number dependence of this drag-reducing strategy by performing two sets of direct numerical simulations (DNS) for Reτ=360 and 550. A detailed analysis of the method as a function of the control parameters (amplitude and wavelength) and Re confirms, on the one hand, the effectiveness of the large-scale vortices at low Re and, on the other hand, the decreasing and finally vanishing effectiveness of this method for higher Re. In particular, no drag reduction can be achieved for Reτ=550 for any combination of the parameters controlling the vortices. For low Reynolds numbers, the large-scale vortices are able to affect the near-wall cycle and alter the wall-shear-stress distribution to cause an overall drag reduction effect, in accordance with most control strategies. For higher Re, instead, the present method fails to penetrate the near-wall region and cannot induce the spanwise velocity variation observed in other more established control strategies, which focus on the near-wall cycle. Despite the negative outcome, the present results demonstrate the shortcomings of the control strategy and show that future focus should be on methods that directly target the near-wall region or other suitable alternatives.
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets.
Savitski, Mikhail M; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-09-01
Calculating the number of confidently identified proteins and estimating false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further add to the challenge and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target-decoy strategy that particularly show when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel target-decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The "picked" protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The "picked" target-decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein yielding a stable number of true positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used "classic" protein FDR approach that causes overprediction of false-positive protein identification in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. © 2015 by The American Society for Biochemistry and Molecular Biology, Inc.
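Since the "picked" strategy is easy to state algorithmically, a minimal sketch follows; it assumes best-peptide scores per protein are already in hand (higher is better) and uses simple decoy counting for the FDR estimate, so it illustrates the pairing idea rather than reproducing the authors' implementation.

```python
# "Picked" target-decoy protein FDR: for each protein, keep only the
# better-scoring member of the target/decoy pair, then count decoys
# among the survivors at each score threshold. Data are invented.
def picked_fdr(target_scores, decoy_scores):
    picked = []  # (score, is_decoy)
    for prot in set(target_scores) | set(decoy_scores):
        t = target_scores.get(prot, float("-inf"))
        d = decoy_scores.get(prot, float("-inf"))
        picked.append((t, False) if t >= d else (d, True))
    picked.sort(key=lambda x: x[0], reverse=True)

    results, n_targets, n_decoys = [], 0, 0
    for score, is_decoy in picked:
        n_decoys += is_decoy
        n_targets += not is_decoy
        results.append((score, n_decoys / max(n_targets, 1)))
    return results

targets = {"P1": 0.99, "P2": 0.95, "P3": 0.40}
decoys = {"P1": 0.20, "P2": 0.97, "P3": 0.10}
for score, fdr in picked_fdr(targets, decoys):
    print(f"score>={score:.2f}: FDR={fdr:.2f}")
```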
Multiple damage identification on a wind turbine blade using a structural neural system
NASA Astrophysics Data System (ADS)
Kirikera, Goutham R.; Schulz, Mark J.; Sundaresan, Mannur J.
2007-04-01
A large number of sensors are required to perform real-time structural health monitoring (SHM) to detect acoustic emissions (AE) produced by damage growth on large complicated structures. This requires a large number of high sampling rate data acquisition channels to analyze high frequency signals. To overcome the cost and complexity of having such a large data acquisition system, a structural neural system (SNS) was developed. The SNS reduces the required number of data acquisition channels and predicts the location of damage within a sensor grid. The sensor grid uses interconnected sensor nodes to form continuous sensors. The combination of continuous sensors and the biomimetic parallel processing of the SNS tremendously reduces the complexity of SHM. A wave simulation algorithm (WSA) was developed to understand flexural wave propagation in composite structures and to utilize the code for developing the SNS. Simulation of AE responses in a plate and comparison with experimental results are shown in the paper. The SNS was recently tested by a team of researchers from the University of Cincinnati and North Carolina A&T State University during a quasi-static proof test of a 9 meter long wind turbine blade at the National Renewable Energy Laboratory (NREL) test facility in Golden, Colorado. Twelve piezoelectric sensor nodes were used to form four continuous sensors to monitor the condition of the blade during the test. The four continuous sensors are used as inputs to the SNS. There are only two analog output channels of the SNS, and these signals are digitized and analyzed in a computer to detect damage. In the test of the wind turbine blade, multiple damages were identified and later verified by sectioning of the blade. The results of damage identification using the SNS during this proof test will be shown in this paper. Overall, the SNS is very sensitive and can detect damage on complex structures with ribs, joints, and different materials, and the system is relatively inexpensive and simple to implement on large structures.
A Scalable Approach for Protein False Discovery Rate Estimation in Large Proteomic Data Sets
Savitski, Mikhail M.; Wilhelm, Mathias; Hahne, Hannes; Kuster, Bernhard; Bantscheff, Marcus
2015-01-01
Calculating the number of confidently identified proteins and estimating false discovery rate (FDR) is a challenge when analyzing very large proteomic data sets such as entire human proteomes. Biological and technical heterogeneity in proteomic experiments further add to the challenge and there are strong differences in opinion regarding the conceptual validity of a protein FDR and no consensus regarding the methodology for protein FDR determination. There are also limitations inherent to the widely used classic target–decoy strategy that particularly show when analyzing very large data sets and that lead to a strong over-representation of decoy identifications. In this study, we investigated the merits of the classic, as well as a novel target–decoy-based protein FDR estimation approach, taking advantage of a heterogeneous data collection comprised of ∼19,000 LC-MS/MS runs deposited in ProteomicsDB (https://www.proteomicsdb.org). The “picked” protein FDR approach treats target and decoy sequences of the same protein as a pair rather than as individual entities and chooses either the target or the decoy sequence depending on which receives the highest score. We investigated the performance of this approach in combination with q-value based peptide scoring to normalize sample-, instrument-, and search engine-specific differences. The “picked” target–decoy strategy performed best when protein scoring was based on the best peptide q-value for each protein yielding a stable number of true positive protein identifications over a wide range of q-value thresholds. We show that this simple and unbiased strategy eliminates a conceptual issue in the commonly used “classic” protein FDR approach that causes overprediction of false-positive protein identification in large data sets. The approach scales from small to very large data sets without losing performance, consistently increases the number of true-positive protein identifications and is readily implemented in proteomics analysis software. PMID:25987413
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2016-01-01
During inactive phases of the Madden-Julian oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES observations for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. The cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameters, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups of a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for cirrostratus. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large groups for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.
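A minimal sketch of the partitioning step described above, assuming per-cloud-object diameters and footprint counts; the data layout is an illustrative assumption.

    # Find the diameter at which the combined size distribution splits the
    # footprints into two equal halves, then label each object small/large.

    import numpy as np

    def partition_by_median_diameter(diameters, footprints):
        order = np.argsort(diameters)
        d = np.asarray(diameters)[order]
        f = np.asarray(footprints)[order]
        cum = np.cumsum(f)
        median_d = d[np.searchsorted(cum, cum[-1] / 2.0)]
        labels = np.where(np.asarray(diameters) <= median_d, "small", "large")
        return median_d, labels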
Rescaling citations of publications in physics
NASA Astrophysics Data System (ADS)
Radicchi, Filippo; Castellano, Claudio
2011-04-01
We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers.
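A minimal sketch of the rescaling procedure: each paper's citation count is divided by the average count for papers published in the same year and field. The record layout is an illustrative assumption.

    from collections import defaultdict

    def rescale_citations(papers):
        """papers: list of dicts with keys 'year', 'field', 'citations'."""
        totals = defaultdict(lambda: [0, 0])  # (year, field) -> [sum, count]
        for p in papers:
            key = (p["year"], p["field"])
            totals[key][0] += p["citations"]
            totals[key][1] += 1
        for p in papers:
            s, n = totals[(p["year"], p["field"])]
            c0 = s / n  # average citations in this year and field
            p["c_rescaled"] = p["citations"] / c0 if c0 > 0 else 0.0
        return papers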
Liu, Chao; Liu, Jinhong; Zhang, Junxiang; Zhu, Shiyao
2018-02-05
Direct counterfactual quantum communication (DCQC) is a surprising phenomenon in which quantum information can be transmitted without any physical particle acting as a carrier. Nested interferometers are promising devices for realizing DCQC, provided the number of interferometers tends to infinity. Considering the inevitable loss or dissipation in practical experimental interferometers, we analyze the dependence of reliability on the number of interferometers, and show that the reliability of direct communication is rapidly degraded as the number of interferometers becomes large. Furthermore, we simulate and test this counterfactual deterministic communication protocol with a finite number of interferometers, and demonstrate the improvement of the reliability using dissipation compensation in interferometers.
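A toy calculation, not the authors' model, illustrates why loss limits chained interferometric schemes: in an idealized quantum-Zeno chain of N cycles, each rotating the photon amplitude by pi/(2N), the success probability approaches 1 as N grows, but a fixed fractional loss eps per cycle contributes a factor (1-eps)**N that eventually dominates. All symbols here are illustrative assumptions.

    import math

    def zeno_reliability(n_cycles, eps):
        theta = math.pi / (2 * n_cycles)
        return (math.cos(theta) ** 2 * (1 - eps)) ** n_cycles

    for n in (5, 25, 125, 625):
        print(n, round(zeno_reliability(n, eps=0.01), 3))
    # Reliability first rises with n, then falls as loss accumulates.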
Rescaling citations of publications in physics.
Radicchi, Filippo; Castellano, Claudio
2011-04-01
We analyze the citation distributions of all papers published in Physical Review journals between 1985 and 2009. The average number of citations received by papers published in a given year and in a given field is computed. Large variations are found, showing that it is not fair to compare citation numbers across fields and years. However, when a rescaling procedure by the average is used, it is possible to compare impartially articles across years and fields. We make the rescaling factors available for use by the readers. We also show that rescaling citation numbers by the number of publication authors has strong effects and should therefore be taken into account when assessing the bibliometric performance of researchers.
Parallel group independent component analysis for massive fMRI data sets.
Chen, Shaojie; Huang, Lei; Qiu, Huitong; Nebel, Mary Beth; Mostofsky, Stewart H; Pekar, James J; Lindquist, Martin A; Eloyan, Ani; Caffo, Brian S
2017-01-01
Independent component analysis (ICA) is widely used in the field of functional neuroimaging to decompose data into spatio-temporal patterns of co-activation. In particular, ICA has found wide usage in the analysis of resting state fMRI (rs-fMRI) data. Recently, a number of large-scale data sets have become publicly available that consist of rs-fMRI scans from thousands of subjects. As a result, efficient ICA algorithms that scale well to the increased number of subjects are required. To address this problem, we propose a two-stage likelihood-based algorithm for performing group ICA, which we denote Parallel Group Independent Component Analysis (PGICA). By utilizing the sequential nature of the algorithm and parallel computing techniques, we are able to efficiently analyze data sets from large numbers of subjects. We illustrate the efficacy of PGICA, which has been implemented in R and is freely available through the Comprehensive R Archive Network, through simulation studies and application to rs-fMRI data from two large multi-subject data sets, consisting of 301 and 779 subjects respectively.
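A generic two-stage group-ICA sketch in the spirit of the approach described above, not the authors' likelihood-based PGICA (which is implemented in R): each subject's data is reduced in parallel, the reductions are concatenated, and a single ICA is run on the result, here using scikit-learn for illustration.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor
    from sklearn.decomposition import PCA, FastICA

    def reduce_subject(data, k=20):
        """data: (timepoints, voxels) array for one subject."""
        return PCA(n_components=k).fit_transform(data.T).T  # (k, voxels)

    def group_ica(subject_arrays, n_components=20):
        # Subject-level reductions are independent, so they parallelize well.
        with ProcessPoolExecutor() as pool:
            reduced = list(pool.map(reduce_subject, subject_arrays))
        stacked = np.vstack(reduced)              # (n_subjects * k, voxels)
        ica = FastICA(n_components=n_components, max_iter=1000)
        return ica.fit_transform(stacked.T).T     # group spatial maps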
Exploring brand-name drug mentions on Twitter for pharmacovigilance.
Carbonell, Pablo; Mayer, Miguel A; Bravo, Àlex
2015-01-01
Twitter has been proposed by several studies as a means to track public health trends, such as influenza and Ebola outbreaks, by analyzing user messages in order to measure different population features and interests. In this work we analyze the number and features of mentions of drug brand names on Twitter in order to explore the potential usefulness of the automated detection of drug side effects and drug-drug interactions on social media platforms such as Twitter. This information can be used for the development of predictive models for drug toxicity, drug-drug interactions or drug resistance. Given the large number of drug brand mentions that we found on Twitter, it is promising as a tool for detecting, understanding and monitoring the way people manage prescribed drugs.
A multi-scalar PDF approach for LES of turbulent spray combustion
NASA Astrophysics Data System (ADS)
Raman, Venkat; Heye, Colin
2011-11-01
A comprehensive joint-scalar probability density function (PDF) approach is proposed for large eddy simulation (LES) of turbulent spray combustion and tests are conducted to analyze the validity and modeling requirements. The PDF method has the advantage that the chemical source term appears closed but requires models for the small scale mixing process. A stable and consistent numerical algorithm for the LES/PDF approach is presented. To understand the modeling issues in the PDF method, direct numerical simulation of a spray flame at three different fuel droplet Stokes numbers and an equivalent gaseous flame are carried out. Assumptions in closing the subfilter conditional diffusion term in the filtered PDF transport equation are evaluated for various model forms. In addition, the validity of evaporation rate models in high Stokes number flows is analyzed.
Toward two-dimensional search engines
NASA Astrophysics Data System (ADS)
Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.
2012-07-01
We study the statistical properties of various directed networks using ranking of their nodes based on the dominant vectors of the Google matrix, known as PageRank and CheiRank. On average, PageRank orders nodes proportionally to the number of ingoing links, while CheiRank orders nodes proportionally to the number of outgoing links. In this way, the ranking of nodes becomes two-dimensional, which paves the way for the development of two-dimensional search engines of a new type. Statistical properties of information flow on the PageRank-CheiRank plane are analyzed for networks of British, French and Italian universities, Wikipedia, the Linux Kernel, gene regulation and other networks. Special emphasis is placed on the networks of British universities, using the large database publicly available in the UK. Methods of spam-link control are also analyzed.
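A minimal sketch of the two-dimensional ranking idea: PageRank is computed on the directed graph, and CheiRank is the same computation on the graph with all links reversed, giving each node a (PageRank, CheiRank) coordinate pair. The damping factor and the toy adjacency matrix are illustrative assumptions.

    import numpy as np

    def pagerank(adj, damping=0.85, iters=100):
        """adj[i, j] = 1 if there is a link from node i to node j."""
        n = adj.shape[0]
        out = adj.sum(axis=1)
        # Column-stochastic Google matrix; dangling nodes spread uniformly.
        g = np.where(out[:, None] > 0,
                     adj / np.maximum(out[:, None], 1), 1.0 / n).T
        r = np.full(n, 1.0 / n)
        for _ in range(iters):
            r = damping * g @ r + (1 - damping) / n
        return r / r.sum()

    adj = np.array([[0, 1, 1, 0],
                    [0, 0, 1, 0],
                    [1, 0, 0, 1],
                    [0, 0, 1, 0]], dtype=float)
    pr = pagerank(adj)      # ranks nodes by ingoing link structure
    cr = pagerank(adj.T)    # CheiRank: identical computation, reversed links
    print(list(zip(pr.round(3), cr.round(3))))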
Fiber-optic temperature sensor using a spectrum-modulating semiconductor etalon
NASA Technical Reports Server (NTRS)
Beheim, Glenn; Anthan, Donald J.
1987-01-01
Described is a fiber-optic temperature sensor that uses a spectrum-modulating SiC etalon. The spectral output of this type of sensor may be analyzed to obtain a temperature measurement which is largely independent of the transmission properties of the sensor's fiber-optic link. A highly precise laboratory spectrometer is described in detail, and this instrument is used to study the properties of this type of sensor. Also described are a number of different spectrum analyzers that are more suitable for use in a practical thermometer.
Advanced proteomic liquid chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Fang; Smith, Richard D.; Shen, Yufeng
2012-10-26
Liquid chromatography coupled with mass spectrometry is the predominant platform used to analyze proteomics samples consisting of large numbers of proteins and their proteolytic products (e.g., truncated polypeptides) and spanning a wide range of relative concentrations. This review provides an overview of advanced capillary liquid chromatography techniques and methodologies that greatly improve separation resolving power and proteomics analysis coverage, sensitivity, and throughput.
Is Epenthesis a Means to Optimize Feet? A Reanalysis of the CLPF Database
ERIC Educational Resources Information Center
Taelman, Helena; Gillis, Steven
2008-01-01
Fikkert (1994) analyzed a large corpus of Dutch children's early language production, and found that they often add targetless syllables to their words in order to create bisyllabic feet. In this note we point out a methodological problem with that analysis: in a substantial number of cases, epenthetic vowels occur at places where grammatical…
A Methodology for Estimating the Size of Subject Collections, Using African Studies as an Example.
ERIC Educational Resources Information Center
Lauer, Joseph J.
1983-01-01
Provides a formula for estimating the number of Africana titles in large libraries using the Library of Congress classification schedule. To determine the percentage of Africana falling in the DT section, the distribution of titles in two academic libraries with separate shelflists is analyzed. Core categories of Africana and the distribution of northern African titles are…
Changing Patterns in Internal Communication in Large Academic Libraries. Occasional Paper Number 6.
ERIC Educational Resources Information Center
Euster, Joanne R.
Based on data from a 1979 survey of ARL member libraries, this study by the Office of Management Studies analyzes the responses of selected libraries which had provided internal studies or planning documents on the subject of internal communication and notes the extent of resulting changes in procedures. The studies yielded information on staff…
ERIC Educational Resources Information Center
Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.
2014-01-01
Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
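The j-factor relationship cited here can be stated compactly: if recognition of a whole requires j independent parts, then p_whole = p_part ** j. A quick illustration with made-up probabilities:

    import math

    def j_factor(p_whole, p_part):
        """Number of independent perceptual units implied by the two scores."""
        return math.log(p_whole) / math.log(p_part)

    # e.g., phoneme recognition of 0.90 and word recognition of 0.73
    print(round(j_factor(0.73, 0.90), 2))  # ~2.99, about 3 independent units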
An analysis of large chaparral fires in San Diego County, CA
Bob Eisele
2015-01-01
San Diego County, California, holds the records for the largest area burned and greatest number of structures destroyed in California. This paper analyzes 102 years of fire history, population growth, and weather records from 1910 through 2012 to examine the factors that are driving the wildfire system. Annual area burned is compared with precipitation during the...
Preliminary Investigation of a New Type of Supersonic Inlet
NASA Technical Reports Server (NTRS)
Ferri, Antonio; Nucci, Louis M
1952-01-01
A supersonic inlet with supersonic deceleration of the flow entirely outside of the inlet is considered. A particular arrangement with fixed geometry, having a central body with a circular annular intake, is analyzed, and it is shown theoretically that this arrangement gives high pressure recovery for a large range of Mach number and mass flow and, therefore, is practical for use on supersonic airplanes and missiles. Experimental results confirming the theoretical analysis give pressure recoveries which vary from 95 percent at Mach number 1.33 to 86 percent at Mach number 2.00. These results were originally presented in a classified document of the NACA in 1946.
Estimating How Often Mass Extinctions Due to Impacts Occur on the Earth
NASA Technical Reports Server (NTRS)
Buratti, Bonnie J.
2013-01-01
This hands-on, inquiry based activity has been taught at JPL's summer workshop "Teachers Touch the Sky" for the past two decades. Students act as mini-investigators as they gather and analyze data to estimate how often an impact large enough to cause a mass extinction occurs on the Earth. Large craters are counted on the Moon, and this number is extrapolated to the size of the Earth. Given the age of the Solar System, the students can then estimate how often large impacts occur on the Earth. This activity is based on an idea by Dr. David Morrison, NASA Ames Research Center.
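The arithmetic of the activity can be sketched in a few lines; all input numbers below are rough illustrative assumptions rather than the workshop's official values.

    # Count large craters on the Moon's near side, scale by area to the whole
    # Moon and then to Earth, and divide the Solar System's age by the count.

    n_counted = 40                 # large craters counted on the near side
    n_moon = n_counted * 2         # whole Moon (near side is half the surface)
    area_ratio = 13.5              # Earth's surface area / Moon's surface area
    n_earth = n_moon * area_ratio  # expected number of large impacts on Earth
    age_years = 4.5e9              # age of the Solar System

    print(f"roughly one large impact every {age_years / n_earth:.2e} years")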
Bockholt, Henry J.; Scully, Mark; Courtney, William; Rachakonda, Srinivas; Scott, Adam; Caprihan, Arvind; Fries, Jill; Kalyanam, Ravi; Segall, Judith M.; de la Garza, Raul; Lane, Susan; Calhoun, Vince D.
2009-01-01
A neuroinformatics (NI) system is critical to brain imaging research in order to shorten the time between study conception and results. Such a NI system is required to scale well when large numbers of subjects are studied. Further, when multiple sites participate in research projects organizational issues become increasingly difficult. Optimized NI applications mitigate these problems. Additionally, NI software enables coordination across multiple studies, leveraging advantages potentially leading to exponential research discoveries. The web-based, Mind Research Network (MRN), database system has been designed and improved through our experience with 200 research studies and 250 researchers from seven different institutions. The MRN tools permit the collection, management, reporting and efficient use of large scale, heterogeneous data sources, e.g., multiple institutions, multiple principal investigators, multiple research programs and studies, and multimodal acquisitions. We have collected and analyzed data sets on thousands of research participants and have set up a framework to automatically analyze the data, thereby making efficient, practical data mining of this vast resource possible. This paper presents a comprehensive framework for capturing and analyzing heterogeneous neuroscience research data sources that has been fully optimized for end-users to perform novel data mining. PMID:20461147
A method for generating new datasets based on copy number for cancer analysis.
Kim, Shinuk; Kon, Mark; Kang, Hyunsik
2015-01-01
New data sources for the analysis of cancer data are rapidly supplementing the large number of gene-expression markers used for current methods of analysis. Significant among these new sources are copy number variation (CNV) datasets, which typically enumerate several hundred thousand CNVs distributed throughout the genome. Several useful algorithms allow systems-level analyses of such datasets. However, these rich data sources have not yet been analyzed as deeply as gene-expression data. To address this issue, the extensive toolsets used for analyzing expression data in cancerous and noncancerous tissue (e.g., gene set enrichment analysis and phenotype prediction) could be redirected to extract a great deal of predictive information from CNV data, in particular those derived from cancers. Here we present a software package capable of preprocessing standard Agilent copy number datasets into a form to which essentially all expression analysis tools can be applied. We illustrate the use of this toolset in predicting the survival time of patients with ovarian cancer or glioblastoma multiforme and also provide an analysis of gene- and pathway-level deletions in these two types of cancer.
Lou, Di-Ming; Xu, Ning; Fan, Wen-Jia; Zhang, Tao
2014-02-01
Using an unmodified common rail diesel engine and the EEPS engine exhaust particle number and size analyzer, this study investigated, as a function of air-fuel ratio, the particulate number concentration, mass concentration and number distribution characteristics of a diesel engine fueled with butanol-diesel blends (Bu10, Bu15, Bu20, Bu30 and Bu40) and petroleum diesel. The results show that for all test fuels the particle number distributions are unimodal. As the butanol fraction increases, the numbers of nucleation mode particles and small accumulation mode particles decrease. At low speed and low load conditions, the number of large accumulation mode particles increases slightly, but under higher speed and load conditions it does not increase. When the fuels contain butanol, the total particle number concentration and mass concentration decrease in all conditions, and the decrease is more pronounced at high speed and load.
Parameter identification of civil engineering structures
NASA Technical Reports Server (NTRS)
Juang, J. N.; Sun, C. T.
1980-01-01
This paper concerns the development of an identification method required in determining structural parameter variations for systems subjected to extended exposure to the environment. The concept of structural identifiability of a large scale structural system in the absence of damping is presented. Three criteria are established indicating that a large number of system parameters (the coefficient parameters of the differential equations) can be identified by a few actuators and sensors. An eight-bay, fifteen-story frame structure is used as an example. A simple model is employed for analyzing the dynamic response of the frame structure.
NASA Astrophysics Data System (ADS)
Duffy, Ken; Lobunets, Olena; Suhov, Yuri
2007-05-01
We propose a model of a loss averse investor who aims to maximize his expected wealth under certain constraints. The constraints are that he avoids, with high probability, incurring a (suitably defined) unacceptable loss. The methodology employed comes from the theory of large deviations. We explore a number of fundamental properties of the model and illustrate its desirable features. We demonstrate its utility by analyzing assets that follow some commonly used financial return processes: Fractional Brownian Motion, Jump Diffusion, Variance Gamma and Truncated Lévy.
Capacity choice in a large market.
Godenhielm, Mats; Kultti, Klaus
2014-01-01
We analyze endogenous capacity formation in a large frictional market with perfectly divisible goods. Each seller posts a price and decides on a capacity. The buyers base their decision on which seller to visit on both characteristics. In this setting we determine the conditions for the existence and uniqueness of a symmetric equilibrium. When capacity is unobservable there exists a continuum of equilibria. We show that the "best" of these equilibria leads to the same seller capacities and the same number of trades as the symmetric equilibrium under observable capacity.
Aligator: A computational tool for optimizing total chemical synthesis of large proteins.
Jacobsen, Michael T; Erickson, Patrick W; Kay, Michael S
2017-09-15
The scope of chemical protein synthesis (CPS) continues to expand, driven primarily by advances in chemical ligation tools (e.g., reversible solubilizing groups and novel ligation chemistries). However, the design of an optimal synthesis route can be an arduous and fickle task due to the large number of theoretically possible, and in many cases problematic, synthetic strategies. In this perspective, we highlight recent CPS tool advances and then introduce a new and easy-to-use program, Aligator (Automated Ligator), for analyzing and designing the most efficient strategies for constructing large targets using CPS. As a model set, we selected the E. coli ribosomal proteins and associated factors for computational analysis. Aligator systematically scores and ranks all feasible synthetic strategies for a particular CPS target. The Aligator script methodically evaluates potential peptide segments for a target using a scoring function that includes solubility, ligation site quality, segment lengths, and number of ligations to provide a ranked list of potential synthetic strategies. We demonstrate the utility of Aligator by analyzing three recent CPS projects from our lab: TNFα (157 aa), GroES (97 aa), and DapA (312 aa). As the limits of CPS are extended, we expect that computational tools will play an increasingly important role in the efficient execution of ambitious CPS projects such as production of a mirror-image ribosome.
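The scoring function is described as combining solubility, ligation site quality, segment lengths, and number of ligations; below is a toy scorer along those lines, with hypothetical weights and helper inputs that are not Aligator's actual rules.

    def score_strategy(segments, solubility, site_quality):
        """segments: list of (start, end) peptide segments covering the target.
        solubility: dict mapping segment -> 0..1 score.
        site_quality: dict mapping junction (end, start) -> 0..1 score."""
        ideal_len = 40  # assumed sweet spot for SPPS segment length
        length_score = sum(1 - abs((e - s) - ideal_len) / ideal_len
                           for s, e in segments) / len(segments)
        sol_score = sum(solubility[seg] for seg in segments) / len(segments)
        junctions = [(a[1], b[0]) for a, b in zip(segments, segments[1:])]
        site_score = (sum(site_quality[j] for j in junctions) / len(junctions)
                      if junctions else 1.0)
        ligation_penalty = 0.05 * len(junctions)  # fewer ligations is better
        return sol_score + site_score + length_score - ligation_penalty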
Asteroid families classification: Exploiting very large datasets
NASA Astrophysics Data System (ADS)
Milani, Andrea; Cellino, Alberto; Knežević, Zoran; Novaković, Bojan; Spoto, Federica; Paolicchi, Paolo
2014-09-01
The number of asteroids with accurately determined orbits increases fast, and this increase is also accelerating. The catalogs of asteroid physical observations have also increased, although the number of objects is still smaller than in the orbital catalogs. Thus it becomes more and more challenging to perform, maintain and update a classification of asteroids into families. To cope with these challenges we developed a new approach to the asteroid family classification by combining the Hierarchical Clustering Method (HCM) with a method to add new members to existing families. This procedure exploits the much larger amount of information contained in the proper elements catalogs, compared with classifications that also use physical observations, which are available for a smaller number of asteroids. Our work is based on a large catalog of high accuracy synthetic proper elements (available from AstDyS), containing data for >330,000 numbered asteroids. By selecting from the catalog a much smaller number of large asteroids, we first identify a number of core families; to these we attribute the next layer of smaller objects. Then, we remove all the family members from the catalog, and reapply the HCM to the rest. This gives both satellite families which extend the core families and new independent families, consisting mainly of small asteroids. These two cases are discriminated by another step of attribution of new members and by merging intersecting families. This leads to a classification with 128 families and currently 87,095 members. The number of members can be increased automatically with each update of the proper elements catalog; changes in the list of families are not automated. By using information from absolute magnitudes, we take advantage of the larger size range in some families to analyze their shape in the proper semimajor axis vs. inverse diameter plane. This leads to a new method to estimate the family age, or ages in cases where we identify internal structures. Analysis of this plane reveals some open problems but also the possibility of obtaining further information on the geometrical properties of the impact process. The results from the previous steps are then analyzed, using also auxiliary information on physical properties including WISE albedos and SDSS color indexes. This allows us to solve some difficult cases of families overlapping in the proper elements space but generated by different collisional events. The families formed by one or more cratering events are found to be more numerous than previously believed because the fragments are smaller. We analyze some examples of cratering families (Massalia, Vesta, Eunomia) which show internal structures, interpreted as multiple collisions. We also discuss why Ceres has no family.
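A sketch of the HCM linkage step in proper-element space (a, e, sin i), using the velocity metric commonly attributed to Zappalà et al.; the cutoff velocity and data layout are illustrative assumptions, and the real pipeline involves the multi-stage attribution described in the abstract.

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage

    def hcm_families(a, e, sini, cutoff_m_s=70.0):
        """a in AU, e, sin(i): 1-D arrays of proper elements."""
        au_yr_to_m_s = 4740.57           # 1 AU/yr expressed in m/s
        n = 2 * np.pi / a**1.5           # mean motion (rad/yr), Kepler's law
        na = n * a * au_yr_to_m_s        # orbital speed scale in m/s
        # Scale coordinates so Euclidean distance approximates the metric
        # d = n*a*sqrt(5/4 (da/a)^2 + 2 de^2 + 2 dsini^2) for nearby objects.
        coords = np.column_stack([np.sqrt(5 / 4) * na * np.log(a),  # d(ln a)
                                  np.sqrt(2.0) * na * e,
                                  np.sqrt(2.0) * na * sini])
        z = linkage(coords, method="single")   # chain objects, HCM-style
        return fcluster(z, t=cutoff_m_s, criterion="distance")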
Lee, Hye Ryun; Park, Jeong Su; Shin, Sue; Roh, Eun Youn; Yoon, Jong Hyun; Han, Kyou Sup; Kim, Byung Jae; Storms, Robert W; Chao, Nelson J
2012-01-01
We analyzed neonatal factors that could affect hematopoietic variables of cord blood (CB) donated from Korean neonates. The numbers of total nucleated cells (TNCs), CD34+ cells, and CD34+ cells/TNCs of CB in neonates were compared according to sex, gestational age, birth weight, birth weight centile for gestational age, and ABO blood group. With 11,098 CB units analyzed, blood group O CB showed an increased number of TNCs, CD34+ cells, and CD34+ cells/TNCs compared with other blood groups. Although TNC counts were lower in males, no difference in the number of CD34+ cells was demonstrated because the number of CD34+ cells/TNCs was higher in males. An increase in the gestational age resulted in an increase in the number of TNCs and decreases in the number of CD34+ cells and CD34+ cells/TNCs. The numbers of TNCs, CD34+ cells, and CD34+ cells/TNCs increased according to increased birth weight centile as well as birth weight. CB with blood group O has unique hematologic variables in this large-scale analysis of Korean neonates, although the impact on the storage policies of CB banks or the clinical outcome of transplantation remains to be determined.
Ohneberg, K; Wolkewitz, M; Beyersmann, J; Palomar-Martinez, M; Olaechea-Astigarraga, P; Alvarez-Lerma, F; Schumacher, M
2015-01-01
Sampling from a large cohort in order to derive a subsample that would be sufficient for statistical analysis is a frequently used method for handling large data sets in epidemiological studies with limited resources for exposure measurement. For clinical studies, however, when interest is in the influence of a potential risk factor, cohort studies are often the first choice, with all individuals entering the analysis. Our aim is to close the gap between epidemiological and clinical studies with respect to design and power considerations. Schoenfeld's formula for the number of events required for a Cox proportional hazards model is fundamental. Our objective is to compare the power of analyzing the full cohort with the power of a nested case-control and a case-cohort design. We compare power formulas for sampling designs and cohort studies. In our data example we simultaneously apply a nested case-control design with a varying number of controls matched to each case, a case-cohort design with varying subcohort size, a random subsample and a full cohort analysis. For each design we calculate the standard error of the estimated regression coefficients and the mean number of distinct persons for whom covariate information is required. The formulas for the power of a nested case-control design and of a case-cohort design are directly connected to the power of a cohort study through the well-known Schoenfeld formula. The loss in precision of parameter estimates is relatively small compared to the saving in resources. Nested case-control and case-cohort studies, but not random subsamples, yield an attractive alternative for analyzing clinical studies in the situation of a low event rate. Power calculations can be conducted straightforwardly to quantify the loss of power compared to the savings in the number of patients when using a sampling design instead of analyzing the full cohort.
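A generic sketch of the formulas the abstract refers to, not code from the paper: Schoenfeld's required number of events for detecting a log hazard ratio beta with exposure prevalence p, plus the standard (m+1)/m inflation for a nested case-control design with m controls per case (the inflation-factor usage follows the common textbook presentation).

    import math
    from scipy.stats import norm

    def schoenfeld_events(beta, p_exposed, alpha=0.05, power=0.80):
        """Events needed for a two-sided test of a Cox log hazard ratio."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return z**2 / (p_exposed * (1 - p_exposed) * beta**2)

    d_cohort = schoenfeld_events(beta=math.log(1.5), p_exposed=0.3)
    m = 4  # controls matched to each case
    d_ncc = d_cohort * (m + 1) / m
    print(math.ceil(d_cohort), math.ceil(d_ncc))  # e.g. 228 vs 285 events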
NASA Technical Reports Server (NTRS)
Welch, R. M.; Sengupta, S. K.; Chen, D. W.
1990-01-01
Stratocumulus cloud fields in the FIRE IFO region are analyzed using LANDSAT Thematic Mapper imagery. Structural properties such as cloud cell size distribution, cell horizontal aspect ratio, fractional coverage and fractal dimension are determined. It is found that stratocumulus cloud number densities are represented by a power law. Cell horizontal aspect ratio has a tendency to increase at large cell sizes, and cells are bi-fractal in nature. Using LANDSAT Multispectral Scanner imagery for twelve selected stratocumulus scenes acquired during previous years, similar structural characteristics are obtained. Cloud field spatial organization also is analyzed. Nearest-neighbor spacings are fit with a number of functions, with Weibull and Gamma distributions providing the best fits. Poisson tests show that the spatial separations are not random. Second order statistics are used to examine clustering.
Image analysis for quantification of bacterial rock weathering.
Puente, M Esther; Rodriguez-Jaramillo, M Carmen; Li, Ching Y; Bashan, Yoav
2006-02-01
A fast, quantitative image analysis technique was developed to assess potential rock weathering by bacteria. The technique is based on the reduction in the surface area of rock particles and on counting the relative increase in the number of small particles in ground rock slurries. This was done by recording changes in ground rock samples with an electronic image analyzing process. The slurries were previously amended with three carbon sources, ground to a uniform particle size and incubated with rock-weathering bacteria for 28 days. The technique was developed and tested using two rock-weathering bacteria, Pseudomonas putida R-20 and Azospirillum brasilense Cd, on marble, granite, apatite, quartz, limestone, and volcanic rock as substrates. The image analyzer processed a large number of particles (10^7-10^8 per sample), so that the weathering capacity of the bacteria could be detected.
Large-scale diversity of slope fishes: pattern inconsistency between multiple diversity indices.
Gaertner, Jean-Claude; Maiorano, Porzia; Mérigot, Bastien; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3'- 45°7' N; 5°3'W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted both the traditional view based on the hump-shaped theory for bathymetric pattern and the commonly-admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness.
Towse, John N; Loetscher, Tobias; Brugger, Peter
2014-01-01
We investigate the number preferences of children and adults when generating random digit sequences. Previous research has shown convincingly that adults prefer smaller numbers when randomly choosing between responses 1-6. We analyze randomization choices made by both children and adults, considering a range of experimental studies and task configurations. Children - most of whom are between 8 and 11 years old - show a preference for relatively large numbers when choosing numbers 1-10. Adults show a preference for small numbers with the same response set. We report a modest association between children's age and numerical bias. However, children also exhibit a small number bias with a smaller response set available, and they show a preference specifically for the numbers 1-3 across many datasets. We argue that number space demonstrates both continuities (numbers 1-3 have a distinct status) and change (a developmentally emerging bias toward the left side of representational space or lower numbers).
ERIC Educational Resources Information Center
Peng, Ching-Huai
2008-01-01
After the 2008 Olympics is concluded and commentators and journalists internationally begin the process of evaluating Beijing's performance as the host city, one of the primary elements to be analyzed will be the quality of visitor service provided by more than 70,000 volunteers. Given the large number of Chinese students who have chosen a Western…
ERIC Educational Resources Information Center
Manson, Donald M.; And Others
Characteristics that would tend to place Mexican immigrants in direct competition with native workers for jobs at the bottom of the wage and skill hierarchy are their numbers, their largely undocumented status, low education and skill levels, and poor English-speaking ability. Using regression analysis, 1980 Census data were analyzed to determine…
ERIC Educational Resources Information Center
Vanderheiden, Gregg C.
The paper analyzes major microcomputer systems and their use in rehabilitative systems for persons with physical handicaps. Four categories of microcomputers are addressed: systems designed for home or school with emphasis on low cost, recreation, and educational software; general purpose microcomputers with applications in a large number of…
Do the Timeliness, Regularity, and Intensity of Online Work Habits Predict Academic Performance?
ERIC Educational Resources Information Center
Dvorak, Tomas; Jia, Miaoqing
2016-01-01
This study analyzes the relationship between students' online work habits and academic performance. We utilize data from logs recorded by a course management system (CMS) in two courses at a small liberal arts college in the U.S. Both courses required the completion of a large number of online assignments. We measure three aspects of students'…
In Their Own Words: A Text Analytics Investigation of College Course Attrition
ERIC Educational Resources Information Center
Michalski, Greg V.
2014-01-01
Excessive course attrition is costly to both the student and the institution. While most institutions have systems to quantify and report the numbers, far less attention is typically paid to each student's reason(s) for withdrawal. In this case study, text analytics was used to analyze a large set of open-ended written comments in which students…
The Effect of Small Sample Size on Two-Level Model Estimates: A Review and Illustration
ERIC Educational Resources Information Center
McNeish, Daniel M.; Stapleton, Laura M.
2016-01-01
Multilevel models are an increasingly popular method to analyze data that originate from a clustered or hierarchical structure. To effectively utilize multilevel models, one must have an adequately large number of clusters; otherwise, some model parameters will be estimated with bias. The goals for this paper are to (1) raise awareness of the…
An Integrated Management Support and Production Control System for Hardwood Forest Products
Guillermo A. Mendoza; Roger J. Meimban; William Sprouse; William G. Luppold; Philip A. Araman
1991-01-01
Spreadsheet and simulation models are tools which enable users to analyze, in a systematic fashion, a large number of variables affecting hardwood material utilization and profit. This paper describes two spreadsheet models, SEASaw and SEAIn, and a hardwood sawmill simulator. SEASaw is designed to estimate the amount of conversion from timber to lumber, while SEAIn is a...
ERIC Educational Resources Information Center
Cantwell, Brendan; Taylor, Barrett J.
2013-01-01
The American academic research enterprise relies heavily on contributions made by foreign nationals. Of particular note is the large number of international postdocs employed at universities in the United States (US). Postdocs are among the fastest growing group of academic staff in the US, and over 50% of all postdocs in the US are temporary visa…
Forest fires and air quality issues in southern Europe
Ana Isabel Miranda; Enrico Marchi; Marco Ferretti; Millán M. Millán
2009-01-01
Each summer forest fires in southern Europe emit large quantities of pollutants to the atmosphere. These fires can generate a number of air pollution episodes as measured by air quality monitoring networks. We analyzed the impact of forest fires on air quality of specific regions of southern Europe. Data from several summer seasons were studied with the aim of...
Middle School Students' Writing and Feedback in a Cloud-Based Classroom Environment
ERIC Educational Resources Information Center
Zheng, Binbin; Lawrence, Joshua; Warschauer, Mark; Lin, Chin-Hsi
2015-01-01
Individual writing and collaborative writing skills are important for academic success, yet are poorly taught in K-12 classrooms. This study examines how sixth-grade students (n = 257) taught by two teachers used Google Docs to write and exchange feedback. We used longitudinal growth models to analyze a large number of student writing samples…
ERIC Educational Resources Information Center
Hajat, Anjum; Lucas, Jacqueline B.; Kington, Raynard
In this report, various health measures are compared across Hispanic subgroups in the United States. National Health Interview Survey (NHIS) data aggregated from 1992 through 1995 were analyzed. NHIS is one of the few national surveys with a sample sufficiently large to allow such comparisons. Both age-adjusted and unadjusted estimates…
NASA Astrophysics Data System (ADS)
Sabine, Ortiz; Marc, Chomaz Jean; Thomas, Loiseleux
2001-11-01
In mixing layers between two parallel streams of different densities, shear and gravity effects interplay. When the Rossby number, which compares the nonlinear acceleration terms to the Coriolis forces, is large enough, buoyancy acts as a restoring force and the Kelvin-Helmholtz mode is known to be stabilized by the stratification. If the density interface is sharp enough, two new instability modes, known as Holmboe modes and propagating in opposite directions, appear. This mechanism has been studied in the temporal instability framework. We analyze the associated spatial instability problem, in the Boussinesq approximation, for two immiscible inviscid fluids with a broken-line velocity profile. We show how the classical scenario for transition between absolute and convective instability should be modified due to the presence of propagating waves. In the convective region, the spatial theory is relevant, and the slowest propagating wave is shown to be the most spatially amplified, as intuition suggests. Spatial theory is compared with mixing layer experiments (C.G. Koop and Browand, J. Fluid Mech. 93, part 1, 135 (1979)) and wedge flows (G. Pawlak and L. Armi, J. Fluid Mech. 376, 1 (1999)). The physical mechanism for the Holmboe mode destabilization is analyzed via an asymptotic expansion that explains precisely the absolute instability domain at large Richardson number.
NASA Astrophysics Data System (ADS)
Yeom, Jae Min; Yum, Seong Soo; Liu, Yangang; Lu, Chunsong
2017-09-01
Entrainment and mixing processes and their effects on cloud microphysics in the continental stratocumulus clouds observed in Oklahoma during the RACORO campaign are analyzed in the frame of homogeneous and inhomogeneous mixing concepts by combining the approaches of microphysical correlation, mixing diagram, and transition scale (number). A total of 110 horizontally penetrated cloud segments is analyzed. Mixing diagram and cloud microphysical relationship analyses show homogeneous mixing trait of positive relationship between liquid water content (L) and mean volume of droplets (V) (i.e., smaller droplets in more diluted parcel) in most cloud segments. Relatively small temperature and humidity differences between the entraining air from above the cloud top and cloudy air and relatively large turbulent dissipation rate are found to be responsible for this finding. The related scale parameters (i.e., transition length and transition scale number) are relatively large, which also indicates high likelihood of homogeneous mixing. Clear positive relationship between L and vertical velocity (W) for some cloud segments is suggested to be evidence of vertical circulation mixing, which may further enhance the positive relationship between L and V created by homogeneous mixing.
Canine and feline hematology reference values for the ADVIA 120 hematology system.
Moritz, Andreas; Fickenscher, Yvonne; Meyer, Karin; Failing, Klaus; Weiss, Douglas J
2004-01-01
The ADVIA 120 is a laser-based hematology analyzer with software applications for animal species. Accurate reference values would be useful for the assessment of new hematologic parameters and for interlaboratory comparisons. The goal of this study was to establish reference intervals for CBC results and new parameters for RBC morphology, reticulocytes, and platelets in healthy dogs and cats using the ADVIA 120 hematology system. The ADVIA 120, with multispecies software (version 1.107-MS), was used to analyze whole blood samples from clinically healthy dogs (n=46) and cats (n=61). Data distribution was determined and reference intervals were calculated as 2.5 to 97.5 percentiles and 25 to 75 percentiles. Most data showed Gaussian or log-normal distribution. The numbers of RBCs falling outside the normocytic-normochromic range were slightly higher in cats than in dogs. Both dogs and cats had reticulocytes with low, medium, and high absorbance. Mean numbers of large platelets and platelet clumps were higher in cats compared with dogs. Reference intervals obtained on the ADVIA 120 provide valuable baseline information for assessing new hematologic parameters and for interlaboratory comparisons. Differences compared with previously published reference values can be attributed largely to differences in methodology.
Research on characteristics of radiated noise of large cargo ship in shallow water
NASA Astrophysics Data System (ADS)
Liu, Yongdong; Zhang, Liang
2017-01-01
With the rapid development of the shipping industry, the number of the world's ships is gradually increasing, and the characteristics of their radiated noise are of growing concern. Because of multipath interference, surface waves, the sea-temperature microstructure and other factors, the received sound signal has varying characteristics in the time-frequency domain. The radiated noise of the large cargo ship JOCHOH, recorded by a horizontal hydrophone array in a shallow-water area of China in the summer of 2015, is processed and analyzed. The results show that the large cargo ship JOCHOH has a number of noise sources along the direction of the ship's bow and stern lines, such as the main engine, auxiliary engines and propellers. At low frequencies, the sound waves radiated by these sources in the ocean do not follow the spherical-wave spreading law, and the radiated noise has an inherent spatial distribution. The variation characteristics of the radiated noise of the large cargo ship in the time and frequency domains are given. The research method and results are of practical importance for related work.
A Large number of fast cosmological simulations
NASA Astrophysics Data System (ADS)
Koda, Jun; Kazin, E.; Blake, C.
2014-01-01
Mock galaxy catalogs are essential tools to analyze large-scale structure data. Many independent realizations of mock catalogs are necessary to evaluate the uncertainties in the measurements. We perform 3600 cosmological simulations for the WiggleZ Dark Energy Survey to obtain new, improved Baryon Acoustic Oscillation (BAO) cosmic distance measurements using the density field "reconstruction" technique. We use 1296^3 particles in a periodic box of 600/h Mpc on a side, which is the minimum requirement from the survey volume and observed galaxies. In order to perform such a large number of simulations, we developed a parallel code using the COmoving Lagrangian Acceleration (COLA) method, which can simulate cosmological large-scale structure reasonably well with only 10 time steps. Our simulation is more than 100 times faster than conventional N-body simulations; one COLA simulation takes only 15 minutes with 216 computing cores. We have completed the 3600 simulations with a reasonable computation time of 200k core hours. We also present the results of the revised WiggleZ BAO distance measurement, which are significantly improved by the reconstruction technique.
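One reason thousands of mocks are needed is that the covariance matrix of the measured statistic is itself estimated from the mock-to-mock scatter. A minimal sketch, assuming each mock yields a binned correlation-function vector; the Hartlap debiasing factor is standard practice in such analyses but is an assumption here, not a statement about the WiggleZ pipeline.

    import numpy as np

    def covariance_from_mocks(xi):
        """xi: (n_mocks, n_bins) array of measurements, one row per mock."""
        n_mocks, n_bins = xi.shape
        cov = np.cov(xi, rowvar=False)             # (n_bins, n_bins)
        hartlap = (n_mocks - n_bins - 2) / (n_mocks - 1)
        inv_cov = hartlap * np.linalg.inv(cov)     # debiased precision matrix
        return cov, inv_cov

    rng = np.random.default_rng(0)
    xi_mocks = rng.normal(size=(3600, 20))         # stand-in for 3600 mocks
    cov, inv_cov = covariance_from_mocks(xi_mocks)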
Variability and Maintenance of Turbulence in the Very Stable Boundary Layer
NASA Astrophysics Data System (ADS)
Mahrt, Larry
2010-04-01
The relationship of turbulence quantities to mean flow quantities, such as the Richardson number, degenerates substantially for strong stability, at least in those studies that do not place restrictions on minimum turbulence or non-stationarity. This study examines the large variability of the turbulence for very stable conditions by analyzing four months of turbulence data from a site with short grass. Brief comparisons are made with three additional sites, one over short grass on flat terrain and two with tall vegetation in complex terrain. For very stable conditions, any dependence of the turbulence quantities on the mean wind speed or bulk Richardson number becomes masked by large scatter, as found in some previous studies. The large variability of the turbulence quantities is due to random variations and other physical influences not represented by the bulk Richardson number. There is no critical Richardson number above which the turbulence vanishes. For very stable conditions, the record-averaged vertical velocity variance and the drag coefficient increase with the strength of the submeso motions (wave motions, solitary waves, horizontal modes and numerous more complex signatures). The submeso motions are on time scales of minutes and not normally considered part of the mean flow. The generation of turbulence by such unpredictable motions appears to preclude universal similarity theory for predicting the surface stress for very stable conditions. Large variation of the stress direction with respect to the wind direction for the very stable regime is also examined. Needed additional work is noted.
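The bulk Richardson number on which this analysis relies has a commonly used surface-layer form; a minimal sketch, assuming the representative definition Ri_b = (g/theta) * dtheta * z / U(z)^2, which may differ in detail from the study's exact formulation.

    def bulk_richardson(theta_surface, theta_z, z, wind_speed, g=9.81):
        """Ri_b over a layer of depth z (potential temperatures in kelvin)."""
        dtheta = theta_z - theta_surface  # positive in stable stratification
        theta_mean = 0.5 * (theta_z + theta_surface)
        return (g / theta_mean) * dtheta * z / wind_speed**2

    # e.g., a 2 K increase over 10 m with a 2 m/s wind gives Ri_b ~ 0.17
    print(round(bulk_richardson(283.0, 285.0, 10.0, 2.0), 3))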
Testing Evaluation of the Electrochemical Organic Content Analyzer
NASA Technical Reports Server (NTRS)
Davenport, R. J.
1979-01-01
The breadboard electrochemical organic content analyzer was evaluated for aerospace applications. An awareness of the disadvantages of expendables in some systems resulted in an effort to investigate ways of reducing the consumption of the analyzer's electrolyte from the rate of 5.17 kg/30 days. It was found that the electrochemical organic content analyzer can serve as the organic monitor in the water quality monitor, with a range of 0.1 to 100 mg/l total organic carbon for a large number of common organic solutes. In a flight version, it is anticipated that the analyzer would occupy 0.0002 cu m, weigh 1.4 kg, and require 10 W or less of power. With the optimum method of injecting electrolyte into the sample (saturation of the sample with a salt), it would expend only 0.04 kg of electrolyte during 30 days of continuous operation.
Foulquier, Nathan; Redou, Pascal; Le Gal, Christophe; Rouvière, Bénédicte; Pers, Jacques-Olivier; Saraux, Alain
2018-05-17
Big data analysis has become a common way to extract information from complex and large datasets in most scientific domains. This approach is now used to study large cohorts of patients in medicine. This work is a review of publications that have used artificial intelligence and advanced machine learning techniques to study physiopathogenesis-based treatments in pSS. A systematic literature review retrieved all articles reporting on the use of advanced statistical analysis applied to the study of systemic autoimmune diseases (SADs) over the last decade. An automatic bibliography screening method was developed to perform this task. The program, called BIBOT, was designed to fetch and analyze articles from the PubMed database using a list of keywords and natural language processing approaches. The evolution of trends in statistical approaches, cohort sizes and numbers of publications over this period was also computed in the process. In all, 44077 abstracts were screened and 1017 publications were analyzed. The mean number of selected articles was 101.0 (S.D. 19.16) per year, but it increased significantly over time (from 74 articles in 2008 to 138 in 2017). Among them only 12 focused on pSS, but none emphasized pathogenesis-based treatments. To conclude, medicine is progressively entering the era of big data analysis and artificial intelligence, but these approaches are not yet used to describe pSS-specific pathogenesis-based treatment. Nevertheless, large multicentre studies are investigating this aspect with advanced algorithmic tools on large cohorts of SAD patients.
Literature review: Use of commercial films as a teaching resource for health sciences students.
Díaz Membrives, Montserrat; Icart Isern, M Teresa; López Matheu, M Carmen
2016-01-01
To analyze some of the characteristics of publications focused on commercial cinema as a learning tool for university students engaged in health sciences degrees. The review was based on a search of three electronic databases: MEDLINE, CINAHL and ERIC. 54 papers were selected and analyzed. Cinema is a commonly used resource; however, there is still a lack of studies demonstrating its usefulness and validity. The analysis in this review is limited by the fact that a large number of the reported experiences have loosely specified designs.
Space Construction System Analysis. Special Emphasis Studies
NASA Technical Reports Server (NTRS)
1979-01-01
Generic concepts were analyzed to determine: (1) the maximum size of a deployable solar array which might be packaged into a single orbiter payload bay; (2) the optimal overall shape of a large erectable structure for large satellite projects; (3) the optimization of electronic communication with emphasis on the number of antennas and their diameters; and (4) the number of beams, traffic growth projections, and frequencies. It was found feasible to package a deployable solar array which could generate over 250 kilowatts of electrical power. Also, it was found that the linear-shaped erectable structure is better for ease of construction and installation of systems, and compares favorably on several other counts. The study of electronic communication technology indicated that proliferation of individual satellites will crowd the spectrum by the early 1990's, so that there will be a strong tendency toward a small number of communications platforms over the continental U.S.A. with many antennas and multiple spot beams.
Statistical properties of share volume traded in financial markets
NASA Astrophysics Data System (ADS)
Gopikrishnan, Parameswaran; Plerou, Vasiliki; Gabaix, Xavier; Stanley, H. Eugene
2000-10-01
We quantitatively investigate the ideas behind the often-expressed adage "it takes volume to move stock prices," and study the statistical properties of the number of shares traded Q_Δt for a given stock in a fixed time interval Δt. We analyze transaction data for the largest 1000 stocks for the two-year period 1994-95, using a database that records every transaction for all securities in three major US stock markets. We find that the distribution P(Q_Δt) displays a power-law decay, and that the time correlations in Q_Δt display long-range persistence. Further, we investigate the relation between Q_Δt and the number of transactions N_Δt in a time interval Δt, and find that the long-range correlations in Q_Δt are largely due to those of N_Δt. Our results are consistent with the interpretation that the large equal-time correlation previously found between Q_Δt and the absolute value of the price change |G_Δt| (related to volatility) is largely due to N_Δt.
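A sketch of the two descriptive statistics discussed above: a crude log-log tail fit for the power-law decay of P(Q_Δt), and the correlation between Q_Δt and N_Δt. The synthetic data and the simple least-squares tail fit are illustrative assumptions, not the paper's estimation procedure.

    import numpy as np

    def tail_exponent(q, tail_frac=0.05):
        """Crude power-law tail fit: slope of log rank versus log value."""
        q_sorted = np.sort(q)[::-1]
        k = max(int(len(q) * tail_frac), 10)
        ranks = np.arange(1, k + 1)
        slope, _ = np.polyfit(np.log(q_sorted[:k]), np.log(ranks), 1)
        return -slope  # exponent of the survival function ~ Q**(-zeta)

    rng = np.random.default_rng(1)
    n_trades = rng.poisson(50, size=10000)               # N per interval
    q_volume = np.array([rng.pareto(1.5, n).sum() + n for n in n_trades])
    print(tail_exponent(q_volume),
          np.corrcoef(q_volume, n_trades)[0, 1])         # Q-N correlation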
Grazier, Kyle L; Eisenberg, Daniel; Jedele, Jenefer M; Smiley, Mary L
2016-04-01
This study evaluated utilization of mental health and substance use services among enrollees at a large employee health plan following changes to benefit limits after passage in 2008 of federal mental health parity legislation. This study used a pre-post design. Benefits and claims data for 43,855 enrollees in the health plan in 2009 and 2010 were analyzed for utilization and costs after removal of a 30-visit cap on the number of covered mental health visits. There was a large increase in the proportion of health plan enrollees with more than 30 outpatient visits after the cap's removal, an increase of 255% among subscribers and 176% among dependents (p<.001). The number of people near the 30-visit limit for substance use disorders was too few to observe an effect. Federal mental health parity legislation is likely to increase utilization of mental health services by individuals who had previously met their benefit limit.
Hybrid estimation of complex systems.
Hofbaur, Michael W; Williams, Brian C
2004-10-01
Modern automated systems evolve both continuously and discretely, and hence require estimation techniques that go well beyond the capability of a typical Kalman Filter. Multiple model (MM) estimation schemes track these system evolutions by applying a bank of filters, one for each discrete system mode. Modern systems, however, are often composed of many interconnected components that exhibit rich behaviors, due to complex, system-wide interactions. Modeling these systems leads to complex stochastic hybrid models that capture the large number of operational and failure modes. This large number of modes makes a typical MM estimation approach infeasible for online estimation. This paper analyzes the shortcomings of MM estimation, and then introduces an alternative hybrid estimation scheme that can efficiently estimate complex systems with a large number of modes. It utilizes search techniques from the toolkit of model-based reasoning in order to focus the estimation on the set of most likely modes, without missing symptoms that might be hidden amongst the system noise. In addition, we present a novel approach to hybrid estimation in the presence of unknown behavioral modes. This leads to an overall hybrid estimation scheme for complex systems that robustly copes with unforeseen situations in a degraded, but fail-safe manner.
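For contrast with the hybrid scheme the abstract introduces, a minimal baseline multiple-model estimator can be written in a few lines: a bank of one-dimensional Kalman filters, one per mode, reweighted by measurement likelihood. The two modes, noise levels, and measurements below are illustrative; the cost of this baseline grows with the number of modes, which is exactly the limitation the paper addresses.

```python
# Minimal multiple-model (MM) estimation sketch: a bank of 1-D Kalman
# filters, one per discrete mode, weighted by measurement likelihood.
import numpy as np

modes = [dict(a=1.0, b=0.5), dict(a=1.0, b=0.0)]   # x' = a*x + b per step
Q, R = 0.01, 0.25                                   # process / measurement noise
x = np.zeros(len(modes)); P = np.ones(len(modes))   # per-mode state, variance
w = np.full(len(modes), 1.0 / len(modes))           # mode probabilities

def mm_step(z):
    """One predict/update cycle for every mode; reweight by likelihood."""
    global x, P, w
    like = np.empty(len(modes))
    for i, m in enumerate(modes):
        xp = m["a"] * x[i] + m["b"]                 # predicted state
        Pp = m["a"] ** 2 * P[i] + Q                 # predicted variance
        S = Pp + R                                  # innovation variance
        like[i] = np.exp(-0.5 * (z - xp) ** 2 / S) / np.sqrt(2 * np.pi * S)
        K = Pp / S                                  # Kalman gain
        x[i] = xp + K * (z - xp); P[i] = (1 - K) * Pp
    w = w * like; w /= w.sum()
    return x @ w                                    # weighted state estimate

for z in [0.4, 1.1, 1.4, 2.1]:      # measurements consistent with mode 0
    est = mm_step(z)
print("mode probabilities:", np.round(w, 3))
```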
Transition Experiments on Large Bluntness Cones with Distributed Roughness in Hypersonic Flight
NASA Technical Reports Server (NTRS)
Reda, Daniel C.; Wilder, Michael C.; Prabhu, Dinesh K.
2012-01-01
Large bluntness cones with smooth nosetips and roughened frusta were flown in the NASA Ames hypersonic ballistic range at a Mach number of 10 through quiescent air environments. Global surface intensity (temperature) distributions were optically measured and analyzed to determine transition onset and progression over the roughened surface. Real-gas Navier-Stokes calculations of model flowfields, including laminar boundary layer development in these flowfields, were conducted to predict values of key dimensionless parameters used to correlate transition on such configurations in hypersonic flow. For these large bluntness cases, predicted axial distributions of the roughness Reynolds number showed (for each specified freestream pressure) that this parameter was a maximum at the physical beginning of the roughened zone and decreased with increasing run length along the roughened surface. Roughness-induced transition occurred downstream of this maximum roughness Reynolds number location, and progressed upstream towards the beginning of the roughened zone as freestream pressure was systematically increased. Roughness elements encountered at the upstream edge of the roughened frusta thus acted like a finite-extent trip array, consistent with published results concerning the tripping effectiveness of roughness bands placed on otherwise smooth surfaces.
Characterization of spray-induced turbulence using fluorescence PIV
NASA Astrophysics Data System (ADS)
van der Voort, Dennis D.; Dam, Nico J.; Clercx, Herman J. H.; Water, Willem van de
2018-07-01
The strong shear induced by the injection of liquid sprays at high velocities induces turbulence in the surrounding medium. This, in turn, influences the motion of droplets as well as the mixing of air and vapor. Using fluorescence-based tracer particle image velocimetry, the velocity field surrounding 125-135 m/s sprays exiting a 200-μm nozzle is analyzed. For the first time, the small- and large-scale turbulence characteristics of the gas phase surrounding a spray have been measured simultaneously, using a large eddy model to determine the sub-grid scales. This further allows the calculation of the Stokes numbers of droplets, which indicates the influence of turbulence on their motion. The measurements lead to an estimate of the dissipation rate ɛ ≈ 35 m^2 s^{-3}, a microscale Reynolds number Re_λ ≈ 170, and a Kolmogorov length scale of η ≈ 10^{-4} m. Using these dissipation rates to convert a droplet size distribution to a distribution of Stokes numbers, we show that only the large-scale motion of turbulence disperses the droplets in the current case, but the small scales will grow in importance with increasing levels of atomization and ambient pressures.
Nishizaki, Tatsuya; Matoba, Osamu; Nitta, Kouichi
2014-09-01
The recording properties of three-dimensional speckle-shift multiplexing in reflection-type holographic memory are analyzed numerically. Three-dimensional recording can increase the number of multiplexed holograms by suppressing the cross-talk noise from adjacent holograms through depth-direction multiplexing rather than in-plane multiplexing. Numerical results indicate that the number of multiplexed holograms in three-layer recording can be increased to 1.44 times that of single-layer recording for an acceptable signal-to-noise ratio of 2, NA=0.43, and a recording-medium thickness of 0.5 mm.
Speckle interferometry with temporal phase evaluation for measuring large-object deformation.
Joenathan, C; Franze, B; Haible, P; Tiziani, H J
1998-05-01
We propose a new method for measuring large-object deformations by using temporal evolution of the speckles in speckle interferometry. The principle of the method is that by deforming the object continuously, one obtains fluctuations in the intensity of the speckle. A large number of frames of the object motion are collected to be analyzed later. The phase data for whole-object deformation are then retrieved by inverse Fourier transformation of a filtered spectrum obtained by Fourier transformation of the signal. With this method one is capable of measuring deformations of more than 100 μm, which is not possible using conventional electronic speckle pattern interferometry. We discuss the underlying principle of the method and the results of the experiments. Some nondestructive testing results are also presented.
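The retrieval chain described (Fourier transform of the temporal intensity signal, spectral filtering, inverse transform, then phase extraction) can be sketched for a single pixel as below. The synthetic carrier frequency, deformation term, and one-sided filter are assumptions for illustration only.

```python
# Sketch of temporal phase evaluation: Fourier-filter a per-pixel speckle
# intensity signal, then unwrap the phase of the resulting analytic signal.
import numpy as np

n = 4096
t = np.arange(n)
phase = 2 * np.pi * 0.05 * t + 40 * np.sin(2 * np.pi * t / n)  # carrier + deformation
i_t = 1.0 + 0.8 * np.cos(phase) + 0.05 * np.random.randn(n)    # intensity trace

spec = np.fft.fft(i_t)
keep = np.zeros(n, dtype=bool)
keep[1:n // 2] = True            # one-sided filter: drop DC and negative freqs
analytic = np.fft.ifft(np.where(keep, spec, 0))
recovered = np.unwrap(np.angle(analytic))   # phase history of the deformation
print("recovered phase range (rad):", recovered.max() - recovered.min())
```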
Benchmarking processes for managing large international space programs
NASA Technical Reports Server (NTRS)
Mandell, Humboldt C., Jr.; Duke, Michael B.
1993-01-01
The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered to be essential to realizing low-cost spacecraft and planetary surface systems. Several companies were studied, ranging from the large Lockheed 'Skunk Works' to small companies including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp. It is concluded that to lower prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low-cost space programs has revealed a number of prescriptive rules for low-cost management, including major changes in the relationships between the public and private sectors.
Validation of Rapid Radiochemical Method for Californium ...
Technical Brief. In the event of a radiological/nuclear contamination event, the response community would need tools and methodologies to rapidly assess the nature and the extent of contamination. To characterize a radiologically contaminated outdoor area and to inform risk assessment, large numbers of environmental samples would be collected and analyzed over a short period of time. To address the challenge of quickly providing analytical results to the field, the U.S. EPA developed a robust analytical method. This method allows response officials to characterize contaminated areas and to assess the effectiveness of remediation efforts, both rapidly and accurately, in the intermediate and late phases of environmental cleanup. Improvement in sample processing and analysis leads to increased laboratory capacity to handle the analysis of a large number of samples following the intentional or unintentional release of a radiological/nuclear contaminant.
Lattice Boltzmann simulation of nonequilibrium effects in oscillatory gas flow.
Tang, G H; Gu, X J; Barber, R W; Emerson, D R; Zhang, Y H
2008-08-01
Accurate evaluation of damping in laterally oscillating microstructures is challenging due to the complex flow behavior. In addition, device fabrication techniques and surface properties will have an important effect on the flow characteristics. Although kinetic approaches such as the direct simulation Monte Carlo (DSMC) method and directly solving the Boltzmann equation can address these challenges, they are beyond the reach of current computer technology for large scale simulation. As the continuum Navier-Stokes equations become invalid for nonequilibrium flows, we take advantage of the computationally efficient lattice Boltzmann method to investigate nonequilibrium oscillating flows. We have analyzed the effects of the Stokes number, Knudsen number, and tangential momentum accommodation coefficient for oscillating Couette flow and Stokes' second problem. Our results are in excellent agreement with DSMC data for Knudsen numbers up to Kn=O(1) and show good agreement for Knudsen numbers as large as 2.5. In addition to increasing the Stokes number, we demonstrate that increasing the Knudsen number or decreasing the accommodation coefficient can also expedite the breakdown of symmetry for oscillating Couette flow. This results in an earlier transition from quasisteady to unsteady flow. Our paper also highlights the deviation in velocity slip between Stokes' second problem and the confined Couette case.
Low-Skill Workers in Rural America Face Permanent Job Loss. Policy Brief Number 2
ERIC Educational Resources Information Center
Glasmeier, Amy; Salant, Priscilla
2006-01-01
Global economic competition and other factors have cost rural America 1.5 million jobs in the past six years. This brief analyzes job displacement figures from around the country between 1997 and 2003. The loss of rural jobs was particularly large in the manufacturing sector, and the rate of loss was higher in the rural Northeast than in the rest…
ERIC Educational Resources Information Center
Nickson, Lautrice M.; Kritsonis, William Allan
2006-01-01
The purpose of this article is to analyze factors that influence special educators to remain in the field of education. School administrators are perplexed by the large number of teachers who decide to leave the field of education after three years. The retention rates of special educators require school administrators to focus on developing a…
Twisted Radio Waves and Twisted Thermodynamics
Kish, Laszlo B.; Nevels, Robert D.
2013-01-01
We present and analyze a gedanken experiment and show that the assumption that an antenna operating at a single frequency can transmit more than two independent information channels to the far field violates the Second Law of Thermodynamics. Transmission of a large number of channels, each associated with an angular momentum ‘twisted wave’ mode, to the far field in free space is therefore not possible. PMID:23424647
Meeting the Needs for Legal Education in the South.
ERIC Educational Resources Information Center
Pye, A. Kenneth
The purpose of this paper is to collect and analyze data related to the needs of the legal profession and the capacity of law schools to meet these needs in the southern states. The law schools in this southern region are educating more law students than at any time in history. But the need for legal services in the region and the large number of…
Large-scale gene function analysis with the PANTHER classification system.
Mi, Huaiyu; Muruganujan, Anushya; Casagrande, John T; Thomas, Paul D
2013-08-01
The PANTHER (protein annotation through evolutionary relationship) classification system (http://www.pantherdb.org/) is a comprehensive system that combines gene function, ontology, pathways and statistical analysis tools that enable biologists to analyze large-scale, genome-wide data from sequencing, proteomics or gene expression experiments. The system is built with 82 complete genomes organized into gene families and subfamilies, and their evolutionary relationships are captured in phylogenetic trees, multiple sequence alignments and statistical models (hidden Markov models or HMMs). Genes are classified according to their function in several different ways: families and subfamilies are annotated with ontology terms (Gene Ontology (GO) and PANTHER protein class), and sequences are assigned to PANTHER pathways. The PANTHER website includes a suite of tools that enable users to browse and query gene functions, and to analyze large-scale experimental data with a number of statistical tests. It is widely used by bench scientists, bioinformaticians, computer scientists and systems biologists. In the 2013 release of PANTHER (v.8.0), in addition to an update of the data content, we redesigned the website interface to improve both user experience and the system's analytical capability. This protocol provides a detailed description of how to analyze genome-wide experimental data with the PANTHER classification system.
NASA Technical Reports Server (NTRS)
Grosch, C. E.; Jackson, T. L.
1991-01-01
The ignition and structure of a reacting compressible mixing layer lying between two streams of reactants with different freestream speeds and temperatures are considered using finite rate chemistry. Numerical integration of the governing equations shows that the structure of the reacting flow can be quite complicated, depending on the magnitude of the Zeldovich number. An analysis of both the ignition and diffusion flame regimes is presented using a combination of large Zeldovich number asymptotics and numerics. This makes it possible to analyze the behavior of these regimes as a function of the parameters of the problem.
Solid motor aft closure insulation erosion. [heat flux correlation for rate analysis
NASA Technical Reports Server (NTRS)
Stampfl, E.; Landsbaum, E. M.
1973-01-01
The erosion rate of aft closure insulation in a number of large solid propellant motors was empirically analyzed by correlating the average ablation rate with a number of variables that had previously been demonstrated to affect heat flux. The main correlating parameter was a heat flux based on the simplified Bartz heat transfer coefficient corrected for two-dimensional effects. A multiplying group contained terms related to port-to-throat ratio, local wall angle, grain geometry and nozzle cant angle. The resulting equation gave a good correlation and is a useful design tool.
Contrail Formation in Aircraft Wakes Using Large-Eddy Simulations
NASA Technical Reports Server (NTRS)
Paoli, R.; Helie, J.; Poinsot, T. J.; Ghosal, S.
2002-01-01
In this work we analyze the issue of the formation of condensation trails ("contrails") in the near field of an aircraft wake. The basic configuration consists of an engine exhaust jet interacting with a wing-tip trailing vortex. The procedure adopted relies on a mixed Eulerian/Lagrangian two-phase flow approach; a simple microphysics model for ice growth has been used to couple the ice and vapor phases. Large eddy simulations have been carried out at a realistic flight Reynolds number to evaluate the effects of turbulent mixing and wake vortex dynamics on ice-growth characteristics and vapor thermodynamic properties.
A Large Scale Dynamical System Immune Network Model with Finite Connectivity
NASA Astrophysics Data System (ADS)
Uezu, T.; Kadono, C.; Hatchett, J.; Coolen, A. C. C.
We study a model of an idiotypic immune network which was introduced by N. K. Jerne. It is known that in immune systems there generally exist several kinds of immune cells which can recognize any particular antigen. Taking this fact into account and assuming that each cell interacts with only a finite number of other cells, we analyze a large scale immune network via both numerical simulations and statistical mechanical methods, and show that the distribution of the concentrations of antibodies becomes non-trivial for a range of values of the strength of the interaction and the connectivity.
Statistical simulation of the magnetorotational dynamo.
Squire, J; Bhattacharjee, A
2015-02-27
Turbulence and dynamo induced by the magnetorotational instability (MRI) are analyzed using quasilinear statistical simulation methods. It is found that homogeneous turbulence is unstable to a large-scale dynamo instability, which saturates to an inhomogeneous equilibrium with a strong dependence on the magnetic Prandtl number (Pm). Despite its enormously reduced nonlinearity, the dependence of the angular momentum transport on Pm in the quasilinear model is qualitatively similar to that of nonlinear MRI turbulence. This demonstrates the importance of the large-scale dynamo and suggests how dramatically simplified models may be used to gain insight into the astrophysically relevant regimes of very low or high Pm.
Analysis of the Giacobini-Zinner bow wave
NASA Technical Reports Server (NTRS)
Smith, E. J.; Slavin, J. A.; Bame, S. J.; Thomsen, M. F.; Cowley, S. W. H.; Richardson, I. G.; Hovestadt, D.; Ipavich, F. M.; Ogilvie, K. W.; Coplan, M. A.
1986-01-01
The cometary bow wave of P/Giacobini-Zinner has been analyzed using the complete set of ICE field and particle observations to determine if it is a shock. Changes in the magnetic field and plasma flow velocities from upstream to downstream have been analyzed to determine the direction of the normal and the propagation velocity of the bow wave. The velocity has then been compared with the fast magnetosonic wave speed upstream to derive the Mach number and establish whether it is supersonic, i.e., a shock, or subsonic, i.e., a large amplitude wave. The various measurements have also been compared with values derived from a Rankine-Hugoniot analysis. The results indicate that, inbound, the bow wave is a shock with M = 1.5. Outbound, a subsonic Mach number is obtained; however, arguments are presented that the bow wave is also likely to be a shock at this location.
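The shock test reduces to comparing the inferred wave speed with the upstream fast magnetosonic speed. A back-of-envelope version, with illustrative plasma values rather than the ICE measurements, might look like this:

```python
# Back-of-envelope check of a bow-wave Mach number: upstream fast
# magnetosonic speed (perpendicular limit) vs. an assumed wave speed.
import math

B = 8e-9            # T, magnetic field (illustrative)
n = 8e6             # m^-3, proton number density (illustrative)
T = 2e5             # K, plasma temperature (illustrative)
mp, mu0, kB, gamma = 1.67e-27, 4e-7 * math.pi, 1.38e-23, 5.0 / 3.0

rho = n * mp
v_alfven = B / math.sqrt(mu0 * rho)
c_sound = math.sqrt(gamma * kB * T / mp)
v_ms = math.sqrt(v_alfven**2 + c_sound**2)   # fast speed, k perpendicular to B

v_wave = 90e3       # m/s, assumed bow-wave speed relative to the flow
print(f"fast magnetosonic speed: {v_ms/1e3:.1f} km/s, "
      f"Mach = {v_wave/v_ms:.2f}")            # > 1 would indicate a shock
```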
Simulations of Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Kuranz, Carolyn; Manuel, Mario; Keiter, Paul; Drake, R. P.
2014-10-01
Computer simulations can assist in the design and analysis of laboratory astrophysics experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport, electron heat conduction and laser ray tracing. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze including: Kelvin-Helmholtz, Rayleigh-Taylor, imploding bubbles, and interacting jet experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via Grant DEFC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, Grant Number DE-NA0001840, and by the National Laser User Facility Program, Grant Number DE-NA0000850.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradonjic, Milan; Hagberg, Aric; Hengartner, Nick
We analyze component evolution in general random intersection graphs (RIGs) and give conditions on the existence and uniqueness of the giant component. Our techniques generalize the existing methods for analysis of component evolution in RIGs. That is, we analyze survival and extinction properties of a dependent, inhomogeneous Galton-Watson branching process on general RIGs. Our analysis relies on bounding the branching processes and inherits the fundamental concepts from the study of component evolution in Erdos-Renyi graphs. The main challenge comes from the underlying structure of RIGs, where the number of offspring follows a binomial distribution with a different number of nodes and a different rate at each step during the evolution. RIGs can be interpreted as a model for large randomly formed non-metric data sets. Besides the mathematical analysis of component evolution, which we provide in this work, we perceive RIGs as an important random structure which has already found applications in social networks, epidemic networks, blog readership, and wireless sensor networks.
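The survival-versus-extinction dichotomy of such a branching process is easy to explore numerically. The sketch below simulates a Galton-Watson process with Binomial(n, p) offspring, here with fixed n and p, whereas in the paper's setting both vary by step; the parameter values and the escape cap are assumptions.

```python
# Galton-Watson branching process sketch with Binomial(n, p) offspring,
# estimating the survival probability by Monte Carlo.
import numpy as np

def survives(n, p, generations=50, cap=10_000):
    """Simulate one lineage; treat explosive growth as survival."""
    rng = np.random.default_rng()
    z = 1
    for _ in range(generations):
        if z == 0:
            return False
        if z > cap:              # escaped to "giant component" scale
            return True
        z = int(rng.binomial(n, p, size=z).sum())
    return z > 0

n, p = 50, 0.03                  # mean offspring n*p = 1.5 > 1: supercritical
trials = 2000
rate = sum(survives(n, p) for _ in range(trials)) / trials
print(f"estimated survival probability: {rate:.2f}")
```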
Singh, Juswinder; Deng, Zhan; Narale, Gaurav; Chuaqui, Claudio
2006-01-01
The combination of advances in structure-based drug design efforts in the pharmaceutical industry in parallel with structural genomics initiatives in the public domain has led to an explosion in the number of structures of protein-small molecule complexes. This information has critical importance both to the understanding of the structural basis for molecular recognition in biological systems and to the design of better drugs. A significant challenge exists in managing this vast amount of data and fully leveraging it. Here, we review our work to develop a simple, fast way to store, organize, mine, and analyze large numbers of protein-small molecule complexes. We illustrate the utility of the approach to the management of inhibitor complexes from the protein kinase family. Finally, we describe our recent efforts in applying this method to the design of target-focused chemical libraries.
Conflict Misleads Large Carnivore Management and Conservation: Brown Bears and Wolves in Spain.
Fernández-Gil, Alberto; Naves, Javier; Ordiz, Andrés; Quevedo, Mario; Revilla, Eloy; Delibes, Miguel
2016-01-01
Large carnivores inhabiting human-dominated landscapes often interact with people and their properties, leading to conflict scenarios that can mislead carnivore management and, ultimately, jeopardize conservation. In northwest Spain, brown bears Ursus arctos are strictly protected, whereas sympatric wolves Canis lupus are subject to lethal control. We explored ecological, economic and societal components of conflict scenarios involving large carnivores and damages to human properties. We analyzed the relation between complaints of depredations by bears and wolves on beehives and livestock, respectively, and bear and wolf abundance, livestock heads, number of culled wolves, amount of paid compensations, and media coverage. We also evaluated the efficiency of wolf culling to reduce depredations on livestock. Bear damages to beehives correlated positively to the number of female bears with cubs of the year. Complaints of wolf predation on livestock were unrelated to livestock numbers; instead, they correlated positively to the number of wild ungulates harvested during the previous season, the number of wolf packs, and to wolves culled during the previous season. Compensations for wolf complaints were fivefold higher than for bears, but media coverage of wolf damages was thirtyfold higher. Media coverage of wolf damages was unrelated to the actual costs of wolf damages, but the amount of news correlated positively to wolf culling. However, wolf culling was followed by an increase in compensated damages. Our results show that culling of the wolf population failed in its goal of reducing damages, and suggest that management decisions are at least partly mediated by press coverage. We suggest that our results provide insight to similar scenarios, where several species of large carnivores share the landscape with humans, and management may be reactive to perceived conflicts.
Clinical research in Finland in 2002 and 2007: quantity and type.
Hemminki, Elina; Virtanen, Jorma; Veerus, Piret; Regushevskaya, Elena
2013-05-16
Regardless of worries over clinical research and various initiatives to overcome problems, few quantitative data on the numbers and type of clinical research exist. This article aims to describe the volume and type of clinical research in 2002 and 2007 in Finland. The research law in Finland requires all medical research to be submitted to regional ethics committees (RECs). Data from all new projects in 2002 and 2007 were collected from REC files and the characteristics of clinical projects (76% of all submissions) were analyzed. The number of clinical projects was large, but declining: 794 in 2002 and 762 in 2007. Drug research (mainly trials) represented 29% and 34% of the clinical projects; their total number had not declined, but those without a commercial sponsor had. The number of different principal investigators was large (630 and 581). Most projects were observational, while an experimental design was used in 43% of projects. Multi-center studies were common. In half of the projects, the main funder was health care or was done as unpaid work; 31% had industry funding as the main source. There was a clear difference in the type of research by sponsorship. Industry-funded research was largely drug research, international multi-center studies, with randomized controlled or other experimental design. The findings for the two years were similar, but a university hospital as the main research site became less common between 2002 and 2007. Clinical research projects were common, but numbers are declining; research was largely funded by health care, with many physicians involved. Drug trials were a minority, even though most research promotion efforts and regulation concerns them.
FIM, a Novel FTIR-Based Imaging Method for High Throughput Locomotion Analysis
Otto, Nils; Löpmeier, Tim; Valkov, Dimitar; Jiang, Xiaoyi; Klämbt, Christian
2013-01-01
We designed a novel imaging technique based on frustrated total internal reflection (FTIR) to obtain high resolution and high contrast movies. This FTIR-based Imaging Method (FIM) is suitable for a wide range of biological applications and a wide range of organisms. It operates at all wavelengths, permitting the in vivo detection of fluorescent proteins. To demonstrate the benefits of FIM, we analyzed large groups of crawling Drosophila larvae. The number of analyzable locomotion tracks was increased by implementing a new software module capable of preserving larval identity during most collision events. This module is integrated in our new tracking program named FIMTrack, which subsequently extracts a number of features required for the analysis of complex locomotion phenotypes. FIM enables high throughput screening for even subtle behavioral phenotypes. We tested this newly developed setup by analyzing locomotion deficits caused by the glial knockdown of several genes. Suppression of kinesin heavy chain (khc) or rab30 function led to contraction pattern or head-sweeping defects, which had escaped detection in previous analyses. Thus, FIM permits forward genetic screens aimed at unraveling the neural basis of behavior. PMID:23349775
NASA Astrophysics Data System (ADS)
Bonelli, Maria Grazia; Ferrini, Mauro; Manni, Andrea
2016-12-01
The assessment of metal and organic micropollutant contamination in agricultural soils is a difficult challenge due to the extensive area over which a very large number of samples must be collected and analyzed. Regarding the measurement of dioxins and dioxin-like PCBs and the subsequent treatment of data, the European Community advises developing low-cost and fast methods allowing routine analysis of a great number of samples, providing rapid measurement of these compounds in the environment, feeds and food. The aim of the present work has been to find a method suitable to describe the relations occurring between organic and inorganic contaminants and to use the values of the latter in order to forecast the former. In practice, the use of a portable soil metal analyzer coupled with an efficient statistical procedure enables the required objective to be achieved. Compared to Multiple Linear Regression, the Artificial Neural Networks technique has shown itself to be an excellent forecasting method, even though there is no linear correlation between the variables to be analyzed.
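A minimal version of the MLR-versus-ANN comparison can be set up with scikit-learn, as sketched below on synthetic metal/dioxin data; the feature names, network size, and data-generating relation are all assumptions.

```python
# Sketch of the comparison above: multiple linear regression vs. a small
# neural network predicting an organic contaminant level from metal
# concentrations. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
metals = rng.uniform(0, 100, size=(300, 5))       # e.g. Cu, Pb, Zn, Cd, Ni
dioxin = np.log1p(metals[:, 0] * metals[:, 2]) + rng.normal(0, 0.1, 300)

Xtr, Xte, ytr, yte = train_test_split(metals, dioxin, random_state=0)
mlr = LinearRegression().fit(Xtr, ytr)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000,
                   random_state=0).fit(Xtr, ytr)
print(f"MLR R^2: {mlr.score(Xte, yte):.2f}  ANN R^2: {ann.score(Xte, yte):.2f}")
```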
Direct simulation of flat-plate boundary layer with mild free-stream turbulence
NASA Astrophysics Data System (ADS)
Wu, Xiaohua; Moin, Parviz
2014-11-01
Spatially evolving direct numerical simulation of the flat-plate boundary layer has been performed. The momentum thickness Reynolds number develops from 80 to 3000 with a free-stream turbulence intensity decaying from 3 percent to 0.8 percent. Predicted skin-friction is in agreement with the Blasius solution prior to breakdown, follows the well-known T3A bypass transition data during transition, and agrees with the Erm and Joubert Melbourne wind-tunnel data after the completion of transition. We introduce the concept of bypass transition in the narrow sense. Streaks, although present, do not appear to be dynamically important during the present bypass transition as they occur downstream of infant turbulent spots. For the turbulent boundary layer, viscous scaling collapses the rate of dissipation profiles in the logarithmic region at different Reynolds numbers. The ratio of Taylor microscale and the Kolmogorov length scale is nearly constant over a large portion of the outer layer. The ratio of large-eddy characteristic length and the boundary layer thickness scales very well with Reynolds number. The turbulent boundary layer is also statistically analyzed using frequency spectra, conditional-sampling, and two-point correlations. Near momentum thickness Reynolds number of 2900, three layers of coherent vortices are observed: the upper and lower layers are distinct hairpin forests of large and small sizes respectively; the middle layer consists of mostly fragmented hairpin elements.
NASA Astrophysics Data System (ADS)
Sakaida, Satoshi; Tabe, Yutaka; Chikahisa, Takemi
2017-09-01
A method for large-scale simulation with the lattice Boltzmann method (LBM) is proposed for liquid water movement in a gas diffusion layer (GDL) of polymer electrolyte membrane fuel cells. The LBM is able to analyze two-phase flows in complex structures; however, the simulation domain is limited due to heavy computational loads. This study investigates a variety of means to reduce computational loads and increase the simulation area. One is applying an LBM treating the two phases as having the same density, together with keeping numerical stability at large time steps. The applicability of this approach is confirmed by comparing the results with rigorous simulations using the actual density. The second is establishing the maximum limit of the Capillary number that maintains flow patterns similar to the precise simulation; this is attempted because the computational load is inversely proportional to the Capillary number. The results show that the Capillary number can be increased to 3.0 × 10^{-3}, where actual operation corresponds to Ca = 10^{-5}∼10^{-8}. The limit is also investigated experimentally using an enlarged scale model satisfying similarity conditions for the flow. Finally, a demonstration is made of the effects of pore uniformity in the GDL as an example of a large-scale simulation covering a channel.
Performance Analysis Tool for HPC and Big Data Applications on Scientific Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Wucherl; Koo, Michelle; Cao, Yu
Big data is prevalent in HPC computing. Many HPC projects rely on complex workflows to analyze terabytes or petabytes of data. These workflows often require running over thousands of CPU cores and performing simultaneous data accesses, data movements, and computation. It is challenging to analyze the performance involving terabytes or petabytes of workflow data or measurement data of the executions, from complex workflows over a large number of nodes and multiple parallel task executions. To help identify performance bottlenecks or debug the performance issues in large-scale scientific applications and scientific clusters, we have developed a performance analysis framework, using state-of-the-art open-source big data processing tools. Our tool can ingest system logs and application performance measurements to extract key performance features, and apply the most sophisticated statistical tools and data mining methods on the performance data. It utilizes an efficient data processing engine to allow users to interactively analyze a large amount of different types of logs and measurements. To illustrate the functionality of the big data analysis framework, we conduct case studies on the workflows from an astronomy project known as the Palomar Transient Factory (PTF) and the job logs from the genome analysis scientific cluster. Our study processed many terabytes of system logs and application performance measurements collected on the HPC systems at NERSC. The implementation of our tool is generic enough to be used for analyzing the performance of other HPC systems and Big Data workflows.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Szymanski, R., E-mail: rszymans@cbmm.lodz.pl; Sosnowski, S.; Maślanka, Ł.
2016-03-28
Theoretical analysis and computer simulations (Monte Carlo and numerical integration of differential equations) show that the statistical effect of a small number of reacting molecules depends on the way the molecules are distributed among the small-volume nano-reactors (droplets in this study). A simple reversible association A + B = C was chosen as a model reaction, enabling observation of both thermodynamic (apparent equilibrium constant) and kinetic effects of a small number of reactant molecules. When substrates are distributed uniformly among droplets, all containing the same equal number of substrate molecules, the apparent equilibrium constant of the association is higher than the chemical one (observed in a macroscopic, large-volume system). The average rate of the association, being initially independent of the numbers of molecules, becomes (at higher conversions) higher than that in a macroscopic system: the lower the number of substrate molecules in a droplet, the higher is the rate. This results in the correspondingly higher apparent equilibrium constant. A quite opposite behavior is observed when reactant molecules are distributed randomly among droplets: the apparent association rate and equilibrium constants are lower than those observed in large-volume systems, being the lower, the lower is the average number of reacting molecules in a droplet. The random distribution of reactant molecules corresponds to ideal (equal sizes of droplets) dispersing of a reaction mixture. Our simulations have shown that when the equilibrated large-volume system is dispersed, the resulting droplet system is already at equilibrium and no changes in the proportions of droplets differing in reactant compositions can be observed upon prolongation of the reaction time.
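A single-droplet version of the described Monte Carlo experiment can be written as a Gillespie simulation of A + B = C. The rate constants, molecule counts, and averaging window below are illustrative, not the paper's values.

```python
# Gillespie simulation sketch for A + B <-> C in a single droplet, to probe
# small-number effects on the apparent equilibrium composition.
import random

def droplet_equilibrium(nA, nB, kf=1.0, kr=0.1, t_end=200.0):
    """Return the time-averaged C count for one droplet starting with nA, nB."""
    a, b, c, t = nA, nB, 0, 0.0
    c_time = 0.0
    while t < t_end:
        r_f, r_r = kf * a * b, kr * c          # reaction propensities
        total = r_f + r_r
        if total == 0.0:
            break
        dt = random.expovariate(total)         # time to the next event
        c_time += c * min(dt, t_end - t)
        t += dt
        if random.random() < r_f / total:
            a, b, c = a - 1, b - 1, c + 1      # association event
        else:
            a, b, c = a + 1, b + 1, c - 1      # dissociation event
    return c_time / t_end

random.seed(0)
mean_c = sum(droplet_equilibrium(5, 5) for _ in range(500)) / 500
print(f"time-averaged <C> per droplet: {mean_c:.2f}")
```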
Development of an Efficient Binaural Simulation for the Analysis of Structural Acoustic Data
NASA Technical Reports Server (NTRS)
Lalime, Aimee L.; Johnson, Marty E.; Rizzi, Stephen A. (Technical Monitor)
2002-01-01
Binaural or "virtual acoustic" representation has been proposed as a method of analyzing acoustic and vibroacoustic data. Unfortunately, this binaural representation can require extensive computer power to apply the Head Related Transfer Functions (HRTFs) to a large number of sources, as with a vibrating structure. This work focuses on reducing the number of real-time computations required in this binaural analysis through the use of Singular Value Decomposition (SVD) and Equivalent Source Reduction (ESR). The SVD method reduces the complexity of the HRTF computations by breaking the HRTFs into dominant singular values (and vectors). The ESR method reduces the number of sources to be analyzed in real-time computation by replacing sources on the scale of a structural wavelength with sources on the scale of an acoustic wavelength. It is shown that the effectiveness of the SVD and ESR methods improves as the complexity of the source increases. In addition, preliminary auralization tests have shown that the results from both the SVD and ESR methods are indistinguishable from the results found with the exhaustive method.
Santos, Taciana Mirella Batista Dos; Cardoso, Mirian Domingos; Pitangui, Ana Carolina Rodarti; Santos, Yasmim Gabriella Cardoso; Paiva, Saul Martins; Melo, João Paulo Ramos; Silva, Lygia Maria Pereira
2016-12-01
The scope of this study was to analyze the trend of completeness of the data on violence perpetrated against adolescents registered in the State of Pernambuco between 2009 and 2012. This involved a cross-sectional survey of 5,259 adolescents, who were the victims of violence reported in SINAN-VIVA of the Pernambuco State Health Department. Simple linear regression was used to investigate the trend of completeness of the variables. The percentages of completeness were considered to be dependent variables (Y) and the number of years as independent variables (X). The results show a significant increase of 204% in the number of notifications. However, of the 34 variables analyzed, 27 (79.4%) showed a stationary trend, 6 (17.6%) a downward trend, and only one variable (2.9%) an upward trend. Completeness was considered 'Very Poor' for the variables: Education (47.3%), Full Address (21.3%), Occurrence Time (38%) and Use of Alcohol by the Attacker (47%). Therefore, despite the large increase in the number of notifications, data quality continued to be compromised, hampering a more realistic analysis of this group.
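The trend classification described (dependent variable Y = completeness percentage, independent variable X = year) amounts to a simple linear regression per variable, as sketched below with illustrative numbers:

```python
# Simple linear regression sketch for a completeness trend: Y = yearly
# completeness percentage of one variable, X = year. Values are invented.
from scipy.stats import linregress

years = [2009, 2010, 2011, 2012]
completeness = [46.0, 47.5, 46.8, 48.9]      # e.g. the Education field (%)

fit = linregress(years, completeness)
trend = ("upward" if (fit.pvalue < 0.05 and fit.slope > 0) else
         "downward" if fit.pvalue < 0.05 else "stationary")
print(f"slope={fit.slope:.2f} %/yr, p={fit.pvalue:.2f} -> {trend} trend")
```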
Kim, Yunhwan; Lee, Sunmi; Chu, Chaeshin; Choe, Seoyun; Hong, Saeme; Shin, Youngseo
2016-02-01
The outbreak of Middle East respiratory syndrome coronavirus (MERS-CoV) was one of the major events in South Korea in 2015. In particular, this study focuses on formulating a mathematical model for MERS transmission dynamics and estimating transmission rates. Incidence data of MERS-CoV from the government authority were analyzed for the first aim, and a mathematical model was built and analyzed for the second aim of the study. A mathematical model for MERS-CoV transmission dynamics is used to estimate the transmission rates in two periods, due to the implementation of intensive interventions. Using the estimates of the transmission rates, the basic reproduction number was estimated in each of the two periods. Due to the superspreader, the basic reproduction number was very large in the first period; however, the basic reproduction number of the second period was reduced significantly after intensive interventions. The intensive isolation and quarantine interventions turned out to be the most critical factors that prevented the spread of the MERS outbreak. The results are expected to be useful in devising more efficient intervention strategies in the future.
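A minimal two-period transmission model in the spirit of the abstract can be written as an SIR system whose contact rate drops at the intervention time. All parameter values below are illustrative stand-ins, not the fitted estimates.

```python
# Two-period SIR sketch: high transmission before intensive interventions,
# much lower transmission afterwards.
import numpy as np
from scipy.integrate import odeint

N = 50_000_000                    # approximate population of South Korea
gamma = 1 / 7.0                   # recovery/isolation rate (per day)
beta1, beta2, t_switch = 4.0 * gamma, 0.5 * gamma, 20.0   # assumed rates

def sir(y, t):
    S, I, R = y
    beta = beta1 if t < t_switch else beta2
    new_inf = beta * S * I / N
    return [-new_inf, new_inf - gamma * I, gamma * I]

t = np.linspace(0, 120, 601)
S, I, R = odeint(sir, [N - 1, 1, 0], t).T
print(f"R0 before/after interventions: {beta1/gamma:.1f} / {beta2/gamma:.1f}")
print(f"peak infectious count: {I.max():.0f}")
```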
Particulate Matter deposition on Quercus ilex leaves in an industrial city of central Italy.
Sgrigna, G; Sæbø, A; Gawronski, S; Popek, R; Calfapietra, C
2015-02-01
A number of studies have focused on urban trees to understand their capacity to mitigate air pollution. In this study, particulate matter (PM) deposition on Quercus ilex leaves was quantitatively analyzed in four districts of the City of Terni (Italy) for three periods of the year. Fine (between 0.2 and 2.5 μm) and Large (between 2.5 and 10 μm) PM fractions were analyzed. The mean PM deposition value on Quercus ilex leaves was 20.6 μg cm^{-2}. Variations in PM deposition correlated with distance to main roads and downwind position relative to the industrial area. Epicuticular waxes were measured and related to accumulated PM. For Fine PM deposited in waxes we observed a higher value (40% of total Fine PM) than for Large PM (4% of total Large PM). Results from this study increase our understanding of air pollution interactions with urban vegetation and could hopefully be taken into account when guidelines for local urban green management are drawn up. Copyright © 2014 Elsevier Ltd. All rights reserved.
Characterization of fire regime in Sardinia (Italy)
NASA Astrophysics Data System (ADS)
Bacciu, V. M.; Salis, M.; Mastinu, S.; Masala, F.; Sirca, C.; Spano, D.
2012-12-01
In the last decades, a number of Authors highlighted the crucial role of forest fires within Mediterranean ecosystems, with both negative and positive impacts on all biosphere components and with reverberations on different scales. Fire determines landscape structure and plant composition, but it is also the cause of enormous economic and ecological damage, besides the loss of human life. In Sardinia (Italy), the second largest island of the Mediterranean Basin, forest fires are perceived as one of the main environmental and social problems, and data show that the situation is worsening, especially within rural-urban peripheries, with an increasing number of very large forest fires. The need for information concerning the forest fire regime has been pointed out by several Authors (e.g. Rollins et al., 2002), who also emphasized the importance of understanding the factors (such as weather/climate, socio-economic factors, and land use) that determine spatial and temporal fire patterns. These would be used not only as a baseline to predict climate change effects on forest fires, but also as a fire management and mitigation strategy. The main aim of this paper is, thus, to analyze the temporal and spatial patterns of fire occurrence in Sardinia (Italy) during the last three decades (1980-2010). For the analyzed period, fire statistics were provided by the Sardinian Forest Service (CFVA - Corpo Forestale e di Vigilanza Ambientale), while weather data for eight weather stations were obtained from the web site www.tutiempo.it. For each station, daily series of precipitation, mean, maximum and minimum temperature, relative humidity and wind speed were available. The present study first analyzed fire statistics (burned area and number of fires) according to the main fire regime characteristics (seasonality, fire return interval, fire incidence, fire size distribution). Then, fire and weather daily values were averaged to obtain monthly, seasonal and annual values, and a set of parametric and nonparametric statistical tests was used to analyze the fire-weather relationships. Results showed high inter- and intra-annual variability, also considering the different types of affected vegetation. As for other Mediterranean areas, a small number of large fires caused a high proportion of the burned area. Land cover greatly influenced fire occurrence and fire size distribution across the landscape. Furthermore, fire activity (number of fires and area burned) showed significant correlations with weather variables, especially summer precipitation and wind, which seemed to drive the fire seasons and the fire propagation, respectively.
NASA Astrophysics Data System (ADS)
Yang, Chencheng; Tang, Gang; Hu, Xiong
2017-07-01
The shore-hoisting motor produces a large number of vibration signal data in daily work. In order to analyze the correlations among the data and discover faults and potential safety hazards of the motor, the data are first discretized, and then the Apriori algorithm is used to mine strong association rules among the data. The results show that the relationship between day 1 and day 16 is the closest, which can guide the staff to analyze the work of the motor on these two days to find and solve fault and safety problems.
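The mining step can be sketched with the mlxtend implementation of Apriori: discretize daily vibration features into levels, one-hot encode them, and extract high-confidence rules. The feature values, bin counts, and thresholds below are assumptions.

```python
# Sketch of Apriori rule mining on discretized vibration features
# (mlxtend assumed installed). Feature values are synthetic.
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

rng = np.random.default_rng(2)
rms = rng.uniform(0, 1, size=(30, 3))                  # 30 days x 3 features
levels = pd.DataFrame(
    {f"f{j}": pd.cut(rms[:, j], 3, labels=["low", "mid", "high"])
     for j in range(3)})
onehot = pd.get_dummies(levels)                        # boolean item matrix

freq = apriori(onehot, min_support=0.2, use_colnames=True)
rules = association_rules(freq, metric="confidence", min_threshold=0.7)
print(rules[["antecedents", "consequents", "support", "confidence"]].head())
```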
Research Dilemmas with Behavioral Big Data.
Shmueli, Galit
2017-06-01
Behavioral big data (BBD) refers to very large and rich multidimensional data sets on human and social behaviors, actions, and interactions, which have become available to companies, governments, and researchers. A growing number of researchers in social science and management fields acquire and analyze BBD for the purpose of extracting knowledge and scientific discoveries. However, the relationships between the researcher, data, subjects, and research questions differ in the BBD context compared to traditional behavioral data. Behavioral researchers using BBD face not only methodological and technical challenges but also ethical and moral dilemmas. In this article, we discuss several dilemmas, challenges, and trade-offs related to acquiring and analyzing BBD for causal behavioral research.
A two-step sensitivity analysis for hydrological signatures in Jinhua River Basin, East China
NASA Astrophysics Data System (ADS)
Pan, S.; Fu, G.; Chiang, Y. M.; Xu, Y. P.
2016-12-01
Owing to model complexity and a large number of parameters, calibration and sensitivity analysis are difficult processes for distributed hydrological models. In this study, a two-step sensitivity analysis approach is proposed for analyzing the hydrological signatures in Jinhua River Basin, East China, using the Distributed Hydrology-Soil-Vegetation Model (DHSVM). A rough sensitivity analysis is first conducted to obtain preliminary influential parameters via Analysis of Variance. The number of parameters was greatly reduced from eighty-three to sixteen. Afterwards, the sixteen parameters are further analyzed with a variance-based global sensitivity analysis, i.e., Sobol's sensitivity analysis, to achieve robust sensitivity rankings and parameter contributions. Parallel computing is applied to reduce the computational burden of the variance-based sensitivity analysis. The results reveal that only a small number of model parameters are significantly sensitive, including the rain LAI multiplier, lateral conductivity, porosity, field capacity, wilting point of clay loam, understory monthly LAI, understory minimum resistance and root zone depths of croplands. Finally, several hydrological signatures are used for investigating the performance of DHSVM. Results show that high values of efficiency criteria did not indicate excellent performance for all hydrological signatures. For most samples from Sobol's sensitivity analysis, water yield was simulated very well, but the lowest and maximum annual daily runoffs were underestimated, and most seven-day minimum runoffs were overestimated. Nevertheless, good performance on these three signatures still exists in a number of samples. Analysis of peak flow shows that small and medium floods are simulated well while large floods are slightly underestimated. The work in this study helps the further multi-objective calibration of the DHSVM model and indicates where to improve the reliability and credibility of model simulation.
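The second, variance-based step can be sketched with the SALib package: Saltelli sampling of the reduced parameter set followed by Sobol index computation. The three parameters, their bounds, and the toy response standing in for a DHSVM run are assumptions.

```python
# Sobol sensitivity analysis sketch with SALib: Saltelli sampling plus
# first-order indices for a toy model standing in for DHSVM runs.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["lateral_conductivity", "porosity", "field_capacity"],
    "bounds": [[1e-5, 1e-3], [0.3, 0.6], [0.1, 0.4]],
}
X = saltelli.sample(problem, 1024)              # N*(2D+2) parameter sets

def toy_runoff(x):                              # stand-in for a model run
    return 5.0 * x[:, 0] * 1e3 + 2.0 * x[:, 1] + 0.1 * x[:, 2] ** 2

Si = sobol.analyze(problem, toy_runoff(X))
for name, s1 in zip(problem["names"], Si["S1"]):
    print(f"{name:22s} first-order index: {s1:.2f}")
```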
Improved argument-FFT frequency offset estimation for QPSK coherent optical Systems
NASA Astrophysics Data System (ADS)
Han, Jilong; Li, Wei; Yuan, Zhilin; Li, Haitao; Huang, Liyan; Hu, Qianggao
2016-02-01
A frequency offset estimation (FOE) algorithm based on the fast Fourier transform (FFT) of the signal's argument is investigated, which does not require removing the modulated data phase. In this paper, we analyze the flaw of the argument-FFT algorithm and propose a combined FOE algorithm, in which the absolute value of the frequency offset (FO) is accurately calculated by the argument-FFT algorithm with a relatively large number of samples and the sign of the FO is determined by an FFT-based interpolation discrete Fourier transform (DFT) algorithm with a relatively small number of samples. Compared with previous algorithms based on argument-FFT, the proposed one has low complexity and can still work effectively with relatively few samples.
Lorenz, Kim; Cohen, Barak A.
2012-01-01
Quantitative trait loci (QTL) with small effects on phenotypic variation can be difficult to detect and analyze. Because of this a large fraction of the genetic architecture of many complex traits is not well understood. Here we use sporulation efficiency in Saccharomyces cerevisiae as a model complex trait to identify and study small-effect QTL. In crosses where the large-effect quantitative trait nucleotides (QTN) have been genetically fixed we identify small-effect QTL that explain approximately half of the remaining variation not explained by the major effects. We find that small-effect QTL are often physically linked to large-effect QTL and that there are extensive genetic interactions between small- and large-effect QTL. A more complete understanding of quantitative traits will require a better understanding of the numbers, effect sizes, and genetic interactions of small-effect QTL. PMID:22942125
Analysis of Genome-Wide Association Studies with Multiple Outcomes Using Penalization
Liu, Jin; Huang, Jian; Ma, Shuangge
2012-01-01
Genome-wide association studies have been extensively conducted, searching for markers for biologically meaningful outcomes and phenotypes. Penalization methods have been adopted in the analysis of the joint effects of a large number of SNPs (single nucleotide polymorphisms) and marker identification. This study is partly motivated by the analysis of the heterogeneous stock mice dataset, in which multiple correlated phenotypes and a large number of SNPs are available. Existing penalization methods designed to analyze a single response variable cannot accommodate the correlation among multiple response variables. With multiple response variables sharing the same set of markers, joint modeling is first employed to accommodate the correlation. The group Lasso approach is adopted to select markers associated with all the outcome variables. An efficient computational algorithm is developed. Simulation study and analysis of the heterogeneous stock mice dataset show that the proposed method can outperform existing penalization methods. PMID:23272092
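A stand-in for the joint group-penalized model is scikit-learn's MultiTaskLasso, whose row-wise L2/L1 penalty zeroes out a SNP for all phenotypes at once, mirroring the group Lasso selection described above. Genotypes, phenotypes, and the penalty weight below are simulated assumptions.

```python
# Sketch of joint marker selection across correlated phenotypes with
# MultiTaskLasso; the group penalty keeps or drops a SNP for all outcomes.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(3)
n, p, k = 200, 500, 3                      # subjects, SNPs, phenotypes
X = rng.binomial(2, 0.3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
B = np.zeros((p, k))
B[:5] = rng.normal(1.0, 0.2, size=(5, k))  # 5 truly associated SNPs
Y = X @ B + rng.normal(0, 1.0, size=(n, k))

model = MultiTaskLasso(alpha=0.5).fit(X, Y)
selected = np.flatnonzero(np.linalg.norm(model.coef_.T, axis=1) > 1e-8)
print("SNPs selected for all phenotypes:", selected)
```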
Detecting a Weak Association by Testing its Multiple Perturbations: a Data Mining Approach
NASA Astrophysics Data System (ADS)
Lo, Min-Tzu; Lee, Wen-Chung
2014-05-01
Many risk factors/interventions in epidemiologic/biomedical studies have minuscule effects. To detect such weak associations, one needs a study with a very large sample size (the number of subjects, n). The n of a study can be increased, but unfortunately only to an extent. Here, we propose a novel method which hinges on increasing sample size in a different direction: the total number of variables (p). We construct a p-based 'multiple perturbation test', and conduct power calculations and computer simulations to show that it can achieve very high power to detect weak associations when p can be made very large. As a demonstration, we apply the method to analyze a genome-wide association study on age-related macular degeneration and identify two novel genetic variants that are significantly associated with the disease. The p-based method may set the stage for a new paradigm of statistical tests.
NASA Astrophysics Data System (ADS)
Plebe, Alice; Grasso, Giorgio
2016-12-01
This paper describes a system developed for the simulation of flames inside the open-source 3D computer graphics software Blender, with the aim of analyzing hazard scenarios in large-scale industrial plants in virtual reality. The advantages of Blender are its ability to render the very complex structures of large industrial plants at high resolution, and its embedded physics engine based on smoothed particle hydrodynamics. This particle system is used to evolve a simulated fire. The interaction of this fire with the components of the plant is computed using polyhedron separation distances, adopting a Voronoi-based strategy that optimizes the number of feature distance computations. Results on a real oil and gas refining plant are presented.
Mathematical Models to Determine Stable Behavior of Complex Systems
NASA Astrophysics Data System (ADS)
Sumin, V. I.; Dushkin, A. V.; Smolentseva, T. E.
2018-05-01
The paper analyzes the possibility of predicting the functioning of a complex dynamic system with a significant amount of circulating information and a large number of random factors impacting its functioning. The functioning of the complex dynamic system is described in terms of chaotic states, self-organized criticality and bifurcations. This problem may be resolved by modeling such systems as dynamic ones, without applying stochastic models, and by taking strange attractors into account.
Although exome sequencing data are generated primarily to detect single-nucleotide variants and indels, they can also be used to identify a subset of genomic rearrangements whose breakpoints are located in or near exons. Using >4,600 tumor and normal pairs across 15 cancer types, we identified over 9,000 high-confidence somatic rearrangements, including a large number of gene fusions.
ERIC Educational Resources Information Center
Neumark, David; Johnson, Hans; Li, Qian; Schiff, Eric
2011-01-01
The impending retirement of the baby boom cohort could pose dramatic challenges for the U.S. labor force for at least two reasons. First, the boomers--adults born between 1946 and 1964--are large in number. Second, boomers are relatively well educated. In this report we develop and analyze occupational and labor force projections to the year 2018,…
A Computer Vision System for Analyzing Images of Rough Hardwood Lumber
Tai-Hoon Cho; Richard W. Conners; Philip A. Araman
1990-01-01
A sawmill cuts logs into lumber and sells this lumber to secondary remanufacturers. The price a sawmiller can charge for a volume of lumber depends on its grade. For a number of species the price of a given volume of material can double in going from one grade to the next higher grade. While the grade of a board largely depends on the distribution of defects on the...
Nonlinear dynamics, chaos and complex cardiac arrhythmias
NASA Technical Reports Server (NTRS)
Glass, L.; Courtemanche, M.; Shrier, A.; Goldberger, A. L.
1987-01-01
Periodic stimulation of a nonlinear cardiac oscillator in vitro gives rise to complex dynamics that is well described by one-dimensional finite difference equations. As stimulation parameters are varied, a large number of different phase-locked and chaotic rhythms are observed. Similar rhythms can be observed in the intact human heart when there is interaction between two pacemaker sites. Simplified models are analyzed, which show some correspondence to clinical observations.
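The one-dimensional finite difference equations mentioned above can be illustrated with the standard circle map, a textbook model of a periodically stimulated nonlinear oscillator. The sketch below (parameters are illustrative, not taken from the paper) estimates the winding number, whose rational plateaus correspond to phase-locked rhythms.

```python
# Sketch: phase locking in a periodically forced oscillator via the standard
# circle map, a classic 1D finite-difference model (parameters illustrative).
import numpy as np

def winding_number(omega, K, n_iter=2000, n_skip=500):
    """Average phase advance per stimulus; rational values => phase locking."""
    phi = 0.0
    total = 0.0
    for i in range(n_iter):
        dphi = omega - (K / (2 * np.pi)) * np.sin(2 * np.pi * phi)
        phi += dphi
        if i >= n_skip:
            total += dphi
    return total / (n_iter - n_skip)

for omega in (0.30, 0.33, 0.50):
    print(omega, round(winding_number(omega, K=0.9), 3))
```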
Automatic Classification of Cellular Expression by Nonlinear Stochastic Embedding (ACCENSE).
Shekhar, Karthik; Brodin, Petter; Davis, Mark M; Chakraborty, Arup K
2014-01-07
Mass cytometry enables an unprecedented number of parameters to be measured in individual cells at high throughput, but the large dimensionality of the resulting data severely limits approaches relying on manual "gating." Clustering cells based on phenotypic similarity entails a loss of single-cell resolution, and often the number of subpopulations is unknown a priori. Here we describe ACCENSE, a tool that combines nonlinear dimensionality reduction with density-based partitioning and displays multivariate cellular phenotypes on a 2D plot. We apply ACCENSE to 35-parameter mass cytometry data from CD8(+) T cells derived from specific pathogen-free and germ-free mice, and stratify cells into phenotypic subpopulations. Our results show significant heterogeneity within the known CD8(+) T-cell subpopulations; of particular note, we find a large novel subpopulation in both specific pathogen-free and germ-free mice that has not been described previously. This subpopulation possesses a phenotypic signature that is distinct from conventional naive and memory subpopulations when analyzed by ACCENSE, but is not distinguishable on a biaxial plot of standard markers. We are able to automatically identify cellular subpopulations based on all proteins analyzed, thus aiding the full utilization of powerful new single-cell technologies such as mass cytometry.
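The general recipe ACCENSE describes, nonlinear embedding followed by density-based partitioning, can be sketched with off-the-shelf components. t-SNE and DBSCAN below are assumed stand-ins applied to simulated data; they do not reproduce ACCENSE's own density-based partitioning, and the parameters may need tuning on real data.

```python
# Illustrative pipeline in the spirit of ACCENSE: nonlinear dimensionality
# reduction to 2D, then density-based partitioning. t-SNE and DBSCAN are
# stand-ins; ACCENSE's own partitioning method is not reproduced here.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Simulated 35-parameter single-cell data with three phenotypic subpopulations.
cells = np.vstack([rng.normal(mu, 1.0, size=(300, 35)) for mu in (0, 3, 6)])

embedding = TSNE(n_components=2, perplexity=30, random_state=1).fit_transform(cells)
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(embedding)
print("subpopulations found:", len(set(labels) - {-1}))
```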
Kinnings, Sarah L; Geis, Jennifer A; Almasri, Eyad; Wang, Huiquan; Guan, Xiaojun; McCullough, Ron M; Bombard, Allan T; Saldivar, Juan-Sebastian; Oeth, Paul; Deciu, Cosmin
2015-08-01
Sufficient fetal DNA in a maternal plasma sample is required for accurate aneuploidy detection via noninvasive prenatal testing, thus highlighting a need to understand the factors affecting fetal fraction. The MaterniT21™ PLUS test uses massively parallel sequencing to analyze cell-free fetal DNA in maternal plasma and detect chromosomal abnormalities. We assess the impact of a variety of factors, both maternal and fetal, on the fetal fraction across a large number of samples processed by Sequenom Laboratories. The rate of increase in fetal fraction with increasing gestational age varies across the duration of the testing period and is also influenced by fetal aneuploidy status. Maternal weight trends inversely with fetal fraction, and we find no added benefit from analyzing body mass index or blood volume instead of weight. Strong correlations exist between fetal fractions from aliquots taken from the same patient at the same blood draw and also at different blood draws. While a number of factors trend with fetal fraction across the cohort as a whole, they are not the sole determinants of fetal fraction. In this study, the variability for any one patient does not appear large enough to justify postponing testing to a later gestational age. © 2015 John Wiley & Sons, Ltd.
Lewis, I.A.D.
1956-05-15
This patent pertains to an electrical pulse amplitude analyzer, capable of accepting input pulses having a separation between adjacent pulses on the order of one microsecond while providing a large number of channels of classification. In its broad aspect the described pulse amplitude analyzer utilizes a storage cathode ray tube and control circuitry whereby the amplitude of the analyzed pulses controls both the intensity and vertical deflection of the beam to charge particular spots in horizontal sectors of the tube face as the beam is moved horizontally across the tube face. As soon as the beam has swept the length of the tube, the information stored therein is read out by scanning individually each horizontal sector corresponding to a certain range of pulse amplitudes and applying the output signal from each scan to separate indicating means.
Organizational structure and communication networks in a university environment
NASA Astrophysics Data System (ADS)
Mathiesen, Joachim; Jamtveit, Bjørn; Sneppen, Kim
2010-07-01
The “six degrees of separation” between any two individuals on Earth has become emblematic of the “small world” theme, even though the information conveyed via a chain of human encounters decays very rapidly with increasing chain length, and diffusion of information via this process may be very inefficient in large human organizations. The information flow on a communication network in a large organization, the University of Oslo, has been studied by analyzing email records. The records allow for quantification of communication intensity across organizational levels and between organizational units (referred to as “modules”). We find that the number of email messages within modules scales with module size to the power of 1.29 ± 0.06, and that the frequency of communication between individuals decays exponentially with the number of links required upward in the organizational hierarchy before they are connected. Our data also indicate that the number of messages sent by administrative units is proportional to the number of individuals at lower levels in the administrative hierarchy, and the “divergence of information” within modules is associated with this linear relationship. The observed scaling is consistent with a hierarchical system in which individuals far apart in the organization interact little with each other and receive a disproportionate number of messages from higher levels in the administrative hierarchy.
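A scaling exponent such as the reported 1.29 ± 0.06 is typically estimated by linear regression on log-log axes; the following minimal sketch, on synthetic data seeded with that exponent, shows the procedure.

```python
# Sketch: estimating a power-law exponent for messages vs. module size
# by linear regression in log-log space (synthetic data; the exponent
# 1.29 below is seeded in, mimicking the paper's reported scaling).
import numpy as np

rng = np.random.default_rng(2)
size = rng.integers(5, 500, size=60)                  # module sizes
messages = 3.0 * size**1.29 * rng.lognormal(0, 0.2, size=60)

slope, intercept = np.polyfit(np.log(size), np.log(messages), 1)
print(f"estimated exponent: {slope:.2f}")             # ~1.29
```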
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2015-01-01
During inactive phases of the Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from an amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES (Clouds and the Earth's Radiant Energy System) observations between July 2006 and June 2010 for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index, which assigns the tropics to one of the eight MJO phases each day. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The criteria for defining these cloud types are overcast footprints and cloud top pressures less than 400 hPa, but DC has higher cloud optical depths (≥10) than CS (<10). The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters because of the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects of a particular phase into "small" and "large" groups. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and the most depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation speeds/directions.
SNP discovery in the bovine milk transcriptome using RNA-Seq technology.
Cánovas, Angela; Rincon, Gonzalo; Islas-Trejo, Alma; Wickramasinghe, Saumya; Medrano, Juan F
2010-12-01
High-throughput sequencing of RNA (RNA-Seq) was developed primarily to analyze global gene expression in different tissues. However, it also is an efficient way to discover coding SNPs. The objective of this study was to perform a SNP discovery analysis in the milk transcriptome using RNA-Seq. Seven milk samples from Holstein cows were analyzed by sequencing cDNAs using the Illumina Genome Analyzer system. We detected 19,175 genes expressed in milk samples corresponding to approximately 70% of the total number of genes analyzed. The SNP detection analysis revealed 100,734 SNPs in Holstein samples, and a large number of those corresponded to differences between the Holstein breed and the Hereford bovine genome assembly Btau4.0. The number of polymorphic SNPs within Holstein cows was 33,045. The accuracy of RNA-Seq SNP discovery was tested by comparing SNPs detected in a set of 42 candidate genes expressed in milk that had been resequenced earlier using Sanger sequencing technology. Seventy of 86 SNPs were detected using both RNA-Seq and Sanger sequencing technologies. The KASPar Genotyping System was used to validate unique SNPs found by RNA-Seq but not observed by Sanger technology. Our results confirm that analyzing the transcriptome using RNA-Seq technology is an efficient and cost-effective method to identify SNPs in transcribed regions. This study creates guidelines to maximize the accuracy of SNP discovery and prevention of false-positive SNP detection, and provides more than 33,000 SNPs located in coding regions of genes expressed during lactation that can be used to develop genotyping platforms to perform marker-trait association studies in Holstein cattle.
Opposed-flow flame spread and extinction in mixed-convection boundary layers
NASA Technical Reports Server (NTRS)
Altenkirch, R. A.; Wedha-Nayagam, M.
1989-01-01
Experimental data for flame spread down thin fuel samples in an opposing, mixed-convection, boundary-layer flow are analyzed to determine the gas-phase velocity that characterizes how the flame reacts as it spreads toward the leading edge of the fuel sample into a thinning boundary layer. In the forced-flow limit, where the cube of the Reynolds number divided by the Grashof number, Re^3/Gr, is large, L_q/L_e, where L_q is a theoretical flame standoff distance at extinction and L_e is the measured distance from the leading edge of the sample where extinction occurs, is found to be proportional to Re^n with n = -0.874 and Re based on L_e. The value of n is established by the character of the flow field near the leading edge of the flame. The Re dependence is used, along with a correction for the mixed-convection situation where Re^3/Gr is not large, to construct a Damkohler number with which the measured spread rates correlate for all values of Re^3/Gr.
Xie, Weibo; Wang, Gongwei; Yuan, Meng; Yao, Wen; Lyu, Kai; Zhao, Hu; Yang, Meng; Li, Pingbo; Zhang, Xing; Yuan, Jing; Wang, Quanxiu; Liu, Fang; Dong, Huaxia; Zhang, Lejing; Li, Xinglei; Meng, Xiangzhou; Zhang, Wan; Xiong, Lizhong; He, Yuqing; Wang, Shiping; Yu, Sibin; Xu, Caiguo; Luo, Jie; Li, Xianghua; Xiao, Jinghua; Lian, Xingming; Zhang, Qifa
2015-01-01
Intensive rice breeding over the past 50 y has dramatically increased productivity especially in the indica subspecies, but our knowledge of the genomic changes associated with such improvement has been limited. In this study, we analyzed low-coverage sequencing data of 1,479 rice accessions from 73 countries, including landraces and modern cultivars. We identified two major subpopulations, indica I (IndI) and indica II (IndII), in the indica subspecies, which corresponded to the two putative heterotic groups resulting from independent breeding efforts. We detected 200 regions spanning 7.8% of the rice genome that had been differentially selected between IndI and IndII, and thus referred to as breeding signatures. These regions included large numbers of known functional genes and loci associated with important agronomic traits revealed by genome-wide association studies. Grain yield was positively correlated with the number of breeding signatures in a variety, suggesting that the number of breeding signatures in a line may be useful for predicting agronomic potential and the selected loci may provide targets for rice improvement. PMID:26358652
Melchardt, Thomas; Hufnagl, Clemens; Weinstock, David M; Kopp, Nadja; Neureiter, Daniel; Tränkenschuh, Wolfgang; Hackl, Hubert; Weiss, Lukas; Rinnerthaler, Gabriel; Hartmann, Tanja N; Greil, Richard; Weigert, Oliver; Egle, Alexander
2016-08-09
Little information is available about the role of certain mutations in clonal evolution and clinical outcome at relapse in diffuse large B-cell lymphoma (DLBCL). Therefore, we analyzed formalin-fixed, paraffin-embedded tumor samples from first diagnosis and from relapsed or refractory disease in 28 patients using next-generation sequencing of the exons of 104 coding genes. Non-synonymous mutations were present in 74 of the 104 genes tested. Primary tumor samples showed a median of 8 non-synonymous mutations (range: 0-24) with the gene set used. Lower numbers of non-synonymous mutations in the primary tumor were associated with a better median OS compared with higher numbers (28 versus 15 months, p=0.031). We observed three patterns of clonal evolution at relapse: large global change, subclonal selection, and no or minimal change, possibly suggesting preprogrammed resistance. We conclude that targeted re-sequencing is a feasible and informative approach to characterize the molecular pattern of relapse, and it creates novel insights into the role of the dynamics of individual genes.
Imaging mass spectrometry data reduction: automated feature identification and extraction.
McDonnell, Liam A; van Remoortere, Alexandra; de Velde, Nico; van Zeijl, René J M; Deelder, André M
2010-12-01
Imaging MS now enables the parallel analysis of hundreds of biomolecules, spanning multiple molecular classes, which allows tissues to be described by their molecular content and distribution. When combined with advanced data analysis routines, tissues can be analyzed and classified based solely on their molecular content. Such molecular histology techniques have been used to distinguish regions with differential molecular signatures that could not be distinguished using established histologic tools. However, its potential to provide an independent, complementary analysis of clinical tissues has been limited by the very large file sizes and large number of discrete variables associated with imaging MS experiments. Here we demonstrate data reduction tools, based on automated feature identification and extraction, for peptide, protein, and lipid imaging MS, using multiple imaging MS technologies, that reduce data loads and the number of variables by >100×, and that highlight highly-localized features that can be missed using standard data analysis strategies. It is then demonstrated how these capabilities enable multivariate analysis on large imaging MS datasets spanning multiple tissues. Copyright © 2010 American Society for Mass Spectrometry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
García-Barberena, Javier; Olcoz, Asier; Sorbet, Fco. Javier
2017-06-01
CSP technologies are essential for allowing large shares of renewables into the grid, due to their unique ability to cope with the large variability of the energy resource by means of technically and economically feasible thermal energy storage (TES) systems. However, technological breakthroughs towards cost reductions and increased efficiencies are still needed, and research on advanced power cycles, like the Decoupled Solar Combined Cycle (DSCC), is regarded as a key objective. The DSCC concept is, basically, a combined Brayton-Rankine cycle in which the bottoming cycle is decoupled from the operation of the topping cycle by means of an intermediate storage system. According to this concept, one or several solar towers, each driving a solar air receiver and a gas turbine (Brayton cycle), feed a single storage system and bottoming cycle through their exhaust gases. This general concept benefits from large design flexibility. On the one hand, different schemes are possible with respect to the number and configuration of solar towers, the storage media and configuration, the bottoming cycles, etc. On the other hand, within a specific scheme a large number of design parameters can be optimized, including the solar field size, the operating temperatures and pressures of the receiver, the power of the Brayton and Rankine cycles, the storage capacity and others. Heretofore, DSCC plants have been analyzed by means of simple steady-state models with pre-established operating parameters in the power cycles. In this work, a detailed transient simulation model for DSCC plants has been developed and is used to analyze different DSCC plant schemes. For each of the analyzed plant schemes, a sensitivity analysis and selection of the main design parameters is carried out. Results show that an increase in annual solar-to-electric efficiency of 30% (from 12.91% to 16.78%) can be achieved by using two bottoming Rankine cycles at two different temperatures, enabling low-temperature heat recovery from the receiver and gas turbine exhaust gases.
Shumaker, L; Fetterolf, D E; Suhrie, J
1998-01-01
The recent availability of inexpensive document scanners and optical character recognition technology has created the ability to process surveys in large numbers with a minimum of operator time. Programs that allow computer entry of such scanned questionnaire results directly into PC-based relational databases have further made it possible to quickly collect and analyze significant amounts of information. We have created an internal capability to easily generate survey data and conduct surveillance across a number of medical practice sites within a managed care/practice management organization. Patient satisfaction surveys, referring physician surveys, and a variety of other evidence-gathering tools have been deployed.
Wuelfing, W Peter; Daublain, Pierre; Kesisoglou, Filippos; Templeton, Allen; McGregor, Caroline
2015-04-06
In the drug discovery setting, the ability to rapidly identify drug absorption risk in preclinical species at high doses from easily measured physical properties is desired. This is due to the large number of molecules being evaluated and their high attrition rate, which make resource-intensive in vitro and in silico evaluation unattractive. High-dose in vivo data from rat, dog, and monkey are analyzed here, using a preclinical dose number (PDo) concept based on the dose number described by Amidon and other authors (Pharm. Res., 1993, 10, 264-270). PDo, as described in this article, is simply calculated as dose (mg/kg) divided by compound solubility in FaSSIF (mg/mL) and approximates the volume of biorelevant media per kilogram of animal that would be needed to fully dissolve the dose. High PDo values were found to be predictive of difficulty in achieving drug exposure (AUC)-dose proportionality in in vivo studies, as could be expected; however, this work analyzes a large data set (>900 data points) and provides quantitative guidance to identify drug absorption risk in preclinical species based on a single solubility measurement commonly carried out in drug discovery. Above the PDo values defined, >50% of all in vivo studies exhibited poor AUC-dose proportionality in rat, dog, and monkey, and these values can be utilized as general guidelines in discovery and early development to rapidly assess risk of solubility-limited absorption for a given compound. A preclinical dose number generated by biorelevant dilutions of formulated compounds (formulated PDo) was also evaluated and defines solubility targets predictive of suitable AUC-dose proportionality in formulation development efforts. Application of these guidelines can serve to efficiently identify compounds in discovery that are likely to present extreme challenges with respect to solubility-limited absorption in preclinical species as well as reduce the testing of poor formulations in vivo, which is a key ethical and resource matter.
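The preclinical dose number defined in this abstract is a one-line computation; the sketch below implements the stated definition directly, with the risk cutoff left as a placeholder since the paper's species-specific values are not reproduced here.

```python
# Preclinical dose number (PDo) as defined in the abstract:
# dose (mg/kg) divided by FaSSIF solubility (mg/mL), approximating the mL of
# biorelevant medium per kg of animal needed to dissolve the dose.
def preclinical_dose_number(dose_mg_per_kg: float,
                            fassif_solubility_mg_per_ml: float) -> float:
    return dose_mg_per_kg / fassif_solubility_mg_per_ml

# Hypothetical compound: 100 mg/kg dose, 0.05 mg/mL FaSSIF solubility.
pdo = preclinical_dose_number(100.0, 0.05)
# The paper derives species-specific PDo cutoffs above which >50% of studies
# showed poor AUC-dose proportionality; 'threshold' here is a placeholder.
threshold = 1000.0
print(pdo, "at-risk" if pdo > threshold else "lower-risk")
```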
Parameterized reduced order models from a single mesh using hyper-dual numbers
NASA Astrophysics Data System (ADS)
Brake, M. R. W.; Fike, J. A.; Topping, S. D.
2016-06-01
In order to assess the predicted performance of a manufactured system, analysts must consider random variations (both geometric and material) in the development of a model, instead of a single deterministic model of an idealized geometry with idealized material properties. The incorporation of random geometric variations, however, potentially could necessitate the development of thousands of nearly identical solid geometries that must be meshed and separately analyzed, which would require an impractical number of man-hours to complete. This research advances a recent approach to uncertainty quantification by developing parameterized reduced order models. These parameterizations are based upon Taylor series expansions of the system's matrices about the ideal geometry, and a component mode synthesis representation for each linear substructure is used to form an efficient basis with which to study the system. The numerical derivatives required for the Taylor series expansions are obtained via hyper-dual numbers, and are compared to parameterized models constructed with finite difference formulations. The advantage of using hyper-dual numbers is twofold: accuracy of the derivatives to machine precision, and the need to generate only a single mesh of the system of interest. The theory is applied to a stepped beam system in order to demonstrate proof of concept. The results demonstrate that the hyper-dual number multivariate parameterization of geometric variations, which is largely neglected in the literature, is accurate for both sensitivity and optimization studies. As model and mesh generation can constitute the greatest expense of time in analyzing a system, the foundation to create a parameterized reduced order model based off of a single mesh is expected to reduce dramatically the time necessary to analyze multiple realizations of a component's possible geometry.
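Hyper-dual numbers deliver first and second derivatives exact to machine precision in a single evaluation, which is the property the parameterization relies on. The minimal arithmetic below (addition and multiplication only) was written for this collection rather than taken from the paper, and demonstrates the idea on a cubic.

```python
# Minimal hyper-dual arithmetic: evaluating f(x0 + e1 + e2) yields f(x0),
# f'(x0) (e1 part) and f''(x0) (e1*e2 part) exactly, with no step-size error.
class HyperDual:
    def __init__(self, f0, f1=0.0, f2=0.0, f12=0.0):
        self.f0, self.f1, self.f2, self.f12 = f0, f1, f2, f12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f0 + o.f0, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)
    __radd__ = __add__

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f0 * o.f0,
                         self.f0 * o.f1 + self.f1 * o.f0,
                         self.f0 * o.f2 + self.f2 * o.f0,
                         self.f0 * o.f12 + self.f12 * o.f0
                         + self.f1 * o.f2 + self.f2 * o.f1)
    __rmul__ = __mul__

def f(x):          # example function: f(x) = x^3 + 2x
    return x * x * x + 2 * x

x = HyperDual(2.0, f1=1.0, f2=1.0)   # seed both perturbation directions
y = f(x)
print(y.f0, y.f1, y.f12)             # 12.0, 14.0 (=3x^2+2), 12.0 (=6x)
```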
Large-Scale Diversity of Slope Fishes: Pattern Inconsistency between Multiple Diversity Indices
Gaertner, Jean-Claude; Colloca, Francesco; Politou, Chrissi-Yianna; Gil De Sola, Luis; Bertrand, Jacques A.; Murenu, Matteo; Durbec, Jean-Pierre; Kallianiotis, Argyris; Mannini, Alessandro
2013-01-01
Large-scale studies focused on the diversity of continental slope ecosystems are still rare, usually restricted to a limited number of diversity indices and mainly based on the empirical comparison of heterogeneous local data sets. In contrast, we investigate large-scale fish diversity on the basis of multiple diversity indices and using 1454 standardized trawl hauls collected throughout the upper and middle slope of the whole northern Mediterranean Sea (36°3′- 45°7′ N; 5°3′W - 28°E). We have analyzed (1) the empirical relationships between a set of 11 diversity indices in order to assess their degree of complementarity/redundancy and (2) the consistency of spatial patterns exhibited by each of the complementary groups of indices. Regarding species richness, our results contrasted both the traditional view based on the hump-shaped theory for bathymetric pattern and the commonly-admitted hypothesis of a large-scale decreasing trend correlated with a similar gradient of primary production in the Mediterranean Sea. More generally, we found that the components of slope fish diversity we analyzed did not always show a consistent pattern of distribution according either to depth or to spatial areas, suggesting that they are not driven by the same factors. These results, which stress the need to extend the number of indices traditionally considered in diversity monitoring networks, could provide a basis for rethinking not only the methodological approach used in monitoring systems, but also the definition of priority zones for protection. Finally, our results call into question the feasibility of properly investigating large-scale diversity patterns using a widespread approach in ecology, which is based on the compilation of pre-existing heterogeneous and disparate data sets, in particular when focusing on indices that are very sensitive to sampling design standardization, such as species richness. PMID:23843962
Wu, Yubao; Zhu, Xiaofeng; Chen, Jian; Zhang, Xiang
2013-11-01
Epistasis (gene-gene interaction) detection in large-scale genetic association studies has recently drawn extensive research interest, as many complex traits are likely caused by the joint effect of multiple genetic factors. The large number of possible interactions poses both statistical and computational challenges. A variety of approaches have been developed to address the analytical challenges in epistatic interaction detection. These methods usually output the identified genetic interactions and store them in flat file formats. It is highly desirable to develop an effective visualization tool to further investigate the detected interactions and unravel hidden interaction patterns. We have developed EINVis, a novel visualization tool that is specifically designed to analyze and explore genetic interactions. EINVis displays interactions among genetic markers as a network. It utilizes a circular layout (specifically, a tree ring view) to simultaneously visualize the hierarchical interactions between single nucleotide polymorphisms (SNPs), genes, and chromosomes, and the network structure formed by these interactions. Using EINVis, the user can distinguish marginal effects from interactions, track interactions involving more than two markers, visualize interactions at different levels, and detect proxy SNPs based on linkage disequilibrium. EINVis is an effective and user-friendly free visualization tool for analyzing and exploring genetic interactions. It is publicly available with detailed documentation and an online tutorial on the web at http://filer.case.edu/yxw407/einvis/. © 2013 WILEY PERIODICALS, INC.
On the challenges of drawing conclusions from p-values just below 0.05
2015-01-01
In recent years, researchers have attempted to provide an indication of the prevalence of inflated Type 1 error rates by analyzing the distribution of p-values in the published literature. De Winter & Dodou (2015) analyzed the distribution (and its change over time) of a large number of p-values automatically extracted from abstracts in the scientific literature. They concluded there is a ‘surge of p-values between 0.041–0.049 in recent decades’ which ‘suggests (but does not prove) questionable research practices have increased over the past 25 years.’ I show that the changes in the ratio of fractions of p-values between 0.041–0.049 over the years are better explained by assuming the average power has decreased over time. Furthermore, I propose that their observation that p-values just below 0.05 increase more strongly than p-values above 0.05 can be explained by an increase in publication bias (or the file drawer effect) over the years (cf. Fanelli, 2012; Pautasso, 2010), which has led to a relative decrease of ‘marginally significant’ p-values in abstracts in the literature (instead of an increase in p-values just below 0.05). I explain why researchers analyzing large numbers of p-values need to relate their assumptions to a model of p-value distributions that takes into account the average power of the performed studies, the ratio of true positives to false positives in the literature, the effects of publication bias, and the Type 1 error rate (and possible mechanisms through which it has inflated). Finally, I discuss why publication bias and underpowered studies might be a bigger problem for science than inflated Type 1 error rates, and explain the challenges when attempting to draw conclusions about inflated Type 1 error rates from a large heterogeneous set of p-values. PMID:26246976
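The argument about average power can be made concrete with a small simulation, written for this collection rather than taken from the paper: among significant results, lower-powered studies place a larger fraction of their p-values just below 0.05.

```python
# Sketch: how average power shapes the fraction of p-values just below .05.
# One-sided z-tests are simulated for a "high power" and a "low power" effect;
# all parameter values are illustrative, not taken from the paper.
import numpy as np
from scipy.stats import norm

def frac_in_band(effect_z, n_sim=200_000, lo=0.041, hi=0.049, rng=None):
    rng = rng or np.random.default_rng(3)
    z = rng.normal(effect_z, 1.0, n_sim)      # test statistics under H1
    p = 1.0 - norm.cdf(z)                     # one-sided p-values
    sig = p[p < 0.05]                         # "significant" results only
    return np.mean((sig > lo) & (sig < hi))

print("high power:", frac_in_band(2.93))      # ~90% power at alpha = .05
print("low power: ", frac_in_band(1.64))      # ~50% power
```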
Neural network post-processing of grayscale optical correlator
NASA Technical Reports Server (NTRS)
Lu, Thomas T; Hughlett, Casey L.; Zhoua, Hanying; Chao, Tien-Hsin; Hanan, Jay C.
2005-01-01
In this paper we present the use of a radial basis function neural network (RBFNN) as a post-processor that assists the optical correlator in identifying objects and rejecting false alarms. Image plane features near the correlation peaks are extracted and fed to the neural network for analysis. The approach is capable of handling a large number of object variations and filter sets. Preliminary experimental results are presented and the performance is analyzed.
Velocities of gas in star-forming regions
NASA Astrophysics Data System (ADS)
Nissen, H. D.; Gustafsson, M.; Field, D.; Lemaire, J. L.; Clénet, Y.; Rouan, D.
2007-12-01
We present high spatial (0.18") and velocity (<2 km/s) resolution observations of the central 1'x1' of OMC1. We identify a large number of shock features and determine radial velocity, position angle and emission brightness for each of these features. Using this dataset we analyze the kinematic properties of the inner square arcminute of OMC1, identifying among other things the IR signature of a massive outflow originating from source I.
Qualitative properties of large buckled states of spherical shells
NASA Technical Reports Server (NTRS)
Shih, K. G.; Antman, S. S.
1985-01-01
A system of sixth-order quasilinear ordinary differential equations is analyzed to show the global existence of axisymmetrically buckled states. A surprising nodal property is obtained: everywhere along a branch of solutions that bifurcates from a simple eigenvalue of the linearized equation, the number of simultaneously vanishing points of both the shear resultant and the circumferential bending moment resultant remains invariant, provided that a certain auxiliary condition is satisfied.
Increasing Supercycle Lengths of Active SU UMa-type Dwarf Novae
NASA Astrophysics Data System (ADS)
Otulakowska-Hypka, M.; Olech, A.
2014-12-01
We present observational evidence that the supercycle lengths of the most active SU UMa-type stars have been increasing over the past decades. We analyzed a large number of photometric measurements from available archives and found that this effect is generic for this class of stars, independent of their evolutionary status. This finding is in agreement with previous predictions and with the most recent work of Patterson et al. (2012) on BK Lyn.
Takino, Sachio; Yamashiro, Hideaki; Sugano, Yukou; Fujishima, Yohei; Nakata, Akifumi; Kasai, Kosuke; Hayashi, Gohei; Urushihara, Yusuke; Suzuki, Masatoshi; Shinoda, Hisashi; Miura, Tomisato; Fukumoto, Manabu
2017-02-01
In this study we analyzed the effect of chronic, low-dose-rate (LDR) radiation on spermatogenic cells of large Japanese field mice (Apodemus speciosus) after the Fukushima Daiichi Nuclear Power Plant (FNPP) accident. In March 2014, large Japanese field mice were collected from two sites located in, and one site adjacent to, the FNPP ex-evacuation zone: Tanashio, Murohara and Akogi, respectively. Testes from these animals were analyzed histologically. The external dose rate from radiocesium (combined 134Cs and 137Cs) in these animals at the sampling sites was 21 μGy/day in Tanashio, 304-365 μGy/day in Murohara and 407-447 μGy/day in Akogi. In the Akogi group, the numbers of spermatogenic cells and proliferating cell nuclear antigen (PCNA)-positive cells per seminiferous tubule were significantly higher compared to the Tanashio and Murohara groups, respectively. TUNEL-positive apoptotic cells tended to be detected at a lower level in the Murohara and Akogi groups compared to the Tanashio group. These results suggest that enhanced spermatogenesis occurred in large Japanese field mice living in and around the FNPP ex-evacuation zone. It remains to be elucidated whether this phenomenon, attributed to chronic exposure to LDR radiation, will benefit or adversely affect large Japanese field mice.
NASA Astrophysics Data System (ADS)
Mohammed, Ali Ibrahim Ali
The understanding and treatment of brain disorders, as well as the development of intelligent machines, is hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave; however, new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have enabled neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now enabled the possibility of analyzing a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates a cutting-edge optogenetic molecular sensor, ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while a mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbations related to blast injuries. In this experiment, all blast-exposed mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selective motor cortex neurons, we examined their contributions to the network pathology of the basal ganglia related to Parkinson's disease. We found that inhibition of motor cortex does not alter the exaggerated beta oscillations in the striatum that are associated with parkinsonism. Together, these results demonstrate the potential of developing integrated optogenetic systems to advance our understanding of the principles underlying neural network computation, which would have broad applications from advancing artificial intelligence to disease diagnosis and treatment.
High-frequency asymptotic methods for analyzing the EM scattering by open-ended waveguide cavities
NASA Technical Reports Server (NTRS)
Burkholder, R. J.; Pathak, P. H.
1989-01-01
Four high-frequency methods are described for analyzing the electromagnetic (EM) scattering by electrically large open-ended cavities. They are: (1) a hybrid combination of waveguide modal analysis and high-frequency asymptotics, (2) geometrical optics (GO) ray shooting, (3) Gaussian beam (GB) shooting, and (4) the generalized ray expansion (GRE) method. The hybrid modal method gives very accurate results but is limited to cavities which are made up of sections of uniform waveguides for which the modal fields are known. The GO ray shooting method can be applied to much more arbitrary cavity geometries and can handle absorber treated interior walls, but it generally only predicts the major trends of the RCS pattern and not the details. Also, a very large number of rays need to be tracked for each new incidence angle. Like the GO ray shooting method, the GB shooting method can handle more arbitrary cavities, but it is much more efficient and generally more accurate than the GO method because it includes the fields diffracted by the rim at the open end which enter the cavity. However, due to beam divergence effects the GB method is limited to cavities which are not very long compared to their width. The GRE method overcomes the length-to-width limitation of the GB method by replacing the GB's with GO ray tubes which are launched in the same manner as the GB's to include the interior rim diffracted field. This method gives good accuracy and is generally more efficient than the GO method, but a large number of ray tubes needs to be tracked.
Linear optics only allows every possible quantum operation for one photon or one port
NASA Astrophysics Data System (ADS)
Moyano-Fernández, Julio José; Garcia-Escartin, Juan Carlos
2017-01-01
We study the evolution of the quantum state of n photons in m different modes when they go through a lossless linear optical system. We show that there are quantum evolution operators U that cannot be built with linear optics alone unless the number of photons or the number of modes is equal to one. The evolution for single photons can be controlled with the known realization of any unitary proved by Reck, Zeilinger, Bernstein and Bertani. The evolution for a single mode corresponds to the trivial evolution in a phase shifter. We analyze these two cases and prove that any other combination of the number of photons and modes produces a Hilbert space too large for the linear optics system to give any desired evolution.
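The dimension counting behind this result can be checked directly: n photons in m modes span a Hilbert space of dimension C(n+m-1, n), while a lossless linear optical system is fixed by an m x m unitary. The heuristic sketch below, written for this collection and not reproducing the paper's proof, compares the parameter counts.

```python
# Heuristic dimension count behind the result: the n-photon, m-mode Hilbert
# space has dimension D = C(n+m-1, n); the full unitary group on it has D^2
# real parameters, while a linear-optical interferometer supplies only m^2.
from math import comb

for n, m in [(1, 5), (3, 1), (2, 2), (3, 4)]:
    D = comb(n + m - 1, n)
    print(f"n={n} photons, m={m} modes: dim U(D)={D*D:4d} vs linear optics m^2={m*m}")
# Only n == 1 or m == 1 keeps the two counts compatible, matching the abstract.
```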
Performance Evaluation of Bluetooth Low Energy: A Systematic Review.
Tosi, Jacopo; Taffoni, Fabrizio; Santacatterina, Marco; Sannino, Roberto; Formica, Domenico
2017-12-13
Small, compact and embedded sensors are a pervasive technology in everyday life for a wide number of applications (e.g., wearable devices, domotics, e-health systems, etc.). In this context, wireless transmission plays a key role, and among the available solutions, Bluetooth Low Energy (BLE) is gaining more and more popularity. BLE merges good performance, low energy consumption and widespread diffusion. The aim of this work is to review the main methodologies adopted to investigate BLE performance. The first part of this review is an in-depth description of the protocol, highlighting the main characteristics and implementation details. The second part reviews the state of the art on BLE characteristics and performance. In particular, we analyze throughput, maximum number of connectable sensors, power consumption, latency and maximum reachable range, with the aim of identifying the current limits of BLE technology. The main results can be summarized as follows: throughput may theoretically reach the limit of ~230 kbps, but the actual applications analyzed in this review show throughputs limited to ~100 kbps; the maximum reachable range is strictly dependent on the radio power, and it goes up to a few tens of meters; the maximum number of nodes in the network depends on the connection parameters, the network architecture and specific device characteristics, but it is usually lower than 10; power consumption and latency are extensively modeled and analyzed, and are strictly dependent on a huge number of parameters. Most of these characteristics are based on analytical models, so there is a need for rigorous experimental evaluations to understand the actual limits.
Aggressiveness, violence, homicidality, homicide, and Lyme disease
Bransfield, Robert C
2018-01-01
Background: No study has previously analyzed aggressiveness, homicide, and Lyme disease (LD). Materials and methods: Retrospective LD chart reviews analyzed aggressiveness, compared 50 homicidal with 50 non-homicidal patients, and analyzed homicides. Results: Most aggression with LD was impulsive, sometimes provoked by intrusive symptoms, sensory stimulation or frustration, and was invariably bizarre and senseless. About 9.6% of LD patients were homicidal, with an average diagnosis delay of 9 years. Postinfection findings associated with homicidality that separated from the non-homicidal group within the 95% confidence interval included suicidality, sudden abrupt mood swings, explosive anger, paranoia, anhedonia, hypervigilance, exaggerated startle, disinhibition, nightmares, depersonalization, intrusive aggressive images, dissociative episodes, derealization, intrusive sexual images, marital/family problems, legal problems, substance abuse, depression, panic disorder, memory impairments, neuropathy, cranial nerve symptoms, and decreased libido. Seven LD homicides included predatory aggression, poor impulse control, and psychosis. Some patients have selective hyperacusis to mouth sounds, which I propose may be the result of brain dysfunction causing a disinhibition of a primitive fear of oral predation. Conclusion: LD and the immune, biochemical, neurotransmitter, and neural circuit reactions to it can cause impairments associated with violence. Many LD patients have no aggressive tendencies, or only mild degrees of low frustration tolerance and irritability, and pose no danger; however, a smaller number experience explosive anger, a still smaller number experience homicidal thoughts and impulses, and far fewer commit homicides. Since such large numbers are affected by LD, this small percentage can be highly significant. Much of the violence associated with LD can be avoided with better prevention, diagnosis, and treatment of LD. PMID:29576731
NASA Astrophysics Data System (ADS)
Lakra, Suchita; Mandal, Sanjoy
2017-06-01
A quadruple micro-optical ring resonator (QMORR) with multiple output bus waveguides is mathematically modeled and analyzed by making use of the delay-line signal processing approach in Z-domain and Mason's gain formula. The performances of QMORR with two output bus waveguides with vertical coupling are analyzed. This proposed structure is capable of providing wider free spectral response from both the output buses with appreciable cross talk. Thus, this configuration could provide increased capacity to insert a large number of communication channels. The simulated frequency response characteristic and its dispersion and group delay characteristics are graphically presented using the MATLAB environment.
Adaptive variational mode decomposition method for signal processing based on mode characteristic
NASA Astrophysics Data System (ADS)
Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng
2018-07-01
Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically, based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted to evaluate its performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise, and that it can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.
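The adaptive idea can be sketched as an outer loop that raises the preset mode number until adjacent modes begin to mix. The vmd function and the closeness criterion below are hypothetical placeholders; the paper's actual intrinsic-mode-function criterion is not reproduced.

```python
# Sketch of an AVMD-style outer loop: increase the preset mode number K until
# a mode-mixing indicator fires, then keep the previous decomposition.
# `vmd` is a hypothetical placeholder for any VMD solver returning
# (modes, center_frequencies); the gap test stands in for the paper's
# intrinsic-mode-function characteristic.
import numpy as np

def adaptive_vmd(signal, vmd, k_max=12, min_freq_gap=0.02):
    best = vmd(signal, 2)                      # start from two modes
    for k in range(3, k_max + 1):
        modes, omega = vmd(signal, k)
        if np.min(np.diff(np.sort(omega))) < min_freq_gap:
            return best                        # adding a mode caused mixing
        best = (modes, omega)
    return best
```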
The key role of dry days in changing regional climate and precipitation regimes
Polade, Suraj; Pierce, David W.; Cayan, Daniel R.; Gershunov, Alexander; Dettinger, Michael D.
2014-01-01
Future changes in the number of dry days per year can either reinforce or counteract projected increases in daily precipitation intensity as the climate warms. We analyze climate model projected changes in the number of dry days using 28 coupled global climate models from the Coupled Model Intercomparison Project, version 5 (CMIP5). We find that the Mediterranean Sea region, parts of Central and South America, and western Indonesia could experience up to 30 more dry days per year by the end of this century. We illustrate how changes in the number of dry days and the precipitation intensity on precipitating days combine to produce changes in annual precipitation, and show that over much of the subtropics the change in number of dry days dominates the annual changes in precipitation and accounts for a large part of the change in interannual precipitation variability.
The massive fermion phase for the U(N) Chern-Simons gauge theory in D=3 at large N
Bardeen, William A.
2014-10-07
We explore the phase structure of fermions in the U(N) Chern-Simons gauge theory in three dimensions using the large N limit, where N is the number of colors and the fermions are taken to be in the fundamental representation of the U(N) gauge group. In the large N limit, the theory retains its classical conformal behavior, and considerable attention has been paid to possible AdS/CFT dualities of the theory in the conformal phase. In this paper we present a solution for the massive phase of the fermion theory that is exact to the leading order of ‘t Hooft’s large N expansion. We present evidence for the spontaneous breaking of the exact scale symmetry and analyze the properties of the dilaton that appears as the Goldstone boson of scale symmetry breaking.
Fjell, Ylva; Alexanderson, Kristina; Nordenmark, Mikael; Bildt, Carina
2008-01-01
The aim of the present study was to analyze the association between the number of working hours, the level of perceived physical strain, the work-home interface, and musculoskeletal pain and fatigue among women and men employed in the public sector. Cross-sectional data from 1,180 employees (86% women) in 49 public workplaces in 2002-2003 were analyzed. Odds ratios (OR) with 95% confidence intervals (CIs) were used as measures of the associations. The analyses showed differences as well as similarities between women and men. Overall, the women reported higher levels of perceived physical strain relative to total workload. A high level of physical strain was strongly associated with musculoskeletal pain and fatigue. Nevertheless, no detrimental health effects of high total working hours were observed, which indicates that a large number of total working hours might be balanced by the accompanying multiple roles or responsibilities and therefore should not generally be regarded as a risk factor for ill health.
Modeling Laboratory Astrophysics Experiments using the CRASH code
NASA Astrophysics Data System (ADS)
Trantham, Matthew; Drake, R. P.; Grosskopf, Michael; Bauerle, Matthew; Kuranz, Carolyn; Keiter, Paul; Malamud, Guy; Crash Team
2013-10-01
The understanding of high-energy-density systems can be advanced by laboratory astrophysics experiments. Computer simulations can assist in the design and analysis of these experiments. The Center for Radiative Shock Hydrodynamics (CRASH) at the University of Michigan developed a code that has been used to design and analyze high-energy-density experiments on OMEGA, NIF, and other large laser facilities. This Eulerian code uses block-adaptive mesh refinement (AMR) with implicit multigroup radiation transport and electron heat conduction. This poster/talk will demonstrate some of the experiments the CRASH code has helped design or analyze, including radiative shock, Kelvin-Helmholtz, Rayleigh-Taylor, plasma sheet, and interacting jets experiments. This work is funded by the Predictive Sciences Academic Alliances Program in NNSA-ASC via grant DE-FC52-08NA28616, by the NNSA-DS and SC-OFES Joint Program in High-Energy-Density Laboratory Plasmas, grant number DE-FG52-09NA29548, and by the National Laser User Facility Program, grant number DE-NA0000850.
GrigoraSNPs: Optimized Analysis of SNPs for DNA Forensics.
Ricke, Darrell O; Shcherbina, Anna; Michaleas, Adam; Fremont-Smith, Philip
2018-04-16
High-throughput sequencing (HTS) of single nucleotide polymorphisms (SNPs) enables additional DNA forensic capabilities not attainable with traditional STR panels. However, the inclusion of sets of loci selected for mixture analysis, extended kinship, phenotype, biogeographic ancestry prediction, etc., can result in large panel sizes that are difficult to analyze rapidly. GrigoraSNPs was developed to address the allele-calling bottleneck encountered when analyzing SNP panels with more than 5000 loci using HTS. GrigoraSNPs uses MapReduce-style parallel data processing on multiple computational threads plus a novel locus-identification hashing strategy leveraging target sequence tags. This tool optimizes the SNP-calling module of the DNA analysis pipeline, with runtimes that scale linearly with the number of HTS reads. Results are compared with SNP analysis pipelines implemented with SAMtools and GATK. GrigoraSNPs removes a computational bottleneck for processing forensic samples with large HTS SNP panels. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
The prevalence of childhood dysphonia: a cross-sectional study.
Carding, Paul N; Roulstone, Sue; Northstone, Kate
2006-12-01
There is only very limited information on the prevalence of voice disorders, particularly for the pediatric population. This study examined the prevalence of dysphonia in a large cohort of children (n = 7389) at 8 years of age. Data were collected within a large prospective epidemiological study and included a formal assessment by one of five research speech and language therapists as well as a parental report of their child's voice. Common risk factors that were also analyzed included sex, sibling numbers, asthma, regular conductive hearing loss, and frequent upper respiratory infection. The research clinicians identified a dysphonia prevalence of 6% compared with a parental report of 11%. Both measures suggested a significant risk of dysphonia for children with older siblings. Other measures were not in agreement between clinician and parental reports. The clinician judgments also suggested significant risk factors for sex (male) but not for any common respiratory or otolaryngological conditions that were analyzed. Parental report suggested significant risk factors with respect to asthma and tonsillectomy. These results are discussed in detail.
Massively parallel implementation of 3D-RISM calculation with volumetric 3D-FFT.
Maruyama, Yutaka; Yoshida, Norio; Tadano, Hiroto; Takahashi, Daisuke; Sato, Mitsuhisa; Hirata, Fumio
2014-07-05
A new three-dimensional reference interaction site model (3D-RISM) program for massively parallel machines, combined with a volumetric 3D fast Fourier transform (3D-FFT), was developed and tested on the RIKEN K supercomputer. The ordinary parallel 3D-RISM program has a limit on the degree of parallelization because of the limitations of the slab-type 3D-FFT; the volumetric 3D-FFT relieves this limitation drastically. We tested the 3D-RISM calculation on a large, fine calculation cell (2048^3 grid points) on 16,384 nodes, each having eight CPU cores. The new 3D-RISM program achieved excellent parallel scalability on the RIKEN K supercomputer. As a benchmark application, we employed the program, combined with molecular dynamics simulation, to analyze the oligomerization process of a chymotrypsin inhibitor 2 mutant. The results demonstrate that the massively parallel 3D-RISM program is effective for analyzing the hydration properties of large biomolecular systems. Copyright © 2014 Wiley Periodicals, Inc.
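The parallelization limit that the volumetric 3D-FFT relieves can be seen with simple arithmetic: a slab decomposition cannot use more ranks than the grid has planes, whereas pencil or volumetric decompositions scale at least with the square of the plane count. A back-of-envelope check for the 2048^3 case follows (illustrative only; the production code's exact decomposition may differ).

```python
# Back-of-envelope: maximum MPI ranks usable by slab vs volumetric
# decompositions of an N^3 FFT grid (illustrative; the production code's
# exact domain decomposition may differ).
N = 2048                       # grid points per dimension (2048^3 cell)
slab_limit = N                 # at most one xy-plane per rank
volumetric_limit = N * N       # pencil/volume splits scale as N^2 or better
print(slab_limit, volumetric_limit)     # 2048 vs 4194304
print("cores used on K:", 16_384 * 8)   # 16,384 nodes x 8 CPU cores
```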
Kim, Augustine Yongwhi; Ha, Jin Gwan; Choi, Hoduk; Moon, Hyeonjoon
2018-01-01
The purpose of this paper is to evaluate food taste, smell, and characteristics from consumers' online reviews. Several studies in food sensory evaluation have been presented for consumer acceptance. However, these studies need a taste descriptive word lexicon, and they are not suitable for analyzing a large number of evaluators to predict consumer acceptance. In this paper, an automated text analysis method for food evaluation is presented to analyze and compare two recently introduced jjampong ramen types (mixed seafood noodles). To avoid building a sensory word lexicon, consumers' reviews are collected from SNS. Then, by training a word embedding model with the acquired reviews, the words in the large amount of review text are converted into vectors. Based on these words represented as vectors, inference is performed to evaluate the taste and smell of the two jjampong ramen types. Finally, the reliability and merits of the proposed food evaluation method are confirmed by a comparison with results from an actual consumer preference taste evaluation. PMID:29606960
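The embedding step described in this abstract can be sketched with gensim's Word2Vec, an assumed stand-in since the paper does not name its implementation; toy review sentences replace the SNS data.

```python
# Sketch of the embedding step: train word vectors on (toy) review sentences
# and compare taste-related words by cosine similarity. gensim 4.x Word2Vec
# is an assumed stand-in; the paper does not name its embedding library.
from gensim.models import Word2Vec

reviews = [
    ["the", "broth", "is", "spicy", "and", "smoky"],
    ["too", "salty", "but", "nicely", "spicy"],
    ["rich", "seafood", "flavor", "with", "smoky", "aroma"],
]
model = Word2Vec(sentences=reviews, vector_size=50, window=3,
                 min_count=1, epochs=50, seed=4)
print(model.wv.similarity("spicy", "smoky"))
```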
IFDOTMETER: A New Software Application for Automated Immunofluorescence Analysis.
Rodríguez-Arribas, Mario; Pizarro-Estrella, Elisa; Gómez-Sánchez, Rubén; Yakhine-Diop, S M S; Gragera-Hidalgo, Antonio; Cristo, Alejandro; Bravo-San Pedro, Jose M; González-Polo, Rosa A; Fuentes, José M
2016-04-01
Most laboratories interested in autophagy use different imaging software for managing and analyzing heterogeneous parameters in immunofluorescence experiments (e.g., LC3-puncta quantification and determination of the number and size of lysosomes). One solution would be software that works on a user's laptop or workstation, can access all image settings, and provides quick and easy-to-use analysis of data. Thus, we have designed and implemented an application called IFDOTMETER, which can run on all major operating systems because it has been programmed using JAVA (Sun Microsystems). Briefly, IFDOTMETER software has been created to quantify a variety of biological hallmarks, including mitochondrial morphology and nuclear condensation. The program interface is intuitive and user-friendly, making it useful for users not familiar with image analysis software. By setting previously defined parameters, the software can automatically analyze a large number of images without the supervision of the researcher. Once analysis is complete, the results are stored in a spreadsheet. Using software for high-throughput cell image analysis offers researchers the possibility of performing comprehensive and precise analysis of a high number of images in an automated manner, making this routine task easier. © 2015 Society for Laboratory Automation and Screening.
Analysis of large system black box verification test data
NASA Technical Reports Server (NTRS)
Clapp, Kenneth C.; Iyer, Ravishankar Krishnan
1993-01-01
Issues regarding black box verification of large systems are explored. The study begins by collecting data from several testing teams. An integrated database containing test, fault, repair, and source file information is generated. Intuitive effectiveness measures are generated using conventional black box testing results analysis methods. Conventional analysis methods indicate that the testing was effective in the sense that as more tests were run, more faults were found. Average behavior and individual data points are analyzed. The data are categorized, and average behavior shows a very wide variation in the number of tests run and in pass rates (pass rates ranged from 71 percent to 98 percent). The 'white box' data contained in the integrated database are studied in detail. Conservative measures of effectiveness are discussed. Testing efficiency (the ratio of repairs to number of tests) is measured at 3 percent, fault record effectiveness (the ratio of repairs to fault records) is measured at 55 percent, and test script redundancy (the ratio of the number of failed tests to the minimum number of tests needed to find the faults) ranges from 4.2 to 15.8. Error-prone source files and subsystems are identified. A correlational mapping of test functional area to product subsystem is completed. A new adaptive testing process based on real-time generation of the integrated database is proposed.
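The three ratios are plain arithmetic; the snippet below recomputes them for a hypothetical campaign, with counts chosen only so the quoted 3 percent and 55 percent figures fall out (all inputs are invented).

```python
# Hypothetical counts, chosen to reproduce the ratios quoted above.
tests_run        = 10_000
failed_tests     = 1_650
fault_records    = 545
repairs          = 300
min_tests_needed = 250   # minimum tests that would have exposed the faults

testing_efficiency  = repairs / tests_run              # abstract: ~3 percent
fault_record_effect = repairs / fault_records          # abstract: ~55 percent
script_redundancy   = failed_tests / min_tests_needed  # abstract: 4.2-15.8

print(f"testing efficiency:         {testing_efficiency:.1%}")
print(f"fault record effectiveness: {fault_record_effect:.1%}")
print(f"test script redundancy:     {script_redundancy:.1f}x")
```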
Ohashi, J; Clark, A G
2005-05-01
The recent cataloguing of a large number of SNPs enables us to perform genome-wide association studies for detecting common genetic variants associated with disease. Such studies, however, generally have limited research budgets for genotyping and phenotyping. It is therefore necessary to optimize the study design by determining the most cost-effective numbers of SNPs and individuals to analyze. In this report we applied the stepwise focusing method, with two-stage design, developed by Satagopan et al. (2002) and Saito & Kamatani (2002), to optimize the cost-effectiveness of a genome-wide direct association study using a transmission/disequilibrium test (TDT). The stepwise focusing method consists of two steps: a large number of SNPs are examined in the first focusing step, and then all the SNPs showing a significant P-value are tested again using a larger set of individuals in the second focusing step. In the framework of optimization, the numbers of SNPs and families and the significance levels in the first and second steps were regarded as variables to be considered. Our results showed that the stepwise focusing method achieves a distinct gain of power compared to a conventional method with the same research budget.
arrayCGHbase: an analysis platform for comparative genomic hybridization microarrays
Menten, Björn; Pattyn, Filip; De Preter, Katleen; Robbrecht, Piet; Michels, Evi; Buysse, Karen; Mortier, Geert; De Paepe, Anne; van Vooren, Steven; Vermeesch, Joris; Moreau, Yves; De Moor, Bart; Vermeulen, Stefan; Speleman, Frank; Vandesompele, Jo
2005-01-01
Background The availability of the human genome sequence as well as the large number of physically accessible oligonucleotides, cDNA, and BAC clones across the entire genome has triggered and accelerated the use of several platforms for analysis of DNA copy number changes, among them microarray comparative genomic hybridization (arrayCGH). One of the challenges inherent to this new technology is the management and analysis of the large numbers of data points generated in each individual experiment. Results We have developed arrayCGHbase, a comprehensive analysis platform for arrayCGH experiments consisting of a MIAME (Minimal Information About a Microarray Experiment) supportive MySQL database underlying a data mining web tool to store, analyze, interpret, compare, and visualize arrayCGH results in a uniform and user-friendly format. Following its flexible design, arrayCGHbase is compatible with all existing and forthcoming arrayCGH platforms. Data can be exported in a multitude of formats, including BED files to map copy number information on the genome using the Ensembl or UCSC genome browser. Conclusion ArrayCGHbase is a web based and platform independent arrayCGH data analysis tool that allows users to access the analysis suite through the internet or a local intranet after installation on a private server. ArrayCGHbase is available at . PMID:15910681
An evolutionary algorithm for large traveling salesman problems.
Tsai, Huai-Kuang; Yang, Jinn-Moon; Tsai, Yuan-Fang; Kao, Cheng-Yan
2004-08-01
This work proposes an evolutionary algorithm, called the heterogeneous selection evolutionary algorithm (HeSEA), for solving large traveling salesman problems (TSPs). The strengths and limitations of numerous well-known genetic operators are first analyzed, along with local search methods for TSPs, from their solution qualities and their mechanisms for preserving and adding edges. Based on this analysis, a new approach, HeSEA, is proposed which integrates edge assembly crossover (EAX) and Lin-Kernighan (LK) local search through family competition and heterogeneous pairing selection. This study demonstrates experimentally that EAX and LK can compensate for each other's disadvantages. Family competition and heterogeneous pairing selection are used to maintain the diversity of the population, which is especially useful for evolutionary algorithms in solving large TSPs. The proposed method was evaluated on 16 well-known TSPs in which the number of cities ranges from 318 to 13,509. Experimental results indicate that HeSEA performs well and is very competitive with other approaches. The proposed method can determine the optimum path when the number of cities is under 10,000, and the mean solution quality is within 0.0074% above the optimum for each test problem. These findings imply that the proposed method can find tours robustly with a fixed small population and a limited family competition length in reasonable time, when used to solve large TSPs.
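A toy version of the overall loop is sketched below: crossover, local search on the offspring, and replacement competition. Order crossover and a first-improvement 2-opt stand in for EAX and LK, which are far more sophisticated, and the heterogeneous pairing rule is omitted.

```python
# Minimal evolutionary TSP solver in the spirit of HeSEA (illustrative only).
import random, math

random.seed(0)
N = 40
pts = [(random.random(), random.random()) for _ in range(N)]

def dist(a, b):
    return math.dist(pts[a], pts[b])

def length(tour):
    return sum(dist(tour[i], tour[(i + 1) % N]) for i in range(N))

def two_opt(tour):
    """First-improvement 2-opt local search (stand-in for Lin-Kernighan)."""
    improved = True
    while improved:
        improved = False
        for i in range(N - 1):
            for j in range(i + 2, N - (i == 0)):
                a, b, c, d = tour[i], tour[i + 1], tour[j], tour[(j + 1) % N]
                if dist(a, b) + dist(c, d) > dist(a, c) + dist(b, d) + 1e-12:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def ox(p1, p2):
    """Order crossover (stand-in for edge assembly crossover)."""
    i, j = sorted(random.sample(range(N), 2))
    child = [None] * N
    child[i:j] = p1[i:j]
    fill = [c for c in p2 if c not in child[i:j]]
    k = 0
    for idx in range(N):
        if child[idx] is None:
            child[idx] = fill[k]; k += 1
    return child

pop = [two_opt(random.sample(range(N), N)) for _ in range(8)]
for gen in range(30):
    p1, p2 = random.sample(pop, 2)       # heterogeneous pairing omitted
    child = two_opt(ox(p1, p2))
    worst = max(pop, key=length)         # offspring competes for a slot
    if length(child) < length(worst):
        pop[pop.index(worst)] = child
print("best tour length:", round(length(min(pop, key=length)), 4))
```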
Optimization by nonhierarchical asynchronous decomposition
NASA Technical Reports Server (NTRS)
Shankar, Jayashree; Ribbens, Calvin J.; Haftka, Raphael T.; Watson, Layne T.
1992-01-01
Large scale optimization problems are tractable only if they are somehow decomposed. Hierarchical decompositions are inappropriate for some types of problems and do not parallelize well. Sobieszczanski-Sobieski has proposed a nonhierarchical decomposition strategy for nonlinear constrained optimization that is naturally parallel. Despite some successes on engineering problems, the algorithm as originally proposed fails on simple two-dimensional quadratic programs. The algorithm is carefully analyzed for quadratic programs, and a number of modifications are suggested to improve its robustness.
Massive Multi-Agent Systems Control
NASA Technical Reports Server (NTRS)
Campagne, Jean-Charles; Gardon, Alain; Collomb, Etienne; Nishida, Toyoaki
2004-01-01
In order to build massive multi-agent systems, considered as complex and dynamic systems, one needs a method to analyze and control the system. We suggest an approach using morphology to represent and control the state of large organizations composed of a great number of lightweight software agents. Morphology is understood as representing the state of the multi-agent system as shapes in an abstract geometrical space, a notion close to that of phase space in physics.
The Assignment of Scale to Object-Oriented Software Measures
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; Weistroffer, H. Roland; Coppins, Richard J.
1997-01-01
In order to improve productivity (and quality), measurement of specific aspects of software has become imperative. As object oriented programming languages have become more widely used, metrics designed specifically for object-oriented software are required. Recently a large number of new metrics for object-oriented software have appeared in the literature. Unfortunately, many of these proposed metrics have not been validated to measure what they purport to measure. In this paper fifty (50) of these metrics are analyzed.
Correlation Function Analysis of Fiber Networks: Implications for Thermal Conductivity
NASA Technical Reports Server (NTRS)
Martinez-Garcia, Jorge; Braginsky, Leonid; Shklover, Valery; Lawson, John W.
2011-01-01
The heat transport in highly porous fiber structures is investigated. The fibers are assumed to be thin but long, so that the number of inter-fiber connections along each fiber is large. We show that the effective conductivity of such structures can be found from the correlation length of the two-point correlation function of the local conductivities. Estimation of the parameters determining the conductivity from 2D images of the structures is analyzed.
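A minimal sketch of extracting a two-point correlation function and a crude correlation length from a binary 2D image via FFT follows; the random image is a placeholder for real fiber micrographs, and the 1/e cutoff is an illustrative convention rather than the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
img = (rng.random((256, 256)) < 0.1).astype(float)   # binary phase field
phi = img.mean()                                     # volume fraction

F = np.fft.fft2(img)
corr = np.fft.ifft2(F * np.conj(F)).real / img.size  # autocorrelation S2(r)
corr = np.fft.fftshift(corr)

# Radially average, then normalize to (S2 - phi^2) / (phi - phi^2).
cy, cx = np.array(corr.shape) // 2
y, x = np.indices(corr.shape)
r = np.hypot(y - cy, x - cx).astype(int)
radial = np.bincount(r.ravel(), corr.ravel()) / np.bincount(r.ravel())
norm = (radial - phi**2) / (phi - phi**2)

corr_len = np.argmax(norm < np.e**-1)  # first radius below 1/e
print("estimated correlation length (pixels):", int(corr_len))
```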
Kasprowicz, Richard; Rand, Emma; O'Toole, Peter J; Signoret, Nathalie
2018-05-22
Cell-to-cell communication engages signaling and spatiotemporal reorganization events driven by highly context-dependent and dynamic intercellular interactions, which are difficult to capture within heterogeneous primary cell cultures. Here, we present a straightforward correlative imaging approach utilizing commonly available instrumentation to sample large numbers of cell-cell interaction events, allowing qualitative and quantitative characterization of rare functioning cell-conjugates based on calcium signals. We applied this approach to examine a previously uncharacterized immunological synapse, investigating autologous human blood CD4+ T cells and monocyte-derived macrophages (MDMs) forming functional conjugates in vitro. Populations of signaling conjugates were visualized, tracked and analyzed by combining live imaging, calcium recording and multivariate statistical analysis. Correlative immunofluorescence was added to quantify endogenous molecular recruitments at the cell-cell junction. By analyzing a large number of rare conjugates, we were able to define calcium signatures associated with different states of CD4+ T cell-MDM interactions. Quantitative image analysis of immunostained conjugates detected the propensity of endogenous T cell surface markers and intracellular organelles to polarize towards cell-cell junctions with high and sustained calcium signaling profiles, hence defining immunological synapses. Overall, we developed a broadly applicable approach enabling detailed single cell- and population-based investigations of rare cell-cell communication events with primary cells.
Analysis of aggregated tick returns: Evidence for anomalous diffusion
NASA Astrophysics Data System (ADS)
Weber, Philipp
2007-01-01
In order to investigate the origin of large price fluctuations, we analyze stock price changes of ten frequently traded NASDAQ stocks in the year 2002. Though the influence of the trading frequency on the aggregate return in a certain time interval is important, it cannot alone explain the heavy-tailed distribution of stock price changes. For this reason, we analyze intervals with a fixed number of trades in order to eliminate the influence of the trading frequency and investigate the relevance of other factors for the aggregate return. We show that in tick time the price follows a discrete diffusion process with a variable step width while the difference between the number of steps in positive and negative direction in an interval is Gaussian distributed. The step width is given by the return due to a single trade and is long-term correlated in tick time. Hence, its mean value can well characterize an interval of many trades and turns out to be an important determinant for large aggregate returns. We also present a statistical model reproducing the cumulative distribution of aggregate returns. For an accurate agreement with the empirical distribution, we also take into account asymmetries of the step widths in different directions together with cross correlations between these asymmetries and the mean step width as well as the signs of the steps.
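The picture sketched above is easy to simulate. In the toy model below, the up-minus-down step count per interval is Gaussian while the interval-mean step width varies slowly; the specific distributions and the smoothing used for the slow variation are illustrative assumptions, not fits to the NASDAQ data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_intervals, n_trades = 100_000, 100

# Slowly varying mean step width: lognormal draws smoothed across intervals
# as a crude stand-in for long-term correlation in tick time.
w = np.exp(rng.normal(0, 0.8, n_intervals))
w = np.convolve(w, np.ones(50) / 50, mode="same")

# Gaussian difference between up and down step counts over n_trades trades.
imbalance = rng.normal(0, np.sqrt(n_trades), n_intervals)
agg_return = w * imbalance                      # aggregate return per interval

kurt = float(((agg_return - agg_return.mean())**4).mean() / agg_return.var()**2)
print("kurtosis of aggregate returns:", round(kurt, 2))  # > 3: heavy tails
```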
Local richness along gradients in the Siskiyou herb flora: R.H. Whittaker revisited
Grace, James B.; Harrison, Susan; Damschen, Ellen Ingman
2011-01-01
In his classic study in the Siskiyou Mountains (Oregon, USA), one of the most botanically rich forested regions in North America, R. H. Whittaker (1960) foreshadowed many modern ideas on the multivariate control of local species richness along environmental gradients related to productivity. Using a structural equation model to analyze his data, which were never previously statistically analyzed, we demonstrate that Whittaker was remarkably accurate in concluding that local herb richness in these late-seral forests is explained to a large extent by three major abiotic gradients (soils, topography, and elevation), and in turn, by the effects of these gradients on tree densities and the numbers of individual herbs. However, while Whittaker also clearly appreciated the significance of large-scale evolutionary and biogeographic influences on community composition, he did not fully articulate the more recent concept that variation in the species richness of local communities could be explained in part by variation in the sizes of regional species pools. Our model of his data is among the first to use estimates of regional species pool size to explain variation in local community richness along productivity-related gradients. We find that regional pool size, combined with a modest number of other interacting abiotic and biotic factors, explains most of the variation in local herb richness in the Siskiyou biodiversity hotspot.
Zaehringer, Julie G; Wambugu, Grace; Kiteme, Boniface; Eckert, Sandra
2018-05-01
Africa has been heavily targeted by large-scale agricultural investments (LAIs) throughout the last decade, with scarcely known impacts on local social-ecological systems. In Kenya, a large number of LAIs were made in the region northwest of Mount Kenya. These large-scale farms produce vegetables and flowers mainly for European markets. However, land use in the region remains dominated by small-scale crop and livestock farms with less than 1 ha of land each, which produce both for their own subsistence and for local markets. We interviewed 100 small-scale farmers living near five different LAIs to elicit their perceptions of the impacts that these LAIs have on their land use and the overall environment. Furthermore, we analyzed remotely sensed land cover and land use data to assess land use change in the vicinity of the five LAIs. While land use change did not follow a clear trend, a number of small-scale farmers did adapt their crop management to environmental changes, such as reduced river water flow and increased pests, which they attributed to the presence of LAIs. Despite the high number of open conflicts between small-scale land users and LAIs around the issue of river water abstraction, the main environmental impact, felt by almost half of the interviewed land users, was air pollution from agrochemicals sprayed on the LAIs' land. Even though only a low percentage of local land users and their household members were directly involved with LAIs, a large majority of respondents favored the presence of LAIs nearby, as they are believed to contribute to the region's overall economic development. Copyright © 2018 Elsevier Ltd. All rights reserved.
On the correlation of plume centerline velocity decay of turbulent acoustically excited jets
NASA Technical Reports Server (NTRS)
Vonglahn, Uwe H.
1987-01-01
Acoustic excitation was shown to alter the velocity decay and spreading characteristics of jet plumes by modifying the large-scale structures in the plume shear layer. The present work reviews and analyzes available published and unpublished experimental data in order to determine the importance and magnitude of the several variables that contribute to plume modification by acoustic excitation. The study considered the effects of internal and external acoustic excitation, excitation Strouhal number, acoustic excitation level, nozzle size, and flow conditions; the last include jet Mach number and jet temperature. The effects of these factors on the plume centerline velocity decay are then summarized in an overall empirical correlation.
Isolated rotor noise due to inlet distortion or turbulence
NASA Technical Reports Server (NTRS)
Mani, R.
1974-01-01
Theoretical formulation, analysis, and results are presented that are necessary to analyze quadrupole noise generated from a loaded, subsonic rotor because of its interaction with an inflow distortion or inlet turbulence. The ratio of quadrupole to dipole noise is largely a function of the axial Mach number, wheel tip Mach number, rotor solidity, and total pressure ratio across the rotor. It is relatively independent of the specific form of the inflow distortion or inlet turbulence. Comparisons with experimental data only succeed in predicting gross levels at a given speed and fail to predict the variation of noise at fixed speed with flow and pressure ratio. Likely sources of this discrepancy are discussed.
Adnet, F A O; Anjos, D H S; Menezes-Oliveira, A; Lanfredi, R M
2009-04-01
Species of Cruzia are parasites of the large intestine of marsupials, reptiles, amphibians, and mammals. Cruzia tentaculata specimens were collected from the large intestine of Didelphis marsupialis (Mammalia: Didelphidae) from Colombia (new geographical record) and from Brazil and analyzed by light and scanning electron microscopy. The morphology of males and females by light microscopy corroborated most of the previous description, and the ultrastructure seen by scanning electron microscopy evidenced the topography of the cuticle, deirids, amphids, and phasmids in both sexes, a pair of papillae near the vulva opening, and the number and location of male caudal papillae, adding new features for species identification observable only by this technique.
OpinionSeer: interactive visualization of hotel customer feedback.
Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin
2010-01-01
The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.
Castellano, Julen; Puente, Asier; Echeazarra, Ibon; Casamichana, David
2015-06-01
The aim of this study was to analyze the influence of different large-sided games on physical and physiological variables in under-13 soccer players. The effects on heart rate (HR) and physical demands of different numbers of players (NP; 7, 9, and 11) together with different relative pitch areas (RPA; 100, 200, and 300 m² per player) during two 12-minute repetitions were analyzed in this study. The variables analyzed were mean, maximum, and different intensity zones of HR; total distance (TD); work:rest ratio (W:R); player load (PL); and 5 absolute and 3 relative speed categories. The results support the hypothesis that a change in pitch dimensions affects locomotor activity more than the NP does, but also refute the hypothesis that a change in the NP has a greater effect on HR. To be more specific, an increase in the RPA per player (300/200/100 m²) was associated with higher values of the following variables: TD (2,250-2,314/2,003-2,148/1,766-1,845 m), W:R (0.5-0.6/0.4-0.5/0.3 arbitrary units [AU]), PL (271-306/246-285/229-267 AU), %HRmean (85-88/85-89/81-83%), and %HRmax (95-100/97-100/95-98%), and it affected the percentage of time spent in both absolute (above 8 km·h⁻¹) and relative speed (above 40% Vmax) categories (p ≤ 0.05, effect size: 0.31-0.85). These results may help youth soccer coaches to plan the progressive introduction of large-sided games so that task demands are adapted to the physiological and physical development of the participants.
2013-01-01
Background Aberrant methylation at imprinted differentially methylated regions (DMRs) in human 11p15.5 has been reported in many tumors including hepatoblastoma. However, the methylation status of imprinted DMRs at imprinted loci scattered throughout the human genome has not yet been analyzed in any tumors. Methods The methylation statuses of 33 imprinted DMRs were analyzed in 12 hepatoblastomas and adjacent normal liver tissue by MALDI-TOF MS and pyrosequencing. Uniparental disomy (UPD) and copy number abnormalities were investigated with DNA polymorphisms. Results Among the 33 DMRs analyzed, 18 showed aberrant methylation in at least 1 tumor. There was a large deviation in the incidence of aberrant methylation among the DMRs. KvDMR1 and IGF2-DMR0 were the most frequently hypomethylated DMRs. INPP5Fv2-DMR and RB1-DMR were hypermethylated with high frequencies. Hypomethylation was observed at certain DMRs not only in tumors but also in a small number of adjacent histologically normal liver tissue samples, whereas hypermethylation was observed only in tumor samples. The methylation levels of long interspersed nuclear element-1 (LINE-1) did not show large differences between tumor tissue and normal liver controls. Chromosomal abnormalities were also found in some tumors. The 11p15.5 and 20q13.3 loci showed frequent occurrence of both genetic and epigenetic alterations. Conclusions Our analyses revealed tumor-specific aberrant hypermethylation at some imprinted DMRs in 12 hepatoblastomas, with the additional suggestion of the possibility of hypomethylation prior to tumor development. Some loci showed both genetic and epigenetic alterations with high frequencies. These findings will aid in understanding the development of hepatoblastoma. PMID:24373183
Nonlinear aerodynamic effects on bodies in supersonic flow
NASA Technical Reports Server (NTRS)
Pittman, J. L.; Siclari, M. J.
1984-01-01
The supersonic flow about generic bodies was analyzed to identify the elements of the nonlinear flow and to determine the influence of geometry and flow conditions on the magnitude of these nonlinearities. The nonlinear effects were attributed to separated-flow nonlinearities and attached-flow nonlinearities. The nonlinear attached-flow contribution was further broken down into large-disturbance effects and entropy effects. Conical, attached-flow boundaries were developed to illustrate the flow regimes where the nonlinear effects are significant, and the use of these boundaries for angle of attack and three-dimensional geometries was indicated. Normal-force and pressure comparisons showed that the large-disturbance and separated-flow effects were the dominant nonlinear effects at low supersonic Mach numbers and that the entropy effects were dominant for high supersonic Mach number flow. The magnitude of all the nonlinear effects increased with increasing angle of attack. A full-potential method, NCOREL, which includes an approximate entropy correction, was shown to provide accurate attached-flow pressure estimates from Mach 1.6 through 4.6.
Uncovering low dimensional macroscopic chaotic dynamics of large finite size complex systems
NASA Astrophysics Data System (ADS)
Skardal, Per Sebastian; Restrepo, Juan G.; Ott, Edward
2017-08-01
In the last decade, it has been shown that a large class of phase oscillator models admit low-dimensional descriptions for the macroscopic system dynamics in the limit of an infinite number N of oscillators. The question of whether the macroscopic dynamics of other similar systems also have a low-dimensional description in the infinite-N limit has, however, remained elusive. In this paper, we show how techniques originally designed to analyze noisy experimental chaotic time series can be used to identify effective low-dimensional macroscopic descriptions from simulations with a finite number of elements. We illustrate and verify the effectiveness of our approach by applying it to the dynamics of an ensemble of globally coupled Landau-Stuart oscillators, for which we demonstrate low-dimensional macroscopic chaotic behavior with an effective 4-dimensional description. By using this description, we show that one can calculate dynamical invariants such as Lyapunov exponents and attractor dimensions. One could also use the reconstruction to generate short-term predictions of the macroscopic dynamics.
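The reconstruction techniques alluded to start from delay embedding of a scalar observable. Below is a minimal sketch of that step on a placeholder signal, not the Landau-Stuart ensemble of the paper.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Rows are delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)]."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.arange(0, 200, 0.05)
x = np.sin(t) + 0.5 * np.sin(np.sqrt(2) * t)   # toy quasiperiodic signal
X = delay_embed(x, dim=4, tau=20)              # effective 4-D reconstruction
print(X.shape)                                 # (n_vectors, 4)
```

From such delay vectors, nearest-neighbor divergence rates and box-counting statistics give the Lyapunov exponents and attractor dimensions mentioned in the abstract.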
Statistical and clustering analysis for disturbances: A case study of voltage dips in wind farms
Garcia-Sanchez, Tania; Gomez-Lazaro, Emilio; Muljadi, Eduard; ...
2016-01-28
This study proposes and evaluates an alternative statistical methodology to analyze a large number of voltage dips. For a given voltage dip, a set of lengths is first identified to characterize the root mean square (rms) voltage evolution along the disturbance, deduced from partial linearized time intervals and trajectories. Principal component analysis and K-means clustering processes are then applied to identify rms-voltage patterns and propose a reduced number of representative rms-voltage profiles from the linearized trajectories. This reduced group of averaged rms-voltage profiles enables the representation of a large number of disturbances, which offers a visual and graphical representation of their evolution along the events, aspects that were not previously considered in other contributions. The complete process is evaluated on real voltage dips collected in intense field-measurement campaigns carried out in a wind farm in Spain over different years. The results are included in this paper.
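A compact sketch of the clustering stage is shown below, assuming scikit-learn; each row of `profiles` would hold one linearized rms-voltage trajectory, here replaced by synthetic random data rather than the wind farm measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.normal(size=(500, 40))      # 500 dips x 40 rms samples each

scores = PCA(n_components=3).fit_transform(profiles)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)

# One representative rms-voltage profile per cluster: the member average.
representatives = np.vstack([profiles[labels == k].mean(axis=0)
                             for k in range(5)])
print(representatives.shape)               # (5, 40)
```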
Equivalent circuit-based analysis of CMUT cell dynamics in arrays.
Oguz, H K; Atalar, Abdullah; Köymen, Hayrettin
2013-05-01
Capacitive micromachined ultrasonic transducers (CMUTs) are usually composed of large arrays of closely packed cells. In this work, we use an equivalent circuit model to analyze CMUT arrays with multiple cells. We study the effects of mutual acoustic interactions through the immersion medium caused by the pressure field generated by each cell acting upon the others. To do this, all the cells in the array are coupled through a radiation impedance matrix at their acoustic terminals. An accurate approximation for the mutual radiation impedance is defined between two circular cells, which can be used in large arrays to reduce computational complexity. Hence, a performance analysis of CMUT arrays can be accurately done with a circuit simulator. By using the proposed model, one can very rapidly obtain the linear frequency and nonlinear transient responses of arrays with an arbitrary number of CMUT cells. We performed several finite element method (FEM) simulations for arrays with small numbers of cells and showed that the results are very similar to those obtained by the equivalent circuit model.
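The coupling idea reduces to one linear solve per frequency. The toy version below couples N cells through a dense impedance matrix; every element value, including the form of the mutual term, is an invented placeholder, since a real CMUT model derives these from device geometry and the immersion medium.

```python
import numpy as np

N = 16
pos = np.arange(N) * 50e-6                 # cell centers on a 50 um pitch
Z = np.empty((N, N), dtype=complex)
for i in range(N):
    for j in range(N):
        if i == j:
            Z[i, j] = 1.0 + 0.5j           # placeholder self impedance
        else:
            kd = 2 * np.pi / 300e-6 * abs(pos[i] - pos[j])  # k * distance
            Z[i, j] = 0.5 * np.exp(1j * kd) / kd            # placeholder mutual term

F = np.ones(N, dtype=complex)              # identical drive on every cell
v = np.linalg.solve(Z, F)                  # velocities with mutual coupling
print("velocity spread across the array:", float(np.ptp(np.abs(v))))
```

Even this toy system shows the qualitative effect the paper analyzes: identical cells driven identically respond nonuniformly once the mutual terms are included.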
Factors influencing piglet pre-weaning mortality in 47 commercial swine herds in Thailand.
Nuntapaitoon, Morakot; Tummaruk, Padet
2018-01-01
The present study aims to determine the occurrence of piglet pre-weaning mortality in commercial swine herds in Thailand in relation to piglet, sow, and environmental factors. Data were collected from the database of the computerized recording system of 47 commercial swine herds in Thailand. The raw data were carefully scrutinized for accuracy. Litters with a lactation length < 16 days or > 28 days were excluded. In total, 199,918 litters from 74,088 sows were included in the analyses. Piglet pre-weaning mortality at the individual sow level was calculated as piglet pre-weaning mortality (%) = [(number of littermate pigs - number of piglets at weaning) / number of littermate pigs] × 100. Litters were classified according to sow parity numbers (1, 2-5, and 6-9), average birth weight of the piglets (0.80-1.29, 1.30-1.79, 1.80-2.50 kg), number of littermate pigs (5-7, 8-10, 11-12, and 13-16 piglets), and size of the herd (small, medium, and large). Pearson correlations were calculated to analyze the associations between piglet pre-weaning mortality and reproductive parameters. Additionally, a general linear model procedure was performed to analyze the various factors influencing piglet pre-weaning mortality. On average, piglet pre-weaning mortality was 11.2% (median = 9.1%) and varied among herds from 4.8 to 19.2%. Among all the litters, 62.1, 18.1, and 19.8% of the litters had a piglet pre-weaning mortality rate of 0-10, 11-20, and greater than 20%, respectively. As the number of littermate pigs increased, piglet pre-weaning mortality also increased (r = 0.390, P < 0.001). Litters with 13-16 littermate pigs had a higher piglet pre-weaning mortality than litters with 5-7, 8-10, and 11-12 littermate pigs (20.8, 7.8, 7.2, and 11.2%, respectively; P < 0.001). Piglet pre-weaning mortality in large-sized herds was higher than that in small- and medium-sized herds (13.6, 10.6, and 11.2%, respectively; P < 0.001). Interestingly, in all categories of herd size, piglet pre-weaning mortality increased almost twofold when the number of littermates increased from 11-12 to 13-16 piglets. Furthermore, piglets with birth weights of 0.80-1.29 kg in large-sized herds had a higher risk of mortality than those in small- and medium-sized herds (15.3, 10.9, and 12.2%, respectively; P < 0.001). In conclusion, in commercial swine herds in the tropics, piglet pre-weaning mortality averaged 11.2% and varied among herds from 4.8 to 19.2%. Litters with 13-16 littermate pigs had a piglet pre-weaning mortality of up to 20.8%. Piglets with low birth weight (0.80-1.29 kg) had a higher risk of pre-weaning mortality. Management strategies for reducing piglet pre-weaning mortality in tropical climates should emphasize litters with a high number of littermate pigs, low piglet birth weights, and large herd sizes.
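The mortality formula quoted above transcribes directly to code; the litter record below is hypothetical.

```python
def preweaning_mortality(littermates: int, weaned: int) -> float:
    """Piglet pre-weaning mortality (%) at the individual sow level."""
    return 100.0 * (littermates - weaned) / littermates

# Hypothetical litter: 14 littermate pigs, 11 weaned -> ~21.4% mortality.
print(round(preweaning_mortality(littermates=14, weaned=11), 1))
```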
Very large hail occurrence in Poland from 2007 to 2015
NASA Astrophysics Data System (ADS)
Pilorz, Wojciech
2015-10-01
Very large hail is defined as a hailstone greater than or equal to 5 cm in diameter. The phenomenon is rare, but its significant consequences, not only for agriculture but also for automobiles, households, and people outdoors, make it essential to examine. Hail occurrence is strictly connected with storm frequency and storm type; the most hail-prone kind of storm is the supercell. The geographical distribution of hailstorms was compared with the geographical distribution of storms in Poland, and similarities were found. The area with the largest number of storms is southeastern Poland, and the analyzed European Severe Weather Database (ESWD) data showed that most very large hail reports occurred in this part of Poland. The probable reason is that tropical air masses persist longest over southeastern Poland. The spatial distribution analysis also shows more hail incidents over the Upper Silesia, Lesser Poland, Subcarpathia, and Świętokrzyskie regions. The information source on hail occurrence was the ESWD, an open database where anyone can add reports and search for reports that meet given criteria. In total, 69 hailstorms in the period 2007-2015 were examined, yielding 121 very large hail reports. A large disproportion was found in the number of hailstorms and hail reports between individual years. The very large hail season in Poland begins in May and ends in September, with a peak in July. Most hail occurs between 12:00 and 17:00 UTC, but there were some cases of very large (one extremely large) hail at night and in the early morning hours. Although very large hail is a spectacular phenomenon, its local character implies a potentially high rate of information loss, which is the most significant problem in hail research.
WEGO 2.0: a web tool for analyzing and plotting GO annotations, 2018 update.
Ye, Jia; Zhang, Yong; Cui, Huihai; Liu, Jiawei; Wu, Yuqing; Cheng, Yun; Xu, Huixing; Huang, Xingxin; Li, Shengting; Zhou, An; Zhang, Xiuqing; Bolund, Lars; Chen, Qiang; Wang, Jian; Yang, Huanming; Fang, Lin; Shi, Chunmei
2018-05-18
WEGO (Web Gene Ontology Annotation Plot), created in 2006, is a simple but useful tool for visualizing, comparing and plotting GO (Gene Ontology) annotation results. Owing largely to the rapid development of high-throughput sequencing and the increasing acceptance of GO, WEGO has seen outstanding growth in the number of users and citations in recent years, which motivated us to update it to version 2.0. WEGO uses GO annotation results as input. Based on GO's standardized DAG (Directed Acyclic Graph) structured vocabulary system, the number of genes corresponding to each GO ID is calculated and shown in a graphical format. The WEGO 2.0 updates target four aspects, aiming to provide a more efficient and up-to-date approach for comparative genomic analyses. First, the number of input files, previously limited to three, is now unlimited, allowing WEGO to analyze multiple datasets. Also added in this version are reference datasets of nine model species that can be adopted as baselines in genomic comparative analyses. Furthermore, in the analysis process each Chi-square test is carried out across multiple datasets at once instead of between every two samples. Finally, WEGO 2.0 provides an additional output graph along with the traditional WEGO histogram, displaying the sorted P-values of GO terms and indicating their significant differences. At the same time, WEGO 2.0 features an entirely new user interface. WEGO is available for free at http://wego.genomics.org.cn.
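The multi-dataset test described above can be sketched with SciPy as one Chi-square test over a 2 x k contingency table per GO term (annotated versus non-annotated genes in each dataset); the counts are invented placeholders.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts for one GO term across three datasets.
datasets = {"set_A": (120, 880), "set_B": (95, 905), "set_C": (160, 840)}
table = [[a for a, _ in datasets.values()],   # genes carrying the GO term
         [b for _, b in datasets.values()]]   # genes without it

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3g}")
```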
Multimedia content analysis and indexing: evaluation of a distributed and scalable architecture
NASA Astrophysics Data System (ADS)
Mandviwala, Hasnain; Blackwell, Scott; Weikart, Chris; Van Thong, Jean-Manuel
2003-11-01
Multimedia search engines facilitate the retrieval of documents from large media content archives now available via intranets and the Internet. Over the past several years, many research projects have focused on algorithms for analyzing and indexing media content efficiently. However, special system architectures are required to process large amounts of content from real-time feeds or existing archives. Possible solutions include dedicated distributed architectures for analyzing content rapidly and for making it searchable. The system architecture we propose implements such an approach: a highly distributed and reconfigurable batch media content analyzer that can process media streams and static media repositories. Our distributed media analysis application handles media acquisition, content processing, and document indexing. This collection of modules is orchestrated by a task flow management component, exploiting data and pipeline parallelism in the application. A scheduler manages load balancing and prioritizes the different tasks. Workers implement application-specific modules that can be deployed on an arbitrary number of nodes running different operating systems. Each application module is exposed as a web service, implemented with industry-standard interoperable middleware components such as Microsoft ASP.NET and Sun J2EE. Our system architecture is the next generation system for the multimedia indexing application demonstrated by www.speechbot.com. It can process large volumes of audio recordings with minimal support and maintenance, while running on low-cost commodity hardware. The system has been evaluated on a server farm running concurrent content analysis processes.
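A minimal thread-based sketch of the acquisition, analysis, and indexing task flow is given below; the real system distributes these stages as web services across many nodes under a scheduler, which this single-process queue pipeline only hints at.

```python
import queue, threading

acquired, analyzed = queue.Queue(), queue.Queue()

def acquire():
    for doc in ("clip1", "clip2", "clip3"):     # placeholder media items
        acquired.put(doc)
    acquired.put(None)                          # end-of-stream marker

def analyze():
    while (doc := acquired.get()) is not None:  # stage 2: content analysis
        analyzed.put((doc, f"transcript of {doc}"))
    analyzed.put(None)

def index():
    while (item := analyzed.get()) is not None: # stage 3: document indexing
        print("indexed:", item[0])

threads = [threading.Thread(target=f) for f in (acquire, analyze, index)]
for t in threads: t.start()
for t in threads: t.join()
```

Because the stages communicate only through queues, they exhibit the pipeline parallelism the abstract mentions: each stage can run concurrently on different items.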
NASA Astrophysics Data System (ADS)
Guevara Hidalgo, Esteban; Nemoto, Takahiro; Lecomte, Vivien
Rare trajectories of stochastic systems are important to understand because of their potential impact. However, their properties are by definition difficult to sample directly. Population dynamics provides a numerical tool allowing their study, by means of simulating a large number of copies of the system, which are subjected to a selection rule that favors the rare trajectories of interest. However, such algorithms are plagued by finite-simulation-time and finite-population-size effects that can render their use delicate. Using the continuous-time cloning algorithm, we analyze the finite-time and finite-size scalings of estimators of the large deviation functions associated with the distribution of the rare trajectories. We use these scalings in order to propose a numerical approach which allows one to extract the infinite-time and infinite-size limit of these estimators.
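The population-dynamics idea can be sketched for a toy additive observable, where the exact scaled cumulant generating function is known (log cosh s for +/-1 increments), making the finite-population bias directly visible. This is a minimal illustration of the algorithm's structure, not the continuous-time cloning scheme analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_c, T, s = 1000, 2000, 0.2       # copies, steps, biasing parameter
x = np.zeros(N_c)                 # each copy's time-integrated observable
log_growth = 0.0
for t in range(T):
    dx = rng.choice([-1, 1], size=N_c)   # one increment per copy
    x += dx
    w = np.exp(s * dx)                   # exponential tilt on the increment
    log_growth += np.log(w.mean())
    idx = rng.choice(N_c, size=N_c, p=w / w.sum())  # selection/cloning step
    x = x[idx]                           # clones inherit the parent history

psi_est = log_growth / T                 # SCGF estimator, biased at finite N_c
print(f"psi(s) estimate: {psi_est:.4f}")
print(f"exact psi(s):    {np.log(np.cosh(s)):.4f}")
```

Running this with smaller `N_c` or `T` shows the systematic deviations whose scalings the paper exploits to extrapolate to the infinite-size, infinite-time limit.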
Large-N kinetic theory for highly occupied systems
NASA Astrophysics Data System (ADS)
Walz, R.; Boguslavski, K.; Berges, J.
2018-06-01
We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.
Electric wind in a Differential Mobility Analyzer
Palo, Marus; Meelis Eller; Uin, Janek; ...
2015-10-25
Electric wind -- the movement of gas, induced by ions moving in an electric field -- can be a distorting factor in size distribution measurements using Differential Mobility Analyzers (DMAs). The aim of this study was to determine the conditions under which electric wind occurs in the locally-built VLDMA (Very Long Differential Mobility Analyzer) and the TSI Long-DMA (3081) and to describe the associated distortion of the measured spectra. Electric wind proved to be promoted by increases in electric field strength, aerosol layer thickness, particle number concentration, and particle size. The measured size spectra revealed three types of distortion: widening of the size distribution, a shift of the mode of the distribution to smaller diameters, and smoothing out of the peaks of the multiply charged particles. Electric wind may therefore be a source of severe distortion of the spectrum when measuring large particles at high concentrations.
Association between Air Pollution and Hemoptysis
Garcia-Olive, Ignasi; Radua, Joaquim; Fiz, Jose Antonio; Sanz-Santos, Jose; Ruiz-Manzano, Juan
2016-01-01
Background. The relationship between air pollution and exacerbation of respiratory diseases is well established. Nevertheless, its association with hemoptysis has been poorly investigated. This paper describes the relationship of air pollutants with severe hemoptysis. Methods. All consecutive subjects with severe hemoptysis during a 5-year period were included. The relationship between the contamination measurements and the frequency of embolizations was analyzed using Poisson regressions. In these regressions, the dependent variable was the number of embolizations in a given month and the independent variable was either the concentration of an air contaminant during the same month, the concentration of the air contaminant during the previous month, or the difference between the two. Results. A higher total number of embolizations per month was observed over the months with increases in the concentration of NO. The number of embolizations was 2.0 in the 33 months with no increases in the concentration of NO, 2.1 in the 12 months with small increases, 2.2 in the 5 months with moderate increases, 2.5 in the 4 months with large increases, and 4.0 in the 5 months with very large increases. Conclusion. There is an association between hemoptysis and increases in the concentration of atmospheric NO in Badalona (Spain). PMID:27445569
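The regression described in the Methods can be sketched with statsmodels as follows; both monthly series are synthetic stand-ins for the Badalona data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
d_no = rng.normal(0, 5, 60)                       # monthly change in NO
counts = rng.poisson(np.exp(0.7 + 0.05 * d_no))   # synthetic embolizations

X = sm.add_constant(d_no)
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
print(fit.params)   # intercept and log rate ratio per unit NO change
```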
Comprehensive analysis of information dissemination in disasters
NASA Astrophysics Data System (ADS)
Zhang, N.; Huang, H.; Su, Boni
2016-11-01
China is a country that experiences a large number of disasters. The number of deaths caused by large-scale disasters and accidents in the past 10 years is around 900,000. More than 92.8 percent of these deaths could have been avoided if an effective pre-warning system had been deployed. Knowledge of the information dissemination characteristics of different information media, taking into consideration governmental assistance (information published by a government) during disasters in urban areas, plays a critical role in increasing response time and reducing the number of deaths and economic losses. In this paper we develop a comprehensive information dissemination model to optimize the efficiency of pre-warning mechanisms. This model can also be used to disseminate information to evacuees making real-time evacuation plans. We analyzed the individual information dissemination models for pre-warning in disasters by considering 14 media: short message service (SMS), phone, television, radio, news portals, Wechat, microblogs, email, newspapers, loudspeaker vehicles, loudspeakers, oral communication, and passive information acquisition via the visual and auditory senses. Since governmental assistance is very useful in a disaster, we calculated the sensitivity of the governmental assistance ratio. The results provide useful references for information dissemination during disasters in urban areas.
Chiu, Mei Choi; Pun, Chi Seng; Wong, Hoi Ying
2017-08-01
Investors interested in the global financial market must analyze financial securities internationally. Making an optimal global investment decision involves processing a huge amount of data for a high-dimensional portfolio. This article investigates the big data challenges of two mean-variance optimal portfolios: continuous-time precommitment and constant-rebalancing strategies. We show that both optimized portfolios implemented with the traditional sample estimates converge to the worst performing portfolio when the portfolio size becomes large. The crux of the problem is the estimation error accumulated from the huge dimension of stock data. We then propose a linear programming optimal (LPO) portfolio framework, which applies a constrained ℓ1 minimization to the theoretical optimal control to mitigate the risk associated with the dimensionality issue. The resulting portfolio becomes a sparse portfolio that selects stocks with a data-driven procedure and hence offers a stable mean-variance portfolio in practice. When the number of observations becomes large, the LPO portfolio converges to the oracle optimal portfolio, which is free of estimation error, even though the number of stocks grows faster than the number of observations. Our numerical and empirical studies demonstrate the superiority of the proposed approach. © 2017 Society for Risk Analysis.
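A generic penalized version of the idea can be sketched with cvxpy, as below; note that the paper's LPO estimator applies constrained ℓ1 minimization to the theoretical optimal control, which this plain ℓ1-penalized mean-variance program only approximates in spirit. All data are synthetic.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, T = 50, 120
R = rng.normal(0.001, 0.02, size=(T, n))   # synthetic return panel
mu = R.mean(axis=0)
Sigma = np.cov(R, rowvar=False) + 1e-4 * np.eye(n)   # ridge for stability

w = cp.Variable(n)
gamma, lam = 5.0, 0.01   # risk aversion and sparsity weight (arbitrary)
objective = cp.Minimize(gamma * cp.quad_form(w, Sigma) - mu @ w
                        + lam * cp.norm(w, 1))
cp.Problem(objective, [cp.sum(w) == 1]).solve()

print("non-negligible positions:", int((np.abs(w.value) > 1e-4).sum()))
```

The ℓ1 term drives most weights to zero, which is the mechanism by which the sparse portfolio sidesteps the accumulation of estimation error in high dimensions.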
Physical mapping of 5S and 18S ribosomal DNA in three species of Agave (Asparagales, Asparagaceae).
Gomez-Rodriguez, Victor Manuel; Rodriguez-Garay, Benjamin; Palomino, Guadalupe; Martínez, Javier; Barba-Gonzalez, Rodrigo
2013-01-01
Agave Linnaeus, 1753 is endemic to America and is considered one of the most important crops in Mexico due to its key role in the country's economy. Cytogenetic analysis was carried out in Agave tequilana Weber, 1902 'Azul', Agave cupreata Trelease et Berger, 1915 and Agave angustifolia Haworth, 1812. The analysis showed that in all species the diploid chromosome number was 2n = 60, with bimodal karyotypes composed of five pairs of large chromosomes and 25 pairs of small chromosomes. Furthermore, different karyotypical formulae as well as a secondary constriction in a large chromosome pair were found in all species. Fluorescent in situ hybridization (FISH) was used for physical mapping of 5S and 18S ribosomal DNA (rDNA). All species analyzed showed that 5S rDNA was located on both arms of a small chromosome pair, while 18S rDNA was associated with the secondary constriction of a large chromosome pair. The FISH data provide new information about the position and number of rDNA loci and help in the detection of hybrids in breeding programs as well as in evolutionary studies.
Egas, Conceição; Pinheiro, Miguel; Gomes, Paula; Barroso, Cristina; Bettencourt, Raul
2012-08-01
Deep-sea environments are largely unexplored habitats where a surprising number of species may be found in large communities, thriving despite the darkness, extreme cold, and high pressure. Their unique geochemical features result in reducing environments rich in methane and sulfides, sustaining complex chemosynthetic ecosystems that represent one of the most surprising findings in the oceans in the last 40 years. The deep-sea Lucky Strike hydrothermal vent field, located on the Mid-Atlantic Ridge, is home to large vent mussel communities in which Bathymodiolus azoricus represents the dominant faunal biomass, owing its survival to symbiotic associations with methylotrophic or methanotrophic and thiotrophic bacteria. The recent transcriptome sequencing and analysis of gill tissues from B. azoricus revealed a number of genes of bacterial origin, analyzed here to provide a functional insight into the gill microbial community. The transcripts supported a metabolically active microbiome and a variety of mechanisms and pathways, also evidencing the sulfur and methane metabolisms. Taxonomic affiliation of transcripts and 16S rRNA community profiling revealed a microbial community dominated by the thiotrophic and methanotrophic endosymbionts of B. azoricus and the presence of a Sulfurovum-like epsilonbacterium.
Large-scale production of embryonic red blood cells from human embryonic stem cells.
Olivier, Emmanuel N; Qiu, Caihong; Velho, Michelle; Hirsch, Rhoda Elison; Bouhassira, Eric E
2006-12-01
To develop a method to produce large numbers of erythroid cells in culture from human embryonic stem cells. Human H1 embryonic stem cells were differentiated into hematopoietic cells by coculture with a human fetal liver cell line, and the resulting CD34-positive cells were expanded in vitro in liquid culture using a three-step method. The erythroid cells produced were then analyzed by light microscopy and flow cytometry. Globin expression was characterized by quantitative reverse-transcriptase polymerase chain reaction and by high-performance liquid chromatography. CD34-positive cells produced from human embryonic stem cells could be efficiently differentiated into erythroid cells in liquid culture, leading to a more than 5000-fold increase in cell number. The erythroid cells produced are similar to the primitive erythroid cells present in the yolk sac of early human embryos and did not enucleate. They are fully hemoglobinized and express a mixture of embryonic and fetal globins but no beta-globin. We have developed an experimental protocol to produce large numbers of primitive erythroid cells starting from undifferentiated human embryonic stem cells. As the earliest human erythroid cells, the nucleated primitive erythroblasts, are not well characterized because experimental material at this stage of development is very difficult to obtain, this system should prove useful for answering a number of experimental questions regarding the biology of these cells. In addition, the production of mature red blood cells from human embryonic stem cells is of great potential practical importance because it could eventually become an alternative source of cells for transfusion.
Lumping of degree-based mean-field and pair-approximation equations for multistate contact processes
NASA Astrophysics Data System (ADS)
Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca
2018-01-01
Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree kmax of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large kmax. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
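The aggregation idea can be illustrated on the simplest member of this family, a degree-based mean-field SIS model: instead of one equation per degree up to kmax, degrees are binned and each bin treated as indistinguishable. The paper lumps the much richer AME and PA systems; the parameter values below are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

kmax, n_bins, beta, mu = 500, 10, 0.5, 1.0
edges = np.unique(np.geomspace(1, kmax, n_bins + 1).astype(int))
k_rep = np.sqrt(edges[:-1] * edges[1:])       # representative degree per bin
p_k = k_rep ** -2.5
p_k /= p_k.sum()                              # binned degree distribution

def rhs(t, rho):
    # DBMF SIS: d(rho_k)/dt = -mu*rho_k + beta*k*(1 - rho_k)*Theta
    theta = np.sum(k_rep * p_k * rho) / np.sum(k_rep * p_k)
    return -mu * rho + beta * k_rep * (1 - rho) * theta

sol = solve_ivp(rhs, (0, 50), 0.01 * np.ones(len(k_rep)), rtol=1e-8)
print(f"equations solved: {len(k_rep)} (instead of {kmax} degree classes)")
print(f"stationary prevalence: {np.sum(p_k * sol.y[:, -1]):.4f}")
```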
Verification of Space Station Secondary Power System Stability Using Design of Experiment
NASA Technical Reports Server (NTRS)
Karimi, Kamiar J.; Booker, Andrew J.; Mong, Alvin C.; Manners, Bruce
1998-01-01
This paper describes analytical methods used in verification of large DC power systems, with applications to the International Space Station (ISS). Large DC power systems contain many switching power converters with negative resistance characteristics. The ISS power system presents numerous challenges with respect to system stability, such as complex sources and undefined loads. The Space Station program has developed impedance specifications for sources and loads. The overall approach to system stability consists of specific hardware requirements coupled with extensive system analysis and testing. Testing of large complex distributed power systems is not practical due to the size and complexity of the system. Computer modeling has been extensively used to develop hardware specifications as well as to identify system configurations for lab testing. The statistical method of Design of Experiments (DoE) is used as an analysis tool for verification of these large systems. DoE reduces the number of computer runs which are necessary to analyze the performance of a complex power system consisting of hundreds of DC/DC converters. DoE also provides valuable information about the effect of changes in system parameters on the performance of the system. DoE provides information about various operating scenarios and identification of the ones with potential for instability. In this paper we describe how we have used computer modeling to analyze a large DC power system. A brief description of DoE is given. Examples applying DoE to analysis and verification of the ISS power system are provided.
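The run-count saving behind DoE is combinatorial, as the small illustration below shows for a two-level half fraction of five factors (defining relation E = ABCD): 16 runs instead of 32. The ISS study's designs over hundreds of converter parameters are of course far larger.

```python
from itertools import product

# Full two-level factorial for 5 factors: 2**5 = 32 runs.
full = list(product([-1, 1], repeat=5))

# Half fraction: vary 4 factors freely and set E = A*B*C*D -> 16 runs.
half = [(a, b, c, d, a * b * c * d)
        for a, b, c, d in product([-1, 1], repeat=4)]

print("full factorial runs:", len(full))   # 32
print("half fraction runs: ", len(half))   # 16
```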
Energy transfer in turbulence under rotation
NASA Astrophysics Data System (ADS)
Buzzicotti, Michele; Aluie, Hussein; Biferale, Luca; Linkmann, Moritz
2018-03-01
It is known that rapidly rotating turbulent flows are characterized by the emergence of simultaneous upscale and downscale energy transfer. Indeed, both numerics and experiments show the formation of large-scale anisotropic vortices together with the development of small-scale dissipative structures. However, the organization of interactions leading to this complex dynamics remains unclear. Two different mechanisms are known to be able to transfer energy upscale in a turbulent flow. The first is characterized by two-dimensional interactions among triads lying on the two-dimensional, three-component (2D3C)/slow manifold, namely on the Fourier plane perpendicular to the rotation axis. The second mechanism is three-dimensional and consists of interactions between triads with the same sign of helicity (homochiral). Here, we present a detailed numerical study of rotating flows using a suite of high-Reynolds-number direct numerical simulations (DNS) within different parameter regimes to analyze both upscale and downscale cascade ranges. We find that the upscale cascade at wave numbers close to the forcing scale is generated by increasingly dominant homochiral interactions which couple the three-dimensional bulk and the 2D3C plane. This coupling produces an accumulation of energy in the 2D3C plane, which then transfers energy to smaller wave numbers thanks to the two-dimensional mechanism. In the forward cascade range, we find that the energy transfer is dominated by heterochiral triads and occurs primarily through interactions within the fast manifold, where kz ≠ 0. We further analyze the energy transfer in different regions in the real-space domain. In particular, we distinguish high-strain from high-vorticity regions, and we uncover that while the mean transfer is produced inside regions of strain, the rare but extreme events of energy transfer occur primarily inside the large-scale column vortices.
Long-range interactions of hydrogen atoms in excited states. III. nS-1S interactions for n ≥ 3
NASA Astrophysics Data System (ADS)
Adhikari, C. M.; Debierre, V.; Jentschura, U. D.
2017-09-01
The long-range interaction of excited neutral atoms has a number of interesting and surprising properties, such as the prevalence of long-range oscillatory tails and the emergence of numerically large van der Waals C6 coefficients. Furthermore, the energetically quasidegenerate nP states require special attention and lead to mathematical subtleties. Here we analyze the interaction of excited hydrogen atoms in nS states (3 ≤ n ≤ 12) with ground-state hydrogen atoms and find that the C6 coefficients roughly grow with the fourth power of the principal quantum number and can reach values in excess of 240,000 (in atomic units) for states with n = 12. The nonretarded van der Waals result is relevant to the distance range R ≪ a0/α, where a0 is the Bohr radius and α is the fine-structure constant. The Casimir-Polder range encompasses the interatomic distance range a0/α ≪ R ≪ ħc/L, where L is the Lamb shift energy. In this range, the contribution of quasidegenerate excited nP states remains nonretarded and competes with the 1/R^2 and 1/R^4 tails of the pole terms, which are generated by lower-lying mP states with 2 ≤ m ≤ n − 1, due to virtual resonant emission. The dominant pole terms are also analyzed in the Lamb shift range R ≫ ħc/L. The familiar 1/R^7 asymptotics from the usual Casimir-Polder theory is found to be completely irrelevant for the analysis of excited-state interactions. The calculations are carried out to high precision using computer algebra in order to handle a large number of terms in intermediate steps of the calculation for highly excited states.
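As a quick arithmetic check of the quoted scaling, one can anchor a C6 ≈ A·n^4 law to the single value given above (roughly 240,000 a.u. at n = 12); the prefactor and the values printed for other n are illustrative extrapolations, not results from the paper.

```python
# Plausibility check of the quoted C6 ~ n^4 scaling. The only number taken
# from the abstract is C6 ≈ 240,000 a.u. at n = 12; everything else printed
# here is an illustrative extrapolation.
A = 240_000 / 12**4          # ≈ 11.6 atomic units
for n in (3, 6, 9, 12):
    print(f"n = {n:2d}:  C6 ≈ {A * n**4:,.0f} a.u.")
```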
NASA Astrophysics Data System (ADS)
Brey, Steven J.; Ruminski, Mark; Atwood, Samuel A.; Fischer, Emily V.
2018-02-01
Fires represent an air quality challenge because they are large, dynamic and transient sources of particulate matter and ozone precursors. Transported smoke can deteriorate air quality over large regions. Fire severity and frequency are likely to increase in the future, exacerbating an existing problem. Using the National Environmental Satellite, Data, and Information Service (NESDIS) Hazard Mapping System (HMS) smoke data for North America for the period 2007 to 2014, we examine a subset of fires that are confirmed to have produced sufficient smoke to warrant the initiation of a U.S. National Weather Service smoke forecast. We find that gridded HMS-analyzed fires are well correlated (r = 0.84) with emissions from the Global Fire Emissions Inventory Database 4s (GFED4s). We define a new metric, smoke hours, by linking observed smoke plumes to active fires using ensembles of forward trajectories. This work shows that the Southwest, Northwest, and Northwest Territories initiate the most air quality forecasts and produce more smoke than any other North American region by measure of the number of HYSPLIT points analyzed, the duration of those HYSPLIT points, and the total number of smoke hours produced. The average number of days with smoke plumes overhead is largest over the north-central United States. Only Alaska, the Northwest, the Southwest, and Southeast United States regions produce the majority of smoke plumes observed over their own borders. This work moves a new dataset from a daily operational setting to a research context, and it demonstrates how changes to the frequency or intensity of fires in the western United States could impact other regions.
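The smoke-hours metric lends itself to a simple geometric sketch: an hour is counted whenever a forward-trajectory point launched from a fire falls inside an observed plume polygon. The polygon and hourly positions below are invented stand-ins for HMS plume outlines and HYSPLIT trajectory output.

```python
from shapely.geometry import Point, Polygon

# One observed plume outline in (lon, lat) -- invented, standing in for an
# HMS-analyzed smoke polygon.
plume = Polygon([(-105.0, 40.0), (-100.0, 41.5), (-99.0, 44.0), (-104.5, 43.0)])

# Hourly (lon, lat) positions of one forward trajectory from a fire,
# standing in for HYSPLIT output.
trajectory = [(-105.5, 39.8), (-104.2, 40.8), (-102.6, 41.9),
              (-101.3, 42.6), (-100.1, 43.1), (-98.8, 43.5)]

# An hour counts toward the metric when the trajectory point sits under
# the observed plume.
smoke_hours = sum(plume.contains(Point(lon, lat)) for lon, lat in trajectory)
print(f"smoke hours for this trajectory: {smoke_hours}")
```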
Development and evaluation of porcine cysticercosis QuickELISA in Triturus EIA analyzer.
Handali, Sukwan; Pattabhi, Sowmya; Lee, Yeuk-Mui; Silva-Ibanez, Maria; Kovalenko, Victor A; Levin, Andrew E; Gonzalez, Armando E; Roberts, Jacquelin M; Garcia, Hector H; Gilman, Robert H; Hancock, Kathy; Tsang, Victor C W
2010-01-01
We evaluated three diagnostic antigens (recombinant GP50, recombinant T24H, and synthetic Ts18var1) for cysticercosis and found that all three performed well in detecting cysticercosis in humans and pigs in several assay formats. These antigens were adapted to a new antibody detection format (QuickELISA). With a single incubation step that involves all reactants except the enzyme substrate, the QuickELISA is particularly well suited for automation. We formatted the QuickELISA for the Triturus EIA analyzer for testing large numbers of samples. We found that in QuickELISA formats, rGP50 and rT24H have better sensitivity and specificity than sTs18var1 for detecting porcine cysticercosis.
NASA Technical Reports Server (NTRS)
Gross, S. H.
1981-01-01
The ASTP Doppler data were recalibrated, analyzed, related to geophysical phenomena, and found to be consistent with them. Spectra were computed for data intervals covering each hemisphere; as many as 14 such intervals were analyzed. Wave structure is seen in much of the data. The spectra for all these intervals are very similar in a number of respects. They all decrease with increasing frequency, i.e., with decreasing wavelength. Power law fits are reasonable, and the spectral indices are found to range from about -2.0 to about -3.5. Both large scale (thousands of kilometers) and medium scale (hundreds of kilometers) waves are evident. These spectra are very similar to spectra of in situ measurements of neutrals and ionization made by Atmosphere Explorer C.
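A minimal sketch of the power-law fitting step described above: the spectral index is the slope of a least-squares line in log-log coordinates. The synthetic spectrum stands in for the actual ASTP Doppler spectra.

```python
import numpy as np

rng = np.random.default_rng(0)
freq = np.logspace(-3, -1, 50)                            # arbitrary units
power = freq**-2.5 * rng.lognormal(0.0, 0.3, freq.size)   # true index -2.5

# Spectral index = slope of log10(power) versus log10(frequency).
slope, intercept = np.polyfit(np.log10(freq), np.log10(power), 1)
print(f"estimated spectral index: {slope:.2f}")           # near -2.5
```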
Stochastic stability in three-player games.
Kamiński, Dominik; Miekisz, Jacek; Zaborowski, Marcin
2005-11-01
Animal behavior and evolution can often be described by game-theoretic models. Although in many situations the number of players is very large, their strategic interactions are usually decomposed into a sum of two-player games. Only recently were evolutionarily stable strategies defined for multi-player games and their properties analyzed [Broom, M., Cannings, C., Vickers, G.T., 1997. Multi-player matrix games. Bull. Math. Biol. 59, 931-952]. Here we study the long-run behavior of stochastic dynamics of populations of randomly matched individuals playing symmetric three-player games. We analyze the stochastic stability of equilibria in games with multiple evolutionarily stable strategies. We also show that, in some games, a population may not evolve in the long run to an evolutionarily stable equilibrium.
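As a minimal illustration of the random-matching setup, the sketch below computes the expected payoff of each of two strategies in a symmetric three-player game, averaging over the binomially distributed strategy composition of the two co-players. The payoff values are invented and are not those analyzed in the paper.

```python
from math import comb

# payoff[s][k]: payoff to a player using strategy s when k of the two
# co-players use strategy 1. Values are illustrative only.
payoff = {0: [2.0, 1.0, 0.0],
          1: [0.0, 1.5, 3.0]}

def expected_payoff(s, x):
    """Expected payoff of strategy s when a fraction x of a large population
    plays strategy 1 and triples are formed by random matching."""
    return sum(comb(2, k) * x**k * (1 - x)**(2 - k) * payoff[s][k]
               for k in range(3))

# The payoffs cross as the share of strategy 1 grows, producing the multiple
# stable equilibria whose stochastic stability is at issue.
for x in (0.1, 0.5, 0.9):
    print(f"x = {x:.1f}: pi_0 = {expected_payoff(0, x):.3f}, "
          f"pi_1 = {expected_payoff(1, x):.3f}")
```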
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-09-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, the MMR genes (MLH1/MSH2), and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies.
ADOPTION OF MELD SCORE INCREASES THE NUMBER OF LIVER TRANSPLANT
NACIF, Lucas Souto; ANDRAUS, Wellington; MARTINO, Rodrigo Bronze; SANTOS, Vinicius Rocha; PINHEIRO, Rafael Soares; HADDAD, Luciana BP; D'ALBUQUERQUE, Luiz Carneiro
2014-01-01
Background: Liver transplantation is performed at large transplant centers worldwide as a therapeutic intervention for patients with end-stage liver diseases. Aim: To analyze the outcomes and incidence of liver transplantation performed at the University of São Paulo and to compare those with the State of São Paulo before and after adoption of the Model for End-Stage Liver Disease (MELD) score. Method: Evaluation of the number of liver transplantations before and after adoption of the MELD score. Mean values and standard deviations were used to analyze normally distributed variables. The incidence results were compared with those of the State of São Paulo. Results: There was a high prevalence of male patients, with a predominance of middle-aged patients. The main indication for liver transplantation was hepatitis C cirrhosis. The mean and median survival rates and overall survival over ten and five years were similar between the groups (p>0.05). The MELD score increased over the course of the study period for patients who underwent liver transplantation (p>0.05). The number of liver transplants increased after adoption of the MELD score at this institution and in the State of São Paulo (p<0.001). Conclusion: The adoption of the MELD score led to an increase in the number of liver transplants performed in São Paulo. PMID:25184772
Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector
NASA Astrophysics Data System (ADS)
Kumar, P.; Mishra, T.; Banerjee, R.
2017-12-01
India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent reduction in emission intensity (with respect to 2005 levels), along with large-scale renewable energy targets (100 GW solar, 60 GW wind, and 10 GW biomass energy by 2022), in the INDCs submitted under the Paris agreement. But large-scale integration of renewable energy is a complex process that faces a number of challenges, such as capital intensiveness, matching intermittent generation to load with limited storage capacity, and reliability. This study therefore attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and to analyze the implications for power sector operations. The study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal and gas fired units discretely, with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios: a base case (no RE addition), an INDC scenario (100 GW solar, 60 GW wind, 10 GW biomass), and a low RE scenario (50 GW solar, 30 GW wind), created to analyze the implications of large-scale integration of variable renewable energy. The results provide insights into the trade-offs involved in achieving mitigation targets and the associated investment decisions. The study also examines the operational reliability and flexibility requirements of the system for integrating renewables.
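As a toy illustration of the operational issue at stake (not of the TIMES model itself), the sketch below dispatches thermal generation against net load, i.e., demand minus renewable output, under a must-run minimum-generation constraint; all numbers are invented.

```python
import numpy as np

demand = np.array([120., 130., 150., 140., 125.])  # GW, five sample hours
solar  = np.array([  0.,  40.,  90.,  60.,   5.])  # GW
wind   = np.array([ 15.,  30.,  25.,  20.,  18.])  # GW
min_gen = 60.0                                     # GW of must-run thermal

net_load = demand - solar - wind                   # what thermal must serve
thermal = np.maximum(net_load, min_gen)            # floor at must-run level
curtailment = np.maximum(min_gen - net_load, 0.0)  # surplus renewables

print("net load   :", net_load)
print("thermal    :", thermal)
print("curtailment:", curtailment)
```

When renewable output pushes net load below the must-run floor, the surplus shows up as curtailment, one of the flexibility issues such integration studies must confront.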
Correlation between average melting temperature and glass transition temperature in metallic glasses
NASA Astrophysics Data System (ADS)
Lu, Zhibin; Li, Jiangong
2009-02-01
The correlation between average melting temperature (⟨Tm⟩) and glass transition temperature (Tg) in metallic glasses (MGs) is analyzed. A linear relationship, Tg=0.385⟨Tm⟩, is observed. This correlation agrees with Egami's suggestion [Rep. Prog. Phys. 47, 1601 (1984)]. The prediction of Tg from ⟨Tm⟩ through the relationship Tg=0.385⟨Tm⟩ has been tested using experimental data obtained on a large number of MGs. This relationship can be used to predict and design MGs with a desired Tg.
Bridging kinematics and concentration content in a chaotic micromixer.
Villermaux, E; Stroock, A D; Stone, H A
2008-01-01
We analyze the mixing properties of the microfluidic herringbone configuration introduced to mix scalar substances in a narrow channel at low Reynolds but large Péclet numbers. Because of the grooves sculpted on the channel floor, substantial transverse motions are superimposed onto the usual longitudinal Poiseuille dispersion along the channel, whose impact on both the mixing rate and mixture content is quantified. We demonstrate the direct link between the flow kinematics and the deformation rate of the mixture's concentration distribution, whose overall shape is also determined.
Formation and distribution of fragments in the spontaneous fission of 240 Pu
Sadhukhan, Jhilam; Zhang, Chunli; Nazarewicz, Witold; ...
2017-12-18
We use the stochastic Langevin framework to simulate the nuclear evolution after the system tunnels through the multidimensional potential barrier. For a representative sample of different initial configurations along the outer turning-point line, we define effective fission paths by computing a large number of Langevin trajectories. We extract the relative contribution of each such path to the fragment distribution. We then use nucleon localization functions along effective fission pathways to analyze the characteristics of prefragments at prescission configurations.
1990-01-31
a set of codes which will provide a large number of addresses while minimizing interference. We have analyzed the bit error rate (BER) of the... there will be significant crosstalk. The most severe interference will be caused by the unswitched component of the high-intensity pulses. [Figure captions: diagram of the experimental apparatus (Q = quarter-wave plate, P = polarizing filter, IF = interference filter); oscilloscope trace of the Kerr...]
The Seasonal Evolution of Sea Ice Floe Size Distribution
2013-09-30
the summer breakup of the ice cover. Large-scale, lower resolution imagery from MODIS and other platforms will also be analyzed to determine changes... appearance and morphology of the Arctic sea ice cover over an annual cycle. These photos were taken over the pack ice near SHEBA in May (left) and August (right).
The Seasonal Evolution of Sea Ice Floe Size Distribution
2014-09-30
summer breakup of the ice cover. Large-scale, lower resolution imagery from MODIS and other platforms will also be analyzed to determine changes in floe... morphology of the Arctic sea ice cover over an annual cycle. These photos were taken over the pack ice near SHEBA in May (left) and August (right).
The inorganic constituents of echinoderms
Clarke, F.W.; Wheeler, W.C.
1915-01-01
In a recent paper on the composition of crinoid skeletons we showed that crinoids contain large quantities of magnesia, and that its proportion varies with the temperature of the water in which the creatures live. This result was so novel and surprising that it seemed desirable to examine other echinoderms and to ascertain whether they showed the same characteristics and regularity. A number of sea urchins and starfishes were therefore studied, their inorganic constituents being analyzed in the same manner as those of the crinoids.
2011-09-30
sounds. The numbers for fin whales and humpback whale song are still being analyzed. Figure 3 shows the difference in communication masking for one... Three call types were chosen to characterize humpback whale song and social sounds; only one is reported here. [Reference fragment: Risch, D., Van Parijs, S. M. (in review). Seasonal variation in the presence of humpback whale song on a North Atlantic foraging ground. Aquatic Biology 11.]
Azimuth orientation of the dragonfly (Sympetrum)
NASA Technical Reports Server (NTRS)
Hisada, M.
1972-01-01
Evidence is presented of directional orientation by an alighting dragonfly relative to the azimuth of the sun. The effects of wind direction on this orientation are analyzed. It was concluded that wind does not play a major role in orientation but may have some secondary function in helping greater numbers of dragonflies face windward more often than leeward. A search was made to find the principal sensory receptor for orientation. Two possibilities, the large compound eye and the frontal ocelli, were noted; however, no conclusive evidence could be found.
ERIC Educational Resources Information Center
Knutson, Kristopher; Smith, Jennifer; Nichols, Paul; Wallert, Mark A.; Provost, Joseph J.
2010-01-01
Research-based learning in a teaching environment is an effective way to help bring the excitement and experience of independent bench research to a large number of students. The program described here is the second of a two-semester biochemistry laboratory series. Here, students are empowered to design, execute and analyze their own experiments…
NASA Technical Reports Server (NTRS)
Lee, J.; Kim, K.
1991-01-01
A Very Large Scale Integration (VLSI) architecture for robot direct kinematic computation suitable for industrial robot manipulators was investigated. The Denavit-Hartenberg transformations are reviewed to exploit a proper processing element, namely an augmented CORDIC. Two distinct implementations, bit-serial and parallel, are elaborated on. The performance of each scheme is analyzed with respect to the time to compute one location of the end-effector of a 6-link manipulator, and the number of transistors required.
In Situ Surveying of Saturn's Rings
NASA Technical Reports Server (NTRS)
Clark, P. E.; Curtis, S. A.; Rilee, M. L.; Cheung, C.
2004-01-01
The Saturn Autonomous Ring Array (SARA) mission concept is a new application for the Autonomous Nano-Technology Swarm (ANTS) architecture, a paradigm being developed for exploration of high surface area and/or multibody targets to minimize costs and maximize effectiveness of survey operations. Systems designed with ANTS architecture are built from potentially very large numbers of highly autonomous, yet socially interactive, specialists, in approximately ten specialist classes. Here, we analyze requirements for such a mission as well as specialized autonomous operations which would support this application.
Open clusters as laboratories: The angular momentum evolution of young stars
NASA Technical Reports Server (NTRS)
Stauffer, John R.
1994-01-01
This is the annual status report for the third year of our LTSA grant 'Open Clusters as Laboratories.' Because we have now had a few years to work on the project, we have started to produce and publish a large number of papers. We have been extremely successful in obtaining ROSAT observations of open clusters. With the demise of the PSPC on ROSAT, our main data source has come to an end and we will be able to concentrate on analyzing those data.
Modal analysis of circular Bragg fibers with arbitrary index profiles
NASA Astrophysics Data System (ADS)
Horikis, Theodoros P.; Kath, William L.
2006-12-01
A finite-difference approach based upon the immersed interface method is used to analyze the mode structure of Bragg fibers with arbitrary index profiles. The method allows general propagation constants and eigenmodes to be calculated to a high degree of accuracy, while computation times are kept to a minimum by exploiting sparse matrix algebra. The method is well suited to handle complicated structures comprised of a large number of thin layers with high-index contrast and simultaneously determines multiple eigenmodes without modification.
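To make the eigenproblem concrete, here is a minimal sparse finite-difference sketch for a 1D planar step-index guide rather than a full Bragg fiber: discretize E'' + k0^2 n(x)^2 E = beta^2 E on a uniform grid and ask an iterative eigensolver for the largest eigenvalues. The geometry, indices, and grid are invented for illustration, and the paper's immersed interface treatment is not reproduced.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

L, N = 10.0, 2000                           # domain width (um), grid points
x = np.linspace(-L / 2, L / 2, N)
h = x[1] - x[0]
k0 = 2 * np.pi / 1.55                       # free-space wavenumber at 1.55 um
n = np.where(np.abs(x) < 1.0, 1.48, 1.44)   # step-index core / cladding

# Discretized operator: second derivative plus k0^2 n(x)^2 on the diagonal
# (Dirichlet conditions at the box walls). Sparse storage keeps it cheap.
D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N)) / h**2
H = (D2 + sp.diags(k0**2 * n**2)).tocsc()

# The largest eigenvalues are beta^2 of the most confined modes; a truly
# guided mode has an effective index above the cladding value 1.44.
vals, vecs = eigsh(H, k=3, which="LA")
print("effective indices:", np.sqrt(vals) / k0)
```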
Managing Livestock Species under Climate Change in Australia
Seo, S. Niggol; McCarl, Bruce
2011-01-01
Simple Summary: World communities are concerned about the impacts of a hotter and drier climate on future agriculture. By examining Australian regional livestock data on sheep, beef cattle, dairy cattle, and pigs, the authors find that livestock production will expand under such conditions. Livestock revenue per farm is expected to increase by more than 47% by 2060 under the UKMO, GISS, and high-warming CSIRO scenarios. The existence of a threshold temperature for these species is not evident. Abstract: This paper examines the vulnerabilities of the major livestock species raised in Australia to climate change using a regional livestock profile of Australia covering around 1,400 regions. The number of each species owned, the number of each species sold, and the aggregate livestock revenue across all species are examined. The four major species analyzed are sheep, beef cattle, dairy cattle, and pigs. The analysis also includes livestock products such as wool and milk. These livestock production statistics are regressed against climate, geophysical, market, and household characteristics. In contrast to crop studies, the analysis finds that livestock species are resilient to a hotter and more arid climate. Under the CSIRO climate scenario in which temperature increases by 3.4 °C, livestock revenue per farm increases significantly, while the number of each species owned increases by large percentages except for dairy cattle. The precipitation reduction of about 8% in 2060 also increases the numbers of livestock species per farm household. Under both the UKMO and GISS scenarios, livestock revenue is expected to increase by around 47%, while the livestock population increases by a large percentage. Livestock management may play a key role in adapting to a hot and arid climate in Australia. However, critical values of the climatic variables for the species analyzed in this paper are not obvious from the regional data. PMID:26486620
Preliminary Investigation of a New Type of Supersonic Inlet
NASA Technical Reports Server (NTRS)
Ferri, Antonio; Nucci, Louis M
1946-01-01
A supersonic inlet with supersonic deceleration of the flow entirely outside of the inlet is considered. A particular arrangement with fixed geometry having a central body with a circular annular intake is analyzed, and it is shown theoretically that this arrangement gives high pressure recovery for a large range of Mach number and mass flow and therefore is practical for use on supersonic airplanes and missiles. For some Mach numbers the drag coefficient for this type of inlet is larger than the drag coefficient for the type of inlet with supersonic compression entirely inside, but the pressure recovery is larger for all flight conditions. The differences in drag can be eliminated for the design Mach number. Experimental results confirm the results of the theoretical analysis and show that pressure recoveries of 95 percent for Mach numbers of 1.33 and 1.52, 92 percent for a Mach number of 1.72, and 86 percent for a Mach number of 2.10 are possible with the configurations considered. If the mass flow decreases, the total drag coefficient increases gradually and the pressure recovery does not change appreciably.
Changes in size of deforested patches in the Brazilian Amazon.
Rosa, Isabel M D; Souza, Carlos; Ewers, Robert M
2012-10-01
Different deforestation agents, such as small farmers and large agricultural businesses, create different spatial patterns of deforestation. We analyzed the proportion of deforestation associated with different-sized clearings in the Brazilian Amazon from 2002 through 2009. We used annual deforestation maps to determine total area deforested and the size distribution of deforested patches per year. The size distribution of deforested areas changed over time in a consistent, directional manner. Large clearings (>1000 ha) comprised progressively smaller amounts of total annual deforestation. The number of smaller clearings (6.25-50.00 ha) remained unchanged over time. Small clearings accounted for 73% of all deforestation in 2009, up from 30% in 2002, whereas the proportion of deforestation attributable to large clearings decreased from 13% to 3% between 2002 and 2009. Large clearings were concentrated in Mato Grosso, but also occurred in eastern Pará and in Rondônia. In 2002 large clearings accounted for 17%, 15%, and 10% of all deforestation in Mato Grosso, Pará, and Rondônia, respectively. Even in these states, where there is a highly developed agricultural business dominated by soybean production and cattle ranching, the proportional contribution of large clearings to total deforestation declined. By 2009 large clearings accounted for 2.5%, 3.5%, and 1% of all deforestation in Mato Grosso, Pará, and Rondônia, respectively. These changes in deforestation patch size are coincident with the implementation of new conservation policies by the Brazilian government, which suggests that these policies are not effectively reducing the number of small clearings in primary forest, whether these are caused by large landholders or smallholders, but have been more effective at reducing the frequency of larger clearings. ©2012 Society for Conservation Biology.
bigSCale: an analytical framework for big-scale single-cell data.
Iacono, Giovanni; Mereu, Elisabetta; Guillaumet-Adkins, Amy; Corominas, Roser; Cuscó, Ivon; Rodríguez-Esteban, Gustavo; Gut, Marta; Pérez-Jurado, Luis Alberto; Gut, Ivo; Heyn, Holger
2018-06-01
Single-cell RNA sequencing (scRNA-seq) has significantly deepened our insights into complex tissues, with the latest techniques capable of processing tens of thousands of cells simultaneously. Analyzing increasing numbers of cells, however, generates extremely large data sets, extending processing time and challenging computing resources. Current scRNA-seq analysis tools are not designed to interrogate large data sets and often lack sensitivity to identify marker genes. With bigSCale, we provide a scalable analytical framework to analyze millions of cells, which addresses the challenges associated with large data sets. To handle the noise and sparsity of scRNA-seq data, bigSCale uses large sample sizes to estimate an accurate numerical model of noise. The framework further includes modules for differential expression analysis, cell clustering, and marker identification. A directed convolution strategy allows processing of extremely large data sets, while preserving transcript information from individual cells. We evaluated the performance of bigSCale using both a biological model of aberrant gene expression in patient-derived neuronal progenitor cells and simulated data sets, which underlines the speed and accuracy in differential expression analysis. To test its applicability for large data sets, we applied bigSCale to assess 1.3 million cells from the mouse developing forebrain. Its directed down-sampling strategy accumulates information from single cells into index cell transcriptomes, thereby defining cellular clusters with improved resolution. Accordingly, index cell clusters identified rare populations, such as reelin (Reln)-positive Cajal-Retzius neurons, for which we report previously unrecognized heterogeneity associated with distinct differentiation stages, spatial organization, and cellular function. Together, bigSCale presents a solution to address future challenges of large single-cell data sets. © 2018 Iacono et al.; Published by Cold Spring Harbor Laboratory Press.
Retrievals of abundances of hydrocarbon and nitrile species in Titan’s upper atmosphere
NASA Astrophysics Data System (ADS)
Yung, Yuk; Fan, Siteng; Shemansky, D. E.; Li, Cheng; Gao, Peter
2017-10-01
We develop an innovative retrieval method for Titan occultation measurements by the Cassini UVIS experiment. The T35 occultation is analyzed to illustrate the methodology. A significant number of occultations observed using the UVIS spectrographs show loss of the pointing control required for correction of the spectral vectors; consequently, only three stellar occultations have been analyzed to date. We use the Markov chain Monte Carlo (MCMC) method to retrieve the abundances or upper limits of thirteen hydrocarbon and nitrile species (N2, CH4, C2H2, C2H4, C2H6, HCN, C4H2, C6N2, C6H6, tholin, HC3N, C2N2, NH3), along with the pointing error, using the Cassini/UVIS simulator. These numbers are derived for the fast T35 occultation, which had not previously been analyzed because of its large pointing errors. Uncertainty in the retrievals is determined using an intrinsic fitting probability distribution function. The Caltech/JPL photochemical and kinetics model, KINETICS, is used to calculate the atmospheric abundances of the aforementioned species. Comparisons between model and observations reveal gaps in our current understanding of the chemical kinetics of hydrocarbon and nitrile species, especially for C6H6.
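A schematic of the MCMC retrieval step, using the emcee ensemble sampler: the forward model below is a crude stand-in (pure absorption with fixed cross sections for two mock species), not the Cassini/UVIS simulator, and every number is illustrative.

```python
import numpy as np
import emcee

rng = np.random.default_rng(1)

# Mock (channel x species) absorption cross sections and true column
# densities; two channels make the two species separately identifiable.
sigma = np.array([[1.0e-17, 5.0e-18],
                  [2.0e-18, 1.5e-17]])      # cm^2
true_N = np.array([1.0e17, 1.0e17])         # cm^-2
obs = np.exp(-sigma @ true_N) + rng.normal(0.0, 0.005, 2)

def log_prob(theta):
    # Flat prior on physically sensible column densities.
    if np.any(theta < 0.0) or np.any(theta > 1.0e19):
        return -np.inf
    model = np.exp(-sigma @ theta)          # Beer-Lambert transmission
    return -0.5 * np.sum((obs - model) ** 2 / 0.005**2)

ndim, nwalkers = 2, 16
p0 = true_N * rng.uniform(0.5, 1.5, (nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=False)
samples = sampler.get_chain(discard=500, flat=True)
print("retrieved column densities:", samples.mean(axis=0))
```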
Kohanim, Sahar; Sternberg, Paul; Karrass, Jan; Cooper, William O; Pichert, James W
2016-02-01
The number of unsolicited patient complaints about a physician has been shown to correlate with increased malpractice risk. Using a large national patient complaint database, we evaluated the number and content of unsolicited patient complaints about ophthalmologists to identify significant risk factors for receiving a complaint. Retrospective cohort study. Ophthalmologists, nonophthalmic surgeons, nonophthalmic nonsurgeons. We analyzed 2087 unsolicited or spontaneous complaints reported about 815 ophthalmologists practicing in 24 academic and nonacademic organizations using the Patient Advocacy Reporting System (PARS). Complaints against 5273 nonophthalmic surgeons and 19487 nonophthalmic nonsurgeons during the same period were used for comparison. Complaint type profiles were assigned using a previously validated standardized coding system. We (1) described the distribution of complaints against ophthalmologists; (2) compared the distribution and rates of patient complaints about ophthalmologists with those of nonophthalmic surgeons and nonophthalmic nonsurgeons in the database; (3) analyzed differences in complaint type profiles and quantity of complaints by ophthalmic subspecialty, practice setting, physician gender, medical school type, and graduation date; and (4) identified significant risk factors for high numbers of unsolicited patient complaints after adjusting for other covariates. Unsolicited patient complaints. Ophthalmologists had significantly fewer complaints per physician than other nonophthalmic surgeons and nonsurgeons. Sixty-three percent of ophthalmologists had 0 complaints, whereas 10% of ophthalmologists accounted for 61% of all complaints. Ophthalmologists from academic centers, female ophthalmologists, and younger ophthalmologists had significantly more complaints (P < 0.01), and general ophthalmologists had significantly fewer complaints than subspecialists (P < 0.05). After adjusting for covariates using multivariable analysis, working at an academic center was a statistically significant risk factor (adjusted relative risk, 1.82; 95% confidence interval, 1.36-2.43; P < 0.001). Ophthalmologists had significantly fewer complaints than nonophthalmic surgeons and nonophthalmic nonsurgeons, and by implication may have a lower malpractice risk as a group. Nevertheless, a small number of ophthalmologists generated a disproportionate number of complaints. Working at an academic center was a significant independent risk factor for having more patient complaints. Further research is needed to clarify the underlying reasons for this association and to identify interventions that may decrease this risk. Copyright © 2016 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Dlugach, Jana M.; Mishchenko, Michael I.; Mackowski, Daniel W.
2012-01-01
Using the results of direct, numerically exact computer solutions of the Maxwell equations, we analyze scattering and absorption characteristics of polydisperse compound particles in the form of wavelength-sized spheres covered with a large number of much smaller spherical grains. The results pertain to the complex refractive indices 1.55 + i0.0003, 1.55 + i0.3, and 3 + i0.1. We show that the optical effects of dusting wavelength-sized hosts by microscopic grains can vary depending on the number and size of the grains as well as on the complex refractive index. Our computations also demonstrate the high efficiency of the new superposition T-matrix code developed for use on distributed memory computer clusters.
On the correlation of plume centerline velocity decay of turbulent acoustically excited jets
NASA Technical Reports Server (NTRS)
Von Glahn, Uwe H.
1987-01-01
Acoustic excitation has been shown to alter the velocity decay and spreading characteristics of jet plumes by modifying the large-scale structures in the plume shear layer. The present work consists of reviewing and analyzing available published and unpublished experimental data in order to determine the importance and magnitude of the several variables that contribute to plume modification by acoustic excitation. Included in the study were consideration of the effects of internal or external acoustic excitation, excitation Strouhal number, acoustic excitation level, nozzle size and flow conditions. The last include jet Mach number and jet temperature. The effects of these factors on the plume centerline velocity decay are then summarized in an overall empirical correlation.
An evaluation of MPI message rate on hybrid-core processors
Barrett, Brian W.; Brightwell, Ron; Grant, Ryan; ...
2014-11-01
Power and energy concerns are motivating chip manufacturers to consider future hybrid-core processor designs that may combine a small number of traditional cores optimized for single-thread performance with a large number of simpler cores optimized for throughput performance. This trend is likely to impact the way in which compute resources for network protocol processing functions are allocated and managed. In particular, the performance of MPI match processing is critical to achieving high message throughput. In this paper, we analyze the ability of simple and more complex cores to perform MPI matching operations for various scenarios in order to gain insight into how MPI implementations for future hybrid-core processors should be designed.
Fast algorithm for spectral mixture analysis of imaging spectrometer data
NASA Astrophysics Data System (ADS)
Schouten, Theo E.; Klein Gebbinck, Maurice S.; Liu, Z. K.; Chen, Shaowei
1996-12-01
Imaging spectrometers acquire images in many narrow spectral bands but have limited spatial resolution. Spectral mixture analysis (SMA) is used to determine the fractions of the ground cover categories (the endmembers) present in each pixel. In this paper a new iterative SMA method is presented and tested using a 30-band MAIS image. The time needed for each iteration is independent of the number of bands, so the method can be used for spectrometers with a large number of bands. Furthermore, a new method based on K-means clustering for obtaining endmembers from image data is described and compared with existing methods. Using the developed methods, the available MAIS image was analyzed with 2 to 6 endmembers.
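A minimal sketch of the linear unmixing step at the heart of SMA: solve for nonnegative endmember fractions that best reproduce a pixel spectrum, imposing the sum-to-one constraint through a heavily weighted extra row. The endmember spectra here are random stand-ins, not MAIS data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
bands, n_end = 30, 3
E = rng.uniform(0.05, 0.6, (bands, n_end))   # endmember spectra as columns
f_true = np.array([0.6, 0.3, 0.1])
pixel = E @ f_true + rng.normal(0.0, 0.002, bands)

# Nonnegative least squares with a sum-to-one constraint imposed through a
# heavily weighted extra row appended to the system.
w = 100.0
A = np.vstack([E, w * np.ones(n_end)])
b = np.append(pixel, w)
fractions, _ = nnls(A, b)
print("estimated fractions:", fractions.round(3))   # near [0.6, 0.3, 0.1]
```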
Predicting Fluctuations in Cryptocurrency Transactions Based on User Comments and Replies
Kim, Young Bin; Kim, Jun Gi; Kim, Wook; Im, Jae Ho; Kim, Tae Hyeong; Kang, Shin Jin; Kim, Chang Hun
2016-01-01
This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method. PMID:27533113
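A schematic version of the approach, assuming invented feature names and synthetic data (the paper's actual feature set and labeling are richer): classify whether the next day's price moves up from daily counts of comments and replies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
days = 300
comments = rng.poisson(200, days).astype(float)   # daily comment counts
replies = rng.poisson(80, days).astype(float)     # daily reply counts

# Synthetic label: bursts of community activity precede upward price moves.
up_next_day = (comments + 0.5 * replies
               + rng.normal(0.0, 30.0, days) > 245).astype(int)

X = np.column_stack([comments, replies])
clf = LogisticRegression().fit(X[:-60], up_next_day[:-60])   # train
print("held-out accuracy:", clf.score(X[-60:], up_next_day[-60:]))
```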
Functional Generalized Structured Component Analysis.
Suk, Hye Won; Hwang, Heungsun
2016-12-01
An extension of Generalized Structured Component Analysis (GSCA), called Functional GSCA, is proposed to analyze functional data that are considered to arise from an underlying smooth curve varying over time or other continua. GSCA has been geared for the analysis of multivariate data. Accordingly, it cannot deal with functional data that often involve different measurement occasions across participants and a large number of measurement occasions that exceed the number of participants. Functional GSCA addresses these issues by integrating GSCA with spline basis function expansions that represent infinite-dimensional curves onto a finite-dimensional space. For parameter estimation, functional GSCA minimizes a penalized least squares criterion by using an alternating penalized least squares estimation algorithm. The usefulness of functional GSCA is illustrated with gait data.
Software Engineering for Scientific Computer Simulations
NASA Astrophysics Data System (ADS)
Post, Douglass E.; Henderson, Dale B.; Kendall, Richard P.; Whitney, Earl M.
2004-11-01
Computer simulation is becoming a very powerful tool for analyzing and predicting the performance of fusion experiments. Simulation efforts are evolving from including only a few effects to many effects, from small teams with a few people to large teams, and from workstations and small processor count parallel computers to massively parallel platforms. Successfully making this transition requires attention to software engineering issues. We report on the conclusions drawn from a number of case studies of large scale scientific computing projects within DOE, academia and the DoD. The major lessons learned include attention to sound project management including setting reasonable and achievable requirements, building a good code team, enforcing customer focus, carrying out verification and validation and selecting the optimum computational mathematics approaches.
Stracuzzi, David John; Brost, Randolph C.; Phillips, Cynthia A.; ...
2015-09-26
Geospatial semantic graphs provide a robust foundation for representing and analyzing remote sensor data. In particular, they support a variety of pattern search operations that capture the spatial and temporal relationships among the objects and events in the data. However, in the presence of large data corpora, even a carefully constructed search query may return a large number of unintended matches. This work considers the problem of calculating a quality score for each match to the query, given that the underlying data are uncertain. As a result, we present a preliminary evaluation of three methods for determining both match qualitymore » scores and associated uncertainty bounds, illustrated in the context of an example based on overhead imagery data.« less
MANOVA for distinguishing experts' perceptions about entrepreneurship using NES data from GEM
NASA Astrophysics Data System (ADS)
Correia, Aldina; Costa e Silva, Eliana; Lopes, Isabel C.; Braga, Alexandra
2016-12-01
Global Entrepreneurship Monitor is a large-scale database for internationally comparative entrepreneurship research that includes information about many aspects of entrepreneurship (activities, perceptions, conditions, and national and regional policy, among others) for a large number of countries. The project has two main sources of primary data: the Adult Population Survey and the National Expert Survey. In this work the 2011 and 2012 National Expert Survey datasets are studied. Our goal is to analyze the effects of the different types of entrepreneurship expert specialization on perceptions about the Entrepreneurial Framework Conditions. For this purpose multivariate analysis of variance is used. Some similarities between the results obtained for the 2011 and 2012 datasets were found; however, differences between experts remain.
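A minimal sketch of the MANOVA step, testing whether expert type shifts the mean vector of several perception scores at once. The column names and data are invented; the NES datasets have their own variables.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(4)
n = 90
df = pd.DataFrame({
    "expert_type": np.repeat(["finance", "policy", "education"], n // 3),
    "efc_finance": rng.normal(3.0, 0.5, n),   # perception score 1
    "efc_policy":  rng.normal(2.8, 0.5, n),   # perception score 2
})
# Inject a group effect so the test has something to find.
df.loc[df.expert_type == "finance", "efc_finance"] += 0.6

fit = MANOVA.from_formula("efc_finance + efc_policy ~ expert_type", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc.
```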
Protocol for Detection of Yersinia pestis in Environmental ...
Methods report. This is the first open-access, detailed protocol available to all government departments and agencies, and their contractors, for detecting Yersinia pestis, the pathogen that causes plague, in multiple environmental sample types including water. Each analytical method includes a sample processing procedure for each sample type in a step-by-step manner. The protocol includes real-time PCR, traditional microbiological culture, and Rapid Viability PCR (RV-PCR) analytical methods. For large-volume water samples it also includes an ultrafiltration-based sample concentration procedure. Because the protocol is available without restriction to all government departments and agencies, and their contractors, the nation will now have increased laboratory capacity to analyze large numbers of samples during a wide-area plague incident.
Beuchat, L R; Mann, David A; Gurtler, Joshua B
2007-11-01
A study was done to compare Nissui Compact Dry Yeast and Mold plates (CDYM), 3M Petrifilm Yeast and Mold count plates (PYM), dichloran-rose bengal chloramphenicol (DRBC) agar, and dichloran 18% glycerol (DG18) agar for enumerating yeasts and molds naturally occurring in 97 foods (grains, legumes, raw fruits and vegetables, nuts, dairy products, meats, and miscellaneous processed foods and dry mixes). Correlation coefficients for plates incubated for 5 days were DG18 versus DRBC (0.93), PYM versus DRBC (0.81), CDYM versus DG18 (0.81), PYM versus DG18 (0.80), CDYM versus DRBC (0.79), and CDYM versus PYM (0.75). The number of yeasts and molds recovered from a group of foods (n = 32) analyzed on a weight basis (CFU per gram) was not significantly different (alpha = 0.05) when samples were plated on DRBC, DG18, PYM, or CDYM. However, the order of recovery from foods (n = 65) in a group analyzed on a unit or piece basis, or a composite of both groups (n = 97), was DRBC > DG18 = CDYM > PYM. Compared with PYM, CDYM recovered equivalent, significantly higher (alpha = 0.05) or significantly lower (alpha = 0.05) numbers of yeasts and molds in 51.5, 27.8, and 20.6%, respectively, of the 97 foods tested; respective values were 68.8, 15.6, and 15.6% in the small group (n = 32) and 43.1, 33.8, and 23.1% in the large group (n = 65) of foods. The two groups contained different types of foods, the latter consisting largely (73.8%) of raw fruits (n = 16) and vegetables (n = 32). Differences in efficacy of the four methods in recovering yeasts and molds from foods in the two groups are attributed in part to differences in genera and predominant mycoflora. While DG18 agar, CDYM, and PYM appear to be acceptable for enumerating yeasts and molds in the foods analyzed in this study, overall, DRBC agar recovered higher numbers from the 97 test foods, thereby supporting its recommended use as a general purpose medium for mycological analysis.
Genomic analysis of carboxyl/cholinesterase genes in the silkworm Bombyx mori
2010-01-01
Background: Carboxyl/cholinesterases (CCEs) have pivotal roles in dietary detoxification, pheromone or hormone degradation, and neurodevelopment. The recent completion of genome projects in various insect species has led to the identification of multiple CCEs with unknown functions. Here, we analyzed the phylogeny, expression, and genomic distribution of 69 putative CCEs in the silkworm, Bombyx mori (Lepidoptera: Bombycidae). Results: A phylogenetic tree of CCEs in B. mori and other lepidopteran species was constructed. The expression pattern of each B. mori CCE was also investigated by a search of an expressed sequence tag (EST) database, and the relationship between phylogeny and expression was analyzed. A large number of B. mori CCEs were identified from a midgut EST library. CCEs expressed in the midgut formed a cluster in the phylogenetic tree that included not only B. mori genes but also those of other lepidopteran species. The silkworm, and possibly also other lepidopteran species, has a large number of CCEs, and this might be a consequence of the large cluster of midgut CCEs. Investigation of intron-exon organization in B. mori CCEs revealed that their positions and splicing site phases were strongly conserved. Several B. mori CCEs, including juvenile hormone esterase, not only clustered in the phylogenetic tree but were also closely located on silkworm chromosomes. Among the many CCEs, we investigated the phylogeny and microsynteny of the neuroligins in detail. Interestingly, we found that the evolution of this gene appears not to be conserved between B. mori and other insect orders. Conclusions: We analyzed 69 putative CCEs from B. mori. Comparison of these CCEs with other lepidopteran CCEs indicated that they have conserved expression and function within this insect order. The analyses showed that CCEs are unevenly distributed across the genome of B. mori and suggested that the neuroligins may have an evolutionary history distinct from that in other insect orders. It is possible that such an uneven genomic distribution and a unique neuroligin evolution are shared with other lepidopteran insects. Our genomic analysis has provided novel information on the CCEs of the silkworm, which will be of value for understanding the biology, physiology, and evolution of insect CCEs. PMID:20546589
Genetic Structures of Copy Number Variants Revealed by Genotyping Single Sperm
Luo, Minjie; Cui, Xiangfeng; Fredman, David; Brookes, Anthony J.; Azaro, Marco A.; Greenawalt, Danielle M.; Hu, Guohong; Wang, Hui-Yun; Tereshchenko, Irina V.; Lin, Yong; Shentu, Yue; Gao, Richeng; Shen, Li; Li, Honghua
2009-01-01
Background Copy number variants (CNVs) occupy a significant portion of the human genome and may have important roles in meiotic recombination, human genome evolution and gene expression. Many genetic diseases may be underlain by CNVs. However, because of the presence of their multiple copies, variability in copy numbers and the diploidy of the human genome, detailed genetic structure of CNVs cannot be readily studied by available techniques. Methodology/Principal Findings Single sperm samples were used as the primary subjects for the study so that CNV haplotypes in the sperm donors could be studied individually. Forty-eight CNVs characterized in a previous study were analyzed using a microarray-based high-throughput genotyping method after multiplex amplification. Seventeen single nucleotide polymorphisms (SNPs) were also included as controls. Two single-base variants, either allelic or paralogous, could be discriminated for all markers. Microarray data were used to resolve SNP alleles and CNV haplotypes, to quantitatively assess the numbers and compositions of the paralogous segments in each CNV haplotype. Conclusions/Significance This is the first study of the genetic structure of CNVs on a large scale. Resulting information may help understand evolution of the human genome, gain insight into many genetic processes, and discriminate between CNVs and SNPs. The highly sensitive high-throughput experimental system with haploid sperm samples as subjects may be used to facilitate detailed large-scale CNV analysis. PMID:19384415
NASA Astrophysics Data System (ADS)
Gallaire, Francois; Zhu, Lailai
2016-11-01
While the deformation regimes under flow of anuclear cells, like red blood cells, have been widely analyzed, the dynamics of nucleated cells are less explored. The objective of this work is to investigate the interplay between the stiff nucleus, modeled here as a rigid spherical particle, and the surrounding deformable cell membrane, modeled for simplicity as an immiscible droplet, subjected to an external unbounded plane shear flow. A three-dimensional boundary integral implementation is developed to describe the interface-structure interaction, characterized by two dimensionless numbers: the capillary number Ca, defined as the ratio of shear to capillary forces, and the particle-droplet size ratio. For large Ca, i.e. very deformable droplets, the particle has a stable equilibrium position at the center of the droplet. However, for smaller Ca, both the plane symmetry and the time invariance are broken and the particle migrates to a closed orbit located off the symmetry plane, reaching a limit cycle. For even smaller capillary numbers, the time invariance is restored and the particle reaches a steady equilibrium position off the symmetry plane. This series of bifurcations is analyzed and possible physical mechanisms from which they originate are discussed. Financial support by ERC Grant SimCoMiCs 280117 is gratefully acknowledged.
Visual Analysis of Cloud Computing Performance Using Behavioral Lines.
Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu
2016-02-29
Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues, but profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visualization-based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visualization-based approach is effective in identifying trends and anomalies of the systems.
NASA Technical Reports Server (NTRS)
Stofan, Ellen R.
2005-01-01
Proxemy Research had a grant from NASA to perform science research on upwelling and volcanism on Venus. This was a 3 year Planetary Geology and Geophysics grant to E. Stofan, entitled Coronae and Large volcanoes on Venus. This grant closes on 12/31/05. Here we summarize the scientific progress and accomplishments of this grant. Scientific publications and abstracts of presentations are indicated in the final section. This was a very productive grant and the progress that was made is summarized. Attention is drawn to the publications and abstracts published in each year. The proposal consisted of two tasks, one examining coronae and one studying large volcanoes. The corona task (Task 1) consisted of three parts: 1) a statistical study of the updated corona population, with Sue Smrekar, Lori Glaze, Paula Martin and Steve Baloga; 2) geologic analysis of several specific groups of coronae, with Sue Smrekar and others; and 3) determining the histories and significance of a number of coronae with extreme amounts of volcanism, with Sue Smrekar. Task 2, studies of large volcanoes, consisted of two subtasks. In the first, we studied the geologic history of several volcanoes, with John Guest, Peter Grindrod, Antony Brian and Steve Anderson. In the second subtask, I analyzed a number of Venusian volcanoes with evidence of summit diking along with Peter Grindrod and Francis Nimmo.
Reynolds number effects in combustion noise
NASA Technical Reports Server (NTRS)
Seshan, P. K.
1981-01-01
Acoustic emission spectra have been obtained for non-premixed turbulent combustion from two small diameter laboratory gas burners, two commercial gas burners, and a large gas burner in the firebox of a Babcock-Wilcox boiler (50,000 lb steam/hr). The changes in burner size and firing rate represent changes in Reynolds number, and changes in air/fuel ratio represent departure from stoichiometric proportions. The combustion efficiency was measured independently through gas analysis. The acoustic spectra obtained from the various burners exhibit a persistent shape over the Reynolds number range of 8,200-82,000. The spectra were analyzed for identification of a predictable frequency domain that is most responsive to, and readily correlated with, combustion efficiency. A simple parameter (consisting of the ratio of the average acoustic power output in the most responsive frequency bandwidth to the acoustic power level of the loudest frequency) is proposed whose value increases significantly and unmistakably as combustion efficiency approaches 100%. The dependence of the most responsive frequency domain on the various Reynolds numbers associated with turbulent jets is discussed.
The spanwise spectra in wall-bounded turbulence
NASA Astrophysics Data System (ADS)
Wang, Hong-Ping; Wang, Shi-Zhao; He, Guo-Wei
2017-12-01
The pre-multiplied spanwise energy spectra of streamwise velocity fluctuations are investigated in this paper. Two distinct spectral peaks in the spanwise spectra are observed in low-Reynolds-number wall-bounded turbulence. The spectra are calculated from direct numerical simulations (DNS) of turbulent channel flows and zero-pressure-gradient boundary layer flows. These two peaks are located in the near-wall and outer regions and are referred to as the inner peak and the outer peak, respectively. This result implies that the streamwise velocity fluctuations can be separated into large and small scales in the spanwise direction even though the friction Reynolds number Re_τ can be as low as 1000. The properties of the inner and outer peaks in the spanwise spectra are analyzed. The locations of the inner peak are invariant over a range of Reynolds numbers. However, the locations of the outer peak depend on the Reynolds number and are much higher than those of the outer peak of the pre-multiplied streamwise energy spectra of the streamwise velocity.
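A minimal sketch of the quantity being discussed: the pre-multiplied spanwise spectrum k_z E_uu(k_z) computed from a single streamwise-velocity signal u(z). The two-wavelength synthetic signal stands in for DNS data and is chosen so that small- and large-scale peaks are both visible.

```python
import numpy as np

Lz, N = 2 * np.pi, 1024
z = np.linspace(0.0, Lz, N, endpoint=False)
u = np.sin(40 * z) + 0.5 * np.sin(4 * z)        # small- and large-scale parts

uhat = np.fft.rfft(u) / N
E = 2.0 * np.abs(uhat) ** 2                     # one-sided energy spectrum
kz = 2 * np.pi * np.fft.rfftfreq(N, d=Lz / N)   # angular wavenumbers

premult = kz * E                                # pre-multiplied spectrum
peaks = np.sort(kz[np.argsort(premult)[-2:]])
print("pre-multiplied spectral peaks at k_z =", peaks)   # ~4 and ~40
```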
What Four Million Mappings Can Tell You about Two Hundred Ontologies
NASA Astrophysics Data System (ADS)
Ghazvinian, Amir; Noy, Natalya F.; Jonquet, Clement; Shah, Nigam; Musen, Mark A.
The field of biomedicine has embraced the Semantic Web probably more than any other field. As a result, there is a large number of biomedical ontologies covering overlapping areas of the field. We have developed BioPortal—an open community-based repository of biomedical ontologies. We analyzed ontologies and terminologies in BioPortal and the Unified Medical Language System (UMLS), creating more than 4 million mappings between concepts in these ontologies and terminologies based on the lexical similarity of concept names and synonyms. We then analyzed the mappings and what they tell us about the ontologies themselves, the structure of the ontology repository, and the ways in which the mappings can help in the process of ontology design and evaluation. For example, we can use the mappings to guide users who are new to a field to the most pertinent ontologies in that field, to identify areas of the domain that are not covered sufficiently by the ontologies in the repository, and to identify which ontologies will serve well as background knowledge in domain-specific tools. While we used a specific (but large) ontology repository for the study, we believe that the lessons we learned about the value of a large-scale set of mappings to ontology users and developers are general and apply in many other domains.
Enhancement of large fluctuations to extinction in adaptive networks
NASA Astrophysics Data System (ADS)
Hindes, Jason; Schwartz, Ira B.; Shaw, Leah B.
2018-01-01
During an epidemic, individual nodes in a network may adapt their connections to reduce the chance of infection. A common form of adaptation is avoidance rewiring, where a noninfected node breaks a connection to an infected neighbor and forms a new connection to another noninfected node. Here we explore the effects of such adaptivity on stochastic fluctuations in the susceptible-infected-susceptible model, focusing on the largest fluctuations that result in extinction of infection. Using techniques from large-deviation theory, combined with a measurement of heterogeneity in the susceptible degree distribution at the endemic state, we are able to predict and analyze large fluctuations and extinction in adaptive networks. We find that in the limit of small rewiring there is a sharp exponential reduction in mean extinction times compared to the case of zero adaptation. Furthermore, we find an exponential enhancement in the probability of large fluctuations with increased rewiring rate, even when holding the average number of infected nodes constant.
Generalized friendship paradox in complex networks: The case of scientific collaboration
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Jo, Hang-Hyun
2014-04-01
The friendship paradox states that your friends have on average more friends than you have. Does the paradox "hold" for other individual characteristics like income or happiness? To address this question, we generalize the friendship paradox for arbitrary node characteristics in complex networks. By analyzing two coauthorship networks of Physical Review journals and Google Scholar profiles, we find that the generalized friendship paradox (GFP) holds at the individual and network levels for various characteristics, including the number of coauthors, the number of citations, and the number of publications. The origin of the GFP is shown to be rooted in positive correlations between degree and characteristics. As a fruitful application of the GFP, we suggest effective and efficient sampling methods for identifying high-characteristic nodes in large-scale networks. Our study on the GFP can shed light on understanding the interplay between network structure and node characteristics in complex networks.
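A minimal numerical check of the GFP is straightforward; the sketch below builds a toy network with a degree-correlated characteristic and counts the nodes at which the paradox holds (the attribute name and network model are assumptions).

```python
import networkx as nx

def gfp_holds_for_node(G, node, attr):
    """Check the generalized friendship paradox at one node: is the mean
    characteristic of its neighbors larger than its own value?"""
    neighbors = list(G.neighbors(node))
    if not neighbors:
        return False
    neighbor_mean = sum(G.nodes[v][attr] for v in neighbors) / len(neighbors)
    return neighbor_mean > G.nodes[node][attr]

# Toy network where the characteristic is positively correlated with degree
G = nx.barabasi_albert_graph(1000, 3, seed=42)
for v in G:
    G.nodes[v]["citations"] = G.degree[v] + 1      # assumed degree-characteristic correlation
frac = sum(gfp_holds_for_node(G, v, "citations") for v in G) / G.number_of_nodes()
print(f"GFP holds at {frac:.0%} of nodes")
```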
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
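Since the RKHS of a sum of kernels is the sum space of the individual RKHSs, the algorithm can be sketched as kernel ridge regression with a summed Gaussian kernel; the bandwidths and regularization parameter below are illustrative, not the paper's choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_sum_space(x, y, sigmas=(1.0, 0.05), lam=1e-3):
    """Least square regularized regression with a sum of Gaussian kernels:
    the large-scale kernel captures the low-frequency component, the
    small-scale kernel the high-frequency one (bandwidths are illustrative)."""
    K = sum(gaussian_kernel(x, x, s) for s in sigmas)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)   # one linear system, as in the paper
    return lambda t: sum(gaussian_kernel(t, x, s) for s in sigmas) @ alpha

# Nonflat target: smooth trend plus a localized high-frequency oscillation
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 200))
y = np.sin(2 * x) + 0.3 * np.sin(40 * x) * (x > 0) + 0.05 * rng.normal(size=x.size)
f = fit_sum_space(x, y)
print(np.mean((f(x) - y) ** 2))    # training error of the sum-space fit
```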
Azevedo, C F; Nascimento, M; Silva, F F; Resende, M D V; Lopes, P S; Guimarães, S E F; Glória, L S
2015-10-09
A significant contribution of molecular genetics is the direct use of DNA information to identify genetically superior individuals. With this approach, genome-wide selection (GWS) can be used for this purpose. GWS consists of analyzing a large number of single nucleotide polymorphism markers widely distributed in the genome; however, because the number of markers is much larger than the number of genotyped individuals, and such markers are highly correlated, special statistical methods are required. Among these methods, independent component regression, principal component regression, partial least squares, and partial principal components stand out. Thus, the aim of this study was to propose an application of dimensionality reduction methods to the GWS of carcass traits in an F2 (Piau x commercial line) pig population. The results show similarities between the principal and independent component methods, which provided the most accurate genomic breeding estimates for most carcass traits in pigs.
Probing neutrino coupling to a light scalar with coherent neutrino scattering
NASA Astrophysics Data System (ADS)
Farzan, Yasaman; Lindner, Manfred; Rodejohann, Werner; Xu, Xun-Jie
2018-05-01
Large neutrino event numbers in future experiments measuring coherent elastic neutrino-nucleus scattering allow precision measurements of standard and new physics. We analyze the current and prospective limits on a light scalar particle coupling to neutrinos and quarks, using COHERENT and CONUS as examples. Both lepton number conserving and violating interactions are considered. It is shown that for scalar masses of a few MeV, current (future) experiments can probe couplings down to the level of 10^-4 (10^-6). Scalars with masses around the neutrino energy allow their mass to be determined via a characteristic distortion of the spectrum shape. Our present and future limits are compared with constraints from supernova evolution, Big Bang nucleosynthesis and neutrinoless double beta decay. We also outline UV-complete underlying models that include a light scalar with coupling to quarks, for both lepton number violating and conserving couplings to neutrinos.
Rotating Hele-Shaw cell with a time-dependent angular velocity
NASA Astrophysics Data System (ADS)
Anjos, Pedro H. A.; Alvarez, Victor M. M.; Dias, Eduardo O.; Miranda, José A.
2017-12-01
Despite the large number of existing studies of viscous flows in rotating Hele-Shaw cells, most investigations analyze rotational motion with a constant angular velocity, under vanishing Reynolds number conditions in which inertial effects can be neglected. In this work, we examine the linear and weakly nonlinear dynamics of the interface between two immiscible fluids in a rotating Hele-Shaw cell, considering the action of a time-dependent angular velocity, and taking into account the contribution of inertia. By using a generalized Darcy's law, we derive a second-order mode-coupling equation which describes the time evolution of the interfacial perturbation amplitudes. For arbitrary values of viscosity and density ratios, and for a range of values of a rotational Reynolds number, we investigate how the time-dependent angular velocity and inertia affect the important finger competition events that traditionally arise in rotating Hele-Shaw flows.
Seneca, Sara; De Rademaeker, Marjan; Sermon, Karen; De Rycke, Martine; De Vos, Michel; Haentjens, Patrick; Devroey, Paul; Liebaers, Ingeborg
2010-01-01
Purpose: This study aims to analyze the relationship between trinucleotide repeat length and reproductive outcome in a large cohort of DM1 patients undergoing ICSI and PGD. Methods: Prospective cohort study. The effect of trinucleotide repeat length on reproductive outcome per patient was analyzed using bivariate analysis (t-test) and multivariate analysis using Kaplan-Meier and Cox regression analysis. Results: Between 1995 and 2005, 205 cycles of ICSI and PGD were carried out for DM1 in 78 couples. The number of trinucleotide repeats does not have an influence on reproductive outcome when adjusted for age, BMI, basal FSH values, parity, infertility status, and whether the male or female partner is affected. Cox regression analysis indicates that the cumulative live birth rate is not influenced by the number of trinucleotide repeats. The only factor with a significant effect is age (p < 0.05). Conclusion: There is no evidence of an effect of trinucleotide repeat length on reproductive outcome in patients undergoing ICSI and PGD. PMID:20221684
The shadow of Saturn's icy satellites in the E ring
NASA Astrophysics Data System (ADS)
Schmidt, J.; Sremcevic, M.
2008-09-01
We analyze shadows that Saturnian satellites cast in the E ring, a faint, broad dust ring composed of icy grains. The brightness contrast of a moon's shadow relative to the surrounding ring allows local properties of the ring-particle size distribution to be inferred. We derive the shadow contrast from a large number of Cassini images of Enceladus taken in various filters over a range of phase angles from 144 to 164 degrees. For Tethys and Dione we identify a clear shadow in images with phase angles larger than 160 degrees. From the data we obtain the number density of E ring grains at the orbits of Tethys and Dione relative to that near Enceladus. The latter we constrain from the variation of the shadow contrast with color and phase angle. From the Enceladus data we construct the phase curve of the E ring dust between 144 and 164 degrees. We compare to data obtained from Earth-bound observations by de Pater et al. (2004) and to in situ measurements by the Cosmic Dust Analyzer onboard Cassini.
NASA Astrophysics Data System (ADS)
Scheingraber, Christoph; Käser, Martin; Allmann, Alexander
2017-04-01
Probabilistic seismic risk analysis (PSRA) is a well-established method for modelling loss from earthquake events. In the insurance industry, it is widely employed for probabilistic modelling of loss to a distributed portfolio. In this context, precise exposure locations are often unknown, which results in considerable loss uncertainty. The treatment of exposure uncertainty has already been identified as an area where PSRA would benefit from increased research attention. However, epistemic location uncertainty has so far received little research attention. We propose a new framework for the efficient treatment of location uncertainty. To demonstrate the usefulness of this novel method, a large number of synthetic portfolios resembling real-world portfolios are systematically analyzed. We investigate the effect of portfolio characteristics such as value distribution, portfolio size, or the proportion of risk items with unknown coordinates on loss variability. Several sampling criteria to increase the computational efficiency of the framework are proposed and put into the wider context of well-established Monte-Carlo variance reduction techniques. The performance of each of the proposed criteria is analyzed.
Time-Shifted Boundary Conditions Used for Navier-Stokes Aeroelastic Solver
NASA Technical Reports Server (NTRS)
Srivastava, Rakesh
1999-01-01
Under the Advanced Subsonic Technology (AST) Program, an aeroelastic analysis code (TURBO-AE) based on the Navier-Stokes equations is currently under development at NASA Lewis Research Center's Machine Dynamics Branch. For a blade row, aeroelastic instability can occur in any of the possible interblade phase angles (IBPAs). Analyzing small IBPAs is very computationally expensive because a large number of blade passages must be simulated. To reduce the computational cost of these analyses, we used time-shifted, or phase-lagged, boundary conditions in the TURBO-AE code. These conditions can be used to reduce the computational domain to a single blade passage by requiring the boundary conditions across the passage to be lagged depending on the IBPA being analyzed. The time-shifted boundary conditions currently implemented are based on the direct-store method. This method requires large amounts of data to be stored over a period of the oscillation cycle. On CRAY computers this is not a major problem because solid-state devices can be used for fast input and output to read and write the data onto a disk instead of storing it in core memory.
NASA Astrophysics Data System (ADS)
Dai, A. J.; Chen, Z. Y.; Huang, D. W.; Tong, R. H.; Zhang, J.; Wei, Y. N.; Ma, T. K.; Wang, X. L.; Yang, H. Y.; Gao, H. L.; Pan, Y.; the J-TEXT Team
2018-05-01
A large number of runaway electrons (REs) with energies as high as several tens of mega-electron-volts (MeV) may be generated during disruptions in a large-scale tokamak. The kinetic energy carried by REs is eventually deposited on the plasma-facing components, causing damage and posing a threat to the operation of the tokamak. The magnetic energy remaining after a thermal quench is significant in a large-scale tokamak. The conversion of magnetic energy to runaway kinetic energy will increase the threat of runaway electrons to the first wall. The magnetic energy dissipated inside the vacuum vessel (VV) equals the decrease of the initial magnetic energy inside the VV plus the magnetic energy flowing into the VV during a disruption. Based on the estimated magnetic energy, the evolution of magnetic-to-kinetic energy conversion is analyzed through three periods in disruptions with a runaway current plateau.
Collaborative filtering to improve navigation of large radiology knowledge resources.
Kahn, Charles E
2005-06-01
Collaborative filtering is a knowledge-discovery technique that can help guide readers to items of potential interest based on the experience of prior users. This study sought to determine the impact of collaborative filtering on navigation of a large, Web-based radiology knowledge resource. Collaborative filtering was applied to a collection of 1,168 radiology hypertext documents available via the Internet. An item-based collaborative filtering algorithm identified each document's six most closely related documents based on 248,304 page views in an 18-day period. Documents were amended to include links to their related documents, and use was analyzed over the next 5 days. The mean number of documents viewed per visit increased from 1.57 to 1.74 (P < 0.0001). Collaborative filtering can increase a radiology information resource's utilization and can improve its usefulness and ease of navigation. The technique holds promise for improving navigation of large Internet-based radiology knowledge resources.
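A minimal sketch of item-based collaborative filtering on page-view sessions is shown below; the co-occurrence similarity and the toy session data are stand-ins for whatever similarity measure the study actually used.

```python
from collections import defaultdict
from itertools import combinations

def related_documents(sessions, top_k=6):
    """Item-based collaborative filtering sketch: documents frequently
    viewed together are deemed related; each document links to its
    top_k most co-viewed neighbors (six, as in the study)."""
    co_counts = defaultdict(lambda: defaultdict(int))
    for viewed in sessions:                       # each session: documents viewed together
        for a, b in combinations(sorted(set(viewed)), 2):
            co_counts[a][b] += 1
            co_counts[b][a] += 1
    return {doc: sorted(nbrs, key=nbrs.get, reverse=True)[:top_k]
            for doc, nbrs in co_counts.items()}

# Hypothetical sessions over a radiology hypertext collection
sessions = [{"chest-ct", "lung-nodule", "pneumonia"},
            {"chest-ct", "lung-nodule"},
            {"pneumonia", "lung-nodule", "tb"}]
print(related_documents(sessions)["lung-nodule"])   # most co-viewed documents first
```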
Predicting protein functions from redundancies in large-scale protein interaction networks
NASA Technical Reports Server (NTRS)
Samanta, Manoj Pratim; Liang, Shoudan
2003-01-01
Interpreting data from large-scale protein interaction experiments has been a challenging task because of the widespread presence of random false positives. Here, we present a network-based statistical algorithm that overcomes this difficulty and allows us to derive functions of unannotated proteins from large-scale interaction data. Our algorithm uses the insight that if two proteins share a significantly larger number of common interaction partners than expected at random, they have close functional associations. Analysis of publicly available data from Saccharomyces cerevisiae reveals >2,800 reliable functional associations, 29% of which involve at least one unannotated protein. By further analyzing these associations, we derive tentative functions for 81 unannotated proteins with high certainty. Our method is not overly sensitive to the false positives present in the data. Even after adding 50% randomly generated interactions to the measured data set, we are able to recover almost all (approximately 89%) of the original associations.
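The significance of a shared-partner count can be scored with a hypergeometric tail probability, which is one standard way to formalize "larger than expected at random"; the sketch below assumes this formulation and uses illustrative numbers.

```python
from scipy.stats import hypergeom

def shared_partner_pvalue(deg_a, deg_b, shared, n_total):
    """P-value that two proteins with deg_a and deg_b interaction partners
    share at least `shared` partners by chance, out of n_total proteins
    (a simplified stand-in for the paper's network statistic)."""
    # survival function at shared-1 gives P(X >= shared)
    return hypergeom.sf(shared - 1, n_total, deg_a, deg_b)

# Two proteins with 30 and 40 partners sharing 12 of ~6000 yeast proteins
print(shared_partner_pvalue(30, 40, 12, 6000))   # tiny p-value suggests functional association
```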
A component-based software environment for visualizing large macromolecular assemblies.
Sanner, Michel F
2005-03-01
The interactive visualization of large biological assemblies poses a number of challenging problems, including the development of multiresolution representations and new interaction methods for navigating and analyzing these complex systems. An additional challenge is the development of flexible software environments that will facilitate the integration and interoperation of computational models and techniques from a wide variety of scientific disciplines. In this paper, we present a component-based software development strategy centered on the high-level, object-oriented, interpretive programming language: Python. We present several software components, discuss their integration, and describe some of their features that are relevant to the visualization of large molecular assemblies. Several examples are given to illustrate the interoperation of these software components and the integration of structural data from a variety of experimental sources. These examples illustrate how combining visual programming with component-based software development facilitates the rapid prototyping of novel visualization tools.
NASA Astrophysics Data System (ADS)
Valtonen, Katariina; Leppänen, Mauri
Governments worldwide are concerned with the efficient production of services to customers. To improve the quality of services and to make service production more efficient, information and communication technology (ICT) is widely exploited in public administration (PA). Succeeding in this exploitation calls for large-scale planning which embraces issues from the strategic to the technological level. In this planning the notion of enterprise architecture (EA) is commonly applied. One of the sub-architectures of EA is business architecture (BA). BA planning is challenging in PA due to the large number of stakeholders, the wide set of customers, and the solid and hierarchical structures of organizations. To support EA planning in Finland, a project to engineer a government EA (GEA) method was launched. In this chapter, we analyze the discussions and outputs of the project workshops and relate the issues that emerged to the current e-government literature. We bring forth insights into and suggestions for government BA and its development.
Carbon Dioxide Physiological Forcing Dominates Projected Eastern Amazonian Drying
NASA Astrophysics Data System (ADS)
Richardson, T. B.; Forster, P. M.; Andrews, T.; Boucher, O.; Faluvegi, G.; Fläschner, D.; Kasoar, M.; Kirkevåg, A.; Lamarque, J.-F.; Myhre, G.; Olivié, D.; Samset, B. H.; Shawki, D.; Shindell, D.; Takemura, T.; Voulgarakis, A.
2018-03-01
Future projections of east Amazonian precipitation indicate drying, but they are uncertain and poorly understood. In this study we analyze the Amazonian precipitation response to individual atmospheric forcings using a number of global climate models. Black carbon is found to drive reduced precipitation over the Amazon due to temperature-driven circulation changes, but the magnitude is uncertain. CO2 drives reductions in precipitation concentrated in the east, mainly due to a fast response that is robustly negative but highly variable in magnitude. We find that the physiological effect of CO2 on plant stomata is the dominant driver of the fast response, due to reduced latent heating, and that it also contributes to the large model spread. Using a simple model, we show that CO2 physiological effects dominate future multimodel mean precipitation projections over the Amazon. However, in individual models temperature-driven changes can be large, but due to little agreement, they largely cancel out in the model mean.
Heat Budget of Large Rivers: Sensitivity to Stream Morphology
NASA Astrophysics Data System (ADS)
Lancaster, S. T.; Haggerty, R.
2014-12-01
In order to assess the feasibility of effecting measurable changes in the heat budget of a large river through restoration, we use a numerical model to analyze the sensitivity of that heat budget to morphological manipulations, specifically those resulting in a narrower main channel with more alcoves. We base model parameters primarily on the gravel-bedded middle Snake River near Marsing, Idaho. The heat budget is represented by an advection-dispersion-reaction equation with, in addition to radiative, evaporative, and sensible heat fluxes, a hyporheic flux term that models lateral flow from the main stream, through bars, and into alcoves and side channels. This term effectively introduces linear dispersion of water temperatures with respect to time, so that the magnitude of the hyporheic term in the heat budget is expected to scale with the "hyporheic number," defined in terms of a dimensionless hyporheic flow rate and a dimensionless mean residence time of water entering the hyporheic zone. Simulations varying the parameters for channel width and hyporheic flow indicate that, for a large river such as the middle Snake River, feasible changes in channel width would produce downstream changes in heat flux an order of magnitude larger than would relatively extreme changes in hyporheic number. Changes, such as reduced channel width and increased hyporheic number, that tend to reduce temperatures in the summer, when temperatures are increasing with time and downstream distance, actually tend to increase temperatures in the fall, when temperatures are decreasing with time and distance.
Small-scale dynamo at low magnetic Prandtl numbers.
Schober, Jennifer; Schleicher, Dominik; Bovino, Stefano; Klessen, Ralf S
2012-12-01
The present-day Universe is highly magnetized, even though the first magnetic seed fields were most probably extremely weak. To explain the growth of the magnetic field strength over many orders of magnitude, fast amplification processes need to operate. The most efficient mechanism known today is the small-scale dynamo, which converts turbulent kinetic energy into magnetic energy leading to an exponential growth of the magnetic field. The efficiency of the dynamo depends on the type of turbulence indicated by the slope of the turbulence spectrum v(ℓ)∝ℓ^{ϑ}, where v(ℓ) is the eddy velocity at a scale ℓ. We explore turbulent spectra ranging from incompressible Kolmogorov turbulence with ϑ=1/3 to highly compressible Burgers turbulence with ϑ=1/2. In this work, we analyze the properties of the small-scale dynamo for low magnetic Prandtl numbers Pm, which denotes the ratio of the magnetic Reynolds number, Rm, to the hydrodynamical one, Re. We solve the Kazantsev equation, which describes the evolution of the small-scale magnetic field, using the WKB approximation. In the limit of low magnetic Prandtl numbers, the growth rate is proportional to Rm^{(1-ϑ)/(1+ϑ)}. We furthermore discuss the critical magnetic Reynolds number Rm_{crit}, which is required for small-scale dynamo action. The value of Rm_{crit} is roughly 100 for Kolmogorov turbulence and 2700 for Burgers. Furthermore, we discuss that Rm_{crit} provides a stronger constraint in the limit of low Pm than it does for large Pm. We conclude that the small-scale dynamo can operate in the regime of low magnetic Prandtl numbers if the magnetic Reynolds number is large enough. Thus, the magnetic field amplification on small scales can take place in a broad range of physical environments and amplify weak magnetic seed fields on short time scales.
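The scaling quoted in the abstract is easy to evaluate; the snippet below computes the growth-rate exponent (1-ϑ)/(1+ϑ) and pairs it with the corresponding critical magnetic Reynolds numbers for the two limiting turbulence types.

```python
def dynamo_growth_exponent(theta):
    """Exponent a in growth rate ∝ Rm^a at low magnetic Prandtl number,
    as given in the abstract: a = (1 - θ) / (1 + θ)."""
    return (1.0 - theta) / (1.0 + theta)

for name, theta, rm_crit in [("Kolmogorov", 1.0 / 3.0, 100.0),
                             ("Burgers", 0.5, 2700.0)]:
    a = dynamo_growth_exponent(theta)
    print(f"{name}: growth rate ∝ Rm^{a:.2f}, dynamo action needs Rm > {rm_crit:g}")
```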
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dul, F.A.; Arczewski, K.
1994-03-01
Although it has been stated that "an attempt to solve (very large problems) by subspace iterations seems futile," we will show that the statement is not true, especially for extremely large eigenproblems. In this paper a new two-phase subspace iteration/Rayleigh quotient/conjugate gradient method for generalized, large, symmetric eigenproblems Ax = λBx is presented. It has the ability of solving extremely large eigenproblems, N = 216,000, for example, and finding a large number of leftmost or rightmost eigenpairs, up to 1000 or more. Multiple eigenpairs, even those with multiplicity 100, can be easily found. The use of the proposed method for solving big full eigenproblems (N ≈ 10^3), as well as for large weakly non-symmetric eigenproblems, has also been considered. The proposed method is fully iterative; thus the factorization of matrices is avoided. The key idea consists in joining two methods: subspace and Rayleigh quotient iterations. The systems of indefinite and almost singular linear equations (A - σB)x = By are solved iteratively; the conjugate gradient method can be used without danger of breaking down due to its property that may be called "self-correction towards the eigenvector," discovered recently by us. The use of various preconditioners (SSOR and IC) has also been considered. The main features of the proposed method have been analyzed in detail. Comparisons with other methods, such as accelerated subspace iteration, Lanczos, Davidson, TLIME, TRACMN, and SRQMCG, are presented. The results of numerical tests for various physical problems (acoustics, vibrations of structures, quantum chemistry) are presented as well. 40 refs., 12 figs., 2 tabs.
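As a rough illustration of the factorization-free idea, the sketch below runs Rayleigh quotient iteration for Ax = λBx with the inner, nearly singular shifted systems solved iteratively; MINRES is substituted for plain CG here because it is the stock SciPy solver for symmetric indefinite systems (the paper argues CG itself survives thanks to the "self-correction" property). Sizes and iteration counts are illustrative.

```python
import numpy as np
from scipy.sparse import diags, eye
from scipy.sparse.linalg import minres

def rayleigh_quotient_iteration(A, B, x0, iters=15):
    """Factorization-free Rayleigh quotient iteration for Ax = lambda*Bx:
    each step solves the indefinite system (A - sigma*B) y = B x iteratively."""
    x = x0 / np.linalg.norm(x0)
    sigma = x @ (A @ x) / (x @ (B @ x))          # Rayleigh quotient
    for _ in range(iters):
        y, _ = minres(A - sigma * B, B @ x, maxiter=500)
        x = y / np.linalg.norm(y)
        sigma = x @ (A @ x) / (x @ (B @ x))
    return sigma, x

# Tiny diagonal test problem (B = I) with eigenvalues 1..n
n = 2000
A = diags(np.arange(1.0, n + 1.0))
B = eye(n)
sigma, _ = rayleigh_quotient_iteration(A, B, np.random.default_rng(0).random(n))
print(sigma)    # an eigenvalue of (A, B), found without factorizing A - sigma*B
```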
Citation patterns of online and print journals in the digital age
De Groote, Sandra L.
2008-01-01
Purpose: The research assesses the impact of online journals on citation patterns by examining whether researchers were more likely to limit the resources they cited to those journals available online rather than those only in print. Setting: Publications from a large urban university with a medical college at an urban location and at a smaller regional location were examined. The number of online journals available to authors on either campus was the same. The number of print journals available on the large campus was much greater than the print journals available at the small campus. Methodology: Searches by author affiliation from 1996 to 2005 were performed in the Web of Science to find all articles written by affiliated members in the college of medicine at the selected institution. Cited references from randomly selected articles were recorded, and the cited journals were coded into five categories based on their availability at the study institution: print only, print and online, online only, not owned, and dropped. Results were analyzed using SPSS. The age of articles cited for selected years as well as for 2006 and 2007 was also examined. Results: The number of journals cited each year continued to increase. On the large urban campus, researchers were not more likely to cite journals available online or less likely to cite journals only in print. At the regional location, at which the number of print-only journals was minimal, use of print-only journals significantly decreased. Conclusion/discussion: The citation of print-only journals by researchers with access to a library with a large print and electronic collection appeared to continue, despite the availability of potential alternatives in the online collection. Journals available in electronic format were cited more frequently in publications from the campus whose library had a small print collection, and the citation of journals available in both print and electronic formats generally increased over the years studied. PMID:18974814
Can decision making in general surgery be based on evidence? An empirical study of Cochrane Reviews.
Diener, Markus K; Wolff, Robert F; von Elm, Erik; Rahbari, Nuh N; Mavergames, Chris; Knaebel, Hanns-Peter; Seiler, Christoph M; Antes, Gerd
2009-09-01
This empirical study analyzes the current status of Cochrane Reviews (CRs) and their strength of recommendation for evidence-based decision making in the field of general surgery. A systematic literature search of the Cochrane Database of Systematic Reviews and the Cochrane Collaboration's homepage was performed to identify available CRs on surgical topics. Quantitative and qualitative characteristics, utilization, and formulated treatment recommendations were evaluated by 2 independent reviewers. The association of review characteristics with treatment recommendation was analyzed using univariate and multivariate logistic regression models. Ninety-three CRs, including 1,403 primary studies and 246,473 patients, were identified. The mean number of included primary studies per CR was 15.1 (standard deviation [SD] 14.5), including 2,650 (SD 3,340) study patients. On average, 2.5 (SD 8.3) nonrandomized trials were included per analyzed CR. Seventy-two (77%) CRs were published or updated in 2005 or later. Explicit treatment recommendations were given in 45 (48%). The presence of a treatment recommendation was associated with the number of included primary studies and the proportion of randomized studies. Utilization of surgical CRs remained low and showed large inter-country differences. Surgical CRs were accessed most often in the UK, USA, and Australia, followed by several Western and Eastern European countries. Only a minority of available CRs address surgical questions, and their current usage is low. Instead of unsystematically increasing the number of surgical CRs, it would be far more efficient to focus the review process on relevant surgical questions. Prioritization of CRs needs valid methods, which should be developed by the scientific surgical community.
Effects of Turbulence on Settling Velocities of Synthetic and Natural Particles
NASA Astrophysics Data System (ADS)
Jacobs, C.; Jendrassak, M.; Gurka, R.; Hackett, E. E.
2014-12-01
For large-scale sediment transport predictions, an important parameter is the settling or terminal velocity of particles because it plays a key role in determining the concentration of sediment particles within the water column as well as the deposition rate of particles onto the seabed. The settling velocity of particles is influenced by the fluid dynamic environment as well as attributes of the particle, such as its size, shape, and density. This laboratory study examines the effects of turbulence, generated by an oscillating grid, on both synthetic and natural particles for a range of flow conditions. Because synthetic particles are spherical, they serve as a reference for the natural particles that are irregular in shape. Particle image velocimetry (PIV) and high-speed imaging systems were used simultaneously to study the interaction between the fluid mechanics and sediment particles' dynamics in a tank. The particles' dynamics were analyzed using a custom two-dimensional tracking algorithm used to obtain distributions of the particle's velocity and acceleration. Turbulence properties, such as root-mean-square turbulent velocity and vorticity, were calculated from the PIV data. Results are classified by Stokes number, which was based on the integral scale deduced from the auto-correlation function of velocity. We find particles with large Stokes numbers are unaffected by the turbulence, while particles with small Stokes numbers primarily show an increase in settling velocity in comparison to stagnant flow. The results also show an inverse relationship between Stokes number and standard deviation of the settling velocity. This research enables a better understanding of the interdependence between particles and turbulent flow, which can be used to improve parameterizations in large-scale sediment transport models.
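A sketch of the Stokes-number classification described above, with the flow time scale taken as the integral of the velocity autocorrelation up to its first zero crossing; the particle response time and the synthetic correlated signal are assumptions.

```python
import numpy as np

def integral_time_scale(u, dt):
    """Integral time scale from the autocorrelation of a velocity signal:
    integrate the normalized autocorrelation up to its first zero crossing."""
    u = u - u.mean()
    acf = np.correlate(u, u, mode="full")[u.size - 1:]
    acf = acf / acf[0]
    zeros = np.where(acf <= 0.0)[0]
    first_zero = zeros[0] if zeros.size else acf.size
    return acf[:first_zero].sum() * dt

def stokes_number(tau_p, u, dt):
    """St = particle response time / integral time scale of the flow."""
    return tau_p / integral_time_scale(u, dt)

# Correlated noise stands in for a measured turbulent velocity record
rng = np.random.default_rng(2)
u = np.convolve(rng.normal(size=5000), np.ones(50) / 50, mode="same")
print(stokes_number(tau_p=0.01, u=u, dt=1e-3))   # small St: settling altered by turbulence
```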
Far from thunderstorm UV transient events in the atmosphere measured by Vernov satellite
NASA Astrophysics Data System (ADS)
Morozenko, Violetta; Klimov, Pavel; Khrenov, Boris; Gali, Garipov; Margarita, Kaznacheeva; Mikhail, Panasyuk; Sergei, Svertilov; Robert, Holzworth
2016-04-01
A steady, self-contained classification of events such as sprites, elves, and blue jets has emerged over the period of transient luminous event (TLE) observations. According to theories of TLE origin, a thunderstorm region producing lightning with large peak currents is necessary. However, some events far from thunderstorm regions were also detected, pointing to other TLE generation mechanisms. To investigate the nature of TLEs, the Universitetsky-Tatiana-2 and Vernov satellites were equipped with ultraviolet (240-400 nm) and red-infrared (>610 nm) detectors. Both detectors were triggered by flashes in the UV band, where lightning emission is quite faint, independently of lightning detection. The lowered threshold on the Vernov satellite allowed a large number of TLEs to be selected, including numerous examples of far-from-thunderstorm events: such events were not conjugated with lightning activity measured by the World Wide Lightning Location Network (WWLLN) over a large area of approximately 10^7 km^2 for 30 minutes before and after the time of registration. The characteristic features of this type of event are the absence of a significant signal in the red-infrared detector channel and a relatively small number of photons (less than 5 × 10^21). A large number of flashes without lightning were detected at high latitudes over the ocean (30°S - 60°S). Lightning activity at the magnetic conjugate point was also analyzed. A relationship between far-from-thunderstorm events and specific lightning discharges was not confirmed. Far-from-thunderstorm events are a new type of transient phenomenon in the upper atmosphere that is not associated with thunderstorm activity. The mechanism of such discharges is not clear, although a substantial amount of experimental evidence for the existence of such flashes has accumulated. Based on Vernov satellite data, the temporal profile, duration, geographic location, and number of photons generated in far-from-thunderstorm atmospheric events have been analyzed, and the discussion of the origin of these events is in progress.
Kyriakopoulos, Charalampos; Grossmann, Gerrit; Wolf, Verena; Bortolussi, Luca
2018-01-01
Contact processes form a large and highly interesting class of dynamic processes on networks, including epidemic and information-spreading networks. While devising stochastic models of such processes is relatively easy, analyzing them is very challenging from a computational point of view, particularly for large networks appearing in real applications. One strategy to reduce the complexity of their analysis is to rely on approximations, often in terms of a set of differential equations capturing the evolution of a random node, distinguishing nodes with different topological contexts (i.e., different degrees of different neighborhoods), such as degree-based mean-field (DBMF), approximate-master-equation (AME), or pair-approximation (PA) approaches. The number of differential equations so obtained is typically proportional to the maximum degree k_{max} of the network, which is much smaller than the size of the master equation of the underlying stochastic model, yet numerically solving these equations can still be problematic for large k_{max}. In this paper, we consider AME and PA, extended to cope with multiple local states, and we provide an aggregation procedure that clusters together nodes having similar degrees, treating those in the same cluster as indistinguishable, thus reducing the number of equations while preserving an accurate description of global observables of interest. We also provide an automatic way to build such equations and to identify a small number of degree clusters that give accurate results. The method is tested on several case studies, where it shows a high level of compression and a reduction of computational time of several orders of magnitude for large networks, with minimal loss in accuracy.
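To make concrete how one equation per degree class arises, the sketch below implements the simplest member of this family, the degree-based mean-field (DBMF) approximation for SIS dynamics; the rates and truncated power-law degree distribution are assumptions, and AME or PA would replace the single scalar per class with a richer local state.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative truncated power-law degree distribution up to k_max
k_max = 100
k = np.arange(1.0, k_max + 1.0)
P = k**-2.5 / np.sum(k**-2.5)
beta, mu = 0.3, 1.0                   # assumed infection and recovery rates

def dbmf_sis(t, rho):
    """One ODE per degree class: drho_k/dt = -mu*rho_k + beta*k*(1-rho_k)*theta,
    where theta is the probability a random edge points to an infected node."""
    theta = np.sum(k * P * rho) / np.sum(k * P)
    return -mu * rho + beta * k * (1.0 - rho) * theta

sol = solve_ivp(dbmf_sis, (0.0, 50.0), 0.01 * np.ones(k_max))
print(np.sum(P * sol.y[:, -1]))       # endemic prevalence from only k_max equations
```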
Enhancement of chest radiographs using eigenimage processing
NASA Astrophysics Data System (ADS)
Bones, Philip J.; Butler, Anthony P. H.; Hurrell, Michael
2006-08-01
Frontal chest radiographs ("chest X-rays") are routinely used by medical personnel to assess patients for a wide range of suspected disorders. Often large numbers of images need to be analyzed. Furthermore, at times the images need to be analyzed ("reported") when no radiological expert is available. A system which enhances the images in such a way that abnormalities are more obvious is likely to reduce the chance that an abnormality goes unnoticed. The authors previously reported the use of principal components analysis to derive a basis set of eigenimages from a training set made up of images from normal subjects. The work is here extended to investigate how best to emphasize the abnormalities in chest radiographs. Results are also reported for various forms of image normalizing transformations used in performing the eigenimage processing.
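A minimal sketch of eigenimage processing in this spirit: build a PCA basis from "normal" images and inspect the residual of a new image, where structure the normal basis cannot reconstruct stands out. The data and component count are synthetic placeholders.

```python
import numpy as np

def eigenimage_enhance(training, image, n_components=20):
    """Project an image onto eigenimages of a normal training set and
    return the residual; large residuals flag regions the normal basis
    cannot reconstruct (a simplified version of the approach described)."""
    X = training.reshape(training.shape[0], -1).astype(float)
    mean = X.mean(axis=0)
    # Rows are images; the right singular vectors are the eigenimages
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:n_components]
    centered = image.ravel().astype(float) - mean
    reconstruction = basis.T @ (basis @ centered)
    return (centered - reconstruction).reshape(image.shape)

rng = np.random.default_rng(3)
normals = rng.normal(size=(50, 64, 64))          # stand-in for normal radiographs
abnormal = rng.normal(size=(64, 64))
abnormal[20:30, 20:30] += 5.0                    # synthetic "abnormality"
residual = eigenimage_enhance(normals, abnormal)
idx = np.unravel_index(np.abs(residual).argmax(), residual.shape)
print(idx)                                       # typically falls inside the synthetic lesion
```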
A Process Management System for Networked Manufacturing
NASA Astrophysics Data System (ADS)
Liu, Tingting; Wang, Huifen; Liu, Linyan
With the development of computers, communications and networks, networked manufacturing has become one of the main manufacturing paradigms of the 21st century. In a networked manufacturing environment, there exist a large number of cooperative tasks susceptible to alteration, conflicts caused by resources, and problems of cost and quality. This increases the complexity of administration. Process management is a technology used to design, enact, control, and analyze networked manufacturing processes. It supports efficient execution, effective management, conflict resolution, cost containment and quality control. In this paper we propose an integrated process management system for networked manufacturing. Requirements of process management are analyzed and the architecture of the system is presented, and a process model considering process cost and quality is developed. Finally a case study is provided to explain how the system runs efficiently.
Programming an offline-analyzer of motor imagery signals via python language.
Alonso-Valerdi, Luz María; Sepulveda, Francisco
2011-01-01
Brain Computer Interface (BCI) systems control the user's environment via his/her brain signals. Brain signals related to motor imagery (MI) have become a widespread method employed by the BCI community. Despite the large number of references describing MI signal treatment, there is not enough information on the available programming languages that would be suitable for developing a special-purpose MI-based BCI. The present paper describes the development of an offline-analysis system based on MI-EEG signals via open-source programming languages, and the assessment of the system using electrical activity recorded from three subjects. The analyzer recognized at least 63% of the MI signals corresponding to three classes. The results of the offline analysis showed a promising performance considering that the subjects had never undergone MI training.
Jankowski, Stéphane; Currie-Fraser, Erica; Xu, Licen; Coffa, Jordy
2008-01-01
Annotated DNA samples that had been previously analyzed were tested using multiplex ligation-dependent probe amplification (MLPA) assays containing probes targeting BRCA1, BRCA2, and MMR (MLH1/MSH2 genes) and the 9p21 chromosomal region. MLPA polymerase chain reaction products were separated on a capillary electrophoresis platform, and the data were analyzed using GeneMapper v4.0 software (Applied Biosystems, Foster City, CA). After signal normalization, loci regions that had undergone deletions or duplications were identified using the GeneMapper Report Manager and verified using the DyeScale functionality. The results highlight an easy-to-use, optimal sample preparation and analysis workflow that can be used for both small- and large-scale studies. PMID:19137113
MDTRA: a molecular dynamics trajectory analyzer with a graphical user interface.
Popov, Alexander V; Vorobjev, Yury N; Zharkov, Dmitry O
2013-02-05
Most existing software for the analysis of molecular dynamics (MD) simulation results is based on command-line, script-driven processes that require researchers to be familiar with the programming language constructs used, and is often tied to a single product. Here, we describe an open-source cross-platform program, MD Trajectory Reader and Analyzer (MDTRA), that performs a large number of MD analysis tasks assisted by a graphical user interface. The program has been developed to facilitate the process of searching for and visualizing results. MDTRA can handle trajectories as sets of Protein Data Bank files and presents tools and guidelines to convert some other trajectory formats into such sets. The parameters analyzed by MDTRA include interatomic distances, angles, dihedral angles, angles between planes, one-dimensional and two-dimensional root-mean-square deviation, solvent-accessible area, and so on. As an example of using the program, we describe the application of MDTRA to analyze the MD of formamidopyrimidine-DNA glycosylase, a DNA repair enzyme from Escherichia coli.
Performance Evaluation of the Sysmex CS-5100 Automated Coagulation Analyzer.
Chen, Liming; Chen, Yu
2015-01-01
Coagulation testing is widely applied clinically, and laboratories increasingly demand automated coagulation analyzers with short turn-around times and high throughput. The purpose of this study was to evaluate the performance of the Sysmex CS-5100 automated coagulation analyzer for routine use in a clinical laboratory. The prothrombin time (PT), international normalized ratio (INR), activated partial thromboplastin time (APTT), fibrinogen (Fbg), and D-dimer were compared between the Sysmex CS-5100 and Sysmex CA-7000 analyzers, and the imprecision, method comparison, throughput, STAT function, and performance for abnormal samples were assessed for each. The within-run and between-run coefficients of variation (CV) for the PT, APTT, INR, and D-dimer analyses showed excellent results in both the normal and pathologic ranges. Results from the Sysmex CS-5100 and Sysmex CA-7000 were highly correlated. The throughput of the Sysmex CS-5100 was faster than that of the Sysmex CA-7000. There was no interference from total bilirubin or triglyceride concentrations in the Sysmex CS-5100 analyzer. We demonstrated that the Sysmex CS-5100 performs with satisfactory imprecision and is well suited for coagulation analysis in laboratories processing large sample numbers and icteric and lipemic samples.
A Hierarchical Framework for State-Space Matrix Inference and Clustering.
Zuo, Chandler; Chen, Kailei; Hewitt, Kyle J; Bresnick, Emery H; Keleş, Sündüz
2016-09-01
In recent years, a large number of genomic and epigenomic studies have been focusing on the integrative analysis of multiple experimental datasets measured over a large number of observational units. The objectives of such studies include not only inferring a hidden state of activity for each unit over individual experiments, but also detecting highly associated clusters of units based on their inferred states. Although there are a number of methods tailored for specific datasets, there is currently no state-of-the-art modeling framework for this general class of problems. In this paper, we develop the MBASIC (Matrix-Based Analysis for State-space Inference and Clustering) framework. MBASIC consists of two parts: state-space mapping and state-space clustering. In state-space mapping, it maps observations onto a finite state-space, representing the activation states of units across conditions. In state-space clustering, MBASIC incorporates a finite mixture model to cluster the units based on their inferred state-space profiles across all conditions. Both the state-space mapping and clustering can be simultaneously estimated through an Expectation-Maximization algorithm. MBASIC flexibly adapts to a large number of parametric distributions for the observed data, as well as the heterogeneity in replicate experiments. It allows for imposing structural assumptions on each cluster, and enables model selection using information criteria. In our data-driven simulation studies, MBASIC showed significant accuracy in recovering both the underlying state-space variables and clustering structures. We applied MBASIC to two genome research problems using large numbers of datasets from the ENCODE project. The first application grouped genes based on transcription factor occupancy profiles of their promoter regions in two different cell types. The second application focused on identifying groups of loci that are similar to a GATA2 binding site that is functional at its endogenous locus by utilizing transcription factor occupancy data, and illustrated the applicability of MBASIC to a wide variety of problems. In both studies, MBASIC showed higher levels of raw data fidelity than analyzing these data with a two-step approach using ENCODE results on transcription factor occupancy data.
Relative azimuth inversion by way of damped maximum correlation estimates
Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.
2012-01-01
Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
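The core of the inversion can be sketched in a few lines: rotate the test sensor's horizontal components by an angle θ and maximize the correlation with the reference, here using a bounded scalar optimizer in place of the authors' estimation routine; the single-window setup and noise levels are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def relative_azimuth(ref_n, ref_e, test_n, test_e):
    """Estimate the azimuth offset of a test sensor relative to a reference
    by rotating its horizontal components and maximizing the correlation
    of the north components (a simplified single-window version)."""
    def neg_corr(theta):
        rot_n = np.cos(theta) * test_n + np.sin(theta) * test_e
        return -np.corrcoef(ref_n, rot_n)[0, 1]
    res = minimize_scalar(neg_corr, bounds=(-np.pi, np.pi), method="bounded")
    return np.degrees(res.x)

# Synthetic test: a noisy copy of the reference rotated by 25 degrees
rng = np.random.default_rng(4)
n, e = rng.normal(size=2000), rng.normal(size=2000)
a = np.radians(25.0)
tn = np.cos(a) * n - np.sin(a) * e + 0.1 * rng.normal(size=2000)
te = np.sin(a) * n + np.cos(a) * e + 0.1 * rng.normal(size=2000)
print(relative_azimuth(n, e, tn, te))   # recovers the 25 degree offset
```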
Stochastic simulations on a model of circadian rhythm generation.
Miura, Shigehiro; Shimokawa, Tetsuya; Nomura, Taishin
2008-01-01
Biological phenomena are often modeled by differential equations, where the states of a model system are described by continuous real values. When we consider concentrations of molecules as dynamical variables for a set of biochemical reactions, we implicitly assume that the numbers of molecules are large enough that their changes can be regarded as continuous and described deterministically. However, for a system with small numbers of molecules, changes in their numbers are apparently discrete and molecular noise becomes significant. In such cases, models with deterministic differential equations may be inappropriate, and the reactions must be described by stochastic equations. In this study, we focus on clock gene expression in circadian rhythm generation, which is known to involve small numbers of molecules. It is therefore appropriate for the system to be modeled by stochastic equations and analyzed by the methodologies of stochastic simulation. The interlocked feedback model proposed by Ueda et al. as a set of deterministic ordinary differential equations provides the basis of our analyses. We apply two stochastic simulation methods, namely Gillespie's direct method and the stochastic differential equation method, also by Gillespie, to the interlocked feedback model. To this end, we first reformulated the original differential equations back into elementary chemical reactions. With those reactions, we simulate and analyze the dynamics of the model using the two methods, in order to compare them with the dynamics obtained from the original deterministic model and to characterize how the dynamics depend on the simulation methodologies.
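Gillespie's direct method itself is compact; the sketch below applies it to a one-gene birth-death process as a stand-in for the full interlocked feedback network, with illustrative rate constants.

```python
import numpy as np

def gillespie_birth_death(k_tx=10.0, k_deg=0.5, x0=0, t_end=50.0, seed=0):
    """Gillespie's direct method for the simplest gene-expression motif
    (production at rate k_tx, degradation at rate k_deg * x); rate
    constants are illustrative, not those of the interlocked feedback model."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        rates = np.array([k_tx, k_deg * x])     # propensities of the two reactions
        total = rates.sum()
        t += rng.exponential(1.0 / total)       # exponential waiting time to next reaction
        if rng.uniform() < rates[0] / total:    # choose which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return np.array(times), np.array(counts)

times, counts = gillespie_birth_death()
print(counts.mean())   # fluctuates around k_tx / k_deg = 20 molecules
```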
Communication between primary care and physician specialist: is it improving?
Biagetti, B; Aulinas, A; Dalama, B; Nogués, R; Zafón, C; Mesa, J
2015-01-01
Efforts have recently been made in Spain to improve the communication model between primary care and specialized care. The aim of our study was to analyze the impact of a change in the communication model between the two areas, comparing a traditional system to a consulting system in terms of general practitioner satisfaction and the number of patient referrals. A questionnaire was used to assess the views of 20 general practitioners from one primary care center on their relations with the endocrinology team, at baseline and 18 months after the implementation of the new method of communication. In addition, we counted the number of referrals during the two periods. We analyzed 30 questionnaires: 13 before and 17 after the consulting system was established. The consulting system was preferred to other alternatives as a way of communicating with endocrinologists. After the consulting system was implemented, general practitioners were more confident in treating hypothyroidism and diabetes. There was a decrease in the number of patient referrals to specialized care from 93.8 to 34.6 per month after implementation of the consulting system. The consulting system was more efficient in resolving problems and responding to general practitioners than the traditional system. General practitioners were more confident in the self-management of hypothyroidism and diabetes. A very large decrease in the number of patient referrals was observed after implementation of the consulting system.
Irradiation-induced microchemical changes in highly irradiated 316 stainless steel
NASA Astrophysics Data System (ADS)
Fujii, K.; Fukuya, K.
2016-02-01
Cold-worked 316 stainless steel specimens irradiated to 74 dpa in a pressurized water reactor (PWR) were analyzed by atom probe tomography (APT) to extend knowledge of solute clusters and segregation at higher doses. The analyses confirmed that clusters mainly enriched in Ni-Si or Ni-Si-Mn were formed at high number density. The clusters were divided into three types based on their size and Mn content: small Ni-Si clusters (3-4 nm in diameter), and large Ni-Si and Ni-Si-Mn clusters (8-10 nm in diameter). The total cluster number density was 7.7 × 10^23 m^-3. The fraction of large clusters was almost 1/10 of the total density. The average composition (in at%) for small clusters was: Fe, 54; Cr, 12; Mn, 1; Ni, 22; Si, 11; Mo, 1, and for large clusters it was: Fe, 44; Cr, 9; Mn, 2; Ni, 29; Si, 14; Mo, 1. It was likely that some of the Ni-Si clusters correspond to γ′ phase precipitates while the Ni-Si-Mn clusters were precursors of G phase precipitates. The APT analyses at grain boundaries confirmed enrichment of Ni, Si, P and Cu and depletion of Fe, Cr, Mo and Mn. The segregation behavior was consistent with previous knowledge of radiation-induced segregation.
NASA Astrophysics Data System (ADS)
Zakharenkova, I.; Astafyeva, E.; Cherniak, I.
2016-12-01
We investigate the signatures that large-scale travelling ionospheric disturbances (LSTIDs) leave in ground-based total electron content (TEC) during the 2015 St. Patrick's Day storm. We take advantage of a large number of ground-based GPS/GNSS receivers to analyze simultaneous LSTID propagation in different sectors from very dense and multipoint observations. The region of interest includes both the Northern and Southern American sectors, as well as the whole European sector. We use measurements derived from more than 5000 GPS/GNSS receivers of numerous global and regional GNSS networks. We considerably increase the number of available observations by processing signals not only from GPS but also from GLONASS. We retrieve the perturbation component of the resulting TEC maps, constructed with high spatial and temporal resolution. LSTIDs originating in the auroral oval and propagating equatorward were clearly identified in both hemispheres. In this report we discuss features of the observed LSTIDs, in particular: 1) similarities and differences of their simultaneous propagation over the American and European sectors; 2) interhemispheric LSTID propagation in the American sector; and 3) the dependence of the LSTID characteristic parameters (velocity, wavelength) on the intensification of auroral activity during the main phase of this storm.
Locality of correlation in density functional theory.
Burke, Kieron; Cancio, Antonio; Gould, Tim; Pittalis, Stefano
2016-08-07
The Hohenberg-Kohn density functional was long ago shown to reduce to the Thomas-Fermi (TF) approximation in the non-relativistic semiclassical (or large-Z) limit for all matter, i.e., the kinetic energy becomes local. Exchange also becomes local in this limit. Numerical data on the correlation energy of atoms support the conjecture that this is also true for correlation, but much less relevant to atoms. We illustrate how expansions around a large particle number are equivalent to local density approximations and their strong relevance to density functional approximations. Analyzing highly accurate atomic correlation energies, we show that E_C → -A_C Z ln Z + B_C Z as Z → ∞, where Z is the atomic number, A_C is known, and we estimate B_C to be about 37 mhartree. The local density approximation yields A_C exactly, but a very incorrect value for B_C, showing that the local approximation is less relevant for the correlation alone. This limit is a benchmark for the non-empirical construction of density functional approximations. We conjecture that, beyond atoms, the leading correction to the local density approximation in the large-Z limit generally takes this form, but with B_C a functional of the TF density for the system. The implications for the construction of approximate density functionals are discussed.
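The quoted asymptote is simple to evaluate numerically; in the snippet below B_C is taken from the abstract's estimate, while A_C is left as an input, since only its existence, not its value, is stated here.

```python
import math

def correlation_energy_large_z(Z, A_C, B_C=0.037):
    """Leading large-Z behavior quoted above: E_C ≈ -A_C*Z*ln(Z) + B_C*Z
    (in hartree), with B_C ≈ 37 mhartree; A_C is known in the literature
    but treated as an input here."""
    return -A_C * Z * math.log(Z) + B_C * Z

# Placeholder A_C purely for illustration (not a literature value)
print(correlation_energy_large_z(Z=54, A_C=0.02))
```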
Universal distribution of component frequencies in biological and technological systems
Pang, Tin Yau; Maslov, Sergei
2013-01-01
Bacterial genomes and large-scale computer software projects both consist of a large number of components (genes or software packages) connected via a network of mutual dependencies. Components can be easily added or removed from individual systems, and their use frequencies vary over many orders of magnitude. We study this frequency distribution in genomes of ∼500 bacterial species and in over 2 million Linux computers and find that in both cases it is described by the same scale-free power-law distribution with an additional peak near the tail of the distribution corresponding to nearly universal components. We argue that the existence of a power law distribution of frequencies of components is a general property of any modular system with a multilayered dependency network. We demonstrate that the frequency of a component is positively correlated with its dependency degree given by the total number of upstream components whose operation directly or indirectly depends on the selected component. The observed frequency/dependency degree distributions are reproduced in a simple mathematically tractable model introduced and analyzed in this study. PMID:23530195
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Leung, Lai-Yung R.; Yoon, Jin-Ho
Simulations from the Community Earth System Model Large Ensemble project are analyzed to investigate the impact of global warming on atmospheric rivers (ARs). The model has notable biases in simulating the subtropical jet position and the relationship between extreme precipitation and moisture transport. After accounting for these biases, the model projects an ensemble mean increase of 35% in the number of landfalling AR days between the last twenty years of the 20th and 21st centuries. However, the number of AR associated extreme precipitation days increases only by 28% because the moisture transport required to produce extreme precipitation also increases withmore » warming. Internal variability introduces an uncertainty of ±8% and ±7% in the projected changes in AR days and associated extreme precipitation days. In contrast, accountings for model biases only change the projected changes by about 1%. The significantly larger mean changes compared to internal variability and to the effects of model biases highlight the robustness of AR responses to global warming.« less
Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers
NASA Astrophysics Data System (ADS)
Kawai, Soshi; Larsson, Johan
2013-01-01
A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.
A numerical algorithm with preference statements to evaluate the performance of scientists.
Ricker, Martin
Academic evaluation committees have been increasingly receptive to using the number of published indexed articles, as well as citations, to evaluate the performance of scientists. It is, however, impossible to develop a stand-alone, objective numerical algorithm for the evaluation of academic activities, because any evaluation necessarily includes subjective preference statements. In a market, prices represent preference statements, but scientists work largely in a non-market context. I propose a numerical algorithm that serves to determine the distribution of reward money in Mexico's evaluation system, using relative prices of scientific goods and services as input. The relative prices would be determined by an evaluation committee. In this way, large evaluation systems (like Mexico's Sistema Nacional de Investigadores) could work semi-automatically, but not arbitrarily or superficially, to determine quantitatively the academic performance of scientists every few years. Data for 73 scientists from the Biology Institute of Mexico's National University are analyzed, and it is shown that the reward assignation and academic priorities depend heavily on those preferences. A maximum number of products or activities to be evaluated is recommended, to encourage quality over quantity.
NASA Astrophysics Data System (ADS)
Du, T. Z.; Liu, C.-H.; Zhao, Y. B.
2014-10-01
In this study, the dispersion of chemically reactive pollutants is calculated by large-eddy simulation (LES) in a neutrally stratified urban canopy layer (UCL) over urban areas. As a pilot attempt, idealized street canyons of unity building-height-to-street-width (aspect) ratio are used. Nitric oxide (NO) is emitted from the ground surface of the first street canyon into a domain doped with ozone (O3). In the absence of ultraviolet radiation, this irreversible chemistry produces nitrogen dioxide (NO2), developing a reactive plume over the rough urban surface. A range of timescales of turbulence and chemistry is used to examine the mechanism of turbulent mixing and chemical reactions in the UCL. The Damköhler number (Da) and the reaction rate (r) are analyzed along the vertical direction on the plane normal to the prevailing flow, 10 m downstream of the source. The maximum reaction rate peaks at an elevation where Da is equal or close to unity. Hence, comparable timescales of turbulence and reaction could enhance the chemical reactions in the plume.
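The central diagnostic here is compact enough to sketch. The snippet below computes a Damköhler number as the ratio of a turbulence timescale to the NO-against-O3 chemical timescale; the rate constant and sample values are assumptions chosen for illustration, not quantities from the study.

```python
# Assumed values, for illustration only.
k_NO_O3 = 1.9e-14   # NO + O3 rate constant, cm^3 molecule^-1 s^-1 (assumed)
n_O3 = 1.0e12       # background ozone number density, molecule cm^-3 (assumed)

t_chem = 1.0 / (k_NO_O3 * n_O3)   # chemical timescale of NO against O3, s
t_turb = 25.0                     # eddy turnover timescale, s (assumed)

Da = t_turb / t_chem              # Damkohler number; r peaks where Da ~ 1
print(f"t_chem = {t_chem:.1f} s, Da = {Da:.2f}")
```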
DSPCP: A Data Scalable Approach for Identifying Relationships in Parallel Coordinates.
Nguyen, Hoa; Rosen, Paul
2018-03-01
Parallel coordinates plots (PCPs) are a well-studied technique for exploring multi-attribute datasets. In many situations, users find them a flexible method to analyze and interact with data. Unfortunately, using PCPs becomes challenging as the number of data items grows large or multiple trends within the data mix in the visualization. The resulting overdraw can obscure important features. A number of modifications to PCPs have been proposed, including using color, opacity, smooth curves, frequency, density, and animation to mitigate this problem. However, these modified PCPs tend to have their own limitations in the kinds of relationships they emphasize. We propose a new data scalable design for representing and exploring data relationships in PCPs. The approach exploits the point/line duality property of PCPs and a local linear assumption of data to extract and represent relationship summarizations. This approach simultaneously shows relationships in the data and the consistency of those relationships. Our approach supports various visualization tasks, including mixed linear and nonlinear pattern identification, noise detection, and outlier detection, all in large data. We demonstrate these tasks on multiple synthetic and real-world datasets.
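The point/line duality the abstract refers to can be checked in a few lines. In this sketch (an illustration, not the paper's code), a linear trend x2 = m*x1 + b drawn between two parallel axes a distance d apart sends every polyline segment through a single dual point, which is what makes linear relationships detectable as concentrated spots.

```python
import numpy as np

def dual_point(m: float, b: float, d: float = 1.0):
    """Dual point of the line x2 = m*x1 + b between axes spaced d apart."""
    if np.isclose(m, 1.0):
        raise ValueError("m = 1 maps to a point at infinity")
    return d / (1.0 - m), b / (1.0 - m)

# Segments of a sampled linear relationship all cross the dual point.
m, b, d = -0.5, 2.0, 1.0
x1 = np.linspace(0.0, 4.0, 9)
x2 = m * x1 + b
u, v = dual_point(m, b, d)
heights_at_u = x1 + (x2 - x1) * (u / d)   # each segment's height at x = u
assert np.allclose(heights_at_u, v)       # every segment passes through (u, v)
print(f"dual point: ({u:.3f}, {v:.3f})")
```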
Turbulence characteristics of velocity and scalars in an internal boundary-layer above a lake
NASA Astrophysics Data System (ADS)
Sahlee, E.; Rutgersson, A.; Podgrajsek, E.
2012-12-01
We analyze turbulence measurements, including methane, from a small island in a Swedish lake. The turbulence structure was found to be highly influenced by the surrounding land during daytime. Variance spectra of both horizontal velocity and scalars, during both unstable and stable stratification, displayed a low-frequency peak. The energy at lower frequencies displayed a daily variation, increasing in the morning and decreasing in the afternoon. We interpret this behavior as a sign of spectral lag, where the low-frequency energy, i.e., large eddies, originates from the convective boundary layer above the surrounding land. When the air is advected over the lake, the small eddies rapidly equilibrate with the new surface forcing. However, the larger eddies remain for an appreciable distance and influence the turbulence in the developing lake boundary layer. The variance of the horizontal velocity is increased by these large eddies; however, momentum fluxes and scalar variances and fluxes appear unaffected. The drag coefficient, Stanton number, and Dalton number, used to parameterize the momentum flux, heat flux, and latent heat flux, respectively, all compare very well with parameterizations developed for open-ocean conditions.
NASA Astrophysics Data System (ADS)
Ooi, Seng-Keat
2005-11-01
Lock-exchange gravity current flows produced by the instantaneous release of a heavy fluid are investigated using well-resolved 3-D large-eddy simulations (LES) at Grashof numbers up to 8 × 10⁹. It is found that the 3-D simulations correctly predict a constant front velocity over the initial slumping phase and a front speed decrease proportional to t^(-1/3) (the time t is measured from the release) over the inviscid phase, in agreement with theory. The evolution of the current in the simulations is found to be similar to that observed experimentally by Hacker et al. (1996). The effect of the dynamic LES model on the solutions is discussed. The energy budget of the current is discussed, and the contribution of the turbulent dissipation to the total dissipation is analyzed. The limitations of less expensive 2-D simulations are discussed, in particular their failure to correctly predict the spatio-temporal distribution of the bed shear stresses, which is important in determining the amount of sediment the gravity current can entrain when it advances over a loose bed.
Incremental terrain processing for large digital elevation models
NASA Astrophysics Data System (ADS)
Ye, Z.
2012-12-01
Efficient analyses of large digital elevation models (DEM) require generation of additional DEM artifacts such as flow direction, flow accumulation, and other DEM derivatives. When the DEM to analyze has a large number of grid cells (usually > 1,000,000,000), the generation of these derivatives is either impractical (it takes too long) or impossible (software is incapable of processing such a large number of cells). Different strategies and algorithms can be put in place to alleviate this situation. This paper describes an approach in which the overall DEM is partitioned into smaller processing units that can be processed efficiently. The processed DEM derivatives for each partition can then be either mosaicked back into a single large entity or managed at the partition level. For dendritic terrain morphologies, the way in which partitions are derived and the order in which they are processed depend on the river and catchment patterns. These patterns are not available until the flow pattern of the whole region is created, which in turn cannot be established upfront due to the size issues. This paper describes a procedure that solves this problem: (1) Resample the original large DEM grid so that the total number of cells is reduced to a level for which the drainage pattern can be established. (2) Run standard terrain preprocessing operations on the resampled DEM to generate the river and catchment system. (3) Define the processing units and their processing order based on the river and catchment system created in step (2). (4) Based on the processing order, apply the analysis, i.e., the flow accumulation operation, to each of the processing units on the full-resolution DEM. (5) As each processing unit is processed in the order defined in step (3), compare the resulting drainage pattern with the drainage pattern established at the coarser scale and adjust the drainage boundaries and rivers if necessary.
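Step (1) of the procedure is straightforward to prototype. The sketch below block-averages a DEM array to a coarse grid suitable for establishing the drainage pattern; it assumes the DEM fits in memory as a NumPy array, whereas production code would work on tiles.

```python
import numpy as np

def resample_dem(dem: np.ndarray, factor: int) -> np.ndarray:
    """Reduce a DEM grid by averaging factor x factor blocks of cells."""
    rows = dem.shape[0] - dem.shape[0] % factor   # trim partial edge blocks
    cols = dem.shape[1] - dem.shape[1] % factor
    blocks = dem[:rows, :cols].reshape(rows // factor, factor,
                                       cols // factor, factor)
    return blocks.mean(axis=(1, 3))

dem = np.random.rand(2000, 2000).astype(np.float32)   # stand-in elevations
coarse = resample_dem(dem, factor=40)                  # 4e6 cells -> 2500
print(coarse.shape)                                    # (50, 50)
```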
Random packing of regular polygons and star polygons on a flat two-dimensional surface.
Cieśla, Michał; Barbasz, Jakub
2014-08-01
Random packing of unoriented regular polygons and star polygons on a two-dimensional flat continuous surface is studied numerically using a random sequential adsorption algorithm. The results are analyzed to determine the saturated random packing ratio as well as its density autocorrelation function. Additionally, the kinetics of packing growth and the available surface function are measured. In general, stars give lower packing ratios than polygons, but when the number of vertices is large enough, both shapes approach disks and, therefore, the properties of their packings reproduce already known results for disks.
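A minimal sketch of the random sequential adsorption loop follows, reduced from polygons to disks for brevity (the polygon and star variants additionally draw a random orientation and test shape overlap instead of a center distance). Stopping after a fixed run of failed insertions is a crude stand-in for true saturation, so the reported ratio falls slightly below the saturated value.

```python
import numpy as np

rng = np.random.default_rng(0)

def rsa_disks(L=20.0, r=1.0, max_failures=5000):
    """Drop disks at random; keep non-overlapping ones until trials stall."""
    centers = []
    failures = 0
    while failures < max_failures:          # crude proxy for saturation
        p = rng.uniform(r, L - r, size=2)   # trial center inside the box
        if all(np.hypot(*(p - q)) >= 2 * r for q in centers):
            centers.append(p)
            failures = 0
        else:
            failures += 1
    return np.array(centers)

c = rsa_disks()
print(len(c), "disks, packing ratio ~", round(len(c) * np.pi / 20.0**2, 3))
```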
Polyglot Programming in Applications Used for Genetic Data Analysis
Nowak, Robert M.
2014-01-01
Applications used for the analysis of genetic data process large volumes of data with complex algorithms. High performance, flexibility, and a user interface with a web browser are required by these solutions, which can be achieved by using multiple programming languages. In this study, I developed a freely available framework for building software to analyze genetic data, which uses C++, Python, JavaScript, and several libraries. This system was used to build a number of genetic data processing applications and it reduced the time and costs of development. PMID:25197633
NASA Astrophysics Data System (ADS)
Le Kien, Fam; Schneeweiss, Philipp; Rauschenbeutel, Arno
2013-05-01
We present a systematic derivation of the dynamical polarizability and the ac Stark shift of the ground and excited states of atoms interacting with a far-off-resonance light field of arbitrary polarization. We calculate the scalar, vector, and tensor polarizabilities of atomic cesium using resonance wavelengths and reduced matrix elements for a large number of transitions. We analyze the properties of the fictitious magnetic field produced by the vector polarizability in conjunction with the ellipticity of the polarization of the light field.
Jumbo tornado outbreak of 3 April 1974
NASA Technical Reports Server (NTRS)
Fujita, T. T.
1974-01-01
General meteorological data concerning the Jumbo tornado outbreak are presented. In terms of tornado number and total path mileage, it was more extensive than all known outbreaks. Most of the intense tornadoes avoided the large cities, however. Turn information is analyzed in detail. Left-turn tornadoes were more intense than right-turn tornadoes. Many important phenomena were observed, such as multiple suction vortices, family tornadoes, and cousin tornadoes spawned from interacting tornado cyclones. Aerial survey data will aid greatly in the solution of various scales of rotating motions, leading to improved prediction and warning of tornadoes.
1988-08-01
washed out several bridges and bridge approaches, flooded large areas of agricultural land, and caused heavy bank erosion along most of the river. In...analyzed. Both alternatives featured a cofferdam on Government Canyon, and a 2,310-foot-long corrugated metal outlet pipe draining through the...losses were determined using plate C-7 in EM 1110-2-1602 (ref. 19). A 90 degree helix was assumed for the corrugated metal pipes. This method resulted in
NASA Astrophysics Data System (ADS)
Mirsafianf, Atefeh S.; Isfahani, Shirin N.; Kasaei, Shohreh; Mobasheri, Hamid
Here we present an approach for processing images of neural cells to analyze their growth process in a culture environment. We have applied several image processing techniques for (1) environmental noise reduction, (2) neural cell segmentation, (3) neural cell classification based on dendrite growth conditions, and (4) extraction and measurement of neuron features (e.g., cell body area, number of dendrites, and axon length). Due to the large amount of noise in the images, we have used feed-forward artificial neural networks to detect edges more precisely.
Analysis of magnetic fields using variational principles and CELAS2 elements
NASA Technical Reports Server (NTRS)
Frye, J. W.; Kasper, R. G.
1977-01-01
Prospective techniques for analyzing magnetic fields using NASTRAN are reviewed. A variational principle utilizing a vector potential function is presented which has as its Euler equations the required field equations and boundary conditions for static magnetic fields including current sources. The need to augment this variational principle with a constraint condition is discussed. Some results using the Lagrange multiplier method to apply the constraint, with CELAS2 elements used to simulate the matrices, are given. Practical considerations of using large numbers of CELAS2 elements are discussed.
Aircraft measurements of trace gases and particles near the tropopause
NASA Technical Reports Server (NTRS)
Falconer, P.; Pratt, R.; Detwiler, A.; Chen, C. S.; Hogan, A.; Bernard, S.; Krebschull, K.; Winters, W.
1983-01-01
Research activities which were performed using atmospheric constituent data obtained by the NASA Global Atmospheric Sampling Program are described. The characteristics of the particle size spectrum in various meteorological settings from a special collection of GASP data are surveyed. The relationship between humidity and cloud particles is analyzed. Climatological and case studies of tropical ozone distributions measured on a large number of flights are reported. Particle counter calibrations are discussed as well as the comparison of GASP particle data in the upper troposphere with other measurements at lower altitudes over the Pacific Ocean.
The NIH Roadmap Epigenomics Program data resource
Chadwick, Lisa Helbling
2012-01-01
The NIH Roadmap Reference Epigenome Mapping Consortium is developing a community resource of genome-wide epigenetic maps in a broad range of human primary cells and tissues. There are large amounts of data already available, and a number of different options for viewing and analyzing the data. This report will describe key features of the websites where users will find data, protocols and analysis tools developed by the consortium, and provide a perspective on how this unique resource will facilitate and inform human disease research, both immediately and in the future. PMID:22690667
System of HPC content archiving
NASA Astrophysics Data System (ADS)
Bogdanov, A.; Ivashchenko, A.
2017-12-01
This work aims to develop a system that effectively solves the problem of storing and analyzing files containing text data, using modern software development tools, techniques, and approaches. The main challenge defined at the problem formulation stage, storing a large number of text documents, has to be addressed with functionality such as full-text search and clustering of documents by their content. The main system features are a distributed multilevel architecture and the flexibility and interchangeability of components, achieved by encapsulating standard functionality in independent executable modules.
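A minimal sketch of the two core functions named above, full-text ranking and content-based clustering, using scikit-learn as an assumed stand-in; the paper's actual components and APIs are not specified here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

docs = ["hpc job scheduler logs", "text archive search engine",
        "cluster file system tuning", "full text indexing pipeline"]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)                      # sparse TF-IDF term matrix

# Content-based clustering of the documents.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Full-text search: rank documents against a query by cosine similarity.
query = vec.transform(["full text search"])
scores = cosine_similarity(query, X).ravel()
print(labels, scores.argsort()[::-1])            # clusters, ranked doc indices
```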
The Cancer Genome Atlas Pan-Cancer analysis project.
Weinstein, John N; Collisson, Eric A; Mills, Gordon B; Shaw, Kenna R Mills; Ozenberger, Brad A; Ellrott, Kyle; Shmulevich, Ilya; Sander, Chris; Stuart, Joshua M
2013-10-01
The Cancer Genome Atlas (TCGA) Research Network has profiled and analyzed large numbers of human tumors to discover molecular aberrations at the DNA, RNA, protein and epigenetic levels. The resulting rich data provide a major opportunity to develop an integrated picture of commonalities, differences and emergent themes across tumor lineages. The Pan-Cancer initiative compares the first 12 tumor types profiled by TCGA. Analysis of the molecular aberrations and their functional roles across tumor types will teach us how to extend therapies effective in one cancer type to others with a similar genomic profile.
Renormalization of Extended QCD2
NASA Astrophysics Data System (ADS)
Fukaya, Hidenori; Yamamura, Ryo
2015-10-01
Extended QCD (XQCD), proposed by Kaplan [D. B. Kaplan, arXiv:1306.5818], is an interesting reformulation of QCD with additional bosonic auxiliary fields. While its partition function is kept exactly the same as that of original QCD, XQCD naturally contains properties of low-energy hadronic models. We analyze the renormalization group flow of 2D (X)QCD, which is solvable in the limit of a large number of colors N_c, to understand what kind of roles the auxiliary degrees of freedom play and how the hadronic picture emerges in the low-energy region.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Semenov, A. N.
We propose a new spin-glass model with no positional quenched disorder, which is regarded as a coarse-grained model of a structural glass-former. The model is analyzed in the 1D case when the number N of states of a primary cell is large. For N → ∞, the model exhibits a sharp freezing transition of thermodynamic origin. It is shown both analytically and numerically that the glass transition is accompanied by a significant growth of a static length scale ξ, pointing to the structural (equilibrium) nature of dynamical slowdown effects in supercooled liquids.
Clustangles: An Open Library for Clustering Angular Data.
Sargsyan, Karen; Hua, Yun Hao; Lim, Carmay
2015-08-24
Dihedral angles are good descriptors of the numerous conformations visited by large, flexible systems, but their analysis requires directional statistics. A single package including the various multivariate statistical methods for angular data that accounts for the distinct topology of such data does not exist. Here, we present a lightweight standalone, operating-system independent package called Clustangles to fill this gap. Clustangles will be useful in analyzing the ever-increasing number of structures in the Protein Data Bank and clustering the copious conformations from increasingly long molecular dynamics simulations.
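The basic trick for respecting the circular topology of angular data can be sketched briefly: embed each angle as (cos θ, sin θ) so that 359° and 1° are close before clustering. This illustrates the problem Clustangles addresses; it does not reproduce the package's own API or methods.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Two dihedral-angle populations near -60 and +180 degrees (alpha/beta-like).
angles = np.radians(np.concatenate([rng.normal(-60, 10, 200),
                                    rng.normal(180, 10, 200)]))
X = np.column_stack([np.cos(angles), np.sin(angles)])  # circular embedding

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Circular mean of each cluster, recovered via atan2 of the mean sin/cos.
for k in range(2):
    m = np.degrees(np.arctan2(X[labels == k, 1].mean(),
                              X[labels == k, 0].mean()))
    print(f"cluster {k}: circular mean {m:.1f} deg")
```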
[Current status of occupational health and related countermeasures in Guangzhou, China].
Zeng, W F; Wu, S H; Wang, Z; Liu, Y M
2016-02-20
To investigate the current status of occupational health and related countermeasures in Guangzhou, China. Related data were collected from occupational poisoning accident investigations, diagnosis and identification of occupational diseases, and the occupational disease hazard reporting system, and the statistical data on occupational health in Guangzhou were analyzed retrospectively. The number of enterprises reporting occupational disease hazards in Guangzhou was 20 890, and the total number of workers was 1 457 583. The number of workers exposed to occupational hazards was 284 233, and the cumulative number of workers with occupational diseases was 1 502. There were many risk factors for occupational diseases in enterprises, and there were a large number of workers with occupational diseases, as well as newly diagnosed cases. From 2001 to 2014, the total number of cases of occupational diseases was 958. The situation for the prevention and control of occupational diseases in Guangzhou is grim. Occupational health supervision and law enforcement should be enhanced, the three-level supervision system should be established and perfected, and an occupational health supervision system combining prevention, treatment, and protection should be established and promoted, so as to gradually build a technical service support system for occupational health.
Clarke, Richard W; Monnier, Nilah; Li, Haitao; Zhou, Dejian; Browne, Helena; Klenerman, David
2007-08-15
We present a single-virion method to determine absolute distributions of copy number in the protein composition of viruses and apply it to herpes simplex virus type 1. Using two-color coincidence fluorescence spectroscopy, we determine the virion-to-virion variability in copy numbers of fluorescently labeled tegument and envelope proteins relative to a capsid protein by analyzing fluorescence intensity ratios for ensembles of individual dual-labeled virions and fitting the resulting histogram of ratios. Using EYFP-tagged capsid protein VP26 as a reference for fluorescence intensity, we are able to calculate the mean and also, for the first time to our knowledge, the variation in the numbers of the gD, VP16, and VP22 proteins. The measurement of the number of glycoprotein D molecules was in good agreement with independent measurements of average numbers of these glycoproteins in bulk virus preparations, validating the method. The accuracy, straightforward data processing, and high throughput of this technique make it widely applicable to the analysis of the molecular composition of large complexes in general, and it is particularly suited to providing insights into virus structure, assembly, and infectivity.
Velocity Resolved---Scalar Modeled Simulations of High Schmidt Number Turbulent Transport
NASA Astrophysics Data System (ADS)
Verma, Siddhartha
The objective of this thesis is to develop a framework to conduct velocity resolved - scalar modeled (VR-SM) simulations, which will enable accurate simulations at higher Reynolds and Schmidt (Sc) numbers than are currently feasible. The framework established will serve as a first step to enable future simulation studies for practical applications. To achieve this goal, in-depth analyses of the physical, numerical, and modeling aspects related to Sc ≫ 1 are presented, specifically when modeling in the viscous-convective subrange. Transport characteristics are scrutinized by examining scalar-velocity Fourier mode interactions in Direct Numerical Simulation (DNS) datasets and suggest that scalar modes in the viscous-convective subrange do not directly affect large-scale transport for high Sc. Further observations confirm that discretization errors inherent in numerical schemes can be sufficiently large to wipe out any meaningful contribution from subfilter models. This provides strong incentive to develop more effective numerical schemes to support high Sc simulations. To lower numerical dissipation while maintaining physically and mathematically appropriate scalar bounds during the convection step, a novel method of enforcing bounds is formulated, specifically for use with cubic Hermite polynomials. Boundedness of the scalar being transported is effected by applying derivative limiting techniques, and physically plausible single sub-cell extrema are allowed to exist to help minimize numerical dissipation. The proposed bounding algorithm results in significant performance gain in DNS of turbulent mixing layers and of homogeneous isotropic turbulence. Next, the combined physical/mathematical behavior of the subfilter scalar-flux vector is analyzed in homogeneous isotropic turbulence, by examining vector orientation in the strain-rate eigenframe. The results indicate no discernible dependence on the modeled scalar field, and lead to the identification of the tensor-diffusivity model as a good representation of the subfilter flux. Velocity resolved - scalar modeled simulations of homogeneous isotropic turbulence are conducted to confirm the behavior theorized in these a priori analyses, and suggest that the tensor-diffusivity model is ideal for use in the viscous-convective subrange. Simulations of a turbulent mixing layer are also discussed, with the partial objective of analyzing Schmidt number dependence of a variety of scalar statistics. Large-scale statistics are confirmed to be relatively independent of the Schmidt number for Sc ≫ 1, which is explained by the dominance of subfilter dissipation over resolved molecular dissipation in the simulations. Overall, the VR-SM framework presented is quite effective in predicting large-scale transport characteristics of high Schmidt number scalars, however, it is determined that prediction of subfilter quantities would entail additional modeling intended specifically for this purpose. The VR-SM simulations presented in this thesis provide us with the opportunity to overlap with experimental studies, while at the same time creating an assortment of baseline datasets for future validation of LES models, thereby satisfying the objectives outlined for this work.
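The derivative-limiting idea can be illustrated compactly. The sketch below applies the classical Fritsch-Carlson monotone limiter to nodal derivatives before cubic Hermite reconstruction; note that this is stricter than the thesis' scheme, which relaxes the limiter to admit a single physically plausible sub-cell extremum and thus lowers numerical dissipation further.

```python
import numpy as np

def limited_slopes(x, y):
    """Limit nodal derivatives so a cubic Hermite fit stays within data bounds."""
    d = np.gradient(y, x)                      # provisional nodal derivatives
    s = np.diff(y) / np.diff(x)                # secant slopes per interval
    for i in range(len(x)):
        sl = s[i - 1] if i > 0 else s[0]
        sr = s[i] if i < len(s) else s[-1]
        if sl * sr <= 0:
            d[i] = 0.0                         # data extremum: flatten slope
        else:
            cap = 3.0 * min(abs(sl), abs(sr))  # Fritsch-Carlson bound
            d[i] = np.sign(sl) * min(abs(d[i]), cap)
    return d

x = np.linspace(0.0, 1.0, 9)
y = np.clip(np.sin(6 * x), 0.0, 1.0)           # a bounded scalar profile
print(limited_slopes(x, y))  # limited derivatives keep the interpolant in [0, 1]
```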
Distribution of Plasmoids in Post-Coronal Mass Ejection Current Sheets
NASA Astrophysics Data System (ADS)
Bhattacharjee, A.; Guo, L.; Huang, Y.
2013-12-01
Recently, the fragmentation of a current sheet in the high-Lundquist-number regime caused by the plasmoid instability has been proposed as a possible mechanism for fast reconnection. In this work, we investigate this scenario by comparing the distribution of plasmoids obtained from Large Angle and Spectrometric Coronagraph (LASCO) observational data of a coronal mass ejection event with a resistive magnetohydrodynamic simulation of a similar event. The LASCO/C2 data are analyzed using visual inspection, whereas the numerical data are analyzed using both visual inspection and a more precise topological method. Contrasting the observational data with numerical data analyzed with both methods, we identify a major limitation of the visual inspection method, due to the difficulty in resolving smaller plasmoids. This result raises questions about reports of log-normal distributions of plasmoids and other coherent features in the recent literature. Based on nonlinear scaling relations of the plasmoid instability, we infer a lower bound on the current sheet width, assuming the underlying mechanism of current sheet broadening is resistive diffusion.
Meteoroid stream flux densities and the zenith exponent
NASA Astrophysics Data System (ADS)
Molau, Sirko; Barentsen, Geert
2013-01-01
The MetRec software was recently extended to measure the limiting magnitude in real time and to determine meteoroid stream flux densities. This paper gives a short overview of the applied algorithms. We introduce the MetRec Flux Viewer, a web tool to visualize activity profiles online. Starting from the Lyrids 2011, high-quality flux density profiles were derived from IMO Video Network observations for every major meteor shower. They are often in good agreement with visual data. Analyzing the 2011 Perseids, we found systematic daily variations in the flux density profile, which can be attributed to a zenith exponent gamma > 1.0. We analyzed a number of meteor showers in detail and found zenith exponent variations from shower to shower in the range between 1.55 and 2.0. The average value over all analyzed showers is gamma = 1.75. In order to determine the zenith exponent precisely, the observations must cover a large altitude range (at least 45 degrees).
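The role of the zenith exponent can be shown in one formula: observed rates are referred to the zenith by dividing by sin(radiant altitude) raised to gamma. The sketch below uses the paper's average gamma = 1.75 with illustrative altitudes; the full correction chain in MetRec involves more factors and is not reproduced here.

```python
import numpy as np

def zenith_corrected(rate_observed, radiant_alt_deg, gamma=1.75):
    """Refer an observed rate to the zenith: divide by sin(alt)^gamma."""
    return rate_observed / np.sin(np.radians(radiant_alt_deg)) ** gamma

alts = np.array([20.0, 45.0, 70.0, 90.0])   # radiant altitudes, degrees
print(zenith_corrected(10.0, alts))
# With gamma > 1 the low-altitude correction exceeds the classical gamma = 1
# case, which is why daily rate variations betray the value of gamma.
```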
Fate of pharmaceutical and trace organic compounds in three septic system plumes, Ontario, Canada.
Carrara, Cherilyn; Ptacek, Carol J; Robertson, William D; Blowes, David W; Moncur, Michael C; Sverko, Ed; Backus, Sean
2008-04-15
Three high-volume septic systems in Ontario, Canada, were examined to assess the potential for onsite wastewater treatment systems to release pharmaceutical compounds to the environment and to evaluate the mobility of these compounds in receiving aquifers. Wastewater samples were collected from the septic tanks, and groundwater samples were collected below and down gradient of the infiltration beds and analyzed for a suite of commonly used pharmaceutical and trace organic compounds. The septic tank samples contained elevated concentrations of several pharmaceutical compounds. Ten of the 12 compounds analyzed were detected in groundwater at one or more sites at concentrations in the low ng L⁻¹ to low µg L⁻¹ range. Large differences among the sites were observed in both the number of detections and the concentrations of the pharmaceutical compounds. Of the compounds analyzed, ibuprofen, gemfibrozil, and naproxen were observed to be transported at the highest concentrations and greatest distances from the infiltration source areas, particularly in anoxic zones of the plumes.
On the Performance of TCP Spoofing in Satellite Networks
NASA Technical Reports Server (NTRS)
Ishac, Joseph; Allman, Mark
2001-01-01
In this paper, we analyze the performance of the Transmission Control Protocol (TCP) in a network that consists of both satellite and terrestrial components. One method proposed by outside research to improve the performance of data transfers over satellites is to use a performance-enhancing proxy, often dubbed 'spoofing.' Spoofing involves the transparent splitting of a TCP connection between the source and destination by some entity within the network path. In order to analyze the impact of spoofing, we constructed a simulation suite based around the network simulator ns-2. The simulation reflects a host with a satellite connection to the Internet and allows the option to spoof connections just prior to the satellite. The methodology used in our simulation allows us to analyze spoofing over a large range of file sizes and under various congested conditions, while prior work on this topic has primarily focused on bulk transfers with no congestion. As a result of these simulations, we find that the performance of spoofing is dependent upon a number of conditions.
Analysis of Content Shared in Online Cancer Communities: Systematic Review
van de Poll-Franse, Lonneke V; Krahmer, Emiel; Verberne, Suzan; Mols, Floortje
2018-01-01
Background The content that cancer patients and their relatives (ie, posters) share in online cancer communities has been researched in various ways. In the past decade, researchers have used automated analysis methods in addition to manual coding methods. Patients, providers, researchers, and health care professionals can learn from experienced patients, provided that their experience is findable. Objective The aim of this study was to systematically review all relevant literature that analyzes user-generated content shared within online cancer communities. We reviewed the quality of available research and the kind of content that posters share with each other on the internet. Methods A computerized literature search was performed via PubMed (MEDLINE), PsycINFO (5 and 4 stars), Cochrane Central Register of Controlled Trials, and ScienceDirect. The last search was conducted in July 2017. Papers were selected if they included the following terms: (cancer patient) and (support group or health communities) and (online or internet). We selected 27 papers and then subjected them to a 14-item quality checklist independently scored by 2 investigators. Results The methodological quality of the selected studies varied: 16 were of high quality and 11 were of adequate quality. Of those 27 studies, 15 were manually coded, 7 automated, and 5 used a combination of methods. The best results can be seen in the papers that combined both analytical methods. The number of analyzed posts ranged from 200 to 1,500,000; the number of analyzed posters ranged from 75 to 90,000. The studies analyzing large numbers of posts mainly related to breast cancer, whereas those analyzing small numbers were related to other types of cancers. A total of 12 studies involved some or entirely automatic analysis of the user-generated content. All the authors referred to two main content categories: informational support and emotional support. In all, 15 studies reported only on the content, 6 studies explicitly reported on content and social aspects, and 6 studies focused on emotional changes. Conclusions In the future, increasing amounts of user-generated content will become available on the internet. The results of content analysis, especially of the larger studies, give detailed insights into patients’ concerns and worries, which can then be used to improve cancer care. To make the results of such analyses as usable as possible, automatic content analysis methods will need to be improved through interdisciplinary collaboration. PMID:29615384
The need for econometric research in laboratory animal operations.
Baker, David G; Kearney, Michael T
2015-06-01
The scarcity of research funding can affect animal facilities in various ways. These effects can be evaluated by examining the allocation of financial resources in animal facilities, which can be facilitated by the use of mathematical and statistical methods to analyze economic problems, a discipline known as econometrics. The authors applied econometrics to study whether increasing per diem charges had a negative effect on the number of days of animal care purchased by animal users. They surveyed animal numbers and per diem charges at 20 research institutions and found that demand for large animals decreased as per diem charges increased. The authors discuss some of the challenges involved in their study and encourage research institutions to carry out more robust econometric studies of this and other economic questions facing laboratory animal research.
Automating Content Analysis of Open-Ended Responses: Wordscores and Affective Intonation
Baek, Young Min; Cappella, Joseph N.; Bindman, Alyssa
2014-01-01
This study presents automated methods for predicting valence and quantifying valenced thoughts of a text. First, it examines whether Wordscores, developed by Laver, Benoit, and Garry (2003), can be adapted to reliably predict the valence of open-ended responses in a survey about bioethical issues in genetics research, and then tests a complementary and novel technique for coding the number of valenced thoughts in open-ended responses, termed Affective Intonation. Results show that Wordscores successfully predicts the valence of brief and grammatically imperfect open-ended responses, and Affective Intonation achieves comparable performance to human coders when estimating number of valenced thoughts. Both Wordscores and Affective Intonation have promise as reliable, effective, and efficient methods when researchers content-analyze large amounts of textual data systematically. PMID:25558294
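A compact sketch of the Wordscores scoring step as described by Laver, Benoit, and Garry (2003): word scores are frequency-weighted expectations of the reference scores, and a new text is scored by averaging the scores of its known words. The toy reference texts and their ±1 valence scores are placeholders, not the study's training data.

```python
from collections import Counter

# Reference texts with a priori scores: +1 positive, -1 negative valence.
refs = {+1.0: "good great helpful reassuring good",
        -1.0: "bad risky harmful bad scary"}

# Relative word frequencies per reference text.
rel = {}
for score, text in refs.items():
    counts = Counter(text.split())
    total = sum(counts.values())
    rel[score] = {w: c / total for w, c in counts.items()}

# Word score = expected reference score given the word (Wordscores step).
vocab = set().union(*rel.values())
wscore = {w: sum(rel[s].get(w, 0.0) * s for s in rel) /
             sum(rel[s].get(w, 0.0) for s in rel) for w in vocab}

def score_text(text):
    """Mean word score over scored words; the text's predicted valence."""
    toks = [w for w in text.split() if w in wscore]
    return sum(wscore[w] for w in toks) / len(toks) if toks else 0.0

print(score_text("mostly good but a bit risky"))  # ~0.0, mixed valence
```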
Analysis of Effluent Gases During the CCVD Growth of Multi Wall Carbon Nanotubes from Acetylene
NASA Technical Reports Server (NTRS)
Schmitt, T. C.; Biris, A. S.; Miller, D. W.; Biris, A. R.; Lupu, D.; Trigwell, S.; Rahman, Z. U.
2005-01-01
Catalytic chemical vapor deposition was used to grow multi-walled carbon nanotubes on a Fe:Co:CaCO3 catalyst from acetylene. The influent and effluent gases were analyzed by gas chromatography and mass spectrometry at different time intervals during the nanotube growth process in order to better understand and optimize the overall reaction. A large number of byproducts were identified, and the number and level of some of the carbon byproducts were found to increase significantly over time. The CaCO3 catalytic support thermally decomposed into CaO and CO2, resulting in a mixture of two catalysts for growing the nanotubes, which were found to have outer diameters in two main groups, 8-35 nm and 40-60 nm, respectively.
Direct femtosecond laser surface structuring of crystalline silicon at 400 nm
NASA Astrophysics Data System (ADS)
Nivas, Jijil JJ; Anoop, K. K.; Bruzzese, Riccardo; Philip, Reji; Amoruso, Salvatore
2018-03-01
We have analyzed the effects of the laser pulse wavelength (400 nm) on femtosecond laser surface structuring of silicon. The features of the produced surface structures are investigated as a function of the number of pulses, N, and compared with the surface textures produced by more standard near-infrared (800 nm) laser pulses at a similar level of excitation. Our experimental findings highlight the importance of the light wavelength for the formation of the supra-wavelength grooves, and, for a large number of pulses (N ≈ 1000), the generation of other periodic structures (stripes) at 400 nm, which are not observed at 800 nm. These results provide interesting information on the generation of various surface textures, addressing the effect of the laser pulse wavelength on the generation of grooves and stripes.
Rare-earth abundances in chondritic meteorites
NASA Technical Reports Server (NTRS)
Evensen, N. M.; Hamilton, P. J.; Onions, R. K.
1978-01-01
Fifteen chondrites, including eight carbonaceous chondrites, were analyzed for rare earth element abundances by isotope dilution. Examination of REE for a large number of individual chondrites shows that only a small proportion of the analyses have flat unfractionated REE patterns within experimental error. While some of the remaining analyses are consistent with magmatic fractionation, many patterns, in particular those with positive Ce anomalies, can not be explained by known magmatic processes. Elemental abundance anomalies are found in all major chondrite classes. The persistence of anomalies in chondritic materials relatively removed from direct condensational processes implies that anomalous components are resistant to equilibrium or were introduced at a late stage of chondrite formation. Large-scale segregation of gas and condensate is implied, and bulk variations in REE abundances between planetary bodies is possible.
Turbulent Dynamics of Epithelial Cell Cultures
NASA Astrophysics Data System (ADS)
Blanch-Mercader, C.; Yashunsky, V.; Garcia, S.; Duclos, G.; Giomi, L.; Silberzan, P.
2018-05-01
We investigate the large length and long time scales collective flows and structural rearrangements within in vitro human bronchial epithelial cell (HBEC) cultures. Activity-driven collective flows result in ensembles of vortices randomly positioned in space. By analyzing a large population of vortices, we show that their area follows an exponential law with a constant mean value and their rotational frequency is size independent, both being characteristic features of the chaotic dynamics of active nematic suspensions. Indeed, we find that HBECs self-organize in nematic domains of several cell lengths. Nematic defects are found at the interface between domains with a total number that remains constant due to the dynamical balance of nucleation and annihilation events. The mean velocity fields in the vicinity of defects are well described by a hydrodynamic theory of extensile active nematics.
NASA Astrophysics Data System (ADS)
Li, X.
2014-12-01
Thermal stratification of the atmospheric surface layer has a strong impact on the land-atmosphere exchange of turbulent momentum, heat, and pollutant fluxes. Few studies have addressed the interaction of a weakly to moderately stable stratified atmosphere with the urban canopy. This study performs a large-eddy simulation of a modeled street canyon within a weakly to moderately stable atmospheric boundary layer. To better resolve the smaller eddy sizes resulting from the stable stratification, a higher spatial and temporal resolution is used. The detailed flow structure and turbulence inside the street canyon are analyzed. The relationship between pollutant dispersion and the atmospheric Richardson number is investigated. Differences between these characteristics and those under neutral and unstable boundary layers are emphasized.
Parallel scalability of Hartree-Fock calculations
NASA Astrophysics Data System (ADS)
Chow, Edmond; Liu, Xing; Smelyanskiy, Mikhail; Hammond, Jeff R.
2015-03-01
Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree-Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
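Purification replaces the eigendecomposition with repeated matrix multiplication, which is why its parallel behavior differs. Below is a minimal dense sketch of McWeeny purification, D ← 3D² − 2D³, with no sparsity, matching the setup described above; for simplicity the chemical potential is placed in the spectral gap via a direct eigensolve on the toy matrix, a step a production code would estimate differently.

```python
import numpy as np

def mcweeny_density(H: np.ndarray, n_occ: int, iters: int = 50) -> np.ndarray:
    """Density matrix for the n_occ lowest states via D <- 3D^2 - 2D^3."""
    e = np.linalg.eigvalsh(H)                # toy-sized: used only to place mu
    mu = 0.5 * (e[n_occ - 1] + e[n_occ])     # chemical potential in the gap
    alpha = np.abs(e - mu).max()             # spectral half-width around mu
    n = H.shape[0]
    D = 0.5 * (np.eye(n) + (mu * np.eye(n) - H) / alpha)  # eigenvalues in [0, 1]
    for _ in range(iters):
        D2 = D @ D
        D = 3.0 * D2 - 2.0 * D2 @ D          # McWeeny purification step
    return D

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 60))
H = 0.5 * (A + A.T)                          # toy symmetric "Hamiltonian"
D = mcweeny_density(H, n_occ=12)
print(round(np.trace(D), 6), np.linalg.norm(D @ D - D) < 1e-10)  # 12.0 True
```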
Cyclic subway networks are less risky in metropolises
NASA Astrophysics Data System (ADS)
Xiao, Ying; Zhang, Hai-Tao; Xu, Bowen; Zhu, Tao; Chen, Guanrong; Chen, Duxin
2018-02-01
Subways are crucial in modern transportation systems of metropolises. To quantitatively evaluate the potential risks to subway networks from natural disasters or deliberate attacks, real data from seven Chinese subway systems are collected, and their population distributions and anti-risk capabilities are analyzed. Counterintuitively, it is found that transfer stations with large numbers of connections are not the most crucial; rather, the stations and lines with large betweenness centrality are essential if subway networks are attacked. It is also found that cycles reduce such vulnerability due to the existence of alternative paths. To reproduce the data-based observations, a network model is proposed to characterize the dynamics of subway systems under various intensities of attacks on stations and lines. This study sheds some light on risk assessment of subway networks in metropolitan cities.
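The degree-versus-betweenness contrast and the effect of cycles can be reproduced on a toy graph (the layout below is invented for illustration): the best-connected node is not the one with the largest betweenness, and closing a cycle around a bridge station sharply reduces its betweenness by opening an alternative path.

```python
import networkx as nx
from itertools import combinations

G = nx.Graph()
G.add_edges_from(combinations(["k1", "k2", "k3", "k4", "k5"], 2))  # dense core
G.add_edge("k1", "b")                         # b bridges the core to a line
chain = ["b", "c1", "c2", "c3", "c4", "c5"]
G.add_edges_from(zip(chain, chain[1:]))

deg = dict(G.degree)
bc = nx.betweenness_centrality(G, normalized=False)
print("highest degree:     ", max(deg, key=deg.get))   # k1 (degree 5)
print("highest betweenness:", max(bc, key=bc.get))     # b  (degree 2)

G.add_edge("c1", "k2")                        # close a cycle around b
bc2 = nx.betweenness_centrality(G, normalized=False)
print(f"betweenness of b: {bc['b']:.1f} -> {bc2['b']:.1f}")   # 25.0 -> 2.5
```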
2013-01-01
Background Many large-scale studies analyzed high-throughput genomic data to identify altered pathways essential to the development and progression of specific types of cancer. However, no previous study has been extended to provide a comprehensive analysis of pathways disrupted by copy number alterations across different human cancers. Towards this goal, we propose a network-based method to integrate copy number alteration data with human protein-protein interaction networks and pathway databases to identify pathways that are commonly disrupted in many different types of cancer. Results We applied our approach to a data set of 2,172 cancer patients across 16 different types of cancers, and discovered a set of commonly disrupted pathways, which are likely essential for tumor formation in majority of the cancers. We also identified pathways that are only disrupted in specific cancer types, providing molecular markers for different human cancers. Analysis with independent microarray gene expression datasets confirms that the commonly disrupted pathways can be used to identify patient subgroups with significantly different survival outcomes. We also provide a network view of disrupted pathways to explain how copy number alterations affect pathways that regulate cell growth, cycle, and differentiation for tumorigenesis. Conclusions In this work, we demonstrated that the network-based integrative analysis can help to identify pathways disrupted by copy number alterations across 16 types of human cancers, which are not readily identifiable by conventional overrepresentation-based and other pathway-based methods. All the results and source code are available at http://compbio.cs.umn.edu/NetPathID/. PMID:23822816
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiu, J; Ma, L
2015-06-15
Purpose: To develop a treatment delivery and planning strategy that increases the number of beams to minimize dose to brain tissue surrounding a target while maximizing dose coverage of the target. Methods: We analyzed 14 different treatment plans via Leksell PFX and 4C. For standardization, single-tumor cases were chosen. Original treatment plans were compared with two optimized plans. The number of beams was increased by varying the tilt angle of the patient head, while maintaining the original isocenter, the beam positions in the x-, y-, and z-axes, the collimator size, and the beam blocking. PFX optimized plans increased beam numbers with three pre-set tilt angles (70, 90, and 110 degrees), and 4C optimized plans increased beam numbers with tilt angles varying arbitrarily in the range of 30 to 150 degrees. Optimized treatment plans were compared dosimetrically with the original treatment plans. Results: Comparing total normal tissue isodose volumes between original and optimized plans, the low-level percentage isodose volumes decreased in all plans. Despite increasing the number of beams by up to a factor of 25, beam-on times for 1 tilt angle versus 3 or more tilt angles were comparable (<1 min difference). In 64% (9/14) of the studied cases, the volume percentage decreased by >5%, with the highest value reaching 19%. The addition of more tilt angles correlates with a greater decrease in irradiated normal brain volume. Selectivity and coverage for original and optimized plans remained comparable. Conclusion: Adding a large number of additional focused beams with variable patient head tilt improves dose fall-off for brain radiosurgery. The study demonstrates the technical feasibility of adding beams to reduce dose to the normal brain volume.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Xin; Kim, Yusung, E-mail: yusung-kim@uiowa.edu; Bayouth, John E.
2013-04-01
To develop an optimal field-splitting algorithm of minimal complexity and verify the algorithm using head-and-neck (H and N) and female pelvic intensity-modulated radiotherapy (IMRT) cases. An optimal field-splitting algorithm was developed in which a large intensity map (IM) was split into multiple sub-IMs (≥2). The algorithm reduced the total complexity by minimizing the monitor units (MU) delivered and the segment number of each sub-IM. The algorithm was verified through comparison studies with the algorithm used in a commercial treatment planning system. Seven IMRT H and N and female pelvic cancer cases (54 IMs) were analyzed by MU, segment numbers, and dose distributions. The optimal field-splitting algorithm was found to reduce both the total MU and the total number of segments: on average a 7.9 ± 11.8% and 9.6 ± 18.2% reduction in MU and segment numbers for the H and N IMRT cases, and an 11.9 ± 17.4% and 11.1 ± 13.7% reduction for the female pelvic cases. The overall percent (absolute) reductions in the numbers of MU and segments were on average −9.7 ± 14.6% (−15 ± 25 MU) and −10.3 ± 16.3% (−3 ± 5), respectively. In addition, all dose distributions from the optimal field-splitting method showed improvement. The optimal field-splitting algorithm shows considerable improvements in both total MU and total segment number and is expected to be beneficial for radiotherapy treatment of large-field IMRT.
The Generalized Higher Criticism for Testing SNP-Set Effects in Genetic Association Studies
Barnett, Ian; Mukherjee, Rajarshi; Lin, Xihong
2017-01-01
It is of substantial interest to study the effects of genes, genetic pathways, and networks on the risk of complex diseases. These genetic constructs each contain multiple SNPs, which are often correlated and function jointly, and might be large in number. However, only a sparse subset of SNPs in a genetic construct is generally associated with the disease of interest. In this article, we propose the generalized higher criticism (GHC) to test for the association between an SNP set and a disease outcome. The higher criticism is a test traditionally used in high-dimensional signal detection settings when marginal test statistics are independent and the number of parameters is very large. However, these assumptions do not always hold in genetic association studies, due to linkage disequilibrium among SNPs and the finite number of SNPs in an SNP set in each genetic construct. The proposed GHC overcomes the limitations of the higher criticism by allowing for arbitrary correlation structures among the SNPs in an SNP-set, while performing accurate analytic p-value calculations for any finite number of SNPs in the SNP-set. We obtain the detection boundary of the GHC test. We compared empirically using simulations the power of the GHC method with existing SNP-set tests over a range of genetic regions with varied correlation structures and signal sparsity. We apply the proposed methods to analyze the CGEM breast cancer genome-wide association study. Supplementary materials for this article are available online. PMID:28736464
Cui, De-Mi; Yan, Weizhong; Wang, Xiao-Quan; Lu, Lie-Min
2017-10-25
Low strain pile integrity testing (LSPIT), due to its simplicity and low cost, is one of the most popular NDE methods used in pile foundation construction. While performing LSPIT in the field is generally quite simple and quick, determining the integrity of the test piles by analyzing and interpreting the test signals (reflectograms) is still a manual process performed by experienced experts only. For foundation construction sites where the number of piles to be tested is large, it may take days before the expert can complete interpreting all of the piles and delivering the integrity assessment report. Techniques that can automate test signal interpretation, thus shortening the LSPIT's turnaround time, are of great business value and are in great need. Motivated by this need, in this paper, we develop a computer-aided reflectogram interpretation (CARI) methodology that can interpret a large number of LSPIT signals quickly and consistently. The methodology, built on advanced signal processing and machine learning technologies, can be used to assist the experts in performing both qualitative and quantitative interpretation of LSPIT signals. Specifically, the methodology can ease experts' interpretation burden by screening all test piles quickly and identifying a small number of suspected piles for experts to perform manual, in-depth interpretation. We demonstrate the methodology's effectiveness using the LSPIT signals collected from a number of real-world pile construction sites. The proposed methodology can potentially enhance LSPIT and make it even more efficient and effective in quality control of deep foundation construction.
Comment on "Universal relation between skewness and kurtosis in complex dynamics"
NASA Astrophysics Data System (ADS)
Celikoglu, Ahmet; Tirnakli, Ugur
2015-12-01
In a recent paper [M. Cristelli, A. Zaccaria, and L. Pietronero, Phys. Rev. E 85, 066108 (2012), 10.1103/PhysRevE.85.066108], the authors analyzed the relation between skewness and kurtosis for complex dynamical systems, and they identified two power-law regimes of non-Gaussianity, one of which scales with an exponent of 2 and the other with 4/3. They concluded that the observed relation is a universal fact in complex dynamical systems. In this Comment, we test the proposed universal relation between skewness and kurtosis with a large number of synthetic datasets, and we show that in fact it is not a universal relation and originates only from the small number of data points in the datasets considered. The proposed relation is tested using a family of non-Gaussian distributions known as q-Gaussians. We show that this relation disappears for sufficiently large datasets provided that the fourth moment of the distribution is finite. We find that the kurtosis saturates to a single value, which is of course different from the Gaussian case (K = 3), as the number of data points is increased, indicating that the kurtosis will converge to a finite single value if all moments of the distribution up to the fourth are finite. The converged kurtosis value for finite-fourth-moment distributions and the number of data points needed to reach this value depend on the deviation of the original distribution from the Gaussian case.
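The Comment's point is easy to reproduce numerically. The sketch below estimates the kurtosis K = m4/m2² of Student-t samples with ν = 10 (a heavy-tailed law with finite fourth moment and K = 3 + 6/(ν − 4) = 4) at increasing sample sizes; the estimate drifts at small n and saturates to the single finite value at large n. The choice of distribution and sample sizes is illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def kurtosis(x):
    """Non-excess sample kurtosis K = m4 / m2^2 (Gaussian: K = 3)."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2

nu = 10.0                   # Student-t: K = 3 + 6/(nu - 4) = 4 for nu = 10
for n in (10**3, 10**5, 10**7):
    print(n, round(kurtosis(rng.standard_t(nu, size=n)), 2))
```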
Multiplicative point process as a model of trading activity
NASA Astrophysics Data System (ADS)
Gontis, V.; Kaulakys, B.
2004-11-01
For signals consisting of a sequence of pulses, the inherent origin of 1/f noise is a Brownian fluctuation of the average interevent time between subsequent pulses. In this paper, we generalize the model of interevent time to reproduce a variety of self-affine time series exhibiting power spectral density S(f) scaling as a power of the frequency f. Furthermore, we analyze the relation between the power-law correlations and the origin of the power-law probability distribution of the signal intensity. We introduce a stochastic multiplicative model for the time intervals between point events and analyze the statistical properties of the signal analytically and numerically. Such a model system exhibits power-law spectral density S(f) ∼ 1/f^β for various values of β, including β = 1/2, 1, and 3/2. Explicit expressions for the power spectra in the low-frequency limit and for the distribution density of the interevent time are obtained. The counting statistics of the events is analyzed analytically and numerically as well. The specific interest of our analysis is related to the financial markets, where long-range correlations of price fluctuations largely depend on the number of transactions. We analyze the spectral density and counting statistics of the number of transactions. The model reproduces the spectral properties of real markets and explains the mechanism of the power-law distribution of trading activity. The study provides evidence that the statistical properties of the financial markets are contained in the statistics of the time interval between trades. A multiplicative point process serves as a consistent model generating these statistics.
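A hedged sketch of a multiplicative stochastic model for interevent times in the spirit of this paper: the next interval is the current one plus a drift term and a multiplicative noise term, with reflecting bounds. The recurrence form and all parameter values are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def simulate_interevent_times(n, gamma=1e-4, mu=0.5, sigma=0.02,
                              tau_min=1e-3, tau_max=1.0, seed=0):
    """Iterate tau_{k+1} = tau_k + gamma*tau_k**(2*mu-1)
    + sigma*tau_k**mu * eps_k, reflecting at the bounds."""
    rng = np.random.default_rng(seed)
    tau = np.empty(n)
    tau[0] = 0.1
    for k in range(n - 1):
        nxt = (tau[k] + gamma * tau[k]**(2*mu - 1)
               + sigma * tau[k]**mu * rng.standard_normal())
        if nxt < tau_min:               # reflect to stay in [tau_min, tau_max]
            nxt = 2*tau_min - nxt
        elif nxt > tau_max:
            nxt = 2*tau_max - nxt
        tau[k + 1] = nxt
    return tau

event_times = np.cumsum(simulate_interevent_times(100_000))
```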
Wang, Jia-Chi; Boyar, Fatih Z
2016-01-01
Chromosomal microarray analysis (CMA) has been recommended and practiced routinely in large U.S. reference laboratories as the first-tier test for the postnatal evaluation of individuals with intellectual disability, autism spectrum disorders, and/or multiple congenital anomalies. Using CMA as a diagnostic tool without routine access to fluorescence in situ hybridization with labeled bacterial artificial chromosome probes (BAC-FISH) makes characterization of the chromosome 9 pericentric region challenging in these laboratories. This region has a very complex genomic structure and contains a variety of heterochromatic and euchromatic polymorphic variants. These variants have usually been studied by G-banding, C-banding, and BAC-FISH analysis; CMA was not recommended for this purpose since it may lead to false-positive results. Here, we present a cohort of four cases in which high-resolution CMA was used as the first-tier test, or simultaneously with G-banding analysis on the proband, to identify pathogenic copy number variants (CNVs) in the whole genome. CMA revealed large pathogenic CNVs from chromosome 9 in three cases, which also showed different G-banding patterns between the two chromosome 9 homologues. Although we demonstrated that high-resolution CMA played an important role in the identification of pathogenic copy number variants in chromosome 9 pericentric regions, the lack of BAC-FISH analysis or other suitable tools still poses significant challenges in the characterization of these regions. Trial registration: none; this was not a clinical trial, and the cases were retrospectively collected and analyzed.
Salinity and spectral reflectance of soils
NASA Technical Reports Server (NTRS)
Szilagyi, A.; Baumgardner, M. F.
1991-01-01
The basic spectral response related to the salt content of soils in the visible and reflective IR wavelengths is analyzed in order to explore remote sensing applications for monitoring processes of the earth system. The bidirectional reflectance factor (BRF) was determined at 10-nm increments over the 520-2320-nm spectral range. The effect of salts on reflectance was analyzed on the basis of 162 spectral measurements. MSS and TM bands were simulated within the measured spectral region. A strong relationship was found between variations in reflectance and soil characteristics pertaining to salinization and desalinization. Although the individual MSS bands had high R-squared values and 75-79 percent of soil/treatment combinations were separable, a large number of soil/treatment combinations were not distinguished by any of the four highly correlated MSS bands under consideration.
Differential principal component analysis of ChIP-seq.
Ji, Hongkai; Li, Xia; Wang, Qian-fei; Ning, Yang
2013-04-23
We propose differential principal component analysis (dPCA) for analyzing multiple ChIP-sequencing datasets to identify differential protein-DNA interactions between two biological conditions. dPCA integrates unsupervised pattern discovery, dimension reduction, and statistical inference into a single framework. It uses a small number of principal components to concisely summarize the major multiprotein synergistic differential patterns between the two conditions. For each pattern, it detects and prioritizes differential genomic loci by comparing the between-condition differences with the within-condition variation among replicate samples. dPCA provides a unique tool for efficiently analyzing large amounts of ChIP-sequencing data to study dynamic changes of gene regulation across different biological conditions. We demonstrate this approach through analyses of differential chromatin patterns at transcription factor binding sites and promoters as well as allele-specific protein-DNA interactions.
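A minimal sketch of the core idea as we read it: run a principal component decomposition on the matrix of between-condition differences at each genomic locus, so a few components summarize the dominant multi-protein differential patterns. The data here are synthetic, and this omits dPCA's statistical inference step entirely.

```python
import numpy as np

rng = np.random.default_rng(3)
n_loci, n_datasets = 5000, 8
cond1 = rng.poisson(10.0, size=(n_loci, n_datasets)).astype(float)
cond2 = cond1 + rng.normal(0.0, 2.0, size=(n_loci, n_datasets))

D = cond2 - cond1                    # loci x datasets matrix of differences
D -= D.mean(axis=0)                  # center each dataset column
U, S, Vt = np.linalg.svd(D, full_matrices=False)
explained = S**2 / np.sum(S**2)      # variance captured by each component
pc_scores = U[:, :2] * S[:2]         # per-locus scores on the top 2 patterns
print(explained[:2])
```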
Meteor Shower Activity Derived from "Meteor Watching Public-Campaign" in Japan
NASA Technical Reports Server (NTRS)
Sato, M.; Watanabe, J.
2011-01-01
We analyzed the activity of meteor showers from data accumulated by public meteor-watching campaigns performed as outreach programs. The analyzed campaigns are the Geminids (in 2007 and 2009), Perseids (in 2008 and 2009), Quadrantids (in 2009), and Orionids (in 2009). Thanks to the huge number of reports, the derived time variations of meteor shower activity are very similar to those obtained by skilled visual observers. The hourly rates are about one-fifth (Geminids 2007) or about one-fourth (Perseids 2008) of those from skilled observers, mainly due to poor observing sites such as large cities and urban areas, together with the limited skill of the campaign participants. This shows that it is highly feasible to estimate the time variation of meteor shower activity from such campaigns.
OpenMSI Arrayed Analysis Tools v2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
BOWEN, BENJAMIN; RUEBEL, OLIVER; DE ROND, TRISTAN
2017-02-07
Mass spectrometry imaging (MSI) enables high-resolution spatial mapping of biomolecules in samples and is a valuable tool for the analysis of tissues from plants and animals, microbial interactions, high-throughput screening, drug metabolism, and a host of other applications. This is accomplished by desorbing molecules from the surface at spatially defined locations using a laser or ion beam. These ions are analyzed by a mass spectrometer and collected into an MSI 'image', a dataset containing unique mass spectra from the sampled spatial locations. MSI is used in a diverse and increasing number of biological applications. The OpenMSI Arrayed Analysis Tool (OMAAT) is a new software method that addresses the challenges of analyzing spatially defined samples in large MSI datasets by providing support for automatic sample position optimization and ion selection.
Detecting and Analyzing Genetic Recombination Using RDP4.
Martin, Darren P; Murrell, Ben; Khoosal, Arjun; Muhire, Brejnev
2017-01-01
Recombination between nucleotide sequences is a major process influencing the evolution of most species on Earth. The evolutionary value of recombination has been widely debated and so too has its influence on evolutionary analysis methods that assume nucleotide sequences replicate without recombining. When nucleic acids recombine, the evolution of the daughter or recombinant molecule cannot be accurately described by a single phylogeny. This simple fact can seriously undermine the accuracy of any phylogenetics-based analytical approach which assumes that the evolutionary history of a set of recombining sequences can be adequately described by a single phylogenetic tree. There are presently a large number of available methods and associated computer programs for analyzing and characterizing recombination in various classes of nucleotide sequence datasets. Here we examine the use of some of these methods to derive and test recombination hypotheses using multiple sequence alignments.
NASA Astrophysics Data System (ADS)
Sugioka, Yosuke; Koike, Shunsuke; Nakakita, Kazuyuki; Numata, Daiju; Nonomura, Taku; Asai, Keisuke
2018-06-01
Transonic buffeting phenomena on a three-dimensional swept wing were experimentally analyzed using a fast-response pressure-sensitive paint (PSP). The experiment was conducted using an 80%-scaled NASA Common Research Model in the Japan Aerospace Exploration Agency (JAXA) 2 m × 2 m Transonic Wind Tunnel at a Mach number of 0.85 and a chord Reynolds number of 1.54 × 10^6. The angle of attack was varied between 2.82° and 6.52°. Root-mean-square (RMS) pressure fluctuations and spectral analyses were computed from the measured unsteady PSP images to analyze the phenomena under off-design buffet conditions. We found that two types of shock behavior exist. The first is a shock oscillation characterized by the presence of "buffet cells" formed at a Strouhal number St of 0.3-0.5, which is observed under all off-design conditions. This phenomenon arises at the mid-span of the wing and propagates spanwise from inboard to outboard. The other is a large-spatial-amplitude shock oscillation characterized by low-frequency broadband components at St < 0.1, which appears at higher angles of attack (α ≥ 6.0°) and behaves more like two-dimensional buffet. The transition between these two shock behaviors correlates well with the rapid increase of the RMS wing-root strain fluctuation.
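A small sketch of the image-sequence statistics described above: a per-pixel RMS of the unsteady pressure signal and a Welch power spectrum at a chosen pixel, computed from a (frames, ny, nx) stack of PSP images. The stack is synthetic and the frame rate is an assumed placeholder, not the experiment's value.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(4)
frames = rng.normal(size=(2048, 64, 64))   # synthetic (time, ny, nx) PSP stack
fs = 2000.0                                # assumed frame rate in Hz

fluct = frames - frames.mean(axis=0)       # remove the mean pressure image
rms_map = fluct.std(axis=0)                # RMS pressure-fluctuation map
f, psd = welch(fluct[:, 32, 32], fs=fs, nperseg=256)  # spectrum at one pixel
```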
Playground usage and physical activity levels of children based on playground spatial features.
Reimers, Anne K; Knapp, Guido
2017-01-01
Being outdoors is one of the strongest correlates of physical activity in children. Playgrounds are spaces especially designed to enable and foster physical activity in children. This study aimed to analyze the relationship between the spatial features of public playgrounds and the usage and physical activity levels of children playing in them. A quantitative, observational study was conducted of ten playgrounds in one district of a middle-sized town in Germany. Playground spatial features were captured using an audit instrument and the playground manual of the town. Playground usage and physical activity levels of children were assessed using a modified version of the System for Observing Play and Leisure Activity in Youth. Negative binomial models were used to analyze the count data. The number of children using the playgrounds and the number of children actively playing in them were higher in those with more varied facilities and without naturalness. Girls played more actively in playgrounds without multi-purpose areas. Cleanliness, esthetics, play facility quality, division of functional areas and playground size were not related to any outcome variable. Playground spatial features are related to playground usage and activity levels of the children in the playgrounds. Playgrounds should offer a wide variety of play facilities and provide spaces for diverse play activities to respond to the needs of large numbers of different children and to provide activity-friendly areas enabling their healthy development.
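A hedged sketch of the kind of negative binomial model used for the playground count data: counts of children regressed on playground features. The data and variable names are fabricated for illustration; the study's actual covariates and dispersion handling may differ.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200                                      # observation scans
variety = rng.integers(1, 10, n)             # count of distinct play facilities
natural = rng.integers(0, 2, n)              # 1 = playground has natural elements
mean = np.exp(0.5 + 0.15*variety - 0.4*natural)
children = rng.poisson(mean * rng.gamma(2.0, 0.5, n))  # overdispersed counts

X = sm.add_constant(np.column_stack([variety, natural]))
fit = sm.GLM(children, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(fit.summary())
```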
HGDP and HapMap Analysis by Ancestry Mapper Reveals Local and Global Population Relationships
Magalhães, Tiago R.; Casey, Jillian P.; Conroy, Judith; Regan, Regina; Fitzpatrick, Darren J.; Shah, Naisha; Sobral, João; Ennis, Sean
2012-01-01
Knowledge of human origins, migrations, and expansions is greatly enhanced by the availability of large datasets of genetic information from different populations and by the development of bioinformatic tools used to analyze the data. We present Ancestry Mapper, which we believe improves on existing methods, for the assignment of genetic ancestry to an individual and to study the relationships between local and global populations. The principal function of the method, named Ancestry Mapper, is to give each individual analyzed a genetic identifier, made up of just 51 genetic coordinates, that corresponds to its relationship to the HGDP reference population. As a consequence, the Ancestry Mapper Id (AMid) has intrinsic biological meaning and provides a tool to measure similarity between world populations. We applied Ancestry Mapper to a dataset comprised of the HGDP and HapMap data. The results show distinctions at the continental level, while simultaneously giving details at the population level. We clustered AMids of HGDP/HapMap and observe a recapitulation of human migrations: for a small number of clusters, individuals are grouped according to continental origins; for a larger number of clusters, regional and population distinctions are evident. Calculating distances between AMids allows us to infer ancestry. The number of coordinates is expandable, increasing the power of Ancestry Mapper. An R package called Ancestry Mapper is available to apply this method to any high density genomic data set. PMID:23189146
Kim, Jung Hyeun; Mulholland, George W.; Kukuck, Scott R.; Pui, David Y. H.
2005-01-01
The slip correction factor has been investigated at reduced pressures and high Knudsen number using polystyrene latex (PSL) particles. Nano-differential mobility analyzers (NDMA) were used in determining the slip correction factor by measuring the electrical mobility of 100.7 nm, 269 nm, and 19.90 nm particles as a function of pressure. The aerosol was generated via electrospray to avoid multiplets for the 19.90 nm particles and to reduce the contaminant residue on the particle surface. System pressure was varied down to 8.27 kPa, enabling slip correction measurements for Knudsen numbers as large as 83. A condensation particle counter was modified for low pressure application. The slip correction factor obtained for the three particle sizes is fitted well by the equation: C = 1 + Kn (α + β exp(−γ/Kn)), with α = 1.165, β = 0.483, and γ = 0.997. The first quantitative uncertainty analysis for slip correction measurements was carried out. The expanded relative uncertainty (95% confidence interval) in measuring the slip correction factor was about 2% for the 100.7 nm SRM particles, about 3% for the 19.90 nm PSL particles, and about 2.5% for the 269 nm SRM particles. The major sources of uncertainty are the diameter of the particles, the geometric constant associated with the NDMA, and the voltage. PMID:27308102
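The fitted correlation is simple enough to evaluate directly; here is the reported equation C = 1 + Kn(α + β exp(−γ/Kn)) with the abstract's coefficients, checked over the measured Knudsen number range. Only the function name is our own.

```python
import numpy as np

def slip_correction(Kn, alpha=1.165, beta=0.483, gamma=0.997):
    """C = 1 + Kn*(alpha + beta*exp(-gamma/Kn)) with the fitted coefficients."""
    Kn = np.asarray(Kn, dtype=float)
    return 1.0 + Kn * (alpha + beta * np.exp(-gamma / Kn))

for Kn in (0.1, 1.0, 10.0, 83.0):  # up to the largest measured Knudsen number
    print(Kn, slip_correction(Kn))
```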
A General-Purpose Optimization Engine for Multi-Disciplinary Design Applications
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.; Berke, Laszlo
1996-01-01
A general purpose optimization tool for multidisciplinary applications, which in the literature is known as COMETBOARDS, is being developed at NASA Lewis Research Center. The modular organization of COMETBOARDS includes several analyzers and state-of-the-art optimization algorithms along with their cascading strategy. The code structure allows quick integration of new analyzers and optimizers. The COMETBOARDS code reads input information from a number of data files, formulates a design as a set of multidisciplinary nonlinear programming problems, and then solves the resulting problems. COMETBOARDS can be used to solve a large problem which can be defined through multiple disciplines, each of which can be further broken down into several subproblems. Alternatively, a small portion of a large problem can be optimized in an effort to improve an existing system. Some of the other unique features of COMETBOARDS include design variable formulation, constraint formulation, subproblem coupling strategy, global scaling technique, analysis approximation, use of either sequential or parallel computational modes, and so forth. The special features and unique strengths of COMETBOARDS assist convergence and reduce the amount of CPU time used to solve the difficult optimization problems of aerospace industries. COMETBOARDS has been successfully used to solve a number of problems, including structural design of space station components, design of nozzle components of an air-breathing engine, configuration design of subsonic and supersonic aircraft, mixed flow turbofan engines, wave rotor topped engines, and so forth. This paper introduces the COMETBOARDS design tool and its versatility, which is illustrated by citing examples from structures, aircraft design, and air-breathing propulsion engine design.
Vossoughi, Mehrdad; Ayatollahi, S M T; Towhidi, Mina; Ketabchi, Farzaneh
2012-03-22
The summary measure approach (SMA) is sometimes the only applicable tool for the analysis of repeated measurements in medical research, especially when the number of measurements is relatively large. This study aimed to describe techniques based on summary measures for the analysis of linear-trend repeated measures data and to compare the performance of the SMA, the linear mixed model (LMM), and the unstructured multivariate approach (UMA). Practical guidelines based on the least squares regression slope and the mean response over time for each subject were provided to test time, group, and interaction effects. Through Monte Carlo simulation studies, the efficacy of the SMA vs. the LMM and traditional UMA, under different types of covariance structures, was illustrated. All the methods were also employed to analyze two real data examples. Based on the simulation and example results, the SMA completely dominated the traditional UMA and performed convincingly close to the best-fitting LMM in testing all the effects. However, the LMM was often not robust and led to non-sensible results when the covariance structure for the errors was misspecified. The results emphasize discarding the UMA, which often yielded extremely conservative inferences for such data. It was shown that the summary measure approach is simple, safe, and powerful, with a generally negligible loss of efficiency compared to the best-fitting LMM. The SMA is recommended as the first choice for reliably analyzing linear-trend data with a moderate to large number of measurements and/or small to moderate sample sizes.
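A minimal sketch of the SMA for linear-trend repeated measures, following the guideline stated above: reduce each subject to a least squares slope and a mean, then compare groups with ordinary two-sample tests. The data are simulated for illustration; group sizes, effect sizes, and noise levels are arbitrary.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
t = np.arange(8.0)                                  # 8 repeated measurements

def simulate_group(n, slope):
    """Linear-trend responses with subject-level intercepts and noise."""
    return (slope * t + rng.normal(0, 1, (n, 1))    # random intercepts
            + rng.normal(0, 1, (n, t.size)))        # measurement noise

g1, g2 = simulate_group(20, 0.30), simulate_group(20, 0.55)

def ls_slopes(y):
    """Per-subject least squares slope of response on time."""
    tc = t - t.mean()
    return (y - y.mean(axis=1, keepdims=True)) @ tc / (tc @ tc)

print(stats.ttest_ind(ls_slopes(g1), ls_slopes(g2)))      # interaction (trend) test
print(stats.ttest_ind(g1.mean(axis=1), g2.mean(axis=1)))  # group (level) test
```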
Megger, Dominik A; Padden, Juliet; Rosowski, Kristin; Uszkoreit, Julian; Bracht, Thilo; Eisenacher, Martin; Gerges, Christian; Neuhaus, Horst; Schumacher, Brigitte; Schlaak, Jörg F; Sitek, Barbara
2017-02-10
The proteome analysis of bile fluid represents a promising strategy to identify biomarker candidates for various diseases of the hepatobiliary system. However, to obtain substantive results in biomarker discovery studies, large patient cohorts need to be analyzed, which would lead to an unmanageable number of samples if sample preparation protocols with extensive fractionation methods were applied. Hence, the performance of simple workflows allowing for "one sample, one shot" experiments has been evaluated in this study. In detail, sixteen different protocols involving modifications at the stages of desalting, delipidation, deglycosylation, and tryptic digestion have been examined. Each method was individually evaluated with regard to various performance criteria, and comparative analyses were conducted to uncover possible complementarities. The best performance in terms of proteome coverage was obtained with a combination of acetone precipitation and in-gel digestion. Finally, a mapping of all obtained protein identifications against putative biomarkers for hepatocellular carcinoma (HCC) and cholangiocellular carcinoma (CCC) revealed several proteins easily detectable in bile fluid. These results can form the basis for future studies with large and well-defined patient cohorts in a more disease-related context. Human bile fluid is a proximal body fluid and a potential source of disease markers. However, due to its biochemical composition, the proteome analysis of bile fluid remains a challenging task and is therefore mostly conducted using extensive fractionation procedures, which in turn leads to a high number of mass spectrometric measurements per biological sample. Considering that a high number of biological samples needs to be analyzed in biomarker discovery studies to overcome biological variability, this creates the dilemma of an unmanageable number of necessary MS-based analyses. Hence, simple sample preparation protocols are needed that represent a compromise between proteome coverage and simplicity. In the present study, such protocols have been evaluated with regard to various technical criteria (e.g., identification rates, missed cleavages, chromatographic separation), uncovering the strengths and weaknesses of the various methods. Furthermore, a cumulative bile proteome list has been generated that extends the current bile proteome catalog by 248 proteins. Finally, a mapping against putative biomarkers for hepatocellular carcinoma (HCC) and cholangiocellular carcinoma (CCC) derived from tissue-based studies revealed several of these proteins to be easily and reproducibly detectable in human bile. The presented technical work therefore represents a solid base for future disease-related studies. Copyright © 2016 Elsevier B.V. All rights reserved.
Genomics Portals: integrative web-platform for mining genomics data.
Shinde, Kaustubh; Phatak, Mukta; Freudenberg, Johannes M; Chen, Jing; Li, Qian; Joshi, Vineet K; Hu, Zhen; Ghosh, Krishnendu; Meller, Jaroslaw; Medvedovic, Mario
2010-01-13
A large amount of experimental data generated by modern high-throughput technologies is available through various public repositories. Our knowledge about molecular interaction networks, functional biological pathways and transcriptional regulatory modules is rapidly expanding, and is being organized in lists of functionally related genes. Jointly, these two sources of information hold a tremendous potential for gaining new insights into functioning of living systems. Genomics Portals platform integrates access to an extensive knowledge base and a large database of human, mouse, and rat genomics data with basic analytical visualization tools. It provides the context for analyzing and interpreting new experimental data and the tool for effective mining of a large number of publicly available genomics datasets stored in the back-end databases. The uniqueness of this platform lies in the volume and the diversity of genomics data that can be accessed and analyzed (gene expression, ChIP-chip, ChIP-seq, epigenomics, computationally predicted binding sites, etc), and the integration with an extensive knowledge base that can be used in such analysis. The integrated access to primary genomics data, functional knowledge and analytical tools makes Genomics Portals platform a unique tool for interpreting results of new genomics experiments and for mining the vast amount of data stored in the Genomics Portals backend databases. Genomics Portals can be accessed and used freely at http://GenomicsPortals.org.
Studies in nonlinear problems of energy. Progress report, October 1, 1993--September 30, 1994
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matkowsky, B.J.
1994-09-01
The authors concentrate on modeling, analysis, and large-scale scientific computation of combustion and flame propagation phenomena, with emphasis on the transition from laminar to turbulent combustion. In the transition process a flame passes through a sequence of stages exhibiting increasingly complex spatial and temporal patterns, which serve as signatures identifying each stage. Often the transitions arise via bifurcation. The authors investigate nonlinear dynamics, bifurcation, and pattern formation in the successive stages of transition. They describe the stability of combustion waves and transitions to combustion waves exhibiting progressively higher degrees of spatio-temporal complexity. One aspect of this research program is the systematic derivation of appropriate approximate models from the original models governing combustion; the approximate models are then analyzed. The authors are particularly interested in understanding the basic mechanisms affecting combustion, which is a prerequisite to effective control of the process. They are interested in determining the effects of varying control parameters such as the Nusselt number, Lewis number, heat release, activation energy, Damkohler number, Reynolds number, Prandtl number, and Peclet number. The authors have also considered a number of problems in self-propagating high-temperature synthesis (SHS), in which combustion waves are employed to synthesize advanced materials. Efforts are directed toward understanding fundamental mechanisms. 167 refs.
Noh, Jaeduk Yoshimura; Yasuda, Shigemitu; Sato, Shotaro; Matsumoto, Masako; Kunii, Yo; Noguchi, Yoshihiko; Mukasa, Koji; Ito, Kunihiko; Ito, Koichi; Sugiyama, Osamu; Kobayashi, Hiroshi; Nihojima, Shigeru; Okazaki, Masaru; Yokoyama, Shunji
2009-08-01
The clinical characteristics of myeloperoxidase antineutrophil cytoplasmic antibody (MPO-ANCA)-associated vasculitis caused by antithyroid drugs are still unclear because most reports describe only a small number of patients. The objective was to analyze a large number of patients with MPO-ANCA-associated vasculitis to determine the time of onset, the drug and dose taken, the clinical symptoms, the relationship between the clinical symptoms and the MPO-ANCA titer, and the incidence. We analyzed 92 patients in whom the adverse reaction of MPO-ANCA-associated vasculitis was reported to Chugai Pharmaceutical, a company that markets antithyroid drugs. Of the 92 patients, 41 (44.6%) had single-organ failure, 32 (34.8%) had two-organ failure, 13 (14.1%) had three-organ failure, and two (2.2%) had four-organ failure. The number of organs involved was unknown in the other four patients (4.3%). The median time of onset was 42 months (range, 1-372 months) after starting drug treatment. The median dose at onset of MPO-ANCA-associated vasculitis was 15 mg/d (range, 2.5-45 mg/d) for methimazole and 200 mg/d (range, 50-450 mg/d) for propylthiouracil. The severity and number of organs involved were not correlated with the MPO-ANCA titer. The incidence was between 0.53 and 0.79 patients per 10,000, and the ratio of the estimated incidences for methimazole and propylthiouracil was 1:39.2. The time of onset of MPO-ANCA-associated vasculitis and the dose at onset varied. The severity and number of organs involved were not correlated with the MPO-ANCA titer, indicating a need for vigilance even when the MPO-ANCA titer is only weakly positive.
Inferring epidemiological parameters from phylogenies using regression-ABC: A comparative study
Gascuel, Olivier
2017-01-01
Inferring epidemiological parameters such as the basic reproduction number R0 from time-scaled phylogenies is a timely challenge. Most current approaches rely on likelihood functions, which raise specific issues ranging from computing these functions to finding their maxima numerically. Here, we present a new regression-based Approximate Bayesian Computation (ABC) approach, based on a large variety of summary statistics intended to capture the information contained in the phylogeny and its corresponding lineage-through-time plot. The regression step involves the Least Absolute Shrinkage and Selection Operator (LASSO) method, a robust machine learning technique. It allows us to readily deal with the large number of summary statistics while avoiding Markov Chain Monte Carlo (MCMC) techniques. To compare our approach to existing ones, we simulated target trees under a variety of epidemiological models and settings, and inferred parameters of interest using the same priors. We found that, for large phylogenies, the accuracy of our regression-ABC is comparable to that of likelihood-based approaches involving birth-death processes implemented in BEAST2. Our approach even outperformed these when inferring the host population size with a Susceptible-Infected-Removed epidemiological model. It also clearly outperformed a recent kernel-ABC approach when assuming a Susceptible-Infected epidemiological model with two host types. Lastly, by re-analyzing data from the early stages of the recent Ebola epidemic in Sierra Leone, we showed that regression-ABC provides more realistic estimates for the duration parameters (latency and infectiousness) than the likelihood-based method. Overall, ABC based on a large variety of summary statistics and a regression method able to perform variable selection and avoid overfitting is a promising approach for analyzing large phylogenies. PMID:28263987
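A hedged toy version of the regression-ABC scheme described above: simulate parameters from the prior, compute many summary statistics, and use LASSO to regress the parameter on the statistics; the fitted model then maps observed statistics to an estimate. The "phylogeny" summaries here are stand-in random features, not the paper's actual statistics.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(7)
n_sim, n_stats = 2000, 100
theta = rng.uniform(1.0, 5.0, n_sim)   # parameter (e.g. R0) drawn from the prior

# Summary statistics: only the first 5 of 100 actually carry signal
S = rng.normal(0.0, 1.0, (n_sim, n_stats))
S[:, :5] += theta[:, None]

reg = LassoCV(cv=5).fit(S, theta)      # LASSO performs the variable selection
s_obs = np.concatenate([np.full(5, 3.0), np.zeros(n_stats - 5)])
print(reg.predict(s_obs[None, :]))     # regression-ABC point estimate
```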
Hitzl, Wolfgang; Trinka, Eugen; Seyfang, Leonard; Mutzenbach, Sebastian; Stadler, Katharina; Pikija, Slaven; Killer, Monika; Broussalis, Erasmia
2016-10-01
This study analyzed the number of patients with ischemic strokes recorded in the Austrian Stroke-Unit Registry with the aim of projecting this number from 2012 to 2075 and highlighting that the Austrian health system will face a dramatic increase in older patients within the next few decades. Current demographic information was obtained from EUROSTAT, and information on age- and sex-stratified 1-year incidence rates of ischemic stroke was obtained from the Austrian Stroke-Unit Registry. Sensitivity analysis was performed by analyzing the projections based on the predicted ageing, main, and growth population scenarios, with stratification by age and gender. The total number of ischemic strokes recorded in the Austrian Stroke-Unit Registry was 8,690 in 2012 and is expected to increase to 15,826, 15,626, or 18,134 in 2075 according to the ageing, main, and growth scenarios, respectively. The corresponding numbers of patients are projected to increase or decrease within the different age strata as follows (100% = number of registered ischemic strokes in 2012): 0-40 years, 100%/99% (males/females); 40-50 years, 83%/83%; 50-60 years, 98%/97%; 60-70 years, 126%/119%; 70-80 years, 159%/139%; 80-90 years, 307%/199%; and 90+ years, 894%/413%. The ageing population in Austria will result in the number of patients increasing considerably from 2012 to 2075, to 182%, 180%, or 208% (relative to 100% in 2012) according to the ageing, main, and growth scenarios, respectively; the corresponding value among those aged 80+ years is 315%, 290%, or 347%. These figures demonstrate the importance of improving primary preventive measures. The results of this study should provide a basis for discussions among health-care professionals and economists to face the future large financial burden of ischemic stroke on the Austrian health system.
van den Bussche, Hendrik; Kaduszkiewicz, Hanna; Schäfer, Ingmar; Koller, Daniela; Hansen, Heike; Scherer, Martin; Schön, Gerhard
2016-04-14
By definition, high utilizers receive a large proportion of medical services and produce relatively high costs. The authors report the results of a study on the utilization of ambulatory medical care by the elderly population in Germany in comparison to other OECD countries. Evidence points to excessive utilization in Germany. It is important to document these utilization figures and compare them to those in other countries, since the German healthcare system stopped recording ambulatory healthcare utilization figures in 2008. The study is based on the claims data of all insurants aged ≥ 65 of a statutory health insurance company in Germany (n = 123,224). Utilization was analyzed by the number of contacts with physicians in ambulatory medical care and by the number of different practices contacted over one year. Criteria for frequent attendance were ≥ 50 contacts with practices, or contacts with ≥ 10 different practices, or ≥ 3 practices of the same discipline per year. Descriptive statistical analysis and logistic regression were applied. Morbidity was analyzed by prevalence and relative risk for frequent attendance for 46 chronic diseases. Nineteen percent of the elderly were identified as high utilizers, corresponding to approximately 3.5 million elderly people in Germany. Two main types were identified. One type has many contacts with practices, belongs to the oldest age group, suffers from severe somatic diseases and multimorbidity, and/or is dependent on long-term care. The other type contacts large numbers of practices, consists of younger elderly who often suffer from psychiatric and/or psychosomatic complaints, and is less frequently multimorbid and/or nursing care dependent. We found a very high rate of frequent attendance among the German elderly, which is unique among the OECD countries. Further research should clarify its reasons and whether this degree of utilization is beneficial for elderly people.
Propositional idea density in women's written language over the lifespan: computerized analysis.
Ferguson, Alison; Spencer, Elizabeth; Craig, Hugh; Colyvas, Kim
2014-06-01
The informativeness of written language, as measured by Propositional Idea Density (PD), has been shown to be a sensitive predictive index of language decline with age and dementia in previous research. The present study investigated the influence of age and education on the written language of three large cohorts of women from the general community, born in 1973-78, 1946-51, and 1921-26. Written texts were obtained from the Australian Longitudinal Study on Women's Health, in which participants were invited to respond to an open-ended question about their health. The informativeness of written comments of 10 words or more (90% of the total number of comments) was analyzed using the Computerized Propositional Idea Density Rater 3 (CPIDR-3). Over 2.5 million words used in 37,705 written responses from 19,512 respondents were analyzed. Based on a linear mixed model approach to statistical analysis, with adjustment for several factors including the number of comments per respondent and the number of words per comment, a small but statistically significant effect of age was identified for the older cohort (mean age 78 years). The mean PD per word for this cohort was lower than for the younger and mid-aged cohorts (mean ages 27 and 53 years, respectively), with mean reductions in PD of .006 (95% CI: .003, .008) and .009 (95% CI: .008, .011), respectively. This suggests that PD for this population of women was more stable over the adult lifespan than has been reported previously, even in late old age. There was no statistically significant effect of education level. Computerized analyses were found to greatly facilitate the study of the informativeness of this large corpus of written language. Directions for further research are discussed in relation to the need for extended investigation of the variability of the measure for potential application to the identification of acquired language pathologies. Copyright © 2013 Elsevier Ltd. All rights reserved.
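A crude sketch of the idea behind propositional idea density: propositions are approximated by counting verbs, adjectives, adverbs, prepositions, and conjunctions, divided by the word count. CPIDR-3 applies many adjustment rules on top of this tag counting, so this is only the first-order approximation, with an illustrative tag set.

```python
import nltk  # assumes 'punkt' and 'averaged_perceptron_tagger' are downloaded

PROPOSITION_TAGS = ("VB", "JJ", "RB", "IN", "CC")  # verb, adj., adv., prep., conj.

def idea_density(text):
    """Propositions per word, approximated by POS-tag counting."""
    tokens = [tok for tok in nltk.word_tokenize(text) if tok.isalpha()]
    tags = [tag for _, tag in nltk.pos_tag(tokens)]
    propositions = sum(tag.startswith(PROPOSITION_TAGS) for tag in tags)
    return propositions / len(tokens)

print(idea_density("My health has been good but I often feel tired lately."))
```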
The 2030 Problem: Caring for Aging Baby Boomers
Knickman, James R; Snell, Emily K
2002-01-01
Objective To assess the coming challenges of caring for large numbers of frail elderly as the Baby Boom generation ages. Study Setting A review of economic and demographic data as well as simulations of projected socioeconomic and demographic patterns in the year 2030 form the basis of a review of the challenges related to caring for seniors that need to be faced by society. Study Design A series of analyses are used to consider the challenges related to caring for elders in the year 2030: (1) measures of macroeconomic burden are developed and analyzed, (2) the literatures on trends in disability, payment approaches for long-term care, healthy aging, and cultural views of aging are analyzed and synthesized, and (3) simulations of future income and asset patterns of the Baby Boom generation are developed. Principal Findings The economic burden of aging in 2030 should be no greater than the economic burden associated with raising large numbers of baby boom children in the 1960s. The real challenges of caring for the elderly in 2030 will involve: (1) making sure society develops payment and insurance systems for long-term care that work better than existing ones, (2) taking advantage of advances in medicine and behavioral health to keep the elderly as healthy and active as possible, (3) changing the way society organizes community services so that care is more accessible, and (4) altering the cultural view of aging to make sure all ages are integrated into the fabric of community life. Conclusions To meet the long-term care needs of Baby Boomers, social and public policy changes must begin soon. Meeting the financial and social service burdens of growing numbers of elders will not be a daunting task if necessary changes are made now rather than when Baby Boomers actually need long-term care. PMID:12236388
Examining the impact of mental illness and substance use on recidivism in a county jail.
Wilson, Amy Blank; Draine, Jeffrey; Hadley, Trevor; Metraux, Steve; Evans, Arthur
2011-01-01
This paper describes the recidivism patterns over a 4-year period for a cohort of people admitted to a large US urban jail system in 2003 and analyzes how these patterns vary based on the presence of mental illness and substance abuse. Jail detention and behavioral health service records were merged for all admissions to a large urban jail system in 2003 (N=24,290). Descriptive statistics were used to analyze the recidivism patterns of people admitted to jail in 2003 (N=20,112) over a four-year period. Recidivism patterns of people without mental illness or substance use disorders were compared with those of people with serious mental illness, substance abuse disorders, and dual diagnoses. These analyses found that over half of the people who returned to jail during the 4-year follow-up period did so in the first year. This finding did not differ by diagnostic category. Analysis of the number of people readmitted to the jail found that people who had a diagnosis of mental illness alone had the lowest number of readmissions to jail in the 4 years after release, with 50% having at least one readmission after their initial release. People with dual diagnoses, in contrast, had the highest number of readmissions to jail during the study time frame, with 68% having at least one readmission during the 4 years after release. Substance use is a driving force behind the recidivism of people with mental illness leaving a US urban jail. These findings illustrate the importance of developing interventions that provide timely access to intensive co-occurring substance abuse and mental health treatment during the immediate period after release, capable of addressing both the individual and environmental factors that promote the return to drug use after release. Copyright © 2011 Elsevier Ltd. All rights reserved.
Full long-term design response analysis of a wave energy converter
Coe, Ryan G.; Michelen, Carlos; Eckert-Gallup, Aubrey; ...
2017-09-21
Efficient design of wave energy converters requires an accurate understanding of expected loads and responses during the deployment lifetime of a device. A study has been conducted to better understand best practices for the prediction of design responses in a wave energy converter. A case study was performed in which a simplified wave energy converter was analyzed to predict several important device design responses. The application and performance of a full long-term analysis, in which numerical simulations were used to predict the device response for a large number of distinct sea states, was studied. Environmental characterization and selection of sea states for this analysis at the intended deployment site were performed using principal components analysis. The full long-term analysis applied here was shown to be stable when implemented with a relatively low number of sea states and convergent with an increasing number of sea states. As the number of sea states utilized in the analysis was increased, predicted response levels did not change appreciably. Furthermore, uncertainty in the response levels was reduced as more sea states were utilized.
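A toy version of the full long-term analysis idea: the long-term exceedance probability of a response level is the probability-weighted sum of the conditional, per-sea-state exceedance probabilities. The numbers are synthetic and the Rayleigh peak assumption is a common simplification, not necessarily the one used in the study.

```python
import numpy as np

rng = np.random.default_rng(8)
n_states = 50
p_state = rng.dirichlet(np.ones(n_states))  # sea-state occurrence probabilities
sigma = rng.uniform(0.5, 3.0, n_states)     # per-state response scale

def longterm_exceedance(level):
    """Probability-weighted sum of conditional exceedance probabilities,
    assuming Rayleigh-distributed response peaks within each sea state."""
    p_cond = np.exp(-level**2 / (2.0 * sigma**2))
    return float(np.sum(p_state * p_cond))

for level in (2.0, 4.0, 6.0):
    print(level, longterm_exceedance(level))
```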
Gil, María del Mar; Palmer, Miquel; Grau, Amalia; Balle, Salvador
2015-01-01
Most reintroduction and restocking programs consist of releasing captive-raised juveniles. The usefulness of these programs has been questioned, and therefore quality control is advisable. However, evaluating restocking effectiveness is challenging because mortality estimation is required. Most methods for estimating mortality are based on tag recovery. In the case of fish, juveniles are tagged before release, and fishermen typically recover tags when fish are captured. The statistical models currently available for analyzing these data assume either constant mortality rates, fixed tag non-reporting rates, or both. Here, instead, we propose a method that considers the variability of the mortality rate as a function of the age/size of the released juveniles. Furthermore, the proposed method can disentangle natural from fishing mortality by analyzing the temporal distribution of the captures reported by fishermen from multiple release events. This method is demonstrated with a restocking program for a top-predator marine fish, the meagre (Argyrosomus regius), in the Balearic Islands. The estimated natural mortality just after release was very high for young fish (m0 = 0.126 day^-1 for fish 180 days old) but close to zero for large/old fish. These large/old fish were more resilient to wild conditions, although a long time was needed to achieve a relevant reduction in natural mortality. Conversely, these large/old fish were more vulnerable to fishing, creating a trade-off in survival. The release age that maximizes the number of survivors after, for example, one year at liberty was estimated to be 1,173 days. However, the production cost of relatively old fish is high, and only a few fish can be produced and released within a realistic budget. Therefore, in the case of the meagre, increasing the number of released fish will have little or no effect on restocking success. Conversely, it is advisable to implement measures to reduce the high natural mortality of young juveniles and/or the length of time needed to improve fish resilience. PMID:26394242
Gene expression profiles of changes underlying different-sized human rotator cuff tendon tears.
Chaudhury, Salma; Xia, Zhidao; Thakkar, Dipti; Hakimi, Osnat; Carr, Andrew J
2016-10-01
Progressive cellular and extracellular matrix (ECM) changes related to age and disease severity have been demonstrated in rotator cuff tendon tears. Larger rotator cuff tears demonstrate structural abnormalities that potentially adversely influence healing potential. This study aimed to gain greater insight into the relationship of pathologic changes to tear size by analyzing gene expression profiles from normal rotator cuff tendons, small rotator cuff tears, and large rotator cuff tears. We analyzed gene expression profiles of 28 human rotator cuff tendons using microarrays representing the entire genome; 11 large and 5 small torn rotator cuff tendon specimens were obtained intraoperatively from tear edges, which we compared with 12 age-matched normal controls. We performed real-time polymerase chain reaction and immunohistochemistry for validation. Torn rotator cuff tendons demonstrated upregulation of a number of key genes, such as matrix metalloproteinase 3, 10, 12, 13, 15, 21, and 25; a disintegrin and metalloproteinase (ADAM) 12, 15, and 22; and aggrecan. Amyloid was downregulated in all tears. Small tears displayed upregulation of bone morphogenetic protein 5. Chemokines and cytokines that may play a role in chemotaxis were altered; interleukins 3, 10, 13, and 15 were upregulated in tears, whereas interleukins 1, 8, 11, 18, and 27 were downregulated. The gene expression profiles of normal controls and small and large rotator cuff tear groups differ significantly. Extracellular matrix remodeling genes were found to contribute to rotator cuff tear pathogenesis. Rotator cuff tears displayed upregulation of a number of matrix metalloproteinase (3, 10, 12, 13, 15, 21, and 25), a disintegrin and metalloproteinase (ADAM 12, 15, and 22) genes, and downregulation of some interleukins (1, 8, and 27), which play important roles in chemotaxis. These gene products may potentially have a role as biomarkers of failure of healing or therapeutic targets to improve tendon healing. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Jeltsch, Florian; Wurst, Susanne
2015-01-01
Small scale distribution of insect root herbivores may promote plant species diversity by creating patches of different herbivore pressure. However, determinants of small scale distribution of insect root herbivores, and impact of land use intensity on their small scale distribution are largely unknown. We sampled insect root herbivores and measured vegetation parameters and soil water content along transects in grasslands of different management intensity in three regions in Germany. We calculated community-weighted mean plant traits to test whether the functional plant community composition determines the small scale distribution of insect root herbivores. To analyze spatial patterns in plant species and trait composition and insect root herbivore abundance we computed Mantel correlograms. Insect root herbivores mainly comprised click beetle (Coleoptera, Elateridae) larvae (43%) in the investigated grasslands. Total insect root herbivore numbers were positively related to community-weighted mean traits indicating high plant growth rates and biomass (specific leaf area, reproductive- and vegetative plant height), and negatively related to plant traits indicating poor tissue quality (leaf C/N ratio). Generalist Elaterid larvae, when analyzed independently, were also positively related to high plant growth rates and furthermore to root dry mass, but were not related to tissue quality. Insect root herbivore numbers were not related to plant cover, plant species richness and soil water content. Plant species composition and to a lesser extent plant trait composition displayed spatial autocorrelation, which was not influenced by land use intensity. Insect root herbivore abundance was not spatially autocorrelated. We conclude that in semi-natural grasslands with a high share of generalist insect root herbivores, insect root herbivores affiliate with large, fast growing plants, presumably because of availability of high quantities of food. Affiliation of insect root herbivores with large, fast growing plants may counteract dominance of those species, thus promoting plant diversity. PMID:26517119
Supersymmetric Sachdev-Ye-Kitaev models
Fu, Wenbo; Gaiotto, Davide; Maldacena, Juan; ...
2017-01-13
We discuss a supersymmetric generalization of the Sachdev-Ye-Kitaev (SYK) model. These are quantum mechanical models involving N Majorana fermions. The supercharge is given by a polynomial expression in terms of the Majorana fermions with random coefficients. The Hamiltonian is the square of the supercharge. The N = 1 model with a single supercharge has unbroken supersymmetry at large N, but nonperturbatively spontaneously broken supersymmetry in the exact theory. We analyze the model by looking at the large N equation, and also by performing numerical computations for small values of N. We also compute the large N spectrum of “singlet” operators, where we find a structure qualitatively similar to that of the ordinary SYK model. We also discuss an N = 2 version. In this case, the model preserves supersymmetry in the exact theory and we can compute a suitably weighted Witten index to count the number of ground states, which agrees with the large N computation of the entropy. In both cases, we discuss the supersymmetric generalizations of the Schwarzian action, which give the dominant effects at low energies.
SignalPlant: an open signal processing software platform.
Plesinger, F; Jurco, J; Halamek, J; Jurak, P
2016-07-01
The growing technical standard of acquisition systems allows the acquisition of large records, often reaching gigabytes or more in size, as is the case with whole-day electroencephalograph (EEG) recordings, for example. Although current 64-bit software for signal processing is able to process (e.g., filter and analyze) such data, visual inspection and labeling will probably suffer from rather long latency during the rendering of large portions of recorded signals. For this reason, we have developed SignalPlant, a stand-alone application for signal inspection, labeling, and processing. The main motivation was to supply investigators with a tool allowing fast and interactive work with large multichannel records produced by EEG, electrocardiograph, and similar devices. The rendering latency was compared with that of EEGLAB and proved significantly faster when displaying an image from a large number of samples (e.g., 163 times faster for 75 × 10^6 samples). The presented SignalPlant software is available for free and does not depend on any other computation software. Furthermore, it can be extended with plugins by third parties, ensuring its adaptability to future research tasks and new data formats.
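A sketch of the min/max decimation trick commonly used to render very long signals quickly: each screen column shows the minimum and maximum of its chunk, so a multi-million-sample trace collapses to a few thousand plotted points. This illustrates the rendering problem discussed above; it is not SignalPlant's actual implementation.

```python
import numpy as np

def minmax_decimate(signal, n_columns):
    """Per-column min and max of equal chunks; enough to draw the envelope."""
    usable = len(signal) - len(signal) % n_columns
    chunks = signal[:usable].reshape(n_columns, -1)
    return chunks.min(axis=1), chunks.max(axis=1)

sig = np.random.default_rng(9).normal(size=10_000_000).astype(np.float32)
lo, hi = minmax_decimate(sig, 2000)  # 2000 screen columns instead of 10^7 points
```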
Design of apochromatic lens with large field and high definition for machine vision.
Yang, Ao; Gao, Xingyu; Li, Mingfeng
2016-08-01
Precise machine vision detection of a large object at a finite working distance (WD) requires a lens with high resolution over a large field of view (FOV). In this case, the effect of the secondary spectrum on image quality is not negligible. According to the detection requirements, a high-resolution apochromatic objective is designed and analyzed. The initial optical structure (IOS) is assembled from three segments. Next, the secondary spectrum of the IOS is corrected by replacing glasses using the dispersion vector analysis method based on the Buchdahl dispersion equation. Other aberrations are optimized with the commercial optical design software ZEMAX by properly choosing the optimization function operands. The optimized optical structure (OOS) has an f-number (F/#) of 3.08, a FOV of φ60 mm, a WD of 240 mm, and a modulation transfer function (MTF) above 0.1 at 320 cycles/mm for all fields. The design requirements for a non-fluorite-material apochromatic objective lens with a large field and high definition for machine vision detection have been achieved.
Identifiability of large-scale non-linear dynamic network models applied to the ADM1-case study.
Nimmegeers, Philippe; Lauwers, Joost; Telen, Dries; Logist, Filip; Impe, Jan Van
2017-06-01
In this work, both the structural and practical identifiability of the Anaerobic Digestion Model no. 1 (ADM1) is investigated, which serves as a relevant case study of large non-linear dynamic network models. The structural identifiability is investigated using the probabilistic algorithm, adapted to deal with the specifics of the case study (i.e., a large-scale non-linear dynamic system of differential and algebraic equations). The practical identifiability is analyzed using a Monte Carlo parameter estimation procedure for a 'non-informative' and 'informative' experiment, which are heuristically designed. The model structure of ADM1 has been modified by replacing parameters by parameter combinations, to provide a generally locally structurally identifiable version of ADM1. This means that in an idealized theoretical situation, the parameters can be estimated accurately. Furthermore, the generally positive structural identifiability results can be explained from the large number of interconnections between the states in the network structure. This interconnectivity, however, is also observed in the parameter estimates, making uncorrelated parameter estimations in practice difficult. Copyright © 2017. Published by Elsevier Inc.
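A miniature of the Monte Carlo practical-identifiability check described above: refit the model to many noisy synthetic datasets and inspect the spread and correlation of the estimates. A two-parameter exponential stands in for the much larger ADM1; highly correlated estimates would signal poor practical identifiability.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, k):
    """Two-parameter stand-in model (ADM1 itself has dozens of parameters)."""
    return a * np.exp(-k * t)

t = np.linspace(0.0, 10.0, 25)
true_params = (2.0, 0.4)
rng = np.random.default_rng(10)

estimates = []
for _ in range(500):                   # refit to many noisy synthetic datasets
    y = model(t, *true_params) + rng.normal(0.0, 0.05, t.size)
    popt, _ = curve_fit(model, t, y, p0=(1.0, 1.0))
    estimates.append(popt)
estimates = np.asarray(estimates)
print(estimates.mean(axis=0))          # compare with the true parameters
print(np.corrcoef(estimates.T)[0, 1])  # correlation between the two estimates
```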
Software engineering the mixed model for genome-wide association studies on large samples.
Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J
2009-11-01
Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
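For reference, the mixed model reviewed here is commonly written (in standard textbook notation; package-specific parameterizations may differ) as

    y = X\beta + Zu + \varepsilon, \qquad u \sim \mathcal{N}(0, \sigma_g^2 K), \qquad \varepsilon \sim \mathcal{N}(0, \sigma_e^2 I),

where y is the phenotype vector, the fixed effects X include the tested marker and population-structure covariates, K is the kinship matrix capturing relatedness, and the variance components \sigma_g^2 and \sigma_e^2 are estimated, for example, by REML.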
Effectively-truncated large-scale shell-model calculations and nuclei around 100Sn
NASA Astrophysics Data System (ADS)
Gargano, A.; Coraggio, L.; Itaco, N.
2017-09-01
This paper presents a short overview of a procedure we have recently introduced, dubbed the double-step truncation method, which aims to reduce the computational complexity of large-scale shell-model calculations. Within this procedure, one starts with a realistic shell-model Hamiltonian defined in a large model space; then, by analyzing the effective single-particle energies of this Hamiltonian as a function of the number of valence protons and/or neutrons, reduced model spaces are identified that contain only the single-particle orbitals relevant to the description of the spectroscopic properties of a certain class of nuclei. As a final step, new effective shell-model Hamiltonians defined within the reduced model spaces are derived by way of a unitary transformation of the original large-scale Hamiltonian. A detailed account of this transformation is given, and the merit of the double-step truncation method is illustrated by discussing a few selected results for 96Mo, described as four protons and four neutrons outside 88Sr. Some new preliminary results for light odd-tin isotopes from A = 101 to 107 are also reported.
Radiative Transfer Model for Operational Retrieval of Cloud Parameters from DSCOVR-EPIC Measurements
NASA Astrophysics Data System (ADS)
Yang, Y.; Molina Garcia, V.; Doicu, A.; Loyola, D. G.
2016-12-01
The Earth Polychromatic Imaging Camera (EPIC) onboard the Deep Space Climate Observatory (DSCOVR) measures the radiance in the backscattering region. To make sure that all details in the backward glory are covered, a large number of streams is required by a standard radiative transfer model based on the discrete ordinates method; even the use of the delta-M scaling and the TMS correction does not substantially reduce the number of streams. The aim of this work is to analyze the capability of a fast radiative transfer model to retrieve cloud parameters operationally from EPIC measurements. The radiative transfer model combines the discrete ordinates method with the matrix exponential for the computation of radiances and the matrix operator method for the calculation of the reflection and transmission matrices. Standard acceleration techniques, such as the use of normalized right and left eigenvectors, the telescoping technique, the Padé approximation and the successive-order-of-scattering approximation, are implemented. In addition, the model may compute the reflection matrix of the cloud by means of asymptotic theory, and may use the equivalent Lambertian cloud model. The various approximations are analyzed from the point of view of efficiency and accuracy.
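For context, the matrix operator (adding) method mentioned above builds the reflection and transmission of a stack from those of its sublayers. Schematically, in one common convention (a sketch, not necessarily the exact formulation used in this model), the reflection matrix of two combined layers is

    R_{12} = R_1 + \tilde{T}_1 R_2 (I - \tilde{R}_1 R_2)^{-1} T_1,

where tildes denote illumination from below and the matrix inverse resums the multiple reflections bouncing between the two layers.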
Factors affecting death at home in Japan.
Sauvaget, C; Tsuji, I; Li, J H; Hosokawa, T; Fukao, A; Hisamichi, S
1996-10-01
Despite the wish of the Japanese people to spend their final moments at home, the percentage of deaths at home among the elderly is decreasing. Moreover, large variations in this rate were observed across the country. The present ecological study analyzed the relationship between the percentage of deaths at home for decedents aged 70 and over and demographic, medical and socioeconomic characteristics. The data, published in 1990 by the Japanese National Government, were analyzed by correlation, principal-component, and multiple linear regression analyses. The results showed that the percentage of deaths at home for decedents aged 70 and over was positively associated with the number of persons per household and the floor area per house. The divorce rate, the national tax per capita, and the mean length of hospitalization for stroke showed a negative association with the percentage of deaths at home. In the prefectures where the crude death rates of stroke and senility were high, the elderly were more likely to die at home. These results suggest the importance of the number of family caregivers and of housing conditions for terminal care at home. This research may help improve home medical assistance, which is still underdeveloped in Japan.
Planning Of Beef Cattle Development in District Blora, Central Java, Indonesia
NASA Astrophysics Data System (ADS)
Santoso, Budi; Prasetiyono, Bambang Waluyo Hadi Eko
2018-02-01
Continuity of meat supply is generally related to the number and production of livestock in a region. Therefore, a framework for sustainable livestock development is needed to increase the production and productivity of livestock. Blora Regency is one of the areas in the Province of Central Java with the largest number of large livestock, primarily beef cattle: it has a population of 199,584 beef cattle. Agricultural waste in Blora Regency can be used to support the availability of feed for the livestock sector. This is further supported by the very abundant availability of forage feed. Based on these potentials, it is necessary to assess the characteristics of natural land for the development of beef cattle farms. Therefore, the objectives of this study are (1) to assess the environmental suitability for the development of cattle farming under grazing and stall-based systems; (2) to analyze the potential of forage sources of fodder and the carrying capacity for beef cattle farming; (3) to analyze the centers of activity for the development of beef cattle; (4) to formulate the direction and strategy of beef cattle development in Blora Regency.
Increasing Scalability of Researcher Network Extraction from the Web
NASA Astrophysics Data System (ADS)
Asada, Yohei; Matsuo, Yutaka; Ishizuka, Mitsuru
Social networks, which describe relations among people or organizations as a network, have recently attracted attention. With the help of a social network, we can analyze the structure of a community and thereby promote efficient communication within it. We investigate the problem of extracting a network of researchers from the Web, to assist efficient cooperation among researchers. Our method uses a search engine to get the co-occurrences of the names of two researchers and calculates the strength of the relation between them. Then we label the relation by analyzing the Web pages in which these two names co-occur. Research on social network extraction using search engines, such as ours, is attracting attention in Japan as well as abroad. However, previous approaches issue too many queries to search engines to extract a large-scale network. In this paper, we propose a method that filters superfluous queries and facilitates the extraction of large-scale networks. With this method we are able to extract a network of around 3000 nodes. Our experimental results show that the proposed method reduces the number of queries significantly while preserving the quality of the network as compared to former methods.
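As an illustration of the co-occurrence approach (our sketch; the paper's actual strength measure, thresholds and filtering rules may differ), the relation strength between two researchers can be computed from search engine hit counts, and pairs whose individual hit counts are too small can be filtered out before the expensive co-occurrence query is issued. Here hit_count is a hypothetical wrapper around a search engine API:

    def relation_strength(name_a: str, name_b: str, hit_count,
                          min_hits: int = 10) -> float:
        """Jaccard-style strength of the relation between two researchers.

        hit_count(query) -> int is a hypothetical function returning the
        number of Web pages matching the query. Pairs with too few
        individual hits are filtered without issuing the co-occurrence
        query, which is how superfluous queries are avoided.
        """
        n_a = hit_count(f'"{name_a}"')
        n_b = hit_count(f'"{name_b}"')
        if min(n_a, n_b) < min_hits:
            return 0.0                      # filtered: skip the third query
        n_ab = hit_count(f'"{name_a}" "{name_b}"')
        union = n_a + n_b - n_ab
        return n_ab / union if union > 0 else 0.0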
NASA Technical Reports Server (NTRS)
Debussche, A.; Dubois, T.; Temam, R.
1993-01-01
Using results of Direct Numerical Simulation (DNS) in the case of two-dimensional homogeneous isotropic flows, the behavior of the small and large scales of Kolmogorov-like flows at moderate Reynolds numbers is first analyzed in detail. Several estimates on the time variations of the small eddies and the nonlinear interaction terms were derived; those terms play the role of the Reynolds stress tensor in the case of LES. Since the time step of a numerical scheme is determined as a function of the energy-containing eddies of the flow, the variations of the small scales and of the nonlinear interaction terms over one iteration can become negligible by comparison with the accuracy of the computation. Based on this remark, a multilevel scheme which treats the small and the large eddies differently was proposed. Using mathematical developments, estimates were derived for all the parameters involved in the algorithm, which then becomes a completely self-adaptive procedure. Finally, realistic simulations of (Kolmogorov-like) flows over several eddy-turnover times were performed. The results are analyzed in detail and a parametric study of the nonlinear Galerkin method is performed.
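The scale separation underlying such multilevel schemes can be written compactly. As a hedged sketch in the spirit of the nonlinear Galerkin method (the paper's exact closure may differ), let P_m project onto the first m spectral modes and Q_m = I - P_m; decomposing the velocity as u = y + z with y = P_m u (large eddies) and z = Q_m u (small eddies), the large scales evolve dynamically,

    \frac{dy}{dt} + \nu A y + P_m B(y + z, y + z) = P_m f,

while the small scales are slaved quasi-statically,

    \nu A z + Q_m B(y, y) \approx Q_m f,

so z (and the interaction terms it generates) needs to be updated only every few time steps of y.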
The ENIGMA Consortium: large-scale collaborative analyses of neuroimaging and genetic data.
Thompson, Paul M; Stein, Jason L; Medland, Sarah E; Hibar, Derrek P; Vasquez, Alejandro Arias; Renteria, Miguel E; Toro, Roberto; Jahanshad, Neda; Schumann, Gunter; Franke, Barbara; Wright, Margaret J; Martin, Nicholas G; Agartz, Ingrid; Alda, Martin; Alhusaini, Saud; Almasy, Laura; Almeida, Jorge; Alpert, Kathryn; Andreasen, Nancy C; Andreassen, Ole A; Apostolova, Liana G; Appel, Katja; Armstrong, Nicola J; Aribisala, Benjamin; Bastin, Mark E; Bauer, Michael; Bearden, Carrie E; Bergmann, Orjan; Binder, Elisabeth B; Blangero, John; Bockholt, Henry J; Bøen, Erlend; Bois, Catherine; Boomsma, Dorret I; Booth, Tom; Bowman, Ian J; Bralten, Janita; Brouwer, Rachel M; Brunner, Han G; Brohawn, David G; Buckner, Randy L; Buitelaar, Jan; Bulayeva, Kazima; Bustillo, Juan R; Calhoun, Vince D; Cannon, Dara M; Cantor, Rita M; Carless, Melanie A; Caseras, Xavier; Cavalleri, Gianpiero L; Chakravarty, M Mallar; Chang, Kiki D; Ching, Christopher R K; Christoforou, Andrea; Cichon, Sven; Clark, Vincent P; Conrod, Patricia; Coppola, Giovanni; Crespo-Facorro, Benedicto; Curran, Joanne E; Czisch, Michael; Deary, Ian J; de Geus, Eco J C; den Braber, Anouk; Delvecchio, Giuseppe; Depondt, Chantal; de Haan, Lieuwe; de Zubicaray, Greig I; Dima, Danai; Dimitrova, Rali; Djurovic, Srdjan; Dong, Hongwei; Donohoe, Gary; Duggirala, Ravindranath; Dyer, Thomas D; Ehrlich, Stefan; Ekman, Carl Johan; Elvsåshagen, Torbjørn; Emsell, Louise; Erk, Susanne; Espeseth, Thomas; Fagerness, Jesen; Fears, Scott; Fedko, Iryna; Fernández, Guillén; Fisher, Simon E; Foroud, Tatiana; Fox, Peter T; Francks, Clyde; Frangou, Sophia; Frey, Eva Maria; Frodl, Thomas; Frouin, Vincent; Garavan, Hugh; Giddaluru, Sudheer; Glahn, David C; Godlewska, Beata; Goldstein, Rita Z; Gollub, Randy L; Grabe, Hans J; Grimm, Oliver; Gruber, Oliver; Guadalupe, Tulio; Gur, Raquel E; Gur, Ruben C; Göring, Harald H H; Hagenaars, Saskia; Hajek, Tomas; Hall, Geoffrey B; Hall, Jeremy; Hardy, John; Hartman, Catharina A; Hass, Johanna; Hatton, Sean N; Haukvik, Unn K; Hegenscheid, Katrin; Heinz, Andreas; Hickie, Ian B; Ho, Beng-Choon; Hoehn, David; Hoekstra, Pieter J; Hollinshead, Marisa; Holmes, Avram J; Homuth, Georg; Hoogman, Martine; Hong, L Elliot; Hosten, Norbert; Hottenga, Jouke-Jan; Hulshoff Pol, Hilleke E; Hwang, Kristy S; Jack, Clifford R; Jenkinson, Mark; Johnston, Caroline; Jönsson, Erik G; Kahn, René S; Kasperaviciute, Dalia; Kelly, Sinead; Kim, Sungeun; Kochunov, Peter; Koenders, Laura; Krämer, Bernd; Kwok, John B J; Lagopoulos, Jim; Laje, Gonzalo; Landen, Mikael; Landman, Bennett A; Lauriello, John; Lawrie, Stephen M; Lee, Phil H; Le Hellard, Stephanie; Lemaître, Herve; Leonardo, Cassandra D; Li, Chiang-Shan; Liberg, Benny; Liewald, David C; Liu, Xinmin; Lopez, Lorna M; Loth, Eva; Lourdusamy, Anbarasu; Luciano, Michelle; Macciardi, Fabio; Machielsen, Marise W J; Macqueen, Glenda M; Malt, Ulrik F; Mandl, René; Manoach, Dara S; Martinot, Jean-Luc; Matarin, Mar; Mather, Karen A; Mattheisen, Manuel; Mattingsdal, Morten; Meyer-Lindenberg, Andreas; McDonald, Colm; McIntosh, Andrew M; McMahon, Francis J; McMahon, Katie L; Meisenzahl, Eva; Melle, Ingrid; Milaneschi, Yuri; Mohnke, Sebastian; Montgomery, Grant W; Morris, Derek W; Moses, Eric K; Mueller, Bryon A; Muñoz Maniega, Susana; Mühleisen, Thomas W; Müller-Myhsok, Bertram; Mwangi, Benson; Nauck, Matthias; Nho, Kwangsik; Nichols, Thomas E; Nilsson, Lars-Göran; Nugent, Allison C; Nyberg, Lars; Olvera, Rene L; Oosterlaan, Jaap; Ophoff, Roel A; Pandolfo, Massimo; Papalampropoulou-Tsiridou, Melina; Papmeyer, Martina; 
Paus, Tomas; Pausova, Zdenka; Pearlson, Godfrey D; Penninx, Brenda W; Peterson, Charles P; Pfennig, Andrea; Phillips, Mary; Pike, G Bruce; Poline, Jean-Baptiste; Potkin, Steven G; Pütz, Benno; Ramasamy, Adaikalavan; Rasmussen, Jerod; Rietschel, Marcella; Rijpkema, Mark; Risacher, Shannon L; Roffman, Joshua L; Roiz-Santiañez, Roberto; Romanczuk-Seiferth, Nina; Rose, Emma J; Royle, Natalie A; Rujescu, Dan; Ryten, Mina; Sachdev, Perminder S; Salami, Alireza; Satterthwaite, Theodore D; Savitz, Jonathan; Saykin, Andrew J; Scanlon, Cathy; Schmaal, Lianne; Schnack, Hugo G; Schork, Andrew J; Schulz, S Charles; Schür, Remmelt; Seidman, Larry; Shen, Li; Shoemaker, Jody M; Simmons, Andrew; Sisodiya, Sanjay M; Smith, Colin; Smoller, Jordan W; Soares, Jair C; Sponheim, Scott R; Sprooten, Emma; Starr, John M; Steen, Vidar M; Strakowski, Stephen; Strike, Lachlan; Sussmann, Jessika; Sämann, Philipp G; Teumer, Alexander; Toga, Arthur W; Tordesillas-Gutierrez, Diana; Trabzuni, Daniah; Trost, Sarah; Turner, Jessica; Van den Heuvel, Martijn; van der Wee, Nic J; van Eijk, Kristel; van Erp, Theo G M; van Haren, Neeltje E M; van 't Ent, Dennis; van Tol, Marie-Jose; Valdés Hernández, Maria C; Veltman, Dick J; Versace, Amelia; Völzke, Henry; Walker, Robert; Walter, Henrik; Wang, Lei; Wardlaw, Joanna M; Weale, Michael E; Weiner, Michael W; Wen, Wei; Westlye, Lars T; Whalley, Heather C; Whelan, Christopher D; White, Tonya; Winkler, Anderson M; Wittfeld, Katharina; Woldehawariat, Girma; Wolf, Christiane; Zilles, David; Zwiers, Marcel P; Thalamuthu, Anbupalam; Schofield, Peter R; Freimer, Nelson B; Lawrence, Natalia S; Drevets, Wayne
2014-06-01
The Enhancing NeuroImaging Genetics through Meta-Analysis (ENIGMA) Consortium is a collaborative network of researchers working together on a range of large-scale studies that integrate data from 70 institutions worldwide. Organized into Working Groups that tackle questions in neuroscience, genetics, and medicine, ENIGMA studies have analyzed neuroimaging data from over 12,826 subjects. In addition, data from 12,171 individuals were provided by the CHARGE consortium for replication of findings, in a total of 24,997 subjects. By meta-analyzing results from many sites, ENIGMA has detected factors that affect the brain that no individual site could detect on its own, and that require larger numbers of subjects than any individual neuroimaging study has currently collected. ENIGMA's first project was a genome-wide association study identifying common variants in the genome associated with hippocampal volume or intracranial volume. Continuing work is exploring genetic associations with subcortical volumes (ENIGMA2) and white matter microstructure (ENIGMA-DTI). Working groups also focus on understanding how schizophrenia, bipolar illness, major depression and attention deficit/hyperactivity disorder (ADHD) affect the brain. We review the current progress of the ENIGMA Consortium, along with challenges and unexpected discoveries made on the way.
Schmitter, Daniel; Wachowicz, Paulina; Sage, Daniel; Chasapi, Anastasia; Xenarios, Ioannis; Simanis; Unser, Michael
2013-01-01
The yeast Schizosaccharomyces pombe is frequently used as a model for studying the cell cycle. The cells are rod-shaped and divide by medial fission. The process of cell division, or cytokinesis, is controlled by a network of signaling proteins called the Septation Initiation Network (SIN); SIN proteins associate with the spindle pole bodies (SPBs) during nuclear division (mitosis). Some SIN proteins associate with both SPBs early in mitosis, and then display strongly asymmetric signal intensity at the SPBs in late mitosis, just before cytokinesis. This asymmetry is thought to be important for correct regulation of SIN signaling and for the coordination of cytokinesis and mitosis. In order to study the dynamics of organelles or large protein complexes such as the SPB, which have been labeled with a fluorescent protein tag in living cells, a number of image analysis problems must be solved: the cell outline must be detected automatically, and the position and signal intensity associated with the structures of interest within the cell must be determined. We present a new 2D and 3D image analysis system that permits versatile and robust analysis of motile, fluorescently labeled structures in rod-shaped cells. We have designed an image analysis system, implemented as a user-friendly software package, allowing fast and robust image analysis of large numbers of rod-shaped cells. We have developed new robust algorithms, which we combined with existing methodologies to facilitate fast and accurate analysis. Our software permits the detection and segmentation of rod-shaped cells in either static or dynamic (i.e. time-lapse) multi-channel images. It enables tracking of two structures (for example SPBs) in two different image channels. For 2D or 3D static images, the locations of the structures are identified, and then intensity values are extracted together with several quantitative parameters, such as length, width, cell orientation, background fluorescence and the distance between the structures of interest. Furthermore, two kinds of kymographs of the tracked structures can be established, one representing the migration with respect to their relative position, the other representing their individual trajectories inside the cell. This software package, called "RodCellJ", allowed us to analyze a large number of S. pombe cells to understand the rules that govern SIN protein asymmetry. "RodCellJ" is freely available to the community as a package of several ImageJ plugins to analyze the behavior of a large number of rod-shaped cells simultaneously and in an extensive manner. The integration of different image-processing techniques in a single package, together with the development of novel algorithms, not only speeds up the analysis relative to existing tools but also provides higher accuracy. Its utility was demonstrated on both 2D and 3D, static and dynamic images to study the septation initiation network of the yeast Schizosaccharomyces pombe. More generally, it can be used in any biological context where fluorescently labeled structures need to be analyzed in rod-shaped cells. RodCellJ is freely available under http://bigwww.epfl.ch/algorithms.html.
Characterizing Topology of Probabilistic Biological Networks.
Todor, Andrei; Dobra, Alin; Kahveci, Tamer
2013-09-06
Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power-law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
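To make the polynomial-time claim concrete, consider a single node with m incident edges that exist independently with given probabilities. Its degree then follows a Poisson-binomial distribution, which dynamic programming computes exactly in O(m^2) time despite the 2^m possible topologies. A minimal Python sketch (our illustration, not the authors' code, which also handles joint distributions):

    import numpy as np

    def degree_distribution(edge_probs):
        """Exact P(degree = k) for independently existing incident edges."""
        dist = np.array([1.0])               # degree 0 with probability 1
        for p in edge_probs:
            new = np.zeros(len(dist) + 1)
            new[:-1] += dist * (1.0 - p)     # edge absent: degree unchanged
            new[1:] += dist * p              # edge present: degree + 1
            dist = new
        return dist

    print(degree_distribution([0.9, 0.5, 0.1]))  # P(k) for k = 0, 1, 2, 3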
NASA Technical Reports Server (NTRS)
Ostrach, Simon
1953-01-01
The free-convection flow and heat transfer (generated by a body force) about a flat plate parallel to the direction of the body force are formally analyzed, and the type of flow is found to depend on the Grashof number alone. For large Grashof numbers (which are of interest in aeronautics), the flow is of the boundary-layer type, and the problem is reduced in a formal manner, analogous to Prandtl's forced-flow boundary-layer theory, to the simultaneous solution of two ordinary differential equations subject to the proper boundary conditions. Velocity and temperature distributions for Prandtl numbers of 0.01, 0.72, 0.733, 1, 2, 10, 100, and 1000 are computed, and it is shown that velocities and Nusselt numbers of the order of magnitude of those encountered in forced-convection flows may be obtained in free-convection flows. The theoretical and experimental velocity and temperature distributions are in good agreement. A flow parameter and a heat-transfer parameter, from which the important physical quantities such as shear stress and heat-transfer rate can be computed, are derived as functions of the Prandtl number alone.
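In the notation common in later presentations of this classical analysis (a sketch of the standard similarity reduction, not a quotation of the report), the two coupled ordinary differential equations read

    F''' + 3 F F'' - 2 (F')^2 + \theta = 0,
    \theta'' + 3 \Pr F \theta' = 0,

where F is the dimensionless stream function, \theta the dimensionless temperature, \Pr the Prandtl number, and primes denote derivatives with respect to the similarity variable \eta \propto (Gr_x/4)^{1/4} \, y/x.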
Guidelines for the Effective Use of Entity-Attribute-Value Modeling for Biomedical Databases
Dinu, Valentin; Nadkarni, Prakash
2007-01-01
Purpose: To introduce the goals of EAV database modeling, to describe the situations where Entity-Attribute-Value (EAV) modeling is a useful alternative to conventional relational methods of database modeling, and to describe the fine points of implementation in production systems. Methods: We analyze the following circumstances: 1) data are sparse and have a large number of applicable attributes, but only a small fraction will apply to a given entity; 2) numerous classes of data need to be represented, each class has a limited number of attributes, but the number of instances of each class is very small. We also consider situations calling for a mixed approach where both conventional and EAV design are used for appropriate data classes. Results and Conclusions: In robust production systems, EAV-modeled databases trade a modest data sub-schema for a complex metadata sub-schema. The need to design the metadata effectively makes EAV design potentially more challenging than conventional design. PMID:17098467
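A minimal sketch of an EAV layout (a hypothetical schema for illustration, not the production designs discussed in the paper), in Python with SQLite:

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE attribute (attr_id INTEGER PRIMARY KEY,
                                name TEXT, datatype TEXT);
        CREATE TABLE eav (
            entity_id INTEGER,   -- e.g. a patient visit
            attr_id   INTEGER REFERENCES attribute(attr_id),
            value     TEXT,      -- serialized; production systems often
                                 -- keep one value column (or table) per type
            PRIMARY KEY (entity_id, attr_id)
        );
    """)
    con.execute("INSERT INTO attribute VALUES (1, 'serum_potassium', 'float')")
    con.execute("INSERT INTO eav VALUES (1001, 1, '4.2')")

    # Sparse data cost no storage for inapplicable attributes; the price is
    # that reassembling a conventional row requires pivoting queries like:
    print(con.execute("SELECT value FROM eav "
                      "WHERE entity_id = 1001 AND attr_id = 1").fetchone()[0])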
NASA Astrophysics Data System (ADS)
Mu, G.-H.; Chen, W.; Kertész, J.; Zhou, W.-X.
2009-03-01
The distributions of trade sizes and trading volumes are investigated based on the limit order book data of 22 liquid Chinese stocks listed on the Shenzhen Stock Exchange over the whole year 2003. We observe that the size distribution of trades for individual stocks exhibits jumps, which is caused by the number preference of traders when placing orders. We analyze the applicability of the "q-Gamma" function for fitting the distribution by the Cramér-von Mises criterion. The empirical PDFs of trading volumes at different timescales Δt ranging from 1 min to 240 min can be well modeled. The applicability of the q-Gamma functions for multiple trades is restricted to transaction numbers Δn ≤ 8. We find that all the PDFs have power-law tails for large volumes. Using careful estimation of the average tail exponents α of the distributions of trade sizes and trading volumes, we get α > 2, well outside the Lévy regime.
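For readers unfamiliar with the q-Gamma function, one common parameterization (the paper's may differ) multiplies a power law by a Tsallis q-exponential, which for q > 1 itself decays as a power law and thus produces the heavy tails described above. A Python sketch:

    import numpy as np

    def q_exponential(x, q):
        """Tsallis q-exponential e_q(x); reduces to exp(x) as q -> 1."""
        x = np.asarray(x, dtype=float)
        if np.isclose(q, 1.0):
            return np.exp(x)
        base = 1.0 + (1.0 - q) * x
        out = np.zeros_like(base)
        pos = base > 0
        out[pos] = base[pos] ** (1.0 / (1.0 - q))
        return out

    def q_gamma_pdf(x, alpha, theta, q, norm=1.0):
        """Unnormalized q-Gamma density: x**(alpha-1) * e_q(-x/theta).

        For q > 1 the tail behaves as x**(alpha - 1 - 1/(q - 1)),
        i.e. a power law, consistent with the reported tail exponents.
        """
        x = np.asarray(x, dtype=float)
        return norm * x ** (alpha - 1.0) * q_exponential(-x / theta, q)

    print(q_gamma_pdf(np.array([1.0, 10.0, 100.0]), 2.5, 1.0, 1.2))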
Beyond the schools of psychology 2: a digital analysis of psychological review, 1904-1923.
Green, Christopher D; Feinerer, Ingo; Burman, Jeremy T
2014-01-01
In order to better understand the broader trends and points of contention in early American psychology, it is conventional to organize the relevant material in terms of "schools" of psychology: structuralism, functionalism, etc. Although not without value, this scheme marginalizes many otherwise significant figures and tends to exclude a large number of secondary, but interesting, individuals. In an effort to address these problems, we grouped all the articles that appeared in the second and third decades of Psychological Review into five-year blocks, and then cluster-analyzed each block by the articles' verbal similarity to each other. This resulted in a number of significant intellectual "genres" of psychology that are ignored by the usual "schools" taxonomy. It also made "visible" a number of figures who are typically downplayed or ignored in conventional histories of the discipline, and it provided us with an intellectual context in which to understand their contributions. © 2014 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
TenHoeve, J. E.; Remer, L. A.; Jacobson, M. Z.
2010-01-01
This study analyzes changes in the number of fires detected on forest, grass, and transition lands during the 2002-2009 biomass burning seasons using fire detection data and co-located land cover classifications from the Moderate Resolution Imaging Spectroradiometer (MODIS). We find that the total number of detected fires correlates well with MODIS mean aerosol optical depth (AOD) from year to year, in accord with other studies. However, we also show that the ratio of forest to savanna fires varies substantially from year to year. Forest fires have trended downward, on average, since the beginning of 2006 despite a modest increase in 2007. Our study suggests that high particulate matter loading detected in 2007 was likely due to a large number of savanna/agricultural fires that year. Finally, we illustrate that the correlation between annual Brazilian deforestation estimates and MODIS fires is considerably higher when fires are stratified by MODIS-derived land cover classifications.
Long Josephson tunnel junctions with doubly connected electrodes
NASA Astrophysics Data System (ADS)
Monaco, R.; Mygind, J.; Koshelets, V. P.
2012-03-01
In order to mimic the phase changes in the primordial Big Bang, several cosmological solid-state experiments have been conceived during the last decade to investigate the spontaneous symmetry breaking in superconductors and superfluids cooled through their transition temperature. In one such experiment, the number of magnetic flux quanta spontaneously trapped in a superconducting loop was measured by means of a long Josephson tunnel junction built on top of the loop itself. We have analyzed this system and found a number of interesting features not occurring in the conventional case with simply connected electrodes. In particular, the fluxoid quantization results in a frustration of the Josephson phase, which, in turn, reduces the junction critical current. Further, the possible stable states of the system are obtained by a self-consistent application of the principle of minimum energy. The theoretical findings are supported by measurements on a number of samples having different geometrical configurations. The experiments demonstrate that a very large signal-to-noise ratio can be achieved in the detection of flux quanta.
How long will asteroids on retrograde orbits survive?
NASA Astrophysics Data System (ADS)
Kankiewicz, Paweł; Włodarczyk, Ireneusz
2018-05-01
In general, no common scenario exists for the origin of minor planets with high orbital inclinations. This applies especially to objects whose orbital inclinations are much greater than 90° (retrograde asteroids). Since the discovery of Dioretsa in 1999, approximately 100 small bodies have been classified as retrograde asteroids. A small number of them were reclassified as comets, due to cometary activity. There are only 25 multi-opposition retrograde asteroids with a relatively large number of observations and well-determined orbits. We studied the orbital evolution of numbered and multi-opposition retrograde asteroids by numerical integration up to 1 Gy forward and backward in time. Additionally, we analyzed the propagation of orbital elements with their observational errors, determined dynamical lifetimes and studied the asteroids' chaotic properties. In conclusion, we obtained quantitative parameters describing the long-term stability of the orbits in both the past and the future. In turn, we were able to estimate their lifetimes and how long these objects will survive in the Solar System.
Resources for Functional Genomics Studies in Drosophila melanogaster
Mohr, Stephanie E.; Hu, Yanhui; Kim, Kevin; Housden, Benjamin E.; Perrimon, Norbert
2014-01-01
Drosophila melanogaster has become a system of choice for functional genomic studies. Many resources, including online databases and software tools, are now available to support the design or identification of relevant fly stocks and reagents, or the analysis and mining of existing functional genomic, transcriptomic, proteomic and other datasets. These include large community collections of fly stocks and plasmid clones, "meta" information sites like FlyBase and FlyMine, and an increasing number of more specialized reagents, databases, and online tools. Here, we introduce key resources useful for planning large-scale functional genomics studies in Drosophila and for analyzing, integrating, and mining the results of those studies in ways that facilitate identification of the highest-confidence results and generation of new hypotheses. We also discuss ways in which existing resources can be used and might be improved, and suggest a few areas of future development that would further support large- and small-scale studies in Drosophila and facilitate use of Drosophila information by the research community more generally. PMID:24653003
Conformal bootstrap at large charge
NASA Astrophysics Data System (ADS)
Jafferis, Daniel; Mukhametzhanov, Baur; Zhiboedov, Alexander
2018-05-01
We consider unitary CFTs with continuous global symmetries in d > 2. We consider a state created by the lightest operator of large charge Q ≫ 1 and analyze the correlator of two light charged operators in this state. We assume that the correlator admits a well-defined large Q expansion and, relatedly, that the macroscopic (thermodynamic) limit of the correlator exists. We find that the crossing equations admit a consistent truncation, where only a finite number N of Regge trajectories contribute to the correlator at leading nontrivial order. We classify all such truncated solutions to the crossing. For one Regge trajectory N = 1, the solution is unique and given by the effective field theory of a Goldstone mode. For two or more Regge trajectories N ≥ 2, the solutions are encoded in roots of a certain degree N polynomial. Some of the solutions admit a simple weakly coupled EFT description, whereas others do not. In the weakly coupled case, each Regge trajectory corresponds to a field in the effective Lagrangian.
The issue of FM to AM conversion on the National Ignition Facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Browning, D F; Rothenberg, J E; Wilcox, R B
1998-08-13
The National Ignition Facility (NIF) baseline configuration for inertial confinement fusion requires phase modulation for two purposes. First, ~1 Å of frequency modulation (FM) bandwidth at low modulation frequency is required to suppress the buildup of Stimulated Brillouin Scattering (SBS) in the large-aperture laser optics. Also, ~3 Å or more of bandwidth at high modulation frequency is required for smoothing of the speckle pattern illuminating the target by the smoothing by spectral dispersion (SSD) method. Ideally, imposition of bandwidth by pure phase modulation does not affect the beam intensity. However, as a result of a large number of effects, the FM converts to amplitude modulation (AM). In general this adversely affects the laser performance, e.g. by reducing the margin against damage to the optics. In particular, very large conversion of FM to AM has been observed in the NIF all-fiber master oscillator and distribution systems. The various mechanisms leading to AM are analyzed and approaches to minimizing their effects are discussed.
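The basic mechanism is easy to demonstrate numerically: a purely phase-modulated field has constant intensity, but any spectrally non-flat transfer function (gain narrowing, etalon ripple, dispersion, etc.) converts part of the FM into AM. A Python sketch with purely illustrative numbers, not NIF parameters:

    import numpy as np

    fs = 1e12                                  # sample rate, Hz
    t = np.arange(2**16) / fs
    f_mod = 200 * fs / t.size                  # ~3 GHz, periodic in window
    beta = 2.0                                 # phase modulation depth, rad
    field = np.exp(1j * beta * np.sin(2 * np.pi * f_mod * t))

    # Before filtering the intensity is exactly constant (pure FM).
    freqs = np.fft.fftfreq(t.size, 1 / fs)
    H = np.exp(-(freqs / 20e9) ** 2)           # hypothetical Gaussian filter
    filtered = np.fft.ifft(np.fft.fft(field) * H)

    intensity = np.abs(filtered) ** 2
    depth = (intensity.max() - intensity.min()) / intensity.mean()
    print(f"AM depth after spectral filtering: {depth:.1%}")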
Maruyama, Toshisuke
2007-01-01
To estimate the amount of evapotranspiration in a river basin, the "short period water balance method" was formulated. Then, by introducing the "complementary relationship method," the amount of evapotranspiration was estimated seasonally, and with reasonable accuracy, for both small and large areas. Moreover, to accurately estimate river discharge in the low-water season, the "weighted statistical unit hydrograph method" was proposed and a procedure for the calculation of the unit hydrograph was developed. Also, a new model, based on the "equivalent roughness method," was successfully developed for the estimation of flood runoff from newly reclaimed farmlands. Based on the results of this research, a "composite reservoir model" was formulated to analyze the repeated use of irrigation water over large spatial areas. The application of this model to a number of watershed areas provided useful information with regard to the realities of water demand-supply systems in watersheds predominantly dedicated to paddy fields in Japan. PMID:24367144
The Strong Lensing Time Delay Challenge (2014)
NASA Astrophysics Data System (ADS)
Liao, Kai; Dobler, G.; Fassnacht, C. D.; Treu, T.; Marshall, P. J.; Rumbaugh, N.; Linder, E.; Hojjati, A.
2014-01-01
Time delays between multiple images in strong lensing systems are a powerful probe of cosmology. At the moment the application of this technique is limited by the number of lensed quasars with measured time delays. However, the number of such systems is expected to increase dramatically in the next few years. Hundreds of such systems are expected within this decade, while the Large Synoptic Survey Telescope (LSST) is expected to deliver on the order of 1000 time delays in the 2020s. In order to exploit this bounty of lenses, we need to make sure the time delay determination algorithms have sufficiently high precision and accuracy. As a first step to test current algorithms and identify potential areas for improvement, we have started a "Time Delay Challenge" (TDC). An "evil" team has created realistic simulated light curves, to be analyzed blindly by "good" teams. The challenge is open to all interested parties. The initial challenge consists of two steps (TDC0 and TDC1). TDC0 consists of a small number of datasets to be used as a training template. The non-mandatory deadline is December 1, 2013. The "good" teams that complete TDC0 will be given access to TDC1. TDC1 consists of thousands of light curves, a number sufficient to test precision and accuracy at the subpercent level, necessary for time-delay cosmography. The deadline for responding to TDC1 is July 1, 2014. Submissions will be analyzed and compared in terms of predefined metrics to establish the goodness-of-fit, efficiency, precision and accuracy of current algorithms. This poster describes the challenge in detail and gives instructions for participation.
Global Distribution of Density Irregularities in the Equatorial Ionosphere
NASA Technical Reports Server (NTRS)
Kil, Hyosub; Heelis, R. A.
1998-01-01
We analyzed measurements of ion number density made by the retarding potential analyzer aboard the Atmosphere Explorer-E (AE-E) satellite, which was in an approximately circular orbit at an altitude near 300 km in 1977 and later at an altitude near 400 km. Large-scale (greater than 60 km) density measurements in the high-altitude regions show large depletions with bubble-like structures which are confined to narrow local time, longitude, and magnetic latitude ranges, while those in the low-altitude regions show relatively small depletions which are broadly distributed in space. For this reason we considered the altitude regions below 300 km and above 350 km and investigated the global distribution of irregularities using the rms deviation δN/N over a path length of 18 km as an indicator of overall irregularity intensity. Seasonal variations of irregularity occurrence probability are significant in the Pacific regions, while the occurrence probability is always high in the Atlantic-African regions and always low in the Indian regions. We find that the high occurrence probability in the Pacific regions is associated with isolated bubble structures, while that near 0° longitude is produced by large depletions with bubble structures superimposed on a large-scale wave-like background. Considerations of longitude variations due to seeding mechanisms and due to F-region winds and drifts are necessary to adequately explain the observations at low and high altitudes. Seeding effects are most obvious near 0° longitude, while the most easily observed effect of the F region is the suppression of irregularity growth by interhemispheric neutral winds.
Can DNA barcoding accurately discriminate megadiverse Neotropical freshwater fish fauna?
2013-01-01
Background: The megadiverse Neotropical freshwater ichthyofauna is the richest in the world, with approximately 6,000 recognized species. Interestingly, they are distributed among only 17 orders, and almost 80% of them belong to only three orders: Characiformes, Siluriformes and Perciformes. Moreover, evidence based on molecular data has shown that most of the diversification of the Neotropical ichthyofauna occurred recently. These characteristics make the taxonomy and identification of this fauna a great challenge, even when using molecular approaches. In this context, the present study aimed to test the effectiveness of the barcoding methodology (COI gene) for identifying the megadiverse freshwater fish fauna of the Neotropical region. For this purpose, 254 species of fishes were analyzed from the Upper Parana River basin, an area representative of the larger Neotropical region. Results: Of the 254 species analyzed, 252 were correctly identified by their barcode sequences (99.2%). The mean K2P intra- and inter-specific genetic divergence values (0.3% and 6.8%, respectively) were relatively low compared with similar values reported in the literature, reflecting the higher number of closely related species belonging to a few higher taxa and their recent radiation. Moreover, for 84 pairs of species that showed low levels of genetic divergence (<2%), application of a complementary character-based nucleotide diagnostic approach proved useful in discriminating them. Additionally, 14 species displayed high intra-specific genetic divergence (>2%), pointing to at least 23 strong candidates for new species. Conclusions: Our study is the first to examine a large number of freshwater fish species from the Neotropical area, including a large number of closely related species. The results confirmed the efficacy of the barcoding methodology to identify a recently radiated, megadiverse fauna, discriminating 99.2% of the analyzed species. The power of the barcode sequences to identify species, even with low interspecific divergence, gives us an idea of the distribution of inter-specific genetic divergence in this megadiverse fauna. The results also revealed hidden genetic divergences suggestive of reproductive isolation and putative cryptic speciation in some species (23 candidates for new species). Finally, our study constitutes an important contribution to the international Barcoding of Life (iBOL.org) project, providing barcode sequences for use in identification of these species by experts and non-experts, and allowing them to be available for use in other applications. PMID:23497346
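The K2P divergences quoted above come from Kimura's two-parameter model, which corrects observed differences for multiple substitutions while distinguishing transitions from transversions. A minimal Python implementation for two aligned sequences (our sketch; barcoding pipelines typically use dedicated packages):

    from math import log, sqrt

    PURINES = {"A", "G"}

    def k2p_distance(seq1: str, seq2: str) -> float:
        """Kimura two-parameter distance between two aligned sequences.

        With P the proportion of transitions and Q of transversions:
            d = -0.5 * ln((1 - 2P - Q) * sqrt(1 - 2Q))
        """
        pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
                 if a in "ACGT" and b in "ACGT"]
        n = len(pairs)
        ts = sum(a != b and (a in PURINES) == (b in PURINES)
                 for a, b in pairs)
        tv = sum(a != b and (a in PURINES) != (b in PURINES)
                 for a, b in pairs)
        P, Q = ts / n, tv / n
        return -0.5 * log((1 - 2 * P - Q) * sqrt(1 - 2 * Q))

    print(k2p_distance("ACGTACGTAC", "ACGTACGTGC"))  # one transition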
Signal bi-amplification in networks of unidirectionally coupled MEMS
NASA Astrophysics Data System (ADS)
Tchakui, Murielle Vanessa; Woafo, Paul; Colet, Pere
2016-01-01
The purpose of this paper is to analyze the propagation and the amplification of an input signal in networks of unidirectionally coupled micro-electro-mechanical systems (MEMS). Two types of external excitation are considered: sinusoidal and stochastic signals. We show that sinusoidal signals are amplified up to a saturation level which depends on the transmission rate and that, despite the MEMS being nonlinear, the sinusoidal shape is well preserved if the number of MEMS is not too large. However, as the number of MEMS increases, an instability arises that leads to chaotic behavior and that is triggered by the amplification of the harmonics generated by the nonlinearities. We also show that for stochastic input signals the MEMS array acts as a band-pass filter, and after just a few elements the signal has a narrow power spectrum.
Demographic Consequences of Gender Discrimination in China: Simulation Analysis of Policy Options.
Jiang, Quanbao; Li, Shuzhuo; Feldman, Marcus W
2011-08-01
The large number of missing females in China, a consequence of gender discrimination, is having and will continue to have a profound effect on the country's population development. In this paper, we analyze the causes of this gender discrimination in terms of institutions, culture, and economy, and suggest public policies that might help eliminate gender discrimination. Using a population simulation model, we study the effect of public policies on the sex ratio at birth and excess female child mortality, and the effect of gender discrimination on China's population development. We find that gender discrimination will decrease China's population size, number of births, and working-age population, accelerate population aging and exacerbate the male marriage squeeze. These results provide theoretical support for suggesting that the government enact and implement public policies aimed at eliminating gender discrimination.
Optimizing the number of steps in learning tasks for complex skills.
Nadolski, Rob J; Kirschner, Paul A; van Merriënboer, Jeroen J G
2005-06-01
Carrying out whole tasks is often too difficult for novice learners attempting to acquire complex skills. The common solution is to split up the tasks into a number of smaller steps. The number of steps must be optimized for efficient and effective learning. The aim of the study is to investigate the relation between the number of steps provided to learners and the quality of their learning of complex skills. It is hypothesized that students receiving an optimized number of steps will learn better than those receiving either the whole task in only one step or those receiving a large number of steps. Participants were 35 sophomore law students studying at Dutch universities, mean age=22.8 years (SD=3.5), 63% were female. Participants were randomly assigned to 1 of 3 computer-delivered versions of a multimedia programme on how to prepare and carry out a law plea. The versions differed only in the number of learning steps provided. Videotaped plea-performance results were determined, various related learning measures were acquired and all computer actions were logged and analyzed. Participants exposed to an intermediate (i.e. optimized) number of steps outperformed all others on the compulsory learning task. No differences in performance on a transfer task were found. A high number of steps proved to be less efficient for carrying out the learning task. An intermediate number of steps is the most effective, proving that the number of steps can be optimized for improving learning.
Mutational screening in genes related with porto-pulmonary hypertension: An analysis of 6 cases.
Pousada, Guillermo; Baloira, Adolfo; Valverde, Diana
2017-04-07
Portopulmonary hypertension (PPH) is a rare disease without a clearly identified genetic component. The aim of this work was to screen genes and genetic modifiers related to pulmonary arterial hypertension in patients with PPH in order to clarify the molecular basis of the pathology. We selected a total of 6 patients with PPH and amplified the exonic regions and flanking intronic regions of the relevant genes and the regions of interest of the genetic modifiers. The 6 patients diagnosed with PPH were analyzed and compared to 55 healthy individuals. Potentially pathogenic mutations were identified in the analyzed genes of 5 patients. None of these mutations, which affect positions highly conserved throughout evolution, was detected in the controls or in the databases analyzed (1000 Genomes, ExAC and DECIPHER). After analyzing the genetic modifiers, we found several variants that could favor the onset of the disease. The genetic analysis carried out in this small cohort of patients with PPH revealed a large number of mutations, with the ENG gene showing the greatest mutational frequency. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
Access Control Management for SCADA Systems
NASA Astrophysics Data System (ADS)
Hong, Seng-Phil; Ahn, Gail-Joon; Xu, Wenjuan
The information technology revolution has transformed all aspects of our society, including critical infrastructures, and has led to a significant shift from their old and disparate business models based on proprietary and legacy environments to more open and consolidated ones. Supervisory Control and Data Acquisition (SCADA) systems have been widely used not only for industrial processes but also for some experimental facilities. Due to the nature of open environments, managing SCADA systems must meet various security requirements, since system administrators need to deal with a large number of entities and functions involved in critical infrastructures. In this paper, we identify necessary access control requirements in SCADA systems and articulate access control policies for simulated SCADA systems. We also attempt to analyze and realize those requirements and policies in the context of role-based access control, which is suitable for simplifying administrative tasks in large-scale enterprises.
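A minimal sketch of the role-based idea in Python (hypothetical roles and permissions for a simulated SCADA context, not the paper's actual policy set):

    # Permissions are (resource, action) pairs granted to roles, not users.
    ROLE_PERMISSIONS = {
        "operator": {("pump_station", "read"), ("pump_station", "actuate")},
        "engineer": {("pump_station", "read"), ("pump_station", "configure")},
        "auditor":  {("pump_station", "read")},
    }

    USER_ROLES = {"alice": {"operator"}, "bob": {"auditor"}}

    def check_access(user: str, resource: str, action: str) -> bool:
        """Grant iff any role assigned to the user carries the permission."""
        return any((resource, action) in ROLE_PERMISSIONS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    assert check_access("alice", "pump_station", "actuate")
    assert not check_access("bob", "pump_station", "configure")

Administering the role-to-permission mapping, rather than per-user access lists, is what keeps the policy manageable when the number of entities and functions is large.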
Proteome-level interplay between folding and aggregation propensities of proteins.
Tartaglia, Gian Gaetano; Vendruscolo, Michele
2010-10-08
With the advent of proteomics, there is an increasing need of tools for predicting the properties of large numbers of proteins by using the information provided by their amino acid sequences, even in the absence of the knowledge of their structures. One of the most important types of predictions concerns whether proteins will fold or aggregate. Here, we study the competition between these two processes by analyzing the relationship between the folding and aggregation propensity profiles for the human and Escherichia coli proteomes. These profiles are calculated, respectively, using the CamFold method, which we introduce in this work, and the Zyggregator method. Our results indicate that the kinetic behavior of proteins is, to a large extent, determined by the interplay between regions of low folding and high aggregation propensities. Copyright © 2010. Published by Elsevier Ltd.
Community structure in traffic zones based on travel demand
NASA Astrophysics Data System (ADS)
Sun, Li; Ling, Ximan; He, Kun; Tan, Qian
2016-09-01
Large-scale structure in complex networks can be studied by dividing them into communities or modules. The urban traffic system is one of the most critical infrastructures, and it can be abstracted as a complex network composed of tightly connected groups. Here, we analyze community structure in urban traffic zones based on community detection methods from network science; a spectral algorithm using matrix eigenvectors is employed. Our empirical results indicate that the traffic communities vary with the travel demand distribution, since in the morning the majority of passengers travel from home to work and in the evening they travel in the opposite direction. Meanwhile, the origin-destination pairs with a large number of trips play a significant role in the community division of the urban traffic network. The layout of traffic communities in a city also depends on the residents' trajectories.
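As an illustration of the spectral approach (a generic sketch using Newman's modularity matrix; the paper's exact algorithm and weighting may differ), a network of traffic zones can be bisected with the leading eigenvector, where the adjacency matrix could, for example, be weighted by origin-destination trip counts:

    import numpy as np

    def spectral_bisection(A: np.ndarray) -> np.ndarray:
        """Two-community split from the leading eigenvector of the
        modularity matrix B = A - k k^T / 2m (A symmetric, non-negative)."""
        k = A.sum(axis=1)
        B = A - np.outer(k, k) / k.sum()
        eigvals, eigvecs = np.linalg.eigh(B)
        leading = eigvecs[:, np.argmax(eigvals)]
        return np.where(leading >= 0, 1, -1)

    # Two triangles joined by a single edge split cleanly in two.
    A = np.zeros((6, 6))
    for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
        A[i, j] = A[j, i] = 1.0
    print(spectral_bisection(A))   # e.g. [ 1  1  1 -1 -1 -1]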