Simple Numerical Analysis of Longboard Speedometer Data
ERIC Educational Resources Information Center
Hare, Jonathan
2013-01-01
Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…
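As an illustration of the kind of processing this abstract describes, here is a minimal Python sketch: logged voltages are scaled to speed with an assumed linear calibration constant k and an assumed sampling interval dt (both invented placeholders, not values from the paper), then integrated and differentiated numerically, just as one would in spreadsheet columns.

```python
import numpy as np

# Minimal sketch, assuming a fixed sampling interval dt and a linear
# voltage-to-speed calibration k; both values are invented placeholders.
dt = 0.1                                  # sampling interval in s (assumed)
k = 2.5                                   # calibration in (m/s)/V (assumed)
voltage = np.array([0.0, 0.4, 0.9, 1.3, 1.6, 1.8, 1.9, 1.9])  # logged data

speed = k * voltage                       # scale raw voltages to m/s
distance = np.cumsum(speed) * dt          # numerical integration (rectangle rule)
accel = np.gradient(speed, dt)            # numerical differentiation

for i, (v, d, a) in enumerate(zip(speed, distance, accel)):
    print(f"t={i*dt:.1f} s  v={v:.2f} m/s  d={d:.2f} m  a={a:.2f} m/s^2")
```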
Simulating and mapping spatial complexity using multi-scale techniques
De Cola, L.
1994-01-01
A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields.
An analysis of ratings: A guide to RMRATE
Thomas C. Brown; Terry C. Daniel; Herbert W. Schroeder; Glen E. Brink
1990-01-01
This report describes RMRATE, a computer program for analyzing rating judgments. RMRATE scales ratings using several scaling procedures, and compares the resulting scale values. The scaling procedures include the median and simple mean, standardized values, scale values based on Thurstone's Law of Categorical Judgment, and regression-based values. RMRATE also...
Function Invariant and Parameter Scale-Free Transformation Methods
ERIC Educational Resources Information Center
Bentler, P. M.; Wingard, Joseph A.
1977-01-01
A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)
NASA Astrophysics Data System (ADS)
Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko
2015-06-01
We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.
Levelized Cost of Energy Calculator | Energy Analysis | NREL
The levelized cost of energy (LCOE) calculator provides a simple calculator for both utility-scale and distributed generation renewable energy technologies; additional cost factors would need to be included for a thorough analysis. To estimate the simple cost of energy, use the slider controls.
A Confirmatory Factor Analysis of the Professional Opinion Scale
ERIC Educational Resources Information Center
Greeno, Elizabeth J.; Hughes, Anne K.; Hayward, R. Anna; Parker, Karen L.
2007-01-01
The Professional Opinion Scale (POS) was developed to measure social work values orientation. Objective: A confirmatory factor analysis was performed on the POS. Method: This cross-sectional study used a mailed survey design with a national random (simple) sample of members of the National Association of Social Workers. Results: The study…
Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.
Li, Harbin; McNulty, Steven G
2007-10-01
Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BCw; 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BCw base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.
SU-F-R-33: Can CT and CBCT Be Used Simultaneously for Radiomics Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, R; Wang, J; Zhong, H
2016-06-15
Purpose: To investigate whether CBCT and CT can be used in radiomics analysis simultaneously, and to establish a batch correction method for radiomics in two similar image modalities. Methods: Four sites including rectum, bladder, femoral head and lung were considered as regions of interest (ROI) in this study. For each site, 10 treatment planning CT images were collected, and 10 CBCT images from the same site of the same patient were acquired at the first radiotherapy fraction. 253 radiomics features, which were selected by our test-retest study on rectum cancer CT (ICC > 0.8), were calculated for both CBCT and CT images in MATLAB. Simple scaling (z-score) and nonlinear correction methods were applied to the CBCT radiomics features. The Pearson correlation coefficient was calculated to analyze the correlation between radiomics features of CT and CBCT images before and after correction. Cluster analysis of mixed data (for each site, 5 CT and 5 CBCT data sets randomly selected) was implemented to validate the feasibility of merging radiomics data from CBCT and CT. The consistency of the clustering result with the site grouping was verified by a chi-square test for each dataset. Results: For simple scaling, 234 of the 253 features have correlation coefficient ρ > 0.8, among which 154 features have ρ > 0.9. For radiomics data after nonlinear correction, 240 of the 253 features have ρ > 0.8, among which 220 features have ρ > 0.9. Cluster analysis of mixed data shows that data of the four sites were almost precisely separated for simple scaling (p = 1.29 × 10⁻⁷, χ² test) and nonlinear correction (p = 5.98 × 10⁻⁷, χ² test), similar to the cluster result for CT data alone (p = 4.52 × 10⁻⁸, χ² test). Conclusion: Radiomics data from CBCT can be merged with those from CT by simple scaling or nonlinear correction for radiomics analysis.
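A minimal sketch of the "simple scaling" step from this abstract, using hypothetical feature matrices (the 10-patient, 253-feature shapes follow the abstract; the values are synthetic): CBCT features are z-scored and then compared with the CT features via the Pearson correlation coefficient, feature by feature.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
# Hypothetical feature matrices: rows = 10 patients, columns = 253 features.
ct = rng.normal(size=(10, 253))
cbct = 1.3 * ct + rng.normal(scale=0.2, size=(10, 253))  # shifted/scaled CBCT

# "Simple scaling": z-score each CBCT feature across patients.
z = (cbct - cbct.mean(axis=0)) / cbct.std(axis=0)

# Pearson correlation between CT and scaled CBCT, feature by feature.
rho = np.array([pearsonr(ct[:, j], z[:, j])[0] for j in range(ct.shape[1])])
print(f"{(rho > 0.8).sum()} of {ct.shape[1]} features with rho > 0.8")
```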
Electrochemistry at Nanometer-Scaled Electrodes
ERIC Educational Resources Information Center
Watkins, John J.; Bo Zhang; White, Henry S.
2005-01-01
Electrochemical studies using nanometer-scaled electrodes are leading to better insights into electrochemical kinetics, interfacial structure, and chemical analysis. Various methods of preparing electrodes of nanometer dimensions are discussed and a few examples of their behavior and applications in relatively simple electrochemical experiments…
Scaling in sensitivity analysis
Link, W.A.; Doherty, P.F.
2002-01-01
Population matrix models allow sets of demographic parameters to be summarized by a single value 8, the finite rate of population increase. The consequences of change in individual demographic parameters are naturally measured by the corresponding changes in 8; sensitivity analyses compare demographic parameters on the basis of these changes. These comparisons are complicated by issues of scale. Elasticity analysis attempts to deal with issues of scale by comparing the effects of proportional changes in demographic parameters, but leads to inconsistencies in evaluating demographic rates. We discuss this and other problems of scaling in sensitivity analysis, and suggest a simple criterion for choosing appropriate scales. We apply our suggestions to data for the killer whale, Orcinus orca.
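The sensitivity and elasticity quantities discussed above have a standard matrix form; the sketch below computes λ (the finite rate of increase), sensitivities, and elasticities for a hypothetical 3-stage projection matrix (illustrative, not the killer whale data) from the leading right and left eigenvectors.

```python
import numpy as np

# Hypothetical 3-stage projection matrix (illustrative, not the killer whale data).
A = np.array([[0.0, 1.5, 2.0],
              [0.4, 0.0, 0.0],
              [0.0, 0.6, 0.8]])

vals, W = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]                        # finite rate of increase (lambda)
w = W[:, i].real                          # right eigenvector: stable structure
valsT, V = np.linalg.eig(A.T)
v = V[:, np.argmax(valsT.real)].real      # left eigenvector: reproductive values

S = np.outer(v, w) / (v @ w)              # sensitivities d(lambda)/d(a_ij)
E = (A / lam) * S                         # elasticities: proportional scale
print("lambda =", lam.round(3))
print("elasticities (sum to 1):\n", E.round(3))
```

Elasticities compare proportional rather than absolute changes and sum to one, which is precisely the scale-dependence issue the paper examines.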
USDA-ARS?s Scientific Manuscript database
Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...
Rodríguez-Arias, Miquel Angel; Rodó, Xavier
2004-03-01
Here we describe a practical, step-by-step primer to scale-dependent correlation (SDC) analysis. The analysis of transitory processes is an important but often neglected topic in ecological studies because only a few statistical techniques appear to detect temporary features accurately enough. We introduce here the SDC analysis, a statistical and graphical method to study transitory processes at any temporal or spatial scale. SDC analysis, thanks to the combination of conventional procedures and simple well-known statistical techniques, becomes an improved time-domain analogue of wavelet analysis. We use several simple synthetic series to describe the method, a more complex example, full of transitory features, to compare SDC and wavelet analysis, and finally we analyze some selected ecological series to illustrate the methodology. The SDC analysis of time series of copepod abundances in the North Sea indicates that ENSO is the primary climatic driver of short-term changes in population dynamics. SDC also uncovers some long-term, unexpected features in the population. Similarly, the SDC analysis of Nicholson's blowflies data locates where the proposed models fail and provides new insights about the mechanism that drives the apparent vanishing of the population cycle during the second half of the series.
ERIC Educational Resources Information Center
Davis, Richard A.
2015-01-01
A simple classroom exercise is used to teach students about the law of propagation of uncertainty in experimental measurements and analysis. Students calculate the density of a rectangular wooden block with a hole from several measurements of mass and length using a ruler and scale. The ruler and scale give students experience with estimating…
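A worked version of this classroom exercise, with hypothetical measurements and uncertainties: the density of a rectangular block with a drilled hole follows from ρ = m / (LWH − πr²H), and first-order propagation of uncertainty adds the contributions in quadrature.

```python
import numpy as np

# Hypothetical measurements (value, absolute uncertainty); illustrative only.
m, dm = 150.0, 0.5        # mass in g (scale)
L, dL = 10.0, 0.05        # length in cm (ruler)
W, dW = 5.0, 0.05         # width in cm
H, dH = 2.0, 0.05         # height in cm
r, dr = 0.5, 0.05         # hole radius in cm

V = L * W * H - np.pi * r**2 * H          # block volume minus cylindrical hole
rho = m / V

# First-order propagation: partial derivatives of V, summed in quadrature.
dV = np.sqrt((W * H * dL)**2 + (L * H * dW)**2
             + ((L * W - np.pi * r**2) * dH)**2 + (2 * np.pi * r * H * dr)**2)
drho = rho * np.sqrt((dm / m)**2 + (dV / V)**2)
print(f"rho = {rho:.3f} +/- {drho:.3f} g/cm^3")
```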
Simple scale for assessing level of dependency of patients in general practice.
Willis, J
1986-01-01
A rating scale has been designed for assessing the degree of dependency of patients in general practice. An analysis of the elderly and disabled patients in a two doctor practice is given as an example of its use and simplicity. PMID:3087556
A Factor Analysis of the Counselor Evaluation Rating Scale
ERIC Educational Resources Information Center
Loesch, Larry C.; Rucker, Barbara B.
1977-01-01
This study was conducted on the Counselor Evaluation Rating Scale (CERS). Ratings on 404 students from approximately 35 different supervisors were factor-analyzed using an oblique solution with rotation to simple loadings. It was concluded that the CERS has generally achieved the purposes intended by its authors. (Author)
On identifying relationships between the flood scaling exponent and basin attributes.
Medhi, Hemanta; Tripathi, Shivam
2015-07-01
Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and the flood scaling exponents by using observed data. In the proposed methodology, region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions having fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which the snowfall dominates the total precipitation.
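The flood scaling exponent itself comes from a log-log regression of peak flood against drainage area; a minimal sketch with illustrative (invented) data for one homogeneous region:

```python
import numpy as np

# Illustrative (invented) drainage areas (km^2) and mean annual peak floods
# (m^3/s) for gauges in one homogeneous region.
area = np.array([50.0, 120.0, 300.0, 800.0, 2000.0, 5500.0])
flood = np.array([14.0, 26.0, 52.0, 110.0, 220.0, 470.0])

# Ordinary least squares in log-log space: log Q = log c + theta * log A.
theta, logc = np.polyfit(np.log(area), np.log(flood), 1)
print(f"scaling exponent theta = {theta:.2f}, coefficient c = {np.exp(logc):.2f}")
```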
Active earth pressure model tests versus finite element analysis
NASA Astrophysics Data System (ADS)
Pietrzak, Magdalena
2017-06-01
The purpose of the paper is to compare failure mechanisms observed in small scale model tests on a granular sample in the active state and simulated by the finite element method (FEM) using Plaxis 2D software. Small scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction 'from the soil' (active earth pressure state). A simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.
Spatial analysis of cities using Renyi entropy and fractal parameters
NASA Astrophysics Data System (ADS)
Chen, Yanguang; Feng, Jian
2017-12-01
The spatial distributions of cities fall into two groups: one is the simple distribution with characteristic scale (e.g. exponential distribution), and the other is the complex distribution without characteristic scale (e.g. power-law distribution). The latter belongs to scale-free distributions, which can be modeled with fractal geometry. However, fractal dimension is not suitable for the former distribution. In contrast, spatial entropy can be used to measure any types of urban distributions. This paper is devoted to generalizing multifractal parameters by means of dual relation between Euclidean and fractal geometries. The main method is mathematical derivation and empirical analysis, and the theoretical foundation is the discovery that the normalized fractal dimension is equal to the normalized entropy. Based on this finding, a set of useful spatial indexes termed dummy multifractal parameters are defined for geographical analysis. These indexes can be employed to describe both the simple distributions and complex distributions. The dummy multifractal indexes are applied to the population density distribution of Hangzhou city, China. The calculation results reveal the feature of spatio-temporal evolution of Hangzhou's urban morphology. This study indicates that fractal dimension and spatial entropy can be combined to produce a new methodology for spatial analysis of city development.
Brittain, Kirsty; Mellins, Claude A.; Zerbe, Allison; Remien, Robert H.; Abrams, Elaine J.; Myer, Landon; Wilson, Ira B.
2016-01-01
Maternal adherence to antiretroviral therapy (ART) is a concern and monitoring adherence presents a significant challenge in low-resource settings. We investigated the association between self-reported adherence, measured using a simple three-item scale, and elevated viral load (VL) among HIV-infected pregnant and postpartum women on ART in Cape Town, South Africa. This is the first reported use of this scale in a non-English speaking setting and it achieved good psychometric characteristics (Cronbach α = 0.79). Among 452 women included in the analysis, only 12 % reported perfect adherence on the self-report scale, while 92 % had a VL <1000 copies/mL. Having a raised VL was consistently associated with lower median adherence scores and the area under the curve for the scale was 0.599, 0.656 and 0.642 using a VL cut-off of ≥50, ≥1000 and ≥10000 copies/mL, respectively. This simple self-report adherence scale shows potential as a first-stage adherence screener in this setting. Maternal adherence monitoring in low resource settings requires attention in the era of universal ART, and the value of this simple adherence scale in routine ART care settings warrants further investigation. PMID:27278548
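The internal-consistency statistic quoted above (Cronbach's α) is straightforward to compute; a sketch with synthetic three-item scores (the 452-respondent, three-item shape follows the abstract; the data and the resulting α are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
latent = rng.normal(size=452)             # hypothetical adherence trait
items = np.column_stack([latent + rng.normal(scale=0.7, size=452)
                         for _ in range(3)])  # three noisy items
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```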
NASA Astrophysics Data System (ADS)
Milani, G.; Bertolesi, E.
2017-07-01
A simple quasi analytical holonomic homogenization approach for the non-linear analysis of masonry walls in-plane loaded is presented. The elementary cell (REV) is discretized with 24 triangular elastic constant stress elements (bricks) and non-linear interfaces (mortar). A holonomic behavior with softening is assumed for mortar. It is shown how the mechanical problem in the unit cell is characterized by very few displacement variables and how homogenized stress-strain behavior can be evaluated semi-analytically.
ERIC Educational Resources Information Center
Coordination in Development, New York, NY.
This booklet was produced in response to the growing need for reliable environmental assessment techniques that can be applied to small-scale development projects. The suggested techniques emphasize low-technology environmental analysis. Although these techniques may lack precision, they can be extremely valuable in helping to assure the success…
Yi, Gihwan; Choi, Jun-Ho; Lee, Jong-Hee; Jeong, Unggi; Nam, Min-Hee; Yun, Doh-Won; Eun, Moo-Young
2005-01-01
We describe a rapid and simple procedure for homogenizing leaf samples suitable for mini/midi-scale DNA preparation in rice. The method uses tungsten carbide beads and a general vortexer for homogenizing leaf samples. In general, two samples can be ground completely within 11.3 ± 1.5 s at one time. Up to 20 samples can be ground at a time using a vortexer attachment. The yields of DNA ranged from 2.2 to 7.6 µg from 25-150 mg of young fresh leaf tissue. The quality and quantity of the DNA were compatible with most PCR work and RFLP analysis.
Huh, Yeamin; Smith, David E.; Feng, Meihau Rose
2014-01-01
Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single or multiple species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single or multiple species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
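Simple allometry, the baseline method evaluated in this study, fits CL = a·BW^b across species in log-log space and extrapolates to human body weight. The sketch below uses invented preclinical values purely for illustration (the species labels themselves are assumptions):

```python
import numpy as np

# Invented preclinical data: body weights (kg) and clearances (mL/min) for
# three species; the species labels are assumptions, not study data.
bw = np.array([0.25, 2.5, 10.0])          # e.g. rat, rabbit, monkey
cl = np.array([2.0, 14.0, 44.0])

# Simple allometry CL = a * BW^b, fitted as a line in log-log space.
b, loga = np.polyfit(np.log(bw), np.log(cl), 1)
cl_human = np.exp(loga) * 70.0**b         # extrapolate to a 70 kg human
print(f"exponent b = {b:.2f}, predicted human CL = {cl_human:.0f} mL/min")
```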
Validation of the replica trick for simple models
NASA Astrophysics Data System (ADS)
Shinzato, Takashi
2018-04-01
We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.
The application of sensitivity analysis to models of large scale physiological systems
NASA Technical Reports Server (NTRS)
Leonard, J. I.
1974-01-01
A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method is presented for reducing highly complex, nonlinear models to simple linear algebraic models that can be useful for making rapid, first-order calculations of system behavior.
Li, Xinan; Xu, Hongyuan; Cheung, Jeffrey T
2016-12-01
This work describes a new approach for gait analysis and balance measurement. It uses an inertial measurement unit (IMU) that can either be embedded inside a dynamically unstable platform for balance measurement or mounted on the lower back of a human participant for gait analysis. The acceleration data along three Cartesian coordinates is analyzed by the gait-force model to extract bio-mechanics information in both the dynamic state as in the gait analyzer and the steady state as in the balance scale. For the gait analyzer, the simple, noninvasive and versatile approach makes it appealing to a broad range of applications in clinical diagnosis, rehabilitation monitoring, athletic training, sport-apparel design, and many other areas. For the balance scale, it provides a portable platform to measure the postural deviation and the balance index under visual or vestibular sensory input conditions. Despite its simple construction and operation, excellent agreement has been demonstrated between its performance and the high-cost commercial balance unit over a wide dynamic range. The portable balance scale is an ideal tool for routine monitoring of balance index, fall-risk assessment, and other balance-related health issues for both clinical and household use.
Phenomenology of NMSSM in TeV scale mirage mediation
NASA Astrophysics Data System (ADS)
Hagimoto, Kei; Kobayashi, Tatsuo; Makino, Hiroki; Okumura, Ken-ichi; Shimomura, Takashi
2016-02-01
We study the next-to-minimal supersymmetric standard model (NMSSM) with the TeV scale mirage mediation, which is known as a solution for the little hierarchy problem in supersymmetry. Our previous study showed that the 125 GeV Higgs boson is realized with O(10)% fine-tuning for a 1.5 TeV gluino (1 TeV stop) mass. The μ term could be as large as 500 GeV without sacrificing the fine-tuning, thanks to a cancellation mechanism. The singlet-doublet mixing is suppressed by tan β. In this paper, we further extend this analysis. We argue that approximate scale symmetries play a role behind the suppression of the singlet-doublet mixing. They reduce the mixing matrix to a simple form that is useful for understanding the results of the numerical analysis. We perform a comprehensive analysis of the fine-tuning including the singlet sector by introducing a simple formula for the fine-tuning measure. This shows that the singlet mass of least fine-tuning is favored by the LEP anomaly for moderate tan β. We also discuss prospects for precision measurements of the Higgs couplings at the LHC and ILC and for direct/indirect dark matter searches in the model.
Wickham, Shelley; Large, Maryanne C.J; Poladian, Leon; Jermiin, Lars S
2005-01-01
Many butterfly species possess ‘structural’ colour, where colour is due to optical microstructures found in the wing scales. A number of such structures have been identified in butterfly scales, including three variations on a simple multi-layer structure. In this study, we optically characterize examples of all three types of multi-layer structure, as found in 10 species. The optical mechanism of the suppression and exaggeration of the angle-dependent optical properties (iridescence) of these structures is described. In addition, we consider the phylogeny of the butterflies, and are thus able to relate the optical properties of the structures to their evolutionary development. By applying two different types of analysis, the mechanism of adaptation is addressed. A simple parsimony analysis, in which all evolutionary changes are given an equal weighting, suggests convergent evolution of one structure. A Dollo parsimony analysis, in which the evolutionary ‘cost’ of losing a structure is less than that of gaining it, implies that ‘latent’ structures can be reused. PMID:16849221
HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.
Juusola, Jessie L; Brandeau, Margaret L
2016-04-01
To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. Limitations of the study are that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP.
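The allocation problem this abstract describes is a linear program; a minimal sketch with scipy.optimize.linprog, in which the benefit rates, program capacities, and budget are invented placeholders (only the three program names follow the abstract):

```python
import numpy as np
from scipy.optimize import linprog

# Invented benefit rates (QALYs gained per $1M), program capacities ($M) and
# budget ($M); only the program names [CBE, ART, PrEP] follow the abstract.
benefit = np.array([40.0, 25.0, 8.0])
capacity = np.array([20.0, 120.0, 80.0])
budget = 100.0

# linprog minimizes, so negate the benefit vector to maximize total QALYs
# subject to the budget constraint and per-program capacities.
res = linprog(c=-benefit, A_ub=np.ones((1, 3)), b_ub=[budget],
              bounds=[(0.0, cap) for cap in capacity])
print("optimal spend ($M) on [CBE, ART, PrEP]:", res.x.round(1))
```

With these invented numbers the solver funds CBE to capacity, then ART, and leaves PrEP unfunded, mirroring the qualitative base-case ordering reported above.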
Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo
2013-01-01
The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese, and to test the stability of its factor analysis across different cultures. The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. The test-retest reliability index as measured by the intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples.
Towards the computation of time-periodic inertial range dynamics
NASA Astrophysics Data System (ADS)
van Veen, L.; Vela-Martín, A.; Kawahara, G.
2018-04-01
We explore the possibility of computing simple invariant solutions, like travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.
Pérez de Rosas, Alicia R.; Restelli, María F.; Fernández, Cintia J.; Blariza, María J.; García, Beatriz A.
2017-01-01
Here we apply inter-simple sequence repeat (ISSR) markers to explore the fine-scale genetic structure and dispersal in populations of Triatoma infestans. Five selected primers from 30 primers were used to amplify ISSRs by polymerase chain reaction. A total of 90 polymorphic bands were detected across 134 individuals captured from 11 peridomestic sites from the locality of San Martín (Capayán Department, Catamarca Province, Argentina). Significant levels of genetic differentiation suggest limited gene flow among sampling sites. Spatial autocorrelation analysis confirms that dispersal occurs on the scale of ∼469 m, suggesting that insecticide spraying should be extended at least within a radius of ∼500 m around the infested area. Moreover, Bayesian clustering algorithms indicated genetic exchange among different sites analyzed, supporting the hypothesis of an important role of peridomestic structures in the process of reinfestation. PMID:28115670
De Bartolo, Samuele; Fallico, Carmine; Veltri, Massimo
2013-01-01
Hydraulic conductivity and effective porosity values for the confined sandy loam aquifer of the Montalto Uffugo (Italy) test field were obtained by laboratory and field measurements; the first ones were carried out on undisturbed soil samples and the others by slug and aquifer tests. A direct simple-scaling analysis was performed for the whole range of measurement and a comparison among the different types of fractal models describing the scale behavior was made. Some indications about the largest pore size to utilize in the fractal models were given. The results obtained for a sandy loam soil show that it is possible to obtain global indications on the behavior of the hydraulic conductivity versus the porosity utilizing a simple scaling relation and a fractal model in coupled manner. PMID:24385876
NASA Astrophysics Data System (ADS)
Sarkarinejad, Khalil; Keshavarz, Saeede; Faghih, Ali
2015-05-01
This study is aimed at quantifying the kinematics of deformation using a population of drag fold structures associated with small-scale faults in deformed quartzites from Seh-Ghalatoun area within the HP-LT Sanandaj-Sirjan Metamorphic Belt, SW Iran. A total 30 small-scale faults in the quartzite layers were examined to determine the deformation characteristics. Obtained data revealed α0 (initial fault angle) and ω (angle between flow apophyses) are equal to 83° and 32°, respectively. These data yield mean kinematic vorticity number (Wm) equal to 0.79 and mean finite strain (Rs) of 2.32. These results confirm the relative contribution of ∼43% pure shear and ∼57% simple shear components, respectively. The strain partitioning inferred from this quantitative analysis is consistent with a sub-simple or general shear deformation pattern associated with a transpressional flow regime in the study area as a part of the Zagros Orogen. This type of deformation resulted from oblique convergence between the Afro-Arabian and Central-Iranian plates.
Measuring large scale space perception in literary texts
NASA Astrophysics Data System (ADS)
Rossi, Paolo
2007-07-01
A center and radius of “perception” (in the sense of environmental cognition) can be formally associated with a written text and operationally defined. Simple algorithms for their computation are presented, and indicators for anisotropy in large scale space perception are introduced. The relevance of these notions for the analysis of literary and historical records is briefly discussed and illustrated with an example taken from medieval historiography.
The brainstem reticular formation is a small-world, not scale-free, network
Humphries, M.D; Gurney, K; Prescott, T.J
2005-01-01
Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219
Simple scaling of cooperation in donor-recipient games.
Berger, Ulrich
2009-09-01
We present a simple argument which proves a general version of the scaling phenomenon recently observed in donor-recipient games by Tanimoto [Tanimoto, J., 2009. A simple scaling of the effectiveness of supporting mutual cooperation in donor-recipient games by various reciprocity mechanisms. BioSystems 96, 29-34].
Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.
Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B
2012-06-01
This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.
Impact force as a scaling parameter
NASA Technical Reports Server (NTRS)
Poe, Clarence C., Jr.; Jackson, Wade C.
1994-01-01
The Federal Aviation Administration (FAR PART 25) requires that a structure carry ultimate load with nonvisible impact damage and carry 70 percent of limit flight loads with discrete damage. The Air Force has similar criteria (MIL-STD-1530A). Both civilian and military structures are designed by a building block approach. First, critical areas of the structure are determined, and potential failure modes are identified. Then, a series of representative specimens are tested that will fail in those modes. The series begins with tests of simple coupons, progresses through larger and more complex subcomponents, and ends with a test on a full-scale component, hence the term 'building block.' In order to minimize testing, analytical models are needed to scale impact damage and residual strength from the simple coupons to the full-scale component. Using experiments and analysis, the present paper illustrates that impact damage can be better understood and scaled using impact force than just kinetic energy. The plate parameters considered are size and thickness, boundary conditions, and material, and the impact parameters are mass, shape, and velocity.
NASA Astrophysics Data System (ADS)
Honarmand, M.; Moradi, M.
2018-06-01
In this paper, using the scaled boundary finite element method (SBFM), perfect and cracked nanographene sheets were simulated for the first time. In this analysis, the atomic carbon bonds were modeled by simple bar elements with circular cross-sections. Compared with molecular dynamics (MD), the results obtained from the SBFM analysis are quite acceptable for zero-degree cracks. For all angles except zero, the Griffith criterion can be applied to relate critical stress and crack length. Finally, despite the simplifications used in the nanographene analysis, the obtained results reproduce the mechanical behavior with high accuracy compared with experimental and MD results.
Assessment of the spatial scaling behaviour of floods in the United Kingdom
NASA Astrophysics Data System (ADS)
Formetta, Giuseppe; Stewart, Elizabeth; Bell, Victoria
2017-04-01
Floods are among the most dangerous natural hazards, causing loss of life and significant damage to private and public property. Regional flood-frequency analysis (FFA) methods are essential tools to assess the flood hazard and plan interventions for its mitigation. FFA methods are often based on the well-known index flood method, which assumes the invariance of the coefficient of variation of floods with drainage area. This assumption is equivalent to the simple scaling or self-similarity assumption for peak floods, i.e. their spatial structure remains similar, in a particular and relatively simple way, to itself over a range of scales. Spatial scaling of floods has been evaluated at national scale for countries such as Canada, the USA, and Australia. To our knowledge, such a study has not been conducted for the United Kingdom, even though the standard FFA method there is based on the index flood assumption. In this work we present an integrated approach to assess the spatial scaling behaviour of floods in the United Kingdom using three different methods: product moments (PM), probability weighted moments (PWM), and quantile analysis (QA). We analyse both instantaneous and daily annual observed maximum floods, and perform our analysis both across the entire country and in its sub-climatic regions as defined in the Flood Studies Report (NERC, 1975). To evaluate the relationship between the k-th moments or quantiles and the drainage area, we used both regression with area alone and multiple regression considering other explanatory variables to account for the geomorphology, amount of rainfall, and soil type of the catchments. The latter multiple regression approach was only recently demonstrated to be more robust than the traditional regression with area alone, which can lead to biased estimates of scaling exponents and misinterpretation of spatial scaling behaviour. We tested our framework on almost 600 rural catchments in the UK, considered both as an entire region and split into 11 sub-regions with 50 catchments per region on average. Preliminary results from the three different spatial scaling methods are generally in agreement and indicate that: i) only some of the peak flow variability is explained by area alone (approximately 50% for the entire country, ranging between 40% and 70% for the sub-regions); ii) this percentage increases to 90% for the entire country and ranges between 80% and 95% for the sub-regions when the multiple regression is used; iii) the simple scaling hypothesis holds in all sub-regions with the exception of weak multi-scaling found in regions 2 (North), 5 and 6 (South East). We hypothesize that these deviations can be explained by heterogeneity in large-scale precipitation and by the influence of the soil type (predominantly chalk) on the flood formation process in regions 5 and 6.
Finite-size scaling analysis on the phase transition of a ferromagnetic polymer chain model
NASA Astrophysics Data System (ADS)
Luo, Meng-Bo
2006-01-01
The finite-size scaling analysis method is applied to study the phase transition of a self-avoiding walking polymer chain with spatial nearest-neighbor ferromagnetic Ising interaction on the simple cubic lattice. Assuming the scaling M²(T,n) = n^(−2β/ν)[Φ₀ + Φ₁ n^(1/ν)(T − Tc) + O(n^(2/ν)(T − Tc)²)] with the square magnetization M² as the order parameter and the chain length n as the size, we estimate the second-order phase-transition temperature Tc = 1.784 J/kB and critical exponents 2β/ν ≈ 0.668 and ν ≈ 1.0. The self-diffusion constant and the chain dimensions ⟨R²⟩ and ⟨S²⟩ do not obey such a scaling law.
Scaling range of power laws that originate from fluctuation analysis
NASA Astrophysics Data System (ADS)
Grech, Dariusz; Mazur, Zygmunt
2013-05-01
We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013)] to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit power law dependence of the scaling range with respect to the length of the searched signal and with respect to the accuracy R² of the fit to the scaling law imposed by the DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with different levels of autocorrelation, measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finances, or physiology, where a huge number of short time series has to be examined at once and wherever a preliminary check of the scaling range regime for each series separately is neither effective nor possible.
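For readers unfamiliar with the DFA procedure referenced here, a compact sketch (a standard first-order DFA, not the authors' code): integrate the mean-subtracted series, detrend it in windows of size s, and read the scaling exponent from the slope of log F(s) versus log s.

```python
import numpy as np

def dfa(x, scales, order=1):
    """First-order DFA: fluctuation function F(s) for each window size s."""
    y = np.cumsum(x - x.mean())                    # integrated profile
    F = []
    for s in scales:
        n = len(y) // s
        segs = y[:n * s].reshape(n, s)
        t = np.arange(s)
        # root-mean-square residual after polynomial detrending per window
        rms = [np.sqrt(np.mean((g - np.polyval(np.polyfit(t, g, order), t))**2))
               for g in segs]
        F.append(np.mean(rms))
    return np.array(F)

x = np.random.default_rng(2).normal(size=4096)     # uncorrelated test series
scales = np.unique(np.logspace(1, 3, 20).astype(int))
F = dfa(x, scales)
slope = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(f"scaling exponent ~ {slope:.2f} (expect ~0.5 for white noise)")
```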
Xueri Dang; Chun-Ta Lai; David Y. Hollinger; Andrew J. Schauer; Jingfeng Xiao; J. William Munger; Clenton Owensby; James R. Ehleringer
2011-01-01
We evaluated an idealized boundary layer (BL) model with simple parameterizations using vertical transport information from community model outputs (NCAR/NCEP Reanalysis and ECMWF Interim Analysis) to estimate regional-scale net CO2 fluxes from 2002 to 2007 at three forest and one grassland flux sites in the United States. The BL modeling...
Structure identification methods for atomistic simulations of crystalline materials
Stukowski, Alexander
2012-05-28
Here, we discuss existing and new computational analysis techniques to classify local atomic arrangements in large-scale atomistic computer simulations of crystalline solids. This article includes a performance comparison of typical analysis algorithms such as common neighbor analysis (CNA), centrosymmetry analysis, bond angle analysis, bond order analysis and Voronoi analysis. In addition we propose a simple extension to the CNA method that makes it suitable for multi-phase systems. Finally, we introduce a new structure identification algorithm, the neighbor distance analysis, which is designed to identify atomic structure units in grain boundaries.
Modes and emergent time scales of embayed beach dynamics
NASA Astrophysics Data System (ADS)
Ratliff, Katherine M.; Murray, A. Brad
2014-10-01
In this study, we use a simple numerical model (the Coastline Evolution Model) to explore alongshore transport-driven shoreline dynamics within generalized embayed beaches (neglecting cross-shore effects). Using principal component analysis (PCA), we identify two primary orthogonal modes of shoreline behavior that describe shoreline variation about its unchanging mean position: the rotation mode, which has been previously identified and describes changes in the mean shoreline orientation, and a newly identified breathing mode, which represents changes in shoreline curvature. Wavelet analysis of the PCA mode time series reveals characteristic time scales of these modes (typically years to decades) that emerge within even a statistically constant white-noise wave climate (without changes in external forcing), suggesting that these time scales can arise from internal system dynamics. The time scales of both modes increase linearly with shoreface depth, suggesting that the embayed beach sediment transport dynamics exhibit a diffusive scaling.
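The PCA step described above amounts to an SVD of the shoreline-anomaly matrix; the sketch below builds a synthetic ensemble with a rotation-like and a breathing-like pattern (hypothetical data, not Coastline Evolution Model output) and recovers the two modes:

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, np.pi, 50)                      # alongshore coordinate
amps = rng.normal(size=(200, 2))                   # 200 snapshots, 2 modes
shoreline = (amps[:, :1] * np.cos(x)               # rotation-like pattern
             + 0.5 * amps[:, 1:] * np.cos(2 * x)   # breathing-like pattern
             + 0.05 * rng.normal(size=(200, 50)))  # noise

# PCA via SVD of the anomaly (mean-removed) matrix.
anomaly = shoreline - shoreline.mean(axis=0)
U, S, Vt = np.linalg.svd(anomaly, full_matrices=False)
var_frac = S**2 / (S**2).sum()
print("variance explained by the first two modes:", var_frac[:2].round(2))
# Vt[0], Vt[1] are the spatial patterns; U[:, k] * S[k] their time series.
```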
Heidari, Zahra; Roe, Daniel R; Galindo-Murillo, Rodrigo; Ghasemi, Jahan B; Cheatham, Thomas E
2016-07-25
Long time scale molecular dynamics (MD) simulations of biological systems are becoming increasingly commonplace due to the availability of both large-scale computational resources and significant advances in the underlying simulation methodologies. Therefore, it is useful to investigate and develop data mining and analysis techniques to quickly and efficiently extract the biologically relevant information from the incredible amount of generated data. Wavelet analysis (WA) is a technique that can quickly reveal significant motions during an MD simulation. Here, the application of WA on well-converged long time scale (tens of μs) simulations of a DNA helix is described. We show how WA combined with a simple clustering method can be used to identify both the physical and temporal locations of events with significant motion in MD trajectories. We also show that WA can not only distinguish and quantify the locations and time scales of significant motions, but by changing the maximum time scale of WA a more complete characterization of these motions can be obtained. This allows motions of different time scales to be identified or ignored as desired.
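A minimal illustration of using a continuous wavelet transform to localize a transient motion in a trajectory-derived signal, assuming the PyWavelets package is available; the signal is synthetic (a brief fast oscillation buried in noise), not the DNA simulation data:

```python
import numpy as np
import pywt  # PyWavelets, assumed available

# Synthetic trajectory-derived signal: noise plus a brief fast oscillation.
rng = np.random.default_rng(4)
signal = 0.3 * rng.normal(size=4096)
signal[1500:1700] += np.sin(2 * np.pi * np.arange(200) / 40.0)  # the "event"

# Continuous wavelet transform: power localizes the event in time and scale.
scales = np.arange(1, 128)
coefs, freqs = pywt.cwt(signal, scales, 'morl')
power = np.abs(coefs)**2
s_idx, frame = np.unravel_index(power.argmax(), power.shape)
print(f"strongest localized motion near frame {frame} at scale {scales[s_idx]}")
```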
Humans' perceptions of animal mentality: ascriptions of thinking.
Rasmussen, J L; Rajecki, D W; Craft, H D
1993-09-01
On rating scales, 294 students indicated whether it was reasonable to say that a dog, cat, bird, fish, and school-age child had the capacity for 12 commonplace human mental operations or experiences. Factor analysis of responses identified 2 levels of attributions, simple thinking and complex thinking. The child and all animals were credited with simple thinking, but respondents were much more likely to ascribe complex thinking to the child. (A pilot study with 8 animal-behavior professionals generally replicated these results.) Certain mental categories (e.g., emotion) were judged by students to be simple for all target types; others (e.g., conservation) were judged to be universally complex. Further factoring revealed articulate ascriptions for key mental categories. Play and imagine was seen as simple in the animals but complex for the child, but enumeration and sorting and dream were seen as simple in the child but complex for the animals.
Modeling gene expression measurement error: a quasi-likelihood approach
Strimmer, Korbinian
2003-01-01
Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression, this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyzing gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637
Environmental management practices are trending away from simple, local- scale assessments toward complex, multiple-stressor regional assessments. Landscape ecology provides the theory behind these assessments while geographic information systems (GIS) supply the tools to impleme...
Contour fractal analysis of grains
NASA Astrophysics Data System (ADS)
Guida, Giulia; Casini, Francesca; Viggiani, Giulia MB
2017-06-01
Fractal analysis has been shown to be useful in image processing to characterise the shape and the grey-scale complexity in different applications spanning from electronic to medical engineering (e.g. [1]). Fractal analysis consists of several methods to assign a dimension and other fractal characteristics to a dataset describing geometric objects. Limited studies have been conducted on the application of fractal analysis to the classification of the shape characteristics of soil grains. The main objective of the work described in this paper is to obtain, from the results of systematic fractal analysis of artificial simple shapes, the characterization of the particle morphology at different scales. The long term objective of the research is to link the microscopic features of granular media with the mechanical behaviour observed in the laboratory and in situ.
Scaling laws and fluctuations in the statistics of word frequencies
NASA Astrophysics Data System (ADS)
Gerlach, Martin; Altmann, Eduardo G.
2014-11-01
In this paper, we combine statistical analysis of written texts and simple stochastic models to explain the appearance of scaling laws in the statistics of word frequencies. The average vocabulary of an ensemble of fixed-length texts is known to scale sublinearly with the total number of words (Heaps’ law). Analyzing the fluctuations around this average in three large databases (Google-ngram, English Wikipedia, and a collection of scientific articles), we find that the standard deviation scales linearly with the average (Taylor's law), in contrast to the prediction of decaying fluctuations obtained using simple sampling arguments. We explain both scaling laws (Heaps’ and Taylor) by modeling the usage of words using a Poisson process with a fat-tailed distribution of word frequencies (Zipf's law) and topic-dependent frequencies of individual words (as in topic models). Considering topical variations lead to quenched averages, turn the vocabulary size a non-self-averaging quantity, and explain the empirical observations. For the numerous practical applications relying on estimations of vocabulary size, our results show that uncertainties remain large even for long texts. We show how to account for these uncertainties in measurements of lexical richness of texts with different lengths.
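Heaps' law can be checked directly by tracking vocabulary growth; the sketch below samples synthetic "words" from a Zipf (fat-tailed) frequency distribution, as in the paper's modeling assumption, and fits the sublinear exponent:

```python
import numpy as np

def vocabulary_growth(words):
    """Vocabulary size after each successive word (Heaps' curve)."""
    seen, growth = set(), []
    for w in words:
        seen.add(w)
        growth.append(len(seen))
    return np.array(growth)

# Synthetic "text": word IDs drawn from a fat-tailed Zipf distribution.
rng = np.random.default_rng(5)
words = rng.zipf(a=1.8, size=50_000)
V = vocabulary_growth(words)

N = np.arange(1, len(V) + 1)
beta = np.polyfit(np.log(N[1000:]), np.log(V[1000:]), 1)[0]
print(f"Heaps' exponent ~ {beta:.2f} (sublinear growth: beta < 1)")
```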
Tsunamis generated by subaerial mass flows
Walder, S.J.; Watts, P.; Sorensen, O.E.; Janssen, K.
2003-01-01
Tsunamis generated in lakes and reservoirs by subaerial mass flows pose distinctive problems for hazards assessment because the domain of interest is commonly the "near field," beyond the zone of complex splashing but close enough to the source that wave propagation effects are not predominant. Scaling analysis of the equations governing water wave propagation shows that near-field wave amplitude and wavelength should depend on certain measures of mass flow dynamics and volume. The scaling analysis motivates a successful collapse (in dimensionless space) of data from two distinct sets of experiments with solid block "wave makers." To first order, wave amplitude/water depth is a simple function of the ratio of dimensionless wave maker travel time to dimensionless wave maker volume per unit width. Wave amplitude data from previous laboratory investigations with both rigid and deformable wave makers follow the same trend in dimensionless parameter space as our own data. The characteristic wavelength/water depth for all our experiments is simply proportional to dimensionless wave maker travel time, which is itself given approximately by a simple function of wave maker length/water depth. Wave maker shape and rigidity do not otherwise influence wave features. Application of the amplitude scaling relation to several historical events yields "predicted" near-field wave amplitudes in reasonable agreement with measurements and observations. Together, the scaling relations for near-field amplitude, wavelength, and submerged travel time provide key inputs necessary for computational wave propagation and hazards assessment.
Data series embedding and scale invariant statistics.
Michieli, I; Medved, B; Ristov, S
2010-06-01
Data sequences acquired from bio-systems such as human gait data, heart rate interbeat data, or DNA sequences exhibit complex dynamics that is frequently described by a long-memory or power-law decay of the autocorrelation function. One way of characterizing that dynamics is through scale invariant statistics or "fractal-like" behavior. Several methods have been proposed for quantifying scale invariant parameters of physiological signals, the most common being detrended fluctuation analysis, sample mean variance analyses, power spectral density analysis, R/S analysis, and, more recently in the realm of the multifractal approach, wavelet analysis. In this paper it is demonstrated that embedding the time series data in a high-dimensional pseudo-phase space reveals scale invariant statistics in a simple fashion. The procedure is applied to different stride-interval data sets from human gait measurement time series (PhysioBank data library). Results show that the introduced mapping adequately separates long-memory from random behavior. Smaller gait data sets were analyzed, and scale-free trends over limited scale intervals were successfully detected. The method was verified on artificially produced time series with known scaling behavior and with varying content of noise. The possibility of the method falsely detecting long-range dependence in artificially generated short-range dependence series was also investigated. (c) 2009 Elsevier B.V. All rights reserved.
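A minimal sketch of the pseudo-phase-space embedding step is shown below; the embedding dimension, delay, and test signals are illustrative assumptions rather than the paper's settings.

    import numpy as np

    def delay_embed(x, dim, tau=1):
        # Rows are delay vectors [x(t), x(t+tau), ..., x(t+(dim-1)*tau)].
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    rng = np.random.default_rng(1)
    noise = rng.standard_normal(2048)        # short-memory reference signal
    walk = np.cumsum(noise)                  # strongly correlated counterpart
    for series in (noise, walk):
        E = delay_embed(series, dim=5, tau=2)
        print(E.shape)                       # (2040, 5)

Scale invariant statistics can then be read off from how point clouds like E fill the embedding space at different resolutions, which is the kind of separation between long-memory and random behavior the paper reports.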
NASA Astrophysics Data System (ADS)
Paredes-Miranda, G.; Arnott, W. P.; Moosmuller, H.
2010-12-01
The global trend toward urbanization and the resulting increase in city population has directed attention toward air pollution in megacities. A closely related question of importance for urban planning and attainment of air quality standards is how pollutant concentrations scale with city population. In this study, we use measurements of light absorption and light scattering coefficients as proxies for primary (i.e., black carbon; BC) and total (i.e., particulate matter; PM) pollutant concentrations, to start addressing the following questions: What patterns and generalizations are emerging from our expanding data sets on urban air pollution? How does per-capita air pollution vary with economic, geographic, and meteorological conditions of an urban area? Does air pollution provide an upper limit on city size? Diurnal analysis of black carbon concentration measurements in suburban Mexico City, Mexico, Las Vegas, NV, USA, and Reno, NV, USA for similar seasons suggests that commonly emitted primary air pollutant concentrations scale approximately as the square root of the urban population N, consistent with a simple 2-d box model. The measured absorption coefficient Babs is approximately proportional to the BC concentration (primary pollution) and thus scales with the square root of the population N. Since secondary pollutants form through photochemical reactions involving primary pollutants, they also scale with the square root of N. Therefore the scattering coefficient Bsca, a proxy for PM concentration, is also expected to scale with the square root of N. Here we present light absorption and scattering measurements and data on meteorological conditions, and compare the population scaling of these pollutant measurements with predictions from the simple 2-d box model. We find that these basin cities share the square-root-of-N dependence. Data from other cities will be discussed as time permits.
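The square-root scaling claim can be checked with a one-line log-log fit; the population and concentration numbers below are invented placeholders chosen only to illustrate the procedure, not measured values.

    import numpy as np

    population = np.array([4.4e5, 2.0e6, 2.1e7])  # hypothetical N for three cities
    bc = np.array([0.6, 1.4, 4.2])                # hypothetical BC, ug/m^3

    # Fit log(BC) = a + b*log(N); the 2-d box model predicts b ~ 0.5.
    b, a = np.polyfit(np.log(population), np.log(bc), 1)
    print(f"fitted exponent b = {b:.2f}")          # ~0.50 for these placeholders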
Alternative Analysis of the Michaelis-Menten Equations
ERIC Educational Resources Information Center
Krogstad, Harald E.; Dawed, Mohammed Yiha; Tegegne, Tadele Tesfa
2011-01-01
Courses in mathematical modelling are always in need of simple, illustrative examples. The Michaelis-Menten reaction kinetics equations have been considered to be a basic example of scaling and singular perturbation. However, the leading order approximations do not easily show the expected behaviour, and this note proposes a different perturbation…
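For orientation, the conventional scaling the note departs from (the classical Lin-Segel nondimensionalization of the Michaelis-Menten system, written here under standard notation assumptions) is:

    \frac{du}{d\tau} = -u + (u + K - \lambda)\,v, \qquad
    \varepsilon \frac{dv}{d\tau} = u - (u + K)\,v,

with u = s/s_0 the scaled substrate, v = c/e_0 the scaled complex, \tau = k_1 e_0 t, and the dimensionless groups \varepsilon = e_0/s_0, K = (k_{-1} + k_2)/(k_1 s_0), \lambda = k_2/(k_1 s_0). The singular perturbation arises because \varepsilon multiplies the highest derivative of v, so the leading-order outer solution enforces the quasi-steady-state relation v = u/(u + K).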
Sequence analysis reveals genomic factors affecting EST-SSR primer performance and polymorphism
USDA-ARS?s Scientific Manuscript database
Search for simple sequence repeat (SSR) motifs and design of flanking primers in expressed sequence tag (EST) sequences can be easily done at a large scale using bioinformatics programs. However, failed amplification and/or detection, along with lack of polymorphism, is often seen among randomly sel...
Liquid-vapor rectilinear diameter revisited
NASA Astrophysics Data System (ADS)
Garrabos, Y.; Lecoutre, C.; Marre, S.; Beysens, D.; Hahn, I.
2018-02-01
In the modern theory of critical phenomena, the liquid-vapor density diameter in simple fluids is generally expected to deviate from a rectilinear law approaching the critical point. However, by performing precise scannerlike optical measurements of the position of the SF6 liquid-vapor meniscus, in an approach much closer to criticality in temperature and density than earlier measurements, no deviation from a rectilinear diameter can be detected. The observed meniscus position from far (10 K) to extremely close (1 mK) to the critical temperature is analyzed using recent theoretical models to predict the complete scaling consequences of a fluid asymmetry. The temperature dependence of the meniscus position appears consistent with the law of rectilinear diameter. The apparent absence of the critical hook therefore seemingly rules out the need for a pressure scaling field contribution in the complete scaling framework for this analysis of SF6. More generally, this work suggests a way to clarify the experimental ambiguities in simple fluids regarding the near-critical singularities of the density diameter.
Impact of the time scale of model sensitivity response on coupled model parameter estimation
NASA Astrophysics Data System (ADS)
Liu, Chang; Zhang, Shaoqing; Li, Shan; Liu, Zhengyu
2017-11-01
That a model has sensitivity responses to parameter uncertainties is a key concept in implementing model parameter estimation using filtering theory and methodology. Depending on the nature of associated physics and characteristic variability of the fluid in a coupled system, the response time scales of a model to parameters can be different, from hourly to decadal. Unlike state estimation, where the update frequency is usually linked with observational frequency, the update frequency for parameter estimation must be associated with the time scale of the model sensitivity response to the parameter being estimated. Here, with a simple coupled model, the impact of model sensitivity response time scales on coupled model parameter estimation is studied. The model includes characteristic synoptic to decadal scales by coupling a long-term varying deep ocean with a slow-varying upper ocean forced by a chaotic atmosphere. Results show that, using the update frequency determined by the model sensitivity response time scale, both the reliability and quality of parameter estimation can be improved significantly, and thus the estimated parameters make the model more consistent with the observation. These simple model results provide a guideline for when real observations are used to optimize the parameters in a coupled general circulation model for improving climate analysis and prediction initialization.
NASA Technical Reports Server (NTRS)
Levy, G.; Brown, R. A.
1986-01-01
A simple economical objective analysis scheme is devised and tested on real scatterometer data. It is designed to treat dense data such as those of the Seasat A Satellite Scatterometer (SASS) for individual or multiple passes, and preserves subsynoptic scale features. Errors are evaluated with the aid of sampling ('bootstrap') statistical methods. In addition, sensitivity tests have been performed which establish qualitative confidence in calculated fields of divergence and vorticity. The SASS wind algorithm could be improved; however, the data at this point are limited by instrument errors rather than analysis errors. The analysis error is typically negligible in comparison with the instrument error, but amounts to 30 percent of the instrument error in areas of strong wind shear. The scheme is very economical, and thus suitable for large volumes of dense data such as SASS data.
Scaling of drizzle virga depth with cloud thickness for marine stratocumulus clouds
Yang, Fan; Luke, Edward P.; Kollias, Pavlos; ...
2018-04-20
Drizzle plays a crucial role in cloud lifetime and the radiation properties of marine stratocumulus clouds. Understanding where drizzle exists in the sub-cloud layer, which depends on drizzle virga depth, can help us better understand where below-cloud scavenging and evaporative cooling and moisturizing occur. In this study, we examine the statistical properties of drizzle frequency and virga depth of marine stratocumulus based on unique ground-based remote sensing data. Results show that marine stratocumulus clouds are drizzling nearly all the time. In addition, we derive a simple scaling analysis between drizzle virga thickness and cloud thickness. Our analytical expression agrees with the observational data reasonably well, which suggests that our formula provides a simple parameterization for drizzle virga of stratocumulus clouds suitable for use in other models.
Metrology with Weak Value Amplification and Related Topics
2013-10-12
sensitivity depends crucially on the relative time scales involved [Fig. 1 in the report is a simple experimental schematic: pulsed laser, PBS, PC, HWP, SBC, piezo, 50:50 splitter, and split detector] ... reasons why this may be impossible or inadvisable given a laboratory set-up; there may be a minimum quiet time between laser pulses, for example ... although each measurement is a full 100 ms, the filtering limits the laser noise to time scales of about 30 ms, which is taken as the integration time in the analysis.
A simple and fast representation space for classifying complex time series
NASA Astrophysics Data System (ADS)
Zunino, Luciano; Olivares, Felipe; Bariviera, Aurelio F.; Rosso, Osvaldo A.
2017-03-01
In the context of time series analysis, considerable effort has been directed towards the implementation of efficient discriminating statistical quantifiers. Very recently, a simple and fast representation space has been introduced, namely the number of turning points versus the Abbe value. It is able to separate time series from stationary and non-stationary processes with long-range dependence. In this work we show that this bidimensional approach is useful for distinguishing complex time series: different sets of financial and physiological data are efficiently discriminated. Additionally, a multiscale generalization that takes into account the multiple time scales often involved in complex systems has also been proposed. This multiscale analysis is essential to reach a higher discriminative power between physiological time series in health and disease.
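Both coordinates of the representation space take only a few lines of code; the test signals below are illustrative assumptions.

    import numpy as np

    def turning_point_rate(x):
        d = np.diff(x)
        # A turning point occurs where successive differences change sign.
        return np.mean(d[:-1] * d[1:] < 0)

    def abbe_value(x):
        # Abbe value: half the mean squared successive difference over the variance.
        return 0.5 * np.mean(np.diff(x) ** 2) / np.var(x)

    rng = np.random.default_rng(2)
    white = rng.standard_normal(4096)            # uncorrelated
    walk = np.cumsum(rng.standard_normal(4096))  # non-stationary, strongly correlated
    for name, x in (("white noise", white), ("random walk", walk)):
        print(name, turning_point_rate(x), abbe_value(x))

White noise lands near (2/3, 1) in this plane while a random walk sits near (1/2, 0), which is the kind of separation between process classes that the representation space exploits.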
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fetterman, D. E., Jr.
1965-01-01
Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.
Asymptotic stability and instability of large-scale systems. [using vector Liapunov functions
NASA Technical Reports Server (NTRS)
Grujic, L. T.; Siljak, D. D.
1973-01-01
The purpose of this paper is to develop new methods for constructing vector Lyapunov functions and broaden the application of Lyapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. By redefining interconnection functions among the subsystems according to interconnection matrices, the same mathematical machinery can be used to determine connective asymptotic stability of large-scale systems under arbitrary structural perturbations.
Biochemical analysis of force-sensitive responses using a large-scale cell stretch device.
Renner, Derrick J; Ewald, Makena L; Kim, Timothy; Yamada, Soichiro
2017-09-03
Physical force has emerged as a key regulator of tissue homeostasis, and plays an important role in embryogenesis, tissue regeneration, and disease progression. Currently, the details of protein interactions under elevated physical stress are largely missing, preventing a fundamental, molecular understanding of mechano-transduction. This is in part due to the difficulty of isolating large quantities of cell lysates exposed to force-bearing conditions for biochemical analysis. We designed a simple, easy-to-fabricate, large-scale cell stretch device for the analysis of force-sensitive cell responses. Using proximal biotinylation (BioID) analysis or phospho-specific antibodies, we detected force-sensitive biochemical changes in cells exposed to prolonged cyclic substrate stretch. For example, using α-catenin tagged with the promiscuous biotin ligase BirA*, the biotinylation of myosin IIA increased with stretch, suggesting the close proximity of myosin IIA to α-catenin under a force-bearing condition. Furthermore, using phospho-specific antibodies, Akt phosphorylation was reduced upon stretch while Src phosphorylation was unchanged. Interestingly, phosphorylation of GSK3β, a downstream effector of the Akt pathway, was also reduced with stretch, while the phosphorylation of other Akt effectors was unchanged. These data suggest that the Akt-GSK3β pathway is force-sensitive. This simple cell stretch device enables biochemical analysis of force-sensitive responses and has potential to uncover molecules underlying mechano-transduction.
Corruption in Higher Education: Conceptual Approaches and Measurement Techniques
ERIC Educational Resources Information Center
Osipian, Ararat L.
2007-01-01
Corruption is a complex and multifaceted phenomenon. Forms of corruption are multiple. Measuring corruption is necessary not only for getting ideas about the scale and scope of the problem, but for making simple comparisons between the countries and conducting comparative analysis of corruption. While the total impact of corruption is indeed…
A model of return intervals between earthquake events
NASA Astrophysics Data System (ADS)
Zhou, Yu; Chechkin, Aleksei; Sokolov, Igor M.; Kantz, Holger
2016-06-01
Application of the diffusion entropy analysis and the standard deviation analysis to the time sequence of southern California earthquake events from 1976 to 2002 uncovered scaling behavior typical of anomalous diffusion. However, the origin of such behavior is still under debate. Some studies attribute the scaling behavior to correlations in the return intervals, or waiting times, between aftershocks or mainshocks. To elucidate the nature of the scaling, we applied specific reshuffling techniques to eliminate correlations between different types of events and then examined how this affects the scaling behavior. We demonstrate that the origin of the observed scaling behavior is the interplay between the mainshock waiting time distribution and the structure of clusters of aftershocks, not correlations in waiting times between the mainshocks and aftershocks themselves. Our findings are corroborated by numerical simulations of a simple model showing very similar behavior. The mainshocks are modeled by a renewal process with a power-law waiting time distribution between events, and aftershocks follow a nonhomogeneous Poisson process with the rate governed by Omori's law.
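A minimal simulation of the model class described (renewal mainshocks with power-law waiting times, Omori-rate aftershocks generated by thinning) might look as follows; all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(3)

    def mainshock_times(n, alpha=1.5):
        # Renewal process with power-law waiting times, P(w) ~ w**-(1 + alpha).
        waits = rng.pareto(alpha, size=n) + 1.0
        return np.cumsum(waits)

    def omori_aftershocks(t0, k=10.0, c=1.0, p=1.1, horizon=100.0):
        # Nonhomogeneous Poisson process with Omori rate k/(c + t)**p,
        # sampled by Lewis-Shedler thinning against the peak rate at t = 0.
        times, t, lam_max = [], 0.0, k / c**p
        while True:
            t += rng.exponential(1.0 / lam_max)
            if t >= horizon:
                return np.array(times)
            if rng.random() < (k / (c + t) ** p) / lam_max:
                times.append(t0 + t)

    mains = mainshock_times(200)
    catalog = np.sort(np.concatenate([mains] + [omori_aftershocks(t) for t in mains]))
    intervals = np.diff(catalog)   # return intervals for the scaling analysis

Reshuffling experiments of the kind described in the abstract amount to permuting parts of such a catalog (e.g. the mainshock waiting times) and recomputing the diffusion entropy or standard deviation scaling on the shuffled intervals.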
NASA Astrophysics Data System (ADS)
Hinton, Courtney; Punjabi, Alkesh; Ali, Halima
2009-11-01
The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were analytically calculated, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op. cit.]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to low mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low mn perturbation, toroidal flux loss scales as the 1.16 power of δ, and poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.
Quantifying predictability in a model with statistical features of the atmosphere
Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya
2002-01-01
The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863
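In the Gaussian setting the abstract describes, the relative entropy between the prediction p = N(mu_p, sigma_p^2) and the climatology q = N(mu_q, sigma_q^2) has a standard one-dimensional closed form that splits exactly into the two components discussed:

    R(p\,\|\,q) = \underbrace{\frac{(\mu_p - \mu_q)^2}{2\sigma_q^2}}_{\text{signal}}
    + \underbrace{\frac{1}{2}\left(\frac{\sigma_p^2}{\sigma_q^2} - 1
    - \ln\frac{\sigma_p^2}{\sigma_q^2}\right)}_{\text{dispersion}}.

The paper's finding is that the first (signal) term dominates the variation of predictive utility across initial conditions, while the second (dispersion) term, the quantity that ensemble spread measures, is secondary.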
2010-01-01
Background Fatigue is a common and debilitating symptom in multiple sclerosis (MS). Best-practice guidelines suggest that health services should repeatedly assess fatigue in persons with MS. Several fatigue scales are available but concern has been expressed about their validity. The objective of this study was to examine the reliability and validity of a new scale for MS fatigue, the Neurological Fatigue Index (NFI-MS). Methods Qualitative analysis of 40 MS patient interviews had previously contributed to a coherent definition of fatigue, and a potential 52 item set representing the salient themes. A draft questionnaire was mailed out to 1223 people with MS, and the resulting data subjected to both factor and Rasch analysis. Results Data from 635 (51.9% response) respondents were split randomly into an 'evaluation' and 'validation' sample. Exploratory factor analysis identified four potential subscales: 'physical', 'cognitive', 'relief by diurnal sleep or rest' and 'abnormal nocturnal sleep and sleepiness'. Rasch analysis led to further item reduction and the generation of a Summary scale comprising items from the Physical and Cognitive subscales. The scales were shown to fit Rasch model expectations, across both the evaluation and validation samples. Conclusion A simple 10-item Summary scale, together with scales measuring the physical and cognitive components of fatigue, were validated for MS fatigue. PMID:20152031
Self-folding and aggregation of amyloid nanofibrils
NASA Astrophysics Data System (ADS)
Paparcone, Raffaella; Cranford, Steven W.; Buehler, Markus J.
2011-04-01
Amyloids are highly organized protein filaments, rich in β-sheet secondary structures, that self-assemble to form dense plaques in brain tissues affected by severe neurodegenerative disorders (e.g. Alzheimer's disease). Identified as natural functional materials in bacteria, in addition to their remarkable mechanical properties, amyloids have also been proposed as a platform for novel biomaterials in nanotechnology applications including nanowires, liquid crystals, scaffolds and thin films. Despite recent progress in understanding amyloid structure and behavior, the latent self-assembly mechanism and the underlying adhesion forces that drive the aggregation process remain poorly understood. On the basis of previous full atomistic simulations, here we report a simple coarse-grain model to analyze the competition between adhesive forces and elastic deformation of amyloid fibrils. We use a simple model system to investigate self-assembly mechanisms of fibrils, focused on the formation of self-folded nanorackets and nanorings, and thereby address a critical issue in linking the biochemical (Angstrom) scale to the micrometre scale relevant for larger-scale states of functional amyloid materials. We investigate the effect of varying the interfibril adhesion energy on the structure and stability of self-folded nanorackets and nanorings and demonstrate that these aggregated amyloid fibrils are stable in such states even when the fibril-fibril interaction is relatively weak, given that the constituting amyloid fibril length exceeds a critical fibril length-scale of several hundred nanometres. We further present a simple approach to directly determine the interfibril adhesion strength from geometric measures. In addition to providing insight into the physics of aggregation of amyloid fibrils, our model enables the analysis of large-scale amyloid plaques and presents a new method for the estimation and engineering of the adhesive forces responsible for the self-assembly process of amyloid nanostructures, filling a gap that previously existed between full atomistic simulations of primarily ultra-short fibrils and much larger micrometre-scale amyloid aggregates. Via direct simulation of large-scale amyloid aggregates consisting of hundreds of fibrils, we demonstrate that the fibril length has a profound impact on their structure and mechanical properties, where the critical fibril length-scale derived from our analysis of self-folded nanorackets and nanorings defines the structure of amyloid aggregates. A multi-scale modeling approach as used here, bridging the scales from Angstroms to micrometres, opens a wide range of possible nanotechnology applications by presenting a holistic framework that balances the mechanical properties of individual fibrils, hierarchical self-assembly, and the adhesive forces determining their stability, to facilitate the design of de novo amyloid materials.
Simple heterogeneity parametrization for sea surface temperature and chlorophyll
NASA Astrophysics Data System (ADS)
Skákala, Jozef; Smyth, Timothy J.
2016-06-01
Using satellite maps, this paper offers a comprehensive analysis of chlorophyll and SST heterogeneity in the shelf seas around the southwest of the UK. The heterogeneity scaling follows a simple power law and is consequently parametrized by two parameters. It is shown that in most cases these two parameters vary relatively little with time. The paper offers a detailed comparison of field heterogeneity between different regions, and also determines how much of each region's heterogeneity is preserved in the annual median data. The paper explicitly demonstrates how one can use these results to calculate a representative measurement area for in situ networks.
NASA Astrophysics Data System (ADS)
Zhaunerchyk, V.; Frasinski, L. J.; Eland, J. H. D.; Feifel, R.
2014-05-01
Multidimensional covariance analysis and its validity for correlating processes leading to multiple products are investigated from a theoretical point of view. The need to correct for false correlations induced by experimental parameters which fluctuate from shot to shot, such as the intensity of self-amplified spontaneous emission x-ray free-electron laser pulses, is emphasized. Threefold covariance analysis based on a simple extension of the two-variable formulation is shown to be valid for variables exhibiting Poisson statistics. In this case, false correlations arising from fluctuations in an unstable experimental parameter that scale linearly with the signals can be eliminated by threefold partial covariance analysis, as defined here. Fourfold covariance based on the same simple extension is found to be invalid in general. Where fluctuations in an unstable parameter induce nonlinear signal variations, a technique of contingent covariance analysis is proposed here to suppress false correlations. In this paper we also show a method to eliminate false correlations associated with fluctuations of several unstable experimental parameters.
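For reference, the two-variable partial covariance on which the threefold extension builds removes linear contamination by a monitored fluctuating parameter I; the toy shot data below are illustrative assumptions.

    import numpy as np

    def partial_covariance(x, y, i):
        # cov(x, y) corrected for the part mediated linearly by i.
        cov = lambda a, b: np.mean((a - a.mean()) * (b - b.mean()))
        return cov(x, y) - cov(x, i) * cov(y, i) / cov(i, i)

    rng = np.random.default_rng(4)
    I = rng.gamma(5.0, 1.0, size=10_000)        # shot-to-shot pulse intensity
    x = 2.0 * I + rng.standard_normal(I.size)   # two signals with no true
    y = 3.0 * I + rng.standard_normal(I.size)   # x-y correlation beyond I
    print(np.cov(x, y)[0, 1])                   # large false correlation
    print(partial_covariance(x, y, I))          # ~0 once I is partialled out

As the abstract notes, this linear correction is exactly what fails when the parameter enters the signals nonlinearly, which motivates the contingent covariance technique proposed there.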
Rueckl, Martin; Lenzi, Stephen C.; Moreno-Velasquez, Laura; Parthier, Daniel; Schmitz, Dietmar; Ruediger, Sten; Johenning, Friedrich W.
2017-01-01
The measurement of activity in vivo and in vitro has shifted from electrical to optical methods. While the indicators for imaging activity have improved significantly over the last decade, tools for analysing optical data have not kept pace. Most available analysis tools are limited in their flexibility and applicability to datasets obtained at different spatial scales. Here, we present SamuROI (Structured analysis of multiple user-defined ROIs), an open source Python-based analysis environment for imaging data. SamuROI simplifies exploratory analysis and visualization of image series of fluorescence changes in complex structures over time and is readily applicable at different spatial scales. In this paper, we show the utility of SamuROI in Ca2+-imaging based applications at three spatial scales: the micro-scale (i.e., sub-cellular compartments including cell bodies, dendrites and spines); the meso-scale, (i.e., whole cell and population imaging with single-cell resolution); and the macro-scale (i.e., imaging of changes in bulk fluorescence in large brain areas, without cellular resolution). The software described here provides a graphical user interface for intuitive data exploration and region of interest (ROI) management that can be used interactively within Jupyter Notebook: a publicly available interactive Python platform that allows simple integration of our software with existing tools for automated ROI generation and post-processing, as well as custom analysis pipelines. SamuROI software, source code and installation instructions are publicly available on GitHub and documentation is available online. SamuROI reduces the energy barrier for manual exploration and semi-automated analysis of spatially complex Ca2+ imaging datasets, particularly when these have been acquired at different spatial scales. PMID:28706482
ViSimpl: Multi-View Visual Analysis of Brain Simulation Data
Galindo, Sergio E.; Toharia, Pablo; Robles, Oscar D.; Pastor, Luis
2016-01-01
After decades of independent morphological and functional brain research, a key point in neuroscience nowadays is to understand the combined relationships between the structure of the brain and its components and their dynamics on multiple scales, ranging from circuits of neurons at micro or mesoscale to brain regions at macroscale. With such a goal in mind, there is a vast amount of research focusing on modeling and simulating activity within neuronal structures, and these simulations generate large and complex datasets which have to be analyzed in order to gain the desired insight. In such context, this paper presents ViSimpl, which integrates a set of visualization and interaction tools that provide a semantic view of brain data with the aim of improving its analysis procedures. ViSimpl provides 3D particle-based rendering that allows visualizing simulation data with their associated spatial and temporal information, enhancing the knowledge extraction process. It also provides abstract representations of the time-varying magnitudes supporting different data aggregation and disaggregation operations and giving also focus and context clues. In addition, ViSimpl tools provide synchronized playback control of the simulation being analyzed. Finally, ViSimpl allows performing selection and filtering operations relying on an application called NeuroScheme. All these views are loosely coupled and can be used independently, but they can also work together as linked views, both in centralized and distributed computing environments, enhancing the data exploration and analysis procedures. PMID:27774062
Dynamics of liquid spreading on solid surfaces
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kalliadasis, S.; Chang, H.C.
1996-09-01
Using simple scaling arguments and a precursor film model, the authors show that the appropriate macroscopic contact angle θ during the slow spreading of a completely or partially wetting liquid under conditions of viscous flow and small slopes should be described by tan θ = [tan³θ_e − 9 Ca log η]^(1/3), where θ_e is the static contact angle, Ca is the capillary number, and η is a scaled Hamaker constant. Using this simple relation as a boundary condition, the authors are able to quantitatively model, without any empirical parameter, the spreading dynamics of several classical spreading phenomena (capillary rise, sessile, and pendant drop spreading) by simply equating the slope of the leading order static bulk region to the dynamic contact angle boundary condition, without performing a matched asymptotic analysis for each case independently as is usually done in the literature.
Song, Jun; Braun, Gordon; Bevis, Eric; Doncaster, Kristen
2006-08-01
Fruit tissues are considered recalcitrant plant tissue for proteomic analysis. Three phenol-free protein extraction procedures for 2-DE were compared and evaluated on apple fruit proteins. A hot SDS buffer extraction incorporating TCA/acetone precipitation was found to be the most effective protocol. The results from SDS-PAGE and 2-DE analysis showed high quality proteins. More than 500 apple polypeptides were separated on a small-scale 2-DE gel. The successful protocol was further tested on banana fruit, in which 504 and 386 proteins were detected in peel and flesh tissues, respectively. To demonstrate the quality of the extracted proteins, several protein spots from apple and banana peels were cut from 2-DE gels, analyzed by MS, and tentatively identified. The protocol described in this study is a simple procedure which could be routinely used in proteomic studies of many types of recalcitrant fruit tissues.
Correlation lengths in hydrodynamic models of active nematics.
Hemingway, Ewan J; Mishra, Prashant; Marchetti, M Cristina; Fielding, Suzanne M
2016-09-28
We examine the scaling with activity of the emergent length scales that control the nonequilibrium dynamics of an active nematic liquid crystal, using two popular hydrodynamic models that have been employed in previous studies. In both models we find that the chaotic spatio-temporal dynamics in the regime of fully developed active turbulence is controlled by a single active scale determined by the balance of active and elastic stresses, regardless of whether the active stress is extensile or contractile in nature. The observed scaling of the kinetic energy and enstrophy with activity is consistent with our single-length scale argument and simple dimensional analysis. Our results provide a unified understanding of apparent discrepancies in the previous literature and demonstrate that the essential physics is robust to the choice of model.
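The single-length-scale argument is a one-line stress balance. Writing K for the orientational elastic constant and \zeta for the activity coefficient (symbol choices assumed here), active stresses of order |\zeta| compete with elastic stresses of order K/\ell^2 at a length \ell, giving

    |\zeta| \sim \frac{K}{\ell_a^2} \quad\Longrightarrow\quad \ell_a \sim \sqrt{K/|\zeta|},

so the emergent vortex scale shrinks as the square root of increasing activity, independently of the extensile or contractile sign.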
MacGregor, Hayley; McKenzie, Andrew; Jacobs, Tanya; Ullauri, Angelica
2018-04-25
In 2011, a decision was made to scale up a pilot innovation involving 'adherence clubs' as a form of differentiated care for HIV positive people in the public sector antiretroviral therapy programme in the Western Cape Province of South Africa. In 2016 we were involved in the qualitative aspect of an evaluation of the adherence club model, the overall objective of which was to assess the health outcomes for patients accessing clubs through epidemiological analysis, and to conduct a health systems analysis to evaluate how the model of care performed at scale. In this paper we adopt a complex adaptive systems lens to analyse planned organisational change through intervention in a state health system. We explore the challenges associated with taking to scale a pilot that began as a relatively simple innovation by a non-governmental organisation. Our analysis reveals how a programme initially representing a simple, unitary system in terms of management and clinical governance had evolved into a complex, differentiated care system. An innovation that was assessed as an excellent idea and received political backing worked well whilst supported on a small scale. However, as scaling up progressed, challenges emerged at the same time as support waned. We identified a 'tipping point' at which the system was more likely to fail, as vulnerabilities magnified and the capacity for adaptation was exceeded. Yet the study also revealed the impressive capacity that a health system can have for catalysing novel approaches. We argue that innovation in large-scale, complex programmes in health systems is a continuous process that requires ongoing support and attention to new innovation as challenges emerge. Rapid scaling up is also likely to require recourse to further resources, and a culture of iterative learning to address emerging challenges and mitigate complex system errors. These are necessary steps to the future success of adherence clubs as a cornerstone of differentiated care. Further research is needed to assess the equity and quality outcomes of a differentiated care model and to ensure the inclusive distribution of the benefits to all categories of people living with HIV.
Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.
2013-01-01
The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.
Landscape scale mapping of forest inventory data by nearest neighbor classification
Andrew Lister
2009-01-01
One of the goals of the Forest Service, U.S. Department of Agriculture's Forest Inventory and Analysis (FIA) program is large-area mapping. FIA scientists have tried many methods in the past, including geostatistical methods, linear modeling, nonlinear modeling, and simple choropleth and dot maps. Mapping methods that require individual model-based maps to be...
Fuzzy logic-based flight control system design
NASA Astrophysics Data System (ADS)
Nho, Kyungmoon
The application of fuzzy logic to aircraft motion control is studied in this dissertation. Self-tuning fuzzy techniques are developed that change the input scaling factors to obtain a robust fuzzy controller over a wide range of operating conditions and nonlinearities for a nonlinear aircraft model. It is demonstrated that properly adjusted input scaling factors can meet the required performance and robustness in a fuzzy controller. For a simple demonstration of the easy design and control capability of a fuzzy controller, a proportional-derivative (PD) fuzzy control system is compared to a conventional controller for a simple dynamical system. The dissertation also describes the design principles and stability analysis of fuzzy control systems by considering the key features of a fuzzy control system, including fuzzification, the rule base and defuzzification. The wing-rock motion of slender delta wings, a linear aircraft model and the six degree of freedom nonlinear aircraft dynamics are considered to illustrate several self-tuning methods employing changes in input scaling factors. Finally, the dissertation concludes with a numerical simulation of glide-slope capture in windshear, demonstrating the robustness of the fuzzy logic based flight control system.
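A minimal singleton (Sugeno-style) PD fuzzy controller sketch is given below to make the role of the input scaling factors concrete; the membership functions, rule table, and gain values are illustrative assumptions, not the dissertation's design.

    import numpy as np

    def tri(x, a, b, c):
        # Triangular membership function with support [a, c] and peak at b.
        return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

    TERMS = {"N": (-2.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 2.0)}
    RULES = {("N", "N"): -1.0, ("N", "Z"): -0.7, ("N", "P"): 0.0,
             ("Z", "N"): -0.5, ("Z", "Z"): 0.0, ("Z", "P"): 0.5,
             ("P", "N"): 0.0, ("P", "Z"): 0.7, ("P", "P"): 1.0}

    def fuzzy_pd(e, de, ke=1.0, kde=0.5, ku=2.0):
        # ke and kde are the input scaling factors a self-tuning scheme adjusts
        # so that the scaled signals stay within the normalized universe [-1, 1].
        en, den = np.clip(ke * e, -1, 1), np.clip(kde * de, -1, 1)
        num = wsum = 0.0
        for (te, tde), out in RULES.items():
            w = min(tri(en, *TERMS[te]), tri(den, *TERMS[tde]))  # rule firing
            num, wsum = num + w * out, wsum + w
        return ku * num / wsum if wsum else 0.0

    print(fuzzy_pd(e=0.4, de=-0.1))   # control action for one error sample

Retuning ke and kde online as operating conditions change is what lets a single rule base cover a nonlinear flight envelope, which is the essence of the self-tuning approach described.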
Narrow linewidth power scaling and phase stabilization of 2-μm thulium fiber lasers
NASA Astrophysics Data System (ADS)
Goodno, Gregory D.; Book, Lewis D.; Rothenberg, Joshua E.; Weber, Mark E.; Benjamin Weiss, S.
2011-11-01
Thulium-doped fiber lasers (TFLs) emitting retina-safe 2-μm wavelengths offer substantial power-scaling advantages over ytterbium-doped fiber lasers for narrow linewidth, single-mode operation. This article reviews the design and performance of a pump-limited, 600 W, single-mode, single-frequency TFL amplifier chain that balances thermal limitations against those arising from stimulated Brillouin scattering (SBS). A simple analysis of thermal and SBS limits is anchored with measurements on kilowatt class Tm and Yb fiber lasers to highlight the scaling advantage of Tm for narrow linewidth operation. We also report recent results on active phase-locking of a TFL amplifier to an optical reference as a precursor to further parallel scaling via coherent beam combining.
NASA Astrophysics Data System (ADS)
Hamelin, Elizabeth I.; Blake, Thomas A.; Perez, Jonas W.; Crow, Brian S.; Shaner, Rebecca L.; Coleman, Rebecca M.; Johnson, Rudolph C.
2016-05-01
Public health response to large scale chemical emergencies presents logistical challenges for sample collection, transport, and analysis. Diagnostic methods used to identify and determine exposure to chemical warfare agents, toxins, and poisons traditionally involve blood collection by phlebotomists, cold transport of biomedical samples, and costly sample preparation techniques. Use of dried blood spots, which consist of dried blood on an FDA-approved substrate, can increase analyte stability, decrease infection hazard for those handling samples, greatly reduce the cost of shipping/storing samples by removing the need for refrigeration and cold chain transportation, and be self-prepared by potentially exposed individuals using a simple finger prick and blood spot compatible paper. Our laboratory has developed clinical assays to detect human exposures to nerve agents through the analysis of specific protein adducts and metabolites, for which a simple extraction from a dried blood spot is sufficient for removing matrix interferents and attaining sensitivities on par with traditional sampling methods. The use of dried blood spots can bridge the gap between the laboratory and the field allowing for large scale sample collection with minimal impact on hospital resources while maintaining sensitivity, specificity, traceability, and quality requirements for both clinical and forensic applications.
Prediction of gas-liquid two-phase flow regime in microgravity
NASA Technical Reports Server (NTRS)
Lee, Jinho; Platt, Jonathan A.
1993-01-01
An attempt is made to predict gas-liquid two-phase flow regime in a pipe in a microgravity environment through scaling analysis based on dominant physical mechanisms. Simple inlet geometry is adopted in the analysis to see the effect of inlet configuration on flow regime transitions. Comparison of the prediction with the existing experimental data shows good agreement, though more work is required to better define some physical parameters. The analysis clarifies much of the physics involved in this problem and can be applied to other configurations.
Perel, Pablo; Edwards, Phil; Shakur, Haleema; Roberts, Ian
2008-11-06
Traumatic brain injury (TBI) is an important cause of acquired disability. In evaluating the effectiveness of clinical interventions for TBI it is important to measure disability accurately. The Glasgow Outcome Scale (GOS) is the most widely used outcome measure in randomised controlled trials (RCTs) in TBI patients. However, GOS measurement is generally collected at 6 months after discharge, by which time loss to follow-up could have occurred. The objectives of this study were to evaluate the association and predictive validity between a simple disability scale at hospital discharge, the Oxford Handicap Scale (OHS), and the GOS at 6 months among TBI patients. The study was a secondary analysis of a randomised clinical trial among TBI patients (MRC CRASH Trial). A Spearman correlation was estimated to evaluate the association between the OHS and GOS. The validity of different dichotomies of the OHS for predicting GOS at 6 months was assessed by calculating sensitivity, specificity and the C statistic. Uni- and multivariable logistic regression models were fitted, including the OHS as an explanatory variable. For each model we analysed its discrimination and calibration. We found that the OHS is highly correlated with GOS at 6 months (Spearman correlation 0.75), with evidence of a linear relationship between the two scales. The OHS dichotomy that separates patients with severe dependency or death showed the greatest discrimination (C statistic: 84.3). Among survivors at hospital discharge the OHS showed very good discrimination (C statistic 0.78) and excellent calibration when used to predict GOS outcome at 6 months. We have shown that the OHS, a simple disability scale available at hospital discharge, can predict disability accurately, according to the GOS, at 6 months. The OHS could be used to improve the design and analysis of clinical trials in TBI patients and may also provide a valuable clinical tool for physicians to improve communication with patients and relatives when assessing a patient's prognosis at hospital discharge.
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models, as well as their assumption of modularity and insulation, makes them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Prognostic accuracy of five simple scales in childhood bacterial meningitis.
Pelkonen, Tuula; Roine, Irmeli; Monteiro, Lurdes; Cruzeiro, Manuel Leite; Pitkäranta, Anne; Kataja, Matti; Peltola, Heikki
2012-08-01
In childhood acute bacterial meningitis, the level of consciousness, measured with the Glasgow coma scale (GCS) or the Blantyre coma scale (BCS), is the most important predictor of outcome. The Herson-Todd scale (HTS) was developed for Haemophilus influenzae meningitis. Our objective was to identify prognostic factors, to form a simple scale, and to compare the predictive accuracy of these scales. Seven hundred and twenty-three children with bacterial meningitis in Luanda were scored by GCS, BCS, and HTS. The simple Luanda scale (SLS), based on our entire database, comprised domestic electricity, days of illness, convulsions, consciousness, and dyspnoea at presentation. The Bayesian Luanda scale (BLS) added blood glucose concentration. The accuracy of the 5 scales was determined for 491 children without an underlying condition, against the outcomes of death, severe neurological sequelae or death, or a poor outcome (severe neurological sequelae, death, or deafness), at hospital discharge. The highest accuracy was achieved with the BLS, whose area under the curve (AUC) for death was 0.83, for severe neurological sequelae or death was 0.84, and for poor outcome was 0.82. Overall, the AUCs for SLS were ≥0.79, for GCS were ≥0.76, for BCS were ≥0.74, and for HTS were ≥0.68. Adding laboratory parameters to a simple scoring system, such as the SLS, improves the prognostic accuracy only little in bacterial meningitis.
Factors affecting metacognition of undergraduate nursing students in a blended learning environment.
Hsu, Li-Ling; Hsieh, Suh-Ing
2014-06-01
This paper is a report of a study to examine the influence of demographic, learning involvement and learning performance variables on metacognition of undergraduate nursing students in a blended learning environment. A cross-sectional, correlational survey design was adopted. Ninety-nine students invited to participate in the study were enrolled in a professional nursing ethics course at a public nursing college. The blended learning intervention is basically an assimilation of classroom learning and online learning. Simple linear regression showed significant associations between frequency of online dialogues, the Case Analysis Attitude Scale scores, the Case Analysis Self Evaluation Scale scores, the Blended Learning Satisfaction Scale scores, and Metacognition Scale scores. Multiple linear regression indicated that frequency of online dialogues, the Case Analysis Self Evaluation Scale and the Blended Learning Satisfaction Scale were significant independent predictors of metacognition. Overall, the model accounted for almost half of the variance in metacognition. The blended learning module developed in this study proved successful in the end as a catalyst for the exercising of metacognitive abilities by the sample of nursing students. Learners are able to develop metacognitive ability in comprehension, argumentation, reasoning and various forms of higher order thinking through the blended learning process. © 2013 Wiley Publishing Asia Pty Ltd.
Analysing attitude data through ridit schemes.
El-rouby, M G
1994-12-02
The attitudes of individuals and populations on various issues are usually assessed through sample surveys. Responses to survey questions are then scaled and combined into a meaningful whole which defines the measured attitude. The applied scales may be of a nominal, ordinal, interval, or ratio nature depending upon the degree of sophistication the researcher wants to introduce into the measurement. This paper discusses methods of analysis for categorical variables of the type used in attitude and human behavior research, and recommends adoption of ridit analysis, a technique which has been successfully applied to epidemiological, clinical investigation, laboratory, and microbiological data. The ridit methodology is described after reviewing some general attitude scaling methods and the problems of analysis related to them. The ridit method is then applied to a recent study conducted to assess health care service quality in North Carolina. This technique is conceptually and computationally simpler than other conventional statistical methods, and is also distribution-free. Basic requirements and limitations on its use are indicated.
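A ridit computation is short enough to show in full; the survey counts below are hypothetical.

    import numpy as np

    def ridits(reference_counts):
        # Ridit of an ordered category: the whole proportion of the reference
        # group below it plus half the proportion inside it.
        p = np.asarray(reference_counts, dtype=float)
        p /= p.sum()
        below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
        return below + 0.5 * p

    reference = [10, 25, 40, 50, 25]   # hypothetical 5-point Likert counts
    comparison = [5, 10, 30, 60, 45]

    r = ridits(reference)
    mean_ridit = np.average(r, weights=comparison)
    print(mean_ridit)   # 0.5 = indistinguishable; >0.5 = shifted to higher categories

The mean ridit is interpretable as the probability that a randomly chosen member of the comparison group scores higher than a randomly chosen member of the reference group (with ties split), which is what makes the technique distribution-free.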
Matsuguma, Shinichiro; Kawashima, Motoko; Negishi, Kazuno; Sano, Fumiya; Mimura, Masaru; Tsubota, Kazuo
2018-01-01
It is well recognized that visual impairments (VI) worsen individuals' mental condition. However, little is known about the positive aspects, including subjective happiness, positive emotions, and strengths. Therefore, the purpose of this study was to investigate the positive aspects of persons with VI, including their subjective happiness, positive emotions, and strengths use. Positive aspects of persons with VI were measured using the Subjective Happiness Scale (SHS), the Scale of Positive and Negative Experience-Balance (SPANE-B), and the Strengths Use Scale (SUS). A cross-sectional analysis was used to examine personal information in a Tokyo sample (N = 44). A simple regression analysis found significant relationships between the SHS or SPANE-B and the SUS; in contrast, VI-related variables were not correlated with them. A multiple regression analysis confirmed that the SUS was a significant factor associated with both the SHS and SPANE-B. Strengths use might be a protective factor against the negative effects of VI.
Mackey, Aaron J; Pearson, William R
2004-10-01
Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. They are essential for the management and analysis of large-scale sequence comparisons, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and to demonstrate various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, how to extend and use seqdb_demo for the storage of sequence similarity search results, and how to use various kinds of stored search results to address aspects of comparative genomic analysis.
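A minimal sketch of the idea, using Python's built-in sqlite3 rather than the unit's actual seqdb_demo schema (the table and column names here are invented for illustration):

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE protein (
                       acc TEXT PRIMARY KEY,
                       taxon TEXT,
                       seq TEXT)""")
    con.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
        ("P00001", "Homo sapiens", "MKWVT"),
        ("P00002", "Mus musculus", "MSTN"),
        ("P00003", "Homo sapiens", "MAGTKQ"),
    ])

    # Emit a taxon-restricted subset library in FASTA form; searching against
    # such a focused subset is what improves the statistical significance.
    for acc, seq in con.execute(
            "SELECT acc, seq FROM protein WHERE taxon = ?", ("Homo sapiens",)):
        print(f">{acc}\n{seq}")

The same pattern extends to storing similarity search hits in a second table keyed on accession, so that large-scale homology questions become joins and aggregates instead of ad hoc file parsing.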
Grave prognosis on spontaneous intracerebral haemorrhage: GP on STAGE score.
Poungvarin, Niphon; Suwanwela, Nijasri C; Venketasubramanian, Narayanaswamy; Wong, Lawrence K S; Navarro, Jose C; Bitanga, Ester; Yoon, Byung Woo; Chang, Hui M; Alam, Sardar M
2006-11-01
Spontaneous intracerebral haemorrhage (ICH) is more common in Asia than in western countries, and has a high mortality rate. A simple prognostic score for predicting grave prognosis of ICH is lacking. Our objective was to develop a simple and reliable score usable by most physicians. ICH patients from seven Asian countries were enrolled between May 2000 and April 2002 for a prospective study. Clinical features such as headache and vomiting, vascular risk factors, Glasgow coma scale (GCS), body temperature (BT), blood pressure on arrival, location and size of haematoma, intraventricular haemorrhage (IVH), hydrocephalus, need for surgical treatment, medical treatment, length of hospital stay and other complications were analyzed to determine the outcome using a modified Rankin scale (MRS). Grave prognosis (defined as MRS of 5-6) was judged on the discharge date. A total of 995 patients (mean age 59.5 +/- 14.3 years) were analyzed, after exclusion of 87 patients with incomplete data. Of these, 402 patients (40.4%) were in the grave prognosis group (MRS 5-6). Univariable and then multivariable analysis showed only four statistically significant predictors of grave outcome of ICH: fever (BT >= 37.8 degrees C), low GCS, large haematoma and IVH. The grave prognosis on spontaneous intracerebral haemorrhage (GP on STAGE) score was derived from these four factors using a multiple logistic model. A simple and pragmatic prognostic score for ICH outcome has been developed with high sensitivity (82%) and specificity (82%). Furthermore, it can be administered by most general practitioners. Validation in other populations is now required.
Rubínová, Eva; Nikolai, Tomáš; Marková, Hana; Siffelová, Kamila; Laczó, Jan; Hort, Jakub; Vyhnálek, Martin
2014-01-01
The Clock Drawing Test is a frequently used cognitive screening test with several scoring systems in elderly populations. We compare simple and complex scoring systems and evaluate the usefulness of the combination of the Clock Drawing Test with the Mini-Mental State Examination to detect patients with mild cognitive impairment. Patients with amnestic mild cognitive impairment (n = 48) and age- and education-matched controls (n = 48) underwent neuropsychological examinations, including the Clock Drawing Test and the Mini-Mental State Examination. Clock drawings were scored by three blinded raters using one simple (6-point scale) and two complex (17- and 18-point scales) systems. The sensitivity and specificity of these scoring systems used alone and in combination with the Mini-Mental State Examination were determined. Complex scoring systems, but not the simple scoring system, were significant predictors of the amnestic mild cognitive impairment diagnosis in logistic regression analysis. At equal levels of sensitivity (87.5%), the Mini-Mental State Examination showed higher specificity (31.3%, compared with 12.5% for the 17-point Clock Drawing Test scoring scale). The combination of Clock Drawing Test and Mini-Mental State Examination scores increased the area under the curve (0.72; p < .001) and increased specificity (43.8%), but did not increase sensitivity, which remained high (85.4%). A simple 6-point scoring system for the Clock Drawing Test did not differentiate between healthy elderly and patients with amnestic mild cognitive impairment in our sample. Complex scoring systems were slightly more efficient, yet still were characterized by high rates of false-positive results. We found psychometric improvement using combined scores from the Mini-Mental State Examination and the Clock Drawing Test when complex scoring systems were used. The results of this study support the benefit of using combined scores from simple methods.
Visualization and Analysis of Multi-scale Land Surface Products via Giovanni Portals
NASA Technical Reports Server (NTRS)
Shen, Suhung; Kempler, Steven J.; Gerasimov, Irina V.
2013-01-01
Large volumes of MODIS land data products at multiple spatial resolutions have been integrated into the Giovanni online analysis system to support studies on land cover and land use changes, focused on the Northern Eurasia and Monsoon Asia regions through the LCLUC program. Giovanni (Goddard Interactive Online Visualization ANd aNalysis Infrastructure) is a Web-based application developed by the NASA Goddard Earth Sciences Data and Information Services Center (GES DISC), providing a simple and intuitive way to visualize, analyze, and access Earth science remotely-sensed and modeled data. Customized Giovanni Web portals (Giovanni-NEESPI and Giovanni-MAIRS) have been created to integrate land, atmospheric, cryospheric, and societal products, enabling researchers to do quick exploration and basic analyses of land surface changes, and their relationships to climate, at global and regional scales. This presentation shows a sample Giovanni portal page, lists selected data products in the system, and illustrates potential analyses with images and time-series at global and regional scales, focusing on climatology and anomaly analysis. More information is available at the GES DISC MAIRS data support project portal: http://disc.sci.gsfc.nasa.gov/mairs.
NASA Technical Reports Server (NTRS)
Golitsyn, G. S.
1977-01-01
The main results are formulas for the mean convection velocity of a viscous fluid and for the mean temperature difference in the bulk of the convecting fluid. These were obtained by scaling analysis of the Boussinesq equations, by analysis of the energetics of the process, and by similarity and dimensional arguments. The last approach defines the criteria of similarity and allows some self-similarity hypotheses to be proposed. In several simple new ways, an expression for the efficiency coefficient gamma of the thermal convection was also obtained. An analogy is pointed out between non-turbulent convection of a viscous fluid and the structure of turbulence at scales smaller than Kolmogorov's internal viscous microscale of turbulence.
Mining Distance Based Outliers in Near Linear Time with Randomization and a Simple Pruning Rule
NASA Technical Reports Server (NTRS)
Bay, Stephen D.; Schwabacher, Mark
2003-01-01
Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.
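A minimal sketch of the idea in Python (the authors' implementation and block processing are more elaborate; names and data here are assumptions). The score is the distance to the k-th nearest neighbour; since that score can only shrink as more neighbours are seen, a candidate is abandoned as soon as it drops below the weakest score in the current top list, and random data order makes such early exits the common case:

```python
import numpy as np

def distance_outliers(X, k=5, n_out=10, seed=0):
    """Top n_out points by k-NN distance, using the simple pruning rule:
    drop a candidate as soon as its running k-NN distance falls below
    the cutoff (the weakest score among the current top outliers)."""
    X = np.asarray(X, dtype=float)
    order = np.random.default_rng(seed).permutation(len(X))
    X = X[order]                            # random order is essential
    top, cutoff = [], 0.0
    for i in range(len(X)):
        neigh, pruned = [], False           # k smallest distances so far
        for j in range(len(X)):
            if i == j:
                continue
            d = float(np.linalg.norm(X[i] - X[j]))
            if len(neigh) < k:
                neigh.append(d)
                neigh.sort()
            elif d < neigh[-1]:
                neigh[-1] = d
                neigh.sort()
            if len(neigh) == k and neigh[-1] < cutoff:
                pruned = True               # score only shrinks: bail out
                break
        if not pruned:
            top.append((neigh[-1], int(order[i])))
            top = sorted(top, reverse=True)[:n_out]
            if len(top) == n_out:
                cutoff = top[-1][0]         # raise the bar for later points
    return top

rng = np.random.default_rng(42)
data = np.vstack([rng.normal(0, 1, (2000, 3)), rng.normal(8, 1, (5, 3))])
print(distance_outliers(data, k=5, n_out=5))  # the 5 planted points win
```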
Universal fitting formulae for baryon oscillation surveys
NASA Astrophysics Data System (ADS)
Blake, Chris; Parkinson, David; Bassett, Bruce; Glazebrook, Karl; Kunz, Martin; Nichol, Robert C.
2006-01-01
The next generation of galaxy surveys will attempt to measure the baryon oscillations in the clustering power spectrum with high accuracy. These oscillations encode a preferred scale which may be used as a standard ruler to constrain cosmological parameters and dark energy models. In this paper we present simple analytical fitting formulae for the accuracy with which the preferred scale may be determined in the tangential and radial directions by future spectroscopic and photometric galaxy redshift surveys. We express these accuracies as a function of survey parameters such as the central redshift, volume, galaxy number density and (where applicable) photometric redshift error. These fitting formulae should greatly increase the efficiency of optimizing future surveys, which requires analysis of a potentially vast number of survey configurations and cosmological models. The formulae are calibrated using a grid of Monte Carlo simulations, which are analysed by dividing out the overall shape of the power spectrum before fitting a simple decaying sinusoid to the oscillations. The fitting formulae reproduce the simulation results with a fractional scatter of 7 per cent (10 per cent) in the tangential (radial) directions over a wide range of input parameters. We also indicate how sparse-sampling strategies may enhance the effective survey area if the sampling scale is much smaller than the projected baryon oscillation scale.
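As an illustration of the fitting step, here is a hedged sketch (the paper's exact parameterization and calibration grid are not reproduced; the functional form, parameter names and mock data below are assumptions): divide out the smooth spectrum, then fit a decaying sinusoid whose phase carries the preferred scale.

```python
import numpy as np
from scipy.optimize import curve_fit

def wiggles(k, amp, k_scale, k_damp):
    """Decaying sinusoid around a divided-out smooth spectrum; k_scale
    (hypothetical name) carries the preferred standard-ruler scale."""
    return 1.0 + amp * np.sin(k * k_scale) * np.exp(-(k / k_damp) ** 2)

k = np.linspace(0.02, 0.3, 60)                     # h/Mpc, assumed range
truth = wiggles(k, 0.05, 2 * np.pi / 0.06, 0.13)   # mock oscillation pattern
ratio = truth + np.random.default_rng(1).normal(0.0, 0.01, k.size)

popt, pcov = curve_fit(wiggles, k, ratio, p0=[0.05, 100.0, 0.1])
frac_err = np.sqrt(pcov[1, 1]) / popt[1]           # accuracy on the scale
print(f"preferred scale recovered to {100 * frac_err:.2f}%")
```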
Towards a Certified Lightweight Array Bound Checker for Java Bytecode
NASA Technical Reports Server (NTRS)
Pichardie, David
2009-01-01
Dynamic array bound checks are crucial for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that (1) penalize bytecode loading or dynamic compilation and (2) enlarge the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustworthy. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with a relational abstract domain of polyhedra. The analysis automatically infers loop invariants and method pre-/post-conditions, and its results can be checked efficiently by a simple checker. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique that reduces their size. The result of the analysis can be checked efficiently by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is sufficiently simple to be entirely certified within the Coq proof assistant for a simple fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.
Multi-scaling allometric analysis for urban and regional development
NASA Astrophysics Data System (ADS)
Chen, Yanguang
2017-01-01
The concept of allometric growth is based on scaling relations, and it has long been applied to urban and regional analysis. However, most allometric analyses have been devoted to the single proportional relation between two elements of a geographical system; few studies focus on the allometric scaling of multiple elements. In this paper, a process of multi-scaling allometric analysis is developed for studies of the spatio-temporal evolution of complex systems. By means of linear algebra and general system theory, and by analogy with the analytic hierarchy process, the concepts of allometric growth can be integrated with ideas from fractal dimension, yielding a new methodology of geo-spatial analysis and the related theoretical models. Based on least squares regression and matrix operations, a simple algorithm is proposed to solve the multi-scaling allometric equation. Applying the analytical method of multi-element allometry to Chinese cities and regions yields satisfying results. A conclusion is reached that multi-scaling allometric analysis can be employed to make a comprehensive evaluation of the relative levels of urban and regional development and to explain spatial heterogeneity. The notion of multi-scaling allometry may enrich the current theory and methodology of spatial analyses of urban and regional evolution.
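For the basic two-element case, the allometric exponent reduces to an ordinary least-squares slope on log-log axes. A minimal sketch (hypothetical data; the paper's multi-element matrix algorithm is not reproduced here):

```python
import numpy as np

def allometric_exponent(x, y):
    """Fit y = c * x**b by ordinary least squares on log y vs log x;
    b is the allometric scaling exponent, c the proportionality constant."""
    b, logc = np.polyfit(np.log(x), np.log(y), 1)
    return b, np.exp(logc)

# Hypothetical data: built-up area (km^2) versus city population
pop = np.array([0.1, 0.5, 1.2, 3.0, 8.0, 21.0]) * 1e6
noise = np.exp(np.random.default_rng(0).normal(0.0, 0.05, pop.size))
area = 0.02 * pop ** 0.85 * noise

b, c = allometric_exponent(pop, area)
print(f"scaling exponent b = {b:.3f}")  # sublinear growth: b < 1
```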
Disability: a model and measurement technique.
Williams, R G; Johnston, M; Willis, L A; Bennett, A E
1976-01-01
Current methods of ranking or scoring disability tend to be arbitrary. A new method is put forward on the hypothesis that disability progresses in regular, cumulative patterns. A model of disability is defined and tested with the use of Guttman scale analysis. Its validity is indicated on data from a survey in the community and from postsurgical patients, and some factors involved in scale variation are identified. The model provides a simple measurement technique and has implications for the assessment of individual disadvantage, for the prediction of progress in recovery or deterioration, and for evaluation of the outcome of treatment regimes. PMID:953379
Scaling of mode shapes from operational modal analysis using harmonic forces
NASA Astrophysics Data System (ADS)
Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.
2017-10-01
This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, including structures with closely coupled modes, and can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be accomplished relatively easily using general-purpose actuators, even at the force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple, and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to typical operational modal analysis test times. The proposed method is thus easy to apply and inexpensive relative to other mode shape scaling methods available in the literature. Although not necessary per se, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides a better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental campaign on a real structure was carried out, and the reported results demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for the mode shape scaling, we propose to call it OMAH.
Adaptive Backoff Synchronization Techniques
1989-07-01
NASA Astrophysics Data System (ADS)
Leinhardt, Zoë M.; Richardson, Derek C.
2005-08-01
We present a new code ( companion) that identifies bound systems of particles in O(N log N) time. Simple binaries consisting of pairs of mutually bound particles and complex hierarchies consisting of collections of mutually bound particles are identifiable with this code. In comparison, brute force binary search methods scale as O(N^2) while full hierarchy searches can be as expensive as O(N^3), making analysis highly inefficient for multiple data sets with large N. A simple test case is provided to illustrate the method. Timing tests demonstrating O(N log N) scaling with the new code on real data are presented. We apply our method to data from asteroid satellite simulations [Durda et al., 2004. Icarus 167, 382-396; Erratum: Icarus 170, 242; reprinted article: Icarus 170, 243-257] and note interesting multi-particle configurations. The code is available at http://www.astro.umd.edu/zoe/companion/ and is distributed under the terms and conditions of the GNU Public License.
Michnick, Stephen W; Landry, Christian R; Levy, Emmanuel D; Diss, Guillaume; Ear, Po Hien; Kowarzyk, Jacqueline; Malleshaiah, Mohan K; Messier, Vincent; Tchekanda, Emmanuelle
2016-11-01
Protein-fragment complementation assays (PCAs) comprise a family of assays that can be used to study protein-protein interactions (PPIs), conformation changes, and protein complex dimensions. We developed PCAs to provide simple and direct methods for the study of PPIs in any living cell, subcellular compartments or membranes, multicellular organisms, or in vitro. Because they are complete assays, requiring no cell-specific components other than reporter fragments, they can be applied in any context. PCAs provide a general strategy for the detection of proteins expressed at endogenous levels within appropriate subcellular compartments and with normal posttranslational modifications, in virtually any cell type or organism under any conditions. Here we introduce a number of applications of PCAs in budding yeast, Saccharomyces cerevisiae. These applications represent the full range of PPI characteristics that might be studied, from simple detection on a large scale to visualization of spatiotemporal dynamics. © 2016 Cold Spring Harbor Laboratory Press.
Strain and vorticity analysis using small-scale faults and associated drag folds
NASA Astrophysics Data System (ADS)
Gomez-Rivas, Enrique; Bons, Paul D.; Griera, Albert; Carreras, Jordi; Druguet, Elena; Evans, Lynn
2007-12-01
Small-scale faults with associated drag folds in brittle-ductile rocks can retain detailed information on the kinematics and amount of deformation the host rock experienced. Measured fault orientation (α), drag angle (β) and the ratio of the thickness of deflected layers at the fault (L) and further away (T) can be compared with α, β and L/T values that are calculated with a simple analytical model. Using graphs or a numerical best-fit routine, one can then determine the kinematic vorticity number and initial fault orientation that best fits the data. The proposed method was successfully tested on both analogue experiments and numerical simulations with BASIL. Using this method, a kinematic vorticity number of one (dextral simple shear) and a minimum finite strain of 2.5-3.8 was obtained for a population of antithetic faults with associated drag folds in a case study area at Mas Rabassers de Dalt on Cap de Creus in the Variscan of the easternmost Pyrenees, Spain.
Stable clustering and the resolution of dissipationless cosmological N-body simulations
NASA Astrophysics Data System (ADS)
Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco
2017-10-01
The determination of the resolution of cosmological N-body simulations, i.e. the range of scales over which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically, the association of the two scales is simple to understand: stable clustering ceases to be a good approximation when structures interact strongly (in particular through merging), and it is precisely such non-linear processes that are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works result from over-optimistic assumptions about the scales resolved accurately by the measured power spectra, and from reliance on Fourier space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.
Quantification of sensory and food quality: the R-index analysis.
Lee, Hye-Seong; van Hout, Danielle
2009-08-01
The accurate quantification of sensory difference/similarity between foods, as well as consumer acceptance/preference and concepts, is greatly needed to optimize and maintain food quality. The R-Index is one class of measures of the degree of difference/similarity, and was originally developed for sensory difference tests for food quality control, product development, and so on. The index is based on signal detection theory and is free of the response bias that can invalidate difference testing protocols, including categorization and same-different and A-Not A tests. It is also a nonparametric analysis, making no assumptions about sensory distributions, and is simple to compute and understand. The R-Index is also flexible in its application. Methods based on R-Index analysis have been used as detection and sensory difference tests, as simple alternatives to hedonic scaling, and for the measurement of consumer concepts. This review indicates the various computational strategies for the R-Index and its practical applications to consumer and sensory measurements in food science.
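A minimal sketch of the R-index computation for two sets of ordinal ratings (hypothetical data; ties count one half): the index estimates the probability that a randomly chosen "signal" sample is ranked above a randomly chosen "noise" sample.

```python
import numpy as np

def r_index(signal_ratings, noise_ratings):
    """R-index in %: estimated probability that a random 'signal' sample
    is ranked above a random 'noise' sample on the ordinal scale, with
    ties counting one half. 50% = no detectable difference, 100% = perfect."""
    s = np.asarray(signal_ratings)[:, None]
    n = np.asarray(noise_ratings)[None, :]
    wins = (s > n).sum() + 0.5 * (s == n).sum()
    return 100.0 * wins / (s.size * n.size)

# Hypothetical A-Not A ratings: 1 = "sure Not A" .. 6 = "sure A"
a_samples = [6, 5, 6, 4, 5, 6, 3, 5]
not_a_samples = [2, 3, 1, 4, 2, 3, 5, 2]
print(f"R-index = {r_index(a_samples, not_a_samples):.1f}%")
```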
Second Law of Thermodynamics Applied to Metabolic Networks
NASA Technical Reports Server (NTRS)
Nigam, R.; Liang, S.
2003-01-01
We present a simple algorithm based on linear programming that combines Kirchhoff's flux and potential laws and applies them to metabolic networks to predict thermodynamically feasible reaction fluxes. These laws represent the mass conservation and energy feasibility constraints widely used in electrical circuit analysis. Formulating Kirchhoff's potential law around a reaction loop in terms of the null space of the stoichiometric matrix leads to a simple representation of the law of entropy that can be readily incorporated into traditional flux balance analysis without resorting to non-linear optimization. Our technique is new in that it can easily check the fluxes obtained from flux balance analysis for thermodynamic feasibility and, if they are infeasible, modify them so that they satisfy the law of entropy. We illustrate our method by applying it to the network describing the central metabolism of Escherichia coli. Due to its simplicity, this algorithm will be useful in studying large-scale complex metabolic networks in the cells of different organisms.
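A toy illustration of the flux-balance step in Python (a sketch under an assumed three-reaction network and bounds, not the paper's E. coli model): mass conservation is the equality constraint S v = 0, and the null space of S supplies the loop directions to which the entropy condition applies.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 3-reaction network:  v1: -> A,  v2: A -> B,  v3: B ->
# Kirchhoff's flux law (mass conservation) is the constraint S v = 0.
S = np.array([[1, -1, 0],    # balance of metabolite A
              [0, 1, -1]])   # balance of metabolite B
c = [0, 0, -1.0]             # linprog minimizes, so negate the output flux
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("optimal fluxes:", res.x)            # expect [10, 10, 10]

# Kirchhoff's potential law acts on internal loops, i.e. on the null
# space of S; fluxes circulating around such loops are the ones the
# entropy condition forbids.
null_basis = np.linalg.svd(S)[2][np.linalg.matrix_rank(S):]
print("null-space (loop) basis:", null_basis)
```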
Yurk, Brian P
2018-07-01
Animal movement behaviors vary spatially in response to environmental heterogeneity. An important problem in spatial ecology is to determine how large-scale population growth and dispersal patterns emerge within highly variable landscapes. We apply the method of homogenization to study the large-scale behavior of a reaction-diffusion-advection model of population growth and dispersal. Our model includes small-scale variation in the directed and random components of movement and growth rates, as well as large-scale drift. Using the homogenized model we derive simple approximate formulas for persistence conditions and asymptotic invasion speeds, which are interpreted in terms of residence index. The homogenization results show good agreement with numerical solutions for environments with a high degree of fragmentation, both with and without periodicity at the fast scale. The simplicity of the formulas, and their connection to residence index make them appealing for studying the large-scale effects of a variety of small-scale movement behaviors.
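The paper's exact equations are not reproduced here, but a generic one-dimensional member of the model class described (fast-scale heterogeneity in random movement d, directed movement a and growth r, plus large-scale drift v) can be sketched as:

```latex
% Generic 1-D reaction-diffusion-advection model with fast-scale
% heterogeneity (arguments x/\epsilon) and large-scale drift v:
\frac{\partial u}{\partial t}
  = \frac{\partial}{\partial x}\!\left(
      d\!\left(\tfrac{x}{\epsilon}\right)\frac{\partial u}{\partial x}
      - a\!\left(\tfrac{x}{\epsilon}\right)u
    \right)
  - v\,\frac{\partial u}{\partial x}
  + r\!\left(\tfrac{x}{\epsilon}\right)u
```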
NASA Astrophysics Data System (ADS)
Majumdar, Paulami; Greeley, Jeffrey
2018-04-01
Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.
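For orientation, the low-coverage scaling relation and its bond-counting slope take the standard form below; the coverage term is a schematic placeholder for the kind of pairwise-interaction correction described, not the paper's fitted expression:

```latex
% Low-coverage scaling relation with bond-counting slope (standard form),
% plus a schematic coverage-dependent correction \delta\gamma(\theta):
\Delta E_{\mathrm{AH}_x} = \gamma\,\Delta E_{\mathrm{A}} + \xi,
\qquad
\gamma_{\mathrm{bond}} = \frac{x_{\max} - x}{x_{\max}},
\qquad
\gamma(\theta) \approx \gamma_{\mathrm{bond}} + \delta\gamma(\theta)
```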
Correcting the SIMPLE Model of Free Recall
ERIC Educational Resources Information Center
Lee, Michael D.; Pooley, James P.
2013-01-01
The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…
Geometry and Reynolds-Number Scaling on an Iced Business-Jet Wing
NASA Technical Reports Server (NTRS)
Lee, Sam; Ratvasky, Thomas P.; Thacker, Michael; Barnhart, Billy P.
2005-01-01
A study was conducted to develop a method to scale the effect of ice accretion on a full-scale business jet wing model to a 1/12-scale model at greatly reduced Reynolds number. Full-scale, 5/12-scale, and 1/12-scale models of identical airfoil section were used in this study. Three types of ice accretion were studied: 22.5-minute ice protection system failure shape, 2-minute initial ice roughness, and a runback shape that forms downstream of a thermal anti-ice system. The results showed that the 22.5-minute failure shape could be scaled from full-scale to 1/12-scale through simple geometric scaling. The 2-minute roughness shape could be scaled by choosing an appropriate grit size. The runback ice shape exhibited greater Reynolds number effects and could not be scaled by simple geometric scaling of the ice shape.
Ji, Jun; Ling, Jeffrey; Jiang, Helen; Wen, Qiaojun; Whitin, John C; Tian, Lu; Cohen, Harvey J; Ling, Xuefeng B
2013-03-23
Mass spectrometry (MS) has evolved to become the primary high-throughput tool for proteomics-based biomarker discovery. Multiple challenges in protein MS data analysis remain: management of large-scale, complex data sets; MS peak identification and indexing; and high-dimensional differential peak analysis with false discovery rate (FDR) control for the concurrent statistical tests. "Turnkey" solutions are needed for biomarker investigations to rapidly process MS data sets and identify statistically significant peaks for subsequent validation. Here we present an efficient and effective solution, which provides experimental biologists easy access to "cloud" computing capabilities to analyze MS data. The web portal can be accessed at http://transmed.stanford.edu/ssa/. The web application supports online uploading and analysis of large-scale MS data through a simple user interface. This bioinformatic tool will facilitate the discovery of potential protein biomarkers using MS.
Multi-Scale Surface Descriptors
Cipriano, Gregory; Phillips, George N.; Gleicher, Michael
2010-01-01
Local shape descriptors compactly characterize regions of a surface, and have been applied to tasks in visualization, shape matching, and analysis. Classically, curvature has been used as a shape descriptor; however, this differential property characterizes only an infinitesimal neighborhood. In this paper, we provide shape descriptors for surface meshes designed to be multi-scale, that is, capable of characterizing regions of varying size. These descriptors capture statistically the shape of a neighborhood around a central point by fitting a quadratic surface. They therefore mimic differential curvature, are efficient to compute, and encode anisotropy. We show how simple variants of mesh operations can be used to compute the descriptors without resorting to expensive parameterizations, and additionally provide a statistical approximation for reduced computational cost. We show how these descriptors apply to a number of uses in visualization, analysis, and matching of surfaces, particularly to tasks in protein surface analysis. PMID:19834190
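A minimal sketch of the fitting step in Python (assumed conventions, not the authors' code): express the neighbourhood in a local frame whose third axis approximates the surface normal, fit a quadratic height field by least squares, and read curvature-like quantities off the quadratic coefficients.

```python
import numpy as np

def quadric_descriptor(points, center):
    """Least-squares fit of z = a x^2 + b x y + c y^2 + d x + e y + f in a
    local frame at `center`; the quadratic coefficients give curvature-like,
    neighbourhood-scale shape measures (a sketch of the idea)."""
    P = points - center
    # Local frame from the SVD: the last principal axis (smallest spread)
    # approximates the surface normal.
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    local = P @ Vt.T                     # columns: tangent1, tangent2, normal
    x, y, z = local[:, 0], local[:, 1], local[:, 2]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    a, b, c = coeffs[:3]
    mean_curv = a + c                    # H = (z_xx + z_yy)/2 at the vertex
    gauss_curv = 4*a*c - b*b             # K = z_xx z_yy - z_xy^2
    return mean_curv, gauss_curv

# Hypothetical neighbourhood sampled from a paraboloid z = 0.5 x^2 + 0.2 y^2
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (200, 2))
pts = np.column_stack([xy, 0.5 * xy[:, 0]**2 + 0.2 * xy[:, 1]**2])
print(quadric_descriptor(pts, pts.mean(axis=0)))  # mean ~0.7, Gaussian ~0.4
```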
Large scale EMF in current sheets induced by tearing modes
NASA Astrophysics Data System (ADS)
Mizerski, Krzysztof A.
2018-02-01
An extension of the analysis of resistive instabilities of a sheet pinch from the famous work by Furth et al (1963 Phys. Fluids 6 459) is presented here, to study the mean electromotive force (EMF) generated by the developing instability. In a Cartesian configuration and in the presence of a current sheet, the boundary layer technique is first used to obtain global, matched asymptotic solutions for the velocity and magnetic field, and the solutions are then used to calculate the large-scale EMF in the system. It is reported that in the bulk the curl of the mean EMF is linear in j_0 · B_0, a simple pseudo-scalar quantity constructed from the large-scale quantities.
A simple Lagrangian forecast system with aviation forecast potential
NASA Technical Reports Server (NTRS)
Petersen, R. A.; Homan, J. H.
1983-01-01
A trajectory forecast procedure is developed which uses geopotential tendency fields obtained from a simple, multiple layer, potential vorticity conservative isentropic model. This model can objectively account for short-term advective changes in the mass field when combined with fine-scale initial analyses. This procedure for producing short-term, upper-tropospheric trajectory forecasts employs a combination of a detailed objective analysis technique, an efficient mass advection model, and a diagnostically proven trajectory algorithm, none of which require extensive computer resources. Results of initial tests are presented, which indicate an exceptionally good agreement for trajectory paths entering the jet stream and passing through an intensifying trough. It is concluded that this technique not only has potential for aiding in route determination, fuel use estimation, and clear air turbulence detection, but also provides an example of the types of short range forecasting procedures which can be applied at local forecast centers using simple algorithms and a minimum of computer resources.
Assessing map accuracy in a remotely sensed, ecoregion-scale cover map
Edwards, T.C.; Moisen, Gretchen G.; Cutler, D.R.
1998-01-01
Landscape- and ecoregion-based conservation efforts increasingly use a spatial component to organize data for analysis and interpretation. A challenge particular to remotely sensed cover maps generated from these efforts is how best to assess the accuracy of the cover maps, especially when they can exceed 1000s of km2 in size. Here we develop and describe a methodological approach for assessing the accuracy of large-area cover maps, using as a test case the 21.9 million ha cover map developed for Utah Gap Analysis. As part of our design process, we first reviewed the effect of intracluster correlation and a simple cost function on the relative efficiency of cluster sample designs to simple random designs. Our design ultimately combined clustered and subsampled field data stratified by ecological modeling unit and accessibility (hereafter a mixed design). We next outline estimation formulas for simple map accuracy measures under our mixed design and report results for eight major cover types and the three ecoregions mapped as part of the Utah Gap Analysis. Overall accuracy of the map was 83.2% (SE=1.4). Within ecoregions, accuracy ranged from 78.9% to 85.0%. Accuracy by cover type varied, ranging from a low of 50.4% for barren to a high of 90.6% for man modified. In addition, we examined gains in efficiency of our mixed design compared with a simple random sample approach. In regard to precision, our mixed design was more precise than a simple random design, given fixed sample costs. We close with a discussion of the logistical constraints facing attempts to assess the accuracy of large-area, remotely sensed cover maps.
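As a simplified illustration of a design-based accuracy estimate (the stratum weights and counts below are hypothetical, and the paper's cluster and subsampling adjustments are omitted):

```python
import numpy as np

# Stratified estimate of overall map accuracy with a standard error.
# Each stratum: (map-area weight W, n sites checked, x sites correct).
strata = [(0.50, 120, 102),   # ecoregion A
          (0.30, 90, 70),     # ecoregion B
          (0.20, 60, 47)]     # ecoregion C

acc = se2 = 0.0
for W, n, x in strata:
    p = x / n
    acc += W * p                          # weighted accuracy estimate
    se2 += W**2 * p * (1 - p) / (n - 1)   # per-stratum variance share
print(f"overall accuracy = {100*acc:.1f}% (SE = {100*np.sqrt(se2):.1f}%)")
```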
Kutner, Jean S; Smith, Marlaine C; Corbin, Lisa; Hemphill, Linnea; Benton, Kathryn; Mellis, B Karen; Beaty, Brenda; Felton, Sue; Yamashita, Traci E; Bryant, Lucinda L; Fairclough, Diane L
2008-09-16
Small studies of variable quality suggest that massage therapy may relieve pain and other symptoms. To evaluate the efficacy of massage for decreasing pain and symptom distress and improving quality of life among persons with advanced cancer. Multisite, randomized clinical trial. Population-based Palliative Care Research Network. 380 adults with advanced cancer who were experiencing moderate-to-severe pain; 90% were enrolled in hospice. Six 30-minute massage or simple-touch sessions over 2 weeks. Primary outcomes were immediate (Memorial Pain Assessment Card, 0- to 10-point scale) and sustained (Brief Pain Inventory [BPI], 0- to 10-point scale) change in pain. Secondary outcomes were immediate change in mood (Memorial Pain Assessment Card) and 60-second heart and respiratory rates and sustained change in quality of life (McGill Quality of Life Questionnaire, 0- to 10-point scale), symptom distress (Memorial Symptom Assessment Scale, 0- to 4-point scale), and analgesic medication use (parenteral morphine equivalents [mg/d]). Immediate outcomes were obtained just before and after each treatment session. Sustained outcomes were obtained at baseline and weekly for 3 weeks. 298 persons were included in the immediate outcome analysis and 348 in the sustained outcome analysis. A total of 82 persons did not receive any allocated study treatments (37 massage patients, 45 control participants). Both groups demonstrated immediate improvement in pain (massage, -1.87 points [95% CI, -2.07 to -1.67 points]; control, -0.97 point [CI, -1.18 to -0.76 points]) and mood (massage, 1.58 points [CI, 1.40 to 1.76 points]; control, 0.97 point [CI, 0.78 to 1.16 points]). Massage was superior for both immediate pain and mood (mean difference, 0.90 and 0.61 points, respectively; P < 0.001). No between-group mean differences occurred over time in sustained pain (BPI mean pain, 0.07 point [CI, -0.23 to 0.37 points]; BPI worst pain, -0.14 point [CI, -0.59 to 0.31 points]), quality of life (McGill Quality of Life Questionnaire overall, 0.08 point [CI, -0.37 to 0.53 points]), symptom distress (Memorial Symptom Assessment Scale global distress index, -0.002 point [CI, -0.12 to 0.12 points]), or analgesic medication use (parenteral morphine equivalents, -0.10 mg/d [CI, -0.25 to 0.05 mg/d]). The immediate outcome measures were obtained by unblinded study therapists, possibly leading to reporting bias and the overestimation of a beneficial effect. The generalizability to all patients with advanced cancer is uncertain. The differential beneficial effect of massage therapy over simple touch is not conclusive without a usual care control group. Massage may have immediately beneficial effects on pain and mood among patients with advanced cancer. Given the lack of sustained effects and the observed improvements in both study groups, the potential benefits of attention and simple touch should also be considered in this patient population.
Risk of Resource Failure and Toolkit Variation in Small-Scale Farmers and Herders
Collard, Mark; Ruttle, April; Buchanan, Briggs; O’Brien, Michael J.
2012-01-01
Recent work suggests that global variation in toolkit structure among hunter-gatherers is driven by risk of resource failure such that as risk of resource failure increases, toolkits become more diverse and complex. Here we report a study in which we investigated whether the toolkits of small-scale farmers and herders are influenced by risk of resource failure in the same way. In the study, we applied simple linear and multiple regression analysis to data from 45 small-scale food-producing groups to test the risk hypothesis. Our results were not consistent with the hypothesis; none of the risk variables we examined had a significant impact on toolkit diversity or on toolkit complexity. It appears, therefore, that the drivers of toolkit structure differ between hunter-gatherers and small-scale food-producers. PMID:22844421
Fractal Signals & Space-Time Cartoons
NASA Astrophysics Data System (ADS)
Oetama, H. C. Jakob; Maksoed, W. H.
2016-03-01
In "Theory of Scale Relativity" (1991), L. Nottale states that "scale relativity is a geometrical & fractal space-time theory". This is compared with "a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals" (Gregory W. Wornell, "Signal Processing with Fractals", 1995); see, for instance, Fig. 1.1 therein, a simple waveform from a statistically scale-invariant random process [ibid., p. 3]. Drawing also on the accompanying RLE Technical Report 566, "Synthesis, Analysis & Processing of Fractal Signals" (Wornell, Oct 1991), the present work relates the debt-growth expression a Δt + (1 - β Δt) ... of Petersen et al., "Scale invariant properties of public debt growth" (2010, 38006 p. 2), to the expression 1/{1 - (2α(λ)/3π) ln(λ/r)} depicted in Nottale (1991, p. 24). Acknowledgment is devoted to the late HE Mr. Brigadier General (rtd) Prof. Ir. HANDOJO.
Self-consciousness concept and assessment in self-report measures
DaSilveira, Amanda; DeSouza, Mariane L.; Gomes, William B.
2015-01-01
This study examines how self-consciousness is defined and assessed using self-report questionnaires (Self-Consciousness Scale (SCS), Self-Reflection and Insight Scale, Self-Absorption Scale, Rumination-Reflection Questionnaire, and Philadelphia Mindfulness Scale). Authors of self-report measures suggest that self-consciousness can be distinguished by its private/public aspects, its adaptive/maladaptive applied characteristics, and present/past experiences. We examined these claims in a study using 602 young adults to whom the aforementioned scales were administered. Data were analyzed as follows: (1) correlation analysis to find simple associations between the measures; (2) factorial analysis using Oblimin rotation of total scores provided from the scales; and (3) factorial analysis considering the 102 items of the scales all together. It aimed to clarify relational patterns found in the correlations between SCSs, and to identify possible latent constructs behind these scales. Results support the adaptive/maladaptive aspects of self-consciousness, as well as distinguish to some extent public aspects from private ones. However, some scales that claimed to be theoretically derived from the concept of Private Self-Consciousness correlated with some of its public self-aspects. Overall, our findings suggest that while self-reflection measures tend to tap into past experiences and judged concepts that were already processed by the participants’ inner speech and thoughts, the Awareness measure derived from Mindfulness Scale seems to be related to a construct associated with present experiences in which one is aware of without any further judgment or logical/rational symbolization. This sub-scale seems to emphasize the role that present experiences have in self-consciousness, and it is argued that such a concept refers to what has been studied by phenomenology and psychology over more than 100 years: the concept of pre-reflective self-conscious. PMID:26191030
Development of the competency scale for primary care managers in Thailand: Scale development.
Kitreerawutiwong, Keerati; Sriruecha, Chanaphol; Laohasiriwong, Wongsa
2015-12-09
The complexity of the primary care system requires a competent manager to achieve high-quality healthcare. The existing literature in the field yields little evidence of tools to assess the competency of primary care administrators. This study aimed to develop and examine the psychometric properties of a competency scale for primary care managers in Thailand. The scale was developed using in-depth interviews and focus group discussions among policy makers, managers, practitioners, village health volunteers, and clients; specific dimensions were extracted from 35 participants, and 123 items were generated from the evidence and qualitative data. Content validity was established through the evaluation of seven experts, and the original 123 items were reduced to 84. Pilot testing was conducted on a simple random sample of 487 primary care managers. Item analysis, reliability testing, and exploratory factor analysis were applied to establish the scale's reliability and construct validity. Exploratory factor analysis identified nine dimensions with 48 items using a five-point Likert scale; together, the dimensions accounted for 58.61% of the total variance. The scale had strong content validity (index = 0.85), and Cronbach's alpha for each dimension ranged from 0.70 to 0.88. Based on these analyses, this instrument demonstrated sound psychometric properties and is therefore considered an effective tool for assessing primary care manager competencies. The results can be used to improve competency requirements of primary care managers, with implications for health service management workforce development.
Wavelet Analysis of Turbulent Spots and Other Coherent Structures in Unsteady Transition
NASA Technical Reports Server (NTRS)
Lewalle, Jacques
1998-01-01
This is a secondary analysis of a portion of the Halstead data. The hot-film traces from an embedded stage of a low pressure turbine have been extensively analyzed by Halstead et al. In this project, wavelet analysis is used to develop the quantitative characterization of individual coherent structures in terms of size, amplitude, phase, convection speed, etc., as well as phase-averaged time scales. The purposes of the study are (1) to extract information about turbulent time scales for comparison with unsteady model results (e.g. k/epsilon). Phase-averaged maps of dominant time scales will be presented; and (2) to evaluate any differences between wake-induced and natural spots that might affect model performance. Preliminary results, subject to verification with data at higher frequency resolution, indicate that spot properties are independent of their phase relative to the wake footprints: therefore requirements for the physical content of models are kept relatively simple. Incidentally, we also observed that spot substructures can be traced over several stations; further study will examine their possible impact.
Nonlinear fracture mechanics-based analysis of thin wall cylinders
NASA Technical Reports Server (NTRS)
Brust, Frederick W.; Leis, Brian N.; Forte, Thomas P.
1994-01-01
This paper presents a simple analysis technique to predict the crack initiation, growth, and rupture of large-radius, R, to thickness, t, ratio (thin wall) cylinders. The method is formulated to deal both with stable tearing as well as fatigue mechanisms in applications to both surface and through-wall axial cracks, including interacting surface cracks. The method can also account for time-dependent effects. Validation of the model is provided by comparisons of predictions to more than forty full scale experiments of thin wall cylinders pressurized to failure.
MIS Score: Prediction Model for Minimally Invasive Surgery.
Hu, Yuanyuan; Cao, Jingwei; Hou, Xianzeng; Liu, Guangcun
2017-03-01
Reports suggest that patients with spontaneous intracerebral hemorrhage (ICH) can benefit from minimally invasive surgery, but the inclusion criterion for operation is controversial. This article analyzes factors affecting the 30-day prognoses of patients who have received minimally invasive surgery and proposes a simple grading scale that represents clinical operation effectiveness. The records of 101 patients with spontaneous ICH presenting to Qianfoshan Hospital were reviewed. Factors affecting their 30-day prognosis were identified by logistic regression. A clinical grading scale, the MIS score, was developed by weighting the independent predictors based on these factors. Univariate analysis revealed that the factors that affect 30-day prognosis include Glasgow coma scale score (P < 0.01), age ≥80 years (P < 0.05), blood glucose (P < 0.01), ICH volume (P < 0.01), operation time (P < 0.05), and presence of intraventricular hemorrhage (P < 0.001). Logistic regression revealed that the factors that affect 30-day prognosis include Glasgow coma scale score (P < 0.05), age (P < 0.05), ICH volume (P < 0.01), and presence of intraventricular hemorrhage (P < 0.05). The MIS score was developed accordingly; 39 patients with 0-1 MIS scores had favorable prognoses, whereas only 9 patients with 2-5 MIS scores had poor prognoses. The MIS score is a simple grading scale that can be used to select patients who are suited for minimal invasive drainage surgery. When MIS score is 0-1, minimal invasive surgery is strongly recommended for patients with spontaneous cerebral hemorrhage. The scale merits further prospective studies to fully determine its efficacy. Copyright © 2016 Elsevier Inc. All rights reserved.
Urzay, Javier; Llewellyn Smith, Stefan G; Thompson, Elinor; Glover, Beverley J
2009-08-21
Plant reproduction depends on pollen dispersal. For anemophilous (wind-pollinated) species, such as grasses and many trees, shedding pollen from the anther must be accomplished by physical mechanisms. The unknown nature of this process has led to its description as the 'paradox of pollen liberation'. A simple scaling analysis, supported by experimental measurements on typical wind-pollinated plant species, is used to estimate the suitability of previous resolutions of this paradox based on wind-gust aerodynamic models of fungal-spore liberation. According to this scaling analysis, the steady Stokes drag force is found to be large enough to liberate anemophilous pollen grains, and unsteady boundary-layer forces produced by wind gusts are found to be mostly ineffective since the ratio of the characteristic viscous time scale to the inertial time scale of acceleration of the wind stream is a small parameter for typical anemophilous species. A hypothetical model of a stochastic aeroelastic mechanism, initiated by the atmospheric turbulence typical of the micrometeorological conditions in the vicinity of the plant, is proposed to contribute to wind pollination.
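The core of the scaling argument can be sketched with standard Stokes-regime expressions (the symbols here are assumptions: d grain diameter, rho_p grain density, mu air viscosity, U wind speed, tau_f the gust acceleration time scale):

```latex
% Time-scale comparison behind the argument: the grain's viscous
% response time \tau_p is small next to the gust time scale \tau_f,
% so unsteady boundary-layer forces are ineffective, while the steady
% Stokes drag is already sufficient for liberation.
F_{\mathrm{Stokes}} \sim 3\pi\mu\, d\, U,
\qquad
\tau_p = \frac{\rho_p d^{2}}{18\,\mu},
\qquad
\frac{\tau_p}{\tau_f} \ll 1
```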
Universal binding energy relations in metallic adhesion
NASA Technical Reports Server (NTRS)
Ferrante, J.; Smith, J. R.; Rose, J. J.
1984-01-01
Rose, Smith, and Ferrante have discovered scaling relations which map the adhesive binding energy calculated by Ferrante and Smith onto a single universal binding energy curve. These binding energies are calculated for all combinations of Al(111), Zn(0001), Mg(0001), and Na(110) in contact. The scaling involves normalizing the energy by the maximum binding energy and normalizing distances by a suitable combination of Thomas-Fermi screening lengths. Rose et al. have also found that the calculated cohesive energies of K, Ba, Cu, Mo, and Sm scale by similar simple relations, suggesting that the universal relation may apply beyond the simple free-electron metals for which it was derived. In addition, the scaling length was defined more generally in order to relate it to measurable physical properties. Further, this universality can be extended to chemisorption. A simple and yet quite accurate prediction of a zero-temperature equation of state (volume as a function of pressure) for metals and alloys is presented. Thermal expansion coefficients and melting temperatures are predicted by simple, analytic expressions, and results compare favorably with experiment for a broad range of metals.
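The universal relation has the standard Rose-Smith-Ferrante form, with energies scaled by the maximum binding energy and separations by a screening length:

```latex
% Universal binding energy relation: the scaled energy versus scaled
% separation a* collapses all metal pairs onto one curve (\Delta E is
% the maximum binding energy, a_m the equilibrium separation, l the
% screening length):
E(a) = \Delta E\; E^{*}(a^{*}),
\qquad
a^{*} = \frac{a - a_{m}}{l},
\qquad
E^{*}(a^{*}) = -\left(1 + a^{*}\right) e^{-a^{*}}
```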
Lee-Yang zero analysis for the study of QCD phase structure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ejiri, Shinji
2006-03-01
We comment on the Lee-Yang zero analysis for the study of the phase structure of QCD at high temperature and baryon number density by Monte-Carlo simulations. We find that the sign problem for nonzero density QCD induces a serious problem in the finite volume scaling analysis of the Lee-Yang zeros for the investigation of the order of the phase transition. If the sign problem occurs at large volume, the Lee-Yang zeros will always approach the real axis of the complex parameter plane in the thermodynamic limit. This implies that a scaling behavior which would suggest a crossover transition will not be obtained. To clarify this problem, we discuss the Lee-Yang zero analysis for SU(3) pure gauge theory as a simple example without the sign problem, and then consider the case of nonzero density QCD. It is suggested that the distribution of the Lee-Yang zeros in the complex parameter space obtained by each simulation could be more important information for the investigation of the critical endpoint in the (T, mu_q) plane than the finite volume scaling behavior.
Liu, Yansong; Yu, Xinnian; Yang, Bixiu; Zhang, Fuquan; Zou, Wenhua; Na, Aiguo; Zhao, Xudong; Yin, Guangzhong
2017-03-21
Overgeneral autobiographical memory has been identified as a risk factor for the onset and maintenance of depression. However, little is known about the underlying mechanisms that might explain overgeneral autobiographical memory phenomenon in depression. The purpose of this study was to test the mediation effects of rumination on the relationship between overgeneral autobiographical memory and depressive symptoms. Specifically, the mediation effects of brooding and reflection subtypes of rumination were examined in patients with major depressive disorder. Eighty-seven patients with major depressive disorder completed the 17-item Hamilton Depression Rating Scale, Ruminative Response Scale, and Autobiographical Memory Test. Bootstrap mediation analysis for simple and multiple mediation models through the PROCESS macro was applied. Simple mediation analysis showed that rumination significantly mediated the relationship between overgeneral autobiographical memory and depression symptoms. Multiple mediation analyses showed that brooding, but not reflection, significantly mediated the relationship between overgeneral autobiographical memory and depression symptoms. Our results indicate that global rumination partly mediates the relationship between overgeneral autobiographical memory and depressive symptoms in patients with major depressive disorder. Furthermore, the present results suggest that the mediating role of rumination in the relationship between overgeneral autobiographical memory and depression is mainly due to the maladaptive brooding subtype of rumination.
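A minimal sketch of the percentile-bootstrap logic behind such PROCESS-style mediation tests (simulated data; not the macro itself): estimate the indirect effect a*b, then bootstrap its sampling distribution.

```python
import numpy as np

def indirect_effect(x, m, y):
    """Simple mediation: a = slope of M on X; b = slope of Y on M,
    controlling for X. The indirect effect is a*b."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]
    return a * b

def bootstrap_ci(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = [indirect_effect(*(v[idx] for v in (x, m, y)))
           for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(est, [2.5, 97.5])

# Hypothetical data: X = memory overgenerality, M = brooding, Y = depression
rng = np.random.default_rng(1)
x = rng.normal(size=87)
m = 0.5 * x + rng.normal(size=87)
y = 0.6 * m + 0.1 * x + rng.normal(size=87)
print("indirect effect:", indirect_effect(x, m, y))
print("95% bootstrap CI:", bootstrap_ci(x, m, y))
```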
Fitzgibbon, Jessica; Beck, Martina; Zhou, Ji; Faulkner, Christine; Robatzek, Silke; Oparka, Karl
2013-01-01
Plasmodesmata (PD) form tubular connections that function as intercellular communication channels. They are essential for transporting nutrients and for coordinating development. During cytokinesis, simple PDs are inserted into the developing cell plate, while during wall extension, more complex (branched) forms of PD are laid down. We show that complex PDs are derived from existing simple PDs in a pattern that is accelerated when leaves undergo the sink–source transition. Complex PDs are inserted initially at the three-way junctions between epidermal cells but develop most rapidly in the anisocytic complexes around stomata. For a quantitative analysis of complex PD formation, we established a high-throughput imaging platform and constructed PDQUANT, a custom algorithm that detected cell boundaries and PD numbers in different wall faces. For anticlinal walls, the number of complex PDs increased with increasing cell size, while for periclinal walls, the number of PDs decreased. Complex PD insertion was accelerated by up to threefold in response to salicylic acid treatment and challenges with mannitol. In a single 30-min run, we could derive data for up to 11k PDs from 3k epidermal cells. This facile approach opens the door to a large-scale analysis of the endogenous and exogenous factors that influence PD formation. PMID:23371949
Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F
2014-08-01
Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
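A hedged sketch of the ATS idea in Python (the parameter names, the quadratic form, and the data are assumptions, not the authors' implementation): a single nonlinear covariate, the activation energy, rescales time across temperatures, and a polynomial in scaled time describes the attribute change.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 8.314        # gas constant, J/(mol K)
T_REF = 298.15   # reference temperature (25 C), K

def ats_model(X, Ea, c1, c2):
    """Arrhenius time-scaled polynomial: one nonlinear covariate (Ea)
    maps each storage temperature's time onto reference-temperature
    time; a quadratic in scaled time captures the attribute change."""
    t, T = X
    s = t * np.exp(-Ea / R * (1.0 / T - 1.0 / T_REF))  # scaled time, months
    return c1 * s + c2 * s**2

# Hypothetical accelerated-stability data: attribute change (e.g. % loss)
t = np.array([0.5, 1, 3, 6, 0.5, 1, 3, 6, 1, 3, 6], dtype=float)  # months
T = np.array([313.15] * 4 + [308.15] * 4 + [298.15] * 3)          # 40/35/25 C
y = ats_model((t, T), 8.0e4, 0.2, 0.01)
y += np.random.default_rng(2).normal(0.0, 0.05, y.size)

(Ea, c1, c2), _ = curve_fit(ats_model, (t, T), y, p0=[5.0e4, 0.1, 0.0])
shelf = ats_model((np.array([24.0]), np.array([T_REF])), Ea, c1, c2)[0]
print(f"Ea = {Ea/1000:.0f} kJ/mol; 24-month change at 25 C: {shelf:.2f}")
```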
Andrews, Ross N; Narayanan, Suresh; Zhang, Fan; Kuzmenko, Ivan; Ilavsky, Jan
2018-02-01
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables probing dynamics in a broad array of materials with XPCS, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. In this paper, we propose an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. Using XPCS data measured from colloidal gels, we demonstrate that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
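The essence of such an inverse transform can be sketched as a regularized non-negative least-squares inversion of an exponential kernel. This is an illustrative stand-in for CONTIN-style algorithms, with synthetic data and an arbitrary regularization strength, not the authors' exact implementation.

```python
import numpy as np
from scipy.optimize import nnls

# recover the weight distribution w over relaxation rates from g1(t) = sum_j w_j exp(-Gamma_j t)
# (the measured g2 relates to g1 via the Siegert relation, g2 = 1 + beta*|g1|^2)
t = np.logspace(-3, 2, 60)        # delay times (s), illustrative grid
gamma = np.logspace(-2, 3, 80)    # candidate relaxation rates (1/s)
K = np.exp(-np.outer(t, gamma))   # exponential kernel matrix

# synthetic bimodal decay standing in for measured data
g1 = 0.6 * np.exp(-0.5 * t) + 0.4 * np.exp(-50.0 * t)

lam = 1e-2                        # Tikhonov strength; penalizes large weights (tune in practice)
A = np.vstack([K, lam * np.eye(len(gamma))])
b = np.concatenate([g1, np.zeros(len(gamma))])
w, _ = nnls(A, b)                 # non-negative rate distribution; peaks reveal multimodal dynamics
```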
Nonlinear zero-sum differential game analysis by singular perturbation methods
NASA Technical Reports Server (NTRS)
Shinar, J.; Farber, N.
1982-01-01
A class of nonlinear, zero-sum differential games, exhibiting time-scale separation properties, can be analyzed by singular-perturbation techniques. The merits of such an analysis, leading to an approximate game solution, as well as the 'well-posedness' of the formulation, are discussed. This approach is shown to be attractive for investigating pursuit-evasion problems; the original multidimensional differential game is decomposed into a 'simple pursuit' (free-stream) game and two independent (boundary-layer) optimal-control problems. Using multiple time-scale boundary-layer models results in a pair of uniformly valid zero-order composite feedback strategies. The dependence of suboptimal strategies on relative geometry and own-state measurements is demonstrated by a three-dimensional, constant-speed example. For game analysis with realistic vehicle dynamics, the technique of forced singular perturbations and a variable modeling approach are proposed. Accuracy of the analysis is evaluated by comparison with the numerical solution of a time-optimal, variable-speed 'game of two cars' in the horizontal plane.
Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld, H.J.; Peters, N.E.; Freer, J.E.
2009-01-01
The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
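The recession-analysis step can be sketched as follows: estimate dQ/dt from successive discharge values on falling limbs and fit a power law -dQ/dt = aQ^b, where b close to 1 indicates a single linear reservoir. A minimal sketch; the data file is hypothetical.

```python
import numpy as np

def recession_slope(Q, dt=1.0):
    """Fit -dQ/dt = a * Q**b on recession limbs; b ~ 1 implies a linear reservoir."""
    dQdt = np.diff(Q) / dt
    Qmid = 0.5 * (Q[1:] + Q[:-1])
    mask = dQdt < 0                        # keep falling-limb (recession) points only
    x, y = np.log(Qmid[mask]), np.log(-dQdt[mask])
    b, log_a = np.polyfit(x, y, 1)         # slope b, intercept log(a) in log-log space
    return np.exp(log_a), b

Q = np.loadtxt("discharge.txt")            # hourly discharge series (hypothetical file)
a, b = recession_slope(Q)
```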
NASA Astrophysics Data System (ADS)
Sreekanth, J.; Moore, Catherine
2018-04-01
The application of global sensitivity and uncertainty analysis techniques to groundwater models of deep sedimentary basins is typically challenged by large computational burdens combined with associated numerical stability issues. The highly parameterized approaches required for exploring the predictive uncertainty associated with the heterogeneous hydraulic characteristics of multiple aquifers and aquitards in these sedimentary basins exacerbate these issues. A novel Patch Modelling Methodology is proposed for improving the computational feasibility of stochastic modelling analysis of large-scale and complex groundwater models. The method incorporates a nested groundwater modelling framework that enables efficient simulation of groundwater flow and transport across multiple spatial and temporal scales. The method also allows different processes to be simulated within different model scales. Existing nested model methodologies are extended by employing 'joining predictions' for extrapolating prediction-salient information from one model scale to the next. This establishes a feedback mechanism supporting the transfer of information from child models to parent models as well as from parent models to child models in a computationally efficient manner. This feedback mechanism is simple and flexible and ensures that while the salient small-scale features influencing larger-scale predictions are transferred back to the larger scale, this does not require the live coupling of models. This method allows the modelling of multiple groundwater flow and transport processes using separate groundwater models that are built for the appropriate spatial and temporal scales, within a stochastic framework, while also removing the computational burden associated with live model coupling. The utility of the method is demonstrated by application to an actual large-scale aquifer injection scheme in Australia.
Global-Scale Hydrology: Simple Characterization of Complex Simulation
NASA Technical Reports Server (NTRS)
Koster, Randal D.
1999-01-01
Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points at the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.
Validation of the Weight Concerns Scale Applied to Brazilian University Students.
Dias, Juliana Chioda Ribeiro; da Silva, Wanderson Roberto; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2015-06-01
The aim of this study was to evaluate the validity and reliability of the Portuguese version of the Weight Concerns Scale (WCS) when applied to Brazilian university students. The scale was completed by 1084 university students from Brazilian public education institutions. A confirmatory factor analysis was conducted. The stability of the model in independent samples was assessed through multigroup analysis, and the invariance was estimated. Convergent, concurrent, divergent, and criterion validities as well as internal consistency were estimated. Results indicated that the one-factor model presented an adequate fit to the sample and adequate convergent validity values. The concurrent validity with the Body Shape Questionnaire and divergent validity with the Maslach Burnout Inventory for Students were adequate. Internal consistency was adequate, and the factorial structure was invariant in independent subsamples. The results present a simple and short instrument capable of precisely and accurately assessing concerns with weight among Brazilian university students. Copyright © 2015 Elsevier Ltd. All rights reserved.
Data analysis using scale-space filtering and Bayesian probabilistic reasoning
NASA Technical Reports Server (NTRS)
Kulkarni, Deepak; Kutulakos, Kiriakos; Robinson, Peter
1991-01-01
This paper describes a program for analysis of output curves from a Differential Thermal Analyzer (DTA). The program first extracts probabilistic qualitative features from a DTA curve of a soil sample, and then uses Bayesian probabilistic reasoning to infer the mineral in the soil. The qualifier module employs a simple and efficient extension of scale-space filtering suitable for handling DTA data. We have observed that points can vanish from contours in the scale-space image when filtering operations are not highly accurate. To handle the problem of vanishing points, perceptual organization heuristics are used to group the points into lines. Next, these lines are grouped into contours by using additional heuristics. Probabilities are associated with these contours using domain-specific correlations. A Bayes tree classifier processes probabilistic features to infer the presence of different minerals in the soil. Experiments show that the algorithm that uses domain-specific correlation to infer qualitative features outperforms a domain-independent algorithm that does not.
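The core scale-space idea can be sketched briefly: smooth the curve with Gaussians of increasing width and track zero crossings of the second derivative across scales; stacking these events over scales yields the scale-space image whose contours the paper's heuristics then organize. A minimal sketch, with a hypothetical input file.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def scale_space_zero_crossings(y, sigmas):
    """For each scale sigma, locate zero crossings of the Gaussian-smoothed 2nd derivative."""
    events = []
    for s in sigmas:
        d2 = gaussian_filter1d(y, sigma=s, order=2)   # smoothed second derivative
        idx = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
        events.append((s, idx))
    return events  # (scale, crossing positions) pairs form the scale-space image

y = np.loadtxt("dta_curve.txt")   # hypothetical digitized DTA curve
contours = scale_space_zero_crossings(y, sigmas=np.geomspace(1, 64, 13))
```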
Echinocyte shapes: bending, stretching, and shear determine spicule shape and spacing.
Mukhopadhyay, Ranjan; Lim H W, Gerald; Wortis, Michael
2002-01-01
We study the shapes of human red blood cells using continuum mechanics. In particular, we model the crenated, echinocytic shapes and show how they may arise from a competition between the bending energy of the plasma membrane and the stretching/shear elastic energies of the membrane skeleton. In contrast to earlier work, we calculate spicule shapes exactly by solving the equations of continuum mechanics subject to appropriate boundary conditions. A simple scaling analysis of this competition reveals an elastic length Λ_el, which sets the length scale for the spicules and is, thus, related to the number of spicules experimentally observed on the fully developed echinocyte. PMID:11916836
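The scaling argument can be written compactly. A sketch assuming κ denotes the membrane bending modulus and μ the skeleton shear modulus (the competition named in the abstract); balancing the two energy densities at the spicule scale gives:

```latex
% bending energy density ~ kappa/Lambda^2; stretch/shear energy density ~ mu;
% equating the two at the spicule scale (illustrative dimensional balance):
\[
  \frac{\kappa}{\Lambda_{\mathrm{el}}^{2}} \sim \mu
  \qquad\Longrightarrow\qquad
  \Lambda_{\mathrm{el}} \sim \sqrt{\kappa/\mu}.
\]
```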
DOE Office of Scientific and Technical Information (OSTI.GOV)
Attota, Ravikiran, E-mail: Ravikiran.attota@nist.gov; Dixson, Ronald G.
We experimentally demonstrate that the three-dimensional (3-D) shape variations of nanometer-scale objects can be resolved and measured with sub-nanometer scale sensitivity using conventional optical microscopes by analyzing 4-D optical data using the through-focus scanning optical microscopy (TSOM) method. These initial results show that TSOM-determined cross-sectional (3-D) shape differences of 30 nm–40 nm wide lines agree well with critical-dimension atomic force microscope measurements. The TSOM method showed a linewidth uncertainty of 1.22 nm (k = 2). Complex optical simulations are not needed for analysis using the TSOM method, making the process simple, economical, fast, and ideally suited for high volume nanomanufacturing process monitoring.
Morken, Tone; Baste, Valborg; Johnsen, Grethe E; Rypdal, Knut; Palmstierna, Tom; Johansen, Ingrid Hjulstad
2018-05-08
Many emergency primary health care workers experience aggressive behaviour from patients or visitors. Simple incident-reporting procedures exist for inpatient psychiatric care, but a similar and simple incident-report form for other health care settings is lacking. The aim was to adjust a pre-existing form for reporting aggressive incidents in a psychiatric inpatient setting to the emergency primary health care settings. We also wanted to assess the validity of the severity scores in emergency primary health care. The Staff Observation Aggression Scale - Revised (SOAS-R) was adjusted to create a pilot version of the Staff Observation Aggression Scale - Revised Emergency (SOAS-RE). A Visual Analogue Scale (VAS) was added to the form to judge the severity of the incident. Data for validation of the pilot version of SOAS-RE were collected from ten casualty clinics in Norway during 12 months. Variance analysis was used to test gender and age differences. Linear regression analysis was performed to evaluate the relative impact that each of the five SOAS-RE columns had on the VAS score. The association between the SOAS-RE severity score and the VAS severity score was calculated by the Pearson correlation coefficient. The SOAS-R was adjusted to emergency primary health care, refined and named the Staff Observation Aggression Scale - Revised Emergency (SOAS-RE). A total of 350 SOAS-RE forms were collected from the casualty clinics, but due to missing data, 291 forms were included in the analysis. SOAS-RE scores ranged from 1 to 22. The mean total severity score of SOAS-RE was 10.0 (standard deviation (SD) = 4.1) and the mean VAS score was 45.4 (SD = 26.7). We found a significant correlation of 0.45 between the SOAS-RE total severity scores and the VAS severity ratings. The linear regression analysis showed that, individually, each of the categories describing the incident had a low impact on the VAS score. The SOAS-RE seems to be a useful instrument for research, incident recording and management of incidents in emergency primary care. The moderate correlation between the SOAS-RE severity score and the VAS severity score shows that applying both severity ratings is valuable in the follow-up of workers affected by workplace violence.
Comparative analysis of multiple-casualty incident triage algorithms.
Garner, A; Lee, A; Harrison, K; Schultz, C H
2001-11-01
We sought to retrospectively measure the accuracy of multiple-casualty incident (MCI) triage algorithms and their component physiologic variables in predicting adult patients with critical injury. We performed a retrospective review of 1,144 consecutive adult patients transported by ambulance and admitted to 2 trauma centers. Association between first-recorded out-of-hospital physiologic variables and a resource-based definition of severe injury appropriate to the MCI context was determined. The association between severe injury and Triage Sieve, Simple Triage and Rapid Treatment, modified Simple Triage and Rapid Treatment, and CareFlight Triage was determined in the patient population. Of the physiologic variables, the Motor Component of the Glasgow Coma Scale had the strongest association with severe injury, followed by systolic blood pressure. The differences between CareFlight Triage, Simple Triage and Rapid Treatment, and modified Simple Triage and Rapid Treatment were not dramatic, with sensitivities of 82% (95% confidence interval [CI] 75% to 88%), 85% (95% CI 78% to 90%), and 84% (95% CI 76% to 89%), respectively, and specificities of 96% (95% CI 94% to 97%), 86% (95% CI 84% to 88%), and 91% (95% CI 89% to 93%), respectively. Both forms of Triage Sieve were significantly poorer predictors of severe injury. Of the physiologic variables used in the triage algorithms, the Motor Component of the Glasgow Coma Scale and systolic blood pressure had the strongest association with severe injury. CareFlight Triage, Simple Triage and Rapid Treatment, and modified Simple Triage and Rapid Treatment had similar sensitivities in predicting critical injury in designated trauma patients, but CareFlight Triage had better specificity. Because patients in a true mass casualty situation may not be completely comparable with designated trauma patients transported to emergency departments in routine circumstances, the best triage instrument in this study may not be the best in an actual MCI. These findings must be validated prospectively before their accuracy can be confirmed.
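The reported accuracy figures are binomial proportions with 95% confidence intervals. A minimal sketch of how such values can be computed, using the Wilson score interval as one common choice (the paper does not state its CI method); the counts are illustrative, not the study's.

```python
import numpy as np

def rate_ci(successes, n, z=1.96):
    """Wilson score 95% interval for a binomial proportion (e.g., sensitivity)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return p, center - half, center + half

# illustrative counts: triage-positives among the severely injured (sensitivity)
# and triage-negatives among the non-severe (specificity)
sens = rate_ci(successes=120, n=146)
spec = rate_ci(successes=958, n=998)
```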
Scale Interactions in the Tropics from a Simple Multi-Cloud Model
NASA Astrophysics Data System (ADS)
Niu, X.; Biello, J. A.
2017-12-01
Our lack of a complete understanding of the interaction between moist convection and equatorial waves remains an impediment in the numerical simulation of large-scale organization, such as the Madden-Julian Oscillation (MJO). The aim of this project is to understand interactions across spatial scales in the tropics using a simplified framework for scale interactions together with a simplified description of the basic features of moist convection. Using multiple asymptotic scales, Biello and Majda[1] derived a multi-scale model of moist tropical dynamics (IMMD[1]), which separates three regimes: the planetary scale climatology, the synoptic scale waves, and the planetary scale anomalies regime. The scales and strength of the observed MJO would categorize it in the regime of planetary scale anomalies - which themselves are forced from non-linear upscale fluxes from the synoptic scale waves. In order to close this model and determine whether it provides a self-consistent theory of the MJO, a model for diabatic heating due to moist convection must be implemented along with the IMMD. The multi-cloud parameterization is a model proposed by Khouider and Majda[2] to describe the three basic cloud types (congestus, deep and stratiform) that are most responsible for tropical diabatic heating. We implement a simplified version of the multi-cloud model that is based on results derived from large eddy simulations of convection [3]. We present this simplified multi-cloud model and show results of numerical experiments beginning with a variety of convective forcing states. Preliminary results on upscale fluxes, from synoptic scales to planetary scale anomalies, will be presented. [1] Biello J A, Majda A J. Intraseasonal multi-scale moist dynamics of the tropical atmosphere[J]. Communications in Mathematical Sciences, 2010, 8(2): 519-540. [2] Khouider B, Majda A J. A simple multicloud parameterization for convectively coupled tropical waves. Part I: Linear analysis[J]. Journal of the atmospheric sciences, 2006, 63(4): 1308-1323. [3] Dorrestijn J, Crommelin D T, Biello J A, et al. A data-driven multi-cloud model for stochastic parametrization of deep convection[J]. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 2013, 371(1991): 20120374.
de Oliveira, Flávia Augusta; Luna, Stelio Pacca Loureiro; do Amaral, Jackson Barros; Rodrigues, Karoline Alves; Sant'Anna, Aline Cristina; Daolio, Milena; Brondani, Juliana Tabarelli
2014-09-06
The recognition and measurement of pain in cattle are important in determining the necessity for and efficacy of analgesic intervention. The aim of this study was to record behaviour and determine the validity and reliability of an instrument to assess acute pain in 40 cattle subjected to orchiectomy after sedation with xylazine and local anaesthesia. The animals were filmed before and after orchiectomy to record behaviour. The pain scale was based on previous studies, on a pilot study and on analysis of the camera footage. Three blinded observers and a local observer assessed the edited films obtained during the preoperative and postoperative periods, before and after rescue analgesia and 24 hours after surgery. Re-evaluation was performed one month after the first analysis. Criterion validity (agreement) and item-total correlation using Spearman's coefficient were employed to refine the scale. Based on factor analysis, a unidimensional scale was adopted. The internal consistency of the data was excellent after refinement (Cronbach's α coefficient = 0.866). There was a high correlation (p < 0.001) between the proposed scale and the visual analogue, simple descriptive and numerical rating scales. The construct validity and responsiveness were confirmed by the increase and decrease in pain scores after surgery and rescue analgesia, respectively (p < 0.001). Inter- and intra-observer reliability ranged from moderate to very good. The optimal cut-off point for rescue analgesia was > 4, and analysis of the area under the curve (AUC = 0.963) showed excellent discriminatory ability. The UNESP-Botucatu unidimensional pain scale for assessing acute postoperative pain in cattle is a valid, reliable and responsive instrument with excellent internal consistency and discriminatory ability. The cut-off point for rescue analgesia provides an additional tool for guiding analgesic therapy.
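Two of the reported item analyses, Cronbach's α and the Spearman item-total correlations, are easy to reproduce. A minimal sketch assuming items is an (animals × items) score matrix; the file name is hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """items: n_subjects x n_items score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def item_total_correlations(items):
    """Corrected item-total: correlate each item with the total excluding that item."""
    total = items.sum(axis=1)
    return [spearmanr(items[:, j], total - items[:, j]).correlation
            for j in range(items.shape[1])]

items = np.loadtxt("pain_scale_items.txt")   # hypothetical animals x items matrix
alpha = cronbach_alpha(items)                # 0.866 reported after refinement
rits = item_total_correlations(items)        # used to flag weak items during refinement
```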
Finite element modeling and analysis of tires
NASA Technical Reports Server (NTRS)
Noor, A. K.; Andersen, C. M.
1983-01-01
Predicting the response of tires under various loading conditions using finite element technology is addressed. Some of the recent advances in finite element technology which have high potential for application to tire modeling problems are reviewed. The analysis and modeling needs for tires are identified. Topics include reduction methods for large-scale nonlinear analysis, with particular emphasis on the treatment of combined loads and of displacement-dependent and nonconservative loadings; the development of simple and efficient mixed finite element models for shell analysis, the identification of equivalent mixed and purely displacement models, and the determination of the advantages of using mixed models; and effective computational models for large-rotation nonlinear problems based on a total Lagrangian description of the deformation.
Temporal scaling of groundwater level fluctuations near a stream.
Schilling, Keith E; Zhang, You-Kuan
2012-01-01
Temporal scaling in stream discharge and hydraulic heads in riparian wells was evaluated to determine the feasibility of using spectral analysis to identify potential surface and groundwater interaction. In floodplains where groundwater levels respond rapidly to precipitation recharge, potential interaction is established if the hydraulic head (h) spectrum of riparian groundwater has a power spectral density similar to stream discharge (Q), exhibiting a characteristic breakpoint between high and low frequencies. At a field site in Walnut Creek watershed in central Iowa, spectral analysis of h in wells located 1 m from the channel edge showed a breakpoint in scaling very similar to the spectrum of Q (∼20 h), whereas h in wells located 20 and 40 m from the channel showed temporal scaling from 1 to 10,000 h without a well-defined breakpoint. The spectral exponent (β) in the riparian zone decreased systematically from the channel into the floodplain as groundwater levels were increasingly dominated by white noise groundwater recharge. The scaling pattern of hydraulic head was not affected by land cover type, although the number of analyses was limited and site conditions were variable among sites. Spectral analysis would not replace quantitative tracer or modeling studies, but the method may provide a simple means of confirming potential interaction at some sites. © 2011, The Author(s). Ground Water © 2011, National Ground Water Association.
Mathematics and the Internet: A Source of Enormous Confusion and Great Potential
2009-05-01
Scale-free Internet Myth: The story recounted below of the scale-free nature of the Internet seems convincing, sound, and almost too good to be true ... models. In fact, much of the initial excitement in the nascent field of network science can be attributed to an early and appealingly simple class ... this new class of networks, commonly referred to as scale-free networks. The term scale-free derives from the simple observation that power-law node ...
NASA Astrophysics Data System (ADS)
Most, Sebastian; Nowak, Wolfgang; Bijeljic, Branko
2015-04-01
Fickian transport in groundwater flow is the exception rather than the rule. Transport in porous media is frequently simulated via particle methods (i.e., particle tracking random walk (PTRW) or continuous time random walk (CTRW)). These methods formulate transport as a stochastic process of particle position increments. At the pore scale, geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Hence, it is important to get a better understanding of the processes at the pore scale. For our analysis we track the positions of 10,000 particles migrating through the pore space over time. The data we use come from micro-CT scans of a homogeneous sandstone and encompass about 10 grain sizes. Based on those images we discretize the pore structure and simulate flow at the pore scale based on the Navier-Stokes equation. This flow field realistically describes flow inside the pore space, and we do not need to add artificial dispersion during the transport simulation. Next, we use particle tracking random walk and simulate pore-scale transport. Finally, we use the obtained particle trajectories to do a multivariate statistical analysis of the particle motion at the pore scale. Our analysis is based on copulas. Every multivariate joint distribution is a combination of its univariate marginal distributions. The copula represents the dependence structure of those univariate marginals and is therefore useful to observe correlation and non-Gaussian interactions (i.e., non-Fickian transport). The first goal of this analysis is to better understand the validity regions of commonly made assumptions. We are investigating three different transport distances: 1) The distance where the statistical dependence between particle increments can be modelled as an order-one Markov process. This would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks starts. 2) The distance where bivariate statistical dependence simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW/CTRW). 3) The distance of complete statistical independence (validity of classical PTRW/CTRW). The second objective is to reveal the characteristic dependencies influencing transport the most. Those dependencies can be very complex. Copulas are highly capable of representing linear dependence as well as non-linear dependence. With that tool we are able to detect persistent characteristics dominating transport even across different scales. The results derived from our experimental data set suggest that there are many more non-Fickian aspects of pore-scale transport than the univariate statistics of longitudinal displacements. Non-Fickianity can also be found in transverse displacements, and in the relations between increments at different time steps. Also, the found dependence is non-linear (i.e., beyond simple correlation) and persists over long distances. Thus, our results strongly support the further refinement of techniques like correlated PTRW or correlated CTRW towards non-linear statistical relations.
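The copula-based dependence check on successive increments can be sketched simply: rank-transform consecutive displacement pairs to uniform margins and inspect the empirical copula density; any deviation from uniformity signals dependence, and structure beyond an elliptical shape signals non-Gaussian (non-Fickian) dependence. A minimal sketch with a hypothetical data file.

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula_density(x, y, bins=20):
    """Rank-transform both margins to (0,1) and histogram the pairs."""
    u = rankdata(x) / (len(x) + 1)
    v = rankdata(y) / (len(y) + 1)
    H, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]], density=True)
    return H  # flat (== 1 everywhere) means independent increments

dx = np.loadtxt("increments.txt")               # hypothetical longitudinal particle increments
C = empirical_copula_density(dx[:-1], dx[1:])   # lag-1 dependence of successive steps
```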
Demonstrating microbial co-occurrence pattern analyses within and between ecosystems
Williams, Ryan J.; Howe, Adina; Hofmockel, Kirsten S.
2014-01-01
Co-occurrence patterns are used in ecology to explore interactions between organisms and environmental effects on coexistence within biological communities. Analysis of co-occurrence patterns among microbial communities has ranged from simple pairwise comparisons between all community members to direct hypothesis testing between focal species. However, co-occurrence patterns are rarely studied across multiple ecosystems or multiple scales of biological organization within the same study. Here we outline an approach to produce co-occurrence analyses that are focused at three different scales: co-occurrence patterns between ecosystems at the community scale, modules of co-occurring microorganisms within communities, and co-occurring pairs within modules that are nested within microbial communities. To demonstrate our co-occurrence analysis approach, we gathered publicly available 16S rRNA amplicon datasets to compare and contrast microbial co-occurrence at different taxonomic levels across different ecosystems. We found differences in community composition and co-occurrence that reflect environmental filtering at the community scale and consistent pairwise occurrences that may be used to infer ecological traits about poorly understood microbial taxa. However, we also found that conclusions derived from applying network statistics to microbial relationships can vary depending on the taxonomic level chosen and criteria used to build co-occurrence networks. We present our statistical analysis and code for public use in analysis of co-occurrence patterns across microbial communities. PMID:25101065
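A generic workflow of this kind can be sketched in a few lines: correlate taxa abundances across samples, keep strong significant pairs as edges, and extract groups of co-occurring taxa. This is an illustration of the general approach, not the authors' pipeline; thresholds and names are illustrative, and connected components stand in for proper module detection.

```python
import numpy as np
import networkx as nx
from scipy.stats import spearmanr
from itertools import combinations

def cooccurrence_network(abund, taxa, rho_min=0.6, p_max=0.01):
    """abund: n_samples x n_taxa abundance table; O(n_taxa^2) pairwise tests."""
    G = nx.Graph()
    G.add_nodes_from(taxa)
    for i, j in combinations(range(len(taxa)), 2):
        rho, p = spearmanr(abund[:, i], abund[:, j])
        if abs(rho) >= rho_min and p <= p_max:
            G.add_edge(taxa[i], taxa[j], weight=rho)
    modules = list(nx.connected_components(G))   # crude stand-in for module detection
    return G, modules

abund = np.loadtxt("otu_table.txt")              # hypothetical samples x taxa matrix
G, modules = cooccurrence_network(abund, taxa=[f"OTU{k}" for k in range(abund.shape[1])])
```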
Mesoscale Dynamical Regimes in the Midlatitudes
NASA Astrophysics Data System (ADS)
Craig, G. C.; Selz, T.
2018-01-01
The atmospheric mesoscales are characterized by a complex variety of meteorological phenomena that defy simple classification. Here a full space-time spectral analysis is carried out, based on a 7 day convection-permitting simulation of springtime midlatitude weather on a large domain. The kinetic energy is largest at synoptic scales, and on the mesoscale it is largely confined to an "advective band" where space and time scales are related by a constant of proportionality which corresponds to a velocity scale of about 10 m s-1. Computing the relative magnitude of different terms in the governing equations allows the identification of five dynamical regimes. These are tentatively identified as quasi-geostrophic flow, propagating gravity waves, stationary gravity waves related to orography, acoustic modes, and a weak temperature gradient regime, where vertical motions are forced by diabatic heating.
Object-oriented analysis and design: a methodology for modeling the computer-based patient record.
Egyhazy, C J; Eyestone, S M; Martino, J; Hodgson, C L
1998-08-01
The article highlights the importance of an object-oriented analysis and design (OOAD) methodology for the computer-based patient record (CPR) in the military environment. Many OOAD methodologies do not adequately scale up, allow for efficient reuse of their products, or accommodate legacy systems. A methodology that addresses these issues is formulated and used to demonstrate its applicability in a large-scale health care service system. During a period of 6 months, a team of object modelers and domain experts formulated an OOAD methodology tailored to the Department of Defense Military Health System and used it to produce components of an object model for simple order processing. This methodology and the lessons learned during its implementation are described. This approach is necessary to achieve broad interoperability among heterogeneous automated information systems.
NASA Technical Reports Server (NTRS)
Anderson, G. S.; Hayden, R. E.; Thompson, A. R.; Madden, R.
1985-01-01
The feasibility of acoustical scale modeling techniques for modeling wind effects on long range, low frequency outdoor sound propagation was evaluated. Upwind and downwind propagation was studied at 1/100 scale for flat ground and simple hills with both rigid and finite ground impedance over a full-scale frequency range from 20 to 500 Hz. Results are presented as 1/3-octave frequency spectra of differences in propagation loss between the case studied and a free-field condition. Selected sets of these results were compared with validated analytical models for propagation loss, when such models were available. When they were not, results were compared with predictions from approximate models developed for this study. Comparisons were encouraging in many cases considering the approximations involved in both the physical modeling and analysis methods. Of particular importance was the favorable comparison between theory and experiment for propagation over soft ground.
Activity affects intraspecific body-size scaling of metabolic rate in ectothermic animals.
Glazier, Douglas Stewart
2009-10-01
Metabolic rate is commonly thought to scale with body mass (M) to the 3/4 power. However, the metabolic scaling exponent (b) may vary with activity state, as has been shown chiefly for interspecific relationships. Here I use a meta-analysis of literature data to test whether b changes with activity level within species of ectothermic animals. Data for 19 species show that b is usually higher during active exercise (mean +/- 95% confidence limits = 0.918 +/- 0.038) than during rest (0.768 +/- 0.069). This significant upward shift in b to near 1 is consistent with the metabolic level boundaries hypothesis, which predicts that maximal metabolic rate during exercise should be chiefly influenced by volume-related muscular power production (scaling as M^1). This dependence of b on activity level does not appear to be a simple temperature effect because body temperature in ectotherms changes very little during exercise.
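Estimating b for a given species and activity state is a log-log regression; a minimal sketch with illustrative arrays (not the meta-analysis data).

```python
import numpy as np

def scaling_exponent(mass, met_rate):
    """Fit log(MR) = log(a) + b*log(M); returns b and its standard error."""
    x, y = np.log(mass), np.log(met_rate)
    X = np.column_stack([np.ones_like(x), x])
    coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    n = len(x)
    sigma2 = res[0] / (n - 2)                             # residual variance
    se_b = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return coef[1], se_b

mass = np.array([1.2, 3.5, 8.0, 20.0, 55.0])       # body masses (g), illustrative
mr_rest = np.array([0.9, 2.1, 4.0, 8.2, 17.0])     # resting metabolic rates, illustrative
b, se = scaling_exponent(mass, mr_rest)            # compare b across rest vs. exercise
```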
Data Analysis and Synthesis for the ONR Undersea Sand Dunes in the South China Sea Field Experiments
2015-09-30
understanding of coastal oceanography by means of applying simple dynamical theories to high-quality observations obtained in the field. My primary ... area of expertise is physical oceanography, but I also enjoy collaborating with biological, chemical, acoustical, and optical oceanographers to work ... oceanography, and impact of the bottom configuration and physical oceanography on acoustic propagation. • The space and time scales of the dune
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamm, Peter; Fanourgakis, George S.; Xantheas, Sotiris S.
Nuclear quantum effects in liquid water have profound implications for several of its macroscopic properties related to structure, dynamics, spectroscopy and transport. Although several of water’s macroscopic properties can be reproduced by classical descriptions of the nuclei using potentials effectively parameterized for a narrow range of its phase diagram, a proper account of the nuclear quantum effects is required in order to ensure that the underlying molecular interactions are transferable across a wide temperature range covering different regions of that diagram. When performing an analysis of the hydrogen bonded structural networks in liquid water resulting from the classical (class.) and quantum (q.m.) descriptions of the nuclei with the transferable, flexible, polarizable TTM3-F interaction potential, we found that the two results can be superimposed over the temperature range of T = 270-350 K using a surprisingly simple, linear scaling of the two temperatures according to T(q.m.) = a T(class.) - ΔT, where a = 1.2 and ΔT = 51 K. The linear scaling and constant shift of the temperature scale can be considered as a generalization of the previously reported temperature shifts (corresponding to structural changes and the melting T) induced by quantum effects in liquid water.
Seismic waves and earthquakes in a global monolithic model
NASA Astrophysics Data System (ADS)
Roubíček, Tomáš
2018-03-01
The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple, elegant way several specialized models relevant in various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics are coupled to capture, e.g. (here by a simplified model), ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way, together with the corresponding interfacial conditions implicitly involved, only when their parameters are scaled appropriately in the different Earth layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains, whose various scalings (limits) can lead to Boger's viscoelastic fluid or even to a purely elastic (inviscid) fluid. The self-induced gravity field and the Coriolis, centrifugal, and tidal forces are counted in our global model as well. A rigorous mathematical analysis addressing the existence of solutions, the convergence of the mentioned scalings, and energy conservation is briefly presented.
Mistarz, Ulrik H; Brown, Jeffery M; Haselmann, Kim F; Rand, Kasper D
2014-12-02
Gas-phase hydrogen/deuterium exchange (HDX) is a fast and sensitive, yet unharnessed analytical approach for providing information on the structural properties of biomolecules, in a complementary manner to mass analysis. Here, we describe a simple setup for ND3-mediated millisecond gas-phase HDX inside a mass spectrometer immediately after ESI (gas-phase HDX-MS) and show utility for studying the primary and higher-order structure of peptides and proteins. HDX was achieved by passing N2-gas through a container filled with aqueous deuterated ammonia reagent (ND3/D2O) and admitting the saturated gas immediately upstream or downstream of the primary skimmer cone. The approach was implemented on three commercially available mass spectrometers and required no or minor fully reversible reconfiguration of gas-inlets of the ion source. Results from gas-phase HDX-MS of peptides using the aqueous ND3/D2O as HDX reagent indicate that labeling is facilitated exclusively through gaseous ND3, yielding similar results to the infusion of purified ND3-gas, while circumventing the complications associated with the use of hazardous purified gases. Comparison of the solution-phase- and gas-phase deuterium uptake of Leu-Enkephalin and Glu-Fibrinopeptide B, confirmed that this gas-phase HDX-MS approach allows for labeling of sites (heteroatom-bound non-amide hydrogens located on side-chains, N-terminus and C-terminus) not accessed by classical solution-phase HDX-MS. The simple setup is compatible with liquid chromatography and a chip-based automated nanoESI interface, allowing for online gas-phase HDX-MS analysis of peptides and proteins separated on a liquid chromatographic time scale at increased throughput. Furthermore, online gas-phase HDX-MS could be performed in tandem with ion mobility separation or electron transfer dissociation, thus enabling multiple orthogonal analyses of the structural properties of peptides and proteins in a single automated LC-MS workflow.
Thai venous stroke prognostic score: TV-SPSS.
Poungvarin, Niphon; Prayoonwiwat, Naraporn; Ratanakorn, Disya; Towanabut, Somchai; Tantirittisak, Tassanee; Suwanwela, Nijasri; Phanthumchinda, Kamman; Tiamkoa, Somsak; Chankrachang, Siwaporn; Nidhinandana, Samart; Laptikultham, Somsak; Limsoontarakul, Sansern; Udomphanthuruk, Suthipol
2009-11-01
Prognosis of cerebral venous sinus thrombosis (CVST) has never been studied in Thailand. A simple prognostic score to predict poor prognosis of CVST has also never been reported. The authors aimed to establish a simple and reliable prognostic score for this condition. The medical records of CVST patients treated at eight neurological training centers in Thailand between April 1993 and September 2005 were reviewed as part of this retrospective study. Clinical features, including headache, seizure, stroke risk factors, Glasgow coma scale (GCS), blood pressure on arrival, papilledema, hemiparesis, meningeal irritation sign, location of occluded venous sinuses, hemorrhagic infarction, cerebrospinal fluid opening pressure, treatment options, length of stay, and other complications, were analyzed to determine the outcome using the modified Rankin scale (mRS). Poor prognosis (defined as mRS of 3-6) was determined on the discharge date. One hundred ninety-four patients' records, 127 females (65.5%) with a mean age of 36.6 +/- 14.4 years, were analyzed. Fifty-one patients (26.3%) were in the poor outcome group (mRS 3-6). Overall mortality was 8.4%. Univariate analysis and then multivariate analysis using SPSS version 11.5 revealed only four statistically significant predictors influencing the outcome of CVST. They were underlying malignancy, low GCS, presence of hemorrhagic infarction (for poor outcome), and involvement of the lateral sinus (for good outcome). The Thai venous stroke prognostic score (TV-SPSS) was derived from these four factors using a multiple logistic model. A simple and pragmatic prognostic score for CVST outcome has been developed with high sensitivity (93%), yet low specificity (33%). The next study should focus on the validation of this score in other prospective populations.
Preliminary psychometric testing of the Fox Simple Quality-of-Life Scale.
Fox, Sherry
2004-06-01
Although quality of life is extensively defined as subjective and multidimensional with both affective and cognitive components, few instruments capture important dimensions of the construct, and few are both conceptually congruent and user friendly for the clinical setting. The aim of this study was to develop and test a measure that would be easy to use clinically and capture both cognitive and affective components of quality of life. Initial item sources for the Fox Simple Quality-of-Life Scale (FSQOLS) were literature-based. Thirty items were compiled for content validity assessment by a panel of expert healthcare clinicians from various disciplines, predominantly nursing. Five items were removed as a result of the review because they reflected negatively worded or redundant items. The 25-item scale was mailed to 177 people with lung, colon, and ovarian cancer in various stages. Cancer types were selected theoretically, based on similarity in prognosis, degree of symptom burden, and possible meaning and experience. Of the 145 participants, all provided complete data on the FSQOLS. Psychometric evaluation of the FSQOLS included item-total correlations, principal components analysis with varimax rotation revealing two factors explaining 50% variance, reliability estimation using alpha estimates, and item-factor correlations. The FSQOLS exhibited significant convergent validity with four popular quality-of-life instruments: the Ferrans and Powers Quality of Life Index, the Functional Assessment of Cancer Therapy Scale, the Short-Form-36 Health Survey, and the General Well-Being Scale. Content validity of the scale was explored and supported using qualitative interviews of 14 participants with lung, colon and ovarian cancer, who were a subgroup of the sample for the initial instrument testing.
Thurber, Greg M; Wittrup, K Dane
2008-05-01
Antibody-based cancer treatment depends upon distribution of the targeting macromolecule throughout tumor tissue, and spatial heterogeneity could significantly limit efficacy in many cases. Antibody distribution in tumor tissue is a function of drug dosage, antigen concentration, binding affinity, antigen internalization, drug extravasation from blood vessels, diffusion in the tumor extracellular matrix, and systemic clearance rates. We have isolated the effects of a subset of these variables by live-cell microscopic imaging of single-chain antibody fragments against carcinoembryonic antigen in LS174T tumor spheroids. The measured rates of scFv penetration and retention were compared with theoretical predictions based on simple scaling criteria. The theory predicts that antibody dose must be large enough to drive a sufficient diffusive flux of antibody to overcome cellular internalization, and exposure time must be long enough to allow penetration to the spheroid center. The experimental results in spheroids are quantitatively consistent with these predictions. Therefore, simple scaling criteria can be applied to accurately predict antibody and antibody fragment penetration distance in tumor tissue.
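The two criteria named here (sufficient diffusive flux to overcome internalization, and sufficient exposure time for penetration) can be summarized as a dimensional balance. This sketch is an illustrative rendering of the abstract's statement, not the paper's exact notation; D is the antibody diffusivity, R the penetration depth, k_int the internalization rate, and K_d the binding affinity.

```latex
% supply of antibody by diffusion must outpace consumption by internalization,
% and exposure must exceed the binding-retarded diffusion time (illustrative form):
\[
  \underbrace{\frac{D\,[\mathrm{Ab}]}{R^{2}}}_{\text{diffusive flux}}
  \;\gtrsim\;
  \underbrace{k_{\mathrm{int}}\,[\mathrm{Ag}]}_{\text{internalization}}
  \qquad\text{and}\qquad
  t_{\mathrm{exposure}} \;\gtrsim\; \frac{R^{2}}{D}\left(1+\frac{[\mathrm{Ag}]}{K_{d}}\right).
\]
```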
Grassi, Mario; Nucera, Andrea
2010-01-01
The objective of this study was twofold: 1) to confirm the hypothesized eight scales and two component summaries of the questionnaire Short Form 36 Health Survey (SF-36), and 2) to evaluate the performance of two alternative measures to the original physical component summary (PCS) and mental component summary (MCS). We performed principal component analysis (PCA) based on 35 items, after optimal scaling via multiple correspondence analysis (MCA), and subsequently on eight scales, after standard summative scoring. Item-based summary measures were planned. Data from the European Community Respiratory Health Survey II follow-up of 8854 subjects from 25 centers were analyzed to cross-validate the original and the novel PCS and MCS. Overall, the scale- and item-based comparison indicated that the SF-36 scales and summaries meet the supposed dimensionality. However, vitality, social functioning, and general health items did not fit the data optimally. The novel measures, derived a posteriori by unit-rule from an oblique (correlated) MCA/PCA solution, are simple item sums or weighted scale sums where the weights are the raw scale ranges. These item-based scores yielded consistent scale-summary results for outlier profiles, with the expected known-group differences validity. We were able to confirm the hypothesized dimensionality of eight scales and two summaries of the SF-36. The alternative scoring reaches at least the same required standards as the original scoring. In addition, it can reduce the item-scale inconsistencies without loss of predictive validity.
A Simple Force-Motion Relation for Migrating Cells Revealed by Multipole Analysis of Traction Stress
Tanimoto, Hirokazu; Sano, Masaki
2014-01-01
For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. PMID:24411233
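The multipole moments themselves are simple weighted sums of the traction field. A minimal sketch under illustrative conventions (coordinates relative to the field centroid; the dipole is the first moment, the quadrupole the second); the stand-in traction field is random placeholder data.

```python
import numpy as np

def force_multipoles(x, y, Tx, Ty, dA):
    """x, y: grid coordinates; Tx, Ty: traction components (same shape); dA: pixel area."""
    F = np.array([Tx.sum(), Ty.sum()]) * dA   # net force; near zero for a migrating cell
    r = np.stack([x, y])
    T = np.stack([Tx, Ty])
    # dipole M_ij = sum x_i T_j dA (2x2); the symmetrized M's principal eigenvector
    # gives the force dipole axis along which the cell migrates
    M = np.einsum('imn,jmn->ij', r, T) * dA
    # quadrupole Q_ijk = sum x_i x_j T_k dA (2x2x2); captures front-rear asymmetry,
    # which sets the direction of migration along the dipole axis
    Q = np.einsum('imn,jmn,kmn->ijk', r, r, T) * dA
    return F, M, Q

ny, nx = 64, 64
y, x = np.mgrid[:ny, :nx] * 0.1                  # coordinates (um), illustrative grid
x -= x.mean(); y -= y.mean()                     # origin at the field centroid
Tx, Ty = np.random.randn(ny, nx), np.random.randn(ny, nx)   # stand-in traction data
F, M, Q = force_multipoles(x, y, Tx, Ty, dA=0.01)
```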
Energy and time determine scaling in biological and computer designs.
Moses, Melanie; Bezerra, George; Edwards, Benjamin; Brown, James; Forrest, Stephanie
2016-08-19
Metabolic rate in animals and power consumption in computers are analogous quantities that scale similarly with size. We analyse vascular systems of mammals and on-chip networks of microprocessors, where natural selection and human engineering, respectively, have produced systems that minimize both energy dissipation and delivery times. Using a simple network model that simultaneously minimizes energy and time, our analysis explains empirically observed trends in the scaling of metabolic rate in mammals and power consumption and performance in microprocessors across several orders of magnitude in size. Just as the evolutionary transitions from unicellular to multicellular animals in biology are associated with shifts in metabolic scaling, our model suggests that the scaling of power and performance will change as computer designs transition to decentralized multi-core and distributed cyber-physical systems. More generally, a single energy-time minimization principle may govern the design of many complex systems that process energy, materials and information. This article is part of the themed issue 'The major synthetic evolutionary transitions'. © 2016 The Author(s).
Cosmic Star Formation: A Simple Model of the SFRD(z)
NASA Astrophysics Data System (ADS)
Chiosi, Cesare; Sciarratta, Mauro; D’Onofrio, Mauro; Chiosi, Emanuela; Brotto, Francesca; De Michele, Rosaria; Politino, Valeria
2017-12-01
We investigate the evolution of the cosmic star formation rate density (SFRD) from redshift z = 20 to z = 0 and compare it with the observational one by Madau and Dickinson derived from recent compilations of ultraviolet (UV) and infrared (IR) data. The theoretical SFRD(z) and its evolution are obtained using a simple model that folds together the star formation histories of prototype galaxies that are designed to represent real objects of different morphological type along the Hubble sequence and the hierarchical growing of structures under the action of gravity from small perturbations to large-scale objects in Λ-CDM cosmogony, i.e., the number density of dark matter halos N(M,z). Although the overall model is very simple and easy to set up, it provides results that closely mimic those obtained from highly complex large-scale N-body simulations. The simplicity of our approach allows us to test different assumptions for the star formation law in galaxies, the effects of energy feedback from stars to interstellar gas, the efficiency of galactic winds, and also the effect of N(M,z). The result of our analysis is that in the framework of the hierarchical assembly of galaxies, the so-called time-delayed star formation under plain assumptions, mainly for the energy feedback and galactic winds, can reproduce the observational SFRD(z).
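For orientation, the observational SFRD(z) compilation referenced here is commonly summarized by the Madau & Dickinson (2014) fitting function. A short sketch evaluating it (units of solar masses per year per cubic megaparsec):

```python
import numpy as np

def sfrd_madau_dickinson(z):
    """Madau & Dickinson (2014) best-fit SFRD, M_sun / yr / Mpc^3."""
    return 0.015 * (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

z = np.linspace(0, 8, 100)
psi = sfrd_madau_dickinson(z)   # peaks near z ~ 1.9, the so-called cosmic noon
```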
Falter, Christian; Ellinger, Dorothea; von Hülsen, Behrend; Heim, René; Voigt, Christian A.
2015-01-01
The outwardly directed cell wall and associated plasma membrane of epidermal cells represent the first layers of plant defense against intruding pathogens. Cell wall modifications and the formation of defense structures at sites of attempted pathogen penetration are decisive for plant defense. A precise isolation of these stress-induced structures would allow a specific analysis of regulatory mechanism and cell wall adaption. However, methods for large-scale epidermal tissue preparation from the model plant Arabidopsis thaliana, which would allow proteome and cell wall analysis of complete, laser-microdissected epidermal defense structures, have not been provided. We developed the adhesive tape – liquid cover glass technique (ACT) for simple leaf epidermis preparation from A. thaliana, which is also applicable on grass leaves. This method is compatible with subsequent staining techniques to visualize stress-related cell wall structures, which were precisely isolated from the epidermal tissue layer by laser microdissection (LM) coupled to laser pressure catapulting. We successfully demonstrated that these specific epidermal tissue samples could be used for quantitative downstream proteome and cell wall analysis. The development of the ACT for simple leaf epidermis preparation and the compatibility to LM and downstream quantitative analysis opens new possibilities in the precise examination of stress- and pathogen-related cell wall structures in epidermal cells. Because the developed tissue processing is also applicable on A. thaliana, well-established, model pathosystems that include the interaction with powdery mildews can be studied to determine principal regulatory mechanisms in plant–microbe interaction with their potential outreach into crop breeding. PMID:25870605
Simple scale interpolator facilitates reading of graphs
NASA Technical Reports Server (NTRS)
Fazio, A.; Henry, B.; Hood, D.
1966-01-01
Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.
NASA Astrophysics Data System (ADS)
Lee, Minsuk; Won, Youngjae; Park, Byungjun; Lee, Seungrag
2017-02-01
Not only the static characteristics but also the dynamic characteristics of the red blood cell (RBC) contain useful information for blood diagnosis. Quantitative phase imaging (QPI) can capture sample images with subnanometer-scale depth resolution and millisecond-scale temporal resolution. QPI has been used in a variety of RBC diagnostic studies, and recently several groups have reduced the processing time of RBC information extraction from QPI using parallel computing algorithms; however, previous studies focused on static parameters such as cell morphology or simple dynamic parameters such as the root mean square (RMS) of the membrane fluctuations. Previously, we presented a practical blood test method using time series correlation analysis of RBC membrane flickering with QPI. However, that method's long computation time limited its clinical application. In this study, we present an accelerated time series correlation analysis of RBC membrane flickering using a parallel computing algorithm. This method yielded fractal scaling exponents for the surrounding medium and normal RBCs that are consistent with our previous research.
Scaling laws of passive-scalar diffusion in the interstellar medium
NASA Astrophysics Data System (ADS)
Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan
2017-05-01
Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.
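As a rough illustration of the fractional-diffusion idea (not the paper's calibrated model), the sketch below evolves an initially Gaussian scalar patch by damping each Fourier mode as exp(-D|k|^α t); the diffusivity D and exponent α are made-up values, and α = 2 would recover ordinary diffusion with Gaussian evolution:

```python
import numpy as np

# Minimal 1D sketch: evolve a passive scalar under fractional diffusion,
#   dc/dt = -D * (-d^2/dx^2)^(alpha/2) c,
# solved exactly in Fourier space, where mode k decays as exp(-D |k|^alpha t).
L, n = 100.0, 1024
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

c0 = np.exp(-x**2)              # initial Gaussian scalar patch
D, alpha, t = 1.0, 1.2, 5.0     # illustrative values; alpha in (0, 2]

c = np.fft.ifft(np.fft.fft(c0) * np.exp(-D * np.abs(k)**alpha * t)).real

# Excess kurtosis signals the non-Gaussian tails that a constant
# "effective diffusivity" model (alpha = 2) would miss; it is finite
# here only because the periodic domain truncates the power-law tails.
p = c / np.trapz(c, x)
mean = np.trapz(x * p, x)
var = np.trapz((x - mean)**2 * p, x)
kurt = np.trapz((x - mean)**4 * p, x) / var**2
print(f"kurtosis = {kurt:.2f} (Gaussian = 3)")
```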
Jenerette, Coretta; Dixon, Jane
2010-10-01
Ethnic and cultural norms influence an individual's assertiveness. In health care, assertiveness may play an important role in health outcomes, especially for predominantly minority populations, such as adults with sickle cell disease. Therefore, it is important to develop measures to accurately assess assertiveness. It is also important to reduce response burden of lengthy instruments while retaining instrument reliability and validity. The purpose of this article is to describe development of a shorter version of the Simple Rathus Assertiveness Schedule (SRAS). Data from a cross-sectional descriptive study of adults with sickle cell disease were used to construct a short form of the SRAS, guided by stepwise regression analysis. The 19-item Simple Rathus Assertiveness Scale-Short Form (SRAS-SF) had acceptable reliability (α = .81) and construct validity and was highly correlated with the SRAS (r = .98, p = .01). The SRAS-SF reduces response burden, while maintaining reliability and validity.
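For readers who want to reproduce the kind of reliability figure quoted above, here is a minimal sketch of Cronbach's alpha for a 19-item scale; the respondent data are simulated around a shared latent factor, not the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                       # shared assertiveness factor
scores = latent + rng.normal(scale=1.0, size=(200, 19))  # 19 noisy items

print(f"alpha = {cronbach_alpha(scores):.2f}")
# The short-form/full-form agreement would be checked analogously, e.g.
# np.corrcoef(short.sum(axis=1), full.sum(axis=1))[0, 1].
```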
Paillet, Frederick L.; Singhroy, V.H.; Hansen, D.T.; Pierce, R.R.; Johnson, A.I.
2002-01-01
Integration of geophysical data obtained at various scales can bridge the gap between localized data from boreholes and site-wide data from regional survey profiles. Specific approaches to such analysis include: 1) comparing geophysical measurements in boreholes with the same measurement made from the surface; 2) regressing geophysical data obtained in boreholes with water-sample data from screened intervals; 3) using multiple, physically independent measurements in boreholes to develop multivariate response models for surface geophysical surveys; 4) defining subsurface cell geometry for most effective survey inversion methods; and 5) making geophysical measurements in boreholes to serve as independent verification of geophysical interpretations. Integrated analysis of surface electromagnetic surveys and borehole geophysical logs at a study site in south Florida indicates that salinity of water in the surficial aquifers is controlled by a simple wedge of seawater intrusion along the coast and by a complex pattern of upward brine seepage from deeper aquifers throughout the study area. This interpretation was verified by drilling three additional test boreholes in carefully selected locations.
Clerici, Nicola; Bodini, Antonio; Ferrarini, Alessandro
2004-10-01
In order to achieve improved sustainability, local authorities need tools that adequately describe and synthesize environmental information. This article illustrates a methodological approach that organizes a wide suite of environmental indicators into a few aggregated indices, making use of correlation, principal component analysis, and fuzzy sets. Furthermore, a weighting system, which includes stakeholders' priorities and ambitions, is applied. As a case study, the described methodology is applied to the Reggio Emilia Province in Italy, considering environmental information from 45 municipalities. Principal component analysis is used to condense an initial set of 19 indicators into 6 fundamental dimensions that highlight patterns of environmental conditions at the provincial scale. These dimensions are further aggregated into two indices of environmental performance through fuzzy sets. The simple form of these indices makes them particularly suitable for public communication, as they condense a wide set of heterogeneous indicators. The main outcomes of the analysis and the potential applications of the method are discussed.
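A minimal sketch of the condensation step, assuming standardized indicators and scikit-learn; the 45 × 19 indicator matrix below is random placeholder data, not the provincial data set:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(45, 19))            # 45 municipalities x 19 indicators

Xz = StandardScaler().fit_transform(X)   # PCA on standardized indicators
pca = PCA(n_components=6).fit(Xz)
dims = pca.transform(Xz)                 # 6 condensed dimensions per municipality

print("variance explained:", pca.explained_variance_ratio_.round(2))
# Each municipality's 6 component scores could then be passed through fuzzy
# membership functions and weighted into the two final performance indices.
```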
Metadata and annotations for multi-scale electrophysiological data.
Bower, Mark R; Stead, Matt; Brinkmann, Benjamin H; Dufendach, Kevin; Worrell, Gregory A
2009-01-01
The increasing use of high-frequency (kHz), long-duration (days) intracranial monitoring from multiple electrodes during pre-surgical evaluation for epilepsy produces large amounts of data that are challenging to store and maintain. Descriptive metadata and clinical annotations of these large data sets also pose challenges to simple, often manual, methods of data analysis. The problems of reliably communicating metadata and annotations between programs, maintaining the meanings within that information over long time periods, and flexibly re-sorting data for analysis place differing demands on data structures and algorithms. Solutions to these individual problem domains (communication, storage and analysis) can be configured to provide easy translation and clarity across the domains. The Multi-scale Annotation Format (MAF) provides an integrated metadata and annotation environment that maximizes code reuse, minimizes error probability and encourages future changes by reducing the tendency to over-fit information technology solutions to current problems. An example of a graphical utility for generating and evaluating metadata and annotations for "big data" files is presented.
Supersymmetry from typicality: TeV-scale gauginos and PeV-scale squarks and sleptons.
Nomura, Yasunori; Shirai, Satoshi
2014-09-12
We argue that under a set of simple assumptions the multiverse leads to low-energy supersymmetry with the spectrum often called spread or minisplit supersymmetry: the gauginos are in the TeV region with the other superpartners 2 or 3 orders of magnitude heavier. We present a particularly simple realization of supersymmetric grand unified theory using this idea.
Gao, Chunsheng; Xin, Pengfei; Cheng, Chaohua; Tang, Qing; Chen, Ping; Wang, Changbiao; Zang, Gonggu; Zhao, Lining
2014-01-01
Cannabis sativa L. is an important economic plant for the production of food, fiber, oils, and intoxicants. However, lack of sufficient simple sequence repeat (SSR) markers has limited the development of cannabis genetic research. Here, large-scale development of expressed sequence tag simple sequence repeat (EST-SSR) markers was performed to obtain more informative genetic markers, and to assess genetic diversity in cannabis (Cannabis sativa L.). Based on the cannabis transcriptome, 4,577 SSRs were identified from 3,624 ESTs. From there, a total of 3,442 complementary primer pairs were designed as SSR markers. Among these markers, trinucleotide repeat motifs (50.99%) were the most abundant, followed by hexanucleotide (25.13%), dinucleotide (16.34%), tetranucleotide (3.8%), and pentanucleotide (3.74%) repeat motifs, respectively. The AAG/CTT trinucleotide repeat (17.96%) was the most abundant motif detected in the SSRs. One hundred and seventeen EST-SSR markers were randomly selected to evaluate primer quality in 24 cannabis varieties. Among these 117 markers, 108 (92.31%) were successfully amplified and 87 (74.36%) were polymorphic. Forty-five polymorphic primer pairs were selected to evaluate genetic diversity and relatedness among the 115 cannabis genotypes. The results showed that 115 varieties could be divided into 4 groups primarily based on geography: Northern China, Europe, Central China, and Southern China. Moreover, the coefficient of similarity when comparing cannabis from Northern China with the European group cannabis was higher than that when comparing with cannabis from the other two groups, owing to a similar climate. This study outlines the first large-scale development of SSR markers for cannabis. These data may serve as a foundation for the development of genetic linkage, quantitative trait loci mapping, and marker-assisted breeding of cannabis. PMID:25329551
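The core of an SSR screen can be expressed as a short regular-expression scan. The sketch below is a generic 2-6 bp tandem-repeat finder with an illustrative five-copy threshold; it is not the pipeline the authors used, just the shape of the computation:

```python
import re
from collections import Counter

def find_ssrs(seq: str, min_repeats: int = 5):
    """Yield (motif, copies, start) for 2-6 bp tandem repeats in a DNA sequence."""
    # The backreference \1 matches additional tandem copies of the captured motif.
    pattern = re.compile(r"([ACGT]{2,6}?)\1{%d,}" % (min_repeats - 1))
    for m in pattern.finditer(seq):
        motif = m.group(1)
        yield motif, len(m.group(0)) // len(motif), m.start()

est = "GGAAGAAGAAGAAGAAGAATTTCACACACACACACGG"
hits = list(find_ssrs(est))
print(hits)  # [('GAA', 6, 1), ('CA', 6, 22)] -- 0-based start positions

motif_counts = Counter(m for m, _, _ in hits)  # tally motif classes across ESTs
```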
Techniques for automatic large scale change analysis of temporal multispectral imagery
NASA Astrophysics Data System (ADS)
Mercovich, Ryan A.
Change detection in remotely sensed imagery is a multi-faceted problem with a wide variety of desired solutions. Automatic change detection and analysis to assist in the coverage of large areas at high resolution is a popular area of research in the remote sensing community. Beyond basic change detection, the analysis of change is essential to provide results that positively impact an image analyst's job when examining potentially changed areas. Present change detection algorithms are geared toward low resolution imagery, and require analyst input to provide anything more than a simple pixel level map of the magnitude of change that has occurred. One major problem with this approach is that change occurs in such large volume at small spatial scales that a simple change map is no longer useful. This research strives to create an algorithm based on a set of metrics that performs a large area search for change in high resolution multispectral image sequences and utilizes a variety of methods to identify different types of change. Rather than simply mapping the magnitude of any change in the scene, the goal of this research is to create a useful display of the different types of change in the image. The techniques presented in this dissertation are used to interpret large area images and provide useful information to an analyst about small regions that have undergone specific types of change while retaining image context to make further manual interpretation easier. This analyst cueing to reduce information overload in a large area search environment will have an impact in the areas of disaster recovery, search and rescue situations, and land use surveys among others. By utilizing a feature based approach founded on applying existing statistical methods and new and existing topological methods to high resolution temporal multispectral imagery, a novel change detection methodology is produced that can automatically provide useful information about the change occurring in large area and high resolution image sequences. The change detection and analysis algorithm developed could be adapted to many potential image change scenarios to perform automatic large scale analysis of change.
Fu, Hai-Yan; Guo, Jun-Wei; Yu, Yong-Jie; Li, He-Dong; Cui, Hua-Peng; Liu, Ping-Ping; Wang, Bing; Wang, Sheng; Lu, Peng
2016-06-24
Peak detection is a critical step in chromatographic data analysis. In the present work, we developed a multi-scale Gaussian smoothing-based strategy for accurate peak extraction. The strategy consisted of three stages: background drift correction, peak detection, and peak filtration. Background drift correction was implemented using a moving window strategy. The new peak detection method is a variant of the system used by the well-known MassSpecWavelet, i.e., chromatographic peaks are found at local maximum values under various smoothing window scales. Therefore, peaks can be detected through the ridge lines of maximum values under these window scales, and signals that increase/decrease monotonically around the peak position can be treated as part of the peak. Instrumental noise was estimated after peak elimination, and a peak filtration strategy was performed to remove peaks with signal-to-noise ratios smaller than 3. The performance of our method was evaluated using two complex datasets. These datasets include essential oil samples for quality control obtained from gas chromatography and tobacco plant samples for metabolic profiling analysis obtained from gas chromatography coupled with mass spectrometry. Results confirmed the effectiveness of the developed method. Copyright © 2016 Elsevier B.V. All rights reserved.
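A minimal sketch of the ridge-line idea, assuming SciPy: keep only local maxima that persist across several Gaussian smoothing scales, then filter by a signal-to-noise threshold of 3. The smoothing scales, noise estimate, and test chromatogram are illustrative, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import argrelmax

def multiscale_peaks(y, scales=(2, 4, 8, 16), tol=3, snr_min=3.0):
    """Local maxima that persist (within +/- tol points) across all scales."""
    maxima = [set(argrelmax(gaussian_filter1d(y, s))[0]) for s in scales]
    ridge = [i for i in maxima[0]
             if all(any(abs(i - j) <= tol for j in m) for m in maxima[1:])]
    noise = np.std(y - gaussian_filter1d(y, max(scales)))  # crude noise estimate
    return [i for i in ridge if y[i] / noise >= snr_min]

t = np.linspace(0, 60, 3000)
signal = 100 * np.exp(-(t - 20) ** 2 / 0.5) + 40 * np.exp(-(t - 35) ** 2 / 2.0)
chrom = signal + np.random.default_rng(2).normal(scale=1.0, size=t.size)
print(multiscale_peaks(chrom))  # indices near the two true peak apexes
```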
Agent Based Modeling: Fine-Scale Spatio-Temporal Analysis of Pertussis
NASA Astrophysics Data System (ADS)
Mills, D. A.
2017-10-01
In epidemiology, spatial and temporal variables are used to compute vaccination efficacy and effectiveness. The chosen resolution and scale of a spatial or spatio-temporal analysis will affect the results. When calculating vaccination efficacy, for example, a simple environment that offers various ideal outcomes is often modeled using coarse-scale data aggregated on an annual basis. In contrast to this inadequate aggregated method, this research uses agent based modeling of fine-scale neighborhood data, centered on the interactions of infants in daycare and their families, to demonstrate a more accurate reflection of vaccination capabilities. Recent studies suggest that, despite preventing major symptoms, the acellular Pertussis vaccine does not prevent the colonization and transmission of Bordetella Pertussis bacteria. After vaccination, a treated individual becomes a potential asymptomatic carrier of the Pertussis bacteria, rather than an immune individual. Agent based modeling enables the measurable depiction of asymptomatic carriers that are otherwise unaccounted for when calculating vaccination efficacy and effectiveness. Using empirical data from a Florida Pertussis outbreak case study, the results of this model demonstrate that asymptomatic carriers bias the calculated vaccination efficacy and reveal a need to reconsider current methods that are widely used for calculating vaccination efficacy and effectiveness.
Effect of Longitudinal Oscillations on Downward Flame Spread over Thin Solid Fuels
NASA Technical Reports Server (NTRS)
Nayagam, Vedha; Sacksteder, Kurt
2013-01-01
Downward flame spread rates over vertically vibrated thin fuel samples are measured in air at one atmosphere pressure under normal gravity. Unlike flame spread against forced-convective flows, the present results show that with increasing vibration acceleration the flame spread rate increases before being blown off at high acceleration levels, causing flame extinction. A simple scaling analysis appears to explain this phenomenon, which may have important implications for flammability studies, including in microgravity environments.
Discrete elements for 3D microfluidics.
Bhargava, Krisna C; Thompson, Bryant; Malmstadt, Noah
2014-10-21
Microfluidic systems are rapidly becoming commonplace tools for high-precision materials synthesis, biochemical sample preparation, and biophysical analysis. Typically, microfluidic systems are constructed in monolithic form by means of microfabrication and, increasingly, by additive techniques. These methods restrict the design and assembly of truly complex systems by placing unnecessary emphasis on complete functional integration of operational elements in a planar environment. Here, we present a solution based on discrete elements that liberates designers to build large-scale microfluidic systems in three dimensions that are modular, diverse, and predictable by simple network analysis techniques. We develop a sample library of standardized components and connectors manufactured using stereolithography. We predict and validate the flow characteristics of these individual components to design and construct a tunable concentration gradient generator with a scalable number of parallel outputs. We show that these systems are rapidly reconfigurable by constructing three variations of a device for generating monodisperse microdroplets in two distinct size regimes and in a high-throughput mode by simple replacement of emulsifier subcircuits. Finally, we demonstrate the capability for active process monitoring by constructing an optical sensing element for detecting water droplets in a fluorocarbon stream and quantifying their size and frequency. By moving away from large-scale integration toward standardized discrete elements, we demonstrate the potential to reduce the practice of designing and assembling complex 3D microfluidic circuits to a methodology comparable to that found in the electronics industry.
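The "simple network analysis techniques" referred to above are the hydraulic analogue of circuit analysis: in the laminar regime each channel behaves like a resistor with the Hagen-Poiseuille resistance R = 128 μL/(π d⁴), and elements combine in series and parallel exactly as in electronics. A minimal sketch with illustrative dimensions (water viscosity, circular channels assumed):

```python
import numpy as np

def channel_resistance(length_m, diameter_m, mu=1e-3):
    """Hagen-Poiseuille hydraulic resistance of a circular channel [Pa*s/m^3]."""
    return 128 * mu * length_m / (np.pi * diameter_m ** 4)

# Two channels in parallel behave like resistors: Q_i = dP / R_i.
R1 = channel_resistance(0.02, 500e-6)   # 2 cm long, 500 um bore
R2 = channel_resistance(0.02, 250e-6)   # same length, half the bore
dP = 1000.0                             # 1 kPa drive

Q1, Q2 = dP / R1, dP / R2
print(f"flow split Q1:Q2 = {Q1 / Q2:.1f}:1")  # ~16:1, since R ~ d^-4
```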
Yoshioka, Shinya; Kinoshita, Shuichi
2006-01-22
A few species of Morpho butterflies have a distinctive white stripe pattern on their structurally coloured blue wings. Since the colour pattern of a butterfly wing is formed as a mosaic of differently coloured scales, several questions naturally arise: are the microstructures the same between the blue and white scales? How is the distinctive whiteness produced, structurally or by means of pigmentation? To answer these questions, we have performed structural and optical investigations of the stripe pattern of a butterfly, Morpho cypris. It is found that besides the dorsal and ventral scale layers, the wing substrate also has the corresponding stripe pattern. Quantitative optical measurements and analysis using a simple model for the wing structure reveal the origin of the higher reflectance which makes the white stripe brighter.
What do the data show? Fostering physical intuition with ClimateBits and NASA Earth Observations
NASA Astrophysics Data System (ADS)
Schollaert Uz, S.; Ward, K.
2017-12-01
Through data visualizations using global satellite imagery available in NASA Earth Observations (NEO), we explain Earth science concepts (e.g. albedo, urban heat island effect, phytoplankton). We also provide examples of ways to explore the satellite data in NEO within a new blog series. This is an ideal tool for scientists and non-scientists alike who want to quickly check satellite imagery for large scale features or patterns. NEO analysis requires no software or plug-ins; only a browser and an internet connection. You can even check imagery and perform simple analyses from your smart phone. NEO can be used to create graphics for presentations and papers or as a first step before acquiring data for more rigorous analysis. NEO has potential application to easily explore large scale environmental and climate patterns that impact operations and infrastructure. This is something we are currently exploring with end user groups.
Universal shocks in the Wishart random-matrix ensemble.
Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr
2013-05-01
We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.
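Up to sign and normalization conventions (which vary between papers, and between the characteristic-polynomial and resolvent formulations), the structure described above can be summarized as follows; treat this as a sketch, not the paper's exact Wishart form:

```latex
% With f_N(x,\tau) \propto \partial_x \ln U_N(x,\tau) for the averaged
% characteristic polynomial U_N, the evolution takes the Burgers form
\[
  \partial_\tau f_N + f_N\,\partial_x f_N = \nu_N\,\partial_x^2 f_N ,
  \qquad |\nu_N| \propto \frac{1}{N},
\]
% whose inviscid (N \to \infty) limit develops shocks at the spectral edges;
% the O(1/N) viscosity smooths them into the universal hard-edge (Bessel)
% oscillations recovered by the scaling analysis in the vicinity of the shocks.
```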
A Simple Measure of the Dynamics of Segmented Genomes: An Application to Influenza
NASA Astrophysics Data System (ADS)
Aris-Brosou, Stéphane
The severity of influenza epidemics, which can potentially become a pandemic, has been very difficult to predict. However, past efforts were focusing on gene-by-gene approaches, while it is acknowledged that the whole genome dynamics contribute to the severity of an epidemic. Here, putting this rationale into action, I describe a simple measure of the amount of reassortment that affects influenza at a genomic scale during a particular year. The analysis of 530 complete genomes of the H1N1 subtype, sampled over eleven years, shows that the proposed measure explains 58% of the variance in the prevalence of H1 influenza in the US population. The proposed measure, denoted nRF, could therefore improve influenza surveillance programs at a minimal cost.
An on-board near-optimal climb-dash energy management
NASA Technical Reports Server (NTRS)
Weston, A. R.; Cliff, E. M.; Kelley, H. J.
1982-01-01
On-board real time flight control is studied in order to develop algorithms which are simple enough to be used in practice, for a variety of missions involving three dimensional flight. The intercept mission in symmetric flight is emphasized. Extensive computation is required on the ground prior to the mission but the ensuing on-board exploitation is extremely simple. The scheme takes advantage of the boundary layer structure common in singular perturbations, arising with the multiple time scales appropriate to aircraft dynamics. Energy modelling of aircraft is used as the starting point for the analysis. In the symmetric case, a nominal path is generated which fairs into the dash or cruise state. Feedback coefficients are found as functions of the remaining energy to go (dash energy less current energy) along the nominal path.
Modeling Age-Related Differences in Immediate Memory Using SIMPLE
ERIC Educational Resources Information Center
Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.
2006-01-01
In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…
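A minimal sketch of SIMPLE's core computation, following the published model family (the distinctiveness parameter and retention times below are illustrative): items live on a logarithmic time dimension, pairwise similarity decays exponentially with log-time distance, and an item's retrieval prospects scale with its self-similarity relative to its neighbors:

```python
import numpy as np

def simple_discriminability(retention_times, c=10.0):
    """Core of SIMPLE: items are located at log(time since presentation);
    an item's score is its self-similarity relative to all neighbors."""
    logt = np.log(np.asarray(retention_times, dtype=float))
    eta = np.exp(-c * np.abs(logt[:, None] - logt[None, :]))  # pairwise similarity
    return eta.diagonal() / eta.sum(axis=1)  # fewer close neighbors -> better recall

# 6 items presented 1 s apart, recalled 5 s after the last item:
times = np.array([10.0, 9.0, 8.0, 7.0, 6.0, 5.0])  # seconds since presentation
print(simple_discriminability(times).round(2))
# Recency emerges: recent items sit farther apart in log-time, so they
# have fewer close neighbors and higher discriminability.
```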
Scaling in Ecosystems and the Linkage of Macroecological Laws
NASA Astrophysics Data System (ADS)
Rinaldo, A.
2007-12-01
Are there predictable linkages among macroecological laws regulating the size and abundance of organisms that are ubiquitously supported by empirical observations and that ecologists traditionally treat as independent? Do fragmentation of habitats, or reduced supply of energy and matter, result in predictable changes in whole ecosystems as a function of their size? Using a coherent theoretical framework based on scaling theory, it is argued that the answer to both questions is affirmative. The concern of the talk is with the comparatively simple situation of the steady-state behavior of a fully developed ecosystem in which, over evolutionary time, resources are exploited in full, individual and collective metabolic needs are met, and enough time has elapsed to produce a rough balance between speciation and extinction and among ecological fluxes. While ecological patterns and processes often show great variation when viewed at different scales of space, time, organismic size and organizational complexity, there is also widespread evidence for the existence of scaling regularities embedded in macroecological "laws" or rules. These laws have commanded considerable attention from the ecological community. Indeed, they are central to ecological theory, as they describe features of complex adaptive systems shown by a number of biological systems, and are perhaps central to the investigation of the dynamic origin of the scale invariance of natural forms in general. The species-area and relative species-abundance relations, the scaling of community and species' size spectra, the scaling of population densities with mean body mass, and the scaling of the largest organism with ecosystem size are examples of such laws. Borrowing heavily from earlier successes in physics, it will be shown how simple mathematical scaling arguments, following from dimensional and finite-size scaling analyses, provide theoretical predictions of the inter-relationships among the species-abundance relationship, the species-area relationship and community size spectra, in excellent accord with empirical data. The main conclusion is that the proposed scaling framework, along with the questions and predictions it provides, serves as a starting point for a novel approach to macroecological analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrews, Ross N.; Narayanan, Suresh; Zhang, Fan
2018-02-01
X-ray photon correlation spectroscopy (XPCS), an extension of dynamic light scattering (DLS) in the X-ray regime, detects temporal intensity fluctuations of coherent speckles and provides scattering-vector-dependent sample dynamics at length scales smaller than DLS. The penetrating power of X-rays enables XPCS to probe the dynamics in a broad array of materials, including polymers, glasses and metal alloys, where attempts to describe the dynamics with a simple exponential fit usually fail. In these cases, the prevailing XPCS data analysis approach employs stretched or compressed exponential decay functions (Kohlrausch functions), which implicitly assume homogeneous dynamics. This paper proposes an alternative analysis scheme based upon inverse Laplace or Gaussian transformation for elucidating heterogeneous distributions of dynamic time scales in XPCS, an approach analogous to the CONTIN algorithm widely accepted in the analysis of DLS from polydisperse and multimodal systems. In conclusion, using XPCS data measured from colloidal gels, it is demonstrated that the inverse transform approach reveals hidden multimodal dynamics in materials, unleashing the full potential of XPCS.
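A minimal sketch of a CONTIN-style inversion, assuming the decay can be modeled as a non-negative mixture of exponentials on a fixed rate grid, with simple ridge regularization standing in for CONTIN's smoothness penalty (the grid, synthetic data, and regularization strength are illustrative):

```python
import numpy as np
from scipy.optimize import nnls

def inverse_laplace(t, y, gammas, lam=0.1):
    """Fit y(t) ~ sum_i w_i exp(-gamma_i t) by Tikhonov-regularized
    non-negative least squares on a fixed rate grid."""
    K = np.exp(-np.outer(t, gammas))               # Laplace kernel
    A = np.vstack([K, lam * np.eye(len(gammas))])  # ridge rows for smoothing
    b = np.concatenate([y, np.zeros(len(gammas))])
    w, _ = nnls(A, b)
    return w

t = np.logspace(-3, 2, 120)
y = 0.6 * np.exp(-50 * t) + 0.4 * np.exp(-0.5 * t)  # hidden bimodal dynamics
gammas = np.logspace(-2, 3, 60)
w = inverse_laplace(t, y, gammas)
print(gammas[w > 0.05])  # recovered rates cluster near 0.5 and 50
```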
Higuchi, Yoshiyuki; Izumi, Hiroyuki; Kumashiro, Mashaharu
2010-06-01
This study developed an assessment scale that hierarchically classifies degrees of low back pain severity. This assessment scale consists of two subscales: 1) pain intensity; 2) pain interference. First, the assessment scale devised by the authors was used to administer a self-administered questionnaire to 773 male workers in the car manufacturing industry. Subsequently, the validity of the measurement items was examined and some of them were revised. Next, the corrected low back pain scale was used in a self-administered questionnaire, the subjects of which were 5053 ordinary workers. The hierarchical validity between the measurement items was checked based on the results of Mokken Scale analysis. Finally, a low back pain assessment scale consisting of seven items was perfected. Quantitative assessment is made possible by scoring the items and low back pain severity can be classified into four hierarchical levels: none; mild; moderate; severe. STATEMENT OF RELEVANCE: The use of this scale devised by the authors allows a more detailed assessment of the degree of risk factor effect and also should prove useful both in selecting remedial measures for occupational low back pain and evaluating their efficacy.
How to normalize metatranscriptomic count data for differential expression analysis.
Klingenberg, Heiner; Meinicke, Peter
2017-01-01
Differential expression analysis on the basis of RNA-Seq count data has become a standard tool in transcriptomics. Several studies have shown that prior normalization of the data is crucial for a reliable detection of transcriptional differences. Until now it has not been clear whether and how the transcriptomic approach can be used for differential expression analysis in metatranscriptomics. We propose a model for differential expression in metatranscriptomics that explicitly accounts for variations in the taxonomic composition of transcripts across different samples. As a main consequence the correct normalization of metatranscriptomic count data under this model requires the taxonomic separation of the data into organism-specific bins. Then the taxon-specific scaling of organism profiles yields a valid normalization and allows us to recombine the scaled profiles into a metatranscriptomic count matrix. This matrix can then be analyzed with statistical tools for transcriptomic count data. For taxon-specific scaling and recombination of scaled counts we provide a simple R script. When applying transcriptomic tools for differential expression analysis directly to metatranscriptomic data with an organism-independent (global) scaling of counts the resulting differences may be difficult to interpret. The differences may correspond to changing functional profiles of the contributing organisms but may also result from a variation of taxonomic abundances. Taxon-specific scaling eliminates this variation and therefore the resulting differences actually reflect a different behavior of organisms under changing conditions. In simulation studies we show that the divergence between results from global and taxon-specific scaling can be drastic. In particular, the variation of organism abundances can imply a considerable increase of significant differences with global scaling. Also, on real metatranscriptomic data, the predictions from taxon-specific and global scaling can differ widely. Our studies indicate that in real data applications performed with global scaling it might be impossible to distinguish between differential expression in terms of transcriptomic changes and differential composition in terms of changing taxonomic proportions. As in transcriptomics, a proper normalization of count data is also essential for differential expression analysis in metatranscriptomics. Our model implies a taxon-specific scaling of counts for normalization of the data. The application of taxon-specific scaling consequently removes taxonomic composition variations from functional profiles and therefore provides a clear interpretation of the observed functional differences.
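The authors provide an R script; the following Python sketch mirrors the same idea under the stated model, with DESeq-style median-of-ratios size factors standing in for whatever scaling the script implements (the toy counts are invented):

```python
import numpy as np
import pandas as pd

def median_of_ratios(counts: pd.DataFrame) -> pd.Series:
    """DESeq-style size factors for one organism bin (transcripts x samples)."""
    log_gm = np.log(counts).mean(axis=1)        # per-transcript log geometric mean
    ok = np.isfinite(log_gm)                    # drop rows containing zeros
    ratios = np.log(counts.loc[ok]).sub(log_gm[ok], axis=0)
    return np.exp(ratios.median(axis=0))        # one factor per sample

def taxon_specific_scaling(counts: pd.DataFrame, taxon: pd.Series) -> pd.DataFrame:
    """Scale each taxon bin by its own size factors, then recombine."""
    scaled = [block / median_of_ratios(block) for _, block in counts.groupby(taxon)]
    return pd.concat(scaled).loc[counts.index]

# toy matrix: 6 transcripts (2 taxa) x 3 samples; taxon A triples in abundance
counts = pd.DataFrame(
    [[100, 200, 400], [50, 100, 200], [10, 20, 40],
     [300, 150, 75], [80, 40, 20], [120, 60, 30]],
    columns=["s1", "s2", "s3"])
taxon = pd.Series(["A", "A", "A", "B", "B", "B"])
print(taxon_specific_scaling(counts, taxon).round(1))
# Within-taxon profiles become flat: the apparent "differential expression"
# was purely a change in taxonomic composition.
```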
Extraction of the proton radius from electron-proton scattering data
Lee, Gabriel; Arrington, John R.; Hill, Richard J.
2015-07-27
We perform a new analysis of electron-proton scattering data to determine the proton electric and magnetic radii, enforcing model-independent constraints from form factor analyticity. A wide-ranging study of possible systematic effects is performed. An improved analysis is developed that rebins data taken at identical kinematic settings and avoids a scaling assumption of systematic errors with statistical errors. Employing standard models for radiative corrections, our improved analysis of the 2010 Mainz A1 Collaboration data yields a proton electric radius r E = 0.895(20) fm and magnetic radius r M = 0.776(38) fm. A similar analysis applied to world data (excluding Mainz data) implies r E = 0.916(24) fm and r M = 0.914(35) fm. The Mainz and world values of the charge radius are consistent, and a simple combination yields a value r E = 0.904(15) fm that is 4σ larger than the CREMA Collaboration muonic hydrogen determination. The Mainz and world values of the magnetic radius differ by 2.7σ, and a simple average yields r M = 0.851(26) fm. As a result, the circumstances under which published muonic hydrogen and electron scattering data could be reconciled are discussed, including a possible deficiency in the standard radiative correction model which requires further analysis.
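The analyticity constraint referred to above is conventionally imposed with a z expansion; the following is a sketch of the standard construction (not necessarily the paper's exact parameterization or choice of expansion point t0):

```latex
% Map the cut Q^2 plane onto the unit disk and expand the form factor:
\[
  z(Q^2) = \frac{\sqrt{t_{\mathrm{cut}} + Q^2} - \sqrt{t_{\mathrm{cut}} - t_0}}
                {\sqrt{t_{\mathrm{cut}} + Q^2} + \sqrt{t_{\mathrm{cut}} - t_0}},
  \qquad
  G_E(Q^2) = \sum_{k=0}^{k_{\max}} a_k\, z(Q^2)^k ,
\]
% with t_cut = 4 m_pi^2 for the proton. The charge radius then follows from
% the slope at zero momentum transfer:
\[
  r_E^2 = -\,6 \left.\frac{dG_E}{dQ^2}\right|_{Q^2=0} .
\]
```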
Rotstein, Horacio G
2014-01-01
We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
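A minimal sketch of subthreshold resonance and phasonance in a generic two-dimensional linear model (a leak plus one slow resonant variable; all parameter values are illustrative, and the impedance formula follows from substituting an oscillatory input into the linear equations):

```python
import numpy as np

# Generic 2D linear model:  C dv/dt = -gL*v - g*w + I(t),  tau dw/dt = v - w.
# For input I ~ exp(i*w*t), the voltage response is set by the impedance
#   Z(w) = 1 / (i*w*C + gL + g / (1 + i*w*tau)).
# Note the strong time-scale separation tau >> C/gL, which amplifies the peak.
C, gL, g, tau = 1.0, 0.1, 0.5, 100.0     # e.g. uF/cm^2, mS/cm^2, ms

f = np.linspace(0.1, 50.0, 5000)         # Hz
w = 2 * np.pi * f / 1000.0               # rad/ms
Z = 1.0 / (1j * w * C + gL + g / (1.0 + 1j * w * tau))

f_res = f[np.argmax(np.abs(Z))]          # amplitude resonance
phase = np.angle(Z)
cross = np.where(np.diff(np.sign(phase)) != 0)[0]
f_phs = f[cross[0]] if cross.size else float("nan")  # zero-phase crossing
print(f"resonance ~ {f_res:.1f} Hz, phasonance ~ {f_phs:.1f} Hz")
```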
Using SQL Databases for Sequence Similarity Searching and Analysis.
Pearson, William R; Mackey, Aaron J
2017-09-13
Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 by John Wiley & Sons, Inc.
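The flavor of the approach, sketched with Python's built-in sqlite3; the table and column names below are illustrative, not the schema shipped with seqdb_demo or search_demo:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE protein (acc TEXT PRIMARY KEY, organism TEXT, length INTEGER);
CREATE TABLE hit (query TEXT, subject TEXT, evalue REAL, identity REAL,
                  FOREIGN KEY(subject) REFERENCES protein(acc));
""")
con.executemany("INSERT INTO protein VALUES (?,?,?)",
                [("P0A7G6", "E. coli", 353), ("Q9Y6K9", "H. sapiens", 419)])
con.executemany("INSERT INTO hit VALUES (?,?,?,?)",
                [("P0A7G6", "Q9Y6K9", 1e-42, 38.5)])

# Summarize homologs per organism -- the kind of genome-scale roll-up that
# is awkward with flat files but a one-liner in SQL.
for row in con.execute("""
    SELECT p.organism, COUNT(*) AS n_hits, MIN(h.evalue) AS best
    FROM hit h JOIN protein p ON p.acc = h.subject
    GROUP BY p.organism"""):
    print(row)
```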
Possible biomechanical origins of the long-range correlations in stride intervals of walking
NASA Astrophysics Data System (ADS)
Gates, Deanna H.; Su, Jimmy L.; Dingwell, Jonathan B.
2007-07-01
When humans walk, the time duration of each stride varies from one stride to the next. These temporal fluctuations exhibit long-range correlations. It has been suggested that these correlations stem from higher nervous system centers in the brain that control gait cycle timing. Existing proposed models of this phenomenon have focused on neurophysiological mechanisms that might give rise to these long-range correlations, and generally ignored potential alternative mechanical explanations. We hypothesized that a simple mechanical system could also generate similar long-range correlations in stride times. We modified a very simple passive dynamic model of bipedal walking to incorporate forward propulsion through an impulsive force applied to the trailing leg at each push-off. Push-off forces were varied from step to step by incorporating both “sensory” and “motor” noise terms that were regulated by a simple proportional feedback controller. We generated 400 simulations of walking, with different combinations of sensory noise, motor noise, and feedback gain. The stride time data from each simulation were analyzed using detrended fluctuation analysis to compute a scaling exponent, α. This exponent quantified how each stride interval was correlated with previous and subsequent stride intervals over different time scales. For different variations of the noise terms and feedback gain, we obtained short-range correlations (α<0.5), uncorrelated time series (α=0.5), long-range correlations (0.5<α<1.0), or Brownian motion (α>1.0). Our results indicate that a simple biomechanical model of walking can generate long-range correlations and thus perhaps these correlations are not a complex result of higher level neuronal control, as has been previously suggested.
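A minimal implementation of the detrended fluctuation analysis used to compute α, applied here to synthetic uncorrelated stride times (so the expected exponent is about 0.5; the stride statistics are invented):

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Detrended fluctuation analysis: slope of log F(n) vs log n.
    alpha ~ 0.5 white noise, 0.5-1.0 long-range correlations, >1 Brownian."""
    x = np.asarray(x, float)
    y = np.cumsum(x - x.mean())                 # integrated profile
    if scales is None:
        scales = np.unique(np.logspace(np.log10(4), np.log10(len(x) // 4),
                                       20).astype(int))
    F = []
    for n in scales:
        segs = len(y) // n
        windows = y[:segs * n].reshape(segs, n)
        t = np.arange(n)
        rms = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
               for seg in windows]              # linear detrend per window
        F.append(np.sqrt(np.mean(rms)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

strides = np.random.default_rng(3).normal(1.1, 0.02, 1000)  # uncorrelated
print(f"alpha = {dfa_alpha(strides):.2f}")   # ~0.5 for white noise
```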
A Multivariate Analysis of Galaxy Cluster Properties
NASA Astrophysics Data System (ADS)
Ogle, P. M.; Djorgovski, S.
1993-05-01
We have assembled from the literature a database of 394 clusters of galaxies, with up to 16 parameters per cluster. They include optical and x-ray luminosities, x-ray temperatures, galaxy velocity dispersions, central galaxy and particle densities, optical and x-ray core radii and ellipticities, etc. In addition, derived quantities, such as the mass-to-light ratios and x-ray gas masses, are included. Doubtful measurements have been identified and deleted from the database. Our goal is to explore the correlations between these parameters, and interpret them in the framework of our understanding of the evolution of clusters and large-scale structure, such as the Gott-Rees scaling hierarchy. Among the simple, monovariate correlations we found, the most significant include those between the optical and x-ray luminosities, x-ray temperatures, cluster velocity dispersions, and central galaxy densities, in various mutual combinations. While some of these correlations have been discussed previously in the literature, generally smaller samples of objects have been used. We will also present the results of a multivariate statistical analysis of the data, including a principal component analysis (PCA). Such an approach has not been used previously for studies of cluster properties, even though it is much more powerful and complete than the simple monovariate techniques that are commonly employed. The observed correlations may lead to powerful constraints for theoretical models of the formation and evolution of galaxy clusters. P.M.O. was supported by a Caltech graduate fellowship. S.D. acknowledges partial support from the NASA contract NAS5-31348 and the NSF PYI award AST-9157412.
Arunachalam, Kantha D; Annamalai, Sathesh Kumar
2013-01-01
The exploitation of various plant materials for the biosynthesis of nanoparticles is considered a green technology, as it does not involve any harmful chemicals. The aim of this study was to develop a simple biological method for the synthesis of silver and gold nanoparticles using Chrysopogon zizanioides. An aqueous leaf extract of C. zizanioides was used to synthesize silver and gold nanoparticles by the bioreduction of silver nitrate (AgNO3) and chloroauric acid (HAuCl4), respectively. Water-soluble organics present in the plant materials were mainly responsible for reducing silver or gold ions to nanosized Ag or Au particles. The synthesized silver and gold nanoparticles were characterized by ultraviolet (UV)-visible spectroscopy, scanning electron microscopy (SEM), energy dispersive X-ray analysis (EDAX), Fourier transform infrared spectroscopy (FTIR), and X-ray diffraction (XRD) analysis. The kinetics of the reduction of aqueous silver/gold ions by the C. zizanioides crude extract were determined by UV-visible spectroscopy. SEM analysis showed that aqueous gold ions, when exposed to the extract, were reduced, resulting in the biosynthesis of gold nanoparticles in the size range 20–50 nm. This eco-friendly approach to nanoparticle synthesis is simple and can be scaled up for large-scale production, with powerful bioactivity as demonstrated by the synthesized silver nanoparticles. The synthesized nanoparticles may have clinical use as antibacterial, antioxidant, and cytotoxic agents and can be used for biomedical applications. PMID:23861583
Videodermoscopy does not enhance diagnosis of scalp contact dermatitis due to topical minoxidil.
Tosti, Antonella; Donati, Aline; Vincenzi, Colombina; Fabbrocini, Gabriella
2009-07-01
Videodermoscopy (VD) is a noninvasive diagnostic tool that provides useful information for the differential diagnosis of scalp disorders. The aim of this study was to investigate whether dermoscopy may help the clinician in the diagnosis of contact dermatitis of the scalp. We analyzed the dermoscopic images taken from 7 patients with contact dermatitis due to topical minoxidil, 6 patients complaining of intense scalp itching during treatment with topical minoxidil but with negative patch tests, and 19 controls. The following dermoscopic patterns described for scalp diseases were evaluated: vascular patterns (simple loops, twisted loops and arborizing lines), follicular/perifollicular patterns (yellow dots, empty ostia, white dots, peripilar signs), white scales, yellow scales, follicular plugging, hair diameter diversity, honeycomb pattern and short regrowing hairs. Findings were graded from 0 to 4 according to severity at 20-fold magnification. Statistical analysis included univariate analysis and the chi-square test using SPSS version 12. There were no statistical differences in the analysis of the vascular patterns and scales between the 3 groups. We were not able to detect dermoscopic features that could help the clinician distinguish scalp contact dermatitis due to topical minoxidil from other conditions that cause severe scalp itching. In particular, minoxidil contact dermatitis does not produce an increase or alterations in the morphology of the scalp vessels or significant scalp scaling when evaluated with dermoscopy.
Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe
2016-07-01
We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, 2/3 of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining 1/3 of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) with PASS <2. The PASS scale performed on par with other published scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
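Scoring the scale and evaluating the cut point is simple arithmetic; the sketch below uses made-up patient labels to show the computation, not the study's data:

```python
import numpy as np

def pass_score(consciousness_abn, gaze_abn, arm_abn):
    """PASS = number of abnormal findings among the 3 items (0-3)."""
    return int(consciousness_abn) + int(gaze_abn) + int(arm_abn)

# Illustrative evaluation against angiography labels (1 = ELVO present):
scores = np.array([3, 0, 2, 1, 2, 0, 3, 1])
elvo   = np.array([1, 0, 1, 0, 0, 0, 1, 1])
pred = scores >= 2                          # the published cut point

tp = np.sum(pred & (elvo == 1)); fn = np.sum(~pred & (elvo == 1))
tn = np.sum(~pred & (elvo == 0)); fp = np.sum(pred & (elvo == 0))
print(f"sensitivity={tp/(tp+fn):.2f}, specificity={tn/(tn+fp):.2f}")
```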
A simple atomic-level hydrophobicity scale reveals protein interfacial structure.
Kapcha, Lauren H; Rossky, Peter J
2014-01-23
Many amino acid residue hydrophobicity scales have been created in an effort to better understand and rapidly characterize water-protein interactions based only on protein structure and sequence. There is surprisingly low consistency in the ranking of residue hydrophobicity between scales, and their ability to provide insightful characterization varies substantially across subject proteins. All current scales characterize hydrophobicity based on entire amino acid residue units. We introduce a simple binary but atomic-level hydrophobicity scale that allows for the classification of polar and non-polar moieties within single residues, including backbone atoms. This simple scale is first shown to capture the anticipated hydrophobic character for those whole residues that align in classification among most scales. Examination of a set of protein binding interfaces establishes good agreement between residue-based and atomic-level descriptions of hydrophobicity for five residues, while the remaining residues produce discrepancies. We then show that the atomistic scale properly classifies the hydrophobicity of functionally important regions where residue-based scales fail. To illustrate the utility of the new approach, we show that the atomic-level scale rationalizes the hydration of two hydrophobic pockets and the presence of a void in a third pocket within a single protein and that it appropriately classifies all of the functionally important hydrophilic sites within two otherwise hydrophobic pores. We suggest that an atomic level of detail is, in general, necessary for the reliable depiction of hydrophobicity for all protein surfaces. The present formulation can be implemented simply in a manner no more complex than current residue-based approaches. © 2013.
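A hedged sketch of what a binary atomic-level classification can look like. The rule below (N/O/S and their hydrogens are polar; carbon and its hydrogens are non-polar) is one plausible choice, not the authors' published parameterization, which treats charged groups with more care:

```python
POLAR_ELEMENTS = {"N", "O", "S"}

def classify_atoms(atoms, bonds):
    """atoms: {name: element}; bonds: set of frozenset pairs of atom names."""
    labels = {}
    for name, elem in atoms.items():
        if elem in POLAR_ELEMENTS:
            labels[name] = "polar"
        elif elem == "H":
            # A hydrogen inherits the polarity of its bonded heavy atom.
            partner = next(a for b in bonds if name in b for a in b if a != name)
            labels[name] = "polar" if atoms[partner] in POLAR_ELEMENTS else "non-polar"
        else:
            labels[name] = "non-polar"
    return labels

# Serine side chain: CB stays non-polar even though the residue is "polar",
# the granularity that whole-residue scales cannot express.
atoms = {"CB": "C", "HB1": "H", "OG": "O", "HG": "H"}
bonds = {frozenset(p) for p in [("CB", "HB1"), ("CB", "OG"), ("OG", "HG")]}
print(classify_atoms(atoms, bonds))
```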
A Generalized Simple Formulation of Convective Adjustment ...
Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and yet still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la
How well can regional fluxes be derived from smaller-scale estimates?
NASA Technical Reports Server (NTRS)
Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.
1992-01-01
Regional surface fluxes are essential lower boundary conditions for large scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum and trace gases vary with length scales from one meter to hundreds of km. A classical difficulty is that fluxes have been measured directly only at points or along lines. The process of scaling up observations limited in space and/or time to represent larger areas was done by assigning properties to surface classes and combining estimated or calculated fluxes using an area weighted average. It is not clear that a simple area weighted average is sufficient to produce the large scale from the small scale, chiefly due to the effect of internal boundary layers, nor is it known how important the uncertainty is to large scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with aircraft borne infrared thermometer, and flight level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known even in simple geometries.
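The classical scaling-up described above reads F_region = Σ_i f_i F_i over surface classes i with area fractions f_i; a one-line numeric sketch with illustrative values (not the campaign's data):

```python
import numpy as np

# Regional flux as the area-weighted average of per-class fluxes.
fractions = np.array([0.7, 0.3])   # land, lake area fractions
fluxes = np.array([120.0, 40.0])   # sensible heat flux per class, W/m^2

regional = np.sum(fractions * fluxes)
print(f"regional flux = {regional:.0f} W/m^2")
# The paper's caveat: internal boundary layers at class transitions can make
# the true regional flux deviate from this simple mixture.
```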
Buchenberg, Sebastian; Schaudinnus, Norbert; Stock, Gerhard
2015-03-10
Biomolecules exhibit structural dynamics on a number of time scales, including picosecond (ps) motions of a few atoms, nanosecond (ns) local conformational transitions, and microsecond (μs) global conformational rearrangements. Despite this substantial separation of time scales, fast and slow degrees of freedom appear to be coupled in a nonlinear manner; for example, there is theoretical and experimental evidence that fast structural fluctuations are required for slow functional motion to happen. To elucidate a microscopic mechanism of this multiscale behavior, Aib peptide is adopted as a simple model system. Combining extensive molecular dynamics simulations with principal component analysis techniques, a hierarchy of (at least) three tiers of the molecule's free energy landscape is discovered. They correspond to chiral left- to right-handed transitions of the entire peptide that happen on a μs time scale, conformational transitions of individual residues that take about 1 ns, and the opening and closing of structure-stabilizing hydrogen bonds that occur within tens of ps and are triggered by sub-ps structural fluctuations. Providing a simple mechanism of hierarchical dynamics, fast hydrogen bond dynamics is found to be a prerequisite for the ns local conformational transitions, which in turn are a prerequisite for the slow global conformational rearrangement of the peptide. As a consequence of the hierarchical coupling, the various processes exhibit a similar temperature behavior which may be interpreted as a dynamic transition.
A study on assimilating potential vorticity data
NASA Astrophysics Data System (ADS)
Li, Yong; Ménard, Richard; Riishøjgaard, Lars Peter; Cohn, Stephen E.; Rood, Richard B.
1998-08-01
The correlation that exists between the potential vorticity (PV) field and the distribution of chemical tracers such as ozone suggests the possibility of using tracer observations as proxy PV data in atmospheric data assimilation systems. This is especially true in the stratosphere, where tracer observations are plentiful, reliable wind observations are generally scarce, and the correlation is most pronounced. The issue investigated in this study is how model dynamics would respond to the assimilation of PV data. First, numerical experiments of identical-twin type were conducted with a simple univariate nudging algorithm and a global shallow water model based on PV and divergence (PV-D model). All model fields are successfully reconstructed through the insertion of complete PV data alone if an appropriate value for the nudging coefficient is used. A simple linear analysis suggests that slow modes are recovered rapidly, at a rate nearly independent of spatial scale. In a more realistic experiment, appropriately scaled total ozone data from the NIMBUS-7 TOMS instrument were assimilated as proxy PV data into the PV-D model over a 10-day period. The resulting model PV field matches the observed total ozone field relatively well on large spatial scales, and the PV, geopotential and divergence fields are dynamically consistent. These results indicate the potential usefulness that tracer observations, as proxy PV data, may offer in a data assimilation system.
Fuzzy-based propagation of prior knowledge to improve large-scale image analysis pipelines
Mikut, Ralf
2017-01-01
Many automatically analyzable scientific questions are well-posed and a variety of information about expected outcomes is available a priori. Although often neglected, this prior knowledge can be systematically exploited to make automated analysis operations sensitive to a desired phenomenon or to evaluate extracted content with respect to this prior knowledge. For instance, the performance of processing operators can be greatly enhanced by a more focused detection strategy and by direct information about the ambiguity inherent in the extracted data. We present a new concept that increases the result quality awareness of image analysis operators by estimating and distributing the degree of uncertainty involved in their output based on prior knowledge. This allows the use of simple processing operators that are suitable for analyzing large-scale spatiotemporal (3D+t) microscopy images without compromising result quality. On the foundation of fuzzy set theory, we transform available prior knowledge into a mathematical representation and extensively use it to enhance the result quality of various processing operators. These concepts are illustrated on a typical bioimage analysis pipeline comprising seed point detection, segmentation, multiview fusion and tracking. The functionality of the proposed approach is further validated on a comprehensive simulated 3D+t benchmark data set that mimics embryonic development and on large-scale light-sheet microscopy data of a zebrafish embryo. The general concept introduced in this contribution represents a new approach to efficiently exploit prior knowledge to improve the result quality of image analysis pipelines. The generality of the concept makes it applicable to practically any field with processing strategies that are arranged as linear pipelines. The automated analysis of terabyte-scale microscopy data will especially benefit from sophisticated and efficient algorithms that enable a quantitative and fast readout. PMID:29095927
Internal Fluid Dynamics and Frequency Scaling of Sweeping Jet Fluidic Oscillators
NASA Astrophysics Data System (ADS)
Seo, Jung Hee; Salazar, Erik; Mittal, Rajat
2017-11-01
Sweeping jet fluidic oscillators (SJFOs) are devices that produce a spatially oscillating jet solely based on intrinsic flow instability mechanisms without any moving parts. Recently, SJFOs have emerged as effective actuators for flow control, but the internal fluid dynamics of the device that drives the oscillatory flow mechanism is not yet fully understood. In the current study, the internal fluid dynamics of the fluidic oscillator with feedback channels has been investigated by employing incompressible flow simulations. The study is focused on the oscillation mechanisms and scaling laws that underpin the jet oscillation. Based on the simulation results, simple phenomenological models that connect the jet deflection to the feedback flow are developed. Several geometric modifications are considered in order to explore the characteristic length scales and phase relationships associated with the jet oscillation and to assess the proposed phenomenological model. A scaling law for the jet oscillation frequency is proposed based on the detailed analysis. This research is supported by AFOSR Grant FA9550-14-1-0289 monitored by Dr. Douglas Smith.
Scaling and self-organized criticality in proteins: Lysozyme c
NASA Astrophysics Data System (ADS)
Phillips, J. C.
2009-11-01
Proteins appear to be the most dramatic natural example of self-organized criticality (SOC), a concept that explains many otherwise apparently unlikely phenomena. Protein functionality is often dominated by long-range hydro(phobic/philic) interactions, which both drive protein compaction and mediate protein-protein interactions. In contrast to previous reductionist short-range hydrophobicity scales, the holistic Moret-Zebende hydrophobicity scale [Phys. Rev. E 75, 011920 (2007)] represents a hydroanalytic tool that bioinformatically quantifies SOC in a way fully compatible with evolution. Hydroprofiling identifies chemical trends in the activities and substrate binding abilities of model enzymes and antibiotic animal lysozymes c, as well as defensins, which have been the subject of tens of thousands of experimental studies. The analysis is simple and easily performed and immediately yields insights not obtainable by traditional methods based on short-range real-space interactions, as described either by classical force fields used in molecular-dynamics simulations, or hydrophobicity scales based on transference energies from water to organic solvents or solvent-accessible areas.
NASA Astrophysics Data System (ADS)
Postadjian, T.; Le Bris, A.; Sahbi, H.; Mallet, C.
2017-05-01
Semantic classification is a core remote sensing task as it provides the fundamental input for land-cover map generation. The very recent literature has shown the superior performance of deep convolutional neural networks (DCNN) for many classification tasks including the automatic analysis of Very High Spatial Resolution (VHR) geospatial images. Most of the recent initiatives have focused on very high discrimination capacity combined with accurate object boundary retrieval. Current architectures are therefore well suited to urban areas over restricted extents, but are not designed for large-scale purposes. This paper presents an end-to-end automatic processing chain, based on DCNNs, that aims at performing large-scale classification of VHR satellite images (here SPOT 6/7). Since this work assesses, through various experiments, the potential of DCNNs for country-scale VHR land-cover map generation, a simple yet effective architecture is proposed, efficiently discriminating the main classes of interest (namely buildings, roads, water, crops, vegetated areas) by exploiting existing VHR land-cover maps for training.
Stability of large-scale systems with stable and unstable subsystems.
NASA Technical Reports Server (NTRS)
Grujic, Lj. T.; Siljak, D. D.
1972-01-01
The purpose of this paper is to develop new methods for constructing vector Liapunov functions and broaden the application of Liapunov's theory to stability analysis of large-scale dynamic systems. The application, so far limited by the assumption that the large-scale systems are composed of exponentially stable subsystems, is extended via the general concept of comparison functions to systems which can be decomposed into asymptotically stable subsystems. Asymptotic stability of the composite system is tested by a simple algebraic criterion. With minor technical adjustments, the same criterion can be used to determine connective asymptotic stability of large-scale systems subject to structural perturbations. By redefining the constraints imposed on the interconnections among the subsystems, the considered class of systems is broadened in an essential way to include composite systems with unstable subsystems. In this way, the theory is brought substantially closer to reality since stability of all subsystems is no longer a necessary assumption in establishing stability of the overall composite system.
The Problem Behaviour Checklist: short scale to assess challenging behaviours
Nagar, Jessica; Evans, Rosie; Oliver, Patricia; Bassett, Paul; Liedtka, Natalie; Tarabi, Aris
2016-01-01
Background Challenging behaviour, especially in intellectual disability, covers a wide range that is in need of further evaluation. Aims To develop a short but comprehensive instrument for all aspects of challenging behaviour. Method In the first part of a two-stage enquiry, a 28-item scale was constructed to examine the components of challenging behaviour. Following a simple factor analysis this was developed further to create a new short scale, the Problem Behaviour Checklist (PBCL). The scale was subsequently used in a randomised controlled trial and tested for interrater reliability. Scores were also compared with a standard scale, the Modified Overt Aggression Scale (MOAS). Results Seven identified factors – personal violence, violence against property, self-harm, sexually inappropriate, contrary, demanding and disappearing behaviour – were scored on a 5-point scale. A subsequent factor analysis with the second population showed demanding, violent and contrary behaviour to account for most of the variance. Interrater reliability using weighted kappa showed good agreement (0.91; 95% CI 0.83–0.99). Good agreement was also shown with scores on the MOAS and a score of 1 on the PBCL showed high sensitivity (97%) and specificity (85%) for a threshold MOAS score of 4. Conclusions The PBCL appears to be a suitable and practical scale for assessing all aspects of challenging behaviour. Declaration of interest None. Copyright and usage © 2016 The Royal College of Psychiatrists. This is an open access article distributed under the terms of the Creative Commons Non-Commercial, No Derivatives (CC BY-NC-ND) licence. PMID:27703753
Exploring the effect of power law social popularity on language evolution.
Gong, Tao; Shuai, Lan
2014-01-01
We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, in which the λ value of the power law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a big population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.
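A minimal sketch of the power-law popularity weighting at the heart of these models: agents ranked by popularity are drawn as interaction partners with probability proportional to rank^(-λ), with λ = 1.0 the value the study finds optimal. The agent count and the downstream naming-game dynamics are omitted; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def popularity_weights(n_agents, lam=1.0):
    """Zipf-like popularity: the weight of the agent ranked k is k**(-lam),
    normalized to a probability distribution."""
    ranks = np.arange(1, n_agents + 1)
    w = ranks ** (-lam)
    return w / w.sum()

# Draw speaker/hearer pairs biased by popularity
w = popularity_weights(100, lam=1.0)
speakers = rng.choice(100, size=10, p=w)
hearers = rng.choice(100, size=10, p=w)
print(list(zip(speakers, hearers)))
```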
Perspectives on scaling and multiscaling in passive scalar turbulence
NASA Astrophysics Data System (ADS)
Banerjee, Tirthankar; Basu, Abhik
2018-05-01
We revisit the well-known problem of multiscaling in substances passively advected by homogeneous and isotropic turbulent flows, or passive scalar turbulence. To that end we propose a two-parameter continuum hydrodynamic model for an advected substance concentration θ, parametrized jointly by y and ȳ, which characterize the spatial scaling behavior of the variances of the advecting stochastic velocity and the stochastic additive driving force, respectively. We analyze it within a one-loop dynamic renormalization group method to calculate the multiscaling exponents of the equal-time structure functions of θ. We show how the interplay between the advective velocity and the additive force may lead to simple scaling or multiscaling. In one limit, our results reduce to the well-known results from the Kraichnan model for a passive scalar. Our framework of analysis should be of help for analytical approaches to the still intractable problem of fluid turbulence itself.
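For readers who want to see what "multiscaling exponents of equal-time structure functions" means operationally, here is a sketch that estimates ζ_p from a 1D surrogate field; it is a numerical illustration of the diagnostic, not the paper's renormalization group calculation.

```python
import numpy as np

def structure_functions(theta, orders=(2, 4, 6), max_sep=64):
    """Equal-time structure functions S_p(r) = <|theta(x+r) - theta(x)|^p>
    for a 1D scalar field sampled on a uniform grid."""
    seps = np.arange(1, max_sep + 1)
    return seps, {p: np.array([np.mean(np.abs(theta[r:] - theta[:-r]) ** p)
                               for r in seps]) for p in orders}

# Surrogate field: a Brownian profile, which should show *simple* scaling
# (zeta_p = p/2); multiscaling would show zeta_p growing slower than linearly
rng = np.random.default_rng(1)
theta = np.cumsum(rng.standard_normal(4096))
seps, S = structure_functions(theta)
for p, Sp in S.items():
    zeta = np.polyfit(np.log(seps), np.log(Sp), 1)[0]
    print(f"zeta_{p} ~ {zeta:.2f}")
```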
Simple and Multiple Endmember Mixture Analysis in the Boreal Forest
NASA Technical Reports Server (NTRS)
Roberts, Dar A.; Gamon, John A.; Qiu, Hong-Lie
2000-01-01
A key scientific objective of the original Boreal Ecosystem-Atmospheric Study (BOREAS) field campaign (1993-1996) was to obtain the baseline data required for modeling and predicting fluxes of energy, mass, and trace gases in the boreal forest biome. These data sets are necessary to determine the sensitivity of the boreal forest biome to potential climatic changes and potential biophysical feedbacks on climate. A considerable volume of remotely sensed and supporting field data were acquired by numerous researchers to meet this objective. By design, remote sensing and modeling were considered critical components for scaling efforts, extending point measurements from flux towers and field sites over larger spatial and longer temporal scales. A major focus of the BOREAS Follow-on program was concerned with integrating the diverse remotely sensed and ground-based data sets to address specific questions such as carbon dynamics at local to regional scales.
On Two-Scale Modelling of Heat and Mass Transfer
NASA Astrophysics Data System (ADS)
Vala, J.; Št'astník, S.
2008-09-01
Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.
Do people trust dentists? Development of the Dentist Trust Scale.
Armfield, J M; Ketting, M; Chrisopoulos, S; Baker, S R
2017-09-01
This study aimed to adapt a measure of trust in physicians to trust in dentists and to assess the reliability and validity of the measure. Questionnaire data were collected from a simple random sample of 596 Australian adults. The 11-item General Trust in Physicians Scale was modified to apply to dentists. The Dentist Trust Scale (DTS) had good internal consistency (α = 0.92) and exploratory factor analysis revealed a single-factor solution. Lower DTS scores were associated with less trust in the dentist last visited, having previously changed dentists due to unhappiness with the care received, currently having dental pain, usual visiting frequency, dental avoidance, and with past experiences of discomfort, gagging, fainting, embarrassment and personal problems with the dentist. The majority of people appear to exhibit trust in dentists. The DTS shows promising reliability and validity evidence. © 2017 Australian Dental Association.
Choi, Kyongsik; Chon, James W; Gu, Min; Lee, Byoungho
2007-08-20
In this paper, a simple confocal laser scanning microscopy (CLSM) image mapping technique based on finite-difference time-domain (FDTD) calculations is proposed and evaluated for characterization of a subwavelength-scale three-dimensional (3D) void structure fabricated inside a polymer matrix. The FDTD simulation method adopts a focused Gaussian incident beam, Berenger's perfectly matched layer absorbing boundary condition, and the angular spectrum analysis method. Through the well-matched simulation and experimental results for the xz-scanned 3D void structure, we first characterize the exact position and the topological shape factor of the subwavelength-scale void structure, which was fabricated by a tightly focused ultrashort pulse laser. The proposed FDTD-based CLSM image mapping technique can be widely applied, from 3D near-field microscopic imaging, optical trapping, and evanescent-wave phenomena to state-of-the-art bio- and nanophotonics.
Scaling Phenomenology in Meson Photoproduction from CLAS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Biplab Dey, Curtis A. Meyer
2010-08-01
In the high energy limit, perturbative QCD predicts that hard scattering amplitudes should follow simple scaling laws. For hard scattering at 90°, we show that experiments support this prediction even in the “medium energy” regime of 2.3 GeV ≤ √s ≤ 2.84 GeV, as long as there are no s-channel resonances present. Our data consist of high statistics measurements for five different exclusive meson photoproduction channels (pω, pη, pη′, K⁺Λ and K⁺Σ⁰) recently obtained from CLAS at Jefferson Lab. The same power-law scaling also leads to “saturated” Regge trajectories at high energies. That is, at large -t and -u, Regge trajectories must approach constant negative integers. We demonstrate the application of saturated Regge phenomenology by performing a partial wave analysis fit to the γp → pη′ differential cross sections.
Rearranging Pionless Effective Field Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin Savage; Silas Beane
2001-11-19
We point out a redundancy in the operator structure of the pionless effective field theory which dramatically simplifies computations. This redundancy is best exploited by using dibaryon fields as fundamental degrees of freedom. In turn, this suggests a new power counting scheme which sums range corrections to all orders. We explore this method with a few simple observables: the deuteron charge form factor, np → dγ, and Compton scattering from the deuteron. Higher dimension operators involving electroweak gauge fields are not renormalized by the s-wave strong interactions, and therefore do not scale with inverse powers of the renormalization scale. Thus, naive dimensional analysis of these operators is sufficient to estimate their contribution to a given process.
Simulation of wave propagation in three-dimensional random media
NASA Technical Reports Server (NTRS)
Coles, William A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1993-01-01
Quantitative error analysis for simulation of wave propagation in three-dimensional random media, assuming narrow angular scattering, is presented for the plane wave and spherical wave geometries. This includes the errors resulting from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive index of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared to the spatial spectra of intensity. The numerical requirements for a simulation of given accuracy are determined for realizations of the field. The numerical requirements for accurate estimation of higher moments of the field are less stringent.
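A minimal sketch of the phase-screen ingredient such simulations rest on, assuming a Kolmogorov power-law spectrum; the amplitude normalization here is schematic, and calibrating it against finite-grid and screen-separation errors is exactly the kind of issue the paper quantifies.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_screen(n, dx, r0):
    """One random phase screen with a Kolmogorov power-law spectrum,
    generated by FFT filtering of complex white noise. The overall
    amplitude normalization is schematic (illustrative only)."""
    f = np.hypot(*np.meshgrid(np.fft.fftfreq(n, dx), np.fft.fftfreq(n, dx)))
    f[0, 0] = np.inf                          # suppress the mean (zero) mode
    amp = np.sqrt(0.023 * r0 ** (-5 / 3) * f ** (-11 / 3))
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.fft.ifft2(amp * noise).real

screen = phase_screen(256, dx=0.01, r0=0.1)   # 256x256 grid, 1 cm spacing
print(screen.std())
```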
Understanding the origins of uncertainty in landscape-scale variations of emissions of nitrous oxide
NASA Astrophysics Data System (ADS)
Milne, Alice; Haskard, Kathy; Webster, Colin; Truan, Imogen; Goulding, Keith
2014-05-01
Nitrous oxide is a potent greenhouse gas which is over 300 times more radiatively effective than carbon dioxide. In the UK, the agricultural sector is estimated to be responsible for over 80% of nitrous oxide emissions, with these emissions resulting from livestock and farmers adding nitrogen fertilizer to soils. For the purposes of reporting emissions to the IPCC, the estimates are calculated using simple models whereby readily-available national or international statistics are combined with IPCC default emission factors. The IPCC emission factor for direct emissions of nitrous oxide from soils has a very large uncertainty. This is primarily because the variability of nitrous oxide emissions in space is large and this results in uncertainty that may be regarded as sample noise. To both reduce uncertainty through improved modelling, and to communicate an understanding of this uncertainty, we must understand the origins of the variation. We analysed data on nitrous oxide emission rate and some other soil properties collected from a 7.5-km transect across contrasting land uses and parent materials in eastern England. We investigated the scale-dependence and spatial uniformity of the correlations between soil properties and emission rates from farm to landscape scale using wavelet analysis. The analysis revealed a complex pattern of scale-dependence. Emission rates were strongly correlated with a process-specific function of the water-filled pore space at the coarsest scale and nitrate at intermediate and coarsest scales. We also found significant correlations between pH and emission rates at the intermediate scales. The wavelet analysis showed that these correlations were not spatially uniform and that at certain scales changes in parent material coincided with significant changes in correlation. Our results indicate that, at the landscape scale, nitrate content and water-filled pore space are key soil properties for predicting nitrous oxide emissions and should therefore be incorporated into process models and emission factors for inventory calculations.
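A simplified stand-in for the scale-by-scale correlation idea, using PyWavelets to correlate the detail coefficients of two surrogate transects level by level; the data, wavelet choice, and decomposition depth are assumptions, and the study's actual wavelet covariance analysis is more sophisticated.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(3)
n = 1024
nitrate = np.cumsum(rng.standard_normal(n))          # surrogate transect data
n2o = 0.6 * nitrate + 2.0 * rng.standard_normal(n)   # surrogate emission rates

def scalewise_correlation(x, y, wavelet="haar", level=6):
    """Correlate two transects scale by scale via their wavelet detail
    coefficients (coarse to fine), a crude proxy for wavelet covariance."""
    dx = pywt.wavedec(x, wavelet, level=level)[1:]
    dy = pywt.wavedec(y, wavelet, level=level)[1:]
    return [np.corrcoef(a, b)[0, 1] for a, b in zip(dx, dy)]

print(scalewise_correlation(nitrate, n2o))
```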
Gas production in the Barnett Shale obeys a simple scaling theory
Patzek, Tad W.; Male, Frank; Marder, Michael
2013-01-01
Natural gas from tight shale formations will provide the United States with a major source of energy over the next several decades. Estimates of gas production from these formations have mainly relied on formulas designed for wells with a different geometry. We consider the simplest model of gas production consistent with the basic physics and geometry of the extraction process. In principle, solutions of the model depend upon many parameters, but in practice and within a given gas field, all but two can be fixed at typical values, leading to a nonlinear diffusion problem we solve exactly with a scaling curve. The scaling curve production rate declines as 1 over the square root of time early on, and it later declines exponentially. This simple model provides a surprisingly accurate description of gas extraction from 8,294 wells in the United States’ oldest shale play, the Barnett Shale. There is good agreement with the scaling theory for 2,057 horizontal wells in which production started to decline exponentially in less than 10 y. The remaining 6,237 horizontal wells in our analysis are too young for us to predict when exponential decline will set in, but the model can nevertheless be used to establish lower and upper bounds on well lifetime. Finally, we obtain upper and lower bounds on the gas that will be produced by the wells in our sample, individually and in total. The estimated ultimate recovery from our sample of 8,294 wells is between 10 and 20 trillion standard cubic feet. PMID:24248376
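The qualitative shape of the scaling curve, a 1/sqrt(t) decline crossing over to exponential decline at an interference time τ, can be sketched with a matched piecewise form; this illustrates the described behavior and is not the paper's exact solution of the nonlinear diffusion problem.

```python
import numpy as np

def production_rate(t, q0, tau):
    """Illustrative matched form of the scaling curve: rate ~ t**(-1/2)
    before the interference time tau, exponential decline afterwards
    (continuous at t = tau)."""
    t = np.asarray(t, dtype=float)
    early = q0 / np.sqrt(t / tau)
    late = q0 * np.exp(-(t - tau) / tau)
    return np.where(t < tau, early, late)

t = np.array([1.0, 5.0, 20.0])          # years; tau = 5 y assumed
print(production_rate(t, q0=1.0, tau=5.0))
```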
Interacting particle systems in time-dependent geometries
NASA Astrophysics Data System (ADS)
Ali, A.; Ball, R. C.; Grosskinsky, S.; Somfai, E.
2013-09-01
Many complex structures and stochastic patterns emerge from simple kinetic rules and local interactions, and are governed by scale invariance properties in combination with effects of the global geometry. We consider systems that can be described effectively by space-time trajectories of interacting particles, such as domain boundaries in two-dimensional growth or river networks. We study trajectories embedded in time-dependent geometries, and the main focus is on uniformly expanding or decreasing domains for which we obtain an exact mapping to simple fixed domain systems while preserving the local scale invariance properties. This approach was recently introduced in Ali et al (2013 Phys. Rev. E 87 020102(R)) and here we provide a detailed discussion on its applicability for self-affine Markovian models, and how it can be adapted to self-affine models with memory or explicit time dependence. The mapping corresponds to a nonlinear time transformation which converges to a finite value for a large class of trajectories, enabling an exact analysis of asymptotic properties in expanding domains. We further provide a detailed discussion of different particle interactions and generalized geometries. All our findings are based on exact computations and are illustrated numerically for various examples, including Lévy processes and fractional Brownian motion.
Givens, Robert M; Mesner, Larry D; Hamlin, Joyce L; Buck, Michael J; Huberman, Joel A
2011-11-16
Studies of nuclear function in many organisms, especially those with tough cell walls, are limited by lack of availability of simple, economical methods for large-scale preparation of clean, undamaged nuclei. Here we present a useful method for nuclear isolation from the important model organism, the fission yeast, Schizosaccharomyces pombe. To preserve in vivo molecular configurations, we flash-froze the yeast cells in liquid nitrogen. Then we broke their tough cell walls, without damaging their nuclei, by grinding in a precision-controlled motorized mortar-and-pestle apparatus. The cryo-ground cells were resuspended and thawed in a buffer designed to preserve nuclear morphology, and the nuclei were enriched by differential centrifugation. The washed nuclei were free from contaminating nucleases and have proven well-suited as starting material for genome-wide chromatin analysis and for preparation of fragile DNA replication intermediates. We have developed a simple, reproducible, economical procedure for large-scale preparation of endogenous-nuclease-free, morphologically intact nuclei from fission yeast. With appropriate modifications, this procedure may well prove useful for isolation of nuclei from other organisms with, or without, tough cell walls.
Calving relation for tidewater glaciers based on detailed stress field analysis
NASA Astrophysics Data System (ADS)
Mercenier, Rémy; Lüthi, Martin P.; Vieli, Andreas
2018-02-01
Ocean-terminating glaciers in Arctic regions have undergone rapid dynamic changes in recent years, which have been related to a dramatic increase in calving rates. Iceberg calving is a dynamical process strongly influenced by the geometry at the terminus of tidewater glaciers. We investigate the effect of varying water level, calving front slope and basal sliding on the state of stress and flow regime for an idealized grounded ocean-terminating glacier and scale these results with ice thickness and velocity. Results show that water depth and calving front slope strongly affect the stress state while the effect from spatially uniform variations in basal sliding is much smaller. An increased relative water level or a reclining calving front slope strongly decrease the stresses and velocities in the vicinity of the terminus and hence have a stabilizing effect on the calving front. We find that surface stress magnitude and distribution for simple geometries are determined solely by the water depth relative to ice thickness. Based on this scaled relationship for the stress peak at the surface, and assuming a critical stress for damage initiation, we propose a simple and new parametrization for calving rates for grounded tidewater glaciers that is calibrated with observations.
Tanimoto, Hirokazu; Sano, Masaki
2014-01-07
For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
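The lowest two moments the authors use are straightforward to compute from a sampled traction field: the force dipole M_ij = Σ x_i T_j dA and the force quadrupole Q_ijk = Σ x_i x_j T_k dA. The sketch below uses a two-point contractile field as a toy input; the centring convention is a simplifying assumption.

```python
import numpy as np

def force_moments(x, y, tx, ty, dA):
    """Force dipole M_ij = sum x_i T_j dA and quadrupole
    Q_ijk = sum x_i x_j T_k dA of a sampled 2D traction field,
    with coordinates taken relative to the mean position."""
    pos = np.stack([x, y], axis=1)          # (N, 2) positions
    T = np.stack([tx, ty], axis=1)          # (N, 2) tractions
    pos = pos - pos.mean(axis=0)            # simple centring choice
    M = np.einsum("ni,nj->ij", pos, T) * dA
    Q = np.einsum("ni,nj,nk->ijk", pos, pos, T) * dA
    return M, Q

# Toy input: two opposite point tractions pulling inward along x
x = np.array([-1.0, 1.0]); y = np.zeros(2)
tx = np.array([1.0, -1.0]); ty = np.zeros(2)
M, Q = force_moments(x, y, tx, ty, dA=1.0)
print(M)   # M[0, 0] = -2: a contractile dipole along the x axis
```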
Combined tension and bending testing of tapered composite laminates
NASA Astrophysics Data System (ADS)
O'Brien, T. Kevin; Murri, Gretchen B.; Hagemeier, Rick; Rogers, Charles
1994-11-01
A simple beam element used at Bell Helicopter was incorporated in the Computational Mechanics Testbed (COMET) finite element code at the Langley Research Center (LaRC) to analyze the response of tapered laminates typical of flexbeams in composite rotor hubs. This beam element incorporated the influence of membrane loads on the flexural response of the tapered laminate configurations modeled and tested in a combined axial tension and bending (ATB) hydraulic load frame designed and built at LaRC. The moments generated from the finite element model were used in a tapered laminated plate theory analysis to estimate axial stresses on the surface of the tapered laminates due to combined bending and tension loads. Surface strains were calculated and compared to surface strains measured using strain gages mounted along the laminate length. The strain distributions correlated reasonably well with the analysis. The analysis was then used to examine the surface strain distribution in a non-linear tapered laminate, where a similarly good correlation was obtained. Results indicate that simple finite element beam models may be used to identify tapered laminate configurations best suited for simulating the response of a composite flexbeam in a full-scale rotor hub.
Collision geometry scaling of Au+Au pseudorapidity density from √(sNN )=19.6 to 200 GeV
NASA Astrophysics Data System (ADS)
Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.
2004-08-01
The centrality dependence of the midrapidity charged particle multiplicity in Au+Au heavy-ion collisions at √(sNN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with number of binary collisions) to soft (scaling with number of participant pairs) interactions is consistent with a value of x = 0.13 ± 0.01 (stat) ± 0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R_200/19.6 = 2.03 ± 0.02 (stat) ± 0.05 (syst).
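The quoted hard/soft decomposition is the standard two-component model, dN/dη = n_pp [(1 - x) N_part/2 + x N_coll]; the sketch below evaluates it with x = 0.13, using hypothetical N_part, N_coll, and n_pp values.

```python
def dndeta_mid(npart, ncoll, npp, x=0.13):
    """Two-component model for midrapidity multiplicity: a soft term
    scaling with participant pairs plus a hard term scaling with the
    number of binary collisions."""
    return npp * ((1.0 - x) * npart / 2.0 + x * ncoll)

# Illustrative central Au+Au inputs (Npart, Ncoll, npp are hypothetical)
print(dndeta_mid(npart=350.0, ncoll=1000.0, npp=2.25))  # ~635
```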
Amirian, Mohammad-Elyas; Fazilat-Pour, Masoud
2016-08-01
The present study examined simple and multivariate relationships of spiritual intelligence with general health and happiness. The method employed was descriptive and correlational. King's Spiritual Quotient scale, the GHQ-28 and the Oxford Happiness Inventory were completed by a sample of 384 students, selected using stratified random sampling from the students of Shahid Bahonar University of Kerman. Data were subjected to descriptive and inferential statistics, including correlations and multivariate regressions. Bivariate correlations support a positive and significant predictive value of spiritual intelligence for general health and happiness. Further analysis showed that, among the spiritual intelligence subscales, existential critical thinking predicted general health and happiness inversely. In addition, happiness was positively predicted by generation of personal meaning and transcendental awareness. The findings are discussed in line with previous studies and the relevant theoretical background.
Testing averaged cosmology with type Ia supernovae and BAO data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos, B.; Alcaniz, J.S.; Coley, A.A.
An important problem in precision cosmology is the determination of the effects of averaging and backreaction on observational predictions, particularly in view of the wealth of new observational data and improved statistical techniques. In this paper, we discuss the observational viability of a class of averaged cosmologies which consist of a simple parametrized phenomenological two-scale backreaction model with decoupled spatial curvature parameters. We perform a Bayesian model selection analysis and find that this class of averaged phenomenological cosmological models is favored with respect to the standard ΛCDM cosmological scenario when a joint analysis of current SNe Ia and BAO data is performed. In particular, the analysis provides observational evidence for non-trivial spatial curvature.
Hemilä, Harri
2017-05-12
The relative scale has been used for decades in analysing binary data in epidemiology. In contrast, there has been a long tradition of carrying out meta-analyses of continuous outcomes on the absolute, original measurement, scale. The biological rationale for using the relative scale in the analysis of binary outcomes is that it adjusts for baseline variations; however, similar baseline variations can occur in continuous outcomes and relative effect scale may therefore be often useful also for continuous outcomes. The aim of this study was to determine whether the relative scale is more consistent with empirical data on treating the common cold than the absolute scale. Individual patient data was available for 2 randomized trials on zinc lozenges for the treatment of the common cold. Mossad (Ann Intern Med 125:81-8, 1996) found 4.0 days and 43% reduction, and Petrus (Curr Ther Res 59:595-607, 1998) found 1.77 days and 25% reduction, in the duration of colds. In both trials, variance in the placebo group was significantly greater than in the zinc lozenge group. The effect estimates were applied to the common cold distributions of the placebo groups, and the resulting distributions were compared with the actual zinc lozenge group distributions. When the absolute effect estimates, 4.0 and 1.77 days, were applied to the placebo group common cold distributions, negative and zero (i.e., impossible) cold durations were predicted, and the high level variance remained. In contrast, when the relative effect estimates, 43 and 25%, were applied, impossible common cold durations were not predicted in the placebo groups, and the cold distributions became similar to those of the zinc lozenge groups. For some continuous outcomes, such as the duration of illness and the duration of hospital stay, the relative scale leads to a more informative statistical analysis and more effective communication of the study findings. The transformation of continuous data to the relative scale is simple with a spreadsheet program, after which the relative scale data can be analysed using standard meta-analysis software. The option for the analysis of relative effects of continuous outcomes directly from the original data should be implemented in standard meta-analysis programs.
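Analysing a continuous outcome on the relative scale typically means a ratio of means with a delta-method standard error on the log scale; the sketch below shows that transformation with hypothetical cold-duration numbers (the formula is a standard one, not taken from this paper).

```python
import numpy as np

def log_ratio_of_means(m1, sd1, n1, m0, sd0, n0):
    """Relative effect for a continuous outcome as a ratio of means
    (treatment/control), with a delta-method standard error on the log
    scale, suitable for standard inverse-variance meta-analysis."""
    log_rom = np.log(m1 / m0)
    se = np.sqrt(sd1**2 / (n1 * m1**2) + sd0**2 / (n0 * m0**2))
    return log_rom, se

# Hypothetical cold-duration data (days): zinc vs placebo
log_rom, se = log_ratio_of_means(m1=4.5, sd1=1.6, n1=25, m0=7.9, sd0=3.1, n0=25)
print(f"relative reduction: {1 - np.exp(log_rom):.0%} (SE of log RoM {se:.3f})")
```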
Mohapatra, Pratyasha; Mendivelso-Perez, Deyny; Bobbitt, Jonathan M; Shaw, Santosh; Yuan, Bin; Tian, Xinchun; Smith, Emily A; Cademartiri, Ludovico
2018-05-30
This paper describes a simple approach to the large-scale synthesis of colloidal Si nanocrystals and their processing by He plasma into spin-on carbon-free nanocrystalline Si films. We further show that the reactive ion etching (RIE) rate of these films is 1.87 times that of single-crystalline Si, consistent with a simple geometric argument that accounts for the nanoscale roughness caused by the nanoparticle shape.
Development of small scale cluster computer for numerical analysis
NASA Astrophysics Data System (ADS)
Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.
2017-09-01
In this study, two personal computers were successfully networked together to form a small-scale cluster. Each computer has a multicore processor with four cores, giving the cluster eight cores in total. The cluster runs an Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test verified that the computers could pass the required information between them without any problem, and was done using a simple MPI "Hello" program written in the C language. The performance test was done to show that the cluster's computational performance is much better than that of a single-CPU computer. In this performance test, the same code was run four times, using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer from common hardware, capable of higher computing power than a single-CPU processor, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
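The study's tests used a C MPI "Hello" program; as a sketch of the same two checks, communication followed by a simple parallel workload, here is a Python/mpi4py version (the use of mpi4py and the array-summing workload are assumptions of convenience, not the authors' code).

```python
# hello_mpi.py -- run with e.g.: mpiexec -n 8 python hello_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Communication test: every rank reports in
print(f"Hello from rank {rank} of {size}")

# Performance test: each rank sums its strided slice of a large range,
# then the partial sums are combined with a reduction on rank 0
n = 10_000_000
local = np.arange(rank, n, size, dtype=np.float64).sum()
total = comm.reduce(local, op=MPI.SUM, root=0)
if rank == 0:
    print(f"total = {total:.6e} (expected {n * (n - 1) / 2:.6e})")
```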
Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan
2018-01-01
A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.
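A minimal sketch of superpixel-based Random Forest segmentation in the spirit described, using scikit-image SLIC and scikit-learn; the image, features, and training labels are random placeholders, so this only illustrates the pipeline shape, not the authors' trained classifier.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def superpixel_features(image, segments):
    """Mean colour per superpixel: a deliberately simple feature set."""
    return np.array([image[segments == s].mean(axis=0)
                     for s in np.unique(segments)])

rng = np.random.default_rng(4)
image = rng.random((120, 160, 3))            # stand-in RGB plant image
segments = slic(image, n_segments=200, compactness=10, start_label=0)
X = superpixel_features(image, segments)
y = rng.integers(0, 2, size=len(X))          # stand-in labels: 1 = plant

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Map predicted superpixel labels back onto pixels via a lookup table
labels = np.unique(segments)
lut = np.zeros(labels.max() + 1, dtype=int)
lut[labels] = clf.predict(X)
plant_mask = lut[segments]
print(plant_mask.shape, int(plant_mask.sum()))
```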
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
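The anthropometric-modeling strategy amounts to adding a covariate to the exposure prediction model; a sketch with synthetic data (all shapes and coefficients invented) shows how the added term can raise the explained variance.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
n = 288  # e.g., 36 tasks x 8 participants (shapes are illustrative)

inclinometer = rng.uniform(0, 60, n)            # trunk inclination (deg)
arm_len = rng.normal(0.27, 0.02, n)             # lower-arm length (m), assumed
motion_capture = (0.8 * inclinometer + 40 * (arm_len - 0.27)
                  + rng.normal(0, 5, n))        # synthetic "gold standard"

# Model 1: inclinometer alone; Model 2: add the anthropometric covariate
X1 = inclinometer[:, None]
X2 = np.column_stack([inclinometer, arm_len])
r2_1 = LinearRegression().fit(X1, motion_capture).score(X1, motion_capture)
r2_2 = LinearRegression().fit(X2, motion_capture).score(X2, motion_capture)
print(f"R^2 inclinometer only: {r2_1:.2f}; with anthropometry: {r2_2:.2f}")
```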
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Llabrés-Brustenga, Alba; Rius, Anna; Rodríguez-Solà, Raúl; Navarro, Xavier
2018-02-01
As in other natural processes, it has frequently been observed that the phenomenon arising from the rainfall generation process presents statistical fractal self-similarity, and thus rainfall series generally show scaling properties. Based on this fact, the simple scaling methodology is quite broadly used to derive or reproduce the intensity-duration-frequency (IDF) curves of a site. In the present work, the relationship of the simple scaling parameter with the characteristic rainfall pattern of the area of study has been investigated. This scaling parameter was calculated from 147 selected daily rainfall series covering the period between 1883 and 2016 over the Catalonian territory (Spain) and its nearby surroundings, and a discussion is presented of the relationship between the spatial distribution of the scaling parameter and the rainfall pattern, as well as of trends in this scaling parameter over the past decades, possibly due to climate change.
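In the simple scaling framework, moments or quantiles of annual-maximum intensity follow a power law in duration, so the scaling exponent can be read off a log-log fit and then used to transfer a daily quantile to shorter durations; every number below is illustrative.

```python
import numpy as np

# Mean annual-maximum rainfall intensities (mm/h) at several durations (h);
# the values are purely illustrative
durations = np.array([1.0, 2.0, 6.0, 12.0, 24.0])
mean_intensity = np.array([32.0, 21.0, 10.5, 6.8, 4.3])

# Simple scaling: E[I_d] proportional to d**(-beta); beta from a log-log fit
beta = -np.polyfit(np.log(durations), np.log(mean_intensity), 1)[0]
print(f"scaling exponent beta = {beta:.2f}")

# Transfer an assumed 10-year quantile of 24-h intensity to shorter durations
i24_T10 = 6.0  # mm/h, assumed
for d in (1.0, 6.0):
    print(f"{d:>4.0f} h: {i24_T10 * (d / 24.0) ** (-beta):.1f} mm/h")
```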
Dorian, Paul; Guerra, Peter G; Kerr, Charles R; O'Donnell, Suzan S; Crystal, Eugene; Gillis, Anne M; Mitchell, L Brent; Roy, Denis; Skanes, Allan C; Rose, M Sarah; Wyse, D George
2009-06-01
Atrial fibrillation (AF) is commonly associated with impaired quality of life. There is no simple validated scale to quantify the functional illness burden of AF. The Canadian Cardiovascular Society Severity in Atrial Fibrillation (CCS-SAF) scale is a bedside scale that ranges from class 0 to 4, from no effect on functional quality of life to a severe effect on life quality. This study was performed to validate the scale. In 484 patients with documented AF (62.2+/-12.5 years of age, 67% men; 62% paroxysmal and 38% persistent/permanent), the SAF class was assessed and 2 validated quality-of-life questionnaires were administered: the SF-36 generic scale and the disease-specific AFSS (University of Toronto Atrial Fibrillation Severity Scale). There is a significant linear graded correlation between the SAF class and measures of symptom severity, physical and emotional components of quality of life, general well-being, and health care consumption related to AF. Patients with SAF class 0 had age- and sex-standardized SF-36 scores of 0.15+/-0.16 and -0.04+/-0.31 (SD units), that is, units away from the mean population score for the mental and physical summary scores, respectively. For each unit increase in SAF class, there is a 0.36 and 0.40 SD unit decrease in the SF-36 score for the physical and mental components. As the SAF class increases from 0 to 4, the symptom severity score (range, 0 to 35) increases from 4.2+/-5.0 to 18.4+/-7.8 (P<0.0001). The CCS-SAF scale is a simple semiquantitative scale that closely approximates patient-reported subjective measures of quality of life in AF and may be practical for clinical use.
Shiffman, Carl
2017-11-30
To define and elucidate the properties of reduced-variable Nyquist plots. Non-invasive measurements of the electrical impedance of the human thigh. A retrospective analysis of the electrical impedances of 154 normal subjects measured over the past decade shows that 'scaling' of the Nyquist plots for human thigh muscles is a property shared by healthy thigh musculature, irrespective of subject and the length of muscle segment. Here the term scaling signifies the near and sometimes 'perfect' coalescence of the separate X versus R plots into one 'reduced' Nyquist plot by the simple expedient of dividing R and X by X_m, the value of X at the reactance maximum. To the extent allowed by noise levels one can say that there is one 'universal' reduced Nyquist plot for the thigh musculature of healthy subjects. There is one feature of the Nyquist curves which is not 'universal', however, namely the frequency f_m at which the maximum in X is observed. That is found to vary from 10 to 100 kHz, depending on subject and segment length. Analysis shows, however, that the mean value of 1/f_m is an accurately linear function of segment length, though there is a small subject-to-subject random element as well. Also, following the recovery of an otherwise healthy victim of ankle fracture demonstrates the clear superiority of measurements above about 800 kHz, where scaling is not observed, in contrast to measurements below about 400 kHz, where scaling is accurately obeyed. The ubiquity of 'scaling' casts new light on the interpretation of impedance results as they are used in electrical impedance myography and bioelectric impedance analysis.
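The reduction itself is one line, dividing R and X by X_m; the sketch below applies it to a synthetic single-dispersion (Cole-type) impedance curve, which is an assumption standing in for measured thigh data.

```python
import numpy as np

def reduced_nyquist(R, X):
    """Scale a Nyquist plot by X_m, the reactance at its maximum, so that
    curves from different subjects/segment lengths can coalesce."""
    Xm = X.max()
    return R / Xm, X / Xm, Xm

# Synthetic single-dispersion impedance with f_m ~ 50 kHz
f = np.logspace(3, 6, 200)                               # 1 kHz .. 1 MHz
w = 2 * np.pi * f
R0, Rinf, tau = 400.0, 150.0, 1.0 / (2 * np.pi * 5e4)
Z = Rinf + (R0 - Rinf) / (1 + 1j * w * tau)
Rr, Xr, Xm = reduced_nyquist(Z.real, -Z.imag)
print(f"X_m = {Xm:.1f} ohm; reduced peak reactance = {Xr.max():.1f}")
```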
A meta-analysis of crop pest and natural enemy response to landscape complexity.
Chaplin-Kramer, Rebecca; O'Rourke, Megan E; Blitzer, Eleanor J; Kremen, Claire
2011-09-01
Many studies in recent years have investigated the relationship between landscape complexity and pests, natural enemies and/or pest control. However, no quantitative synthesis of this literature beyond simple vote-count methods yet exists. We conducted a meta-analysis of 46 landscape-level studies, and found that natural enemies have a strong positive response to landscape complexity. Generalist enemies show consistent positive responses to landscape complexity across all scales measured, while specialist enemies respond more strongly to landscape complexity at smaller scales. Generalist enemy response to natural habitat also tends to occur at larger spatial scales than for specialist enemies, suggesting that land management strategies to enhance natural pest control should differ depending on whether the dominant enemies are generalists or specialists. The positive response of natural enemies does not necessarily translate into pest control, since pest abundances show no significant response to landscape complexity. Very few landscape-scale studies have estimated enemy impact on pest populations, however, limiting our understanding of the effects of landscape on pest control. We suggest focusing future research efforts on measuring population dynamics rather than static counts to better characterise the relationship between landscape complexity and pest control services from natural enemies. © 2011 Blackwell Publishing Ltd/CNRS.
Attribution of regional flood changes based on scaling fingerprints
NASA Astrophysics Data System (ADS)
Viglione, A.; Merz, B.; Dung, N.; Parajka, J.; Nester, T.; Bloeschl, G.
2017-12-01
Changes in the river flood regime may be due to atmospheric processes (e.g., increasing precipitation), catchment processes (e.g., soil compaction associated with land use change), and river system processes (e.g., loss of retention volume in the floodplains). We propose a framework for attributing flood changes to these drivers based on a regional analysis. We exploit the scaling characteristics (i.e., fingerprints) with catchment area of the effects of the drivers on flood changes. The estimation of their relative contributions is framed in Bayesian terms. Analysis of a synthetic, controlled case suggests that the accuracy of the regional attribution increases with increasing number of sites and record lengths, decreases with increasing regional heterogeneity, increases with increasing difference of the scaling fingerprints, and decreases with an increase of their prior uncertainty. The applicability of the framework is illustrated for a case study set in Austria, where positive flood trends have been observed at many sites in the past decades. The individual scaling fingerprints related to the atmospheric, catchment, and river system processes are estimated from rainfall data and simple hydrological modeling. Although the distributions of the contributions are rather wide, the attribution identifies precipitation change as the main driver of flood change in the study region.
Pyrotechnic modeling for the NSI and pin puller
NASA Technical Reports Server (NTRS)
Powers, Joseph M.; Gonthier, Keith A.
1993-01-01
A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.
A simple landslide susceptibility analysis for hazard and risk assessment in developing countries
NASA Astrophysics Data System (ADS)
Guinau, M.; Vilaplana, J. M.
2003-04-01
In recent years, a number of techniques and methodologies have been developed for mitigating natural disasters. The complexity of these methodologies and the scarcity of material and data series justify the need for simple methodologies to obtain the information necessary for minimising the effects of catastrophic natural phenomena. Working with polygonal maps in a GIS allowed us to develop a simple methodology, applied to an area of 473 km² in the Departamento de Chinandega (NW Nicaragua). This area was severely affected by a large number of landslides (mainly debris flows) triggered by the Hurricane Mitch rainfalls in October 1998. With the aid of aerial photography interpretation at 1:40,000 scale, enlarged to 1:20,000, and detailed field work, a landslide map at 1:10,000 scale was constructed. The failure zones of landslides were digitized to obtain a digital failure zone map. A digital terrain unit map, in which a series of physical-environmental terrain factors are represented, was also used. Dividing the study area into two zones (A and B) with homogeneous physical and environmental characteristics allowed us to develop the proposed methodology and to validate it. In zone A, the digital failure zone map is superimposed onto the digital terrain unit map to establish the relationship between the different terrain factors and the failure zones. The numerical expression of this relationship enables us to classify the terrain by its landslide susceptibility. In zone B, this numerical relationship was employed to obtain a landslide susceptibility map, obviating the need for a failure zone map. The validity of the methodology can be tested in this area by using the degree of superposition of the susceptibility map and the failure zone map. The implementation of the methodology in tropical countries with physical and environmental characteristics similar to those of the study area allows landslide susceptibility analysis in areas where landslide records do not exist. This analysis is essential to landslide hazard and risk assessment, which is necessary to determine actions for mitigating landslide effects, e.g. land planning and emergency aid actions.
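The "numerical expression of this relationship" between terrain factors and failure zones is commonly implemented as a frequency ratio: the share of failure area in a terrain class divided by that class's share of total area. The sketch below assumes hypothetical terrain classes and areas; the study's exact weighting scheme may differ.

```python
import pandas as pd

# One row per terrain-unit polygon after overlay with the failure-zone map;
# class names and areas are hypothetical
units = pd.DataFrame({
    "lithology":   ["ash", "ash", "lava", "lava", "alluvium"],
    "area_km2":    [40.0, 55.0, 120.0, 180.0, 78.0],
    "failure_km2": [3.2, 4.1, 1.0, 1.4, 0.1],
})

# Frequency ratio per class: (share of failure area) / (share of total area)
by_class = units.groupby("lithology")[["area_km2", "failure_km2"]].sum()
fr = ((by_class["failure_km2"] / by_class["failure_km2"].sum())
      / (by_class["area_km2"] / by_class["area_km2"].sum()))
print(fr.sort_values(ascending=False))  # values > 1 mark landslide-prone classes
```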
A Lithology Based Map Unit Schema For Onegeology Regional Geologic Map Integration
NASA Astrophysics Data System (ADS)
Moosdorf, N.; Richard, S. M.
2012-12-01
A system of lithogenetic categories for a global lithological map (GLiM, http://www.ifbm.zmaw.de/index.php?id=6460&L=3) has been compiled based on analysis of lithology/genesis categories for regional geologic maps for the entire globe. The scheme is presented for discussion and comment. Analysis of units on a variety of regional geologic maps indicates that units are defined based on assemblages of rock types, as well as their genetic type. In this compilation of continental geology, outcropping surface materials are dominantly sediment/sedimentary rock; major subdivisions of the sedimentary category include clastic sediment, carbonate sedimentary rocks, clastic sedimentary rocks, mixed carbonate and clastic sedimentary rock, colluvium and residuum. Significant areas of mixed igneous and metamorphic rock are also present. A system of global categories to characterize the lithology of regional geologic units is important for Earth System models of matter fluxes to soils, ecosystems, rivers and oceans, and for regional analysis of Earth surface processes at global scale. Because different applications of the classification scheme will focus on different lithologic constituents in mixed units, an ontology-type representation of the scheme that assigns properties to the units in an analyzable manner will be pursued. The OneGeology project is promoting deployment of geologic map services at million scale for all nations. Although initial efforts are commonly simple scanned-map WMS services, the intention is to move towards data-based map services that categorize map units with standard vocabularies to allow use of a common map legend for better visual integration of the maps (e.g. see OneGeology Europe, http://onegeology-europe.brgm.fr/geoportal/viewer.jsp). The current categorization of regional units with a single lithology from the CGI SimpleLithology vocabulary (http://resource.geosciml.org/201202/Vocab2012html/SimpleLithology201012.html) does not capture the lithologic character of such units in a meaningful way. A lithogenetic unit category scheme accessible as a GeoSciML-Portrayal-based OGC Styled Layer Descriptor resource is key to enabling OneGeology (http://oneGeology.org) geologic map services to achieve a high degree of visual harmonization.
Basinwide response of the Atlantic Meridional Overturning Circulation to interannual wind forcing
NASA Astrophysics Data System (ADS)
Zhao, Jian
2017-12-01
An eddy-resolving Ocean general circulation model For the Earth Simulator (OFES) and a simple wind-driven two-layer model are used to investigate the role of momentum fluxes in driving the Atlantic Meridional Overturning Circulation (AMOC) variability throughout the Atlantic basin from 1950 to 2010. Diagnostic analysis of the OFES results suggests that interior baroclinic Rossby waves and coastal topographic waves play essential roles in modulating the AMOC interannual variability. The proposed mechanisms are verified in the context of a simple two-layer model with realistic topography forced only by surface wind. The topographic waves communicate high-latitude anomalies into lower latitudes and account for about 50% of the AMOC interannual variability in the subtropics. In addition, the large-scale Rossby waves excited by wind forcing, together with topographic waves, set up coherent AMOC interannual variability patterns across the tropics and subtropics. The comparisons between the simple model and OFES results suggest that a large fraction of the AMOC interannual variability in the Atlantic basin can be explained by wind-driven dynamics.
Modeling of two-phase porous flow with damage
NASA Astrophysics Data System (ADS)
Cai, Z.; Bercovici, D.
2009-12-01
Two-phase dynamics has been broadly studied in Earth science in convective systems. We investigate the basic physics of compaction with damage theory and present preliminary results for both steady-state and time-dependent transport when melt migrates through a porous medium. In our simple 1-D model, damage plays an important role when we consider the ascent of a melt-rich mixture at constant velocity. Melt segregation becomes more difficult, so that in the steady-state compaction profile the porosity is larger than in simple compaction. Scaling analysis of the compaction equation is performed to predict the behavior of melt segregation with damage. The time-dependent behavior of the compacting system is investigated by looking at solitary wave solutions to the two-phase model. We assume that additional melt is injected into the fractured material as a single pulse of prescribed shape and velocity. Damage allows the pulse to keep moving farther than in simple compaction; therefore more melt can be injected into the two-phase mixture, and future applications such as carbon dioxide injection are proposed.
Lum, Terry Y S; Yan, Elsie C W; Ho, Andy H Y; Shum, Michelle H Y; Wong, Gloria H Y; Lau, Mandy M Y; Wang, Junfang
2016-11-01
The experience and practice of filial piety have evolved in modern Chinese societies, and existing measures fail to capture these important changes. Based on a conceptual analysis of the current literature, 42 items were initially compiled to form a Contemporary Filial Piety Scale (CFPS), and 1,080 individuals from a representative sample in Hong Kong were surveyed. Principal component analysis generated a 16-item three-factor model: Pragmatic Obligations (Factor 1; 10 items), Compassionate Reverence (Factor 2; 4 items), and Family Continuity (Factor 3; 2 items). Confirmatory factor analysis revealed strong factor loadings for Factors 1 and 2, while removing Factor 3 and conceptually duplicated items increased the total variance explained from 58.02% to 60.09% and the internal consistency from .84 to .88. A final 10-item two-factor structure model was adopted with a goodness of fit of 0.95. The CFPS-10 is a data-driven, simple, and efficient instrument with strong psychometric properties for assessing contemporary filial piety. © The Author(s) 2015.
Renormalization scheme dependence of high-order perturbative QCD predictions
NASA Astrophysics Data System (ADS)
Ma, Yang; Wu, Xing-Gang
2018-02-01
Conventionally, one adopts the typical momentum flow of a physical observable as the renormalization scale for its perturbative QCD (pQCD) approximant. This simple treatment leads to renormalization scheme-and-scale ambiguities, because the scheme and scale dependence of the strong coupling and of the perturbative coefficients does not exactly cancel at any fixed order. It is believed that those ambiguities will be softened by including more higher-order terms. In this paper, to show how the renormalization scheme dependence changes when more loop terms are included, we discuss the sensitivity of the pQCD prediction to the scheme parameters by using the scheme-dependent {βm ≥2}-terms. We adopt two four-loop examples, e+e-→hadrons and τ decays into hadrons, for detailed analysis. Our results show that under conventional scale setting, by including more and more loop terms, the scheme dependence of the pQCD prediction cannot be reduced as efficiently as the scale dependence. Thus a proper scale-setting approach should be important for reducing the scheme dependence. We observe that the principle of minimum sensitivity could be such a scale-setting approach, which provides a practical way to achieve an optimal scheme and scale by requiring the pQCD approximant to be independent of "unphysical" theoretical conventions.
Type-curve estimation of statistical heterogeneity
NASA Astrophysics Data System (ADS)
Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica
2004-04-01
The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.
Chen, Ming-Jen; Liu, Ya-Ting; Lin, Chiao-Wen; Ponnusamy, Vinoth Kumar; Jen, Jen-Fon
2013-03-12
This paper describes the development of a novel, simple and efficient in-tube based ultrasound-assisted salt-induced liquid-liquid microextraction (IT-USA-SI-LLME) technique for the rapid determination of triclosan (TCS) in personal care products by high performance liquid chromatography-ultraviolet (HPLC-UV) detection. The IT-USA-SI-LLME method is based on the rapid phase separation of a water-miscible organic solvent from the aqueous phase in the presence of a high concentration of salt (salting-out phenomenon) under ultrasonication. In the present work, an in-house fabricated glass extraction device (an 8-mL glass tube with an inbuilt self-scaled capillary tip) was utilized as the phase separation device for USA-SI-LLME. After the extraction, the upper extractant layer was narrowed into the self-scaled capillary tip by pushing the plunger plug; thus, the collection and measurement of the upper organic solvent layer was simple and convenient. The effects of various parameters on the extraction efficiency were thoroughly evaluated and optimized. Under optimal conditions, detection was linear in the concentration range of 0.4-100 ng mL(-1) with a correlation coefficient of 0.9968. The limit of detection was 0.09 ng mL(-1) and the relative standard deviations ranged between 0.8 and 5.3% (n=5). The applicability of the developed method was demonstrated for the analysis of TCS in different commercial personal care products, and the relative recoveries ranged from 90.4 to 98.5%. The present method was proven to be a simple, sensitive, inexpensive and rapid procedure with low organic-solvent consumption for the analysis of TCS in a variety of commercially available personal care products and cosmetic preparations. Copyright © 2013 Elsevier B.V. All rights reserved.
An optimal modification of a Kalman filter for time scales
NASA Technical Reports Server (NTRS)
Greenhall, C. A.
2003-01-01
The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.
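The paper's specific covariance modification is not reproduced here; as a generic illustration of how an error-covariance tweak changes a Kalman filter's weighting of new data, consider a minimal scalar filter with a covariance inflation factor (the noise values and the factor alpha are hypothetical, not TA(NIST)'s actual modification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clock model: the true phase follows a random walk; we observe it noisily.
q, r = 1e-4, 1e-2        # process and measurement noise variances (hypothetical)
x_est, p = 0.0, 1.0      # state estimate and its error covariance
alpha = 1.2              # covariance inflation factor (illustrative knob only)

x_true = 0.0
for t in range(1000):
    x_true += rng.normal(0.0, np.sqrt(q))
    z = x_true + rng.normal(0.0, np.sqrt(r))

    # Predict: inflating the covariance makes the filter weight new data more,
    # trading long-term smoothing against short-term responsiveness.
    p = alpha * p + q

    # Update
    k = p / (p + r)
    x_est += k * (z - x_est)
    p *= (1.0 - k)

print(f"final estimate {x_est:+.4f}, truth {x_true:+.4f}")
```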
How Darcy's equation is linked to the linear reservoir at catchment scale
NASA Astrophysics Data System (ADS)
Savenije, Hubert H. G.
2017-04-01
In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.
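The two empirical laws being linked can be stated side by side; a minimal formulation in standard notation (hydraulic conductivity k, head h, storage S, and reservoir time scale τ are conventional symbols, not taken verbatim from the paper):

```latex
\begin{align}
q &= -k\,\frac{\mathrm{d}h}{\mathrm{d}x}
  && \text{Darcy: specific discharge driven by the head gradient}\\[4pt]
Q &= \frac{S}{\tau}
  && \text{linear reservoir: outflow proportional to storage}
\end{align}
```

The paper's contribution is the upscaling argument that connects the laboratory-scale conductivity k to the watershed-scale time constant τ through the geometry of the drainage network.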
NASA Astrophysics Data System (ADS)
Mori, Shintaro; Hisakado, Masato
2015-05-01
We propose a finite-size scaling analysis method for binary stochastic processes X(t) in {0,1} based on the second moment correlation length ξ for the autocorrelation function C(t). The purpose is to clarify the critical properties and provide a new data analysis method for information cascades. As a simple model to represent the different behaviors of subjects in information cascade experiments, we assume that X(t) is a mixture of an independent random variable that takes 1 with probability q and a random variable that depends on the ratio z of the variables taking 1 among the recent r variables. We consider two types of the probability f(z) that the latter takes 1: (i) analog [f(z) = z] and (ii) digital [f(z) = θ(z - 1/2)]. We study the universal scaling functions for ξ and the integrated correlation time τ. For finite r, C(t) decays exponentially as a function of t, and there is only one stable renormalization group (RG) fixed point. In the limit r → ∞, where X(t) depends on all the previous variables, C(t) in model (i) obeys a power law and the system becomes scale invariant. In model (ii) with q ≠ 1/2, there are two stable RG fixed points, which correspond to the ordered and disordered phases of the information cascade phase transition, with the critical exponents β = 1 and ν|| = 2.
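The mixture process described above is easy to simulate; a minimal sketch in which the mixing weight between the independent and history-dependent draws is a hypothetical parameter (the abstract does not state it):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(T=5000, r=20, q=0.3, p_indep=0.5, digital=False):
    """Binary chain X(t): with prob. p_indep an independent Bernoulli(q) draw,
    otherwise a draw whose success prob. f(z) depends on the fraction z of
    ones among the last r variables (analog f(z)=z, digital f(z)=theta(z-1/2)).
    The mixing weight p_indep is an illustrative choice, not from the paper."""
    x = list(rng.integers(0, 2, size=r))   # arbitrary initial window
    for _ in range(T):
        z = np.mean(x[-r:])
        f = float(z > 0.5) if digital else z
        p_one = q if rng.random() < p_indep else f
        x.append(1 if rng.random() < p_one else 0)
    return np.array(x[r:])

x = simulate()
# Sample autocorrelation C(t) for a few lags; for finite r it should decay
# exponentially, per the abstract.
xc = x - x.mean()
c = [np.mean(xc[:-t] * xc[t:]) / xc.var() for t in range(1, 50)]
print(np.round(c[:10], 3))
```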
Attribution of regional flood changes based on scaling fingerprints
Merz, Bruno; Viet Dung, Nguyen; Parajka, Juraj; Nester, Thomas; Blöschl, Günter
2016-01-01
Abstract Changes in the river flood regime may be due to atmospheric processes (e.g., increasing precipitation), catchment processes (e.g., soil compaction associated with land use change), and river system processes (e.g., loss of retention volume in the floodplains). This paper proposes a new framework for attributing flood changes to these drivers based on a regional analysis. We exploit the scaling characteristics (i.e., fingerprints) with catchment area of the effects of the drivers on flood changes. The estimation of their relative contributions is framed in Bayesian terms. Analysis of a synthetic, controlled case suggests that the accuracy of the regional attribution increases with increasing number of sites and record lengths, decreases with increasing regional heterogeneity, increases with increasing difference of the scaling fingerprints, and decreases with an increase of their prior uncertainty. The applicability of the framework is illustrated for a case study set in Austria, where positive flood trends have been observed at many sites in the past decades. The individual scaling fingerprints related to the atmospheric, catchment, and river system processes are estimated from rainfall data and simple hydrological modeling. Although the distributions of the contributions are rather wide, the attribution identifies precipitation change as the main driver of flood change in the study region. Overall, it is suggested that the extension from local attribution to a regional framework, including multiple drivers and explicit estimation of uncertainty, could constitute a similar shift in flood change attribution as the extension from local to regional flood frequency analysis. PMID:27609996
Realpe, Alba; Adams, Ann; Wall, Peter; Griffin, Damian; Donovan, Jenny L
2016-08-01
How a randomized controlled trial (RCT) is explained to patients is a key determinant of recruitment to that trial. This study developed and implemented a simple six-step model to fully inform patients and to support them in deciding whether to take part or not. Ninety-two consultations with 60 new patients were recorded and analyzed during a pilot RCT comparing surgical and nonsurgical interventions for hip impingement. Recordings were analyzed using techniques of thematic analysis and focused conversation analysis. Early findings supported the development of a simple six-step model to provide a framework for good recruitment practice. Model steps are as follows: (1) explain the condition, (2) reassure patients about receiving treatment, (3) establish uncertainty, (4) explain the study purpose, (5) give a balanced view of treatments, and (6) explain study procedures. There are also two elements throughout the consultation: (1) responding to patients' concerns and (2) showing confidence. The pilot study was successful, with 70% (n = 60) of patients approached across nine centers agreeing to take part in the RCT, so that the full-scale trial was funded. The six-step model provides a promising framework for successful recruitment to RCTs. Further testing of the model is now required. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
2013-01-01
Background The reliable and valid measurement of attitudes towards condom use is essential to assist efforts to design population-specific interventions aimed at promoting positive attitudes towards, and increased use of, condoms. Although several studies, mostly in the English-speaking western world, have demonstrated the utility of condom attitude scales, very few culturally relevant condom attitude measures have been developed to date. We have developed a scale and evaluated its psychometric properties in a sub-sample of rural-to-urban migrant workers in Bangladesh. Methods This paper reports mostly on the cross-sectional survey components of a mixed-methods sexual health research project in Bangladesh. The survey sample (n = 878) comprised rural-to-urban migrant taxi drivers (n = 437) and restaurant workers (n = 441) in Dhaka (aged 18–35 years). The study also involved focus group sessions with the same populations to establish the content validity and cultural equivalency of the scale. The current scale was administered with a large sexual health survey questionnaire and consisted of 10 items. Quantitative and qualitative data were assessed with statistical and thematic analysis, respectively, and then presented. Results The participants found the scale simple and easy to understand and use. The internal consistency (α) of the scale was 0.89 with high construct validity (the first component accounted for about 52% of the variance and the second component for about 20% of the total variance, with an eigenvalue for both factors greater than one). The test-retest reliability (repeatability) was also found satisfactory, with high inter-item correlations (the majority of the intra-class correlation coefficient values were above 2 and were significant for all items on the scale, p < 0.001). The 2-week repeatability assessed by the Pearson product–moment correlation coefficient was 0.75. Conclusion The results indicated that the Bengali version of the scale has good metric properties for assessing attitudes towards condom use. The validated scale is a short, simple and reliable instrument for measuring attitudes towards condom use in vulnerable populations like the current study sample. This culturally-customized scale can be used to monitor the progress of condom uptake and promotion activities in Bangladesh or similar settings. PMID:23510383
Roy, Tapash; Anderson, Claire; Evans, Catrin; Rahman, Mohammad Shafiqur; Rahman, Mosiur
2013-03-19
The reliable and valid measurement of attitudes towards condom use is essential to assist efforts to design population-specific interventions aimed at promoting positive attitudes towards, and increased use of, condoms. Although several studies, mostly in the English-speaking western world, have demonstrated the utility of condom attitude scales, very few culturally relevant condom attitude measures have been developed to date. We have developed a scale and evaluated its psychometric properties in a sub-sample of rural-to-urban migrant workers in Bangladesh. This paper reports mostly on the cross-sectional survey components of a mixed-methods sexual health research project in Bangladesh. The survey sample (n = 878) comprised rural-to-urban migrant taxi drivers (n = 437) and restaurant workers (n = 441) in Dhaka (aged 18-35 years). The study also involved focus group sessions with the same populations to establish the content validity and cultural equivalency of the scale. The current scale was administered with a large sexual health survey questionnaire and consisted of 10 items. Quantitative and qualitative data were assessed with statistical and thematic analysis, respectively, and then presented. The participants found the scale simple and easy to understand and use. The internal consistency (α) of the scale was 0.89 with high construct validity (the first component accounted for about 52% of the variance and the second component for about 20% of the total variance, with an eigenvalue for both factors greater than one). The test-retest reliability (repeatability) was also found satisfactory, with high inter-item correlations (the majority of the intra-class correlation coefficient values were above 2 and were significant for all items on the scale, p < 0.001). The 2-week repeatability assessed by the Pearson product-moment correlation coefficient was 0.75. The results indicated that the Bengali version of the scale has good metric properties for assessing attitudes towards condom use. The validated scale is a short, simple and reliable instrument for measuring attitudes towards condom use in vulnerable populations like the current study sample. This culturally-customized scale can be used to monitor the progress of condom uptake and promotion activities in Bangladesh or similar settings.
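The internal-consistency figure reported above (α = 0.89) comes from a standard computation; a minimal sketch on synthetic 10-item Likert data (all numbers hypothetical):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# Hypothetical 10-item Likert responses (1-5) for 8 respondents, driven by a
# shared latent attitude plus noise so the items correlate.
rng = np.random.default_rng(7)
latent = rng.normal(size=(8, 1))
scores = np.clip(np.round(3 + 1.2 * latent + rng.normal(scale=0.7, size=(8, 10))), 1, 5)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```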
Anomalous scaling of stochastic processes and the Moses effect
NASA Astrophysics Data System (ADS)
Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.
2017-04-01
The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t1/2. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
Anomalous scaling of stochastic processes and the Moses effect.
Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H
2017-04-01
The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
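The overall scaling the abstract refers to can be estimated from how displacements grow with the time window; a minimal sketch of this aggregate Hurst estimate only. Separating the Joseph, Noah, and Moses contributions requires the additional increment statistics described in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

def hurst_from_scaling(x, windows=(8, 16, 32, 64, 128, 256)):
    """Estimate the overall Hurst exponent H from how the standard deviation
    of the process displacement grows with the time window: std ~ t^H."""
    stds = []
    for w in windows:
        n = len(x) // w
        segs = x[: n * w].reshape(n, w)
        stds.append(np.std(segs[:, -1] - segs[:, 0]))  # window displacements
    slope, _ = np.polyfit(np.log(windows), np.log(stds), 1)
    return slope

# Ordinary Brownian motion should give H close to 1/2.
bm = np.cumsum(rng.normal(size=100_000))
print(f"H ≈ {hurst_from_scaling(bm):.2f}")
```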
Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire
Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra
2018-05-29
Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Methodological and cross-sectional study. A simple random sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach's α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine the factor structure for construct concept validity. The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients' acceptance of chronic pain.
Validity and Reliability of the Turkish Chronic Pain Acceptance Questionnaire
Akmaz, Hazel Ekin; Uyar, Meltem; Kuzeyli Yıldırım, Yasemin; Akın Korhan, Esra
2018-01-01
Background: Pain acceptance is the process of giving up the struggle with pain and learning to live a worthwhile life despite it. In assessing patients with chronic pain in Turkey, making a diagnosis and tracking the effectiveness of treatment is done with scales that have been translated into Turkish. However, there is as yet no valid and reliable scale in Turkish to assess the acceptance of pain. Aims: To validate a Turkish version of the Chronic Pain Acceptance Questionnaire developed by McCracken and colleagues. Study Design: Methodological and cross-sectional study. Methods: A simple random sampling method was used in selecting the study sample. The sample was composed of 201 patients, more than 10 times the number of items examined for validity and reliability in the study, which totaled 20. A patient identification form, the Chronic Pain Acceptance Questionnaire, and the Brief Pain Inventory were used to collect data. Data were collected by face-to-face interviews. In the validity testing, the content validity index was used to evaluate linguistic equivalence, content validity, construct validity, and expert views. In reliability testing of the scale, Cronbach's α coefficient was calculated, and item analysis and split-test reliability methods were used. Principal component analysis and varimax rotation were used in factor analysis and to examine the factor structure for construct concept validity. Results: The item analysis established that the scale, all items, and item-total correlations were satisfactory. The mean total score of the scale was 21.78. The internal consistency coefficient was 0.94, and the correlation between the two halves of the scale was 0.89. Conclusion: The Chronic Pain Acceptance Questionnaire, which is intended to be used in Turkey upon confirmation of its validity and reliability, is an evaluation instrument with sufficient validity and reliability, and it can be reliably used to examine patients' acceptance of chronic pain. PMID:29843496
Khanjari, Sedigheh; Oskouie, Fatemeh; Langius-Eklöf, Ann
2012-02-01
To translate and test the reliability and validity of the Persian version of the Caregiver Quality of Life Index-Cancer scale. Research across many countries has assessed the quality of life of cancer patients, but few attempts have been made to measure the quality of life of family caregivers of patients with breast cancer. The Caregiver Quality of Life Index-Cancer scale was developed for this purpose, but until now, it had not been translated into or tested in the Persian language. Methodological research design. After standard translation, the 35-item Caregiver Quality of Life Index-Cancer scale was administered to 166 Iranian family caregivers of patients with breast cancer. A confirmatory factor analysis was carried out using LISREL to test the scale's construct validity. Further, the internal consistency and convergent validity of the instrument were tested. For convergent validity, four instruments were used in the study: the sense of coherence scale, the spirituality perspective scale, the health index and the brief religious coping scale. The confirmatory factor analysis resulted in the same four-factor structure as the original, though with somewhat different item loadings. The Persian version of the Caregiver Quality of Life Index-Cancer scale had satisfactory internal consistency (0·72-0·90). Tests of convergent validity showed that all hypotheses were confirmed. A hierarchical multiple regression analysis additionally confirmed the convergent validity between the total Caregiver Quality of Life Index-Cancer score and sense of coherence (β = 0·34), negative religious coping (β = -0·21), education (β = 0·24) and the more severe stage of breast cancer (β = 0·23), in total explaining 41% of the variance. The Persian version of the Caregiver Quality of Life Index-Cancer scale could be a reliable and valid measure in Iranian family caregivers of patients with breast cancer. It is simple to administer and will help nurses to identify the nursing needs of family caregivers. © 2011 Blackwell Publishing Ltd.
Sub-core permeability and relative permeability characterization with Positron Emission Tomography
NASA Astrophysics Data System (ADS)
Zahasky, C.; Benson, S. M.
2017-12-01
This study utilizes preclinical micro-Positron Emission Tomography (PET) to image and quantify the transport behavior of pulses of a conservative aqueous radiotracer injected during single and multiphase flow experiments in a Berea sandstone core with axial parallel bedding heterogeneity. The core is discretized into streamtubes, and using the micro-PET data, expressions are derived from spatial moment analysis for calculating sub-core scale tracer flux and pore water velocity. Using the flux and velocity data, it is then possible to calculate porosity and saturation from volumetric flux balance, and calculate permeability and water relative permeability from Darcy's law. Full 3D simulations are then constructed based on this core characterization. Simulation results are compared with experimental results in order to test the assumptions of the simple streamtube model. Errors and limitations of this analysis will be discussed. These new methods of imaging and sub-core permeability and relative permeability measurements enable experimental quantification of transport behavior across scales.
Replica and extreme-value analysis of the Jarzynski free-energy estimator
NASA Astrophysics Data System (ADS)
Palassini, Matteo; Ritort, Felix
2008-03-01
We analyze the Jarzynski estimator of free-energy differences from nonequilibrium work measurements. By a simple mapping onto Derrida's Random Energy Model, we obtain a scaling limit for the expectation of the bias of the estimator. We then derive analytical approximations in three different regimes of the scaling parameter x = log(N)/W, where N is the number of measurements and W the mean dissipated work. Our approach is valid for a generic distribution of the dissipated work, and is based on a replica symmetry breaking scheme for x >> 1, the asymptotic theory of extreme value statistics for x << 1, and a direct approach for x near one. The combination of the three analytic approximations describes well Monte Carlo data for the expectation value of the estimator, for a wide range of values of N, from N=1 to large N, and for different work distributions. Based on these results, we introduce improved free-energy estimators and discuss the application to the analysis of experimental data.
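The estimator under analysis is simple to state and to probe numerically; a minimal Monte Carlo sketch with Gaussian work distributions chosen so that the true free-energy difference is zero (all parameter values illustrative):

```python
import numpy as np

rng = np.random.default_rng(5)

# Jarzynski estimator: Delta F = -ln< exp(-W) >, in units of kT, estimated
# from N work measurements. For Gaussian work with mean mu and variance
# 2*mu, the true Delta F is exactly 0, so the printed means show the bias.
mu = 4.0                         # mean dissipated work, in kT (hypothetical)
for n in (10, 100, 10_000):
    w = rng.normal(mu, np.sqrt(2 * mu), size=(1000, n))  # 1000 replica datasets
    est = -np.log(np.mean(np.exp(-w), axis=1))
    # The bias shrinks only slowly as N grows, as analyzed in the abstract.
    print(f"N={n:>6}: mean estimate {est.mean():+.3f} kT (true 0)")
```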
Simple mass production of zinc oxide nanostructures via low-temperature hydrothermal synthesis
NASA Astrophysics Data System (ADS)
Ghasaban, Samaneh; Atai, Mohammad; Imani, Mohammad
2017-03-01
The specific properties of zinc oxide (ZnO) nanoparticles have attracted much attention within the scientific community as a useful material for biomedical applications. Hydrothermal synthesis is known as a useful method to produce nanostructures with controlled particle size and morphology; however, scaling up the reaction is still a challenging task. In this research, large-scale hydrothermal synthesis of ZnO nanostructures (60 g) was performed in a 5 l stainless steel autoclave by reaction between anionic (ammonia or sodium hydroxide) and cationic (zinc acetate dihydrate) precursors at low temperature. The hydrothermal reaction temperature and time were decreased to 115 °C and 2 or 6 h. In batch repetitions, the same morphologies (plate- and needle-like) with reproducible particle size were obtained. The nanostructures formed were analyzed by powder x-ray diffraction, Fourier-transform infrared spectroscopy, energy dispersive x-ray analysis, scanning electron microscopy and BET analysis. The nanostructures were antibacterially active against Staphylococcus aureus.
Trends in Department of Defense hospital efficiency.
Ozcan, Y A; Bannick, R R
1994-04-01
This study employs a simple cross-sectional design using longitudinal data to explore the underlying factors associated with differences in hospital technical efficiency, using data envelopment analysis (DEA), in the Department of Defense (DOD) sector across three service components: the Army, Air Force and Navy. The results suggest that the services do not differ significantly in hospital efficiency. Nor does hospital efficiency appear to differ over time. With respect to the efficient use of input resources, the services experienced a general decline in excessive usage of various inputs over the three years. Analysis of returns to scale captures opportunities for planners to change the relative mix of output and input slacks to increase a hospital's efficiency. That is, policy makers would get more immediate "bang per buck" by emphasizing improvements in the efficiency of hospitals with higher returns to scale. Findings also suggest a significant degree of comparability between the DEA measure and those measures often used to indicate efficiency.
Williams, Alex H; Kim, Tony Hyun; Wang, Forea; Vyas, Saurabh; Ryu, Stephen I; Shenoy, Krishna V; Schnitzer, Mark; Kolda, Tamara G; Ganguli, Surya
2018-06-27
Perceptions, thoughts, and actions unfold over millisecond timescales, while learned behaviors can require many days to mature. While recent experimental advances enable large-scale and long-term neural recordings with high temporal fidelity, it remains a formidable challenge to extract unbiased and interpretable descriptions of how rapid single-trial circuit dynamics change slowly over many trials to mediate learning. We demonstrate a simple tensor component analysis (TCA) can meet this challenge by extracting three interconnected, low-dimensional descriptions of neural data: neuron factors, reflecting cell assemblies; temporal factors, reflecting rapid circuit dynamics mediating perceptions, thoughts, and actions within each trial; and trial factors, describing both long-term learning and trial-to-trial changes in cognitive state. We demonstrate the broad applicability of TCA by revealing insights into diverse datasets derived from artificial neural networks, large-scale calcium imaging of rodent prefrontal cortex during maze navigation, and multielectrode recordings of macaque motor cortex during brain machine interface learning. Copyright © 2018 Elsevier Inc. All rights reserved.
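A CP (canonical polyadic) decomposition of a neurons × time × trials array is the core computation of TCA; a minimal sketch on synthetic data, assuming the tensorly library:

```python
# Sketch of tensor component analysis (CP decomposition) on a neurons x time x
# trials array, assuming the tensorly library; data here are synthetic.
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(2)

# Rank-2 ground truth: two "assemblies" with distinct within-trial dynamics
# and across-trial (learning-like) trends, plus noise.
neurons = rng.random((50, 2))
time = np.stack([np.sin(np.linspace(0, np.pi, 100)),
                 np.linspace(0, 1, 100)], axis=1)
trials = np.stack([np.linspace(1, 0.2, 30),
                   np.linspace(0.2, 1, 30)], axis=1)
data = np.einsum('ir,tr,kr->itk', neurons, time, trials)
data += 0.05 * rng.standard_normal(data.shape)

# Fit a rank-2 CP model: factors[0]=neuron, factors[1]=temporal, factors[2]=trial.
weights, factors = parafac(tl.tensor(data), rank=2, random_state=0)
print([f.shape for f in factors])   # [(50, 2), (100, 2), (30, 2)]
```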
Guan, Ng Chong; Isa, Saramah Mohammed; Hashim, Aili Hanim; Pillai, Subash Kumar; Harbajan Singh, Manveen Kaur
2015-03-01
The use of the Internet has been increasing dramatically over the past decade in Malaysia. Excessive usage of the Internet has led to a phenomenon called Internet addiction. There is a need for a reliable, valid, and simple-to-use scale to measure Internet addiction in the Malaysian population for clinical practice and research purposes. The aim of this study was to validate the Malay version of the Internet Addiction Test, using a sample of 162 medical students. The instrument displayed good internal consistency (Cronbach's α = .91), parallel reliability (intraclass coefficient = .88, P < .001), and concurrent validity with the Compulsive Internet Use Scale (Pearson's correlation = .84, P < .001). Receiver operating characteristic analysis showed that 43 was the optimal cutoff score to discriminate students with and without Internet dependence. Principal component analysis with varimax rotation identified a 5-factor model. The Malay version of the Internet Addiction Test appeared to be a valid instrument for assessing Internet addiction in Malaysian university students. © 2012 APJPH.
A Monte Carlo–Based Bayesian Approach for Measuring Agreement in a Qualitative Scale
Pérez Sánchez, Carlos Javier
2014-01-01
Agreement analysis has been an active research area whose techniques have been widely applied in psychology and other fields. However, statistical agreement among raters has been mainly considered from a classical statistics point of view. Bayesian methodology is a viable alternative that allows the inclusion of subjective initial information coming from expert opinions, personal judgments, or historical data. A Bayesian approach is proposed by providing a unified Monte Carlo–based framework to estimate all types of measures of agreement in a qualitative scale of response. The approach is conceptually simple and it has a low computational cost. Both informative and non-informative scenarios are considered. In case no initial information is available, the results are in line with the classical methodology, but providing more information on the measures of agreement. For the informative case, some guidelines are presented to elicitate the prior distribution. The approach has been applied to two applications related to schizophrenia diagnosis and sensory analysis. PMID:29881002
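For a concrete instance of the approach, one can place a Dirichlet prior on the joint rating probabilities of two raters and summarize the posterior of an agreement measure by Monte Carlo; a minimal sketch for Cohen's kappa on a 2-category scale (counts and prior are hypothetical, not the paper's own example):

```python
# Monte Carlo sketch of Bayesian agreement: Dirichlet prior on the 2x2 table
# of joint rating probabilities, posterior summary of Cohen's kappa.
import numpy as np

rng = np.random.default_rng(4)

counts = np.array([[40, 5],
                   [8, 47]])          # observed rater1 x rater2 table
prior = np.ones(4)                    # uniform (non-informative) Dirichlet prior

draws = rng.dirichlet(prior + counts.ravel(), size=20_000).reshape(-1, 2, 2)
po = draws[:, 0, 0] + draws[:, 1, 1]                       # observed agreement
pe = (draws.sum(axis=2) * draws.sum(axis=1)).sum(axis=1)   # chance agreement
kappa = (po - pe) / (1 - pe)

print(f"posterior mean kappa = {kappa.mean():.3f}, "
      f"95% CI = ({np.quantile(kappa, 0.025):.3f}, {np.quantile(kappa, 0.975):.3f})")
```

Replacing the uniform prior with a concentrated one is how expert opinion or historical data would enter in the informative scenario.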
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jadaan, O.M.; Tressler, R.E.
1993-04-01
The methodology to predict the lifetime of sintered α-silicon carbide (SASC) tubes subjected to slow crack growth (SCG) conditions involved the experimental determination of the SCG parameters of that material and the scaling analysis to project the stress rupture data from small specimens to large components. Dynamic fatigue testing, taking into account the effect of threshold stress intensity factor, of O-ring and compressed C-ring specimens was used to obtain the SCG parameters. These SCG parameters were in excellent agreement with those published in the literature and extracted from stress rupture tests of tensile and bend specimens. Two methods were used to predict the lifetimes of internally heated and pressurized SASC tubes. The first is a fracture mechanics approach that is well known in the literature. The second method used a scaling analysis in which the stress rupture distribution (lifetime) of any specimen configuration can be predicted from stress rupture data of another.
NASA Astrophysics Data System (ADS)
Ohmori, Shuichi; Narabayashi, Tadashi; Mori, Michitsugu
A steam injector (SI) is a simple, compact and passive pump that also acts as a high-performance direct-contact compact heater. This enables the SI to serve as a direct-contact feed-water heater that heats feed-water using steam extracted from the turbine. Our technology development aims to significantly simplify equipment and reduce physical quantities by applying a "high-efficiency SI", which is applicable to a wide range of operating regimes beyond the performance and applicable range of existing SIs and enables unprecedented multistage and parallel operation, to the low-pressure feed-water heaters and the emergency core cooling system of nuclear power plants, as well as to achieve high inherent safety to prevent severe accidents by keeping the core covered with water (a severe accident-free concept). This paper describes the results of the scale model test and the transient analysis of an SI-driven passive core injection system (PCIS).
NASA Astrophysics Data System (ADS)
Mercer, Gary J.
This quantitative study examined the relationship between math anxiety and physics performance among secondary students in an inquiry-based, constructivist classroom. The Revised Math Anxiety Rating Scale was used to evaluate math anxiety levels. The results were then compared to performance on a standardized physics final examination. A simple correlation was performed, followed by a multivariate regression analysis to examine effects based on gender and prior math background. The correlation showed statistical significance between math anxiety and physics performance. The regression analysis showed statistical significance for math anxiety, physics performance, and prior math background, but did not show statistical significance for math anxiety, physics performance, and gender.
Multiple scaling behaviour and nonlinear traits in music scores
Larralde, Hernán; Martínez-Mekler, Gustavo; Müller, Markus
2017-01-01
We present a statistical analysis of music scores from different composers using detrended fluctuation analysis (DFA). We find different fluctuation profiles that correspond to distinct autocorrelation structures of the musical pieces. Further, we reveal evidence for the presence of nonlinear autocorrelations by estimating the DFA of the magnitude series, a result validated by a corresponding study of appropriate surrogate data. The amount and the character of nonlinear correlations vary from one composer to another. Finally, we performed a simple experiment in order to evaluate the pleasantness of the musical surrogate pieces in comparison with the original music and find that nonlinear correlations could play an important role in the aesthetic perception of a musical piece. PMID:29308256
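DFA itself is compact enough to sketch; a minimal first-order implementation, demonstrated on white noise (the nonlinearity test described above would apply the same routine to the magnitude series of increments):

```python
import numpy as np

def dfa(x, scales, order=1):
    """Detrended fluctuation analysis: returns the fluctuation F(s) for each
    window size s; the slope of log F(s) vs log s is the DFA exponent."""
    y = np.cumsum(x - np.mean(x))                    # integrated profile
    fs = []
    for s in scales:
        n = len(y) // s
        segs = y[: n * s].reshape(n, s)
        basis = np.vander(np.arange(s), order + 1)   # polynomial design matrix
        coeffs, *_ = np.linalg.lstsq(basis, segs.T, rcond=None)
        resid = segs.T - basis @ coeffs              # detrended windows
        fs.append(np.sqrt(np.mean(resid ** 2)))
    return np.array(fs)

rng = np.random.default_rng(6)
x = rng.standard_normal(4096)                  # white noise: exponent near 0.5
scales = np.array([16, 32, 64, 128, 256])
alpha, _ = np.polyfit(np.log(scales), np.log(dfa(x, scales)), 1)
print(f"DFA exponent ≈ {alpha:.2f}")
```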
Wolff, Hans-Georg; Preising, Katja
2005-02-01
To ease the interpretation of higher order factor analysis, the direct relationships between variables and higher order factors may be calculated by the Schmid-Leiman solution (SLS; Schmid & Leiman, 1957). This simple transformation of higher order factor analysis orthogonalizes first-order and higher order factors and thereby allows the interpretation of the relative impact of factor levels on variables. The Schmid-Leiman solution may also be used to facilitate theorizing and scale development. The rationale for the procedure is presented, supplemented by syntax codes for SPSS and SAS, since the transformation is not part of most statistical programs. Syntax codes may also be downloaded from www.psychonomic.org/archive/.
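Since the transformation is a few matrix operations, a minimal numpy sketch may be useful alongside the SPSS/SAS syntax; the loadings below are hypothetical, with one general factor and orthogonal residualized group factors:

```python
# Minimal sketch of the Schmid-Leiman transformation: given first-order factor
# loadings L1 (variables x factors) and higher-order loadings L2 (factors x
# general factor), orthogonalize into direct general-factor loadings and
# residualized group-factor loadings. Numbers are hypothetical.
import numpy as np

L1 = np.array([[0.7, 0.0],      # variables loading on two first-order factors
               [0.6, 0.0],
               [0.0, 0.8],
               [0.0, 0.5]])
L2 = np.array([[0.6],           # first-order factors loading on one general factor
               [0.7]])

general = L1 @ L2                          # variable loadings on the general factor
u = np.sqrt(1.0 - (L2 ** 2).sum(axis=1))   # residual std. dev. of each factor
group = L1 * u                             # residualized group-factor loadings

print("general:\n", general.round(3))
print("group:\n", group.round(3))
```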
Multiple scaling behaviour and nonlinear traits in music scores
NASA Astrophysics Data System (ADS)
González-Espinoza, Alfredo; Larralde, Hernán; Martínez-Mekler, Gustavo; Müller, Markus
2017-12-01
We present a statistical analysis of music scores from different composers using detrended fluctuation analysis (DFA). We find different fluctuation profiles that correspond to distinct autocorrelation structures of the musical pieces. Further, we reveal evidence for the presence of nonlinear autocorrelations by estimating the DFA of the magnitude series, a result validated by a corresponding study of appropriate surrogate data. The amount and the character of nonlinear correlations vary from one composer to another. Finally, we performed a simple experiment in order to evaluate the pleasantness of the musical surrogate pieces in comparison with the original music and find that nonlinear correlations could play an important role in the aesthetic perception of a musical piece.
Tummala, Seshu B; Junne, Stefan G; Paredes, Carlos J; Papoutsakis, Eleftherios T
2003-12-30
Antisense RNA (asRNA) downregulation alters protein expression without changing the regulation of gene expression. Downregulation of primary metabolic enzymes, possibly combined with overexpression of other metabolic enzymes, may result in profound changes in product formation, and this may alter the large-scale transcriptional program of the cells. DNA-array based large-scale transcriptional analysis has the potential to elucidate factors that control cellular fluxes even in the absence of proteome data. These themes are explored in the study of the large-scale transcriptional programs and the in vivo primary-metabolism fluxes of several related recombinant C. acetobutylicum strains: C. acetobutylicum ATCC 824(pSOS95del) (plasmid control; produces high levels of butanol and acetone), 824(pCTFB1AS) (expresses antisense RNA against CoA transferase (ctfb1-asRNA); produces very low levels of butanol and acetone), and 824(pAADB1) (expresses ctfb1-asRNA and the alcohol/aldehyde dehydrogenase gene (aad); produces high alcohol and low acetone levels). DNA-array based transcriptional analysis revealed that the large changes in product concentrations (and notably butanol concentration) due to ctfb1-asRNA expression, alone and in combination with aad overexpression, resulted in dramatic changes of the cellular transcriptome. Cluster analysis and gene expression patterns of established and putative operons involved in stress response, motility, sporulation, and fatty-acid biosynthesis indicate that these simple genetic changes dramatically alter the cellular programs of C. acetobutylicum. Comparison of gene expression and flux analysis data may point to possible flux-controlling steps and suggest unknown regulatory mechanisms. Copyright 2003 Wiley Periodicals, Inc.
Santillán, Moisés
2003-07-21
A simple model of an oxygen-exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen-consuming system. It consists of a pipeline that interconnects the oxygen-consuming system and the reservoir, and of a fluid, the active oxygen-transporting element, moving through the pipeline. The network's optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen-transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen-exchanging network is optimized, the energy-converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant, as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result for the allometric scaling properties observed elsewhere in living beings are discussed.
Segmentation-based wavelet transform for still-image compression
NASA Astrophysics Data System (ADS)
Mozelle, Gerard; Seghier, Abdellatif; Preteux, Francoise J.
1996-10-01
In order to simultaneously address the content-based scalability functionalities required by MPEG-4, we introduce a segmentation-based wavelet transform (SBWT). SBWT takes into account both the mathematical properties of multiresolution analysis and the flexibility of region-based approaches for image compression. The associated methodology has two stages: 1) image segmentation into convex and polygonal regions; 2) 2D wavelet transform of the signal corresponding to each region. In this paper, we have mathematically studied a method for constructing a multiresolution analysis (V_j^Ω)_{j∈N} adapted to a polygonal region Ω, which provides an adaptive region-based filtering. The explicit construction of scaling functions, pre-wavelets and orthonormal wavelet bases defined on a polygon is carried out, and a key property of the scaling functions is established by using the theory of Toeplitz operators. The corresponding expression can be interpreted as a localization property, which allows defining interior and boundary scaling functions. Concerning orthonormal wavelets and pre-wavelets, a similar expansion is obtained by taking advantage of the properties of the orthogonal projector P_{(V_j^Ω)⊥} from the space V_{j+1}^Ω onto the space (V_j^Ω)⊥. Finally, the mathematical results provide a simple and fast algorithm adapted to polygonal regions.
Soares, Ruben R G; Azevedo, Ana M; Van Alstine, James M; Aires-Barros, M Raquel
2015-08-01
For half a century aqueous two-phase systems (ATPSs) have been applied for the extraction and purification of biomolecules. In spite of their simplicity, selectivity, and relatively low cost they have not been significantly employed for industrial scale bioprocessing. Recently their ability to be readily scaled and interface easily in single-use, flexible biomanufacturing has led to industrial re-evaluation of ATPSs. The purpose of this review is to perform a SWOT analysis that includes a discussion of: (i) strengths of ATPS partitioning as an effective and simple platform for biomolecule purification; (ii) weaknesses of ATPS partitioning in regard to intrinsic problems and possible solutions; (iii) opportunities related to biotechnological challenges that ATPS partitioning may solve; and (iv) threats related to alternative techniques that may compete with ATPS in performance, economic benefits, scale up and reliability. This approach provides insight into the current status of ATPS as a bioprocessing technique and it can be concluded that most of the perceived weakness towards industrial implementation have now been largely overcome, thus paving the way for opportunities in fermentation feed clarification, integration in multi-stage operations and in single-step purification processes. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Zhou, Jing; Wu, Xiao-ming; Zeng, Wei-jie
2015-12-01
Sleep apnea syndrome (SAS) is prevalent, and recently many studies have focused on using simple and efficient methods for SAS detection instead of polysomnography. However, not much work has been done on using the nonlinear behavior of electroencephalogram (EEG) signals. The purpose of this study is to find a novel and simpler method for detecting apnea patients and to quantify the nonlinear characteristics of sleep apnea. 30 min EEG scaling exponents that quantify power-law correlations were computed using detrended fluctuation analysis (DFA) and compared between six SAS patients and six healthy subjects during sleep. The mean scaling exponents were calculated every 30 s, and 360 control values and 360 apnea values were obtained. These values were compared between the two groups, and a support vector machine (SVM) was used to classify apnea patients. A significant difference was found between the EEG scaling exponents of the two groups (p < 0.001). The SVM achieved a high and consistent recognition rate: the average classification accuracy reached 95.1%, corresponding to a sensitivity of 93.2% and a specificity of 98.6%. DFA of EEG is an efficient and practicable method and is clinically helpful in the diagnosis of sleep apnea.
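The classification step is standard; a minimal sketch using scikit-learn, with synthetic scaling exponents standing in for the DFA values from the 30 s epochs (the group means below are hypothetical, not the paper's):

```python
# Scaling exponents as features for an SVM, assuming scikit-learn.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(8)

# 360 control and 360 apnea epochs, one DFA exponent each (hypothetical means).
controls = rng.normal(0.85, 0.05, size=(360, 1))
apneas = rng.normal(1.05, 0.05, size=(360, 1))
X = np.vstack([controls, apneas])
y = np.array([0] * 360 + [1] * 360)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")
```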
Scale problems in assessment of hydrogeological parameters of groundwater flow models
NASA Astrophysics Data System (ADS)
Nawalany, Marek; Sinicyn, Grzegorz
2015-09-01
An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity. It is a brief summary of the conventional upscaling approach, with some attention paid to recently emerged approaches; the focus is on essential aspects, which may be an advantage compared with the occasionally very extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of the hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of the groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - the scale of pores, meso-scale - the scale of a laboratory sample, macro-scale - the scale of typical blocks in numerical models of groundwater flow, local-scale - the scale of an aquifer/aquitard and regional-scale - the scale of a series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim of justifying physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. The latest results in downscaling from block-scale to sample-scale are also briefly referred to.
Small-Scale and Low Cost Electrodes for "Standard" Reduction Potential Measurements
ERIC Educational Resources Information Center
Eggen, Per-Odd; Kvittingen, Lise
2007-01-01
The construction of three simple and inexpensive electrodes (hydrogen, chlorine, and copper) is described. This simple method encourages students to construct their own electrodes and helps them better understand precipitation and other electrochemistry concepts.
Flight Research into Simple Adaptive Control on the NASA FAST Aircraft
NASA Technical Reports Server (NTRS)
Hanson, Curtis E.
2011-01-01
A series of simple adaptive controllers with varying levels of complexity were designed, implemented and flight tested on the NASA Full-Scale Advanced Systems Testbed (FAST) aircraft. Lessons learned from the development and flight testing are presented.
Escobar, Raúl G; Lucero, Nayadet; Solares, Carmen; Espinoza, Victoria; Moscoso, Odalie; Olguín, Polín; Muñoz, Karin T; Rosas, Ricardo
2016-08-16
Duchenne muscular dystrophy (DMD) and spinal muscular atrophy (SMA) cause significant disability and progressive functional impairment. Readily available instruments that assess functionality, especially in advanced stages of the disease, are required to monitor the progress of the disease and the impact of therapeutic interventions. The aim was to describe the development of a scale to evaluate upper limb (UL) function in patients with DMD and SMA, and to describe its validation process, which includes self-training for evaluators. The development of the scale included a review of published scales, an exploratory application of a pilot scale in healthy children and those with DMD, self-training of evaluators in applying the scale using a handbook and video tutorial, and assessment of a group of children with DMD and SMA using the final scale. Reliability was assessed using Cronbach alpha and Kendall concordance and with intra- and inter-rater test-retest, and validity with concordance and factorial analysis. A high level of reliability was observed, with high internal consistency (Cronbach α=0.97), and inter-rater (Kendall W=0.96) and intra-rater concordance (r=0.97 to 0.99). Validity was demonstrated by the absence of significant differences between the results of different evaluators and an expert evaluator (F=0.023, P>.5), and by the factor analysis, which showed that four factors account for 85.44% of the total variance. This scale is a reliable and valid tool for assessing UL functionality in children with DMD and SMA. It is also easily implementable, thanks to the possibility of self-training and the use of simple and inexpensive materials. Copyright © 2016 Sociedad Chilena de Pediatría. Publicado por Elsevier España, S.L.U. All rights reserved.
Hierarchical random walks in trace fossils and the origin of optimal search behavior
Sims, David W.; Reynolds, Andrew M.; Humphries, Nicolas E.; Southall, Emily J.; Wearmouth, Victoria J.; Metcalfe, Brett; Twitchett, Richard J.
2014-01-01
Efficient searching is crucial for timely location of food and other resources. Recent studies show that diverse living animals use a theoretically optimal scale-free random search for sparse resources known as a Lévy walk, but little is known of the origins and evolution of foraging behavior and the search strategies of extinct organisms. Here, using simulations of self-avoiding trace fossil trails, we show that randomly introduced strophotaxis (U-turns)—initiated by obstructions such as self-trail avoidance or innate cueing—leads to random looping patterns with clustering across increasing scales that is consistent with the presence of Lévy walks. This predicts that optimal Lévy searches may emerge from simple behaviors observed in fossil trails. We then analyzed fossilized trails of benthic marine organisms by using a novel path analysis technique and find the first evidence, to our knowledge, of Lévy-like search strategies in extinct animals. Our results show that simple search behaviors of extinct animals in heterogeneous environments give rise to hierarchically nested Brownian walk clusters that converge to optimal Lévy patterns. Primary productivity collapse and large-scale food scarcity characterizing mass extinctions evident in the fossil record may have triggered adaptation of optimal Lévy-like searches. The findings suggest that Lévy-like behavior has been used by foragers since at least the Eocene but may have a more ancient origin, which might explain recent widespread observations of such patterns among modern taxa. PMID:25024221
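The Lévy walk at the centre of this work is easy to simulate. The sketch below (Python; a generic power-law step-length walker, not the authors' self-avoiding trail model, with all parameters illustrative) contrasts heavy-tailed and fixed-length steps:

```python
import numpy as np

rng = np.random.default_rng(0)

def walk(n_steps, mu=2.0, l_min=1.0, levy=True):
    """Return the 2-D path of a random walker.

    For a Levy walk, step lengths follow a power law p(l) ~ l**(-mu)
    (sampled by inverse transform); for a Brownian-style walk all
    steps have the same length. Headings are uniformly random.
    """
    if levy:
        u = rng.random(n_steps)
        lengths = l_min * u ** (-1.0 / (mu - 1.0))  # heavy-tailed steps
    else:
        lengths = np.full(n_steps, l_min)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_steps)
    return np.cumsum(lengths * np.cos(angles)), np.cumsum(lengths * np.sin(angles))

x, y = walk(10_000, levy=True)
# net displacement of a Levy walk grows much faster than Brownian sqrt(n)
print(np.hypot(x[-1], y[-1]))
```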
On simulating flow with multiple time scales using a method of averages
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margolin, L.G.
1997-12-31
The author presents a new computational method based on averaging to efficiently simulate certain systems with multiple time scales. He first develops the method in a simple one-dimensional setting and employs linear stability analysis to demonstrate numerical stability. He then extends the method to multidimensional fluid flow. His method of averages does not depend on explicit splitting of the equations nor on modal decomposition. Rather, he combines low order and high order algorithms in a generalized predictor-corrector framework. He illustrates the methodology in the context of a shallow fluid approximation to an ocean basin circulation. He finds that his new method reproduces the accuracy of a fully explicit second-order accurate scheme, while costing less than a first-order accurate scheme.
Quantitative analysis of population-scale family trees with millions of relatives.
Kaplanis, Joanna; Gordon, Assaf; Shor, Tal; Weissbrod, Omer; Geiger, Dan; Wahl, Mary; Gershovits, Michael; Markus, Barak; Sheikh, Mona; Gymrek, Melissa; Bhatia, Gaurav; MacArthur, Daniel G; Price, Alkes L; Erlich, Yaniv
2018-04-13
Family trees have vast applications in fields as diverse as genetics, anthropology, and economics. However, the collection of extended family trees is tedious and usually relies on resources with limited geographical scope and complex data usage restrictions. We collected 86 million profiles from publicly available online data shared by genealogy enthusiasts. After extensive cleaning and validation, we obtained population-scale family trees, including a single pedigree of 13 million individuals. We leveraged the data to partition the genetic architecture of human longevity and to provide insights into the geographical dispersion of families. We also report a simple digital procedure to overlay other data sets with our resource. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
Tidal interactions in the expanding universe - The formation of prolate systems
NASA Technical Reports Server (NTRS)
Binney, J.; Silk, J.
1979-01-01
The study estimates the magnitude of the anisotropy that can be tidally induced in neighboring initially spherical protostructures, be they protogalaxies, protoclusters, or even uncollapsed density enhancements in the large-scale structure of the universe. It is shown that the linear analysis of tidal interactions developed by Peebles (1969) predicts that the anisotropy energy of a perturbation grows to first order in a small dimensionless parameter, whereas the net angular momentum acquired is of second order. A simple model is presented for the growth of anisotropy by tidal interactions during the nonlinear stage of the development of perturbations. A possible observational test is described of the alignment predicted by the model between the orientations of large-scale perturbations and the positions of neighboring density enhancements.
2010-01-01
property variations. The system described here is a simple 4-electrode microfluidic device made of polydimethylsiloxane (PDMS) [50-53] which is reversibly ... through the fluid and heat it.) A more detailed description and analysis of the physics of electroosmotic actuation can be found in [46, 83]. In ... a control algorithm on a standard personal computer. The microfluidic device is made out of a soft polymer (polydimethylsiloxane, PDMS) and is
Roll plane analysis of on-aircraft antennas
NASA Technical Reports Server (NTRS)
Burnside, W. D.; Marhefka, R. J.; Byu, C. L.
1974-01-01
Roll plane radiation patterns of on-aircraft antennas are analyzed using high-frequency solutions. Aircraft-antenna pattern performance is presented with the aircraft modelled in its most basic form. The fuselage is assumed to be a perfectly conducting elliptic cylinder with the antennas mounted near the top or bottom. The wings are simulated by flat plates with arbitrarily many sides, and the engines by circular cylinders. The patterns in each case are verified by measured results taken on simple models as well as scale models of actual aircraft.
The Study of Phase-shift Super-Frequency Induction Heating Power Supply
NASA Astrophysics Data System (ADS)
Qi, Hairun; Peng, Yonglong; Li, Yabin
This paper combines pulse-width phase-shift power modulation with fixed-angle phase-locked control to adjust the inverter's output power. This method not only satisfies the operating conditions of a voltage inverter but also achieves wide-range power modulation; the main circuit is simple, and the switching devices operate under soft switching. The paper analyzes the relationship between the output power and the phase-shift angle. The control strategy is simulated in Matlab/Simulink, and the results show that the method is feasible and agrees with the theoretical analysis.
NASA Astrophysics Data System (ADS)
Kočí, Jan; Maděra, Jiří; Kočí, Václav; Hlaváčová, Zuzana; Černý, Robert
2017-11-01
A simple laboratory experiment for determining the thermal response of a sample during thawing is described in the paper. The sample, made of autoclaved aerated concrete, was partially water saturated and frozen. The temperature development during thawing was then recorded, allowing identification of the time scale of the phase-change process taking place inside the sample. The experimental data were then used in an inverse analysis to find the unknown parameters of the smoothed effective specific heat capacity model.
Salje, Ekhard K H; Planes, Antoni; Vives, Eduard
2017-10-01
Crackling noise can be initiated by competing or coexisting mechanisms. These mechanisms can combine to generate an approximately scale-invariant distribution that contains two or more contributions. The overall distribution function can be analyzed, to a good approximation, using maximum-likelihood methods, assuming that it follows a power law, although with nonuniversal exponents that depend on a varying lower cutoff. We propose that such distributions are rather common and originate from a simple superposition of crackling-noise distributions or exponential damping.
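The maximum-likelihood step described above can be sketched in a few lines. In the continuous case, samples x >= xmin drawn from p(x) ~ x**(-eps) give the estimator eps = 1 + n / sum(ln(x_i/xmin)); the toy example below (Python; the synthetic two-component data and the cutoff scan are illustrative, not the authors' analysis) shows how the apparent exponent drifts with the lower cutoff when two mechanisms are superposed:

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_power_law_exponent(x, xmin):
    """Continuous maximum-likelihood estimate of eps in
    p(x) ~ x**(-eps) for the tail x >= xmin."""
    tail = x[x >= xmin]
    return 1.0 + len(tail) / np.sum(np.log(tail / xmin))

# superpose two power laws with different density exponents
a = rng.pareto(1.5, 50_000) + 1.0   # density exponent 2.5
b = rng.pareto(0.5, 50_000) + 1.0   # density exponent 1.5
mixed = np.concatenate([a, b])

# the fitted exponent depends on the lower cutoff, as in the paper
for xmin in (1.0, 3.0, 10.0, 30.0):
    print(xmin, ml_power_law_exponent(mixed, xmin))
```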
Self-Organized Criticality and Scaling in Lifetime of Traffic Jams
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-01-01
The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime
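Rule 184 with open boundaries is itself only a few lines of code. A minimal sketch (Python; the lattice size and the injection/extraction probabilities are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def step(road, p_in=0.3, p_out=0.9):
    """One parallel rule-184 update with open boundaries.

    road is a 0/1 array (1 = car). A car advances iff the cell ahead
    is empty; cars are injected at the left boundary and extracted at
    the right boundary with the given probabilities.
    """
    n = len(road)
    new = np.zeros(n, dtype=int)
    for i in range(n - 1):
        if road[i] == 1:
            if road[i + 1] == 0:
                new[i + 1] = 1          # car moves one cell right
            else:
                new[i] = 1              # car is blocked and stays
    if road[-1] == 1 and rng.random() > p_out:
        new[-1] = 1                     # car not extracted, stays
    if road[0] == 0 and rng.random() < p_in:
        new[0] = 1                      # inject a new car
    return new

road = (rng.random(200) < 0.3).astype(int)
for _ in range(1000):
    road = step(road)
print("steady-state density:", road.mean())
```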
Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads
2006-04-01
We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.
Feasibility and Performance of the Microwave Thermal Rocket Launcher
NASA Astrophysics Data System (ADS)
Parkin, Kevin L. G.; Culick, Fred E. C.
2004-03-01
Beamed-energy launch concepts employing a microwave thermal thruster are feasible in principle, and microwave sources of sufficient power to launch tons into LEO already exist. Microwave thermal thrusters operate on an analogous principle to nuclear thermal thrusters, which have experimentally demonstrated specific impulses exceeding 850 seconds. Assuming such performance, simple application of the rocket equation suggests that payload fractions of 10% are possible for a single stage to orbit (SSTO) microwave thermal rocket. We present an SSTO concept employing a scaled X-33 aeroshell. The flat aeroshell underside is covered by a thin-layer microwave absorbent heat-exchanger that forms part of the thruster. During ascent, the heat-exchanger faces the microwave beam. A simple ascent trajectory analysis incorporating X-33 aerodynamic data predicts a 10% payload fraction for a 1 ton craft of this type. In contrast, the Saturn V had 3 non-reusable stages and achieved a payload fraction of 4%.
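The "simple application of the rocket equation" mentioned above is easy to reproduce. A quick worked example (Python; the delta-v budget and structural fraction are assumed round numbers for illustration, not figures from the paper):

```python
import math

g0 = 9.81           # m/s^2
isp = 850.0         # s, nuclear/microwave-thermal-class specific impulse
dv = 9000.0         # m/s, assumed effective delta-v to LEO incl. losses
struct_frac = 0.20  # assumed dry structure as fraction of initial mass

# Tsiolkovsky: m0 / m_final = exp(dv / (g0 * isp))
mass_ratio = math.exp(dv / (g0 * isp))
prop_frac = 1.0 - 1.0 / mass_ratio           # propellant fraction
payload_frac = 1.0 - prop_frac - struct_frac # what is left for payload
print(f"propellant {prop_frac:.0%}, payload {payload_frac:.0%}")
# with these round numbers the payload fraction lands near 10-15%
```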
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vriens, L.; Smeets, A.H.M.
1980-09-01
For electron-induced ionization, excitation, and de-excitation, mainly from excited atomic states, a detailed analysis is presented of the dependence of the cross sections and rate coefficients on electron energy and temperature, and on atomic parameters. A wide energy range is covered, including sudden as well as adiabatic collisions. By combining the available experimental and theoretical information, a set of simple analytical formulas is constructed for the cross sections and rate coefficients of the processes mentioned, for the total depopulation, and for three-body recombination. The formulas account for large deviations from classical and semiclassical scaling, as found for excitation. They agree with experimental data and with the theories in their respective ranges of validity, but have a wider range of validity than the separate theories. The simple analytical form further facilitates application in plasma modeling.
Chicken microsatellite markers isolated from libraries enriched for simple tandem repeats.
Gibbs, M; Dawson, D A; McCamley, C; Wardle, A F; Armour, J A; Burke, T
1997-12-01
The total number of microsatellite loci is considered to be at least 10-fold lower in avian species than in mammalian species. Therefore, efficient large-scale cloning of chicken microsatellites, as required for the construction of a high-resolution linkage map, is facilitated by the construction of libraries using an enrichment strategy. In this study, a plasmid library enriched for tandem repeats was constructed from chicken genomic DNA by hybridization selection. Using this technique the proportion of recombinant clones that cross-hybridized to probes containing simple tandem repeats was raised to 16%, compared with < 0.1% in a non-enriched library. Primers were designed from 121 different sequences. Polymerase chain reaction (PCR) analysis of two chicken reference pedigrees enabled 72 loci to be localized within the collaborative chicken genetic map, and at least 30 of the remaining loci have been shown to be informative in these or other crosses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Scheinker, Alexander
Here, we study control of the angular-velocity-actuated nonholonomic unicycle via a simple, bounded extremum seeking controller which is robust to external disturbances and measurement noise. The vehicle performs source seeking despite having no position information about itself or the source, being able only to sense a noise-corrupted scalar value whose extremum coincides with the unknown source location. In order to control the angular velocity, rather than the angular heading directly, a controller is developed such that the closed-loop system exhibits multiple time scales and requires an analysis approach expanding the previous work of Kurzweil, Jarnik, Sussmann, and Liu, utilizing weak limits. We provide an analytic proof of stability and demonstrate how this simple scheme can be extended to include position-independent source seeking, tracking, and collision avoidance for groups of autonomous vehicles in GPS-denied environments, based only on a measure of distance to an obstacle, which is an especially important feature for an autonomous agent.
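A toy simulation conveys the idea. The sketch below (Python) uses a bounded heading-rate law of the form theta' = sqrt(alpha*omega)*cos(omega*t + k*J), a published bounded extremum seeking scheme, as a stand-in for the paper's angular-velocity controller; all gains are hand-picked for this toy problem, and convergence here is heuristic rather than proven:

```python
import numpy as np

dt, T = 1e-3, 100.0
alpha, k, omega, v = 1.0, 2.0, 50.0, 0.3   # hand-picked toy gains
x, y, theta = 4.0, -3.0, 0.0               # initial pose
rng = np.random.default_rng(3)

for n in range(int(T / dt)):
    t = n * dt
    # scalar measurement: largest at the (unknown) source at the origin
    J = -(x ** 2 + y ** 2) + 0.01 * rng.standard_normal()
    # bounded update: heading rate never exceeds sqrt(alpha * omega)
    theta += dt * np.sqrt(alpha * omega) * np.cos(omega * t + k * J)
    x += dt * v * np.cos(theta)            # unicycle kinematics
    y += dt * v * np.sin(theta)

# the vehicle should settle into a small orbit around the source
print("final distance to source:", np.hypot(x, y))
```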
Changing skewness: an early warning signal of regime shifts in ecosystems.
Guttal, Vishwesha; Jayaprakash, Ciriyam
2008-05-01
Empirical evidence for large-scale abrupt changes in ecosystems such as lakes and the vegetation of semi-arid regions is growing. Such changes, called regime shifts, can lead to degradation of ecological services. We study simple ecological models that show a catastrophic transition as a control parameter is varied and propose a novel early warning signal that exploits two ubiquitous features of ecological systems: nonlinearity and large external fluctuations. Either reduced resilience or increased external fluctuations can tip ecosystems into an alternative stable state. It is shown that a change in the asymmetry of the distribution of time series data, quantified by changing skewness, is a model-independent and reliable early warning signal for both routes to regime shifts. Furthermore, using model simulations that mimic field measurements and a simple analysis of real data from abrupt climate change in the Sahara, we study the feasibility of skewness calculations using data available from routine monitoring.
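The proposed indicator amounts to tracking skewness in a sliding window. A minimal sketch (Python/SciPy; the window length and the synthetic drifting series are illustrative):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(4)

# synthetic series whose fluctuations grow more asymmetric over time,
# mimicking a system approaching a regime shift
n = 5000
drift = np.linspace(0.0, 2.0, n)
noise = rng.standard_normal(n) + 0.2 * drift * rng.gamma(2.0, 1.0, n)
series = 10.0 - drift + noise

window = 500
rolling_skew = np.array([
    skew(series[i - window:i]) for i in range(window, n)
])
# a sustained trend in skewness is the proposed early warning signal
print("early vs late skewness:",
      rolling_skew[:100].mean(), rolling_skew[-100:].mean())
```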
Universality classes of fluctuation dynamics in hierarchical complex systems
NASA Astrophysics Data System (ADS)
Macêdo, A. M. S.; González, Iván R. Roa; Salazar, D. S. P.; Vasconcelos, G. L.
2017-03-01
A unified approach is proposed to describe the statistics of the short-time dynamics of multiscale complex systems. The probability density function of the relevant time series (signal) is represented as a statistical superposition of a large time-scale distribution weighted by the distribution of certain internal variables that characterize the slowly changing background. The dynamics of the background is formulated as a hierarchical stochastic model whose form is derived from simple physical constraints, which in turn restrict the dynamics to only two possible classes. The probability distributions of both the signal and the background have simple representations in terms of Meijer G functions. The two universality classes for the background dynamics manifest themselves in the signal distribution as two types of tails: power law and stretched exponential, respectively. A detailed analysis of empirical data from classical turbulence and financial markets shows excellent agreement with the theory.
How long does it take to boil an egg? A simple approach to the energy transfer equation
NASA Astrophysics Data System (ADS)
Roura, P.; Fort, J.; Saurina, J.
2000-01-01
The heating of simple geometric objects immersed in an isothermal bath is analysed qualitatively through Fourier's law. The approximate temperature evolution is compared with the exact solution obtained by solving the transport differential equation, the discrepancies being smaller than 20%. Our method succeeds in giving the solution as a function of the Fourier modulus so that the scale laws hold. It is shown that the time needed to homogenize temperature variations that extend over mean distances x_m is approximately x_m^2/α, where α is the thermal diffusivity. This general relationship also applies to atomic diffusion. Within the approach presented there is no need to write down any differential equation. As an example, the analysis is applied to the process of boiling an egg.
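The rule t ~ x_m^2/α is simple to evaluate. A quick worked example (Python; the diffusivity and length are textbook-level round numbers, and the estimate deliberately ignores that cooking only requires the centre to reach roughly 65-80 °C rather than full homogenization):

```python
# diffusive time scale: t ~ x_m**2 / alpha
alpha = 1.4e-7   # m^2/s, approximate thermal diffusivity (water-like)
x_m = 0.015      # m, assumed mean distance heat must travel (medium egg)

t = x_m ** 2 / alpha
print(f"homogenization time ~ {t:.0f} s ~ {t / 60:.0f} min")
# ~27 min to fully homogenize; actual cooking is faster because the
# proteins set well below the bath temperature
```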
NASA Astrophysics Data System (ADS)
Hsia, H.-M.; Chou, Y.-L.; Longman, R. W.
1983-07-01
The topics considered are related to measurements and controls in physical systems, the control of large scale and distributed parameter systems, chemical engineering systems, aerospace science and technology, thermodynamics and fluid mechanics, and computer applications. Subjects in structural dynamics are discussed, taking into account finite element approximations in transient analysis, buckling finite element analysis of flat plates, dynamic analysis of viscoelastic structures, the transient analysis of large frame structures by simple models, large amplitude vibration of an initially stressed thick plate, nonlinear aeroelasticity, a sensitivity analysis of a combined beam-spring-mass structure, and the optimal design and aeroelastic investigation of segmented windmill rotor blades. Attention is also given to dynamics and control of mechanical and civil engineering systems, composites, and topics in materials. For individual items see A83-44002 to A83-44061
New convergence results for the scaled gradient projection method
NASA Astrophysics Data System (ADS)
Bonettini, S.; Prato, M.
2015-09-01
The aim of this paper is to deepen the convergence analysis of the scaled gradient projection (SGP) method, proposed by Bonettini et al. in a recent paper for constrained smooth optimization. The main feature of SGP is the presence of a variable scaling matrix multiplying the gradient, which may change at each iteration. In the last few years, extensive numerical experimentation has shown that SGP equipped with a suitable choice of the scaling matrix is a very effective tool for solving large-scale variational problems arising in image and signal processing. In spite of the very reliable numerical results observed, only a weak convergence theorem has been available, establishing that any limit point of the sequence generated by SGP is stationary. Here, under the only assumption that the objective function is convex and that a solution exists, we prove that the sequence generated by SGP converges to a minimum point, provided the sequence of scaling matrices satisfies a simple and implementable condition. Moreover, assuming that the gradient of the objective function is Lipschitz continuous, we are also able to prove an O(1/k) convergence rate with respect to the objective function values. Finally, we present the results of numerical experiments on some relevant image restoration problems, showing that the proposed scaling matrix selection rule performs well also from the computational point of view.
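A minimal sketch of an SGP-style iteration (Python; the nonnegativity constraint, least-squares objective, fixed step size and clipped diagonal scaling are illustrative stand-ins for the paper's general setting):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((80, 40))
b = A @ np.abs(rng.standard_normal(40))       # consistent nonnegative model

def sgp(x0, n_iter=2000):
    """Scaled gradient projection for min ||Ax - b||^2 subject to x >= 0:
    x_{k+1} = P_{>=0}(x_k - step * d_k * grad f(x_k)), where d_k is the
    diagonal of a bounded scaling matrix (here a simple clipped guess)."""
    L = np.linalg.norm(A, 2) ** 2             # ||A||^2; grad is 2L-Lipschitz
    step = 0.5 / L                            # safe fixed step for d <= 1
    x = x0.copy()
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ x - b)
        d = np.clip(x, 1e-6, 1.0)             # bounded diagonal scaling
        x = np.maximum(x - step * d * grad, 0.0)  # scaled step + projection
    return x

x = sgp(np.ones(40))
print("residual:", np.linalg.norm(A @ x - b))
```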
NASA Astrophysics Data System (ADS)
Rathinasamy, Maheswaran; Bindhu, V. M.; Adamowski, Jan; Narasimhan, Balaji; Khosa, Rakesh
2017-10-01
An investigation of the scaling characteristics of vegetation and temperature data derived from LANDSAT data was undertaken for a heterogeneous area in Tamil Nadu, India. A wavelet-based multiresolution technique decomposed the data into large-scale mean vegetation and temperature fields and fluctuations in horizontal, diagonal, and vertical directions at hierarchical spatial resolutions. In this approach, the wavelet coefficients were used to investigate whether the normalized difference vegetation index (NDVI) and land surface temperature (LST) fields exhibited self-similar scaling behaviour. In this study, l-moments were used instead of conventional simple moments to understand scaling behaviour. Using the first six moments of the wavelet coefficients through five levels of dyadic decomposition, the NDVI data were shown to be statistically self-similar, with a slope of approximately -0.45 in each of the horizontal, vertical, and diagonal directions of the image, over scales ranging from 30 to 960 m. The temperature data were also shown to exhibit self-similarity with slopes ranging from -0.25 in the diagonal direction to -0.20 in the vertical direction over the same scales. These findings can help develop appropriate up- and down-scaling schemes of remotely sensed NDVI and LST data for various hydrologic and environmental modelling applications. A sensitivity analysis was also undertaken to understand the effect of mother wavelets on the scaling characteristics of LST and NDVI images.
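The core of such a wavelet scaling analysis can be sketched briefly. The example below (Python with the third-party PyWavelets package; the Haar wavelet, a synthetic field, and plain log-variance slopes instead of the paper's l-moments are assumptions made for illustration):

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(6)
field = rng.standard_normal((512, 512))   # stand-in for an NDVI/LST image

# multiresolution decomposition into horizontal/vertical/diagonal details
coeffs = pywt.wavedec2(field, "haar", level=5)

# log2(variance of detail coefficients) vs decomposition level; for a
# statistically self-similar field these points fall on a straight line
for direction, idx in (("horizontal", 0), ("vertical", 1), ("diagonal", 2)):
    log_var = [np.log2(np.var(level[idx])) for level in coeffs[1:]]
    slope = np.polyfit(range(len(log_var)), log_var, 1)[0]
    print(direction, "slope:", round(slope, 2))
```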
Mosmuller, David G M; Mennes, Lisette M; Prahl, Charlotte; Kramer, Gem J C; Disse, Melissa A; van Couwelaar, Gijs M; Niessen, Frank B; Griot, J P W Don
2017-09-01
This study describes the development of the Cleft Aesthetic Rating Scale, a simple and reliable photographic reference scale for the assessment of nasolabial appearance in complete unilateral cleft lip and palate patients. A blind retrospective analysis of photographs of cleft lip and palate patients was performed with this new rating scale. The study was conducted at VU Medical Center Amsterdam and the Academic Center for Dentistry of Amsterdam, with complete unilateral cleft lip and palate patients at the age of 6 years. Photographs that showed the highest interobserver agreement in earlier assessments were selected for the photographic reference scale. Rules were attached to the rating scale to provide a guideline for the assessment and to improve interobserver reliability. Cropped photographs revealing only the nasolabial area were assessed by six observers using the new Cleft Aesthetic Rating Scale in two different sessions. Photographs of 62 children (6 years of age, 44 boys and 18 girls) were assessed. The interobserver reliability for the nose and lip together was 0.62, obtained with the intraclass correlation coefficient. To measure internal consistency, a Cronbach alpha of .91 was calculated. The estimated reliability for three observers was .84, obtained with the Spearman-Brown formula. A new, easy to use, and reliable scoring system with a photographic reference scale is presented in this study.
Simple prognostic model for patients with advanced cancer based on performance status.
Jang, Raymond W; Caraiscos, Valerie B; Swami, Nadia; Banerjee, Subrata; Mak, Ernie; Kaya, Ebru; Rodin, Gary; Bryson, John; Ridley, Julia Z; Le, Lisa W; Zimmermann, Camilla
2014-09-01
Providing survival estimates is important for decision making in oncology care. The purpose of this study was to provide survival estimates for outpatients with advanced cancer, using the Eastern Cooperative Oncology Group (ECOG), Palliative Performance Scale (PPS), and Karnofsky Performance Status (KPS) scales, and to compare their ability to predict survival. ECOG, PPS, and KPS were completed by physicians for each new patient attending the Princess Margaret Cancer Centre outpatient Oncology Palliative Care Clinic (OPCC) from April 2007 to February 2010. Survival analysis was performed using the Kaplan-Meier method. The log-rank test for trend was employed to test for differences in survival curves for each level of performance status (PS), and the concordance index (C-statistic) was used to test the predictive discriminatory ability of each PS measure. Measures were completed for 1,655 patients. PS delineated survival well for all three scales according to the log-rank test for trend (P < .001). Survival was approximately halved for each worsening performance level. Median survival times, in days, for each ECOG level were: ECOG 0, 293; ECOG 1, 197; ECOG 2, 104; ECOG 3, 55; and ECOG 4, 25.5. Median survival times, in days, for PPS (and KPS) were: PPS/KPS 80-100, 221 (215); PPS/KPS 60-70, 115 (119); PPS/KPS 40-50, 51 (49); PPS/KPS 10-30, 22 (29). The C-statistic was similar for all three scales and ranged from 0.63 to 0.64. We present a simple tool that uses PS alone to prognosticate in advanced cancer, with similar discriminatory ability to more complex models. Copyright © 2014 by American Society of Clinical Oncology.
A simple index of stand density for Douglas-fir.
R.O. Curtis
1982-01-01
The expression RD = G/√Dg, where G is basal area and Dg is quadratic mean stand diameter, provides a simple and convenient scale of relative stand density for Douglas-fir, equivalent to other generally accepted diameter-based stand density measures.
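A quick worked example (Python; the basal area and diameter values are invented for illustration, and no particular unit convention from the original note is implied):

```python
import math

def relative_density(basal_area, quad_mean_diameter):
    """Curtis relative density: RD = G / sqrt(Dg)."""
    return basal_area / math.sqrt(quad_mean_diameter)

# e.g. G = 40 (basal area) and Dg = 30 (quadratic mean diameter)
print(relative_density(40.0, 30.0))   # ~7.3
```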
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leroy, Adam K.; Hughes, Annie; Schruba, Andreas
2016-11-01
The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect to measure these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.
Investigation of shear damage considering the evolution of anisotropy
NASA Astrophysics Data System (ADS)
Kweon, S.
2013-12-01
The damage that occurs in shear deformations is investigated in view of anisotropy evolution. It is widely believed in the mechanics research community that damage (or porosity) does not evolve (increase) in shear deformations, since the hydrostatic stress in shear is zero. This paper shows that this statement can be false in large deformations of simple shear. Simulation using the proposed anisotropic ductile fracture model (macro-scale) in this study indicates that the hydrostatic stress becomes nonzero and porosity thus evolves (increases or decreases) in the simple shear deformation of anisotropic (orthotropic) materials. A simple shear simulation using a crystal-plasticity-based damage model (meso-scale) shows the same physics as manifested in the macro-scale model: porosity evolves due to the grain-to-grain interaction, i.e., due to the evolution of anisotropy. Through a series of simple shear simulations, this study investigates the effect of the evolution of anisotropy, i.e., the rotation of the orthotropic axes, on the damage (porosity) evolution. The effect of the evolution of void orientation and void shape on the damage (porosity) evolution is investigated as well. It is found that the interaction among porosity, matrix anisotropy, and void orientation/shape plays a crucial role in the ductile damage of porous materials.
Defining Simple nD Operations Based on Prismatic nD Objects
NASA Astrophysics Data System (ADS)
Arroyo Ohori, K.; Ledoux, H.; Stoter, J.
2016-10-01
An alternative to the traditional approaches that model 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced in the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
Jenkinson, C; Mant, J; Carter, J; Wade, D; Winner, S
2000-03-01
To assess the validity of the London handicap scale (LHS) using a simple unweighted scoring system compared with the traditional weighted scoring, 323 patients admitted to hospital with acute stroke were followed up by interview 6 months after their stroke as part of a trial looking at the impact of a family support organiser. Outcome measures included the six-item LHS, the Dartmouth COOP charts, the Frenchay activities index, the Barthel index, and the hospital anxiety and depression scale. Patients' handicap scores were calculated both using the standard (weighted) procedure for the LHS and using a simple summation procedure without weighting (U-LHS). Construct validity of both the LHS and U-LHS was assessed by testing their correlations with the other outcome measures. Cronbach's alpha for the LHS was 0.83. The U-LHS was highly correlated with the LHS (r=0.98). Correlation of the U-LHS with the other outcome measures gave very similar results to correlation of the LHS with these measures. Simple summation scoring of the LHS does not lead to any change in the measurement properties of the instrument compared with standard weighted scoring. Unweighted scores are easier to calculate and interpret, so it is recommended that these be used.
NASA Astrophysics Data System (ADS)
Kourdis, Panayotis D.; Steuer, Ralf; Goussis, Dimitris A.
2010-09-01
Large-scale models of cellular reaction networks are usually highly complex and characterized by a wide spectrum of time scales, making a direct interpretation and understanding of the relevant mechanisms almost impossible. We address this issue by demonstrating the benefits provided by model reduction techniques. We employ the Computational Singular Perturbation (CSP) algorithm to analyze the glycolytic pathway of intact yeast cells in the oscillatory regime. As a primary object of research for many decades, glycolytic oscillations represent a paradigmatic candidate for studying biochemical function and mechanisms. Using a previously published full-scale model of glycolysis, we show that, due to fast dissipative time scales, the solution is asymptotically attracted on a low dimensional manifold. Without any further input from the investigator, CSP clarifies several long-standing questions in the analysis of glycolytic oscillations, such as the origin of the oscillations in the upper part of glycolysis, the importance of energy and redox status, as well as the fact that neither the oscillations nor cell-cell synchronization can be understood in terms of glycolysis as a simple linear chain of sequentially coupled reactions.
NASA Astrophysics Data System (ADS)
Bera, Anindita; Mishra, Utkarsh; Singha Roy, Sudipto; Biswas, Anindya; Sen(De), Aditi; Sen, Ujjwal
2018-06-01
Benford's law is an empirical edict stating that the lower digits appear more often than higher ones as the first few significant digits in statistics of natural phenomena and mathematical tables. A marked proportion of such analyses is restricted to the first significant digit. We employ violation of Benford's law, up to the first four significant digits, for investigating magnetization and correlation data of paradigmatic quantum many-body systems to detect cooperative phenomena, focusing on the finite-size scaling exponents thereof. We find that for the transverse field quantum XY model, behavior of the very first significant digit of an observable, at an arbitrary point of the parameter space, is enough to capture the quantum phase transition in the model with a relatively high scaling exponent. A higher number of significant digits do not provide an appreciable further advantage, in particular, in terms of an increase in scaling exponents. Since the first significant digit of a physical quantity is relatively simple to obtain in experiments, the results have potential implications for laboratory observations in noisy environments.
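First-significant-digit statistics of this kind are straightforward to compute. A minimal sketch (Python; the sample data and the Euclidean-distance violation measure are illustrative choices, not the authors' observables or scaling analysis):

```python
import numpy as np

def first_digit(x):
    """First significant digit of each (nonzero) value in x."""
    x = np.abs(np.asarray(x, dtype=float))
    x = x[x > 0]
    return (x / 10.0 ** np.floor(np.log10(x))).astype(int)

def benford_violation(x):
    """Euclidean distance between the empirical first-digit
    frequencies and the Benford prediction log10(1 + 1/d)."""
    digits = first_digit(x)
    freq = np.array([(digits == d).mean() for d in range(1, 10)])
    benford = np.log10(1.0 + 1.0 / np.arange(1, 10))
    return np.linalg.norm(freq - benford)

rng = np.random.default_rng(7)
print(benford_violation(rng.lognormal(0.0, 2.0, 100_000)))  # near 0
print(benford_violation(rng.uniform(1.0, 9.0, 100_000)))    # clearly > 0
```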
A New Technique for Personality Scale Construction. Preliminary Findings.
ERIC Educational Resources Information Center
Schaffner, Paul E.; Darlington, Richard B.
Most methods of personality scale construction have clear statistical disadvantages. A hybrid method (Darlington and Bishop, 1966) was found to increase scale validity more than any other method, with large item pools. A simple modification of the Darlington-Bishop method (algebraically and conceptually similar to ridge regression, but…
Universal sequence map (USM) of arbitrary discrete sequences
2002-01-01
Background For over a decade the idea of representing biological sequences in a continuous coordinate space has maintained its appeal but not been fully realized. The basic idea is that any sequence of symbols may define trajectories in the continuous space while conserving all of its statistical properties. Ideally, such a representation would allow scale-independent sequence analysis, without the context of a fixed memory length. A simple example would consist of being able to infer the homology between two sequences solely by comparing the coordinates of any two homologous units. Results We have successfully identified such an iterative function for the bijective mapping ψ of discrete sequences into objects of a continuous state space that enables scale-independent sequence analysis. The technique, named Universal Sequence Mapping (USM), is applicable to sequences of arbitrary length with an arbitrary number of unique units, and generates a representation where map distance estimates sequence similarity. The novel USM procedure is based on earlier work by these and other authors on the properties of the Chaos Game Representation (CGR). The latter enables the representation of sequences with 4 unit types (like DNA) as an order-free Markov chain transition table. The properties of USM are illustrated with test data and can be verified for other data by using the accompanying web-based tool: http://bioinformatics.musc.edu/~jonas/usm/. Conclusions USM is shown to enable a statistical mechanics approach to sequence analysis. The scale-independent representation frees sequence analysis from the need to assume a memory length in the investigation of syntactic rules. PMID:11895567
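The Chaos Game Representation that USM generalizes is a short iterative map: each symbol pulls the current point halfway toward that symbol's corner of the unit square. A minimal sketch (Python; the DNA corner assignment follows the usual CGR convention, the rest is illustrative):

```python
import numpy as np

CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0),
           "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr(seq):
    """Chaos Game Representation: x_{n+1} = (x_n + corner(s_n)) / 2.

    Each point's coordinates encode the whole history of the sequence
    up to that position, with the most recent symbols dominating,
    which is what makes scale-independent comparison possible.
    """
    pts = np.empty((len(seq), 2))
    x = np.array([0.5, 0.5])              # conventional starting point
    for i, s in enumerate(seq):
        x = (x + np.asarray(CORNERS[s])) / 2.0
        pts[i] = x
    return pts

# points of two sequences sharing a recent suffix end up close together
print(cgr("ACGTACGT")[-1], cgr("TTTTACGT")[-1])
```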
Assimilation of ZDR Columns for Improving the Spin-Up and Forecasts of Convective Storms
NASA Astrophysics Data System (ADS)
Carlin, J.; Gao, J.; Snyder, J.; Ryzhkov, A.
2017-12-01
A primary motivation for assimilating radar reflectivity data is the reduction of spin-up time for modeled convection. To accomplish this, cloud analysis techniques seek to induce and sustain convective updrafts in storm-scale models by inserting temperature and moisture increments and hydrometeor mixing ratios into the model analysis from simple relations with reflectivity. Polarimetric radar data provide additional insight into the microphysical and dynamic structure of convection. In particular, the radar meteorology community has known for decades that convective updrafts cause, and are typically co-located with, differential reflectivity (ZDR) columns - vertical protrusions of enhanced ZDR above the environmental 0˚C level. Despite these benefits, limited work has been done thus far to assimilate dual-polarization radar data into numerical weather prediction models. In this study, we explore the utility of assimilating ZDR columns to improve storm-scale model analyses and forecasts of convection. We modify the existing Advanced Regional Prediction System's (ARPS) cloud analysis routine to adjust model temperature and moisture state variables using detected ZDR columns as proxies for convective updrafts, and compare the resultant cycled analyses and forecasts with those from the original reflectivity-based cloud analysis formulation. Results indicate qualitative and quantitative improvements from assimilating ZDR columns, including more coherent analyzed updrafts, forecast updraft helicity swaths that better match radar-derived rotation tracks, more realistic forecast reflectivity fields, and larger equitable threat scores. These findings support the use of dual-polarization radar signatures to improve storm-scale model analyses and forecasts.
NASA Astrophysics Data System (ADS)
Cubrovic, Mihailo
2005-02-01
We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called "stable chaos" (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.
SimpleBox 4.0: Improving the model while keeping it simple….
Hollander, Anne; Schoorl, Marian; van de Meent, Dik
2016-04-01
Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one often-used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and the vegetation compartments, addition of lake compartments and deep ocean compartments (including the thermohaline circulation), implementation of intermittent rain instead of drizzle and of depth-dependent soil concentrations, and adjustment of the partitioning behavior for organic acids and bases as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances were explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. The vegetation compartments and the local scale, which added undesirably high model complexity, were removed to increase the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.
Spectral analysis of the Forel-Ule Ocean colour comparator scale
NASA Astrophysics Data System (ADS)
Wernand, M. R.; van der Woerd, H. J.
2010-04-01
François Alphonse Forel (1890) and Willi Ule (1892) composed a colour comparator scale, with tints varying from indigo-blue to coca-cola brown, to quantify the colour of natural waters such as seas, lakes and rivers. For each measurement, the observer compares the colour of the water above a submersed white disc (Secchi disc) with the hand-held scale of pre-defined colours. The scale can be reproduced accurately from a simple recipe of twenty-one coloured chemical solutions, and because of its ease of use the Forel-Ule (FU) scale has been applied globally and intensively by oceanographers and limnologists since the year 1890. Indeed, the archived FU data belong to the oldest oceanographic data sets and contain information on the changes in geobiophysical properties of natural waters during the last century. In this article we describe the optical properties of the FU scale and its ability to cover the colours of natural waters, as observed by the human eye. The recipe of the scale and its reproduction are described. The spectral transmission of the tubes, with the corresponding chromaticity coordinates, is presented. The FU scale, in all its simplicity, is found to be an adequate ocean colour comparator scale. The scale is well characterized, is stable, and observations are reproducible. This supports the idea that the large historic database of FU measurements is coherent and well calibrated. Moreover, the scale can be coupled to contemporary multi-spectral observations with hand-held and satellite-based spectrometers.
NASA Astrophysics Data System (ADS)
Hnat, B.; Dudson, B. D.; Dendy, R. O.; Counsell, G. F.; Kirk, A.; MAST Team
2008-08-01
Ion saturation current (Isat) measurements of edge plasma turbulence are analysed for six MAST L-mode plasmas that differ primarily in their edge magnetic field configurations. The analysis techniques are designed to capture the strong nonlinearities of the datasets. First, absolute moments of the data are examined to obtain accurate values of scaling exponents. This confirms dual scaling behaviour in all samples, with the temporal scale τ ≈ 40-60 µs separating the two regimes. Strong universality is then identified in the functional form of the probability density function (PDF) for Isat fluctuations, which is well approximated by the Fréchet distribution on temporal scales τ <= 40 µs. For temporal scales τ > 40 µs, the PDFs appear to converge to the Gumbel distribution, which has been previously identified as a universal feature of many other complex phenomena. The optimal fitting parameters k = 1.15 for Fréchet and a = 1.35 for Gumbel provide a simple quantitative characterization of the full spectrum of fluctuations. It is concluded that, to good approximation, the properties of the edge turbulence are independent of the edge magnetic field configuration.
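Fitting such extreme-value forms is routine with SciPy. A minimal sketch (Python; synthetic data stand in for the Isat fluctuations, with invweibull and gumbel_r used as generic Fréchet and Gumbel implementations):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# stand-in for normalized Isat fluctuations at one temporal scale
data = stats.invweibull.rvs(1.15, size=20_000, random_state=rng)

# Frechet (inverse Weibull) maximum-likelihood fit, location pinned at 0
k, loc, scale = stats.invweibull.fit(data, floc=0)
print("Frechet shape k ~", round(k, 2))       # should recover ~1.15

# Gumbel fit for comparison (the long-time-scale limit in the paper)
mu, beta = stats.gumbel_r.fit(data)
print("Gumbel loc/scale:", round(mu, 2), round(beta, 2))
```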
Scaling up Effects in the Organic Laboratory
ERIC Educational Resources Information Center
Persson, Anna; Lindstrom, Ulf M.
2004-01-01
A simple and effective way of exposing chemistry students to some of the effects of scaling up an organic reaction is described. It gives the students an experience of what they may encounter in an industrial setting.
Waller, Christopher C; McLeod, Malcolm D
2014-12-01
Steroid sulfates are a major class of steroid metabolite that are of growing importance in fields such as anti-doping analysis, the detection of residues in agricultural produce or medicine. Despite this, many steroid sulfate reference materials may have limited or no availability hampering the development of analytical methods. We report simple protocols for the rapid synthesis and purification of steroid sulfates that are suitable for adoption by analytical laboratories. Central to this approach is the use of solid-phase extraction (SPE) for purification, a technique routinely used for sample preparation in analytical laboratories around the world. The sulfate conjugates of sixteen steroid compounds encompassing a wide range of steroid substitution patterns and configurations are prepared, including the previously unreported sulfate conjugates of the designer steroids furazadrol (17β-hydroxyandrostan[2,3-d]isoxazole), isofurazadrol (17β-hydroxyandrostan[3,2-c]isoxazole) and trenazone (17β-hydroxyestra-4,9-dien-3-one). Structural characterization data, together with NMR and mass spectra are reported for all steroid sulfates, often for the first time. The scope of this approach for small scale synthesis is highlighted by the sulfation of 1μg of testosterone (17β-hydroxyandrost-4-en-3-one) as monitored by liquid chromatography-mass spectrometry (LCMS). Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Foroutan, Shahin; Haghshenas, Amin; Hashemian, Mohammad; Eftekhari, S. Ali; Toghraie, Davood
2018-03-01
In this paper, the three-dimensional buckling behavior of nanowires was investigated based on Eringen's nonlocal elasticity theory. The electric-current-carrying nanowires were subjected to a longitudinal magnetic field through the Lorentz force. The nanowires (NWs) were modeled based on Timoshenko beam theory and the Gurtin-Murdoch surface elasticity theory. The Generalized Differential Quadrature (GDQ) method was used to solve the governing equations of the NWs. Two sets of boundary conditions, namely simple-simple and clamped-clamped, were applied and the obtained results were discussed. Results demonstrated the effects of electric current, magnetic field, small-scale parameter, slenderness ratio, and nanowire diameter on the critical compressive buckling load of nanowires. As a key result, increasing the small-scale parameter decreased the critical load. By the same token, increasing the electric current, magnetic field, and slenderness ratio resulted in a decrease in the critical load. As the slenderness ratio increased, the effect of the nonlocal theory decreased. In contrast, as the NW diameter increased, the nonlocal effect increased. Moreover, the critical values of the magnetic field strength and slenderness ratio are identified, and the roles of the magnetic field, slenderness ratio, and NW diameter in higher buckling loads are discussed.
Skill of Generalized Additive Model to Detect PM2.5 Health ...
Summary. Measures of health outcomes are collinear with meteorology and air quality, making analysis of connections between human health and air quality difficult. The purpose of this analysis was to determine time scales and periods shared by the variables of interest (and by implication scales and periods that are not shared). Hospital admissions, meteorology (temperature and relative humidity), and air quality (PM2.5 and daily maximum ozone) for New York City during the period 2000-2006 were decomposed into temporal scales ranging from 2 days to greater than two years using a complex wavelet transform. Health effects were modeled as functions of the wavelet components of meteorology and air quality using the generalized additive model (GAM) framework. This simulation study showed that GAM is extremely successful at extracting and estimating a health effect embedded in a dataset. It also shows that, if the objective in mind is to estimate the health signal but not to fully explain this signal, a simple GAM model with a single confounder (calendar time) whose smooth representation includes a sufficient number of constraints is as good as a more complex model.Introduction. In the context of wavelet regression, confounding occurs when two or more independent variables interact with the dependent variable at the same frequency. Confounding also acts on a variety of time scales, changing the PM2.5 coefficient (magnitude and sign) and its significance ac
Kawada, Tomoyuki; Yamada, Natsuki
2012-01-01
Job satisfaction is an important factor in the occupational lives of workers. In this study, the relationship between a one-dimensional scale of job satisfaction and psychological wellbeing was evaluated. A total of 1,742 workers (1,191 men and 551 women) participated. A 100-point scale evaluating job satisfaction (from 0 [extremely dissatisfied] to 100 [extremely satisfied]) and the 12-item version of the General Health Questionnaire (GHQ-12) evaluating psychological wellbeing were used. A multiple regression analysis was then performed, controlling for gender and age. The change in the GHQ-12 and job satisfaction scores after a two-year interval was also evaluated. The mean age of the subjects was 42.2 years for men and 36.2 years for women. The GHQ-12 and job satisfaction scores were significantly correlated in each generation. The partial correlation coefficients between the changes in the two variables, controlling for age, were -0.395 for men and -0.435 for women (p < 0.001). A multiple regression analysis revealed that the 100-point job satisfaction score was associated with the GHQ-12 results (p < 0.001). The adjusted multiple correlation coefficient was 0.275. The 100-point scale, which is a simple and easy tool for evaluating job satisfaction, was significantly associated with psychological wellbeing as judged using the GHQ-12.
NASA Astrophysics Data System (ADS)
Kruckenberg, S. C.; Michels, Z. D.; Parsons, M. M.
2017-12-01
We present results from integrated field, microstructural and textural analysis in the Burlington mylonite zone (BMZ) of eastern Massachusetts to establish a unified micro-kinematic framework for vorticity analysis in polyphase shear zones. Specifically, we define the vorticity-normal surface based on lattice-scale rotation axes calculated from electron backscatter diffraction data using orientation statistics. In doing so, we objectively identify a suitable reference frame for rigid grain methods of vorticity analysis that can be used in concert with textural studies to constrain field- to plate-scale deformation geometries without assumptions that may bias tectonic interpretations, such as relationships between kinematic axes and fabric-forming elements or the nature of the deforming zone (e.g., monoclinic vs. triclinic shear zones). Rocks within the BMZ comprise a heterogeneous mix of quartzofeldspathic ± hornblende-bearing mylonitic gneisses and quartzites. Vorticity axes inferred from lattice rotations lie within the plane of mylonitic foliation perpendicular to lineation, a pattern consistent with monoclinic deformation geometries involving simple shear and/or wrench-dominated transpression. The kinematic vorticity number (Wk) is calculated using Rigid Grain Net analysis and ranges from 0.25-0.55, indicating dominant general shear. Using the calculated Wk values and the dominant geographic fabric orientation, we constrain the angle of paleotectonic convergence between the Nashoba and Avalon terranes to 56-75°, with the convergence vector trending 142-160° and plunging 3-10°. Application of the quartz recrystallized grain size piezometer suggests differential stresses in the BMZ mylonites ranging from 44 to 92 MPa; quartz CPO patterns are consistent with deformation at greenschist- to amphibolite-facies conditions. We conclude that crustal strain localization in the BMZ involved a combination of pure and simple shear in a sinistral reverse transpressional shear zone formed at or near the brittle-ductile transition under relatively high stress conditions. Moreover, we demonstrate the utility of combined crystallographic and rigid grain methods of vorticity analysis for deducing deformation geometries, kinematics, and tectonic histories in polyphase shear zones.
NASA Astrophysics Data System (ADS)
Redolfi, M.; Tubino, M.; Bertoldi, W.; Brasington, J.
2016-08-01
Understanding the role of external controls on the morphology of braided rivers is currently limited by the dearth of robust metrics to quantify and distinguish the diversity of channel form. Most existing measures are strongly dependent on river stage and unable to account for the three-dimensional complexity that is apparent in digital terrain models of braided rivers. In this paper, we introduce a simple, stage-independent morphological indicator that enables the analysis of reach-scale regime morphology as a function of slope, discharge, sediment size, and degree of confinement. The index is derived from the bed elevation frequency distribution and characterizes a statistical width-depth curve averaged longitudinally over multiple channel widths. In this way, we define a "synthetic channel" described by a simple parameter that embeds information about the river morphological complexity. Under the assumption of uniform flow, this approach can be extended to provide estimates of the reach-averaged shear stress distribution, bed load flux, and at-a-station-variability of wetted width. We test this approach using data from a wide range of labile channels including 58 flume experiments and three gravel bed braided rivers. Results demonstrate a strong relationship between the unit discharge and the shape of the elevation distribution, which varies between a U shape for typical single-thread confined channels and a Y shape for multithread reaches. Finally, we discuss the use of the metric as a diagnostic index of river condition that may be used to support inferences about the river morphological trajectory.
Application of RNAMlet to surface defect identification of steels
NASA Astrophysics Data System (ADS)
Xu, Ke; Xu, Yang; Zhou, Peng; Wang, Lei
2018-06-01
As the three main steel production lines, continuous casting slabs, hot rolled steel plates and cold rolled steel strips have different surface appearances and are produced at different line speeds. Therefore, the algorithms for surface defect identification of the three steel products have different requirements for real-time performance and robustness to interference. Existing algorithms cannot be adaptively applied to surface defect identification of all three steel products. A new method of adaptive multi-scale geometric analysis named RNAMlet is proposed. The idea of RNAMlet comes from the non-symmetry anti-packing pattern representation model (NAM). The image is decomposed asymmetrically into a set of rectangular blocks according to the gray-value changes of image pixels, and the two-dimensional Haar wavelet transform is then applied to all blocks. If the image background is complex, the number of blocks is large and more details of the image are used. If the image background is simple, the number of blocks is small and less computation time is needed. RNAMlet was tested on image samples of the three steel products and compared with three classical methods of multi-scale geometric analysis: Contourlet, Shearlet and Tetrolet. For image samples with complicated backgrounds, such as continuous casting slabs and hot rolled steel plates, the defect identification rate obtained by RNAMlet was 1% higher than those of the other three methods. For image samples with simple backgrounds, such as cold rolled steel strips, the computation time of RNAMlet was one-tenth that of the other three MGA methods, while its defect identification rates remained higher.
How fast do living organisms move: Maximum speeds from bacteria to elephants and whales
NASA Astrophysics Data System (ADS)
Meyer-Vernet, Nicole; Rospars, Jean-Pierre
2015-08-01
Despite their variety and complexity, living organisms obey simple scaling laws due to the universality of the laws of physics. In the present paper, we study the scaling between maximum speed and size, from bacteria to the largest mammals. While the preferred speed has been widely studied in the framework of Newtonian mechanics, the maximum speed has rarely attracted the interest of physicists, despite its remarkable scaling property; it is roughly proportional to length throughout nearly the whole range of running and swimming organisms. We propose a simple order-of-magnitude interpretation of this ubiquitous relationship, based on physical properties shared by life forms of very different body structure and varying by more than 20 orders of magnitude in body mass.
Simple scaling of catastrophic landslide dynamics.
Ekström, Göran; Stark, Colin P
2013-03-22
Catastrophic landslides involve the acceleration and deceleration of millions of tons of rock and debris in response to the forces of gravity and dissipation. Their unpredictability and frequent location in remote areas have made observations of their dynamics rare. Through real-time detection and inverse modeling of teleseismic data, we show that landslide dynamics are primarily determined by the length scale of the source mass. When combined with geometric constraints from satellite imagery, the seismically determined landslide force histories yield estimates of landslide duration, momenta, potential energy loss, mass, and runout trajectory. Measurements of these dynamical properties for 29 teleseismogenic landslides are consistent with a simple acceleration model in which height drop and rupture depth scale with the length of the failing slope.
Scaling up digital circuit computation with DNA strand displacement cascades.
Qian, Lulu; Winfree, Erik
2011-06-03
To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
NASA Technical Reports Server (NTRS)
Mace, Gerald G.; Ackerman, Thomas P.
1996-01-01
A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.
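To make the gradient-estimation step concrete, a hedged sketch: horizontal gradients of a field over a station triangle can be estimated by fitting the linear model f(x, y) = f0 + (df/dx)x + (df/dy)y, as in triangle-based objective analysis. Station coordinates and wind values below are synthetic.

```python
# Sketch: least-squares plane fit for horizontal gradients over 3 stations.
import numpy as np

def plane_gradient(xy, f):
    """Fit f = f0 + gx*x + gy*y over stations; return (gx, gy)."""
    A = np.column_stack([np.ones(len(f)), xy[:, 0], xy[:, 1]])
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    return coef[1], coef[2]

xy = np.array([[0.0, 0.0], [300e3, 20e3], [120e3, 250e3]])  # station x, y (m)
u = np.array([12.0, 14.5, 9.8])                             # u-wind (m/s)
gx, gy = plane_gradient(xy, u)
print(f"du/dx = {gx:.2e} 1/s, du/dy = {gy:.2e} 1/s")
# With both u and v gradients one forms divergence and vorticity, whose
# uncertainty scales with the wind-component errors, as in the abstract.
```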
A Three-way Decomposition of a Total Effect into Direct, Indirect, and Interactive Effects
VanderWeele, Tyler J.
2013-01-01
Recent theory in causal inference has provided concepts for mediation analysis and effect decomposition that allow one to decompose a total effect into a direct and an indirect effect. Here, it is shown that what is often taken as an indirect effect can in fact be further decomposed into a “pure” indirect effect and a mediated interactive effect, thus yielding a three-way decomposition of a total effect (direct, indirect, and interactive). This three-way decomposition applies to difference scales and also to additive ratio scales and additive hazard scales. Assumptions needed for the identification of each of these three effects are discussed and simple formulae are given for each when regression models allowing for interaction are used. The three-way decomposition is illustrated by examples from genetic and perinatal epidemiology, and discussion is given to what is gained over the traditional two-way decomposition into simply a direct and an indirect effect. PMID:23354283
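For the linear case, the three components have simple closed forms. A hedged sketch assuming the standard linear models Y = t0 + t1·A + t2·M + t3·A·M and M = b0 + b1·A (confounders omitted for brevity; all coefficient values below are illustrative, not the paper's examples):

```python
# Sketch: three-way decomposition of a total effect for linear Y and M models.
def three_way_decomposition(t1, t2, t3, b0, b1, a=1.0, a_star=0.0):
    """Return (direct, pure indirect, mediated interaction) for A: a* -> a."""
    direct = (t1 + t3 * (b0 + b1 * a_star)) * (a - a_star)   # pure direct effect
    pure_indirect = (t2 + t3 * a_star) * b1 * (a - a_star)   # "pure" indirect
    mediated_interaction = t3 * b1 * (a - a_star) ** 2       # interactive part
    return direct, pure_indirect, mediated_interaction

de, pie, intmed = three_way_decomposition(t1=0.5, t2=0.3, t3=0.2, b0=1.0, b1=0.8)
print("total effect:", de + pie + intmed)  # sum of the three components
```

When the interaction coefficient t3 is zero, the mediated interactive effect vanishes and the decomposition collapses to the traditional direct/indirect split.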
How much does a tokamak reactor cost?
NASA Astrophysics Data System (ADS)
Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.
2017-10-01
The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both overnight cost and levelized cost of electricity, the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high-Tc rare earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.
Measurements of strain at plate boundaries using space based geodetic techniques
NASA Technical Reports Server (NTRS)
Robaudo, Stefano; Harrison, Christopher G. A.
1993-01-01
We have used the space based geodetic techniques of Satellite Laser Ranging (SLR) and VLBI to study strain along subduction and transform plate boundaries and have interpreted the results using a simple elastic dislocation model. Six stations located behind island arcs were analyzed as representative of subduction zones, while 13 sites located on either side of the San Andreas fault were used for the transcurrent zones. The deformation length scale was then calculated for both tectonic margins by fitting the relative strain to an exponentially decreasing function of distance from the plate boundary. Results show that space-based data for the transcurrent boundary along the San Andreas fault help to better define the deformation length scale in the area while fitting the elastic half-space earth model nicely. For subduction-type boundaries, the analysis indicates that there is no single scale length which uniquely describes the deformation. This is mainly due to the difference in subduction characteristics among the different areas.
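The fitting step described here amounts to a two-parameter nonlinear regression. A minimal sketch with synthetic data (the model form s(x) = s0·exp(-x/L) follows the abstract; the numbers are made up):

```python
# Sketch: recover a deformation length scale L from strain-vs-distance data.
import numpy as np
from scipy.optimize import curve_fit

def strain_model(x, s0, L):
    return s0 * np.exp(-x / L)

rng = np.random.default_rng(1)
x_km = np.array([10.0, 40.0, 80.0, 150.0, 250.0, 400.0])
strain = 1.2 * np.exp(-x_km / 120.0) + rng.normal(0, 0.02, x_km.size)
(s0_fit, L_fit), _ = curve_fit(strain_model, x_km, strain, p0=(1.0, 100.0))
print(f"deformation length scale ~ {L_fit:.0f} km")
```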
Minimal model for a hydrodynamic fingering instability in microroller suspensions
NASA Astrophysics Data System (ADS)
Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul
2017-11-01
We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.
Reliable scar scoring system to assess photographs of burn patients.
Mecott, Gabriel A; Finnerty, Celeste C; Herndon, David N; Al-Mousawi, Ahmed M; Branski, Ludwik K; Hegde, Sachin; Kraft, Robert; Williams, Felicia N; Maldonado, Susana A; Rivero, Haidy G; Rodriguez-Escobar, Noe; Jeschke, Marc G
2015-12-01
Several scar-scoring scales exist to clinically monitor burn scar development and maturation. Although scoring scars through direct clinical examination is ideal, scars must sometimes be scored from photographs. No scar scale currently exists for the latter purpose. We modified a previously described scar scale (Yeong et al., J Burn Care Rehabil 1997) and tested the reliability of this new scale in assessing burn scars from photographs. The new scale consisted of three parameters: scar height, surface appearance, and color mismatch. Each parameter was assigned a score of 1 (best) to 4 (worst), generating a total score of 3-12. Five physicians with burns training scored 120 representative photographs using the original and modified scales. Reliability was analyzed using the coefficient of agreement, Cronbach alpha, intraclass correlation coefficient, variance, and coefficient of variance. Analysis of variance was performed using the Kruskal-Wallis test. Color mismatch and scar height scores were validated against actual color and height differences. The intraclass correlation coefficient, coefficient of agreement, and Cronbach alpha were higher for the modified scale than for the original scale, and the original scale produced more variance than the modified scale. Subanalysis demonstrated that, for all categories, the modified scale had greater correlation and reliability than the original scale. The correlation between color mismatch scores and actual color differences was 0.84, and between scar height scores and actual height differences was 0.81. The modified scar scale is a simple, reliable, and useful scale for evaluating photographs of burn patients. Copyright © 2015 Elsevier Inc. All rights reserved.
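Among the reliability statistics listed, Cronbach's alpha is the simplest to compute from first principles. A minimal sketch on made-up scores (rows are photographs, columns the three scored parameters; not the study's data):

```python
# Sketch: Cronbach's alpha for a multi-item reliability check.
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items) array of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

scores = np.array([[2, 3, 2], [1, 1, 2], [4, 3, 4], [2, 2, 3], [3, 4, 3]])
print("alpha =", round(cronbach_alpha(scores), 2))
```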
NASA Astrophysics Data System (ADS)
Rogers, Keir K.; Bird, Simeon; Peiris, Hiranya V.; Pontzen, Andrew; Font-Ribera, Andreu; Leistedt, Boris
2018-05-01
Correlations measured in three dimensions in the Lyman-alpha forest are contaminated by the presence of the damping wings of high column density (HCD) absorbing systems of neutral hydrogen (H I; having column densities N(H I) > 1.6 × 10^{17} atoms cm^{-2}), which extend significantly beyond the redshift-space location of the absorber. We measure this effect as a function of the column density of the HCD absorbers and redshift by measuring three-dimensional (3D) flux power spectra in cosmological hydrodynamical simulations from the Illustris project. Survey pipelines exclude regions containing the largest damping wings. We find that, even after this procedure, there is a scale-dependent correction to the 3D Lyman-alpha forest flux power spectrum from residual contamination. We model this residual using a simple physical model of the HCD absorbers as linearly biased tracers of the matter density distribution, convolved with their Voigt profiles and integrated over the column density distribution function. We recommend the use of this model over existing models used in data analysis, which approximate the damping wings as top-hats and so miss shape information in the extended wings. The simple `linear Voigt model' is statistically consistent with our simulation results for a mock residual contamination up to small scales (|k| < 1 h Mpc^{-1}). It does not account for the effect of the highest column density absorbers on the smallest scales (e.g. |k| > 0.4 h Mpc^{-1} for small damped Lyman-alpha absorbers; HCD absorbers with N(H I) ˜ 10^{21} atoms cm^{-2}). However, these systems are in any case preferentially removed from survey data. Our model is appropriate for an accurate analysis of the baryon acoustic oscillations feature. It is additionally essential for reconstructing the full shape of the 3D flux power spectrum.
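The building block of the `linear Voigt model' is the Voigt profile itself, computable via the Faddeeva function. A hedged sketch (the Doppler width and damping parameters below are illustrative, not survey or simulation values):

```python
# Sketch: Voigt profile (Gaussian convolved with Lorentzian) via Faddeeva.
import numpy as np
from scipy.special import wofz

def voigt_profile(x, sigma, gamma):
    """Unit-area Voigt profile with Gaussian width sigma, Lorentzian gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

x = np.linspace(-10.0, 10.0, 5)
print(voigt_profile(x, sigma=1.0, gamma=0.5))
```

The extended Lorentzian wings of this profile are exactly the damping-wing shape information that the top-hat approximations criticized above discard.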
Development of an Earthquake Impact Scale
NASA Astrophysics Data System (ADS)
Wald, D. J.; Marano, K. D.; Jaiswal, K. S.
2009-12-01
With the advent of the USGS Prompt Assessment of Global Earthquakes for Response (PAGER) system, domestic (U.S.) and international earthquake responders are reconsidering their automatic alert and activation levels as well as their response procedures. To help facilitate rapid and proportionate earthquake response, we propose and describe an Earthquake Impact Scale (EIS) founded on two alerting criteria. One, based on the estimated cost of damage, is most suitable for domestic events; the other, based on estimated ranges of fatalities, is more appropriate for most global events. Simple thresholds, derived from the systematic analysis of past earthquake impact and response levels, turn out to be quite effective in communicating predicted impact and response level of an event, characterized by alerts of green (little or no impact), yellow (regional impact and response), orange (national-scale impact and response), and red (major disaster, necessitating international response). Corresponding fatality thresholds for yellow, orange, and red alert levels are 1, 100, and 1000, respectively. For damage impact, yellow, orange, and red thresholds are triggered by estimated losses exceeding $1M, $10M, and $1B, respectively. The rationale for a dual approach to earthquake alerting stems from the recognition that relatively high fatalities, injuries, and homelessness dominate in countries where vernacular building practices typically lend themselves to high collapse and casualty rates, and it is these impacts that set prioritization for international response. In contrast, it is often financial and overall societal impacts that trigger the level of response in regions or countries where prevalent earthquake resistant construction practices greatly reduce building collapse and associated fatalities. Any newly devised alert protocols, whether financial or casualty based, must be intuitive and consistent with established lexicons and procedures. In this analysis, we make an attempt at both simple and intuitive color-coded alerting criteria; yet, we preserve the necessary uncertainty measures by which one can gauge the likelihood for the alert to be over- or underestimated.
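The dual thresholds stated above translate directly into a lookup. A simple sketch of that mapping (the function and its interface are illustrative, not PAGER code):

```python
# Sketch: map estimated fatalities or losses to the EIS alert colors.
def eis_alert(fatalities=None, losses_usd=None):
    if fatalities is not None:
        thresholds = [(1000, "red"), (100, "orange"), (1, "yellow")]
        value = fatalities
    else:
        thresholds = [(1e9, "red"), (1e7, "orange"), (1e6, "yellow")]
        value = losses_usd
    for limit, color in thresholds:
        if value >= limit:
            return color
    return "green"

print(eis_alert(fatalities=250))    # orange
print(eis_alert(losses_usd=5e6))    # yellow
```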
Cruz, Roberto de la; Guerrero, Pilar; Spill, Fabian; Alarcón, Tomás
2016-10-21
We propose a modelling framework to analyse the stochastic behaviour of heterogeneous, multi-scale cellular populations. We illustrate our methodology with a particular example in which we study a population with an oxygen-regulated proliferation rate. Our formulation is based on an age-dependent stochastic process. Cells within the population are characterised by their age (i.e. time elapsed since they were born). The age-dependent (oxygen-regulated) birth rate is given by a stochastic model of oxygen-dependent cell cycle progression. Once the birth rate is determined, we formulate an age-dependent birth-and-death process, which dictates the time evolution of the cell population. The population is under a feedback loop which controls its steady state size (carrying capacity): cells consume oxygen which in turn fuels cell proliferation. We show that our stochastic model of cell cycle progression allows for heterogeneity within the cell population induced by stochastic effects. Such heterogeneous behaviour is reflected in variations in the proliferation rate. Within this set-up, we have established three main results. First, we have shown that the age to the G1/S transition, which essentially determines the birth rate, exhibits a remarkably simple scaling behaviour. Besides the fact that this simple behaviour emerges from a rather complex model, this allows for a huge simplification of our numerical methodology. A further result is the observation that heterogeneous populations undergo an internal process of quasi-neutral competition. Finally, we investigated the effects of cell-cycle-phase dependent therapies (such as radiation therapy) on heterogeneous populations. In particular, we have studied the case in which the population contains a quiescent sub-population. Our mean-field analysis and numerical simulations confirm that, if the survival fraction of the therapy is too high, rescue of the quiescent population occurs. This gives rise to emergence of resistance to therapy since the rescued population is less sensitive to therapy. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.
Bonilla, Gonzalo; Di Masi, Gilda; Battaglia, Danilo; Otero, José María; Socolovsky, Mariano
2011-01-01
Peripheral nerve lesions usually are associated with neuropathic pain. In the present paper, we describe a simple scale to quantify pain after brachial plexus injuries and apply this scale to a series of patients to determine initial outcomes after reconstructive surgery. Fifty-one patients with traumatic brachial plexus avulsion injuries were treated over the period of one calendar year at one center by the same surgical team. Of these, 28 patients who were available for follow-up reported some degree of neuropathic pain radiating towards the hand or forearm and underwent reconstructive microsurgery and direct pain management, including trunk and nerve neurolysis and repair. A special pain severity rating scale was developed and used to assess patients' pain before and after surgery, over a minimum follow-up of 6 months. An independent researcher, not part of the surgical team, performed all pre- and postoperative evaluations. Of the 28 patients with brachial plexus traction injuries who met eligibility criteria, 93% were male, and most were young (mean age, 27.6 years). The mean preoperative severity of pain using our scale was 30.9 out of a maximum of 37 (± 0.76 SD), which fell to a mean of 6.9 (± 0.68 SD) 6 months post-procedure. On average, pain declined by 78% across the whole series, a decline that was statistically significant (p < .001). Subset analysis revealed similar improvements across all the different parameters of pain. We have designed and tested a simple and reliable method by which to quantify neuropathic pain after traumatic brachial plexus injuries. Initial surgical treatment of the paralysis--including nerve, trunk and root reconstruction, and neurolysis--comprises an effective means by which to initially treat neuropathic pain. Ablative or neuromodulative procedures, like dorsal root entry zone, should be reserved for refractory cases.
Pattern recognition analysis and classification modeling of selenium-producing areas
Naftz, D.L.
1996-01-01
Established chemometric and geochemical techniques were applied to water quality data from 23 National Irrigation Water Quality Program (NIWQP) study areas in the Western United States. These techniques were applied to the NIWQP data set to identify common geochemical processes responsible for mobilization of selenium and to develop a classification model that uses major-ion concentrations to identify areas that contain elevated selenium concentrations in water that could pose a hazard to waterfowl. Pattern recognition modeling of the simple-salt data computed with the SNORM geochemical program indicates three principal components that explain 95% of the total variance. A three-dimensional plot of PC 1, 2 and 3 scores shows three distinct clusters that correspond to distinct hydrochemical facies denoted as facies 1, 2 and 3. Facies 1 samples are distinguished by water samples without the CaCO3 simple salt and elevated concentrations of NaCl, CaSO4, MgSO4 and Na2SO4 simple salts relative to water samples in facies 2 and 3. Water samples in facies 2 are distinguished from facies 1 by the absence of the MgSO4 simple salt and the presence of the CaCO3 simple salt. Water samples in facies 3 are similar to samples in facies 2, with the absence of both MgSO4 and CaSO4 simple salts. Water samples in facies 1 have the largest median selenium concentration (10 µg l-1), compared to median concentrations of 2.0 µg l-1 and less than 1.0 µg l-1 for samples in facies 2 and 3. A classification model using the soft independent modeling by class analogy (SIMCA) algorithm was constructed with data from the NIWQP study areas. The classification model was successful in identifying water samples with a selenium concentration that is hazardous to some species of waterfowl from a test data set comprising 2,060 water samples from throughout Utah and Wyoming. Application of chemometric and geochemical techniques during data synthesis and analysis of multivariate environmental databases from other national-scale environmental programs such as the NIWQP could also provide useful insights for addressing 'real world' environmental problems.
Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.
Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin
2010-05-12
Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.
Stochl, Jan; Jones, Peter B; Croudace, Tim J
2012-06-11
Mokken scaling techniques are a useful tool for researchers who wish to construct unidimensional tests or use questionnaires that comprise multiple binary or polytomous items. The stochastic cumulative scaling model offered by this approach is ideally suited when the intention is to score an underlying latent trait by simple addition of the item response values. In our experience, the Mokken model appears to be less well-known than for example the (related) Rasch model, but is seeing increasing use in contemporary clinical research and public health. Mokken's method is a generalisation of Guttman scaling that can assist in the determination of the dimensionality of tests or scales, and enables consideration of reliability, without reliance on Cronbach's alpha. This paper provides a practical guide to the application and interpretation of this non-parametric item response theory method in empirical research with health and well-being questionnaires. Scalability of data from 1) a cross-sectional health survey (the Scottish Health Education Population Survey) and 2) a general population birth cohort study (the National Child Development Study) illustrate the method and modeling steps for dichotomous and polytomous items respectively. The questionnaire data analyzed comprise responses to the 12 item General Health Questionnaire, under the binary recoding recommended for screening applications, and the ordinal/polytomous responses to the Warwick-Edinburgh Mental Well-being Scale. After an initial analysis example in which we select items by phrasing (six positive versus six negatively worded items) we show that all items from the 12-item General Health Questionnaire (GHQ-12)--when binary scored--were scalable according to the double monotonicity model, in two short scales comprising six items each (Bech's "well-being" and "distress" clinical scales). An illustration of ordinal item analysis confirmed that all 14 positively worded items of the Warwick-Edinburgh Mental Well-being Scale (WEMWBS) met criteria for the monotone homogeneity model but four items violated double monotonicity with respect to a single underlying dimension. Software availability and commands used to specify unidimensionality and reliability analysis and graphical displays for diagnosing monotone homogeneity and double monotonicity are discussed, with an emphasis on current implementations in freeware.
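The central scalability statistic in Mokken analysis is Loevinger's H. In practice one would use dedicated software (e.g. the R `mokken` package), but a bare-bones Python illustration of H for dichotomous items conveys the idea; the random test data are a stand-in, so H should land near zero:

```python
# Sketch: Loevinger's H scalability coefficient for binary items.
import numpy as np

def loevinger_H(X):
    """X: (n_persons, k_items) binary matrix. Returns the scale H."""
    n, k = X.shape
    p = X.mean(axis=0)
    order = np.argsort(-p)          # sort items from easiest to hardest
    X, p = X[:, order], p[order]
    obs_err = exp_err = 0.0
    for i in range(k - 1):
        for j in range(i + 1, k):   # p_i >= p_j: Guttman error = (0 on i, 1 on j)
            obs_err += np.sum((X[:, i] == 0) & (X[:, j] == 1))
            exp_err += n * (1 - p[i]) * p[j]
    return 1.0 - obs_err / exp_err

rng = np.random.default_rng(0)
X = (rng.random((200, 6)) < np.linspace(0.3, 0.8, 6)).astype(int)
print("H =", round(loevinger_H(X), 2))  # ~0 for unrelated random items
```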
Bridges, John F P; Lataille, Angela T; Buttorff, Christine; White, Sharon; Niparko, John K
2012-03-01
Low utilization of hearing aids has drawn increased attention to the study of consumer preferences using both simple ratings (e.g., Likert scale) and conjoint analyses, but these two approaches often produce inconsistent results. The study aims to directly compare Likert scales and conjoint analysis in identifying important attributes associated with hearing aids among those with hearing loss. Seven attributes of hearing aids were identified through qualitative research: performance in quiet settings, comfort, feedback, frequency of battery replacement, purchase price, water and sweat resistance, and performance in noisy settings. The preferences of 75 outpatients with hearing loss were measured with both a 5-point Likert scale and with 8 paired-comparison conjoint tasks (the latter being analyzed using OLS [ordinary least squares] and logistic regression). Results were compared by examining implied willingness-to-pay and Pearson's Rho. A total of 56 respondents (75%) provided complete responses. Two thirds of respondents were male, most had sensorineural hearing loss, and most were older than 50; 44% of respondents had never used a hearing aid. Both methods identified improved performance in noisy settings as the most valued attribute. Respondents were twice as likely to buy a hearing aid with better functionality in noisy environments (p < .001), and willingness to pay for this attribute ranged from US$2674 on the Likert scale to US$9000 in the conjoint analysis. The authors find a high level of concordance between the methods, a result that is in stark contrast with previous research. The authors conclude that their result stems from constraining the levels on the Likert scale.
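A common way to derive "implied willingness-to-pay" from a conjoint regression is the ratio of an attribute coefficient to the (negative) price coefficient. A hedged sketch with hypothetical coefficients, not the study's estimates:

```python
# Sketch: willingness-to-pay from conjoint regression coefficients.
def willingness_to_pay(attr_coef, price_coef):
    """WTP = -attr_coef / price_coef (price_coef expected negative)."""
    return -attr_coef / price_coef

beta_noise = 0.9       # utility gain for better performance in noise
beta_price = -0.0001   # utility change per dollar of purchase price
print(f"WTP ~ ${willingness_to_pay(beta_noise, beta_price):,.0f}")  # $9,000
```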
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2-m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short-term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
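The scaling framework reduces to one line of arithmetic once a rate is fixed. A hedged sketch; the 7 %/K (Clausius-Clapeyron-like) rate is an assumed example value, not the scenarios' adopted constant:

```python
# Sketch: scale a present-day extreme intensity with dew point change.
def scaled_extreme(p_now_mm_h, d_dewpoint_K, rate_per_K=0.07):
    """Future intensity = present intensity * (1 + rate)^(dew point change)."""
    return p_now_mm_h * (1.0 + rate_per_K) ** d_dewpoint_K

print(round(scaled_extreme(30.0, d_dewpoint_K=2.0), 1), "mm/h")
```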
Characteristic Sizes of Life in the Oceans, from Bacteria to Whales.
Andersen, K H; Berge, T; Gonçalves, R J; Hartvig, M; Heuschele, J; Hylander, S; Jacobsen, N S; Lindemann, C; Martens, E A; Neuheimer, A B; Olsson, K; Palacz, A; Prowe, A E F; Sainmont, J; Traving, S J; Visser, A W; Wadhwa, N; Kiørboe, T
2016-01-01
The size of an individual organism is a key trait to characterize its physiology and feeding ecology. Size-based scaling laws may have a limited size range of validity or undergo a transition from one scaling exponent to another at some characteristic size. We collate and review data on size-based scaling laws for resource acquisition, mobility, sensory range, and progeny size for all pelagic marine life, from bacteria to whales. Further, we review and develop simple theoretical arguments for observed scaling laws and the characteristic sizes of a change or breakdown of power laws. We divide life in the ocean into seven major realms based on trophic strategy, physiology, and life history strategy. Such a categorization represents a move away from a taxonomically oriented description toward a trait-based description of life in the oceans. Finally, we discuss life forms that transgress the simple size-based rules and identify unanswered questions.
Reichert, Matthew D.; Alvarez, Nicolas J.; Brooks, Carlton F.; ...
2014-09-24
Pendant bubble and drop devices are invaluable tools in understanding surfactant behavior at fluid–fluid interfaces. The simple instrumentation and analysis are used widely to determine adsorption isotherms, transport parameters, and interfacial rheology. However, much of the analysis performed is developed for planar interfaces. Moreover, the application of a planar analysis to drops and bubbles (curved interfaces) can lead to erroneous and unphysical results. We revisit this analysis for a well-studied surfactant system at air–water interfaces over a wide range of curvatures as applied to both expansion/contraction experiments and interfacial elasticity measurements. The impact of curvature and transport on measured properties is quantified and compared to other scaling relationships in the literature. Our results provide tools to design interfacial experiments for accurate determination of isotherm, transport and elastic properties.
Automatic network coupling analysis for dynamical systems based on detailed kinetic models.
Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich
2005-10-01
We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
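The mode-counting step described, SVD of a sensitivity matrix with an error-controlled cut, can be illustrated in a few lines. The random stand-in matrix and the tolerance are assumptions, not the paper's test systems:

```python
# Sketch: count locally active dynamical modes via SVD of a sensitivity matrix.
import numpy as np

def n_active_modes(S, rel_tol=1e-3):
    """Number of singular values above rel_tol * largest singular value."""
    sv = np.linalg.svd(S, compute_uv=False)
    return int(np.sum(sv > rel_tol * sv[0]))

rng = np.random.default_rng(1)
low_rank = rng.random((10, 3)) @ rng.random((3, 10))   # effectively 3 modes
print(n_active_modes(low_rank + 1e-6 * rng.random((10, 10))))  # prints 3
```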
Grammatical Analysis as a Distributed Neurobiological Function
Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D
2015-01-01
Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences—inflectionally complex words and minimal phrases—and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in L middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework where phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and where significant capacities for language comprehension remain intact even following severe left hemisphere damage. PMID:25421880
On estimating scale invariance in stratocumulus cloud fields
NASA Technical Reports Server (NTRS)
Seze, Genevieve; Smith, Leonard A.
1990-01-01
Examination of cloud radiance fields derived from satellite observations sometimes indicates the existence of a range of scales over which the statistics of the field are scale invariant. Many methods have been developed to quantify this scaling behavior in geophysics. The usefulness of such techniques depends both on the physics of the process being robust over a wide range of scales and on the availability of high resolution, low noise observations over these scales. These techniques (area-perimeter relation, distribution of areas, estimation of the capacity d0 through box counting, correlation exponent) are applied to the high resolution satellite data taken during the FIRE experiment, and initial estimates of the required data quality are obtained by analyzing simple sets. The results for the observed fields are contrasted with those for images of objects with known characteristics (e.g., dimension), where the details of the constructed image simulate current observational limits. Throughout, where cloud elements and cloud boundaries are mentioned, these should be understood as structures in the radiance field; all boundaries considered are defined by simple threshold arguments.
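Of the listed techniques, box counting is the most direct to sketch: count occupied boxes at several scales and read d0 off the log-log slope. The test object below is a straight line, for which the estimate should be close to 1 (box sizes and grid are illustrative):

```python
# Sketch: box-counting estimate of the capacity dimension d0.
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate d0 from counts of occupied boxes at several box sizes."""
    counts = []
    n = mask.shape[0]
    for s in sizes:
        view = mask[:n - n % s, :n - n % s].reshape(n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

n = 256
mask = np.zeros((n, n), dtype=bool)
mask[n // 2, :] = True            # a straight line: expect d0 ~ 1
print(round(box_counting_dimension(mask), 2))
```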
On the context-dependent scaling of consumer feeding rates.
Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C
2016-06-01
The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.
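For reference, the canonical functional-response form behind "capture rate" statements of this kind is the Holling type II curve. A minimal sketch; the attack rate and handling time are hypothetical, and in the size-based setting they would themselves scale with the consumer-resource size ratio:

```python
# Sketch: Holling type-II functional response (feeding rate vs prey density).
def holling_type_ii(N, a, h):
    """Feeding rate at prey density N, attack rate a, handling time h."""
    return a * N / (1.0 + a * h * N)

for N in (5, 50, 500):
    print(N, round(holling_type_ii(N, a=0.8, h=0.1), 2))
```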
Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model
NASA Astrophysics Data System (ADS)
Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman
2015-01-01
The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.
LOD significance thresholds for QTL analysis in experimental populations of diploid species
Van Ooijen JW
1999-11-01
Linkage analysis with molecular genetic markers is a very powerful tool in the biological research of quantitative traits. However, there is no easy way to determine which areas of the genome can be designated as statistically significant for containing a gene affecting the quantitative trait of interest, which hampers control of the false-positive rate. In this paper, four tables, obtained by large-scale simulations, are presented that can be used with a simple formula to obtain the false-positive rate for analyses of the standard types of experimental populations with diploid species, for any genome size. A new definition of the term 'suggestive linkage' is proposed that allows a more objective comparison of results across species.
Structural design considerations for micromachined solid-oxide fuel cells
NASA Astrophysics Data System (ADS)
Srikar, V. T.; Turner, Kevin T.; Andrew Ie, Tze Yung; Spearing, S. Mark
Micromachined solid-oxide fuel cells (μSOFCs) are among a class of devices being investigated for portable power generation. Optimization of the performance and reliability of such devices requires robust, scale-dependent, design methodologies. In this first analysis, we consider the structural design of planar, electrolyte-supported, μSOFCs from the viewpoints of electrochemical performance, mechanical stability and reliability, and thermal behavior. The effect of electrolyte thickness on fuel cell performance is evaluated using a simple analytical model. Design diagrams that account explicitly for thermal and intrinsic residual stresses are presented to identify geometries that are resistant to fracture and buckling. Analysis of energy loss due to in-plane heat conduction highlights the importance of efficient thermal isolation in microscale fuel cell design.
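The simplest quantitative piece of the electrolyte-thickness trade-off mentioned above is the area-specific ohmic resistance, ASR = t/σ. A hedged sketch; the YSZ-like conductivity value is a rough illustrative assumption, not the paper's model:

```python
# Sketch: electrolyte ohmic loss vs thickness for a planar SOFC membrane.
def electrolyte_asr(thickness_m, conductivity_S_per_m):
    """Area-specific resistance in ohm*m^2 (thinner electrolyte -> lower loss)."""
    return thickness_m / conductivity_S_per_m

for t_um in (0.5, 2.0, 10.0):
    asr = electrolyte_asr(t_um * 1e-6, 2.0)   # ~2 S/m, YSZ-like, near 800 C
    print(f"{t_um} um -> {asr * 1e4:.3f} ohm cm^2")
```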
Data survey on the effect of product features on competitive advantage of selected firms in Nigeria.
Olokundun, Maxwell; Iyiola, Oladele; Ibidunni, Stephen; Falola, Hezekiah; Salau, Odunayo; Amaihian, Augusta; Peter, Fred; Borishade, Taiye
2018-06-01
The main objective of this study was to present a data article that investigates the effect of product features on a firm's competitive advantage. Few studies have examined how the features of a product could help in driving the competitive advantage of a firm. A descriptive research method was used. The Statistical Package for the Social Sciences (SPSS 22) was used to analyze one hundred and fifty (150) valid questionnaires completed by small business owners registered with the Small and Medium Enterprises Development Agency of Nigeria (SMEDAN). Stratified and simple random sampling techniques were employed; reliability and validity procedures were also confirmed. The field data set is made publicly available to enable critical or extended analysis.
Wavelet analysis of epileptic spikes
NASA Astrophysics Data System (ADS)
Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.
2003-05-01
Interictal spikes and sharp waves in human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially after long-term monitoring became common in investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
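A hedged sketch of the across-scale idea: compute wavelet coefficients at several scales with a Mexican-hat wavelet and flag samples whose coefficients are large at every scale; a spike persists across scales while slow background rhythms and isolated noise mostly do not. The scales, threshold and synthetic trace are illustrative assumptions, not the authors' algorithm:

```python
# Sketch: multi-scale wavelet flagging of spike-like transients.
import numpy as np

def ricker(points, a):
    """Mexican-hat (Ricker) wavelet sampled on `points` samples, width a."""
    t = np.arange(points) - (points - 1) / 2.0
    return (1 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

def spike_mask(signal, scales=(2, 4, 8), z_thresh=3.0):
    """Flag samples whose wavelet coefficients are large at every scale."""
    flags = np.ones(signal.size, dtype=bool)
    for a in scales:
        w = np.convolve(signal, ricker(10 * a + 1, a), mode="same")
        z = (w - w.mean()) / w.std()
        flags &= np.abs(z) > z_thresh
    return flags

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 2000)
eeg = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)
eeg[700] += 10.0                        # injected spike-like transient
print(np.flatnonzero(spike_mask(eeg)))  # indices clustered near 700
```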
Scaling and efficiency determine the irreversible evolution of a market
Baldovin, F.; Stella, A. L.
2007-01-01
In setting up a stochastic description of the time evolution of a financial index, the challenge consists in devising a model compatible with all stylized facts emerging from the analysis of financial time series and providing a reliable basis for simulating such series. Based on constraints imposed by market efficiency and on an inhomogeneous-time generalization of standard simple scaling, we propose an analytical model which accounts simultaneously for empirical results like the linear decorrelation of successive returns, the power law dependence on time of the volatility autocorrelation function, and the multiscaling associated to this dependence. In addition, our approach gives a justification and a quantitative assessment of the irreversible character of the index dynamics. This irreversibility enters as a key ingredient in a novel simulation strategy of index evolution which demonstrates the predictive potential of the model.
Hilson, Gavin; Hilson, Christopher J; Pardie, Sandra
2007-02-01
This paper critiques the approach taken by the Ghanaian Government to address mercury pollution in the artisanal and small-scale gold mining sector. Unmonitored releases of mercury, which is used in the gold-amalgamation process, have caused numerous environmental complications throughout rural Ghana. Certain policy, technological and educational initiatives taken to address the mounting problem, however, have proved marginally effective at best, having been designed and implemented without careful analysis of mine community dynamics, the organization of activities, operators' needs and local geological conditions. Marked improvements can only be achieved in this area through increased government-initiated dialogue with the now-ostracized illegal galamsey mining community; introducing simple, cost-effective techniques for the reduction of mercury emissions; and effecting government-sponsored participatory training exercises as media for communicating information about appropriate technologies and the environment.
Risk perception in epidemic modeling
NASA Astrophysics Data System (ADS)
Bagnoli, Franco; Liò, Pietro; Sguanci, Luca
2007-12-01
We investigate the effects of risk perception in a simple model of epidemic spreading. We assume that the perception of the risk of being infected depends on the fraction of neighbors that are ill. The effect of this factor is to decrease the infectivity, that therefore becomes a dynamical component of the model. We study the problem in the mean-field approximation and by numerical simulations for regular, random, and scale-free networks. We show that for homogeneous and random networks, there is always a value of perception that stops the epidemics. In the “worst-case” scenario of a scale-free network with diverging input connectivity, a linear perception cannot stop the epidemics; however, we show that a nonlinear increase of the perception risk may lead to the extinction of the disease. This transition is discontinuous, and is not predicted by the mean-field analysis.
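A hedged mean-field sketch of the mechanism: SIS-like dynamics in which the effective infectivity falls as the perceived infected fraction c grows, here via tau(c) = tau0·exp(-J·c), a common choice in this class of models; parameters are illustrative:

```python
# Sketch: mean-field SIS with perception-reduced infectivity.
import numpy as np

def steady_state(tau0, J, k, c0=0.01, dt=0.01, steps=20000):
    """Integrate dc/dt = -c + tau0*exp(-J*c)*k*c*(1-c) to steady state."""
    c = c0
    for _ in range(steps):
        c += dt * (-c + tau0 * np.exp(-J * c) * k * c * (1.0 - c))
        c = min(max(c, 0.0), 1.0)
    return c

for J in (0.0, 2.0, 6.0):
    print(f"J = {J}: endemic fraction ~ {steady_state(0.3, J, k=4):.3f}")
```

In this coarse continuous version, stronger perception only lowers the endemic level; the paper's network formulation, where the perceived fraction over a discrete neighborhood enters the infectivity, is what allows perception to stop the epidemic outright on homogeneous and random networks.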
N-point statistics of large-scale structure in the Zel'dovich approximation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tassev, Svetlin, E-mail: tassev@astro.princeton.edu
2014-06-01
Motivated by the results presented in a companion paper, here we give a simple analytical expression for the matter n-point functions in the Zel'dovich approximation (ZA) both in real and in redshift space (including the angular case). We present numerical results for the 2-dimensional redshift-space correlation function, as well as for the equilateral configuration for the real-space 3-point function. We compare those to the tree-level results. Our analysis is easily extendable to include Lagrangian bias, as well as higher-order perturbative corrections to the ZA. The results should be especially useful for modelling probes of large-scale structure in the linear regime, such as the Baryon Acoustic Oscillations. We make the numerical code used in this paper freely available.
Electrochemical micro/nano-machining: principles and practices.
Zhan, Dongping; Han, Lianhuan; Zhang, Jie; He, Quanfeng; Tian, Zhao-Wu; Tian, Zhong-Qun
2017-03-06
Micro/nano-machining (MNM) is becoming the cutting-edge of high-tech manufacturing because of the increasing industrial demand for supersmooth surfaces and functional three-dimensional micro/nano-structures (3D-MNS) in ultra-large scale integrated circuits, microelectromechanical systems, miniaturized total analysis systems, precision optics, and so on. Taking advantage of no tool wear, no surface stress, environmental friendliness, simple operation, and low cost, electrochemical micro/nano-machining (EC-MNM) has an irreplaceable role in MNM. This comprehensive review presents the state-of-art of EC-MNM techniques for direct writing, surface planarization and polishing, and 3D-MNS fabrications. The key point of EC-MNM is to confine electrochemical reactions at the micro/nano-meter scale. This review will bring together various solutions to "confined reaction" ranging from electrochemical principles through technical characteristics to relevant applications.
Structure-related statistical singularities along protein sequences: a correlation study.
Colafranceschi, Mauro; Colosimo, Alfredo; Zbilut, Joseph P; Uversky, Vladimir N; Giuliani, Alessandro
2005-01-01
A data set composed of 1141 proteins representative of all eukaryotic protein sequences in the Swiss-Prot Protein Knowledgebase was coded by seven physicochemical properties of amino acid residues. The resulting numerical profiles were submitted to correlation analysis after the application of a linear (simple mean) and a nonlinear (Recurrence Quantification Analysis, RQA) filter. The main RQA variables, Recurrence and Determinism, were subsequently analyzed by Principal Component Analysis. The RQA descriptors showed that (i) protein sequences embed specific information that is present neither in the codes nor in the amino acid composition, and (ii) the most sensitive code for detecting ordered recurrent (deterministic) patterns of residues in protein sequences is the Miyazawa-Jernigan hydrophobicity scale. The most deterministic proteins in terms of autocorrelation properties of primary structures were found (i) to be involved in protein-protein and protein-DNA interactions and (ii) to display a significantly higher proportion of structural disorder with respect to the average of the data set. A study of the scaling behavior of the average determinism with the setting parameters of RQA (embedding dimension and radius) allows for the identification of patterns of minimal length (six residues) as possible markers of zones specifically prone to inter- and intramolecular interactions.
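The simplest RQA quantity, the recurrence rate, is easy to sketch for a numeric profile of the kind used here. A bare-bones illustration without embedding, on random stand-in data; the radius is an arbitrary choice:

```python
# Sketch: recurrence rate of a 1D profile (simplest RQA measure).
import numpy as np

def recurrence_rate(x, radius):
    """Fraction of off-diagonal point pairs closer than `radius`."""
    d = np.abs(x[:, None] - x[None, :])
    n = x.size
    return (np.count_nonzero(d < radius) - n) / (n * (n - 1))

rng = np.random.default_rng(0)
profile = rng.random(100)   # stand-in for a coded hydrophobicity profile
print(round(recurrence_rate(profile, radius=0.1), 3))
```

Determinism additionally requires counting recurrent points that fall on diagonal line structures of the recurrence matrix, which is what distinguishes ordered patterns from isolated recurrences.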
Fire-protection research for energy technology: FY 80 year-end report
NASA Astrophysics Data System (ADS)
Hasegawa, H. K.; Alvares, N. J.; Lipska, A. E.; Ford, H.; Priante, S.; Beason, D. G.
1981-05-01
This continuing research program was initiated in order to advance fire protection strategies for Fusion Energy Experiments (FEE). The program has expanded to encompass other forms of energy research. Accomplishments for fiscal year 1980 were: finalization of the fault-tree analysis of the Shiva fire management system; development of a second-generation fire-growth analysis using an alternate model and new LLNL combustion dynamics data; improvements of techniques for chemical smoke aerosol analysis; development and testing of a simple method to assess the corrosive potential of smoke aerosols; development of an initial aerosol dilution system; completion of primary small-scale tests for measurements of the dynamics of cable fires; finalization of a primary survey format for non-LLNL energy technology facilities; and studies of fire dynamics and aerosol production from electrical insulation and computer tape cassettes.
NASA Astrophysics Data System (ADS)
Kovalev, A.; Filippov, A.; Gorb, S. N.
2016-03-01
In contrast to the majority of inorganic or artificial materials, there is no ideal long-range ordering of structures on the surface in biological systems. Local symmetry of the ordering on biological surfaces is also often broken. In the present paper, the particular symmetry violation was analyzed for dimple-like nano-pattern on the belly scales of the skin of the pythonid snake Morelia viridis using correlation analysis and statistics of the distances between individual nanostructures. The results of the analysis performed on M. viridis were compared with a well-studied nano-nipple pattern on the eye of the sphingid moth Manduca sexta, used as a reference. The analysis revealed non-random, but very specific symmetry violation. In the case of the moth eye, the nano-nipple arrangement forms a set of domains, while in the case of the snake skin, the nano-dimples arrangement resembles an ordering of particles (molecules) in amorphous (glass) state. The function of the nano-dimples arrangement may be to provide both friction and strength isotropy of the skin. A simple model is suggested, which provides the results almost perfectly coinciding with the experimental ones. Possible mechanisms of the appearance of the above nano-formations are discussed.
NASA Astrophysics Data System (ADS)
Einstein, Theodore L.; Pimpinelli, Alberto; González, Diego Luis; Morales-Cifuentes, Josue R.
2015-09-01
In studies of epitaxial growth, analysis of the distribution of the areas of capture zones (i.e. proximity polygons or Voronoi tessellations with respect to island centers) is often the best way to extract the critical nucleus size i. For non-random nucleation the normalized areas s of these Voronoi cells are well described by the generalized Wigner distribution (GWD) P_β(s) = a s^β exp(-b s^2), particularly in the central region 0.5 < s < 2 where data are least noisy. Extensive Monte Carlo simulations reveal inadequacies of our earlier mean field analysis, suggesting β = i + 2 for diffusion-limited aggregation (DLA). Since simulations generate orders of magnitude more data than experiments, they permit close examination of the tails of the distribution, which differ from the simple GWD form. One refinement is based on a fragmentation model. We also compare island-size distributions, contrasting analysis based on the island-size distribution with analysis based on the scaling of island density with flux. Modifications appear for attach-limited aggregation (ALA). We focus on the experimental system para-hexaphenyl on amorphous mica, comparing the results of the three analysis techniques and reconciling their results via a novel model of hot precursors based on rate equations, pointing out the existence of intermediate scaling regimes between DLA and ALA.
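The GWD is fully determined by β once a and b are fixed by unit normalization and unit mean, giving b = [Γ((β+2)/2)/Γ((β+1)/2)]² and a = 2b^((β+1)/2)/Γ((β+1)/2); these are the standard constants for this functional form. A quick numerical check:

```python
# Sketch: generalized Wigner distribution with unit norm and unit mean.
import numpy as np
from scipy.special import gamma

def gwd(s, beta):
    b = (gamma((beta + 2) / 2) / gamma((beta + 1) / 2)) ** 2
    a = 2 * b ** ((beta + 1) / 2) / gamma((beta + 1) / 2)
    return a * s ** beta * np.exp(-b * s ** 2)

s = np.linspace(1e-6, 4.0, 200001)
ds = s[1] - s[0]
for i in (1, 2, 3):                      # beta = i + 2 suggested for DLA
    p = gwd(s, beta=i + 2)
    print(f"i={i}: norm={(p * ds).sum():.3f}, mean={(s * p * ds).sum():.3f}")
```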
Simple yet Hidden Counterexamples in Undergraduate Real Analysis
ERIC Educational Resources Information Center
Shipman, Barbara A.; Shipman, Patrick D.
2013-01-01
We study situations in introductory analysis in which students affirmed false statements as true, despite simple counterexamples that they easily recognized afterwards. The study draws attention to how simple counterexamples can become hidden in plain sight, even in an active learning atmosphere where students proposed simple (as well as more…
NASA Astrophysics Data System (ADS)
Tanaka, H. L.
2003-06-01
In this study, a numerical simulation of the Arctic Oscillation (AO) is conducted using a simple barotropic model that considers the barotropic-baroclinic interactions as the external forcing. The model is referred to as a barotropic S model since the external forcing is obtained statistically from the long-term historical data, solving an inverse problem. The barotropic S model has been integrated for 51 years under a perpetual January condition and the dominant empirical orthogonal function (EOF) modes in the model have been analyzed. The results are compared with the EOF analysis of the barotropic component of the real atmosphere based on the daily NCEP-NCAR reanalysis for 50 yr from 1950 to 1999. According to the result, the first EOF of the model atmosphere appears to be the AO similar to the observation. The annular structure of the AO and the two centers of action at Pacific and Atlantic are simulated nicely by the barotropic S model. Therefore, the atmospheric low-frequency variabilities have been captured satisfactorily even by the simple barotropic model. The EOF analysis is further conducted on the external forcing of the barotropic S model. The structure of the dominant forcing shows the characteristics of synoptic-scale disturbances of zonal wavenumber 6 along the Pacific storm track. The forcing is induced by the barotropic-baroclinic interactions associated with baroclinic instability. The result suggests that the AO can be understood as the natural variability of the barotropic component of the atmosphere induced by the inherent barotropic dynamics, which is forced by the barotropic-baroclinic interactions. The fluctuating upscale energy cascade from planetary waves and synoptic disturbances to the zonal motion plays the key role for the excitation of the AO.
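For readers unfamiliar with the EOF machinery used throughout, a minimal sketch: EOFs are the right singular vectors of the time-space anomaly matrix, with explained variance given by the squared singular values. Random stand-in data here, not model output:

```python
# Sketch: leading EOF of a time x space anomaly field via SVD.
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((365, 500))   # time x space stand-in field
anom = data - data.mean(axis=0)          # remove the time mean
U, S, Vt = np.linalg.svd(anom, full_matrices=False)
var_frac = S ** 2 / (S ** 2).sum()
eof1, pc1 = Vt[0], U[:, 0] * S[0]        # leading pattern and its PC
print("EOF1 explained variance:", round(float(var_frac[0]), 3))
```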
Dynamic texture recognition using local binary patterns with an application to facial expressions.
Zhao, Guoying; Pietikäinen, Matti
2007-06-01
Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
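To make the building block concrete, here is a minimal 8-neighbour LBP sketch on one plane (synthetic data; not the paper's full VLBP/LBP-TOP implementation). LBP-TOP would apply this same operator on the XY, XT and YT planes of a video volume and concatenate the three histograms.

```python
# Hedged sketch: basic local binary patterns (LBP), the building block
# of LBP-TOP as described above.
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for each interior pixel of a 2D array."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        neighbour = img[1 + dy:img.shape[0] - 1 + dy,
                        1 + dx:img.shape[1] - 1 + dx]
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes

rng = np.random.default_rng(2)
frame = rng.integers(0, 256, size=(64, 64))
hist = np.bincount(lbp_codes(frame).ravel(), minlength=256)
print(hist[:8])  # first few bins of the 256-bin LBP histogram
```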
Ultrasonic monitoring of droplets' evaporation: Application to human whole blood.
Laux, D; Ferrandis, J Y; Brutin, D
2016-09-01
During a colloidal droplet's evaporation, a sol-gel transition can be observed, described by the desiccation time τD and the gelation time τG. These characteristic times, which can be linked to the viscoelastic properties of the droplet and to its composition, are classically determined by analyzing the evolution of droplet mass during evaporation. Even though monitoring mass versus time seems straightforward, the approach is very sensitive to environmental conditions (vibrations, air flow…), as mass has to be evaluated very accurately using ultra-sensitive weighing scales. In this study we investigated the potential of ultrasonic shear reflectometry to assess τD and τG in a simple and reliable manner. To validate this approach, our study focused on the evaporation of blood droplets, on which a great deal of work has recently been published. Desiccation and gelation times measured with shear ultrasonic reflectometry correlated very well with values obtained from mass-versus-time analysis. This ultrasonic method, which is not very sensitive to environmental perturbations, is therefore well suited to monitoring the drying of blood droplets in a simple manner and, more generally, to investigating the evaporation of complex fluid droplets.
Approximate analytical solution for induction heating of solid cylinders
Jankowski, Todd Andrew; Pawley, Norma Helen; Gonzales, Lindsey Michal; ...
2015-10-20
An approximate solution to the mathematical model for induction heating of a solid cylinder in a cylindrical induction coil is presented here. The coupled multiphysics model includes equations describing the electromagnetic field in the heated object, a heat transfer simulation to determine the temperature of the heated object, and an AC circuit simulation of the induction heating power supply. A multiple-scale perturbation method is used to solve the multiphysics model. The approximate analytical solution yields simple closed-form expressions for the electromagnetic field and heat generation rate in the solid cylinder, for the equivalent impedance of the associated tank circuit, and for the frequency response of a variable frequency power supply driving the tank circuit. The solution developed here is validated by comparing the predicted power supply frequency to both experimental measurements and calculated values from finite element analysis for heating of graphite cylinders in an induction furnace. The simple expressions from the analytical solution clearly show the functional dependence of the power supply frequency on the material properties of the load and the geometrical characteristics of the furnace installation. In conclusion, the expressions developed here provide physical insight into observations made during load signature analysis of induction heating.
Stability and chaos of Rulkov map-based neuron network with electrical synapse
NASA Astrophysics Data System (ADS)
Wang, Caixia; Cao, Hongjun
2015-02-01
In this paper, the stability and chaos of a simple system consisting of two identical Rulkov map-based neurons with a bidirectional electrical synapse are investigated in detail. On the one hand, the conditions for stability of the fixed points of this system, as a function of the control parameters and electrical coupling strengths, are obtained by qualitative analysis. On the other hand, chaos in the sense of Marotto is proved in a rigorous mathematical way. These results could be useful for building up large-scale neuronal networks with specific dynamics and rich biophysical phenomena.
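For readers unfamiliar with the model, the sketch below simulates one common form of two Rulkov map-based neurons with a bidirectional electrical synapse. The specific parameter values are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch: two Rulkov map-based neurons coupled by a bidirectional
# electrical synapse of strength eps.
#   x_{n+1} = alpha / (1 + x_n^2) + y_n + coupling
#   y_{n+1} = y_n - mu * (x_n - sigma)
import numpy as np

alpha, mu, sigma, eps = 4.3, 0.001, -1.2, 0.05
n_steps = 5000
x = np.array([-1.0, -1.1])  # fast (membrane-like) variables
y = np.array([-2.9, -2.9])  # slow variables

trace = np.empty((n_steps, 2))
for n in range(n_steps):
    coupling = eps * (x[::-1] - x)          # eps*(x2-x1), eps*(x1-x2)
    x_new = alpha / (1.0 + x**2) + y + coupling
    y = y - mu * (x - sigma)
    x = x_new
    trace[n] = x
print("final states:", trace[-1])
```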
Overview of Rotating Cavitation and Cavitation Surge in the Fastrac Engine LOX Turbopump
NASA Technical Reports Server (NTRS)
Zoladz, Thomas; Turner, Jim (Technical Monitor)
2001-01-01
Observations regarding rotating cavitation and cavitation surge experienced during the development of the Fastrac 60 Klbf engine turbopump are discussed. Detailed observations from the analysis of both water flow and liquid oxygen test data are offered. Scaling and general comparison of rotating cavitation between water flow and liquid oxygen testing are discussed. Complex data features linking the localized rotating cavitation mechanism of the inducer to system surge components are described in detail. Finally a description of a simple lumped-parameter hydraulic system model developed to better understand observed data is given.
Static deflection analysis of non prismatic multilayer p-NEMS cantilevers under electrical load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pavithra, M., E-mail: pavithramasi78@gmail.com; Muruganand, S.
2016-04-13
The deflection of Euler-Bernoulli nonprismatic multilayer piezoelectric nanoelectromechanical (p-NEMS) cantilever beams has been studied theoretically for various cantilever profiles under electrical load. The problem is solved by applying the boundary conditions to simple polynomial trial functions. The method is applied to various profiles, such as rectangular and trapezoidal, varying both the thickness and the material of the piezoelectric layer. The results show that a trapezoidal profile with a ZnO piezoelectric layer gives the best deflection, making such cantilevers suitable for nanoscale applications.
Wentzel-Kramers-Brillouin method in the Bargmann representation. [of quantum mechanics
NASA Technical Reports Server (NTRS)
Voros, A.
1989-01-01
It is demonstrated that the Bargmann representation of quantum mechanics is ideally suited for semiclassical analysis, using as an example the WKB method applied to the bound-state problem in a single well of one degree of freedom. For the harmonic oscillator, this WKB method trivially gives the exact eigenfunctions in addition to the exact eigenvalues. For an anharmonic well, a self-consistent variational choice of the representation greatly improves the accuracy of the semiclassical ground state. Also, a simple change of scale illuminates the relationship of semiclassical versus linear perturbative expansions, allowing a variety of multidimensional extensions.
Invariant approach to the character classification
NASA Astrophysics Data System (ADS)
Šariri, Kristina; Demoli, Nazif
2008-04-01
Image moment analysis is a very useful tool that allows image description invariant to translation, rotation, scale change, and some types of image distortion. The aim of this work was the development of a simple method for fast and reliable classification of characters using Hu's and affine moment invariants. The Euclidean distance was used as the discrimination feature, with its statistical parameters estimated. The method was tested on classification of Times New Roman font letters as well as sets of handwritten characters. It is shown that using all of Hu's invariants and three affine invariants as the discrimination set improves the recognition rate by 30%.
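The skeleton of such a classifier is short. The sketch below computes Hu's seven invariants with OpenCV and classifies by nearest Euclidean distance; the random "templates" are stand-ins for rendered or handwritten glyphs, and the log-scaling step is a common convention rather than the paper's stated procedure.

```python
# Hedged sketch: Hu moment invariants + nearest-neighbour classification
# by Euclidean distance, in the spirit of the method described above.
import cv2
import numpy as np

def hu_features(img):
    m = cv2.moments(img, binaryImage=True)
    hu = cv2.HuMoments(m).ravel()
    # log-scale the invariants, which span many orders of magnitude
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

rng = np.random.default_rng(3)
templates = {c: rng.integers(0, 2, (32, 32), dtype=np.uint8) for c in "AB"}
query = templates["A"].copy()

feats = {c: hu_features(im) for c, im in templates.items()}
q = hu_features(query)
best = min(feats, key=lambda c: np.linalg.norm(feats[c] - q))
print("classified as:", best)
```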
Stable isotope dimethyl labelling for quantitative proteomics and beyond
Hsu, Jue-Liang; Chen, Shu-Hui
2016-01-01
Stable-isotope reductive dimethylation, a cost-effective, simple, robust, reliable and easy-to-multiplex labelling method, is widely applied to quantitative proteomics using liquid chromatography-mass spectrometry. This review focuses on biological applications of stable-isotope dimethyl labelling for large-scale comparative analysis of protein expression and post-translational modifications, based on the unique properties of the labelling chemistry. Some other applications of the labelling method for sample preparation and mass spectrometry-based protein identification and characterization are also summarized. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644970
Coupled-cluster treatment of molecular strong-field ionization
NASA Astrophysics Data System (ADS)
Jagau, Thomas-C.
2018-05-01
Ionization rates and Stark shifts of H2, CO, O2, H2O, and CH4 in static electric fields have been computed with coupled-cluster methods in a basis set of atom-centered Gaussian functions with a complex-scaled exponent. Consideration of electron correlation is found to be of great importance even for a qualitatively correct description of the dependence of ionization rates and Stark shifts on the strength and orientation of the external field. The analysis of the second moments of the molecular charge distribution suggests a simple criterion for distinguishing tunnel and barrier suppression ionization in polyatomic molecules.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dombroski, M; Melius, C; Edmunds, T
2008-09-24
This study uses the Multi-scale Epidemiologic Simulation and Analysis (MESA) system developed for foreign animal diseases to assess the consequences of nationwide human infectious disease outbreaks. A literature review identified the state of the art in both small-scale regional models and large-scale nationwide models and characterized key aspects of a nationwide epidemiological model. The MESA system offers computational advantages over existing epidemiological models and enables a broader array of stochastic analyses of model runs to be conducted because of those computational advantages. However, it has only been demonstrated on foreign animal diseases. This paper applies the MESA modeling methodology to human epidemiology. The methodology divided 2000 US Census data at the census tract level into school-bound children, work-bound workers, elderly, and stay-at-home individuals. The model simulated mixing among these groups by incorporating schools, workplaces, households, and long-distance travel via airports. A baseline scenario with fixed input parameters was run for a nationwide influenza outbreak using relatively simple social distancing countermeasures. Analysis of the baseline scenario showed one of three possible results: (1) the outbreak burned itself out before it had a chance to spread regionally, (2) the outbreak spread regionally and lasted a relatively long time, although constrained geography enabled it to eventually be contained without affecting a disproportionately large number of people, or (3) the outbreak spread through air travel and lasted a long time with unconstrained geography, becoming a nationwide pandemic. These results are consistent with empirical influenza outbreak data. The results showed that simply scaling up a regional small-scale model is unlikely to account for all the complex variables and their interactions involved in a nationwide outbreak. Several limitations of the methodology should be explored in future work, including validating the model against reliable historical disease data; improving contact rates, spread methods, and disease parameters through discussions with epidemiological experts; and incorporating realistic behavioral assumptions.
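The qualitative outbreak regimes described above can be illustrated with a deliberately tiny toy model. The sketch below is a deterministic SIR model with a crude social-distancing trigger; it is not the MESA methodology, and every number in it is an assumption for illustration.

```python
# Hedged sketch: a minimal SIR influenza model with a simple social
# distancing countermeasure, in highly stylized form.
import numpy as np

N, beta, gamma = 1e6, 0.35, 0.2     # population, transmission, recovery
distancing = 0.6                    # contact reduction once triggered
threshold = 0.01 * N                # infections that trigger distancing

s, i, r = N - 10, 10.0, 0.0
history = []
for day in range(365):
    b = beta * (distancing if i > threshold else 1.0)
    new_inf = b * s * i / N
    new_rec = gamma * i
    s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
    history.append(i)
print(f"peak infected: {max(history):,.0f}; total affected: {r:,.0f}")
```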
Simple fluorescence-based high throughput cell viability assay for filamentous fungi.
Chadha, S; Kale, S P
2015-09-01
Filamentous fungi are important model organisms for understanding eukaryotic processes and have been frequently exploited in research and industry. These fungi are also causative agents of serious diseases in plants and humans. Disease management strategies include in vitro susceptibility testing of the fungal pathogens to environmental conditions and antifungal agents. Conventional methods used for antifungal susceptibility testing are cumbersome, time-consuming and not suitable for large-scale analysis. Here, we report a rapid, high throughput microplate-based fluorescence method for investigating the toxicity of antifungal and stress (osmotic, salt and oxidative) agents on Magnaporthe oryzae, and compare it with the agar dilution method. This bioassay is optimized for the reduction of resazurin to fluorescent resorufin by the fungal hyphae. The resazurin bioassay showed inhibition rates and IC50 values comparable to the agar dilution method and to previously reported IC50 or MIC values for M. oryzae and other fungi. The present method can screen a range of test agents from different chemical classes, with different modes of action, for antifungal activity in a simple, sensitive, time- and cost-effective manner. This high throughput viability assay has great potential for large-scale screening of chemical libraries of antifungal agents, for evaluating the effects of environmental conditions, and for hyphal kinetic studies in mutant and natural populations of filamentous fungi.
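IC50 estimation from such fluorescence readings is typically a dose-response curve fit. The sketch below fits a four-parameter logistic (Hill) curve; the concentrations and signals are synthetic stand-ins, not data or protocol from the paper.

```python
# Hedged sketch: estimating an IC50 from viability readings by fitting
# a four-parameter logistic (Hill) dose-response curve.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, bottom, top, ic50, n):
    return bottom + (top - bottom) / (1.0 + (c / ic50)**n)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])       # ug/mL, assumed
signal = np.array([98, 95, 85, 60, 30, 12, 5.0])     # % viability, assumed
params, _ = curve_fit(hill, conc, signal, p0=(0, 100, 5, 1))
print(f"IC50 ~ {params[2]:.2f} ug/mL (Hill slope {params[3]:.2f})")
```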
NASA Astrophysics Data System (ADS)
Pletikapić, Galja; Ivošević DeNardis, Nadica
2017-01-01
Surface analytical methods are applied to examine the environmental status of seawaters. The present overview emphasizes advantages of combining surface analytical methods, applied to a hazardous situation in the Adriatic Sea, such as monitoring of the first aggregation phases of dissolved organic matter in order to potentially predict the massive mucilage formation and testing of oil spill cleanup. Such an approach, based on fast and direct characterization of organic matter and its high-resolution visualization, sets a continuous-scale description of organic matter from micro- to nanometre scales. Electrochemical method of chronoamperometry at the dropping mercury electrode meets the requirements for monitoring purposes due to the simple and fast analysis of a large number of natural seawater samples enabling simultaneous differentiation of organic constituents. In contrast, atomic force microscopy allows direct visualization of biotic and abiotic particles and provides an insight into structural organization of marine organic matter at micro- and nanometre scales. In the future, merging data at different spatial scales, taking into account experimental input on micrometre scale, observations on metre scale and modelling on kilometre scale, will be important for developing sophisticated technological platforms for knowledge transfer, reports and maps applicable for the marine environmental protection and management of the coastal area, especially for tourism, fishery and cruiser trafficking.
Indentation analysis of active viscoelastic microplasmodia of P. polycephalum
NASA Astrophysics Data System (ADS)
Fessel, Adrian; Oettmeier, Christina; Wechsler, Klaus; Döbereiner, Hans-Günther
2018-01-01
Simple organisms like Physarum polycephalum realize complex behavior, such as shortest-path optimization or habituation, via mechanochemical processes rather than by a network of neurons. A full understanding of these phenomena requires detailed investigation of the underlying mechanical properties. To date, micromechanical measurements on P. polycephalum are sparse and lack reproducibility. This prompts the study of microplasmodia, a reproducible and homogeneous form of P. polycephalum that resembles the plasmodial ectoplasm responsible for mechanical stability and generation of forces. We combine investigation of the ultrastructure and dimensions of P. polycephalum with the analysis of data obtained by indentation of microplasmodia, employing a novel nonlinear viscoelastic scaling model that accounts for the finite dimensions of the sample. We identify multi-modal distributions of parameters such as Young's modulus, Poisson's ratio, and relaxation times associated with viscous processes covering five orders of magnitude. The results suggest a characterization of microplasmodia as porous, compressible structures that act like elastic solids with a high Young's modulus on short time scales, whereas on long time scales and upon repeated indentation viscous behavior dominates and the effective modulus is significantly decreased. Furthermore, Young's modulus is found to oscillate in phase with the shape of the microplasmodia, emphasizing that modeling P. polycephalum oscillations as a driven oscillator with constant moduli is not practicable.
A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wan, Lipeng; Wang, Feiyi; Oral, H. Sarp
High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as an exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present preliminary results.
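Why the exponential assumption criticized above matters can be seen in a few lines of Monte Carlo. The sketch below compares a constant-hazard (exponential) model against a mean-matched wear-out (Weibull) model for a pool of drives; the drive count, MTTF and shape parameter are assumptions for illustration.

```python
# Hedged sketch: Monte Carlo comparison of exponential vs. Weibull
# failure models for a pool of storage components.
import numpy as np

rng = np.random.default_rng(4)
n_drives, mttf, horizon = 10_000, 1.0e6, 5 * 8760.0   # hours

exp_fail = rng.exponential(mttf, n_drives)
# Weibull with shape > 1: few early failures, pronounced wear-out.
# Gamma(1 + 1/1.5) ~ 0.9027, so dividing by it matches the mean to mttf.
weib_fail = mttf * rng.weibull(1.5, n_drives) / 0.9027

for name, t in [("exponential", exp_fail), ("weibull", weib_fail)]:
    frac = np.mean(t < horizon)
    print(f"{name:12s}: {100 * frac:.2f}% of drives fail within 5 years")
```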
Mellis, Ian A; Raj, Arjun
2015-10-01
Small-scale molecular systems biology, by which we mean the understanding of how a few parts work together to control a particular biological process, is predicated on the assumption that cellular regulation is arranged in a circuit-like structure. Results from the omics revolution have upset this vision to varying degrees by revealing a high degree of interconnectivity, making it difficult to develop a simple, circuit-like understanding of regulatory processes. We here outline the limitations of the small-scale systems biology approach with examples from research into genetic algorithms, genetics, transcriptional network analysis, and genomics. We also discuss the difficulties associated with deriving true understanding from the analysis of large data sets and propose that the development of new, intelligent, computational tools may point to a way forward. Throughout, we intentionally oversimplify and talk about things in which we have little expertise, and it is likely that many of our arguments are wrong on one level or another. We do believe, however, that developing a true understanding via molecular systems biology will require a fundamental rethinking of our approach, and our goal is to provoke thought along these lines.
A simple phenomenological model for grain clustering in turbulence
NASA Astrophysics Data System (ADS)
Hopkins, Philip F.
2016-01-01
We propose a simple model for density fluctuations of aerodynamic grains, embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ∼ 10⁻⁴–10, ρ̃ ∼ 0–3, α ∼ 10⁻¹⁰–10⁻². Results from 'turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.
Challenges in converting among log scaling methods.
Henry Spelter
2003-01-01
The traditional method of measuring log volume in North America is the board foot log scale, which uses simple assumptions about how much of a log's volume is recoverable. This underestimates the true recovery potential and leads to difficulties in comparing volumes measured with the traditional board foot system and those measured with the cubic scaling systems...
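One reason the conversion is hard is that board-foot log rules and cubic rules do not differ by a fixed factor. The sketch below contrasts the Doyle board-foot rule with Smalian's cubic-foot formula; the choice of rules and the example dimensions are my assumptions for illustration, not the report's data.

```python
# Hedged sketch: a board-foot log rule (Doyle) vs. a cubic rule
# (Smalian), showing the conversion ratio drifting with log size.
import math

def doyle_bf(small_end_diam_in, length_ft):
    """Doyle rule: board feet from small-end diameter (in) and length (ft)."""
    return (small_end_diam_in - 4.0)**2 * length_ft / 16.0

def smalian_cuft(d_small_in, d_large_in, length_ft):
    """Smalian cubic-foot volume from end diameters (in) and length (ft)."""
    a1 = math.pi * (d_small_in / 24.0)**2   # end areas in square feet
    a2 = math.pi * (d_large_in / 24.0)**2
    return 0.5 * (a1 + a2) * length_ft

for d in (8, 12, 20, 30):  # small-end diameters, inches
    bf = doyle_bf(d, 16)
    cf = smalian_cuft(d, d + 2, 16)
    print(f"d={d:2d} in: Doyle {bf:6.1f} bf, cubic {cf:5.1f} cf, "
          f"ratio {bf / cf:4.1f} bf/cf")
```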
An extinction scale-expansion unit for the Beckman DK2 spectrophotometer
Dixon, M.
1967-01-01
The paper describes a simple but accurate unit for the Beckman DK2 recording spectrophotometer, whereby any 0·1 section of the extinction ('absorbance') scale may be expanded tenfold, while preserving complete linearity in extinction. PMID:6048800
A nested observation and model approach to non linear groundwater surface water interactions.
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.
2009-04-01
Surface water quality measurements in The Netherlands are scattered in time and space; water quality status and its variations and trends are therefore difficult to determine. In order to reach the water quality goals of the European Water Framework Directive, we need to improve our understanding of the dynamics of surface water quality and the processes that affect it. In heavily drained lowland catchments, groundwater influences the discharge towards the surface water network in many complex ways. In particular, a strongly seasonal contracting and expanding system of discharging ditches and streams affects discharge and solute transport. At a tube-drained field site, the tube drain flux and the combined flux of all other flow routes toward a 45 m stretch of surface water were measured for a year. Groundwater levels at various locations in the field and the discharge at two nested catchment scales were also monitored. The distinctive response of individual flow routes to rainfall events at the field site allowed us to separate the discharge at a 4 ha catchment and at a 6 km² catchment into flow-route contributions. The results of this nested experimental setup, combined with the results of a distributed hydrological model, have led to the formulation of a process model approach that focuses on the spatial variability of discharge generation driven by temporal and spatial variations in groundwater levels. The main idea of this approach is that discharge is not generated by catchment-average storages or groundwater heads, but mainly by point-scale extremes, i.e. extremely low permeability, extremely high groundwater heads, or extremely low surface elevations, all leading to catchment discharge. We focused on describing the spatial extremes in point-scale storages, and this led to a simple and measurable expression that governs the non-linear groundwater-surface water interaction. We will present the analysis of the field site data to demonstrate the potential of nested-scale, high-frequency observations. The distributed hydrological model results will be used to show transient catchment-scale relations between groundwater levels and discharges. These analyses lead to a simple expression that can describe catchment-scale groundwater-surface water interactions.
Reliability and validity of the work and social adjustment scale in phobic disorders.
Mataix-Cols, David; Cowley, Amy J; Hankins, Matthew; Schneider, Andreas; Bachofen, Martin; Kenwright, Mark; Gega, Lina; Cameron, Rachel; Marks, Isaac M
2005-01-01
The Work and Social Adjustment Scale (WSAS) is a simple widely used 5-item measure of disability whose psychometric properties need more analysis in phobic disorders. The reliability, factor structure, validity, and sensitivity to change of the WSAS were studied in 205 phobic patients (73 agoraphobia, 62 social phobia, and 70 specific phobia) who participated in various open and randomized trials of self-exposure therapy. Internal consistency of the WSAS was excellent in all phobics pooled and in agoraphobics and social phobics separately. Principal components analysis extracted a single general factor of disability. Specific phobics gave less consistent ratings across WSAS items, suggesting that some items were less relevant to their problem. Internal consistency was marginally higher for self-ratings than clinician ratings of the WSAS. Self-ratings and clinician ratings correlated highly though patients tended to rate themselves as more disabled than clinicians did. WSAS total scores reflected differences in phobic severity and improvement with treatment. The WSAS is a valid, reliable, and change-sensitive measure of work/social and other adjustment in phobic disorders, especially in agoraphobia and social phobia.
Time Dependent Data Mining in RAVEN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cogliati, Joshua Joseph; Chen, Jun; Patel, Japan Ketan
RAVEN is a generic software framework for performing parametric and probabilistic analysis based on the response of complex system codes. The goal of this type of analysis is to understand the response of such systems, in particular with respect to their probabilistic behavior, and to understand their predictability and its drivers, or lack thereof. Data mining capabilities are the cornerstone of such deep learning of system responses, and for this reason static data mining capabilities were added last fiscal year (FY15). In real applications dealing with complex multi-scale, multi-physics systems, it seems natural that, during transients, the relevance of the different scales and physics would evolve over time. For this reason the data mining capabilities have been extended to allow their application over time. This report describes the newly implemented RAVEN capabilities, with several simple analytical tests to explain their application and highlight the proper implementation. The report concludes with the application of those newly implemented capabilities to the analysis of a simulation performed with the Bison code.
SOCRAT Platform Design: A Web Architecture for Interactive Visual Analytics Applications
Kalinin, Alexandr A.; Palanimalai, Selvam; Dinov, Ivo D.
2018-01-01
The modern web is a successful platform for large scale interactive web applications, including visualizations. However, there are no established design principles for building complex visual analytics (VA) web applications that could efficiently integrate visualizations with data management, computational transformation, hypothesis testing, and knowledge discovery. This imposes a time-consuming design and development process on many researchers and developers. To address these challenges, we consider the design requirements for the development of a module-based VA system architecture, adopting existing practices of large scale web application development. We present the preliminary design and implementation of an open-source platform for Statistics Online Computational Resource Analytical Toolbox (SOCRAT). This platform defines: (1) a specification for an architecture for building VA applications with multi-level modularity, and (2) methods for optimizing module interaction, re-usage, and extension. To demonstrate how this platform can be used to integrate a number of data management, interactive visualization, and analysis tools, we implement an example application for simple VA tasks including raw data input and representation, interactive visualization and analysis. PMID:29630069
Suspended Microchannel Resonators for Ultralow Volume Universal Detection
Son, Sungmin; Grover, William H.; Burg, Thomas P.; Manalis, Scott R.
2008-01-01
Universal detectors that maintain high sensitivity as the detection volume is reduced to the subnanoliter scale can enhance the utility of miniaturized total analysis systems (μ-TAS). Here the unique scaling properties of the suspended microchannel resonator (SMR) are exploited to show universal detection in a 10 pL analysis volume with a density detection limit of ∼1 μg/cm3 (10 Hz bandwidth) and a dynamic range of 6 decades. Analytes with low UV extinction coefficients such as polyethylene glycol (PEG) 8 kDa, glucose, and glycine are measured with molar detection limits of 0.66, 13.5, and 31.6 μM, respectively. To demonstrate the potential for real-time monitoring, gel filtration chromatography was used to separate different molecular weights of PEG as the SMR acquired a chromatogram by measuring the eluate density. This work suggests that the SMR could offer a simple and sensitive universal detector for various separation systems from liquid chromatography to capillary electrophoresis. Moreover, since the SMR is itself a microfluidic channel, it can be directly integrated into μ-TAS without compromising overall performance. PMID:18489125
Predictive model for convective flows induced by surface reactivity contrast
NASA Astrophysics Data System (ADS)
Davidson, Scott M.; Lammertink, Rob G. H.; Mani, Ali
2018-05-01
Concentration gradients in a fluid adjacent to a reactive surface due to contrast in surface reactivity generate convective flows. These flows result from contributions by electro- and diffusio-osmotic phenomena. In this study, we have analyzed reactive patterns that release and consume protons, analogous to bimetallic catalytic conversion of peroxide. Similar systems have typically been studied using either scaling analysis to predict trends or costly numerical simulation. Here, we present a simple analytical model, bridging the gap in quantitative understanding between scaling relations and simulations, to predict the induced potentials and consequent velocities in such systems without the use of any fitting parameters. Our model is tested against direct numerical solutions to the coupled Poisson, Nernst-Planck, and Stokes equations. Predicted slip velocities from the model and simulations agree to within a factor of ≈2 over a multiple order-of-magnitude change in the input parameters. Our analysis can be used to predict enhancement of mass transport and the resulting impact on overall catalytic conversion, and is also applicable to predicting the speed of catalytic nanomotors.
To what extent does immigration affect inequality?
NASA Astrophysics Data System (ADS)
Berman, Yonatan; Aste, Tomaso
2016-11-01
The current surge in income and wealth inequality in most western countries, along with the continuous immigration to those countries, demands a quantitative analysis of the effect immigration has on economic inequality. This paper presents a quantitative analysis framework providing a way to calculate this effect. It shows that in most cases the effect of immigration on wealth and income inequality is limited, mainly due to the relatively small scale of immigration waves. For a large-scale flow of immigrants, such as the immigration to the US, the UK and Australia in the past few decades, we estimate that 10%-15% of the increase in wealth and income inequality can be attributed to immigration. The results demonstrate that immigration could possibly decrease inequality substantially if the characteristics of the immigrants resemble those of the destination middle-class population in terms of wealth or income. We empirically found that the simple linear relation ΔS = 0.18ρ roughly describes the increase in the wealth share of the top 10% due to immigration of a fraction ρ of the population.
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
EPE analysis of sub-N10 BEoL flow with and without fully self-aligned via using Coventor SEMulator3D
NASA Astrophysics Data System (ADS)
Franke, Joern-Holger; Gallagher, Matt; Murdoch, Gayle; Halder, Sandip; Juncker, Aurelie; Clark, William
2017-03-01
During the last few decades, the semiconductor industry has been able to scale device performance up while driving costs down. What started off as simple geometrical scaling, driven mostly by advances in lithography, has recently been accompanied by advances in processing techniques and in device architectures. The trend to combine efforts using process technology and lithography is expected to intensify, as further scaling becomes ever more difficult. One promising component of future nodes is "scaling boosters", i.e. processing techniques that enable further scaling. An indispensable component in developing these ever more complex processing techniques is semiconductor process modeling software. Visualization of complex 3D structures in SEMulator3D, along with budget analysis of film thicknesses, CD and etch budgets, allows process integrators to compare flows before any physical wafers are run. Hundreds of "virtual" wafers allow comparison of different processing approaches, along with EUV or DUV patterning options for defined layers and different overlay schemes. This "virtual fabrication" technology produces massively parallel process variation studies that would be highly time-consuming or expensive in experiment. Here, we focus on one particular scaling booster, the fully self-aligned via (FSAV). We compare metal-via-metal (me-via-me) chains with self-aligned and fully self-aligned vias using a calibrated model for imec's N7 BEoL flow. To model overall variability, 3D Monte Carlo modeling of as many variability sources as possible is critical. We use Coventor SEMulator3D to extract minimum me-me distances and contact areas, and show how fully self-aligned vias allow better me-via distance control and tighter via-me contact-area variability compared with the standard self-aligned via (SAV) approach.
NASA Astrophysics Data System (ADS)
Weatherill, G. A.; Pagani, M.; Garcia, J.
2016-09-01
The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)
NASA Astrophysics Data System (ADS)
Hancher, M.
2013-12-01
Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and others hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging, as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
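The just-in-time computation model described above is visible in the client API: operations build a server-side expression graph, and nothing is computed until a result is requested. The sketch below shows the usage pattern with the Earth Engine Python client; the dataset and band names are assumptions based on the public catalog, and running it requires an authenticated Earth Engine account.

```python
# Hedged sketch: a cloud-side Landsat 8 median composite and mean NDVI
# over a small region, in the Earth Engine just-in-time style.
import ee

ee.Initialize()
region = ee.Geometry.Rectangle([-122.6, 37.2, -121.8, 37.9])
collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
              .filterBounds(region)
              .filterDate("2021-06-01", "2021-09-01"))

composite = collection.median()                      # built server-side
ndvi = composite.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")
stats = ndvi.reduceRegion(ee.Reducer.mean(), region, scale=30)
print(stats.getInfo())   # only now is the computation actually requested
```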
Assessment of online patient education materials from major ophthalmologic associations.
Huang, Grace; Fang, Christina H; Agarwal, Nitin; Bhagat, Neelakshi; Eloy, Jean Anderson; Langer, Paul D
2015-04-01
Patients are increasingly using the Internet to find medical information, which can be complex and requires a high level of reading comprehension. Online ophthalmologic materials from major ophthalmologic associations should therefore be written at an appropriate reading level. To assess ophthalmologic online patient education materials (PEMs) on ophthalmologic association websites and to determine whether they are above the reading level recommended by the American Medical Association and National Institutes of Health. Descriptive and correlational design. Patient education materials from major ophthalmology websites were downloaded from June 1, 2014, through June 30, 2014, and assessed for level of readability using 10 scales. The Flesch Reading Ease test, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook test, Coleman-Liau Index, Gunning Fog Index, New Fog Count, New Dale-Chall Readability Formula, FORCAST scale, Raygor Readability Estimate Graph, and Fry Readability Graph were used. Text from each article was pasted into Microsoft Word and analyzed using the software Readability Studio professional edition version 2012.1 for Windows. Flesch Reading Ease score, Flesch-Kincaid Grade Level, Simple Measure of Gobbledygook grade, Coleman-Liau Index score, Gunning Fog Index score, New Fog Count, New Dale-Chall Readability Formula score, FORCAST score, Raygor Readability Estimate Graph score, and Fry Readability Graph score. Three hundred thirty-nine online PEMs were assessed. The mean Flesch Reading Ease score was 40.7 (range, 17.0-51.0), which correlates with a difficult level of reading. The mean readability grade levels ranged as follows: 10.4 to 12.6 for the Flesch-Kincaid Grade Level; 12.9 to 17.7 for the Simple Measure of Gobbledygook test; 11.4 to 15.8 for the Coleman-Liau Index; 12.4 to 18.7 for the Gunning Fog Index; 8.2 to 16.0 for the New Fog Count; 11.2 to 16.0 for the New Dale-Chall Readability Formula; 10.9 to 12.5 for the FORCAST scale; 11.0 to 17.0 for the Raygor Readability Estimate Graph; and 12.0 to 17.0 for the Fry Readability Graph. Analysis of variance demonstrated a significant difference (P < .001) between the websites for each reading scale. Online PEMs on major ophthalmologic association websites are written well above the recommended reading level. Consideration should be given to revision of these materials to allow greater comprehension among a wider audience.
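Several of the formulas named above are available in open-source form. A minimal sketch using the textstat package follows (the sample passage is invented; textstat's variants may differ slightly from the Readability Studio implementations used in the study).

```python
# Hedged sketch: scoring a patient-education passage with several of
# the readability formulas named above, via the textstat package.
import textstat

text = (
    "Glaucoma is a group of eye diseases that damage the optic nerve. "
    "Early treatment can often protect your eyes against serious vision loss."
)

print("Flesch Reading Ease :", textstat.flesch_reading_ease(text))
print("Flesch-Kincaid Grade:", textstat.flesch_kincaid_grade(text))
print("SMOG                :", textstat.smog_index(text))
print("Coleman-Liau        :", textstat.coleman_liau_index(text))
print("Gunning Fog         :", textstat.gunning_fog(text))
print("Dale-Chall          :", textstat.dale_chall_readability_score(text))
```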
A simple predictive model for the structure of the oceanic pycnocline
Gnanadesikan
1999-03-26
A simple theory for the large-scale oceanic circulation is developed, relating pycnocline depth, Northern Hemisphere sinking, and low-latitude upwelling to pycnocline diffusivity and Southern Ocean winds and eddies. The results show that Southern Ocean processes help maintain the global ocean structure and that pycnocline diffusion controls low-latitude upwelling.
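A toy version conveys the flavor of such a theory: a steady-state volume balance in which Southern Ocean Ekman inflow minus eddy return flow, plus diffusive low-latitude upwelling, equals Northern Hemisphere sinking, solved for the pycnocline depth D. The functional forms and every number below are assumptions for illustration, not the paper's calibration.

```python
# Hedged sketch: solving a stylized pycnocline-depth volume balance.
from scipy.optimize import brentq

tau, rho, f = 0.1, 1025.0, 1.0e-4       # wind stress, density, |Coriolis|
Lx, Ly = 2.5e7, 1.0e6                   # circumpolar length, eddy scale (m)
A_I, Kv, A = 1000.0, 1.0e-5, 2.0e14     # eddy coeff, diffusivity, basin area
C = 60.0                                # sinking coefficient, tuned so that
                                        # sinking is tens of Sv near D ~ 500 m

def imbalance(D):                       # all terms in m^3/s
    ekman = tau * Lx / (rho * f)        # Southern Ocean Ekman transport
    eddy = A_I * D * Lx / Ly            # eddy return flow
    upwell = Kv * A / D                 # low-latitude diffusive upwelling
    sinking = C * D**2                  # Northern Hemisphere sinking
    return ekman - eddy + upwell - sinking

D = brentq(imbalance, 50.0, 2000.0)     # depth bracket in metres
print(f"equilibrium pycnocline depth ~ {D:.0f} m")
```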
A Simple, Small-Scale Lego Colorimeter with a Light-Emitting Diode (LED) Used as Detector
ERIC Educational Resources Information Center
Asheim, Jonas; Kvittingen, Eivind V.; Kvittingen, Lise; Verley, Richard
2014-01-01
This article describes how to construct a simple, inexpensive, and robust colorimeter from a few Lego bricks, in which one light-emitting diode (LED) is used as a light source and a second LED as a light detector. The colorimeter is suited to various grades and curricula.
NASA Astrophysics Data System (ADS)
Akbar, Ruzbeh; Short Gianotti, Daniel; McColl, Kaighin A.; Haghighi, Erfan; Salvucci, Guido D.; Entekhabi, Dara
2018-03-01
The soil water content profile is often well correlated with the soil moisture state near the surface. They share mutual information such that analysis of surface-only soil moisture is, at times and in conjunction with precipitation information, reflective of deeper soil fluxes and dynamics. This study examines the characteristic length scale, or effective depth Δz, of a simple active hydrological control volume. The volume is described only by precipitation inputs and soil water dynamics evident in surface-only soil moisture observations. To proceed, first an observation-based technique is presented to estimate the soil moisture loss function based on analysis of soil moisture dry-downs and its successive negative increments. Then, the length scale Δz is obtained via an optimization process wherein the root-mean-squared (RMS) differences between surface soil moisture observations and its predictions based on water balance are minimized. The process is entirely observation-driven. The surface soil moisture estimates are obtained from the NASA Soil Moisture Active Passive (SMAP) mission and precipitation from the gauge-corrected Climate Prediction Center daily global precipitation product. The length scale Δz exhibits a clear east-west gradient across the contiguous United States (CONUS), such that large Δz depths (>200 mm) are estimated in wetter regions with larger mean precipitation. The median Δz across CONUS is 135 mm. The spatial variance of Δz is predominantly explained and influenced by precipitation characteristics. Soil properties, especially texture in the form of sand fraction, as well as the mean soil moisture state have a lesser influence on the length scale.
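The optimization step lends itself to a compact numerical sketch: choose the effective depth Δz that minimizes the RMS difference between "observed" surface soil moisture and a one-bucket water balance driven by precipitation and a loss function. Everything below (the synthetic precipitation, the quadratic loss function, the moisture bounds) is an assumption for illustration, not the SMAP analysis itself.

```python
# Hedged sketch: recovering an effective depth dz (mm) by minimizing
# RMS misfit between observations and a simple water balance.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(5)
days = 365
precip = rng.gamma(0.3, 10.0, days)            # mm/day, synthetic

def loss(theta):                               # assumed loss function, mm/day
    return 40.0 * theta**2

def simulate(dz):
    theta = np.empty(days)
    theta[0] = 0.25
    for t in range(1, days):
        dtheta = (precip[t] - loss(theta[t - 1])) / dz
        theta[t] = np.clip(theta[t - 1] + dtheta, 0.02, 0.45)
    return theta

obs = simulate(135.0) + rng.normal(0, 0.01, days)   # "truth" dz = 135 mm
res = minimize_scalar(lambda dz: np.sqrt(np.mean((simulate(dz) - obs)**2)),
                      bounds=(20.0, 500.0), method="bounded")
print(f"recovered effective depth ~ {res.x:.0f} mm")
```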
Brennan, Paul M; Murray, Gordon D; Teasdale, Graham M
2018-06-01
OBJECTIVE Glasgow Coma Scale (GCS) scores and pupil responses are key indicators of the severity of traumatic brain damage. The aim of this study was to determine what information would be gained by combining these indicators into a single index and to explore the merits of different ways of achieving this. METHODS Information about early GCS scores, pupil responses, late outcomes on the Glasgow Outcome Scale, and mortality were obtained at the individual patient level by reviewing data from the CRASH (Corticosteroid Randomisation After Significant Head Injury; n = 9,045) study and the IMPACT (International Mission for Prognosis and Clinical Trials in TBI; n = 6,855) database. These data were combined into a pooled data set for the main analysis. Methods of combining the Glasgow Coma Scale and pupil response data varied in complexity from using a simple arithmetic score (GCS score [range 3-15] minus the number of nonreacting pupils [0, 1, or 2]), which we call the GCS-Pupils score (GCS-P; range 1-15), to treating each factor as a separate categorical variable. The content of information about patient outcome in each of these models was evaluated using Nagelkerke's R². RESULTS Separately, the GCS score and pupil response were each related to outcome. Adding information about the pupil response to the GCS score increased the information yield. The performance of the simple GCS-P was similar to the performance of more complex methods of evaluating traumatic brain damage. The relationship between decreases in the GCS-P and deteriorating outcome was seen across the complete range of possible scores. The additional 2 lowest points offered by the GCS-Pupils scale (GCS-P 1 and 2) extended the information about injury severity from a mortality rate of 51% and an unfavorable outcome rate of 70% at GCS score 3 to a mortality rate of 74% and an unfavorable outcome rate of 90% at GCS-P 1. The paradoxical finding that GCS score 4 was associated with a worse outcome than GCS score 3 was not seen when using the GCS-P. CONCLUSIONS A simple arithmetic combination of the GCS score and pupillary response, the GCS-P, extends the information provided about patient outcome to an extent comparable to that obtained using more complex methods. The greater range of injury severities that are identified and the smoothness of the stepwise pattern of outcomes across the range of scores may be useful in evaluating individual patients and identifying patient subgroups. The GCS-P may be a useful platform onto which information about other key prognostic features can be added in a simple format likely to be useful in clinical practice.
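Since the index is defined arithmetically in the abstract, it reduces to a one-line computation; the sketch below simply encodes that definition with input validation (the function name and checks are mine, not from the paper).

```python
# Hedged sketch: the GCS-Pupils (GCS-P) score as described above,
# i.e. the GCS total minus the number of nonreacting pupils.
def gcs_pupils(gcs_total: int, nonreacting_pupils: int) -> int:
    """GCS-P = GCS (3-15) - nonreacting pupils (0-2); range 1-15."""
    if not 3 <= gcs_total <= 15:
        raise ValueError("GCS total must be 3-15")
    if nonreacting_pupils not in (0, 1, 2):
        raise ValueError("nonreacting pupils must be 0, 1, or 2")
    return gcs_total - nonreacting_pupils

print(gcs_pupils(3, 2))   # 1, the new lowest point of the extended scale
```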
Optical identification using imperfections in 2D materials
NASA Astrophysics Data System (ADS)
Cao, Yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher S.; Noori, Yasir J.; Bernardo-Gavito, Ramón; Shahrjerdi, Davood; Roedig, Utz; Fal'ko, Vladimir I.; Young, Robert J.
2017-12-01
The ability to uniquely identify an object or device is important for authentication. Imperfections, locked into structures during fabrication, can be used to provide a fingerprint that is challenging to reproduce. In this paper, we propose a simple optical technique to read unique information from nanometer-scale defects in 2D materials. Imperfections created during crystal growth or fabrication lead to spatial variations in the bandgap of 2D materials that can be characterized through photoluminescence measurements. We show a simple setup involving an angle-adjustable transmission filter, simple optics and a CCD camera can capture spatially-dependent photoluminescence to produce complex maps of unique information from 2D monolayers. Atomic force microscopy is used to verify the origin of the optical signature measured, demonstrating that it results from nanometer-scale imperfections. This solution to optical identification with 2D materials could be employed as a robust security measure to prevent counterfeiting.
Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies
Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.
2009-01-01
Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601
A simple and low-cost platform technology for producing pexiganan antimicrobial peptide in E. coli.
Zhao, Chun-Xia; Dwyer, Mirjana Dimitrijev; Yu, Alice Lei; Wu, Yang; Fang, Sheng; Middelberg, Anton P J
2015-05-01
Antimicrobial peptides, as a new class of antibiotics, have generated tremendous interest as potential alternatives to classical antibiotics. However, the large-scale production of antimicrobial peptides remains a significant challenge. This paper reports a simple and low-cost chromatography-free platform technology for producing antimicrobial peptides in Escherichia coli (E. coli). A fusion protein comprising a variant of the helical biosurfactant protein DAMP4 and the known antimicrobial peptide pexiganan is designed by joining the two polypeptides, at the DNA level, via an acid-sensitive cleavage site. The resulting DAMP4(var)-pexiganan fusion protein expresses at high level and solubility in recombinant E. coli, and a simple heat-purification method was applied to disrupt cells and deliver high-purity DAMP4(var)-pexiganan protein. Simple acid cleavage successfully separated the DAMP4 variant protein and the antimicrobial peptide. Antimicrobial activity tests confirmed that the bio-produced antimicrobial peptide has the same antimicrobial activity as the equivalent product made by conventional chemical peptide synthesis. This simple and low-cost platform technology can be easily adapted to produce other valuable peptide products, and opens a new manufacturing approach for producing antimicrobial peptides at large scale using the tools and approaches of biochemical engineering.
A Method to Constrain Genome-Scale Models with 13C Labeling Data
García Martín, Héctor; Kumar, Vinay Satish; Weaver, Daniel; Ghosh, Amit; Chubukov, Victor; Mukhopadhyay, Aindrila; Arkin, Adam; Keasling, Jay D.
2015-01-01
Current limitations in quantitatively predicting biological behavior hinder our efforts to engineer biological systems to produce biofuels and other desired chemicals. Here, we present a new method for calculating metabolic fluxes, key targets in metabolic engineering, that incorporates data from 13C labeling experiments and genome-scale models. The data from 13C labeling experiments provide strong flux constraints that eliminate the need to assume an evolutionary optimization principle such as the growth rate optimization assumption used in Flux Balance Analysis (FBA). This effective constraining is achieved by making the simple but biologically relevant assumption that flux flows from core to peripheral metabolism and does not flow back. The new method is significantly more robust than FBA with respect to errors in genome-scale model reconstruction. Furthermore, it can provide a comprehensive picture of metabolite balancing and predictions for unmeasured extracellular fluxes as constrained by 13C labeling data. A comparison shows that the results of this new method are similar to those found through 13C Metabolic Flux Analysis (13C MFA) for central carbon metabolism but, additionally, it provides flux estimates for peripheral metabolism. The extra validation gained by matching 48 relative labeling measurements is used to identify where and why several existing COnstraint Based Reconstruction and Analysis (COBRA) flux prediction algorithms fail. We demonstrate how to use this knowledge to refine these methods and improve their predictive capabilities. This method provides a reliable base upon which to improve the design of biological systems. PMID:26379153
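The general pattern of constraining a genome-scale model with externally measured fluxes can be shown with COBRApy. The sketch below clamps two central-carbon reactions to assumed "13C-measured" ranges and then runs plain FBA; note this is not the paper's core-to-periphery method, and the flux values are illustrative.

```python
# Hedged sketch: imposing measured flux bounds on a genome-scale model
# with COBRApy, then solving.
import cobra

model = cobra.io.load_model("textbook")   # small E. coli demo model

# clamp two "measured" central-carbon fluxes to assumed 13C-MFA ranges
measured = {"PGI": (4.5, 5.5), "PFK": (6.0, 8.0)}   # mmol/gDW/h, assumed
for rxn_id, (lb, ub) in measured.items():
    model.reactions.get_by_id(rxn_id).bounds = (lb, ub)

solution = model.optimize()
print("growth rate:", solution.objective_value)
print(solution.fluxes[["PGI", "PFK", "EX_glc__D_e"]])
```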
Bounded extremum seeking for angular velocity actuated control of nonholonomic unicycle
Scheinker, Alexander
2016-08-17
Here, we study control of the angular-velocity actuated nonholonomic unicycle via a simple, bounded extremum seeking controller which is robust to external disturbances and measurement noise. The vehicle performs source seeking despite not having any position information about itself or the source, able only to sense a noise-corrupted scalar value whose extremum coincides with the unknown source location. In order to control the angular velocity, rather than the angular heading directly, a controller is developed such that the closed loop system exhibits multiple time scales and requires an analysis approach expanding the previous work of Kurzweil, Jarnik, Sussmann, and Liu, utilizing weak limits. We provide analytic proof of stability and demonstrate how this simple scheme can be extended to include position-independent source seeking, tracking, and collision avoidance of groups of autonomous vehicles in GPS-denied environments, based only on a measure of distance to an obstacle, which is an especially important feature for an autonomous agent.
Simple models of the hydrofracture process
NASA Astrophysics Data System (ADS)
Marder, M.; Chen, Chih-Hung; Patzek, T.
2015-12-01
Hydrofracturing to recover natural gas and oil relies on the creation of a fracture network with pressurized water. We analyze the creation of the network in two ways. First, we assemble a collection of analytical estimates for pressure-driven crack motion in simple geometries, including crack speed as a function of length, energy dissipated by fluid viscosity and used to break rock, and the conditions under which a second crack will initiate while a first is running. Second, we develop a pseudo-three-dimensional numerical model that couples fluid motion with solid mechanics and can generate branching crack structures not specified in advance. One of our main conclusions is that the typical spacing between fractures must be on the order of a meter, and this conclusion arises in two separate ways. First, it arises from analysis of gas production rates, given the diffusion constants for gas in the rock. Second, it arises from the number of fractures that should be generated given the scale of the affected region and the amounts of water pumped into the rock.
Moreno-Trejo, Maira Berenice; Sánchez-Domínguez, Margarita
2016-01-01
This study describes the synthesis of silver nanoparticles of sizes ranging from 10 nm to 30 nm with a defined (globular) shape, confirmed by UV-vis, SEM, STEM and DLS analysis. This simple and favorable one-step modified Tollens reaction requires no special equipment and no stabilizing or reducing agent other than a solution of purified mesquite gum, and it produces aqueous colloidal dispersions of silver nanoparticles with a stability that exceeds three months, a relatively narrow size distribution, a low tendency to aggregate and a yield of at least 95% in all cases. Reaction times are between 15 min and 60 min to obtain silver nanoparticles in concentrations ranging from 0.1 g to 3 g of Ag per 100 g of reaction mixture. The proposed synthetic method presents a high potential for scale-up, since its production capacity is rather high and the methodology is simple. PMID:28773938
Random sequential adsorption of straight rigid rods on a simple cubic lattice
NASA Astrophysics Data System (ADS)
García, G. D.; Sanchez-Varretti, F. O.; Centres, P. M.; Ramirez-Pastor, A. J.
2015-10-01
Random sequential adsorption of straight rigid rods of length k (k-mers) on a simple cubic lattice has been studied by numerical simulations and finite-size scaling analysis. The k-mers were irreversibly and isotropically deposited into the lattice. The calculations were performed by using a new theoretical scheme, whose accuracy was verified by comparison with rigorous analytical data. The results, obtained for k ranging from 2 to 64, revealed that (i) the jamming coverage for dimers (k = 2) is θj = 0.918388(16); our result corrects the previously reported value of θj = 0.799(2) (Tarasevich and Cherkasova, 2007); (ii) θj is a decreasing function of the k-mer size, with θj(∞) = 0.4045(19) being the limiting coverage for large k's; and (iii) the ratio between percolation threshold and jamming coverage shows a non-universal behavior, monotonically decreasing to zero with increasing k.
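For readers who want to experiment, the following is a minimal Monte Carlo sketch of the deposition process described above (random position and orientation, irreversible placement, periodic boundaries). The crude consecutive-failure stopping rule is only a stand-in for the paper's far more careful jamming determination and finite-size scaling.

```python
# Minimal RSA sketch for k-mers on an L^3 simple cubic lattice
# (illustrative only; lattice size and stopping threshold are arbitrary).
import numpy as np

def rsa_kmers(L=20, k=2, max_failures=100000, seed=0):
    rng = np.random.default_rng(seed)
    lattice = np.zeros((L, L, L), dtype=bool)
    occupied = failures = 0
    while failures < max_failures:
        axis = rng.integers(3)                      # random orientation x/y/z
        pos = rng.integers(0, L, size=3)
        idx = [np.full(k, pos[0]), np.full(k, pos[1]), np.full(k, pos[2])]
        idx[axis] = (pos[axis] + np.arange(k)) % L  # periodic boundaries
        sites = tuple(idx)
        if lattice[sites].any():
            failures += 1                           # overlap: attempt rejected
        else:
            lattice[sites] = True                   # irreversible deposition
            occupied += k
            failures = 0
    return occupied / L**3                          # coverage estimate

print("approx. jamming coverage for dimers:", rsa_kmers(k=2))
```

With k = 2 this crude estimate should land near the corrected dimer value θj ≈ 0.9184 quoted above, modulo finite-size and stopping-rule error.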
Cheng, Ryan R; Hawk, Alexander T; Makarov, Dmitrii E
2013-02-21
Recent experiments showed that the reconfiguration dynamics of unfolded proteins are often adequately described by simple polymer models. In particular, the Rouse model with internal friction (RIF) captures internal friction effects as observed in single-molecule fluorescence correlation spectroscopy (FCS) studies of a number of proteins. Here we use RIF, and its non-free-draining analog, the Zimm model with internal friction, to explore the effect of internal friction on the rate with which intramolecular contacts can be formed within the unfolded chain. Unlike the reconfiguration times inferred from FCS experiments, which depend linearly on the solvent viscosity, the first passage times to form intramolecular contacts are shown to display a more complex viscosity dependence. We further describe scaling relationships obeyed by contact formation times in the limits of high and low internal friction. Our findings provide experimentally testable predictions that can serve as a framework for the analysis of future studies of contact formation in proteins.
Hetherington, James P J; Warner, Anne; Seymour, Robert M
2006-04-22
Systems Biology requires that biological modelling is scaled up from small components to system level. This can produce exceedingly complex models, which obscure understanding rather than facilitate it. The successful use of highly simplified models would resolve many of the current problems faced in Systems Biology. This paper questions whether the conclusions of simple mathematical models of biological systems are trustworthy. The simplification of a specific model of calcium oscillations in hepatocytes is examined in detail, and the conclusions drawn from this scrutiny generalized. We formalize our choice of simplification approach through the use of functional 'building blocks'. A collection of models is constructed, each a progressively more simplified version of a well-understood model. The limiting model is a piecewise linear model that can be solved analytically. We find that, as expected, in many cases the simpler models produce incorrect results. However, when we perform a sensitivity analysis, examining which aspects of the behaviour of the system are controlled by which parameters, the conclusions of the simple model often agree with those of the richer model. The hypothesis that the simplified model retains no information about the real sensitivities of the unsimplified model can be very strongly ruled out by treating the simplification process as a pseudo-random perturbation on the true sensitivity data. We conclude that sensitivity analysis is, therefore, of great importance to the analysis of simple mathematical models in biology. Our comparisons reveal which results of the sensitivity analysis regarding calcium oscillations in hepatocytes are robust to the simplifications necessarily involved in mathematical modelling. For example, we find that if a treatment is observed to strongly decrease the period of the oscillations while increasing the proportion of the cycle during which cellular calcium concentrations are rising, without affecting the inter-spike or maximum calcium concentrations, then it is likely that the treatment is acting on the plasma membrane calcium pump.
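The paper's central recommendation, comparing parameter sensitivities rather than raw outputs across levels of simplification, can be illustrated in a few lines. The two "models" below are hypothetical stand-ins (simple power laws, not the hepatocyte calcium models); the point is the normalized-sensitivity comparison itself.

```python
# Hedged sketch: compare normalized parameter sensitivities of a "rich"
# model and a simplified surrogate. Both functions are placeholders.
import numpy as np

def period_full(p):        # stand-in for the full model's oscillation period
    return p["k1"] ** -0.8 * p["k2"] ** 0.3

def period_simple(p):      # stand-in for the piecewise-linear model's period
    return p["k1"] ** -0.7 * p["k2"] ** 0.25

def log_sensitivities(model, p, h=1e-4):
    """Finite-difference d ln(output)/d ln(parameter) for each parameter."""
    base = model(p)
    sens = {}
    for name in p:
        q = dict(p); q[name] *= 1 + h
        sens[name] = (model(q) - base) / (base * h)
    return sens

params = {"k1": 1.0, "k2": 2.0}
print("full:  ", log_sensitivities(period_full, params))
print("simple:", log_sensitivities(period_simple, params))
# Agreement in sign and rough magnitude, despite disagreeing raw outputs,
# is the kind of robustness the study advocates checking for.
```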
Statistical self-similarity of width function maxima with implications to floods
Veitzer, S.A.; Gupta, V.K.
2001-01-01
Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling, when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.; Kwan, H. W.
1999-01-01
The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.
Xie, Zhengwei; Zhang, Tianyu; Ouyang, Qi
2018-02-01
One of the long-expected goals of genome-scale metabolic modelling is to evaluate the influence of the perturbed enzymes on flux distribution. Both ordinary differential equation (ODE) models and constraint-based models, like Flux Balance Analysis (FBA), lack the capacity to perform metabolic control analysis (MCA) for large-scale networks. In this study, we developed a hyper-cube shrink algorithm (HCSA) to incorporate the enzymatic properties into the FBA model by introducing a pseudo reaction V constrained by enzymatic parameters. Our algorithm uses the enzymatic information quantitatively rather than qualitatively. We first demonstrate the concept by applying HCSA to a simple three-node network, whereby we obtained a good correlation between flux and enzyme abundance. We then validate its prediction by comparison with ODE and with a synthetic network producing violacein and analogues in Saccharomyces cerevisiae. We show that HCSA can mimic the steady-state results of ODE. Finally, we show its capability of predicting the flux distribution in genome-scale networks by applying it to sporulation in yeast. We show the ability of HCSA to operate without biomass flux and perform MCA to determine rate-limiting reactions. The algorithm was implemented in Matlab and C++. The code is available at https://github.com/kekegg/HCSA. xiezhengwei@hsc.pku.edu.cn or qi@pku.edu.cn. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
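The details of HCSA are in the paper and its GitHub repository; as background, the sketch below shows the simplest way quantitative enzymatic information can enter an FBA-type linear program, by capping each flux at an enzyme-capacity bound v_i ≤ kcat_i·E_i. This is not the HCSA algorithm itself, and the network and all numbers are hypothetical.

```python
# Illustration of enzyme-capacity flux caps in a flux-balance LP
# (a generic device, not the paper's hyper-cube shrink algorithm).
import numpy as np
from scipy.optimize import linprog

S = np.array([[1, -1, -1],        # metabolite A: made by v0, used by v1, v2
              [0,  1, -1]])       # metabolite B: made by v1, used by v2

kcat = np.array([50.0, 20.0, 30.0])   # turnover numbers (1/s), hypothetical
E    = np.array([0.10, 0.05, 0.20])   # enzyme abundances, hypothetical
ub   = kcat * E                       # capacity cap per reaction

res = linprog(c=[0, 0, -1],           # maximize the product flux v2
              A_eq=S, b_eq=np.zeros(2),
              bounds=[(0, u) for u in ub])
print("flux distribution v0..v2:", res.x)
# The least-capacitated enzyme (v1 here) limits the whole pathway,
# which is the kind of rate-limiting-step question MCA formalizes.
```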
Transverse beam dynamics in non-linear Fixed Field Alternating Gradient accelerators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haj, Tahar M.; Meot, F.
2016-03-02
In this paper, we present some aspects of the transverse beam dynamics in Fixed Field Ring Accelerators (FFRA): we start from the basic principles in order to derive the linearized transverse particle equations of motion for FFRAs; essentially, FFAGs and cyclotrons are considered here. This is a simple extension of a previous work valid for linear lattices, which we generalized by including the bending terms to ensure its correctness for FFAG lattices. The space charge term (the contribution of the internal coulombian forces of the beam) is included as well, although it is not discussed here. The emphasis is on the scaling FFAG type: a collaboration work is undertaken in view of better understanding the properties of the 150 MeV scaling FFAG at KURRI in Japan, and progress towards high intensity operation. Some results of the benchmarking work between different codes are presented. Analysis of certain types of field imperfections revealed some interesting features about this machine that explain some of the experimental results and generalize the concept of a scaling FFAG to a non-scaling one for which the tune variations obey a well-defined law.
Measuring Networking as an Outcome Variable in Undergraduate Research Experiences
Hanauer, David I.; Hatfull, Graham
2015-01-01
The aim of this paper is to propose, present, and validate a simple survey instrument to measure student conversational networking. The tool consists of five items that cover personal and professional social networks, and its basic principle is the self-reporting of degrees of conversation, with a range of specific discussion partners. The networking instrument was validated in three studies. The basic psychometric characteristics of the scales were established by conducting a factor analysis and evaluating internal consistency using Cronbach’s alpha. The second study used a known-groups comparison and involved comparing outcomes for networking scales between two different undergraduate laboratory courses (one involving a specific effort to enhance networking). The final study looked at potential relationships between specific networking items and the established psychosocial variable of project ownership through a series of binary logistic regressions. Overall, the data from the three studies indicate that the networking scales have high internal consistency (α = 0.88), consist of a unitary dimension, can significantly differentiate between research experiences with low and high networking designs, and are related to project ownership scales. The ramifications of the networking instrument for student retention, the enhancement of public scientific literacy, and the differentiation of laboratory courses are discussed. PMID:26538387
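Cronbach's alpha, the internal-consistency statistic reported above (α = 0.88 for the networking scales), is simple to compute directly. The sketch below uses synthetic 1-5 item ratings driven by a single latent trait, so the five "items" are deliberately correlated; the instrument's real items and data are of course not reproduced here.

```python
# Cronbach's alpha on synthetic rating data (respondents x items).
import numpy as np

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / total-score variance)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))             # one shared "networking" trait
noise = rng.normal(scale=0.7, size=(100, 5))   # item-specific noise
scores = np.clip(np.rint(3 + latent + noise), 1, 5)
print("alpha =", round(cronbach_alpha(scores), 2))  # high: items share one factor
```

Because all five synthetic items load on one latent variable, the result is also consistent with the unitary-dimension finding the factor analysis reports.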
Measuring water fluxes in forests: The need for integrative platforms of analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Eric J.
2016-08-09
To understand the importance of analytical tools such as those provided by Berdanier et al. (2016) in this issue of Tree Physiology, one must understand both the grand challenges facing Earth system modelers, as well as the minutia of engaging in ecophysiological research in the field. It is between these two extremes of scale that many ecologists struggle to translate empirical research into useful conclusions that guide our understanding of how ecosystems currently function and how they are likely to change in the future. Likewise, modelers struggle to build complexity into their models that matches this sophisticated understanding of how ecosystems function, so that necessary simplifications required by large scales do not themselves change the conclusions drawn from these simulations. As both monitoring technology and computational power increase, along with the continual effort in both empirical and modeling research, the gap between the scale of Earth system models and ecological observations continually closes. This, in turn, creates a need for platforms of model–data interaction that incorporate uncertainties in both simulations and observations when scaling from one to the other, moving beyond simple comparisons of monthly or annual sums and means.
Porter, Mark L.; Plampin, Michael; Pawar, Rajesh; ...
2014-12-31
The physicochemical processes associated with CO2 leakage into shallow aquifer systems are complex and span multiple spatial and time scales. Continuum-scale numerical models that faithfully represent the underlying pore-scale physics are required to predict the long-term behavior and aid in risk analysis regarding regulatory and management decisions. This study focuses on benchmarking the numerical simulator, FEHM, with intermediate-scale column experiments of CO2 gas evolution in homogeneous and heterogeneous sand configurations. Inverse modeling was conducted to calibrate model parameters and determine model sensitivity to the observed steady-state saturation profiles. It is shown that FEHM is a powerful tool that is capable of capturing the experimentally observed outflow rates and saturation profiles. Moreover, FEHM captures the transition from single- to multi-phase flow and CO2 gas accumulation at interfaces separating sands. We also derive a simple expression, based on Darcy's law, for the pressure at which free-phase CO2 gas is observed and show that it reliably predicts the location at which single-phase flow transitions to multi-phase flow.
Predicting the propagation of concentration and saturation fronts in fixed-bed filters.
Callery, O; Healy, M G
2017-10-15
The phenomenon of adsorption is widely exploited across a range of industries to remove contaminants from gases and liquids. Much recent research has focused on identifying low-cost adsorbents which have the potential to be used as alternatives to expensive industry standards like activated carbons. Evaluating these emerging adsorbents entails a considerable amount of labor-intensive and costly testing and analysis. This study proposes a simple, low-cost method to rapidly assess novel media for potential use in large-scale adsorption filters. The filter media investigated in this study were low-cost adsorbents which have been found to be capable of removing dissolved phosphorus from solution, namely: i) aluminum drinking water treatment residual, and ii) crushed concrete. Data collected from multiple small-scale column tests was used to construct a model capable of describing and predicting the progression of adsorbent saturation and the associated effluent concentration breakthrough curves. This model was used to predict the performance of long-term, large-scale filter columns packed with the same media. The approach proved highly successful, and just 24-36 h of experimental data from the small-scale column experiments were found to provide sufficient information to predict the performance of the large-scale filters for up to three months. Copyright © 2017 Elsevier Ltd. All rights reserved.
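The study's own model is not reproduced here; as an illustration of the general approach, the widely used Thomas model gives a logistic-shaped breakthrough curve whose parameters can be fitted to small-column data and then re-evaluated for a large filter's media mass and flow rate. All data and parameter values below are hypothetical.

```python
# Fit a Thomas-model breakthrough curve to (synthetic) small-column data.
import numpy as np
from scipy.optimize import curve_fit

def thomas(V, k_th, q0, C0=5.0, m=50.0, Q=0.1):
    """C/C0 vs treated volume V (L). k_th: rate constant (L/mg/h),
    q0: adsorption capacity (mg/g), m: media mass (g), Q: flow (L/h),
    C0: inlet concentration (mg/L). All values illustrative."""
    return 1.0 / (1.0 + np.exp(k_th / Q * (q0 * m - C0 * V)))

# Hypothetical small-column data: treated volume and relative outlet conc.
V_data = np.array([10, 20, 30, 40, 50, 60, 70], dtype=float)
C_data = np.array([0.02, 0.05, 0.15, 0.40, 0.70, 0.88, 0.96])

(k_th, q0), _ = curve_fit(thomas, V_data, C_data, p0=[0.01, 5.0])
print(f"fitted k_th = {k_th:.3g} L/mg/h, q0 = {q0:.3g} mg/g")
# Rescaling: evaluate thomas() again with the large filter's m and Q to
# predict its breakthrough, which mirrors the small-to-large strategy above.
```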
Årestedt, Kristofer; Ågren, Susanna; Flemme, Inger; Moser, Debra K; Strömberg, Anna
2015-08-01
The four-item Control Attitudes Scale (CAS) was developed to measure control perceived by patients with cardiac disease and their family members, but extensive psychometric evaluation has not been performed. The aim was to translate, culturally adapt and psychometrically evaluate the CAS in a Swedish sample of implantable cardioverter defibrillator (ICD) recipients, heart failure (HF) patients and their partners. A sample (n=391) of ICD recipients, HF patients and partners was used. Descriptive statistics, item-total and inter-item correlations, exploratory factor analysis, ordinal regression modelling and Cronbach's alpha were used to validate the CAS. The findings from the factor analyses revealed that the CAS is a multidimensional scale including two factors, Control and Helplessness. The internal consistency was satisfactory for all scales (α=0.74-0.85), except the family version total scale (α=0.62). No differential item functioning was detected, which implies that the CAS can be used to make invariant comparisons between groups of different age and sex. The psychometric properties, together with the simple and short format of the CAS, make it a useful tool for measuring perceived control among patients with cardiac diseases and their family members. When using the CAS, subscale scores should be preferred. © The European Society of Cardiology 2014.
Generalized statistical mechanics approaches to earthquakes and tectonics.
Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios
2016-12-01
Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract a wide scientific interest and are incorporated in the probabilistic forecasting of seismicity in local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale to the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.
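A central object in NESM treatments of earthquake statistics is the q-exponential function, which for q > 1 decays as a power law (capturing the long-range, scale-invariant behavior described above) and recovers the ordinary Boltzmann-Gibbs exponential as q → 1. A minimal numerical illustration, with arbitrarily chosen parameter values:

```python
# The q-exponential decay e_q(-x) used in non-extensive statistical mechanics.
import numpy as np

def q_exp(x, q):
    """Returns [1 + (q - 1) x]^(1/(1 - q)); reduces to exp(-x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.exp(-x)
    base = 1.0 + (q - 1.0) * x
    return np.where(base > 0, base ** (1.0 / (1.0 - q)), 0.0)

x = np.linspace(0, 10, 6)
print("exponential (q = 1):", np.round(q_exp(x, 1.0), 4))
print("q-exponential (q = 1.5):", np.round(q_exp(x, 1.5), 4))  # fat tail
```

The fat tail of the q > 1 curve relative to the exponential is, schematically, what makes q-statistics attractive for fitting earthquake-size and fault-length distributions.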
Divide and Conquer (DC) BLAST: fast and easy BLAST execution within HPC environments
Yim, Won Cheol; Cushman, John C.
2017-07-22
Bioinformatics is currently faced with very large-scale data sets that lead to computational jobs, especially sequence similarity searches, that can take absurdly long times to run. For example, the National Center for Biotechnology Information (NCBI) Basic Local Alignment Search Tool (BLAST and BLAST+) suite, which is by far the most widely used tool for rapid similarity searching among nucleic acid or amino acid sequences, is highly central processing unit (CPU) intensive. While the BLAST suite of programs performs searches very rapidly, it has the potential to be accelerated. In recent years, distributed computing environments have become more widely accessible and used due to the increasing availability of high-performance computing (HPC) systems. Therefore, simple solutions for data parallelization are needed to expedite BLAST and other sequence analysis tools. However, existing software for parallel sequence similarity searches often requires extensive computational experience and skill on the part of the user. In order to accelerate BLAST and other sequence analysis tools, Divide and Conquer BLAST (DCBLAST) was developed to perform NCBI BLAST searches within a cluster, grid, or HPC environment by using a query sequence distribution approach. Scaling from one (1) to 256 CPU cores resulted in significant improvements in processing speed. Thus, DCBLAST dramatically accelerates the execution of BLAST searches using a simple, accessible, robust, and parallel approach. DCBLAST works across multiple nodes automatically and it overcomes the speed limitation of single-node BLAST programs. DCBLAST can be used on any HPC system, can take advantage of hundreds of nodes, and has no output limitations. Thus, this freely available tool simplifies distributed computation pipelines to facilitate the rapid discovery of sequence similarities between very large data sets.
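DCBLAST itself should be obtained from the authors; the toy sketch below only illustrates the underlying query-distribution idea: split a FASTA query file into chunks and run one independent blastn process per chunk. It assumes NCBI BLAST+ is installed and that a database named "mydb" and an input file "queries.fa" exist (all three are assumptions for illustration).

```python
# Naive single-node version of the query-distribution idea behind DCBLAST.
import subprocess
from pathlib import Path

def split_fasta(path, n_chunks):
    """Crude round-robin split of a FASTA file into n_chunks chunk files."""
    records = Path(path).read_text().split(">")[1:]
    files = []
    for i in range(n_chunks):
        chunk = records[i::n_chunks]
        if not chunk:
            continue
        f = Path(f"chunk_{i}.fa")
        f.write_text("".join(">" + r for r in chunk))
        files.append(f)
    return files

# One blastn process per chunk, running concurrently; tabular output.
procs = [subprocess.Popen(
            ["blastn", "-query", str(f), "-db", "mydb",
             "-outfmt", "6", "-out", f"{f.stem}.tsv"])
         for f in split_fasta("queries.fa", n_chunks=4)]
for p in procs:
    p.wait()
# The per-chunk .tsv results can then simply be concatenated, since each
# query is searched independently against the same database.
```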
NASA Astrophysics Data System (ADS)
Laiolo, P.; Gabellani, S.; Campo, L.; Silvestro, F.; Delogu, F.; Rudari, R.; Pulvirenti, L.; Boni, G.; Fascetti, F.; Pierdicca, N.; Crapolicchio, R.; Hasenauer, S.; Puca, S.
2016-06-01
The reliable estimation of hydrological variables in space and time is of fundamental importance in operational hydrology to improve flood predictions and the description of the hydrological cycle. Nowadays, remotely sensed data offer a chance to improve hydrological models, especially in environments with scarce ground-based data. The aim of this work is to update the state variables of a physically based, distributed and continuous hydrological model using four different satellite-derived datasets (three soil moisture products and a land surface temperature measurement) and one soil moisture analysis to evaluate, even with a non-optimal technique, the impact on the hydrological cycle. The experiments were carried out for a small catchment in the northern part of Italy for the period July 2012-June 2013. The products were pre-processed according to their own characteristics and then assimilated into the model using a simple nudging technique. The benefits on the model predictions of discharge were tested against observations. The analysis showed a general improvement of the model discharge predictions, even with a simple assimilation technique, for all the assimilation experiments; the Nash-Sutcliffe model efficiency coefficient increased from 0.6 (relative to the model without assimilation) to 0.7, and errors on discharge were reduced by up to 10%. An added value to the model was found in the rainy season (autumn): all the assimilation experiments reduced the errors by up to 20%. This demonstrates that the discharge prediction of a distributed hydrological model, which works at fine scale resolution in a small basin, can be improved with the assimilation of coarse-scale satellite-derived data.
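Nudging (Newtonian relaxation) is among the simplest assimilation schemes: the model state is pulled toward each observation by a fixed gain. A minimal sketch, with a purely illustrative gain and soil moisture values:

```python
# Newtonian relaxation ("nudging") of a model state toward observations.
def nudge(model_state, observation, gain=0.3):
    """gain = 0 ignores the observation; gain = 1 replaces the state with it.
    The gain value here is illustrative, not the paper's setting."""
    return model_state + gain * (observation - model_state)

state = 0.25                        # model soil moisture (volumetric fraction)
for obs in [0.32, 0.30, 0.27]:      # successive satellite retrievals
    state = nudge(state, obs)
    print(round(state, 3))
```

The appeal, as the abstract notes, is that even this non-optimal update (no error covariances, unlike a Kalman filter) can measurably improve downstream discharge predictions.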
The life of a meander bend: Connecting shape and dynamics via analysis of a numerical model
NASA Astrophysics Data System (ADS)
Schwenk, Jon; Lanzoni, Stefano; Foufoula-Georgiou, Efi
2015-04-01
Analysis of bend-scale meandering river dynamics is a problem of theoretical and practical interest. This work introduces a method for extracting and analyzing the history of individual meander bends from inception until cutoff (called "atoms") by tracking backward through time the set of two cutoff nodes in numerical meander migration models. Application of this method to a simplified yet physically based model provides access to previously unavailable bend-scale meander dynamics over long times and at high temporal resolutions. We find that before cutoffs, the intrinsic model dynamics invariably simulate a prototypical cutoff atom shape we dub simple. Once perturbations from cutoffs occur, two other archetypal cutoff planform shapes emerge called long and round that are distinguished by a stretching along their long and perpendicular axes, respectively. Three measures of meander migration—growth rate, average migration rate, and centroid migration rate—are introduced to capture the dynamic lives of individual bends and reveal that similar cutoff atom geometries share similar dynamic histories. Specifically, through the lens of the three shape types, simples are seen to have the highest growth and average migration rates, followed by rounds, and finally longs. Using the maximum average migration rate as a metric describing an atom's dynamic past, we show a strong connection between it and two metrics of cutoff geometry. This result suggests both that early formative dynamics may be inferred from static cutoff planforms and that there exists a critical period early in a meander bend's life when its dynamic trajectory is most sensitive to cutoff perturbations. An example of how these results could be applied to Mississippi River oxbow lakes with unknown historic dynamics is shown. The results characterize the underlying model and provide a framework for comparisons against more complex models and observed dynamics.
The cross-over to magnetostrophic convection in planetary dynamo systems
Aurnou, J. M.; King, E. M.
2017-01-01
Global scale magnetostrophic balance, in which Lorentz and Coriolis forces comprise the leading-order force balance, has long been thought to describe the natural state of planetary dynamo systems. This argument arises from consideration of the linear theory of rotating magnetoconvection. Here we test this long-held tenet by directly comparing linear predictions against dynamo modelling results. This comparison shows that dynamo modelling results are not typically in the global magnetostrophic state predicted by linear theory. Then, in order to estimate at what scale (if any) magnetostrophic balance will arise in nonlinear dynamo systems, we carry out a simple scaling analysis of the Elsasser number Λ, yielding an improved estimate of the ratio of Lorentz and Coriolis forces. From this, we deduce that there is a magnetostrophic cross-over length scale, L_X ≈ (Λ_o^2/Rm_o)D, where Λ_o is the linear (or traditional) Elsasser number, Rm_o is the system scale magnetic Reynolds number and D is the length scale of the system. On scales well above L_X, magnetostrophic convection dynamics should not be possible. Only on scales smaller than L_X should it be possible for the convective behaviours to follow the predictions for the magnetostrophic branch of convection. Because L_X is significantly smaller than the system scale in most dynamo models, their large-scale flows should be quasi-geostrophic, as is confirmed in many dynamo simulations. Estimating Λ_o ≃ 1 and Rm_o ≃ 10^3 in Earth's core, the cross-over scale is approximately 1/1000 that of the system scale, suggesting that magnetostrophic convection dynamics exists in the core only on small scales below those that can be characterized by geomagnetic observations. PMID:28413338
Geo-Ontologies Are Scale Dependent
NASA Astrophysics Data System (ADS)
Frank, A. U.
2009-04-01
Philosophers aim at a single ontology that describes "how the world is"; for information systems we aim only at ontologies that describe a conceptualization of reality (Guarino 1995; Gruber 2005). A conceptualization of the world implies a spatial and temporal scale: what are the phenomena, the objects and the speed of their change? Few articles (Reitsma et al. 2003) seem to address the fact that an ontology is scale-specific (though many articles indicate that ontologies are scale-free in another sense, namely that they are scale-free in the link densities between concepts). The scale in the conceptualization can be linked to the observation process. The extent of the support of the physical observation instrument and the sampling theorem indicate what level of detail we find in a dataset. These rules apply to remote sensing and sensor networks alike. An ontology of observations must include scale or level of detail, and concepts derived from observations should carry this relation forward. A simple example: in a high-resolution remote sensing image, agricultural plots and the roads between them are shown; at lower resolution, only the plots, and not the roads, are visible. This gives two ontologies, one with plots and roads, the other with plots only. Note that a neighborhood relation in the two different ontologies also yields different results. References: Gruber, T. (2005). "TagOntology - a way to agree on the semantics of tagging data." Retrieved October 29, 2005, from http://tomgruber.org/writing/tagontology-tagcapm-talk.pdf. Guarino, N. (1995). "Formal Ontology, Conceptual Analysis and Knowledge Representation." International Journal of Human and Computer Studies, Special Issue on Formal Ontology, Conceptual Analysis and Knowledge Representation, edited by N. Guarino and R. Poli, 43(5/6). Reitsma, F. and T. Bittner (2003). "Process, Hierarchy, and Scale." Spatial Information Theory: Cognitive and Computational Foundations of Geographic Information Science, International Conference COSIT '03.
Konik, Robert M.; Sfeir, Matthew Y.; Misewich, James A.
2015-02-17
We demonstrate that a non-perturbative framework for the treatment of the excitations of single walled carbon nanotubes based upon a field theoretic reduction is able to accurately describe experimental observations of the absolute values of excitonic energies. This theoretical framework yields a simple scaling function from which the excitonic energies can be read off. This scaling function is primarily determined by a single parameter, the charge Luttinger parameter of the tube, which is in turn a function of the tube chirality, dielectric environment, and the tube's dimensions, thus expressing disparate influences on the excitonic energies in a unified fashion. As a result, we test this theory explicitly on the data reported in [NanoLetters 5, 2314 (2005)] and [Phys. Rev. B 82, 195424 (2010)] and so demonstrate the method works over a wide range of reported excitonic spectra.
[Study on the related factors of suicidal ideation in college undergraduates].
Gao, Hong-sheng; Qu, Cheng-yi; Miao, Mao-hua
2003-09-01
To evaluate psychosocial factors and patterns associated with suicidal ideation among undergraduates in Shanxi province. Four thousand eight hundred and eighty-two undergraduates in Shanxi province were investigated using multistage stratified random cluster sampling. Factors associated with suicidal ideation were analyzed with logistic regression and path analysis using scores on the Beck Scale for Suicide Ideation (BSSI), the Suicide Attitude Questionnaire (QSA), the Adolescent Self-Rate Life Events Check List (ASLEC), the DSQ, the Social Support Rating Scale, the SCL-90, the Simple Coping Modes Questionnaire and the EPQ. A tendency toward psychological disorder was the major factor. Negative life events did not directly affect suicidal ideation, but personality did affect it, directly or indirectly, through coping and defensive responses. Personality played a stable, fundamental role while life events were minor but "triggering" agents. A disposition toward mental disturbance seemed to be the principal factor related to suicidal ideation. The three factors above acted in concert to produce suicidal ideation.
Material Model Evaluation of a Composite Honeycomb Energy Absorber
NASA Technical Reports Server (NTRS)
Jackson, Karen E.; Annett, Martin S.; Fasanella, Edwin L.; Polanco, Michael A.
2012-01-01
A study was conducted to evaluate four different material models in predicting the dynamic crushing response of solid-element-based models of a composite honeycomb energy absorber, designated the Deployable Energy Absorber (DEA). Dynamic crush tests of three DEA components were simulated using the nonlinear, explicit transient dynamic code, LS-DYNA. In addition, a full-scale crash test of an MD-500 helicopter, retrofitted with DEA blocks, was simulated. The four material models used to represent the DEA included: *MAT_CRUSHABLE_FOAM (Mat 63), *MAT_HONEYCOMB (Mat 26), *MAT_SIMPLIFIED_RUBBER/FOAM (Mat 181), and *MAT_TRANSVERSELY_ANISOTROPIC_CRUSHABLE_FOAM (Mat 142). Test-analysis calibration metrics included simple percentage error comparisons of initial peak acceleration, sustained crush stress, and peak compaction acceleration of the DEA components. In addition, the Roadside Safety Verification and Validation Program (RSVVP) was used to assess similarities and differences between the experimental and analytical curves for the full-scale crash test.
Climate Change Impacts at Department of Defense
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kotamarthi, Rao; Wang, Jiali; Zoebel, Zach
This project is aimed at providing the U.S. Department of Defense (DoD) with a comprehensive analysis of the uncertainty associated with generating climate projections at the regional scale that can be used by stakeholders and decision makers to quantify and plan for the impacts of future climate change at specific locations. The merits and limitations of commonly used downscaling models, ranging from simple to complex, are compared, and their appropriateness for application at installation scales is evaluated. Downscaled climate projections are generated at selected DoD installations using dynamic and statistical methods, with an emphasis on generating probability distributions of climate variables and their associated uncertainties. Site selection and the selection of variables and parameters for downscaling were based on a comprehensive understanding of the current and projected roles that weather and climate play in operating, maintaining, and planning DoD facilities and installations.
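Among the simpler statistical downscaling methods such a comparison would cover is empirical quantile mapping, in which coarse-model values are corrected through the historical model-versus-observation quantile relationship. The self-contained sketch below uses synthetic precipitation series; quantile mapping is a standard technique named here as an example, not as this project's specific method.

```python
# Empirical quantile mapping: map each future model value through the
# historical model CDF, then read off the matching observed quantile.
import numpy as np

def quantile_map(model_future, model_hist, obs_hist):
    ranks = np.searchsorted(np.sort(model_hist), model_future) / len(model_hist)
    ranks = np.clip(ranks, 0.0, 1.0)          # empirical CDF positions
    return np.quantile(obs_hist, ranks)       # corresponding observed values

rng = np.random.default_rng(2)
model_hist   = rng.gamma(2.0, 2.0, 1000)      # coarse-model "precipitation"
obs_hist     = rng.gamma(2.0, 3.0, 1000)      # wetter local station record
model_future = rng.gamma(2.0, 2.4, 5)         # shifted future climate sample
print(np.round(quantile_map(model_future, model_hist, obs_hist), 2))
```

Because the correction is built from whole distributions rather than means, it naturally supports the probability-distribution framing of uncertainty emphasized above.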
Stability and Interaction of Coherent Structure in Supersonic Reactive Wakes
NASA Technical Reports Server (NTRS)
Menon, Suresh
1983-01-01
A theoretical formulation and analysis is presented for a study of the stability and interaction of coherent structure in reacting free shear layers. The physical problem under investigation is a premixed hydrogen-oxygen reacting shear layer in the wake of a thin flat plate. The coherent structure is modeled as a periodic disturbance and its stability is determined by the application of linearized hydrodynamic stability theory, which results in a generalized eigenvalue problem for reactive flows. Detailed stability analysis of the reactive wake for neutral, symmetrical and antisymmetrical disturbances is presented. Reactive stability criteria are shown to be quite different from classical non-reactive stability criteria. The interaction between the mean flow, coherent structure and fine-scale turbulence is theoretically formulated using the von Kármán integral technique. Both time-averaging and conditional phase averaging are necessary to separate the three types of motion. The resulting integro-differential equations can then be solved subject to initial conditions with appropriate shape functions. In the laminar flow transition region of interest, the spatial interaction between the mean motion and coherent structure is calculated for both non-reactive and reactive conditions and compared with experimental data wherever available. The fine-scale turbulent motion is determined by applying the integral analysis to the fluctuation equations. Since at present this turbulence model is still untested, turbulence is modeled in the interaction problem by a simple algebraic eddy viscosity model. The applicability of the integral turbulence model formulated here is studied parametrically by integrating these equations for the simple case of self-similar mean motion with assumed shape functions. The effect of the motion of the coherent structure is studied and very good agreement is obtained with previous experimental and theoretical works for non-reactive flow. For the reactive case, lack of experimental data made direct comparison difficult. It was determined that the growth rate of the disturbance amplitude is lower for the reactive case. The results indicate that the reactive flow stability is in qualitative agreement with experimental observation.
Ono, Daiki; Bamba, Takeshi; Oku, Yuichi; Yonetani, Tsutomu; Fukusaki, Eiichiro
2011-09-01
In this study, we constructed prediction models by metabolic fingerprinting of fresh green tea leaves using Fourier transform near-infrared (FT-NIR) spectroscopy and partial least squares (PLS) regression analysis to objectively optimize the steaming process conditions in green tea manufacture. The steaming process is the most important step for manufacturing high quality green tea products. However, the parameter setting of the steamer is currently determined subjectively by the manufacturer. Therefore, a simple and robust system that can be used to objectively set the steaming process parameters is necessary. We focused on FT-NIR spectroscopy because of its simple operation, quick measurement, and low running costs. After removal of noise in the spectral data by principal component analysis (PCA), PLS regression analysis was performed using spectral information as independent variables, and the steaming parameters set by experienced manufacturers as dependent variables. The prediction models were successfully constructed with satisfactory accuracy. Moreover, the results of the demonstration experiment suggested that the green tea steaming process parameters could be predicted on a larger manufacturing scale. This technique will contribute to improvement of the quality and productivity of green tea because it can objectively optimize the complicated green tea steaming process and will be suitable for practical use in green tea manufacture. Copyright © 2011 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.
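As an illustration of the fingerprinting workflow (spectra in, process parameter out), the sketch below fits a PLS regression on synthetic "spectra" using scikit-learn, which stands in for whatever chemometrics software the authors used; the informative wavelength band, dimensions, and all values are invented.

```python
# PLS regression from synthetic NIR-like spectra to a process parameter.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 200
X = rng.normal(size=(n_samples, n_wavelengths))          # "spectra"
true_w = np.zeros(n_wavelengths)
true_w[40:60] = 0.5                                      # informative band
y = X @ true_w + rng.normal(scale=0.1, size=n_samples)   # steaming parameter

pls = PLSRegression(n_components=5).fit(X, y)
y_hat = pls.predict(X).ravel()
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print("R^2 on training spectra:", round(r2, 3))
print("predicted setting for a new leaf sample:",
      round(float(pls.predict(X[:1]).ravel()[0]), 2))
```

PLS is the natural choice here because the predictors (hundreds of correlated wavelengths) far outnumber the samples, a regime where ordinary least squares fails.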
Magnetic resonance imaging in laboratory petrophysical core analysis
NASA Astrophysics Data System (ADS)
Mitchell, J.; Chandrasekera, T. C.; Holland, D. J.; Gladden, L. F.; Fordham, E. J.
2013-05-01
Magnetic resonance imaging (MRI) is a well-known technique in medical diagnosis and materials science. In the more specialized arena of laboratory-scale petrophysical rock core analysis, the role of MRI has undergone a substantial change in focus over the last three decades. Initially, alongside the continual drive to exploit higher magnetic field strengths in MRI applications for medicine and chemistry, the same trend was followed in core analysis. However, the spatial resolution achievable in heterogeneous porous media is inherently limited due to the magnetic susceptibility contrast between solid and fluid. As a result, imaging resolution at the length-scale of typical pore diameters is not practical and so MRI of core-plugs has often been viewed as an inappropriate use of expensive magnetic resonance facilities. Recently, there has been a paradigm shift in the use of MRI in laboratory-scale core analysis. The focus is now on acquiring data in the laboratory that are directly comparable to data obtained from magnetic resonance well-logging tools (i.e., a common physics of measurement). To maintain consistency with well-logging instrumentation, it is desirable to measure distributions of transverse (T2) relaxation time, the industry-standard metric in well-logging, at the laboratory scale. These T2 distributions can be spatially resolved over the length of a core-plug. The use of low-field magnets in the laboratory environment is optimal for core analysis not only because the magnetic field strength is closer to that of well-logging tools, but also because the magnetic susceptibility contrast is minimized, allowing the acquisition of quantitative image voxel (or pixel) intensities that are directly scalable to liquid volume. Beyond simple determination of macroscopic rock heterogeneity, it is possible to utilize the spatial resolution for monitoring forced displacement of oil by water or chemical agents, determining capillary pressure curves, and estimating wettability. The history of MRI in petrophysics is reviewed and future directions considered, including advanced data processing techniques such as compressed sensing reconstruction and Bayesian inference analysis of under-sampled data. Although this review focuses on rock core analysis, the techniques described are applicable in a wider context to porous media in general, such as cements, soils, ceramics, and catalytic materials.
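A laboratory T2 distribution is typically recovered by inverting a multi-exponential decay. The sketch below shows one common, simple route, non-negative least squares with a ridge (Tikhonov) penalty, applied to a synthetic two-component decay; production well-logging inversions are considerably more sophisticated, and all parameter values here are illustrative.

```python
# Regularized T2 inversion of a synthetic CPMG-style decay.
import numpy as np
from scipy.optimize import nnls

t  = np.linspace(0.001, 1.0, 200)              # echo times (s)
T2 = np.logspace(-3, 0, 50)                    # candidate T2 grid (s)
K  = np.exp(-t[:, None] / T2[None, :])         # kernel: decay per T2 bin

signal = 0.7 * np.exp(-t / 0.02) + 0.3 * np.exp(-t / 0.3)  # "rock" signal
signal += np.random.default_rng(0).normal(scale=0.01, size=t.size)

lam = 0.1                                      # regularization strength
A = np.vstack([K, lam * np.eye(T2.size)])      # ridge (Tikhonov) penalty rows
b = np.concatenate([signal, np.zeros(T2.size)])
f, _ = nnls(A, b)                              # non-negative T2 amplitudes
print("dominant T2 components (s):", T2[f > 0.05])
```

The non-negativity constraint and the penalty are what keep this classic ill-posed inversion stable; the Bayesian and compressed-sensing methods mentioned above are more principled ways of handling the same ill-posedness.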
ORIGAMI Automator Primer. Automated ORIGEN Source Terms and Spent Fuel Storage Pool Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wieselquist, William A.; Thompson, Adam B.; Bowman, Stephen M.
2016-04-01
Source terms and spent nuclear fuel (SNF) storage pool decay heat load analyses for operating nuclear power plants require a large number of Oak Ridge Isotope Generation and Depletion (ORIGEN) calculations. SNF source term calculations also require a significant amount of bookkeeping to track quantities such as core and assembly operating histories, spent fuel pool (SFP) residence times, heavy metal masses, and enrichments. The ORIGEN Assembly Isotopics (ORIGAMI) module in the SCALE code system provides a simple scheme for entering these data. However, given the large scope of the analysis, extensive scripting is necessary to convert formats and process data to create thousands of ORIGAMI input files (one per assembly) and to process the results into formats readily usable by follow-on analysis tools. This primer describes a project within the SCALE Fulcrum graphical user interface (GUI) called ORIGAMI Automator that was developed to automate the scripting and bookkeeping in large-scale source term analyses. The ORIGAMI Automator enables the analyst to (1) easily create, view, and edit the reactor site and assembly information, (2) automatically create and run ORIGAMI inputs, and (3) analyze the results from ORIGAMI. ORIGAMI Automator uses the standard ORIGEN binary concentrations files produced by ORIGAMI, with concentrations available at all time points in each assembly's life. The GUI plots results such as mass, concentration, activity, and decay heat using a powerful new ORIGEN Post-Processing Utility for SCALE (OPUS) GUI component. This document includes a description and user guide for the GUI, a step-by-step tutorial for a simplified scenario, and appendices that document the file structures used.
Ion transport in complex layered graphene-based membranes with tuneable interlayer spacing.
Cheng, Chi; Jiang, Gengping; Garvey, Christopher J; Wang, Yuanyuan; Simon, George P; Liu, Jefferson Z; Li, Dan
2016-02-01
Investigation of the transport properties of ions confined in nanoporous carbon is generally difficult because of the stochastic nature and distribution of multiscale complex and imperfect pore structures within the bulk material. We demonstrate a combined approach of experiment and simulation to describe the structure of complex layered graphene-based membranes, which allows their use as a unique porous platform to gain unprecedented insights into nanoconfined transport phenomena across the entire sub-10-nm scales. By correlation of experimental results with simulation of concentration-driven ion diffusion through the cascading layered graphene structure with sub-10-nm tuneable interlayer spacing, we are able to construct a robust, representative structural model that allows the establishment of a quantitative relationship among the nanoconfined ion transport properties in relation to the complex nanoporous structure of the layered membrane. This correlation reveals the remarkable effect of the structural imperfections of the membranes on ion transport and particularly the scaling behaviors of both diffusive and electrokinetic ion transport in graphene-based cascading nanochannels as a function of channel size from 10 nm down to subnanometer. Our analysis shows that the range of ion transport effects previously observed in simple one-dimensional nanofluidic systems will translate themselves into bulk, complex nanoslit porous systems in a very different manner, and the complex cascading porous circuities can enable new transport phenomena that are unattainable in simple fluidic systems.
NASA Astrophysics Data System (ADS)
Yan, David; Bazant, Martin Z.; Biesheuvel, P. M.; Pugh, Mary C.; Dawson, Francis P.
2017-03-01
Linear sweep and cyclic voltammetry techniques are important tools for electrochemists and have a variety of applications in engineering. Voltammetry has classically been treated with the Randles-Sevcik equation, which assumes an electroneutral supported electrolyte. In this paper, we provide a comprehensive mathematical theory of voltammetry in electrochemical cells with unsupported electrolytes and for other situations where diffuse charge effects play a role, and present analytical and simulated solutions of the time-dependent Poisson-Nernst-Planck equations with generalized Frumkin-Butler-Volmer boundary conditions for a 1:1 electrolyte and a simple reaction. Using these solutions, we construct theoretical and simulated current-voltage curves for liquid and solid thin films, membranes with fixed background charge, and cells with blocking electrodes. The full range of dimensionless parameters is considered, including the dimensionless Debye screening length (scaled to the electrode separation), Damkohler number (ratio of characteristic diffusion and reaction times), and dimensionless sweep rate (scaled to the thermal voltage per diffusion time). The analysis focuses on the coupling of Faradaic reactions and diffuse charge dynamics, although capacitive charging of the electrical double layers is also studied, for early time transients at reactive electrodes and for nonreactive blocking electrodes. Our work highlights cases where diffuse charge effects are important in the context of voltammetry, and illustrates which regimes can be approximated using simple analytical expressions and which require more careful consideration.
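For orientation, the classical Randles-Sevcik expression mentioned above relates the voltammetric peak current to the sweep rate as i_p = 0.4463 nFAC sqrt(nFvD/RT). A worked example with typical illustrative values (not taken from the paper):

```python
# Randles-Sevcik peak current for a reversible, supported-electrolyte system.
import math

n, F, R, T = 1, 96485.0, 8.314, 298.0   # electrons; C/mol; J/(mol K); K
A = 1e-4                                # electrode area, m^2 (1 cm^2)
C = 1.0                                 # bulk concentration, mol/m^3 (1 mM)
D = 1e-9                                # diffusion coefficient, m^2/s
v = 0.1                                 # sweep rate, V/s

i_p = 0.4463 * n * F * A * C * math.sqrt(n * F * v * D / (R * T))
print(f"peak current: {i_p * 1e6:.0f} microamps")   # roughly 270 uA
```

The sqrt(v) dependence is the classical electroneutral result; the paper's contribution is what replaces this picture when the electrolyte is unsupported and diffuse charge matters.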
A modern approach to superradiance
Endlich, Solomon; Penco, Riccardo
2017-05-10
In this paper, we provide a simple and modern discussion of rotational superradiance based on quantum field theory. We work with an effective theory valid at scales much larger than the size of the spinning object responsible for superradiance. Within this framework, the probability of absorption by an object at rest completely determines the superradiant amplification rate when that same object is spinning. We first discuss in detail superradiant scattering of spin 0 particles with orbital angular momentum ℓ = 1, and then extend our analysis to higher values of orbital angular momentum and spin. Along the way, we provide a simple derivation of vacuum friction, a "quantum torque" acting on spinning objects in empty space. Our results apply not only to black holes but to arbitrary spinning objects. We also discuss superradiant instability due to formation of bound states and, as an illustration, we calculate the instability rate Γ for bound states with massive spin 1 particles. For a black hole with mass M and angular velocity Ω, we find Γ ~ (GMμ)^7 Ω when the particle's Compton wavelength 1/μ is much greater than the size GM of the spinning object. This rate is parametrically much larger than the instability rate for spin 0 particles, which scales like (GMμ)^9 Ω. This enhanced instability rate can be used to constrain the existence of ultralight particles beyond the Standard Model.
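The quoted rates are simple enough to evaluate numerically. The sketch below is a rough order-of-magnitude check of the Γ ~ (GMμ)^7 Ω and (GMμ)^9 Ω scalings for an assumed 10-solar-mass, near-extremal black hole and an assumed ultralight boson mass; order-one prefactors are ignored.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
Msun = 1.989e30    # solar mass, kg
eV = 1.602e-19     # J

def coupling(M_solar, mu_eV):
    """Dimensionless gravitational coupling GM*mu (hbar = c = 1 units),
    i.e. alpha = G M m / (hbar c) in SI quantities."""
    M = M_solar * Msun
    m = mu_eV * eV / c**2
    return G * M * m / (hbar * c)

M_solar, mu_eV = 10.0, 1e-12              # assumed illustrative values
alpha = coupling(M_solar, mu_eV)
Omega = c**3 / (2 * G * M_solar * Msun)   # extremal-Kerr horizon frequency
print(f"GM*mu = {alpha:.3f} (regime requires GM*mu << 1)")
print(f"spin-1 rate ~ alpha^7 * Omega = {alpha**7 * Omega:.2e} 1/s")
print(f"spin-0 rate ~ alpha^9 * Omega = {alpha**9 * Omega:.2e} 1/s")
```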
Ion transport in complex layered graphene-based membranes with tuneable interlayer spacing
Cheng, Chi; Jiang, Gengping; Garvey, Christopher J.; Wang, Yuanyuan; Simon, George P.; Liu, Jefferson Z.; Li, Dan
2016-01-01
Investigation of the transport properties of ions confined in nanoporous carbon is generally difficult because of the stochastic nature and distribution of multiscale complex and imperfect pore structures within the bulk material. We demonstrate a combined approach of experiment and simulation to describe the structure of complex layered graphene-based membranes, which allows their use as a unique porous platform to gain unprecedented insights into nanoconfined transport phenomena across the entire sub-10-nm scale. By correlating experimental results with simulation of concentration-driven ion diffusion through the cascading layered graphene structure with sub-10-nm tuneable interlayer spacing, we are able to construct a robust, representative structural model that allows the establishment of a quantitative relationship between the nanoconfined ion transport properties and the complex nanoporous structure of the layered membrane. This correlation reveals the remarkable effect of the structural imperfections of the membranes on ion transport, and particularly the scaling behaviors of both diffusive and electrokinetic ion transport in graphene-based cascading nanochannels as a function of channel size from 10 nm down to the subnanometer range. Our analysis shows that the range of ion transport effects previously observed in simple one-dimensional nanofluidic systems translate into bulk, complex nanoslit porous systems in a very different manner, and that the complex cascading porous circuitries can enable new transport phenomena that are unattainable in simple fluidic systems. PMID:26933689
Experimental Investigation of the Flow on a Simple Frigate Shape (SFS)
Mora, Rafael Bardera
2014-01-01
Helicopter operations on board ships require special procedures and introduce additional limitations, known as ship-helicopter operational limitations (SHOLs), which are a priority for all navies. This paper presents the main results of an experimental investigation of a simple frigate shape (SFS), a typical case study in experimental and computational aerodynamics. The results are used to assess the flow predicted by the SFS geometry, comparing wind tunnel data from a reduced-scale ship model with full-scale measurements performed on board a real frigate-type ship. PMID:24523646
Estimation of critical behavior from the density of states in classical statistical models
NASA Astrophysics Data System (ADS)
Malakis, A.; Peratzakis, A.; Fytas, N. G.
2004-12-01
We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
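As a concrete illustration of the underlying sampling scheme, here is a minimal Wang-Landau estimator for the 2D Ising density of states, with an optional energy-window restriction standing in for the paper's idea of sampling only the critical part of the energy space. Lattice size, flatness criterion, and final modification factor are illustrative choices, not the paper's settings.

```python
import numpy as np

def wang_landau_ising(L=8, lnf_final=1e-3, flat=0.8, e_window=None, seed=0):
    """Wang-Landau estimate of ln g(E) for the 2D Ising model on an L x L
    periodic lattice. e_window = (Emin, Emax) optionally restricts the
    sampling to part of the energy space, in the spirit of the paper's
    restriction to the critical region."""
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=(L, L))
    E = -int((s * np.roll(s, 1, 0)).sum() + (s * np.roll(s, 1, 1)).sum())
    lng, hist, lnf = {}, {}, 1.0
    while lnf > lnf_final:
        for _ in range(10000):
            i, j = rng.integers(L), rng.integers(L)
            dE = 2 * s[i, j] * (s[(i+1) % L, j] + s[(i-1) % L, j]
                                + s[i, (j+1) % L] + s[i, (j-1) % L])
            Enew = E + dE
            ok = e_window is None or e_window[0] <= Enew <= e_window[1]
            if ok and np.log(rng.random()) < lng.get(E, 0.0) - lng.get(Enew, 0.0):
                s[i, j] *= -1      # accept flip with probability g(E)/g(E')
                E = Enew
            lng[E] = lng.get(E, 0.0) + lnf   # update current (possibly new) E
            hist[E] = hist.get(E, 0) + 1
        counts = np.array(list(hist.values()))
        if counts.min() > flat * counts.mean():  # histogram "flat enough"
            hist, lnf = {}, lnf / 2.0            # refine modification factor
    return lng

lng = wang_landau_ising()
print(f"{len(lng)} energy levels visited; ln g spans "
      f"{max(lng.values()) - min(lng.values()):.1f}")
```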
Formal methods for modeling and analysis of hybrid systems
NASA Technical Reports Server (NTRS)
Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)
2009-01-01
A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.
Visualization techniques for tongue analysis in traditional Chinese medicine
NASA Astrophysics Data System (ADS)
Pham, Binh L.; Cai, Yang
2004-05-01
Visual inspection of the tongue has been an important diagnostic method of Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various visceral cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but the current practice in TCM is mainly experience-based and the quality of the visual inspection varies between individuals. The computerized inspection method provides quantitative models to evaluate color, texture and surface features on the tongue. In this paper, we investigate visualization techniques and processes to allow interactive data analysis, with the aim of merging computerized measurements with human experts' diagnostic variables based on a five-category diagnostic scale: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P) and Colon Cancer (C).
Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it
2012-12-01
A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations due to multi-streaming at small scales and low redshifts.
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2018-01-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation for the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
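A schematic of the simple-scaling IDF construction is sketched below; the Gumbel form for the reference-duration quantile is a common choice in this literature and is used here only for illustration (the fitted parameter values for Madrid are in the paper and are not reproduced).

```latex
% Simple scaling (strict sense): the annual-maximum intensity over
% duration d equals, in distribution, a power-law rescaling of a
% reference duration D:
%   I(d) \,\overset{d}{=}\, (d/D)^{-n}\, I(D)
% so all moments scale with the single exponent n. With a Gumbel fit
% at the reference duration (location \mu_D, scale \sigma_D), the IDF
% master equation for return period T takes the form
\[
  i(d,T) = \left(\frac{d}{D}\right)^{-n}
           \left[\mu_D - \sigma_D
           \ln\!\left(-\ln\!\left(1 - \frac{1}{T}\right)\right)\right].
\]
```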
Self-organized criticality in asymmetric exclusion model with noise for freeway traffic
NASA Astrophysics Data System (ADS)
Nagatani, Takashi
1995-02-01
The one-dimensional asymmetric simple-exclusion model with open boundaries for parallel update is extended to take into account temporary stopping of particles. The model represents the traffic flow on a highway with temporary deceleration of cars. Introducing temporary stopping into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. In the self-organized critical state, start-stop waves (or traffic jams) appear with various sizes (or lifetimes). The typical interval ⟨s⟩ between consecutive jams scales as ⟨s⟩ ≃ L^ν with ν = 0.51 ± 0.05, where L is the system size. It is shown that the cumulative jam-interval distribution N_s(L) satisfies the finite-size scaling form N_s(L) ≃ L^(-ν) f(s/L^ν). Also, the typical lifetime
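A minimal simulation of this extended model is easy to write down. The sketch below implements a parallel-update asymmetric simple exclusion process with open boundaries plus a temporary-stopping rule; the injection, removal, and stopping parameters are assumptions for illustration, not the paper's values.

```python
import numpy as np

def asep_with_stopping(L=200, steps=5000, alpha=0.3, beta=0.3,
                       p_stop=0.1, t_stop=5, seed=1):
    """Parallel-update asymmetric simple exclusion process with open
    boundaries, extended so that a particle free to hop instead stops
    for t_stop steps with probability p_stop (a stand-in for temporary
    deceleration). Returns the occupation history for jam analysis."""
    rng = np.random.default_rng(seed)
    occ = np.zeros(L, dtype=int)     # 1 = car on site
    wait = np.zeros(L, dtype=int)    # remaining stopped time per site
    history = []
    for _ in range(steps):
        new_occ = occ.copy()
        new_wait = np.maximum(wait - 1, 0)
        for i in range(L - 1):       # all hops decided from the old state
            if occ[i] and not occ[i + 1] and wait[i] == 0:
                if rng.random() < p_stop:
                    new_wait[i] = t_stop          # temporarily stopped
                else:
                    new_occ[i], new_occ[i + 1] = 0, 1
        if not occ[0] and rng.random() < alpha:   # injection boundary
            new_occ[0] = 1
        if occ[L - 1] and rng.random() < beta:    # removal boundary
            new_occ[L - 1] = 0
        occ, wait = new_occ, new_wait
        history.append(occ.copy())
    return np.array(history)

h = asep_with_stopping()
print("mean density:", h.mean())
```

Jam statistics can then be read off the returned history as stretches of adjacent occupied sites, and the intervals between consecutive jams can be accumulated to test the ⟨s⟩ ≃ L^ν scaling.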
Amr, Mostafa; Kaliyadan, Feroze; Shams, Tarek
2014-01-01
Skin disorders such as acne, which have significant cosmetic implications, can affect the self-perception of cutaneous body image. There are many scales which measure self-perception of cutaneous body image. We evaluated the use of a simple Cutaneous Body Image (CBI) scale to assess self-perception of body image in a sample of young Arab patients affected with acne. A total of 70 patients with acne answered the CBI questionnaire. The CBI score was correlated with the severity of acne and acne scarring, gender, and history of retinoids use. There was no statistically significant correlation between CBI and the other parameters - gender, acne/acne scarring severity, and use of retinoids. Our study suggests that cutaneous body image perception in Arab patients with acne was not dependent on variables like gender and severity of acne or acne scarring. A simple CBI scale alone is not a sufficiently reliable tool to assess self-perception of body image in patients with acne vulgaris.
NASA Astrophysics Data System (ADS)
Danel, J.-F.; Kazandjian, L.
2018-06-01
It is shown that the equation of state (EOS) and the radial distribution functions obtained by density-functional theory molecular dynamics (DFT-MD) obey a simple scaling law. At given temperature, the thermodynamic properties and the radial distribution functions given by a DFT-MD simulation remain unchanged if the mole fractions of nuclei of given charge and the average volume per atom remain unchanged. A practical interest of this scaling law is to obtain an EOS table for a fluid from that already obtained for another fluid if it has the right characteristics. Another practical interest of this result is that an asymmetric mixture made up of light and heavy atoms requiring very different time steps can be replaced by a mixture of atoms of equal mass, which facilitates the exploration of the configuration space in a DFT-MD simulation. The scaling law is illustrated by numerical results.
Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.
Estevis, Eduardo; Basso, Michael R; Combs, Dennis
2012-01-01
A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
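The two reliable-change computations contrasted in the study can be sketched generically as follows; the reference-sample numbers below are simulated stand-ins, not the study's data.

```python
import numpy as np

def rci_simple(x1, x2, r12, sd1):
    """Reliable-change index from the simple difference:
    (retest - baseline) / SEdiff, with SEdiff = sd1*sqrt(2)*sqrt(1-r12)."""
    se_diff = sd1 * np.sqrt(2.0) * np.sqrt(1.0 - r12)
    return (x2 - x1) / se_diff

def rci_regression(x1, x2, baseline, retest):
    """Regression-based RCI: predict retest from baseline in a reference
    sample, then standardize the observed residual."""
    b, a = np.polyfit(baseline, retest, 1)    # slope, intercept
    resid = retest - (a + b * baseline)
    see = resid.std(ddof=2)                   # standard error of estimate
    return (x2 - (a + b * x1)) / see

# Simulated reference data (assumed, not the study's sample):
rng = np.random.default_rng(0)
baseline = rng.normal(111.6, 15, 54)
retest = 0.9 * baseline + 18 + rng.normal(0, 4, 54)  # built-in practice gain
print(rci_simple(x1=110, x2=122, r12=0.95, sd1=15.0))   # ~2.5: "reliable"
print(rci_regression(110, 122, baseline, retest))       # smaller: gain expected
```

The contrast shows why the regression method is more accurate here: it builds the expected practice gain into the prediction, so only change beyond that expectation is flagged.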
Xiao, Haopeng; Chen, Weixuan; Smeekens, Johanna M; Wu, Ronghu
2018-04-27
Protein glycosylation is ubiquitous in biological systems and essential for cell survival. However, the heterogeneity of glycans and the low abundance of many glycoproteins complicate their global analysis. Chemical methods based on reversible covalent interactions between boronic acid and glycans have great potential to enrich glycopeptides, but the binding affinity is typically not strong enough to capture low-abundance species. Here, we develop a strategy using dendrimer-conjugated benzoboroxole to enhance the glycopeptide enrichment. We test the performance of several boronic acid derivatives, showing that benzoboroxole markedly increases glycopeptide coverage from human cell lysates. The enrichment is further improved by conjugating benzoboroxole to a dendrimer, which enables synergistic benzoboroxole-glycan interactions. This robust and simple method is highly effective for sensitive glycoproteomics analysis, especially capturing low-abundance glycopeptides. Importantly, the enriched glycopeptides remain intact, making the current method compatible with mass-spectrometry-based approaches to identify glycosylation sites and glycan structures.
Estimating Mass of Inflatable Aerodynamic Decelerators Using Dimensionless Parameters
NASA Technical Reports Server (NTRS)
Samareh, Jamshid A.
2011-01-01
This paper describes a technique for estimating mass for inflatable aerodynamic decelerators. The technique uses dimensional analysis to identify a set of dimensionless parameters for inflation pressure, mass of inflation gas, and mass of flexible material. The dimensionless parameters enable scaling of an inflatable concept with geometry parameters (e.g., diameter), environmental conditions (e.g., dynamic pressure), inflation gas properties (e.g., molecular mass), and mass growth allowance. This technique is applicable for attached (e.g., tension cone, hypercone, and stacked toroid) and trailing inflatable aerodynamic decelerators. The technique uses simple engineering approximations that were developed by NASA in the 1960s and 1970s, as well as some recent important developments. The NASA Mars Entry and Descent Landing System Analysis (EDL-SA) project used this technique to estimate the masses of the inflatable concepts that were used in the analysis. The EDL-SA results compared well with two independent sets of high-fidelity finite element analyses.
Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential
Cologne, John; Grant, Eric J.; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki
2012-01-01
Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs. PMID:22505949
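The recommended masking, rounding away digits of relative accuracy, amounts to rounding to a fixed number of significant figures. A minimal sketch with assumed toy values:

```python
import numpy as np

def round_sig(x, digits):
    """Round to a fixed number of significant digits, i.e. remove digits
    of *relative* accuracy (the masking the paper found effective)."""
    x = np.asarray(x, dtype=float)
    mag = np.floor(np.log10(np.abs(np.where(x == 0, 1, x))))
    factor = 10.0 ** (digits - 1 - mag)
    return np.round(x * factor) / factor

# Toy illustration (values assumed): exposure estimates for subjects
doses = np.array([0.012345, 1.234567, 23.45678, 456.7890])
print(round_sig(doses, 3))   # -> [0.0123, 1.23, 23.5, 457.0]
# Disclosure risk drops because many subjects now share identical masked
# values; utility is preserved because the relative error is bounded by
# ~0.5 * 10**(1 - digits) regardless of each value's magnitude.
```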
Social network analysis for program implementation.
Valente, Thomas W; Palinkas, Lawrence A; Czaja, Sara; Chu, Kar-Hai; Brown, C Hendricks
2015-01-01
This paper introduces the use of social network analysis theory and tools for implementation research. The social network perspective is useful for understanding, monitoring, influencing, or evaluating the implementation process when programs, policies, practices, or principles are designed and scaled up or adapted to different settings. We briefly describe common barriers to implementation success and relate them to the social networks of implementation stakeholders. We introduce a few simple measures commonly used in social network analysis and discuss how these measures can be used in program implementation. Using the four stage model of program implementation (exploration, adoption, implementation, and sustainment) proposed by Aarons and colleagues [1] and our experience in developing multi-sector partnerships involving community leaders, organizations, practitioners, and researchers, we show how network measures can be used at each stage to monitor, intervene, and improve the implementation process. Examples are provided to illustrate these concepts. We conclude with expected benefits and challenges associated with this approach.
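For readers who want to try such measures, here is a minimal sketch using the networkx library on a hypothetical stakeholder network (the nodes and edges are invented for illustration):

```python
import networkx as nx

# Hypothetical implementation partnership: nodes are organizations,
# edges are working relationships (assumed data).
G = nx.Graph()
G.add_edges_from([
    ("community_org", "researcher"), ("community_org", "clinic_A"),
    ("researcher", "clinic_A"), ("researcher", "clinic_B"),
    ("clinic_B", "policy_office"), ("clinic_A", "practitioner_group"),
])

# Simple measures of the kind the paper discusses for each stage:
print("density:", nx.density(G))                     # overall cohesion
print("degree:", nx.degree_centrality(G))            # who is well connected
print("betweenness:", nx.betweenness_centrality(G))  # who bridges groups
# e.g. a high-betweenness broker is a natural engagement target during
# the exploration and adoption stages.
```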
Near-Field Terahertz Transmission Imaging at 0.210 Terahertz Using a Simple Aperture Technique
2015-10-01
This report discusses a simple aperture useful for terahertz near-field imaging at 0.210 terahertz (λ = 1.43 millimeters). The aperture requires...achieve a spatial resolution of λ/7. The aperture can be scaled with the assistance of machinery found in conventional machine shops to achieve similar results using shorter terahertz wavelengths.
Evaluation of the Cardiac Depression Visual Analogue Scale in a medical and non-medical sample.
Di Benedetto, Mirella; Sheehan, Matthew
2014-01-01
Comorbid depression and medical illness is associated with a number of adverse health outcomes such as lower medication adherence and higher rates of subsequent mortality. Reliable and valid psychological measures capable of detecting a range of depressive symptoms found in medical settings are needed. The Cardiac Depression Visual Analogue Scale (CDVAS) is a recently developed, brief six-item measure originally designed to assess the range and severity of depressive symptoms within a cardiac population. The current study aimed to further investigate the psychometric properties of the CDVAS in a general and medical sample. The sample consisted of 117 participants, whose mean age was 40.0 years (SD = 19.0, range 18-84). Participants completed the CDVAS, the Cardiac Depression Scale (CDS), the Depression Anxiety Stress Scales (DASS) and a demographic and health questionnaire. The CDVAS was found to have adequate internal reliability (α = .76), strong concurrent validity with the CDS (r = .89) and the depression sub-scale of the DASS (r = .70), strong discriminant validity and strong predictive validity. The principal components analysis revealed that the CDVAS measured only one component, providing further support for the construct validity of the scale. Results of the current study indicate that the CDVAS is a short, simple, valid and reliable measure of depressive symptoms suitable for use in a general and medical sample.
Dorian, Paul; Cvitkovic, Suzan S; Kerr, Charles R; Crystal, Eugene; Gillis, Anne M; Guerra, Peter G; Mitchell, L Brent; Roy, Denis; Skanes, Allan C; Wyse, D George
2006-04-01
The severity of symptoms caused by atrial fibrillation (AF) is extremely variable. Quantifying the effect of AF on patient well-being is important but there is no simple, commonly accepted measure of the effect of AF on quality of life (QoL). Current QoL measures are cumbersome and impractical for clinical use. To create a simple, concise and readily usable AF severity score to facilitate treatment decisions and physician communication. The Canadian Cardiovascular Society (CCS) Severity of Atrial Fibrillation (SAF) Scale is analogous to the CCS Angina Functional Class. The CCS-SAF score is determined using three steps: documentation of possible AF-related symptoms (palpitations, dyspnea, dizziness/syncope, chest pain, weakness/fatigue); determination of symptom-rhythm correlation; and assessment of the effect of these symptoms on patient daily function and QoL. CCS-SAF scores range from 0 (asymptomatic) to 4 (severe impact of symptoms on QoL and activities of daily living). Patients are also categorized by type of AF (paroxysmal versus persistent/permanent). The CCS-SAF Scale will be validated using accepted measures of patient-perceived severity of symptoms and impairment of QoL and will require 'field testing' to ensure its applicability and reproducibility in the clinical setting. This type of symptom severity scale, like the New York Heart Association Functional Class for heart failure symptoms and the CCS Functional Class for angina symptoms, trades precision and comprehensiveness for simplicity and ease of use at the bedside. A common language to quantify AF severity may help to improve patient care.
Selcuk, Selcuk; Cam, Cetin; Asoglu, Mehmet Resit; Kucukbas, Mehmet; Arinkan, Arzu; Cikman, Muzaffer Seyhan; Karateke, Ates
2016-03-01
The impact of simple and radical hysterectomy on all aspects of pelvic floor dysfunction was evaluated in the current study. This retrospective cohort study included 142 patients: 58 women (40.8%) who had undergone simple hysterectomy, 41 (28.8%) who had undergone radical hysterectomy, and 43 (30.2%) women without any surgical intervention who served as the control group. The validated versions of the Urogenital Distress Inventory (UDI-6), Incontinence Impact Questionnaire (IIQ-7), Pelvic Floor and Incontinence Sexual Impact Questionnaire (PISQ-12), Wexner Incontinence Scale score and pelvic organ prolapse quantification (POP-Q) system were used in the detailed evaluation of pelvic floor dysfunction. One-way ANOVA and Pearson's chi-square tests were used in the statistical analysis. There were significant differences in the irritative and obstructive scores of the UDI-6 between the Type III and Type I hysterectomy groups. In addition, patients with Type I hysterectomy had significantly higher irritative and obstructive scores than the control group. Type III hysterectomy had the most significant deteriorating effect on sexual life, based on PISQ-12 scores, compared with both the Type I hysterectomy group and the control group. Hysterectomy results in detrimental effects on quality of life (QoL) regarding all aspects of pelvic floor function, especially in women after radical hysterectomy. Urinary dysfunctional symptoms such as urgency and obstruction, and especially sexual problems, are more bothersome and difficult to overcome. The impact of hysterectomy on QoL should be investigated as a whole and may be more profound than previously thought.
Outbreak statistics and scaling laws for externally driven epidemics.
Singh, Sarabjeet; Myers, Christopher R
2014-04-01
Power-law scalings are ubiquitous to physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ∼ n^(-3/2) at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point, the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^(1/3)) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1] \ {2/3}, and O((N/ln N)^(2/3)) at the multicritical point.
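A driven SIR outbreak of this kind is straightforward to simulate with a Gillespie-style event loop. The sketch below defines an outbreak as the excursion from a single seed until the infected count returns to zero, with the reservoir forcing active throughout; all parameter values are illustrative assumptions.

```python
import numpy as np

def driven_sir_outbreak(N=10000, beta=1.0, gamma=1.0, eta=1e-4, seed=None):
    """Final size of one outbreak in an SIR model driven by a constant
    external force of infection eta per susceptible ("reservoir forcing").
    beta = gamma puts the internal process at its critical point. Only
    the order of events matters for the final size, so waiting times
    are not tracked."""
    rng = np.random.default_rng(seed)
    S, I = N - 1, 1
    size = 1
    while I > 0:
        rate_inf = beta * S * I / N    # internal transmission
        rate_ext = eta * S             # external (reservoir) infections
        rate_rec = gamma * I           # recoveries
        u = rng.random() * (rate_inf + rate_ext + rate_rec)
        if u < rate_inf + rate_ext:    # someone becomes infected
            S -= 1; I += 1; size += 1
        else:                          # someone recovers
            I -= 1
    return size

sizes = [driven_sir_outbreak(seed=k) for k in range(500)]
print("mean size:", np.mean(sizes), "99th pct:", np.percentile(sizes, 99))
```

Sweeping eta and fitting the tail of the size distribution is then a direct way to probe the tunable exponents the paper reports.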
Advances in time-scale algorithms
NASA Technical Reports Server (NTRS)
Stein, S. R.
1993-01-01
The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for timekeeping in mission-critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
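The two-clock thought experiment, and the basic remedy of carrying per-clock predictions so the scale stays continuous through a failure, can be captured in a few lines. This is a caricature of the prediction step used in AT1-style algorithms, not any laboratory's actual implementation.

```python
import numpy as np

T = 200
t = np.arange(T)
# Two noiseless clocks with equal and opposite frequency errors, as in
# the thought experiment; clock B fails at step 100.
clock_a = +1e-6 * t                 # time error of clock A, seconds
clock_b = -1e-6 * t
clock_b[100:] = np.nan

# Naive mean: the scale frequency jumps from 0 to +1e-6 at the failure.
naive = np.nanmean(np.vstack([clock_a, clock_b]), axis=0)

# Prediction-corrected scale: keep a rate estimate for each clock and,
# when one drops out, propagate its prediction so the scale stays
# continuous. (Here the rate is known exactly because the clock is
# noiseless; real algorithms estimate it recursively.)
rate_b = -1e-6
scale = np.zeros(T)
for k in range(1, T):
    b_k = clock_b[k] if not np.isnan(clock_b[k]) \
        else clock_b[99] + rate_b * (k - 99)      # predicted clock B
    scale[k] = 0.5 * (clock_a[k] + b_k)

print(naive[98:103])   # slope breaks at the failure epoch
print(scale[98:103])   # stays on the zero-frequency line
```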
Quantitative Evaluation of Musical Scale Tunings
ERIC Educational Resources Information Center
Hall, Donald E.
1974-01-01
The acoustical and mathematical basis of the problem of tuning the twelve-tone chromatic scale is reviewed. A quantitative measurement showing how well any tuning succeeds in providing just intonation for any specific piece of music is explained and applied to musical examples using a simple computer program. (DT)
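The kind of measurement described can be reproduced in a few lines: compare each equal-tempered interval with its just-intonation target in cents, then weight the errors by how often each interval occurs in the piece. The sketch below shows the per-interval comparison; the interval list is a standard choice, not taken from the article.

```python
import math

# Just-intonation targets (frequency ratios) for common intervals and
# their 12-tone equal-temperament (ET) widths in semitones.
just = {"minor third": 6/5, "major third": 5/4, "fourth": 4/3,
        "fifth": 3/2, "major sixth": 5/3, "octave": 2/1}
semitones = {"minor third": 3, "major third": 4, "fourth": 5,
             "fifth": 7, "major sixth": 9, "octave": 12}

def cents(ratio):
    """Interval size in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

for name, ratio in just.items():
    et = 100 * semitones[name]          # each ET semitone is 100 cents
    print(f"{name:12s} just = {cents(ratio):7.2f} c, ET = {et:4d} c, "
          f"error = {et - cents(ratio):+6.2f} c")
```

A piece-specific figure of merit, in the spirit of the article, is then the occurrence-weighted mean of the absolute errors over the intervals actually used in the score.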
1. Aquatic ecologists use mesocosm experiments to understand mechanisms driving ecological processes. Comparisons across experiments, and extrapolations to larger scales, are complicated by the use of mesocosms with varying dimensions. We conducted a mesocosm experiment over a vo...
Methodological Issues in Questionnaire Design.
Song, Youngshin; Son, Youn Jung; Oh, Doonam
2015-06-01
The process of designing a questionnaire is complicated. Many questionnaires on nursing phenomena have been developed and used by nursing researchers. The purpose of this paper was to discuss questionnaire design and the factors that should be considered when using existing scales. Methodological issues were discussed, such as factors in the design of questions, steps in developing questionnaires, wording and formatting methods for items, and administration methods. How to use existing scales, how to facilitate cultural adaptation, and how to prevent socially desirable responding were discussed. Moreover, the triangulation method in questionnaire development was introduced. Recommended steps for designing questions include appropriately operationalizing key concepts for the target population, clearly formatting response options, generating items and confirming final items through face or content validity, sufficiently piloting the questionnaire using item analysis, demonstrating reliability and validity, finalizing the scale, and training the administrator. Psychometric properties and cultural equivalence should be evaluated prior to administration when using an existing questionnaire and performing cultural adaptation. In the context of well-defined nursing phenomena, logical and systematic methods will contribute to the development of simple and precise questionnaires.
Molecular dynamics of conformational substates for a simplified protein model
NASA Astrophysics Data System (ADS)
Grubmüller, Helmut; Tavan, Paul
1994-09-01
Extended molecular dynamics simulations covering a total of 0.232 μs have been carried out on a simplified protein model. Despite its simplified structure, the model exhibits properties similar to those of more realistic protein models. In particular, the model was found to undergo transitions between conformational substates on a time scale of several hundred picoseconds. The computed trajectories turned out to be sufficiently long to permit a statistical analysis of that conformational dynamics. To check whether effective descriptions neglecting memory effects can reproduce the observed conformational dynamics, two stochastic models were studied. A one-dimensional Langevin effective potential model derived by elimination of subpicosecond dynamical processes could not describe the observed conformational transition rates. In contrast, a simple Markov model describing the transitions between, but neglecting dynamical processes within, conformational substates reproduced the observed distribution of first passage times. These findings suggest that protein dynamics generally does not exhibit memory effects at time scales above a few hundred picoseconds, but they confirm the existence of memory effects at the picosecond time scale.
Onset of fast "ideal" tearing in thin current sheets: Dependence on the equilibrium current profile
NASA Astrophysics Data System (ADS)
Pucci, F.; Velli, M.; Tenerani, A.; Del Sarto, D.
2018-03-01
In this paper, we study the scaling relations for the triggering of the fast, or "ideal," tearing instability starting from equilibrium configurations relevant to astrophysical as well as laboratory plasmas that differ from the simple Harris current sheet configuration. We present the linear tearing instability analysis for equilibrium magnetic fields which (a) go to zero at the boundary of the domain and (b) contain a double current sheet system (the latter previously studied as a Cartesian proxy for the m = 1 kink mode in cylindrical plasmas). More generally, we discuss the critical aspect ratio scalings at which the growth rates become independent of the Lundquist number S, in terms of the dependence of the Δ' parameter on the wavenumber k of unstable modes. The scaling of Δ'(k) with k at small k is found to categorize different equilibria broadly: the critical aspect ratios may be even smaller than L/a ∼ S^α with α = 1/3 originally found for the Harris current sheet, but there exists a general lower bound α ≥ 1/4.
Communication: Diverse nanoscale cluster dynamics: Diffusion of 2D epitaxial clusters
NASA Astrophysics Data System (ADS)
Lai, King C.; Evans, James W.; Liu, Da-Jiang
2017-11-01
The dynamics of nanoscale clusters can be distinct from macroscale behavior described by continuum formalisms. For diffusion of 2D clusters of N atoms in homoepitaxial systems mediated by edge atom hopping, macroscale theory predicts simple monotonic size scaling of the diffusion coefficient, D_N ∼ N^(-β), with β = 3/2. However, modeling for nanoclusters on metal(100) surfaces reveals that slow nucleation-mediated diffusion displaying weak size scaling β < 1 occurs for "perfect" sizes N_p = L^2 and L(L+1) for integer L = 3,4,… (with unique square or near-square ground state shapes), and also for N_p+3, N_p+4,…. In contrast, fast facile nucleation-free diffusion displaying strong size scaling β ≈ 2.5 occurs for sizes N_p+1 and N_p+2. D_N versus N oscillates strongly between the slowest branch (for N_p+3) and the fastest branch (for N_p+1). All branches merge for N = O(10^2), but macroscale behavior is only achieved for much larger N = O(10^3). This analysis reveals the unprecedented diversity of behavior on the nanoscale.
The Structure of Coronal Loops
NASA Technical Reports Server (NTRS)
Antiochos, Spiro K.
2009-01-01
It is widely believed that the simple coronal loops observed by XUV imagers, such as EIT, TRACE, or XRT, actually have a complex internal structure consisting of many (perhaps hundreds of) unresolved, interwoven "strands". According to the nanoflare model, photospheric motions tangle the strands, causing them to reconnect and release the energy required to produce the observed loop plasma. Although the strands themselves are unresolved by present-generation imagers, there is compelling evidence for their existence and for the nanoflare model from analysis of loop intensities and temporal evolution. A problem with this scenario is that, although reconnection can eliminate some of the strand tangles, it cannot destroy helicity, which should eventually build up to observable scales. We consider, therefore, the injection and evolution of helicity by the nanoflare process and its implications for the observed structure of loops and the large-scale corona. We argue that helicity does survive and build up to observable levels, but on spatial and temporal scales larger than those of coronal loops. We discuss the implications of these results for coronal loops and the corona in general.
Detectability of large-scale power suppression in the galaxy distribution
NASA Astrophysics Data System (ADS)
Gibelyou, Cameron; Huterer, Dragan; Fang, Wenjuan
2010-12-01
Suppression in primordial power on the Universe's largest observable scales has been invoked as a possible explanation for large-angle observations in the cosmic microwave background, and is allowed or predicted by some inflationary models. Here we investigate the extent to which such a suppression could be confirmed by the upcoming large-volume redshift surveys. For definiteness, we study a simple parametric model of suppression that improves the fit of the vanilla ΛCDM model to the angular correlation function measured by WMAP in cut-sky maps, and at the same time improves the fit to the angular power spectrum inferred from the maximum likelihood analysis presented by the WMAP team. We find that the missing power at large scales, favored by WMAP observations within the context of this model, will be difficult but not impossible to rule out with a large-volume (∼100 Gpc^3) galaxy redshift survey. A key requirement for success in ruling out power suppression will be having redshifts of most galaxies detected in the imaging survey.
An Empirical Non-TNT Approach to Launch Vehicle Explosion Modeling
NASA Technical Reports Server (NTRS)
Blackwood, James M.; Skinner, Troy; Richardson, Erin H.; Bangham, Michal E.
2015-01-01
In an effort to increase crew survivability from catastrophic explosions of Launch Vehicles (LV), a study was conducted to determine the best method for predicting LV explosion environments in the near field. After reviewing such methods as TNT equivalence, Vapor Cloud Explosion (VCE) theory, and Computational Fluid Dynamics (CFD), it was determined that the best approach for this study was to assemble all available empirical data from full scale launch vehicle explosion tests and accidents. Approximately 25 accidents or full-scale tests were found that had some amount of measured blast wave, thermal, or fragment explosion environment characteristics. Blast wave overpressure was found to be much lower in the near field than predicted by most TNT equivalence methods. Additionally, fragments tended to be larger, fewer, and slower than expected if the driving force was from a high explosive type event. In light of these discoveries, a simple model for cryogenic rocket explosions is presented. Predictions from this model encompass all known applicable full scale launch vehicle explosion data. Finally, a brief description of on-going analysis and testing to further refine the launch vehicle explosion environment is discussed.
NASA Astrophysics Data System (ADS)
Xu, C.; Zhao, S.; Zhao, B.
2017-12-01
Spatial heterogeneity is scale-dependent; that is, the quantification and representation of spatial pattern vary with the resolution and extent. Most practice has focused on the scale effect of landscape metrics, and the predictable scaling relationships found among some of them are thought to be the most effective and precise way to quantify multi-scale characteristics. However, previous studies tended to consider a narrow range of scales, and few focused on the critical threshold of the scaling function. Here we examine the scalograms of 38 widely used landscape-level metrics across a more complete spectrum of grain sizes among 96 landscapes of various extents (from 25 km^2 up to 221 km^2), sampled randomly from the NLCD product. Our goal is to explore the existence of scaling domains and whether the response of metrics to changing resolution is influenced by spatial extent. Results clearly show the existence of a scaling domain for 13 of the metrics (Type II), while the behaviors of another 13 (Type I) exhibit simple scaling functions and the rest (Type III) demonstrate various forms such as no obvious change or fluctuation across the spectrum of grain sizes. In addition, an invariant power-law scaling relationship was found between critical resolution and spatial extent for the Type II metrics, the critical resolution being proportional to E^ρ (where ρ is a constant and E is the extent). All the scaling exponents (ρ) are positive, suggesting that the critical resolutions for these characteristics of landscape structure can be relaxed as the spatial extent expands. This agrees well with the empirical perception that coarser grain sizes might be allowed for spatial data with larger extents. Furthermore, the parameters of the scaling functions for Type I and Type II metrics vary with spatial extent, and power-law or logarithmic relationships could be identified between them for some metrics. Our findings support the existence of self-organized criticality for a hierarchically structured landscape. Although the underlying mechanism driving the scaling relationship remains unclear, it could provide guidance toward general principles in spatial pattern analysis and in selecting the proper resolution to avoid misrepresentation of spatial pattern and profound biases in subsequent ecological process research.
Constraints on genes shape long-term conservation of macro-synteny in metazoan genomes.
Lv, Jie; Havlak, Paul; Putnam, Nicholas H
2011-10-05
Many metazoan genomes conserve chromosome-scale gene linkage relationships ("macro-synteny") from the common ancestor of multicellular animal life [1-4], but the biological explanation for this conservation is still unknown. Double cut and join (DCJ) is a simple, well-studied model of neutral genome evolution amenable to both simulation and mathematical analysis [5], but as we show here, it is not sufficient to explain long-term macro-synteny conservation. We examine a family of simple (one-parameter) extensions of DCJ to identify models and choices of parameters consistent with the levels of macro- and micro-synteny conservation observed among animal genomes. Our software implements a flexible strategy for incorporating various types of genomic context into the DCJ model ("DCJ-[C]"), and is available as open source software from http://github.com/putnamlab/dcj-c. A simple model of genome evolution, in which DCJ moves are allowed only if they maintain chromosomal linkage among a set of constrained genes, can simultaneously account for the level of macro-synteny conservation and for correlated conservation among multiple pairs of species. Simulations under this model indicate that a constraint on approximately 7% of metazoan genes is sufficient to constrain genome rearrangement to an average rate of 25 inversions and 1.7 translocations per million years.
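A toy version of such a constrained move set can be sketched as follows; it uses unsigned genes and plain inversion/translocation proposals as a simplified proxy for the published DCJ-[C] implementation, with the move mix set to roughly the 25:1.7 inversion:translocation ratio quoted above.

```python
import random
from itertools import combinations

def dcj_constrained_step(genome, constrained, rng):
    """One random rearrangement step in the spirit of the DCJ-[C] idea:
    propose an inversion or a reciprocal translocation, and reject the
    move if it separates any pair of constrained genes that currently
    share a chromosome. Genomes are lists of chromosomes (lists of
    unsigned gene ids); true DCJ acts on signed adjacencies."""
    g = [ch[:] for ch in genome]
    if rng.random() < 0.94 or len(g) < 2:   # ~25:1.7 inversion:translocation
        c = rng.choice([k for k, ch in enumerate(g) if ch])  # non-empty
        i, j = sorted(rng.sample(range(len(g[c]) + 1), 2))
        g[c][i:j] = reversed(g[c][i:j])     # inversions never split linkage
        return g

    def linked_pairs(chroms):
        out = set()
        for ch in chroms:
            cg = [x for x in ch if x in constrained]
            out |= {frozenset(p) for p in combinations(cg, 2)}
        return out

    c1, c2 = rng.sample(range(len(g)), 2)
    i, j = rng.randrange(len(g[c1]) + 1), rng.randrange(len(g[c2]) + 1)
    new1, new2 = g[c1][:i] + g[c2][j:], g[c2][:j] + g[c1][i:]
    if linked_pairs([new1, new2]) >= linked_pairs([g[c1], g[c2]]):
        g[c1], g[c2] = new1, new2           # accept: no constrained pair split
    return g                                # else reject, genome unchanged

rng = random.Random(0)
genome = [list(range(0, 50)), list(range(50, 100))]
constrained = set(range(4)) | set(range(50, 53))   # ~7% of genes
for _ in range(1000):
    genome = dcj_constrained_step(genome, constrained, rng)
print([len(ch) for ch in genome])
```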
Getting started with Open-Hardware: Development and Control of Microfluidic Devices
da Costa, Eric Tavares; Mora, Maria F.; Willis, Peter A.; do Lago, Claudimir L.; Jiao, Hong; Garcia, Carlos D.
2014-01-01
Understanding basic concepts of electronics and computer programming allows researchers to get the most out of the equipment found in their laboratories. Although a number of platforms have been specifically designed for the general public and are supported by a vast array of on-line tutorials, this subject is not normally included in university chemistry curricula. Aiming to provide the basic concepts of hardware and software, this article is focused on the design and use of a simple module to control a series of PDMS-based valves. The module is based on a low-cost microprocessor (Teensy) and open-source software (Arduino). The microvalves were fabricated using thin sheets of PDMS and patterned using CO2 laser engraving, providing a simple and efficient way to fabricate devices without the traditional photolithographic process or facilities. Synchronization of valve control enabled the development of two simple devices to perform injection (1.6 ± 0.4 μL/stroke) and mixing of different solutions. Furthermore, a practical demonstration of the utility of this system for microscale chemical sample handling and analysis was achieved performing an on-chip acid-base titration, followed by conductivity detection with an open-source low-cost detection system. Overall, the system provided a very reproducible (98%) platform to perform fluid delivery at the microfluidic scale. PMID:24823494
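On the host side, driving such a controller reduces to writing bytes to a serial port. The sketch below uses pyserial; the port name and the single-byte open/close commands are hypothetical, since the actual protocol is defined by the controller's firmware.

```python
import time
import serial  # pyserial

# Host-side sketch for a Teensy-based valve controller like the one
# described. The command bytes below are hypothetical placeholders.
PORT = "/dev/ttyACM0"   # typical Teensy USB-serial device on Linux

def stroke(ser, open_cmd, close_cmd, dwell_s=0.2):
    """One pump stroke: open valve, dwell, close valve
    (the paper reports 1.6 +/- 0.4 uL delivered per stroke)."""
    ser.write(open_cmd)
    time.sleep(dwell_s)
    ser.write(close_cmd)
    time.sleep(dwell_s)

with serial.Serial(PORT, 9600, timeout=1) as ser:
    time.sleep(2.0)              # let the board reset after port open
    for _ in range(10):          # ~16 uL in 10 strokes at the quoted rate
        stroke(ser, b"A", b"a")  # hypothetical open/close command bytes
```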
Grammatical analysis as a distributed neurobiological function.
Bozic, Mirjana; Fonteneau, Elisabeth; Su, Li; Marslen-Wilson, William D
2015-03-01
Language processing engages large-scale functional networks in both hemispheres. Although it is widely accepted that left perisylvian regions have a key role in supporting complex grammatical computations, patient data suggest that some aspects of grammatical processing could be supported bilaterally. We investigated the distribution and the nature of grammatical computations across language processing networks by comparing two types of combinatorial grammatical sequences (inflectionally complex words and minimal phrases) and contrasting them with grammatically simple words. Novel multivariate analyses revealed that they engage a coalition of separable subsystems: inflected forms triggered left-lateralized activation, dissociable into dorsal processes supporting morphophonological parsing and ventral, lexically driven morphosyntactic processes. In contrast, simple phrases activated a consistently bilateral pattern of temporal regions, overlapping with inflectional activations in the left middle temporal gyrus. These data confirm the role of the left-lateralized frontotemporal network in supporting complex grammatical computations. Critically, they also point to the capacity of bilateral temporal regions to support simple, linear grammatical computations. This is consistent with a dual neurobiological framework in which phylogenetically older bihemispheric systems form part of the network that supports language function in the modern human, and in which significant capacities for language comprehension remain intact even following severe left hemisphere damage.
Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.
2007-01-01
Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532