Sample records for simple scaling relation

  1. Universal binding energy relations in metallic adhesion

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Smith, J. R.; Rose, J. H.

    1984-01-01

    Rose, Smith, and Ferrante have discovered scaling relations which map the adhesive binding energies calculated by Ferrante and Smith onto a single universal binding energy curve. These binding energies are calculated for all combinations of Al(111), Zn(0001), Mg(0001), and Na(110) in contact. The scaling involves normalizing the energy by the maximum binding energy and normalizing distances by a suitable combination of Thomas-Fermi screening lengths. Rose et al. have also found that the calculated cohesive energies of K, Ba, Cu, Mo, and Sm scale by similar simple relations, suggesting that the universal relation may be more general than the simple free-electron metals for which it was derived. In addition, the scaling length was defined more generally in order to relate it to measurable physical properties. Further, this universality can be extended to chemisorption. A simple and yet quite accurate prediction of a zero-temperature equation of state (volume as a function of pressure for metals and alloys) is presented. Thermal expansion coefficients and melting temperatures are predicted by simple, analytic expressions, and the results compare favorably with experiment for a broad range of metals.

  2. How well can regional fluxes be derived from smaller-scale estimates?

    NASA Technical Reports Server (NTRS)

    Moore, Kathleen E.; Fitzjarrald, David R.; Ritter, John A.

    1992-01-01

    Regional surface fluxes are essential lower boundary conditions for large-scale numerical weather and climate models and are the elements of global budgets of important trace gases. Surface properties affecting the exchange of heat, moisture, momentum, and trace gases vary over length scales from one meter to hundreds of kilometers. A classical difficulty is that fluxes have been measured directly only at points or along lines. Observations limited in space and/or time have typically been scaled up to represent larger areas by assigning properties to surface classes and combining the estimated or calculated fluxes in an area-weighted average. It is not clear that a simple area-weighted average is sufficient to produce the large scale from the small scale, chiefly because of the effect of internal boundary layers, nor is it known how important the resulting uncertainty is to large-scale model outcomes. Simultaneous aircraft and tower data obtained in the relatively simple terrain of the western Alaska tundra were used to determine the extent to which surface type variation can be related to fluxes of heat, moisture, and other properties. Surface type was classified as lake or land with an aircraft-borne infrared thermometer, and flight-level heat and moisture fluxes were related to surface type. The magnitude and variety of sampling errors inherent in eddy correlation flux estimation place limits on how well any flux can be known, even in simple geometries.
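The area-weighted upscaling step this record describes is a one-line estimator; the sketch below illustrates it with entirely hypothetical surface classes, flux values, and areas (none are from the paper).

```python
def area_weighted_flux(fluxes, areas):
    """Regional flux as sum(f_i * A_i) / sum(A_i) over surface classes."""
    if len(fluxes) != len(areas):
        raise ValueError("fluxes and areas must have the same length")
    total_area = sum(areas)
    return sum(f * a for f, a in zip(fluxes, areas)) / total_area

# Hypothetical tundra example: sensible heat flux (W m^-2) for lake vs land
fluxes = [5.0, 60.0]   # lake, land
areas = [20.0, 80.0]   # km^2
print(area_weighted_flux(fluxes, areas))  # 0.2*5 + 0.8*60 = 49.0
```

The paper's point is precisely that this baseline estimator may be insufficient where internal boundary layers matter; the sketch shows only the estimator itself.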

  3. Modeling Age-Related Differences in Immediate Memory Using SIMPLE

    ERIC Educational Resources Information Center

    Surprenant, Aimee M.; Neath, Ian; Brown, Gordon D. A.

    2006-01-01

    In the SIMPLE model (Scale Invariant Memory and Perceptual Learning), performance on memory tasks is determined by the locations of items in multidimensional space, and better performance is associated with having fewer close neighbors. Unlike most previous simulations with SIMPLE, the ones reported here used measured, rather than assumed,…

  4. How Darcy's equation is linked to the linear reservoir at catchment scale

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.

    2017-04-01

    In groundwater hydrology two simple linear equations exist that describe the relation between groundwater flow and the gradient that drives it: Darcy's equation and the linear reservoir. Both equations are empirical at heart: Darcy's equation at the laboratory scale and the linear reservoir at the watershed scale. Although at first sight they show similarity, without having detailed knowledge of the structure of the underlying aquifers it is not trivial to upscale Darcy's equation to the watershed scale. In this paper, a relatively simple connection is provided between the two, based on the assumption that the groundwater system is organized by an efficient drainage network, a mostly invisible pattern that has evolved over geological time scales. This drainage network provides equally distributed resistance to flow along the streamlines that connect the active groundwater body to the stream, much like a leaf is organized to provide all stomata access to moisture at equal resistance.

  5. Regimes of stability and scaling relations for the removal time in the asteroid belt: a simple kinetic model and numerical tests

    NASA Astrophysics Data System (ADS)

    Cubrovic, Mihailo

    2005-02-01

    We report on our theoretical and numerical results concerning the transport mechanisms in the asteroid belt. We first derive a simple kinetic model of chaotic diffusion and show how it gives rise to some simple correlations (but not laws) between the removal time (the time for an asteroid to experience a qualitative change of dynamical behavior and enter a wide chaotic zone) and the Lyapunov time. The correlations are shown to arise in two different regimes, characterized by exponential and power-law scalings. We also show how the so-called “stable chaos” (exponential regime) is related to anomalous diffusion. Finally, we check our results numerically and discuss their possible applications in analyzing the motion of particular asteroids.

  6. A simple index of stand density for Douglas-fir.

    Treesearch

    R.O. Curtis

    1982-01-01

    The expression RD = G/Dg^(1/2), where G is basal area and Dg is quadratic mean stand diameter, provides a simple and convenient scale of relative stand density for Douglas-fir, equivalent to other generally accepted diameter-based stand density measures.
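Curtis's relative density index is simple enough to compute directly; the numeric stand values below are illustrative only, not from the paper.

```python
import math

def curtis_rd(basal_area, dg):
    """Curtis relative density: RD = G / sqrt(Dg).

    basal_area: stand basal area G (e.g. m^2/ha)
    dg: quadratic mean stand diameter Dg (e.g. cm)
    """
    return basal_area / math.sqrt(dg)

# Illustrative stand: G = 40 m^2/ha, Dg = 25 cm
print(curtis_rd(40.0, 25.0))  # 40 / 5 = 8.0
```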

  7. Simulating and mapping spatial complexity using multi-scale techniques

    USGS Publications Warehouse

    De Cola, L.

    1994-01-01

    A central problem in spatial analysis is the mapping of data for complex spatial fields using relatively simple data structures, such as those of a conventional GIS. This complexity can be measured using such indices as multi-scale variance, which reflects spatial autocorrelation, and multi-fractal dimension, which characterizes the values of fields. These indices are computed for three spatial processes: Gaussian noise, a simple mathematical function, and data for a random walk. Fractal analysis is then used to produce a vegetation map of the central region of California based on a satellite image. This analysis suggests that real world data lie on a continuum between the simple and the random, and that a major GIS challenge is the scientific representation and understanding of rapidly changing multi-scale fields. -Author

  8. Electrochemistry at Nanometer-Scaled Electrodes

    ERIC Educational Resources Information Center

    Watkins, John J.; Bo Zhang; White, Henry S.

    2005-01-01

    Electrochemical studies using nanometer-scaled electrodes are leading to better insights into electrochemical kinetics, interfacial structure, and chemical analysis. Various methods of preparing electrodes of nanometer dimensions are discussed and a few examples of their behavior and applications in relatively simple electrochemical experiments…

  9. A Simple Non-equilibrium Model of Star Formation and Scatter in the Kennicutt-Schmidt Relation and Star Formation Efficiencies in Galaxies

    NASA Astrophysics Data System (ADS)

    Orr, Matthew; Hopkins, Philip F.

    2018-06-01

    I will present a simple model of non-equilibrium star formation and its relation to the scatter in the Kennicutt-Schmidt relation and large-scale star formation efficiencies in galaxies. I will highlight the importance of a hierarchy of timescales, between the galaxy dynamical time, local free-fall time, the delay time of stellar feedback, and temporal overlap in observables, in setting the scatter of the observed star formation rates for a given gas mass. Further, I will talk about how these timescales (and their associated duty-cycles of star formation) influence interpretations of the large-scale star formation efficiency in reasonably star-forming galaxies. Lastly, the connection with galactic centers and out-of-equilibrium feedback conditions will be mentioned.

  10. A simple predictive model for the structure of the oceanic pycnocline

    PubMed

    Gnanadesikan

    1999-03-26

    A simple theory for the large-scale oceanic circulation is developed, relating pycnocline depth, Northern Hemisphere sinking, and low-latitude upwelling to pycnocline diffusivity and Southern Ocean winds and eddies. The results show that Southern Ocean processes help maintain the global ocean structure and that pycnocline diffusion controls low-latitude upwelling.

  11. Role of large-scale velocity fluctuations in a two-vortex kinematic dynamo.

    PubMed

    Kaplan, E J; Brown, B P; Rahbarnia, K; Forest, C B

    2012-06-01

    This paper presents an analysis of the Dudley-James two-vortex flow, which inspired several laboratory-scale liquid-metal experiments, in order to better demonstrate its relation to astrophysical dynamos. A coordinate transformation splits the flow into components that are axisymmetric and nonaxisymmetric relative to the induced magnetic dipole moment. The reformulation gives the flow the same dynamo ingredients as are present in more complicated convection-driven dynamo simulations. These ingredients are currents driven by the mean flow and currents driven by correlations between fluctuations in the flow and fluctuations in the magnetic field. The simple model allows us to isolate the dynamics of the growing eigenvector and trace them back to individual three-wave couplings between the magnetic field and the flow. This simple model demonstrates the necessity of poloidal advection in sustaining the dynamo and points to the effect of large-scale flow fluctuations in exciting a dynamo magnetic field.

  12. SMALL-SCALE ANISOTROPIES OF COSMIC RAYS FROM RELATIVE DIFFUSION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlers, Markus; Mertsch, Philipp

    2015-12-10

    The arrival directions of multi-TeV cosmic rays show significant anisotropies at small angular scales. It has been argued that this small-scale structure can naturally arise from cosmic ray scattering in local turbulent magnetic fields that distort a global dipole anisotropy set by diffusion. We study this effect in terms of the power spectrum of cosmic ray arrival directions and show that the strength of small-scale anisotropies is related to properties of relative diffusion. We provide a formalism for how these power spectra can be inferred from simulations and motivate a simple analytic extension of the ensemble-averaged diffusion equation that can account for the effect.

  13. A SIMPLE METHOD FOR EVALUATING DATA FROM AN INTERLABORATORY STUDY

    EPA Science Inventory

    Large-scale laboratory- and method-performance studies involving more than about 30 laboratories may be evaluated by calculating the HORRAT ratio for each test sample (HORRAT = [experimentally found among-laboratories relative standard deviation] divided by [relative standard deviation predicted from the Horwitz equation]).
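The HORRAT calculation is short enough to sketch. The denominator below uses the Horwitz equation, PRSD% = 2^(1 - 0.5 log10 C) with C the analyte concentration as a mass fraction; the test-sample numbers are hypothetical.

```python
import math

def horwitz_prsd(concentration):
    """Predicted among-laboratory relative standard deviation (%)
    from the Horwitz equation; concentration is a mass fraction (1.0 = 100%)."""
    return 2.0 ** (1.0 - 0.5 * math.log10(concentration))

def horrat(found_rsd_percent, concentration):
    """HORRAT = experimentally found among-lab RSD / Horwitz-predicted RSD."""
    return found_rsd_percent / horwitz_prsd(concentration)

# Hypothetical test sample at C = 1e-6 (1 ppm) with a found RSD of 12%
print(round(horwitz_prsd(1e-6), 1))  # 2^(1 + 3) = 16.0 (%)
print(round(horrat(12.0, 1e-6), 2))  # 12 / 16 = 0.75
```

HORRAT values near 1 indicate precision consistent with typical interlaboratory performance at that concentration.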

  14. A Lattice Boltzmann Method for Turbomachinery Simulations

    NASA Technical Reports Server (NTRS)

    Hsu, A. T.; Lopez, I.

    2003-01-01

    The Lattice Boltzmann (LB) method is a relatively new method for flow simulations. Its starting point is statistical mechanics and the Boltzmann equation. The LB method sets up its model at the molecular scale and simulates the flow at the macroscopic scale. LBM has so far been applied mostly to incompressible flows and simple geometries.

  15. Universal binding energy relations in metallic adhesion

    NASA Technical Reports Server (NTRS)

    Ferrante, J.; Smith, J. R.; Rose, J. H.

    1981-01-01

    Scaling relations which map metallic adhesive binding energy onto a single universal binding energy curve are discussed in relation to adhesion, friction, and wear in metals. The scaling involved normalizing the energy to the maximum binding energy and normalizing distances by a suitable combination of Thomas-Fermi screening lengths. The universal curve was found to be accurately represented by E*(a*) = -(1 + beta a*) exp(-beta a*), where E* is the normalized binding energy, a* is the normalized separation, and beta is the normalized decay constant. The calculated cohesive energies of potassium, barium, copper, molybdenum, and samarium were also found to scale by similar relations, suggesting that the universal relation may be more general than the simple free electron metals for which it was derived.
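The universal binding curve E*(a*) = -(1 + beta a*) exp(-beta a*) is easy to evaluate numerically; the sketch below checks its two defining features (normalized minimum of -1 at zero scaled separation, decay to zero at large separation). The choice beta = 1 is an arbitrary illustration, not a value from the papers.

```python
import math

def universal_binding(a_star, beta=1.0):
    """Normalized binding energy E*(a*) = -(1 + beta*a*) * exp(-beta*a*)."""
    return -(1.0 + beta * a_star) * math.exp(-beta * a_star)

print(universal_binding(0.0))   # -1.0 : maximum binding at the equilibrium separation
print(universal_binding(10.0))  # small negative value: energy tends to 0 at large separation
```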

  16. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity.

    PubMed

    Li, Harbin; McNulty, Steven G

    2007-10-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL estimates to the national scale could be developed. Specifically, we wanted to quantify CAL uncertainty under natural variability in 17 model parameters, and determine their relative contributions in predicting CAL. Results indicated that uncertainty in CAL came primarily from components of base cation weathering (BC(w); 49%) and acid neutralizing capacity (46%), whereas the most critical parameters were BC(w) base rate (62%), soil depth (20%), and soil temperature (11%). Thus, improvements in estimates of these factors are crucial to reducing uncertainty and successfully scaling up SMBE for national assessments of CAL.

  17. HOW GALACTIC ENVIRONMENT REGULATES STAR FORMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meidt, Sharon E.

    2016-02-10

    In a new simple model I reconcile two contradictory views on the factors that determine the rate at which molecular clouds form stars—internal structure versus external, environmental influences—providing a unified picture for the regulation of star formation in galaxies. In the presence of external pressure, the pressure gradient set up within a self-gravitating turbulent (isothermal) cloud leads to a non-uniform density distribution. Thus the local environment of a cloud influences its internal structure. In the simple equilibrium model, the fraction of gas at high density in the cloud interior is determined simply by the cloud surface density, which is itself inherited from the pressure in the immediate surroundings. This idea is tested using measurements of the properties of local clouds, which are found to show remarkable agreement with the simple equilibrium model. The model also naturally predicts the star formation relation observed on cloud scales and at the same time provides a mapping between this relation and the closer-to-linear molecular star formation relation measured on larger scales in galaxies. The key is that pressure regulates not only the molecular content of the ISM but also the cloud surface density. I provide a straightforward prescription for the pressure regulation of star formation that can be directly implemented in numerical models. Predictions for the dense gas fraction and star formation efficiency measured on large-scales within galaxies are also presented, establishing the basis for a new picture of star formation regulated by galactic environment.

  18. A relativistic signature in large-scale structure

    NASA Astrophysics Data System (ADS)

    Bartolo, Nicola; Bertacca, Daniele; Bruni, Marco; Koyama, Kazuya; Maartens, Roy; Matarrese, Sabino; Sasaki, Misao; Verde, Licia; Wands, David

    2016-09-01

    In General Relativity, the constraint equation relating metric and density perturbations is inherently nonlinear, leading to an effective non-Gaussianity in the dark matter density field on large scales, even if the primordial metric perturbation is Gaussian. Intrinsic non-Gaussianity in the large-scale dark matter overdensity in GR is real and physical. However, the variance smoothed on a local physical scale is not correlated with the large-scale curvature perturbation, so that there is no relativistic signature in the galaxy bias when using the simplest model of bias. It is an open question whether the observable mass proxies such as luminosity or weak lensing correspond directly to the physical mass in the simple halo bias model. If not, there may be observables that encode this relativistic signature.

  19. Metrology with Weak Value Amplification and Related Topics

    DTIC Science & Technology

    2013-10-12

    sensitivity depend crucially on the relative time scales involved ... [Fig. 1: apparatus schematic: pulsed laser, PBS, PC, HWP, SBC, piezo, 50:50 beam splitter, split detector] ... reasons why this may be impossible or inadvisable given a laboratory set-up. There may be a minimum quiet time between laser pulses, for example, or ... measurements is a full 100 ms, our filtering limits the laser noise to time scales of about 30 ms. For analysis, we take this as our integration time in ...

  20. A New Approach to Scaling Channel Width in Bedrock Rivers and its Implications for Modeling Fluvial Incision

    NASA Astrophysics Data System (ADS)

    Finnegan, N. J.; Roe, G.; Montgomery, D. R.; Hallet, B.

    2004-12-01

    The fundamental role of bedrock channel incision in the evolution of mountainous topography has become a central concept in tectonic geomorphology over the past decade. During this time the stream power model of bedrock river incision has emerged as a valuable tool for exploring the dynamics of bedrock river incision in time and space. In most stream power analyses, river channel width--a necessary ingredient for calculating power or shear stress per unit of bed area--is assumed to scale solely with discharge. However, recent field-based studies provide evidence for the alternative view that channel width varies locally, much like channel slope does, in association with spatial changes in rock uplift rate and erodibility. This suggests that simple scaling relations between width and discharge, and hence estimates of stream power, do not apply in regions where rock uplift and erodibility vary spatially. It also highlights the need for an alternative to the traditional assumptions of hydraulic geometry in order to further investigate the coupling between bedrock river incision and tectonic processes. Based on Manning's equation, basic mass conservation principles, and an assumption of self-similarity for channel cross sections, we present a new relation for scaling the steady-state width of bedrock river channels as a function of discharge (Q), channel slope (S), and roughness (Ks): W ∝ Q^(3/8) S^(-3/16) Ks^(1/16). In longitudinally simple, uniform-concavity rivers from the King Range in coastal Northern California, the model emulates traditional width-discharge relations that scale channel width with the square root of discharge. More significantly, our relation describes river width trends for the Yarlung Tsangpo in SE Tibet and the Wenatchee River in the Washington Cascades, both rivers that narrow considerably as they incise terrain with spatially varied rock uplift rates and/or lithology.
We suggest that much of observed channel width variability is a simple consequence of the tendency for water to flow faster in steeper reaches and therefore maintain smaller channel cross sections. We demonstrate that using conventional scaling relations for bedrock channel width can significantly underestimate stream power variability in bedrock channels, and that our model improves estimates of spatial patterns of bedrock incision rates.
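The proposed width scaling, W ∝ Q^(3/8) S^(-3/16) Ks^(1/16), can be written as a short function to contrast with the traditional W ∝ Q^(1/2) assumption; the proportionality coefficient and the numeric inputs below are arbitrary illustrative values.

```python
def channel_width(q, s, ks, coeff=1.0):
    """Steady-state bedrock channel width: W = coeff * Q^(3/8) * S^(-3/16) * Ks^(1/16).

    q: discharge, s: channel slope, ks: roughness; coeff absorbs units.
    """
    return coeff * q ** (3.0 / 8.0) * s ** (-3.0 / 16.0) * ks ** (1.0 / 16.0)

# A reach that steepens from S = 0.001 to S = 0.01 at constant Q and Ks
w_gentle = channel_width(q=100.0, s=0.001, ks=30.0)
w_steep = channel_width(q=100.0, s=0.01, ks=30.0)
print(w_steep / w_gentle)  # 10**(-3/16) ≈ 0.65: the steeper reach is ~35% narrower
```

The negative slope exponent is the point of contrast: a pure discharge scaling would predict no width change at all across the steepened reach.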

  1. On estimating scale invariance in stratocumulus cloud fields

    NASA Technical Reports Server (NTRS)

    Seze, Genevieve; Smith, Leonard A.

    1990-01-01

    Examination of cloud radiance fields derived from satellite observations sometimes indicates the existence of a range of scales over which the statistics of the field are scale invariant. Many methods were developed to quantify this scaling behavior in geophysics. The usefulness of such techniques depends both on the physics of the process being robust over a wide range of scales and on the availability of high resolution, low noise observations over these scales. These techniques (area perimeter relation, distribution of areas, estimation of the capacity, d0, through box counting, correlation exponent) are applied to the high resolution satellite data taken during the FIRE experiment and provides initial estimates of the quality of data required by analyzing simple sets. The results of the observed fields are contrasted with those of images of objects with known characteristics (e.g., dimension) where the details of the constructed image simulate current observational limits. Throughout when cloud elements and cloud boundaries are mentioned; it should be clearly understood that by this structures in the radiance field are meant: all the boundaries considered are defined by simple threshold arguments.

  2. Validation of a new simple scale to measure symptoms in atrial fibrillation: the Canadian Cardiovascular Society Severity in Atrial Fibrillation scale.

    PubMed

    Dorian, Paul; Guerra, Peter G; Kerr, Charles R; O'Donnell, Suzan S; Crystal, Eugene; Gillis, Anne M; Mitchell, L Brent; Roy, Denis; Skanes, Allan C; Rose, M Sarah; Wyse, D George

    2009-06-01

    Atrial fibrillation (AF) is commonly associated with impaired quality of life. There is no simple validated scale to quantify the functional illness burden of AF. The Canadian Cardiovascular Society Severity in Atrial Fibrillation (CCS-SAF) scale is a bedside scale that ranges from class 0 to 4, from no effect on functional quality of life to a severe effect on life quality. This study was performed to validate the scale. In 484 patients with documented AF (62.2+/-12.5 years of age, 67% men; 62% paroxysmal and 38% persistent/permanent), the SAF class was assessed and 2 validated quality-of-life questionnaires were administered: the SF-36 generic scale and the disease-specific AFSS (University of Toronto Atrial Fibrillation Severity Scale). There is a significant linear graded correlation between the SAF class and measures of symptom severity, physical and emotional components of quality of life, general well-being, and health care consumption related to AF. Patients with SAF class 0 had age- and sex-standardized SF-36 scores of 0.15+/-0.16 and -0.04+/-0.31 (SD units), that is, units away from the mean population score for the mental and physical summary scores, respectively. For each unit increase in SAF class, there is a 0.36 and 0.40 SD unit decrease in the SF-36 score for the physical and mental components. As the SAF class increases from 0 to 4, the symptom severity score (range, 0 to 35) increases from 4.2+/-5.0 to 18.4+/-7.8 (P<0.0001). The CCS-SAF scale is a simple semiquantitative scale that closely approximates patient-reported subjective measures of quality of life in AF and may be practical for clinical use.

  3. A Note on the Fractal Behavior of Hydraulic Conductivity and Effective Porosity for Experimental Values in a Confined Aquifer

    PubMed Central

    De Bartolo, Samuele; Fallico, Carmine; Veltri, Massimo

    2013-01-01

    Hydraulic conductivity and effective porosity values for the confined sandy loam aquifer of the Montalto Uffugo (Italy) test field were obtained by laboratory and field measurements; the first were carried out on undisturbed soil samples and the others by slug and aquifer tests. A direct simple-scaling analysis was performed for the whole range of measurements, and a comparison was made among the different types of fractal models describing the scale behavior. Some indications are given about the largest pore size to use in the fractal models. The results obtained for a sandy loam soil show that it is possible to obtain global indications on the behavior of the hydraulic conductivity versus the porosity by utilizing a simple scaling relation and a fractal model in a coupled manner. PMID:24385876

  4. Natural Scales in Geographical Patterns

    NASA Astrophysics Data System (ADS)

    Menezes, Telmo; Roth, Camille

    2017-04-01

    Human mobility is known to be distributed across several orders of magnitude of physical distances, which makes it generally difficult to endogenously find or define typical and meaningful scales. Relevant analyses, from movements to geographical partitions, seem to be relative to some ad-hoc scale, or no scale at all. Relying on geotagged data collected from photo-sharing social media, we apply community detection to movement networks constrained by increasing percentiles of the distance distribution. Using a simple parameter-free discontinuity detection algorithm, we discover clear phase transitions in the community partition space. The detection of these phases constitutes the first objective method of characterising endogenous, natural scales of human movement. Our study covers nine regions, ranging from cities to countries of various sizes and a transnational area. For all regions, the number of natural scales is remarkably low (2 or 3). Further, our results hint at scale-related behaviours rather than scale-related users. The partitions of the natural scales allow us to draw discrete multi-scale geographical boundaries, potentially capable of providing key insights in fields such as epidemiology or cultural contagion where the introduction of spatial boundaries is pivotal.

  5. Generalized scaling relationships on transition metals: Influence of adsorbate-coadsorbate interactions

    NASA Astrophysics Data System (ADS)

    Majumdar, Paulami; Greeley, Jeffrey

    2018-04-01

    Linear scaling relations of adsorbate energies across a range of catalytic surfaces have emerged as a central interpretive paradigm in heterogeneous catalysis. They are, however, typically developed for low adsorbate coverages which are not always representative of realistic heterogeneous catalytic environments. Herein, we present generalized linear scaling relations on transition metals that explicitly consider adsorbate-coadsorbate interactions at variable coverages. The slopes of these scaling relations do not follow the simple bond counting principles that govern scaling on transition metals at lower coverages. The deviations from bond counting are explained using a pairwise interaction model wherein the interaction parameter determines the slope of the scaling relationship on a given metal at variable coadsorbate coverages, and the slope across different metals at fixed coadsorbate coverage is approximated by adding a coverage-dependent correction to the standard bond counting contribution. The analysis provides a compact explanation for coverage-dependent deviations from bond counting in scaling relationships and suggests a useful strategy for incorporation of coverage effects into catalytic trends studies.
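For context on the baseline the abstract deviates from: in the standard low-coverage (Abild-Pedersen) formulation, the bond-counting slope for an AH_x adsorbate scaled against its central atom A is gamma = (x_max - x)/x_max, where x_max is the maximum valency of A. The sketch below reproduces only that baseline, not the paper's coverage-dependent correction.

```python
def bond_counting_slope(x, x_max):
    """Low-coverage scaling slope gamma = (x_max - x) / x_max
    for AH_x adsorption energies scaled against the bare atom A."""
    return (x_max - x) / x_max

# Slopes for CH_x species scaled against atomic C (x_max = 4)
for x in range(4):
    print(f"CH{x}: gamma = {bond_counting_slope(x, 4)}")  # 1.0, 0.75, 0.5, 0.25
```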

  6. Simple scaling of cooperation in donor-recipient games.

    PubMed

    Berger, Ulrich

    2009-09-01

    We present a simple argument which proves a general version of the scaling phenomenon recently observed in donor-recipient games by Tanimoto [Tanimoto, J., 2009. A simple scaling of the effectiveness of supporting mutual cooperation in donor-recipient games by various reciprocity mechanisms. BioSystems 96, 29-34].

  7. Lagrangian space consistency relation for large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horn, Bart; Hui, Lam; Xiao, Xiao

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  8. Lagrangian space consistency relation for large scale structure

    DOE PAGES

    Horn, Bart; Hui, Lam; Xiao, Xiao

    2015-09-29

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias & Riotto and Peloso & Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. Furthermore, the simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  9. A simple model for calculating air pollution within street canyons

    NASA Astrophysics Data System (ADS)

    Venegas, Laura E.; Mazzeo, Nicolás A.; Dezzutti, Mariana C.

    2014-04-01

    This paper introduces the Semi-Empirical Urban Street (SEUS) model. SEUS is a simple mathematical model based on the scaling of air pollution concentration inside street canyons employing the emission rate, the width of the canyon, the dispersive velocity scale and the background concentration. The dispersive velocity scale depends on turbulent motions related to wind and traffic. The parameterisations of these turbulent motions include two dimensionless empirical parameters. Functional forms of these parameters have been obtained from full-scale data measured in street canyons in four European cities. The sensitivity of the SEUS model is studied analytically. Results show that relative errors in the evaluation of the two dimensionless empirical parameters have less influence on model uncertainties than uncertainties in other input variables. The model estimates NO2 concentrations using a simple photochemistry scheme. SEUS is applied to estimate NOx and NO2 hourly concentrations in an irregular and busy street canyon in the city of Buenos Aires. The statistical evaluation of results shows that there is a good agreement between estimated and observed hourly concentrations (e.g. fractional biases are -10.3% for NOx and +7.8% for NO2). The agreement between the estimated and observed values has also been analysed in terms of its dependence on wind speed and direction. The model shows a better performance for wind speeds >2 m s-1 than for lower wind speeds, and for leeward situations than for others. No significant discrepancies have been found between the results of the proposed model and those of a widely used operational dispersion model (OSPM), both using the same input information.
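The scaling ingredients the abstract lists (emission rate, canyon width, dispersive velocity scale, background) suggest a box-model form. The expression below is an assumed illustrative form, C = C_b + Q/(W * u_d), not the published SEUS parameterisation, and all numbers are hypothetical.

```python
def canyon_concentration(emission_rate, width, u_d, background):
    """Illustrative street-canyon box model (assumed form, not the SEUS equations):
    C = C_b + Q / (W * u_d).

    emission_rate: line-source emission per unit street length (ug m^-1 s^-1)
    width: canyon width (m); u_d: dispersive velocity scale (m s^-1)
    background: urban background concentration (ug m^-3)
    """
    return background + emission_rate / (width * u_d)

# Hypothetical rush-hour case
print(canyon_concentration(emission_rate=200.0, width=20.0, u_d=0.5, background=30.0))
# 30 + 200/10 = 50.0 ug/m^3
```

The structure makes the sensitivity result plausible: errors in u_d (which carries the empirical parameters) enter only through one term, while emission rate and background enter the estimate directly.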

  10. Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it

    2012-12-01

    A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluation. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations from multi-streaming at small scales and low redshifts.

  11. Convective Detrainment and Control of the Tropical Water Vapor Distribution

    NASA Astrophysics Data System (ADS)

    Kursinski, E. R.; Rind, D.

    2006-12-01

    Sherwood et al. (2006) developed a simple power law model describing the relative humidity distribution in the tropical free troposphere, where the power law exponent is the ratio of a drying time scale (tied to subsidence rates) to a moistening time scale, the average time between convective moistening events whose occurrence is described by a Poisson distribution. Sherwood et al. showed that the relative humidity distribution observed by GPS occultations and MLS is indeed close to a power law, approximately consistent with the simple model's prediction. Here we recast this simple model in terms of vertical length scales rather than time scales, in a manner that we think more correctly matches the model predictions to the observations. The subsidence is now expressed as the vertical distance the air mass has descended since it last detrained from a convective plume. The moisture source term becomes a profile of convective detrainment flux versus altitude. The vertical profile of the convective detrainment flux is deduced from the observed distribution of specific humidity at each altitude combined with sinking rates estimated from radiative cooling. The resulting free tropospheric detrainment profile increases with altitude above 3 km, somewhat like an exponential profile, which explains the approximate power law behavior observed by Sherwood et al. The observations also reveal a seasonal variation in the detrainment profile, reflecting changes in convective behavior expected by some based on observed seasonal changes in the vertical structure of convective regions. The simple model results will be compared with the moisture control mechanisms in a GCM with many additional mechanisms, the GISS climate model, as described in Rind (2006). References: Rind, D., 2006: Water-vapor feedback. In Frontiers of Climate Modeling, J. T. Kiehl and V. Ramanathan (eds), Cambridge University Press [ISBN-13 978-0-521-79132-8], 251-284. Sherwood, S., E. R. Kursinski and W. Read: A distribution law for free-tropospheric relative humidity, J. Clim., in press, 2006.
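    The power-law prediction can be checked with a short Monte Carlo sketch, assuming the simplest reading of the model: Poisson-distributed convective events reset air to saturation, and the air dries exponentially in between. The names t_dry and t_moist are illustrative stand-ins for the two time scales.

```python
import math
import random

def simulate_rh(t_dry, t_moist, n=100_000, seed=0):
    """Monte Carlo version of the construction above: air is reset to
    saturation by Poisson-distributed moistening events (mean interval
    t_moist), then dries exponentially with time scale t_dry, so
    RH = exp(-t / t_dry) with t the time since the last event.
    The implied CDF is P(RH < x) = x**r, a power law with exponent
    r = t_dry / t_moist."""
    rng = random.Random(seed)
    return [math.exp(-rng.expovariate(1.0 / t_moist) / t_dry)
            for _ in range(n)]

# drying time scale half the mean time between moistening events
t_dry, t_moist = 5.0, 10.0
samples = simulate_rh(t_dry, t_moist)
# empirical CDF at RH = 0.5 should sit near the power law 0.5**r
frac = sum(s < 0.5 for s in samples) / len(samples)
```

The empirical cumulative fraction below any humidity x converges to x**(t_dry / t_moist), the power-law behavior reported in the abstract.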

  12. Scale dependent compensational stacking of channelized sedimentary deposits

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Straub, K. M.; Hajek, E. A.

    2010-12-01

    Compensational stacking, the tendency of sediment transport systems to preferentially fill topographic lows, thereby smoothing out topographic relief, is a concept used in the interpretation of the stratigraphic record. Recently, a metric was developed to quantify the strength of compensation in sedimentary basins by comparing observed stacking patterns to what would be expected from simple, uncorrelated stacking. This method uses the rate of decay of spatial variability in sedimentation between picked depositional horizons with increasing vertical stratigraphic averaging distance. We explore how this metric varies as a function of stratigraphic scale using data from physical experiments, stratigraphy exposed in outcrops, and numerical models. In an experiment conducted at Tulane University’s Sediment Dynamics Laboratory, the topography of a channelized delta formed by weakly cohesive sediment was monitored along flow-perpendicular transects at a high temporal resolution relative to channel kinematics. Over the course of this experiment a uniform relative subsidence pattern, designed to isolate autogenic processes, resulted in the construction of a stratigraphic package 25 times as thick as the depth of the experimental channels. We observe a scale dependence in the compensational stacking of deposits, set by the system’s avulsion time scale. Above the avulsion time scale deposits stack purely compensationally, but below this time scale deposits stack somewhere between randomly and deterministically. The well-exposed Ferris Formation (Cretaceous/Paleogene, Hanna Basin, Wyoming, USA) also shows scale-dependent stratigraphic organization which appears to be set by an avulsion time scale. Finally, we utilize simple object-based models to illustrate how channel avulsions influence compensation in alluvial basins.
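    The decay-rate metric described above can be sketched numerically: the standard deviation of deposition averaged over a window of increasing stratigraphic thickness decays as a power law, with exponent near 0.5 for uncorrelated stacking and near 1 for purely compensational stacking. The implementation below is a minimal illustration of that idea, not the published metric, and the synthetic "deposit" is made-up uncorrelated noise.

```python
import math
import random

def sigma_ss(deposit, w):
    """Standard deviation, across space and disjoint window starts, of
    deposition averaged over w consecutive depositional increments."""
    nt, nx = len(deposit), len(deposit[0])
    rates = [sum(deposit[t][x] for t in range(t0, t0 + w)) / w
             for t0 in range(0, nt - w + 1, w)
             for x in range(nx)]
    mean = sum(rates) / len(rates)
    return (sum((r - mean) ** 2 for r in rates) / len(rates)) ** 0.5

def compensation_index(deposit, windows):
    """Exponent kappa from a log-log fit of sigma_ss against averaging
    window: ~0.5 for uncorrelated (random) stacking, ~1.0 for purely
    compensational stacking."""
    xs = [math.log(w) for w in windows]
    ys = [math.log(sigma_ss(deposit, w)) for w in windows]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# uncorrelated synthetic deposition: each increment fills locations
# independently, so kappa should come out near 0.5
rng = random.Random(1)
deposit = [[rng.random() for _ in range(50)] for _ in range(512)]
k = compensation_index(deposit, [1, 2, 4, 8, 16])
```

A deposit built with anti-correlated (low-filling) increments would instead push the fitted exponent toward 1, the compensational end-member.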

  13. Slope stability and bearing capacity of landfills and simple on-site test methods.

    PubMed

    Yamawaki, Atsushi; Doi, Yoichi; Omine, Kiyoshi

    2017-07-01

    This study discusses strength characteristics (slope stability, bearing capacity, etc.) of waste landfills through on-site tests carried out at 29 locations in 19 sites in Japan and three other countries, and proposes simple methods to test and assess the mechanical strength of landfills on site. The possibility of using a landfill site was also investigated by a full-scale eccentric loading test. As a result, landfills containing plastics or other fibrous materials longer than about 10 cm were found to be resilient and hard to yield. The on-site full-scale test showed that no differential settlement occurred. The repose angle test proposed as a simple on-site test method was confirmed to be a good indicator for slope stability assessment. The repose angle test suggested that landfills with high, near-saturation water content have considerably poorer slope stability. The results of the repose angle test and the impact acceleration test were related to the internal friction angle and the cohesion, respectively. In addition, the air pore volume ratio measured by an on-site air pore volume ratio test is likely to be related to various strength parameters.

  14. Using relational databases for improved sequence similarity searching and large-scale genomic analyses.

    PubMed

    Mackey, Aaron J; Pearson, William R

    2004-10-01

    Relational databases are designed to integrate diverse types of information and manage large sets of search results, greatly simplifying genome-scale analyses. They are essential for the management and analysis of large-scale sequence analyses, and can also be used to improve the statistical significance of similarity searches by focusing on subsets of sequence libraries most likely to contain homologs. This unit describes using relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It covers the installation and use of a simple protein sequence database, seqdb_demo, which serves as the basis for the other protocols. These include basic use of the database to generate a novel sequence library subset, extension of seqdb_demo to store sequence similarity search results, and use of various kinds of stored search results to address aspects of comparative genomic analysis.
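    The library-subsetting idea above can be sketched with an in-memory SQLite database. This is a minimal, hypothetical analogue of the seqdb_demo protocol, not its actual schema; the accessions, taxa and sequences are made-up toy data.

```python
import sqlite3

# toy protein sequence library in a relational table
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE protein (acc TEXT PRIMARY KEY, taxon TEXT, seq TEXT)")
con.executemany("INSERT INTO protein VALUES (?, ?, ?)", [
    ("P1", "E. coli", "MKTAYIAK"),
    ("P2", "E. coli", "MSAKQRTV"),
    ("P3", "H. sapiens", "MEEPQSDP"),
])
# library subset: only bacterial sequences, e.g. to sharpen the
# statistics of a similarity search against likely homologs
subset = con.execute(
    "SELECT acc FROM protein WHERE taxon = ? ORDER BY acc",
    ("E. coli",)).fetchall()
```

The same table can be joined against a table of stored search hits to support the comparative analyses the unit describes.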

  15. Scaling relations of halo cores for self-interacting dark matter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Henry W.; Loeb, Abraham, E-mail: henrylin@college.harvard.edu, E-mail: aloeb@cfa.harvard.edu

    2016-03-01

    Using a simple analytic formalism, we demonstrate that significant dark matter self-interactions produce halo cores that obey scaling relations nearly independent of the underlying particle physics parameters such as the annihilation cross section and the mass of the dark matter particle. For dwarf galaxies, we predict that the core density ρ_c and the core radius r_c should obey ρ_c r_c ≈ 41 M_⊙ pc^-2 with a weak mass dependence ∼M^0.2. Remarkably, such a scaling relation has recently been empirically inferred. Scaling relations involving core mass, core radius, and core velocity dispersion are predicted and agree well with observational data. By calibrating against numerical simulations, we predict the scatter in these relations and find them to be in excellent agreement with existing data. Future observations can test our predictions for different halo masses and redshifts.
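    The quoted numbers can be turned into a small helper. This is an illustrative reading of the abstract's scaling relation only (the ~M^0.2 dependence is applied as a multiplicative factor relative to an unspecified reference mass), not the paper's full formalism.

```python
def core_density(r_c_pc, halo_mass_ratio=1.0):
    """Core density (Msun pc^-3) implied by the scaling relation
    rho_c * r_c ≈ 41 Msun pc^-2, with the weak ~M^0.2 halo-mass
    dependence applied relative to a reference dwarf-galaxy mass."""
    return 41.0 * halo_mass_ratio ** 0.2 / r_c_pc

# a 500 pc core at the reference mass
rho_c = core_density(500.0)
```

Because the exponent is only 0.2, a factor of 32 in halo mass changes the predicted surface density by just a factor of 2, which is why the relation is nearly universal.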

  16. Vortex breakdown in simple pipe bends

    NASA Astrophysics Data System (ADS)

    Ault, Jesse; Shin, Sangwoo; Stone, Howard

    2016-11-01

    Pipe bends and elbows are among the most common fluid mechanics elements. However, despite their ubiquity and the extensive amount of research related to these common, simple geometries, unexpected complexities remain. We show that for a range of geometries and flow conditions, these simple flows experience unexpected fluid dynamical bifurcations resembling the bubble-type vortex breakdown phenomenon. Specifically, we show with simulations and experiments that recirculation zones develop within the bends under certain conditions. As a consequence, fluid and particles can remain trapped within these structures for unexpectedly long time scales. We also present simple techniques to mitigate this recirculation effect, which can potentially have impact across industries ranging from biomedical and chemical processing to food and health sciences.

  17. A PORTRAIT OF COLD GAS IN GALAXIES AT 60 pc RESOLUTION AND A SIMPLE METHOD TO TEST HYPOTHESES THAT LINK SMALL-SCALE ISM STRUCTURE TO GALAXY-SCALE PROCESSES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leroy, Adam K.; Hughes, Annie; Schruba, Andreas

    2016-11-01

    The cloud-scale density, velocity dispersion, and gravitational boundedness of the interstellar medium (ISM) vary within and among galaxies. In turbulent models, these properties play key roles in the ability of gas to form stars. New high-fidelity, high-resolution surveys offer the prospect of measuring these quantities across galaxies. We present a simple approach to make such measurements and to test hypotheses that link small-scale gas structure to star formation and galactic environment. Our calculations capture the key physics of the Larson scaling relations, and we show good correspondence between our approach and a traditional “cloud properties” treatment. However, we argue that our method is preferable in many cases because of its simple, reproducible characterization of all emission. Using low-J ¹²CO data from recent surveys, we characterize the molecular ISM at 60 pc resolution in the Antennae, the Large Magellanic Cloud (LMC), M31, M33, M51, and M74. We report the distributions of surface density, velocity dispersion, and gravitational boundedness at 60 pc scales and show galaxy-to-galaxy and intragalaxy variations in each. The distribution of flux as a function of surface density appears roughly lognormal with a 1σ width of ∼0.3 dex, though the center of this distribution varies from galaxy to galaxy. The 60 pc resolution line width and molecular gas surface density correlate well, which is a fundamental behavior expected for virialized or free-falling gas. Varying the measurement scale for the LMC and M31, we show that the molecular ISM has higher surface densities, lower line widths, and more self-gravity at smaller scales.
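    The "width in dex" statistic quoted above can be sketched directly: it is the flux-weighted standard deviation of log10 surface density across sightlines. The synthetic lognormal sample below is a toy stand-in for real map data, with equal flux weights for simplicity.

```python
import math
import random

def flux_weighted_width_dex(surface_density, flux):
    """Flux-weighted 1-sigma width, in dex, of the distribution of
    log10 surface density, the summary statistic quoted above for the
    60 pc resolution CO maps."""
    logs = [math.log10(s) for s in surface_density]
    total = sum(flux)
    mean = sum(f * l for f, l in zip(flux, logs)) / total
    var = sum(f * (l - mean) ** 2 for f, l in zip(flux, logs)) / total
    return math.sqrt(var)

# synthetic sightlines: lognormal surface densities with 0.3 dex scatter
rng = random.Random(2)
sigma_mol = [10 ** rng.gauss(2.0, 0.3) for _ in range(20000)]
flux = [1.0] * len(sigma_mol)       # equal-weight toy case
w = flux_weighted_width_dex(sigma_mol, flux)
```

With real data the flux weights make the statistic emission-weighted, which is what makes it reproducible without segmenting the map into discrete clouds.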

  18. The validity of a simple outcome measure to assess stuttering therapy.

    PubMed

    Huinck, Wendy; Rietveld, Toni

    2007-01-01

    The validity of a simple, quick self-assessment (SA) scale was tested for establishing progress during or after stuttering therapy. The scores on the SA scale were related to (1) objective measures (percentage of stuttered syllables, and syllables per minute) and (2) (self-)evaluation tests (self-evaluation questionnaires and perceptual evaluations or judgments of disfluency, naturalness and comfort by naïve listeners). Data were collected from two groups of stutterers at four measurement times: pretherapy, posttherapy, 12 months after therapy and 24 months after therapy. The first group attended the Comprehensive Stuttering Program, an integrated program based on fluency shaping techniques, and the second group participated in a Dutch group therapy, the Doetinchem Method, which focuses on emotions and cognitions related to stuttering. Results showed similar score patterns over time on the SA scale, the self-evaluation questionnaires and the objective measures, and significant correlations between the SA scale and syllables per minute, percentage of stuttered syllables, the Struggle subscale of the Perceptions of Stuttering Inventory, and judged fluency on the T1-T2 difference scores. We concluded that the validity of the SA measure was demonstrated and therefore encourage the use of such an instrument when (stuttering) treatment efficacy is studied.

  19. Lagrangian space consistency relation for large scale structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horn, Bart; Hui, Lam; Xiao, Xiao, E-mail: bh2478@columbia.edu, E-mail: lh399@columbia.edu, E-mail: xx2146@columbia.edu

    Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space.

  20. Communication: Simple liquids' high-density viscosity

    NASA Astrophysics Data System (ADS)

    Costigliola, Lorenzo; Pedersen, Ulf R.; Heyes, David M.; Schrøder, Thomas B.; Dyre, Jeppe C.

    2018-02-01

    This paper argues that the viscosity of simple fluids at densities above that of the triple point is a specific function of temperature relative to the freezing temperature at the density in question. The proposed viscosity expression, which is arrived at in part by reference to the isomorph theory of systems with hidden scale invariance, describes computer simulations of the Lennard-Jones system as well as argon and methane experimental data and simulation results for an effective-pair-potential model of liquid sodium.

  1. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    NASA Technical Reports Server (NTRS)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  2. Mesoscale Dynamical Regimes in the Midlatitudes

    NASA Astrophysics Data System (ADS)

    Craig, G. C.; Selz, T.

    2018-01-01

    The atmospheric mesoscales are characterized by a complex variety of meteorological phenomena that defy simple classification. Here a full space-time spectral analysis is carried out, based on a 7 day convection-permitting simulation of springtime midlatitude weather on a large domain. The kinetic energy is largest at synoptic scales, and on the mesoscale it is largely confined to an "advective band" where space and time scales are related by a constant of proportionality which corresponds to a velocity scale of about 10 m s-1. Computing the relative magnitude of different terms in the governing equations allows the identification of five dynamical regimes. These are tentatively identified as quasi-geostrophic flow, propagating gravity waves, stationary gravity waves related to orography, acoustic modes, and a weak temperature gradient regime, where vertical motions are forced by diabatic heating.
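    The "advective band" above ties spatial and temporal scales together through a roughly constant velocity of about 10 m/s. A minimal sketch of that relation, with a factor-of-3 band width that is an illustrative choice rather than a value from the paper:

```python
def in_advective_band(length_m, time_s, v0=10.0, tolerance=3.0):
    """True if a (length, time) scale pair lies within a factor of
    `tolerance` of the advective line L = v0 * T, with v0 the
    ~10 m/s velocity scale quoted above."""
    v = length_m / time_s
    return v0 / tolerance <= v <= v0 * tolerance

# a 100 km mesoscale feature evolving over ~3 hours sits on the band
on_band = in_advective_band(100e3, 1e4)
```

Pairs far off this line fall into the other regimes the spectral analysis identifies, such as fast gravity-wave or acoustic modes.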

  3. Program Manipulates Plots For Effective Display

    NASA Technical Reports Server (NTRS)

    Bauer, F.; Downing, J.

    1990-01-01

    Windowed Observation of Relative Motion (WORM) computer program primarily intended for generation of simple X-Y plots from data created by other programs. Enables user to label, zoom, and change scales of various plots. Three-dimensional contour and line plots provided. Written in PASCAL.

  4. Developing a protocol for creating microfluidic devices with a 3D printer, PDMS, and glass

    NASA Astrophysics Data System (ADS)

    Collette, Robyn; Novak, Eric; Shirk, Kathryn

    2015-03-01

    Microfluidics research requires the design and fabrication of devices that have the ability to manipulate small volumes of fluid, typically ranging from microliters to picoliters. These devices are used for a wide range of applications including the assembly of materials and testing of biological samples. Many methods have been previously developed to create microfluidic devices, including traditional nanolithography techniques. However, these traditional techniques are cost-prohibitive for many small-scale laboratories. This research explores a relatively low-cost technique using a 3D printed master, which is used as a template for the fabrication of polydimethylsiloxane (PDMS) microfluidic devices. The masters are designed using computer aided design (CAD) software and can be printed and modified relatively quickly. We have developed a protocol for creating simple microfluidic devices using a 3D printer and PDMS adhered to glass. This relatively simple and lower-cost technique can now be scaled to more complicated device designs and applications. Funding provided by the Undergraduate Research Grant Program at Shippensburg University and the Student/Faculty Research Engagement Grants from the College of Arts and Sciences at Shippensburg University.

  5. A pseudo-sound constitutive relationship for the dilatational covariances in compressible turbulence: An analytical theory

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1995-01-01

    The mathematical consequences of a few simple scaling assumptions about the effects of compressibility are explored using a simple singular perturbation idea and the methods of statistical fluid mechanics. Representations for the pressure-dilatation and dilatational dissipation covariances appearing in single-point moment closures for compressible turbulence are obtained. While the results are expressed in the context of a second-order statistical closure, they provide some interesting and very clear physical metaphors for the effects of compressibility that have not been seen using more traditional linear stability methods. In the limit of homogeneous turbulence with quasi-normal large scales, the expressions derived are - in the low turbulent Mach number limit - asymptotically exact. The expressions obtained are functions of the rate of change of the turbulence energy, its correlation length scale, and the relative time scale of the cascade rate. The expressions for the dilatational covariances contain constants which have a precise and definite physical significance; they are related to various integrals of the longitudinal velocity correlation. The pressure-dilatation covariance is found to be a nonequilibrium phenomenon related to the time rate of change of the internal energy and the kinetic energy of the turbulence. Also of interest is the fact that the representation for the dilatational dissipation in turbulence, with or without shear, features a dependence on the Reynolds number. This article is a documentation of an analytical investigation of the implications of a pseudo-sound theory for the effects of compressibility.

  6. Apparatus and methodology for fire gas characterization by means of animal exposure

    NASA Technical Reports Server (NTRS)

    Marcussen, W. H.; Hilado, C. J.; Furst, A.; Leon, H. A.; Kourtides, D. A.; Parker, J. A.; Butte, J. C.; Cummins, J. M.

    1976-01-01

    While there is a great deal of information available from small-scale laboratory experiments and for relatively simple mixtures of gases, considerable uncertainty exists regarding appropriate bioassay techniques for the complex mixture of gases generated in full-scale fires. Apparatus and methodology have been developed based on current state of the art for determining the effects of fire gases in the critical first 10 minutes of a full-scale fire on laboratory animals. This information is presented for its potential value and use while further improvements are being made.

  7. Pictorial depth probed through relative sizes

    PubMed Central

    Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J

    2011-01-01

    In the physical environment familiar size is an effective depth cue because the distance from the eye to an object equals the ratio of its physical size to its angular extent in the visual field. Such simple geometrical relations do not apply to pictorial space, since the eye itself is not in pictorial space, and consequently the notion “distance from the eye” is meaningless. Nevertheless, relative size in the picture plane is often used by visual artists to suggest depth differences. The depth domain has no natural origin, nor a natural unit; thus only ratios of depth differences could have an invariant significance. We investigate whether the pictorial relative size cue yields coherent depth structures in pictorial spaces. Specifically, we measure the depth differences for all pairs of points in a 20-point configuration in pictorial space, and we account for these observations through 19 independent parameters (the depths of the points modulo an arbitrary offset), with no meaningful residuals. We discuss a simple formal framework that allows one to handle individual differences. We also compare the depth scale obtained by way of this method with depth scales obtained in totally different ways, finding generally good agreement. PMID:23145258
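    The inference step described above (19 free depths accounting for all pairwise depth differences of 20 points) has a simple least-squares form: with a complete antisymmetric matrix of differences, averaging each row recovers the depths up to an offset. A minimal sketch for the noiseless case, with a hypothetical 5-point configuration:

```python
def depths_from_differences(d):
    """Least-squares depths (modulo an arbitrary offset) from a complete
    antisymmetric matrix of pairwise depth differences d[i][j] ~ z_i - z_j.
    Averaging row i yields z_i minus the mean depth, i.e. the
    least-squares solution with the offset fixed so the depths sum to
    zero."""
    n = len(d)
    return [sum(row) / n for row in d]

# noiseless toy check with a 5-point configuration
z_true = [0.0, 1.0, 4.0, 2.5, -1.0]
d = [[zi - zj for zj in z_true] for zi in z_true]
z = depths_from_differences(d)
```

With noisy observed differences the same row average remains the least-squares estimate, which is why pairwise data over-determine the 19 free parameters and leave interpretable residuals.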

  8. Calving relation for tidewater glaciers based on detailed stress field analysis

    NASA Astrophysics Data System (ADS)

    Mercenier, Rémy; Lüthi, Martin P.; Vieli, Andreas

    2018-02-01

    Ocean-terminating glaciers in Arctic regions have undergone rapid dynamic changes in recent years, which have been related to a dramatic increase in calving rates. Iceberg calving is a dynamical process strongly influenced by the geometry at the terminus of tidewater glaciers. We investigate the effect of varying water level, calving front slope and basal sliding on the state of stress and flow regime for an idealized grounded ocean-terminating glacier and scale these results with ice thickness and velocity. Results show that water depth and calving front slope strongly affect the stress state while the effect from spatially uniform variations in basal sliding is much smaller. An increased relative water level or a reclining calving front slope strongly decrease the stresses and velocities in the vicinity of the terminus and hence have a stabilizing effect on the calving front. We find that surface stress magnitude and distribution for simple geometries are determined solely by the water depth relative to ice thickness. Based on this scaled relationship for the stress peak at the surface, and assuming a critical stress for damage initiation, we propose a simple and new parametrization for calving rates for grounded tidewater glaciers that is calibrated with observations.

  9. A Kinematically Consistent Two-Point Correlation Function

    NASA Technical Reports Server (NTRS)

    Ristorcelli, J. R.

    1998-01-01

    A simple kinematically consistent expression for the longitudinal two-point correlation function related to both the integral length scale and the Taylor microscale is obtained. On the inner scale, in a region of width inversely proportional to the turbulent Reynolds number, the function has the appropriate curvature at the origin. The expression for two-point correlation is related to the nonlinear cascade rate, or dissipation epsilon, a quantity that is carried as part of a typical single-point turbulence closure simulation. Constructing an expression for the two-point correlation whose curvature at the origin is the Taylor microscale incorporates one of the fundamental quantities characterizing turbulence, epsilon, into a model for the two-point correlation function. The integral of the function also gives, as is required, an outer integral length scale of the turbulence independent of viscosity. The proposed expression is obtained by kinematic arguments; the intention is to produce a practically applicable expression in terms of simple elementary functions that allow an analytical evaluation, by asymptotic methods, of diverse functionals relevant to single-point turbulence closures. Using the expression devised an example of the asymptotic method by which functionals of the two-point correlation can be evaluated is given.
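    The two kinematic constraints named above, the integral of f(r) giving the outer length scale and the curvature at the origin giving the Taylor microscale, can be evaluated numerically for any model correlation. The Gaussian f used below is a hypothetical stand-in, not the paper's proposed expression; for it the exact values are L = sqrt(pi/2) and lambda = sqrt(2) when the length parameter a = 1.

```python
import math

def length_scales(f, dr=1e-3, r_max=30.0):
    """Integral length scale L = integral of f(r) from 0 to infinity
    (trapezoid rule) and Taylor microscale lam from the curvature at
    the origin, f(r) ~ 1 - r**2 / lam**2, i.e. lam = sqrt(-2 / f''(0))."""
    n = int(r_max / dr)
    L = sum(0.5 * (f(i * dr) + f((i + 1) * dr)) * dr for i in range(n))
    fpp0 = (f(dr) - 2.0 * f(0.0) + f(-dr)) / dr ** 2
    return L, math.sqrt(-2.0 / fpp0)

# hypothetical Gaussian model correlation with length parameter a = 1
a = 1.0
L, lam = length_scales(lambda r: math.exp(-r * r / (2.0 * a * a)))
```

A kinematically consistent expression of the kind the abstract proposes would additionally tie lam to the dissipation rate carried by a single-point closure.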

  10. Fractal Signals & Space-Time Cartoons

    NASA Astrophysics Data System (ADS)

    Oetama, H. C. Jakob; Maksoed, W. H.

    2016-03-01

    In ``Theory of Scale Relativity'' (1991), L. Nottale states that ``scale relativity is a geometrical & fractal space-time theory''. This is compared with ``a unified, wavelet based framework for efficiently synthesizing, analyzing & processing several broad classes of fractal signals'' (Gregory W. Wornell, ``Signal Processing with Fractals'', 1995), and further with Fig. 1.1 there, a simple waveform from a statistically scale-invariant random process [ibid., p. 3]. Drawing also on the accompanying RLE Technical Report 566, ``Synthesis, Analysis & Processing of Fractal Signals'' (Wornell, Oct 1991), we intend to relate a Δt + (1 - β Δt) ... in Petersen et al., ``Scale invariant properties of public debt growth'', 2010, 38006-p2, to 1/{1 - (2α(λ)/3π) ln(λ/r)}, depicted in Laurent Nottale, 1991, p. 24. Acknowledgment is devoted to the late H.E. Mr. Brigadier General (TNI, rtd.) Prof. Ir. Handojo.

  11. Temporal variability in phosphorus transfers: classifying concentration-discharge event dynamics

    NASA Astrophysics Data System (ADS)

    Haygarth, P.; Turner, B. L.; Fraser, A.; Jarvis, S.; Harrod, T.; Nash, D.; Halliwell, D.; Page, T.; Beven, K.

    The importance of temporal variability in relationships between phosphorus (P) concentration (Cp) and discharge (Q) is linked to a simple means of classifying the circumstances of Cp-Q relationships in terms of functional types of response. New experimental data at the upstream interface of grassland soil and catchment systems at a range of scales (lysimeters to headwaters) in England and Australia are used to demonstrate the potential of such an approach. Three types of event are defined as Types 1-3, depending on whether the relative change in Q exceeds the relative change in Cp (Type 1), whether Cp and Q are positively inter-related (Type 2) and whether Cp varies yet Q is unchanged (Type 3). The classification helps to characterise circumstances that can be explained mechanistically in relation to (i) the scale of the study (with a tendency towards Type 1 in small scale lysimeters), (ii) the form of P with a tendency for Type 1 for soluble (i.e., <0.45 μm P forms) and (iii) the sources of P with Type 3 dominant where P availability overrides transport controls. This simple framework provides a basis for development of a more complex and quantitative classification of Cp-Q relationships that can be developed further to contribute to future models of P transfer and delivery from slope to stream. Studies that evaluate the temporal dynamics of the transfer of P are currently grossly under-represented in comparison with models based on static/spatial factors.
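    The Type 1-3 scheme above can be sketched as a small classifier over the relative changes in discharge and concentration during an event. The decision boundaries below are a simplified reading of the classification, not the authors' operational definitions.

```python
def classify_event(cp_change, q_change, tol=1e-6):
    """Classify a phosphorus concentration-discharge event following
    the Type 1-3 scheme above, given relative changes in Cp and Q
    (e.g. (max - min) / min over the event):
    Type 3 - Cp varies while Q is essentially unchanged;
    Type 1 - the relative change in Q exceeds that in Cp;
    Type 2 - otherwise, Cp and Q vary together with Cp dominant."""
    if abs(q_change) < tol and abs(cp_change) > tol:
        return "Type 3"
    if abs(q_change) > abs(cp_change):
        return "Type 1"
    return "Type 2"
```

For example, a transport-limited lysimeter event with a large discharge swing classifies as Type 1, while a source-limited event with stable flow classifies as Type 3.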

  12. Demonstration of leapfrogging for implementing nonlinear model predictive control on a heat exchanger.

    PubMed

    Sridhar, Upasana Manimegalai; Govindarajan, Anand; Rhinehart, R Russell

    2016-01-01

    This work reveals the applicability of a relatively new optimization technique, Leapfrogging, for both nonlinear regression modeling and a methodology for nonlinear model-predictive control. Both are relatively simple, yet effective. The application on a nonlinear, pilot-scale, shell-and-tube heat exchanger reveals practicability of the techniques. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
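    The optimizer named above can be sketched in a few lines. This is a simplified reading of the Leapfrogging idea (the worst of a set of random "players" repeatedly leaps into the window on the far side of the best player), not the authors' implementation, and the test function and bounds are illustrative.

```python
import random

def leapfrog_minimize(f, bounds, n_players=20, iters=5000, seed=3):
    """Minimal Leapfrogging-style minimizer: at each step the worst
    player leaps to a random point in the interval reflected through
    the best player, clipped to the bounds, so the player cloud
    contracts around progressively better points."""
    rng = random.Random(seed)
    players = [[rng.uniform(lo, hi) for lo, hi in bounds]
               for _ in range(n_players)]
    for _ in range(iters):
        vals = [f(p) for p in players]
        best = players[vals.index(min(vals))]
        worst_i = vals.index(max(vals))
        worst = players[worst_i]
        players[worst_i] = [
            min(max(b + rng.random() * (b - w), lo), hi)
            for b, w, (lo, hi) in zip(best, worst, bounds)]
    vals = [f(p) for p in players]
    best_val = min(vals)
    return best_val, players[vals.index(best_val)]

# convex test function with its minimum at (1, 2)
fval, x = leapfrog_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2,
                            bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

The same leap rule drives both uses in the abstract: fitting nonlinear regression models and solving the optimization inside nonlinear model-predictive control.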

  13. Tsunamis generated by subaerial mass flows

    USGS Publications Warehouse

    Walder, S.J.; Watts, P.; Sorensen, O.E.; Janssen, K.

    2003-01-01

    Tsunamis generated in lakes and reservoirs by subaerial mass flows pose distinctive problems for hazards assessment because the domain of interest is commonly the "near field," beyond the zone of complex splashing but close enough to the source that wave propagation effects are not predominant. Scaling analysis of the equations governing water wave propagation shows that near-field wave amplitude and wavelength should depend on certain measures of mass flow dynamics and volume. The scaling analysis motivates a successful collapse (in dimensionless space) of data from two distinct sets of experiments with solid block "wave makers." To first order, wave amplitude/water depth is a simple function of the ratio of dimensionless wave maker travel time to dimensionless wave maker volume per unit width. Wave amplitude data from previous laboratory investigations with both rigid and deformable wave makers follow the same trend in dimensionless parameter space as our own data. The characteristic wavelength/water depth for all our experiments is simply proportional to dimensionless wave maker travel time, which is itself given approximately by a simple function of wave maker length/water depth. Wave maker shape and rigidity do not otherwise influence wave features. Application of the amplitude scaling relation to several historical events yields "predicted" near-field wave amplitudes in reasonable agreement with measurements and observations. Together, the scaling relations for near-field amplitude, wavelength, and submerged travel time provide key inputs necessary for computational wave propagation and hazards assessment.
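    The collapse described above maps amplitude/depth to the ratio of dimensionless travel time to dimensionless volume per unit width. The helper below computes those groups under shallow-water normalizations (time by sqrt(depth/g), area by depth squared); these normalizations are illustrative assumptions, and the empirical fit from the data collapse is not reproduced here.

```python
import math

G = 9.81  # gravitational acceleration, m s^-2

def dimensionless_groups(travel_time_s, volume_per_width_m2, depth_m):
    """Dimensionless wave maker travel time, dimensionless volume per
    unit width, and their ratio, which (per the collapse described
    above) controls near-field amplitude / water depth."""
    t_star = travel_time_s / math.sqrt(depth_m / G)
    v_star = volume_per_width_m2 / depth_m ** 2
    return t_star, v_star, t_star / v_star

# a mass flow entering 50 m deep water over 10 s, 5000 m^2 per unit width
t_star, v_star, ratio = dimensionless_groups(10.0, 5000.0, 50.0)
```

Fast, voluminous flows (small ratio) produce relatively larger near-field waves than slow, thin ones, consistent with the trend of the collapsed data.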

  14. A simple inertial model for Neptune's zonal circulation

    NASA Technical Reports Server (NTRS)

    Allison, Michael; Lumetta, James T.

    1990-01-01

    Voyager imaging observations of zonal cloud-tracked winds on Neptune revealed a strongly subrotational equatorial jet with a speed approaching 500 m/s and generally decreasing retrograde motion toward the poles. The wind data are interpreted with a speculative but revealingly simple model based on steady gradient flow balance and an assumed global homogenization of potential vorticity for shallow layer motion. The prescribed model flow profile relates the equatorial velocity to the mid-latitude shear, in reasonable agreement with the available data, and implies a global horizontal deformation scale L(D) of about 3000 km.

  15. Quality of anaesthesia-related information accessed via Internet searches.

    PubMed

    Caron, S; Berton, J; Beydon, L

    2007-08-01

    We conducted a study to examine the quality and stability of information available from the Internet on four anaesthesia-related topics. In January 2006, we searched using four key words (porphyria, scleroderma, transfusion risk, and epidural analgesia risk) with five search engines (Google, HotBot, AltaVista, Excite, and Yahoo). We used a published scoring system (NetScoring) to evaluate the first 15 sites identified by each of these 20 searches. We also used a simple four-point scale to assess the first 100 sites in the Google search on one of our four topics ('epidural analgesia risk'). In November 2006, we conducted a second evaluation, using three search engines (Google, AltaVista, and Yahoo) with 14 synonyms for 'epidural analgesia risk'. The five search engines performed similarly. NetScoring scores were lower for transfusion risk (P < 0.001). One or more high-quality sites were identified consistently among the first 15 sites in each search. Quality scored using the simple scale correlated closely with medical content and design by NetScoring and with the number of references (P < 0.05). Synonyms of 'epidural analgesia risk' yielded similar results. The quality of accessed information improved somewhat over the 11 month period with Yahoo and AltaVista, but declined with Google. The Internet is a valuable tool for obtaining medical information, but the quality of websites varies between different topics. A simple rating scale may facilitate quality scoring of individual websites. Differences in precise search terms used for a given topic did not appear to affect the quality of the information obtained.

  16. Modeling runoff and microbial overland transport with KINEROS2/STWIR model: Accuracy and uncertainty as affected by source of infiltration parameters

    EPA Science Inventory

    Infiltration is important to modeling the overland transport of microorganisms in environmental waters. In watershed- and hillslope scale-models, infiltration is commonly described by simple equations relating infiltration rate to soil saturated conductivity and by empirical para...

  17. A thermal-based remote sensing modeling system for estimating evapotranspiration from field to global scales

    USDA-ARS?s Scientific Manuscript database

    Thermal-infrared remote sensing of land surface temperature provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition. This paper describes a robust but relatively simple thermal-based energy balance model that parameterizes the key soil/s...

  18. A novel, simple scale for assessing the symptom severity of atrial fibrillation at the bedside: the CCS-SAF scale.

    PubMed

    Dorian, Paul; Cvitkovic, Suzan S; Kerr, Charles R; Crystal, Eugene; Gillis, Anne M; Guerra, Peter G; Mitchell, L Brent; Roy, Denis; Skanes, Allan C; Wyse, D George

    2006-04-01

    The severity of symptoms caused by atrial fibrillation (AF) is extremely variable. Quantifying the effect of AF on patient well-being is important but there is no simple, commonly accepted measure of the effect of AF on quality of life (QoL). Current QoL measures are cumbersome and impractical for clinical use. To create a simple, concise and readily usable AF severity score to facilitate treatment decisions and physician communication. The Canadian Cardiovascular Society (CCS) Severity of Atrial Fibrillation (SAF) Scale is analogous to the CCS Angina Functional Class. The CCS-SAF score is determined using three steps: documentation of possible AF-related symptoms (palpitations, dyspnea, dizziness/syncope, chest pain, weakness/fatigue); determination of symptom-rhythm correlation; and assessment of the effect of these symptoms on patient daily function and QoL. CCS-SAF scores range from 0 (asymptomatic) to 4 (severe impact of symptoms on QoL and activities of daily living). Patients are also categorized by type of AF (paroxysmal versus persistent/permanent). The CCS-SAF Scale will be validated using accepted measures of patient-perceived severity of symptoms and impairment of QoL and will require 'field testing' to ensure its applicability and reproducibility in the clinical setting. This type of symptom severity scale, like the New York Heart Association Functional Class for heart failure symptoms and the CCS Functional Class for angina symptoms, trades precision and comprehensiveness for simplicity and ease of use at the bedside. A common language to quantify AF severity may help to improve patient care.

  19. Enhancement of orientation gradients during simple shear deformation by application of simple compression

    NASA Astrophysics Data System (ADS)

    Jahedi, Mohammad; Ardeljan, Milan; Beyerlein, Irene J.; Paydar, Mohammad Hossein; Knezevic, Marko

    2015-06-01

    We use a multi-scale, polycrystal plasticity micromechanics model to study the development of orientation gradients within crystals deforming by slip. At the largest scale, the model is a full-field crystal plasticity finite element model with explicit 3D grain structures created by DREAM.3D, and at the finest scale, at each integration point, slip is governed by a dislocation density based hardening law. For deformed polycrystals, the model predicts intra-granular misorientation distributions that follow well the scaling law seen experimentally by Hughes et al., Acta Mater. 45(1), 105-112 (1997), independent of strain level and deformation mode. We reveal that the application of a simple compression step prior to simple shearing significantly enhances the development of intra-granular misorientations compared to simple shearing alone for the same amount of total strain. We rationalize that the changes in crystallographic orientation and shape evolution when going from simple compression to simple shearing increase the local heterogeneity in slip, leading to the boost in intra-granular misorientation development. In addition, the analysis finds that simple compression introduces additional crystal orientations that are prone to developing intra-granular misorientations, which also help to increase intra-granular misorientations. Many metal working techniques for refining grain sizes involve a preliminary or concurrent application of compression with severe simple shearing. Our finding reveals that a pre-compression deformation step can, in fact, serve as another processing variable for improving the rate of grain refinement during the simple shearing of polycrystalline metals.

  20. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fetterman, D. E., Jr.

    1965-01-01

    Simple transparent overlay with interpolation scale facilitates accurate, rapid reading of graph coordinate points. This device can be used for enlarging drawings and locating points on perspective drawings.

  1. Highly Fluorescent Noble Metal Quantum Dots

    PubMed Central

    Zheng, Jie; Nicovich, Philip R.; Dickson, Robert M.

    2009-01-01

    Highly fluorescent, water-soluble, few-atom noble metal quantum dots have been created that behave as multi-electron artificial atoms with discrete, size-tunable electronic transitions throughout the visible and near IR. These “molecular metals” exhibit highly polarizable transitions and scale in size according to the simple relation E_Fermi/N^(1/3), predicted by the free electron model of metallic behavior. This simple scaling indicates that fluorescence arises from intraband transitions of free electrons and that these conduction electron transitions are the low number limit of the plasmon – the collective dipole oscillations occurring when a continuous density of states is reached. Providing the “missing link” between atomic and nanoparticle behavior in noble metals, these emissive, water-soluble Au nanoclusters open new opportunities for biological labels, energy transfer pairs, and light emitting sources in nanoscale optoelectronics. PMID:17105412
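    The E_Fermi/N^(1/3) free-electron scaling quoted above can be evaluated directly. A minimal sketch, assuming the textbook bulk-gold Fermi energy of about 5.53 eV and illustrative cluster sizes (neither value is taken from the paper):

```python
# Sketch of the free-electron scaling E_emission ~ E_Fermi / N^(1/3)
# described above. The bulk Au Fermi energy (~5.53 eV, textbook value)
# and the cluster sizes are illustrative assumptions.

E_FERMI_AU_EV = 5.53   # bulk gold Fermi energy, eV
HC_EV_NM = 1239.84     # h*c in eV.nm, for eV -> wavelength conversion

def emission_energy_ev(n_atoms: int, e_fermi_ev: float = E_FERMI_AU_EV) -> float:
    """Free-electron-model estimate of cluster emission energy (eV)."""
    return e_fermi_ev / n_atoms ** (1.0 / 3.0)

for n in (5, 13, 31):
    e = emission_energy_ev(n)
    print(f"N={n:3d}  E ~ {e:.2f} eV  wavelength ~ {HC_EV_NM / e:.0f} nm")
```

Larger clusters emit at lower energy (longer wavelength), consistent with the size-tunable transitions described in the abstract.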

  2. The dark side of cosmology: dark matter and dark energy.

    PubMed

    Spergel, David N

    2015-03-06

    A simple model with only six parameters (the age of the universe, the density of atoms, the density of matter, the amplitude of the initial fluctuations, the scale dependence of this amplitude, and the epoch of first star formation) fits all of our cosmological data. Although simple, this standard model is strange. The model implies that most of the matter in our Galaxy is in the form of "dark matter," a new type of particle not yet detected in the laboratory, and most of the energy in the universe is in the form of "dark energy," energy associated with empty space. Both dark matter and dark energy require extensions to our current understanding of particle physics or point toward a breakdown of general relativity on cosmological scales. Copyright © 2015, American Association for the Advancement of Science.

  3. Calculation of stochastic broadening due to low mn magnetic perturbation in the simple map in action-angle coordinates

    NASA Astrophysics Data System (ADS)

    Hinton, Courtney; Punjabi, Alkesh; Ali, Halima

    2009-11-01

    The simple map is the simplest map that has the topology of divertor tokamaks [A. Punjabi, H. Ali, T. Evans, and A. Boozer, Phys. Lett. A 364, 140-145 (2007)]. Recently, the action-angle coordinates for the simple map were calculated analytically, and the simple map was constructed in action-angle coordinates [O. Kerwin, A. Punjabi, and H. Ali, Phys. Plasmas 15, 072504 (2008)]. Action-angle coordinates for the simple map cannot be inverted to real-space coordinates (R,Z). Because there is a logarithmic singularity on the ideal separatrix, trajectories cannot cross the separatrix [op cit]. The simple map in action-angle coordinates is applied to calculate stochastic broadening due to low mn magnetic perturbation with mode numbers m=1 and n=±1. The width of the stochastic layer near the X-point scales as the 0.63 power of the amplitude δ of the low mn perturbation, toroidal flux loss scales as the 1.16 power of δ, and poloidal flux loss scales as the 1.26 power of δ. The scaling of the width deviates from the Boozer-Rechester scaling by 26% [A. Boozer and A. Rechester, Phys. Fluids 21, 682 (1978)]. This work is supported by US Department of Energy grants DE-FG02-07ER54937, DE-FG02-01ER54624 and DE-FG02-04ER54793.

  4. Application of the BEf-scaling approach to electron impact excitation of dipole-allowed electronic states in molecules

    NASA Astrophysics Data System (ADS)

    Brunger, M. J.; Thorn, P. A.; Campbell, L.; Kato, H.; Kawahara, H.; Hoshino, M.; Tanaka, H.; Kim, Y.-K.

    2008-05-01

    We consider the efficacy of the BEf-scaling approach, in calculating reliable integral cross sections for electron impact excitation of dipole-allowed electronic states in molecules. We will demonstrate, using specific examples in H2, CO and H2O, that this relatively simple procedure can generate quite accurate integral cross sections which compare well with available experimental data. Finally, we will briefly consider the ramifications of this to atmospheric and other types of modelling studies.

  5. The Matter-Antimatter Asymmetry of the Universe

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    I will give here an overview of the present observational and theoretical situation regarding the question of the matter-antimatter asymmetry of the universe and the related question of the existence of antimatter on a cosmological scale. I will also give a simple discussion of the role of CP (charge conjugation parity) violation in this subject.

  6. Can the ionosphere regulate magnetospheric convection.

    NASA Technical Reports Server (NTRS)

    Coroniti, F. V.; Kennel, C. F.

    1973-01-01

    A simple model is outlined that relates the dayside magnetopause displacement to the currents feeding the polar cap ionosphere, from which the ionospheric electric field and the flux return rate may be estimated as a function of magnetopause displacement. Then, flux conservation arguments make possible an estimate of the time scale on which convection increases.

  7. The study of residential life support environment system to initiate policy on sustainable simple housing

    NASA Astrophysics Data System (ADS)

    Siahaan, N. M.; Harahap, A. S.; Nababan, E.; Siahaan, E.

    2018-02-01

    This study aims to initiate a sustainable simple housing system based on low CO2 emissions at Griya Martubung I Housing, Medan, Indonesia. Since the housing was built in 1995, approximately 89 percent of the houses underwent some form of home renewal between 2007 and 2016, such as restoration, renovation, or reconstruction. Qualitative research was conducted to obtain insights into the complex relationships between the various components of the residential life-support environment that relate to CO2 emissions. Each component was studied through in-depth interviews with, and observation of, the 128 residents. The study used a Likert scale to measure residents' perceptions of the components. The study concludes with a synthesis describing principles for a sustainable simple housing standard that recognizes the whole characteristics of the components. This study offers a means for initiating the practice of sustainable simple housing development and efforts to manage growth and preserve the environment without compromising social, economic, and ecological concerns.

  8. Complex versus simple models: ion-channel cardiac toxicity prediction.

    PubMed

    Mistry, Hitesh B

    2018-01-01

    There is growing interest in applying detailed mathematical models of the heart for ion-channel related cardiac toxicity prediction. However, there is debate as to whether such complex models are required. Here, an assessment of the predictive performance of two established large-scale biophysical cardiac models and a simple linear model, Bnet, was conducted. Three ion-channel data-sets were extracted from the literature. Each compound was assigned a cardiac risk category using two different classification schemes based on information within CredibleMeds. The predictive performance of each model within each data-set, for each classification scheme, was assessed via leave-one-out cross-validation. Overall, the Bnet model performed as well as the leading cardiac models in two of the data-sets and outperformed both cardiac models on the latest. These results highlight the importance of benchmarking complex versus simple models, but also encourage the development of simple models.
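    The leave-one-out cross-validation used for the model comparison above can be sketched in a few lines. The nearest-centroid classifier and the toy one-dimensional data below are illustrative placeholders, not the Bnet model or the ion-channel data-sets:

```python
# Minimal leave-one-out cross-validation loop: hold out each sample once,
# train on the rest, and score the prediction. The "model" here is a toy
# nearest-centroid classifier on a 1-D feature.
from collections import defaultdict

def nearest_centroid_predict(train, test_x):
    """Classify test_x by the class whose feature mean is closest."""
    sums, counts = defaultdict(float), defaultdict(int)
    for x, y in train:
        sums[y] += x
        counts[y] += 1
    centroids = {y: sums[y] / counts[y] for y in sums}
    return min(centroids, key=lambda y: abs(centroids[y] - test_x))

def loocv_accuracy(data):
    """Fraction of held-out samples predicted correctly."""
    hits = 0
    for i, (x, y) in enumerate(data):
        train = data[:i] + data[i + 1:]
        hits += nearest_centroid_predict(train, x) == y
    return hits / len(data)

# Toy data: feature value vs. risk label.
data = [(0.1, "low"), (0.2, "low"), (0.3, "low"),
        (0.8, "high"), (0.9, "high"), (1.0, "high")]
print(loocv_accuracy(data))  # cleanly separable toy data -> 1.0
```

The same loop structure applies unchanged when the classifier is swapped for a more complex model, which is what makes LOOCV convenient for benchmarking models of very different complexity.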

  9. Cross-scale morphology

    USGS Publications Warehouse

    Allen, Craig R.; Holling, Crawford S.; Garmestani, Ahjond S.; El-Shaarawi, Abdel H.; Piegorsch, Walter W.

    2013-01-01

    The scaling of physical, biological, ecological and social phenomena is a major focus of efforts to develop simple representations of complex systems. Much of the attention has been on discovering universal scaling laws that emerge from simple physical and geometric processes. However, there are regular patterns of departures both from those scaling laws and from continuous distributions of attributes of systems. Those departures often demonstrate the development of self-organized interactions between living systems and physical processes over narrower ranges of scale.

  10. Study on Manipulations of Fluids in Micro-scale and Their Applications in Physical, Bio/chemistry

    NASA Astrophysics Data System (ADS)

    Zhou, Bingpu

    Microfluidics is a highly interdisciplinary research field which manipulates, controls and analyzes fluids in micro-scale for physical and bio/chemical applications. In this thesis, several aspects of fluid manipulations in micro-scale were studied, discussed and employed for demonstrations of practical utilizations. To begin with, mixing in continuous flow microfluidic was raised and investigated. A simple method for mixing actuation based on magnetism was proposed and realized via integration of magnetically functionalized micropillar arrays inside the microfluidic channel.With such technique, microfluidic mixing could be swiftly switched on and off via simple application or retraction of the magnetic field. Thereafter, in Chapter 3 we mainly focused on how to establish stable while tunable concentration gradients inside microfluidic network using a simple design. The proposed scheme could also be modified with on-chip pneumatic actuated valve to realize pulsatile/temporal concentration gradients simultaneously in ten microfluidic branches. We further applied such methodology to obtain roughness gradients onPolydimethylsiloxane (PDMS) surface via combinations of the microfluidic network andphoto-polymerizations. The obtained materials were utilized in parallel cell culture to figure out the relationship between substrate morphologies and the cell behaviors. In the second part of this work, we emphasized on manipulations on microdroplets insidethe microfluidic channel and explored related applications in bio/chemical aspects. Firstly, microdroplet-based microfluidic universal logic gates were successfully demonstrated vialiquid-electronic hybrid divider. For application based on such novel scheme of control lable droplet generation, on-demand chemical reaction within paired microdroplets was presented using IF logic gate. Followed by this, another important operation of microdroplet - splitting -was investigated. 
Addition lateral continuous flow was applied at the bifurcation as a mediumto controllably divide microdroplets with highly tunable splitting ratios. Related physical mechanism was proposed and such approach was adopted further for rapid synthesis of multi-scale microspheres.

  11. Simple heterogeneity parametrization for sea surface temperature and chlorophyll

    NASA Astrophysics Data System (ADS)

    Skákala, Jozef; Smyth, Timothy J.

    2016-06-01

    Using satellite maps, this paper offers a comprehensive analysis of chlorophyll and SST heterogeneity in the shelf seas around the southwest of the UK. The heterogeneity scaling follows a simple power law and is consequently parametrized by two parameters. It is shown that in most cases these two parameters vary only relatively little with time. The paper offers a detailed comparison of field heterogeneity between different regions. It is also determined how much of each region's heterogeneity is preserved in the annual median data. The paper explicitly demonstrates how these results can be used to calculate a representative measurement area for in situ networks.
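    A two-parameter power law of the kind used above, y = a·x^b, is commonly fitted by ordinary least squares in log-log space. A minimal sketch on synthetic data (the scales and values are illustrative, not the satellite fields from the study):

```python
# Fit y = a * x**b by linear regression on (log x, log y): the slope of
# the regression line is the exponent b and its intercept is log(a).
import math

def fit_power_law(xs, ys):
    """Return (a, b) for the least-squares fit of y = a * x**b."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = math.exp(my - b * mx)
    return a, b

scales = [1, 2, 4, 8, 16]
values = [3.0 * s ** 0.7 for s in scales]   # exact power law, b = 0.7
a, b = fit_power_law(scales, values)
print(round(a, 3), round(b, 3))  # -> 3.0 0.7
```

Because the synthetic data follow the power law exactly, the fit recovers both parameters; on real heterogeneity fields the residuals of this regression indicate how well the power-law parametrization holds.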

  12. A simple, analytical, axisymmetric microburst model for downdraft estimation

    NASA Technical Reports Server (NTRS)

    Vicroy, Dan D.

    1991-01-01

    A simple analytical microburst model was developed for use in estimating vertical winds from horizontal wind measurements. It is an axisymmetric, steady state model that uses shaping functions to satisfy the mass continuity equation and simulate boundary layer effects. The model is defined through four model variables: the radius and altitude of the maximum horizontal wind, a shaping function variable, and a scale factor. The model closely agrees with a high fidelity analytical model and measured data, particularly in the radial direction and at lower altitudes. At higher altitudes, the model tends to overestimate the wind magnitude relative to the measured data.
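    The idea of recovering vertical wind from horizontal measurements via mass continuity can be illustrated with the simplest possible radial profile. This sketch assumes a linear outflow u = k·r near the downdraft core, which is an illustrative choice and not the paper's shaping functions; axisymmetric incompressible continuity, (1/r)·d(r·u)/dr + dw/dz = 0, then gives w = -2kz:

```python
# Downdraft implied by axisymmetric mass continuity for an assumed
# linear radial outflow u(r) = k*r (illustrative profile only):
#   (1/r) d(r*u)/dr = 2k   =>   dw/dz = -2k   =>   w(z) = -2*k*z
# (taking w = 0 at the ground, z = 0).

def vertical_wind(k: float, z: float) -> float:
    """Vertical wind (m/s) at altitude z (m) for radial shear k (1/s)."""
    return -2.0 * k * z

# A 0.01 1/s radial outflow shear implies a 10 m/s downdraft at 500 m.
print(vertical_wind(0.01, 500.0))  # -> -10.0
```

The paper's shaping functions replace the linear profile so that the implied downdraft also decays realistically near the ground and aloft, but the continuity-based inference is the same.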

  13. Detecting natural occlusion boundaries using local cues

    PubMed Central

    DiMattina, Christopher; Fox, Sean A.; Lewicki, Michael S.

    2012-01-01

    Occlusion boundaries and junctions provide important cues for inferring three-dimensional scene organization from two-dimensional images. Although several investigators in machine vision have developed algorithms for detecting occlusions and other edges in natural images, relatively few psychophysics or neurophysiology studies have investigated what features are used by the visual system to detect natural occlusions. In this study, we addressed this question using a psychophysical experiment where subjects discriminated image patches containing occlusions from patches containing surfaces. Image patches were drawn from a novel occlusion database containing labeled occlusion boundaries and textured surfaces in a variety of natural scenes. Consistent with related previous work, we found that relatively large image patches were needed to attain reliable performance, suggesting that human subjects integrate complex information over a large spatial region to detect natural occlusions. By defining machine observers using a set of previously studied features measured from natural occlusions and surfaces, we demonstrate that simple features defined at the spatial scale of the image patch are insufficient to account for human performance in the task. To define machine observers using a more biologically plausible multiscale feature set, we trained standard linear and neural network classifiers on the rectified outputs of a Gabor filter bank applied to the image patches. We found that simple linear classifiers could not match human performance, while a neural network classifier combining filter information across location and spatial scale compared well. These results demonstrate the importance of combining a variety of cues defined at multiple spatial scales for detecting natural occlusions. PMID:23255731

  14. HIV Treatment and Prevention: A Simple Model to Determine Optimal Investment.

    PubMed

    Juusola, Jessie L; Brandeau, Margaret L

    2016-04-01

    To create a simple model to help public health decision makers determine how to best invest limited resources in HIV treatment scale-up and prevention. A linear model was developed for determining the optimal mix of investment in HIV treatment and prevention, given a fixed budget. The model incorporates estimates of secondary health benefits accruing from HIV treatment and prevention and allows for diseconomies of scale in program costs and subadditive benefits from concurrent program implementation. Data sources were published literature. The target population was individuals infected with HIV or at risk of acquiring it. Illustrative examples of interventions include preexposure prophylaxis (PrEP), community-based education (CBE), and antiretroviral therapy (ART) for men who have sex with men (MSM) in the US. Outcome measures were incremental cost, quality-adjusted life-years gained, and HIV infections averted. Base case analysis indicated that it is optimal to invest in ART before PrEP and to invest in CBE before scaling up ART. Diseconomies of scale reduced the optimal investment level. Subadditivity of benefits did not affect the optimal allocation for relatively low implementation levels. The sensitivity analysis indicated that investment in ART before PrEP was optimal in all scenarios tested. Investment in ART before CBE became optimal when CBE reduced risky behavior by 4% or less. A limitation of the study is that dynamic effects are approximated with a static model. Our model provides a simple yet accurate means of determining optimal investment in HIV prevention and treatment. For MSM in the US, HIV control funds should be prioritized on inexpensive, effective programs like CBE, then on ART scale-up, with only minimal investment in PrEP. © The Author(s) 2015.
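    With purely linear benefits and no diseconomies of scale, the optimal allocation in a model of this kind funds programs in decreasing order of QALYs gained per dollar until the budget or each program's maximum cost is exhausted. A minimal sketch of that greedy solution; the program names mirror the abstract, but the cost-effectiveness numbers and caps are invented for illustration, and the paper's diseconomies of scale and subadditive benefits are omitted:

```python
# Greedy budget allocation for a purely linear benefit model: fund
# programs in order of QALYs per dollar. Numbers are illustrative only.

def allocate(budget, programs):
    """programs: list of (name, qalys_per_dollar, max_cost).
    Returns ({name: spend}, total QALYs gained)."""
    alloc, total_qalys = {}, 0.0
    for name, qpd, cap in sorted(programs, key=lambda p: -p[1]):
        spend = min(budget, cap)
        alloc[name] = spend
        total_qalys += spend * qpd
        budget -= spend
    return alloc, total_qalys

programs = [("CBE", 0.004, 30e6),    # cheap, effective
            ("ART", 0.002, 80e6),
            ("PrEP", 0.0005, 50e6)]  # least cost-effective
alloc, qalys = allocate(100e6, programs)
print(alloc)  # CBE fully funded first, remainder to ART, none to PrEP
```

With these illustrative numbers the greedy solution reproduces the priority ordering of the base case (CBE, then ART scale-up, minimal PrEP); the full model additionally penalizes large spends via diseconomies of scale, which shifts the optimum away from the pure greedy answer.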

  15. Remote tropical and sub-tropical responses to Amazon deforestation

    NASA Astrophysics Data System (ADS)

    Badger, Andrew M.; Dirmeyer, Paul A.

    2016-05-01

    Replacing natural vegetation with realistic tropical crops over the Amazon region in a global Earth system model impacts vertical transport of heat and moisture, modifying the interaction between the atmospheric boundary layer and the free atmosphere. Vertical velocity is decreased over a majority of the Amazon region, shifting the ascending branch and modifying the seasonality of the Hadley circulation over the Atlantic and eastern Pacific oceans. Using a simple model that relates circulation changes to heating anomalies and generalizing the upper-atmosphere temperature response to deforestation, agreement is found between the response in the fully-coupled model and the simple solution. These changes to the large-scale dynamics significantly impact precipitation in several remote regions, namely sub-Saharan Africa, Mexico, the southwestern United States and extratropical South America, suggesting non-local climate repercussions for large-scale land use changes in the tropics are possible.

  16. A simple dynamic subgrid-scale model for LES of particle-laden turbulence

    NASA Astrophysics Data System (ADS)

    Park, George Ilhwan; Bassenne, Maxime; Urzay, Javier; Moin, Parviz

    2017-04-01

    In this study, a dynamic model for large-eddy simulations is proposed in order to describe the motion of small inertial particles in turbulent flows. The model is simple, involves no significant computational overhead, contains no adjustable parameters, and is flexible enough to be deployed in any type of flow solvers and grids, including unstructured setups. The approach is based on the use of elliptic differential filters to model the subgrid-scale velocity. The only model parameter, which is related to the nominal filter width, is determined dynamically by imposing consistency constraints on the estimated subgrid energetics. The performance of the model is tested in large-eddy simulations of homogeneous-isotropic turbulence laden with particles, where improved agreement with direct numerical simulation results is observed in the dispersed-phase statistics, including particle acceleration, local carrier-phase velocity, and preferential-concentration metrics.

  17. Dynamics of liquid spreading on solid surfaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalliadasis, S.; Chang, H.C.

    1996-09-01

    Using simple scaling arguments and a precursor film model, the authors show that the appropriate macroscopic contact angle θ during the slow spreading of a completely or partially wetting liquid under conditions of viscous flow and small slopes should be described by tan θ = [tan³θ_e − 9 log(ηCa)]^(1/3), where θ_e is the static contact angle, Ca is the capillary number, and η is a scaled Hamaker constant. Using this simple relation as a boundary condition, the authors are able to quantitatively model, without any empirical parameter, the spreading dynamics of several classical spreading phenomena (capillary rise, sessile, and pendant drop spreading) by simply equating the slope of the leading-order static bulk region to the dynamic contact angle boundary condition, without performing a matched asymptotic analysis for each case independently as is usually done in the literature.
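    The contact-angle relation quoted above can be evaluated numerically. The sketch below implements the formula as printed in the abstract, reading the logarithm as taken over the product ηCa; the input values are illustrative, not taken from the paper:

```python
# Evaluate tan(theta) = [tan^3(theta_e) - 9*log(eta*Ca)]^(1/3), the
# macroscopic contact-angle relation quoted in the abstract. Input
# values below are illustrative only.
import math

def dynamic_contact_angle(theta_e_deg: float, ca: float, eta: float) -> float:
    """Macroscopic dynamic contact angle, in degrees."""
    bracket = math.tan(math.radians(theta_e_deg)) ** 3 - 9.0 * math.log(eta * ca)
    return math.degrees(math.atan(bracket ** (1.0 / 3.0)))

# When eta*Ca = 1 the log term vanishes and the dynamic angle reduces
# to the static angle theta_e:
print(round(dynamic_contact_angle(30.0, 1.0, 1.0), 6))  # -> 30.0
# For eta*Ca < 1 the log term is negative, steepening the apparent angle:
print(round(dynamic_contact_angle(30.0, 1e-4, 1e-2), 1))
```

The relation only makes sense while the bracketed quantity stays positive, which holds for partially wetting liquids whenever ηCa < 1.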

  18. Complexity-aware simple modeling.

    PubMed

    Gómez-Schiavon, Mariana; El-Samad, Hana

    2018-02-26

    Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.

  19. The abundance properties of nearby late-type galaxies. II. The relation between abundance distributions and surface brightness profiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilyugin, L. S.; Grebel, E. K.; Zinchenko, I. A.

    2014-12-01

    The relations between oxygen abundance and disk surface brightness (OH–SB relation) in the infrared W1 band are examined for nearby late-type galaxies. The oxygen abundances were presented in Paper I. The photometric characteristics of the disks are inferred here using photometric maps from the literature through bulge-disk decomposition. We find evidence that the OH–SB relation is not unique but depends on the galactocentric distance r (taken as a fraction of the optical radius R_25) and on the properties of a galaxy: the disk scale length h and the morphological T-type. We suggest a general, four-dimensional OH–SB relation with the values r, h, and T as parameters. The parametric OH–SB relation reproduces the observed data better than a simple, one-parameter relation; the deviations resulting when using our parametric relation are smaller by a factor of ∼1.4 than those of the simple relation. The influence of the parameters on the OH–SB relation varies with galactocentric distance. The influence of the T-type on the OH–SB relation is negligible at the centers of galaxies and increases with galactocentric distance. In contrast, the influence of the disk scale length on the OH–SB relation is at a maximum at the centers of galaxies and decreases with galactocentric distance, disappearing at the optical edges of galaxies. Two-dimensional relations can be used to reproduce the observed data at the optical edges of the disks and at the centers of the disks. The disk scale length should be used as a second parameter in the OH–SB relation at the center of the disk, while the morphological T-type should be used as a second parameter in the relation at the optical edge of the disk. The relations between oxygen abundance and disk surface brightness in the optical B and infrared K bands at the center of the disk and at the optical edge of the disk are also considered. The general properties of the abundance–surface brightness relations are similar for the three considered bands B, K, and W1.

  20. Scaling in Ecosystems and the Linkage of Macroecological Laws

    NASA Astrophysics Data System (ADS)

    Rinaldo, A.

    2007-12-01

    Are there predictable linkages among macroecological laws regulating size and abundance of organisms that are ubiquitously supported by empirical observations and that ecologists treat traditionally as independent? Do fragmentation of habitats, or reduced supply of energy and matter, result in predictable changes on whole ecosystems as a function of their size? Using a coherent theoretical framework based on scaling theory, it is argued that the answer to both these questions is affirmative. The concern of the talk is with the comparatively simple situation of the steady state behavior of a fully developed ecosystem in which, over evolutionary time, resources are exploited in full, individual and collective metabolic needs are met and enough time has elapsed to produce a rough balance between speciation and extinction and ecological fluxes. While ecological patterns and processes often show great variation when viewed at different scales of space, time, organismic size and organizational complexity, there is also widespread evidence for the existence of scaling regularities as embedded in macroecological "laws" or rules. These laws have commanded considerable attention from the ecological community. Indeed they are central to ecological theory as they describe the features of complex adaptive systems shown by a number of biological systems, and perhaps for the investigation of the dynamic origin of scale invariance of natural forms in general. The species-area and relative species-abundance relations, the scaling of community and species' size spectra, the scaling of population densities with their mean body mass and the scaling of the largest organism with ecosystem size are examples of such laws. 
    Borrowing heavily from earlier successes in physics, it will be shown how simple mathematical scaling arguments, following from dimensional and finite-size scaling analyses, provide theoretical predictions of the interrelationships among the species abundance relationship, the species-area relationship and community size spectra, in excellent accord with empirical data. The main conclusion is that the proposed scaling framework, along with the questions and predictions it provides, serves as a starting point for a novel approach to macroecological analysis.

  1. Prognostic accuracy of five simple scales in childhood bacterial meningitis.

    PubMed

    Pelkonen, Tuula; Roine, Irmeli; Monteiro, Lurdes; Cruzeiro, Manuel Leite; Pitkäranta, Anne; Kataja, Matti; Peltola, Heikki

    2012-08-01

    In childhood acute bacterial meningitis, the level of consciousness, measured with the Glasgow coma scale (GCS) or the Blantyre coma scale (BCS), is the most important predictor of outcome. The Herson-Todd scale (HTS) was developed for Haemophilus influenzae meningitis. Our objective was to identify prognostic factors, to form a simple scale, and to compare the predictive accuracy of these scales. Seven hundred and twenty-three children with bacterial meningitis in Luanda were scored by GCS, BCS, and HTS. The simple Luanda scale (SLS), based on our entire database, comprised domestic electricity, days of illness, convulsions, consciousness, and dyspnoea at presentation. The Bayesian Luanda scale (BLS) added blood glucose concentration. The accuracy of the 5 scales was determined for 491 children without an underlying condition, against the outcomes of death, severe neurological sequelae or death, or a poor outcome (severe neurological sequelae, death, or deafness), at hospital discharge. The highest accuracy was achieved with the BLS, whose area under the curve (AUC) was 0.83 for death, 0.84 for severe neurological sequelae or death, and 0.82 for poor outcome. Overall, the AUCs for SLS were ≥0.79, for GCS were ≥0.76, for BCS were ≥0.74, and for HTS were ≥0.68. Adding laboratory parameters to a simple scoring system, such as the SLS, only slightly improves prognostic accuracy in bacterial meningitis.
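
The comparison above rests on the area under the ROC curve (AUC). As a reminder of what that statistic measures, here is a minimal rank-based AUC computation (the Mann-Whitney identity); the risk scores and outcomes below are invented for illustration, not data from the Luanda cohort:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical risk scores: higher score = predicted worse outcome.
died = [5, 4, 4, 3]
survived = [1, 2, 2, 4, 0, 1]
print(auc(died, survived))  # ≈ 0.917
```

An AUC of 1.0 would mean the score perfectly separates outcomes; 0.5 is chance-level discrimination, the baseline against which scale AUCs like those above are judged.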

  2. Dynamic correlations at different time-scales with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Nava, Noemi; Di Matteo, T.; Aste, Tomaso

    2018-07-01

    We introduce a simple approach which combines Empirical Mode Decomposition (EMD) and Pearson's cross-correlations over rolling windows to quantify dynamic dependency at different time scales. The EMD is a tool to separate time series into implicit components which oscillate at different time-scales. We apply this decomposition to intraday time series of the following three financial indices: the S&P 500 (USA), the IPC (Mexico) and the VIX (volatility index USA), obtaining time-varying multidimensional cross-correlations at different time-scales. The correlations computed over a rolling window are compared across the three indices, across the components at different time-scales and across different time lags. We uncover a rich heterogeneity of interactions, which depends on the time-scale and has important lead-lag relations that could have practical use for portfolio management, risk estimation and investment decisions.
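
The pipeline described (decompose, then correlate over rolling windows) can be sketched as follows; here two synthetic components stand in for EMD output, since a full implementation would first run an EMD library over the index series:

```python
import numpy as np

def rolling_pearson(x, y, window):
    """Pearson correlation of x and y over a sliding window."""
    n = len(x) - window + 1
    out = np.empty(n)
    for i in range(n):
        out[i] = np.corrcoef(x[i:i + window], y[i:i + window])[0, 1]
    return out

# Two synthetic "intrinsic mode functions" standing in for EMD output:
# a shared slow oscillation plus independent fast noise in each series.
rng = np.random.default_rng(0)
t = np.arange(2000)
slow = np.sin(2 * np.pi * t / 500)
x = slow + 0.3 * rng.standard_normal(t.size)
y = slow + 0.3 * rng.standard_normal(t.size)

corr = rolling_pearson(x, y, window=250)
print(round(float(corr.mean()), 2))  # strongly positive at the slow scale
```

Repeating the same rolling correlation on the fast (noise-dominated) components would hover near zero, which is the scale-dependence of dependency the paper exploits.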

  3. The Canyonlands Grabens Revisited, with a New Interpretation of Graben Geometry

    NASA Astrophysics Data System (ADS)

    Schultz, R. A.; Moore, J. M.

    1996-03-01

    The relative scale between faults and faulted-layer thickness is critical to the mechanical behavior of faults and fault populations on any planetary body. Due to their fresh, relatively uneroded morphology and simple structural setting, the terrestrial Canyonlands grabens provide a unique opportunity to critically investigate the geometry, growth, interaction, and scaling relationships of normal faults. Symmetrical models have traditionally been used to describe these grabens, but field observations of stratigraphic offsets require asymmetric graben cross-sectional geometry. Topographic profiles reveal differential stratigraphic offsets, graben floor-tilts, and possible roll-over anticlines as well as footwall uplifts. Relationships between the asymmetric graben geometry and brittle-layer thickness are currently being investigated.

  4. Strain analysis in the Sanandaj-Sirjan HP-LT Metamorphic Belt, SW Iran: Insights from small-scale faults and associated drag folds

    NASA Astrophysics Data System (ADS)

    Sarkarinejad, Khalil; Keshavarz, Saeede; Faghih, Ali

    2015-05-01

    This study is aimed at quantifying the kinematics of deformation using a population of drag fold structures associated with small-scale faults in deformed quartzites from the Seh-Ghalatoun area within the HP-LT Sanandaj-Sirjan Metamorphic Belt, SW Iran. A total of 30 small-scale faults in the quartzite layers were examined to determine the deformation characteristics. The data revealed that α0 (initial fault angle) and ω (angle between flow apophyses) are 83° and 32°, respectively. These data yield a mean kinematic vorticity number (Wm) of 0.79 and a mean finite strain (Rs) of 2.32. These results confirm relative contributions of ∼43% pure shear and ∼57% simple shear components. The strain partitioning inferred from this quantitative analysis is consistent with a sub-simple or general shear deformation pattern associated with a transpressional flow regime in the study area as a part of the Zagros Orogen. This type of deformation resulted from oblique convergence between the Afro-Arabian and Central-Iranian plates.

  5. Effects of scale-dependent non-Gaussianity on cosmological structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Miller, Amber; Shandera, Sarah

    2008-04-15

    The detection of primordial non-Gaussianity could provide a powerful means to test various inflationary scenarios. Although scale-invariant non-Gaussianity (often described by the f_NL formalism) is currently best constrained by the CMB, single-field models with changing sound speed can have strongly scale-dependent non-Gaussianity. Such models could evade the CMB constraints but still have important effects at scales responsible for the formation of cosmological objects such as clusters and galaxies. We compute the effect of scale-dependent primordial non-Gaussianity on cluster number counts as a function of redshift, using a simple ansatz to model scale-dependent features. We forecast constraints on these models achievable with forthcoming datasets. We also examine consequences for the galaxy bispectrum. Our results are relevant for the Dirac-Born-Infeld model of brane inflation, where the scale dependence of the non-Gaussianity is directly related to the geometry of the extra dimensions.

  6. Relational interventions in psychotherapy: development of a therapy process rating scale.

    PubMed

    Ulberg, Randi; Ness, Elisabeth; Dahl, Hanne-Sofie Johnsen; Høglend, Per Andreas; Critchfield, Kenneth; Blayvas, Phelix; Amlo, Svein

    2016-09-06

    In psychodynamic psychotherapy, one of the therapist's techniques is to intervene on and encourage exploration of the patient's relationships with other people. The impact of these interventions and the response from the patient probably depend on certain characteristics of the context in which the interventions are given and of the interventions themselves. To identify and analyze in-session effects of therapists' techniques, process scales are used. The aim of the present study was to develop a simple, resource-efficient rating tool for the in-session process when therapists' interventions focus on the patients' relationships outside therapy. The present study describes the development and use of a therapy process rating scale, the Relational Work Scale (RWS). The scale was constructed to identify, categorize, and explore therapist interventions that focus on the patient's relationships with family, friends, and colleagues (relational interventions), and to explore their impact on the in-session process. RWS was developed with subscales rating the timing, content, and valence of the relational interventions, as well as the response from the patient. For the inter-rater reliability analyses, transcribed segments (10 min) from 20 different patients were scored with RWS by two independent raters. Two clinical vignettes of relational work are included in the paper as examples of how to rate transcripts of therapy sessions with RWS. The inter-rater agreement on the RWS items was good to excellent. The Relational Work Scale might be a useful tool for identifying relational interventions and for exploring the interaction of the timing, category, and valence of relational work in psychotherapies, including the patient-therapist interaction that follows the therapist's interventions on the patient's relationships with people outside therapy.
    Trial registration: First Experimental Study of Transference-interpretations (FEST307/95); ClinicalTrials.gov identifier: NCT00423462.

  7. Fluidized bed coal desulfurization

    NASA Technical Reports Server (NTRS)

    Ravindram, M.

    1983-01-01

    Laboratory scale experiments were conducted on two high volatile bituminous coals in a bench scale batch fluidized bed reactor. Chemical pretreatment and posttreatment of coals were tried as a means of enhancing desulfurization. Sequential chlorination and dechlorination cum hydrodesulfurization under modest conditions relative to the water slurry process were found to result in substantial sulfur reductions of about 80%. Sulfur forms as well as proximate and ultimate analyses of the processed coals are included. These studies indicate that a fluidized bed reactor process has considerable potential for being developed into a simple and economic process for coal desulfurization.

  8. Using SQL Databases for Sequence Similarity Searching and Analysis.

    PubMed

    Pearson, William R; Mackey, Aaron J

    2017-09-13

    Relational databases can integrate diverse types of information and manage large sets of similarity search results, greatly simplifying genome-scale analyses. By focusing on taxonomic subsets of sequences, relational databases can reduce the size and redundancy of sequence libraries and improve the statistical significance of homologs. In addition, by loading similarity search results into a relational database, it becomes possible to explore and summarize the relationships between all of the proteins in an organism and those in other biological kingdoms. This unit describes how to use relational databases to improve the efficiency of sequence similarity searching and demonstrates various large-scale genomic analyses of homology-related data. It also describes the installation and use of a simple protein sequence database, seqdb_demo, which is used as a basis for the other protocols. The unit also introduces search_demo, a database that stores sequence similarity search results. The search_demo database is then used to explore the evolutionary relationships between E. coli proteins and proteins in other organisms in a large-scale comparative genomic analysis. © 2017 John Wiley & Sons, Inc.
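
The load-filter-summarize pattern the unit describes can be sketched with Python's built-in sqlite3; the table layout and rows below are hypothetical stand-ins, not the actual seqdb_demo or search_demo schemas:

```python
import sqlite3

# Hypothetical minimal schema for similarity-search hits: each row is one
# query-subject pair with the subject's taxon and the hit's E-value.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE hits (
    query TEXT, subject TEXT, taxon TEXT, evalue REAL)""")
con.executemany(
    "INSERT INTO hits VALUES (?, ?, ?, ?)",
    [("ecoli_recA", "styph_recA", "Salmonella", 1e-180),
     ("ecoli_recA", "human_RAD51", "Homo sapiens", 1e-40),
     ("ecoli_recA", "random_prot", "Homo sapiens", 5.0)])

# Keep only significant hits and count putative homologs per taxon.
rows = con.execute("""
    SELECT taxon, COUNT(*) FROM hits
    WHERE evalue < 1e-6 GROUP BY taxon ORDER BY taxon""").fetchall()
print(rows)  # [('Homo sapiens', 1), ('Salmonella', 1)]
```

The same GROUP BY idiom scales to millions of hits, which is the point of moving search results out of flat files and into a relational store.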

  9. Chaotic Lagrangian models for turbulent relative dispersion.

    PubMed

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.

  10. Chaotic Lagrangian models for turbulent relative dispersion

    NASA Astrophysics Data System (ADS)

    Lacorata, Guglielmo; Vulpiani, Angelo

    2017-04-01

    A deterministic multiscale dynamical system is introduced and discussed as a prototype model for relative dispersion in stationary, homogeneous, and isotropic turbulence. Unlike stochastic diffusion models, here trajectory transport and mixing properties are entirely controlled by Lagrangian chaos. The anomalous "sweeping effect," a known drawback common to kinematic simulations, is removed through the use of quasi-Lagrangian coordinates. Lagrangian dispersion statistics of the model are accurately analyzed by computing the finite-scale Lyapunov exponent (FSLE), which is the optimal measure of the scaling properties of dispersion. FSLE scaling exponents provide a severe test to decide whether model simulations are in agreement with theoretical expectations and/or observation. The results of our numerical experiments cover a wide range of "Reynolds numbers" and show that chaotic deterministic flows can be very efficient, and numerically low-cost, models of turbulent trajectories in stationary, homogeneous, and isotropic conditions. The mathematics of the model is relatively simple, and, in a geophysical context, potential applications may regard small-scale parametrization issues in general circulation models, mixed layer, and/or boundary layer turbulence models as well as Lagrangian predictability studies.
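
The FSLE used in both records above has a simple operational definition: λ(δ) = ln(r) / τ(δ), where τ(δ) is the time for the pair separation to grow from δ to rδ. A minimal sketch of that computation on a synthetic, exponentially separating pair (not the paper's multiscale model, where many pairs would be averaged):

```python
import numpy as np

def fsle(times, seps, thresholds, r=np.sqrt(2)):
    """Finite-scale Lyapunov exponent from one pair-separation record:
    lambda(delta) = ln(r) / tau, where tau is the time for the
    separation to grow from delta to r*delta."""
    lam = []
    for d in thresholds:
        t0 = times[np.argmax(seps >= d)]        # first crossing of delta
        t1 = times[np.argmax(seps >= r * d)]    # first crossing of r*delta
        lam.append(np.log(r) / (t1 - t0))
    return np.array(lam)

# Exponential separation (chaotic regime): delta(t) = d0 * exp(s*t)
# should give a scale-independent FSLE equal to s.
s, d0 = 0.5, 1e-6
t = np.linspace(0.0, 40.0, 400001)
d = d0 * np.exp(s * t)
lam = fsle(t, d, thresholds=np.array([1e-4, 1e-3, 1e-2]))
print(np.round(lam, 2))  # ~[0.5 0.5 0.5]
```

In the chaotic range the FSLE plateaus at the maximal Lyapunov exponent, as recovered here; a diffusive regime would instead give λ(δ) falling off as δ⁻², which is the scaling signature the paper uses to test its model.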

  11. Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.

    2000-01-01

    To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values, this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.

  12. Flood quantiles scaling with upper soil hydraulic properties for different land uses at catchment scale

    NASA Astrophysics Data System (ADS)

    Peña, Luis E.; Barrios, Miguel; Francés, Félix

    2016-10-01

    Changes in land use within a catchment are among the causes of non-stationarity in the flood regime, as they modify the upper soil physical structure and its runoff production capacity. This paper analyzes the relation between the variation of the upper soil hydraulic properties due to changes in land use and its effect on the magnitude of peak flows: (1) incorporating fractal scaling properties to relate the effect of the static storage capacity (the sum of capillary water storage capacity in the root zone, canopy interception and surface puddles) and the upper soil vertical saturated hydraulic conductivity on the flood regime; (2) describing the effect of the spatial organization of the upper soil hydraulic properties at catchment scale; (3) examining the scale properties in the parameters of the Generalized Extreme Value (GEV) probability distribution function, in relation to the upper soil hydraulic properties. This study considered the historical changes of land use in the Combeima River catchment in South America, between 1991 and 2007, using distributed hydrological modeling of daily discharges to describe the hydrological response. Through simulation of land cover scenarios, it was demonstrated that the magnitude of peak flows under land cover change can be quantified through their Wide-Sense Simple Scaling with the upper soil hydraulic properties.

  13. Validity of the Stokes-Einstein relation in liquids: simple rules from the excess entropy.

    PubMed

    Pasturel, A; Jakse, N

    2016-12-07

    It is becoming common practice to consider that the Stokes-Einstein relation D/T ∼ η⁻¹ usually works for liquids above their melting temperatures, although there is also experimental evidence for its failure. Here we investigate numerically this commonly invoked assumption for simple liquid metals as well as for their liquid alloys. Using ab initio molecular dynamics simulations, we show how entropy scaling relationships developed by Rosenfeld can be used to predict the conditions for the validity of the Stokes-Einstein relation in the liquid phase. Specifically, we demonstrate that the Stokes-Einstein relation may break down in the liquid phase of some liquid alloys, mainly due to the presence of local structural ordering as evidenced in their partial two-body excess entropies. Our findings shed new light on the understanding of transport properties of liquid materials and will trigger more experimental and theoretical studies, since the excess entropy and its two-body approximation are readily obtainable from standard experiments and simulations.
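
The scaling under test comes from the Stokes-Einstein formula D = k_B·T/(6πηr) for a sphere of radius r, which makes D·η/T a constant. A small numeric sketch (the water values are illustrative, not taken from the paper's ab initio simulations):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def stokes_einstein_D(T, eta, r):
    """Diffusion coefficient of a sphere of radius r (m) in a fluid of
    viscosity eta (Pa*s) at temperature T (K), per Stokes-Einstein."""
    return K_B * T / (6 * math.pi * eta * r)

# A 1 nm sphere in water near room temperature.
D = stokes_einstein_D(298.0, 0.89e-3, 1e-9)
print(f"{D:.2e}")  # ~2.45e-10 m^2/s

# The scaling D/T ~ 1/eta means D*eta/T is the same at any state point.
a = stokes_einstein_D(300.0, 1e-3, 1e-9) * 1e-3 / 300.0
b = stokes_einstein_D(900.0, 3e-3, 1e-9) * 3e-3 / 900.0
print(math.isclose(a, b))  # True
```

A "breakdown" of the relation, as the paper reports for some alloys, means measured D·η/T drifts with temperature instead of staying constant.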

  14. Mass-Discrepancy Acceleration Relation: A Natural Outcome of Galaxy Formation in Cold Dark Matter Halos.

    PubMed

    Ludlow, Aaron D; Benítez-Llambay, Alejandro; Schaller, Matthieu; Theuns, Tom; Frenk, Carlos S; Bower, Richard; Schaye, Joop; Crain, Robert A; Navarro, Julio F; Fattahi, Azadeh; Oman, Kyle A

    2017-04-21

    We analyze the total and baryonic acceleration profiles of a set of well-resolved galaxies identified in the EAGLE suite of hydrodynamic simulations. Our runs start from the same initial conditions but adopt different prescriptions for unresolved stellar and active galactic nuclei feedback, resulting in diverse populations of galaxies by the present day. Some of them reproduce observed galaxy scaling relations, while others do not. However, regardless of the feedback implementation, all of our galaxies follow closely a simple relationship between the total and baryonic acceleration profiles, consistent with recent observations of rotationally supported galaxies. The relation has small scatter: different feedback implementations, which produce different galaxy populations, mainly shift galaxies along the relation rather than perpendicular to it. Furthermore, galaxies exhibit a characteristic acceleration g†, above which baryons dominate the mass budget, as observed. These observations, consistent with simple modified Newtonian dynamics, can be accommodated within the standard cold dark matter paradigm.

  15. Gyrokinetic turbulence cascade via predator-prey interactions between different scales

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobayashi, Sumire, E-mail: sumire.kobayashi@lpp.polytechnique.fr; Gurcan, Ozgur D., E-mail: ozgur.gurcan@lpp.polytechnique.fr

    2015-05-15

    Gyrokinetic simulations in a closed fieldline geometry are presented to explore the physics of nonlinear transfer in plasma turbulence. As spontaneously formed zonal flows and small-scale turbulence demonstrate “predator-prey” dynamics, a particular cascade spectrum emerges. The electrostatic potential and density spectra appear to be in good agreement with the simple theoretical prediction based on the Charney-Hasegawa-Mima equation, |φ̃_k|² ∼ |ñ_k|² ∝ k⁻³/(1 + k²)², with the spectra becoming anisotropic at small scales. The results indicate that the disparate-scale interactions, in particular the refraction and shearing of larger-scale eddies by the self-consistent zonal flows, dominate over local interactions and that, contrary to the common wisdom, the comprehensive scaling relation is created even within the energy injection region.

  16. An evaluation of methods for scaling aircraft noise perception

    NASA Technical Reports Server (NTRS)

    Ollerhead, J. B.

    1971-01-01

    One hundred and twenty recorded sounds, including jets, turboprops, piston-engined aircraft, and helicopters, were rated by a panel of subjects in a paired-comparison test. The results were analyzed to evaluate a number of noise rating procedures in terms of their ability to accurately estimate both relative and absolute perceived noise levels. It was found that the complex procedures developed by Stevens, Zwicker, and Kryter are superior to other scales. The main advantage of these methods over the more convenient weighted sound pressure level scales lies in their ability to cope with signals over a wide range of bandwidth. However, Stevens' loudness level scale and the perceived noise level scale both overestimate the growth of perceived level with intensity because of an apparent deficiency in the band level summation rule. A simple correction is proposed which will enable these scales to properly account for the experimental observations.

  17. Counting Magnetic Bipoles on the Sun by Polarity Inversion

    NASA Technical Reports Server (NTRS)

    Jones, Harrison P.

    2004-01-01

    This paper presents a simple and efficient algorithm for deriving images of polarity inversion from NSO/Kitt Peak magnetograms without use of contouring routines and shows by example how these maps depend upon the spatial scale used for filtering the raw data. Smaller filtering scales produce many localized closed contours in mixed-polarity regions, while supergranular and larger filtering scales produce more global patterns. The apparent continuity of an inversion line depends on how the spatial filtering is accomplished, but its shape depends only on scale. The total length of the magnetic polarity inversion contours varies as a power law of the filter scale, with a fractal dimension of order 1.9. The amplitude but not the exponent of this power-law relation varies with solar activity. The results are compared to similar analyses of areal distributions of bipolar magnetic regions.
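
The power-law dependence of contour length on filter scale is the standard divider-style fractal measurement, L(s) ∝ s^(1−D). A short sketch with synthetic data built from the reported D ≈ 1.9 shows how the dimension is recovered from a log-log fit:

```python
import numpy as np

# Synthetic data: total inversion-line length L versus filter scale s,
# following L ∝ s^(1 - D) with fractal dimension D = 1.9 (as reported).
D_true = 1.9
scales = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
lengths = 1e4 * scales ** (1.0 - D_true)

# Recover the exponent from the slope of the log-log fit.
slope, _ = np.polyfit(np.log(scales), np.log(lengths), 1)
D_est = 1.0 - slope
print(round(float(D_est), 2))  # 1.9
```

With real magnetogram contours the points scatter about the line, and a change in solar activity moves the intercept (amplitude) without moving the slope, per the abstract.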

  18. Does lake size matter? Combining morphology and process modeling to examine the contribution of lake classes to population-scale processes

    USGS Publications Warehouse

    Winslow, Luke A.; Read, Jordan S.; Hanson, Paul C.; Stanley, Emily H.

    2014-01-01

    With lake abundances in the thousands to millions, creating an intuitive understanding of the distribution of morphology and processes in lakes is challenging. To improve researchers’ understanding of large-scale lake processes, we developed a parsimonious mathematical model based on the Pareto distribution to describe the distribution of lake morphology (area, perimeter, and volume). While debate continues over which mathematical representation best fits any one distribution of lake morphometric characteristics, we recognize the need for a simple, flexible model to advance understanding of how the interaction between morphometry and function dictates scaling across large populations of lakes. These models make clear the relative contribution of lakes to the total amount of lake surface area, volume, and perimeter. They also highlight the critical Pareto slopes (0.63, 1, and 1.12, respectively) at which total perimeter, area, and volume would be evenly distributed across lake size-classes. These models of morphology can be used in combination with models of process to create overarching “lake population” level models of process. To illustrate this potential, we combine the model of surface area distribution with a model of carbon mass accumulation rate. We found that even if smaller lakes contribute relatively less to total surface area than larger lakes, the increase in carbon accumulation rate with decreasing lake size is strong enough to bias the distribution of carbon mass accumulation towards smaller lakes. This analytical framework provides a relatively simple approach to upscaling morphology and process that is easily generalizable to other ecosystem processes.
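
The "even distribution" threshold can be checked directly: for a Pareto size distribution with slope c (density ∝ a^(−(c+1))), the total area held by a size class follows from integrating a · a^(−(c+1)). A minimal sketch (the class edges are arbitrary, for illustration):

```python
import numpy as np

def class_area_fraction(c, edges):
    """Fraction of total lake area in each size class [edges[i], edges[i+1])
    for a Pareto size distribution with slope c (density ~ a^-(c+1))."""
    # Area in [lo, hi) is proportional to the integral of a^-c da.
    lo, hi = edges[:-1], edges[1:]
    if c == 1:
        mass = np.log(hi / lo)   # integral of 1/a
    else:
        mass = (hi ** (1 - c) - lo ** (1 - c)) / (1 - c)
    return mass / mass.sum()

# Logarithmic size classes spanning four decades.
edges = np.logspace(0, 4, 5)  # 1, 10, 100, 1000, 10000

# At the critical slope c = 1, every decade holds the same total area.
print(class_area_fraction(1.0, edges))  # [0.25 0.25 0.25 0.25]
```

Running the same function with c below 1 tilts the area budget toward the largest lakes, and c above 1 toward the smallest, which is why the reported slopes of 0.63 and 1.12 bracket very different pictures for perimeter and volume.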

  19. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing. CRESST Report 830

    ERIC Educational Resources Information Center

    Cai, Li

    2013-01-01

    Lord and Wingersky's (1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined…
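
For the dichotomous unidimensional case, the Lord-Wingersky recursion builds the summed-score likelihood one item at a time at a fixed ability value; a minimal sketch (the report's Version 2.0 extends this to multidimensional quadrature, which is not shown here):

```python
def summed_score_likelihoods(p):
    """Lord-Wingersky recursion: given per-item probabilities p[i] of a
    correct response at a fixed ability, return the likelihood of each
    possible summed score 0..len(p)."""
    L = [1.0]  # score distribution over zero items: score 0 w.p. 1
    for pi in p:
        new = [0.0] * (len(L) + 1)
        for s, Ls in enumerate(L):
            new[s] += Ls * (1.0 - pi)   # item answered incorrectly
            new[s + 1] += Ls * pi       # item answered correctly
        L = new
    return L

# Three items with correct-response probabilities at some fixed theta.
print(summed_score_likelihoods([0.5, 0.5, 0.5]))
# [0.125, 0.375, 0.375, 0.125]
```

Each pass convolves the running score distribution with one item's response distribution, so the cost is quadratic in the number of items rather than exponential in response patterns.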

  20. Research on the Present Status of the Five-Year Medical Training Program in Chinese Medical Colleges

    ERIC Educational Resources Information Center

    Xu, Yan; Dong, Zhe; Miao, Le; Ke, Yang

    2014-01-01

    The five-year program is the main path for undergraduate medical training in China. Studies have shown that during the past eleven years, the scale of medical student enrollment increased annually with a relatively simple entrance exam. The ideas, teaching contents and methods, assessment and evaluation should be updated and improved. In general,…

  1. Photon Mass, Graviton Mass: Zero or Not?

    NASA Astrophysics Data System (ADS)

    Scharff Goldhaber, Alfred; Nieto, Michael Martin

    2007-04-01

    Testing for deviations from simple laws is a time-honored part of physics research. In electricity and magnetism the first approach to such testing, from the eighteenth century well into the twentieth, was to look for departures from -2 of the power of distance between two electric charges or two magnetic poles determining the force between them. Absent a particular length scale, this was a natural choice for parameterizing possible deviations from the simple and esthetic inverse square law. With the advent of relativity and quantum mechanics, and the realization that certain phenomena of light can be described in terms of photon particles, it became appealing to ask if these particles might have a non-zero mass, and Proca found the appropriate modification of the Maxwell equations. Despite the particle-motion origin of this idea, the most powerful way to constrain the size of a possible photon mass is by setting a lower bound on the Compton wavelength, by looking at static electric and especially magnetic fields over increasing length scales. For gravity similar statements apply, but graviton mass is theoretically questionable, and observed phenomena imply either additional sources or departures from Einstein's general relativity.

  2. Exploring the effect of power law social popularity on language evolution.

    PubMed

    Gong, Tao; Shuai, Lan

    2014-01-01

    We evaluate the effect of a power-law-distributed social popularity on the origin and change of language, based on three artificial life models meticulously tracing the evolution of linguistic conventions including lexical items, categories, and simple syntax. A cross-model analysis reveals an optimal social popularity, in which the λ value of the power law distribution is around 1.0. Under this scaling, linguistic conventions can efficiently emerge and widely diffuse among individuals, thus maintaining a useful level of mutual understandability even in a big population. From an evolutionary perspective, we regard this social optimality as a tradeoff among social scaling, mutual understandability, and population growth. Empirical evidence confirms that such optimal power laws exist in many large-scale social systems that are constructed primarily via language-related interactions. This study contributes to the empirical explorations and theoretical discussions of the evolutionary relations between ubiquitous power laws in social systems and relevant individual behaviors.

  3. Radar scattering mechanisms within the meteor crater ejecta blanket: Geologic implications and relevance to Venus

    NASA Technical Reports Server (NTRS)

    Garvin, J. B.; Campbell, B. A.; Zisk, S. H.; Schaber, Gerald G.; Evans, C.

    1989-01-01

    Simple impact craters are known to occur on all of the terrestrial planets, and the morphologic expression of their ejecta blankets is a reliable indicator of their relative ages on the Moon, Mars, Mercury, and most recently Venus. It will be crucial for the interpretation of the geology of Venus to develop a reliable means of distinguishing smaller impact landforms from volcanic collapse and explosion craters, and further to use the observed SAR characteristics of crater ejecta blankets (CEB) as a means of relative age estimation. With these concepts in mind, a study was initiated of the quantitative SAR textural characteristics of the ejecta blanket preserved at Meteor Crater, Arizona, the well-studied 1.2 km diameter simple crater that formed approx. 49,000 years ago from the impact of an octahedrite bolide. While Meteor Crater was formed by an impact into wind- and water-lain sediments and has undergone recognizable water- and wind-related erosion, it nonetheless represents the only well-studied simple impact crater on Earth with a reasonably preserved CEB. Whether the scattering behavior of the CEB can provide an independent perspective on its preservation state and style of erosion is explored. Finally, airborne laser altimeter profiles of the microtopography of the Meteor Crater CEB were used to further quantify the sub-radar-pixel-scale topographic slopes and RMS height variations for comparison with the scattering mechanisms computed from SAR polarimetry. A preliminary assessment is summarized of the L-band radar scattering mechanisms within the Meteor Crater CEB as derived from a NASA/JPL DC-8 SAR polarimetry dataset acquired in 1988, and the dominant scattering behavior was compared with microtopographic data (laser altimeter profiles and 1:10,000 scale topographic maps).

  4. Thermodynamic Identities and Symmetry Breaking in Short-Range Spin Glasses

    NASA Astrophysics Data System (ADS)

    Arguin, L.-P.; Newman, C. M.; Stein, D. L.

    2015-10-01

    We present a technique to generate relations connecting pure state weights, overlaps, and correlation functions in short-range spin glasses. These are obtained directly from the unperturbed Hamiltonian and hold for general coupling distributions. All are satisfied in phases with simple thermodynamic structure, such as the droplet-scaling and chaotic pairs pictures. If instead nontrivial mixed-state pictures hold, the relations suggest that replica symmetry is broken as described by a Derrida-Ruelle cascade, with pure state weights distributed as a Poisson-Dirichlet process.

  5. Energy and institution size

    PubMed Central

    2017-01-01

    Why do institutions grow? Despite nearly a century of scientific effort, there remains little consensus on this topic. This paper offers a new approach that focuses on energy consumption. A systematic relation exists between institution size and energy consumption per capita: as energy consumption increases, institutions become larger. I hypothesize that this relation results from the interplay between technological scale and human biological limitations. I also show how a simple stochastic model can be used to link energy consumption with firm dynamics. PMID:28178339
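
    The "simple stochastic model" mentioned in the abstract is not specified there; as a hypothetical stand-in, a Gibrat-style multiplicative growth process illustrates how random proportional growth alone produces the right-skewed size distributions typical of firms (all parameters below are illustrative):

```python
import math
import random

# Hypothetical Gibrat-style sketch (not the paper's actual model):
# sizes that grow by random proportional increments become right-skewed.
rng = random.Random(1)
sizes = [1.0] * 5000
for _ in range(200):
    sizes = [s * math.exp(rng.gauss(0.0, 0.1)) for s in sizes]
```

    After 200 multiplicative steps the distribution is approximately lognormal, so the mean exceeds the median: the heavy-tailed size structure characteristic of firm populations.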

  6. Ion current detector for high pressure ion sources for monitoring separations

    DOEpatents

    Smith, R.D.; Wahl, J.H.; Hofstadler, S.A.

    1996-08-13

    The present invention relates generally to any application involving the monitoring of signal arising from ions produced by electrospray or other high pressure (>100 torr) ion sources. The present invention relates specifically to an apparatus and method for the detection of ions emitted from a capillary electrophoresis (CE) system, liquid chromatography, or other small-scale separation methods. And further, the invention provides a very simple diagnostic as to the quality of the separation and the operation of an electrospray source. 7 figs.

  7. Ion current detector for high pressure ion sources for monitoring separations

    DOEpatents

    Smith, Richard D.; Wahl, Jon H.; Hofstadler, Steven A.

    1996-01-01

    The present invention relates generally to any application involving the monitoring of signal arising from ions produced by electrospray or other high pressure (>100 torr) ion sources. The present invention relates specifically to an apparatus and method for the detection of ions emitted from a capillary electrophoresis (CE) system, liquid chromatography, or other small-scale separation methods. And further, the invention provides a very simple diagnostic as to the quality of the separation and the operation of an electrospray source.

  8. Effect of spanwise variation of turbulence on the normal acceleration of airplanes with small span relative to turbulence scale

    NASA Technical Reports Server (NTRS)

    Pratt, K. G.

    1975-01-01

    A rigid airplane with an unswept wing is analyzed. The results show that the power spectrum, relative to that for a one-dimensional turbulence field, is significantly attenuated at the higher frequencies even for airplanes with arbitrarily small ratios of span to scale of turbulence. This attenuation is described by a simple weighting function of frequency that depends only on aspect ratio. The weighting function, together with the attenuation due to the unsteady flow of gust penetration, allows the determination of the average rate of zero crossings for airplanes having very small spans without recourse to an integral truncation which is often required in calculations based on a one-dimensional turbulence field.

  9. Wavepacket dynamics in one-dimensional system with long-range correlated disorder

    NASA Astrophysics Data System (ADS)

    Yamada, Hiroaki S.

    2018-03-01

    We numerically investigate dynamical properties of the one-dimensional tight-binding model with long-range correlated disorder having power spectrum 1/f^α (α: spectrum exponent), generated by the Fourier filtering method. For relatively small α < α_c (=2), the time dependence of the mean square displacement (MSD) of an initially localized wavepacket shows ballistic spread before localizing as time elapses. We show that the α-dependence of the dynamical localization length determined from the MSD exhibits a simple scaling law in the localization regime for relatively weak disorder strength W. Furthermore, the MSD scaled by the dynamical localization length obeys an almost universal function from the ballistic to the localization regime for various combinations of the parameters α and W.
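
    The Fourier filtering method mentioned above can be sketched in a few lines: site energies are built as a sum of cosines with random phases and amplitudes falling off as k^(-α/2), which gives a power spectrum ~1/f^α (a minimal sketch; the disorder strength W is applied by the caller):

```python
import math
import random

def correlated_disorder(N, alpha, seed=0):
    """Fourier filtering method (sketch): site energies whose power
    spectrum falls off as 1/f**alpha, built as a sum of cosines with
    random phases and amplitudes k**(-alpha/2)."""
    rng = random.Random(seed)
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(N // 2)]
    V = []
    for n in range(N):
        V.append(sum(k ** (-alpha / 2.0)
                     * math.cos(2.0 * math.pi * k * n / N + phases[k - 1])
                     for k in range(1, N // 2 + 1)))
    # Normalize to zero mean and unit variance; the disorder strength W
    # is then applied as a multiplicative factor by the caller.
    mean = sum(V) / N
    sd = math.sqrt(sum((v - mean) ** 2 for v in V) / N)
    return [(v - mean) / sd for v in V]
```

    Larger α yields smoother (more strongly correlated) on-site energy landscapes, which is what drives the α-dependence of the localization length studied in the paper.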

  10. Some Dynamical Features of Molecular Fragmentation by Electrons and Swift Ions

    NASA Astrophysics Data System (ADS)

    Montenegro, E. C.; Sigaud, L.; Wolff, W.; Luna, H.; Ferreira, Natalia

    To date, the large majority of studies on molecular fragmentation by swift charged particles have been carried out using simple molecules, for which reliable potential energy curves are available to interpret the measured fragmentation yields. For complex molecules the scenario is quite different and such guidance is not available, obscuring even a simple organization of the data currently obtained for a large variety of molecules of biological or technological interest. In this work we show that a general and relatively simple methodology can be used to obtain a broader picture of the fragmentation pattern of an arbitrary molecule. The electronic ionization or excitation cross section of a given molecular orbital, which is the first part of the fragmentation process, can be well scaled by a simple and general procedure at high projectile velocities. The fragmentation fractions arising from each molecular orbital can then be obtained by matching the calculated ionization with the measured fragmentation cross sections. Examples for the oxygen, chlorodifluoromethane, and pyrimidine molecules are presented.

  11. Passive Q switching and mode-locking of Er:glass lasers using VO2 mirrors

    NASA Astrophysics Data System (ADS)

    Pollack, S. A.; Chang, D. B.; Chudnovskii, F. A.; Khakhaev, I. A.

    1995-09-01

    Passive Q switching of an Er:glass laser, with the pulse width varying between 14 and 80 ns, has been demonstrated using three resonator-mirror samples coated with vanadium dioxide (VO2), which have temperature-dependent reflectivity and differ in reflectivity contrast. The reflectivity changes because of a phase transition from a semiconducting to a metallic state. The broadband operating characteristics of VO2 mirrors provide Q switching over a wide range of wavelengths. In addition, mode-locked pulses with much shorter time scales have been observed, due to exciton formation and recombination. A simple criterion is derived for the allowable ambient temperatures at which the Q switching operates effectively. A simple relation has also been found between the duration of the Q-switched pulse and the contrast in the reflectivities of the two mirror phases.

  12. PENDISC: a simple method for constructing a mathematical model from time-series data of metabolite concentrations.

    PubMed

    Sriyudthsak, Kansuporn; Iwata, Michio; Hirai, Masami Yokota; Shiraishi, Fumihide

    2014-06-01

    The availability of large-scale datasets has led to more effort being made to understand characteristics of metabolic reaction networks. However, because the large-scale data are semi-quantitative, and may contain biological variations and/or analytical errors, it remains a challenge to construct a mathematical model with precise parameters using only these data. The present work proposes a simple method, referred to as PENDISC (Parameter Estimation in a Non-DImensionalized S-system with Constraints), to assist the complex process of parameter estimation in the construction of a mathematical model for a given metabolic reaction system. The PENDISC method was evaluated using two simple mathematical models: a linear metabolic pathway model with inhibition and a branched metabolic pathway model with inhibition and activation. The results indicate that a smaller number of data points and rate constant parameters enhances the agreement between calculated values and time-series data of metabolite concentrations, and leads to faster convergence when the same initial estimates are used for the fitting. This method is also shown to be applicable to noisy time-series data and to unmeasurable metabolite concentrations in a network, and to have the potential to handle metabolome data of a relatively large-scale metabolic reaction system. Furthermore, it was applied to aspartate-derived amino acid biosynthesis in the plant Arabidopsis thaliana. The result confirms that the constructed mathematical model satisfactorily agrees with the time-series datasets of seven metabolite concentrations.
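
    PENDISC itself works on a non-dimensionalized S-system with constraints; as a much-reduced toy version of the same fitting task, the sketch below recovers the two rate constants of a one-step pathway dX/dt = k1 - k2·X from synthetic time-series data by brute-force least squares (the model and grid are illustrative, not the paper's method):

```python
def simulate(k1, k2, x0=0.0, dt=0.01, steps=500):
    """Euler integration of dX/dt = k1 - k2*X, a one-step toy pathway."""
    xs, x = [], x0
    for _ in range(steps):
        xs.append(x)
        x += dt * (k1 - k2 * x)
    return xs

# Synthetic "observations" from known constants, then recover them by
# minimizing the sum of squared errors over a coarse parameter grid.
data = simulate(2.0, 0.5)
best = min(
    (sum((a - b) ** 2 for a, b in zip(simulate(i / 10, j / 10), data)), i / 10, j / 10)
    for i in range(1, 40)
    for j in range(1, 20)
)
print(best)  # smallest SSE occurs at the true constants k1 = 2.0, k2 = 0.5
```

    Real metabolome data are noisy and semi-quantitative, which is exactly why PENDISC adds non-dimensionalization and constraints on top of this kind of fitting.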

  13. Correcting the SIMPLE Model of Free Recall

    ERIC Educational Resources Information Center

    Lee, Michael D.; Pooley, James P.

    2013-01-01

    The scale-invariant memory, perception, and learning (SIMPLE) model developed by Brown, Neath, and Chater (2007) formalizes the theoretical idea that scale invariance is an important organizing principle across numerous cognitive domains and has made an influential contribution to the literature dealing with modeling human memory. In the context…

  14. Geometry and Reynolds-Number Scaling on an Iced Business-Jet Wing

    NASA Technical Reports Server (NTRS)

    Lee, Sam; Ratvasky, Thomas P.; Thacker, Michael; Barnhart, Billy P.

    2005-01-01

    A study was conducted to develop a method to scale the effect of ice accretion on a full-scale business jet wing model to a 1/12-scale model at greatly reduced Reynolds number. Full-scale, 5/12-scale, and 1/12-scale models of identical airfoil section were used in this study. Three types of ice accretion were studied: 22.5-minute ice protection system failure shape, 2-minute initial ice roughness, and a runback shape that forms downstream of a thermal anti-ice system. The results showed that the 22.5-minute failure shape could be scaled from full-scale to 1/12-scale through simple geometric scaling. The 2-minute roughness shape could be scaled by choosing an appropriate grit size. The runback ice shape exhibited greater Reynolds number effects and could not be scaled by simple geometric scaling of the ice shape.

  15. Methods and optical fibers that decrease pulse degradation resulting from random chromatic dispersion

    DOEpatents

    Chertkov, Michael; Gabitov, Ildar

    2004-03-02

    The present invention provides methods and optical fibers for periodically pinning an actual (random) accumulated chromatic dispersion of an optical fiber to a predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.

  16. Gait-force model and inertial measurement unit-based measurements: A new approach for gait analysis and balance monitoring.

    PubMed

    Li, Xinan; Xu, Hongyuan; Cheung, Jeffrey T

    2016-12-01

    This work describes a new approach for gait analysis and balance measurement. It uses an inertial measurement unit (IMU) that can either be embedded inside a dynamically unstable platform for balance measurement or mounted on the lower back of a human participant for gait analysis. The acceleration data along three Cartesian coordinates is analyzed by the gait-force model to extract bio-mechanics information in both the dynamic state as in the gait analyzer and the steady state as in the balance scale. For the gait analyzer, the simple, noninvasive and versatile approach makes it appealing to a broad range of applications in clinical diagnosis, rehabilitation monitoring, athletic training, sport-apparel design, and many other areas. For the balance scale, it provides a portable platform to measure the postural deviation and the balance index under visual or vestibular sensory input conditions. Despite its simple construction and operation, excellent agreement has been demonstrated between its performance and the high-cost commercial balance unit over a wide dynamic range. The portable balance scale is an ideal tool for routine monitoring of balance index, fall-risk assessment, and other balance-related health issues for both clinical and household use.

  17. Spatial analysis of cities using Renyi entropy and fractal parameters

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang; Feng, Jian

    2017-12-01

    The spatial distributions of cities fall into two groups: one is the simple distribution with a characteristic scale (e.g. the exponential distribution), and the other is the complex distribution without a characteristic scale (e.g. the power-law distribution). The latter belongs to the scale-free distributions, which can be modeled with fractal geometry. However, fractal dimension is not suitable for the former distribution. In contrast, spatial entropy can be used to measure any type of urban distribution. This paper is devoted to generalizing multifractal parameters by means of the dual relation between Euclidean and fractal geometries. The main method is mathematical derivation and empirical analysis, and the theoretical foundation is the discovery that the normalized fractal dimension is equal to the normalized entropy. Based on this finding, a set of useful spatial indexes termed dummy multifractal parameters are defined for geographical analysis. These indexes can be employed to describe both simple and complex distributions. The dummy multifractal indexes are applied to the population density distribution of Hangzhou city, China. The calculations reveal features of the spatio-temporal evolution of Hangzhou's urban morphology. This study indicates that fractal dimension and spatial entropy can be combined to produce a new methodology for spatial analysis of city development.
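
    The Renyi entropy underlying such spatial indexes reduces to the Shannon entropy in the limit q → 1, and for a uniform distribution it equals ln N for every order q; a minimal sketch:

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy S_q = ln(sum_i p_i**q) / (1 - q) of a probability
    vector p; the q -> 1 limit is the Shannon entropy."""
    if abs(q - 1.0) < 1e-12:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)
```

    Applied to cell occupation probabilities from a gridded population map, the variation of S_q with q is what distinguishes a simple single-scale distribution from a multifractal one.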

  18. A clinimetric approach to assessing quality of life in epilepsy.

    PubMed

    Cramer, J A

    1993-01-01

    Clinimetrics is a concept involving the use of rating scales for clinical phenomena ranging from physical examinations to functional performance. Clinimetric or rating scales can be used for defining patient status and changes that occur during long-term observation. The scores derived from such scales can be used as guidelines for intervention, treatment, or prediction of outcome. In epilepsy, clinimetric scales have been developed for assessing seizure frequency, seizure severity, adverse effects related to antiepileptic drugs (AEDs), and quality of life after surgery for epilepsy. The VA Epilepsy Cooperative Study seizure rating scale combines frequency and severity in a weighted scoring system for simple and complex partial and generalized tonic-clonic seizures, summing all items in a total seizure score. Similarly, the rating scales for systemic toxicity and neurotoxicity use scores weighted for severity for assessing specific adverse effects typically related to AEDs. A composite score, obtained by adding the scores for seizures, systemic toxicity, and neurotoxicity, represents the overall status of the patient at a given time. The Chalfont Seizure Severity Scale also applies scores relative to the impact of a given item on the patient, without factoring in seizure frequency. The Liverpool Seizure Severity Scale is a patient questionnaire covering perceived seizure severity and the impact of ictal and postictal events. The UCLA Epilepsy Surgery Inventory (ESI-55) assesses quality of life for patients who have undergone surgery for epilepsy using generic health status instruments with additional epilepsy-specific items.(ABSTRACT TRUNCATED AT 250 WORDS)

  19. Bridging the scales in a eulerian air quality model to assess megacity export of pollution

    NASA Astrophysics Data System (ADS)

    Siour, G.; Colette, A.; Menut, L.; Bessagnet, B.; Coll, I.; Meleux, F.

    2013-08-01

    In Chemistry Transport Models (CTMs), spatial scale interactions are often represented through off-line coupling between large and small scale models. However, those nested configurations cannot give account of the impact of the local scale on its surroundings. This issue can be critical in areas exposed to air mass recirculation (sea breeze cells) or around regions with sharp pollutant emission gradients (large cities). Such phenomena can still be captured by means of adaptive gridding, two-way nesting or model nudging, but these approaches remain relatively costly. We present here the development and the results of a simple alternative multi-scale approach making use of a horizontally stretched grid in the Eulerian CTM CHIMERE. This method, called "stretching" or "zooming", consists in the introduction of local zooms in a single chemistry-transport simulation. It allows bridging online the spatial scales from the city (∼1 km resolution) to the continental area (∼50 km resolution). The CHIMERE model was run over a continental European domain, zoomed over the BeNeLux (Belgium, Netherlands and Luxembourg) area. We demonstrate that, compared with one-way nesting, the zooming method allows the expression of a significant feedback of the refined domain towards the large scale: around the city cluster of BeNeLux, NO2 and O3 scores are improved. NO2 variability around BeNeLux is also better accounted for, and the net primary pollutant flux transported back towards BeNeLux is reduced. Although the results could not be validated for ozone over BeNeLux, we show that the zooming approach provides a simple and immediate way to better represent scale interactions within a CTM, and constitutes a useful tool for apprehending the hot topic of megacities within their continental environment.

  20. Stable clustering and the resolution of dissipationless cosmological N-body simulations

    NASA Astrophysics Data System (ADS)

    Benhaiem, David; Joyce, Michael; Sylos Labini, Francesco

    2017-10-01

    The determination of the resolution of cosmological N-body simulations, i.e. the range of scales in which quantities measured in them accurately represent the continuum limit, is an important open question. We address it here using scale-free models, for which self-similarity provides a powerful tool to control resolution. Such models also provide a robust testing ground for the so-called stable clustering approximation, which gives simple predictions for them. Studying large N-body simulations of such models with different force smoothing, we find that these two issues are in fact very closely related: our conclusion is that the accuracy of two-point statistics in the non-linear regime starts to degrade strongly around the scale at which their behaviour deviates from that predicted by the stable clustering hypothesis. Physically the association of the two scales is in fact simple to understand: stable clustering fails to be a good approximation when there are strong interactions of structures (in particular merging), and it is precisely such non-linear processes which are sensitive to fluctuations at the smaller scales affected by discretization. Resolution may be further degraded if the short distance gravitational smoothing scale is larger than the scale to which stable clustering can propagate. We examine in detail the very different conclusions of studies by Smith et al. and Widrow et al. and find that the strong deviations from stable clustering reported by these works are the results of over-optimistic assumptions about the scales resolved accurately by the measured power spectra, and the reliance on Fourier space analysis. We emphasize the much poorer resolution obtained with the power spectrum compared to the two-point correlation function.

  1. Exaggeration and suppression of iridescence: the evolution of two-dimensional butterfly structural colours

    PubMed Central

    Wickham, Shelley; Large, Maryanne C.J; Poladian, Leon; Jermiin, Lars S

    2005-01-01

    Many butterfly species possess ‘structural’ colour, where colour is due to optical microstructures found in the wing scales. A number of such structures have been identified in butterfly scales, including three variations on a simple multi-layer structure. In this study, we optically characterize examples of all three types of multi-layer structure, as found in 10 species. The optical mechanism of the suppression and exaggeration of the angle-dependent optical properties (iridescence) of these structures is described. In addition, we consider the phylogeny of the butterflies, and are thus able to relate the optical properties of the structures to their evolutionary development. By applying two different types of analysis, the mechanism of adaptation is addressed. A simple parsimony analysis, in which all evolutionary changes are given an equal weighting, suggests convergent evolution of one structure. A Dollo parsimony analysis, in which the evolutionary ‘cost’ of losing a structure is less than that of gaining it, implies that ‘latent’ structures can be reused. PMID:16849221

  2. Assessment of snow-dominated water resources: (Ir-)relevant scales for observation and modelling

    NASA Astrophysics Data System (ADS)

    Schaefli, Bettina; Ceperley, Natalie; Michelon, Anthony; Larsen, Joshua; Beria, Harsh

    2017-04-01

    High Alpine catchments play an essential role for many world regions since they 1) provide water resources to low lying and often relatively dry regions, 2) are important for hydropower production as a result of their high hydraulic heads, 3) offer relatively undisturbed habitat for fauna and flora and 4) provide a source of cold water often late into the summer season (due to snowmelt), which is essential for many downstream river ecosystems. However, the water balance of such high Alpine hydrological systems is often difficult to accurately estimate, in part because of seasonal to interannual accumulation of precipitation in the form of snow and ice and by relatively low but highly seasonal evapotranspiration rates. These processes are strongly driven by the topography and related vegetation patterns, by air temperature gradients, solar radiation and wind patterns. Based on selected examples, we will discuss how the spatial scale of these patterns dictates at which scales we can make reliable water balance assessments. Overall, this contribution will provide an overview of some of the key open questions in terms of observing and modelling the dominant hydrological processes in Alpine areas at the right scale. A particular focus will be on the observation and modelling of snow accumulation and melt processes, discussing in particular the usefulness of simple models versus fully physical models at different spatial scales and the role of observed data.

  3. The emergence of overlapping scale-free genetic architecture in digital organisms.

    PubMed

    Gerlee, P; Lundh, T

    2008-01-01

    We have studied the evolution of genetic architecture in digital organisms and found that the gene overlap follows a scale-free distribution, which is commonly found in metabolic networks of many organisms. Our results show that the slope of the scale-free distribution depends on the mutation rate and that the gene development is driven by expansion of already existing genes, which is in direct correspondence to the preferential growth algorithm that gives rise to scale-free networks. To further validate our results we have constructed a simple model of gene development, which recapitulates the results from the evolutionary process and shows that the mutation rate affects the tendency of genes to cluster. In addition we could relate the slope of the scale-free distribution to the genetic complexity of the organisms and show that a high mutation rate gives rise to a more complex genetic architecture.
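
    The "preferential growth algorithm that gives rise to scale-free networks" referred to above is, in its simplest form, Barabasi-Albert attachment with one edge per new node; a minimal sketch:

```python
import random

def preferential_attachment(n, seed=0):
    """Scale-free growth sketch (Barabasi-Albert, one edge per newcomer):
    each new node attaches to an existing node with probability
    proportional to that node's current degree."""
    rng = random.Random(seed)
    degree = {0: 1, 1: 1}       # start from a single edge between nodes 0 and 1
    pool = [0, 1]               # node ids repeated once per edge endpoint
    for new in range(2, n):
        old = rng.choice(pool)  # uniform draw from the pool = degree-weighted choice
        degree[new] = 1
        degree[old] += 1
        pool += [new, old]
    return degree
```

    Because well-connected nodes are drawn more often, a few hubs acquire large degree while most nodes keep degree one, the analogue of gene development being driven by expansion of already existing genes.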

  4. Definition of (so MIScalled) "Complexity" as UTTER-SIMPLICITY!!! Versus Deviations From it as Complicatedness-Measure

    NASA Astrophysics Data System (ADS)

    Young, F.; Siegel, Edward Carl-Ludwig

    2011-03-01

    (so MIScalled) "complexity" with INHERENT BOTH SCALE-Invariance Symmetry-RESTORING, AND 1/ω (1.000...) "pink" Zipf-law Archimedes-HYPERBOLICITY INEVITABILITY power-spectrum power-law decay algebraicity. Their CONNECTION is via the simple-calculus SCALE-Invariance Symmetry-RESTORING logarithm-function derivative: (d/dω) ln(ω) = 1/ω, i.e. (d/dω)[SCALE-Invariance Symmetry-RESTORING](ω) = 1/ω. Via the Noether-theorem relation of continuous symmetries to conservation laws: (d/dω)[inter-scale 4-current 4-divergence = 0](ω) = 1/ω. Hence (so MIScalled) "complexity" is information inter-scale conservation, in agreement with Anderson-Mandell [Fractals of Brain/Mind, G. Stamov ed. (1994)] experimental psychology, i.e. (so MIScalled) "complexity" is UTTER-SIMPLICITY!!! Versus COMPLICATEDNESS, either PLUS (Additive) VS. TIMES (Multiplicative) COMPLICATIONS of various system-specifics. COMPLICATEDNESS-MEASURE DEVIATIONS FROM complexity's UTTER-SIMPLICITY!!!: [SCALE-Invariance Symmetry-BREAKING] MINUS [SCALE-Invariance Symmetry-RESTORING] via power-spectrum power-law algebraicity decay DIFFERENCES: ["red"-Pareto] MINUS ["pink"-Zipf Archimedes-HYPERBOLICITY INEVITABILITY]!!!

  5. Micro to Nanoscale Engineering of Surface Precipitates Using Reconfigurable Contact Lines.

    PubMed

    Kabi, Prasenjit; Chaudhuri, Swetaprovo; Basu, Saptarshi

    2018-02-06

    Nanoscale engineering has traditionally adopted the chemical route of synthesis or optochemical techniques such as lithography requiring large process times, expensive equipment, and an inert environment. Directed self-assembly using evaporation of nanocolloidal droplet can be a potential low-cost alternative across various industries ranging from semiconductors to biomedical systems. It is relatively simple to scale and reorient the evaporation-driven internal flow field in an evaporating droplet which can direct dispersed matter into functional agglomerates. The resulting functional precipitates not only exhibit macroscopically discernible changes but also nanoscopic variations in the particulate assembly. Thus, the evaporating droplet forms an autonomous system for nanoscale engineering without the need for external resources. In this article, an indigenous technique of interfacial re-engineering, which is both simple and inexpensive to implement, is developed. Such re-engineering widens the horizon for surface patterning previously limited by the fixed nature of the droplet interface. It involves handprinting hydrophobic lines on a hydrophilic substrate to form a confinement of any selected geometry using a simple document stamp. Droplets cast into such confinements get modulated into a variety of shapes. The droplet shapes control the contact line behavior, evaporation dynamics, and complex internal flow pattern. By exploiting the dynamic interplay among these variables, we could control the deposit's macro- as well as nanoscale assembly not possible with simple circular droplets. We provide a detailed mechanism of the coupling at various length scales enabling a predictive capability in custom engineering, particularly useful in nanoscale applications such as photonic crystals.

  6. Modeling gene expression measurement error: a quasi-likelihood approach

    PubMed Central

    Strimmer, Korbinian

    2003-01-01

    Background Using suitable error models for gene expression measurements is essential in the statistical analysis of microarray data. However, the true probabilistic model underlying gene expression intensity readings is generally not known. Instead, in currently used approaches some simple parametric model is assumed (usually a transformed normal distribution) or the empirical distribution is estimated. However, both these strategies may not be optimal for gene expression data, as the non-parametric approach ignores known structural information whereas the fully parametric models run the risk of misspecification. A further related problem is the choice of a suitable scale for the model (e.g. observed vs. log-scale). Results Here a simple semi-parametric model for gene expression measurement error is presented. In this approach inference is based on an approximate likelihood function (the extended quasi-likelihood). Only partial knowledge about the unknown true distribution is required to construct this function. In the case of gene expression this information is available in the form of the postulated (e.g. quadratic) variance structure of the data. As the quasi-likelihood behaves (almost) like a proper likelihood, it allows for the estimation of calibration and variance parameters, and it is also straightforward to obtain corresponding approximate confidence intervals. Unlike most other frameworks, it also allows analysis on any preferred scale, i.e. both on the original linear scale as well as on a transformed scale. It can also be employed in regression approaches to model systematic (e.g. array or dye) effects. Conclusions The quasi-likelihood framework provides a simple and versatile approach to analyze gene expression data that does not make any strong distributional assumptions about the underlying error model. For several simulated as well as real data sets it provides a better fit to the data than competing models. In an example it also improved the power of tests to identify differential expression. PMID:12659637

  7. Order-of-magnitude physics of neutron stars. Estimating their properties from first principles

    NASA Astrophysics Data System (ADS)

    Reisenegger, Andreas; Zepeda, Felipe S.

    2016-03-01

    We use basic physics and simple mathematics accessible to advanced undergraduate students to estimate the main properties of neutron stars. We set the stage and introduce relevant concepts by discussing the properties of "everyday" matter on Earth, degenerate Fermi gases, white dwarfs, and scaling relations of stellar properties with polytropic equations of state. Then, we discuss various physical ingredients relevant for neutron stars and how they can be combined in order to obtain a couple of different simple estimates of their maximum mass, beyond which they would collapse, turning into black holes. Finally, we use the basic structural parameters of neutron stars to briefly discuss their rotational and electromagnetic properties.
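
    The flavor of estimate the paper builds can be reproduced in a few lines: balancing degenerate-neutron pressure against Newtonian gravity gives a maximum mass of order (ħc/G)^{3/2}/m_n², the Chandrasekhar-style scale (an order-of-magnitude sketch, not the paper's detailed derivation):

```python
# Order-of-magnitude sketch of the maximum neutron-star mass:
# balancing neutron degeneracy pressure against gravity gives the
# Chandrasekhar-style scale M_max ~ (hbar*c/G)**1.5 / m_n**2.
hbar = 1.0546e-34   # J s
c = 2.998e8         # m/s
G = 6.674e-11       # m^3 kg^-1 s^-2
m_n = 1.675e-27     # neutron mass, kg
M_sun = 1.989e30    # kg

M_max = (hbar * c / G) ** 1.5 / m_n ** 2
print(M_max / M_sun)  # roughly 2 solar masses: the right order of magnitude
```

    That this back-of-the-envelope number lands near observed neutron-star masses is exactly the point of the order-of-magnitude approach advocated in the paper.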

  8. Validation of the replica trick for simple models

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.

  9. Prediction of drug transport processes using simple parameters and PLS statistics. The use of ACD/logP and ACD/ChemSketch descriptors.

    PubMed

    Osterberg, T; Norinder, U

    2001-01-01

    A method of modelling and predicting biopharmaceutical properties using simple theoretically computed molecular descriptors and multivariate statistics has been investigated for several data sets related to solubility, IAM chromatography, permeability across Caco-2 cell monolayers, human intestinal perfusion, brain-blood partitioning, and P-glycoprotein ATPase activity. The molecular descriptors (e.g. molar refractivity, molar volume, index of refraction, surface tension and density) and logP were computed with ACD/ChemSketch and ACD/logP, respectively. Good statistical models were derived that permit simple computational prediction of biopharmaceutical properties. All final models derived had R² values ranging from 0.73 to 0.95 and Q² values ranging from 0.69 to 0.86. The RMSEP values for the external test sets ranged from 0.24 to 0.85 (log scale).

  10. Dearomative dihydroxylation with arenophiles

    NASA Astrophysics Data System (ADS)

    Southgate, Emma H.; Pospech, Jola; Fu, Junkai; Holycross, Daniel R.; Sarlah, David

    2016-10-01

    Aromatic hydrocarbons are some of the most elementary feedstock chemicals, produced annually on a million metric ton scale, and are used in the production of polymers, paints, agrochemicals and pharmaceuticals. Dearomatization reactions convert simple, readily available arenes into more complex molecules with broader potential utility. However, despite substantial progress and achievements in this field, there are relatively few methods for the dearomatization of simple arenes that also selectively introduce functionality. Here we describe a new dearomatization process that involves visible-light activation of small heteroatom-containing organic molecules—arenophiles—that results in their para-cycloaddition with a variety of aromatic compounds. The approach uses N-N-arenophiles to enable dearomative dihydroxylation and diaminodihydroxylation of simple arenes. This strategy provides direct and selective access to highly functionalized cyclohexenes and cyclohexadienes and is orthogonal to existing chemical and biological dearomatization processes. Finally, we demonstrate the synthetic utility of this strategy with the concise synthesis of several biologically active compounds and natural products.

  11. Self-Organized Dynamic Flocking Behavior from a Simple Deterministic Map

    NASA Astrophysics Data System (ADS)

    Krueger, Wesley

    2007-10-01

    Coherent motion exhibiting large-scale order, such as flocking, swarming, and schooling behavior in animals, can arise from simple rules applied to an initial random array of self-driven particles. We present a completely deterministic dynamic map that exhibits emergent, collective, complex motion for a group of particles. Each individual particle is driven with a constant speed in two dimensions adopting the average direction of a fixed set of non-spatially related partners. In addition, the particle changes direction by π as it reaches a circular boundary. The dynamical patterns arising from these rules range from simple circular-type convective motion to highly sophisticated, complex, collective behavior which can be easily interpreted as flocking, schooling, or swarming depending on the chosen parameters. We present the results as a series of short movies and we also explore possible order parameters and correlation functions capable of quantifying the resulting coherence.

  12. On a Continuum Limit for Loop Quantum Cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Corichi, Alejandro; Center for Fundamental Theory, Institute for Gravitation and the Cosmos, Pennsylvania State University, University Park PA 16802; Vukasinac, Tatjana

    2008-03-06

    The use of non-regular representations of the Heisenberg-Weyl commutation relations has proved to be useful for studying conceptual and technical issues in quantum gravity. Of particular relevance is the study of Loop Quantum Cosmology (LQC), a symmetry-reduced theory that is related to Loop Quantum Gravity and that is based on a non-regular, polymeric representation. Recently, a soluble model was used by Ashtekar, Corichi and Singh to study the relation between Loop Quantum Cosmology and the standard Wheeler-DeWitt theory and, in particular, the passage to the limit in which the auxiliary parameter (interpreted as "quantum geometry discreteness") is sent to zero, in the hope of getting rid of this 'regulator' that dictates the LQC dynamics at each 'scale'. In this note we outline the first steps toward reformulating this question within the program developed by the authors for studying the continuum limit of polymeric theories, which was successfully applied to simple systems such as the simple harmonic oscillator.

  13. Scaling range of power laws that originate from fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz; Mazur, Zygmunt

    2013-05-01

    We extend our previous study of scaling range properties performed for detrended fluctuation analysis (DFA) [Physica A 392, 2384 (2013); doi:10.1016/j.physa.2013.01.049] to other techniques of fluctuation analysis (FA). A new technique, called modified detrended moving average analysis (MDMA), is introduced, and its scaling range properties are examined and compared with those of detrended moving average analysis (DMA) and DFA. It is shown that, contrary to DFA, the DMA and MDMA techniques exhibit a power law dependence of the scaling range with respect to the length of the searched signal and with respect to the accuracy R² of the fit to the scaling law imposed by the DMA or MDMA methods. This power law dependence is satisfied for both uncorrelated and autocorrelated data. We also find a simple generalization of this power law relation for series with different levels of autocorrelation, measured in terms of the Hurst exponent. Basic relations between scaling ranges for different techniques are also discussed. Our findings should be particularly useful for local FA in, e.g., econophysics, finance, or physiology, where a huge number of short time series has to be examined at once and where a preliminary check of the scaling range regime for each of the series separately is neither effective nor possible.

  14. Scaling properties of European research units

    PubMed Central

    Jamtveit, Bjørn; Jettestuen, Espen; Mathiesen, Joachim

    2009-01-01

    A quantitative characterization of the scale-dependent features of research units may provide important insight into how such units are organized and how they grow. The relative importance of top-down versus bottom-up controls on their growth may be revealed by their scaling properties. Here we show that the number of support staff in Scandinavian research units, ranging in size from 20 to 7,800 staff members, is related to the number of academic staff by a power law. The scaling exponent of ≈1.30 is broadly consistent with a simple hierarchical model of the university organization. Similar scaling behavior between small and large research units with a wide range of ambitions and strategies argues against top-down control of the growth. Top-down effects, and externally imposed effects from changing political environments, can be observed as fluctuations around the main trend. The observed scaling law implies that cost-benefit arguments for merging research institutions into larger and larger units may have limited validity unless the productivity per academic staff and/or the quality of the products are considerably higher in larger institutions. Despite the hierarchical structure of most large-scale research units in Europe, the network structures represented by the academic component of such units are strongly antihierarchical and suboptimal for efficient communication within individual units. PMID:19625626
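
The power law quoted above (support staff ∝ academic staff^1.30) is the kind of relation one recovers from a log-log least-squares fit. The sketch below uses synthetic data with the abstract's exponent built in; the prefactor and noise level are made-up values, not the study's measurements:

```python
import numpy as np

# Sketch of recovering a scaling exponent from unit-size data via a
# log-log least-squares fit, as in the abstract's support-vs-academic
# staff power law N_support ~ a * N_academic**1.30. Data are synthetic.
rng = np.random.default_rng(0)
academic = np.logspace(1, 3.5, 40)                       # 10 to ~3200 academic staff
support = 0.1 * academic**1.30 * rng.lognormal(0.0, 0.1, academic.size)

slope, intercept = np.polyfit(np.log(academic), np.log(support), 1)
print(f"fitted exponent ~ {slope:.2f}")                   # close to the generated 1.30
```

A fitted exponent above 1 means support staff grows faster than linearly with academic staff, which is the abstract's argument against simple cost-benefit cases for mergers.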

  15. A simple approximation for larval retention around reefs

    NASA Astrophysics Data System (ADS)

    Cetina-Heredia, Paulina; Connolly, Sean R.

    2011-09-01

    Estimating larval retention at individual reefs by local-scale three-dimensional flows is a significant problem for understanding, and predicting, larval dispersal. Determining larval dispersal commonly involves the use of computationally demanding and expensively calibrated/validated hydrodynamic models that resolve reef wake eddies. This study models variation in larval retention times for a range of reef shapes and circulation regimes, using a reef-scale three-dimensional hydrodynamic model. It also explores how well larval retention time can be estimated based on the "Island Wake Parameter", a measure of the degree of flow turbulence in the wake of reefs that is a simple function of flow speed, reef dimension, and vertical diffusion. The mean residence times found in the present study (0.48-5.64 days) indicate substantial potential for self-recruitment of species whose larvae are passive, or weak swimmers, for the first several days after release. Results also reveal strong and significant relationships between the Island Wake Parameter and mean residence time, explaining 81-92% of the variability in retention among reefs across a range of unidirectional flow speeds and tidal regimes. These findings suggest that good estimates of larval retention may be obtained from relatively coarse-scale characteristics of the flow, and basic features of reef geomorphology. Such approximations may be a valuable tool for modeling connectivity and meta-population dynamics over large spatial scales, where explicitly characterizing fine-scale flows around reefs requires a prohibitive amount of computation and extensive model calibration.

  16. Scale-Up of GRCop: From Laboratory to Rocket Engines

    NASA Technical Reports Server (NTRS)

    Ellis, David L.

    2016-01-01

    GRCop is a high temperature, high thermal conductivity copper-based series of alloys designed primarily for use in regeneratively cooled rocket engine liners. It began with laboratory-level production of a few grams of ribbon produced by chill block melt spinning and has grown to commercial-scale production of large-scale rocket engine liners. Along the way, a variety of methods of consolidating and working the alloy were examined, a database of properties was developed and a variety of commercial and government applications were considered. This talk will briefly address the basic material properties used for selection of compositions to scale up, the methods used to go from simple ribbon to rocket engines, the need to develop a suitable database, and the issues related to getting the alloy into a rocket engine or other application.

  17. Quantum Entanglement of Matter and Geometry in Large Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogan, Craig J.

    2014-12-04

    Standard quantum mechanics and gravity are used to estimate the mass and size of idealized gravitating systems where position states of matter and geometry become indeterminate. It is proposed that well-known inconsistencies of standard quantum field theory with general relativity on macroscopic scales can be reconciled by nonstandard, nonlocal entanglement of field states with quantum states of geometry. Wave functions of particle world lines are used to estimate scales of geometrical entanglement and emergent locality. Simple models of entanglement predict coherent fluctuations in position of massive bodies, of Planck scale origin, measurable on a laboratory scale, and may account for the fact that the information density of long-lived position states in Standard Model fields, which is determined by the strong interactions, is the same as that determined holographically by the cosmological constant.

  18. The Origin of Universal Scaling in Biology from Molecules & Cells to Whales and Ecosystems

    NASA Astrophysics Data System (ADS)

    West, Geoffrey

    2002-03-01

    Life is the most complex physical system in the Universe manifesting an extraordinary diversity of form and function over an enormous scale ranging from the largest animals and plants to the smallest microbes. Yet, many of its most fundamental and complex phenomena scale with size in a surprisingly simple fashion. For example, metabolic rate (the power needed to sustain life) scales as the 3/4-power of mass over 27 orders of magnitude ranging from molecular and intra-cellular levels up through the smallest unicellular organisms to the largest animals and plants. Similarly, time-scales (such as lifespan and heart-rate) and sizes (such as the radius of a tree trunk or the density of mitochondria) change with size with exponents which are typically simple powers of 1/4. The phenomenology of these "laws" will be reviewed and a quantitative unified theory presented that explains their origin, including that of the universal 1/4-power. It is based on the fundamental observation that, regardless of size, almost all life is sustained, and ultimately constrained, by space-filling, fractal-like hierarchical branching networks which are optimised by the forces of natural selection. Integrated descriptions of the cardiovascular, respiratory and plant vascular systems will be presented as explicit examples. It will be shown how scaling universality can be related to an effective additional fourth spatial dimension of life. Extensions to growth, aging and mortality, ecosystems and the nature of evolution, including thermodynamic considerations and the concept of a universal molecular clock, will be discussed.
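
The quarter-power relations described above can be written down directly. In the sketch below the normalization constants are arbitrary; only the exponents come from the text:

```python
# Illustrative quarter-power allometric relations from the abstract:
# metabolic rate B ~ M**(3/4), rates (e.g. heart rate) ~ M**(-1/4),
# biological times ~ M**(1/4). Normalizations b0 and r0 are arbitrary.
def metabolic_rate(mass, b0=1.0):
    return b0 * mass ** 0.75

def heart_rate(mass, r0=1.0):
    return r0 * mass ** -0.25

# A 10,000-fold increase in body mass raises metabolic rate only ~1,000-fold,
# the sublinear economy of scale implied by the 3/4 exponent.
ratio = metabolic_rate(1e4) / metabolic_rate(1.0)
print(f"{ratio:.0f}")
```

The same two functions also reproduce the qualitative observation that larger animals have slower heart rates and longer lifespans, since times scale as the inverse of rates.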

  19. A simple tectonic model for crustal accretion in the Slave Province: A 2.7-2.5 Ga granite greenstone terrane

    NASA Technical Reports Server (NTRS)

    Hoffman, P. F.

    1986-01-01

    A prograding (direction unspecified) trench-arc system is favored as a simple yet comprehensive model for crustal generation in a 250,000 sq km granite-greenstone terrane. The model accounts for the evolutionary sequence of volcanism, sedimentation, deformation, metamorphism and plutonism observed throughout the Slave Province. It explains both unconformable (trench inner slope) and subconformable (trench outer slope) relations between the volcanics and the overlying turbidites, and the existence of relatively minor amounts of pre-greenstone basement (microcontinents) and syn-greenstone plutons (accreted arc roots). Predictions include a variable gap between greenstone volcanism and trench turbidite sedimentation (accompanied by minor volcanism) and systematic regional variations in the age span of volcanism and plutonism. Implications of the model will be illustrated with reference to a 1:1 million scale geological map of the Slave Province (and its bounding 1.0 Ga orogens).

  20. Health related quality of life measure in systemic pediatric rheumatic diseases and its translation to different languages: an international collaboration.

    PubMed

    Moorthy, Lakshmi Nandini; Roy, Elizabeth; Kurra, Vamsi; Peterson, Margaret G E; Hassett, Afton L; Lehman, Thomas J A; Scott, Christiaan; El-Ghoneimy, Dalia; Saad, Shereen; El Feky, Reem; Al-Mayouf, Sulaiman; Dolezalova, Pavla; Malcova, Hana; Herlin, Troels; Nielsen, Susan; Wulffraat, Nico; van Royen, Annet; Marks, Stephen D; Belot, Alexandre; Brunner, Jurgen; Huemer, Christian; Foeldvari, Ivan; Horneff, Gerd; Saurenman, Traudel; Schroeder, Silke; Pratsidou-Gertsi, Polyxeni; Trachana, Maria; Uziel, Yosef; Aggarwal, Amita; Constantin, Tamas; Cimaz, Rolando; Giani, Theresa; Cantarini, Luca; Falcini, Fernanda; Manzoni, Silvia Magni; Ravelli, Angelo; Rigante, Donato; Zulian, Francesco; Miyamae, Takako; Yokota, Shumpei; Sato, Juliana; Magalhaes, Claudia S; Len, Claudio A; Appenzeller, Simone; Knupp, Sheila Oliveira; Rodrigues, Marta Cristine; Sztajnbok, Flavio; de Almeida, Rozana Gasparello; de Jesus, Adriana Almeida; de Arruda Campos, Lucia Maria; Silva, Clovis; Lazar, Calin; Susic, Gordana; Avcin, Tadej; Cuttica, Ruben; Burgos-Vargas, Ruben; Faugier, Enrique; Anton, Jordi; Modesto, Consuelo; Vazquez, Liza; Barillas, Lilliana; Barinstein, Laura; Sterba, Gary; Maldonado, Irama; Ozen, Seza; Kasapcopur, Ozgur; Demirkaya, Erkan; Benseler, Susa

    2014-01-01

    Rheumatic diseases in children are associated with significant morbidity and poor health-related quality of life (HRQOL). There is no HRQOL scale available specifically for children with less common rheumatic diseases. These diseases share several features with systemic lupus erythematosus (SLE), such as their chronic episodic nature, multi-systemic involvement, and the need for immunosuppressive medications. An HRQOL scale developed for pediatric SLE will therefore likely be applicable to children with systemic inflammatory diseases. We adapted the Simple Measure of Impact of Lupus Erythematosus in Youngsters (SMILEY©) to the Simple Measure of Impact of Illness in Youngsters (SMILY©-Illness) and had it reviewed by pediatric rheumatologists for its appropriateness and cultural suitability. We tested SMILY©-Illness in patients with inflammatory rheumatic diseases and then translated it into 28 languages. Nineteen children (79% female, n=15) and 17 parents participated. The mean age was 12±4 years, with a median disease duration of 21 months (1-172 months). We translated SMILY©-Illness into the following 28 languages: Danish, Dutch, French (France), English (UK), German (Germany), German (Austria), German (Switzerland), Hebrew, Italian, Portuguese (Brazil), Slovene, Spanish (USA and Puerto Rico), Spanish (Spain), Spanish (Argentina), Spanish (Mexico), Spanish (Venezuela), Turkish, Afrikaans, Arabic (Saudi Arabia), Arabic (Egypt), Czech, Greek, Hindi, Hungarian, Japanese, Romanian, Serbian and Xhosa. SMILY©-Illness is a brief, easy-to-administer and easy-to-score HRQOL scale for children with systemic rheumatic diseases. It is suitable for use across different age groups and literacy levels. SMILY©-Illness and its available translations may be used as useful adjuncts to clinical practice and research.

  1. Effects of land use on lake nutrients: The importance of scale, hydrologic connectivity, and region

    USGS Publications Warehouse

    Soranno, Patricia A.; Cheruvelil, Kendra Spence; Wagner, Tyler; Webster, Katherine E.; Bremigan, Mary Tate

    2015-01-01

    Catchment land uses, particularly agriculture and urban uses, have long been recognized as major drivers of nutrient concentrations in surface waters. However, few simple models have been developed that relate the amount of catchment land use to downstream freshwater nutrients. Nor are existing models applicable to large numbers of freshwaters across broad spatial extents such as regions or continents. This research aims to increase model performance by exploring three factors that affect the relationship between land use and downstream nutrients in freshwater: the spatial extent for measuring land use, hydrologic connectivity, and the regional differences in both the amount of nutrients and effects of land use on them. We quantified the effects of these three factors that relate land use to lake total phosphorus (TP) and total nitrogen (TN) in 346 north temperate lakes in 7 regions in Michigan, USA. We used a linear mixed modeling framework to examine the importance of spatial extent, lake hydrologic class, and region on models with individual lake nutrients as the response variable, and individual land use types as the predictor variables. Our modeling approach was chosen to avoid problems of multi-collinearity among predictor variables and a lack of independence of lakes within regions, both of which are common problems in broad-scale analyses of freshwaters. We found that all three factors influence land use-lake nutrient relationships. The strongest evidence was for the effect of lake hydrologic connectivity, followed by region, and finally, the spatial extent of land use measurements. Incorporating these three factors into relatively simple models of land use effects on lake nutrients should help to improve predictions and understanding of land use-lake nutrient interactions at broad scales.

  2. Effects of Land Use on Lake Nutrients: The Importance of Scale, Hydrologic Connectivity, and Region

    PubMed Central

    Soranno, Patricia A.; Cheruvelil, Kendra Spence; Wagner, Tyler; Webster, Katherine E.; Bremigan, Mary Tate

    2015-01-01

    Catchment land uses, particularly agriculture and urban uses, have long been recognized as major drivers of nutrient concentrations in surface waters. However, few simple models have been developed that relate the amount of catchment land use to downstream freshwater nutrients. Nor are existing models applicable to large numbers of freshwaters across broad spatial extents such as regions or continents. This research aims to increase model performance by exploring three factors that affect the relationship between land use and downstream nutrients in freshwater: the spatial extent for measuring land use, hydrologic connectivity, and the regional differences in both the amount of nutrients and effects of land use on them. We quantified the effects of these three factors that relate land use to lake total phosphorus (TP) and total nitrogen (TN) in 346 north temperate lakes in 7 regions in Michigan, USA. We used a linear mixed modeling framework to examine the importance of spatial extent, lake hydrologic class, and region on models with individual lake nutrients as the response variable, and individual land use types as the predictor variables. Our modeling approach was chosen to avoid problems of multi-collinearity among predictor variables and a lack of independence of lakes within regions, both of which are common problems in broad-scale analyses of freshwaters. We found that all three factors influence land use-lake nutrient relationships. The strongest evidence was for the effect of lake hydrologic connectivity, followed by region, and finally, the spatial extent of land use measurements. Incorporating these three factors into relatively simple models of land use effects on lake nutrients should help to improve predictions and understanding of land use-lake nutrient interactions at broad scales. PMID:26267813

  3. Connecting the molecular scale to the continuum scale for diffusion processes in smectite-rich porous media.

    PubMed

    Bourg, Ian C; Sposito, Garrison

    2010-03-15

    In this paper, we address the manner in which the continuum-scale diffusive properties of smectite-rich porous media arise from their molecular- and pore-scale features. Our starting point is a successful model of the continuum-scale apparent diffusion coefficient for water tracers and cations, which decomposes it as a sum of pore-scale terms describing diffusion in macropore and interlayer "compartments." We then apply molecular dynamics (MD) simulations to determine molecular-scale diffusion coefficients D_interlayer of water tracers and representative cations (Na⁺, Cs⁺, Sr²⁺) in Na-smectite interlayers. We find that a remarkably simple expression relates D_interlayer to the pore-scale parameter δ_nanopore ≤ 1, a constrictivity factor that accounts for the lower mobility in interlayers as compared to macropores: δ_nanopore = D_interlayer/D_0, where D_0 is the diffusion coefficient in bulk liquid water. Using this scaling expression, we can accurately predict the apparent diffusion coefficients of the tracers H₂O, Na⁺, Sr²⁺, and Cs⁺ in compacted Na-smectite-rich materials.
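
The quoted scaling expression δ_nanopore = D_interlayer/D_0 can be sketched together with a simple two-compartment weighting for the apparent diffusion coefficient. The linear weighting and all numerical values below are illustrative assumptions, not the paper's fitted parameters:

```python
# Simplified sketch of the macropore/interlayer decomposition described in
# the abstract. delta_nanopore = D_interlayer / D0 is the quoted scaling
# relation; the linear compartment weighting and all numbers are made up.
D0 = 2.3e-9            # tracer diffusion coefficient in bulk water, m^2/s (typical)
D_interlayer = 0.6e-9  # hypothetical MD-derived interlayer value, m^2/s
f_interlayer = 0.7     # hypothetical fraction of diffusing species in interlayers

delta_nanopore = D_interlayer / D0   # constrictivity factor, always <= 1
D_apparent = (1 - f_interlayer) * D0 + f_interlayer * delta_nanopore * D0
print(f"delta = {delta_nanopore:.2f}, D_apparent = {D_apparent:.2e} m^2/s")
```

The point of the scaling relation is that a single MD-derived number, δ_nanopore, carries the molecular-scale information into the continuum-scale compartment model.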

  4. Super Clausius-Clapeyron scaling of extreme hourly precipitation and its relation to large-scale atmospheric conditions

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Barbero, Renaud; Loriaux, Jessica; Fowler, Hayley

    2017-04-01

    Present-day precipitation-temperature scaling relations indicate that hourly precipitation extremes may have a response to warming exceeding the Clausius-Clapeyron (CC) relation; for The Netherlands the dependency on surface dew point temperature follows two times the CC relation, corresponding to 14% per degree. Our hypothesis - as supported by a simple physical argument presented here - is that this 2CC behaviour arises from the physics of convective clouds. That is, we think this response is due to local feedbacks related to the convective activity, while the other large-scale atmospheric forcing conditions remain similar except for the higher temperature (approximately uniform warming with height) and absolute humidity (corresponding to the assumption of unchanged relative humidity). To test this hypothesis, we analysed the large-scale atmospheric conditions accompanying summertime afternoon precipitation events using surface observations combined with a regional re-analysis for The Netherlands. Events are precipitation measurements clustered in time and space, derived from approximately 30 automatic weather stations. The hourly peak intensities of these events again reveal a 2CC scaling with the surface dew point temperature. The temperature excess of moist updrafts initialized at the surface and the maximum cloud depth are clear functions of surface dew point temperature, confirming the key role of surface humidity in convective activity. Almost no differences in relative humidity or the dry temperature lapse rate were found across the dew point temperature range, supporting our theory that 2CC scaling is mainly due to the response of convection to increases in near-surface humidity, while other atmospheric conditions remain similar. Additionally, hourly precipitation extremes are on average accompanied by substantial large-scale upward motions and therefore large-scale moisture convergence, which appears to accelerate with surface dew point. This increase in large-scale moisture convergence appears to be a consequence of latent heat release due to the convective activity, as estimated from the quasi-geostrophic omega equation. Consequently, most hourly extremes occur in precipitation events with considerable spatial extent. Importantly, this event size appears to increase rapidly at the highest dew point temperatures, suggesting potentially strong impacts of climatic warming.
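
The CC and 2CC responses quoted above (roughly 7% and 14% per degree of surface dew point) translate into a simple exponential intensity scaling. The reference intensity and warming increment in this sketch are arbitrary; only the percentage rates come from the text:

```python
# Clausius-Clapeyron-style intensity scaling from the abstract:
# CC corresponds to ~7% per degree, the observed 2CC response to ~14%.
def scaled_intensity(delta_td, rate=0.14, p0=1.0):
    """Peak hourly intensity after a surface dew point increase delta_td (K)."""
    return p0 * (1.0 + rate) ** delta_td

warming = 3.0  # K increase in surface dew point temperature (illustrative)
cc = scaled_intensity(warming, rate=0.07)
two_cc = scaled_intensity(warming, rate=0.14)
print(f"CC: x{cc:.2f}, 2CC: x{two_cc:.2f}")
```

Compounding matters here: over a few degrees of warming, the 2CC response amplifies extremes substantially more than twice the CC response in absolute terms.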

  5. Characterization and Scaling of Black Carbon Aerosol Concentration with City Population Based on In-Situ Measurements and Analysis

    NASA Astrophysics Data System (ADS)

    Paredes-Miranda, G.; Arnott, W. P.; Moosmuller, H.

    2010-12-01

    The global trend toward urbanization and the resulting increase in city population has directed attention toward air pollution in megacities. A closely related question of importance for urban planning and attainment of air quality standards is how pollutant concentrations scale with city population. In this study, we use measurements of light absorption and light scattering coefficients as proxies for primary (i.e., black carbon; BC) and total (i.e., particulate matter; PM) pollutant concentration, to start addressing the following questions: What patterns and generalizations are emerging from our expanding data sets on urban air pollution? How does per-capita air pollution vary with the economic, geographic, and meteorological conditions of an urban area? Does air pollution provide an upper limit on city size? Diurnal analysis of black carbon concentration measurements in suburban Mexico City, Mexico, Las Vegas, NV, USA, and Reno, NV, USA for similar seasons suggests that commonly emitted primary air pollutant concentrations scale approximately as the square root of the urban population N, consistent with a simple 2-d box model. The measured absorption coefficient Babs is approximately proportional to the BC concentration (primary pollution) and thus scales with the square root of the population N. Since secondary pollutants form through photochemical reactions involving primary pollutants, they also scale with the square root of N. Therefore the scattering coefficient Bsca, a proxy for PM concentration, is also expected to scale with the square root of N. Here we present light absorption and scattering measurements and data on meteorological conditions and compare the population scaling of these pollutant measurements with predictions from the simple 2-d box model. We find that these basin cities are consistent with the square-root-of-N dependence. Data from other cities will be discussed as time permits.
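
The square-root-of-population scaling suggested by the abstract is easy to state as a formula, C ∝ √N. The proportionality constant and the populations in this sketch are illustrative, not measured values:

```python
import math

# Sketch of the square-root-of-population scaling suggested by the abstract:
# pollutant concentration C ~ k * sqrt(N) for a basin city of population N.
def bc_concentration(population, k=1.0):
    return k * math.sqrt(population)

# A city 100x more populous is predicted to be only ~10x more polluted.
ratio = bc_concentration(20_000_000) / bc_concentration(200_000)
print(round(ratio, 1))  # 10.0
```

The sublinear exponent is the practical takeaway: doubling a city's population raises the predicted primary pollutant concentration by only about 41%.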

  6. On being the right size: scaling effects in designing a human-on-a-chip

    PubMed Central

    Moraes, Christopher; Labuz, Joseph M.; Leung, Brendan M.; Inoue, Mayumi; Chun, Tae-Hwa; Takayama, Shuichi

    2013-01-01

    Developing a human-on-a-chip by connecting multiple model organ systems would provide an intermediate screen for therapeutic efficacy and toxic side effects of drugs prior to conducting expensive clinical trials. However, correctly designing individual organs and scaling them relative to each other to make a functional microscale human analog is challenging, and a generalized approach has yet to be identified. In this work, we demonstrate the importance of rational design of both the individual organ and its relationship with other organs, using a simple two-compartment system simulating insulin-dependent glucose uptake in adipose tissues. We demonstrate that inter-organ scaling laws depend on both the number of cells, and on the spatial arrangement of those cells within the microfabricated construct. We then propose a simple and novel inter-organ ‘metabolically-supported functional scaling’ approach predicated on maintaining in vivo cellular basal metabolic rates, by limiting resources available to cells on the chip. This approach leverages findings from allometric scaling models in mammals that limited resources in vivo prompts cells to behave differently than in resource-rich in vitro cultures. Although applying scaling laws directly to tissues can result in systems that would be quite challenging to implement, engineering workarounds may be used to circumvent these scaling issues. Specific workarounds discussed include the limited oxygen carrying capacity of cell culture media when used as a blood substitute and the ability to engineer non-physiological structures to augment organ function, to create the transport-accessible, yet resource-limited environment necessary for cells to mimic in vivo functionality. Furthermore, designing the structure of individual tissues in each organ compartment may be a useful strategy to bypass scaling concerns at the inter-organ level. PMID:23925524

  7. Scaling Relations from Sunyaev-Zel'dovich Effect and Chandra X-ray Measurements of High-Redshift Galaxy Clusters

    NASA Technical Reports Server (NTRS)

    Bonamente, Massimiliano; Joy, Marshall; LaRoque, Samuel J.; Carlstrom, John E.; Nagai, Daisuke; Marrone, Dan

    2007-01-01

    We present Sunyaev-Zel'dovich Effect (SZE) scaling relations for 38 massive galaxy clusters at redshifts 0.14 ≤ z ≤ 0.89, observed with both the Chandra X-ray Observatory and the centimeter-wave SZE imaging system at the BIMA and OVRO interferometric arrays. An isothermal β-model, with the central 100 kpc excluded from the X-ray data, is used to model the intracluster medium and to measure global cluster properties. For each cluster, we measure the X-ray spectroscopic temperature, SZE gas mass, total mass, and integrated Compton-y parameter within r_2500. Our measurements are in agreement with the expectations based on a simple self-similar model of cluster formation and evolution. We compare the cluster properties derived from our SZE observations with and without Chandra spatial and spectral information and find them to be in good agreement. We compare our results with cosmological numerical simulations, and find that simulations that include radiative cooling, star formation and feedback match well both the slope and normalization of our SZE scaling relations.

  8. Direct Numerical Simulations of Multiphase Flows

    NASA Astrophysics Data System (ADS)

    Tryggvason, Gretar

    2013-03-01

    Many natural and industrial processes, such as rain and gas exchange between the atmosphere and oceans, boiling heat transfer, atomization, and chemical reactions in bubble columns, involve multiphase flows. Often the mixture can be described as a disperse flow where one phase consists of bubbles or drops. Direct numerical simulations (DNS) of disperse flow have recently been used to study the dynamics of multiphase flows with a large number of bubbles and drops, often showing that the collective motion results in relatively simple large-scale structure. Here we review simulations of bubbly flows in vertical channels where the flow direction, as well as the bubble deformability, has profound effects on the flow structure and the total flow rate. Results obtained so far are summarized and open questions identified. The resolution for DNS of multiphase flows is usually determined by a dominant scale, such as the average bubble or drop size, but in many cases much smaller scales are also present. These scales often consist of thin films, threads, or tiny drops appearing during coalescence or breakup, or are due to the presence of additional physical processes that operate on a very different time scale than the fluid flow. The presence of these small-scale features demands excessive resolution for conventional numerical approaches. However, at small flow scales the effects of surface tension are generally strong, so the interface geometry is simple, and viscous forces dominate the flow and keep it simple as well. These are exactly the conditions under which analytical models can be used, and we will discuss efforts to combine a semi-analytical description for the small-scale processes with a fully resolved simulation of the rest of the flow. We will, in particular, present an embedded analytical description to capture the mass transfer from bubbles in liquids where the diffusion of mass is much slower than the diffusion of momentum. 
This results in very thin mass-boundary layers that are difficult to resolve, but the new approach allows us to simulate the mass transfer from many freely evolving bubbles and examine the effect of the interactions of the bubbles with each other and the flow. We will conclude by attempting to summarize the current status of DNS of multiphase flows. Supported by NSF and DOE (CASL).
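The point that mass diffuses far more slowly than momentum can be made with a standard order-of-magnitude estimate; this is a generic boundary-layer scaling with typical, assumed values, not the paper's embedded analytical model:

```python
# Generic scaling estimate (assumed typical values, not the paper's model):
# the Schmidt number Sc = nu / D compares momentum and mass diffusivities,
# and the concentration boundary layer on a bubble is thinner than the
# viscous layer by roughly a factor Sc**-0.5.

nu = 1.0e-6    # kinematic viscosity of water, m^2/s (assumed)
D = 2.0e-9     # O2 diffusivity in water, m^2/s (assumed)

Sc = nu / D
ratio = Sc ** -0.5   # delta_mass / delta_momentum
print(Sc, ratio)     # Sc = 500: the mass layer is ~20x thinner
```

A mass boundary layer an order of magnitude thinner than the already-thin viscous layer is exactly why fully resolving it is prohibitive and an embedded analytical description pays off.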

  9. A Simple Force-Motion Relation for Migrating Cells Revealed by Multipole Analysis of Traction Stress

    PubMed Central

    Tanimoto, Hirokazu; Sano, Masaki

    2014-01-01

    For biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scale in dissipative circumstances, the inertia force is negligible and all forces should cancel out. This implies that one must quantify the spatial pattern of the force instead of just the summation to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. The tight force-motion relation enables us to predict cell migration only from the traction stress patterns. PMID:24411233
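A minimal sketch of the lowest moment used in this kind of analysis: the force dipole matrix of a discrete traction field. The toy contractile pair below is illustrative, not the Dictyostelium data:

```python
# Force dipole moment M_ij = sum_k x_i^(k) * T_j^(k) over traction vectors
# T^(k) applied at positions x^(k).  A contractile cell gives a negative
# dipole along its axis.  (Toy input, not measured traction stress.)

def force_dipole(positions, tractions):
    """2x2 dipole matrix from point tractions at given 2D positions."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for (x, y), (tx, ty) in zip(positions, tractions):
        M[0][0] += x * tx
        M[0][1] += x * ty
        M[1][0] += y * tx
        M[1][1] += y * ty
    return M

# Two equal and opposite inward tractions along the x axis (contractile pair):
pos = [(-1.0, 0.0), (1.0, 0.0)]
trac = [(+0.5, 0.0), (-0.5, 0.0)]
print(force_dipole(pos, trac))  # M[0][0] = -1.0: contractile along x
```

Because the net force vanishes for a crawling cell, the dipole (and the next moment, the quadrupole) is the first place directional information about the stress field can appear, which is why the paper builds its force-motion relation on them.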

  10. Mathematics and the Internet: A Source of Enormous Confusion and Great Potential

    DTIC Science & Technology

    2009-05-01

    free Internet Myth The story recounted below of the scale-free nature of the Internet seems convincing, sound, and almost too good to be true ...models. In fact, much of the initial excitement in the nascent field of network science can be attributed to an early and appealingly simple class...this new class of networks, commonly referred to as scale-free networks. The term scale-free derives from the simple observation that power-law node

  11. Benford analysis of quantum critical phenomena: First digit provides high finite-size scaling exponent while first two and further are not much better

    NASA Astrophysics Data System (ADS)

    Bera, Anindita; Mishra, Utkarsh; Singha Roy, Sudipto; Biswas, Anindya; Sen(De), Aditi; Sen, Ujjwal

    2018-06-01

    Benford's law is an empirical edict stating that the lower digits appear more often than higher ones as the first few significant digits in statistics of natural phenomena and mathematical tables. A marked proportion of such analyses is restricted to the first significant digit. We employ violation of Benford's law, up to the first four significant digits, for investigating magnetization and correlation data of paradigmatic quantum many-body systems to detect cooperative phenomena, focusing on the finite-size scaling exponents thereof. We find that for the transverse-field quantum XY model, the behavior of the very first significant digit of an observable, at an arbitrary point of the parameter space, is enough to capture the quantum phase transition in the model with a relatively high scaling exponent. A higher number of significant digits does not provide an appreciable further advantage, in particular in terms of an increase in scaling exponents. Since the first significant digit of a physical quantity is relatively simple to obtain in experiments, the results have potential implications for laboratory observations in noisy environments.
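The first-digit test itself is simple to implement. The sketch below compares a data set's first-significant-digit frequencies with Benford's law, P(d) = log10(1 + 1/d); the data here are synthetic (log-uniform), not the magnetization data from the paper:

```python
import math
import random

# Compare first-significant-digit frequencies with Benford's law.
# Synthetic log-uniform data, not the quantum many-body observables.

def first_digit(x):
    """First significant digit of a positive number."""
    while x < 1:
        x *= 10
    while x >= 10:
        x /= 10
    return int(x)

def benford_violation(data):
    """Sum of |observed - Benford| frequencies over digits 1..9."""
    counts = [0] * 10
    for x in data:
        counts[first_digit(x)] += 1
    n = len(data)
    return sum(abs(counts[d] / n - math.log10(1 + 1 / d)) for d in range(1, 10))

random.seed(0)
data = [10 ** random.uniform(-3, 3) for _ in range(10000)]
print(benford_violation(data))  # small: log-uniform data follow Benford's law
```

The paper's quantity of interest is how this violation measure behaves across a quantum critical point as a function of system size; the violation itself is just this kind of digit-frequency distance.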

  12. Does it matter how you ask? Self-reported emotions to depictions of need-of-help and social context.

    PubMed

    Brielmann, Aenne A; Stolarova, Margarita

    2015-01-01

    When humans observe other people's emotions they not only can relate but also experience similar affective states. This capability is seen as a precondition for helping and other prosocial behaviors. Our study aims to quantify the influence of help-related picture content on subjectively experienced affect. It also assesses the impact of different scales on the way people rate their emotional state. The participants (N=242) of this study were shown stimuli with help-related content. In the first subset, half the drawings depicted a child or a bird needing help to reach a simple goal. The other drawings depicted situations where the goal was achieved. The second subset showed adults either actively helping a child or as passive bystanders. We created control conditions by including pictures of the adults on their own. Participants were asked to report their affective responses to the stimuli using two types of 9-point scales. For one half of the pictures, scales of arousal (calm to excited) and of bipolar valence (unhappy to happy) were employed; for the other half, unipolar scales of pleasantness and unpleasantness (strong to absent) were used. Even non-dramatic depictions of simple need-of-help situations were rated systematically lower in valence, higher in arousal, less pleasant and more unpleasant than corresponding pictures with the child or bird not needing help. The presence of a child and adult together increased pleasantness ratings compared to pictures in which they were depicted alone. Arousal was lower for pictures showing only an adult than for those including a child. Depictions of active helping were rated similarly to pictures showing a passive adult bystander, when the need-of-help was resolved. Aggregated unipolar pleasantness and unpleasantness ratings accounted well for arousal and even better for bipolar valence ratings and for content effects on them. 
This is the first study to report on the meaningful impact of harmless need-of-help content on self-reported emotional experience. It provides the basis for further investigating the links between subjective emotional experience and active prosocial behavior. It also builds upon recent findings on the correspondence between emotional ratings on bipolar and unipolar scales.

  13. Long-term neurocognitive outcome and auditory event-related potentials after complex febrile seizures in children.

    PubMed

    Tsai, Min-Lan; Hung, Kun-Long; Tsan, Ying-Ying; Tung, William Tao-Hsin

    2015-06-01

    Whether prolonged or complex febrile seizures (FS) produce long-term injury to the hippocampus is a critical question concerning the neurocognitive outcome of these seizures. Long-term event-related evoked potential (ERP) recording from the scalp is a noninvasive technique reflecting the sensory and cognitive processes associated with attention tasks. This study aimed to investigate the long-term outcome of neurocognitive and attention functions and evaluated auditory event-related potentials in children who have experienced complex FS in comparison with other types of FS. One hundred and forty-seven children aged more than 6 years who had experienced complex FS, simple single FS, simple recurrent FS, or afebrile seizures (AFS) after FS and age-matched healthy controls were enrolled. Patients were evaluated with Wechsler Intelligence Scale for Children (WISC; Chinese WISC-IV) scores, behavior test scores (Chinese version of Conners' continuous performance test, CPT II V.5), and behavior rating scales. Auditory ERPs were recorded in each patient. Patients who had experienced complex FS exhibited significantly lower full-scale intelligence quotient (FSIQ), perceptual reasoning index, and working memory index scores than did the control group but did not show significant differences in CPT scores, behavior rating scales, or ERP latencies and amplitude compared with the other groups with FS. We found a significant decrease in the FSIQ and four indices of the WISC-IV, higher behavior rating scales, a trend of increased CPT II scores, and significantly delayed P300 latency and reduced P300 amplitude in the patients with AFS after FS. We conclude that there is an effect on cognitive function in children who have experienced complex FS and patients who developed AFS after FS. The results indicated that the WISC-IV is more sensitive in detecting cognitive abnormality than ERP. 
Cognition impairment, including perceptual reasoning and working memory defects, was identified in patients with prolonged, multiple, or focal FS. These results may have implications for the pathogenesis of complex FS. Further comprehensive psychological evaluation and educational programs are suggested. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Global-Scale Hydrology: Simple Characterization of Complex Simulation

    NASA Technical Reports Server (NTRS)

    Koster, Randal D.

    1999-01-01

    Atmospheric general circulation models (AGCMs) are unique and valuable tools for the analysis of large-scale hydrology. AGCM simulations of climate provide tremendous amounts of hydrological data with a spatial and temporal coverage unmatched by observation systems. To the extent that the AGCM behaves realistically, these data can shed light on the nature of the real world's hydrological cycle. In the first part of the seminar, I will describe the hydrological cycle in a typical AGCM, with some emphasis on the validation of simulated precipitation against observations. The second part of the seminar will focus on a key goal in large-scale hydrology studies, namely the identification of simple, overarching controls on hydrological behavior hidden amidst the tremendous amounts of data produced by the highly complex AGCM parameterizations. In particular, I will show that a simple 50-year-old climatological relation (and a recent extension we made to it) successfully predicts, to first order, both the annual mean and the interannual variability of simulated evaporation and runoff fluxes. The seminar will conclude with an example of a practical application of global hydrology studies. The accurate prediction of weather statistics several months in advance would have tremendous societal benefits, and conventional wisdom today points to the use of coupled ocean-atmosphere-land models for such seasonal-to-interannual prediction. Understanding the hydrological cycle in AGCMs is critical to establishing the potential for such prediction. Our own studies show, among other things, that soil moisture retention can lead to significant precipitation predictability in many midlatitude and tropical regions.

  15. Fractal Hypothesis of the Pelagic Microbial Ecosystem-Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    PubMed

    Våge, Selina; Thingstad, T Frede

    2015-01-01

    Trophic interactions are highly complex, and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals, where pattern-generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could underlie the formation of repeated patterns at different trophic levels and discuss how this may help understand the characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity can be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales.
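For readers unfamiliar with the fractal referenced above: the Sierpinski triangle itself emerges from a very simple local rule, here via the parity of binomial coefficients (a mathematical illustration of "simple rules, self-similar structure", unrelated to the ecological data):

```python
# Sierpinski triangle from the parity of binomial coefficients:
# C(r, c) is odd exactly when the bits of c are a subset of the bits of r.

def sierpinski(rows):
    """Return the first `rows` rows of the Sierpinski triangle as strings."""
    return ["".join("*" if (c & r) == c else " " for c in range(r + 1))
            for r in range(rows)]

for line in sierpinski(8):
    print(line)
```

This is exactly the contrast the abstract draws: for the mathematical fractal the generating rule is one line, whereas for natural fractal-like structures the rule must be inferred.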

  16. Fractal Hypothesis of the Pelagic Microbial Ecosystem—Can Simple Ecological Principles Lead to Self-Similar Complexity in the Pelagic Microbial Food Web?

    PubMed Central

    Våge, Selina; Thingstad, T. Frede

    2015-01-01

    Trophic interactions are highly complex, and modern sequencing techniques reveal enormous biodiversity across multiple scales in marine microbial communities. Within the chemically and physically relatively homogeneous pelagic environment, this calls for an explanation beyond spatial and temporal heterogeneity. Based on observations of simple parasite-host and predator-prey interactions occurring at different trophic levels and levels of phylogenetic resolution, we present a theoretical perspective on this enormous biodiversity, discussing in particular self-similar aspects of pelagic microbial food web organization. Fractal methods have been used to describe a variety of natural phenomena, with studies of habitat structures being an application in ecology. In contrast to mathematical fractals, where pattern-generating rules are readily known, however, identifying mechanisms that lead to natural fractals is not straightforward. Here we put forward the hypothesis that trophic interactions between pelagic microbes may be organized in a fractal-like manner, with the emergent network resembling the structure of the Sierpinski triangle. We discuss a mechanism that could underlie the formation of repeated patterns at different trophic levels and discuss how this may help understand the characteristic biomass size-spectra that hint at scale-invariant properties of the pelagic environment. If the idea of simple underlying principles leading to a fractal-like organization of the pelagic food web could be formalized, this would extend an ecologist's mindset on how biological complexity can be accounted for. It may furthermore benefit ecosystem modeling by facilitating adequate model resolution across multiple scales. PMID:26648929

  17. Duration of the common cold and similar continuous outcomes should be analyzed on the relative scale: a case study of two zinc lozenge trials.

    PubMed

    Hemilä, Harri

    2017-05-12

    The relative scale has been used for decades in analysing binary data in epidemiology. In contrast, there has been a long tradition of carrying out meta-analyses of continuous outcomes on the absolute, original measurement, scale. The biological rationale for using the relative scale in the analysis of binary outcomes is that it adjusts for baseline variations; however, similar baseline variations can occur in continuous outcomes, and the relative scale may therefore often be useful for continuous outcomes as well. The aim of this study was to determine whether the relative scale is more consistent with empirical data on treating the common cold than the absolute scale. Individual patient data were available for two randomized trials on zinc lozenges for the treatment of the common cold. Mossad (Ann Intern Med 125:81-8, 1996) found a 4.0-day (43%) reduction, and Petrus (Curr Ther Res 59:595-607, 1998) a 1.77-day (25%) reduction, in the duration of colds. In both trials, variance in the placebo group was significantly greater than in the zinc lozenge group. The effect estimates were applied to the common cold distributions of the placebo groups, and the resulting distributions were compared with the actual zinc lozenge group distributions. When the absolute effect estimates, 4.0 and 1.77 days, were applied to the placebo group common cold distributions, negative and zero (i.e., impossible) cold durations were predicted, and the high variance remained. In contrast, when the relative effect estimates, 43 and 25%, were applied, impossible common cold durations were not predicted in the placebo groups, and the cold distributions became similar to those of the zinc lozenge groups. For some continuous outcomes, such as the duration of illness and the duration of hospital stay, the relative scale leads to a more informative statistical analysis and more effective communication of the study findings. 
The transformation of continuous data to the relative scale is simple with a spreadsheet program, after which the relative scale data can be analysed using standard meta-analysis software. The option for the analysis of relative effects of continuous outcomes directly from the original data should be implemented in standard meta-analysis programs.
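The core argument can be reproduced in a few lines. The placebo durations below are synthetic; only the effect sizes (4.0 days absolute, 43% relative) come from the abstract:

```python
# Apply an absolute shift vs. a relative (percentage) reduction to a placebo
# duration distribution and check for impossible (<= 0) predicted durations.
# Synthetic durations, not the trial data; effect sizes from the abstract.

placebo_days = [2, 3, 4, 5, 6, 8, 10, 12]  # assumed placebo cold durations

absolute = [d - 4.0 for d in placebo_days]        # subtract 4.0 days
relative = [d * (1 - 0.43) for d in placebo_days]  # reduce by 43%

print(sum(1 for d in absolute if d <= 0))  # impossible durations predicted
print(sum(1 for d in relative if d <= 0))  # none: relative scale stays positive
```

The absolute shift also leaves the spread of the distribution unchanged, while the multiplicative reduction shrinks it, matching the observation that placebo-group variance exceeded zinc-group variance.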

  18. Optical Interferometric Micrometrology

    NASA Technical Reports Server (NTRS)

    Abel, Phillip B.; Lauer, James R.

    1989-01-01

    Resolutions in angstrom and subangstrom range sought for atomic-scale surface probes. Experimental optical micrometrological system built to demonstrate calibration of piezoelectric transducer to displacement sensitivity of few angstroms. Objective to develop relatively simple system producing and measuring translation, across surface of specimen, of stylus in atomic-force or scanning tunneling microscope. Laser interferometer used to calibrate piezoelectric transducer used in atomic-force microscope. Electronic portion of calibration system made of commercially available components.

  19. What seeds to plant in the Great Basin? Comparing traits prioritized in native plant cultivars and releases with those that promote survival in the field

    Treesearch

    Elizabeth A. Leger; Owen W. Baughman

    2015-01-01

    Restoration in the Great Basin is typically a large-scale enterprise, with aerial, drill, and broadcast seeding of perennial species common after wildfires. Arid conditions and invasive plants are significant barriers to overcome, but relatively simple changes to seeds used for restoration may improve success. Here we summarize: 1) the composition of seed...

  20. Multi-scaling allometric analysis for urban and regional development

    NASA Astrophysics Data System (ADS)

    Chen, Yanguang

    2017-01-01

    The concept of allometric growth is based on scaling relations, and it has been applied to urban and regional analysis for a long time. However, most allometric analyses have been devoted to the single proportional relation between two elements of a geographical system. Few studies focus on the allometric scaling of multiple elements. In this paper, a process of multiscaling allometric analysis is developed for studies of the spatio-temporal evolution of complex systems. By means of linear algebra and general system theory, and by analogy with the analytical hierarchy process, the concepts of allometric growth can be integrated with ideas from fractal dimension. Thus a new methodology of geo-spatial analysis and the related theoretical models emerge. Based on least squares regression and matrix operations, a simple algorithm is proposed to solve the multiscaling allometric equation. Applying the analytical method of multielement allometry to Chinese cities and regions yields satisfying results. A conclusion is reached that multiscaling allometric analysis can be employed to make a comprehensive evaluation of the relative levels of urban and regional development, and to explain spatial heterogeneity. The notion of multiscaling allometry may enrich the current theory and methodology of spatial analyses of urban and regional evolution.
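The basic building block of any such analysis, fitting a single allometric relation y = a*x^b by least squares on logarithmic scales, can be sketched as follows (synthetic data, not the Chinese city data):

```python
import math

# Fit the exponent b in y = a * x**b by ordinary least squares applied to
# log y vs. log x.  Synthetic power-law data, not the paper's city data.

def allometric_fit(x, y):
    """Return (a, b) for y = a * x**b via regression of log y on log x."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b = sum((u - mx) * (v - my) for u, v in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    a = math.exp(my - b * mx)
    return a, b

# Exact power-law data recovers the exponent:
x = [1.0, 2.0, 4.0, 8.0]
y = [3.0 * v ** 0.75 for v in x]
a, b = allometric_fit(x, y)
print(a, b)  # close to (3.0, 0.75)
```

The multiscaling approach in the paper generalizes this pairwise fit to many elements at once via matrix operations, but each matrix entry is still an exponent of this log-log form.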

  1. Echinocyte shapes: bending, stretching, and shear determine spicule shape and spacing.

    PubMed Central

    Mukhopadhyay, Ranjan; Lim H W, Gerald; Wortis, Michael

    2002-01-01

    We study the shapes of human red blood cells using continuum mechanics. In particular, we model the crenated, echinocytic shapes and show how they may arise from a competition between the bending energy of the plasma membrane and the stretching/shear elastic energies of the membrane skeleton. In contrast to earlier work, we calculate spicule shapes exactly by solving the equations of continuum mechanics subject to appropriate boundary conditions. A simple scaling analysis of this competition reveals an elastic length Lambda(el), which sets the length scale for the spicules and is, thus, related to the number of spicules experimentally observed on the fully developed echinocyte. PMID:11916836

  2. Scaled boundary finite element simulation and modeling of the mechanical behavior of cracked nanographene sheets

    NASA Astrophysics Data System (ADS)

    Honarmand, M.; Moradi, M.

    2018-06-01

    In this paper, using the scaled boundary finite element method (SBFM), perfect and cracked nanographene sheets were simulated for the first time. In this analysis, the atomic carbon bonds were modeled by simple bar elements with circular cross-sections. Compared with molecular dynamics (MD), the results obtained from the SBFM analysis are quite acceptable for zero-degree cracks. For all angles except zero, the Griffith criterion can be applied for the relation between critical stress and crack length. Finally, despite the simplifications used in the nanographene analysis, the obtained results can reproduce the mechanical behavior with high accuracy compared with experimental and MD results.

  3. Applications of Perron-Frobenius theory to population dynamics.

    PubMed

    Li, Chi-Kwong; Schneider, Hans

    2002-05-01

    By the use of Perron-Frobenius theory, simple proofs are given of the Fundamental Theorem of Demography and of a theorem of Cushing and Yicang on the net reproductive rate occurring in matrix models of population dynamics. The latter result, which is closely related to the Stein-Rosenberg theorem in numerical linear algebra, is further refined with some additional nonnegative matrix theory. When the fertility matrix is scaled by the net reproductive rate, the growth rate of the model is 1. More generally, we show how to achieve a given growth rate for the model by scaling the fertility matrix. Demographic interpretations of the results are given.
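A small numerical illustration of the scaling result, using a toy Leslie matrix and power iteration for the Perron root; the specific numbers are assumptions, not from the paper:

```python
# Scaling the fertility row of a Leslie matrix by 1/R0 (the net reproductive
# rate) drives the dominant eigenvalue -- the growth rate -- to 1.
# Toy parameter values; power iteration stands in for a full eigensolver.

def dominant_eigenvalue(A, iters=500):
    """Power iteration for the Perron root of a nonnegative primitive matrix."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)
        v = [x / lam for x in w]
    return lam

# Leslie matrix: fertilities in the first row, survival on the subdiagonal.
f = [1.0, 2.0, 1.0]
s = [0.5, 0.25]
A = [f, [s[0], 0.0, 0.0], [0.0, s[1], 0.0]]

# Net reproductive rate R0 = f1 + f2*s1 + f3*s1*s2 for this 3-stage model.
R0 = f[0] + f[1] * s[0] + f[2] * s[0] * s[1]
scaled = [[x / R0 for x in f]] + A[1:]
print(dominant_eigenvalue(scaled))  # ~= 1.0, as the theorem states
```

Scaling the fertility row by 1/R0 makes the new net reproductive rate exactly 1, and by the Cushing-Yicang result the growth rate is 1 precisely when R0 is 1.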

  4. Nested-scale discharge and groundwater level monitoring to improve predictions of flow route discharges and nitrate loads

    NASA Astrophysics Data System (ADS)

    van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.

    2010-10-01

    Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of the flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for predictions of catchment-scale discharge and nitrate loads. In order to relate field-site measurements to the catchment scale, an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve, which relates spatial variation in groundwater depths to the average groundwater depth. The GDD curve was measured for a single field site (0.009 km2), and simple process descriptions were applied to relate the groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow, and groundwater flow simultaneously, with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD curves from hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source of nitrates) decreased with increasing scale, from 76-79% at the field site to 34-61% and 25-50% for the two catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements yields better simulations of nitrate loads and better predictions of extreme discharges during validation periods than a model conditioned on catchment discharge only.

  5. Universal relations for range corrections to Efimov features

    DOE PAGES

    Ji, Chen; Braaten, Eric; Phillips, Daniel R.; ...

    2015-09-09

    In a three-body system of identical bosons interacting through a large S-wave scattering length a, there are several sets of features related to the Efimov effect that are characterized by discrete scale invariance. Effective field theory was recently used to derive universal relations between these Efimov features that include the first-order correction due to a nonzero effective range r_s. We reveal a simple pattern in these range corrections that had not been previously identified. The pattern is explained by the renormalization group for the effective field theory, which implies that the Efimov three-body parameter runs logarithmically with the momentum scale at a rate proportional to r_s/a. The running Efimov parameter also explains the empirical observation that range corrections can be largely taken into account by shifting the Efimov parameter by an adjustable parameter divided by a. Furthermore, the accuracy of universal relations that include first-order range corrections is verified by comparing them with various theoretical calculations using models with nonzero range.

  6. The role of strength defects in shaping impact crater planforms

    NASA Astrophysics Data System (ADS)

    Watters, W. A.; Geiger, L. M.; Fendrock, M.; Gibson, R.; Hundal, C. B.

    2017-04-01

    High-resolution imagery and digital elevation models (DEMs) were used to measure the planimetric shapes of well-preserved impact craters. These measurements were used to characterize the size-dependent scaling of the departure from circular symmetry, which provides useful insights into the processes of crater growth and modification. For example, we characterized the dependence of the standard deviation of radius (σR) on crater diameter (D) as σR ∼ Dm. For complex craters on the Moon and Mars, m ranges from 0.9 to 1.2 among strong and weak target materials. For the martian simple craters in our data set, m varies from 0.5 to 0.8. The value of m tends toward larger values in weak materials and modified craters, and toward smaller values in relatively unmodified craters as well as craters in high-strength targets, such as young lava plains. We hypothesize that m ≈ 1 for planforms shaped by modification processes (slumping and collapse), whereas m tends toward ∼ 1/2 for planforms shaped by an excavation flow that was influenced by strength anisotropies. Additional morphometric parameters were computed to characterize the following planform properties: the planform aspect ratio or ellipticity, the deviation from a fitted ellipse, and the deviation from a convex shape. We also measured the distribution of crater shapes using Fourier decomposition of the planform, finding a similar distribution for simple and complex craters. By comparing the strength of small and large circular harmonics, we confirmed that lunar and martian complex craters are more polygonal at small sizes. Finally, we have used physical and geometrical principles to motivate scaling arguments and simple Monte Carlo models for generating synthetic planforms, which depend on a characteristic length scale of target strength defects. One of these models can be used to generate populations of synthetic planforms which are very similar to the measured population of well-preserved simple craters on Mars.

  7. Large-scale velocities and primordial non-Gaussianity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schmidt, Fabian

    2010-09-15

    We study the peculiar velocities of density peaks in the presence of primordial non-Gaussianity. Rare, high-density peaks in the initial density field can be identified with tracers such as galaxies and clusters in the evolved matter distribution. The distribution of relative velocities of peaks is derived in the large-scale limit using two different approaches based on a local biasing scheme. Both approaches agree, and show that halos still stream with the dark matter locally as well as statistically, i.e. they do not acquire a velocity bias. Nonetheless, even a moderate degree of (not necessarily local) non-Gaussianity induces a significant skewness (~0.1-0.2) in the relative velocity distribution, making it a potentially interesting probe of non-Gaussianity on intermediate to large scales. We also study two-point correlations in redshift space. The well-known Kaiser formula is still a good approximation on large scales, if the Gaussian halo bias is replaced with its (scale-dependent) non-Gaussian generalization. However, there are additional terms not encompassed by this simple formula which become relevant on smaller scales (k ≳ 0.01 h/Mpc). Depending on the allowed level of non-Gaussianity, these could be of relevance for future large spectroscopic surveys.

  8. Textures on Mars: evidences of a biogenic environment

    NASA Astrophysics Data System (ADS)

    Rizzo, V.; Cantasano, N.

    Sediments on Mars could be explained as the result of simple coalescing structures with the ability to produce oriented concretions and more complex forms, such as intertwined filaments of microspherules, laminae, and "blueberries", growing from a microscopic scale to a macroscopic one, of which we have examples in some terrestrial microbial communities, especially cyanobacteria and their organosedimentary products named stromatolites. This study aims to describe the structural features that occur most often, showing their mutual relations in passing from simple to complex forms. These relationships could explain the genesis and the peculiar shapes of "blueberries" as the result of two different processes: an enrolling sheet of microspherules or an internal growth of minor spherule aggregates.

  9. Simple and compact expressions for neutrino oscillation probabilities in matter

    DOE PAGES

    Minakata, Hisakazu; Parke, Stephen J.

    2016-01-29

    We reformulate perturbation theory for neutrino oscillations in matter with an expansion parameter related to the ratio of the solar to the atmospheric Δm² scales. Unlike previous works, we use a renormalized basis in which certain first-order effects are taken into account in the zeroth-order Hamiltonian. Using this perturbation theory we derive extremely compact expressions for the neutrino oscillation probabilities in matter. We find, for example, that the ν_e disappearance probability at this order is of a simple two-flavor form with an appropriately identified mixing angle and Δm². Furthermore, despite the exceptional simplicity of their forms, they accommodate all-order effects of θ13 and the matter potential.
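The two-flavor form mentioned in this abstract can be illustrated with the standard textbook vacuum survival-probability formula (this is a generic expression, not the renormalized-basis result of Minakata and Parke; the numerical inputs below are purely illustrative):

```python
import math

def survival_probability(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Generic two-flavor survival probability:
    P = 1 - sin^2(2*theta) * sin^2(1.267 * dm2 * L / E),
    with dm2 in eV^2, L in km, E in GeV (standard unit convention)."""
    phase = 1.267 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative numbers only: an effective mixing angle and dm2 stand in
# for the appropriately identified matter-corrected parameters.
p = survival_probability(0.085, 2.5e-3, 295.0, 0.6)
```

In the paper's compact expressions the mixing angle and Δm² above would be replaced by their matter-corrected counterparts; the functional form stays two-flavor.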

  10. The application of sensitivity analysis to models of large scale physiological systems

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A survey of the literature of sensitivity analysis as it applies to biological systems is reported, along with a brief development of sensitivity theory. A simple population model and a more complex thermoregulatory model illustrate the investigatory techniques and the interpretation of parameter sensitivity analysis. The role of sensitivity analysis in validating and verifying models, in identifying relative parameter influence, and in estimating errors in model behavior due to uncertainty in input data is presented. This analysis is valuable to the simulationist and the experimentalist in allocating resources for data collection. A method for reducing highly complex, nonlinear models to simple linear algebraic models that could be useful for making rapid, first-order calculations of system behavior is presented.
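The kind of parameter sensitivity analysis described can be sketched with a finite-difference normalized sensitivity coefficient on a logistic population model (a minimal example of our own, not the report's actual models; all parameter values are illustrative):

```python
def logistic(r, K, n0, t, dt=0.01):
    """Euler integration of the logistic model dn/dt = r*n*(1 - n/K)."""
    n = n0
    for _ in range(int(t / dt)):
        n += dt * r * n * (1.0 - n / K)
    return n

def normalized_sensitivity(f, params, name, rel_step=1e-3):
    """Normalized sensitivity S = (p / y) * dy/dp, estimated with a
    central finite difference on parameter `name`."""
    p = params[name]
    y = f(**params)
    hi = f(**{**params, name: p * (1.0 + rel_step)})
    lo = f(**{**params, name: p * (1.0 - rel_step)})
    dy_dp = (hi - lo) / (2.0 * p * rel_step)
    return p * dy_dp / y

params = {"r": 0.5, "K": 100.0, "n0": 5.0, "t": 4.0}
s_r = normalized_sensitivity(logistic, params, "r")  # growth-rate sensitivity
s_K = normalized_sensitivity(logistic, params, "K")  # carrying-capacity sensitivity
```

Ranking parameters by |S| is one simple way to decide where data-collection resources matter most, in the spirit of the survey.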

  11. Deployment Design of Wireless Sensor Network for Simple Multi-Point Surveillance of a Moving Target

    PubMed Central

    Tsukamoto, Kazuya; Ueda, Hirofumi; Tamura, Hitomi; Kawahara, Kenji; Oie, Yuji

    2009-01-01

    In this paper, we focus on the problem of tracking a moving target in a wireless sensor network (WSN) in which the capability of each sensor is kept relatively limited so that large-scale WSNs can be constructed at a reasonable cost. We first propose two simple multi-point surveillance schemes for a moving target in a WSN and demonstrate that one of the schemes can achieve high tracking probability with low power consumption. In addition, we examine the relationship between tracking probability and sensor density through simulations, and then derive an approximate expression representing the relationship. Based on these results, we present guidelines for sensor density, tracking probability, and the number of monitoring sensors that satisfy a variety of application demands. PMID:22412326
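The density/tracking-probability relationship the authors derive can be illustrated with a simple coverage model (a Poisson-scatter approximation of our own for illustration, not the paper's derived expression):

```python
import math

def tracking_probability(density, radius):
    """P(at least one sensor lies within sensing range of the target),
    assuming sensors scattered as a 2-D spatial Poisson process with the
    given density (sensors per unit area): 1 - exp(-density * pi * r^2)."""
    return 1.0 - math.exp(-density * math.pi * radius ** 2)

def density_for(p_target, radius):
    """Invert the relation: the density needed to reach p_target."""
    return -math.log(1.0 - p_target) / (math.pi * radius ** 2)
```

Inverting the relation, as in `density_for`, is the kind of guideline the abstract describes: given an application's required tracking probability, read off the sensor density to deploy.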

  12. Impact vaporization: Late time phenomena from experiments

    NASA Technical Reports Server (NTRS)

    Schultz, P. H.; Gault, D. E.

    1987-01-01

    While simple airflow produced by the outward movement of the ejecta curtain can be scaled to large dimensions, the interaction between an impact-vaporized component and the ejecta curtain is more complicated. The goal of these experiments was to examine such interaction in a real system involving crater growth, ejection of material, two-phase mixtures of gas and dust, and strong pressure gradients. The results will be complemented by theoretical studies at laboratory scales in order to separate the various parameters for planetary scale processes. These experiments prompt, however, the following conclusions that may have relevance at broader scales. First, under near vacuum or low atmospheric pressures, an expanding vapor cloud scours the surrounding surface in advance of arriving ejecta. Second, the effect of early-time vaporization is relatively unimportant at late times. Third, the overpressure created within the crater cavity by significant vaporization results in increased cratering efficiency and larger aspect ratios.

  13. Short term evolution of coronal hole boundaries

    NASA Technical Reports Server (NTRS)

    Nolte, J. T.; Krieger, A. S.; Solodyna, C. V.

    1978-01-01

    The evolution of coronal hole boundary positions on a time scale of approximately 1 day is studied on the basis of an examination of all coronal holes observed by Skylab from May to November 1973. It is found that a substantial fraction (an average of 38%) of all coronal hole boundaries shifted by at least 1 deg heliocentric in the course of a day. Most (70%) of these changes were on a relatively small scale (less than 3 times the supergranulation cell size), but a significant fraction occurred as discrete events on a much larger scale. The large-scale shifts in the boundary locations involved changes in X-ray emission from these areas of the sun. There were generally more changes in the boundaries of the most rapidly evolving holes, but no simple relationship was found between the amount of change and the rate of hole growth or decay.

  14. Activity affects intraspecific body-size scaling of metabolic rate in ectothermic animals.

    PubMed

    Glazier, Douglas Stewart

    2009-10-01

    Metabolic rate is commonly thought to scale with body mass (M) to the 3/4 power. However, the metabolic scaling exponent (b) may vary with activity state, as has been shown chiefly for interspecific relationships. Here I use a meta-analysis of literature data to test whether b changes with activity level within species of ectothermic animals. Data for 19 species show that b is usually higher during active exercise (mean +/- 95% confidence limits = 0.918 +/- 0.038) than during rest (0.768 +/- 0.069). This significant upward shift in b to near 1 is consistent with the metabolic level boundaries hypothesis, which predicts that maximal metabolic rate during exercise should be chiefly influenced by volume-related muscular power production (scaling as M^1). This dependence of b on activity level does not appear to be a simple temperature effect because body temperature in ectotherms changes very little during exercise.
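The exponent b discussed here is the slope of a log-log regression of metabolic rate on body mass; a minimal sketch of that fit (synthetic data generated with b = 0.75, not Glazier's meta-analysis dataset):

```python
import math

def scaling_exponent(masses, rates):
    """Fit log(rate) = log(a) + b*log(mass) by ordinary least squares
    and return the exponent b (the log-log slope)."""
    xs = [math.log(m) for m in masses]
    ys = [math.log(r) for r in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Synthetic data obeying rate = mass^0.75 exactly recovers b = 0.75.
masses = [1.0, 2.0, 5.0, 10.0, 50.0]
rates = [m ** 0.75 for m in masses]
b = scaling_exponent(masses, rates)
```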

  15. Simple scale interpolator facilitates reading of graphs

    NASA Technical Reports Server (NTRS)

    Fazio, A.; Henry, B.; Hood, D.

    1966-01-01

    Set of cards with scale divisions and a scale finder permits accurate reading of the coordinates of points on linear or logarithmic graphs plotted on rectangular grids. The set contains 34 different scales for linear plotting and 28 single cycle scales for log plots.

  16. A surprisingly simple correlation between the classical and quantum structural networks in liquid water

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamm, Peter; Fanourgakis, George S.; Xantheas, Sotiris S.

    Nuclear quantum effects in liquid water have profound implications for several of its macroscopic properties related to structure, dynamics, spectroscopy and transport. Although several of water's macroscopic properties can be reproduced by classical descriptions of the nuclei using potentials effectively parameterized for a narrow range of its phase diagram, a proper account of the nuclear quantum effects is required in order to ensure that the underlying molecular interactions are transferable across a wide temperature range covering different regions of that diagram. When performing an analysis of the hydrogen-bonded structural networks in liquid water resulting from the classical (class.) and quantum (q.m.) descriptions of the nuclei with the transferable, flexible, polarizable TTM3-F interaction potential, we found that the two results can be superimposed over the temperature range T = 270-350 K using a surprisingly simple linear scaling of the two temperatures according to T(q.m.) = a·T(class.) − ΔT, where a = 1.2 and ΔT = 51 K. The linear scaling and constant shift of the temperature scale can be considered as a generalization of the previously reported temperature shifts (corresponding to structural changes and the melting T) induced by quantum effects in liquid water.
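The reported linear mapping between the classical and quantum temperature scales (slope 1.2, 51 K shift) amounts to a one-line transformation; a sketch, with the inverse added for convenience (the inverse is our own trivial rearrangement):

```python
def quantum_equiv_temperature(t_class_k, a=1.2, shift_k=51.0):
    """T(q.m.) = a * T(class.) - shift, per the reported fit
    (a = 1.2, shift = 51 K)."""
    return a * t_class_k - shift_k

def classical_equiv_temperature(t_qm_k, a=1.2, shift_k=51.0):
    """Inverse mapping: T(class.) = (T(q.m.) + shift) / a."""
    return (t_qm_k + shift_k) / a
```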

  17. SPARK: A Framework for Multi-Scale Agent-Based Biomedical Modeling.

    PubMed

    Solovyev, Alexey; Mikheev, Maxim; Zhou, Leming; Dutta-Moscato, Joyeeta; Ziraldo, Cordelia; An, Gary; Vodovotz, Yoram; Mi, Qi

    2010-01-01

    Multi-scale modeling of complex biological systems remains a central challenge in the systems biology community. A method of dynamic knowledge representation known as agent-based modeling enables the study of higher level behavior emerging from discrete events performed by individual components. With the advancement of computer technology, agent-based modeling has emerged as an innovative technique to model the complexities of systems biology. In this work, the authors describe SPARK (Simple Platform for Agent-based Representation of Knowledge), a framework for agent-based modeling specifically designed for systems-level biomedical model development. SPARK is a stand-alone application written in Java. It provides a user-friendly interface, and a simple programming language for developing Agent-Based Models (ABMs). SPARK has the following features specialized for modeling biomedical systems: 1) continuous space that can simulate real physical space; 2) flexible agent size and shape that can represent the relative proportions of various cell types; 3) multiple spaces that can concurrently simulate and visualize multiple scales in biomedical models; 4) a convenient graphical user interface. Existing ABMs of diabetic foot ulcers and acute inflammation were implemented in SPARK. Models of identical complexity were run in both NetLogo and SPARK; the SPARK-based models ran two to three times faster.

  18. Seismic waves and earthquakes in a global monolithic model

    NASA Astrophysics Data System (ADS)

    Roubíček, Tomáš

    2018-03-01

    The philosophy that a single "monolithic" model can "asymptotically" replace and couple in a simple elegant way several specialized models relevant on various Earth layers is presented and, in special situations, also rigorously justified. In particular, global seismicity and tectonics are coupled to capture, e.g., (here by a simplified model) ruptures of lithospheric faults generating seismic waves which then propagate through the solid-like mantle and inner core both as shear (S) or pressure (P) waves, while S-waves are suppressed in the fluidic outer core and also in the oceans. The "monolithic-type" models have the capacity to describe all the mentioned features globally in a unified way, together with the corresponding interfacial conditions implicitly involved, only when their parameters are scaled appropriately in different Earth's layers. Coupling of seismic waves with seismic sources due to tectonic events is thus an automatic side effect. The global ansatz is here based, rather for an illustration, only on a relatively simple Jeffreys' viscoelastic damageable material at small strains whose various scaling (limits) can lead to Boger's viscoelastic fluid or even to purely elastic (inviscid) fluid. Self-induced gravity field, Coriolis, centrifugal, and tidal forces are counted in our global model as well. The rigorous mathematical analysis as far as the existence of solutions, convergence of the mentioned scalings, and energy conservation is briefly presented.

  19. Properties of knotted ring polymers. I. Equilibrium dimensions.

    PubMed

    Mansfield, Marc L; Douglas, Jack F

    2010-07-28

    We report calculations on three classes of knotted ring polymers: (1) simple-cubic lattice self-avoiding rings (SARs), (2) "true" theta-state rings, i.e., SARs generated on the simple-cubic lattice with an attractive nearest-neighbor contact potential (theta-SARs), and (3) ideal, Gaussian rings. Extrapolations to large polymerization index N imply knot localization in all three classes of chains. Extrapolations of our data are also consistent with conjectures found in the literature which state that (1) R_g → A·N^ν asymptotically for ensembles of random knots restricted to any particular knot state, including the unknot; (2) A is universal across knot types for any given class of flexible chains; and (3) ν is equal to the standard self-avoiding walk (SAW) exponent (≅ 0.588) for all three classes of chains (SARs, theta-SARs, and ideal rings). However, current computer technology is inadequate to directly sample the asymptotic domain, so that we remain in a crossover scaling regime for all accessible values of N. We also observe that R_g ~ p^(-0.27), where p is the "rope length" of the maximally inflated knot. This scaling relation holds in the crossover regime, but we argue that it is unlikely to extend into the asymptotic scaling regime where knots become localized.

  20. Understanding quantum tunneling using diffusion Monte Carlo simulations

    NASA Astrophysics Data System (ADS)

    Inack, E. M.; Giudici, G.; Parolini, T.; Santoro, G.; Pilati, S.

    2018-03-01

    In simple ferromagnetic quantum Ising models characterized by an effective double-well energy landscape, the characteristic tunneling time of path-integral Monte Carlo (PIMC) simulations has been shown to scale as the incoherent quantum-tunneling time, i.e., as 1/Δ², where Δ is the tunneling gap. Since incoherent quantum tunneling is employed by quantum annealers (QAs) to solve optimization problems, this result suggests that there is no quantum advantage in using QAs with respect to quantum Monte Carlo (QMC) simulations. A counterexample is the recently introduced shamrock model (Andriyash and Amin, arXiv:1703.09277), where topological obstructions cause an exponential slowdown of the PIMC tunneling dynamics with respect to incoherent quantum tunneling, leaving open the possibility for potential quantum speedup, even for stoquastic models. In this work we investigate the tunneling time of projective QMC simulations based on the diffusion Monte Carlo (DMC) algorithm without guiding functions, showing that it scales as 1/Δ, i.e., even more favorably than the incoherent quantum-tunneling time, both in a simple ferromagnetic system and in the more challenging shamrock model. However, a careful comparison between the DMC ground-state energies and the exact solution available for the transverse-field Ising chain indicates an exponential scaling of the computational cost required to keep a fixed relative error as the system size increases.

  1. Photosynthesis and stomatal conductance related to reflectance on the canopy scale

    NASA Technical Reports Server (NTRS)

    Verma, S. B.; Sellers, P. J.; Walthall, C. L.; Hall, F. G.; Kim, J.; Goetz, S. J.

    1993-01-01

    Field measurements of carbon dioxide and water vapor fluxes were analyzed in conjunction with reflectances obtained from a helicopter-mounted Modular Multiband Radiometer at a grassland study site during the First International Satellite Land Surface Climatology Project Field Experiment. These measurements are representative of the canopy scale and were made over a range of meteorological and soil moisture conditions during different stages in the annual life cycle of the prairie vegetation, and thus provide a good basis for investigating hypotheses/relationships potentially useful in remote sensing applications. We tested the hypothesis (Sellers, 1987) that the simple ratio vegetation index should be near-linearly related to the derivatives of the unstressed canopy stomatal conductance and the unstressed canopy photosynthesis with respect to photosynthetically active radiation. Even though there is some scatter in our data, the results seem to support this hypothesis.

  2. Biology meets physics: Reductionism and multi-scale modeling of morphogenesis.

    PubMed

    Green, Sara; Batterman, Robert

    2017-02-01

    A common reductionist assumption is that macro-scale behaviors can be described "bottom-up" if only sufficient details about lower-scale processes are available. The view that an "ideal" or "fundamental" physics would be sufficient to explain all macro-scale phenomena has been met with criticism from philosophers of biology. Specifically, scholars have pointed to the impossibility of deducing biological explanations from physical ones, and to the irreducible nature of distinctively biological processes such as gene regulation and evolution. This paper takes a step back in asking whether bottom-up modeling is feasible even when modeling simple physical systems across scales. By comparing examples of multi-scale modeling in physics and biology, we argue that the "tyranny of scales" problem presents a challenge to reductive explanations in both physics and biology. The problem refers to the scale-dependency of physical and biological behaviors that forces researchers to combine different models relying on different scale-specific mathematical strategies and boundary conditions. Analyzing the ways in which different models are combined in multi-scale modeling also has implications for the relation between physics and biology. Contrary to the assumption that physical science approaches provide reductive explanations in biology, we exemplify how inputs from physics often reveal the importance of macro-scale models and explanations. We illustrate this through an examination of the role of biomechanical modeling in developmental biology. In such contexts, the relation between models at different scales and from different disciplines is neither reductive nor completely autonomous, but interdependent. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Mirror, mirror on the wall…: self-perception of facial beauty versus judgement by others.

    PubMed

    Springer, I N; Wiltfang, J; Kowalski, J T; Russo, P A J; Schulze, M; Becker, S; Wolfart, S

    2012-12-01

    In 1878, Margaret Wolfe Hungerford published a simple but insightful phrase in her novel 'Molly Bawn' that was to be quoted so often it has almost become cliché: "Beauty is in the eye of the beholder". While many questions regarding the perception and neural processing of facial attractiveness have been resolved, it became obvious to us that study designs have been principally based on either facial self-perception or perception by others. The relationship between these, however, remains both crucial and unknown. Standardized images were taken of 141 subjects. These 141 subjects were asked to complete the adjective mood scale (AMS) and to rank specific issues related to their looks on a visual analogue scale. The images were then shown to independent judges, who ranked the same issues related to the subjects' looks on a visual analogue scale. Our results provide proof of a strikingly simple observation: individuals perceive their own beauty to be greater than that expressed in the opinions of others (p < 0.001). This observation provides insight into our basic behavioural patterns and suggests that there are strong psychological mechanisms in humans supporting self-identification and thereby encouraging the self-confidence and resilience necessary to maintain one's social standing. While the psychological basis of self-confidence is multifactorial, our finding provides critical objective insight. We prove here for the first time that nothing more than the beauty of the beholder is in the eyes of the latter. Copyright © 2012 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.

  4. Inflation, quintessence, and the origin of mass

    NASA Astrophysics Data System (ADS)

    Wetterich, C.

    2015-08-01

    In a unified picture both inflation and present dynamical dark energy arise from the same scalar field. The history of the Universe describes a crossover from a scale invariant "past fixed point" where all particles are massless, to a "future fixed point" for which spontaneous breaking of the exact scale symmetry generates the particle masses. The cosmological solution can be extrapolated to the infinite past in physical time - the universe has no beginning. This is seen most easily in a frame where particle masses and the Planck mass are field-dependent and increase with time. In this "freeze frame" the Universe shrinks and heats up during radiation and matter domination. In the equivalent, but singular Einstein frame cosmic history finds the familiar big bang description. The vicinity of the past fixed point corresponds to inflation. It ends at a first stage of the crossover. A simple model with no more free parameters than ΛCDM predicts for the primordial fluctuations a relation between the tensor amplitude r and the spectral index n, r = 8.19 (1 - n) - 0.137. The crossover is completed by a second stage where the beyond-standard-model sector undergoes the transition to the future fixed point. The resulting increase of neutrino masses stops a cosmological scaling solution, relating the present dark energy density to the present neutrino mass. At present our simple model seems compatible with all observational tests. We discuss how the fixed points can be rooted within quantum gravity in a crossover between ultraviolet and infrared fixed points. Then quantum properties of gravity could be tested both by very early and late cosmology.
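The predicted relation between the tensor amplitude and the spectral index quoted in this abstract is a one-liner; a sketch (the sample n value below is illustrative, not taken from the paper):

```python
def tensor_to_scalar_ratio(n_s):
    """Predicted relation quoted in the abstract: r = 8.19*(1 - n) - 0.137."""
    return 8.19 * (1.0 - n_s) - 0.137

# For an illustrative spectral index n = 0.965 the model predicts
# r = 8.19 * 0.035 - 0.137.
r = tensor_to_scalar_ratio(0.965)
```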

  5. Improving catchment discharge predictions by inferring flow route contributions from a nested-scale monitoring and model setup

    NASA Astrophysics Data System (ADS)

    van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.; van Geer, F. C.; Torfs, P. J. J. F.; de Louw, P. G. B.

    2011-03-01

    Identifying effective measures to reduce nutrient loads of headwaters in lowland catchments requires a thorough understanding of flow routes of water and nutrients. In this paper we assess the value of nested-scale discharge and groundwater level measurements for the estimation of flow route volumes and for predictions of catchment discharge. In order to relate field-site measurements to the catchment-scale an upscaling approach is introduced that assumes that scale differences in flow route fluxes originate from differences in the relationship between groundwater storage and the spatial structure of the groundwater table. This relationship is characterized by the Groundwater Depth Distribution (GDD) curve that relates spatial variation in groundwater depths to the average groundwater depth. The GDD-curve was measured for a single field site (0.009 km2) and simple process descriptions were applied to relate groundwater levels to flow route discharges. This parsimonious model could accurately describe observed storage, tube drain discharge, overland flow and groundwater flow simultaneously with Nash-Sutcliffe coefficients exceeding 0.8. A probabilistic Monte Carlo approach was applied to upscale field-site measurements to catchment scales by inferring scale-specific GDD-curves from the hydrographs of two nested catchments (0.4 and 6.5 km2). The estimated contribution of tube drain effluent (a dominant source for nitrates) decreased with increasing scale from 76-79% at the field-site to 34-61% and 25-50% for both catchment scales. These results were validated by demonstrating that a model conditioned on nested-scale measurements improves simulations of nitrate loads and predictions of extreme discharges during validation periods compared to a model that was conditioned on catchment discharge only.

  6. Scaling laws of passive-scalar diffusion in the interstellar medium

    NASA Astrophysics Data System (ADS)

    Colbrook, Matthew J.; Ma, Xiangcheng; Hopkins, Philip F.; Squire, Jonathan

    2017-05-01

    Passive-scalar mixing (metals, molecules, etc.) in the turbulent interstellar medium (ISM) is critical for abundance patterns of stars and clusters, galaxy and star formation, and cooling from the circumgalactic medium. However, the fundamental scaling laws remain poorly understood in the highly supersonic, magnetized, shearing regime relevant for the ISM. We therefore study the full scaling laws governing passive-scalar transport in idealized simulations of supersonic turbulence. Using simple phenomenological arguments for the variation of diffusivity with scale based on Richardson diffusion, we propose a simple fractional diffusion equation to describe the turbulent advection of an initial passive scalar distribution. These predictions agree well with the measurements from simulations, and vary with turbulent Mach number in the expected manner, remaining valid even in the presence of a large-scale shear flow (e.g. rotation in a galactic disc). The evolution of the scalar distribution is not the same as obtained using simple, constant 'effective diffusivity' as in Smagorinsky models, because the scale dependence of turbulent transport means an initially Gaussian distribution quickly develops highly non-Gaussian tails. We also emphasize that these are mean scalings that apply only to ensemble behaviours (assuming many different, random scalar injection sites): individual Lagrangian 'patches' remain coherent (poorly mixed) and simply advect for a large number of turbulent flow-crossing times.

  7. A rapid and robust gradient measurement technique using dynamic single-point imaging.

    PubMed

    Jang, Hyungseok; McMillan, Alan B

    2017-09-01

    We propose a new gradient measurement technique based on dynamic single-point imaging (SPI), which allows simple, rapid, and robust measurement of k-space trajectory. To enable gradient measurement, we utilize the variable field-of-view (FOV) property of dynamic SPI, which is dependent on gradient shape. First, one-dimensional (1D) dynamic SPI data are acquired from a targeted gradient axis, and then relative FOV scaling factors between 1D images or k-spaces at varying encoding times are found. These relative scaling factors are the relative k-space position that can be used for image reconstruction. The gradient measurement technique also can be used to estimate the gradient impulse response function for reproducible gradient estimation as a linear time invariant system. The proposed measurement technique was used to improve reconstructed image quality in 3D ultrashort echo, 2D spiral, and multi-echo bipolar gradient-echo imaging. In multi-echo bipolar gradient-echo imaging, measurement of the k-space trajectory allowed the use of a ramp-sampled trajectory for improved acquisition speed (approximately 30%) and more accurate quantitative fat and water separation in a phantom. The proposed dynamic SPI-based method allows fast k-space trajectory measurement with a simple implementation and no additional hardware for improved image quality. Magn Reson Med 78:950-962, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  8. Assessment of the spatial scaling behaviour of floods in the United Kingdom

    NASA Astrophysics Data System (ADS)

    Formetta, Giuseppe; Stewart, Elizabeth; Bell, Victoria

    2017-04-01

    Floods are among the most dangerous natural hazards, causing loss of life and significant damage to private and public property. Regional flood-frequency analysis (FFA) methods are essential tools to assess the flood hazard and plan interventions for its mitigation. FFA methods are often based on the well-known index flood method that assumes the invariance of the coefficient of variation of floods with drainage area. This assumption is equivalent to the simple scaling or self-similarity assumption for peak floods, i.e. their spatial structure remains similar to itself, in a particular and relatively simple way, over a range of scales. Spatial scaling of floods has been evaluated at national scale for different countries such as Canada, the USA, and Australia. To our knowledge, such a study has not been conducted for the United Kingdom, even though the standard FFA method there is based on the index flood assumption. In this work we present an integrated approach to assess the spatial scaling behaviour of floods in the United Kingdom using three different methods: product moments (PM), probability weighted moments (PWM), and quantile analysis (QA). We analyse both instantaneous and daily annual observed maximum floods and performed our analysis both across the entire country and in its sub-climatic regions as defined in the Flood Studies Report (NERC, 1975). To evaluate the relationship between the k-th moments or quantiles and the drainage area we used both regression with area alone and multiple regression considering other explanatory variables to account for the geomorphology, amount of rainfall, and soil type of the catchments. The latter multiple regression approach was only recently demonstrated to be more robust than the traditional regression with area alone, which can lead to biased estimates of scaling exponents and misinterpretation of spatial scaling behaviour.
We tested our framework on almost 600 rural catchments in the UK, considered both as an entire region and split into 11 sub-regions with 50 catchments per region on average. Preliminary results from the three different spatial scaling methods are generally in agreement and indicate that: i) only some of the peak flow variability is explained by area alone (approximately 50% for the entire country and ranging between 40% and 70% for the sub-regions); ii) this percentage increases to 90% for the entire country and ranges between 80% and 95% for the sub-regions when the multiple regression is used; iii) the simple scaling hypothesis holds in all sub-regions with the exception of weak multi-scaling found in regions 2 (North), and 5 and 6 (South East). We hypothesize that these deviations can be explained by heterogeneity in large scale precipitation and by the influence of the soil type (predominantly chalk) on the flood formation process in regions 5 and 6.
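The product-moment (PM) test for simple scaling can be sketched as follows: regress the log of each sample moment of peak flow against log drainage area and check that the slopes grow linearly with moment order (synthetic data of our own; the study's actual analysis also uses PWM, quantile analysis, and multiple regression with additional catchment descriptors):

```python
import math

def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def moment_scaling_exponents(areas_km2, peaks_by_site, orders=(1, 2, 3)):
    """Slope theta(k) of log E[Q^k] vs log(area) for each moment order k.
    Under simple scaling, theta(k) is linear in k: theta(k) = k * theta(1)."""
    log_a = [math.log(a) for a in areas_km2]
    exponents = []
    for k in orders:
        log_mk = [math.log(sum(q ** k for q in qs) / len(qs))
                  for qs in peaks_by_site]
        exponents.append(ols_slope(log_a, log_mk))
    return exponents

# Synthetic catchments obeying Q = 2 * A^0.6 exactly, so theta(k) = 0.6*k.
areas = [10.0, 50.0, 200.0, 1000.0]
peaks = [[2.0 * a ** 0.6] for a in areas]
theta = moment_scaling_exponents(areas, peaks)
```

Departures of theta(k) from the straight line k·theta(1) are the signature of the weak multi-scaling reported for some sub-regions.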

  9. Scale invariance of temporal order discrimination using complex, naturalistic events

    PubMed Central

    Kwok, Sze Chai; Macaluso, Emiliano

    2015-01-01

    Recent demonstrations of scale invariance in cognitive domains prompted us to investigate whether a scale-free pattern might exist in retrieving the temporal order of events from episodic memory. We present four experiments using an encoding-retrieval paradigm with naturalistic stimuli (movies or video clips). Our studies show that temporal order judgement retrieval times were negatively correlated with the temporal separation between two events in the movie. This relation held, irrespective of whether temporal distances were on the order of tens of minutes (Exp 1−2) or just a few seconds (Exp 3−4). Using the SIMPLE model, we factored in the retention delays between encoding and retrieval (delays of 24 h, 15 min, 1.5–2.5 s, and 0.5 s for Exp 1–4, respectively) and computed a temporal similarity score for each trial. We found a positive relation between similarity and retrieval times; that is, the more temporally similar two events, the slower the retrieval of their temporal order. Using Bayesian analysis, we confirmed the equivalence of the RT/similarity relation across all experiments, which included a vast range of temporal distances and retention delays. These results provide evidence for scale invariance during the retrieval of temporal order of episodic memories. PMID:25909581
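The temporal similarity score used here can be sketched with the standard SIMPLE-model kernel, eta = exp(-c·|ln Ti − ln Tj|), where Ti and Tj are the retention delays of the two events (the parameter c below is illustrative, not the value fitted to these experiments):

```python
import math

def simple_similarity(t_i_s, t_j_s, c=1.0):
    """SIMPLE-style temporal similarity between two events whose
    retention delays (seconds since each event) are t_i_s and t_j_s:
    eta = exp(-c * |ln Ti - ln Tj|)."""
    return math.exp(-c * abs(math.log(t_i_s) - math.log(t_j_s)))

# Pairs separated by the same *ratio* of delays are equally similar at any
# absolute delay -- the scale-invariance property the experiments probe.
near = simple_similarity(60.0, 120.0)        # 1 vs 2 minutes ago
far = simple_similarity(86400.0, 172800.0)   # 1 vs 2 days ago
```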

  10. Dynamical Dark Matter from thermal freeze-out

    NASA Astrophysics Data System (ADS)

    Dienes, Keith R.; Fennick, Jacob; Kumar, Jason; Thomas, Brooks

    2018-03-01

    In the Dynamical Dark-Matter (DDM) framework, the dark sector comprises a large number of constituent dark particles whose individual masses, lifetimes, and cosmological abundances obey specific scaling relations with respect to each other. In particular, the most natural versions of this framework tend to require a spectrum of cosmological abundances which scale inversely with mass, so that dark-sector states with larger masses have smaller abundances. Thus far, DDM model-building has primarily relied on nonthermal mechanisms for abundance generation such as misalignment production, since these mechanisms give rise to abundances that have this property. By contrast, the simplest versions of thermal freeze-out tend to produce abundances that increase, rather than decrease, with the mass of the dark-matter component. In this paper, we demonstrate that there exist relatively simple modifications of the traditional thermal freeze-out mechanism which "flip" the resulting abundance spectrum, producing abundances that scale inversely with mass. Moreover, we demonstrate that a far broader variety of scaling relations between lifetimes, abundances, and masses can emerge through thermal freeze-out than through the nonthermal mechanisms previously considered for DDM ensembles. The results of this paper thus extend the DDM framework into the thermal domain and essentially allow us to "design" our resulting DDM ensembles at will in order to realize a rich array of resulting dark-matter phenomenologies.

  11. Translation, cultural adaptation, cross-validation of the Turkish diabetes quality-of-life (DQOL) measure.

    PubMed

    Yildirim, Aysegul; Akinci, Fevzi; Gozu, Hulya; Sargin, Haluk; Orbay, Ekrem; Sargin, Mehmet

    2007-06-01

The aim of this study was to test the validity and reliability of the Turkish version of the diabetes quality of life (DQOL) questionnaire for use with patients with diabetes. The Turkish versions of the generic quality-of-life (QoL) scale 15D and the DQOL, together with socio-demographic and clinical parameter questionnaires, were administered to 150 patients with type 2 diabetes. Study participants were randomly sampled from the Endocrinology and Diabetes Outpatient Department of Dr. Lutfi Kirdar Kartal Education and Research Hospital in Istanbul, Turkey. The Cronbach alpha coefficient of the overall DQOL scale was 0.89; the coefficients for the subscales ranged from 0.80 to 0.94. Distress, discomfort and symptoms, depression, mobility, usual activities, and vitality on the 15D scale had statistically significant correlations with social/vocational worry and diabetes-related worry on the DQOL scale, indicating good convergent validity. Factor analysis identified four subscales: "satisfaction", "impact", "diabetes-related worry", and "social/vocational worry". Statistical analyses showed that the Turkish version of the DQOL is a valid and reliable instrument to measure disease-related QoL in patients with diabetes. It is a simple and quick screening tool, with an administration time of about 15 +/- 5.8 min, for measuring QoL in this population.

  12. Supersymmetry from typicality: TeV-scale gauginos and PeV-scale squarks and sleptons.

    PubMed

    Nomura, Yasunori; Shirai, Satoshi

    2014-09-12

    We argue that under a set of simple assumptions the multiverse leads to low-energy supersymmetry with the spectrum often called spread or minisplit supersymmetry: the gauginos are in the TeV region with the other superpartners 2 or 3 orders of magnitude heavier. We present a particularly simple realization of supersymmetric grand unified theory using this idea.

  13. Uncertainty analysis on simple mass balance model to calculate critical loads for soil acidity

    Treesearch

    Harbin Li; Steven G. McNulty

    2007-01-01

    Simple mass balance equations (SMBE) of critical acid loads (CAL) in forest soil were developed to assess potential risks of air pollutants to ecosystems. However, to apply SMBE reliably at large scales, SMBE must be tested for adequacy and uncertainty. Our goal was to provide a detailed analysis of uncertainty in SMBE so that sound strategies for scaling up CAL...

  14. Resonance Meeting, May 11-15, 1997, Asilomar Conference Center, Pacific Grove, CA. Volume 2. Transparencies

    DTIC Science & Technology

    1997-05-01

Contents include: ...air and in water (Brian T. Hefner and Phillip L. Marston, p. 340); material property measurements via GHz interferometry (H. Spetzler et al., p. 361); ...temperature scale (Michael R. Moldover, p. 464); cheap acoustic gas analyzers (Matthew Golden et al., p. 502); measurements of relaxation processes in gases (Henry E...). From the text: ...expected behavior based on measurements of earth materials. Birch (4) first proposed a simple linear relation between compressional velocity and...

  15. The Theory and Practice of Bayesian Image Labeling

    DTIC Science & Technology

    1988-08-01

...simple. The intensity images are the results of many confounding factors - lighting, surface geometry, surface reflectance, and camera characteristics... related through the geometry of the surfaces in view. They are conditionally independent in the following sense: P(g, O | f) = P(g | f) P(O | f) (6.6a). ...different spatial resolution and projection geometry, or using DOG-type filters of various scales. We believe that the success of visual integration at...

  16. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2007-01-01

    Sustained increases in energy prices have focused attention on gas resources in low permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are large. Planning and development decisions for extraction of such resources must be area-wide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm the decision to enter such plays depends on reconnaissance level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional scale cost functions. The context of the worked example is the Devonian Antrim shale gas play, Michigan Basin. One finding relates to selection of the resource prediction model to be used with economic models. Models which can best predict aggregate volume over larger areas (many hundreds of sites) may lose granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined by extraneous factors. The paper also shows that when these simple prediction models are used to strategically order drilling prospects, the gain in gas volume over volumes associated with simple random site selection amounts to 15 to 20 percent. It also discusses why the observed benefit of updating predictions from results of new drilling, as opposed to following static predictions, is somewhat smaller. 
Copyright 2007, Society of Petroleum Engineers.

  17. General relativistic screening in cosmological simulations

    NASA Astrophysics Data System (ADS)

    Hahn, Oliver; Paranjape, Aseem

    2016-10-01

We revisit the issue of interpreting the results of large volume cosmological simulations in the context of large-scale general relativistic effects. We look for simple modifications to the nonlinear evolution of the gravitational potential ψ that lead on large scales to the correct, fully relativistic description of density perturbations in the Newtonian gauge. We note that the relativistic constraint equation for ψ can be cast as a diffusion equation, with a diffusion length scale determined by the expansion of the Universe. Exploiting the weak time evolution of ψ in all regimes of interest, this equation can be further accurately approximated as a Helmholtz equation, with an effective relativistic "screening" scale ℓ related to the Hubble radius. We demonstrate that it is thus possible to carry out N-body simulations in the Newtonian gauge by replacing Poisson's equation with this Helmholtz equation, involving a trivial change in the Green's function kernel. Our results also motivate a simple, approximate (but very accurate) gauge transformation, δ_N(k) ≈ δ_sim(k) × (k^2 + ℓ^-2)/k^2, to convert the density field δ_sim of standard collisionless N-body simulations (initialized in the comoving synchronous gauge) into the Newtonian gauge density δ_N at arbitrary times. A similar conversion can also be written in terms of particle positions. Our results can be interpreted in terms of a Jeans stability criterion induced by the expansion of the Universe. The appearance of the screening scale ℓ in the evolution of ψ, in particular, leads to a natural resolution of the "Jeans swindle" in the presence of superhorizon modes.
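The quoted gauge conversion is a simple multiplication in Fourier space. A minimal sketch with NumPy, applying δ_N(k) = δ_sim(k) × (k² + ℓ⁻²)/k² to a gridded density field (the grid size and screening scale below are arbitrary illustrations):

```python
import numpy as np

def newtonian_gauge_density(delta_sim, box_size, ell):
    """Convert a synchronous-gauge N-body density field to the Newtonian
    gauge via delta_N(k) = delta_sim(k) * (k^2 + 1/ell^2) / k^2."""
    n = delta_sim.shape[0]
    k1d = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)   # physical wavenumbers
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    delta_k = np.fft.fftn(delta_sim)
    factor = np.ones_like(k2)
    nonzero = k2 > 0                                       # leave the k=0 mode untouched
    factor[nonzero] = (k2[nonzero] + ell**-2) / k2[nonzero]
    return np.real(np.fft.ifftn(delta_k * factor))
```

For a single plane-wave mode of wavenumber k, the output field is just the input scaled by (k² + ℓ⁻²)/k², so the correction is largest for modes approaching the screening scale.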

  18. STAR FORMATION LAWS: THE EFFECTS OF GAS CLOUD SAMPLING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Calzetti, D.; Liu, G.; Koda, J., E-mail: calzetti@astro.umass.edu

Recent observational results indicate that the functional shape of the spatially resolved star formation-molecular gas density relation depends on the spatial scale considered. These results may indicate a fundamental role of sampling effects on scales that are typically only a few times larger than those of the largest molecular clouds. To investigate the impact of this effect, we construct simple models for the distribution of molecular clouds in a typical star-forming spiral galaxy and, assuming a power-law relation between star formation rate (SFR) and cloud mass, explore a range of input parameters. We confirm that the slope and the scatter of the simulated SFR-molecular gas surface density relation depend on the size of the sub-galactic region considered, due to stochastic sampling of the molecular cloud mass function, and the effect is larger for steeper relations between SFR and molecular gas. There is a general trend for all slope values to tend to ~unity for region sizes larger than 1-2 kpc, irrespective of the input SFR-cloud relation. The region size of 1-2 kpc corresponds to the area where the cloud mass function becomes fully sampled. We quantify the effects of selection biases in data tracing the SFR, either as thresholds (i.e., clouds smaller than a given mass value do not form stars) or as backgrounds (e.g., diffuse emission unrelated to current star formation is counted toward the SFR). Apparently discordant observational results are brought into agreement via this simple model, and the comparison of our simulations with data for a few galaxies supports a steep (>1) power-law index between SFR and molecular gas.
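The stochastic-sampling effect described here can be reproduced with a toy Monte Carlo, not the authors' code: draw cloud masses from a truncated power-law mass function, assign each cloud SFR ∝ M^α, and fit the log SFR vs. log gas relation across many regions. The mass range, mass-function slope, and two-dex spread in region density are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_cloud_masses(n, m_min=1e4, m_max=1e6, beta=2.0):
    """Draw n cloud masses from a truncated power law dN/dM ~ M**-beta
    via inverse-CDF sampling."""
    u = rng.random(n)
    a = 1.0 - beta
    return (m_min**a + u * (m_max**a - m_min**a)) ** (1.0 / a)

def fitted_slope(mean_clouds, alpha=1.5, n_regions=3000):
    """Slope of the log SFR vs. log gas relation across simulated regions.

    Each region's expected cloud count is scattered over two dex around
    mean_clouds (mimicking varying gas surface density); each cloud
    contributes SFR = M**alpha."""
    gas = np.empty(n_regions)
    sfr = np.empty(n_regions)
    for i in range(n_regions):
        lam = mean_clouds * 10.0 ** rng.uniform(-1.0, 1.0)
        m = sample_cloud_masses(rng.poisson(lam) + 1)  # at least one cloud
        gas[i] = m.sum()
        sfr[i] = (m ** alpha).sum()
    return np.polyfit(np.log10(gas), np.log10(sfr), 1)[0]
```

Regions containing only a handful of clouds yield a fitted slope noticeably steeper than unity, while regions that fully sample the mass function drift toward slope ≈ 1, mirroring the scale dependence the abstract describes.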

  19. Self-folding and aggregation of amyloid nanofibrils

    NASA Astrophysics Data System (ADS)

    Paparcone, Raffaella; Cranford, Steven W.; Buehler, Markus J.

    2011-04-01

Amyloids are highly organized protein filaments, rich in β-sheet secondary structures, that self-assemble to form dense plaques in brain tissues affected by severe neurodegenerative disorders (e.g. Alzheimer's Disease). Identified as natural functional materials in bacteria, in addition to their remarkable mechanical properties, amyloids have also been proposed as a platform for novel biomaterials in nanotechnology applications including nanowires, liquid crystals, scaffolds and thin films. Despite recent progress in understanding amyloid structure and behavior, the latent self-assembly mechanism and the underlying adhesion forces that drive the aggregation process remain poorly understood. On the basis of previous full atomistic simulations, here we report a simple coarse-grain model to analyze the competition between adhesive forces and elastic deformation of amyloid fibrils. We use a simple model system to investigate self-assembly mechanisms of fibrils, focusing on the formation of self-folded nanorackets and nanorings, and thereby address a critical issue in linking the biochemical (Angstrom) to the micrometre scales relevant for larger-scale states of functional amyloid materials. We investigate the effect of varying the interfibril adhesion energy on the structure and stability of self-folded nanorackets and nanorings and demonstrate that these aggregated amyloid fibrils are stable in such states even when the fibril-fibril interaction is relatively weak, given that the constituting amyloid fibril length exceeds a critical fibril length-scale of several hundred nanometres. We further present a simple approach to directly determine the interfibril adhesion strength from geometric measures.
In addition to providing insight into the physics of aggregation of amyloid fibrils, our model enables the analysis of large-scale amyloid plaques and presents a new method for the estimation and engineering of the adhesive forces responsible for the self-assembly of amyloid nanostructures, filling a gap that previously existed between full atomistic simulations of primarily ultra-short fibrils and much larger micrometre-scale amyloid aggregates. Via direct simulation of large-scale amyloid aggregates consisting of hundreds of fibrils, we demonstrate that the fibril length has a profound impact on their structure and mechanical properties, with the critical fibril length-scale derived from our analysis of self-folded nanorackets and nanorings defining the structure of amyloid aggregates. A multi-scale modeling approach as used here, bridging the scales from Angstroms to micrometres, opens a wide range of possible nanotechnology applications by presenting a holistic framework that balances the mechanical properties of individual fibrils, hierarchical self-assembly, and the adhesive forces determining their stability, to facilitate the design of de novo amyloid materials.

  20. A simple method for estimating the size of nuclei on fractal surfaces

    NASA Astrophysics Data System (ADS)

    Zeng, Qiang

    2017-10-01

Determining the size of nuclei on complex surfaces remains a major challenge in biological, materials, and chemical engineering. Here the author reports a simple method to estimate the size of nuclei in contact with complex (fractal) surfaces. The approach is based on the assumptions of contact-area proportionality for determining nucleation density and of scaling congruence between nuclei and surfaces for identifying contact regimes. Three different regimes govern the equations for estimating the nucleation site density. Sufficiently large nuclei eliminate the effect of the fractal structure, while sufficiently small nuclei make the nucleation site density independent of the fractal parameters. Only when the nuclei match the fractal scales is the nucleation site density associated with both the fractal parameters and the nucleus size in a coupled manner. The method was validated against experimental data reported in the literature. It may provide an effective way to estimate the size of nuclei on fractal surfaces, opening a number of promising applications in related fields.

  1. The dependence of the strength and thickness of field-aligned currents on solar wind and ionospheric parameters

    PubMed Central

    Johnson, Jay R.; Wing, Simon

    2017-01-01

Sheared plasma flows at the low-latitude boundary layer (LLBL) correlate well with early afternoon auroral arcs and upward field-aligned currents. We present a simple analytic model that relates solar wind and ionospheric parameters to the strength and thickness of field-aligned currents (Λ) in a region of sheared velocity, such as the LLBL. We compare the predictions of the model with DMSP observations and find remarkably good scaling of the upward region 1 currents with solar wind and ionospheric parameters in a region located at the boundary layer or on open field lines at 1100–1700 magnetic local time. We demonstrate that Λ ~ n_sw^-0.5 and Λ ~ L when Λ/L < 5, where L is the auroral electrostatic scale length. The sheared boundary layer thickness (Δm) is inferred to be around 3000 km and appears to have only a weak dependence on Vsw. J‖ depends on Δm, Σp, nsw, and Vsw. The analytic model provides a simple way to organize data and to infer boundary layer structures from ionospheric data. PMID:29057194

  2. Self-organizing biopsychosocial dynamics and the patient-healer relationship.

    PubMed

    Pincus, David

    2012-01-01

The patient-healer relationship is an area of increasing interest for complementary and alternative medicine (CAM) researchers. This focus on the interpersonal context of treatment is not surprising, as dismantling studies, clinical trials and other linear research designs continually point toward the critical role of context and the broadband biopsychosocial nature of therapeutic responses to CAM. Unfortunately, the same traditional research models and methods that fail to find simple and specific treatment-outcome relations are similarly failing to find simple and specific mechanisms to explain how interpersonal processes influence patient outcomes. This paper presents an overview of some of the key models and methods from nonlinear dynamical systems that are better equipped for empirical testing of CAM outcomes on broadband biopsychosocial processes. Suggestions are made to assist CAM researchers in modeling the interactions among key processes across biopsychosocial scales: empathy, intra-psychic conflict, physiological arousal, and leukocyte telomerase activity. Finally, some speculations are made regarding the possibility for deeper cross-scale information exchange involving quantum temporal nonlocality. Copyright © 2012 S. Karger AG, Basel.

  3. Redshift-space equal-time angular-averaged consistency relations of the gravitational dynamics

    NASA Astrophysics Data System (ADS)

    Nishimichi, Takahiro; Valageas, Patrick

    2015-12-01

We present the redshift-space generalization of the equal-time angular-averaged consistency relations between (ℓ+n)- and n-point polyspectra (i.e., the Fourier counterparts of correlation functions) of the cosmological matter density field. Focusing on the case of the ℓ=1 large-scale mode and n small-scale modes, we use an approximate symmetry of the gravitational dynamics to derive explicit expressions that hold beyond the perturbative regime, including both the large-scale Kaiser effect and the small-scale fingers-of-god effects. We explicitly check these relations, both perturbatively, for the lowest-order version that applies to the bispectrum, and nonperturbatively, for all orders but for the one-dimensional dynamics. Using a large ensemble of N-body simulations, we find that our relation on the bispectrum in the squeezed limit (i.e., the limit where one wave number is much smaller than the other two) is valid to better than 20% up to 1 h Mpc^-1, for both the monopole and quadrupole at z = 0.35, in a ΛCDM cosmology. Additional simulations done for the Einstein-de Sitter background suggest that these discrepancies mainly come from the breakdown of the approximate symmetry of the gravitational dynamics. For practical applications, we introduce a simple ansatz to estimate the new derivative terms in the relation using only observables. Although the relation holds less well after using this ansatz, we can still recover it within 20% up to 1 h Mpc^-1 at z = 0.35 for the monopole. On larger scales, k = 0.2 h Mpc^-1, it still holds within the statistical accuracy of idealized simulations of volume ~8 h^-3 Gpc^3 without shot-noise error.

  4. Time-dependent scaling patterns in high frequency financial data

    NASA Astrophysics Data System (ADS)

    Nava, Noemi; Di Matteo, Tiziana; Aste, Tomaso

    2016-10-01

    We measure the influence of different time-scales on the intraday dynamics of financial markets. This is obtained by decomposing financial time series into simple oscillations associated with distinct time-scales. We propose two new time-varying measures of complexity: 1) an amplitude scaling exponent and 2) an entropy-like measure. We apply these measures to intraday, 30-second sampled prices of various stock market indices. Our results reveal intraday trends where different time-horizons contribute with variable relative amplitudes over the course of the trading day. Our findings indicate that the time series we analysed have a non-stationary multifractal nature with predominantly persistent behaviour at the middle of the trading session and anti-persistent behaviour at the opening and at the closing of the session. We demonstrate that these patterns are statistically significant, robust, reproducible and characteristic of each stock market. We argue that any modelling, analytics or trading strategy must take into account these non-stationary intraday scaling patterns.

  5. Three-Dimensional CdS/Au Butterfly Wing Scales with Hierarchical Rib Structures for Plasmon-Enhanced Photocatalytic Hydrogen Production.

    PubMed

    Fang, Jing; Gu, Jiajun; Liu, Qinglei; Zhang, Wang; Su, Huilan; Zhang, Di

    2018-06-13

Localized surface plasmon resonance (LSPR) of plasmonic metals (e.g., Au) can help semiconductors improve their photocatalytic hydrogen (H2) production performance. However, artificial synthesis of hierarchical plasmonic structures down to nanoscales is usually difficult. Here, we adopt the butterfly wing scales from Morpho didius to fabricate three-dimensional (3D) CdS/Au butterfly wing scales for plasmonic photocatalysis. The as-prepared materials inherit the pristine hierarchical biostructures well. The 3D CdS/Au butterfly wing scales exhibit a high H2 production rate (221.8 μmol·h^-1 within 420-780 nm), showing a 241-fold increase over the CdS butterfly wing scales. This is attributed to the effective potentiation of LSPR introduced by the multilayer metallic rib structures and a good interface bonding state between the Au and CdS nanoparticles. Thus, our study provides a relatively simple method to learn from nature and inspiration for preparing highly efficient plasmonic photocatalysts.

  6. Validation of the Simple Shoulder Test in a Portuguese-Brazilian population. Is the latent variable structure and validation of the Simple Shoulder Test Stable across cultures?

    PubMed

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt, and validate the Simple Shoulder Test into Brazilian Portuguese; we also tested the stability of its factor structure across cultures. The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed, including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. The test-retest reliability index, as measured by the intra-class correlation coefficient (ICC), was 0.84. Associations were observed in the hypothesized direction with all subscales of the SF-36 questionnaire. The Simple Shoulder Test translation and cultural adaptation to Brazilian Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples.
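The internal-consistency figure reported here (Cronbach's alpha = 0.82) follows from the standard formula α = k/(k−1) · (1 − Σ item variances / variance of total score). A generic sketch, not the study's analysis code:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)
```

Perfectly correlated items give α = 1; items whose covariances are small relative to their variances pull α down, which is why α is read as a measure of how consistently the items track a common construct.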

  7. Validation of the Simple Shoulder Test in a Portuguese-Brazilian Population. Is the Latent Variable Structure and Validation of the Simple Shoulder Test Stable across Cultures?

    PubMed Central

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

Background The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. We also test the stability of the factor structure across cultures. Objective The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. We also test the stability of the factor structure across cultures. Methods The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Results Factor analysis demonstrated a three-factor solution. Cronbach's alpha was 0.82. Test-retest reliability index as measured by intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of SF-36 questionnaire. Conclusion The Simple Shoulder Test translation and cultural adaptation to Brazilian-Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples. PMID:23675436

  8. Simple Numerical Analysis of Longboard Speedometer Data

    ERIC Educational Resources Information Center

    Hare, Jonathan

    2013-01-01

    Simple numerical data analysis is described, using a standard spreadsheet program, to determine distance, velocity (speed) and acceleration from voltage data generated by a skateboard/longboard speedometer (Hare 2012 "Phys. Educ." 47 409-17). This simple analysis is an introduction to data processing including scaling data as well as…
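A minimal version of that spreadsheet workflow can be sketched in a few lines, assuming the speedometer voltage is proportional to speed through a calibration constant; `K_CAL` below is a hypothetical placeholder, since the real value depends on the dynamo and wheel used in Hare's speedometer:

```python
import numpy as np

# Hypothetical calibration: speed (m/s) = K_CAL * voltage (V).
K_CAL = 2.5

def analyse(voltages, dt):
    """Mirror the spreadsheet analysis: scale voltage to speed, integrate
    (trapezoid rule) for distance, and difference for acceleration."""
    v = K_CAL * np.asarray(voltages, dtype=float)          # speed, m/s
    # cumulative distance: zero at the start, trapezoid rule thereafter
    distance = np.concatenate([[0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)])
    acceleration = np.gradient(v, dt)                      # central differences, m/s^2
    return v, distance, acceleration
```

The same three steps (scale, integrate, differentiate) are what the article carries out in spreadsheet columns.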

  9. A Theoretical Study of the Luminosity-Temperature Relation for Clusters of Galaxies

    NASA Astrophysics Data System (ADS)

    Del Popolo, A.; Hiotelis, N.; Peñarrubia, J.

    2005-07-01

A luminosity-temperature relation is derived for clusters of galaxies. The two models used take into account the angular momentum acquisition by the protostructures during their expansion and collapse. The first model is a modification of the self-similar model, while the second is a modification of the punctuated equilibria model of Cavaliere et al. In both models the mass-temperature (M-T) relation used is based on previous calculations of Del Popolo. We show that the above models lead, in X-rays, to a luminosity-temperature relation that scales as L ~ T^5 at the scale of groups, flattening to L ~ T^3 for rich clusters and converging to L ~ T^2 at higher temperatures. However, a fundamental result of our paper is that the nonsimilarity in the L-T relation can be explained by a simple model that takes into account the amount of angular momentum of a protostructure. This result is in disagreement with the widely accepted idea that the nonsimilarity is due to nongravitating processes, such as heating and/or cooling.

  10. Coevolutionary diversification creates nested-modular structure in phage–bacteria interaction networks

    PubMed Central

    Beckett, Stephen J.; Williams, Hywel T. P.

    2013-01-01

    Phage and their bacterial hosts are the most diverse and abundant biological entities in the oceans, where their interactions have a major impact on marine ecology and ecosystem function. The structure of interaction networks for natural phage–bacteria communities offers insight into their coevolutionary origin. At small phylogenetic scales, observed communities typically show a nested structure, in which both hosts and phages can be ranked by their range of resistance and infectivity, respectively. A qualitatively different multi-scale structure is seen at larger phylogenetic scales; a natural assemblage sampled from the Atlantic Ocean displays large-scale modularity and local nestedness within each module. Here, we show that such ‘nested-modular’ interaction networks can be produced by a simple model of host–phage coevolution in which infection depends on genetic matching. Negative frequency-dependent selection causes diversification of hosts (to escape phages) and phages (to track their evolving hosts). This creates a diverse community of bacteria and phage, maintained by kill-the-winner ecological dynamics. When the resulting communities are visualized as bipartite networks of who infects whom, they show the nested-modular structure characteristic of the Atlantic sample. The statistical significance and strength of this observation varies depending on whether the interaction networks take into account the density of the interacting strains, with implications for interpretation of interaction networks constructed by different methods. Our results suggest that the apparently complex community structures associated with marine bacteria and phage may arise from relatively simple coevolutionary origins. PMID:24516719

  11. Fabrication of wafer-scale nanopatterned sapphire substrate through phase separation lithography

    NASA Astrophysics Data System (ADS)

    Guo, Xu; Ni, Mengyang; Zhuang, Zhe; Dai, Jiangping; Wu, Feixiang; Cui, Yushuang; Yuan, Changsheng; Ge, Haixiong; Chen, Yanfeng

    2016-04-01

A phase separation lithography (PSL) based on polymer blends provides an extremely simple, low-cost, and high-throughput way to fabricate wafer-scale disordered nanopatterns. This method was introduced to fabricate nanopatterned sapphire substrates (NPSSs) for GaN-based light-emitting diodes (LEDs). The PSL process involved only spin-coating a polystyrene (PS)/polyethylene glycol (PEG) polymer blend on a sapphire substrate, followed by development with deionized water to remove the PEG moiety. The PS nanoporous network was facilely obtained, and its structural parameters could be effectively tuned by controlling the PS/PEG weight ratio of the spin-coating solution. 2-in. wafer-scale NPSSs were conveniently achieved through the PS nanoporous network in combination with traditional nanofabrication methods, such as O2 reactive ion etching (RIE), e-beam evaporation deposition, liftoff, and chlorine-based RIE. To investigate the performance of such NPSSs, typical blue LEDs with emission wavelengths of ~450 nm were grown on an NPSS and a flat sapphire substrate (FSS) by metal-organic chemical vapor deposition, respectively. The integral photoluminescence (PL) intensity of the NPSS LED was enhanced by 32.3% compared to that of the FSS LED. The low relative standard deviation of 4.7% for PL mappings of the NPSS LED indicated the high uniformity of PL across the whole 2-in. wafer. The extreme simplicity, low cost, and high throughput of the process, together with the ability to fabricate at the wafer scale, make PSL a potential method for production of nanopatterned sapphire substrates.

  12. A simple method for estimating basin-scale groundwater discharge by vegetation in the basin and range province of Arizona using remote sensing information and geographic information systems

    USGS Publications Warehouse

    Tillman, F.D.; Callegary, J.B.; Nagler, P.L.; Glenn, E.P.

    2012-01-01

    Groundwater is a vital water resource in the arid to semi-arid southwestern United States. Accurate accounting of inflows to and outflows from the groundwater system is necessary to effectively manage this shared resource, including the important outflow component of groundwater discharge by vegetation. A simple method for estimating basin-scale groundwater discharge by vegetation is presented that uses remote sensing data from satellites, geographic information systems (GIS) land cover and stream location information, and a regression equation developed within the Southern Arizona study area relating the Enhanced Vegetation Index from the MODIS sensors on the Terra satellite to measured evapotranspiration. Results computed for 16-day composited satellite passes over the study area during the 2000 through 2007 time period demonstrate a sinusoidal pattern of annual groundwater discharge by vegetation with median values ranging from around 0.3 mm per day in the cooler winter months to around 1.5 mm per day during summer. Maximum estimated annual volume of groundwater discharge by vegetation was between 1.4 and 1.9 billion m3 per year with an annual average of 1.6 billion m3. A simplified accounting of the contribution of precipitation to vegetation greenness was developed whereby monthly precipitation data were subtracted from computed vegetation discharge values, resulting in estimates of minimum groundwater discharge by vegetation. Basin-scale estimates of minimum and maximum groundwater discharge by vegetation produced by this simple method are useful bounding values for groundwater budgets and groundwater flow models, and the method may be applicable to other areas with similar vegetation types.
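The accounting described here reduces to simple arithmetic per 16-day composite: regress EVI against ET for the maximum estimate, then credit precipitation first for the minimum. A sketch of that bookkeeping, in which the linear regression coefficients `A` and `B` are hypothetical placeholders rather than the published values:

```python
# Hypothetical EVI-to-ET regression: ET (mm per 16-day composite) = A * EVI + B.
A, B = 10.0, -1.0

def vegetation_discharge(evi, precip_mm):
    """Bounds on groundwater discharge by vegetation for one composite period:
    the maximum assumes all ET draws on groundwater; the minimum subtracts
    precipitation first (floored at zero)."""
    et = max(A * evi + B, 0.0)          # estimated evapotranspiration, mm
    gw_max = et                          # all ET attributed to groundwater
    gw_min = max(et - precip_mm, 0.0)    # ET in excess of precipitation
    return gw_min, gw_max
```

Summing these per-composite bounds over a basin's vegetated area gives the kind of minimum/maximum annual volumes (1.4-1.9 billion m3) the study reports.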

  13. Simple Kinematic Pathway Approach (KPA) to Catchment-scale Travel Time and Water Age Distributions

    NASA Astrophysics Data System (ADS)

    Soltani, S. S.; Cvetkovic, V.; Destouni, G.

    2017-12-01

The distribution of catchment-scale water travel times is strongly influenced by morphological dispersion and is partitioned between hillslope and larger, regional scales. We explore whether hillslope travel times are predictable using a simple semi-analytical "kinematic pathway approach" (KPA) that accounts for dispersion at two levels: morphological dispersion and macro-dispersion. The study gives new insights into shallow (hillslope) and deep (regional) groundwater travel times by comparing numerical simulations of travel time distributions, referred to as the "dynamic model", with corresponding KPA computations for three real catchment case studies in Sweden. KPA uses basic structural and hydrological data to compute transient water travel time (forward mode) and age (backward mode) distributions at the catchment outlet. Longitudinal and morphological dispersion components are reflected in KPA computations by assuming an effective Peclet number and topographically driven pathway length distributions, respectively. Numerical simulations of advective travel times are obtained by means of particle tracking using the fully integrated flow model MIKE SHE. The comparison of computed cumulative distribution functions of travel times shows a significant influence of morphological dispersion and groundwater recharge rate on the compatibility of the "kinematic pathway" and "dynamic" models. Zones of high recharge rate in "dynamic" models are associated with topographically driven groundwater flow paths to adjacent discharge zones, e.g. rivers and lakes, through relatively shallow pathway compartments. These zones exhibit more compatible behavior between "dynamic" and "kinematic pathway" models than zones of low recharge rate. Interestingly, the travel time distributions of hillslope compartments remain almost unchanged with increasing recharge rates in the "dynamic" models. This robust "dynamic" model behavior suggests that flow path lengths and travel times in shallow hillslope compartments are controlled by topography; application and further development of the simple "kinematic pathway" approach is therefore promising for modeling them.

  14. [Developments in preparation and experimental method of solid phase microextraction fibers].

    PubMed

    Yi, Xu; Fu, Yujie

    2004-09-01

Solid-phase microextraction (SPME) is a simple and effective adsorption and desorption technique that concentrates volatile or nonvolatile compounds from liquid samples or from sample headspace. SPME is compatible with analyte separation and detection by gas chromatography, high performance liquid chromatography, and other instrumental methods. It offers many advantages, such as a wide linear range, low solvent and sample consumption, short analysis times, low detection limits, and simple apparatus. The theory of SPME is introduced, covering both equilibrium and non-equilibrium theory. Recent developments in fiber preparation methods and related experimental techniques are discussed. In addition to commercial fiber preparation, newly developed fabrication techniques, such as sol-gel coating, electrodeposition, carbon-based adsorption, and high-temperature epoxy immobilization, are presented. The effects of extraction mode, fiber coating selection, optimization of operating conditions, method sensitivity and precision, and system automation are taken into consideration in the analytical process of SPME. A brief outlook for SPME is offered in closing.

  15. The fluid trampoline: droplets bouncing on a soap film

    NASA Astrophysics Data System (ADS)

    Bush, John; Gilet, Tristan

    2008-11-01

    We present the results of a combined experimental and theoretical investigation of droplets falling onto a horizontal soap film. Both static and vertically vibrated soap films are considered. A quasi-static description of the soap film shape yields a force-displacement relation that provides excellent agreement with experiment, and allows us to model the film as a nonlinear spring. This approach yields an accurate criterion for the transition between droplet bouncing and crossing on the static film; moreover, it allows us to rationalize the observed constancy of the contact time and scaling for the coefficient of restitution in the bouncing states. On the vibrating film, a variety of bouncing behaviours were observed, including simple and complex periodic states, multiperiodicity and chaos. A simple theoretical model is developed that captures the essential physics of the bouncing process, reproducing all observed bouncing states. Quantitative agreement between model and experiment is deduced for simple periodic modes, and qualitative agreement for more complex periodic and chaotic bouncing states.

  16. Organelle Size Scaling of the Budding Yeast Vacuole by Relative Growth and Inheritance.

    PubMed

    Chan, Yee-Hung M; Reyes, Lorena; Sohail, Saba M; Tran, Nancy K; Marshall, Wallace F

    2016-05-09

    It has long been noted that larger animals have larger organs compared to smaller animals of the same species, a phenomenon termed scaling [1]. Julian Huxley proposed an appealingly simple model of "relative growth"-in which an organ and the whole body grow with their own intrinsic rates [2]-that was invoked to explain scaling in organs from fiddler crab claws to human brains. Because organ size is regulated by complex, unpredictable pathways [3], it remains unclear whether scaling requires feedback mechanisms to regulate organ growth in response to organ or body size. The molecular pathways governing organelle biogenesis are simpler than organogenesis, and therefore organelle size scaling in the cell provides a more tractable case for testing Huxley's model. We ask the question: is it possible for organelle size scaling to arise if organelle growth is independent of organelle or cell size? Using the yeast vacuole as a model, we tested whether mutants defective in vacuole inheritance, vac8Δ and vac17Δ, tune vacuole biogenesis in response to perturbations in vacuole size. In vac8Δ/vac17Δ, vacuole scaling increases with the replicative age of the cell. Furthermore, vac8Δ/vac17Δ cells continued generating vacuole at roughly constant rates even when they had significantly larger vacuoles compared to wild-type. With support from computational modeling, these results suggest there is no feedback between vacuole biogenesis rates and vacuole or cell size. Rather, size scaling is determined by the relative growth rates of the vacuole and the cell, thus representing a cellular version of Huxley's model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. An analysis of ratings: A guide to RMRATE

    Treesearch

    Thomas C. Brown; Terry C. Daniel; Herbert W. Schroeder; Glen E. Brink

    1990-01-01

    This report describes RMRATE, a computer program for analyzing rating judgments. RMRATE scales ratings using several scaling procedures, and compares the resulting scale values. The scaling procedures include the median and simple mean, standardized values, scale values based on Thurstone's Law of Categorical Judgment, and regression-based values. RMRATE also...

  18. Scales and scaling in turbulent ocean sciences; physics-biology coupling

    NASA Astrophysics Data System (ADS)

    Schmitt, Francois

    2015-04-01

Geophysical fields exhibit huge fluctuations over many spatial and temporal scales. In the ocean, this behavior at smaller scales is closely linked to marine turbulence. The velocity field varies from large scales down to the Kolmogorov scale (mm), and scalar fields down to the Batchelor scale, which is often much smaller. As a consequence, it is not always simple to determine at which scale a process should be considered. The scale question is hence fundamental in marine sciences, especially when dealing with physics-biology coupling. For example, marine dynamical models typically have a grid size of a hundred meters or more, which is more than 10^5 times larger than the smallest turbulence scale (the Kolmogorov scale). Such a scale is adequate for the dynamics of a whale (around 100 m), but for a fish larva (1 cm) or a copepod (1 mm) a description at smaller scales is needed, owing to the nonlinear nature of turbulence. The same holds for biogeochemical fields such as passive and active tracers (oxygen, fluorescence, nutrients, pH, turbidity, temperature, salinity...). In this framework, we discuss the scale problem in turbulence modeling in the ocean, the relation of the Kolmogorov and Batchelor scales of ocean turbulence to the sizes of marine animals, scaling laws for organism-particle Reynolds numbers (from whales to bacteria), and possible scaling laws for organisms' accelerations.
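The two turbulence scales named above follow from standard definitions: the Kolmogorov scale eta = (nu^3/eps)^(1/4) and the Batchelor scale lambda_B = eta/sqrt(Sc), with Schmidt number Sc = nu/D. A minimal sketch, using typical ocean magnitudes that are our own illustrative choices, not values from the abstract:

```python
# Hedged sketch of the Kolmogorov and Batchelor length scales from their
# standard definitions. Input values are typical ocean magnitudes chosen
# purely for illustration.
import math

def kolmogorov_scale(nu, eps):
    """eta = (nu^3 / eps)^(1/4); nu: kinematic viscosity (m^2/s), eps: dissipation (W/kg)."""
    return (nu ** 3 / eps) ** 0.25

def batchelor_scale(nu, eps, D):
    """lambda_B = eta / sqrt(Sc), with Schmidt number Sc = nu / D."""
    return kolmogorov_scale(nu, eps) / math.sqrt(nu / D)

nu = 1e-6   # seawater kinematic viscosity, m^2/s
eps = 1e-8  # a moderate turbulent dissipation rate, W/kg
D = 1e-9    # molecular diffusivity of a scalar, m^2/s (Sc = 1000)
print(kolmogorov_scale(nu, eps))      # ~3e-3 m, i.e. a few mm
print(batchelor_scale(nu, eps, D))    # ~1e-4 m, much smaller than eta
```

For scalars with Sc >> 1 (salt, many biogeochemical tracers) the Batchelor scale is smaller than the Kolmogorov scale by the factor sqrt(Sc), which is the point the abstract makes.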

  19. Scaling of mode shapes from operational modal analysis using harmonic forces

    NASA Astrophysics Data System (ADS)

    Brandt, A.; Berardengo, M.; Manzoni, S.; Cigada, A.

    2017-10-01

This paper presents a new method for scaling mode shapes obtained by means of operational modal analysis. The method is capable of scaling mode shapes on any structure, including structures with closely coupled modes, and it can be used in the presence of ambient vibration from traffic or wind loads, etc. Harmonic excitation can be accomplished relatively easily using general-purpose actuators, even at the force levels necessary for driving large structures such as bridges and high-rise buildings. The signal processing necessary for mode shape scaling by the proposed method is simple, and the method can easily be implemented in most measurement systems capable of generating a sine wave output. The tests necessary to scale the modes are short compared to a typical operational modal analysis test. The proposed method is thus easy to apply and inexpensive relative to other mode shape scaling methods available in the literature. Although not strictly necessary, we propose to excite the structure at, or close to, the eigenfrequencies of the modes to be scaled, since this provides a better signal-to-noise ratio in the response sensors, thus permitting the use of smaller actuators. An extensive experimental campaign on a real structure was carried out, and the reported results demonstrate the feasibility and accuracy of the proposed method. Since the method utilizes harmonic excitation for mode shape scaling, we propose to call it OMAH.

  20. Scaling laws for AC gas breakdown and implications for universality

    NASA Astrophysics Data System (ADS)

    Loveless, Amanda M.; Garner, Allen L.

    2017-10-01

    The reduced dependence on secondary electron emission and electrode surface properties makes radiofrequency (RF) and microwave (MW) plasmas advantageous over direct current (DC) plasmas for various applications, such as microthrusters. Theoretical models relating molecular constants to alternating current (AC) breakdown often fail due to incomplete understanding of both the constants and the mechanisms involved. This work derives simple analytic expressions for RF and MW breakdown, demonstrating the transition between these regimes at their high and low frequency limits, respectively. We further show that the limiting expressions for DC, RF, and MW breakdown voltage all have the same universal scaling dependence on pressure and gap distance at high pressure, agreeing with experiment.

  1. Partitioning in aqueous two-phase systems: Analysis of strengths, weaknesses, opportunities and threats.

    PubMed

    Soares, Ruben R G; Azevedo, Ana M; Van Alstine, James M; Aires-Barros, M Raquel

    2015-08-01

For half a century, aqueous two-phase systems (ATPSs) have been applied for the extraction and purification of biomolecules. In spite of their simplicity, selectivity, and relatively low cost, they have not been significantly employed for industrial-scale bioprocessing. Recently, their ability to be readily scaled and to interface easily with single-use, flexible biomanufacturing has led to industrial re-evaluation of ATPSs. The purpose of this review is to perform a SWOT analysis that includes a discussion of: (i) strengths of ATPS partitioning as an effective and simple platform for biomolecule purification; (ii) weaknesses of ATPS partitioning in regard to intrinsic problems and possible solutions; (iii) opportunities related to biotechnological challenges that ATPS partitioning may solve; and (iv) threats related to alternative techniques that may compete with ATPS in performance, economic benefits, scale-up and reliability. This approach provides insight into the current status of ATPS as a bioprocessing technique, and it can be concluded that most of the perceived weaknesses regarding industrial implementation have now been largely overcome, paving the way for opportunities in fermentation feed clarification, integration in multi-stage operations, and single-step purification processes. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. Generalized statistical mechanics approaches to earthquakes and tectonics.

    PubMed

    Vallianatos, Filippos; Papadakis, Giorgos; Michas, Georgios

    2016-12-01

Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract wide scientific interest and are incorporated in the probabilistic forecasting of seismicity at local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale for the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes.

  3. Generalized statistical mechanics approaches to earthquakes and tectonics

    PubMed Central

    Papadakis, Giorgos; Michas, Georgios

    2016-01-01

Despite the extreme complexity that characterizes the mechanism of the earthquake generation process, simple empirical scaling relations apply to the collective properties of earthquakes and faults in a variety of tectonic environments and scales. The physical characterization of those properties and the scaling relations that describe them attract wide scientific interest and are incorporated in the probabilistic forecasting of seismicity at local, regional and planetary scales. Considerable progress has been made in the analysis of the statistical mechanics of earthquakes, which, based on the principle of entropy, can provide a physical rationale for the macroscopic properties frequently observed. The scale-invariant properties, the (multi) fractal structures and the long-range interactions that have been found to characterize fault and earthquake populations have recently led to the consideration of non-extensive statistical mechanics (NESM) as a consistent statistical mechanics framework for the description of seismicity. The consistency between NESM and observations has been demonstrated in a series of publications on seismicity, faulting, rock physics and other fields of geosciences. The aim of this review is to present in a concise manner the fundamental macroscopic properties of earthquakes and faulting and how these can be derived by using the notions of statistical mechanics and NESM, providing further insights into earthquake physics and fault growth processes. PMID:28119548

  4. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    PubMed

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke, and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS items: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort were used, yielding an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining one-third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale showed performance equal to other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
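The PASS scoring rule described above (three dichotomized NIHSS-derived items, flagging possible ELVO when at least two are abnormal) can be sketched as follows; the function and argument names are ours, not from the paper:

```python
# Illustrative scoring of the PASS scale as described in the abstract: three
# dichotomized NIHSS-derived items, with a cut point of >= 2 abnormal items
# flagging possible emergent large vessel occlusion (ELVO).
# Function and argument names are ours, chosen for clarity.

def pass_score(loc_impaired, gaze_deviation, arm_weakness):
    """Each argument is True if the corresponding item is abnormal.
    Returns (score, elvo_suspected)."""
    score = sum([bool(loc_impaired), bool(gaze_deviation), bool(arm_weakness)])
    return score, score >= 2

print(pass_score(True, True, False))   # (2, True)  -> suspect ELVO
print(pass_score(False, True, False))  # (1, False)
```

The appeal of such a rule in the field is that it needs no arithmetic beyond counting abnormal findings.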

  5. Controlling the scattering properties of thin, particle-doped coatings

    NASA Astrophysics Data System (ADS)

    Rogers, William; Corbett, Madeleine; Manoharan, Vinothan

    2013-03-01

    Coatings and thin films of small particles suspended in a matrix possess optical properties that are important in several industries from cosmetics and paints to polymer composites. Many of the most interesting applications require coatings that produce several bulk effects simultaneously, but it is often difficult to rationally formulate materials with these desired optical properties. Here, we focus on the specific challenge of designing a thin colloidal film that maximizes both diffuse and total hemispherical transmission. We demonstrate that these bulk optical properties follow a simple scaling with two microscopic length scales: the scattering and transport mean free paths. Using these length scales and Mie scattering calculations, we generate basic design rules that relate scattering at the single particle level to the film's bulk optical properties. These ideas will be useful in the rational design of future optically active coatings.
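The two microscopic length scales invoked above have standard scattering-theory definitions: the scattering mean free path l_s = 1/(rho * sigma_s) and the transport mean free path l* = l_s/(1 - g), where rho is the particle number density, sigma_s the single-particle scattering cross-section, and g the scattering anisotropy. A minimal sketch with illustrative inputs (not values from the study):

```python
# Hedged sketch of the two length scales named in the abstract, from standard
# scattering theory. Input values below are illustrative, not from the study.

def scattering_mean_free_path(rho, sigma_s):
    """l_s = 1 / (rho * sigma_s): mean distance between scattering events (m)."""
    return 1.0 / (rho * sigma_s)

def transport_mean_free_path(rho, sigma_s, g):
    """l* = l_s / (1 - g): distance over which propagation direction randomizes."""
    return scattering_mean_free_path(rho, sigma_s) / (1.0 - g)

rho = 1e18      # particle number density, 1/m^3
sigma_s = 1e-14 # single-particle scattering cross-section, m^2
g = 0.8         # anisotropy factor: forward-peaked (Mie-like) scattering
print(scattering_mean_free_path(rho, sigma_s))   # 1e-4 m
print(transport_mean_free_path(rho, sigma_s, g)) # 5e-4 m
```

Comparing these lengths with the film thickness is what lets single-particle Mie calculations predict bulk diffuse and total transmission.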

  6. Local self-uniformity in photonic networks.

    PubMed

    Sellers, Steven R; Man, Weining; Sahba, Shervin; Florescu, Marian

    2017-02-17

    The interaction of a material with light is intimately related to its wavelength-scale structure. Simple connections between structure and optical response empower us with essential intuition to engineer complex optical functionalities. Here we develop local self-uniformity (LSU) as a measure of a random network's internal structural similarity, ranking networks on a continuous scale from crystalline, through glassy intermediate states, to chaotic configurations. We demonstrate that complete photonic bandgap structures possess substantial LSU and validate LSU's importance in gap formation through design of amorphous gyroid structures. Amorphous gyroid samples are fabricated via three-dimensional ceramic printing and the bandgaps experimentally verified. We explore also the wing-scale structuring in the butterfly Pseudolycaena marsyas and show that it possesses substantial amorphous gyroid character, demonstrating the subtle order achieved by evolutionary optimization and the possibility of an amorphous gyroid's self-assembly.

  7. Local self-uniformity in photonic networks

    NASA Astrophysics Data System (ADS)

    Sellers, Steven R.; Man, Weining; Sahba, Shervin; Florescu, Marian

    2017-02-01

    The interaction of a material with light is intimately related to its wavelength-scale structure. Simple connections between structure and optical response empower us with essential intuition to engineer complex optical functionalities. Here we develop local self-uniformity (LSU) as a measure of a random network's internal structural similarity, ranking networks on a continuous scale from crystalline, through glassy intermediate states, to chaotic configurations. We demonstrate that complete photonic bandgap structures possess substantial LSU and validate LSU's importance in gap formation through design of amorphous gyroid structures. Amorphous gyroid samples are fabricated via three-dimensional ceramic printing and the bandgaps experimentally verified. We explore also the wing-scale structuring in the butterfly Pseudolycaena marsyas and show that it possesses substantial amorphous gyroid character, demonstrating the subtle order achieved by evolutionary optimization and the possibility of an amorphous gyroid's self-assembly.

  8. A simple force-motion relation for migrating cells revealed by multipole analysis of traction stress.

    PubMed

    Tanimoto, Hirokazu; Sano, Masaki

    2014-01-07

For a biophysical understanding of cell motility, the relationship between mechanical force and cell migration must be uncovered, but it remains elusive. Since cells migrate at small scales in a dissipative environment, inertial forces are negligible and all forces must cancel out. This implies that one must quantify the spatial pattern of the force, rather than just its sum, to elucidate the force-motion relation. Here, we introduced multipole analysis to quantify the traction stress dynamics of migrating cells. We measured the traction stress of Dictyostelium discoideum cells and investigated the lowest two moments, the force dipole and quadrupole moments, which reflect rotational and front-rear asymmetries of the stress field. We derived a simple force-motion relation in which cells migrate along the force dipole axis, with a direction determined by the force quadrupole. Furthermore, as a complementary approach, we also investigated fine structures in the stress field that show front-rear asymmetric kinetics consistent with the multipole analysis. This tight force-motion relation enables us to predict cell migration from the traction stress patterns alone. Copyright © 2014 Biophysical Society. Published by Elsevier Inc. All rights reserved.
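The force dipole moment at the heart of such a multipole analysis is, in its simplest discrete form, M_ij = sum_k x_i(k) F_j(k) over the traction sampling points. A minimal sketch with made-up point data (a contractile force pair), not data from the paper:

```python
# Minimal sketch of a discrete force dipole moment, M_ij = sum_k x_i * F_j,
# computed from traction positions and forces. The point data below are
# invented for illustration: a contractile pair along the x axis.

def force_dipole(points, forces):
    """2x2 force dipole moment matrix from 2-D positions and forces."""
    M = [[0.0, 0.0], [0.0, 0.0]]
    for (x, y), (fx, fy) in zip(points, forces):
        M[0][0] += x * fx
        M[0][1] += x * fy
        M[1][0] += y * fx
        M[1][1] += y * fy
    return M

# Inward-pointing forces at the cell's front and rear (net force is zero)
points = [(-1.0, 0.0), (1.0, 0.0)]
forces = [(1.0, 0.0), (-1.0, 0.0)]
print(force_dipole(points, forces))  # [[-2.0, 0.0], [0.0, 0.0]]
```

The negative M_xx diagonal term is the signature of contraction along x; the dipole's principal axis is the migration axis in the relation the abstract describes.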

  9. A simple atomic-level hydrophobicity scale reveals protein interfacial structure.

    PubMed

    Kapcha, Lauren H; Rossky, Peter J

    2014-01-23

    Many amino acid residue hydrophobicity scales have been created in an effort to better understand and rapidly characterize water-protein interactions based only on protein structure and sequence. There is surprisingly low consistency in the ranking of residue hydrophobicity between scales, and their ability to provide insightful characterization varies substantially across subject proteins. All current scales characterize hydrophobicity based on entire amino acid residue units. We introduce a simple binary but atomic-level hydrophobicity scale that allows for the classification of polar and non-polar moieties within single residues, including backbone atoms. This simple scale is first shown to capture the anticipated hydrophobic character for those whole residues that align in classification among most scales. Examination of a set of protein binding interfaces establishes good agreement between residue-based and atomic-level descriptions of hydrophobicity for five residues, while the remaining residues produce discrepancies. We then show that the atomistic scale properly classifies the hydrophobicity of functionally important regions where residue-based scales fail. To illustrate the utility of the new approach, we show that the atomic-level scale rationalizes the hydration of two hydrophobic pockets and the presence of a void in a third pocket within a single protein and that it appropriately classifies all of the functionally important hydrophilic sites within two otherwise hydrophobic pores. We suggest that an atomic level of detail is, in general, necessary for the reliable depiction of hydrophobicity for all protein surfaces. The present formulation can be implemented simply in a manner no more complex than current residue-based approaches. © 2013.

  10. A dual theory of price and value in a meso-scale economic model with stochastic profit rate

    NASA Astrophysics Data System (ADS)

    Greenblatt, R. E.

    2014-12-01

    The problem of commodity price determination in a market-based, capitalist economy has a long and contentious history. Neoclassical microeconomic theories are based typically on marginal utility assumptions, while classical macroeconomic theories tend to be value-based. In the current work, I study a simplified meso-scale model of a commodity capitalist economy. The production/exchange model is represented by a network whose nodes are firms, workers, capitalists, and markets, and whose directed edges represent physical or monetary flows. A pair of multivariate linear equations with stochastic input parameters represent physical (supply/demand) and monetary (income/expense) balance. The input parameters yield a non-degenerate profit rate distribution across firms. Labor time and price are found to be eigenvector solutions to the respective balance equations. A simple relation is derived relating the expected value of commodity price to commodity labor content. Results of Monte Carlo simulations are consistent with the stochastic price/labor content relation.

  11. Sub-core permeability and relative permeability characterization with Positron Emission Tomography

    NASA Astrophysics Data System (ADS)

    Zahasky, C.; Benson, S. M.

    2017-12-01

    This study utilizes preclinical micro-Positron Emission Tomography (PET) to image and quantify the transport behavior of pulses of a conservative aqueous radiotracer injected during single and multiphase flow experiments in a Berea sandstone core with axial parallel bedding heterogeneity. The core is discretized into streamtubes, and using the micro-PET data, expressions are derived from spatial moment analysis for calculating sub-core scale tracer flux and pore water velocity. Using the flux and velocity data, it is then possible to calculate porosity and saturation from volumetric flux balance, and calculate permeability and water relative permeability from Darcy's law. Full 3D simulations are then constructed based on this core characterization. Simulation results are compared with experimental results in order to test the assumptions of the simple streamtube model. Errors and limitations of this analysis will be discussed. These new methods of imaging and sub-core permeability and relative permeability measurements enable experimental quantification of transport behavior across scales.
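The Darcy's-law step mentioned above amounts to k = q * mu * L / (A * dP) for single-phase 1-D flow through a streamtube. A minimal sketch with hypothetical lab-scale values (not data from the study):

```python
# Hedged sketch of the Darcy's-law permeability calculation referenced in the
# abstract, for single-phase 1-D flow. All input values are hypothetical
# lab-scale numbers chosen for illustration.

def permeability(q, mu, L, A, dP):
    """Permeability in m^2: k = q * mu * L / (A * dP)."""
    return q * mu * L / (A * dP)

q = 1.0e-8   # volumetric flux, m^3/s
mu = 1.0e-3  # water viscosity, Pa*s
L = 0.1      # flow length, m
A = 1.0e-3   # cross-sectional area, m^2
dP = 1.0e4   # pressure drop, Pa
print(permeability(q, mu, L, A, dP))  # 1e-13 m^2, about 100 millidarcy
```

A value near 100 millidarcy is in the range commonly reported for Berea sandstone, the rock used in the study.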

  12. Application of a Design Morphology to the MX/OCC Definition of a Fault Detection and Dispatch System.

    DTIC Science & Technology

    1980-09-01

morphology appears to be effective on an unstructured problem and provides a useful vehicle for clearly defining the functions and tasks that meet the needs ... approach used is a structured decision process which was successfully demonstrated in FY 78 on relatively simple mechanical equipment and has now been ... including achievement of practical conclusions from the large scale optimization procedures. This design morphology provided a useful vehicle for

  13. A proposed mathematical model for sleep patterning.

    PubMed

    Lawder, R E

    1984-01-01

    The simple model of a ramp, intersecting a triangular waveform, yields results which conform with seven generalized observations of sleep patterning; including the progressive lengthening of 'rapid-eye-movement' (REM) sleep periods within near-constant REM/nonREM cycle periods. Predicted values of REM sleep time, and of Stage 3/4 nonREM sleep time, can be computed using the observed values of other parameters. The distributions of the actual REM and Stage 3/4 times relative to the predicted values were closer to normal than the distributions relative to simple 'best line' fits. It was found that sleep onset tends to occur at a particular moment in the individual subject's '90-min cycle' (the use of a solar time-scale masks this effect), which could account for a subject with a naturally short sleep/wake cycle synchronizing to a 24-h rhythm. A combined 'sleep control system' model offers quantitative simulation of the sleep patterning of endogenous depressives and, with a different perturbation, qualitative simulation of the symptoms of narcolepsy.

  14. A Self-Reported Adherence Measure to Screen for Elevated HIV Viral Load in Pregnant and Postpartum Women on Antiretroviral Therapy

    PubMed Central

    Brittain, Kirsty; Mellins, Claude A.; Zerbe, Allison; Remien, Robert H.; Abrams, Elaine J.; Myer, Landon; Wilson, Ira B.

    2016-01-01

Maternal adherence to antiretroviral therapy (ART) is a concern and monitoring adherence presents a significant challenge in low-resource settings. We investigated the association between self-reported adherence, measured using a simple three-item scale, and elevated viral load (VL) among HIV-infected pregnant and postpartum women on ART in Cape Town, South Africa. This is the first reported use of this scale in a non-English-speaking setting, and it achieved good psychometric characteristics (Cronbach α = 0.79). Among 452 women included in the analysis, only 12 % reported perfect adherence on the self-report scale, while 92 % had a VL <1000 copies/mL. Having a raised VL was consistently associated with lower median adherence scores, and the area under the curve for the scale was 0.599, 0.656 and 0.642 using a VL cut-off of ≥50, ≥1000 and ≥10,000 copies/mL, respectively. This simple self-report adherence scale shows potential as a first-stage adherence screener in this setting. Maternal adherence monitoring in low-resource settings requires attention in the era of universal ART, and the value of this simple adherence scale in routine ART care settings warrants further investigation. PMID:27278548

  15. A Generalized Simple Formulation of Convective Adjustment ...

    EPA Pesticide Factsheets

Convective adjustment timescale (τ) for cumulus clouds is one of the most influential parameters controlling parameterized convective precipitation in climate and weather simulation models at global and regional scales. Due to the complex nature of deep convection, a prescribed value or ad hoc representation of τ is used in most global and regional climate/weather models, making it a tunable parameter and still resulting in uncertainties in convective precipitation simulations. In this work, a generalized simple formulation of τ for use in any convection parameterization for shallow and deep clouds is developed to reduce convective precipitation biases at different grid spacings. Unlike other existing methods, our new formulation can be used with field campaign measurements to estimate τ, as demonstrated using data from two different special field campaigns. We then implemented our formulation into a regional model (WRF) for testing and evaluation. Results indicate that our simple τ formulation can give realistic temporal and spatial variations of τ across the continental U.S., as well as grid-scale and subgrid-scale precipitation. We also found that as the grid spacing decreases (e.g., from 36 to 4-km grid spacing), grid-scale precipitation dominates over subgrid-scale precipitation. The generalized τ formulation works for various types of atmospheric conditions (e.g., continental clouds due to heating and large-scale forcing over la

  16. Scaling exponent and dispersity of polymers in solution by diffusion NMR.

    PubMed

    Williamson, Nathan H; Röding, Magnus; Miklavcic, Stanley J; Nydén, Magnus

    2017-05-01

    Molecular mass distribution measurements by pulsed gradient spin echo nuclear magnetic resonance (PGSE NMR) spectroscopy currently require prior knowledge of scaling parameters to convert from polymer self-diffusion coefficient to molecular mass. Reversing the problem, we utilize the scaling relation as prior knowledge to uncover the scaling exponent from within the PGSE data. Thus, the scaling exponent (a measure of polymer conformation and solvent quality) and the dispersity (Mw/Mn) are obtainable from one simple PGSE experiment. The method utilizes constraints and parametric distribution models in a two-step fitting routine involving first the mass-weighted signal and second the number-weighted signal. The method is developed using lognormal and gamma distribution models and tested on experimental PGSE attenuation of the terminal methylene signal and on the sum of all methylene signals of polyethylene glycol in D2O. Scaling exponent and dispersity estimates agree with known values in the majority of instances, leading to the potential application of the method to polymers for which characterization is not possible with alternative techniques. Copyright © 2017 Elsevier Inc. All rights reserved.
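The scaling relation at the heart of the method is the power law D = K·M^(−ν) between self-diffusion coefficient and molecular mass. The paper recovers ν from within the PGSE signal itself; as a much simpler illustration of the relation, here is a log-log fit on synthetic data (K, ν and the noise level are assumed, not the paper's values):

```python
import numpy as np

# Illustration of the underlying scaling relation D = K * M**(-nu):
# recover the exponent nu from synthetic diffusion data by a log-log fit.
# K, nu_true, the mass grid and the noise level are assumed values.

nu_true, K = 0.588, 4.0e-9            # Flory-like exponent, arbitrary prefactor
M = np.logspace(3, 6, 20)             # molecular masses, g/mol
rng = np.random.default_rng(0)
D = K * M ** (-nu_true) * rng.lognormal(0.0, 0.02, M.size)  # 2% scatter

slope, intercept = np.polyfit(np.log(M), np.log(D), 1)
nu_fit = -slope
print(f"fitted scaling exponent nu = {nu_fit:.3f}")
```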

  17. Objective assessment of isotretinoin-associated cheilitis: Isotretinoin Cheilitis Grading Scale.

    PubMed

    Ornelas, Jennifer; Rosamilia, Lorraine; Larsen, Larissa; Foolad, Negar; Wang, Quinlu; Li, Chin-Shang; Sivamani, Raja K

    2016-01-01

    Isotretinoin remains an effective treatment for severe acne. Despite its effectiveness, it has many side effects, of which cheilitis is the most common. To develop an objective grading scale for the assessment of isotretinoin-associated cheilitis, we conducted a cross-sectional clinical grading study at the UC Davis Dermatology clinic. Subjects were older than 18 years and actively treated with oral isotretinoin. We developed an Isotretinoin Cheilitis Grading Scale (ICGS) incorporating four characteristics: erythema, scale/crust, fissures and inflammation of the commissures. Three board-certified dermatologists independently graded photographs of the subjects. Kendall's coefficient of concordance (KCC) for the ICGS was 0.88 (p < 0.0001) and was ≥0.72 (p < 0.0001) for each of the four characteristics included in the grading scale. An image-based measurement of lip roughness correlated statistically significantly with the lip scale/crusting assessment (r = 0.52, p < 0.05). The ICGS is reproducible and relatively simple to use. It can be incorporated as an objective tool to aid in the assessment of isotretinoin-associated cheilitis.
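The agreement statistic reported here, Kendall's coefficient of concordance (W), has a compact closed form for untied rankings; a minimal sketch with hypothetical rater rankings (not the study's data):

```python
# Kendall's W for m raters ranking n subjects (no ties):
#   W = 12 * S / (m**2 * (n**3 - n)),
# where S is the sum of squared deviations of rank totals from their mean.
# The rankings below are hypothetical, not the study's data.

def kendalls_w(rankings):
    """rankings: m lists, each assigning ranks 1..n to the n subjects."""
    m, n = len(rankings), len(rankings[0])
    totals = [sum(r[i] for r in rankings) for i in range(n)]
    mean_total = m * (n + 1) / 2
    s = sum((t - mean_total) ** 2 for t in totals)
    return 12 * s / (m ** 2 * (n ** 3 - n))

raters = [
    [1, 2, 3, 4, 5],   # rater A's ranks for five photographs
    [1, 3, 2, 4, 5],   # rater B
    [2, 1, 3, 4, 5],   # rater C
]
print(round(kendalls_w(raters), 3))   # high agreement -> W near 1
```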

  18. Biology-Inspired Distributed Consensus in Massively-Deployed Sensor Networks

    NASA Technical Reports Server (NTRS)

    Jones, Kennie H.; Lodding, Kenneth N.; Olariu, Stephan; Wilson, Larry; Xin, Chunsheng

    2005-01-01

    Promises of ubiquitous control of the physical environment by large-scale wireless sensor networks open avenues for new applications that are expected to redefine the way we live and work. Most recent research has concentrated on developing techniques for performing relatively simple tasks in small-scale sensor networks, assuming some form of centralized control. The main contribution of this work is to propose a new way of looking at large-scale sensor networks, motivated by lessons learned from the way biological ecosystems are organized. Indeed, we believe that techniques used in small-scale sensor networks are not likely to scale to large networks; such large-scale networks must instead be viewed as an ecosystem in which the sensors/effectors are organisms whose autonomous actions, based on local information, combine in a communal way to produce global results. As an example of a useful function, we demonstrate that fully distributed consensus can be attained in a scalable fashion in massively deployed sensor networks where individual motes operate based on local information, making local decisions that are aggregated across the network to achieve globally meaningful effects.
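As an illustration of the kind of local-rule consensus described above (not the authors' protocol), the classic local-averaging scheme shows how purely local updates drive a network to the global mean:

```python
import numpy as np

# Classic local-averaging consensus on a ring of motes: each node nudges
# its value toward its two neighbours using only local information, yet
# the network as a whole converges to the global mean of the initial
# readings. This is an illustration, not the paper's protocol.

rng = np.random.default_rng(1)
x = rng.uniform(10.0, 30.0, 50)   # initial sensor readings
target = x.mean()                 # the global average to be reached

eps = 0.3                         # step size; eps < 0.5 keeps the ring stable
for _ in range(2000):
    left, right = np.roll(x, 1), np.roll(x, -1)
    x = x + eps * ((left - x) + (right - x))

print(float(x.std()))             # spread collapses; the mean is preserved
```

Because each update exchanges equal and opposite amounts between neighbours, the network mean is invariant while the spread decays geometrically.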

  19. Seismic equivalents of volcanic jet scaling laws and multipoles in acoustics

    NASA Astrophysics Data System (ADS)

    Haney, Matthew M.; Matoza, Robin S.; Fee, David; Aldridge, David F.

    2018-04-01

    We establish analogies between equivalent source theory in seismology (moment-tensor and single-force sources) and acoustics (monopoles, dipoles and quadrupoles) in the context of volcanic eruption signals. Although infrasound (acoustic waves < 20 Hz) from volcanic eruptions may be more complex than a simple monopole, dipole or quadrupole assumption, these elementary acoustic sources are a logical place to begin exploring relations with seismic sources. By considering the radiated power of a harmonic force source at the surface of an elastic half-space, we show that a volcanic jet or plume modelled as a seismic force has similar scaling with respect to eruption parameters (e.g. exit velocity and vent area) as an acoustic dipole. We support this by demonstrating, from first principles, a fundamental relationship that ties together explosion, torque and force sources in seismology and highlights the underlying dipole nature of seismic forces. This forges a connection between the multipole expansion of equivalent sources in acoustics and the use of forces and moments as equivalent sources in seismology. We further show that volcanic infrasound monopole and quadrupole sources exhibit scalings similar to seismicity radiated by volume injection and moment sources, respectively. We describe a scaling theory for seismic tremor during volcanic eruptions that agrees with observations showing a linear relation between the radiated power of tremor and eruption rate. Volcanic tremor over the first 17 hr of the 2016 eruption at Pavlof Volcano, Alaska, obeyed this linear relation; subsequent tremor during the main phase of the eruption did not, demonstrating that volcanic eruption tremor can exhibit other scalings even during the same eruption.

  20. Universal shocks in the Wishart random-matrix ensemble.

    PubMed

    Blaizot, Jean-Paul; Nowak, Maciej A; Warchoł, Piotr

    2013-05-01

    We show that the derivative of the logarithm of the average characteristic polynomial of a diffusing Wishart matrix obeys an exact partial differential equation valid for an arbitrary value of N, the size of the matrix. In the large N limit, this equation generalizes the simple inviscid Burgers equation that has been obtained earlier for Hermitian or unitary matrices. The solution, through the method of characteristics, presents singularities that we relate to the precursors of shock formation in the Burgers equation. The finite N effects appear as a viscosity term in the Burgers equation. Using a scaling analysis of the complete equation for the characteristic polynomial, in the vicinity of the shocks, we recover in a simple way the universal Bessel oscillations (so-called hard-edge singularities) familiar in random-matrix theory.

  1. Universal aspects of brittle fracture, adhesion, and atomic force microscopy

    NASA Technical Reports Server (NTRS)

    Banerjea, Amitava; Ferrante, John; Smith, John R.

    1989-01-01

    This universal relation between binding energy and interatomic separation was originally discovered for adhesion at bimetallic interfaces involving the simple metals Al, Zn, Mg, and Na. It is shown here that the same universal relation extends to adhesion at transition-metal interfaces. Adhesive energies have been computed for the low-index interfaces of Al, Ni, Cu, Ag, Fe, and W, using the equivalent-crystal theory (ECT) and keeping the atoms in each semi-infinite slab fixed rigidly in their equilibrium positions. These adhesive energy curves can be scaled onto each other and onto the universal adhesion curve. The effect of tip shape on the adhesive forces in the atomic-force microscope (AFM) is studied by computing energies and forces using the ECT. While the details of the energy-distance and force-distance curves are sensitive to tip shape, all of these curves can be scaled onto the universal adhesion curve.
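The universal binding-energy relation referred to here has a simple closed form; as commonly written, the scaled energy is

```latex
E(a) = \Delta E \; E^{*}(a^{*}), \qquad
E^{*}(a^{*}) = -\left(1 + a^{*}\right) e^{-a^{*}}, \qquad
a^{*} = \frac{a - a_{m}}{l},
```

where ΔE is the equilibrium binding energy, a_m the equilibrium separation, and l the scaling length; curves for different interfaces collapse when plotted as E/ΔE against a*.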

  2. [Study on the related factors of suicidal ideation in college undergraduates].

    PubMed

    Gao, Hong-sheng; Qu, Cheng-yi; Miao, Mao-hua

    2003-09-01

    To evaluate psychosocial factors and patterns of suicidal ideation among undergraduates in Shanxi province, 4,882 undergraduates were surveyed using multistage stratified random cluster sampling. Factors associated with suicidal ideation were analyzed with logistic regression and path analysis using scores on the Beck Scale for Suicide Ideation (BSSI), Suicide Attitude Questionnaire (QSA), Adolescent Self-Rate Life Events Check List (ASLEC), DSQ, Social Support Rating Scale, SCL-90, Simple Coping Modes Questionnaire and EPQ. Tendency toward psychological disorder was the major factor. Negative life events did not directly affect suicidal ideation, but personality did affect it, directly or indirectly, through coping and defensive responses. Personality played a stable, fundamental role while life events were minor but "triggering" agents. Disposition toward mental disturbance appeared to be the principal factor related to suicidal ideation. These three factors acted in concert to produce suicidal ideation.

  3. Jammed Clusters and Non-locality in Dense Granular Flows

    NASA Astrophysics Data System (ADS)

    Kharel, Prashidha; Rognon, Pierre

    We investigate the micro-mechanisms underpinning dense granular flow behaviour from a series of DEM simulations of pure shear flows of dry grains. We observe the development of transient clusters of jammed particles within the flow. The typical size of such clusters is found to scale with the inertial number with a power law similar to the scaling of shear-rate profile relaxation lengths observed previously. Based on the simple argument that transient clusters of size l exist in the dense flow regime, the formulation of the steady-state condition for non-homogeneous shear flow results in a general non-local relation, similar in form to the non-local relation conjectured for soft glassy flows. These findings suggest the formation of jammed clusters to be the key micro-mechanism underpinning non-local behaviour in dense granular flows.

  4. Application of a simple power law for transport ratio with bimodal distributions of spherical grains under oscillatory forcing

    NASA Astrophysics Data System (ADS)

    Holway, Kevin; Thaxton, Christopher S.; Calantoni, Joseph

    2012-11-01

    Morphodynamic models of coastal evolution require relatively simple parameterizations of sediment transport for application over larger scales. Calantoni and Thaxton (2008) [6] presented a transport parameterization for bimodal distributions of coarse quartz grains derived from detailed boundary layer simulations for sheet flow and near sheet flow conditions. The simulation results, valid over a range of wave forcing conditions and large- to small-grain diameter ratios, were successfully parameterized with a simple power law that allows for the prediction of the transport rates of each size fraction. Here, we have applied the simple power law to a two-dimensional cellular automaton to simulate sheet flow transport. Model results are validated with experiments performed in the small oscillating flow tunnel (S-OFT) at the Naval Research Laboratory at Stennis Space Center, MS, in which sheet flow transport was generated with a bed composed of a bimodal distribution of non-cohesive grains. The work presented suggests that, under the conditions specified, algorithms that incorporate the power law may correctly reproduce laboratory bed surface measurements of bimodal sheet flow transport while inherently incorporating vertical mixing by size.

  5. Symmetrical treatment of "Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition", for major depressive disorders.

    PubMed

    Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun

    2016-01-01

    We previously presented a group theoretical model that describes psychiatric patient states or clinical data in a graded vector-like format based on modulo groups. Meanwhile, the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, the current version), is frequently used for diagnosis in daily psychiatric treatments and biological research. The diagnostic criteria of DSM-5 contain simple binary items relating to the presence or absence of specific symptoms. In spite of its simple form, the practical structure of the DSM-5 system is not sufficiently systemized for data to be treated in a more rationally sophisticated way. Viewing disease states in terms of symmetry, in the manner of abstract algebra, is considered important for the future systematization of clinical medicine. We provide a simple idea for the practical treatment of the psychiatric diagnosis/score of DSM-5 using depressive symptoms, in line with our previously proposed method. An expression is given employing modulo-2 and -7 arithmetic (in particular, additive group theory) for Criterion A of a 'major depressive episode' that must be met for the diagnosis of 'major depressive disorder' in DSM-5. For this purpose, the novel concept of an imaginary value 0 that can be recognized as an explicit 0 or implicit 0 was introduced to compose the model. The zeros allow the incorporation or deletion of an item between any other symptoms if they are ordered appropriately. Optionally, a vector-like expression can be used to rate/select only specific items when modifying the criterion/scale. Simple examples are illustrated concretely. Further development of the proposed method for the criteria/scale of a disease is expected to raise the level of formalism of clinical medicine to that of other fields of natural science.
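A heavily simplified sketch of the binary-item structure that the modulo-2 treatment builds on (this is not the authors' full group-theoretic construction; the patient states are hypothetical):

```python
# Criterion A of a DSM-5 major depressive episode consists of nine
# yes/no items, at least five of which must be present, including
# depressed mood or loss of interest/pleasure. Treating a patient state
# as a vector over Z/2 makes the symptom-wise change between two states
# a componentwise addition mod 2 (XOR). This is only a sketch of the
# binary-item structure, not the authors' full construction; the
# patient states are hypothetical.

def meets_criterion_a(state):
    """state: tuple of 9 zeros/ones; item 0 = depressed mood,
    item 1 = loss of interest or pleasure."""
    return sum(state) >= 5 and bool(state[0] or state[1])

def difference(s1, s2):
    """Symptom-wise change between two states: addition in (Z/2)^9."""
    return tuple((a + b) % 2 for a, b in zip(s1, s2))

before = (1, 1, 1, 0, 1, 1, 0, 0, 0)   # hypothetical intake state
after  = (1, 0, 0, 0, 1, 0, 0, 0, 0)   # hypothetical follow-up state
print(meets_criterion_a(before), meets_criterion_a(after))
print(difference(before, after))       # which items changed
```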

  6. Time scales of porphyry Cu deposit formation: insights from titanium diffusion in quartz

    USGS Publications Warehouse

    Mercer, Celestine N.; Reed, Mark H.; Mercer, Cameron M.

    2015-01-01

    Porphyry dikes and hydrothermal veins from the porphyry Cu-Mo deposit at Butte, Montana, contain multiple generations of quartz that are distinct in scanning electron microscope-cathodoluminescence (SEM-CL) images and in Ti concentrations. A comparison of microprobe trace element profiles and maps to SEM-CL images shows that the concentration of Ti in quartz correlates positively with CL brightness but Al, K, and Fe do not. After calibrating CL brightness in relation to Ti concentration, we use the brightness gradient between different quartz generations as a proxy for Ti gradients that we model to determine time scales of quartz formation and cooling. Model results indicate that time scales of porphyry magma residence are ~1,000s of years and time scales from porphyry quartz phenocryst rim formation to porphyry dike injection and cooling are ~10s of years. Time scales for the formation and cooling of various generations of hydrothermal vein quartz range from 10s to 10,000s of years. These time scales are considerably shorter than the ~0.6 m.y. overall time frame for each porphyry-style mineralization pulse determined from isotopic studies at Butte, Montana. Simple heat conduction models provide a temporal reference point to compare chemical diffusion time scales, and we find that they support short dike and vein formation time scales. We interpret these relatively short time scales to indicate that the Butte porphyry deposit formed by short-lived episodes of hydrofracturing, dike injection, and vein formation, each with discrete thermal pulses, which repeated over the ~3 m.y. generation of the deposit.
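The diffusion-chronometry logic summarized above can be sketched at order-of-magnitude level: a concentration gradient of half-width x relaxes on a timescale t ~ x²/D, with D from an Arrhenius law. The D0 and Ea below are placeholders, not the Ti-in-quartz calibration used by the authors:

```python
import math

# Order-of-magnitude diffusion chronometry: a gradient of half-width x
# relaxes on a timescale t ~ x**2 / D, with D = D0*exp(-Ea/(R*T)).
# D0 and Ea are hypothetical placeholders, NOT the Ti-in-quartz
# calibration used in the study.

R = 8.314                  # gas constant, J/(mol K)
D0, Ea = 1.0e-8, 200e3     # m^2/s and J/mol (assumed)
T = 600 + 273.15           # temperature, K

D = D0 * math.exp(-Ea / (R * T))
x = 5e-6                   # a 5-micrometre CL-brightness gradient
t_seconds = x ** 2 / D
print(f"characteristic time ~ {t_seconds / 3.15e7:.0f} years")
```

The strong temperature dependence of D is why sharper gradients preserved at higher temperatures imply shorter events.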

  7. Intensity of Territorial Marking Predicts Wolf Reproduction: Implications for Wolf Monitoring

    PubMed Central

    García, Emilio J.

    2014-01-01

    Background The implementation of intensive and complex approaches to monitor large carnivores is resource demanding and restricted to endangered species, small populations, or small distribution ranges. Wolf monitoring over large spatial scales is difficult, but the management of such a contentious species requires regular estimations of abundance to guide decision-makers. The integration of wolf marking behaviour with simple sign counts may offer a cost-effective alternative for monitoring the status of wolf populations over large spatial scales. Methodology/Principal Findings We used a multi-sampling approach, based on the collection of visual and scent wolf marks (faeces and ground scratching) and the assessment of wolf reproduction using howling and observation points, to test whether the intensity of marking behaviour around the pup-rearing period (summer-autumn) could reflect wolf reproduction. Between 1994 and 2007 we collected 1,964 wolf marks over a total of 1,877 km surveyed and we searched for the pups' presence (1,497 howling and 307 observation points) in 42 sampling sites with a regular presence of wolves (120 sampling sites/year). The number of wolf marks was ca. 3 times higher in sites with a confirmed presence of pups (20.3 vs. 7.2 marks). We found a significant relationship between the number of wolf marks (mean and maximum relative abundance index) and the probability of wolf reproduction. Conclusions/Significance This research establishes a real-time relationship between the intensity of wolf marking behaviour and wolf reproduction. We suggest a conservative cut-off of 0.60 for the probability of wolf reproduction to monitor wolves on a regional scale, combined with the use of the mean relative abundance index of wolf marks in a given area. 
We show how the integration of wolf behaviour with simple sampling procedures permits rapid, real-time, and cost-effective assessment of the breeding status of wolf packs, with substantial implications for monitoring wolves at large spatial scales. PMID:24663068
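One way to picture the suggested monitoring rule is as a logistic model thresholded at the 0.60 cut-off; the coefficients below are hypothetical placeholders, not the fitted model from the study:

```python
import math

# Logistic screening rule with the suggested 0.60 cut-off. The intercept
# and slope below are hypothetical placeholders, not the model fitted in
# the study; mark indices are the mean relative abundance per site.

def p_reproduction(mark_index, b0=-2.0, b1=0.25):
    """Probability of reproduction from a site's mean mark index."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * mark_index)))

sites = {"A": 20.3, "B": 7.2, "C": 12.5}
flags = {name: p_reproduction(m) >= 0.60 for name, m in sites.items()}
print(flags)   # sites flagged as likely breeding
```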

  8. Microbial reduction of graphene oxide by Escherichia coli: a green chemistry approach.

    PubMed

    Gurunathan, Sangiliyandi; Han, Jae Woong; Eppakayala, Vasuki; Kim, Jin-Hoi

    2013-02-01

    Graphene and graphene-related materials have been an important area of research in recent years due to their unique properties. The extensive industrial application of graphene and related compounds has led researchers to devise novel and simple methods for the synthesis of high-quality graphene. In this paper, we developed an environmentally friendly, cost-effective and simple green approach for the reduction of graphene oxide (GO) using Escherichia coli biomass. The biological method avoids the toxic and environmentally harmful reducing agents commonly used in the chemical reduction of GO to obtain graphene. The biomass of E. coli reduces exfoliated GO to graphene at 37°C in an aqueous medium. The E. coli-reduced graphene oxide (ERGO) was characterized with UV-visible absorption spectroscopy, a particle analyzer, high-resolution X-ray diffractometry, scanning electron microscopy and Raman spectroscopy. Besides its reduction potential, the biomass could also play an important role as a stabilizing agent: the synthesized graphene exhibited good stability in water. This method opens up a new avenue for preparing graphene cost-effectively and at large scale. Our findings suggest that GO can be reduced by a simple, eco-friendly method using E. coli biomass to produce water-dispersible graphene. Copyright © 2012 Elsevier B.V. All rights reserved.

  9. Estimating hydraulic properties from tidal attenuation in the Northern Guam Lens Aquifer, territory of Guam, USA

    USGS Publications Warehouse

    Rotzoll, Kolja; Gingerich, Stephen B.; Jenson, John W.; El-Kadi, Aly I.

    2013-01-01

    Tidal-signal attenuations are analyzed to compute hydraulic diffusivities and estimate regional hydraulic conductivities of the Northern Guam Lens Aquifer, Territory of Guam (Pacific Ocean), USA. The results indicate a significant tidal-damping effect at the coastal boundary. Hydraulic diffusivities computed using a simple analytical solution for well responses to tidal forcings near the periphery of the island are two orders of magnitude lower than for wells in the island’s interior. Based on assigned specific yields of ~0.01–0.4, estimated hydraulic conductivities are ~20–800 m/day for peripheral wells, and ~2,000–90,000 m/day for interior wells. The lower conductivity of the peripheral rocks relative to the interior rocks may best be explained by the effects of karst evolution: (1) dissolutional enhancement of horizontal hydraulic conductivity in the interior; (2) case-hardening and concurrent reduction of local hydraulic conductivity in the cliffs and steeply inclined rocks of the periphery; and (3) the stronger influence of higher-conductivity regional-scale features in the interior relative to the periphery. A simple numerical model calibrated with measured water levels and tidal response estimates values for hydraulic conductivity and storage parameters consistent with the analytical solution. The study demonstrates how simple techniques can be useful for characterizing regional aquifer properties.
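The classical one-dimensional tidal-attenuation analysis (the Ferris/Jacob solution) behind calculations of this kind inverts the amplitude ratio for the hydraulic diffusivity D = T/S; a sketch with illustrative numbers (not the Guam data):

```python
import math

# Ferris/Jacob one-dimensional tidal attenuation: the well/ocean amplitude
# ratio r at distance x from the coast satisfies
#   r = exp(-x * sqrt(pi * S / (t0 * T))),
# so the hydraulic diffusivity D = T/S = pi * x**2 / (t0 * ln(r)**2).
# Numbers are illustrative, not the Guam observations.

def diffusivity_from_attenuation(x, ratio, period):
    """x in m, amplitude ratio in (0, 1), tidal period in s -> D in m^2/s."""
    return math.pi * x ** 2 / (period * math.log(ratio) ** 2)

period = 12.42 * 3600   # M2 tidal period in seconds
D = diffusivity_from_attenuation(x=2000.0, ratio=0.5, period=period)
print(f"D = {D:.0f} m^2/s")
```

Stronger attenuation (a smaller ratio) at the same distance implies a lower diffusivity, which is the sense of the coastal damping described above.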

  10. Estimating evapotranspiration in natural and constructed wetlands

    USGS Publications Warehouse

    Lott, R. Brandon; Hunt, Randall J.

    2001-01-01

    Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances, information important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data often are used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meteorological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness effects not included in the simple PET estimate. The meteorological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates also were temporally variable in wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone was able to provide an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetlands and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET or single measurements of ET from a site over space or time.

  11. Use of the Oxford Handicap Scale at hospital discharge to predict Glasgow Outcome Scale at 6 months in patients with traumatic brain injury.

    PubMed

    Perel, Pablo; Edwards, Phil; Shakur, Haleema; Roberts, Ian

    2008-11-06

    Traumatic brain injury (TBI) is an important cause of acquired disability. In evaluating the effectiveness of clinical interventions for TBI it is important to measure disability accurately. The Glasgow Outcome Scale (GOS) is the most widely used outcome measure in randomised controlled trials (RCTs) in TBI patients. However, GOS measurement is generally collected at 6 months after discharge, when loss to follow-up may have occurred. The objectives of this study were to evaluate the association and predictive validity between a simple disability scale at hospital discharge, the Oxford Handicap Scale (OHS), and the GOS at 6 months among TBI patients. The study was a secondary analysis of a randomised clinical trial among TBI patients (MRC CRASH Trial). A Spearman correlation was estimated to evaluate the association between the OHS and GOS. The validity of different dichotomies of the OHS for predicting GOS at 6 months was assessed by calculating sensitivity, specificity and the C statistic. Uni- and multivariate logistic regression models were fitted including OHS as an explanatory variable. For each model we analysed its discrimination and calibration. We found that the OHS is highly correlated with GOS at 6 months (Spearman correlation 0.75), with evidence of a linear relationship between the two scales. The OHS dichotomy that separates patients with severe dependency or death showed the greatest discrimination (C statistic: 0.843). Among survivors at hospital discharge the OHS showed very good discrimination (C statistic 0.78) and excellent calibration when used to predict GOS outcome at 6 months. We have shown that the OHS, a simple disability scale available at hospital discharge, can predict disability accurately, according to the GOS, at 6 months. 
The OHS could be used to improve the design and analysis of clinical trials in TBI patients and may also provide a valuable clinical tool for physicians to improve communication with patients and relatives when assessing a patient's prognosis at hospital discharge.
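The C statistic used throughout this record is the area under the ROC curve, i.e. the probability that a randomly chosen patient with the outcome scores higher than one without it; a minimal sketch with hypothetical scores:

```python
# C statistic (area under the ROC curve): the probability that a randomly
# chosen patient with the outcome scores higher than one without it,
# counting ties as half. OHS scores below are hypothetical.

def c_statistic(scores_pos, scores_neg):
    """Concordance probability over all positive/negative pairs."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Discharge OHS (0-5) for patients with / without a poor 6-month GOS:
poor = [5, 4, 4, 3, 5, 2]
good = [1, 2, 0, 3, 1, 2]
print(round(c_statistic(poor, good), 3))
```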

  12. Cosmic vacuum and galaxy formation

    NASA Astrophysics Data System (ADS)

    Chernin, A. D.

    2006-04-01

    It is demonstrated that the protogalactic perturbations must enter the nonlinear regime before the redshift z ≈ 1; otherwise they would be destroyed by the antigravity of the vacuum dark energy at the subsequent epoch of vacuum domination. The criterion is formulated in terms of the zero-gravity radius rV = {M/[(8π/3)ρV]}^(1/3), where M is the mass of a given over-density and ρV is the vacuum density. The criterion provides a new relation between the largest mass condensations and their spatial scales, and all the real large-scale systems clearly follow this relation. It is also shown that a simple formula is possible for the key quantity in the theory of galaxy formation, namely the initial amplitude of the perturbation of the gravitational potential in the protogalactic structures. The amplitude is time independent and given in terms of the Friedmann integrals, which are genuine physical characteristics of the cosmic energies. The results suggest that there is a strong correspondence between the global design of the Universe as a whole and the cosmic structures of various masses and spatial scales.

  13. Wavelet Analysis of Turbulent Spots and Other Coherent Structures in Unsteady Transition

    NASA Technical Reports Server (NTRS)

    Lewalle, Jacques

    1998-01-01

    This is a secondary analysis of a portion of the Halstead data. The hot-film traces from an embedded stage of a low-pressure turbine have been extensively analyzed by Halstead et al. In this project, wavelet analysis is used to develop a quantitative characterization of individual coherent structures in terms of size, amplitude, phase, convection speed, etc., as well as phase-averaged time scales. The purposes of the study are (1) to extract information about turbulent time scales for comparison with unsteady model results (e.g., k-epsilon); phase-averaged maps of dominant time scales will be presented; and (2) to evaluate any differences between wake-induced and natural spots that might affect model performance. Preliminary results, subject to verification with data at higher frequency resolution, indicate that spot properties are independent of their phase relative to the wake footprints; therefore requirements for the physical content of models are kept relatively simple. Incidentally, we also observed that spot substructures can be traced over several stations; further study will examine their possible impact.
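A toy sketch of the wavelet idea used in this analysis: convolving a trace with Morlet wavelets of different widths and picking the scale of maximum response identifies a dominant time scale. Signal, scales and wavelet parameters below are synthetic assumptions, not the Halstead data:

```python
import numpy as np

# Pick out a dominant time scale by convolving a trace with Morlet
# wavelets of different widths and taking the scale of maximum response.
# The 50 Hz test signal and the scale grid are synthetic assumptions.

def morlet(t, s, w0=6.0):
    """Complex Morlet wavelet of width s (seconds)."""
    return np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2) / np.sqrt(s)

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
signal = np.sin(2.0 * np.pi * 50.0 * t)       # a 50 Hz "coherent structure"

tk = np.arange(-0.05, 0.05, 1.0 / fs)         # wavelet support
scales = np.linspace(0.002, 0.04, 40)         # candidate scales, seconds
energy = [np.abs(np.convolve(signal, morlet(tk, s), mode="same")).mean()
          for s in scales]
best = scales[int(np.argmax(energy))]
print(f"dominant scale ~ {best * 1000:.1f} ms")
```

For a Morlet wavelet with w0 = 6, the best-matching scale for a sinusoid of frequency f sits near w0/(2πf), so the peak falls close to the signal's period.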

  14. Scaling relations in mountain streams: colluvial and Quaternary controls

    NASA Astrophysics Data System (ADS)

    Brardinoni, Francesco; Hassan, Marwan; Church, Michael

    2010-05-01

    In coastal British Columbia, Canada, the glacial palimpsest profoundly affects the geomorphic structure of mountain drainage basins. In this context, by combining remotely sensed, field- and GIS-based data, we examine the scaling behavior of bankfull width and depth with contributing area in a process-based framework. We propose a novel approach that, by detailing interactions between colluvial and fluvial processes, provides new insights on the geomorphic functioning of mountain channels. This approach evaluates the controls exerted by a parsimonious set of governing factors on channel size. Results indicate that systematic deviations from simple power-law trends in bankfull width and depth are common. Deviations are modulated by interactions between the inherited glacial and paraglacial topography (imposed slope), coarse grain-size fraction, and chiefly the rate of colluvial sediment delivery to streams. Cumulatively, departures produce distal cross-sections that are typically narrower and shallower than expected. These outcomes, while reinforcing the notion that mountain drainage basins in formerly glaciated systems are out of balance with current environmental conditions, show that cross-sectional scaling relations are useful metrics for understanding colluvial-alluvial interactions.
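The scaling behaviour examined here rests on the downstream hydraulic-geometry power law w = a·A^b; fitting log w against log A and inspecting residuals is one simple way to expose systematic deviations of the kind the authors describe. The data below are synthetic, not the British Columbia measurements:

```python
import numpy as np

# Downstream hydraulic geometry: w = a * A**b. Fit the exponent on a
# log-log scale and inspect residuals; distal reaches made artificially
# narrower show up as negative residuals. Synthetic data throughout.

A = np.logspace(-1, 3, 30)        # contributing area, km^2
a_true, b_true = 2.5, 0.45        # assumed coefficient and exponent
w = a_true * A ** b_true          # bankfull width, m
w[-6:] *= 0.7                     # impose narrower-than-expected distal reaches

b_fit, log_a_fit = np.polyfit(np.log(A), np.log(w), 1)
resid = np.log(w) - (b_fit * np.log(A) + log_a_fit)
print(f"fitted exponent b = {b_fit:.2f}; "
      f"mean distal residual = {resid[-6:].mean():+.2f}")
```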

  15. Relation of nitrate concentrations to baseflow in the Raccoon River, Iowa

    USGS Publications Warehouse

    Schilling, K.E.; Lutz, D.S.

    2004-01-01

    Excessive nitrate-nitrogen (nitrate) export from the Raccoon River in west central Iowa is an environmental concern to downstream receptors. The 1972 to 2000 record of daily streamflow and the results from 981 nitrate measurements were examined to describe the relation of nitrate to streamflow in the Raccoon River. No long term trends in streamflow and nitrate concentrations were noted in the 28-year record. Strong seasonal patterns were evident in nitrate concentrations, with higher concentrations occurring in spring and fall. Nitrate concentrations were linearly related to streamflow at daily, monthly, seasonal, and annual time scales. At all time scales evaluated, the relation was improved when baseflow was used as the discharge variable instead of total streamflow. Nitrate concentrations were found to be highly stratified according to flow, but there was little relation of nitrate to streamflow within each flow range. Simple linear regression models developed to predict monthly mean nitrate concentrations explained as much as 76 percent of the variability in the monthly nitrate concentration data for 2001. Extrapolation of current nitrate baseflow relations to historical conditions in the Raccoon River revealed that increasing baseflow over the 20th century could account for a measurable increase in nitrate concentrations.
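The abstract's central comparison, regressing nitrate on baseflow versus on total streamflow, can be sketched with synthetic data (not the Raccoon River record) in which nitrate tracks baseflow while storm runoff adds unrelated variability:

```python
import numpy as np

# Synthetic comparison: nitrate tracks baseflow; total streamflow adds
# storm runoff unrelated to nitrate, so regressing on baseflow explains
# more variance, mirroring the abstract's finding.

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()

rng = np.random.default_rng(42)
baseflow = rng.uniform(5.0, 50.0, 120)                # m^3/s, monthly means
stormflow = rng.exponential(10.0, 120)                # event runoff, m^3/s
streamflow = baseflow + stormflow
nitrate = 0.2 * baseflow + rng.normal(0.0, 1.0, 120)  # mg/L (assumed model)

print(f"R^2 vs baseflow:   {r_squared(baseflow, nitrate):.2f}")
print(f"R^2 vs streamflow: {r_squared(streamflow, nitrate):.2f}")
```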

  16. HESS Opinions Catchments as meta-organisms - a new blueprint for hydrological modelling

    NASA Astrophysics Data System (ADS)

    Savenije, Hubert H. G.; Hrachowitz, Markus

    2017-02-01

    Catchment-scale hydrological models frequently miss essential characteristics of what determines the functioning of catchments. The most important active agent in catchments is the ecosystem. It manipulates and partitions moisture in a way that supports the essential functions of survival and productivity: infiltration of water, retention of moisture, mobilization and retention of nutrients, and drainage. Ecosystems do this in the most efficient way, establishing a continuous, ever-evolving feedback loop with the landscape and climatic drivers. In brief, hydrological systems are alive and have a strong capacity to adjust themselves to prevailing and changing environmental conditions. Although most models take Newtonian theory to heart as best they can, what they generally miss is Darwinian theory on how an ecosystem evolves and adjusts its environment to maintain crucial hydrological functions. In addition, catchments, like many other natural systems, do not only evolve over time, but develop features of spatial organization, including surface or sub-surface drainage patterns, as a by-product of this evolution. Models that fail to account for patterns and the associated feedbacks miss a critical element of how systems at the interface of atmosphere, biosphere and pedosphere function. In contrast to what is widely believed, relatively simple, semi-distributed conceptual models have the potential to accommodate organizational features and their temporal evolution in an efficient way, the reason being that, because their parameters (and their evolution over time) are effective at the modelling scale and thus integrate natural heterogeneity within the system, they may be directly inferred from observations at the same scale, reducing the need for calibration and related problems. In particular, the emergence of new and more detailed observation systems from space will lead towards a more robust understanding of spatial organization and its evolution. 
This will further permit the development of relatively simple time-dynamic functional relationships that can meaningfully represent spatial patterns and their evolution over time, even in poorly gauged environments.

  17. Construction and validation of a measure of integrative well-being in seven languages: the Pemberton Happiness Index.

    PubMed

    Hervás, Gonzalo; Vázquez, Carmelo

    2013-04-22

    We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that possibly happened the day before); the sum of these items produces a combined well-being index. A distinctive characteristic of this study is that to construct the scale, an initial pool of items, covering the remembered and experienced well-being domains, was subjected to a complete selection and validation process. These items were based on widely used scales (e.g., PANAS, Satisfaction With Life Scale, Subjective Happiness Scale, and Psychological Well-Being Scales). Both the initial items and reference scales were translated into seven languages and completed via the Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and USA). Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Given the PHI's good psychometric properties, this simple and integrative index could be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index to explore well-being in individuals and communities.

  18. The Characterization of Galaxy Structure

    NASA Astrophysics Data System (ADS)

    Zaritsky, Dennis

    There is no all-encompassing intuitive physical understanding of galactic structure. We cannot predict the size, surface brightness, or luminosity of an individual galaxy based on the mass of its halo, or other physical characteristics, from simple first principles or even empirical guidelines. We have come to believe that such an understanding is possible because we have identified a simple scaling relation that applies to all gravitationally bound stellar systems, from giant ellipticals to dwarf spheroidals, from spiral galaxies to globular clusters. The simplicity (and low scatter) of this relationship testifies to an underlying order. In this proposal, we outline what we have learned so far about this scaling relationship, what we need to do to refine it so that it has no free parameters and provides the strongest possible test of galaxy formation and evolution models, and several ways in which we will exploit the relationship to explore other issues. Primarily, the proposed work involves a study of the uniform IR surface photometry of several thousand stellar systems using a single data source (the Spitzer S4G survey) to address shortcomings posed by the current heterogeneous sample, and the combination of these data with the GALEX database to study how excursions from this relationship are related to current or ongoing star formation. This relationship, like its antecedents, the Fundamental Plane and Tully-Fisher relationships, can also be used to estimate distances and stellar mass-to-light ratios. We will describe the key advantages our relationship has relative to the existing work and how we will exploit those using archival NASA data from the Spitzer, GALEX, and WISE missions.

  19. Assessing the impact of nutrient enrichment in estuaries: susceptibility to eutrophication.

    PubMed

    Painting, S J; Devlin, M J; Malcolm, S J; Parker, E R; Mills, D K; Mills, C; Tett, P; Wither, A; Burt, J; Jones, R; Winpenny, K

    2007-01-01

    The main aim of this study was to develop a generic tool for assessing risks and impacts of nutrient enrichment in estuaries. A simple model was developed to predict the magnitude of primary production by phytoplankton in different estuaries from nutrient input (total available nitrogen and/or phosphorus) and to determine likely trophic status. In the model, primary production is strongly influenced by water residence times and relative light regimes. The model indicates that estuaries with low and moderate light levels are the least likely to show a biological response to nutrient inputs. Estuaries with a good light regime are likely to be sensitive to nutrient enrichment, and to show similar responses, mediated only by site-specific geomorphological features. Nixon's scale was used to describe the relative trophic status of estuaries, and to set nutrient and chlorophyll thresholds for assessing trophic status. Estuaries identified as being eutrophic may not show any signs of eutrophication. Additional attributes need to be considered to assess negative impacts. Here, likely detriment to the oxygen regime was considered, but is most applicable to areas of restricted exchange. Factors which limit phytoplankton growth under high nutrient conditions (water residence times and/or light availability) may favour the growth of other primary producers, such as macrophytes, which may have a negative impact on other biological communities. The assessment tool was developed for estuaries in England and Wales, based on a simple 3-category typology determined by geomorphology and relative light levels. Nixon's scale needs to be validated for estuaries in England and Wales, once more data are available on light levels and primary production.

  20. A study on assimilating potential vorticity data

    NASA Astrophysics Data System (ADS)

    Li, Yong; Ménard, Richard; Riishøjgaard, Lars Peter; Cohn, Stephen E.; Rood, Richard B.

    1998-08-01

    The correlation that exists between the potential vorticity (PV) field and the distribution of chemical tracers such as ozone suggests the possibility of using tracer observations as proxy PV data in atmospheric data assimilation systems. Especially in the stratosphere, there are plentiful tracer observations but a general lack of reliable wind observations, and the correlation is most pronounced. The issue investigated in this study is how model dynamics would respond to the assimilation of PV data. First, numerical experiments of identical-twin type were conducted with a simple univariate nudging algorithm and a global shallow water model based on PV and divergence (PV-D model). All model fields are successfully reconstructed through the insertion of complete PV data alone if an appropriate value for the nudging coefficient is used. A simple linear analysis suggests that slow modes are recovered rapidly, at a rate nearly independent of spatial scale. In a more realistic experiment, appropriately scaled total ozone data from the NIMBUS-7 TOMS instrument were assimilated as proxy PV data into the PV-D model over a 10-day period. The resulting model PV field matches the observed total ozone field relatively well on large spatial scales, and the PV, geopotential and divergence fields are dynamically consistent. These results indicate the potential usefulness that tracer observations, as proxy PV data, may offer in a data assimilation system.
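The univariate nudging used in these experiments amounts to adding a Newtonian-relaxation term to the model tendency. A minimal scalar sketch (the decay rate, nudging coefficient, and observation value below are hypothetical, not taken from the study):

```python
def nudge_integrate(x0, obs, g, dt=0.1, decay=0.5, steps=200):
    """Integrate dx/dt = -decay*x + g*(obs - x): simple model dynamics
    plus a nudging (Newtonian relaxation) term pulling the state
    toward the observed value with coefficient g."""
    x = x0
    for _ in range(steps):
        x += dt * (-decay * x + g * (obs - x))
    return x

# With nudging the state settles near g*obs/(decay + g); without it
# (g = 0) the state simply decays to zero.
x_nudged = nudge_integrate(x0=10.0, obs=2.0, g=5.0)
x_free = nudge_integrate(x0=10.0, obs=2.0, g=0.0)
```

Too small a coefficient recovers the free model, while too large a value destabilizes the explicit time step (here dt*(decay+g) must stay below 2), which mirrors the abstract's point that an appropriate nudging coefficient matters.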

  1. Self-consciousness concept and assessment in self-report measures

    PubMed Central

    DaSilveira, Amanda; DeSouza, Mariane L.; Gomes, William B.

    2015-01-01

    This study examines how self-consciousness is defined and assessed using self-report questionnaires (Self-Consciousness Scale (SCS), Self-Reflection and Insight Scale, Self-Absorption Scale, Rumination-Reflection Questionnaire, and Philadelphia Mindfulness Scale). Authors of self-report measures suggest that self-consciousness can be distinguished by its private/public aspects, its adaptive/maladaptive applied characteristics, and present/past experiences. We examined these claims in a study of 602 young adults to whom the aforementioned scales were administered. Data were analyzed as follows: (1) correlation analysis to find simple associations between the measures; (2) factorial analysis using Oblimin rotation of the total scores of the scales; and (3) factorial analysis of the 102 items of the scales taken together. These analyses aimed to clarify relational patterns found in the correlations between the scales and to identify possible latent constructs behind them. Results support the adaptive/maladaptive aspects of self-consciousness, and distinguish to some extent public aspects from private ones. However, some scales that claim to be theoretically derived from the concept of private self-consciousness correlated with some of its public self-aspects. Overall, our findings suggest that while self-reflection measures tend to tap into past experiences and judged concepts already processed by the participants’ inner speech and thoughts, the Awareness measure derived from the Mindfulness Scale seems to be related to a construct associated with present experiences of which one is aware without further judgment or logical/rational symbolization. This sub-scale seems to emphasize the role that present experiences play in self-consciousness, and it is argued that this concept refers to what phenomenology and psychology have studied for more than 100 years: pre-reflective self-consciousness. PMID:26191030

  2. Self-consciousness concept and assessment in self-report measures.

    PubMed

    DaSilveira, Amanda; DeSouza, Mariane L; Gomes, William B

    2015-01-01

    This study examines how self-consciousness is defined and assessed using self-report questionnaires (Self-Consciousness Scale (SCS), Self-Reflection and Insight Scale, Self-Absorption Scale, Rumination-Reflection Questionnaire, and Philadelphia Mindfulness Scale). Authors of self-report measures suggest that self-consciousness can be distinguished by its private/public aspects, its adaptive/maladaptive applied characteristics, and present/past experiences. We examined these claims in a study of 602 young adults to whom the aforementioned scales were administered. Data were analyzed as follows: (1) correlation analysis to find simple associations between the measures; (2) factorial analysis using Oblimin rotation of the total scores of the scales; and (3) factorial analysis of the 102 items of the scales taken together. These analyses aimed to clarify relational patterns found in the correlations between the scales and to identify possible latent constructs behind them. Results support the adaptive/maladaptive aspects of self-consciousness, and distinguish to some extent public aspects from private ones. However, some scales that claim to be theoretically derived from the concept of private self-consciousness correlated with some of its public self-aspects. Overall, our findings suggest that while self-reflection measures tend to tap into past experiences and judged concepts already processed by the participants' inner speech and thoughts, the Awareness measure derived from the Mindfulness Scale seems to be related to a construct associated with present experiences of which one is aware without further judgment or logical/rational symbolization. This sub-scale seems to emphasize the role that present experiences play in self-consciousness, and it is argued that this concept refers to what phenomenology and psychology have studied for more than 100 years: pre-reflective self-consciousness.

  3. Collision geometry scaling of Au+Au pseudorapidity density from √(sNN )=19.6 to 200 GeV

    NASA Astrophysics Data System (ADS)

    Back, B. B.; Baker, M. D.; Ballintijn, M.; Barton, D. S.; Betts, R. R.; Bickley, A. A.; Bindel, R.; Budzanowski, A.; Busza, W.; Carroll, A.; Decowski, M. P.; García, E.; George, N.; Gulbrandsen, K.; Gushue, S.; Halliwell, C.; Hamblen, J.; Heintzelman, G. A.; Henderson, C.; Hofman, D. J.; Hollis, R. S.; Hołyński, R.; Holzman, B.; Iordanova, A.; Johnson, E.; Kane, J. L.; Katzy, J.; Khan, N.; Kucewicz, W.; Kulinich, P.; Kuo, C. M.; Lin, W. T.; Manly, S.; McLeod, D.; Mignerey, A. C.; Nouicer, R.; Olszewski, A.; Pak, R.; Park, I. C.; Pernegger, H.; Reed, C.; Remsberg, L. P.; Reuter, M.; Roland, C.; Roland, G.; Rosenberg, L.; Sagerer, J.; Sarin, P.; Sawicki, P.; Skulski, W.; Steinberg, P.; Stephans, G. S.; Sukhanov, A.; Tonjes, M. B.; Tang, J.-L.; Trzupek, A.; Vale, C.; van Nieuwenhuizen, G. J.; Verdier, R.; Wolfs, F. L.; Wosiek, B.; Woźniak, K.; Wuosmaa, A. H.; Wysłouch, B.

    2004-08-01

    The centrality dependence of the midrapidity charged particle multiplicity in Au+Au heavy-ion collisions at √(sNN) = 19.6 and 200 GeV is presented. Within a simple model, the fraction of hard (scaling with number of binary collisions) to soft (scaling with number of participant pairs) interactions is consistent with a value of x = 0.13±0.01 (stat) ±0.05 (syst) at both energies. The experimental results at both energies, scaled by inelastic p(p̄)+p collision data, agree within systematic errors. The ratio of the data was found not to depend on centrality over the studied range and yields a simple linear scale factor of R200/19.6 = 2.03±0.02 (stat) ±0.05 (syst).
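The two-component ("soft plus hard") parametrization behind the quoted x = 0.13 can be written down directly. A minimal sketch; the per-pair yield and Glauber quantities below are illustrative placeholders, not measured PHOBOS values:

```python
def charged_multiplicity(n_pp, n_part, n_coll, x=0.13):
    """Two-component model: a fraction (1 - x) of the midrapidity
    yield scales with the number of participant pairs (soft), and a
    fraction x with the number of binary collisions (hard)."""
    return n_pp * ((1.0 - x) * n_part / 2.0 + x * n_coll)

# Hypothetical central-collision Glauber numbers, for illustration only.
dn_deta = charged_multiplicity(n_pp=2.3, n_part=350, n_coll=1000)
```

In this form, fitting x to the measured centrality dependence is a one-parameter problem once the Glauber quantities are fixed.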

  4. Large Scale Synthesis of Colloidal Si Nanocrystals and their Helium Plasma Processing into Spin-On, Carbon-Free Nanocrystalline Si Films.

    PubMed

    Mohapatra, Pratyasha; Mendivelso-Perez, Deyny; Bobbitt, Jonathan M; Shaw, Santosh; Yuan, Bin; Tian, Xinchun; Smith, Emily A; Cademartiri, Ludovico

    2018-05-30

    This paper describes a simple approach to the large scale synthesis of colloidal Si nanocrystals and their processing by He plasma into spin-on carbon-free nanocrystalline Si films. We further show that the RIE etching rate in these films is 1.87 times faster than for single crystalline Si, consistent with a simple geometric argument that accounts for the nanoscale roughness caused by the nanoparticle shape.

  5. Comparison on genomic predictions using three GBLUP methods and two single-step blending methods in the Nordic Holstein population

    PubMed Central

    2012-01-01

    Background A single-step blending approach allows genomic prediction using information of genotyped and non-genotyped animals simultaneously. However, the combined relationship matrix in a single-step method may need to be adjusted because marker-based and pedigree-based relationship matrices may not be on the same scale. The same may apply when a GBLUP model includes both genomic breeding values and residual polygenic effects. The objective of this study was to compare single-step blending methods and GBLUP methods with and without adjustment of the genomic relationship matrix for genomic prediction of 16 traits in the Nordic Holstein population. Methods The data consisted of de-regressed proofs (DRP) for 5 214 genotyped and 9 374 non-genotyped bulls. The bulls were divided into a training and a validation population by birth date, October 1, 2001. Five approaches for genomic prediction were used: 1) a simple GBLUP method, 2) a GBLUP method with a polygenic effect, 3) an adjusted GBLUP method with a polygenic effect, 4) a single-step blending method, and 5) an adjusted single-step blending method. In the adjusted GBLUP and single-step methods, the genomic relationship matrix was adjusted for the difference of scale between the genomic and the pedigree relationship matrices. A set of weights on the pedigree relationship matrix (ranging from 0.05 to 0.40) was used to build the combined relationship matrix in the single-step blending method and the GBLUP method with a polygenic effect. Results Averaged over the 16 traits, reliabilities of genomic breeding values predicted using the GBLUP method with a polygenic effect (relative weight of 0.20) were 0.3% higher than reliabilities from the simple GBLUP method (without a polygenic effect). The adjusted single-step blending and original single-step blending methods (relative weight of 0.20) had average reliabilities that were 2.1% and 1.8% higher than the simple GBLUP method, respectively. 
In addition, the GBLUP method with a polygenic effect led to less bias of genomic predictions than the simple GBLUP method, and both single-step blending methods yielded less bias of predictions than all GBLUP methods. Conclusions The single-step blending method is an appealing approach for practical genomic prediction in dairy cattle. Genomic prediction from the single-step blending method can be improved by adjusting the scale of the genomic relationship matrix. PMID:22455934
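The blending and scale adjustment described above reduce to two small matrix operations. A sketch on toy 2×2 matrices; the paper's exact adjustment is not reproduced here, so the rescaling below follows the common convention of matching the overall mean and mean diagonal of G to those of the pedigree matrix A22 (an assumption):

```python
import numpy as np

def blend_relationship(G, A22, w=0.20):
    """Combined relationship matrix: weight w on the pedigree-based
    matrix A22 and (1 - w) on the genomic matrix G."""
    return (1.0 - w) * G + w * A22

def adjust_scale(G, A22):
    """Linearly rescale G (G* = a*G + b) so that its overall mean and
    its mean diagonal match those of A22, putting both matrices on
    the same scale before blending (degenerate if the two means of G
    coincide)."""
    a, b = np.polyfit([G.mean(), np.diag(G).mean()],
                      [A22.mean(), np.diag(A22).mean()], 1)
    return a * G + b

G = np.array([[1.0, 0.2], [0.2, 1.0]])      # toy genomic matrix
A22 = np.array([[1.0, 0.5], [0.5, 1.0]])    # toy pedigree matrix
G_adj = adjust_scale(G, A22)
G_w = blend_relationship(G_adj, A22, w=0.20)
```

With both matrices on the same scale, the blending weight w plays the role of the relative pedigree weight (0.05 to 0.40) explored in the study.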

  6. A single scaling parameter as a first approximation to describe the rainfall pattern of a place: application on Catalonia

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Llabrés-Brustenga, Alba; Rius, Anna; Rodríguez-Solà, Raúl; Navarro, Xavier

    2018-02-01

    As in many other natural processes, it has frequently been observed that rainfall exhibits statistical fractal self-similarity, and thus rainfall series generally show scaling properties. Based on this fact, there is a methodology, simple scaling, which is used quite broadly to find or reproduce the intensity-duration-frequency curves of a place. In the present work, the relationship of the simple scaling parameter with the characteristic rainfall pattern of the area of study has been investigated. The scaling parameter was calculated from 147 selected daily rainfall series covering the period between 1883 and 2016 over the Catalonian territory (Spain) and its nearby surroundings. A discussion is presented of the relationship between the spatial distribution of the scaling parameter and the rainfall pattern, as well as of trends in this scaling parameter over the past decades, possibly due to climate change.
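In practice, the simple scaling parameter is estimated from how statistical moments of annual-maximum rainfall vary with duration on a log-log plot. A minimal sketch on synthetic data (the durations and the exponent value are invented for illustration):

```python
import numpy as np

def scaling_exponent(durations_h, mean_annual_max):
    """Estimate the simple-scaling exponent as the slope of
    log(mean annual-maximum intensity) versus log(duration)."""
    slope, _intercept = np.polyfit(np.log(durations_h),
                                   np.log(mean_annual_max), 1)
    return slope

# Synthetic duration means obeying I_d = 50 * d**(-0.7) exactly.
d = np.array([1.0, 2.0, 6.0, 12.0, 24.0])   # durations in hours
beta = scaling_exponent(d, 50.0 * d ** -0.7)
```

Under simple scaling, this single exponent transfers intensity statistics across durations, which is why one parameter can serve as a first approximation to a site's rainfall pattern.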

  7. Why only few are so successful?

    NASA Astrophysics Data System (ADS)

    Mohanty, P. K.

    2007-10-01

    In many professions employees are rewarded according to their relative performance. The corresponding economy can be modeled by taking N independent agents who gain from the market at a rate that depends on their current gain. We argue that this simple, realistic rate generates a scale-free distribution even though the intrinsic abilities of the agents differ only marginally from each other. As evidence we provide the distribution of scores for two different systems: (a) the global stock game, where players invest in the real stock market, and (b) international cricket.
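The mechanism, a gain rate proportional to current gain among nearly identical agents, can be illustrated with a short multiplicative-growth simulation (all parameter values are invented for illustration):

```python
import numpy as np

def simulate_gains(n_agents=10_000, steps=100, seed=1):
    """Each agent's gain grows multiplicatively at a rate set by a
    marginally different 'ability' plus market noise; tiny rate
    differences compound into a very broad score distribution."""
    rng = np.random.default_rng(seed)
    ability = rng.normal(1.0, 0.05, n_agents)  # nearly identical agents
    gain = np.ones(n_agents)
    for _ in range(steps):
        gain *= ability * rng.lognormal(0.0, 0.1, n_agents)
    return gain

scores = simulate_gains()
# Only a handful of agents end up with scores far above the median.
```

Because the growth is multiplicative, small ability differences act on the logarithm of the gain, so the final scores spread over many orders of magnitude even though the agents started identically.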

  8. Free cooling of the one-dimensional wet granular gas.

    PubMed

    Zaburdaev, V Yu; Brinkmann, M; Herminghaus, S

    2006-07-07

    The free cooling behavior of a wet granular gas is studied in one dimension. We employ a particularly simple model system in which the interaction of wet grains is characterized by a fixed energy loss assigned to each collision. Macroscopic laws of energy dissipation and cluster formation are studied on the basis of numerical simulations and mean-field analytical calculations. We find a number of remarkable scaling properties which may shed light on earlier unexplained results for related systems.

  9. AdS black disk model for small-x DIS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cornalba, Lorenzo; Costa, Miguel S.; Penedones, Joao

    2011-05-23

    Using the approximate conformal invariance of QCD at high energies we consider a simple AdS black disk model to describe saturation in DIS. Deep inside saturation the structure functions have the same power law scaling, F_T ∼ F_L ∼ x^(-ω), where ω is related to the expansion rate of the black disk with energy. Furthermore, the ratio F_L/F_T is given by the universal value (1+ω)/(3+ω), independently of the target.
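The universal ratio quoted above is a simple rational function of ω and is easy to tabulate (the trial values of ω are arbitrary):

```python
def fl_ft_ratio(omega):
    """Universal black-disk prediction F_L / F_T = (1 + omega) / (3 + omega),
    independent of the target."""
    return (1.0 + omega) / (3.0 + omega)

# The ratio rises from 1/3 at omega = 0 toward 1 as omega grows.
ratios = [fl_ft_ratio(w) for w in (0.0, 0.4, 1.0)]
```

Measuring F_L/F_T deep in the saturation regime therefore pins down ω, the same parameter that controls the power-law growth of the structure functions.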

  10. Multiscale Modeling of Stiffness, Friction and Adhesion in Mechanical Contacts

    DTIC Science & Technology

    2012-02-29

    over a lateral length l scales as a power law: h ∝ l^H, where H is called the Hurst exponent. For typical experimental surfaces, H ranges from 0.5 to 0.8 ... surfaces with a wide range of Hurst exponents using fully atomistic calculations and the Green's function method. A simple relation like Eq. (2 ... described above to explore a full range of parameter space with different rms roughness h0, rms slope h'0, Hurst exponent H, adhesion energy

  11. THE FIRST FERMI IN A HIGH ENERGY NUCLEAR COLLISION.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KRASNITZ,A.

    1999-08-09

    At very high energies, weak coupling, non-perturbative methods can be used to study classical gluon production in nuclear collisions. One observes in numerical simulations that after an initial formation time, the produced partons are on shell, and their subsequent evolution can be studied using transport theory. At the initial formation time, a simple non-perturbative relation exists between the energy and number densities of the produced partons, and a scale determined by the saturated parton density in the nucleus.

  12. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures and examines the details of the formulation and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  13. Low-Temperature Plasma Functionalization of Carbon Nanotubes

    NASA Technical Reports Server (NTRS)

    Khare, Bishun; Meyyappan, M.

    2004-01-01

    A low-temperature plasma process has been devised for attaching specified molecular groups to carbon nanotubes in order to impart desired chemical and/or physical properties to the nanotubes for specific applications. Unlike carbon-nanotube- functionalization processes reported heretofore, this process does not involve the use of wet chemicals, does not involve exposure of the nanotubes to high temperatures, and generates very little chemical residue. In addition, this process can be carried out in a relatively simple apparatus and can readily be scaled up to mass production.

  14. A grid amplifier

    NASA Technical Reports Server (NTRS)

    Kim, Moonil; Weikle, Robert M., II; Hacker, Jonathan B.; Delisio, Michael P.; Rutledge, David B.; Rosenberg, James J.; Smith, R. P.

    1991-01-01

    A 50-MESFET grid amplifier is reported that has a gain of 11 dB at 3.3 GHz. The grid isolates the input from the output by using vertical polarization for the input beam and horizontal polarization for the transmitted output beam. The grid unit cell is a two-MESFET differential amplifier. A simple calibration procedure allows the gain to be calculated from a relative power measurement. This grid is a hybrid circuit, but the structure is suitable for fabrication as a monolithic wafer-scale integrated circuit, particularly at millimeter wavelengths.

  15. Validity of the International HIV Dementia Scale as assessed in a socioeconomically underdeveloped region of Southern China: assessing the influence of educational attainment.

    PubMed

    Dang, Chao; Wei, Bo; Long, JianXiong; Zhou, MengXiao; Han, XinXin; Zhao, TingTing

    2015-04-01

    In 2012, more than 80,000 cases of HIV infection were recorded in the Southern Chinese minority autonomous region of Guangxi Zhuang, where the occurrence of HIV-associated dementia remains high. The International HIV Dementia Scale is a relatively simple-to-administer screening scale for HIV-associated neurocognitive disorders. However, clinical experience in utilizing the scale with large Chinese samples is currently lacking, especially among individuals with limited formal schooling. In this study, a full neuropsychological evaluation (the gold standard) was conducted to identify the incidence/prevalence of HIV-associated neurocognitive disorders in a socioeconomically underdeveloped region of Southern China and to locate the optimal cut-off scale value using receiver operating characteristic curves. The highest Youden index of the scale was 0.450, with a corresponding cut-off point of 7.25. The sensitivity and specificity were 0.737 and 0.713, respectively. These results suggest that the scale is an effective and feasible screening tool for HIV-associated neurocognitive disorders in poorer regions of China with fewer well-educated residents. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
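The reported optimum follows directly from Youden's J statistic, which the ROC analysis maximizes over candidate cut-offs; the sensitivity and specificity below are the values quoted in the abstract:

```python
def youden_index(sensitivity, specificity):
    """Youden's J = sensitivity + specificity - 1; the ROC cut-off
    with the largest J is taken as the optimal screening threshold."""
    return sensitivity + specificity - 1.0

# Values reported for the 7.25 cut-off of the scale.
j = youden_index(0.737, 0.713)  # matches the reported J of 0.450
```

J ranges from 0 (a test no better than chance) to 1 (a perfect test), so 0.450 indicates moderate but usable discriminative power for a screening instrument.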

  16. Peak and Tail Scaling of Breakthrough Curves in Hydrologic Tracer Tests

    NASA Astrophysics Data System (ADS)

    Aquino, T.; Aubeneau, A. F.; Bolster, D.

    2014-12-01

    Power law tails, a marked signature of anomalous transport, have been observed in solute breakthrough curves time and time again in a variety of hydrologic settings, including in streams. However, due to the low concentrations at which they occur they are notoriously difficult to measure with confidence. This leads us to ask if there are other associated signatures of anomalous transport that can be sought. We develop a general stochastic transport framework and derive an asymptotic relation between the tail scaling of a breakthrough curve for a conservative tracer at a fixed downstream position and the scaling of the peak concentration of breakthrough curves as a function of downstream position, demonstrating that they provide equivalent information. We then quantify the relevant spatiotemporal scales for the emergence of this asymptotic regime, where the relationship holds, in the context of a very simple model that represents transport in an idealized river. We validate our results using random walk simulations. The potential experimental benefits and limitations of these findings are discussed.
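The tail scaling in question is typically extracted as a log-log slope over the late-time portion of a breakthrough curve; by the relation derived above, the same fit applied to peak concentration versus downstream distance should return equivalent information. A sketch on a synthetic power-law tail (the exponent is invented):

```python
import numpy as np

def powerlaw_exponent(x, y):
    """Estimate a power-law exponent alpha (y ~ x**-alpha) from the
    slope of a log-log least-squares fit."""
    slope, _ = np.polyfit(np.log(x), np.log(y), 1)
    return -slope

# Synthetic late-time tail c(t) = t**-2.5 of a breakthrough curve.
t = np.logspace(1, 3, 50)
alpha_tail = powerlaw_exponent(t, t ** -2.5)
```

In field data the fit window matters: the equivalence holds only once the asymptotic regime is reached, and the low tail concentrations are precisely where measurement confidence is poorest, which is what makes the peak-scaling signature attractive.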

  17. Shear banding leads to accelerated aging dynamics in a metallic glass

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Küchemann, Stefan; Liu, Chaoyang; Dufresne, Eric M.

    Traditionally, strain localization in metallic glasses is related to the thickness of the shear defect, which is confined to the nanometer scale. In this study, using site-specific x-ray photon correlation spectroscopy (XPCS), we reveal significantly accelerated relaxation dynamics around a shear band in a metallic glass at a length scale that is orders of magnitude larger than the defect itself. The relaxation time in the shear-band vicinity is up to ten times smaller compared to the as-cast matrix, and the relaxation dynamics occurs in a characteristic three-stage aging response that manifests itself in the temperature-dependent shape parameter known from classical stretched-exponential relaxation dynamics of disordered materials. We demonstrate that the time-dependent correlation functions describing the aging at different temperatures can be captured and collapsed using simple scaling functions. Finally, these insights highlight how a ubiquitous nano-scale strain-localization mechanism in metallic glasses leads to a fundamental change of the relaxation dynamics at the mesoscale.

  18. Economically viable large-scale hydrogen liquefaction

    NASA Astrophysics Data System (ADS)

    Cardella, U.; Decker, L.; Klein, H.

    2017-02-01

    The liquid hydrogen demand, particularly driven by clean energy applications, will rise in the near future. As industrial large scale liquefiers will play a major role within the hydrogen supply chain, production capacity will have to increase by a multiple of today’s typical sizes. The main goal is to reduce the total cost of ownership for these plants by increasing energy efficiency with innovative and simple process designs optimized for capital expenditure. New concepts must ensure a manageable plant complexity and flexible operability. In the phase of process development and selection, a dimensioning of key equipment for large scale liquefiers, such as turbines and compressors as well as heat exchangers, must be performed iteratively to ensure technological feasibility and maturity. Further critical aspects related to hydrogen liquefaction, e.g. fluid properties, ortho-para hydrogen conversion, and coldbox configuration, must be analysed in detail. This paper provides an overview on the approach, challenges and preliminary results in the development of efficient as well as economically viable concepts for large-scale hydrogen liquefaction.

  19. Shear banding leads to accelerated aging dynamics in a metallic glass

    DOE PAGES

    Küchemann, Stefan; Liu, Chaoyang; Dufresne, Eric M.; ...

    2018-01-11

    Traditionally, strain localization in metallic glasses is related to the thickness of the shear defect, which is confined to the nanometer scale. In this study, using site-specific x-ray photon correlation spectroscopy (XPCS), we reveal significantly accelerated relaxation dynamics around a shear band in a metallic glass at a length scale that is orders of magnitude larger than the defect itself. The relaxation time in the shear-band vicinity is up to ten times smaller compared to the as-cast matrix, and the relaxation dynamics occurs in a characteristic three-stage aging response that manifests itself in the temperature-dependent shape parameter known from classical stretched-exponential relaxation dynamics of disordered materials. We demonstrate that the time-dependent correlation functions describing the aging at different temperatures can be captured and collapsed using simple scaling functions. Finally, these insights highlight how a ubiquitous nano-scale strain-localization mechanism in metallic glasses leads to a fundamental change of the relaxation dynamics at the mesoscale.

  20. Pyrotechnic modeling for the NSI and pin puller

    NASA Technical Reports Server (NTRS)

    Powers, Joseph M.; Gonthier, Keith A.

    1993-01-01

    A discussion concerning the modeling of pyrotechnically driven actuators is presented in viewgraph format. The following topics are discussed: literature search, constitutive data for full-scale model, simple deterministic model, observed phenomena, and results from simple model.

  1. Sequestering the standard model vacuum energy.

    PubMed

    Kaloper, Nemanja; Padilla, Antonio

    2014-03-07

    We propose a very simple reformulation of general relativity, which completely sequesters from gravity all of the vacuum energy from a matter sector, including all loop corrections, and renders all contributions from phase transitions automatically small. The idea is to make the dimensional parameters in the matter sector functionals of the 4-volume element of the Universe. For them to be nonzero, the Universe should be finite in spacetime. If this matter is the standard model of particle physics, our mechanism prevents any of its vacuum energy, classical or quantum, from sourcing the curvature of the Universe. The mechanism is consistent with the large hierarchy between the Planck scale, electroweak scale, and curvature scale, and with early Universe cosmology, including inflation. Consequences of our proposal are that the vacuum curvature of an old and large universe is not zero, but very small, that w(DE) ≃ -1 is a transient, and that the Universe will collapse in the future.

  2. On Two-Scale Modelling of Heat and Mass Transfer

    NASA Astrophysics Data System (ADS)

    Vala, J.; Št'astník, S.

    2008-09-01

    Modelling of macroscopic behaviour of materials, consisting of several layers or components, whose microscopic (at least stochastic) analysis is available, as well as (more general) simulation of non-local phenomena, complicated coupled processes, etc., requires both deeper understanding of physical principles and development of mathematical theories and software algorithms. Starting from the (relatively simple) example of phase transformation in substitutional alloys, this paper sketches the general formulation of a nonlinear system of partial differential equations of evolution for the heat and mass transfer (useful in mechanical and civil engineering, etc.), corresponding to conservation principles of thermodynamics, both at the micro- and at the macroscopic level, and suggests an algorithm for scale-bridging, based on the robust finite element techniques. Some existence and convergence questions, namely those based on the construction of sequences of Rothe and on the mathematical theory of two-scale convergence, are discussed together with references to useful generalizations, required by new technologies.

  3. Kullback-Leibler divergence measure of intermittency: Application to turbulence

    NASA Astrophysics Data System (ADS)

    Granero-Belinchón, Carlos; Roux, Stéphane G.; Garnier, Nicolas B.

    2018-01-01

    For generic systems exhibiting power law behaviors, and hence multiscale dependencies, we propose a simple tool to analyze multifractality and intermittency, after noticing that these concepts are directly related to the deformation of a probability density function from Gaussian at large scales to non-Gaussian at smaller scales. Our framework is based on information theory and uses Shannon entropy and Kullback-Leibler divergence. We provide an extensive application to three-dimensional fully developed turbulence, seen here as a paradigmatic complex system where intermittency was historically defined and the concepts of scale invariance and multifractality were extensively studied and benchmarked. We compute our quantity on experimental Eulerian velocity measurements, as well as on synthetic processes and phenomenological models of fluid turbulence. Our approach is very general and does not require any underlying model of the system, although it can probe the relevance of such a model.
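    The central quantity here, the deformation of a probability density away from Gaussian measured by the Kullback-Leibler divergence, is easy to sketch numerically. The following is a minimal illustration, not the authors' code: the hypothetical `kl_to_gaussian` estimates the discrete KL divergence between the empirical histogram of a sample and a Gaussian of matching mean and variance; a heavy-tailed sample, standing in for small-scale velocity increments, yields a markedly larger divergence than a Gaussian one.

```python
import numpy as np

def kl_to_gaussian(x, bins=60):
    """Discrete Kullback-Leibler divergence D(P || G) between the
    empirical distribution of x and a Gaussian with the same mean
    and variance, estimated on a common histogram grid."""
    counts, edges = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = np.diff(edges)
    mu, sigma = x.mean(), x.std()
    g = np.exp(-0.5 * ((centers - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    q = g * width
    q /= q.sum()
    mask = (p > 0) & (q > 0)
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

rng = np.random.default_rng(0)
gauss = rng.normal(size=200_000)
laplace = rng.laplace(size=200_000)   # heavier tails, mimicking small-scale increments
print(kl_to_gaussian(gauss))          # close to 0
print(kl_to_gaussian(laplace))        # clearly larger
```

    In the paper's setting, the sample at each scale would be the set of velocity increments at that scale, and the divergence tracks how intermittency builds up toward small scales.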

  4. Variation in ejecta size with ejection velocity

    NASA Technical Reports Server (NTRS)

    Vickery, Ann M.

    1987-01-01

    The sizes and ranges of over 25,000 secondary craters around twelve large primaries on three different planets were measured and used to infer the size-velocity distribution of that portion of the primary crater ejecta that produced the secondaries. The ballistic equation for spherical bodies was used to convert the ranges to velocities, and the velocities and crater sizes were used in the appropriate Schmidt-Holsapple scaling relation to estimate ejecta sizes, from which the velocity exponent was determined. The exponents are generally between -1 and -13, with an average value of about -1.9. Problems with data collection made it impossible to determine a simple, unique relation between size and velocity.
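    The range-to-velocity step can be illustrated with the standard ballistic range formula for a spherical, airless body. This is a sketch only; the function name and the lunar constants are illustrative, not taken from the paper.

```python
import numpy as np

G_MOON = 1.62      # m/s^2, lunar surface gravity
R_MOON = 1.7374e6  # m, lunar radius

def ejection_velocity(range_m, g=G_MOON, R_p=R_MOON, theta=np.radians(45)):
    """Ejection velocity (m/s) for a ballistic trajectory of given ground
    range on a spherical, airless body, launched at angle theta.
    The range angle phi satisfies
        tan(phi/2) = v^2 sin(theta) cos(theta) / (g R_p - v^2 cos^2(theta)),
    which is solved here for v."""
    phi = range_m / R_p                      # ground range as an angle
    t = np.tan(phi / 2)
    v2 = g * R_p * t / (np.sin(theta) * np.cos(theta) + t * np.cos(theta) ** 2)
    return np.sqrt(v2)

# for short ranges the spherical result approaches the flat-surface v = sqrt(g R)
print(ejection_velocity(10_000.0))
```

    For small ranges this reduces to the familiar flat-surface result for a 45-degree launch, while for ranges comparable to the planetary radius the spherical correction becomes substantial.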

  5. Relative sensitivity of Normalized Difference Vegetation Index (NDVI) and Microwave Polarization Difference Index (MPDI) for vegetation and desertification monitoring

    NASA Technical Reports Server (NTRS)

    Becker, Francois; Choudhury, Bhaskar J.

    1988-01-01

    A simple equation relating the Microwave Polarization Difference Index (MPDI) and the Normalized Difference Vegetation Index (NDVI) is proposed which represents well the data obtained from Nimbus 7/SMMR at 37 GHz and NOAA/AVHRR Channels 1 and 2. It is found that there is a limit which is characteristic of a particular type of cover for which both indices are equally sensitive to the variation of vegetation, and below which MPDI is more efficient than NDVI. The results provide insight into the relationship between water content and chlorophyll absorption at pixel size scales.
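    Both indices are simple normalized differences. As a sketch, using definitions in common use (the exact normalization of MPDI varies between authors, so treat this as an assumption rather than the paper's formula):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from near-infrared and
    red reflectances (e.g. NOAA/AVHRR Channels 2 and 1)."""
    return (nir - red) / (nir + red)

def mpdi(t_v, t_h):
    """Microwave Polarization Difference Index from vertically and
    horizontally polarized brightness temperatures (e.g. SMMR 37 GHz);
    one common normalization divides by the sum of the two."""
    return (t_v - t_h) / (t_v + t_h)

# dense vegetation: high NDVI, small polarization difference
print(ndvi(0.5, 0.1), mpdi(260.0, 250.0))
```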

  6. Relation of major volcanic center concentration on Venus to global tectonic patterns

    NASA Technical Reports Server (NTRS)

    Crumpler, L. S.; Head, James W.; Aubele, Jayne C.

    1993-01-01

    Global analysis of Magellan image data indicates that a major concentration of volcanic centers covering about 40 percent of the surface of Venus occurs between the Beta, Atla, and Themis regions. Associated with this enhanced concentration are geological characteristics commonly interpreted as rifting and mantle upwelling. Interconnected low plains in an annulus around this concentration are characterized by crustal shortening and infrequent volcanic centers that may represent sites of mantle return flow and net downwelling. Together, these observations suggest the existence of relatively simple, large-scale patterns of mantle circulation similar to those associated with concentrations of intraplate volcanism on Earth.

  7. Constraints on the lithospheric structure of Venus from mechanical models and tectonic surface features

    NASA Technical Reports Server (NTRS)

    Zuber, Maria T.

    1987-01-01

    The evidence for the extensional or compressional origins of some prominent Venusian surface features disclosed by radar images is discussed. Using simple models, the hypothesis that the observed length scales (10-20 km and 100-300 km) of deformations are controlled by dominant wavelengths arising from unstable compression or extension of the Venus lithosphere is tested. The results show that the existence of tectonic features that exhibit both length scales can be explained if, at the time of deformation, the lithosphere consisted of a crust that was relatively strong near the surface and weak at its base, and an upper mantle that was stronger than or nearly comparable in strength to the upper crust.

  8. Minimal analytical model for undular tidal bore profile; quantum and Hawking effect analogies

    NASA Astrophysics Data System (ADS)

    Berry, M. V.

    2018-05-01

    Waves travelling up-river, driven by high tides, often consist of a smooth front followed by a series of undulations. A simple approximate theory gives the rigidly travelling profile of such ‘undular hydraulic jumps’, up to scaling, as the integral of the Airy function; applying self-consistency fixes the scaling. The theory combines the standard hydraulic jump with ideas borrowed from quantum physics: Hamiltonian operators and zero-energy eigenfunctions. There is an analogy between undular bores and the Hawking effect in relativity: both concern waves associated with horizons. ‘Physics is not just Concerning the Nature of Things, but Concerning the Interconnectedness of all the Natures of Things’(Sir Charles Frank, retirement speech 1976).

  9. Reactivating dynamics for the susceptible-infected-susceptible model: a simple method to simulate the absorbing phase

    NASA Astrophysics Data System (ADS)

    Macedo-Filho, A.; Alves, G. A.; Costa Filho, R. N.; Alves, T. F. A.

    2018-04-01

    We investigated the susceptible-infected-susceptible model on a square lattice in the presence of a conjugated field, based on recently proposed reactivating dynamics. Reactivating dynamics consists of reactivating the infection by adding one infected site, chosen randomly when the infection dies out, which prevents the dynamics from becoming trapped in the absorbing state. We show that the reactivating dynamics can be interpreted as the usual dynamics performed in the presence of an effective conjugated field, named the reactivating field. The reactivating field scales as the inverse of the number of lattice vertices n, which vanishes in the thermodynamic limit and does not affect any scaling properties, including those related to the conjugated field.

  10. General theory of the plasmoid instability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Comisso, L.; Lingam, M.; Huang, Y. -M.

    2016-10-05

    A general theory of the onset and development of the plasmoid instability is formulated by means of a principle of least time. We derive scaling relations for the final aspect ratio, transition time to rapid onset, growth rate, and number of plasmoids, which depend on the initial perturbation amplitude (ŵ₀), the characteristic rate of current sheet evolution (1/τ), and the Lundquist number (S). They are not simple power laws, but are proportional to S^α τ^β [ln f(S, τ, ŵ₀)]^σ. Finally, the detailed dynamics of the instability is elucidated and shown to comprise a period of quiescence followed by sudden growth over a short time scale.

  11. Semiconductor materials for high frequency solid state sources

    NASA Astrophysics Data System (ADS)

    Grubin, H. L.

    1983-03-01

    The broad goal of the subject contract is to suggest candidate materials for high frequency device operation. During the initial phase of the study, attention has been focused on defining the general role of the band structure and associated scattering processes in determining the response of semiconductors to transient high-speed electrical signals. Moments of the Boltzmann transport equation form the basis of the study, and the scattering rates define the semiconductor under study. The selection of semiconductor materials proceeds from a simple, yet significant, set of scaling principles. During the first quarter, scaling was associated with what can formally be identified as velocity invariants, but which in more practical terms identifies the relative speed advantages of, e.g., InP over GaAs.

  12. Lattice functions, wavelet aliasing, and SO(3) mappings of orthonormal filters

    NASA Astrophysics Data System (ADS)

    John, Sarah

    1998-01-01

    A formulation of multiresolution in terms of a family of dyadic lattices {Sj;j∈Z} and filter matrices Mj⊂U(2)⊂GL(2,C) illuminates the role of aliasing in wavelets and provides exact relations between scaling and wavelet filters. By showing the {DN;N∈Z+} collection of compactly supported, orthonormal wavelet filters to be strictly SU(2)⊂U(2), its representation in the Euler angles of the rotation group SO(3) establishes several new results: a 1:1 mapping of the {DN} filters onto a set of orbits on the SO(3) manifold; an equivalence of D∞ to the Shannon filter; and a simple new proof for a criterion ruling out pathologically scaled nonorthonormal filters.
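    The "exact relations between scaling and wavelet filters" can be illustrated with the Daubechies D4 pair: the wavelet filter is the alternating-sign reversal of the scaling filter, and both satisfy the orthonormality conditions. The following is a small numerical check of the standard textbook identities, not the paper's SO(3) construction:

```python
import math

s3, norm = math.sqrt(3), 4 * math.sqrt(2)
# Daubechies D4 scaling filter
h = [(1 + s3) / norm, (3 + s3) / norm, (3 - s3) / norm, (1 - s3) / norm]
# quadrature-mirror wavelet filter: g_n = (-1)^n h_{N-1-n}
g = [(-1) ** n * h[len(h) - 1 - n] for n in range(len(h))]

print(sum(h))                                 # sqrt(2): lowpass normalization
print(sum(c * c for c in h))                  # 1: unit energy
print(h[0] * h[2] + h[1] * h[3])              # 0: orthogonality under even shifts
print(sum(hc * gc for hc, gc in zip(h, g)))   # 0: scaling filter orthogonal to wavelet filter
```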

  13. A Simple Decontamination Approach Using Hydrogen ...

    EPA Pesticide Factsheets

    Journal article. Aims: To evaluate the use of relatively low levels of hydrogen peroxide vapor (HPV) for the inactivation of Bacillus anthracis spores within an indoor environment. Methods and Results: Laboratory-scale decontamination tests were conducted using bacterial spores of both B. anthracis Ames and Bacillus atrophaeus inoculated onto several types of materials. Pilot-scale tests were also conducted using a larger chamber furnished as an indoor office. Commercial off-the-shelf (COTS) humidifiers filled with aqueous solutions of 3% or 8% hydrogen peroxide were used to generate the HPV inside the mock office. The spores were exposed to the HPV for periods ranging from 8 hours up to one week. Conclusions: Four to seven day exposures to low levels of HPV (average air concentrations of approximately 5-10 parts per million) were effective in inactivating B. anthracis spores on multiple materials. The HPV can be generated with COTS humidifiers and household H2O2 solutions. With the exception of one test/material, B. atrophaeus spores were equally or more resistant to HPV inactivation compared to those from B. anthracis Ames. Significance and Impact of Study: This simple and effective decontamination method is another option that could be widely applied in the event of a B. anthracis spore release.

  14. Refined Synthesis of 2,3,4,5-Tetrahydro-1,3,3-trimethyldipyrrin, a Deceptively Simple Precursor to Hydroporphyrins

    PubMed Central

    Ptaszek, Marcin; Bhaumik, Jayeeta; Kim, Han-Je; Taniguchi, Masahiko; Lindsey, Jonathan S.

    2008-01-01

    2,3,4,5-Tetrahydro-1,3,3-trimethyldipyrrin (1) is a crucial building block in the rational synthesis of chlorins and oxochlorins. The prior 5-step synthesis of 1 from pyrrole-2-carboxaldehyde (2) employed relatively simple and well-known reactions yet suffered from several drawbacks, including limited scale (≥ 0.5 g of 1 per run). A streamlined preparation of 1 has been developed that entails four steps: (i) nitro-aldol condensation of 2 and nitromethane under neat conditions to give 2-(2-nitrovinyl)pyrrole (3), (ii) reduction of 3 with NaBH4 to give 2-(2-nitroethyl)pyrrole (4), (iii) Michael addition of 4 with mesityl oxide under neat conditions or at high concentration to give γ-nitrohexanonepyrrole 5, and (iv) reductive cyclization of 5 with zinc/ammonium formate to give 1. Several multistep transformations have been established, including the direct conversion of 2 → 1. The advantages of the new procedures include (1) fewer steps, (2) avoidance of several problematic reagents, (3) diminished consumption of solvents and reagents, (4) lessened reliance on chromatography, and (5) scalability. The new procedures facilitate the preparation of 1 at the multigram scale. PMID:19132135

  15. The effect of the solar field reversal on the modulation of galactic cosmic rays

    NASA Technical Reports Server (NTRS)

    Thomas, B. T.; Goldstein, B. E.

    1983-01-01

    There is now a growing awareness that solar cycle related changes in the large-scale structure of the interplanetary magnetic field (IMF) may play an important role in the modulation of galactic cosmic rays. To date, attention has focused on two aspects of the magnetic field structure: large scale compression regions produced by fast solar wind streams and solar flares, both of which are known to vary in intensity and number over the solar cycle, and the variable warp of the heliospheric current sheet. It is suggested that another feature of the solar cycle is worthy of consideration: the field reversal itself. If the Sun reverses its polarity by simply overturning the heliospheric current sheet (northern fields migrating southward and vice versa), then there may well be an effect on cosmic ray intensity. However, such a simple picture of solar reversal seems improbable. Observations of the solar corona suggest the existence of not one but several current sheets in the heliosphere at solar maximum. Results are given from a simple calculation demonstrating that the resulting variation in cosmic ray intensity can be as large as that actually observed over the solar cycle.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu Shengwei; Yu Jiaguo

    Bi{sub 2}WO{sub 6} hierarchical multilayered flower-like assemblies are fabricated on a large scale by a simple hydrothermal method in the presence of polymeric poly(sodium 4-styrenesulfonate). Such 3D Bi{sub 2}WO{sub 6} assemblies are constructed from orderly arranged 2D layers, which are further composed of a large number of interconnected nanoplates with a mean side length of ca. 50 nm. The bimodal mesopores associated with such hierarchical assembly exhibit peak mesopore size of ca. 4 nm for the voids within a layer, and peak mesopore size of ca. 40 nm corresponding to the interspaces between stacked layers, respectively. The formation process is discussed on the basis of the results of time-dependent experiments, which support a novel 'coupled cooperative assembly and localized ripening' formation mechanism. More interestingly, we have noticed that the collective effect related to such hierarchical assembly induces a significantly enhanced optical absorbance in the UV-visible region. This work may shed some light on the design of complex architectures and exploitation of their potential applications. - Graphical abstract: Bi{sub 2}WO{sub 6} hierarchical multilayered flower-like assemblies are fabricated on a large scale by a simple hydrothermal method in the presence of polymeric poly(sodium 4-styrenesulfonate).

  17. Understanding the core-halo relation of quantum wave dark matter from 3D simulations.

    PubMed

    Schive, Hsi-Yu; Liao, Ming-Hsuan; Woo, Tak-Pong; Wong, Shing-Kwong; Chiueh, Tzihong; Broadhurst, Tom; Hwang, W-Y Pauchy

    2014-12-31

    We examine the nonlinear structure of gravitationally collapsed objects that form in our simulations of wavelike cold dark matter, described by the Schrödinger-Poisson (SP) equation with a particle mass ∼10^−22 eV. A distinct gravitationally self-bound solitonic core is found at the center of every halo, with a profile quite different from cores modeled in the warm or self-interacting dark matter scenarios. Furthermore, we show that each solitonic core is surrounded by an extended halo composed of large fluctuating dark matter granules which modulate the halo density on a scale comparable to the diameter of the solitonic core. The scaling symmetry of the SP equation and the uncertainty principle tightly relate the core mass to the halo specific energy, which, in the context of cosmological structure formation, leads to a simple scaling between core mass (Mc) and halo mass (Mh), Mc ∝ a^−1/2 Mh^1/3, where a is the cosmic scale factor. We verify this scaling relation by (i) examining the internal structure of a statistical sample of virialized halos that form in our 3D cosmological simulations and by (ii) merging multiple solitons to create individual virialized objects. Sufficient simulation resolution is achieved by adaptive mesh refinement and graphics-processing-unit acceleration. From this scaling relation, present dwarf satellite galaxies are predicted to have kiloparsec-sized cores and a minimum mass of ∼10^8 M⊙, capable of solving the small-scale controversies in the cold dark matter model. Moreover, galaxies of 2×10^12 M⊙ at z=8 should have massive solitonic cores of ∼2×10^9 M⊙ within ∼60 pc. Such cores can provide a favorable local environment for funneling the gas that leads to the prompt formation of early stellar spheroids and quasars.
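    At fixed epoch the relation implies that core mass grows only as the cube root of halo mass. A trivial sketch of the scaling, with the normalization constant omitted since only ratios are fixed by Mc ∝ a^−1/2 Mh^1/3 (the helper name is illustrative):

```python
def core_mass_ratio(mh_ratio, a_ratio=1.0):
    """Ratio of solitonic core masses Mc2/Mc1 implied by the scaling
    Mc ∝ a^(-1/2) Mh^(1/3), given Mh2/Mh1 and a2/a1."""
    return a_ratio ** -0.5 * mh_ratio ** (1.0 / 3.0)

# a halo 1000x more massive at the same epoch hosts a core only 10x heavier
print(core_mass_ratio(1000.0))
```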

  18. LHC-scale left-right symmetry and unification

    NASA Astrophysics Data System (ADS)

    Arbeláez, Carolina; Romão, Jorge C.; Hirsch, Martin; Malinský, Michal

    2014-02-01

    We construct a comprehensive list of nonsupersymmetric standard model extensions with a low-scale left-right (LR)-symmetric intermediate stage that may be obtained as simple low-energy effective theories within a class of renormalizable SO(10) grand unified theories. Unlike the traditional "minimal" LR models many of our example settings support a perfect gauge coupling unification even if the LR scale is in the LHC domain at a price of only (a few copies of) one or two types of extra fields pulled down to the TeV-scale ballpark. We discuss the main aspects of a potentially realistic model building conforming the basic constraints from the quark and lepton sector flavor structure, proton decay limits, etc. We pay special attention to the theoretical uncertainties related to the limited information about the underlying unified framework in the bottom-up approach, in particular, to their role in the possible extraction of the LR-breaking scale. We observe a general tendency for the models without new colored states in the TeV domain to be on the verge of incompatibility with the proton stability constraints.

  19. Droplet formation and scaling in dense suspensions

    PubMed Central

    Miskin, Marc Z.; Jaeger, Heinrich M.

    2012-01-01

    When a dense suspension is squeezed from a nozzle, droplet detachment can occur similar to that of pure liquids. While in pure liquids the process of droplet detachment is well characterized through self-similar profiles and known scaling laws, we show here that the mere presence of particles causes suspensions to break up in a new fashion. Using high-speed imaging, we find that detachment of a suspension drop is described by a power law; specifically, the neck minimum radius, rm, scales as a power of the time remaining until breakup at τ = 0. We demonstrate data collapse in a variety of particle/liquid combinations, packing fractions, solvent viscosities, and initial conditions. We argue that this scaling is a consequence of particles deforming the neck surface, thereby creating a pressure that is balanced by inertia, and show how it emerges from topological constraints that relate particle configurations with macroscopic Gaussian curvature. This new type of scaling, uniquely enforced by geometry and regulated by the particles, displays memory of its initial conditions, fails to be self-similar, and has implications for the pressure given at generic suspension interfaces. PMID:22392979

  20. Collaborative Research: failure of RockMasses from Nucleation and Growth of Microscopic Defects and Disorder

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, William

    Over the 21 years of funding we have pursued several projects related to earthquakes, damage and nucleation. We developed simple models of earthquake faults which we studied to understand Gutenberg-Richter scaling, foreshocks and aftershocks, the effect of spatial structure of the faults, and its interaction with underlying self organization and phase transitions. In addition, we studied the formation of amorphous solids via the glass transition. We have also studied nucleation, with a particular concentration on transitions in systems with a spatial symmetry change. In addition, we investigated the nucleation process in models that mimic rock masses. We obtained the structure of the droplet in both homogeneous and heterogeneous nucleation. We also investigated the effect of defects or asperities on the nucleation of failure in simple models of earthquake faults.

  1. Effects of liquid layers and distribution patterns on three-phase saturation and relative permeability relationships: a micromodel study.

    PubMed

    Tsai, Jui-Pin; Chang, Liang-Cheng; Hsu, Shao-Yiu; Shan, Hsin-Yu

    2017-12-01

    In the current study, we used micromodel experiments to study three-phase fluid flow in porous media. In contrast to previous studies, we simultaneously observed and measured pore-scale fluid behavior and three-phase constitutive relationships with digital image acquisition/analysis, fluid pressure control, and permeability assays. Our results showed that the fluid layers significantly influenced pore-scale, three-phase fluid displacement as well as water relative permeability. At low water saturation, water relative permeability not only depended on water saturation but also on the distributions of air and diesel. The results also indicate that the relative permeability-saturation model proposed by Parker et al. (1987) could not completely describe the experimental data from our three-phase flow experiments, because such models ignore the effects of phase distribution. A simple bundle-of-tubes model shows that the water relative permeability was proportional to the number of apparently continuous water paths before the critical stage in which no apparently continuous water flow path could be found. Our findings constitute additional information about the essential constitutive relationships involved in both the understanding and the modeling of three-phase flows in porous media.
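    The bundle-of-tubes argument can be sketched as follows, assuming Hagen-Poiseuille conductance (∝ r⁴) per tube; the function and the example values are illustrative, not the paper's data:

```python
def krw_bundle(radii, water_filled):
    """Water relative permeability for a bundle-of-tubes sketch:
    each tube's conductance scales as r^4 (Hagen-Poiseuille), so krw is
    the r^4-weighted fraction of tubes forming continuous water paths."""
    num = sum(r ** 4 for r, filled in zip(radii, water_filled) if filled)
    den = sum(r ** 4 for r in radii)
    return num / den

radii = [1.0, 1.5, 2.0, 3.0]
# only the two smallest tubes carry continuous water paths
print(krw_bundle(radii, [True, True, False, False]))
```

    Because conductance is weighted by r⁴, losing the largest continuous water paths collapses krw far faster than the drop in water saturation alone would suggest, which is consistent with the strong phase-distribution dependence reported above.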

  2. An optimal modification of a Kalman filter for time scales

    NASA Technical Reports Server (NTRS)

    Greenhall, C. A.

    2003-01-01

    The Kalman filter in question, which was implemented in the time scale algorithm TA(NIST), produces time scales with poor short-term stability. A simple modification of the error covariance matrix allows the filter to produce time scales with good stability at all averaging times, as verified by simulations of clock ensembles.

  3. On wildfire complexity, simple models and environmental templates for fire size distributions

    NASA Astrophysics Data System (ADS)

    Boer, M. M.; Bradstock, R.; Gill, M.; Sadler, R.

    2012-12-01

    Vegetation fires affect some 370 Mha annually. At global and continental scales, fire activity follows predictable spatiotemporal patterns driven by gradients and seasonal fluctuations of primary productivity and evaporative demand that set constraints for fuel accumulation rates and fuel dryness, two key ingredients of fire. At regional scales, fires are also known to affect some landscapes more than others and, within landscapes, to occur preferentially in some sectors (e.g. wind-swept ridges) and rarely in others (e.g. wet gullies). Another common observation is that small fires occur relatively frequently yet collectively burn far less country than the relatively infrequent large fires. These patterns of fire activity are well known to management agencies and consistent with their (informal) models of how the basic drivers and constraints of fire (i.e. fuels, ignitions, weather) vary in time and space across the landscape. The statistical behaviour of these landscape fire patterns has excited the (academic) research community by showing some consistency with that of complex dynamical systems poised at a phase transition. The common finding that the frequency-size distributions of actual fires follow power laws that resemble those produced by simple cellular models from statistical mechanics has been interpreted as evidence that flammable landscapes operate as self-organising systems, with scale invariant fire size distributions emerging 'spontaneously' from simple rules of contagious fire spread and a strong feedback between fires and fuel patterns. In this paper we argue that the resemblance of simulated and actual fire size distributions is an example of equifinality; that is, fires in model landscapes and actual landscapes may show similar statistical behaviour, but this is reached by qualitatively different pathways or controlling mechanisms. We support this claim with two key findings regarding simulated fire spread mechanisms and fire-fuel feedbacks.
Firstly, we demonstrate that the power law behaviour of fire size distributions in the widely used Drossel and Schwabl (1992) Forest Fire Model (FFM) is strictly conditional on simulating fire spread as a cell-to-cell contagion over a fixed distance; the invariant scaling of fire sizes breaks down under the slightest variation in that distance, suggesting that pattern formation in the FFM is irreconcilable with the reality of disparate rates and modes of fire spread observed in the field. Secondly, we review field evidence showing that fuel age effects on the probability of fire spread, a key assumption in simulation models like the FFM, do not generally apply across flammable environments. Finally, we explore alternative explanations for the formation of scale invariant fire sizes in real landscapes. Using observations from southern Australian forest regions we demonstrate that the spatiotemporal patterns of fuel dryness and magnitudes of fire driving weather events set strong environmental templates for regional fire size distributions.
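    For reference, the Drossel and Schwabl (1992) FFM discussed above can be reproduced in a few lines. This sketch uses illustrative parameters (p, f, lattice size) and instantaneous cluster burning via nearest-neighbour contagion, which is exactly the fixed-distance spread rule whose fragility the paper highlights.

```python
import random
from collections import deque

def forest_fire_model(L=64, p=0.05, f=0.0005, sweeps=400, seed=2):
    """Drossel-Schwabl forest-fire model on a periodic L x L lattice:
    empty sites grow trees with prob p, trees are struck by lightning
    with prob f, and a struck tree's whole connected cluster burns
    instantly. Returns the list of fire sizes."""
    rng = random.Random(seed)
    trees = [[False] * L for _ in range(L)]
    sizes = []
    for _ in range(sweeps):
        for _ in range(L * L):
            x, y = rng.randrange(L), rng.randrange(L)
            if not trees[y][x]:
                if rng.random() < p:
                    trees[y][x] = True
            elif rng.random() < f:
                # burn the entire connected cluster of trees (BFS)
                queue, burned = deque([(x, y)]), 0
                trees[y][x] = False
                while queue:
                    cx, cy = queue.popleft()
                    burned += 1
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = (cx + dx) % L, (cy + dy) % L
                        if trees[ny][nx]:
                            trees[ny][nx] = False
                            queue.append((nx, ny))
                sizes.append(burned)
    return sizes

sizes = forest_fire_model()
print(len(sizes), max(sizes))   # many small fires, a few very large ones
```

    Plotting the frequency-size distribution of `sizes` on log-log axes recovers the broad, power-law-like behaviour that the paper argues is conditional on the fixed-distance contagion rule.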

  4. A Novel Optical/digital Processing System for Pattern Recognition

    NASA Technical Reports Server (NTRS)

    Boone, Bradley G.; Shukla, Oodaye B.

    1993-01-01

    This paper describes two processing algorithms that can be implemented optically: the Radon transform and angular correlation. These two algorithms can be combined in one optical processor to extract all the basic geometric and amplitude features from objects embedded in video imagery. We show that the internal amplitude structure of objects is recovered by the Radon transform, which is a well-known result, but, in addition, we show simulation results that calculate angular correlation, a simple but unique algorithm that extracts object boundaries from suitably thresholded images, from which length, width, area, aspect ratio, and orientation can be derived. In addition to circumventing scale and rotation distortions, these simulations indicate that the features derived from the angular correlation algorithm are relatively insensitive to tracking shifts and image noise. Some optical architecture concepts, including one based on micro-optical lenslet arrays, have been developed to implement these algorithms. Simulation test and evaluation using simple synthetic object data will be described, including results of a study that uses object boundaries (derivable from angular correlation) to classify simple objects using a neural network.
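    The Radon transform step is easy to emulate digitally (the paper's processor computes it optically, in hardware). A minimal sketch using rotation plus column sums, with an illustrative square object:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(image, angles):
    """Discrete Radon transform sketch: rotate the image and sum along
    columns, producing one projection per angle."""
    return np.stack([rotate(image, a, reshape=False, order=1).sum(axis=0)
                     for a in angles])

# a centered square: its projection is widest when viewed at 45 degrees
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sino = radon(img, angles=[0, 45])
print(sino.shape)   # one 64-sample projection per angle
```

    The projection profiles directly expose the internal amplitude structure that the abstract refers to; orientation and aspect ratio follow from how the projection width varies with angle.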

  5. Perspective: Sloppiness and emergent theories in physics, biology, and beyond.

    PubMed

    Transtrum, Mark K; Machta, Benjamin B; Brown, Kevin S; Daniels, Bryan C; Myers, Christopher R; Sethna, James P

    2015-07-07

    Large scale models of physical phenomena demand the development of new statistical and computational tools in order to be effective. Many such models are "sloppy," i.e., exhibit behavior controlled by a relatively small number of parameter combinations. We review an information theoretic framework for analyzing sloppy models. This formalism is based on the Fisher information matrix, which is interpreted as a Riemannian metric on a parameterized space of models. Distance in this space is a measure of how distinguishable two models are based on their predictions. Sloppy model manifolds are bounded with a hierarchy of widths and extrinsic curvatures. The manifold boundary approximation can extract the simple, hidden theory from complicated sloppy models. We attribute the success of simple effective models in physics to the same phenomenon: they likewise emerge from complicated processes exhibiting a low effective dimensionality. We discuss the ramifications and consequences of sloppy models for biochemistry and science more generally. We suggest that our complex world is understandable for the same fundamental reason: simple theories of macroscopic behavior are hidden inside complicated microscopic processes.
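    Sloppiness is easy to exhibit numerically. In this hedged sketch (a standard toy model, not one taken from the paper), the Fisher information matrix of a two-exponential model has eigenvalues spanning orders of magnitude because the two decay rates are nearly redundant:

```python
import numpy as np

# toy "sloppy" model: y(t) = exp(-k1 t) + exp(-k2 t) with similar rates
t = np.linspace(0, 5, 50)
theta = np.array([1.0, 1.2])

def jacobian(theta, t):
    """d y / d k_i for y(t) = sum_i exp(-k_i t)."""
    return np.stack([-t * np.exp(-k * t) for k in theta], axis=1)

J = jacobian(theta, t)
fim = J.T @ J                      # Fisher information matrix (unit noise)
eigs = np.linalg.eigvalsh(fim)     # ascending eigenvalues
print(eigs[-1] / eigs[0])          # stiff/sloppy ratio spans orders of magnitude
```

    The large eigenvalue corresponds to the stiff combination (roughly the mean rate), the small one to the sloppy combination (the rate difference), which is barely constrained by the data.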

  6. Robust Polypropylene Fabrics Super-Repelling Various Liquids: A Simple, Rapid and Scalable Fabrication Method by Solvent Swelling.

    PubMed

    Zhu, Tang; Cai, Chao; Duan, Chunting; Zhai, Shuai; Liang, Songmiao; Jin, Yan; Zhao, Ning; Xu, Jian

    2015-07-01

    A simple, rapid (10 s) and scalable method to fabricate superhydrophobic polypropylene (PP) fabrics is developed by swelling the fabrics in cyclohexane/heptane mixture at 80 °C. The recrystallization of the swollen macromolecules on the fiber surface contributes to the formation of submicron protuberances, which increase the surface roughness dramatically and result in superhydrophobic behavior. The superhydrophobic PP fabrics possess excellent repellency to blood, urine, milk, coffee, and other common liquids, and show good durability and robustness, such as remarkable resistances to water penetration, abrasion, acidic/alkaline solution, and boiling water. The excellent comprehensive performance of the superhydrophobic PP fabrics indicates their potential applications as oil/water separation materials, protective garments, diaper pads, or other medical and health supplies. This simple, fast and low cost method operating at a relatively low temperature is superior to other reported techniques for fabricating superhydrophobic PP materials as far as large scale manufacturing is concerned. Moreover, the proposed method is applicable for preparing superhydrophobic PP films and sheets as well.

  7. A simple parameter can switch between different weak-noise-induced phenomena in a simple neuron model

    NASA Astrophysics Data System (ADS)

    Yamakou, Marius E.; Jost, Jürgen

    2017-10-01

    In recent years, several, apparently quite different, weak-noise-induced resonance phenomena have been discovered. Here, we show that at least two of them, self-induced stochastic resonance (SISR) and inverse stochastic resonance (ISR), can be related by a simple parameter switch in one of the simplest models, the FitzHugh-Nagumo (FHN) neuron model. We consider a FHN model with a unique fixed point perturbed by synaptic noise. Depending on the stability of this fixed point and whether it is located to either the left or right of the fold point of the critical manifold, two distinct weak-noise-induced phenomena, either SISR or ISR, may emerge. SISR is more robust to parametric perturbations than ISR, and the coherent spike train generated by SISR is more robust than that generated deterministically. ISR also depends on the location of initial conditions and on the time-scale separation parameter of the model equation. Our results could also explain why real biological neurons having similar physiological features and synaptic inputs may encode very different information.
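    The noise-perturbed FHN model described above can be sketched with Euler-Maruyama integration. The parameter values below are illustrative assumptions (chosen so the fixed point is stable and the system is excitable), not those of the paper:

```python
import math
import random

def fhn_spike_count(eps=0.05, a=1.05, sigma=0.1, dt=0.001, T=500.0, seed=3):
    """Euler-Maruyama integration of a stochastic FitzHugh-Nagumo neuron,
        eps * dv = (v - v^3/3 - w) dt
        dw      = (v + a) dt + sigma dW,
    started at its stable fixed point (excitable regime for a > 1).
    Counts spikes as upward crossings of v = 1, with hysteresis at v = 0."""
    rng = random.Random(seed)
    v, w = -a, -a + a ** 3 / 3          # deterministic fixed point
    spikes, above = 0, False
    sq = sigma * math.sqrt(dt)
    for _ in range(int(T / dt)):
        dv = (v - v ** 3 / 3 - w) * (dt / eps)   # fast variable
        dw = (v + a) * dt + sq * rng.gauss(0.0, 1.0)
        v, w = v + dv, w + dw
        if v > 1.0 and not above:
            spikes, above = spikes + 1, True
        elif v < 0.0:
            above = False
    return spikes

print(fhn_spike_count())
```

    Sweeping `a` (the fixed point location relative to the fold) and `sigma` in such a sketch is one way to see qualitatively different noise-induced behaviours appear under a single parameter switch.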

  8. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is the problem of assigning tasks to workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the Type-E variant (SALB-E), since it is the most general and complex case: SALB-E considers the number of workstations and the cycle time simultaneously in order to maximise line efficiency. This paper reviews previous work on optimising the SALB-E problem, including the Genetic Algorithm approaches that have been applied to it. The review found that none of the existing works address resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to improved productivity in real industrial applications.
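
    To make the Type-E objective concrete, here is a small, hypothetical greedy sketch of station-oriented line balancing: tasks are packed into stations subject to precedence and a given cycle time, and line efficiency is computed as total task time divided by (number of stations x cycle time). A real SALB-E solver would additionally search over cycle times; this only illustrates the feasibility and efficiency calculations.

```python
def balance_line(tasks, precedence, cycle_time):
    """Greedy station-oriented heuristic for simple assembly line balancing.
    tasks: {task: processing time}; precedence: {task: [predecessors]}."""
    unassigned = dict(tasks)
    stations = []
    while unassigned:
        load, station = 0.0, []
        while True:
            # A task fits if it respects the cycle time and all of its
            # predecessors have already been assigned (popped from unassigned).
            feasible = [t for t, p in unassigned.items()
                        if load + p <= cycle_time
                        and all(pre not in unassigned
                                for pre in precedence.get(t, []))]
            if not feasible:
                if not station:
                    raise ValueError("a task's time exceeds the cycle time")
                break
            t = max(feasible, key=lambda t: unassigned[t])  # longest-task-first
            station.append(t)
            load += unassigned.pop(t)
        stations.append(station)
    efficiency = sum(tasks.values()) / (len(stations) * cycle_time)
    return stations, efficiency

stations, efficiency = balance_line(
    {'a': 3, 'b': 4, 'c': 2, 'd': 5},
    {'b': ['a'], 'c': ['a'], 'd': ['b', 'c']},
    cycle_time=7)
```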

  9. Allometric scaling law in a simple oxygen exchanging network: possible implications on the biological allometric scaling laws.

    PubMed

    Santillán, Moisés

    2003-07-21

    A simple model of an oxygen exchanging network is presented and studied. This network's task is to transfer a given oxygen rate from a source to an oxygen consuming system. It consists of a pipeline that interconnects the oxygen consuming system and the reservoir, and of a fluid, the active oxygen transporting element, moving through the pipeline. The network's optimal design (total pipeline surface) and dynamics (volumetric flow of the oxygen transporting fluid), which minimize the energy rate expended in moving the fluid, are calculated in terms of the oxygen exchange rate, the pipeline length, and the pipeline cross-section. After the oxygen exchanging network is optimized, the energy converting system is shown to satisfy a 3/4-like allometric scaling law, based upon the assumption that its performance regime is scale invariant as well as on some feasible geometric scaling assumptions. Finally, the possible implications of this result for the allometric scaling properties observed elsewhere in living beings are discussed.
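
    The 3/4-power law itself is easy to visualize: on synthetic data obeying B = a*M^(3/4), the exponent is recovered as the slope of a log-log fit. This is a generic illustration of allometric scaling, not the network optimization carried out in the paper; the prefactor a = 2.0 is an arbitrary assumption.

```python
import numpy as np

# Synthetic metabolic-rate data obeying the allometric law B = a * M^(3/4);
# a least-squares fit in log-log coordinates recovers the exponent as the slope.
M = np.logspace(-2, 4, 50)      # body mass, arbitrary units
B = 2.0 * M**0.75               # assumed prefactor a = 2.0
slope, intercept = np.polyfit(np.log(M), np.log(B), 1)
```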

  10. Attribution of regional flood changes based on scaling fingerprints

    NASA Astrophysics Data System (ADS)

    Viglione, A.; Merz, B.; Dung, N.; Parajka, J.; Nester, T.; Bloeschl, G.

    2017-12-01

    Changes in the river flood regime may be due to atmospheric processes (e.g., increasing precipitation), catchment processes (e.g., soil compaction associated with land use change), and river system processes (e.g., loss of retention volume in the floodplains). We propose a framework for attributing flood changes to these drivers based on a regional analysis. We exploit the scaling characteristics (i.e., fingerprints) with catchment area of the effects of the drivers on flood changes. The estimation of their relative contributions is framed in Bayesian terms. Analysis of a synthetic, controlled case suggests that the accuracy of the regional attribution increases with increasing number of sites and record lengths, decreases with increasing regional heterogeneity, increases with increasing difference of the scaling fingerprints, and decreases with an increase of their prior uncertainty. The applicability of the framework is illustrated for a case study set in Austria, where positive flood trends have been observed at many sites in the past decades. The individual scaling fingerprints related to the atmospheric, catchment, and river system processes are estimated from rainfall data and simple hydrological modeling. Although the distributions of the contributions are rather wide, the attribution identifies precipitation change as the main driver of flood change in the study region.

  11. A generalized watershed disturbance-invertebrate relation applicable in a range of environmental settings across the continental United States

    USGS Publications Warehouse

    Steuer, Jeffrey J.

    2010-01-01

    It is widely recognized that urbanization can affect ecological conditions in aquatic systems; numerous studies have identified impervious surface cover as an indicator of urban intensity and as an index of development at the watershed, regional, and national scales. Watershed percent imperviousness, a commonly understood urban metric, was used as the basis for a generalized watershed disturbance metric that, when applied in conjunction with weighted percent agriculture and percent grassland, predicted stream biotic conditions based on Ephemeroptera, Plecoptera, and Trichoptera (EPT) richness across a wide range of environmental settings. Data were collected in streams that encompassed a wide range of watershed area (4.4-1,714 km²), precipitation (38-204 cm/yr), and elevation (31-2,024 m) conditions. Nevertheless, the simple 3-landcover disturbance metric accounted for 58% of the variability in EPT richness across the 261 nationwide sites. On the metropolitan area scale, the relationship r ranged from 0.04 to 0.74. At disturbance values 15. Future work may incorporate watershed management practices within the disturbance metric, further increasing the management applicability of the relation. Relations developed on a regional or metropolitan area scale are likely to be stronger than geographically generalized models, as was found for these EPT richness relations. However, broad spatial models are able to provide much-needed understanding in unmonitored areas and provide initial guidance for stream potential.

  12. Small-Scale and Low Cost Electrodes for "Standard" Reduction Potential Measurements

    ERIC Educational Resources Information Center

    Eggen, Per-Odd; Kvittingen, Lise

    2007-01-01

    The construction of three simple and inexpensive electrodes (hydrogen, chlorine, and copper) is described. This simple method encourages students to construct their own electrodes and helps them better understand precipitation and other electrochemistry concepts.

  13. Flight Research into Simple Adaptive Control on the NASA FAST Aircraft

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.

    2011-01-01

    A series of simple adaptive controllers with varying levels of complexity were designed, implemented and flight tested on the NASA Full-Scale Advanced Systems Testbed (FAST) aircraft. Lessons learned from the development and flight testing are presented.

  14. Type-curve estimation of statistical heterogeneity

    NASA Astrophysics Data System (ADS)

    Neuman, Shlomo P.; Guadagnini, Alberto; Riva, Monica

    2004-04-01

    The analysis of pumping tests has traditionally relied on analytical solutions of groundwater flow equations in relatively simple domains, consisting of one or at most a few units having uniform hydraulic properties. Recently, attention has been shifting toward methods and solutions that would allow one to characterize subsurface heterogeneities in greater detail. On one hand, geostatistical inverse methods are being used to assess the spatial variability of parameters, such as permeability and porosity, on the basis of multiple cross-hole pressure interference tests. On the other hand, analytical solutions are being developed to describe the mean and variance (first and second statistical moments) of flow to a well in a randomly heterogeneous medium. We explore numerically the feasibility of using a simple graphical approach (without numerical inversion) to estimate the geometric mean, integral scale, and variance of local log transmissivity on the basis of quasi steady state head data when a randomly heterogeneous confined aquifer is pumped at a constant rate. By local log transmissivity we mean a function varying randomly over horizontal distances that are small in comparison with a characteristic spacing between pumping and observation wells during a test. Experimental evidence and hydrogeologic scaling theory suggest that such a function would tend to exhibit an integral scale well below the maximum well spacing. This is in contrast to equivalent transmissivities derived from pumping tests by treating the aquifer as being locally uniform (on the scale of each test), which tend to exhibit regional-scale spatial correlations. We show that whereas the mean and integral scale of local log transmissivity can be estimated reasonably well based on theoretical ensemble mean variations of head and drawdown with radial distance from a pumping well, estimating the log transmissivity variance is more difficult. 
We obtain reasonable estimates of the latter based on theoretical variation of the standard deviation of circumferentially averaged drawdown about its mean.

  15. Development and validation of a brief, descriptive Danish pain questionnaire (BDDPQ).

    PubMed

    Perkins, F M; Werner, M U; Persson, F; Holte, K; Jensen, T S; Kehlet, H

    2004-04-01

    A new pain questionnaire should be simple, be documented to have discriminative function, and be related to previously used questionnaires. Word meaning was validated by asking bilingual Danish medical students to translate words taken from the Danish version of the McGill Pain Questionnaire into English. Evaluative word value was estimated using a visual analog scale (VAS). Discriminative function was assessed by having patients with one of six painful conditions (postherpetic neuralgia, phantom limb pain, rheumatoid arthritis, ankle fracture, appendicitis, or labor pain) complete the questionnaire. We were not able to find Danish words that were reliably back-translated to the English words 'splitting' or 'gnawing'. A simple three-word set of evaluative terms had good separation when rated on a VAS ('let' 17.5+/-6.5 mm; 'moderat' 42.7+/-8.6 mm; and 'staerk' 74.9+/-9.7 mm). The questionnaire was able to discriminate among the six painful conditions with 77% accuracy using the descriptive words alone. The accuracy increased to 96% with the addition of evaluative terms (for pain at rest and with activity), chronicity (acute vs. chronic), and location of the pain. A Danish pain questionnaire that subjects and patients can self-administer has thus been developed and validated relative to the words used in the English McGill Pain Questionnaire. Its ability to discriminate among some common painful conditions has been tested and documented. The questionnaire may be of use in patient care and research.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Hongyi; Sivapalan, Murugesu; Tian, Fuqiang

    Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built, sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, could be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, which describes mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the "behavioral" climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.

  17. Objective Assessment of Isotretinoin-Associated Cheilitis: Isotretinoin Cheilitis Grading Scale

    PubMed Central

    Ornelas, Jennifer; Rosamilia, Lorraine; Larsen, Larissa; Foolad, Negar; Wang, Quinlu; Li, Chin-Shang; Sivamani, Raja K.

    2016-01-01

    Importance: Isotretinoin remains an effective treatment for severe acne. Despite its effectiveness, it has many side effects, of which cheilitis is the most common. Objective: To develop an objective grading scale for assessment of isotretinoin-associated cheilitis. Design: Cross-sectional clinical grading study. Setting: UC Davis Dermatology clinic. Participants: Subjects were older than 18 years and actively treated with oral isotretinoin. Exposures: Oral isotretinoin. Main Outcomes and Measures: We developed an isotretinoin cheilitis grading scale (ICGS) incorporating the following four characteristics: erythema, scale/crust, fissures, and inflammation of the commissures. Three board-certified dermatologists independently graded photographs of the subjects. Results: The Kendall's coefficient of concordance (KCC) for the ICGS was 0.88 (p<0.0001). The Kendall's coefficient was ≥0.72 (p<0.0001) for each of the four characteristics included in the grading scale. An image-based measurement of lip roughness correlated statistically significantly with the lip scale/crust assessment (r = 0.52, p<0.05). Conclusion and Relevance: The ICGS is reproducible and relatively simple to use. It can be incorporated as an objective tool to aid in the assessment of isotretinoin-associated cheilitis. PMID:26395167

  18. Self-Organized Criticality and Scaling in Lifetime of Traffic Jams

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-01-01

    The deterministic cellular automaton 184 (the one-dimensional asymmetric simple-exclusion model with parallel dynamics) is extended to take into account injection or extraction of particles. The model represents traffic flow on a highway with inflow or outflow of cars. Introducing injection or extraction of particles into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. The typical lifetime ⟨m⟩ of traffic jams scales as ⟨m⟩ ≅ L^ν with ν = 0.65 ± 0.04. It is shown that the cumulative distribution N_m(L) of lifetimes satisfies the finite-size scaling form N_m(L) ≅ L^(-1) f(m/L^ν).
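
    The model is straightforward to reproduce. The sketch below implements rule-184 dynamics with probabilistic injection at the left boundary and extraction at the right; the injection/extraction rates are arbitrary illustrative values, and actually measuring jam lifetimes to extract ν would require long runs and ensemble averaging.

```python
import random

def step_rule184(cells):
    """One parallel update of CA rule 184: a car (1) advances iff the cell
    ahead is empty; all moves are decided from the pre-update configuration."""
    new = cells[:]
    for i in range(len(cells) - 1):
        if cells[i] == 1 and cells[i + 1] == 0:
            new[i], new[i + 1] = 0, 1
    return new

def simulate_traffic(n=100, alpha=0.3, beta=0.9, steps=2000, seed=1):
    """Open-boundary traffic: inject a car at the left with probability alpha,
    extract one at the right with probability beta; return the final density."""
    rng = random.Random(seed)
    cells = [0] * n
    for _ in range(steps):
        cells = step_rule184(cells)
        if cells[-1] == 1 and rng.random() < beta:   # extraction (outflow)
            cells[-1] = 0
        if cells[0] == 0 and rng.random() < alpha:   # injection (inflow)
            cells[0] = 1
    return sum(cells) / n
```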

  19. Phonon scattering in nanoscale systems: lowest order expansion of the current and power expressions

    NASA Astrophysics Data System (ADS)

    Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads

    2006-04-01

    We use the non-equilibrium Green's function method to describe the effects of phonon scattering on the conductance of nano-scale devices. Useful and accurate approximations are developed that both provide (i) computationally simple formulas for large systems and (ii) simple analytical models. In addition, the simple models can be used to fit experimental data and provide physical parameters.

  20. Let's Stop Trying to Quantify Household Vulnerability: The Problem With Simple Scales for Targeting and Evaluating Economic Strengthening Programs.

    PubMed

    Moret, Whitney M

    2018-03-21

    Economic strengthening practitioners are increasingly seeking data collection tools that will help them target households vulnerable to HIV and poor child well-being outcomes, match households to appropriate interventions, monitor their status, and determine readiness for graduation from project support. This article discusses efforts in 3 countries to develop simple, valid tools to quantify and classify economic vulnerability status. In Côte d'Ivoire, we conducted a cross-sectional survey with 3,749 households to develop a scale based on the definition of HIV-related economic vulnerability from the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) for the purpose of targeting vulnerable households for PEPFAR-funded programs for orphans and vulnerable children. The vulnerability measures examined did not cluster in ways that would allow for the creation of a small number of composite measures, and thus we were unable to develop a scale. In Uganda, we assessed the validity of a vulnerability index developed to classify households according to donor classifications of economic status by measuring its association with a validated poverty measure, finding only a modest correlation. In South Africa, we developed monitoring and evaluation tools to assess economic status of individual adolescent girls and their households. We found no significant correlation with our validation measures, which included a validated measure of girls' vulnerability to HIV, a validated poverty measure, and subjective classifications generated by the community, data collector, and respondent. Overall, none of the measures of economic vulnerability used in the 3 countries varied significantly with their proposed validation items. Our findings suggest that broad constructs of economic vulnerability cannot be readily captured using simple scales to classify households and individuals in a way that accounts for a substantial amount of variance at locally defined vulnerability levels. 
We recommend that researchers and implementers design monitoring and evaluation instruments to capture narrower definitions of vulnerability based on characteristics programs intend to affect. We also recommend using separate tools for targeting based on context-specific indicators with evidence-based links to negative outcomes. Policy makers and donors should avoid reliance on simplified metrics of economic vulnerability in the programs they support. © Moret.

  1. Rapid determination of triclosan in personal care products using new in-tube based ultrasound-assisted salt-induced liquid-liquid microextraction coupled with high performance liquid chromatography-ultraviolet detection.

    PubMed

    Chen, Ming-Jen; Liu, Ya-Ting; Lin, Chiao-Wen; Ponnusamy, Vinoth Kumar; Jen, Jen-Fon

    2013-03-12

    This paper describes the development of a novel, simple and efficient in-tube based ultrasound-assisted salt-induced liquid-liquid microextraction (IT-USA-SI-LLME) technique for the rapid determination of triclosan (TCS) in personal care products by high performance liquid chromatography-ultraviolet (HPLC-UV) detection. The IT-USA-SI-LLME method is based on the rapid phase separation of a water-miscible organic solvent from the aqueous phase in the presence of a high concentration of salt (the salting-out phenomenon) under ultrasonication. In the present work, an in-house fabricated glass extraction device (an 8-mL glass tube with a built-in self-scaled capillary tip) was utilized as the phase separation device for USA-SI-LLME. After the extraction, the upper extractant layer was narrowed into the self-scaled capillary tip by pushing the plunger plug; thus, the collection and measurement of the upper organic solvent layer was simple and convenient. The effects of various parameters on the extraction efficiency were thoroughly evaluated and optimized. Under optimal conditions, detection was linear in the concentration range of 0.4-100 ng/mL with a correlation coefficient of 0.9968. The limit of detection was 0.09 ng/mL and the relative standard deviations ranged between 0.8 and 5.3% (n=5). The applicability of the developed method was demonstrated for the analysis of TCS in different commercial personal care products, and the relative recoveries ranged from 90.4 to 98.5%. The present method was proven to be a simple, sensitive, inexpensive and rapid procedure that consumes little organic solvent, suitable for the analysis of TCS in a variety of commercially available personal care products and cosmetic preparations. Copyright © 2013 Elsevier B.V. All rights reserved.
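
    The figures of merit quoted above (linearity, correlation coefficient, limit of detection) come from a standard linear calibration. The sketch below shows the arithmetic on synthetic data; the detector response, the blank noise, and the 3σ LOD criterion are assumptions (the blank standard deviation is chosen so the sketch reproduces the reported 0.09 ng/mL), not the paper's raw data.

```python
import numpy as np

# Linear calibration over the reported 0.4-100 ng/mL range (synthetic signal).
conc = np.array([0.4, 1.0, 5.0, 10.0, 25.0, 50.0, 100.0])   # ng/mL
signal = 12.0 * conc + 3.0                                  # idealized HPLC-UV response
slope, intercept = np.polyfit(conc, signal, 1)
r = np.corrcoef(conc, signal)[0, 1]                         # correlation coefficient

sd_blank = 0.36                 # assumed blank noise in signal units (illustrative)
lod = 3 * sd_blank / slope      # common 3-sigma LOD convention, in ng/mL
```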

  2. Construction and validation of a measure of integrative well-being in seven languages: The Pemberton Happiness Index

    PubMed Central

    2013-01-01

    Purpose We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that possibly happened the day before); the sum of these items produces a combined well-being index. Methods A distinctive characteristic of this study is that to construct the scale, an initial pool of items, covering the remembered and experienced well-being domains, were subjected to a complete selection and validation process. These items were based on widely used scales (e.g., PANAS, Satisfaction With Life Scale, Subjective Happiness Scale, and Psychological Well-Being Scales). Both the initial items and reference scales were translated into seven languages and completed via Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and USA). Results Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Conclusions Given the PHI’s good psychometric properties, this simple and integrative index could be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index to explore well-being in individuals and communities. PMID:23607679

  3. Anomalous scaling of stochastic processes and the Moses effect

    NASA Astrophysics Data System (ADS)

    Chen, Lijian; Bassler, Kevin E.; McCauley, Joseph L.; Gunaratne, Gemunu H.

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.
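
    The distinction can be demonstrated on simulated processes. In this hypothetical sketch, an ordinary Brownian motion (stationary, uncorrelated, finite-variance increments) shows width scaling with H ≈ 1/2, while deterministically inflating the increments with time (a pure Moses effect, with no Joseph or Noah contribution) raises the overall exponent.

```python
import numpy as np

def scaling_exponent(paths):
    """Overall (Hurst-like) exponent H from an ensemble of paths:
    slope of log std(X_t) versus log t, since the width grows as t^H."""
    t = np.arange(1, paths.shape[1] + 1)
    width = paths.std(axis=0)
    H, _ = np.polyfit(np.log(t), np.log(width), 1)
    return H

rng = np.random.default_rng(0)
increments = rng.standard_normal((2000, 500))   # stationary increments
bm = increments.cumsum(axis=1)                  # Brownian motion: H ~ 1/2

# Moses effect: nonstationary increments that grow as t^(1/4); the width
# then scales roughly as t^(3/4), since var(X_t) ~ sum_{s<=t} s^(1/2) ~ t^(3/2).
t = np.arange(1, 501)
moses = (increments * t**0.25).cumsum(axis=1)
```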

  4. Anomalous scaling of stochastic processes and the Moses effect.

    PubMed

    Chen, Lijian; Bassler, Kevin E; McCauley, Joseph L; Gunaratne, Gemunu H

    2017-04-01

    The state of a stochastic process evolving over a time t is typically assumed to lie on a normal distribution whose width scales like t^{1/2}. However, processes in which the probability distribution is not normal and the scaling exponent differs from 1/2 are known. The search for possible origins of such "anomalous" scaling and approaches to quantify them are the motivations for the work reported here. In processes with stationary increments, where the stochastic process is time-independent, autocorrelations between increments and infinite variance of increments can cause anomalous scaling. These sources have been referred to as the Joseph effect and the Noah effect, respectively. If the increments are nonstationary, then scaling of increments with t can also lead to anomalous scaling, a mechanism we refer to as the Moses effect. Scaling exponents quantifying the three effects are defined and related to the Hurst exponent that characterizes the overall scaling of the stochastic process. Methods of time series analysis that enable accurate independent measurement of each exponent are presented. Simple stochastic processes are used to illustrate each effect. Intraday financial time series data are analyzed, revealing that their anomalous scaling is due only to the Moses effect. In the context of financial market data, we reiterate that the Joseph exponent, not the Hurst exponent, is the appropriate measure to test the efficient market hypothesis.

  5. The Couples' Illness Communication Scale (CICS): development and evaluation of a brief measure assessing illness-related couple communication.

    PubMed

    Arden-Close, Emily; Moss-Morris, Rona; Dennison, Laura; Bayne, Louise; Gidron, Yori

    2010-09-01

    When one member of a couple has a chronic illness, communication about the illness is important for both patient and partner well-being. This study aimed to develop and test a brief self-report measure of illness-related couple communication. A combination of correlations and multiple regression were used to assess the internal consistency and validity of the Couples' Illness Communication Scale (CICS). A scale to provide insight into both patient and partner illness communication was developed. The CICS was then tested on patients with ovarian cancer (N=123) and their partners (N=101), as well as patients with early stage multiple sclerosis (MS) who had stable partnerships (N=64). The CICS demonstrated good acceptability, internal consistency, convergent validity (correlations with general couple communication and marital adjustment), construct validity (correlations with intrusive thoughts, social/family well-being, emotional impact of the illness, and psychological distress), and test-retest reliability. The CICS meets the majority of psychometric criteria for assessment measures in both a life-threatening illness (ovarian cancer) and a chronic progressive disease (MS). Further research is required to understand its suitability for use in other populations. Adoption of the CICS into couple-related research will improve understanding of the role of illness-related communication in adjustment to illness. Use of this short, simple tool in a clinical setting can provide a springboard for addressing difficulties with illness-related couple communication and could aid decision making for referrals to couple counselling.

  6. Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2016-01-01

    When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more coannular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a best approximation determined and the shortcomings of the model highlighted.

  7. Simple Scaling of Multi-Stream Jet Plumes for Aeroacoustic Modeling

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2015-01-01

    When creating simplified, semi-empirical models for the noise of simple single-stream jets near surfaces it has proven useful to be able to generalize the geometry of the jet plume. Having a model that collapses the mean and turbulent velocity fields for a range of flows allows the problem to become one of relating the normalized jet field and the surface. However, most jet flows of practical interest involve jets of two or more co-annular flows for which standard models for the plume geometry do not exist. The present paper describes one attempt to relate the mean and turbulent velocity fields of multi-stream jets to that of an equivalent single-stream jet. The normalization of single-stream jets is briefly reviewed, from the functional form of the flow model to the results of the modeling. Next, PIV (Particle Image Velocimetry) data from a number of multi-stream jets is analyzed in a similar fashion. The results of several single-stream approximations of the multi-stream jet plume are demonstrated, with a 'best' approximation determined and the shortcomings of the model highlighted.

  8. Elastic moduli of a smectic membrane: a rod-level scaling analysis

    NASA Astrophysics Data System (ADS)

    Wensink, H. H.; Morales Anda, L.

    2018-02-01

    Chiral rodlike colloids exposed to strong depletion attraction may self-assemble into chiral membranes whose twisted director field differs from that of a 3D bulk chiral nematic. We formulate a simple microscopic variational theory to determine the elastic moduli of rods assembled into a bidimensional smectic membrane. The approach is based on a simple Onsager-Straley theory for a non-uniform director field that we apply to describe rod twist within the membrane. A microscopic approach enables a detailed estimate of the individual Frank elastic moduli (splay, twist and bend) as well as the twist penetration depth of the smectic membrane in relation to the rod density and shape. We find that the elastic moduli are distinctly different from those of a bulk nematic fluid, with the splay elasticity being much stronger and the curvature elasticity much weaker than for rods assembled in a three-dimensional nematic fluid. We argue that the use of the simplistic one-constant approximation in which all moduli are assumed to be of equal magnitude is not appropriate for modelling the structure-property relation of smectic membranes.

  9. Scale problems in assessment of hydrogeological parameters of groundwater flow models

    NASA Astrophysics Data System (ADS)

    Nawalany, Marek; Sinicyn, Grzegorz

    2015-09-01

    An overview is presented of scale problems in groundwater flow, with emphasis on upscaling of hydraulic conductivity, being a brief summary of the conventional upscaling approach with some attention paid to recently emerged approaches. The focus is on essential aspects which may be an advantage in comparison to the occasionally extremely extensive summaries presented in the literature. In the present paper the concept of scale is introduced as an indispensable part of system analysis applied to hydrogeology. The concept is illustrated with a simple hydrogeological system for which definitions of four major ingredients of scale are presented: (i) spatial extent and geometry of hydrogeological system, (ii) spatial continuity and granularity of both natural and man-made objects within the system, (iii) duration of the system and (iv) continuity/granularity of natural and man-related variables of groundwater flow system. Scales used in hydrogeology are categorised into five classes: micro-scale - scale of pores, meso-scale - scale of laboratory sample, macro-scale - scale of typical blocks in numerical models of groundwater flow, local-scale - scale of an aquifer/aquitard and regional-scale - scale of series of aquifers and aquitards. Variables, parameters and groundwater flow equations for the three lowest scales, i.e., pore-scale, sample-scale and (numerical) block-scale, are discussed in detail, with the aim to justify physically deterministic procedures of upscaling from finer to coarser scales (stochastic issues of upscaling are not discussed here). Since the procedure of transition from sample-scale to block-scale is physically well based, it is a good candidate for upscaling block-scale models to local-scale models and likewise for upscaling local-scale models to regional-scale models. Also the latest results in downscaling from block-scale to sample scale are briefly referred to.
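
    A classical, concrete instance of upscaling from sample-scale to block-scale is the averaging of hydraulic conductivity. The sketch below illustrates the textbook bounds: the effective block conductivity of a heterogeneous medium lies between the harmonic mean (flow across layers) and the arithmetic mean (flow along layers), with the geometric mean arising for two-dimensional lognormal fields. The lognormal parameters are arbitrary illustrative values, not tied to any particular aquifer.

```python
import numpy as np

# Sample-scale conductivities drawn from a lognormal distribution (m/s).
rng = np.random.default_rng(42)
K = rng.lognormal(mean=-7.0, sigma=1.0, size=10_000)

K_arith = K.mean()                      # upper bound: flow parallel to layering
K_harm = 1.0 / np.mean(1.0 / K)         # lower bound: flow perpendicular to layering
K_geom = np.exp(np.mean(np.log(K)))     # effective value for 2D lognormal fields
```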

  10. Towards a Certified Lightweight Array Bound Checker for Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pichardie, David

    2009-01-01

Dynamic array bound checks are crucial for the security of a Java Virtual Machine. These dynamic checks are, however, expensive, and several static analysis techniques have been proposed to eliminate explicit bounds checks. Such analyses require advanced numerical and symbolic manipulations that 1) penalize bytecode loading or dynamic compilation, and 2) complicate the trusted computing base. Following the Foundational Proof Carrying Code methodology, our goal is to provide a lightweight bytecode verifier for eliminating array bound checks that is both efficient and trustworthy. In this work, we define a generic relational program analysis for an imperative, stack-oriented bytecode language with procedures, arrays and global variables, and instantiate it with the relational abstract domain of polyhedra. The analysis automatically infers loop invariants and method pre-/post-conditions. Invariants, which can be large, can be specialized for proving a safety policy using an automatic pruning technique that reduces their size. The result of the analysis can then be checked efficiently by a simple checker, by annotating the program with parts of the invariant together with certificates of polyhedral inclusions. The resulting checker is simple enough to be entirely certified within the Coq proof assistant for a fragment of the Java bytecode language. During the talk, we will also report on our ongoing effort to scale this approach to the full sequential JVM.

  11. Investigation of shear damage considering the evolution of anisotropy

    NASA Astrophysics Data System (ADS)

    Kweon, S.

    2013-12-01

The damage that occurs in shear deformations is investigated in view of the evolution of anisotropy. It is widely believed in the mechanics research community that damage (or porosity) does not evolve (increase) in shear deformations, since the hydrostatic stress in shear is zero. This paper shows that this statement can be false in large deformations of simple shear. Simulations using the anisotropic ductile fracture model (macro-scale) proposed in this study indicate that the hydrostatic stress becomes nonzero and porosity therefore evolves (increases or decreases) in the simple shear deformation of anisotropic (orthotropic) materials. A simple shear simulation using a crystal-plasticity-based damage model (meso-scale) shows the same physics as the macro-scale model: porosity evolves due to grain-to-grain interaction, i.e., due to the evolution of anisotropy. Through a series of simple shear simulations, this study investigates the effect of the evolution of anisotropy, i.e., the rotation of the orthotropic axes, on the evolution of damage (porosity). The effect of the evolution of void orientation and void shape on damage (porosity) evolution is investigated as well. It is found that the interaction among porosity, matrix anisotropy and void orientation/shape plays a crucial role in the ductile damage of porous materials.

  12. Defining Simple nD Operations Based on Prismatic nD Objects

    NASA Astrophysics Data System (ADS)

    Arroyo Ohori, K.; Ledoux, H.; Stoter, J.

    2016-10-01

An alternative to the traditional approaches of modelling 2D/3D space, time, scale and other parametrisable characteristics separately in GIS lies in the higher-dimensional modelling of geographic information, in which a chosen set of non-spatial characteristics, e.g. time and scale, are modelled as extra geometric dimensions perpendicular to the spatial ones, thus creating a higher-dimensional model. While higher-dimensional models are undoubtedly powerful, they are also hard to create and manipulate due to our lack of an intuitive understanding of dimensions higher than three. As a solution to this problem, this paper proposes a methodology that makes nD object generation easier by splitting the creation and manipulation process into three steps: (i) constructing simple nD objects based on nD prismatic polytopes (analogous to prisms in 3D), (ii) defining simple modification operations at the vertex level, and (iii) simple postprocessing to fix errors introduced into the model. As a use case, we show how two sets of operations can be defined and implemented in a dimension-independent manner using this methodology: the most common transformations (i.e. translation, scaling and rotation) and the collapse of objects. The nD objects generated in this manner can then be used as a basis for an nD GIS.
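The vertex-level operations in step (ii) can indeed be written without fixing the dimension in advance. The following is a minimal illustrative sketch (not the authors' implementation): an nD object is represented by its vertex array, and a rotation is specified in a single (i, j) coordinate plane, which is the natural generalization of 2D/3D rotations.

```python
import numpy as np

def translate(verts, offset):
    """Translate nD vertices by an offset vector."""
    return np.asarray(verts, dtype=float) + np.asarray(offset)

def scale(verts, factors, origin=None):
    """Scale nD vertices about an origin (default: the centroid)."""
    verts = np.asarray(verts, dtype=float)
    if origin is None:
        origin = verts.mean(axis=0)
    return (verts - origin) * np.asarray(factors) + origin

def rotate(verts, i, j, theta):
    """Rotate nD vertices by theta in the (i, j) coordinate plane."""
    verts = np.asarray(verts, dtype=float)
    n = verts.shape[1]
    R = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    R[i, i], R[i, j], R[j, i], R[j, j] = c, -s, s, c
    return verts @ R.T

# A toy 4D "prism": a unit square (x, y) with z = 0, extruded along w.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
prism = np.vstack([
    np.hstack([square, np.zeros((4, 1)), np.zeros((4, 1))]),  # w = 0 face
    np.hstack([square, np.zeros((4, 1)), np.ones((4, 1))]),   # w = 1 face
])
```

The same three functions work unchanged for any number of dimensions, which is the point of the vertex-level formulation.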

  13. The London handicap scale: a re-evaluation of its validity using standard scoring and simple summation.

    PubMed

    Jenkinson, C; Mant, J; Carter, J; Wade, D; Winner, S

    2000-03-01

To assess the validity of the London handicap scale (LHS) using a simple unweighted scoring system compared with the traditional weighted scoring, 323 patients admitted to hospital with acute stroke were followed up by interview 6 months after their stroke, as part of a trial looking at the impact of a family support organiser. Outcome measures included the six-item LHS, the Dartmouth COOP charts, the Frenchay activities index, the Barthel index, and the hospital anxiety and depression scale. Patients' handicap score was calculated both using the standard procedure (with weighting) for the LHS, and using a simple summation procedure without weighting (U-LHS). Construct validity of both LHS and U-LHS was assessed by testing their correlations with the other outcome measures. Cronbach's alpha for the LHS was 0.83. The U-LHS was highly correlated with the LHS (r=0.98). Correlation of U-LHS with the other outcome measures gave very similar results to correlation of LHS with these measures. Simple summation scoring of the LHS does not lead to any change in the measurement properties of the instrument compared with standard weighted scoring. Unweighted scores are easier to calculate and interpret, so it is recommended that these be used.
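The near-perfect agreement between weighted and unweighted scoring is easy to reproduce with synthetic data: when item weights are all of similar magnitude, the weighted sum and the simple sum of the same items are almost collinear. The weights and responses below are hypothetical placeholders, not the published LHS weights.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical data: 323 respondents, six items each scored 1-6.
items = rng.integers(1, 7, size=(323, 6))
# Illustrative weights of similar magnitude (not the real LHS weights).
weights = np.array([1.0, 1.2, 0.8, 1.1, 0.9, 1.3])

u_score = items.sum(axis=1)   # simple summation (U-LHS analogue)
w_score = items @ weights     # weighted scoring (LHS analogue)

r = np.corrcoef(u_score, w_score)[0, 1]
```

For independent items the expected correlation is sum(w) / (sqrt(k) * sqrt(sum(w**2))), which for near-equal weights is close to 1, consistent with the r=0.98 reported in the study.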

  14. Quantifying predictability in a model with statistical features of the atmosphere

    PubMed Central

    Kleeman, Richard; Majda, Andrew J.; Timofeyev, Ilya

    2002-01-01

    The Galerkin truncated inviscid Burgers equation has recently been shown by the authors to be a simple model with many degrees of freedom, with many statistical properties similar to those occurring in dynamical systems relevant to the atmosphere. These properties include long time-correlated, large-scale modes of low frequency variability and short time-correlated “weather modes” at smaller scales. The correlation scaling in the model extends over several decades and may be explained by a simple theory. Here a thorough analysis of the nature of predictability in the idealized system is developed by using a theoretical framework developed by R.K. This analysis is based on a relative entropy functional that has been shown elsewhere by one of the authors to measure the utility of statistical predictions precisely. The analysis is facilitated by the fact that most relevant probability distributions are approximately Gaussian if the initial conditions are assumed to be so. Rather surprisingly this holds for both the equilibrium (climatological) and nonequilibrium (prediction) distributions. We find that in most cases the absolute difference in the first moments of these two distributions (the “signal” component) is the main determinant of predictive utility variations. Contrary to conventional belief in the ensemble prediction area, the dispersion of prediction ensembles is generally of secondary importance in accounting for variations in utility associated with different initial conditions. This conclusion has potentially important implications for practical weather prediction, where traditionally most attention has focused on dispersion and its variability. PMID:12429863
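The split into a "signal" (mean-difference) component and a dispersion component can be made concrete in the Gaussian setting the authors exploit: for one-dimensional Gaussians the relative entropy separates exactly into a mean-shift term and a variance term. A minimal sketch, not the paper's multivariate machinery:

```python
import numpy as np

def gaussian_kl(mu_p, sig_p, mu_q, sig_q):
    """Relative entropy D(p||q) between 1-D Gaussians, split into a
    'signal' term (mean shift) and a 'dispersion' term (variance change).
    Here p plays the role of the prediction and q the climatology."""
    signal = (mu_p - mu_q) ** 2 / (2.0 * sig_q ** 2)
    dispersion = np.log(sig_q / sig_p) + sig_p ** 2 / (2.0 * sig_q ** 2) - 0.5
    return signal + dispersion, signal, dispersion
```

When the prediction ensemble has the climatological spread but a shifted mean, only the signal term contributes, which is the regime the authors find dominates predictive utility.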

  15. Functional vision and cognition in infants with congenital disorders of the peripheral visual system.

    PubMed

    Dale, Naomi; Sakkalou, Elena; O'Reilly, Michelle; Springall, Clare; De Haan, Michelle; Salt, Alison

    2017-07-01

To investigate how vision relates to early development by studying vision and cognition in a national cohort of 1-year-old infants with congenital disorders of the peripheral visual system and visual impairment. This was a cross-sectional observational investigation of a nationally recruited cohort of infants with 'simple' and 'complex' congenital disorders of the peripheral visual system. Entry age was 8 to 16 months. Vision level (Near Detection Scale) and non-verbal cognition (sensorimotor understanding, Reynell Zinkin Scales) were assessed. Parents completed demographic questionnaires. Of 90 infants (49 males, 41 females; mean age 13mo, standard deviation [SD] 2.5mo; range 7-17mo), 25 (28%) had profound visual impairment (light perception at best) and 65 (72%) had severe visual impairment (basic 'form' vision). The Near Detection Scale correlated significantly with sensorimotor understanding developmental quotients in the 'total', 'simple', and 'complex' groups (all p<0.001). Age and vision accounted for 48% of sensorimotor understanding variance. Infants with profound visual impairment, especially in the 'complex' group with congenital disorders of the peripheral visual system with known brain involvement, showed the greatest cognitive delay. Lack of vision is associated with delayed early-object manipulative abilities and concepts; 'form' vision appeared to support early developmental advance. This paper provides baseline characteristics for cross-sectional and longitudinal follow-up investigations in progress. A methodological strength of the study was the representativeness of the cohort according to national epidemiological and population census data. © 2017 Mac Keith Press.

  16. A New Technique for Personality Scale Construction. Preliminary Findings.

    ERIC Educational Resources Information Center

    Schaffner, Paul E.; Darlington, Richard B.

    Most methods of personality scale construction have clear statistical disadvantages. A hybrid method (Darlington and Bishop, 1966) was found to increase scale validity more than any other method, with large item pools. A simple modification of the Darlington-Bishop method (algebraically and conceptually similar to ridge regression, but…

  17. A Simple and Effective Image Normalization Method to Monitor Boreal Forest Change in a Siberian Burn Chronosequence across Sensors and across Time

    NASA Astrophysics Data System (ADS)

    Chen, X.; Vierling, L. A.; Deering, D. W.

    2004-12-01

Satellite data offer unique perspectives for monitoring and quantifying land cover change; however, radiometric consistency among co-located multi-temporal images is difficult to maintain due to variations in sensors and atmosphere. To detect landscape change accurately using multi-temporal images, we developed a new relative radiometric normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on 9 June 1990 (Landsat 4), 20 June 2000, and 26 August 2001 (Landsat 7) for analyses over boreal forests near the Siberian city of Krasnoyarsk. The Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), and Reduced Simple Ratio (RSR) were investigated in the normalization study. Temporally invariant cluster (TIC) centers were identified through a point density map of the base image and the target image, and a normalization regression line was created through all TIC centers. The target image digital data were then converted using the regression function so that the two images could be compared on the resulting common radiometric scale. We found that EVI was very sensitive to vegetation structure and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. NDVI was very effective at reducing the influence of shadow, while EVI was very sensitive to shadowing. After normalization, correlations of NDVI and EVI with field-collected total Leaf Area Index (LAI) data in 2000 and 2001 were significantly improved; the r-square values in these regressions increased from 0.49 to 0.69 and from 0.46 to 0.61, respectively. An EVI "cancellation effect", where EVI was positively related to understory greenness but negatively related to forest canopy coverage, was evident across a post-fire chronosequence. These findings indicate that the TIC method provides a simple, effective and repeatable method to create radiometrically comparable data sets for remote detection of landscape change. Compared with some previous relative normalization methods, this new method avoids subjective selection of the normalization regression line. It does not require high-level programming or statistical analyses, yet remains sensitive to landscape changes occurring over seasonal and inter-annual time scales. In addition, the TIC method maintains sensitivity to subtle changes in vegetation phenology and enables normalization even when invariant features are rare.
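The core of a TIC-style normalization, fitting a regression line through invariant cluster centers and applying it to the whole target image, is a two-parameter linear mapping. A minimal sketch (the cluster-center values below are hypothetical, not from the study):

```python
import numpy as np

def normalize_to_base(target_img, tic_target, tic_base):
    """Relative radiometric normalization sketch: fit a line through
    temporally invariant cluster (TIC) centers and map the target image
    onto the base image's radiometric scale."""
    gain, offset = np.polyfit(tic_target, tic_base, 1)
    return gain * target_img + offset

# Hypothetical cluster centers (e.g. NDVI of invariant features in the
# target and base images); here they lie on an exact line for clarity.
tic_target = np.array([0.10, 0.30, 0.55, 0.80])
tic_base = 0.95 * tic_target + 0.05
```

After this mapping, pixel values from the two dates share a common radiometric scale, so differences reflect landscape change rather than sensor or atmospheric drift.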

  18. SimpleBox 4.0: Improving the model while keeping it simple….

    PubMed

    Hollander, Anne; Schoorl, Marian; van de Meent, Dik

    2016-04-01

Chemical behavior in the environment is often modeled with multimedia fate models. SimpleBox is one frequently used multimedia fate model, first developed in 1986. Since then, two updated versions have been published. Based on recent scientific developments and experience with SimpleBox 3.0, a new version of SimpleBox was developed and is made public here: SimpleBox 4.0. In this new model, eight major changes were implemented: removal of the local scale and of the vegetation compartments; addition of lake compartments and deep ocean compartments (including the thermohaline circulation); implementation of intermittent rain instead of drizzle, and of depth-dependent soil concentrations; and adjustment of the partitioning behavior of organic acids and bases, as well as of the value for the enthalpy of vaporization. In this paper, the effects of the model changes in SimpleBox 4.0 on the predicted steady-state concentrations of chemical substances are explored for different substance groups (neutral organic substances, acids, bases, metals) in a standard emission scenario. In general, the largest differences between the predicted concentrations in the new and the old model are caused by the implementation of layered ocean compartments. Undesirably high model complexity caused by the vegetation compartments and the local scale was removed to increase the simplicity and user-friendliness of the model. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. GRAVITATIONAL WAVE BACKGROUND FROM BINARY MERGERS AND METALLICITY EVOLUTION OF GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakazato, Ken’ichiro; Sago, Norichika; Niino, Yuu, E-mail: nakazato@artsci.kyushu-u.ac.jp

The cosmological evolution of the binary black hole (BH) merger rate and the energy density of the gravitational wave (GW) background are investigated. To evaluate the redshift dependence of the BH formation rate, BHs are assumed to originate from low-metallicity stars, and the relations between the star formation rate, metallicity and stellar mass of galaxies are combined with the stellar mass function at each redshift. As a result, it is found that when the energy density of the GW background is scaled with the merger rate in the local universe, the scaling factor does not depend on the critical metallicity for the formation of BHs. Also taking into account the merger of binary neutron stars, a simple formula to express the energy spectrum of the GW background is constructed for the inspiral phase. The relation between the local merger rate and the energy density of the GW background will be examined by future GW observations.

  20. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas

Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.

  1. To address surface reaction network complexity using scaling relations machine learning and DFT calculations

    DOE PAGES

    Ulissi, Zachary W.; Medford, Andrew J.; Bligaard, Thomas; ...

    2017-03-06

Surface reaction networks involving hydrocarbons exhibit enormous complexity with thousands of species and reactions for all but the very simplest of chemistries. We present a framework for optimization under uncertainty for heterogeneous catalysis reaction networks using surrogate models that are trained on the fly. The surrogate model is constructed by teaching a Gaussian process adsorption energies based on group additivity fingerprints, combined with transition-state scaling relations and a simple classifier for determining the rate-limiting step. The surrogate model is iteratively used to predict the most important reaction step to be calculated explicitly with computationally demanding electronic structure theory. Applying these methods to the reaction of syngas on rhodium(111), we identify the most likely reaction mechanism. Lastly, propagating uncertainty throughout this process yields the likelihood that the final mechanism is complete given measurements on only a subset of the entire network and uncertainty in the underlying density functional theory calculations.
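The surrogate loop described above, predict adsorption energies with a Gaussian process, then send the most uncertain step to explicit electronic-structure calculation, can be sketched with a toy GP. The fingerprints and energies here are synthetic stand-ins, not data from the paper, and the kernel choice is illustrative only.

```python
import numpy as np

def rbf(X1, X2, length=1.0):
    """Squared-exponential kernel between two sets of fingerprint vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gp_predict(X_train, y_train, X_test, noise=1e-6):
    """Gaussian-process posterior mean and variance at test points."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    Kss = rbf(X_test, X_test)
    mean = Ks @ np.linalg.solve(K, y_train)
    cov = Kss - Ks @ np.linalg.solve(K, Ks.T)
    return mean, np.diag(cov)

# Hypothetical group-additivity fingerprints and adsorption energies (eV).
rng = np.random.default_rng(1)
X_known = rng.normal(size=(5, 3))     # steps already computed with DFT
y_known = rng.normal(size=5)
X_candidates = rng.normal(size=(20, 3))

mean, var = gp_predict(X_known, y_known, X_candidates)
next_idx = int(np.argmax(var))  # most uncertain step: compute explicitly next
```

Iterating this select-and-recompute loop is what lets the framework resolve a large network while running expensive calculations on only a small subset of steps.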

  2. Relative influence upon microwave emissivity of fine-scale stratigraphy, internal scattering, and dielectric properties

    USGS Publications Warehouse

    England, A.W.

    1976-01-01

The microwave emissivity of relatively low-loss media such as snow, ice, frozen ground, and lunar soil is strongly influenced by fine-scale layering and by internal scattering. Radiometric data, however, are commonly interpreted using a model of emission from a homogeneous, dielectric halfspace whose emissivity derives exclusively from dielectric properties. Conclusions based upon these simple interpretations can be erroneous. Examples are presented showing that the emission from fresh or hardpacked snow over either frozen or moist soil is governed dominantly by the size distribution of ice grains in the snowpack. Similarly, the thickness of seasonally frozen soil and the concentration of rock clasts in lunar soil noticeably affect, respectively, the emissivities of northern-latitude soils in winter and of the lunar regolith. Petrophysical data accumulated in support of the geophysical interpretation of microwave data must include measurements not only of dielectric properties, but also of geometric factors such as fine-scale layering and the size distributions of grains, inclusions, and voids. © 1976 Birkhäuser Verlag.

  3. A unified framework for the pareto law and Matthew effect using scale-free networks

    NASA Astrophysics Data System (ADS)

    Hu, M.-B.; Wang, W.-X.; Jiang, R.; Wu, Q.-S.; Wang, B.-H.; Wu, Y.-H.

    2006-09-01

We investigate the accumulated wealth distribution by adopting evolutionary games taking place on scale-free networks. The system self-organizes to a critical Pareto distribution (1897) of wealth P(m) ~ m^-(v+1) with 1.6 < v < 2.0 (in agreement with that of the U.S. or Japan). In particular, an agent's personal wealth is proportional to its number of contacts (connectivity), and this leads to the phenomenon that the rich get richer and the poor get relatively poorer, which is consistent with the Matthew effect present in society, economy, science and so on. Though our model is simple, it provides a good representation of cooperation and profit accumulation behavior in economy, and it combines network theory with econophysics.

  4. Quantitative analysis of population-scale family trees with millions of relatives.

    PubMed

    Kaplanis, Joanna; Gordon, Assaf; Shor, Tal; Weissbrod, Omer; Geiger, Dan; Wahl, Mary; Gershovits, Michael; Markus, Barak; Sheikh, Mona; Gymrek, Melissa; Bhatia, Gaurav; MacArthur, Daniel G; Price, Alkes L; Erlich, Yaniv

    2018-04-13

    Family trees have vast applications in fields as diverse as genetics, anthropology, and economics. However, the collection of extended family trees is tedious and usually relies on resources with limited geographical scope and complex data usage restrictions. We collected 86 million profiles from publicly available online data shared by genealogy enthusiasts. After extensive cleaning and validation, we obtained population-scale family trees, including a single pedigree of 13 million individuals. We leveraged the data to partition the genetic architecture of human longevity and to provide insights into the geographical dispersion of families. We also report a simple digital procedure to overlay other data sets with our resource. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  5. Topological structure dynamics revealing collective evolution in active nematics

    PubMed Central

    Shi, Xia-qing; Ma, Yu-qiang

    2013-01-01

    Topological defects frequently emerge in active matter like bacterial colonies, cytoskeleton extracts on substrates, self-propelled granular or colloidal layers and so on, but their dynamical properties and the relations to large-scale organization and fluctuations in these active systems are seldom touched. Here we reveal, through a simple model for active nematics using self-driven hard elliptic rods, that the excitation, annihilation and transportation of topological defects differ markedly from those in non-active media. These dynamical processes exhibit strong irreversibility in active nematics in the absence of detailed balance. Moreover, topological defects are the key factors in organizing large-scale dynamic structures and collective flows, resulting in multi-spatial temporal effects. These findings allow us to control the self-organization of active matter through topological structures. PMID:24346733

  6. Following atomistic kinetics on experimental timescales with the kinetic Activation–Relaxation Technique

    DOE PAGES

    Mousseau, Normand; Béland, Laurent Karim; Brommer, Peter; ...

    2014-12-24

The properties of materials, even at the atomic level, evolve on macroscopic time scales. Following this evolution through simulation has been a challenge for many years. For lattice-based activated diffusion, kinetic Monte Carlo has turned out to be an almost perfect solution. Various accelerated molecular dynamics schemes, for their part, have allowed the study of relatively simple systems on long time scales. There is still a desire and need, however, for methods able to handle complex materials such as alloys and disordered systems. In this paper, we review the kinetic Activation–Relaxation Technique (k-ART), one of a handful of off-lattice kinetic Monte Carlo methods, with on-the-fly cataloging, that have been proposed in the last few years.

  7. Porous medium acoustics of wave-induced vorticity diffusion

    NASA Astrophysics Data System (ADS)

    Müller, T. M.; Sahay, P. N.

    2011-02-01

    A theory for attenuation and dispersion of elastic waves due to wave-induced generation of vorticity at pore-scale heterogeneities in a macroscopically homogeneous porous medium is developed. The diffusive part of the vorticity field associated with a viscous wave in the pore space—the so-called slow shear wave—is linked to the porous medium acoustics through incorporation of the fluid strain rate tensor of a Newtonian fluid in the poroelastic constitutive relations. The method of statistical smoothing is then used to derive dynamic-equivalent elastic wave velocities accounting for the conversion scattering process into the diffusive slow shear wave in the presence of randomly distributed pore-scale heterogeneities. The result is a simple model for wave attenuation and dispersion associated with the transition from viscosity- to inertia-dominated flow regime.

  8. Finding the strong CP problem at the LHC

    NASA Astrophysics Data System (ADS)

    D'Agnolo, Raffaele Tito; Hook, Anson

    2016-11-01

    We show that a class of parity based solutions to the strong CP problem predicts new colored particles with mass at the TeV scale, due to constraints from Planck suppressed operators. The new particles are copies of the Standard Model quarks and leptons. The new quarks can be produced at the LHC and are either collider stable or decay into Standard Model quarks through a Higgs, a W or a Z boson. We discuss some simple but generic predictions of the models for the LHC and find signatures not related to the traditional solutions of the hierarchy problem. We thus provide alternative motivation for new physics searches at the weak scale. We also briefly discuss the cosmological history of these models and how to obtain successful baryogenesis.

  9. Use of healthcare a long time after severe burn injury; relation to perceived health and personality characteristics.

    PubMed

    Wikehult, B; Willebrand, M; Kildal, M; Lannerstam, K; Fugl-Meyer, A R; Ekselius, L; Gerdin, B

    2005-08-05

The aim of the study was to evaluate which factors are associated with the use of healthcare a long time after severe burn injury. After a review process based on clinical reasoning, 69 former burn patients out of a consecutive group treated at the Uppsala Burn Unit from 1980 to 1995 were visited in their homes and their use of care and support was assessed in a semi-structured interview. Post-burn health was assessed with the Burn-Specific Health Scale-Brief (BSHS-B) and personality was assessed with the Swedish universities Scales of Personality (SSP). The participants were injured on average eight years previously. Thirty-four had current contact with healthcare due to their burn injury; they had significantly lower scores on three BSHS-B domains (Simple Abilities, Work, and Hand Function), significantly higher scores on the SSP domain Neuroticism and the SSP scales Stress Susceptibility and Lack of Assertiveness, and lower scores for Social Desirability. There was no relation to age, gender, time since injury, length of stay, or to the surface area burned. A routine screening of personality traits as a supplement to long-term follow-ups may help in identifying the patient's need for care.

  10. Scaling rates of true polar wander in convecting planets and moons

    NASA Astrophysics Data System (ADS)

    Rose, Ian; Buffett, Bruce

    2017-12-01

Mass redistribution in the convecting mantle of a planet causes perturbations in its moment of inertia tensor. Conservation of angular momentum dictates that these perturbations change the direction of the rotation vector of the planet, a process known as true polar wander (TPW). Although the existence of TPW on Earth is firmly established, its rate and magnitude over geologic time scales remain controversial. Here we present scaling analyses and numerical simulations of TPW due to mantle convection over a range of parameter space relevant to planetary interiors. For simple rotating convection, we identify a set of dimensionless parameters that fully characterize true polar wander. We use these parameters to define timescales for the growth of moment of inertia perturbations due to convection and for their relaxation due to true polar wander. These timescales, as well as the relative sizes of convective anomalies, control the rate and magnitude of TPW. This analysis also clarifies the nature of so-called "inertial interchange" TPW events, and relates them to a broader class of events that enable large and often rapid TPW. We expect these events to have been more frequent in Earth's past.

  11. SOME USES OF MODELS OF QUANTITATIVE GENETIC SELECTION IN SOCIAL SCIENCE.

    PubMed

    Weight, Michael D; Harpending, Henry

    2017-01-01

    The theory of selection of quantitative traits is widely used in evolutionary biology, agriculture and other related fields. The fundamental model known as the breeder's equation is simple, robust over short time scales, and it is often possible to estimate plausible parameters. In this paper it is suggested that the results of this model provide useful yardsticks for the description of social traits and the evaluation of transmission models. The differences on a standard personality test between samples of Old Order Amish and Indiana rural young men from the same county and the decline of homicide in Medieval Europe are used as illustrative examples of the overall approach. It is shown that the decline of homicide is unremarkable under a threshold model while the differences between rural Amish and non-Amish young men are too large to be a plausible outcome of simple genetic selection in which assortative mating by affiliation is equivalent to truncation selection.
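The breeder's equation referred to above fits in a few lines: the per-generation response R equals the narrow-sense heritability h² times the selection differential S, and under truncation selection S is the selection intensity times the phenotypic standard deviation. The numbers below are purely illustrative, not the Amish or homicide data from the paper.

```python
def response(h2, S):
    """Breeder's equation: per-generation response R = h^2 * S."""
    return h2 * S

def selection_differential(intensity, sigma_p):
    """S = i * sigma_p under truncation selection with intensity i."""
    return intensity * sigma_p

# Illustrative: a trait with h^2 = 0.4, parents selected 1.0 phenotypic SD
# above the mean, shifts the next generation's mean by 0.4 SD.
R = response(0.4, selection_differential(1.0, 1.0))
```

Compounding such per-generation responses over historical time is what provides the "yardstick" against which the paper judges whether an observed group difference is a plausible outcome of simple genetic selection.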

  12. Microbial risk assessment in heterogeneous aquifers: 1. Pathogen transport

    NASA Astrophysics Data System (ADS)

    Molin, S.; Cvetkovic, V.

    2010-05-01

Pathogen transport in heterogeneous aquifers is investigated for microbial risk assessment. A point source with time-dependent input of pathogens is assumed, exemplified as a simple on-site sanitation installation, intermingled with water supply wells. Any pathogen transmission pathway (realization) to the receptor from a postulated infection hazard is viewed as a random event, with the hydraulic conductivity varying spatially. For aquifers where VAR[lnK] < 1 and the integral scale is finite, we provide relatively simple semianalytical expressions for pathogen transport that incorporate the colloid filtration theory. We test a wide range of Damköhler numbers in order to assess the significance of rate limitations on the aquifer barrier function. Even slow immobile inactivation may notably affect the retention of pathogens. Analytical estimators for microbial peak discharge are evaluated and are shown to be applicable using parameters representative of rotavirus and Hepatitis A with input of 10-20 days duration.

  13. Gradational evolution of young, simple impact craters on the Earth

    NASA Technical Reports Server (NTRS)

    Grant, J. A.; Schultz, P. H.

    1991-01-01

From these three craters, a first-order gradational evolutionary sequence can be proposed. As crater rims are reduced by backwasting and downwasting through fluvial and mass wasting processes, craters are enlarged by approximately 10%. Enlargement of drainages inside the crater eventually forms rim breaches, thereby capturing headward portions of exterior drainages. At the same time, the relative importance of gradational processes may reverse on the ejecta: aeolian activity may supersede fluvial incisement and fan formation at late stages of modification. Despite actual high drainage densities on the crater exterior during early stages of gradation, the subtle scale of these systems results in low density estimates from air photos and satellite images. Because signatures developed on surfaces around all three craters appear to be mostly gradient dependent, they may not be unique to simple crater morphologies. Similar signatures may develop on portions of complex craters as well; however, important differences may also occur.

  14. Fracture surfaces of granular pastes.

    PubMed

    Mohamed Abdelhaye, Y O; Chaouche, M; Van Damme, H

    2013-11-01

    Granular pastes are dense dispersions of non-colloidal grains in a simple or a complex fluid. Typical examples are the coating, gluing or sealing mortars used in building applications. We study the cohesive rupture of thick mortar layers in a simple pulling test where the paste is initially confined between two flat surfaces. After hardening, the morphology of the fracture surfaces was investigated, using either the box counting method to analyze fracture profiles perpendicular to the mean fracture plane, or the slit-island method to analyze the islands obtained by cutting the fracture surfaces at different heights, parallel to the mean fracture plane. The fracture surfaces were shown to exhibit scaling properties over several decades. However, contrary to what has been observed in the brittle or ductile fracture of solid materials, the islands were shown to be mass fractals. This was related to the extensive plastic flow involved in the fracture process.
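    A minimal sketch of the box-counting estimate mentioned in the record above, assuming a fracture profile is supplied as a set of 2-D points (illustrative code, not the authors'; the straight-line check at the end simply recovers dimension ≈ 1):

```python
import math

def box_count_dimension(points, eps_list):
    """Estimate the box-counting dimension of a 2-D point set.

    points: iterable of (x, y) coordinates sampled along the profile
    eps_list: box sizes to test
    Returns the slope of log N(eps) versus log(1/eps), i.e. the exponent
    in N(eps) ~ eps^(-D).
    """
    counts = []
    for eps in eps_list:
        # Map each point to its grid cell and count distinct occupied cells
        boxes = {(math.floor(x / eps), math.floor(y / eps)) for x, y in points}
        counts.append(len(boxes))
    # Least-squares slope in log-log space
    xs = [math.log(1.0 / e) for e in eps_list]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
        sum((a - mx) ** 2 for a in xs)

# Sanity check on a densely sampled straight profile: expect D close to 1
pts = [(i / 20000.0, 0.5 * i / 20000.0) for i in range(20001)]
D = box_count_dimension(pts, [0.1, 0.05, 0.02, 0.01, 0.005])
```

    A mass-fractal island, by contrast, would give a non-integer slope when the same counting is applied to the cut sets described in the abstract.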

  15. Further evidence that similar principles govern recall from episodic and semantic memory: the Canadian prime ministerial serial position function.

    PubMed

    Neath, Ian; Saint-Aubin, Jean

    2011-06-01

    The serial position function, with its characteristic primacy and recency effects, is one of the most ubiquitous findings in episodic memory tasks. In contrast, there are only two demonstrations of such functions in tasks thought to tap semantic memory. Here, we provide a third demonstration, showing that free recall of the prime ministers of Canada also results in a serial position function. Scale Independent Memory, Perception, and Learning (SIMPLE), a local distinctiveness model of memory that was designed to account for serial position effects in episodic memory, fit the data. According to SIMPLE, serial position functions observed in episodic and semantic memory all reflect the relative distinctiveness principle: items will be well remembered to the extent that they are more distinct than competing items at the time of retrieval. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
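    A hedged sketch of the relative distinctiveness principle behind SIMPLE: each item's retrievability varies inversely with its summed similarity to competing items on a log-transformed position dimension. The parameter value and the use of ordinal positions are illustrative assumptions, not the paper's fitted model:

```python
import math

def simple_recall(positions, c=2.0):
    """Relative-distinctiveness recall scores in the spirit of SIMPLE.

    positions: psychological magnitudes of the items (here, ordinal serial
    positions, log-transformed as SIMPLE assumes); c: discriminability.
    Returns one score per item; higher means more distinct at retrieval.
    """
    logs = [math.log(p) for p in positions]
    scores = []
    for li in logs:
        # Similarity decays exponentially with log-scale distance
        sims = [math.exp(-c * abs(li - lj)) for lj in logs]
        scores.append(1.0 / sum(sims))  # self-similarity contributes 1
    return scores

# Ten items (e.g. ten prime ministers in order): the scores trace a
# serial position curve with strong primacy and milder recency.
scores = simple_recall(range(1, 11))
```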

  16. Scaling up Effects in the Organic Laboratory

    ERIC Educational Resources Information Center

    Persson, Anna; Lindstrom, Ulf M.

    2004-01-01

    A simple and effective way of exposing chemistry students to some of the effects of scaling up an organic reaction is described. It gives students an experience similar to one they may encounter in an industrial setting.

  17. Nanocrystal synthesis in microfluidic reactors: where next?

    PubMed

    Phillips, Thomas W; Lignos, Ioannis G; Maceiczyk, Richard M; deMello, Andrew J; deMello, John C

    2014-09-07

    The past decade has seen a steady rise in the use of microfluidic reactors for nanocrystal synthesis, with numerous studies reporting improved reaction control relative to conventional batch chemistry. However, flow synthesis procedures continue to lag behind batch methods in terms of chemical sophistication and the range of accessible materials, with most reports having involved simple one- or two-step chemical procedures directly adapted from proven batch protocols. Here we examine the current status of microscale methods for nanocrystal synthesis, and consider what role microreactors might ultimately play in laboratory-scale research and industrial production.

  18. Electron microscopy investigation of gallium oxide micro/nanowire structures synthesized via vapor phase growth.

    PubMed

    Wang, Y; Xu, J; Wang, R M; Yu, D P

    2004-01-01

    Large-scale micro/nanosized Ga(2)O(3) structures were synthesized via a simple vapor phase growth method. The morphology of the as-grown structures varied from aligned arrays of smooth nano/microscale wires to composite and complex microdendrites. We present evidence that the formation of the observed structure depends strongly on its position relative to the source materials (the concentration distribution) and on the growth temperature. A growth model is proposed, based on the vapor-solid (VS) mechanism, which can explain the observed morphologies.

  19. Increased water retention in polymer electrolyte membranes at elevated temperatures assisted by capillary condensation.

    PubMed

    Park, Moon Jeong; Downing, Kenneth H; Jackson, Andrew; Gomez, Enrique D; Minor, Andrew M; Cookson, David; Weber, Adam Z; Balsara, Nitash P

    2007-11-01

    We establish a new systematic methodology for controlling the water retention of polymer electrolyte membranes. Block copolymer membranes comprising hydrophilic phases with widths ranging from 2 to 5 nm become wetter as the temperature of the surrounding air is increased at constant relative humidity. The widths of the moist hydrophilic phases were measured by cryogenic electron microscopy experiments performed on humid membranes. Simple calculations suggest that capillary condensation is important at these length scales. The correlation between moisture content and proton conductivity of the membranes is demonstrated.
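    The "simple calculations" invoked in the record above are presumably of Kelvin-equation type. A sketch, assuming bulk water properties at 25 °C (surface tension and molar volume, which in reality vary with temperature and confinement):

```python
import math

R = 8.314      # J/(mol K), gas constant
GAMMA = 0.072  # N/m, surface tension of water near 25 C (assumed bulk value)
V_M = 1.8e-5   # m^3/mol, molar volume of liquid water (assumed bulk value)

def kelvin_radius(rh, T=298.0):
    """Pore radius below which capillary condensation occurs at humidity rh."""
    return -2.0 * GAMMA * V_M / (R * T * math.log(rh))

def condensation_rh(radius_m, T=298.0):
    """Relative humidity at which a pore of the given radius fills."""
    return math.exp(-2.0 * GAMMA * V_M / (radius_m * R * T))

r_90 = kelvin_radius(0.9)       # ~10 nm: pores this small fill at 90% RH
rh_2nm = condensation_rh(1e-9)  # a 1 nm radius (2 nm wide) channel
```

    With these assumed constants, a 2 nm wide hydrophilic channel remains filled down to roughly 35% relative humidity, consistent with capillary condensation mattering at the 2-5 nm widths reported above.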

  20. In situ intercalation strategies for device-quality hybrid inorganic-organic self-assembled quantum wells

    NASA Astrophysics Data System (ADS)

    Pradeesh, K.; Baumberg, J. J.; Prakash, G. Vijaya

    2009-07-01

    Thin films of self-organized quantum wells of inorganic-organic hybrid perovskites of (C6H9C2H4NH3)2PbI4 are formed from a simple intercalation strategy to yield well-ordered uniform films over centimeter-size scales. These films compare favorably with traditional solution-chemistry-synthesized thin films. The hybrid films show strong room-temperature exciton-related absorption and photoluminescence, which shift with fabrication protocol. We demonstrate the potential of this method for electronic and photonic device applications.

  1. Prediction of free turbulent mixing using a turbulent kinetic energy method

    NASA Technical Reports Server (NTRS)

    Harsha, P. T.

    1973-01-01

    Free turbulent mixing of two-dimensional and axisymmetric one- and two-stream flows is analyzed by a relatively simple turbulent kinetic energy method. This method incorporates a linear relationship between the turbulent shear and the turbulent kinetic energy and an algebraic relationship for the length scale appearing in the turbulent kinetic energy equation. Good results are obtained for a wide variety of flows. The technique is shown to be especially applicable to flows with heat and mass transfer, for which nonunity Prandtl and Schmidt numbers may be assumed.

  2. Time-scale invariance as an emergent property in a perceptron with realistic, noisy neurons

    PubMed Central

    Buhusi, Catalin V.; Oprisan, Sorinel A.

    2013-01-01

    In most species, interval timing is time-scale invariant: errors in time estimation scale up linearly with the estimated duration. In mammals, time-scale invariance is ubiquitous over behavioral, lesion, and pharmacological manipulations. For example, dopaminergic drugs induce an immediate, whereas cholinergic drugs induce a gradual, scalar change in timing. Behavioral theories posit that time-scale invariance derives from particular computations, rules, or coding schemes. In contrast, we discuss a simple neural circuit, the perceptron, whose output neurons fire in a clock-like fashion (interval timing) based on the pattern of coincidental activation of its input neurons. We show numerically that time-scale invariance emerges spontaneously in a perceptron with realistic neurons, in the presence of noise. Under the assumption that dopaminergic drugs modulate the firing of input neurons, and that cholinergic drugs modulate the memory representation of the criterion time, we show that a perceptron with realistic neurons reproduces the pharmacological clock and memory patterns, and their time-scale invariance, in the presence of noise. These results suggest that rather than being a signature of higher-order cognitive processes or specific computations related to timing, time-scale invariance may spontaneously emerge in a massively-connected brain from the intrinsic noise of neurons and circuits, thus providing the simplest explanation for the ubiquity of scale invariance of interval timing. PMID:23518297

  3. Quantifying seismic anisotropy induced by small-scale chemical heterogeneities

    NASA Astrophysics Data System (ADS)

    Alder, C.; Bodin, T.; Ricard, Y.; Capdeville, Y.; Debayle, E.; Montagner, J. P.

    2017-12-01

    Observations of seismic anisotropy are usually used as a proxy for lattice-preferred orientation (LPO) of anisotropic minerals in the Earth's mantle. In this way, seismic anisotropy observed in tomographic models provides important constraints on the geometry of mantle deformation associated with thermal convection and plate tectonics. However, in addition to LPO, small-scale heterogeneities that cannot be resolved by long-period seismic waves may also produce anisotropy. The observed (i.e. apparent) anisotropy is then a combination of an intrinsic and an extrinsic component. Assuming the Earth's mantle exhibits petrological inhomogeneities at all scales, tomographic models built from long-period seismic waves may thus display extrinsic anisotropy. In this paper, we investigate the relation between the amplitude of seismic heterogeneities and the level of induced S-wave radial anisotropy as seen by long-period seismic waves. We generate simple 1-D and 2-D isotropic models that exhibit a power spectrum of heterogeneities of the form expected for the Earth's mantle, that is, varying as 1/k, with k the wavenumber of these heterogeneities. The 1-D toy models correspond to simple layered media. In the 2-D case, our models depict marble-cake patterns in which an anomaly in shear wave velocity has been advected within convective cells. The long-wavelength equivalents of these models are computed using upscaling relations that link properties of a rapidly varying elastic medium to properties of the effective, that is, apparent, medium as seen by long-period waves. The resulting homogenized media exhibit extrinsic anisotropy and represent what would be observed in tomography. In the 1-D case, we analytically show that the level of anisotropy increases with the square of the amplitude of heterogeneities. This relation is numerically verified for both 1-D and 2-D media.
In addition, we predict that 10 per cent of chemical heterogeneities in 2-D marble-cake models can induce more than 3.9 per cent of extrinsic radial S-wave anisotropy. We thus predict that a non-negligible part of the observed anisotropy in tomographic models may be the result of unmapped small-scale heterogeneities in the mantle, mainly in the form of fine layering, and that caution should be taken when interpreting observed anisotropy in terms of LPO and mantle deformation. This effect may be particularly strong in the lithosphere where chemical heterogeneities are assumed to be the strongest.
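    The quadratic dependence of extrinsic anisotropy on heterogeneity amplitude can be illustrated with the classic Backus average for a finely layered stack of isotropic layers (a sketch, not the paper's full homogenization): the effective radial S-wave anisotropy is ξ = N/L = ⟨μ⟩⟨1/μ⟩, which for a two-layer stack with shear-modulus perturbation ±ε reduces to 1/(1 − ε²) ≈ 1 + ε²:

```python
def radial_s_anisotropy(mus):
    """Backus-average radial S anisotropy xi = N/L = <mu>*<1/mu> for a
    finely layered stack of isotropic layers of equal thickness.

    mus: shear moduli of the layers. xi >= 1, with equality only for a
    uniform stack, so any fine layering produces extrinsic anisotropy.
    """
    n = len(mus)
    mean_mu = sum(mus) / n
    mean_inv = sum(1.0 / m for m in mus) / n
    return mean_mu * mean_inv

# Two-layer example: xi = 1/(1 - eps^2) ~ 1 + eps^2, i.e. extrinsic
# anisotropy grows with the SQUARE of the heterogeneity amplitude.
mu0 = 30e9  # Pa, illustrative mantle-like shear modulus
for eps in (0.05, 0.10, 0.20):
    xi = radial_s_anisotropy([mu0 * (1 + eps), mu0 * (1 - eps)])
```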

  4. How fast do living organisms move: Maximum speeds from bacteria to elephants and whales

    NASA Astrophysics Data System (ADS)

    Meyer-Vernet, Nicole; Rospars, Jean-Pierre

    2015-08-01

    Despite their variety and complexity, living organisms obey simple scaling laws due to the universality of the laws of physics. In the present paper, we study the scaling between maximum speed and size, from bacteria to the largest mammals. While the preferred speed has been widely studied in the framework of Newtonian mechanics, the maximum speed has rarely attracted the interest of physicists, despite its remarkable scaling property; it is roughly proportional to length throughout nearly the whole range of running and swimming organisms. We propose a simple order-of-magnitude interpretation of this ubiquitous relationship, based on physical properties shared by life forms of very different body structure and varying by more than 20 orders of magnitude in body mass.

  5. Simple scaling of catastrophic landslide dynamics.

    PubMed

    Ekström, Göran; Stark, Colin P

    2013-03-22

    Catastrophic landslides involve the acceleration and deceleration of millions of tons of rock and debris in response to the forces of gravity and dissipation. Their unpredictability and frequent location in remote areas have made observations of their dynamics rare. Through real-time detection and inverse modeling of teleseismic data, we show that landslide dynamics are primarily determined by the length scale of the source mass. When combined with geometric constraints from satellite imagery, the seismically determined landslide force histories yield estimates of landslide duration, momenta, potential energy loss, mass, and runout trajectory. Measurements of these dynamical properties for 29 teleseismogenic landslides are consistent with a simple acceleration model in which height drop and rupture depth scale with the length of the failing slope.

  6. Scaling up digital circuit computation with DNA strand displacement cascades.

    PubMed

    Qian, Lulu; Winfree, Erik

    2011-06-03

    To construct sophisticated biochemical circuits from scratch, one needs to understand how simple the building blocks can be and how robustly such circuits can scale up. Using a simple DNA reaction mechanism based on a reversible strand displacement process, we experimentally demonstrated several digital logic circuits, culminating in a four-bit square-root circuit that comprises 130 DNA strands. These multilayer circuits include thresholding and catalysis within every logical operation to perform digital signal restoration, which enables fast and reliable function in large circuits with roughly constant switching time and linear signal propagation delays. The design naturally incorporates other crucial elements for large-scale circuitry, such as general debugging tools, parallel circuit preparation, and an abstraction hierarchy supported by an automated circuit compiler.
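    For reference, the Boolean function such a four-bit square-root circuit computes can be written in two-level logic (a sketch of the target function only; the paper implements it with seesaw DNA gates and dual-rail signals, not with these gate equations):

```python
def floor_sqrt_4bit(x3, x2, x1, x0):
    """Two-bit output (y1, y0) with 2*y1 + y0 = floor(sqrt(x)) for the
    4-bit input x3 x2 x1 x0 (most significant bit first)."""
    y1 = x3 or x2  # floor(sqrt(x)) >= 2 exactly when x >= 4
    y0 = ((not x3) and (not x2) and (x1 or x0)) or (x3 and (x2 or x1 or x0))
    return int(y1), int(y0)

# Exhaustive check against the integer square root for all 16 inputs
for n in range(16):
    y1, y0 = floor_sqrt_4bit((n >> 3) & 1, (n >> 2) & 1, (n >> 1) & 1, n & 1)
    assert 2 * y1 + y0 == int(n ** 0.5)
```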

  7. Active earth pressure model tests versus finite element analysis

    NASA Astrophysics Data System (ADS)

    Pietrzak, Magdalena

    2017-06-01

    The purpose of the paper is to compare failure mechanisms observed in small-scale model tests on a granular sample in the active state with those simulated by the finite element method (FEM) using Plaxis 2D software. Small-scale model tests were performed on a rectangular granular sample retained by a rigid wall. Deformation of the sample resulted from simple wall translation in the direction "away from the soil" (active earth pressure state). The simple Coulomb-Mohr model for soil can be helpful in interpreting experimental findings in the case of granular materials. It was found that the general alignment of the strain localization pattern (failure mechanism) may belong to macro-scale features and be dominated by the test boundary conditions rather than the nature of the granular sample.

  8. Maximum entropy production allows a simple representation of heterogeneity in semiarid ecosystems.

    PubMed

    Schymanski, Stanislaus J; Kleidon, Axel; Stieglitz, Marc; Narula, Jatin

    2010-05-12

    Feedbacks between water use, biomass and infiltration capacity in semiarid ecosystems have been shown to lead to the spontaneous formation of vegetation patterns in a simple model. The formation of patterns permits the maintenance of larger overall biomass at low rainfall rates compared with homogeneous vegetation. This results in a bias of models run at larger scales neglecting subgrid-scale variability. In the present study, we investigate the question whether subgrid-scale heterogeneity can be parameterized as the outcome of optimal partitioning between bare soil and vegetated area. We find that a two-box model reproduces the time-averaged biomass of the patterns emerging in a 100 x 100 grid model if the vegetated fraction is optimized for maximum entropy production (MEP). This suggests that the proposed optimality-based representation of subgrid-scale heterogeneity may be generally applicable to different systems and at different scales. The implications for our understanding of self-organized behaviour and its modelling are discussed.

  9. The origin of high frequency radiation in earthquakes and the geometry of faulting

    NASA Astrophysics Data System (ADS)

    Madariaga, R.

    2004-12-01

    In a seminal paper of 1967, Kei Aki discovered the scaling law of earthquake spectra and showed that, among other things, the high frequency decay was of type omega-squared. This implies that high frequency displacement amplitudes are proportional to a characteristic length of the fault, and radiated energy scales with the cube of the fault dimension, just like seismic moment. Later in the seventies, it was found that a simple explanation for this frequency dependence of spectra was that high frequencies were generated by stopping phases, waves emitted by changes in speed of the rupture front as it propagates along the fault, but this did not explain the scaling of high frequency waves with fault length. Earthquake energy balance is such that, ignoring attenuation, radiated energy is the change in strain energy minus the energy released for overcoming friction. Until recently the latter was considered to be a material property that did not scale with fault size. Yet, in another classical paper, Aki and Das estimated in the late 70s that the energy release rate also scaled with earthquake size, because earthquakes were often stopped by barriers or changed rupture speed at them. This observation was independently confirmed in the late 90s by Ide and Takeo and by Olsen et al., who found that energy release rates for Kobe and Landers were on the order of 1 MJ/m2, implying that Gc necessarily scales with earthquake size, because if this were a material property, small earthquakes would never occur. Using both simple analytical and numerical models developed by Adda-Bedia and Aochi and Madariaga, we examine the consequence of these observations for the scaling of high frequency waves with fault size. We demonstrate, using some classical results by Kostrov, Husseini and Freund, that high frequency energy flow measures the energy release rate and is generated when ruptures change velocity (both direction and speed) at fault kinks or jogs. Our results explain why supershear ruptures are only observed when faults are relatively flat and smooth, and why complex geometry inhibits fast ruptures.

  10. Scaling up ART adherence clubs in the public sector health system in the Western Cape, South Africa: a study of the institutionalisation of a pilot innovation.

    PubMed

    MacGregor, Hayley; McKenzie, Andrew; Jacobs, Tanya; Ullauri, Angelica

    2018-04-25

    In 2011, a decision was made to scale up a pilot innovation involving 'adherence clubs' as a form of differentiated care for HIV positive people in the public sector antiretroviral therapy programme in the Western Cape Province of South Africa. In 2016 we were involved in the qualitative aspect of an evaluation of the adherence club model, the overall objective of which was to assess the health outcomes for patients accessing clubs through epidemiological analysis, and to conduct a health systems analysis to evaluate how the model of care performed at scale. In this paper we adopt a complex adaptive systems lens to analyse planned organisational change through intervention in a state health system. We explore the challenges associated with taking to scale a pilot that began as a relatively simple innovation by a non-governmental organisation. Our analysis reveals how a programme initially representing a simple, unitary system in terms of management and clinical governance had evolved into a complex, differentiated care system. An innovation that was assessed as an excellent idea and received political backing worked well whilst supported on a small scale. However, as scaling up progressed, challenges have emerged at the same time as support has waned. We identified a 'tipping point' at which the system was more likely to fail, as vulnerabilities magnified and the capacity for adaptation was exceeded. Yet the study also revealed the impressive capacity that a health system can have for catalysing novel approaches. We argue that innovation in large-scale, complex programmes in health systems is a continuous process that requires ongoing support and attention to new innovation as challenges emerge. Rapid scaling up is also likely to require recourse to further resources, and a culture of iterative learning to address emerging challenges and mitigate complex system errors. 
These are necessary steps to the future success of adherence clubs as a cornerstone of differentiated care. Further research is needed to assess the equity and quality outcomes of a differentiated care model and to ensure the inclusive distribution of the benefits to all categories of people living with HIV.

  11. Entropically Stabilized Colloidal Crystals Hold Entropy in Collective Modes

    NASA Astrophysics Data System (ADS)

    Antonaglia, James; van Anders, Greg; Glotzer, Sharon

    Ordered structures can be stabilized by entropy if the system has more ordered microstates available than disordered ones. However, "locating" the entropy in an ordered system is challenging because entropic ordering is necessarily a collective effort emerging from the interactions of large numbers of particles. Yet, we can characterize these crystals using simple traditional tools, because entropically stabilized crystals exhibit collective motion and effective stiffness. For a two-dimensional system of hard hexagons, we calculate the dispersion relations of both vibrational and librational collective modes. We find the librational mode is gapped, and the gap provides an emergent, macroscopic, and density-dependent length scale. We quantify the entropic contribution of each collective mode and find that below this length scale, the dominant entropic contributions are librational, and above this length scale, vibrations dominate. This length scale diverges in the high-density limit, so entropy is found predominantly in libration near dense packing. Supported by National Science Foundation Graduate Research Fellowship Program Grant No. DGE 1256260, Advanced Research Computing at the University of Michigan, Ann Arbor, and the Simons Foundation.

  12. Transport and Lagrangian Statistics in Rotating Stratified Turbulence

    NASA Astrophysics Data System (ADS)

    Rosenberg, D. L.

    2015-12-01

    Transport plays a crucial role in geophysical flows, both in the atmosphere and in the ocean. Transport in such flows is ultimately controlled by small-scale turbulence, although the large scales are in geostrophic balance between pressure gradient, gravity and Coriolis forces. As a result of the seemingly random nature of the flow, single particles are dispersed by the flow and on time scales significantly longer than the eddy turn-over time, they undergo a diffusive motion whose diffusion coefficient is the integral of the velocity correlation function. On intermediate time scales in homogeneous, isotropic turbulence (HIT), the separation between particle pairs has been argued to grow with time according to the Richardson law: <(Δx)²(t)> ~ t³, with a proportionality constant that depends on the initial particle separation. The description of the phenomena associated with the dispersion of single particles, or of particle pairs, ultimately rests on relatively simple statistical properties of the flow velocity transporting the particles, in particular on its temporal correlation function. In this work, we investigate particle dispersion in the anisotropic case of rotating stratified turbulence, examining whether the dependence on initial particle separation differs from HIT, particularly in the presence of an inverse cascade.
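    The single-particle statement above (diffusion coefficient equals the time integral of the velocity correlation function) can be illustrated with a velocity that has an exponential temporal correlation, for which D = σ²τ_c. This is a sketch of the statistical idea only, not the paper's rotating stratified simulations; parameters and seed are illustrative:

```python
import math
import random

def simulate_dispersion(n_particles=500, dt=0.01, t_end=20.0,
                        sigma=1.0, tau_c=1.0, seed=1):
    """Estimate the long-time diffusion coefficient of particles advected
    by an Ornstein-Uhlenbeck velocity (correlation time tau_c, rms sigma).
    Theory: D = integral of sigma^2 * exp(-t/tau_c) dt = sigma^2 * tau_c.
    """
    rng = random.Random(seed)
    n_steps = int(t_end / dt)
    msd = 0.0
    for _ in range(n_particles):
        v, x = rng.gauss(0.0, sigma), 0.0  # start from stationary velocity
        for _ in range(n_steps):
            x += v * dt
            # Euler-Maruyama step for the OU velocity
            v += -v * dt / tau_c + sigma * math.sqrt(2.0 * dt / tau_c) \
                * rng.gauss(0.0, 1.0)
        msd += x * x
    msd /= n_particles
    # Diffusive fit with the ballistic transient (~tau_c) subtracted
    return msd / (2.0 * (t_end - tau_c))

D_est = simulate_dispersion()  # theory predicts D = sigma^2 * tau_c = 1
```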

  13. Horizontal Variability of Water and Its Relationship to Cloud Fraction near the Tropical Tropopause: Using Aircraft Observations of Water Vapor to Improve the Representation of Grid-scale Cloud Formation in GEOS-5

    NASA Technical Reports Server (NTRS)

    Selkirk, Henry B.; Molod, Andrea M.

    2014-01-01

    Large-scale models such as GEOS-5 typically calculate grid-scale fractional cloudiness through a PDF parameterization of the sub-gridscale distribution of specific humidity. The GEOS-5 moisture routine uses a simple rectangular PDF varying in height that follows a tanh profile. While below 10 km this profile is informed by moisture information from the AIRS instrument, there is relatively little empirical basis for the profile above that level. ATTREX provides an opportunity to refine the profile using estimates of the horizontal variability of measurements of water vapor, total water and ice particles from the Global Hawk aircraft at or near the tropopause. These measurements will be compared with estimates of large-scale cloud fraction from CALIPSO and lidar retrievals from the CPL on the aircraft. We will use the variability measurements to perform studies of the sensitivity of the GEOS-5 cloud-fraction to various modifications to the PDF shape and to its vertical profile.

  14. Scaling and universality in urban economic diversification.

    PubMed

    Youn, Hyejin; Bettencourt, Luís M A; Lobo, José; Strumsky, Deborah; Samaniego, Horacio; West, Geoffrey B

    2016-01-01

    Understanding cities is central to addressing major global challenges from climate change to economic resilience. Although increasingly perceived as fundamental socio-economic units, the detailed fabric of urban economic activities is only recently accessible to comprehensive analyses with the availability of large datasets. Here, we study abundances of business categories across US metropolitan statistical areas, and provide a framework for measuring the intrinsic diversity of economic activities that transcends scales of the classification scheme. A universal structure common to all cities is revealed, manifesting self-similarity in internal economic structure as well as aggregated metrics (GDP, patents, crime). We present a simple mathematical derivation of the universality, and provide a model, together with its economic implications of open-ended diversity created by urbanization, for understanding the observed empirical distribution. Given the universal distribution, scaling analyses for individual business categories enable us to determine their relative abundances as a function of city size. These results shed light on the processes of economic differentiation with scale, suggesting a general structure for the growth of national economies as integrated urban systems. © 2016 The Authors.

  15. Scaling and universality in urban economic diversification

    PubMed Central

    Bettencourt, Luís M. A.; Lobo, José; Strumsky, Deborah; Samaniego, Horacio; West, Geoffrey B.

    2016-01-01

    Understanding cities is central to addressing major global challenges from climate change to economic resilience. Although increasingly perceived as fundamental socio-economic units, the detailed fabric of urban economic activities is only recently accessible to comprehensive analyses with the availability of large datasets. Here, we study abundances of business categories across US metropolitan statistical areas, and provide a framework for measuring the intrinsic diversity of economic activities that transcends scales of the classification scheme. A universal structure common to all cities is revealed, manifesting self-similarity in internal economic structure as well as aggregated metrics (GDP, patents, crime). We present a simple mathematical derivation of the universality, and provide a model, together with its economic implications of open-ended diversity created by urbanization, for understanding the observed empirical distribution. Given the universal distribution, scaling analyses for individual business categories enable us to determine their relative abundances as a function of city size. These results shed light on the processes of economic differentiation with scale, suggesting a general structure for the growth of national economies as integrated urban systems. PMID:26790997

  16. Revisiting a model of ontogenetic growth: estimating model parameters from theory and data.

    PubMed

    Moses, Melanie E; Hou, Chen; Woodruff, William H; West, Geoffrey B; Nekola, Jeffery C; Zuo, Wenyun; Brown, James H

    2008-05-01

    The ontogenetic growth model (OGM) of West et al. provides a general description of how metabolic energy is allocated between production of new biomass and maintenance of existing biomass during ontogeny. Here, we reexamine the OGM, make some minor modifications and corrections, and further evaluate its ability to account for empirical variation in rates of metabolism and biomass in vertebrates, both during ontogeny and across species of varying adult body size. We show that the updated version of the model is internally consistent and is consistent with other predictions of metabolic scaling theory and empirical data. The OGM predicts not only the near universal sigmoidal form of growth curves but also the M^(1/4) scaling of the characteristic times of ontogenetic stages, in addition to the curvilinear decline in growth efficiency described by Brody. Additionally, the OGM relates the M^(3/4) scaling across adults of different species to the scaling of metabolic rate across ontogeny within species. In providing a simple, quantitative description of how energy is allocated to growth, the OGM calls attention to unexplained variation, unanswered questions, and opportunities for future research.
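    The original West et al. growth equation that the OGM builds on can be sketched as follows (the paper's updated parameterization differs in detail; units and values here are arbitrary and illustrative):

```python
# OGM growth equation: dm/dt = a * m^(3/4) * (1 - (m/M)^(1/4)).
# Energy intake scales as m^(3/4), maintenance as m, so growth is
# sigmoidal and saturates at the asymptotic adult mass M. The quantity
# r = 1 - (m/M)^(1/4) decays as exp(-a t / (4 M^(1/4))), which is the
# source of the M^(1/4) scaling of characteristic ontogenetic times.

def grow(m0, M, a, t_end, dt=0.01):
    """Forward-Euler integration of the OGM growth equation."""
    m, t = m0, 0.0
    while t < t_end:
        m += a * m ** 0.75 * (1.0 - (m / M) ** 0.25) * dt
        t += dt
    return m

# Illustrative run: mass approaches (but never exceeds) M = 100
m_final = grow(m0=1.0, M=100.0, a=1.0, t_end=60.0)
```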

  17. Reconciling tensor and scalar observables in G-inflation

    NASA Astrophysics Data System (ADS)

    Ramírez, Héctor; Passaglia, Samuel; Motohashi, Hayato; Hu, Wayne; Mena, Olga

    2018-04-01

    The simple m²φ² potential as an inflationary model is coming under increasing tension with limits on the tensor-to-scalar ratio r and measurements of the scalar spectral index n_s. Cubic Galileon interactions in the context of the Horndeski action can potentially reconcile the observables. However, we show that this cannot be achieved with only a constant Galileon mass scale because the interactions turn off too slowly, leading also to gradient instabilities after inflation ends. Allowing for a more rapid transition can reconcile the observables but moderately breaks the slow-roll approximation, leading to a relatively large and negative running of the tilt α_s that can be of order n_s − 1. We show that the observables on CMB and large-scale structure scales can be predicted accurately using the optimized slow-roll approach instead of the traditional slow-roll expansion. Upper limits on |α_s| place a lower bound of r ≳ 0.005 and, conversely, a given r places a lower bound on |α_s|, both of which are potentially observable with next-generation CMB and large-scale structure surveys.

  18. Interspecies scaling and prediction of human clearance: comparison of small- and macro-molecule drugs

    PubMed Central

    Huh, Yeamin; Smith, David E.; Feng, Meihau Rose

    2014-01-01

    Human clearance prediction for small- and macro-molecule drugs was evaluated and compared using various scaling methods and statistical analysis. Human clearance is generally well predicted using single- or multiple-species simple allometry for macro- and small-molecule drugs excreted renally. The prediction error is higher for hepatically eliminated small molecules using single- or multiple-species simple allometry scaling, and it appears that the prediction error is mainly associated with drugs with a low hepatic extraction ratio (Eh). The error in human clearance prediction for hepatically eliminated small molecules was reduced using scaling methods with a correction for maximum life span (MLP) or brain weight (BRW). Human clearance of both small- and macro-molecule drugs is well predicted using the monkey liver blood flow method. Predictions using liver blood flow from other species did not work as well, especially for the small-molecule drugs. PMID:21892879
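    Simple allometry as referenced above fits log(CL) = log(a) + b·log(W) across animal species and extrapolates to a 70 kg human. A minimal sketch; the species clearances below are hypothetical values constructed to lie exactly on CL = 10·W^0.75 (real data scatter around such a line, which is where the prediction error arises):

```python
import math

def fit_allometry(weights, clearances):
    """Least-squares fit of log(CL) = log(a) + b*log(W); returns (a, b)."""
    xs = [math.log(w) for w in weights]
    ys = [math.log(c) for c in clearances]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

weights = [0.02, 0.25, 5.0, 10.0]                 # mouse, rat, monkey, dog (kg)
clearances = [10.0 * w ** 0.75 for w in weights]  # hypothetical, exactly allometric
a, b = fit_allometry(weights, clearances)
cl_human = a * 70.0 ** b  # simple-allometry prediction for a 70 kg human
```

    In the MLP or BRW corrections mentioned above, the quantity fitted is commonly CL × MLP (or CL × BRW) against body weight, with the human prediction divided back by the human value; the monkey liver blood flow method instead scales monkey clearance by the ratio of human to monkey liver blood flow.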

  19. A simple scaling approach to produce climate scenarios of local precipitation extremes for the Netherlands

    NASA Astrophysics Data System (ADS)

    Lenderink, Geert; Attema, Jisk

    2015-08-01

    Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface, as measured by the 2 m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new, unprecedentedly large 16-member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short term integrations with the non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources, also taking into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
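    The proportionality between precipitation extremes and the change in 2 m dew point can be sketched as a fixed fractional increase per degree. The 7 %/K rate below is an illustrative Clausius-Clapeyron-like assumption, not the scaling constant actually chosen in the scenarios:

```python
def scaled_extreme(p_now_mm, d_dewpoint_K, rate_per_K=0.07):
    """Scale a present-day precipitation extreme to a warmer climate.

    p_now_mm: present-day extreme (e.g. hourly) precipitation amount
    d_dewpoint_K: projected change in 2 m dew point temperature
    rate_per_K: assumed fractional increase per kelvin (CC-like)
    """
    return p_now_mm * (1.0 + rate_per_K) ** d_dewpoint_K

# Example: a 30 mm/h extreme under +2 K dew point warming
print(round(scaled_extreme(30.0, 2.0), 2))
```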

  20. Characteristic Sizes of Life in the Oceans, from Bacteria to Whales.

    PubMed

    Andersen, K H; Berge, T; Gonçalves, R J; Hartvig, M; Heuschele, J; Hylander, S; Jacobsen, N S; Lindemann, C; Martens, E A; Neuheimer, A B; Olsson, K; Palacz, A; Prowe, A E F; Sainmont, J; Traving, S J; Visser, A W; Wadhwa, N; Kiørboe, T

    2016-01-01

    The size of an individual organism is a key trait to characterize its physiology and feeding ecology. Size-based scaling laws may have a limited size range of validity or undergo a transition from one scaling exponent to another at some characteristic size. We collate and review data on size-based scaling laws for resource acquisition, mobility, sensory range, and progeny size for all pelagic marine life, from bacteria to whales. Further, we review and develop simple theoretical arguments for observed scaling laws and the characteristic sizes of a change or breakdown of power laws. We divide life in the ocean into seven major realms based on trophic strategy, physiology, and life history strategy. Such a categorization represents a move away from a taxonomically oriented description toward a trait-based description of life in the oceans. Finally, we discuss life forms that transgress the simple size-based rules and identify unanswered questions.

  1. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

    In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to solve fuzzy topological relations efficiently and more accurately. Extending the existing research and improving upon previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, building on these definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. This study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; comparisons show that the proposed method is more expressive than the existing fuzzy models and can compute relations that they cannot. PMID:25775452

  2. On the context-dependent scaling of consumer feeding rates.

    PubMed

    Barrios-O'Neill, Daniel; Kelly, Ruth; Dick, Jaimie T A; Ricciardi, Anthony; MacIsaac, Hugh J; Emmerson, Mark C

    2016-06-01

    The stability of consumer-resource systems can depend on the form of feeding interactions (i.e. functional responses). Size-based models predict interactions - and thus stability - based on consumer-resource size ratios. However, little is known about how interaction contexts (e.g. simple or complex habitats) might alter scaling relationships. Addressing this, we experimentally measured interactions between a large size range of aquatic predators (4-6400 mg over 1347 feeding trials) and an invasive prey that transitions among habitats: from the water column (3D interactions) to simple and complex benthic substrates (2D interactions). Simple and complex substrates mediated successive reductions in capture rates - particularly around the unimodal optimum - and promoted prey population stability in model simulations. Many real consumer-resource systems transition between 2D and 3D interactions, and along complexity gradients. Thus, Context-Dependent Scaling (CDS) of feeding interactions could represent an unrecognised aspect of food webs, and quantifying the extent of CDS might enhance predictive ecology. © The Authors. Ecology Letters published by CNRS and John Wiley & Sons Ltd.

  3. Accurate Modeling of Galaxy Clustering on Small Scales: Testing the Standard ΛCDM + Halo Model

    NASA Astrophysics Data System (ADS)

    Sinha, Manodeep; Berlind, Andreas A.; McBride, Cameron; Scoccimarro, Roman

    2015-01-01

    The large-scale distribution of galaxies can be explained fairly simply by assuming (i) a cosmological model, which determines the dark matter halo distribution, and (ii) a simple connection between galaxies and the halos they inhabit. This conceptually simple framework, called the halo model, has been remarkably successful at reproducing the clustering of galaxies on all scales, as observed in various galaxy redshift surveys. However, none of these previous studies have carefully modeled the systematics and thus truly tested the halo model in a statistically rigorous sense. We present a new accurate and fully numerical halo model framework and test it against clustering measurements from two luminosity samples of galaxies drawn from the SDSS DR7. We show that the simple ΛCDM cosmology + halo model is not able to simultaneously reproduce the galaxy projected correlation function and the group multiplicity function. In particular, the more luminous sample shows significant tension with theory. We discuss the implications of our findings and how this work paves the way for constraining galaxy formation by accurate simultaneous modeling of multiple galaxy clustering statistics.

  4. Experimental test of scaling of mixing by chaotic advection in droplets moving through microfluidic channels

    PubMed Central

    Song, Helen; Bringer, Michelle R.; Tice, Joshua D.; Gerdts, Cory J.; Ismagilov, Rustem F.

    2006-01-01

    This letter describes an experimental test of a simple argument that predicts the scaling of chaotic mixing in a droplet moving through a winding microfluidic channel. Previously, scaling arguments for chaotic mixing have been described for a flow that reduces striation length by stretching, folding, and reorienting the fluid in a manner similar to that of the baker’s transformation. The experimentally observed flow patterns within droplets (or plugs) resembled the baker’s transformation. Therefore, the ideas described in the literature could be applied to mixing in droplets to obtain the scaling argument for the dependence of the mixing time, t~(aw/U)log(Pe), where w [m] is the cross-sectional dimension of the microchannel, a is the dimensionless length of the plug measured relative to w, U [m s−1] is the flow velocity, Pe is the Péclet number (Pe=wU/D), and D [m2s−1] is the diffusion coefficient of the reagent being mixed. Experiments were performed to confirm the scaling argument by varying the parameters w, U, and D. Under favorable conditions, submillisecond mixing has been demonstrated in this system. PMID:17940580
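    The quoted scaling t ~ (aw/U)·log(Pe), with Pe = wU/D, can be evaluated directly. The parameter values below are illustrative microfluidic numbers, not the conditions of the experiments:

```python
import math

def mixing_time(w_m, a_dimless, U_m_s, D_m2_s):
    """Chaotic-advection mixing time t ~ (a*w/U) * log(Pe), with Pe = w*U/D."""
    Pe = w_m * U_m_s / D_m2_s
    return (a_dimless * w_m / U_m_s) * math.log(Pe)

# Illustrative values: 50 um channel, plug length 2.5x the width,
# 50 mm/s flow velocity, small-molecule diffusivity ~1e-9 m^2/s
t = mixing_time(50e-6, 2.5, 0.05, 1e-9)
print(f"Pe = {50e-6 * 0.05 / 1e-9:.0f}, mixing time ~ {t * 1e3:.2f} ms")
```

    The logarithmic dependence on Pe is what makes this mixing fast: doubling U halves the prefactor while only slightly increasing log(Pe).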

  5. Characterisation of edge turbulence in relation to edge magnetic field configuration in L-mode plasmas in the Mega Amp Spherical Tokamak.

    NASA Astrophysics Data System (ADS)

    Dewhurst, J.; Hnat, B.; Dudson, B.; Dendy, R. O.; Counsell, G. F.; Kirk, A.

    2007-12-01

    Almost all astrophysical and magnetically confined fusion plasmas are turbulent. Here, we examine ion saturation current (Isat) measurements of edge plasma turbulence for three MAST L-mode plasmas that differ primarily in their edge magnetic field configurations. First, absolute moments of the coarse grained data are examined to obtain accurate values of scaling exponents. Dual scaling behaviour is identified in all samples, with the temporal scale τ ≈ 40-60 μs separating the two regimes. Strong universality is then identified in the functional form of the probability density function (PDF) for Isat fluctuations, which is well approximated by the Fréchet distribution on temporal scales τ ≤ 40 μs. For temporal scales τ > 40 μs, the PDFs appear to converge to the Gumbel distribution, which has been previously identified as a universal feature of many other complex phenomena. The optimal fitting parameters k = 1.15 for Fréchet and a = 1.35 for Gumbel provide a simple quantitative characterisation of the full spectrum of fluctuations. We conclude that, to good approximation, the properties of the edge turbulence are independent of the edge magnetic field configuration.

  6. Characterization of edge turbulence in relation to edge magnetic field configuration in Ohmic L-mode plasmas in the Mega Amp Spherical Tokamak

    NASA Astrophysics Data System (ADS)

    Hnat, B.; Dudson, B. D.; Dendy, R. O.; Counsell, G. F.; Kirk, A.; MAST Team

    2008-08-01

    Ion saturation current (Isat) measurements of edge plasma turbulence are analysed for six MAST L-mode plasmas that differ primarily in their edge magnetic field configurations. The analysis techniques are designed to capture the strong nonlinearities of the datasets. First, absolute moments of the data are examined to obtain accurate values of scaling exponents. This confirms dual scaling behaviour in all samples, with the temporal scale τ ≈ 40-60 µs separating the two regimes. Strong universality is then identified in the functional form of the probability density function (PDF) for Isat fluctuations, which is well approximated by the Fréchet distribution on temporal scales τ ≤ 40 µs. For temporal scales τ > 40 µs, the PDFs appear to converge to the Gumbel distribution, which has been previously identified as a universal feature of many other complex phenomena. The optimal fitting parameters k = 1.15 for Fréchet and a = 1.35 for Gumbel provide a simple quantitative characterization of the full spectrum of fluctuations. It is concluded that, to good approximation, the properties of the edge turbulence are independent of the edge magnetic field configuration.
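    The two limiting distributions quoted in these entries can be written down and checked for normalization. The code below uses a unit-scale Fréchet density and one common generalized-Gumbel convention, evaluated at the fitted shapes k = 1.15 and a = 1.35; the location/scale conventions are assumptions of this sketch, not necessarily those of the papers:

```python
import math
import numpy as np

def frechet_pdf(x, k=1.15):
    """Unit-scale Frechet density: k * x^(-1-k) * exp(-x^(-k)), for x > 0."""
    return k * x ** (-1.0 - k) * np.exp(-x ** (-k))

def gen_gumbel_pdf(y, a=1.35):
    """Generalized Gumbel density in one common convention:
    a^a / Gamma(a) * exp(-a * (y + exp(-y)))."""
    return a ** a / math.gamma(a) * np.exp(-a * (y + np.exp(-y)))

def trapezoid(f, x):
    """Composite trapezoidal rule (avoids NumPy version differences)."""
    return float(np.sum((f[1:] + f[:-1]) * np.diff(x)) / 2.0)

# Both densities should integrate to ~1 over their support
x = np.linspace(1e-3, 2000.0, 400000)
y = np.linspace(-10.0, 40.0, 400000)
print(trapezoid(frechet_pdf(x), x), trapezoid(gen_gumbel_pdf(y), y))
```

    The heavy power-law tail of the Fréchet form (slow convergence of the integral) versus the double-exponential Gumbel tail is exactly what distinguishes the short- and long-timescale fluctuation statistics.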

  7. Study of near SOL decay lengths in ASDEX Upgrade under attached and detached divertor conditions

    NASA Astrophysics Data System (ADS)

    Sun, H. J.; Wolfrum, E.; Kurzan, B.; Eich, T.; Lackner, K.; Scarabosio, A.; Paradela Pérez, I.; Kardaun, O.; Faitsch, M.; Potzel, S.; Stroth, U.; the ASDEX Upgrade Team

    2017-10-01

    A database with attached, partially detached and completely detached divertors has been constructed from ASDEX Upgrade discharges in both H-mode and L-mode plasmas with Thomson scattering data suitable for the analysis of the upstream SOL electron profiles. By comparing the upstream temperature decay width, λ_Te,u, with the scaling of the SOL power decay width, λ_q∥e, based on the downstream IR measurements, it is found that a simple relation based on classical electron conduction relates λ_Te,u and λ_q∥e well. The combined dataset can be described by both a single scaling and separate scalings for H-modes and L-modes. For the single scaling, a strong inverse dependence of λ_Te,u on the separatrix temperature, T_e,u, is found, suggesting the classical parallel Spitzer-Härm conductivity as the dominant mechanism controlling the SOL width in both L-mode and H-mode over a large set of plasma parameters. This dependence on T_e,u explains why, for the same global plasma parameters, λ_q∥e in L-mode is approximately twice that in H-mode, and why, under detached conditions, the SOL upstream electron profile broadens when the density reaches a critical value. Comparing the derived scaling from experimental data with power balance gives the cross-field thermal diffusivity as χ⊥ ∝ T_e^(1/2)/n_e, consistent with earlier studies on COMPASS-D, JET and Alcator C-Mod. However, the possibility of separate scalings for different regimes cannot be excluded; these give results similar to those previously reported for the H-mode, but here the wider SOL width for L-mode plasmas is explained simply by a larger premultiplying coefficient. The relative merits of the two scalings in representing the data and their theoretical implications are discussed.

  8. Regional Scale Simulations of Nitrate Leaching through Agricultural Soils of California

    NASA Astrophysics Data System (ADS)

    Diamantopoulos, E.; Walkinshaw, M.; O'Geen, A. T.; Harter, T.

    2016-12-01

    Nitrate is recognized as one of California's most widespread groundwater contaminants. As opposed to point sources, which are relatively easy to identify as sources of contamination, non-point sources of nitrate are diffuse and linked with widespread use of fertilizers in agricultural soils. California's agricultural regions have an incredible diversity of soils that encompass a huge range of properties. This complicates studies dealing with nitrate risk assessment, since important biological and physicochemical processes occur within the first meters of the vadose zone. The objective of this study is to evaluate all agricultural soils in California according to their potential for nitrate leaching, based on numerical simulations using the Richards equation. We conducted simulations for 6000 unique soil profiles (over 22000 soil horizons) taking into account the effect of climate, crop type, irrigation and fertilization management scenarios. The final goal of this study is to evaluate simple management methods in terms of reduced nitrate leaching. We estimated drainage rates of water under the root zone and nitrate concentrations in the drain water at the regional scale. We present maps for all agricultural soils in California which can be used for risk assessment studies. Finally, our results indicate that adoption of simple irrigation and fertilization methods may significantly reduce nitrate leaching in vulnerable regions.

  9. Upgrades to the REA method for producing probabilistic climate change projections

    NASA Astrophysics Data System (ADS)

    Xu, Ying; Gao, Xuejie; Giorgi, Filippo

    2010-05-01

    We present an augmented version of the Reliability Ensemble Averaging (REA) method designed to generate probabilistic climate change information from ensembles of climate model simulations. Compared to the original version, the augmented one includes consideration of multiple variables and statistics in the calculation of the performance-based weights. In addition, the model convergence criterion previously employed is removed. The method is applied to the calculation of changes in mean and variability for temperature and precipitation over different sub-regions of East Asia based on the recently completed CMIP3 multi-model ensemble. Comparison of the new and old REA methods, along with the simple averaging procedure, and the use of different combinations of performance metrics shows that at fine sub-regional scales the choice of weighting is relevant. This is mostly because the models show a substantial spread in performance for the simulation of precipitation statistics, a result that supports the use of model weighting as a useful option to account for wide ranges of quality of models. The REA method, and in particular the upgraded one, provides a simple and flexible framework for assessing the uncertainty related to the aggregation of results from ensembles of models in order to produce climate change information at the regional scale. KEY WORDS: REA method, Climate change, CMIP3

  10. Computing local edge probability in natural scenes from a population of oriented simple cells

    PubMed Central

    Ramachandra, Chaithanya A.; Mel, Bartlett W.

    2013-01-01

    A key computation in visual cortex is the extraction of object contours, where the first stage of processing is commonly attributed to V1 simple cells. The standard model of a simple cell—an oriented linear filter followed by a divisive normalization—fits a wide variety of physiological data, but is a poor performing local edge detector when applied to natural images. The brain's ability to finely discriminate edges from nonedges therefore likely depends on information encoded by local simple cell populations. To gain insight into the corresponding decoding problem, we used Bayes's rule to calculate edge probability at a given location/orientation in an image based on a surrounding filter population. Beginning with a set of ∼ 100 filters, we culled out a subset that were maximally informative about edges, and minimally correlated to allow factorization of the joint on- and off-edge likelihood functions. Key features of our approach include a new, efficient method for ground-truth edge labeling, an emphasis on achieving filter independence, including a focus on filters in the region orthogonal rather than tangential to an edge, and the use of a customized parametric model to represent the individual filter likelihood functions. The resulting population-based edge detector has zero parameters, calculates edge probability based on a sum of surrounding filter influences, is much more sharply tuned than the underlying linear filters, and effectively captures fine-scale edge structure in natural scenes. Our findings predict nonmonotonic interactions between cells in visual cortex, wherein a cell may for certain stimuli excite and for other stimuli inhibit the same neighboring cell, depending on the two cells' relative offsets in position and orientation, and their relative activation levels. PMID:24381295
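    The Bayes's-rule decoding step described above, combining independent per-filter likelihoods into a posterior edge probability, can be sketched generically. The Gaussian likelihoods, prior, and filter responses below are assumptions for illustration, not the customized parametric likelihood model used in the study:

```python
import math

def edge_probability(responses, mu_on, mu_off, sigma, p_edge=0.1):
    """Posterior P(edge | responses) assuming independent Gaussian
    on-edge and off-edge likelihoods per filter (illustrative model)."""
    def log_gauss(x, mu):
        return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    log_on = sum(log_gauss(r, m) for r, m in zip(responses, mu_on))
    log_off = sum(log_gauss(r, m) for r, m in zip(responses, mu_off))
    # Bayes's rule in log-odds form for numerical stability
    log_odds = (log_on - log_off) + math.log(p_edge / (1 - p_edge))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Three hypothetical filters; edge-like responses push the posterior up
mu_on, mu_off = [1.0, 0.8, 0.6], [0.1, 0.1, 0.1]
print(edge_probability([0.9, 0.7, 0.5], mu_on, mu_off, sigma=0.3))
print(edge_probability([0.1, 0.0, 0.2], mu_on, mu_off, sigma=0.3))
```

    Factorizing the joint likelihoods into per-filter terms is only valid when the filters are approximately independent, which is why the study culled the filter set for minimal correlation.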

  11. Simple motor tasks independently predict extubation failure in critically ill neurological patients

    PubMed Central

    Kutchak, Fernanda Machado; Rieder, Marcelo de Mello; Victorino, Josué Almeida; Meneguzzi, Carla; Poersch, Karla; Forgiarini, Luiz Alberto; Bianchin, Marino Muxfeldt

    2017-01-01

    ABSTRACT Objective: To evaluate the usefulness of simple motor tasks such as hand grasping and tongue protrusion as predictors of extubation failure in critically ill neurological patients. Methods: This was a prospective cohort study conducted in the neurological ICU of a tertiary care hospital in the city of Porto Alegre, Brazil. Adult patients who had been intubated for neurological reasons and were eligible for weaning were included in the study. The ability of patients to perform simple motor tasks such as hand grasping and tongue protrusion was evaluated as a predictor of extubation failure. Data regarding duration of mechanical ventilation, length of ICU stay, length of hospital stay, mortality, and incidence of ventilator-associated pneumonia were collected. Results: A total of 132 intubated patients who had been receiving mechanical ventilation for at least 24 h and who passed a spontaneous breathing trial were included in the analysis. Logistic regression showed that patient inability to grasp the hand of the examiner (relative risk = 1.57; 95% CI: 1.01-2.44; p < 0.045) and protrude the tongue (relative risk = 6.84; 95% CI: 2.49-18.8; p < 0.001) were independent risk factors for extubation failure. Acute Physiology and Chronic Health Evaluation II scores (p = 0.02), Glasgow Coma Scale scores at extubation (p < 0.001), eye opening response (p = 0.001), MIP (p < 0.001), MEP (p = 0.006), and the rapid shallow breathing index (p = 0.03) were significantly different between the failed extubation and successful extubation groups. Conclusions: The inability to follow simple motor commands is predictive of extubation failure in critically ill neurological patients. Hand grasping and tongue protrusion on command might be quick and easy bedside tests to identify neurocritical care patients who are candidates for extubation. PMID:28746528

  12. Arabic translation, cultural adaptation, and validation study of Knee Outcome Survey: Activities of Daily Living Scale (KOS-ADLS).

    PubMed

    Algarni, Abdulrahman D; Alrabai, Hamza M; Al-Ahaideb, Abdulaziz; Kachanathu, Shaji John; AlShammari, Sulaiman A

    2017-09-01

    Knee complaints and their accompanying functional impairments are frequent problems encountered by healthcare practitioners worldwide. Many functional scoring systems have been developed and validated to provide a relative estimate of knee function. Despite the wide geographic distribution of the Arabic language in the Middle East and North Africa, it is rare to find a validated knee function scale in Arabic. The present study aims to translate, validate, and culturally adjust the Knee Outcome Survey: Activities of Daily Living Scale (KOS-ADLS) into Arabic for future use among Arabic-speaking patients. Permission for translation was obtained from the copyright holder. Two teams with high-level clinical and linguistic expertise conducted the translation process blindly. A forward-backward translation technique was implemented to ensure preservation of the main conceptual content. The main study consisted of 280 subjects. Reliability was examined in a test-retest pilot study. The Visual Analogue Scale (VAS), Get Up and Go (GUG) Test, Ascending/Descending Stairs (A/D Stairs), and Subjective Assessment of Function (SAF) were administered concurrently to establish the validity of the Arabic KOS-ADLS statistically in relation to these scales. The final translated version showed no significant discrepancies. Minor adaptive adjustment was required to fit the Arabian cultural background. Internal consistency was favourable (Cronbach's alpha 0.90). Patients' scoring on the Arabic KOS-ADLS appeared relatively consistent with their scoring on the VAS, GUG, A/D Stairs, and SAF. A significant linear relationship was demonstrated between SAF and total KOS-ADLS scores on regression analysis (adj. R² = 0.548). The Arabic KOS-ADLS, like its English counterpart, was found to be a simple, valid, and useful instrument for knee function evaluation. The Arabic version of KOS-ADLS represents a promising candidate for unconditional use among Arabic-speaking patients with knee complaints.

  13. Probabilistic inference of ecohydrological parameters using observations from point to satellite scales

    NASA Astrophysics Data System (ADS)

    Bassiouni, Maoya; Higgins, Chad W.; Still, Christopher J.; Good, Stephen P.

    2018-06-01

    Vegetation controls on soil moisture dynamics are challenging to measure and translate into scale- and site-specific ecohydrological parameters for simple soil water balance models. We hypothesize that empirical probability density functions (pdfs) of relative soil moisture or soil saturation encode sufficient information to determine these ecohydrological parameters. Further, these parameters can be estimated through inverse modeling of the analytical equation for soil saturation pdfs, derived from the commonly used stochastic soil water balance framework. We developed a generalizable Bayesian inference framework to estimate ecohydrological parameters consistent with empirical soil saturation pdfs derived from observations at point, footprint, and satellite scales. We applied the inference method to four sites with different land cover and climate assuming (i) an annual rainfall pattern and (ii) a wet season rainfall pattern with a dry season of negligible rainfall. The Nash-Sutcliffe efficiencies of the analytical model's fit to soil observations ranged from 0.89 to 0.99. The coefficient of variation of posterior parameter distributions ranged from < 1 to 15 %. The parameter identifiability was not significantly improved in the more complex seasonal model; however, small differences in parameter values indicate that the annual model may have absorbed dry season dynamics. Parameter estimates were most constrained for scales and locations at which soil water dynamics are more sensitive to the fitted ecohydrological parameters of interest. In these cases, model inversion converged more slowly but ultimately provided better goodness of fit and lower uncertainty. Results were robust using as few as 100 daily observations randomly sampled from the full records, demonstrating the advantage of analyzing soil saturation pdfs instead of time series to estimate ecohydrological parameters from sparse records. Our work combines modeling and empirical approaches in ecohydrology and provides a simple framework to obtain scale- and site-specific analytical descriptions of soil moisture dynamics consistent with soil moisture observations.
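    The inverse-modeling idea, fitting a parametric pdf to empirical soil-saturation observations via Bayes, can be illustrated with a toy grid posterior. The Beta-distributed synthetic "saturation" data and the flat prior are assumptions of this sketch, not the stochastic water-balance pdf or inference machinery used in the study:

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)
s_obs = rng.beta(4.0, 2.0, size=500)   # synthetic soil saturation values in (0, 1)

def log_beta_pdf(s, a, b=2.0):
    """Log density of Beta(a, b), evaluated elementwise on s."""
    return ((a - 1.0) * np.log(s) + (b - 1.0) * np.log(1.0 - s)
            + lgamma(a + b) - lgamma(a) - lgamma(b))

# Grid posterior over the first shape parameter (flat prior, b held fixed)
a_grid = np.linspace(1.0, 8.0, 701)
log_post = np.array([log_beta_pdf(s_obs, a).sum() for a in a_grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()
a_hat = a_grid[np.argmax(post)]
print(f"posterior mode a = {a_hat:.2f} (data generated with a = 4.0)")
```

    The same recipe (likelihood of the observed saturation sample under the analytical pdf, evaluated over a parameter grid or sampled by MCMC) applies regardless of the functional form of the pdf.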

  14. THE FATE OF PLANETESIMALS IN TURBULENT DISKS WITH DEAD ZONES. I. THE TURBULENT STIRRING RECIPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Okuzumi, Satoshi; Ormel, Chris W., E-mail: okuzumi@geo.titech.ac.jp

    2013-07-01

    Turbulence in protoplanetary disks affects planet formation in many ways. While small dust particles are mainly affected by the aerodynamical coupling with turbulent gas velocity fields, planetesimals and larger bodies are more affected by gravitational interaction with gas density fluctuations. For the latter process, a number of numerical simulations have been performed in recent years, but a fully parameter-independent understanding has not yet been established. In this study, we present simple scaling relations for the planetesimal stirring rate in turbulence driven by magnetorotational instability (MRI), taking into account the stabilization of MRI due to ohmic resistivity. We begin with order-of-magnitude estimates of the turbulence-induced gravitational force acting on solid bodies and associated diffusion coefficients for their orbital elements. We then test the predicted scaling relations using the results of recent ohmic-resistive MHD simulations by Gressel et al. We find that these relations successfully explain the simulation results if we properly fix order-of-unity uncertainties within the estimates. We also update the saturation predictor for the density fluctuation amplitude in MRI-driven turbulence originally proposed by Okuzumi and Hirose. Combination of the scaling relations and saturation predictor allows us to know how the turbulent stirring rate of planetesimals depends on disk parameters such as the gas column density, distance from the central star, vertical resistivity distribution, and net vertical magnetic flux. In Paper II, we apply our recipe to planetesimal accretion to discuss its viability in turbulent disks.

  15. Analysing attitude data through ridit schemes.

    PubMed

    El-rouby, M G

    1994-12-02

    The attitudes of individuals and populations on various issues are usually assessed through sample surveys. Responses to survey questions are then scaled and combined into a meaningful whole which defines the measured attitude. The applied scales may be of nominal, ordinal, interval, or ratio nature depending upon the degree of sophistication the researcher wants to introduce into the measurement. This paper discusses methods of analysis for categorical variables of the type used in attitude and human behavior research, and recommends adoption of ridit analysis, a technique which has been successfully applied to epidemiological, clinical investigation, laboratory, and microbiological data. The ridit methodology is described after reviewing some general attitude scaling methods and problems of analysis related to them. The ridit method is then applied to a recent study conducted to assess health care service quality in North Carolina. This technique is conceptually and computationally simpler than other conventional statistical methods, and is also distribution-free. Basic requirements and limitations on its use are indicated.
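    Ridit scoring itself is simple to compute: each ordered category's ridit is the proportion of the reference distribution falling below that category plus half the proportion falling within it. A minimal sketch with hypothetical Likert-scale counts:

```python
def ridits(counts):
    """Ridit score per ordered category: P(below) + 0.5 * P(within),
    computed against the distribution supplied as reference."""
    n = sum(counts)
    scores, below = [], 0
    for c in counts:
        scores.append((below + 0.5 * c) / n)
        below += c
    return scores

# Hypothetical 5-point counts, "strongly disagree" .. "strongly agree"
ref = [10, 25, 30, 25, 10]
r = ridits(ref)
mean_ridit = sum(c * s for c, s in zip(ref, r)) / sum(ref)
print(r, mean_ridit)  # the mean ridit of the reference group is always 0.5
```

    A comparison group's mean ridit against this reference then has a direct interpretation: the probability that a randomly chosen member of the comparison group scores higher than a randomly chosen member of the reference group.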

  16. From damselflies to pterosaurs: how burst and sustainable flight performance scale with size.

    PubMed

    Marden, J H

    1994-04-01

    Recent empirical data for short-burst lift and power production of flying animals indicate that mass-specific lift and power output scale independently (lift) or slightly positively (power) with increasing size. These results contradict previous theory, as well as simple observation, which argues for degradation of flight performance with increasing size. Here, empirical measures of lift and power during short-burst exertion are combined with empirically based estimates of maximum muscle power output in order to predict how burst and sustainable performance scale with body size. The resulting model is used to estimate performance of the largest extant flying birds and insects, along with the largest flying animals known from fossils. These estimates indicate that burst flight performance capacities of even the largest extinct fliers (estimated mass 250 kg) would allow takeoff from the ground; however, limitations on sustainable power output should constrain capacity for continuous flight at body sizes exceeding 0.003-1.0 kg, depending on relative wing length and flight muscle mass.

  17. Scaling law and enhancement of lift generation of an insect-size hovering flexible wing

    PubMed Central

    Kang, Chang-kwon; Shyy, Wei

    2013-01-01

    We report a comprehensive scaling law and novel lift generation mechanisms relevant to the aerodynamic functions of structural flexibility in insect flight. Using a Navier–Stokes equation solver, fully coupled to a structural dynamics solver, we consider the hovering motion of a wing of insect size, in which the dynamics of fluid–structure interaction leads to passive wing rotation. Lift generated on the flexible wing scales with the relative shape deformation parameter, whereas the optimal lift is obtained when the wing deformation synchronizes with the imposed translation, consistent with previously reported observations for fruit flies and honeybees. Systematic comparisons with rigid wings illustrate that the nonlinear response in wing motion results in a greater peak angle compared with a simple harmonic motion, yielding higher lift. Moreover, the compliant wing streamlines its shape via camber deformation to mitigate the nonlinear lift-degrading wing–wake interaction to further enhance lift. These bioinspired aeroelastic mechanisms can be used in the development of flapping wing micro-robots. PMID:23760300

  18. Assessment of Error in Synoptic-Scale Diagnostics Derived from Wind Profiler and Radiosonde Network Data

    NASA Technical Reports Server (NTRS)

    Mace, Gerald G.; Ackerman, Thomas P.

    1996-01-01

    A topic of current practical interest is the accurate characterization of the synoptic-scale atmospheric state from wind profiler and radiosonde network observations. We have examined several related and commonly applied objective analysis techniques for performing this characterization and considered their associated level of uncertainty both from a theoretical and a practical standpoint. A case study is presented where two wind profiler triangles with nearly identical centroids and no common vertices produced strikingly different results during a 43-h period. We conclude that the uncertainty in objectively analyzed quantities can easily be as large as the expected synoptic-scale signal. In order to quantify the statistical precision of the algorithms, we conducted a realistic observing system simulation experiment using output from a mesoscale model. A simple parameterization for estimating the uncertainty in horizontal gradient quantities in terms of known errors in the objectively analyzed wind components and temperature is developed from these results.

  19. Shear banding leads to accelerated aging dynamics in a metallic glass

    NASA Astrophysics Data System (ADS)

    Küchemann, Stefan; Liu, Chaoyang; Dufresne, Eric M.; Shin, Jeremy; Maaß, Robert

    2018-01-01

    Traditionally, strain localization in metallic glasses is related to the thickness of the shear defect, which is confined to the nanometer scale. Using site-specific x-ray photon correlation spectroscopy, we reveal significantly accelerated relaxation dynamics around a shear band in a metallic glass at a length scale that is orders of magnitude larger than the defect itself. The relaxation time in the shear-band vicinity is up to ten times smaller than in the as-cast matrix, and the relaxation dynamics follow a characteristic three-stage aging response that manifests itself in the temperature-dependent shape parameter known from classical stretched-exponential relaxation dynamics of disordered materials. We demonstrate that the time-dependent correlation functions describing the aging at different temperatures can be captured and collapsed using simple scaling functions. These insights highlight how a ubiquitous nanoscale strain-localization mechanism in metallic glasses leads to a fundamental change of the relaxation dynamics at the mesoscale.

  20. Strengths use as a secret of happiness: Another dimension of visually impaired individuals' psychological state.

    PubMed

    Matsuguma, Shinichiro; Kawashima, Motoko; Negishi, Kazuno; Sano, Fumiya; Mimura, Masaru; Tsubota, Kazuo

    2018-01-01

    It is well recognized that visual impairments (VI) worsen individuals' mental condition. However, little is known about the positive aspects, including subjective happiness, positive emotions, and strengths. Therefore, the purpose of this study was to investigate the positive aspects of persons with VI, including their subjective happiness, positive emotions, and strengths use. Positive aspects of persons with VI were measured using the Subjective Happiness Scale (SHS), the Scale of Positive and Negative Experience-Balance (SPANE-B), and the Strengths Use Scale (SUS). A cross-sectional analysis was utilized to examine personal information in a Tokyo sample (N = 44). We used a simple regression analysis and found significant relationships between the SHS or SPANE-B and SUS; in contrast, VI-related variables were not correlated with them. A multiple regression analysis confirmed that SUS was a significant factor associated with both the SHS and SPANE-B. Strengths use might be a possible protective factor against the negative effects of VI.

  1. Multifractality of cerebral blood flow

    NASA Astrophysics Data System (ADS)

    West, Bruce J.; Latka, Miroslaw; Glaubic-Latka, Marta; Latka, Dariusz

    2003-02-01

    Scale invariance, the property relating time series across multiple scales, has provided a new perspective of physiological phenomena and their underlying control systems. The traditional “signal plus noise” paradigm of the engineer was first replaced with a model in which biological time series have a fractal structure in time (Fractal Physiology, Oxford University Press, Oxford, 1994). This new paradigm was subsequently shown to be overly restrictive when certain physiological signals were found to be characterized by more than one scaling parameter and therefore to belong to a class of more complex processes known as multifractals (Fractals, Plenum Press, New York, 1988). Here we demonstrate that in addition to heart rate (Nature 399 (1999) 461) and human gait (Phys. Rev. E, submitted for publication), the nonlinear control system for cerebral blood flow (CBF) (Phys. Rev. Lett., submitted for publication; Phys. Rev. E 59 (1999) 3492) is multifractal. We also find that this multifractality is greatly reduced for subjects with “serious” migraine and we present a simple model for the underlying control process to describe this effect.

  2. How much does a tokamak reactor cost?

    NASA Astrophysics Data System (ADS)

    Freidberg, J.; Cerfon, A.; Ballinger, S.; Barber, J.; Dogra, A.; McCarthy, W.; Milanese, L.; Mouratidis, T.; Redman, W.; Sandberg, A.; Segal, D.; Simpson, R.; Sorensen, C.; Zhou, M.

    2017-10-01

    The cost of a fusion reactor is of critical importance to its ultimate acceptability as a commercial source of electricity. While there are general rules of thumb for scaling both overnight cost and levelized cost of electricity, the corresponding relations are not very accurate or universally agreed upon. We have carried out a series of scaling studies of tokamak reactor costs based on reasonably sophisticated plasma and engineering models. The analysis is largely analytic, requiring only a simple numerical code, thus allowing a very large number of designs. Importantly, the studies are aimed at plasma physicists rather than fusion engineers. The goals are to assess the pros and cons of steady state burning plasma experiments and reactors. One specific set of results discusses the benefits of higher magnetic fields, now possible because of the recent development of high-Tc rare-earth superconductors (REBCO); with this goal in mind, we calculate quantitative expressions, including both scaling and multiplicative constants, for cost and major radius as a function of central magnetic field.

  3. Linear Scaling of the Exciton Binding Energy versus the Band Gap of Two-Dimensional Materials

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Ho; Cui, Ping; Lan, Haiping; Zhang, Zhenyu

    2015-08-01

    The exciton is one of the most crucial physical entities in the performance of optoelectronic and photonic devices, and widely varying exciton binding energies have been reported in different classes of materials. Using first-principles calculations within the GW-Bethe-Salpeter equation approach, here we investigate the excitonic properties of two recently discovered layered materials: phosphorene and graphene fluoride. We first confirm large exciton binding energies of, respectively, 0.85 and 2.03 eV in these systems. Next, by comparing these systems with several other representative two-dimensional materials, we discover a striking linear relationship between the exciton binding energy and the band gap and interpret the existence of the linear scaling law within a simple hydrogenic picture. The broad applicability of this novel scaling law is further demonstrated by using strained graphene fluoride. These findings are expected to stimulate related studies in higher and lower dimensions, potentially resulting in a deeper understanding of excitonic effects in materials of all dimensionalities.
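
The linear band-gap scaling reported above amounts to an ordinary least-squares fit of binding energy against gap. A minimal sketch with invented toy values follows; the (gap, binding energy) pairs and the resulting slope of roughly 0.25 are properties of these toy numbers, not data or claims from the paper:

```python
# Hypothetical (band gap, exciton binding energy) pairs in eV for several
# 2D materials; invented toy numbers, not data from the paper.
data = [(1.0, 0.3), (2.0, 0.55), (3.5, 0.9), (5.0, 1.3), (7.0, 1.8)]

# Ordinary least-squares fit of E_b = a * E_gap + b.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n
print(f"E_b ≈ {a:.3f} * E_gap + {b:.3f}  (eV)")
```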

  4. Measurements of strain at plate boundaries using space based geodetic techniques

    NASA Technical Reports Server (NTRS)

    Robaudo, Stefano; Harrison, Christopher G. A.

    1993-01-01

    We have used the space based geodetic techniques of Satellite Laser Ranging (SLR) and VLBI to study strain along subduction and transform plate boundaries and have interpreted the results using a simple elastic dislocation model. Six stations located behind island arcs were analyzed as representative of subduction zones while 13 sites located on either side of the San Andreas fault were used for the transcurrent zones. The length deformation scale was then calculated for both tectonic margins by fitting the relative strain to an exponentially decreasing function of distance from the plate boundary. Results show that space-based data for the transcurrent boundary along the San Andreas fault help to better define the deformation length scale in the area while fitting the elastic half-space Earth model well. For subduction type boundaries the analysis indicates that there is no single scale length which uniquely describes the deformation. This is mainly due to the difference in subduction characteristics for the different areas.
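
Fitting strain to an exponentially decreasing function of distance, as described above, reduces to a log-linear least-squares problem. A hedged sketch on noiseless synthetic data; the amplitude, length scale, and sample distances are hypothetical, not values from the study:

```python
import math

# Synthetic strain-vs-distance samples drawn from eps0 * exp(-x / L_true);
# amplitude, length scale, and distances are hypothetical.
eps0, L_true = 1.0e-6, 150.0           # strain amplitude, length scale (km)
xs = [0.0, 50.0, 100.0, 200.0, 400.0]  # distances from the plate boundary (km)
ys = [eps0 * math.exp(-x / L_true) for x in xs]

# Linearize ln(eps) = ln(eps0) - x / L, then least-squares for the slope.
lys = [math.log(y) for y in ys]
n = len(xs)
sx, sy = sum(xs), sum(lys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, lys))
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
L_fit = -1.0 / slope
print(f"recovered deformation length scale: {L_fit:.1f} km")
```

With noiseless data the fit recovers the generating length scale exactly; real geodetic data would of course scatter about the curve.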

  5. Steps to reconcile inflationary tensor and scalar spectra

    NASA Astrophysics Data System (ADS)

    Miranda, Vinícius; Hu, Wayne; Adshead, Peter

    2014-05-01

    The recent BICEP2 B-mode polarization determination of an inflationary tensor-scalar ratio r = 0.2 (+0.07/-0.05) is in tension with simple scale-free models of inflation due to a lack of a corresponding low multipole excess in the temperature power spectrum which places a limit of r_0.002 < 0.11 (95% C.L.) on such models. Single-field inflationary models that reconcile these two observations, even those where the tilt runs substantially, introduce a scale into the scalar power spectrum. To cancel the tensor excess, and simultaneously remove the excess already present without tensors, ideally the model should introduce this scale as a relatively sharp transition in the tensor-scalar ratio around the horizon at recombination. We consider models which generate such a step in this quantity and find that they can improve the joint fit to the temperature and polarization data by up to 2Δln L ≈ -14 without changing cosmological parameters. Precision E-mode polarization measurements should be able to test this explanation.

  6. Minimal model for a hydrodynamic fingering instability in microroller suspensions

    NASA Astrophysics Data System (ADS)

    Delmotte, Blaise; Donev, Aleksandar; Driscoll, Michelle; Chaikin, Paul

    2017-11-01

    We derive a minimal continuum model to investigate the hydrodynamic mechanism behind the fingering instability recently discovered in a suspension of microrollers near a floor [M. Driscoll et al., Nat. Phys. 13, 375 (2017), 10.1038/nphys3970]. Our model, consisting of two continuous lines of rotlets, exhibits a linear instability driven only by hydrodynamic interactions and reproduces the length-scale selection observed in large-scale particle simulations and in experiments. By adjusting only one parameter, the distance between the two lines, our dispersion relation exhibits quantitative agreement with the simulations and qualitative agreement with experimental measurements. Our linear stability analysis indicates that this instability is caused by the combination of the advective and transverse flows generated by the microrollers near a no-slip surface. Our simple model offers an interesting formalism to characterize other hydrodynamic instabilities that have not been well understood, such as size scale selection in suspensions of particles sedimenting adjacent to a wall, or the recently observed formations of traveling phonons in systems of confined driven particles.

  7. ADHD Diagnosis: As Simple As Administering a Questionnaire or a Complex Diagnostic Process?

    PubMed

    Parker, Ashton; Corkum, Penny

    2016-06-01

    The present study investigated the validity of using the Conners' Teacher and Parent Rating Scales (CTRS/CPRS) or semistructured diagnostic interviews (Parent Interview for Child Symptoms and Teacher Telephone Interview) to predict a best-practices clinical diagnosis of ADHD. A total of 279 children received a clinical diagnosis based on a best-practices comprehensive assessment (including diagnostic parent and teacher interviews, collection of historical information, rating scales, classroom observations, and a psychoeducational assessment) at a specialty ADHD Clinic in Truro, Nova Scotia, Canada. Sensitivity and specificity with clinical diagnosis were determined for the rating scales and diagnostic interviews. Sensitivity and specificity values were high for the diagnostic interviews (91.8% and 70.7%, respectively). However, while sensitivity of the CTRS/CPRS was relatively high (83.5%), specificity was poor (35.7%). The low specificity of the CPRS/CTRS is not sufficient to be used alone to diagnose ADHD. (J. of Att. Dis. 2016; 20(6) 478-486). © The Author(s) 2013.
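
Sensitivity and specificity, as used above, are simple confusion-matrix ratios. The counts below are hypothetical back-calculations chosen only to reproduce the reported CTRS/CPRS percentages (83.5% and 35.7%); they are not the study's actual cell counts:

```python
# Sensitivity and specificity from a 2x2 confusion matrix. Counts are
# hypothetical, chosen to match the reported percentages, not the paper's table.
tp, fn = 81, 16    # clinically ADHD: flagged / missed by the rating scales
tn, fp = 65, 117   # clinically non-ADHD: correctly cleared / falsely flagged

sensitivity = tp / (tp + fn)   # true positive rate
specificity = tn / (tn + fp)   # true negative rate
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

The low specificity means roughly two of every three children without ADHD would be falsely flagged by the rating scales alone, which is why the abstract concludes they cannot stand alone as a diagnostic tool.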

  8. [Interdependence of plankton spatial distribution and plankton biomass temporal oscillations: mathematical simulation].

    PubMed

    Medvedinskiĭ, A B; Tikhonova, I A; Li, B L; Malchow, H

    2003-01-01

    The dynamics of aquatic biological communities in a patchy environment is of great interest with respect to interrelations between phenomena at various spatial and time scales. To study the complex plankton dynamics in relation to variations of such a biologically essential parameter as the fish predation rate, we use a simple reaction-diffusion model of trophic interactions between phytoplankton, zooplankton, and fish. We suggest that plankton is distributed between two habitats, one of which is fish-free due to hydrological inhomogeneity, while the other is fish-populated. We show that temporal variations in the fish predation rate do not violate the strong correspondence between the character of spatial distribution of plankton and changes of plankton biomass in time: regular temporal oscillations of plankton biomass correspond to large-scale plankton patches, while chaotic oscillations correspond to small-scale plankton patterns. As in the case of the constant fish predation rate, the chaotic plankton dynamics is characterized by coexistence of the chaotic attractor and limit cycle.

  9. Freeze-drying process monitoring using a cold plasma ionization device.

    PubMed

    Mayeresse, Y; Veillon, R; Sibille, P H; Nomine, C

    2007-01-01

    A cold plasma ionization device has been designed to monitor freeze-drying processes in situ by monitoring lyophilization chamber moisture content. This plasma device, which consists of a probe that can be mounted directly on the lyophilization chamber, depends upon the ionization of nitrogen and water molecules using a radiofrequency generator and spectrometric signal collection. The study performed on this probe shows that it is steam sterilizable, simple to integrate, reproducible, and sensitive. The limitations include suitable positioning in the lyophilization chamber, calibration, and signal integration. Sensitivity was evaluated in relation to the quantity of vials and the probe positioning, and correlation with existing methods, such as microbalance, was established. These tests verified signal reproducibility through three freeze-drying cycles. Scaling-up studies demonstrated a similar product signature for the same product using pilot-scale and larger-scale equipment. On an industrial scale, the method efficiently monitored the freeze-drying cycle, but in a larger industrial freeze-dryer the signal was slightly modified. This was mainly due to the positioning of the plasma device, in relation to the vapor flow pathway, which is not necessarily homogeneous within the freeze-drying chamber. The plasma tool is a relevant method for monitoring freeze-drying processes and may in the future allow the verification of current thermodynamic freeze-drying models. This plasma technique may ultimately represent a process analytical technology (PAT) approach for the freeze-drying process.

  10. Linking climate projections to performance: A yield-based decision scaling assessment of a large urban water resources system

    NASA Astrophysics Data System (ADS)

    Turner, Sean W. D.; Marlow, David; Ekström, Marie; Rhodes, Bruce G.; Kularathna, Udaya; Jeffrey, Paul J.

    2014-04-01

    Despite a decade of research into climate change impacts on water resources, the scientific community has delivered relatively few practical methodological developments for integrating uncertainty into water resources system design. This paper presents an application of the "decision scaling" methodology for assessing climate change impacts on water resources system performance and asks how such an approach might inform planning decisions. The decision scaling method reverses the conventional ethos of climate impact assessment by first establishing the climate conditions that would compel planners to intervene. Climate model projections are introduced at the end of the process to characterize climate risk in such a way that avoids the process of propagating those projections through hydrological models. Here we simulated 1000 multisite synthetic monthly streamflow traces in a model of the Melbourne bulk supply system to test the sensitivity of system performance to variations in streamflow statistics. An empirical relation was derived to convert decision-critical flow statistics to climatic units, against which 138 alternative climate projections were plotted and compared. We defined the decision threshold in terms of a system yield metric constrained by multiple performance criteria. Our approach allows for fast and simple incorporation of demand forecast uncertainty and demonstrates the reach of the decision scaling method through successful execution in a large and complex water resources system. Scope for wider application in urban water resources planning is discussed.

  11. Economic decision making and the application of nonparametric prediction models

    USGS Publications Warehouse

    Attanasi, E.D.; Coburn, T.C.; Freeman, P.A.

    2008-01-01

    Sustained increases in energy prices have focused attention on gas resources in low-permeability shale or in coals that were previously considered economically marginal. Daily well deliverability is often relatively small, although the estimates of the total volumes of recoverable resources in these settings are often large. Planning and development decisions for extraction of such resources must be areawide because profitable extraction requires optimization of scale economies to minimize costs and reduce risk. For an individual firm, the decision to enter such plays depends on reconnaissance-level estimates of regional recoverable resources and on cost estimates to develop untested areas. This paper shows how simple nonparametric local regression models, used to predict technically recoverable resources at untested sites, can be combined with economic models to compute regional-scale cost functions. The context of the worked example is the Devonian Antrim-shale gas play in the Michigan basin. One finding relates to selection of the resource prediction model to be used with economic models. Models chosen because they can best predict aggregate volume over larger areas (many hundreds of sites) smooth out granularity in the distribution of predicted volumes at individual sites. This loss of detail affects the representation of economic cost functions and may affect economic decisions. Second, because some analysts consider unconventional resources to be ubiquitous, the selection and order of specific drilling sites may, in practice, be determined arbitrarily by extraneous factors. The analysis shows a 15-20% gain in gas volume when these simple models are applied to order drilling prospects strategically rather than to choose drilling locations randomly. Copyright © 2008 Society of Petroleum Engineers.

  12. Functional approach to exploring climatic and landscape controls of runoff generation: 1. Behavioral constraints on runoff volume

    NASA Astrophysics Data System (ADS)

    Li, Hong-Yi; Sivapalan, Murugesu; Tian, Fuqiang; Harman, Ciaran

    2014-12-01

    Inspired by the Dunne diagram, the climatic and landscape controls on the partitioning of annual runoff into its various components (Hortonian and Dunne overland flow and subsurface stormflow) are assessed quantitatively, from a purely theoretical perspective. A simple distributed hydrologic model has been built sufficient to simulate the effects of different combinations of climate, soil, and topography on the runoff generation processes. The model is driven by a sequence of simple hypothetical precipitation events, for a large combination of climate and landscape properties, and hydrologic responses at the catchment scale are obtained through aggregation of grid-scale responses. It is found, first, that the water balance responses, including relative contributions of different runoff generation mechanisms, could be related to a small set of dimensionless similarity parameters. These capture the competition between the wetting, drying, storage, and drainage functions underlying the catchment responses, and in this way, provide a quantitative approximation of the conceptual Dunne diagram. Second, only a subset of all hypothetical catchment/climate combinations is found to be "behavioral," in terms of falling sufficiently close to the Budyko curve, describing mean annual runoff as a function of climate aridity. Furthermore, these behavioral combinations are mostly consistent with the qualitative picture presented in the Dunne diagram, indicating clearly the commonality between the Budyko curve and the Dunne diagram. These analyses also suggest clear interrelationships amongst the "behavioral" climate, soil, and topography parameter combinations, implying these catchment properties may be constrained to be codependent in order to satisfy the Budyko curve.

  13. The CSIRO Mk3L climate system model v1.0 coupled to the CABLE land surface scheme v1.4b: evaluation of the control climatology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mao, Jiafu; Phipps, S.J.; Pitman, A.J.

    The CSIRO Mk3L climate system model, a reduced-resolution coupled general circulation model, has previously been described in this journal. The model is configured for millennium scale or multiple century scale simulations. This paper reports the impact of replacing the relatively simple land surface scheme that is the default parameterisation in Mk3L with a sophisticated land surface model that simulates the terrestrial energy, water and carbon balance in a physically and biologically consistent way. An evaluation of the new model's near-surface climatology highlights strengths and weaknesses, but overall the atmospheric variables, including the near-surface air temperature and precipitation, are simulated well. The impact of the more sophisticated land surface model on existing variables is relatively small, but generally positive. More significantly, the new land surface scheme allows an examination of surface carbon-related quantities including net primary productivity, which adds significantly to the capacity of Mk3L. Overall, results demonstrate that this reduced-resolution climate model is a good foundation for exploring long time scale phenomena. The addition of the more sophisticated land surface model enables an exploration of important Earth System questions including land cover change and abrupt changes in terrestrial carbon storage.

  14. Dynamical Mass Measurements of Contaminated Galaxy Clusters Using Support Distribution Machines

    NASA Astrophysics Data System (ADS)

    Ntampaka, Michelle; Trac, Hy; Sutherland, Dougal; Fromenteau, Sebastien; Poczos, Barnabas; Schneider, Jeff

    2018-01-01

    We study dynamical mass measurements of galaxy clusters contaminated by interlopers and show that a modern machine learning (ML) algorithm can predict masses by better than a factor of two compared to a standard scaling relation approach. We create two mock catalogs from Multidark's publicly available N-body MDPL1 simulation, one with perfect galaxy cluster membership information and the other where a simple cylindrical cut around the cluster center allows interlopers to contaminate the clusters. In the standard approach, we use a power-law scaling relation to infer cluster mass from galaxy line-of-sight (LOS) velocity dispersion. Assuming perfect membership knowledge, this unrealistic case produces a wide fractional mass error distribution, with a width E=0.87. Interlopers introduce additional scatter, significantly widening the error distribution further (E=2.13). We employ the support distribution machine (SDM) class of algorithms to learn from distributions of data to predict single values. Applied to distributions of galaxy observables such as LOS velocity and projected distance from the cluster center, SDM yields better than a factor-of-two improvement (E=0.67) for the contaminated case. Remarkably, SDM applied to contaminated clusters is better able to recover masses than even the scaling relation approach applied to uncontaminated clusters. We show that the SDM method more accurately reproduces the cluster mass function, making it a valuable tool for employing cluster observations to evaluate cosmological models.
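
The "standard approach" above is a power-law scaling relation from LOS velocity dispersion to mass. A minimal sketch follows; the normalization and exponent are illustrative placeholders, not the calibration used in the paper:

```python
# Toy power-law scaling relation M = A * sigma**alpha mapping line-of-sight
# velocity dispersion (km/s) to cluster mass (solar masses). A and alpha are
# illustrative placeholders, not the paper's calibration.
A, alpha = 1.0e6, 3

def mass_from_dispersion(sigma_kms):
    """Infer cluster mass from LOS velocity dispersion via the power law."""
    return A * sigma_kms ** alpha

# In log space the relation is linear, which is how such fits are usually done:
# log10(M) = log10(A) + alpha * log10(sigma).
sigma = 1000.0
print(f"sigma = {sigma:.0f} km/s -> M = {mass_from_dispersion(sigma):.2e} Msun")
```

Interloper contamination inflates the measured dispersion, which a fixed power law then translates directly into mass error; this is the scatter the SDM approach is shown to reduce.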

  15. Pore-Scale Determination of Gas Relative Permeability in Hydrate-Bearing Sediments Using X-Ray Computed Micro-Tomography and Lattice Boltzmann Method

    NASA Astrophysics Data System (ADS)

    Chen, Xiongyu; Verma, Rahul; Espinoza, D. Nicolas; Prodanović, Maša.

    2018-01-01

    This work uses X-ray computed micro-tomography (μCT) to monitor xenon hydrate growth in a sandpack under the excess gas condition. The μCT images give pore-scale hydrate distribution and pore habit in space and time. We use the lattice Boltzmann method to calculate gas relative permeability (krg) as a function of hydrate saturation (Shyd) in the pore structure of the experimental hydrate-bearing sand retrieved from μCT data. The results suggest that the krg-Shyd data are well fitted by a new model krg = (1-Shyd)·exp(-4.95·Shyd) rather than by the simple Corey model. In addition, we calculate krg-Shyd curves using digital models of hydrate-bearing sand based on idealized grain-attaching, coarse pore-filling, and dispersed pore-filling hydrate habits. Our pore-scale measurements and modeling show that the krg-Shyd curves are similar regardless of whether hydrate crystals develop grain-attaching or coarse pore-filling habits. The dispersed pore-filling habit exhibits much lower gas relative permeability than the other two, but it is not observed in the experiment and is not compatible with Ostwald ripening mechanisms. We find that a single grain-shape factor can be used in the Carman-Kozeny equation to calculate krg-Shyd data with known porosity and average grain diameter, suggesting it is a useful model for hydrate-bearing sand.
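
The fitted model quoted in the abstract, krg = (1 - Shyd)·exp(-4.95·Shyd), is straightforward to evaluate. In the sketch below the exponent of the Corey-type comparison curve is an assumed illustrative value, not one taken from the paper:

```python
import math

def krg_new(s_hyd):
    """Gas relative permeability model reported in the abstract:
    krg = (1 - S_hyd) * exp(-4.95 * S_hyd)."""
    return (1.0 - s_hyd) * math.exp(-4.95 * s_hyd)

def krg_corey(s_hyd, n=4.0):
    """Simple Corey-type comparison curve (1 - S_hyd)**n; the exponent n
    is an assumed illustrative value, not taken from the paper."""
    return (1.0 - s_hyd) ** n

for s in (0.0, 0.2, 0.4, 0.6):
    print(f"S_hyd={s:.1f}  new={krg_new(s):.4f}  corey={krg_corey(s):.4f}")
```

The exponential factor makes the reported model decay faster than this Corey curve at moderate saturations, consistent with hydrate blocking gas pathways more strongly than a simple saturation power law would suggest.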

  16. A comprehensive allometric analysis of 2nd digit length to 4th digit length in humans.

    PubMed

    Lolli, Lorenzo; Batterham, Alan M; Kratochvíl, Lukáš; Flegr, Jaroslav; Weston, Kathryn L; Atkinson, Greg

    2017-06-28

    It has been widely reported that men have a lower ratio of the 2nd and 4th human finger lengths (2D : 4D). Size-scaling ratios, however, have the seldom-appreciated potential for providing biased estimates. Using an information-theoretic approach, we compared 12 candidate models, with different assumptions and error structures, for scaling untransformed 2D to 4D lengths from 154 men and 262 women. In each hand, the two-parameter power function and the straight line with intercept models, both with normal, homoscedastic error, were superior to the other models and essentially equivalent to each other for normalizing 2D to 4D lengths. The conventional 2D : 4D ratio biased relative 2D length low for the generally bigger hands of men, and vice versa for women, thereby leading to an artefactual indication that mean relative 2D length is lower in men than women. Conversely, use of the more appropriate allometric or linear regression models revealed that mean relative 2D length was, in fact, greater in men than women. We conclude that 2D does not vary in direct proportion to 4D for both men and women, rendering the use of the simple 2D : 4D ratio inappropriate for size-scaling purposes and intergroup comparisons. © 2017 The Author(s).
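
The bias the authors describe follows directly from a straight-line-with-intercept relation: a fixed ratio then varies with hand size even when the underlying relation is common to all hands. The intercept and slope below are invented purely for illustration:

```python
# If 2D = a + b * 4D with a positive intercept (one of the two models the
# study found superior), the plain 2D:4D ratio drifts downward as hands get
# bigger even though the underlying relation is identical for everyone.
# The intercept and slope here are invented for illustration.
a, b = 8.0, 0.82  # hypothetical intercept (mm) and slope

ratios = []
for d4 in (60.0, 70.0, 80.0):  # 4th-digit lengths, small to large hands
    d2 = a + b * d4            # 2nd-digit length implied by the linear model
    ratios.append(d2 / d4)
print([round(r, 4) for r in ratios])
```

Since men's hands are larger on average, this size-driven drift alone can produce the appearance of a sex difference in the raw ratio, which is the artefact the allometric models avoid.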

  17. Halo Profiles and the Concentration–Mass Relation for a ΛCDM Universe

    NASA Astrophysics Data System (ADS)

    Child, Hillary L.; Habib, Salman; Heitmann, Katrin; Frontiere, Nicholas; Finkel, Hal; Pope, Adrian; Morozov, Vitali

    2018-05-01

    Profiles of dark matter-dominated halos at the group and cluster scales play an important role in modern cosmology. Using results from two very large cosmological N-body simulations, which increase the available volume at their mass resolution by roughly two orders of magnitude, we robustly determine the halo concentration-mass (c-M) relation over a wide range of masses, employing multiple methods of concentration measurement. We characterize individual halo profiles, as well as stacked profiles, relevant for galaxy-galaxy lensing and next-generation cluster surveys; the redshift range covered is 0 ≤ z ≤ 4, with a minimum halo mass of M_200c ∼ 2 × 10^11 M_⊙. Despite the complexity of a proper description of a halo (environmental effects, merger history, nonsphericity, relaxation state), when the mass is scaled by the nonlinear mass scale M_⋆(z), we find that a simple non-power-law form for the c-M/M_⋆ relation provides an excellent description of our simulation results across eight decades in M/M_⋆ and for 0 ≤ z ≤ 4. Over the mass range covered, the c-M relation has two asymptotic forms: an approximate power law below a mass threshold M/M_⋆ ∼ 500-1000, transitioning to a constant value, c_0 ∼ 3, at higher masses. The relaxed halo fraction decreases with mass, transitioning to a constant value of ∼0.5 above the same mass threshold. We compare Navarro-Frenk-White (NFW) and Einasto fits to stacked profiles in narrow mass bins at different redshifts; as expected, the Einasto profile provides a better description of the simulation results. At cluster scales at low redshift, however, both NFW and Einasto profiles are in very good agreement with the simulation results, consistent with recent weak lensing observations.

  18. Local and landscape associations between wintering dabbling ducks and wetland complexes in Mississippi

    USGS Publications Warehouse

    Pearse, Aaron T.; Kaminski, Richard M.; Reinecke, Kenneth J.; Dinsmore, Stephen J.

    2012-01-01

    Landscape features influence distribution of waterbirds throughout their annual cycle. A conceptual model, the wetland habitat complex, may be useful in conservation of wetland habitats for dabbling ducks (Anatini). The foundation of this conceptual model is that ducks seek complexes of wetlands containing diverse resources to meet dynamic physiological needs. We included flooded croplands, wetlands and ponds, public-land waterfowl sanctuary, and diversity of habitats as key components of wetland habitat complexes and compared their relative influence at two spatial scales (i.e., local, 0.25-km radius; landscape, 4-km) on dabbling ducks wintering in western Mississippi, USA during winters 2002–2004. Distribution of mallard (Anas platyrhynchos) groups was positively associated with flooded cropland at local and landscape scales. Models representing flooded croplands at the landscape scale best explained occurrence of other dabbling ducks. Habitat complexity measured at both scales best explained group size of other dabbling ducks. Flooded croplands likely provided food that had decreased in availability due to conversion of wetlands to agriculture. Wetland complexes at landscape scales were more attractive to wintering ducks than single or structurally simple wetlands. Conservation of wetland complexes at large spatial scales (≥5,000 ha) on public and private lands will require coordination among multiple stakeholders.

  19. Allometric Scaling in Biology

    NASA Astrophysics Data System (ADS)

    Banavar, Jayanth

    2009-03-01

    The unity of life is expressed not only in the universal basis of inheritance and energetics at the molecular level, but also in the pervasive scaling of traits with body size at the whole-organism level. More than 75 years ago, Kleiber and Brody and Proctor independently showed that the metabolic rates, B, of mammals and birds scale as the three-quarter power of their mass, M. Subsequent studies showed that most biological rates and times scale as M^-1/4 and M^1/4 respectively, and that these so-called quarter-power scaling relations hold for a variety of organisms, from unicellular prokaryotes and eukaryotes to trees and mammals. The wide applicability of Kleiber's law, across the 22 orders of magnitude of body mass from minute bacteria to giant whales and sequoias, raises the hope that there is some simple general explanation that underlies the incredible diversity of form and function. We will present a general theoretical framework for understanding the relationship between metabolic rate, B, and body mass, M. We show how the pervasive quarter-power biological scaling relations arise naturally from optimal directed resource supply systems. This framework robustly predicts that: 1) whole organism power and resource supply rate, B, scale as M^3/4; 2) most other rates, such as heart rate and maximal population growth rate, scale as M^-1/4; 3) most biological times, such as blood circulation time and lifespan, scale as M^1/4; and 4) the average velocity of flow through the network, v, such as the speed of blood and oxygen delivery, scales as M^1/12. Our framework is valid even when there is no underlying network. Our theory is applicable to unicellular organisms as well as to large animals and plants. This work was carried out in collaboration with Amos Maritan along with Jim Brown, John Damuth, Melanie Moses, Andrea Rinaldo, and Geoff West.
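
The quarter-power bookkeeping above can be checked numerically: given B ∝ M^3/4, the mass-specific rate B/M necessarily falls as M^-1/4. A minimal sketch (the normalization B0 is arbitrary, not a fitted constant):

```python
# Quarter-power scaling sketch: with Kleiber's law B = B0 * M**0.75, the
# mass-specific metabolic rate B/M scales as M**(-1/4). B0 is an arbitrary
# illustrative normalization.
B0 = 1.0

def metabolic_rate(mass):
    """Whole-organism metabolic rate under three-quarter-power scaling."""
    return B0 * mass ** 0.75

# A 16-fold mass increase multiplies B by 16**0.75 = 8 but divides the
# mass-specific rate B/M by 16**0.25 = 2.
m1, m2 = 1.0, 16.0
ratio_B = metabolic_rate(m2) / metabolic_rate(m1)
ratio_specific = (metabolic_rate(m2) / m2) / (metabolic_rate(m1) / m1)
print(ratio_B, ratio_specific)
```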

  20. A simple phenomenological model for grain clustering in turbulence

    NASA Astrophysics Data System (ADS)

    Hopkins, Philip F.

    2016-01-01

    We propose a simple model for density fluctuations of aerodynamic grains embedded in a turbulent, gravitating gas disc. The model combines a calculation for the behaviour of a group of grains encountering a single turbulent eddy with a hierarchical approximation of the eddy statistics. This makes analytic predictions for a range of quantities, including: distributions of grain densities, power spectra and correlation functions of fluctuations, and maximum grain densities reached. We predict how these scale as a function of grain drag time t_s, spatial scale, grain-to-gas mass ratio ρ̃, strength of turbulence α, and detailed disc properties. We test these against numerical simulations with various turbulence-driving mechanisms. The simulations agree well with the predictions, spanning t_s Ω ~ 10^-4–10, ρ̃ ~ 0–3, and α ~ 10^-10–10^-2. Results from `turbulent concentration' simulations and laboratory experiments are also predicted as a special case. Vortices on a wide range of scales disperse and concentrate grains hierarchically. For small grains this is most efficient in eddies with turnover time comparable to the stopping time, but fluctuations are also damped by local gas-grain drift. For large grains, shear and gravity lead to a much broader range of eddy scales driving fluctuations, with most power on the largest scales. The grain density distribution has a log-Poisson shape, with fluctuations for large grains up to factors ≳1000. We provide simple analytic expressions for the predictions, and discuss implications for planetesimal formation, grain growth, and the structure of turbulence.

  1. A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhou, Wanting; Liu, Huihua

    2012-12-01

    This paper presents a physics-based engineering approach to estimate the heavy-ion-induced upset cross section for 6T SRAM cells from layout and technology parameters. The approach calculates the effects of radiation with a junction photocurrent derived from device physics, and handles the problem using simple SPICE simulations. First, a standard SPICE program on a typical PC is used, with the derived junction photocurrent, to predict the SPICE-simulated curve of collected charge versus distance from the drain-body junction. This curve is then used to calculate the heavy-ion-induced upset cross section with a simple model, which considers that, for advanced SRAMs in nano-scale process technologies, the SEU cross section of an SRAM cell is more closely related to a “radius of influence” around a heavy ion strike than to the physical size of a diffusion node in the layout. The upset cross section calculated with this method is in good agreement with test results for 6T SRAM cells fabricated in a 90 nm process technology.

  2. A simple framework for relating variations in runoff to variations in climatic conditions and catchment properties

    NASA Astrophysics Data System (ADS)

    Roderick, Michael L.; Farquhar, Graham D.

    2011-12-01

    We use the Budyko framework to calculate catchment-scale evapotranspiration (E) and runoff (Q) as a function of two climatic factors, precipitation (P) and evaporative demand (Eo = 0.75 times the pan evaporation rate), and a third parameter that encodes the catchment properties (n) and modifies how P is partitioned between E and Q. This simple theory accurately predicted the long-term evapotranspiration (E) and runoff (Q) for the Murray-Darling Basin (MDB) in southeast Australia. We extend the theory by developing a simple and novel analytical expression for the effects on E and Q of small perturbations in P, Eo, and n. The theory predicts that a 10% change in P, with all else constant, would result in a 26% change in Q in the MDB. Future climate scenarios (2070-2099) derived using Intergovernmental Panel on Climate Change AR4 climate model output highlight the diversity of projections for P (±30%) with a correspondingly large range in projections for Q (±80%) in the MDB. We close with a qualitative description of the impact of changes in catchment properties on water availability, focusing on the interaction between vegetation change, increasing atmospheric [CO2], and fire frequency. We conclude that the modern version of the Budyko framework is a useful tool for making simple and transparent estimates of changes in water availability.
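    The amplification described above, where a modest change in P produces a much larger relative change in Q, is easy to reproduce numerically. A minimal sketch, assuming the Choudhury form of the Budyko curve, E = P·Eo/(Pⁿ + Eoⁿ)^(1/n), with illustrative parameter values that are not fitted to the MDB:

    ```python
    # Budyko-type partitioning of precipitation P into evapotranspiration E
    # and runoff Q = P - E, using the Choudhury form (an assumption here):
    # E = P * Eo / (P^n + Eo^n)^(1/n). Parameter values are illustrative.

    def evapotranspiration(P, Eo, n):
        return P * Eo / (P ** n + Eo ** n) ** (1.0 / n)

    def runoff(P, Eo, n):
        return P - evapotranspiration(P, Eo, n)

    def runoff_elasticity(P, Eo, n, dP=1e-4):
        """% change in Q per % change in P, by finite differences."""
        Q = runoff(P, Eo, n)
        dQ = runoff(P * (1 + dP), Eo, n) - Q
        return (dQ / Q) / dP

    # In a water-limited setting (Eo > P), runoff responds much more than
    # proportionally to precipitation: elasticity well above 1.
    print(runoff_elasticity(P=500.0, Eo=1000.0, n=2.0))
    ```

    With these illustrative numbers the elasticity comes out near 2.7, i.e. a 10% change in P yields roughly a 27% change in Q, the same order of amplification the abstract reports for the MDB.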

  3. Challenges in converting among log scaling methods.

    Treesearch

    Henry Spelter

    2003-01-01

    The traditional method of measuring log volume in North America is the board foot log scale, which uses simple assumptions about how much of a log's volume is recoverable. This underestimates the true recovery potential and leads to difficulties in comparing volumes measured with the traditional board foot system and those measured with the cubic scaling systems...

  4. An extinction scale-expansion unit for the Beckman DK2 spectrophotometer

    PubMed Central

    Dixon, M.

    1967-01-01

    The paper describes a simple but accurate unit for the Beckman DK2 recording spectrophotometer, whereby any 0·1 section of the extinction (`absorbance') scale may be expanded tenfold, while preserving complete linearity in extinction. PMID:6048800

  5. Beyond δ: Tailoring marked statistics to reveal modified gravity

    NASA Astrophysics Data System (ADS)

    Valogiannis, Georgios; Bean, Rachel

    2018-01-01

    Models that attempt to explain the accelerated expansion of the universe through large-scale modifications to General Relativity (GR) must satisfy the stringent experimental constraints on GR in the solar system. Viable candidates invoke a “screening” mechanism that dynamically suppresses deviations in high-density environments, making their overall detection challenging even for ambitious future large-scale structure surveys. We present methods to efficiently simulate the non-linear properties of such theories, and consider how a series of statistics that reweight the density field to accentuate deviations from GR can be applied to enhance the overall signal-to-noise ratio in differentiating the models from GR. Our results demonstrate that the cosmic density field can yield additional, invaluable cosmological information, beyond the simple density power spectrum, that will enable surveys to more confidently discriminate between modified gravity models and ΛCDM.

  6. Increased independence and decreased vertigo after vestibular rehabilitation.

    PubMed

    Cohen, Helen S; Kimball, Kay T

    2003-01-01

    We sought to determine the effectiveness of vestibular rehabilitation in decreasing symptoms such as vertigo and in increasing performance of daily life skills. Patients who had chronic vertigo due to peripheral vestibular impairments were seen at a tertiary care center. They were referred for vestibular rehabilitation and were assessed on vertigo intensity and frequency with the Vertigo Symptom Scale, the Vertigo Handicap Questionnaire, the Vestibular Disorders Activities of Daily Living Scale, and the Dizziness Handicap Inventory. They were then randomly assigned to 1 of 3 home program treatment groups. Vertigo decreased and independence in activities of daily living improved significantly. Improvement was not affected by age, gender, or history of vertigo. For many patients, a simple home program of vestibular habituation head movement exercises is related to reduced symptoms and increased independence in activities of daily living.

  7. Discretization of Continuous Time Discrete Scale Invariant Processes: Estimation and Spectra

    NASA Astrophysics Data System (ADS)

    Rezakhah, Saeid; Maleki, Yasaman

    2016-07-01

    By imposing a flexible sampling scheme, we provide a discretization of continuous-time discrete scale invariant (DSI) processes, which yields a subsidiary discrete-time DSI process. Then, by introducing a simple random measure, we provide a second continuous-time DSI process that is a proper approximation of the first one. This enables us to establish a bilateral relation between the covariance functions of the subsidiary process and the new continuous-time process. The time-varying spectral representation of such a continuous-time DSI process is characterized, and its spectrum is estimated. We also provide a new, more accurate method for estimating the time-dependent Hurst parameter of such processes. The performance of this estimation method is studied via simulation. Finally, the method is applied to real data from the S&P 500 and Dow Jones indices for selected periods.

  8. Sleepiness, driving, and motor vehicle accidents: A questionnaire-based survey.

    PubMed

    Zwahlen, Daniel; Jackowski, Christian; Pfäffli, Matthias

    2016-11-01

    In Switzerland, the prevalence of excessive daytime sleepiness (EDS) in drivers undergoing a driving capacity assessment is currently not known. In this study, private and professional drivers were evaluated by means of a paper-based questionnaire, including the Epworth Sleepiness Scale, the Berlin Questionnaire, and additional questions on sleepiness-related accidents, near-miss accidents, health issues, and demographic data. Of the 435 distributed questionnaires, 128 completed questionnaires were returned, a response rate of 29%. The mean age of the investigated drivers was 42.5 years (range 20-85 years). According to the Epworth Sleepiness Scale, 9% of the participants are likely to suffer from excessive daytime sleepiness. An equal percentage has a high risk for obstructive sleep apnea syndrome based on the Berlin Questionnaire. 16% admitted involuntarily nodding off while driving a motor vehicle; this subset of the participants scored statistically significantly higher on the Epworth Sleepiness Scale (p = 0.036). 8% of the participants had already suffered an accident because of being sleepy while driving, and an equal number had experienced a sleepiness-related near-miss accident on the road. The study shows that a medical workup of excessive daytime sleepiness is highly recommended for each driver undergoing a driving capacity assessment. Routine application of easily available and time-saving assessment tools such as the Epworth Sleepiness Scale questionnaire could prevent accidents in a simple way. The applicability of the Berlin Questionnaire to screening for suspected fatal sleepiness-related motor vehicle accidents is discussed. Copyright © 2016 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
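    The Epworth Sleepiness Scale itself is a simple instrument: eight situations are each self-rated 0-3 for the chance of dozing, and the ratings are summed. A minimal scoring sketch; the cutoff of >10 used below is a commonly cited screening threshold assumed for illustration, and the cutoff used by any particular study should be checked:

    ```python
    # Minimal Epworth Sleepiness Scale (ESS) scorer. Each of the 8 items is
    # the self-rated chance of dozing in a situation: 0 (never) to 3 (high).
    # Totals above 10 are a commonly used screening threshold for excessive
    # daytime sleepiness (EDS); exact cutoffs vary between studies.

    EDS_CUTOFF = 10  # assumption for illustration; check the study's own cutoff

    def ess_score(item_ratings):
        if len(item_ratings) != 8:
            raise ValueError("ESS has exactly 8 items")
        if any(r not in (0, 1, 2, 3) for r in item_ratings):
            raise ValueError("each item is rated 0-3")
        return sum(item_ratings)

    def likely_eds(item_ratings):
        return ess_score(item_ratings) > EDS_CUTOFF

    print(ess_score([1, 0, 2, 1, 1, 0, 1, 2]))   # total 8: below the cutoff
    print(likely_eds([2, 2, 3, 1, 2, 1, 2, 3]))  # total 16: flagged
    ```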

  9. Pasta production: complexity in defining processing conditions for reference trials and quality assessment models

    USDA-ARS?s Scientific Manuscript database

    Pasta is a simple food made from water and durum wheat (Triticum turgidum subsp. durum) semolina. As pasta increases in popularity, studies have endeavored to analyze the attributes that contribute to high quality pasta. Despite being a simple food, the laboratory scale analysis of pasta quality is ...

  10. A Simple, Small-Scale Lego Colorimeter with a Light-Emitting Diode (LED) Used as Detector

    ERIC Educational Resources Information Center

    Asheim, Jonas; Kvittingen, Eivind V.; Kvittingen, Lise; Verley, Richard

    2014-01-01

    This article describes how to construct a simple, inexpensive, and robust colorimeter from a few Lego bricks, in which one light-emitting diode (LED) is used as a light source and a second LED as a light detector. The colorimeter is suited to various grades and curricula.

  11. Evaluation of interpolation techniques for the creation of gridded daily precipitation (1 × 1 km2); Cyprus, 1980-2010

    NASA Astrophysics Data System (ADS)

    Camera, Corrado; Bruggeman, Adriana; Hadjinicolaou, Panos; Pashiardis, Stelios; Lange, Manfred A.

    2014-01-01

    High-resolution gridded daily data sets are essential for natural resource management and for analyses of climate change and its effects. This study evaluates the performance of 15 simple or complex interpolation techniques in reproducing daily precipitation at a resolution of 1 km2 over topographically complex areas. Methods are tested for two different observation densities and different rainfall amounts, using rainfall data recorded at 74 and 145 observational stations, respectively, spread over the 5760 km2 of the Republic of Cyprus in the Eastern Mediterranean. Regression analyses utilizing geographical copredictors and neighboring interpolation techniques were evaluated both in isolation and in combination. Linear multiple regression (LMR) and geographically weighted regression (GWR) methods, including step-wise selection of covariables, were tested, as were inverse distance weighting (IDW), kriging, and 3D thin plate splines (TPS). The relative ranking of the different techniques changes with station density and rainfall amount. Our results indicate that TPS performs well for low station density and large-scale events, and also when coupled with regression models, but poorly for high station density; the opposite is observed for IDW. Simple IDW performs best for local events, while a combination of step-wise GWR and IDW proves to be the best method for large-scale events and high station density. This study indicates that step-wise regression with a variable set of geographic parameters can improve the interpolation of large-scale events because it facilitates the representation of local climate dynamics.
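    Of the neighboring interpolation techniques compared, inverse distance weighting is the simplest to state: the estimate at a point is a weighted average of station values with weights 1/d^p. A minimal sketch at one prediction point, with invented station data and the common power parameter p = 2:

    ```python
    # Minimal inverse distance weighting (IDW). Station data are invented
    # for illustration; p = 2 is a common default power parameter.

    def idw(stations, x, y, p=2.0):
        """stations: list of (x, y, value). Returns the IDW estimate at (x, y)."""
        num = den = 0.0
        for sx, sy, val in stations:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0.0:                # exactly at a station: return its value
                return val
            w = 1.0 / d2 ** (p / 2.0)    # weight 1 / d^p
            num += w * val
            den += w
        return num / den

    # Four hypothetical rain gauges (km coordinates, mm of daily rain):
    gauges = [(0, 0, 4.0), (10, 0, 8.0), (0, 10, 2.0), (10, 10, 6.0)]
    # The centre point is equidistant from all gauges, so IDW reduces to
    # the plain mean of the four values.
    print(idw(gauges, 5, 5))
    ```

    A gridded product would simply evaluate `idw` at every cell centre; the study's point is that how well this performs depends strongly on station density and event scale.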

  12. Aeroelastic Tailoring via Tow Steered Composites

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.; Jutte, Christine V.

    2014-01-01

    The use of tow steered composites, where fibers follow prescribed curvilinear paths within a laminate, can improve upon existing capabilities related to aeroelastic tailoring of wing structures, though this tailoring method has received relatively little attention in the literature. This paper demonstrates the technique for both a simple cantilevered plate in low-speed flow and the wing box of a full-scale high-aspect-ratio transport configuration. Static aeroelastic stresses and dynamic flutter boundaries are obtained for both cases. The impact of various tailoring choices upon the aeroelastic performance is quantified: curvilinear versus straight fiber steering, certifiable versus noncertifiable stacking sequences, a single uniform laminate per wing skin versus multiple laminates, and identical versus individually tailored upper and lower wing skin structures.

  13. Description and validation of the Simple, Efficient, Dynamic, Global, Ecological Simulator (SEDGES v.1.0)

    NASA Astrophysics Data System (ADS)

    Paiewonsky, Pablo; Elison Timm, Oliver

    2018-03-01

    In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides not only important ecological variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulated evapotranspiration in mid- to high northern latitudes during their winter-to-spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of large-scale vegetation and land surface characteristics under non-present-day conditions.

  14. Rheological Properties of Natural Subduction Zone Interface: Insights from "Digital" Griggs Experiments

    NASA Astrophysics Data System (ADS)

    Ioannidi, P. I.; Le Pourhiet, L.; Moreno, M.; Agard, P.; Oncken, O.; Angiboust, S.

    2017-12-01

    The physical nature of plate locking and its relation to surface deformation patterns at different time scales (e.g., GPS displacements during the seismic cycle) can be better understood by determining the rheological parameters of the subduction interface. However, since direct rheological measurements are not possible, finite element modelling helps to determine the effective rheological parameters of the subduction interface. We used the open source finite element code pTatin to create 2D models, starting with a homogeneous medium representing shearing at the subduction interface. We tested several boundary conditions that mimic simple shear and opted for the one that best describes Griggs-type simple shear experiments. After examining different parameters, such as shearing velocity, temperature and viscosity, we added complexity to the geometry by including a second phase. This arises from field observations, where shear zone outcrops are often composites of multiple phases: stronger crustal blocks embedded within a sedimentary and/or serpentinized matrix have been reported for several exhumed subduction zones. We implemented a simplified model to simulate simple shearing of a two-phase medium in order to quantify the effect of heterogeneous rheology on stress and strain localization. Preliminary results show different strength in the models depending on the block-to-matrix ratio. We applied our method to outcrop-scale block-in-matrix geometries and, by sampling at different depths along exhumed former subduction interfaces, we expect to be able to provide the effective friction and viscosity of a natural interface. In a next step, these effective parameters will be used as input to seismic cycle deformation models in an attempt to assess the possible signature of field geometries on the slip behaviour of the plate interface.

  15. Creating a test blueprint for a progress testing program: A paired-comparisons approach.

    PubMed

    von Bergmann, HsingChi; Childs, Ruth A

    2018-03-01

    Creating a new testing program requires the development of a test blueprint that determines how the items on each test form are distributed across possible content areas and practice domains. To achieve validity, the categories of a blueprint are typically based on the judgments of content experts. How experts' judgments are elicited and combined is important to the quality of the resulting test blueprints. Content experts in dentistry participated in a day-long faculty-wide workshop to discuss, refine, and confirm the categories and their relative weights. After reaching agreement on the categories and their definitions, the experts judged the relative importance of category pairs, registering their judgments anonymously using iClicker, an audience response system. Judgments were combined in two ways: a simple calculation that could be performed during the workshop, and a multidimensional scaling of the judgments performed later. The content experts were able to produce a set of relative weights using this approach. The multidimensional scaling yielded a three-dimensional model with the potential to provide deeper insights into the basis of the experts' judgments. The approach developed and demonstrated in this study can be applied across academic disciplines to elicit and combine content experts' judgments for the development of test blueprints.

  16. Towards the computation of time-periodic inertial range dynamics

    NASA Astrophysics Data System (ADS)

    van Veen, L.; Vela-Martín, A.; Kawahara, G.

    2018-04-01

    We explore the possibility of computing simple invariant solutions, such as travelling waves or periodic orbits, in Large Eddy Simulation (LES) on a periodic domain with constant external forcing. The absence of material boundaries and the simple forcing mechanism make this system a comparatively simple target for the study of turbulent dynamics through invariant solutions. We show that, in spite of the application of eddy viscosity, the computations are still rather challenging and must be performed on GPU cards rather than conventional coupled CPUs. We investigate the onset of turbulence in this system by means of bifurcation analysis, and present a long-period, large-amplitude unstable periodic orbit that is filtered from a turbulent time series. Although this orbit is computed on a coarse grid, with only a small separation between the integral scale and the LES filter length, the periodic dynamics seem to capture a regeneration process of the large-scale vortices.

  17. Optical identification using imperfections in 2D materials

    NASA Astrophysics Data System (ADS)

    Cao, Yameng; Robson, Alexander J.; Alharbi, Abdullah; Roberts, Jonathan; Woodhead, Christopher S.; Noori, Yasir J.; Bernardo-Gavito, Ramón; Shahrjerdi, Davood; Roedig, Utz; Fal'ko, Vladimir I.; Young, Robert J.

    2017-12-01

    The ability to uniquely identify an object or device is important for authentication. Imperfections, locked into structures during fabrication, can be used to provide a fingerprint that is challenging to reproduce. In this paper, we propose a simple optical technique to read unique information from nanometer-scale defects in 2D materials. Imperfections created during crystal growth or fabrication lead to spatial variations in the bandgap of 2D materials that can be characterized through photoluminescence measurements. We show that a simple setup involving an angle-adjustable transmission filter, simple optics, and a CCD camera can capture spatially dependent photoluminescence to produce complex maps of unique information from 2D monolayers. Atomic force microscopy is used to verify the origin of the measured optical signature, demonstrating that it results from nanometer-scale imperfections. This approach to optical identification with 2D materials could be employed as a robust security measure to prevent counterfeiting.

  18. Composite annotations: requirements for mapping multiscale data and models to biomedical ontologies

    PubMed Central

    Cook, Daniel L.; Mejino, Jose L. V.; Neal, Maxwell L.; Gennari, John H.

    2009-01-01

    Current methods for annotating biomedical data resources rely on simple mappings between data elements and the contents of a variety of biomedical ontologies and controlled vocabularies. Here we point out that such simple mappings are inadequate for large-scale, multiscale, multidomain integrative “virtual human” projects. For such integrative challenges, we describe a “composite annotation” schema that is simple yet sufficiently extensible for mapping the biomedical content of a variety of data sources and biosimulation models to available biomedical ontologies. PMID:19964601

  19. Crater size estimates for large-body terrestrial impact

    NASA Technical Reports Server (NTRS)

    Schmidt, Robert M.; Housen, Kevin R.

    1988-01-01

    Calculating the effects of impacts leading to global catastrophes requires knowledge of the impact process at very large size scales. This information cannot be obtained directly but must be inferred from subscale physical simulations, numerical simulations, and scaling laws. Schmidt and Holsapple presented scaling laws based upon laboratory-scale impact experiments performed on a centrifuge (Schmidt, 1980; Schmidt and Holsapple, 1980). These experiments were used to develop scaling laws that were among the first to include the gravity dependence associated with increasing event size. At that time, using the results of experiments in dry sand and in water to bound crater size, they recognized that more precise bounds on large-body impact crater formation could be obtained with additional centrifuge experiments conducted in other geological media. In that previous work, simple power-law formulae were developed to relate final crater diameter to impactor size and velocity. In addition, Schmidt (1980) and Holsapple and Schmidt (1982) recognized that the energy scaling exponent is not a universal constant but depends upon the target medium. More recent work by Holsapple and Schmidt (1987) includes results for non-porous materials and provides a basis for estimating crater formation kinematics and final crater size. A revised set of scaling relationships for all crater parameters of interest is presented, including results for various target media and the kinematics of formation. Particular attention is given to possible limits brought about by very large impactors.

  20. A simple and low-cost platform technology for producing pexiganan antimicrobial peptide in E. coli.

    PubMed

    Zhao, Chun-Xia; Dwyer, Mirjana Dimitrijev; Yu, Alice Lei; Wu, Yang; Fang, Sheng; Middelberg, Anton P J

    2015-05-01

    Antimicrobial peptides, as a new class of antibiotics, have generated tremendous interest as potential alternatives to classical antibiotics. However, the large-scale production of antimicrobial peptides remains a significant challenge. This paper reports a simple and low-cost chromatography-free platform technology for producing antimicrobial peptides in Escherichia coli (E. coli). A fusion protein comprising a variant of the helical biosurfactant protein DAMP4 and the known antimicrobial peptide pexiganan is designed by joining the two polypeptides, at the DNA level, via an acid-sensitive cleavage site. The resulting DAMP4(var)-pexiganan fusion protein expresses at high level and solubility in recombinant E. coli, and a simple heat-purification method was applied to disrupt cells and deliver high-purity DAMP4(var)-pexiganan protein. Simple acid cleavage successfully separated the DAMP4 variant protein and the antimicrobial peptide. Antimicrobial activity tests confirmed that the bio-produced antimicrobial peptide has the same antimicrobial activity as the equivalent product made by conventional chemical peptide synthesis. This simple and low-cost platform technology can be easily adapted to produce other valuable peptide products, and opens a new manufacturing approach for producing antimicrobial peptides at large scale using the tools and approaches of biochemical engineering. © 2014 Wiley Periodicals, Inc.

  1. Atomistic Modeling of the Fluid-Solid Interface in Simple Fluids

    NASA Astrophysics Data System (ADS)

    Hadjiconstantinou, Nicolas; Wang, Gerald

    2017-11-01

    Fluids can exhibit pronounced structuring effects near a solid boundary, typically manifested in a layered structure that has been extensively shown to directly affect transport across the interface. We present and discuss several results from molecular-mechanical modeling and molecular-dynamics (MD) simulations aimed at characterizing the structure of the first fluid layer directly adjacent to the solid. We identify a new dimensionless group - termed the Wall number - which characterizes the degree of fluid layering by comparing the competing effects of wall-fluid interaction and thermal energy. We find that in the layering regime, several key features of the first layer - including its distance from the solid, its width, and its areal density - can be described using mean-field energy arguments, as well as asymptotic analysis of the Nernst-Planck equation. For dense fluids, the areal density and the width of the first layer can be related to the bulk fluid density using a simple scaling relation. MD simulations show that these results are broadly applicable and robust to the presence of a second confining solid boundary, different choices of wall structure and thermalization, strengths of fluid-solid interaction, and wall geometries.

  2. Effect of the material properties on the crumpling of a thin sheet.

    PubMed

    Habibi, Mehdi; Adda-Bedia, Mokhtar; Bonn, Daniel

    2017-06-07

    While simple at first glance, the dense packing of sheets is a complex phenomenon that depends on material parameters and the packing protocol. We study the effect of plasticity on the crumpling of sheets of different materials by performing isotropic compaction experiments on sheets of different sizes and elasto-plastic properties. First, we quantify the material properties using a dimensionless foldability index. Then, the compaction force required to crumple a sheet into a ball, as well as the average number of layers inside the ball, are measured. For each material, both quantities exhibit a power-law dependence on the diameter of the crumpled ball. We experimentally establish the power-law exponents and find that both depend nonlinearly on the foldability index. However, the exponents that characterize the mechanical response and morphology of the crumpled materials are related linearly. A simple scaling argument explains this in terms of the buckling of the sheets, and recovers the relation between the crumpling force and the morphology of the crumpled structure. Our results suggest a new approach to tailoring the mechanical response of crumpled objects by carefully selecting their material properties.
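    Power-law exponents like those established here are typically read off as the slope of a least-squares line fit in log-log space. A minimal sketch on synthetic data generated with a known exponent, purely to illustrate the procedure:

    ```python
    import math

    # Estimate the exponent b of a power law y = c * x^b from (x, y)
    # measurements via least squares on (log x, log y). Data below are
    # synthetic, with a known exponent, for illustration only.

    def power_law_exponent(xs, ys):
        lx = [math.log(x) for x in xs]
        ly = [math.log(y) for y in ys]
        n = len(lx)
        mx, my = sum(lx) / n, sum(ly) / n
        num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
        den = sum((a - mx) ** 2 for a in lx)
        return num / den  # slope in log-log space = power-law exponent

    diameters = [1.0, 2.0, 4.0, 8.0]               # ball diameters (arbitrary units)
    forces = [3.0 * d ** -1.5 for d in diameters]  # synthetic data, exponent -1.5
    print(power_law_exponent(diameters, forces))   # recovers the exponent
    ```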

  3. A nested observation and model approach to non linear groundwater surface water interactions.

    NASA Astrophysics Data System (ADS)

    van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.

    2009-04-01

    Surface water quality measurements in The Netherlands are scattered in time and space. Therefore, water quality status and its variations and trends are difficult to determine. In order to reach the water quality goals according to the European Water Framework Directive, we need to improve our understanding of the dynamics of surface water quality and the processes that affect it. In heavily drained lowland catchment groundwater influences the discharge towards the surface water network in many complex ways. Especially a strong seasonal contracting and expanding system of discharging ditches and streams affects discharge and solute transport. At a tube drained field site the tube drain flux and the combined flux of all other flow routes toward a stretch of 45 m of surface water have been measured for a year. Also the groundwater levels at various locations in the field and the discharge at two nested catchment scales have been monitored. The unique reaction of individual flow routes on rainfall events at the field site allowed us to separate the discharge at a 4 ha catchment and at a 6 km2 into flow route contributions. The results of this nested experimental setup combined with the results of a distributed hydrological model has lead to the formulation of a process model approach that focuses on the spatial variability of discharge generation driven by temporal and spatial variations in groundwater levels. The main idea of this approach is that discharge is not generated by catchment average storages or groundwater heads, but is mainly generated by points scale extremes i.e. extreme low permeability, extreme high groundwater heads or extreme low surface elevations, all leading to catchment discharge. We focused on describing the spatial extremes in point scale storages and this led to a simple and measurable expression that governs the non-linear groundwater surface water interaction. 
We will present the analysis of the field site data to demonstrate the potential of nested-scale, high-frequency observations. The distributed hydrological model results will be used to show transient catchment-scale relations between groundwater levels and discharges. These analyses lead to a simple expression that can describe catchment-scale groundwater-surface water interactions.

  4. Topology-optimized dual-polarization Dirac cones

    NASA Astrophysics Data System (ADS)

    Lin, Zin; Christakis, Lysander; Li, Yang; Mazur, Eric; Rodriguez, Alejandro W.; Lončar, Marko

    2018-02-01

    We apply a large-scale computational technique, known as topology optimization, to the inverse design of photonic Dirac cones. In particular, we report on a variety of photonic crystal geometries, realizable in simple isotropic dielectric materials, which exhibit dual-polarization Dirac cones. We present photonic crystals of different symmetry types, such as fourfold and sixfold rotational symmetries, with Dirac cones at different points within the Brillouin zone. The demonstrated and related optimization techniques open avenues to band-structure engineering and manipulating the propagation of light in periodic media, with possible applications to exotic optical phenomena such as effective zero-index media and topological photonics.

  5. Quick clay and landslides of clayey soils.

    PubMed

    Khaldoun, Asmae; Moller, Peder; Fall, Abdoulaye; Wegdam, Gerard; De Leeuw, Bert; Méheust, Yves; Otto Fossum, Jon; Bonn, Daniel

    2009-10-30

    We study the rheology of quick clay, an unstable soil responsible for many landslides. We show that above a critical stress the material starts flowing abruptly with a very large viscosity decrease caused by the flow. This leads to avalanche behavior that accounts for the instability of quick clay soils. Reproducing landslides on a small scale in the laboratory shows that an additional factor that determines the violence of the slides is the inhomogeneity of the flow. We propose a simple yield stress model capable of reproducing the laboratory landslide data, allowing us to relate landslides to the measured rheology.
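
The abstract's "simple yield stress model" is not specified here; as a hedged stand-in, a Herschel-Bulkley flow rule with illustrative parameters reproduces the qualitative behavior described: no flow below a critical stress, then an abrupt onset of flow with a large drop in apparent viscosity.

```python
def shear_rate(stress, tau_y=45.0, K=12.0, n=0.4):
    """Herschel-Bulkley flow rule: no flow at or below the yield stress
    tau_y; power-law flow above it (all parameter values are illustrative,
    not the authors' fitted model)."""
    if stress <= tau_y:
        return 0.0
    return ((stress - tau_y) / K) ** (1.0 / n)

def apparent_viscosity(stress):
    """Apparent viscosity stress/shear-rate: infinite below yield, then
    dropping sharply as the stress exceeds the critical value."""
    rate = shear_rate(stress)
    return float("inf") if rate == 0.0 else stress / rate

for sigma in (30.0, 50.0, 90.0):   # Pa, below / just above / well above yield
    print(sigma, apparent_viscosity(sigma))
```

The sharp fall of the apparent viscosity just past the yield stress is what produces the avalanche-like, runaway character of the flow.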

  6. Quick Clay and Landslides of Clayey Soils

    NASA Astrophysics Data System (ADS)

    Khaldoun, Asmae; Moller, Peder; Fall, Abdoulaye; Wegdam, Gerard; de Leeuw, Bert; Méheust, Yves; Otto Fossum, Jon; Bonn, Daniel

    2009-10-01

    We study the rheology of quick clay, an unstable soil responsible for many landslides. We show that above a critical stress the material starts flowing abruptly with a very large viscosity decrease caused by the flow. This leads to avalanche behavior that accounts for the instability of quick clay soils. Reproducing landslides on a small scale in the laboratory shows that an additional factor that determines the violence of the slides is the inhomogeneity of the flow. We propose a simple yield stress model capable of reproducing the laboratory landslide data, allowing us to relate landslides to the measured rheology.

  7. Jetting from impact of a spherical drop with a deep layer

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Toole, Jameson; Fazzaa, Kamel; Deegan, Robert; Deegan Group Team; X-Ray Science Division, Advanced Photon Source Collaboration

    2011-11-01

We performed an experimental study of jets formed during the impact of a spherical drop with a deep layer of the same liquid. Using high speed optical and X-ray imaging, we observe two types of jets: the so-called ejecta sheet, which emerges almost immediately after impact, and the lamella, which emerges later. For high Reynolds number the two jets are distinct, while for low Reynolds number they combine into a single continuous jet. We also measured the emergence time, speed, and position of the ejecta sheet and found simple scaling relations for these quantities.

  8. Size-sensitive sorting of microparticles through control of flow geometry

    NASA Astrophysics Data System (ADS)

    Wang, Cheng; Jalikop, Shreyas V.; Hilgenfeldt, Sascha

    2011-07-01

    We demonstrate a general concept of flow manipulation in microfluidic environments, based on controlling the shape and position of flow domains in order to force switching and sorting of microparticles without moving parts or changes in design geometry. Using microbubble acoustic streaming, we show that regulation of the relative strength of streaming and a superimposed Poiseuille flow allows for size-selective trapping and releasing of particles, with particle size sensitivity much greater than what is imposed by the length scales of microfabrication. A simple criterion allows for quantitative tuning of microfluidic devices for switching and sorting of particles of desired size.

  9. Efficiency equations of the railgun

    NASA Astrophysics Data System (ADS)

    Sadedin, D. R.

    1984-03-01

The feasibility of employing railguns for large-scale applications, such as space launching, will ultimately be determined by efficiency considerations. The present investigation is concerned with the calculation of efficiencies for constant-current railguns. Elementary considerations are discussed, taking into account a simple condition for high efficiency, the magnetic field of the rails, and the acceleration force on the projectile. The loss in a portion of the rails is considered along with rail loss comparisons, applications to the segmented gun, rail losses related to constant resistance per unit length, efficiency expressions, and arc (muzzle) voltage energy.
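
As a sketch of the elementary efficiency considerations (not the paper's derivation), the constant-current energy bookkeeping below uses assumed values: with inductance gradient L′ the accelerating force is F = ½L′I², so the kinetic energy delivered over the barrel equals the energy left stored in the rail inductance at exit, capping the efficiency at 50% even before resistive losses.

```python
# Constant-current railgun energy bookkeeping; all numbers are illustrative.
Lp = 0.4e-6      # assumed inductance gradient, H/m
I = 500e3        # assumed constant current, A
l = 5.0          # assumed barrel length, m

E_kinetic = 0.5 * Lp * I**2 * l    # J, integral of F = 0.5*Lp*I**2 over l
E_magnetic = 0.5 * (Lp * l) * I**2 # J, stored in the rail inductance at exit
E_resistive = 2.0e6                # J, assumed ohmic loss in the rails

eta = E_kinetic / (E_kinetic + E_magnetic + E_resistive)
print(f"efficiency: {eta:.2%}")
```

Because E_kinetic and E_magnetic are equal term by term, eta ≤ 0.5 for any constant-current shot; recovering the stored magnetic energy (e.g., in segmented guns) is what motivates the more elaborate schemes the abstract lists.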

  10. A simple and effective radiometric correction method to improve landscape change detection across sensors and across time

    USGS Publications Warehouse

    Chen, X.; Vierling, Lee; Deering, D.

    2005-01-01

Satellite data offer unrivaled utility in monitoring and quantifying large-scale land cover change over time. Radiometric consistency among collocated multi-temporal imagery is difficult to maintain, however, due to variations in sensor characteristics, atmospheric conditions, solar angle, and sensor view angle that can obscure surface change detection. To detect accurate landscape change using multi-temporal images, we developed a variation of the pseudoinvariant feature (PIF) normalization scheme: the temporally invariant cluster (TIC) method. Image data were acquired on June 9, 1990 (Landsat 4), June 20, 2000 (Landsat 7), and August 26, 2001 (Landsat 7) to analyze boreal forests near the Siberian city of Krasnoyarsk using the normalized difference vegetation index (NDVI), enhanced vegetation index (EVI), and reduced simple ratio (RSR). The temporally invariant cluster (TIC) centers were identified via a point density map of collocated pixel VIs from the base image and the target image, and a normalization regression line was created to intersect all TIC centers. Target image VI values were then recalculated using the regression function so that these two images could be compared using the resulting common radiometric scale. We found that EVI was very indicative of vegetation structure because of its sensitivity to shadowing effects and could thus be used to separate conifer forests from deciduous forests and grass/crop lands. Conversely, because NDVI reduced the radiometric influence of shadow, it did not allow for distinctions among these vegetation types. After normalization, correlations of NDVI and EVI with forest leaf area index (LAI) field measurements combined for 2000 and 2001 were significantly improved; the r² values in these regressions rose from 0.49 to 0.69 and from 0.46 to 0.61, respectively. 
An EVI "cancellation effect," where EVI was positively related to understory greenness but negatively related to forest canopy coverage, was evident across a post-fire chronosequence with normalized data. These findings indicate that the TIC method provides a simple, effective, and repeatable method to create radiometrically comparable data sets for remote detection of landscape change. Compared to some previous relative radiometric normalization methods, this new method does not require high-level programming and statistical skills, yet remains sensitive to landscape changes occurring over seasonal and inter-annual time scales. In addition, the TIC method maintains sensitivity to subtle changes in vegetation phenology and enables normalization even when invariant features are rare. While this normalization method allowed detection of a range of land use, land cover, and phenological/biophysical changes in the Siberian boreal forest region studied here, it is necessary to further examine images representing a wide variety of ecoregions to thoroughly evaluate the TIC method against other normalization schemes. © 2005 Elsevier Inc. All rights reserved.
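
A minimal sketch of the TIC normalization step, assuming the cluster centers have already been identified from the point-density map; the center coordinates and pixel values below are invented for illustration only.

```python
import numpy as np

# Hypothetical TIC centers: (target VI, base VI) pairs read off the
# point-density map of collocated pixels (values are illustrative).
tic_centers = np.array([
    [0.15, 0.12],
    [0.40, 0.35],
    [0.62, 0.58],
    [0.80, 0.77],
])

# Fit the normalization regression line through the cluster centers.
slope, intercept = np.polyfit(tic_centers[:, 0], tic_centers[:, 1], 1)

def normalize(target_vi):
    """Recalculate target-image VI values on the base image's radiometric
    scale so the two dates can be compared directly."""
    return slope * np.asarray(target_vi) + intercept

target_pixels = np.array([0.20, 0.55, 0.75])
print(normalize(target_pixels))
```

After this recalculation, differences between the two images reflect landscape change rather than sensor or illumination differences, which is the point of the TIC scheme.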

  11. Analytical approach to chromatic correction in the final focus system of circular colliders

    DOE PAGES

    Cai, Yunhai

    2016-11-28

Here, a conventional final focus system in particle accelerators is systematically analyzed. We find simple relations between the parameters of two focus modules in the final telescope. Using the relations, we derive the chromatic Courant-Snyder parameters for the telescope. The parameters scale approximately as (L*/βy*)δ, where L* is the distance from the interaction point to the first quadrupole, βy* the vertical beta function at the interaction point, and δ the relative momentum deviation. Most importantly, we show how to compensate the chromaticity order by order in δ by a traditional correction module flanked by an asymmetric pair of harmonic multipoles. The method enables a circular Higgs collider with a 2% momentum aperture and illuminates a path forward to 4% in the future.

  12. Statistical self-similarity of width function maxima with implications to floods

    USGS Publications Warehouse

    Veitzer, S.A.; Gupta, V.K.

    2001-01-01

Recently a new theory of random self-similar river networks, called the RSN model, was introduced to explain empirical observations regarding the scaling properties of distributions of various topologic and geometric variables in natural basins. The RSN model predicts that such variables exhibit statistical simple scaling when indexed by Horton-Strahler order. The average side tributary structure of RSN networks also exhibits Tokunaga-type self-similarity, which is widely observed in nature. We examine the scaling structure of distributions of the maximum of the width function for RSNs for nested, complete Strahler basins by performing ensemble simulations. The maximum of the width function exhibits distributional simple scaling, when indexed by Horton-Strahler order, for both RSNs and natural river networks extracted from digital elevation models (DEMs). We also test a power-law relationship between Horton ratios for the maximum of the width function and drainage areas. These results represent first steps in formulating a comprehensive physical statistical theory of floods at multiple space-time scales for RSNs as discrete hierarchical branching structures. © 2001 Published by Elsevier Science Ltd.
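
Statistical simple scaling indexed by Horton-Strahler order means the distribution at order w+1 equals the distribution at order w rescaled by a constant ratio. The synthetic check below (the ratio and the base distribution are assumptions, not the RSN model itself) verifies the diagnostic property: any fixed quantile ratio between successive orders is the same constant.

```python
import numpy as np

rng = np.random.default_rng(0)

R = 2.4          # hypothetical Horton ratio for width-function maxima
orders = [1, 2, 3, 4]

# Simple scaling: samples at order w are a common base distribution
# multiplied by R**w (lognormal base chosen purely for illustration).
samples = {w: R**w * rng.lognormal(mean=0.0, sigma=0.3, size=50_000)
           for w in orders}

# Under simple scaling, the 0.9-quantile ratio between successive orders
# (or any other fixed quantile) equals R for every order.
for w in orders[:-1]:
    ratio = np.quantile(samples[w + 1], 0.9) / np.quantile(samples[w], 0.9)
    print(w, ratio)
```

Departures of such quantile ratios from a constant are what distinguish multiscaling from the simple scaling the RSN model predicts.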

  13. Acoustic Treatment Design Scaling Methods. Volume 2; Advanced Treatment Impedance Models for High Frequency Ranges

    NASA Technical Reports Server (NTRS)

    Kraft, R. E.; Yu, J.; Kwan, H. W.

    1999-01-01

The primary purpose of this study is to develop improved models for the acoustic impedance of treatment panels at high frequencies, for application to subscale treatment designs. Effects that cause significant deviation of the impedance from simple geometric scaling are examined in detail, an improved high-frequency impedance model is developed, and the improved model is correlated with high-frequency impedance measurements. Only single-degree-of-freedom honeycomb sandwich resonator panels with either perforated sheet or "linear" wiremesh faceplates are considered. The objective is to understand those effects that cause the simple single-degree-of-freedom resonator panels to deviate at the higher-scaled frequency from the impedance that would be obtained at the corresponding full-scale frequency. This will allow the subscale panel to be designed to achieve a specified impedance spectrum over at least a limited range of frequencies. An advanced impedance prediction model has been developed that accounts for some of the known effects at high frequency that have previously been ignored as a small source of error for full-scale frequency ranges.

  14. Impact force as a scaling parameter

    NASA Technical Reports Server (NTRS)

    Poe, Clarence C., Jr.; Jackson, Wade C.

    1994-01-01

    The Federal Aviation Administration (FAR PART 25) requires that a structure carry ultimate load with nonvisible impact damage and carry 70 percent of limit flight loads with discrete damage. The Air Force has similar criteria (MIL-STD-1530A). Both civilian and military structures are designed by a building block approach. First, critical areas of the structure are determined, and potential failure modes are identified. Then, a series of representative specimens are tested that will fail in those modes. The series begins with tests of simple coupons, progresses through larger and more complex subcomponents, and ends with a test on a full-scale component, hence the term 'building block.' In order to minimize testing, analytical models are needed to scale impact damage and residual strength from the simple coupons to the full-scale component. Using experiments and analysis, the present paper illustrates that impact damage can be better understood and scaled using impact force than just kinetic energy. The plate parameters considered are size and thickness, boundary conditions, and material, and the impact parameters are mass, shape, and velocity.

  15. On identifying relationships between the flood scaling exponent and basin attributes.

    PubMed

    Medhi, Hemanta; Tripathi, Shivam

    2015-07-01

Floods are known to exhibit self-similarity and follow scaling laws that form the basis of regional flood frequency analysis. However, the relationship between basin attributes and the scaling behavior of floods is still not fully understood. Identifying these relationships is essential for drawing connections between hydrological processes in a basin and the flood response of the basin. The existing studies mostly rely on simulation models to draw these connections. This paper proposes a new methodology that draws connections between basin attributes and flood scaling exponents by using observed data. In the proposed methodology, a region-of-influence approach is used to delineate homogeneous regions for each gaging station. Ordinary least squares regression is then applied to estimate flood scaling exponents for each homogeneous region, and finally stepwise regression is used to identify basin attributes that affect flood scaling exponents. The effectiveness of the proposed methodology is tested by applying it to data from river basins in the United States. The results suggest that the flood scaling exponent is small for regions having (i) large abstractions from precipitation in the form of large soil moisture storages and high evapotranspiration losses, and (ii) large fractions of overland flow compared to base flow, i.e., regions with fast-responding basins. Analysis of simple scaling and multiscaling of floods showed evidence of simple scaling for regions in which snowfall dominates the total precipitation.
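
The OLS step of the methodology can be sketched as a log-log regression of flood quantiles on drainage area, since the scaling law Q ~ c·A^θ is linear in log space. The basin areas, exponent, and noise level below are synthetic stand-ins, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gauged basins in one homogeneous region (illustrative values).
area_km2 = np.array([50., 120., 300., 750., 1800., 4200.])
theta_true = 0.58   # assumed flood scaling exponent to be recovered
peak_flow = 3.0 * area_km2**theta_true * rng.lognormal(0.0, 0.05,
                                                       area_km2.size)

# Ordinary least squares in log-log space: the slope is the exponent.
slope, intercept = np.polyfit(np.log(area_km2), np.log(peak_flow), 1)
print(f"estimated flood scaling exponent: {slope:.3f}")
```

In the paper's workflow, exponents estimated this way for each region become the response variable in the subsequent stepwise regression against basin attributes.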

  16. Let's Stop Trying to Quantify Household Vulnerability: The Problem With Simple Scales for Targeting and Evaluating Economic Strengthening Programs

    PubMed Central

    Moret, Whitney M

    2018-01-01

    Introduction: Economic strengthening practitioners are increasingly seeking data collection tools that will help them target households vulnerable to HIV and poor child well-being outcomes, match households to appropriate interventions, monitor their status, and determine readiness for graduation from project support. This article discusses efforts in 3 countries to develop simple, valid tools to quantify and classify economic vulnerability status. Methods and Findings: In Côte d'Ivoire, we conducted a cross-sectional survey with 3,749 households to develop a scale based on the definition of HIV-related economic vulnerability from the U.S. President's Emergency Plan for AIDS Relief (PEPFAR) for the purpose of targeting vulnerable households for PEPFAR-funded programs for orphans and vulnerable children. The vulnerability measures examined did not cluster in ways that would allow for the creation of a small number of composite measures, and thus we were unable to develop a scale. In Uganda, we assessed the validity of a vulnerability index developed to classify households according to donor classifications of economic status by measuring its association with a validated poverty measure, finding only a modest correlation. In South Africa, we developed monitoring and evaluation tools to assess economic status of individual adolescent girls and their households. We found no significant correlation with our validation measures, which included a validated measure of girls' vulnerability to HIV, a validated poverty measure, and subjective classifications generated by the community, data collector, and respondent. Overall, none of the measures of economic vulnerability used in the 3 countries varied significantly with their proposed validation items. 
Conclusion: Our findings suggest that broad constructs of economic vulnerability cannot be readily captured using simple scales to classify households and individuals in a way that accounts for a substantial amount of variance at locally defined vulnerability levels. We recommend that researchers and implementers design monitoring and evaluation instruments to capture narrower definitions of vulnerability based on characteristics programs intend to affect. We also recommend using separate tools for targeting based on context-specific indicators with evidence-based links to negative outcomes. Policy makers and donors should avoid reliance on simplified metrics of economic vulnerability in the programs they support. PMID:29496734

  17. A reduced-order modeling approach to represent subgrid-scale hydrological dynamics for land-surface simulations: application in a polygonal tundra landscape

    DOE PAGES

    Pau, G. S. H.; Bisht, G.; Riley, W. J.

    2014-09-17

Existing land surface models (LSMs) describe physical and biological processes that occur over a wide range of spatial and temporal scales. For example, biogeochemical and hydrological processes responsible for carbon (CO2, CH4) exchanges with the atmosphere range from the molecular scale (pore-scale O2 consumption) to tens of kilometers (vegetation distribution, river networks). Additionally, many processes within LSMs are nonlinearly coupled (e.g., methane production and soil moisture dynamics), and therefore simple linear upscaling techniques can result in large prediction error. In this paper we applied a reduced-order modeling (ROM) technique known as the "proper orthogonal decomposition mapping method" that reconstructs temporally resolved fine-resolution solutions based on coarse-resolution solutions. We developed four different methods and applied them to four study sites in a polygonal tundra landscape near Barrow, Alaska. Coupled surface–subsurface isothermal simulations were performed for summer months (June–September) at fine (0.25 m) and coarse (8 m) horizontal resolutions. We used simulation results from three summer seasons (1998–2000) to build ROMs of the 4-D soil moisture field for the study sites individually (single-site) and aggregated (multi-site). The results indicate that the ROM produced a significant computational speedup (> 10³) with very small relative approximation error (< 0.1%) for 2 validation years not used in training the ROM. We also demonstrate that our approach: (1) efficiently corrects for coarse-resolution model bias and (2) can be used for polygonal tundra sites not included in the training data set with relatively good accuracy (< 1.7% relative error), thereby allowing for the possibility of applying these ROMs across a much larger landscape. 
By coupling the ROMs constructed at different scales together hierarchically, this method has the potential to efficiently increase the resolution of land models for coupled climate simulations to spatial scales consistent with mechanistic physical process representation.

  18. Interactions of multi-scale heterogeneity in the lithosphere: Australia

    NASA Astrophysics Data System (ADS)

    Kennett, B. L. N.; Yoshizawa, K.; Furumura, T.

    2017-10-01

Understanding the complex heterogeneity of the continental lithosphere involves a wide variety of spatial scales and the synthesis of multiple classes of information. Seismic surface waves and multiply reflected body waves provide the main constraints on broad-scale structure, and bounds on the extent of the lithosphere-asthenosphere transition (LAT) can be found from the vertical gradients of S wavespeed. Information on finer-scale structures comes through body wave studies, including detailed seismic tomography and P-wave reflectivity extracted from stacked autocorrelograms of continuous component records. With the inclusion of deterministic large-scale structure and realistic medium-scale stochastic features, fine-scale variations are subdued. The resulting multi-scale heterogeneity model for the Australian region gives a good representation of the character of observed seismograms and their geographic variations and matches the observations of P-wave reflectivity. P reflections in the 0.5-3.0 Hz band in the uppermost mantle suggest variations on vertical scales of a few hundred metres with amplitudes of the order of 1%. Interference of waves reflected or converted at sequences of such modest variations in physical properties produces relatively simple behaviour at lower frequencies, which can suggest simpler structures than are actually present. Vertical changes in the character of fine-scale heterogeneity can produce apparent discontinuities. In Central Australia a 'mid-lithospheric discontinuity' can be tracked via changes in frequency content of station reflectivity, with links to the broad-scale pattern of wavespeed gradients and, in particular, the gradients of radial anisotropy. Comparisons with xenolith results from southeastern Australia indicate a strong tie between geochemical stratification and P-wave reflectivity.

  19. Demonstration of nanoimprinted hyperlens array for high-throughput sub-diffraction imaging

    NASA Astrophysics Data System (ADS)

    Byun, Minsueop; Lee, Dasol; Kim, Minkyung; Kim, Yangdoo; Kim, Kwan; Ok, Jong G.; Rho, Junsuk; Lee, Heon

    2017-04-01

Overcoming the resolution limit of conventional optics is regarded as the most important issue in optical imaging science and technology. Although hyperlenses, super-resolution imaging devices based on highly anisotropic dispersion relations that allow access to high-wavevector components, have recently achieved far-field sub-diffraction imaging in real time, the previously demonstrated devices have suffered from extreme difficulties in both the fabrication process and the placement of non-artificial objects. This restricts the practical applications of hyperlens devices. While implementing large-scale hyperlens arrays in conventional microscopy is desirable to solve such issues, fabricating such a large-scale hyperlens array has not been feasible with the previously used nanofabrication methods. Here, we suggest a scalable and reliable fabrication process for a large-scale hyperlens device based on direct pattern transfer techniques. We fabricate a 5 cm × 5 cm hyperlens array and experimentally demonstrate that it can resolve sub-diffraction features down to 160 nm under 410 nm wavelength visible light. The array-based hyperlens device will provide a simple solution for much more practical far-field and real-time super-resolution imaging, which can be widely used in optics, biology, medical science, nanotechnology, and other closely related interdisciplinary fields.

  20. Simplifying the use of prognostic information in traumatic brain injury. Part 1: The GCS-Pupils score: an extended index of clinical severity.

    PubMed

    Brennan, Paul M; Murray, Gordon D; Teasdale, Graham M

    2018-06-01

OBJECTIVE Glasgow Coma Scale (GCS) scores and pupil responses are key indicators of the severity of traumatic brain damage. The aim of this study was to determine what information would be gained by combining these indicators into a single index and to explore the merits of different ways of achieving this. METHODS Information about early GCS scores, pupil responses, late outcomes on the Glasgow Outcome Scale, and mortality was obtained at the individual patient level by reviewing data from the CRASH (Corticosteroid Randomisation After Significant Head Injury; n = 9045) study and the IMPACT (International Mission for Prognosis and Clinical Trials in TBI; n = 6855) database. These data were combined into a pooled data set for the main analysis. Methods of combining the Glasgow Coma Scale and pupil response data varied in complexity from using a simple arithmetic score (GCS score [range 3-15] minus the number of nonreacting pupils [0, 1, or 2]), which we call the GCS-Pupils score (GCS-P; range 1-15), to treating each factor as a separate categorical variable. The informational content of each of these models about patient outcome was evaluated using Nagelkerke's R². RESULTS Separately, the GCS score and pupil response were each related to outcome. Adding information about the pupil response to the GCS score increased the information yield. The performance of the simple GCS-P was similar to the performance of more complex methods of evaluating traumatic brain damage. The relationship between decreases in the GCS-P and deteriorating outcome was seen across the complete range of possible scores. The additional 2 lowest points offered by the GCS-Pupils scale (GCS-P 1 and 2) extended the information about injury severity from a mortality rate of 51% and an unfavorable outcome rate of 70% at GCS score 3 to a mortality rate of 74% and an unfavorable outcome rate of 90% at GCS-P 1. 
The paradoxical finding that GCS score 4 was associated with a worse outcome than GCS score 3 was not seen when using the GCS-P. CONCLUSIONS A simple arithmetic combination of the GCS score and pupillary response, the GCS-P, extends the information provided about patient outcome to an extent comparable to that obtained using more complex methods. The greater range of injury severities that are identified and the smoothness of the stepwise pattern of outcomes across the range of scores may be useful in evaluating individual patients and identifying patient subgroups. The GCS-P may be a useful platform onto which information about other key prognostic features can be added in a simple format likely to be useful in clinical practice.
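
The arithmetic of the GCS-P is simple enough to state directly, with the ranges as defined in the abstract:

```python
def gcs_pupils_score(gcs: int, nonreacting_pupils: int) -> int:
    """GCS-P = GCS score (3-15) minus the number of nonreacting pupils
    (0, 1, or 2), giving a combined severity index on the range 1-15."""
    if not 3 <= gcs <= 15:
        raise ValueError("GCS score must be between 3 and 15")
    if nonreacting_pupils not in (0, 1, 2):
        raise ValueError("nonreacting pupils must be 0, 1, or 2")
    return gcs - nonreacting_pupils

print(gcs_pupils_score(3, 2))   # lowest possible GCS-P: 1
print(gcs_pupils_score(15, 0))  # highest possible GCS-P: 15
```

The two extra points at the bottom of the scale (GCS-P 1 and 2) are exactly where the abstract reports the steepest mortality and unfavorable-outcome rates.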

  1. Experimental Investigation of the Flow on a Simple Frigate Shape (SFS)

    PubMed Central

    Mora, Rafael Bardera

    2014-01-01

Helicopter operations on board ships require special procedures that introduce additional limitations, known as ship-helicopter operational limitations (SHOLs), which are a priority for all navies. This paper presents the main results of an experimental investigation of a simple frigate shape (SFS), a typical study case in experimental and computational aerodynamics. The results of this investigation are used to assess the flow predicted by the SFS geometry against experimental data obtained by testing a reduced-scale ship model in the wind tunnel and against full-scale measurements performed on board a real frigate-type ship. PMID:24523646

  2. Rapid and simple procedure for homogenizing leaf tissues suitable for mini-midi-scale DNA extraction in rice.

    PubMed

    Yi, Gihwan; Choi, Jun-Ho; Lee, Jong-Hee; Jeong, Unggi; Nam, Min-Hee; Yun, Doh-Won; Eun, Moo-Young

    2005-01-01

We describe a rapid and simple procedure for homogenizing leaf samples suitable for mini/midi-scale DNA preparation in rice. The method uses tungsten carbide beads and a general-purpose vortexer to homogenize leaf samples. In general, two samples can be ground completely within 11.3 ± 1.5 s at one time. Up to 20 samples can be ground at a time using a vortexer attachment. DNA yields ranged from 2.2 to 7.6 μg from 25-150 mg of young fresh leaf tissue. The quality and quantity of the DNA were compatible with most PCR work and RFLP analysis.

  3. Estimation of critical behavior from the density of states in classical statistical models

    NASA Astrophysics Data System (ADS)

    Malakis, A.; Peratzakis, A.; Fytas, N. G.

    2004-12-01

    We present a simple and efficient approximation scheme which greatly facilitates the extension of Wang-Landau sampling (or similar techniques) in large systems for the estimation of critical behavior. The method, presented in an algorithmic approach, is based on a very simple idea, familiar in statistical mechanics from the notion of thermodynamic equivalence of ensembles and the central limit theorem. It is illustrated that we can predict with high accuracy the critical part of the energy space and by using this restricted part we can extend our simulations to larger systems and improve the accuracy of critical parameters. It is proposed that the extensions of the finite-size critical part of the energy space, determining the specific heat, satisfy a scaling law involving the thermal critical exponent. The method is applied successfully for the estimation of the scaling behavior of specific heat of both square and simple cubic Ising lattices. The proposed scaling law is verified by estimating the thermal critical exponent from the finite-size behavior of the critical part of the energy space. The density of states of the zero-field Ising model on these lattices is obtained via a multirange Wang-Landau sampling.
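
Once a density of states has been estimated, whether by full Wang-Landau sampling or by the restricted critical-energy-range scheme described here, thermal quantities such as the specific heat follow by reweighting. The sketch below uses the exact DOS of the 2 × 2 periodic zero-field Ising lattice (E = -8: 2 states, E = 0: 12 states, E = +8: 2 states) as a stand-in for a sampled one.

```python
import numpy as np

# Exact DOS of the 2x2 periodic zero-field Ising model (16 states total),
# standing in for a Wang-Landau estimate g(E).
E = np.array([-8.0, 0.0, 8.0])
g = np.array([2.0, 12.0, 2.0])

def specific_heat(T):
    """Total specific heat C = Var(E)/T**2 from the DOS (k_B = 1).
    Energies are shifted by E.min() for numerical stability; the shift
    cancels in the averages."""
    w = g * np.exp(-(E - E.min()) / T)
    Z = w.sum()
    mean_E = (w * E).sum() / Z
    mean_E2 = (w * E**2).sum() / Z
    return (mean_E2 - mean_E**2) / T**2

Ts = np.linspace(0.5, 6.0, 200)
C = np.array([specific_heat(T) for T in Ts])
print(f"peak of C at T ~ {Ts[C.argmax()]:.2f}")
```

Because only the energies near the specific-heat peak contribute appreciably to Var(E), restricting the sampled energy window to that critical part, as the paper proposes, leaves C(T) near the peak essentially unchanged while shrinking the simulation cost.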

  4. Function Invariant and Parameter Scale-Free Transformation Methods

    ERIC Educational Resources Information Center

    Bentler, P. M.; Wingard, Joseph A.

    1977-01-01

    A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)

  5. On the consideration of scaling properties of extreme rainfall in Madrid (Spain) for developing a generalized intensity-duration-frequency equation and assessing probable maximum precipitation estimates

    NASA Astrophysics Data System (ADS)

    Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel

    2018-01-01

The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation for the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful to estimate suitable values of PMP at any point of the Iberian Peninsula from basic statistical parameters (mean and standard deviation) of its rainfall series.
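
A hedged sketch of the two ingredients named in the abstract, with invented numbers throughout: a Hershfield-type statistical PMP estimate built from the frequency-factor envelope, and the simple-scaling transfer of an intensity quantile across durations.

```python
# Statistical PMP estimate of Hershfield type: PMP = m + k_m * s, where m
# and s are the mean and standard deviation of the annual-maximum rainfall
# series; all numerical values below are illustrative, not the paper's.
m_mm = 45.0    # assumed mean of annual maxima, mm
s_mm = 18.0    # assumed standard deviation of annual maxima, mm
k_m = 15.0     # assumed frequency factor read off an envelope curve

pmp_mm = m_mm + k_m * s_mm

# Under simple scaling, an intensity quantile transfers across durations as
# i(d) = i(D) * (d/D)**beta with a single exponent beta (value assumed).
beta = -0.6
i_1h = 30.0                        # assumed 60-min intensity quantile, mm/h
i_24h = i_1h * (24.0 / 1.0)**beta  # implied 24-h intensity quantile, mm/h

print(f"PMP: {pmp_mm:.0f} mm, 24-h intensity: {i_24h:.2f} mm/h")
```

The single exponent beta is what makes the IDF "master equation" possible: one fitted curve plus the scaling law covers the whole 25 min to 3 day range.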

  6. Self-organized criticality in asymmetric exclusion model with noise for freeway traffic

    NASA Astrophysics Data System (ADS)

    Nagatani, Takashi

    1995-02-01

    The one-dimensional asymmetric simple-exclusion model with open boundaries for parallel update is extended to take into account temporary stopping of particles. The model represents the traffic flow on a highway with temporary deceleration of cars. Introducing temporary stopping into the asymmetric simple-exclusion model drives the system asymptotically into a steady state exhibiting self-organized criticality. In the self-organized critical state, start-stop waves (or traffic jams) appear with various sizes (or lifetimes). The typical interval ⟨s⟩ between consecutive jams scales as ⟨s⟩ ≃ L^ν with ν = 0.51 ± 0.05, where L is the system size. It is shown that the cumulative jam-interval distribution N_s(L) satisfies the finite-size scaling form N_s(L) ≃ L^(-ν) f(s/L^ν). Also, the typical lifetime scales as ⟨m⟩ ≃ L^ν′ with ν′ = 0.52 ± 0.05. The cumulative lifetime distribution N_m(L) satisfies the finite-size scaling form N_m(L) ≃ L^(-1) g(m/L^ν′).
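    The finite-size scaling form can be checked numerically by rescaling: if N_s(L) = L^(-ν) f(s/L^ν), then curves for different system sizes collapse when plotted as L^ν N_s versus s/L^ν. A minimal sketch, using synthetic data with a hypothetical scaling function f(u) = exp(-u) (the true f is whatever the simulation produces):

```python
import math

NU = 0.51  # exponent reported in the abstract for the jam-interval scaling

def cumulative_intervals(s, L, nu=NU):
    """Synthetic data constructed to satisfy N_s(L) = L**-nu * f(s / L**nu),
    with the hypothetical scaling function f(u) = exp(-u)."""
    return L ** -nu * math.exp(-s / L ** nu)

def collapsed(s, L, nu=NU):
    """Rescaled coordinates (u, f(u)); points from any L fall on one curve."""
    u = s / L ** nu
    return u, L ** nu * cumulative_intervals(s, L, nu)
```

    Plotting the collapsed coordinates for several L is the standard visual test that the chosen exponent is correct; a wrong ν leaves the curves spread apart.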

  7. Use of a Cutaneous Body Image (CBI) scale to evaluate self perception of body image in acne vulgaris.

    PubMed

    Amr, Mostafa; Kaliyadan, Feroze; Shams, Tarek

    2014-01-01

    Skin disorders such as acne, which have significant cosmetic implications, can affect the self-perception of cutaneous body image. There are many scales which measure self-perception of cutaneous body image. We evaluated the use of a simple Cutaneous Body Image (CBI) scale to assess self-perception of body image in a sample of young Arab patients affected with acne. A total of 70 patients with acne answered the CBI questionnaire. The CBI score was correlated with the severity of acne and acne scarring, gender, and history of retinoids use. There was no statistically significant correlation between CBI and the other parameters - gender, acne/acne scarring severity, and use of retinoids. Our study suggests that cutaneous body image perception in Arab patients with acne was not dependent on variables like gender and severity of acne or acne scarring. A simple CBI scale alone is not a sufficiently reliable tool to assess self-perception of body image in patients with acne vulgaris.

  8. A simple scaling law for the equation of state and the radial distribution functions calculated by density-functional theory molecular dynamics

    NASA Astrophysics Data System (ADS)

    Danel, J.-F.; Kazandjian, L.

    2018-06-01

    It is shown that the equation of state (EOS) and the radial distribution functions obtained by density-functional theory molecular dynamics (DFT-MD) obey a simple scaling law. At given temperature, the thermodynamic properties and the radial distribution functions given by a DFT-MD simulation remain unchanged if the mole fractions of nuclei of given charge and the average volume per atom remain unchanged. A practical interest of this scaling law is to obtain an EOS table for a fluid from that already obtained for another fluid if it has the right characteristics. Another practical interest of this result is that an asymmetric mixture made up of light and heavy atoms requiring very different time steps can be replaced by a mixture of atoms of equal mass, which facilitates the exploration of the configuration space in a DFT-MD simulation. The scaling law is illustrated by numerical results.

  9. Effects of practice on the Wechsler Adult Intelligence Scale-IV across 3- and 6-month intervals.

    PubMed

    Estevis, Eduardo; Basso, Michael R; Combs, Dennis

    2012-01-01

    A total of 54 participants (age M = 20.9; education M = 14.9; initial Full Scale IQ M = 111.6) were administered the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) at baseline and again either 3 or 6 months later. Scores on the Full Scale IQ, Verbal Comprehension, Working Memory, Perceptual Reasoning, Processing Speed, and General Ability Indices improved approximately 7, 5, 4, 5, 9, and 6 points, respectively, and increases were similar regardless of whether the re-examination occurred over 3- or 6-month intervals. Reliable change indices (RCI) were computed using the simple difference and bivariate regression methods, providing estimated base rates of change across time. The regression method provided more accurate estimates of reliable change than did the simple difference between baseline and follow-up scores. These findings suggest that prior exposure to the WAIS-IV results in significant score increments. These gains reflect practice effects instead of genuine intellectual changes, which may lead to errors in clinical judgment.
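    The two reliable-change approaches compared in the abstract can be sketched as follows: a Jacobson-Truax style simple-difference RCI, and a regression-based RCI that scores the observed retest against the score predicted from baseline. The numeric inputs in the usage below are illustrative, not the study's data.

```python
import math

def rci_simple_difference(x1, x2, sd_baseline, r_xx):
    """Simple-difference RCI: retest change divided by the standard error
    of the difference, S_diff = sqrt(2) * SEM, SEM = SD * sqrt(1 - r_xx)."""
    sem = sd_baseline * math.sqrt(1.0 - r_xx)
    return (x2 - x1) / (math.sqrt(2.0) * sem)

def rci_regression(x1, x2, slope, intercept, see):
    """Regression-based RCI: observed retest minus the retest predicted
    from baseline (Y_hat = intercept + slope * X), divided by the
    standard error of estimate of the regression."""
    predicted = intercept + slope * x1
    return (x2 - predicted) / see
```

    Because the regression method builds the expected practice gain and regression to the mean into the prediction, it flags fewer spurious "improvements" than the raw difference, which is the abstract's point about its greater accuracy.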

  10. The brainstem reticular formation is a small-world, not scale-free, network

    PubMed Central

    Humphries, M.D; Gurney, K; Prescott, T.J

    2005-01-01

    Recently, it has been demonstrated that several complex systems may have simple graph-theoretic characterizations as so-called ‘small-world’ and ‘scale-free’ networks. These networks have also been applied to the gross neural connectivity between primate cortical areas and the nervous system of Caenorhabditis elegans. Here, we extend this work to a specific neural circuit of the vertebrate brain—the medial reticular formation (RF) of the brainstem—and, in doing so, we have made three key contributions. First, this work constitutes the first model (and quantitative review) of this important brain structure for over three decades. Second, we have developed the first graph-theoretic analysis of vertebrate brain connectivity at the neural network level. Third, we propose simple metrics to quantitatively assess the extent to which the networks studied are small-world or scale-free. We conclude that the medial RF is configured to create small-world (implying coherent rapid-processing capabilities), but not scale-free, type networks under assumptions which are amenable to quantitative measurement. PMID:16615219

  11. The Forest Canopy as a Temporally and Spatially Dynamic Ecosystem: Preliminary Results of Biomass Scaling and Habitat Use from a Case Study in Large Eastern White Pines (Pinus Strobus)

    NASA Astrophysics Data System (ADS)

    Martin, J.; Laughlin, M. M.; Olson, E.

    2017-12-01

    Canopy processes can be viewed at many scales and through many lenses. Fundamentally, we may wish to start by treating each canopy as a unique surface, an ecosystem unto itself. By doing so, we may make some important observations that greatly influence our ability to scale canopies to landscape, regional, and global scales. This work summarizes an ongoing endeavor to quantify various canopy-level processes on individual old and large Eastern white pine trees (Pinus strobus). Our work shows that these canopies contain complex structures that vary with height and as the tree ages. This phenomenon complicates the allometric scaling of these large trees using standard methods, but detailed measurements from within the canopy provided a method to constrain scaling equations. We also quantified how these canopies change and respond to canopy disturbance, and documented disproportionate variation of growth compared to the lower stem as the trees develop. Additionally, the complex shape and surface area allow these canopies to act like ecosystems themselves, despite being relatively young and more commonplace compared with the more notable canopies of the tropics and the Pacific Northwestern US. The white pines of these relatively simple, near-boreal forests appear to house various species, including many lichens. The lichen species can cover significant portions of the canopy surface area (which may be only 25 to 50 years old) and are a sizable source of potential nitrogen additions to the soils below, as well as a modulator of hydrologic cycles, holding significant amounts of precipitation. Lastly, the combined complex surface area and focused verticality offer important habitat to numerous animal species, some of which are quite surprising.

  12. Exploring earthquake databases for the creation of magnitude-homogeneous catalogues: tools for application on a regional and global scale

    NASA Astrophysics Data System (ADS)

    Weatherill, G. A.; Pagani, M.; Garcia, J.

    2016-09-01

    The creation of a magnitude-homogenized catalogue is often one of the most fundamental steps in seismic hazard analysis. The process of homogenizing multiple catalogues of earthquakes into a single unified catalogue typically requires careful appraisal of available bulletins, identification of common events within multiple bulletins and the development and application of empirical models to convert from each catalogue's native scale into the required target. The database of the International Seismological Center (ISC) provides the most exhaustive compilation of records from local bulletins, in addition to its reviewed global bulletin. New open-source tools are developed that can utilize this, or any other compiled database, to explore the relations between earthquake solutions provided by different recording networks, and to build and apply empirical models in order to harmonize magnitude scales for the purpose of creating magnitude-homogeneous earthquake catalogues. These tools are described and their application illustrated in two different contexts. The first is a simple application in the Sub-Saharan Africa region where the spatial coverage and magnitude scales for different local recording networks are compared, and their relation to global magnitude scales explored. In the second application the tools are used on a global scale for the purpose of creating an extended magnitude-homogeneous global earthquake catalogue. Several existing high-quality earthquake databases, such as the ISC-GEM and the ISC Reviewed Bulletins, are harmonized into moment magnitude to form a catalogue of more than 562 840 events. This extended catalogue, while not an appropriate substitute for a locally calibrated analysis, can help in studying global patterns in seismicity and hazard, and is therefore released with the accompanying software.
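    The core homogenization step, fitting an empirical conversion from a catalogue's native magnitude scale to the target scale and applying it, can be sketched with ordinary least squares. This is a stand-in: the real tools would typically prefer orthogonal or generalized regression to honor errors in both scales, and the numbers in the test are synthetic.

```python
def fit_linear_conversion(m_native, m_target):
    """Least-squares fit of m_target ≈ a * m_native + b from paired
    events observed on both scales."""
    n = len(m_native)
    mx = sum(m_native) / n
    my = sum(m_target) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(m_native, m_target))
    sxx = sum((x - mx) ** 2 for x in m_native)
    a = sxy / sxx
    return a, my - a * mx

def homogenize(catalogue_mags, a, b):
    """Convert a list of native-scale magnitudes to the target scale."""
    return [a * m + b for m in catalogue_mags]
```

    In practice a separate conversion is fitted per recording network and magnitude type, and the common events used for fitting are found by matching origin times and locations across bulletins.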

  13. Near-Field Terahertz Transmission Imaging at 0.210 Terahertz Using a Simple Aperture Technique

    DTIC Science & Technology

    2015-10-01

    This report discusses a simple aperture useful for terahertz near-field imaging at 0.210 terahertz (λ = 1.43 millimeters). The aperture requires...achieve a spatial resolution of λ/7. The aperture can be scaled with the assistance of machinery found in conventional machine shops to achieve similar results using shorter terahertz wavelengths.

  14. Levelized Cost of Energy Calculator | Energy Analysis | NREL

    Science.gov Websites

    The levelized cost of energy (LCOE) calculator provides a simple calculator for both utility-scale and distributed generation renewable energy technologies. Other costs would need to be included for a thorough analysis. To estimate a simple cost of energy, use the slider controls.

  15. The role of topography on catchment‐scale water residence time

    USGS Publications Warehouse

    McGuire, K.J.; McDonnell, Jeffery J.; Weiler, M.; Kendall, C.; McGlynn, B.L.; Welker, J.M.; Seibert, J.

    2005-01-01

    The age, or residence time, of water is a fundamental descriptor of catchment hydrology, revealing information about the storage, flow pathways, and source of water in a single integrated measure. While there has been tremendous recent interest in residence time estimation to characterize watersheds, there are relatively few studies that have quantified residence time at the watershed scale, and fewer still that have extended those results beyond single catchments to larger landscape scales. We examined topographic controls on residence time for seven catchments (0.085–62.4 km²) that represent diverse geologic and geomorphic conditions in the western Cascade Mountains of Oregon. Our primary objective was to determine the dominant physical controls on catchment‐scale water residence time and specifically test the hypothesis that residence time is related to the size of the basin. Residence times were estimated by simple convolution models that described the transfer of precipitation isotopic composition to the stream network. We found that base flow mean residence times for exponential distributions ranged from 0.8 to 3.3 years. Mean residence time showed no correlation to basin area (r² < 0.01) but instead was correlated (r² = 0.91) to catchment terrain indices representing the flow path distance and flow path gradient to the stream network. These results illustrate that landscape organization (i.e., topography) rather than basin area controls catchment‐scale transport. Results from this study may provide a framework for describing scale‐invariant transport across climatic and geologic conditions, whereby the internal form and structure of the basin defines the first‐order control on base flow residence time.
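    The convolution model named in the abstract, which transfers the precipitation isotope signal to the stream through an exponential transit-time distribution, can be sketched as a discretized toy (time step, residence time, and input values below are hypothetical):

```python
import math

def exponential_ttd(tau, mean_rt):
    """Exponential transit-time distribution g(tau) = exp(-tau / T) / T,
    where T is the mean residence time."""
    return math.exp(-tau / mean_rt) / mean_rt

def stream_signal(c_in, dt, mean_rt):
    """Discrete convolution c_out(t) = sum_tau g(tau) * c_in(t - tau),
    truncating g at 10 mean residence times. The kernel is normalized so
    that a steady input passes through unchanged."""
    n_tau = int(10.0 * mean_rt / dt)
    g = [exponential_ttd(k * dt, mean_rt) for k in range(n_tau)]
    total = sum(g)
    g = [x / total for x in g]
    out = []
    for t in range(len(c_in)):
        out.append(sum(g[k] * c_in[t - k] for k in range(min(t + 1, n_tau))))
    return out
```

    Fitting consists of choosing the mean residence time T that best reproduces the damping and lag of the observed stream isotope signal relative to precipitation.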

  16. Multi-scale curvature for automated identification of glaciated mountain landscapes

    NASA Astrophysics Data System (ADS)

    Prasicek, Günther; Otto, Jan-Christoph; Montgomery, David; Schrott, Lothar

    2014-05-01

    Automated morphometric interpretation of digital terrain data based on impartial rule sets holds substantial promise for large dataset processing and objective landscape classification. However, the geomorphological realm presents tremendous complexity in the translation of qualitative descriptions into geomorphometric semantics. Here, the simple, conventional distinction of V-shaped fluvial and U-shaped glacial valleys is analyzed quantitatively using the relation of multi-scale curvature and drainage area. Glacial and fluvial erosion shapes mountain landscapes in a long-recognized and characteristic way. Valleys incised by fluvial processes typically have V-shaped cross-sections with uniform and moderately steep slopes, whereas glacial valleys tend to have U-shaped profiles and topographic gradients steepening with distance from valley floor. On a DEM, thalweg cells are determined by a drainage area cutoff and multiple moving window sizes are used to derive per-cell curvature over a variety of scales ranging from the vicinity of the flow path at the valley bottom to catchment sections fully including valley sides. The relation of the curvatures calculated for the user-defined minimum scale and the automatically detected maximum scale is presented as a novel morphometric variable termed Difference of Minimum Curvature (DMC). DMC thresholds determined from typical glacial and fluvial sample catchments are employed to identify quadrats of glaciated and non-glaciated mountain landscapes and the distinctions are validated by field-based geological and geomorphological maps. A first test of the novel algorithm at three study sites in the western United States and a subsequent application to Europe and western Asia demonstrate the transferability of the approach.
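    The V-versus-U distinction behind the DMC variable can be illustrated with a one-dimensional toy: measure a second-difference "curvature" of a cross-valley profile at a small and a large window and take the difference. A V-shaped profile concentrates curvature at the thalweg, so its small-scale value dwarfs its large-scale value, while a parabolic U-shaped profile gives the same curvature at every scale. Profile shapes and window sizes here are hypothetical, not the paper's DEM procedure.

```python
def curvature_at_scale(y, i, half_window, dx):
    """Second difference of profile y at index i over half-window w (map
    units), a crude proxy for curvature measured at that scale."""
    k = int(round(half_window / dx))
    w = k * dx
    return (y[i + k] - 2.0 * y[i] + y[i - k]) / w ** 2

dx = 0.01
x = [-10.0 + dx * j for j in range(2001)]   # cross-valley transect
v_valley = [abs(xi) for xi in x]            # V-shaped (fluvial-like)
u_valley = [0.1 * xi ** 2 for xi in x]      # U-shaped (parabolic, glacial-like)

def dmc_like(y, w_min=0.5, w_max=5.0):
    """Difference of curvature between the minimum and maximum scale at
    the thalweg (profile centre) -- a DMC-like statistic."""
    i = len(y) // 2
    return curvature_at_scale(y, i, w_min, dx) - curvature_at_scale(y, i, w_max, dx)
```

    Thresholding such a statistic over DEM quadrats is the spirit of the classification step described above; the published method works per cell on drainage-area-selected thalwegs with automatically detected maximum scales.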

  17. Numerical Simulation of Cylindrical, Self-field MPD Thrusters with Multiple Propellants

    NASA Technical Reports Server (NTRS)

    Lapointe, Michael R.

    1994-01-01

    A two-dimensional, two-temperature, single fluid MHD code was used to predict the performance of cylindrical, self-field magnetoplasmadynamic (MPD) thrusters operated with argon, lithium, and hydrogen propellants. A thruster stability equation was determined relating maximum stable J(sup 2)/m values to cylindrical thruster geometry and propellant species. The maximum value of J(sup 2)/m was found to scale as the inverse of the propellant molecular weight to the 0.57 power, in rough agreement with limited experimental data which scales as the inverse square root of the propellant molecular weight. A general equation which relates total thrust to electromagnetic thrust, propellant molecular weight, and J(sup 2)/m was determined using reported thrust values for argon and hydrogen and calculated thrust values for lithium. In addition to argon, lithium, and hydrogen, the equation accurately predicted thrust for ammonia at sufficiently high J(sup 2)/m values. A simple algorithm is suggested to aid in the preliminary design of cylindrical, self-field MPD thrusters. A brief example is presented to illustrate the use of the algorithm in the design of a low power MPD thruster.
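    The electromagnetic part of the thrust relation can be sketched with the classical Maecker formula for cylindrical self-field MPD thrusters; the paper's total-thrust correlation adds propellant-dependent terms that are not reproduced here.

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def maecker_em_thrust(J, r_anode, r_cathode):
    """Electromagnetic (self-field) thrust in newtons:
        T_em = (mu0 / (4 pi)) * J**2 * (ln(r_anode / r_cathode) + 3/4)
    for a cylindrical thruster carrying total current J (amperes)."""
    return (MU0 / (4.0 * math.pi)) * J ** 2 * (math.log(r_anode / r_cathode) + 0.75)
```

    The J² dependence is why the stability limit in the abstract is naturally expressed as a maximum J²/mdot rather than a maximum current alone.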

  18. To what extent does immigration affect inequality?

    NASA Astrophysics Data System (ADS)

    Berman, Yonatan; Aste, Tomaso

    2016-11-01

    The current surge in income and wealth inequality in most western countries, along with the continuous immigration to those countries, demands a quantitative analysis of the effect immigration has on economic inequality. This paper presents a quantitative analysis framework providing a way to calculate this effect. It shows that in most cases the effect of immigration on wealth and income inequality is limited, mainly due to the relatively small scale of immigration waves. For a large-scale flow of immigrants, such as the immigration to the US, the UK, and Australia in the past few decades, we estimate that 10%–15% of the wealth and income inequality increase can be attributed to immigration. The results demonstrate that immigration could possibly decrease inequality substantially, if the characteristics of the immigrants resemble the characteristics of the destination middle-class population in terms of wealth or income. We empirically found that the simple linear relation ΔS = 0.18ρ roughly describes the increase in the wealth share of the top 10% due to immigration of a fraction ρ of the population.
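    The accounting behind the result can be illustrated with a toy top-decile calculation: add a fraction ρ of the population with a chosen wealth level and recompute the top-10% share. The wealth values are synthetic and this is not the paper's full framework, but it reproduces both directions of the effect described above.

```python
def top_share(wealth, q=0.10):
    """Share of total wealth held by the top fraction q of the population."""
    w = sorted(wealth, reverse=True)
    k = max(1, int(round(q * len(w))))
    return sum(w[:k]) / sum(w)

def immigration_effect(wealth, rho, immigrant_wealth=0.0):
    """Change in the top-10% share after immigration of a fraction rho of
    the destination population, all arriving at immigrant_wealth."""
    merged = list(wealth) + [immigrant_wealth] * int(round(rho * len(wealth)))
    return top_share(merged) - top_share(wealth)
```

    Zero-wealth arrivals push the top share up, while arrivals near the middle of the destination distribution pull it slightly down, consistent with the abstract's observation about immigrants resembling the middle class.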

  19. DFLOWZ: A free program to evaluate the area potentially inundated by a debris flow

    NASA Astrophysics Data System (ADS)

    Berti, M.; Simoni, A.

    2014-06-01

    The transport and deposition mechanisms of debris flows are still poorly understood due to the complexity of the interactions governing the behavior of water-sediment mixtures. Empirical-statistical methods can therefore be used, instead of more sophisticated numerical methods, to predict the depositional behavior of these highly dangerous gravitational movements. We use widely accepted semi-empirical scaling relations and propose an automated procedure (DFLOWZ) to estimate the area potentially inundated by a debris flow event. Besides a digital elevation model (DEM), the procedure has only two input requirements: the debris flow volume and the possible flow path. The procedure is implemented in Matlab, and a graphical user interface helps to visualize initial conditions, flow propagation, and final results. Different hypotheses about the depositional behavior of an event can be tested, together with the possible effect of simple remedial measures. Uncertainties associated with the scaling relations can be treated and their impact on results evaluated. Our freeware application aims to facilitate and speed up the process of susceptibility mapping. We discuss the limits and advantages of the method in order to inform inexperienced users.

  20. Rough fibrils provide a toughening mechanism in biological fibers.

    PubMed

    Brown, Cameron P; Harnagea, Catalin; Gill, Harinderjit S; Price, Andrew J; Traversa, Enrico; Licoccia, Silvia; Rosei, Federico

    2012-03-27

    Spider silk is a fascinating natural composite material. Its combination of strength and toughness is unrivalled in nature, and as a result, it has gained considerable interest from the medical, physics, and materials communities. Most of this attention has focused on the one to tens of nanometer scale: predominantly the primary (peptide sequences) and secondary (β sheets, helices, and amorphous domains) structure, with some insights into tertiary structure (the arrangement of these secondary structures) to describe the origins of the mechanical and biological performance. Starting with spider silk, and relating our findings to collagen fibrils, we describe toughening mechanisms at the hundreds of nanometer scale, namely, the fibril morphology and its consequences for mechanical behavior and the dissipation of energy. Under normal conditions, this morphology creates nonslip fibril kinematics, restricting shearing between fibrils yet allowing controlled local slipping under high shear stress, dissipating energy without bulk fracturing. This mechanism provides a relatively simple target for biomimicry and, thus, can potentially be used to increase fracture resistance in synthetic materials. © 2012 American Chemical Society

  1. Outbreak statistics and scaling laws for externally driven epidemics.

    PubMed

    Singh, Sarabjeet; Myers, Christopher R

    2014-04-01

    Power-law scalings are ubiquitous in physical phenomena undergoing a continuous phase transition. The classic susceptible-infectious-recovered (SIR) model of epidemics is one such example, where the scaling behavior near a critical point has been studied extensively. In this system the distribution of outbreak sizes scales as P(n) ∼ n^(-3/2) at the critical point as the system size N becomes infinite. The finite-size scaling laws for the outbreak size and duration are also well understood and characterized. In this work, we report scaling laws for a model with SIR structure coupled with a constant force of infection per susceptible, akin to a "reservoir forcing". We find that the statistics of outbreaks in this system fundamentally differ from those in a simple SIR model. Instead of fixed exponents, all scaling laws exhibit tunable exponents parameterized by the dimensionless rate of external forcing. As the external driving rate approaches a critical value, the scale of the average outbreak size converges to that of the maximal size, and above the critical point the scaling laws bifurcate into two regimes. Whereas a simple SIR process can only exhibit outbreaks of size O(N^(1/3)) and O(N) depending on whether the system is at or above the epidemic threshold, a driven SIR process can exhibit a richer spectrum of outbreak sizes that scale as O(N^ξ), where ξ ∈ (0,1] \ {2/3}, and O((N/ln N)^(2/3)) at the multicritical point.
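    The critical P(n) ∼ n^(-3/2) statistics of the simple (undriven) SIR case can be reproduced with a branching-process caricature: each infectious individual draws a Poisson(R0) number of secondary cases. At R0 = 1 the probability of a size-1 outbreak is exactly e^(-1), and the size distribution develops the heavy critical tail. The external "reservoir forcing" of the paper is deliberately not modeled here.

```python
import math
import random

def poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def outbreak_size(rng, r0=1.0, cap=10_000):
    """Total progeny of a Galton-Watson outbreak with Poisson(r0)
    offspring per case, truncated at cap so critical runs stay finite."""
    size = active = 1
    while active and size < cap:
        new = sum(poisson(rng, r0) for _ in range(active))
        size += new
        active = new
    return min(size, cap)
```

    Sampling many outbreaks at r0 = 1 and histogramming the sizes on log-log axes shows the -3/2 slope up to the truncation scale.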

  2. On the finite length-scale of compressible shock-waves formed in free-surface flows of dry granular materials down a slope

    NASA Astrophysics Data System (ADS)

    Faug, Thierry

    2017-04-01

    The Rankine-Hugoniot jump conditions traditionally describe the theoretical relationship between the equilibrium states on both sides of a shock-wave. They are based on the crucial assumption that the length-scale needed to adjust from the equilibrium state upstream of the shock to that downstream of it is too small to be of significance to the problem. They are often used with success to describe shock-waves in a number of applications in both fluid and solid mechanics. However, relations based on jump conditions at singular surfaces may fail to capture some features of the shock-waves formed in complex materials, such as granular matter. This study addresses the particular problem of compressible shock-waves formed in flows of dry granular materials down a slope. This problem is relevant, for instance, to full-scale geophysical granular flows interacting with natural obstacles or man-made structures, such as topographical obstacles or mitigation dams respectively. Steady-state jumps formed in granular flows and travelling shock-waves produced at the impact of a granular avalanche-flow with a rigid wall are considered. For both situations, new analytical relations which do not assume that the granular shock-wave shrinks into a singular surface are derived, using balance equations for mass and momentum in their depth-averaged forms. These relations need additional inputs, however: closure relations for the size and shape of the shock-wave, and a relevant constitutive friction law. Small-scale laboratory tests and numerical simulations based on the discrete element method are briefly presented and used to infer the crucial information needed for the closure relations. This allows testing some predictive aspects of the simple analytical approach proposed for both steady-state and travelling shock-waves formed in free-surface flows of dry granular materials down a slope.
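    The singular-surface baseline that the paper generalizes is the classical depth-averaged jump condition; for a free-surface flow it reduces to the familiar sequent-depth relation, sketched here. Finite shock length, granular compressibility, and friction are exactly the effects this zero-length relation omits.

```python
import math

def sequent_depth_ratio(fr1):
    """Depth ratio h2/h1 across a zero-length jump, from depth-averaged
    mass and momentum balance at a singular surface:
        h2 / h1 = (sqrt(1 + 8 * Fr1**2) - 1) / 2,
    where Fr1 is the upstream Froude number."""
    return 0.5 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)
```

    A supercritical incoming flow (Fr1 > 1) thickens across the jump, a subcritical one would thin, and Fr1 = 1 leaves the depth unchanged; the paper's finite-length relations reduce to this limit when the shock's size closure collapses to zero.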

  3. Predicting km-scale shear zone formation

    NASA Astrophysics Data System (ADS)

    Gerbi, Christopher; Culshaw, Nicholas; Shulman, Deborah; Foley, Maura; Marsh, Jeffrey

    2015-04-01

    Because km-scale shear zones play a first-order role in lithospheric kinematics, accurate conceptual and numerical models of orogenic development require predicting when and where they form. Although a strain-based algorithm in the upper crust for weakening due to faulting appears to succeed (e.g., Koons et al., 2010, doi:10.1029/2009TC002463), a comparable general rule for the viscous crust remains unestablished. Here we consider two aspects of the geological argument for a similar algorithm in the viscous regime, namely (1) whether predicting km-scale shear zone development based on a single parameter (such as strain or shear heating) is reasonable; and (2) whether lithologic variability inherent in most orogenic systems precludes a simple predictive rule. A review of tectonically significant shear zones worldwide and more detailed investigations in the Central Gneiss belt of the Ontario segment of the Grenville Province reveals that most km-scale shear zones occur at lithological boundaries and involve mass transfer, but have fairly little else in common. As examples, the relatively flat-lying Twelve Mile Bay shear zone in the western Central Gneiss belt bounds the Parry Sound domain and is likely the product of both localized anatexis and later retrograde hydration with attendant metamorphism. Moderately dipping shear zones in granitoids of the Grenville Front Tectonic Zone apparently resulted from cooperation among several complementary microstructural processes, such as grain size reduction, enhanced diffusion, and a small degree of metamorphic reaction. Localization into shear zones requires the operation of some spatially restricted processes such as stress concentration, metamorphism/fluid access, textural evolution, and thermal perturbation. All of these could be due in part to strain, but not necessarily linearly related to strain. 
    Stress concentrations, such as those that form at rheological boundaries, may be sufficient to nucleate high strain gradients but are insufficient to maintain them, because the stress perturbations will dissipate with deformation. Metamorphism can unquestionably cause sufficient rheological change, but only in certain rock types: for example, granitoids have much less capacity for metamorphically induced rheologic change than do mafic rocks. The magnitude of phase geometry variation observed in natural systems suggests that morphological change (e.g., interconnection of weak phases) likely has little direct effect on strength changes, although other textural factors related to diffusion paths and crystallographic orientation could play a significant role. Thermal perturbation, mainly in the form of shear heating, remains potentially powerful but inconclusive. Taken together, these observations indicate that a simple algorithm predicting shear zone formation will not succeed in many geologically relevant instances. One significant reason may be that the inherent lithologic variation at the km scale, such as observed in the Central Gneiss belt, prevents the development of self-organized strain patterns that would form in more rheologically uniform systems.

  4. Aerosol Light Absorption and Scattering Assessments and the Impact of City Size on Air Pollution

    NASA Astrophysics Data System (ADS)

    Paredes-Miranda, Guadalupe

    The general problem of urban pollution and its relation to the city population is examined in this dissertation. A simple model suggests that pollutant concentrations should scale approximately with the square root of city population. This model and its experimental evaluation presented here serve as important guidelines for urban planning and attainment of air quality standards including the limits that air pollution places on city population. The model was evaluated using measurements of air pollution. Optical properties of aerosol pollutants such as light absorption and scattering plus chemical species mass concentrations were measured with a photoacoustic spectrometer, a reciprocal nephelometer, and an aerosol mass spectrometer in Mexico City in the context of the multinational project "Megacity Initiative: Local And Global Research Observations (MILAGRO)" in March 2006. Aerosol light absorption and scattering measurements were also obtained for Reno and Las Vegas, NV USA in December 2008-March 2009 and January-February 2003, respectively. In all three cities, the morning scattering peak occurs a few hours later than the absorption peak due to the formation of secondary photochemically produced aerosols. In particular, for Mexico City we determined the fraction of photochemically generated secondary aerosols to be about 75% of total aerosol mass concentration at its peak near midday. The simple 2-d box model suggests that commonly emitted primary air pollutant (e.g., black carbon) mass concentrations scale approximately as the square root of the urban population. This argument extends to the absorption coefficient, as it is approximately proportional to the black carbon mass concentration. Since urban secondary pollutants form through photochemical reactions involving primary precursors, in linear approximation their mass concentration also should scale with the square root of population. 
Therefore, the scattering coefficient, a proxy for particulate matter mass concentration, is also expected to scale the same way. Experimental data for five cities: Mexico City, Mexico; Las Vegas and Reno, NV, USA; Beijing, China; and Delhi, India (the data for the last two cities were obtained from the literature); are in reasonable accord with the model. The scaling relation provided by the model may be considered a useful metric depending on the assumption that specific city conditions (such as latitude, altitude, local meteorological conditions, degree of industrialization, population density, number of cars per capita, city shape, etc.) vary randomly, independent of city size. While more detailed studies (including data from more cities) are needed, we believe that this relatively weak dependence of the pollution concentration on the city population might help to explain why the worsening of urban air quality does not directly lead to a decrease in the rate of growth in city population.
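    The square-root scaling argument can be made concrete with a steady-state box model: emissions grow linearly with population P, while the crosswind width of the city grows as sqrt(P) at fixed population density, so the concentration C = Q / (u·H·W) scales as sqrt(P). All parameter values below are hypothetical placeholders.

```python
import math

def box_model_concentration(population, emission_per_capita=1.0e-6,
                            wind_speed=3.0, mixing_height=1000.0,
                            pop_density=5000.0):
    """Steady-state concentration in a 2-d box model of a square city:
        C = (e * P) / (u * H * W),  with crosswind width W = sqrt(P / density),
    so C is proportional to sqrt(P)."""
    width = math.sqrt(population / pop_density)
    return emission_per_capita * population / (wind_speed * mixing_height * width)
```

    Quadrupling the population thus only doubles the modeled concentration, which is the "relatively weak dependence" the dissertation tests against the five-city data set.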

  5. Advances in time-scale algorithms

    NASA Technical Reports Server (NTRS)

    Stein, S. R.

    1993-01-01

    The term clock is usually used to refer to a device that counts a nearly periodic signal. A group of clocks, called an ensemble, is often used for time keeping in mission critical applications that cannot tolerate loss of time due to the failure of a single clock. The time generated by the ensemble of clocks is called a time scale. The question arises how to combine the times of the individual clocks to form the time scale. One might naively be tempted to suggest the expedient of averaging the times of the individual clocks, but a simple thought experiment demonstrates the inadequacy of this approach. Suppose a time scale is composed of two noiseless clocks having equal and opposite frequencies. The mean time scale has zero frequency. However, if either clock fails, the time-scale frequency immediately changes to the frequency of the remaining clock. This performance is generally unacceptable, and simple mean time scales are not used. First, previous time-scale developments are reviewed, and then some new methods that result in enhanced performance are presented. The historical perspective is based upon several time scales: the AT1 and TA time scales of the National Institute of Standards and Technology (NIST), the A.1(MEAN) time scale of the US Naval Observatory (USNO), the TAI time scale of the Bureau International des Poids et Mesures (BIPM), and the KAS-1 time scale of the Naval Research Laboratory (NRL). The new method was incorporated in the KAS-2 time scale recently developed by Timing Solutions Corporation. The goal is to present time-scale concepts in a nonmathematical form with as few equations as possible. Many other papers and texts discuss the details of the optimal estimation techniques that may be used to implement these concepts.
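    The two-clock thought experiment, and the fix that ensemble algorithms in the AT1 family use, subtracting each clock's predicted rate before averaging, can be sketched as follows. This is a minimal caricature for illustration, not any laboratory's actual algorithm.

```python
def naive_mean_rate(rates, alive):
    """Frequency of a time scale formed by simply averaging the rates of
    whichever clocks survive -- the scheme the thought experiment rules out."""
    live = [r for r, ok in zip(rates, alive) if ok]
    return sum(live) / len(live)

def predicted_deviation_rate(rates, predicted, weights, alive):
    """AT1-style update: average each surviving clock's deviation from its
    own predicted rate, so a clock failure does not jump the scale."""
    num = sum(w * (r - p)
              for r, p, w, ok in zip(rates, predicted, weights, alive) if ok)
    den = sum(w for w, ok in zip(weights, alive) if ok)
    return num / den

rates = [+1.0e-13, -1.0e-13]       # two noiseless clocks, equal and opposite
predicted = [+1.0e-13, -1.0e-13]   # rates learned while both clocks ran
```

    When clock 2 fails, the naive mean jumps from zero to +1e-13, while the deviation-based scale stays at zero because the survivor's known rate is subtracted out.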

  6. Effect of Varying the 1-4 Intramolecular Scaling Factor in Atomistic Simulations of Long-Chain N-alkanes with the OPLS-AA Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    de Almeida, Valmor F; Ye, Xianggui; Cui, Shengting

    2013-01-01

    A comprehensive molecular dynamics simulation study of n-alkanes using the Optimized Potential for Liquid Simulation-All Atoms (OPLS-AA) force field at ambient conditions has been performed. Our results indicate that while simulations with the OPLS-AA force field accurately predict the liquid-state mass density for n-alkanes with carbon number equal to or less than 10, for n-alkanes with carbon number equal to or exceeding 12, the OPLS-AA force field with the standard scaling factor for the 1-4 intramolecular van der Waals and electrostatic interactions gives rise to a quasi-crystalline structure. We found that accurate predictions of the liquid-state properties are obtained by successively reducing the aforementioned scaling factor for each increase of the carbon number beyond n-dodecane. To better understand the effects of reducing the scaling factor, we analyzed the variation of the torsion potential profile with the scaling factor, and the corresponding impact on the gauche-trans conformer distribution, heat of vaporization, melting point, and self-diffusion coefficient for n-dodecane. This relatively simple procedure thus allows for more accurate predictions of the thermo-physical properties of longer n-alkanes.

  7. Constraints and consequences of reducing small scale structure via large dark matter-neutrino interactions

    DOE PAGES

    Bertoni, Bridget; Ipek, Seyda; McKeen, David; ...

    2015-04-30

    Here, cold dark matter explains a wide range of data on cosmological scales. However, there has been a steady accumulation of evidence for discrepancies between simulations and observations at scales smaller than galaxy clusters. One promising way to affect structure formation on small scales is a relatively strong coupling of dark matter to neutrinos. We construct an experimentally viable, simple, renormalizable model with new interactions between neutrinos and dark matter and provide the first discussion of how these new dark matter-neutrino interactions affect neutrino phenomenology. We show that addressing the small-scale structure problems requires asymmetric dark matter with a mass of tens of MeV. Generating a sufficiently large dark matter-neutrino coupling requires a new heavy neutrino with a mass around 100 MeV. The heavy neutrino is mostly sterile but has a substantial τ-neutrino component, while the three nearly massless neutrinos are partly sterile. This model can be tested by future astrophysical, particle physics, and neutrino oscillation data. Promising signatures of this model include alterations to the neutrino energy spectrum and flavor content observed from a future nearby supernova, anomalous matter effects in neutrino oscillations, and a component of the τ neutrino with mass around 100 MeV.

  8. Simple scale for assessing level of dependency of patients in general practice.

    PubMed Central

    Willis, J

    1986-01-01

    A rating scale has been designed for assessing the degree of dependency of patients in general practice. An analysis of the elderly and disabled patients in a two doctor practice is given as an example of its use and simplicity. PMID:3087556

  9. Quantitative Evaluation of Musical Scale Tunings

    ERIC Educational Resources Information Center

    Hall, Donald E.

    1974-01-01

    The acoustical and mathematical basis of the problem of tuning the twelve-tone chromatic scale is reviewed. A quantitative measurement showing how well any tuning succeeds in providing just intonation for any specific piece of music is explained and applied to musical examples using a simple computer program. (DT)
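    One simple version of such a quantitative measure is the deviation, in cents, of each equal-tempered interval from the just-intonation ratio it approximates. The sketch below is illustrative, not the paper's actual formula; the just ratios listed are the standard ones.

```python
import math

# Sketch: deviation (in cents) of equal-tempered intervals from just ratios.
# The just ratios below are standard; the paper's exact metric is not shown here.

JUST_RATIOS = {
    4: 5 / 4,   # major third
    5: 4 / 3,   # perfect fourth
    7: 3 / 2,   # perfect fifth
}

def cents(ratio):
    """Interval size in cents; 1200 cents per octave."""
    return 1200 * math.log2(ratio)

for semitones in sorted(JUST_RATIOS):
    tempered = cents(2 ** (semitones / 12))   # exactly 100 cents per semitone
    deviation = tempered - cents(JUST_RATIOS[semitones])
    print(f"{semitones:2d} semitones: {deviation:+6.2f} cents")
# The tempered fifth is ~2 cents narrow, while the major third is ~14 cents
# wide -- the sort of discrepancy a tuning-quality metric must aggregate.
```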

  10. Moving on up: Can Results from Simple Aquatic Mesocosm Experiments be Applied Across Broad Spatial Scales?

    EPA Science Inventory

    1. Aquatic ecologists use mesocosm experiments to understand mechanisms driving ecological processes. Comparisons across experiments, and extrapolations to larger scales, are complicated by the use of mesocosms with varying dimensions. We conducted a mesocosm experiment over a vo...

  11. Urban scaling in Europe

    PubMed Central

    Bettencourt, Luís M. A.; Lobo, José

    2016-01-01

    Over the last few decades, in disciplines as diverse as economics, geography and complex systems, a perspective has arisen proposing that many properties of cities are quantitatively predictable due to agglomeration or scaling effects. Using new harmonized definitions for functional urban areas, we examine to what extent these ideas apply to European cities. We show that while most large urban systems in Western Europe (France, Germany, Italy, Spain, UK) approximately agree with theoretical expectations, the small number of cities in each nation and their natural variability preclude drawing strong conclusions. We demonstrate how this problem can be overcome so that cities from different urban systems can be pooled together to construct larger datasets. This leads to a simple statistical procedure to identify urban scaling relations, which then clearly emerge as a property of European cities. We compare the predictions of urban scaling to Zipf's law for the size distribution of cities and show that while the former holds well the latter is a poor descriptor of European cities. We conclude with scenarios for the size and properties of future pan-European megacities and their implications for the economic productivity, technological sophistication and regional inequalities of an integrated European urban system. PMID:26984190
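    Urban scaling relations take the form Y = Y0·N^β for city population N, with β estimated by a fit in log-log space. A minimal sketch with synthetic, noiseless data (β = 1.15 is a typical superlinear value for economic outputs, not a number from this paper):

```python
import math

# Synthetic-data sketch of identifying an urban scaling relation Y = Y0 * N**beta
# via ordinary least squares in log-log space (beta = 1.15 is illustrative).

def fit_power_law(populations, outputs):
    """Return (log_Y0, beta) from a least-squares fit of log Y against log N."""
    xs = [math.log(n) for n in populations]
    ys = [math.log(y) for y in outputs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
    return my - beta * mx, beta

# Five synthetic "cities" generated exactly on the scaling law.
pops = [1e5, 3e5, 1e6, 3e6, 1e7]
gdp = [10.0 * n ** 1.15 for n in pops]
_, beta = fit_power_law(pops, gdp)
print(round(beta, 3))  # 1.15
```

    Pooling cities from several national urban systems, as the paper proposes, amounts to fitting this regression on a larger combined sample.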

  12. Nonlinear evolution of f(R) cosmologies. II. Power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oyaizu, Hiroaki; Hu, Wayne; Department of Astronomy and Astrophysics, University of Chicago, Chicago Illinois 60637

    2008-12-15

    We carry out a suite of cosmological simulations of modified-action f(R) models where cosmic acceleration arises from an alteration of gravity instead of dark energy. These models introduce an extra scalar degree of freedom which enhances the force of gravity below the inverse mass, or Compton, scale of the scalar. The simulations exhibit the so-called chameleon mechanism, necessary for satisfying local constraints on gravity, where this scale depends on environment, in particular the depth of the local gravitational potential. We find that the chameleon mechanism can substantially suppress the enhancement of the power spectrum in the nonlinear regime if the background field value is comparable to or smaller than the depth of the gravitational potentials of typical structures. Nonetheless, power spectrum enhancements at intermediate scales remain at a measurable level even for models whose expansion history is indistinguishable from a cosmological constant, cold dark matter model. Simple scaling relations that take the linear power spectrum into a nonlinear spectrum fail to capture the modifications of f(R) due to the change in collapsed structures, the chameleon mechanism, and the time evolution of the modifications.

  13. Measuring Networking as an Outcome Variable in Undergraduate Research Experiences

    PubMed Central

    Hanauer, David I.; Hatfull, Graham

    2015-01-01

    The aim of this paper is to propose, present, and validate a simple survey instrument to measure student conversational networking. The tool consists of five items that cover personal and professional social networks, and its basic principle is the self-reporting of degrees of conversation, with a range of specific discussion partners. The networking instrument was validated in three studies. The basic psychometric characteristics of the scales were established by conducting a factor analysis and evaluating internal consistency using Cronbach’s alpha. The second study used a known-groups comparison and involved comparing outcomes for networking scales between two different undergraduate laboratory courses (one involving a specific effort to enhance networking). The final study looked at potential relationships between specific networking items and the established psychosocial variable of project ownership through a series of binary logistic regressions. Overall, the data from the three studies indicate that the networking scales have high internal consistency (α = 0.88), consist of a unitary dimension, can significantly differentiate between research experiences with low and high networking designs, and are related to project ownership scales. The ramifications of the networking instrument for student retention, the enhancement of public scientific literacy, and the differentiation of laboratory courses are discussed. PMID:26538387
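    Cronbach's alpha, used above to establish internal consistency, is straightforward to compute: α = k/(k−1)·(1 − Σ var(item)/var(total)). The sketch below uses invented five-item, four-respondent data, not the study's; only the formula is standard.

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
# The item scores below are invented Likert-type data for illustration only.

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores per respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Five hypothetical networking items, four respondents.
items = [
    [4, 2, 5, 1],
    [5, 2, 4, 1],
    [4, 3, 5, 2],
    [5, 1, 4, 1],
    [3, 2, 5, 2],
]
print(round(cronbach_alpha(items), 2))  # 0.96
```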

  14. Validation of a simple disease-specific, quality-of-life measure for diabetic polyneuropathy: CAPPRI.

    PubMed

    Gwathmey, Kelly G; Sadjadi, Reza; Horton, William B; Conaway, Mark R; Barnett-Tapia, Carolina; Bril, Vera; Russell, James W; Shaibani, Aziz; Mauermann, Michelle L; Hehir, Michael K; Kolb, Noah; Guptill, Jeffrey; Hobson-Webb, Lisa; Gable, Karissa; Raja, Shruti; Silvestri, Nicholas; Wolfe, Gil I; Smith, A Gordon; Malik, Rabia; Traub, Rebecca; Joshi, Amruta; Elliott, Matthew P; Jones, Sarah; Burns, Ted M

    2018-06-05

    We studied the performance of a 15-item, health-related quality-of-life polyneuropathy scale in the clinic setting in patients with diabetic distal sensorimotor polyneuropathy (DSPN). Patients with DSPN from 11 academic sites completed a total of 231 Chronic Acquired Polyneuropathy Patient-Reported Index (CAPPRI) scales during their clinic visits. Conventional and modern psychometric analyses were performed on the completed forms. Conventional and modern analyses generally indicated excellent psychometric properties of the CAPPRI in patients with DSPN. For example, the CAPPRI demonstrated unidimensionality and performed like an interval-level scale. Attributes of the CAPPRI for DSPN include ease of use and interpretation; unidimensionality, allowing scores to be summed; adequate coverage of disease severity; and the scale's ability to address relevant life domains. Furthermore, the CAPPRI is free and in the public domain. The CAPPRI may assist the clinician and patient with DSPN in estimating disease-specific quality of life, especially in terms of pain, sleep, psychological well-being, and everyday function. The CAPPRI may be most useful in the everyday clinical setting but merits further study in this setting, as well as the clinical trial setting. © 2018 American Academy of Neurology.

  15. Hierarchical formation of dark matter halos and the free streaming scale

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishiyama, Tomoaki, E-mail: ishiyama@ccs.tsukuba.ac.jp

    2014-06-10

    The smallest dark matter halos are formed first in the early universe. According to recent studies, the central density cusp is much steeper in these halos than in larger halos and scales as ρ ∝ r^−α, with α between 1.3 and 1.5. We present the results of very large cosmological N-body simulations of the hierarchical formation and evolution of halos over a wide mass range, beginning from the formation of the smallest halos. We confirmed earlier studies finding that the inner density cusps are steeper in halos at the free-streaming scale. The cusp slope gradually becomes shallower as the halo mass increases. The slope for halos 50 times more massive than the smallest halo is approximately −1.3. No strong correlation exists between the inner slope and the collapse epoch. The cusp slope of halos above the free-streaming scale seems to be reduced primarily by major merger processes. The concentration, estimated for the present universe, is predicted to be 60-70, consistent with theoretical models and earlier simulations, and ruling out simple power-law mass-concentration relations. Microhalos could still exist in the present universe with the same steep density profiles.

  16. Predictive power of food web models based on body size decreases with trophic complexity.

    PubMed

    Jonsson, Tomas; Kaartinen, Riikka; Jonsson, Mattias; Bommarco, Riccardo

    2018-05-01

    Food web models parameterised using body size show promise to predict trophic interaction strengths (IS) and abundance dynamics. However, this remains to be rigorously tested in food webs beyond simple trophic modules, where indirect and intraguild interactions could be important and driven by traits other than body size. We systematically varied predator body size, guild composition and richness in microcosm insect webs and compared experimental outcomes with predictions of IS from models with allometrically scaled parameters. Body size was a strong predictor of IS in simple modules (r² = 0.92), but with increasing complexity the predictive power decreased, with model IS consistently overestimated. We quantify the strength of observed trophic interaction modifications, partition this into density-mediated vs. behaviour-mediated indirect effects and show that the model's shortcomings in predicting IS are related to the size of behaviour-mediated effects. Our findings encourage development of dynamical food web models explicitly including and exploring indirect mechanisms. © 2018 John Wiley & Sons Ltd/CNRS.

  17. Interferometrically enhanced sub-terahertz picosecond imaging utilizing a miniature collapsing-field-domain source

    NASA Astrophysics Data System (ADS)

    Vainshtein, Sergey N.; Duan, Guoyong; Mikhnev, Valeri A.; Zemlyakov, Valery E.; Egorkin, Vladimir I.; Kalyuzhnyy, Nikolay A.; Maleev, Nikolai A.; Näpänkangas, Juha; Sequeiros, Roberto Blanco; Kostamovaara, Juha T.

    2018-05-01

    Progress in terahertz spectroscopy and imaging is mostly associated with femtosecond laser-driven systems, while solid-state sources, mainly sub-millimetre integrated circuits, are still in an early development phase. As simple and cost-efficient an emitter as a Gunn oscillator could cause a breakthrough in the field, provided its frequency limitations could be overcome. Proposed here is an application of the recently discovered collapsing field domains effect that permits sub-THz oscillations in sub-micron semiconductor layers thanks to nanometer-scale powerfully ionizing domains arising due to negative differential mobility in extreme fields. This shifts the frequency limit by an order of magnitude relative to the conventional Gunn effect. Our first miniature picosecond pulsed sources cover the 100-200 GHz band and promise milliwatts up to ˜500 GHz. Thanks to the method of interferometrically enhanced time-domain imaging proposed here and the low single-shot jitter of ˜1 ps, our simple imaging system provides sufficient time-domain imaging contrast for fresh-tissue terahertz histology.

  18. Recoiling from a Kick in the Head-On Case

    NASA Technical Reports Server (NTRS)

    Choi, Dae-Il; Kelly, Bernard J.; Boggs, William D.; Baker, John G.; Centrella, Joan; Van Meter, James

    2007-01-01

    Recoil "kicks" induced by gravitational radiation are expected in the inspiral and merger of black holes. Recently the numerical relativity community has begun to measure the significant kicks found when both unequal masses and spins are considered. Because understanding the cause and magnitude of each component of this kick may be complicated in inspiral simulations, we consider these effects in the context of a simple test problem. We study recoils from collisions of binaries with initially head-on trajectories, starting with the simplest case of equal masses with no spin; adding spin and varying the mass ratio, both separately and jointly. We find spin-induced recoils to be significant even in head-on configurations. Additionally, it appears that the scaling of transverse kicks with spins is consistent with post-Newtonian (PN) theory, even though the kick is generated in the nonlinear merger interaction, where PN theory should not apply. This suggests that a simple heuristic description might be effective in the estimation of spin-kicks.

  19. A simple method to determine evaporation and compensate for liquid losses in small-scale cell culture systems.

    PubMed

    Wiegmann, Vincent; Martinez, Cristina Bernal; Baganz, Frank

    2018-04-24

    Establish a method to indirectly measure evaporation in microwell-based cell culture systems and show that the proposed method allows compensating for liquid losses in fed-batch processes. A correlation between evaporation and the concentration of Na⁺ was found (R² = 0.95) when using the 24-well-based miniature bioreactor system (micro-Matrix) for a batch culture with GS-CHO. Based on these results, a method was developed to counteract evaporation with periodic water additions based on measurements of the Na⁺ concentration. Implementation of this method resulted in a reduction of the relative liquid loss after 15 days of a fed-batch cultivation from 36.7 ± 6.7% without volume corrections to 6.9 ± 6.5% with volume corrections. A procedure was established to indirectly measure evaporation through a correlation with the level of Na⁺ ions in solution, and a simple formula was derived to account for liquid losses.
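    The volume-correction idea can be sketched as follows, under the assumption that Na⁺ is conserved in the well, so the liquid volume scales inversely with its measured concentration. The function name and all numbers are hypothetical; the paper's actual formula may differ.

```python
# Hedged sketch of Na+-based volume correction (assumed relation, not the
# paper's exact formula): if Na+ is conserved in the well, then
#   V_current = V_initial * [Na+]_initial / [Na+]_measured.

def water_addition(v_initial_ml, na_initial_mm, na_measured_mm):
    """Water volume (mL) to add to restore the initial working volume."""
    v_current = v_initial_ml * na_initial_mm / na_measured_mm
    return v_initial_ml - v_current

# Example: a 2 mL well whose Na+ reading rose from 100 mM to 125 mM has lost
# 20% of its liquid; 0.4 mL of water restores the nominal volume.
print(round(water_addition(2.0, 100.0, 125.0), 3))  # 0.4
```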

  20. Forgetting in immediate serial recall: decay, temporal distinctiveness, or interference?

    PubMed

    Oberauer, Klaus; Lewandowsky, Stephan

    2008-07-01

    Three hypotheses of forgetting from immediate memory were tested: time-based decay, decreasing temporal distinctiveness, and interference. The hypotheses were represented by 3 models of serial recall: the primacy model, the SIMPLE (scale-independent memory, perception, and learning) model, and the SOB (serial order in a box) model, respectively. The models were fit to 2 experiments investigating the effect of filled delays between items at encoding or at recall. Short delays between items, filled with articulatory suppression, led to massive impairment of memory relative to a no-delay baseline. Extending the delays had little additional effect, suggesting that the passage of time alone does not cause forgetting. Adding a choice reaction task in the delay periods to block attention-based rehearsal did not change these results. The interference-based SOB fit the data best; the primacy model overpredicted the effect of lengthening delays, and SIMPLE was unable to explain the effect of delays at encoding. The authors conclude that purely temporal views of forgetting are inadequate. Copyright (c) 2008 APA, all rights reserved.
