DEVELOPMENT OF STANDARDIZED LARGE RIVER BIOASSESSMENT PROTOCOLS (LR-BP) FOR FISH ASSEMBLAGES
We conducted research comparing several methods currently in use for the bioassessment and monitoring of fish and benthic macroinvertebrate assemblages of large rivers. Fish data demonstrate that electrofishing 1000 m of shoreline is sufficient for bioassessments on boatable riv...
A COMPARISON OF SIX BENTHIC MACROINVERTEBRATE SAMPLING METHODS IN FOUR LARGE RIVERS
In 1999, a study was conducted to compare six macroinvertebrate sampling methods in four large (boatable) rivers that drain into the Ohio River. Two methods each were adapted from existing methods used by the USEPA, USGS and Ohio EPA. Drift nets were unable to collect a suffici...
Classification of large-sized hyperspectral imagery using fast machine learning algorithms
NASA Astrophysics Data System (ADS)
Xia, Junshi; Yokoya, Naoto; Iwasaki, Akira
2017-07-01
We present a framework of fast machine learning algorithms for the classification of large hyperspectral images, from the theoretical to the practical viewpoint. In particular, we assess the performance of random forest (RF), rotation forest (RoF), extreme learning machine (ELM), and ensembles of RF and ELM. These classifiers are applied to two large hyperspectral images and compared to support vector machines. To give a quantitative analysis, we pay attention to comparing these methods when working with high input dimensions and a limited/sufficient training set. Moreover, other important issues such as computational cost and robustness against noise are also discussed.
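A minimal sketch of the kind of comparison described in the abstract above, using scikit-learn on synthetic data (my own illustration, not the authors' code): rotation forests and extreme learning machines are not part of scikit-learn, so only a random forest and an RBF-kernel SVM are compared, and the dataset dimensions, class count, and hyperparameters are assumptions chosen to mimic a high-dimensional, scarcely labelled hyperspectral problem.

```python
# Illustrative comparison (not the authors' code): random forest vs. SVM on synthetic
# "hyperspectral-like" data with many input dimensions and a small training set.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# ~200 spectral bands, 5 land-cover classes, only a small fraction of labelled pixels
X, y = make_classification(n_samples=2000, n_features=200, n_informative=50,
                           n_classes=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.05,
                                                    stratify=y, random_state=0)

for name, clf in [("random forest", RandomForestClassifier(n_estimators=200, n_jobs=-1)),
                  ("SVM (RBF)", SVC(kernel="rbf", C=10.0, gamma="scale"))]:
    t0 = time.time()
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name}: accuracy={acc:.3f}, train+test time={time.time() - t0:.2f}s")
```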
Limitations and tradeoffs in synchronization of large-scale networks with uncertain links
Diwadkar, Amit; Vaidya, Umesh
2016-01-01
The synchronization of nonlinear systems connected over large-scale networks has gained popularity in a variety of applications, such as power grids, sensor networks, and biology. Stochastic uncertainty in the interconnections is a ubiquitous phenomenon observed in these physical and biological networks. We provide a size-independent network sufficient condition for the synchronization of scalar nonlinear systems with stochastic linear interactions over large-scale networks. This sufficient condition, expressed in terms of nonlinear dynamics, the Laplacian eigenvalues of the nominal interconnections, and the variance and location of the stochastic uncertainty, allows us to define a synchronization margin. We provide an analytical characterization of important trade-offs between the internal nonlinear dynamics, network topology, and uncertainty in synchronization. For nearest neighbour networks, the existence of an optimal number of neighbours with a maximum synchronization margin is demonstrated. An analytical formula for the optimal gain that produces the maximum synchronization margin allows us to compare the synchronization properties of various complex network topologies. PMID:27067994
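The paper's synchronization margin is expressed through the Laplacian eigenvalues of the nominal interconnection. A minimal sketch (assumed ring topology and unit coupling, not the paper's formula) that computes those eigenvalues for a k-nearest-neighbour network, i.e. the ingredients such a size-independent condition would be evaluated on:

```python
# Minimal sketch (assumptions, not the paper's margin formula): graph Laplacian of a
# ring of n nodes, each coupled to its k nearest neighbours on either side, and the
# eigenvalue range on which a synchronization condition of this kind would be checked.
import numpy as np

def ring_laplacian(n, k):
    A = np.zeros((n, n))
    for i in range(n):
        for d in range(1, k + 1):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    return np.diag(A.sum(axis=1)) - A

for k in (1, 2, 5, 10, 20):
    lam = np.sort(np.linalg.eigvalsh(ring_laplacian(100, k)))
    print(f"k={k:2d}: lambda_2={lam[1]:.3f}, lambda_max={lam[-1]:.3f}")
```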
Large eddy simulation of fine water sprays: comparative analysis of two models and computer codes
NASA Astrophysics Data System (ADS)
Tsoy, A. S.; Snegirev, A. Yu.
2015-09-01
The FDS model and computer code, although widely used in engineering practice to predict fire development, are not sufficiently validated for fire suppression by fine water sprays. In this work, the effect of the numerical resolution of the large-scale turbulent pulsations on the accuracy of the predicted time-averaged spray parameters is evaluated. Comparison of the simulation results obtained with the two versions of the model and code, as well as of the predicted and measured radial distributions of the liquid flow rate, revealed the need to apply monotonic and yet sufficiently accurate discrete approximations of the convective terms. Failure to do so delays jet break-up, otherwise induced by large turbulent eddies, and thereby excessively focuses the predicted flow around its axis. The effect of the pressure drop in the spray nozzle is also examined; its increase is shown to cause only a weak increase of the evaporated fraction and vapor concentration despite the significant increase of flow velocity.
Characterizing dispersal patterns in a threatened seabird with limited genetic structure
Laurie A. Hall; Per J. Palsboll; Steven R. Beissinger; James T. Harvey; Martine Berube; Martin G. Raphael; Kim Nelson; Richard T. Golightly; Laura McFarlane-Tranquilla; Scott H. Newman; M. Zachariah Peery
2009-01-01
Genetic assignment methods provide an appealing approach for characterizing dispersal patterns on ecological time scales, but require sufficient genetic differentiation to accurately identify migrants and a large enough sample size of migrants to, for example, compare dispersal between sexes or age classes. We demonstrate that assignment methods can be rigorously used...
Lafontaine, Sean J V; Sawada, M; Kristjansson, Elizabeth
2017-02-16
With the expansion and growth of research on neighbourhood characteristics, there is an increased need for direct observational field audits. Herein, we introduce a novel direct observational audit method and systematic social observation instrument (SSOI) for efficiently assessing neighbourhood aesthetics over large urban areas. Our audit method uses spatial random sampling stratified by residential zoning and incorporates both mobile geographic information systems technology and virtual environments. The reliability of our method was tested in two ways: first, in 15 Ottawa neighbourhoods, we compared results at audited locations over two subsequent years; and second, we audited every residential block (167 blocks) in one neighbourhood and compared the distribution of SSOI aesthetics index scores with results from the randomly audited locations. Finally, we present interrater reliability and consistency results for all observed items. The observed neighbourhood average aesthetics index score estimated from four or five stratified random audit locations is sufficient to characterize the average neighbourhood aesthetics. The SSOI was internally consistent and demonstrated good to excellent interrater reliability. At the neighbourhood level, aesthetics is positively related to SES and physical activity and negatively correlated with BMI. The proposed approach to direct neighbourhood auditing performs sufficiently well and has the advantage of financial and temporal efficiency when auditing a large city.
Tidal Dissipation Compared To Seismic Dissipation: In Small Bodies, Earths, And Super-Earths
2012-02-20
The Astrophysical Journal, 746:150 (20pp), 2012 February 20, doi:10.1088/0004-637X/746/2/150. © 2012 The American Astronomical Society. Excerpt: "...becomes more complex, though for sufficiently large super-Earths the same rule applies: the larger the planet, the weaker the tidal dissipation in it... damping becomes very considerable for large exoplanets (super-Earths); in those, it is much lower than what one might expect from using a seismic..."
Exploiting Universality in Atoms with Large Scattering Lengths
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braaten, Eric
2012-05-31
The focus of this research project was atoms with scattering lengths that are large compared to the range of their interactions and which therefore exhibit universal behavior at sufficiently low energies. Recent dramatic advances in cooling atoms and in manipulating their scattering lengths have made this phenomenon of practical importance for controlling ultracold atoms and molecules. This research project was aimed at developing a systematically improvable method for calculating few-body observables for atoms with large scattering lengths starting from the universal results as a first approximation. Significant progress towards this goal was made during the five years of the project.
Thermal evaluation of advanced solar dynamic heat receiver performance
NASA Technical Reports Server (NTRS)
Crane, Roger A.
1989-01-01
The thermal performance of a variety of concepts for thermal energy storage as applied to solar dynamic applications is discussed. It is recognized that designs producing large thermal gradients or large temperature swings during orbit are susceptible to early mechanical failure. Concepts incorporating heat pipe technology may encounter operational limitations over sufficiently large ranges. A review of the thermal performance of the basic designs allows the relative merits of the basic concepts to be compared. In addition, the effect of thermal enhancement and metal utilization as applied to each design provides a partial characterization of the performance improvements to be achieved by developing these technologies.
Twisted versus braided magnetic flux ropes in coronal geometry. II. Comparative behaviour
NASA Astrophysics Data System (ADS)
Prior, C.; Yeates, A. R.
2016-06-01
Aims: Sigmoidal structures in the solar corona are commonly associated with magnetic flux ropes whose magnetic field lines are twisted about a mutual axis. Their dynamical evolution is well studied, with sufficient twisting leading to large-scale rotation (writhing) and vertical expansion, possibly leading to ejection. Here, we investigate the behaviour of flux ropes whose field lines have more complex entangled/braided configurations. Our hypothesis is that this internal structure will inhibit the large-scale morphological changes. Additionally, we investigate the influence of the background field within which the rope is embedded. Methods: A technique for generating tubular magnetic fields with arbitrary axial geometry and internal structure, introduced in part I of this study, provides the initial conditions for resistive-MHD simulations. The tubular fields are embedded in a linear force-free background, and we consider various internal structures for the tubular field, including both twisted and braided topologies. These embedded flux ropes are then evolved using a 3D MHD code. Results: Firstly, in a background where twisted flux ropes evolve through the expected non-linear writhing and vertical expansion, we find that flux ropes with sufficiently braided/entangled interiors show no such large-scale changes. Secondly, embedding a twisted flux rope in a background field with a sigmoidal inversion line leads to eventual reversal of the large-scale rotation. Thirdly, in some cases a braided flux rope splits due to reconnection into two twisted flux ropes of opposing chirality - a phenomenon previously observed in cylindrical configurations. Conclusions: Sufficiently complex entanglement of the magnetic field lines within a flux rope can suppress large-scale morphological changes of its axis, with magnetic energy reduced instead through reconnection and expansion. The structure of the background magnetic field can significantly affect the changing morphology of a flux rope.
ERIC Educational Resources Information Center
Hajat, Anjum; Lucas, Jacqueline B.; Kington, Raynard
In this report, various health measures are compared across Hispanic subgroups in the United States. National Health Interview Survey (NHIS) data aggregated from 1992 through 1995 were analyzed. NHIS is one of the few national surveys with a sample sufficiently large to allow such comparisons. Both age-adjusted and unadjusted estimates…
Trapped waves on the mid-latitude β-plane
NASA Astrophysics Data System (ADS)
Paldor, Nathan; Sigalov, Andrey
2008-08-01
A new type of approximate solutions of the Linearized Shallow Water Equations (LSWE) on the mid-latitude β-plane, zonally propagating trapped waves with Airy-like latitude-dependent amplitude, is constructed in this work, for sufficiently small radius of deformation. In contrast to harmonic Poincare and Rossby waves, these newly found trapped waves vanish fast in the positive half-axis, and their zonal phase speed is larger than that of the corresponding harmonic waves for sufficiently large meridional domains. Our analysis implies that due to the smaller radius of deformation in the ocean compared with that in the atmosphere, the trapped waves are relevant to observations in the ocean whereas harmonic waves typify atmospheric observations. The increase in the zonal phase speed of trapped Rossby waves compared with that of harmonic ones is consistent with recent observations that showed that Sea Surface Height features propagated westwards faster than the phase speed of harmonic Rossby waves.
The generation of gravitational waves. III - Derivation of bremsstrahlung formulae
NASA Technical Reports Server (NTRS)
Kovacs, S. J.; Thorne, K. S.
1977-01-01
Formulas are derived describing the gravitational waves produced by a stellar encounter of the following type. The two stars have stationary (i.e., nonpulsating) nearly Newtonian structures with arbitrary relative masses; they fly past each other with an arbitrary relative velocity; and their impact parameter is sufficiently large that they gravitationally deflect each other through an angle that is small as compared with 90 deg.
Direct Demonstration of the Concept of Unrestricted Effective-Medium Approximation
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Zhanna M.; Zakharova, Nadezhda T.
2014-01-01
The modified unrestricted effective-medium refractive index is defined as one that yields accurate values of a representative set of far-field scattering characteristics (including the scattering matrix) for an object made of randomly heterogeneous materials. We validate the concept of the modified unrestricted effective-medium refractive index by comparing numerically exact superposition T-matrix results for a spherical host randomly filled with a large number of identical small inclusions and Lorenz-Mie results for a homogeneous spherical counterpart. A remarkable quantitative agreement between the superposition T-matrix and Lorenz-Mie scattering matrices over the entire range of scattering angles demonstrates unequivocally that the modified unrestricted effective-medium refractive index is a sound (albeit still phenomenological) concept provided that the size parameter of the inclusions is sufficiently small and their number is sufficiently large. Furthermore, it appears that in cases when the concept of the modified unrestricted effective-medium refractive index works, its actual value is close to that predicted by the Maxwell-Garnett mixing rule.
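For reference, the Maxwell-Garnett mixing rule cited in the abstract above can be evaluated directly. A minimal sketch with assumed, illustrative refractive indices and volume fraction (not the paper's superposition T-matrix computation):

```python
# Standard Maxwell-Garnett mixing rule (illustration only): effective permittivity of a
# host containing a volume fraction f of small spherical inclusions.
import numpy as np

def maxwell_garnett(eps_incl, eps_host, f):
    """Effective permittivity for volume fraction f of small spherical inclusions."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Assumed example values: absorbing inclusions in a water-like host, 10% fill fraction
m_incl, m_host, f = 1.55 + 0.001j, 1.33 + 0.0j, 0.10
eps_eff = maxwell_garnett(m_incl**2, m_host**2, f)
print("effective refractive index:", np.sqrt(eps_eff))
```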
Hershkowitz, Noah [Madison, WI; Longmier, Benjamin [Madison, WI; Baalrud, Scott [Madison, WI
2009-03-03
An electron generating device extracts electrons, through an electron sheath, from plasma produced using RF fields. The electron sheath is located near a grounded ring at one end of a negatively biased conducting surface, which is normally a cylinder. Extracted electrons pass through the grounded ring in the presence of a steady state axial magnetic field. Sufficiently large magnetic fields and/or RF power into the plasma allow for helicon plasma generation. The ion loss area is sufficiently large compared to the electron loss area to allow for total non-ambipolar extraction of all electrons leaving the plasma. Voids in the negatively-biased conducting surface allow the time-varying magnetic fields provided by the antenna to inductively couple to the plasma within the conducting surface. The conducting surface acts as a Faraday shield, which reduces any time-varying electric fields from entering the conductive surface, i.e. blocks capacitive coupling between the antenna and the plasma.
Exact diagonalization of quantum lattice models on coprocessors
NASA Astrophysics Data System (ADS)
Siro, T.; Harju, A.
2016-10-01
We implement the Lanczos algorithm on an Intel Xeon Phi coprocessor and compare its performance to a multi-core Intel Xeon CPU and an NVIDIA graphics processor. The Xeon and the Xeon Phi are parallelized with OpenMP and the graphics processor is programmed with CUDA. The performance is evaluated by measuring the execution time of a single step in the Lanczos algorithm. We study two quantum lattice models with different particle numbers, and conclude that for small systems, the multi-core CPU is the fastest platform, while for large systems, the graphics processor is the clear winner, reaching speedups of up to 7.6 compared to the CPU. The Xeon Phi outperforms the CPU with sufficiently large particle number, reaching a speedup of 2.5.
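A minimal NumPy sketch of a single Lanczos step (an illustration of the operation being benchmarked, not the paper's coprocessor code); a dense random matrix stands in for the sparse many-body Hamiltonian, and the matrix-vector product is the part whose execution time the abstract compares across platforms.

```python
# Minimal sketch (not the paper's implementation): one Lanczos step for a Hermitian
# matrix H. The dominant cost is the matrix-vector product H @ v, which is the
# operation whose execution time is compared across CPU, GPU and Xeon Phi.
import numpy as np

def lanczos_step(H, v_prev, v_curr, beta_prev):
    w = H @ v_curr - beta_prev * v_prev
    alpha = np.vdot(v_curr, w).real        # diagonal entry of the tridiagonal matrix
    w -= alpha * v_curr
    beta = np.linalg.norm(w)               # off-diagonal entry
    return alpha, beta, w / beta

rng = np.random.default_rng(0)
n = 2000                                   # stand-in for the Hilbert-space dimension
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                          # dense random Hermitian "Hamiltonian"
v_prev = np.zeros(n)
v_curr = rng.standard_normal(n)
v_curr /= np.linalg.norm(v_curr)
alpha, beta, v_next = lanczos_step(H, v_prev, v_curr, 0.0)
print(f"alpha = {alpha:.4f}, beta = {beta:.4f}")
```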
Integrating Technologies into Mathematics: Comparing the Cases of Square Roots and Integrals
ERIC Educational Resources Information Center
Kissane, Barry
2016-01-01
Two decades ago, in an award-winning paper, Dan Kennedy (1995) likened learning mathematics to climbing a tree, for which there was only one way to climb: up a large and solid trunk. In the limited time that is available, many students give up the climb, impede others, fall off the trunk, or fail to climb the tree sufficiently well. In the case of…
Melen, Miranda K; Herman, Julie A; Lucas, Jessica; O'Malley, Rachel E; Parker, Ingrid M; Thom, Aaron M; Whittall, Justen B
2016-11-01
Self-incompatibility (SI) in rare plants presents a unique challenge: SI protects plants from inbreeding depression but requires a sufficient number of mates and xenogamous pollination. Does SI persist in an endangered polyploid? Is pollinator visitation sufficient to ensure reproductive success? Is there evidence of inbreeding/outbreeding depression? We characterized the mating system, primary pollinators, pollen limitation, and inbreeding/outbreeding depression in Erysimum teretifolium to guide conservation efforts. We compared seed production following self-pollination and within- and between-population crosses. Pollen tubes were visualized after self-pollinations and between-population pollinations. Pollen limitation was tested in the field. Pollinator observations were quantified using digital video. Inbreeding/outbreeding depression was assessed in progeny from self and outcross pollinations at early and later developmental stages. Self-pollination reduced seed set by 6.5× and quadrupled reproductive failure compared with outcross pollination. Pollen tubes of some self-pollinations were arrested at the stigmatic surface. Seed-set data indicated strong SI, and fruit-set data suggested partial SI. Pollinator diversity and visitation rates were high, and there was no evidence of pollen limitation. Inbreeding depression (δ) was weak for early developmental stages and strong for later developmental stages, with no evidence of outbreeding depression. The rare hexaploid E. teretifolium is largely self-incompatible and suffers from late-acting inbreeding depression. Reproductive success in natural populations was accomplished through high pollinator visitation rates, consistent with a lack of pollen limitation. Future reproductive health for this species will require large population sizes with sufficient mates and a robust pollinator community. © 2016 Melen et al. Published by the Botanical Society of America. This work is licensed under a Creative Commons Attribution License (CC-BY).
Fu, Yong; Ji, Zhong; Ding, Wenzheng; Ye, Fanghao; Lou, Cunguang
2014-11-01
Previous studies demonstrated that thermoacoustic imaging (TAI) has great potential for breast tumor detection. However, large field of view (FOV) imaging remains a long-standing challenge for three-dimensional (3D) breast tumor localization. Here, the authors propose a practical TAI system for noninvasive 3D localization of breast tumors with large FOV through the use of an ultrashort microwave pulse (USMP). A USMP generator was employed for TAI. The energy density required for quality imaging and the corresponding microwave-to-acoustic conversion efficiency were compared with those of conventional TAI. The microwave energy distribution, imaging depth, resolution, and 3D imaging capabilities were then investigated. Finally, a breast phantom embedded with a laboratory-grown tumor was imaged to evaluate the FOV performance of the USMP TAI system under a simulated clinical situation. A radiation energy density equivalent to just 1.6%-2.2% of that for conventional submicrosecond microwave TAI was sufficient to obtain a thermoacoustic signal with the required signal-to-noise ratio. This result clearly demonstrated a significantly higher microwave-to-acoustic conversion efficiency of USMP TAI compared to that of conventional TAI. The USMP TAI system achieved 61 mm imaging depth and a 12 × 12 cm² microwave radiation area. The volumetric image of a copper target measured at a depth of 4-6 cm matched well with the actual shape, and the resolution reached 230 μm. The TAI of the breast phantom was precisely localized to an accuracy of 0.1 cm over an 8 × 8 cm² FOV. The experimental results demonstrated that the USMP TAI system offers significant potential for noninvasive clinical detection and 3D localization of deep breast tumors, with low microwave radiation dose and high spatial resolution over a sufficiently large FOV.
NASA Astrophysics Data System (ADS)
Song, Ningfang; Wu, Chunxiao; Luo, Wenyong; Zhang, Zuchen; Li, Wei
2016-12-01
High-strength fusion splicing of hollow-core photonic crystal fiber (HC-PCF) to single-mode fiber (SMF) requires sufficient energy, which results in collapse of the air holes inside the HC-PCF. Usually the additional splice loss induced by the collapse of the air holes is too large. By large-offset reheating, the collapse length of the HC-PCF is reduced, and the additional splice loss induced by the collapse is thus effectively suppressed. This method guarantees high-strength fusion splicing between the two types of fiber with a low splice loss; the strength of the splice compares favorably with the strength of the HC-PCF itself. This method greatly improves the reliability of splices between HC-PCFs and SMFs.
Large- and small-scale constraints on power spectra in Omega = 1 universes
NASA Technical Reports Server (NTRS)
Gelb, James M.; Gradwohl, Ben-Ami; Frieman, Joshua A.
1993-01-01
The CDM model of structure formation, normalized on large scales, leads to excessive pairwise velocity dispersions on small scales. In an attempt to circumvent this problem, we study three scenarios (all with Omega = 1) with more large-scale and less small-scale power than the standard CDM model: (1) cold dark matter with significantly reduced small-scale power (inspired by models with an admixture of cold and hot dark matter); (2) cold dark matter with a non-scale-invariant power spectrum; and (3) cold dark matter with coupling of dark matter to a long-range vector field. When normalized to COBE on large scales, such models do lead to reduced velocities on small scales and they produce fewer halos compared with CDM. However, models with sufficiently low small-scale velocities apparently fail to produce an adequate number of halos.
Constraints to commercialization of algal fuels.
Chisti, Yusuf
2013-09-10
Production of algal crude oil has been achieved in various pilot-scale facilities, but whether algal fuels can be produced in sufficient quantity to meaningfully displace petroleum fuels has been largely overlooked. Limitations to commercialization of algal fuels need to be understood and addressed for any future commercialization. This review identifies the major constraints to commercialization of transport fuels from microalgae. Algae-derived fuels are expensive compared to petroleum-derived fuels, but this could change. Unfortunately, improved economics of production are not sufficient for an environmentally sustainable production, or its large-scale feasibility. A low-cost point supply of concentrated carbon dioxide colocated with the other essential resources is necessary for producing algal fuels. An insufficiency of concentrated carbon dioxide is actually a major impediment to any substantial production of algal fuels. Sustainability of production requires the development of an ability to almost fully recycle the phosphorus and nitrogen nutrients that are necessary for algae culture. Development of a nitrogen biofixation ability to support production of algal fuels ought to be an important long-term objective. At sufficiently large scale, a limited supply of freshwater will pose a significant limitation to production even if marine algae are used. Processes for recovering energy from the algal biomass left after the extraction of oil are required for achieving a net positive energy balance in the algal fuel oil. The near-term outlook for widespread use of algal fuels appears bleak, but fuels for niche applications such as in aviation may be likely in the medium term. Genetic and metabolic engineering of microalgae to boost production of fuel oil and ease its recovery are essential for commercialization of algal fuels. Algae will need to be genetically modified for improved photosynthetic efficiency in the long term. Copyright © 2013 Elsevier B.V. All rights reserved.
Planning multi-arm screening studies within the context of a drug development program
Wason, James M S; Jaki, Thomas; Stallard, Nigel
2013-01-01
Screening trials are small trials used to decide whether an intervention is sufficiently promising to warrant a large confirmatory trial. Previous literature examined the situation where treatments are tested sequentially until one is considered sufficiently promising to take forward to a confirmatory trial. An important consideration for sponsors of clinical trials is how screening trials should be planned to maximize the efficiency of the drug development process. It has been found previously that small screening trials are generally the most efficient. In this paper we consider the design of screening trials in which multiple new treatments are tested simultaneously. We derive analytic formulae for the expected number of patients until a successful treatment is found, and propose methodology to search for the optimal number of treatments, and optimal sample size per treatment. We compare designs in which only the best treatment proceeds to a confirmatory trial and designs in which multiple treatments may proceed to a multi-arm confirmatory trial. We find that inclusion of a large number of treatments in the screening trial is optimal when only one treatment can proceed, and a smaller number of treatments is optimal when more than one can proceed. The designs we investigate are compared on a real-life set of screening designs. Copyright © 2013 John Wiley & Sons, Ltd. PMID:23529936
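A toy Monte Carlo sketch of the trade-off discussed above, under my own simplified assumptions rather than the paper's analytic formulae: K new treatments are screened with n patients per arm, the best-looking arm advances to a fixed-size confirmatory trial, and we count patients until a treatment is confirmed. The effect-size prior, thresholds, and confirmatory sample size are illustrative.

```python
# Toy model (my own simplification, not the paper's formulae): expected number of
# patients until a screening-plus-confirmation pipeline confirms a treatment.
import numpy as np

rng = np.random.default_rng(1)
P_EFFECTIVE, TRUE_EFFECT, SIGMA = 0.2, 0.5, 1.0   # assumed prior and effect size
N_CONFIRM, Z_SCREEN, Z_CONFIRM = 200, 1.0, 1.96   # assumed design constants

def patients_until_success(K, n):
    total = 0
    while True:
        effects = np.where(rng.random(K) < P_EFFECTIVE, TRUE_EFFECT, 0.0)
        observed = effects + rng.normal(0, SIGMA / np.sqrt(n), K)
        total += K * n
        best = np.argmax(observed)
        if observed[best] > Z_SCREEN * SIGMA / np.sqrt(n):        # promising enough?
            total += N_CONFIRM
            confirm = effects[best] + rng.normal(0, SIGMA / np.sqrt(N_CONFIRM))
            if confirm > Z_CONFIRM * SIGMA / np.sqrt(N_CONFIRM):  # confirmed
                return total

for K in (2, 4, 8):
    for n in (10, 30, 60):
        mean_patients = np.mean([patients_until_success(K, n) for _ in range(2000)])
        print(f"K={K}, n={n}: expected patients ~ {mean_patients:.0f}")
```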
Freedman, Laurence S; Kipnis, Victor; Schatzkin, Arthur; Potischman, Nancy
2008-01-01
Results from several large cohort studies that were reported 10 to 20 years ago seemed to indicate that the hypothesized link between dietary fat intake and breast cancer risk was illusory. In this article, we review several strands of more recent evidence that have emerged. These include two studies comparing the performance of dietary instruments used to investigate the dietary fat-breast cancer hypothesis, a large randomized disease prevention trial, a more recent meta-analysis of nutritional cohort studies, and a very large nutritional cohort study. Each of the studies discussed in this article suggests that a modest but real association between fat intake and breast cancer is likely. If the association is causative, it would have important implications for public health strategies in reducing breast cancer incidence. The evidence is not yet conclusive, but additional follow-up in the randomized trial, as well as efforts to improve dietary assessment methodology for cohort studies, may be sufficient to provide a convincing answer.
Experience of public procurement of Open Compute servers
NASA Astrophysics Data System (ADS)
Bärring, Olof; Guerri, Marco; Bonfillou, Eric; Valsan, Liviu; Grigore, Alexandru; Dore, Vincent; Gentit, Alain; Clement, Benoît; Grossir, Anthony
2015-12-01
The Open Compute Project (OCP, http://www.opencompute.org/) was launched by Facebook in 2011 with the objective of building efficient computing infrastructures at the lowest possible cost. The technologies are released as open hardware, with the goal to develop servers and data centres following the model traditionally associated with open source software projects. In 2013 CERN acquired a few OCP servers in order to compare performance and power consumption with standard hardware. The conclusions were that there are sufficient savings to motivate an attempt to procure a large scale installation. One objective is to evaluate if the OCP market is sufficiently mature and broad enough to meet the constraints of a public procurement. This paper summarizes this procurement, which started in September 2014 and involved the Request for Information (RFI) to qualify bidders and the Request for Tender (RFT).
Running Out of Time: Why Elephants Don't Gallop
NASA Astrophysics Data System (ADS)
Noble, Julian V.
2001-11-01
The physics of high-speed running implies that galloping becomes impossible for sufficiently large animals. Some authors have suggested that this is because the strength/weight ratio decreases with size, eventually rendering large animals excessively liable to injury when they attempt to gallop. This paper suggests instead that large animals cannot move their limbs sufficiently rapidly to take advantage of leaving the ground, and hence are restricted to walking gaits. From this point of view, the relatively low strength/weight ratio of elephants follows from their inability to gallop, rather than causing it.
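The conventional strength/weight argument that the paper pushes back against is the textbook isometric scaling relation (background, not the paper's own derivation):

```latex
% Textbook isometric scaling (assumed background, not the paper's analysis):
% strength scales with load-bearing cross-section, weight with volume,
% for geometrically similar animals of linear size L.
\[
  \text{strength} \propto L^{2}, \qquad \text{weight} \propto L^{3}
  \quad\Longrightarrow\quad
  \frac{\text{strength}}{\text{weight}} \propto L^{-1},
\]
% so the ratio falls with increasing body size, which is the conventional
% injury-risk explanation that the paper argues against.
```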
Recent progress and tests of radiation resistant impregnation materials for Nb3Sn coils
NASA Astrophysics Data System (ADS)
Bossert, R.; Krave, S.; Ambrosio, G.; Andreev, N.; Chlachidze, G.; Nobrega, A.; Novitski, I.; Yu, M.; Zlobin, A. V.
2014-01-01
Fermilab is collaborating with Lawrence Berkeley National Laboratory (LBNL) and Brookhaven National Laboratory (BNL) (US-LARP collaboration) to develop a large-aperture Nb3Sn superconducting quadrupole for the Large Hadron Collider (LHC) luminosity upgrade. An important component of this work is the development of materials that are sufficiently radiation resistant for use in critical areas of the upgrade. This paper describes recent progress in the characterization of materials, including the baseline CTD101K epoxy, cyanate ester blends, and Matrimid 5292, a bismaleimide-based system. Structural properties of "ten stacks" of cable impregnated with these materials are tested at room and cryogenic temperatures and compared to the baseline CTD101K. Experience with potting 1 and 2 meter long coils with Matrimid 5292 is described. Test results of a single 1-m coil impregnated with Matrimid 5292 are reported and compared to similar coils impregnated with the traditional epoxy.
Computationally efficient simulation of unsteady aerodynamics using POD on the fly
NASA Astrophysics Data System (ADS)
Moreno-Ramos, Ruben; Vega, José M.; Varas, Fernando
2016-12-01
Modern industrial aircraft design requires a large number of sufficiently accurate aerodynamic and aeroelastic simulations. Current computational fluid dynamics (CFD) solvers with aeroelastic capabilities, such as the NASA URANS unstructured solver FUN3D, require very large computational resources. Since a very large amount of simulation is necessary, the CFD cost is simply unaffordable in an industrial production environment and must be significantly reduced. Thus, a less expensive, yet sufficiently precise solver is strongly needed. An opportunity to approach this goal could follow some recent results (Terragni and Vega 2014 SIAM J. Appl. Dyn. Syst. 13 330-65; Rapun et al 2015 Int. J. Numer. Meth. Eng. 104 844-68) on an adaptive reduced order model that combines 'on the fly' a standard numerical solver (to compute some representative snapshots), proper orthogonal decomposition (POD) (to extract modes from the snapshots), Galerkin projection (onto the set of POD modes), and several additional ingredients such as projecting the equations using a limited number of points and fairly generic mode libraries. When applied to the complex Ginzburg-Landau equation, the method produces acceleration factors (compared with standard numerical solvers) of the order of 20 and 300 in one and two space dimensions, respectively. Unfortunately, the extension of the method to unsteady, compressible flows around deformable geometries requires new approaches to deal with deformable meshes, high Reynolds numbers, and compressibility. A first step in this direction is presented, considering the unsteady compressible, two-dimensional flow around an oscillating airfoil using a CFD solver in a rigidly moving mesh. POD on the Fly gives results whose accuracy is comparable to that of the CFD solver used to compute the snapshots.
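A minimal sketch of the core POD step referenced above, on synthetic snapshot data (the adaptive 'on the fly' machinery, the Galerkin projection of the governing equations, and the limited-point projection are not reproduced):

```python
# Minimal sketch (assumed data, not the adaptive ROM of the paper): extract POD modes
# from a snapshot matrix via the SVD and measure how well a truncated basis
# reconstructs a new state. Galerkin projection onto these modes is the next step.
import numpy as np

rng = np.random.default_rng(0)
n_dof, n_snap = 5000, 60
# Synthetic snapshots with low-dimensional structure plus noise (placeholder for CFD output)
modes_true = rng.standard_normal((n_dof, 5))
snapshots = modes_true @ rng.standard_normal((5, n_snap)) \
            + 0.01 * rng.standard_normal((n_dof, n_snap))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999) + 1)       # modes capturing 99.9% of the energy
Phi = U[:, :r]                                    # POD basis

new_state = modes_true @ rng.standard_normal(5)
reconstruction = Phi @ (Phi.T @ new_state)        # projection onto the reduced basis
rel_err = np.linalg.norm(new_state - reconstruction) / np.linalg.norm(new_state)
print(f"retained modes: {r}, relative reconstruction error: {rel_err:.2e}")
```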
The X-ray luminosity functions of Abell clusters from the Einstein Cluster Survey
NASA Technical Reports Server (NTRS)
Burg, R.; Giacconi, R.; Forman, W.; Jones, C.
1994-01-01
We have derived the present epoch X-ray luminosity function of northern Abell clusters using luminosities from the Einstein Cluster Survey. The sample is sufficiently large that we can determine the luminosity function for each richness class separately with sufficient precision to study and compare the different luminosity functions. We find that, within each richness class, the range of X-ray luminosity is quite large and spans nearly a factor of 25. Characterizing the luminosity function for each richness class with a Schechter function, we find that the characteristic X-ray luminosity L* scales with richness class as L* ∝ N*^γ, where N* is the corrected, mean number of galaxies in a richness class, and the best-fitting exponent is γ = 1.3 ± 0.4. Finally, our analysis suggests that there is a lower limit to the X-ray luminosity of clusters which is determined by the integrated emission of the cluster member galaxies, and this also scales with richness class. The present sample forms a baseline for testing cosmological evolution of Abell-like clusters when an appropriate high-redshift cluster sample becomes available.
Non-blackbody Disks Can Help Explain Inferred AGN Accretion Disk Sizes
NASA Astrophysics Data System (ADS)
Hall, Patrick B.; Sarrouh, Ghassan T.; Horne, Keith
2018-02-01
If the atmospheric density ρ_atm in the accretion disk of an active galactic nucleus (AGN) is sufficiently low, scattering in the atmosphere can produce a non-blackbody emergent spectrum. For a given bolometric luminosity, at ultraviolet and optical wavelengths such disks have lower fluxes and apparently larger sizes as compared to disks that emit as blackbodies. We show that models in which ρ_atm is a sufficiently low fixed fraction of the interior density ρ can match the AGN STORM observations of NGC 5548 but produce disk spectral energy distributions that peak at shorter wavelengths than observed in luminous AGN in general. Thus, scattering atmospheres can contribute to the explanation for large inferred AGN accretion disk sizes but are unlikely to be the only contributor. In the appendix section, we present unified equations for the interior ρ and T in gas pressure-dominated regions of a thin accretion disk.
A proposed origin for palimpsests and anomalous pit craters on Ganymede and Callisto
NASA Technical Reports Server (NTRS)
Croft, S. K.
1983-01-01
The hypothesis that palimpsests and anomalous pit craters are essentially pristine crater forms derived from high-velocity impacts and/or impacts into an ice crust with preimpact temperatures near melting is explored. The observational data are briefly reviewed, and an impact model is proposed for the direct formation of a palimpsest from an impact when the modification flow which produces the final crater is dominated by 'wet' fluid flow, as opposed to the 'dry' granular flow which produces normal craters. Conditions of 'wet' modification occur when the volume of impact melt remaining in the transient crater attains a volume comparable to the transient crater. The normal crater-palimpsest transition is found to occur for sufficiently large impacts or sufficiently fast impactors. The range of crater diameters and morphological characteristics inferred from the impact model is consistent with the observed characteristics of palimpsests and anomalous pit craters.
Stochastic algorithm for simulating gas transport coefficients
NASA Astrophysics Data System (ADS)
Rudyak, V. Ya.; Lezhnev, E. V.
2018-02-01
The aim of this paper is to create a molecular algorithm for modeling transport processes in gases that is more efficient than the molecular dynamics method. To this end, the dynamics of the molecules are modeled stochastically. In a rarefied gas, it is sufficient to consider the evolution of molecules only in velocity space, whereas for a dense gas it is necessary to model the dynamics of molecules in physical space as well. Adequate integral characteristics of the studied system are obtained by averaging over a sufficiently large number of independent phase trajectories. The efficiency of the proposed algorithm is demonstrated by modeling the coefficients of self-diffusion and the viscosity of several gases. It is shown that accuracy comparable to experiment can be obtained with a relatively small number of molecules. The modeling accuracy increases with the number of molecules and phase trajectories used.
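A generic sketch of the averaging idea described above, not the paper's velocity-space algorithm: a Green-Kubo estimate of the self-diffusion coefficient from the velocity autocorrelation function, averaged over many independent stochastic trajectories. An Ornstein-Uhlenbeck process with assumed parameters stands in for the molecular velocity.

```python
# Generic sketch (not the paper's algorithm): Green-Kubo estimate D = ∫ <v(0) v(t)> dt,
# averaged over many independent trajectories of a surrogate stochastic velocity process.
import numpy as np

rng = np.random.default_rng(0)
n_traj, n_steps, dt = 2000, 2000, 1e-3
gamma, v_th = 5.0, 1.0                      # relaxation rate and thermal velocity (assumed)

v = rng.normal(0.0, v_th, n_traj)           # initial equilibrium velocities
v0 = v.copy()
vacf = np.empty(n_steps)
for i in range(n_steps):
    vacf[i] = np.mean(v0 * v)               # ensemble-averaged <v(0) v(t)>
    # Euler-Maruyama step of the Ornstein-Uhlenbeck surrogate process
    v += -gamma * v * dt + np.sqrt(2 * gamma * v_th**2 * dt) * rng.normal(size=n_traj)

# trapezoidal integration of the autocorrelation function (one Cartesian component)
D_est = dt * (np.sum(vacf) - 0.5 * (vacf[0] + vacf[-1]))
print(f"estimated D = {D_est:.4f}, analytic OU value = {v_th**2 / gamma:.4f}")
```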
Large, nonsaturating thermopower in a quantizing magnetic field
Fu, Liang
2018-01-01
The thermoelectric effect is the generation of an electrical voltage from a temperature gradient in a solid material due to the diffusion of free charge carriers from hot to cold. Identifying materials with a large thermoelectric response is crucial for the development of novel electric generators and coolers. We theoretically consider the thermopower of Dirac/Weyl semimetals subjected to a quantizing magnetic field. We contrast their thermoelectric properties with those of traditional heavily doped semiconductors and show that, under a sufficiently large magnetic field, the thermopower of Dirac/Weyl semimetals grows linearly with the field without saturation and can reach extremely high values. Our results suggest an immediate pathway for achieving record-high thermopower and thermoelectric figure of merit, and they compare well with a recent experiment on Pb1–xSnxSe. PMID:29806031
Evaluation of Existing Methods for Human Blood mRNA Isolation and Analysis for Large Studies
Meyer, Anke; Paroni, Federico; Günther, Kathrin; Dharmadhikari, Gitanjali; Ahrens, Wolfgang; Kelm, Sørge; Maedler, Kathrin
2016-01-01
Aims: Prior to implementing gene expression analyses from blood in a larger cohort study, an evaluation to set up a reliable and reproducible method is mandatory but challenging due to the specific characteristics of the samples as well as their collection methods. In this pilot study we optimized a combination of blood sampling and RNA isolation methods and present reproducible gene expression results from human blood samples. Methods: The established PAXgene blood collection method (Qiagen) was compared with the more recent Tempus collection and storing system. RNA from blood samples collected by both systems was extracted on columns with the corresponding Norgen and PAX RNA extraction kits. RNA quantity and quality was compared photometrically, with Ribogreen, and by real-time PCR analyses of various reference genes (PPIA, β-ACTIN and TUBULIN) and, exemplarily, of SIGLEC-7. Results: Combining different sampling methods and extraction kits caused strong variations in gene expression. The use of the PAXgene and Tempus collection systems resulted in RNA of good quality and quantity for the respective RNA isolation system. No large inter-donor variations could be detected for either system. However, it was not possible to extract sufficient RNA of good quality with the PAXgene RNA extraction system from samples collected in Tempus collection tubes. Comparing only the Norgen RNA extraction methods, RNA from blood collected by either the Tempus or the PAXgene collection system delivered sufficient amount and quality of RNA, but the Tempus collection delivered a higher RNA concentration than the PAX collection system. The established Pre-analytix PAXgene RNA extraction system together with the PAXgene blood collection system showed the lowest CT values, i.e. the highest RNA concentration of good quality. Expression levels of all tested genes were stable and reproducible. Conclusions: This study confirms that it is not possible to mix or change sampling or extraction strategies during the same study because of large variations of RNA yield and expression levels. PMID:27575051
Space Durable Polyimide/Carbon Nanotube Composite Films for Electrostatic Charge Mitigation
NASA Technical Reports Server (NTRS)
Watson, Kent A.; Smith, Joseph G., Jr.; Connell, John W.
2003-01-01
Low color, space environmentally durable polymeric films with sufficient electrical conductivity to mitigate electrostatic charge (ESC) build-up have been under investigation as part of a materials development activity. These materials have potential applications on advanced spacecraft, particularly on large, deployable, ultra-light weight Gossamer spacecraft. The approach taken to impart sufficient electrical conductivity into the polymer film is based on the use of single walled carbon nanotubes (SWNT) as conductive additives. Earlier approaches investigated in our lab involved both an in-situ polymerization approach and addition of SWNT to an oligomer containing reactive end-groups as methods to improve SWNT dispersion. The work described herein is based on the spray coating of a SWNT/solvent dispersion onto the film surface. Two types of polyimides were investigated, one with reactive end groups that can lead to bond formation between the oligomer chain and the SWNT surface and those without reactive end-groups. Surface conductivities (measured as surface resistance) in the range sufficient for ESC mitigation were achieved with minimal effects on the mechanical, optical, thermo-optical properties of the film as compared to the other methods. The chemistry and physical properties of these nanocomposites will be discussed.
Functional and morphological adaptations to aging in knee extensor muscles of physically active men.
Baroni, Bruno Manfredini; Geremia, Jeam Marcel; Rodrigues, Rodrigo; Borges, Marcelo Krás; Jinha, Azim; Herzog, Walter; Vaz, Marco Aurélio
2013-10-01
It is not known if a physically active lifestyle, without systematic training, is sufficient to combat age-related muscle and strength loss. Therefore, the purpose of this study was to evaluate whether the maintenance of a physically active lifestyle prevents muscle impairments due to aging. To address this issue, we evaluated 33 healthy men with similar physical activity levels (IPAQ = 2) across a large range of ages. Functional (torque-angle and torque-velocity relations) and morphological (vastus lateralis muscle architecture) properties of the knee extensor muscles were assessed and compared between three age groups: young adults (30 ± 6 y), middle-aged subjects (50 ± 7 y) and elderly subjects (69 ± 5 y). Isometric peak torques were significantly lower (30% to 36%) in the elderly group compared with the young adults. Concentric peak torques were significantly lower in the middle-aged group (18% to 32%) and the elderly group (40% to 53%) compared with the young adults. Vastus lateralis thickness and fascicle lengths were significantly smaller in the elderly group (15.8 ± 3.9 mm; 99.1 ± 25.8 mm) compared with the young adults (19.8 ± 3.6 mm; 152.1 ± 42.0 mm). These findings suggest that a physically active lifestyle, without systematic training, is not sufficient to avoid loss of strength and muscle mass with aging.
Oblique nonlinear whistler wave
NASA Astrophysics Data System (ADS)
Yoon, Peter H.; Pandey, Vinay S.; Lee, Dong-Hun
2014-03-01
Motivated by satellite observation of large-amplitude whistler waves propagating in oblique directions with respect to the ambient magnetic field, a recent letter discusses the physics of large-amplitude whistler waves and relativistic electron acceleration. One of the conclusions of that letter is that oblique whistler waves will eventually undergo nonlinear steepening regardless of the amplitude. The present paper reexamines this claim and finds that the steepening associated with the density perturbation almost never occurs, unless whistler waves have sufficiently high amplitude and propagate sufficiently close to the resonance cone angle.
Simultaneous production of lepton pairs in ultraperipheral relativistic heavy ion collisions
NASA Astrophysics Data System (ADS)
Kurban, E.; Güçlü, M. C.
2017-10-01
We calculate the total cross sections and probabilities for the simultaneous electromagnetic production of electron, muon, and tauon pairs. At the CERN Large Hadron Collider (LHC), the available electromagnetic energy is sufficient to produce all kinds of leptons coherently. The masses of muons and tauons are large, so their Compton wavelengths are small enough to interact with the colliding nuclei; therefore, realistic nuclear form factors are included in the calculations of the electromagnetic pair production. The cross section calculations show that, at LHC energies, the probabilities of simultaneous production of all kinds of leptons are increased significantly compared to the energies available at the BNL Relativistic Heavy Ion Collider (RHIC). Experimentally, observing this simultaneous production can give us important information about strong QED.
Remote sensing applied to numerical modelling. [water resources pollution
NASA Technical Reports Server (NTRS)
Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.
1975-01-01
Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-difference treatment using the rigid-lid model and a rigid-line grid system.
Fused Silica and Other Transparent Window Materials
NASA Technical Reports Server (NTRS)
Salem, Jon
2016-01-01
Several transparent ceramics, such as spinel and AlONs, are now being produced in sufficiently large areas to be used in spacecraft window applications. The workhorse transparent material for space missions from Apollo to the International Space Station has been fused silica, due in part to its low coefficient of expansion and optical quality. Despite its successful use, fused silica exhibits anomalies in its crack growth behavior, depending on environmental preconditioning and surface damage. This presentation will compare recent optical ceramics to fused silica and discuss sources of variation in slow crack growth behavior.
A quantum heuristic algorithm for the traveling salesman problem
NASA Astrophysics Data System (ADS)
Bang, Jeongho; Ryu, Junghee; Lee, Changhyoup; Yoo, Seokwon; Lim, James; Lee, Jinhyoung
2012-12-01
We propose a quantum heuristic algorithm to solve the traveling salesman problem by generalizing the Grover search. Sufficient conditions are derived to greatly enhance the probability of finding the tours with the cheapest costs reaching almost to unity. These conditions are characterized by the statistical properties of tour costs and are shown to be automatically satisfied in the large-number limit of cities. In particular for a continuous distribution of the tours along the cost, we show that the quantum heuristic algorithm exhibits a quadratic speedup compared to its classical heuristic algorithm.
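A back-of-envelope sketch of the quadratic speedup claim, not the authors' algorithm: it compares the expected number of cost-function queries for classical random sampling (roughly T/M) with Grover-style amplitude amplification (roughly (π/4)·√(T/M)) when a fraction M/T of tours is assumed "cheap enough"; that fraction is an illustrative assumption.

```python
# Query-count comparison under an assumed fraction of acceptable tours (illustration only).
import math

for n_cities in (8, 10, 12):
    T = math.factorial(n_cities - 1) // 2      # distinct tours (fixed start, one direction)
    M = max(1, T // 10**4)                     # assume 1 in 10^4 tours is cheap enough
    classical = T / M                          # expected queries for random sampling
    grover = (math.pi / 4) * math.sqrt(T / M)  # Grover-style amplitude amplification
    print(f"{n_cities} cities: tours={T:.3e}, classical ~{classical:.1e} queries, "
          f"Grover-style ~{grover:.1e} queries")
```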
Assessing sufficiency of thermal riverscapes for resilient ...
Resilient salmon populations require river networks that provide water temperature regimes sufficient to support a diversity of salmonid life histories across space and time. Efforts to protect, enhance and restore watershed thermal regimes for salmon may target specific locations and features within stream networks hypothesized to provide disproportionately high-value functional resilience to salmon populations. These include relatively small-scale features such as thermal refuges, and larger-scale features such as entire watersheds or aquifers that support thermal regimes buffered from local climatic conditions. Quantifying the value of both small and large scale thermal features to salmon populations has been challenged by both the difficulty of mapping thermal regimes at sufficient spatial and temporal resolutions, and integrating thermal regimes into population models. We attempt to address these challenges by using newly-available datasets and modeling approaches to link thermal regimes to salmon populations across scales. We will describe an individual-based modeling approach for assessing sufficiency of thermal refuges for migrating salmon and steelhead in large rivers, as well as a population modeling approach for assessing large-scale climate refugia for salmon in the Pacific Northwest. Many rivers and streams in the Pacific Northwest are currently listed as impaired under the Clean Water Act as a result of high summer water temperatures. Adverse effec
Fast secant methods for the iterative solution of large nonsymmetric linear systems
NASA Technical Reports Server (NTRS)
Deuflhard, Peter; Freund, Roland; Walter, Artur
1990-01-01
A family of secant methods based on general rank-1 updates was revisited with a view to the construction of iterative solvers for large non-Hermitian linear systems. As it turns out, both Broyden's good and bad update techniques play a special role, but should be associated with two different line search principles. For Broyden's bad update technique, a minimum residual principle is natural, thus making it theoretically comparable with a series of well-known algorithms like GMRES. Broyden's good update technique, however, is shown to be naturally linked with a minimum next correction principle, which asymptotically mimics a minimum error principle. The two minimization principles differ significantly for sufficiently large system dimension. Numerical experiments on discretized partial differential equations of convection-diffusion type in 2-D with interior layers give a first impression of the possible power of the derived good Broyden variant.
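A minimal sketch of the pairing described above for the linear case (an illustration, not the paper's solver): Broyden's "bad" rank-1 update of an approximate inverse Jacobian combined with a step length chosen by a minimum residual principle, applied to a random nonsymmetric system standing in for a discretized convection-diffusion operator.

```python
# Minimal sketch: Broyden's "bad" (second) update with a minimum-residual line search
# for a nonsymmetric linear system A x = b. Test matrix and tolerances are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) / np.sqrt(n) + 3.0 * np.eye(n)   # nonsymmetric, well conditioned
b = rng.standard_normal(n)

x = np.zeros(n)
H = np.eye(n)                      # approximate inverse Jacobian (the Jacobian is A here)
r = A @ x - b
for k in range(200):
    d = -H @ r                     # quasi-Newton direction
    Ad = A @ d
    t = -(r @ Ad) / (Ad @ Ad)      # step length minimizing ||r + t*A*d|| (minimum residual)
    s = t * d
    x_new = x + s
    r_new = A @ x_new - b
    y = r_new - r
    H += np.outer(s - H @ y, y) / (y @ y)   # Broyden's "bad" rank-1 update
    x, r = x_new, r_new
    if np.linalg.norm(r) < 1e-10 * np.linalg.norm(b):
        break
print(f"iterations: {k + 1}, relative residual: {np.linalg.norm(r) / np.linalg.norm(b):.2e}")
```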
NASA Technical Reports Server (NTRS)
Peterson, Harold; Beasley, William
2011-01-01
We address the question of whether ice crystals with habits typically encountered by lightning discharges may serve as catalysts for the production of NOx by lightning. If so, and if the effect is sufficiently large, it would need to be taken into account in estimates of global NOx production by lightning. In this study, we make a series of plausible assumptions about the temperatures and concentrations of reactant species in the environment of discharges, and we postulate a mechanism by which ice crystals could adsorb nitrogen atoms. We then compare production rates between uncatalyzed and catalyzed reactions at 2000 K, 3000 K, and 4000 K, temperatures observed in lightning channels during the cool-down period after a return stroke. Catalyzed NO production rates are greater at 2000 K, whereas uncatalyzed production occurs most rapidly at 4000 K. The channel temperature stays near 2000 K for a longer period of time than near 4000 K, and this longer residence time at 2000 K is sufficient to allow fresh reactants to mix in and participate. Therefore, our results suggest that nearly three times as much NO per flash is produced by ice-catalyzed reactions as compared with uncatalyzed reactions.
How little data is enough? Phase-diagram analysis of sparsity-regularized X-ray computed tomography
Jørgensen, J. S.; Sidky, E. Y.
2015-01-01
We introduce phase-diagram analysis, a standard tool in compressed sensing (CS), to the X-ray computed tomography (CT) community as a systematic method for determining how few projections suffice for accurate sparsity-regularized reconstruction. In CS, a phase diagram is a convenient way to study and express certain theoretical relations between sparsity and sufficient sampling. We adapt phase-diagram analysis for empirical use in X-ray CT for which the same theoretical results do not hold. We demonstrate in three case studies the potential of phase-diagram analysis for providing quantitative answers to questions of undersampling. First, we demonstrate that there are cases where X-ray CT empirically performs comparably with a near-optimal CS strategy, namely taking measurements with Gaussian sensing matrices. Second, we show that, in contrast to what might have been anticipated, taking randomized CT measurements does not lead to improved performance compared with standard structured sampling patterns. Finally, we show preliminary results of how well phase-diagram analysis can predict the sufficient number of projections for accurately reconstructing a large-scale image of a given sparsity by means of total-variation regularization. PMID:25939620
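A minimal sketch of the Gaussian-sensing benchmark that the phase-diagram comparison above refers to, not the paper's CT or total-variation machinery: a k-sparse signal is recovered from m Gaussian measurements by l1-regularized least squares, solved with plain ISTA; sweeping m for fixed sparsity mimics moving through one column of a phase diagram. The problem sizes, regularization weight, and iteration count are illustrative.

```python
# Empirical sparse recovery from Gaussian measurements via l1-regularized least squares,
# solved with plain ISTA (simple but slow). Illustration only.
import numpy as np

def ista(A, y, lam, n_iter=2000):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
n, k = 400, 20
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

for m in (40, 80, 120, 160, 200):
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x_true
    x_hat = ista(A, y, lam=1e-4)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"m={m:3d} (m/n={m / n:.2f}): relative error {err:.2e}")
```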
Electrohydrodynamically driven large-area liquid ion sources
Pregenzer, Arian L.
1988-01-01
A large-area liquid ion source comprises means for generating, over a large area of the surface of a liquid, an electric field of a strength sufficient to induce emission of ions from a large area of said liquid. Large areas in this context are those distinct from emitting areas in unidimensional emitters.
Samuel A. Cushman; Erin L. Landguth; Curtis H. Flather
2012-01-01
Aim: The goal of this study was to evaluate the sufficiency of the network of protected lands in the U.S. northern Rocky Mountains in providing protection for habitat connectivity for 105 hypothetical organisms. A large proportion of the landscape...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2014 CFR
2014-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... deliverable supply; (ii) A deliverable supply that is sufficiently large, and a cash market sufficiently... manipulation; or (iii)No cash market. (2) The commodities that meet the criteria of paragraph (a)(1) of this...
17 CFR 36.2 - Exempt boards of trade.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Section 36.2 Commodity and Securities Exchanges COMMODITY FUTURES TRADING COMMISSION EXEMPT MARKETS § 36.2... supply that is sufficiently large, and a cash market sufficiently liquid, to render any contract traded... market. (2) The commodities that meet the criteria of paragraph (a)(1) of this section are: (i) The...
NASA Technical Reports Server (NTRS)
Contreras, Michael T.; Peng, Chia-Yen; Wang, Dongdong; Chen, Jiun-Shyan
2012-01-01
A wheel experiencing sinkage and slippage events poses a high risk to rover missions as evidenced by recent mobility challenges on the Mars Exploration Rover (MER) project. Because several factors contribute to wheel sinkage and slippage conditions such as soil composition, large deformation soil behavior, wheel geometry, nonlinear contact forces, terrain irregularity, etc., there are significant benefits to modeling these events to a sufficient degree of complexity. For the purposes of modeling wheel sinkage and slippage at an engineering scale, meshfree finite element approaches enable simulations that capture sufficient detail of wheel-soil interaction while remaining computationally feasible. This study demonstrates some of the large deformation modeling capability of meshfree methods and the realistic solutions obtained by accounting for the soil material properties. A benchmark wheel-soil interaction problem is developed and analyzed using a specific class of meshfree methods called Reproducing Kernel Particle Method (RKPM). The benchmark problem is also analyzed using a commercially available finite element approach with Lagrangian meshing for comparison. RKPM results are comparable to classical pressure-sinkage terramechanics relationships proposed by Bekker-Wong. Pending experimental calibration by future work, the meshfree modeling technique will be a viable simulation tool for trade studies assisting rover wheel design.
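For reference, a minimal sketch of the classical Bekker pressure-sinkage relation that the abstract cites as the comparison baseline is shown below; the soil parameters, wheel width, load and contact-patch area are placeholder values, not those of the study.

```python
# Minimal sketch of the classical Bekker pressure-sinkage relation used as the
# comparison baseline in the abstract: p = (k_c / b + k_phi) * z**n.
# The soil parameters below are illustrative textbook-style values for a dry
# sandy soil, not the ones used in the study.
import numpy as np

def bekker_pressure(z, b, k_c, k_phi, n):
    """Ground pressure [Pa] under a plate of width b [m] at sinkage z [m]."""
    return (k_c / b + k_phi) * z**n

def bekker_sinkage(p, b, k_c, k_phi, n):
    """Invert the relation: static sinkage [m] for a given contact pressure p [Pa]."""
    return (p / (k_c / b + k_phi)) ** (1.0 / n)

# Placeholder parameters (illustrative):
k_c, k_phi, n_exp = 1.0e3, 1.5e6, 1.1        # [N/m^(n+1)], [N/m^(n+2)], [-]
wheel_width = 0.25                           # contact-patch width b [m]
wheel_load = 600.0                           # vertical load on one wheel [N]
contact_area = 0.25 * 0.10                   # assumed flat contact patch [m^2]

p_avg = wheel_load / contact_area
z = bekker_sinkage(p_avg, wheel_width, k_c, k_phi, n_exp)
print(f"mean contact pressure: {p_avg:.0f} Pa, predicted sinkage: {z*100:.1f} cm")
```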
`Log-Chipper' Turbulence in the Convective Boundary Layer.
NASA Astrophysics Data System (ADS)
Kimmel, Shari J.; Wyngaard, John C.; Otte, Martin J.
2002-03-01
Turbulent fluctuations of a conservative scalar in the atmospheric boundary layer (ABL) can be generated by a scalar flux at the surface, a scalar flux of entrainment at the ABL top, and the `chewing up' of scalar variations on the mesoscale. The first two have been previously studied, while the third is examined in this paper through large-eddy simulation (LES). The LES results show that the scalar fluctuations due to the breakdown of mesoscale variations in advected conservative scalar fields, which the authors call the `log-chipper' component of scalar fluctuations, are uniformly distributed through the depth of the convective ABL, unlike the top-down and bottom-up components. A similarity function, similar to those for the top-down and bottom-up scalars, is derived for the log-chipper scalar variance in the convective ABL and used to compare the relative importance of these three processes for generating scalar fluctuations. Representative mesoscale gradients for water vapor mixing ratio and potential temperature are computed from airplane measurements over both land and water. In situations where the entrainment and surface fluxes are sufficiently small, or the ABL depth, turbulence intensity, or the mesoscale scalar gradient is sufficiently large, the variance of the log-chipper scalar fluctuations in mid-ABL can be of the order of the variance of top-down and bottom-up scalars.
Integrated Surface Power Strategy for Mars
NASA Technical Reports Server (NTRS)
Rucker, Michelle
2015-01-01
A National Aeronautics and Space Administration (NASA) study team evaluated surface power needs for a conceptual crewed 500-day Mars mission. This study had four goals: 1. Determine estimated surface power needed to support the reference mission; 2. Explore alternatives to minimize landed power system mass; 3. Explore alternatives to minimize Mars Lander power self-sufficiency burden; and 4. Explore alternatives to minimize power system handling and surface transportation mass. The study team concluded that Mars Ascent Vehicle (MAV) oxygen propellant production drives the overall surface power needed for the reference mission. Switching to multiple, small Kilopower fission systems can potentially save four to eight metric tons of landed mass, as compared to a single, large Fission Surface Power (FSP) concept. Breaking the power system up into modular packages creates new operational opportunities, with benefits ranging from reduced lander self-sufficiency for power, to extending the exploration distance from a single landing site. Although a large FSP trades well for operational complexity, a modular approach potentially allows Program Managers more flexibility to absorb late mission changes with less schedule or mass risk, better supports small precursor missions, and allows a program to slowly build up mission capability over time. A number of Kilopower disadvantages-and mitigation strategies-were also explored.
Growth hormone deficiency after mild combat-related traumatic brain injury.
Ioachimescu, Adriana G; Hampstead, Benjamin M; Moore, Anna; Burgess, Elizabeth; Phillips, Lawrence S
2015-08-01
Traumatic brain injury (TBI) has been recognized as a cause of growth hormone deficiency (GHD) in civilians. However, comparable data are sparse in veterans who incurred TBI during combat. Our objective was to determine the prevalence of GHD in veterans with a history of combat-related TBI, and its association with cognitive and psychosocial dysfunction. Single center prospective study. Twenty male veterans with mild TBI incurred during combat 8-72 months prior to enrollment. GHD was defined by a GH peak <3 μg/L during glucagon stimulation test. Differences in neuropsychological and emotional functioning and in quality of life of the GHD veterans were described using Cohen's d. Large effect sizes were considered meaningful. Mean age was 33.7 years (SD 7.8) and all subjects had normal thyroid hormone and cortisol levels. Five (25%) exhibited a subnormal response to glucagon. Sixteen participants (80%) provided sufficient effort for valid neuropsychological assessment (12 GH-sufficient, 4 GHD). There were large effect size differences in self-monitoring during memory testing (d = 1.46) and inhibitory control (d = 0.92), with worse performances in the GHD group. While fatigue and post-traumatic stress disorder were comparable, the GHD group reported more depression (d = 0.80) and lower quality of life (d = 0.64). Our study found a 25% prevalence of GHD in veterans with mild TBI as shown by glucagon stimulation. The neuropsychological findings raise the possibility that GHD has adverse effects on executive abilities and mood. Further studies are needed to determine whether GH replacement is an effective treatment in these patients.
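A minimal sketch of the effect-size calculation referenced above (Cohen's d with a pooled standard deviation) follows; the scores are made-up illustrations, not data from the study.

```python
# Minimal sketch of Cohen's d for two independent groups (here GHD vs
# GH-sufficient). The numbers below are hypothetical, not study data.
import numpy as np

def cohens_d(group_a, group_b):
    a, b = np.asarray(group_a, float), np.asarray(group_b, float)
    n_a, n_b = len(a), len(b)
    pooled_var = ((n_a - 1) * a.var(ddof=1) + (n_b - 1) * b.var(ddof=1)) / (n_a + n_b - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical test scores: 12 GH-sufficient veterans vs 4 GHD veterans.
sufficient = [50, 48, 52, 47, 49, 51, 46, 53, 48, 50, 47, 52]
ghd        = [42, 39, 45, 40]
d = cohens_d(sufficient, ghd)
print(f"Cohen's d = {d:.2f}  (|d| >= 0.8 is conventionally a 'large' effect)")
```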
Calculating p-values and their significances with the Energy Test for large datasets
NASA Astrophysics Data System (ADS)
Barter, W.; Burr, C.; Parkes, C.
2018-04-01
The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
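A minimal sketch of the energy-test statistic itself is given below, using a Gaussian distance-weighting function, toy samples, and one common normalisation of the pair sums; the kernel width and sample sizes are assumptions, and the paper's method for scaling the null T distribution from small to large samples is not reproduced.

```python
# Minimal sketch of the energy-test statistic T for two multivariate samples,
# with a Gaussian weighting function psi(d) = exp(-d^2 / (2 sigma^2)) and one
# common normalisation of the pair sums. Toy data only.
import numpy as np
from scipy.spatial.distance import cdist

def energy_T(x, y, sigma=1.0):
    psi = lambda d: np.exp(-d**2 / (2.0 * sigma**2))
    n, m = len(x), len(y)
    dxx, dyy, dxy = psi(cdist(x, x)), psi(cdist(y, y)), psi(cdist(x, y))
    # Exclude self-pairs (i == j) in the within-sample sums.
    t_xx = (dxx.sum() - np.trace(dxx)) / (2.0 * n * (n - 1))
    t_yy = (dyy.sum() - np.trace(dyy)) / (2.0 * m * (m - 1))
    t_xy = dxy.sum() / (n * m)
    return t_xx + t_yy - t_xy

rng = np.random.default_rng(1)
sample_1 = rng.normal(0.0, 1.0, size=(500, 3))
sample_2 = rng.normal(0.1, 1.0, size=(500, 3))   # slightly shifted population
print(f"T = {energy_T(sample_1, sample_2):.5f}")
```

A p-value then follows from comparing the observed T with its null distribution, which is what the paper proposes to obtain by rescaling small-sample permutation results rather than by brute-force permutation of the large samples.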
The Panchromatic Comparative Exoplanetary Treasury Program
NASA Astrophysics Data System (ADS)
Sing, David
2016-10-01
HST has played the definitive role in the characterization of exoplanets and from the first planets available, we have learned that their atmospheres are incredibly diverse. The large number of transiting planets now available has prompted a new era of atmospheric studies, where wide scale comparative planetology is now possible. The atmospheric chemistry of cloud/haze formation and atmospheric mass-loss are major outstanding issues in the field of exoplanets, and we seek to make progress gaining insight into their underlying physical processes through comparative studies. Here we propose to use Hubble's full spectroscopic capabilities to produce the first large-scale, simultaneous UVOIR comparative study of exoplanets. With full wavelength coverage, an entire planet's atmosphere can be probed simultaneously and with sufficient numbers of planets, we can statistically compare their features with physical parameters for the first time. This panchromatic program will build a lasting HST legacy, providing the UV and blue-optical spectra unavailable to JWST. From these observations, chemistry over a wide range of physical environments will be probed, from the hottest condensates to much cooler planets where photochemical hazes could be present. Constraints on aerosol size and composition will help unlock our understanding of clouds and how they are suspended at such high altitudes. Notably, there have been no large transiting UV HST programs, and this panchromatic program will provide a fundamental legacy contribution to atmospheric escape of small exoplanets, where the mass loss can be significant and have a major impact on the evolution of the planet itself.
Hand coverage by alcohol-based handrub varies: Volume and hand size matter.
Zingg, Walter; Haidegger, Tamas; Pittet, Didier
2016-12-01
Visitors of an infection prevention and control conference performed hand hygiene with 1, 2, or 3 mL ultraviolet light-traced alcohol-based handrub. Coverage of palms, dorsums, and fingertips was measured by digital images. Palms of all hand sizes were sufficiently covered when 2 mL was applied, whereas dorsums of medium and large hands were never sufficiently covered. Palmar fingertips were sufficiently covered when 2 or 3 mL was applied, and dorsal fingertips were never sufficiently covered. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Protein Structure Determination using Metagenome sequence data
Ovchinnikov, Sergey; Park, Hahnbeom; Varghese, Neha; Huang, Po-Ssu; Pavlopoulos, Georgios A.; Kim, David E.; Kamisetty, Hetunandan; Kyrpides, Nikos C.; Baker, David
2017-01-01
Despite decades of work by structural biologists, there are still ~5200 protein families with unknown structure outside the range of comparative modeling. We show that Rosetta structure prediction guided by residue-residue contacts inferred from evolutionary information can accurately model proteins that belong to large families, and that metagenome sequence data more than triples the number of protein families with sufficient sequences for accurate modeling. We then integrate metagenome data, contact based structure matching and Rosetta structure calculations to generate models for 614 protein families with currently unknown structures; 206 are membrane proteins and 137 have folds not represented in the PDB. This approach provides the representative models for large protein families originally envisioned as the goal of the protein structure initiative at a fraction of the cost. PMID:28104891
Prediagnostic Plasma 25-Hydroxyvitamin D and Pancreatic Cancer Survival
Yuan, Chen; Qian, Zhi Rong; Babic, Ana; Morales-Oyarvide, Vicente; Rubinson, Douglas A.; Kraft, Peter; Ng, Kimmie; Bao, Ying; Giovannucci, Edward L.; Ogino, Shuji; Stampfer, Meir J.; Gaziano, John Michael; Sesso, Howard D.; Buring, Julie E.; Cochrane, Barbara B.; Chlebowski, Rowan T.; Snetselaar, Linda G.; Manson, JoAnn E.; Fuchs, Charles S.
2016-01-01
Purpose Although vitamin D inhibits pancreatic cancer proliferation in laboratory models, the association of plasma 25-hydroxyvitamin D [25(OH)D] with patient survival is largely unexplored. Patients and Methods We analyzed survival among 493 patients from five prospective US cohorts who were diagnosed with pancreatic cancer from 1984 to 2008. We estimated hazard ratios (HRs) for death by plasma level of 25(OH)D (insufficient, < 20 ng/mL; relative insufficiency, 20 to < 30 ng/mL; sufficient ≥ 30 ng/mL) by using Cox proportional hazards regression models adjusted for age, cohort, race and ethnicity, smoking, diagnosis year, stage, and blood collection month. We also evaluated 30 tagging single-nucleotide polymorphisms in the vitamin D receptor gene, requiring P < .002 (0.05 divided by 30 genotyped variants) for statistical significance. Results Mean prediagnostic plasma level of 25(OH)D was 24.6 ng/mL, and 165 patients (33%) were vitamin D insufficient. Compared with patients with insufficient levels, multivariable-adjusted HRs for death were 0.79 (95% CI, 0.48 to 1.29) for patients with relative insufficiency and 0.66 (95% CI, 0.49 to 0.90) for patients with sufficient levels (P trend = .01). These results were unchanged after further adjustment for body mass index and history of diabetes (P trend = .02). The association was strongest among patients with blood collected within 5 years of diagnosis, with an HR of 0.58 (95% CI, 0.35 to 0.98) comparing patients with sufficient to patients with insufficient 25(OH)D levels. No single-nucleotide polymorphism at the vitamin D receptor gene met our corrected significance threshold of P < .002; rs7299460 was most strongly associated with survival (HR per minor allele, 0.80; 95% CI, 0.68 to 0.95; P = .01). Conclusion We observed longer overall survival in patients with pancreatic cancer who had sufficient prediagnostic plasma levels of 25(OH)D. PMID:27325858
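A minimal sketch of the type of model described (a Cox proportional hazards fit of survival by 25(OH)D category) is shown below; it assumes the lifelines package, uses a synthetic data frame, and adjusts for far fewer covariates than the study's full adjustment set.

```python
# Minimal sketch of a Cox proportional hazards model of survival by 25(OH)D
# category, adjusted for a single covariate. Assumes the `lifelines` package;
# the data are synthetic and the covariate list is illustrative only.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 493
vitd = rng.uniform(8, 45, n)                          # plasma 25(OH)D, ng/mL
df = pd.DataFrame({
    "time": rng.exponential(12, n),                   # months of follow-up
    "event": rng.integers(0, 2, n),                   # death indicator
    "age": rng.normal(65, 8, n),
    "relative_insufficiency": ((vitd >= 20) & (vitd < 30)).astype(int),
    "sufficient": (vitd >= 30).astype(int),           # reference: < 20 ng/mL
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()   # hazard ratios are reported as exp(coef)
```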
Levanon, Yafa; Lerman, Yehuda; Gefen, Amit; Ratzon, Navah Z
2014-01-01
Awkward body posture while typing is associated with musculoskeletal disorders (MSDs). Valid rapid assessment of computer workers' body posture is essential for the prevention of MSD among this large population. This study aimed to examine the validity of the modified rapid upper limb assessment (mRULA) which adjusted the rapid upper limb assessment (RULA) for computer workers. Moreover, this study examines whether one observation during a working day is sufficient or more observations are needed. A total of 29 right-handed computer workers were recruited. RULA and mRULA were conducted. The observations were then repeated six times at one-hour intervals. A significant moderate correlation (r = 0.6 and r = 0.7 for mouse and keyboard, respectively) was found between the assessments. No significant differences were found between one observation and six observations per working day. The mRULA was found to be valid for the assessment of computer workers, and one observation was sufficient to assess the work-related risk factor.
Trietsch, Jasper; van Steenkiste, Ben; Hobma, Sjoerd; Frericks, Arnoud; Grol, Richard; Metsemakers, Job; van der Weijden, Trudy
2014-12-01
A quality improvement strategy consisting of comparative feedback and peer review embedded in available local quality improvement collaboratives proved to be effective in changing the test-ordering behaviour of general practitioners. However, implementing this strategy was problematic. We aimed for large-scale implementation of an adapted strategy covering both test ordering and prescribing performance. Because we failed to achieve large-scale implementation, the aim of this study was to describe and analyse the challenges of the transferring process. In a qualitative study 19 regional health officers, pharmacists, laboratory specialists and general practitioners were interviewed within 6 months after the transfer period. The interviews were audiotaped, transcribed and independently coded by two of the authors. The codes were matched to the dimensions of the normalization process theory. The general idea of the strategy was widely supported, but generating the feedback was more complex than expected and the need for external support after transfer of the strategy remained high because participants did not assume responsibility for the work and the distribution of resources that came with it. Evidence on effectiveness, a national infrastructure for these collaboratives and a general positive attitude were not sufficient for normalization. Thinking about managing large databases, responsibility for tasks and distribution of resources should start as early as possible when planning complex quality improvement strategies. Merely exploring the barriers and facilitators experienced in a preceding trial is not sufficient. Although multifaceted implementation strategies to change professional behaviour are attractive, their inherent complexity is also a pitfall for large-scale implementation. © 2014 John Wiley & Sons, Ltd.
Basolateral junctions are sufficient to suppress epithelial invasion during Drosophila oogenesis.
Szafranski, Przemyslaw; Goode, Scott
2007-02-01
Epithelial junctions play crucial roles during metazoan evolution and development by facilitating tissue formation, maintenance, and function. Little is known about the role of distinct types of junctions in controlling epithelial transformations leading to invasion of neighboring tissues. Discovering the key junction complexes that control these processes and how they function may also provide mechanistic insight into carcinoma cell invasion. Here, using the Drosophila ovary as a model, we show that four proteins of the basolateral junction (BLJ), Fasciclin-2, Neuroglian, Discs-large, and Lethal-giant-larvae, but not proteins of other epithelial junctions, directly suppress epithelial tumorigenesis and invasion. Remarkably, the expression pattern of Fasciclin-2 predicts which cells will invade. We compared the apicobasal polarity of BLJ tumor cells to border cells (BCs), an epithelium-derived cluster that normally migrates during mid-oogenesis. Both tumor cells and BCs differentiate a lateralized membrane pattern that is necessary but not sufficient for invasion. Independent of lateralization, derepression of motility pathways is also necessary, as indicated by a strong linear correlation between faster BC migration and an increased incidence of tumor invasion. However, without membrane lateralization, derepression of motility pathways is also not sufficient for invasion. Our results demonstrate that spatiotemporal patterns of basolateral junction activity directly suppress epithelial invasion by organizing the cooperative activity of distinct polarity and motility pathways.
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
2000-01-01
A pulsating form of hydrodynamic instability has recently been shown to arise during liquid-propellant deflagration in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that, when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wave numbers are sufficiently small. This analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wave number perturbations, the intrinsic pulsating instability for small wave numbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.
High-Resolution Large Field-of-View FUV Compact Camera
NASA Technical Reports Server (NTRS)
Spann, James F.
2006-01-01
The need for a high-resolution camera with a large field of view that is capable of imaging dim emissions in the far-ultraviolet is driven by the widely varying intensities of FUV emissions and spatial/temporal scales of phenomena of interest in the Earth's ionosphere. In this paper, the concept of a camera is presented that is designed to achieve these goals in a lightweight package with sufficient visible light rejection to be useful for dayside and nightside emissions. The camera employs the concept of self-filtering to achieve good spectral resolution tuned to specific wavelengths. The large field of view is sufficient to image the Earth's disk at Geosynchronous altitudes and capable of a spatial resolution of >20 km. The optics and filters are emphasized.
Assessment of Breast, Brain and Skin Pathological Tissue Using Full Field OCM
NASA Astrophysics Data System (ADS)
Dalimier, Eugénie; Assayag, Osnath; Harms, Fabrice; Boccara, A. Claude
The aim of this chapter is to assess whether the images of the breast, brain, and skin tissue obtained by FFOCM contain sufficient detail to allow pathologists to make a diagnosis of cancer and other pathologies comparable to what was obtained by conventional histological techniques. More precisely, it is necessary to verify on FFOCM images if it is possible to differentiate a healthy area from a pathological area. The reader interested in other organs or in animal studies may find a large number of 2D or 3D images in the atlas [2].
Quasi-Static Analysis of Round LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of round LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
Quasi-Static Analysis of LaRC THUNDER Actuators
NASA Technical Reports Server (NTRS)
Campbell, Joel F.
2007-01-01
An analytic approach is developed to predict the shape and displacement with voltage in the quasi-static limit of LaRC Thunder Actuators. The problem is treated with classical lamination theory and Von Karman non-linear analysis. In the case of classical lamination theory exact analytic solutions are found. It is shown that classical lamination theory is insufficient to describe the physical situation for large actuators but is sufficient for very small actuators. Numerical results are presented for the non-linear analysis and compared with experimental measurements. Snap-through behavior, bifurcation, and stability are presented and discussed.
Plasma motions in planetary magnetospheres
NASA Technical Reports Server (NTRS)
Hill, T. W.; Dessler, A. J.
1991-01-01
Interplanetary space is pervaded by a supersonic 'solar wind' plasma; five planets, in addition to the earth, have magnetic fields of sufficient strength to form the cometlike cavities called 'magnetospheres'. Comparative studies of these structures have indicated the specific environmental factor that can result in dramatic differences in the behavior of any pair of magnetospheres. Although planetary magnetospheres are large enough to serve as laboratories for in situ study of cosmic plasma and magnetic field behavior effects on particle acceleration and EM emission, much work remains to be done toward relating magnetospheric physics results to the study of remote astrophysical plasmas.
NASA Technical Reports Server (NTRS)
Miles, J. H.
1981-01-01
A predicted standing wave pressure and phase angle profile for a hard wall rectangular duct with a region of converging-diverging area variation is compared to published experimental measurements in a study of sound propagation without flow. The factor-of-1/2 area variation used is of sufficient magnitude to produce large reflections. The prediction is based on a transmission matrix approach developed for the analysis of sound propagation in a variable area duct with and without flow. The agreement between the measured and predicted results is shown to be excellent.
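A minimal sketch of a plane-wave, no-flow transmission-matrix calculation of this kind follows, for a hard-walled duct whose area is halved over a short section approximated here as abrupt steps; the dimensions, frequencies and anechoic termination are illustrative, not the configuration of the paper.

```python
# Minimal sketch of a plane-wave (no-flow) transfer-matrix calculation for a
# hard-walled duct whose area is reduced by a factor of 2 over a short section,
# approximated as abrupt area steps. All dimensions and the anechoic outlet
# termination are illustrative choices.
import numpy as np

rho, c = 1.21, 343.0          # air density [kg/m^3] and sound speed [m/s]

def segment_matrix(length, area, f):
    """2x2 transfer matrix relating (pressure, volume velocity) across a uniform segment."""
    k = 2.0 * np.pi * f / c
    Z = rho * c / area
    return np.array([[np.cos(k * length),          1j * Z * np.sin(k * length)],
                     [1j * np.sin(k * length) / Z, np.cos(k * length)]])

def inlet_reflection(segments, f):
    """Reflection coefficient at the inlet for an anechoic outlet termination."""
    T = np.eye(2, dtype=complex)
    for length, area in segments:
        T = T @ segment_matrix(length, area, f)
    S_in, S_out = segments[0][1], segments[-1][1]
    Z_load = rho * c / S_out                       # anechoic termination
    Z_in = (T[0, 0] * Z_load + T[0, 1]) / (T[1, 0] * Z_load + T[1, 1])
    Z_char = rho * c / S_in
    return (Z_in - Z_char) / (Z_in + Z_char)

# Inlet duct, contracted throat (half the area), outlet duct.
duct = [(0.30, 0.01), (0.15, 0.005), (0.30, 0.01)]   # (length [m], area [m^2])
for f in (200.0, 500.0, 1000.0):
    R = inlet_reflection(duct, f)
    print(f"{f:6.0f} Hz  |R| = {abs(R):.3f}")
```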
COBE DMR-normalized open inflation cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gorski, Krzysztof M.; Ratra, Bharat; Sugiyama, Naoshi; Banday, Anthony J.
1995-01-01
A cut-sky orthogonal mode analysis of the 2 year COBE DMR 53 and 90 GHz sky maps (in Galactic coordinates) is used to determine the normalization of an open inflation model based on the cold dark matter (CDM) scenario. The normalized model is compared to measures of large-scale structure in the universe. Although the DMR data alone does not provide sufficient discriminative power to prefer a particular value of the mass density parameter, the open model appears to be reasonably consistent with observations when Omega(sub 0) is approximately 0.3-0.4 and merits further study.
Piezoelectric coefficients of bulk 3R transition metal dichalcogenides
NASA Astrophysics Data System (ADS)
Konabe, Satoru; Yamamoto, Takahiro
2017-09-01
The piezoelectric properties of bulk transition metal dichalcogenides (TMDCs) with a 3R structure were investigated using first-principles calculations based on density functional theory combined with the Berry phase treatment. Values for the elastic constant Cijkl, the piezoelectric coefficient eijk, and the piezoelectric coefficient dijk are given for bulk 3R-TMDCs (MoS2, MoSe2, WS2, and WSe2). The piezoelectric coefficients of bulk 3R-TMDCs are shown to be sufficiently large or comparable to those of conventional bulk piezoelectric materials such as α-quartz, wurtzite GaN, and wurtzite AlN.
Sulfur- and Oxygen(?)-Rich Cores of Large Icy Satellites
NASA Astrophysics Data System (ADS)
McKinnon, W. B.
2008-12-01
The internal structures of Jupiter's large moons, Io, Europa, Ganymede, and Callisto, and Titan once Cassini data is sufficiently analyzed, can be usefully compared with those of the terrestrial planets. With sufficient heating we expect not only separation of rock from ice, but also metal from rock. The internally generated dipole magnetic field of Ganymede is perhaps the strongest evidence for this separation, but the gravity field of Io also implies a metallic core. Nevertheless, the evolutionary paths to differentiation taken (or avoided in the case of Callisto) by these worlds are quite different from those presumed to have the governed differentiation of the terrestrial planets, major asteroids, and iron meteorite parent bodies. Several aspects stand out. Slow accretion in gas-starved protosatellite nebulae implies that neither giant, magma-forming impacts were likely, nor were short-lived radiogenic nuclei in sufficient abundance to drive prompt differentiation. Rather, differentiation would have relied on quotidian long-lived radionuclide heating and/or in the cases of Io, Europa, and possibly Ganymede, tidal heating in mean-motion resonances. The best a priori estimate for the composition of the "rock" component near Jupiter and Saturn is solar, and it is this material that is fed into the accretion disks around Jupiter and Saturn, across the gaps the planets likely created in the solar nebula. Solar composition rock implies a sulfur abundance close to the Fe-FeS eutectic (at appropriate pressures). The rocky component of these worlds was likely highly oxidized as well, based on carbonaceous meteorite analogues, implying relatively low Mg#s (by terrestrial standards), lower amounts of Fe metal available for core formation, or even oxidized Fe3O4 as a potential core component. The latter may be important, as an Fe-S-O melt wets silicate grains readily, and thus can easily percolate downward, Elsasser style, to form a core. Nevertheless, the amount of FeS alone available to form a core may have been considerable, and a picture emerges of large, relatively low-density cores (a far greater proportion of "light alloying elements" than in the Earth's core), and relatively iron-rich rock mantles. Ganymede, and possibly Europa, may even retain residual solid FeS in their rock mantles, depending on the tidal heating history of each. Large, dominantly fluid cores imply enhanced mantle tidal deformation and heating. Published models have claimed that the Galilean satellites are depleted in Fe compared to rock, and in the case of Ganymede, that it is either depleted or enhanced in Fe. Obviously Ganymede cannot be both, and detailed structural models show that the Galilean satellites can be explained in terms of solar composition, once one allows for abundant sulfur and hot (liquid) cores.
Quantifying the uncertainty in heritability.
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-05-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large.
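A minimal sketch of the Bayesian view described above follows: a flat-prior grid posterior over h2 for the variance-components model y ~ N(0, sigma^2 (h2 K + (1 - h2) I)), computed efficiently via the eigendecomposition of the kinship matrix. The kinship matrix and phenotype are synthetic, and the paper's full model (with fixed effects and its chosen priors) is not reproduced.

```python
# Minimal sketch: grid posterior for heritability h2 in the model
# y ~ N(0, sigma^2 (h2*K + (1-h2)*I)), using the eigendecomposition of K and
# profiling out the overall scale sigma^2. Synthetic data, flat prior on h2.
import numpy as np

rng = np.random.default_rng(3)
n, h2_true = 400, 0.5

# Synthetic kinship matrix (positive semi-definite) and a trait simulated from it.
G = rng.standard_normal((n, 800))
K = G @ G.T / G.shape[1]
L = np.linalg.cholesky(h2_true * K + (1 - h2_true) * np.eye(n) + 1e-8 * np.eye(n))
y = L @ rng.standard_normal(n)

lam, U = np.linalg.eigh(K)          # K = U diag(lam) U^T
yt = U.T @ y                        # rotated phenotype, independent components

h2_grid = np.linspace(0.01, 0.99, 99)
loglik = np.empty_like(h2_grid)
for i, h2 in enumerate(h2_grid):
    v = h2 * lam + (1.0 - h2)       # per-component variance up to sigma^2
    sigma2_hat = np.mean(yt**2 / v) # profile MLE of the overall scale
    loglik[i] = -0.5 * (np.sum(np.log(v)) + n * np.log(sigma2_hat) + n)

post = np.exp(loglik - loglik.max())
post /= post.sum()                  # discrete flat-prior posterior over the grid
mean = np.sum(h2_grid * post)
sd = np.sqrt(np.sum((h2_grid - mean) ** 2 * post))
print(f"posterior mean h2 = {mean:.2f}, posterior sd = {sd:.2f}")
```

The posterior standard deviation printed at the end is the quantity of interest here: it makes explicit how uncertain the heritability estimate remains even when the point estimate looks precise.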
Statistical significance test for transition matrices of atmospheric Markov chains
NASA Technical Reports Server (NTRS)
Vautard, Robert; Mo, Kingtse C.; Ghil, Michael
1990-01-01
Low-frequency variability of large-scale atmospheric dynamics can be represented schematically by a Markov chain of multiple flow regimes. This Markov chain contains useful information for the long-range forecaster, provided that the statistical significance of the associated transition matrix can be reliably tested. Monte Carlo simulation yields a very reliable significance test for the elements of this matrix. The results of this test agree with previously used empirical formulae when each cluster of maps identified as a distinct flow regime is sufficiently large and when they all contain a comparable number of maps. Monte Carlo simulation provides a more reliable way to test the statistical significance of transitions to and from small clusters. It can determine the most likely transitions, as well as the most unlikely ones, with a prescribed level of statistical significance.
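A minimal sketch of the Monte Carlo idea follows, using a permutation null of no serial dependence for a synthetic regime sequence; this null model is a simplification of the one used in the paper.

```python
# Minimal sketch of a Monte Carlo significance test for the entries of a regime
# transition matrix: each observed count of transitions i -> j is compared with
# its distribution under a null of no serial dependence, generated by randomly
# permuting the observed regime sequence. Synthetic sequence, simplified null.
import numpy as np

def transition_counts(seq, n_regimes):
    counts = np.zeros((n_regimes, n_regimes), dtype=int)
    for a, b in zip(seq[:-1], seq[1:]):
        counts[a, b] += 1
    return counts

rng = np.random.default_rng(4)
n_regimes, length, n_mc = 3, 400, 2000

# Synthetic regime sequence with some built-in persistence.
seq = [0]
for _ in range(length - 1):
    seq.append(seq[-1] if rng.random() < 0.6 else rng.integers(n_regimes))
seq = np.array(seq)

observed = transition_counts(seq, n_regimes)
null = np.zeros((n_mc, n_regimes, n_regimes), dtype=int)
for k in range(n_mc):
    null[k] = transition_counts(rng.permutation(seq), n_regimes)

# One-sided p-value that each transition is more frequent than expected by chance.
p_high = (null >= observed).mean(axis=0)
print("observed counts:\n", observed)
print("P(null >= observed):\n", np.round(p_high, 3))
```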
Mutoh, Hiroki; Mishina, Yukiko; Gallero-Salas, Yasir; Knöpfel, Thomas
2015-01-01
Traditional small molecule voltage sensitive dye indicators have been a powerful tool for monitoring large scale dynamics of neuronal activities but have several limitations including the lack of cell class specific targeting, invasiveness and difficulties in conducting longitudinal studies. Recent advances in the development of genetically-encoded voltage indicators have successfully overcome these limitations. Genetically-encoded voltage indicators (GEVIs) provide sufficient sensitivity to map cortical representations of sensory information and spontaneous network activities across cortical areas and different brain states. In this study, we directly compared the performance of a prototypic GEVI, VSFP2.3, with that of a widely used small molecule voltage sensitive dye (VSD), RH1691, in terms of their ability to resolve mesoscopic scale cortical population responses. We used three synchronized CCD cameras to simultaneously record the dual emission ratiometric fluorescence signal from VSFP2.3 and RH1691 fluorescence. The results show that VSFP2.3 offers more stable and less invasive recording conditions, while the signal-to-noise level and the response dynamics to sensory inputs are comparable to RH1691 recordings. PMID:25964738
Re-evaluating the link between brain size and behavioural ecology in primates.
Powell, Lauren E; Isler, Karin; Barton, Robert A
2017-10-25
Comparative studies have identified a wide range of behavioural and ecological correlates of relative brain size, with results differing between taxonomic groups, and even within them. In primates for example, recent studies contradict one another over whether social or ecological factors are critical. A basic assumption of such studies is that with sufficiently large samples and appropriate analysis, robust correlations indicative of selection pressures on cognition will emerge. We carried out a comprehensive re-examination of correlates of primate brain size using two large comparative datasets and phylogenetic comparative methods. We found evidence in both datasets for associations between brain size and ecological variables (home range size, diet and activity period), but little evidence for an effect of social group size, a correlation which has previously formed the empirical basis of the Social Brain Hypothesis. However, reflecting divergent results in the literature, our results exhibited instability across datasets, even when they were matched for species composition and predictor variables. We identify several potential empirical and theoretical difficulties underlying this instability and suggest that these issues raise doubts about inferring cognitive selection pressures from behavioural correlates of brain size. © 2017 The Author(s).
Energy use in Poland, 1970--1991: Sectoral analysis and international comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyers, S.; Schipper, L.; Salay, J.
This report provides an analysis of how and why energy use has changed in Poland since the 1970s, with particular emphasis on changes since the country began its transition from a centrally planned to a market economy in 1989. The most important factors behind the large decline in Polish energy use in 1990 were a sharp fall in industrial output and a huge drop in residential coal use driven by higher prices. The structural shift away from heavy industry was slight. Key factors that worked to increase energy use were the rise in energy intensity in many heavy industries and the shift toward more energy intensive modes of transport. The growth in private activities in 1991 was nearly sufficient to balance out continued decline in industrial energy use in that year. We compared energy use in Poland and the factors that shape it with similar elements in the West. We made a number of modifications to the Polish energy data to bring it closer to a Western energy accounting framework, and augmented these with a variety of estimates in order to construct a sufficiently detailed portrait of Polish energy use to allow comparison with Western data. Per capita energy use in Poland was not much below W. European levels despite Poland's much lower GDP per capita. Poland has comparatively high energy intensities in manufacturing and residential space heating, and a large share of heavy industries in manufacturing output, all factors that contribute to higher energy use per capita. The structure of passenger and freight transportation and the energy intensity of automobiles contribute to lower energy use per capita in Poland than in Western Europe, but the patterns in Poland are moving closer to those that prevail in the West.
Mazzilli, Sarah A.; Hershberger, Pamela A.; Reid, Mary E.; Bogner, Paul N.; Atwood, Kristopher; Trump, Donald L.; Johnson, Candace S.
2015-01-01
The chemopreventive actions of vitamin D were examined in the N-nitroso-tris-chloroethylurea (NTCU) mouse model, a progressive model of lung squamous cell carcinoma (SCC). SWR/J mice were fed a deficient diet (D) containing no vitamin D3, a sufficient diet (S) containing 2000 IU/kg vitamin D3, or the same diets in combination with the active metabolite of vitamin D, calcitriol (C) (80 μg/kg, weekly). The percentage (%) of the mucosal surface of large airways occupied by dysplastic lesions was determined in mice after treatment with a total dose of 15 or 25 μmol NTCU (N). After treatment with 15 μmol NTCU, the % of the surface of large airways containing high-grade dysplastic (HGD) lesions were vitamin D-deficient +NTCU (DN), 22.7 % (p<0.05 compared to vitamin D-sufficient +NTCU (SN)); DN + C, 12.3%; SN, 8.7%; and SN + C, 6.6%. The extent of HGD increased with NTCU dose in the DN group. Proliferation, assessed by Ki-67 labeling, increased upon NTCU treatment. The highest Ki-67 labeling index was seen in the DN group. As compared to SN mice, DN mice exhibited a 3-fold increase (p <0.005) in circulating white blood cells (WBC), a 20% (p <0.05) increase in IL-6 levels, and a 4 -fold (p <0.005) increase in WBC in bronchial lavages. Thus, vitamin D repletion reduces the progression of premalignant lesions, proliferation, and inflammation, and may thereby suppress development of lung SCC. Further investigations of the chemopreventive effects of vitamin D in lung SCC are warranted. PMID:26276745
Evaluation of residual uranium contamination in the dirt floor of an abandoned metal rolling mill.
Glassford, Eric; Spitz, Henry; Lobaugh, Megan; Spitler, Grant; Succop, Paul; Rice, Carol
2013-02-01
A single, large, bulk sample of uranium-contaminated material from the dirt floor of an abandoned metal rolling mill was separated into different types and sizes of aliquots to simulate samples that would be collected during site remediation. The facility rolled approximately 11,000 tons of hot-forged ingots of uranium metal approximately 60 y ago, and it has not been used since that time. Thirty small mass (≈ 0.7 g) and 15 large mass (≈ 70 g) samples were prepared from the heterogeneously contaminated bulk material to determine how measurements of the uranium contamination vary with sample size. Aliquots of bulk material were also resuspended in an exposure chamber to produce six samples of respirable particles that were obtained using a cascade impactor. Samples of removable surface contamination were collected by wiping 100 cm² of the interior surfaces of the exposure chamber with 47-mm-diameter fiber filters. Uranium contamination in each of the samples was measured directly using high-resolution gamma ray spectrometry. As expected, results for isotopic uranium (i.e., 235U and 238U) measured with the large-mass and small-mass samples are significantly different (p < 0.001), and the coefficient of variation (COV) for the small-mass samples was greater than for the large-mass samples. The uranium isotopic concentrations measured in the air and on the wipe samples were not significantly different and were also not significantly different (p > 0.05) from results for the large- or small-mass samples. Large-mass samples are more reliable for characterizing heterogeneously distributed radiological contamination than small-mass samples since they exhibit the least variation compared to the mean. Thus, samples should be sufficiently large in mass to insure that the results are truly representative of the heterogeneously distributed uranium contamination present at the facility. Monitoring exposure of workers and the public as a result of uranium contamination resuspended during site remediation should be evaluated using samples of sufficient size and type to accommodate the heterogeneous distribution of uranium in the bulk material.
Simple Deterministically Constructed Recurrent Neural Networks
NASA Astrophysics Data System (ADS)
Rodan, Ali; Tiňo, Peter
A large number of models for time series processing, forecasting or modeling follows a state-space formulation. Models in the specific class of state-space approaches, referred to as Reservoir Computing, fix their state-transition function. The state space with the associated state transition structure forms a reservoir, which is supposed to be sufficiently complex so as to capture a large number of features of the input stream that can be potentially exploited by the reservoir-to-output readout mapping. The largely "black box" character of reservoirs prevents us from performing a deeper theoretical investigation of the dynamical properties of successful reservoirs. Reservoir construction is largely driven by a series of (more-or-less) ad-hoc randomized model building stages, with both the researchers and practitioners having to rely on a series of trials and errors. We show that a very simple deterministically constructed reservoir with simple cycle topology gives performances comparable to those of the Echo State Network (ESN) on a number of time series benchmarks. Moreover, we argue that the memory capacity of such a model can be made arbitrarily close to the proved theoretical limit.
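A minimal sketch of such a deterministically constructed reservoir follows: a simple cycle of identical weights with deterministic, sign-alternating input weights and a ridge-regression readout. The weight values, reservoir size and toy one-step-ahead prediction task are illustrative choices, not those of the study.

```python
# Minimal sketch of a deterministically constructed reservoir with a simple
# cycle topology: each unit feeds only its successor on a ring with a single
# fixed weight r, input weights share one magnitude v with alternating signs,
# and only the linear readout is trained (ridge regression).
import numpy as np

rng = np.random.default_rng(5)
N, r, v, washout, ridge = 100, 0.9, 0.5, 100, 1e-6

# Ring (cycle) reservoir matrix and deterministic input weights.
W = np.zeros((N, N))
for i in range(N):
    W[(i + 1) % N, i] = r
W_in = v * np.where(np.arange(N) % 2 == 0, 1.0, -1.0)

# Toy task: one-step-ahead prediction of a noisy sine wave.
T = 2000
u = np.sin(0.2 * np.arange(T + 1)) + 0.05 * rng.standard_normal(T + 1)

states = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

X = states[washout:]
y = u[washout + 1:T + 1]                      # next-step targets
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)

pred = X @ W_out
nmse = np.mean((pred - y) ** 2) / np.var(y)
print(f"one-step-ahead NMSE on training data: {nmse:.4f}")
```

Because the reservoir is fully specified by the two scalars r and v, there is no randomized model-building stage to tune, which is the point the abstract makes against conventional randomly generated reservoirs.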
Global Hopf bifurcation analysis on a BAM neural network with delays
NASA Astrophysics Data System (ADS)
Sun, Chengjun; Han, Maoan; Pang, Xiaoming
2007-01-01
A delayed differential equation that models a bidirectional associative memory (BAM) neural network with four neurons is considered. By using a global Hopf bifurcation theorem for FDE and Bendixson's criterion for high-dimensional ODE, a group of sufficient conditions for the system to have multiple periodic solutions is obtained when the sum of delays is sufficiently large.
Evidence for Tropopause Layer Moistening by Convection During CRYSTAL-FACE
NASA Technical Reports Server (NTRS)
Ackerman, A.; Fridlind, A.; Jensen, E.; Miloshevich, L.; Heymsfield, G.; McGill, M.
2003-01-01
Measurements and analysis of the impact of deep convection on tropopause layer moisture are easily confounded by difficulties making precise observations with sufficient spatial coverage before and after convective events and difficulties distinguishing between changes due to local convection versus large-scale advection. The interactions between cloud microphysics and dynamics in the convective transport of moisture into the tropopause layer also result in a sufficiently complex and poorly characterized system to allow for considerable freedom in theoretical models of stratosphere-troposphere exchange. In this work we perform detailed large-eddy simulations with an explicit cloud microphysics model to study the impact of deep convection on tropopause layer moisture profiles observed over southern Florida during CRYSTAL-FACE. For four days during the campaign (July 11, 16, 28, and 29) we initialize a 100-km square domain with temperature and moisture profiles measured prior to convection at the PARSL ground site, and initiate convection with a warm bubble that produces an anvil at peak elevations in agreement with lidar and radar observations on that day. Comparing the moisture field after the anvils decay with the initial state, we find that convection predominantly moistens the tropopause layer (as defined by minimum temperature and minimum potential temperature lapse rate), although some drying is also predicted in localized layers. We will also present results of sensitivity tests designed to separate the roles of cloud microphysics and dynamics.
Solis, Kyle Jameson; Martin, James E.
2012-11-01
Isothermal magnetic advection is a recently discovered method of inducing highly organized, non-contact flow lattices in suspensions of magnetic particles, using only uniform ac magnetic fields of modest strength. The initiation of these vigorous flows requires neither a thermal gradient nor a gravitational field and so can be used to transfer heat and mass in circumstances where natural convection does not occur. These advection lattices are comprised of a square lattice of antiparallel flow columns. If the column spacing is sufficiently large compared to the column length, and the flow rate within the columns is sufficiently large, then one would expect efficient transfer of both heat and mass. Otherwise, the flow lattice could act as a countercurrent heat exchanger and only mass will be efficiently transferred. Although this latter case might be useful for feeding a reaction front without extracting heat, it is likely that most interest will be focused on using IMA for heat transfer. In this paper we explore the various experimental parameters of IMA to determine which of these can be used to control the column spacing. These parameters include the field frequency, strength, and phase relation between the two field components, the liquid viscosity and particle volume fraction. We find that the column spacing can easily be tuned over a wide range, to enable the careful control of heat and mass transfer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Capela, Fabio; Ramazanov, Sabir, E-mail: fc403@cam.ac.uk, E-mail: Sabir.Ramazanov@ulb.ac.be
At large scales and for sufficiently early times, dark matter is described as a pressureless perfect fluid—dust—non-interacting with Standard Model fields. These features are captured by a simple model with two scalars: a Lagrange multiplier and another playing the role of the velocity potential. That model arises naturally in some gravitational frameworks, e.g., the mimetic dark matter scenario. We consider an extension of the model by means of higher derivative terms, such that the dust solutions are preserved at the background level, but there is a non-zero sound speed at the linear level. We associate this Modified Dust with dark matter, and study the linear evolution of cosmological perturbations in that picture. The most prominent effect is the suppression of their power spectrum for sufficiently large cosmological momenta. This can be relevant in view of the problems that cold dark matter faces at sub-galactic scales, e.g., the missing satellites problem. At even shorter scales, however, perturbations of Modified Dust are enhanced compared to the predictions of more common particle dark matter scenarios. This is a peculiarity of their evolution in radiation dominated background. We also briefly discuss clustering of Modified Dust. We write the system of equations in the Newtonian limit, and sketch the possible mechanism which could prevent the appearance of caustic singularities. The same mechanism may be relevant in light of the core-cusp problem.
Data-driven confounder selection via Markov and Bayesian networks.
Häggström, Jenny
2018-06-01
To unbiasedly estimate a causal effect on an outcome unconfoundedness is often assumed. If there is sufficient knowledge on the underlying causal structure then existing confounder selection criteria can be used to select subsets of the observed pretreatment covariates, X, sufficient for unconfoundedness, if such subsets exist. Here, estimation of these target subsets is considered when the underlying causal structure is unknown. The proposed method is to model the causal structure by a probabilistic graphical model, for example, a Markov or Bayesian network, estimate this graph from observed data and select the target subsets given the estimated graph. The approach is evaluated by simulation both in a high-dimensional setting where unconfoundedness holds given X and in a setting where unconfoundedness only holds given subsets of X. Several common target subsets are investigated and the selected subsets are compared with respect to accuracy in estimating the average causal effect. The proposed method is implemented with existing software that can easily handle high-dimensional data, in terms of large samples and large number of covariates. The results from the simulation study show that, if unconfoundedness holds given X, this approach is very successful in selecting the target subsets, outperforming alternative approaches based on random forests and LASSO, and that the subset estimating the target subset containing all causes of outcome yields smallest MSE in the average causal effect estimation. © 2017, The International Biometric Society.
Bates, Jonathan; Parzynski, Craig S; Dhruva, Sanket S; Coppi, Andreas; Kuntz, Richard; Li, Shu-Xia; Marinac-Dabic, Danica; Masoudi, Frederick A; Shaw, Richard E; Warner, Frederick; Krumholz, Harlan M; Ross, Joseph S
2018-06-12
To estimate medical device utilization needed to detect safety differences among implantable cardioverter-defibrillator (ICD) generator models and compare these estimates to utilization in practice. We conducted repeated sample size estimates to calculate the medical device utilization needed, systematically varying device-specific safety event rate ratios and significance levels while maintaining 80% power, testing 3 average adverse event rates (3.9, 6.1, and 12.6 events per 100 person-years) estimated from the American College of Cardiology's 2006 to 2010 National Cardiovascular Data Registry of ICDs. We then compared these estimates with actual medical device utilization. At significance level 0.05 and 80% power, 34% or fewer ICD models accrued sufficient utilization in practice to detect safety differences for rate ratios <1.15 and an average event rate of 12.6 events per 100 person-years. For average event rates of 3.9 and 12.6 events per 100 person-years, 30% and 50% of ICD models, respectively, accrued sufficient utilization for a rate ratio of 1.25, whereas 52% and 67% did so for a rate ratio of 1.50. Because actual ICD utilization was not uniformly distributed across ICD models, the proportion of individuals receiving any ICD that accrued sufficient utilization in practice was 0% to 21%, 32% to 70%, and 67% to 84% for rate ratios of 1.05, 1.15, and 1.25, respectively, for the range of 3 average adverse event rates. Small safety differences among ICD generator models are unlikely to be detected through routine surveillance given current ICD utilization in practice, but large safety differences can be detected for most patients at anticipated average adverse event rates. Copyright © 2018 John Wiley & Sons, Ltd.
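A minimal sketch of the underlying sample-size logic follows: the person-years per group needed to detect a given adverse-event rate ratio, via a normal approximation for the log rate ratio with equal person-time in both groups. The study's exact method may differ; only the quoted event rates and rate ratios are taken from the abstract.

```python
# Minimal sketch: person-years of utilization per group needed to detect a
# given adverse-event rate ratio at 80% power, using a normal approximation
# for the variance of the log rate ratio (1/expected events in each group).
import math
from scipy.stats import norm

def person_years_needed(base_rate, rate_ratio, alpha=0.05, power=0.80):
    """Person-years per group; base_rate in events per person-year."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var_per_py = 1.0 / base_rate + 1.0 / (base_rate * rate_ratio)
    return z**2 * var_per_py / math.log(rate_ratio) ** 2

for events_per_100py in (3.9, 6.1, 12.6):
    lam = events_per_100py / 100.0
    for rr in (1.05, 1.15, 1.25, 1.50):
        py = person_years_needed(lam, rr)
        print(f"rate {events_per_100py:>4} /100py, RR {rr:>4}: "
              f"{py:,.0f} person-years per group")
```

The steep growth of the required person-years as the rate ratio approaches 1 is what makes small safety differences effectively undetectable at current utilization levels.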
Space Geodesy and the New Madrid Seismic Zone
NASA Astrophysics Data System (ADS)
Smalley, Robert; Ellis, Michael A.
2008-07-01
One of the most contentious issues related to earthquake hazards in the United States centers on the midcontinent and the origin, magnitudes, and likely recurrence intervals of the 1811-1812 New Madrid earthquakes that occurred there. The stakeholder groups in the debate (local and state governments, reinsurance companies, American businesses, and the scientific community) are similar to the stakeholder groups in regions more famous for large earthquakes. However, debate about New Madrid seismic hazard has been fiercer because of the lack of two fundamental components of seismic hazard estimation: an explanatory model for large, midplate earthquakes; and sufficient or sufficiently precise data about the causes, effects, and histories of such earthquakes.
Stability and stabilisation of a class of networked dynamic systems
NASA Astrophysics Data System (ADS)
Liu, H. B.; Wang, D. Q.
2018-04-01
We investigate the stability and stabilisation of a linear time invariant networked heterogeneous system with arbitrarily connected subsystems. A new linear matrix inequality based sufficient and necessary condition for the stability is derived, based on which the stabilisation is provided. The obtained conditions efficiently utilise the block-diagonal characteristic of system parameter matrices and the sparseness of subsystem connection matrix. Moreover, a sufficient condition only dependent on each individual subsystem is also presented for the stabilisation of the networked systems with a large scale. Numerical simulations show that these conditions are computationally valid in the analysis and synthesis of a large-scale networked system.
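A minimal sketch of a stability check in this spirit is given below, for a networked linear system assembled from block-diagonal subsystem dynamics and a sparse interconnection. Instead of a general LMI solver it uses the equivalent Lyapunov-equation test available in SciPy, and the subsystem data and coupling pattern are synthetic rather than taken from the paper.

```python
# Minimal sketch of a Lyapunov-based stability check for a networked linear
# system built from block-diagonal subsystem dynamics plus a sparse (ring)
# interconnection. A is Hurwitz iff A^T P + P A = -Q (Q > 0) has a positive
# definite solution P; this replaces a general LMI feasibility search.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(6)
n_sub, dim, coupling = 4, 2, 0.2

A = np.zeros((n_sub * dim, n_sub * dim))
for i in range(n_sub):
    Ai = np.array([[-1.0, 1.0], [0.0, -2.0]]) + 0.1 * rng.standard_normal((dim, dim))
    A[i*dim:(i+1)*dim, i*dim:(i+1)*dim] = Ai                      # subsystem block
    j = (i + 1) % n_sub
    A[i*dim:(i+1)*dim, j*dim:(j+1)*dim] = coupling * np.eye(dim)  # sparse link

Q = np.eye(n_sub * dim)
P = solve_continuous_lyapunov(A.T, -Q)        # solves A^T P + P A = -Q
stable = np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)
print("network stable (P positive definite):", stable)
print("eigenvalues of A (real parts):", np.round(np.linalg.eigvals(A).real, 3))
```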
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matt; Liu, Alice; Guyon, Olivier; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance.
Viscous and Thermal Effects on Hydrodynamic Instability in Liquid-Propellant Combustion
NASA Technical Reports Server (NTRS)
Margolis, Stephen B.; Sacksteder, Kurt (Technical Monitor)
2000-01-01
A pulsating form of hydrodynamic instability has recently been shown to arise during the deflagration of liquid propellants in those parameter regimes where the pressure-dependent burning rate is characterized by a negative pressure sensitivity. This type of instability can coexist with the classical cellular, or Landau, form of hydrodynamic instability, with the occurrence of either dependent on whether the pressure sensitivity is sufficiently large or small in magnitude. For the inviscid problem, it has been shown that, when the burning rate is realistically allowed to depend on temperature as well as pressure, sufficiently large values of the temperature sensitivity relative to the pressure sensitivity cause the pulsating form of hydrodynamic instability to become dominant. In that regime, steady, planar burning becomes intrinsically unstable to pulsating disturbances whose wavenumbers are sufficiently small. In the present work, this analysis is extended to the fully viscous case, where it is shown that although viscosity is stabilizing for intermediate and larger wavenumber perturbations, the intrinsic pulsating instability for small wavenumbers remains. Under these conditions, liquid-propellant combustion is predicted to be characterized by large unsteady cells along the liquid/gas interface.
Setchell, Joanna M; Abbott, Kristin M; Gonzalez, Jean-Paul; Knapp, Leslie A
2013-10-01
A large body of evidence suggests that major histocompatibility complex (MHC) genotype influences mate choice. However, few studies have investigated MHC-mediated post-copulatory mate choice under natural, or even semi-natural, conditions. We set out to explore this question in a large semi-free-ranging population of mandrills (Mandrillus sphinx) using MHC-DRB genotypes for 127 parent-offspring triads. First, we showed that offspring MHC heterozygosity correlates positively with parental MHC dissimilarity, suggesting that mating between MHC-dissimilar partners is effective in increasing offspring MHC diversity. Second, we compared the haplotypes of the parental dyad with those of the offspring to test whether post-copulatory sexual selection favored offspring with two different MHC haplotypes, more diverse gamete combinations, or greater within-haplotype diversity. Limited statistical power meant that we could only detect medium or large effect sizes. Nevertheless, we found no evidence for selection for heterozygous offspring when parents share a haplotype (large effect size), for genetic dissimilarity between parental haplotypes (we could detect an odds ratio of ≥1.86), or for within-haplotype diversity (medium-large effect). These findings suggest that comparing parental and offspring haplotypes may be a useful approach to test for post-copulatory selection when matings cannot be observed, as is the case in many study systems. However, it will be extremely difficult to determine conclusively whether post-copulatory selection mechanisms for MHC genotype exist, particularly if the effect sizes are small, due to the difficulty of obtaining a sufficiently large sample. © 2013 Wiley Periodicals, Inc.
Thermoelectric properties of layered NaSbSe2.
Putatunda, Aditya; Xing, Guangzong; Sun, Jifeng; Li, Yuwei; Singh, David J
2018-06-06
We investigate ordered monoclinic NaSbSe2 as a thermoelectric using first principles calculations. We find that from an electronic point of view, ordered and oriented n-type NaSbSe2 is comparable to the best known thermoelectric materials. This phase has a sufficiently large band gap for thermoelectric and solar absorber applications in contrast to the disordered phase which has a much narrower gap. The electronic structure shows anisotropic, non-parabolic bands. The results show a high Seebeck coefficient in addition to direction dependent high conductivity. The electronic structure quantified by an electron fitness function is very favorable, especially in the n-type case.
On the accuracy of various large axial displacement formulae for crooked columns
NASA Astrophysics Data System (ADS)
Mallis, J.; Kounadis, A. N.
1988-11-01
The axial displacements of an initially crooked, simply supported column, subjected to an axial compressive force at its end, are determined by using several variants of the axial strain-displacement relationship. Their accuracy and range of applicability are thoroughly discussed by comparing the corresponding results with those of the exact elastica analysis in which the compressibility effect of the bar axis is accounted for. Among other findings, the important conclusion is drawn that the simplified linear kinematic relation leads to a sufficiently accurate evaluation of the initial part of the postbuckling path which is of significant importance for structural design purposes.
The relative roles of sulfate aerosols and greenhouse gases in climate forcing
NASA Technical Reports Server (NTRS)
Kiehl, J. T.; Briegleb, B. P.
1993-01-01
Calculations of the effects of both natural and anthropogenic tropospheric sulfate aerosols indicate that the aerosol climate forcing is sufficiently large in a number of regions of the Northern Hemisphere to reduce significantly the positive forcing from increased greenhouse gases. Summer sulfate aerosol forcing in the Northern Hemisphere completely offsets the greenhouse forcing over the eastern United States and central Europe. Anthropogenic sulfate aerosols contribute a globally averaged annual forcing of -0.3 watt per square meter as compared with +2.1 watts per square meter for greenhouse gases. Sources of the difference in magnitude with the previous estimate of Charlson et al. (1992) are discussed.
A slewing control experiment for flexible structures
NASA Technical Reports Server (NTRS)
Juang, J.-N.; Horta, L. G.; Robertshaw, H. H.
1985-01-01
A hardware set-up has been developed to study slewing control for flexible structures, including a steel beam and a solar panel. The linear optimal terminal control law is used to design active controllers, which are implemented on an analog computer. The objective of this experiment is to demonstrate and verify the dynamics and optimal terminal control laws as applied to flexible structures for large-angle maneuvers. Actuation is provided by an electric motor, while sensing is provided by strain gages and an angle potentiometer. Experimental measurements are compared with analytical predictions in terms of the modal parameters of the system stability matrix, and sufficient agreement is achieved to validate the theory.
Controlling Laser Plasma Instabilities Using Temporal Bandwidth
NASA Astrophysics Data System (ADS)
Tsung, Frank; Weaver, J.; Lehmberg, R.
2016-10-01
We are performing particle-in-cell simulations using the code OSIRIS to study the effects of laser plasma interactions in the presence of temporal bandwidth under conditions relevant to current and future experiments on the NIKE laser. Our simulations show that, for sufficiently large bandwidth (where the inverse bandwidth is comparable with the linear growth time), the saturation level and the distribution of hot electrons can be affected by the addition of temporal bandwidth (which can be introduced in experiments using beam-smoothing techniques such as ISI). We will quantify these effects and investigate higher dimensional effects such as laser speckles. This work is supported by DOE and NRL.
Antenatal training to improve breast feeding: a randomised trial.
Kronborg, Hanne; Maimburg, Rikke Damkjær; Væth, Michael
2012-12-01
Objective: to assess the effect of an antenatal training programme on knowledge, self-efficacy and problems related to breast feeding and on breast-feeding duration. Design: a randomised controlled trial. Setting: the Aarhus Midwifery Clinic, a large clinic connected to a Danish university hospital in an urban area of Denmark. Participants: a total of 1193 nulliparous women were recruited before week 21+6 days of gestation; 603 were randomised to the intervention group and 590 to the reference group. Intervention: we compared a structured antenatal training programme attended in mid-pregnancy with usual practice. Measurements: data were collected through self-reported questionnaires sent to the women's e-mail addresses and analysed according to the intention-to-treat principle. The primary outcomes were duration of full and any breast feeding, collected 6 weeks post partum (any) and 1 year post partum (full and any). Findings: no differences were found between groups in duration of breast feeding, self-efficacy score, or breast-feeding problems, but after participation in the course in week 36 of gestation women in the intervention group reported a higher level of confidence (p=0.05), and 6 weeks after birth they reported having obtained sufficient knowledge about breast feeding (p=0.02). Supplemental analysis in the intervention group revealed that women with sufficient knowledge breast fed significantly longer than women without sufficient knowledge (HR=0.74, CI: 0.58-0.97); this association was not found in the reference group (HR=1.12, CI: 0.89-1.41). Conclusions: antenatal training can increase confidence about breast feeding in pregnancy and provide women with sufficient knowledge about breast feeding after birth. Antenatal training may therefore be an important low-technology health promotion tool that can be provided at low cost in most settings. The antenatal training programme needs to be followed by postnatal breast-feeding support, as it is not sufficient in itself to increase the duration of breast feeding or reduce breast-feeding problems. Copyright © 2011 Elsevier Ltd. All rights reserved.
Polyimide/Carbon Nanotube Composite Films for Electrostatic Charge Mitigation
NASA Technical Reports Server (NTRS)
Smith, Joseph G., Jr.; Delozier, Donavon M.; Connell, John W.; Watson, Kent A.
2004-01-01
Low color, space environmentally durable polymeric films with sufficient electrical conductivity to mitigate electrostatic charge (ESC) build-up have potential applications on large, deployable, ultra-lightweight Gossamer spacecraft as thin film membranes on antennas, solar sails, thermal/optical coatings, multi-layer insulation blankets, etc. The challenge has been to develop a method to impart robust electrical conductivity into these materials without increasing solar absorptivity (alpha) or decreasing optical transparency or film flexibility. Since these spacecraft will require significant compaction prior to launch, the film portion of the spacecraft will require folding. The state-of-the-art clear, conductive coating (e.g. indium-tin-oxide, ITO) is brittle and cannot tolerate folding. In this report, doping a polymer with single-walled carbon nanotubes (SWNTs) using two different methods afforded materials with good flexibility and surface conductivities in the range sufficient for ESC mitigation. A coating method afforded materials with minimal effects on the mechanical, optical, and thermo-optical properties as compared to dispersal of SWNTs in the matrix. The chemistry and physical properties of these nanocomposites are discussed.
Coding “What” and “When” in the Archer Fish Retina
Vasserman, Genadiy; Shamir, Maoz; Ben Simon, Avi; Segev, Ronen
2010-01-01
Traditionally, the information content of the neural response is quantified using statistics of the responses relative to stimulus onset time, with the assumption that the brain uses onset time to infer stimulus identity. However, stimulus onset time must also be estimated by the brain, making the utility of such an approach questionable. How can stimulus onset be estimated from the neural responses with sufficient accuracy to ensure reliable stimulus identification? We address this question using the framework of colour coding by archer fish retinal ganglion cells. We found that stimulus identity, “what”, can be estimated from the responses of the best single cells with an accuracy comparable to that of the animal's psychophysical estimation. However, to extract this information, an accurate estimation of stimulus onset is essential. We show that stimulus onset time, “when”, can be estimated using a linear-nonlinear readout mechanism that requires the response of a population of 100 cells. Thus, stimulus onset time can be estimated using a relatively simple readout. However, large nerve cell populations are required to achieve sufficient accuracy. PMID:21079682
Arora, Neha; Patel, Alok; Pruthi, Parul A; Pruthi, Vikas
2016-08-01
The study synergistically optimized nitrogen and phosphorous concentrations for attainment of maximum lipid productivity in Chlorella minutissima. Nitrogen- and phosphorous-limited cells (N(L)P(L)) showed maximum lipid productivity (49.1±0.41 mg/L/d), 1.47-fold higher than the control. Nitrogen depletion resulted in reduced cell size with large lipid droplets occupying most of the intracellular space, while discrete lipid bodies were observed under nitrogen sufficiency. Synergistic N/P starvation had a more pronounced effect on photosynthetic pigments than the individual deprivations. Phosphorous deficiency along with N starvation exhibited a 17.12% decline in carbohydrate, while no change was recorded in nitrogen-sufficient cells. The optimum N(L)P(L) concentration showed a balance between biomass and lipid by maintaining intermediate cell size, pigments, carbohydrate and proteins. The FAME profile showed C14-C18 carbon chains in N(L)P(L) cells, with biodiesel properties comparable to plant oil methyl esters. Hence, synergistic N/P limitation was effective for enhancing lipid productivity with reduced consumption of nutrients. Copyright © 2016 Elsevier Ltd. All rights reserved.
Epidemiologic methods in clinical trials.
Rothman, K J
1977-04-01
Epidemiologic methods developed to control confounding in non-experimental studies are equally applicable for experiments. In experiments, most confounding is usually controlled by random allocation of subjects to treatment groups, but randomization does not preclude confounding except for extremely large studies, the degree of confounding expected being inversely related to the size of the treatment groups. In experiments, as in non-experimental studies, the extent of confounding for each risk indicator should be assessed, and if sufficiently large, controlled. Confounding is properly assessed by comparing the unconfounded effect estimate to the crude effect estimate; a common error is to assess confounding by statistical tests of significance. Assessment of confounding involves its control as a prerequisite. Control is most readily and cogently achieved by stratification of the data, though with many factors to control simultaneously, multivariate analysis or a combination of multivariate analysis and stratification might be necessary.
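The comparison described above (crude versus stratum-adjusted estimate, rather than a significance test) can be sketched as follows; the two-stratum counts and the Mantel-Haenszel risk-ratio summary are illustrative assumptions, not data from the paper.

```python
# Illustrative sketch: assess confounding by comparing a crude effect estimate
# with a stratified (Mantel-Haenszel) estimate. A large discrepancy between the
# two, not a p-value, is what indicates confounding. Counts below are made up.
import numpy as np

# Each stratum of the confounder: (exposed cases, exposed total, unexposed cases, unexposed total)
strata = [(30, 100, 10, 100),
          (5,  100, 20, 400)]

a  = np.array([s[0] for s in strata], float)   # exposed cases
n1 = np.array([s[1] for s in strata], float)   # exposed totals
b  = np.array([s[2] for s in strata], float)   # unexposed cases
n0 = np.array([s[3] for s in strata], float)   # unexposed totals

crude_rr = (a.sum() / n1.sum()) / (b.sum() / n0.sum())
mh_rr = (a * n0 / (n1 + n0)).sum() / (b * n1 / (n1 + n0)).sum()   # Mantel-Haenszel RR

print(f"crude RR = {crude_rr:.2f}, stratum-adjusted (MH) RR = {mh_rr:.2f}")
print("confounding is judged by the size of this discrepancy, not by a significance test")
```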
NASA Technical Reports Server (NTRS)
Imoto, Naoko; Bandler, Simon; Brekosky, Regis; Chervenak, James; Figueroa-Felicano, Enectali; Finkbeiner, Frederick; Kelley, Richard; Kilbourne, Caroline; Porter, Frederick; Sadleir, Jack;
2007-01-01
We are developing large, close-packed arrays of x-ray transition-edge sensor (TES) microcalorimeters. In such a device, sufficient heat sinking is important to minimize thermal cross talk between pixels and to stabilize the bath temperature for all pixels. We have measured cross talk on our 8 x 8 arrays and studied the shape and amount of thermal cross talk as a function of pixel location and efficiency of electrothermal feedback. In this presentation, we will compare measurements made on arrays with and without a backside, heat-sinking copper layer, as well as results of devices on silicon-nitride membranes and on solid substrates, and we will discuss the implications for energy resolution and maximum count rate. We will also discuss the dependence of pulse height upon bath temperature, and the measured and required stability of the bath temperature.
Dephasing due to Nuclear Spins in Large-Amplitude Electric Dipole Spin Resonance.
Chesi, Stefano; Yang, Li-Ping; Loss, Daniel
2016-02-12
We analyze effects of the hyperfine interaction on electric dipole spin resonance when the amplitude of the quantum-dot motion becomes comparable to or larger than the quantum dot's size. Away from the well-known small-drive regime, the important role played by transverse nuclear fluctuations leads to a Gaussian decay with a characteristic dependence on drive strength and detuning. A characterization of spin-flip gate fidelity, in the presence of such additional drive-dependent dephasing, shows that vanishingly small errors can still be achieved at sufficiently large amplitudes. Based on our theory, we analyze recent electric dipole spin resonance experiments relying on spin-orbit interactions or the slanting field of a micromagnet. We find that such experiments are already in a regime with significant effects of transverse nuclear fluctuations, and the form of decay of the Rabi oscillations can be reproduced well by our theory.
The Experiment of Modulated Toroidal Current on HT-7 and HT-6M Tokamak
NASA Astrophysics Data System (ADS)
Mao, Jian-shan; Phillips, P.; Luo, Jia-rong; Xu, Yu-hong; Zhao, Jun-yu; Zhang, Xian-mei; Wan, Bao-nian; Zhang, Shou-yin; Jie, Yin-xian; Wu, Zhen-wei; Hu, Li-qun; Liu, Sheng-xia; Shi, Yue-jiang; Li, Jian-gang; HT-6M; HT-7 Group
2003-02-01
Experiments with a modulated toroidal current were carried out on the HT-6M tokamak and the HT-7 superconducting tokamak. The toroidal current was modulated by programming the Ohmic heating field. Modulation of the plasma current has been used successfully to suppress MHD activity in discharges near the density limit, where large m = 2 tearing modes were suppressed by sufficiently large plasma current oscillations. An improved Ohmic confinement phase was observed during modulated toroidal current (MTC) operation on the Hefei Tokamak-6M (HT-6M) and the Hefei superconducting Tokamak-7 (HT-7). A frequency-modulated toroidal current, induced by a modulated loop voltage, was added to the plasma equilibrium current. The ratio of the a.c. amplitude of the plasma current to the main plasma current, ΔIp/Ip, is about 12%-30%. Different waveforms of the frequency-modulated toroidal current were compared.
Aerodynamic force measurement on a large-scale model in a short duration test facility
NASA Astrophysics Data System (ADS)
Tanno, H.; Kodera, M.; Komuro, T.; Sato, K.; Takahasi, M.; Itoh, K.
2005-03-01
A force measurement technique has been developed for large-scale aerodynamic models with a short test time. The technique is based on direct acceleration measurements, with miniature accelerometers mounted on a test model suspended by wires. By measuring acceleration at two different locations, the technique can eliminate oscillations from the natural vibration of the model. The technique was used for drag force measurements on a 3 m long supersonic combustor model in the HIEST free-piston driven shock tunnel. A time resolution of 350 μs is guaranteed during measurements, which is sufficient for the millisecond-order test times in HIEST. To evaluate measurement reliability and accuracy, measured values were compared with results from a three-dimensional Navier-Stokes numerical simulation. The difference between measured values and numerical simulation values was less than 5%. We conclude that this measurement technique is sufficiently reliable for measuring aerodynamic force within test durations of 1 ms.
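A schematic of the two-accelerometer idea (combine the signals so that the dominant vibration mode cancels, then recover drag as mass times rigid-body acceleration) is sketched below; the mass, mode-shape values, frequencies and noise levels are invented for illustration and are not the HIEST values.

```python
# Schematic sketch, not the HIEST implementation: two accelerometer channels share
# the rigid-body acceleration but see the vibration mode with different mode-shape
# values, so a weighted sum cancels the vibration and drag follows from F = m * a.
import numpy as np

rng = np.random.default_rng(0)
m = 250.0                             # model mass [kg], assumed
t = np.linspace(0.0, 1e-3, 2000)      # ~1 ms test window
a_rigid = 40.0 * np.ones_like(t)      # true rigid-body deceleration [m/s^2], assumed

phi1, phi2 = 1.0, -0.6                # mode-shape values at the two stations (assumed known)
vib = 15.0 * np.sin(2 * np.pi * 3000.0 * t)          # 3 kHz natural vibration, assumed

a1 = a_rigid + phi1 * vib + rng.normal(0, 0.5, t.size)
a2 = a_rigid + phi2 * vib + rng.normal(0, 0.5, t.size)

# Weights chosen so that w1*phi1 + w2*phi2 = 0 (mode cancels) and w1 + w2 = 1.
w1 = -phi2 / (phi1 - phi2)
w2 = phi1 / (phi1 - phi2)
a_est = w1 * a1 + w2 * a2

print(f"estimated drag = {m * a_est.mean():.1f} N (true = {m * 40.0:.1f} N)")
```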
Eby, Joshua; Leembruggen, Madelyn; Suranyi, Peter; ...
2016-12-15
Axion stars, gravitationally bound states of low-energy axion particles, have a maximum mass allowed by gravitational stability. Weakly bound states obtaining this maximum mass have sufficiently large radii such that they are dilute, and as a result, they are well described by a leading-order expansion of the axion potential. Heavier states are susceptible to gravitational collapse. Inclusion of higher-order interactions, present in the full potential, can give qualitatively different results in the analysis of collapsing heavy states, as compared to the leading-order expansion. In this work, we find that collapsing axion stars are stabilized by repulsive interactions present in the full potential, providing evidence that such objects do not form black holes. In the last moments of collapse, the binding energy of the axion star grows rapidly, and we provide evidence that a large amount of its energy is lost through rapid emission of relativistic axions.
NASA Astrophysics Data System (ADS)
Dorofeeva, Olga V.; Suchkova, Taisiya A.
2018-04-01
The gas-phase enthalpies of formation of four molecules with high flexibility, which leads to the existence of a large number of low-energy conformers, were calculated with the G4 method to see whether the lowest energy conformer is sufficient to achieve high accuracy in the computed values. The calculated values were in good agreement with the experiment, whereas adding the correction for conformer distribution makes the agreement worse. The reason for this effect is a large anharmonicity of low-frequency torsional motions, which is ignored in the calculation of ZPVE and thermal enthalpy. It was shown that the approximate correction for anharmonicity estimated using a free rotor model is of very similar magnitude compared with the conformer correction but has the opposite sign, and thus almost fully compensates for it. Therefore, the common practice of adding only the conformer correction is not without problems.
Quantifying the uncertainty in heritability
Furlotte, Nicholas A; Heckerman, David; Lippert, Christoph
2014-01-01
The use of mixed models to determine narrow-sense heritability and related quantities such as SNP heritability has received much recent attention. Less attention has been paid to the inherent variability in these estimates. One approach for quantifying variability in estimates of heritability is a frequentist approach, in which heritability is estimated using maximum likelihood and its variance is quantified through an asymptotic normal approximation. An alternative approach is to quantify the uncertainty in heritability through its Bayesian posterior distribution. In this paper, we develop the latter approach, make it computationally efficient and compare it to the frequentist approach. We show theoretically that, for a sufficiently large sample size and intermediate values of heritability, the two approaches provide similar results. Using the Atherosclerosis Risk in Communities cohort, we show empirically that the two approaches can give different results and that the variance/uncertainty can remain large. PMID:24670270
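A toy sketch of the Bayesian alternative described above, computing the posterior of heritability on a grid for a simple variance-components model with simulated genotypes; the flat prior, grid resolution and data are assumptions made for illustration only.

```python
# Toy sketch: posterior of narrow-sense heritability h2 on a grid, for the simple
# model y ~ N(0, h2*K + (1-h2)*I) with a flat prior. K and y are simulated here.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
n = 200
G = rng.normal(size=(n, 500))                 # standardized genotypes (simulated)
K = G @ G.T / G.shape[1]                      # genetic relatedness matrix
h2_true = 0.5
y = rng.multivariate_normal(np.zeros(n), h2_true * K + (1 - h2_true) * np.eye(n))

grid = np.linspace(0.01, 0.99, 99)
loglik = np.array([multivariate_normal.logpdf(y, mean=np.zeros(n),
                                              cov=h * K + (1 - h) * np.eye(n))
                   for h in grid])
post = np.exp(loglik - loglik.max())
post /= post.sum()                            # flat prior: posterior proportional to likelihood

mean = (grid * post).sum()
sd = np.sqrt(((grid - mean) ** 2 * post).sum())
print(f"posterior mean h2 = {mean:.2f} +/- {sd:.2f}")
```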
Responses of large mammals to climate change.
Hetem, Robyn S; Fuller, Andrea; Maloney, Shane K; Mitchell, Duncan
2014-01-01
Most large terrestrial mammals, including the charismatic species so important for ecotourism, do not have the luxury of rapid micro-evolution or sufficient range shifts as strategies for adjusting to climate change. The rate of climate change is too fast for genetic adaptation to occur in mammals with longevities of decades, typical of large mammals, and landscape fragmentation and population by humans too widespread to allow spontaneous range shifts of large mammals, leaving only the expression of latent phenotypic plasticity to counter effects of climate change. The expression of phenotypic plasticity includes anatomical variation within the same species, changes in phenology, and employment of intrinsic physiological and behavioral capacity that can buffer an animal against the effects of climate change. Whether that buffer will be realized is unknown, because little is known about the efficacy of the expression of plasticity, particularly for large mammals. Future research in climate change biology requires measurement of physiological characteristics of many identified free-living individual animals for long periods, probably decades, to allow us to detect whether expression of phenotypic plasticity will be sufficient to cope with climate change.
Code of Federal Regulations, 2010 CFR
2010-04-01
... provide sufficient comparative data to justify the selection of the proposed route. (§ 380.12(l)(2)(ii)) 5... sufficient comparative data to justify the selection of the proposed site. (§ 380.12(l)(2)(ii)) Resource Report 11—Reliability and Safety Describe how the project facilities would be designed, constructed...
Code of Federal Regulations, 2012 CFR
2012-04-01
... provide sufficient comparative data to justify the selection of the proposed route. (§ 380.12(l)(2)(ii)) 5... sufficient comparative data to justify the selection of the proposed site. (§ 380.12(l)(2)(ii)) Resource Report 11—Reliability and Safety Describe how the project facilities would be designed, constructed...
Code of Federal Regulations, 2014 CFR
2014-04-01
... provide sufficient comparative data to justify the selection of the proposed route. (§ 380.12(l)(2)(ii)) 5... sufficient comparative data to justify the selection of the proposed site. (§ 380.12(l)(2)(ii)) Resource Report 11—Reliability and Safety Describe how the project facilities would be designed, constructed...
Code of Federal Regulations, 2013 CFR
2013-04-01
... provide sufficient comparative data to justify the selection of the proposed route. (§ 380.12(l)(2)(ii)) 5... sufficient comparative data to justify the selection of the proposed site. (§ 380.12(l)(2)(ii)) Resource Report 11—Reliability and Safety Describe how the project facilities would be designed, constructed...
Code of Federal Regulations, 2011 CFR
2011-04-01
... provide sufficient comparative data to justify the selection of the proposed route. (§ 380.12(l)(2)(ii)) 5... sufficient comparative data to justify the selection of the proposed site. (§ 380.12(l)(2)(ii)) Resource Report 11—Reliability and Safety Describe how the project facilities would be designed, constructed...
Design of sEMG assembly to detect external anal sphincter activity: a proof of concept.
Shiraz, Arsam; Leaker, Brian; Mosse, Charles Alexander; Solomon, Eskinder; Craggs, Michael; Demosthenous, Andreas
2017-10-31
Conditional trans-rectal stimulation of the pudendal nerve could provide a viable solution to treat hyperreflexive bladder in spinal cord injury. A set threshold of the amplitude estimate of the external anal sphincter surface electromyography (sEMG) may be used as the trigger signal. The efficacy of such a device should be tested in a large-scale clinical trial. As such, a probe should remain in situ for several hours while patients attend to their daily routine, and the recording electrodes should be designed to be large enough to maintain good contact while observing design constraints. The objective of this study was to arrive at a design for intra-anal sEMG recording electrodes for the subsequent clinical trials while deriving the possible recording and processing parameters. With existing solutions in mind, and based on theoretical and anatomical considerations, a set of four multi-electrode probes was designed and developed. These were tested in a healthy subject and the measured sEMG traces were recorded and appropriately processed. It was shown that while comparatively large electrodes record sEMG traces that are not sufficiently correlated with external anal sphincter contractions, smaller electrodes may not maintain stable electrode-tissue contact. It was shown that 3 mm wide and 1 cm long electrodes with 5 mm inter-electrode spacing, in agreement with Nyquist sampling, placed 1 cm from the orifice can record intra-anally an sEMG trace sufficiently correlated with external anal sphincter activity. The outcome of this study can be used in any biofeedback, treatment or diagnostic application where the activity of the external anal sphincter sEMG should be detected for an extended period of time.
Control methods for aiding a pilot during STOL engine failure transients
NASA Technical Reports Server (NTRS)
Nelson, E. R.; Debra, D. B.
1976-01-01
Candidate autopilot control laws were defined that limit engine-failure transient sink rates, demonstrating the engineering application of modern state-variable control theory. The results of an approximate modal analysis were compared to those derived from full-state analyses obtained from computer design solutions. The aircraft was described, and a state-variable model of its longitudinal dynamic motion due to engine and control variations was defined. The classical fast and slow modes were assumed to be sufficiently different to define reduced-order approximations of the aircraft motion amenable to hand-analysis control definition methods. The original state equations of motion were also applied to a large-scale state-variable control design program, in particular OPTSYS. The resulting control laws were compared with respect to their relative responses, ease of application, and achievement of the desired performance objectives.
Robust Optimal Adaptive Control Method with Large Adaptive Gain
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.
2009-01-01
In the presence of large uncertainties, a control system needs to be able to adapt rapidly to regain performance. Fast adaptation refers to the implementation of adaptive control with a large adaptive gain so as to reduce the tracking error rapidly. However, a large adaptive gain can lead to high-frequency oscillations which can adversely affect the robustness of an adaptive control law. A new adaptive control modification is presented that can achieve robust adaptation with a large adaptive gain without incurring the high-frequency oscillations of standard model-reference adaptive control. The modification is based on the minimization of the L2 norm of the tracking error, which is formulated as an optimal control problem. The optimality condition is used to derive the modification using the gradient method. The optimal control modification results in stable adaptation and allows a large adaptive gain to be used for better tracking while providing sufficient stability robustness. Simulations were conducted for a damaged generic transport aircraft with both standard adaptive control and the adaptive optimal control modification technique. The results demonstrate the effectiveness of the proposed modification in tracking a reference model while maintaining a sufficient time-delay margin.
Mechanism of explosive eruptions of Kilauea Volcano, Hawaii
Dvorak, J.J.
1992-01-01
A small explosive eruption of Kilauea Volcano, Hawaii, occurred in May 1924. The eruption was preceded by rapid draining of a lava lake and transfer of a large volume of magma from the summit reservoir to the east rift zone. This lowered the magma column, which reduced hydrostatic pressure beneath Halemaumau and allowed groundwater to flow rapidly into areas of hot rock, producing a phreatic eruption. A comparison with other events at Kilauea shows that the transfer of a large volume of magma out of the summit reservoir is not sufficient to produce a phreatic eruption. For example, the volume transferred at the beginning of explosive activity in May 1924 was less than the volumes transferred in March 1955 and January-February 1960, when no explosive activity occurred. Likewise, draining of a lava lake and deepening of the floor of Halemaumau, which occurred in May 1922 and August 1923, were not sufficient to produce explosive activity. A phreatic eruption of Kilauea requires both the transfer of a large volume of magma from the summit reservoir and the rapid removal of magma from near the surface, where the surrounding rocks have been heated to a sufficient temperature to produce steam explosions when suddenly contacted by groundwater. © 1992 Springer-Verlag.
Metabolic rates of giant pandas inform conservation strategies.
Fei, Yuxiang; Hou, Rong; Spotila, James R; Paladino, Frank V; Qi, Dunwu; Zhang, Zhihe
2016-06-06
The giant panda is an icon of conservation and survived a large-scale bamboo die off in the 1980s in China. Captive breeding programs have produced a large population in zoos and efforts continue to reintroduce those animals into the wild. However, we lack sufficient knowledge of their physiological ecology to determine requirements for survival now and in the face of climate change. We measured resting and active metabolic rates of giant pandas in order to determine if current bamboo resources were sufficient for adding additional animals to populations in natural reserves. Resting metabolic rates were somewhat below average for a panda sized mammal and active metabolic rates were in the normal range. Pandas do not have exceptionally low metabolic rates. Nevertheless, there is enough bamboo in natural reserves to support both natural populations and large numbers of reintroduced pandas. Bamboo will not be the limiting factor in successful reintroduction.
Kim, Hyoung Jun; Kim, Tae Oh; Shin, Bong Chul; Woo, Jae Gon; Seo, Eun Hee; Joo, Hee Rin; Heo, Nae-Yun; Park, Jongha; Park, Seung Ha; Yang, Sung Yeon; Moon, Young Soo; Shin, Jin-Yong; Lee, Nae Young
2012-01-01
Currently, a split dose of polyethylene glycol (PEG) is the mainstay of bowel preparation due to its tolerability, bowel-cleansing action, and safety. However, bowel preparation with PEG is suboptimal because residual fluid reduces the polyp detection rate and requires a more thorough colon inspection. The aim of our study was to demonstrate the efficacy of a sufficient dose of prokinetics on bowel cleansing together with split-dose PEG. A prospective, endoscopist-blinded study was conducted. Patients were randomly allocated to two groups: prokinetic with split-dose PEG, or split-dose PEG alone. A prokinetic [100 mg itopride (Itomed)] was administered twice, simultaneously with each split dose of PEG. Bowel-cleansing efficacy was measured by endoscopists using the Ottawa scale and the segmental fluidity scale score. Each participant completed a bowel preparation survey. Mean Ottawa scale scores, segmental fluid scale scores, and rates of poor preparation were compared between the two groups. Patients in the prokinetic with split-dose PEG group showed significantly lower total Ottawa and segmental fluid scores than patients in the split-dose PEG alone group. A sufficient dose of prokinetics with a split dose of PEG showed efficacy in bowel cleansing for morning colonoscopy, largely due to the reduction in colonic fluid. Copyright © 2012 S. Karger AG, Basel.
Teaching Students about Plagiarism: An Internet Solution to an Internet Problem
ERIC Educational Resources Information Center
Snow, Eleanour
2006-01-01
The Internet has changed the ways that students think, learn, and write. Students have large amounts of information, largely anonymous and without clear copyright information, literally at their fingertips. Without sufficient guidance, the inappropriate use of this information seems inevitable. Plagiarism among college students is rising, due to…
Sufficient Forecasting Using Factor Models
Fan, Jianqing; Xue, Lingzhou; Yao, Jiawei
2017-01-01
We consider forecasting a single time series when there is a large number of predictors and a possible nonlinear effect. The dimensionality is first reduced via a high-dimensional (approximate) factor model implemented by principal component analysis. Using the extracted factors, we develop a novel forecasting method called the sufficient forecasting, which provides a set of sufficient predictive indices, inferred from high-dimensional predictors, to deliver additional predictive power. The projected principal component analysis is employed to enhance the accuracy of inferred factors when a semi-parametric (approximate) factor model is assumed. Our method is also applicable to cross-sectional sufficient regression using extracted factors. The connection between the sufficient forecasting and the deep learning architecture is explicitly stated. The sufficient forecasting correctly estimates projection indices of the underlying factors even in the presence of a nonparametric forecasting function. The proposed method extends sufficient dimension reduction to high-dimensional regimes by condensing the cross-sectional information through factor models. We derive asymptotic properties for the estimate of the central subspace spanned by these projection directions, as well as for the estimates of the sufficient predictive indices. We further show that the natural method of running a multiple regression of the target on the estimated factors yields a linear estimate that actually falls into this central subspace. Our method and theory allow the number of predictors to be larger than the number of observations. We finally demonstrate that the sufficient forecasting improves upon linear forecasting in both simulation studies and an empirical study of forecasting macroeconomic variables. PMID:29731537
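The factor-extraction step underlying this approach can be sketched as the familiar PCA-plus-regression baseline (not the full sufficient-forecasting estimator); the simulated panel, factor count and train/test split below are assumptions made for the example.

```python
# Minimal sketch: extract latent factors from a large predictor panel via principal
# components, then forecast the target by regressing it on the estimated factors.
# This is the plain baseline the paper builds on; data here are simulated.
import numpy as np

rng = np.random.default_rng(1)
T, p, k = 300, 500, 3                        # time points, predictors, true factors
F = rng.normal(size=(T, k))                  # latent factors
Lam = rng.normal(size=(p, k))                # loadings
X = F @ Lam.T + rng.normal(scale=0.5, size=(T, p))
y = np.tanh(F[:, 0]) + 0.5 * F[:, 1] + 0.1 * rng.normal(size=T)   # nonlinear link

# Estimated factors: top-k principal components of the standardized panel.
Xc = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U[:, :k] * np.sqrt(T)                # estimated factors (up to rotation)

# Forecast by multiple regression of the target on the estimated factors.
tr, te = slice(0, 250), slice(250, T)
beta, *_ = np.linalg.lstsq(F_hat[tr], y[tr], rcond=None)
y_hat = F_hat[te] @ beta
r2 = 1 - np.sum((y[te] - y_hat) ** 2) / np.sum((y[te] - y[te].mean()) ** 2)
print("out-of-sample R2:", round(r2, 3))
```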
Cooling system for continuous metal casting machines
Draper, Robert; Sumpman, Wayne C.; Baker, Robert J.; Williams, Robert S.
1988-01-01
A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles 19 against the inner surface of rim 13 at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers 30 through return pipes 25 distributed interstitially among the nozzles.
Cooling system for continuous metal casting machines
Draper, R.; Sumpman, W.C.; Baker, R.J.; Williams, R.S.
1988-06-07
A continuous metal caster cooling system is provided in which water is supplied in jets from a large number of small nozzles against the inner surface of rim at a temperature and with sufficient pressure that the velocity of the jets is sufficiently high that the mode of heat transfer is substantially by forced convection, the liquid being returned from the cooling chambers through return pipes distributed interstitially among the nozzles. 9 figs.
Robustness of inflation to inhomogeneous initial conditions
NASA Astrophysics Data System (ADS)
Clough, Katy; Lim, Eugene A.; DiNunno, Brandon S.; Fischler, Willy; Flauger, Raphael; Paban, Sonia
2017-09-01
We consider the effects of inhomogeneous initial conditions in both the scalar field profile and the extrinsic curvature on different inflationary models. In particular, we compare the robustness of small field inflation to that of large field inflation, using numerical simulations with Einstein gravity in 3+1 dimensions. We find that small field inflation can fail in the presence of subdominant gradient energies, suggesting that it is much less robust to inhomogeneities than large field inflation, which withstands dominant gradient energies. However, we also show that small field inflation can be successful even if some regions of spacetime start out in the region of the potential that does not support inflation. In the large field case, we confirm previous results that inflation is robust if the inflaton occupies the inflationary part of the potential. Furthermore, we show that increasing initial scalar gradients will not form sufficiently massive inflation-ending black holes if the initial hypersurface is approximately flat. Finally, we consider the large field case with a varying extrinsic curvature K, such that some regions are initially collapsing. We find that this may again lead to local black holes, but overall the spacetime remains inflationary if the spacetime is open, which confirms previous theoretical studies.
Effects of SO(10)-inspired scalar non-universality on the MSSM parameter space at large tanβ
NASA Astrophysics Data System (ADS)
Ramage, M. R.
2005-08-01
We analyze the parameter space of the (μ>0, A=0) CMSSM at large tanβ with a small degree of non-universality originating from D-terms and Higgs-sfermion splitting inspired by SO(10) GUT models. The effects of such non-universalities on the sparticle spectrum and on observables such as (g-2)μ, B(b→Xsγ), the SUSY threshold corrections to the bottom mass and Ωh2 are examined in detail, and the consequences for the allowed parameter space of the model are investigated. We find that even small deviations from universality can result in large qualitative differences compared to the universal case; for certain values of the parameters, we find, even at low m0 and m1/2, that radiative electroweak symmetry breaking fails as a consequence of either μ2<0 or mA2<0. We find particularly large departures from the mSugra case for the neutralino relic density, which is sensitive to significant changes in the position and shape of the A resonance and a substantial increase in the Higgsino component of the LSP. However, we find that the corrections to the bottom mass are not sufficient to allow for Yukawa unification.
Dispersal Mutualism Incorporated into Large-Scale, Infrequent Disturbances
Parker, V. Thomas
2015-01-01
Because of their influence on succession and other community interactions, large-scale, infrequent natural disturbances should also play a major role in mutualistic interactions. Using field data and experiments, I test whether mutualisms have been incorporated into large-scale wildfire by asking whether the outcomes of a mutualism depend on the disturbance. In this study a seed dispersal mutualism is shown to depend on infrequent, large-scale disturbances. A dominant shrubland plant (Arctostaphylos species) produces seeds that make up a persistent soil seed bank and require fire to germinate. In post-fire stands, I show that seedlings emerging from rodent caches dominate sites experiencing higher fire intensity. Field experiments show that rodents (Peromyscus californicus, P. boylii) do cache Arctostaphylos fruit and bury most seed caches to a depth sufficient to survive the killing heat pulse that a fire might drive into the soil. While the rodent dispersal and caching behavior itself has not changed compared to other habitats, the environmental transformation caused by wildfire converts the burial of cached seed from a dispersal process into a plant fire-adaptive trait, and provides the context for stimulating subsequent life history evolution in the plant host. PMID:26151560
Collaborative visual analytics of radio surveys in the Big Data era
NASA Astrophysics Data System (ADS)
Vohl, Dany; Fluke, Christopher J.; Hassan, Amr H.; Barnes, David G.; Kilborn, Virginia A.
2017-06-01
Radio survey datasets comprise an increasing number of individual observations stored as sets of multidimensional data. In large survey projects, astronomers commonly face limitations regarding: 1) interactive visual analytics of sufficiently large subsets of data; 2) synchronous and asynchronous collaboration; and 3) documentation of the discovery workflow. To support collaborative data inquiry, we present encube, a large-scale comparative visual analytics framework. encube can utilise advanced visualization environments such as the CAVE2 (a hybrid 2D and 3D virtual reality environment powered with a 100 Tflop/s GPU-based supercomputer and 84 million pixels) for collaborative analysis of large subsets of data from radio surveys. It can also run on standard desktops, providing a capable visual analytics experience across the display ecology. encube is composed of four primary units enabling compute-intensive processing, advanced visualisation, dynamic interaction, parallel data query, along with data management. Its modularity will make it simple to incorporate astronomical analysis packages and Virtual Observatory capabilities developed within our community. We discuss how encube builds a bridge between high-end display systems (such as CAVE2) and the classical desktop, preserving all traces of the work completed on either platform - allowing the research process to continue wherever you are.
Advanced solar concentrator: Preliminary and detailed design
NASA Technical Reports Server (NTRS)
Bell, D. M.; Maraschin, R. A.; Matsushita, M. T.; Erskine, D.; Carlton, R.; Jakovcevic, A.; Yasuda, A. K.
1981-01-01
A single-reflection, point-focusing, two-axis-tracking paraboloidal dish with a reflector aperture diameter of approximately 11 m has a reflective surface made up of 64 independent, optical-quality gores. Each gore is a composite of a thin back-silvered mirror glass face sheet continuously bonded to a contoured substrate of lightweight, rigid cellular glass. The use of largely self-supporting gores allows a significant reduction in the weight of the steel support structure as compared to alternate design concepts. Primary emphasis in the preliminary design package for the low-cost, low-weight, mass-producible concentrator was placed on the design of the higher-cost subsystems. The outer gore element was designed in sufficient detail to allow fabrication of prototype gores.
Orally ingestion of krokodil in Spain: report of a case.
Baquero Escribano, Abel; Beltrán Negre, María Teresa; Calvo Orenga, Gema; Carratalá Monfort, Sonia; Arnau Peiró, Francisco; Meca Zapatero, Sara; Haro Cortés, Gonzalo
2016-06-14
Krokodil use disorder is an addictive pathology with severe organic effects, especially at the skin level, causing severe and degenerative necrosis of blood and muscle tissue. Though this disorder has a low prevalence in Spain compared to the large number of consumers in other countries such as Ukraine or Russia, its consumption is slowly but steadily expanding in countries of the European Union and America. The simplicity of the process of obtaining the substance from desomorphine, together with its high availability and low cost, contributes toward consumers' self-sufficiency. This article presents the case of a user of krokodil and reviews the clinical symptoms of oral ingestion.
Pierre, Th
2013-01-01
In a new toroidal laboratory plasma device including a poloidal magnetic field created by an internal circular conductor, the confinement efficiency of the magnetized plasma and the turbulence level are studied in different situations. The plasma density is greatly enhanced when a sufficiently large poloidal magnetic field is established. Moreover, the instabilities and the turbulence usually found in toroidal devices without sheared magnetic field lines are suppressed by the finite rotational transform. The particle confinement time is estimated from the measurement of the plasma decay time. It is compared to the Bohm diffusion time and to the value predicted by different diffusion models, in particular neoclassical diffusion involving trapped particles.
A new nonlinear conjugate gradient coefficient under strong Wolfe-Powell line search
NASA Astrophysics Data System (ADS)
Mohamed, Nur Syarafina; Mamat, Mustafa; Rivaie, Mohd
2017-08-01
A nonlinear conjugate gradient (CG) method plays an important role in solving large-scale unconstrained optimization problems. The method is widely used due to its simplicity and is known to possess the sufficient descent condition and global convergence properties. In this paper, a new nonlinear CG coefficient βk is presented, obtained by employing the strong Wolfe-Powell inexact line search. The performance of the new βk is tested in terms of number of iterations and central processing unit (CPU) time using MATLAB software with an Intel Core i7-3470 CPU processor. Numerical results show that the new βk converges rapidly compared to other classical CG methods.
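For orientation, a generic nonlinear CG loop with a strong Wolfe-Powell line search (via scipy's line_search) is sketched below; the Fletcher-Reeves coefficient is used as a stand-in, since the paper's new βk formula is not reproduced here, and the test function, gain constants and tolerances are arbitrary.

```python
# Generic nonlinear CG sketch with a strong Wolfe line search; Fletcher-Reeves
# beta is a placeholder for the paper's new coefficient.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def nonlinear_cg(f, grad, x0, tol=1e-6, max_iter=500):
    x = np.asarray(x0, float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        alpha, *_ = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)
        if alpha is None:                 # line search failed: restart with a small steepest-descent step
            d, alpha = -g, 1e-3
        x_new = x + alpha * d
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < tol:
            return x_new
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves coefficient (stand-in)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

print(nonlinear_cg(rosen, rosen_der, np.array([-1.2, 1.0, 0.5])))
```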
A theoretical comparison of evolutionary algorithms and simulated annealing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
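The kind of fixed-budget comparison the theory speaks to can be illustrated empirically; in the sketch below, a (1+1) evolutionary algorithm and simulated annealing spend the same number of function evaluations on an assumed multimodal test function, with the cooling schedule, mutation strength and budget chosen arbitrarily.

```python
# Toy comparison: a (1+1) EA and simulated annealing with equal evaluation budgets
# on the Rastrigin function. This only shows how such a comparison is set up; the
# paper's claim is theoretical and asymptotic.
import numpy as np

rng = np.random.default_rng(3)

def rastrigin(x):
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def one_plus_one_ea(budget, dim=5, sigma=0.3):
    x = rng.uniform(-5, 5, dim); fx = rastrigin(x)
    for _ in range(budget - 1):
        y = x + rng.normal(0, sigma, dim)
        fy = rastrigin(y)
        if fy <= fx:                      # elitist: accept only non-worsening moves
            x, fx = y, fy
    return fx                             # elitist, so fx is the best value seen

def simulated_annealing(budget, dim=5, sigma=0.3, T0=10.0):
    x = rng.uniform(-5, 5, dim); fx = rastrigin(x); best = fx
    for k in range(budget - 1):
        T = T0 / (1 + k)                  # simple cooling schedule
        y = x + rng.normal(0, sigma, dim)
        fy = rastrigin(y)
        if fy <= fx or rng.random() < np.exp(-(fy - fx) / T):
            x, fx = y, fy
        best = min(best, fy)
    return best

budget = 20000
print("EA mean best:", np.mean([one_plus_one_ea(budget) for _ in range(10)]))
print("SA mean best:", np.mean([simulated_annealing(budget) for _ in range(10)]))
```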
Park, Y; Subramanian, K; Verfaillie, C M; Hu, W S
2010-10-01
Many potential applications of stem cells require large quantities of cells, especially those involving large organs such as the liver. For such applications, a scalable reactor system is desirable to ensure a reliable supply of sufficient quantities of differentiation-competent or differentiated cells. We employed a microcarrier culture system for the expansion of undifferentiated rat multipotent adult progenitor cells (rMAPC) as well as for directed differentiation of these cells to hepatocyte-like cells. During the 4-day expansion culture, cell concentration increased 85-fold while the expression levels of pluripotency markers, as well as the MAPC differentiation potential, were maintained. Directed differentiation into hepatocyte-like cells on the microcarriers themselves gave results comparable to those observed with cells in static cultures. The cells expressed several mature hepatocyte-lineage genes and asialoglycoprotein receptor-1 (ASGPR-1) surface protein, and secreted albumin and urea. Microcarrier culture thus offers the potential of large-scale expansion and differentiation of stem cells in a more controlled bioreactor environment. Copyright © 2010 Elsevier B.V. All rights reserved.
Pittman, Andrew J.; Law, Mei-Yee; Chien, Chi-Bin
2008-01-01
Navigating axons respond to environmental guidance signals, but can also follow axons that have gone before them: pioneer axons. Pioneers have been studied extensively in simple systems, but the role of axon-axon interactions remains largely unexplored in large vertebrate axon tracts, where cohorts of identical axons could potentially use isotypic interactions to guide each other through multiple choice points. Furthermore, the relative importance of axon-axon interactions compared to axon-autonomous receptor function has not been assessed. Here we test the role of axon-axon interactions in retinotectal development by devising a technique to selectively remove or replace early-born retinal ganglion cells (RGCs). We find that early RGCs are both necessary and sufficient for later axons to exit the eye. Furthermore, introducing misrouted axons by transplantation reveals that guidance from eye to tectum relies heavily on interactions between axons, including both pioneer-follower and community effects. We conclude that axon-axon interactions and ligand-receptor signaling have coequal roles, cooperating to ensure the fidelity of axon guidance in developing vertebrate tracts. PMID:18653554
Modeling CMB lensing cross correlations with CLEFT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Modi, Chirag; White, Martin; Vlah, Zvonimir, E-mail: modichirag@berkeley.edu, E-mail: mwhite@berkeley.edu, E-mail: zvlah@stanford.edu
2017-08-01
A new generation of surveys will soon map large fractions of the sky to ever greater depths, and their science goals can be enhanced by exploiting cross correlations between them. In this paper we study cross correlations between the lensing of the CMB and biased tracers of large-scale structure at high z. We motivate the need for more sophisticated bias models for modeling increasingly biased tracers at these redshifts and propose the use of perturbation theories, specifically Convolution Lagrangian Effective Field Theory (CLEFT). Since such signals reside at large scales and redshifts, they can be well described by perturbative approaches. We compare our model with the current approach of using scale-independent bias coupled with fitting functions for non-linear matter power spectra, showing that the latter will not be sufficient for upcoming surveys. We illustrate our ideas by estimating σ8 from the auto- and cross-spectra of mock surveys, finding that CLEFT returns accurate and unbiased results at high z. We discuss uncertainties due to the redshift distribution of the tracers, and several avenues for future development.
Effect of H-wave polarization on laser radar detection of partially convex targets in random media.
El-Ocla, Hosam
2010-07-01
A study of the performance of the laser radar cross section (LRCS) of conducting targets with large sizes is carried out numerically in free space and in random media. The LRCS is calculated using a boundary value method with beam wave incidence and H-wave polarization. Considered are those elements that contribute to the LRCS problem, including random medium strength, target configuration, and beam width. The effect of the creeping waves, stimulated by H-polarization, on the LRCS behavior is demonstrated. Target sizes of up to five wavelengths are sufficiently larger than the beam width and are sufficient for considering fairly complex targets. Scatterers are assumed to have analytical, partially convex contours with inflection points.
NASA Technical Reports Server (NTRS)
Majda, G.
1985-01-01
A large set of variable-coefficient linear systems of ordinary differential equations which possess two different time scales, a slow one and a fast one, is considered. A small parameter epsilon characterizes the stiffness of these systems. A system of o.d.e.s. in this set is approximated by a general class of multistep discretizations which includes both one-leg and linear multistep methods. Sufficient conditions are determined under which each solution of a multistep method is uniformly bounded, with a bound which is independent of the stiffness of the system of o.d.e.s., when the step size resolves the slow time scale but not the fast one. This property is called stability with large step sizes. The theory presented lets one compare properties of one-leg methods and linear multistep methods when they approximate variable-coefficient systems of stiff o.d.e.s. In particular, it is shown that one-leg methods have better stability properties with large step sizes than their linear multistep counterparts. The theory also allows one to relate the concept of D-stability to the usual notions of stability and stability domains and to the propagation of errors for multistep methods which use large step sizes.
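The distinction between a linear multistep method and its one-leg twin shows up only for variable coefficients; the sketch below integrates an invented stiff, variable-coefficient scalar problem with the trapezoidal rule and with the one-leg implicit midpoint rule at a step size that resolves only the slow scale. It illustrates the setting, not the paper's analysis.

```python
# Invented stiff test problem y' = lam(t) y + forcing(t), with a fast, time-varying
# decay rate (scale eps) and slow forcing, integrated with a step that only
# resolves the slow scale.
import numpy as np

eps = 1e-4
lam = lambda t: -(1.0 + 0.9 * np.sin(5.0 * t)) / eps     # fast, time-varying decay
forcing = lambda t: np.cos(t)                            # slow forcing
# Quasi-steady behaviour for reference: y(t) ~ -forcing(t) / lam(t)

def step_trapezoidal(y, t, h):
    # linear multistep form: y_{n+1} = y_n + (h/2)[f(t_n, y_n) + f(t_{n+1}, y_{n+1})]
    a0, a1 = lam(t), lam(t + h)
    return (y + 0.5 * h * (a0 * y + forcing(t) + forcing(t + h))) / (1 - 0.5 * h * a1)

def step_one_leg_midpoint(y, t, h):
    # one-leg twin: y_{n+1} = y_n + h f(t_n + h/2, (y_n + y_{n+1}) / 2)
    am = lam(t + 0.5 * h)
    return (y + h * (0.5 * am * y + forcing(t + 0.5 * h))) / (1 - 0.5 * h * am)

h, T = 0.05, 5.0          # h resolves the slow scale only (fast scale ~ eps)
for stepper in (step_trapezoidal, step_one_leg_midpoint):
    y, t = 1.0, 0.0
    while t < T - 1e-12:
        y = stepper(y, t, h)
        t += h
    print(stepper.__name__, "final value:", y, " quasi-steady reference:", -forcing(T) / lam(T))
```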
Diversity and Community Can Coexist.
Stivala, Alex; Robins, Garry; Kashima, Yoshihisa; Kirley, Michael
2016-03-01
We examine the (in)compatibility of diversity and sense of community by means of agent-based models based on the well-known Schelling model of residential segregation and Axelrod model of cultural dissemination. We find that diversity and highly clustered social networks, on the assumptions of social tie formation based on spatial proximity and homophily, are incompatible when agent features are immutable, and this holds even for multiple independent features. We include both mutable and immutable features into a model that integrates Schelling and Axelrod models, and we find that even for multiple independent features, diversity and highly clustered social networks can be incompatible on the assumptions of social tie formation based on spatial proximity and homophily. However, this incompatibility breaks down when cultural diversity can be sufficiently large, at which point diversity and clustering need not be negatively correlated. This implies that segregation based on immutable characteristics such as race can possibly be overcome by sufficient similarity on mutable characteristics based on culture, which are subject to a process of social influence, provided a sufficiently large "scope of cultural possibilities" exists. © Society for Community Research and Action 2016.
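As a rough illustration of the modeling ingredients described above (not the authors' implementation), the following sketch combines one immutable feature with several mutable features, homophily-weighted interaction between spatial neighbours, and Axelrod-style copying of a differing mutable trait; the grid size, trait counts, and step count are arbitrary.

# Minimal illustrative sketch (not the authors' model): agents on a grid carry one
# immutable feature and F mutable features; neighbours interact with probability
# equal to their cultural overlap (homophily) and copy one differing mutable trait.
import numpy as np

rng = np.random.default_rng(0)
L, F, Q, STEPS = 20, 4, 5, 100_000                    # grid size, mutable features, traits, steps
immutable = rng.integers(0, 2, size=(L, L))           # e.g. two fixed groups
mutable = rng.integers(0, Q, size=(L, L, F))          # culture vectors

def overlap(a, b):
    """Fraction of shared features (immutable + mutable) between two agents."""
    same = (immutable[a] == immutable[b]) + np.sum(mutable[a] == mutable[b])
    return same / (F + 1)

for _ in range(STEPS):
    x, y = rng.integers(0, L, size=2)
    dx, dy = ((1, 0), (-1, 0), (0, 1), (0, -1))[rng.integers(4)]
    nx, ny = (x + dx) % L, (y + dy) % L               # periodic lattice neighbour
    if rng.random() < overlap((x, y), (nx, ny)):      # homophilous interaction
        diff = np.flatnonzero(mutable[x, y] != mutable[nx, ny])
        if diff.size:                                 # copy one differing mutable trait
            mutable[x, y, rng.choice(diff)] = mutable[nx, ny, rng.choice(diff)] if False else mutable[nx, ny, diff[rng.integers(diff.size)]]

# Diversity proxy: number of distinct full cultural profiles remaining.
profiles = {tuple(mutable[i, j]) + (immutable[i, j],) for i in range(L) for j in range(L)}
print("distinct cultural profiles:", len(profiles))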
TCR-engineered, customized, antitumor T cells for cancer immunotherapy: advantages and limitations.
Chhabra, Arvind
2011-01-05
The clinical outcome of the traditional adoptive cancer immunotherapy approaches involving the administration of donor-derived immune effectors, expanded ex vivo, has not met expectations. This could be attributed, in part, to the lack of sufficient high-avidity antitumor T-cell precursors in most cancer patients, poor immunogenicity of cancer cells, and the technological limitations to generate a sufficiently large number of tumor antigen-specific T cells. In addition, the host immune regulatory mechanisms and immune homeostasis mechanisms, such as activation-induced cell death (AICD), could further limit the clinical efficacy of the adoptively administered antitumor T cells. Since generation of a sufficiently large number of potent antitumor immune effectors for adoptive administration is critical for the clinical success of this approach, recent advances towards generating customized donor-specific antitumor-effector T cells by engrafting human peripheral blood-derived T cells with a tumor-associated antigen-specific transgenic T-cell receptor (TCR) are quite interesting. This manuscript provides a brief overview of the TCR engineering-based cancer immunotherapy approach, its advantages, and the current limitations.
NASA Astrophysics Data System (ADS)
Ng, C. S.; Bhattacharjee, A.
1996-08-01
A sufficient condition is obtained for the development of a finite-time singularity in a highly symmetric Euler flow, first proposed by Kida [J. Phys. Soc. Jpn. 54, 2132 (1985)] and recently simulated by Boratav and Pelz [Phys. Fluids 6, 2757 (1994)]. It is shown that if the second-order spatial derivative of the pressure (p_xx) is positive following a Lagrangian element (on the x axis), then a finite-time singularity must occur. Under some assumptions, this Lagrangian sufficient condition can be reduced to an Eulerian sufficient condition which requires that the fourth-order spatial derivative of the pressure (p_xxxx) at the origin be positive for all times leading up to the singularity. Analytical as well as direct numerical evaluation over a large ensemble of initial conditions demonstrates that, for fixed total energy, p_xxxx is predominantly positive with the average value growing with the number of modes.
The complex phase gradient method applied to leaky Lamb waves.
Lenoir, O; Conoir, J M; Izbicki, J L
2002-10-01
The classical phase gradient method applied to the characterization of the angular resonances of an immersed elastic plate, i.e., the angular poles of its reflection coefficient R, was proved to be efficient when their real parts are close to the real zeros of R and their imaginary parts are not too large compared to their real parts. This method consists of plotting the partial reflection coefficient phase derivative with respect to the sine of the incidence angle, considered as real, versus incidence angle. In the vicinity of a resonance, this curve exhibits a Breit-Wigner shape, whose minimum is located at the pole real part and whose amplitude is the inverse of its imaginary part. However, when the imaginary part is large, this method is not sufficiently accurate compared to the exact calculation of the complex angular root. An improvement of this method consists of plotting, in 3D, in the complex angle plane and at a given frequency, the angular phase derivative with respect to the real part of the sine of the incidence angle, considered as complex. When the angular pole is reached, the 3D curve shows a clear-cut transition whose position is easily obtained.
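A toy numerical sketch of the phase-gradient idea, assuming a single unit-modulus pole/zero factor in place of the full plate reflection coefficient: the derivative of the phase with respect to the real sine of the incidence angle is a Lorentzian centred at the real part of the pole, with a peak magnitude set by the inverse of its imaginary part.

# Illustrative sketch with a toy one-pole reflection coefficient (not the full
# elastic-plate R): R(s) = (s - conj(s0)) / (s - s0), s = sin(incidence angle).
# The phase derivative dphi/ds is a Lorentzian centred at Re(s0) whose peak
# magnitude scales as 1/Im(s0), as exploited by the phase-gradient method.
import numpy as np

s0 = 0.55 - 0.02j                      # hypothetical angular pole (sine of angle)
s = np.linspace(0.4, 0.7, 4001)        # real sin(theta) sweep

R = (s - np.conj(s0)) / (s - s0)       # unit-modulus resonant factor
phi = np.unwrap(np.angle(R))
dphi_ds = np.gradient(phi, s)

i = np.argmax(np.abs(dphi_ds))
print(f"extremum at s = {s[i]:.4f}   (Re s0 = {s0.real:.4f})")
print(f"|dphi/ds| at extremum = {abs(dphi_ds[i]):.1f}   2/|Im s0| = {2 / abs(s0.imag):.1f}")

Analytically, for this toy factor the phase derivative equals 2|Im s0| / ((s - Re s0)^2 + (Im s0)^2), so the printed extremum and peak height recover Re s0 and 2/|Im s0|.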
Many-body localization of bosons in optical lattices
NASA Astrophysics Data System (ADS)
Sierant, Piotr; Zakrzewski, Jakub
2018-04-01
Many-body localization for a system of bosons trapped in a one-dimensional lattice is discussed. Two models that may be realized for cold atoms in optical lattices are considered. The model with a random on-site potential is compared with the previously introduced random-interactions model. While the origin and character of the disorder in the two systems are different, they show interestingly similar properties. In particular, many-body localization appears for a sufficiently large disorder, as verified by the time evolution of initial density wave states as well as by statistical properties of energy levels for small system sizes. Starting with different initial states, we observe that the localization properties are energy-dependent, which reveals an inverted many-body localization edge in both systems (a finding also verified by statistical analysis of the energy spectrum). Moreover, we consider the computationally challenging regime of the transition between many-body localized and extended phases, where we observe a characteristic algebraic decay of density correlations which may be attributed to subdiffusion (and Griffiths-like regions) in the studied systems. Ergodicity breaking in the disordered Bose–Hubbard models is compared with the slowing-down of the time evolution of the clean system at large interactions.
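One of the level-statistics diagnostics alluded to above is the mean consecutive-gap ratio; a self-contained sketch (generic random matrices, not the Bose–Hubbard Hamiltonian) shows the two reference values such analyses compare against: roughly 0.53 for ergodic (GOE) spectra and roughly 0.386 for Poissonian, localized spectra.

# Sketch of the mean gap-ratio diagnostic used to distinguish ergodic from
# many-body-localized spectra (not the Bose-Hubbard calculation itself):
# r_n = min(g_n, g_{n+1}) / max(g_n, g_{n+1}) with g_n consecutive level spacings.
# GOE statistics give <r> ~ 0.53, Poisson statistics give <r> ~ 0.386.
import numpy as np

rng = np.random.default_rng(1)

def mean_gap_ratio(levels):
    g = np.diff(np.sort(levels))
    r = np.minimum(g[:-1], g[1:]) / np.maximum(g[:-1], g[1:])
    return r.mean()

N, samples = 400, 50
r_goe, r_poisson = [], []
for _ in range(samples):
    a = rng.normal(size=(N, N))
    h = (a + a.T) / np.sqrt(2)                                            # GOE-like real symmetric matrix
    r_goe.append(mean_gap_ratio(np.linalg.eigvalsh(h)))
    r_poisson.append(mean_gap_ratio(np.cumsum(rng.exponential(size=N))))  # Poissonian spectrum

print(f"<r> GOE     ~ {np.mean(r_goe):.3f}  (expected ~0.53)")
print(f"<r> Poisson ~ {np.mean(r_poisson):.3f}  (expected ~0.386)")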
Design and control of six degree-of-freedom active vibration isolation table.
Hong, Jinpyo; Park, Kyihwan
2010-03-01
A six-axis active vibration isolation system (AVIS) is designed by using the direct driven guide and ball contact mechanisms in order to have no cross-coupling between actuators. The point contact configuration gives an advantage of having an easy assembly of eight voice coil actuators to an upper and a base plate. A voice coil actuator is used since it can provide a large displacement and sufficient bandwidth required for vibration control. The AVIS is controlled considering the effect of flexible vibration mode in the upper plate and velocity sensor dynamics. A loop shaping technique and phase margin condition are applied to design a vibration controller. The performances of the AVIS are investigated in the frequency domain and finally validated by comparing with the passive isolation system. The scanning profiles of the specimen are compared together by using the atomic force microscope. The robustness of the AVIS is verified by showing the impulse response.
Olbrant, Edgar; Frank, Martin
2010-12-01
In this paper, we study a deterministic method for particle transport in biological tissues. The method is specifically developed for dose calculations in cancer therapy and for radiological imaging. Generalized Fokker-Planck (GFP) theory [Leakeas and Larsen, Nucl. Sci. Eng. 137 (2001), pp. 236-250] has been developed to improve the Fokker-Planck (FP) equation in cases where scattering is forward-peaked and where there is a sufficient amount of large-angle scattering. We compare grid-based numerical solutions to FP and GFP in realistic medical applications. First, electron dose calculations in heterogeneous parts of the human body are performed. Therefore, accurate electron scattering cross sections are included and their incorporation into our model is extensively described. Second, we solve GFP approximations of the radiative transport equation to investigate reflectance and transmittance of light in biological tissues. All results are compared with either Monte Carlo or discrete-ordinates transport solutions.
Comparative performance evaluation of transform coding in image pre-processing
NASA Astrophysics Data System (ADS)
Menon, Vignesh V.; NB, Harikrishnan; Narayanan, Gayathri; CK, Niveditha
2017-07-01
We are in the midst of a communication transformation that drives both the development and the dissemination of pioneering communication systems with ever-increasing fidelity and resolution. A great deal of research on image processing techniques has been driven by a growing demand for faster and easier encoding, storage and transmission of visual information. In this paper, the researchers highlight several techniques that can be used at the transmitter end to ease the transmission and reconstruction of images. The researchers investigate the performance of different image transform coding schemes used in pre-processing, their comparison and effectiveness, the necessary and sufficient conditions, and their properties and complexity of implementation. Motivated by prior advancements in image processing techniques, the researchers compare the performance of various contemporary image pre-processing frameworks: Compressed Sensing, Singular Value Decomposition, and the Integer Wavelet Transform. The paper shows the potential of the Integer Wavelet Transform to be an efficient pre-processing scheme.
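Of the compared pre-processing frameworks, the singular value decomposition branch is the simplest to sketch. The example below applies a truncated SVD to a synthetic image and reports reconstruction PSNR and a crude storage ratio per retained rank; the image, the rank choices, and the fidelity metric are illustrative assumptions, and the compressed sensing and integer wavelet branches are not reproduced.

# Sketch of one of the compared pre-processing schemes: low-rank (truncated SVD)
# approximation of an image, with PSNR as the fidelity measure. The synthetic image
# and rank choices are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
img = 0.6 * np.sin(8 * np.pi * x) * np.cos(5 * np.pi * y) + 0.4 * (x + y) / 2
img += 0.02 * rng.normal(size=img.shape)            # synthetic test image

U, s, Vt = np.linalg.svd(img, full_matrices=False)

def psnr(ref, rec):
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(np.ptp(ref) ** 2 / mse)

for k in (5, 10, 20, 50):
    rec = (U[:, :k] * s[:k]) @ Vt[:k, :]            # rank-k reconstruction
    kept = k * (U.shape[0] + Vt.shape[1] + 1)       # values stored vs. 256*256
    print(f"rank {k:3d}: PSNR = {psnr(img, rec):5.1f} dB, storage ratio = {kept / img.size:.3f}")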
Design and control of six degree-of-freedom active vibration isolation table
NASA Astrophysics Data System (ADS)
Hong, Jinpyo; Park, Kyihwan
2010-03-01
A six-axis active vibration isolation system (AVIS) is designed by using the direct driven guide and ball contact mechanisms in order to have no cross-coupling between actuators. The point contact configuration gives an advantage of having an easy assembly of eight voice coil actuators to an upper and a base plate. A voice coil actuator is used since it can provide a large displacement and sufficient bandwidth required for vibration control. The AVIS is controlled considering the effect of flexible vibration mode in the upper plate and velocity sensor dynamics. A loop shaping technique and phase margin condition are applied to design a vibration controller. The performances of the AVIS are investigated in the frequency domain and finally validated by comparing with the passive isolation system. The scanning profiles of the specimen are compared together by using the atomic force microscope. The robustness of the AVIS is verified by showing the impulse response.
Efficacy of cognitive-behavioral therapy for childhood anxiety and depression.
Crowe, Katherine; McKay, Dean
2017-06-01
A review of meta-analyses of cognitive-behavioral therapy (CBT) for childhood anxiety and depression was conducted. A total of 36 meta-analyses were identified that met inclusion criteria for this review. In most cases, medium-to-large effect sizes for treatment reduction were observed when CBT was compared to non-active control conditions. Small-to-medium effects were observed when CBT was compared to active control treatments. The available meta-analyses generally did not examine, or data were not sufficient to evaluate, potential moderators of outcome, differential effects for parental involvement, or changes in quality of life or functional outcomes associated with treatment. Accordingly, while CBT should be broadly considered an effective treatment approach for childhood anxiety and depression, additional research is warranted in order to establish guidelines for service delivery for complicating factors in client presentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Regulations Relating to Public Welfare (Continued) OFFICE OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH... sufficient information for ANA to determine the extent to which the recipient meets ANA project evaluation standards. Sufficient information means information adequate to enable ANA to compare the recipient's...
Code of Federal Regulations, 2011 CFR
2011-10-01
... Regulations Relating to Public Welfare (Continued) OFFICE OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH... sufficient information for ANA to determine the extent to which the recipient meets ANA project evaluation standards. Sufficient information means information adequate to enable ANA to compare the recipient's...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Regulations Relating to Public Welfare (Continued) OFFICE OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH... sufficient information for ANA to determine the extent to which the recipient meets ANA project evaluation standards. Sufficient information means information adequate to enable ANA to compare the recipient's...
Code of Federal Regulations, 2014 CFR
2014-10-01
... Regulations Relating to Public Welfare (Continued) OFFICE OF HUMAN DEVELOPMENT SERVICES, DEPARTMENT OF HEALTH... sufficient information for ANA to determine the extent to which the recipient meets ANA project evaluation standards. Sufficient information means information adequate to enable ANA to compare the recipient's...
Implementation and Assessment of Advanced Analog Vector-Matrix Processor
NASA Technical Reports Server (NTRS)
Gary, Charles K.; Bualat, Maria G.; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
This paper discusses the design and implementation of an analog optical vector-matrix coprocessor with a throughput of 128 Mops for a personal computer. Vector matrix calculations are inherently parallel, providing a promising domain for the use of optical calculators. However, to date, digital optical systems have proven too cumbersome to replace electronics, and analog processors have not demonstrated sufficient accuracy in large scale systems. The goal of the work described in this paper is to demonstrate a viable optical coprocessor for linear operations. The analog optical processor presented has been integrated with a personal computer to provide full functionality and is the first demonstration of an optical linear algebra processor with a throughput greater than 100 Mops. The optical vector matrix processor consists of a laser diode source, an acoustooptical modulator array to input the vector information, a liquid crystal spatial light modulator to input the matrix information, an avalanche photodiode array to read out the result vector of the vector matrix multiplication, as well as transport optics and the electronics necessary to drive the optical modulators and interface to the computer. The intent of this research is to provide a low cost, highly energy efficient coprocessor for linear operations. Measurements of the analog accuracy of the processor performing 128 Mops are presented along with an assessment of the implications for future systems. A range of noise sources, including cross-talk, source amplitude fluctuations, shot noise at the detector, and non-linearities of the optoelectronic components are measured and compared to determine the most significant source of error. The possibilities for reducing these sources of error are discussed. Also, the total error is compared with that expected from a statistical analysis of the individual components and their relation to the vector-matrix operation. The sufficiency of the measured accuracy of the processor is compared with that required for a range of typical problems. Calculations resolving alloy concentrations from spectral plume data of rocket engines are implemented on the optical processor, demonstrating its sufficiency for this problem. We also show how this technology can be easily extended to a 100 x 100 10 MHz (200 Gops) processor.
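The kind of error budget described can be mimicked with a simple simulation of y = Ax under multiplicative source/modulator fluctuations, additive cross-talk, and shot-like detector noise, comparing the combined error with a quadrature sum of the individual contributions. All noise magnitudes below are hypothetical and are not the measured values reported for the processor.

# Illustrative error-budget sketch for an analog vector-matrix multiply y = A @ x
# (noise magnitudes are hypothetical, not measured values from the processor).
import numpy as np

rng = np.random.default_rng(2)
n = 128
A = rng.uniform(0, 1, size=(n, n))      # matrix loaded on the spatial light modulator
x = rng.uniform(0, 1, size=n)           # input vector on the acousto-optic modulators
y_exact = A @ x

def analog_mvm(sigma_src=0.0, sigma_xtalk=0.0, sigma_shot=0.0):
    """One noisy analog evaluation of A @ x with three independent error sources."""
    A_eff = A * (1 + sigma_src * rng.normal(size=A.shape))            # source/SLM fluctuations
    y = A_eff @ x
    y += sigma_xtalk * y_exact.mean() * rng.normal(size=n)            # additive cross-talk
    y += sigma_shot * np.sqrt(np.maximum(y, 0)) * rng.normal(size=n)  # shot-like detector noise
    return y

def rel_err(kw, trials=200):
    e = [np.linalg.norm(analog_mvm(**kw) - y_exact) / np.linalg.norm(y_exact)
         for _ in range(trials)]
    return np.mean(e)

e_src   = rel_err(dict(sigma_src=0.01))
e_xtalk = rel_err(dict(sigma_xtalk=0.01))
e_shot  = rel_err(dict(sigma_shot=0.02))
e_all   = rel_err(dict(sigma_src=0.01, sigma_xtalk=0.01, sigma_shot=0.02))
print(f"individual: {e_src:.4f} {e_xtalk:.4f} {e_shot:.4f}")
print(f"combined  : {e_all:.4f}  vs quadrature sum {np.sqrt(e_src**2 + e_xtalk**2 + e_shot**2):.4f}")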
Assays for the activities of polyamine biosynthetic enzymes using intact tissues
Rakesh Minocha; Stephanie Long; Hisae Maki; Subhash C. Minocha
1999-01-01
Traditionally, most enzyme assays utilize homogenized cell extracts with or without dialysis. Homogenization and centrifugation of large numbers of samples for screening of mutants and transgenic cell lines is quite cumbersome and generally requires sufficiently large amounts (hundreds of milligrams) of tissue. However, in situations where the tissue is available in...
Monitoring conservation success in a large oak woodland landscape
Rich Reiner; Emma Underwood; John-O Niles
2002-01-01
Monitoring is essential in understanding the success or failure of a conservation project and provides the information needed to conduct adaptive management. Although there is a large body of literature on monitoring design, it fails to provide sufficient information to practitioners on how to organize and apply monitoring when implementing landscape-scale conservation...
Solving the critical thermal bowing in 3C-SiC/Si(111) by a tilting Si pillar architecture
NASA Astrophysics Data System (ADS)
Albani, Marco; Marzegalli, Anna; Bergamaschini, Roberto; Mauceri, Marco; Crippa, Danilo; La Via, Francesco; von Känel, Hans; Miglio, Leo
2018-05-01
The exceptionally large thermal strain in few-micrometers-thick 3C-SiC films on Si(111), causing severe wafer bending and cracking, is demonstrated to be elastically quenched by substrate patterning in finite arrays of Si micro-pillars, sufficiently large in aspect ratio to allow for lateral pillar tilting, both by simulations and by preliminary experiments. In suspended SiC patches, the mechanical problem is addressed by finite element method: both the strain relaxation and the wafer curvature are calculated at different pillar height, array size, and film thickness. Patches as large as required by power electronic devices (500-1000 μm in size) show a remarkable residual strain in the central area, unless the pillar aspect ratio is made sufficiently large to allow peripheral pillars to accommodate the full film retraction. A sublinear relationship between the pillar aspect ratio and the patch size, guaranteeing a minimal curvature radius, as required for wafer processing and micro-crack prevention, is shown to be valid for any heteroepitaxial system.
Large Angle Transient Dynamics (LATDYN) user's manual
NASA Technical Reports Server (NTRS)
Abrahamson, A. Louis; Chang, Che-Wei; Powell, Michael G.; Wu, Shih-Chin; Bingel, Bradford D.; Theophilos, Paula M.
1991-01-01
A computer code for modeling the large angle transient dynamics (LATDYN) of structures was developed to investigate techniques for analyzing flexible deformation and control/structure interaction problems associated with large angular motions of spacecraft. This type of analysis is beyond the routine capability of conventional analytical tools without simplifying assumptions. In some instances, the motion may be sufficiently slow and the spacecraft (or component) sufficiently rigid to simplify analyses of dynamics and controls by making pseudo-static and/or rigid body assumptions. The LATDYN introduces a new approach to the problem by combining finite element structural analysis, multi-body dynamics, and control system analysis in a single tool. It includes a type of finite element that can deform and rotate through large angles at the same time, and which can be connected to other finite elements either rigidly or through mechanical joints. The LATDYN also provides symbolic capabilities for modeling control systems which are interfaced directly with the finite element structural model. Thus, the nonlinear equations representing the structural model are integrated along with the equations representing sensors, processing, and controls as a coupled system.
System for producing a uniform rubble bed for in situ processes
Galloway, T.R.
1983-07-05
A method and a cutter are disclosed for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head has a hollow body with a generally circular base and sloping upper surface. A hollow shaft extends from the hollow body. Cutter teeth are mounted on the upper surface of the body and relatively small holes are formed in the body between the cutter teeth. Relatively large peripheral flutes around the body allow material to drop below the drill head. A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale. 4 figs.
Kiraz, Nuri; Oz, Yasemin; Aslan, Huseyin; Erturan, Zayre; Ener, Beyza; Akdagli, Sevtap Arikan; Muslumanoglu, Hamza; Cetinkaya, Zafer
2015-10-01
Although conventional identification of pathogenic fungi is based on a combination of tests evaluating their morphological and biochemical characteristics, these tests can fail to identify less common species or to differentiate closely related species. In addition, they are time-consuming, labour-intensive and require experienced personnel. We evaluated the feasibility and sufficiency of DNA extraction by Whatman FTA filter matrix technology and DNA sequencing of the D1-D2 region of the large ribosomal subunit gene for identification of clinical isolates of 21 yeasts and 160 moulds in our clinical mycology laboratory. While the yeast isolates were identified at species level with 100% homology, 102 (63.75%) clinically important mould isolates were identified at species level and 56 (35%) isolates at genus level against fungal sequences existing in DNA databases, and two (1.25%) isolates could not be identified. Consequently, Whatman FTA filter matrix technology was a useful method for extraction of fungal DNA: extremely rapid, practical and successful. The sequence analysis strategy based on the D1-D2 region of the large ribosomal subunit gene was found to be sufficient for identification to genus level for most clinical fungi. However, identification to species level, and especially discrimination of closely related species, may require additional analysis. © 2015 Blackwell Verlag GmbH.
Proton velocity ring-driven instabilities and their dependence on the ring speed: Linear theory
NASA Astrophysics Data System (ADS)
Min, Kyungguk; Liu, Kaijun; Gary, S. Peter
2017-08-01
Linear dispersion theory is used to study the Alfvén-cyclotron, mirror and ion Bernstein instabilities driven by a tenuous (1%) warm proton ring velocity distribution with a ring speed, v_r, varying between 2v_A and 10v_A, where v_A is the Alfvén speed. Relatively cool background protons and electrons are assumed. The modeled ring velocity distributions are unstable to both the Alfvén-cyclotron and ion Bernstein instabilities whose maximum growth rates are roughly a linear function of the ring speed. The mirror mode, which has real frequency ω_r = 0, becomes the fastest growing mode for sufficiently large v_r/v_A. The mirror and Bernstein instabilities have maximum growth at propagation oblique to the background magnetic field and become more field-aligned with an increasing ring speed. Considering its largest growth rate, the mirror mode, in addition to the Alfvén-cyclotron mode, can cause pitch angle diffusion of the ring protons when the ring speed becomes sufficiently large. Moreover, because the parallel phase speed, v_∥ph, becomes sufficiently small relative to v_r, the low-frequency Bernstein waves can also aid the pitch angle scattering of the ring protons for large v_r. Potential implications of including these two instabilities at oblique propagation on heliospheric pickup ion dynamics are discussed.
NASA Astrophysics Data System (ADS)
Matsuda, K.; Onishi, R.; Takahashi, K.
2017-12-01
Urban high temperatures due to the combined influence of global warming and urban heat islands increase the risk of heat stroke. Greenery is one possible countermeasure for mitigating the heat environment, since the transpiration and shading effects of trees can reduce the air temperature and the radiative heat flux. In order to formulate effective measures, it is important to estimate the influence of greenery on the heat stroke risk. In this study, we have developed a tree-crown-resolving large-eddy simulation (LES) model that is coupled with a three-dimensional radiative transfer (3DRT) model. The Multi-Scale Simulator for the Geoenvironment (MSSG) is used for performing building- and tree-crown-resolving LES. The 3DRT model is implemented in the MSSG so that the 3DRT is calculated repeatedly during the time integration of the LES. We have confirmed that the computational time for the 3DRT model is negligibly small compared with that for the LES and that the accuracy of the 3DRT model is sufficiently high to evaluate the radiative heat flux at the pedestrian level. The present model is applied to the analysis of the heat environment in an actual urban area around Tokyo Bay, covering 8 km × 8 km with a 5-m grid mesh, in order to confirm its feasibility. The results show that the wet-bulb globe temperature (WBGT), which is an indicator of the heat stroke risk, is predicted with sufficiently high accuracy to evaluate the influence of tree crowns on the heat environment. In addition, by comparing with a case without the greenery in the Tokyo Bay area, we have confirmed that the greenery increases the low-WBGT areas in major pedestrian spaces by a factor of 3.4. This indicates that the present model can predict the greenery effect on the urban heat environment quantitatively.
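The WBGT itself is a standard weighted index (the ISO 7243 outdoor form is assumed in the sketch below); how the natural wet-bulb and globe temperatures are derived from the LES and 3DRT fields is model-specific, so the input values here are placeholders only.

# Outdoor WBGT as a weighted index (ISO 7243 form); how the natural wet-bulb and
# globe temperatures are obtained from the LES + 3D radiative transfer fields is
# model-specific, so the inputs below are illustrative placeholders.
def wbgt_outdoor(t_nwb, t_globe, t_air):
    """Wet-bulb globe temperature [deg C] for outdoor, sun-exposed conditions."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_air

# Example: a shaded (tree-crown) vs. unshaded pedestrian spot, hypothetical values.
print("unshaded:", wbgt_outdoor(t_nwb=27.0, t_globe=45.0, t_air=33.0))  # ~31.2 C
print("shaded  :", wbgt_outdoor(t_nwb=26.0, t_globe=36.0, t_air=32.0))  # ~28.6 C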
Democracy, Equal Citizenship, and Education
ERIC Educational Resources Information Center
Callan, Eamonn
2016-01-01
Two appealing principles of educational distribution--equality and sufficiency--are comparatively assessed. The initial point of comparison is the distribution of civic educational goods. One reason to favor equality in educational distribution rather than sufficiency is the elimination of undeserved positional advantage in access to labor…
Potential of lattice Boltzmann to model droplets on chemically stripe-patterned substrates
NASA Astrophysics Data System (ADS)
Patrick Jansen, H.; Sotthewes, K.; Zandvliet, Harold J. W.; Kooij, E. Stefan
2016-01-01
Lattice Boltzmann modelling (LBM) has recently been applied to a range of different wetting situations. Here we demonstrate its potential in representing complex kinetic effects encountered in droplets on chemically stripe-patterned surfaces. An ultimate example of the power of LBM is provided by comparing simulations and experiments of impacting droplets with varying Weber numbers. Also, the shape evolution of droplets is discussed in relation to their final shape. The latter can then be compared to Surface Evolver (SE) results, since under the proper boundary conditions both approaches should yield the same configuration in a static state. During droplet growth in LBM simulations, achieved by increasing the density within the droplet, the contact line initially advances in the direction parallel to the stripes, therewith increasing its aspect ratio. Once the volume becomes too large the droplet starts wetting additional stripes, leading to a lower aspect ratio. The maximum aspect ratio is shown to be a function of the width ratio of the hydrophobic and hydrophilic stripes and also their absolute widths. In the limit of sufficiently large stripe widths the aspect ratio is solely dependent on the relative stripe widths. The maximum droplet aspect ratio in the LBM simulations is compared to SE simulations and results are shown to be in good agreement. Additionally, we also show the ability of LBM to investigate single stripe wetting, enabling determination of the maximum aspect ratio that can be achieved in the limit of negligible hydrophobic stripe width, under the constraint that the stripe widths are large enough such that they are not easily crossed.
The microwave radiometer spacecraft: A design study
NASA Technical Reports Server (NTRS)
Wright, R. L. (Editor)
1981-01-01
A large passive microwave radiometer spacecraft with near all weather capability of monitoring soil moisture for global crop forecasting was designed. The design, emphasizing large space structures technology, characterized the mission hardware at the conceptual level in sufficient detail to identify enabling and pacing technologies. Mission and spacecraft requirements, design and structural concepts, electromagnetic concepts, and control concepts are addressed.
ERIC Educational Resources Information Center
Bowman, Thomas G.
2012-01-01
The athletic training profession is in the midst of a large increase in demand for health care professionals for the physically active. In order to meet demand, directors of athletic training education programs (ATEPs) are challenged with providing sufficient graduates. There has been a large increase in ATEPs nationwide since educational reform…
Entropy production during an isothermal phase transition in the early universe
NASA Astrophysics Data System (ADS)
Kaempfer, B.
The analytical model of Lodenquai and Dixit (1983) and of Bonometto and Matarrese (1983) of an isothermal era in the early universe is extended here to arbitrary temperatures. It is found that a sufficiently large supercooling gives rise to a large entropy production which may significantly dilute the primordial monopole or baryon to entropy ratio. Whether such large supercooling can be achieved depends on the characteristics of the nucleation process.
Method and apparatus for thermal processing of semiconductor substrates
Griffiths, Stewart K.; Nilson, Robert H.; Mattson, Brad S.; Savas, Stephen E.
2002-01-01
An improved apparatus and method for thermal processing of semiconductor wafers. The apparatus and method provide the temperature stability and uniformity of a conventional batch furnace as well as the processing speed and reduced time-at-temperature of a lamp-heated rapid thermal processor (RTP). Individual wafers are rapidly inserted into and withdrawn from a furnace cavity held at a nearly constant and isothermal temperature. The speeds of insertion and withdrawal are sufficiently large to limit thermal stresses and thereby reduce or prevent plastic deformation of the wafer as it enters and leaves the furnace. By processing the semiconductor wafer in a substantially isothermal cavity, the wafer temperature and spatial uniformity of the wafer temperature can be ensured by measuring and controlling only temperatures of the cavity walls. Further, peak power requirements are very small compared to lamp-heated RTPs because the cavity temperature is not cycled and the thermal mass of the cavity is relatively large. Increased speeds of insertion and/or removal may also be used with non-isothermal furnaces.
Method and apparatus for thermal processing of semiconductor substrates
Griffiths, Stewart K.; Nilson, Robert H.; Mattson, Brad S.; Savas, Stephen E.
2000-01-01
An improved apparatus and method for thermal processing of semiconductor wafers. The apparatus and method provide the temperature stability and uniformity of a conventional batch furnace as well as the processing speed and reduced time-at-temperature of a lamp-heated rapid thermal processor (RTP). Individual wafers are rapidly inserted into and withdrawn from a furnace cavity held at a nearly constant and isothermal temperature. The speeds of insertion and withdrawal are sufficiently large to limit thermal stresses and thereby reduce or prevent plastic deformation of the wafer as it enters and leaves the furnace. By processing the semiconductor wafer in a substantially isothermal cavity, the wafer temperature and spatial uniformity of the wafer temperature can be ensured by measuring and controlling only temperatures of the cavity walls. Further, peak power requirements are very small compared to lamp-heated RTPs because the cavity temperature is not cycled and the thermal mass of the cavity is relatively large. Increased speeds of insertion and/or removal may also be used with non-isothermal furnaces.
Solution-Processed Metal Coating to Nonwoven Fabrics for Wearable Rechargeable Batteries.
Lee, Kyulin; Choi, Jin Hyeok; Lee, Hye Moon; Kim, Ki Jae; Choi, Jang Wook
2017-12-27
Wearable rechargeable batteries require electrode platforms that can withstand various physical motions, such as bending, folding, and twisting. To this end, conductive textiles and paper have been highlighted, as their porous structures can accommodate the stress built during various physical motions. However, fabrics with plain weaves or knit structures have been mostly adopted without exploration of nonwoven counterparts. Also, the integration of conductive materials, such as carbon or metal nanomaterials, to achieve sufficient conductivity as current collectors is not well-aligned with large-scale processing in terms of cost and quality control. Here, the superiority of nonwoven fabrics is reported in electrochemical performance and bending capability compared to currently dominant woven counterparts, due to smooth morphology near the fiber intersections and the homogeneous distribution of fibers. Moreover, solution-processed electroless deposition of aluminum and nickel-copper composite is adopted for cathodes and anodes, respectively, demonstrating the large-scale feasibility of conductive nonwoven platforms for wearable rechargeable batteries. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Ter-Martirosyan, Z. G.; Ter-Martirosyan, A. Z.; Sidorov, V. V.
2017-11-01
Deep foundations are used in the design of high-rise buildings because of the large pressures transferred to the soil base. Building foundations sometimes use barrettes, which are able to carry significant vertical and horizontal loads owing to their enlarged lateral surface. Barrettes have a higher load-bearing capacity than large-diameter piles. In modern practice the interaction between barrettes and soil is investigated by analytical and numerical methods and lacks sufficient experimental confirmation. This article reviews experimental methods for studying the stress-strain state of a uniform soil mass interacting with elements of a deep foundation. Experimental research is planned with the use of a laboratory stand in order to obtain qualitative data on barrette-soil interaction, to assess the adequacy of a settlement model, and to support numerical studies of the stress-strain state.
Ultrasonic seam welding on thin silicon solar cells
NASA Technical Reports Server (NTRS)
Stofel, E. J.
1982-01-01
The ultrathin silicon solar cell has progressed to where it is a serious candidate for future light weight or radiation tolerant spacecraft. The ultrasonic method of producing welds was found to be satisfactory. These ultrathin cells could be handled without breakage in a semiautomated welding machine. This is a prototype of a machine capable of production rates sufficiently large to support spacecraft array assembly needs. For comparative purposes, this project also welded a variety of cells with thicknesses up to 0.23 mm as well as the 0.07 mm ultrathin cells. There was no electrical degradation in any cells. The mechanical pull strength of welds on the thick cells was excellent when using a large welding force. The mechanical strength of welds on thin cells was less since only a small welding force could be used without cracking these cells. Even so, the strength of welds on thin cells appears adequate for array application. The ability of such welds to survive multiyear, near Earth orbit thermal cycles needs to be demonstrated.
Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.
Stegmaier, Johannes; Otte, Jens C; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G Ulrich; Strähle, Uwe; Mikut, Ralf
2014-01-01
Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
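The final step named in the abstract, global Otsu thresholding, can be sketched on its own; the seed-point-based gradient/normal transform that precedes it in the paper is not reproduced, and the synthetic image below is only a stand-in for stained-nucleus data.

# Sketch of the global Otsu thresholding step on a synthetic bimodal image
# (the seed-point-based image transform from the paper is not reproduced here).
import numpy as np

rng = np.random.default_rng(3)
img = rng.normal(40, 8, size=(256, 256))             # background intensities
yy, xx = np.mgrid[0:256, 0:256]
for cy, cx in [(60, 70), (150, 180), (200, 60)]:     # three bright "nuclei"
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 < 20 ** 2
    img[mask] = rng.normal(160, 10, size=mask.sum())

def otsu_threshold(image, bins=256):
    """Return the threshold maximising the between-class variance."""
    hist, edges = np.histogram(image, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    w0 = np.cumsum(w)                                # class probabilities
    w1 = 1 - w0
    mu = np.cumsum(w * centers)
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)  # between-class variance
    return centers[np.nanargmax(between)]

t = otsu_threshold(img)
seg = img > t
print(f"Otsu threshold = {t:.1f}, segmented fraction = {seg.mean():.3f}")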
NASA Astrophysics Data System (ADS)
Kishore Kumar, G.; Nesse Tyssøy, H.; Williams, Bifford P.
2018-03-01
We investigate the possibility that sufficiently large electric fields and/or ionization during geomagnetically disturbed conditions may invalidate the assumptions applied in the retrieval of neutral horizontal winds from meteor and/or lidar measurements. To our knowledge, the possible errors in the wind estimation have never been reported. In the present case study, we have been using co-located meteor radar and sodium resonance lidar zonal wind measurements over Andenes (69.27°N, 16.04°E) during intense substorms in the declining phase of the January 2005 solar proton event (21-22 January 2005). In total, 14 h of measurements are available for the comparison, which covers both quiet and disturbed conditions. For comparison, the lidar zonal wind measurements are averaged over the same time and altitude as the meteor radar wind measurements. High cross correlations (∼0.8) are found in all height regions. The discrepancies can be explained in light of differences in the observational volumes of the two instruments. Further, we extended the comparison to address the electric field and/or ionization impact on the neutral wind estimation. For the periods of low ionization, the neutral winds estimated with both instruments are quite consistent with each other. During periods of elevated ionization, comparatively large differences are noticed at the uppermost altitude, which might be due to the electric field and/or ionization impact on the wind estimation. At present, one event is not sufficient to draw any firm conclusion. Further study with more co-located measurements is needed to test the statistical significance of the result.
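A minimal sketch of the comparison step, with synthetic wind fields standing in for the Andenes measurements: average the finer-resolution lidar winds into the meteor-radar time/height bins and compute the cross correlation. Bin sizes and noise levels are illustrative assumptions.

# Sketch of the comparison step: average lidar zonal winds into the meteor-radar
# time/height bins, then correlate. All arrays below are synthetic placeholders
# standing in for the real measurements.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "truth": a tidal-like zonal wind field u(t, z).
t_lidar = np.arange(0, 14, 0.05)                    # hours, fine lidar sampling
z_lidar = np.arange(80, 100, 0.5)                   # km
T, Z = np.meshgrid(t_lidar, z_lidar, indexing="ij")
u = 40 * np.sin(2 * np.pi * (T / 12 + Z / 25))
u_lidar = u + 8 * rng.normal(size=u.shape)          # lidar noise

radar_dt, radar_dz = 1.0, 2.0                       # radar bins: 1 h x 2 km
t_edges = np.arange(0, 14 + radar_dt, radar_dt)
z_edges = np.arange(80, 100 + radar_dz, radar_dz)

def bin_average(field):
    out = np.full((len(t_edges) - 1, len(z_edges) - 1), np.nan)
    for i in range(len(t_edges) - 1):
        ti = (t_lidar >= t_edges[i]) & (t_lidar < t_edges[i + 1])
        for j in range(len(z_edges) - 1):
            zj = (z_lidar >= z_edges[j]) & (z_lidar < z_edges[j + 1])
            out[i, j] = field[np.ix_(ti, zj)].mean()
    return out

u_lidar_binned = bin_average(u_lidar)
u_radar = bin_average(u + 10 * rng.normal(size=u.shape))   # stand-in radar winds

r = np.corrcoef(u_lidar_binned.ravel(), u_radar.ravel())[0, 1]
print(f"cross correlation of binned zonal winds: r = {r:.2f}")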
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tojo, H.; Hatae, T.; Hamano, T.
2013-09-15
Collection optics for core measurements in a JT-60SA Thomson scattering system were designed. The collection optics will be installed in a limited space and have a wide field of view and wide wavelength range. Two types of the optics are therefore suggested: refraction and reflection types. The reflection system, with a large primary mirror, avoids large chromatic aberrations. Because the size limit of the primary mirror and vignetting due to the secondary mirror affect the total collection throughput, conditions that provide the high throughput are found through an optimization. A refraction system with four lenses forming an Ernostar system is also employed. The use of high-refractive-index glass materials enhances the freedom of the lens curvatures, resulting in suppression of the spherical and coma aberration. Moreover, sufficient throughput can be achieved, even with smaller lenses than that of a previous design given in [H. Tojo, T. Hatae, T. Sakuma, T. Hamano, K. Itami, Y. Aida, S. Suitoh, and D. Fujie, Rev. Sci. Instrum. 81, 10D539 (2010)]. The optical resolutions of the reflection and refraction systems are both sufficient for understanding the spatial structures in plasma. In particular, the spot sizes at the image of the optics are evaluated as ∼0.3 mm and ∼0.4 mm, respectively. The throughput for the two systems, including the pupil size and transmissivity, are also compared. The results show that good measurement accuracy (<10%) even at high electron temperatures (<30 keV) can be expected in the refraction system.
Tojo, H; Hatae, T; Hamano, T; Sakuma, T; Itami, K
2013-09-01
Collection optics for core measurements in a JT-60SA Thomson scattering system were designed. The collection optics will be installed in a limited space and have a wide field of view and wide wavelength range. Two types of the optics are therefore suggested: refraction and reflection types. The reflection system, with a large primary mirror, avoids large chromatic aberrations. Because the size limit of the primary mirror and vignetting due to the secondary mirror affect the total collection throughput, conditions that provide the high throughput are found through an optimization. A refraction system with four lenses forming an Ernostar system is also employed. The use of high-refractive-index glass materials enhances the freedom of the lens curvatures, resulting in suppression of the spherical and coma aberration. Moreover, sufficient throughput can be achieved, even with smaller lenses than that of a previous design given in [H. Tojo, T. Hatae, T. Sakuma, T. Hamano, K. Itami, Y. Aida, S. Suitoh, and D. Fujie, Rev. Sci. Instrum. 81, 10D539 (2010)]. The optical resolutions of the reflection and refraction systems are both sufficient for understanding the spatial structures in plasma. In particular, the spot sizes at the image of the optics are evaluated as ~0.3 mm and ~0.4 mm, respectively. The throughput for the two systems, including the pupil size and transmissivity, are also compared. The results show that good measurement accuracy (<10%) even at high electron temperatures (<30 keV) can be expected in the refraction system.
LES with and without explicit filtering: comparison and assessment of various models
NASA Astrophysics Data System (ADS)
Winckelmans, Gregoire S.; Jeanmart, Herve; Wray, Alan A.; Carati, Daniele
2000-11-01
The proper mathematical formalism for large eddy simulation (LES) of turbulent flows assumes that a regular "explicit" filter (i.e., a filter with a well-defined second moment, such as the Gaussian, the top hat, etc.) is applied to the equations of fluid motion. This filter is then responsible for a "filtered-scale" stress. Because of the discretization of the filtered equations, using the LES grid, there is also a "subgrid-scale" stress. The global effective stress is found to be the discretization of a filtered-scale stress plus a subgrid-scale stress. The former can be partially reconstructed from an exact, infinite series, the first term of which is the "tensor-diffusivity" model of Leonard and is found, in practice, to be sufficient for modeling. Alternatively, sufficient reconstruction can also be achieved using the "scale-similarity" model of Bardina. The latter corresponds to loss of information: it cannot be reconstructed; its effect (essentially dissipation) must be modeled using ad hoc modeling strategies (such as the dynamic version of the "effective viscosity" model of Smagorinsky). Practitioners also often assume LES without explicit filtering: the effective stress is then only a subgrid-scale stress. We here compare the performance of various LES models for both approaches (with and without explicit filtering), and for cases without solid boundaries: (1) decay of isotropic turbulence; (2) decay of aircraft wake vortices in a turbulent atmosphere. One main conclusion is that better subgrid-scale models are still needed, the effective viscosity models being too active at the large scales.
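For concreteness, the scale-similarity (Bardina-type) reconstruction mentioned above can be sketched in one dimension with a top-hat test filter; this only shows how the reconstructed stress is assembled from a resolved field, not the LES comparisons themselves, and the synthetic velocity signal is an assumption.

# Sketch of how a scale-similarity (Bardina-type) stress is formed from a resolved
# field: tau ~ filter(u*u) - filter(u)*filter(u), here in 1-D with a top-hat
# (moving-average) filter on a synthetic multi-scale velocity signal.
import numpy as np

rng = np.random.default_rng(5)
n = 512
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
# Resolved velocity with several scales (stand-in for an LES field).
u = sum(np.cos(k * x + rng.uniform(0, 2 * np.pi)) / k for k in range(1, 33))

def tophat(f, width=9):
    """Periodic top-hat (box) filter of odd width."""
    kernel = np.ones(width) / width
    pad = width // 2
    fp = np.concatenate([f[-pad:], f, f[:pad]])
    return np.convolve(fp, kernel, mode="valid")

tau_ss = tophat(u * u) - tophat(u) * tophat(u)       # scale-similarity stress (1-D, i = j)

print(f"rms resolved u          : {np.sqrt(np.mean(u**2)):.3f}")
print(f"rms scale-similarity tau: {np.sqrt(np.mean(tau_ss**2)):.4f}")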
NASA Astrophysics Data System (ADS)
Grandhi, Kishore Kumar; Nesse Tyssøy, Hilde; Williams, Bifford P.; Stober, Gunter
2017-04-01
It is speculated that sufficiently large electric fields during geomagnetically disturbed conditions may decouple the meteor trail electron motions from the background neutral winds and lead to erroneous neutral wind estimation. To our knowledge, the potential errors have never been reported. In the present case study, we have been using co-located meteor radar and sodium resonance lidar zonal wind measurements over Andenes (69.27°N, 16.04°E) during intense substorms in the declining phase of the Jan 2005 solar proton event (21-22 Jan 2005). In total, 14 hours of continuous measurements are available for the comparison, which covers both quiet and disturbed conditions. For comparison, the lidar zonal winds are averaged in meteor radar time and height bins. High cross correlations (~0.8) are found in all height regions. The discrepancies can be explained in the light of differences in the observational volumes of the two instruments. Further, we extended the comparison to address the ionization impact on the meteor radar winds. For quiet hours, the observed meteor radar winds are quite consistent with the lidar winds, while during the disturbed hours comparatively large differences are noticed at the uppermost altitudes. This might be due to an ionization impact on the meteor radar winds. At present, one event is not sufficient to draw any firm conclusion. However, at least from this study we found some effect on the neutral wind measurements from the meteor radar. Further study with more co-located measurements is needed to test the statistical significance of the result.
Benchmark of the local drift-kinetic models for neoclassical transport simulation in helical plasmas
NASA Astrophysics Data System (ADS)
Huang, B.; Satake, S.; Kanno, R.; Sugama, H.; Matsuoka, S.
2017-02-01
The benchmarks of the neoclassical transport codes based on the several local drift-kinetic models are reported here. Here, the drift-kinetic models are zero orbit width (ZOW), zero magnetic drift, DKES-like, and global, as classified in Matsuoka et al. [Phys. Plasmas 22, 072511 (2015)]. The magnetic geometries of Helically Symmetric Experiment, Large Helical Device (LHD), and Wendelstein 7-X are employed in the benchmarks. It is found that the assumption of E×B incompressibility causes discrepancy of neoclassical radial flux and parallel flow among the models when E×B is sufficiently large compared to the magnetic drift velocities. For example, M_p ≤ 0.4, where M_p is the poloidal Mach number. On the other hand, when E×B and the magnetic drift velocities are comparable, the tangential magnetic drift, which is included in both the global and ZOW models, fills the role of suppressing unphysical peaking of neoclassical radial fluxes found in the other local models at E_r ≃ 0. In low collisionality plasmas, in particular, the tangential drift effect works well to suppress such unphysical behavior of the radial transport caused in the simulations. It is demonstrated that the ZOW model has the advantage of mitigating the unphysical behavior in the several magnetic geometries, and that it also implements the evaluation of bootstrap current in LHD with the low computation cost compared to the global model.
Self-assembled ordered structures in thin films of HAT5 discotic liquid crystal.
Morales, Piero; Lagerwall, Jan; Vacca, Paolo; Laschat, Sabine; Scalia, Giusy
2010-05-20
Thin films of the discotic liquid crystal hexapentyloxytriphenylene (HAT5), prepared from solution via casting or spin-coating, were investigated by atomic force microscopy and polarizing optical microscopy, revealing large-scale ordered structures substantially different from those typically observed in standard samples of the same material. Thin and very long fibrils of planar-aligned liquid crystal were found, possibly formed as a result of an intermediate lyotropic nematic state arising during the solvent evaporation process. Moreover, in sufficiently thin films the crystallization seems to be suppressed, extending the uniform order of the liquid crystal phase down to room temperature. This should be compared to the bulk situation, where the same material crystallizes into a polymorphic structure at 68 °C.
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
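A minimal sketch of the weighted additive aggregation step in a Multiple Attribute Utility Theory selection; the attributes, weights, and single-attribute utilities below are hypothetical placeholders rather than the paper's expert-elicited values and simulation outputs.

# Minimal multi-attribute utility sketch: weighted additive aggregation over
# normalised single-attribute utilities. Attributes, weights and scores below are
# hypothetical placeholders, not the paper's expert-elicited values.
attributes = ["energy use", "capital cost", "light uniformity", "maintenance"]
weights    = {"energy use": 0.35, "capital cost": 0.25,
              "light uniformity": 0.25, "maintenance": 0.15}

# Single-attribute utilities on [0, 1] (1 = best), e.g. from performance simulations.
candidates = {
    "HPS lamps":        {"energy use": 0.45, "capital cost": 0.80,
                         "light uniformity": 0.60, "maintenance": 0.55},
    "LED array":        {"energy use": 0.85, "capital cost": 0.40,
                         "light uniformity": 0.75, "maintenance": 0.80},
    "Hybrid (HPS+LED)": {"energy use": 0.65, "capital cost": 0.55,
                         "light uniformity": 0.70, "maintenance": 0.65},
}

def utility(scores):
    return sum(weights[a] * scores[a] for a in attributes)

ranked = sorted(candidates, key=lambda c: utility(candidates[c]), reverse=True)
for c in ranked:
    print(f"{c:18s} U = {utility(candidates[c]):.3f}")
print("selected system:", ranked[0])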
NASA Technical Reports Server (NTRS)
Lyle, Karen H.
2015-01-01
Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology demonstration via flight-testing. Hypersonic Inflatable Aerodynamic Decelerator (HIAD) architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. This publication summarizes results comparing analytical results with test data for two concepts subjected to representative entry, static loading. The level of agreement and ability to predict the load distribution is considered sufficient to enable analytical predictions to be used in the design process.
NASA Astrophysics Data System (ADS)
Kuroki, Nahoko; Mori, Hirotoshi
2018-02-01
Effective fragment potential version 2 - molecular dynamics (EFP2-MD) simulations, where the EFP2 is a polarizable force field based on ab initio electronic structure calculations, were applied to a water-methanol binary mixture. Comparing EFP2s defined with (aug-)cc-pVXZ (X = D, T) basis sets, it was found that large basis sets are necessary to generate a sufficiently accurate EFP2 for predicting mixture properties. It was shown that EFP2-MD could predict the excess molar volume. Since the computational cost of EFP2-MD is far lower than that of ab initio MD, the results presented herein demonstrate that EFP2-MD is promising for predicting the physicochemical properties of novel mixed solvents.
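The reported observable follows the standard definition of excess molar volume; a short sketch, with illustrative room-temperature densities rather than the EFP2-MD predictions:

# Excess molar volume from densities: V_E = V_mix - (x1*Vm1 + x2*Vm2), with
# V_mix = (x1*M1 + x2*M2) / rho_mix and Vm_i = M_i / rho_i. Densities below are
# illustrative room-temperature values, not the EFP2-MD predictions.
M_WATER, M_MEOH = 18.015, 32.042          # g/mol
RHO_WATER, RHO_MEOH = 0.9970, 0.7914      # g/cm^3, pure liquids (~25 C)

def excess_molar_volume(x_meoh, rho_mix):
    """Excess molar volume in cm^3/mol for a water-methanol mixture."""
    x_w = 1.0 - x_meoh
    v_mix = (x_w * M_WATER + x_meoh * M_MEOH) / rho_mix
    v_ideal = x_w * M_WATER / RHO_WATER + x_meoh * M_MEOH / RHO_MEOH
    return v_mix - v_ideal

# Equimolar mixture with an assumed mixture density of 0.8950 g/cm^3:
print(f"V_E(x=0.5) = {excess_molar_volume(0.5, 0.8950):.2f} cm^3/mol")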
Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng Jing; Zhou Jianying
2003-04-01
The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model. Even if the transition frequency between the ground state and the third level is far from the spectrum of the pulse, this additional transition can make the TLA inaccurate. For a sufficiently large transition frequency or a weak coupling between the ground state and the third level, the TLA is a reasonable approximation and can be used safely. When decreasing the pulse width or increasing the pulse area, the TLA will give rise to non-negligible errors compared with the precise results.
Validity of the two-level approximation in the interaction of few-cycle light pulses with atoms
NASA Astrophysics Data System (ADS)
Cheng, Jing; Zhou, Jianying
2003-04-01
The validity of the two-level approximation (TLA) in the interaction of atoms with few-cycle light pulses is studied by investigating a simple V-type three-level atom model.
Paying the right price for pharmaceuticals: a case study of why the comparator matters.
Spinks, Jean M; Richardson, Jeff R J
2011-08-01
This article considers the pricing policy for pharmaceuticals in Australia, which is widely seen as having achieved low drug prices. However, compared to New Zealand, the evidence implies that Australia might have improved its performance significantly if it had proactively sought market best pricing. The Australian record suggests that the information sought by authorities may not be sufficient for optimal pricing and that the economic evaluation of pharmaceuticals may be neither necessary nor sufficient for achieving this goal.
Robustness of inflation to inhomogeneous initial conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clough, Katy; Lim, Eugene A.; DiNunno, Brandon S.
We consider the effects of inhomogeneous initial conditions in both the scalar field profile and the extrinsic curvature on different inflationary models. In particular, we compare the robustness of small field inflation to that of large field inflation, using numerical simulations with Einstein gravity in 3+1 dimensions. We find that small field inflation can fail in the presence of subdominant gradient energies, suggesting that it is much less robust to inhomogeneities than large field inflation, which withstands dominant gradient energies. However, we also show that small field inflation can be successful even if some regions of spacetime start out in the region of the potential that does not support inflation. In the large field case, we confirm previous results that inflation is robust if the inflaton occupies the inflationary part of the potential. Furthermore, we show that increasing initial scalar gradients will not form sufficiently massive inflation-ending black holes if the initial hypersurface is approximately flat. Finally, we consider the large field case with a varying extrinsic curvature K, such that some regions are initially collapsing. We find that this may again lead to local black holes, but overall the spacetime remains inflationary if the spacetime is open, which confirms previous theoretical studies.
Impedance Eduction in Large Ducts Containing Higher-Order Modes and Grazing Flow
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Jones, Michael G.
2017-01-01
Impedance eduction test data are acquired in ducts with small and large cross-sectional areas at the NASA Langley Research Center. An improved data acquisition system in the large duct has resulted in increased control of the acoustic energy in source modes and more accurate resolution of higher-order duct modes compared to previous tests. Two impedance eduction methods that take advantage of the improved data acquisition to educe the liner impedance in grazing flow are presented. One method measures the axial propagation constant of a dominant mode in the liner test section (by implementing the Kumaresan and Tufts algorithm) and educes the impedance from an exact analytical expression. The second method solves numerically the convected Helmholtz equation and minimizes an objective function to obtain the liner impedance. The two methods are tested first on data synthesized from an exact mode solution and then on measured data. Results show that when the methods are applied to data acquired in the larger duct with a dominant higher-order mode, the same impedance spectra are educed as that obtained in the small duct where only the plane wave mode propagates. This result holds for each higher-order mode in the large duct provided that the higher-order mode is sufficiently attenuated by the liner.
Ohta, Y; Chiba, S; Imai, Y; Kamiya, Y; Arisawa, T; Kitagawa, A
2006-12-01
We examined whether ascorbic acid (AA) deficiency aggravates water immersion restraint stress (WIRS)-induced gastric mucosal lesions in genetically scorbutic ODS rats. ODS rats received scorbutic diet with either distilled water containing AA (1 g/l) or distilled water for 2 weeks. AA-deficient rats had 12% of gastric mucosal AA content in AA-sufficient rats. AA-deficient rats showed more severe gastric mucosal lesions than AA-sufficient rats at 1, 3 or 6 h after the onset of WIRS, although AA-deficient rats had a slight decrease in gastric mucosal AA content, while AA-sufficient rats had a large decrease in that content. AA-deficient rats had more decreased gastric mucosal nonprotein SH and vitamin E contents and increased gastric mucosal lipid peroxide content than AA-sufficient rats at 1, 3 or 6 h of WIRS. These results indicate that AA deficiency aggravates WIRS-induced gastric mucosal lesions in ODS rats by enhancing oxidative damage in the gastric mucosa.
14 CFR 23.773 - Pilot compartment view.
Code of Federal Regulations, 2010 CFR
2010-01-01
... side windows sufficiently large to provide the view specified in paragraph (a)(1) of this section... be shown that the windshield and side windows can be easily cleared by the pilot without interruption...
Energy-Dependent Ionization States of Shock-Accelerated Particles in the Solar Corona
NASA Technical Reports Server (NTRS)
Reames, Donald V.; Ng, C. K.; Tylka, A. J.
2000-01-01
We examine the range of possible energy dependence of the ionization states of ions that are shock-accelerated from the ambient plasma of the solar corona. If acceleration begins in a region of moderate density, sufficiently low in the corona, ions above about 0.1 MeV/amu approach an equilibrium charge state that depends primarily upon their speed and only weakly on the plasma temperature. We suggest that the large variations of the charge states with energy for ions such as Si and Fe observed in the 1997 November 6 event are consistent with stripping in moderately dense coronal plasma during shock acceleration. In the large solar-particle events studied previously, acceleration occurs sufficiently high in the corona that even Fe ions up to 600 MeV/amu are not stripped of electrons.
Front propagation and clustering in the stochastic nonlocal Fisher equation
NASA Astrophysics Data System (ADS)
Ganan, Yehuda A.; Kessler, David A.
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
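For readers who want to experiment with the deterministic "cutoff" approximation described above, the following minimal sketch integrates a 1D nonlocal Fisher equation with a Gaussian interaction kernel and switches the growth term off below a critical density. The domain size, kernel width, cutoff value and explicit time stepping are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

# Deterministic cutoff approximation to the nonlocal Fisher equation,
#   u_t = D u_xx + u * (1 - K*u),  growth set to zero where u < u_cut,
# on a periodic 1D domain, integrated with explicit finite differences.
L, N, T = 100.0, 1000, 20.0
dx = L / N
dt = 0.1 * dx**2                       # stable explicit step for D = 1
x = np.arange(N) * dx
D, sigma, u_cut = 1.0, 4.0, 1e-4       # diffusion, kernel width, cutoff

kern = np.exp(-0.5 * (np.minimum(x, L - x) / sigma) ** 2)   # periodic Gaussian
kern /= kern.sum() * dx                                     # unit mass
kern_hat = np.fft.rfft(kern) * dx                           # for FFT convolution

u = np.where(x < 5.0, 1.0, 0.0)        # initial occupied block
for _ in range(int(T / dt)):
    conv = np.fft.irfft(kern_hat * np.fft.rfft(u), n=N)     # nonlocal term K*u
    growth = np.where(u >= u_cut, u * (1.0 - conv), 0.0)    # cut-off growth
    lap = (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    u = u + dt * (D * lap + growth)
# for a sufficiently wide kernel, distinct clusters form behind the front
```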
Front propagation and clustering in the stochastic nonlocal Fisher equation.
Ganan, Yehuda A; Kessler, David A
2018-04-01
In this work, we study the problem of front propagation and pattern formation in the stochastic nonlocal Fisher equation. We find a crossover between two regimes: a steadily propagating regime for not too large interaction range and a stochastic punctuated spreading regime for larger ranges. We show that the former regime is well described by the heuristic approximation of the system by a deterministic system where the linear growth term is cut off below some critical density. This deterministic system is seen not only to give the right front velocity, but also predicts the onset of clustering for interaction kernels which give rise to stable uniform states, such as the Gaussian kernel, for sufficiently large cutoff. Above the critical cutoff, distinct clusters emerge behind the front. These same features are present in the stochastic model for sufficiently small carrying capacity. In the latter, punctuated spreading, regime, the population is concentrated on clusters, as in the infinite range case, which divide and separate as a result of the stochastic noise. Due to the finite interaction range, if a fragment at the edge of the population separates sufficiently far, it stabilizes as a new cluster, and the process begins anew. The deterministic cutoff model does not have this spreading for large interaction ranges, attesting to its purely stochastic origins. We show that this mode of spreading has an exponentially small mean spreading velocity, decaying with the range of the interaction kernel.
System for producing a uniform rubble bed for in situ processes
Galloway, Terry R.
1983-01-01
A method and a cutter for producing a large cavity filled with a uniform bed of rubblized oil shale or other material, for in situ processing. A raise drill head (72) has a hollow body (76) with a generally circular base and sloping upper surface. A hollow shaft (74) extends from the hollow body (76). Cutter teeth (78) are mounted on the upper surface of the body (76) and relatively small holes (77) are formed in the body (76) between the cutter teeth (78). Relatively large peripheral flutes (80) around the body (76) allow material to drop below the drill head (72). A pilot hole is drilled into the oil shale deposit. The pilot hole is reamed into a large diameter hole by means of a large diameter raise drill head or cutter to produce a cavity filled with rubble. A flushing fluid, such as air, is circulated through the pilot hole during the reaming operation to remove fines through the raise drill, thereby removing sufficient material to create sufficient void space, and allowing the larger particles to fill the cavity and provide a uniform bed of rubblized oil shale.
Webber, Whitney M.; Li, Ya-Wei
2016-01-01
Managers of large, complex wildlife conservation programs need information on the conservation status of each of many species to help strategically allocate limited resources. Oversimplifying status data, however, runs the risk of missing information essential to strategic allocation. Conservation status consists of two components, the status of threats a species faces and the species’ demographic status. Neither component alone is sufficient to characterize conservation status. Here we present a simple key for scoring threat and demographic changes for species using detailed information provided in free-form textual descriptions of conservation status. This key is easy to use (simple), captures the two components of conservation status without the cost of more detailed measures (sufficient), and can be applied by different personnel to any taxon (consistent). To evaluate the key’s utility, we performed two analyses. First, we scored the threat and demographic status of 37 species recently recommended for reclassification under the Endangered Species Act (ESA) and 15 control species, then compared our scores to two metrics used for decision-making and reports to Congress. Second, we scored the threat and demographic status of all non-plant ESA-listed species from Florida (54 spp.), and evaluated scoring repeatability for a subset of those. While the metrics reported by the U.S. Fish and Wildlife Service (FWS) are often consistent with our scores in the first analysis, the results highlight two problems with the oversimplified metrics. First, we show that both metrics can mask underlying demographic declines or threat increases; for example, ∼40% of species not recommended for reclassification had changes in threats or demography. Second, we show that neither metric is consistent with either threats or demography alone, but conflates the two. The second analysis illustrates how the scoring key can be applied to a substantial set of species to understand overall patterns of ESA implementation. The scoring repeatability analysis shows promise, but indicates thorough training will be needed to ensure consistency. We propose that large conservation programs adopt our simple scoring system for threats and demography. By doing so, program administrators will have better information to monitor program effectiveness and guide their decisions. PMID:27478713
Malcom, Jacob W; Webber, Whitney M; Li, Ya-Wei
2016-01-01
Managers of large, complex wildlife conservation programs need information on the conservation status of each of many species to help strategically allocate limited resources. Oversimplifying status data, however, runs the risk of missing information essential to strategic allocation. Conservation status consists of two components, the status of threats a species faces and the species' demographic status. Neither component alone is sufficient to characterize conservation status. Here we present a simple key for scoring threat and demographic changes for species using detailed information provided in free-form textual descriptions of conservation status. This key is easy to use (simple), captures the two components of conservation status without the cost of more detailed measures (sufficient), and can be applied by different personnel to any taxon (consistent). To evaluate the key's utility, we performed two analyses. First, we scored the threat and demographic status of 37 species recently recommended for reclassification under the Endangered Species Act (ESA) and 15 control species, then compared our scores to two metrics used for decision-making and reports to Congress. Second, we scored the threat and demographic status of all non-plant ESA-listed species from Florida (54 spp.), and evaluated scoring repeatability for a subset of those. While the metrics reported by the U.S. Fish and Wildlife Service (FWS) are often consistent with our scores in the first analysis, the results highlight two problems with the oversimplified metrics. First, we show that both metrics can mask underlying demographic declines or threat increases; for example, ∼40% of species not recommended for reclassification had changes in threats or demography. Second, we show that neither metric is consistent with either threats or demography alone, but conflates the two. The second analysis illustrates how the scoring key can be applied to a substantial set of species to understand overall patterns of ESA implementation. The scoring repeatability analysis shows promise, but indicates thorough training will be needed to ensure consistency. We propose that large conservation programs adopt our simple scoring system for threats and demography. By doing so, program administrators will have better information to monitor program effectiveness and guide their decisions.
Forman, Ruth; Bramhall, Michael; Logunova, Larisa; Svensson-Frej, Marcus; Cruickshank, Sheena M; Else, Kathryn J
2016-05-31
Eosinophils are innate immune cells present in the intestine during steady state conditions. An intestinal eosinophilia is a hallmark of many infections and an accumulation of eosinophils is also observed in the intestine during inflammatory disorders. Classically the function of eosinophils has been associated with tissue destruction, due to the release of cytotoxic granule contents. However, recent evidence has demonstrated that the eosinophil plays a more diverse role in the immune system than previously acknowledged, including shaping adaptive immune responses and providing plasma cell survival factors during the steady state. Importantly, it is known that there are regional differences in the underlying immunology of the small and large intestine, but whether there are differences in context of the intestinal eosinophil in the steady state or inflammation is not known. Our data demonstrates that there are fewer IgA(+) plasma cells in the small intestine of eosinophil-deficient ΔdblGATA-1 mice compared to eosinophil-sufficient wild-type mice, with the difference becoming significant post-infection with Toxoplasma gondii. Remarkably, and in complete contrast, the absence of eosinophils in the inflamed large intestine does not impact on IgA(+) cell numbers during steady state, and is associated with a significant increase in IgA(+) cells post-infection with Trichuris muris compared to wild-type mice. Thus, the intestinal eosinophil appears to be less important in sustaining the IgA(+) cell pool in the large intestine compared to the small intestine, and in fact, our data suggests eosinophils play an inhibitory role. The dichotomy in the influence of the eosinophil over small and large intestinal IgA(+) cells did not depend on differences in plasma cell growth factors, recruitment potential or proliferation within the different regions of the gastrointestinal tract (GIT). We demonstrate for the first time that there are regional differences in the requirement of eosinophils for maintaining IgA+ cells between the large and small intestine, which are more pronounced during inflammation. This is an important step towards further delineation of the enigmatic functions of gut-resident eosinophils.
Enhanced peculiar velocities in brane-induced gravity
NASA Astrophysics Data System (ADS)
Wyman, Mark; Khoury, Justin
2010-08-01
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10^-11 and 3.6×10^-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10^4 times more probable than in ΛCDM cosmology.
Enhanced peculiar velocities in brane-induced gravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wyman, Mark; Khoury, Justin
The mounting evidence for anomalously large peculiar velocities in our Universe presents a challenge for the ΛCDM paradigm. The recent estimates of the large-scale bulk flow by Watkins et al. are inconsistent at the nearly 3σ level with ΛCDM predictions. Meanwhile, Lee and Komatsu have recently estimated that the occurrence of high-velocity merging systems such as the bullet cluster (1E0657-57) is unlikely at a 6.5-5.8σ level, with an estimated probability between 3.3×10^-11 and 3.6×10^-9 in ΛCDM cosmology. We show that these anomalies are alleviated in a broad class of infrared-modified gravity theories, called brane-induced gravity, in which gravity becomes higher-dimensional at ultralarge distances. These theories include additional scalar forces that enhance gravitational attraction and therefore speed up structure formation at late times and on sufficiently large scales. The peculiar velocities are enhanced by 24-34% compared to standard gravity, with the maximal enhancement nearly consistent at the 2σ level with bulk flow observations. The occurrence of the bullet cluster in these theories is ≈10^4 times more probable than in ΛCDM cosmology.
NASA Astrophysics Data System (ADS)
Pan, Wen-hao; Liu, Shi-he; Huang, Li
2018-02-01
This study developed a three-layer velocity model for turbulent flow over large-scale roughness. Through theoretical analysis, this model couples surface and subsurface flow. Flume experiments with a flat cobble bed were conducted to examine the theoretical model. Results show that both the turbulent flow field and the total flow characteristics are quite different from those in low-gradient flow over microscale roughness. The velocity profile in a shallow stream converges to the logarithmic law away from the bed, while inflecting over the roughness layer to the non-zero subsurface flow. The velocity fluctuations close to a cobble bed differ from those over a sand bed and show no sufficiently large peak velocity. The total flow energy loss deviates significantly from the 1/7 power law equation when the relative flow depth is shallow. Both the coupled model and experiments indicate non-negligible subsurface flow that accounts for a considerable proportion of the total flow. By including the subsurface flow, the coupled model is able to predict a wider range of velocity profiles and total flow energy loss coefficients when compared with existing equations.
Customer premise service study for 30/20 GHz satellite system
NASA Technical Reports Server (NTRS)
Milton, R. T.; Ross, D. P.; Harcar, A. R.; Freedenberg, P.; Schoen, D.
1983-01-01
Satellite systems in which the space segment operates in the 30/20 GHz frequency band are defined and compared as to their potential for providing various types of communications services to customer premises and the economic and technical feasibility of doing so. Technical tasks performed include: market postulation, definition of the ground segment, definition of the space segment, definition of the integrated satellite system, service costs for satellite systems, sensitivity analysis, and critical technology. Based on an analysis of market data, a sufficiently large market for services is projected so as to make the system economically viable. A large market, and hence a high capacity satellite system, is found to be necessary to minimize service costs, i.e., economy of scale is found to hold. The wide bandwidth expected to be available in the 30/20 GHz band, along with frequency reuse which further increases the effective system bandwidth, makes possible the high capacity system. Extensive ground networking is required in most systems to both connect users into the system and to interconnect Earth stations to provide spatial diversity. Earth station spatial diversity is found to be a cost effective means of compensating the large fading encountered in the 30/20 GHz operating band.
Selection of core animals in the Algorithm for Proven and Young using a simulation model.
Bradford, H L; Pocrnić, I; Fragomeni, B O; Lourenco, D A L; Misztal, I
2017-12-01
The Algorithm for Proven and Young (APY) enables the implementation of single-step genomic BLUP (ssGBLUP) in large, genotyped populations by separating genotyped animals into core and non-core subsets and creating a computationally efficient inverse for the genomic relationship matrix (G). As APY became the choice for large-scale genomic evaluations in BLUP-based methods, a common question is how to choose the animals in the core subset. We compared several core definitions to answer this question. Simulations comprised a moderately heritable trait for 95,010 animals and 50,000 genotypes for animals across five generations. Genotypes consisted of 25,500 SNP distributed across 15 chromosomes. Genotyping errors and missing pedigree were also mimicked. Core animals were defined based on individual generations, equal representation across generations, and at random. For a sufficiently large core size, core definitions had the same accuracies and biases, even if the core animals had imperfect genotypes. When genotyped animals had unknown parents, accuracy and bias were significantly better (p ≤ .05) for random and across generation core definitions. © 2017 The Authors. Journal of Animal Breeding and Genetics Published by Blackwell Verlag GmbH.
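The APY step referred to above builds a sparse inverse of G by conditioning non-core animals on the chosen core subset. The dense toy sketch below follows the block structure commonly given for the APY inverse (core-block inverse plus diagonal residual terms for the non-core animals); it is meant only to make the role of the core definition concrete, and a production ssGBLUP implementation would work with sparse structures and never form these dense blocks.

```python
import numpy as np

def apy_inverse(G, core_idx):
    """Dense toy version of the APY inverse of a genomic relationship
    matrix G: non-core animals are conditioned on the core subset and
    their residual variances are treated as independent (diagonal)."""
    n = G.shape[0]
    core = np.asarray(core_idx)
    noncore = np.setdiff1d(np.arange(n), core)

    Gcc_inv = np.linalg.inv(G[np.ix_(core, core)])
    Gcn = G[np.ix_(core, noncore)]                      # core x non-core block

    # diagonal residuals of non-core animals given the core animals
    m = np.diag(G)[noncore] - np.einsum('ij,jk,ki->i', Gcn.T, Gcc_inv, Gcn)
    Minv = 1.0 / m

    Ginv = np.zeros_like(G)
    W = Gcc_inv @ Gcn                                   # helper block
    Ginv[np.ix_(core, core)] = Gcc_inv + (W * Minv) @ W.T
    Ginv[np.ix_(core, noncore)] = -(W * Minv)
    Ginv[np.ix_(noncore, core)] = Ginv[np.ix_(core, noncore)].T
    Ginv[np.ix_(noncore, noncore)] = np.diag(Minv)
    return Ginv
```

Swapping in different core_idx choices (by generation, across generations, or at random) reproduces the kind of comparison the study makes.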
Assessment of dynamic closure for premixed combustion large eddy simulation
NASA Astrophysics Data System (ADS)
Langella, Ivan; Swaminathan, Nedunchezhian; Gao, Yuan; Chakraborty, Nilanjan
2015-09-01
Turbulent piloted Bunsen flames of stoichiometric methane-air mixtures are computed using the large eddy simulation (LES) paradigm involving an algebraic closure for the filtered reaction rate. This closure involves the filtered scalar dissipation rate of a reaction progress variable. The model for this dissipation rate involves a parameter βc representing the flame front curvature effects induced by turbulence, chemical reactions, molecular dissipation, and their interactions at the sub-grid level, suggesting that this parameter may vary with filter width or be scale-dependent. Thus, it would be ideal to evaluate this parameter dynamically by LES. A procedure for this evaluation is discussed and assessed using direct numerical simulation (DNS) data and LES calculations. The probability density functions of βc obtained from the DNS and LES calculations are very similar when the turbulent Reynolds number is sufficiently large and when the filter width normalised by the laminar flame thermal thickness is larger than unity. Results obtained using a constant (static) value for this parameter are also used for comparative evaluation. Detailed discussion presented in this paper suggests that the dynamic procedure works well and physical insights and reasonings are provided to explain the observed behaviour.
Cary, Miranda A; Brittain, Danielle R; Gyurcsik, Nancy C
2017-07-01
Adults with arthritis struggle to meet the physical activity recommendation for disease self-management. Identifying psychosocial factors that differentiate adults who meet (sufficiently active) or do not meet (insufficiently active) the recommendation is needed. This study sought to examine differences in psychosocial responses to arthritis pain among adults who were sufficiently or insufficiently active. This prospective study included adults with medically diagnosed arthritis (N = 136, M age = 49.75 ± 13.88 years) who completed two online surveys: (1) baseline: pain and psychosocial responses to pain and (2) two weeks later: physical activity. Psychosocial responses examined in this study were psychological flexibility in response to pain, pain anxiety and maladaptive responses to pain anxiety. A between-groups MANCOVA comparing sufficiently active (n = 87) to insufficiently active (n = 49) participants on psychosocial responses, after controlling for pain intensity, was significant (p = .005). Follow-up ANOVA's revealed that sufficiently active participants reported significantly higher psychological flexibility and used maladaptive responses less often compared to insufficiently active participants (p's < .05). These findings provide preliminary insight into the psychosocial profile of adults at risk for nonadherence due to their responses to arthritis pain.
Cross-flow turbines: physical and numerical model studies towards improved array simulations
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2015-12-01
Cross-flow, or vertical-axis turbines, show potential in marine hydrokinetic (MHK) and wind energy applications. As turbine designs mature, the research focus is shifting from individual devices towards improving turbine array layouts for maximizing overall power output, i.e., minimizing wake interference for axial-flow turbines, or taking advantage of constructive wake interaction for cross-flow turbines. Numerical simulations are generally better suited to explore the turbine array design parameter space, as physical model studies of large arrays at large model scale would be expensive. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries, the turbines' interaction with the energy resource needs to be parameterized, or modeled. Most models in use today, e.g. actuator disk, are not able to predict the unique wake structure generated by cross-flow turbines. Experiments were carried out using a high-resolution turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. The ALM predicts turbine loading with the blade element method combined with sub-models for dynamic stall and flow curvature. The open-source software is written as an extension library for the OpenFOAM CFD package, which allows the ALM body force to be applied to their standard RANS and LES solvers. Turbine forcing is also applied to volume of fluid (VOF) models, e.g., for predicting free surface effects on submerged MHK devices. An additional sub-model is considered for injecting turbulence model scalar quantities based on actuator line element loading. Results are presented for the simulation of performance and wake dynamics of axial- and cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET grant 1150797.
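To make the actuator-line idea concrete, the sketch below computes per-unit-span blade-element loads for a cross-flow turbine blade at a given azimuth using the classic no-induction kinematics. The lift and drag polars are crude placeholders, and the example numbers are illustrative; a real ALM, including the one described above, uses tabulated foil data plus dynamic stall and flow curvature corrections and then projects the force onto the grid as a smeared body force.

```python
import numpy as np

def blade_element_load(theta, U_inf, tsr, R, chord, rho=1000.0):
    """Per-unit-span loads on a cross-flow turbine blade at azimuth theta
    (rad), neglecting induction, dynamic stall and flow curvature."""
    # relative velocity magnitude and angle of attack (no-induction kinematics)
    W = U_inf * np.sqrt((tsr + np.cos(theta))**2 + np.sin(theta)**2)
    alpha = np.arctan2(np.sin(theta), tsr + np.cos(theta))

    Cl = 2.0 * np.pi * np.sin(alpha)        # placeholder thin-airfoil lift
    Cd = 0.02 + 0.30 * np.sin(alpha)**2     # placeholder drag polar

    q = 0.5 * rho * W**2 * chord            # dynamic pressure times chord
    lift, drag = q * Cl, q * Cd
    f_tan = lift * np.sin(alpha) - drag * np.cos(alpha)   # drives the rotor
    f_norm = lift * np.cos(alpha) + drag * np.sin(alpha)
    return f_tan, f_norm, f_tan * R                       # torque per unit span

# example (hypothetical): 1 m/s tow speed, tip speed ratio 1.9, R = 0.5 m, c = 0.14 m
print(blade_element_load(np.deg2rad(60.0), 1.0, 1.9, 0.5, 0.14))
```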
Zhang, Yun; Okubo, Ryuhi; Hirano, Mayumi; Eto, Yujiro; Hirano, Takuya
2015-01-01
Spatially separated entanglement is demonstrated by interfering two high-repetition squeezed pulse trains. The entanglement correlation of the quadrature amplitudes between individual pulses is interrogated. It is characterized in terms of the sufficient inseparability criterion with an optimum result of in the frequency domain and in the time domain. The quantum correlation is also observed when the two measurement stations are separated by a physical distance of 4.5 m, which is sufficiently large to demonstrate the space-like separation, after accounting for the measurement time. PMID:26278478
NASA Technical Reports Server (NTRS)
2008-01-01
When we began our study we sought to answer five fundamental implementation questions: 1) can foregrounds be measured and subtracted to a sufficiently low level?; 2) can systematic errors be controlled?; 3) can we develop optics with sufficiently large throughput, low polarization, and frequency coverage from 30 to 300 GHz?; 4) is there a technical path to realizing the sensitivity and systematic error requirements?; and 5) what are the specific mission architecture parameters, including cost? Detailed answers to these questions are contained in this report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickinson, Robert E.; Oleson, Keith; Bonan, Gordon
2006-01-01
Several multidecadal simulations have been carried out with the new version of the Community Climate System Model (CCSM). This paper reports an analysis of the land component of these simulations. Global annual averages over land appear to be within the uncertainty of observational datasets, but the seasonal cycle over land of temperature and precipitation appears to be too weak. These departures from observations appear to be primarily a consequence of deficiencies in the simulation of the atmospheric model rather than of the land processes. High latitudes of northern winter are biased sufficiently warm to have a significant impact on the simulated value of global land temperature. The precipitation is approximately doubled from what it should be at some locations, and the snowpack and spring runoff are also excessive. The winter precipitation over Tibet is larger than observed. About two-thirds of this precipitation is sublimated during the winter, but what remains still produces a snowpack that is very large compared to that observed with correspondingly excessive spring runoff. A large cold anomaly over the Sahara Desert and Sahel also appears to be a consequence of a large anomaly in downward longwave radiation; low column water vapor appears to be most responsible. The modeled precipitation over the Amazon basin is low compared to that observed, the soil becomes too dry, and the temperature is too warm during the dry season.
Innovative contracting methods and construction traffic congestion.
DOT National Transportation Integrated Search
2012-01-01
Increasing travel demand and lack of sufficient highway capacity are serious problems in most : major metropolitan areas in the United States. Large metropolitan cities have been experiencing : increased traffic congestion problems over the past seve...
EVALUATION OF THE SCATS CONTROL SYSTEM
DOT National Transportation Integrated Search
2008-12-01
Increasing travel demand and lack of sufficient highway capacity are serious problems in most major metropolitan areas in the United States. Large metropolitan areas have been experiencing increased traffic congestion problems over the past several y...
Novel crystal timing calibration method based on total variation
NASA Astrophysics Data System (ADS)
Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng
2016-11-01
A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals, it can provide timing calibration at the crystal level. In the proposed method, the timing calibration process was formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint was added to the linear equation. Moreover, to solve the computer memory problem associated with the calculation of the timing calibration factors for systems with a large number of crystals, the merge component was used for obtaining the crystal level timing calibration values. Compared with other conventional methods, the data measured from a standard cylindrical phantom filled with a radioisotope solution was sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, which was located in the field of view (FOV) of the brain PET system, with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.
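The "linear problem plus TV constraint" formulation described above can be prototyped with a smoothed total-variation penalty. In the sketch below, each row of A is assumed to encode one coincidence measurement as a difference of two crystals' timing offsets and x holds the per-crystal calibration values; the fixed-step gradient descent, smoothing parameter and penalty weight are illustrative choices rather than the optimizer or the merge step used in the paper.

```python
import numpy as np

def solve_tv_calibration(A, b, lam=1.0, eps=1e-6, step=1e-3, n_iter=5000):
    """Gradient-descent sketch of a TV-regularised least-squares problem,
        min_x ||A x - b||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps),
    where x holds per-crystal timing offsets."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad_fit = 2.0 * A.T @ (A @ x - b)
        d = np.diff(x)
        w = d / np.sqrt(d**2 + eps)          # smoothed TV (sub)gradient
        grad_tv = np.zeros_like(x)
        grad_tv[:-1] -= w
        grad_tv[1:] += w
        x -= step * (grad_fit + lam * grad_tv)
    return x
```

A production solver would use a proximal method or line search instead of the fixed step used here for brevity.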
Meda, Shashwath A.; Giuliani, Nicole R.; Calhoun, Vince D.; Jagannathan, Kanchana; Schretlen, David J.; Pulver, Anne; Cascella, Nicola; Keshavan, Matcheri; Kates, Wendy; Buchanan, Robert; Sharma, Tonmoy; Pearlson, Godfrey D.
2008-01-01
Background Many studies have employed voxel-based morphometry (VBM) of MRI images as an automated method of investigating cortical gray matter differences in schizophrenia. However, results from these studies vary widely, likely due to different methodological or statistical approaches. Objective To use VBM to investigate gray matter differences in schizophrenia in a sample significantly larger than any published to date, and to increase statistical power sufficiently to reveal differences missed in smaller analyses. Methods Magnetic resonance whole brain images were acquired from four geographic sites, all using the same model 1.5T scanner and software version, and combined to form a sample of 200 patients with both first episode and chronic schizophrenia and 200 healthy controls, matched for age, gender and scanner location. Gray matter concentration was assessed and compared using optimized VBM. Results Compared to the healthy controls, schizophrenia patients showed significantly less gray matter concentration in multiple cortical and subcortical regions, some previously unreported. Overall, we found lower concentrations of gray matter in regions identified in prior studies, most of which reported only subsets of the affected areas. Conclusions Gray matter differences in schizophrenia are most comprehensively elucidated using a large, diverse and representative sample. PMID:18378428
FASTPM: a new scheme for fast simulations of dark matter and haloes
NASA Astrophysics Data System (ADS)
Feng, Yu; Chu, Man-Yat; Seljak, Uroš; McDonald, Patrick
2016-12-01
We introduce FASTPM, a highly scalable approximated particle mesh (PM) N-body solver, which implements the PM scheme enforcing correct linear displacement (1LPT) evolution via modified kick and drift factors. Employing a two-dimensional domain decomposition scheme, FASTPM scales extremely well to a very large number of CPUs. In contrast to the COmoving Lagrangian Acceleration (COLA) approach, we do not need to split the force or separately track the 2LPT solution, reducing the code complexity and memory requirements. We compare FASTPM with different numbers of steps (Ns) and force resolution factors (B) against three benchmarks: the halo mass function from a friends-of-friends halo finder; the halo and dark matter power spectra; and the cross-correlation coefficient (or stochasticity), relative to a high-resolution TREEPM simulation. We show that the modified time stepping scheme reduces the halo stochasticity when compared to COLA with the same number of steps and force resolution. While increasing Ns and B improves the transfer function and cross-correlation coefficient, for many applications FASTPM achieves sufficient accuracy at low Ns and B. For example, the Ns = 10, B = 2 simulation provides a substantial saving (a factor of 10) of computing time relative to the Ns = 40, B = 3 simulation, yet the halo benchmarks are very similar at z = 0. We find that for abundance-matched haloes the stochasticity remains low even for Ns = 5. FASTPM compares well against less expensive schemes, being only 7 (4) times more expensive than the 2LPT initial condition generator for Ns = 10 (Ns = 5). Some of the applications where FASTPM can be useful are generating a large number of mocks, producing non-linear statistics where one varies a large number of nuisance or cosmological parameters, or serving as part of an initial conditions solver.
Structure and biochemical functions of four simian virus 40 truncated large-T antigens.
Chaudry, F; Harvey, R; Smith, A E
1982-01-01
The structure of four abnormal T antigens which are present in different simian virus 40 (SV40)-transformed mouse cell lines was studied by tryptic peptide mapping, partial proteolysis fingerprinting, immunoprecipitation with monoclonal antibodies, and in vitro translation. The results obtained allowed us to deduce that these proteins, which have apparent molecular weights of 15,000, 22,000, 33,000 and 45,000, are truncated forms of large-T antigen extending by different amounts into the amino acid sequences unique to large-T. The proteins are all phosphorylated, probably at a site between amino acids 106 and 123. The mRNAs coding for the proteins probably contain the normal large-T splice but are shorter than the normal transcripts of the SV40 early region. The truncated large-Ts were tested for the ability to bind to double-stranded DNA-cellulose. This showed that the 33,000- and 45,000-molecular-weight polypeptides contained sequences sufficient for binding under the conditions used, whereas the 15,000- and 22,000-molecular-weight forms did not. Together with published data, this allows the tentative mapping of a region of SV40 large-T between amino acids 109 and 272 that is necessary and may be sufficient for the binding to double-stranded DNA-cellulose in vitro. None of the truncated large-T species formed a stable complex with the host cell protein referred to as nonviral T-antigen or p53, suggesting that the carboxy-terminal sequences of large-T are necessary for complex formation. PMID:6292504
ERIC Educational Resources Information Center
Colman, Rosalie Marson; And Others
The Connecticut Haitian American community has recently become large enough and sufficiently well established to develop programs to assist economic and educational development in the Republic of Haiti. Southern Connecticut became a destination for large numbers of Haitian emigrants and political refugees in the 1950s, in 1964, and again in 1971.…
Ethnic use of the Tonto: geographic extension of the recreation knowledge base
Denver Hospodarsky; Martha Lee
1995-01-01
The recreational use of the Tonto National Forest, Arizona, was investigated by using data on ethnic and racial subgroups. The Tonto is a Class 1 urban-proximate forest adjoining the large, culturally diverse population of Phoenix. An on-site survey of 524 recreating groups found sufficiently large numbers of Anglos (n=425) and Hispanics (n=82) who participated in...
Is There a Maximum Size of Water Drops in Nature?
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2013-01-01
In nature, water drops can have a large variety of sizes and shapes. Small droplets with diameters of the order of 5 to 10 µm are present in fog and clouds. This is not sufficiently large for gravity to dominate their behavior. In contrast, raindrops typically have sizes of the order of 1 mm, with observed maximum sizes in nature of around 5 mm in…
Observation of Planetary Motion Using a Digital Camera
ERIC Educational Resources Information Center
Meyn, Jan-Peter
2008-01-01
A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8^m apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…
Static versus dynamic sampling for data mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
John, G.H.; Langley, P.
1996-12-31
As data warehouses grow to the point where one hundred gigabytes is considered small, the computational efficiency of data-mining algorithms on large databases becomes increasingly important. Using a sample from the database can speed up the data-mining process, but this is only acceptable if it does not reduce the quality of the mined knowledge. To this end, we introduce the "Probably Close Enough" criterion to describe the desired properties of a sample. Sampling usually refers to the use of static statistical tests to decide whether a sample is sufficiently similar to the large database, in the absence of any knowledge of the tools the data miner intends to use. We discuss dynamic sampling methods, which take into account the mining tool being used and can thus give better samples. We describe dynamic schemes that observe a mining tool's performance on training samples of increasing size and use these results to determine when a sample is sufficiently large. We evaluate these sampling methods on data from the UCI repository and conclude that dynamic sampling is preferable.
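A minimal version of the dynamic (progressive) sampling loop described above might look like the following; the learner, starting size, growth factor and stopping tolerance are illustrative stand-ins for the paper's "Probably Close Enough" criterion, and X, y are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def dynamic_sample_size(X, y, start=500, growth=2.0, tol=0.005, cv=5):
    """Grow the training sample until the mining tool's estimated accuracy
    stops improving by more than `tol`, i.e. the sample is judged good
    enough for this particular learner."""
    rng = np.random.default_rng(0)
    n, size, prev_score = len(y), start, -np.inf
    while True:
        idx = rng.choice(n, size=min(size, n), replace=False)
        score = cross_val_score(DecisionTreeClassifier(), X[idx], y[idx], cv=cv).mean()
        if score - prev_score < tol or size >= n:
            return min(size, n), score
        prev_score, size = score, int(size * growth)
```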
Johnson, Sean J; Alford, Chris; Stewart, Karina; Verster, Joris C
2016-12-01
Previous research reported positive associations between alcohol mixed with energy drink (AMED) consumption and overall alcohol consumption. However, results were largely based on between-subjects comparisons of AMED consumers with alcohol-only (AO) consumers, which cannot sufficiently control for differences in personal characteristics between these groups. In order to determine whether AMED consumers drink more alcohol on occasions when they consume AMED compared to occasions when they drink AO, additional within-subjects comparisons are required. Therefore, this UK student survey assessed both alcohol consumption and alcohol-related negative consequences when alcohol was consumed alone and when mixed with energy drinks, using a within-subject design. A total of 1873 students completed the survey, including 732 who consumed AMED. It was found that AMED consumers drank significantly less alcohol when they consumed AMED compared to when they drank AO (p < 0.001). In line with the reduced alcohol consumption, significantly fewer negative alcohol-related consequences were reported on AMED occasions compared to AO occasions (p < 0.001). These findings suggest that mixing alcohol with energy drinks does not increase total alcohol consumption or alcohol-related negative consequences.
Sekine, Masashi; Kita, Kahori; Yu, Wenwei
2015-01-01
Unlike forearm amputees, transhumeral amputees have residual stumps that are too small to provide a sufficient range of operation for their prosthetic parts to perform usual activities of daily living. Furthermore, it is difficult for small residual stumps to provide sufficient impact absorption for safe manipulation in daily living, as intact arms do. Therefore, substitution of upper limb function in transhumeral amputees requires a sufficient range of motion and sufficient viscoelasticity for shoulder prostheses under critical weight and dimension constraints. We propose the use of two different types of actuators, ie, pneumatic elastic actuators (PEAs) and servo motors. PEAs offer high power-to-weight performance and have intrinsic viscoelasticity in comparison with motors or standard industrial pneumatic cylinder actuators. However, the usefulness of PEAs in large working spaces is limited because of their short strokes. Servo motors, in contrast, can be used to achieve large ranges of motion. In this study, the relationship between the force and stroke of PEAs was investigated. The impact absorption of both types of actuators was measured using a single degree-of-freedom prototype to evaluate actuator compliance for safety purposes. Based on the fundamental properties of the actuators identified, a four degree-of-freedom robotic arm is proposed for prosthetic use. The configuration of the actuators and functional parts was designed to achieve a specified range of motion and torque calculated from the results of a simulation of typical movements performed in usual activities of daily living. Our experimental results showed that the requirements for the shoulder prostheses could be satisfied.
Fatigue Crack Propagation in Rail Steels
DOT National Transportation Integrated Search
1977-06-01
In order to establish safe inspection periods of railroad rails, information on fatigue crack growth rates is required. These data should come from a sufficiently large sample of rails presently in service. The reported research consisted of the gene...
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
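The reconstruction step referred to above is an iterative (preconditioned conjugate gradient) solve. For orientation, a textbook matrix-free PCG loop is sketched below, where apply_A stands for the tomography operator and apply_Minv for the (here unspecified) frequency-dependent wavelet-domain preconditioner; both are placeholders, not the OCTOPUS/MCAO implementation.

```python
import numpy as np

def pcg(apply_A, b, apply_Minv, x0=None, tol=1e-6, max_iter=50):
    """Textbook preconditioned conjugate gradient for A x = b with a
    symmetric positive-definite A, given as matrix-free operators."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - apply_A(x)
    z = apply_Minv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = apply_Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

A good preconditioner is what lets such a loop terminate after only a few iterations, which is the point the abstract makes.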
NASA Astrophysics Data System (ADS)
Nolan, R. H.; Boer, M. M.; Resco de Dios, V.; Caccamo, G.; Bradstock, R. A.
2016-05-01
The occurrence of large, high-intensity wildfires requires plant biomass, or fuel, that is sufficiently dry to burn. This poses the question, what is "sufficiently dry"? Until recently, the ability to address this question has been constrained by the spatiotemporal scale of available methods to monitor the moisture contents of both dead and live fuels. Here we take advantage of recent developments in macroscale monitoring of fuel moisture through a combination of remote sensing and climatic modeling. We show there are clear thresholds of fuel moisture content associated with the occurrence of wildfires in forests and woodlands. Furthermore, we show that transformations in fuel moisture conditions across these thresholds can occur rapidly, within a month. Both the approach presented here, and our findings, can be immediately applied and may greatly improve fire risk assessments in forests and woodlands globally.
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Rioux, Norman; Bolcar, Matthew; Liu, Alice; Guyon, Oliver; Stark, Chris; Arenberg, Jon
2016-01-01
Key challenges of a future large-aperture, segmented Ultraviolet Optical Infrared (UVOIR) Telescope capable of performing a spectroscopic survey of hundreds of Exoplanets will be sufficient stability to achieve 10^-10 contrast measurements and sufficient throughput and sensitivity for high-yield Exo-Earth spectroscopic detection. Our team has collectively assessed an optimized end-to-end architecture including a high-throughput coronagraph capable of working with a segmented telescope, a cost-effective and heritage-based stable segmented telescope, a control architecture that minimizes the amount of new technologies, and an Exo-Earth yield assessment to evaluate potential performance. These efforts are combined through integrated modeling, coronagraph evaluations, and Exo-Earth yield calculations to assess the potential performance of the selected architecture. In addition, we discuss the scalability of this architecture to larger apertures and the technological tall poles to enabling it.
NASA Astrophysics Data System (ADS)
Zhong, Xin; Frehner, Marcel; Kunze, Karsten; Zappone, Alba
2014-10-01
A novel electron backscatter diffraction (EBSD)-based finite-element (FE) wave propagation simulation is presented and applied to investigate seismic anisotropy of peridotite samples. The FE model simulates the dynamic propagation of seismic waves along any chosen direction through representative 2D EBSD sections. The numerical model allows separation of the effects of crystallographic preferred orientation (CPO) and shape preferred orientation (SPO). The obtained seismic velocities with respect to specimen orientation are compared with Voigt-Reuss-Hill estimates and with laboratory measurements. The results of these three independent methods testify that CPO is the dominant factor controlling seismic anisotropy. Fracture fillings and minor minerals like hornblende only influence the seismic anisotropy if their volume proportion is sufficiently large (up to 23%). The SPO influence is minor compared to the other factors. The presented FE model is discussed with regard to its potential in simulating seismic wave propagation using EBSD data representing natural rock petrofabrics.
Homogenization of a Directed Dispersal Model for Animal Movement in a Heterogeneous Environment.
Yurk, Brian P
2016-10-01
The dispersal patterns of animals moving through heterogeneous environments have important ecological and epidemiological consequences. In this work, we apply the method of homogenization to analyze an advection-diffusion (AD) model of directed movement in a one-dimensional environment in which the scale of the heterogeneity is small relative to the spatial scale of interest. We show that the large (slow) scale behavior is described by a constant-coefficient diffusion equation under certain assumptions about the fast-scale advection velocity, and we determine a formula for the slow-scale diffusion coefficient in terms of the fast-scale parameters. We extend the homogenization result to predict invasion speeds for an advection-diffusion-reaction (ADR) model with directed dispersal. For periodic environments, the homogenization approximation of the solution of the AD model compares favorably with numerical simulations. Invasion speed approximations for the ADR model also compare favorably with numerical simulations when the spatial period is sufficiently small.
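For orientation only, the classical pure-diffusion special case of 1D periodic homogenization is reproduced below; the paper's result for directed dispersal additionally involves the fast-scale advection velocity, and the constant shown here is not that formula.

```latex
% Classical 1D periodic homogenization, pure-diffusion special case:
\[
  \partial_t u = \partial_x\!\bigl(D(x/\varepsilon)\,\partial_x u\bigr)
  \;\xrightarrow[\varepsilon\to 0]{}\;
  \partial_t u = \bar{D}\,\partial_{xx} u,
  \qquad
  \bar{D} = \left(\frac{1}{|Y|}\int_Y \frac{\mathrm{d}y}{D(y)}\right)^{-1}
  \quad\text{(harmonic mean over the unit cell } Y\text{)}.
\]
```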
Polar Wind Measurements with TIDE/PSI and HYDRA on the Polar Spacecraft
NASA Technical Reports Server (NTRS)
Su, Y. J.; Horwitz, J. L.; Moore, Thomas E.; Giles, Barbara L.; Chandler, Michael O.; Craven, Paul D.; Chang, S.-W.; Scudder, J.
1998-01-01
The Thermal Ion Dynamics Experiment (TIDE) on the POLAR spacecraft has allowed sampling of the three-dimensional ion distributions with excellent energy, angular, and mass resolution. The companion Plasma Source Instrument, when operated, allows sufficient diminution of the electric potential to observe the polar wind at very high altitudes. In this presentation, we will describe the polar wind characteristics of H+, He+, and O+ as observed by TIDE at 5000 km and 8 RE altitudes. The relationship of the polar wind parameters with the solar zenith angle and with the day-night distance in the Solar Magnetic coordinate system will also be presented. We will compare these measurements with recent simulations of the photoelectron-driven polar wind using a coupled fluid-semikinetic model. In addition, we will compare these polar wind observations with low-energy electrons sampled by the HYDRA experiment on POLAR to examine possible effects of the polar rain and photoelectrons and hopefully explain the large ion outflow velocity variations at POLAR apogee.
Disentangling diatom species complexes: does morphometry suffice?
Borrego-Ramos, María; Olenici, Adriana
2017-01-01
Accurate taxonomic resolution in light microscopy analyses of microalgae is essential to achieve high quality, comparable results in both floristic analyses and biomonitoring studies. A number of closely related diatom taxa have been detected to date co-occurring within benthic diatom assemblages, sharing many morphological, morphometrical and ecological characteristics. In this contribution, we analysed the hypothesis that, where a large sample size (number of individuals) is available, common morphometrical parameters (valve length, width and stria density) are sufficient to achieve a correct identification to the species level. We focused on some common diatom taxa belonging to the genus Gomphonema. More than 400 valves and frustules were photographed in valve view and measured using Fiji software. Several statistical tools (mixture and discriminant analysis, k-means clustering, classification trees, etc.) were explored to test whether mere morphometry, independently of other valve features, leads to correct identifications, when compared to identifications made by experts. In view of the results obtained, morphometry-based determination in diatom taxonomy is discouraged. PMID:29250472
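The statistical test described above (clustering on valve length, width and stria density versus expert identifications) could be prototyped as follows; the file names, column layout and the choice of a Gaussian mixture with an adjusted Rand index comparison are illustrative assumptions, not the exact tools used in the study.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Hypothetical inputs: morphometric measurements (length, width, stria
# density) and coded expert identifications for the same valves.
X = np.loadtxt("gomphonema_morphometry.csv", delimiter=",", skiprows=1)
expert = np.loadtxt("expert_ids.csv", dtype=int)

gmm = GaussianMixture(n_components=len(np.unique(expert)), n_init=10, random_state=0)
clusters = gmm.fit_predict(X)
print("agreement with expert IDs (ARI):", adjusted_rand_score(expert, clusters))
```

A low agreement score would support the abstract's conclusion that morphometry alone is not sufficient for identification.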
NASA Astrophysics Data System (ADS)
Loo, Lit-Hsin; Bougen-Zhukov, Nicola Michelle; Tan, Wei-Ling Cecilia
2017-03-01
Signaling pathways can generate different cellular responses to the same cytotoxic agents. Current quantitative models for predicting these differential responses are usually based on large numbers of intracellular gene products or signals at different levels of signaling cascades. Here, we report a study to predict cellular sensitivity to tumor necrosis factor alpha (TNFα) using high-throughput cellular imaging and machine-learning methods. We measured and compared 1170 protein phosphorylation events in a panel of human lung cancer cell lines based on different signals, subcellular regions, and time points within one hour of TNFα treatment. We found that two spatiotemporal-specific changes in an intermediate signaling protein, p90 ribosomal S6 kinase (RSK), are sufficient to predict the TNFα sensitivity of these cell lines. Our models could also predict the combined effects of TNFα and other kinase inhibitors, many of which are not known to target RSK directly. Therefore, early spatiotemporal-specific changes in intermediate signals are sufficient to represent the complex cellular responses to these perturbations. Our study provides a general framework for the development of rapid, signaling-based cytotoxicity screens that may be used to predict cellular sensitivity to a cytotoxic agent, or identify co-treatments that may sensitize or desensitize cells to the agent.
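As a sketch of the prediction step described above, a classifier on the two spatiotemporal RSK features might be evaluated as follows; the file names, array shapes and the choice of leave-one-out logistic regression are illustrative assumptions rather than the machine-learning pipeline used in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical inputs: two RSK phosphorylation features per cell line and
# a binary TNF-alpha sensitivity label (1 = sensitive, 0 = resistant).
X = np.loadtxt("rsk_features.csv", delimiter=",")   # shape (n_lines, 2)
y = np.loadtxt("tnfa_sensitive.csv", dtype=int)

clf = LogisticRegression()
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print("leave-one-out accuracy:", acc)
```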
Clinical evaluation of a miniaturized desktop breath hydrogen analyzer.
Duan, L P; Braden, B; Clement, T; Caspary, W F; Lembcke, B
1994-10-01
A small desktop electrochemical H2 analyzer (EC-60-Hydrogen monitor) was compared with a stationary electrochemical H2 monitor (GMI-exhaled Hydrogen monitor). The EC-60-H2 monitor shows a high degree of precision for repetitive (n = 10) measurements of standard hydrogen mixtures (CV 1-8%). The response time for completion of measurement is shorter than that of the GMI-exhaled H2 monitor (37 sec vs 53 sec; p < 0.0001), while reset times are almost identical (54 sec vs 51 sec; n.s.). In a clinical setting, breath H2 concentrations measured with the EC-60-H2 monitor and the GMI-exhaled H2 monitor were in excellent agreement, with a linear correlation (Y = 1.12X + 1.022, r2 = 0.9617, n = 115). With increasing H2 concentrations the EC-60-H2 monitor required larger sample volumes for maintaining sufficient precision, and sample volumes greater than 200 ml were required with H2 concentrations > 30 ppm. For routine gastrointestinal function testing, the EC-60-H2 monitor is a satisfactory, reliable, easy to use and inexpensive desktop breath hydrogen analyzer, whereas in patients with difficulty in cooperating (children, people with severe pulmonary insufficiency), special care has to be applied to obtain sufficiently large breath samples.
Loo, Lit-Hsin; Bougen-Zhukov, Nicola Michelle; Tan, Wei-Ling Cecilia
2017-01-01
Signaling pathways can generate different cellular responses to the same cytotoxic agents. Current quantitative models for predicting these differential responses are usually based on large numbers of intracellular gene products or signals at different levels of signaling cascades. Here, we report a study to predict cellular sensitivity to tumor necrosis factor alpha (TNFα) using high-throughput cellular imaging and machine-learning methods. We measured and compared 1170 protein phosphorylation events in a panel of human lung cancer cell lines based on different signals, subcellular regions, and time points within one hour of TNFα treatment. We found that two spatiotemporal-specific changes in an intermediate signaling protein, p90 ribosomal S6 kinase (RSK), are sufficient to predict the TNFα sensitivity of these cell lines. Our models could also predict the combined effects of TNFα and other kinase inhibitors, many of which are not known to target RSK directly. Therefore, early spatiotemporal-specific changes in intermediate signals are sufficient to represent the complex cellular responses to these perturbations. Our study provides a general framework for the development of rapid, signaling-based cytotoxicity screens that may be used to predict cellular sensitivity to a cytotoxic agent, or identify co-treatments that may sensitize or desensitize cells to the agent. PMID:28272488
Sleutels, Tom H. J. A.; Molenaar, Sam D.; Heijne, Annemiek Ter; Buisman, Cees J. N.
2016-01-01
A crucial aspect for the application of bioelectrochemical systems (BESs) as a wastewater treatment technology is the efficient oxidation of complex substrates by the bioanode, which is reflected in high Coulombic efficiency (CE). To achieve high CE, it is essential to give a competitive advantage to electrogens over methanogens. Factors that affect CE in bioanodes are, amongst others, the type of wastewater, anode potential, substrate concentration and pH. In this paper, we focus on acetate as a substrate and analyze the competition between methanogens and electrogens from a thermodynamic and kinetic point of view. We reviewed experimental data from earlier studies and propose that low substrate loading in combination with a sufficiently high anode overpotential plays a key role in achieving high CE. Low substrate loading is a proven strategy against methanogenic activity in large-scale reactors for sulfate reduction. The combination of low substrate loading with sufficiently high overpotential is essential because it results in favorable growth kinetics of electrogens compared to methanogens. To achieve high current density in combination with low substrate concentrations, it is essential to have a high specific anode surface area. New reactor designs with these features are essential for BESs to be successful in wastewater treatment in the future. PMID:27681899
Fokker-Planck simulation of runaway electron generation in disruptions with the hot-tail effect
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nuga, H., E-mail: nuga@p-grp.nucleng.kyoto-u.ac.jp; Fukuyama, A.; Yagi, M.
2016-06-15
To study runaway electron generation in disruptions, we have extended the three-dimensional (two-dimensional in momentum space; one-dimensional in the radial direction) Fokker-Planck code, which describes the evolution of the relativistic momentum distribution function of electrons and the induced toroidal electric field in a self-consistent manner. A particular focus is placed on the hot-tail effect in two-dimensional momentum space. The effect appears if the drop of the background plasma temperature is sufficiently rapid compared with the electron-electron slowing down time for a few times of the pre-quench thermal velocity. It contributes to not only the enhancement of the primary runaway electron generation but also the broadening of the runaway electron distribution in the pitch angle direction. If the thermal energy loss during the major disruption is assumed to be isotropic, there are hot-tail electrons that have sufficiently large perpendicular momentum, and the runaway electron distribution becomes broader in the pitch angle direction. In addition, the pitch angle scattering also yields the broadening. Since the electric field is reduced due to the burst of runaway electron generation, the time required for accelerating electrons to the runaway region becomes longer. The longer acceleration period makes the pitch-angle scattering more effective.
Drury, Suzanne; Salter, Janine; Baehner, Frederick L; Shak, Steven; Dowsett, Mitch
2010-06-01
To determine whether 0.6 mm cores of formalin-fixed paraffin-embedded (FFPE) tissue, as commonly used to construct immunohistochemical tissue microarrays, may be a valid alternative to tissue sections as source material for quantitative real-time PCR-based transcriptional profiling of breast cancer. Four matched 0.6 mm cores of invasive breast tumour and two 10 μm whole sections were taken from eight FFPE blocks. RNA was extracted and reverse transcribed, and TaqMan assays were performed on the 21 genes of the Oncotype DX Breast Cancer assay. Expression of the 16 recurrence-related genes was normalised to the set of five reference genes, and the recurrence score (RS) was calculated. RNA yield was lower from 0.6 mm cores than from 10 μm whole sections, but was still more than sufficient to perform the assay. RS and single gene data from cores were highly comparable with those from whole sections (RS p=0.005). Greater variability was seen between cores than between sections. FFPE sections are preferable to 0.6 mm cores for RNA profiling in order to maximise RNA yield and to allow for standard histopathological assessment. However, 0.6 mm cores are sufficient and would be appropriate to use for large cohort studies.
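The recurrence-score calculation referred to above follows a generic pattern: reference-gene normalization of Ct values followed by a weighted combination. The sketch below illustrates only that structure with made-up gene names and weights; it does not reproduce the actual Oncotype DX genes, coefficients, group definitions or rescaling rules.

```python
import numpy as np

# Illustrative only: gene names, Ct values and weights are hypothetical.
ct = {                      # raw Ct values from TaqMan assays
    "REF1": 24.1, "REF2": 23.8, "REF3": 24.5, "REF4": 24.0, "REF5": 23.9,
    "GENE_A": 27.2, "GENE_B": 30.1, "GENE_C": 25.6,
}
reference_genes = ["REF1", "REF2", "REF3", "REF4", "REF5"]
weights = {"GENE_A": 0.5, "GENE_B": 1.0, "GENE_C": -0.3}   # made-up weights

ref_mean = np.mean([ct[g] for g in reference_genes])
# Reference-normalized expression: higher value means higher expression.
norm = {g: ref_mean - ct[g] for g in weights}
unscaled_score = sum(w * norm[g] for g, w in weights.items())
print("normalized expression:", {g: round(v, 2) for g, v in norm.items()})
print("unscaled score:", round(unscaled_score, 2))
```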
Chuang, Emmeline; Dill, Janette; Morgan, Jennifer Craft; Konrad, Thomas R
2012-01-01
Objective To identify high-performance work practices (HPWP) associated with high frontline health care worker (FLW) job satisfaction and perceived quality of care. Methods Cross-sectional survey data from 661 FLWs in 13 large health care employers were collected between 2007 and 2008 and analyzed using both regression and fuzzy-set qualitative comparative analysis. Principal Findings Supervisor support and team-based work practices were identified as necessary for high job satisfaction and high quality of care but not sufficient to achieve these outcomes unless implemented in tandem with other HPWP. Several configurations of HPWP were associated with either high job satisfaction or high quality of care. However, only one configuration of HPWP was sufficient for both: the combination of supervisor support, performance-based incentives, team-based work, and flexible work. These findings were consistent even after controlling for FLW demographics and employer type. Additional research is needed to clarify whether HPWP have differential effects on quality of care in direct care versus administrative workers. Conclusions High-performance work practices that integrate FLWs in health care teams and provide FLWs with opportunities for participative decision making can positively influence job satisfaction and perceived quality of care, but only when implemented as bundles of complementary policies and practices. PMID:22224858
Reentrant equilibrium disordering in nanoparticle–polymer mixtures
Meng, Dong; Kumar, Sanat K.; Grest, Gary S.; ...
2017-01-31
A large body of experimental work has established that athermal colloid/polymer mixtures undergo a sequence of transitions from a disordered fluid state to a colloidal crystal to a second disordered phase with increasing polymer concentration. These transitions are driven by polymer-mediated interparticle attraction, which is a function of both the polymer density and size. It has been posited that the disordered state at high polymer density is a consequence of strong interparticle attractions that kinetically inhibit the formation of the colloidal crystal, i.e., the formation of a non-equilibrium gel phase interferes with crystallization. Here we use molecular dynamics simulations and density functional theory on polymers and nanoparticles (NPs) of comparable size and show that the crystal-disordered phase coexistence at high polymer density for sufficiently long chains corresponds to an equilibrium thermodynamic phase transition. While the crystal is, indeed, stabilized at intermediate polymer density by polymer-induced intercolloid attractions, it is destabilized at higher densities because long chains lose significant configurational entropy when they are forced to occupy all of the crystal voids. Finally, our results are in quantitative agreement with existing experimental data and show that, at least in the nanoparticle limit of sufficiently small colloidal particles, the crystal phase only has a modest range of thermodynamic stability.
Domenichiello, Anthony F.; Chen, Chuck T.; Trepanier, Marc-Olivier; Stavro, P. Mark; Bazinet, Richard P.
2014-01-01
Docosahexaenoic acid (DHA) is important for brain function, however, the exact amount required for the brain is not agreed upon. While it is believed that the synthesis rate of DHA from α-linolenic acid (ALA) is low, how this synthesis rate compares with the amount of DHA required to maintain brain DHA levels is unknown. The objective of this work was to assess whether DHA synthesis from ALA is sufficient for the brain. To test this, rats consumed a diet low in n-3 PUFAs, or a diet containing ALA or DHA for 15 weeks. Over the 15 weeks, whole body and brain DHA accretion was measured, while at the end of the study, whole body DHA synthesis rates, brain gene expression, and DHA uptake rates were measured. Despite large differences in body DHA accretion, there was no difference in brain DHA accretion between rats fed ALA and DHA. In rats fed ALA, DHA synthesis and accretion was 100-fold higher than brain DHA accretion of rats fed DHA. Also, ALA-fed rats synthesized approximately 3-fold more DHA than the DHA uptake rate into the brain. This work indicates that DHA synthesis from ALA may be sufficient to supply the brain. PMID:24212299
Fitzgerald, Paul B; Hoy, Kate E; Elliot, David; McQueen, Susan; Wambeek, Lenore E; Chen, Leo; Clinton, Anne Maree; Downey, Glenn; Daskalakis, Zafiris J
2018-05-01
Magnetic seizure therapy (MST) is a novel brain stimulation technique that uses a high-powered transcranial magnetic stimulation device to produce therapeutic seizures. Preliminary MST studies have found antidepressant effects in the absence of cognitive side effects but its efficacy compared to electroconvulsive therapy (ECT) remains unclear. The aim of this study was to investigate the therapeutic efficacy and cognitive profile of MST compared to standard right unilateral ECT treatment. Thirty-seven patients completed a course of at least nine ECT or MST treatments in a randomized double-blind protocol. Assessments of depression severity and cognition were performed before and after treatment. No difference in the antidepressant effectiveness between the treatments was seen across any of the clinical outcome measures, although the overall response rates in both groups were quite low. In regards to cognition, following MST there were significant improvements in tests of psychomotor speed, verbal memory, and cognitive inhibition, with no reductions in cognitive performance. Following ECT there was significant improvement in only one of the cognitive inhibition tasks. With respect to the between-group comparisons, the MST group showed a significantly greater improvement on psychomotor speed than ECT. MST showed similar efficacy to right unilateral ECT in patients with treatment-resistant depression without cognitive side effects but in a sample that was only of sufficient size to demonstrate relatively large differences in response between the two groups. Future research should aim to optimize the methods of MST administration and compare its efficacy to ECT in large randomized controlled trials. © 2018 Wiley Periodicals, Inc.
Klinge, Uwe; Otto, Jens; Mühl, Thomas
2015-01-01
Reinforcement of tissues by use of textiles is encouraged by the reduced rate of recurrent tissue dehiscence, but at the price of an inflammatory and fibrotic tissue reaction to the implant. The latter is mainly affected by the size of the pores, and only sufficiently large pores are effective in preventing complete scar entrapment. Comparing two different sling implants (TVT and SIS), which are used for the treatment of urinary incontinence, we demonstrate that measurement of the effective porosity reveals considerable differences in textile construction. Furthermore, the changes in porosity after application of a tensile load can indicate a structural instability, favouring pore collapse under stress and calling into question the use of such implants for purposes that are not “tension-free.” PMID:25973427
Linear reduction method for predictive and informative tag SNP selection.
He, Jingwu; Westbrooks, Kelly; Zelikovsky, Alexander
2005-01-01
Constructing a complete human haplotype map is helpful when associating complex diseases with their related SNPs. Unfortunately, the number of SNPs is very large and it is costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNPs that should be sequenced to a small number of informative representatives called tag SNPs. In this paper, we propose a new linear algebra-based method for selecting and using tag SNPs. We measure the quality of our tag SNP selection algorithm by comparing actual SNPs with SNPs predicted from selected linearly independent tag SNPs. Our experiments show that for sufficiently long haplotypes, knowing only 0.4% of all SNPs the proposed linear reduction method predicts an unknown haplotype with the error rate below 2% based on 10% of the population.
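A minimal sketch of the linear-algebra idea described above (not the authors' implementation): choose tag SNPs as a set of linearly independent columns of the haplotype matrix via QR with column pivoting, then reconstruct the remaining SNPs by least squares and rounding. The haplotype data are synthetic and built with deliberate redundancy.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(2)

# Synthetic haplotype matrix: rows = haplotypes, columns = SNPs (0/1), built
# with heavy redundancy so that a small set of columns determines the rest.
base = rng.integers(0, 2, size=(40, 10)).astype(float)   # 10 distinct patterns
H = np.column_stack([base[:, rng.integers(0, 10)] for _ in range(30)])

# Tag SNPs = pivot columns of a rank-revealing QR factorization.
rank = np.linalg.matrix_rank(H)
_, _, piv = qr(H, pivoting=True)
tags = np.sort(piv[:rank])

# Predict every SNP column as a linear combination of the tag columns,
# then round back to 0/1 alleles.
coef, *_ = np.linalg.lstsq(H[:, tags], H, rcond=None)
H_pred = np.clip(np.rint(H[:, tags] @ coef), 0, 1)
print(f"{len(tags)} tag SNPs out of {H.shape[1]},",
      f"prediction error rate = {np.mean(H_pred != H):.3f}")
```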
Some observations of tip-vortex cavitation
NASA Astrophysics Data System (ADS)
Arndt, R. E. A.; Arakeri, V. H.; Higuchi, H.
1991-08-01
Cavitation has been observed in the trailing vortex system of an elliptic planform hydrofoil. A complex dependence on Reynolds number and gas content is noted at inception. Some of the observations can be related to tension effects associated with the lack of sufficiently large-sized nuclei. Inception measurements are compared with estimates of pressure in the vortex obtained from LDV measurements of velocity within the vortex. It is concluded that a complete correlation is not possible without knowledge of the fluctuating levels of pressure in tip-vortex flows. When cavitation is fully developed, the observed tip-vortex trajectory shows a surprising lack of dependence on any of the physical parameters varied, such as angle of attack, Reynolds number, cavitation number, and dissolved gas content.
A Fractal Dimension Survey of Active Region Complexity
NASA Technical Reports Server (NTRS)
McAteer, R. T. James; Gallagher, Peter; Ireland, Jack
2005-01-01
A new approach to quantifying the magnetic complexity of active regions using a fractal dimension measure is presented. This fully-automated approach uses full disc MDI magnetograms of active regions from a large data set (2742 days of the SoHO mission; 9342 active regions) to compare the calculated fractal dimension to both Mount Wilson classification and flare rate. The main Mount Wilson classes exhibit no distinct fractal dimension distribution, suggesting a self-similar nature of all active regions. Solar flare productivity exhibits an increase in both the frequency and GOES X-ray magnitude of flares from regions with higher fractal dimensions. Specifically, a lower threshold fractal dimension of 1.2 and 1.25 exists as a necessary, but not sufficient, requirement for an active region to produce M- and X-class flares respectively.
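The abstract does not spell out the estimator used; a common choice for this kind of measure is a box-counting dimension computed on a thresholded magnetogram, sketched below on a synthetic binary mask (the data and threshold are placeholders, not MDI magnetograms).

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the box-counting (fractal) dimension of a 2-D binary mask."""
    h, w = mask.shape
    counts = []
    for s in sizes:
        # Count boxes of side s that contain at least one "on" pixel.
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    # Slope of log N(s) versus log(1/s) gives the dimension estimate.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

# Synthetic "active region" mask: pixels above a flux threshold.
rng = np.random.default_rng(3)
field = rng.normal(size=(256, 256))
mask = np.abs(field) > 2.0          # stand-in for |B| above a threshold
print("estimated fractal dimension:", round(box_counting_dimension(mask), 2))
```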
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented incur no added computational effort compared to their standard counterparts.
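For context, the sketch below shows the standard fifth-order WENO-JS smoothness indicators and nonlinear weights, with the WENO-Z variant as an option; this is the machinery that embedded schemes build on, not the embedded construction itself.

```python
import numpy as np

def weno5_weights(v, eps=1e-6, z=False):
    """Nonlinear weights for 5th-order WENO reconstruction at the right cell face.

    v holds the five cell averages (v[i-2], v[i-1], v[i], v[i+1], v[i+2]).
    z=False gives the classical Jiang-Shu weights, z=True the WENO-Z weights.
    """
    vm2, vm1, v0, vp1, vp2 = v
    # Smoothness indicators of the three candidate substencils.
    b0 = 13/12 * (vm2 - 2*vm1 + v0)**2 + 0.25 * (vm2 - 4*vm1 + 3*v0)**2
    b1 = 13/12 * (vm1 - 2*v0 + vp1)**2 + 0.25 * (vm1 - vp1)**2
    b2 = 13/12 * (v0 - 2*vp1 + vp2)**2 + 0.25 * (3*v0 - 4*vp1 + vp2)**2
    d = np.array([0.1, 0.6, 0.3])                # ideal (linear) weights
    beta = np.array([b0, b1, b2])
    if z:
        tau5 = abs(b0 - b2)
        alpha = d * (1.0 + tau5 / (beta + eps))  # WENO-Z
    else:
        alpha = d / (beta + eps)**2              # WENO-JS
    return alpha / alpha.sum()

# Smooth data: the weights stay close to the ideal values (0.1, 0.6, 0.3).
print(weno5_weights(np.sin(0.1 * np.arange(5))))
# Data with a jump: the substencils crossing the jump lose almost all weight.
print(weno5_weights(np.array([0.0, 0.0, 0.0, 1.0, 1.0])))
```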
NASA Astrophysics Data System (ADS)
Tinguely, Jean-Claude; Solarska, Renata; Braun, Artur; Graule, Thomas
2011-04-01
A new approach for the large-scale production of flexible photoelectrodes for dye-sensitized solar cells (DSSCs) is presented by roll-to-roll coating of a titanium dioxide nanodispersion containing the block copolymer 'Pluronic®' (PEOx-PPOy-PEOx, PEO: poly(ethylene oxide), PPO: poly(propylene oxide)). Functional DSSCs were assembled and the different coating procedures compared with respect to their solar power conversion efficiency. It is shown that the binder 'Pluronic' can be removed at processing temperatures as low as 140 °C, thus aiding achievement of sufficient adhesion to the ITO-PET support, higher porosity of the TiO2 layer and decreased crack appearance. Further optimization of this method is particularly promising when combined with other known low-temperature methods.
High-frequency CAD-based scattering model: SERMAT
NASA Astrophysics Data System (ADS)
Goupil, D.; Boutillier, M.
1991-09-01
Specifications for an industrial radar cross section (RCS) calculation code are given: it must be able to exchange data with many computer aided design (CAD) systems, it must be fast, and it must have powerful graphic tools. Classical physical optics (PO) and equivalent currents (EC) techniques have proven their efficiency on simple objects for a long time. Difficult geometric problems occur when objects with very complex shapes have to be computed. Only a specific geometric code can solve these problems. We have established that, once these problems have been solved: (1) PO and EC give good results on complex objects of large size compared to wavelength; and (2) the implementation of these objects in a software package (SERMAT) allows fast and sufficiently precise RCS calculations that meet industry requirements in the domain of stealth.
A compact time reversal emitter-receiver based on a leaky random cavity
Luong, Trung-Dung; Hies, Thomas; Ohl, Claus-Dieter
2016-01-01
Time reversal acoustics (TRA) has gained widespread applications for communication and measurements. In general, a scattering medium in combination with multiple transducers is needed to achieve a sufficiently large acoustical aperture. In this paper, we report an implementation for a cost-effective and compact time reversal emitter-receiver driven by a single piezoelectric element. It is based on a leaky cavity with random 3-dimensional printed surfaces. The random surfaces greatly increase the spatio-temporal focusing quality as compared to flat surfaces and allow the focus of an acoustic beam to be steered over an angle of 41°. We also demonstrate its potential use as a scanner by embedding a receiver to detect an object from its backscatter without moving the TRA emitter. PMID:27811957
NASA Astrophysics Data System (ADS)
Ayars, Eric James
2000-10-01
The purpose of this research is to investigate differences observed between Raman spectra when seen through a Near-field Scanning Optical Microscope (NSOM) and spectra of the same materials in conventional Raman or micro-Raman configurations. One source of differences in the observed spectra is a strong z-polarized component in the near-field radiation; observations of the magnitude of this effect are compared with theoretical predictions for the field intensity near an NSOM tip. Large electric field gradients near the sharp NSOM probe may be another source of differences. This Gradient-Field Raman (GFR) effect was observed, and there is good evidence that it plays a significant role in Surface-Enhanced Raman Spectroscopy (SERS). The NSOM data obtained here, however, are not sufficient to prove conclusively that the observed spectral variations are due to the field gradients.
Gravitational waves and large field inflation
NASA Astrophysics Data System (ADS)
Linde, Andrei
2017-02-01
According to the famous Lyth bound, one can confirm large field inflation by finding tensor modes with sufficiently large tensor-to-scalar ratio r. Here we will try to answer two related questions: is it possible to rule out all large field inflationary models by not finding tensor modes with r above some critical value, and what can we say about the scale of inflation by measuring r? However, in order to answer these questions one should distinguish between two different definitions of the large field inflation and three different definitions of the scale of inflation. We will examine these issues using the theory of cosmological α-attractors as a convenient testing ground.
A theory of photometric stereo for a class of diffuse non-Lambertian surfaces
NASA Technical Reports Server (NTRS)
Tagare, Hemant D.; Defigueiredo, Rui J. P.
1991-01-01
A theory of photometric stereo is proposed for a large class of non-Lambertian reflectance maps. The authors review the different reflectance maps proposed in the literature for modeling reflection from real-world surfaces. From this, they obtain a mathematical class of reflectance maps to which the maps belong. They show that three lights can be sufficient for a unique inversion of the photometric stereo equation for the entire class of reflectance maps. They also obtain a constraint on the positions of light sources for obtaining this solution. They investigate the sufficiency of three light sources to estimate the surface normal and the illuminant strength. The issue of completeness of reconstruction is addressed. They show that if k lights are sufficient for a unique inversion, 2k lights are necessary for a complete inversion.
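As a point of reference for the inversion being discussed (the Lambertian baseline, not the non-Lambertian class treated by the authors), three non-coplanar lights determine the albedo-scaled normal at an illuminated pixel through a 3x3 linear system; the light directions, normal and albedo below are made up.

```python
import numpy as np

# Lambertian baseline: intensity I_k = rho * max(l_k . n, 0) for light l_k.
# With three non-coplanar lights and an illuminated pixel, L @ (rho * n) = I,
# so the scaled normal follows from solving a 3x3 linear system.
L = np.array([[0.0, 0.0, 1.0],          # three light directions (rows)
              [0.7, 0.0, 0.714],
              [0.0, 0.7, 0.714]])
L /= np.linalg.norm(L, axis=1, keepdims=True)

n_true = np.array([0.3, -0.2, 1.0])
n_true /= np.linalg.norm(n_true)
rho_true = 0.8                                   # albedo
I = rho_true * np.clip(L @ n_true, 0.0, None)    # simulated intensities

g = np.linalg.solve(L, I)                        # g = rho * n
rho = np.linalg.norm(g)
n = g / rho
print("recovered albedo:", round(rho, 3), "normal:", np.round(n, 3))
```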
Wide-range radioactive-gas-concentration detector
Anderson, D.F.
1981-11-16
A wide-range radioactive-gas-concentration detector and monitor capable of measuring radioactive-gas concentrations over a range of eight orders of magnitude is described. The device is designed to have an ionization chamber sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel-plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel-plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization-chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
NASA Astrophysics Data System (ADS)
Lovely, P. J.; Mutlu, O.; Pollard, D. D.
2007-12-01
Cohesive end-zones (CEZs) are regions of increased frictional strength and/or cohesion near the peripheries of faults that cause slip distributions to taper toward the fault-tip. Laboratory results, field observations, and theoretical models suggest an important role for CEZs in small-scale fractures and faults; however, their role in crustal-scale faulting and associated large earthquakes is less thoroughly understood. We present a numerical study of the potential role of CEZs on slip distributions in large, multi-segmented, strike-slip earthquake ruptures including the 1992 Landers Earthquake (Mw 7.2) and 1999 Hector Mine Earthquake (Mw 7.1). Displacement discontinuity is calculated using a quasi-static, 2D plane-strain boundary element (BEM) code for a homogeneous, isotropic, linear-elastic material. Friction is implemented by enforcing principles of complementarity. Model results with and without CEZs are compared with slip distributions measured by combined inversion of geodetic, strong ground motion, and teleseismic data. Stepwise and linear distributions of increasing frictional strength within CEZs are considered. The incorporation of CEZs in our model enables an improved match to slip distributions measured by inversion, suggesting that CEZs play a role in governing slip in large, strike-slip earthquakes. Additionally, we present a parametric study highlighting the very great sensitivity of modeled slip magnitude to small variations of the coefficient of friction. This result suggests that, provided a sufficiently well-constrained stress tensor and elastic moduli for the surrounding rock, relatively simple models could provide precise estimates of the magnitude of frictional strength. These results are verified by comparison with geometrically comparable finite element (FEM) models using the commercial code ABAQUS. In FEM models, friction is implemented by use of both Lagrange multipliers and penalty methods.
Design of a pulse-type strain gauge balance for a long-test-duration hypersonic shock tunnel
NASA Astrophysics Data System (ADS)
Wang, Y.; Liu, Y.; Jiang, Z.
2016-11-01
When the measurement of aerodynamic forces is conducted in a hypersonic shock tunnel, the inertial forces lead to low-frequency vibrations of the model, and its motion cannot be addressed through digital filtering because a sufficient number of cycles cannot be obtained during a tunnel run. This finding implies restrictions on the model size and mass as the natural frequencies are inversely proportional to the length scale of the model. Therefore, the force measurement still has many problems, particularly for large and heavy models. Different structures of a strain gauge balance (SGB) are proposed and designed, and the measurement element is further optimized to overcome the difficulties encountered during the measurement of aerodynamic forces in a shock tunnel. The motivation for this study is to assess the structural performance of the SGB used in a long-test-duration JF12 hypersonic shock tunnel, which has more than 100 ms of test time. Force tests were conducted for a large-scale cone with a 10° semivertex angle and a length of 0.75 m in the JF12 long-test-duration shock tunnel. The finite element method was used for the analysis of the vibrational characteristics of the Model-Balance-Sting System (MBSS) to ensure a sufficient number of cycles, particularly for the axial force signal during a shock tunnel run. The higher-stiffness SGB used in the test shows good performance, wherein the frequency of the MBSS increases because of the stiff construction of the balance. The experimental results are compared with the data obtained in another wind tunnel and exhibit good agreement at M = 7 and α = 5°.
An investigation of small scales of turbulence in a boundary layer at high Reynolds numbers
NASA Technical Reports Server (NTRS)
Wallace, James M.; Ong, L.; Balint, J.-L.
1993-01-01
The assumption that turbulence at large wave-numbers is isotropic and has universal spectral characteristics which are independent of the flow geometry, at least for high Reynolds numbers, has been a cornerstone of closure theories as well as of the most promising recent development in the effort to predict turbulent flows, viz. large eddy simulations. This hypothesis was first advanced by Kolmogorov based on the supposition that turbulent kinetic energy cascades down the scales (up the wave-numbers) of turbulence and that, if the number of these cascade steps is sufficiently large (i.e. the wave-number range is large), then the effects of anisotropies at the large scales are lost in the energy transfer process. Experimental attempts were repeatedly made to verify this fundamental assumption. However, Van Atta has recently suggested that an examination of the scalar and velocity gradient fields is necessary to definitively verify this hypothesis or prove it to be unfounded. Of course, this must be carried out in a flow with a sufficiently high Reynolds number to provide the necessary separation of scales and thereby allow an unambiguous test of local isotropy at large wave-numbers. An opportunity to use our 12-sensor hot-wire probe to address this issue directly was made available at the 80'x120' wind tunnel at the NASA Ames Research Center, which is normally used for full-scale aircraft tests. An initial report on this high Reynolds number experiment and progress toward its evaluation is presented.
Rodrigues, Jorge L. M.; Serres, Margrethe H.; Tiedje, James M.
2011-01-01
The use of comparative genomics for the study of different microbiological species has increased substantially as sequence technologies become more affordable. However, efforts to fully link a genotype to its phenotype remain limited to the development of one mutant at a time. In this study, we provided a high-throughput alternative to this limiting step by coupling comparative genomics to the use of phenotype arrays for five sequenced Shewanella strains. Positive phenotypes were obtained for 441 nutrients (C, N, P, and S sources), with N-based compounds being the most utilized for all strains. Many genes and pathways predicted by genome analyses were confirmed with the comparative phenotype assay, and three degradation pathways believed to be missing in Shewanella were confirmed as missing. A number of previously unknown gene products were predicted to be parts of pathways or to have a function, expanding the number of gene targets for future genetic analyses. Ecologically, the comparative high-throughput phenotype analysis provided insights into niche specialization among the five different strains. For example, Shewanella amazonensis strain SB2B, isolated from the Amazon River delta, was capable of utilizing 60 C compounds, whereas Shewanella sp. strain W3-18-1, isolated from deep marine sediment, utilized only 25 of them. In spite of the large number of nutrient sources yielding positive results, our study indicated that except for the N sources, they were not sufficiently informative to predict growth phenotypes from increasing evolutionary distances. Our results indicate the importance of phenotypic evaluation for confirming genome predictions. This strategy will accelerate the functional discovery of genes and provide an ecological framework for microbial genome sequencing projects. PMID:21642407
Regan, Rainy D; Fenyk-Melody, Judy E; Tran, Sam M; Chen, Guang; Stocking, Kim L
2016-01-01
Nonterminal blood sample collection of sufficient volume and quality for research is complicated in mice due to their small size and anatomy. Large (>100 μL) nonterminal volumes of unhemolyzed or unclotted blood currently are typically collected from the retroorbital sinus or submandibular plexus. We developed a third method—submental blood collection—which is similar in execution to the submandibular method but with minor changes in animal restraint and collection location. Compared with other techniques, submental collection is easier to perform due to the direct visibility of the target vessels, which are located in a sparsely furred region. Compared with the submandibular method, the submental method did not differ regarding weight change and clotting score but significantly decreased hemolysis and increased the overall number of high-quality samples. The submental method was performed with smaller lancets for the majority of the bleeds, yet resulted in fewer repeat collection attempts, fewer insufficient samples, and less extraneous blood loss and was qualitatively less traumatic. Compared with the retroorbital technique, the submental method was similar regarding weight change but decreased hemolysis, clotting, and the number of overall high-quality samples; however the retroorbital method resulted in significantly fewer incidents of insufficient sample collection. Extraneous blood loss was roughly equivalent between the submental and retroorbital methods. We conclude that the submental method is an acceptable venipuncture technique for obtaining large, nonterminal volumes of blood from mice. PMID:27657712
Lefeber, Nina; Swinnen, Eva; Kerckhofs, Eric
2017-10-01
The integration of sufficient cardiovascular stress into robot-assisted gait (RAG) training could combine the benefits of both RAG and aerobic training. The aim was to summarize literature data on the immediate effects of RAG compared to walking without robot-assistance on metabolic-, cardiorespiratory- and fatigue-related parameters. PubMed and Web of Science were searched for eligible articles till February 2016. Means, SDs and significance values were extracted. Effect sizes were calculated. Fourteen studies were included, concerning 155 participants (85 healthy subjects, 39 stroke and 31 spinal cord injury patients), 9 robots (2 end-effectors, 1 treadmill-based and 6 wearable exoskeletons), and 7 outcome parameters (mostly oxygen consumption and heart rate). Overall, metabolic and cardiorespiratory parameters were lower during RAG compared to walking without robot-assistance (moderate to large effect sizes). In healthy subjects, when no body-weight support (BWS) was provided, RAG with an end-effector device was more energy demanding than walking overground (p > .05, large effect sizes). Generally, results suggest that RAG is less energy-consuming and cardiorespiratory stressful than walking without robot-assistance, but results depend on factors such as robot type, walking speed, BWS and effort. Additional research is needed to draw firm conclusions. Implications for Rehabilitation Awareness of the energy consumption and cardiorespiratory load of robot-assisted gait (RAG) training is important in the rehabilitation of (neurological) patients with impaired cardiorespiratory fitness and patients who are at risk of cardiovascular diseases. On the other hand, the integration of sufficient cardiometabolic stress in RAG training could combine the effects of both RAG and aerobic training. Energy consumption and cardiorespiratory load during walking with robot-assistance seems to depend on factors such as robot type, walking speed, body-weight support or amount of effort. These parameters could be adjusted in RAG rehabilitation to make RAG more or less energy-consuming and cardiorespiratory stressful. Overall, short duration exoskeleton walking seems less energy-consuming and cardiorespiratory stressful than walking without robot-assistance. This might implicate that the exercise intensity is safe for (neurological) patients at risk of cardiovascular diseases. How this changes in extended walking time is unclear.
1987-01-01
This report is an assessment of all available literature that pertains to the potential risk of cancer associated with ingestion of asbestos. It was compiled by a working group to assist policy makers in the Department of Health and Human Services determine if adequate information was available for a definitive risk assessment on this potential problem and evaluate if the weight of evidence was sufficient to prioritize this issue for new policy recommendations. The work group considered the basis for concern over this problem, the body of toxicology experiments, the individual epidemiologic studies which have attempted to investigate this issue, and the articles that discuss components of risk assessment pertaining to the ingestion of asbestos. In the report, the work group concluded: that no direct, definitive risk assessment can be conducted at this time; that further epidemiologic investigations will be very costly and only possess sufficient statistical power to detect relatively large excesses in cancers related to asbestos ingestion; and that probably the most pertinent toxicologic experiments relate to resolving the differences in how inhaled asbestos, which is eventually swallowed, is biologically processed by humans, compared to how ingested asbestos is processed. The work group believes that the cancer risk associated with asbestos ingestion should not be perceived as one of the most pressing potential public health hazards facing the nation. However, the work group does not believe that information was sufficient to assess the level of cancer risk associated with the ingestion and therefore, this potential hazard should not be discounted, and ingestion exposure to asbestos should be eliminated whenever possible. PMID:3304998
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hou, Shiqiang; Fang, Ming; Zhu, Qian
Coronary collateral circulation (CCC) functions as a natural bypass in the event of coronary obstruction, which markedly improves prognosis in patients with coronary artery disease (CAD). MicroRNAs (miRNAs) have been implicated in multiple physiological and pathological processes, including angiogenesis involved in CCC growth. The roles that miRNA-939 (miR-939) plays in angiogenesis remain largely unknown. We conducted this study to explore the expression of miR-939 in CAD patients and its role in angiogenesis. For the first time, our results indicated that the expression of circulating miR-939 was down-regulated in patients with sufficient CCC compared with patients with poor CCC. Overexpression of miR-939 in primary human umbilical vein endothelial cells (HUVECs) significantly inhibited the proliferation, adhesion and tube formation, but promoted the migration of cells. In contrast, miR-939 knockdown exerted reverse effects. We further identified that γ-catenin was a novel target of miR-939 by translational repression, which could rescue the effects of miR-939 in HUVECs. In summary, this study revealed that the expression of circulating miR-939 was down-regulated in CAD patients with sufficient CCC. MiR-939 abolished vascular integrity and repressed angiogenesis through directly targeting γ-catenin. It provided a potential biomarker and a therapeutic target for CAD. - Highlights: • Circulating miR-939 is decreased in sufficient coronary collateral circulation. • MiR-939 abolishes vascular integrity in endothelial cells. • MiR-939 represses angiogenesis. • γ-catenin is a novel target of miR-939.
[Lymphocytic infiltration in uveal melanoma].
Sach, J; Kocur, J
1993-11-01
After our observation of lymphocytic infiltration in uveal melanomas, we present a theoretical review of this interesting topic. Due to the relatively low incidence of this feature, we do not have a sufficiently large collection of cases to present statistically significant conclusions of our own.
Digital tissue and what it may reveal about the brain.
Morgan, Josh L; Lichtman, Jeff W
2017-10-30
Imaging as a means of scientific data storage has evolved rapidly over the past century from hand drawings, to photography, to digital images. Only recently can sufficiently large datasets be acquired, stored, and processed such that tissue digitization can actually reveal more than direct observation of tissue. One field where this transformation is occurring is connectomics: the mapping of neural connections in large volumes of digitized brain tissue.
Optical Communications With A Geiger Mode APD Array
2016-02-09
spurious fires from numerous sources, including crosstalk from other detectors in the same array. Additionally, after a successful detection, the... be combined into arrays with large numbers of detectors, allowing for scaling of dynamic range with relatively little overhead on space and power... overall higher rate of dark counts than a single detector, this is more than compensated for by the extra detectors. A sufficiently large APD array could
Asymptotic form for the cross section for the Coulomb interacting rearrangement collisions.
NASA Technical Reports Server (NTRS)
Omidvar, K.
1973-01-01
It is shown that in a rearrangement collision leading to the formation of highly excited hydrogenlike states the cross section at high energies behaves as 1/n-squared, with n the principal quantum number, thus invalidating the Brinkman-Kramers approximation for large n. Similarly, in high-energy inelastic electron-hydrogenlike-atom collisions the exchange cross section for sufficiently large n dominates the direct excitation cross section.
Seidl, Roman; Moser, Corinne; Blumer, Yann
2017-01-01
Many countries have some kind of energy-system transformation either planned or ongoing for various reasons, such as to curb carbon emissions or to compensate for the phasing out of nuclear energy. One important component of these transformations is the overall reduction in energy demand. It is generally acknowledged that the domestic sector represents a large share of total energy consumption in many countries. Increased energy efficiency is one factor that reduces energy demand, but behavioral approaches (known as "sufficiency") and their respective interventions also play important roles. In this paper, we address citizens' heterogeneity regarding both their current behaviors and their willingness to realize their sufficiency potentials, that is, to reduce their energy consumption through behavioral change. We collaborated with three Swiss cities for this study. A survey conducted in the three cities yielded thematic sets of energy-consumption behavior that various groups of participants rated differently. Using this data, we identified four groups of participants with different patterns of both current behaviors and sufficiency potentials. The paper discusses intervention types and addresses citizens' heterogeneity and behaviors from a city-based perspective.
Hossack, A. C.; Sutherland, D. A.; Jarboe, T. R.
2017-02-01
A derivation is given showing that the current inside a closed-current volume can be sustained against resistive dissipation by appropriately phased magnetic perturbations. Imposed-dynamo current drive (IDCD) theory is used to predict the toroidal current evolution in the HIT-SI experiment as a function of magnetic fluctuations at the edge. Analysis of magnetic fields from a HIT-SI discharge shows that the injector-imposed fluctuations are sufficient to sustain the measured toroidal current without instabilities whereas the small, plasma-generated magnetic fluctuations are not sufficiently large to sustain the current.
Water intrusion in thin-skinned composite honeycomb sandwich structures
NASA Technical Reports Server (NTRS)
Jackson, Wade C.; O'Brien, T. Kevin
1988-01-01
Thin-skinned composite honeycomb sandwich structures from the trailing edge of the U.S. Army's Apache and Chinook helicopters have been tested to ascertain their susceptibility to water intrusion following impact damage and cyclic loading. Minimum impact and fatigue conditions were determined which would create microcracks sufficiently large to allow the passage of water through the skins; for some skins, damage sufficient for this to occur was undetectable under a 40X-magnification optical microscope. Flow rate was a function of moisture content, damage, applied strain, and pressure differences.
Eating and health behaviors in vegans compared to omnivores: Dispelling common myths.
Heiss, Sydney; Coffino, Jaime A; Hormes, Julia M
2017-11-01
Studies comparing eating behaviors in individuals avoiding meat and other animal products to omnivores have produced largely inconclusive findings, in part due to a failure to obtain sufficiently large samples of vegan participants to make meaningful comparisons. This study examined eating and health behaviors in a large community sample of dietary vegans ("vegans"), compared to omnivores. Participants (n = 578, 80.4% female) completed an online questionnaire assessing a range of eating- and other health-related attitudes and behaviors. Vegans (62.0%, n = 358) and omnivores (38.1%, n = 220) were comparable in terms of demographics. Vegans scored significantly lower than omnivores on the Eating Disorder Examination - Questionnaire (multivariate p < 0.001), a measure of pathological eating behavior. They also were more likely to consider themselves "healthy" (p < 0.001) and to prepare food at home (p < 0.001). Vegans more frequently consumed fruits, vegetables, nuts, beans and grains (all p < 0.001), and less frequently consumed caffeinated soft drinks (p < 0.001). There were no significant differences between vegans and omnivores on measures of eating styles, body mass index, smoking or exercise behaviors, or problems related to alcohol consumption. Effect sizes for comparisons on eating-related measures were generally small, with ηp² ranging from <0.01 to 0.05; the size of effects for comparisons on measures of other health behaviors ranged from small to medium (Φ = 0.09 to 0.33 and ηp² <0.01 to 0.42). Taken together, findings suggest that ultimately, vegans do not differ much from omnivores in their eating attitudes and behaviors, and when they do, differences indicate slightly healthier attitudes and behaviors towards food. Similarly, vegans closely resembled omnivores in non-eating related health behaviors. Copyright © 2017 Elsevier Ltd. All rights reserved.
Performance of ceramic superconductors in magnetic bearings
NASA Technical Reports Server (NTRS)
Kirtley, James L., Jr.; Downer, James R.
1993-01-01
Magnetic bearings are large-scale applications of magnet technology, quite similar in certain ways to synchronous machinery. They require substantial flux density over relatively large volumes of space. Large flux density is required to have satisfactory force density. Satisfactory dynamic response requires that magnetic circuit permeances not be too large, implying large air gaps. Superconductors, which offer large magnetomotive forces and high flux density in low permeance circuits, appear to be desirable in these situations. Flux densities substantially in excess of those possible with iron can be produced, and no ferromagnetic material is required. Thus the inductance of active coils can be made low, indicating good dynamic response of the bearing system. The principal difficulty in using superconductors is, of course, the deep cryogenic temperatures at which they must operate. Because of the difficulties in working with liquid helium, the possibility of superconductors which can be operated in liquid nitrogen is thought to extend the number and range of applications of superconductivity. Critical temperatures of about 98 degrees Kelvin were demonstrated in a class of materials which are, in fact, ceramics. Quite a bit of public attention was attracted to these new materials. There is a difficulty with the ceramic superconducting materials which were developed to date. Current densities sufficient for use in large-scale applications have not been demonstrated. In order to be useful, superconductors must be capable of carrying substantial currents in the presence of large magnetic fields. The possible use of ceramic superconductors in magnetic bearings is investigated and discussed and requirements that must be achieved by superconductors operating at liquid nitrogen temperatures to make their use comparable with niobium-titanium superconductors operating at liquid helium temperatures are identified.
Effects of the seasonal cycle on superrotation in planetary atmospheres
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Jonathan L.; Vallis, Geoffrey K.; Potter, Samuel F.
2014-05-20
The dynamics of dry atmospheric general circulation model simulations forced by seasonally varying Newtonian relaxation are explored over a wide range of two control parameters and are compared with the large-scale circulation of Earth, Mars, and Titan in their relevant parameter regimes. Of the parameters that govern the behavior of the system, the thermal Rossby number (Ro) has previously been found to be important in governing the spontaneous transition from an Earth-like climatology of winds to a superrotating one with prograde equatorial winds, in the absence of a seasonal cycle. This case is somewhat unrealistic as it applies only if the planet has zero obliquity or if surface thermal inertia is very large. While Venus has nearly vanishing obliquity, Earth, Mars, and Titan (Saturn) all have obliquities of ∼25° and varying degrees of seasonality due to their differing thermal inertias and orbital periods. Motivated by this, we introduce a time-dependent Newtonian cooling to drive a seasonal cycle using idealized model forcing, and we define a second control parameter that mimics non-dimensional thermal inertia of planetary surfaces. We then perform and analyze simulations across the parameter range bracketed by Earth-like and Titan-like regimes, assess the impact on the spontaneous transition to superrotation, and compare Earth, Mars, and Titan to the model simulations in the relevant parameter regime. We find that a large seasonal cycle (small thermal inertia) prevents model atmospheres with large thermal Rossby numbers from developing superrotation by the influences of (1) cross-equatorial momentum advection by the Hadley circulation and (2) hemispherically asymmetric zonal-mean zonal winds that suppress instabilities leading to equatorial momentum convergence. We also demonstrate that baroclinic instabilities must be sufficiently weak to allow superrotation to develop. In the relevant parameter regimes, our seasonal model simulations compare favorably to large-scale, seasonal phenomena observed on Earth and Mars. In the Titan-like regime the seasonal cycle in our model acts to prevent superrotation from developing, and it is necessary to increase the value of a third parameter—the atmospheric Newtonian cooling time—to achieve a superrotating climatology.
Small-molecule ligand docking into comparative models with Rosetta
Combs, Steven A; DeLuca, Samuel L; DeLuca, Stephanie H; Lemmon, Gordon H; Nannemann, David P; Nguyen, Elizabeth D; Willis, Jordan R; Sheehan, Jonathan H; Meiler, Jens
2017-01-01
Structure-based drug design is frequently used to accelerate the development of small-molecule therapeutics. Although substantial progress has been made in X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy, the availability of high-resolution structures is limited owing to the frequent inability to crystallize or obtain sufficient NMR restraints for large or flexible proteins. Computational methods can be used to both predict unknown protein structures and model ligand interactions when experimental data are unavailable. This paper describes a comprehensive and detailed protocol using the Rosetta modeling suite to dock small-molecule ligands into comparative models. In the protocol presented here, we review the comparative modeling process, including sequence alignment, threading and loop building. Next, we cover docking a small-molecule ligand into the protein comparative model. In addition, we discuss criteria that can improve ligand docking into comparative models. Finally, and importantly, we present a strategy for assessing model quality. The entire protocol is presented on a single example selected solely for didactic purposes. The results are therefore not representative and do not replace benchmarks published elsewhere. We also provide an additional tutorial so that the user can gain hands-on experience in using Rosetta. The protocol should take 5–7 h, with additional time allocated for computer generation of models. PMID:23744289
Statistical tests to compare motif count exceptionalities
Robin, Stéphane; Schbath, Sophie; Vandewalle, Vincent
2007-01-01
Background Finding over- or under-represented motifs in biological sequences is now a common task in genomics. Thanks to p-value calculation for motif counts, exceptional motifs are identified and represent candidate functional motifs. The present work addresses the related question of comparing the exceptionality of one motif in two different sequences. Just comparing the motif count p-values in each sequence is indeed not sufficient to decide if this motif is significantly more exceptional in one sequence compared to the other one. A statistical test is required. Results We develop and analyze two statistical tests, an exact binomial one and an asymptotic likelihood ratio test, to decide whether the exceptionality of a given motif is equivalent or significantly different in two sequences of interest. For that purpose, motif occurrences are modeled by Poisson processes, with a special care for overlapping motifs. Both tests can take the sequence compositions into account. As an illustration, we compare the octamer exceptionalities in the Escherichia coli K-12 backbone versus variable strain-specific loops. Conclusion The exact binomial test is particularly adapted for small counts. For large counts, we advise to use the likelihood ratio test which is asymptotic but strongly correlated with the exact binomial test and very simple to use. PMID:17346349
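A simplified version of the exact binomial comparison described above (ignoring base composition and overlapping occurrences, which the paper handles explicitly) conditions on the total count and tests whether the occurrences split between the two sequences in proportion to their lengths; the counts and lengths below are hypothetical.

```python
from scipy.stats import binomtest

def compare_motif_counts(n1, len1, n2, len2):
    """Exact binomial comparison of one motif's abundance in two sequences.

    Conditional on the total count n1 + n2, under the null hypothesis of equal
    per-base occurrence rates each occurrence falls in sequence 1 with
    probability len1 / (len1 + len2). Simplified: ignores composition and
    overlapping occurrences.
    """
    p_null = len1 / (len1 + len2)
    return binomtest(n1, n1 + n2, p_null, alternative="two-sided")

# Hypothetical octamer counts in a 4.0 Mb backbone versus 0.6 Mb of loops.
result = compare_motif_counts(n1=620, len1=4_000_000, n2=45, len2=600_000)
print("p-value:", round(result.pvalue, 4))
```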
Schuler, Matthew S; Chase, Jonathan M; Knight, Tiffany M
2017-06-01
Habitat heterogeneity is a primary mechanism influencing species richness. Despite the general expectation that increased heterogeneity should increase species richness, there is considerable variation in the observed relationship, including many studies that show negative effects of heterogeneity on species richness. One mechanism that can create such disparate results is the predicted trade-off between habitat area and heterogeneity, sometimes called the area-heterogeneity-trade-off (AHTO) hypothesis. The AHTO hypothesis predicts positive effects of heterogeneity on species richness in large habitats, but negative effects in small habitats. We examined the interplay between habitat size and habitat heterogeneity in experimental mesocosms that mimic freshwater ponds, and measured responses in a species-rich zooplankton community. We used the AHTO hypothesis and related mechanisms to make predictions about how heterogeneity would affect species richness and diversity in large compared to small habitats. We found that heterogeneity had a positive influence on species richness in large, but not small habitats, and that this likely resulted because habitat specialists were able to persist only when habitat size was sufficiently large, consistent with the predictions of the AHTO hypothesis. Our results emphasize the importance of considering context (e.g., habitat size in this case) when investigating the relative importance of ecological drivers of diversity, like heterogeneity. © 2017 by the Ecological Society of America.
Lightning Mapping Observations During DC3 in Northern Colorado
NASA Astrophysics Data System (ADS)
Krehbiel, P. R.; Rison, W.; Thomas, R. J.
2012-12-01
The Deep Convective Clouds and Chemistry Experiment (DC3) was conducted in three regions covered by Lightning Mapping Arrays (LMAs): Oklahoma and west Texas, northern Alabama, and northern Colorado. In this and a companion presentation, we discuss results obtained from the newly-deployed North Colorado LMA. The CO LMA revealed a surprising variety of lightning-inferred electrical structures, ranging from classic tripolar, normal polarity storms to several variations of anomalously electrified systems. Storms were often characterized by a pronounced lack or deficit of cloud-to-ground discharges (negative or positive), both in relative and absolute terms compared to the large amount of intracloud activity revealed by the LMA. Anomalous electrification was observed in small, localized storms as well as in large, deeply convective and severe storms. Another surprising observation was the frequent occurrence of embedded convection in the downwind anvil/outflow region of large storm systems. Observations of discharges in low flash rate situations over or near the network are sufficiently detailed to enable branching algorithms to estimate total channel lengths for modeling NOx production. However, this will not be possible in large or distant storm systems where the lightning was essentially continuous and structurally complex, or spatially noisy. Rather, a simple empirical metric for characterizing the lightning activity can be developed based on the number of located VHF radiation sources, weighted for example by the peak source power, source altitude, and temporal duration.
Stabilizing effect of elasticity on the inertial instability of submerged viscoelastic liquid jets
NASA Astrophysics Data System (ADS)
Keshavarz, Bavand; McKinley, Gareth
2017-11-01
The stability of submerged Newtonian and viscoelastic liquid jets is studied experimentally using flow visualization. Precise control of the amplitude and frequency of the imposed linear perturbations is achieved through a piezoelectric actuator attached to the nozzle. By illuminating the jet with a strobe light driven at a frequency slightly less than the frequency of the perturbation we slow down the apparent motion by large factors (~100,000) and capture the phenomena with high temporal and spatial resolution. Newtonian liquid jets become unstable at moderate Reynolds numbers (Rej ~ 150) and sinuous or varicose patterns emerge and grow in amplitude. As the jet moves downstream, the varicose waves gradually pile up on the sinuous ones due to the difference in their corresponding wave speeds, leading to a unique chevron-like morphology. Experiments with model viscoelastic polymer solutions show that this inertial instability is fully stabilized at sufficiently large levels of elasticity. We compare our experimental results with the theoretical predictions of an elastic Rayleigh equation for an axisymmetric jet and show that the presence of streamline tension is indeed the stabilizing effect for inertioelastic jets.
Does size matter? Statistical limits of paleomagnetic field reconstruction from small rock specimens
NASA Astrophysics Data System (ADS)
Berndt, Thomas; Muxworthy, Adrian R.; Fabian, Karl
2016-01-01
As samples of ever decreasing sizes are being studied paleomagnetically, care has to be taken that the underlying assumptions of statistical thermodynamics (Maxwell-Boltzmann statistics) are being met. Here we determine how many grains and how large a magnetic moment a sample needs to have to be able to accurately record an ambient field. It is found that for samples with a thermoremanent magnetic moment larger than 10^-11 Am^2 the assumption of a sufficiently large number of grains is usually satisfied. Standard 25 mm diameter paleomagnetic samples usually contain enough magnetic grains such that statistical errors are negligible, but "single silicate crystal" works on, for example, zircon, plagioclase, and olivine crystals are approaching the limits of what is physically possible, leading to statistical errors in both the angular deviation and paleointensity that are comparable to other sources of error. The reliability of nanopaleomagnetic imaging techniques capable of resolving individual grains (used, for example, to study the cloudy zone in meteorites), however, is questionable due to the limited area of the material covered.
A particle finite element method for machining simulations
NASA Astrophysics Data System (ADS)
Sabel, Matthias; Sator, Christian; Müller, Ralf
2014-07-01
The particle finite element method (PFEM) appears to be a convenient technique for machining simulations, since the geometry and topology of the problem can undergo severe changes. In this work, a short outline of the PFEM-algorithm is given, which is followed by a detailed description of the involved operations. The α-shape method, which is used to track the topology, is explained and tested by a simple example. Also the kinematics and a suitable finite element formulation are introduced. To validate the method, simple settings without topological changes are considered and compared to the standard finite element method for large deformations. To examine the performance of the method when dealing with separating material, a tensile loading is applied to a notched plate. This investigation includes a numerical analysis of the different meshing parameters, and the numerical convergence is studied. With regard to the cutting simulation it is found that only a sufficiently large number of particles (and thus a rather fine finite element discretisation) leads to converged results of process parameters, such as the cutting force.
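For readers unfamiliar with the α-shape idea used for topology tracking, the following minimal 2-D sketch (an illustration, not the PFEM implementation) keeps only those Delaunay triangles whose circumradius is below a chosen radius, so large "empty" triangles are discarded and separating material shows up as disconnected triangle sets.

    import numpy as np
    from scipy.spatial import Delaunay

    def alpha_shape_triangles(points, alpha_radius):
        """Return the Delaunay triangles whose circumradius is below alpha_radius."""
        tri = Delaunay(points)
        keep = []
        for simplex in tri.simplices:
            a, b, c = points[simplex]
            la = np.linalg.norm(b - c)
            lb = np.linalg.norm(a - c)
            lc = np.linalg.norm(a - b)
            area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
            if area < 1e-12:
                continue  # skip degenerate triangles
            if la * lb * lc / (4.0 * area) < alpha_radius:  # circumradius test
                keep.append(simplex)
        return np.asarray(keep)

    pts = np.random.default_rng(1).random((200, 2))
    print(len(alpha_shape_triangles(pts, alpha_radius=0.1)), "triangles kept")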
Defining surfaces for skewed, highly variable data
Helsel, D.R.; Ryker, S.J.
2002-01-01
Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.
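As a minimal illustration of question (i) above (synthetic data, not the nitrate survey): a least-squares plane fitted to lognormally distributed values estimates a mean surface and is pulled upward by high outliers, while fitting in log space and back-transforming approximates a median surface.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 675
    x, y = rng.uniform(0, 10, n), rng.uniform(0, 10, n)
    z = np.exp(0.15 * x + 0.05 * y + rng.normal(0.0, 1.0, n))  # skewed, lognormal-like "concentrations"

    A = np.column_stack([np.ones(n), x, y])
    coef_mean, *_ = np.linalg.lstsq(A, z, rcond=None)         # mean surface, outlier-sensitive
    coef_log, *_ = np.linalg.lstsq(A, np.log(z), rcond=None)  # fit in log space

    pt = np.array([1.0, 5.0, 5.0])                            # evaluate both at (x, y) = (5, 5)
    print("mean-type surface  :", round(float(pt @ coef_mean), 2))
    print("median-type surface:", round(float(np.exp(pt @ coef_log)), 2))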
Adams, Matthew T.; Cleveland, Robin O.; Roy, Ronald A.
2017-01-01
Abstract. Real-time acousto-optic (AO) sensing has been shown to noninvasively detect changes in ex vivo tissue optical properties during high-intensity focused ultrasound (HIFU) exposures. The technique is particularly appropriate for monitoring noncavitating lesions that offer minimal acoustic contrast. A numerical model is presented for an AO-guided HIFU system with an illumination wavelength of 1064 nm and an acoustic frequency of 1.1 MHz. To confirm the model’s accuracy, it is compared to previously published experimental data gathered during AO-guided HIFU in chicken breast. The model is used to determine an optimal design for an AO-guided HIFU system, to assess its robustness, and to predict its efficacy for the ablation of large volumes. It was found that a through transmission geometry results in the best performance, and an optical wavelength around 800 nm was optimal as it provided sufficient contrast with low absorption. Finally, it was shown that the strategy employed while treating large volumes with AO guidance has a major impact on the resulting necrotic volume and symmetry. PMID:28114454
Obesity, hypertension and home sphygmomanometer cuffs.
Akpolat, Tekin
2010-08-01
Since the increasing prevalence of obesity leads to a larger mean arm circumference in the hypertensive population and appropriate cuff size is essential for accurate measurement of blood pressure, overweight and obese patients often require automated home sphygmomanometers with large- or extra large-sized cuffs. The aims of this study were to evaluate the information about cuff size on automated upper arm home sphygmomanometer packing boxes and compare the findings with wrist device boxes. One hundred twelve different device boxes (49 automated upper arm, 5 semi-automatic, and 58 wrist) produced by 40 manufacturers were investigated. Three different types of information were observed (written, graphical, or a combination of both). There was no information about cuff size on 49 (44%) of the device boxes. Most of the information expressed on the boxes was not attractive or informative for the patients. This study showed that the information regarding cuff size on most of the device boxes was unclear and that patients are not sufficiently warned about appropriate cuff size. Physicians and health care providers should inform and train their patients about appropriate cuff size.
A porous media theory for characterization of membrane blood oxygenation devices
NASA Astrophysics Data System (ADS)
Sano, Yoshihiko; Adachi, Jun; Nakayama, Akira
2013-07-01
A porous media theory has been proposed to characterize oxygen transport processes associated with membrane blood oxygenation devices. For the first time, a rigorous mathematical procedure based on a volume averaging procedure has been presented to derive a complete set of the governing equations for the blood flow field and oxygen concentration field. As a first step towards a complete three-dimensional numerical analysis, a one-dimensional steady case is considered to model typical membrane blood oxygenator scenarios, and to validate the derived equations. The relative magnitudes of oxygen transport terms are made clear, introducing a dimensionless parameter which measures the distance the oxygen gas travels to dissolve in the blood as compared with the blood dispersion length. This dimensionless number is found to be so large that the oxygen diffusion term can be neglected in most cases. A simple linear relationship between the blood flow rate and total oxygen transfer rate is found for oxygenators with sufficiently large membrane surface areas. Comparison of the one-dimensional analytic results and available experimental data reveals the soundness of the present analysis.
Present status and future prospects of heavy ion beams as drivers for ICF
NASA Astrophysics Data System (ADS)
Godlove, Terry F.
1986-01-01
A candidate driver for a practical inertial fusion reactor system must, among other characteristics, be cost effective and reliable for the parameters required by the fusion target and the remainder of the system. Although the history of large particle accelerators provides abundant evidence of their reliability at high repetition rates, their capital cost for the fusion application has been open to question. Attempts to design cost effective systems began with accelerators based on currently available technology such as RF linacs and storage rings. The West German HIBALL and the Japanese HIBLIC are examples of this initial effort. These designs are sufficiently credible that a strong argument can be made for the heavy ion method in general, but to reduce the cost per unit power it was found necessary to design for large scale, hence high capital cost. Emphasis in the U.S. shifted to newer technologies which offer hope of significant improvement in cost. In this paper the status of various heavy ion driver designs is compared with currently perceived requirements in order to illustrate their potential and assess their development needs.
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitation of simple empirical cycle life models based on only equivalent circuits, we attempt to couple a conventional empirical capacity loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate a sudden capacity drop at the end of cycling, which is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile having a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle life color maps under different c-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function will be a promising platform to predict long-term cycle lives of large-format LIB cells under various operating conditions.
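The functional forms below are illustrative assumptions (a power-law gradual fade combined with a sigmoidal electrolyte-depletion term), not the authors' equations, but they show how a depletion function can reproduce the sudden capacity drop at the end of cycling:

    import numpy as np

    def capacity_ah(n_cycle, q0=20.0, a=2.0e-4, z=0.75, n_depl=1500.0, k=0.01):
        """Illustrative semi-empirical capacity (Ah) vs. cycle number.

        q0     : initial capacity of a 20 Ah-class cell
        a, z   : assumed parameters of the gradual empirical fade
        n_depl : assumed cycle number at which electrolyte depletion sets in
        k      : assumed steepness of the depletion-driven drop
        """
        n = np.asarray(n_cycle, dtype=float)
        gradual = 1.0 - a * n ** z                           # slow empirical loss
        depletion = 1.0 / (1.0 + np.exp(k * (n - n_depl)))   # sudden end-of-life drop
        return q0 * gradual * depletion

    cycles = np.arange(0, 2001, 250)
    print(np.round(capacity_ah(cycles), 2))  # gradual fade, then a sharp end-of-life drop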
Wang, Shu-Hua; Stevenson, Kurt B; Hines, Lisa; Mediavilla, José R; Khan, Yosef; Soni, Ruchi; Dutch, Wendy; Brandt, Eric; Bannerman, Tammy; Kreiswirth, Barry N; Pancholi, Preeti
2015-01-01
Repetitive element polymerase chain reaction (rep-PCR) typing has been used for methicillin-resistant Staphylococcus aureus (MRSA) strain characterization. The goal of this study was to determine if a rapid commercial rep-PCR system, DiversiLab™ (DL; bioMérieux, Durham, NC, USA), could be used for MRSA surveillance at a large medical center and community hospitals. A total of 1286 MRSA isolates genotyped by the DL system were distributed into 84 distinct rep-PCR patterns: 737/1286 (57%) were clustered into 6 major rep-PCR patterns. A subset of 220 isolates was further typed by pulsed-field gel electrophoresis (PFGE), spa typing, and SCCmec typing. The 220 isolates were distributed into 80 rep-PCR patterns, 94 PFGE pulsotypes, 27 spa, and 3 SCCmec types. The DL rep-PCR system is sufficient for surveillance, but the DL system alone cannot be used to compare data to other institutions until a standardized nomenclature is established and the DL MRSA reference library is expanded. Copyright © 2015 Elsevier Inc. All rights reserved.
Marangoni-flow-induced partial coalescence of a droplet on a liquid/air interface
NASA Astrophysics Data System (ADS)
Sun, Kai; Zhang, Peng; Che, Zhizhao; Wang, Tianyou
2018-02-01
The coalescence of a droplet and a liquid/air interface of lower surface tension was numerically studied by using the lattice Boltzmann phase-field method. The experimental phenomenon of droplet ejection observed by Blanchette et al. [Phys. Fluids 21, 072107 (2009), 10.1063/1.3177339] at sufficiently large surface tension differences was successfully reproduced for the first time. Furthermore, the emergence, disappearance, and re-emergence of "partial coalescence" with increasing surface tension difference were observed and explained. The re-emergence of partial coalescence under large surface tension differences is caused by the remarkable lifting motion of the Marangoni flow, which significantly retards the vertical collapse. Two different modes of partial coalescence were identified by the simulation, namely peak injection, which occurs at lower Ohnesorge numbers, and bottom pinch-off at higher Ohnesorge numbers. By comparing the characteristic timescales of the upward Marangoni flow with that of the downward flow driven by capillary pressure, a criterion for the transition from partial to total coalescence was derived based on scaling analysis and numerically validated.
Phonon Calculations Using the Real-Space Multigrid Method (RMG)
NASA Astrophysics Data System (ADS)
Zhang, Jiayong; Lu, Wenchang; Briggs, Emil; Cheng, Yongqiang; Ramirez-Cuesta, A. J.; Bernholc, Jerry
RMG, a DFT-based open-source package using the real-space multigrid method, has proven to work effectively on large-scale systems with thousands of atoms. Our recent work has shown its practicability for high accuracy phonon calculations employing the frozen phonon method. In this method, a primary unit cell with a small lattice constant is enlarged to a supercell that is sufficiently large to obtain the force constants matrix by finite displacements of atoms in the supercell. An open-source package PhonoPy is used to determine the necessary displacements by taking symmetry into account. A python script coupling RMG and PhonoPy enables us to perform high-throughput calculations of phonon properties. We have applied this method to many systems, such as silicon, silica glass, ZIF-8, etc. Results from RMG are compared to the experimental spectra measured using the VISION inelastic neutron scattering spectrometer at the Spallation Neutron Source at ORNL, as well as results from other DFT codes. The computing resources were made available through the VirtuES (Virtual Experiments in Spectroscopy) project, funded by the Laboratory Directed Research and Development program (LDRD project No. 7739).
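The RMG-PhonoPy coupling script itself is not reproduced here; the sketch below shows what the PhonoPy side of such a frozen-phonon workflow typically looks like (method names follow the phonopy Python API; compute_forces_with_rmg is a placeholder for a routine that would write RMG inputs, run the code, and parse the forces).

    import numpy as np
    from phonopy import Phonopy
    from phonopy.structure.atoms import PhonopyAtoms

    # primitive fcc silicon as a stand-in structure
    a = 5.43
    unitcell = PhonopyAtoms(symbols=["Si", "Si"],
                            cell=[[0, a / 2, a / 2], [a / 2, 0, a / 2], [a / 2, a / 2, 0]],
                            scaled_positions=[[0, 0, 0], [0.25, 0.25, 0.25]])

    phonon = Phonopy(unitcell, supercell_matrix=np.diag([2, 2, 2]))
    phonon.generate_displacements(distance=0.01)        # symmetry-reduced displacements
    supercells = phonon.supercells_with_displacements
    print(len(supercells), "displaced supercells to compute")

    def compute_forces_with_rmg(supercell):
        # Placeholder: a real workflow writes the RMG input for this displaced
        # supercell, runs RMG, and parses the atomic forces from its output.
        return np.zeros((len(supercell.symbols), 3))

    phonon.forces = [compute_forces_with_rmg(sc) for sc in supercells]
    phonon.produce_force_constants()
    phonon.run_mesh([20, 20, 20])
    phonon.run_total_dos()   # meaningless here until real RMG forces replace the zeros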
Fast Segmentation of Stained Nuclei in Terabyte-Scale, Time Resolved 3D Microscopy Image Stacks
Stegmaier, Johannes; Otte, Jens C.; Kobitski, Andrei; Bartschat, Andreas; Garcia, Ariel; Nienhaus, G. Ulrich; Strähle, Uwe; Mikut, Ralf
2014-01-01
Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu’s method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm’s superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results. PMID:24587204
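The gradient/normal-direction transform is beyond a short snippet, but the global-thresholding step the method reduces to is standard; a minimal scikit-image version (assuming transformed is the already pre-processed 3-D stack) might look like this:

    import numpy as np
    from skimage.filters import threshold_otsu
    from skimage.measure import label, regionprops

    def segment_transformed_stack(transformed, min_voxels=50):
        """Global Otsu threshold followed by connected-component labelling."""
        mask = transformed > threshold_otsu(transformed)
        labels = label(mask)  # full 26-connectivity by default in 3-D
        keep = [r.label for r in regionprops(labels) if r.area >= min_voxels]
        return labels * np.isin(labels, keep)

    # toy stack: two bright blobs in Gaussian noise
    rng = np.random.default_rng(1)
    stack = rng.normal(0.0, 0.1, (32, 64, 64))
    stack[10:14, 10:18, 10:18] += 1.0
    stack[20:24, 40:48, 40:48] += 1.0
    seg = segment_transformed_stack(stack)
    print(len(np.unique(seg)) - 1, "objects kept")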
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Axelsson, M.; Baldini, L.; Barbielini, G; Bastieri, D.; Bechtol, K.; Bellazzini, R.;
2012-01-01
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between approx. 6 and approx. 13 GeV with an estimated uncertainty of approx. 2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
Conservation laws in baroclinic inertial-symmetric instabilities
NASA Astrophysics Data System (ADS)
Grisouard, Nicolas; Fox, Morgan B.; Nijjer, Japinder
2017-04-01
Submesoscale oceanic density fronts are structures in geostrophic and hydrostatic balance, but are more prone to instabilities than mesoscale flows. As a consequence, they are believed to play a large role in air-sea exchanges, near-surface turbulence and dissipation of kinetic energy of geostrophically and hydrostatically balanced flows. We will present two-dimensional (x, z) Boussinesq numerical experiments of submesoscale baroclinic fronts on the f-plane. Instabilities of the mixed inertial and symmetric types (the actual name varies across the literature) develop, with the absence of along-front variations prohibiting geostrophic baroclinic instabilities. Two new salient facts emerge. First, contrary to pure inertial and/or pure symmetric instability, the potential energy budget is affected, the mixed instability extracting significant available potential energy from the front and dissipating it locally. Second, in the submesoscale regime, the growth rate of this mixed instability is sufficiently large that significant radiation of near-inertial internal waves occurs. Although energetically small compared to e.g. local dissipation within the front, this process might be a significant source of near-inertial energy in the ocean.
In-Flight Measurement of the Absolute Energy Scale of the Fermi Large Area Telescope
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ackermann, M.; Ajello, M.
The Large Area Telescope (LAT) on-board the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to survey the gamma-ray sky from 20 MeV to several hundreds of GeV. In this energy band there are no astronomical sources with sufficiently well known and sharp spectral features to allow an absolute calibration of the LAT energy scale. However, the geomagnetic cutoff in the cosmic ray electron-plus-positron (CRE) spectrum in low Earth orbit does provide such a spectral feature. The energy and spectral shape of this cutoff can be calculated with the aid of a numerical code tracing charged particles in the Earth's magnetic field. By comparing the cutoff value with that measured by the LAT in different geomagnetic positions, we have obtained several calibration points between approx. 6 and approx. 13 GeV with an estimated uncertainty of approx. 2%. An energy calibration with such high accuracy reduces the systematic uncertainty in LAT measurements of, for example, the spectral cutoff in the emission from gamma ray pulsars.
Precision cosmology with time delay lenses: High resolution imaging requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiao-Lei; Treu, Tommaso; Agnello, Adriano
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(-γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems that will be discovered by current and future surveys, targeted follow-up will be required. Furthermore, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.
Precision cosmology with time delay lenses: high resolution imaging requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Xiao-Lei; Liao, Kai; Treu, Tommaso
Lens time delays are a powerful probe of cosmology, provided that the gravitational potential of the main deflector can be modeled with sufficient precision. Recent work has shown that this can be achieved by detailed modeling of the host galaxies of lensed quasars, which appear as "Einstein Rings" in high resolution images. The distortion of these arcs and counter-arcs, as measured over a large number of pixels, provides tight constraints on the difference in the gravitational potential between the quasar image positions, and thus on cosmology in combination with the measured time delay. We carry out a systematic exploration of the high resolution imaging required to exploit the thousands of lensed quasars that will be discovered by current and upcoming surveys within the next decade. Specifically, we simulate realistic lens systems as imaged by the Hubble Space Telescope (HST), James Webb Space Telescope (JWST), and ground based adaptive optics images taken with Keck or the Thirty Meter Telescope (TMT). We compare the performance of these pointed observations with that of images taken by the Euclid (VIS), Wide-Field Infrared Survey Telescope (WFIRST) and Large Synoptic Survey Telescope (LSST) surveys. We use as our metric the precision with which the slope γ' of the total mass density profile ρ_tot ∝ r^(-γ') for the main deflector can be measured. Ideally, we require that the statistical error on γ' be less than 0.02, such that it is subdominant to other sources of random and systematic uncertainties. We find that survey data will likely have sufficient depth and resolution to meet the target only for the brighter gravitational lens systems, comparable to those discovered by the SDSS survey. For fainter systems, that will be discovered by current and future surveys, targeted follow-up will be required. However, the exposure time required with upcoming facilities such as JWST, the Keck Next Generation Adaptive Optics System, and TMT, will only be of order a few minutes per system, thus making the follow-up of hundreds of systems a practical and efficient cosmological probe.
NASA Astrophysics Data System (ADS)
James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2017-06-01
One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process' past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
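A small discrete illustration of the invariance being described (the joint distribution is made up for the example): when two values of X share the same conditional distribution p(y|x), merging them is a sufficient statistic for Y, and the mutual information is unchanged.

    import numpy as np

    def mutual_information(pxy):
        """Mutual information (bits) of a discrete joint distribution."""
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

    # rows x = 0 and x = 1 have identical conditionals p(y|x), so the statistic
    # that merges them retains everything X says about Y
    pxy = np.array([[0.10, 0.10],
                    [0.20, 0.20],
                    [0.35, 0.05]])
    trimmed = np.vstack([pxy[0] + pxy[1], pxy[2]])

    print(mutual_information(pxy), mutual_information(trimmed))  # identical values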
Quantum communication complexity advantage implies violation of a Bell inequality
Buhrman, Harry; Czekaj, Łukasz; Grudka, Andrzej; Horodecki, Michał; Horodecki, Paweł; Markiewicz, Marcin; Speelman, Florian; Strelchuk, Sergii
2016-01-01
We obtain a general connection between a large quantum advantage in communication complexity and Bell nonlocality. We show that given any protocol offering a sufficiently large quantum advantage in communication complexity, there exists a way of obtaining measurement statistics that violate some Bell inequality. Our main tool is port-based teleportation. If the gap between quantum and classical communication complexity can grow arbitrarily large, the ratio of the quantum value to the classical value of the Bell quantity becomes unbounded with the increase in the number of inputs and outputs. PMID:26957600
Gas-Centered Swirl Coaxial Liquid Injector Evaluations
NASA Technical Reports Server (NTRS)
Cohn, A. K.; Strakey, P. A.; Talley, D. G.
2005-01-01
Development of liquid rocket engines is expensive: extensive testing at large scale is usually required, verifying engine lifetime demands a large number of tests, and development resources are limited. Sub-scale cold-flow and hot-fire testing is extremely cost effective; it could provide a necessary (but not sufficient) condition for long engine lifetime and reduces the overall cost and risk of large-scale testing. Goal: determine what knowledge can be gained from sub-scale cold-flow and hot-fire evaluations of LRE injectors, and determine relationships between cold-flow and hot-fire data.
Flow-field in a vortex with breakdown above sharp edged delta wings
NASA Technical Reports Server (NTRS)
Hayashi, Y.; Nakaya, T.
1978-01-01
The behavior of vortex-flow, accompanied by breakdown, formed above sharp-edged delta wings, was studied experimentally as well as theoretically. Emphasis is placed particularly on the criterion for the breakdown at sufficiently large Reynolds numbers.
Wide range radioactive gas concentration detector
Anderson, David F.
1984-01-01
A wide range radioactive gas concentration detector and monitor which is capable of measuring radioactive gas concentrations over a range of eight orders of magnitude. The device of the present invention is designed to have an ionization chamber which is sufficiently small to give a fast response time for measuring radioactive gases but sufficiently large to provide accurate readings at low concentration levels. Closely spaced parallel plate grids provide a uniform electric field in the active region to improve the accuracy of measurements and reduce ion migration time so as to virtually eliminate errors due to ion recombination. The parallel plate grids are fabricated with a minimal surface area to reduce the effects of contamination resulting from absorption of contaminating materials on the surface of the grids. Additionally, the ionization chamber wall is spaced a sufficient distance from the active region of the ionization chamber to minimize contamination effects.
Universal Features of the Fluid to Solid Transition for Attractive Colloidal Particles
NASA Technical Reports Server (NTRS)
Cipelletti, L.; Prasad, V.; Dinsmore, A.; Segre, P. N.; Weitz, D. A.; Trappe, V.
2002-01-01
Attractive colloidal particles can exhibit a fluid to solid phase transition if the magnitude of the attractive interaction is sufficiently large, if the volume fraction is sufficiently high, and if the applied stress is sufficiently small. The nature of this fluid to solid transition is similar for many different colloid systems, and for many different forms of interaction. The jamming phase transition captures the common features of these fluid to solid transitions, by unifying the behavior as a function of the particle volume fraction, the energy of interparticle attractions, and the applied stress. This paper describes the applicability of the jamming state diagram, and highlights those regions where the fluid to solid transition is still poorly understood. It also presents new data for gelation of colloidal particles with an attractive depletion interaction, providing more insight into the origin of the fluid to solid transition.
Genetic Drift, Not Life History or RNAi, Determine Long-Term Evolution of Transposable Elements
Szitenberg, Amir; Cha, Soyeon; Opperman, Charles H.; Bird, David M.; Blaxter, Mark L.; Lunt, David H.
2016-01-01
Abstract Transposable elements (TEs) are a major source of genome variation across the branches of life. Although TEs may play an adaptive role in their host's genome, they are more often deleterious, and purifying selection is an important factor controlling their genomic loads. In contrast, life history, mating system, GC content, and RNAi pathways have been suggested to account for the disparity of TE loads in different species. Previous studies of fungal, plant, and animal genomes have reported conflicting results regarding the direction in which these genomic features drive TE evolution. Many of these studies have had limited power, however, because they studied taxonomically narrow systems, comparing only a limited number of phylogenetically independent contrasts, and did not address long-term effects on TE evolution. Here, we test the long-term determinants of TE evolution by comparing 42 nematode genomes spanning over 500 million years of diversification. This analysis includes numerous transitions between life history states and RNAi pathways, and evaluates whether these forces are sufficiently persistent to affect the long-term evolution of TE loads in eukaryotic genomes. Although we demonstrate statistical power to detect selection, we find no evidence that variation in these factors influences genomic TE loads across extended periods of time. In contrast, the effects of genetic drift appear to persist and control TE variation among species. We suggest that variation in the tested factors is largely inconsequential to the large differences in TE content observed between genomes, and only by these large-scale comparisons can we distinguish long-term and persistent effects from transient or random changes. PMID:27566762
Green, Carmen R; Ndao-Brumblay, S Khady; West, Brady; Washington, Tamika
2005-10-01
Little is known about physical barriers to adequate pain treatment for minorities. This investigation explored sociodemographic determinants of pain medication availability in Michigan pharmacies. A cross-sectional survey-based study with census data and data provided by Michigan community retail pharmacists was designed. A sufficient opioid analgesic supply was defined as stocking at least one long-acting, short-acting, and combination opioid analgesic. Pharmacies located in minority (
Wagner, Brian M.; Schuster, Stephanie A.; Boyes, Barry E.; Shields, Taylor J.; Miles, William L.; Haynes, Mark J.; Moran, Robert E.; Kirkland, Joseph J.; Schure, Mark R.
2017-01-01
To facilitate mass transport and column efficiency, solutes must have free access to particle pores to facilitate interactions with the stationary phase. To ensure this feature, particles should be used for HPLC separations which have pores sufficiently large to accommodate the solute without restricted diffusion. This paper describes the design and properties of superficially porous (also called Fused-Core®, core shell or porous shell) particles with very large (1000 Å) pores specifically developed for separating very large biomolecules and polymers. Separations of DNA fragments, monoclonal antibodies, large proteins and large polystyrene standards are used to illustrate the utility of these particles for efficient, high-resolution applications. PMID:28213987
Wagner, Brian M; Schuster, Stephanie A; Boyes, Barry E; Shields, Taylor J; Miles, William L; Haynes, Mark J; Moran, Robert E; Kirkland, Joseph J; Schure, Mark R
2017-03-17
To facilitate mass transport and column efficiency, solutes must have free access to particle pores to facilitate interactions with the stationary phase. To ensure this feature, particles should be used for HPLC separations which have pores sufficiently large to accommodate the solute without restricted diffusion. This paper describes the design and properties of superficially porous (also called Fused-Core®, core shell or porous shell) particles with very large (1000 Å) pores specifically developed for separating very large biomolecules and polymers. Separations of DNA fragments, monoclonal antibodies, large proteins and large polystyrene standards are used to illustrate the utility of these particles for efficient, high-resolution applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Minimal microwave anisotropy from perturbations induced at late times
NASA Technical Reports Server (NTRS)
Jaffe, Andrew H.; Stebbins, Albert; Frieman, Joshua A.
1994-01-01
Aside from primordial gravitational instability of the cosmological fluid, various mechanisms have been proposed to generate large-scale structure at relatively late times, including, e.g., 'late-time' cosmological phase transitions. In these scenarios, it is envisioned that the universe is nearly homogeneous at the time of last scattering and that perturbations grow rapidly sometime after the primordial plasma recombines. On this basis, it was suggested that large inhomogeneities could be generated while leaving relatively little imprint on the cosmic microwave background (MBR) anisotropy. In this paper, we calculate the minimal anisotropies possible in any 'late-time' scenario for structure formation, given the level of inhomogeneity observed at present. Since the growth of the inhomogeneity involves time-varying gravitational fields, these scenarios inevitably generate significant MBR anisotropy via the Sachs-Wolfe effect. Moreover, we show that the large-angle MBR anisotropy produced by the rapid post-recombination growth of inhomogeneity is generally greater than that produced by the same inhomogeneity growth via gravitational instability. In 'realistic' scenarios one can decrease the anisotropy compared to models with primordial adiabatic fluctuations, but only on very small angular scales. The value of any particular measure of the anisotropy can be made small in late-time models, but only by making the time-dependence of the gravitational field sufficiently 'pathological'.
High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices.
Harrar, Solomon W; Kong, Xiaoli
2015-03-01
In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant the number of repeated measures and the total sample size grow together but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results.
High-Dimensional Multivariate Repeated Measures Analysis with Unequal Covariance Matrices
Harrar, Solomon W.; Kong, Xiaoli
2015-01-01
In this paper, test statistics for repeated measures design are introduced when the dimension is large. By large dimension is meant the number of repeated measures and the total sample size grow together but either one could be larger than the other. Asymptotic distributions of the statistics are derived for the equal as well as unequal covariance cases in the balanced as well as unbalanced cases. The asymptotic framework considered requires proportional growth of the sample sizes and the dimension of the repeated measures in the unequal covariance case. In the equal covariance case, one can grow at a much faster rate than the other. The derivations of the asymptotic distributions mimic that of the Central Limit Theorem with some important peculiarities addressed with sufficient rigor. Consistent and unbiased estimators of the asymptotic variances, which make efficient use of all the observations, are also derived. A simulation study provides favorable evidence for the accuracy of the asymptotic approximation under the null hypothesis. Power simulations have shown that the new methods have power comparable with a popular method known to work well in low-dimensional situations, but the new methods have shown an enormous advantage when the dimension is large. Data from an electroencephalograph (EEG) experiment are analyzed to illustrate the application of the results. PMID:26778861
NASA Astrophysics Data System (ADS)
Wu, Min-Lin; Wu, Yung-Hsien; Lin, Chia-Chun; Chen, Lun-Lun
2012-10-01
The structure of SiGe nanocrystals embedded in Al2O3 formed by sequential deposition of Al2O3/Si/Ge/Al2O3 and a subsequent annealing was confirmed by transmission electron microscopy and energy dispersive spectroscopy (EDS), and its application for write-once-read-many-times (WORM) memory devices was explored in this study. By applying a -10 V pulse for 1 s, a large amount of holes injected from the Si substrate are stored in the nanocrystals and consequently, the current at +1.5 V increases by a factor of 10^4 as compared to that of the initial state. Even with a smaller -5 V pulse for 1 μs, a sufficiently large current ratio of 36 can still be obtained, verifying the low power operation. Since holes are stored in nanocrystals which are isolated from the Si substrate by Al2O3 with good integrity and correspond to a large valence band offset with respect to Al2O3, desirable read endurance up to 10^5 cycles and excellent retention over 100 yr are achieved. Combining these promising characteristics, WORM memory devices are appropriate for high-performance archival storage applications.
NASA Astrophysics Data System (ADS)
Wosnik, Martin; Bachant, Peter
2016-11-01
Cross-flow turbines show potential in marine hydrokinetic (MHK) applications. A research focus is on accurately predicting device performance and wake evolution to improve turbine array layouts for maximizing overall power output, i.e., minimizing wake interference, or taking advantage of constructive wake interaction. Experiments were carried out with large laboratory-scale cross-flow turbines, with diameters of O(1 m), using a turbine test bed in a large cross-section tow tank, designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. Several turbines of varying solidity were employed, including the UNH Reference Vertical Axis Turbine (RVAT) and a 1:6 scale model of the DOE-Sandia Reference Model 2 (RM2) turbine. To improve parameterization in array simulations, an actuator line model (ALM) was developed to provide a computationally feasible method for simulating full turbine arrays inside Navier-Stokes models. Results are presented for the simulation of performance and wake dynamics of cross-flow turbines and compared with experiments and body-fitted mesh, blade-resolving CFD. Supported by NSF-CBET Grant 1150797, Sandia National Laboratories.
Custom implant design for large cranial defects.
Marreiros, Filipe M M; Heuzé, Y; Verius, M; Unterhofer, C; Freysinger, W; Recheis, W
2016-12-01
The aim of this work was to introduce a computer-aided design (CAD) tool that enables the design of large skull defect (>100 [Formula: see text]) implants. Functional and aesthetically correct custom implants are extremely important for patients with large cranial defects. For these cases, preoperative fabrication of implants is recommended to avoid problems of donor site morbidity, sufficiency of donor material and quality. Finally, crafting the correct shape is a non-trivial task increasingly complicated by defect size. We present a CAD tool to design such implants for the neurocranium. A combination of geometric morphometrics and radial basis functions, namely thin-plate splines, allows semiautomatic implant generation. The method uses symmetry and the best fitting shape to estimate missing data directly within the radiologic volume data. In addition, this approach delivers correct implant fitting via a boundary fitting approach. This method generates a smooth implant surface, free of sharp edges, that follows the main contours of the boundary, enabling accurate implant placement in the defect site intraoperatively. The present approach is evaluated and compared to existing methods. On average, 89.29 % (range 72.64-100 %) of missing landmarks were estimated with an error of less than or equal to 1 mm. In conclusion, the results show that our CAD tool can generate patient-specific implants with high accuracy.
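A heavily simplified sketch of the thin-plate-spline ingredient is given below (placeholder data and names; this is not the full CAD pipeline, which also uses geometric morphometrics, symmetry, and boundary fitting):

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def estimate_defect_surface(known_xy, known_z, defect_xy, smoothing=0.0):
        """Fit a thin-plate spline to (x, y) -> z samples around the defect
        (e.g., intact and mirrored landmarks) and evaluate it over the defect."""
        tps = RBFInterpolator(known_xy, known_z,
                              kernel="thin_plate_spline", smoothing=smoothing)
        return tps(defect_xy)

    # toy data: a dome-like vault with a circular "defect" of radius 30 mm
    rng = np.random.default_rng(2)
    xy = rng.uniform(-50, 50, (400, 2))
    z = np.sqrt(np.clip(90.0 ** 2 - (xy ** 2).sum(axis=1), 0.0, None))
    outside_defect = (xy ** 2).sum(axis=1) > 30.0 ** 2
    defect_grid = rng.uniform(-25, 25, (5, 2))
    print(estimate_defect_surface(xy[outside_defect], z[outside_defect], defect_grid).round(1))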
First-principles study of spin-transfer torque in Co2MnSi/Al/Co2MnSi spin-valve
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Ling, E-mail: lingtang@zjut.edu.cn; Yang, Zejin, E-mail: zejinyang@zjut.edu.cn
The spin-transfer torque (STT) in the Co2MnSi (CMS)/Al/Co2MnSi spin-valve system with and without interfacial disorder is studied by a first-principles noncollinear wave-function-matching method. It is shown that in the case of a clean interface the angular dependence of STT for CoCo/Al (the asymmetry parameter Λ≈4.5) is more skewed than that for MnSi/Al (Λ≈2.9), which suggests the clean CoCo/Al architecture is much more efficient for radio frequency oscillation applications. We also find that even with interfacial disorder the spin-valve of half-metallic CMS still has a relatively large parameter Λ compared to that of a conventional ferromagnet. In addition, for a clean interface the in-plane torkance of MnSi/Al is about twice as large as that of CoCo/Al. However, as long as the degree of interfacial disorder is sufficiently large, the CoCo/Al and MnSi/Al will show approximately the same magnitude of in-plane torkance. Furthermore, our results demonstrate that the CMS/Al/CMS system has very high efficiency of STT to switch the magnetic layer of the spin-valve.
Investigation of Recombination Processes In A Magnetized Plasma
NASA Technical Reports Server (NTRS)
Chavers, Greg; Chang-Diaz, Franklin; Rodgers, Stephen L. (Technical Monitor)
2002-01-01
Interplanetary travel requires propulsion systems that can provide high specific impulse (Isp), while also having sufficient thrust to rapidly accelerate large payloads. One such propulsion system is the Variable Specific Impulse Magneto-plasma Rocket (VASIMR), which creates, heats, and exhausts plasma to provide variable thrust and Isp, optimally meeting the mission requirements. A large fraction of the energy to create the plasma is frozen in the exhaust in the form of ionization energy. This loss mechanism is common to all electromagnetic plasma thrusters and has an impact on their efficiency. When the device operates at high Isp, where the exhaust kinetic energy is high compared to the ionization energy, the frozen flow component is of little consequence; however, at low Isp, the effect of the frozen flow may be important. If some of this energy could be recovered through recombination processes, and re-injected as neutral kinetic energy, the efficiency of VASIMR, in its low Isp/high thrust mode may be improved. In this operating regime, the ionization energy is a large portion of the total plasma energy. An experiment is being conducted to investigate the possibility of recovering some of the energy used to create the plasma. This presentation will cover the progress and status of the experiment involving surface recombination of the plasma.
Studies of Postdisaster Economic Recovery: Analysis, Synthesis, and Assessment
1987-06-01
of a large-scale nuclear disaster can be viewed in the aggregate as attempting to answer two broad questions: 1. Do resources survive in sufficient...With respect to economic institutional issues in the aftermath of a nuclear disaster, published research has been, almost without exception, speculative...possibilities. There are at least three major themes that permeate the literature on economic control in the event of a large-scale nuclear disaster. First
Twofold Transition in PT-symmetric Coupled Oscillators
2013-12-26
theoretical model exhibits two PT transitions depending on the size of the coupling parameter. For small coupling, the PT symmetry is broken and the system is not in equilibrium, but when the coupling becomes sufficiently large, the system undergoes a transition to an equilibrium phase in which the PT symmetry is unbroken. For very large coupling, the system undergoes a second transition and is no longer in equilibrium.
Large polar pretilt for the liquid crystal homologous series alkylcyanobiphenyl
NASA Astrophysics Data System (ADS)
Huang, Zhibin; Rosenblatt, Charles
2005-01-01
Sufficiently strong rubbing of the polyimide alignment layer SE-1211 (Nissan Chemical Industries, Ltd.) results in a large pretilt of the liquid crystal director from the homeotropic orientation. The threshold rubbing strength required to induce nonzero pretilt is found to be a monotonic function of the number of methylene units in the homologous liquid crystal series alkylcyanobiphenyl. The results are discussed in terms of the dual easy axis model for alignment.
Asymptotic form for the cross section for the Coulomb interacting rearrangement collisions
NASA Technical Reports Server (NTRS)
Omidvar, K.
1973-01-01
It is shown that in a rearrangement collision leading to the formation of the highly excited hydrogenlike states the cross section in all orders of the Born approximation behaves as 1/n^2, with n the principal quantum number, thus invalidating the Brinkman-Kramers approximation for large n. Similarly, in high energy inelastic electron-hydrogenlike atom collisions the exchange cross section for sufficiently large n dominates the direct excitation cross section.
Geography, Race/Ethnicity, and Physical Activity Among Men in the United States.
Sohn, Elizabeth Kelley; Porch, Tichelle; Hill, Sarah; Thorpe, Roland J
2017-07-01
Engaging in regular physical activity reduces one's risk of chronic disease, stroke, cardiovascular disease, and some forms of cancer. These preventive benefits associated with physical activity are of particular importance for men, who have shorter life expectancy and experience higher rates of chronic diseases as compared to women. Studies at the community and national levels have found that social and environmental factors are important determinants of men's physical activity, but little is known about how regional influences affect physical activity behaviors among men. The objective of this study is to examine the association between geographic region and physical activity among men in the United States, and to determine if there are racial/ethnic differences in physical activity within these geographic regions. Cross-sectional data from men who participated in the 2000 to 2010 National Health Interview Survey (N = 327,556) were used. The primary outcome in this study was whether or not men had engaged in sufficient physical activity to receive health benefits, defined as meeting the 2008 Physical Activity Guidelines for Americans. Race/ethnicity and geographic region were the primary independent variables. Within every region, Hispanic and Asian men had lower odds of engaging in sufficient physical activity compared to white men. Within the Northeast, South, and West, black men had lower odds of engaging in sufficient physical activity compared to white men. The key findings indicate that the odds of engaging in sufficient physical activity among men differ significantly between geographic regions and within regions by race/ethnicity.
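The kind of model behind such odds comparisons can be sketched as a logistic regression with a region-by-race/ethnicity interaction; the data frame below is synthetic, the column names are placeholders, and a real NHIS analysis would also incorporate survey weights and design variables.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "region": rng.choice(["Northeast", "Midwest", "South", "West"], n),
        "race_eth": rng.choice(["White", "Black", "Hispanic", "Asian"], n),
        # 1 = met the 2008 Physical Activity Guidelines (toy outcome)
        "sufficient_pa": rng.binomial(1, 0.45, n),
    })

    # odds of sufficient activity by race/ethnicity within region (interaction model)
    fit = smf.logit("sufficient_pa ~ C(race_eth, Treatment('White')) * "
                    "C(region, Treatment('Northeast'))", data=df).fit(disp=False)
    print(np.exp(fit.params).round(2))  # odds ratios relative to white men in the Northeast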
Large N phase transitions and the fate of small Schwarzschild-AdS black holes
NASA Astrophysics Data System (ADS)
Yaffe, Laurence G.
2018-01-01
Sufficiently small Schwarzschild-AdS black holes in asymptotically global AdS5×S5 spacetime are known to become dynamically unstable toward deformation of the internal S5 geometry. The resulting evolution of such an unstable black hole is related, via holography, to the dynamics of supercooled plasma which has reached the limit of metastability in maximally supersymmetric large-N Yang-Mills theory on R ×S3. Puzzles related to the resulting dynamical evolution are discussed, with a key issue involving differences between the large-N limit in the dual field theory and typical large volume thermodynamic limits.
Projection rule for complex-valued associative memory with large constant terms
NASA Astrophysics Data System (ADS)
Kitahara, Michimasa; Kobayashi, Masaki
Complex-valued Associative Memory (CAM) has an inherent property of rotation invariance. Rotation invariance produces many undesirable stable states and reduces the noise robustness of CAM. Constant terms may remove rotation invariance, but if the constant terms are too small, rotation invariance does not vanish. In this paper, we eliminate rotation invariance by introducing large constant terms to complex-valued neurons. We have to make the constant terms sufficiently large to improve the noise robustness. We introduce a parameter into the projection rule to control the amplitudes of the constant terms. The large constant terms are proved to be effective by our computer simulations.
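One way to see how constant terms can enter a projection rule is sketched below: each stored phasor pattern is augmented with a bias component whose amplitude is set by a parameter, and the projection matrix is built on the augmented patterns. This is a plausible reading for illustration only, not necessarily the authors' exact formulation.

    import numpy as np

    def projection_matrix_with_bias(patterns, bias_amp):
        """Projection-rule weights for K-state phasor patterns with a bias component.

        patterns : (P, N) array of unit-magnitude complex phasors
        bias_amp : amplitude of the constant component appended to every pattern;
                   larger values give stronger constant terms at each neuron
        """
        aug = np.hstack([patterns, np.full((patterns.shape[0], 1), bias_amp + 0j)])
        x = aug.T                            # columns span the stored (augmented) states
        return x @ np.linalg.pinv(x)         # orthogonal projector onto that span

    def recall(w, state, bias_amp, k=8, iters=10):
        """Synchronous recall with the bias component clamped to bias_amp."""
        phases = np.exp(2j * np.pi * np.arange(k) / k)
        s = np.append(state, bias_amp + 0j)
        for _ in range(iters):
            u = w @ s
            idx = np.round(np.angle(u[:-1]) / (2 * np.pi / k)).astype(int) % k
            s = np.append(phases[idx], bias_amp + 0j)   # quantize neurons, re-clamp bias
        return s[:-1]

    rng = np.random.default_rng(3)
    k, n, p = 8, 30, 3
    stored = np.exp(2j * np.pi * rng.integers(0, k, (p, n)) / k)
    w = projection_matrix_with_bias(stored, bias_amp=5.0)
    print(np.allclose(recall(w, stored[0], bias_amp=5.0), stored[0]))  # stored pattern is a fixed point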
SAR STUDY OF NASAL TOXICITY: LESSONS FOR MODELING SMALL TOXICITY DATASETS
Most toxicity data, particularly from whole animal bioassays, are generated without the needs or capabilities of structure-activity relationship (SAR) modeling in mind. Some toxicity endpoints have been of sufficient regulatory concern to warrant large scale testing efforts (e.g....
Dairy manure biochar as a phosphorus fertilizer
USDA-ARS?s Scientific Manuscript database
Future manure management practices will need to remove large amounts of organic waste as well as harness energy to generate value-added products. Manures can be processed using thermochemical conversion technologies to generate a solid product called biochar. Dairy manure biochars contain sufficient...
When evolution is the solution to pollution...
Rapid evolutionary adaptation is not expected to be sufficiently rapid to buffer the effects of human-mediated environmental changes for most species. Yet large persistent populations of small bodied fish residing in some of the most contaminated estuaries of the US have provided...
48 CFR 245.7302-5 - Mailing lists.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Mailing lists. (a) The plant clearance officer will ensure the contractor solicits a sufficient number of bidders to obtain adequate competition. (b) When large quantities of property, special commodities, or unusual geographic locations are involved, the plant clearance officer is encouraged to obtain additional...
NASA Astrophysics Data System (ADS)
Qin, Jianqi; Celestin, Sebastien; Pasko, Victor P.
2013-05-01
Carrot sprites, exhibiting both upward and downward propagating streamers, and columniform sprites, characterized by predominantly vertical downward streamers, represent two distinct morphological classes of lightning-driven transient luminous events in the upper atmosphere. It is found that positive cloud-to-ground lightning discharges (+CGs) associated with large charge moment changes (QhQ) tend to produce carrot sprites with the presence of a mesospheric region where the electric field exceeds the value 0.8Ek and persists for
ERIC Educational Resources Information Center
Papadopoulos, Timothy C.; Kendeou, Panayiota; Spanoudis, George
2012-01-01
Theory-driven conceptualizations of phonological abilities in a sufficiently transparent language (Greek) were examined in children ages 5 years 8 months to 7 years 7 months, by comparing a set of a priori models. Specifically, the fit of 9 different models was evaluated, as defined by the Number of Factors (1 to 3; represented by rhymes,…
Heavy-flavor parton distributions without heavy-flavor matching prescriptions
NASA Astrophysics Data System (ADS)
Bertone, Valerio; Glazov, Alexandre; Mitov, Alexander; Papanastasiou, Andrew S.; Ubiali, Maria
2018-04-01
We show that the well-known obstacle for working with the zero-mass variable flavor number scheme, namely, the omission of O(1) mass power corrections close to the conventional heavy flavor matching point (HFMP) μ_b = m, can be easily overcome. For this it is sufficient to take advantage of the freedom in choosing the position of the HFMP. We demonstrate that by choosing a sufficiently large HFMP, which could be as large as 10 times the mass of the heavy quark, one can achieve the following improvements: 1) above the HFMP the size of missing power corrections O(m) is restricted by the value of μ_b and, therefore, the error associated with their omission can be made negligible; 2) additional prescriptions for the definition of cross-sections are not required; 3) the resummation accuracy is maintained and 4) contrary to the common lore we find that the discontinuity of α_s and pdfs across thresholds leads to improved continuity in predictions for observables. We have considered a large set of proton-proton and electron-proton collider processes, many through NNLO QCD, that demonstrate the broad applicability of our proposal.
Secondary flow structures in large rivers
NASA Astrophysics Data System (ADS)
Chauvet, H.; Devauchelle, O.; Metivier, F.; Limare, A.; Lajeunesse, E.
2012-04-01
Measuring the velocity field in large rivers remains a challenge, even with recent measurement techniques such as Acoustic Doppler Current Profiler (ADCP). Indeed, due to the diverging angle between its ultrasonic beams, an ADCP cannot detect small-scale flow structures. However, when the measurements are limited to a single location for a sufficient period of time, averaging can reveal large, stationary flow structures. Here we present velocity measurements in a straight reach of the Seine river in Paris, France, where the cross-section is close to rectangular. The transverse modulation of the streamwise velocity indicates secondary flow cells, which seem to occupy the entire width of the river. This observation is reminiscent of the longitudinal vortices observed in laboratory experiments (e.g. Blanckaert et al., Advances in Water Resources, 2010, 33, 1062-1074). Although the physical origin of these secondary structures remains unclear, their measured velocity is sufficient to significantly impact the distribution of streamwise momentum. We propose a model for the transverse profile of the depth-averaged velocity based on a crude representation of the longitudinal vortices, with a single free parameter. Preliminary results are in good agreement with field measurements. This model also provides an estimate for the bank shear stress, which controls bank erosion.
Heimes, F.J.; Moore, G.K.; Steele, T.D.
1978-01-01
Expanded energy- and recreation-related activities in the Yampa River basin, Colorado and Wyoming, have caused a rapid increase in economic development which will result in increased demand and competition for natural resources. In planning for efficient allocation of the basin's natural resources, Landsat images and small-scale color and color-infrared photographs were used for selected geologic, hydrologic and land-use applications within the Yampa River basin. Applications of Landsat data included: (1) regional land-use classification and mapping, (2) lineament mapping, and (3) areal snow-cover mapping. Results from the Landsat investigations indicated that: (1) Landsat land-use classification maps, at a regional level, compared favorably with areal land-use patterns that were defined from available ground information, (2) lineaments were mapped in sufficient detail using recently developed techniques for interpreting aerial photographs, (3) snow cover generally could be mapped for large areas with the exception of some densely forested areas of the basin and areas having a large percentage of winter-season cloud cover. Aerial photographs were used for estimation of turbidity for eight stream locations in the basin. Spectral reflectance values obtained by digitizing photographs were compared with measured turbidity values. Results showed strong correlations (variances explained of greater than 90 percent) between spectral reflectance obtained from color photographs and measured turbidity values. (Woodard-USGS)
Advances in the simulation of toroidal gyro-Landau fluid model turbulence
NASA Astrophysics Data System (ADS)
Waltz, R. E.; Kerbel, G. D.; Milovich, J.; Hammett, G. W.
1995-06-01
The gyro-Landau fluid (GLF) model equations for toroidal geometry [R. E. Waltz, R. R. Dominguez, and G. W. Hammett, Phys. Fluids B 4, 3138 (1992)] have been recently applied to study ion temperature gradient (ITG) mode turbulence using the three-dimensional (3-D) nonlinear ballooning mode representation (BMR) outlined earlier [R. E. Waltz, G. D. Kerbel, and J. Milovich, Phys. Plasmas 1, 2229 (1994)]. The present paper extends this work by treating some unresolved issues concerning ITG turbulence with adiabatic electrons. Although eddies are highly elongated in the radial direction, long time radial correlation lengths are short and comparable to poloidal lengths. Although transport at vanishing shear is not particularly large, transport at reversed global shear is significantly less. Electrostatic transport at moderate shear is not much affected by inclusion of local shear and average favorable curvature. Transport is suppressed when critical E×B rotational shear is comparable to the maximum linear growth rate with only a weak dependence on magnetic shear. Self-consistent turbulent transport of toroidal momentum can result in a transport bifurcation at sufficiently large r/(Rq). However, the main thrust of the new formulation in the paper deals with advances in the development of finite beta GLF models with trapped electrons and BMR numerical methods for treating the fast parallel field motion of the untrapped electrons.
The Continuum of Health Professions
Jensen, Clyde B.
2015-01-01
The large number of health care professions with overlapping scopes of practice is intimidating to students, confusing to patients, and frustrating to policymakers. As abundant and diverse as the hundreds of health care professions are, they possess sufficient numbers of common characteristics to warrant their placement on a common continuum of health professions that permits methodical comparisons. From 2009–2012, the author developed and delivered experimental courses at 2 community colleges for the purposes of creating and validating a novel method for comparing health care professions. This paper describes the bidirectional health professions continuum that emerged from these courses and its potential value in helping students select a health care career, motivating health care providers to seek interprofessional collaboration, assisting patients with the selection of health care providers, and helping policymakers to better understand the health care professions they regulate. PMID:26770147
Serial Femtosecond Crystallography of G Protein-Coupled Receptors
Liu, Wei; Wacker, Daniel; Gati, Cornelius; Han, Gye Won; James, Daniel; Wang, Dingjie; Nelson, Garrett; Weierstall, Uwe; Katritch, Vsevolod; Barty, Anton; Zatsepin, Nadia A.; Li, Dianfan; Messerschmidt, Marc; Boutet, Sébastien; Williams, Garth J.; Koglin, Jason E.; Seibert, M. Marvin; Wang, Chong; Shah, Syed T.A.; Basu, Shibom; Fromme, Raimund; Kupitz, Christopher; Rendek, Kimberley N.; Grotjohann, Ingo; Fromme, Petra; Kirian, Richard A.; Beyerlein, Kenneth R.; White, Thomas A.; Chapman, Henry N.; Caffrey, Martin; Spence, John C.H.; Stevens, Raymond C.; Cherezov, Vadim
2014-01-01
X-ray crystallography of G protein-coupled receptors and other membrane proteins is hampered by difficulties associated with growing sufficiently large crystals that withstand radiation damage and yield high-resolution data at synchrotron sources. Here we used an x-ray free-electron laser (XFEL) with individual 50-fs duration x-ray pulses to minimize radiation damage and obtained a high-resolution room temperature structure of a human serotonin receptor using sub-10 µm microcrystals grown in a membrane mimetic matrix known as lipidic cubic phase. Compared to the structure solved by traditional microcrystallography from cryo-cooled crystals of about two orders of magnitude larger volume, the room temperature XFEL structure displays a distinct distribution of thermal motions and conformations of residues that likely more accurately represent the receptor structure and dynamics in a cellular environment. PMID:24357322
Can Ab Initio Theory Explain the Phenomenon of Parity Inversion in 11Be?
Calci, Angelo; Navratil, Petr; Roth, Robert; ...
2016-12-09
The weakly bound exotic 11Be nucleus, famous for its ground-state parity inversion and distinct n + 10Be halo structure, is investigated from first principles using chiral two- and three-nucleon forces. An explicit treatment of continuum effects is found to be indispensable. We study the sensitivity of the 11Be spectrum to the details of the three-nucleon force and demonstrate that only certain chiral interactions are capable of reproducing the parity inversion. With such interactions, the extremely large E1 transition between the bound states is reproduced. We compare our photodisintegration calculations to conflicting experimental data and predict a distinct dip around the 3/2^-_1 resonance energy. Finally, we predict low-lying 3/2^+ and 9/2^+ resonances that have not been measured, or not sufficiently measured, in experiments.
Secular evolution of eccentricity in protoplanetary discs with gap-opening planets
NASA Astrophysics Data System (ADS)
Teyssandier, Jean; Ogilvie, Gordon I.
2017-06-01
We explore the evolution of the eccentricity of an accretion disc perturbed by an embedded planet whose mass is sufficient to open a large gap in the disc. Various methods for representing the orbit-averaged motion of an eccentric disc are discussed. We characterize the linear instability that leads to the growth of eccentricity by means of hydrodynamical simulations. We numerically recover the known result that eccentricity growth in the disc is possible when the planet-to-star mass ratio exceeds 3 × 10^-3. For mass ratios larger than this threshold, the precession rates and growth rates derived from simulations, as well as the shape of the eccentric mode, compare well with the predictions of a linear theory of eccentric discs. We study mechanisms by which the eccentricity growth eventually saturates into a non-linear regime.
NASA Astrophysics Data System (ADS)
Han, Kai; Xu, Xiaojun; Liu, Zejin
2013-05-01
Based on the spectral manipulation technique, the Stimulated Brillouin Scattering (SBS) suppression effect and the coherent beam combination (CBC) effect in a multi-tone CBC system are investigated theoretically and experimentally. To get satisfactory SBS suppression, the frequency interval of the multi-tone seed laser should be large enough, at least larger than the SBS gain bandwidth. In order to attain an excellent CBC effect, the spectra of the multi-tone seed laser need to be matched with the optical path differences among the amplifier chains. Hence, a sufficiently separated yet matched spectrum is capable of both SBS mitigation and coherent-property preservation. By comparing the SBS suppression effect and the CBC effect at various spectra, the optimal spectral structure for simultaneous SBS suppression and an excellent CBC effect is found.
Recent developments in plastic scintillators with pulse shape discrimination
NASA Astrophysics Data System (ADS)
Zaitseva, N. P.; Glenn, A. M.; Mabe, A. N.; Carman, M. L.; Hurlbut, C. R.; Inman, J. W.; Payne, S. A.
2018-05-01
The paper reports results of studies conducted to improve scintillation performance of plastic scintillators capable of neutron/gamma pulse-shape discrimination (PSD). Compositional modifications made with the polymer matrix improved physical stability, allowing for increased loads of the primary dye that, in combination with selected secondary dyes, provided enhanced PSD especially important for the lower energy ranges. Additional measurements were made with a newly-introduced PSD plastic, EJ-276, which replaces the first commercially produced EJ-299. Comparative studies conducted with the new materials and EJ-309 liquids at large scale (up to 10 cm) show that current plastics may provide scintillation and PSD performance sufficient for the replacement of liquid scintillators. Comparison to stilbene single crystals complements the information about the status of the solid-state materials recently developed for fast neutron detection applications.
OSO-7 Orbiting Solar Observatory program
NASA Technical Reports Server (NTRS)
1972-01-01
The seventh Orbiting Solar Observatory (OSO-7) in the continuing series designed to gather solar and celestial data that cannot be obtained from the earth's surface is described. OSO-7 was launched September 29, 1971. It has been highly successful in returning scientific data giving new and important information about solar flare development, coronal temperature variations, streamer dynamics of plasma flow, and solar nuclear processes. OSO-7 is expected to have sufficient lifetime to permit data comparisons with the Skylab A mission during 1973. The OSO-7 is a second generation observatory. It is about twice as large and heavy as its predecessors, giving it considerably greater capability for scientific measurements. This report reviews mission objectives, flight history, and scientific experiments; describes the observatory; briefly compares OSO-7 with the first six OSO's; and summarizes the performance of OSO-7.
Materials identification using a small-scale pixellated x-ray diffraction system
NASA Astrophysics Data System (ADS)
O'Flynn, D.; Crews, C.; Drakos, I.; Christodoulou, C.; Wilson, M. D.; Veale, M. C.; Seller, P.; Speller, R. D.
2016-05-01
A transmission x-ray diffraction system has been developed using a pixellated, energy-resolving detector (HEXITEC) and a small-scale, mains operated x-ray source (Amptek Mini-X). HEXITEC enables diffraction to be measured without the requirement of incident spectrum filtration, or collimation of the scatter from the sample, preserving a large proportion of the useful signal compared with other diffraction techniques. Due to this efficiency, sufficient molecular information for material identification can be obtained within 5 s despite the relatively low x-ray source power. Diffraction data are presented from caffeine, hexamine, paracetamol, plastic explosives and narcotics. The capability to determine molecular information from aspirin tablets inside their packaging is demonstrated. Material selectivity and the potential for a sample classification model is shown with principal component analysis, through which each different material can be clearly resolved.
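The abstract mentions principal component analysis only in passing, so the following Python sketch is purely illustrative: synthetic Gaussian "spectra" stand in for the measured diffraction data, and the peak positions, noise level, and use of scikit-learn are assumptions rather than the authors' pipeline.

```python
# Illustrative only: PCA scores on synthetic stand-ins for diffraction spectra,
# showing how material classes can separate in a low-dimensional projection.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
axis = np.linspace(0.0, 10.0, 200)                                    # arbitrary scattering axis
fake_peaks = {"caffeine": 3.2, "hexamine": 5.1, "paracetamol": 7.4}   # invented peak positions

spectra, labels = [], []
for name, peak in fake_peaks.items():
    for _ in range(20):                                               # 20 noisy replicates per material
        spectra.append(np.exp(-(axis - peak) ** 2 / 0.1)
                       + 0.05 * rng.normal(size=axis.size))
        labels.append(name)

scores = PCA(n_components=2).fit_transform(np.array(spectra))
# Plotting `scores` coloured by `labels` shows the three materials as distinct clusters.
```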
Application of laboratory permeability data
Johnson, A.I.
1963-01-01
Some of the basic material contained in this report originally was prepared in 1952 as instructional handouts for ground-water short courses and for training of foreign participants. The material has been revised and expanded and is presented in the present form to make it more readily available to the field hydrologist. Illustrations now present published examples of the applications suggested in the 1952 material. For small areas, a field pumping test is sufficient to predict the characteristics of an aquifer. With a large area under study, the aquifer properties must be determined at many different locations and it is not usually economically feasible to make sufficient field tests to define the aquifer properties in detail for the whole aquifer. By supplementing a few field tests with laboratory permeability data and geologic interpretation, more point measurements representative of the hydrologic properties of the aquifer may be obtained. A sufficient number of samples seldom can be obtained to completely identify the permeability or transmissibility in detail for a project area. However, a few judiciously chosen samples of high quality, combined with good geologic interpretation, often will permit the extrapolation of permeability information over a large area with a fair degree of reliability. The importance of adequate geologic information, as well as the importance of collecting samples representative of at least all major textural units lying within the section or area of study, cannot be overemphasized.
Carbon Nanotube/Space Durable Polymer Nanocomposite Films for Electrostatic Charge Dissipation
NASA Technical Reports Server (NTRS)
Smith, J. G., Jr.; Watson, K. A.; Thompson, C. M.; Connell, J. W.
2002-01-01
Low solar absorptivity, space environmentally stable polymeric materials possessing sufficient electrical conductivity for electrostatic charge dissipation (ESD) are of interest for potential applications on spacecraft as thin film membranes on antennas, solar sails, large lightweight space optics, and second surface mirrors. One method of imparting electrical conductivity while maintaining low solar absorptivity is through the use of single wall carbon nanotubes (SWNTs). However, SWNTs are difficult to disperse. Several preparative methods were employed to disperse SWNTs into the polymer matrix. Several examples possessed electrical conductivity sufficient for ESD. The chemistry, physical, and mechanical properties of the nanocomposite films will be presented.
NASA Astrophysics Data System (ADS)
Hossack, A. C.; Sutherland, D. A.; Jarboe, T. R.
2017-02-01
A derivation is given showing that the current inside a closed-current volume can be sustained against resistive dissipation by appropriately phased magnetic perturbations. Imposed-dynamo current drive theory is used to predict the toroidal current evolution in the helicity injected torus with steady inductive helicity injection (HIT-SI) experiment as a function of magnetic fluctuations at the edge. Analysis of magnetic fields from a HIT-SI discharge shows that the injector-imposed fluctuations are sufficient to sustain the measured toroidal current without instabilities whereas the small, plasma-generated magnetic fluctuations are not sufficiently large to sustain the current.
Photopic transduction implicated in human circadian entrainment
NASA Technical Reports Server (NTRS)
Zeitzer, J. M.; Kronauer, R. E.; Czeisler, C. A.
1997-01-01
Despite the preeminence of light as the synchronizer of the circadian timing system, the phototransductive machinery in mammals which transmits photic information from the retina to the hypothalamic circadian pacemaker remains largely undefined. To determine the class of photopigments which this phototransductive system uses, we exposed a group (n = 7) of human subjects to red light below the sensitivity threshold of a scotopic (i.e. rhodopsin/rod-based) system, yet of sufficient strength to activate a photopic (i.e. cone-based) system. Exposure to this light stimulus was sufficient to reset significantly the human circadian pacemaker, indicating that the cone pigments which mediate color vision can also mediate circadian vision.
Baryon asymmetry from hypermagnetic helicity in dilaton hypercharge electromagnetism
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bamba, Kazuharu
2006-12-15
The generation of the baryon asymmetry of the Universe from the hypermagnetic helicity, the physical interpretation of which is given in terms of hypermagnetic knots, is studied in inflationary cosmology, taking into account the breaking of the conformal invariance of hypercharge electromagnetic fields through both a coupling with the dilaton and with a pseudoscalar field. It is shown that, if the electroweak phase transition is strongly first order and the present amplitude of the generated magnetic fields on the horizon scale is sufficiently large, a baryon asymmetry with a sufficient magnitude to account for the observed baryon-to-entropy ratio can be generated.
Walczewska-Szewc, Katarzyna; Deplazes, Evelyne; Corry, Ben
2015-07-14
Adequately sampling the large number of conformations accessible to proteins and other macromolecules is one of the central challenges in molecular dynamics (MD) simulations; this activity can be difficult, even for relatively simple systems. An example where this problem arises is in the simulation of dye-labeled proteins, which are now being widely used in the design and interpretation of Förster resonance energy transfer (FRET) experiments. In this study, MD simulations are used to characterize the motion of two commonly used FRET dyes attached to an immobilized chain of polyproline. Even in this simple system, the dyes exhibit complex behavior that is a mixture of fast and slow motions. Consequently, very long MD simulations are required to sufficiently sample the entire range of dye motion. Here, we compare the ability of enhanced sampling methods to reproduce the behavior of fluorescent labels on proteins. In particular, we compared Accelerated Molecular Dynamics (AMD), metadynamics, Replica Exchange Molecular Dynamics (REMD), and High Temperature Molecular Dynamics (HTMD) to equilibrium MD simulations. We find that, in our system, all of these methods improve the sampling of the dye motion, but the most significant improvement is achieved using REMD.
Evaluation of a wave-vector-frequency-domain method for nonlinear wave propagation
Jing, Yun; Tao, Molei; Clement, Greg T.
2011-01-01
A wave-vector-frequency-domain method is presented to describe one-directional forward or backward acoustic wave propagation in a nonlinear homogeneous medium. Starting from a frequency-domain representation of the second-order nonlinear acoustic wave equation, an implicit solution for the nonlinear term is proposed by employing the Green’s function. Its approximation, which is more suitable for numerical implementation, is used. An error study is carried out to test the efficiency of the model by comparing the results with the Fubini solution. It is shown that the error grows as the propagation distance and step-size increase. However, for the specific case tested, even at a step size as large as one wavelength, sufficient accuracy for plane-wave propagation is observed. A two-dimensional steered transducer problem is explored to verify the nonlinear acoustic field directional independence of the model. A three-dimensional single-element transducer problem is solved to verify the forward model by comparing it with an existing nonlinear wave propagation code. Finally, backward-projection behavior is examined. The sound field over a plane in an absorptive medium is backward projected to the source and compared with the initial field, where good agreement is observed. PMID:21302985
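The error study above uses the Fubini solution as its analytical benchmark. As a reference point only (not the authors' code), a minimal Python sketch of the pre-shock Fubini harmonic amplitudes, B_n = 2 J_n(n sigma)/(n sigma) with sigma the propagation distance scaled by the shock-formation distance, could look like this:

```python
# Minimal sketch of the Fubini benchmark: harmonic amplitudes of an initially
# sinusoidal plane wave before shock formation (sigma = x / x_shock < 1).
import numpy as np
from scipy.special import jv  # Bessel function of the first kind

def fubini_harmonics(sigma, n_max=10):
    n = np.arange(1, n_max + 1)
    return 2.0 * jv(n, n * sigma) / (n * sigma)

print(fubini_harmonics(0.5, n_max=5))  # first five harmonic amplitudes at half the shock distance
```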
NASA Astrophysics Data System (ADS)
Lorenzi, M.; Mitroglou, N.; Santini, M.; Gavaises, M.
2017-03-01
An experimental technique for the estimation of the temporally averaged vapour volume fraction within high-speed cavitating flow orifices is presented. The scientific instrument is designed to employ X-ray micro computed tomography (microCT) as a quantitative 3D measuring technique applied to custom designed, large-scale, orifice-type flow channels made from Polyether-ether-ketone (PEEK). The attenuation of the ionising electromagnetic radiation by the fluid under examination depends on its local density; the transmitted radiation through the cavitation volume is compared to the incident radiation, and the combination of radiographs from a sufficient number of angles leads to the reconstruction of attenuation coefficients versus the spatial position. This results in a 3D volume fraction distribution measurement of the developing multiphase flow. The experimental results obtained are compared against the high-speed shadowgraph visualisation images obtained in an optically transparent nozzle with identical injection geometry; comparison between the temporal mean image and the microCT reconstruction shows excellent agreement. At the same time, the real 3D internal channel geometry (possibly eroded) has been measured and compared to the nominal manufacturing CAD drawing of the test nozzle.
A Review of Secondary Organic Aerosol (SOA) Formation from Isoprene
Recent field and laboratory evidence indicates that the oxidation of isoprene forms secondary organic aerosol (SOA). Global biogenic emissions of isoprene (600 Tg yr^-1) are sufficiently large that SOA formation, even at small yields, results in substantial production ...
On Maximal Subalgebras and the Hypercentre of Lie Algebras.
ERIC Educational Resources Information Center
Honda, Masanobu
1997-01-01
Derives two sufficient conditions for a finitely generated Lie algebra to have the nilpotent hypercenter. Presents a relatively large class of generalized soluble Lie algebras. Proves that if a finitely generated Lie algebra has a nilpotent maximal subalgebra, the Fitting radical is nilpotent. (DDR)
EVALUATING THE POTENTIAL FOR CHLORINATED SOLVENT DEGRADATION FROM HYDROGEN CONCENTRATIONS
Long-term monitoring of a large trichloroethylene (TCE) and 1,1,1-trichloroethane (TCA) ground water plume in Minnesota indicated that these contaminants attenuated with distance from the source. Mathematical modelling indicated that sufficient time had passed for the plume to fu...
Strategic approaches to unraveling genetic causes of cardiovascular diseases
USDA-ARS?s Scientific Manuscript database
DNA sequence variants are major components of the "causal field" for virtually all medical phenotypes, whether single gene familial disorders or complex traits without a clear familial aggregation. The causal variants in single gene disorders are necessary and sufficient to impart large effects. In ...
ENVIRONMENTAL IMPLICATIONS OF PLANTS MODIFIED TO CONTAIN INSECTICIDAL GENES
Genetically modified (GM) crops are being grown on large acreages in the United States. Before being approved for sale, sufficient scientific evidence allowed the EPA to determine that they are safe. The results of this research project will strengthen the scientific basis EPA u...
Energy expenditure and enjoyment of common children's games in a simulated free-play environment.
Howe, Cheryl A; Freedson, Patty S; Feldman, Henry A; Osganian, Stavroula K
2010-12-01
To measure the energy expenditure and enjoyment of children's games to be used in developing a school-based intervention for preventing excessive weight gain. Healthy weight (body mass index [BMI] < 85th percentile) and overweight or obese (BMI ≥ 85th percentile) third-grade children (15 boys; 13 girls) were recruited. In a large gymnasium, children performed 10 games randomly selected from 30 games used in previous interventions. Total energy expenditure was measured with a portable metabolic unit and perceived enjoyment was assessed using a 9-point Likert scale of facial expressions. Mean physical activity energy expenditure (PAEE = total energy expenditure minus resting metabolism) and enjoyment of the games were adjusted for sex and BMI classification. PAEE and enjoyment were compared using a repeated-measures ANOVA with sex, BMI classification, and games as main effects. The games elicited a moderate intensity effort (mean ± standard deviation = 5.0 ± 1.3 metabolic equivalents, 123 ± 36 kcal/30 min). PAEE was higher for boys than for girls (0.12 ± 0.04 versus 0.11 ± 0.04 kcal/kg/min) and for healthy weight compared with overweight children (0.13 ± 0.04 versus 0.11 ± 0.03 kcal/kg/min). Twenty-two of the 30 games elicited a sufficiently high PAEE (≥ 100 kcal/30 min) and enjoyment (≥ neutral expression) for inclusion in future school-based interventions. Not all children's games are perceived as enjoyable or resulted in an energy expenditure that was sufficiently high for inclusion in future physical activity interventions to prevent the excess weight gain associated with childhood obesity. Copyright © 2010 Mosby, Inc. All rights reserved.
TURBULENCE IN THE SOLAR WIND MEASURED WITH COMET TAIL TEST PARTICLES
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeForest, C. E.; Howard, T. A.; Matthaeus, W. H.
2015-10-20
By analyzing the motions of test particles observed remotely in the tail of Comet Encke, we demonstrate that the solar wind undergoes turbulent processing en route from the Sun to the Earth and that the kinetic energy entrained in the large-scale turbulence is sufficient to explain the well-known anomalous heating of the solar wind. Using the heliospheric imaging (HI-1) camera on board NASA's STEREO-A spacecraft, we have observed an ensemble of compact features in the comet tail as they became entrained in the solar wind near 0.4 AU. We find that the features are useful as test particles, via mean-motion analysis and a forward model of pickup dynamics. Using population analysis of the ensemble's relative motion, we find a regime of random-walk diffusion in the solar wind, followed, on larger scales, by a surprising regime of semiconfinement that we attribute to turbulent eddies in the solar wind. The entrained kinetic energy of the turbulent motions represents a sufficient energy reservoir to heat the solar wind to observed temperatures at 1 AU. We determine the Lagrangian-frame diffusion coefficient in the diffusive regime, derive upper limits for the small-scale coherence length of solar wind turbulence, compare our results to existing Eulerian-frame measurements, and compare the turbulent velocity with the size of the observed eddies extrapolated to 1 AU. We conclude that the slow solar wind is fully mixed by turbulence on scales corresponding to a 1–2 hr crossing time at Earth; and that solar wind variability on timescales shorter than 1–2 hr is therefore dominated by turbulent processing rather than by direct solar effects.
Tauchman, Eric C; Boehm, Frederick J; DeLuca, Jennifer G
2015-12-01
During mitosis, duplicated sister chromatids attach to microtubules emanating from opposing sides of the bipolar spindle through large protein complexes called kinetochores. In the absence of stable kinetochore-microtubule attachments, a cell surveillance mechanism known as the spindle assembly checkpoint (SAC) produces an inhibitory signal that prevents anaphase onset. Precisely how the inhibitory SAC signal is extinguished in response to microtubule attachment remains unresolved. To address this, we induced formation of hyper-stable kinetochore-microtubule attachments in human cells using a non-phosphorylatable version of the protein Hec1, a core component of the attachment machinery. We find that stable attachments are sufficient to silence the SAC in the absence of sister kinetochore bi-orientation and strikingly in the absence of detectable microtubule pulling forces or tension. Furthermore, we find that SAC satisfaction occurs despite the absence of large changes in intra-kinetochore distance, suggesting that substantial kinetochore stretching is not required for quenching the SAC signal.
Multidimensional equilibria and their stability in copolymer-solvent mixtures
NASA Astrophysics Data System (ADS)
Glasner, Karl; Orizaga, Saulo
2018-06-01
This paper discusses localized equilibria which arise in copolymer-solvent mixtures. A free boundary problem associated with the sharp-interface limit of a density functional model is used to identify both lamellar and concentric domain patterns composed of a finite number of layers. Stability of these morphologies is studied through explicit linearization of the free boundary evolution. For the multilayered lamellar configuration, transverse instability is observed for sufficiently small dimensionless interfacial energies. Additionally, a crossover between small and large wavelength instabilities is observed depending on whether solvent-polymer or monomer-monomer interfacial energy is dominant. Concentric domain patterns resembling multilayered micelles and vesicles exhibit bifurcations wherein they only exist for sufficiently small dimensionless interfacial energies. The bifurcation of large radii vesicle solutions is studied analytically, and a crossover from a supercritical case with only one solution branch to a subcritical case with two is observed. Linearized stability of these configurations shows that azimuthal perturbation may lead to instabilities as interfacial energy is decreased.
DataWarrior: an open-source program for chemistry aware data visualization and analysis.
Sander, Thomas; Freyss, Joel; von Korff, Modest; Rufener, Christian
2015-02-23
Drug discovery projects in the pharmaceutical industry accumulate thousands of chemical structures and tens of thousands of data points from a dozen or more biological and pharmacological assays. A sufficient interpretation of the data requires understanding which molecular families are present, which structural motifs correlate with measured properties, and which tiny structural changes cause large property changes. Data visualization and analysis software with sufficient chemical intelligence to support chemists in this task is rare. In an attempt to contribute to filling the gap, we released our in-house developed chemistry-aware data analysis program DataWarrior for free public use. This paper gives an overview of DataWarrior's functionality and architecture. As an example, a new unsupervised 2-dimensional scaling algorithm is presented, which employs vector-based or nonvector-based descriptors to visualize the chemical or pharmacophore space of even large data sets. DataWarrior uses this method to interactively explore chemical space, activity landscapes, and activity cliffs.
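DataWarrior's own scaling algorithm is not spelled out in the abstract, so the sketch below only illustrates the general idea it describes: embedding descriptor vectors of a compound set into two dimensions to visualize chemical space. The random binary fingerprints, the Tanimoto distance, and the use of scikit-learn's MDS are all assumptions made for illustration.

```python
# Illustration of 2D chemical-space scaling on descriptor vectors (not DataWarrior's algorithm).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(1)
fingerprints = rng.integers(0, 2, size=(200, 512))       # stand-in binary descriptors

intersection = fingerprints @ fingerprints.T
bit_counts = fingerprints.sum(axis=1)
tanimoto = intersection / (bit_counts[:, None] + bit_counts[None, :] - intersection)

coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(1.0 - tanimoto)  # 2D map of the descriptor space
```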
Digital Archiving of People Flow by Recycling Large-Scale Social Survey Data of Developing Cities
NASA Astrophysics Data System (ADS)
Sekimoto, Y.; Watanabe, A.; Nakamura, T.; Horanont, T.
2012-07-01
Data on people flow has become increasingly important in the field of business, including the areas of marketing and public services. Although mobile phones enable a person's position to be located to a certain degree, it is a challenge to acquire sufficient data from people with mobile phones. In order to grasp people flow in its entirety, it is important to establish a practical method of reconstructing people flow from various kinds of existing fragmentary spatio-temporal data, such as social survey data. For example, although typical Person Trip Survey data collected by the public sector record only fragmentary spatio-temporal positions, the data are attractive because the sample size is sufficiently large to estimate the entire flow of people. In this study, we apply our proposed basic method to Japan International Cooperation Agency (JICA) PT data pertaining to developing cities around the world, and we propose correction methods to resolve the difficulties in applying it stably across many cities and their infrastructure data.
Federal solar policies yield neither heat nor light
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstein, M.
1978-02-06
Thirty years of Federal energy policies and bureaucracy are criticized for their limited success in promoting nuclear energy and their present involvement in solar technology. Mr. Silverstein feels that poor judgment was shown in pursuit of large-scale solar demonstrations between 1973 and 1976 when Federal agencies ignored existing solar companies and awarded contracts to the large corporations. A fetish for crash research programs, he also feels, led to the creation of the Solar Energy Research Institute (SERI), which concentrates on wasteful high-technology projects rather than building on what has already been developed in the field. He cites "even more destructive" policies adopted by the Housing and Urban Development Agency (HUD), which attacked many solar suppliers without sufficient evidence and then developed a solar-water-heater grant program that effectively distorted the market. The author feels that the solar technology market is sufficiently viable and that government participation is more appropriate in the form of tax credits and guaranteed loans.
NASA Astrophysics Data System (ADS)
Mokiem, M. R.; de Koter, A.; Evans, C. J.; Puls, J.; Smartt, S. J.; Crowther, P. A.; Herrero, A.; Langer, N.; Lennon, D. J.; Najarro, F.; Villamariz, M. R.; Vink, J. S.
2007-04-01
We have studied the optical spectra of a sample of 28 O- and early B-type stars in the Large Magellanic Cloud, 22 of which are associated with the young star forming region N11. Our observations sample the central associations of LH9 and LH10, and the surrounding regions. Stellar parameters are determined using an automated fitting method (Mokiem et al. 2005), which combines the stellar atmosphere code fastwind (Puls et al. 2005) with the genetic algorithm based optimisation routine pikaia (Charbonneau 1995). We derive an age of 7.0 ± 1.0 and 3.0 ± 1.0 Myr for LH9 and LH10, respectively. The age difference and relative distance of the associations are consistent with a sequential star formation scenario in which stellar activity in LH9 triggered the formation of LH10. Our sample contains four stars of spectral type O2. From helium and hydrogen line fitting we find the hottest three of these stars to be 49-54 kK (compared to 45-46 kK for O3 stars). Detailed determination of the helium mass fraction reveals that the masses of helium enriched dwarfs and giants derived in our spectroscopic analysis are systematically lower than those implied by non-rotating evolutionary tracks. We interpret this as evidence for efficient rotationally enhanced mixing leading to the surfacing of primary helium and to an increase of the stellar luminosity. This result is consistent with findings for SMC stars by Mokiem et al. (2006). For bright giants and supergiants no such mass discrepancy is found; these stars therefore appear to follow tracks of modestly or non-rotating objects. The set of programme stars was sufficiently large to establish the mass loss rates of OB stars in this Z ~ 1/2 Z⊙ environment with sufficient accuracy to allow for a quantitative comparison with similar objects in the Galaxy and the SMC. The mass loss properties are found to be intermediate to those of massive stars in the Galaxy and SMC. Comparing the derived modified wind momenta D_mom as a function of luminosity with predictions for LMC metallicities by Vink et al. (2001) yields good agreement in the entire luminosity range that was investigated, i.e. 5.0 < log L/L⊙ < 6.1. Appendix A is only available in electronic form at http://www.aanda.org
Automated sizing of large structures by mixed optimization methods
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.; Loendorf, D.
1973-01-01
A procedure for automating the sizing of wing-fuselage airframes was developed and implemented in the form of an operational program. The program combines fully stressed design to determine an overall material distribution with mass-strength and mathematical programming methods to design structural details accounting for realistic design constraints. The practicality and efficiency of the procedure is demonstrated for transport aircraft configurations. The methodology is sufficiently general to be applicable to other large and complex structures.
Characterizing and Optimizing the Performance of the MAESTRO 49-Core Processor
2014-03-27
process large volumes of data, it is necessary during testing to vary the dimensions of the inbound data matrix to determine what effect this has on the...needed that can process the extra data these systems seek to collect. However, the space environment presents a number of threats, such as ambient or...induced faults, and that also have sufficient computational power to handle the large flow of data they encounter. This research investigates one
Maret, Terry R.; Ott, D.S.
2004-01-01
width was determined to be sufficient for collecting an adequate number of fish to estimate species richness and evaluate biotic integrity. At most sites, about 250 fish were needed to effectively represent 95 percent of the species present. Fifty-three percent of the sites assessed, using an IBI developed specifically for large Idaho rivers, received scores of less than 50, indicating poor biotic integrity.
NASA Astrophysics Data System (ADS)
Rahim, Farah Hanim Abdul; Abidin, Norhaslinda Zainal; Hawari, Nurul Nazihah
2017-11-01
The Malaysian government has targeted 100% rice self-sufficiency for the country's rice industry, whereas Malaysia's rice self-sufficiency level (SSL) currently stands at 65% to 75%. The government has therefore implemented a few policies to increase rice production in Malaysia in order to meet the growing demand for rice. In this paper, the effect of price support on the rice production system in Malaysia is investigated. This study applies a system dynamics approach to the Malaysian rice production system, in which the factors are interrelated and change dynamically through time. Scenario analysis was conducted with the system dynamics model by changing the price subsidy to see its effect on rice production and the rice SSL. The model provides a framework for understanding the effect of the price subsidy on the rice self-sufficiency level. The scenario analysis shows that a 50% increase in the price subsidy leads to a substantial increase in demand as the rice price drops. Accordingly, local production increases by 15%. However, the SSL slightly decreases because local production is insufficient to meet the large demand.
Tuberculosis and the role of war in the modern era.
Drobniewski, F A; Verlander, N Q
2000-12-01
Tuberculosis (TB) remains a major global health problem; historically, major wars have increased TB notifications. This study evaluated whether modern conflicts worldwide affected TB notifications between 1975 and 1995. Dates of conflicts were obtained and matched with national TB notification data reported to the World Health Organization. Overall notification rates were calculated pre and post conflict. Poisson regression analysis was applied to all conflicts with sufficient data for detailed trend analysis. Thirty-six conflicts were identified, for which 3-year population and notification data were obtained. Overall crude TB notification rates were 81.9 and 105.1/100,000 pre and post start of conflict in these countries. Sufficient data existed in 16 countries to apply Poisson regression analysis to model 5-year pre and post start of conflict trends. This analysis indicated that the risk of presenting with TB in any country 2.5 years after the outbreak of conflict relative to 2.5 years before the outbreak was 1.016 (95%CI 0.9435-1.095). The modelling suggested that in the modern era war may not significantly damage efforts to control TB in the long term. This might be due to the limited scale of most of these conflicts compared to the large-scale civilian disruption associated with 'world wars'. The management of TB should be considered in planning post-conflict refugee and reconstruction programmes.
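As a rough illustration of the pre/post-conflict trend analysis described above, the sketch below fits a Poisson regression of notification counts with a population offset using statsmodels; the variable names, the model terms, and the toy numbers are assumptions, not the study's actual specification.

```python
# Hypothetical Poisson trend model: notifications ~ time + post-conflict indicator,
# with log(population) as the exposure offset. Numbers are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cases": [3200, 3350, 3500, 3900, 4300, 4700],          # yearly TB notifications
    "population": [4.0e6] * 6,                               # person-years at risk
    "years_from_onset": [-2.5, -1.5, -0.5, 0.5, 1.5, 2.5],   # time relative to conflict onset
    "post": [0, 0, 0, 1, 1, 1],                              # post-conflict indicator
})

model = smf.glm("cases ~ years_from_onset + post", data=df,
                family=sm.families.Poisson(),
                offset=np.log(df["population"])).fit()
print(np.exp(model.params["post"]))  # relative risk of notification after conflict onset
```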
Garcia Hejl, Carine; Martinaud, Christophe; Macarez, Remi; Sill, Joshua; Le Golvan, Armelle; Dulou, Renaud; Longin Roche, Celine; De Rudnicki, Stephane
2015-05-01
We present here a description of the experience in whole-blood transfusion of a health service team deployed to a medical treatment facility in Afghanistan from June 2011 to October 2011. The aim of our work was to show how a "walking blood bank" could provide a sufficient supply. We gathered the blood-group types of military personnel deployed to the theater of operations to evaluate our "potential walking blood bank," and we compared these data with our needs. Blood type frequencies among our "potential walking blood bank" were similar to those observed in European or American countries. Our resources could have been limited because of a low frequency of B blood type and negative rhesus in our "potential walking blood bank." Because of the large number of potential donors in the theater of operations, the risk of blood shortage was quite low and we did not face a blood shortage despite significant transfusion requirements. In total, 93 blood bags were collected, including rare blood types such as AB and B. In our experience, this international "walking blood bank" provided a quick, safe, and sufficient blood supply. More research in this area is needed, and our results should be confirmed by further prospective trials. Therapeutic study, level V.
NASA Astrophysics Data System (ADS)
Taylor, Bradford P.; Penington, Catherine J.; Weitz, Joshua S.
2016-12-01
Multiple virus particles can infect a target host cell. Such multiple infections (MIs) have significant and varied ecological and evolutionary consequences for both virus and host populations. Yet, the in situ rates and drivers of MIs in virus-microbe systems remain largely unknown. Here, we develop an individual-based model (IBM) of virus-microbe dynamics to probe how spatial interactions drive the frequency and nature of MIs. In our IBMs, we identify increasingly spatially correlated clusters of viruses given sufficient decreases in viral movement. We also identify increasingly spatially correlated clusters of viruses and clusters of hosts given sufficient increases in viral infectivity. The emergence of clusters is associated with an increase in multiply infected hosts as compared to expectations from an analogous mean field model. We also observe long-tails in the distribution of the multiplicity of infection in contrast to mean field expectations that such events are exponentially rare. We show that increases in both the frequency and severity of MIs occur when viruses invade a cluster of uninfected microbes. We contend that population-scale enhancement of MI arises from an aggregate of invasion dynamics over a distribution of microbe cluster sizes. Our work highlights the need to consider spatially explicit interactions as a potentially key driver underlying the ecology and evolution of virus-microbe communities.
Zuo, Ran; Zhang, Yi; Huguet-Tapia, Jose C; Mehta, Mishal; Dedic, Evelina; Bruner, Steven D; Loria, Rosemary; Ding, Yousong
2016-05-01
Aromatic nitration is an immensely important industrial process to produce chemicals for a variety of applications, but it often suffers from multiple unsolved challenges. Enzymes as biocatalysts have been increasingly used for organic synthesis due to their high selectivity and environmental friendliness, but nitration has benefited minimally from the development of biocatalysis. In this work, we aimed to develop TxtE as a practical biocatalyst for aromatic nitration. TxtE is a unique class I cytochrome P450 enzyme that nitrates the indole of l-tryptophan. To develop cost-efficient nitration processes, we fused TxtE with the reductase domains of CYP102A1 (P450BM3) and of P450RhF to create class III self-sufficient biocatalysts. The best engineered fusion protein was comparable with wild-type TxtE in terms of nitration performance and other key biochemical properties. To demonstrate the application potential of the fusion enzyme, we nitrated 4-F-dl-tryptophan and 5-F-l-tryptophan in large-scale enzymatic reactions. Tandem MS/MS and NMR analyses of isolated products revealed altered nitration sites. To our knowledge, these studies represent a first effort in developing biological nitration approaches and lay a solid basis for the use of TxtE-based biocatalysts for the production of valuable nitroaromatics. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Beerling, David J; Harfoot, Michael; Lomax, Barry; Pyle, John A
2007-07-15
The discovery of mutated palynomorphs in end-Permian rocks led to the hypothesis that the eruption of the Siberian Traps through older organic-rich sediments synthesized and released massive quantities of organohalogens, which caused widespread O3 depletion and allowed increased terrestrial incidence of harmful ultraviolet-B radiation (UV-B, 280-315 nm; Visscher et al. 2004 Proc. Natl Acad. Sci. USA 101, 12952-12956). Here, we use an extended version of the Cambridge two-dimensional chemistry-transport model to evaluate quantitatively this possibility along with two other potential causes of O3 loss at this time: (i) direct effects of HCl release by the Siberian Traps and (ii) the indirect release of organohalogens from dispersed organic matter. According to our simulations, CH3Cl released from the heating of coals alone caused comparatively minor O3 depletion (5-20% maximum) because this mechanism fails to deliver sufficiently large amounts of Cl into the stratosphere. The unusual explosive nature of the Siberian Traps, combined with the direct release of large quantities of HCl, depleted the model O3 layer in the high northern latitudes by 33-55%, given a main eruptive phase of less than or equal to 200 kyr. Nevertheless, O3 depletion was most extensive when HCl release from the Siberian Traps was combined with massive CH3Cl release synthesized from a large reservoir of dispersed organic matter in Siberian rocks. This suite of model experiments produced column O3 depletion of 70-85% and 55-80% in the high northern and southern latitudes, respectively, given eruption durations of 100-200 kyr. On longer eruption time scales of 400-600 kyr, corresponding O3 depletion was 30-40% and 20-30%, respectively. Calculated year-round increases in total near-surface biologically effective (BE) UV-B radiation following these reductions in the O3 layer range from 30-60 kJ m^-2 d^-1 (BE) up to 50-100 kJ m^-2 d^-1 (BE). These ranges of daily UV-B doses appear sufficient to exert mutagenic effects on plants, especially if sustained over tens of thousands of years, unlike either rising temperatures or SO2 concentrations.
NASA Astrophysics Data System (ADS)
Druzhinin, O.; Troitskaya, Yu; Zilitinkevich, S.
2018-01-01
The detailed knowledge of turbulent exchange processes occurring in the atmospheric marine boundary layer is of primary importance for their correct parameterization in large-scale prognostic models. These processes are complicated, especially at sufficiently strong wind forcing conditions, by the presence of sea-spray drops which are torn off the crests of sufficiently steep surface waves by the wind gusts. Natural observations indicate that the mass fraction of sea-spray drops increases with wind speed and their impact on the dynamics of the air in the vicinity of the sea surface can become quite significant. Field experiments, however, are limited by insufficient accuracy of the acquired data and are in general costly and difficult. Laboratory modeling presents another route to investigate the spray-mediated exchange processes in much more detail as compared to the natural experiments. However, laboratory measurements, contact as well as Particle Image Velocimetry (PIV) methods, also suffer from an inability to resolve the dynamics of the near-surface air flow, especially in the surface wave troughs. In this report, we present a first attempt to use Direct Numerical Simulation (DNS) as a tool for investigation of the drop-mediated momentum, heat and moisture transfer in a turbulent, droplet-laden air flow over a wavy water surface. DNS is capable of resolving the details of the transfer processes and does not involve any closure assumptions typical of Large-Eddy and Reynolds Averaged Navier-Stokes (LES and RANS) simulations. Thus DNS provides a basis for improving parameterizations in LES and RANS closure models and further development of large-scale prognostic models. In particular, we discuss numerical results showing the details of the modification of the air flow velocity, temperature and relative humidity fields by multidisperse, evaporating drops. We use a Eulerian-Lagrangian approach where the equations for the air-flow fields are solved in a Eulerian frame whereas the drop dynamics equations are solved in a Lagrangian frame. The effects of the air flow and drops on the water surface wave are neglected. A point-force approximation is employed to model the feedback contributions of the drops to the air momentum, heat and moisture transfer.
Hansen, William B; Derzon, James H; Reese, Eric L
2014-06-01
We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity of all groups using these variables is calculated as standardized proximities having values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.
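The paper compares five ways of producing standardized proximities; the sketch below is one assumed variant (z-scored pretest measures, Euclidean distance rescaled to 0-1) intended only to make the mechanics concrete, not to reproduce the authors' formulas.

```python
# One assumed way to compute standardized 0-1 proximities between a new evaluation
# group and the prior groups in the database, then pick comparators above a limit.
import numpy as np

def standardized_proximities(target, candidates):
    pool = np.vstack([target, candidates])
    z = (pool - pool.mean(axis=0)) / pool.std(axis=0)   # put the six summary measures on a common scale
    dist = np.linalg.norm(z[1:] - z[0], axis=1)         # distance of each prior group to the new group
    return 1.0 - dist / dist.max()                      # rescale so 1 = identical pretest profile

rng = np.random.default_rng(0)
prox = standardized_proximities(rng.normal(size=6), rng.normal(size=(802, 6)))
comparators = np.flatnonzero(prox >= 0.90)              # loosen the limit if too few comparators remain
```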
Accuracy assessment with complex sampling designs
Raymond L. Czaplewski
2010-01-01
A reliable accuracy assessment of remotely sensed geospatial data requires a sufficiently large probability sample of expensive reference data. Complex sampling designs reduce cost or increase precision, especially with regional, continental and global projects. The General Restriction (GR) Estimator and the Recursive Restriction (RR) Estimator separate a complex...
Systems Thinking for Transformational Change in Health
ERIC Educational Resources Information Center
Willis, Cameron D.; Best, Allan; Riley, Barbara; Herbert, Carol P.; Millar, John; Howland, David
2014-01-01
Incremental approaches to introducing change in Canada's health systems have not sufficiently improved the quality of services and outcomes. Further progress requires 'large system transformation', considered to be the systematic effort to generate coordinated change across organisations sharing a common vision and goal. This essay draws on…
DOT National Transportation Integrated Search
2015-08-01
Large structures like bridges or tall buildings are often built on deep foundations, either precast concrete piles or cast-in-place drilled shafts. The pile or shaft must be long enough to reach a rock layer or to provide sufficient resistance t...
Bilingualism and Special Education: Program and Pedagogical Issues.
ERIC Educational Resources Information Center
Cummins, Jim
1983-01-01
Application of the research-based principle that first and second language cognitive and academic development are interdependent and that language acquisition is largely dependent on students receiving sufficient comprehensible input in the target language is discussed in reference to program planning for academically at risk language minority…
Assessing the Potential for Rooftop Rainwater Harvesting from Large Public Institutions.
Adugna, Dagnachew; Jensen, Marina Bergen; Lemma, Brook; Gebrie, Geremew Sahilu
2018-02-14
As in many other cities, urbanization coupled with population growth worsens the water supply problem of Addis Ababa, Ethiopia, which had a water supply deficit of 41% in 2016. To investigate the potential contribution of rooftop rainwater harvesting (RWH) from large public institutions, 320 such institutions were selected and grouped into 11 categories, from which a representative 25-30% (588 rooftops) were digitized and the potential RWH volume computed from a ten-year rainfall dataset. Comparing the resulting RWH potential with water consumption shows that up to 2.3% of the annual potable water supply can be provided. If the water is reused only within the harvesting institution, self-sufficiency varies from 0.9 to 649%. Non-uniform rainfall patterns add uncertainty to these numbers, since the size of the storage tank becomes critical for coverage in the dry season from October to May. Despite the low replacement potential at the city level, RWH from large institutions will enable a significant volume of potable water to be transferred to localities critically suffering from water shortage. Further, large institutions may demonstrate how RWH can be practiced, thus acting as frontrunners for the dissemination of RWH to other types of rooftops. To narrow the water supply gap, considering rooftop RWH as an alternative water supply source is recommended. However, financial constraints on installing large-sized storage tanks are considered a possible challenge. Thus, future research is needed to investigate the cost-benefit balance, along with the development of a cheap storage tank, as these may affect the potential contribution of rooftop RWH.
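For orientation, the harvesting arithmetic implied by the abstract reduces to roof area times rainfall depth times a runoff coefficient; the 0.8 coefficient and the example numbers below are assumptions for illustration, not values reported by the study.

```python
# Hypothetical rooftop rainwater harvesting arithmetic (runoff coefficient assumed).
def annual_rwh_potential_m3(roof_area_m2, annual_rainfall_mm, runoff_coeff=0.8):
    """Harvestable volume in cubic metres per year."""
    return roof_area_m2 * (annual_rainfall_mm / 1000.0) * runoff_coeff

# e.g. a 2000 m2 institutional roof under roughly 1100 mm of rain per year
print(annual_rwh_potential_m3(2000, 1100))  # about 1760 m3/yr
```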
Solving large scale structure in ten easy steps with COLA
NASA Astrophysics Data System (ADS)
Tassev, Svetlin; Zaldarriaga, Matias; Eisenstein, Daniel J.
2013-06-01
We present the COmoving Lagrangian Acceleration (COLA) method: an N-body method for solving for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). Unlike standard N-body methods, the COLA method can straightforwardly trade accuracy at small scales in order to gain computational speed without sacrificing accuracy at large scales. This is especially useful for cheaply generating large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing, as those catalogs are essential for performing detailed error analysis for ongoing and future surveys of LSS. As an illustration, we ran a COLA-based N-body code on a box of size 100 Mpc/h with particles of mass ≈ 5 × 10^9 M_solar/h. Running the code with only 10 timesteps was sufficient to obtain an accurate description of halo statistics down to halo masses of at least 10^11 M_solar/h. This is only at a modest speed penalty when compared to mocks obtained with LPT. A standard detailed N-body run is orders of magnitude slower than our COLA-based code. The speed-up we obtain with COLA is due to the fact that we calculate the large-scale dynamics exactly using LPT, while letting the N-body code solve for the small scales, without requiring it to capture exactly the internal dynamics of halos. Achieving a similar level of accuracy in halo statistics without the COLA method requires at least 3 times more timesteps than when COLA is employed.
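The following is a minimal structural sketch of the COLA idea described above, assuming precomputed LPT quantities and a user-supplied particle-mesh force; the function names and the plain kick-drift-kick form are illustrative simplifications, not the authors' implementation.

```python
# Minimal structural sketch of the COLA idea: write each particle position as
# x = x_LPT(q, a) + dx and let the N-body integrator evolve only the residual dx,
# so the large-scale (LPT) part of the dynamics is carried analytically even with
# very few timesteps. `lpt_position`, `lpt_accel` and `nbody_accel` are placeholders
# for the precomputed LPT trajectory and the usual particle-mesh force; the plain
# kick-drift-kick form below omits the cosmological prefactors of the real scheme.

def cola_step(q, dx, dp, a, da, lpt_position, lpt_accel, nbody_accel):
    """Advance the residual displacement dx and its momentum dp from a to a + da."""
    x = lpt_position(q, a) + dx                     # full particle positions
    residual = nbody_accel(x) - lpt_accel(q, a)     # gravity not already carried by LPT
    dp = dp + 0.5 * da * residual                   # half kick
    dx = dx + da * dp                               # drift of the residual only
    x = lpt_position(q, a + da) + dx
    residual = nbody_accel(x) - lpt_accel(q, a + da)
    dp = dp + 0.5 * da * residual                   # half kick
    return dx, dp
```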
Anticipation of Monetary Reward Can Attenuate the Vigilance Decrement
Grosso, Mallory; Liu, Guanyu; Mitko, Alex; Morris, Rachael; DeGutis, Joseph
2016-01-01
Motivation and reward can have differential effects on separate aspects of sustained attention. We previously demonstrated that continuous reward/punishment throughout a sustained attention task improves overall performance, but not vigilance decrements. One interpretation of these findings is that vigilance decrements are due to resource depletion, which is not overcome by increasing overall motivation. However, an alternative explanation is that as one performs a continuously rewarded task there are fewer potential gains/losses as the task progresses, which could decrease motivation over time, producing a vigilance decrement. This would predict that keeping future gains/losses consistent throughout the task would reduce the vigilance decrement. In the current study, we examined this possibility by comparing two versions (continuous-small-loss vs. anticipate-large-loss) of a 10-minute gradual-onset continuous performance task (gradCPT), a challenging go/no-go sustained attention task. Participants began each task with the potential to keep $18. In the continuous-small-loss version, small monetary losses were accrued continuously throughout the task for each error. However, in the anticipate-large-loss version, participants lost all $18 if they erroneously responded to one target that always appeared toward the end of the vigil. Typical vigilance decrements were observed in the continuous-small-loss condition. In the anticipate-large-loss condition, vigilance decrements were reduced, particularly when the anticipate-large-loss condition was completed second. This suggests that the looming possibility of a large loss can attenuate the vigilance decrement and that this attenuation may occur most consistently after sufficient task experience. We discuss these results in the context of current theories of sustained attention. PMID:27472785
An Eulerian time filtering technique to study large-scale transient flow phenomena
NASA Astrophysics Data System (ADS)
Vanierschot, Maarten; Persoons, Tim; van den Bulck, Eric
2009-10-01
Unsteady fluctuating velocity fields can contain large-scale periodic motions with frequencies well separated from those of turbulence. Examples are the wake behind a cylinder or the precessing vortex core in a swirling jet. These turbulent flow fields contain large-scale, low-frequency oscillations that are obscured by turbulence, making them difficult to identify. In this paper, we present an Eulerian time filtering (ETF) technique to extract the large-scale motions from unsteady, statistically non-stationary velocity fields or from flow fields with multiple phenomena that have sufficiently separated spectral content. The ETF method is based on non-causal time filtering of the velocity records at each point of the flow field. It is shown that the ETF technique gives good results, similar to the ones obtained by the phase-averaging method. In this paper, not only the influence of the temporal filter is examined, but also parameters such as the cut-off frequency and the sampling frequency of the data. The technique is validated on a selected set of time-resolved stereoscopic particle image velocimetry measurements, such as the initial region of an annular jet and the transition between flow patterns in an annular jet. The major advantage of the ETF method in the extraction of large scales is that it is computationally less expensive and requires less measurement time compared to other extraction methods. Therefore, the technique is suitable in the startup phase of an experiment or in a measurement campaign where several experiments are needed, such as parametric studies.
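A minimal sketch of the ETF idea, assuming a zero-phase (non-causal) Butterworth low-pass implemented with scipy; the filter order, cut-off, and test signal are arbitrary choices, not the paper's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Eulerian time filtering sketch: low-pass filter the velocity time record at every
# measurement point with a non-causal (zero-phase) filter, so the large-scale,
# low-frequency motion is retained while turbulence is removed.

def etf(velocity, fs, f_cut, order=4):
    """velocity: array of shape (n_time, n_points); fs: sampling frequency [Hz];
    f_cut: cut-off frequency [Hz], chosen between the large-scale and turbulent
    spectral content."""
    b, a = butter(order, f_cut / (0.5 * fs))   # normalized low-pass cut-off
    return filtfilt(b, a, velocity, axis=0)    # zero-phase (non-causal) filtering

# Example: a 0.5 Hz large-scale oscillation buried in broadband noise, fs = 100 Hz
t = np.arange(0, 20, 0.01)
u = np.sin(2 * np.pi * 0.5 * t)[:, None] + 0.5 * np.random.randn(t.size, 3)
u_large_scale = etf(u, fs=100.0, f_cut=2.0)
```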
Comparing DIF Methods for Data with Dual Dependency
ERIC Educational Resources Information Center
Jin, Ying; Kang, Minsoo
2016-01-01
Background: The current study compared four differential item functioning (DIF) methods to examine their performance in accounting for dual dependency (i.e., person and item clustering effects) simultaneously in a simulation study, an issue that is not sufficiently studied in the current DIF literature. The four methods compared are logistic…
Using Inquiry and Phylogeny: To Teach Comparative Morphology
ERIC Educational Resources Information Center
Giese, Alan R.
2005-01-01
A description of an inquiry-based approach to teaching comparative vertebrate skeletal morphology is presented that could be easily adapted to teach comparative morphology in any discipline, provided that sufficient physical models are available. This approach requires students to probe the material world for evidence that would allow them to…
Radar observations of Comet Halley
NASA Technical Reports Server (NTRS)
Campbell, D. B.; Harmon, J. K.; Shapiro, I. I.
1989-01-01
Five nights of Arecibo radar observations of Comet Halley are reported which reveal a feature in the overall average spectrum which, though weak, seems consistent with being an echo from the comet. The large radar cross section and large bandwidth of the feature suggest that the echo is predominantly from large grains which have been ejected from the nucleus. Extrapolation of the dust particle size distribution to large grain sizes gives a sufficient number of grains to account for the echo. The lack of a detectable echo from the nucleus, combined with estimates of its size and rotation rate from spacecraft encounters and other data, indicate that the nucleus has a surface of relatively high porosity.
Essiet, Inimfon Aniema; Baharom, Anisah; Shahar, Hayati Kadir; Uzochukwu, Benjamin
2017-01-01
Physical activity among university students is a catalyst for habitual physical activity in adulthood. Physical activity has many health benefits besides the improvement in academic performance. The present study assessed the predictors of physical activity among Nigerian university students using the Social Ecological Model (SEM). This cross-sectional study recruited first-year undergraduate students at the University of Uyo, Nigeria, by multistage sampling. The International Physical Activity Questionnaire (IPAQ) short version was used to assess physical activity. Factors were categorised according to the Social Ecological Model, which consists of the individual, social environment, physical environment and policy levels. Data were analysed using the IBM SPSS statistical software, version 22. Simple and multiple logistic regression were used to determine the predictors of sufficient physical activity. A total of 342 respondents completed the study questionnaire. The majority of respondents (93.6%) reported sufficient physical activity at 7-day recall. Multivariate analysis revealed that respondents belonging to the Ibibio ethnic group were about four times more likely to be sufficiently active than those belonging to other ethnic groups (AOR = 3.725, 95% CI = 1.383 to 10.032). Also, participants who had a normal weight were about four times more likely to be physically active than those who were underweight (AOR = 4.268, 95% CI = 1.323 to 13.772). This study concluded that physical activity levels among respondents were sufficient. It is suggested that emphasis be given to implementing interventions aimed at sustaining sufficient levels of physical activity among students.
NASA Astrophysics Data System (ADS)
Wördenweber, Roger; Hollmann, Eugen; Poltiasev, Michael; Neumüller, Heinz-Werner
2003-05-01
This paper addresses the development of a technically relevant sputter-deposition process for YBa2Cu3O7-delta films. First, simulation of the particle transport from target to substrate indicates that only at a reduced pressure of p ≈ 1-10 Pa can a sufficiently large deposition rate and a homogeneous stoichiometric distribution of the particles be expected during large-area deposition. The results of the simulations are generally confirmed by deposition experiments on CeO2-buffered sapphire and LaAlO3 substrates using a magnetron sputtering system suitable for large-area deposition. However, it is shown that in addition to the effect of scattering during particle transport, the conditions at the substrate lead to selective growth of Y-Ba-Cu-O phases that, among other things, strongly affects the growth rate. For example, the growth rate is more than three times larger for optimized parameters compared to the same set of parameters at a 100 K lower substrate temperature. Stoichiometrically and structurally perfect films can be grown at low pressure (p < 10 Pa); however, the superconducting transition temperature of these films is reduced. The Tc reduction appears to be correlated with the c-axis length of YBa2Cu3O7-delta. Two possible explanations for the increased c-axis length and the correlated reduced transition temperature are discussed, namely reduced oxygen content and strong cation site disorder due to the heavy particle bombardment.
Labor supply responses to large social transfers: Longitudinal evidence from South Africa
Ardington, Cally; Case, Anne; Hosegood, Victoria
2009-01-01
In many parts of the developing world, rural areas exhibit high rates of unemployment and underemployment. Understanding what prevents people from migrating to find better jobs is central to the development process. In this paper, we examine whether binding credit constraints and childcare constraints limit the ability of households to send labor migrants, and whether the arrival of a large, stable source of income – here, the South African old-age pension – helps households to overcome these constraints. Specifically, we quantify the labor supply responses of prime-aged individuals to changes in the presence of pensioners, using longitudinal data collected in KwaZulu-Natal. Our ability to compare households and individuals before and after pension receipt, and pension loss, allows us to control for a host of unobservable household and individual characteristics that may determine labor market behavior. We find that large cash transfers to elderly South Africans lead to increased employment among prime-aged members of their households, a result that is masked in cross-sectional analysis by differences between pension and non-pension households. Pension receipt also influences where this employment takes place. We find large, significant effects on labor migration upon pension arrival. The pension’s impact is attributable both to the increase in household resources it represents, which can be used to stake migrants until they become self-sufficient, and to the presence of pensioners who can care for small children, which allows prime-aged adults to look for work elsewhere. PMID:19750139
High-precision Ru isotopic measurements by multi-collector ICP-MS.
Becker, Harry; Dalpe, Claude; Walker, Richard J
2002-06-01
Ruthenium isotopic data for a pure Aldrich ruthenium nitrate solution, obtained using a Nu Plasma multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS), show excellent agreement (better than 1 epsilon unit = 1 part in 10^4) with data obtained by other techniques for the mass range between 96 and 101 amu. External precisions are at the 0.5-1.7 epsilon level (2 sigma). The higher sensitivity of MC-ICP-MS compared to negative thermal ionization mass spectrometry (N-TIMS) is offset by the uncertainties introduced by the relatively large mass discrimination and by instabilities in the plasma source-ion extraction region that affect the long-term reproducibility. The large mass bias correction in ICP mass spectrometry demands particular attention to the choice of normalizing isotopes. Because of its position in the mass spectrum and the large mass bias correction, obtaining precise and accurate abundance data for 104Ru by MC-ICP-MS remains difficult. Internal and external mass bias correction schemes in this mass range may show similar shortcomings if the isotope of interest does not lie within the mass range covered by the masses used for normalization. Analyses of meteorite samples show that if isobaric interferences from Mo are sufficiently large (Ru/Mo < 10^4), uncertainties on the Mo interference correction propagate through the mass bias correction and yield inaccurate results for Ru isotopic compositions. Second-order linear corrections may be used to correct for these inaccuracies, but such results are generally less precise than N-TIMS data.
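For context, instrumental mass bias of the kind discussed here is commonly corrected with the exponential (Russell) law; the general relation is reproduced below as an illustration and is not necessarily the exact scheme used in the study. For a measured ratio of isotopes i and j, normalized to a pair k/l with reference value R_kl^ref:

$$
\frac{R_{ij}^{\mathrm{meas}}}{R_{ij}^{\mathrm{true}}} = \left(\frac{m_i}{m_j}\right)^{\beta},
\qquad
\beta = \frac{\ln\left(R_{kl}^{\mathrm{meas}} / R_{kl}^{\mathrm{ref}}\right)}{\ln\left(m_k / m_l\right)}
$$

The abstract's point about normalization follows from this form: if the isotope of interest (for example 104Ru) lies outside the mass range spanned by the normalizing pair, errors in beta are extrapolated rather than interpolated, which degrades accuracy.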
Aerobic Hydrogen Production via Nitrogenase in Azotobacter vinelandii CA6
Noar, Jesse; Loveless, Telisa; Navarro-Herrero, José Luis; Olson, Jonathan W.
2015-01-01
The diazotroph Azotobacter vinelandii possesses three distinct nitrogenase isoenzymes, all of which produce molecular hydrogen as a by-product. In batch cultures, A. vinelandii strain CA6, a mutant of strain CA, displays multiple phenotypes distinct from its parent: tolerance to tungstate, impaired growth and molybdate transport, and increased hydrogen evolution. Determining and comparing the genomic sequences of strains CA and CA6 revealed a large deletion in CA6's genome, encompassing genes related to molybdate and iron transport and hydrogen reoxidation. A series of iron uptake analyses and chemostat culture experiments confirmed iron transport impairment and showed that the addition of fixed nitrogen (ammonia) resulted in cessation of hydrogen production. Additional chemostat experiments compared the hydrogen-producing parameters of different strains: in iron-sufficient, tungstate-free conditions, strain CA6's yields were identical to those of a strain lacking only a single hydrogenase gene. However, in the presence of tungstate, CA6 produced several times more hydrogen. A. vinelandii may hold promise for developing a novel strategy for production of hydrogen as an energy compound. PMID:25911479
NASA Astrophysics Data System (ADS)
Springborg, Michael; Molayem, Mohammad; Kirtman, Bernard
2017-09-01
A theoretical treatment of the orbital response of an infinite, periodic system to a static, homogeneous magnetic field is presented. It is assumed that the system of interest has an energy gap separating occupied and unoccupied orbitals and a zero Chern number. In contrast to earlier studies, we do not utilize a perturbation expansion, although we do assume the field is sufficiently weak that the occurrence of Landau levels can be ignored. The theory is developed by analyzing results for large, finite systems and by comparing with the analogous treatment of an electrostatic field. The resulting many-electron Hamiltonian operator is forced to be Hermitian, but Hermiticity is not preserved, in general, for the subsequently derived single-particle operators that determine the electronic orbitals. However, we demonstrate that when focusing on the canonical solutions to the single-particle equations, Hermiticity is preserved. The issue of gauge-origin dependence of approximate solutions is addressed. Our approach is compared with several previously proposed treatments, whereby limitations in some of the latter are identified.
Numerical study of the effects of rotating forced downdraft in reproducing tornado-like vortices
NASA Astrophysics Data System (ADS)
Zhu, Jinwei; Cao, Shuyang; Tamura, Tetsuro; Tokyo Institute of Technology Collaboration; Tongji Univ Collaboration
2016-11-01
Appropriate physical modeling of a tornado-like vortex is a prerequisite to studying near-surface tornado structure and tornado-induced wind loads on structures. The Ward-type tornado simulator models tornado-like flow by mounting guide vanes around the test area to provide angular momentum to the converging flow. Iowa State University (USA) modified the Ward-type simulator by locating the guide vanes at a high position to allow vertical circulation of flow, which creates a rotating forced downdraft in the process of generating a tornado. However, the characteristics of the generated vortices have not been sufficiently investigated to date. In this study, large-eddy simulations were conducted to compare the dynamic vortex structure generated with and without the effect of the rotating forced downdraft. The results were also compared with other CFD and experimental results. Particular attention was devoted to the behavior of vortex wander of the generated tornado-like vortices. The present study shows that the vortex center wanders more significantly when the rotating forced downdraft is introduced into the flow. The rotating forced downdraft is advantageous for modeling the rear-flank downdraft phenomenon of a real tornado.
Su, Zheng; Borho, Nicole; Xu, Yunjie
2006-12-27
In this report, we describe rotational spectroscopic and high-level ab initio studies of the 1:1 chiral molecular adduct of propylene oxide, i.e., the propylene oxide dimer. The complexes are bound by weak secondary hydrogen bonds, that is, O(epoxy)...H-C noncovalent interactions. Six homochiral and six heterochiral conformers were predicted to be the most stable configurations, in which each monomer acts as a proton acceptor and a donor simultaneously, forming two six- or five-membered intermolecular hydrogen-bonded rings. Rotational spectra of six of the conformers, that is, three homochiral-heterochiral conformer pairs, out of the eight predicted to have sufficiently large permanent electric dipole moments were measured and analyzed. The relative conformational stability order and the signs of the chiral recognition energies of the six conformers were determined experimentally and compared to the ab initio computational results. The experimental observations and the ab initio calculations suggest that the concerted effort of these weak secondary hydrogen bonds can successfully lock the subunits in a particular orientation and that the overall binding strength is comparable to a classic hydrogen bond.
Estimating raptor nesting success: old and new approaches
Brown, Jessi L.; Steenhof, Karen; Kochert, Michael N.; Bond, Laura
2013-01-01
Studies of nesting success can be valuable in assessing the status of raptor populations, but differing monitoring protocols can present unique challenges when comparing populations of different species across time or geographic areas. We used large datasets from long-term studies of 3 raptor species to compare estimates of apparent nest success (ANS, the ratio of successful to total number of nesting attempts), Mayfield nesting success, and the logistic-exposure model of nest survival. Golden eagles (Aquila chrysaetos), prairie falcons (Falco mexicanus), and American kestrels (F. sparverius) differ in their breeding biology and the methods often used to monitor their reproduction. Mayfield and logistic-exposure models generated similar estimates of nesting success with similar levels of precision. Apparent nest success overestimated nesting success and was particularly sensitive to inclusion of nesting attempts discovered late in the nesting season. Thus, the ANS estimator is inappropriate when exact point estimates are required, especially when most raptor pairs cannot be located before or soon after laying eggs. However, ANS may be sufficient to assess long-term trends of species in which nesting attempts are highly detectable.
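To make the contrast concrete, the sketch below compares apparent nest success with the classic Mayfield daily-survival-rate estimator using hypothetical monitoring data, not the study's datasets.

```python
import numpy as np

# Apparent nest success (ANS) vs. a Mayfield-style estimate based on the daily
# survival rate (DSR), with hypothetical monitoring data. Nests found late in the
# season contribute few exposure days, which is why ANS tends to overestimate success.

exposure_days = np.array([30, 12, 45, 40, 8, 35])   # days each nest was under observation
failed        = np.array([0,  1,  0,  0, 1,  0])    # 1 = nest failed during observation
nest_period   = 45                                   # assumed days from laying to fledging

ans = 1 - failed.sum() / failed.size                 # successful nests / nests found

dsr = 1 - failed.sum() / exposure_days.sum()         # Mayfield daily survival rate
mayfield_success = dsr ** nest_period                # survival over the full nest period

print(f"ANS = {ans:.2f}, Mayfield = {mayfield_success:.2f}")
```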
The efficiency of asset management strategies to reduce urban flood risk.
ten Veldhuis, J A E; Clemens, F H L R
2011-01-01
In this study, three asset management strategies were compared with respect to their efficiency in reducing flood risk. Data from call centres at two municipalities were used to quantify urban flood risks associated with three causes of urban flooding: gully pot blockage, sewer pipe blockage and sewer overloading. The efficiency of the three flood reduction strategies was assessed based on their effect on the causes contributing to flood risk. The sensitivity of the results to uncertainty in the data source, citizens' calls, was analysed by incorporating uncertainty ranges taken from the customer complaint literature. Based on the available data, it could be shown that tackling gully pot blockage is the most efficient action to reduce flood risk, given the data uncertainty. If differences between cause incidences are large, as in the presented case study, call data are sufficient to decide how flood risk can be most efficiently reduced. According to the results of this analysis, enlargement of sewer pipes is not an efficient strategy to reduce flood risk, because the flood risk associated with sewer overloading is small compared to that of the other failure mechanisms.
Hudec, Kristen L; Alderson, R Matt; Patros, Connor H G; Lea, Sarah E; Tarle, Stephanie J; Kasper, Lisa J
2015-01-01
Motor activity of boys (age 8-12 years) with (n=19) and without (n=18) ADHD was objectively measured with actigraphy across experimental conditions that varied with regard to demands on executive functions. Activity exhibited during two n-back (1-back, 2-back) working memory tasks was compared to activity during a choice-reaction time (CRT) task that placed relatively fewer demands on executive processes and during a simple reaction time (SRT) task that required mostly automatic processing with minimal executive demands. Results indicated that children in the ADHD group exhibited greater activity compared to children in the non-ADHD group. Further, both groups exhibited the greatest activity during conditions with high working memory demands, followed by the reaction time and control task conditions, respectively. The findings indicate that large-magnitude increases in motor activity are predominantly associated with increased demands on working memory, though demands on non-executive processes are sufficient to elicit small to moderate increases in motor activity as well. Published by Elsevier Ltd.
Ticketing aggressive cars and trucks (TACT): How does it work on city streets?
Telford, Russell; Cook, Lawrence J; Olson, Lenora M
2018-02-17
The purpose of this study was to determine the feasibility of modifying the Ticketing Aggressive Cars and Trucks (TACT) program, originally designed for state highways, to a metropolitan area in order to reduce unsafe interactions, and the related crashes, between drivers of large trucks and passenger vehicles. Using crash data, the driving behaviors most commonly associated with large truck and passenger vehicle crashes were identified. A public awareness campaign using media messaging and increased law enforcement was created targeting these behaviors. The frequency of these behaviors both before and after the public awareness campaign was determined through observation of traffic at 3 specific locations within the city. Each location had a sufficient volume of large truck and passenger traffic to observe frequent interactions. Pre- and postintervention data were compared using negative binomial regression with generalized estimating equations to evaluate whether the campaign was associated with a reduction in the identified driving behaviors. A comparison of crash data from before, during, and after the campaign with crashes during the same time periods in previous years did not show a significant difference (P = .081). The number of large trucks observed in traffic remained the same during the pre- and postintervention periods (P = .625). The rates of negative interactions per 100 large trucks decreased for both large trucks and passenger vehicles after the intervention, with rate ratios of 0.58 (95% confidence interval [CI], 0.48, 0.70) and 0.31 (95% CI, 0.20, 0.47), respectively. The greatest reduction was seen in passenger vehicles following too close, with a rate ratio of 0.21 (95% CI, 0.15, 0.30). Although designed to reduce crashes on highways, the TACT program can be an effective approach for improving driver behaviors on city streets.
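A minimal sketch of the type of analysis described (negative binomial regression fitted with generalized estimating equations and a truck-volume offset, so exponentiated coefficients are rate ratios); the data frame, grouping, and correlation structure are hypothetical assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Counts of negative interactions per observation session, modeled with negative
# binomial regression and GEE to account for repeated observations at the same
# locations. The counts and truck volumes below are made up for illustration.
df = pd.DataFrame({
    "site":   [1, 1, 2, 2, 3, 3],             # three observation locations
    "post":   [0, 1, 0, 1, 0, 1],             # 0 = pre-campaign, 1 = post-campaign
    "events": [52, 30, 47, 26, 61, 38],       # hypothetical negative-interaction counts
    "trucks": [410, 405, 380, 395, 450, 460]  # large trucks observed (exposure)
})

X = sm.add_constant(df[["post"]])
model = sm.GEE(df["events"], X, groups=df["site"],
               family=sm.families.NegativeBinomial(),
               cov_struct=sm.cov_struct.Exchangeable(),
               offset=np.log(df["trucks"]))
result = model.fit()
print(np.exp(result.params["post"]))  # rate ratio: post- vs. pre-campaign
```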
Forward Helion Scattering and Neutron Polarization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buttimore, N. H.
The elastic scattering of spin-half helium-3 nuclei at small angles can show a sufficiently large analyzing power to enable the level of helion polarization to be evaluated. As the helion to a large extent inherits the polarization of its unpaired neutron, the asymmetry observed in helion collisions can be transformed into a measurement of the polarization of its constituent neutron. Neutron polarimetry therefore relies upon understanding the spin dependence of the electromagnetic and hadronic interactions in the interference region where there is an optimal analyzing power.
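In standard single-spin polarimetry notation (a general relation, not a formula quoted from this report), the measured left-right asymmetry ties the beam polarization P to the analyzing power A_N, so a known analyzing power converts an observed asymmetry into a polarization:

$$
\varepsilon = P \, A_N \qquad \Longrightarrow \qquad P = \frac{\varepsilon}{A_N}
$$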
Stackable In-Line Surface Missile Launch System for a Modular Payload Bay
2004-11-08
stacked modules are connected and sealed to form a single long continuous missile tube. Flexible seals may be used at the base of each missile...vehicles, such as missiles, both through vertical launch via specialized launch tubes on the submarine, and horizontal launch via the submarine's...torpedo tubes. In some cases, the missiles are quite large, such as the Tomahawk missile, which requires sufficient support for the large
Commentary: Environmental nanophotonics and energy
NASA Astrophysics Data System (ADS)
Smith, Geoff B.
2011-01-01
The reasons nanophotonics is proving central to meeting the need for large gains in energy efficiency and renewable energy supply are analyzed. It enables optimum management and use of environmental energy flows at low cost and on a sufficient scale by providing spectral, directional and temporal control in tune with radiant flows from the sun and the local atmosphere. Benefits and problems involved in large-scale manufacture and deployment are discussed, including how safety issues in some nanosystems can be managed and avoided, a process long established in nature.
Collection of Calibration and Validation Data for an Airport Landside Dynamic Simulation Model.
1980-04-01
movements. The volume of skiers passing through Denver is sufficiently large to warrant the installation of special check-in counters for passengers with...Terminal, only seven sectors were used. Training Procedures MIA was the first of the three airports surveyed. A substantial amount of knowledge and
34 CFR 5.64 - Waiver or reduction of fees.
Code of Federal Regulations, 2010 CFR
2010-07-01
Waiver or reduction of fees. Section 5.64, Office of the Secretary, Department of Education, AVAILABILITY OF INFORMATION TO THE PUBLIC... requester is sufficiently large, in comparison with the public interest in disclosure, that disclosure is...
Capacity Issue Looms for Vouchers
ERIC Educational Resources Information Center
Zehr, Mary Ann
2011-01-01
State-level momentum in support of vouchers and tax credits that help students go to private schools highlights what has been a largely theoretical issue: private school capacity to support voucher-financed enrollment. Academics say the national supply of seats in secular and religious private schools is sufficient to meet short-term demand from…
A Theory of School Achievement: A Quantum View
ERIC Educational Resources Information Center
Phelps, James L.
2012-01-01
In most school achievement research, the relationships between achievement and explanatory variables follow the Newton and Einstein concept/principle and the viewpoint of the macro-observer: Deterministic measures based on the mean value of a sufficiently large number of schools. What if the relationships between achievement and explanatory…
Thin-Slice Perception Develops Slowly
ERIC Educational Resources Information Center
Balas, Benjamin; Kanwisher, Nancy; Saxe, Rebecca
2012-01-01
Body language and facial gesture provide sufficient visual information to support high-level social inferences from "thin slices" of behavior. Given short movies of nonverbal behavior, adults make reliable judgments in a large number of tasks. Here we find that the high precision of adults' nonverbal social perception depends on the slow…
Principals' Perceptions of the Superintendency: A Five-State Study
ERIC Educational Resources Information Center
Boyland, Lori
2013-01-01
Due to an aging population of currently practicing superintendents, research predicts a large turnover in public school superintendent positions in this decade. Questions exist regarding whether there are sufficient numbers of potential superintendent candidates in training to fill these positions. Although principals have been recognized as a…
In-House vs. Franchise College Food Services and Bookstores.
ERIC Educational Resources Information Center
Stumph, W. J.
In determining whether colleges or universities should operate their own food services or bookstores or lease them to contract operators, school business officers should consider a number of factors. These include whether sales volume is sufficiently large to cover direct operating costs and overhead; inventory investment; appearance, service, and…
36 CFR 219.26 - Identifying and designating suitable uses.
Code of Federal Regulations, 2010 CFR
2010-07-01
... travel, or other uses except where lands are determined to be unsuited for a particular use. Lands are.... Planning documents should describe or display lands suitable for various uses in areas large enough to provide sufficient latitude for periodic adjustments in use to conform to changing needs and conditions. ...
ERIC Educational Resources Information Center
Shields, Tanya
2012-01-01
As a primary teacher in a large junior school the author would spend many Sunday afternoons planning exciting science lessons only to find they did not include sufficient mathematical knowledge and skills. At the time, the Numeracy Strategy was spreading through classrooms like wildfire. Meanwhile, science lessons were progressing under the…
Microarrays for Undergraduate Classes
ERIC Educational Resources Information Center
Hancock, Dale; Nguyen, Lisa L.; Denyer, Gareth S.; Johnston, Jill M.
2006-01-01
A microarray experiment is presented that, in six laboratory sessions, takes undergraduate students from the tissue sample right through to data analysis. The model chosen, the murine erythroleukemia cell line, can be easily cultured in sufficient quantities for class use. Large changes in gene expression can be induced in these cells by…
African American Women and Obesity through the Prism of Race
ERIC Educational Resources Information Center
Knox-Kazimierczuk, Francoise; Geller, Karly; Sellers, Sherrill; Taliaferro Baszile, Denise; Smith-Shockley, Meredith
2018-01-01
Background: There are minimal studies focusing on African American women and obesity, and there are even fewer studies examining obesity through a critical race theoretical framework. African American obesity research has largely focused on individual and community interventions, which have not been sufficient to reverse the obesity epidemic.…
Protocol for emergency EPR dosimetry in fingernails
USDA-ARS?s Scientific Manuscript database
There is an increased need for after-the-fact dosimetry because of the high risk of radiation exposures due to terrorism or accidents. In case of such an event, a method is needed to make measurements of dose in a large number of individuals rapidly and with sufficient accuracy to facilitate effect...
Licensing in an International Market
NASA Astrophysics Data System (ADS)
Ferreira, Fernanda A.
2008-09-01
We study the effects of entry of a foreign firm on domestic welfare in the presence of licensing, when the entrant is technologically superior to the incumbent. We show that foreign entry increases domestic welfare for sufficiently large technological differences between the firms under both fixed-fee licensing and royalty licensing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokosawa, A.
The first polarized collider where we collide 250-GeV/c beams of 70% polarized protons at high luminosity is under construction. This will allow a determination of the nucleon spin-dependent structure functions over a large range in x and a collection of sufficient W and Z events to investigate extremely interesting spin-related phenomena.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokosawa, A.
The first polarized collider where one collides 250-GeV/c beams of 70% polarized protons at high luminosity is under construction. This will allow a determination of the nucleon spin-dependent structure functions over a large range in x and a collection of sufficient W and Z events to investigate extremely interesting spin-related phenomena.
Improving a Lecture-Size Molecular Model Set by Repurposing Used Whiteboard Markers
ERIC Educational Resources Information Center
Dragojlovic, Veljko
2015-01-01
Preparation of an inexpensive model set from whiteboard markers and either an HGS molecular model set or atoms made of wood is described. The model set is relatively easy to prepare and is sufficiently large to be suitable as an instructor set for use in lectures.
Distance Learning: Videoconferences as Vehicles for Faculty Development in Gerontology/Geriatrics.
ERIC Educational Resources Information Center
Wood, Joan B.; Parham, Iris A.
1996-01-01
From 1985-1992 the Virginia Geriatric Education Center broadcast via satellite 22 videoconferences involving over 22,000 health professionals in the United States, Canada, and Bermuda. The program required substantial marketing to attract sufficiently large audiences to be cost effective, was labor intensive, and necessitated technical expertise.…
How, When, and Where? Assessing Renewable Energy Self-Sufficiency at the Neighborhood Level.
Grosspietsch, David; Thömmes, Philippe; Girod, Bastien; Hoffmann, Volker H
2018-02-20
Self-sufficient decentralized systems challenge the centralized energy paradigm. Although scholars have assessed specific locations and technological aspects, it remains unclear how, when, and where energy self-sufficiency could become competitive. To address this gap, we develop a techno-economic model for energy self-sufficient neighborhoods that integrates solar photovoltaics (PV), conversion, and storage technologies. We assess the cost of 100% self-sufficiency for both electricity and heat, comparing different technical configurations for a stylized neighborhood in Switzerland and juxtaposing these findings with projections on market and technology development. We then broaden the scope and vary the neighborhood's composition (residential share) and geographic position (along different latitudes). Regarding how to design self-sufficient neighborhoods, we find two promising technical configurations. The "PV-battery-hydrogen" configuration is projected to outperform a fossil-fueled and grid-connected reference configuration when energy prices increase by 2.5% annually and cost reductions in hydrogen-related technologies by a factor of 2 are achieved. The "PV-battery" configuration would allow achieving parity with the reference configuration sooner, at 21% cost reduction. Additionally, more cost-efficient deployment is found in neighborhoods where the end-use is small commercial or mixed and in regions where seasonal fluctuations are low and thus allow for reducing storage requirements.
Bunno, Maki; Gouda, Kyosuke; Yamahara, Kunihiro; Kawaguchi, Masanori
2013-01-01
Endoscopic submucosal dissection (ESD) is useful for treating gastric tumors. Several trials have shown the efficacy of 4 or 8 weeks of proton pump inhibitor (PPI) administration for post-ESD ulcers. However, if the size of the post-ESD ulcer is larger than predicted, PPI administration alone might not be sufficient for the ulcer to heal within 4 weeks. There is no report on the efficacy of esomeprazole for post-ESD gastric ulcers. We retrospectively examined the efficacy of combination therapy with esomeprazole plus rebamipide, a mucosal-protective antiulcer drug, in accelerating post-ESD ulcer healing, compared with omeprazole plus rebamipide. We reviewed the medical records of patients who underwent ESD for gastric neoplasia. We conducted a case-control study to compare the healing rates within 4 weeks achieved by esomeprazole plus rebamipide (group E) and omeprazole plus rebamipide (group O). The artificial ulcers were classified as normal-sized or large-sized. The baseline characteristics did not differ significantly between the two groups except for age and sex. Stage S1 disease was observed in 27.6% and 38.7% of patients after 4 weeks of treatment in groups E and O, respectively. For large-sized artificial ulcers, the stage S1 healing rate within 4 weeks in group E was significantly higher than that in group O (25% vs. 0%; P = 0.02). The safety and efficacy profiles of esomeprazole plus rebamipide and omeprazole plus rebamipide are similar for the treatment of ESD-induced ulcers. For large-sized ulcers, esomeprazole plus rebamipide promotes ulcer healing.
Assmann, Karen E; Bailet, Marion; Lecoffre, Amandine C; Galan, Pilar; Hercberg, Serge; Amieva, Hélène; Kesse-Guyot, Emmanuelle
2016-04-05
Dementia is a major public health problem, and repeated cognitive data from large epidemiological studies could help to develop efficient measures of early prevention. Data collection by self-administered online tools could drastically reduce the logistical and financial burden of such large-scale investigations. In this context, it is important to obtain data concerning the comparability of such new online tools with traditional, supervised modes of cognitive assessment. Our objective was to compare self-administration of the Web-based NutriNet-Santé cognitive test battery (NutriCog) with administration by a neuropsychologist. The test battery included four tests measuring, among other aspects, psychomotor speed, attention, executive function, episodic memory, working memory, and associative memory. Both versions of the cognitive battery were completed by 189 volunteers (either the self-administered version first, n=99, or the supervised version first, n=90). Subjects also completed a satisfaction questionnaire. Concordance was assessed by Spearman correlation. Agreement between the two versions varied according to the investigated cognitive task and outcome variable. Spearman correlations ranged between .42 and .73. Moreover, a majority of participants responded that they "absolutely" or "rather" agreed that the duration of the self-administered battery was acceptable (184/185, 99.5%), that the tasks were amusing (162/185, 87.6%), that the instructions were sufficiently detailed (168/185, 90.8%) and understandable (164/185, 88.7%), and that they had overall enjoyed the test battery (182/185, 98.4%). The self-administered version of the Web-based NutriCog cognitive test battery provided similar information to the supervised version. Thus, integrating repeated cognitive evaluations into large cohorts via self-administered online versions of traditional test batteries appears to be feasible.
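A minimal sketch of the concordance analysis described (Spearman rank correlation between the self-administered and supervised scores), using simulated values in place of the real outcomes.

```python
import numpy as np
from scipy.stats import spearmanr

# Concordance between two administrations of the same cognitive outcome,
# assessed by Spearman rank correlation. Scores are simulated stand-ins.
rng = np.random.default_rng(0)
supervised = rng.normal(50, 10, size=189)             # hypothetical supervised scores
self_admin = supervised + rng.normal(0, 8, size=189)  # noisy self-administered repeat

rho, p = spearmanr(self_admin, supervised)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```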
Numerical comparisons of ground motion predictions with kinematic rupture modeling
NASA Astrophysics Data System (ADS)
Yuan, Y. O.; Zurek, B.; Liu, F.; deMartin, B.; Lacasse, M. D.
2017-12-01
Recent advances in large-scale wave simulators allow for the computation of seismograms at unprecedented levels of detail and for areas sufficiently large to be relevant to small regional studies. In some instances, detailed information of the mechanical properties of the subsurface has been obtained from seismic exploration surveys, well data, and core analysis. Using kinematic rupture modeling, this information can be used with a wave propagation simulator to predict the ground motion that would result from an assumed fault rupture. The purpose of this work is to explore the limits of wave propagation simulators for modeling ground motion in different settings, and in particular, to explore the numerical accuracy of different methods in the presence of features that are challenging to simulate such as topography, low-velocity surface layers, and shallow sources. In the main part of this work, we use a variety of synthetic three-dimensional models and compare the relative costs and benefits of different numerical discretization methods in computing the seismograms of realistic-size models. The finite-difference method, the discontinuous-Galerkin method, and the spectral-element method are compared for a range of synthetic models having different levels of complexity such as topography, large subsurface features, low-velocity surface layers, and the location and characteristics of fault ruptures represented as an array of seismic sources. While some previous studies have already demonstrated that unstructured-mesh methods can sometimes tackle complex problems (Moczo et al.), we investigate the trade-off between unstructured-mesh methods and regular-grid methods for a broad range of models and source configurations. Finally, for comparison, our direct simulation results are briefly contrasted with those predicted by a few phenomenological ground-motion prediction equations, and a workflow for accurately predicting ground motion is proposed.
Gao, Chunsheng; Xin, Pengfei; Cheng, Chaohua; Tang, Qing; Chen, Ping; Wang, Changbiao; Zang, Gonggu; Zhao, Lining
2014-01-01
Cannabis sativa L. is an important economic plant for the production of food, fiber, oils, and intoxicants. However, lack of sufficient simple sequence repeat (SSR) markers has limited the development of cannabis genetic research. Here, large-scale development of expressed sequence tag simple sequence repeat (EST-SSR) markers was performed to obtain more informative genetic markers, and to assess genetic diversity in cannabis (Cannabis sativa L.). Based on the cannabis transcriptome, 4,577 SSRs were identified from 3,624 ESTs. From there, a total of 3,442 complementary primer pairs were designed as SSR markers. Among these markers, trinucleotide repeat motifs (50.99%) were the most abundant, followed by hexanucleotide (25.13%), dinucleotide (16.34%), tetranucleotide (3.8%), and pentanucleotide (3.74%) repeat motifs, respectively. The AAG/CTT trinucleotide repeat (17.96%) was the most abundant motif detected in the SSRs. One hundred and seventeen EST-SSR markers were randomly selected to evaluate primer quality in 24 cannabis varieties. Among these 117 markers, 108 (92.31%) were successfully amplified and 87 (74.36%) were polymorphic. Forty-five polymorphic primer pairs were selected to evaluate genetic diversity and relatedness among the 115 cannabis genotypes. The results showed that 115 varieties could be divided into 4 groups primarily based on geography: Northern China, Europe, Central China, and Southern China. Moreover, the coefficient of similarity when comparing cannabis from Northern China with the European group cannabis was higher than that when comparing with cannabis from the other two groups, owing to a similar climate. This study outlines the first large-scale development of SSR markers for cannabis. These data may serve as a foundation for the development of genetic linkage, quantitative trait loci mapping, and marker-assisted breeding of cannabis.
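A minimal sketch, not the authors' pipeline, of how SSR motifs of the kind tallied above can be mined from EST sequences with a regular expression; the example sequences and the repeat-count threshold are illustrative assumptions.

```python
import re
from collections import Counter

# Mine simple sequence repeats: a 2-6 bp motif repeated at least five times in a row.
ests = {
    "est_001": "ATGCACTTCTTCTTCTTCTTGGCATGCA",   # contains a (CTT)5 repeat
    "est_002": "GGCACACACACACACATTGA",           # contains a (CA)7 repeat
}

ssr_pattern = re.compile(r"([ACGT]{2,6}?)\1{4,}")  # motif plus at least 4 further copies

motif_counts = Counter()
for name, seq in ests.items():
    for m in ssr_pattern.finditer(seq):
        motif = m.group(1)
        motif_counts[motif] += 1
        print(f"{name}: ({motif})n repeat at {m.start()}-{m.end()}, length {m.end() - m.start()}")

print(motif_counts)
```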
Sufficiency of Mesolimbic Dopamine Neuron Stimulation for the Progression to Addiction.
Pascoli, Vincent; Terrier, Jean; Hiver, Agnès; Lüscher, Christian
2015-12-02
The factors causing the transition from recreational drug consumption to addiction remain largely unknown. It has not been tested whether dopamine (DA) is sufficient to trigger this process. Here we use optogenetic self-stimulation of DA neurons of the ventral tegmental area (VTA) to selectively mimic the defining commonality of addictive drugs. All mice readily acquired self-stimulation. After weeks of abstinence, cue-induced relapse was observed in parallel with a potentiation of excitatory afferents onto D1 receptor-expressing neurons of the nucleus accumbens (NAc). When the mice had to endure a mild electric foot shock to obtain a stimulation, some stopped while others persevered. The resistance to punishment was associated with enhanced neural activity in the orbitofrontal cortex (OFC) while chemogenetic inhibition of the OFC reduced compulsivity. Together, these results show that stimulating VTA DA neurons induces behavioral and cellular hallmarks of addiction, indicating sufficiency for the induction and progression of the disease. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Watson, Kent A.; Connell, John W.; Delozier, Donavon M.; Smith, Joseph G., Jr.
2004-01-01
Space environmentally durable polymeric films with low color and sufficient electrical conductivity to mitigate electrostatic charge (ESC) build-up have been under investigation as part of a materials development activity. These materials have potential applications on advanced spacecraft, particularly on large, deployable, ultra-light weight Gossamer spacecraft. The approach taken to impart sufficient electrical conductivity into the polymer film while maintaining flexibility is to use single wall carbon nanotubes (SWNTs) as conductive additives. Approaches investigated in our lab involved an in-situ polymerization method, addition of SWNTs to a polymer containing reactive end-groups, and spray coating of polymer surfaces. The work described herein is a summary of the current status of this project. Surface conductivities (measured as surface resistance) in the range sufficient for ESC mitigation were achieved with minimal effects on the physical, thermal, mechanical and optical properties of the films. Additionally, the electrical conductivity was not affected by harsh mechanical manipulation of the films. The chemistry and physical properties of these nanocomposites will be discussed.
Validation of heart rate extraction through an iPhone accelerometer.
Kwon, Sungjun; Lee, Jeongsu; Chung, Gih Sung; Park, Kwang Suk
2011-01-01
Ubiquitous medical technology may provide advanced utility for evaluating the status of a patient beyond the clinical environment. The iPhone provides the capacity to measure heart rate, as it contains a 3-axis accelerometer that is sufficiently sensitive to perceive the tiny body movements caused by the pumping of the heart. In this preliminary study, an iPhone was tested and evaluated as a reliable heart-rate extractor for medical use by comparison with a reference electrocardiogram (ECG). Comparing the heart rate extracted from the acquired acceleration data with that extracted from the reference ECG signal showed that the iPhone functions as a reliable heart-rate extractor with sufficient accuracy and consistency.
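A minimal sketch of one way such an extraction can be performed (band-pass filtering followed by peak detection on a synthetic acceleration trace); the filter band, peak thresholds, and the signal itself are assumptions for illustration, not the paper's processing chain.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

# Extract heart rate from a chest-accelerometer-like record: band-pass the
# heartbeat-induced component, detect beats as peaks, and convert the mean
# beat interval to beats per minute. All numbers here are assumed.
fs = 100.0                                     # sampling frequency [Hz]
t = np.arange(0, 30, 1 / fs)
beat_times = np.arange(0.5, 30, 0.8)           # synthetic 75 bpm heartbeat train
accel = sum(np.exp(-((t - tb) ** 2) / (2 * 0.02 ** 2)) for tb in beat_times)
accel += 0.05 * np.random.randn(t.size)        # sensor noise

b, a = butter(3, [5 / (0.5 * fs), 20 / (0.5 * fs)], btype="band")
filtered = filtfilt(b, a, accel)               # isolate the beat-related band

peaks, _ = find_peaks(filtered, distance=int(0.4 * fs), height=0.2)
rr = np.diff(peaks) / fs                       # beat-to-beat intervals [s]
print(f"estimated heart rate: {60 / rr.mean():.1f} bpm")
```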
Modeling and Improving Information Flows in the Development of Large Business Applications
NASA Astrophysics Data System (ADS)
Schneider, Kurt; Lübke, Daniel
Designing a good architecture for an application is a wicked problem. Therefore, experience and knowledge are considered crucial for informing work in software architecture. However, many organizations do not pay sufficient attention to experience exploitation and architectural learning. Many users of information systems are not aware of the options for, and the need to, report problems and requirements. They often do not have time to describe an encountered problem in sufficient detail for developers to resolve it, and there may be a lengthy process for providing feedback. Hence, knowledge about problems and potential solutions is not shared effectively. Architectural knowledge needs to include evaluative feedback as well as decisions and their reasons (rationale).
Complex behavior in chains of nonlinear oscillators.
Alonso, Leandro M
2017-06-01
This article outlines sufficient conditions under which a one-dimensional chain of identical nonlinear oscillators can display complex spatio-temporal behavior. The units are described by phase equations and consist of excitable oscillators. The interactions are local and the network is poised to a critical state by balancing excitation and inhibition locally. The results presented here suggest that in networks composed of many oscillatory units with local interactions, excitability together with balanced interactions is sufficient to give rise to complex emergent features. For values of the parameters where complex behavior occurs, the system also displays a high-dimensional bifurcation where an exponentially large number of equilibria are borne in pairs out of multiple saddle-node bifurcations.
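A minimal sketch of a locally coupled chain of excitable phase oscillators in the spirit of the model described; the Adler-type phase equation, the alternating coupling signs used to mimic a local excitation-inhibition balance, and all parameter values are assumptions for illustration.

```python
import numpy as np

# Chain of excitable phase units with nearest-neighbor coupling. With b > omega an
# isolated unit has a stable rest state (excitable); neighbor input can push it
# over threshold and trigger a full phase rotation (a "spike").
N = 100
omega, b = 1.0, 1.05
g = np.where(np.arange(N) % 2 == 0, 0.5, -0.5)   # alternating coupling gains (assumed balance)
theta = 2 * np.pi * np.random.rand(N)

dt, steps = 0.01, 20000
history = np.empty((steps, N))
for k in range(steps):
    left = np.roll(theta, 1)
    right = np.roll(theta, -1)
    coupling = g * (np.sin(left - theta) + np.sin(right - theta))
    theta = theta + dt * (omega - b * np.sin(theta) + coupling)  # Euler step of the phase equations
    history[k] = np.mod(theta, 2 * np.pi)
# Imaging `history` (time vs. unit index) reveals whatever spatio-temporal pattern emerges.
```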
A Simple Model for Immature Retrovirus Capsid Assembly
NASA Astrophysics Data System (ADS)
Paquay, Stefan; van der Schoot, Paul; Dragnea, Bogdan
In this talk I will present simulations of a simple model for capsomeres in immature virus capsids, consisting of only point particles with a tunable range of attraction constrained to a spherical surface. We find that, at sufficiently low density, a short interaction range is sufficient for the suppression of five-fold defects in the packing and causes instead larger tears and scars in the capsid. These findings agree both qualitatively and quantitatively with experiments on immature retrovirus capsids, implying that the structure of the retroviral protein lattice can, for a large part, be explained simply by the effective interaction between the capsomeres. We thank the HFSP for funding under Grant RGP0017/2012.
The motion of a charged particle on a Riemannian surface under a non-zero magnetic field
NASA Astrophysics Data System (ADS)
Castilho, Cesar Augusto Rodrigues
In this thesis we study the motion of a charged particle on a Riemannian surface under the influence of a positive magnetic field B. Using Moser's Twist Theorem and ideas from classical perturbation theory, we find sufficient conditions to perpetually trap the motion of a particle with a sufficiently large charge in a neighborhood of a level set of the magnetic field. The conditions on the level set of the magnetic field that guarantee the trapping are local and hold near all non-degenerate critical local minima or maxima of B. Using symplectic reduction we apply the results of our work to certain S1-invariant magnetic fields on R3.
The Motion of a Charged Particle on a Riemannian Surface under a Non-Zero Magnetic Field
NASA Astrophysics Data System (ADS)
Castilho, César
2001-03-01
In this paper we study the motion of a charged particle on a Riemannian surface under the influence of a positive magnetic field B. Using Moser's Twist Theorem and ideas from classical perturbation theory, we find sufficient conditions to perpetually trap the motion of a particle with a sufficiently large charge in a neighborhood of a level set of the magnetic field. The conditions on the level set of the magnetic field that guarantee the trapping are local and hold near all non-degenerate critical local minima or maxima of B. Using symplectic reduction we apply the results of our work to certain S1-invariant magnetic fields on R3.
NASA Technical Reports Server (NTRS)
Kazanas, Demosthenes; Fukumura, K.
2009-01-01
We present detailed computations of photon orbits emitted by flares at the ISCO of accretion disks around rotating black holes. We show that for a sufficiently large spin parameter, i.e. a > 0.94 M, following a flare at the ISCO a sufficient number of photons arrive at an observer after multiple orbits around the black hole to produce a "photon echo" with a constant lag of ΔT ≈ 14 M, i.e. independent of the relative phase between the black hole and the observer. This constant time delay then leads to the presence of a QPO in the source power spectrum at a frequency…
NASA Astrophysics Data System (ADS)
Crowhurst, Jonathan
2013-06-01
In recent years, techniques based on table-top laser systems have shown promise for investigating dynamic material behavior at high rates of both compressive and tensile strain. Common to these techniques is a laser pulse that is used in some manner to rapidly deliver energy to the sample; while the energy itself is often comparatively very small, the intensity can be made high by tightly focusing the pump light. In this way pressures or stresses can be obtained that are sufficiently large to have relevance to a wide range of basic and applied fields. Also, when combined with established ultrafast diagnostics these experiments provide very high time resolution, which is particularly desirable when studying, for example, shock waves, in which the time for the material to pass from undisturbed to fully compressed (the "rise time") can be extremely short (order 10 ps or less) even at fairly small peak stresses. Since much of the most interesting physics comes into play during this process, it is important to be able to adequately resolve the shock rise. In this context I will discuss our measurements on aluminum and iron thin films and compare the results with known behavior observed at lower strain rates. Specifically, for aluminum, I will compare our assumed steady wave data at strain rates of up to 10^10 s^-1 to literature data up to ~10^7 s^-1 and show that the well-known fourth-power scaling relation of strain rate to shock stress is maintained even at these very high strain rates. For iron, I will show how we have used our nonsteady data (up to ~10^9 s^-1) to infer a number of important properties of the alpha to epsilon polymorphic transition: (1) the transition can occur on the tens of ps time scale at sufficiently high strain rates and correspondingly very large deviatoric stresses, and (2) most of the material appears to transform at a substantially higher stress than the nominal value of about 13 GPa usually inferred from shock wave experiments. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344 with Laboratory Directed Research and Development funding (12ERD042), as well as being based on work supported as part of EFree, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award No. DESC0001057.
Evaluation of Assimilated SMOS Soil Moisture Data for US Cropland Soil Moisture Monitoring
NASA Technical Reports Server (NTRS)
Yang, Zhengwei; Sherstha, Ranjay; Crow, Wade; Bolten, John; Mladenova, Iva; Yu, Genong; Di, Liping
2016-01-01
Remotely sensed soil moisture data can provide timely, objective and quantitative crop soil moisture information with broad geospatial coverage and sufficiently high resolution observations collected throughout the growing season. This paper evaluates the feasibility of using the assimilated ESA Soil Moisture Ocean Salinity (SMOS) Mission L-band passive microwave data for operational US cropland surface soil moisture monitoring. The assimilated SMOS soil moisture data are first categorized to match the United States Department of Agriculture (USDA) National Agricultural Statistics Service (NASS) survey-based weekly soil moisture observation data, which are ordinal. The categorized assimilated SMOS soil moisture data are compared with NASS's survey-based weekly soil moisture data for consistency and robustness using visual assessment and rank correlation. Preliminary results indicate that the assimilated SMOS soil moisture data highly co-vary with NASS field observations across a large geographic area. Therefore, SMOS data have great potential for US operational cropland soil moisture monitoring.
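A minimal sketch of the categorize-then-rank-correlate comparison described above; the class edges, arrays, and values are illustrative assumptions, not data from the study.

# Sketch: map continuous soil moisture to ordinal classes and compare with
# survey-based ordinal classes using a rank correlation.
import numpy as np
from scipy.stats import spearmanr

def categorize(sm, edges=(0.10, 0.20, 0.30)):
    """Map volumetric soil moisture to ordinal classes 0..3 (edges are assumed)."""
    return np.digitize(sm, edges)

smos_sm = np.array([0.08, 0.15, 0.22, 0.35, 0.28])   # hypothetical SMOS values
nass_class = np.array([0, 1, 2, 3, 2])               # hypothetical survey classes

rho, p = spearmanr(categorize(smos_sm), nass_class)
print(rho, p)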
NASA Astrophysics Data System (ADS)
Huang, J. G.; Slavcheva, G.; Hess, O.
2008-04-01
We propose a dynamical model for the description of the nonlinear Faraday rotation experienced by a short pulse propagating in a resonant medium subject to an ultra-strong static magnetic field. Under the assumption of a sufficiently strong external magnetic field, such that the Zeeman splitting of the quantum system energy levels is large compared to the linewidth of the optical transitions involved and the bandwidth of the incident light, the light effectively interacts with a two-level system. Our numerical simulations show that the Faraday effect under these conditions is significantly different from that caused by a weak to moderately strong magnetic field. Nonlinear coherent effects such as inhomogeneous polarization rotation along the pulse duration and the onset of circularly polarized stimulated emission and coherent ringing have been demonstrated. Some views on the experimental observation of the predicted phenomena are given.
Multiwavelength Rapid Variability in XTE J1118+480
NASA Astrophysics Data System (ADS)
Hynes, R. I.; Haswell, C. A.; Chaty, S.; Cui, W.; Shrader, C. R.
2000-10-01
The black hole candidate XTE J1118+480 has been in an unusual low-state outburst since January 2000. It has exhibited large amplitude rapid variability on timescales of tens of seconds and less at all wavelengths with a sufficient count rate to detect such variability. We will compare X-ray data with simultaneous (UV) and contemporaneous (UV--IR) data. Very similar power density spectra are seen at X-ray and UV wavelengths, with a prominent low-frequency QPO at ~0.1 Hz, evolving with time. Simultaneous X-ray and UV lightcurves are well correlated down to timescales of seconds. The correlated variability could arise either from reprocessing of X-ray variations by the disc or companion star, or from a component of emission originating in the X-ray production region, likely close to the compact object. Possible lags between the wavebands will constrain explanations. This presentation is funded by the Leverhulme Trust.
NASA Technical Reports Server (NTRS)
Vetter, A. A.; Maxwell, C. D.; Swean, T. F., Jr.; Demetriades, S. T.; Oliver, D. A.; Bangerter, C. D.
1981-01-01
Data from sufficiently well-instrumented, short-duration experiments at AEDC/HPDE, Reynolds Metal Co., and Hercules, Inc., are compared to analyses with multidimensional and time-dependent simulations with the STD/MHD computer codes. These analyses reveal detailed features of major transient events, severe loss mechanisms, and anomalous MHD behavior. In particular, these analyses predicted higher-than-design voltage drops, Hall voltage overshoots, and asymmetric voltage drops before the experimental data were available. The predictions obtained with these analyses are in excellent agreement with the experimental data and the failure predictions are consistent with the experiments. The design of large, high-interaction or advanced MHD experiments will require application of sophisticated, detailed and comprehensive computational procedures in order to account for the critical mechanisms which led to the observed behavior in these experiments.
Vibration characteristics of a steadily rotating slender ring
NASA Technical Reports Server (NTRS)
Lallman, F. J.
1980-01-01
Partial differential equations are derived to describe the structural vibrations of a uniform homogeneous ring which is very flexible because the radius is very large compared with the cross-sectional dimensions. Elementary beam theory is used and small deflections are assumed in the derivation. Four sets of structural modes are examined: bending and compression modes in the plane of the ring, bending modes perpendicular to the plane of the ring, and twisting modes about the centroid of the ring cross section. Spatial and temporal characteristics of these modes, presented in terms of vibration frequencies and ratios between vibration amplitudes, are demonstrated in several figures. Given a sufficiently high rotational rate, the dynamics of the ring approach those of a vibrating string. In this case, the velocity of traveling waves in the material of the ring approaches the velocity of the material relative to inertial space, resulting in structural modes which are almost stationary in space.
A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates
NASA Astrophysics Data System (ADS)
Huang, Weizhang; Kamenski, Lennard; Lang, Jens
2010-03-01
A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
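As a generic illustration of the symmetric Gauss-Seidel iteration named above, the sketch below applies a forward and a backward sweep to a small dense toy system; it is not the paper's finite-element error problem.

# Minimal symmetric Gauss-Seidel sweeps for A x = b (dense NumPy toy example).
import numpy as np

def symmetric_gauss_seidel(A, b, x0=None, sweeps=3):
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(sweeps):
        for i in range(n):                     # forward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
        for i in reversed(range(n)):           # backward sweep
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(symmetric_gauss_seidel(A, b, sweeps=5))   # close to np.linalg.solve(A, b)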
Parameter recovery, bias and standard errors in the linear ballistic accumulator model.
Visser, Ingmar; Poessé, Rens
2017-05-01
The linear ballistic accumulator (LBA) model (Brown & Heathcote, Cogn. Psychol., 57, 153) is increasingly popular in modelling response times from experimental data. An R package, glba, has been developed to fit the LBA model using maximum likelihood estimation, which is validated by means of a parameter recovery study. At sufficient sample sizes parameter recovery is good, whereas at smaller sample sizes there can be large bias in parameters. In a second simulation study, two methods for computing parameter standard errors are compared. The Hessian-based method is found to be adequate and is (much) faster than the alternative bootstrap method. The use of parameter standard errors in model selection and inference is illustrated in an example using data from an implicit learning experiment (Visser et al., Mem. Cogn., 35, 1502). It is shown that typical implicit learning effects are captured by different parameters of the LBA model. © 2017 The British Psychological Society.
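A sketch of the Hessian-based standard-error computation mentioned above, assuming a numerical-differentiation package (numdifftools) is available; the toy Gaussian likelihood and parameter names are placeholders, not the glba implementation.

# Standard errors from the observed information matrix at the ML estimate.
import numpy as np
import numdifftools as nd   # assumed available for numerical Hessians

def hessian_standard_errors(neg_log_lik, theta_hat):
    H = nd.Hessian(neg_log_lik)(theta_hat)     # observed information matrix
    cov = np.linalg.inv(H)                     # asymptotic covariance of the MLE
    return np.sqrt(np.diag(cov))

# Toy example: Gaussian likelihood with parameters (mean, log-sd).
data = np.random.default_rng(1).normal(0.4, 1.0, size=200)
nll = lambda th: 0.5 * np.sum((data - th[0])**2 / np.exp(2 * th[1]) + 2 * th[1])
theta_hat = np.array([data.mean(), np.log(data.std())])
print(hessian_standard_errors(nll, theta_hat))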
Role of laser therapy in bladder carcinoma
NASA Astrophysics Data System (ADS)
Sharpe, Brent A.; de Riese, Werner T.
2001-05-01
Transitional cell carcinoma (TCC) of the bladder is the most common genitourinary tract cancer, and its treatment comprises a large number of surgical procedures in urological oncology. Seventy-five percent (75%) of cases recur within two years, and the recurrence rate is correlated with the grade of the initial tumor. While transurethral resection of the bladder (TURB) is the current standard of care, the use of lasers offers a proven alternative. Sufficient evidence is available that laser treatment of superficial bladder cancer is as effective as TURB. Laser treatment offers several advantages, such as a decreased incidence of bladder perforation, a near bloodless procedure, catheter-free treatment, and the possibility of outpatient therapy. It has been reported that laser treatment may reduce the recurrence rate of TCC as compared to electrocautery resection. Furthermore, some studies suggest tumor seeding can be avoided with laser resection; however, both claims remain highly controversial.
A Functional Landscape of Resistance to ALK Inhibition in Lung Cancer
Wilson, Frederick H.; Johannessen, Cory M.; Piccioni, Federica; Tamayo, Pablo; Kim, Jong Wook; Van Allen, Eliezer M.; Corsello, Steven M.; Capelletti, Marzia; Calles, Antonio; Butaney, Mohit; Sharifnia, Tanaz; Gabriel, Stacey B.; Mesirov, Jill P.; Hahn, William C.; Engelman, Jeffrey A.; Meyerson, Matthew; Root, David E.; Jänne, Pasi A.; Garraway, Levi A.
2015-01-01
We conducted a large-scale functional genetic study to characterize mechanisms of resistance to ALK inhibition in ALK-dependent lung cancer cells. We identify members of known resistance pathways and additional putative resistance drivers. Among the latter were members of the P2Y purinergic receptor family of G-protein coupled receptors (P2Y1, P2Y2, and P2Y6). P2Y receptors mediated resistance in part through a protein kinase C (PKC)-dependent mechanism. Moreover, PKC activation alone was sufficient to confer resistance to ALK inhibitors whereas combined ALK and PKC inhibition restored sensitivity. We observed enrichment of gene signatures associated with several resistance drivers (including P2Y receptors) in crizotinib-resistant ALK-rearranged lung tumors compared to treatment-naïve controls, supporting a role for identified resistance mechanisms in clinical resistance. PMID:25759024
Bell, Vaughan; Mills, Kathryn L.; Modinos, Gemma; Wilkinson, Sam
2017-01-01
The positive symptoms of psychosis largely involve the experience of illusory social actors, and yet our current measures of social cognition, at best, only weakly predict their presence. We review evidence to suggest that the range of current approaches in social cognition is not sufficient to explain the fundamentally social nature of these experiences. We argue that social agent representation is an important organizing principle for understanding social cognition and that alterations in social agent representation may be a factor in the formation of delusions and hallucinations in psychosis. We evaluate the feasibility of this approach in light of clinical and nonclinical studies, developmental research, cognitive anthropology, and comparative psychology. We conclude with recommendations for empirical testing of specific hypotheses and how studies of social cognition could more fully capture the extent of social reasoning and experience in both psychosis and more prosaic mental states. PMID:28533946
Cosmological abundance of the QCD axion coupled to hidden photons
NASA Astrophysics Data System (ADS)
Kitajima, Naoya; Sekiguchi, Toyokazu; Takahashi, Fuminobu
2018-06-01
We study the cosmological evolution of the QCD axion coupled to hidden photons. For a moderately strong coupling, the motion of the axion field leads to an explosive production of hidden photons by tachyonic instability. We use lattice simulations to evaluate the cosmological abundance of the QCD axion. In doing so, we incorporate the backreaction of the produced hidden photons on the axion dynamics, which becomes significant in the non-linear regime. We find that the axion abundance is suppressed by at most O(10^2) for the decay constant f_a = 10^16 GeV, compared to the case without the coupling. For a sufficiently large coupling, the motion of the QCD axion becomes strongly damped, and as a result, the axion abundance is enhanced. Our results show that the cosmological upper bound on the axion decay constant can be relaxed by a factor of a few hundred for a certain range of the coupling to hidden photons.
Biomechanical design of escalading lower limb exoskeleton with novel linkage joints.
Zhang, Guoan; Liu, Gangfeng; Ma, Sun; Wang, Tianshuo; Zhao, Jie; Zhu, Yanhe
2017-07-20
In this paper, an obstacle-surmounting-enabled lower limb exoskeleton with novel linkage joints that closely mimicked human motions was proposed. Currently, most lower limb exoskeletons that use linear actuators have a direct connection between the wearer and the controlled part. Compared to the existing joints, the novel linkage joint not only fitted better into a compact chassis, but also provided greater torque when the joint was at a large bend angle. As a result, it extended the angle range of joint peak torque output. For any given power, torque was prioritized over rotational speed, because sufficient torque, rather than rotational speed, is the prerequisite for most joint actions. With insufficient torque, the exoskeleton will be a burden instead of an enhancement to its wearer. With optimized distribution of torque among the joints, the novel linkage method may contribute to easier exoskeleton movements.
Intercomparison of fog water samplers
NASA Astrophysics Data System (ADS)
Schell, Dieter; Georgii, Hans-Walter; Maser, Rolf; Jaeschke, Wolfgang; Arends, Beate G.; Kos, Gerard P. A.; Winkler, Peter; Schneider, Thomas; Berner, Axel; Kruisz, Christian
1992-11-01
During the Po Valley Fog Experiment 1989, two fogwater collectors were operated simultaneously at the ground and the results were compared with each other. The chemical analyses of the samples as well as the collection efficiencies showed remarkable differences between the two collectors. Some differences in the solute concentrations in the samples of both collectors could be expected due to small differences in the 50-percent cut-off diameters. The large differences in the collection efficiencies, however, cannot be explained by these small variations of d_50, because normally only a small fraction of the water mass is concentrated in the size range of 5-7-micron droplets. It is shown that it is not sufficient to characterize a fogwater collector only by its cut-off diameter. The results of several wind tunnel calibration tests show that the collection efficiencies of the fogwater collectors are a function of wind speed and the shape of the droplet spectra.
Effectiveness of Culturally Appropriate Adaptations to Juvenile Justice Services
Vergara, Andrew T.; Kathuria, Parul; Woodmass, Kyler; Janke, Robert; Wells, Susan J.
2017-01-01
Despite efforts to increase the cultural competence of services within juvenile justice systems, disproportionate minority contact (DMC) persists throughout Canada and the United States. Commonly cited approaches to decreasing DMC include large-scale systemic changes as well as enhancement of the cultural relevance and responsiveness of services delivered. Cultural adaptations to service delivery focus on prevention, decision-making, and treatment services to reduce initial contact, minimize unnecessary restraint, and reduce recidivism. Though locating rigorous testing of these approaches compared to standard interventions is difficult, this paper identifies and reports on such research. The Cochrane guidelines for systematic literature reviews and meta-analyses served as a foundation for the study methodology. Databases such as Legal Periodicals and Books were searched through June 2015. Three studies were sufficiently rigorous to identify the effect of the cultural adaptations, and three studies that are making potentially important contributions to the field were also reviewed. PMID:29468092
NASA Astrophysics Data System (ADS)
Teuben, P. J.; Wolfire, M. G.; Pound, M. W.; Mundy, L. G.
We have assembled a cluster of Intel Pentium-based PCs running Linux to compute a large set of Photodissociation Region (PDR) and Dust Continuum models. For various reasons the cluster is heterogeneous, currently ranging from a single Pentium-II 333 MHz machine to dual Pentium-III 450 MHz CPU machines. Although this will be sufficient for our "embarrassingly parallelizable" problem, it may present some challenges for as yet unplanned future use. In addition, the cluster was used to construct a MIRIAD benchmark and compared to equivalent UltraSPARC-based workstations. Currently the cluster consists of 8 machines, 14 CPUs, 50 GB of disk space, and a total peak speed of 5.83 GHz, or about 1.5 Gflops. The total cost of this cluster has been about $12,000, including all cabling, networking equipment, rack, and a CD-R backup system. The URL for this project is http://dustem.astro.umd.edu.
‘Small Changes’ to Diet and Physical Activity Behaviors for Weight Management
Hills, Andrew P.; Byrne, Nuala M.; Lindstrom, Rachel; Hill, James O.
2013-01-01
Obesity is associated with numerous short- and long-term health consequences. Low levels of physical activity and poor dietary habits are consistent with an increased risk of obesity in an obesogenic environment. Relatively little research has investigated associations between eating and activity behaviors by using a systems biology approach and by considering the dynamics of the energy balance concept. A significant body of research indicates that a small positive energy balance over time is sufficient to cause weight gain in many individuals. In contrast, small changes in nutrition and physical activity behaviors can prevent weight gain. In the context of weight management, it may be more feasible for most people to make small compared to large short-term changes in diet and activity. This paper presents a case for the use of small and incremental changes in diet and physical activity for improved weight management in the context of a toxic obesogenic environment. PMID:23711772
Rising out-of-pocket costs in disease management programs.
Chernew, Michael E; Rosen, Allison B; Fendrick, A Mark
2006-03-01
To document the rise in copayments for patients in disease management programs and to call attention to the inherent conflicts that exist between these 2 approaches to benefit design. Data from 2 large health plans were used to compare cost sharing in disease management programs with cost sharing outside of disease management programs. The copayments charged to participants in disease management programs usually do not differ substantially from those charged to other beneficiaries. Cost sharing and disease management result in conflicting approaches to benefit design. Increasing copayments may lead to underuse of recommended services, thereby decreasing the clinical effectiveness and increasing the overall costs of disease management programs. Policymakers and private purchasers should consider the use of targeted benefit designs when implementing disease management programs or redesigning cost-sharing provisions. Current information systems and health services research are sufficiently advanced to permit these benefit designs.
Brindle, Ryan C; Ginty, Annie T; Phillips, Anna C; Carroll, Douglas
2014-10-01
A series of meta-analyses was undertaken to determine the contributions of sympathetic and parasympathetic activation to cardiovascular stress reactivity. A literature search yielded 186 studies of sufficient quality that measured indices of sympathetic (n = 113) and/or parasympathetic activity (n = 73). A range of psychological stressors perturbed blood pressure and heart rate. There were comparable aggregate effects for sympathetic activation, as indexed by increased plasma epinephrine and norepinephrine and shortened pre-ejection period, and for parasympathetic deactivation, as indexed by heart rate variability measures. Effect sizes varied with stress task, sex, and age. In contrast to alpha-adrenergic blockade, beta-blockade attenuated cardiovascular reactivity. Cardiovascular reactivity to acute psychological stress would appear to reflect both beta-adrenergic activation and vagal withdrawal to a largely equal extent. Copyright © 2014 Society for Psychophysiological Research.
NASA Astrophysics Data System (ADS)
Tsung, Frank; Weaver, J.; Lehmberg, R.
2017-10-01
We are performing particle-in-cell simulations using the code OSIRIS to study the effects of laser plasma interactions in the presence of temporal bandwidth under plasma conditions relevant to experiments on the Nike laser with induced spatial incoherence (ISI). With ISI, the instantaneous laser intensity can be 3-4 times larger than the average intensity, leading to the excitation of additional TPD modes and producing electrons with larger angular spread. In our simulations, we observe that although ISI can increase the interaction regions for short bursts of time, time-averaged (over many picoseconds) laser plasma interactions can be reduced by a factor of 2 in systems with sufficiently large bandwidths (where the inverse bandwidth is comparable with the linear growth time). We will quantify these effects and investigate higher dimensional effects such as laser speckles and the effects of Coulomb collisions. Work supported by NRL, NNSA, and NSF.
Gene and translation initiation site prediction in metagenomic sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyatt, Philip Douglas; LoCascio, Philip F; Hauser, Loren John
2012-01-01
Gene prediction in metagenomic sequences remains a difficult problem. Current sequencing technologies do not achieve sufficient coverage to assemble the individual genomes in a typical sample; consequently, sequencing runs produce a large number of short sequences whose exact origin is unknown. Since these sequences are usually smaller than the average length of a gene, algorithms must make predictions based on very little data. We present MetaProdigal, a metagenomic version of the gene prediction program Prodigal, that can identify genes in short, anonymous coding sequences with a high degree of accuracy. The novel value of the method consists of enhanced translation initiation site identification, the ability to identify sequences that use alternate genetic codes, and confidence values for each gene call. We compare the results of MetaProdigal with other methods and conclude with a discussion of future improvements.
Lin, Kao; Li, Haipeng; Schlötterer, Christian; Futschik, Andreas
2011-01-01
Summary statistics are widely used in population genetics, but they suffer from the drawback that no simple sufficient summary statistic exists which captures all information required to distinguish different evolutionary hypotheses. Here, we apply boosting, a recent statistical method that combines simple classification rules to maximize their joint predictive performance. We show that our implementation of boosting has high power to detect selective sweeps. Demographic events, such as bottlenecks, do not result in a large excess of false positives. A comparison with other neutrality tests shows that our boosting implementation performs well. Furthermore, we evaluated the relative contribution of different summary statistics to the identification of selection and found that for recent sweeps integrated haplotype homozygosity is very informative, whereas older sweeps are better detected by Tajima's π. Overall, Watterson's θ was found to contribute the most information for distinguishing between bottlenecks and selection. PMID:21041556
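An illustrative sketch of the general approach (a boosting classifier trained on per-window summary statistics) using scikit-learn's gradient boosting and synthetic features; the feature set and data are assumptions and do not reproduce the paper's implementation.

# Boosting on hypothetical summary statistics to separate sweep vs. neutral windows.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# columns: e.g. a Tajima's-D-like statistic, diversity, haplotype homozygosity (all made up)
X_neutral = rng.normal([0.0, 0.010, 0.2], 0.1, size=(n, 3))   # label 0
X_sweep   = rng.normal([-1.5, 0.004, 0.6], 0.1, size=(n, 3))  # label 1
X = np.vstack([X_neutral, X_sweep])
y = np.r_[np.zeros(n), np.ones(n)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))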
A pseudoscalar decaying to photon pairs in the early LHC Run 2 data
Low, Matthew; Tesi, Andrea; Wang, Lian -Tao
2016-03-16
In this paper we explore the possibility of a pseudoscalar resonance to account for the 750 GeV diphoton excess observed both at ATLAS and at CMS. We analyze the ingredients needed from the low energy perspective to obtain a sufficiently large diphoton rate to explain the signal while avoiding constraints from other channels. Additionally, we point out composite Higgs models in which one can naturally obtain a pseudoscalar at the 750 GeV mass scale and we estimate the pseudoscalar couplings to standard model particles that one would have in such models. A generic feature of models that can explain the excess is the presence of new particles in addition to the 750 GeV state. In conclusion, we note that due to the origin of the coupling of the resonance to photons, one expects to see comparable signals in the Zγ, ZZ, and WW channels.
Timeliness of notification systems for infectious diseases: A systematic literature review.
Swaan, Corien; van den Broek, Anouk; Kretzschmar, Mirjam; Richardus, Jan Hendrik
2018-01-01
Timely notification of infectious diseases is crucial for a prompt response by public health services. Adequate notification systems facilitate timely notification. A systematic literature review was performed to assess outcomes of studies on notification timeliness and to determine which aspects of notification systems are associated with timely notification. Articles reviewing the timeliness of notifications published between 2000 and 2017 were searched in PubMed and Scopus. Using a standardized notification chain, the timeliness of the reporting system in each article was classified as sufficient (≥ 80% of notifications in time), partly sufficient (≥ 50-80%), or insufficient (< 50%) according to the article's predefined timeframe, a standardized timeframe for all articles, and a disease-specific timeframe. Electronic notification systems were compared with conventional methods (postal mail, fax, telephone, email) and mobile phone reporting. 48 articles were identified. In almost one third of the studies with a predefined timeframe (39), timeliness of the notification systems was either sufficient or insufficient (11/39, 28% and 12/39, 31%, respectively). Applying the standardized timeframe (45 studies) revealed similar outcomes (13/45, 29%, sufficient vs 15/45, 33%, insufficient). The disease-specific timeframe was not met by any study. Systems involving reporting by laboratories most often complied sufficiently with predefined or standardized timeframes. Outcomes were not related to electronic or conventional notification systems or mobile phone reporting. Electronic systems were faster in comparative studies (10/13); this rarely resulted in sufficient timeliness according to either predefined or standardized timeframes. A minority of notification systems meets either predefined, standardized, or disease-specific timeframes. Systems including laboratory reporting are associated with timely notification. Electronic systems reduce reporting delay, but implementation needs considerable effort to comply with notification timeframes. During outbreak threats, patient, doctor, and laboratory testing delays need to be reduced to achieve timely detection and notification. Public health authorities should incorporate procedures for this in their preparedness plans.
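A minimal sketch of the three-way timeliness classification described above; function and variable names are illustrative.

# Classify a reporting system by the proportion of notifications made within the timeframe.
def classify_timeliness(proportion_on_time):
    if proportion_on_time >= 0.80:
        return "sufficient"
    elif proportion_on_time >= 0.50:
        return "partly sufficient"
    return "insufficient"

for p in (0.92, 0.63, 0.31):
    print(p, classify_timeliness(p))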
Bennett, Jerry M.; Cortes, Peter M.
1985-01-01
The adsorption of water by thermocouple psychrometer assemblies is known to cause errors in the determination of water potential. Experiments were conducted to evaluate the effect of sample size and psychrometer chamber volume on measured water potentials of leaf discs, leaf segments, and sodium chloride solutions. Reasonable agreement was found between soybean (Glycine max L. Merr.) leaf water potentials measured on 5-millimeter radius leaf discs and large leaf segments. Results indicated that while errors due to adsorption may be significant when using small volumes of tissue, if sufficient tissue is used the errors are negligible. Because of the relationship between water potential and volume in plant tissue, the errors due to adsorption were larger with turgid tissue. Large psychrometers which were sealed into the sample chamber with latex tubing appeared to adsorb more water than those sealed with flexible plastic tubing. Estimates are provided of the amounts of water adsorbed by two different psychrometer assemblies and the amount of tissue sufficient for accurate measurements of leaf water potential with these assemblies. It is also demonstrated that water adsorption problems may have generated low water potential values which in prior studies have been attributed to large cut surface area to volume ratios. PMID:16664367
NASA Technical Reports Server (NTRS)
Righetti, Pier Giorgio; Casale, Elena; Carter, Daniel; Snyder, Robert S.; Wenisch, Elisabeth; Faupel, Michel
1990-01-01
Recombinant-DNA (deoxyribonucleic acid) (r-DNA) proteins, produced in large quantities for human consumption, are now available in sufficient amounts for crystal growth. Crystallographic analysis is the only method now available for defining the atomic arrangements within complex biological molecules and decoding, e.g., the structure of the active site. Growing protein crystals in microgravity has become an important aspect of biology in space, since crystals that are large enough and of sufficient quality to permit complete structure determinations are usually obtained. However even small amounts of impurities in a protein preparation are anathema for the growth of a regular crystal lattice. A multicompartment electrolyzer with isoelectric, immobiline membranes, able to purify large quantities of r-DNA proteins is described. The electrolyzer consists of a stack of flow cells, delimited by membranes of very precise isoelectric point (pI, consisting of polyacrylamide supported by glass fiber filters containing Immobiline buffers and titrants to uniquely define a pI value) and very high buffering power, able to titrate all proteins tangent or crossing such membranes. By properly selecting the pI values of two membranes delimiting a flow chamber, a single protein can be kept isoelectric in a single flow chamber and thus, be purified to homogeneity (by the most stringent criterion, charge homogeneity).
Nakatsuka, Matthew A; Barback, Christopher V; Fitch, Kirsten R; Farwell, Alexander R; Esener, Sadik C; Mattrey, Robert F; Cha, Jennifer N; Goodwin, Andrew P
2013-12-01
The use of microbubbles as ultrasound contrast agents is one of the primary methods to diagnose deep venous thrombosis. However, current microbubble imaging strategies require either a clot sufficiently large to produce a circulation filling defect or a clot with sufficient vascularization to allow for targeted accumulation of contrast agents. Previously, we reported the design of a microbubble formulation that modulated its ability to generate ultrasound contrast from interaction with thrombin through incorporation of aptamer-containing DNA crosslinks in the encapsulating shell, enabling the measurement of a local chemical environment by changes in acoustic activity. However, this contrast agent lacked sufficient stability and lifetime in blood to be used as a diagnostic tool. Here we describe a PEG-stabilized, thrombin-activated microbubble (PSTA-MB) with sufficient stability to be used in vivo in circulation with no change in biomarker sensitivity. In the presence of actively clotting blood, PSTA-MBs showed a 5-fold increase in acoustic activity. Specificity for the presence of thrombin and stability under constant shear flow were demonstrated in a home-built in vitro model. Finally, PSTA-MBs were able to detect the presence of an active clot within the vena cava of a rabbit sufficiently small as to not be visible by current non-specific contrast agents. By activating in non-occlusive environments, these contrast agents will be able to detect clots not diagnosable by current contrast agents. Copyright © 2013 Elsevier Ltd. All rights reserved.
High-frequency signal and noise estimates of CSR GRACE RL04
NASA Astrophysics Data System (ADS)
Bonin, Jennifer A.; Bettadpur, Srinivas; Tapley, Byron D.
2012-12-01
A sliding window technique is used to create daily-sampled Gravity Recovery and Climate Experiment (GRACE) solutions with the same background processing as the official CSR RL04 monthly series. By estimating over shorter time spans, more frequent solutions are made using uncorrelated data, allowing for higher frequency resolution in addition to daily sampling. Using these data sets, high-frequency GRACE errors are computed using two different techniques: assuming the GRACE high-frequency signal in a quiet area of the ocean is the true error, and computing the variance of differences between multiple high-frequency GRACE series from different centers. While the signal-to-noise ratios prove to be sufficiently high for confidence at annual and lower frequencies, at frequencies above 3 cycles/year the signal-to-noise ratios in the large hydrological basins looked at here are near 1.0. Comparisons with the GLDAS hydrological model and high frequency GRACE series developed at other centers confirm CSR GRACE RL04's poor ability to accurately and reliably measure hydrological signal above 3-9 cycles/year, due to the low power of the large-scale hydrological signal typical at those frequencies compared to the GRACE errors.
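A sketch of the second error-estimation technique described above: with two independent high-frequency series, the common signal cancels in their difference, so the variance of the difference approximates the sum of the two error variances. The daily series below are synthetic placeholders, not GRACE data.

# Estimate per-center error level from the variance of differences between two series.
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(365)                                   # daily samples, one year
signal = 5.0 * np.sin(2 * np.pi * t / 365.25)        # common geophysical signal (arbitrary units)
series_a = signal + rng.normal(0, 1.2, t.size)       # center A with its own noise
series_b = signal + rng.normal(0, 1.2, t.size)       # center B with its own noise

diff_var = np.var(series_a - series_b, ddof=1)
per_series_error_std = np.sqrt(diff_var / 2.0)       # assumes roughly equal error levels
print(per_series_error_std)                          # ~1.2 by construction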
Autofocus algorithm for synthetic aperture radar imaging with large curvilinear apertures
NASA Astrophysics Data System (ADS)
Bleszynski, E.; Bleszynski, M.; Jaroszewicz, T.
2013-05-01
An approach to autofocusing for large curved synthetic aperture radar (SAR) apertures is presented. Its essential feature is that phase corrections are extracted not directly from SAR images, but rather from reconstructed SAR phase-history data representing windowed patches of the scene, of sizes sufficiently small to allow the linearization of the forward- and back-projection formulae. The algorithm processes data associated with each patch independently and in two steps. The first step employs a phase-gradient-type method in which phase corrections compensating (possibly rapid) trajectory perturbations are estimated from the reconstructed phase history for the dominant scattering point on the patch. The second step uses the phase-gradient-corrected data and extracts the absolute phase value, in this way removing phase ambiguities, reducing possible imperfections of the first stage, and providing the distances between the sensor and the scattering point with accuracy comparable to the wavelength. The features of the proposed autofocusing method are illustrated by applying it to intentionally corrupted small-scene 2006 Gotcha data. The examples include the extraction of absolute phases (ranges) for selected prominent point targets, which are then used to focus the scene and determine relative target-target distances.
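A heavily reduced sketch of the first, phase-gradient-type step described above: estimate a pulse-to-pulse phase error from the dominant scatterer in the phase history and remove it. Array shapes and the synthetic test data are assumptions; the paper's patch-wise back-projection machinery is not reproduced.

# Minimal phase-gradient-style correction from the dominant scatterer.
import numpy as np

def phase_gradient_correction(phase_history):
    """phase_history: complex array of shape (num_pulses, num_range_bins)."""
    # dominant scatterer = range bin with the largest total energy
    dominant_bin = np.argmax(np.sum(np.abs(phase_history) ** 2, axis=0))
    sig = phase_history[:, dominant_bin]
    # phase increments between consecutive pulses, then cumulative phase error
    increments = np.angle(sig[1:] * np.conj(sig[:-1]))
    phase_error = np.concatenate([[0.0], np.cumsum(increments)])
    # apply the conjugate of the estimated error to every range bin
    return phase_history * np.exp(-1j * phase_error)[:, None]

data = np.exp(1j * np.linspace(0, 4 * np.pi, 128))[:, None] * np.ones((128, 32))
print(np.allclose(np.angle(phase_gradient_correction(data)[:, 0]), 0.0))  # True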
Top-rated British business research: has the emperor got any clothes?
Lilford, R J; Dobbie, F; Warren, R; Braunholtz, D; Boaden, R
2003-08-01
Business schools have great prestige and charge large amounts of money for their courses. But how good is the science on which they base their prescriptions for action? To find out we examined the published output from the only three British business schools with the highest (5*) research assessment ranking at the time the articles were published. We conclude that theory development and model construction are often elegant. However, the methods used to obtain primary empirical information to confirm or refute the theories or to populate models are poor, at least from a positivist or pragmatic ontological perspective. Large-scale comparative studies made up only a small proportion of research output from the business schools. Literature reviews were not systematic. The sampling frame and rationale for selection of cases for study were inadequately described. The methods of data collection were frequently not given in sufficient detail to enable the study to be replicated, and the conclusions tended to go far beyond what the data by themselves could support. However, this does not have to be the case: there are excellent examples of research in the social sciences. We conclude, therefore, that top-rated British business research is a scantily clad emperor.
Tilley, Colin; McIntosh, Emma; Bahrami, Maryam; Clarkson, Jan; Deery, Chris; Pitts, Nigel
2005-07-01
To compare the cost-effectiveness of four third molar guideline implementation strategies. Fifty-one dental practices in Scotland were randomized to one of four implementation strategies. The effectiveness of the strategies was measured by general dental practitioners' compliance with the guideline. The effectiveness of the guideline depended crucially upon the type of patient treated. In particular, for a minority of patients (14%) with no clinical signals of their 'type', the implementation strategies generate potentially large gains in evidence-based practice. However, the cost per patient of achieving these gains is large given that the costs are incurred for all patients, but benefits accrue only to a minority. The results show that the type of patient presenting for treatment can influence the effectiveness, cost-effectiveness and therefore policy conclusions. Consequently, the design and analysis of studies need to be sufficiently sensitive to detect subtle interaction effects. This may explain the dearth of guideline implementation trials with significant findings. The results also suggest that a more cost-effective implementation method in primary care dentistry may be to subsidize treatment conditional upon patient type.
Biosimilars: a regulatory perspective from America.
Kay, Jonathan
2011-05-12
Biosimilars are protein products that are sufficiently similar to a biopharmaceutical already approved by a regulatory agency. Several biotechnology companies and generic drug manufacturers in Asia and Europe are developing biosimilars of tumor necrosis factor inhibitors and rituximab. A biosimilar etanercept is already being marketed in Colombia and China. In the US, several natural source products and recombinant proteins have been approved as generic drugs under Section 505(b)(2) of the Food, Drug, and Cosmetic Act. However, because the complexity of large biopharmaceuticals makes it difficult to demonstrate that a biosimilar is structurally identical to an already approved biopharmaceutical, this Act does not apply to biosimilars of large biopharmaceuticals. Section 7002 of the Patient Protection and Affordable Care Act of 2010, which is referred to as the Biologics Price Competition and Innovation Act of 2009, amends Section 351 of the Public Health Service Act to create an abbreviated pathway that permits a biosimilar to be evaluated by comparing it with only a single reference biological product. This paper reviews the processes for approval of biosimilars in the US and the European Union and highlights recent changes in federal regulations governing the approval of biosimilars in the US.
Remillard, J.; Fridlind, Ann M.; Ackerman, A. S.; ...
2017-09-20
Here, a case study of persistent stratocumulus over the Azores is simulated using two independent large-eddy simulation (LES) models with bin microphysics, and forward-simulated cloud radar Doppler moments and spectra are compared with observations. Neither model is able to reproduce the monotonic increase of downward mean Doppler velocity with increasing reflectivity that is observed under a variety of conditions, but for differing reasons. To a varying degree, both models also exhibit a tendency to produce too many of the largest droplets, leading to excessive skewness in Doppler velocity distributions, especially below cloud base. Excessive skewness appears to be associated with an insufficiently sharp reduction in droplet number concentration at diameters larger than ~200 μm, where a pronounced shoulder is found for in situ observations and a sharp reduction in the reflectivity size distribution is associated with relatively narrow observed Doppler spectra. Effectively using LES with bin microphysics to study drizzle formation and evolution in cloud Doppler radar data evidently requires reducing numerical diffusivity in the treatment of the stochastic collection equation; if that is accomplished sufficiently to reproduce typical spectra, progress toward understanding drizzle processes is likely.
NASA Astrophysics Data System (ADS)
Vondráček, M.; Cornils, L.; Minár, J.; Warmuth, J.; Michiardi, M.; Piamonteze, C.; Barreto, L.; Miwa, J. A.; Bianchi, M.; Hofmann, Ph.; Zhou, L.; Kamlapure, A.; Khajetoorians, A. A.; Wiesendanger, R.; Mi, J.-L.; Iversen, B.-B.; Mankovsky, S.; Borek, St.; Ebert, H.; Schüler, M.; Wehling, T.; Wiebe, J.; Honolka, J.
2016-10-01
We report on the quenching of single Ni adatom moments on Te-terminated Bi2Te2Se and Bi2Te3 topological insulator surfaces. The effect is noted as a missing x-ray magnetic circular dichroism for resonant L_{3,2} transitions into partially filled Ni 3d states of theory-derived occupancy n_d = 9.2. On the basis of a comparative study of Ni and Fe using scanning tunneling microscopy and ab initio calculations, we are able to relate the element-specific moment formation to a local Stoner criterion. Our theory shows that while Fe adatoms form large spin moments of m_s = 2.54 μ_B with out-of-plane anisotropy due to a sufficiently large density of states at the Fermi energy, Ni remains well below an effective Stoner threshold for local moment formation. With the Fermi level remaining in the bulk band gap after adatom deposition, nonmagnetic Ni and preferentially out-of-plane oriented magnetic Fe with similar structural properties on Bi2Te2Se surfaces constitute a perfect platform to study the off-on effects of time-reversal symmetry breaking on topological surface states.
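For context, the Stoner criterion invoked here is conventionally stated as (standard textbook form, not quoted from the abstract): a local moment forms when $I \, n(E_F) > 1$, where $I$ is the element-specific Stoner exchange parameter and $n(E_F)$ is the local density of states at the Fermi energy; in this picture Fe satisfies the inequality while Ni does not.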
In-flight results of adaptive attitude control law for a microsatellite
NASA Astrophysics Data System (ADS)
Pittet, C.; Luzi, A. R.; Peaucelle, D.; Biannic, J.-M.; Mignot, J.
2015-06-01
Because satellites usually do not experience large changes of mass, center of gravity or inertia in orbit, linear time invariant (LTI) controllers have been widely used to control their attitude. But as the pointing requirements become more stringent and the satellite's structure more complex, with large steerable and/or deployable appendages and flexible modes occurring in the control bandwidth, one unique LTI controller is no longer sufficient. One solution consists in designing several LTI controllers, one for each set point, but the switching between them is difficult to tune and validate. Another interesting solution is to use adaptive controllers, which could present at least two advantages: first, as the controller automatically and continuously adapts to the set point without changing its structure, no switching logic is needed in the software; second, performance and stability of the closed-loop system can be assessed directly on the whole flight domain. To evaluate the real benefits of adaptive control for satellites, in terms of design, validation and performance, CNES selected it as an end-of-life experiment on the PICARD microsatellite. This paper describes the design, validation and in-flight results of the new adaptive attitude control law, compared to the nominal control law.
Effective flocculation of Chlorella vulgaris using chitosan with zeta potential measurement
NASA Astrophysics Data System (ADS)
Low, Y. J.; Lau, S. W.
2017-06-01
Microalgae are considered a promising source of third-generation biofuels due to their fast growth rates, potentially higher yield rates and wide range of growth conditions. However, the extremely low biomass concentration in microalgae cultures presents a great challenge to the harvesting of microalgae, because a large volume of water needs to be removed to obtain dry microalgal cells for the subsequent oil extraction process. In this study, the freshwater microalga Chlorella vulgaris (C. vulgaris) was effectively harvested using both low molecular weight (MW) and high MW chitosan flocculants. The flocculation efficiency was evaluated by physical appearance, supernatant absorbance, zeta potential and solids content after centrifugal dewatering. High flocculation efficiency of 98.0-99.0% was achieved at the optimal dosage of 30-40 mg/g with the formation of large microalgae flocs. This study suggests that a polymer bridging mechanism governed the flocculation behaviour of C. vulgaris with high MW chitosan, whereas a charge patch neutralisation mechanism prevailed with low MW chitosan, where a lower dosage was sufficient to reach near-zero zeta potential compared with the high MW chitosan. The amount of chitosan polymer present in the culture may also affect the mechanism of flocculation.
Disturbances to Air-Layer Skin-Friction Drag Reduction at High Reynolds Numbers
NASA Astrophysics Data System (ADS)
Dowling, David; Elbing, Brian; Makiharju, Simo; Wiggins, Andrew; Perlin, Marc; Ceccio, Steven
2009-11-01
Skin friction drag on a flat surface may be reduced by more than 80% when a layer of air separates the surface from a flowing liquid compared to when such an air layer is absent. Past large-scale experiments utilizing the US Navy's Large Cavitation Channel and a flat-plate test model 3 m wide and 12.9 m long have demonstrated air layer drag reduction (ALDR) on both smooth and rough surfaces at water flow speeds sufficient to reach downstream-distance-based Reynolds numbers exceeding 100 million. For these experiments, the incoming flow conditions, surface orientation, air injection geometry, and buoyancy forces all favored air layer formation. The results presented here extend this prior work to include the effects that vortex generators and free stream flow unsteadiness have on ALDR to assess its robustness for application to ocean-going ships. Measurements include skin friction, static pressure, airflow rate, video of the flow field downstream of the injector, and profiles of the flowing air-water mixture when the injected air forms bubbles, when it is in transition to an air layer, and when the air layer is fully formed. From these, and the prior measurements, ALDR's viability for full-scale applications is assessed.
Spatial selective attention in a complex auditory environment such as polyphonic music.
Saupe, Katja; Koelsch, Stefan; Rübsamen, Rudolf
2010-01-01
To investigate the influence of spatial information in auditory scene analysis, polyphonic music (three parts in different timbres) was composed and presented in free field. Each part contained large falling interval jumps in the melody and the task of subjects was to detect these events in one part ("target part") while ignoring the other parts. All parts were either presented from the same location (0 degrees; overlap condition) or from different locations (-28 degrees, 0 degrees, and 28 degrees or -56 degrees, 0 degrees, and 56 degrees in the azimuthal plane), with the target part being presented either at 0 degrees or at one of the right-sided locations. Results showed that spatial separation of 28 degrees was sufficient for a significant improvement in target detection (i.e., in the detection of large interval jumps) compared to the overlap condition, irrespective of the position (frontal or right) of the target part. A larger spatial separation of the parts resulted in further improvements only if the target part was lateralized. These data support the notion of improvement in the suppression of interfering signals with spatial sound source separation. Additionally, the data show that the position of the relevant sound source influences auditory performance.
Behaviour of concrete beams reinforced with FRP prestressed concrete prisms
NASA Astrophysics Data System (ADS)
Svecova, Dagmar
The use of fibre reinforced plastics (FRP) to reinforce concrete is gaining acceptance. However, due to the relatively low modulus of FRP in comparison to steel, such structures may, if a sufficient amount of reinforcement is not used, suffer from large deformations and wide cracks. FRP is generally more suited for prestressing. Since it is not feasible to prestress all concrete structures to eliminate the large deflections of FRP reinforced concrete flexural members, researchers are focusing on other strategies. A simple method for avoiding excessive deflections is to provide a sufficiently high amount of FRP reinforcement to limit its stress (strain) to acceptable levels under service loads. This approach will not be able to take advantage of the high strength of FRP and will be generally uneconomical. The current investigation focuses on the feasibility of an alternative strategy. This thesis deals with the flexural and shear behaviour of concrete beams reinforced with FRP prestressed concrete prisms. FRP prestressed concrete prisms (PCP) are new reinforcing bars, made by pretensioning FRP and embedding it in high strength grout/concrete. The purpose of the research is to investigate the feasibility of using such pretensioned rebars, and their effect on the flexural and shear behaviour of reinforced concrete beams over the entire loading range. Due to the prestress in the prisms, the deflection of concrete beams reinforced with this product is substantially reduced, and is comparable to that of similarly steel-reinforced beams. The thesis comprises both theoretical and experimental investigations. In the experimental part, nine beams reinforced with FRP prestressed concrete prisms, and two companion beams, one steel-reinforced and one FRP-reinforced, were tested. All the beams were designed to carry the same ultimate moment. Excellent flexural and shear behaviour of beams reinforced with the more highly prestressed prisms is reported. When comparing deflections of three beams designed to have the same ultimate capacity, but reinforced with either steel, PCP or FRP rebars, the service load deflections of beams reinforced with PCP are comparable to that of a steel reinforced concrete beam, and are four times smaller than the deflection of the companion FRP reinforced beam. Similarly, the crack width of the PCP reinforced beams under service loads is comparable to that of the steel reinforced beam, while the FRP reinforced beam developed unacceptably wide cracks. In the analytical part, a comprehensive analysis of the experimental data in both flexure and shear is performed. It is determined that the existing design expressions for ultimate flexural strength and service load deflection calculation cannot accurately predict the response of PCP reinforced beams. Accordingly, new expressions for the calculation of deflection, crack width, tension stiffening, and ultimate capacity of PCP reinforced beams are proposed. The predictions of the proposed methods of analysis agree very well with the corresponding experimental data. Based on the results of the current study, it is concluded that high strength concrete prisms prestressed with carbon fibre reinforced plastic bars can be used as reinforcement in concrete structures to avoid the problems of large deflections and wide cracks under service loads.
Humor and Comparatives in Ads for High- and Low-Involvement Products.
ERIC Educational Resources Information Center
Wu, Bob T. W.; And Others
1989-01-01
Investigates the effectiveness of humor in advertising, comparative advertising, and consumer involvement with the product. Finds that humorous ads are more eye catching but less impressive and less sufficient in information than nonhumorous ads. Finds the performance of comparative ads is generally negative and especially so in the high…
Gu, Ja K; Charles, Luenda E; Ma, Claudia C; Andrew, Michael E; Fekedulegn, Desta; Hartley, Tara A; Violanti, John M; Burchfiel, Cecil M
2016-10-01
Studies describing prevalence and trends of physical activity among workers in the United States are scarce. We aimed to estimate prevalence and trends of "sufficient" leisure-time physical activity (LTPA) during the 2004-2014 time period among U.S. workers. Data were collected for U.S. workers in the National Health Interview Survey. LTPA was categorized as sufficiently active (moderate intensity, ≥150 minutes per week), insufficiently active (10-149 minutes per week), and inactive (<10 minutes per week). Prevalence of LTPA was adjusted for age using 2010 U.S. working population as a standardized age distribution. Prevalence trends of "sufficient" LTPA significantly increased from 2004 to 2014 (45.6% to 54.8%; P < .001). Among industry groups, the highest prevalence of "sufficient" LTPA was observed among workers in Professional/Scientific/Technical Services (62.1%). The largest increases were observed among workers in Public Administration (51.3%-63.4%). Among occupational groups, "sufficient" LTPA prevalence was lowest in farming/fishing/forestry (30.8%) and highest in life/physical/social science (66.4%). Prevalence of LTPA significantly increased from 2004 to 2014 in most occupational and industry groups. Among U.S. workers, trends of "sufficient" LTPA significantly increased between 2004 and 2014. Overall, a larger proportion of white-collar compared to blue-collar workers were engaged in "sufficient" LTPA. Published by Elsevier Inc.
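A sketch of the direct age standardization implied by "adjusted for age using the 2010 U.S. working population"; the age groups, weights, and prevalences below are made-up placeholders, not NHIS values.

# Age-adjusted prevalence = weighted sum of age-specific prevalences
# using a standard population's age shares as weights.
import numpy as np

age_group_prevalence = np.array([0.58, 0.55, 0.52, 0.48])   # survey-year estimates (assumed)
standard_weights     = np.array([0.25, 0.30, 0.28, 0.17])   # standard population shares (assumed)

age_adjusted = np.sum(age_group_prevalence * standard_weights)
print(round(float(age_adjusted), 3))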
NASA Astrophysics Data System (ADS)
Bosman, Peter A. N.; Alderliesten, Tanja
2016-03-01
We recently demonstrated the strong potential of using dual-dynamic transformation models when tackling deformable image registration problems involving large anatomical differences. Dual-dynamic transformation models employ two moving grids instead of the common single moving grid for the target image (and single fixed grid for the source image). We previously employed powerful optimization algorithms to make use of the additional flexibility offered by a dual-dynamic transformation model, with good results, directly obtaining insight into the trade-off between important registration objectives as a result of taking a multi-objective approach to optimization. However, optimization has so far been initialized using two regular grids, which still leaves a great potential of dual-dynamic transformation models untapped: a priori grid alignment with image structures/areas that are expected to deform more. This allows (far) fewer grid points to be used, compared to using a sufficiently refined regular grid, leading to (far) more efficient optimization or, equivalently, more accurate results using the same number of grid points. We study the implications of exploiting this potential by experimenting with two new smart grid initialization procedures: one manual, expert-based and one automated, image-feature-based. We consider a CT test case with large differences in bladder volume, with and without a multi-resolution scheme, and find a substantial benefit of using smart grid initialization.
[Ultrasound guided percutaneous nephrolithotripsy].
Guliev, B G
2014-01-01
The study aimed to evaluate the effectiveness and results of ultrasound-guided percutaneous nephrolithotripsy (PNL) for the treatment of patients with large renal pelvic stones. The results of PNL in 138 patients who underwent surgery for kidney stones from 2011 to 2013 were analyzed. Seventy patients (Group 1) underwent surgery with combined ultrasound and radiological guidance, and 68 patients (Group 2) with ultrasound guidance only. The study included patients with renal pelvic stones larger than 2.2 cm requiring the formation of a single percutaneous access. Operative time, the number of intra- and postoperative complications, blood loss, and length of hospital stay were compared. Percutaneous access was successfully established in all patients. Postoperative complications (exacerbation of chronic pyelonephritis, gross hematuria) were observed in 14.3% of patients in Group 1 and in 14.7% of patients in Group 2. Bleeding requiring blood transfusion and injuries of adjacent organs were not registered. The efficacy of PNL in Group 1 was 95.7%; 3 (4.3%) patients required additional interventions. In Group 2, the effectiveness of PNL was 94.1%; 4 (5.9%) patients additionally underwent extracorporeal lithotripsy. There were no significant differences in the effectiveness of PNL, the volume of blood loss, or the duration of hospitalization. Ultrasound-guided PNL can be performed in patients with large pelvic stones and sufficiently dilated renal collecting systems, thereby reducing radiation exposure of patients and medical staff.
Brandt, Adam R; Sun, Yuchi; Bharadwaj, Sharad; Livingston, David; Tan, Eugene; Gordon, Deborah
2015-01-01
Studies of the energy return on investment (EROI) for oil production generally rely on aggregated statistics for large regions or countries. In order to better understand the drivers of the energy productivity of oil production, we use a novel approach that applies a detailed field-level engineering model of oil and gas production to estimate energy requirements of drilling, producing, processing, and transporting crude oil. We examine 40 global oilfields, utilizing detailed data for each field from hundreds of technical and scientific data sources. Resulting net energy return (NER) ratios for studied oil fields range from ≈2 to ≈100 MJ crude oil produced per MJ of total fuels consumed. External energy return (EER) ratios, which compare energy produced to energy consumed from external sources, exceed 1000:1 for fields that are largely self-sufficient. The lowest energy returns are found to come from thermally-enhanced oil recovery technologies. Results are generally insensitive to reasonable ranges of assumptions explored in sensitivity analysis. Fields with very large associated gas production are sensitive to assumptions about surface fluids processing due to the shifts in energy consumed under different gas treatment configurations. This model does not currently include energy invested in building oilfield capital equipment (e.g., drilling rigs), nor does it include other indirect energy uses such as labor or services.
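A hedged reading of the two ratios reported above (the paper's exact system boundaries are more detailed than this; names and values are illustrative):

```python
def net_energy_return(mj_crude_produced: float,
                      mj_self_consumed: float,
                      mj_external_consumed: float) -> float:
    """NER: MJ of crude oil produced per MJ of all fuels consumed."""
    return mj_crude_produced / (mj_self_consumed + mj_external_consumed)

def external_energy_return(mj_crude_produced: float,
                           mj_external_consumed: float) -> float:
    """EER: MJ produced per MJ drawn from external sources; becomes very
    large for fields that are largely self-sufficient."""
    return mj_crude_produced / mj_external_consumed

# A largely self-sufficient field: almost all consumed energy is produced on site.
print(net_energy_return(1000.0, 95.0, 0.5))   # ~10.5
print(external_energy_return(1000.0, 0.5))    # 2000.0
```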
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Min-Lin; Wu, Yung-Hsien; Lin, Chia-Chun
2012-10-15
The structure of SiGe nanocrystals embedded in Al₂O₃ formed by sequential deposition of Al₂O₃/Si/Ge/Al₂O₃ and a subsequent annealing was confirmed by transmission electron microscopy and energy dispersive spectroscopy (EDS), and its application for write-once-read-many-times (WORM) memory devices was explored in this study. By applying a -10 V pulse for 1 s, a large amount of holes injected from the Si substrate are stored in the nanocrystals and, consequently, the current at +1.5 V increases by a factor of 10⁴ as compared to that of the initial state. Even with a smaller -5 V pulse for 1 μs, a sufficiently large current ratio of 36 can still be obtained, verifying the low power operation. Since holes are stored in nanocrystals which are isolated from the Si substrate by Al₂O₃ with good integrity and correspond to a large valence band offset with respect to Al₂O₃, desirable read endurance up to 10⁵ cycles and excellent retention over 100 yr are achieved. Combining these promising characteristics, WORM memory devices are appropriate for high-performance archival storage applications.
AURORAL X-RAYS, COSMIC RAYS, AND RELATED PHENOMENA DURING THE STORM OF FEBRUARY 10-11, 1958
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winckler, J.R.; Peterson, L.; Hoffman, R.
1959-06-01
Balloon observations were made during the auroral storm of February 10-11, 1958, at Minneapolis. Strong x-ray bursts in two groups were detected. The groups appeared coincident with two large magnetic bays, with strong radio noise absorption, and with the passage across the zenith of a very large amount of auroral luminosity. From the x-ray intensity and measured energies, an electron current of 0.6 × 10⁶ electrons/cm²/sec was present. These electrons, ionizing the upper D layer, accounted for the increased cosmic noise absorption. The x-rays themselves carried 1000 times less energy than the electrons and could not provide sufficient ionization for the observed radio absorption. Visual auroral forms during this storm are reported to have lower borders at the 200 to 300 km level. There is thus a difficulty in bringing the electrons to the D layer without an accompanying visible aurora. A cosmic-ray decrease accompanied the storm and was observed to be 4 to 6% at sea level, 21% in the balloon-altitude ionization, and 15% in total energy influx at 55 deg geomagnetic latitude. Compared with the great intensity of the magnetic and auroral phenomena in this storm, the cosmic-ray modulation was not exceptionally large. (auth)
Gahagan, Sheila; Yu, Sunkyung; Kaciroti, Niko; Castillo, Marcela; Lozoff, Betsy
2009-01-01
Iron deficiency remains the most common nutritional deficiency worldwide and supplementation is recommended during periods of high risk, including infancy. However, questions have been raised about possible adverse effects of iron on growth in iron-sufficient (IS) infants and the advisability of across-the-board iron supplementation. This study examined whether short- or long-term growth was impaired in IS infants who received iron supplementation. From a longitudinal study of healthy, breast-fed, low- to middle-income Chilean infants randomly assigned to iron supplementation or usual nutrition at 6 or 12 mo, we retrospectively identified infants meeting criteria for iron sufficiency at the time of random assignment (n = 273). Using multilevel analysis, ponderal and linear growth were modeled before, during, and after iron supplementation up to 10 y in 3 comparisons: 1) iron supplementation compared with usual nutrition from 6 to 12 mo; 2) iron supplementation compared with usual nutrition from 12 to 18 mo; and 3) 15 mg/d of iron as drops compared with iron-fortified formula (12 mg/L). Growth trajectories did not differ during or after supplementation indicating no adverse effect of iron in any comparison. These results suggest that, at least in some environments, iron does not impair growth in IS infants. PMID:19776186
Kahwati, Leila; Viswanathan, Meera; Golin, Carol E; Kane, Heather; Lewis, Megan; Jacobs, Sara
2016-05-04
Interventions to improve medication adherence are diverse and complex. Consequently, synthesizing this evidence is challenging. We aimed to extend the results from an existing systematic review of interventions to improve medication adherence by using qualitative comparative analysis (QCA) to identify necessary or sufficient configurations of behavior change techniques among effective interventions. We used data from 60 studies in a completed systematic review to examine the combinations of nine behavior change techniques (increasing knowledge, increasing awareness, changing attitude, increasing self-efficacy, increasing intention formation, increasing action control, facilitation, increasing maintenance support, and motivational interviewing) among studies demonstrating improvements in adherence. Among the 60 studies, 34 demonstrated improved medication adherence. Among effective studies, increasing patient knowledge was a necessary but not sufficient technique. We identified seven configurations of behavior change techniques sufficient for improving adherence, which together accounted for 26 (76%) of the effective studies. The intervention configuration that included increasing knowledge and self-efficacy was the most empirically relevant, accounting for 17 studies (50%) and uniquely accounting for 15 (44%). This analysis extends the completed review findings by identifying multiple combinations of behavior change techniques that improve adherence. Our findings offer direction for policy makers, practitioners, and future comparative effectiveness research on improving adherence.
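A minimal crisp-set sketch of the sufficiency and coverage logic used in QCA (the review may have used calibrated or fuzzy sets; the study-level data below are hypothetical):

```python
from typing import Dict, List

def consistency(cases: List[Dict[str, bool]], config: List[str], outcome: str) -> float:
    """Sufficiency consistency: of the cases exhibiting every technique in
    `config`, the fraction that also show the outcome."""
    members = [c for c in cases if all(c[t] for t in config)]
    return sum(c[outcome] for c in members) / len(members) if members else 0.0

def coverage(cases: List[Dict[str, bool]], config: List[str], outcome: str) -> float:
    """Coverage: of the cases showing the outcome, the fraction exhibiting the configuration."""
    positives = [c for c in cases if c[outcome]]
    return sum(all(c[t] for t in config) for c in positives) / len(positives) if positives else 0.0

studies = [  # hypothetical study-level indicators
    {"knowledge": True, "self_efficacy": True,  "improved_adherence": True},
    {"knowledge": True, "self_efficacy": False, "improved_adherence": False},
    {"knowledge": True, "self_efficacy": True,  "improved_adherence": True},
]
cfg = ["knowledge", "self_efficacy"]
print(consistency(studies, cfg, "improved_adherence"),
      coverage(studies, cfg, "improved_adherence"))  # 1.0 1.0
```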
Impact of Fetal-Neonatal Iron Deficiency on Recognition Memory at 2 Months of Age.
Geng, Fengji; Mai, Xiaoqin; Zhan, Jianying; Xu, Lin; Zhao, Zhengyan; Georgieff, Michael; Shao, Jie; Lozoff, Betsy
2015-12-01
To assess the effects of fetal-neonatal iron deficiency on recognition memory in early infancy. Perinatal iron deficiency delays or disrupts hippocampal development in animal models and thus may impair related neural functions in human infants, such as recognition memory. Event-related potentials (ERPs) were used in an auditory recognition memory task to compare 2-month-old Chinese infants with iron sufficiency or deficiency at birth. Fetal-neonatal iron deficiency was defined in 2 ways: high zinc protoporphyrin/heme ratio (ZPP/H > 118 μmol/mol) or low serum ferritin (<75 μg/L) in cord blood. The late slow wave was used to measure infant recognition of the mother's voice. ERP patterns differed significantly for fetal-neonatal iron deficiency as defined by high cord ZPP/H but not by low ferritin. Comparing 35 infants with iron deficiency (ZPP/H > 118 μmol/mol) to 92 with lower ZPP/H (iron-sufficient), only the iron-sufficient infants showed larger late slow wave amplitude for the stranger's voice than for the mother's voice in frontal-central and parietal-occipital locations, indicating recognition of the mother's voice. Infants with iron sufficiency showed electrophysiological evidence of recognizing their mother's voice, whereas infants with fetal-neonatal iron deficiency did not. Their poorer auditory recognition memory at 2 months of age is consistent with effects of fetal-neonatal iron deficiency on the developing hippocampus. Copyright © 2015 Elsevier Inc. All rights reserved.
The production of multiprotein complexes in insect cells using the baculovirus expression system.
Abdulrahman, Wassim; Radu, Laura; Garzoni, Frederic; Kolesnikova, Olga; Gupta, Kapil; Osz-Papai, Judit; Berger, Imre; Poterszman, Arnaud
2015-01-01
The production of a homogeneous protein sample in sufficient quantities is not only an essential prerequisite for structural investigations but also a rate-limiting step for many functional studies. In the cell, a large fraction of eukaryotic proteins exists as large multicomponent assemblies with many subunits, which act in concert to catalyze specific activities. Many of these complexes cannot be obtained from endogenous source material, so recombinant expression and reconstitution are required to overcome this bottleneck. This chapter describes current strategies and protocols for the efficient production of multiprotein complexes in large quantities and of high quality, using the baculovirus/insect cell expression system.
Maier, I L; Leyhe, J R; Tsogkas, I; Behme, D; Schregel, K; Knauth, M; Schnieder, M; Liman, J; Psychogios, M-N
2018-05-01
One-stop management of mechanical thrombectomy-eligible patients with large-vessel occlusion represents an innovative approach in acute stroke treatment. This approach reduces door-to-reperfusion times by omitting multidetector CT and using flat detector CT as pre-mechanical thrombectomy imaging. The purpose of this study was to compare the diagnostic performance of the latest-generation flat detector CT with multidetector CT. Prospectively derived data from patients with ischemic stroke with large-vessel occlusion treated with mechanical thrombectomy were analyzed in this monocentric study. All included patients underwent multidetector CT before referral to our comprehensive stroke center and flat detector CT in the angiography suite before mechanical thrombectomy. Diagnosis of early ischemic signs, quantified by the ASPECTS, was compared between modalities using cross tables, the Pearson correlation, and Bland-Altman plots. The predictive value of multidetector CT- and flat detector CT-derived ASPECTS for functional outcome was investigated using area under the receiver operating characteristic curve analysis. Of 25 patients, 24 (96%) had flat detector CT of sufficient diagnostic quality. Median multidetector CT and flat detector CT ASPECTS were 7 (interquartile range, 5.5-9 and 4.25-8, respectively), with a mean interval of 143.6 ± 49.5 minutes between the two modalities. The overall sensitivity was 85.1% and specificity was 83.1% for flat detector CT ASPECTS, with multidetector CT ASPECTS as the reference technique. Multidetector CT and flat detector CT ASPECTS were strongly correlated (r = 0.849, P < .001) and moderately predicted functional outcome (area under the receiver operating characteristic curve, 0.738, P = .007 and 0.715, P = .069, respectively). Determination of ASPECTS on flat detector CT is feasible, showing no significant difference compared with multidetector CT ASPECTS and a similar predictive value for functional outcome. Our findings support the use of flat detector CT for emergency stroke imaging before mechanical thrombectomy to reduce door-to-groin time. © 2018 by American Journal of Neuroradiology.
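The sensitivity and specificity figures above imply a cross-tabulation of binary early-ischemia calls against the multidetector CT reference; the abstract does not state how the comparison was dichotomized, so the sketch below is illustrative only (not the authors' analysis code):

```python
def sensitivity_specificity(fdct_positive: list, mdct_positive: list) -> tuple:
    """Binary early-ischemia calls on flat detector CT versus the
    multidetector CT reference standard (paired, same cases)."""
    tp = sum(f and m for f, m in zip(fdct_positive, mdct_positive))
    fn = sum((not f) and m for f, m in zip(fdct_positive, mdct_positive))
    tn = sum((not f) and (not m) for f, m in zip(fdct_positive, mdct_positive))
    fp = sum(f and (not m) for f, m in zip(fdct_positive, mdct_positive))
    return tp / (tp + fn), tn / (tn + fp)

# hypothetical paired calls
print(sensitivity_specificity([True, True, False, False], [True, False, False, False]))
```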
NASA Astrophysics Data System (ADS)
Wosnik, M.; Bachant, P.
2014-12-01
Cross-flow turbines, often referred to as vertical-axis turbines, show potential for success in marine hydrokinetic (MHK) and wind energy applications, ranging from small- to utility-scale installations in tidal/ocean currents and offshore wind. As turbine designs mature, the research focus is shifting from individual devices to the optimization of turbine arrays. It would be expensive and time-consuming to conduct physical model studies of large arrays at large model scales (to achieve sufficiently high Reynolds numbers), and hence numerical techniques are generally better suited to explore the array design parameter space. However, since the computing power available today is not sufficient to conduct simulations of the flow in and around large arrays of turbines with fully resolved turbine geometries (e.g., grid resolution into the viscous sublayer on turbine blades), the turbines' interaction with the energy resource (water current or wind) needs to be parameterized, or modeled. Models used today, a common one being the actuator disk concept, are not able to predict the unique wake structure generated by cross-flow turbines. This wake structure has been shown to create "constructive" interference in some cases, improving turbine performance in array configurations, in contrast with axial-flow, or horizontal-axis, devices. Towards a more accurate parameterization of cross-flow turbines, an extensive experimental study was carried out using a high-resolution turbine test bed with wake measurement capability in a large cross-section tow tank. The experimental results were then "interpolated" using high-fidelity Navier-Stokes simulations to gain insight into the turbine's near-wake. The study was designed to achieve sufficiently high Reynolds numbers for the results to be Reynolds number independent with respect to turbine performance and wake statistics, such that they can be reliably extrapolated to full scale and used for model validation. The end product of this work will be a cross-flow turbine actuator line model to be used as an extension to the OpenFOAM computational fluid dynamics (CFD) software framework, which will likely require modifications to commonly used dynamic stall models, in consideration of the turbines' high angle-of-attack excursions during normal operation.
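The emphasis on Reynolds-number independence can be illustrated with a diameter-based Reynolds number check; the tow speed and turbine diameter below are hypothetical values, not taken from the study:

```python
def reynolds_number(tow_speed_m_per_s: float, turbine_diameter_m: float,
                    kinematic_viscosity_m2_per_s: float = 1.0e-6) -> float:
    """Re_D = U * D / nu; the default viscosity is for water near 20 degC."""
    return tow_speed_m_per_s * turbine_diameter_m / kinematic_viscosity_m2_per_s

print(f"Re_D = {reynolds_number(1.0, 1.0):.1e}")  # 1.0e+06 for a 1 m turbine towed at 1 m/s
```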
NASA Astrophysics Data System (ADS)
Werner, C. L.; Wegmuller, U.; Strozzi, T.; Wiesmann, A.
2006-12-01
Principal contributors to the noise in differential SAR interferograms are the temporal phase stability of the surface, geometry relating to baseline and surface slope, and propagation path delay variations due to tropospheric water vapor and the ionosphere. Time series analysis of multiple interferograms generated from a stack of SAR SLC images seeks to determine the deformation history of the surface while reducing errors. Only those scatterers within a resolution element that are stable and coherent for each interferometric pair contribute to the desired deformation signal. Interferograms with baselines exceeding 1/3 of the critical baseline have substantial geometrical decorrelation for distributed targets. Short-baseline pairs with multiple reference scenes can be combined using least-squares estimation to obtain a global deformation solution. Alternatively, point-like persistent scatterers can be identified in scenes that do not exhibit the geometrical decorrelation associated with large baselines. In this approach interferograms are formed from a stack of SAR complex images using a single reference scene. Stable distributed scatterer pixels are excluded, however, due to the presence of large baselines. We apply both point-based and short-baseline methodologies and compare results for a stack of fine-beam Radarsat data acquired in 2002-2004 over a rapidly subsiding oil field near Lost Hills, CA. We also investigate the density of point-like scatterers with respect to image resolution. The primary difficulty encountered when applying time series methods is phase unwrapping errors due to spatial and temporal gaps. Phase unwrapping requires sufficient spatial and temporal sampling. Increasing the SAR range bandwidth increases the range resolution as well as the critical interferometric baseline that defines the required satellite orbital tube diameter. Sufficient spatial sampling also permits unwrapping because of the reduced phase-per-pixel gradient. Short time intervals further reduce the differential phase due to deformation when the deformation is continuous. Lower frequency systems (L- vs. C-band) substantially improve the ability to unwrap the phase correctly by directly reducing both the interferometric phase amplitude and temporal decorrelation.
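A minimal sketch of the short-baseline least-squares combination described above, for a single pixel with unwrapped interferometric phases (assumes a connected interferogram network; function and variable names are illustrative):

```python
import numpy as np

def invert_phase_history(pairs, unwrapped_phase):
    """Least-squares phase history at each SAR acquisition.

    pairs           -- (reference, secondary) acquisition indices per interferogram
    unwrapped_phase -- unwrapped differential phase of each interferogram (radians)
    Returns the phase at each acquisition relative to the first (fixed at 0).
    """
    n_ifg = len(pairs)
    n_acq = max(max(p) for p in pairs) + 1
    A = np.zeros((n_ifg, n_acq))
    for k, (i, j) in enumerate(pairs):
        A[k, i], A[k, j] = -1.0, 1.0
    A = A[:, 1:]                                  # first acquisition is the reference
    x, *_ = np.linalg.lstsq(A, np.asarray(unwrapped_phase), rcond=None)
    return np.concatenate(([0.0], x))

# Three acquisitions, three short-baseline pairs (0-1, 1-2, 0-2), phases in radians.
print(invert_phase_history([(0, 1), (1, 2), (0, 2)], [0.5, 0.4, 0.9]))  # [0.0, 0.5, 0.9]
```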
Characterizing uncertain sea-level rise projections to support investment decisions.
Sriver, Ryan L; Lempert, Robert J; Wikman-Svahn, Per; Keller, Klaus
2018-01-01
Many institutions worldwide are considering how to include uncertainty about future changes in sea-levels and storm surges into their investment decisions regarding large capital infrastructures. Here we examine how to characterize deeply uncertain climate change projections to support such decisions using Robust Decision Making analysis. We address questions regarding how to confront the potential for future changes in low probability but large impact flooding events due to changes in sea-levels and storm surges. Such extreme events can affect investments in infrastructure but have proved difficult to consider in such decisions because of the deep uncertainty surrounding them. This study utilizes Robust Decision Making methods to address two questions applied to investment decisions at the Port of Los Angeles: (1) Under what future conditions would a Port of Los Angeles decision to harden its facilities against extreme flood scenarios at the next upgrade pass a cost-benefit test, and (2) Do sea-level rise projections and other information suggest such conditions are sufficiently likely to justify such an investment? We also compare and contrast the Robust Decision Making methods with a full probabilistic analysis. These two analysis frameworks result in similar investment recommendations for different idealized future sea-level projections, but provide different information to decision makers and envision different types of engagement with stakeholders. In particular, the full probabilistic analysis begins by aggregating the best scientific information into a single set of joint probability distributions, while the Robust Decision Making analysis identifies scenarios where a decision to invest in near-term response to extreme sea-level rise passes a cost-benefit test, and then assembles scientific information of differing levels of confidence to help decision makers judge whether or not these scenarios are sufficiently likely to justify making such investments. Results highlight the highly-localized and context dependent nature of applying Robust Decision Making methods to inform investment decisions.
NASA Technical Reports Server (NTRS)
Przekop, Adam; Wu, Hsi-Yung T.; Shaw, Peter
2014-01-01
The Environmentally Responsible Aviation Project aims to develop aircraft technologies enabling significant fuel burn and community noise reductions. Small incremental changes to the conventional metallic alloy-based 'tube and wing' configuration are not sufficient to achieve the desired metrics. One of the airframe concepts that might dramatically improve aircraft performance is a composite-based hybrid wing body configuration. Such a concept, however, presents inherent challenges stemming from, among other factors, the necessity to transfer wing loads through the entire center fuselage section which accommodates a pressurized cabin confined by flat or nearly flat panels. This paper discusses a nonlinear finite element analysis of a large-scale test article being developed to demonstrate that the Pultruded Rod Stitched Efficient Unitized Structure concept can meet these challenging demands of the next generation airframes. There are specific reasons why geometrically nonlinear analysis may be warranted for the hybrid wing body flat panel structure. In general, for sufficiently high internal pressure and/or mechanical loading, energy related to the in-plane strain may become significant relative to the bending strain energy, particularly in thin-walled areas such as the minimum gage skin extensively used in the structure under analysis. To account for this effect, a geometrically nonlinear strain-displacement relationship is needed to properly couple large out-of-plane and in-plane deformations. Depending on the loading, this nonlinear coupling mechanism manifests itself in a distinct manner in compression- and tension-dominated sections of the structure. Under significant compression, nonlinear analysis is needed to accurately predict loss of stability and postbuckled deformation. Under significant tension, the nonlinear effects account for suppression of the out-of-plane deformation due to in-plane stretching. By comparing the present results with the previously published preliminary linear analysis, it is demonstrated in the present paper that neglecting nonlinear effects for the structure and loads of interest can lead to appreciable loss in analysis fidelity.
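The nonlinear membrane-bending coupling invoked above is of the kind captured by, for example, the von Kármán plate strain-displacement relations (given here for orientation only; the specific kinematics used in the analysis are not stated in the abstract):

\[
\varepsilon_x = \frac{\partial u}{\partial x} + \frac{1}{2}\left(\frac{\partial w}{\partial x}\right)^{2},\qquad
\varepsilon_y = \frac{\partial v}{\partial y} + \frac{1}{2}\left(\frac{\partial w}{\partial y}\right)^{2},\qquad
\gamma_{xy} = \frac{\partial u}{\partial y} + \frac{\partial v}{\partial x} + \frac{\partial w}{\partial x}\,\frac{\partial w}{\partial y}.
\]

The quadratic terms in the out-of-plane deflection \(w\) are what couple stretching and bending: under tension they stiffen the panel and suppress out-of-plane deformation, while under compression they govern loss of stability and the postbuckled response.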
TIDAL DISSIPATION COMPARED TO SEISMIC DISSIPATION: IN SMALL BODIES, EARTHS, AND SUPER-EARTHS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Efroimsky, Michael, E-mail: michael.efroimsky@usno.navy.mil
2012-02-20
While the seismic quality factor and phase lag are defined solely by the bulk properties of the mantle, their tidal counterparts are determined by both the bulk properties and the size effect (self-gravitation of a body as a whole). For a qualitative estimate, we model the body with a homogeneous sphere, and express the tidal phase lag through the lag in a sample of material. Although simplistic, our model is sufficient to understand that the lags are not identical. The difference emerges because self-gravitation pulls the tidal bulge down. At low frequencies, this reduces strain and the damping rate, making tidal damping less efficient in larger objects. At higher frequencies, competition between self-gravitation and rheology becomes more complex, though for sufficiently large super-Earths the same rule applies: the larger the planet, the weaker the tidal dissipation in it. Being negligible for small terrestrial planets and moons, the difference between the seismic and tidal lagging (and likewise between the seismic and tidal damping) becomes very considerable for large exoplanets (super-Earths). In those, it is much lower than what one might expect from using a seismic quality factor. The tidal damping rate deviates from the seismic damping rate, especially in the zero-frequency limit, and this difference takes place for bodies of any size. So the equal in magnitude but opposite in sign tidal torques, exerted on one another by the primary and the secondary, have their orbital averages going smoothly through zero as the secondary crosses the synchronous orbit. We describe the mantle rheology with the Andrade model, allowing it to lean toward the Maxwell model at the lowest frequencies. To implement this additional flexibility, we reformulate the Andrade model by endowing it with a free parameter ζ, which is the ratio of the anelastic timescale to the viscoelastic Maxwell time of the mantle. Some uncertainty in this parameter's frequency dependence does not influence our principal conclusions.
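For reference, one common time-domain form of the Andrade creep compliance, written so that the parameter ζ introduced above appears explicitly (a sketch; the paper's exact normalization may differ):

\[
J(t) = J\left[\,1 + \left(\frac{t}{\tau_A}\right)^{\alpha} + \frac{t}{\tau_M}\,\right],
\qquad \tau_M = \eta J, \qquad \zeta \equiv \frac{\tau_A}{\tau_M},
\]

so that a large ζ suppresses the anelastic (Andrade) term relative to the elastic and viscous terms and the response leans toward Maxwell behaviour, which is the low-frequency limit the abstract describes.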