Instability of Poiseuille flow at extreme Mach numbers: linear analysis and simulations.
Xie, Zhimin; Girimaji, Sharath S
2014-04-01
We develop the perturbation equations to describe instability evolution in Poiseuille flow at the limit of very high Mach numbers. At this limit the equation governing the flow is the pressure-released Navier-Stokes equation. The ensuing semianalytical solution is compared against simulations performed using the gas-kinetic method (GKM), resulting in excellent agreement. A similar comparison between analytical and computational results of small perturbation growth is performed at the incompressible (zero Mach number) limit, again leading to excellent agreement. The study accomplishes two important goals: it (i) contrasts the small perturbation evolution in Poiseuille flows at extreme Mach numbers and (ii) provides important verification of the GKM simulation scheme.
Characterization and prediction of extreme events in turbulence
NASA Astrophysics Data System (ADS)
Fonda, Enrico; Iyer, Kartik P.; Sreenivasan, Katepalli R.
2017-11-01
Extreme events in Nature, such as tornadoes, large floods and strong earthquakes, are rare but can have devastating consequences. The predictability of these events is very limited at present. Extreme events in turbulence are the very large, intermittent events at small scales. We examine events in the energy dissipation rate and enstrophy which are several tens, hundreds, or even thousands of times the mean value. To this end we use our DNS database of homogeneous and isotropic turbulence with Taylor Reynolds numbers spanning a decade, computed with different small-scale resolutions and different box sizes, and study the predictability of these events using machine learning. We start with an aggressive data augmentation to virtually increase the number of these rare events by two orders of magnitude and train a deep convolutional neural network to predict their occurrence in an independent data set. The goal of the work is to explore whether extreme events can be predicted with greater assurance than can be done by conventional methods (e.g., D.A. Donzis & K.R. Sreenivasan, J. Fluid Mech. 647, 13-26, 2010).
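The abstract above describes augmenting rare-event samples and training a deep convolutional network to classify whether an extreme dissipation event occurs. As a purely illustrative sketch (the authors' actual architecture, augmentation scheme, and data pipeline are not given here), a minimal 3D CNN binary classifier might look like the following; all layer sizes, patch dimensions, and names are assumptions.

```python
# Minimal illustrative sketch (not the authors' model): a small 3D CNN that
# classifies a DNS subvolume as "extreme event" vs "not extreme".
import torch
import torch.nn as nn

class ExtremeEventCNN(nn.Module):
    def __init__(self, in_channels=3):            # e.g. three velocity components
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8 * 8, 1)   # assumes 32^3 input patches

    def forward(self, x):
        h = self.features(x)
        return self.classifier(h.flatten(start_dim=1))   # raw logit

def augment(patch):
    """Crude data augmentation: random axis flips of a 3D patch (C, D, H, W)."""
    for axis in (1, 2, 3):
        if torch.rand(1).item() < 0.5:
            patch = torch.flip(patch, dims=(axis,))
    return patch

# Example usage with random stand-in data (real inputs would be DNS subvolumes).
model = ExtremeEventCNN()
x = augment(torch.randn(3, 32, 32, 32)).unsqueeze(0)
label = torch.tensor([[1.0]])                    # 1 = extreme event present
loss = nn.BCEWithLogitsLoss()(model(x), label)
loss.backward()
```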
Extremism without extremists: Deffuant model with emotions
NASA Astrophysics Data System (ADS)
Sobkowicz, Pawel
2015-03-01
The frequent occurrence of extremist views in many social contexts, often growing from small minorities to an almost total majority, poses a significant challenge for democratic societies. The phenomenon can be described within the sociophysical paradigm. We present a modified version of the continuous bounded confidence opinion model, including a simple description of the influence of emotions on tolerances, and eventually on the evolution of opinions. Allowing for a psychologically based correlation between extreme opinions, high emotions, and low tolerance for other people's views leads to quick dominance of the extreme views within the studied model, without introducing a special class of agents, as has been done in previous works. This dominance occurs even if the initial number of people holding extreme opinions is very small. Possible suggestions related to mitigation of the process are briefly discussed.
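For orientation, the following is a minimal sketch of the baseline Deffuant bounded-confidence update that the abstract above builds on. The emotion-dependent tolerance of the paper is only mimicked by the hypothetical tolerance() function below; its form is an assumption, not the model actually used by the author.

```python
# Illustrative sketch of the baseline Deffuant bounded-confidence update.
# The paper's emotion-dependent tolerance is represented only by the assumed
# placeholder tolerance() below; details differ from the actual model.
import random

def tolerance(opinion, base_d=0.3):
    """Assumed placeholder: agents with extreme opinions are less tolerant."""
    return base_d * (1.0 - abs(opinion))          # opinions lie in [-1, 1]

def deffuant_step(opinions, mu=0.5):
    i, j = random.sample(range(len(opinions)), 2)
    xi, xj = opinions[i], opinions[j]
    if abs(xi - xj) < min(tolerance(xi), tolerance(xj)):   # bounded confidence
        opinions[i] = xi + mu * (xj - xi)
        opinions[j] = xj + mu * (xi - xj)

opinions = [random.uniform(-1, 1) for _ in range(1000)]
for _ in range(200_000):
    deffuant_step(opinions)
print(sum(abs(x) > 0.8 for x in opinions), "agents near the extremes")
```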
The Characteristics of Extreme Erosion Events in a Small Mountainous Watershed
Fang, Nu-Fang; Shi, Zhi-Hua; Yue, Ben-Jiang; Wang, Ling
2013-01-01
A large amount of soil loss is caused by a small number of extreme events that are mainly responsible for the time compression of geomorphic processes. The aim of this study was to analyze suspended sediment transport during extreme erosion events in a mountainous watershed. Field measurements were conducted in Wangjiaqiao, a small agricultural watershed (16.7 km2) in the Three Gorges Area (TGA) of China. Continuous records were used to analyze suspended sediment transport regimes and assess the sediment loads of 205 rainfall–runoff events during a period of 16 hydrological years (1989–2004). Extreme events were defined as the largest events, ranked in order of their absolute magnitude (representing the 95th percentile). Ten extreme erosion events from 205 erosion events, representing 83.8% of the total suspended sediment load, were selected for study. The results of canonical discriminant analysis indicated that extreme erosion events are characterized by high maximum flood-suspended sediment concentrations, high runoff coefficients, and high flood peak discharge, which could possibly be explained by the transport of deposited sediment within the stream bed during previous events or bank collapses. PMID:24146898
Extreme event statistics in a drifting Markov chain
NASA Astrophysics Data System (ADS)
Kindermann, Farina; Hohmann, Michael; Lausch, Tobias; Mayer, Daniel; Schmidt, Felix; Widera, Artur
2017-07-01
We analyze extreme event statistics of experimentally realized Markov chains with various drifts. Our Markov chains are individual trajectories of a single atom diffusing in a one-dimensional periodic potential. Based on more than 500 individual atomic traces we verify the applicability of the Sparre Andersen theorem to our system despite the presence of a drift. We present detailed analysis of four different rare-event statistics for our system: the distributions of extreme values, of record values, of extreme value occurrence in the chain, and of the number of records in the chain. We observe that, for our data, the shape of the extreme event distributions is dominated by the underlying exponential distance distribution extracted from the atomic traces. Furthermore, we find that even small drifts influence the statistics of extreme events and record values, which is supported by numerical simulations, and we identify cases in which the drift can be determined without information about the underlying random variable distributions. Our results facilitate the use of extreme event statistics as a signal for small drifts in correlated trajectories.
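As a toy illustration of one of the rare-event statistics mentioned above (the number of records in a chain), the sketch below counts record values in simulated random walks with a symmetric-exponential jump distribution and an optional small drift. This is not the experimental analysis of the paper; the jump distribution, drift values, and trajectory lengths are assumptions.

```python
# Illustrative sketch: count record values in simulated drifting random walks.
import random

def count_records(n_steps, drift=0.0):
    x, current_max, records = 0.0, 0.0, 0
    for _ in range(n_steps):
        x += random.expovariate(1.0) * random.choice((1, -1)) + drift
        if x > current_max:                 # a new (upper) record value
            current_max = x
            records += 1
    return records

trials = 2000
for drift in (0.0, 0.05):
    mean_records = sum(count_records(500, drift) for _ in range(trials)) / trials
    print(f"drift={drift}: mean number of records = {mean_records:.1f}")
```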
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections will result in suboptimal performance.
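To make the parameters named above concrete (number of hidden neurons, steepness of the sigmoidal activation, regularization), here is a minimal, generic ELM sketch in which hidden weights are random and only the output weights are solved by regularized least squares. It is illustrative only; the parameter settings and the paper's specific sample-size/neuron relationship are not reproduced here.

```python
# Minimal sketch of a regularized extreme learning machine with a sigmoid
# hidden layer; sizes and parameter values are illustrative assumptions.
import numpy as np

def elm_train(X, Y, n_hidden=200, steepness=1.0, reg=1e-3, seed=None):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))      # random input weights
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))   # sigmoidal activations
    # Regularized least squares (ridge) for the output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta, steepness

def elm_predict(X, W, b, beta, steepness):
    H = 1.0 / (1.0 + np.exp(-steepness * (X @ W + b)))
    return H @ beta

# Toy usage: 100 samples, 10 features, one-hot labels for 3 classes.
rng = np.random.default_rng(0)
X, Y = rng.standard_normal((100, 10)), np.eye(3)[rng.integers(0, 3, 100)]
params = elm_train(X, Y, n_hidden=50, seed=0)
pred_class = elm_predict(X, *params).argmax(axis=1)
```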
ERIC Educational Resources Information Center
Gersten, Russell; Rolfhus, Eric; Clarke, Ben; Decker, Lauren E.; Wilkins, Chuck; Dimino, Joseph
2015-01-01
Replication studies are extremely rare in education. This randomized controlled trial (RCT) is a scale-up replication of Fuchs et al., which in a sample of 139 found a statistically significant positive impact for Number Rockets, a small-group intervention for at-risk first graders that focused on building understanding of number operations. The…
Repeated Small Bowel Obstruction Caused by Chestnut Ingestion without the Formation of Phytobezoars.
Satake, Ryu; Chinda, Daisuke; Shimoyama, Tadashi; Satake, Miwa; Oota, Rie; Sato, Satoshi; Yamai, Kiyonori; Hachimori, Hisashi; Okamoto, Yutaka; Yamada, Kyogo; Matsuura, Osamu; Hashizume, Tadashi; Soma, Yasushi; Fukuda, Shinsaku
2016-01-01
A small number of cases of small bowel obstruction caused by foods without the formation of phytobezoars have been reported. Repeated small bowel obstruction due to the ingestion of the same food is extremely rare. We present the case of a 63-year-old woman who developed small bowel obstruction twice due to the ingestion of chestnuts without the formation of phytobezoars. This is the first reported case of repeated small bowel obstruction caused by chestnut ingestion. Careful interviews are necessary to determine the meal history of elderly patients and psychiatric patients.
ERIC Educational Resources Information Center
Noley, Grayson
This paper discusses issues in the recruitment, retention, and training of Native college students as teachers and school administrators. The number of Native educational professionals serving schools for Native students is extremely small, and there is evidence that even this number is declining relative to the increasing Native school…
USDA-ARS?s Scientific Manuscript database
For any analytical system the population mean (mu) number of entities (e.g., cells or molecules) per tested volume, surface area, or mass also defines the population standard deviation (sigma = square root of mu ). For a preponderance of analytical methods, sigma is very small relative to mu due to...
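The truncated abstract above rests on the Poisson relation sigma = sqrt(mu). As a quick illustrative check (not from the manuscript itself), the relative standard deviation sigma/mu = 1/sqrt(mu) is small only when the mean count per tested volume is large:

```python
# Illustrative numerical check of the stated relation: for a Poisson count with
# population mean mu, sigma = sqrt(mu), so sigma/mu = 1/sqrt(mu).
import math

for mu in (4, 25, 100, 10_000):
    sigma = math.sqrt(mu)
    print(f"mu={mu:>6}  sigma={sigma:7.2f}  sigma/mu={sigma / mu:.3f}")
```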
Linear mode stability of the Kerr-Newman black hole and its quasinormal modes.
Dias, Óscar J C; Godazgar, Mahdi; Santos, Jorge E
2015-04-17
We provide strong evidence that, up to 99.999% of extremality, Kerr-Newman black holes (KNBHs) are linear mode stable within Einstein-Maxwell theory. We derive and solve, numerically, a coupled system of two partial differential equations for two gauge invariant fields that describe the most general linear perturbations of a KNBH. We determine the quasinormal mode (QNM) spectrum of the KNBH as a function of its three parameters and find no unstable modes. In addition, we find that the lowest radial overtone QNMs that are connected continuously to the gravitational ℓ=m=2 Schwarzschild QNM dominate the spectrum for all values of the parameter space (m is the azimuthal number of the wave function and ℓ measures the number of nodes along the polar direction). Furthermore, the (lowest radial overtone) QNMs with ℓ=m approach Reω=mΩH(ext) and Imω=0 at extremality; this is a universal property for any field of arbitrary spin |s|≤2 propagating on a KNBH background (ω is the wave frequency and ΩH(ext) the black hole angular velocity at extremality). We compare our results with available perturbative results in the small charge or small rotation regimes and find good agreement.
NASA Astrophysics Data System (ADS)
Šimkanin, Ján; Kyselica, Juraj
2017-12-01
Numerical simulations of the geodynamo are becoming more realistic because of advances in computer technology. Here, the geodynamo model is investigated numerically at extremely low Ekman and magnetic Prandtl numbers using the PARODY dynamo code. These parameters are more realistic than those used in previous numerical studies of the geodynamo. Our model is based on the Boussinesq approximation and the temperature gradient between the upper and lower boundaries is a source of convection. This study attempts to answer the question of how realistic the geodynamo models are. Numerical results show that our dynamo belongs to the strong-field dynamos. The generated magnetic field is dipolar and large-scale, while convection is small-scale and sheet-like flows (plumes) are preferred to columnar convection. The scales of the magnetic and velocity fields are separated, which enables hydromagnetic dynamos to maintain the magnetic field at low magnetic Prandtl numbers. The inner core rotation rate is lower than that in previous geodynamo models. On the other hand, the dimensional magnitudes of the velocity and magnetic fields, and those of the magnetic and viscous dissipation, are larger than those expected in the Earth's core due to the chosen parameter range.
Epidemiology of extremity fractures in the Netherlands.
Beerekamp, M S H; de Muinck Keizer, R J O; Schep, N W L; Ubbink, D T; Panneman, M J M; Goslings, J C
2017-07-01
Insight into epidemiologic data of extremity fractures is relevant to identify people at risk. By analyzing age- and gender-specific fracture incidence and treatment patterns we may adjust future policy, take preventive measures and optimize health care management. Current epidemiologic data on extremity fractures and their treatment are scarce, outdated or aimed at a small spectrum of fractures. The aim of this study was to assess trends in incidence and treatment of extremity fractures between 2004 and 2012 in relation to gender and age. We used a combination of national registries of patients aged ≥16 years with extremity fractures. Fractures were coded by the International Classification of Diseases (ICD) 10, and allocated to an anatomic region. ICD-10 codes were used for combining the data of the registries. Absolute numbers, incidences, the number of patients treated in university hospitals and the number of surgically treated patients were reported. A binary logistic regression was used to calculate trends during the study period. From 2004 to 2012 the Dutch population aged ≥16 years grew from 13,047,018 to 13,639,412 inhabitants, particularly in the higher age groups of 46 years and older. The absolute number of extremity fractures increased significantly from 129,188 to 176,129 (OR 1.308 [1.299-1.318]), except for forearm and lower leg fractures. Incidences increased significantly (3-4%) for wrist, hand/finger, hip/upper leg, ankle and foot/toe fractures. In contrast to the older age categories (66 years and older), in the younger age categories (16 to 35 years) extremity fractures were more frequent in men than in women. Treatments gradually moved towards non-university hospitals for all except forearm fractures. Both relative and absolute numbers increased for surgical treatment of clavicle/shoulder, forearm, wrist and hand/finger fractures. Conversely, lower extremity fractures showed an increase in non-surgical treatment, except for lower leg fractures. During the study period, we observed an increasing incidence of extremity fractures and a shift towards surgical treatment. Patient numbers in university hospitals declined. If these trends continue, policy makers would be well advised to consider the changing demands in extremity fracture treatment and pro-actively increase capacity and resources. Copyright © 2017 Elsevier Ltd. All rights reserved.
Design of a New Ultracompact Resonant Plasmonic Multi-Analyte Label-Free Biosensing Platform
De Palo, Maripina; Ciminelli, Caterina
2017-01-01
In this paper, we report on the design of a bio-multisensing platform for the selective label-free detection of protein biomarkers, carried out through a 3D numerical algorithm. The platform includes a number of biosensors, each based on a plasmonic nanocavity consisting of a periodic metal structure to be deposited on a silicon oxide substrate. Light is strongly confined in a region with an extremely small size (≈1.57 μm2), to enhance the light-matter interaction. A surface sensitivity Ss = 1.8 nm/nm has been calculated together with a detection limit of 128 pg/mm2. Such performance, together with the extremely small footprint, allows the integration of several devices on a single chip to realize extremely compact lab-on-chip microsystems. In addition, each sensing element of the platform has good chemical stability, which is guaranteed by the selection of gold for its fabrication. PMID:28783075
ERIC Educational Resources Information Center
Brewin, Chris R.; Gregory, James D.; Lipton, Michelle; Burgess, Neil
2010-01-01
Involuntary images and visual memories are prominent in many types of psychopathology. Patients with posttraumatic stress disorder, other anxiety disorders, depression, eating disorders, and psychosis frequently report repeated visual intrusions corresponding to a small number of real or imaginary events, usually extremely vivid, detailed, and…
Particulate matter (PM) is a complex mixture of extremely small particles and liquid droplets made up of a number of components including elemental carbon, organic chemicals, metals, acids (such as nitrates and sulfates), and soil and dust particles. Epidemiological studies con...
Rainfall extremes from TRMM data and the Metastatistical Extreme Value Distribution
NASA Astrophysics Data System (ADS)
Zorzetto, Enrico; Marani, Marco
2017-04-01
A reliable quantification of the probability of weather extremes occurrence is essential for designing resilient water infrastructures and hazard mitigation measures. However, it is increasingly clear that the presence of inter-annual climatic fluctuations determines a substantial long-term variability in the frequency of occurrence of extreme events. This circumstance questions the foundation of the traditional extreme value theory, hinged on stationary Poisson processes or on asymptotic assumptions to derive the Generalized Extreme Value (GEV) distribution. We illustrate here, with application to daily rainfall, a new approach to extreme value analysis, the Metastatistical Extreme Value Distribution (MEVD). The MEVD relaxes the above assumptions and is based on the whole distribution of daily rainfall events, thus allowing optimal use of all available observations. Using a global dataset of rain gauge observations, we show that the MEVD significantly outperforms the Generalized Extreme Value distribution, particularly for long average recurrence intervals and when small samples are available. The latter property suggests MEVD to be particularly suited for applications to satellite rainfall estimates, which only cover two decades, thus making extreme value estimation extremely challenging. Here we apply MEVD to the TRMM TMPA 3B42 product, an 18-year dataset of remotely-sensed daily rainfall providing a quasi-global coverage. Our analyses yield a global scale mapping of daily rainfall extremes and of their distributional tail properties, bridging the existing large gaps in ground-based networks. Finally, we illustrate how our global-scale analysis can provide insight into how properties of local rainfall regimes affect tail estimation uncertainty when using the GEV or MEVD approach. We find a dependence of the estimation uncertainty, for both the GEV- and MEV-based approaches, on the average annual number and on the inter-annual variability of rainy days. In particular, estimation uncertainty decreases 1) as the mean annual number of wet days increases, and 2) as the variability in the number of rainy days, expressed by its coefficient of variation, decreases. We tentatively explain this behavior in terms of the assumptions underlying the two approaches.
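To make the GEV/MEVD comparison above concrete, the sketch below contrasts the two estimators on synthetic daily rainfall. The MEV construction shown (per-year Weibull fits to wet-day amounts, compounded over each year's number of wet days, then averaged across years) is one common formulation and is stated here as an assumption, not as code from the study; the wet-day threshold and synthetic record are also assumptions.

```python
# Illustrative sketch contrasting a GEV fit to annual maxima with an MEV-style
# estimate built from ordinary (non-extreme) daily rainfall values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
years = 20
daily = [rng.weibull(0.8, size=365) * 10.0 for _ in range(years)]  # synthetic record

# GEV: fit the distribution of annual maxima directly.
annual_max = np.array([y.max() for y in daily])
gev_shape, gev_loc, gev_scale = stats.genextreme.fit(annual_max)

def mev_cdf(x, daily_years, wet_threshold=1.0):
    """MEV-style CDF: average over years of F_j(x)**n_j with per-year Weibull fits."""
    terms = []
    for y in daily_years:
        wet = y[y > wet_threshold]
        c, _, scale = stats.weibull_min.fit(wet, floc=0.0)   # shape and scale only
        terms.append(stats.weibull_min.cdf(x, c, loc=0.0, scale=scale) ** len(wet))
    return np.mean(terms)

x = 80.0  # a daily rainfall value of interest (same units as the synthetic data)
print("GEV  P(annual max <= x):", stats.genextreme.cdf(x, gev_shape, gev_loc, gev_scale))
print("MEV  P(annual max <= x):", mev_cdf(x, daily))
```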
An Exploration of Equipping a Future Force Warrior Small Combat Unit with Non-Lethal Weapons
2006-06-01
when Russian forces used the chemical fentanyl against Chechen hostage-takers in a Moscow theater. Unfortunately, nearly 130 of the 800-900 hostages...died of overdoses and an undisclosed number were left with permanent disabilities. Obviously, extreme care must be exercised in the employment of
Incorporating Biological Knowledge into Evaluation of Causal Regulatory Hypothesis
NASA Technical Reports Server (NTRS)
Chrisman, Lonnie; Langley, Pat; Bay, Stephen; Pohorille, Andrew; DeVincenzi, D. (Technical Monitor)
2002-01-01
Biological data can be scarce and costly to obtain. The small number of samples available typically limits statistical power and makes reliable inference of causal relations extremely difficult. However, we argue that statistical power can be increased substantially by incorporating prior knowledge and data from diverse sources. We present a Bayesian framework that combines information from different sources and we show empirically that this lets one make correct causal inferences with small sample sizes that otherwise would be impossible.
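As a generic illustration of the point made above (not the paper's Bayesian framework), a conjugate normal update shows how informative prior knowledge sharpens inference from a small sample: the posterior precision is the sum of the prior precision and the data precision. All numbers below are made up for illustration.

```python
# Generic conjugate-normal illustration of combining prior knowledge with a
# small sample; not the framework of the paper.
import math

def posterior_normal(prior_mean, prior_sd, sample_mean, sample_sd, n):
    prior_prec = 1.0 / prior_sd**2
    data_prec = n / sample_sd**2
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * sample_mean)
    return post_mean, math.sqrt(post_var)

# Five noisy measurements of a putative regulatory effect, plus an informative prior.
mean, sd = posterior_normal(prior_mean=0.0, prior_sd=0.5,
                            sample_mean=0.8, sample_sd=1.0, n=5)
print(f"posterior mean {mean:.2f} +/- {sd:.2f}")
```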
A Framework to Understand Extreme Space Weather Event Probability.
Jonas, Seth; Fronczyk, Kassandra; Pratt, Lucas M
2018-03-12
An extreme space weather event has the potential to disrupt or damage infrastructure systems and technologies that many societies rely on for economic and social well-being. Space weather events occur regularly, but extreme events are less frequent, with a small number of historical examples over the last 160 years. During the past decade, published works have (1) examined the physical characteristics of the extreme historical events and (2) discussed the probability or return rate of select extreme geomagnetic disturbances, including the 1859 Carrington event. Here we present initial findings on a unified framework approach to visualize space weather event probability, using a Bayesian model average, in the context of historical extreme events. We present disturbance storm time (Dst) probability (a proxy for geomagnetic disturbance intensity) across multiple return periods and discuss parameters of interest to policymakers and planners in the context of past extreme space weather events. We discuss the current state of these analyses, their utility to policymakers and planners, the current limitations when compared to other hazards, and several gaps that need to be filled to enhance space weather risk assessments. © 2018 Society for Risk Analysis.
Design and Manufacturing of Extremely Low Mass Flight Systems
NASA Technical Reports Server (NTRS)
Johnson, Michael R.
2002-01-01
Extremely small flight systems pose some unusual design and manufacturing challenges. The small size of the components that make up the system generally must be built with extremely tight tolerances to maintain the functionality of the assembled item. Additionally, the total mass of the system is extremely sensitive to what would be considered small perturbations in a larger flight system. The MUSES C mission, designed, built, and operated by Japan, has a small rover provided by NASA that falls into this small flight system category. This NASA-provided rover is used as a case study of an extremely small flight system design. The issues that were encountered with the rover portion of the MUSES C program are discussed and conclusions about the recommended mass margins at different stages of a small flight system project are presented.
Small-scale dynamo at low magnetic Prandtl numbers
NASA Astrophysics Data System (ADS)
Schober, Jennifer; Schleicher, Dominik; Bovino, Stefano; Klessen, Ralf S.
2012-12-01
The present-day Universe is highly magnetized, even though the first magnetic seed fields were most probably extremely weak. To explain the growth of the magnetic field strength over many orders of magnitude, fast amplification processes need to operate. The most efficient mechanism known today is the small-scale dynamo, which converts turbulent kinetic energy into magnetic energy leading to an exponential growth of the magnetic field. The efficiency of the dynamo depends on the type of turbulence indicated by the slope of the turbulence spectrum v(ℓ) ∝ ℓ^ϑ, where v(ℓ) is the eddy velocity at a scale ℓ. We explore turbulent spectra ranging from incompressible Kolmogorov turbulence with ϑ=1/3 to highly compressible Burgers turbulence with ϑ=1/2. In this work, we analyze the properties of the small-scale dynamo for low magnetic Prandtl numbers Pm, which denotes the ratio of the magnetic Reynolds number, Rm, to the hydrodynamical one, Re. We solve the Kazantsev equation, which describes the evolution of the small-scale magnetic field, using the WKB approximation. In the limit of low magnetic Prandtl numbers, the growth rate is proportional to Rm^((1-ϑ)/(1+ϑ)). We furthermore discuss the critical magnetic Reynolds number Rm_crit, which is required for small-scale dynamo action. The value of Rm_crit is roughly 100 for Kolmogorov turbulence and 2700 for Burgers. Furthermore, we discuss that Rm_crit provides a stronger constraint in the limit of low Pm than it does for large Pm. We conclude that the small-scale dynamo can operate in the regime of low magnetic Prandtl numbers if the magnetic Reynolds number is large enough. Thus, the magnetic field amplification on small scales can take place in a broad range of physical environments and amplify weak magnetic seed fields on short time scales.
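A quick worked evaluation of the scaling quoted above, Rm^((1-ϑ)/(1+ϑ)), for the two limiting spectra discussed in the abstract:

```python
# Worked check of the quoted low-Pm scaling: exponent (1 - theta)/(1 + theta)
# is 1/2 for Kolmogorov turbulence (theta = 1/3) and 1/3 for Burgers (theta = 1/2).
for name, theta in (("Kolmogorov", 1/3), ("Burgers", 1/2)):
    exponent = (1 - theta) / (1 + theta)
    print(f"{name}: growth rate ~ Rm^{exponent:.3f}")
```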
Reynolds number of transition and self-organized criticality of strong turbulence.
Yakhot, Victor
2014-10-01
A turbulent flow is characterized by velocity fluctuations excited in an extremely broad interval of wave numbers k>Λf, where Λf is a relatively small set of the wave vectors where energy is pumped into fluid by external forces. Iterative averaging over small-scale velocity fluctuations from the interval Λf
Estimating forest attribute parameters for small areas using nearest neighbors techniques
Ronald E. McRoberts
2012-01-01
Nearest neighbors techniques have become extremely popular, particularly for use with forest inventory data. With these techniques, a population unit prediction is calculated as a linear combination of observations for a selected number of population units in a sample that are most similar, or nearest, in a space of ancillary variables to the population unit requiring...
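The truncated abstract above describes the core of nearest-neighbors prediction for forest inventory: a population unit's attribute is predicted as a linear combination of the observations for its most similar sample units in the space of ancillary variables. The sketch below is a generic k-NN predictor with inverse-distance weights; the weighting scheme, feature space, and data are assumptions, not taken from the paper.

```python
# Illustrative k-nearest-neighbours prediction of a forest attribute (e.g. stand
# volume) from its k most similar sample units in ancillary-variable space.
import numpy as np

def knn_predict(x_target, X_sample, y_sample, k=5, eps=1e-9):
    d = np.linalg.norm(X_sample - x_target, axis=1)       # ancillary-space distances
    nearest = np.argsort(d)[:k]
    w = 1.0 / (d[nearest] + eps)                          # inverse-distance weights
    return np.sum(w * y_sample[nearest]) / np.sum(w)      # linear combination

rng = np.random.default_rng(2)
X_sample = rng.uniform(size=(300, 4))                     # e.g. spectral bands
y_sample = 100 * X_sample[:, 0] + 20 * X_sample[:, 1] + rng.normal(0, 5, 300)
print(knn_predict(np.array([0.4, 0.6, 0.1, 0.9]), X_sample, y_sample, k=7))
```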
NASA Astrophysics Data System (ADS)
Wiemann, Stefan; Eltner, Anette; Sardemann, Hannes; Spieler, Diana; Singer, Thomas; Thanh Luong, Thi; Janabi, Firas Al; Schütze, Niels; Bernard, Lars; Bernhofer, Christian; Maas, Hans-Gerd
2017-04-01
Flash floods regularly cause severe socio-economic damage worldwide. In parallel, climate change is very likely to increase the number of such events, due to an increasing frequency of extreme precipitation events (EASAC 2013). Whereas recent work primarily addresses the resilience of large catchment areas, the major impact of hydro-meteorological extremes caused by heavy precipitation is on small areas. Those are very difficult to observe and predict, due to sparse monitoring networks and few means for hydro-meteorological modelling, especially in small catchment areas. The objective of the EXTRUSO project is to identify and implement appropriate means to close this gap by an interdisciplinary approach, combining comprehensive research expertise from meteorology, hydrology, photogrammetry and geoinformatics. The project targets innovative techniques for achieving spatio-temporally densified monitoring and simulations for the analysis, prediction and warning of local hydro-meteorological extreme events. The following four aspects are of particular interest: 1. The monitoring, analysis and combination of relevant hydro-meteorological parameters from various sources, including existing monitoring networks, ground radar, specific low-cost sensors and crowdsourcing. 2. The determination of relevant hydro-morphological parameters from different photogrammetric sensors (e.g. camera, laser scanner) and sensor platforms (e.g. UAV (unmanned aerial vehicle) and UWV (unmanned water vehicle)). 3. The continuous hydro-meteorological modelling of precipitation, soil moisture and water flows by means of conceptual and data-driven modelling. 4. The development of a collaborative, web-based service infrastructure as an information and communication point, especially in the case of an extreme event. There are three major applications for the planned information system: first, the warning of local extreme events for the population in potentially affected areas; second, the support of decision makers and emergency responders in the case of an event; and, third, the development of open, interoperable tools for other researchers to apply and develop further. The test area of the project is the Free State of Saxony (Germany) with a number of small and medium catchment areas. However, the whole system, comprising models, tools and sensor setups, is planned to be transferred and tested in other areas, within and outside Europe, as well. The team working on the project consists of eight researchers, including five PhD students and three postdocs. The EXTRUSO project is funded by the European Social Fund (ESF, grant nr. 100270097) with a project duration of three years until June 2019. EASAC (2013): Trends in extreme weather events in Europe: implications for national and European Union adaptation strategies. European Academies Science Advisory Council. Policy report 22, November 2013.
Neurologic complications in common wrist and hand surgical procedures
Verdecchia, Nicole; Johnson, Julie; Baratz, Mark; Orebaugh, Steven
2018-01-01
Nerve dysfunction after upper extremity orthopedic surgery is a recognized complication and may result from a variety of different causes. Hand and wrist surgery requires incisions and retraction that necessarily border on small peripheral nerves, which may be difficult to identify and protect with absolute certainty. This article reviews the rates and ranges of reported nerve dysfunction with respect to common surgical interventions for the distal upper extremity, including wrist arthroplasty, wrist arthrodesis, wrist arthroscopy, distal radius open reduction and internal fixation, carpal tunnel release, and thumb carpometacarpal surgery. A relatively large range of neurologic complications is reported; however, many of the studies cited involve relatively small numbers of patients, and only rarely are neurologic complications included as primary outcome measures. Knowledge of these neurologic outcomes should help the surgeon to better counsel patients with regard to perioperative risk, as well as provide insight into the workup and management of any adverse neurologic outcomes that may arise.
Effects of forebody geometry on subsonic boundary-layer stability
NASA Technical Reports Server (NTRS)
Dodbele, Simha S.
1990-01-01
As part of an effort to develop computational techniques for design of natural laminar flow fuselages, a computational study was made of the effect of forebody geometry on laminar boundary layer stability on axisymmetric body shapes. The effects of nose radius on the stability of the incompressible laminar boundary layer was computationally investigated using linear stability theory for body length Reynolds numbers representative of small and medium-sized airplanes. The steepness of the pressure gradient and the value of the minimum pressure (both functions of fineness ratio) govern the stability of laminar flow possible on an axisymmetric body at a given Reynolds number. It was found that to keep the laminar boundary layer stable for extended lengths, it is important to have a small nose radius. However, nose shapes with extremely small nose radii produce large pressure peaks at off-design angles of attack and can produce vortices which would adversely affect transition.
Self-Amputation in Two Non-Psychotic Patients.
Rahmanian, Hamid; Petrou, Nikoletta A; Sarfraz, M Aamer
2015-09-01
Self-amputation, the extreme form of self-mutilation, is uncommon. The vast majority of cases are associated with psychosis, with a small number being assigned the controversial diagnosis of body identity integrity disorder. In this article, we report two cases of non-psychotic self-amputation and their similarities with a view to highlighting the risk factors and formulating an appropriate management plan.
Protected areas of the central Siberian Arctic: History, status, and prospects
Andrei P. Laletin; Dmitry V. Vladyshevskii; Alexei D. Vladyshevskii
2002-01-01
Before the Siberian Arctic was incorporated into the Russian Empire, it had been inhabited by small numbers of indigenous peoples. The first Russian settlers came to Siberia in the 16th century. The northern areas of Siberia had not been subjected to extreme anthropogenic influences before the Norilsk Industrial Complex started to be built in 1935. Negative...
NASA Astrophysics Data System (ADS)
Shih, Hong-Yan; Goldenfeld, Nigel
Experiments on transitional turbulence in pipe flow seem to show that turbulence is a transient metastable state since the measured mean lifetime of turbulence puffs does not diverge asymptotically at a critical Reynolds number. Yet measurements reveal that the lifetime scales with Reynolds number in a super-exponential way reminiscent of extreme value statistics, and simulations and experiments in Couette and channel flow exhibit directed percolation type scaling phenomena near a well-defined transition. This universality class arises from the interplay between small-scale turbulence and a large-scale collective zonal flow, which exhibit predator-prey behavior. Why is asymptotically divergent behavior not observed? Using directed percolation and a stochastic individual level model of predator-prey dynamics related to transitional turbulence, we investigate the relation between extreme value statistics and power law critical behavior, and show that the paradox is resolved by carefully defining what is measured in the experiments. We theoretically derive the super-exponential scaling law, and using finite-size scaling, show how the same data can give both super-exponential behavior and power-law critical scaling.
NASA Astrophysics Data System (ADS)
Bonnoli, G.; Tavecchio, F.; Ghisellini, G.; Sbarrato, T.
2015-07-01
High-energy observations of extreme BL Lac objects, such as 1ES 0229+200 or 1ES 0347-121, have recently attracted interest both for blazar and jet physics and for their implications for estimates of the extragalactic background light and the intergalactic magnetic field. However, the number of these extreme highly peaked BL Lac objects (EHBL) is still rather small. Aiming to increase their number, we selected a group of EHBL candidates starting from the BL Lac sample of Plotkin et al. (2011), considering those undetected (or only barely detected) by the Large Area Telescope onboard Fermi and characterized by a high X-ray versus radio flux ratio. We assembled the multiwavelength spectral energy distribution of the resulting nine sources, making use of publicly available archival observations performed by the Swift, GALEX, and Fermi satellites, confirming their nature. Through a simple one-zone synchrotron self-Compton model we estimate the expected very high energy flux, finding that in the majority of cases it is within the reach of the present generation of Cherenkov arrays or of the forthcoming Cherenkov Telescope Array.
Using the Student's "t"-Test with Extremely Small Sample Sizes
ERIC Educational Resources Information Center
de Winter, J. C .F.
2013-01-01
Researchers occasionally have to work with an extremely small sample size, defined herein as "N" less than or equal to 5. Some methodologists have cautioned against using the "t"-test when the sample size is extremely small, whereas others have suggested that using the "t"-test is feasible in such a case. The present…
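As a concrete illustration of the situation discussed above (not an endorsement of either position in the methodological debate), the sketch below runs a two-sample Student's t-test on groups of N = 3 with scipy; the data values are made up.

```python
# Illustrative two-sample t-test with extremely small samples (N = 3 per group).
from scipy import stats

group_a = [5.1, 4.8, 5.4]
group_b = [6.0, 6.3, 5.7]
t, p = stats.ttest_ind(group_a, group_b)      # Student's t-test, equal variances
print(f"t = {t:.2f}, p = {p:.3f}, df = {len(group_a) + len(group_b) - 2}")
```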
Current trends in nanobiosensor technology
Wu, Diana; Langer, Robert S
2014-01-01
The development of tools and processes used to fabricate, measure, and image nanoscale objects has led to a wide range of work devoted to producing sensors that interact with extremely small numbers (or an extremely small concentration) of analyte molecules. These advances are particularly exciting in the context of biosensing, where the demands for low-concentration detection and high specificity are great. Nanoscale biosensors, or nanobiosensors, provide researchers with an unprecedented level of sensitivity, often down to the single-molecule level. The use of biomolecule-functionalized surfaces can dramatically boost the specificity of the detection system, but can also yield reproducibility problems and increased complexity. Several nanobiosensor architectures based on mechanical devices, optical resonators, functionalized nanoparticles, nanowires, nanotubes, and nanofibers have been demonstrated in the lab. As nanobiosensor technology becomes more refined and reliable, it is likely that it will eventually make its way from the lab to the clinic, where future lab-on-a-chip devices incorporating an array of nanobiosensors could be used for rapid screening of a wide variety of analytes at low cost using small samples of patient material. PMID:21391305
Genetic and life-history consequences of extreme climate events
Mangel, Marc; Jesensek, Dusan; Garza, John Carlos; Crivelli, Alain J.
2017-01-01
Climate change is predicted to increase the frequency and intensity of extreme climate events. Tests on empirical data of theory-based predictions on the consequences of extreme climate events are thus necessary to understand the adaptive potential of species and the overarching risks associated with all aspects of climate change. We tested predictions on the genetic and life-history consequences of extreme climate events in two populations of marble trout Salmo marmoratus that have experienced severe demographic bottlenecks due to flash floods. We combined long-term field and genotyping data with pedigree reconstruction in a theory-based framework. Our results show that after flash floods, reproduction occurred at a younger age in one population. In both populations, we found the highest reproductive variance in the first cohort born after the floods due to a combination of fewer parents and higher early survival of offspring. A small number of parents allowed for demographic recovery after the floods, but the genetic bottleneck further reduced genetic diversity in both populations. Our results also elucidate some of the mechanisms responsible for a greater prevalence of faster life histories after the extreme event. PMID:28148745
Community exposure to tsunami hazards in Hawai‘i
Jones, Jamie L.; Jamieson, Matthew R.; Wood, Nathan J.
2016-06-17
Community exposure to tsunamis in Hawai‘i varies considerably—some communities may experience great losses that reflect only a small part of their community and others may experience relatively small losses that devastate them. Among the 91 communities and 4 counties, Urban Honolulu has the highest number of people and businesses in the extreme tsunami-inundation zone, and Hanalei has the highest percentages of its people and businesses in this zone. Urban Honolulu has the highest combination of the number and percentage of people, businesses, and facilities in the hazard zone. This report will further the dialogue on societal risk to tsunami hazards in Hawai‘i and help identify future preparedness, mitigation, response, and recovery planning needs within coastal communities and economic sectors of the State of Hawaii.
NASA Astrophysics Data System (ADS)
Pongracz, R.; Bartholy, J.; Szabo, P.; Pieczka, I.; Torma, C. S.
2009-04-01
Regional climatological effects of global warming may be recognized not only in shifts of mean temperature and precipitation, but in the frequency or intensity changes of different climate extremes. Several climate extreme indices are analyzed and compared for the Carpathian basin (located in Central/Eastern Europe) following the guidelines suggested by the joint WMO-CCl/CLIVAR Working Group on climate change detection. Our statistical trend analysis includes the evaluation of several extreme temperature and precipitation indices, e.g., the numbers of severe cold days, winter days, frost days, cold days, warm days, summer days, hot days, extremely hot days, cold nights, warm nights, the intra-annual extreme temperature range, the heat wave duration, the growing season length, the number of wet days (using several threshold values defining extremes), the maximum number of consecutive dry days, the highest 1-day precipitation amount, the greatest 5-day rainfall total, the annual fraction due to extreme precipitation events, etc. In order to evaluate the future trends (2071-2100) in the Carpathian basin, daily values of meteorological variables are obtained from the outputs of various regional climate model (RCM) experiments accomplished in the frame of the completed EU-project PRUDENCE (Prediction of Regional scenarios and Uncertainties for Defining EuropeaN Climate change risks and Effects). Horizontal resolution of the applied RCMs is 50 km. Both scenarios A2 and B2 are used to compare past and future trends of the extreme climate indices for the Carpathian basin. Furthermore, fine-resolution climate experiments of two additional RCMs adapted and run at the Department of Meteorology, Eotvos Lorand University are used to extend the trend analysis of climate extremes for the Carpathian basin. (1) Model PRECIS (run at 25 km horizontal resolution) was developed at the UK Met Office, Hadley Centre, and it uses the boundary conditions from the HadCM3 GCM. (2) Model RegCM3 (run at 10 km horizontal resolution) was developed by Giorgi et al. and it is available from the ICTP (International Centre for Theoretical Physics). Analysis of the simulated daily temperature datasets suggests that the detected regional warming is expected to continue in the 21st century. Cold temperature extremes are projected to decrease while warm extremes tend to increase significantly. Expected changes of annual precipitation indices are small, but generally consistent with the detected trends of the 20th century. Based on the simulations, extreme precipitation events are expected to become more intense and more frequent in winter, while a general decrease of extreme precipitation indices is expected in summer.
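To illustrate two of the temperature indices listed above, the sketch below computes frost days (daily minimum below 0 degC) and summer days (daily maximum above 25 degC) from a year of daily temperatures, following the usual threshold definitions; the synthetic data and any thresholds beyond those two standard ones are assumptions, not values from the study.

```python
# Illustrative computation of two climate extreme indices from daily data.
import numpy as np

rng = np.random.default_rng(3)
day = np.arange(365)
tmin = 5 + 12 * np.sin(2 * np.pi * (day - 105) / 365) + rng.normal(0, 3, 365)
tmax = tmin + 8 + rng.normal(0, 2, 365)

frost_days = int(np.sum(tmin < 0.0))     # days with Tmin below 0 degC
summer_days = int(np.sum(tmax > 25.0))   # days with Tmax above 25 degC
print(f"frost days: {frost_days}, summer days: {summer_days}")
```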
RAND Workshop on Antiproton Science and Technology, Annotated Executive Summary. (October 6-9, 1987)
1988-10-01
parity violation to condensed matter. A number of near-term important applications are possible using the source and portable storage devices...from charge parity violation studies to condensed matter studies. The CERN/LEAR facility will continue to only scratch the surface of important...technology programs. These technology programs include possible small tools to study extreme states of matter, a propulsion test facility for
Do one percent of the forest fires cause ninety-nine percent of the damage? Forest Science
David Strauss; Larry Bednar; Romain Mees
1989-01-01
A relatively small number of forest fires are responsible for a very high proportion of the total damage. The proportion due to the fraction p of largest fires, when plotted against p, is a measure of variability of fire sizes that is especially sensitive to the important extreme events. We find the theoretical form of the plot for several commonly used distributions...
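The quantity described above (the share of total damage contributed by the largest fraction p of fires) can be illustrated with a short computation on synthetic heavy-tailed fire sizes; the lognormal distribution and its parameters below are assumptions for illustration, not the distributions analyzed in the paper.

```python
# Illustrative sketch: share of total damage from the largest fraction p of fires.
import numpy as np

rng = np.random.default_rng(4)
sizes = rng.lognormal(mean=0.0, sigma=2.5, size=10_000)    # synthetic fire damages
sizes.sort()
total = sizes.sum()

for p in (0.01, 0.05, 0.10):
    n_largest = max(1, int(round(p * len(sizes))))
    share = sizes[-n_largest:].sum() / total
    print(f"largest {p:.0%} of fires -> {share:.0%} of total damage")
```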
Genetic and life-history consequences of extreme climate events.
Vincenzi, Simone; Mangel, Marc; Jesensek, Dusan; Garza, John Carlos; Crivelli, Alain J
2017-02-08
Climate change is predicted to increase the frequency and intensity of extreme climate events. Tests on empirical data of theory-based predictions on the consequences of extreme climate events are thus necessary to understand the adaptive potential of species and the overarching risks associated with all aspects of climate change. We tested predictions on the genetic and life-history consequences of extreme climate events in two populations of marble trout Salmo marmoratus that have experienced severe demographic bottlenecks due to flash floods. We combined long-term field and genotyping data with pedigree reconstruction in a theory-based framework. Our results show that after flash floods, reproduction occurred at a younger age in one population. In both populations, we found the highest reproductive variance in the first cohort born after the floods due to a combination of fewer parents and higher early survival of offspring. A small number of parents allowed for demographic recovery after the floods, but the genetic bottleneck further reduced genetic diversity in both populations. Our results also elucidate some of the mechanisms responsible for a greater prevalence of faster life histories after the extreme event. © 2017 The Author(s).
Application of the Solubility Parameter Concept to the Design of Chemiresistor Arrays
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastman, M.P.; Hughes, R.C.; Jenkins, M.W.
1999-01-11
Arrays of unheated chemically sensitive resistors (chemiresistors) can serve as extremely small, low-power-consumption sensors with simple read-out electronics. We report here results on carbon-loaded polymer composites, as well as polymeric ionic conductors, as chemiresistor sensors. We use the solubility parameter concept to understand and categorize the chemiresistor responses and, in particular, we compare chemiresistors fabricated from polyisobutylene (PIB) to results from PIB-coated acoustic wave sensors. One goal is to examine the possibility that a small number of diverse chemiresistors can sense all possible solvents: the "Universal Solvent Sensor Array". keywords: chemiresistor, solubility parameter, chemical sensor
The Coast Artillery Journal. Volume 80, Number 5, September-October 1937
1937-10-01
nights, numerous fires, hospitals full. Wireless destroyed last night. Situation extremely critical." Later, at 12:10 P.M., the same day, the Governor...Niagara frontier and Toronto, then called York. Lieutenant Colonel Backus of the "Albany dragoons" was in charge of a small detachment and a hospital at...Central Powers launched bombardment attacks in each other's rear areas, particularly on munitions plants, regulating stations, ammunition depots
Translations on North Korea, Number 516
1977-03-22
disturbances and becoming naked, and war would break out at any moment. It was a period of extreme tension. The fatherly leader, even while so busy...small boat. Our memory goes back to July 1950. One dark night during the hard-fought war, when enemy airplanes were frequently whirling around overhead...and the current was cascading beneath the Imjin railroad bridge, whose rails had been cut, the leader said: "Comrades at the front are waiting for
2010-06-01
Sampling (MIS)? • Technique of combining many increments of soil from a number of points within exposure area • Developed by Enviro Stat (Trademarked...Demonstrating a reliable soil sampling strategy to accurately characterize contaminant concentrations in spatially extreme and heterogeneous...into a set of decision (exposure) units • One or several discrete or small-scale composite soil samples collected to represent each decision unit
Feasibility of ultra-wideband SAW RFID tags meeting FCC rules.
Härmä, Sanna; Plessky, Victor P; Li, Xianyi; Hartogh, Paul
2009-04-01
We discuss the feasibility of surface acoustic wave (SAW) radio-frequency identification (RFID) tags that rely on ultra-wideband (UWB) technology. We propose a design of a UWB SAW tag, carry out numerical experiments on the device performance, and study signal processing in the system. We also present experimental results for the proposed device and estimate the potentially achievable reading distance. UWB SAW tags will have an extremely small chip size (<0.5 × 1 mm²) and a low cost. They can also provide a large number of different codes. The estimated read range for UWB SAW tags is about 2 m with a reader radiating power levels below 0.1 mW with an extremely low duty factor.
Spatially inhomogeneous electron state deep in the extreme quantum limit of strontium titanate
Bhattacharya, Anand; Skinner, Brian; Khalsa, Guru; ...
2016-09-29
When an electronic system is subjected to a sufficiently strong magnetic field that the cyclotron energy is much larger than the Fermi energy, the system enters the extreme quantum limit (EQL) and becomes susceptible to a number of instabilities. Bringing a three-dimensional electronic system deeply into the EQL can be difficult, however, since it requires a small Fermi energy, large magnetic field, and low disorder. Here we present an experimental study of the EQL in lightly-doped single crystals of strontium titanate. Our experiments probe deeply into the regime where theory has long predicted an interaction-driven charge density wave or Wigner crystal state. A number of interesting features arise in the transport in this regime, including a striking re-entrant nonlinearity in the current-voltage characteristics. We discuss these features in the context of possible correlated electron states, and present an alternative picture based on magnetic-field induced puddling of electrons.
A rational decision rule with extreme events.
Basili, Marcello
2006-12-01
Risks induced by extreme events are characterized by small or ambiguous probabilities, catastrophic losses, or windfall gains. Through a new functional that mimics the restricted Bayes-Hurwicz criterion within the Choquet expected utility approach, it is possible to represent the behavior of a decision-maker facing both risky (large and reliable probability) and extreme (small or ambiguous probability) events. A new formalization of the precautionary principle (PP) is shown and a new functional, which encompasses both extreme outcomes and the expectation of all possible results for every act, is claimed.
ROLE OF SMALL OIL AND GAS FIELDS IN THE UNITED STATES.
Meyer, Richard F.; Fleming, Mary L.
1985-01-01
The actual economic size cutoff is a function of such factors as depth, water depth offshore, and accessibility to transportation infrastructure. Because of the constraint of resource availability, price is now the principal force driving drilling activity. The proportion of new-field wildcats to other exploratory wells has fallen in recent years, but success in new-field wildcats has risen to about 20%. However, only very small fields, less than 1 million BOE, are being found in large numbers. Through 1979, almost 93% of known gas fields and 94.5% of known oil fields were small, yet they contain only 14.5% of the ultimately recoverable gas and 12.5% of the oil. However, small fields are less capital intensive than equivalent-capacity synthetic-fuel plants, they are extremely numerous, and they are relatively easy and inexpensive to find and put on production.
Ba, Yan; Kang, Qinjun; Liu, Haihu; ...
2016-04-14
In this study, the dynamical behavior of a droplet on a topologically structured surface is investigated by using a three-dimensional color-gradient lattice Boltzmann model. A wetting boundary condition is proposed to model fluid-surface interactions, which is advantageous to improve the accuracy of the simulation and suppress spurious velocities at the contact line. The model is validated by the droplet partial wetting test and reproduction of the Cassie and Wenzel states. A series of simulations are conducted to investigate the behavior of a droplet when subjected to a shear flow. It is found that in the Cassie state, the droplet undergoes a transition from stationary, to slipping and finally to detachment states as the capillary number increases, while in the Wenzel state, the last state changes to the breakup state. The critical capillary number, above which the droplet slipping occurs, is small for the Cassie droplet, but is significantly enhanced for the Wenzel droplet due to the increased contact angle hysteresis. In the Cassie state, the receding contact angle nearly equals the prediction by the Cassie relation, and the advancing contact angle is close to 180°, leading to a small contact angle hysteresis. In the Wenzel state, however, the contact angle hysteresis is extremely large (around 100°). Finally, high droplet mobility can be easily achieved for Cassie droplets, whereas in the Wenzel state, extremely low droplet mobility is identified.
Squeezing and its graphical representations in the anharmonic oscillator model
NASA Astrophysics Data System (ADS)
Tanaś, R.; Miranowicz, A.; Kielich, S.
1991-04-01
The problem of squeezing and its graphical representations in the anharmonic oscillator model is considered. Explicit formulas for squeezing, principal squeezing, and the quasiprobability distribution (QPD) function are given and illustrated graphically. Approximate analytical formulas for the variances, extremal variances, and QPD are obtained for the case of small nonlinearities and large numbers of photons. The possibility of almost perfect squeezing in the model is demonstrated and its graphical representations in the form of variance lemniscates and QPD contours are plotted. For large numbers of photons the crescent shape of the QPD contours is hardly visible and quite regular ellipses are obtained.
Joint observations of solar corona in space projects ARKA and KORTES
NASA Astrophysics Data System (ADS)
Vishnyakov, Eugene A.; Bogachev, Sergey A.; Kirichenko, Alexey S.; Reva, Anton A.; Loboda, Ivan P.; Malyshev, Ilya V.; Ulyanov, Artem S.; Dyatkov, Sergey Yu.; Erkhova, Nataliya F.; Pertsov, Andrei A.; Kuzin, Sergey V.
2017-05-01
ARKA and KORTES are two upcoming solar space missions in the extreme ultraviolet and X-ray wavebands. KORTES is a sun-oriented mission designed for the Russian segment of the International Space Station. KORTES consists of several imaging and spectroscopic instruments that will observe the solar corona in a number of wavebands, covering the EUV and X-ray ranges. The surveillance strategy of KORTES is to cover a wide range of observations including simultaneous imaging, spectroscopic and polarization measurements. ARKA is a small-satellite solar mission intended to take high-resolution images of the Sun at extreme ultraviolet wavelengths. ARKA will be equipped with two high-resolution EUV telescopes designed to collect images of the Sun with approximately 150 km spatial resolution in a field of view of about 10'×10'. The scientific results of the mission may have a significant impact on the theory of coronal heating and may help to clarify the physics of small-scale solar structures and phenomena, including oscillations of fine coronal structures and the physics of micro- and nanoflares.
Lang, Catherine E.; Birkenmeier, Rebecca; Holm, Margo; Rubinstein, Elaine; Van Swearingen, Jessie; Skidmore, Elizabeth R.
2016-01-01
OBJECTIVE. We examined the feasibility, tolerability, and preliminary efficacy of repetitive task-specific practice for people with unilateral spatial neglect (USN). METHOD. People with USN ≥6 mo poststroke participated in a single-group, repeated-measures study. Attendance, total repetitions, and satisfaction indicated feasibility and pain indicated tolerability. Paired t tests and effect sizes were used to estimate changes in upper-extremity use (Motor Activity Log), function (Action Research Arm Test), and attention (Catherine Bergego Scale). RESULTS. Twenty participants attended 99.4% of sessions and completed a high number of repetitions. Participants reported high satisfaction and low pain, and they demonstrated small, significant improvements in upper-extremity use (before Bonferroni corrections; t = –2.1, p = .04, d = .30), function (t = –3.0, p < .01, d = .20), and attention (t = –3.4, p < .01, d = –.44). CONCLUSION. Repetitive task-specific practice is feasible and tolerable for people with USN. Improvements in upper-extremity use, function, and attention may be attainable. PMID:27294994
Musto, H; Romero, H; Zavala, A; Jabbari, K; Bernardi, G
1999-07-01
We have analyzed the patterns of synonymous codon preferences of the nuclear genes of Plasmodium falciparum, a unicellular parasite characterized by an extremely GC-poor genome. When all genes are considered, codon usage is strongly biased toward A and T in third codon positions, as expected, but multivariate statistical analysis detects a major trend among genes. At one end genes display codon choices determined mainly by the extreme genome composition of this parasite, and very probably their expression level is low. At the other end a few genes exhibit an increased relative usage of a particular subset of codons, many of which are C-ending. Since the majority of these few genes is putatively highly expressed, we postulate that the increased C-ending codons are translationally optimal. In conclusion, while codon usage of the majority of P. falciparum genes is determined mainly by compositional constraints, a small number of genes exhibit translational selection.
[Small rodents in the forest ecosystem as infectious disease reservoirs].
Margaletić, Josip
2003-01-01
Due to the numerousness of their populations and the width of their ecological valence, small rodents are important parts of almost any forest ecosystem. They represent an important animal group that connects primary producers with higher trophic levels. They transmit various infectious diseases dangerous to the health of people and of domestic and wild animals (trichinosis, leptospirosis, tick-borne encephalitis, Lyme disease, hemorrhagic fever with renal syndrome, etc.). The following species of small rodents live in the forest ecosystems of Croatia: Clethrionomys glareolus Schreib., Arvicola terrestris L., M. subterraneus de Sel., M. arvalis Pall., M. agrestis L., M. multiplex Fat., Apodemus agrarius Pall., A. sylvaticus L. and A. flavicollis Melch. Small rodents transmit causative agents of disease in active (excretion products) or passive (ectoparasites and endoparasites) ways. Their multiplication potential is quite high. Transmission of certain diseases sometimes takes place extremely fast due to the high number of rodents, their high mobility and wide distribution, and the fact that they easily come into contact with humans and with domestic and wild animals. The population size of each species is directly influenced by abiotic and biotic factors and changes during the year and over multi-year periods. In a year when the influence of ecological factors is favorable, the number of these rodents can be expected to increase significantly, and with it the danger of their damaging effects. The following factors influence the increase of a small rodent population: the number and physiological condition of the population, meteorological conditions, habitat, food sources, natural enemies, and diseases. The occurrence of an epidemic is closely connected to the number and infectivity of causative agents. Regular monitoring of rodent population numbers and their infectivity can help in planning preventive epidemiological and sanitary measures to preclude the occurrence of epidemics and of individual cases of disease among animals and among humans who come into contact with the forest (forest workers, holiday makers, hikers, soldiers, tourists, etc.).
Anam, Khairul; Al-Jumaily, Adel
2014-01-01
The use of a small number of surface electromyography (EMG) channels from a transradial amputee in a myoelectric controller is a big challenge. This paper proposes a pattern recognition system using an extreme learning machine (ELM) optimized by particle swarm optimization (PSO). The PSO is mutated by a wavelet function to avoid becoming trapped in local minima. The proposed system is used to classify eleven imagined finger motions in five amputees by using only two EMG channels. The optimal performance of wavelet-PSO was compared to a grid-search method and standard PSO. The experimental results show that the proposed system is the most accurate of the tested classifiers. It could classify the 11 finger motions with an average accuracy of about 94% across the five amputees.
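As a rough sketch of the core classifier named above, the snippet below implements a basic extreme learning machine (random hidden layer, least-squares output weights) on generic feature vectors; the wavelet-mutated PSO tuning and the EMG feature extraction described in the abstract are omitted, and the feature dimensions, class count and regularization constant are assumptions.

```python
import numpy as np

def elm_train(X, y_onehot, n_hidden=200, reg=1e-3, seed=None):
    """Basic extreme learning machine: random hidden layer + ridge-regularized
    least-squares output weights. X: (n_samples, n_features), y_onehot: (n_samples, n_classes)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (fixed, not trained)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # Output weights from regularized normal equations.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y_onehot)
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    scores = np.tanh(X @ W + b) @ beta
    return scores.argmax(axis=1)

# Toy usage with synthetic "EMG feature" vectors (11 classes; the feature layout is hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 16))
y = rng.integers(0, 11, size=400)
Y = np.eye(11)[y]
model = elm_train(X, Y, seed=1)
print("training accuracy:", (elm_predict(X, model) == y).mean())
```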
Extreme obesity reduces bone mineral density: complementary evidence from mice and women.
Núñez, Nomelí P; Carpenter, Catherine L; Perkins, Susan N; Berrigan, David; Jaque, S Victoria; Ingles, Sue Ann; Bernstein, Leslie; Forman, Michele R; Barrett, J Carl; Hursting, Stephen D
2007-08-01
To evaluate the effects of body adiposity on bone mineral density in the presence and absence of ovarian hormones in female mice and postmenopausal women. We assessed percentage body fat, serum leptin levels, and bone mineral density in ovariectomized and non-ovariectomized C57BL/6 female mice that had been fed various calorically dense diets to induce body weight profiles ranging from lean to very obese. Additionally, we assessed percentage body fat and whole body bone mineral density in 37 overweight and extremely obese postmenopausal women from the Women's Contraceptive and Reproductive Experiences study. In mice, higher levels of body adiposity (>40% body fat) were associated with lower bone mineral density in ovariectomized C57BL/6 female mice. A similar trend was observed in a small sample of postmenopausal women. The complementary studies in mice and women suggest that extreme obesity in postmenopausal women may be associated with reduced bone mineral density. Thus, extreme obesity (BMI > 40 kg/m2) may increase the risk for osteopenia and osteoporosis. Given the obesity epidemic in the U.S. and in many other countries, and, in particular, the rising number of extremely obese adult women, increased attention should be drawn to the significant and interrelated public health issues of obesity and osteoporosis.
Lepton number violation in theories with a large number of standard model copies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kovalenko, Sergey; Schmidt, Ivan; Paes, Heinrich
2011-03-01
We examine lepton number violation (LNV) in theories with a saturated black hole bound on a large number of species. Such theories have been advocated recently as a possible solution to the hierarchy problem and an explanation of the smallness of neutrino masses. On the other hand, violation of lepton number can be a potential phenomenological problem of this N-copy extension of the standard model, since, due to the low quantum gravity scale, black holes may induce TeV-scale LNV operators generating unacceptably large rates of LNV processes. We show, however, that this issue can be avoided by introducing a spontaneously broken U(1)_{B-L}. Then, due to the existence of a specific compensation mechanism between contributions of different Majorana neutrino states, LNV processes in the standard model copy become extremely suppressed, with rates far beyond experimental reach.
A plasma microlens for ultrashort high power lasers
NASA Astrophysics Data System (ADS)
Katzir, Yiftach; Eisenmann, Shmuel; Ferber, Yair; Zigler, Arie; Hubbard, Richard F.
2009-07-01
We present a technique for the generation of a miniature plasma lens system that can be used for focusing and collimating a high-intensity femtosecond laser pulse. The plasma lens was created by a nanosecond laser, which ablated a capillary entrance. The spatial configuration of the ablated plasma focused a high-intensity femtosecond laser pulse. This configuration offers versatility and a small plasma-lens f-number for extremely tight focusing of high-power lasers, with none of the damage-threshold restrictions of regular optical components.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
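The final reconstruction step described above (assembling the full PDF as a mixture of ensemble-conditioned Gaussians) can be sketched as follows; the analytic conditional means and covariances derived in the paper are replaced here by stand-in arrays, so this only illustrates how a small ensemble becomes a mixture density on a grid.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_pdf(grid_points, cond_means, cond_covs):
    """Average the conditional Gaussian densities of K ensemble members.
    grid_points: (M, d) evaluation points; cond_means: (K, d); cond_covs: (K, d, d)."""
    K = len(cond_means)
    pdf = np.zeros(len(grid_points))
    for mu, cov in zip(cond_means, cond_covs):
        pdf += multivariate_normal(mean=mu, cov=cov).pdf(grid_points)
    return pdf / K

# Toy 2-D example with K = 100 hypothetical ensemble members.
rng = np.random.default_rng(0)
K, d = 100, 2
means = rng.normal(scale=2.0, size=(K, d))              # stand-ins for the analytic conditional means
covs = np.array([np.eye(d) * rng.uniform(0.5, 1.5) for _ in range(K)])
x = np.linspace(-6, 6, 101)
X, Y = np.meshgrid(x, x)
grid = np.column_stack([X.ravel(), Y.ravel()])
p = mixture_pdf(grid, means, covs)
print("integrates to ~1:", p.sum() * (x[1] - x[0]) ** 2)
```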
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers with large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers with small numbers of burns; there seems to be a trend such that the longer the time allowed to perform an optimal transfer, the less fuel is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer, the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use solutions with small numbers of burns to determine solutions with large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.
Small deformations of extreme five dimensional Myers-Perry black hole initial data
NASA Astrophysics Data System (ADS)
Alaee, Aghil; Kunduri, Hari K.
2015-02-01
We demonstrate the existence of a one-parameter family of initial data for the vacuum Einstein equations in five dimensions representing small deformations of the extreme Myers-Perry black hole. This initial data set has U(1)² symmetry and preserves the angular momenta and horizon geometry of the extreme solution. Our proof is based upon an earlier result of Dain and Gabach-Clement concerning the existence of U(1)-invariant initial data sets which preserve the geometry of extreme Kerr (at least for short times). In addition, we construct a general class of transverse, traceless symmetric rank-2 tensors in these geometries.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2016-01-01
During inactive phases of the Madden-Julian oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from an amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze cloud object data from Aqua CERES observations for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The size distributions, defined as the footprint numbers as a function of cloud object diameter, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter corresponding to the combined distribution is determined and used to partition all cloud objects into "small" and "large" groups for a particular phase. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for CS. The range of the variation between two extreme phases (typically, the most active and most depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters, while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation directions/speeds.
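A minimal sketch of the partitioning step described above: choose the diameter threshold so that the "small" and "large" groups contain roughly equal total footprint numbers. The synthetic sizes and footprint counts below are assumptions, not CERES cloud-object data.

```python
import numpy as np

def split_by_footprint_median(diameters, footprints):
    """Return the diameter threshold that splits the total footprint count roughly in half,
    plus boolean masks for the 'small' and 'large' groups."""
    order = np.argsort(diameters)
    cum = np.cumsum(footprints[order])
    k = np.searchsorted(cum, cum[-1] / 2.0)          # first object past half the footprints
    threshold = diameters[order][k]
    small = diameters <= threshold
    return threshold, small, ~small

# Synthetic population: many small objects, few large ones (power-law-like sizes, in km).
rng = np.random.default_rng(1)
diam = 100.0 * rng.pareto(1.5, size=5000) + 100.0
foot = np.maximum(1, (diam / 20.0).astype(int))       # footprint count grows with object size
thr, small, large = split_by_footprint_median(diam, foot)
print(f"threshold ~{thr:.0f} km;",
      "footprints small/large:", foot[small].sum(), foot[large].sum())
```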
Heidelberg Retina Tomography Analysis in Optic Disks with Anatomic Particularities
Alexandrescu, C; Pascu, R; Ilinca, R; Popescu, V; Ciuluvica, R; Voinea, L; Celea, C
2010-01-01
Due to its objectivity, reproducibility and predictive value, confirmed by many large-scale statistical clinical studies, Heidelberg Retina Tomography has become one of the most widely used computerized image analyses of the optic disc in glaucoma. It has been signaled, though, that the diagnostic value of the Moorfields Regression Analysis and the Glaucoma Probability Score decreases when analyzing optic discs of extreme sizes. The number of false positive results increases in cases of megalopapillae, and the number of false negative results increases in cases of small optic discs. The present paper is a review of the aspects one should take into account when analyzing an HRT result of an optic disc with anatomic particularities. PMID:21254731
NASA Astrophysics Data System (ADS)
McClure, Rachel Lee
2018-06-01
Observations of the solar photosphere show many spatially compact Doppler velocity events with short life spans and extreme values. In the IMaX spectropolarimetric inversion data from the first flight of the SUNRISE balloon in 2009, these striking flashes in the intergranule lanes, and complementary outstanding values in the centers of granules, have line-of-sight Doppler velocity values in excess of 4 sigma from the mean. We conclude that values outside 4 sigma result from the superposition of the granulation flows and the p-modes. To determine how granulation and p-modes contribute to these outstanding Doppler events, I separate the two components using the Fast Fourier Transform. I produce the power spectrum of the spatial wave frequencies and their corresponding frequencies in time for each image, and create a k-omega filter to separate the two components. Using the filtered data, I test the hypothesis that extreme events occur because of strict superposition between the p-mode Doppler velocities and the granular velocities. I compare event counts from the observational data to those produced by random superposition of the two flow components and find that the observational event counts are consistent with the model event counts in the limit of small-number statistics. Poisson count probabilities of the event numbers observed are consistent with the expected model count probability distributions.
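The separation step can be sketched as a k-omega filter: FFT the (t, y, x) Doppler-velocity cube and assign each Fourier component to the p-mode or granulation part according to its phase speed omega/k relative to a cutoff. The cutoff of 7 km/s, the cadence, the pixel scale and the random cube are assumptions for illustration, not the IMaX values.

```python
import numpy as np

def k_omega_split(v_cube, dt, dx, c_cut=7.0):
    """Split a (nt, ny, nx) velocity cube into p-mode-like (phase speed > c_cut)
    and granulation-like (phase speed <= c_cut) parts. Units: dt in s, dx in km, c_cut in km/s."""
    nt, ny, nx = v_cube.shape
    V = np.fft.fftn(v_cube)
    omega = 2 * np.pi * np.fft.fftfreq(nt, d=dt)             # rad/s
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)                # rad/km
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    W, KY, KX = np.meshgrid(omega, ky, kx, indexing="ij")
    kh = np.sqrt(KY ** 2 + KX ** 2)
    with np.errstate(divide="ignore", invalid="ignore"):
        phase_speed = np.abs(W) / kh                         # inf where kh == 0
    pmode_mask = phase_speed > c_cut                         # fast, oscillatory part
    p_modes = np.fft.ifftn(V * pmode_mask).real
    granulation = v_cube - p_modes
    return p_modes, granulation

# Toy cube: 64 frames, 33 s cadence, 40 km pixels (hypothetical numbers).
rng = np.random.default_rng(2)
cube = rng.normal(size=(64, 64, 64))
pm, gr = k_omega_split(cube, dt=33.0, dx=40.0)
print(pm.shape, gr.shape)
```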
NASA Astrophysics Data System (ADS)
Graves, Irina; Nizovtsev, Viacheslav; Erman, Natalia
2017-04-01
An analysis of extraordinary meteorological phenomena occupies a special place in the reconstruction of climate dynamics. These extreme weather events primarily affect the functioning, rhythm and dynamics of landscapes and determine not only features of the economy but also certain aspects of historical development. In the analysis of primary chronicles and published data, along with direct climatic characteristics (hot, warm, cold, wet, dry, etc.), much attention was paid to abnormal (extreme) natural phenomena and to indirect indications of climate variability (floods, crop failures, hunger years, epidemics, etc.). As a result, tables were compiled reflecting basic climatic characteristics and extremes for each year since AD 900. The X-XI centuries were a period of the minor climatic optimum: the climate was warmer and drier than the modern one. In addition to higher temperatures (up to 1-3°C above modern values), there were no severe winters during this period. Low summer rainfall led to a reduction in the number of small water bodies and in river flooding. This is evidenced by Slavic settlements on the floodplains of a number of rivers in the Moscow region. It was in this favorable climatic time that the route "from the Varangians to the Greeks" was open. Catastrophic natural events had minimal recurrence. For example, the Russian chronicles mention 41 extreme events during the X century, but 102 for the XIII century. Most of the villages and towns were located on the low floodplain terraces of rivers, and the main farmland was concentrated there as well. In the "period of contrasts" (XIII-XIV centuries) there was an increase in intra-seasonal climate variability and humidity, and a widespread reduction in summer temperatures by 1-2°C. The number of extreme weather events increased: cold prolonged winters, long summer rains, returns of cold weather in early summer, and early frosts in late summer and early autumn. Such conditions often resulted in crop damage and famine. From the XIV century the Little Ice Age began. The annual average temperature dropped by 1.4°C and the summer temperature by 2-3°C. In the XIV century the chronicles mention a total of 100 extreme natural phenomena, as a result of which Russia experienced more than 37 years of famine. The climate was particularly variable in the late XIV - early XV century and in the XVI-XVII centuries, when there were years of particularly cold winters and increased humidity (due to winter precipitation). The duration of the crop growing season was reduced by three weeks. At the beginning of the XVII century spruce became dominant in the spruce-deciduous forests and co-dominant in deciduous forests. There was a transfer of settlements and agricultural land to interfluve areas and higher parts of river valleys. The determining factors were demographic, socio-economic and historical, but the role of natural factors cannot be overlooked. The end of the XVI century was marked by the most severe political and economic crisis in the Russian State (the oprichnina, the political and administrative apparatus established by Ivan IV, and the Livonian Wars waged by Ivan IV), which, combined with deteriorating environmental conditions (increased climate humidity and a drop in the average annual temperature), caused massive desolation of the land. Many hundreds of villages turned into wasteland.
In this period the Moscow land was described as a "wild desert, covered with shrubs, bogs and imbanks"; there were also memories of past navigation on small rivers and data on mills on the streams. The climate deterioration caused the agrarian revolution in Russia in the XIV-XVI centuries. Slash-and-burn and shifting cultivation were replaced by a plow farming system (two- and three-field rotation), which was better adapted to the harsh climatic conditions. This work was performed under project № 17-05-00662 of the Russian Foundation for Basic Research.
Single-electron random-number generator (RNG) for highly secure ubiquitous computing applications
NASA Astrophysics Data System (ADS)
Uchida, Ken; Tanamoto, Tetsufumi; Fujita, Shinobu
2007-11-01
Since the security of all modern cryptographic techniques relies on unpredictable and irreproducible digital keys generated by random-number generators (RNGs), the realization of high-quality RNGs is essential for secure communications. In this report, a new RNG, which utilizes single-electron phenomena, is proposed. A room-temperature-operating silicon single-electron transistor (SET) with a nearby electron pocket is used as a high-quality, ultra-small RNG. In the proposed RNG, stochastic single-electron capture/emission processes to/from the electron pocket are detected with high sensitivity by the SET and result in giant random telegraphic signals (GRTS) on the SET current. It is experimentally demonstrated that the single-electron RNG generates extremely high-quality random digital sequences at room temperature, in spite of its simple configuration. Because of its small-size and low-power properties, the single-electron RNG is promising as a key nanoelectronic device for future ubiquitous computing systems with highly secure mobile communication capabilities.
Indurkhya, Sagar; Beal, Jacob
2010-01-06
ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires only storage for reactions, rather than that required for a graph that includes only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
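For reference, the baseline that LOLCAT Method accelerates is the direct-method Gillespie algorithm, sketched below; the factored propensities and the bipartite reaction-species dependency graph described above are not implemented here, and the toy birth-death network and rate constants are assumptions.

```python
import numpy as np

def gillespie_direct(x0, stoich, rate_fns, t_end, seed=None):
    """Plain direct-method SSA. x0: initial counts; stoich: (n_reactions, n_species)
    state-change vectors; rate_fns: propensity functions a_j(x)."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_end:
        a = np.array([f(x) for f in rate_fns])
        a0 = a.sum()
        if a0 <= 0:
            break                                   # no reaction can fire
        t += rng.exponential(1.0 / a0)              # waiting time to the next reaction
        j = rng.choice(len(a), p=a / a0)            # which reaction fires
        x += stoich[j]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy birth-death process: 0 -> S (rate k1), S -> 0 (rate k2 * S). Constants are hypothetical.
k1, k2 = 10.0, 0.1
stoich = np.array([[+1], [-1]])
rates = [lambda x: k1, lambda x: k2 * x[0]]
t, s = gillespie_direct([0], stoich, rates, t_end=100.0, seed=0)
print("final count:", s[-1, 0], "; expected mean k1/k2 =", k1 / k2)
```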
Gene-for-gene disease resistance: bridging insect pest and pathogen defense.
Kaloshian, Isgouhi
2004-12-01
Active plant defense, also known as gene-for-gene resistance, is triggered when a plant resistance (R) gene recognizes the intrusion of a specific insect pest or pathogen. Activation of plant defense includes an array of physiological and transcriptional reprogramming. During the past decade, a large number of plant R genes that confer resistance to a diverse group of pathogens have been cloned from a number of plant species. Based on predicted protein structures, these genes are classified into a small number of groups, indicating that structurally related R genes recognize phylogenetically distinct pathogens. An extreme example is the tomato Mi-1 gene, which confers resistance to the potato aphid (Macrosiphum euphorbiae), whitefly (Bemisia tabaci), and root-knot nematodes (Meloidogyne spp.). While Mi-1 remains the only cloned insect R gene, there is evidence that a gene-for-gene type of plant defense against piercing-sucking insects exists in a number of plant species.
Knapp, Alan K.; Avolio, Meghan L.; Beier, Claus; Carroll, Charles J.W.; Collins, Scott L.; Dukes, Jeffrey S.; Fraser, Lauchlan H.; Griffin-Nolan, Robert J.; Hoover, David L.; Jentsch, Anke; Loik, Michael E.; Phillips, Richard P.; Post, Alison K.; Sala, Osvaldo E.; Slette, Ingrid J.; Yahdjian, Laura; Smith, Melinda D.
2017-01-01
Intensification of the global hydrological cycle, ranging from larger individual precipitation events to more extreme multiyear droughts, has the potential to cause widespread alterations in ecosystem structure and function. With evidence that the incidence of extreme precipitation years (defined statistically from historical precipitation records) is increasing, there is a clear need to identify ecosystems that are most vulnerable to these changes and understand why some ecosystems are more sensitive to extremes than others. To date, opportunistic studies of naturally occurring extreme precipitation years, combined with results from a relatively small number of experiments, have provided limited mechanistic understanding of differences in ecosystem sensitivity, suggesting that new approaches are needed. Coordinated distributed experiments (CDEs) arrayed across multiple ecosystem types and focused on water can enhance our understanding of differential ecosystem sensitivity to precipitation extremes, but there are many design challenges to overcome (e.g., cost, comparability, standardization). Here, we evaluate contemporary experimental approaches for manipulating precipitation under field conditions to inform the design of ‘Drought-Net’, a relatively low-cost CDE that simulates extreme precipitation years. A common method for imposing both dry and wet years is to alter each ambient precipitation event. We endorse this approach for imposing extreme precipitation years because it simultaneously alters other precipitation characteristics (i.e., event size) consistent with natural precipitation patterns. However, we do not advocate applying identical treatment levels at all sites – a common approach to standardization in CDEs. This is because precipitation variability varies >fivefold globally resulting in a wide range of ecosystem-specific thresholds for defining extreme precipitation years. For CDEs focused on precipitation extremes, treatments should be based on each site's past climatic characteristics. This approach, though not often used by ecologists, allows ecological responses to be directly compared across disparate ecosystems and climates, facilitating process-level understanding of ecosystem sensitivity to precipitation extremes.
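The design point about site-based treatments can be made concrete with a small sketch: derive each site's dry- and wet-year thresholds from its own historical record of annual precipitation. The use of the 5th/95th percentiles and the synthetic records are assumptions; the actual Drought-Net protocol may define extreme years differently.

```python
import numpy as np

def extreme_year_thresholds(annual_precip_mm, q_dry=5, q_wet=95):
    """Site-specific dry/wet extreme-year thresholds from a historical record of
    annual precipitation totals (one value per year)."""
    lo, hi = np.percentile(annual_precip_mm, [q_dry, q_wet])
    return lo, hi

# Two hypothetical sites with very different interannual variability.
rng = np.random.default_rng(3)
mesic_site = rng.gamma(shape=20.0, scale=40.0, size=100)   # ~800 mm/yr, low variability
arid_site = rng.gamma(shape=2.0, scale=150.0, size=100)    # ~300 mm/yr, high variability
for name, record in [("mesic", mesic_site), ("arid", arid_site)]:
    lo, hi = extreme_year_thresholds(record)
    print(f"{name}: dry-year threshold {lo:.0f} mm, wet-year threshold {hi:.0f} mm")
```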
On non-primitively divergent vertices of Yang-Mills theory
NASA Astrophysics Data System (ADS)
Huber, Markus Q.
2017-11-01
Two correlation functions of Yang-Mills theory beyond the primitively divergent ones, the two-ghost-two-gluon and the four-ghost vertices, are calculated and their influence on lower vertices is examined. Their full (transverse) tensor structure is taken into account. As input, a solution of the full two-point equations - including two-loop terms - is used that respects the resummed perturbative ultraviolet behavior. A clear hierarchy is found with regard to the color structure that reduces the number of relevant dressing functions. The impact of the two-ghost-two-gluon vertex on the three-gluon vertex is negligible, which is explained by the fact that all non-small dressing functions drop out due to their color factors. Only in the ghost-gluon vertex is a small net effect, below 2%, seen. The four-ghost vertex is found to be extremely small in general. Since these two four-point functions do not enter into the propagator equations, these findings establish their small overall effect on lower correlation functions.
Do regional methods really help reduce uncertainties in flood frequency analyses?
NASA Astrophysics Data System (ADS)
Cong Nguyen, Chi; Payrastre, Olivier; Gaume, Eric
2013-04-01
Flood frequency analyses are often based on continuous measured series at gauged sites. However, the length of the available data sets is usually too short to provide reliable estimates of extreme design floods. To reduce the estimation uncertainties, the analyzed data sets have to be extended either in time, making use of historical and paleoflood data, or in space, merging data sets considered statistically homogeneous to build large regional data samples. Nevertheless, the advantage of regional analyses, the important increase in the size of the studied data sets, may be counterbalanced by possible heterogeneities of the merged sets. The application and comparison of four different flood frequency analysis methods in two regions affected by flash floods in the south of France (Ardèche and Var) illustrates how this balance between the number of records and possible heterogeneities plays out in real-world applications. The four tested methods are: (1) a local statistical analysis based on the existing series of measured discharges, (2) a local analysis incorporating the existing information on historical floods, (3) a standard regional flood frequency analysis based on existing measured series at gauged sites, and (4) a modified regional analysis including estimated extreme peak discharges at ungauged sites. Monte Carlo simulations are conducted to simulate a large number of discharge series with characteristics similar to the observed ones (type of statistical distribution, number of sites and records) to evaluate the extent to which the results obtained in these case studies can be generalized. These two case studies indicate that even small statistical heterogeneities, which are not detected by the standard homogeneity tests implemented in regional flood frequency studies, may drastically limit the usefulness of such approaches. On the other hand, these results show that exploiting information on extreme events, either historical flood events at gauged sites or estimated extremes at ungauged sites in the considered region, is an efficient way to reduce uncertainties in flood frequency studies.
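A minimal single-site version of the statistical machinery discussed above, fitting a GEV distribution to a short series of annual maxima and reading off a design flood, is sketched below; the regional pooling, the treatment of historical floods and the Monte Carlo experiments of the study are not reproduced, and the synthetic record is an assumption.

```python
import numpy as np
from scipy.stats import genextreme

# Synthetic 30-year record of annual maximum peak discharge (m^3/s), GEV-distributed.
true_c, true_loc, true_scale = -0.2, 300.0, 120.0
annual_maxima = genextreme.rvs(true_c, loc=true_loc, scale=true_scale,
                               size=30, random_state=42)

# Fit by maximum likelihood and estimate the 100-year flood (exceedance probability 1/100).
c_hat, loc_hat, scale_hat = genextreme.fit(annual_maxima)
q100 = genextreme.ppf(1.0 - 1.0 / 100.0, c_hat, loc=loc_hat, scale=scale_hat)
q100_true = genextreme.ppf(0.99, true_c, loc=true_loc, scale=true_scale)
print(f"estimated 100-year flood: {q100:.0f} m^3/s (true value {q100_true:.0f} m^3/s)")
```

With only 30 years of data the estimated quantile can differ substantially from the true value, which is the uncertainty that the regional and historical-data approaches above are designed to reduce.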
Extreme Temperature Performance of Automotive-Grade Small Signal Bipolar Junction Transistors
NASA Technical Reports Server (NTRS)
Boomer, Kristen; Damron, Benny; Gray, Josh; Hammoud, Ahmad
2018-01-01
Electronics designed for space exploration missions must display efficient and reliable operation under extreme temperature conditions. For example, lunar outposts, Mars rovers and landers, James Webb Space Telescope, Europa orbiter, and deep space probes represent examples of missions where extreme temperatures and thermal cycling are encountered. Switching transistors, small signal as well as power level devices, are widely used in electronic controllers, data instrumentation, and power management and distribution systems. Little is known, however, about their performance in extreme temperature environments beyond their specified operating range; in particular under cryogenic conditions. This report summarizes preliminary results obtained on the evaluation of commercial-off-the-shelf (COTS) automotive-grade NPN small signal transistors over a wide temperature range and thermal cycling. The investigations were carried out to establish a baseline on functionality of these transistors and to determine suitability for use outside their recommended temperature limits.
Size distribution and growth rate of crystal nuclei near critical undercooling in small volumes
NASA Astrophysics Data System (ADS)
Kožíšek, Z.; Demo, P.
2017-11-01
Kinetic equations are numerically solved within the standard nucleation model to determine the size distribution of nuclei in small volumes near critical undercooling. The critical undercooling, at which the first nuclei are detected within the system, depends on the droplet volume. The size distribution of nuclei reaches its stationary value after some time delay and decreases with nucleus size. Only a certain maximum size of nuclei is reached in small volumes near critical undercooling. As a model system, we selected the recently studied nucleation in a Ni droplet [J. Bokeloh et al., Phys. Rev. Lett. 107 (2011) 145701] due to the available experimental and simulation data. However, using these data for sample masses from 23 μg up to 63 mg (corresponding to the experiments) leads to size distributions for which no critical nuclei are formed in the Ni droplet (the number of critical nuclei is < 1). If one takes into account the size dependence of the interfacial energy, the size distribution of nuclei increases to reasonable values. In smaller volumes (V ≤ 10⁻⁹ m³) the nucleus size reaches a maximum extreme value, which quickly increases with undercooling. Supercritical clusters continue their growth only if the number of critical nuclei is sufficiently high.
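A minimal sketch of solving kinetic equations of this general kind: the cluster size distribution F_n(t) is advanced by forward-Euler steps of dF_n/dt = J_{n-1} - J_n, with detachment rates fixed by detailed balance against a classical work of formation. All rates, the driving force and the monomer density below are hypothetical, not the Ni-droplet parameters of the study.

```python
import numpy as np

def solve_nucleation_kinetics(n_max=200, steps=20000, dt=1e-4,
                              dmu=2.0, a_surf=8.0, f1=1.0, k_att=1.0):
    """Forward-Euler integration of dF_n/dt = J_{n-1} - J_n with
    J_n = k+_n F_n - k-_{n+1} F_{n+1}.  Work of formation (in kT units):
    W_n = -n*dmu + a_surf*n**(2/3).  All parameters are illustrative."""
    n = np.arange(1, n_max + 1, dtype=float)
    W = -n * dmu + a_surf * n ** (2.0 / 3.0)
    k_plus = k_att * n ** (2.0 / 3.0)                       # attachment ~ surface area
    k_minus = k_plus[:-1] * np.exp(W[1:] - W[:-1])          # detailed balance gives k-_{n+1}
    F = np.zeros(n_max)
    F[0] = f1                                               # monomer density held fixed
    for _ in range(steps):
        J = k_plus[:-1] * F[:-1] - k_minus * F[1:]          # fluxes J_1 .. J_{n_max-1}
        F[1:-1] += dt * (J[:-1] - J[1:])
        F[0], F[-1] = f1, 0.0                               # boundary conditions
    return n, F

n, F = solve_nucleation_kinetics()
i_crit = np.argmax(-n * 2.0 + 8.0 * n ** (2.0 / 3.0))       # barrier top for the default dmu, a_surf
print("critical size ~", int(n[i_crit]), "; F at critical size:", F[i_crit])
```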
THE PASSAGE OF MARKED IONS INTO TISSUES AND FLUIDS OF THE REPRODUCTIVE TRACT OF PREGNANT RABBITS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bennett, J.P.; Boursnell, J.C.; Lutwak-Mann, C.
1959-10-31
Rapid changes were demonstrated in the uptake of labeled ions both in the developing embryo and in the endometrium, mesodermal placental folds, and other closely associated tissues and fluids following the intravenous injection of labeled ions into pregnant rabbits. Phosphorus-32, sulfur-35, sodium-24, iodine-131, and potassium-42 were used as tracers. A number of new techniques were developed to obtain, weigh, and handle the extremely small samples. The influence of exogenous materials on the early development of fetuses is discussed briefly. (C.R.)
Open-type miniature heat pipes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vasiliev, L.L.
1994-01-01
The hypothesis that systems of thermoregulation, similar to open-type micro heat pipes, exist in nature (soils, living organisms, plants) and in a number of technological processes (drying, thermodynamic cycles on solid adsorbents) is considered. The hydrodynamics and heat transfer in such thermoregulation systems differ from the hydrodynamics and heat transfer in classical heat pipes, since their geometrical dimensions are extremely small (dozens of microns), adhesion forces are powerful, the effect of the field of capillary and gravitational forces is significant, and strong interaction between counter-current flows of vapor and liquid takes place.
Speil, Sidney
1974-01-01
The problems of quantitating chrysotile in water by fiber count techniques are reviewed briefly, and the use of mass quantitation is suggested as a preferable measure. Chrysotile fiber has been found in almost every sample of natural water examined, but transmission electron microscopy (TEM) is generally required because of the small diameters involved. The extreme extrapolation required in mathematically converting a few fibers or fiber fragments observed under the TEM to the fiber content of a liquid sample casts considerable doubt on the validity of numbers used to compare the chrysotile contents of different liquids. PMID:4470930
The Arctic Regional Communications Small SATellite (ARCSAT)
NASA Technical Reports Server (NTRS)
Casas, Joseph; Kress, Martin; Sims, William; Spehn, Stephen; Jaeger, Talbot; Sanders, Devon
2013-01-01
Traditional satellite missions are extremely complex and expensive to design, build, test, launch and operate. Consequently, many complementary operational, exploration and research satellite missions are being formulated, as a growing part of future space-community capabilities, using formations of small, distributed, simple-to-launch, inexpensive and highly capable small-scale satellites. The Arctic Regional Communications small SATellite (ARCSAT) initiative would launch a Mini-Satellite "Mothership" into a polar or Sun-synchronous low-Earth orbit (LEO). Once on orbit, the Mothership would perform orbital insertion of four internally stored, independently maneuverable nanosatellites, each containing electronically steerable antennas and reconfigurable software-defined radios. Unlike traditional geostationary, larger and more complex satellite communication systems, this LEO communications system will initially comprise a five-satellite formation whose total number of satellites can later be incrementally increased for additional data coverage. ARCSAT will provide significant enabling capabilities in the Arctic for autonomous voice and data communications relay, Maritime Domain Awareness (MDA), data extraction from unattended sensors, and terrestrial Search & Rescue (SAR) beacon detection missions throughout the "data-starved desert" of the Arctic Region.
Closing the Gap: An Analysis of Options for Improving the USAF Fighter Fleet from 2015 to 2035
2015-10-01
capacity. The CBO predicts an increase in capacity for both large (2,000 lb class) weapons and small weapons (either 500 lb class or the Small Diameter Bomb). ... Laser Guided Bomb (LGB) designed to penetrate extremely hardened bunkers with extreme accuracy. Larger weapons can provide better standoff range ... operate with impunity in low-intensity CAS scenarios. While survivability, with the exception of against small-arms ground fire, is far less a
Solar Imaging UV/EUV Spectrometers Using TVLS Gratings
NASA Technical Reports Server (NTRS)
Thomas, Roger J.
2003-01-01
It is a particular challenge to develop a stigmatic spectrograph for UV, EUV wavelengths since the very low normal-incidence reflectance of standard materials most often requires that the design be restricted to a single optical element which must simultaneously provide both reimaging and spectral dispersion. This problem has been solved in the past by the use of toroidal gratings with uniform line-spaced rulings (TULS). A number of solar extreme ultraviolet (EUV) spectrometers have been based on such designs, including SOHO/CDS, Solar-B/EIS, and the sounding rockets Solar Extreme ultraviolet Research Telescope and Spectrograph (SERTS) and Extreme Ultraviolet Normal Incidence Spectrograph (EUNIS). More recently, Kita, Harada, and collaborators have developed the theory of spherical gratings with varied line-space rulings (SVLS) operated at unity magnification, which have been flown on several astronomical satellite missions. We now combine these ideas into a spectrometer concept that puts varied-line space rulings onto toroidal gratings. Such TVLS designs are found to provide excellent imaging even at very large spectrograph magnifications and beam-speeds, permitting extremely high-quality performance in remarkably compact instrument packages. Optical characteristics of three new solar spectrometers based on this concept are described: SUMI and RAISE, two sounding rocket payloads, and NEXUS, currently being proposed as a Small-Explorer (SMEX) mission.
Divergent ecosystem responses within a benthic marine community to ocean acidification.
Kroeker, Kristy J; Micheli, Fiorenza; Gambi, Maria Cristina; Martz, Todd R
2011-08-30
Ocean acidification is predicted to impact all areas of the oceans and affect a diversity of marine organisms. However, the diversity of responses among species prevents clear predictions about the impact of acidification at the ecosystem level. Here, we used shallow water CO2 vents in the Mediterranean Sea as a model system to examine emergent ecosystem responses to ocean acidification in rocky reef communities. We assessed in situ benthic invertebrate communities in three distinct pH zones (ambient, low, and extreme low), which differed in both the mean and variability of seawater pH along a continuous gradient. We found fewer taxa, reduced taxonomic evenness, and lower biomass in the extreme low pH zones. However, the number of individuals did not differ among pH zones, suggesting that there is density compensation through population blooms of small acidification-tolerant taxa. Furthermore, the trophic structure of the invertebrate community shifted to fewer trophic groups and dominance by generalists in extreme low pH, suggesting that there may be a simplification of food webs with ocean acidification. Despite high variation in individual species' responses, our findings indicate that ocean acidification decreases the diversity, biomass, and trophic complexity of benthic marine communities. These results suggest that a loss of biodiversity and ecosystem function is expected under extreme acidification scenarios.
Estimation of breeding values using selected pedigree records.
Morton, Richard; Howarth, Jordan M
2005-06-01
Fish bred in tanks or ponds cannot be easily tagged individually. The parentage of any individual may be determined by DNA fingerprinting, but this is sufficiently expensive that large numbers cannot be fingerprinted. The measurement of the objective trait can be made on a much larger sample relatively cheaply. This article deals with experimental designs for selecting individuals to be fingerprinted and for the estimation of the individual and family breeding values. The general setup provides estimates both for genetic effects regarded as fixed or random and for fixed effects due to known regressors. The family effects can be well estimated even when very small numbers are fingerprinted, provided that they are the individuals with the most extreme phenotypes.
Multiphoton correlations in parametric down-conversion and their measurement in the pulsed regime
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanova, O A; Iskhakov, T Sh; Penin, A N
2006-10-31
We consider normalised intensity correlation functions (CFs) of different orders for light emitted via parametric down-conversion (PDC) and their dependence on the number of photons per mode. The main problem in measuring such correlation functions is their extremely small width, which considerably reduces their contrast. It is shown that if the radiation under study is modulated by a periodic sequence of pulses that are short compared to the CF width, no decrease in the contrast occurs. A procedure is proposed for measuring normalised CFs of various orders in the pulsed regime. For nanosecond-pulsed PDC radiation, the normalised second-order CF is measured experimentally as a function of the mean photon number. (nonlinear optical phenomena)
Impact of recent extreme Arizona storms
Magirl, C.S.; Webb, R.H.; Schaffner, M.; Lyon, S.W.; Griffiths, P.G.; Shoemaker, C.; Unkrich, C.L.; Yatheendradas, S.; Troch, Peter A.; Pytlak, E.; Goodrich, D.C.; Desilets, S.L.E.; Youberg, A.; Pearthree, P.A.
2007-01-01
Heavy rainfall on 27–31 July 2006 led to record flooding and triggered an historically unprecedented number of debris flows in the Santa Catalina Mountains north of Tucson, Ariz. The U.S. Geological Survey (USGS) documented record floods along four watercourses in the Tucson basin, and at least 250 hillslope failures spawned damaging debris flows in an area where less than 10 small debris flows had been documented in the past 25 years. At least 18 debris flows destroyed infrastructure in the heavily used Sabino Canyon Recreation Area (http://wwwpaztcn.wr.usgs.gov/rsch_highlight/articles/20061 l.html). In four adjacent canyons, debris flows reached the heads of alluvial fans at the boundary of the Tucson metropolitan area. While landuse planners in southeastern Arizona evaluate the potential threat of this previously little recognized hazard to residents along the mountain front, an interdisciplinary group of scientists has collaborated to better understand this extreme event.
Jo, Javier A.; Fang, Qiyin; Marcu, Laura
2007-01-01
We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
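A rough sketch of the expansion-and-least-squares idea behind such a method: model the measured decay as the instrument response convolved with a few orthonormal, exponentially decaying basis functions and obtain the expansion coefficients in one linear solve per pixel. For simplicity the basis below is an orthonormalized set of exponentially weighted polynomials rather than the exact discrete Laguerre functions of the paper, and all signal parameters are assumptions.

```python
import numpy as np

def decay_basis(n_points, n_funcs=4, alpha=0.9):
    """Orthonormal, exponentially decaying basis (a stand-in for discrete Laguerre functions)."""
    n = np.arange(n_points)
    raw = np.column_stack([alpha ** (n / 2.0) * n ** j for j in range(n_funcs)])
    q, _ = np.linalg.qr(raw)                         # orthonormalize the columns
    return q

def fit_decay(measured, irf, basis):
    """Least-squares expansion coefficients: measured ~ (irf * basis) @ c (discrete convolution)."""
    conv = np.column_stack([np.convolve(irf, basis[:, j])[: len(measured)]
                            for j in range(basis.shape[1])])
    coeffs, *_ = np.linalg.lstsq(conv, measured, rcond=None)
    return coeffs, conv @ coeffs

# Synthetic single-pixel example: bi-exponential decay convolved with a Gaussian IRF plus noise.
rng = np.random.default_rng(5)
t = np.arange(256) * 0.05                            # ns, hypothetical time base
irf = np.exp(-0.5 * ((t - 1.0) / 0.1) ** 2)
true_decay = np.exp(-t / 0.5) + 0.5 * np.exp(-t / 2.5)
measured = np.convolve(irf, true_decay)[: len(t)] + rng.normal(scale=0.5, size=len(t))
B = decay_basis(len(t))
c, fitted = fit_decay(measured, irf, B)
print("expansion coefficients:", np.round(c, 3))
```

Because the fit is a single linear solve, the coefficient maps for a whole image can be obtained in one batched least-squares call, which is the source of the speed advantage described above.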
Instructive Biologic Scaffold for Functional Tissue Regeneration Following Trauma to the Extremities
2016-10-01
Award Number: W81XWH-12-2-0128 TITLE: Instructive Biologic Scaffold for Functional Tissue Regeneration Following Trauma to the Extremities...SUBTITLE Instructive Biologic Scaffold for Functional Tissue Regeneration Following Trauma to the Extremities 5a. CONTRACT NUMBER 5b. GRANT NUMBER...identification of cell phenotype, extracellular 5 matrix characterization, and histomorphometric analysis. The main endpoint of this study was to
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNally, N.; Liu, Xiang Yang; Choudary, P.V.
1997-01-01
The authors describe a microplate-based high-throughput procedure for the rapid assay of the enzyme activities of nitrate reductase and nitrite reductase, using extremely small volumes of reagents. The new procedure offers the advantages of rapidity, small sample size (nanoliter volumes), low cost, and a dramatic increase in the number of samples that can be analyzed simultaneously. Additional advantages can be accessed by using microplate reader application software packages that permit assigning a group type to the wells, recording the data in exportable data files, and using either kinetic or endpoint reading modes. The assay can also be used independently for detecting nitrite residues/contamination in environmental/food samples. 10 refs., 2 figs.
Design of Cryocoolers for Microwatt Superconducting Devices
NASA Technical Reports Server (NTRS)
Zimmerman, J. E.
1985-01-01
The primary applications of the cryocoolers are for cooling various Josephson devices such as SQUID magnetometers and amplifiers, voltage standards, and microwave mixers and detectors. The common feature of these devices is their extremely low inherent bias power requirement, of the order of 10/1 W per junction. This provides the possibility of designing compact, low-power cryocoolers for these applications. Several concepts were explored and a number of laboratory model cryocoolers were built. These include low-power nonmagnetic regenerative machines of the Stirling or Gifford-McMahon type, three or four-stage Joule-Thomson machines, liquid-helium dewars with integral small cryocoolers to reduce the evaporation rate, and liquid-helium dewars with integral continuously or intermittently operated small helium liquefiers to permit operation of cryogenic devices for indefinite time periods.
Piras, Monica; Mascaro, Giuseppe; Deidda, Roberto; Vivoni, Enrique R
2016-02-01
The Mediterranean region is characterized by high precipitation variability, often enhanced by orography, with strong seasonality and large inter-annual fluctuations, and by high heterogeneity of terrain and land surface properties. As a consequence, catchments in this area are often prone to the occurrence of hydrometeorological extremes, including storms, floods and flash floods. A number of climate studies focused on the Mediterranean region predict that extreme events will occur with higher intensity and frequency, thus requiring further analyses to assess their effects at the land surface, particularly in small- and medium-sized watersheds. In this study, climate and hydrologic simulations produced within the Climate Induced Changes on the Hydrology of Mediterranean Basins (CLIMB) EU FP7 research project were used to analyze how precipitation extremes propagate into discharge extremes in the Rio Mannu basin (472.5 km²), located in Sardinia, Italy. The basin's hydrologic response to climate forcings in a reference (1971-2000) and a future (2041-2070) period was simulated through the combined use of a set of global and regional climate models, statistical downscaling techniques, and a process-based distributed hydrologic model. We analyzed and compared the distributions of annual maxima extracted from hourly and daily precipitation and peak discharge time series simulated by the hydrologic model under climate forcing. To this aim, yearly maxima were fit by the Generalized Extreme Value (GEV) distribution using a regional approach. Next, we discuss common and contrasting behaviors of the precipitation and discharge maxima distributions to better understand how hydrological transformations affect the propagation of extremes. Finally, we show how rainfall statistical downscaling algorithms produce more reliable forcings for hydrological models than coarse climate model outputs. Copyright © 2015 Elsevier B.V. All rights reserved.
Assessment of extremely low frequency magnetic field exposure from GSM mobile phones.
Calderón, Carolina; Addison, Darren; Mee, Terry; Findlay, Richard; Maslanyj, Myron; Conil, Emmanuelle; Kromhout, Hans; Lee, Ae-kyoung; Sim, Malcolm R; Taki, Masao; Varsier, Nadège; Wiart, Joe; Cardis, Elisabeth
2014-04-01
Although radio frequency (RF) electromagnetic fields emitted by mobile phones have received much attention, relatively little is known about the extremely low frequency (ELF) magnetic fields emitted by phones. This paper summarises ELF magnetic flux density measurements on global system for mobile communications (GSM) mobile phones, conducted as part of the MOBI-KIDS epidemiological study. The main challenge is to identify a small number of generic phone models that can be used to classify the ELF exposure for the different phones reported in the study. Two-dimensional magnetic flux density measurements were performed on 47 GSM mobile phones at a distance of 25 mm. Maximum resultant magnetic flux density values at 217 Hz had a geometric mean of 221 (+198/-104) nT. Taking into account harmonic data, measurements suggest that mobile phones could make a substantial contribution to ELF exposure in the general population. The maximum values and easily available variables were poorly correlated. However, three groups could be defined on the basis of field pattern indicating that manufacturers and shapes of mobile phones may be the important parameters linked to the spatial characteristics of the magnetic field, and the categorization of ELF magnetic field exposure for GSM phones in the MOBI-KIDS study may be achievable on the basis of a small number of representative phones. Such categorization would result in a twofold exposure gradient between high and low exposure based on type of phone used, although there was overlap in the grouping. © 2013 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Steinschneider, S.; Ho, M.; Cook, E. R.; Lall, U.
2017-12-01
This work explores how extreme cold-season precipitation dynamics along the west coast of the United States have varied in the past under natural climate variability through an analysis of the moisture anomalies recorded by tree-ring chronologies across the coast and interior of the western U.S. Winters with high total precipitation amounts in the coastal regions are marked by a small number of extreme storms that exhibit distinct spatial patterns of precipitation across the coast and further inland. Building from this observation, this work develops a novel application of dendroclimatic evidence to explore the following questions: a) how is extreme precipitation variability expressed in a network of tree-ring chronologies; b) can this information provide insight on the space-time variability of storm tracks that cause these extreme events; and c) how can the joint variability of extreme precipitation and storm tracks be modeled to develop consistent, multi-centennial reconstructions of both? We use gridded, tree-ring based reconstructions of the summer Palmer Drought Severity Index (PDSI) extending back 500 years within the western U.S. to build and test a novel statistical framework for reconstructing the space-time variability of coastal extreme precipitation and the associated wintertime storm tracks. Within this framework, we (1) identify joint modes of variability of extreme precipitation fields and tree-ring based PDSI reconstructions; (2) relate these modes to previously identified, unique storm track patterns associated with atmospheric rivers (ARs), which are the dominant type of storm that is responsible for extreme precipitation in the region; and (3) determine latitudinal variations of landfalling ARs across the west coast and their relationship to the these joint modes. To our knowledge, this work is the first attempt to leverage information on storm track patterns stored in a network of paleoclimate proxies to improve reconstruction fidelity.
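One standard way to carry out step (1), identifying joint modes between a gridded extreme-precipitation field and a gridded PDSI reconstruction, is maximum covariance analysis via an SVD of the cross-covariance matrix; it is shown below only as an illustrative stand-in (the abstract does not state which decomposition the authors use), with synthetic fields in place of the real data.

```python
import numpy as np

def joint_modes(X, Y, n_modes=3):
    """Maximum covariance analysis. X: (n_years, n_grid_x), Y: (n_years, n_grid_y).
    Returns spatial patterns and expansion-coefficient time series of the leading joint modes."""
    Xa = X - X.mean(axis=0)
    Ya = Y - Y.mean(axis=0)
    C = Xa.T @ Ya / (X.shape[0] - 1)                 # cross-covariance matrix
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    P, Q = U[:, :n_modes], Vt[:n_modes].T            # spatial patterns of each field
    return P, Q, Xa @ P, Ya @ Q, s[:n_modes]

# Synthetic example: one shared mode buried in noise in both "fields".
rng = np.random.default_rng(6)
n_years = 120
shared = rng.normal(size=n_years)
X = np.outer(shared, rng.normal(size=50)) + rng.normal(size=(n_years, 50))
Y = np.outer(shared, rng.normal(size=80)) + rng.normal(size=(n_years, 80))
P, Q, a, b, s = joint_modes(X, Y)
print("correlation of leading expansion coefficients:",
      np.round(np.abs(np.corrcoef(a[:, 0], b[:, 0])[0, 1]), 2))
```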
Entropy Production in Collisionless Systems. II. Arbitrary Phase-space Occupation Numbers
NASA Astrophysics Data System (ADS)
Barnes, Eric I.; Williams, Liliya L. R.
2012-04-01
We present an analysis of two thermodynamic techniques for determining equilibria of self-gravitating systems. One is the Lynden-Bell (LB) entropy maximization analysis that introduced violent relaxation. Since we do not use the Stirling approximation, which is invalid at small occupation numbers, our systems have finite mass, unlike LB's isothermal spheres. (Instead of Stirling, we utilize a very accurate smooth approximation for ln x!.) The second analysis extends entropy production extremization to self-gravitating systems, also without the use of the Stirling approximation. In addition to the LB statistical family characterized by the exclusion principle in phase space, and designed to treat collisionless systems, we also apply the two approaches to the Maxwell-Boltzmann (MB) families, which have no exclusion principle and hence represent collisional systems. We implicitly assume that all of the phase space is equally accessible. We derive entropy production expressions for both families and give the extremum conditions for entropy production. Surprisingly, our analysis indicates that extremizing entropy production rate results in systems that have maximum entropy, in both LB and MB statistics. In other words, both thermodynamic approaches lead to the same equilibrium structures.
NASA Astrophysics Data System (ADS)
Zoller, Uri; Maymon, Tsipora
The effectiveness of a smoking-prevention program - incorporated within a traditional science curriculum - was assessed in terms of attitude modification in such categories as health, peer pressure, and social image as related to smoking. The study indicates that most relevant attitudes, the emotionally intense ones in particular, are modifiable in the desired direction, although the changes are small. Some gender differences in the recorded changes suggest a difference in the dynamics of the response to smoking intervention between male and female high school students. A desired change in the attitude frequency distributions (e.g., from less extreme to more extreme responses) has also been found. In addition, the tendency of the experimental students to take action against smoking within their family circles increased, although not significantly. All of the above was accompanied by a decrease in the number of smokers in the experimental group and a significant increase in the number of smokers in the control group. These results suggest that it is educationally possible to modify attitudes in health education in the desired direction by means of a properly designed interdisciplinary science curricular unit implemented within ongoing traditional science teaching.
Extreme storm activity in North Atlantic and European region
NASA Astrophysics Data System (ADS)
Vyazilova, N.
2010-09-01
The study of extreme storm activity over the North Atlantic and Europe includes analyses of extreme cyclones (track number, integral cyclonic intensity) and extreme storms (track number) during the winter and summer seasons in two regions: 1) 55°N-80°N, 50°W-70°E; 2) 30°N-55°N, 50°W-70°E. Extreme cyclones were selected on the basis of cyclone centre pressure (P <= 970 mbar). Extreme storms were selected from the extreme cyclones on the basis of the wind velocity at 925 mbar, with the Beaufort scale used as the threshold. The integral cyclonic intensity for a region combines the number of cyclone centres with the sum of the MSLP anomalies at those centres. The analyses are based on an automated cyclone tracking algorithm applied to 6-hourly MSLP and wind data (u and v at 925 hPa) from the NCEP/NCAR reanalysis for January 1948 to March 2010. A comparison of means calculated for successive ten-year periods shows that, in the polar region, the extreme cyclone and storm track numbers and the integral cyclonic intensity increase gradually and reach their maximum in the most recent years, for both the summer and the winter season. The ten-year means for the summer season exceed those for the winter season in both the polar and the tropical region, and the ten-year means for the tropical region are significantly lower than those for the polar region.
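The selection criteria and the integral intensity metric can be written down compactly; the sketch below applies them to synthetic cyclone-track records. The wind threshold shown is an illustrative Beaufort-scale value (force 10), not necessarily the one used in the study, and the column names are hypothetical.

```python
# Hedged sketch of the selection steps described above: extreme cyclones are track records
# with central pressure <= 970 mbar, extreme storms are the subset whose 925-hPa wind
# exceeds a Beaufort-scale threshold, and the integral cyclonic intensity combines centre
# counts with summed MSLP anomalies. All track data here are synthetic.
import numpy as np
import pandas as pd

rng = np.random.default_rng(8)
tracks = pd.DataFrame({
    "centre_pressure_mbar": rng.normal(985, 12, 500),
    "wind925_ms": rng.gamma(4.0, 4.0, 500),
    "mslp_anomaly_mbar": rng.normal(-15, 8, 500),
})

STORM_WIND_MS = 24.5   # illustrative threshold, roughly Beaufort force 10
extreme_cyclones = tracks[tracks.centre_pressure_mbar <= 970]
extreme_storms = extreme_cyclones[extreme_cyclones.wind925_ms >= STORM_WIND_MS]

integral_intensity = {"n_centres": len(extreme_cyclones),
                      "sum_mslp_anomaly": extreme_cyclones.mslp_anomaly_mbar.sum()}
print(len(extreme_cyclones), len(extreme_storms), integral_intensity)
```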
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat tailed highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. Particularly, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from the traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has a significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires an order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
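The hybrid structure described above, parametric conditional Gaussians in one subspace combined with kernel smoothing in the other so that the full PDF becomes a Gaussian mixture over ensemble members, can be sketched in a few lines. The conditional means and covariance below are synthetic stand-ins, not output of the authors' data-assimilation formulae; this is a generic illustration of the mixture construction only.

```python
# Hedged sketch of the hybrid idea: each ensemble member contributes a kernel in the
# low-dimensional resolved subspace times a conditional Gaussian in the remaining
# subspace, so the full PDF is approximated by a Gaussian mixture from a small ensemble.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)
n_ens = 100                        # O(100) ensemble members, as quoted in the abstract
u1 = rng.standard_normal(n_ens)    # low-dimensional (here 1-D) resolved variable
# Synthetic conditional Gaussian statistics for a 2-D "unresolved" subspace:
cond_means = np.column_stack([0.5 * u1, -0.3 * u1])
cond_cov = np.array([[0.2, 0.05], [0.05, 0.3]])

def mixture_pdf(x1, x2, bandwidth=0.3):
    """p(u1, u2) ~ (1/N) sum_i K_h(x1 - u1_i) * N(x2; m_i, cov): Gaussian kernel in the
    low-dimensional subspace times that member's conditional Gaussian elsewhere."""
    kernel = np.exp(-0.5 * ((x1 - u1) / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    gaussians = np.array([multivariate_normal(mean=m, cov=cond_cov).pdf(x2)
                          for m in cond_means])
    return float(np.mean(kernel * gaussians))

print(mixture_pdf(0.0, np.array([0.0, 0.0])))
```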
Are extreme events (statistically) special? (Invited)
NASA Astrophysics Data System (ADS)
Main, I. G.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A. F.; McCloskey, J.
2009-12-01
We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic’, do they ‘know’ how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a bench mark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and more generally the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic’-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball’ fits to unconsciously (but wrongly in this case) assume Gaussian errors. We develop methods to correct for these effects, and show that the current best fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
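The contrast between least-squares and maximum-likelihood fitting of frequency-size data can be illustrated with the Gutenberg-Richter b-value. The sketch below compares Aki's closed-form ML estimator with a naive least-squares fit to binned cumulative counts on a synthetic catalogue; it is a generic illustration, not the regression model used in the study, and the completeness magnitude and catalogue size are arbitrary.

```python
# Hedged sketch: maximum-likelihood estimation of the Gutenberg-Richter b-value
# (Aki's 1965 estimator for continuous magnitudes above a completeness threshold),
# contrasted with a least-squares fit to log cumulative counts, which is biased.
import numpy as np

rng = np.random.default_rng(2)
b_true, Mc, n = 1.0, 4.0, 5000
mags = Mc + rng.exponential(scale=1.0 / (b_true * np.log(10)), size=n)  # GR-distributed magnitudes

# Maximum-likelihood (Aki) estimate
b_ml = np.log10(np.e) / (mags.mean() - Mc)

# Naive least-squares fit to log10 of cumulative counts, for comparison
m_bins = np.arange(Mc, mags.max(), 0.1)
cum_counts = np.array([(mags >= m).sum() for m in m_bins])
mask = cum_counts > 0
slope, _ = np.polyfit(m_bins[mask], np.log10(cum_counts[mask]), 1)

print(f"b (maximum likelihood) = {b_ml:.3f}")
print(f"b (least squares)      = {-slope:.3f}")
```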
Testing for scale-invariance in extreme events, with application to earthquake occurrence
NASA Astrophysics Data System (ADS)
Main, I.; Naylor, M.; Greenhough, J.; Touati, S.; Bell, A.; McCloskey, J.
2009-04-01
We address the generic problem of testing for scale-invariance in extreme events, i.e. are the biggest events in a population simply a scaled model of those of smaller size, or are they in some way different? Are large earthquakes for example ‘characteristic', do they ‘know' how big they will be before the event nucleates, or is the size of the event determined only in the avalanche-like process of rupture? In either case what are the implications for estimates of time-dependent seismic hazard? One way of testing for departures from scale invariance is to examine the frequency-size statistics, commonly used as a bench mark in a number of applications in Earth and Environmental sciences. Using frequency data however introduces a number of problems in data analysis. The inevitably small number of data points for extreme events and more generally the non-Gaussian statistical properties strongly affect the validity of prior assumptions about the nature of uncertainties in the data. The simple use of traditional least squares (still common in the literature) introduces an inherent bias to the best fit result. We show first that the sampled frequency in finite real and synthetic data sets (the latter based on the Epidemic-Type Aftershock Sequence model) converge to a central limit only very slowly due to temporal correlations in the data. A specific correction for temporal correlations enables an estimate of convergence properties to be mapped non-linearly on to a Gaussian one. Uncertainties closely follow a Poisson distribution of errors across the whole range of seismic moment for typical catalogue sizes. In this sense the confidence limits are scale-invariant. A systematic sample bias effect due to counting whole numbers in a finite catalogue makes a ‘characteristic'-looking type extreme event distribution a likely outcome of an underlying scale-invariant probability distribution. This highlights the tendency of ‘eyeball' fits unconsciously (but wrongly in this case) to assume Gaussian errors. We develop methods to correct for these effects, and show that the current best fit maximum likelihood regression model for the global frequency-moment distribution in the digital era is a power law, i.e. mega-earthquakes continue to follow the Gutenberg-Richter trend of smaller earthquakes with no (as yet) observable cut-off or characteristic extreme event. The results may also have implications for the interpretation of other time-limited geophysical time series that exhibit power-law scaling.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-20
...: Topical Oxygen Chamber for Extremities; Availability; Correction AGENCY: Food and Drug Administration, HHS... Special Controls Guidance Documents: Topical Oxygen Chamber for Extremities.'' The document published... Oxygen Chamber for Extremities'' to the Division of Small Manufacturers, International, and Consumer...
NASA Astrophysics Data System (ADS)
Munaka, Tatsuya; Abe, Hirohisa; Kanai, Masaki; Sakamoto, Takashi; Nakanishi, Hiroaki; Yamaoka, Tetsuji; Shoji, Shuichi; Murakami, Akira
2006-07-01
We successfully developed a measurement system for the real-time analysis of cellular function using a newly designed microchip. The microchip was equipped with a micro cell-incubation chamber (240 nl) in which cells could be stimulated with a very small volume of stimulus solution (as little as 24 nl). Using the microchip system, mast cells were successfully cultivated, and cellular events following stimulation with an extremely small amount of fluid were monitored on the chip. This system could be applicable to various types of cellular analysis, including real-time monitoring of cellular responses to stimulation.
Zero pressure gradient boundary layer at extreme Reynolds numbers
NASA Astrophysics Data System (ADS)
Hultmark, Marcus; Vallikivi, Margit; Smits, Alexander
2011-11-01
Experiments were conducted in a zero pressure gradient flat plate boundary layer using the Princeton/ONR High Reynolds number Test Facility (HRTF). The HRTF uses highly compressed air, up to 220 atmospheres, to produce Reynolds numbers up to Reθ = 225,000. This corresponds to δ+ = 65,000, which is one of the highest Reynolds numbers ever measured in a laboratory. When pressure is used to achieve high Reynolds numbers, the size of the measurement probes becomes critical, so the need for very small sensors is acute. The streamwise component of velocity was investigated using a nanoscale thermal anemometer (NSTAP) as well as a 200 μm pitot tube. The NSTAP has a spatial resolution as well as a temporal resolution one order of magnitude better than conventional measurement techniques. The data were compared to recent data from a high Reynolds number turbulent pipe flow, and it was shown that the two flows are more similar than previous data suggest. Supported under ONR Grant N00014-09-1-0263 (program manager Ron Joslin) and NSF Grant CBET-1064257 (program manager Henning Winter).
NASA Astrophysics Data System (ADS)
Sudmeier-Rieux, Karen; Tonini, Marj; Vulliez, Cindy; Sanjaya, Devkota; Derron, Marc-Henri; Jaboyedoff, Michel
2017-04-01
This paper details an extreme rainfall event, or cloudburst (315 mm/24 hours), which occurred on July 29-30, 2015 in the Phewa Lake Watershed, Western Nepal, three months after the April 25, 2015 Gorkha Earthquake. The event triggered over 170 landslides and debris flows, caused 8 deaths and considerable damage to livelihoods. The fatal debris flow started from one of the numerous rural roads, which have proliferated exponentially over the past decades. In addition to mapping landslides due to this extreme rainfall event, our study sought to document and analyze the underlying natural and human land use factors that may have influenced the occurrence of landsliding (Vulliez et al., submitted). To do so, we analyzed land cover/land use changes for the period 1979-2016 based on an interpretation of aerial photos and satellite images, combined with ground truthing. We studied how land use/land cover changes have resulted in a shift of active erosion zones from overgrazing around streams and forests to a rapidly increasing number of small failures along unplanned earthen rural roads, or "bulldozer roads". With several hundred small failures documented along roadsides (Leibundgut et al., 2016), as compared to only 14 landslides prior to the 2015 extreme rainfall event - and none triggered by the 2015 earthquake - roads are thus a major driver of active erosion zones and small failures in the watershed. More effective management of the current unsustainable mode of rural road construction is required to reduce further environmental and economic impacts on vulnerable populations in Nepal. Leibundgut, G., Sudmeier-Rieux, K., Devkota, S., Jaboyedoff, M., Derron, M-H., Penna, I., Nguyen, L. (2016). Rural earthen roads impact assessment in Phewa watershed, Western region, Nepal. Geoenvironmental Disasters 3:13. DOI 10.1186/s40677-016-0047-8. Vulliez, C., Tonini, M., Sudmeier-Rieux, K., Devkota, S., Derron, M-H., Jaboyedoff, M. (submitted). Land use changes, landslides and roads in the Phewa Watershed, Western Nepal from 1979 to 2016. Applied Geography.
Torque on a sphere inside a rotating cylinder.
NASA Technical Reports Server (NTRS)
Mena, B.; Levinson, E.; Caswell, B.
1972-01-01
A circular cylinder of finite dimensions is made to rotate around a sphere fixed in the center of the cylinder. The couple on the sphere is measured over a wide range of rotational speeds for both Newtonian and non-Newtonian fluids. For the Newtonian liquids a comparison of the experimental results is made with Collins' (1955) expansion of the couple as a series in even powers of the angular Reynolds number. For non-Newtonian liquids the apparatus proves to be extremely useful for an accurate determination of the zero shear rate viscosity using only a small amount of fluid.
Effects of correlation in transition radiation of super-short electron bunches
NASA Astrophysics Data System (ADS)
Danilova, D. K.; Tishchenko, A. A.; Strikhanov, M. N.
2017-07-01
The effect of correlations between electrons in transition radiation is investigated. The correlation function is obtained with the help of an approach similar to the Debye-Hückel theory. The corrections due to correlations are estimated to be about 2-3% for the parameters of the future projects SINBAD and FLUTE, for bunches of extremely small length (∼1-10 fs). For bunches with about 2.5 × 10^10 electrons or more that are short enough for the radiation to be coherent, the corrections due to correlations are predicted to reach 20%.
Abstraction and model evaluation in category learning.
Vanpaemel, Wolf; Storms, Gert
2010-05-01
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation not only relies on the maximal likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
Aly, Aly Mousaad
2014-01-01
Atmospheric turbulence results from the vertical movement of air, together with flow disturbances around surface obstacles which make low- and moderate-level winds extremely irregular. Recent advancements in wind engineering have led to the construction of new facilities for testing residential homes at relatively high Reynolds numbers. However, the generation of a fully developed turbulence in these facilities is challenging. The author proposed techniques for the testing of residential buildings and architectural features in flows that lack fully developed turbulence. While these methods are effective for small structures, the extension of the approach for large and flexible structures is not possible yet. The purpose of this study is to investigate the role of turbulence in the response of tall buildings to extreme winds. In addition, the paper presents a detailed analysis to investigate the influence of upstream terrain conditions, wind direction angle (orientation), and the interference effect from the surrounding on the response of high-rise buildings. The methodology presented can be followed to help decision makers to choose among innovative solutions like aerodynamic mitigation, structural member size adjustment, and/or damping enhancement, with an objective to improve the resiliency and the serviceability of buildings.
Are temporal characteristics of fast repetitive oscillating movement invariant?
Gutnik, B J; Nicholson, J; Go, W; Gale, D; Nash, D
2003-06-01
Validation of the proportional duration model was attempted using very fast single-joint repetitive horizontal abductive-adductive movements of the stretched upper extremity with minimal cognitive input. Participants drew oscillating horizontal lines during 20 sec. over relatively short distances as quickly as possible without visual feedback. Spatial, temporal, and kinetic parameters were analysed. The amplitude and the time spent accelerating, decelerating, and reversing in both directions of each experimental line were recorded and related to the centre of gravity of the upper extremity. The accelerations of the centre of mass of the upper extremity were calculated and used to calculate the forces involved. The ratios of durations were compared and intercorrelated for the two fastest, two average, and two slowest cycles from each participant. Results exhibited significant standard deviations and variability of temporal and kinetic parameters within individual trials. The number of significant coefficients of correlation within individual trials was small despite the controlling influence of the same generalised motor program. The proportional duration model did not hold for our data. Peripheral factors (probably the length-tension relationship rule for skeletal muscles and viscosity of muscle) may be important in this type of action.
Delcourt, Vivian; Lucier, Jean-François; Gagnon, Jules; Beaudoin, Maxime C; Vanderperre, Benoît; Breton, Marc-André; Motard, Julie; Jacques, Jean-François; Brunelle, Mylène; Gagnon-Arsenault, Isabelle; Fournier, Isabelle; Ouangraoua, Aida; Hunting, Darel J; Cohen, Alan A; Landry, Christian R; Scott, Michelle S
2017-01-01
Recent functional, proteomic and ribosome profiling studies in eukaryotes have concurrently demonstrated the translation of alternative open-reading frames (altORFs) in addition to annotated protein coding sequences (CDSs). We show that a large number of small proteins could in fact be coded by these altORFs. The putative alternative proteins translated from altORFs have orthologs in many species and contain functional domains. Evolutionary analyses indicate that altORFs often show more extreme conservation patterns than their CDSs. Thousands of alternative proteins are detected in proteomic datasets by reanalysis using a database containing predicted alternative proteins. This is illustrated with specific examples, including altMiD51, a 70 amino acid mitochondrial fission-promoting protein encoded in MiD51/Mief1/SMCR7L, a gene encoding an annotated protein promoting mitochondrial fission. Our results suggest that many genes are multicoding genes and code for a large protein and one or several small proteins. PMID:29083303
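The altORF concept can be illustrated with a minimal open-reading-frame scan over a transcript sequence. The sketch below enumerates forward-strand ORFs above a minimum length; the 30-codon cutoff and the toy transcript are chosen for illustration and are not taken from the study's annotation pipeline.

```python
# Hedged sketch: enumerate candidate (alt)ORFs in the three forward-strand reading frames
# of a transcript, keeping those at least `min_codons` long. Illustrative only.
def find_orfs(seq, min_codons=30):
    """Return (frame, start, end, n_codons) for ATG...stop ORFs on the forward strand."""
    seq = seq.upper()
    stops = {"TAA", "TAG", "TGA"}
    orfs = []
    for frame in range(3):
        start = None
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3]
            if codon == "ATG" and start is None:
                start = i                      # first start codon of a candidate ORF
            elif codon in stops and start is not None:
                n_codons = (i - start) // 3
                if n_codons >= min_codons:
                    orfs.append((frame, start, i + 3, n_codons))
                start = None                   # reset and keep scanning this frame
    return orfs

# toy example: a short synthetic transcript with one 41-codon ORF
transcript = "GGC" + "ATG" + "GCT" * 40 + "TAA" + "ACGT" * 10
print(find_orfs(transcript, min_codons=30))
```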
Guide to Cybersecurity, Resilience, and Reliability for Small and Under-Resourced Utilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ingram, Michael; Martin, Maurice
Small electricity utilities -- those with less than 100 employees or 50,000 meters -- provide essential services to large parts of the United States while facing a number of challenges unique to their mission. For instance, they often serve areas that are sparsely populated, meaning that their per-customer cost to provide service is higher. At the same time, they often serve customers that have moderate or fixed incomes, meaning that they are under strong pressure to keep costs down. This pressure puts them on a strict budget and creates a need for innovative solutions to common problems. Further, their service areas may include extreme climates, making severe weather events more frequent and their aftermaths more expensive to address. This guide considers challenges that small utilities face while ensuring the reliability, resilience, and cybersecurity of their electric service; approaches to address those challenges using existing guidance documents; and ways that the federal government could provide support in these areas.
Characterizing the Spatial Contiguity of Extreme Precipitation over the US in the Recent Past
NASA Astrophysics Data System (ADS)
Touma, D. E.; Swain, D. L.; Diffenbaugh, N. S.
2016-12-01
The spatial characteristics of extreme precipitation over an area can define the hydrologic response in a basin, subsequently affecting the flood risk in the region. Here, we examine the spatial extent of extreme precipitation in the US by defining its "footprint": a contiguous area of rainfall exceeding a certain threshold (e.g., the 90th percentile) on a given day. We first characterize the climatology of extreme rainfall footprint sizes across the US from 1980-2015 using Daymet, a high-resolution observational gridded rainfall dataset. We find that there are distinct regional and seasonal differences in the average footprint sizes of extreme daily rainfall. In the winter, the Midwest shows footprints exceeding 500,000 sq. km while the Front Range exhibits footprints of 10,000 sq. km. In contrast, the summer average footprint size is generally smaller and more uniform across the US, ranging from 10,000 sq. km in the Southwest to 100,000 sq. km in Montana and North Dakota. Moreover, we find some significant increasing trends in average footprint size between 1980-2015, specifically in the Southwest in the winter and the Northeast in the spring. While gridded daily rainfall datasets provide a practical framework for calculating footprint size, the calculation depends heavily on the interpolation methods used in creating the dataset. Therefore, we also assess footprint size using the GHCN-Daily station network and use geostatistical methods to define footprints of extreme rainfall directly from station data. Compared to the findings from Daymet, preliminary results using this method show fewer small daily footprints over the US, while large footprints are similar in number and magnitude to those from Daymet. Overall, defining the spatial characteristics of extreme rainfall, as well as the observed and expected changes in these characteristics, allows us to better understand the hydrologic response to extreme rainfall and how to better characterize flood risks.
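The footprint definition lends itself to connected-component labelling on the daily exceedance mask. The sketch below illustrates that calculation on a synthetic grid; the grid size, cell area and rainfall values are stand-ins for a Daymet-like product, and the per-cell 90th-percentile threshold is one plausible reading of the definition above.

```python
# Hedged sketch: daily extreme-precipitation "footprints" as contiguous regions where
# rainfall exceeds a grid-cell-specific 90th percentile, via connected-component labelling.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
days, ny, nx = 365, 100, 120
rain = rng.gamma(shape=0.5, scale=5.0, size=(days, ny, nx))   # synthetic daily rainfall
cell_area_km2 = 1.0                                           # hypothetical grid-cell area

p90 = np.percentile(rain, 90, axis=0)          # per-cell 90th-percentile threshold

footprints = []
for d in range(days):
    exceed = rain[d] > p90
    labels, n_regions = ndimage.label(exceed)  # 4-connectivity by default
    sizes = ndimage.sum(exceed, labels, index=range(1, n_regions + 1))
    footprints.extend(np.asarray(sizes) * cell_area_km2)

print("Largest footprint (km^2):", max(footprints))
```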
The NuSTAR Serendipitous Survey: Hunting for the Most Extreme Obscured AGN at >10 keV
NASA Astrophysics Data System (ADS)
Lansbury, G. B.; Alexander, D. M.; Aird, J.; Gandhi, P.; Stern, D.; Koss, M.; Lamperti, I.; Ajello, M.; Annuar, A.; Assef, R. J.; Ballantyne, D. R.; Baloković, M.; Bauer, F. E.; Brandt, W. N.; Brightman, M.; Chen, C.-T. J.; Civano, F.; Comastri, A.; Del Moro, A.; Fuentes, C.; Harrison, F. A.; Marchesi, S.; Masini, A.; Mullaney, J. R.; Ricci, C.; Saez, C.; Tomsick, J. A.; Treister, E.; Walton, D. J.; Zappacosta, L.
2017-09-01
We identify sources with extremely hard X-ray spectra (i.e., with photon indices of Γ ≲ 0.6) in the 13 deg^2 NuSTAR serendipitous survey, to search for the most highly obscured active galactic nuclei (AGNs) detected at >10 keV. Eight extreme NuSTAR sources are identified, and we use the NuSTAR data in combination with lower-energy X-ray observations (from Chandra, Swift XRT, and XMM-Newton) to characterize the broadband (0.5-24 keV) X-ray spectra. We find that all of the extreme sources are highly obscured AGNs, including three robust Compton-thick (CT; N_H > 1.5 × 10^24 cm^-2) AGNs at low redshift (z < 0.1) and a likely CT AGN at higher redshift (z = 0.16). Most of the extreme sources would not have been identified as highly obscured based on the low-energy (<10 keV) X-ray coverage alone. The multiwavelength properties (e.g., optical spectra and X-ray-mid-IR luminosity ratios) provide further support for the eight sources being significantly obscured. Correcting for absorption, the intrinsic rest-frame 10-40 keV luminosities of the extreme sources cover a broad range, from ≈5 × 10^42 to 10^45 erg s^-1. The estimated number counts of CT AGNs in the NuSTAR serendipitous survey are in broad agreement with model expectations based on previous X-ray surveys, except for the lowest redshifts (z < 0.07), where we measure a high CT fraction of f_CT(obs) = 30 (+16/-12)%. For the small sample of CT AGNs, we find a high fraction of galaxy major mergers (50% ± 33%) compared to control samples of "normal" AGNs.
NASA Astrophysics Data System (ADS)
Otto, F. E. L.; Mitchell, D.; Sippel, S.; Black, M. T.; Dittus, A. J.; Harrington, L. J.; Mohd Saleh, N. H.
2014-12-01
A shift in the distribution of socially relevant climate variables, such as daily minimum winter temperatures and daily precipitation extremes, has been attributed to anthropogenic climate change for various mid-latitude regions. However, while there are many process-based arguments suggesting also a change in the shape of these distributions, attribution studies demonstrating this have not yet been undertaken. Here we use a very large initial-condition ensemble of ~40,000 members simulating the European winter 2013/2014 using the distributed computing infrastructure of the weather@home project. Two separate scenarios are used: (1) current climate conditions, and (2) a counterfactual scenario of a "world that might have been" without anthropogenic forcing. Focusing specifically on extreme events, we assess how the estimated parameters of the Generalized Extreme Value (GEV) distribution vary depending on variable type, sampling frequency (daily, monthly, …) and geographical region. We find that the location parameter changes for most variables but, depending on the region and variable, we also find significant changes in the scale and shape parameters. The very large ensemble furthermore allows us to assess whether such findings in the fitted GEV distributions are consistent with an empirical analysis of the model data, and whether the most extreme data still follow a known underlying distribution that in a small sample might otherwise be dismissed as an outlier. The ~40,000-member ensemble is simulated using 12 different SST patterns (1 'observed', and 11 best guesses of SSTs with no anthropogenic warming). The range in SSTs, along with the corresponding changes in the NAO and high-latitude blocking, informs on the dynamics governing some of these extreme events. While strong teleconnection patterns are not found in this particular experiment, the high number of simulated extreme events allows for a more thorough analysis of the dynamics than has been performed before. Combining extreme value theory with very large ensemble simulations therefore allows us to understand the dynamics of changes in extreme events in a way that is not possible using extreme value theory alone, and also shows in which cases statistics combined with smaller ensembles give results as valid as those from very large initial-condition ensembles.
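Fitting the GEV to block maxima from each ensemble and comparing the location, scale and shape parameters is the core operation. A minimal sketch using scipy on synthetic "actual" and "natural" ensembles is given below; the data, parameter values and ensemble sizes are illustrative only.

```python
# Hedged sketch: fit a Generalized Extreme Value distribution to block maxima from two
# large ensembles and compare the fitted location, scale and shape parameters.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
actual = rng.gumbel(loc=12.0, scale=2.0, size=40000)    # stand-in for ensemble winter maxima
natural = rng.gumbel(loc=11.0, scale=1.8, size=40000)

for name, maxima in [("actual", actual), ("natural", natural)]:
    c, loc, scale = genextreme.fit(maxima)   # note: scipy's shape c equals -xi in the usual GEV convention
    print(f"{name:8s} location={loc:5.2f}  scale={scale:4.2f}  shape(xi)={-c:+.3f}")
```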
The association of extreme temperatures and the incidence of tuberculosis in Japan
NASA Astrophysics Data System (ADS)
Onozuka, Daisuke; Hagihara, Akihito
2015-08-01
Seasonal variation in the incidence of tuberculosis (TB) has been widely assumed. However, few studies have investigated the association between extreme temperatures and the incidence of TB. We collected data on cases of TB and mean temperature in Fukuoka, Japan for 2008-2012 and used time-series analyses to assess the possible relationship of extreme temperatures with TB incident cases, adjusting for seasonal and interannual variation. Our analysis revealed that the occurrence of extreme heat temperature events resulted in a significant increase in the number of TB cases (relative risk (RR) 1.20, 95 % confidence interval (CI) 1.01-1.43). We also found that the occurrence of extreme cold temperature events resulted in a significant increase in the number of TB cases (RR 1.23, 95 % CI 1.05-1.45). Sex and age did not modify the effect of either heat or cold extremes. Our study provides quantitative evidence that the number of TB cases increased significantly with extreme heat and cold temperatures. The results may help public health officials predict extreme temperature-related TB incidence and prepare for the implementation of preventive public health interventions.
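Relative risks of this kind are commonly obtained from a Poisson time-series regression of case counts on indicators for extreme-temperature days, with harmonic terms controlling for seasonality and a trend term for interannual variation. The sketch below applies that generic approach to simulated weekly data; it is not the authors' exact model specification, and all variable names and values are illustrative.

```python
# Hedged sketch: Poisson GLM of weekly case counts on extreme-heat and extreme-cold
# indicators, with harmonic seasonal controls and a linear trend. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
weeks = 260
t = np.arange(weeks)
temp = 16 + 10 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, weeks)
heat = (temp > np.percentile(temp, 95)).astype(int)
cold = (temp < np.percentile(temp, 5)).astype(int)
lam = np.exp(1.5 + 0.2 * heat + 0.2 * cold + 0.1 * np.sin(2 * np.pi * t / 52))
cases = rng.poisson(lam)

X = pd.DataFrame({"heat": heat, "cold": cold, "trend": t / weeks,
                  "sin": np.sin(2 * np.pi * t / 52), "cos": np.cos(2 * np.pi * t / 52)})
X = sm.add_constant(X)
fit = sm.GLM(cases, X, family=sm.families.Poisson()).fit()
rr = np.exp(fit.params[["heat", "cold"]])                       # relative risks
ci = np.exp(fit.conf_int().loc[["heat", "cold"]])               # 95% confidence intervals
print(pd.concat([rr.rename("RR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```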
A Test-Length Correction to the Estimation of Extreme Proficiency Levels
ERIC Educational Resources Information Center
Magis, David; Beland, Sebastien; Raiche, Gilles
2011-01-01
In this study, the estimation of extremely large or extremely small proficiency levels, given the item parameters of a logistic item response model, is investigated. On one hand, the estimation of proficiency levels by maximum likelihood (ML), despite being asymptotically unbiased, may yield infinite estimates. On the other hand, with an…
Pellicer, Jaume; Kelly, Laura J; Leitch, Ilia J; Zomlefer, Wendy B; Fay, Michael F
2014-03-01
• Since the occurrence of giant genomes in angiosperms is restricted to just a few lineages, identifying where shifts towards genome obesity have occurred is essential for understanding the evolutionary mechanisms triggering this process. • Genome sizes were assessed using flow cytometry in 79 species and new chromosome numbers were obtained. Phylogenetically based statistical methods were applied to infer ancestral character reconstructions of chromosome numbers and nuclear DNA contents. • Melanthiaceae are the most diverse family in terms of genome size, with C-values ranging more than 230-fold. Our data confirmed that giant genomes are restricted to tribe Parideae, with most extant species in the family characterized by small genomes. Ancestral genome size reconstruction revealed that the most recent common ancestor (MRCA) for the family had a relatively small genome (1C = 5.37 pg). Chromosome losses and polyploidy are recovered as the main evolutionary mechanisms generating chromosome number change. • Genome evolution in Melanthiaceae has been characterized by a trend towards genome size reduction, with just one episode of dramatic DNA accumulation in Parideae. Such extreme contrasting profiles of genome size evolution illustrate the key role of transposable elements and chromosome rearrangements in driving the evolution of plant genomes. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.
Evaluation of Bias and Variance in Low-count OSEM List Mode Reconstruction
Jian, Y; Planeta, B; Carson, R E
2016-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization (MLEM) reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combination of subsets and iterations. Regions of interest (ROIs) were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations x subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1–5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR. PMID:25479254
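The "effective iterations = iterations × subsets" bookkeeping used for matched bias/noise comparisons can be illustrated with a generic ordered-subsets EM recursion on a toy linear system. The sketch below is a textbook OSEM update, not the MOLAR implementation; the system matrix, count scaling and subset split are arbitrary.

```python
# Hedged sketch: OSEM on a toy Poisson inverse problem, comparing a subset schedule
# against plain MLEM at matched effective iterations (iterations x subsets).
import numpy as np

rng = np.random.default_rng(6)
n_pix, n_bins = 16, 64
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))       # toy system matrix
x_true = rng.uniform(1.0, 5.0, size=n_pix)
y = rng.poisson(A @ x_true * 0.05)                     # low-count data (scaled down)

def osem(y, A, n_iter, n_subsets):
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    for _ in range(n_iter):
        for idx in subsets:
            Asub, ysub = A[idx], y[idx]
            ratio = ysub / np.clip(Asub @ x, 1e-12, None)          # data / forward projection
            x *= (Asub.T @ ratio) / np.clip(Asub.sum(axis=0), 1e-12, None)
    return x

# Matched effective iterations: 4 iterations x 8 subsets vs 32 iterations x 1 subset (MLEM)
x_osem = osem(y, A, n_iter=4, n_subsets=8)
x_mlem = osem(y, A, n_iter=32, n_subsets=1)
print("Relative difference:", np.linalg.norm(x_osem - x_mlem) / np.linalg.norm(x_mlem))
```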
Evaluation of bias and variance in low-count OSEM list mode reconstruction
NASA Astrophysics Data System (ADS)
Jian, Y.; Planeta, B.; Carson, R. E.
2015-01-01
Statistical algorithms have been widely used in PET image reconstruction. The maximum likelihood expectation maximization reconstruction has been shown to produce bias in applications where images are reconstructed from a relatively small number of counts. In this study, image bias and variability in low-count OSEM reconstruction are investigated on images reconstructed with MOLAR (motion-compensation OSEM list-mode algorithm for resolution-recovery reconstruction) platform. A human brain ([11C]AFM) and a NEMA phantom are used in the simulation and real experiments respectively, for the HRRT and Biograph mCT. Image reconstructions were repeated with different combinations of subsets and iterations. Regions of interest were defined on low-activity and high-activity regions to evaluate the bias and noise at matched effective iteration numbers (iterations × subsets). Minimal negative biases and no positive biases were found at moderate count levels and less than 5% negative bias was found using extremely low levels of counts (0.2 M NEC). At any given count level, other factors, such as subset numbers and frame-based scatter correction may introduce small biases (1-5%) in the reconstructed images. The observed bias was substantially lower than that reported in the literature, perhaps due to the use of point spread function and/or other implementation methods in MOLAR.
Bioethics and Public Health Collaborate to Reveal Impacts of Climate Change on Caribbean Life
NASA Astrophysics Data System (ADS)
Macpherson, C.; Akpinar-Elci, M.
2011-12-01
Interdisciplinary dialog and collaboration aimed at protecting health against climate change is impeded by the small number of scientists and health professionals skilled in interdisciplinary work, and by the view held by many that "climate change won't affect me personally". These challenges may be surmounted by discussions about the lived experience of climate change and how this threatens things we value. Dialog between bioethics and public health generated an innovative collaboration using the focus group method. The main limitation of focus groups is the small number of participants; however, the data obtained are generalizable to wider groups, and the method is used regularly in business to enhance marketing strategies. Caribbean academicians from varied disciplines discussed how climate change affects them and life in the Caribbean. Caribbean states are particularly vulnerable to climate change because their large coastal areas are directly exposed to rising sea levels and their development relies heavily on foreign aid. The Caribbean comprises about half of the 39 members of the Alliance of Small Island States (AOSIS), and small island states comprise about 5% of the global population [1]. Participants described socioeconomic and environmental changes in the Caribbean that they attribute to climate change. These include extreme weather, unusual rain and drought, drying rivers, beach erosion, declining fish catches, and others. The session exposed impacts on individuals, businesses, agriculture, and disaster preparedness. This data helps to reframe climate change as a personal reality rather than a vague future concern. It is relevant to the design, implementation, and sustainability of climate policies in the Caribbean and perhaps other small island states. The method and interdisciplinary approach can be used in other settings to elicit dialog about experiences and values across sectors, and to inform policies. Those who have experienced extreme weather are more concerned about climate change than others [2], and no expertise is needed to discuss such experiences or related values. These are accessible concepts in all disciplines and across socioeconomic levels. Research to further identify and describe values challenged by climate change is needed and can be communicated across disciplines and to the public. The resultant dialog will facilitate interdisciplinary collaboration, public and political debate, and possibly generate behavior change. References 1. Alliance of Small Island States (AOSIS). Accessed July 6, 2011. http://aosis.info/members-and-observers/ 2. Spence A., Poortinga W., Butler C., Pidgeon N.F. Perceptions of climate change and willingness to save energy related to flood experience. Nature Climate Change. March 2011. Accessed July 6, 2011. http://www.nature.com/nclimate/journal/vaop/ncurrent/full/nclimate1059.html
ELROI Extremely Low Resource Optical Identifier. A license plate for your satellite, and more.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, David
ELROI (Extremely Low Resource Optical Identifier) is a license plate for your satellite; a small tag that flashes an optical identification code that can be read by a small telescope on the ground. The final version of the tag will be the size of a thick postage stamp and fully autonomous: you can attach it to everything that goes into space, including small cubesats and inert debris like rocket stages, and it will keep blinking even after the satellite is shut down, reliably identifying the object from launch until re-entry.
Lu, Huijuan; Wei, Shasha; Zhou, Zili; Miao, Yanzi; Lu, Yi
2015-01-01
The main purpose of traditional classification algorithms in bioinformatics applications is to achieve better classification accuracy. However, these algorithms cannot meet the requirement of minimising the average misclassification cost. In this paper, a new cost-sensitive regularised extreme learning machine (CS-RELM) algorithm is proposed, which uses probability estimates and misclassification costs to reconstruct the classification results. By improving the classification accuracy on small-sample groups with higher misclassification costs, the new CS-RELM can minimise the overall classification cost. A 'rejection cost' was integrated into the CS-RELM algorithm to further reduce the average misclassification cost. Using the Colon Tumour dataset and the SRBCT (Small Round Blue Cell Tumour) dataset, CS-RELM was compared with the extreme learning machine (ELM), cost-sensitive extreme learning machine, regularised extreme learning machine and cost-sensitive support vector machine (SVM). The results of the experiments show that CS-RELM with an embedded rejection cost could reduce the average misclassification cost and make more credible classification decisions than the other methods.
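The decision stage implied by this description is a minimum-expected-cost rule with a rejection option: given class-probability estimates from a classifier (an RELM could supply them), pick the label with the lowest expected cost, or reject when the rejection cost is lower still. The sketch below shows that generic rule; the cost matrix, rejection cost and probabilities are illustrative, not the paper's values.

```python
# Hedged sketch: cost-sensitive decision with rejection, from class-probability estimates.
import numpy as np

# cost[i, j] = cost of predicting class j when the true class is i
cost = np.array([[0.0, 5.0],    # class 0 (e.g., tumour): missing it is expensive
                 [1.0, 0.0]])   # class 1 (e.g., normal)
reject_cost = 0.8               # illustrative 'rejection cost'

def decide(prob, cost, reject_cost):
    """Return the minimum-expected-cost class index, or -1 for rejection."""
    expected = prob @ cost                    # expected cost of each candidate label
    best = int(np.argmin(expected))
    return -1 if reject_cost < expected[best] else best

for p0 in (0.95, 0.18, 0.05):
    prob = np.array([p0, 1.0 - p0])           # estimated P(class 0), P(class 1)
    print(f"P(class0)={p0:.2f} -> decision {decide(prob, cost, reject_cost)}")
```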
Ginsberg, Jill P; Rai, Shesh N; Carlson, Claire A; Meadows, Anna T; Hinds, Pamela S; Spearing, Elena M; Zhang, Lijun; Callaway, Lulie; Neel, Michael D; Rao, Bhaskar N; Marchese, Victoria G
2007-12-01
Comparison of functional mobility and quality of life is performed in patients with lower-extremity bone sarcoma following either amputation, limb-sparing surgery, or rotationplasty with four different types of outcome measures: (1) an objective functional mobility measure that requires patients to physically perform specific tasks, functional mobility assessment (FMA); (2) a clinician administered tool, Musculoskeletal Tumor Society Scale (MSTS); (3) a patient questionnaire, Toronto Extremity Salvage Scale (TESS); and (4) a health-related quality of life (HRQL) measure, Short Form-36 version 2 (SF-36v.2). This is a prospective multi-site study including 91 patients with lower-extremity bone sarcoma following amputation, limb-sparing surgery, or rotationplasty. One of three physical therapists administered the quality of life measure (SF-36v.2) as well as a battery of functional measures (FMA, MSTS, and TESS). Differences between patients who had amputation, limb-sparing surgery, or rotationplasty were consistently demonstrated by the FMA. Patients with limb sparing femur surgery performed better than those patients with an above the knee amputation but similarly to a small number of rotationplasty patients. Several of the more conventional self-report measures were shown to not have the discriminative capabilities of the FMA in these cohorts. In adolescents with lower-extremity bone sarcoma, it may be advantageous to consider the use of a combination of outcome measures, including the FMA, for objective functional mobility assessment along with the TESS for a subjective measure of disability and the SF-36v.2 for a quality-of-life measure. 2007 Wiley-Liss, Inc
GESTATIONAL AGE AT BIRTH AND RISK OF TESTICULAR CANCER
Crump, Casey; Sundquist, Kristina; Winkleby, Marilyn A.; Sieh, Weiva; Sundquist, Jan
2011-01-01
Most testicular germ cell tumors originate from carcinoma in situ cells in fetal life, possibly related to sex hormone imbalances in early pregnancy. Previous studies of association between gestational age at birth and testicular cancer have yielded discrepant results and have not examined extreme preterm birth. Our objective was to determine whether low gestational age at birth is independently associated with testicular cancer in later life. We conducted a national cohort study of 354,860 men born in Sweden in 1973–1979, including 19,214 born preterm (gestational age <37 weeks) of whom 1,279 were born extremely preterm (22–29 weeks), followed for testicular cancer incidence through 2008. A total of 767 testicular cancers (296 seminomas and 471 nonseminomatous germ cell tumors) were identified in 11.2 million person-years of follow-up. Extreme preterm birth was associated with an increased risk of testicular cancer (hazard ratio 3.95; 95% CI, 1.67–9.34) after adjusting for other perinatal factors, family history of testicular cancer, and cryptorchidism. Only five cases (three seminomas and two nonseminomas) occurred among men born extremely preterm, limiting the precision of risk estimates. No association was found between later preterm birth, post-term birth, or low or high fetal growth and testicular cancer. These findings suggest that extreme but not later preterm birth may be independently associated with testicular cancer in later life. They are based on a small number of cases and will need confirmation in other large cohorts. Elucidation of the key prenatal etiologic factors may potentially lead to preventive interventions in early life. PMID:22314417
Kakuda, Tsuneo; Shojo, Hideki; Tanaka, Mayumi; Nambiar, Phrabhakaran; Minaguchi, Kiyoshi; Umetsu, Kazuo; Adachi, Noboru
2016-01-01
Mitochondrial DNA (mtDNA) serves as a powerful tool for exploring matrilineal phylogeographic ancestry, as well as for analyzing highly degraded samples, because of its polymorphic nature and high copy numbers per cell. The recent advent of complete mitochondrial genome sequencing has led to improved techniques for phylogenetic analyses based on mtDNA, and many multiplex genotyping methods have been developed for the hierarchical analysis of phylogenetically important mutations. However, few high-resolution multiplex genotyping systems for analyzing East-Asian mtDNA can be applied to extremely degraded samples. Here, we present a multiplex system for analyzing mitochondrial single nucleotide polymorphisms (mtSNPs), which relies on a novel amplified product-length polymorphisms (APLP) method that uses inosine-flapped primers and is specifically designed for the detailed haplogrouping of extremely degraded East-Asian mtDNAs. We used fourteen 6-plex polymerase chain reactions (PCRs) and subsequent electrophoresis to examine 81 haplogroup-defining SNPs and 3 insertion/deletion sites, and we were able to securely assign the studied mtDNAs to relevant haplogroups. Our system requires only 1×10−13 g (100 fg) of crude DNA to obtain a full profile. Owing to its small amplicon size (<110 bp), this new APLP system was successfully applied to extremely degraded samples for which direct sequencing of hypervariable segments using mini-primer sets was unsuccessful, and proved to be more robust than conventional APLP analysis. Thus, our new APLP system is effective for retrieving reliable data from extremely degraded East-Asian mtDNAs. PMID:27355212
Larsen, Kristian; Weidich, Flemming; Leboeuf-Yde, Charlotte
2002-06-01
Shock-absorbing and biomechanic shoe orthoses are frequently used in the prevention and treatment of back and lower extremity problems. One review concludes that the former is clinically effective in relation to prevention, whereas the latter has been tested in only 1 randomized clinical trial, concluding that stress fractures could be prevented. To investigate if biomechanic shoe orthoses can prevent problems in the back and lower extremities and if reducing the number of days off-duty because of back or lower extremity problems is possible. Prospective, randomized, controlled intervention trial. One female and 145 male military conscripts (aged 18 to 24 years), representing 25% of all new conscripts in a Danish regiment. Health data were collected by questionnaires at initiation of the study and 3 months later. Custom-made biomechanic shoe orthoses to be worn in military boots were provided to all in the study group during the 3-month intervention period. No intervention was provided for the control group. Differences between the 2 groups were tested with the chi-square test, and statistical significance was accepted at P <.05. Risk ratio (RR), risk difference (ARR), numbers needed to prevent (NNP), and cost per successfully prevented case were calculated. Outcome variables included self-reported back and/or lower extremity problems; specific problems in the back or knees or shin splints, Achilles tendonitis, sprained ankle, or other problems in the lower extremity; number of subjects with at least 1 day off-duty because of back or lower extremity problems and total number of days off-duty within the first 3 months of military service because of back or lower extremity problems. Results were significantly better in an actual-use analysis in the intervention group for total number of subjects with back or lower extremity problems (RR 0.7, ARR 19%, NNP 5, cost 98 US dollars); number of subjects with shin splints (RR 0.2, ARR 19%, NNP 5, cost 101 US dollars); number of off-duty days because of back or lower extremity problems (RR 0.6, ARR < 1%, NNP 200, cost 3750 US dollars). In an intention-to-treat analysis, a significant difference was found for only number of subjects with shin splints (RR 0.3, ARR 18%, NNP 6 cost 105 US dollars), whereas a worst-case analysis revealed no significant differences between the study groups. This study shows that it may be possible to prevent certain musculoskeletal problems in the back or lower extremities among military conscripts by using custom-made biomechanic shoe orthoses. However, because care-seeking for lower extremity problems is rare, using this method of prevention in military conscripts would be too costly. We also noted that the choice of statistical approach determined the outcome.
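The reported effect measures follow from standard definitions: RR = risk(intervention)/risk(control), ARR = risk(control) - risk(intervention), NNP = 1/ARR, and cost per prevented case = NNP × cost per subject. The sketch below recomputes them from illustrative 2×2 counts chosen only to roughly reproduce the reported RR 0.7, ARR 19%, NNP 5 and cost near 100 US dollars; the trial's raw counts are not given in the abstract.

```python
# Hedged sketch: risk ratio, absolute risk reduction, numbers needed to prevent, and
# cost per prevented case, from illustrative counts (not the trial's raw data).
def prevention_measures(events_tx, n_tx, events_ctrl, n_ctrl, cost_per_subject):
    risk_tx, risk_ctrl = events_tx / n_tx, events_ctrl / n_ctrl
    rr = risk_tx / risk_ctrl
    arr = risk_ctrl - risk_tx            # absolute risk reduction
    nnp = 1.0 / arr                      # subjects treated per case prevented
    cost = nnp * cost_per_subject        # cost per successfully prevented case
    return rr, arr, nnp, cost

rr, arr, nnp, cost = prevention_measures(events_tx=31, n_tx=73,
                                         events_ctrl=45, n_ctrl=73,
                                         cost_per_subject=20)
print(f"RR={rr:.2f}  ARR={arr:.0%}  NNP={nnp:.1f}  cost per prevented case=${cost:.0f}")
```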
Extreme sensitivity biosensing platform based on hyperbolic metamaterials
NASA Astrophysics Data System (ADS)
Sreekanth, Kandammathe Valiyaveedu; Alapan, Yunus; Elkabbash, Mohamed; Ilker, Efe; Hinczewski, Michael; Gurkan, Umut A.; de Luca, Antonio; Strangi, Giuseppe
2016-06-01
Optical sensor technology offers significant opportunities in the field of medical research and clinical diagnostics, particularly for the detection of small numbers of molecules in highly diluted solutions. Several methods have been developed for this purpose, including label-free plasmonic biosensors based on metamaterials. However, the detection of lower-molecular-weight (<500 Da) biomolecules in highly diluted solutions is still a challenging issue owing to their lower polarizability. In this context, we have developed a miniaturized plasmonic biosensor platform based on a hyperbolic metamaterial that can support highly confined bulk plasmon guided modes over a broad wavelength range from visible to near infrared. By exciting these modes using a grating-coupling technique, we achieved different extreme sensitivity modes with a maximum of 30,000 nm per refractive index unit (RIU) and a record figure of merit (FOM) of 590. We report the ability of the metamaterial platform to detect ultralow-molecular-weight (244 Da) biomolecules at picomolar concentrations using a standard affinity model streptavidin-biotin.
NASA Technical Reports Server (NTRS)
Xu, Kuan-Man
2015-01-01
During inactive phases of the Madden-Julian Oscillation (MJO), there are plenty of deep but small convective systems and far fewer deep and large ones. During active phases of the MJO, an increase in the occurrence of large and deep cloud clusters results from the amplification of large-scale motions by stronger convective heating. This study is designed to quantitatively examine the roles of small and large cloud clusters during the MJO life cycle. We analyze the cloud object data from Aqua CERES (Clouds and the Earth's Radiant Energy System) observations between July 2006 and June 2010 for tropical deep convective (DC) and cirrostratus (CS) cloud object types according to the real-time multivariate MJO index, which assigns the tropics to one of the eight MJO phases each day. A cloud object is a contiguous region of the earth with a single dominant cloud-system type. The criteria for defining these cloud types are overcast footprints and cloud top pressures less than 400 hPa, but DC has higher cloud optical depths (≥10) than CS (<10). The size distributions, defined as the footprint numbers as a function of cloud object diameters, for particular MJO phases depart greatly from the combined (8-phase) distribution at large cloud-object diameters due to the reduced/increased numbers of cloud objects related to changes in the large-scale environments. The median diameter of the combined distribution is determined and used to partition all cloud objects of a particular phase into "small" and "large" groups. The two groups corresponding to the combined distribution have nearly equal numbers of footprints. The median diameters are 502 km for DC and 310 km for cirrostratus. The range of the variation between two extreme phases (typically, the most active and depressed phases) for the small group is 6-11% in terms of the numbers of cloud objects and the total footprint numbers. The corresponding range for the large group is 19-44%. In terms of the probability density functions of radiative and cloud physical properties, there are virtually no differences between the MJO phases for the small group, but there are significant differences for the large group for both DC and CS types. These results suggest that the intraseasonal variation signals reside in the large cloud clusters while the small cloud clusters represent background noise resulting from various types of tropical waves with different wavenumbers and propagation speeds/directions.
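A minimal Python sketch of the partitioning step described above, splitting the cloud objects of one MJO phase into "small" and "large" groups at the median diameter of the combined population; the diameters here are synthetic stand-ins, not CERES cloud-object data.

```python
import numpy as np

def partition_by_median_diameter(diameters_all_phases, diameters_one_phase):
    """Split cloud objects of one MJO phase into 'small' and 'large' groups,
    using the median diameter of the combined (8-phase) population as the cut."""
    median_d = np.median(diameters_all_phases)
    small = diameters_one_phase[diameters_one_phase <= median_d]
    large = diameters_one_phase[diameters_one_phase > median_d]
    return median_d, small, large

# Synthetic diameters (km) standing in for the observed cloud objects:
rng = np.random.default_rng(0)
combined = rng.lognormal(mean=6.2, sigma=0.5, size=5000)   # all eight phases
phase3 = rng.lognormal(mean=6.3, sigma=0.5, size=600)      # one phase
cut, small, large = partition_by_median_diameter(combined, phase3)
print(f"median diameter {cut:.0f} km: {small.size} small, {large.size} large objects")
```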
Gross, V; Bährle, R; Mayer, G
2018-04-01
The taxon Tardigrada, commonly called "water bears", consists of microscopic, eight-legged invertebrates that are well known for their ability to tolerate extreme environmental conditions. Their miniscule body size means that tardigrades possess a small total number of cells, the number and arrangement of which may be highly conserved in some organs. Although mitoses have been observed in several organs, the rate and pattern of cell divisions in adult tardigrades has never been characterized. In this study, we incubated live tardigrades over a period of several days with a thymidine analog in order to visualize all cells that had divided during this time. We focus on the midgut, the largest part of the digestive system. Our results show that new cells in the midgut arise from the anterior and posterior ends of this organ and either migrate or divide toward its middle. These cells divide at a constant rate and all cells of the midgut epithelium are replaced in approximately one week. On the other hand, we found no cell divisions in the nervous system or any other major organs, suggesting that the cell turnover of these organs may be extremely slow or dependent on changing environmental conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
Pulse-coupled mixed-mode oscillators: Cluster states and extreme noise sensitivity
NASA Astrophysics Data System (ADS)
Karamchandani, Avinash J.; Graham, James N.; Riecke, Hermann
2018-04-01
Motivated by rhythms in the olfactory system of the brain, we investigate the synchronization of all-to-all pulse-coupled neuronal oscillators exhibiting various types of mixed-mode oscillations (MMOs) composed of sub-threshold oscillations (STOs) and action potentials ("spikes"). We focus particularly on the impact of the delay in the interaction. In the weak-coupling regime, we reduce the system to a Kuramoto-type equation with non-sinusoidal phase coupling and the associated Fokker-Planck equation. Its linear stability analysis identifies the appearance of various cluster states. Their type depends sensitively on the delay and the width of the pulses. Interestingly, long delays do not imply slow population rhythms, and the number of emerging clusters only loosely depends on the number of STOs. Direct simulations of the oscillator equations reveal that for quantitative agreement of the weak-coupling theory the coupling strength and the noise have to be extremely small. Even moderate noise leads to significant skipping of STO cycles, which can enhance the diffusion coefficient in the Fokker-Planck equation by two orders of magnitude. Introducing an effective diffusion coefficient extends the range of agreement significantly. Numerical simulations of the Fokker-Planck equation reveal bistability and solutions with oscillatory order parameters that result from nonlinear mode interactions. These are confirmed in simulations of the full spiking model.
Tao, Tao; Wyer, Robert S; Zheng, Yuhuang
2017-03-01
We propose a two-process conceptualization of numerical information processing to describe how people form impressions of a score that is described along a bounded scale. According to the model, people spontaneously categorize a score as high or low. Furthermore, they compare the numerical discrepancy between the score and the endpoint of the scale to which it is closer, if they are not confident of their categorization, and use implications of this comparison as a basis for judgment. As a result, their evaluation of the score is less extreme when the range of numbers along the scale is large (e.g., from 0 to 100) than when it is small (from 0 to 10). Six experiments support this two-process model and demonstrate its generalizability. Specifically, the magnitude of numbers composing the scale has less impact on judgments (a) when the score being evaluated is extreme, (b) when individuals are unmotivated to engage in endpoint comparison processes (i.e., they are low in need for cognition), and (c) when they are unable to do so (i.e., they are under cognitive load). Moreover, the endpoint to which individuals compare the score can depend on their regulatory focus. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
August 2014 Hiroshima landslide disaster and its societal impact
NASA Astrophysics Data System (ADS)
Fukuoka, Hiroshi; Sassa, Kyoji; Wang, Chunxiang
2015-04-01
In the early morning of August 20, 2014, Hiroshima city was hit by a number of debris flows along a linear rain band which caused extreme downpour. This disaster claimed 74 deaths, although the city had experienced a very similar disaster in 1999 that claimed more than 30 residents' lives. In the most severely affected debris flow torrent, more than 50 residents were killed. Most of the casualties occurred in vulnerable wooden houses constructed in front of the exits of the torrents. Points and lessons learnt from the disaster are as follows: 1. Extreme rainfall events: geology and geomorphology do not much affect the distribution of landslide initiation sites. 2. The area of causative extreme rainfall is localized in 2 km x 10 km along the rain band. 3. The authors collected two types of sands from the source scars of the initial debris slides which induced the debris flows. Tested with a ring shear apparatus under pore-pressure-controlled conditions, clear "sliding surface liquefaction" was confirmed for both samples even under small normal stress, representing the small thickness of the slides. These results show that even an instantaneous excess pore pressure could initiate the slides and trigger slide-induced debris flows by undrained loading onto the torrent deposits. 4. Apparently long-term land-use change affected the vulnerability of the community. Residential areas had expanded into hill slopes (mountainous / semi-mountainous areas), especially along the torrents. Those communities were developed on past debris flow fans. 5. As the devastated area is very close to downtown Hiroshima, the disaster had an enormous societal impact on Japanese citizens. After the 1999 Hiroshima debris flow disaster, the Landslide Disaster Reduction Law, which promotes the designation of potential landslide risk zones, was adopted in 2000. Immediately after the 2014 disaster, the National Diet approved a revision of the law.
Very Low Mass Stars with Extremely Low Metallicity in the Milky Way's Halo
NASA Astrophysics Data System (ADS)
Aoki, Wako; Beers, Timothy C.; Suda, Takuma; Honda, Satoshi; Lee, Young Sun
2015-08-01
Large surveys and follow-up spectroscopic studies in the past few decades have been providing chemical abundance data for a growing number of very metal-poor ([Fe/H] <-2) stars. Most of them are red giants or main-sequence turn-off stars having masses near 0.8 solar masses. Lower mass stars with extremely low metallicity ([Fe/H] <-3) have yet to be well explored. Our high-resolution spectroscopic study of very metal-poor stars found with SDSS has identified four cool main-sequence stars with [Fe/H] <-2.5 among 137 objects (Aoki et al. 2013, AJ, 145, 13). The effective temperatures of these stars are 4500-5000 K, corresponding to a mass of around 0.5 solar masses. Our standard analysis of the high-resolution spectra based on 1D-LTE model atmospheres has obtained self-consistent chemical abundances for these objects, assuming small values of micro-turbulent velocities compared with giants and turn-off stars. The low temperature of the atmospheres of these objects enables us to measure their detailed chemical abundances. Interestingly, two of the four stars have extreme chemical abundance patterns: one has the largest excesses of heavy neutron-capture elements associated with the r-process abundance pattern known to date (Aoki et al. 2010, ApJL 723, L201), and the other exhibits low abundances of the alpha-elements and odd-Z elements, suggested to be the signatures of the yields of very massive stars (>100 solar masses; Aoki et al. 2014, Science 345, 912). Although the sample size is still small, these results indicate the potential of very low-mass stars as probes to study the early stages of the Milky Way's halo formation.
Very Low-Mass Stars with Extremely Low Metallicity in the Milky Way's Halo
NASA Astrophysics Data System (ADS)
Aoki, Wako; Beers, Timothy C.; Suda, Takuma; Honda, Satoshi; Lee, Young Sun
2016-08-01
Large surveys and follow-up spectroscopic studies in the past few decades have been providing chemical abundance data for a growing number of very metal-poor ([Fe/H] <-2) stars. Most of them are red giants or main-sequence turn-off stars having masses near 0.8 solar masses. Lower mass stars with extremely low metallicity ([Fe/H] <-3) are yet to be explored. Our high-resolution spectroscopic study for very metal-poor stars found with SDSS has identified four cool main-sequence stars with [Fe/H] <-2.5 among 137 objects (Aoki et al. 2013). The effective temperatures of these stars are 4500-5000 K, corresponding to a mass of around 0.5 solar masses. Our standard analysis of the high-resolution spectra based on 1D-LTE model atmospheres has obtained self-consistent chemical abundances for these objects, assuming small values of micro-turbulent velocities compared with giants and turn-off stars. The low temperature of the atmospheres of these objects enables us to measure their detailed chemical abundances. Interestingly, two of the four stars have extreme chemical-abundance patterns: one has the largest excesses of heavy neutron-capture elements associated with the r-process abundance pattern known to date (Aoki et al. 2010), and the other exhibits low abundances of the α-elements and odd-Z elements, suggested to be signatures of the yields of very massive stars (> 100 solar masses; Aoki et al. 2014). Although the sample size is still small, these results indicate the potential of very low-mass stars as probes to study the early stages of the Milky Way's halo formation.
Gravitational Waves From the Kerr/CFT Correspondence
NASA Astrophysics Data System (ADS)
Porfyriadis, Achilleas
Astronomical observation suggests the existence of near-extreme Kerr black holes in the sky. Properties of diffeomorphisms imply that dynamics of the near-horizon region of near-extreme Kerr are governed by an infinite-dimensional conformal symmetry. This symmetry may be exploited to analytically, rather than numerically, compute a variety of potentially observable processes. In this thesis we compute the gravitational radiation emitted by a small compact object that orbits in the near-horizon region and plunges into the horizon of a large rapidly rotating black hole. We study the holographically dual processes in the context of the Kerr/CFT correspondence and find our conformal field theory (CFT) computations in perfect agreement with the gravity results. We compute the radiation emitted by a particle on the innermost stable circular orbit (ISCO) of a rapidly spinning black hole. We confirm previous estimates of the overall scaling of the power radiated, but show that there are also small oscillations all the way to extremality. Furthermore, we reveal an intricate mode-by-mode structure in the flux to infinity, with only certain modes having the dominant scaling. The scaling of each mode is controlled by its conformal weight. Massive objects in adiabatic quasi-circular inspiral towards a near-extreme Kerr black hole quickly plunge into the horizon after passing the ISCO. The post-ISCO plunge trajectory is shown to be related by a conformal map to a circular orbit. Conformal symmetry of the near-horizon region is then used to compute analytically the gravitational radiation produced during the plunge phase. Most extreme-mass-ratio-inspirals of small compact objects into supermassive black holes end with a fast plunge from an eccentric last stable orbit. We use conformal transformations to analytically solve for the radiation emitted from various fast plunges into extreme and near-extreme Kerr black holes.
Efficient Ab initio Modeling of Random Multicomponent Alloys
Jiang, Chao; Uberuaga, Blas P.
2016-03-08
Here, we present in this Letter a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we also demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.
Archaeal Viruses from High-Temperature Environments.
Munson-McGee, Jacob H; Snyder, Jamie C; Young, Mark J
2018-02-27
Archaeal viruses are some of the most enigmatic viruses known, due to the small number that have been characterized to date. The number of known archaeal viruses lags behind known bacteriophages by over an order of magnitude. Despite this, the high levels of genetic and morphological diversity that archaeal viruses display has attracted researchers for over 45 years. Extreme natural environments, such as acidic hot springs, are almost exclusively populated by Archaea and their viruses, making these attractive environments for the discovery and characterization of new viruses. The archaeal viruses from these environments have provided insights into archaeal biology, gene function, and viral evolution. This review focuses on advances from over four decades of archaeal virology, with a particular focus on archaeal viruses from high temperature environments, the existing challenges in understanding archaeal virus gene function, and approaches being taken to overcome these limitations.
NASA Astrophysics Data System (ADS)
Liu, M.; Yang, L.; Smith, J. A.; Vecchi, G. A.
2017-12-01
Extreme rainfall and flooding associated with landfalling tropical cyclones (TCs) are responsible for vast socioeconomic losses and fatalities. Landfalling tropical cyclones are an important element of extreme rainfall and flood peak distributions in the eastern United States. Record floods for USGS stream gauging stations over the eastern US are closely tied to landfalling hurricanes. A small number of storms account for the largest record floods, most notably Hurricanes Diane (1955) and Agnes (1972). The question we address is: if the synoptic conditions accompanying those hurricanes were to be repeated in the future, how would the thermodynamic and dynamic storm properties and associated extreme rainfall differ in response to climate change? We examine three hurricanes, Diane (1955), Agnes (1972) and Irene (2011), because of the contrasts in their structure/evolution properties and their important roles in dictating the upper-tail properties of extreme rainfall and flood frequency over the eastern US. Extreme rainfall from Diane was more localized as the storm maintained tropical characteristics, while synoptic-scale vertical motion associated with extratropical transition was a central feature of the extreme rainfall induced by Agnes. Our analyses are based on ensemble simulations using the Weather Research and Forecasting (WRF) model, considering combinations of different physics options (i.e., microphysics, boundary layer schemes). The initial and boundary conditions of the WRF simulations for the present-day climate are taken from the Twentieth Century Reanalysis (20thCR). A sub-selection of GCMs from phase 5 of the Coupled Model Intercomparison Project (CMIP5) is used to provide future climate projections. For future simulations, changes in model fields (i.e., temperature, humidity, geopotential height) between the present-day and future climate are first derived and then added to the same 20thCR initial and boundary data used for the present-day simulations, and the ensemble is rerun using identical model configurations. The response of extreme rainfall as well as changes in thermodynamic and dynamic storm properties will be presented and analyzed. Contrasting responses of the three storm events to climate change will shed light on critical environmental factors for TC-related extreme rainfall over the eastern US.
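The perturbation ("delta") step described above can be sketched as follows; the field names, array shapes, and numbers are placeholders for illustration only, not the actual WRF or 20thCR interface.

```python
import numpy as np

def apply_climate_delta(reanalysis_field, gcm_future_clim, gcm_present_clim):
    """Pseudo-global-warming style perturbation: add the GCM-projected change
    (future minus present climatology) to the reanalysis field used as
    initial/boundary conditions, leaving the synoptic pattern itself unchanged."""
    delta = gcm_future_clim - gcm_present_clim
    return reanalysis_field + delta

# Placeholder arrays standing in for, e.g., temperature on a lat-lon grid:
t_20thcr = np.full((180, 360), 288.0)     # present-day reanalysis temperature (K)
t_gcm_now = np.full((180, 360), 287.5)    # GCM present-day climatology (K)
t_gcm_future = t_gcm_now + 2.0            # GCM mid-century climatology (K)
t_perturbed = apply_climate_delta(t_20thcr, t_gcm_future, t_gcm_now)
```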
Dissecting Magnetar Variability with Bayesian Hierarchical Models
NASA Astrophysics Data System (ADS)
Huppenkothen, Daniela; Brewer, Brendon J.; Hogg, David W.; Murray, Iain; Frean, Marcus; Elenbaas, Chris; Watts, Anna L.; Levin, Yuri; van der Horst, Alexander J.; Kouveliotou, Chryssa
2015-09-01
Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.
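A minimal Python sketch of the kind of spike-superposition model described above, in which a burst is a sum of spike-like components plus a background and the number of components is itself unknown; the asymmetric double-exponential spike shape and all parameter values here are illustrative assumptions, not the authors' functional form.

```python
import numpy as np

def spike(t, t0, amplitude, rise, fall):
    """One spike-like component; an asymmetric double exponential is used here
    purely as an illustrative functional form."""
    s = np.where(t < t0,
                 np.exp((t - t0) / rise),
                 np.exp(-(t - t0) / fall))
    return amplitude * s

def burst_model(t, params, background):
    """Superposition of spikes plus a constant background; the number of
    spikes (len(params)) would itself be part of the inference."""
    return background + sum(spike(t, *p) for p in params)

t = np.linspace(0.0, 1.0, 1000)
components = [(0.2, 5.0, 0.01, 0.05), (0.45, 3.0, 0.02, 0.08)]  # (t0, amp, rise, fall)
model_counts = burst_model(t, components, background=1.0)
```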
DISSECTING MAGNETAR VARIABILITY WITH BAYESIAN HIERARCHICAL MODELS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huppenkothen, Daniela; Elenbaas, Chris; Watts, Anna L.
Neutron stars are a prime laboratory for testing physical processes under conditions of strong gravity, high density, and extreme magnetic fields. Among the zoo of neutron star phenomena, magnetars stand out for their bursting behavior, ranging from extremely bright, rare giant flares to numerous, less energetic recurrent bursts. The exact trigger and emission mechanisms for these bursts are not known; favored models involve either a crust fracture and subsequent energy release into the magnetosphere, or explosive reconnection of magnetic field lines. In the absence of a predictive model, understanding the physical processes responsible for magnetar burst variability is difficult. Here, we develop an empirical model that decomposes magnetar bursts into a superposition of small spike-like features with a simple functional form, where the number of model components is itself part of the inference problem. The cascades of spikes that we model might be formed by avalanches of reconnection, or crust rupture aftershocks. Using Markov Chain Monte Carlo sampling augmented with reversible jumps between models with different numbers of parameters, we characterize the posterior distributions of the model parameters and the number of components per burst. We relate these model parameters to physical quantities in the system, and show for the first time that the variability within a burst does not conform to predictions from ideas of self-organized criticality. We also examine how well the properties of the spikes fit the predictions of simplified cascade models for the different trigger mechanisms.
Lithium Niobate Whispering Gallery Resonators: Applications and Fundamental Studies
NASA Astrophysics Data System (ADS)
Maleki, L.; Matsko, A. B.
Optical whispering gallery modes (WGMs) are closed circulating electromagnetic waves undergoing total internal reflection inside an axio-symmetric body of a transparent dielectric that forms a resonator. Radiative losses are negligible in these modes if the radius of the resonator exceeds several tens of wavelengths, and surface scattering losses can be made small with surface conditioning techniques. Thus, the quality factor (Q) in crystalline WGM resonators is limited by material losses that are, nevertheless, extremely small in optical materials. WGM resonators made of LiNbO3 have been successfully used in optics and microwave photonics. The resonators are characterized by narrow bandwidth, in the hundred kilohertz to gigahertz range. A proper choice of highly transparent and/or nonlinear resonator material, like lithium niobate, allows for realization of a number of high performance devices: tunable and multi-pole filters, resonant electro-optic modulators, photonic microwave receivers, opto-electronic microwave oscillators, and parametric frequency converters, among others.
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-01-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to predict binding specificity. Using simplified datasets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified datasets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems. PMID:29652405
General visual robot controller networks via artificial evolution
NASA Astrophysics Data System (ADS)
Cliff, David; Harvey, Inman; Husbands, Philip
1993-08-01
We discuss recent results from our ongoing research concerning the application of artificial evolution techniques (i.e., an extended form of genetic algorithm) to the problem of developing `neural' network controllers for visually guided robots. The robot is a small autonomous vehicle with extremely low-resolution vision, employing visual sensors which could readily be constructed from discrete analog components. In addition to visual sensing, the robot is equipped with a small number of mechanical tactile sensors. Activity from the sensors is fed to a recurrent dynamical artificial `neural' network, which acts as the robot controller, providing signals to motors governing the robot's motion. Prior to presentation of new results, this paper summarizes our rationale and past work, which has demonstrated that visually guided control networks can arise without any explicit specification that visual processing should be employed: the evolutionary process opportunistically makes use of visual information if it is available.
New continuous recording procedure of holographic information on transient phenomena
NASA Astrophysics Data System (ADS)
Nagayama, Kunihito; Nishihara, H. Keith; Murakami, Terutoshi
1992-09-01
A new method for continuous recording of holographic information, 'streak holography,' is proposed. This kind of record can be useful for velocity and acceleration measurement as well as for observing a moving object whose trajectory cannot be predicted in advance. A very high speed camera system has been designed and constructed for streak holography. A ring-shaped 100-mm-diam film has been cut out from high-resolution sheet film and mounted on a thin duralumin disk, which is driven to rotate directly by an air-turbine spindle. The attainable streak velocity is 0.3 mm/µs. The direct film drive mechanism makes it possible to use a relay lens system of extremely small f-number. The feasibility of the camera system has been demonstrated by observing several transient events, such as the forced oscillation of a wire and the free fall of small glass particles, using an argon-ion laser as a light source.
Missner, Andreas; Pohl, Peter
2010-01-01
The transport of gaseous compounds across biological membranes is essential in all forms of life. Although it was generally accepted that gases freely penetrate the lipid matrix of biological membranes, a number of studies challenged this doctrine as they found biological membranes to have extremely low gas-permeability values. These observations led to the identification of several membrane-embedded "gas" channels, which facilitate the transport of biologically active gases, such as carbon dioxide, nitric oxide, and ammonia. However, some of these findings are in contrast to the well-established solubility–diffusion model (also known as the Meyer–Overton rule), which predicts membrane permeabilities from the molecule's oil–water partition coefficient. Herein, we discuss recently reported violations of the Meyer–Overton rule for small molecules, including carboxylic acids and gases, and show that Meyer and Overton continue to rule. PMID:19514034
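For reference, the solubility–diffusion picture invoked above is commonly written in the following textbook form (a standard formulation, not an equation quoted from this paper):

```latex
P_m \;=\; \frac{K \, D_m}{d},
```

where K is the membrane–water (oil–water) partition coefficient of the solute, D_m its diffusion coefficient within the membrane, and d the membrane thickness. Because K largely tracks hydrophobicity, permeability correlates with the oil–water partition coefficient, which is the content of the Meyer–Overton rule.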
NASA Astrophysics Data System (ADS)
1987-07-01
The U.S. Navy is conducting a long-term program to monitor for possible effects of the operation of its Extremely Low Frequency (ELF) Communications System on resident biota and their ecological relationships. This report documents the progress of the following studies: soil amoeba; soil and litter arthropoda and earthworm studies; biological studies on pollinating insects: megachilid bees; and small vertebrates: small mammals and nesting birds.
Compound summer temperature and precipitation extremes over central Europe
NASA Astrophysics Data System (ADS)
Sedlmeier, Katrin; Feldmann, H.; Schädler, G.
2018-02-01
Reliable knowledge of the near-future climate change signal of extremes is important for adaptation and mitigation strategies. Especially compound extremes, like heat and drought occurring simultaneously, may have a greater impact on society than their univariate counterparts and have recently become an active field of study. In this paper, we use a 12-member ensemble of high-resolution (7 km) regional climate simulations with the regional climate model COSMO-CLM over central Europe to analyze the climate change signal and its uncertainty for compound heat and drought extremes in summer by two different measures: one describing absolute (i.e., number of exceedances of absolute thresholds like hot days), the other relative (i.e., number of exceedances of time series intrinsic thresholds) compound extreme events. Changes are assessed between a reference period (1971-2000) and a projection period (2021-2050). Our findings show an increase in the number of absolute compound events for the whole investigation area. The change signal of relative extremes is more region-dependent, but there is a strong signal change in the southern and eastern parts of Germany and the neighboring countries. Especially the Czech Republic shows strong change in absolute and relative extreme events.
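A minimal Python sketch of the two types of compound-event counts distinguished above, one based on fixed absolute thresholds and one based on thresholds intrinsic to the series itself; the thresholds, quantile level, and synthetic daily series are illustrative assumptions, not values from the study.

```python
import numpy as np

def compound_event_counts(tmax, precip, hot_abs=30.0, dry_abs=1.0, q=0.9):
    """Count summer days that are jointly hot and dry.
    'Absolute' uses fixed thresholds (hot day > 30 C, daily precip < 1 mm here);
    'relative' uses quantiles of the series itself."""
    absolute = np.sum((tmax > hot_abs) & (precip < dry_abs))
    hot_rel = tmax > np.quantile(tmax, q)
    dry_rel = precip < np.quantile(precip, 1.0 - q)
    relative = np.sum(hot_rel & dry_rel)
    return int(absolute), int(relative)

rng = np.random.default_rng(1)
tmax = rng.normal(24.0, 4.0, size=92)       # synthetic JJA daily Tmax (deg C)
precip = rng.gamma(0.5, 4.0, size=92)       # synthetic JJA daily precipitation (mm)
print(compound_event_counts(tmax, precip))
```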
Shifting patterns of mild weather in response to projected radiative forcing
NASA Astrophysics Data System (ADS)
van der Wiel, Karin; Kapnick, Sarah; Vecchi, Gabriel
2017-04-01
Traditionally, climate change research has focused on changes in mean climate (e.g. global mean temperature, sea level rise, glacier melt) or changes in extreme events (e.g. hurricanes, extreme precipitation, droughts, heat waves, wild fires). Though extreme events have the potential to disrupt society, extreme conditions are rare by definition. In contrast, mild weather occurs frequently and many human activities are built around it. Examples of such activities include football games, dog walks, bike rides, and outdoor weddings, but also activities of direct economic impact, e.g. construction work, infrastructure projects, road or rail transportation, air travel, and landscaping projects. The absence of mild weather impacts society in various ways; understanding current and future mild weather is therefore of high scientific interest. We present a global analysis of mild weather based on simple and relatable criteria and we explore changes in mild weather occurrence in response to radiative forcing. A high-resolution global climate model, GFDL HiFLOR, is used to allow for investigation of local features and changes. In response to RCP4.5, we find a slight global mean decrease in the annual number of mild days projected both in the near future (-4 d/yr, 2016-2035) and at the end of this century (-10 d/yr, 2081-2100). Projected regional and seasonal redistributions of mild days are substantially greater. Tropical regions are projected to see large decreases, while in the mid-latitudes small increases in the number of mild days are projected. Mediterranean climates are projected to see a shift of mild weather away from the local summer to the shoulder seasons. These changes are larger than the interannual variability of mild weather caused by the El Niño-Southern Oscillation. Finally, we use reanalysis data to show an observed global decrease in the recent past, and we verify that these observed regional changes in mild weather resemble the projections.
A space-based public service platform for terrestrial rescue operations
NASA Technical Reports Server (NTRS)
Fleisig, R.; Bernstein, J.; Cramblit, D. C.
1977-01-01
The space-based Public Service Platform (PSP) is a multibeam, high-gain communications relay satellite that can provide a variety of functions for a large number of people on earth equipped with extremely small, very low cost transceivers. This paper describes the PSP concept, the rationale used to derive the concept, the criteria for selecting specific communication functions to be performed, and the advantages of performing such functions via satellite. The discussion focuses on the benefits of using a PSP for natural disaster warning; control of attendant rescue/assistance operations; and rescue of people in downed aircraft, aboard sinking ships, lost or injured on land.
Priority Queuing on the Docket: Universality of Judicial Dispute Resolution Timing
NASA Astrophysics Data System (ADS)
Mukherjee, Satyam; Whalen, Ryan
2018-01-01
This paper analyzes court priority queuing behavior by examining the time lapse between when a case enters a court's docket and when it is ultimately disposed of. Using data from the Supreme Courts of the United States, Massachusetts, and Canada, we show that each court's docket features a slowly decaying distribution of disposition times with a long tail. This demonstrates that, in each of the courts examined, the vast majority of cases are resolved relatively quickly, while there remains a small number of outlier cases that take an extremely long time to resolve. We discuss the implications of this for legal systems, the study of the law, and future research.
Equilibrium structure and atomic vibrations of Ni_n clusters
NASA Astrophysics Data System (ADS)
Borisova, Svetlana D.; Rusina, Galina G.
2017-12-01
The equilibrium bond lengths and binding energy, second differences in energy, and vibrational frequencies of free Ni_n clusters (2 ≤ n ≤ 20) were calculated using the interaction potential obtained in the tight-binding approximation (TBA). The results show that the minimum vibration frequency plays a significant role in the evaluation of the dynamic stability of the clusters. A nonmonotonic dependence of the minimum vibration frequency of the clusters on their size is demonstrated, with extreme values at cluster sizes n = 4, 6, 13, and 19. This result agrees with theoretical and experimental data on stable structures of small metallic clusters.
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2008-05-01
Many random populations can be modeled as a countable set of points scattered randomly on the positive half-line. The points may represent magnitudes of earthquakes and tornados, masses of stars, market values of public companies, etc. In this article we explore a specific class of such random populations we coin 'Paretian Poisson processes'. This class is elemental in statistical physics—connecting together, in a deep and fundamental way, diverse issues including: the Poisson distribution of the Law of Small Numbers; Paretian tail statistics; the Fréchet distribution of Extreme Value Theory; the one-sided Lévy distribution of the Central Limit Theorem; scale-invariance, renormalization and fractality; resilience to random perturbations.
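One common way to formalize such a population, stated here as a sketch consistent with the connections listed above rather than as the article's own definitions, is a Poisson point process on the positive half-line with a power-law intensity:

```latex
\lambda(x) = c\,x^{-(1+\alpha)}, \qquad
\Lambda(l) = \int_{l}^{\infty}\lambda(x)\,dx = \frac{c}{\alpha}\,l^{-\alpha} \quad (c,\alpha,l>0),
```

so that the number of points exceeding a level l is Poisson-distributed with mean Λ(l) (the Law of Small Numbers), exceedance levels have Paretian tails, and the largest point M obeys P(M ≤ l) = exp(−(c/α) l^(−α)), which is a Fréchet law of Extreme Value Theory.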
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single-layer ZrO₂ device with a double-layer ZnO/ZrO₂ one, and obtains results that are in good agreement with experimental data.
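A minimal Python sketch of the Metropolis acceptance rule underlying this kind of evaluation; the toy energy function, move set, and temperature factor are placeholders for illustration, not the authors' Gibbs-free-energy model of the device.

```python
import math, random

def metropolis_step(state, energy, propose, kT=0.025):
    """One Metropolis Monte Carlo step: accept a proposed configuration with
    probability min(1, exp(-dE/kT)), where dE is the free-energy change."""
    candidate = propose(state)
    d_e = energy(candidate) - energy(state)
    if d_e <= 0 or random.random() < math.exp(-d_e / kT):
        return candidate
    return state

# Toy example: a scalar 'configuration' relaxing toward the minimum of a well.
energy = lambda x: (x - 1.0) ** 2
propose = lambda x: x + random.uniform(-0.1, 0.1)
state = 0.0
for _ in range(1000):
    state = metropolis_step(state, energy, propose)
```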
Polygenic determinants in extremes of high-density lipoprotein cholesterol
Dron, Jacqueline S.; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A.; Robinson, John F.; McIntyre, Adam D.; Ban, Matthew R.; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J.; Lettre, Guillaume; Tardif, Jean-Claude
2017-01-01
HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. PMID:28870971
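A minimal Python sketch of how a polygenic trait score is typically accumulated from common variants and then flagged as "extreme"; the genotypes, per-SNP weights, and decile cutoffs are synthetic assumptions, not the panel or thresholds used in this study.

```python
import numpy as np

def polygenic_score(genotypes, weights):
    """Weighted sum of risk-allele counts (0, 1, or 2 per SNP) across the
    common variants included in the score."""
    return genotypes @ weights

rng = np.random.default_rng(2)
n_people, n_snps = 1000, 50
genotypes = rng.integers(0, 3, size=(n_people, n_snps))   # synthetic allele counts
weights = rng.normal(0.0, 0.1, size=n_snps)               # synthetic per-SNP effects
scores = polygenic_score(genotypes, weights)

# Flag individuals whose score falls in the top or bottom decile as 'extreme'.
lo, hi = np.quantile(scores, [0.1, 0.9])
extreme_polygenic = (scores < lo) | (scores > hi)
print(extreme_polygenic.sum(), "of", n_people, "flagged")
```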
Polygenic determinants in extremes of high-density lipoprotein cholesterol.
Dron, Jacqueline S; Wang, Jian; Low-Kam, Cécile; Khetarpal, Sumeet A; Robinson, John F; McIntyre, Adam D; Ban, Matthew R; Cao, Henian; Rhainds, David; Dubé, Marie-Pierre; Rader, Daniel J; Lettre, Guillaume; Tardif, Jean-Claude; Hegele, Robert A
2017-11-01
HDL cholesterol (HDL-C) remains a superior biochemical predictor of CVD risk, but its genetic basis is incompletely defined. In patients with extreme HDL-C concentrations, we concurrently evaluated the contributions of multiple large- and small-effect genetic variants. In a discovery cohort of 255 unrelated lipid clinic patients with extreme HDL-C levels, we used a targeted next-generation sequencing panel to evaluate rare variants in known HDL metabolism genes, simultaneously with common variants bundled into a polygenic trait score. Two additional cohorts were used for validation and included 1,746 individuals from the Montréal Heart Institute Biobank and 1,048 individuals from the University of Pennsylvania. Findings were consistent between cohorts: we found rare heterozygous large-effect variants in 18.7% and 10.9% of low- and high-HDL-C patients, respectively. We also found common variant accumulation, indicated by extreme polygenic trait scores, in an additional 12.8% and 19.3% of overall cases of low- and high-HDL-C extremes, respectively. Thus, the genetic basis of extreme HDL-C concentrations encountered clinically is frequently polygenic, with contributions from both rare large-effect and common small-effect variants. Multiple types of genetic variants should be considered as contributing factors in patients with extreme dyslipidemia. Copyright © 2017 by the American Society for Biochemistry and Molecular Biology, Inc.
Climate Extreme Events over Northern Eurasia in Changing Climate
NASA Astrophysics Data System (ADS)
Bulygina, O.; Korshunova, N. N.; Razuvaev, V. N.; Groisman, P. Y.
2014-12-01
During the period of widespread instrumental observations in Northern Eurasia, the annual surface air temperature has increased by 1.5°C. Close to the north, in the Arctic Ocean, the late summer sea ice extent has decreased by 40%, providing a near-infinite source of water vapor for the dry Arctic atmosphere in the early cold season months. The contemporary sea ice changes are especially visible in the Eastern Hemisphere. All these factors affect changes in extreme events. Daily and sub-daily data from 940 stations were used to analyze variations in the space-time distribution of extreme temperatures, precipitation, and wind over Russia. Changes in the number of days with thaw over Russia are described. The total seasonal numbers of days when daily surface air temperatures (wind, precipitation) were above (below) selected thresholds were used as indices of climate extremes. Changes in the difference between maximum and minimum temperature (the diurnal temperature range, DTR) may produce a variety of effects on biological systems. All values falling within the intervals ranging from the lowest percentile to the 5th percentile and from the 95th percentile to the highest percentile for the time period of interest were considered daily extremes. The number of days, N, when daily temperatures (wind, precipitation, DTR) were within the above-mentioned intervals was determined for the seasons of each year. Linear trends in the number of days were calculated for each station and for quasi-homogeneous climatic regions. Regional analysis of extreme events was carried out using quasi-homogeneous climatic regions. Maps (climatology, trends) are presented mostly for visualization purposes. Differences in the regional characteristics of extreme events reflect the large extent of the Russian territory and the variety of its physical and geographical conditions. The number of days with maximum temperatures higher than the 95th percentile has increased in most of Russia but decreased in Siberia in spring and autumn. A reduction in the number of days with extremely low air temperatures dominated in all seasons. At the same time, the number of days with abnormally low air temperatures has increased in the Middle Volga region and the south of Western Siberia. In most parts of European Russia, an increase in the number of days with heavy snowfalls was observed.
Knapp, Alan K; Hoover, David L; Wilcox, Kevin R; Avolio, Meghan L; Koerner, Sally E; La Pierre, Kimberly J; Loik, Michael E; Luo, Yiqi; Sala, Osvaldo E; Smith, Melinda D
2015-02-03
Climate change is intensifying the hydrologic cycle and is expected to increase the frequency of extreme wet and dry years. Beyond precipitation amount, extreme wet and dry years may differ in other ways, such as the number of precipitation events, event size, and the time between events. We assessed 1614 long-term (100 year) precipitation records from around the world to identify key attributes of precipitation regimes, besides amount, that distinguish statistically extreme wet from extreme dry years. In general, in regions where mean annual precipitation (MAP) exceeded 1000 mm, precipitation amounts in extreme wet and dry years differed from average years by ~40% and 30%, respectively. The magnitude of these deviations increased to >60% for dry years and to >150% for wet years in arid regions (MAP<500 mm). Extreme wet years were primarily distinguished from average and extreme dry years by the presence of multiple extreme (large) daily precipitation events (events >99th percentile of all events); these occurred twice as often in extreme wet years compared to average years. In contrast, these large precipitation events were rare in extreme dry years. Less important for distinguishing extreme wet from dry years were mean event size and frequency, or the number of dry days between events. However, extreme dry years were distinguished from average years by an increase in the number of dry days between events. These precipitation regime attributes consistently differed between extreme wet and dry years across 12 major terrestrial ecoregions from around the world, from deserts to the tropics. Thus, we recommend that climate change experiments and model simulations incorporate these differences in key precipitation regime attributes, as well as amount into treatments. This will allow experiments to more realistically simulate extreme precipitation years and more accurately assess the ecological consequences. © 2015 John Wiley & Sons Ltd.
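A minimal Python sketch of the two steps described above: labelling statistically extreme wet and dry years from annual totals and counting, per year, the number of extreme daily events above the 99th percentile of all wet-day amounts. The quantile cutoffs and the synthetic 100-year daily record are illustrative assumptions, not the 1614 station records analyzed here.

```python
import numpy as np

def classify_years(daily_precip_by_year, wet_q=0.95, dry_q=0.05, event_q=0.99):
    """Label extreme wet/dry years by annual totals and count, per year, the
    number of extreme daily events (> 99th percentile of all wet-day amounts)."""
    totals = {y: p.sum() for y, p in daily_precip_by_year.items()}
    t = np.array(list(totals.values()))
    wet_thr, dry_thr = np.quantile(t, wet_q), np.quantile(t, dry_q)
    all_wet_days = np.concatenate([p[p > 0] for p in daily_precip_by_year.values()])
    event_thr = np.quantile(all_wet_days, event_q)
    out = {}
    for y, p in daily_precip_by_year.items():
        label = "wet" if totals[y] >= wet_thr else "dry" if totals[y] <= dry_thr else "average"
        out[y] = (label, int(np.sum(p > event_thr)))
    return out

# Synthetic 100-year daily record (mm), roughly 30% wet days:
rng = np.random.default_rng(3)
record = {1900 + i: rng.gamma(0.6, 8.0, size=365) * (rng.random(365) < 0.3)
          for i in range(100)}
summary = classify_years(record)
```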
Large number limit of multifield inflation
NASA Astrophysics Data System (ADS)
Guo, Zhong-Kai
2017-12-01
We compute the tensor and scalar spectral indices n_t and n_s, the tensor-to-scalar ratio r, and the consistency relation n_t/r in general monomial multifield slow-roll inflation models with potentials V ~ Σ_i λ_i |ϕ_i|^{p_i}. The general models give a novel relation: n_t, n_s and n_t/r are all proportional to the logarithm of the number of fields N_f when N_f becomes extremely large, of order O(10^40). An upper bound N_f ≲ N_* e^{Z N_*} is obtained by requiring the slow-variation parameter to be small enough, where N_* is the e-folding number and Z is a function of the distributions of λ_i and p_i. Besides, n_t/r differs from the single-field result -1/8 with substantial probability, except for a few very special cases. Finally, we derive theoretical bounds r > 2/N_* (r ≳ 0.03) and corresponding bounds for n_t, which can be tested by observation in the near future.
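As a quick check of the quoted bound, taking an illustrative e-folding number N_* ≈ 60 (a typical value, not one fixed by the abstract):

```latex
r \;>\; \frac{2}{N_*} \;\approx\; \frac{2}{60} \;\approx\; 0.033,
```

consistent with the stated r ≳ 0.03.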
Linear reduction methods for tag SNP selection.
He, Jingwu; Zelikovsky, Alex
2004-01-01
It is widely hoped that constructing a complete human haplotype map will help to associate complex diseases with certain SNP's. Unfortunately, the number of SNP's is huge and it is very costly to sequence many individuals. Therefore, it is desirable to reduce the number of SNP's that should be sequenced to a considerably smaller number of informative representatives, so-called tag SNP's. In this paper, we propose a new linear algebra based method for selecting and using tag SNP's. Our method is purely combinatorial and can be combined with linkage disequilibrium (LD) and block based methods. We measure the quality of our tag SNP selection algorithm by comparing actual SNP's with SNP's linearly predicted from linearly chosen tag SNP's. We obtain extremely good compression and prediction rates. For example, for long haplotypes (>25000 SNP's), knowing only 0.4% of all SNP's we predict the entire unknown haplotype with 2% accuracy while the prediction method is based on a 10% sample of the population.
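A minimal Python sketch of the general idea of predicting non-tag SNPs as linear combinations of tag SNPs; the least-squares fit and random tag choice shown here only illustrate the linear-algebra flavor of the approach and are not the authors' selection algorithm or data.

```python
import numpy as np

def fit_linear_predictor(tag_haplotypes, full_haplotypes):
    """Least-squares coefficients mapping tag-SNP columns to all SNP columns,
    learned from a sample of fully known haplotypes."""
    coef, *_ = np.linalg.lstsq(tag_haplotypes, full_haplotypes, rcond=None)
    return coef

def predict_full(tag_haplotypes, coef):
    """Predict every SNP (0/1 alleles) of new haplotypes from their tag SNPs."""
    return (tag_haplotypes @ coef > 0.5).astype(int)

rng = np.random.default_rng(4)
sample = rng.integers(0, 2, size=(100, 200))       # synthetic known haplotypes
tag_idx = rng.choice(200, size=20, replace=False)  # stand-in for chosen tag SNPs
coef = fit_linear_predictor(sample[:, tag_idx], sample)
reconstructed = predict_full(sample[:, tag_idx], coef)
accuracy = (reconstructed == sample).mean()
```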
Acosta, Aline Angelina; González-Solís, David; da Silva, Reinaldo José
2017-07-01
Nematodes belonging to Spinitectus Fourment, 1883 (Nematoda: Cystidicolidae) were found in the intestine of Pimelodella avanhandavae Eigenmann (Siluriformes: Heptapteridae) from the Aguapeí River, Brazil. They represent a new species, Spinitectus aguapeiensis n. sp., which differs morphologically from its congeners in the body length, the number of spinose rings, the location of the excretory pore, the number of precloacal papillae and the length of the spicules. The new species is the first South American species within the genus with a remarkably spirally coiled posterior extremity in males and the largest spicules. It is also the second species with the highest number of precloacal papillae and has a uniquely shaped small spicule. Spinitectus aguapeiensis n. sp. is the first helminth species found in P. avanhandavae, the fourth species of this genus recorded in the River Paraná Basin and the sixth species of Spinitectus in South America.
[Extreme reactive thrombocytosis in a healthy 6 year-old child].
de Lama Caro-Patón, G; García-Salido, A; Iglesias-Bouzas, M I; Guillén, M; Cañedo-Villaroya, E; Martínez-Romera, I; Serrano-González, A; Casado-Flores, J
2014-11-01
Thrombocytosis is usually an incidental finding in children. Reactive or secondary thrombocytosis is the more common form, with infectious diseases being its most prevalent cause. Based on the number of platelets, there are four degrees of thrombocytosis; in the extreme degree, the platelet count exceeds 1,000,000/mm³. We describe a case of extreme reactive thrombocytosis in a healthy 6-year-old child. He required critical care admission for diagnosis and treatment (maximum platelet count 7,283,000/mm³). We review the different causes of thrombocytosis in childhood, the differential diagnosis, and the available treatments in cases of extreme thrombocytosis. Copyright © 2013 Asociación Española de Pediatría. Published by Elsevier Espana. All rights reserved.
Kaneff, A
1986-01-01
The following anatomical structures were studied with regard to their myology during evolution: M. extensor hallucis longus (MEHL), M. extensor digitorum longus (MEDL) with M. peroneus tertius (MP III), M. peroneus brevis (MPB) with M. peroneus digiti V (MPD V), M. extensor hallucis brevis (MEHB), M. extensor digitorum brevis (MEDB), and the Retinaculum musculorum extensorum imum (RMEI). The study was carried out on 3 different groups of prepared material. The 1st group consists of lower extremities of humans. The number of extremities differs for the particular structures between 151 and 358 (see page 381). The 2nd group of material consists of 122 Membra pelvina from Marsupialia, Insectivora, and Primates. Table 1 shows both the mammalian species and the number of extremities studied. The extremities of the 1st and 2nd groups were preserved in a manner suitable for macroscopic preparation. The 3rd group of material consists of 71 lower extremities from embryos and fetuses. The lower legs and feet were stained either according to the method described by Morel and Bassal with eosin added or according to Weigert. From this material, complete series of cross sections were prepared. Table 2 shows the age of the embryos (VCL [mm]) as well as the number of extremities studied. It is important that up to the age of 46 mm VCL the difference in the age of the embryos usually amounts to only 0.5 to 1.0 mm. This small difference in the age of the embryos and fetuses allows a very good follow-up of the changes in construction during organogenesis. The comparison of the 3 different groups shows the following changes for the above-mentioned muscles: The M. extensor hallucis longus (MEHL) is a muscle which is not split. The same applies to its tendon, which inserts at the distal phalanx of the hallux. This primitive form of the muscle actually occurs in 51.12% of human beings. In 48.88% of the cases, additional tendons and muscles are formed by the MEHL. Most of these supplements are positioned on the medial side of the main tendon; only a few lie on the lateral side. Among the supplementary tendons, the medial one as well as the lateral one occasionally possesses a muscle belly. The muscle of the medial tendon is split off from the proximal margin of the MEHL. The muscle of the lateral tendon is split off from the distal margin of the MEHL. (ABSTRACT TRUNCATED AT 400 WORDS)
Bailey-Wilson, Joan E.; Brennan, Jennifer S.; Bull, Shelley B; Culverhouse, Robert; Kim, Yoonhee; Jiang, Yuan; Jung, Jeesun; Li, Qing; Lamina, Claudia; Liu, Ying; Mägi, Reedik; Niu, Yue S.; Simpson, Claire L.; Wang, Libo; Yilmaz, Yildiz E.; Zhang, Heping; Zhang, Zhaogong
2012-01-01
Group 14 of Genetic Analysis Workshop 17 examined several issues related to analysis of complex traits using DNA sequence data. These issues included novel methods for analyzing rare genetic variants in an aggregated manner (often termed collapsing rare variants), evaluation of various study designs to increase power to detect effects of rare variants, and the use of machine learning approaches to model highly complex heterogeneous traits. Various published and novel methods for analyzing traits with extreme locus and allelic heterogeneity were applied to the simulated quantitative and disease phenotypes. Overall, we conclude that power is (as expected) dependent on locus-specific heritability or contribution to disease risk, large samples will be required to detect rare causal variants with small effect sizes, extreme phenotype sampling designs may increase power for smaller laboratory costs, methods that allow joint analysis of multiple variants per gene or pathway are more powerful in general than analyses of individual rare variants, population-specific analyses can be optimal when different subpopulations harbor private causal mutations, and machine learning methods may be useful for selecting subsets of predictors for follow-up in the presence of extreme locus heterogeneity and large numbers of potential predictors. PMID:22128066
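A minimal Python sketch of the "collapsing" idea mentioned above: rare variants within a gene are aggregated into a single per-individual burden score before association testing. The minor-allele-frequency cutoff and the genotype data are placeholders, not the workshop's simulated dataset.

```python
import numpy as np

def rare_variant_burden(genotypes, maf, maf_cutoff=0.01):
    """Collapse rare variants (minor allele frequency below the cutoff) in one
    gene into a per-individual count of rare minor alleles."""
    rare = maf < maf_cutoff
    return genotypes[:, rare].sum(axis=1)

rng = np.random.default_rng(5)
n, m = 2000, 30                                   # individuals, variants in a gene
maf = rng.uniform(0.0005, 0.05, size=m)           # synthetic allele frequencies
genotypes = rng.binomial(2, maf, size=(n, m))     # synthetic genotype matrix
burden = rare_variant_burden(genotypes, maf)
# 'burden' would then be tested against the phenotype, e.g. by regression.
```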
NASA Astrophysics Data System (ADS)
Tian, D.; Cammarano, D.
2017-12-01
Modeling changes of crop production at regional scale is important to make adaptation measures for sustainably food supply under global change. In this study, we explore how changing climate extremes in the 20th and 21st century affect maize (summer crop) and wheat (winter crop) yields in an agriculturally important region: the southeast United States. We analyze historical (1950-1999) and projected (2006-2055) precipitation and temperature extremes by calculating the changes of 18 climate extreme indices using the statistically downscaled CMIP5 data from 10 general circulation models (GCMs). To evaluate how these climate extremes affect maize and wheat yields, historical baseline and projected maize and wheat yields under RCP4.5 and RCP8.5 scenarios are simulated using the DSSAT-CERES maize and wheat models driven by the same downscaled GCMs data. All of the changes are examined at 110 locations over the study region. The results show that most of the precipitation extreme indices do not have notable change; mean precipitation, precipitation intensity, and maximum 1-day precipitation are generally increased; the number of rainy days is decreased. The temperature extreme indices mostly showed increased values on mean temperature, number of high temperature days, diurnal temperature range, consecutive high temperature days, maximum daily maximum temperature, and minimum daily minimum temperature; the number of low temperature days and number of consecutive low temperature days are decreased. The conditional probabilistic relationships between changes in crop yields and changes in extreme indices suggested different responses of crop yields to climate extremes during sowing to anthesis and anthesis to maturity periods. Wheat yields and crop water productivity for wheat are increased due to an increased CO2 concentration and minimum temperature; evapotranspiration, maize yields, and crop water productivity for wheat are decreased owing to the increased temperature extremes. We found the effects of precipitation changes on both yields are relatively uncertain.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1987-07-01
The U.S. Navy is conducting a long-term program to monitor for possible effects of the operation of its Extremely Low Frequency (ELF) Communications System on resident biota and their ecological relationships. This report documents the progress of the following studies: Soil Amoeba; Soil and Litter Arthropoda and Earthworm Studies; Biological Studies on Pollinating Insects: Megachilid Bees; and Small Vertebrates: Small Mammals and Nesting Birds.
Research groups: How big should they be?
Cook, Isabelle; Grange, Sam; Eyre-Walker, Adam
2015-01-01
Understanding the relationship between scientific productivity and research group size is important for deciding how science should be funded. We have investigated the relationship between these variables in the life sciences in the United Kingdom using data from 398 principal investigators (PIs). We show that three measures of productivity, the number of publications, the impact factor of the journals in which papers are published and the number of citations, are all positively correlated to group size, although they all show a pattern of diminishing returns: doubling group size leads to less than a doubling in productivity. The relationships for the impact factor and the number of citations are extremely weak. Our analyses suggest that an increase in productivity will be achieved by funding more PIs with small research groups, unless the cost of employing post-docs and PhD students is less than 20% of the cost of a PI. We also provide evidence that post-docs are more productive than PhD students, both in terms of the number of papers they produce and where those papers are published.
Usherwood, James Richard
2005-01-01
Bipedal walking following inverted pendulum mechanics is constrained by two requirements: sufficient kinetic energy for the vault over midstance and sufficient gravity to provide the centripetal acceleration required for the arc of the body about the stance foot. While the acceleration condition identifies a maximum walking speed at a Froude number of 1, empirical observation indicates favoured walk–run transition speeds at a Froude number around 0.5 for birds, humans and humans under manipulated gravity conditions. In this study, I demonstrate that the risk of ‘take-off’ is greatest at the extremes of stance. This is because before and after kinetic energy is converted to potential, velocities (and so required centripetal accelerations) are highest, while concurrently the component of gravity acting in line with the leg is least. Limitations to the range of walking velocity and stride angle are explored. At walking speeds approaching a Froude number of 1, take-off is only avoidable with very small steps. With realistic limitations on swing-leg frequency, a novel explanation for the walk–run transition at a Froude number of 0.5 is shown. PMID:17148201
Importance of benthic prey for fishes in coral reef-associated sediments
DeFelice, R.C.; Parrish, J.D.
2003-01-01
The importance of open, sandy substrate adjacent to coral reefs as habitat and a food source for fishes has been little studied in most shallow tropical waters in the Pacific, including Hawai'i. In this study, in Hanalei Bay, Hawai'i, we identified and quantified the major invertebrate fauna (larger than 0.5 mm) in the well-characterized sands adjoining the shallow fringing reefs. Concurrently, we identified the fish species that seemed to make substantial use of these sand habitats, estimated their density there, sampled their gut contents to examine trophic links with the sand habitat, and made other observations and collections to determine the times, locations, and types of activity there. A variety of (mostly small) polychaetes were dominant in the sediments at most sampling stations, along with many small crustaceans (e.g., amphipods, isopods, ostracods, and small shrimps) and fair numbers of mollusks (especially bivalves) and small echinoids. Fish guts examined contained approximately 77% of the total number of benthic taxa collected, including nearly all those just listed. However, fish consumption was selective, and the larger shrimps, crabs, and small cryptic fishes were dominant in the diets of most of the numerous predator taxa. Diets of benthic-feeding fishes showed relatively low specific overlap. The fish fauna in this area included substrate-indifferent pelagics, species with various degrees of reef relatedness, reef-restricted species, and (at the other extreme) permanent cryptic sand dwellers. Data on occurrence and movements of fishes indicated that a band of sandy substrate several tens of meters wide next to the reef was an active area for fishes, and activity was considerably different at different times of day and for fish of different ages. These results imply an important trophic role for the benthos in these near-reef habitats in support of reef-associated fishes.
11. INTERIOR OF BEDROOM NUMBER ONE SHOWING OPEN DOOR FROM ...
11. INTERIOR OF BEDROOM NUMBER ONE SHOWING OPEN DOOR FROM LIVING ROOM AT EXTREME PHOTO LEFT, OPEN DOOR TO WALK-IN CLOSET AT PHOTO LEFT CENTER, OPEN DOOR TO BATHROOM AT PHOTO CENTER, AND OPEN DOOR TO BEDROOM NUMBER TWO AT EXTREME PHOTO RIGHT. VIEW TO WEST. - Rush Creek Hydroelectric System, Worker Cottage, Rush Creek, June Lake, Mono County, CA
A new approach for the description of discharge extremes in small catchments
NASA Astrophysics Data System (ADS)
Pavia Santolamazza, Daniela; Lebrenz, Henning; Bárdossy, András
2017-04-01
Small catchment basins in Northwestern Switzerland, characterized by short concentration times, are frequently affected by floods. The peak and the volume of these floods are commonly estimated by a frequency analysis of occurrence and described by a random variable, assuming a uniformly distributed probability and stationary input drivers (e.g. precipitation, temperature). For these small catchments, we attempt to describe and identify the underlying mechanisms and dynamics behind the occurrence of extremes by means of available high-temporal-resolution (10 min) observations and to explore the possibilities of regionalizing hydrological parameters for short intervals. We therefore investigate new concepts for describing floods, such as entropy as a measure of the disorder and dispersion of precipitation. First findings and conclusions of this ongoing research are presented.
33 CFR 80.105 - Calais, ME to Cape Small, ME.
Code of Federal Regulations, 2010 CFR
2010-07-01
Title 33, Navigation and Navigable Waters; International Navigation Rules; COLREGS Demarcation Lines, Atlantic Coast, § 80.105: from the International Bridge at Calais, ME to the southwesternmost extremity of Bald Head at Cape Small.
Rare Event Simulation in Radiation Transport
NASA Astrophysics Data System (ADS)
Kollman, Craig
This dissertation studies methods for estimating extremely small probabilities by Monte Carlo simulation. Problems in radiation transport typically involve estimating very rare events or the expected value of a random variable which is with overwhelming probability equal to zero. These problems often have high dimensional state spaces and irregular geometries so that analytic solutions are not possible. Monte Carlo simulation must be used to estimate the radiation dosage being transported to a particular location. If the area is well shielded the probability of any one particular particle getting through is very small. Because of the large number of particles involved, even a tiny fraction penetrating the shield may represent an unacceptable level of radiation. It therefore becomes critical to be able to accurately estimate this extremely small probability. Importance sampling is a well known technique for improving the efficiency of rare event calculations. Here, a new set of probabilities is used in the simulation runs. The results are multiplied by the likelihood ratio between the true and simulated probabilities so as to keep our estimator unbiased. The variance of the resulting estimator is very sensitive to which new set of transition probabilities are chosen. It is shown that a zero variance estimator does exist, but that its computation requires exact knowledge of the solution. A simple random walk with an associated killing model for the scatter of neutrons is introduced. Large deviation results for optimal importance sampling in random walks are extended to the case where killing is present. An adaptive "learning" algorithm for implementing importance sampling is given for more general Markov chain models of neutron scatter. For finite state spaces this algorithm is shown to give, with probability one, a sequence of estimates converging exponentially fast to the true solution. In the final chapter, an attempt to generalize this algorithm to a continuous state space is made. This involves partitioning the space into a finite number of cells. There is a tradeoff between additional computation per iteration and variance reduction per iteration that arises in determining the optimal grid size. All versions of this algorithm can be thought of as a compromise between deterministic and Monte Carlo methods, capturing advantages of both techniques.
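The importance-sampling idea described in this abstract can be illustrated with a toy calculation unrelated to neutron transport: estimating the very small probability that a standard normal variable exceeds a high threshold by sampling from a shifted proposal distribution and reweighting each draw by the likelihood ratio, which keeps the estimator unbiased. The threshold and sample sizes below are illustrative choices only.

    import numpy as np

    rng = np.random.default_rng(1)
    threshold = 5.0                      # P(Z > 5) is about 2.9e-7 for Z ~ N(0, 1)
    n = 100_000

    # Naive Monte Carlo: almost no samples land in the rare region.
    naive = (rng.standard_normal(n) > threshold).mean()

    # Importance sampling: draw from N(threshold, 1) and reweight each draw by the
    # likelihood ratio f(x)/g(x) so that the estimator remains unbiased.
    x = rng.normal(loc=threshold, scale=1.0, size=n)
    log_lr = -0.5 * x**2 + 0.5 * (x - threshold) ** 2    # log[f(x)/g(x)]
    est = np.mean((x > threshold) * np.exp(log_lr))

    print("naive:", naive, "importance sampling:", est)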
The Cohesive Population Genetics of Molecular Drive
Ohta, Tomoko; Dover, Gabriel A.
1984-01-01
The long-term population genetics of multigene families is influenced by several biased and unbiased mechanisms of nonreciprocal exchanges (gene conversion, unequal exchanges, transposition) between member genes, often distributed on several chromosomes. These mechanisms cause fluctuations in the copy number of variant genes in an individual and lead to a gradual replacement of an original family of n genes (A) in N number of individuals by a variant gene (a). The process for spreading a variant gene through a family and through a population is called molecular drive. Consideration of the known slow rates of nonreciprocal exchanges predicts that the population variance in the copy number of gene a per individual is small at any given generation during molecular drive. Genotypes at a given generation are expected only to range over a small section of all possible genotypes from one extreme (n number of A) to the other (n number of a). A theory is developed for estimating the size of the population variance by using the concept of identity coefficients. In particular, the variance in the course of spreading of a single mutant gene of a multigene family was investigated in detail, and the theory of identity coefficients at the state of steady decay of genetic variability proved to be useful. Monte Carlo simulations and numerical analysis based on realistic rates of exchange in families of known size reveal the correctness of the theoretical prediction and also assess the effect of bias in turnover. The population dynamics of molecular drive in gradually increasing the mean copy number of a variant gene without the generation of a large variance (population cohesion) is of significance regarding potential interactions between natural selection and molecular drive. PMID:6500260
Quick Tips Guide for Small Manufacturing Businesses
Small manufacturing businesses can use this Quick Tips Guide to be better prepared for future extreme weather events. This guide discusses keeping good records, improving housekeeping procedures, and training employees.
Brown, Jeremy M; Thomson, Robert C
2017-07-01
As the application of genomic data in phylogenetics has become routine, a number of cases have arisen where alternative data sets strongly support conflicting conclusions. This sensitivity to analytical decisions has prevented firm resolution of some of the most recalcitrant nodes in the tree of life. To better understand the causes and nature of this sensitivity, we analyzed several phylogenomic data sets using an alternative measure of topological support (the Bayes factor) that both demonstrates and averts several limitations of more frequently employed support measures (such as Markov chain Monte Carlo estimates of posterior probabilities). Bayes factors reveal important, previously hidden, differences across six "phylogenomic" data sets collected to resolve the phylogenetic placement of turtles within Amniota. These data sets vary substantially in their support for well-established amniote relationships, particularly in the proportion of genes that contain extreme amounts of information as well as the proportion that strongly reject these uncontroversial relationships. All six data sets contain little information to resolve the phylogenetic placement of turtles relative to other amniotes. Bayes factors also reveal that a very small number of extremely influential genes (less than 1% of genes in a data set) can fundamentally change significant phylogenetic conclusions. In one example, these genes are shown to contain previously unrecognized paralogs. This study demonstrates both that the resolution of difficult phylogenomic problems remains sensitive to seemingly minor analysis details and that Bayes factors are a valuable tool for identifying and solving these challenges. © The Author(s) 2016. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
Hyperconifold transitions, mirror symmetry, and string theory
NASA Astrophysics Data System (ADS)
Davies, Rhys
2011-09-01
Multiply-connected Calabi-Yau threefolds are of particular interest for both string theorists and mathematicians. Recently it was pointed out that one of the generic degenerations of these spaces (occurring at codimension one in moduli space) is an isolated singularity which is a finite cyclic quotient of the conifold; these were called hyperconifolds. It was also shown that if the order of the quotient group is even, such singular varieties have projective crepant resolutions, which are therefore smooth Calabi-Yau manifolds. The resulting topological transitions were called hyperconifold transitions, and change the fundamental group as well as the Hodge numbers. Here Batyrev's construction of Calabi-Yau hypersurfaces in toric fourfolds is used to demonstrate that certain compact examples containing the remaining hyperconifolds — the Z and Z cases — also have Calabi-Yau resolutions. The mirrors of the resulting transitions are studied and it is found, surprisingly, that they are ordinary conifold transitions. These are the first examples of conifold transitions with mirrors which are more exotic extremal transitions. The new hyperconifold transitions are also used to construct a small number of new Calabi-Yau manifolds, with small Hodge numbers and fundamental group Z or Z. Finally, it is demonstrated that a hyperconifold is a physically sensible background in Type IIB string theory. In analogy to the conifold case, non-perturbative dynamics smooth the physical moduli space, such that hyperconifold transitions correspond to non-singular processes in the full theory.
Trends and Cost-Analysis of Lower Extremity Nerve Injury Using the National Inpatient Sample.
Foster, Chase H; Karsy, Michael; Jensen, Michael R; Guan, Jian; Eli, Ilyas; Mahan, Mark A
2018-06-08
Peripheral nerve injuries (PNIs) of the lower extremities have been assessed in small cohort studies; however, the actual incidence, national trends, comorbidities, and cost of care in lower extremity PNI are not defined. Lack of sufficient data limits discussion on national policies, payors, and other aspects fundamental to the delivery of care in the US. To establish estimates of lower extremity PNIs incidence, associated diagnoses, and cost in the US using a comprehensive database with a minimum of a decade of data. The National Inpatient Sample was utilized to evaluate International Classification of Disease codes for specific lower extremity PNIs (9560-9568) between 2001 and 2013. Lower extremity PNIs occurred with a mean incidence of 13.3 cases per million population annually, which declined minimally from 2001 to 2013. The mean ± SEM age was 41.6 ± 0.1 yr; 61.1% of patients were males. Most were admitted via the emergency department (56.0%). PNIs occurred to the sciatic (16.6%), femoral (10.7%), tibial (6.0%), peroneal (33.4%), multiple nerves (1.3%), and other (32.0%). Associated diagnoses included lower extremity fracture (13.4%), complications of care (11.2%), open wounds (10.3%), crush injury (9.7%), and other (7.2%). Associated procedures included tibial fixation (23.3%), closure of skin (20.1%), debridement of open fractures (15.4%), fixation of other bones (13.5%), and wound debridement (14.5%). The mean annual unadjusted compounded growth rate of charges was 8.8%. The mean ± SEM annual charge over the time period was $64 031.20 ± $421.10, which was associated with the number of procedure codes (β = 0.2), length of stay (β = 0.6), and year (β = 0.1) in a multivariable analysis (P = .0001). These data describe associations in the treatment of lower extremity PNIs, which are important for considering national policies, costs, research and the delivery of care.
Vieira, Cristine; Costa, Nilson do Rosário
2008-01-01
This article analyzes the organizational model of the dental health industry. The main organizational leaders in this industry are the professional cooperatives and group dental insurance companies. The theoretical basis of the article is the organizational theory developed by DiMaggio and Powell. The dental health industry consists of a large number of small and very dynamic companies; however, a substantial share of clients and profits is concentrated in a few large companies. The results show that the industry has expanded its number of clients since the creation of the National Health Insurance Agency. The regulatory regime has forced institutional changes in the firms with regard to market entry, permanence, and exit patterns. There was no evidence that the regulatory rules have interfered with the development and financial conditions of the industry. The average profitability of the sector, especially among the group dental insurance companies, is extremely high.
A fast time-difference inverse solver for 3D EIT with application to lung imaging.
Javaherian, Ashkan; Soleimani, Manuchehr; Moeller, Knut
2016-08-01
A class of sparse optimization techniques that requires only matrix-vector products, rather than explicit access to the forward matrix and its transpose, has received much attention over the past decade for dealing with large-scale inverse problems. This study tailors the application of the so-called Gradient Projection for Sparse Reconstruction (GPSR) method to large-scale time-difference three-dimensional electrical impedance tomography (3D EIT). 3D EIT typically suffers from the need for a large number of voxels to cover the whole domain, so its application to real-time imaging, for example monitoring of lung function, remains scarce, since the large number of degrees of freedom of the problem greatly increases storage requirements and reconstruction time. This study shows the great potential of GPSR for large-scale time-difference 3D EIT. Further studies are needed to improve its accuracy for imaging small anomalies.
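As a rough illustration of the matrix-free, gradient-projection approach described above, the sketch below solves a generic sparse recovery problem, min over x of 0.5*||y - Ax||^2 + lam*||x||_1, by splitting x into nonnegative parts as in GPSR and using only calls to the forward operator and its adjoint. It is a simplified sketch with a fixed step size and a small random test problem, not the EIT reconstruction code from the paper.

    import numpy as np

    def gpsr(apply_A, apply_At, y, lam, n, step=0.1, iters=500):
        """Projected-gradient sketch for min 0.5*||y - A x||^2 + lam*||x||_1.

        apply_A and apply_At are callables returning A @ x and A.T @ r, so the
        forward matrix never needs to be formed explicitly.
        """
        u = np.zeros(n)          # positive part of x
        v = np.zeros(n)          # negative part of x
        for _ in range(iters):
            r = apply_A(u - v) - y            # residual
            g = apply_At(r)                   # gradient of the data-fit term
            u = np.maximum(0.0, u - step * (g + lam))
            v = np.maximum(0.0, v - step * (-g + lam))
        return u - v

    # Toy demo: recover a sparse vector from an underdetermined random system.
    rng = np.random.default_rng(2)
    m, n = 80, 200
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
    y = A @ x_true + 0.01 * rng.standard_normal(m)
    x_hat = gpsr(lambda z: A @ z, lambda r: A.T @ r, y, lam=0.02, n=n)
    print("entries recovered above 0.05:", int(np.sum(np.abs(x_hat) > 0.05)))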
Optimal Design of Experiments by Combining Coarse and Fine Measurements
NASA Astrophysics Data System (ADS)
Lee, Alpha A.; Brenner, Michael P.; Colwell, Lucy J.
2017-11-01
In many contexts, it is extremely costly to perform enough high-quality experimental measurements to accurately parametrize a predictive quantitative model. However, it is often much easier to carry out large numbers of experiments that indicate whether each sample is above or below a given threshold. Can many such categorical or "coarse" measurements be combined with a much smaller number of high-resolution or "fine" measurements to yield accurate models? Here, we demonstrate an intuitive strategy, inspired by statistical physics, wherein the coarse measurements are used to identify the salient features of the data, while the fine measurements determine the relative importance of these features. A linear model is inferred from the fine measurements, augmented by a quadratic term that captures the correlation structure of the coarse data. We illustrate our strategy by considering the problems of predicting the antimalarial potency and aqueous solubility of small organic molecules from their 2D molecular structure.
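One plausible reading of this strategy, sketched below with simulated data: the abundant coarse (above/below-threshold) measurements are used to estimate how features co-occur in the above-threshold class, and that correlation structure enters a quadratic score that augments a linear model fitted to the scarce fine measurements. The feature dimensions, noise levels, and the particular choice of coupling matrix are illustrative assumptions, not the authors' exact estimator.

    import numpy as np
    from numpy.linalg import lstsq

    rng = np.random.default_rng(3)
    d = 20                      # number of molecular features (illustrative)
    w_true = rng.standard_normal(d)

    # Many cheap "coarse" labels: is the property above a threshold?
    X_coarse = rng.standard_normal((5000, d))
    y_coarse = X_coarse @ w_true + rng.normal(scale=0.5, size=5000) > 0.0

    # Few expensive "fine" measurements of the property itself.
    X_fine = rng.standard_normal((60, d))
    y_fine = X_fine @ w_true + rng.normal(scale=0.5, size=60)

    # Coupling matrix from the coarse data: how features co-occur among
    # above-threshold samples relative to the full pool (one plausible choice).
    J = np.cov(X_coarse[y_coarse].T) - np.cov(X_coarse.T)

    # Augment the linear model fitted on the fine data with the quadratic score.
    q = np.einsum("ij,jk,ik->i", X_fine, J, X_fine)
    design = np.column_stack([X_fine, q, np.ones(len(X_fine))])
    coef, *_ = lstsq(design, y_fine, rcond=None)
    print("fitted linear weights:", np.round(coef[:d], 2))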
Supersymmetry breaking and Nambu-Goldstone fermions with cubic dispersion
NASA Astrophysics Data System (ADS)
Sannomiya, Noriaki; Katsura, Hosho; Nakayama, Yu
2017-03-01
We introduce a lattice fermion model in one spatial dimension with supersymmetry (SUSY) but without particle number conservation. The Hamiltonian is defined as the anticommutator of two nilpotent supercharges Q and Q†. Each supercharge is built solely from spinless fermion operators and depends on a parameter g . The system is strongly interacting for small g , and in the extreme limit g =0 , the number of zero-energy ground states grows exponentially with the system size. By contrast, in the large-g limit, the system is noninteracting and SUSY is broken spontaneously. We study the model for modest values of g and show that under certain conditions spontaneous SUSY breaking occurs in both finite and infinite chains. We analyze the low-energy excitations both analytically and numerically. Our analysis suggests that the Nambu-Goldstone fermions accompanying the spontaneous SUSY breaking have cubic dispersion at low energies.
Tedesco Triccas, L; Burridge, J H; Hughes, A M; Pickering, R M; Desikan, M; Rothwell, J C; Verheyden, G
2016-01-01
To systematically review the methodology in particular treatment options and outcomes and the effect of multiple sessions of transcranial direct current stimulation (tDCS) with rehabilitation programmes for upper extremity recovery post stroke. A search was conducted for randomised controlled trials involving tDCS and rehabilitation for the upper extremity in stroke. Quality of included studies was analysed using the Modified Downs and Black form. The extent of, and effect of variation in treatment parameters such as anodal, cathodal and bi-hemispheric tDCS on upper extremity outcome measures of impairment and activity were analysed using meta-analysis. Nine studies (371 participants with acute, sub-acute and chronic stroke) were included. Different methodologies of tDCS and upper extremity intervention, outcome measures and timing of assessments were identified. Real tDCS combined with rehabilitation had a small non-significant effect of +0.11 (p=0.44) and +0.24 (p=0.11) on upper extremity impairments and activities at post-intervention respectively. Various tDCS methods have been used in stroke rehabilitation. The evidence so far is not statistically significant, but is suggestive of, at best, a small beneficial effect on upper extremity impairment. Future research should focus on which patients and rehabilitation programmes are likely to respond to different tDCS regimes. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Electromagnetic fields can interact with biological tissue both electrically and mechanically. This study investigated the mechanical interaction between brain tissue and an extremely-low-frequency (ELF) electric field by measuring the resultant vibrational amplitude. The exposur...
2012-03-19
THREE EXTREMITY ARMOR SYSTEMS: DETERMINATION OF PHYSIOLOGICAL, BIOMECHANICAL, AND PHYSICAL PERFORMANCE EFFECTS AND QUANTIFICATION OF BODY AREA COVERAGE. Contract number: MIPR #M9545006MPR6CC7. Performing organization report number: NATICK/TR-12/014.
Abbott, J Haxby; Schmitt, John
2014-08-01
Multicenter, prospective, longitudinal cohort study. To investigate the minimum important difference (MID) of the Patient-Specific Functional Scale (PSFS), 4 region-specific outcome measures, and the numeric pain rating scale (NPRS) across 3 levels of patient-perceived global rating of change in a clinical setting. The MID varies depending on the external anchor defining patient-perceived "importance." The MID for the PSFS has not been established across all body regions. One thousand seven hundred eight consecutive patients with musculoskeletal disorders were recruited from 5 physical therapy clinics. The PSFS, NPRS, and 4 region-specific outcome measures-the Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale-were assessed at the initial and final physical therapy visits. Global rating of change was assessed at the final visit. MID was calculated for the PSFS and NPRS (overall and for each body region), and for each region-specific outcome measure, across 3 levels of change defined by the global rating of change (small, medium, large change) using receiver operating characteristic curve methodology. The MID for the PSFS (on a scale from 0 to 10) ranged from 1.3 (small change) to 2.3 (medium change) to 2.7 (large change), and was relatively stable across body regions. MIDs for the NPRS (-1.5 to -3.5), Oswestry Disability Index (-12), Neck Disability Index (-14), Upper Extremity Functional Index (6 to 11), and Lower Extremity Functional Scale (9 to 16) are also reported. We reported the MID for small, medium, and large patient-perceived change on the PSFS, NPRS, Oswestry Disability Index, Neck Disability Index, Upper Extremity Functional Index, and Lower Extremity Functional Scale for use in clinical practice and research.
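A brief sketch of the ROC-based MID calculation described above: the global rating of change defines which patients improved at a given level, and the MID is taken as the change-score cutoff that best discriminates improved from unchanged patients (here, by maximizing the Youden index). The simulated 0-10 change scores and group proportions below are placeholders, not data from the study.

    import numpy as np

    def mid_from_roc(change_scores, improved):
        """Return the change-score cutoff maximizing sensitivity + specificity - 1."""
        best_cut, best_j = None, -np.inf
        for c in np.unique(change_scores):
            sens = np.mean(change_scores[improved] >= c)
            spec = np.mean(change_scores[~improved] < c)
            j = sens + spec - 1.0                      # Youden index
            if j > best_j:
                best_cut, best_j = c, j
        return best_cut

    # Simulated PSFS-like change scores (0-10 scale), purely illustrative.
    rng = np.random.default_rng(4)
    improved = rng.random(500) < 0.6
    change = np.where(improved, rng.normal(2.5, 1.5, 500), rng.normal(0.5, 1.5, 500))
    print("estimated MID:", round(float(mid_from_roc(change, improved)), 1))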
Matter-Radiation Interactions in Extremes
An experimental explosive is shown igniting during small-scale impact testing.
NASA Astrophysics Data System (ADS)
Bao, Jiawei; Sherwood, Steven C.; Colin, Maxime; Dixit, Vishal
2017-10-01
The behavior of tropical extreme precipitation under changes in sea surface temperatures (SSTs) is investigated with the Weather Research and Forecasting Model (WRF) in three sets of idealized simulations: small-domain tropical radiative-convective equilibrium (RCE), quasi-global "aquapatch", and RCE with prescribed mean ascent from the tropical band in the aquapatch. We find that, across the variations introduced including SST, large-scale circulation, domain size, horizontal resolution, and convective parameterization, the change in the degree of convective organization emerges as a robust mechanism affecting extreme precipitation. Higher ratios of change in extreme precipitation to change in mean surface water vapor are associated with increases in the degree of organization, while lower ratios correspond to decreases in the degree of organization. The spread of such changes is much larger in RCE than aquapatch tropics, suggesting that small RCE domains may be unreliable for assessing the temperature-dependence of extreme precipitation or convective organization. When the degree of organization does not change, simulated extreme precipitation scales with surface water vapor. This slightly exceeds Clausius-Clapeyron (CC) scaling, because the near-surface air warms 10-25% faster than the SST in all experiments. Also for simulations analyzed here with convective parameterizations, there is an increasing trend of organization with SST.
Nonparametric Regression Subject to a Given Number of Local Extreme Value
2001-07-01
Compilation report ADP013708 through ADP013761 (unclassified). Nonparametric regression subject to a given number of local extreme values, by Ali Majidi and Laurie Davies: the method specifies the locations of the local extremes for a smoothing algorithm, makes the smoothing problem precise, and characterizes its solution as that of a quadratic program (QP3); a four-panel figure compares the results.
The Small Nuclear Genomes of Selaginella Are Associated with a Low Rate of Genome Size Evolution.
Baniaga, Anthony E; Arrigo, Nils; Barker, Michael S
2016-06-03
The haploid nuclear genome size (1C DNA) of vascular land plants varies over several orders of magnitude. Much of this observed diversity in genome size is due to the proliferation and deletion of transposable elements. To date, all vascular land plant lineages with extremely small nuclear genomes represent recently derived states, having ancestors with much larger genome sizes. The Selaginellaceae represent an ancient lineage with extremely small genomes. It is unclear how small nuclear genomes evolved in Selaginella. We compared the rates of nuclear genome size evolution in Selaginella and major vascular plant clades in a comparative phylogenetic framework. For the analyses, we collected 29 new flow cytometry estimates of haploid genome size in Selaginella to augment publicly available data. Selaginella possess some of the smallest known haploid nuclear genome sizes, as well as the lowest rate of genome size evolution observed across all vascular land plants included in our analyses. Additionally, our analyses provide strong support for a history of haploid nuclear genome size stasis in Selaginella. Our results indicate that Selaginella, similar to other early diverging lineages of vascular land plants, has relatively low rates of genome size evolution. Further, our analyses highlight that a rapid transition to a small genome size is only one route to an extremely small genome. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Future Extreme Event Vulnerability in the Rural Northeastern United States
NASA Astrophysics Data System (ADS)
Winter, J.; Bowen, F. L.; Partridge, T.; Chipman, J. W.
2017-12-01
Future climate change impacts on humans will be determined by the convergence of evolving physical climate and socioeconomic systems. Of particular concern is the intersection of extreme events and vulnerable populations. Rural areas of the Northeastern United States have experienced increased temperature and precipitation extremes, especially over the past three decades, and face unique challenges due to their physical isolation, natural resources dependent economies, and high poverty rates. To explore the impacts of future extreme events on vulnerable, rural populations in the Northeast, we project extreme events and vulnerability indicators to identify where changes in extreme events and vulnerable populations coincide. Specifically, we analyze future (2046-2075) maximum annual daily temperature, minimum annual daily temperature, maximum annual daily precipitation, and maximum consecutive dry day length for Representative Concentration Pathways (RCP) 4.5 and 8.5 using four global climate models (GCM) and a gridded observational dataset. We then overlay those projections with estimates of county-level population and relative income for 2060 to calculate changes in person-events from historical (1976-2005), with a focus on Northeast counties that have less than 250,000 people and are in the bottom income quartile. We find that across the rural Northeast for RCP4.5, heat person-events per year increase tenfold, far exceeding decreases in cold person-events and relatively small changes in precipitation and drought person-events. Counties in the bottom income quartile have historically (1976-2005) experienced a disproportionate number of heat events, and counties in the bottom two income quartiles are projected to experience a greater heat event increase by 2046-2075 than counties in the top two income quartiles. We further explore the relative contributions of event frequency, population, and income changes to the total and geographic distribution of climate change impacts on rural, vulnerable areas of the Northeast.
Practice makes perfect in memory recall
Romani, Sandro; Katkov, Mikhail
2016-01-01
A large variability in performance is observed when participants recall briefly presented lists of words. The sources of such variability are not known. Our analysis of a large data set of free recall revealed a small fraction of participants that reached an extremely high performance, including many trials with the recall of complete lists. Moreover, some of them developed a number of consistent input-position-dependent recall strategies, in particular recalling words consecutively (“chaining”) or in groups of consecutively presented words (“chunking”). The time course of acquisition and particular choice of positional grouping were variable among participants. Our results show that acquiring positional strategies plays a crucial role in improvement of recall performance. PMID:26980785
Why anthropic reasoning cannot predict Lambda.
Starkman, Glenn D; Trotta, Roberto
2006-11-17
We revisit anthropic arguments purporting to explain the measured value of the cosmological constant. We argue that different ways of assigning probabilities to candidate universes lead to totally different anthropic predictions. As an explicit example, we show that weighting different universes by the total number of possible observations leads to an extremely small probability for observing a value of Lambda equal to or greater than what we now measure. We conclude that anthropic reasoning within the framework of probability as frequency is ill-defined and that in the absence of a fundamental motivation for selecting one weighting scheme over another the anthropic principle cannot be used to explain the value of Lambda, nor, likely, any other physical parameters.
Carvalho, Rimenys J; Cruz, Thayana A
2018-01-01
High-throughput screening (HTS) systems have emerged as important tools for fast, low-cost evaluation of several conditions at once, since they require only small quantities of material and small sample volumes. These characteristics are extremely valuable for experiments with large numbers of variables, enabling the application of design of experiments (DoE) strategies or simple experimental planning approaches. Once the capacity of HTS systems to mimic chromatographic purification steps was established, several studies were performed successfully, including scale-down purification. Here, we propose a method for studying different purification conditions that can be used for any recombinant protein, including complex and glycosylated proteins, using low-binding filter microplates.
NASA Technical Reports Server (NTRS)
Bennett, J.; Hall, P.; Smith, F. T.
1988-01-01
Viscous fluid flows with curved streamlines can support both centrifugal and viscous traveling-wave instabilities. Here the interaction of these instabilities is discussed in the context of fully developed flow in a curved channel. The viscous (Tollmien-Schlichting) instability is described asymptotically at high Reynolds numbers, and it is found that it can induce a Taylor-Goertler flow even at extremely small amplitudes. In this interaction, the Tollmien-Schlichting wave can drive a vortex state with wavelength comparable either with the channel width or with that of lower-branch viscous modes. The nonlinear equations which describe these interactions are solved for nonlinear equilibrium states.
A universal mechanism for transport and regulation of CPA sodium proton exchangers.
Călinescu, Octavian; Fendler, Klaus
2015-09-01
Recent studies performed on a series of Na+/H+ exchangers have led us to postulate a general mechanism for Na+/H+ exchange in the monovalent cation/proton antiporter superfamily. This simple mechanism employs a single binding site for which both substrates compete. The developed kinetic model is self-regulatory, ensuring down-regulation of transport activity at extreme pH, and elegantly explains the pH-dependent activity of Na+/H+ exchangers. The mechanism was experimentally verified and shown to describe both electrogenic and electroneutral exchangers. Using a small number of parameters, exchanger activity can be modeled under different conditions, providing insights into the physiological role of Na+/H+ exchangers.
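The single-site competition mechanism can be caricatured with a toy rate expression: the outward-facing site must bind Na+ while protons compete for it, and the inward-facing site must bind a proton while Na+ competes, which yields the self-limiting, bell-shaped pH dependence described above. The dissociation constants and ion concentrations below are arbitrary placeholders, not the parameters of the published kinetic model.

    import numpy as np

    def relative_rate(na_out, na_in, ph_out, ph_in, k_na=10e-3, k_h=1e-7):
        """Toy turnover rate for a single-site Na+/H+ exchanger (arbitrary units).

        Both ions compete for the same binding site on each face of the membrane.
        """
        h_out, h_in = 10.0 ** (-ph_out), 10.0 ** (-ph_in)
        # Probability the outward-facing site is loaded with Na+ (H+ competes).
        p_na_out = (na_out / k_na) / (1 + na_out / k_na + h_out / k_h)
        # Probability the inward-facing site is loaded with H+ (Na+ competes).
        p_h_in = (h_in / k_h) / (1 + na_in / k_na + h_in / k_h)
        return p_na_out * p_h_in

    # Activity falls off at both acidic and alkaline extremes in this toy model.
    for ph in (5.5, 6.5, 7.5, 8.5):
        print(ph, round(relative_rate(na_out=0.1, na_in=0.01, ph_out=ph, ph_in=ph), 3))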
SIGN SINGULARITY AND FLARES IN SOLAR ACTIVE REGION NOAA 11158
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sorriso-Valvo, L.; De Vita, G.; Kazachenko, M. D.
Solar Active Region NOAA 11158 has hosted a number of strong flares, including one X2.2 event. The complexity of current density and current helicity are studied through cancellation analysis of their sign-singular measure, which features power-law scaling. Spectral analysis is also performed, revealing the presence of two separate scaling ranges with different spectral index. The time evolution of parameters is discussed. Sudden changes of the cancellation exponents at the time of large flares and the presence of correlation with Extreme-Ultra-Violet and X-ray flux suggest that eruption of large flares can be linked to the small-scale properties of the current structures.
Application of phenotypic microarrays to environmental microbiology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borglin, sharon; Joyner, Dominique; DeAngelis, Kristen
2012-01-01
Environmental organisms are extremely diverse and only a small fraction has been successfully cultured in the laboratory. Culture in micro wells provides a method for rapid screening of a wide variety of growth conditions and commercially available plates contain a large number of substrates, nutrient sources, and inhibitors, which can provide an assessment of the phenotype of an organism. This review describes applications of phenotype arrays to anaerobic and thermophilic microorganisms, use of the plates in stress response studies, in development of culture media for newly discovered strains, and for assessment of phenotype of environmental communities. Also discussed are considerationsmore » and challenges in data interpretation and visualization, including data normalization, statistics, and curve fitting.« less
Jimsphere wind and turbulence exceedance statistic
NASA Technical Reports Server (NTRS)
Adelfang, S. I.; Court, A.
1972-01-01
Exceedance statistics of winds and gusts observed over Cape Kennedy with Jimsphere balloon sensors are described. Gust profiles containing positive and negative departures, from smoothed profiles, in the wavelength ranges 100-2500, 100-1900, 100-860, and 100-460 meters were computed from 1578 profiles with four 41 weight digital high pass filters. Extreme values of the square root of gust speed are normally distributed. Monthly and annual exceedance probability distributions of normalized rms gust speeds in three altitude bands (2-7, 6-11, and 9-14 km) are log-normal. The rms gust speeds are largest in the 100-2500 wavelength band between 9 and 14 km in late winter and early spring. A study of monthly and annual exceedance probabilities and the number of occurrences per kilometer of level crossings with positive slope indicates significant variability with season, altitude, and filter configuration. A decile sampling scheme is tested and an optimum approach is suggested for drawing a relatively small random sample that represents the characteristic extreme wind speeds and shears of a large parent population of Jimsphere wind profiles.
The wasteland of random supergravities
NASA Astrophysics Data System (ADS)
Marsh, David; McAllister, Liam; Wrase, Timm
2012-03-01
We show that in a general 𝒩 = 1 supergravity with N ≫ 1 scalar fields, an exponentially small fraction of the de Sitter critical points are metastable vacua. Taking the superpotential and Kähler potential to be random functions, we construct a random matrix model for the Hessian matrix, which is well-approximated by the sum of a Wigner matrix and two Wishart matrices. We compute the eigenvalue spectrum analytically from the free convolution of the constituent spectra and find that in typical configurations, a significant fraction of the eigenvalues are negative. Building on the Tracy-Widom law governing fluctuations of extreme eigenvalues, we determine the probability P of a large fluctuation in which all the eigenvalues become positive. Strong eigenvalue repulsion makes this extremely unlikely: we find P ∝ exp(-c N^p), with c, p being constants. For generic critical points we find p ≈ 1.5, while for approximately supersymmetric critical points, p ≈ 1.3. Our results have significant implications for the counting of de Sitter vacua in string theory, but the number of vacua remains vast.
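The random-matrix statement can be explored numerically in a simplified setting: build a Hessian as the sum of a Wigner matrix and two Wishart matrices, then track the typical fraction of negative eigenvalues and how rarely all eigenvalues come out positive as N grows. The normalizations, relative weights, and trial counts below are generic illustrative choices, not the specific supergravity-derived scalings of the paper.

    import numpy as np

    rng = np.random.default_rng(5)

    def random_hessian(N, sigma=2.0):
        """Sum of a Wigner matrix and two Wishart matrices (illustrative scaling)."""
        G = rng.standard_normal((N, N))
        wigner = (G + G.T) / np.sqrt(2 * N)          # semicircle of radius 2
        A = rng.standard_normal((N, N)) / np.sqrt(N)
        B = rng.standard_normal((N, N)) / np.sqrt(N)
        return sigma * wigner + A @ A.T + B @ B.T    # sigma sets the Wigner weight

    for N in (5, 10, 20, 40):
        trials = 2000
        frac_neg, all_positive = 0.0, 0
        for _ in range(trials):
            eig = np.linalg.eigvalsh(random_hessian(N))
            frac_neg += np.mean(eig < 0) / trials
            all_positive += int(np.all(eig > 0))
        print(N, "mean fraction of negative eigenvalues:", round(frac_neg, 3),
              "| fully positive Hessians:", all_positive, "of", trials)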
Forecasting daily streamflow using online sequential extreme learning machines
NASA Astrophysics Data System (ADS)
Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.
2016-06-01
While nonlinear machine methods have been widely used in environmental forecasting, in situations where new data arrive continually, the need to make frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single hidden layer feedforward neural networks - the online sequential extreme learning machine (OSELM) - is automatically updated inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors used were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS), and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With the online sequential multiple linear regression (OSMLR) as benchmark, we concluded that OSELM is an attractive approach as it easily outperformed OSMLR in forecast accuracy.
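A compact sketch of the online sequential update at the heart of OSELM: random hidden-layer weights are fixed, the output weights are initialized from a first batch by least squares, and each new chunk of data updates them through a recursive least-squares step, so old data can be discarded. The layer sizes, activation, ridge term, and toy data below are placeholders, not the streamflow setup from the paper.

    import numpy as np

    class OSELM:
        def __init__(self, n_inputs, n_hidden, rng):
            self.W = rng.standard_normal((n_inputs, n_hidden))   # fixed random input weights
            self.b = rng.standard_normal(n_hidden)               # fixed random biases
            self.beta = None                                     # output weights (learned)
            self.P = None                                        # inverse correlation matrix

        def _hidden(self, X):
            return np.tanh(X @ self.W + self.b)

        def fit_initial(self, X, y):
            H = self._hidden(X)
            self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
            self.beta = self.P @ H.T @ y

        def update(self, X, y):
            """Recursive least-squares update for a new chunk; the chunk can then be discarded."""
            H = self._hidden(X)
            K = np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
            self.P = self.P - self.P @ H.T @ K @ H @ self.P
            self.beta = self.beta + self.P @ H.T @ (y - H @ self.beta)

        def predict(self, X):
            return self._hidden(X) @ self.beta

    # Toy streaming regression: an initial batch, then repeated updates with new chunks.
    rng = np.random.default_rng(6)
    f = lambda X: np.sin(X[:, 0]) + 0.5 * X[:, 1]
    X0 = rng.uniform(-3, 3, (200, 2))
    model = OSELM(2, 50, rng)
    model.fit_initial(X0, f(X0) + 0.05 * rng.standard_normal(200))
    for _ in range(30):                                   # streaming chunks
        Xn = rng.uniform(-3, 3, (10, 2))
        model.update(Xn, f(Xn) + 0.05 * rng.standard_normal(10))
    Xt = rng.uniform(-3, 3, (5, 2))
    print(np.round(model.predict(Xt), 2), np.round(f(Xt), 2))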
The balance and harmony of control power for a combat aircraft in tactical maneuvering
NASA Technical Reports Server (NTRS)
Bocvarov, Spiro; Cliff, Eugene M.; Lutze, Frederick H.
1992-01-01
An analysis is presented for a family of regular extremal attitude-maneuvers for the High Angle-of-Attack Research Vehicle that has thrust-vectoring capability. Different levels of dynamic coupling are identified in the combat aircraft attitude model, and the characteristic extremal-family motion is explained. It is shown why the extremal-family trajectories develop small sideslip-angles, a highly desirable feature from a practical viewpoint.
Zanobetti, Antonella; O’Neill, Marie S.; Gronlund, Carina J.; Schwartz, Joel D
2015-01-01
Background Extremes of temperature have been associated with short-term increases in daily mortality. We identified subpopulations with increased susceptibility to dying during temperature extremes, based on personal demographics, small-area characteristics and preexisting medical conditions. Methods We examined Medicare participants in 135 U.S. cities and identified preexisting conditions based on hospitalization records prior to their deaths, from 1985–2006. Personal characteristics were obtained from the Medicare records, and area characteristics were assigned based on zip-code of residence. We conducted a case-only analysis of over 11 million deaths, and evaluated modification of the risk of dying associated with extremely hot days and extremely cold days, continuous temperatures, and water-vapor pressure. Modifiers included preexisting conditions, personal characteristics, zip-code-level population characteristics, and land-cover characteristics. For each effect modifier, a city-specific logistic regression model was fitted and then an overall national estimate was calculated using meta-analysis. Results People with certain preexisting conditions were more susceptible to extreme heat, with an additional 6% (95% confidence interval= 4% – 8%) increase in the risk of dying on an extremely hot day in subjects with previous admission for atrial fibrillation, an additional 8% (4%–12%) in subjects with Alzheimer disease, and an additional 6% (3%–9%) in subjects with dementia. Zip-code level and personal characteristics were also associated with increased susceptibility to temperature. Conclusions We identified several subgroups of the population who are particularly susceptible to temperature extremes, including persons with atrial fibrillation. PMID:24045717
Weak linkage between the heaviest rainfall and tallest storms.
Hamada, Atsushi; Takayabu, Yukari N; Liu, Chuntao; Zipser, Edward J
2015-02-24
Conventionally, the heaviest rainfall has been linked to the tallest, most intense convective storms. However, the global picture of the linkage between extreme rainfall and convection remains unclear. Here we analyse an 11-year record of spaceborne precipitation radar observations and establish that a relatively small fraction of extreme convective events produces extreme rainfall rates in any region of the tropics and subtropics. Robust differences between extreme rainfall and convective events are found in the rainfall characteristics and environmental conditions, irrespective of region; most extreme rainfall events are characterized by less intense convection with intense radar echoes not extending to extremely high altitudes. Rainfall characteristics and environmental conditions both indicate the importance of warm-rain processes in producing extreme rainfall rates. Our results demonstrate that, even in regions where severe convective storms are representative extreme weather events, the heaviest rainfall events are mostly associated with less intense convection.
NASA Astrophysics Data System (ADS)
Mascioli, Nora R.
Extreme temperatures, heat waves, heavy rainfall events, drought, and extreme air pollution events have adverse effects on human health, infrastructure, agriculture and economies. The frequency, magnitude and duration of these events are expected to change in the future in response to increasing greenhouse gases and decreasing aerosols, but future climate projections are uncertain. A significant portion of this uncertainty arises from uncertainty in the effects of aerosol forcing: to what extent were the effects from greenhouse gases masked by aerosol forcing over the historical observational period, and how much will decreases in aerosol forcing influence regional and global climate over the remainder of the 21st century? The observed frequency and intensity of extreme heat and precipitation events have increased in the U.S. over the latter half of the 20th century. Using aerosol only (AER) and greenhouse gas only (GHG) simulations from 1860 to 2005 in the GFDL CM3 chemistry-climate model, I parse apart the competing influences of aerosols and greenhouse gases on these extreme events. I find that small changes in extremes in the "all forcing" simulations reflect cancellations between the effects of increasing anthropogenic aerosols and greenhouse gases. In AER, extreme high temperatures and the number of days with temperatures above the 90th percentile decline over most of the U.S., while in GHG high temperature extremes increase over most of the U.S. The spatial response patterns in AER and GHG are significantly anti-correlated, suggesting a preferred regional mode of response that is largely independent of the type of forcing. Extreme precipitation over the eastern U.S. decreases in AER, particularly in winter, and increases over the eastern and central U.S. in GHG, particularly in spring. Over the 21 st century under the RCP8.5 emissions scenario, the patterns of extreme temperature and precipitation change associated with greenhouse gas forcing dominate. The temperature response pattern in AER and GHG is characterized by strong responses over the western U.S. and weak or opposite signed responses over the southeast U.S., raising the question of whether the observed U.S. "warming hole" could have a forced component. To address this question, I systematically examine observed seasonal temperature trends over all time periods of at least 10 years during 1901-2015. In the northeast and southern U.S., significant summertime cooling occurs from the early 1950s to the mid 1970s, which I partially attribute to increasing anthropogenic aerosol emissions (median fraction of the observed temperature trends explained is 0.69 and 0.17, respectively). In winter, the northeast and southern U.S. cool significantly from the early 1950s to the early 1990s, which I attribute to long-term phase changes in the North Atlantic Oscillation and the Pacific Decadal Oscillation. Rather than being a single phenomenon stemming from a single cause, both the warming hole and its dominant drivers vary by season, region, and time period. Finally, I examine historical and projected future changes in atmospheric stagnation. Stagnation, which is characterized by weak winds and an absence of precipitation, is a meteorological contributor to heat waves, extreme pollution, and drought. Using CM3, I show that regional stagnation trends over the historical period (1860-2005) are driven by changes in anthropogenic aerosol emissions, rather than rising greenhouse gases. 
In the northeastern and central United States, aerosol-induced changes in surface and upper level winds produce significant decreases in the number of stagnant summer days, while decreasing precipitation in the southeast US increases the number of stagnant summer days. Outside of the U.S., significant drying over eastern China in response to rising aerosol emissions contributed to increased stagnation during 1860-2005. Additionally, this region was found to be particularly sensitive to changes in local aerosol emissions, indicating that decreasing Chinese emissions in efforts to improve air quality will also decrease stagnation. In Europe, I find a dipole response pattern during the historical period wherein stagnation decreases over southern Europe and increases over northern Europe in response to global increases in aerosol emissions. In the future, declining aerosol emissions will likely lead to a reversal of the historical stagnation trends, with increasing greenhouse gases again playing a secondary role. Aerosols have a significant effect on a number of societally important extreme events, including heat waves, intense rainfall events, drought, and stagnation. Further, uncertainty in the strength of aerosol masking of historical greenhouse gas forcing is a significant source of spread in future climate projections. Quantifying these aerosol effects is therefore critical for our ability to accurately project and prepare for future changes in extreme events.
Extreme events and event size fluctuations in biased random walks on networks.
Kishore, Vimal; Santhanam, M S; Amritkar, R E
2012-05-01
Random walk on discrete lattice models is important to understand various types of transport processes. The extreme events, defined as exceedences of the flux of walkers above a prescribed threshold, have been studied recently in the context of complex networks. This was motivated by the occurrence of rare events such as traffic jams, floods, and power blackouts which take place on networks. In this work, we study extreme events in a generalized random walk model in which the walk is preferentially biased by the network topology. The walkers preferentially choose to hop toward the hubs or small degree nodes. In this setting, we show that extremely large fluctuations in event sizes are possible on small degree nodes when the walkers are biased toward the hubs. In particular, we obtain the distribution of event sizes on the network. Further, the probability for the occurrence of extreme events on any node in the network depends on its "generalized strength," a measure of the ability of a node to attract walkers. The generalized strength is a function of the degree of the node and that of its nearest neighbors. We obtain analytical and simulation results for the probability of occurrence of extreme events on the nodes of a network using a generalized random walk model. The result reveals that the nodes with a larger value of generalized strength, on average, display lower probability for the occurrence of extreme events compared to the nodes with lower values of generalized strength.
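A small simulation in the spirit of this model, with illustrative choices throughout (network model and size, number of walkers, bias exponent, and threshold): walkers hop to a neighbour with probability proportional to that neighbour's degree raised to a bias exponent, and an "extreme event" is recorded at a node whenever its instantaneous occupancy exceeds a multiple of its time-averaged occupancy. With the bias toward hubs, the low-degree nodes show the higher exceedance probability, consistent with the behaviour described above.

    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(7)
    G = nx.barabasi_albert_graph(200, 3, seed=7)
    N, alpha = G.number_of_nodes(), 1.0                  # alpha > 0 biases hops toward hubs
    neighbors = [list(G.neighbors(i)) for i in range(N)]
    hop_probs = []
    for nbrs in neighbors:
        w = np.array([G.degree(j) ** alpha for j in nbrs], dtype=float)
        hop_probs.append(w / w.sum())

    walkers, steps, factor = 500, 500, 4.0               # walker count, time steps, threshold multiple
    pos = rng.integers(0, N, walkers)                    # initial walker positions
    counts = np.zeros((steps, N), dtype=int)
    for t in range(steps):
        pos = np.array([rng.choice(neighbors[i], p=hop_probs[i]) for i in pos])
        counts[t] = np.bincount(pos, minlength=N)

    mean_occ = counts.mean(axis=0)
    exceed = (counts > factor * mean_occ).mean(axis=0)   # per-node extreme-event probability
    deg = np.array([G.degree(i) for i in range(N)])
    print("mean exceedance probability, low-degree nodes:", exceed[deg <= 3].mean())
    print("mean exceedance probability, hub nodes (top 5%):", exceed[deg >= np.percentile(deg, 95)].mean())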
Fine structure of the pecten oculi of the barred owl (Strix varia).
Smith, B J; Smith, S A; Braekevelt, C R
1996-01-01
The pecten oculi of the barred owl (Strix varia) has been examined by light and transmission electron microscopy. The pecten in this species is of the pleated type and is small in comparison to the size of the ocular globe. The pecten consists of 8-10 accordion-like folds that are linked apically by a pigmented tissue bridge. Each fold contains numerous capillaries, larger supply and drainage vessels, and abundant pleomorphic melanocytes. Most of these capillaries are extremely specialized vessels that possess plentiful microfolds on both the luminal and abluminal surfaces. Some capillaries however display only a few microfolds. The endothelial cell bodies are extremely attenuated, with most organelles located near the nucleus. All capillaries are surrounded by a very thick fibrillar basal lamina, which is thought to provide structural support to these small vessels. Pericytes are commonly found within these thickened basal laminae. Numerous melanocytes are also present, with processes that form an incomplete sheath around the capillaries. These processes are also presumed to provide structural support for the capillaries. As in other avian species, the morphology of the barred owl pecten is indicative of extensive involvement in substance transport. When compared to the pecten of more visually-oriented species, this pecten is smaller, has fewer folds, and displays a reduced number of microfolds within the capillaries. In these and other features, the barred owl pecten is similar to the pecten of the great horned owl (Bubo virginianus).
NASA Astrophysics Data System (ADS)
Chen, Yen-Sheng; Zhou, Huang-Cheng
2017-05-01
This paper presents a multiple-input-multiple-output (MIMO) antenna that has four-unit elements enabled by an isolation technique for long-term evolution (LTE) small-cell base stations. While earlier studies on MIMO base-station antennas cope with either a lower LTE band (698-960 MHz) or an upper LTE band (1710-2690 MHz), the proposed antenna meets the full LTE specification, yet it uses the maximum number of unit elements to increase channel capacity. The antenna configuration is optimized for good impedance matching and high radiation efficiency. In particular, as the spacing between unit elements is so small that severe mutual coupling occurs, we propose a simple structure with extremely low costs to enhance the isolation. By using suspended solid wires interconnecting the position having strong coupled current of two adjacent elements, an isolation enhancement of 37 dB is achieved. Although solid wires inherently aim at direct-current applications, this work successfully employs such a low-cost technique to microwave antenna development. Experimental results have validated the design guidelines and the proposed configuration, showing that antenna performances including impedance matching, isolation, radiation features, signal correlation, and channel capacity gain are highly desired for LTE small-cell base stations.
Wood, Jacquelyn L A; Tezel, Defne; Joyal, Destin; Fraser, Dylan J
2015-09-01
How population size influences quantitative genetic variation and differentiation among natural, fragmented populations remains unresolved. Small, isolated populations might occupy poor quality habitats and lose genetic variation more rapidly due to genetic drift than large populations. Genetic drift might furthermore overcome selection as population size decreases. Collectively, this might result in directional changes in additive genetic variation (VA) and trait differentiation (QST) from small to large population size. Alternatively, small populations might exhibit larger variation in VA and QST if habitat fragmentation increases variability in habitat types. We explored these alternatives by investigating VA and QST using nine fragmented populations of brook trout varying 50-fold in census size N (179-8416) and 10-fold in effective number of breeders, Nb (18-135). Across 15 traits, no evidence was found for consistent differences in VA and QST with population size and almost no evidence for increased variability of VA or QST estimates at small population size. This suggests that (i) small populations of some species may retain adaptive potential according to commonly adopted quantitative genetic measures and (ii) populations of varying sizes experience a variety of environmental conditions in nature; however, extremely large studies are likely required before any firm conclusions can be made. © 2015 The Author(s). Evolution © 2015 The Society for the Study of Evolution.
No phenotypic plasticity in nest-site selection in response to extreme flooding events.
Bailey, Liam D; Ens, Bruno J; Both, Christiaan; Heg, Dik; Oosterbeek, Kees; van de Pol, Martijn
2017-06-19
Phenotypic plasticity is a crucial mechanism for responding to changes in climatic means, yet we know little about its role in responding to extreme climatic events (ECEs). ECEs may lack the reliable cues necessary for phenotypic plasticity to evolve; however, this has not been empirically tested. We investigated whether behavioural plasticity in nest-site selection allows a long-lived shorebird (Haematopus ostralegus) to respond to flooding. We collected longitudinal nest elevation data on individuals over two decades, during which time flooding events have become increasingly frequent. We found no evidence that individuals learn from flooding experiences, showing nest elevation change consistent with random nest-site selection. There was also no evidence of phenotypic plasticity in response to potential environmental cues (lunar nodal cycle and water height). A small number of individuals, those nesting near an artificial sea wall, did show an increase in nest elevation over time; however, there is no conclusive evidence this occurred in response to ECEs. Our study population showed no behavioural plasticity in response to changing ECE patterns. More research is needed to determine whether this pattern is consistent across species and types of ECEs. If so, ECEs may pose a major challenge to the resilience of wild populations. This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).
Sueur, Jérôme; Mackie, David; Windmill, James F. C.
2011-01-01
To communicate at long range, animals have to produce intense but intelligible signals. This task might be difficult to achieve due to mechanical constraints, in particular relating to body size. Whilst the acoustic behaviour of large marine and terrestrial animals has been thoroughly studied, very little is known about the sound produced by small arthropods living in freshwater habitats. Here we analyse for the first time the calling song produced by the male of a small insect, the water boatman Micronecta scholtzi. The song is made of three distinct parts differing in their temporal and amplitude parameters, but not in their frequency content. Sound is produced at 78.9 (63.6–82.2) SPL rms re 2×10⁻⁵ Pa with a peak at 99.2 (85.7–104.6) SPL re 2×10⁻⁵ Pa estimated at a distance of one metre. This energy output is significant considering the small size of the insect. When scaled to body length and compared to 227 other acoustic species, the acoustic energy produced by M. scholtzi appears as an extreme value, outperforming marine and terrestrial mammal vocalisations. Such an extreme display may be interpreted as an exaggerated secondary sexual trait resulting from a runaway sexual selection without predation pressure. PMID:21698252
A New Paradigm in Earth Environmental Monitoring with the CYGNSS Small Satellite Constellation
NASA Technical Reports Server (NTRS)
Ruf, C. S.; Chew, C.; Lang, T.; Morris, M. G.; Kyle, K.; Ridley, A.; Balasubramaniam, R.
2018-01-01
A constellation of small, low-cost satellites is able to make scientifically valuable measurements of the Earth which can be used for weather forecasting, disaster monitoring, and climate studies. Eight CYGNSS satellites were launched into low Earth orbit on December 15, 2016. Each satellite carries a science radar receiver which measures GPS signals reflected from the Earth surface. The signals contain information about the surface, including wind speed over ocean and soil moisture and flooding over land. The satellites are distributed around the globe so that measurements can be made more often to capture extreme weather events. Innovative engineering approaches are used to reduce per satellite cost, increase the number in the constellation, and improve temporal sampling. These include the use of differential drag rather than propulsion to adjust the spacing between satellites and the use of existing GPS signals as the science radars’ transmitter. Initial on-orbit results demonstrate the scientific utility of the CYGNSS observations, and suggest that a new paradigm in spaceborne Earth environmental monitoring is possible.
Preferential partner selection in an evolutionary study of Prisoner's Dilemma.
Ashlock, D; Smucker, M D; Stanley, E A; Tesfatsion, L
1996-01-01
Partner selection is an important process in many social interactions, permitting individuals to decrease the risks associated with cooperation. In large populations, defectors may escape punishment by roving from partner to partner, but defectors in smaller populations risk social isolation. We investigate these possibilities for an evolutionary Prisoner's Dilemma in which agents use expected payoffs to choose and refuse partners. In comparison to random or round-robin partner matching, we find that the average payoffs attained with preferential partner selection tend to be more narrowly confined to a few isolated payoff regions. Most ecologies evolve to essentially full cooperative behavior, but when agents are intolerant of defections, or when the costs of refusal and social isolation are small, we also see the emergence of wallflower ecologies in which all agents are socially isolated. Between these two extremes, we see the emergence of ecologies whose agents tend to engage in a small number of defections followed by cooperation thereafter. The latter ecologies exhibit a plethora of interesting social interaction patterns.
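A toy sketch of the choice-and-refusal matching idea is shown below: agents keep a running expected payoff for each potential partner, propose to the best-rated available one, and refuse partners rated below a tolerance threshold, so persistent defectors drift toward social isolation. The strategy set, payoff values, tolerance, and update rule are illustrative assumptions rather than the authors' full evolutionary model.

```python
# Toy sketch of choice-and-refusal partner matching in an iterated Prisoner's Dilemma
# (not the authors' full evolutionary model). Payoffs, tolerance and the running-average
# update are illustrative assumptions.
import numpy as np

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}
WALLFLOWER = 1.6        # payoff for a round spent unmatched
TOLERANCE = 1.2         # refuse partners rated below this
N, ROUNDS = 20, 200
rng = np.random.default_rng(2)

strategies = rng.choice(["TFT", "ALLD"], size=N)         # reciprocators vs defectors
last_move = np.full((N, N), "C", dtype=object)           # last_move[i, j]: what j did to i
expected = np.full((N, N), 3.0)                          # optimistic initial ratings
np.fill_diagonal(expected, -np.inf)
total = np.zeros(N)

def move(i, j):
    return "D" if strategies[i] == "ALLD" else last_move[i, j]   # TFT copies j's last move

for _ in range(ROUNDS):
    matched = set()
    for i in rng.permutation(N):
        if i in matched:
            continue
        choices = [j for j in np.argsort(-expected[i]) if j not in matched and j != i]
        j = choices[0] if choices else None
        # stay unmatched if no partner is available or either side rates the other too low
        if j is None or expected[i, j] < TOLERANCE or expected[j, i] < TOLERANCE:
            total[i] += WALLFLOWER
            continue
        mi, mj = move(i, j), move(j, i)
        pi, pj = PAYOFF[(mi, mj)]
        total[i] += pi
        total[j] += pj
        expected[i, j] = 0.7 * expected[i, j] + 0.3 * pi   # running payoff estimates
        expected[j, i] = 0.7 * expected[j, i] + 0.3 * pj
        last_move[i, j], last_move[j, i] = mj, mi
        matched.update({i, j})

print("mean payoff  TFT :", total[strategies == "TFT"].mean())
print("mean payoff ALLD :", total[strategies == "ALLD"].mean())
```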
NASA Astrophysics Data System (ADS)
Li, Richard Y.; Di Felice, Rosa; Rohs, Remo; Lidar, Daniel A.
2018-03-01
Transcription factors regulate gene expression, but how these proteins recognize and specifically bind to their DNA targets is still debated. Machine learning models are effective means to reveal interaction mechanisms. Here we studied the ability of a quantum machine learning approach to classify and rank binding affinities. Using simplified data sets of a small number of DNA sequences derived from actual binding affinity experiments, we trained a commercially available quantum annealer to classify and rank transcription factor binding. The results were compared to state-of-the-art classical approaches for the same simplified data sets, including simulated annealing, simulated quantum annealing, multiple linear regression, LASSO, and extreme gradient boosting. Despite technological limitations, we find a slight advantage in classification performance and nearly equal ranking performance using the quantum annealer for these fairly small training data sets. Thus, we propose that quantum annealing might be an effective method to implement machine learning for certain computational biology problems.
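For orientation, the sketch below reproduces only the classical-baseline side of such a comparison, on synthetic data: one-hot encoded sequences with a made-up "binding" rule are classified with an L1-penalized logistic regression (a LASSO-style baseline) and scikit-learn's gradient boosting standing in for extreme gradient boosting. The quantum-annealing side requires dedicated hardware and is not reproduced.

```python
# Classical-baseline sketch only (the quantum annealer and the paper's real affinity
# data are not reproduced). Synthetic one-hot DNA sequences with a toy binding rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
n_seq, seq_len = 200, 10
seqs = rng.integers(0, 4, size=(n_seq, seq_len))          # 0=A, 1=C, 2=G, 3=T
X = np.eye(4)[seqs].reshape(n_seq, -1)                    # one-hot, 4*seq_len features
# toy "high affinity" label: prefer C/G at the two central positions
y = np.isin(seqs[:, 4:6], [1, 2]).sum(axis=1) + 0.3 * rng.standard_normal(n_seq) > 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
models = {
    "L1 logistic (LASSO-like)": LogisticRegression(penalty="l1", C=1.0, solver="liblinear"),
    "gradient boosting": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, model.predict(X_te)))
```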
A Small Mission Featuring an Imaging X-ray Polarimeter with High Sensitivity
NASA Technical Reports Server (NTRS)
Weisskopf, Martin C.; Baldini, Luca; Bellazini, Ronaldo; Brez, Alessandro; Costa, Enrico; Dissley, Richard; Elsner, Ronald; Fabiani, Sergio; Matt, Giorgio; Minuti, Massimo;
2013-01-01
We present a detailed description of a small mission capable of obtaining high precision and meaningful measurement of the X-ray polarization of a variety of different classes of cosmic X-ray sources. Compared to other ideas that have been suggested, this experiment has demonstrated in the laboratory a number of extremely important features relevant to the ultimate selection of such a mission by a funding agency. The most important of these questions are: 1) Have you demonstrated the sensitivity to a polarized beam at the energies of interest (i.e. the energies which represent the majority (not the minority) of detected photons from the X-ray source of interest)? 2) Have you demonstrated that the device's sensitivity to an unpolarized beam is really negligible and/or quantified the impact of any systematic effects upon actual measurements? We present our answers to these questions backed up by laboratory measurements and give an overview of the mission.
Simulations of radiation-damaged 3D detectors for the Super-LHC
NASA Astrophysics Data System (ADS)
Pennicard, D.; Pellegrini, G.; Fleta, C.; Bates, R.; O'Shea, V.; Parkes, C.; Tartoni, N.
2008-07-01
Future high-luminosity colliders, such as the Super-LHC at CERN, will require pixel detectors capable of withstanding extremely high radiation damage. In this article, the performances of various 3D detector structures are simulated with up to 1×10¹⁶ 1 MeV n_eq/cm² radiation damage. The simulations show that 3D detectors have higher collection efficiency and lower depletion voltages than planar detectors due to their small electrode spacing. When designing a 3D detector with a large pixel size, such as an ATLAS sensor, different electrode column layouts are possible. Using a small number of n+ readout electrodes per pixel leads to higher depletion voltages and lower collection efficiency, due to the larger electrode spacing. Conversely, using more electrodes increases both the insensitive volume occupied by the electrode columns and the capacitive noise. Overall, the best performance after 1×10¹⁶ 1 MeV n_eq/cm² damage is achieved by using 4-6 n+ electrodes per pixel.
[Death by explosion of an aerial mine].
Stockhausen, Sarah; Wöllner, Kirsten; Madea, Burkhard; Doberentz, Elke
2014-01-01
Civilians are rarely killed by military weapons except in times of war. In early 2014, a 50-year-old man died in an explosion of an aerial mine from the Second World War when he was crushing concrete chunks with an excavator at a recycling plant. In the burned operator's cab, the remains of a body were found on the driver's seat. The thorax and the head were missing. Still sticking in the shoe, the right foot severed at the ankle was found about 7 m from the excavator together with numerous small to tiny body parts. At autopsy, the completely disrupted, strongly charred lower torso of a male connected to the left extremities as well as a large number of small tissue fragments and calcined bones were found. According to calculations performed by the seismographic station on the basis of seismic data, only about 45-60 percent of the charge had detonated. The autopsy results illustrate all the more the massive impact of such an explosion.
Preparation and physical characterization of pure beta-carotene.
Laughlin, Robert G; Bunke, Gregory M; Eads, Charles D; Laidig, William D; Shelley, John C
2002-05-01
Pure all-trans beta-carotene has been prepared on the tens-of-grams scale by isothermal Fractional Dissolution (FD) of commercial laboratory samples in tetrahydrofuran (THF). beta-Carotene purified in this way is black, with a faint brownish tinge. The electronic spectra of black samples extend into the near infrared, with end-absorption past 750 nm. Black samples react directly with dioxygen under mild conditions to yield the familiar orange or red powders. Pure beta-carotene rigorously obeys Beer's Law in octane over the entire UV-Vis spectral range, while commercial laboratory samples and recrystallized samples do not. NMR self-diffusion coefficient data demonstrate that beta-carotene exists as simple molecular solutions in octane and toluene. The anomalously high crystallinity of beta-carotene can be attributed (from analysis using molecular mechanics) to the facts that: (1) the number of theoretically possible conformers of beta-carotene is extremely small, and (2) only a small fraction of these (ca. 12%, or 127) may actually exist in fluid phases.
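The Beer's-law criterion used above is simply that absorbance is linear in concentration, A = epsilon*l*c, with zero intercept; curvature or an offset flags aggregation or impurities. The snippet below illustrates the check on synthetic data; the molar absorptivity and the "aggregating" deviation are assumed values, not measurements from the paper.

```python
# Minimal Beer-Lambert linearity check on synthetic absorbance data (illustrative
# numbers only; not the paper's measurements).
import numpy as np

epsilon = 1.39e5                     # assumed molar absorptivity, L mol^-1 cm^-1
path_cm = 1.0
conc = np.linspace(1e-7, 1e-5, 8)    # mol/L

absorb_pure = epsilon * path_cm * conc                       # ideal Beer's-law response
absorb_aggr = epsilon * path_cm * conc * (1 - 2e4 * conc)    # toy negative deviation
for label, A in [("pure", absorb_pure), ("aggregating", absorb_aggr)]:
    slope, intercept = np.polyfit(conc, A, 1)
    resid = A - (slope * conc + intercept)
    print(f"{label:12s} slope={slope:.3e}  max residual={np.abs(resid).max():.3e}")
```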
PUCHEROS: a cost-effective solution for high-resolution spectroscopy with small telescopes
NASA Astrophysics Data System (ADS)
Vanzi, L.; Chacon, J.; Helminiak, K. G.; Baffico, M.; Rivinius, T.; Štefl, S.; Baade, D.; Avila, G.; Guirao, C.
2012-08-01
We present PUCHEROS, the high-resolution echelle spectrograph, developed at the Center of Astro-Engineering of Pontificia Universidad Catolica de Chile to provide an effective tool for research and teaching of astronomy. The instrument is fed by a single-channel optical fibre and it covers the visible range from 390 to 730 nm in one shot, reaching a spectral resolution of about 20 000. In the era of extremely large telescopes, our instrument aims to exploit the capabilities offered by small telescopes in a cost-effective way, covering the observing needs of a community of astronomers, in Chile and elsewhere, which do not necessarily need large collecting areas for their research. In particular the instrument is well suited for long-term spectroscopic monitoring of bright variable and transient targets down to a V magnitude of about 10. We describe the instrument and present a number of test case examples of observations obtained during commissioning and early science.
Report on monitoring and support instruments for solar physics research from Spacelab
NASA Technical Reports Server (NTRS)
1978-01-01
The Quick Reaction and Special Purpose Facility Definition Team for Solar Physics Spacelab Payloads examined a variety of instruments to fulfill the following functions: (1) solar physics research appropriate to Spacelab, (2) correlative data for research in such fields as aeronomy, magnetospheric physics, ionospheric physics, meteorology and climatology, (3) target selection for activity alert monitoring and (4) pointing accuracy monitoring of Spacelab platforms. In this examination the team accepted a number of restrictions and qualifications: (1) the cost of such instruments must be low, so as not to adversely impact the development of new, research class instrumentation in the early Spacelab era; (2) the instruments should be of such a size that they each would occupy a small fraction of a pointing system, and (3) the weight and power consumption of the instruments should also be small. With these restrictions, the instruments chosen are: the visible light telescope and magnetograph, the extreme-ultraviolet telescope, and the solar irradiance monitor.
NASA Astrophysics Data System (ADS)
Ke, Yaling; Zhao, Yi
2018-04-01
The hierarchy of stochastic Schrödinger equation, previously developed under the unpolarised initial bath states, is extended in this paper for open quantum dynamics under polarised initial bath conditions. The method is proved to be a powerful tool in investigating quantum dynamics exposed to an ultraslow Ohmic bath, as in this case the hierarchical truncation level and the random sampling number can be kept at a relatively small extent. By systematically increasing the system-bath coupling strength, the symmetric Ohmic spin-boson dynamics is investigated at finite temperature, with a very small cut-off frequency. It is confirmed that the slow bath makes the system dynamics extremely sensitive to the initial bath conditions. The localisation tendency is stronger in the polarised initial bath conditions. Besides, the oscillatory coherent dynamics persists even when the system-bath coupling is very strong, in correspondence with what is found recently in the deep sub-Ohmic bath, where also the low-frequency modes dominate.
NASA Astrophysics Data System (ADS)
Zhong, Hui; Xu, Fei; Li, Zenghui; Fu, Ruowen; Wu, Dingcai
2013-05-01
A very important yet really challenging issue to address is how to greatly increase the energy density of supercapacitors to approach or even exceed those of batteries without sacrificing the power density. Herein we report the fabrication of a new class of ultrahigh surface area hierarchical porous carbon (UHSA-HPC) based on the pore formation and widening of polystyrene-derived HPC by KOH activation, and highlight its superior ability for energy storage in supercapacitors with ionic liquid (IL) as electrolyte. The UHSA-HPC with a surface area of more than 3000 m2 g-1 shows an extremely high energy density, i.e., 118 W h kg-1 at a power density of 100 W kg-1. This is ascribed to its unique hierarchical nanonetwork structure with a large number of small-sized nanopores for IL storage and an ideal meso-/macroporous network for IL transfer.
A Review of Recent Advances in Research on Extreme Heat Events
NASA Technical Reports Server (NTRS)
Horton, Radley M.; Mankin, Justin S.; Lesk, Corey; Coffel, Ethan; Raymond, Colin
2016-01-01
Reviewing recent literature, we report that changes in extreme heat event characteristics such as magnitude, frequency, and duration are highly sensitive to changes in mean global-scale warming. Numerous studies have detected significant changes in the observed occurrence of extreme heat events, irrespective of how such events are defined. Further, a number of these studies have attributed present-day changes in the risk of individual heat events and the documented global-scale increase in such events to anthropogenic-driven warming. Advances in process-based studies of heat events have focused on the proximate land-atmosphere interactions through soil moisture anomalies, and changes in occurrence of the underlying atmospheric circulation associated with heat events in the mid-latitudes. While evidence for a number of hypotheses remains limited, climate change nevertheless points to tail risks of possible changes in heat extremes that could exceed estimates generated from model outputs of mean temperature. We also explore risks associated with compound extreme events and nonlinear impacts associated with extreme heat.
Cortical areas involved in Arabic number reading.
Roux, F-E; Lubrano, V; Lauwers-Cances, V; Giussani, C; Démonet, J-F
2008-01-15
Distinct functional pathways for processing words and numbers have been hypothesized from the observation of dissociated impairments of these categories in brain-damaged patients. We aimed to identify the cortical areas involved in Arabic number reading process in patients operated on for various brain lesions. Direct cortical electrostimulation was prospectively used in 60 brain mappings. We used object naming and two reading tasks: alphabetic script (sentences and number words) and Arabic number reading. Cortical areas involved in Arabic number reading were identified according to location, type of interference, and distinctness from areas associated with other language tasks. Arabic number reading was sustained by small cortical areas, often extremely well localized (<1 cm²). Of the 259 language sites detected, 43 (17%) were exclusively involved in Arabic number reading (no sentence or word number reading interference detected in these sites). Specific Arabic number reading interferences were mainly found in three regions: the Broca area (Brodmann area 45), the anterior part of the dominant supramarginal gyrus (Brodmann area 40; p < 0.0001), and the temporal-basal area (Brodmann area 37; p < 0.05). Diverse types of interferences were observed (reading arrest, phonemic or semantic paraphasia). Error patterns were fairly similar across temporal, parietal, and frontal stimulation sites, except for phonemic paraphasias, which were found only in supramarginal gyrus. Our findings strongly support the fact that the acquisition through education of specific symbolic entities, such as Arabic numbers, could result in the segregation and the specialization of anatomically distinct brain areas.
Implicit Space-Time Conservation Element and Solution Element Schemes
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Himansu, Ananda; Wang, Xiao-Yen
1999-01-01
Artificial numerical dissipation is an important issue in large Reynolds number computations. In such computations, the artificial dissipation inherent in traditional numerical schemes can overwhelm the physical dissipation and yield inaccurate results on meshes of practical size. In the present work, the space-time conservation element and solution element method is used to construct new and accurate implicit numerical schemes such that artificial numerical dissipation will not overwhelm physical dissipation. Specifically, these schemes have the property that numerical dissipation vanishes when the physical viscosity goes to zero. These new schemes therefore accurately model the physical dissipation even when it is extremely small. The new schemes presented are two highly accurate implicit solvers for a convection-diffusion equation. The two schemes become identical in the pure convection case, and in the pure diffusion case. The implicit schemes are applicable over the whole Reynolds number range, from purely diffusive equations to convection-dominated equations with very small viscosity. The stability and consistency of the schemes are analysed, and some numerical results are presented. It is shown that, in the inviscid case, the new schemes become explicit and their amplification factors are identical to those of the Leapfrog scheme. On the other hand, in the pure diffusion case, their principal amplification factor becomes the amplification factor of the Crank-Nicolson scheme.
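To make the dissipation statements concrete, the sketch below evaluates the von Neumann amplification factor of the classical theta-scheme with central differences for the model equation u_t + a u_x = nu u_xx. This is not the authors' CE/SE scheme; it only illustrates the two limits quoted in the abstract: a neutrally stable, dissipation-free factor (|G| = 1) in the inviscid case for theta = 1/2, and the Crank-Nicolson factor in the pure-diffusion case.

```python
# Von Neumann amplification factor for the theta-scheme applied to
# u_t + a u_x = nu u_xx with central differences (NOT the paper's CE/SE scheme).
import numpy as np

def amplification(theta, courant, diffusion_number, phi):
    """G for the theta-scheme; phi = k*dx is the scaled wavenumber."""
    z = 1j * courant * np.sin(phi) + 4.0 * diffusion_number * np.sin(phi / 2) ** 2
    return (1 - (1 - theta) * z) / (1 + theta * z)

phi = np.linspace(0.01, np.pi, 200)
for C, D, label in [(0.8, 0.0, "pure convection"), (0.0, 2.0, "pure diffusion"),
                    (0.8, 2.0, "convection-diffusion")]:
    G = amplification(0.5, C, D, phi)          # theta = 1/2: no dissipation when nu = 0
    print(f"{label:22s} max |G| = {np.abs(G).max():.4f}")
```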
Radar investigation of asteroids
NASA Technical Reports Server (NTRS)
Ostro, S. J.
1986-01-01
The number of radar detected asteroids has climbed from 6 to 40 (27 mainbelt plus 13 near-Earth). The dual-circular-polarization radar sample now comprises more than 1% of the numbered asteroids. Radar results for mainbelt asteroids furnish the first available information on the nature of these objects at macroscopic scales. At least one object (2 Pallas) and probably many others are extraordinarily smooth at centimeter-to-meter scales but are extremely rough at some scale between several meters and many kilometers. Pallas has essentially no small-scale structure within the uppermost several meters of the regolith, but the rms slope of this regolith exceeds 20 deg., much larger than typical lunar values (approx. 7 deg.). The origin of these slopes could be the hypervelocity impact cratering process, whose manifestations are likely to be different on low-gravity, low-radius-of-curvature objects from those on the terrestrial planets. The range of mainbelt asteroid radar albedos is very broad and implies big variations in regolith porosity or metal concentration, or both. The highest albedo estimate, for 16 Psyche, is consistent with a surface having porosities typical of lunar soil and a composition nearly completely metallic. Therefore, Psyche might be the collisionally stripped core of a differentiated small planet, and might resemble mineralogically the parent bodies of iron meteorites.
Dynamic pathway modeling of signal transduction networks: a domain-oriented approach.
Conzelmann, Holger; Gilles, Ernst-Dieter
2008-01-01
Mathematical models of biological processes become more and more important in biology. The aim is a holistic understanding of how processes such as cellular communication, cell division, regulation, homeostasis, or adaptation work, how they are regulated, and how they react to perturbations. The great complexity of most of these processes necessitates the generation of mathematical models in order to address these questions. In this chapter we provide an introduction to basic principles of dynamic modeling and highlight both problems and chances of dynamic modeling in biology. The main focus will be on modeling of signal transduction pathways, which requires the application of a special modeling approach. A common pattern, especially in eukaryotic signaling systems, is the formation of multiprotein signaling complexes. Even for a small number of interacting proteins the number of distinguishable molecular species can be extremely high. This combinatorial complexity is due to the great number of distinct binding domains of many receptors and scaffold proteins involved in signal transduction. However, these problems can be overcome using a new domain-oriented modeling approach, which makes it possible to handle complex and branched signaling pathways.
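The combinatorial explosion mentioned above is easy to quantify: a receptor or scaffold with d binding domains, each either empty or occupied, together with p independent modification sites already yields 2^d * 2^p distinguishable species, before counting larger complexes. The short sketch below tabulates this count for a few generic (not chapter-specific) cases.

```python
# Back-of-envelope count of distinguishable species for a scaffold with d binding
# domains (each empty or occupied by one of its ligands) and p binary modification
# sites; numbers are generic, not taken from the chapter.
def n_species(domains, ligands_per_domain=1, phospho_sites=0):
    return (1 + ligands_per_domain) ** domains * 2 ** phospho_sites

for d, p in [(3, 0), (5, 0), (5, 4), (8, 6)]:
    print(f"{d} domains, {p} phospho-sites -> {n_species(d, phospho_sites=p):,} species")
```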
ERIC Educational Resources Information Center
Hall, Darlene Kordich
1999-01-01
Compares three groups of young sexually abused children on seven "Complex" Posttraumatic Stress Disorder/Disorders of Extreme Stress (CP/DES) indices. As cumulative number of types of trauma increased, the number of CP/DES symptoms rose. Results suggest that CP/DES also characterizes sexually abused children, especially those who have…
Extreme Value Theory and the New Sunspot Number Series
NASA Astrophysics Data System (ADS)
Acero, F. J.; Carrasco, V. M. S.; Gallego, M. C.; García, J. A.; Vaquero, J. M.
2017-04-01
Extreme value theory was employed to study solar activity using the new sunspot number index. The block maxima approach was used at yearly (1700-2015), monthly (1749-2016), and daily (1818-2016) scales, selecting the maximum sunspot number value for each solar cycle, and the peaks-over-threshold (POT) technique was used after a declustering process only for the daily data. Both techniques led to negative values for the shape parameters. This implies that the extreme sunspot number value distribution has an upper bound. The return level (RL) values obtained from the POT approach were greater than when using the block maxima technique. Regarding the POT approach, the 110 year (550 and 1100 year) RLs were lower (higher) than the daily maximum observed sunspot number value of 528. Furthermore, according to the block maxima approach, the 10-cycle RL lay within the block maxima daily sunspot number range, as expected, but it was striking that the 50- and 100-cycle RLs were also within that range. Thus, it would seem that the RL is reaching a plateau, and, although one must be cautious, it would be difficult to attain sunspot number values greater than 550. The extreme value trends from the four series (yearly, monthly, and daily maxima per solar cycle, and POT after declustering the daily data) were analyzed with the Mann-Kendall test and Sen’s method. Only the negative trend of the daily data with the POT technique was statistically significant.
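A minimal sketch of the block-maxima workflow described above is shown below, using scipy's GEV implementation on synthetic per-cycle maxima (the actual analysis uses the revised sunspot-number series, which is not reproduced here). Note scipy's sign convention: its shape parameter c equals -xi, so a bounded upper tail (xi < 0) appears as c > 0.

```python
# Block-maxima / GEV sketch on synthetic "cycle maxima" (not the real sunspot series).
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
# synthetic per-cycle maximum sunspot numbers, bounded above by construction
cycle_maxima = 280 - 120 * rng.power(2.5, size=30)

c, loc, scale = genextreme.fit(cycle_maxima)
print(f"shape c = {c:.3f} (xi = {-c:.3f}; xi < 0 implies a bounded upper tail)")

for T in (10, 50, 100):                       # return periods in "cycles"
    rl = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-cycle return level ~ {rl:.1f}")
```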
Role of quasiresonant planetary wave dynamics in recent boreal spring-to-autumn extreme events
Petoukhov, Vladimir; Petri, Stefan; Rahmstorf, Stefan; Coumou, Dim; Kornhuber, Kai; Schellnhuber, Hans Joachim
2016-01-01
In boreal spring-to-autumn (May-to-September) 2012 and 2013, the Northern Hemisphere (NH) has experienced a large number of severe midlatitude regional weather extremes. Here we show that a considerable part of these extremes were accompanied by highly magnified quasistationary midlatitude planetary waves with zonal wave numbers m = 6, 7, and 8. We further show that resonance conditions for these planetary waves were, in many cases, present before the onset of high-amplitude wave events, with a lead time up to 2 wk, suggesting that quasiresonant amplification (QRA) of these waves had occurred. Our results support earlier findings of an important role of the QRA mechanism in amplifying planetary waves, favoring recent NH weather extremes. PMID:27274064
Harder-Lauridsen, Nina Majlund; Kuhn, Katrin Gaardbo; Erichsen, Anders Christian; Mølbak, Kåre; Ethelberg, Steen
2013-01-01
Recent years have seen an increase in the frequency of extreme rainfall and subsequent flooding across the world. Climate change models predict that such flooding will become more common, triggering sewer overflows, potentially with increased risks to human health. In August 2010, a triathlon sports competition was held in Copenhagen, Denmark, shortly after an extreme rainfall. The authors took advantage of this event to investigate disease risks in two comparable cohorts of physically fit, long distance swimmers competing in the sea next to a large urban area. An established model of bacterial concentration in the water was used to examine the level of pollution in a spatio-temporal manner. Symptoms and exposures among athletes were examined with a questionnaire using a retrospective cohort design and the questionnaire investigation was repeated after a triathlon competition held in non-polluted seawater in 2011. Diagnostic information was collected from microbiological laboratories. The results showed that the 3.8 kilometer open water swimming competition coincided with the peak of post-flooding bacterial contamination in 2010, with average concentrations of 1.5×10⁴ E. coli per 100 ml water. The attack rate of disease among 838 swimmers in 2010 was 42% compared to 8% among 931 swimmers in the 2011 competition (relative risk (RR) 5.0; 95% CI: 4.0-6.39). In 2010, illness was associated with having unintentionally swallowed contaminated water (RR 2.5; 95% CI: 1.8-3.4); and the risk increased with the number of mouthfuls of water swallowed. Confirmed aetiologies of infection included Campylobacter, Giardia lamblia and diarrhoeagenic E. coli. The study demonstrated a considerable risk of illness from water intake when swimming in contaminated seawater in 2010, and a small but measurable risk from non-polluted water in 2011. This suggests a significant risk of disease in people ingesting small amounts of flood water following extreme rainfall in urban areas.
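For readers who want to reproduce the headline comparison, the sketch below computes a crude relative risk and 95% confidence interval from counts approximated from the reported attack rates; the published RR of 5.0 (95% CI 4.0-6.39) comes from the full individual-level data, so the crude figure differs slightly.

```python
# Crude relative-risk sketch using counts approximated from the reported attack rates
# (42% of 838 swimmers in 2010, 8% of 931 in 2011); not the paper's adjusted estimate.
import numpy as np

a, n1 = round(0.42 * 838), 838     # ill / total, 2010 (contaminated water)
b, n2 = round(0.08 * 931), 931     # ill / total, 2011 (clean water)

rr = (a / n1) / (b / n2)
se_log_rr = np.sqrt(1/a - 1/n1 + 1/b - 1/n2)     # standard error of log(RR)
lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_rr)
print(f"RR = {rr:.2f}  (95% CI {lo:.2f}-{hi:.2f})")
```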
Wilcox, Kevin R; von Fischer, Joseph C; Muscha, Jennifer M; Petersen, Mark K; Knapp, Alan K
2015-01-01
Intensification of the global hydrological cycle with atmospheric warming is expected to increase interannual variation in precipitation amount and the frequency of extreme precipitation events. Although studies in grasslands have shown sensitivity of aboveground net primary productivity (ANPP) to both precipitation amount and event size, we lack equivalent knowledge for responses of belowground net primary productivity (BNPP) and NPP. We conducted a 2-year experiment in three US Great Plains grasslands--the C4-dominated shortgrass prairie (SGP; low ANPP) and tallgrass prairie (TGP; high ANPP), and the C3-dominated northern mixed grass prairie (NMP; intermediate ANPP)--to test three predictions: (i) both ANPP and BNPP responses to increased precipitation amount would vary inversely with mean annual precipitation (MAP) and site productivity; (ii) increased numbers of extreme rainfall events during high-rainfall years would affect high and low MAP sites differently; and (iii) responses belowground would mirror those aboveground. We increased growing season precipitation by as much as 50% by augmenting natural rainfall via (i) many (11-13) small or (ii) fewer (3-5) large watering events, with the latter coinciding with naturally occurring large storms. Both ANPP and BNPP increased with water addition in the two C4 grasslands, with greater ANPP sensitivity in TGP, but greater BNPP and NPP sensitivity in SGP. ANPP and BNPP did not respond to any rainfall manipulations in the C3 -dominated NMP. Consistent with previous studies, fewer larger (extreme) rainfall events increased ANPP relative to many small events in SGP, but event size had no effect in TGP. Neither system responded consistently above- and belowground to event size; consequently, total NPP was insensitive to event size. The diversity of responses observed in these three grassland types underscores the challenge of predicting responses relevant to C cycling to forecast changes in precipitation regimes even within relatively homogeneous biomes such as grasslands. © 2014 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Wedderburn, Scotte D.; Bailey, Colin P.; Delean, Steven; Paton, David C.
2016-01-01
River flows and salinity are key factors structuring fish assemblages in estuaries. The osmoregulatory ability of a fish determines its capacity to tolerate rising salt levels when dispersal is unfeasible. Estuarine fishes can tolerate minor fluctuations in salinity, but a relatively small number of species in a few families can inhabit extreme hypersaline waters. The Murray-Darling Basin drains an extensive area of south-eastern Australia and river flows end at the mouth of the River Murray. The system is characterized by erratic rainfall and highly variable flows which have been reduced by intensive river regulation and water extraction. The Coorong is a coastal lagoon system extending some 110 km south-eastwards from the mouth. It is an inverted estuary with a salinity gradient that typically ranges from estuarine to triple that of sea water. Hypersalinity in the southern region suits a select suite of biota, including the smallmouth hardyhead Atherinosoma microstoma - a small-bodied, euryhaline fish with an annual life cycle. The population response of A. microstoma in the Coorong was examined during a period of considerable hydrological variation and extreme salinity fluctuations (2001-2014), and the findings were related to its osmoregulatory ability. Most notably, the species was extirpated from over 50% of its range during four continuous years without river flows when salinities exceeded 120 (2007-2010). These salinities exceeded the osmoregulatory ability of A. microstoma. Substantial river flows that reached the Coorong in late 2010 and continued into 2011 led salinities to fall below 100 throughout the Coorong by January 2012. Subsequently, A. microstoma recovered to its former range by January 2012. The findings show that the consequences of prolonged periods of insufficient river flows to temperate inverted estuaries will include substantial declines in the range of highly euryhaline fishes, which also may have wider ecological consequences.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bonar, S.A.; Pauley, G.B.; Thomas, G.L.
1989-01-01
Species profiles are literature summaries of the taxonomy, morphology, range, life history, and environmental requirements of coastal aquatic species. They are designed to assist in environmental impact assessment. The pink salmon, often called humpback salmon or humpy, is easily identified by its extremely small scales (150 to 205) on the lateral line. They are the most abundant of the Pacific salmon species and spawn in North American and Asian streams bordering the Pacific and Arctic Oceans. They have a very simple two-year life cycle, which is so invariable that fish running in odd-numbered years are isolated from fish running in even-numbered years so that no gene flow occurs between them. Adults spawn in the fall and the young fry emerge in the spring. The pink salmon is less desirable in commercial and sport catches than most other salmon because of its small size and its soft pale flesh. The Puget Sound region of Washington State is the southern geographic limit of streams supporting major pink salmon runs in the eastern North Pacific. Pink salmon runs are presently only in odd-numbered years in this region. Optimum water temperatures for spawning range from 7.2 to 12.8 °C. Productive pink salmon streams have less than 5.0% by volume of fine sediments (≤0.8 mm). 87 refs., 5 figs., 1 tab.
The magnitude and colour of noise in genetic negative feedback systems.
Voliotis, Margaritis; Bowsher, Clive G
2012-08-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or 'noise' in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier; for transcriptional autorepression, it is frequently negligible.
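As a rough illustration of the quantity at stake (the variance of protein copy number relative to its mean), the sketch below runs a minimal Gillespie simulation of a protein-only birth-death model with Hill-repressed production, a simple stand-in for transcriptional autorepression; all parameters are illustrative and not taken from the paper.

```python
# Minimal Gillespie sketch of transcriptional autorepression (protein-only birth-death
# model with Hill-repressed production); illustrative parameters only. The Fano factor
# (variance / mean copy number) corresponds to the "relative variance" in the abstract.
import numpy as np

rng = np.random.default_rng(5)

def simulate(k_max=20.0, K=30.0, hill=2.0, gamma=0.1, t_end=5000.0):
    t, p = 0.0, 0
    samples, t_sample = [], 0.0
    while t < t_end:
        birth = k_max / (1.0 + (p / K) ** hill)   # repressed production rate
        death = gamma * p
        total = birth + death
        t += rng.exponential(1.0 / total)
        while t_sample < t and t_sample < t_end:  # record the state on a regular grid
            samples.append(p)
            t_sample += 1.0
        p += 1 if rng.random() < birth / total else -1
    return np.array(samples[500:])                # drop the initial transient

p = simulate()
print(f"mean = {p.mean():.1f}, Fano factor = {p.var() / p.mean():.2f}")
```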
An L-stable method for solving stiff hydrodynamics
NASA Astrophysics Data System (ADS)
Li, Shengtai
2017-07-01
We develop a new method for simulating the coupled dynamics of gas and multi-species dust grains. The dust grains are treated as pressure-less fluids and their coupling with gas is through stiff drag terms. If an explicit method is used, the numerical time step is subject to the stopping time of the dust particles, which can become extremely small for small grains. The previous semi-implicit method [1] uses the second-order trapezoidal rule (TR) on the stiff drag terms and it works only for moderately small size of the dust particles. This is because the TR method is only A-stable, not L-stable. In this work, we use the TR-BDF2 method [2] for the stiff terms in the coupled hydrodynamic equations. The L-stability of TR-BDF2 proves essential in treating a number of dust species. The combination of the TR-BDF2 method with the explicit discretization of other hydro terms can solve a wide variety of stiff hydrodynamics equations accurately and efficiently. We have implemented our method in our LA-COMPASS (Los Alamos Computational Astrophysics Suite) package. We have applied the code to simulate some dusty proto-planetary disks and obtained very good match with astronomical observations.
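A minimal sketch of a single TR-BDF2 step (gamma = 2 - sqrt(2)) applied to a linear gas-dust drag toy model is shown below. It is not the LA-COMPASS implementation, but it illustrates why the L-stable TR/BDF2 composite remains well behaved at time steps thousands of times larger than the stopping time, where the plain trapezoidal rule would leave weakly damped, sign-flipping oscillations in the stiff mode.

```python
# One-step TR-BDF2 sketch for a linear gas-dust drag toy model, dv/dt = A v, with a
# very short stopping time (not the paper's LA-COMPASS scheme).
import numpy as np

t_stop, eps = 1e-6, 0.01                      # stopping time, dust-to-gas mass ratio
A = np.array([[-1.0 / t_stop,  1.0 / t_stop],
              [ eps / t_stop, -eps / t_stop]])   # rows: dust, gas (momentum conserving)
I = np.eye(2)

def trbdf2_step(v, dt, gamma=2.0 - np.sqrt(2.0)):
    # TR stage over gamma*dt
    v_g = np.linalg.solve(I - 0.5 * gamma * dt * A,
                          v + 0.5 * gamma * dt * A @ v)
    # BDF2 stage over the remaining (1 - gamma)*dt
    c1 = 1.0 / (gamma * (2.0 - gamma))
    c2 = (1.0 - gamma) ** 2 / (gamma * (2.0 - gamma))
    c3 = (1.0 - gamma) / (2.0 - gamma)
    return np.linalg.solve(I - c3 * dt * A, c1 * v_g - c2 * v)

v = np.array([1.0, 0.0])                      # initial dust and gas velocities
dt = 1e-2                                     # 10,000x the stopping time
for _ in range(10):
    v = trbdf2_step(v, dt)
print("dust, gas velocities:", v)             # both relax to eps/(1+eps) ~ 0.0099
```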
NASA Technical Reports Server (NTRS)
Heinemann, K.
1985-01-01
The interaction of 100 and 200 keV electron beams with amorphous alumina, titania, and aluminum nitride substrates and nanometer-size palladium particulate deposits was investigated for the two extreme cases of (1) large-area electron-beam flash-heating and (2) small-area high-intensity electron-beam irradiation. The former simulates a short-term heating effect with minimum electron irradiation exposure, the latter simulates high-dosage irradiation with minimum heating effect. All alumina and titania samples responded to the flash-heating treatment with significant recrystallization. However, the size, crystal structure, shape, and orientation of the grains depended on the type and thickness of the films and the thickness of the Pd deposit. High-dosage electron irradiation also readily crystallized the alumina substrate films but did not affect the titania films. The alumina recrystallization products were usually either all in the alpha phase, or they were a mixture of small grains in a number of low-temperature phases including gamma, delta, kappa, beta, theta-alumina. Palladium deposits reacted heavily with the alumina substrates during either treatment, but they were very little affected when supported on titania. Both treatments had the same, less prominent localized crystallization effect on aluminum nitride films.
Waudby, Helen P; Petit, Sophie
2017-05-01
Deserts exhibit extreme climatic conditions. Small desert-dwelling vertebrates have physiological and behavioral adaptations to cope with these conditions, including the ability to seek shelter. We investigated the temperature (T) and relative humidity (RH) regulating properties of the soil cracks that characterize the extensive cracking-clay landscapes of arid Australia, and the extent of their use by 2 small marsupial species: fat-tailed and stripe-faced dunnarts (Sminthopsis crassicaudata and Sminthopsis macroura). We measured hourly (over 24-h periods) the T and RH of randomly-selected soil cracks compared to outside conditions, during 2 summers and 2 winters. We tracked 17 dunnarts (8 Sminthopsis crassicaudata and 9 Sminthopsis macroura) to quantify their use of cracks. Cracks consistently moderated microclimate, providing more stable conditions than available from non-crack points, which often displayed comparatively dramatic fluctuations in T and RH. Both dunnart species used crack shelters extensively. Cracks constitute important shelter for small animals during extreme conditions by providing a stable microclimate, which is typically cooler than outside conditions in summer and warmer in winter. Cracks likely play a fundamental sheltering role by sustaining the physiological needs of small mammal populations. Globally, cracking-clay areas are dominated by agricultural land uses, including livestock grazing. Management of these systems should focus not only on vegetation condition, but also on soil integrity, to maintain shelter resources for ground-dwelling fauna. © 2016 International Society of Zoological Sciences, Institute of Zoology/Chinese Academy of Sciences and John Wiley & Sons Australia, Ltd.
Extreme Vertical Gusts in the Atmospheric Boundary Layer
2015-07-01
... significant effect on the statistics of the rare, extreme gusts. In the lowest 5,000 ft, boundary layer effects make small to moderate vertical gusts ... The report further covers the effects of gust shape and defines the adiabatic lapse rate as the rate of change of temperature with altitude that would occur if a parcel of air were transported adiabatically.
Mixed effects versus fixed effects modelling of binary data with inter-subject variability.
Murphy, Valda; Dunne, Adrian
2005-04-01
The question of whether or not a mixed effects model is required when modelling binary data with inter-subject variability and within subject correlation was reported in this journal by Yano et al. (J. Pharmacokin. Pharmacodyn. 28:389-412 [2001]). That report used simulation experiments to demonstrate that, under certain circumstances, the use of a fixed effects model produced more accurate estimates of the fixed effect parameters than those produced by a mixed effects model. The Laplace approximation to the likelihood was used when fitting the mixed effects model. This paper repeats one of those simulation experiments, with two binary observations recorded for every subject, and uses both the Laplace and the adaptive Gaussian quadrature approximations to the likelihood when fitting the mixed effects model. The results show that the estimates produced using the Laplace approximation include a small number of extreme outliers. This was not the case when using the adaptive Gaussian quadrature approximation. Further examination of these outliers shows that they arise in situations in which the Laplace approximation seriously overestimates the likelihood in an extreme region of the parameter space. It is also demonstrated that when the number of observations per subject is increased from two to three, the estimates based on the Laplace approximation no longer include any extreme outliers. The root mean squared error is a combination of the bias and the variability of the estimates. Increasing the sample size is known to reduce the variability of an estimator with a consequent reduction in its root mean squared error. The estimates based on the fixed effects model are inherently biased and this bias acts as a lower bound for the root mean squared error of these estimates. Consequently, it might be expected that for data sets with a greater number of subjects the estimates based on the mixed effects model would be more accurate than those based on the fixed effects model. This is borne out by the results of a further simulation experiment with an increased number of subjects in each set of data. The difference in the interpretation of the parameters of the fixed and mixed effects models is discussed. It is demonstrated that the mixed effects model and parameter estimates can be used to estimate the parameters of the fixed effects model but not vice versa.
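To make the quadrature issue concrete, the sketch below fits a random-intercept logistic model to simulated data with two binary observations per subject by maximizing a Gauss-Hermite approximation of the marginal likelihood. An ordinary (non-adaptive) rule with 20 nodes is used for brevity; the Laplace and adaptive Gaussian quadrature fits discussed in the paper were produced with dedicated mixed-model software and are not reproduced here.

```python
# Gauss-Hermite marginal-likelihood sketch for a random-intercept logistic model with
# two binary observations per subject (simulated data; not the paper's software setup).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
n_subj, n_obs = 200, 2
beta0_true, beta1_true, sd_true = -1.0, 1.0, 1.5

x = rng.integers(0, 2, size=(n_subj, n_obs)).astype(float)   # binary covariate
b = sd_true * rng.standard_normal(n_subj)                    # subject random intercepts
prob = 1.0 / (1.0 + np.exp(-(beta0_true + beta1_true * x + b[:, None])))
y = (rng.random((n_subj, n_obs)) < prob).astype(float)

nodes, weights = np.polynomial.hermite.hermgauss(20)         # Gauss-Hermite rule

def neg_log_marginal(params):
    beta0, beta1, log_sd = params
    sd = np.exp(log_sd)
    ll = 0.0
    for i in range(n_subj):
        # integrate over b_i ~ N(0, sd^2) via the substitution b = sqrt(2)*sd*t
        bvals = np.sqrt(2.0) * sd * nodes
        eta = np.clip(beta0 + beta1 * x[i][:, None] + bvals[None, :], -35, 35)
        pi = 1.0 / (1.0 + np.exp(-eta))
        lik_given_b = np.prod(np.where(y[i][:, None] == 1, pi, 1 - pi), axis=0)
        ll += np.log(np.sum(weights * lik_given_b) / np.sqrt(np.pi))
    return -ll

fit = minimize(neg_log_marginal, x0=np.array([0.0, 0.0, 0.0]), method="Nelder-Mead")
beta0_hat, beta1_hat, log_sd_hat = fit.x
print(f"beta0={beta0_hat:.2f}, beta1={beta1_hat:.2f}, sd={np.exp(log_sd_hat):.2f}")
```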
PROBABILITIES OF TEMPERATURE EXTREMES IN THE U.S.
The model Temperature Extremes Version 1.0 provides the capability to estimate the probability, for 332 locations in the 50 U.S. states, that an extreme temperature will occur for one or more consecutive days and/or for any number of days in a given month or season, based on stat...
Boubred, F; Herlenius, E; Bartocci, M; Jonsson, B; Vanpée, M
2015-11-01
Electrolyte balances have not been sufficiently evaluated in extremely preterm infants after early parenteral nutrition. We investigated the risk of early hypophosphatemia and hypokalemia in extremely preterm infants born small for gestational age (SGA) who received nutrition as currently recommended. This prospective, observational cohort study included all consecutive extremely preterm infants born at 24-27 weeks who received high amino acids and lipid perfusion from birth. We evaluated the electrolyte levels of SGA infants and infants born appropriate for gestational age (AGA) during the first five days of life. The 12 SGA infants had lower plasma potassium levels from Day One compared to the 36 AGA infants and were more likely to have hypokalemia (58% vs 17%, p = 0.001) and hypophosphatemia (40% vs 9%, p < 0.01) during the five-day observation period. After adjusting for perinatal factors, SGA remained significantly associated with hypophosphatemia (odds ratio 1.39, confidence intervals 1.07-1.81, p = 0.01). Extremely preterm infants born SGA who were managed with currently recommended early parenteral nutrition had a high risk of early hypokalemia and hypophosphatemia. Potassium and phosphorus intakes should be set at sufficient levels from birth onwards, especially in SGA infants. ©2015 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
The Astrophysics of Merging Black Holes
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy D.
2011-01-01
When two supermassive black holes (SMBHs) approach within 1-10 mpc, gravitational wave (GW) losses begin to dominate the evolution of the binary, pushing the system to merge in a relatively small time. During this final inspiral regime, the system will emit copious energy in GWs, which should be directly detectable by pulsar timing arrays and space-based interferometers. At the same time, any gas or stars in the immediate vicinity of the merging SMBHs can get heated and produce bright electromagnetic (EM) counterparts to the GW signals. We present here a number of possible mechanisms by which simultaneous EM and GW signals will yield valuable new information about galaxy evolution, accretion disk dynamics, and fundamental physics in the most extreme gravitational fields.
Repeated Solid-state Dewetting of Thin Gold Films for Nanogap-rich Plasmonic Nanoislands.
Kang, Minhee; Park, Sang-Gil; Jeong, Ki-Hun
2015-10-15
This work reports a facile wafer-level fabrication for nanogap-rich gold nanoislands for highly sensitive surface enhanced Raman scattering (SERS) by repeating solid-state thermal dewetting of thin gold film. The method provides enlarged gold nanoislands with small gap spacing, which increase the number of electromagnetic hotspots and thus enhance the extinction intensity as well as the tunability for plasmon resonance wavelength. The plasmonic nanoislands from repeated dewetting substantially increase SERS enhancement factor over one order-of-magnitude higher than those from a single-step dewetting process and they allow ultrasensitive SERS detection of a neurotransmitter with extremely low Raman activity. This simple method provides many opportunities for engineering plasmonics for ultrasensitive detection and highly efficient photon collection.
Electromagnetic Signatures of SMBH Coalescence
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy
2012-01-01
When two supermassive black holes (SMBHs) approach within 1-10 mpc, gravitational wave (GW) losses begin to dominate the evolution of the binary, pushing the system to merge in a relatively small time. During this final inspiral regime, the system will emit copious energy in GWs, which should be directly detectable by pulsar timing arrays and space-based interferometers. At the same time, any gas or stars in the immediate vicinity of the merging SMBHs can get heated and produce bright electromagnetic (EM) counterparts to the GW signals. We present here a number of possible mechanisms by which simultaneous EM and GW signals will yield valuable new information about galaxy evolution, accretion disk dynamics, and fundamental physics in the most extreme gravitational fields.
Practice makes perfect in memory recall.
Romani, Sandro; Katkov, Mikhail; Tsodyks, Misha
2016-04-01
A large variability in performance is observed when participants recall briefly presented lists of words. The sources of such variability are not known. Our analysis of a large data set of free recall revealed a small fraction of participants that reached an extremely high performance, including many trials with the recall of complete lists. Moreover, some of them developed a number of consistent input-position-dependent recall strategies, in particular recalling words consecutively ("chaining") or in groups of consecutively presented words ("chunking"). The time course of acquisition and particular choice of positional grouping were variable among participants. Our results show that acquiring positional strategies plays a crucial role in improvement of recall performance. © 2016 Romani et al.; Published by Cold Spring Harbor Laboratory Press.
The San Andreas fault experiment. [gross tectonic plates relative velocity
NASA Technical Reports Server (NTRS)
Smith, D. E.; Vonbun, F. O.
1973-01-01
A plan was developed during 1971 to determine gross tectonic plate motions along the San Andreas Fault System in California. Knowledge of the gross motion along the total fault system is an essential component in the construction of realistic deformation models of fault regions. Such mathematical models will be used in the future for studies which will eventually lead to prediction of major earthquakes. The main purpose of the experiment described is the determination of the relative velocity of the North American and the Pacific Plates. This motion being so extremely small, cannot be measured directly but can be deduced from distance measurements between points on opposite sites of the plate boundary taken over a number of years.
Black Hole Coalescence: The Gravitational Wave Driven Phase
NASA Technical Reports Server (NTRS)
Schnittman, Jeremy D.
2011-01-01
When two supermassive black holes (SMBHs) approach within 1-10 mpc, gravitational wave (GW) losses begin to dominate the evolution of the binary, pushing the system to merge in a relatively small time. During this final inspiral regime, the system will emit copious energy in GWs, which should be directly detectable by pulsar timing arrays and space-based interferometers. At the same time, any gas or stars in the immediate vicinity of the merging SMBHs can get heated and produce bright electromagnetic (EM) counterparts to the GW signals. We present here a number of possible mechanisms by which simultaneous EM and GW signals will yield valuable new information about galaxy evolution, accretion disk dynamics, and fundamental physics in the most extreme gravitational fields.
Evidence for a bound on the lifetime of de Sitter space
NASA Astrophysics Data System (ADS)
Freivogel, Ben; Lippert, Matthew
2008-12-01
Recent work has suggested a surprising new upper bound on the lifetime of de Sitter vacua in string theory. The bound is parametrically longer than the Hubble time but parametrically shorter than the recurrence time. We investigate whether the bound is satisfied in a particular class of de Sitter solutions, the KKLT vacua. Despite the freedom to make the supersymmetry breaking scale exponentially small, which naively would lead to extremely stable vacua, we find that the lifetime is always less than about exp(10^22) Hubble times, in agreement with the proposed bound. This result, however, is contingent on several estimates and assumptions; in particular, we rely on a conjectural upper bound on the Euler number of the Calabi-Yau fourfolds used in KKLT compactifications.
Cultural definitions of elder maltreatment in Portugal.
Mercurio, Andrea E; Nyborn, Justin
2006-01-01
A small convenience sample of 34 participants (17 males, 17 females) from the Portuguese islands of the Azores and Madeira was asked to provide examples of how extreme, moderate, and mild maltreatment towards an elder would be defined in their culture and society. Neglect, especially psychological neglect, physical maltreatment, and psychological maltreatment were the most frequently reported types of maltreatment. References to neglect and physical maltreatment appeared most often as examples of extreme maltreatment. In general, men were somewhat more likely than women to include physical aggression in their examples of maltreatment. As examples of extreme maltreatment, females provided significantly more examples of abandonment than males. Although interpretations of the findings must be cautious because of the small sample size and limited statistical power, the study illustrates a procedure for assessing constructs of elder mistreatment in a way that attends to respondents' own constructions of the phenomenon.
An Overview of 2014 SBIR Phase I and Phase II Materials Structures for Extreme Environments
NASA Technical Reports Server (NTRS)
Nguyen, Hung D.; Steele, Gynelle C.; Morris, Jessica R.
2015-01-01
NASA's Small Business Innovation Research (SBIR) program focuses on technological innovation by investing in the development of innovative concepts and technologies to help NASA mission directorates address critical research needs for Agency programs. This report highlights nine of the innovative SBIR 2014 Phase I and Phase II projects that emphasize one of NASA Glenn Research Center's six core competencies: Materials and Structures for Extreme Environments. The technologies cover a wide spectrum of applications such as high-temperature environmental barrier coating systems, deployable space structures, solid oxide fuel cells, and self-lubricating hard coatings for extreme temperatures. Each featured entry describes the innovation and technical objective and highlights NASA, commercial, and industrial applications. This report provides an opportunity for NASA engineers, researchers, and program managers to learn how NASA SBIR technologies could help their programs and projects, and it may lead to collaborations and partnerships between the small SBIR companies and NASA that would benefit both.
Hot days induced by precipitation deficits at the global scale
Mueller, Brigitte; Seneviratne, Sonia I.
2012-01-01
Global warming increases the occurrence probability of hot extremes, and improving the predictability of such events is thus becoming of critical importance. Hot extremes have been shown to be induced by surface moisture deficits in some regions. In this study, we assess whether such a relationship holds at the global scale. We find that wide areas of the world display a strong relationship between the number of hot days in the regions’ hottest month and preceding precipitation deficits. The occurrence probability of an above-average number of hot days is over 70% after precipitation deficits in most parts of South America as well as the Iberian Peninsula and Eastern Australia, and over 60% in most of North America and Eastern Europe, while it is below 30–40% after wet conditions in these regions. Using quantile regression analyses, we show that the impact of precipitation deficits on the number of hot days is asymmetric, i.e., extremely high numbers of hot days are most strongly influenced. This relationship also applies to the 2011 extreme event in Texas. These findings suggest that effects of soil moisture-temperature coupling are geographically more widespread than commonly assumed. PMID:22802672
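A minimal sketch of the quantile-regression idea described above, using synthetic data and illustrative variable names (precip_index, hot_days) rather than the study's actual indices: regress the number of hot days on a preceding-precipitation index at several quantiles and compare the slopes.

```python
# Toy sketch of the quantile-regression idea described above: regress the number
# of hot days on a standardized preceding-precipitation index at several
# quantiles and compare the slopes. All data are synthetic and the variable
# names are illustrative, not those used in the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
precip_index = rng.normal(size=n)              # negative values = precipitation deficit
deficit = np.maximum(-precip_index, 0.0)
# Hot-day counts: larger and more variable after deficits (heteroscedastic by design).
hot_days = 10 + 2.0 * deficit + deficit * rng.gamma(2.0, 2.0, size=n) + rng.normal(size=n)

X = sm.add_constant(precip_index)
for q in (0.5, 0.9):                           # median vs upper tail of hot-day counts
    res = sm.QuantReg(hot_days, X).fit(q=q)
    print(f"quantile {q}: slope = {res.params[1]:.2f}")
# A steeper (more negative) slope at q = 0.9 than at q = 0.5 reflects the asymmetry
# reported above: precipitation deficits matter most for extreme numbers of hot days.
```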
Whole plant acclimation responses by finger millet to low nitrogen stress.
Goron, Travis L; Bhosekar, Vijay K; Shearer, Charles R; Watts, Sophia; Raizada, Manish N
2015-01-01
The small grain cereal, finger millet (FM, Eleusine coracana L. Gaertn), is valued by subsistence farmers in India and East Africa as a low-input crop. It is reported by farmers to require no added nitrogen (N), or only residual N, to produce grain. Exact mechanisms underlying the acclimation responses of FM to low N are largely unknown, both above and below ground. In particular, the responses of FM roots and root hairs to N or any other nutrient have not previously been reported. Given its low N requirement, FM also provides a rare opportunity to study long-term responses to N starvation in a cereal species. The objective of this study was to survey the shoot and root morphometric responses of FM, including root hairs, to low N stress. Plants were grown in pails in a semi-hydroponic system on clay containing extremely low background N, supplemented with N or no N. To our surprise, plants grown without deliberately added N grew to maturity, looked relatively normal and produced healthy seed heads. Plants responded to the low N treatment by decreasing shoot, root, and seed head biomass. These declines under low N were associated with decreased shoot tiller number, crown root number, total crown root length and total lateral root length, but with no consistent changes in root hair traits. Changes in tiller and crown root number appeared to coordinate the above and below ground acclimation responses to N. We discuss the remarkable ability of FM to grow to maturity without deliberately added N. The results suggest that FM should be further explored to understand this trait. Our observations are consistent with indigenous knowledge from subsistence farmers in Africa and Asia, where it is reported that this crop can survive extreme environments.
Evaluation of the Impact of Ambient Temperatures on Occupational Injuries in Spain.
Martínez-Solanas, Èrica; López-Ruiz, María; Wellenius, Gregory A; Gasparrini, Antonio; Sunyer, Jordi; Benavides, Fernando G; Basagaña, Xavier
2018-06-01
Extreme cold and heat have been linked to an increased risk of occupational injuries. However, the evidence is still limited to a small number of studies of people with relatively few injuries and with a limited geographic extent, and the corresponding economic effect has not been studied in detail. We assessed the relationship between ambient temperatures and occupational injuries in Spain along with its economic effect. The daily number of occupational injuries that caused at least one day of leave and the daily maximum temperature were obtained for each Spanish province for the years 1994-2013. We estimated temperature-injuries associations with distributed lag nonlinear models, and then pooled the results using a multivariate meta-regression model. We calculated the number of injuries attributable to cold and heat, the corresponding workdays lost, and the resulting economic effect. The study included 15,992,310 occupational injuries. Overall, 2.72% [95% confidence interval (CI): 2.44-2.97] of all occupational injuries were attributed to nonoptimal ambient temperatures, with moderate heat accounting for the highest fraction. This finding corresponds to an estimated 0.67 million (95% CI: 0.60-0.73) person-days of work lost every year in Spain due to temperature, or an annual average of 42 d per 1,000 workers. The estimated annual economic burden is €370 million, or 0.03% of Spain's GDP (€2,015). Our findings suggest that extreme ambient temperatures increased the risk of occupational injuries, with substantial estimated health and economic costs. These results call for public health interventions to protect workers in the context of climate change. https://doi.org/10.1289/EHP2590.
Economic Fluctuations and Statistical Physics: Quantifying Extremely Rare and Much Less Rare Events
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2008-03-01
Recent analysis of truly huge quantities of empirical data suggests that classic economic theories not only fail for a few outliers, but that there occur similar outliers of every possible size. In fact, if one analyzes only a small data set (say 10^4 data points), then outliers appear to occur as "rare events." However, when we analyze orders of magnitude more data (10^8 data points!), we find orders of magnitude more outliers, so ignoring them is not a responsible option, and studying their properties becomes a realistic goal. We find that the statistical properties of these "outliers" are identical to the statistical properties of everyday fluctuations. For example, a histogram giving the number of fluctuations of a given magnitude x, for fluctuations ranging in magnitude from everyday fluctuations to extremely rare fluctuations that occur with a probability of only 10^-8, is a perfect straight line in a double-log plot. Quantitative analogies between financial fluctuations and earthquakes will be discussed. Two unifying principles that underlie much of the finance analysis we will present are scale invariance and universality [R. N. Mantegna and H. E. Stanley, Introduction to Econophysics: Correlations & Complexity in Finance (Cambridge U. Press, 2000)]. Scale invariance is a property not about algebraic equations but rather about functional equations, which have as their solutions not numbers but rather functional forms. The key idea of universality is that the identical set of laws holds across diverse markets and over diverse time periods. This work was carried out in collaboration with a number of students and colleagues, chief among whom are X. Gabaix (MIT and Princeton) and V. Plerou (Boston University).
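A minimal sketch of the double-log histogram test mentioned above, under the assumption of synthetic heavy-tailed data (a Pareto sample) standing in for financial fluctuations: bin the magnitudes logarithmically and fit a straight line to log(count) versus log(magnitude).

```python
# Minimal sketch of the double-log histogram test mentioned above: generate
# heavy-tailed "fluctuations", bin their magnitudes logarithmically, and fit a
# straight line to log(count) vs log(magnitude). Data are synthetic (Pareto),
# not actual financial returns.
import numpy as np

rng = np.random.default_rng(1)
fluct = rng.pareto(a=3.0, size=10**6) + 1.0       # magnitudes with a power-law tail

bins = np.logspace(0, np.log10(fluct.max()), 40)  # logarithmic bins
counts, edges = np.histogram(fluct, bins=bins)
centers = np.sqrt(edges[:-1] * edges[1:])         # geometric bin centers
mask = counts > 0

slope, intercept = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
print(f"log-log slope ~ {slope:.2f}")             # near-straight line => power-law behaviour
```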
Extreme habitats as refuge from parasite infections? Evidence from an extremophile fish
NASA Astrophysics Data System (ADS)
Tobler, Michael; Schlupp, Ingo; García de León, Francisco J.; Glaubrecht, Matthias; Plath, Martin
2007-05-01
Living in extreme habitats typically requires costly adaptations of any organism tolerating these conditions, but very little is known about potential benefits that trade off these costs. We suggest that extreme habitats may function as refuge from parasite infections, since parasites can become locally extinct either directly, through selection by an extreme environmental parameter on free-living parasite stages, or indirectly, through selection on other host species involved in its life cycle. We tested this hypothesis in a small freshwater fish, the Atlantic molly (Poecilia mexicana) that inhabits normal freshwaters as well as extreme habitats containing high concentrations of toxic hydrogen sulfide. Populations from such extreme habitats are significantly less parasitized by the trematode Uvulifer sp. than a population from a non-sulfidic habitat. We suggest that reduced parasite prevalence may be a benefit of living in sulfidic habitats.
Takeda, Mitsuo
2013-01-01
The paper reviews a technique for fringe analysis referred to as Fourier fringe analysis (FFA) or the Fourier transform method, with a particular focus on its application to metrology of extreme physical phenomena. Examples include the measurement of extremely small magnetic fields with subfluxon sensitivity by electron wave interferometry, subnanometer wavefront evaluation of projection optics for extreme UV lithography, the detection of sub-Ångstrom distortion of a crystal lattice, and the measurement of ultrashort optical pulses in the femtosecond to attosecond range, which show how the advantages of FFA are exploited in these cutting-edge applications.
NASA Technical Reports Server (NTRS)
Goodman, Jindra; Ragent, Boris
1998-01-01
The results of the nephelometer experiment conducted aboard the Probe of the Galileo mission to Jupiter are presented. The tenuous clouds and sparse particulate matter in the relatively particle-free 5-micron "hot spot" region of the Probe's descent were documented from about 0.46 bars to about 12 bars. Three regions of apparent coherent structure were noted, in addition to many indications of extremely small particle concentrations along the descent path. From the first valid measurement at about 0.46 bars down to about 0.55 bars a feeble decaying lower portion of a cloud, corresponding with the predicted ammonia particle cloud, was encountered. A denser, but still very modest, particle structure was present in the pressure regime extending from about 0.76 to a distinctive base at 1.34 bars, and is compatible with the expected ammonium hydrosulfide cloud. No massive water cloud was encountered, although below the second structure, a small, vertically thin layer at about 1.65 bars may be detached from the cloud above, but may also be water condensation, compatible with reported measurements of water abundance from other Galileo Mission experiments. A third small signal region, extending from about 1.9 to 4.5 bars, exhibited quite weak but still distinctive structure, and, although the identification of the light scatterers in this region is uncertain, may also be a water cloud perhaps associated with lateral atmospheric motion and/or reduced to a small mass density by atmospheric subsidence or other explanations. Rough descriptions of the particle size distributions and cloud properties in these regions have been derived, although they may be imprecise because of the small signals and experimental difficulties. These descriptions document the small number densities of particles, the moderate particle sizes, generally in the slightly submicron to few micron range, and the resulting small optical depths, mass densities due to particles, column particle number loading and column mass loading in the atmosphere encountered by the Galileo Probe during its descent.
Müller, P
2004-04-01
The DNA regions upstream and downstream of the Bradyrhizobium japonicum gene sipF were cloned by in vivo techniques and subsequently sequenced. In order to study the function of the predicted genes, a new transposon for in vitro mutagenesis, TnKPK2, was constructed. This mutagenesis system has a number of advantages over other transposons. TnKPK2 itself has no transposase gene, making transposition events stable. Extremely short inverted repeats minimize the length of the transposable element and facilitate the determination of the nucleotide sequence of the flanking regions. Since the transposable element carries a promoterless 'phoA reporter gene, the appearance of functional PhoA fusion proteins indicates that TnKPK2 has inserted in a gene encoding a periplasmic or secreted protein. Although such events are extremely rare, because the transposon has to insert in-frame, in the correct orientation, and at an appropriate location in the target molecule, a direct screening procedure on agar indicator plates permits the identification of candidate clones from large numbers of colonies. In this study, TnKPK2 was used for the construction of various symbiotic mutants of B. japonicum. One of the mutant strains, A2-10, which is defective in a gene encoding a protein that comigrates with bacterioferritin (bcpB), was found to induce the formation of small and ineffective nodules.
Domain atrophy creates rare cases of functional partial protein domains.
Prakash, Ananth; Bateman, Alex
2015-04-30
Protein domains display a range of structural diversity, with numerous additions and deletions of secondary structural elements between related domains. We have observed a small number of cases of surprising large-scale deletions of core elements of structural domains. We propose a new concept called domain atrophy, where protein domains lose a significant number of core structural elements. Here, we implement a new pipeline to systematically identify new cases of domain atrophy across all known protein sequences. The output of this pipeline was carefully checked by hand, which filtered out partial domain instances that were unlikely to represent true domain atrophy due to misannotations or un-annotated sequence fragments. We identify 75 cases of domain atrophy, of which eight cases are found in a three-dimensional protein structure and 67 cases have been inferred based on mapping to a known homologous structure. Domains with structural variations include ancient folds such as the TIM-barrel and Rossmann folds. Most of these domains are observed to show structural loss that does not affect their functional sites. Our analysis has significantly increased the known cases of domain atrophy. We discuss specific instances of domain atrophy and see that there has often been a compensatory mechanism that helps to maintain the stability of the partial domain. Our study indicates that although domain atrophy is an extremely rare phenomenon, protein domains under certain circumstances can tolerate extreme mutations giving rise to partial, but functional, domains.
NASA Astrophysics Data System (ADS)
Durand, Marc; Kraynik, Andrew M.; van Swol, Frank; Käfer, Jos; Quilliet, Catherine; Cox, Simon; Ataei Talebi, Shirin; Graner, François
2014-06-01
Bubble monolayers are model systems for experiments and simulations of two-dimensional packing problems of deformable objects. We explore the relation between the distributions of the number of bubble sides (topology) and the bubble areas (geometry) in the low liquid fraction limit. We use a statistical model [M. Durand, Europhys. Lett. 90, 60002 (2010), 10.1209/0295-5075/90/60002] which takes into account Plateau laws. We predict the correlation between geometrical disorder (bubble size dispersity) and topological disorder (width of bubble side number distribution) over an extended range of bubble size dispersities. Extensive data sets arising from shuffled foam experiments, surface evolver simulations, and cellular Potts model simulations all collapse surprisingly well and coincide with the model predictions, even at extremely high size dispersity. At moderate size dispersity, we recover our earlier approximate predictions [M. Durand, J. Kafer, C. Quilliet, S. Cox, S. A. Talebi, and F. Graner, Phys. Rev. Lett. 107, 168304 (2011), 10.1103/PhysRevLett.107.168304]. At extremely low dispersity, when approaching the perfectly regular honeycomb pattern, we study how both geometrical and topological disorders vanish. We identify a crystallization mechanism and explore it quantitatively in the case of bidisperse foams. Due to the deformability of the bubbles, foams can crystallize over a larger range of size dispersities than hard disks. The model predicts that the crystallization transition occurs when the ratio of largest to smallest bubble radii is 1.4.
Optimal analytic method for the nonlinear Hasegawa-Mima equation
NASA Astrophysics Data System (ADS)
Baxter, Mathew; Van Gorder, Robert A.; Vajravelu, Kuppalapalle
2014-05-01
The Hasegawa-Mima equation is a nonlinear partial differential equation that describes the electric potential due to a drift wave in a plasma. In the present paper, we apply the method of homotopy analysis to a slightly more general Hasegawa-Mima equation, which accounts for hyper-viscous damping or viscous dissipation. First, we outline the method for the general initial/boundary value problem over a compact rectangular spatial domain. We use a two-stage method, where both the convergence control parameter and the auxiliary linear operator are optimally selected to minimize the residual error due to the approximation. To do the latter, we consider a family of operators parameterized by a constant which gives the decay rate of the solutions. After outlining the general method, we consider a number of concrete examples in order to demonstrate the utility of this approach. The results enable us to study properties of the initial/boundary value problem for the generalized Hasegawa-Mima equation. In several cases considered, we are able to obtain solutions with extremely small residual errors after relatively few iterations are computed (residual errors on the order of 10^-15 are found in multiple cases after only three iterations). The results demonstrate that selecting a parameterized auxiliary linear operator can be extremely useful for minimizing residual errors when used concurrently with the optimal homotopy analysis method, suggesting that this approach can prove useful for a number of nonlinear partial differential equations arising in physics and nonlinear mechanics.
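A toy sketch of the "optimal homotopy analysis" idea described above, applied to the simple test problem y' + y = 0, y(0) = 1, not to the Hasegawa-Mima equation itself: build a few homotopy terms with the auxiliary linear operator L = d/dt and then choose the convergence-control parameter h to minimize the integrated squared residual. The function name and grid are illustrative only.

```python
# Toy sketch of the "optimal HAM" idea described above, for y' + y = 0, y(0) = 1
# (not the Hasegawa-Mima equation): build homotopy terms with L = d/dt, then pick
# the convergence-control parameter h that minimizes the squared residual.
import sympy as sp

t, h = sp.symbols("t h")

def ham_partial_sum(order):
    """Return the HAM partial sum u(t; h) of the given order for y' + y = 0."""
    terms = [sp.Integer(1)]                      # y0 = 1 satisfies the initial condition
    for m in range(1, order + 1):
        prev = terms[-1]
        Rm = sp.diff(prev, t) + prev             # residual operator applied to y_{m-1}
        chi = 0 if m == 1 else 1
        ym = chi * prev + h * sp.integrate(Rm, (t, 0, t))   # y_m(0) = 0
        terms.append(sp.expand(ym))
    return sp.expand(sum(terms))

u = ham_partial_sum(4)
residual = sp.diff(u, t) + u                     # plug the partial sum back into the ODE
E = sp.integrate(residual**2, (t, 0, 1))         # squared residual over [0, 1], polynomial in h

# Minimize E(h) over a grid of candidate convergence-control parameters.
E_num = sp.lambdify(h, E, "math")
h_opt = min((k / 100.0 for k in range(-200, 1)), key=E_num)
print(f"optimal h ~ {h_opt:.2f}, residual error ~ {E_num(h_opt):.2e}")
# For this linear test problem the optimum sits near h = -1, where the partial sum
# reproduces the Taylor series of exp(-t) and the residual error is very small.
```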
Sequences of extremal radially excited rotating black holes.
Blázquez-Salcedo, Jose Luis; Kunz, Jutta; Navarro-Lérida, Francisco; Radu, Eugen
2014-01-10
In the Einstein-Maxwell-Chern-Simons theory the extremal Reissner-Nordström solution is no longer the single extremal solution with vanishing angular momentum, when the Chern-Simons coupling constant reaches a critical value. Instead a whole sequence of rotating extremal J=0 solutions arises, labeled by the node number of the magnetic U(1) potential. Associated with the same near horizon solution, the mass of these radially excited extremal solutions converges to the mass of the extremal Reissner-Nordström solution. On the other hand, not all near horizon solutions are also realized as global solutions.
The waviness of the extratropical jet and daily weather extremes
NASA Astrophysics Data System (ADS)
Röthlisberger, Matthias; Martius, Olivia; Pfahl, Stephan
2016-04-01
In recent years the Northern Hemisphere mid-latitudes have experienced a large number of weather extremes with substantial socio-economic impact, such as the European and Russian heat waves in 2003 and 2010, severe winter floods in the United Kingdom in 2013/2014 and devastating winter storms such as Lothar (1999) and Xynthia (2010) in Central Europe. These have triggered an engaged debate within the scientific community on the role of human induced climate change in the occurrence of such extremes. A key element of this debate is the hypothesis that the waviness of the extratropical jet is linked to the occurrence of weather extremes, with a wavier jet stream favouring more extremes. Previous work on this topic is expanded in this study by analyzing the linkage between a regional measure of jet waviness and daily temperature, precipitation and wind gust extremes. We show that indeed such a linkage exists in many regions of the world, however this waviness-extremes linkage varies spatially in strength and sign. Locally, it is strong only where the relevant weather systems, in which the extremes occur, are affected by the jet waviness. Its sign depends on how the frequency of occurrence of the relevant weather systems is correlated with the occurrence of high and low jet waviness. These results go beyond previous studies by noting that also a decrease in waviness could be associated with an enhanced number of some weather extremes, especially wind gust and precipitation extremes over western Europe.
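A minimal sketch of the kind of waviness-extremes linkage analysis described above, using entirely synthetic series: count, for each season, the days on which a local variable exceeds its 95th percentile and correlate that count with the seasonal-mean waviness index. The positive correlation here is by construction; as noted above, the real linkage varies in strength and sign by region.

```python
# Toy sketch of the waviness-extremes linkage analysis described above: correlate a
# seasonal-mean jet waviness index with the seasonal count of local extreme days
# (days above the 95th percentile). All series are synthetic and illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_seasons, days = 40, 90
waviness = rng.normal(size=(n_seasons, days))                 # daily waviness index
temperature = 0.6 * waviness + rng.normal(size=(n_seasons, days))

threshold = np.percentile(temperature, 95)                    # local extreme-day threshold
extreme_days = (temperature > threshold).sum(axis=1)          # extremes per season
mean_waviness = waviness.mean(axis=1)

r, p = stats.spearmanr(mean_waviness, extreme_days)
print(f"Spearman r = {r:.2f} (p = {p:.3f})")                  # positive here by construction
```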
Aćimović, Jugoslava; Mäki-Marttunen, Tuomo; Linne, Marja-Leena
2015-01-01
We developed a two-level statistical model that addresses the question of how properties of neurite morphology shape large-scale network connectivity. We adopted a low-dimensional statistical description of neurites. From the neurite model description we derived the expected number of synapses, the node degree, and the effective radius, the maximal distance between two neurons expected to form at least one synapse. We related these quantities to the network connectivity described using standard measures from graph theory, such as motif counts, clustering coefficient, minimal path length, and small-world coefficient. These measures are used in a neuroscience context to study phenomena ranging from synaptic connectivity in small neuronal networks to large-scale functional connectivity in the cortex. For these measures we provide analytical solutions that clearly relate the different model properties. Neurites that sparsely cover space lead to a small effective radius. If the effective radius is small compared to the overall neuron size, the obtained networks share similarities with uniform random networks, as each neuron connects to a small number of distant neurons. Large neurites with densely packed branches lead to a large effective radius. If this effective radius is large compared to the neuron size, the obtained networks have many local connections. In between these extremes, the networks maximize the variability of connection repertoires. The presented approach connects the properties of neuron morphology with large-scale network properties without requiring heavy simulations with many model parameters. The two-step procedure provides an easier interpretation of the role of each modeled parameter. The model is flexible and each of its components can be further expanded. We identified a range of model parameters that maximizes variability in network connectivity, the property that might affect network capacity to exhibit different dynamical regimes.
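A toy illustration of the graph measures named above, not the paper's model: a random geometric graph stands in for neurons connected within an "effective radius", and the clustering coefficient, characteristic path length, and a small-world coefficient are computed against a density-matched random graph. All parameters are illustrative.

```python
# Toy sketch of the graph measures named above: neurons are points in a square, two
# neurons connect if they lie within an "effective radius", and we compute clustering,
# path length, and a small-world coefficient against a density-matched Erdos-Renyi
# reference graph. Purely illustrative, not the statistical model of the paper.
import networkx as nx

n, effective_radius = 300, 0.12
G = nx.random_geometric_graph(n, effective_radius, seed=0)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()   # largest component

C = nx.average_clustering(G)
L = nx.average_shortest_path_length(G)

# Density-matched random reference graph.
p = nx.density(G)
R = nx.gnp_random_graph(G.number_of_nodes(), p, seed=1)
R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
C_rand = nx.average_clustering(R)
L_rand = nx.average_shortest_path_length(R)

sigma = (C / C_rand) / (L / L_rand)               # small-world coefficient
print(f"C={C:.3f}, L={L:.2f}, sigma={sigma:.2f}")  # sigma >> 1 suggests small-world structure
```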
Extreme cyclone events in the Arctic during wintertime: Variability and Trends
NASA Astrophysics Data System (ADS)
Rinke, Annette; Maturilli, Marion; Graham, Robert; Matthes, Heidrun; Handorf, Doerthe; Cohen, Lana; Hudson, Stephen; Moore, John
2017-04-01
Extreme cyclone events are of significant interest as they can transport much heat, moisture, and momentum poleward. Associated impacts are warming and sea-ice breakup. Recently, several examples of such extreme weather events occurred in winter (e.g. during the N-ICE2015 campaign north of Svalbard and the North Atlantic storm Frank at the end of December 2015). With Arctic amplification and the associated reduced sea-ice cover and warmer sea surface temperatures, an increased occurrence of extreme cyclone events is a plausible scenario. We calculate the spatial patterns, changes, and trends of the number of extreme cyclone events in the Arctic based on ERA-Interim six-hourly sea level pressure (SLP) data for winter (November-February) 1979-2015. Further, we analyze the SLP data from the Ny-Ålesund station for the same 37-year period. We define an extreme cyclone event by an extremely low central pressure (SLP below 985 hPa, which is the 5th percentile of the Ny-Ålesund/N-ICE2015 SLP data) and a deepening of at least 6 hPa/6 hours. Areas of highest frequency of occurrence of extreme cyclones are south/southeast of Greenland (corresponding to the Icelandic Low), between Norway and Svalbard, and in the Barents/Kara Seas. The time series of the number of extreme cyclone events for Ny-Ålesund/N-ICE shows considerable interannual variability. The trend is not consistent through the winter, but we detect an increase in early winter and a slight decrease in late winter. The former is due to the increased occurrence of longer events at the expense of short events. Furthermore, the difference patterns of the frequency of events for months following Septembers with high and low Arctic sea-ice extent ("low minus high sea ice") conform with the change patterns of extreme cyclone numbers (frequency of events "2000-2015 minus 1979-1994") and with the trend patterns. This indicates that the changes in extreme cyclone occurrence in early winter are associated with sea-ice changes (a regional feedback). In contrast, different mechanisms via large-scale circulation changes/teleconnections seem to play a role in late winter.
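A minimal sketch of the extreme-cyclone criterion stated above (SLP below 985 hPa together with a deepening of at least 6 hPa per 6 hours), applied to a synthetic 6-hourly SLP series; with real data this would be a station or gridded ERA-Interim series.

```python
# Minimal sketch of the extreme-cyclone criterion stated above: flag 6-hourly time
# steps where sea level pressure is below 985 hPa AND pressure has dropped by at
# least 6 hPa over the preceding 6 hours. The SLP series here is synthetic.
import numpy as np

def extreme_cyclone_steps(slp_hpa):
    """Boolean mask of 6-hourly steps meeting the extreme-cyclone definition."""
    slp = np.asarray(slp_hpa, dtype=float)
    deepening = np.diff(slp, prepend=slp[0])      # change over the previous 6 h
    return (slp < 985.0) & (deepening <= -6.0)

# Toy example: a deepening low passing the station.
slp = np.array([1002, 996, 989, 982, 979, 981, 988, 995], dtype=float)
mask = extreme_cyclone_steps(slp)
print(np.nonzero(mask)[0])                        # indices of flagged 6-hourly steps
```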
Rainy Day: A Remote Sensing-Driven Extreme Rainfall Simulation Approach for Hazard Assessment
NASA Astrophysics Data System (ADS)
Wright, Daniel; Yatheendradas, Soni; Peters-Lidard, Christa; Kirschbaum, Dalia; Ayalew, Tibebu; Mantilla, Ricardo; Krajewski, Witold
2015-04-01
Progress on the assessment of rainfall-driven hazards such as floods and landslides has been hampered by the challenge of characterizing the frequency, intensity, and structure of extreme rainfall at the watershed or hillslope scale. Conventional approaches rely on simplifying assumptions and are strongly dependent on the location, the availability of long-term rain gage measurements, and the subjectivity of the analyst. Regional and global-scale rainfall remote sensing products provide an alternative, but are limited by relatively short (~15-year) observational records. To overcome this, we have coupled these remote sensing products with a space-time resampling framework known as stochastic storm transposition (SST). SST "lengthens" the rainfall record by resampling from a catalog of observed storms from a user-defined region, effectively recreating the regional extreme rainfall hydroclimate. This coupling has been codified in Rainy Day, a Python-based platform for quickly generating large numbers of probabilistic extreme rainfall "scenarios" at any point on the globe. Rainy Day is readily compatible with any gridded rainfall dataset. The user can optionally incorporate regional rain gage or weather radar measurements for bias correction using the Precipitation Uncertainties for Satellite Hydrology (PUSH) framework. Results from Rainy Day using the CMORPH satellite precipitation product are compared with local observations in two examples. The first example is peak discharge estimation in a medium-sized (~4000 square km) watershed in the central United States performed using CUENCAS, a parsimonious physically-based distributed hydrologic model. The second example is rainfall frequency analysis for Saint Lucia, a small volcanic island in the eastern Caribbean that is prone to landslides and flash floods. The distinct rainfall hydroclimates of the two example sites illustrate the flexibility of the approach and its usefulness for hazard analysis in data-poor regions.
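A toy sketch of the stochastic storm transposition (SST) resampling idea described above, not the Rainy Day code itself: build a catalog of the largest storms over a region, then for each synthetic "year" draw a random number of storms from the catalog, transpose each one randomly in space, and record the largest rainfall over a small watershed window. The grid, catalog, and counts below are entirely synthetic.

```python
# Toy sketch of stochastic storm transposition (SST) as described above: resample
# storms from a catalog of large events and transpose each one randomly in space
# before reading off rainfall over a small "watershed" window. Everything here is
# synthetic and illustrative, not the Rainy Day implementation.
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 60, 60                                   # transposition domain (grid cells)

def synthetic_storm():
    """A random Gaussian rainfall blob standing in for an observed storm field."""
    y0, x0 = rng.integers(10, 50, size=2)
    yy, xx = np.mgrid[0:ny, 0:nx]
    return 80.0 * rng.random() * np.exp(-((yy - y0) ** 2 + (xx - x0) ** 2) / 50.0)

catalog = [synthetic_storm() for _ in range(50)]  # storm catalog
watershed = (slice(28, 34), slice(28, 34))        # small target window

annual_maxima = []
for _ in range(1000):                             # synthetic "years"
    n_storms = rng.poisson(5)                     # storms touching the domain per year
    best = 0.0
    for _ in range(n_storms):
        storm = catalog[rng.integers(len(catalog))]
        shift = tuple(rng.integers(-20, 21, size=2))
        shifted = np.roll(storm, shift, axis=(0, 1))          # spatial transposition
        best = max(best, shifted[watershed].mean())
    annual_maxima.append(best)

print(f"approx. 100-yr watershed rainfall: {np.quantile(annual_maxima, 0.99):.1f} mm")
```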
Kaneko, Keizo; Satake, Chihiro; Yamamoto, Junpei; Takahashi, Hironori; Sawada, Shojiro; Imai, Junta; Yamada, Tetsuya; Katagiri, Hideki
2017-03-31
Fulminant type 1 diabetes is characterized by remarkably rapid and complete β-cell destruction. The established diagnostic criteria include the occurrence of diabetic ketosis soon after the onset of hyperglycemic symptoms, elevated plasma glucose with relatively low HbA1c at the first visit, and extremely low C-peptide. Serum C-peptide levels remain extremely low over a prolonged period. A 26-year-old man with diabetic ketosis was admitted to our hospital. His relatively low HbA1c (7.6%), despite marked hyperglycemia (593 mg/dL) with marked ketosis, indicated abrupt onset. Islet-related autoantibodies were all negative. His data at onset, including extremely low serum C-peptide (0.11 ng/mL), fulfilled the diagnostic criteria for fulminant type 1 diabetes. However, his fasting serum C-peptide levels subsequently showed substantial recovery. While fasting C-peptide stayed below 0.30 ng/mL during the first two months post onset, the levels gradually increased and thereafter fluctuated between 0.60 ng/mL and 0.90 ng/mL until 24 months post onset. By means of multiple daily insulin injection therapy, his glycemic control has been well maintained (HbA1c approximately 6.0%), with relatively small glycemic fluctuations evaluated by continuous glucose monitoring. This clinical course suggests that, despite the abrupt diabetes onset with extremely low C-peptide levels, substantial numbers of β-cells had been spared destruction and their function later showed gradual recovery. Diabetes has come to be considered a much more heterogeneous disease than the present subdivisions suggest. This case does not fit into the existing concepts of either fulminant type 1 or ketosis-prone diabetes, thereby further highlighting the heterogeneity of idiopathic type 1 diabetes.
NASA Astrophysics Data System (ADS)
Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2016-04-01
Urban drainage response is highly dependent on the spatial and temporal structure of rainfall. Therefore, measuring and simulating rainfall at a high spatial and temporal resolution is a fundamental step to fully assess urban drainage system reliability and related uncertainties. This is even more relevant when considering extreme rainfall events. However, current space-time rainfall models have limitations in capturing extreme rainfall intensity statistics for short durations. Here, we use the STREAP (Space-Time Realizations of Areal Precipitation) model, which is a novel stochastic rainfall generator for simulating high-resolution rainfall fields that preserve the spatio-temporal structure of rainfall and its statistical characteristics. The model enables the generation of rain fields at 10^2 m and minute scales in a fast and computer-efficient way, matching the requirements for hydrological analysis of urban drainage systems. The STREAP model was applied successfully in the past to generate high-resolution extreme rainfall intensities over a small domain. A sub-catchment in the city of Luzern (Switzerland) was chosen as a case study to: (i) evaluate the ability of STREAP to disaggregate extreme rainfall intensities for urban drainage applications; (ii) assess the role of stochastic climate variability of rainfall in flow response; and (iii) evaluate the degree of non-linearity between extreme rainfall intensity and system response (i.e. flow) for a small urban catchment. The channel flow at the catchment outlet is simulated by means of a calibrated hydrodynamic sewer model.
Applications of Extreme Value Theory in Public Health.
Thomas, Maud; Lemaitre, Magali; Wilson, Mark L; Viboud, Cécile; Yordanov, Youri; Wackernagel, Hans; Carrat, Fabrice
2016-01-01
We present how Extreme Value Theory (EVT) can be used in public health to predict future extreme events. We applied EVT to weekly rates of Pneumonia and Influenza (P&I) deaths over 1979-2011. We further explored the daily number of emergency department visits in a network of 37 hospitals over 2004-2014. Maxima of grouped consecutive observations were fitted to a generalized extreme value distribution. The distribution was used to estimate the probability of extreme values in specified time periods. An annual P&I death rate of 12 per 100,000 (the highest maximum observed) should be exceeded about once over the next 30 years, and in any given year there is roughly a 3% risk that the P&I death rate will exceed this value. Over the past 10 years, the observed maximum increase in the daily number of visits from the same weekday between two consecutive weeks was 1133. We estimated the probability of exceeding a daily increase of 1,000 in any given month to be 0.37%. The EVT method can be applied to various topics in epidemiology, thus contributing to public health planning for extreme events.
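A minimal sketch of the block-maxima workflow described above, assuming synthetic weekly rates rather than the P&I mortality data: group consecutive observations into yearly blocks, fit a generalized extreme value distribution to the block maxima with scipy, and read off the probability of exceeding the highest maximum observed.

```python
# Minimal sketch of the block-maxima EVT workflow described above: group consecutive
# observations into blocks, fit a generalized extreme value (GEV) distribution to the
# block maxima, and estimate an exceedance probability. The weekly "rates" here are
# synthetic, not the P&I mortality data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
weekly_rate = rng.gamma(shape=4.0, scale=1.5, size=52 * 30)     # 30 "years" of weekly rates

block = 52                                                      # one block = one year
maxima = weekly_rate[: len(weekly_rate) // block * block].reshape(-1, block).max(axis=1)

shape, loc, scale = stats.genextreme.fit(maxima)                # GEV fit to annual maxima
threshold = maxima.max()                                        # highest maximum observed
p_exceed = stats.genextreme.sf(threshold, shape, loc=loc, scale=scale)

print(f"annual exceedance probability: {p_exceed:.1%}")
print(f"expected exceedances in 30 years: {30 * p_exceed:.2f}")
```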
Singh, Hardeep; Unger, Janelle; Zariffa, José; Pakosh, Maureen; Jaglal, Susan; Craven, B Catharine; Musselman, Kristin E
2018-01-15
Purpose: To provide an overview of the feasibility and outcomes of robotic-assisted upper extremity training for individuals with cervical spinal cord injury (SCI), and to identify gaps in current research and articulate future research directions. A systematic search was conducted using Medline, Embase, PsycINFO, CCTR, CDSR, CINAHL and PubMed on June 7, 2017. Search terms included 3 themes: (1) robotics; (2) SCI; (3) upper extremity. Studies using robots for upper extremity rehabilitation among individuals with cervical SCI were included. Identified articles were independently reviewed by two researchers and compared to pre-specified criteria. Disagreements regarding article inclusion were resolved through discussion. The modified Downs and Black checklist was used to assess article quality. Participant characteristics, study and intervention details, training outcomes, robot features, study limitations and recommendations for future studies were abstracted from included articles. Twelve articles (one randomized clinical trial, six case series, five case studies) met the inclusion criteria. Five robots were exoskeletons and three were end-effectors. Sample sizes ranged from 1 to 17 subjects. Articles had variable quality, with quality scores ranging from 8 to 20. Studies had low internal validity, primarily from lack of blinding or a control group. Individuals with mild-moderate impairments showed the greatest improvements on body structure/function and performance-level measures. This review is limited by the small number of articles, low sample sizes, and the diversity of devices, their associated training protocols, and outcome measures. Preliminary evidence suggests robot-assisted interventions are safe, feasible and can reduce active assistance provided by therapists. Implications for rehabilitation: Robot-assisted upper extremity training for individuals with cervical spinal cord injury is safe, feasible and can reduce hands-on assistance provided by therapists. Future research in robotics rehabilitation with individuals with spinal cord injury is needed to determine the optimal device and training protocol as well as effectiveness.
Contrasting the projected change in extreme extratropical cyclones in the two hemispheres
NASA Astrophysics Data System (ADS)
Chang, E. K. M.
2017-12-01
Extratropical cyclones form an important part of the global circulation. They are responsible for much of the high-impact weather in the mid-latitudes, including heavy precipitation, strong winds, and coastal storm surges. They are also the surface manifestation of baroclinic waves that are responsible for much of the transport of momentum, heat, and moisture across the mid-latitudes. Thus, how these storms will change in the future is of much general interest. In particular, how the frequency of extreme cyclones will change is of most concern, since these are the storms that cause the most damage. While the projection of a poleward shift of the Southern Hemisphere storm track and cyclone activity, together with a small decrease in the total number of extratropical cyclones, is widely accepted, as discussed in the 5th Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR5), the projected change in cyclone intensity is still rather uncertain. Several studies have suggested that cyclone intensity, in terms of the absolute value of sea level pressure (SLP) minima or SLP perturbations, is projected to increase under global warming. However, other studies found no increase in wind speed around extratropical cyclones. In this study, the CMIP5 multi-model projection of how the frequency of extreme cyclones, in terms of near-surface wind intensity, may change under global warming has been examined. Results suggest a significant increase in the occurrence of extreme cyclones in the Southern Hemisphere. In the Northern Hemisphere, CMIP5 models project a northeastward shift in extreme cyclone activity over the Pacific, and a significant decrease over the Atlantic. Substantial differences are also found between projected changes in near-surface wind intensity and wind intensity at 850 hPa, suggesting that wind change at 850 hPa is not a good proxy for change in surface wind intensity. Finally, projected changes in the large-scale environment are examined to understand the dynamics behind these contrasting projected changes.
Kaban, Nicole L; Avitabile, Nicholas C; Siadecki, Sebastian D; Saul, Turandot
2016-06-01
The peripheral veins in the arms and forearms of patients with a history of intravenous (IV) drug use may be sclerosed, calcified, or collapsed due to damage from previous injections. These patients may consequently require alternative, more invasive types of vascular access including central venous or intraosseous catheters. We investigated the relationship between hand dominance and the presence of patent upper extremity (UE) veins specifically in patients with a history of IV drug use. We predicted that injection into the non-dominant UE would occur with a higher frequency than into the dominant UE, leading to fewer damaged veins in the dominant UE. If hand dominance affects which upper extremity has more patent veins, providers could focus their first vascular access attempt on the dominant upper extremity. Adult patients were approached for enrollment if they provided a history of IV drug use into one of their upper extremities. Each upper extremity was examined with a high-frequency linear transducer in 3 areas: the antecubital crease, the forearm, and the proximal arm. The number of fully compressible veins ≥1.8 mm in diameter was recorded for each location. The mean difference in the number of veins between the dominant and the non-dominant UE was -1.5789. At a .05 significance level, there was insufficient evidence to suggest the number of compressible veins between patients' dominant and non-dominant arms was significantly different (P = .0872). The number of compressible veins visualized with ultrasound was not greater in the dominant upper extremity as expected. Practitioners may gain more information about potential peripheral venous access sites by asking patients about their previous injection practice patterns. Copyright © 2016 Elsevier Inc. All rights reserved.
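A toy sketch of the paired comparison reported above (per-patient vein counts in the dominant versus non-dominant upper extremity, tested at the .05 level); the counts below are invented for illustration and are not the study data.

```python
# Toy sketch of the paired comparison reported above: compare per-patient counts of
# compressible veins in the dominant vs non-dominant upper extremity with a paired
# t-test at the 0.05 level. Counts below are made up for illustration only.
import numpy as np
from scipy import stats

dominant = np.array([4, 6, 3, 5, 2, 7, 4, 3, 6, 5])
non_dominant = np.array([5, 4, 5, 4, 6, 5, 6, 3, 5, 7])

diff = dominant - non_dominant
t_stat, p_value = stats.ttest_rel(dominant, non_dominant)

print(f"mean difference (dominant - non-dominant): {diff.mean():.2f}")
print(f"paired t-test: t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value >= 0.05:
    print("insufficient evidence of a difference at the 0.05 level")
```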
Geng, Rong; Geng, Zengchao; Huang, Jian; He, Wenxiang; Hou, Lin; She, Diao; Zhao, Jun; Shang, Jie
2015-07-04
The aim of this study was to characterize the diversity of ectomycorrhizal fungi associated with Picea asperata in the Xin Jiashan Forest of the Qinling Mountains. Field investigation was combined with morphological and molecular methods to identify the ectomycorrhizal fungi. There were 37 different ectomycorrhizal fungal types under 14 genera of 10 families on spruce in the Xin Jiashan Forest of the Qinling Mountains; 34 types belonged to Basidiomycetes, 1 to Ascomycetes, and 2 were unknown species. Among the identified ectomycorrhizal fungal types, Inocybe sp. was the dominant group; Russula nauseosa was the most dominant species; Hygrophorus sp., Tomentella coerulea, Inocybe sp. 1, Helotiaceae sp. and Lactarius deterrimus were common species; and the remaining species were rare. In the ectomycorrhizal fungal communities of Picea asperata, dominant families were represented by large numbers of individuals but relatively few species, whereas rare families were represented by small numbers of individuals but relatively more species. Given the extreme degradation of arid ecosystems in western regions, identifying rare taxa for further development and utilization is of considerable practical significance.
Interactive computer aided shift scheduling.
Gaertner, J
2001-12-01
This paper starts with a discussion of computer-aided shift scheduling. After a brief review of earlier approaches, two conceptualizations of this field are introduced. First, shift scheduling is a field that ranges from extremely stable rosters at one pole to rather market-like approaches at the other. Unfortunately, even small alterations of a scheduling problem (e.g., the number of groups, the number of shifts) may call for rather different approaches and tools. Second, scheduling problems are shaped by their environment, and scheduling has to be done within idiosyncratic organizational settings. This calls for the amalgamation of scheduling with other tasks (e.g., accounting) and for reflection on whether better solutions might become possible through changes in the problem definition (e.g., other service levels, organizational changes). Therefore, shift scheduling should be understood as a highly connected problem. Building upon these two conceptualizations, a few examples of software that ease scheduling in some areas of this field are given, and future research questions are outlined.
NASA Astrophysics Data System (ADS)
Blázquez-Salcedo, Jose Luis; Kunz, Jutta; Navarro-Lérida, Francisco; Radu, Eugen
2017-03-01
We consider rotating black hole solutions in five-dimensional Einstein-Maxwell-Chern-Simons theory with a negative cosmological constant and a generic value of the Chern-Simons coupling constant λ. Using both analytical and numerical techniques, we focus on cohomogeneity-1 configurations, with two equal-magnitude angular momenta, which approach at infinity a globally anti-de Sitter background. We find that the generic solutions share a number of basic properties with the known Cvetič, Lü, and Pope black holes which have λ = 1. New features occur as well; for example, when the Chern-Simons coupling constant exceeds a critical value, the solutions are no longer uniquely determined by their global charges. Moreover, the black holes possess radial excitations which can be labelled by the node number of the magnetic gauge potential function. Solutions with small values of λ possess other distinct features. For instance, the extremal black holes there form two disconnected branches, while not all near-horizon solutions are associated with global solutions.
OCTOCAM: A Workhorse Instrument for the Gemini Telescopes During the Era of LSST
NASA Astrophysics Data System (ADS)
Roming, Peter; van der Horst, Alexander; OCTOCAM Team
2018-01-01
The decade of the 2020s is planned to be an era of large surveys and giant telescopes. A trademark of this era will be the large number of interesting objects observed daily by high-cadence surveys, such as the LSST. Because of the sheer numbers, only a very small fraction of these interesting objects will be observed with extremely large telescopes. The follow-up workhorses during this era will be the 8-meter class telescopes and the corresponding instruments that are prepared to pursue these interesting objects. One such workhorse instrument is OCTOCAM, a highly efficient instrument designed to probe the time-domain window with simultaneous broad-wavelength coverage. OCTOCAM optimizes the use of Gemini for broadband imaging and spectroscopic single-target observations. The instrument is designed for high temporal resolution, broad spectral coverage, and moderate spectral resolution. OCTOCAM was selected as part of the Gemini instrumentation program in early 2017. Here we provide a description of the science cases to be addressed, the overall instrument design, and the current status.
Steady inviscid transonic flows over planar airfoils: A search for a simplified procedure
NASA Technical Reports Server (NTRS)
Magnus, R.; Yoshihara, H.
1973-01-01
A finite difference procedure based upon a system of unsteady equations in proper conservation form, with either exact or small-disturbance steady terms, is used to calculate the steady flows over several classes of airfoils. The airfoil condition is fulfilled on a slab whose upstream extremity is a semi-circle overlaying the airfoil leading-edge circle. The limitations of the small-disturbance equations are demonstrated in an extreme example of a blunt-nosed, aft-cambered airfoil. The necessity of using the equations in proper conservation form to capture the shock properly is stressed. The ability of the steady relaxation procedures to capture the shock is briefly examined.
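The point about proper conservation form can be illustrated on a much simpler problem. The sketch below is only a toy, not the paper's transonic procedure: for the inviscid Burgers equation, an upwind scheme written in conservation form propagates a shock at the correct speed, while the analogous non-conservative upwind update does not.

```python
# Toy illustration of the conservation-form point made above, using the inviscid
# Burgers equation u_t + (u^2/2)_x = 0 instead of the transonic equations: an upwind
# scheme in conservation form moves the shock at the correct speed (0.5 for this
# Riemann problem), while the non-conservative upwind scheme leaves it stationary.
import numpy as np

nx, dx, dt, nsteps = 200, 0.01, 0.004, 250        # CFL = u_max * dt / dx = 0.4
x = (np.arange(nx) + 0.5) * dx
u0 = np.where(x < 0.5, 1.0, 0.0)                  # right-moving shock, exact speed 0.5

def shock_position(u):
    return x[0] - 0.5 * dx + dx * u.sum()         # area under u locates the jump

u_cons = u0.copy()
u_noncons = u0.copy()
for _ in range(nsteps):
    f = 0.5 * u_cons**2                           # upwind flux (all wave speeds >= 0 here)
    u_cons[1:] -= dt / dx * (f[1:] - f[:-1])
    u_noncons[1:] -= dt / dx * u_noncons[1:] * (u_noncons[1:] - u_noncons[:-1])

t_final = nsteps * dt
print(f"exact shock position:             {0.5 + 0.5 * t_final:.3f}")
print(f"conservative scheme estimate:     {shock_position(u_cons):.3f}")
print(f"non-conservative scheme estimate: {shock_position(u_noncons):.3f}")
```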
Resonant line transfer in a fog: using Lyman-alpha to probe tiny structures in atomic gas
NASA Astrophysics Data System (ADS)
Gronke, Max; Dijkstra, Mark; McCourt, Michael; Peng Oh, S.
2017-11-01
Motivated by observational and theoretical work that suggests very small-scale (≲ 1 pc) structure in the circumgalactic medium of galaxies and in other environments, we study Lyman-α (Lyα) radiative transfer in an extremely clumpy medium with many clouds of neutral gas along the line of sight. While previous studies have typically considered radiative transfer through sightlines intercepting ≲ 10 clumps, we explored the limit of a very large number of clumps per sightline (up to a covering factor of ~1000). Our main finding is that, for covering factors greater than some critical threshold, a multiphase medium behaves similarly to a homogeneous medium in terms of the emergent Lyα spectrum. The value of this threshold depends on both the clump column density and the movement of the clumps. We estimated this threshold analytically and compared our findings to radiative transfer simulations with a range of covering factors, clump column densities, radii, and motions. Our results suggest that (I) the success in fitting observed Lyα spectra using homogeneous "shell models" (and the corresponding failure of multiphase models) hints at the presence of very small-scale structure in neutral gas, which is in agreement with a number of other observations; and (II) the recurrent problems of reproducing realistic line profiles from hydrodynamical simulations may be due to their inability to resolve small-scale structure, which causes simulations to underestimate the effective covering factor of neutral gas clouds. The movie associated with Fig. B.2 is available at http://www.aanda.org
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manz, Boryana N.; Jackson, Bryan L.; Petit, Rebecca S.
2011-05-31
T cells react to extremely small numbers of activating agonist peptides. Spatial organization of T-cell receptors (TCR) and their peptide-major histocompatibility complex (pMHC) ligands into microclusters is correlated with T-cell activation. In this study, we have designed an experimental strategy that enables control over the number of agonist peptides per TCR cluster, without altering the total number engaged by the cell. Supported membranes, partitioned with grids of barriers to lateral mobility, provide an effective way of limiting the total number of pMHC ligands that may be assembled within a single TCR cluster. Observations directly reveal that restriction of pMHC content within individual TCR clusters can decrease T-cell sensitivity for triggering initial calcium flux at fixed total pMHC density. Further analysis suggests that triggering thresholds are determined by the number of activating ligands available to individual TCR clusters, not by the total number encountered by the cell. Results from a series of experiments in which the overall agonist density and the maximum number of agonists per TCR cluster are independently varied in primary T cells indicate that the most probable minimal triggering unit for calcium signaling is at least four pMHC in a single cluster for this system. In conclusion, this threshold is unchanged by inclusion of coagonist pMHC, but costimulation of CD28 by CD80 can modulate the threshold lower.
Wan, Qun; Guan, Xueying; Yang, Nannan; Wu, Huaitong; Pan, Mengqiao; Liu, Bingliang; Fang, Lei; Yang, Shouping; Hu, Yan; Ye, Wenxue; Zhang, Hua; Ma, Peiyong; Chen, Jiedan; Wang, Qiong; Mei, Gaofu; Cai, Caiping; Yang, Donglei; Wang, Jiawei; Guo, Wangzhen; Zhang, Wenhua; Chen, Xiaoya; Zhang, Tianzhen
2016-06-01
Natural antisense transcripts (NATs) are commonly observed in eukaryotic genomes, but only a limited number of such genes have been identified as being involved in gene regulation in plants. In this research, we investigated the function of small RNA derived from a NAT in fiber cell development. Using a map-based cloning strategy for the first time in tetraploid cotton, we cloned a naked seed mutant gene (N1 ) encoding a MYBMIXTA-like transcription factor 3 (MML3)/GhMYB25-like in chromosome A12, GhMML3_A12, that is associated with fuzz fiber development. The extremely low expression of GhMML3_A12 in N1 is associated with NAT production, driven by its 3' antisense promoter, as indicated by the promoter-driven histochemical staining assay. In addition, small RNA deep sequencing analysis suggested that the bidirectional transcriptions of GhMML3_A12 form double-stranded RNAs and generate 21-22 nt small RNAs. Therefore, in a fiber-specific manner, small RNA derived from the GhMML3_A12 locus can mediate GhMML3_A12 mRNA self-cleavage and result in the production of naked seeds followed by lint fiber inhibition in N1 plants. The present research reports the first observation of gene-mediated NATs and siRNA directly controlling fiber development in cotton. © 2016 The Authors. New Phytologist © 2016 New Phytologist Trust.
An Inverse MOOC Model: Small Virtual Field Geology Classes with Many Teachers (Invited)
NASA Astrophysics Data System (ADS)
De Paor, D. G.; Whitmeyer, S. J.; Bentley, C.
2013-12-01
In the Massive Open Online Courses (MOOCs) mode of instruction, one or a small group of collaborating instructors lecture online to a large (often extremely large) number of students. We are experimenting with an inverse concept: an online classroom in which a small group of collaborating students are taught by dozens of collaborating instructors. This experiment is part of a new NSF TUES Type 3 project titled 'Google Earth for Onsite and Distance Education (GEODE).' Among the goals of the project are the development of an online course called the 'Grand Tour.' We are inviting dozens of colleagues to record virtual field trips (VFTs) and upload them to Google Earth. Students enrolled in the course will be assigned to a small group and tasked with a research project--for example to write a report on foreland thrust belts. They will select a small subset of available VFTs to follow and will be scaffolded by virtual specimens, emergent cross sections, analytical simulations (virtual tricorders), and a game style environment. Instant feedback based on auto-logging will enable adaptive learning. The design is suited to both onsite and distance education and will facilitate access to iconic geologic sites around the world to persons with mobility constraints. We invite input from the community to help guide the design phase of this project. Prototypes of the above-listed learning resources have already been developed and are freely available at http://www.DigitalPlanet.org.
Exact simulation of max-stable processes.
Dombry, Clément; Engelke, Sebastian; Oesting, Marco
2016-06-01
Max-stable processes play an important role as models for spatial extreme events. Their complex structure as the pointwise maximum over an infinite number of random functions makes their simulation difficult. Algorithms based on finite approximations are often inexact and computationally inefficient. We present a new algorithm for exact simulation of a max-stable process at a finite number of locations. It relies on the idea of simulating only the extremal functions, that is, those functions in the construction of a max-stable process that effectively contribute to the pointwise maximum. We further generalize the algorithm by Dieker & Mikosch (2015) for Brown-Resnick processes and use it for exact simulation via the spectral measure. We study the complexity of both algorithms, prove that our new approach via extremal functions is always more efficient, and provide closed-form expressions for their implementation that cover most popular models for max-stable processes and multivariate extreme value distributions. For simulation on dense grids, an adaptive design of the extremal function algorithm is proposed.
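For readers unfamiliar with the construction referred to above, the following Python sketch shows the standard finite-truncation approximation of a max-stable process at a finite number of locations. The Poisson-point parameterization is standard, but the exponential spectral functions are only a toy stand-in (real models such as Brown-Resnick use log-Gaussian processes), and the exact extremal-functions algorithm of the paper is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def finite_approx_max_stable(n_locations, n_points=1000):
    """Truncated version of the construction Z(s) = max_i zeta_i * W_i(s).

    The zeta_i = 1/Gamma_i (Gamma_i the arrival times of a unit-rate Poisson
    process) are the required Poisson points on (0, inf); W_i is a nonnegative
    spectral function with unit mean at every location.  Here the W_i are toy
    iid exponential fields.  Truncating at n_points is exactly what makes this
    approximation inexact."""
    gamma = np.cumsum(rng.exponential(size=n_points))   # Poisson arrival times
    zeta = 1.0 / gamma
    W = rng.exponential(size=(n_points, n_locations))   # toy spectral functions
    return (zeta[:, None] * W).max(axis=0)               # pointwise maximum

print(finite_approx_max_stable(n_locations=5))
```

The truncation at `n_points` is the source of inexactness that the extremal-functions algorithm in the paper is designed to avoid.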
Climate Variability and Weather Extremes: Model-Simulated and Historical Data. Chapter 9
NASA Technical Reports Server (NTRS)
Schubert, Siegfried D.; Lim, Young-Kwon
2012-01-01
Extremes in weather and climate encompass a wide array of phenomena including tropical storms, mesoscale convective systems, snowstorms, floods, heat waves, and drought. Understanding how such extremes might change in the future requires an understanding of their past behavior including their connections to large-scale climate variability and trends. Previous studies suggest that the most robust findings concerning changes in short-term extremes are those that can be most directly (though not completely) tied to the increase in the global mean temperatures. These include the findings that (IPCC 2007): There has been a widespread reduction in the number of frost days in mid-latitude regions in recent decades, an increase in the number of warm extremes, particularly warm nights, and a reduction in the number of cold extremes, particularly cold nights. For North America in particular (CCSP SAP 3.3, 2008): There are fewer unusually cold days during the last few decades. The last 10 years have seen a lower number of severe cold waves than for any other 10-year period in the historical record that dates back to 1895. There has been a decrease in the number of frost days and a lengthening of the frost-free season, particularly in the western part of North America. Other aspects of extremes such as the changes in storminess have a less clear signature of long-term change, with considerable interannual and decadal variability that can obscure any climate change signal. Nevertheless, regarding extratropical storms (CCSP SAP 3.3, 2008): The balance of evidence suggests that there has been a northward shift in the tracks of strong low pressure systems (storms) in both the North Atlantic and North Pacific basins. For North America: Regional analyses suggest that there has been a decrease in snowstorms in the South and lower Midwest of the United States, and an increase in snowstorms in the upper Midwest and Northeast. Despite the progress already made, our understanding of the basic mechanisms by which extremes vary is incomplete. As noted in IPCC (2007), "Incomplete global data sets and remaining model uncertainties still restrict understanding of changes in extremes and attribution of changes to causes, although understanding of changes in the intensity, frequency and risk of extremes has improved." Separating decadal and other shorter-term variability from climate change impacts on extremes requires a better understanding of the processes responsible for the changes. In particular, the physical processes linking sea surface temperature changes to regional climate changes, and a basic understanding of the inherent variability in weather extremes and how that is impacted by atmospheric circulation changes at subseasonal to decadal and longer time scales, are still inadequately understood. Given the fundamental limitations in the time span and quality of global observations, substantial progress on these issues will rely increasingly on improvements in models, with observations continuing to play a critical role, though less as a detection tool, and more as a tool for addressing physical processes, and to ensure the quality of the climate models and the verisimilitude of the simulations (CCSP SAP 1.3, 2008).
Bright focused ion beam sources based on laser-cooled atoms
McClelland, J. J.; Steele, A. V.; Knuffman, B.; Twedt, K. A.; Schwarzkopf, A.; Wilson, T. M.
2016-01-01
Nanoscale focused ion beams (FIBs) represent one of the most useful tools in nanotechnology, enabling nanofabrication via milling and gas-assisted deposition, microscopy and microanalysis, and selective, spatially resolved doping of materials. Recently, a new type of FIB source has emerged, which uses ionization of laser cooled neutral atoms to produce the ion beam. The extremely cold temperatures attainable with laser cooling (in the range of 100 μK or below) result in a beam of ions with a very small transverse velocity distribution. This corresponds to a source with extremely high brightness that rivals or may even exceed the brightness of the industry standard Ga+ liquid metal ion source. In this review we discuss the context of ion beam technology in which these new ion sources can play a role, their principles of operation, and some examples of recent demonstrations. The field is relatively new, so only a few applications have been demonstrated, most notably low energy ion microscopy with Li ions. Nevertheless, a number of promising new approaches have been proposed and/or demonstrated, suggesting that a rapid evolution of this type of source is likely in the near future. PMID:27239245
Stoop, Nicky; Menendez, Mariano E; Mellema, Jos J; Ring, David
2018-01-01
The objective of this study is to evaluate the construct validity of the Patient-Reported Outcomes Measurement Information System (PROMIS) Global Health instrument by establishing its correlation with the Quick-Disabilities of the Arm, Shoulder and Hand (QuickDASH) questionnaire in patients with upper extremity illness. A cohort of 112 patients completed a sociodemographic survey and the PROMIS Global Health and QuickDASH questionnaires. Pearson correlation coefficients were used to evaluate the association of the QuickDASH with the PROMIS Global Health items and subscales. Six of the 10 PROMIS Global Health items were associated with the QuickDASH. The PROMIS Global Physical Health subscale showed moderate correlations with both the QuickDASH and the PROMIS Global Mental Health subscale. There was no significant relationship between the PROMIS Global Mental Health subscale and the QuickDASH. The consistent finding that general patient-reported outcomes correlate moderately with regional patient-reported outcomes suggests that a small number of relatively nonspecific patient-reported outcome measures might be used to assess a variety of illnesses. In our opinion, the blending of physical and mental health questions in the PROMIS Global Health makes this instrument less useful for research or patient care.
Parsimonious kernel extreme learning machine in primal via Cholesky factorization.
Zhao, Yong-Ping
2016-08-01
Recently, extreme learning machine (ELM) has become a popular topic in the machine learning community. By replacing the so-called ELM feature mappings with the nonlinear mappings induced by kernel functions, two kernel ELMs, i.e., P-KELM and D-KELM, are obtained from primal and dual perspectives, respectively. Unfortunately, both P-KELM and D-KELM possess dense solutions whose size grows in direct proportion to the number of training data. To this end, a constructive algorithm for P-KELM (CCP-KELM) is first proposed by virtue of Cholesky factorization, in which the training data incurring the largest reductions on the objective function are recruited as significant vectors. To reduce its training cost further, PCCP-KELM is then obtained with the application of a probabilistic speedup scheme into CCP-KELM. Corresponding to CCP-KELM, a destructive P-KELM (CDP-KELM) is presented using a partial Cholesky factorization strategy, where the training data incurring the smallest reductions on the objective function after their removals are pruned from the current set of significant vectors. Finally, to verify the efficacy and feasibility of the proposed algorithms, experiments are conducted on both small and large benchmark data sets. Copyright © 2016 Elsevier Ltd. All rights reserved.
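As a point of reference for why dense kernel solutions are costly, here is a minimal Python sketch of the kind of regularized kernel solve that kernel ELMs reduce to. The RBF kernel and regularization form are illustrative assumptions, and the Cholesky-based selection of significant vectors in CCP/CDP-KELM is not reproduced.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def dense_kelm_fit(X, y, C=10.0, gamma=1.0):
    """Dense regularized kernel solve: one coefficient per training sample,
    so storage and prediction cost grow with the training set -- the issue
    the constructive/destructive (CCP/CDP-KELM) variants address by keeping
    only a subset of 'significant vectors'."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(n) / C, y)   # O(n^3) solve, dense coefficients

def dense_kelm_predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
alpha = dense_kelm_fit(X, y)
print(dense_kelm_predict(X, alpha, X[:5]))
```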
Bright focused ion beam sources based on laser-cooled atoms
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClelland, J. J.; Wilson, T. M.; Steele, A. V.
2016-03-15
Nanoscale focused ion beams (FIBs) represent one of the most useful tools in nanotechnology, enabling nanofabrication via milling and gas-assisted deposition, microscopy and microanalysis, and selective, spatially resolved doping of materials. Recently, a new type of FIB source has emerged, which uses ionization of laser cooled neutral atoms to produce the ion beam. The extremely cold temperatures attainable with laser cooling (in the range of 100 μK or below) result in a beam of ions with a very small transverse velocity distribution. This corresponds to a source with extremely high brightness that rivals or may even exceed the brightness of the industry standard Ga+ liquid metal ion source. In this review, we discuss the context of ion beam technology in which these new ion sources can play a role, their principles of operation, and some examples of recent demonstrations. The field is relatively new, so only a few applications have been demonstrated, most notably low energy ion microscopy with Li ions. Nevertheless, a number of promising new approaches have been proposed and/or demonstrated, suggesting that a rapid evolution of this type of source is likely in the near future.
Do copepods inhabit hypersaline waters worldwide? A short review and discussion
NASA Astrophysics Data System (ADS)
Anufriieva, Elena V.
2015-11-01
A small number of copepod species have adapted to an existence in the extreme habitat of hypersaline water. Thirteen copepod species have been recorded in the hypersaline waters of Crimea (the largest peninsula in the Black Sea with over 50 hypersaline lakes). Summarizing our own and literature data, the author concludes that the Crimean extreme environment is not an exception: copepod species dwell in hypersaline waters worldwide. There are at least 26 copepod species around the world living at salinity above 100; among them 12 species are found at salinity higher than 200. In the Crimea, Cletocamptus retrogressus is found at salinity 360×10⁻³ (with a density of 1 320 individuals/m³) and Arctodiaptomus salinus at salinity 300×10⁻³ (with a density of 343 individuals/m³). Those species are probably the most halotolerant copepod species in the world. High halotolerance of osmoconforming copepods may be explained by exoosmolyte consumption, mainly with food. High tolerance to many factors in adults, availability of resting stages, and an opportunity of long-distance transportation of resting stages by birds and/or winds are responsible for the wide geographic distribution of these halophilic copepods.
Matsumine, Akihiko; Tsujii, Masaya; Nakamura, Tomoki; Asanuma, Kunihiro; Matsubara, Takao; Kakimoto, Takuya; Yada, Yuki; Takada, Akinori; Ii, Noriko; Nomoto, Yoshihito; Sudo, Akihiro
2016-08-12
When a soft tissue sarcoma (STS) is located at the distal part of an extremity and involves the tendon, a wide excision usually causes severe functional disability. We therefore developed a minimally invasive surgical technique using intraoperative electron-beam radiotherapy (IOERT) to reduce the incidence of post-operative functional disability in patients with peri-/intra-tendinous STS. We assessed the clinical outcomes of the novel minimally invasive surgery. The study population included five patients who received treatment for distal extremity STSs. After elevating the tumor mass, including the tendon and nerve from the tumor bed with a wide margin, a lead board was inserted beneath the tumor mass to shield the normal tissue. IOERT (25-50 Gy) was then applied, and the tumor excised with care taken to maintain the continuity of the tendon. In a desmoid patient, local recurrence was observed outside the irradiated field. No cases of neuropathy or bone necrosis were observed. The mean limb function score was excellent in all patients. None of the high-grade sarcoma patients had local recurrence or distant metastasis. Although the current study is only a pilot study with a small number of patients, it shows that this minimally invasive procedure has the potential to become a standard treatment option for selected patients. H17-250 (registered 2 November 2005) and H25-250 (modified from H17-250, registered 5 December 2013).
Effects of El Niño-Southern Oscillation on sea level anomalies along the Gulf of Mexico coast
NASA Astrophysics Data System (ADS)
Kennedy, Andrew J.; Griffin, Melissa L.; Morey, Steven L.; Smith, Shawn R.; O'Brien, James J.
2007-05-01
Analyses of daily sea level data show the impacts of El Niño-Southern Oscillation (ENSO) in the Gulf of Mexico (GOM). Data from three stations (St. Petersburg, Florida, Pensacola, Florida, and Galveston, Texas), all of which have at least 50 years of daily observations, are processed to identify the interannual signals. Although low frequency (interannual) signals in the sea level anomaly time series are not clearly evident, a low frequency modulation of the extreme anomaly events (upper 10% or lower 10% of the distributions) is identified. Results show that sea level variability is seasonally dependent at all stations, with maximum variability in the winter months. In the eastern GOM, low sea level events in the winter months are more frequent during El Niño (warm phase) conditions when compared to a neutral ENSO phase. This is consistent with ENSO-related changes in the location where extratropical atmospheric low pressure systems form and in the tracks of these weather systems. The impacts of tropical systems in the summer through early fall months on coastal sea level in the GOM are shown by infrequent extreme high and low anomalies coinciding with individual storms. However, the number of storms affecting the data record from a particular sea level station is too small to confirm ENSO-related variability. Statistical methods are employed to demonstrate a significant link between extreme sea level anomalies in the GOM and ENSO during the October to March period.
Introducing the refined gravity hypothesis of extreme sexual size dimorphism
2010-01-01
Background: Explanations for the evolution of female-biased, extreme Sexual Size Dimorphism (SSD), which has puzzled researchers since Darwin, are still controversial. Here we propose an extension of the Gravity Hypothesis (i.e., the GH, which postulates a climbing advantage for small males) that in conjunction with the fecundity hypothesis appears to have the most general power to explain the evolution of SSD in spiders so far. In this "Bridging GH" we propose that bridging locomotion (i.e., walking upside-down under own-made silk bridges) may be behind the evolution of extreme SSD. A biomechanical model shows that there is a physical constraint for large spiders to bridge. This should lead to a trade-off between other traits and dispersal in which bridging would favor smaller sizes and other selective forces (e.g. fecundity selection in females) would favor larger sizes. If bridging allows faster dispersal, small males would have a selective advantage by enjoying more mating opportunities. We predicted that both large males and females would show a lower propensity to bridge, and that SSD would be negatively correlated with sexual dimorphism in bridging propensity. To test these hypotheses we experimentally induced bridging in males and females of 13 species of spiders belonging to the two clades in which bridging locomotion has evolved independently and in which most of the cases of extreme SSD in spiders are found. Results: We found that 1) as the degree of SSD increased and females became larger, females tended to bridge less relative to males, and that 2) smaller males and females show a higher propensity to bridge. Conclusions: Physical constraints make bridging inefficient for large spiders. Thus, in species where bridging is a very common mode of locomotion, small males, by being more efficient at bridging, will be competitively superior and enjoy more mating opportunities. This "Bridging GH" helps to solve the controversial question of what keeps males small and also contributes to explain the wide range of SSD in spiders, as those spider species in which extreme SSD has not evolved but still live in tall vegetation, do not use bridging locomotion to disperse. PMID:20682029
NASA Astrophysics Data System (ADS)
Lindegren, Lennart
2012-01-01
The launch of the Hipparcos satellite in 1989 and the Hubble Space Telescope in 1990 revolutionized astrometry. By no means does this imply that not much progress was made in the ground-based techniques used exclusively until then. On the contrary, the 1960s to 1980s saw an intense development of new or highly improved instruments, including photoelectric meridian circles, automated plate measuring machines, and the use of charge-coupled device (CCD) detectors for small-field differential astrometry (for a review of optical astrometry at the time, see Monet 1988). In the radio domain, very long baseline interferometry (VLBI) astrometry already provided an extragalactic reference frame accurate to about 1 milliarcsecond (mas) (Ma et al. 1990). Spectacular improvements were made in terms of accuracy, the faintness of the observed objects, and their numbers. However, there was a widening gulf between small-angle astrometry, where differential techniques could overcome atmospheric effects down to below 1 mas, and large-angle astrometry, where conventional instruments such as meridian circles seemed to have hit a barrier in the underlying systematic errors at about 100 mas. Though very precise, the small-angle measurements were of limited use for the determination of positions and proper motions, due to the lack of suitable reference objects in the small fields, and even for parallaxes the necessary correction for the mean parallax of background stars was highly non-trivial. Linking the optical observations to the accurate VLBI frame also proved extremely difficult.
Houghton, Bruce F.; Swanson, Don; Rausch, J.; Carey, R.J.; Fagents, S.A.; Orr, Tim R.
2013-01-01
Estimating the mass, volume, and dispersal of the deposits of very small and/or extremely weak explosive eruptions is difficult, unless they can be sampled on eruption. During explosive eruptions of Halema‘uma‘u Crater (Kīlauea, Hawaii) in 2008, we constrained for the first time deposits of bulk volumes as small as 9–300 m³ (1 × 10⁴ to 8 × 10⁵ kg) and can demonstrate that they show simple exponential thinning with distance from the vent. There is no simple fit for such products within classifications such as the Volcanic Explosivity Index (VEI). The VEI is being increasingly used as the measure of magnitude of explosive eruptions, and as an input for both hazard modeling and forecasting of atmospheric dispersal of tephra. The 2008 deposits demonstrate a problem for the use of the VEI, as originally defined, which classifies small, yet ballistic-producing, explosive eruptions at Kīlauea and other basaltic volcanoes as nonexplosive. We suggest a simple change to extend the scale in a fashion inclusive of such very small deposits, and to make the VEI more consistent with other magnitude scales such as the Richter scale for earthquakes. Eruptions of this magnitude constitute a significant risk at Kīlauea and elsewhere because of their high frequency and the growing number of “volcano tourists” visiting basaltic volcanoes.
ERIC Educational Resources Information Center
Nowell, Amy; Hedges, Larry V.
1998-01-01
Uses evidence from seven surveys of the U.S. 12th-grade population and the National Assessment of Educational Progress to show that gender differences in mean and variance in academic achievement are small from 1960 to 1994 but that differences in extreme scores are often substantial. (SLD)
Heavy Tail Behavior of Rainfall Extremes across Germany
NASA Astrophysics Data System (ADS)
Castellarin, A.; Kreibich, H.; Vorogushyn, S.; Merz, B.
2017-12-01
Distributions are termed heavy-tailed if extreme values are more likely than would be predicted by probability distributions that have exponential asymptotic behavior. Heavy-tail behavior often leads to surprise, because historical observations can be a poor guide for the future. Heavy-tail behavior seems to be widespread for hydro-meteorological extremes, such as extreme rainfall and flood events. To date there have been only vague hints to explain under which conditions these extremes show heavy-tail behavior. We use an observational data set consisting of 11 climate variables at 1440 stations across Germany. This homogenized, gap-free data set covers 110 years (1901-2010) at daily resolution. We estimate the upper tail behavior, including its uncertainty interval, of daily precipitation extremes for the 1440 stations at the annual and seasonal time scales. Different tail indicators are tested, including the shape parameter of the Generalized Extreme Value distribution, the upper tail ratio and the obesity index. In a further step, we explore to which extent the tail behavior can be explained by geographical and climate factors. A large number of characteristics is derived, such as station elevation, degree of continentality, aridity, measures for quantifying the variability of humidity and wind velocity, or event-triggering large-scale atmospheric situation. The link between the upper tail behavior and these characteristics is investigated via data mining methods capable of detecting non-linear relationships in large data sets. This exceptionally rich observational data set, in terms of number of stations, length of time series and number of explaining variables, allows insights into the upper tail behavior that are rarely possible given the typical observational data sets available.
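A minimal sketch of one of the tail indicators named above, the GEV shape parameter estimated from annual block maxima (Python; synthetic data rather than the station records, and note that scipy parameterizes the shape with the opposite sign):

```python
import numpy as np
from scipy import stats

def gev_shape_from_annual_maxima(daily_values, years):
    """Fit a GEV to the annual block maxima and return the shape parameter xi;
    xi > 0 indicates a heavy (Frechet-type) upper tail.  scipy's genextreme
    uses c = -xi, hence the sign flip."""
    maxima = np.array([daily_values[years == y].max() for y in np.unique(years)])
    c, loc, scale = stats.genextreme.fit(maxima)
    return -c

# toy example with synthetic heavy-tailed "daily rainfall" (not station data)
rng = np.random.default_rng(1)
years = np.repeat(np.arange(1901, 2011), 365)
daily = rng.pareto(3.0, size=years.size)
print(gev_shape_from_annual_maxima(daily, years))
```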
Probing Prokaryotic Social Behaviors with Bacterial “Lobster Traps”
Connell, Jodi L.; Wessel, Aimee K.; Parsek, Matthew R.; Ellington, Andrew D.; Whiteley, Marvin; Shear, Jason B.
2010-01-01
Bacteria are social organisms that display distinct behaviors/phenotypes when present in groups. These behaviors include the abilities to construct antibiotic-resistant sessile biofilm communities and to communicate with small signaling molecules (quorum sensing [QS]). Our understanding of biofilms and QS arises primarily from in vitro studies of bacterial communities containing large numbers of cells, often greater than 10⁸ bacteria; however, in nature, bacteria often reside in dense clusters (aggregates) consisting of significantly fewer cells. Indeed, bacterial clusters containing 10¹ to 10⁵ cells are important for transmission of many bacterial pathogens. Here, we describe a versatile strategy for conducting mechanistic studies to interrogate the molecular processes controlling antibiotic resistance and QS-mediated virulence factor production in high-density bacterial clusters. This strategy involves enclosing a single bacterium within three-dimensional picoliter-scale microcavities (referred to as bacterial “lobster traps”) defined by walls that are permeable to nutrients, waste products, and other bioactive small molecules. Within these traps, bacteria divide normally into extremely dense (10¹² cells/ml) clonal populations with final population sizes similar to that observed in naturally occurring bacterial clusters. Using these traps, we provide strong evidence that within low-cell-number/high-density bacterial clusters, QS is modulated not only by bacterial density but also by population size and flow rate of the surrounding medium. We also demonstrate that antibiotic resistance develops as cell density increases, with as few as ~150 confined bacteria exhibiting an antibiotic-resistant phenotype similar to biofilm bacteria. Together, these findings provide key insights into clinically relevant phenotypes in low-cell-number/high-density bacterial populations. PMID:21060734
Observational limitations of Bose-Einstein photon statistics and radiation noise in thermal emission
NASA Astrophysics Data System (ADS)
Lee, Y.-J.; Talghader, J. J.
2018-01-01
For many decades, theory has predicted that Bose-Einstein statistics are a fundamental feature of thermal emission into one or a few optical modes; however, the resulting Bose-Einstein-like photon noise has never been experimentally observed. There are at least two reasons for this: (1) relationships to describe the thermal radiation noise for an arbitrary mode structure have yet to be set forth, and (2) the mode and detector constraints necessary for the detection of such light are extremely hard to fulfill. Herein, photon statistics and radiation noise relationships are developed for systems with any number of modes and couplings to an observing space. The results are shown to reproduce existing special cases of thermal emission and are then applied to resonator systems to discuss physically realizable conditions under which Bose-Einstein-like thermal statistics might be observed. Examples include a single isolated cavity and an emitter cavity coupled to a small detector space. Low-mode-number noise theory shows major deviations from solely Bose-Einstein or Poisson treatments and has particular significance because of recent advances in perfect absorption and subwavelength structures both in the long-wave infrared and terahertz regimes. These microresonator devices tend to utilize a small volume with few modes, a regime where the current theory of thermal emission fluctuations and background noise, which was developed decades ago for free-space or single-mode cavities, has no derived solutions.
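For reference, the single-mode thermal limit that any such multimode noise theory must reduce to is the textbook Bose-Einstein result:

```latex
% Single-mode thermal (Bose-Einstein) photon statistics with mean photon
% number \bar{n}:
\begin{align}
  P(n) &= \frac{\bar{n}^{\,n}}{(1+\bar{n})^{\,n+1}}, &
  \langle \Delta n^{2} \rangle &= \bar{n} + \bar{n}^{2},
\end{align}
% versus \langle \Delta n^2 \rangle = \bar{n} for Poisson (coherent) light;
% the extra \bar{n}^2 bunching term is what only becomes observable when the
% number of modes reaching the detector is small.
```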
How To Identify Plasmons from the Optical Response of Nanostructures
2017-01-01
A promising trend in plasmonics involves shrinking the size of plasmon-supporting structures down to a few nanometers, thus enabling control over light–matter interaction at extreme-subwavelength scales. In this limit, quantum mechanical effects, such as nonlocal screening and size quantization, strongly affect the plasmonic response, rendering it substantially different from classical predictions. For very small clusters and molecules, collective plasmonic modes are hard to distinguish from other excitations such as single-electron transitions. Using rigorous quantum mechanical computational techniques for a wide variety of physical systems, we describe how an optical resonance of a nanostructure can be classified as either plasmonic or nonplasmonic. More precisely, we define a universal metric for such classification, the generalized plasmonicity index (GPI), which can be straightforwardly implemented in any computational electronic-structure method or classical electromagnetic approach to discriminate plasmons from single-particle excitations and photonic modes. Using the GPI, we investigate the plasmonicity of optical resonances in a wide range of systems including: the emergence of plasmonic behavior in small jellium spheres as the size and the number of electrons increase; atomic-scale metallic clusters as a function of the number of atoms; and nanostructured graphene as a function of size and doping down to the molecular plasmons in polycyclic aromatic hydrocarbons. Our study provides a rigorous foundation for the further development of ultrasmall nanostructures based on molecular plasmonics. PMID:28651057
NASA Astrophysics Data System (ADS)
Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun
2017-03-01
To further enhance the small targets and suppress the heavy clutters simultaneously, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which could provide a more accurate background estimation and almost eliminate all the salient residuals in the decomposed target image. In addition, considering the fact that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which could not only remove more undesirable components but also accelerate the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments are conducted, demonstrating that the proposed model has a significant improvement over the other nine competitive methods in terms of both clutter suppressing performance and convergence rate.
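A minimal Python/NumPy sketch of the two building blocks described above, the partial-sum-of-singular-values step for the background patch-image and the non-negative shrinkage for the target patch-image; the patch construction, weighting, and full inexact-ALM loop are omitted, and parameter names are illustrative.

```python
import numpy as np

def prox_partial_sum_sv(B, n_keep, tau):
    """Partial-sum-of-singular-values step for the background patch-image:
    the largest n_keep singular values are kept untouched (they carry the
    strong background and edge structure), while the remaining ones are
    soft-thresholded as in the ordinary nuclear-norm proximal operator."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    s = s.copy()
    s[n_keep:] = np.maximum(s[n_keep:] - tau, 0.0)
    return (U * s) @ Vt

def nonneg_soft_threshold(T, tau):
    """Target update with the additional non-negativity constraint: soft-
    thresholding followed by clipping at zero reduces to max(T - tau, 0)."""
    return np.maximum(T - tau, 0.0)
```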
Regioselective Synthesis of Cellulose Ester Homopolymers
Daiqiang Xu; Kristen Voiges; Thomas Elder; Petra Mischnick; Kevin J. Edgar
2012-01-01
Regioselective synthesis of cellulose esters is extremely difficult due to the small reactivity differences between cellulose hydroxyl groups, small differences in steric demand between acyl moieties of interest, and the difficulty of attaching and detaching many protecting groups in the presence of cellulose ester moieties without removing the ester groups. Yet the...
Particles, particles everywhere: What is in the air we breathe?
USDA-ARS?s Scientific Manuscript database
Particulate matter (PM) air pollution consists of extremely small particles, some so small that they can directly enter the bloodstream through the lungs. PM is of prime concern from both health and environmental perspectives. Current research is focused on understanding how PM forms in the atmosphe...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Band, R.N.; Snider, R.J.; Snider, R.M.
1986-07-01
This volume consists of the following reports: Soil Amoeba; Soil and Litter Arthropoda and Earthworm Studies; Biological Studies on Pollinating Insects: Megachilid Bees; Small Vertebrates: Small Mammals and Nesting Birds.
Koiffmann, Celia Priszkulnik
2012-01-01
In recent decades, obesity has reached epidemic proportions worldwide and has become a major concern in public health. Despite heritability estimates of 40 to 70% and the long-recognized genetic basis of obesity in a number of rare cases, the list of common obesity susceptibility variants identified by the currently published genome-wide association studies (GWASs) explains only a small proportion of the individual variation in risk of obesity. It was not until very recently that GWASs of copy number variants (CNVs) in individuals with extreme phenotypes reported a number of large and rare CNVs conferring high risk of obesity, and specifically deletions on chromosome 16p11.2. In this paper, we comment on the recent advances in the field of genetics of obesity with an emphasis on the genes and genomic regions implicated in highly penetrant forms of obesity associated with developmental disorders. Array genomic hybridization in this patient population has afforded discovery opportunities for CNVs that have not previously been detectable. This information can be used to generate new diagnostic arrays and sequencing platforms, which will likely enhance detection of known genetic conditions with the potential to elucidate new disease genes and ultimately help in developing a next-generation sequencing protocol relevant to clinical practice. PMID:23316347
Initial Low-Reynolds Number Iced Aerodynamic Performance for CRM Wing
NASA Technical Reports Server (NTRS)
Woodard, Brian; Diebold, Jeff; Broeren, Andy; Potapczuk, Mark; Lee, Sam; Bragg, Michael
2015-01-01
NASA, FAA, ONERA, and other partner organizations have embarked on a significant, collaborative research effort to address the technical challenges associated with icing on large scale, three-dimensional swept wings. These are extremely complex phenomena important to the design, certification and safe operation of small and large transport aircraft. There is increasing demand to balance trade-offs in aircraft efficiency, cost and noise that tend to compete directly with allowable performance degradations over an increasing range of icing conditions. Computational fluid dynamics codes have reached a level of maturity such that they are being proposed by manufacturers for use in certification of aircraft for flight in icing. However, sufficient high-quality data to evaluate their performance on iced swept wings are not currently available in the public domain, and significant knowledge gaps remain.
Evaluation of unsaturated zone water fluxes in heterogeneous alluvium at a Mojave Basin Site
Nimmo, John R.; Deason, Jeffrey A.; Izbicki, John A.; Martin, Peter
2002-01-01
Vertical and horizontal water fluxes in the unsaturated zone near intermittent streams critically affect ecosystems, water supply, and contaminant transport in arid and semiarid regions. The subsurface near the Oro Grande Wash is typical in having great textural diversity, pronounced layer contrasts, and extremely low hydraulic conductivities associated with nearly dry media. These features prevent a straightforward application of the Darcian method for recharge estimation, which has provided high‐quality flux estimates at simpler, wetter sites. We have augmented the basic Darcian method with theoretical developments such that a small number of core sample unsaturated hydraulic property measurements, combined with additional, easily obtained data (e.g., drillers' logs) can provide useful flux estimates and knowledge of two‐dimensional water behavior beneath the wash.
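For context, the basic relation underlying the Darcian method mentioned above is the Darcy-Buckingham law; the paper's augmentations for layered, nearly dry alluvium are not reproduced here.

```latex
% Darcy-Buckingham relation for vertical unsaturated flow (z positive upward,
% h = matric pressure head, H = h + z = total head):
\begin{equation}
  q \;=\; -K(\theta)\,\frac{\partial H}{\partial z}
    \;=\; -K(\theta)\left(\frac{\partial h}{\partial z} + 1\right),
\end{equation}
% so where the matric-head gradient is negligible (the "unit-gradient" case
% often assumed deep in uniform profiles), the downward flux reduces to
% q \approx -K(\theta), and core-sample measurements of K at the field water
% content give the recharge estimate directly.
```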
Minimally symptomatic hypocalcaemia unmasking celiac disease.
Lazaridis, A; Drosou, M E; Fontalis, A; Prousali, E; Hadwe, S E; Giouleme, O; Petidis, K
2016-11-01
Celiac disease is an autoimmune disease of the small intestine which occurs in genetically predisposed people of all ages. A large clinical spectrum of manifestations accompanies the onset of the disease with diarrhoea, flatulence and weight loss being the most common. However, findings like osteoporosis, iron deficiency, anaemia and hypocalcaemia could also insinuate the existence of the disease. We report the case of a 55-year-old man with numbness and tingling of the upper extremities due to hypocalcaemia that proved to be an uncommon case of celiac disease. A non-negligible number of adult patients with celiac disease can present with only minor and subclinical manifestations of the disease. As such, hypocalcaemia may be the sole manifestation of celiac disease. A high index of suspicion is needed for prompt diagnosis. © The Author(s) 2016.
Photon-efficient super-resolution laser radar
NASA Astrophysics Data System (ADS)
Shin, Dongeek; Shapiro, Jeffrey H.; Goyal, Vivek K.
2017-08-01
The resolution achieved in photon-efficient active optical range imaging systems can be low due to non-idealities such as propagation through a diffuse scattering medium. We propose a constrained optimization-based framework to address extremes in scarcity of photons and blurring by a forward imaging kernel. We provide two algorithms for the resulting inverse problem: a greedy algorithm, inspired by sparse pursuit algorithms; and a convex optimization heuristic that incorporates image total variation regularization. We demonstrate that our framework outperforms existing deconvolution imaging techniques in terms of peak signal-to-noise ratio. Since our proposed method is able to super-resolve depth features using small numbers of photon counts, it can be useful for observing fine-scale phenomena in remote sensing through a scattering medium and through-the-skin biomedical imaging applications.
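A rough Python sketch of the convex, TV-regularized variant described above; it swaps the paper's photon-count likelihood and specific solvers for a plain least-squares data term with an approximate TV proximal step, so it only illustrates the structure of the approach.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import denoise_tv_chambolle

def tv_deconvolve(y, kernel, lam=0.05, step=0.5, n_iter=100):
    """Proximal-gradient sketch for  min_x 0.5*||k * x - y||^2 + lam*TV(x):
    a gradient step on the quadratic data-fidelity term (convolution with the
    2-D forward kernel and its adjoint), then an approximate TV proximal step
    via Chambolle's denoiser, with a non-negativity clip since the data are
    photon counts."""
    x = y.astype(float).copy()
    k_adj = kernel[::-1, ::-1]                          # adjoint of 2-D convolution
    for _ in range(n_iter):
        resid = fftconvolve(x, kernel, mode="same") - y
        grad = fftconvolve(resid, k_adj, mode="same")
        x = denoise_tv_chambolle(x - step * grad, weight=step * lam)
        x = np.clip(x, 0.0, None)
    return x
```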
Watching a Black Hole Feed: Sgr A* in the X-ray and Infrared
NASA Astrophysics Data System (ADS)
Fazio, Giovanni
2017-09-01
Black hole accretion drives extreme astrophysical phenomena in the universe. Sgr A*, the nearest supermassive black hole, is highly variable, but sparse data and short observations preclude determination of its emission physics. Despite enormous advances in accretion models in recent years, even the radiation mechanisms of Sgr A* are still unknown. Because the needed information is encoded in the time-dependent relationship between X-ray and IR emission, we propose four new epochs of Chandra monitoring with Spitzer at 4.5 microns. This will double the exposure time for X-ray flares where the NIR state is known, moving us out of the realm of small-number statistics and enabling diagnostics of the true X-ray/IR relationship. This will be the final chance for Chandra+Spitzer observations.
Total variation-based neutron computed tomography
NASA Astrophysics Data System (ADS)
Barnard, Richard C.; Bilheux, Hassina; Toops, Todd; Nafziger, Eric; Finney, Charles; Splitter, Derek; Archibald, Rick
2018-05-01
We solve the neutron computed tomography reconstruction problem via an inverse problem formulation with a total variation penalty. In the case of highly under-resolved angular measurements, the total variation penalty suppresses high-frequency artifacts which appear in filtered back projections. In order to efficiently compute solutions for this problem, we implement a variation of the split Bregman algorithm; due to the error-forgetting nature of the algorithm, the computational cost of updating can be significantly reduced via very inexact approximate linear solvers. We demonstrate the effectiveness of the algorithm in the case of significantly low angular sampling, using synthetic test problems as well as data obtained from a high-flux neutron source. The algorithm removes artifacts and can even roughly capture small features when an extremely low number of angles is used.
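A compact sketch of a split Bregman iteration for the TV-penalized inverse problem with a deliberately inexact inner solver, as described above (Python); the operator A is a generic stand-in for the tomographic forward model, and all parameters are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

def grad_ops(nx, ny):
    """Sparse forward-difference operators for an image flattened row-major
    as u.reshape(ny, nx)."""
    ex, ey = np.ones(nx), np.ones(ny)
    Dx = sp.kron(sp.eye(ny), sp.diags([-ex, ex[:-1]], [0, 1], shape=(nx, nx)))
    Dy = sp.kron(sp.diags([-ey, ey[:-1]], [0, 1], shape=(ny, ny)), sp.eye(nx))
    return sp.vstack([Dx, Dy]).tocsr()

def shrink(v, t):
    """Soft-thresholding: the closed-form d-update."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_tv(A, f, shape, lam=0.1, mu=1.0, n_outer=30, cg_iter=10):
    """Split Bregman for  min_u 0.5*||A u - f||^2 + lam*||D u||_1.
    A is any matrix mapping the flattened image to the measured projections;
    the inner normal equations are solved only approximately with a few CG
    steps, mirroring the error-forgetting / inexact-solver idea."""
    nx, ny = shape
    D = grad_ops(nx, ny)
    u = np.zeros(nx * ny)
    d = np.zeros(D.shape[0])
    b = np.zeros_like(d)
    Atf = A.T @ f
    normal_op = LinearOperator(
        (nx * ny, nx * ny),
        matvec=lambda v: A.T @ (A @ v) + mu * (D.T @ (D @ v)),
        dtype=float)
    for _ in range(n_outer):
        rhs = Atf + mu * (D.T @ (d - b))
        u, _ = cg(normal_op, rhs, x0=u, maxiter=cg_iter)  # inexact inner solve
        Du = D @ u
        d = shrink(Du + b, lam / mu)
        b = b + Du - d
    return u.reshape(ny, nx)
```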
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach, complexity-aware simple modeling, that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
Amniotic Constriction Bands: Secondary Deformities and Their Treatments.
Drury, Benjamin T; Rayan, Ghazi M
2018-01-01
The purpose of this study was to report the surgical treatment experience of patients with amniotic constriction bands (ACB) over a 35-year interval and detail consequential limb deformities with emphasis on hands and upper extremities, along with the nature and frequency of their surgical treatment methods. Fifty-one patients were identified; 26 were males and 25 females. The total number of deformities was listed. The total number of operations, individual procedures, and operations plus procedures that were done for each patient and their frequency were recorded. The total number of operations was 117, and total number of procedures was 341. More procedures were performed on the upper extremity (85%) than the lower extremity (15%). Including the primary deformity ACB, 16 different hand deformities secondary to ACB were encountered. Sixteen different surgical methods for the upper extremity were utilized; a primary procedure for ACB and secondary reconstructions for all secondary deformities. Average age at the time of the first procedure was 9.3 months. The most common procedures performed, in order of frequency, were excision of ACB plus Z-plasty, release of partial syndactyly, release of fenestrated syndactyly, full-thickness skin grafts, resection of digital bony overgrowth from amputation stumps, and deepening of first and other digital web spaces. Many hand and upper extremity deformities secondary to ACB are encountered. Children with ACB may require more than one operation including multiple procedures. Numerous surgical methods of reconstruction for these children's secondary deformities are necessary in addition to the customary primary procedure of excision of ACB and Z-plasty.
Feather roughness reduces flow separation during low Reynolds number glides of swifts.
van Bokhorst, Evelien; de Kat, Roeland; Elsinga, Gerrit E; Lentink, David
2015-10-01
Swifts are aerodynamically sophisticated birds with a small arm and large hand wing that provides them with exquisite control over their glide performance. However, their hand wings have a seemingly unsophisticated surface roughness that is poised to disturb flow. This roughness of about 2% chord length is formed by the valleys and ridges of overlapping primary feathers with thick protruding rachides, which make the wing stiffer. An earlier flow study of laminar-turbulent boundary layer transition over prepared swift wings suggested that swifts can attain laminar flow at a low angle of attack. In contrast, aerodynamic design theory suggests that airfoils must be extremely smooth to attain such laminar flow. In hummingbirds, which have similarly rough wings, flow measurements on a 3D printed model suggest that the flow separates at the leading edge and becomes turbulent well above the rachis bumps in a detached shear layer. The aerodynamic function of wing roughness in small birds is, therefore, not fully understood. Here, we performed particle image velocimetry and force measurements to compare smooth versus rough 3D-printed models of the swift hand wing. The high-resolution boundary layer measurements show that the flow over rough wings is indeed laminar at a low angle of attack and a low Reynolds number, but becomes turbulent at higher values. In contrast, the boundary layer over the smooth wing forms open laminar separation bubbles that extend beyond the trailing edge. The boundary layer dynamics of the smooth surface varies non-linearly as a function of angle of attack and Reynolds number, whereas the rough surface boasts more consistent turbulent boundary layer dynamics. Comparison of the corresponding drag values, lift values and glide ratios suggests, however, that glide performance is equivalent. The increased structural performance, boundary layer robustness and equivalent aerodynamic performance of rough wings might have provided small (proto) birds with an evolutionary window to high glide performance. © 2015. Published by The Company of Biologists Ltd.
Polygenic influences on dyslipidemias.
Dron, Jacqueline S; Hegele, Robert A
2018-04-01
Rare large-effect genetic variants underlie monogenic dyslipidemias, whereas common small-effect genetic variants - single nucleotide polymorphisms (SNPs) - have modest influences on lipid traits. Over the past decade, these small-effect SNPs have been shown to cumulatively exert consistent effects on lipid phenotypes under a polygenic framework, which is the focus of this review. Several groups have reported polygenic risk scores assembled from lipid-associated SNPs, and have applied them to their respective phenotypes. For lipid traits in the normal population distribution, polygenic effects quantified by a score that integrates several common polymorphisms account for about 20-30% of genetic variation. Among individuals at the extremes of the distribution, that is, those with clinical dyslipidemia, the polygenic component includes both rare variants with large effects and common polymorphisms: depending on the trait, 20-50% of susceptibility can be accounted for by this assortment of genetic variants. Accounting for polygenic effects increases the numbers of dyslipidemic individuals who can be explained genetically, but a substantial proportion of susceptibility remains unexplained. Whether documenting the polygenic basis of dyslipidemia will affect outcomes in clinical trials or prospective observational studies remains to be determined.
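The polygenic scores discussed above generally take the form of a weighted allele count; a minimal Python sketch with hypothetical effect sizes (not taken from any cited study) follows.

```python
import numpy as np

def polygenic_score(genotypes, weights):
    """Weighted allele-count score: genotypes is an (individuals x SNPs) matrix
    of risk-allele counts (0, 1 or 2), weights are per-SNP effect sizes (e.g.
    betas from lipid GWAS); the raw score is their dot product, standardized
    here against the sample itself."""
    raw = genotypes @ weights
    return (raw - raw.mean()) / raw.std()

# hypothetical toy data: 5 individuals, 4 lipid-associated SNPs
geno = np.array([[0, 1, 2, 1],
                 [2, 2, 1, 0],
                 [1, 0, 0, 1],
                 [2, 1, 2, 2],
                 [0, 0, 1, 0]])
beta = np.array([0.12, 0.05, 0.30, 0.08])   # hypothetical effect sizes
print(polygenic_score(geno, beta))
```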
Study design for a hepatitis B vaccine trial.
Lustbader, E D; London, W T; Blumberg, B S
1976-01-01
A short-time trial of small sample size for an evaluation of the hepatitis B vaccine is proposed and designed. The vaccine is based on the premise that antibody to the surface antigen of the hepatitis B virus is protective against viral infection. This premise is verified by using the presence of the surface antigen as the marker of infection and comparing infection rates in renal dialysis patients who had naturally acquired antibody to patients without antibody. Patients with antibody have an extremely low risk of infection. The probability of remaining uninfected decreases at an exponential rate for patients without antibody, implying a constant risk of infection at any point in time. The study design described makes use of this time independence and the observed infection rates to formulate a clinical trial which can be accomplished with a relatively small number of patients. This design might be useful if, in preliminary studies, it is shown that the vaccine produces antibody in the patients and that protection against hepatitis B virus would be beneficial to the patients. PMID:1062809
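The time-independence noted above corresponds to the standard constant-hazard (exponential survival) model:

```latex
% Constant-hazard model implied by the time-independent infection risk: with
% per-unit-time risk \lambda, the probability of remaining uninfected to
% time t is
\begin{equation}
  S(t) \;=\; e^{-\lambda t},
\end{equation}
% so the expected number of infections over a trial of length T in a group of
% size N is roughly N\,(1 - e^{-\lambda T}), which is what lets the observed
% rates be traded off against sample size and trial duration.
```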
A New Paradigm in Earth Environmental Monitoring with the CYGNSS Small Satellite Constellation.
Ruf, Christopher S; Chew, Clara; Lang, Timothy; Morris, Mary G; Nave, Kyle; Ridley, Aaron; Balasubramaniam, Rajeswari
2018-06-08
A constellation of small, low-cost satellites is able to make scientifically valuable measurements of the Earth which can be used for weather forecasting, disaster monitoring, and climate studies. Eight CYGNSS satellites were launched into low Earth orbit on December 15, 2016. Each satellite carries a science radar receiver which measures GPS signals reflected from the Earth surface. The signals contain information about the surface, including wind speed over ocean, and soil moisture and flooding over land. The satellites are distributed around their orbit plane so that measurements can be made more often to capture extreme weather events. Innovative engineering approaches are used to reduce per satellite cost, increase the number in the constellation, and improve temporal sampling. These include the use of differential drag rather than propulsion to adjust the spacing between satellites and the use of existing GPS signals as the science radars' transmitter. Initial on-orbit results demonstrate the scientific utility of the CYGNSS observations, and suggest that a new paradigm in spaceborne Earth environmental monitoring is possible.
Detection of nanoflare-heated plasma in the solar corona by the FOXSI-2 sounding rocket
NASA Astrophysics Data System (ADS)
Ishikawa, Shin-nosuke; Glesener, Lindsay; Krucker, Säm; Christe, Steven; Buitrago-Casas, Juan Camilo; Narukage, Noriyuki; Vievering, Juliana
2017-11-01
The processes that heat the solar and stellar coronae to several million kelvins, compared with the much cooler photosphere (5,800 K for the Sun), are still not well known [1]. One proposed mechanism is heating via a large number of small, unresolved, impulsive heating events called nanoflares [2]. Each event would heat and cool quickly, and the average effect would be a broad range of temperatures including a small amount of extremely hot plasma. However, detecting these faint, hot traces in the presence of brighter, cooler emission is observationally challenging. Here we present hard X-ray data from the second flight of the Focusing Optics X-ray Solar Imager (FOXSI-2), which detected emission above 7 keV from an active region of the Sun with no obvious individual X-ray flare emission. Through differential emission measure computations, we ascribe this emission to plasma heated above 10 MK, providing evidence for the existence of solar nanoflares. The quantitative evaluation of the hot plasma strongly constrains the coronal heating models.
Interfacial Symmetry Control of Emergent Ferromagnetism
NASA Astrophysics Data System (ADS)
Grutter, Alexander; Borchers, Julie; Kirby, Brian; He, Chunyong; Arenholz, Elke; Vailionis, Arturas; Flint, Charles; Suzuki, Yuri
Atomically precise complex oxide heterostructures provide model systems for the discovery of new emergent phenomena since their magnetism, structure and electronic properties are strongly coupled. Octahedral tilts and rotations have been shown to alter the magnetic properties of complex oxide heterostructures, but typically induce small, gradual magnetic changes. Here, we demonstrate sharp switching between ferromagnetic and antiferromagnetic order at the emergent ferromagnetic interfaces of CaRuO3/CaMnO3 superlattices. Through synchrotron X-ray diffraction and neutron reflectometry, we show that octahedral distortions in superlattices with an odd number of CaMnO3 unit cells in each layer are symmetry mismatched across the interface. In this case, the rotation symmetry switches across the interface, reducing orbital overlap, suppressing charge transfer from Ru to Mn, and disrupting the interfacial double exchange. This disruption switches half of the interfaces from ferromagnetic to antiferromagnetic and lowers the saturation magnetization of the superlattice from 1.0 to 0.5 μB/interfacial Mn. By targeting a purely interfacial emergent magnetic system, we achieve drastic alterations to the magnetic ground state with extremely small changes in layer thickness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mittal, Sparsh; Zhang, Zhao; Vetter, Jeffrey S
Recent trends of CMOS scaling and the use of large last level caches (LLCs) have led to a significant increase in the leakage energy consumption of LLCs, and hence managing their energy consumption has become extremely important in modern processor design. The conventional cache energy saving techniques require offline profiling or provide only coarse granularity of cache allocation. We present FlexiWay, a cache energy saving technique which uses dynamic cache reconfiguration. FlexiWay logically divides the cache sets into multiple (e.g. 16) modules and dynamically turns off suitable and possibly different numbers of cache ways in each module. FlexiWay has very small implementation overhead and it provides fine-grain cache allocation even with caches of typical associativity, e.g. an 8-way cache. Microarchitectural simulations have been performed using an x86-64 simulator and workloads from the SPEC2006 suite. Also, FlexiWay has been compared with two conventional energy saving techniques. The results show that FlexiWay provides the largest energy saving and incurs only a small loss in performance. For single, dual and quad core systems, the average energy savings using FlexiWay are 26.2%, 25.7% and 22.4%, respectively.
Knowles, Martyn; Nation, David A; Timaran, David E; Gomez, Luis F; Baig, M Shadman; Valentine, R James; Timaran, Carlos H
2015-01-01
Fenestrated endovascular aortic aneurysm repair (FEVAR) is an alternative to open repair in patients with complex abdominal aortic aneurysms who are neither fit nor suitable for standard open or endovascular repair. Chimney and snorkel grafts are other endovascular alternatives but frequently require bilateral upper extremity access that has been associated with a 3% to 10% risk of stroke. However, upper extremity access is also frequently required for FEVAR because of the caudal orientation of the visceral vessels. The purpose of this study was to assess the use of upper extremity access for FEVAR and the associated morbidity. During a 5-year period, 148 patients underwent FEVAR, and upper extremity access for FEVAR was used in 98 (66%). Outcomes were compared between those who underwent upper extremity access and those who underwent femoral access alone. The primary end point was a cerebrovascular accident or transient ischemic attack, and the secondary end point was local access site complications. The mean number of fenestrated vessels was 3.07 ± 0.81 (median, 3) for a total of 457 vessels stented. Percutaneous upper extremity access was used in 12 patients (12%) and open access in 86 (88%). All patients who required a sheath size >7F underwent high brachial open access, with the exception of one patient who underwent percutaneous axillary access with a 12F sheath. The mean sheath size was 10.59F ± 2.51F (median, 12F), which was advanced into the descending thoracic aorta, allowing multiple wire and catheter exchanges. One hemorrhagic stroke (one of 98 [1%]) occurred in the upper extremity access group, and one ischemic stroke (one of 54 [2%]) occurred in the femoral-only access group (P = .67). The stroke in the upper extremity access group occurred 5 days after FEVAR and was related to uncontrolled hypertension, whereas the stroke in the femoral group occurred on postoperative day 3. Neither patient had signs or symptoms of a stroke immediately after FEVAR. The right upper extremity was accessed six times without a stroke (0%) compared with the left being accessed 92 times with one stroke (1%; P = .8). Four patients (4%) had local complications related to upper extremity access. One (1%) required exploration for an expanding hematoma after manual compression for a 7F sheath, one (1%) required exploration for hematoma and neurologic symptoms after open access for a 12F sheath, and two patients (2%) with small hematomas did not require intervention. Two (two of 12 [17%]) of these complications were in the percutaneous access group, which were significantly more frequent than in the open group (two of 86 [2%]; P = .02). Upper extremity access appears to be a safe and feasible approach for patients undergoing FEVAR. Open exposure in the upper extremity may be safer than percutaneous access during FEVAR. Unlike chimney and snorkel grafts, upper extremity access during FEVAR is not associated with an increased risk of stroke, despite the need for multiple visceral vessel stenting. Copyright © 2015 Society for Vascular Surgery. All rights reserved.
Bredfeldt, Christine E; Butani, Amy; Padmanabhan, Sandhyasree; Hitz, Paul; Pardee, Roy
2013-03-22
Multi-site health sciences research is becoming more common, as it enables investigation of rare outcomes and diseases and new healthcare innovations. Multi-site research usually involves the transfer of large amounts of research data between collaborators, which increases the potential for accidental disclosures of protected health information (PHI). Standard protocols for preventing release of PHI are extremely vulnerable to human error, particularly when the shared data sets are large. To address this problem, we developed an automated program (SAS macro) to identify possible PHI in research data before it is transferred between research sites. The macro reviews all data in a designated directory to identify suspicious variable names and data patterns. The macro looks for variables that may contain personal identifiers such as medical record numbers and social security numbers. In addition, the macro identifies dates and numbers that may identify people who belong to small groups, who may be identifiable even in the absence of traditional identifiers. Evaluation of the macro on 100 sample research data sets indicated a recall of 0.98 and precision of 0.81. When implemented consistently, the macro has the potential to streamline the PHI review process and significantly reduce accidental PHI disclosures.
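The authors' tool is a SAS macro that is not reproduced in the abstract; the sketch below is a rough Python analogue of the screening idea only. The column-name fragments, regexes, and small-group threshold are illustrative assumptions, not the macro's actual rules.

```python
# Rough Python analogue of a pre-transfer PHI screen: flag suspicious column
# names, identifier-like value patterns, and small categories that could be
# re-identifying even without traditional identifiers.
import re
import pandas as pd

SUSPECT_NAME_FRAGMENTS = ["ssn", "mrn", "med_rec", "birth", "dob", "phone", "address", "name"]
SSN_PATTERN = re.compile(r"^\d{3}-?\d{2}-?\d{4}$")
FULL_DATE_PATTERN = re.compile(r"^\d{4}-\d{2}-\d{2}$")

def screen_dataframe(df, small_group_threshold=5):
    """Return human-readable warnings about columns that may contain PHI."""
    warnings = []
    for col in df.columns:
        if any(frag in col.lower() for frag in SUSPECT_NAME_FRAGMENTS):
            warnings.append(f"column name '{col}' looks like an identifier")
        values = [str(v) for v in df[col].dropna()]
        if any(SSN_PATTERN.match(v) for v in values):
            warnings.append(f"column '{col}' contains SSN-like values")
        if any(FULL_DATE_PATTERN.match(v) for v in values):
            warnings.append(f"column '{col}' contains full dates")
        # categories held by very few people may be re-identifiable on their own
        counts = df[col].value_counts()
        if len(counts) > 1 and counts.min() < small_group_threshold:
            warnings.append(f"column '{col}' has categories with fewer than "
                            f"{small_group_threshold} members")
    return warnings

demo = pd.DataFrame({"mrn": ["123456", "654321"],
                     "visit_date": ["2012-07-04", "2012-08-09"],
                     "bmi": [27.3, 31.1]})
for warning in screen_dataframe(demo):
    print(warning)
```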
Multiplex congruence network of natural numbers.
Yan, Xiao-Yong; Wang, Wen-Xu; Chen, Guan-Rong; Shi, Ding-Hua
2016-03-31
Congruence theory has many applications in physical, social, biological and technological systems. Congruence arithmetic has been a fundamental tool for data security and computer algebra. However, much less attention was devoted to the topological features of congruence relations among natural numbers. Here, we explore the congruence relations in the setting of a multiplex network and unveil some unique and outstanding properties of the multiplex congruence network. Analytical results show that every layer therein is a sparse and heterogeneous subnetwork with a scale-free topology. Counterintuitively, every layer has an extremely strong controllability in spite of its scale-free structure that is usually difficult to control. Another amazing feature is that the controllability is robust against targeted attacks to critical nodes but vulnerable to random failures, which also differs from ordinary scale-free networks. The multi-chain structure with a small number of chain roots arising from each layer accounts for the strong controllability and the abnormal feature. The multiplex congruence network offers a graphical solution to the simultaneous congruences problem, which may have implication in cryptography based on simultaneous congruences. Our work also gains insight into the design of networks integrating advantages of both heterogeneous and homogeneous networks without inheriting their limitations.
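The paper's graphical construction is not reproduced here; as a point of reference, a minimal implementation of the standard algebraic route to the same simultaneous-congruences problem, the Chinese Remainder Theorem, is sketched below (the residues and moduli in the example are arbitrary).

```python
# Chinese Remainder Theorem for pairwise coprime moduli: solve x = r_i (mod m_i).
from math import gcd

def crt(residues, moduli):
    """Return (x, M) with x = r_i (mod m_i) for all i and M the product of the moduli."""
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        if gcd(M, m) != 1:
            raise ValueError("moduli must be pairwise coprime")
        # choose t so that x + M*t = r (mod m), i.e. t = (r - x) * M^{-1} (mod m)
        t = ((r - x) * pow(M, -1, m)) % m
        x += M * t
        M *= m
    return x % M, M

# x = 2 (mod 3), x = 3 (mod 5), x = 2 (mod 7)  ->  x = 23 (mod 105)
print(crt([2, 3, 2], [3, 5, 7]))
```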
The Coast Artillery Journal. Volume 57, Number 6, December 1922
1922-12-01
theorems; Chapter III, to application; Chapters IV, V and VI, to infinitesimals and differentials, trigonometric functions, and logarithms and...taneously." There are chapters on complex numbers with simple and direct discussion of the roots of unity; on elementary theorems on the roots of an...through the centuries from the time of Pythagoras, an interest shared on the one extreme by nearly every noted mathematician and on the other extreme by
The magnitude and colour of noise in genetic negative feedback systems
Voliotis, Margaritis; Bowsher, Clive G.
2012-01-01
The comparative ability of transcriptional and small RNA-mediated negative feedback to control fluctuations or ‘noise’ in gene expression remains unexplored. Both autoregulatory mechanisms usually suppress the average (mean) of the protein level and its variability across cells. The variance of the number of proteins per molecule of mean expression is also typically reduced compared with the unregulated system, but is almost never below the value of one. This relative variance often substantially exceeds a recently obtained, theoretical lower limit for biochemical feedback systems. Adding the transcriptional or small RNA-mediated control has different effects. Transcriptional autorepression robustly reduces both the relative variance and persistence (lifetime) of fluctuations. Both benefits combine to reduce noise in downstream gene expression. Autorepression via small RNA can achieve more extreme noise reduction and typically has less effect on the mean expression level. However, it is often more costly to implement and is more sensitive to rate parameters. Theoretical lower limits on the relative variance are known to decrease slowly as a measure of the cost per molecule of mean expression increases. However, the proportional increase in cost to achieve substantial noise suppression can be different away from the optimal frontier—for transcriptional autorepression, it is frequently negligible. PMID:22581772
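Reading the "variance of the number of proteins per molecule of mean expression" above as the Fano factor, the quantity and its benchmark value of one can be stated compactly (notation chosen here, not taken from the paper):

```latex
% Fano factor F of the protein copy number n:
\[
  F \;=\; \frac{\operatorname{Var}(n)}{\langle n \rangle},
  \qquad F_{\text{Poisson}} = 1 ,
\]
% so values of F at or above one are typical, and pushing F below one requires
% stronger control than a simple birth-death (Poisson) process provides.
```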
Rossetto, Maurizio; Kooyman, Robert; Yap, Jia-Yee S.; Laffan, Shawn W.
2015-01-01
Seed dispersal is a key process in plant spatial dynamics. However, consistently applicable generalizations about dispersal across scales are mostly absent because of the constraints on measuring propagule dispersal distances for many species. Here, we focus on fleshy-fruited taxa, specifically taxa with large fleshy fruits and their dispersers across an entire continental rainforest biome. We compare species-level results of whole-chloroplast DNA analyses in sister taxa with large and small fruits, to regional plot-based samples (310 plots), and whole-continent patterns for the distribution of woody species with either large (more than 30 mm) or smaller fleshy fruits (1093 taxa). The pairwise genomic comparison found higher genetic distances between populations and between regions in the large-fruited species (Endiandra globosa), but higher overall diversity within the small-fruited species (Endiandra discolor). Floristic comparisons among plots confirmed lower numbers of large-fruited species in areas where more extreme rainforest contraction occurred, and re-colonization by small-fruited species readily dispersed by the available fauna. Species' distribution patterns showed that larger-fruited species had smaller geographical ranges than smaller-fruited species and locations with stable refugia (and high endemism) aligned with concentrations of large fleshy-fruited taxa, making them a potentially valuable conservation-planning indicator. PMID:26645199
NASA Astrophysics Data System (ADS)
Huang, Jin; Islam, A. R. M. Towfiqul; Zhang, Fangmin; Hu, Zhenghua
2017-10-01
With the increasing risk of meteorological disasters, it is of great importance to analyze the spatiotemporal changes of precipitation extremes and their possible impact on rice productivity, especially in Jiangsu province, southeast China. In this study, we explored the relationships between rice yield and extreme precipitation indices using the Mann-Kendall trend test, Pettitt's test, and K-means clustering methods. This study used 10 extreme precipitation indices of the rice growing season (May to October) based on the daily precipitation records and rice yield data at 52 meteorological stations during 1961-2012 in Jiangsu province. The main findings were as follows: (1) correlation results indicated that precipitation extremes occurred in the months of July, August, and October, which had noticeable adverse effects on rice yield; (2) the maximum 7-day precipitation of July and the number of rainy days of August and October should be considered as three key indicators for precipitation-induced rice meteorological disasters; and (3) most of the stations showed increasing trends for the maximum 7-day precipitation of July and the number of rainy days of August, while the number of rainy days of October at all stations demonstrated a decreasing trend. Moreover, Jiangsu province could be divided into two major sub-regions, north and south, with different temporal variations in the three key indicators.
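Of the methods named above, the Mann-Kendall trend test is simple enough to state in a few lines; the sketch below uses the usual no-ties normal approximation and a made-up station series, not the study's data.

```python
# Minimal Mann-Kendall trend test (no-ties normal approximation).
import math

def mann_kendall(series):
    """Return (S, Z): the Mann-Kendall statistic and its approximate Z score."""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)          # sign of the pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # variance of S, ignoring ties
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

# e.g. yearly maximum 7-day July precipitation at one station (made-up values, mm)
precip = [182, 175, 190, 205, 198, 220, 231, 215, 240, 252]
print(mann_kendall(precip))   # |Z| > 1.96 would indicate a trend at the 5% level
```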
The evolution of extreme precipitations in high resolution scenarios over France
NASA Astrophysics Data System (ADS)
Colin, J.; Déqué, M.; Somot, S.
2009-09-01
Over the past years, improving the modelling of extreme events and their variability at climatic time scales has become one of the challenging issues raised in the regional climate research field. This study shows the results of a high resolution (12 km) scenario run over France with the limited area model (LAM) ALADIN-Climat, regarding the representation of extreme precipitation. The runs were conducted in the framework of the ANR-SCAMPEI national project on high resolution scenarios over French mountains. As a first step, we attempt to quantify one of the uncertainties implied by the use of a LAM: the size of the area on which the model is run. In particular, we address the issue of whether a relatively small domain allows the model to create its own small scale processes. Indeed, high resolution scenarios cannot be run on large domains because of the computation time, so one needs to answer this preliminary question before producing and analyzing such scenarios. To do so, we worked in the framework of a "big brother" experiment. We performed a 23-year long global simulation in present-day climate (1979-2001) with the ARPEGE-Climat GCM, at a resolution of approximately 50 km over Europe (stretched grid). This first simulation, named ARP50, constitutes the "big brother" reference of our experiment. It has been validated in comparison with the CRU climatology. We then filtered the short waves (up to 200 km) from ARP50 in order to obtain the equivalent of coarse resolution lateral boundary conditions (LBC). We have carried out three ALADIN-Climat simulations at a 50 km resolution with these LBC, using different configurations of the model: FRA50, run over a small domain (2000 x 2000 km, centered over France); EUR50, run over a larger domain (5000 x 5000 km, also centered over France); and EUR50-SN, run over the large domain using spectral nudging. Considering that the ARPEGE-Climat and ALADIN-Climat models share the same physics and dynamics and that both regional and global simulations were run at the same resolution, ARP50 can be regarded as a reference with which FRA50, EUR50 and EUR50-SN should each be compared. After an analysis of the differences between the regional simulations and ARP50 in annual and seasonal mean, we focus on the representation of rainfall extremes, comparing two-dimensional fields of various indices inspired by STARDEX and quantile-quantile plots. The results show good agreement with the ARP50 reference for all three regional simulations, and few differences are found between them. This result indicates that the use of small domains is not significantly detrimental to the modelling of extreme precipitation events. It also shows that the spectral nudging technique has no detrimental effect on the extreme precipitation. Therefore, high resolution scenarios performed on a relatively small domain, such as the ones run for SCAMPEI, can be regarded as good tools to explore their possible evolution in the future climate. Preliminary results on the response of precipitation extremes over South-East France are given.
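The comparison of rainfall extremes above relies in part on quantile-quantile diagnostics against the ARP50 reference; a minimal version of that diagnostic is sketched below with synthetic gamma-distributed stand-ins for the daily precipitation samples.

```python
# Quantile-quantile comparison of two daily-precipitation samples (synthetic data).
import numpy as np

def qq_pairs(reference, regional, quantiles=np.arange(0.5, 1.0, 0.01)):
    """Return matched quantiles of the reference and regional samples."""
    return np.quantile(reference, quantiles), np.quantile(regional, quantiles)

rng = np.random.default_rng(0)
ref = rng.gamma(shape=0.8, scale=6.0, size=8000)   # stand-in for ARP50 daily rain
reg = rng.gamma(shape=0.8, scale=6.3, size=8000)   # stand-in for a regional run

q_ref, q_reg = qq_pairs(ref, reg)
# points close to the 1:1 line indicate the regional run reproduces the
# reference distribution, including its upper (extreme) quantiles
print(np.round(q_ref[-5:], 1), np.round(q_reg[-5:], 1))
```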
Pruitt, Valerie M
2006-01-01
Work-related upper extremity burns often occur, and the cause directs the course of action. Thermal burns should be assessed for systemic alterations, and the depth of the burn should be determined. Deep partial-thickness burns and more severe burns require a specialist evaluation. Chemical burns must be irrigated and the agent identified. Some chemical burns, such as those that involve phenols and metal fragments, require specific topical applications before water lavage. Hydrofluoric acid burns can cause life-threatening electrolyte abnormalities even when the burn is small, if the acid is highly concentrated. The goal with any extremity burn is to provide the patient with a multidisciplinary team approach to achieve a functional, usable extremity.
Controllable gaussian-qubit interface for extremal quantum state engineering.
Adesso, Gerardo; Campbell, Steve; Illuminati, Fabrizio; Paternostro, Mauro
2010-06-18
We study state engineering through bilinear interactions between two remote qubits and two-mode gaussian light fields. The attainable two-qubit states span the entire physically allowed region in the entanglement-versus-global-purity plane. Two-mode gaussian states with maximal entanglement at fixed global and marginal entropies produce maximally entangled two-qubit states in the corresponding entropic diagram. We show that a small set of parameters characterizing extremally entangled two-mode gaussian states is sufficient to control the engineering of extremally entangled two-qubit states, which can be realized in realistic matter-light scenarios.
Statistic analysis of annual total ozone extremes for the period 1964-1988
NASA Technical Reports Server (NTRS)
Krzyscin, Janusz W.
1994-01-01
Annual extremes of total column amount of ozone (in the period 1964-1988) from a network of 29 Dobson stations have been examined using extreme value analysis. The extremes have been calculated as the highest deviation of daily mean total ozone from its long-term monthly mean, normalized by the monthly standard deviations. The extremes have been selected from the direct-Sun total ozone observations only. Extremes resulting from abrupt changes in ozone (day to day changes greater than 20 percent) have not been considered. The ordered extremes (maxima in ascending order, minima in descending order) have been fitted to one of three forms of the Fisher-Tippett extreme value distribution by the nonlinear least squares method (Levenberg-Marquardt method). We have found that the ordered extremes from a majority of Dobson stations lie close to Fisher-Tippett type III. The extreme value analysis of the composite annual extremes (combined from averages of the annual extremes selected at individual stations) has shown that the composite maxima are fitted by the Fisher-Tippett type III and the composite minima by the Fisher-Tippett type I. The difference between the Fisher-Tippett types of the composite extremes seems to be related to the ozone downward trend. Extreme value prognoses for the period 1964-2014 (derived from the data taken at all analyzed stations, the North American stations, and the European stations) have revealed that the prognostic extremes are close to the largest annual extremes in the period 1964-1988 and there are only small regional differences in the prognoses.
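The three Fisher-Tippett types are unified by the generalized extreme value (GEV) distribution, whose shape parameter selects the type; a hedged sketch of this kind of fit, on synthetic stand-ins for normalized annual extremes rather than the actual ozone record, is given below.

```python
# GEV fit to a short record of annual maxima (synthetic data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_maxima = stats.genextreme.rvs(c=0.2, loc=2.0, scale=0.5, size=25,
                                     random_state=rng)   # 25 synthetic "years"

shape, loc, scale = stats.genextreme.fit(annual_maxima)
# In scipy's convention: shape > 0 ~ Fisher-Tippett type III (bounded tail),
# shape ~ 0 ~ type I (Gumbel), shape < 0 ~ type II (Frechet).
print(f"shape={shape:.2f}, loc={loc:.2f}, scale={scale:.2f}")

# 50-year return level implied by the fit (value exceeded on average once in 50 years)
print(stats.genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale))
```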
Promoting transportation flexibility in extreme events through multi-modal connectivity.
DOT National Transportation Integrated Search
2014-06-01
Extreme events of all kinds are increasing in number, severity, or impacts. Transportation provides a vital support service for people in such circumstances, in the short term for evacuation and for providing supplies where evacuation is not undertake...
2010-01-01
Background Osteoporosis treatment guidelines recommend calcium and vitamin D supplementation for both prevention as well as treatment, however, compliance with these guidelines is often unsatisfactory. This study investigated the opinion of Asian physicians and Asian patients regarding vitamin D and calcium and patients' use of both. Methods Physicians selected from Malaysia, Taiwan, Philippines, Korea and Singapore were asked to grade the significance of vitamin D and calcium in the treatment of osteoporosis and their patients' use of these supplements. In addition, physicians recruited seven eligible osteoporotic women to answer a questionnaire to determine their use of vitamin D and calcium, and their attitudes and beliefs regarding these supplements. Results In total, 237 physicians and 1463 osteoporosis patients completed the questionnaire. The results revealed that 22% of physicians in Malaysia, 12% in Taiwan, 72% in the Philippines, 50% in Korea and 24% in Singapore rated the importance of vitamin D supplementation as being extremely important. For calcium, 27% of physicians in Malaysia, 30% in Taiwan, 80% in the Philippines, 50% in Korea and 38% in Singapore rated the importance as being extremely important. Forty-three percent of patients in Malaysia, 38% in Taiwan, 73% in the Philippines, 35% in Korea and 39% in Singapore rated the importance of vitamin D as being extremely important. For calcium, 69% of patients in Malaysia, 58% in Taiwan, 90% in the Philippines, 70% in Korea and 55% in Singapore rated the importance as being extremely important. In addition, results of the patient questionnaire revealed that only a very small number regularly took both supplements. In addition, the results indicated that, with the exception of patients from the Philippines, the majority of patients had no or infrequent discussion with their physician about vitamin D and calcium. Conclusions There is generally suboptimal appreciation by both physicians and patients of the importance of vitamin D and calcium for maintenance of bone health as reflected in the low number of patients who reported regularly taking these supplements. Recognition of this problem should translate to appropriate action to improve education for both physicians and patients, with a goal to increase use of these supplements among Asian patients with osteoporosis. PMID:20977729
Instructive Biologic Scaffold for Functional Tissue Regeneration Following Trauma to the Extremities
2015-09-01
Award Number: W81XWH-12-2-0128. Title: Instructive Biologic Scaffold for Functional Tissue Regeneration Following Trauma to the Extremities. Reporting period: ...2014 - 29 Aug 2015. ...effectiveness of a regenerative scaffold for the restoration of functional musculotendinous tissue, including the restoration of blood supply and innervation.
Lejiang Yu; Shiyuan Zhong; Lisi Pei; Xindi (Randy) Bian; Warren E. Heilman
2016-01-01
The mean global climate has warmed as a result of the increasing emission of greenhouse gases induced by human activities. This warming is considered the main reason for the increasing number of extreme precipitation events in the US. While much attention has been given to extreme precipitation events occurring over several days, which are usually responsible for...
Extremal entanglement witnesses
NASA Astrophysics Data System (ADS)
Hansen, Leif Ove; Hauge, Andreas; Myrheim, Jan; Sollid, Per Øyvind
2015-02-01
We present a study of extremal entanglement witnesses on a bipartite composite quantum system. We define the cone of witnesses as the dual of the set of separable density matrices, thus TrΩρ≥0 when Ω is a witness and ρ is a pure product state, ρ=ψψ† with ψ=ϕ⊗χ. The set of witnesses of unit trace is a compact convex set, uniquely defined by its extremal points. The expectation value f(ϕ,χ)=TrΩρ as a function of vectors ϕ and χ is a positive semidefinite biquadratic form. Every zero of f(ϕ,χ) imposes strong real-linear constraints on f and Ω. The real and symmetric Hessian matrix at the zero must be positive semidefinite. Its eigenvectors with zero eigenvalue, if such exist, we call Hessian zeros. A zero of f(ϕ,χ) is quadratic if it has no Hessian zeros, otherwise it is quartic. We call a witness quadratic if it has only quadratic zeros, and quartic if it has at least one quartic zero. A main result we prove is that a witness is extremal if and only if no other witness has the same, or a larger, set of zeros and Hessian zeros. A quadratic extremal witness has a minimum number of isolated zeros depending on dimensions. If a witness is not extremal, then the constraints defined by its zeros and Hessian zeros determine all directions in which we may search for witnesses having more zeros or Hessian zeros. A finite number of iterated searches in random directions, by numerical methods, leads to an extremal witness which is nearly always quadratic and has the minimum number of zeros. We discuss briefly some topics related to extremal witnesses, in particular the relation between the facial structures of the dual sets of witnesses and separable states. We discuss the relation between extremality and optimality of witnesses, and a conjecture of separability of the so-called structural physical approximation (SPA) of an optimal witness. Finally, we discuss how to treat the entanglement witnesses on a complex Hilbert space as a subset of the witnesses on a real Hilbert space.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hagos, Samson M.; Leung, Lai-Yung R.; Yoon, Jin-Ho
Simulations from the Community Earth System Model Large Ensemble project are analyzed to investigate the impact of global warming on atmospheric rivers (ARs). The model has notable biases in simulating the subtropical jet position and the relationship between extreme precipitation and moisture transport. After accounting for these biases, the model projects an ensemble mean increase of 35% in the number of landfalling AR days between the last twenty years of the 20th and 21st centuries. However, the number of AR-associated extreme precipitation days increases by only 28% because the moisture transport required to produce extreme precipitation also increases with warming. Internal variability introduces an uncertainty of ±8% and ±7% in the projected changes in AR days and associated extreme precipitation days. In contrast, accounting for model biases changes the projections by only about 1%. The significantly larger mean changes compared to internal variability and to the effects of model biases highlight the robustness of AR responses to global warming.
Mechanisms Underlying Adaptation to Life in Hydrogen Sulfide–Rich Environments
Kelley, Joanna L.; Arias-Rodriguez, Lenin; Patacsil Martin, Dorrelyn; Yee, Muh-Ching; Bustamante, Carlos D.; Tobler, Michael
2016-01-01
Hydrogen sulfide (H2S) is a potent toxicant interfering with oxidative phosphorylation in mitochondria and creating extreme environmental conditions in aquatic ecosystems. The mechanistic basis of adaptation to perpetual exposure to H2S remains poorly understood. We investigated evolutionarily independent lineages of livebearing fishes that have colonized and adapted to springs rich in H2S and compared their genome-wide gene expression patterns with closely related lineages from adjacent, nonsulfidic streams. Significant differences in gene expression were uncovered between all sulfidic and nonsulfidic population pairs. Variation in the number of differentially expressed genes among population pairs corresponded to differences in divergence times and rates of gene flow, which is consistent with neutral drift driving a substantial portion of gene expression variation among populations. Accordingly, there was little evidence for convergent evolution shaping large-scale gene expression patterns among independent sulfide spring populations. Nonetheless, we identified a small number of genes that was consistently differentially expressed in the same direction in all sulfidic and nonsulfidic population pairs. Functional annotation of shared differentially expressed genes indicated upregulation of genes associated with enzymatic H2S detoxification and transport of oxidized sulfur species, oxidative phosphorylation, energy metabolism, and pathways involved in responses to oxidative stress. Overall, our results suggest that modification of processes associated with H2S detoxification and toxicity likely complement each other to mediate elevated H2S tolerance in sulfide spring fishes. Our analyses allow for the development of novel hypotheses about biochemical and physiological mechanisms of adaptation to extreme environments. PMID:26861137
Extremely rare collapse and build-up of turbulence in stochastic models of transitional wall flows.
Rolland, Joran
2018-02-01
This paper presents a numerical and theoretical study of multistability in two stochastic models of transitional wall flows. An algorithm dedicated to the computation of rare events is adapted on these two stochastic models. The main focus is placed on a stochastic partial differential equation model proposed by Barkley. Three types of events are computed in a systematic and reproducible manner: (i) the collapse of isolated puffs and domains initially containing their steady turbulent fraction; (ii) the puff splitting; (iii) the build-up of turbulence from the laminar base flow under a noise perturbation of vanishing variance. For build-up events, an extreme realization of the vanishing variance noise pushes the state from the laminar base flow to the most probable germ of turbulence which in turn develops into a full blown puff. For collapse events, the Reynolds number and length ranges of the two regimes of collapse of laminar-turbulent pipes, independent collapse or global collapse of puffs, is determined. The mean first passage time before each event is then systematically computed as a function of the Reynolds number r and pipe length L in the laminar-turbulent coexistence range of Reynolds number. In the case of isolated puffs, the faster-than-linear growth with Reynolds number of the logarithm of mean first passage time T before collapse is separated in two. One finds that ln(T)=A_{p}r-B_{p}, with A_{p} and B_{p} positive. Moreover, A_{p} and B_{p} are affine in the spatial integral of turbulence intensity of the puff, with the same slope. In the case of pipes initially containing the steady turbulent fraction, the length L and Reynolds number r dependence of the mean first passage time T before collapse is also separated. The author finds that T≍exp[L(Ar-B)] with A and B positive. The length and Reynolds number dependence of T are then discussed in view of the large deviations theoretical approaches of the study of mean first passage times and multistability, where ln(T) in the limit of small variance noise is studied. Two points of view, local noise of small variance and large length, can be used to discuss the exponential dependence in L of T. In particular, it is shown how a T≍exp[L(A^{'}R-B^{'})] can be derived in a conceptual two degrees of freedom model of a transitional wall flow proposed by Dauchot and Manneville. This is done by identifying a quasipotential in low variance noise, large length limit. This pinpoints the physical effects controlling collapse and build-up trajectories and corresponding passage times with an emphasis on the saddle points between laminar and turbulent states. This analytical analysis also shows that these effects lead to the asymmetric probability density function of kinetic energy of turbulence.
Response Identification in the Extremely Low Frequency Region of an Electret Condenser Microphone
Jeng, Yih-Nen; Yang, Tzung-Ming; Lee, Shang-Yin
2011-01-01
This study shows that a small electret condenser microphone connected to a notebook or a personal computer (PC) has a prominent response in the extremely low frequency region in a specific environment. It confines most acoustic waves within a tiny air cell as follows. The air cell is constructed by drilling a small hole in a digital versatile disk (DVD) plate. A small speaker and an electret condenser microphone are attached to the two sides of the hole. Thus, the acoustic energy emitted by the speaker and reaching the microphone is strong enough to actuate the diaphragm of the latter. The experiments showed that, once small air leakages are allowed on the margin of the speaker, the microphone captured the signal in the range of 0.5 to 20 Hz. Moreover, by removing the plastic cover of the microphone and attaching the microphone head to the vibration surface, the low frequency signal can be effectively captured too. Two examples are included to show the convenience of applying the microphone to pick up the low frequency vibration information of practical systems. PMID:22346594
Development of a miniature Stirling cryocooler for LWIR small satellite applications
NASA Astrophysics Data System (ADS)
Kirkconnell, C. S.; Hon, R. C.; Perella, M. D.; Crittenden, T. M.; Ghiaasiaan, S. M.
2017-05-01
The optimum small satellite (SmallSat) cryocooler system must be extremely compact and lightweight, achieved in this paper by operating a linear cryocooler at a frequency of approximately 300 Hz. Operation at this frequency, which is well in excess of the 100-150 Hz reported in recent papers on related efforts, requires an evolution beyond the traditional Oxford-class, flexure-based methods of setting the mechanical resonance. A novel approach that optimizes the electromagnetic design and the mechanical design together to simultaneously achieve the required dynamic and thermodynamic performances is described. Since highly miniaturized pulse tube coolers are fundamentally ill-suited for the sub-80K temperature range of interest because the boundary layer losses inside the pulse tube become dominant at the associated very small pulse tube size, a moving displacer Stirling cryocooler architecture is used. Compact compressor mechanisms developed on a previous program are reused for this design, and they have been adapted to yield an extremely compact Stirling warm end motor mechanism. Supporting thermodynamic and electromagnetic analysis results are reported.
1980-07-01
Quality of life; job satisfaction. Report summarizes results of...following description: WORK: Doing work that is personally meaningful and important; pride in my work; job satisfaction; recognition for my efforts and...family (if married) or from home and friends (if unmarried). EXTREMELY UNDESIRABLE / INDIFFERENT / EXTREMELY DESIRABLE. 68. A favorable attitude on the
Entropy, extremality, euclidean variations, and the equations of motion
NASA Astrophysics Data System (ADS)
Dong, Xi; Lewkowycz, Aitor
2018-01-01
We study the Euclidean gravitational path integral computing the Rényi entropy and analyze its behavior under small variations. We argue that, in Einstein gravity, the extremality condition can be understood from the variational principle at the level of the action, without having to solve explicitly the equations of motion. This set-up is then generalized to arbitrary theories of gravity, where we show that the respective entanglement entropy functional needs to be extremized. We also extend this result to all orders in Newton's constant G_N, providing a derivation of quantum extremality. Understanding quantum extremality for mixtures of states provides a generalization of the dual of the boundary modular Hamiltonian which is given by the bulk modular Hamiltonian plus the area operator, evaluated on the so-called modular extremal surface. This gives a bulk prescription for computing the relative entropies to all orders in G_N. We also comment on how these ideas can be used to derive an integrated version of the equations of motion, linearized around arbitrary states.
Chaffee, M.A.
1983-01-01
A technique called SCORESUM was developed to display a maximum of multi-element geochemical information on a minimum number of maps for mineral assessment purposes. The technique can be done manually for a small analytical data set or can be done with a computer for a large data set. SCORESUM can be used with highly censored data and can also weight samples so as to minimize the chemical differences of diverse lithologies in different parts of a given study area. The full range of reported analyses for each element of interest in a data set is divided into four categories. Anomaly scores - values of 0 (background), 1 (weakly anomalous), 2 (moderately anomalous), and 3 (strongly anomalous) - are substituted for all of the analyses falling into each of the four categories. A group of elements based on known or suspected association in altered or mineralized areas is selected for study and the anomaly scores for these elements are summed for each sample site and then plotted on a map. Some of the results of geochemical studies conducted for mineral assessments in two areas are briefly described. The first area, the Mokelumne Wilderness and vicinity, is a relatively small and geologically simple one. The second, the Walker Lake 1° × 2° quadrangle, is a large area that has extremely complex geology and that contains a number of different mineral deposit environments. These two studies provide examples of how the SCORESUM technique has been used (1) to enhance relatively small but anomalous areas and (2) to delineate and rank areas containing geochemical signatures for specific suites of elements related to certain types of alteration or mineralization. © 1983.
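A minimal sketch of the scoring-and-summing step as described above follows; the element suite, thresholds, and sample values are placeholders and are not taken from either study area.

```python
# SCORESUM-style anomaly scoring: bin each element's value into four categories
# (scores 0-3) and sum the scores of a chosen element suite per sample site.
def score(value, thresholds):
    """thresholds = (weak, moderate, strong) upper bounds of the lower bins."""
    weak, moderate, strong = thresholds
    if value <= weak:
        return 0      # background
    if value <= moderate:
        return 1      # weakly anomalous
    if value <= strong:
        return 2      # moderately anomalous
    return 3          # strongly anomalous

def scoresum(sample, suite_thresholds):
    """Sum anomaly scores over the element suite for one sample site."""
    return sum(score(sample[el], th) for el, th in suite_thresholds.items())

# hypothetical ppm thresholds for a Cu-Mo-Pb suite and two sample sites
suite = {"Cu": (30, 60, 120), "Mo": (2, 4, 8), "Pb": (20, 40, 80)}
sites = [{"Cu": 25, "Mo": 1, "Pb": 18}, {"Cu": 140, "Mo": 5, "Pb": 45}]
print([scoresum(s, suite) for s in sites])   # -> [0, 7]: the second site stands out
```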
Transmission disequilibrium of small CNVs in simplex autism.
Krumm, Niklas; O'Roak, Brian J; Karakoc, Emre; Mohajeri, Kiana; Nelson, Ben; Vives, Laura; Jacquemont, Sebastien; Munson, Jeff; Bernier, Raphe; Eichler, Evan E
2013-10-03
We searched for disruptive, genic rare copy-number variants (CNVs) among 411 families affected by sporadic autism spectrum disorder (ASD) from the Simons Simplex Collection by using available exome sequence data and CoNIFER (Copy Number Inference from Exome Reads). Compared to high-density SNP microarrays, our approach yielded ∼2× more smaller genic rare CNVs. We found that affected probands inherited more CNVs than did their siblings (453 versus 394, p = 0.004; odds ratio [OR] = 1.19) and that the probands' CNVs affected more genes (921 versus 726, p = 0.02; OR = 1.30). These smaller CNVs (median size 18 kb) were transmitted preferentially from the mother (136 maternal versus 100 paternal, p = 0.02), although this bias occurred irrespective of affected status. The excess burden of inherited CNVs among probands was driven primarily by sibling pairs with discordant social-behavior phenotypes (p < 0.0002, measured by Social Responsiveness Scale [SRS] score), which contrasts with families where the phenotypes were more closely matched or less extreme (p > 0.5). Finally, we found enrichment of brain-expressed genes unique to probands, especially in the SRS-discordant group (p = 0.0035). In a combined model, our inherited CNVs, de novo CNVs, and de novo single-nucleotide variants all independently contributed to the risk of autism (p < 0.05). Taken together, these results suggest that small transmitted rare CNVs play a role in the etiology of simplex autism. Importantly, the small size of these variants aids in the identification of specific genes as additional risk factors associated with ASD. Copyright © 2013 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
Chen, Han-Kuang; Pemberton, Richard
2016-01-08
We report a case of a patient who presented with an extremely high serum prostate specific antigen (PSA) level and underwent radical prostatectomy for presumed prostate cancer. Surprisingly, the whole mount prostatectomy specimen showed only small volume, organ-confined prostate adenocarcinoma and a large, benign intraprostatic cyst, which was thought to be responsible for the PSA elevation. © 2016 BMJ Publishing Group Ltd.
Near-Earth Object (NEO) Hazard Background
NASA Technical Reports Server (NTRS)
Mazanek, Daniel D.
2005-01-01
The fundamental problem regarding NEO hazards is that the Earth and other planets, as well as their moons, share the solar system with a vast number of small planetary bodies and orbiting debris. Objects of substantial size are typically classified as either comets or asteroids. Although the solar system is quite expansive, the planets and moons (as well as the Sun) are occasionally impacted by these objects. We live in a cosmic shooting gallery where collisions with Earth occur on a regular basis. Because the number of smaller comets and asteroids is believed to be much greater than that of larger objects, their impact frequency is significantly higher. Fortunately, the smaller objects, which are much more numerous, are usually neutralized by the Earth's protective atmosphere. It is estimated that between 1000 and 10,000 tons of debris fall to Earth each year, most of it in the form of dust particles and extremely small meteorites. With no atmosphere, the Moon's surface is continuously impacted with dust and small debris. On November 17 and 18, 1999, during the annual Leonid meteor shower, several lunar surface impacts were observed by amateur astronomers in North America. The Leonids result from the Earth's passage each year through the debris ejected from Comet Tempel-Tuttle. These annual showers provide a periodic reminder of the possibility of a much more consequential cosmic collision, and the heavily cratered lunar surface acts as a constant testimony to the impact threat. The impact problem and those planetary bodies that are a threat have been discussed in great depth in a wide range of publications and books, such as The Spaceguard Survey, Hazards Due to Comets and Asteroids, and Cosmic Catastrophes. This paper gives a brief overview of the background of this problem and addresses some limitations of ground-based surveys for detection of small and/or faint near-Earth objects.
NASA Astrophysics Data System (ADS)
Zervas, G.; Tsiplakou, E.
2012-03-01
Greenhouse gas (GHG) emissions are expected to cause global warming, which results in extreme weather changes that could affect crop yields and productivity, food supplies and food prices. It is also expected that climate change will have an impact on animal metabolism and health, reproduction and productivity. On the other hand, the expected increase in demand for animal-origin products in the coming years will increase the number of reared animals and consequently GHG emissions. This paper outlines the main GHGs emitted from livestock, which are CO2, CH4 and N2O, coming from respiration, enteric fermentation and manure management respectively, with CH4 and N2O having the highest global warming potential. Ruminant livestock has the highest contribution to these GHG emissions, with the small ruminant share being 12.25% of the total GHG emissions from livestock's enteric and manure CH4 and manure N2O in CO2 equivalent, producing 9.45 kg CO2 equivalent per kg body weight, with the respective values for cattle, pigs and poultry being 5.45, 3.97 and 3.25. The production systems significantly affect GHG emissions: grazing, livestock-crop complex, and intensive systems account for 30.5%, 67.29% and 5.51% of total CH4 emissions (from enteric fermentation and manure management) and 24.32%, 68.11% and 7.57% of N2O, respectively. Taking into account the positive and negative impacts of small ruminant livestock production systems on the environment in general, it is recommended that a number of potentially effective measures be taken and the appropriate mitigation technologies applied in order to effectively reduce GHG emissions to the atmosphere, with no adverse effects on the intensification and increased productivity of small ruminant production systems.
Evidence of population resistance to extreme low flows in a fluvial-dependent fish species
Katz, Rachel A.; Freeman, Mary C.
2015-01-01
Extreme low streamflows are natural disturbances to aquatic populations. Species in naturally intermittent streams display adaptations that enhance persistence during extreme events; however, the fate of populations in perennial streams during unprecedented low-flow periods is not well understood. Biota requiring swift-flowing habitats may be especially vulnerable to flow reductions. We estimated the abundance and local survival of a native fluvial-dependent fish species (Etheostoma inscriptum) across 5 years encompassing historic low flows in a sixth-order southeastern USA perennial river. Based on capture-mark-recapture data, the study shoal may have acted as a refuge during severe drought, with increased young-of-the-year (YOY) recruitment and occasionally high adult immigration. Contrary to expectations, summer and autumn survival rates (30 days) were not strongly depressed during low-flow periods, despite 25%-80% reductions in monthly discharge. Instead, YOY survival increased with lower minimum discharge and in response to small rain events that increased low-flow variability. Age-1+ fish showed the opposite pattern, with survival decreasing in response to increasing low-flow variability. Results from this population dynamics study of a small fish in a perennial river suggest that fluvial-dependent species can be resistant to extreme flow reductions through enhanced YOY recruitment and high survival.
Juckett, D A; Rosenberg, B
1992-04-21
The distributions for human disease-specific mortality exhibit two striking characteristics: survivorship curves that intersect near the longevity limit; and the clustering of best-fitting Weibull shape parameter values into groups centered on integers. Correspondingly, we have hypothesized that the distribution intersections result from either competitive processes or population partitioning and that the integral clustering in the shape parameter results from the occurrence of a small number of rare, rate-limiting events in disease progression. In this report we initiate a theoretical examination of these questions by exploring serial chain model dynamics and parametric competing risks theory. The links in our chain models are composed of more than one bond, where the number of bonds in a link is denoted the link size and is the number of events necessary to break the link and, hence, the chain. We explored chains with all links of the same size or with segments of the chain composed of different size links (competition). Simulations showed that chain breakage dynamics depended on the weakest-link principle and followed extreme-value kinetics which were very similar to human mortality kinetics. In particular, failure distributions for simple chains were Weibull-type extreme-value distributions with shape parameter values that were identifiable with the integral link size in the limit of infinite chain length. Furthermore, for chains composed of several segments of differing link size, the survival distributions for the various segments converged at a point in the S(t) tails indistinguishable from human data. This was also predicted by parametric competing risks theory using Weibull underlying distributions. In both the competitive chain simulations and the parametric competing risks theory, however, the shape values for the intersecting distributions deviated from the integer values typical of human data. We conclude that rare events can be the source of integral shapes in human mortality, that convergence is a salient feature of multiple endpoints, but that pure competition may not be the best explanation for the exact type of convergence observable in human mortality. Finally, while the chain models were not motivated by any specific biological structures, interesting biological correlates to them may be useful in gerontological research.
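The weakest-link behaviour described above is easy to reproduce numerically. The sketch below makes stated assumptions that are not spelled out in the abstract: bond failure times are i.i.d. exponential, a link fails when all of its bonds have failed, and the chain fails at its earliest-failing link; it then checks that the fitted Weibull shape is close to the link size.

```python
# Weakest-link chain simulation: chain failure times are approximately Weibull
# with shape equal to the number of bonds per link (for long chains).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def chain_failure_times(n_chains=2000, n_links=1000, bonds_per_link=3):
    bond_times = rng.exponential(1.0, size=(n_chains, n_links, bonds_per_link))
    link_times = bond_times.max(axis=2)    # a link breaks when its last bond breaks
    return link_times.min(axis=1)          # the chain breaks at its weakest link

times = chain_failure_times(bonds_per_link=3)
shape, loc, scale = stats.weibull_min.fit(times, floc=0.0)
print(f"fitted Weibull shape: {shape:.2f} (link size was 3)")
```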
Printing Proteins as Microarrays for High-Throughput Function Determination
NASA Astrophysics Data System (ADS)
MacBeath, Gavin; Schreiber, Stuart L.
2000-09-01
Systematic efforts are currently under way to construct defined sets of cloned genes for high-throughput expression and purification of recombinant proteins. To facilitate subsequent studies of protein function, we have developed miniaturized assays that accommodate extremely low sample volumes and enable the rapid, simultaneous processing of thousands of proteins. A high-precision robot designed to manufacture complementary DNA microarrays was used to spot proteins onto chemically derivatized glass slides at extremely high spatial densities. The proteins attached covalently to the slide surface yet retained their ability to interact specifically with other proteins, or with small molecules, in solution. Three applications for protein microarrays were demonstrated: screening for protein-protein interactions, identifying the substrates of protein kinases, and identifying the protein targets of small molecules.
2006-02-13
business clientele, additional costs to hire a manager or replacement workers, complete loss of family income, and in extreme situations, bankruptcy, are... USAWC Strategy Research Project: Maintaining Small Business Support in Times of Increased Army National Guard Utilization: An Impending Crisis. Reporting period: 00-00-2005 to 00-00-2006.
Evolution caused by extreme events.
Grant, Peter R; Grant, B Rosemary; Huey, Raymond B; Johnson, Marc T J; Knoll, Andrew H; Schmitt, Johanna
2017-06-19
Extreme events can be a major driver of evolutionary change over geological and contemporary timescales. Outstanding examples are evolutionary diversification following mass extinctions caused by extreme volcanism or asteroid impact. The evolution of organisms in contemporary time is typically viewed as a gradual and incremental process that results from genetic change, environmental perturbation or both. However, contemporary environments occasionally experience strong perturbations such as heat waves, floods, hurricanes, droughts and pest outbreaks. These extreme events set up strong selection pressures on organisms, and are small-scale analogues of the dramatic changes documented in the fossil record. Because extreme events are rare, almost by definition, they are difficult to study. So far most attention has been given to their ecological rather than to their evolutionary consequences. We review several case studies of contemporary evolution in response to two types of extreme environmental perturbations, episodic (pulse) or prolonged (press). Evolution is most likely to occur when extreme events alter community composition. We encourage investigators to be prepared for evolutionary change in response to rare events during long-term field studies.This article is part of the themed issue 'Behavioural, ecological and evolutionary responses to extreme climatic events'. © 2017 The Author(s).
NASA Astrophysics Data System (ADS)
Voigt, M.; Lorenz, P.; Kruschke, T.; Osinski, R.; Ulbrich, U.; Leckebusch, G. C.
2012-04-01
Winter storms and related gusts can cause extensive socio-economic damage. Knowledge about the occurrence and the small scale structure of such events may help make regional estimates of storm losses. For a high spatial and temporal representation, the use of dynamical downscaling methods (RCMs) is a cost-intensive and time-consuming option and therefore only applicable to a limited number of events. The current study explores a methodology to provide a statistical downscaling, which offers small scale structured gust fields from an extended set of large scale structured events. Radial-basis-function (RBF) networks in combination with bidirectional Kohonen (BDK) maps are used to generate the gust fields at a spatial resolution of 7 km from the 6-hourly mean sea level pressure field from ECMWF reanalysis data. BDK maps are a kind of neural network which handles supervised classification problems. In this study they are used to provide prototypes for the RBF network and give a first-order approximation for the output data. A further interpolation is done by the RBF network. For the training process the 50 most extreme storm events over the North Atlantic area from 1957 to 2011 are used, which have been selected from the ECMWF reanalysis datasets ERA40 and ERA-Interim by an objective wind-based tracking algorithm. These events were downscaled dynamically by application of the DWD model chain GME → COSMO-EU. Different model parameters and their influence on the quality of the generated high-resolution gust fields are studied. It is shown that the statistical RBF network approach delivers reasonable results in modeling the regional gust fields for untrained events.
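Only the RBF interpolation step is sketched below (the BDK prototype stage is omitted); the predictor and gust-field arrays are synthetic placeholders with made-up shapes, and scipy's RBFInterpolator stands in for the study's RBF network.

```python
# RBF mapping from large-scale predictors to a high-resolution gust field (toy data).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(42)

n_events, n_predictors, n_gridpoints = 50, 20, 400
X_train = rng.normal(size=(n_events, n_predictors))            # MSLP-derived features per storm
y_train = rng.gamma(2.0, 8.0, size=(n_events, n_gridpoints))   # gust field per storm (m/s)

# one RBF model mapping the predictor vector to the full high-resolution gust field
rbf = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline", smoothing=1.0)

X_new = rng.normal(size=(1, n_predictors))   # a storm outside the training set
gust_field = rbf(X_new)                      # shape (1, n_gridpoints)
print(gust_field.shape, float(gust_field.max()))
```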
Chen, Nan; Majda, Andrew J
2017-12-05
Solving the Fokker-Planck equation for high-dimensional complex dynamical systems is an important issue. Recently, the authors developed efficient statistically accurate algorithms for solving the Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures, which contain many strong non-Gaussian features such as intermittency and fat-tailed probability density functions (PDFs). The algorithms involve a hybrid strategy with a small number of samples [Formula: see text], where a conditional Gaussian mixture in a high-dimensional subspace via an extremely efficient parametric method is combined with a judicious Gaussian kernel density estimation in the remaining low-dimensional subspace. In this article, two effective strategies are developed and incorporated into these algorithms. The first strategy involves a judicious block decomposition of the conditional covariance matrix such that the evolutions of different blocks have no interactions, which allows an extremely efficient parallel computation due to the small size of each individual block. The second strategy exploits statistical symmetry for a further reduction of [Formula: see text] The resulting algorithms can efficiently solve the Fokker-Planck equation with strongly non-Gaussian PDFs in much higher dimensions even with orders in the millions and thus beat the curse of dimension. The algorithms are applied to a [Formula: see text]-dimensional stochastic coupled FitzHugh-Nagumo model for excitable media. An accurate recovery of both the transient and equilibrium non-Gaussian PDFs requires only [Formula: see text] samples! In addition, the block decomposition facilitates the algorithms to efficiently capture the distinct non-Gaussian features at different locations in a [Formula: see text]-dimensional two-layer inhomogeneous Lorenz 96 model, using only [Formula: see text] samples. Copyright © 2017 the Author(s). Published by PNAS.
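A structural sketch of the hybrid estimate described above follows, under stated assumptions: the conditional Gaussian means and covariances of the high-dimensional subspace are treated as given inputs (in the paper they come from the conditional Gaussian framework), and a fixed-bandwidth Gaussian kernel is used in the low-dimensional subspace; all sizes and values are toy placeholders.

```python
# Hybrid density estimate: Gaussian kernel in the observed low-dimensional
# subspace, conditional Gaussians in the hidden high-dimensional subspace,
# averaged over a small number of samples.
import numpy as np
from scipy.stats import multivariate_normal

def hybrid_density(uI, uII, uI_samples, cond_means, cond_covs, kernel_bandwidth):
    """Evaluate the hybrid estimate of p(uI, uII) from L samples."""
    L, dI = uI_samples.shape
    kernel_cov = (kernel_bandwidth ** 2) * np.eye(dI)
    total = 0.0
    for i in range(L):
        k = multivariate_normal.pdf(uI, mean=uI_samples[i], cov=kernel_cov)
        g = multivariate_normal.pdf(uII, mean=cond_means[i], cov=cond_covs[i])
        total += k * g
    return total / L

# toy sizes: 2-dimensional observed subspace, 5-dimensional hidden subspace, L = 100 samples
rng = np.random.default_rng(3)
L, dI, dII = 100, 2, 5
uI_samples = rng.normal(size=(L, dI))
cond_means = rng.normal(size=(L, dII))
cond_covs = np.array([np.eye(dII) * (0.5 + 0.1 * rng.random()) for _ in range(L)])
print(hybrid_density(np.zeros(dI), np.zeros(dII), uI_samples, cond_means, cond_covs, 0.3))
```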
Combining multiple sources of data to inform conservation of Lesser Prairie-Chicken populations
Ross, Beth; Haukos, David A.; Hagen, Christian A.; Pitman, James
2018-01-01
Conservation of small populations is often based on limited data from spatially and temporally restricted studies, resulting in management actions based on an incomplete assessment of the population drivers. If fluctuations in abundance are related to changes in weather, proper management is especially important, because extreme weather events could disproportionately affect population abundance. Conservation assessments, especially for vulnerable populations, are aided by a knowledge of how extreme events influence population status and trends. Although important for conservation efforts, data may be limited for small or vulnerable populations. Integrated population models maximize information from various sources of data to yield population estimates that fully incorporate uncertainty from multiple data sources while allowing for the explicit incorporation of environmental covariates of interest. Our goal was to assess the relative influence of population drivers for the Lesser Prairie-Chicken (Tympanuchus pallidicinctus) in the core of its range, western and southern Kansas, USA. We used data from roadside lek count surveys, nest monitoring surveys, and survival data from telemetry monitoring combined with climate (Palmer drought severity index) data in an integrated population model. Our results indicate that variability in population growth rate was most influenced by variability in juvenile survival. The Palmer drought severity index had no measurable direct effects on adult survival or mean number of offspring per female; however, there were declines in population growth rate following severe drought. Because declines in population growth rate occurred at a broad spatial scale, declines in response to drought were likely due to decreases in chick and juvenile survival rather than emigration outside of the study area. Overall, our model highlights the importance of accounting for environmental and demographic sources of variability, and provides a thorough method for simultaneously evaluating population demography in response to long-term climate effects.
Particle growth kinetics over the Amazon rainforest
NASA Astrophysics Data System (ADS)
Pinterich, T.; Andreae, M. O.; Artaxo, P.; Kuang, C.; Longo, K.; Machado, L.; Manzi, A. O.; Martin, S. T.; Mei, F.; Pöhlker, C.; Pöhlker, M. L.; Poeschl, U.; Shilling, J. E.; Shiraiwa, M.; Tomlinson, J. M.; Zaveri, R. A.; Wang, J.
2016-12-01
Aerosol particles larger than 100 nm play a key role in global climate by acting as cloud condensation nuclei (CCN). Most of these particles, whether originating from new particle formation or emitted directly into the atmosphere, are initially too small to serve as CCN. These small particles grow to CCN size mainly through condensation of secondary species. In one extreme, the growth is dictated by kinetic condensation of very low-volatility compounds, favoring the growth of the smallest particles; in the other extreme, the process is driven by Raoult's law-based equilibrium partitioning of semi-volatile organic compounds, favoring the growth of larger particles. These two mechanisms can lead to very different production rates of CCN. The growth of particles depends on a number of parameters, including the volatility of the condensing species, particle phase, and diffusivity inside the particles, and this process is not well understood, in part due to a lack of ambient data. Here we examine atmospheric particle growth using high-resolution size distributions measured onboard the DOE G-1 aircraft during the GoAmazon campaign, which took place from January 2014 to December 2015 near Manaus, Brazil, a city surrounded by natural forest for over 1000 km in every direction. City plumes are clearly identified by the strong enhancement of nucleation and Aitken mode particle concentrations over the clean background. As the plume traveled downwind, particle growth was observed and is attributed to condensation of secondary species and coagulation (Fig. 1). The observed aerosol growth is modeled using MOSAIC (Model for Simulating Aerosol Interactions and Chemistry), which dynamically partitions multiple compounds to all particle size bins by taking into account compound volatility, gas-phase diffusion, interfacial mass accommodation, particle-phase diffusion, and particle-phase reaction. The results from both wet and dry seasons will be discussed.
Altered neuromuscular control of leg stiffness following soccer-specific exercise.
Oliver, Jon L; De Ste Croix, Mark B A; Lloyd, Rhodri S; Williams, Craig A
2014-11-01
To examine changes to neuromuscular control of leg stiffness following 42 min of soccer-specific exercise. Ten youth soccer players, aged 15.8 ± 0.4 years, stature 1.73 ± 0.06 m and mass 59.8 ± 9.7 kg, hopped on a force plate at a self-selected frequency before and after simulated soccer exercise performed on a non-motorised treadmill. During hopping, muscle activity was measured using surface electromyography from four lower limb muscles and analysed to determine feedforward- and feedback-mediated activity, as well as co-contraction. There was a small, non-significant change in stiffness following exercise (26.6 ± 10.6 vs. 24.0 ± 7.0 kN m^-1, p > 0.05, ES = 0.25), with half the group increasing and half decreasing their stiffness. Changes in stiffness were significantly related to changes in centre of mass (CoM) displacement (r = 0.90, p < 0.01, extremely large correlation) but not changes in peak ground reaction force (r = 0.58, p > 0.05, large correlation). A number of significant relationships were observed between changes in stiffness and CoM displacement with changes in feedforward, feedback and eccentric muscle activity of the soleus and vastus lateralis muscles following exercise (r = 0.64-0.98, p < 0.05, large-extremely large correlations), but not with changes in co-contraction (r = 0.11-0.55, p > 0.05, small-large correlations). Following soccer-specific exercise, individual changes in feedforward- and reflex-mediated activity of the soleus and vastus lateralis, and not co-contraction around the knee and ankle, modulate changes in CoM displacement and leg stiffness.
NASA Astrophysics Data System (ADS)
Watanabe, S.; Utsumi, N.; Take, M.; Iida, A.
2016-12-01
This study aims to develop a new approach to assess the impact of climate change on the small oceanic islands in the Pacific. In the new approach, the change in the probabilities of various situations is projected, taking into account the spread of projections derived from ensemble simulations, instead of projecting only the most probable situation. We utilized the database for Policy Decision making for Future climate change (d4PDF), a long-term, high-resolution database composed of the results of 100 ensemble climate experiments. A new methodology, Multi Threshold Ensemble Assessment (MTEA), was developed using the d4PDF in order to assess the impact of climate change. We focused on the impact of climate change on tourism because it has played an important role in the economy of the Pacific Islands. The Yaeyama Region, one of the tourist destinations in Okinawa, Japan, was selected as the case study site. Two kinds of impact were assessed: the change in the probability of extreme climate phenomena and the change in tourist satisfaction associated with weather. The d4PDF ensemble experiments and a questionnaire survey conducted by a local government were used for the assessment. The results indicate that the strength of extreme events would increase, whereas their probability of occurrence would decrease. This change should result in an increase in the number of clear days and could contribute to improved tourist satisfaction.
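The multi-threshold idea can be illustrated with a short sketch: for each threshold, compare the fraction of ensemble members (or member-days) exceeding it in the present-climate and future-climate experiments. This is only a schematic reading of such an assessment, with illustrative names; it is not the exact MTEA procedure.

```python
import numpy as np

def exceedance_change(present, future, thresholds):
    """For each threshold, estimate the exceedance probability in the
    present and future ensembles and return the change.
    'present' and 'future' are 1-D arrays pooling all ensemble members."""
    results = []
    for t in thresholds:
        p_now = np.mean(present > t)   # empirical exceedance probability, present climate
        p_fut = np.mean(future > t)    # empirical exceedance probability, future climate
        results.append((t, p_now, p_fut, p_fut - p_now))
    return results
```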
NASA Astrophysics Data System (ADS)
Bárdossy, András; Pegram, Geoffrey
2017-01-01
The use of radar measurements for the space-time estimation of precipitation has for many decades been a central topic in hydro-meteorology. In this paper we are interested specifically in daily and sub-daily extreme values of precipitation at gauged or ungauged locations, which are important for design. The purpose of the paper is to develop a methodology to combine daily precipitation observations and radar measurements to estimate sub-daily extremes at point locations. Radar data corrected using precipitation-reflectivity relationships lead to biased estimations of extremes. Different possibilities of correcting systematic errors using the daily observations are investigated. Observed daily gauge amounts are interpolated to unsampled points and subsequently disaggregated using the sub-daily values obtained from the radar. Different corrections based on the spatial variability and the sub-daily entropy of scaled rainfall distributions are used to provide unbiased corrections of short-duration extremes. Additionally, a statistical procedure not based on a day-by-day matching correction is tested. In this last procedure, as we are only interested in rare extremes, low to medium values of rainfall depth were neglected, leaving a small number of L days of ranked daily maxima in each set per year, whose sum typically comprises about 50% of each annual rainfall total. The sum of these L day maxima is first interpolated using a Kriging procedure. Subsequently this sum is disaggregated to daily values using a nearest neighbour procedure. The daily sums are then disaggregated by using the relative values of the biggest L radar-based days. Of course, the timings of radar and gauge maxima can be different, so the method presented here uses radar for disaggregating daily gauge totals down to 15 min intervals in order to extract the maxima of sub-hourly through to daily rainfall. The methodologies were tested in South Africa, where an S-band radar operated relatively continuously at Bethlehem from 1998 to 2003, whose scan at 1.5 km above ground [CAPPI] overlapped a dense (10 km spacing) set of 45 pluviometers recording in the same 6-year period. This valuable set of data was obtained from each of 37 selected radar pixels [1 km square in plan] which contained a pluviometer not masked out by the radar footprint. The pluviometer data were also aggregated to daily totals for the same purpose. The extremes obtained using the disaggregation methods were compared to the observed extremes in a cross-validation procedure. The unusual and novel goal was not to reproduce the precipitation matching in space and time, but to obtain frequency distributions of the point extremes, which we found to be stable.
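A minimal sketch of the mass-conserving disaggregation step is shown below: an interpolated daily gauge total is split over the radar's sub-daily (e.g. 15-min) intervals in proportion to the radar-estimated amounts of that day. The names and the uniform fallback for radar-dry days are assumptions, not details taken from the paper.

```python
import numpy as np

def disaggregate_daily(gauge_daily, radar_subdaily):
    """Split a daily gauge total into sub-daily amounts in proportion to the
    radar-estimated sub-daily fractions of that day (mass-conserving)."""
    radar_subdaily = np.asarray(radar_subdaily, dtype=float)
    total = radar_subdaily.sum()
    if total <= 0.0:
        # no radar rain detected that day: spread uniformly (an assumption)
        return np.full_like(radar_subdaily, gauge_daily / radar_subdaily.size)
    return gauge_daily * radar_subdaily / total
```

By construction the sub-daily values sum back to the interpolated daily total, so the daily climatology is preserved while the radar supplies the within-day timing.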
Data Mining of Extremely Large Ad Hoc Data Sets to Produce Inverted Indices
2016-06-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. Master's Thesis: Data Mining of Extremely Large Ad Hoc Data Sets to Produce Inverted Indices. Approved for public release; distribution is unlimited.
NASA Astrophysics Data System (ADS)
Feng, Tai-Chen; Zhang, Ke-Quan; Su, Hai-Jing; Wang, Xiao-Juan; Gong, Zhi-Qiang; Zhang, Wen-Yu
2015-10-01
Based on an objective identification technique for regional low temperature events (OITRLTE), regional low temperature events in China have been identified from daily minimum temperature data for 1960 to 2013. During this period, there were 60 regional extreme low temperature events (ERLTEs) among the 690 regional low temperature events (RLTEs) identified. The 60 ERLTEs are analyzed in this paper. The results show that over the last 50 years the intensity of the ERLTEs has weakened, their duration has shortened, and their affected area has become smaller; however, that situation has changed in this century. In terms of spatial distribution, the regions of high intensity are mainly in northern China, while the regions of high frequency are concentrated in central and eastern China. According to the affected area of each event, the 60 ERLTEs are classified into six types, and the corresponding atmospheric circulation background fields are analyzed. The results show that, influenced by stronger blocking highs over the Urals and Lake Baikal, as well as a stronger southward-displaced polar vortex and East Asian major trough at 500-hPa geopotential height, cold air from high latitudes is guided southward, and anomalous northerly winds at 850 hPa drive the cold air into China along diverse paths, thereby forming the different types of regional extreme low temperature events in winter. Project supported by the National Natural Science Foundation of China (Grant No. 41305075), the National Basic Research Program of China (Grant Nos. 2012CB955203 and 2012CB955902), and the Special Scientific Research on Public Welfare Industry, China (Grant No. GYHY201306049).
Recent work on network application layer: MioNet, the virtual workplace for small businesses
NASA Astrophysics Data System (ADS)
Hesselink, Lambertus; Rizal, Dharmarus; Bjornson, Eric; Miller, Brian; Chan, Keith
2005-11-01
Small businesses must be extremely efficient and smartly leverage their resources, suppliers, and partners to compete successfully with larger firms. A successful small business requires a set of companies with interlocking business relationships that are dynamic and needs-based. There has been no software solution that offers a secure and flexible way to efficiently connect the computer-based employees and partners of a small business. In this invited paper, we discuss MioNet, a secure and powerful data management platform which may provide millions of small businesses with a virtual workplace and help them to succeed.
Characteristics of a 30-cm thruster operated with small hole accelerator grid ion optics
NASA Technical Reports Server (NTRS)
Vahrenkamp, R. P.
1976-01-01
Small hole accelerator grid ion optical systems have been tested as a possible means of improving 30-cm ion thruster performance. The effects of small hole grids on the critical aspects of thruster operation including discharge chamber performance, doubly-charged ion concentration, effluent beam characteristics, and plasma properties have been evaluated. In general, small hole accelerator grids are beneficial in improving thruster performance while maintaining low double ion ratios. However, extremely small accelerator aperture diameters tend to degrade beam divergence characteristics. A quantitative discussion of these advantages and disadvantages of small hole accelerator grids, as well as resulting variations in thruster operation characteristics, is presented.
2008-11-01
ISTC Project No. 1592P: The Comparative Study of the Effects of Extremely Low Frequency Electromagnetic Fields and Infrasound on Water Molecule Dissociation and Generation of Reactive Oxygen Species (ISTC Registration No. A-1592p). The work was performed under an agreement with the International Science and Technology Center (ISTC), Moscow.
Peters, Denise M; McPherson, Aaron K; Fletcher, Blake; McClenaghan, Bruce A; Fritz, Stacy L
2013-09-01
The use of video gaming as a therapeutic intervention has increased in popularity; however, the number of repetitions in comparison with traditional therapy methods has yet to be investigated. The primary purpose of this study was to document and compare the number of repetitions performed while playing 1 of 2 video gaming systems for a time frame similar to that of a traditional therapy session in individuals with chronic stroke. Twelve participants with chronic stroke (mean age, 66.8 ± 8.2 years; time poststroke, 19.2 ± 15.4 months) completed video game play sessions, using either the Nintendo Wii or the Playstation 2 EyeToy. A total of 203 sessions were captured on video record; of these, 50 sessions for each gaming system were randomly selected for analysis. For each selected record, active upper and lower extremity repetitions were counted for a 36-minute segment of the recorded session. The Playstation 2 EyeToy group produced an average of 302.5 (228.1) upper extremity active movements and 189.3 (98.3) weight shifts, significantly higher than the Nintendo Wii group, which produced an average of 61.9 (65.7) upper extremity active movements and 109.7 (78.5) weight shifts. No significant differences were found in steps and other lower extremity active movements between the 2 systems. The Playstation 2 EyeToy group produced more upper extremity active movements and weight shifting movements than the Nintendo Wii group; the number and type of repetitions varied across games. Active gaming (specifically Playstation 2 EyeToy) provided more upper extremity repetitions than those reported in the literature by using traditional therapy, suggesting that it may be a modality to promote increased active movements in individuals poststroke.
The analysis of ensembles of moderately saturated interstellar lines
NASA Technical Reports Server (NTRS)
Jenkins, E. B.
1986-01-01
It is shown that the combined equivalent widths for a large population of Gaussian-like interstellar line components, each with different central optical depths tau(0) and velocity dispersions b, exhibit a curve of growth (COG) which closely mimics that of a single, pure Gaussian distribution in velocity. Two parametric distribution functions for the line populations are considered: a bivariate Gaussian for tau(0) and b, and a power-law distribution for tau(0) combined with a Gaussian dispersion for b. First, COGs for populations having an extremely large number of nonoverlapping components are derived, and the implications are shown by focusing on the doublet-ratio analysis for a pair of lines whose f-values differ by a factor of two. The consequences of having, instead of an almost infinite number of lines, a relatively small collection of components added together for each member of a doublet are examined. The theory of how the equivalent widths grow for populations of overlapping Gaussian profiles is developed. Examples of the composite COG analysis applied to existing collections of high-resolution interstellar line data are presented.
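For nonoverlapping components the combined equivalent width is simply the sum of the individual equivalent widths, each obtained by integrating 1 − exp(−τ(v)) over velocity for a Gaussian optical-depth profile τ(v) = τ0 exp(−(v/b)²). A small numerical sketch (illustrative values only, not data from the paper):

```python
import numpy as np

def equivalent_width(tau0, b, velocity_grid):
    """Equivalent width (in velocity units) of one Gaussian-in-velocity
    absorption component with central optical depth tau0 and dispersion b."""
    tau = tau0 * np.exp(-(velocity_grid / b) ** 2)
    return np.trapz(1.0 - np.exp(-tau), velocity_grid)

def composite_equivalent_width(tau0s, bs, velocity_grid):
    """Combined equivalent width of several nonoverlapping components:
    with no overlap, the absorbed areas simply add."""
    return sum(equivalent_width(t, b, velocity_grid) for t, b in zip(tau0s, bs))

v = np.linspace(-200.0, 200.0, 4001)  # km/s grid (illustrative)
print(composite_equivalent_width([0.5, 3.0], [5.0, 8.0], v))
```

Repeating this for the two members of a doublet (f-values differing by a factor of two scales each τ0 by two) reproduces the doublet-ratio behaviour discussed in the abstract.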
Adiabatic theory for the population distribution in the evolutionary minority game
NASA Astrophysics Data System (ADS)
Chen, Kan; Wang, Bing-Hong; Yuan, Baosheng
2004-02-01
We study the evolutionary minority game (EMG) using a statistical mechanics approach. We derive a theory for the steady-state population distribution of the agents. The theory is based on an “adiabatic approximation” in which short time fluctuations in the population distribution are integrated out to obtain an effective equation governing the steady-state distribution. We discover the mechanism for the transition from segregation (into opposing groups) to clustering (towards cautious behaviors). The transition is determined by two generic factors: the market impact (of the agents’ own actions) and the short time market inefficiency (arbitrage opportunities) due to fluctuations in the numbers of agents using opposite strategies. A large market impact favors “extreme” players who choose fixed opposite strategies, while large market inefficiency favors cautious players. The transition depends on the number of agents (N) and the effective rate of strategy switching. When N is small, the market impact is relatively large; this favors the extreme behaviors. Frequent strategy switching, on the other hand, leads to a clustering of the cautious agents.
Living on the edge: Fig tree phenology at the northern range limit of monoecious Ficus in China
NASA Astrophysics Data System (ADS)
Zhang, Lu-Shui; Compton, Stephen G.; Xiao, Hui; Lu, Qian; Chen, Yan
2014-05-01
Fig trees (Ficus) are a species-rich group of mainly tropical and subtropical plants that are of ecological importance because of the large numbers of vertebrates that utilise their figs for food. Factors limiting their distributions to warmer regions are still poorly understood, but are likely to include factors linked to their specialised pollination biology, because each Ficus species is dependent on one or a small number of host-specific fig wasps (Agaonidae) for pollination. Adult fig wasps are short-lived, but some species are capable of dispersing extremely long distances to pollinate their hosts. Close to its northern range limit we investigated the phenology of Ficus virens, the monoecious fig tree that reaches furthest north in China. Relatively few trees produced any figs, and very few retained figs throughout the winter. Despite this, new crops produced in spring were pollinated, with seasonally migrant pollinators from plants growing further south the most likely pollen vectors. An inability to initiate new crops at low temperatures may limit the distribution of monoecious fig trees to warmer areas.
Salty sisters: The women of halophiles
Baxter, Bonnie K.; Gunde-Cimerman, Nina; Oren, Aharon
2014-01-01
A history of halophile research reveals the commitment of scientists to uncovering the secrets of the limits of life, in particular life in high salt concentration and under extreme osmotic pressure. During the last 40 years, halophile scientists have indeed made important contributions to extremophile research, and prior international halophiles congresses have documented both the historical and the current work. During this period of salty discoveries, female scientists, in general, have grown in number worldwide. But those who worked in the field when there were small numbers of women sometimes saw their important contributions overshadowed by their male counterparts. Recent studies suggest that modern female scientists experience gender bias in matters such as conference invitations and even representation among full professors. In the field of halophilic microbiology, what is the impact of gender bias? How has the participation of women changed over time? What do women uniquely contribute to this field? What are factors that impact current female scientists to a greater degree? This essay emphasizes the “her story” (not “history”) of halophile discovery. PMID:24926287
Towards denoising XMCD movies of fast magnetization dynamics using extended Kalman filter.
Kopp, M; Harmeling, S; Schütz, G; Schölkopf, B; Fähnle, M
2015-01-01
The Kalman filter is a well-established approach to obtain information on the time-dependent state of a system from noisy observations. It was developed in the context of the Apollo project to estimate the deviation of the true trajectory of a rocket from the desired trajectory. Afterwards it was applied to many different systems with small numbers of components of the respective state vector (typically about 10). In all those cases the equation of motion for the state vector was known exactly. Fast dissipative magnetization dynamics is often investigated by x-ray magnetic circular dichroism movies (XMCD movies), which are often very noisy. In this situation the number of components of the state vector is extremely large (about 10^5), and the equation of motion for the dissipative magnetization dynamics (especially the values of the material parameters of this equation) is not well known. In the present paper it is shown by theoretical considerations that there is nevertheless no problem in principle with using the Kalman filter to denoise XMCD movies of fast dissipative magnetization dynamics. Copyright © 2014 Elsevier B.V. All rights reserved.
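For reference, a minimal linear Kalman filter recursion in Python is sketched below (generic predict/update steps with assumed model matrices); it is not the specific extended Kalman filter or material model of the paper.

```python
import numpy as np

def kalman_denoise(y, A, C, Q, R, x0, P0):
    """Minimal linear Kalman filter: propagate the state with model A,
    observe through C, and blend each noisy observation y[t] with the
    model prediction. Illustrative only; the paper's point is that the
    same recursion applies even for very large state vectors and
    imperfectly known dynamics."""
    x, P = x0.copy(), P0.copy()
    estimates = []
    for obs in y:
        # predict
        x = A @ x
        P = A @ P @ A.T + Q
        # update with the new observation
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (obs - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```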
Magnetic nanorings and manipulation of nanowires
NASA Astrophysics Data System (ADS)
Chien, C. L.
2006-03-01
The properties of nanoscale entities, such as nanorings and nanowires, and the response of such entities to external fields are dictated by their geometrical shapes and sizes, which can be manipulated by fabrication. We have developed a method for fabricating a large number of nanorings (10^10) with different sizes in the 100-nm range and different ring cross sections. During magnetic reversal, both the vortex state and the rotating onion state appear in different proportions, which depend on the ring diameter, the ring cross section, and the profile of the ring cross section. In the case of nanowires in suspension, the large aspect ratio of the nanowires can be exploited for manipulation despite extremely small Reynolds numbers of 10^-5. Using an AC electric field applied to microelectrodes, both magnetic and non-magnetic nanowires can be efficiently assembled into desired patterns. We also demonstrate rotation of nanowires with precisely controlled rotation speed and chirality, as well as an electrically driven nanowire micromotor a few in size. In collaboration with F. Q. Zhu, D. L. Fan, O. Tchernyshyov, R. C. Cammarata (Johns Hopkins University) and X. C. Zhu and J. G. Zhu (Carnegie-Mellon University).
An origin of good electrical conduction in La{sub 4}BaCu{sub 5}O{sub 13+δ}
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mori, Daiki; Asai, Shinichiro; Terasaki, Ichiro, E-mail: terra@cc.nagoya-u.ac.jp
2015-07-21
We have prepared a set of polycrystalline samples of the metallic copper oxide La{sub 4}BaCu{sub 5−x}Co{sub x}O{sub 13+δ} (0 ≤ x ≤ 0.35) and have measured the resistivity from 4 to 800 K. All the resistivities show metallic temperature dependence with a small magnitude less than 2 mΩ cm at 800 K, indicating that the metallic conduction is robust against impurities. The robust metallic conduction further suggests that this class of oxide is a promising candidate for electrical leads at high temperature, which might replace platinum. A detailed measurement and analysis on the Hall resistivity have revealed that at least two components are responsible for the electrical conduction, in which a large number of electrons of moderate mobility coexist with a much smaller number of holes of extremely high mobility. This large electron density well screens the impurity potential and retains the metallic conduction against 7% impurity doping.
Su, Ning; Zhai, Fei-Fei; Zhou, Li-Xin; Ni, Jun; Yao, Ming; Li, Ming-Li; Jin, Zheng-Yu; Gong, Gao-Lang; Zhang, Shu-Yang; Cui, Li-Ying; Tian, Feng; Zhu, Yi-Cheng
2017-01-01
Objective: To investigate the correlation between cerebral small vessel disease (CSVD) burden and motor performance of lower and upper extremities in community-dwelling populations. Methods: We performed a cross-sectional analysis on 770 participants enrolled in the Shunyi study, which is a population-based cohort study. CSVD burden, including white matter hyperintensities (WMH), lacunes, cerebral microbleeds (CMBs), perivascular spaces (PVS), and brain atrophy were measured using 3T magnetic resonance imaging. All participants underwent quantitative motor assessment of lower and upper extremities, which included 3-m walking speed, 5-repeat chair-stand time, 10-repeat pronation–supination time, and 10-repeat finger-tapping time. Data on demographic characteristics, vascular risk factors, and cognitive functions were collected. General linear model analysis was performed to identify potential correlations between motor performance measures and imaging markers of CSVD after controlling for confounding factors. Results: For motor performance of the lower extremities, WMH was negatively associated with gait speed (standardized β = -0.092, p = 0.022) and positively associated with chair-stand time (standardized β = 0.153, p < 0.0001, surviving FDR correction). For motor performance of the upper extremities, pronation–supination time was positively associated with WMH (standardized β = 0.155, p < 0.0001, surviving FDR correction) and negatively with brain parenchymal fraction (BPF; standardized β = -0.125, p = 0.011, surviving FDR correction). Only BPF was found to be negatively associated with finger-tapping time (standardized β = -0.123, p = 0.012). However, lacunes, CMBs, or PVS were not found to be associated with motor performance of lower or upper extremities in multivariable analysis. Conclusion: Our findings suggest that cerebral microstructural changes related to CSVD may affect motor performance of both lower and upper extremities. WMH and brain atrophy are most strongly associated with motor function deterioration in community-dwelling populations. PMID:29021757
NASA Astrophysics Data System (ADS)
Salack, S.; Worou, N. O.; Sanfo, S.; Nikiema, M. P.; Boubacar, I.; Paturel, J. E.; Tondoh, E. J.
2017-12-01
In West Africa, the risk of food insecurity linked to the low productivity of smallholder farming increases as a result of rainfall extremes. In its recent evolution, the rainy season in the Sudan-Sahel zone presents mixed patterns of extreme climatic events. In addition to intense rain events, the distribution of events is associated with pockets of intra-seasonal long dry spells. The negative consequences of these mixed patterns are obvious on the farm: soil waterlogging, erosion of arable land, stunting and desiccation of crops, and loss of production. The capacity of local farming communities to respond to rainfall extremes is often constrained by a lack of access to climate information and advisories on smart crop management practices that could help translate extreme rainfall events into farming options. The objective of this work is to present the framework and the preliminary results of a scheme that customizes the delivery of climate-advisory information packages to subsistence farmers in Bakel (Senegal), Ouahigouya and Dano (Burkina Faso), and Bolgatanga (Ghana) for sustainable family agriculture. The package is based on the provision of timely climate information (48-hour, dekadal and seasonal) combined with smart crop management practices to explore and exploit the potential advantages of intense rainfall and extreme dry spells in millet, maize, sorghum and cowpea farming communities. It is sent via mobile phones and used on selected farms (i.e. agro-climatic farm schools) on which some small on-farm infrastructure was built to alleviate the negative impacts of weather. Results provide valuable insight into how co-production of weather/climate information, customized access and guidance on its use can induce fast learning (capacity building of actors), motivation for adaptation, sustainability, and potential changes in cropping systems, yields and family income in the face of rainfall extremes at local scales in the Sudan-Sahel of West Africa. Keywords: Climate Information, Smart Practices, Farming Options, Agro-Climatic Farm Schools, Sudan-Sahel
How to recover more value from small pine trees: Essential oils and resins
Vasant M. Kelkar; Brian W. Geils; Dennis R. Becker; Steven T. Overby; Daniel G. Neary
2006-01-01
In recent years, the young dense forests of northern Arizona have suffered extreme droughts, wildfires, and insect outbreaks. Improving forest health requires reducing forest density by cutting many small-diameter trees with the consequent production of large volumes of residual biomass. To offset the cost of handling this low-value timber, additional marketing options...
Small Body Size at Birth and Behavioural Symptoms of ADHD in Children Aged Five to Six Years
ERIC Educational Resources Information Center
Lahti, J.; Raikkonen, K.; Kajantie, E.; Heinonen, K.; Pesonen, A.-K.; Jarvenpaa, A.-L.; Strandberg, T.
2006-01-01
Background: Behavioural disorders with a neurodevelopmental background, such as attention deficit hyperactivity disorder (ADHD), have been associated with a non-optimal foetal environment, reflected in small body size at birth. However, the evidence stems from highly selected groups with birth outcomes biased towards the extreme low end of the…
ERIC Educational Resources Information Center
Coordination in Development, New York, NY.
This booklet was produced in response to the growing need for reliable environmental assessment techniques that can be applied to small-scale development projects. The suggested techniques emphasize low-technology environmental analysis. Although these techniques may lack precision, they can be extremely valuable in helping to assure the success…
Fetal primary small bowel volvulus in a child without intestinal malrotation.
Chung, Jae Hee; Lim, Gye-Yeon; We, Ji Sun
2013-07-01
Fetal primary small bowel volvulus without atresia or malrotation is an extremely rare but life-threatening surgical emergency. We report a case of primary small bowel volvulus that presented as sudden fetal distress and was diagnosed on the basis of the 'whirlpool sign' on fetal sonography. This diagnosis led to an emergency operation after birth in the third trimester, with a good outcome. Although the pathogenesis of fetal primary small bowel volvulus is unclear, ganglion cell immaturity may play a role in the etiology. Copyright © 2013 Elsevier Inc. All rights reserved.
Micro-motors: A motile bacteria based system for liposome cargo transport.
Dogra, Navneet; Izadi, Hadi; Vanderlick, T Kyle
2016-07-05
Biological micro-motors (microorganisms) have potential applications in energy utilization and nanotechnology. However, harnessing the power generated by such motors to execute desired work is extremely difficult. Here, we employ the power of motile bacteria to transport small, large, and giant unilamellar vesicles (SUVs, LUVs, and GUVs). Furthermore, we demonstrate bacteria-bilayer interactions by probing glycolipids inside the model membrane scaffold. Fluorescence Resonance Energy Transfer (FRET) spectroscopic and microscopic methods were utilized to understand these interactions. We found that motile bacteria could successfully propel SUVs and LUVs with velocities of 28 μm s^-1 and 13 μm s^-1, respectively. GUVs, however, displayed Brownian motion and could not be propelled by the attached bacteria. Bacterial velocity decreased with larger loaded cargo, which agrees with our calculations of loaded bacteria swimming at low Reynolds number.
Isla Isabela in the western Galapagos Islands
NASA Technical Reports Server (NTRS)
1994-01-01
This is an image showing part of Isla Isabela in the western Galapagos Islands. It was taken by the L-band radar in HH polarization from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) on the 40th orbit of the Shuttle Endeavour. The image is centered at about 0.5 degrees south latitude and 91 degrees west longitude and covers an area of 75 km by 60 km. The radar incidence angle at the center of the image is about 20 degrees. This SIR-C/X-SAR image of the Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. A small portion of Isla Fernandina is visible in the extreme upper left corner of the image. The Jet Propulsion Laboratory alternative photo number is P-43899.
Human skeletal muscles replaced to a high degree by white adipose tissue.
Ina, Keisuke; Kitamura, Hirokazu; Masaki, Takayuki; Tatsukawa, Shuji; Yoshimatsu, Hironobu; Fujikura, Yoshihisa
2011-02-01
Extreme replacement of skeletal muscles by adipose tissue was found in an 86-year-old Japanese male cadaver during dissection practice for medical students at Oita University School of Medicine. In particular, the bilateral sartorius muscles looked overall like adipose tissue. The man had suffered from diabetes mellitus, renal failure, hypertension and hypothyroidism before his death, and he was also an alcohol drinker. He had been bedridden late in life, and the cause of death was renal failure. On microscopy, the adipose tissue-like sartorius muscle was shown to consist of leptin-positive adipocytes with a small number of degenerated muscle fibers. Fatty replacement, or fatty degeneration, appears to result from endocrine and metabolic disorders, while being bedridden leads to muscle atrophy and damage, although the origin of the adipocytes that emerged in the degenerated muscles is unknown.
Digital image processing of nanometer-size metal particles on amorphous substrates
NASA Technical Reports Server (NTRS)
Soria, F.; Artal, P.; Bescos, J.; Heinemann, K.
1989-01-01
The task of differentiating very small metal aggregates supported on amorphous films from the phase-contrast image features inherently stemming from the support is extremely difficult in the nanometer particle size range. Digital image processing was employed to overcome some of the ambiguities in evaluating such micrographs. It was demonstrated that such processing allowed positive particle detection and a limited degree of statistical size analysis even for micrographs in which, by naked-eye examination, the distinction between particles and spurious substrate features would seem highly ambiguous. The smallest size class detected for Pd/C samples peaks at 0.8 nm. This size class was found in various samples prepared under different evaporation conditions, and it is concluded that these particles consist of a 'magic number' of 13 atoms and have cuboctahedral or icosahedral crystal structure.
Insulinoma in a patient with type 2 diabetes mellitus.
Ghafoori, Shahnaz; Lankarani, Mahnaz
2015-01-01
Insulinoma in a patient with pre-existing diabetes is extremely rare; only a small number of cases have been reported worldwide. We report a case of insulinoma in a patient with type 2 diabetes. A 63-year-old female had been diagnosed with diabetes mellitus six years earlier and was given metformin and a sulphonylurea to control her glycemia, achieving adequate glycemic control for many years. However, starting eight months before admission, the patient experienced hypoglycemia even after cessation of the treatment and was hospitalized for further examination. Endogenous hypoglycemia was confirmed, and serum insulin and C-peptide levels were elevated. Endoscopic ultrasound showed a heterogeneous lesion in the head of the pancreas, and head pancreatectomy was performed. In the postoperative period diabetes developed again and required oral agents for control.
In Search of Stellar Music: Finding Pulsators for the TESS Mission
NASA Astrophysics Data System (ADS)
Richey-Yowell, Tyler; Pepper, Joshua; KELT Collaboration
2017-01-01
The Transiting Exoplanet Survey Satellite (TESS) will search for small transiting exoplanets orbiting bright stars. One of the additional mission objectives is to observe oscillating variable stars to precisely measure these stars’ masses, radii, and internal structures. Since TESS can observe only a limited number of stars with high enough cadence to detect these oscillations, it is necessary to identify candidates that will yield the most valuable results. Using data from the Kilodegree Extremely Little Telescope (KELT), we searched for bright stars showing oscillations to be included as TESS targets. We found 2,108 variable stars with B-V < 0.5 and P < 5 days. Further analysis will be carried out to establish final candidates. This project was funded by the National Science Foundation grant PHY-1359195 to the Lehigh University REU program.
Atomic-level characterization of the structural dynamics of proteins.
Shaw, David E; Maragakis, Paul; Lindorff-Larsen, Kresten; Piana, Stefano; Dror, Ron O; Eastwood, Michael P; Bank, Joseph A; Jumper, John M; Salmon, John K; Shan, Yibing; Wriggers, Willy
2010-10-15
Molecular dynamics (MD) simulations are widely used to study protein motions at an atomic level of detail, but they have been limited to time scales shorter than those of many biologically critical conformational changes. We examined two fundamental processes in protein dynamics--protein folding and conformational change within the folded state--by means of extremely long all-atom MD simulations conducted on a special-purpose machine. Equilibrium simulations of a WW protein domain captured multiple folding and unfolding events that consistently follow a well-defined folding pathway; separate simulations of the protein's constituent substructures shed light on possible determinants of this pathway. A 1-millisecond simulation of the folded protein BPTI reveals a small number of structurally distinct conformational states whose reversible interconversion is slower than local relaxations within those states by a factor of more than 1000.
Radiation dose equivalent to stowaways in vehicles.
Khan, Siraj M; Nicholas, Paul E; Terpilak, Michael S
2004-05-01
The U.S. Bureau of Customs and Border Protection has deployed a large number of non-intrusive inspection (NII) systems at land border crossings and seaports throughout the United States to inspect cars, trucks, and sea containers. These NII systems use x rays and gamma rays for the detection of contraband. Unfortunately, undocumented aliens infrequently stow away in these same conveyances to illegally enter the United States. It is extremely important that the radiation dose equivalent imparted to these stowaways be within acceptable limits. This paper discusses the issues involved and describes a protocol the U.S. Bureau of Customs and Border Protection has used in a study to measure and document these levels. The results of this study show that the radiation dose equivalent to the stowaways from the deployed NII systems is negligibly small and does not pose a health hazard.
NASA Astrophysics Data System (ADS)
Kassoy, D. R.
2014-01-01
Systematic asymptotic methods are applied to the compressible conservation and state equations for a reactive gas, including transport terms, to develop a rational thermomechanical formulation for the ignition of a chemical reaction following time-resolved, spatially distributed thermal energy addition from an external source into a finite volume of gas. A multi-parameter asymptotic analysis is developed for a wide range of energy deposition levels relative to the initial internal energy in the volume when the heating timescale is short compared to the characteristic acoustic timescale of the volume. Below a quantitatively defined threshold for energy addition, a nearly constant volume heating process occurs, with a small but finite internal gas expansion Mach number. Very little added thermal energy is converted to kinetic energy. The gas expelled from the boundary of the hot, high-pressure spot is the source of mechanical disturbances (acoustic and shock waves) that propagate away into the neighbouring unheated gas. When the energy addition reaches the threshold value, the heating process is fully compressible with a substantial internal gas expansion Mach number, the source of blast waves propagating into the unheated environmental gas. This case corresponds to an extremely large non-dimensional hot-spot temperature and pressure. If the former is sufficiently large, a high activation energy chemical reaction is initiated on the short heating timescale. This phenomenon is in contrast to that for more modest levels of energy addition, where a thermal explosion occurs only after the familiar extended ignition delay period for a classical high activation reaction. Transport effects, modulated by an asymptotically small Knudsen number, are shown to be negligible unless a local gradient in temperature, concentration or velocity is exceptionally large.
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
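As a toy illustration of the "overall finite difference" approach (repeat the full analysis for perturbed designs and difference a response quantity), the sketch below uses a hypothetical single-degree-of-freedom transient problem as a stand-in for a large-order reduced-basis analysis. All names, the toy oscillator and the step sizes are assumptions for illustration only.

```python
import numpy as np

def peak_displacement(p, t=np.linspace(0.0, 5.0, 2001)):
    """Toy transient response: peak displacement of a unit-mass single-DOF
    oscillator (stiffness p[0], damping p[1]) under a unit step load,
    integrated with a simple semi-implicit Euler scheme. Stands in for a
    large-order reduced-basis transient analysis (hypothetical example)."""
    k, c = p
    u, v, peak = 0.0, 0.0, 0.0
    dt = t[1] - t[0]
    for _ in t:
        a = 1.0 - c * v - k * u      # unit mass, unit step load
        v += a * dt
        u += v * dt
        peak = max(peak, abs(u))
    return peak

def overall_fd_sensitivity(resp, p, h=1e-4):
    """Overall central finite-difference sensitivities d(resp)/dp_i,
    obtained by repeating the analysis for perturbed designs."""
    p = np.asarray(p, dtype=float)
    grads = np.zeros_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = h
        grads[i] = (resp(p + dp) - resp(p - dp)) / (2.0 * h)
    return grads

print(overall_fd_sensitivity(peak_displacement, [4.0, 0.4]))
```

The point made in the abstract is that, in the large-order case, each call to the response function reuses the approximation vectors of the baseline design; the semi-analytical alternatives differentiate the reduced equations of motion instead of differencing the whole analysis.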
Stiffness and ultimate load of osseointegrated prosthesis fixations in the upper and lower extremity
Welke, Bastian; Hurschler, Christof; Föller, Marie; Schwarze, Michael; Calliess, Tilman
2013-07-11
Techniques for the skeletal attachment of amputation prostheses have been developed over recent decades. This type of attachment has only been performed on a small number of patients. It offers various potential advantages compared to conventional treatment with a socket, but is also associated with an increased risk of bone or implant-bone interface fracture in the case of a fall. We therefore investigated the bending stiffness and ultimate bending moment of such devices implanted in human and synthetic bones. Eight human specimens and 16 synthetic models of the proximal femora were implanted with lower extremity prostheses, and eight human specimens and six synthetic humeri were implanted with upper extremity prostheses. They were dissected according to typical amputation levels and underwent loading in a material testing machine in a four-point bending setup. Bending stiffness, ultimate bending moment and fracture modes were determined in a load-to-failure experiment. Additionally, axial pull-out was performed on eight synthetic specimens of the lower extremity. The maximum bending moment of the synthetic femora was 160.6±27.5 Nm and their flexural rigidity was 189.0±22.6 Nm^2. The maximum bending moment of the human femora was 100.4±38.5 Nm, and the flexural rigidity was 137.8±29.4 Nm^2. The maximum bending moment of the six synthetic humeri was 104.9±19.0 Nm, and the flexural rigidity was 63.7±3.6 Nm^2. For the human humeri the maximum bending moment was 36.7±11.0 Nm, and the flexural rigidity was 43.7±10.5 Nm^2. The maximum pull-out force for the eight synthetic femora was 3571±919 N. Significant differences were found between human and synthetic specimens of the lower and upper extremity regarding maximum bending moment, bending displacement and flexural rigidity. The results of this study are relevant with respect to previous findings regarding the loads at the interfaces of osseointegrated prosthesis fixation devices and are crucial for the development of safety devices intended to protect the bone-implant interface from damaging loadings.
Kelly, Laura J; Renny-Byfield, Simon; Pellicer, Jaume; Macas, Jiří; Novák, Petr; Neumann, Pavel; Lysak, Martin A; Day, Peter D; Berger, Madeleine; Fay, Michael F; Nichols, Richard A; Leitch, Andrew R; Leitch, Ilia J
2015-10-01
Plants exhibit an extraordinary range of genome sizes, varying by > 2000-fold between the smallest and largest recorded values. In the absence of polyploidy, changes in the amount of repetitive DNA (transposable elements and tandem repeats) are primarily responsible for genome size differences between species. However, there is ongoing debate regarding the relative importance of amplification of repetitive DNA versus its deletion in governing genome size. Using data from 454 sequencing, we analysed the most repetitive fraction of some of the largest known genomes for diploid plant species, from members of Fritillaria. We revealed that genomic expansion has not resulted from the recent massive amplification of just a handful of repeat families, as shown in species with smaller genomes. Instead, the bulk of these immense genomes is composed of highly heterogeneous, relatively low-abundance repeat-derived DNA, supporting a scenario where amplified repeats continually accumulate due to infrequent DNA removal. Our results indicate that a lack of deletion and low turnover of repetitive DNA are major contributors to the evolution of extremely large genomes and show that their size cannot simply be accounted for by the activity of a small number of high-abundance repeat families. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Compositional patterns in the genomes of unicellular eukaryotes.
Costantini, Maria; Alvarez-Valin, Fernando; Costantini, Susan; Cammarano, Rosalia; Bernardi, Giorgio
2013-11-05
The genomes of multicellular eukaryotes are compartmentalized into mosaics of isochores, large and fairly homogeneous stretches of DNA that belong to a small number of families characterized by different average GC levels, different gene concentrations (which increase with GC), different chromatin structures, different replication timings in the cell cycle, and other distinct properties. A question raised by these basic results concerns how far back in evolution the compartmentalized organization of eukaryotic genomes arose. In the present work we approached this problem by studying the compositional organization of the genomes of the unicellular eukaryotes for which full sequences are available, using a representative sample. The average GC levels of the genomes of unicellular eukaryotes cover an extremely wide range (19%-60% GC), and the compositional patterns of individual genomes are extremely different, but all genomes tested show a compositional compartmentalization. The average GC range of the genomes of unicellular eukaryotes is very broad (as broad as that of prokaryotes), and individual compositional patterns range from very narrow to very complex. Neither feature is surprising for organisms that are very far from each other both in terms of phylogenetic distance and of environmental life conditions. Most importantly, all genomes tested, a representative sample of all supergroups of unicellular eukaryotes, are compositionally compartmentalized, a major difference from prokaryotes.
Moreland, J; Thomson, M A
1994-06-01
The purpose of this study was to examine the efficacy of electromyographic biofeedback compared with conventional physical therapy for improving upper-extremity function in patients following a stroke. A literature search was done for the years 1976 to 1992. The selection criteria included single-blinded randomized control trials. Study quality was assessed for nine criteria. For functional (disability index or stage of recovery) and impairment outcomes, meta-analyses were performed on odds ratios for improvement versus no improvement. Mann-Whitney U-Test probability values were combined across studies. Six studies were selected, and outcome data were obtained for five studies. The common odds ratio was 2.2 for function and 1.1 for impairments in favor of biofeedback. The estimate of the number needed to treat to prevent a nonresponder was 11 for function and 22 for impairments. None of the meta-analyses were statistically significant. The results do not conclusively indicate superiority of either form of therapy. Although there is a chance of Type II error, the estimated size of the effect is small. Given this estimate of little or no difference, therapists need to consider cost, ease of application, and patient preference when selecting these therapies.
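For context, one standard way to pool improvement/no-improvement counts from several trials into a common odds ratio is the Mantel-Haenszel estimator; the sketch below is illustrative (made-up counts) and not necessarily the pooling method used in this review.

```python
def mantel_haenszel_or(tables):
    """Pool 2x2 tables ((a, b), (c, d)) - improved/not improved in the
    treatment and control arms - into a common odds ratio with the
    Mantel-Haenszel estimator."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# illustrative, invented counts for two hypothetical studies
print(mantel_haenszel_or([((12, 8), (7, 13)), ((9, 11), (6, 14))]))
```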
Annamalai, Murali; Hristeva, Stanimira; Bielska, Martyna; Ortega, Raquel; Kumar, Kamal
2017-05-18
Despite the great contribution of natural products in the history of successful drug discovery, there are significant limitations that persuade the pharmaceutical industry to avoid natural products in drug discovery research. The extreme scarcity as well as the structural complexity of natural products renders their practical synthetic access and further modification extremely challenging. Although other alternative technologies, particularly combinatorial chemistry, were embraced by the pharmaceutical industry to get quick access to a large number of small molecules with simple frameworks that often lack three-dimensional complexity, hardly any success was achieved in the discovery of lead molecules. To acquire chemotypes embodying structural features of natural products, for instance high sp³ character, the synthesis of compound collections based on core scaffolds of natural products presents a promising strategy. Here, we report a natural product inspired synthesis of six different chemotypes and their derivatives for drug discovery research. These bicyclic hetero- and carbocyclic scaffolds are highly novel, rich in sp³ features and with ideal physicochemical properties to display drug-likeness. The functional groups on the scaffolds were exploited further to generate the corresponding compound collections. The syntheses of two of these collections, each exemplified with ca. 350 compounds, are also presented. The whole compound library is being exposed to various biological screenings within the European Lead Factory consortium.
Holographic Adaptive Laser Optics System (HALOS): Fast, Autonomous Aberration Correction
NASA Astrophysics Data System (ADS)
Andersen, G.; MacDonald, K.; Gelsinger-Austin, P.
2013-09-01
We present an adaptive optics system which uses a multiplexed hologram to deconvolve the phase aberrations in an input beam. This wavefront characterization is extremely fast as it is based on simple measurements of the intensity of focal spots and does not require any computations. Furthermore, the system does not require a computer in the loop and is thus much cheaper, less complex and more robust as well. A fully functional, closed-loop prototype incorporating a 32-element MEMS mirror has been constructed. The unit has a footprint no larger than a laptop but runs at a bandwidth of 100 kHz, over an order of magnitude faster than comparable, conventional systems occupying a significantly larger volume. Additionally, since the sensing is based on parallel, all-optical processing, the speed is independent of actuator number, running at the same bandwidth for one actuator as for a million. We are developing the HALOS technology with a view towards next-generation surveillance systems for extreme adaptive optics applications. These include imaging, lidar and free-space optical communications for unmanned aerial vehicles and SSA. The small volume is ideal for UAVs, while the high speed and high resolution will be of great benefit to the ground-based observation of space-based objects.
NASA Astrophysics Data System (ADS)
Pantillon, Florian; Knippertz, Peter; Corsmeier, Ulrich
2017-10-01
New insights into the synoptic-scale predictability of 25 severe European winter storms of the 1995-2015 period are obtained using the homogeneous ensemble reforecast dataset from the European Centre for Medium-Range Weather Forecasts. The predictability of the storms is assessed with different metrics including (a) the track and intensity to investigate the storms' dynamics and (b) the Storm Severity Index to estimate the impact of the associated wind gusts. The storms are well predicted by the whole ensemble up to 2-4 days ahead. At longer lead times, the number of members predicting the observed storms decreases and the ensemble average is not clearly defined for the track and intensity. The Extreme Forecast Index and Shift of Tails are therefore computed from the deviation of the ensemble from the model climate. Based on these indices, the model has some skill in forecasting the area covered by extreme wind gusts up to 10 days, which indicates a clear potential for early warnings. However, large variability is found between the individual storms. The poor predictability of outliers appears related to their physical characteristics such as explosive intensification or small size. Longer datasets with more cases would be needed to further substantiate these points.
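The Extreme Forecast Index mentioned above measures how far the ensemble forecast distribution departs from the model climate. Below is a numerical sketch of the commonly quoted integral form of the index; the exact operational ECMWF formulation, and the synthetic gust distributions used here, should be treated as assumptions of this illustration rather than as the study's implementation.

```python
import numpy as np

def extreme_forecast_index(ensemble, model_climate, n_quantiles=99):
    # p: climate quantile probabilities; F_f(p): fraction of ensemble members below each climate quantile
    p = np.linspace(0.01, 0.99, n_quantiles)
    climate_quantiles = np.quantile(model_climate, p)
    f_forecast = np.array([(ensemble < q).mean() for q in climate_quantiles])
    integrand = (p - f_forecast) / np.sqrt(p * (1.0 - p))
    return float(2.0 / np.pi * np.trapz(integrand, p))

# Synthetic example: an ensemble of wind gusts shifted toward the climate's upper tail
# gives a clearly positive EFI, flagging potential for extreme gusts.
rng = np.random.default_rng(0)
climate = rng.gamma(shape=2.0, scale=5.0, size=20000)   # hypothetical gust climatology
forecast = rng.gamma(shape=2.0, scale=9.0, size=51)     # hypothetical 51-member ensemble
print(round(extreme_forecast_index(forecast, climate), 2))
```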
A comparative analysis of support vector machines and extreme learning machines.
Liu, Xueyi; Gao, Chuanhou; Li, Ping
2012-09-01
The theory of extreme learning machines (ELMs) has recently become increasingly popular. As a new learning algorithm for single-hidden-layer feed-forward neural networks, an ELM offers the advantages of low computational cost, good generalization ability, and ease of implementation. Hence the comparison and model selection between ELMs and other kinds of state-of-the-art machine learning approaches have become significant and have attracted many research efforts. This paper performs a comparative analysis of basic ELMs and support vector machines (SVMs) from two viewpoints that are different from previous works: one is the Vapnik-Chervonenkis (VC) dimension, and the other is their performance under different training sample sizes. It is shown that the VC dimension of an ELM is equal to the number of hidden nodes of the ELM with probability one. Additionally, their generalization ability and computational complexity are examined as the training sample size changes. ELMs have weaker generalization ability than SVMs for small samples but can generalize as well as SVMs for large samples. Remarkably, ELMs show great superiority in computational speed, especially for large-scale problems. The results obtained can provide insight into the essential relationship between the two methods, and can also serve as complementary knowledge for their past experimental and theoretical comparisons. Copyright © 2012 Elsevier Ltd. All rights reserved.
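For readers unfamiliar with the ELM algorithm compared here, a minimal sketch follows: the hidden-layer weights are drawn at random and never trained, and only the output weights are obtained by regularized least squares, which is what keeps the computational cost low. The network size, ridge parameter, and toy data are placeholders, not values from the paper.

```python
import numpy as np

def train_elm(X, y, n_hidden=200, ridge=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, never trained
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    # Output weights by regularized least squares: beta = (H'H + ridge*I)^-1 H'y
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression example
X = np.random.default_rng(1).normal(size=(500, 10))
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(2).normal(size=500)
W, b, beta = train_elm(X, y)
print(predict_elm(X[:5], W, b, beta))
```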
Fine structure of the pecten oculi in the great horned owl (Bubo virginianus).
Braekevelt, C R
1993-01-01
The pecten oculi of the great horned owl (Bubo virginianus) has been examined by light and electron microscopy. The pecten in this species is of the pleated type and is small in comparison to the size of the eyeball. It consists of 7-8 accordion folds which are joined apically by a pigmented bridge of tissue. Within each fold are numerous capillaries, larger supply and drainage vessels and plentiful pleomorphic melanocytes. The capillaries are extremely specialized vessels, most of which display plentiful microfolds on both their luminal and abluminal surfaces although some capillaries show but a few microfolds. The endothelial cell bodies are extremely thin with most organelles located near the nucleus. All capillaries are surrounded by a thick fibrillar basal lamina which is felt to be structurally important. Pericytes are a common feature within these thickened basal laminae. The numerous melanocytes form an incomplete sheath around the capillaries and are also presumed to be fulfilling a structural role. While the morphology of the pecten in the great horned owl is certainly indicative of a heavy involvement in transport, when compared to the pecten in species that are more visually oriented it is smaller, displays fewer folds and a reduced number of microfolds within the capillaries.
Energy and water in aestivating amphibians.
Carvalho, José E; Navas, Carlos A; Pereira, Isabel C
2010-01-01
The physiological mechanisms, behavioral adjustments, and ecological associations that allow animal species to live in extreme environments have attracted the attention of many zoologists. Often, extreme environments are defined as those believed to be limiting to life in terms of water, energy availability, and temperature. These three elements seem extreme in a number of arid and semi-arid settings that have nonetheless been colonized by amphibians. Because this taxon is usually seen as the quintessential water-dependent ectothermic tetrapod group, its presence in a number of semi-arid environments poses intriguing questions regarding microhabitat choice and physiological plasticity, particularly regarding the ecological and physiological correlates of behaviors that allow avoidance of the harshest conditions of semi-arid environments. Such avoidance states, generally associated with the concept of aestivation, are currently seen as diverse and complex phenomena varying from species to species and involving numerous behavioral and metabolic adjustments that enhance survival during drought. This chapter reviews the physiological ecology of anuran aestivation, mainly from the perspective of water and energy balance.
2018-04-02
NASA's Solar Dynamics Observatory ran together three sequences of the sun taken in three different extreme ultraviolet wavelengths to better illustrate how different features that appear in one sequence are difficult if not impossible to see in the others (Mar. 20-21, 2018). In the red sequence (304 Angstroms), we can see very small spicules and some small prominences at the sun's edge, which are not easy to see in the other two sequences. In the second clip (193 Angstroms), we can readily observe the large and dark coronal hole, though it is difficult to make out in the others. In the third clip (171 Angstroms), we can see strands of plasma waving above the surface, especially above the one small, but bright, active region near the right edge. And these are just three of the 10 extreme ultraviolet wavelengths in which SDO images the sun every 12 seconds every day. That's a lot of data and a lot of science. Movies are available at https://photojournal.jpl.nasa.gov/catalog/PIA22360
Small Flare and a Coronal Mass Ejection
2018-01-31
The sun shot out a small coronal mass ejection that was also associated with a small flare (Jan. 22, 2018). The video, which covers about 5 hours, shows the burst of plasma as the magnetic loops break apart. Immediately the magnetic fields brighten intensely and begin to reorganize themselves in coils above the active region. The images were taken in a wavelength of extreme ultraviolet light. Videos are available at https://photojournal.jpl.nasa.gov/catalog/PIA22184
An H-band Vector Vortex Coronagraph for the Subaru Coronagraphic Extreme-adaptive Optics System
NASA Astrophysics Data System (ADS)
Kühn, J.; Serabyn, E.; Lozi, J.; Jovanovic, N.; Currie, T.; Guyon, O.; Kudo, T.; Martinache, F.; Liewer, K.; Singh, G.; Tamura, M.; Mawet, D.; Hagelberg, J.; Defrere, D.
2018-03-01
The vector vortex is a coronagraphic imaging mode of the recently commissioned Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) platform on the 8 m Subaru Telescope. This multi-purpose high-contrast visible and near-infrared (R- to K-band) instrument is not only intended to serve as a VLT-class “planet-imager” instrument in the northern hemisphere, but also to operate as a technology demonstration testbed ahead of the ELT era, with a particular emphasis on small inner-working angle (IWA) coronagraphic capabilities. The priority given to small-IWA imaging led to the early design choice to incorporate focal-plane phase-mask coronagraphs. In this context, a test H-band vector vortex liquid crystal polymer waveplate was provided to SCExAO, to allow a one-to-one comparison of different small-IWA techniques on the same telescope instrument, before considering further steps. Here we present a detailed overview of the vector vortex coronagraph, from its installation and performance on the SCExAO optical bench to the on-sky results in the extreme AO regime, as of late 2016/early 2017. To this purpose, we also provide a few recent on-sky imaging examples, notably a high-contrast ADI detection of the planetary-mass companion κ Andromedae b, with a signal-to-noise ratio above 100 reached in less than 10 min of exposure time.
Radiative heat transfer in the extreme near field.
Kim, Kyeongtae; Song, Bai; Fernández-Hurtado, Víctor; Lee, Woochul; Jeong, Wonho; Cui, Longji; Thompson, Dakotah; Feist, Johannes; Reid, M T Homer; García-Vidal, Francisco J; Cuevas, Juan Carlos; Meyhofer, Edgar; Reddy, Pramod
2015-12-17
Radiative transfer of energy at the nanometre length scale is of great importance to a variety of technologies including heat-assisted magnetic recording, near-field thermophotovoltaics and lithography. Although experimental advances have enabled elucidation of near-field radiative heat transfer in gaps as small as 20-30 nanometres (refs 4-6), quantitative analysis in the extreme near field (less than 10 nanometres) has been greatly limited by experimental challenges. Moreover, the results of pioneering measurements differed from theoretical predictions by orders of magnitude. Here we use custom-fabricated scanning probes with embedded thermocouples, in conjunction with new microdevices capable of periodic temperature modulation, to measure radiative heat transfer down to gaps as small as two nanometres. For our experiments we deposited suitably chosen metal or dielectric layers on the scanning probes and microdevices, enabling direct study of extreme near-field radiation between silica-silica, silicon nitride-silicon nitride and gold-gold surfaces to reveal marked, gap-size-dependent enhancements of radiative heat transfer. Furthermore, our state-of-the-art calculations of radiative heat transfer, performed within the theoretical framework of fluctuational electrodynamics, are in excellent agreement with our experimental results, providing unambiguous evidence that confirms the validity of this theory for modelling radiative heat transfer in gaps as small as a few nanometres. This work lays the foundations required for the rational design of novel technologies that leverage nanoscale radiative heat transfer.
Mitchell, A.J.; Cole, Rebecca A.
2008-01-01
The faucet snail Bithynia tentaculata, a nonindigenous aquatic snail from Eurasia, was introduced into Lake Michigan in 1871 and has spread to the mid-Atlantic states, the Great Lakes region, Montana, and most recently, the Mississippi River. The faucet snail serves as intermediate host for several trematodes that have caused large-scale mortality among water birds, primarily in the Great Lakes region and Montana. It is important to limit the spread of the faucet snail; small fisheries equipment can serve as a method of snail distribution. Treatments with chemical disinfection, pH extremes, and heated water baths were tested to determine their effectiveness as a disinfectant for small fisheries equipment. Two treatments eliminated all test snails: (1) a 24-h exposure to Hydrothol 191 at a concentration of at least 20 mg/L and (2) a treatment with 50°C heated water for 1 min or longer. Faucet snails were highly resistant to ethanol, NaCl, formalin, Lysol, potassium permanganate, copper sulfate, Baquacil, Virkon, household bleach, and pH extremes (as low as 1 and as high as 13).
Testing anthropic reasoning for the cosmological constant with a realistic galaxy formation model
NASA Astrophysics Data System (ADS)
Sudoh, Takahiro; Totani, Tomonori; Makiya, Ryu; Nagashima, Masahiro
2017-01-01
The anthropic principle is one of the possible explanations for the cosmological constant (Λ) problem. In previous studies, a dark halo mass threshold comparable with our Galaxy must be assumed in galaxy formation to get a reasonably large probability of finding the observed small value, P(<Λ_obs), though stars are found in much smaller galaxies as well. Here we examine the anthropic argument by using a semi-analytic model of cosmological galaxy formation, which can reproduce many observations such as galaxy luminosity functions. We calculate the probability distribution of Λ by running the model code for a wide range of Λ, while other cosmological parameters and model parameters for baryonic processes of galaxy formation are kept constant. Assuming that the prior probability distribution is flat per unit Λ, and that the number of observers is proportional to stellar mass, we find P(<Λ_obs) = 6.7 per cent without introducing any galaxy mass threshold. We also investigate the effect of metallicity; we find P(<Λ_obs) = 9.0 per cent if observers exist only in galaxies whose metallicity is higher than the solar abundance. If the number of observers is proportional to metallicity, we find P(<Λ_obs) = 9.7 per cent. Since these probabilities are not extremely small, we conclude that the anthropic argument is a viable explanation, if the value of Λ observed in our Universe is determined by a probability distribution.
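A schematic of the probability calculation described above, under the stated assumptions (a prior flat per unit Λ and an observer weighting proportional to stellar mass); the stellar-mass curve below is a made-up placeholder standing in for the semi-analytic model output.

```python
import numpy as np

lam = np.linspace(0.0, 30.0, 301)                 # Lambda in units of the observed value
stellar_mass = np.exp(-lam / 8.0)                 # hypothetical M_*(Lambda) predicted by a galaxy-formation model

posterior = stellar_mass.copy()                   # flat prior per unit Lambda times observer weighting
posterior /= np.trapz(posterior, lam)             # normalize to a probability density

lam_obs = 1.0
below = lam <= lam_obs
p_below_obs = np.trapz(posterior[below], lam[below])
print(f"P(<Lambda_obs) = {p_below_obs:.3f}")      # small but not extremely small for this toy curve
```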
Long-term changes in Serengeti-Mara wildebeest and land cover: Pastoralism, population, or policies?
Homewood, K.; Lambin, E. F.; Coast, E.; Kariuki, A.; Kikula, I.; Kivelia, J.; Said, M.; Serneels, S.; Thompson, M.
2001-01-01
Declines in habitat and wildlife in semiarid African savannas are widely reported and commonly attributed to agropastoral population growth, livestock impacts, and subsistence cultivation. However, extreme annual and shorter-term variability of rainfall, primary production, vegetation, and populations of grazers make directional trends and causal chains hard to establish in these ecosystems. Here two decades of changes in land cover and wildebeest in the Serengeti-Mara region of East Africa are analyzed in terms of potential drivers (rainfall, human and livestock population growth, socio-economic trends, land tenure, agricultural policies, and markets). The natural experiment research design controls for confounding variables, and our conceptual model and statistical approach integrate natural and social sciences data. The Kenyan part of the ecosystem shows rapid land-cover change and drastic decline for a wide range of wildlife species, but these changes are absent on the Tanzanian side. Temporal climate trends, human population density and growth rates, uptake of small-holder agriculture, and livestock population trends do not differ between the Kenyan and Tanzanian parts of the ecosystem and cannot account for observed changes. Differences in private versus state/communal land tenure, agricultural policy, and market conditions suggest, and spatial correlations confirm, that the major changes in land cover and dominant grazer species numbers are driven primarily by private landowners responding to market opportunities for mechanized agriculture, less by agropastoral population growth, cattle numbers, or small-holder land use. PMID:11675492
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1994-01-01
The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.
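Item (1) rests on the quasi-analytical approach to sensitivity derivatives: differentiate the discrete flow equations implicitly instead of re-running the solver with perturbed design variables. A generic sketch of that idea on a scalar placeholder "flow model" (not the transonic small perturbation or full potential equations used in the project) is given below, with a finite-difference check for comparison.

```python
def residual(q, D):
    return q**3 + q - D                 # stand-in for the discretized flow equations R(Q, D) = 0

def dR_dq(q):
    return 3.0 * q**2 + 1.0             # flow Jacobian dR/dQ

def dR_dD():
    return -1.0                         # explicit dependence of the residual on the design variable

def solve_state(D, q=1.0):
    for _ in range(25):                 # Newton's method on the flow residual
        q -= residual(q, D) / dR_dq(q)
    return q

D = 2.0
q = solve_state(D)
dq_dD = -dR_dD() / dR_dq(q)             # quasi-analytical state sensitivity: (dR/dQ) dQ/dD = -dR/dD
C = q**2                                # stand-in aerodynamic coefficient C(Q)
dC_dD = 2.0 * q * dq_dD                 # chain rule; no finite differencing of the flow solver needed

eps = 1e-6                              # finite-difference check (the less efficient alternative)
fd = (solve_state(D + eps)**2 - C) / eps
print(dC_dD, fd)
```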
Diagnosing Dementia in the Oldest-Old
Brumback-Peltz, Carrie; Balasubramanian, Archana B.; Corrada, María M.; Kawas, Claudia H.
2011-01-01
The “oldest-old” comprise the fastest growing segment of the population in much of the world. Rates of dementia are extremely high in this age group and will present a major public health burden as the numbers of these individuals quadruple by the middle of the century. Studies in this age group are rare and frequently have small numbers of participants. In research studies and the clinic, the diagnosis of dementia and determination of the etiology of the disorder are challenging. In this review, we include some of our experiences in a population-based longitudinal investigation, The 90+ Study. Oldest-old individuals are more likely to suffer from medical comorbidities and have high rates of sensory loss, psychoactive medication usage, frailty and fatigue. Moreover, social and cultural expectations affect the reporting and interpretation of behavioral changes. These and other factors make it difficult to determine the relative contributions of cognitive losses and non-cognitive losses in the development of functional disability. Contributing further to the complexities of diagnosis, current research suggests that dementia in the oldest-old, compared to younger people, is more likely to be related to mixed disease pathologies. Frequent cerebral neuropathologies include Alzheimer’s disease neurodegeneration, small and large vessel vascular disease, and hippocampal sclerosis. More research is necessary in the oldest-old to better understand the etiologies of dementia in this age group, and factors that may affect the expression of disease as we age. PMID:21831546
NASA Astrophysics Data System (ADS)
Ostrenga, D.; Shen, S.; Vollmer, B.; Meyer, D. L.
2017-12-01
The NASA MERRA-2 climate reanalysis contains numerous datasets for the atmosphere, land, and ocean, grouped into 95 products with an archived volume of over 300 TB. The data files are saved as hourly files, daily files (at hourly time intervals), and monthly files containing up to 125 parameters. Due to the large number of data files and the sheer data volume, it is challenging for users, especially those in the applications research community, to work with the original data files. Most of these researchers prefer to focus on a small region or a single location using the hourly data over long time periods, to analyze extreme weather events or, say, winds for renewable energy applications. At the GES DISC, we have been working closely with the science teams and the application user community to create several new value-added data products and high-quality services to facilitate the use of the model data for various types of research. We have tested converting the hourly data from one day per file into different data cubes, such as one month, one year, or the whole mission, and have then analyzed the efficiency of access to the newly structured data through various services. Initial results have shown that, compared to the original file structure, the new organization significantly improves the performance of accessing long time series. We have noticed that the performance is associated with the cube size and structure, the compression method, and how the data are accessed. The optimized data cube structure will not only improve data access, but also enable better online analytic services for statistical analysis and extreme-event mining. Two case studies will be presented using the newly structured data and value-added services: the California drought and the extreme drought of the northeastern states of Brazil. Furthermore, data access and analysis through cloud storage capabilities will be investigated.
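A sketch of the kind of restructuring described above, in which hourly per-day files are rewritten as a single time-optimized cube so that long time series at one location can be read cheaply. The file pattern, variable names, chunk sizes, and zarr output are assumptions made for this illustration, not the GES DISC production configuration.

```python
import xarray as xr

# Combine many hourly per-day files into one dataset (file pattern is hypothetical).
hourly = xr.open_mfdataset("MERRA2_400.tavg1_2d_slv_Nx.*.nc4", combine="by_coords")

# Chunk so the time dimension is long and the spatial tiles are small, which favors extracting
# long time series at a point or small region (e.g., winds for renewable-energy studies).
cube = hourly[["U10M", "V10M"]].chunk({"time": 8760, "lat": 30, "lon": 30})
cube.to_zarr("merra2_winds_cube.zarr", mode="w")

# A multi-year hourly series at a single location then becomes a cheap read:
winds = xr.open_zarr("merra2_winds_cube.zarr")
series = winds["U10M"].sel(lat=-5.0, lon=-40.0, method="nearest").load()
```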
The Climatology of Extreme Surge-Producing Extratropical Cyclones in Observations and Models
NASA Astrophysics Data System (ADS)
Catalano, A. J.; Broccoli, A. J.; Kapnick, S. B.
2016-12-01
Extreme coastal storms devastate heavily populated areas around the world by producing powerful winds that can create a large storm surge. Both tropical and extratropical cyclones (ETCs) occur over the northwestern Atlantic Ocean, and the risks associated with ETCs can be just as severe as those associated with tropical storms (e.g. high winds, storm surge). At The Battery in New York City, 17 of the 20 largest storm surge events were a consequence of ETCs, which are more prevalent than tropical cyclones in the northeast region of the United States. Therefore, we analyze the climatology of ETCs that are capable of producing a large storm surge along the northeastern coast of the United States. For a historical analysis, water level data were collected from National Oceanic and Atmospheric Administration (NOAA) tide gauges at three separate locations (Sewell's Pt., VA, The Battery, NY, and Boston, MA). We perform a k-means cluster analysis of sea level pressure from the ECMWF 20th Century Reanalysis dataset (ERA-20C) to explore the natural sets of observed storms with similar characteristics. We then composite cluster results with features of atmospheric circulation to observe the influence of interannual and multidecadal variability such as the North Atlantic Oscillation. Since observational records contain a small number of well-documented ETCs, the capability of a high-resolution coupled climate model to realistically simulate such extreme coastal storms will also be assessed. Global climate models provide a means of simulating a much larger sample of extreme events, allowing for better resolution of the tail of the distribution. We employ a tracking algorithm to identify ETCs in a multi-century simulation under present-day conditions. Quantitative comparisons of cyclolysis, cyclogenesis, and cyclone densities between simulated ETCs and storms from recent history (using reanalysis products) are conducted.
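A minimal sketch of the k-means step described above: each storm's sea level pressure field is flattened into a feature vector, and storms are grouped into a small number of synoptic types. The array shapes, standardization, and choice of four clusters are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.cluster import KMeans

n_storms, n_lat, n_lon = 120, 40, 60
slp_fields = np.random.default_rng(0).normal(101325.0, 800.0, size=(n_storms, n_lat, n_lon))

X = slp_fields.reshape(n_storms, -1)              # one flattened SLP field per storm
X = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each grid point across storms

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))                # number of storms assigned to each cluster
# Reshaping kmeans.cluster_centers_ back to (n_lat, n_lon) gives composite SLP patterns per type.
```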
Kondrup, S. V.; Bennett, P. C.; Forkman, B.; Meyer, I; Proschowsky, H. F.; Serpell, J. A.; Lund, T. B.
2017-01-01
A number of dog breeds suffer from welfare problems due to extreme phenotypes and high levels of inherited diseases but the popularity of such breeds is not declining. Using a survey of owners of two popular breeds with extreme physical features (French Bulldog and Chihuahua), one with a high load of inherited diseases not directly related to conformation (Cavalier King Charles Spaniel), and one representing the same size range but without extreme conformation and with the same level of disease as the overall dog population (Cairn Terrier), we investigated this seeming paradox. We examined planning and motivational factors behind acquisition of the dogs, and whether levels of experienced health and behavior problems were associated with the quality of the owner-dog relationship and the intention to re-procure a dog of the same breed. Owners of each of the four breeds (750/breed) were randomly drawn from a nationwide Danish dog registry and invited to participate. Of these, 911 responded, giving a final sample of 846. There were clear differences between owners of the four breeds with respect to degree of planning prior to purchase, with owners of Chihuahuas exhibiting less. Motivations behind choice of dog were also different. Health and other breed attributes were more important to owners of Cairn Terriers, whereas the dog’s personality was reported to be more important for owners of French Bulldogs and Cavalier King Charles Spaniels but less important for Chihuahua owners. Higher levels of health and behavior problems were positively associated with a closer owner-dog relationship for owners of Cavalier King Charles Spaniels and Chihuahuas but, for owners of French Bulldogs, high levels of problems were negatively associated with an intention to procure the same breed again. In light of these findings, it appears less paradoxical that people continue to buy dogs with welfare problems. PMID:28234931
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Jancso, Leonhardt M.; Staehelin, Johannes; Maeder, Jörg A.; Ribatet, Mathieu; Peter, Thomas; Davison, Anthony C.
2010-05-01
In this study we analyze the frequency distribution of extreme events in low and high total ozone (termed ELOs and EHOs) for 5 long-term stations in the northern mid-latitudes in Europe (Belsk, Poland; Hradec Kralove, Czech Republic; Hohenpeissenberg and Potsdam, Germany; and Uccle, Belgium). Further, the influence of these extreme events on annual and seasonal mean values and trends is analysed. The applied method follows the new "ozone extremes concept", recently developed by Rieder et al. [2010a, b], which is based on tools from extreme value theory [Coles, 2001; Ribatet, 2007]. Mathematically, the decisive feature of the extremes concept is the Generalized Pareto Distribution (GPD). In this analysis the long-term trends needed to be removed first, in contrast to the treatment of Rieder et al. [2010a, b], in which the time series of Arosa was analysed, covering many decades of measurements in the anthropogenically undisturbed stratosphere. In contrast to previous studies, which focused only on so-called ozone mini-holes and mini-highs, the "ozone extremes concept" provides a statistical description of the tails of total ozone distributions (i.e. extreme low and high values). It is shown that this concept is not only an appropriate method to describe the frequency and distribution of extreme events; it also provides new information on time series properties and internal variability. Furthermore it allows detection of fingerprints of physical (e.g. El Niño, NAO) and chemical (e.g. polar vortex ozone loss) features in the Earth's atmosphere as well as major volcanic eruptions (e.g. El Chichón, Mt. Pinatubo). It is shown that mean values and trends in total ozone are strongly influenced by extreme events. Trend calculations (for the period 1970-1990) are performed for the entire as well as the extremes-removed time series. The results after excluding extremes show that annual trends are most reduced at Hradec Kralove (by about a factor of 3), followed by Potsdam (a factor of 2.5), and Hohenpeissenberg and Belsk (both by about a factor of 2). In general the reduction of the trend is strongest during winter and spring. At all stations the influence of ELOs on observed trends is larger than that of EHOs. Especially from the 1990s on, ELOs dominate the picture, as only a relatively small fraction of EHOs can be observed in the records (due to the strong influence of the Mt. Pinatubo eruption and polar vortex ozone loss contributions). Additionally it is shown that the number of observed mini-holes can be estimated highly accurately with the GPD model. Overall the results of this thesis show that extreme events play a major role in total ozone and that the "ozone extremes concept" provides deeper insight into the influence of chemical and physical features on column ozone. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN 1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD.
Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and Davison, A.D. (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD.
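The statistical core of the ozone extremes concept referenced above is a Generalized Pareto fit to threshold exceedances. A hedged sketch with synthetic data follows; the 95th-percentile threshold and the scipy-based fit are choices made for the illustration, not the methodology of the cited papers in detail.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
total_ozone = rng.normal(330.0, 35.0, size=15000)           # synthetic daily column ozone [DU]

threshold = np.quantile(total_ozone, 0.95)                  # EHOs: extreme highs above this threshold
exceedances = total_ozone[total_ozone > threshold] - threshold

shape, loc, scale = genpareto.fit(exceedances, floc=0.0)    # GPD fit to the upper tail
print(f"threshold={threshold:.1f} DU, shape={shape:.3f}, scale={scale:.2f}")

N = 1000                                                    # return level exceeded on average once every N days
rate = len(exceedances) / len(total_ozone)
return_level = threshold + genpareto.ppf(1.0 - 1.0 / (N * rate), shape, loc=0.0, scale=scale)
print(f"{N}-day return level: {return_level:.1f} DU")
```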
Understanding the importance of transient resonances in extreme mass ratio inspirals
NASA Astrophysics Data System (ADS)
Berry, C. P. L.; Cole, R. H.; Cañizares, P.; Gair, J. R.
2017-05-01
Extreme mass ratio inspirals (EMRIs) occur when a compact object orbits a much larger one, like a solar-mass black hole around a supermassive black hole. The orbit has 3 frequencies which evolve through the inspiral. If the orbital radial frequency and polar frequency become commensurate, the system passes through a transient resonance. Evolving through resonance causes a jump in the evolution of the orbital parameters. We study these jumps and their impact on EMRI gravitational-wave detection. Jumps are smaller for lower eccentricity orbits; since most EMRIs have small eccentricities when passing through resonances, we expect that the impact on detection will be small. Neglecting the effects of transient resonances leads to a loss of ∼ 4% of detectable signals for an astrophysically motivated population of EMRIs.
Harris, Wendy; Zhang, You; Yin, Fang-Fang; Ren, Lei
2017-01-01
Purpose: To investigate the feasibility of using structural-based principal component analysis (PCA) motion-modeling and weighted free-form deformation to estimate on-board 4D-CBCT using prior information and extremely limited angle projections for potential 4D target verification of lung radiotherapy. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In the previous method, each phase of the 4D-CBCT was generated by deforming a prior CT volume. The DFM was solved by the global PCA motion-model and free-form deformation (GMM-FD) technique, using a data fidelity constraint and deformation energy minimization. In this study, a new structural-PCA method was developed to build a structural motion-model (SMM) by accounting for potential relative motion pattern changes between different anatomical structures from simulation to treatment. The motion model extracted from planning 4D-CT was divided into two structures, tumor and body excluding tumor, and the parameters of both structures were optimized together. Weighted free-form deformation (WFD) was employed afterwards to introduce flexibility in adjusting the weightings of different structures in the data fidelity constraint based on clinical interests. An XCAT (computerized patient model) simulation with a 30 mm diameter lesion was performed with various anatomical and respirational changes from planning 4D-CT to on-board volume to evaluate the method. The estimation accuracy was evaluated by the Volume-Percent-Difference (VPD)/Center-of-Mass-Shift (COMS) between lesions in the estimated and “ground-truth” on-board 4D-CBCT. Different on-board projection acquisition scenarios and projection noise levels were simulated to investigate their effects on the estimation accuracy. The method was also evaluated against 3 lung patients. Results: The SMM-WFD method achieved substantially better accuracy than the GMM-FD method for CBCT estimation using extremely small scan angles or projections. Using orthogonal 15° scanning angles, the VPD/COMS were 3.47±2.94% and 0.23±0.22 mm for SMM-WFD and 25.23±19.01% and 2.58±2.54 mm for GMM-FD among all 8 XCAT scenarios. Compared to GMM-FD, SMM-WFD was more robust against reduction of the scanning angles down to orthogonal 10°, with VPD/COMS of 6.21±5.61% and 0.39±0.49 mm, and more robust against reduction of projection numbers down to only 8 projections in total for both orthogonal-view 30° and orthogonal-view 15° scan angles. The SMM-WFD method was also more robust than the GMM-FD method against increasing levels of noise in the projection images. Additionally, the SMM-WFD technique provided better tumor estimation for all three lung patients compared to the GMM-FD technique. Conclusion: Compared to the GMM-FD technique, the SMM-WFD technique can substantially improve the 4D-CBCT estimation accuracy using extremely small scan angles and a low number of projections to provide fast, low-dose 4D target verification. PMID:28079267
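Conceptually, the DFM-based estimation reduces to optimizing a small number of PCA motion coefficients per structure against a weighted data-fidelity term. The sketch below illustrates only that structure of the problem with random placeholder arrays; the actual method deforms a prior CT and matches simulated projections to the measured on-board projections, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

n_voxels, n_modes = 5000, 3
rng = np.random.default_rng(0)
structures = ("tumor", "body")
mean_dfm = {s: rng.normal(0.0, 1.0, (n_voxels, 3)) for s in structures}          # mean deformation per structure
pca_modes = {s: rng.normal(0.0, 1.0, (n_modes, n_voxels, 3)) for s in structures} # PCA eigen-motions from planning 4D-CT
target_dfm = {s: mean_dfm[s] + np.tensordot(rng.normal(0.0, 1.0, n_modes), pca_modes[s], axes=1)
              for s in structures}                                                # stands in for the data-fidelity signal
structure_weight = {"tumor": 5.0, "body": 1.0}                                    # clinical emphasis on the tumor

def dfm(s, coeffs):
    # DFM(s) = mean field + weighted sum of PCA eigen-motions
    return mean_dfm[s] + np.tensordot(coeffs, pca_modes[s], axes=1)

def objective(x):
    coeffs = {"tumor": x[:n_modes], "body": x[n_modes:]}
    return sum(structure_weight[s] * np.sum((dfm(s, coeffs[s]) - target_dfm[s]) ** 2)
               for s in structures)

result = minimize(objective, x0=np.zeros(2 * n_modes), method="L-BFGS-B")
print(result.x)                                                                   # recovered PCA coefficients per structure
```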
Climatic extremes improve predictions of spatial patterns of tree species
Zimmermann, N.E.; Yoccoz, N.G.; Edwards, T.C.; Meier, E.S.; Thuiller, W.; Guisan, Antoine; Schmatz, D.R.; Pearman, P.B.
2009-01-01
Understanding niche evolution, dynamics, and the response of species to climate change requires knowledge of the determinants of the environmental niche and species range limits. Mean values of climatic variables are often used in such analyses. In contrast, the increasing frequency of climate extremes suggests the importance of understanding their additional influence on range limits. Here, we assess how measures representing climate extremes (i.e., interannual variability in climate parameters) explain and predict spatial patterns of 11 tree species in Switzerland. We find clear, although comparably small, improvement (+20% in adjusted D2, +8% and +3% in cross-validated True Skill Statistic and area under the receiver operating characteristics curve values) in models that use measures of extremes in addition to means. The primary effect of including information on climate extremes is a correction of local overprediction and underprediction. Our results demonstrate that measures of climate extremes are important for understanding the climatic limits of tree species and assessing species niche characteristics. The inclusion of climate variability likely will improve models of species range limits under future conditions, where changes in mean climate and increased variability are expected.
Scattering Tools for Nanostructure Phonon Engineering
2013-09-25
Existing experimental techniques for the characterization of phonons in nanomaterials, such as Raman scattering, are sensitive only to phonon modes with wavevectors of extremely small magnitude; fundamentally, the wavevectors that can be probed by Raman scattering are limited by the small momentum of photons in the visible spectrum. This poses a serious characterization challenge, which our work addresses.
Small Business Procurement Event
2014-08-13
Performing organization: Department of the Navy, Office of Small Business Programs. Distribution unlimited. Supplementary notes: NDIA 27th Navy Gold Coast Small Business Procurement Event, 12-13 Aug 2014, San Diego, CA.
... albumin (a product that is made from live donor blood). Although there is an extremely small chance that ... develop these antibodies, you may have an allergic reaction when you take medications made from murine proteins, ...
NASA Astrophysics Data System (ADS)
Torrungrueng, Danai; Johnson, Joel T.; Chou, Hsi-Tseng
2002-03-01
The novel spectral acceleration (NSA) algorithm has been shown to produce an O(N_tot) efficient iterative method of moments for the computation of radiation/scattering from both one-dimensional (1-D) and two-dimensional large-scale quasi-planar structures, where N_tot is the total number of unknowns to be solved. This method accelerates the matrix-vector multiplication in an iterative method of moments solution and divides contributions between points into "strong" (exact matrix elements) and "weak" (NSA algorithm) regions. The NSA method is based on a spectral representation of the electromagnetic Green's function and appropriate contour deformation, resulting in a fast multipole-like formulation in which contributions from large numbers of points to a single point are evaluated simultaneously. In the standard NSA algorithm the NSA parameters are derived on the basis of the assumption that the outermost possible saddle point, φ_s,max, along the real axis in the complex angular domain is small. For given height variations of quasi-planar structures, this assumption can be satisfied by adjusting the size of the strong region L_s. However, for quasi-planar structures with large height variations, the adjusted size of the strong region is typically large, resulting in significant increases in computational time for the computation of the strong-region contribution and degrading overall efficiency of the NSA algorithm. In addition, for the case of extremely large scale structures, studies based on the physical optics approximation and a flat surface assumption show that the given NSA parameters in the standard NSA algorithm may yield inaccurate results. In this paper, analytical formulas associated with the NSA parameters for an arbitrary value of φ_s,max are presented, resulting in more flexibility in selecting L_s to compromise between the computation of the contributions of the strong and weak regions. In addition, a "multilevel" algorithm, decomposing 1-D extremely large scale quasi-planar structures into more than one weak region and appropriately choosing the NSA parameters for each weak region, is incorporated into the original NSA method to improve its accuracy.
How morphometric characteristics affect flow accumulation values
NASA Astrophysics Data System (ADS)
Farek, Vladimir
2014-05-01
Remote sensing methods (such as aerial LIDAR surveys, land-use mapping, etc.) are becoming continually more available and accurate. On the other hand, in-situ surveying is still expensive. Remote sensing methods can therefore be extremely useful, above all in small, anthropogenically uninfluenced catchments with a poor or non-existent surveying network. Overland flow accumulation (FA) values are important indicators of exposure to flash floods or soil erosion. This value gives the number of cells of the Digital Elevation Model (DEM) grid that drain to each point of the catchment. This contribution deals with the relations between basic geomorphological and morphometric characteristics (such as the hypsometric integral, the Melton index of a subcatchment, etc.) and FA values. These relations are studied in the rocky sandstone landscapes of the National Park České Švýcarsko, with their particular occurrence of broken relief. All calculations are based on the high-resolution LIDAR DEM named Genesis created by TU Dresden. The main computational platform is GRASS GIS. The goal of the conference paper is to present a quick method, or indicators, for estimating which small subcatchments are threatened by higher flash flood or soil erosion risks, without the necessity of using sophisticated rainfall-runoff models. Catchments can easily be split into small subcatchments (or an existing subdivision used), basic characteristics computed, and, with knowledge of the links between these characteristics and FA values, the particular subcatchments potentially threatened by flash floods or soil erosion identified.
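To make the FA definition above concrete, a minimal D8 flow-accumulation sketch follows. Production tools (for example r.watershed in GRASS GIS) additionally handle flats, pits and multiple-flow-direction options, none of which are treated in this toy version.

```python
import numpy as np

def d8_flow_accumulation(dem):
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=float)                    # each cell contributes itself
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for flat_index in np.argsort(dem, axis=None)[::-1]:     # process cells from highest to lowest
        r, c = divmod(int(flat_index), cols)
        best_drop, target = 0.0, None
        for dr, dc in offsets:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                if drop > best_drop:
                    best_drop, target = drop, (rr, cc)
        if target is not None:                              # pass the accumulated count downslope
            acc[target] += acc[r, c]
    return acc

dem = np.array([[5.0, 4.5, 4.0],
                [4.6, 3.8, 3.2],
                [4.2, 3.0, 1.0]])
print(d8_flow_accumulation(dem))                            # the lowest corner drains all 9 cells
```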
Simón, Marc; Jordana, Xavier; Armentano, Nuria; Santos, Cristina; Díaz, Nancy; Solórzano, Eduvigis; López, Joan B; González-Ruiz, Mercedes; Malgosa, Assumpció
2011-11-01
Ancient populations have commonly been thought to have lived in small groups where extreme endogamy was the norm. To contribute to this debate, a genetic analysis was carried out on a collective burial with eight primary inhumations from Montanissell Cave in the Catalan pre-Pyrenees. Radiocarbon dating clearly placed the burial in the Bronze Age, around 3200 BP. The composition of the group (two adults, one male and one female; one young woman; and five children of both sexes) seemed to represent the structure of a typical nuclear family. The genetic evidence proves this assumption to be wrong. In fact, at least five out of the eight mitochondrial haplotypes were different, excluding a common maternal ancestor for all of them. Nevertheless, 50% of the inhumations shared haplogroup J, so the possibility of a maternal relationship cannot be ruled out. Indeed, combining different analyses performed using ancient and living populations, the probability of having four related J individuals in Montanissell Cave would range from 0.9884 to 0.9999. Owing to the particularities of this singular collective burial (a small number of bodies placed all together in a hidden cave, evidence of non-simultaneous interments, close dating, and unusual grave goods), we suggest that it might represent a small group with a patrilocal mating system. Copyright © 2011 Wiley-Liss, Inc.
2013-01-01
Background: Multi-site health sciences research is becoming more common, as it enables investigation of rare outcomes and diseases and new healthcare innovations. Multi-site research usually involves the transfer of large amounts of research data between collaborators, which increases the potential for accidental disclosures of protected health information (PHI). Standard protocols for preventing release of PHI are extremely vulnerable to human error, particularly when the shared data sets are large. Methods: To address this problem, we developed an automated program (SAS macro) to identify possible PHI in research data before it is transferred between research sites. The macro reviews all data in a designated directory to identify suspicious variable names and data patterns. The macro looks for variables that may contain personal identifiers such as medical record numbers and social security numbers. In addition, the macro identifies dates and numbers that may identify people who belong to small groups, who may be identifiable even in the absence of traditional identifiers. Results: Evaluation of the macro on 100 sample research data sets indicated a recall of 0.98 and precision of 0.81. Conclusions: When implemented consistently, the macro has the potential to streamline the PHI review process and significantly reduce accidental PHI disclosures. PMID:23521861
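The original tool is a SAS macro; as an illustration of the screening logic it describes, the following is a rough Python analogue. The regular expressions, variable-name keywords, and small-group threshold are assumptions made for this sketch, not the macro's actual rules.

```python
import re
import pandas as pd

SUSPECT_NAME = re.compile(r"(ssn|social|mrn|med(ical)?_?rec|name|dob|birth|address|phone)", re.I)
SSN_LIKE = re.compile(r"^\d{3}-?\d{2}-?\d{4}$")
DATE_LIKE = re.compile(r"^\d{4}-\d{2}-\d{2}$|^\d{1,2}/\d{1,2}/\d{2,4}$")

def fraction_matching(values, pattern):
    values = [v for v in values if v]
    return sum(bool(pattern.match(v)) for v in values) / len(values) if values else 0.0

def screen_for_phi(df, small_group_threshold=5):
    findings = []
    for col in df.columns:
        values = df[col].dropna().astype(str).tolist()
        if SUSPECT_NAME.search(col):
            findings.append(f"variable name '{col}' looks like an identifier")
        if fraction_matching(values, SSN_LIKE) > 0.5:
            findings.append(f"column '{col}' contains SSN-like values")
        if fraction_matching(values, DATE_LIKE) > 0.5:
            findings.append(f"column '{col}' contains date-like values")
        counts = pd.Series(values).value_counts()
        if len(counts) > 1 and counts.min() < small_group_threshold:
            findings.append(f"column '{col}' has categories with fewer than "
                            f"{small_group_threshold} records (small-group re-identification risk)")
    return findings

df = pd.DataFrame({"patient_ssn": ["123-45-6789", "987-65-4321"], "clinic": ["A", "A"]})
print(screen_for_phi(df))
```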
NASA Astrophysics Data System (ADS)
Robertson, J. Gordon; Bland-Hawthorn, Joss
2012-09-01
As telescopes get larger, the size of a seeing-limited spectrograph for a given resolving power becomes larger also, and for ELTs the size will be so great that high resolution instruments of simple design will be infeasible. Solutions include adaptive optics (but not providing full correction for short wavelengths) or image slicers (which give feasible but still large instruments). Here we develop the solution proposed by Bland-Hawthorn and Horton: the use of diffraction-limited spectrographs which are compact even for high resolving power. Their use is made possible by the photonic lantern, which splits a multi-mode optical fiber into a number of single-mode fibers. We describe preliminary designs for such spectrographs, at a resolving power of R ~ 50,000. While they are small and use relatively simple optics, the challenges are to accommodate the longest possible fiber slit (hence maximum number of single-mode fibers in one spectrograph) and to accept the beam from each fiber at a focal ratio considerably faster than for most spectrograph collimators, while maintaining diffraction-limited imaging quality. It is possible to obtain excellent performance despite these challenges. We also briefly consider the number of such spectrographs required, which can be reduced by full or partial adaptive optics correction, and/or moving towards longer wavelengths.
Small Fermi surfaces of PtSn4 and Pt3In7
NASA Astrophysics Data System (ADS)
Yara, T.; Kakihana, M.; Nishimura, K.; Hedo, M.; Nakama, T.; Ōnuki, Y.; Harima, H.
2018-05-01
An extremely large magnetoresistance of PtSn4 has been recently observed and discussed from a viewpoint of de Haas-van Alphen (dHvA) oscillations and theoretical small Fermi surfaces. We have studied precisely the Fermi surfaces by measuring angular dependences of dHvA frequencies and have also carried out the full potential LAPW band calculation. Furthermore, small Fermi surfaces have been detected in another Pt-based compound of Pt3In7 with the cubic structure.
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1991-01-01
An algorithm is presented for unsteady two-dimensional incompressible Navier-Stokes calculations. This algorithm is based on the fourth order partial differential equation for incompressible fluid flow which uses the streamfunction as the only dependent variable. The algorithm is second order accurate in both time and space. It uses a multigrid solver at each time step. It is extremely efficient with respect to the use of both CPU time and physical memory. It is extremely robust with respect to Reynolds number.
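For reference, the streamfunction-only formulation presumably meant here can be written as a single fourth-order equation: with u = ∂ψ/∂y and v = -∂ψ/∂x, taking the curl of the 2-D incompressible momentum equations gives

```latex
\frac{\partial}{\partial t}\nabla^{2}\psi
  + \frac{\partial\psi}{\partial y}\,\frac{\partial}{\partial x}\nabla^{2}\psi
  - \frac{\partial\psi}{\partial x}\,\frac{\partial}{\partial y}\nabla^{2}\psi
  = \nu\,\nabla^{4}\psi
```

so the vorticity transport and continuity equations collapse into one equation for ψ, which is the property such an algorithm exploits.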
Climate Change and Extreme Weather Impacts on Salt Marsh Plants
Regional assessments of climate change impacts on New England demonstrate a clear rise in rainfall over the past century. The number of extreme precipitation events (i.e., two or more inches of rain falling during a 48-hour period) has also increased over the past few decades. ...
Ajtić, J; Brattich, E; Sarvan, D; Djurdjevic, V; Hernández-Ceballos, M A
2018-05-01
Relationships between the beryllium-7 activity concentrations in surface air and meteorological parameters (temperature, atmospheric pressure, and precipitation), teleconnection indices (Arctic Oscillation, North Atlantic Oscillation, and Scandinavian pattern) and number of sunspots are investigated using two multivariate statistical techniques: hierarchical cluster and factor analysis. The beryllium-7 surface measurements over 1995-2011, at four sampling sites located in the Scandinavian Peninsula, are obtained from the Radioactivity Environmental Monitoring Database. In all sites, the statistical analyses show that the beryllium-7 concentrations are strongly linked to temperature. Although the beryllium-7 surface concentration exhibits the well-characterised spring/summer maximum, our study shows that extremely high beryllium-7 concentrations, defined as the values exceeding the 90th percentile in the data records for each site, also occur over the October-March period. Two types of autumn/winter extremes are distinguished: type-1 when the number of extremes in a given month is less than three, and type-2 when at least three extremes occur in a month. Factor analysis performed for these autumn/winter events shows a weaker effect of temperature and a stronger impact of the transport and production signal on the beryllium-7 concentrations. Further, the majority of the type-2 extremes are associated with a very high monthly Scandinavian teleconnection index. The type-2 extremes that occurred in January, February and March are also linked to sudden stratospheric warmings of the Arctic vortex. Our results indicate that the Scandinavian teleconnection index might be a good indicator of the meteorological conditions facilitating extremely high beryllium-7 surface concentrations over Scandinavia during autumn and winter. Copyright © 2018 Elsevier Ltd. All rights reserved.
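A small sketch of the extreme-event bookkeeping described above: values above a site's 90th percentile are flagged, and each October-March month is classed as type-1 (fewer than three extremes) or type-2 (three or more). The weekly synthetic series and its units are placeholders for the Radioactivity Environmental Monitoring records.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
dates = pd.date_range("1995-01-01", "2011-12-31", freq="W")
be7 = pd.Series(rng.gamma(4.0, 0.8, size=len(dates)), index=dates, name="Be7")

threshold = be7.quantile(0.90)                              # site-specific 90th percentile
extremes = be7[be7 > threshold]

winter = extremes[extremes.index.month.isin([10, 11, 12, 1, 2, 3])]
per_month = winter.groupby([winter.index.year, winter.index.month]).size()
event_type = per_month.apply(lambda n: "type-2" if n >= 3 else "type-1")
print(event_type.value_counts())
```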
McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan
2015-01-01
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼ 10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden-unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden-units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
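A sketch of the main enhancement described above, in which each hidden unit sees only a random rectangular patch of the input image so that the input weight matrix is sparse. The patch-size limits, the ReLU nonlinearity, the ridge parameter, and the random stand-in data are assumptions of this illustration, not the paper's tuned choices.

```python
import numpy as np

def receptive_field_weights(n_hidden, img_side=28, min_patch=6, seed=0):
    rng = np.random.default_rng(seed)
    W = np.zeros((img_side * img_side, n_hidden))
    for j in range(n_hidden):
        h = rng.integers(min_patch, img_side + 1)           # random patch height
        w = rng.integers(min_patch, img_side + 1)           # random patch width
        top = rng.integers(0, img_side - h + 1)
        left = rng.integers(0, img_side - w + 1)
        mask = np.zeros((img_side, img_side), dtype=bool)
        mask[top:top + h, left:left + w] = True
        W[mask.ravel(), j] = rng.normal(size=h * w)         # nonzero weights only inside the patch
    return W

def train_rf_elm(X, Y, n_hidden=1000, ridge=1e-2):
    W = receptive_field_weights(n_hidden)
    H = np.maximum(X @ W, 0.0)                              # hidden activations
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ Y)
    return W, beta

X = np.random.default_rng(1).random((2000, 28 * 28))        # random stand-in for MNIST images
Y = np.eye(10)[np.random.default_rng(2).integers(0, 10, size=2000)]  # one-hot labels
W, beta = train_rf_elm(X, Y)
pred = np.argmax(np.maximum(X @ W, 0.0) @ beta, axis=1)
print(pred[:10])
```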
Kong, Y K; Lee, S J; Lee, K S; Kim, G R; Kim, D M
2015-10-01
Researchers have been using various ergonomic tools to study occupational musculoskeletal diseases in industrial contexts. However, in agricultural work, where the work environment is poorer and the socio-psychological stress is high due to the high labor intensities of the industry, current research efforts have been scarce, and the number of available tools is small. In our preliminary studies, which focused on a limited number of body parts and other working elements, we developed separate evaluation tools for the upper and lower extremities. The current study was conducted to develop a whole-body ergonomic assessment tool for agricultural work that integrates the existing assessment tools for lower and upper extremities developed in the preliminary studies and to verify the relevance of the integrated assessment tool. To verify the relevance of the Agricultural Whole-Body Assessment (AWBA) tool, we selected 50 different postures that occur frequently in agricultural work. Our results showed that the AWBA-determined risk levels were similar to the subjective risk levels determined by experts. In addition, as the risk level increased, the average risk level increased to a similar extent. Moreover, the differences in risk levels between the AWBA and expert assessments were mostly smaller than the differences in risk levels between other assessment tools and the expert assessments in this study. In conclusion, the AWBA tool developed in this study was demonstrated to be appropriate for use as a tool for assessing various postures commonly assumed in agricultural work. Moreover, we believe that our verification of the assessment tools will contribute to the enhancement of the quality of activities designed to prevent and control work-related musculoskeletal diseases in other industries.
Gene copy number variation and its significance in cyanobacterial phylogeny
2012-01-01
Background: In eukaryotes, variation in gene copy numbers is often associated with deleterious effects, but may also have positive effects. For prokaryotes, studies on gene copy number variation are rare. Previous studies have suggested that high numbers of rRNA gene copies can be advantageous in environments with changing resource availability, but further associations between gene copies and phenotypic traits have not been documented. We used one of the morphologically most diverse prokaryotic phyla to test whether numbers of gene copies are associated with levels of cell differentiation. Results: We implemented a search algorithm that identified 44 genes with highly conserved copies across 22 fully sequenced cyanobacterial taxa. For two very basal cyanobacterial species, Gloeobacter violaceus and a thermophilic Synechococcus species, the distinct phylogenetic positions previously found were supported by identical protein-coding gene copy numbers. Furthermore, we found that increased ribosomal gene copy numbers showed a strong correlation with cyanobacteria capable of terminal cell differentiation. Additionally, we detected extremely low variation of 16S rRNA sequence copies within the cyanobacteria. We compared our results for 16S rRNA to three other eubacterial phyla (Chloroflexi, Spirochaetes and Bacteroidetes). Based on Bayesian phylogenetic inference and comparisons of genetic distances, we could confirm that cyanobacterial 16S rRNA paralogs and orthologs show significantly stronger conservation than found in other eubacterial phyla. Conclusions: A higher number of ribosomal operons could potentially provide an advantage to terminally differentiated cyanobacteria. Furthermore, we suggest that 16S rRNA gene copies in cyanobacteria are homogenized by both concerted evolution and purifying selection. In addition, the small ribosomal subunit in cyanobacteria appears to evolve at extraordinarily slow evolutionary rates, an observation that has been made previously for morphological characteristics of cyanobacteria. PMID:22894826
The Impact of Air-Sea Interactions on the Representation of Tropical Precipitation Extremes
NASA Astrophysics Data System (ADS)
Hirons, L. C.; Klingaman, N. P.; Woolnough, S. J.
2018-02-01
The impacts of air-sea interactions on the representation of tropical precipitation extremes are investigated using an atmosphere-ocean-mixed-layer coupled model. The coupled model is compared to two atmosphere-only simulations driven by the coupled-model sea-surface temperatures (SSTs): one with 31-day running means (31d), the other with a repeating mean annual cycle. This allows separation of the effects of interannual SST variability from those of coupled feedbacks on shorter timescales. Crucially, all simulations have a consistent mean state with very small SST biases against present-day climatology. 31d overestimates the frequency, intensity, and persistence of extreme tropical precipitation relative to the coupled model, likely due to excessive SST-forced precipitation variability. This implies that atmosphere-only attribution and time-slice experiments may overestimate the strength and duration of precipitation extremes. In the coupled model, air-sea feedbacks damp extreme precipitation, through negative local thermodynamic feedbacks between convection, surface fluxes, and SST.
Highly Conductive Multifunctional Graphene Polycarbonate Nanocomposites
NASA Technical Reports Server (NTRS)
Yoonessi, Mitra; Gaier, James R.
2010-01-01
Graphene nanosheet bisphenol A polycarbonate nanocomposites (0.027-2.2 vol %) prepared by both emulsion mixing and solution blending methods, followed by compression molding at 287 °C, exhibited dc electrical percolation thresholds of approximately 0.14 and approximately 0.38 vol %, respectively. The conductivities of the 2.2 vol % graphene nanocomposites were 0.512 and 0.226 S/cm for emulsion and solution mixing, respectively. The 1.1 and 2.2 vol % graphene nanocomposites exhibited frequency-independent behavior. Inherent conductivity, extremely high aspect ratio, and nanostructure-directed assembly of the graphene using PC nanospheres are the main factors behind the excellent electrical properties of the nanocomposites. Dynamic tensile moduli of the nanocomposites increased with increasing graphene content. The glass transition temperatures decreased with increasing graphene content for the emulsion series. High-resolution transmission electron microscopy (HR-TEM) and small-angle neutron scattering (SANS) showed isolated graphene with no connectivity path for insulating nanocomposites and connected nanoparticles for the conductive nanocomposites. A stacked-disk model was used to obtain the average particle radius, average number of graphene layers per stack, and stack spacing by simulation of the experimental SANS data. Morphology studies indicated the presence of well-dispersed graphene and small graphene stacks with infusion of polycarbonate within the stacks.
Hu, Jianzhong; Nudelman, German; Shimoni, Yishai; Kumar, Madhu; Ding, Yaomei; López, Carolina; Hayot, Fernand; Wetmur, James G.; Sealfon, Stuart C.
2011-01-01
In the first few hours following Newcastle disease viral infection of human monocyte-derived dendritic cells, the induction of IFNB1 is extremely low and the secreted type I interferon response is below the detection limit of ELISA assays. However, many interferon-induced genes are activated at this time, for example DDX58 (RIGI), which in response to viral RNA induces IFNB1. We investigated whether the early induction of IFNB1 in only a small percentage of infected cells leads to low-level IFN secretion that then induces IFN-responsive genes in all cells. We developed an agent-based mathematical model to explore the IFNB1 and DDX58 temporal dynamics. Simulations showed that a small number of early responder cells provide a mechanism for efficient and controlled activation of the DDX58-IFNB1 positive feedback loop. The model predicted distributions of single-cell responses that were confirmed by single-cell mRNA measurements. The results suggest that large cell-to-cell variation plays an important role in the early innate immune response, and that the variability is essential for the efficient activation of the IFNB1-based feedback loop. PMID:21347441
Self-assembly of robotic micro- and nanoswimmers using magnetic nanoparticles
NASA Astrophysics Data System (ADS)
Cheang, U. Kei; Kim, Min Jun
2015-03-01
Micro- and nanoscale robotic swimmers hold great promise for significantly enhancing the performance of particulate drug delivery by providing high accuracy at extremely small scales. Here, we introduce micro- and nanoswimmers fabricated using self-assembly of nanoparticles and controlled via magnetic fields. Nanoparticles self-align into parallel chains under magnetization. The swimmers exhibit flexibility under a rotating magnetic field, resulting in chiral structures upon deformation and thereby satisfying the prerequisite for non-reciprocal motion at low Reynolds number. The swimmers are actuated wirelessly using an external rotating magnetic field supplied by approximate Helmholtz coils. By controlling the concentration of the suspended magnetic nanoparticles, the swimmers can be modulated into different sizes. Nanoscale swimmers are largely influenced by Brownian motion, as observed from their jerky trajectories. The microswimmers, which are roughly three times larger, are less vulnerable to the effects of Brownian motion. In this paper, we demonstrate responsive directional control of micro- and nanoswimmers and compare their respective diffusivities and trajectories to characterize the implications of Brownian disturbance on the motions of small and large swimmers. We also perform a simulation using a kinematic model for the magnetic swimmers that includes the stochastic nature of Brownian motion.
Su, Andreas A. H.; Tripp, Vanessa; Randau, Lennart
2013-01-01
The methanogenic archaeon Methanopyrus kandleri grows near the upper temperature limit for life. Genome analyses revealed strategies to adapt to these harsh conditions and elucidated a unique transfer RNA (tRNA) C-to-U editing mechanism at base 8 for 30 different tRNA species. Here, RNA-Seq deep sequencing methodology was combined with computational analyses to characterize the small RNome of this hyperthermophilic organism and to obtain insights into RNA metabolism at extreme temperatures. A total of 132 small RNAs were identified that guide RNA modifications, which are expected to stabilize structured RNA molecules. The C/D box guide RNAs were shown to exist as circular RNA molecules. In addition, clustered regularly interspaced short palindromic repeat (CRISPR) RNA processing and potential regulatory RNAs were identified. Finally, the identification of tRNA precursors before and after the unique C8-to-U8 editing activity enabled the determination of the order of tRNA processing events, with termini truncation preceding intron removal. This order of tRNA maturation follows the compartmentalized tRNA processing order found in eukaryotes and suggests its conservation during evolution. PMID:23620296
[Henoch-Schönlein purpura in a male cocaine user with HIV infection and p-ANCA positivity].
De Paoli, María C; Moretti, Dino; Scolari Pasinato, Carlos M; Buncuga, Martín G
Henoch-Schönlein purpura (HSP) is a small vessel vasculitis with IgA immune complex deposition. Presentation in adults is rare and severe. Reported cases of HSP in patients infected with HIV are scarce. Antineutrophil cytoplasmic antibodies (ANCA) are commonly found in other systemic vasculitides, but rarely in HSP, and the perinuclear pattern (p-ANCA) is even more unusual. Besides small vessel vasculitis, ANCA positivity can be detected in a number of different pathological conditions, in association with infectious processes (including HIV) or cocaine use; the p-ANCA pattern in particular is associated with drugs, inflammatory bowel disease, and autoimmune diseases. We report the case of a 35-year-old man with toxic habits (cocaine, marijuana) who consulted for abdominal pain, hematochezia, and purpura on the lower extremities, followed by fever, joint pain, and progression of the purpura associated with nephritic syndrome and p-ANCA positivity. During hospitalization, HIV infection was detected. Renal biopsy showed IgA nephropathy, with a favorable response to corticosteroid and antiproteinuric treatment. We report this case because of the rarity of the presentation and the diagnostic and therapeutic challenge it posed. The role of ANCA in the pathogenesis and management of adult HSP remains to be elucidated.
Miura, Naoki; Kucho, Ken-Ichi; Noguchi, Michiko; Miyoshi, Noriaki; Uchiumi, Toshiki; Kawaguchi, Hiroaki; Tanimoto, Akihide
2014-01-01
The microminipig, which weighs less than 10 kg at an early stage of maturity, has been reported as a potential experimental model animal. Its extremely small size and other distinct characteristics suggest the possibility of a number of differences between the genome of the microminipig and that of conventional pigs. In this study, we analyzed the genomes of two healthy microminipigs using the SOLiD™ next-generation sequencing system. We then compared the obtained genomic sequences with a genomic database for the domestic pig (Sus scrofa). The mapping coverage of sequenced tags from the microminipig to conventional pig genomic sequences was greater than 96%, and we detected no clear, substantial genomic variance from these data. The results may indicate that the distinct characteristics of the microminipig derive from small-scale alterations in the genome, such as single nucleotide polymorphisms or translational modifications, rather than large-scale deletion or insertion polymorphisms. Further investigation of the entire genomic sequence of the microminipig with methods enabling deeper coverage is required to elucidate the genetic basis of its distinct phenotypic traits. Copyright © 2014 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.
A Proof of Concept Experiment for Reducing Skin Friction by Using a Micro-Blowing Technique
NASA Technical Reports Server (NTRS)
Hwang, Danny P.
1996-01-01
A proof of concept experiment for reducing skin friction has been conducted in the Advanced Nozzle and Engine Components Test Facility at the NASA Lewis Research Center. In this unique concept, called the micro-blowing technique (MBT), an extremely small amount of air was blown vertically through very small holes to reduce the surface roughness and to control the gradient of the flow velocity profile on the surface, thereby reducing skin friction. Research revealed that the skin was the most important factor in making this concept achievable. The proposed skin consisted of two layers. The inner layer was a low-permeability porous skin for distributing the blowing air evenly, while the outer layer, with small holes, controlled the vertical or nearly vertical blowing air. Preliminary experimental results showed that the MBT has the potential to reduce skin friction well below that of a nonporous plain flat plate. Of the skins tested, three were identified as MBT skins. They provided very low unblown skin friction, such that a large skin friction reduction, below the flat plate value, was achieved with very small amounts of blowing air. A skin friction reduction of 55 percent was achieved at a Mach number of 0.3 for an exhaust pressure of 0.85 atm, and a 60 percent reduction was obtained for an exhaust pressure of 0.24 atm (corresponding to an altitude of 10,700 m) at the same Mach number. A significant reduction in skin friction of over 25 percent was achieved for an exhaust pressure of 0.24 atm at a Mach number of 0.7. This implied that the MBT could be applied to a wide range of flight conditions. It is also believed that an additional 10 percent reduction could be obtained by eliminating the gap between the inner layer and the outer layer. The aspect ratio of the small vertical holes in the outer layer of the MBT skin should be larger than 4, based on the preliminary conclusion from this test. Many experiments are needed to determine the optimal MBT skin. The penalty associated with the MBT needs to be assessed. However, preliminary results indicated that the MBT could provide a 25 to 35 percent reduction for real-world applications. The concept can be applied not only to airplanes but also to missiles, submarines (micro-blowing water instead of air), and ocean liners.
Cardiovascular consequences of extreme prematurity: the EPICure study.
McEniery, Carmel M; Bolton, Charlotte E; Fawke, Joseph; Hennessy, Enid; Stocks, Janet; Wilkinson, Ian B; Cockcroft, John R; Marlow, Neil
2011-07-01
The long-term consequences of extreme prematurity are becoming increasingly important, given recent improvements in neonatal intensive care. The aim of the current study was to examine the cardiovascular consequences of extreme prematurity in 11-year-olds born at or before 25 completed weeks of gestation. Age- and sex-matched classmates were recruited as controls. Information concerning perinatal and maternal history was collected, and current anthropometric characteristics were measured in 219 children born extremely preterm and 153 classmates. A subset of the extremely preterm children (n = 68) and classmates (n = 90) then underwent detailed haemodynamic investigations, including measurement of supine blood pressure (BP), aortic pulse wave velocity (aPWV, a measure of aortic stiffness) and augmentation index (AIx, a measure of arterial pressure wave reflections). Seated brachial systolic and diastolic BP were not different between extremely preterm children and classmates (P = 0.3 for both), although there was a small, significant elevation in supine mean and diastolic BP in the extremely preterm children (P < 0.05 for both). Arterial pressure wave reflections were significantly elevated in the extremely preterm children (P < 0.001) and this persisted after adjusting for confounding variables. However, aortic stiffness was not different between the groups (P = 0.1). These data suggest that extreme prematurity is associated with altered arterial haemodynamics in children, not evident from the examination of brachial BP alone. Moreover, the smaller, preresistance and resistance vessels rather than large elastic arteries appear to be most affected. Children born extremely preterm may be at increased future cardiovascular risk.
The cost of large numbers of hypothesis tests on power, effect size and sample size.
Lazzeroni, L C; Ray, A
2012-01-01
Advances in high-throughput biology and computer science are driving an exponential increase in the number of hypothesis tests in genomics and other scientific disciplines. Studies using current genotyping platforms frequently include a million or more tests. In addition to the monetary cost, this increase imposes a statistical cost owing to the multiple testing corrections needed to avoid large numbers of false-positive results. To safeguard against the resulting loss of power, some have suggested sample sizes on the order of tens of thousands that can be impractical for many diseases or may lower the quality of phenotypic measurements. This study examines the relationship between the number of tests on the one hand and power, detectable effect size or required sample size on the other. We show that once the number of tests is large, power can be maintained at a constant level, with comparatively small increases in the effect size or sample size. For example, at the 0.05 significance level, a 13% increase in sample size is needed to maintain 80% power for ten million tests compared with one million tests, whereas a 70% increase in sample size is needed for 10 tests compared with a single test. Relative costs are less when measured by increases in the detectable effect size. We provide an interactive Excel calculator to compute power, effect size or sample size when comparing study designs or genome platforms involving different numbers of hypothesis tests. The results are reassuring in an era of extreme multiple testing.
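The quoted increases can be reproduced, at least approximately, with a simple normal-approximation power calculation for a two-sided z-test under a Bonferroni correction. The sketch below is an illustration under those assumptions rather than the authors' interactive calculator, but it recovers values close to the 70% and 13% figures.

# Sketch (normal-approximation assumption, not the authors' Excel calculator):
# relative sample size needed to keep fixed power after a Bonferroni correction.
from scipy.stats import norm

def relative_n(m_tests, alpha=0.05, power=0.80):
    """Sample-size multiplier vs. a single test: n is proportional to
    (z_{alpha/(2m)} + z_{power})^2 for a two-sided z-test."""
    z_alpha = norm.isf(alpha / (2 * m_tests))   # critical value after Bonferroni
    z_power = norm.isf(1 - power)               # quantile corresponding to the power
    baseline = (norm.isf(alpha / 2) + z_power) ** 2
    return (z_alpha + z_power) ** 2 / baseline

print(round(relative_n(10), 2))                          # ~1.70: ~70% more for 10 tests vs. 1
print(round(relative_n(10**7) / relative_n(10**6), 2))   # ~1.13: ~13% more for 10^7 vs. 10^6 tests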
NASA Astrophysics Data System (ADS)
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a sub-region of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate this approach using a publicly available head and neck CT dataset. We also show that a deep neural network of similar depth, if trained directly using backpropagation, cannot achieve the tasks achieved using our layer-wise training paradigm.
Santoro, Simone; Sanchez-Suarez, Cristina; Rouco, Carlos; Palomo, L Javier; Fernández, M Carmen; Kufner, Maura B; Moreno, Sacramento
2017-10-01
Climate change affects the distribution and persistence of species. However, forecasting species' responses to these changes requires long-term data series that are often lacking in ecological studies. We used 15 years of small mammal trapping data collected between 1978 and 2015 in 3 areas at Doñana National Park (southwest Spain) to (i) describe changes in species composition and (ii) test the association between local climate conditions and the size of small mammal populations. Overall, 5 species were captured: wood mouse Apodemus sylvaticus, Algerian mouse Mus spretus, greater white-toothed shrew Crocidura russula, garden dormouse Eliomys quercinus, and black rat Rattus rattus. The temporal pattern in the proportion of captures of each species suggests that small mammal diversity declined over time. While the larger species better adapted to colder climates (e.g., E. quercinus) have disappeared from our trapping records, M. spretus, a small species inhabiting southwest Europe and the Mediterranean coast of Africa, is currently almost the only species trapped. We used 2-level hierarchical models to separate changes in abundance from changes in probability of capture, using records of A. sylvaticus in all 3 areas and of M. spretus in 1. We found that heavy rainfall and low temperatures were positively related to the abundance of A. sylvaticus, and that the number of extremely hot days was negatively related to the abundance of M. spretus. Although other mechanisms are likely to be involved, our findings support the importance of climate for the distribution and persistence of these species and raise conservation concerns about potential cascading effects in the Doñana ecosystem.
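To illustrate how a two-level hierarchical model can separate abundance from probability of capture, the following sketch fits a simple binomial N-mixture model to simulated repeated counts. The model structure (Poisson abundance, constant detection), the parameter values, and the simulated data are assumptions for illustration only, not the authors' analysis code.

# Minimal sketch (assumed N-mixture structure): estimate mean abundance (lambda)
# and probability of capture (p) jointly from repeated counts at many sites.
import numpy as np
from scipy.stats import poisson, binom
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_lam, true_p, n_sites, n_visits = 6.0, 0.4, 60, 4
N = rng.poisson(true_lam, n_sites)                          # latent abundance per site
y = rng.binomial(N[:, None], true_p, (n_sites, n_visits))   # observed counts

def nll(params, y, n_max=60):
    lam, p = np.exp(params[0]), 1 / (1 + np.exp(-params[1]))
    Ns = np.arange(n_max + 1)
    prior = poisson.pmf(Ns, lam)                             # P(N), marginalised below
    lik = np.array([(prior * np.prod(binom.pmf(yi[:, None], Ns, p), axis=0)).sum()
                    for yi in y])                            # P(y_i) summed over latent N
    return -np.log(lik + 1e-300).sum()

fit = minimize(nll, x0=[0.0, 0.0], args=(y,), method="Nelder-Mead")
lam_hat, p_hat = np.exp(fit.x[0]), 1 / (1 + np.exp(-fit.x[1]))
print(f"lambda ~ {lam_hat:.2f} (true {true_lam}), p ~ {p_hat:.2f} (true {true_p})")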
TECA: A Parallel Toolkit for Extreme Climate Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prabhat, Mr; Ruebel, Oliver; Byna, Surendra
2012-03-12
We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.
The Microphysical Structure of Extreme Precipitation as Inferred from Ground-Based Raindrop Spectra.
NASA Astrophysics Data System (ADS)
Uijlenhoet, Remko; Smith, James A.; Steiner, Matthias
2003-05-01
The controls on the variability of raindrop size distributions in extreme rainfall and the associated radar reflectivity-rain rate relationships are studied using a scaling-law formalism for the description of raindrop size distributions and their properties. This scaling-law formalism enables a separation of the effects of changes in the scale of the raindrop size distribution from those in its shape. Parameters controlling the scale and shape of the scaled raindrop size distribution may be related to the microphysical processes generating extreme rainfall. A global scaling analysis of raindrop size distributions corresponding to rain rates exceeding 100 mm h⁻¹, collected during the 1950s with the Illinois State Water Survey raindrop camera in Miami, Florida, reveals that extreme rain rates tend to be associated with conditions in which the variability of the raindrop size distribution is strongly number controlled (i.e., characteristic drop sizes are roughly constant). This means that changes in properties of raindrop size distributions in extreme rainfall are largely produced by varying raindrop concentrations. As a result, rainfall integral variables (such as radar reflectivity and rain rate) are roughly proportional to each other, which is consistent with the concept of the so-called equilibrium raindrop size distribution and has profound implications for radar measurement of extreme rainfall. A time series analysis for two contrasting extreme rainfall events supports the hypothesis that the variability of raindrop size distributions for extreme rain rates is strongly number controlled. However, this analysis also reveals that the actual shapes of the (measured and scaled) spectra may differ significantly from storm to storm. This implies that the exponents of power-law radar reflectivity-rain rate relationships may be similar, and close to unity, for different extreme rainfall events, but their prefactors may differ substantially. Consequently, there is no unique radar reflectivity-rain rate relationship for extreme rain rates, but the variability is essentially reduced to one free parameter (i.e., the prefactor). It is suggested that this free parameter may be estimated on the basis of differential reflectivity measurements in extreme rainfall.
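For readers unfamiliar with the scaling-law formalism invoked here, its general form (following the formulation of Sempere Torres and co-workers; the notation below is an assumed illustration, not reproduced from this paper) can be written as

N(D,R) = R^{\alpha}\, g\!\left(D R^{-\beta}\right),
\qquad
\Omega_n \equiv \int_0^{\infty} D^{n} N(D,R)\, dD = R^{\alpha+(n+1)\beta} \int_0^{\infty} x^{n} g(x)\, dx,

so radar reflectivity, the sixth moment, scales as Z = \Omega_6 \propto R^{\alpha+7\beta}. Number-controlled variability corresponds to \beta \approx 0 (constant characteristic drop size); self-consistency of the rain rate with its own moment then forces \alpha \approx 1, giving Z \propto R with an exponent near unity and a prefactor set by the shape function g, which can differ from storm to storm.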
Qin, Jin; Trudeau, Matthieu; Katz, Jeffrey N; Buchholz, Bryan; Dennerlein, Jack T
2011-08-01
Musculoskeletal disorders associated with computer use span the joints of the upper extremity. Computing typically involves tapping in multiple directions. Thus, we sought to describe the loading on the finger, wrist, elbow and shoulder joints in terms of kinematic and kinetic differences between single key switch tapping and directional tapping on multiple keys. An experiment with a repeated-measures design was conducted. Six subjects tapped with their right index finger on a stand-alone number keypad placed horizontally in three conditions: (1) on a single key switch (the number key 5); (2) left and right on number keys 4 and 6; (3) top and bottom on number keys 8 and 2. A force-torque transducer underneath the keypad measured the fingertip force. An active-marker infrared motion analysis system measured the kinematics of the fingertip, hand, forearm, upper arm and torso. Joint moments for the metacarpophalangeal, wrist, elbow, and shoulder joints were estimated using inverse dynamics. Tapping in the top-bottom orientation introduced the largest biomechanical loading on the upper extremity, especially for the proximal joint, followed by tapping in the left-right orientation; the lowest loading was observed during single key switch tapping. Directional tapping on average increased the fingertip force, joint excursion, and peak-to-peak joint torque by 45%, 190% and 55%, respectively. Identifying the biomechanical loading patterns associated with these fundamental movements of keying improves the understanding of the risks of upper extremity musculoskeletal disorders for computer keyboard users. Copyright © 2010 Elsevier Ltd. All rights reserved.
Asteroid Origins Satellite (AOSAT) I: An On-orbit Centrifuge Science Laboratory
NASA Astrophysics Data System (ADS)
Lightholder, Jack; Thoesen, Andrew; Adamson, Eric; Jakubowski, Jeremy; Nallapu, Ravi; Smallwood, Sarah; Raura, Laksh; Klesh, Andrew; Asphaug, Erik; Thangavelautham, Jekan
2017-04-01
Exploration of asteroids, comets and small moons (small bodies) can answer fundamental questions relating to the formation of the solar system, the availability of resources, and the nature of impact hazards. Near-earth asteroids and the small moons of Mars are potential targets of human exploration. But as illustrated by recent missions, small body surface exploration remains challenging, expensive, and fraught with risk. Despite their small size, they are among the most extreme planetary environments, with low and irregular gravity, loosely bound regolith, extreme temperature variation, and the presence of electrically charged dust. Here we describe the Asteroid Origins Satellite (AOSAT-I), an on-orbit, 3U CubeSat centrifuge using a sandwich-sized bed of crushed meteorite fragments to replicate asteroid surface conditions. Demonstration of this CubeSat will provide a low-cost pathway to physical asteroid model validation, shed light on the origin and geophysics of asteroids, and constrain the design of future landers, rovers, resource extractors, and human missions. AOSAT-I will conduct scientific experiments within its payload chamber while operating in two distinct modes: (1) as a nonrotating microgravity laboratory to investigate primary accretion, and (2) as a rotating centrifuge producing artificial milligravity to simulate surface conditions on asteroids, comets and small moons. AOSAT-I takes advantage of low-cost, off-the-shelf components, modular design, and the rapid assembly and instrumentation of the CubeSat standard, to answer fundamental questions in planetary science and reduce cost and risk of future exploration.
NASA Astrophysics Data System (ADS)
Pakula, Anna; Tomczewski, Slawomir; Skalski, Andrzej; Biało, Dionizy; Salbut, Leszek
2010-05-01
This paper presents a novel application of Low Coherence Interferometry (LCI) to measurements of characteristic parameters, such as circular pitch, foot diameter and head diameter, in extremely small cogged wheels (cogged wheel diameter lower than 3 mm and module m = 0.15) produced from metal and ceramics. The most interesting issue concerning small-diameter cogged wheels arises during their production: the characteristic parameters of the wheel depend strongly on the manufacturing process, and when inspecting small-diameter wheels, the shrinkage during casting varies with even slight changes in the fabrication process. In the paper, the LCI Twyman-Green interferometric setup with a pigtailed high-power light-emitting diode for cogged wheel measurement is described. Due to its relatively large field of view, the whole wheel can be examined in one measurement, without the need for numerical stitching. For the measurement of the characteristic parameters of small cogged wheels, a special binarization algorithm was developed and successfully applied. Finally, the head and foot diameters of two cogged wheels measured with the proposed LCI setup are presented and compared with the results obtained with a commercial optical profiler. The results of an examination of the injection moulds used for fabrication of the measured cogged wheels are also presented. Additionally, the shrinkage of the cogged wheels is calculated from the obtained results. The proposed method is suitable for complex measurements of small-diameter, low-module cogged wheels, especially when no measurement standards exist for such objects.
Dynamic response analysis of a 24-story damped steel structure
NASA Astrophysics Data System (ADS)
Feng, Demin; Miyama, Takafumi
2017-10-01
In the Japanese and Chinese building codes, a two-stage design philosophy, damage limitation (small earthquake, Level 1) and life safety (extremely large earthquake, Level 2), is adopted. It is very interesting to compare the design methods for a damped structure based on the two building codes. In the Chinese code, in order to be consistent with the conventional seismic design method, the damped structure is also designed at the small earthquake level. The effect of the damper system is considered through the additional damping ratio concept. The design force is obtained from the damped design spectrum, considering the reduction due to the additional damping ratio. The additional damping ratio provided by the damper system is usually calculated by a time history analysis at the small earthquake level. Velocity-dependent dampers such as viscous dampers can function well even at the small earthquake level. However, if a steel damper is used, which usually remains elastic in a small earthquake, no additional damping ratio is achieved. On the other hand, in Japan a time history analysis is used for both the small earthquake and the extremely large earthquake levels, so the characteristics of the damper system and the ductility of the structure can be modelled well. An existing 24-story steel frame is modified to demonstrate the design process of the damped structure based on the two building codes. Viscous wall dampers and low-yield steel panel dampers are studied as the damper systems.
Development of a Micro-Fabricated Total-Field Magnetometer
2011-03-01
are made with fluxgate technologies. Fluxgates have lower sensitivity than Cs magnetometers, yet they continue to be used in small wands simply ... extraction process by providing the sensitivity of a Cs magnetometer with the convenience and low cost of a fluxgate wand. Extremely small and low cost ... FINAL REPORT: Development of a Micro-Fabricated Total-Field Magnetometer. SERDP Project MR-1512, March 2011. Mark Prouty, Geometrics, Inc.
ERIC Educational Resources Information Center
Wildermuth, Susan M.; French, Tammy; Fredrick, Edward
2013-01-01
This study explores alternative approaches for teaching general education courses burdened with serving extremely large enrollments. It compares the effectiveness of a self-contained course in which each course section is taught by one instructor to a large lecture/small lab format in which all course enrollees attend one large lecture section and…
The Physical Properties of Ceramides in Membranes.
Alonso, Alicia; Goñi, Félix M
2018-05-20
Ceramides are sphingolipids containing a sphingosine or a related base, to which a fatty acid is linked through an amide bond. When incorporated into a lipid bilayer, ceramides exhibit a number of properties not shared by almost any other membrane lipid: Ceramides (a) are extremely hydrophobic and thus cannot exist in suspension in aqueous media; (b) increase the molecular order (rigidity) of phospholipids in membranes; (c) give rise to lateral phase separation and domain formation in phospholipid bilayers; (d) possess a marked intrinsic negative curvature that facilitates formation of inverted hexagonal phases; (e) make bilayers and cell membranes permeable to small and large (i.e., protein-size) solutes; and (f) promote transmembrane (flip-flop) lipid motion. Unfortunately, there is hardly any link between the physical studies reviewed here and the mass of biological and clinical studies on the effects of ceramides in health and disease.
Oils and hydrocarbon source rocks of the Baltic syneclise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kanev, S.; Margulis, L.; Bojesen-Koefoed, J.A.
Prolific source rock horizons of varying thickness, having considerable areal extent, occur over the Baltic syneclise. These source sediments are rich and have excellent petroleum generation potential. Their state of thermal maturity varies from immature in the northeastern part of the syneclise to peak generation maturity in the southwestern part of the region (the main kitchen area). These maturity variations are manifest in petroleum composition in the region. Hence, mature oils occur in the Polish and Kaliningrad areas, immature oils in small accumulations in Latvian and central Lithuanian onshore areas, and intermediate oils in areas between these extremes. The oil accumulations probably result from pooling of petroleum generated from a number of different source rocks at varying levels of thermal maturity. Hence, no single source for petroleum occurrences in the Baltic syneclise may be identified. The paper describes the Baltic syneclise, source rocks, thermal maturity, and oils and extracts.
Artificial intelligent e-learning architecture
NASA Astrophysics Data System (ADS)
Alharbi, Mafawez; Jemmali, Mahdi
2017-03-01
Many institutions and universities have been forced to use e-learning due to its ability to provide additional and flexible solutions for students and researchers. In the last decade, e-learning has brought about extreme changes in the delivery of education, allowing learners to access multimedia course material at any time and from anywhere to suit their specific needs. In e-learning, instructors and learners are located in different places; they do not engage in a classroom environment, but in a virtual one. Many researchers have defined e-learning based on their objectives. Accordingly, a small number of e-learning architectures have been proposed in the literature. However, the proposed architectures lack an embedded intelligent system. This research argues that unexplored potential remains, as there is scope for e-learning to be an intelligent system. This research proposes an e-learning architecture that incorporates an intelligent system, with intelligent components built into the architecture.
Large-scale data analysis of power grid resilience across multiple US service regions
NASA Astrophysics Data System (ADS)
Ji, Chuanyi; Wei, Yun; Mei, Henry; Calzada, Jorge; Carey, Matthew; Church, Steve; Hayes, Timothy; Nugent, Brian; Stella, Gregory; Wallace, Matthew; White, Joe; Wilcox, Robert
2016-05-01
Severe weather events frequently result in large-scale power failures, affecting millions of people for extended durations. However, the lack of comprehensive, detailed failure and recovery data has impeded large-scale resilience studies. Here, we analyse data from four major service regions representing Upstate New York during Super Storm Sandy and daily operations. Using non-stationary spatiotemporal random processes that relate infrastructural failures to recoveries and cost, our data analysis shows that local power failures have a disproportionally large non-local impact on people (that is, the top 20% of failures interrupted 84% of services to customers). A large number (89%) of small failures, represented by the bottom 34% of customers and commonplace devices, resulted in 56% of the total cost of 28 million customer interruption hours. Our study shows that extreme weather does not cause, but rather exacerbates, existing vulnerabilities, which are obscured in daily operations.
Bacterial floc mediated rapid streamer formation in creeping flows
NASA Astrophysics Data System (ADS)
Hassanpourfard, Mahtab; Nikakhtari, Zahra; Ghosh, Ranajay; Das, Siddhartha; Thundat, Thomas; Kumar, Aloke
2015-11-01
One of the contentious problems regarding the interaction of low Reynolds number (Re << 1) fluid flow with bacterial biomass is the formation of filamentous structures called streamers. Recently, we discovered that streamers can be formed from flow-induced deformation of pre-formed bacterial flocs over extremely small timescales (less than a second). However, these streamers are different from the ones mediated by biofilms. To optically probe the inception of streamer formation, bacterial flocs were embedded with 200 nm red fluorescent polystyrene beads that served as tracers. We also showed that at their inception the deformation of the flocs is dominated by large recoverable strains, indicating significant elasticity. These strains subsequently increase tremendously to produce filamentous streamers. At timescales larger than the streamer formation timescale, a viscous response was observed from the streamers. Finally, rapid clogging of microfluidic devices occurred after these streamers formed.
Thermal noise from optical coatings in gravitational wave detectors.
Harry, Gregory M; Armandula, Helena; Black, Eric; Crooks, D R M; Cagnoli, Gianpietro; Hough, Jim; Murray, Peter; Reid, Stuart; Rowan, Sheila; Sneddon, Peter; Fejer, Martin M; Route, Roger; Penn, Steven D
2006-03-01
Gravitational waves are a prediction of Einstein's general theory of relativity. These waves are created by massive objects, like neutron stars or black holes, oscillating at speeds appreciable to the speed of light. The detectable effect on the Earth of these waves is extremely small, however, creating strains of the order of 10⁻²¹. There are a number of basic physics experiments around the world designed to detect these waves by using interferometers with very long arms, up to 4 km in length. The next-generation interferometers are currently being designed, and the thermal noise in the mirrors will set the sensitivity over much of the usable bandwidth. Thermal noise arising from mechanical loss in the optical coatings put on the mirrors will be a significant source of noise. Achieving higher sensitivity through lower mechanical loss coatings, while preserving the crucial optical and thermal properties, is an area of active research right now.
Zhong, Hui; Xu, Fei; Li, Zenghui; Fu, Ruowen; Wu, Dingcai
2013-06-07
A very important yet really challenging issue to address is how to greatly increase the energy density of supercapacitors to approach or even exceed those of batteries without sacrificing the power density. Herein we report the fabrication of a new class of ultrahigh surface area hierarchical porous carbon (UHSA-HPC) based on the pore formation and widening of polystyrene-derived HPC by KOH activation, and highlight its superior ability for energy storage in supercapacitors with ionic liquid (IL) as electrolyte. The UHSA-HPC, with a surface area of more than 3000 m² g⁻¹, shows an extremely high energy density, i.e., 118 W h kg⁻¹ at a power density of 100 W kg⁻¹. This is ascribed to its unique hierarchical nanonetwork structure with a large number of small-sized nanopores for IL storage and an ideal meso-/macroporous network for IL transfer.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wojcik, Roza; Webb, Ian K.; Deng, Liulin
Understanding the biological mechanisms related to lipids and glycolipids is challenging due to the vast number of possible isomers. Mass spectrometry (MS) measurements are currently the dominant approach for studying and providing detailed information on lipid and glycolipid structures. However, difficulties in distinguishing many structural isomers (e.g. distinct acyl chain positions, double bond locations, as well as glycan isomers) inhibit the understanding of their biological roles. Here we utilized ultra-high resolution ion mobility spectrometry (IMS) separations based upon the use of traveling waves in a serpentine long path length multi-pass Structures for Lossless Ion Manipulations (SLIM) platform to enhance isomer resolution. The multi-pass arrangement allowed separations ranging from ~16 m (1 pass) to ~470 m (32 passes) to be investigated for the distinction of lipids and glycolipids with extremely small structural differences. Lastly, these ultra-high resolution SLIM IMS-MS analyses provide a foundation for exploring and better understanding isomer-specific biological and disease processes.
Hereditary spastic paraplegia.
Blackstone, Craig
2018-01-01
The hereditary spastic paraplegias (HSPs) are a heterogeneous group of neurologic disorders with the common feature of prominent lower-extremity spasticity, resulting from a length-dependent axonopathy of corticospinal upper motor neurons. The HSPs exist not only in "pure" forms but also in "complex" forms that are associated with additional neurologic and extraneurologic features. The HSPs are among the most genetically diverse neurologic disorders, with well over 70 distinct genetic loci, for which about 60 mutated genes have already been identified. Numerous studies elucidating the molecular pathogenesis underlying HSPs have highlighted the importance of basic cellular functions - especially membrane trafficking, mitochondrial function, organelle shaping and biogenesis, axon transport, and lipid/cholesterol metabolism - in axon development and maintenance. An encouragingly small number of converging cellular pathogenic themes have been identified for the most common HSPs, and some of these pathways present compelling targets for future therapies. Copyright © 2018 Elsevier B.V. All rights reserved.
The Hopkins Ultraviolet Telescope - Performance and calibration during the Astro-1 mission
NASA Technical Reports Server (NTRS)
Davidsen, Arthur F.; Long, Knox S.; Durrance, Samuel T.; Blair, William P.; Bowers, Charles W.; Conard, Steven J.; Feldman, Paul D.; Ferguson, Henry C.; Fountain, Glen H.; Kimble, Randy A.
1992-01-01
Results are reported of spectrophotometric observations, made with the Hopkins Ultraviolet Telescope (HUT), of 77 astronomical sources throughout the far-UV (912-1850 Å) at a resolution of about 3 Å, and, for a small number of sources, in the extreme UV (415-912 Å) beyond the Lyman limit at a resolution of about 1.5 Å. The HUT instrument and its performance in orbit are described. A HUT observation of the DA white dwarf G191-B2B is presented, and the photometric calibration curve for the instrument is derived from a comparison of the observation with a model stellar atmosphere. The sensitivity reaches a maximum at 1050 Å, where 1 photon cm⁻² s⁻¹ Å⁻¹ yields 9.5 counts s⁻¹ Å⁻¹, and remains within a factor of 2 of this value from 912 to 1600 Å. The instrumental dark count measured on orbit was less than 0.001 counts s⁻¹ Å⁻¹.
Holographic photolysis of caged neurotransmitters
Lutz, Christoph; Otis, Thomas S.; DeSars, Vincent; Charpak, Serge; DiGregorio, David A.; Emiliani, Valentina
2009-01-01
Stimulation of light-sensitive chemical probes has become a powerful tool for the study of dynamic signaling processes in living tissue. Classically, this approach has been constrained by limitations of lens-based and point-scanning illumination systems. Here we describe a novel microscope configuration that incorporates a nematic liquid crystal spatial light modulator (LC-SLM) to generate holographic patterns of illumination. This microscope can produce illumination spots of variable size and number and patterns shaped to precisely match user-defined elements in a specimen. Using holographic illumination to photolyse caged glutamate in brain slices, we demonstrate that shaped excitation on segments of neuronal dendrites and simultaneous, multi-spot excitation of different dendrites enables precise spatial and rapid temporal control of glutamate receptor activation. By allowing the excitation volume shape to be tailored precisely, the holographic microscope provides an extremely flexible method for activation of various photosensitive proteins and small molecules. PMID:19160517
Geometric method for forming periodic orbits in the Lorenz system
NASA Astrophysics Data System (ADS)
Nicholson, S. B.; Kim, Eun-jin
2016-04-01
Many systems in nature are out of equilibrium and irreversible. The non-detailed balance observable representation (NOR) provides a useful methodology for understanding the evolution of such non-equilibrium complex systems, by mapping the correlation between two states to a metric space in which a small distance represents a strong correlation [1]. In this paper, we present the first application of the NOR to a continuous system and demonstrate its utility in controlling chaos. Specifically, we consider the evolution of a continuous system governed by the Lorenz equations and calculate the NOR by following a sufficient number of trajectories. We then show how to control chaos by converting chaotic orbits to periodic orbits using the NOR. We further discuss the implications of our method for potential applications, given the key advantage that this method makes no assumptions about the underlying equations of motion and is thus extremely general.
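For context, the Lorenz system mentioned above is the standard three-variable model; the sketch below simply integrates it to generate the kind of trajectory ensemble from which a correlation-based representation such as the NOR could be estimated. The parameter values and sampling choices are the usual textbook ones, assumed for illustration rather than taken from the paper.

# Minimal sketch: integrate the classic Lorenz equations to produce an ensemble of
# trajectories (sigma, rho, beta are the standard textbook parameters, an assumption).
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(1)
t_eval = np.linspace(0, 50, 5001)
trajectories = [
    solve_ivp(lorenz, (0, 50), rng.normal(0, 1, 3) + [0, 0, 25],
              t_eval=t_eval, rtol=1e-8, atol=1e-10).y.T
    for _ in range(20)                      # "a sufficient number of trajectories"
]
print(trajectories[0].shape)                # (5001, 3): time samples of (x, y, z)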
Meta-analyses of the determinants and outcomes of belief in climate change
NASA Astrophysics Data System (ADS)
Hornsey, Matthew J.; Harris, Emily A.; Bain, Paul G.; Fielding, Kelly S.
2016-06-01
Recent growth in the number of studies examining belief in climate change is a positive development, but presents an ironic challenge in that it can be difficult for academics, practitioners and policy makers to keep pace. As a response to this challenge, we report on a meta-analysis of the correlates of belief in climate change. Twenty-seven variables were examined by synthesizing 25 polls and 171 academic studies across 56 nations. Two broad conclusions emerged. First, many intuitively appealing variables (such as education, sex, subjective knowledge, and experience of extreme weather events) were overshadowed in predictive power by values, ideologies, worldviews and political orientation. Second, climate change beliefs have only a small to moderate effect on the extent to which people are willing to act in climate-friendly ways. Implications for converting sceptics to the climate change cause, and for converting believers' intentions into action, are discussed.
Archaeal Viruses: Diversity, Replication, and Structure.
Dellas, Nikki; Snyder, Jamie C; Bolduc, Benjamin; Young, Mark J
2014-11-01
The Archaea, and their viruses, remain the most enigmatic of life's three domains. Once thought to inhabit only extreme environments, archaea are now known to inhabit diverse environments. Even though the first archaeal virus was described over 40 years ago, only 117 archaeal viruses have been discovered to date. Despite this small number, these viruses have painted a portrait of enormous morphological and genetic diversity. For example, research centered around the various steps of the archaeal virus life cycle has led to the discovery of unique mechanisms employed by archaeal viruses during replication, maturation, and virion release. In many instances, archaeal virus proteins display very low levels of sequence homology to other proteins listed in the public database, and therefore, structural characterization of these proteins has played an integral role in functional assignment. These structural studies have not only provided insights into structure-function relationships but have also identified links between viruses across all three domains of life.
Bahar, Ali Newaz; Waheed, Sajjad
2016-01-01
The fundamental logical element of a quantum-dot cellular automata (QCA) circuit is the majority voter gate (MV). The efficiency of a QCA circuit depends on the efficiency of the MV. This paper presents an efficient single-layer five-input majority voter gate (MV5). The structure of the proposed MV5 is very simple and easy to implement in any logical circuit. The proposed MV5 reduces the number of cells and uses conventional QCA cells. Using the MV5, a multilayer 1-bit full-adder (FA) is designed. The functional accuracy of the proposed MV5 and FA is confirmed with QCADesigner, a well-known QCA layout design and verification tool. Furthermore, the power dissipation of the proposed circuits is estimated, which shows that these circuits dissipate an extremely small amount of energy and are suitable for reversible computing. The simulation outcomes demonstrate the superiority of the proposed circuit.
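To make the logic concrete, the sketch below checks, in plain Python, the majority-gate identities commonly used to build a 1-bit full adder in the QCA literature: Cout = MV3(A, B, Cin) and Sum = MV5(A, B, Cin, ~Cout, ~Cout). This is an assumed, widely cited construction given purely as an illustration, not the authors' exact cell layout.

# Illustrative check of majority-gate full-adder identities (assumed construction,
# not the paper's QCA layout): Cout = MV3(a, b, cin), Sum = MV5(a, b, cin, ~Cout, ~Cout).
from itertools import product

def mv(*bits):
    """Majority voter: output 1 when more than half of the inputs are 1."""
    return int(sum(bits) > len(bits) // 2)

for a, b, cin in product((0, 1), repeat=3):
    cout = mv(a, b, cin)                       # 3-input majority gives the carry-out
    s = mv(a, b, cin, 1 - cout, 1 - cout)      # 5-input majority gives the sum
    assert cout == (a + b + cin) // 2 and s == (a + b + cin) % 2
print("MV3/MV5 full-adder identities hold for all 8 input combinations")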
Designed Strategies for Fluorescence-Based Biosensors for the Detection of Mycotoxins
Sharma, Atul; Khan, Reem; Catanante, Gaelle; Sherazi, Tauqir A.; Bhand, Sunil; Hayat, Akhtar; Marty, Jean Louis
2018-01-01
Small molecule toxins such as mycotoxins with low molecular weight are the most widely studied biological toxins. These biological toxins are responsible for food poisoning and have the potential to be used as biological warfare agents at toxic doses. Due to the poisonous nature of mycotoxins, effective analysis techniques for quantifying their toxicity are indispensable. In this context, biosensors have emerged as a powerful tool to monitor toxins at extremely low levels. Recently, biosensors based on fluorescence detection have attracted special interest with the incorporation of nanomaterials. This review paper will focus on the development of fluorescence-based biosensors for mycotoxin detection, with particular emphasis on their design as well as properties such as sensitivity and specificity. A number of these fluorescent biosensors have shown promising results in food samples for the detection of mycotoxins, suggesting their future potential for food applications. PMID:29751687
Reef corals of Johnston Atoll: one of the world's most isolated reefs
NASA Astrophysics Data System (ADS)
Maragos, James E.; Jokiel, Paul L.
1986-01-01
Johnston Atoll lies 800 km southwest of the nearest reefs of Hawaii and over 1,500 km from other shallow reefs to the south and west. Only 33 species and 16 genera and subgenera of shallow water stony corals have been reported from the atoll. Endemic species are absent despite Johnston's great age and favorable environment. With few exceptions, only species with broad geographic distribution are represented. Factors contributing to the low number of species are remoteness, the atoll's small size, lack of favorable currents to transport larvae from the southwest Pacific, lack of reef “stepping stones” in the region since the Cretaceous, possible defaunation during eustatic sea-level rise and fall, and possible drowning from tectonic subsidence or tilting. The species list shows strongest affinity with that of Hawaii, but some unexpected discontinuities occur. Despite low species diversity, coral coverage is extremely high in most environments.
A fast low-power optical memory based on coupled micro-ring lasers
NASA Astrophysics Data System (ADS)
Hill, Martin T.; Dorren, Harmen J. S.; de Vries, Tjibbe; Leijtens, Xaveer J. M.; den Besten, Jan Hendrik; Smalbrugge, Barry; Oei, Yok-Siang; Binsma, Hans; Khoe, Giok-Djan; Smit, Meint K.
2004-11-01
The increasing speed of fibre-optic-based telecommunications has focused attention on high-speed optical processing of digital information. Complex optical processing requires a high-density, high-speed, low-power optical memory that can be integrated with planar semiconductor technology for buffering of decisions and telecommunication data. Recently, ring lasers with extremely small size and low operating power have been made, and we demonstrate here a memory element constructed by interconnecting these microscopic lasers. Our device occupies an area of 18 × 40 µm² on an InP/InGaAsP photonic integrated circuit, and switches within 20 ps with 5.5 fJ optical switching energy. Simulations show that the element has the potential for much smaller dimensions and switching times. Large numbers of such memory elements can be densely integrated and interconnected on a photonic integrated circuit: fast digital optical information processing systems employing large-scale integration should now be viable.
Environmental Lead Pollution in an Urban Soft-water Area
Beattie, A. D.; Moore, M. R.; Devenay, W. T.; Miller, A. R.; Goldberg, A.
1972-01-01
An investigation has been reported on the clinical and metabolic effects of lead acquired by soft domestic water from lead plumbing systems in 23 Glasgow households. The lead content of water from cold taps was up to 18 times the upper acceptable limit and was proportional to the amount of lead in the plumbing system. The blood lead of 71 inhabitants of these houses showed a significant positive correlation with water lead content. Delta-aminolaevulinic acid dehydrase activity, an extremely sensitive indicator of lead exposure, showed a significant negative correlation with water lead content. Atmospheric lead was within acceptable limits in all but one house and no significant correlation could be found with biochemical measurements. A small number of clinical abnormalities were found but could not be directly attributed to lead toxicity. The results of the study underline the possible danger to health of lead plumbing systems in soft-water regions. PMID:5031207
NASA Technical Reports Server (NTRS)
Polites, M. E.; Carrington, C. K.
1995-01-01
This paper presents a conceptual design for the attitude control and determination (ACAD) system for the Magnetosphere Imager (MI) spacecraft. The MI is a small spin-stabilized spacecraft that has been proposed for launch on a Taurus-S expendable launch vehicle into a highly elliptical polar Earth orbit. Presently, launch is projected for 1999. The paper describes the MI mission and ACAD requirements and then proposes an ACAD system for meeting these requirements. The proposed design is low-power, low-mass, very simple conceptually, highly passive, and consistent with the overall MI design philosophy, which is faster-better-cheaper. Still, the MI ACAD system is extremely robust and can handle a number of unexpected, adverse situations on orbit without impacting the mission as a whole. Simulation results are presented that support the soundness of the design approach.
Fixed target matrix for femtosecond time-resolved and in situ serial micro-crystallography
Mueller, C.; Marx, A.; Epp, S. W.; Zhong, Y.; Kuo, A.; Balo, A. R.; Soman, J.; Schotte, F.; Lemke, H. T.; Owen, R. L.; Pai, E. F.; Pearson, A. R.; Olson, J. S.; Anfinrud, P. A.; Ernst, O. P.; Dwayne Miller, R. J.
2015-01-01
We present a crystallography chip enabling in situ room temperature crystallography at microfocus synchrotron beamlines and X-ray free-electron laser (X-FEL) sources. Compared to other in situ approaches, we observe extremely low background and high diffraction data quality. The chip design is robust and allows fast and efficient loading of thousands of small crystals. The ability to load a large number of protein crystals, at room temperature and with high efficiency, into prescribed positions enables high throughput automated serial crystallography with microfocus synchrotron beamlines. In addition, we demonstrate the application of this chip for femtosecond time-resolved serial crystallography at the Linac Coherent Light Source (LCLS, Menlo Park, California, USA). The chip concept enables multiple images to be acquired from each crystal, allowing differential detection of changes in diffraction intensities in order to obtain high signal-to-noise and fully exploit the time resolution capabilities of XFELs. PMID:26798825
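The benefit of acquiring multiple images per crystal follows from straightforward averaging: if the noise on each frame is independent, the uncertainty on a differential signal averaged over M frame pairs falls roughly as 1/sqrt(M). A small numerical illustration with synthetic intensities (not LCLS data):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 50                       # frame pairs ("pumped" / "unpumped") per crystal
true_change = 2.0            # true pump-induced change in a reflection intensity
noise = 10.0                 # per-frame noise level (arbitrary units)

on  = 100.0 + true_change + rng.normal(0, noise, M)
off = 100.0 + rng.normal(0, noise, M)

single_pair_diff = on[0] - off[0]
averaged_diff    = (on - off).mean()
print(f"single pair  : {single_pair_diff:+.2f}")
print(f"mean of {M}   : {averaged_diff:+.2f}  "
      f"(expected noise ~ {noise * np.sqrt(2 / M):.2f})")
```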
Experimental Resonance Enhanced Multiphoton Ionization (REMPI) studies of small molecules
NASA Technical Reports Server (NTRS)
Dehmer, J. L.; Dehmer, P. M.; Pratt, S. T.; Ohalloran, M. A.; Tomkins, F. S.
1987-01-01
Resonance enhanced multiphoton ionization (REMPI) utilizes tunable dye lasers to ionize an atom or molecule by first preparing an excited state by multiphoton absorption and then ionizing that state before it can decay. This process is highly selective with respect to both the initial and resonant intermediate states of the target, and it can be extremely sensitive. In addition, the products of the REMPI process can be detected as needed by analyzing the resulting electrons, ions, fluorescence, or by additional REMPI. This points to a number of exciting opportunities for both basic and applied science. On the applied side, REMPI has great potential as an ultrasensitive, highly selective detector for trace, reactive, or transient species. On the basic side, REMPI affords an unprecedented means of exploring excited state physics and chemistry at the quantum-state-specific level. An overview of current studies of excited molecular states is given to illustrate the principles and prospects of REMPI.
Where are supercentenarians located? A worldwide demographic study.
Santos-Lozano, Alejandro; Sanchis-Gomar, Fabian; Pareja-Galeano, Helios; Fiuza-Luces, Carmen; Emanuele, Enzo; Lucia, Alejandro; Garatachea, Nuria
2015-02-01
The world population is continuously aging, and centenarians may be considered to be the most successfully aged individuals. Among people who reach extreme longevity (EL; i.e., >95 years), supercentenarians (SCs; aged ≥110 years) represent a subgroup of great scientific interest. Unfortunately, data on the worldwide distribution of SCs remain scarce. Therefore, this study was designed to investigate this issue. Current available data indicate that Japan is the country with the highest number of currently alive SCs. Interestingly, Puerto Rico appears to show the highest prevalence of SCs among people who reach EL (approximately one SC per 10,000 inhabitants aged ≥95 years), although data on this country must be interpreted with caution owing to potential methodological limitations, mainly related to its small population. Our findings highlight the need to investigate in greater detail the genetic and lifestyle background of SCs, with the ultimate goal of unraveling new potential mechanisms underlying human EL.
Designed Strategies for Fluorescence-Based Biosensors for the Detection of Mycotoxins.
Sharma, Atul; Khan, Reem; Catanante, Gaelle; Sherazi, Tauqir A; Bhand, Sunil; Hayat, Akhtar; Marty, Jean Louis
2018-05-11
Small-molecule toxins such as mycotoxins, with their low molecular weight, are the most widely studied biological toxins. These biological toxins are responsible for food poisoning and have the potential to be used as biological warfare agents at toxic doses. Due to the poisonous nature of mycotoxins, effective analysis techniques for quantifying their toxicity are indispensable. In this context, biosensors have emerged as a powerful tool for monitoring toxins at extremely low levels. Recently, biosensors based on fluorescence detection have attracted special interest with the incorporation of nanomaterials. This review paper will focus on the development of fluorescence-based biosensors for mycotoxin detection, with particular emphasis on their design as well as properties such as sensitivity and specificity. A number of these fluorescent biosensors have shown promising results in food samples for the detection of mycotoxins, suggesting their future potential for food applications.
Martyrs' last letters: are they the same as suicide notes?
Leenaars, Antoon A; Ben Park, B C; Collins, Peter I; Wenckstern, Susanne; Leenaars, Lindsey
2010-05-01
Of the 800,000 suicides worldwide every year, a small number fall under Emile Durkheim's term of altruistic suicides. Research on martyrdom has been limited; there has, to date, been no systematic empirical study of martyrs' letters. We examined 33 letters of Korean self-immolators and compared them with 33 suicide notes from a matched sample of more common suicides. An analysis of intrapsychic factors (suicide as unbearable pain, psychopathology) and interpersonal factors (suicide as murderous impulses and need to escape) revealed that, although the same psychological characteristics or dynamics can be used to understand both sets of deaths, the state of mind of the martyrs is more extreme, with the pain reported to be even more unbearable. Yet there are differences, such as the absence of ambivalence in the altruistic notes. It is concluded that intrapsychic and interpersonal characteristics are central to understanding martyrs, probably equal in importance to community or societal factors. More forensic study is, however, warranted.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Ki-Myeong; Weinberg, Erick J.; Physics Department, Columbia University, New York, New York 10027
2009-01-15
We explore the characteristics of spherical bags made of large numbers of BPS magnetic monopoles. There are two extreme limits. In the Abelian bag, N zeros of the Higgs field are arranged in a quasiregular lattice on a sphere of radius R_cr ≈ N/v, where v is the Higgs vacuum expectation value. The massive gauge fields of the theory are largely confined to a thin shell at this radius that separates an interior with almost vanishing magnetic and Higgs fields from an exterior region with long-range Coulomb magnetic and Higgs fields. In the other limiting case, which we term a non-Abelian bag, the N zeros of the Higgs field are all at the origin, but there is again a thin shell of radius R_cr. In this case the region enclosed by this shell can be viewed as a large monopole core, with small Higgs field but nontrivial massive and massless gauge fields.
An interview study of young adults born to mothers with mild intellectual disability.
Lindblad, Ida; Billstedt, Eva; Gillberg, Christopher; Fernell, Elisabeth
2013-12-01
Ten young adults, from an original population-based series of 42 individuals in Sweden born to mothers with mild intellectual disability (ID), were interviewed about their experiences during childhood and adolescence and about their current situation. The interviews revealed that 6 of the 10 individuals had been removed from their biological parents during childhood, 6 reported clear child abuse and/or neglect, and 6 had mild ID themselves. The majority reported difficulties in their relations with family and in school. The small number of participants (n = 10) who could be interviewed in person reflects the major problems encountered in making contact with this group of young individuals. In conclusion, this study adds to other reports that children of mothers with ID constitute an extremely vulnerable group. These families therefore need full and continuous attention from society's support systems.
Fishery survey of U. S. waters of Lake Ontario
Wells, LaRue
1969-01-01
Gill nets and trawls were fished by the Bureau of Commercial Fisheries R/V Cisco during September 19-23, 1964, at several locations and depths in the offshore United States waters of Lake Ontario. Water temperatures were low (3.7-8.3 °C) at all fishing stations except one (16.4 °C). Supplementary data were provided by the Bureau's R/V Kaho in 1966. Alewives and smelt were common. Ciscoes were extremely scarce, but large; most of those caught were bloaters. Slimy sculpins were abundant, but no deepwater sculpins were caught. Yellow perch were scarce. Although the warm-water species were inadequately sampled, trout-perch seemed to be abundant. Other species, all caught in small numbers, were lake trout, spottail shiners, burbot, threespine sticklebacks, and johnny darters from cold water and northern pike, lake chubs, white suckers, white bass, white perch, and rock bass from warm water.
Zhang, Zhifei; Song, Yang; Cui, Haochen; Wu, Jayne; Schwartz, Fernando; Qi, Hairong
2017-09-01
Bucking the trend of big data, small sample sizes are common in microdevice engineering, especially when a device is still at the proof-of-concept stage. Small sample size, small interclass variation, and large intraclass variation bring new challenges to biosignal analysis. Novel representation and classification approaches need to be developed to recognize targets of interest effectively in the absence of a large training set. Moving away from traditional signal analysis in the spatiotemporal domain, we exploit a biosignal representation in the topological domain that reveals the intrinsic structure of point clouds generated from the biosignal. Additionally, we propose a Gaussian-based decision tree (GDT), which can efficiently classify biosignals even when the sample size is extremely small. This study is motivated by the application of mastitis detection using low-voltage alternating current electrokinetics (ACEK), where five categories of biosignals need to be recognized with only two samples in each class. Experimental results demonstrate the robustness of the topological features as well as the advantage of GDT over some conventional classifiers in handling small datasets. Our method reduces the voltage of ACEK to a safe level and still yields high-fidelity results with a short assay time. This paper makes two distinctive contributions to the field of biosignal analysis: performing signal processing in the topological domain and handling extremely small datasets. To date, no related work has efficiently tackled the dilemma between avoiding electrochemical reactions and accelerating the assay process in ACEK.
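The paper's Gaussian-based decision tree (GDT) is its own construction; as a generic stand-in for classifying with only two training samples per class, one can fit a Gaussian per class and feature and score test points by log-likelihood, flooring the variance because two samples barely constrain it. Everything below is an illustrative assumption, not the paper's algorithm or data:

```python
import numpy as np

def fit_class_gaussians(X, y, var_floor=0.25):
    """Per-class, per-feature mean and variance. The variance is floored
    because two samples per class give a very noisy estimate."""
    params = {}
    for label in np.unique(y):
        Xc = X[y == label]
        params[label] = (Xc.mean(axis=0),
                         np.maximum(Xc.var(axis=0), var_floor))
    return params

def predict(params, x):
    """Return the class with the highest summed Gaussian log-likelihood."""
    scores = {label: -0.5 * np.sum(np.log(2 * np.pi * var)
                                   + (x - mu) ** 2 / var)
              for label, (mu, var) in params.items()}
    return max(scores, key=scores.get)

# Five illustrative signal classes, three features each, two training
# samples per class (mimicking the extreme small-sample setting).
rng = np.random.default_rng(1)
centers = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                    [0, 0, 10], [10, 10, 10]], dtype=float)
X_train = np.vstack([c + rng.normal(0, 0.5, size=(2, 3)) for c in centers])
y_train = np.repeat(np.arange(5), 2)

params = fit_class_gaussians(X_train, y_train)
x_test = centers[3] + rng.normal(0, 0.5, size=3)
print("predicted class:", predict(params, x_test))   # expected: 3
```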
Biomedical imaging with THz waves
NASA Astrophysics Data System (ADS)
Nguyen, Andrew
2010-03-01
We discuss biomedical imaging using electromagnetic waves in the terahertz (THz) range between 300 GHz and 3 THz. In particular, we present concepts for two THz imaging systems. One system employs a single antenna, transmitter, and receiver operating over multiple THz frequencies simultaneously for sensing and imaging small areas of the human body or biological samples. The other system consists of multiple antennas, a transmitter, and multiple receivers operating over multiple THz frequencies, capable of simultaneously sensing and imaging the whole body or large biological samples. Using THz waves for biomedical imaging promises unique and substantial medical benefits, including extremely small medical devices, extraordinarily fine spatial resolution, and excellent contrast between images of diseased and healthy tissues. THz imaging is extremely attractive for detecting cancer in its early stages, sensing and imaging tissues near the skin, and studying disease and its growth over time.
Temperature-Related Death and Illness. Chapter 2
NASA Technical Reports Server (NTRS)
Sarofim, Marcus C.; Saha, Shubhayu; Hawkins, Michelle D.; Mills, David M.; Hess, Jeremy; Horton, Radley; Kinney, Patrick; Schwartz, Joel; St. Juliana, Alexis
2016-01-01
Based on present-day sensitivity to heat, an increase of thousands to tens of thousands of premature heat-related deaths in the summer and a decrease of premature cold-related deaths in the winter are projected each year as a result of climate change by the end of the century. Future adaptation will very likely reduce these impacts (see Changing Tolerance to Extreme Heat Finding). The reduction in cold-related deaths is projected to be smaller than the increase in heat-related deaths in most regions. Days that are hotter than usual in the summer or colder than usual in the winter are both associated with increased illness and death. Mortality effects are observed even for small differences from seasonal average temperatures. Because small temperature differences occur much more frequently than large temperature differences, not accounting for the effect of these small differences would lead to underestimating the future impact of climate change. An increase in population tolerance to extreme heat has been observed over time. Changes in this tolerance have been associated with increased use of air conditioning, improved social responses, and/or physiological acclimatization, among other factors. Expected future increases in this tolerance will reduce the projected increase in deaths from heat. Older adults and children have a higher risk of dying or becoming ill due to extreme heat. People working outdoors, the socially isolated and economically disadvantaged, those with chronic illnesses, as well as some communities of color, are also especially vulnerable to death or illness.
An Automatic Detection Method for Extreme-Ultraviolet Dimmings Associated with Small-Scale Eruption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alipour, N.; Safari, H.; Innes, D. E.
2012-02-10
Small-scale extreme-ultraviolet (EUV) dimming often surrounds sites of energy release in the quiet Sun. This paper describes a method for the automatic detection of these small-scale EUV dimmings using a feature-based classifier. The method is demonstrated using sequences of 171 Å images taken by the STEREO/Extreme UltraViolet Imager (EUVI) on 2007 June 13 and by the Solar Dynamics Observatory/Atmospheric Imaging Assembly (AIA) on 2010 August 27. The feature identification relies on recognizing structure in sequences of space-time 171 Å images using the Zernike moments of the images. The Zernike moments of space-time slices with events and non-events are distinctive enough to be separated using a support vector machine (SVM) classifier. The SVM is trained using 150 event and 700 non-event space-time slices. We find a total of 1217 events in the EUVI images and 2064 events in the AIA images on the days studied. Most of the events are found between latitudes -35° and +35°. The sizes and expansion speeds of the central dimming regions are extracted using a region-grow algorithm. The histograms of the sizes in both EUVI and AIA follow a steep power law with a slope of about -5. The AIA slope extends to smaller sizes before turning over. The mean velocity of 1325 dimming regions seen by AIA is found to be about 14 km s⁻¹.
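The detection pipeline reduces to computing Zernike moments of each space-time slice and letting a trained support vector machine separate event from non-event slices. A minimal sketch of that pattern is shown below; the random arrays stand in for labelled 171 Å slices, and the use of the mahotas and scikit-learn libraries is an assumption, not what the authors used:

```python
import numpy as np
import mahotas.features
from sklearn.svm import SVC

def zernike_features(slice_img, radius=32, degree=8):
    """Zernike moment magnitudes of one space-time slice (2-D array)."""
    return mahotas.features.zernike_moments(slice_img, radius, degree=degree)

# Hypothetical labelled training slices: in practice these would be cut-outs
# from the 171 A image sequences; random arrays stand in here.
rng = np.random.default_rng(0)
event_slices = [rng.random((64, 64)) for _ in range(150)]
quiet_slices = [rng.random((64, 64)) for _ in range(700)]

X = np.array([zernike_features(s) for s in event_slices + quiet_slices])
y = np.array([1] * len(event_slices) + [0] * len(quiet_slices))

clf = SVC(kernel="rbf", class_weight="balanced").fit(X, y)  # unbalanced classes

new_slice = rng.random((64, 64))        # an unlabelled space-time slice
label = clf.predict([zernike_features(new_slice)])[0]
print("dimming event" if label else "no event")
```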