Extreme Material Physical Properties and Measurements above 100 tesla
NASA Astrophysics Data System (ADS)
Mielke, Charles
2011-03-01
The National High Magnetic Field Laboratory (NHMFL) Pulsed Field Facility (PFF) at Los Alamos National Laboratory (LANL) offers extreme environments of ultrahigh magnetic fields above 100 tesla by use of the single-turn method, as well as fields approaching 100 tesla with more complex methods. The challenge of metrology in these extreme field-generating devices is complicated by the millions of amperes of current and tens of thousands of volts required to deliver the pulsed power needed for field generation. Methods of detecting physical properties of materials are essential parts of the science that seeks to understand, and eventually control, the fundamental functionality of materials in extreme environments. De-coupling the sample signal from the electromagnetic interference associated with the magnet system is required to make these state-of-the-art magnetic fields useful to scientists studying materials in high magnetic fields. The cutting-edge methods now in use, as well as methods in development, will be presented together with their challenges and recent results on graphene and high-Tc superconductors. National Science Foundation DMR-Award 0654118.
Potential Use of Agile Methods in Selected DoD Acquisitions: Requirements Development and Management
2014-04-01
understanding of common Agile methods, particularly Scrum and eXtreme Programming. For those unfamiliar with the basics of Agile development, the ... Scrum (namely, the concepts of product owner, product backlog and self-organized teams) and eXtreme Programming (epics and user stories). These concepts ... also been adopted as a requirements specification mechanism by many teams using Scrum, even if those teams don’t use other aspects of eXtreme
Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall
NASA Astrophysics Data System (ADS)
Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik
2016-02-01
Rainfall is one of the climatic elements with high variability, and extreme rainfall in particular has many negative impacts. Methods are therefore required to minimize the damage that may occur. So far, global circulation models (GCMs) are the best tools for forecasting global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique for developing the relationship between GCM output as global-scale independent variables and rainfall as a local-scale response variable. Using GCM output directly is difficult when assessed against observations because it is high-dimensional and exhibits multicollinearity between variables. Common methods used to handle this problem are principal component analysis (PCA) and partial least squares regression. A newer method that can be used is the lasso. The lasso has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall at both the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (wet extremes in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
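As a rough illustration of the modeling approach described in this abstract, the sketch below fits a lasso-penalized (L1-regularized) quantile regression at the 90th quantile with scikit-learn; the GCM predictor matrix and local rainfall series are synthetic stand-ins, not the study's data.

```python
# Minimal sketch (not the authors' code): lasso-penalized quantile regression
# at the 90th quantile as a stand-in for the statistical-downscaling step.
# `X_gcm` and `y_rain` are hypothetical.
import numpy as np
from sklearn.linear_model import QuantileRegressor

rng = np.random.default_rng(0)
X_gcm = rng.normal(size=(360, 50))            # e.g. monthly GCM grid values (high-dimensional)
y_rain = 5 + X_gcm[:, 0] - 0.5 * X_gcm[:, 1] + rng.gumbel(size=360)  # local rainfall

# alpha > 0 applies an L1 (lasso) penalty, shrinking most coefficients to zero
model = QuantileRegressor(quantile=0.90, alpha=0.1)
model.fit(X_gcm, y_rain)

selected = np.flatnonzero(model.coef_)        # predictors retained by the lasso
print("retained predictors:", selected)
print("estimated 90th-quantile rainfall (first obs):", model.predict(X_gcm[:1])[0])
```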
Data informatics for the Detection, Characterization, and Attribution of Climate Extremes
NASA Astrophysics Data System (ADS)
Collins, W.; Wehner, M. F.; O'Brien, T. A.; Paciorek, C. J.; Krishnan, H.; Johnson, J. N.; Prabhat, M.
2015-12-01
The potential for increasing frequency and intensity of extreme phenomena including downpours, heat waves, and tropical cyclones constitutes one of the primary risks of climate change for society and the environment. The challenge of characterizing these risks is that extremes represent the "tails" of distributions of atmospheric phenomena and are, by definition, highly localized and typically relatively transient. Therefore very large volumes of observational data and projections of future climate are required to quantify their properties in a robust manner. Massive data analytics are required in order to detect individual extremes, accumulate statistics on their properties, quantify how these statistics are changing with time, and attribute the effects of anthropogenic global warming on these statistics. We describe examples of the suite of techniques the climate community is developing to address these analytical challenges. The techniques include massively parallel methods for detecting and tracking atmospheric rivers and cyclones; data-intensive extensions to generalized extreme value theory to summarize the properties of extremes; and multi-model ensembles of hindcasts to quantify the attributable risk of anthropogenic influence on individual extremes. We conclude by highlighting examples of these methods developed by our CASCADE (Calibrated and Systematic Characterization, Attribution, and Detection of Extremes) project.
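The "generalized extreme value" ingredient mentioned above can be illustrated with a few lines of SciPy; the annual-maximum series below is synthetic and the 20-year return level is only a toy calculation, not a CASCADE result.

```python
# Minimal sketch: fit a GEV distribution to synthetic annual maxima and read off
# a 20-year return level. This is illustrative, not the CASCADE codebase.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
annual_max = genextreme.rvs(c=-0.1, loc=40.0, scale=10.0, size=60, random_state=rng)

shape, loc, scale = genextreme.fit(annual_max)           # maximum-likelihood fit
return_level_20yr = genextreme.isf(1.0 / 20.0, shape, loc, scale)
print(f"20-year return level: {return_level_20yr:.1f} mm/day")
```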
Extremal Optimization: Methods Derived from Co-Evolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.G.
1999-07-13
We describe a general-purpose method for finding high-quality solutions to hard optimization problems, inspired by self-organized critical models of co-evolution such as the Bak-Sneppen model. The method, called Extremal Optimization, successively eliminates extremely undesirable components of sub-optimal solutions, rather than "breeding" better components. In contrast to Genetic Algorithms, which operate on an entire "gene-pool" of possible solutions, Extremal Optimization improves on a single candidate solution by treating each of its components as species co-evolving according to Darwinian principles. Unlike Simulated Annealing, its non-equilibrium approach effects an algorithm requiring few parameters to tune. With only one adjustable parameter, its performance proves competitive with, and often superior to, more elaborate stochastic optimization procedures. We demonstrate it here on two classic hard optimization problems: graph partitioning and the traveling salesman problem.
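A toy sketch of the basic Extremal Optimization loop on balanced graph bipartitioning is given below; it is not the authors' implementation, and the graph, fitness definition, and step count are illustrative assumptions.

```python
# Toy sketch of basic Extremal Optimization for balanced graph bipartitioning:
# repeatedly replace the worst-fitness node ("species"), keeping the best cut seen.
import random

def extremal_optimization(edges, n_nodes, n_steps=2000, seed=0):
    rng = random.Random(seed)
    adj = {i: [] for i in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    # start from a random balanced partition: part[i] in {0, 1}
    part = [0] * (n_nodes // 2) + [1] * (n_nodes - n_nodes // 2)
    rng.shuffle(part)

    def cut_size(p):
        return sum(p[u] != p[v] for u, v in edges)

    best_part, best_cut = part[:], cut_size(part)
    for _ in range(n_steps):
        # fitness of node i = fraction of its edges kept inside its own block
        fitness = [
            (sum(part[i] == part[j] for j in adj[i]) / len(adj[i]) if adj[i] else 1.0, i)
            for i in range(n_nodes)
        ]
        worst = min(fitness)[1]                      # the extremely undesirable component
        partner = rng.choice([i for i in range(n_nodes) if part[i] != part[worst]])
        part[worst], part[partner] = part[partner], part[worst]   # swap keeps balance
        cut = cut_size(part)
        if cut < best_cut:
            best_cut, best_part = cut, part[:]
    return best_part, best_cut

# tiny example graph
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4), (0, 4)]
print(extremal_optimization(edges, n_nodes=8))
```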
Dadaci, Mehmet; Altuntas, Zeynep
2016-01-01
Background Although the use of temporary shunts in proximal extremity amputations has been reported, no study has described the use of temporary shunts in distal extremity amputations that require vein grafting. Moreover, the total volume of blood loss when temporary shunts are used has not been reported. The aim of this study was to investigate the applicability of a temporary shunt for distal extremity amputations requiring repair by vessel grafting with an ischemia time of >6 hours. This study also aimed to determine the total volume of blood loss when temporary shunts were used. Methods Patients who underwent distal major extremity replantation and/or revascularization with a vessel graft and who experienced ischemia for 6–8 hours between 2013 and 2014 were included in the study. A 6-Fr suction catheter was cut to 5 cm in length after the infusion of heparin, and secured with a 5-0 silk suture between the distal and the proximal ends of the artery. While bleeding continued, the bones were shortened and fixed. After the complete restoration of circulation, the arterial shunt created using the catheter was also repaired with a vein graft. Results Six patients were included in this study. The mean duration of ischemia was 7.25 hours. The mean duration of suction catheter use during limb revascularization was 7 minutes. The mean transfusion volume was 7.5 units. No losses of the extremity were observed. Conclusions This procedure should be considered in distal extremity amputations requiring repair by vessel grafting during critical ischemia. PMID:27896186
A method for Removing Surface Contamination on Ultra-pure Copper Spectrometer Components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoppe, Eric W.; Seifert, Allen; Aalseth, Craig E.
Spectrometers for the lowest-level radiometric measurements require materials of extreme radiopurity. Measurements of rare nuclear decays, e.g. neutrinoless double-beta decay, can require construction and shielding materials with bulk radiopurity reaching one micro-Becquerel per kilogram or less. When such extreme material purity is achieved, surface contamination, particularly solid daughters in the natural radon decay chains, can become the limiting background. High-purity copper is an important material for ultra-low-background spectrometers and thus is the focus of this work. A method for removing surface contamination at very low levels without attacking the bulk material is described. An assay method using a low-background proportional counter made of the material under examination is employed, and the resulting preliminary result of achievable surface contamination levels is presented.
Invariant Imbedded T-Matrix Method for Axial Symmetric Hydrometeors with Extreme Aspect Ratios
NASA Technical Reports Server (NTRS)
Pelissier, Craig; Kuo, Kwo-Sen; Clune, Thomas; Adams, Ian; Munchak, Stephen
2017-01-01
The single-scattering properties (SSPs) of hydrometeors are the fundamental quantities for physics-based precipitation retrievals. Thus, efficient computation of their electromagnetic scattering is of great value. Whereas the semi-analytical T-Matrix methods are likely the most efficient for nonspherical hydrometeors with axial symmetry, they are not suitable for arbitrarily shaped hydrometeors lacking any significant symmetry, for which volume integral methods such as those based on the Discrete Dipole Approximation (DDA) are required. Currently the two leading T-matrix methods are the Extended Boundary Condition Method (EBCM) and the Invariant Imbedding T-matrix Method incorporating Lorentz-Mie Separation of Variables (IITM+SOV). EBCM is known to outperform IITM+SOV for hydrometeors with modest aspect ratios. However, in cases when aspect ratios become extreme, such as needle-like particles with large height-to-diameter values, EBCM fails to converge. Such hydrometeors with extreme aspect ratios are known to be present in solid precipitation, and their SSPs are required to model the radiative responses accurately. In these cases, IITM+SOV is shown to converge. An efficient, parallelized C++ implementation of both EBCM and IITM+SOV has been developed to conduct a performance comparison between EBCM, IITM+SOV, and DDSCAT (a popular implementation of DDA). We present the comparison results and discuss details. Our intent is to release the combined EBCM and IITM+SOV software to the community under an open source license.
Acidic Ribosomal Proteins from the Extreme ’Halobacterium cutirubrum’,
the extreme halophilic bacterium, Halobacterium cutirubrum. The identification of the protein moieties involved in these and other interactions in ... the halophile ribosome requires a rapid and reproducible screening method for the separation, enumeration and identification of these acidic ... polypeptides in the complex ribosomal protein mixtures. In this paper the authors present the results of analyses of the halophile ribosomal proteins using a
Inflationary dynamics for matrix eigenvalue problems
Heller, Eric J.; Kaplan, Lev; Pollmann, Frank
2008-01-01
Many fields of science and engineering require finding eigenvalues and eigenvectors of large matrices. The solutions can represent oscillatory modes of a bridge, a violin, the disposition of electrons around an atom or molecule, the acoustic modes of a concert hall, or hundreds of other physical quantities. Often only the few eigenpairs with the lowest or highest frequency (extremal solutions) are needed. Methods that have been developed over the past 60 years to solve such problems include the Lanczos algorithm, Jacobi–Davidson techniques, and the conjugate gradient method. Here, we present a way to solve the extremal eigenvalue/eigenvector problem, turning it into a nonlinear classical mechanical system with a modified Lagrangian constraint. The constraint induces exponential inflationary growth of the desired extremal solutions. PMID:18511564
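For context, the standard Lanczos-type route mentioned above (computing only a few extremal eigenpairs of a large sparse symmetric matrix) looks roughly like the SciPy sketch below; the matrix is an arbitrary random example, and this is not the paper's inflationary-dynamics method.

```python
# Minimal sketch: extremal eigenpairs of a large sparse symmetric matrix via a
# Lanczos-type solver (scipy's eigsh). The matrix here is a random placeholder.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

rng = np.random.default_rng(2)
A = sp.random(2000, 2000, density=0.001, random_state=rng)
A = (A + A.T) * 0.5                              # symmetrize so eigsh applies

vals_low, vecs_low = eigsh(A, k=3, which="SA")   # 3 smallest-algebraic eigenvalues
vals_high, _ = eigsh(A, k=3, which="LA")         # 3 largest
print("lowest eigenvalues:", vals_low)
print("highest eigenvalues:", vals_high)
```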
Optimizing Illumina next-generation sequencing library preparation for extremely AT-biased genomes.
Oyola, Samuel O; Otto, Thomas D; Gu, Yong; Maslen, Gareth; Manske, Magnus; Campino, Susana; Turner, Daniel J; Macinnis, Bronwyn; Kwiatkowski, Dominic P; Swerdlow, Harold P; Quail, Michael A
2012-01-03
Massively parallel sequencing technology is revolutionizing approaches to genomic and genetic research. Since its advent, the scale and efficiency of Next-Generation Sequencing (NGS) have rapidly improved. In spite of this success, sequencing genomes or genomic regions with extremely biased base composition is still a great challenge to the currently available NGS platforms. The genomes of some important pathogenic organisms like Plasmodium falciparum (high AT content) and Mycobacterium tuberculosis (high GC content) display extremes of base composition. The standard library preparation procedures that employ PCR amplification have been shown to cause uneven read coverage, particularly across AT- and GC-rich regions, leading to problems in genome assembly and variation analyses. Alternative library-preparation approaches that omit PCR amplification require large quantities of starting material and hence are not suitable for small amounts of DNA/RNA such as those from clinical isolates. We have developed and optimized library-preparation procedures suitable for low-quantity starting material and tolerant to extremely high AT content sequences. We have used our optimized conditions in parallel with standard methods to prepare Illumina sequencing libraries from a non-clinical and a clinical isolate (containing ~53% host contamination). By analyzing and comparing the quality of sequence data generated, we show that our optimized conditions, which involve a PCR additive (TMAC), produce amplified libraries with improved coverage of extremely AT-rich regions and reduced bias toward GC-neutral templates. We have developed a robust and optimized Next-Generation Sequencing library amplification method suitable for extremely AT-rich genomes. The new amplification conditions significantly reduce bias and retain the complexity of either extreme of base composition. This development will greatly benefit sequencing clinical samples that often require amplification due to low mass of DNA starting material.
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
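A minimal sketch of the simple Monte Carlo step described above might look as follows, assuming a hypothetical one-line model output function and invented parameter ranges; it is not the authors' ground-water code.

```python
# Minimal sketch: sample parameters within assumed extreme ranges, push them
# through a stand-in model output function, and take empirical quantiles as
# confidence/prediction interval endpoints. All names and ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def head_at_well(k, recharge):               # stand-in for a ground-water model output
    return recharge / k * 100.0

n = 10_000
k = rng.uniform(1.0, 10.0, size=n)           # hydraulic conductivity within its extreme range
recharge = rng.uniform(0.1, 0.5, size=n)     # recharge within its extreme range
outputs = head_at_well(k, recharge)

conf_lo, conf_hi = np.percentile(outputs, [2.5, 97.5])      # ~95% confidence interval
obs_error = rng.normal(0.0, 1.0, size=n)                    # random error in dependent variable
pred_lo, pred_hi = np.percentile(outputs + obs_error, [2.5, 97.5])  # prediction interval
print(conf_lo, conf_hi, pred_lo, pred_hi)
```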
Generation of Boundary Manikin Anthropometry
NASA Technical Reports Server (NTRS)
Young, Karen S.; Margerum, Sarah; Barr, Abbe; Ferrer, Mike A.; Rajulu, Sudhakar
2008-01-01
The purpose of this study was to develop 3D digital boundary manikins that are representative of the anthropometry of a unique population. These digital manikins can be used by designers to verify and validate that the components of the spacesuit design satisfy the requirements specified in the Human Systems Integration Requirements (HSIR) document. Currently, the HSIR requires the suit to accommodate the 1st percentile American female to the 99th percentile American male. The manikin anthropometry was derived using two methods: Principal Component Analysis (PCA) and Whole Body Posture Based Analysis (WBPBA). PCA is a statistical method for reducing a multidimensional data set by using eigenvectors and eigenvalues. The goal is to create a reduced data set that encapsulates the majority of the variation in the population. WBPBA is a multivariate analytical approach that was developed by the Anthropometry and Biomechanics Facility (ABF) to identify the extremes of the population for a given body posture. WBPBA is a simulation-based method that finds extremes in a population based on anthropometry and posture, whereas PCA is based solely on anthropometry. Both methods yield a list of subjects and their anthropometry from the target population; PCA resulted in 20 female and 22 male subjects' anthropometry, and WBPBA resulted in 7 subjects' anthropometry representing the extreme subjects in the target population. The subjects' anthropometry is then used to 'morph' a baseline digital scan of a person with the same body type to create a 3D digital model that can be used as a tool for designers, the details of which will be discussed in subsequent papers.
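A rough sketch of the PCA portion of such an analysis is shown below, using a hypothetical anthropometry table and an ad hoc rule for picking boundary subjects; it is not the ABF's tool and omits the WBPBA step entirely.

```python
# Minimal sketch: reduce a hypothetical anthropometry table with PCA and flag
# "boundary" subjects lying at the extremes of the reduced space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
# hypothetical anthropometry: rows = subjects, columns = body measurements (cm)
measurements = rng.normal(loc=[170, 45, 90, 60], scale=[8, 3, 6, 4], size=(500, 4))

pca = PCA(n_components=2)                     # keep components explaining most variation
scores = pca.fit_transform(measurements)
print("variance explained:", pca.explained_variance_ratio_)

# boundary subjects: farthest from the population centroid in PC space
dist = np.linalg.norm(scores, axis=1)
boundary_ids = np.argsort(dist)[-10:]
print("candidate boundary subjects:", boundary_ids)
```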
Bidirectional extreme learning machine for regression problem and its learning effectiveness.
Yang, Yimin; Wang, Yaonan; Yuan, Xiaofang
2012-09-01
It is clear that the learning effectiveness and learning speed of neural networks are in general far slower than required, which has been a major bottleneck for many applications. Recently, a simple and efficient learning method, referred to as the extreme learning machine (ELM), was proposed by Huang, which has shown that, compared to some conventional methods, the training time of neural networks can be reduced by a thousand times. However, one of the open problems in ELM research is whether the number of hidden nodes can be further reduced without affecting learning effectiveness. This brief proposes a new learning algorithm, called the bidirectional extreme learning machine (B-ELM), in which some hidden nodes are not randomly selected. In theory, this algorithm tends to reduce network output error to 0 at an extremely early learning stage. Furthermore, we find a relationship between the network output error and the network output weights in the proposed B-ELM. Simulation results demonstrate that the proposed method can be tens to hundreds of times faster than other incremental ELM algorithms.
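For orientation, a standard ELM regressor (random hidden layer, least-squares output weights) can be written in a few lines, as in the sketch below; the B-ELM's non-random selection of some hidden nodes is not reproduced here, and the data and layer size are made up.

```python
# Minimal sketch of a *standard* ELM regressor, for context only (not B-ELM):
# random input weights/biases, sigmoid hidden layer, output weights by pseudoinverse.
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))       # random input weights
b = rng.normal(size=n_hidden)                     # random biases

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))     # sigmoid hidden-layer output H

beta = np.linalg.pinv(hidden(X)) @ y              # output weights via Moore-Penrose pseudoinverse
y_hat = hidden(X) @ beta
print("training RMSE:", np.sqrt(np.mean((y - y_hat) ** 2)))
```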
Cazelle, Elodie; Eskes, Chantra; Hermann, Martina; Jones, Penny; McNamee, Pauline; Prinsen, Menk; Taylor, Hannah; Wijnands, Marcel V W
2015-04-01
A.I.S.E. investigated the suitability of the regulatory-adopted ICE in vitro test method (OECD TG 438), with or without histopathology, to identify detergent and cleaning formulations having extreme pH that require classification as EU CLP/UN GHS Category 1. To this aim, 18 extreme pH detergent and cleaning formulations were tested covering both alkaline and acidic extreme pHs. The ICE standard test method following OECD Test Guideline 438 showed good concordance with in vivo classification (83%) and good and balanced specificity and sensitivity values (83%), which are in line with the performances of currently adopted in vitro test guidelines, confirming its suitability to identify Category 1 extreme pH detergent and cleaning products. In contrast to previous findings obtained with non-extreme pH formulations, the use of histopathology did not improve the sensitivity of the assay whilst it strongly decreased its specificity for the extreme pH formulations. Furthermore, use of non-testing prediction rules for classification showed poor concordance values (33% for the extreme pH rule and 61% for the EU CLP additivity approach) with high rates of over-prediction (100% for the extreme pH rule and 50% for the additivity approach), indicating that these non-testing prediction rules are not suitable to predict Category 1 hazards of extreme pH detergent and cleaning formulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Haruki, W.; Iseri, Y.; Takegawa, S.; Sasaki, O.; Yoshikawa, S.; Kanae, S.
2016-12-01
Natural disasters caused by heavy rainfall occur every year in Japan, and effective countermeasures against such events are important. In 2015, a catastrophic flood occurred in the Kinu river basin, which is located in the northern part of the Kanto region. The remarkable feature of this flood event was not only the intensity of the rainfall but also the spatial characteristics of the heavy rainfall area: the flood was caused by continuous overlapping of the heavy rainfall area over the Kinu river basin, suggesting that consideration of spatial extent is quite important when assessing the impacts of heavy rainfall events. However, the spatial extent of heavy rainfall events cannot be properly measured by rain gauges at point locations. On the other hand, radar observations provide spatially and temporally high-resolution rainfall data, which are useful for capturing the characteristics of heavy rainfall events. For effective long-term countermeasures, extreme heavy rainfall scenarios that account for rainfall area and distribution are required. In this study, a new method for generating extreme heavy rainfall events using Monte Carlo simulation has been developed in order to produce such scenarios. This study used AMeDAS analyzed precipitation data, a high-resolution gridded precipitation product produced by the Japan Meteorological Agency. Depth-area-duration (DAD) analysis has been conducted to extract past extreme rainfall events, considering both time and spatial scale. In the Monte Carlo simulation, extreme rainfall events are generated based on the events extracted by the DAD analysis. Extreme heavy rainfall events are generated for a specific region in Japan, and the types of generated events can be changed by varying the parameters. For application of this method, we focused on the Kanto region of Japan. As a result, 3000 years of rainfall data are generated, and the 100-year probable rainfall and the return period of the 2015 flood in the Kinu River Basin are obtained from the generated data. We compared the 100-year probable rainfall calculated by this method with that from other traditional methods. The newly developed method enables us to generate extreme rainfall events considering time and spatial scale and to produce extreme rainfall scenarios.
NASA Astrophysics Data System (ADS)
Nursamsiah; Nugroho Sugianto, Denny; Suprijanto, Jusup; Munasik; Yulianto, Bambang
2018-02-01
Information on extreme wave height return levels is required for maritime planning and management. The recommended approach for analyzing extreme waves is the Generalized Pareto Distribution (GPD). Seasonal variation is often considered in extreme wave models. This research aims to identify the best GPD model by considering the seasonal variation of extreme waves. Using the 95th percentile as the threshold for extreme significant wave height, seasonal and non-seasonal GPD models were fitted. The Kolmogorov-Smirnov test was applied to assess the goodness of fit of the GPD models. The return values from the seasonal and non-seasonal GPD were compared using the definition of the return value as the criterion. The Kolmogorov-Smirnov test result shows that the GPD fits the data very well for both the seasonal and non-seasonal models. The seasonal return value gives better information about the wave height characteristics.
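The peaks-over-threshold step described above can be sketched as follows with SciPy, using a synthetic significant-wave-height series, a 95th-percentile threshold, and a Kolmogorov-Smirnov check; this is an illustration, not the study's seasonal model.

```python
# Minimal sketch: 95th-percentile threshold, GPD fit to the excesses, and a
# Kolmogorov-Smirnov goodness-of-fit check. The wave height series is synthetic.
import numpy as np
from scipy.stats import genpareto, kstest

rng = np.random.default_rng(5)
hs = rng.weibull(1.5, size=5000) * 1.2             # hypothetical significant wave heights (m)

u = np.percentile(hs, 95)                          # threshold: 95th percentile
excess = hs[hs > u] - u
shape, loc, scale = genpareto.fit(excess, floc=0)  # fix location at 0 for threshold excesses

stat, p_value = kstest(excess, genpareto(shape, loc, scale).cdf)
print(f"threshold={u:.2f} m, GPD shape={shape:.3f}, scale={scale:.3f}, KS p={p_value:.3f}")
```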
Quantifying the consequences of changing hydroclimatic extremes on protection levels for the Rhine
NASA Astrophysics Data System (ADS)
Sperna Weiland, Frederiek; Hegnauer, Mark; Buiteveld, Hendrik; Lammersen, Rita; van den Boogaard, Henk; Beersma, Jules
2017-04-01
The Dutch method for quantifying the magnitude and frequency of occurrence of discharge extremes in the Rhine basin and the potential influence of climate change hereon are presented. In the Netherlands flood protection design requires estimates of discharge extremes for return periods of 1000 up to 100,000 years. Observed discharge records are too short to derive such extreme return discharges, therefore extreme value assessment is based on very long synthetic discharge time-series generated with the Generator of Rainfall And Discharge Extremes (GRADE). The GRADE instrument consists of (1) a stochastic weather generator based on time series resampling of historical rainfall and temperature, (2) a hydrological model optimized following the GLUE methodology, and (3) a hydrodynamic model to simulate the propagation of flood waves based on the generated hydrological time-series. To assess the potential influence of climate change, the four KNMI'14 climate scenarios are applied. These four scenarios represent a large part of the uncertainty provided by the GCMs used for the IPCC 5th assessment report (the CMIP5 GCM simulations under different climate forcings) and are for this purpose tailored to the Rhine and Meuse river basins. To derive the probability distributions of extreme discharges under climate change the historical synthetic rainfall and temperature series simulated with the weather generator are transformed to the future following the KNMI'14 scenarios. For this transformation the Advanced Delta Change method, which allows that the changes in the extremes differ from those in the means, is used. Subsequently the hydrological model is forced with the historical and future (i.e. transformed) synthetic time-series after which the propagation of the flood waves is simulated with the hydrodynamic model to obtain the extreme discharge statistics both for current and future climate conditions. The study shows that both for 2050 and 2085 increases in discharge extremes for the river Rhine at Lobith are projected by all four KNMI'14 climate scenarios. This poses increased requirements for flood protection design in order to prepare for changing climate conditions.
NASA Astrophysics Data System (ADS)
Yin, Shui-qing; Wang, Zhonglei; Zhu, Zhengyuan; Zou, Xu-kai; Wang, Wen-ting
2018-07-01
Extreme precipitation can cause flooding and may result in great economic losses and deaths. The return level is a commonly used measure of extreme precipitation events and is required for hydrological engineering designs, including those of sewerage systems, dams, reservoirs and bridges. In this paper, we propose a two-step method to estimate the return level and its uncertainty for a study region. In the first step, we use the generalized extreme value distribution, the L-moment method and the stationary bootstrap to estimate the return level and its uncertainty at sites with observations. In the second step, a spatial model incorporating the heterogeneous measurement errors and covariates is trained to estimate return levels at sites with no observations and to improve the estimates at sites with limited information. The proposed method is applied to the daily rainfall data from 273 weather stations in the Haihe river basin of North China. We compare the proposed method with two alternatives: the first is based on the ordinary Kriging method without measurement error, and the second smooths the estimated location and scale parameters of the generalized extreme value distribution by the universal Kriging method. Results show that the proposed method outperforms its counterparts. We also propose a novel approach to assess the two-step method by comparing it with the at-site estimation method using a series of reduced observation lengths. Estimates of the 2-, 5-, 10-, 20-, 50- and 100-year return level maps and the corresponding uncertainties are provided for the Haihe river basin, and a comparison with those released by the Hydrology Bureau of the Ministry of Water Resources of China is made.
Li, Yuancheng; Qiu, Rixuan; Jing, Sitong
2018-01-01
Advanced Metering Infrastructure (AMI), as a core component of the smart grid, realizes two-way communication of electricity data by interconnecting with a computer network. Meanwhile, it brings many new security threats, and traditional intrusion detection methods can't satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequential Extreme Learning Machine (OS-ELM) is established, which is used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy.
Simulations of nearly extremal binary black holes
NASA Astrophysics Data System (ADS)
Giesler, Matthew; Scheel, Mark; Hemberger, Daniel; Lovelace, Geoffrey; Kuper, Kevin; Boyle, Michael; Szilagyi, Bela; Kidder, Lawrence; SXS Collaboration
2015-04-01
Astrophysical black holes could have nearly extremal spins; therefore, nearly extremal black holes could be among the binaries that current and future gravitational-wave observatories will detect. Predicting the gravitational waves emitted by merging black holes requires numerical-relativity simulations, but these simulations are especially challenging when one or both holes have mass m and spin S exceeding the Bowen-York limit of S/m^2 = 0.93. Using improved methods we simulate an unequal-mass, precessing binary black hole coalescence, where the larger black hole has S/m^2 = 0.99. We also use these methods to simulate a nearly extremal non-precessing binary black hole coalescence, where both black holes have S/m^2 = 0.994, nearly reaching the Novikov-Thorne upper bound for holes spun up by thin accretion disks. We demonstrate numerical convergence and estimate the numerical errors of the waveforms; we compare numerical waveforms from our simulations with post-Newtonian and effective-one-body waveforms; and we compare the evolution of the black-hole masses and spins with analytic predictions.
Management of venomous snakebite injury to the extremities.
Anz, Adam W; Schweppe, Mark; Halvorson, Jason; Bushnell, Brandon; Sternberg, Michael; Andrew Koman, L
2010-12-01
Pit vipers (subfamily Crotalinae) are responsible for most venomous snakebites in the United States. The mixture of proteins with cytotoxic, proteolytic, and/or neurotoxic enzymes in snake venom varies by species. Treatment in the field consists of safe identification of the species of snake and rapid transport of the patient to the nearest health care facility. Swelling, bruising, and systemic symptoms are seen following snakebite. Most patients respond to elevation of the affected extremity and observation. Some require the administration of antivenin. Crotalidae Polyvalent Immune Fab (Ovine) (CroFab, BTG International, West Conshohocken, PA) antivenin is safe and effective for the management of local and systemic effects of envenomation. Rarely, compartment syndrome may develop in the affected limb because of edema and tissue necrosis. Close monitoring of the extremity via serial physical examination and measurement of compartment pressure is a reliable method of determining whether surgical intervention is required.
Pediatric lower extremity mower injuries.
Hill, Sean M; Elwood, Eric T
2011-09-01
Lawn mower injuries in children represent an unfortunately common problem for the plastic reconstructive surgeon, with approximately 68,000 reported per year in the United States. Compounding this problem is the fact that a standard treatment algorithm does not exist. This study follows a series of 7 pediatric patients treated for lower extremity mower injuries by a single plastic surgeon. The extent of soft tissue injury varied. All patients were treated with negative pressure wound therapy as a bridge to definitive closure. Of the 7 patients, 4 required skin grafts, 1 required primary closure, 1 underwent a lower extremity amputation secondary to wounds, and 1 was repaired using a cross-leg flap. Functional limitations were minimal for all of our patients after reconstruction. Our basic treatment algorithm is presented, with initial debridement followed by the simplest method possible for wound closure, using negative pressure wound therapy if necessary.
Photoswitchable method for the ordered attachment of proteins to surfaces
Camarero, Julio A [Livermore, CA; DeYoreo, James J [Clayton, CA; Kwon, Youngeun [Livermore, CA
2011-07-05
Described herein is a method for the attachment of proteins to any solid support with control over the orientation of the attachment. The method is extremely efficient, not requiring the previous purification of the protein to be attached, and can be activated by UV-light. Spatially addressable arrays of multiple protein components can be generated by using standard photolithographic techniques.
ERIC Educational Resources Information Center
Domah, Darshan
2013-01-01
Agile software development has become very popular around the world in recent years, with methods such as Scrum and Extreme Programming (XP). Literature suggests that functionality is the primary focus in Agile processes while non-functional requirements (NFR) are either ignored or ill-defined. However, for software to be of good quality both…
Highly Thermal Conductive Nanocomposites
NASA Technical Reports Server (NTRS)
Sun, Ya-Ping (Inventor); Connell, John W. (Inventor); Veca, Lucia Monica (Inventor)
2015-01-01
Disclosed are methods for forming carbon-based fillers as may be utilized in forming highly thermal conductive nanocomposite materials. Formation methods include treatment of an expanded graphite with an alcohol/water mixture followed by further exfoliation of the graphite to form extremely thin carbon nanosheets that are on the order of between about 2 and about 10 nanometers in thickness. Disclosed carbon nanosheets can be functionalized and/or can be incorporated in nanocomposites with extremely high thermal conductivities. Disclosed methods and materials can prove highly valuable in many technological applications including, for instance, in formation of heat management materials for protective clothing and as may be useful in space exploration or in others that require efficient yet light-weight and flexible thermal management solutions.
β-silicon carbide protective coating and method for fabricating same
Carey, Paul G.; Thompson, Jesse B.
1994-01-01
A polycrystalline β-silicon carbide film or coating and method for forming same on components, such as the top of solar cells, to act as an extremely hard protective surface and as an anti-reflective coating. This is achieved by DC magnetron co-sputtering of amorphous silicon and carbon to form a SiC thin film onto a surface, such as a solar cell. The thin film is then irradiated by a pulsed energy source, such as an excimer laser, to synthesize the poly- or µc-SiC film on the surface and produce β-SiC. While the method of this invention has primary application in solar cell manufacturing, it has application wherever there is a requirement for an extremely hard surface.
NASA Astrophysics Data System (ADS)
Bhardwaj, Alok; Ziegler, Alan D.; Wasson, Robert J.; Chow, Winston; Sharma, Mukat L.
2017-04-01
Extreme monsoon rainfall is the primary cause of floods and of secondary hazards such as landslides in the Indian Himalaya. Understanding extreme monsoon rainfall is therefore required to study these natural hazards. In this work, we study the characteristics of extreme monsoon rainfall, including its intensity and frequency, in the Garhwal Himalaya in India, with a focus on the Mandakini River Catchment, the site of a devastating flood and multiple large landslides in 2013. We have used two long-term gridded rainfall data sets: the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) product with daily rainfall data from 1951-2007, and the India Meteorological Department (IMD) product with daily rainfall data from 1901 to 2013. The Mann-Kendall test and Sen's slope estimator are used to identify the statistical significance and magnitude, respectively, of trends in the intensity and frequency of extreme monsoon rainfall, at a significance level of 0.05. Autocorrelation in the time series of extreme monsoon rainfall is identified and reduced using four methods: pre-whitening, trend-free pre-whitening, variance correction, and block bootstrap. We define the extreme monsoon rainfall threshold as the 99th percentile of the time series of rainfall values; any rainfall depth greater than the 99th percentile is considered extreme. With the IMD data set, significant increasing trends in the intensity and frequency of extreme rainfall, with slope magnitudes of 0.55 and 0.02 respectively, were obtained in the north of the Mandakini Catchment as identified by all four methods. A significant increasing trend in intensity with a slope magnitude of 0.3 was found in the middle of the catchment by all methods except block bootstrap. In the south of the catchment, a significant increasing trend in intensity was obtained, with a slope magnitude of 0.86 for the pre-whitening method and 0.28 for the trend-free pre-whitening and variance correction methods. Further, an increasing trend in frequency with a slope magnitude of 0.01 was identified in the south of the catchment by the three methods other than block bootstrap. With the APHRODITE data set, we obtained a significant increasing trend in intensity with a slope magnitude of 1.27 in the middle of the catchment, as identified by all four methods. Collectively, both data sets show signals of increasing intensity, and the IMD data show increasing frequency in the Mandakini Catchment. The increasing occurrence of extreme events identified here is becoming more disastrous because of the rising human population and expanding infrastructure in the Mandakini Catchment. For example, the 2013 flood due to extreme rainfall was catastrophic in terms of the loss of human and animal lives and the destruction of the local economy. We believe our results will help improve understanding of extreme rainfall events in the Mandakini Catchment and in the Indian Himalaya.
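A bare-bones version of the Mann-Kendall test and Sen's slope estimator used above is sketched below on a synthetic annual intensity series; it ignores ties and omits the pre-whitening, variance-correction, and block-bootstrap adjustments applied in the study.

```python
# Minimal sketch: Mann-Kendall trend test and Sen's slope on a synthetic series.
# Ties are ignored and no autocorrelation correction is applied.
import numpy as np
from scipy.stats import norm

def mann_kendall_sen(x):
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0            # variance of S, ignoring ties
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    p = 2 * (1 - norm.cdf(abs(z)))                       # two-sided p-value
    slopes = [(x[j] - x[i]) / (j - i) for i in range(n - 1) for j in range(i + 1, n)]
    return p, np.median(slopes)                          # Sen's slope = median pairwise slope

rng = np.random.default_rng(11)
years = np.arange(1951, 2008)
intensity = 50 + 0.5 * (years - 1951) + rng.normal(0, 5, size=years.size)  # synthetic mm/day
p_value, sen_slope = mann_kendall_sen(intensity)
print(f"p = {p_value:.4f}, Sen slope = {sen_slope:.2f} (units per year)")
```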
Two case studies on NARCCAP precipitation extremes
NASA Astrophysics Data System (ADS)
Weller, Grant B.; Cooley, Daniel; Sain, Stephan R.; Bukovsky, Melissa S.; Mearns, Linda O.
2013-09-01
We introduce novel methodology to examine the ability of six regional climate models (RCMs) in the North American Regional Climate Change Assessment Program (NARCCAP) ensemble to simulate past extreme precipitation events seen in the observational record over two different regions and seasons. Our primary objective is to examine the strength of daily correspondence of extreme precipitation events between observations and the output of both the RCMs and the driving reanalysis product. To explore this correspondence, we employ methods from multivariate extreme value theory. These methods require that we account for marginal behavior, and we first model and compare climatological quantities which describe tail behavior of daily precipitation for both the observations and model output before turning attention to quantifying the correspondence of the extreme events. Daily precipitation in a West Coast region of North America is analyzed in two seasons, and it is found that the simulated extreme events from the reanalysis-driven NARCCAP models exhibit strong daily correspondence to extreme events in the observational record. Precipitation over a central region of the United States is examined, and we find some daily correspondence between winter extremes simulated by reanalysis-driven NARCCAP models and those seen in observations, but no such correspondence is found for summer extremes. Furthermore, we find greater discrepancies among the NARCCAP models in the tail characteristics of the distribution of daily summer precipitation over this region than seen in precipitation over the West Coast region. We find that the models which employ spectral nudging exhibit stronger tail dependence to observations in the central region.
Bootstrapping conformal field theories with the extremal functional method.
El-Showk, Sheer; Paulos, Miguel F
2013-12-13
The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Shinyie, Wendy Ling; Jemain, Abdul Aziz
2015-02-01
In this study, two series of extreme rainfall data are generated based on the Annual Maximum and Partial Duration methods, derived from 102 rain-gauge stations in Peninsular Malaysia from 1982-2012. To determine the optimal threshold for each station, several requirements must be satisfied, and the Adapted Hill estimator is employed for this purpose. A semi-parametric bootstrap is then used to estimate the mean square error (MSE) of the estimator at each threshold, and the optimal threshold is selected based on the smallest MSE. The mean annual frequency is also checked to ensure that it lies in the range of one to five, and the resulting data are de-clustered to ensure independence. The two data series are then fitted to the Generalized Extreme Value and Generalized Pareto distributions for the annual maximum and partial duration series, respectively. The parameter estimation methods used are the Maximum Likelihood and L-moment methods. Two goodness-of-fit tests are then used to evaluate the best-fitted distribution. The results showed that the Partial Duration series with the Generalized Pareto distribution and Maximum Likelihood parameter estimation provides the best representation of extreme rainfall events in Peninsular Malaysia for the majority of the stations studied. Based on these findings, several return values are also derived, and spatial maps are constructed to identify the distribution characteristics of extreme rainfall in Peninsular Malaysia.
Hot air vulcanization of rubber profiles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerlach, J.
1995-07-01
Elastomer profiles are deployed in quantity by the automobile industry as seals and waterproofing in coachwork. The high standards demanded by the industry (improvement in weather protection, noise reduction, restriction of tolerances), together with powerful demand for EPDM, force the rubber processing industry into development, particularly of elastomers. Complex proofing systems must also be achieved with extremely complicated profile forms. All too often such profiles have an extremely large surface together with a low cross-section density. They frequently consist of two or three rubber compounds and are steel reinforced. Sometimes they are flocked and coated with a low-friction finish. Such high-tech seals require an adjustment of the vulcanization method. The consistent trend in the nineties towards lower quantities of elastomer per sealing unit, and the dielectric factor, especially with EPDM, has brought an old-fashioned vulcanization method once more to the fore, a method developed over the past years to an extremely high standard, namely the hot-air method. This paper describes various vulcanization and curing methods and their relative merits and disadvantages, the Gerlach hot-air concept, the hot-air installation concept, and the energy saving and efficiency afforded by this technique. 4 figs.
Predictive Thermal Control Applied to HabEx
NASA Technical Reports Server (NTRS)
Brooks, Thomas E.
2017-01-01
Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change/10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).
Yang, Jinliang; Jiang, Haiying; Yeh, Cheng-Ting; Yu, Jianming; Jeddeloh, Jeffrey A; Nettleton, Dan; Schnable, Patrick S
2015-11-01
Although approaches for performing genome-wide association studies (GWAS) are well developed, conventional GWAS requires high-density genotyping of large numbers of individuals from a diversity panel. Here we report a method for performing GWAS that does not require genotyping of large numbers of individuals. Instead XP-GWAS (extreme-phenotype GWAS) relies on genotyping pools of individuals from a diversity panel that have extreme phenotypes. This analysis measures allele frequencies in the extreme pools, enabling discovery of associations between genetic variants and traits of interest. This method was evaluated in maize (Zea mays) using the well-characterized kernel row number trait, which was selected to enable comparisons between the results of XP-GWAS and conventional GWAS. An exome-sequencing strategy was used to focus sequencing resources on genes and their flanking regions. A total of 0.94 million variants were identified and served as evaluation markers; comparisons among pools showed that 145 of these variants were statistically associated with the kernel row number phenotype. These trait-associated variants were significantly enriched in regions identified by conventional GWAS. XP-GWAS was able to resolve several linked QTL and detect trait-associated variants within a single gene under a QTL peak. XP-GWAS is expected to be particularly valuable for detecting genes or alleles responsible for quantitative variation in species for which extensive genotyping resources are not available, such as wild progenitors of crops, orphan crops, and other poorly characterized species such as those of ecological interest. © 2015 The Authors The Plant Journal published by Society for Experimental Biology and John Wiley & Sons Ltd.
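The core comparison in XP-GWAS, testing whether a variant's allele frequency differs between the two extreme-phenotype pools, can be illustrated with a simple contingency-table test; the read counts below are invented, and the published pipeline's statistics may differ.

```python
# Minimal sketch (not the published XP-GWAS pipeline): compare allele counts at one
# variant between the two extreme-phenotype pools with Fisher's exact test.
from scipy.stats import fisher_exact

# reference / alternate allele read counts at one variant (hypothetical)
high_pool = (120, 40)     # pool of individuals with high kernel row number
low_pool = (80, 85)       # pool of individuals with low kernel row number

odds_ratio, p_value = fisher_exact([list(high_pool), list(low_pool)])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3g}")
```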
Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony
2012-08-27
Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.
3D Microfabrication Using Emulsion Mask Grayscale Photolithography Technique
NASA Astrophysics Data System (ADS)
Lee, Tze Pin; Mohamed, Khairudin
2016-02-01
Recently, the rapid development of technologies such as biochips, microfluidics, micro-optical devices and micro-electromechanical systems (MEMS) demands the capability to create complex designs of three-dimensional (3D) microstructures. In order to create 3D microstructures, the traditional photolithography process often requires multiple photomasks to form the 3D pattern from several stacked photoresist layers. This fabrication method is extremely time-consuming, low-throughput, costly and complicated to conduct at high-volume manufacturing scale. On the other hand, next-generation lithography techniques such as electron beam lithography (EBL), focused ion beam lithography (FIB) and extreme ultraviolet lithography (EUV) are too costly, and the machines require expertise to set up. Therefore, the purpose of this study is to develop a simplified method of producing 3D microstructures using a single grayscale emulsion mask technique. By using this grayscale fabrication method, microstructures with thicknesses as high as 500 μm and as low as 20 μm are obtained in a single photolithography exposure. Finally, the fabrication of a 3D microfluidic channel has been demonstrated using this grayscale photolithographic technique.
Estimation of resist sensitivity for extreme ultraviolet lithography using an electron beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyama, Tomoko Gowa, E-mail: ohyama.tomoko@qst.go.jp; Oshima, Akihiro; Tagawa, Seiichi, E-mail: tagawa@sanken.osaka-u.ac.jp
2016-08-15
It is a challenge to obtain sufficient extreme ultraviolet (EUV) exposure time for fundamental research on developing a new class of high-sensitivity resists for extreme ultraviolet lithography (EUVL), because EUV exposure tools are few and very expensive. In this paper, we introduce an easy method for predicting EUV resist sensitivity by using conventional electron beam (EB) sources. If the chemical reactions induced by the two ionizing sources (EB and EUV) are the same, the required absorbed energies corresponding to each required exposure dose (sensitivity) for EB and EUV would be almost equivalent. Based on this theory, we calculated the resist sensitivities for the EUV/soft X-ray region. The estimated sensitivities were found to be comparable to the experimentally obtained sensitivities. It was concluded that EB is a very useful exposure tool that accelerates the development of new resists and sensitivity enhancement processes for 13.5 nm EUVL and 6.x nm beyond-EUV lithography (BEUVL).
Improved methods for simulating nearly extremal binary black holes
NASA Astrophysics Data System (ADS)
Scheel, Mark A.; Giesler, Matthew; Hemberger, Daniel A.; Lovelace, Geoffrey; Kuper, Kevin; Boyle, Michael; Szilágyi, Béla; Kidder, Lawrence E.
2015-05-01
Astrophysical black holes could be nearly extremal (that is, rotating nearly as fast as possible); therefore, nearly extremal black holes could be among the binaries that current and future gravitational-wave observatories will detect. Predicting the gravitational waves emitted by merging black holes requires numerical-relativity simulations, but these simulations are especially challenging when one or both holes have mass m and spin S exceeding the Bowen-York limit of S/m² = 0.93. We present improved methods that enable us to simulate merging, nearly extremal black holes (i.e., black holes with S/m² > 0.93) more robustly and more efficiently. We use these methods to simulate an unequal-mass, precessing binary black hole (BBH) coalescence, where the larger black hole has S/m² = 0.99. We also use these methods to simulate a non-precessing BBH coalescence, where both black holes have S/m² = 0.994, nearly reaching the Novikov-Thorne upper bound for holes spun up by thin accretion disks. We demonstrate numerical convergence and estimate the numerical errors of the waveforms; we compare numerical waveforms from our simulations with post-Newtonian and effective-one-body waveforms; we compare the evolution of the black hole masses and spins with analytic predictions; and we explore the effect of increasing spin magnitude on the orbital dynamics (the so-called 'orbital hangup' effect).
Prince, Linda M
2015-01-01
Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps to minimizing the downstream time spent verifying and scoring the data.
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
NASA Astrophysics Data System (ADS)
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging with respect to their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance of each algorithm, as well as its response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
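To illustrate why the MVM reference method scales as O(N²), the following minimal Python sketch (not the OCTOPUS implementation; the array sizes and the random reconstructor are purely illustrative) applies a precomputed control matrix to one frame of wavefront-sensor slopes; the cost of this single per-loop product grows with the product of the number of actuators and the number of measurements.

    import numpy as np

    # Minimal MVM reconstruction sketch (illustrative only, not OCTOPUS).
    # R maps wavefront-sensor slopes to deformable-mirror commands; applying it
    # costs O(n_act * n_slopes) operations at every control-loop step.
    n_slopes, n_act = 4000, 2000                 # toy sizes; an ELT-scale system is far larger
    rng = np.random.default_rng(0)
    R = rng.standard_normal((n_act, n_slopes))   # stands in for a calibrated reconstructor
    slopes = rng.standard_normal(n_slopes)       # one frame of sensor measurements

    commands = R @ slopes                        # the per-loop matrix-vector product
    print(commands.shape)                        # (2000,)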
Method for fabricating an ultra-low expansion mask blank having a crystalline silicon layer
Cardinale, Gregory F.
2002-01-01
A method for fabricating masks for extreme ultraviolet lithography (EUVL) using Ultra-Low Expansion (ULE) substrates and crystalline silicon. ULE substrates are required for the necessary thermal management in EUVL mask blanks, and defect detection and classification have been obtained using crystalline silicon substrate materials. Thus, this method provides the advantages of both the ULE substrate and the crystalline silicon in an Extreme Ultra-Violet (EUV) mask blank. The method is carried out by bonding a crystalline silicon wafer or member to a ULE wafer or substrate and thinning the silicon to produce a 5-10 µm thick crystalline silicon layer on the surface of the ULE substrate. The thinning of the crystalline silicon may be carried out, for example, by chemical mechanical polishing and, if necessary or desired, oxidizing the silicon followed by etching to the desired thickness of the silicon.
Jo, Javier A.; Fang, Qiyin; Marcu, Laura
2007-01-01
We report a new deconvolution method for fluorescence lifetime imaging microscopy (FLIM) based on the Laguerre expansion technique. The performance of this method was tested on synthetic and real FLIM images. The following interesting properties of this technique were demonstrated. 1) The fluorescence intensity decay can be estimated simultaneously for all pixels, without a priori assumption of the decay functional form. 2) The computation speed is extremely fast, performing at least two orders of magnitude faster than current algorithms. 3) The estimated maps of Laguerre expansion coefficients provide a new domain for representing FLIM information. 4) The number of images required for the analysis is relatively small, allowing reduction of the acquisition time. These findings indicate that the developed Laguerre expansion technique for FLIM analysis represents a robust and extremely fast deconvolution method that enables practical applications of FLIM in medicine, biology, biochemistry, and chemistry. PMID:19444338
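As a rough illustration of the Laguerre expansion idea (a sketch only, not the authors' implementation: it omits the instrument-response deconvolution, and the time axis, scale parameter alpha and expansion order are arbitrary), the decay at a pixel can be expressed on an orthonormal Laguerre basis and the coefficients obtained by ordinary least squares, so no exponential functional form is assumed:

    import numpy as np
    from numpy.polynomial.laguerre import lagval

    # Hedged sketch: fit one pixel's decay with orthonormal Laguerre functions
    # phi_j(t) = sqrt(alpha) * exp(-alpha*t/2) * L_j(alpha*t); the expansion
    # coefficients come from linear least squares.
    t = np.linspace(0.0, 25.0, 256)                      # hypothetical time axis (ns)
    alpha, order = 1.0, 6

    def laguerre_basis(t, alpha, order):
        cols = []
        for j in range(order):
            c = np.zeros(j + 1); c[j] = 1.0              # select the j-th Laguerre polynomial
            cols.append(np.sqrt(alpha) * np.exp(-alpha * t / 2.0) * lagval(alpha * t, c))
        return np.column_stack(cols)

    B = laguerre_basis(t, alpha, order)
    y = 2.0 * np.exp(-t / 3.5) + 0.02 * np.random.default_rng(1).standard_normal(t.size)
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)       # Laguerre expansion coefficients
    decay_fit = B @ coeffs                               # estimated intensity decay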
Bailey-Wilson, Joan E.; Brennan, Jennifer S.; Bull, Shelley B; Culverhouse, Robert; Kim, Yoonhee; Jiang, Yuan; Jung, Jeesun; Li, Qing; Lamina, Claudia; Liu, Ying; Mägi, Reedik; Niu, Yue S.; Simpson, Claire L.; Wang, Libo; Yilmaz, Yildiz E.; Zhang, Heping; Zhang, Zhaogong
2012-01-01
Group 14 of Genetic Analysis Workshop 17 examined several issues related to analysis of complex traits using DNA sequence data. These issues included novel methods for analyzing rare genetic variants in an aggregated manner (often termed collapsing rare variants), evaluation of various study designs to increase power to detect effects of rare variants, and the use of machine learning approaches to model highly complex heterogeneous traits. Various published and novel methods for analyzing traits with extreme locus and allelic heterogeneity were applied to the simulated quantitative and disease phenotypes. Overall, we conclude that power is (as expected) dependent on locus-specific heritability or contribution to disease risk, large samples will be required to detect rare causal variants with small effect sizes, extreme phenotype sampling designs may increase power for smaller laboratory costs, methods that allow joint analysis of multiple variants per gene or pathway are more powerful in general than analyses of individual rare variants, population-specific analyses can be optimal when different subpopulations harbor private causal mutations, and machine learning methods may be useful for selecting subsets of predictors for follow-up in the presence of extreme locus heterogeneity and large numbers of potential predictors. PMID:22128066
2011-01-01
Background Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Methods Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. Results The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Conclusions Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training. PMID:21575185
NASA Astrophysics Data System (ADS)
Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.
2012-04-01
The Canary Islands are an active volcanic region, densely populated and visited by several million tourists every year. Nearly twenty eruptions have been reported through written chronicles in the last 600 years, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately contribute to the design of appropriate preparedness plans. Hence, the probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step for the assessment of volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions which is most appropriate given the nature of the documented historical eruptive data. We first characterise the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of a few data points for large eruptions, since such data require special methods of analysis. Hence, we use a statistical method from extreme value theory. In particular, we apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of a magnitude greater than one in the upcoming years. Shortly after the publication of this method, an eruption on the island of El Hierro took place for the first time in historical times, supporting our method and contributing towards the validation of our results.
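The final probability statement follows directly from the non-homogeneous Poisson process assumption: the probability of at least one event in the next T years is 1 - exp(-integral of lambda(t) over [0, T]), where lambda(t) is the fitted eruption intensity. A minimal numerical sketch follows, with an entirely hypothetical intensity function, not the fitted Canary Islands model:

    import numpy as np

    # Sketch: probability of at least one event under a non-homogeneous Poisson
    # process with intensity lam(t), in events per year.  The intensity below is
    # hypothetical and does not come from the study.
    def prob_at_least_one(lam, T, n=10_000):
        t = np.linspace(0.0, T, n)
        rate = lam(t)
        integral = np.sum((rate[:-1] + rate[1:]) / 2.0) * (t[1] - t[0])   # trapezoid rule
        return 1.0 - np.exp(-integral)

    lam = lambda t: 0.03 + 0.001 * t          # hypothetical, slowly increasing rate
    print(prob_at_least_one(lam, 20.0))       # P(>= 1 eruption of magnitude > 1 in 20 yr)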
Analysis of Extreme Snow Water Equivalent Data in Central New Hampshire
NASA Astrophysics Data System (ADS)
Vuyovich, C.; Skahill, B. E.; Kanney, J. F.; Carr, M.
2017-12-01
Heavy snowfall and snowmelt-related events have been linked to widespread flooding and damages in many regions of the U.S. Design of critical infrastructure in these regions requires spatial estimates of extreme snow water equivalent (SWE). In this study, we develop station specific and spatially explicit estimates of extreme SWE using data from fifteen snow sampling stations maintained by the New Hampshire Department of Environmental Services. The stations are located in the Mascoma, Pemigewasset, Winnipesaukee, Ossipee, Salmon Falls, Lamprey, Sugar, and Isinglass basins in New Hampshire. The average record length for the fifteen stations is approximately fifty-nine years. The spatial analysis of extreme SWE involves application of two Bayesian Hierarchical Modeling methods, one that assumes conditional independence, and another which uses the Smith max-stable process model to account for spatial dependence. We also apply additional max-stable process models, albeit not in a Bayesian framework, that better model the observed dependence among the extreme SWE data. The spatial process modeling leverages readily available and relevant spatially explicit covariate data. The noted additional max-stable process models also used the nonstationary winter North Atlantic Oscillation index, which has been observed to influence snowy weather along the east coast of the United States. We find that, for this data set, SWE return level estimates are consistently higher when derived using methods which account for the observed spatial dependence among the extreme data. This is particularly significant for design scenarios of relevance for critical infrastructure evaluation.
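For the station-specific part of such an analysis (a sketch only: the Bayesian hierarchical and max-stable models used in the study are not reproduced, and the data below are synthetic), a GEV distribution can be fit to one station's annual-maximum SWE series and the T-year return level read off as the (1 - 1/T) quantile:

    import numpy as np
    from scipy.stats import genextreme

    # Hedged sketch: fit a GEV to one station's annual-maximum SWE and compute a
    # return level.  The synthetic record length (59 yr) mirrors the average
    # record length quoted above; parameter values are illustrative.
    rng = np.random.default_rng(2)
    annual_max_swe = genextreme.rvs(c=-0.1, loc=250.0, scale=60.0, size=59, random_state=rng)

    shape, loc, scale = genextreme.fit(annual_max_swe)
    T = 100.0
    rl_100 = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)   # (1 - 1/T) quantile
    print(f"100-year SWE return level ~ {rl_100:.0f} mm")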
NASA Astrophysics Data System (ADS)
Smith, J. D.; Kean, J. W.
2003-12-01
Accurate empirical determination of river discharge during an extreme event is very difficult even at a gage site. Moreover, the procurement of extreme flow measurements at many locations in an ungaged drainage basin is often necessary to relate the surface-water flow in the drainage network during a flood to the spatial distribution of intense rainfall. Consequently, paleo-hydrologic methods have to be employed to estimate peak discharges. These methods, however, require the application of some type of flow model. Often the flow models used with paleo-hydrologic data are oversimplified and embody low-flow or extrapolated roughness coefficients that are inappropriate for the high flow of interest and that substantially reduce the reliability of the estimated discharge. Models that permit calculation of flow resistance from measured or calculated pre-flood, post-flood, or evolving channel and floodplain geometries and roughnesses can yield the most accurate results for these extreme situations. We have developed a procedure for directly calculating flow discharge as a function of stage in reaches a few tens of river widths in length. The foundation for this approach is a set of algorithms that permits computation of the form drag on topographic elements and woody vegetation. Its application requires an initial survey of the channel and floodplain topography and roughness. The method can be used either with stage determined from a set of pressure gages distributed throughout a drainage basin to monitor discharge in a drainage network or with paleo-hydrologic data to determine discharge from extreme events. Currently, our method of determining discharge from stage is being tested at various sites in the drainage basin of the Whitewater River, Kansas. Two of these sites are just downstream of USGS gages, and a third is a short distance downstream from the outlet pipe of a man-made lake. These tests are for a full range of hydrologic conditions in order to demonstrate that the model-based method for converting stage to discharge can be employed with confidence (1) in ungaged drainage basins where a large number of discharge measurements are required for hydrologic research, (2) at locations where rated USGS stage gages are too expensive, (3) near the sites of USGS stage gages for floods during which the discharge exceeds those for which the gage has been rated, and (4) for situations where paleo-flood methods have to be used to obtain a peak discharge. Model-calculated rating curves are compared to measured ones for one of the USGS gage sites. Model calculations also are used to show that Manning's and other friction coefficients are functions of stage at this site. An approach such as the one described here is essential for the quantitative investigation of fluvial geomorphic processes caused by very large floods.
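For context, the simplest kind of stage-to-discharge conversion that such a model-based approach improves on is a uniform-flow rating such as Manning's equation. The sketch below is a generic illustration with hypothetical channel parameters, not the authors' form-drag algorithms; the point of the work described above is precisely that a single, stage-independent roughness coefficient is inadequate for extreme flows.

    import math

    # Generic stage-to-discharge illustration via Manning's equation for a
    # trapezoidal channel (hypothetical geometry and roughness; not the
    # form-drag model described in the abstract).
    def manning_discharge(stage_m, bottom_width=20.0, side_slope=2.0, n=0.035, bed_slope=0.001):
        area = stage_m * (bottom_width + side_slope * stage_m)                     # m^2
        wetted_perimeter = bottom_width + 2.0 * stage_m * math.sqrt(1.0 + side_slope ** 2)
        hydraulic_radius = area / wetted_perimeter
        return area * hydraulic_radius ** (2.0 / 3.0) * math.sqrt(bed_slope) / n   # m^3/s

    for stage in (1.0, 3.0, 5.0):
        print(stage, round(manning_discharge(stage), 1))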
Porosity estimation by semi-supervised learning with sparsely available labeled samples
NASA Astrophysics Data System (ADS)
Lima, Luiz Alberto; Görnitz, Nico; Varella, Luiz Eduardo; Vellasco, Marley; Müller, Klaus-Robert; Nakajima, Shinichi
2017-09-01
This paper addresses the porosity estimation problem from seismic impedance volumes and porosity samples located in a small group of exploratory wells. Regression methods, trained on the impedance as inputs and the porosity as output labels, generally suffer from extremely expensive (and hence sparsely available) porosity samples. To make optimal use of the valuable porosity data, a semi-supervised machine learning method, Transductive Conditional Random Field Regression (TCRFR), was proposed and shown to perform well (Görnitz et al., 2017). TCRFR, however, still requires more labeled data than are usually available, which creates a gap when applying the method to the porosity estimation problem in realistic situations. In this paper, we aim to fill this gap by introducing two graph-based preprocessing techniques, which adapt the original TCRFR to extremely weakly supervised scenarios. Our new method outperforms previous automatic estimation methods on synthetic data and provides a result comparable to the manual, labor-intensive, time-consuming geostatistics approach on real data, proving its potential as a practical industrial tool.
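As a generic illustration of graph-based semi-supervised regression (this is not TCRFR or the paper's preprocessing; the features, graph construction and data are hypothetical), a handful of labeled porosity values can be propagated through a k-nearest-neighbor similarity graph built from impedance-derived features:

    import numpy as np

    # Hedged sketch: harmonic-style label propagation on a k-NN graph.  Known
    # porosity values (from a few wells) are clamped; unlabeled nodes repeatedly
    # take the average of their neighbors' values.
    def knn_propagate(features, labels, labeled_idx, k=5, iters=200):
        n = features.shape[0]
        d = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        neighbors = np.argsort(d, axis=1)[:, :k]
        y = np.zeros(n)
        y[labeled_idx] = labels
        for _ in range(iters):
            y = y[neighbors].mean(axis=1)        # average over graph neighbors
            y[labeled_idx] = labels              # clamp the known samples
        return y

    rng = np.random.default_rng(3)
    X = rng.standard_normal((200, 4))            # e.g. impedance-derived features per voxel
    labeled_idx = np.array([0, 50, 100, 150])    # voxels intersected by wells
    labels = np.array([0.12, 0.18, 0.22, 0.15])  # porosity at those wells
    porosity_est = knn_propagate(X, labels, labeled_idx)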
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) Convolutional Neural Networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high resolution climate data from low resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present a best-performing network using the pixel recursive super resolution model that synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process that increases the resolution of the data.
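A minimal sketch of the detection component only (the pixel recursive super-resolution model is not reproduced; the channel count, patch size and layer widths are illustrative and not the authors' configuration) could look like the following small convolutional classifier over multi-channel reanalysis patches:

    import torch
    import torch.nn as nn

    # Hedged sketch: a small CNN that classifies a multi-channel climate patch as
    # containing an extreme event or not.
    class EventDetector(nn.Module):
        def __init__(self, in_channels=4, patch=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * (patch // 4) ** 2, 2)  # event / no event

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    x = torch.randn(8, 4, 64, 64)   # batch of reanalysis patches (e.g. wind, pressure, ...)
    logits = EventDetector()(x)
    print(logits.shape)             # torch.Size([8, 2])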
NASA Technical Reports Server (NTRS)
Mattox, D. M.
1981-01-01
Surface tension gradient in melt forces gas bubbles to surface, increasing glass strength and transparency. Conventional chemical and buoyant fining are extremely slow in viscous glasses, but tension gradient method moves 250 µm bubbles as rapidly as 30 µm/s. Heat required for high temperature part of melt is furnished by stationary electrical or natural-gas heater; induction and laser heating are also possible. Method has many applications in industrial processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kraus, R. G.; Mcnabb, D.; Kumar, M.
The National Nuclear Security Agency has recently recognized that a long-term need exists to establish a stronger scientific basis for the assessment and qualification of materials and manufacturing processes for the nuclear stockpile and other national security applications. These materials may have undergone substantial changes with age, or may represent new materials that are being introduced because of difficulties associated with reusing or recreating materials used in original stockpile components. Also, with advancements in manufacturing methods, the NNSA anticipates opportunities for an enhanced range of control over fabricated components, an enhanced pace of materials development, and enhanced functionality. The development of qualification standards for these new materials will require the ability to understand and control material characteristics that affect both mechanical and dynamic performance. A unique aspect for NNSA is that the performance requirements for materials are often set by system hydrodynamics, and these materials must perform in extreme environments and loading conditions. Thus, the scientific motivation is to understand “Matter-Radiation Interactions in Extremes (MaRIE).”
Amniotic Constriction Bands: Secondary Deformities and Their Treatments.
Drury, Benjamin T; Rayan, Ghazi M
2018-01-01
The purpose of this study was to report the surgical treatment experience of patients with amniotic constriction bands (ACB) over a 35-year interval and detail consequential limb deformities with emphasis on hands and upper extremities, along with the nature and frequency of their surgical treatment methods. Fifty-one patients were identified; 26 were males and 25 females. The total number of deformities was listed. The total number of operations, individual procedures, and operations plus procedures that were done for each patient and their frequency were recorded. The total number of operations was 117, and total number of procedures was 341. More procedures were performed on the upper extremity (85%) than the lower extremity (15%). Including the primary deformity ACB, 16 different hand deformities secondary to ACB were encountered. Sixteen different surgical methods for the upper extremity were utilized; a primary procedure for ACB and secondary reconstructions for all secondary deformities. Average age at the time of the first procedure was 9.3 months. The most common procedures performed, in order of frequency, were excision of ACB plus Z-plasty, release of partial syndactyly, release of fenestrated syndactyly, full-thickness skin grafts, resection of digital bony overgrowth from amputation stumps, and deepening of first and other digital web spaces. Many hand and upper extremity deformities secondary to ACB are encountered. Children with ACB may require more than one operation including multiple procedures. Numerous surgical methods of reconstruction for these children's secondary deformities are necessary in addition to the customary primary procedure of excision of ACB and Z-plasty.
Temporal development of extreme precipitation in Germany projected by EURO-CORDEX simulations
NASA Astrophysics Data System (ADS)
Brendel, Christoph; Deutschländer, Thomas
2017-04-01
A sustainable operation of transport infrastructure requires an enhanced resilience to the increasing impacts of climate change and related extreme meteorological events. To meet this challenge, the German Federal Ministry of Transport and Digital Infrastructure (BMVI) commenced a comprehensive national research program on safe and sustainable transport in Germany. A network of departmental research institutes addresses the "Adaptation of the German transport infrastructure towards climate change and extreme events". Various studies have already identified an increase in average global precipitation over the 20th century. There is some indication that these increases are most visible in a rising frequency of precipitation extremes. However, the changes are highly variable between regions and seasons. With a further increase of atmospheric greenhouse gas concentrations in the 21st century, the likelihood of occurrence of such extreme events will continue to rise. A kernel estimator has been used in order to obtain a robust estimate of the temporal development of extreme precipitation events projected by an ensemble of EURO-CORDEX simulations. The kernel estimator measures the intensity of the Poisson point process, indicating temporal changes in the frequency of extreme events. Extreme precipitation events were selected using the peaks-over-threshold (POT) method with the 90th, 95th and 99th quantiles of daily precipitation sums as thresholds. Application of this non-parametric approach with relative thresholds renders the use of a bias correction non-mandatory. In addition, in comparison to fitting an extreme value theory (EVT) distribution, the method is completely insensitive to outliers. First results show an overall increase of extreme precipitation events for Germany until the end of the 21st century. However, major differences between seasons, quantiles and the three different Representative Concentration Pathways (RCP 2.6, 4.5, and 8.5) have been identified. For instance, the frequency of extreme precipitation events more than triples in the most extreme scenario. Regional differences are rather small, with the largest increase in northern Germany, particularly in coastal regions, and the weakest increase in the most southern parts of Germany.
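The two ingredients described above can be illustrated with a short sketch (synthetic daily precipitation, an arbitrary bandwidth and threshold choice; this is not the study's data or its kernel estimator implementation): exceedances of a relative quantile threshold are counted per year, and a Gaussian kernel smooths the yearly event frequency over time:

    import numpy as np

    # Hedged sketch: peaks-over-threshold selection with a relative (95th
    # percentile) threshold, followed by Gaussian-kernel smoothing of the yearly
    # exceedance counts to show their temporal development.
    rng = np.random.default_rng(4)
    years = np.arange(1971, 2101)
    daily = rng.gamma(0.6, 4.0, size=(years.size, 365))     # synthetic daily precipitation (mm)

    threshold = np.quantile(daily, 0.95)                    # relative threshold
    exceed_per_year = (daily > threshold).sum(axis=1)       # POT event counts per year

    def smoothed_rate(t_eval, t_obs, counts, bandwidth=10.0):
        w = np.exp(-0.5 * ((t_eval[:, None] - t_obs[None, :]) / bandwidth) ** 2)
        return (w * counts[None, :]).sum(axis=1) / w.sum(axis=1)   # events per year

    trend = smoothed_rate(years.astype(float), years.astype(float), exceed_per_year)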
A frequency dependent preconditioned wavelet method for atmospheric tomography
NASA Astrophysics Data System (ADS)
Yudytskiy, Mykhaylo; Helin, Tapio; Ramlau, Ronny
2013-12-01
Atmospheric tomography, i.e. the reconstruction of the turbulence in the atmosphere, is a main task for the adaptive optics systems of the next generation telescopes. For extremely large telescopes, such as the European Extremely Large Telescope, this problem becomes overly complex and an efficient algorithm is needed to reduce numerical costs. Recently, a conjugate gradient method based on wavelet parametrization of turbulence layers was introduced [5]. An iterative algorithm can only be numerically efficient when the number of iterations required for a sufficient reconstruction is low. A way to achieve this is to design an efficient preconditioner. In this paper we propose a new frequency-dependent preconditioner for the wavelet method. In the context of a multi conjugate adaptive optics (MCAO) system simulated on the official end-to-end simulation tool OCTOPUS of the European Southern Observatory we demonstrate robustness and speed of the preconditioned algorithm. We show that three iterations are sufficient for a good reconstruction.
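To show why a good preconditioner cuts the iteration count of an iterative solver, here is a generic preconditioned conjugate gradient sketch (the operator below is a random symmetric positive definite stand-in and the preconditioner is a simple Jacobi diagonal, not the frequency-dependent wavelet-domain preconditioner of the paper):

    import numpy as np

    # Generic preconditioned conjugate gradient (PCG); M_inv approximates A^{-1}.
    def pcg(A, b, M_inv, tol=1e-8, max_iter=200):
        x = np.zeros_like(b)
        r = b - A @ x
        z = M_inv @ r
        p = z.copy()
        for k in range(max_iter):
            Ap = A @ p
            alpha = (r @ z) / (p @ Ap)
            x = x + alpha * p
            r_new = r - alpha * Ap
            if np.linalg.norm(r_new) < tol * np.linalg.norm(b):
                return x, k + 1
            z_new = M_inv @ r_new
            beta = (r_new @ z_new) / (r @ z)
            p = z_new + beta * p
            r, z = r_new, z_new
        return x, max_iter

    rng = np.random.default_rng(5)
    Q = rng.standard_normal((300, 300))
    A = Q @ Q.T + 300 * np.eye(300)             # SPD stand-in for the tomography operator
    b = rng.standard_normal(300)
    M_inv = np.diag(1.0 / np.diag(A))           # Jacobi preconditioner as a simple stand-in
    x, iters = pcg(A, b, M_inv)
    print("converged in", iters, "iterations")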
NASA Astrophysics Data System (ADS)
Rokita, Pawel
Classical portfolio diversification methods do not take account of any dependence between extreme returns (losses). Many researchers, however, provide empirical evidence for various assets that extreme losses co-occur. If the co-occurrence is frequent enough to be statistically significant, it may seriously influence portfolio risk. Such effects may result from a few different properties of financial time series, for instance: (1) extreme dependence in a (long-term) unconditional distribution, (2) extreme dependence in subsequent conditional distributions, (3) time-varying conditional covariance, (4) time-varying (long-term) unconditional covariance, (5) market contagion. Moreover, a mix of these properties may be present in return time series. Modeling each of them requires different approaches. It seems reasonable to investigate whether distinguishing between the properties is highly significant for portfolio risk measurement. If it is, identifying the effect responsible for high loss co-occurrence would be of great importance. If it is not, the best solution would be selecting the easiest-to-apply model. This article concentrates on two of the aforementioned properties: extreme dependence (in a long-term unconditional distribution) and time-varying conditional covariance.
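A quick way to see whether extreme losses of two assets co-occur more often than independence would suggest is to estimate the lower-tail co-exceedance probability. The sketch below uses synthetic return series and an illustrative quantile level; it is only the empirical counterpart of unconditional extreme dependence, not the article's models:

    import numpy as np

    # Hedged sketch: empirical estimate of extreme-loss co-occurrence for two
    # assets, i.e. P(X below its q-quantile | Y below its q-quantile).
    # Under independence this is roughly q; values well above q indicate
    # lower-tail dependence.
    def lower_tail_coexceedance(x, y, q=0.05):
        qx, qy = np.quantile(x, q), np.quantile(y, q)
        both = np.mean((x <= qx) & (y <= qy))
        return both / q

    rng = np.random.default_rng(6)
    z = rng.standard_normal(5000)
    x = z + 0.3 * rng.standard_normal(5000)     # two correlated synthetic return series
    y = z + 0.3 * rng.standard_normal(5000)
    print(lower_tail_coexceedance(x, y))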
A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
1997-01-01
Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and hence uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither too conservative nor non-conservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. Answering the question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
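The limitation of the three-sigma rule is easy to demonstrate numerically (a sketch with synthetic strength data; the distributions and parameters are illustrative and not from the report): for a normal variable, mean minus three standard deviations leaves roughly 0.135% of the population below the allowable, but for a skewed variable the same rule can give a materially different exceedance fraction.

    import numpy as np

    # Sketch of the three-sigma design allowable and its normality assumption.
    rng = np.random.default_rng(7)
    normal_strength = rng.normal(100.0, 5.0, 100_000)
    skewed_strength = rng.lognormal(mean=np.log(100.0), sigma=0.25, size=100_000)

    for name, s in [("normal", normal_strength), ("skewed", skewed_strength)]:
        allowable = s.mean() - 3.0 * s.std()           # the three-sigma allowable
        frac_below = np.mean(s < allowable)            # ~0.00135 only if truly normal
        print(name, round(allowable, 1), frac_below)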
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Peter; Damiani, Rick R.; Dykes, Katherine
2017-01-09
A new adaptive stratified importance sampling (ASIS) method is proposed as an alternative approach for the calculation of the 50-year extreme load under operational conditions, as in design load case 1.1 of the International Electrotechnical Commission design standard. ASIS combines elements of the binning and extrapolation technique currently described by the standard and of the importance sampling (IS) method to estimate load probabilities of exceedance (POEs). Whereas a Monte Carlo (MC) approach would reach the sought level of POE only with a daunting number of simulations, IS-based techniques are promising as they target the sampling of the input parameters on the parts of the distributions that are most responsible for the extreme loads, thus reducing the number of runs required. We compared the various methods on select load channels output from FAST, an aero-hydro-servo-elastic tool for the design and analysis of wind turbines developed by the National Renewable Energy Laboratory (NREL). Our newly devised method, although still in its infancy in terms of tuning of the subparameters, is comparable to the others in terms of load estimation and its variance versus computational cost, and offers great promise going forward due to the incorporation of adaptivity into the already powerful importance sampling concept.
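To illustrate why importance sampling needs far fewer runs than plain Monte Carlo for small probabilities of exceedance, here is a minimal sketch with a one-dimensional standard normal input and an arbitrary threshold (this is neither ASIS nor coupled to FAST): the proposal distribution is shifted toward the region that produces the extremes and the samples are reweighted by the likelihood ratio.

    import numpy as np
    from scipy.stats import norm

    # Hedged sketch: estimate P(X > threshold) by plain Monte Carlo and by
    # importance sampling with a shifted normal proposal.
    rng = np.random.default_rng(8)
    threshold = 4.0
    n = 20_000

    # Plain Monte Carlo: almost no samples land beyond the threshold.
    x_mc = rng.normal(0.0, 1.0, n)
    poe_mc = np.mean(x_mc > threshold)

    # Importance sampling: draw from N(threshold, 1) and reweight.
    x_is = rng.normal(threshold, 1.0, n)
    weights = norm.pdf(x_is, 0.0, 1.0) / norm.pdf(x_is, threshold, 1.0)
    poe_is = np.mean((x_is > threshold) * weights)

    print(poe_mc, poe_is, 1.0 - norm.cdf(threshold))  # IS is close to the exact 3.17e-05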
Evaluation and testing of image quality of the Space Solar Extreme Ultraviolet Telescope
NASA Astrophysics Data System (ADS)
Peng, Jilong; Yi, Zhong; Zhou, Shuhong; Yu, Qian; Hou, Yinlong; Wang, Shanshan
2018-01-01
For the space solar extreme ultraviolet telescope, the star point test cannot be performed in the x-ray band (19.5 nm band) as there is no light source bright enough. In this paper, the point spread function of the optical system is calculated to evaluate the imaging performance of the telescope system. Combined with the actual processing surface errors, such as those from small-grinding-head processing and magnetorheological processing, the optical design software Zemax and the data analysis software Matlab are used to directly calculate the system point spread function of the space solar extreme ultraviolet telescope. Matlab code is written to generate the required surface error grid data. These surface error data are loaded onto the specified surface of the telescope system by using DDE (Dynamic Data Exchange) communication, which connects Zemax and Matlab. As different processing methods lead to surface errors with different sizes, distributions and spatial frequencies, their impact on imaging also differs. Therefore, the characteristics of the surface errors of different machining methods are studied. Combined with their position in the optical system and simulation of their influence on the image quality, this is of great significance for a reasonable choice of processing technology. Additionally, we have analyzed the relationship between the surface error and the image quality evaluation. In order to ensure that the final processing of the mirror meets the image quality requirements, one or several surface error evaluation methods should be chosen according to the spatial frequency characteristics of the surface error.
Spatial distribution of precipitation extremes in Norway
NASA Astrophysics Data System (ADS)
Verpe Dyrrdal, Anita; Skaugen, Thomas; Lenkoski, Alex; Thorarinsdottir, Thordis; Stordal, Frode; Førland, Eirik J.
2015-04-01
Estimates of extreme precipitation, in terms of return levels, are crucial in planning and design of important infrastructure. Through two separate studies, we have examined the levels and spatial distribution of daily extreme precipitation over catchments in Norway, and hourly extreme precipitation in a point. The analyses were carried out through the development of two new methods for estimating extreme precipitation in Norway. For daily precipitation we fit the Generalized Extreme Value (GEV) distribution to areal time series from a gridded dataset, consisting of daily precipitation during the period 1957-today with a resolution of 1x1 km². This grid-based method is more objective and less manual and time-consuming compared to the existing method at MET Norway. In addition, estimates in ungauged catchments are easier to obtain, and the GEV approach includes a measure of uncertainty, which is a requirement in climate studies today. Further, we go into depth on the debated GEV shape parameter, which plays an important role for longer return periods. We show that it varies according to dominating precipitation types, having positive values in the southeast and negative values in the southwest. We also find indications that the degree of orographic enhancement might affect the shape parameter. For hourly precipitation, we estimate return levels on a 1x1 km² grid, by linking GEV distributions with latent Gaussian fields in a Bayesian hierarchical model (BHM). Generalized linear models on the GEV parameters, estimated from observations, are able to incorporate location-specific geographic and meteorological information and thereby accommodate these effects on extreme precipitation. Gaussian fields capture additional unexplained spatial heterogeneity and overcome the sparse grid on which observations are collected, while a Bayesian model averaging component directly assesses model uncertainty. We find that mean summer precipitation, mean summer temperature, latitude, longitude, mean annual precipitation and elevation are good covariate candidates for hourly precipitation in our model. Summer indices succeed because hourly precipitation extremes often occur during the convective season. The spatial distribution of hourly and daily precipitation differs in Norway. Daily precipitation extremes are larger along the southwestern coast, where large-scale frontal systems dominate during fall season and the mountain ridge generates strong orographic enhancement. The largest hourly precipitation extremes are mostly produced by intense convective showers during summer, and are thus found along the entire southern coast, including the Oslo-region.
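The "generalized linear models on the GEV parameters" idea can be sketched as follows (synthetic data and a single hypothetical covariate such as mean summer precipitation; this is a plain maximum-likelihood sketch, not the Bayesian hierarchical model with latent Gaussian fields used in the study): the GEV location parameter is written as a linear function of the covariate and all parameters are fit jointly.

    import numpy as np
    from scipy.stats import genextreme
    from scipy.optimize import minimize

    # Hedged sketch: GEV with a covariate-dependent location parameter,
    # mu_i = b0 + b1 * covariate_i, fit by maximum likelihood.
    # Note: scipy's shape parameter c is the negative of the usual GEV xi.
    rng = np.random.default_rng(9)
    covariate = rng.uniform(50.0, 150.0, 40)                  # per-site covariate (synthetic)
    true_loc = 5.0 + 0.1 * covariate
    maxima = genextreme.rvs(c=-0.1, loc=true_loc, scale=3.0, random_state=rng)

    def neg_log_lik(theta):
        b0, b1, log_scale, shape = theta
        loc = b0 + b1 * covariate
        return -np.sum(genextreme.logpdf(maxima, shape, loc=loc, scale=np.exp(log_scale)))

    res = minimize(neg_log_lik, x0=[0.0, 0.0, 1.0, -0.1], method="Nelder-Mead")
    print(res.x)   # [b0, b1, log(scale), shape]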
NASA Astrophysics Data System (ADS)
Cifelli, R.; Mahoney, K. M.; Webb, R. S.; McCormick, B.
2017-12-01
To ensure structural and operational safety of dams and other water management infrastructure, water resources managers and engineers require information about the potential for heavy precipitation. The methods and data used to estimate extreme rainfall amounts for managing risk are based on 40-year-old science and in need of improvement. The need to evaluate new approaches based on the best science available has led the states of Colorado and New Mexico to engage a body of scientists and engineers in an innovative "ensemble approach" to updating extreme precipitation estimates. NOAA is at the forefront of one of three technical approaches that make up the "ensemble study"; the three approaches are conducted concurrently and in collaboration with each other. One approach is the conventional deterministic, "storm-based" method, another is a risk-based regional precipitation frequency estimation tool, and the third is an experimental approach utilizing NOAA's state-of-the-art High Resolution Rapid Refresh (HRRR) physically-based dynamical weather prediction model. The goal of the overall project is to use the individual strengths of these different methods to define an updated and broadly acceptable state of the practice for evaluation and design of dam spillways. This talk will highlight the NOAA research and NOAA's role in the overarching goal to better understand and characterize extreme precipitation estimation uncertainty. The research led by NOAA explores a novel high-resolution dataset and post-processing techniques using a super-ensemble of hourly forecasts from the HRRR model. We also investigate how this rich dataset may be combined with statistical methods to optimally cast the data in probabilistic frameworks. NOAA expertise in the physical processes that drive extreme precipitation is also employed to develop careful testing and improved understanding of the limitations of older estimation methods and assumptions. The process of decision making in the midst of uncertainty is a major part of this study. We will speak to how the ensemble approaches may be used in concert with one another to manage risk and enhance resiliency in the midst of uncertainty. Finally, the presentation will also address the implications of including climate change in future extreme precipitation estimation studies.
NASA Astrophysics Data System (ADS)
Hamdi, Y.; Bardet, L.; Duluc, C.-M.; Rebour, V.
2014-09-01
Nuclear power plants located on the French Atlantic coast are designed to be protected against extreme environmental conditions. The French authorities remain cautious by adopting a strict policy of nuclear plant flood prevention. Although coastal nuclear facilities in France are designed to very low probabilities of failure (e.g. 1000 year surge), exceptional surges (outliers induced by exceptional climatic events) have shown that the extreme sea levels estimated with the current statistical approaches could be underestimated. The estimation of extreme surges then requires the use of a statistical analysis approach having a more solid theoretical motivation. This paper deals with extreme surge frequency estimation using historical information (HI) about events occurred before the systematic record period. It also contributes to addressing the problem of the presence of outliers in data sets. The frequency models presented in the present paper have been quite successful in the field of hydrometeorology and river flooding but they have not been applied to sea level data sets to prevent marine flooding. In this work, we suggest two methods of incorporating the HI: the Peaks-Over-Threshold method with HI (POTH) and the Block Maxima method with HI (BMH). Two kinds of historical data can be used in the POTH method: classical Historical Maxima (HMax) data, and Over a Threshold Supplementary (OTS) data. In both cases, the data are structured in historical periods and can be used only as complement to the main systematic data. On the other hand, in the BMH method, the basic hypothesis in statistical modeling of HI is that at least one threshold of perception exists for the whole period (historical and systematic) and that during a given historical period preceding the period of tide gauging, only information about surges above this threshold has been recorded or archived. The two frequency models were applied to a case study from France, at the La Rochelle site where the storm Xynthia induced an outlier, to illustrate their potential, to compare their performances and especially to analyze the impact of the use of HI on the extreme surge frequency estimation.
NASA Astrophysics Data System (ADS)
Hamdi, Y.; Bardet, L.; Duluc, C.-M.; Rebour, V.
2015-07-01
Nuclear power plants located on the French Atlantic coast are designed to be protected against extreme environmental conditions. The French authorities remain cautious by adopting a strict policy of nuclear-plant flood prevention. Although coastal nuclear facilities in France are designed to very low probabilities of failure (e.g., 1000-year surge), exceptional surges (outliers induced by exceptional climatic events) have shown that the extreme sea levels estimated with the current statistical approaches could be underestimated. The estimation of extreme surges then requires the use of a statistical analysis approach having a more solid theoretical motivation. This paper deals with extreme-surge frequency estimation using historical information (HI) about events occurred before the systematic record period. It also contributes to addressing the problem of the presence of outliers in data sets. The frequency models presented in the present paper have been quite successful in the field of hydrometeorology and river flooding but they have not been applied to sea level data sets to prevent marine flooding. In this work, we suggest two methods of incorporating the HI: the peaks-over-threshold method with HI (POTH) and the block maxima method with HI (BMH). Two kinds of historical data can be used in the POTH method: classical historical maxima (HMax) data, and over-a-threshold supplementary (OTS) data. In both cases, the data are structured in historical periods and can be used only as complement to the main systematic data. On the other hand, in the BMH method, the basic hypothesis in statistical modeling of HI is that at least one threshold of perception exists for the whole period (historical and systematic) and that during a given historical period preceding the period of tide gauging, only information about surges above this threshold has been recorded or archived. The two frequency models were applied to a case study from France, at the La Rochelle site where the storm Xynthia induced an outlier, to illustrate their potential, to compare their performances and especially to analyze the impact of the use of HI on the extreme-surge frequency estimation.
Army Staff Automated Administrative Support System (ARSTADS) Report. Phase I. Volume II.
1980-07-01
requirements to transmit data with short fuse. This requirement varies from 1-6 times daily throughout the agency. Media used for transmission varies from...material automatically onto magnetic media. (1) Advantages. (a) Eliminates need for second or more typings of material. (b) Can be extremely cost...reduced and other methods of storage media will be possible.
ExM:System Support for Extreme-Scale, Many-Task Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Katz, Daniel S
The ever-increasing power of supercomputer systems is both driving and enabling the emergence of new problem-solving methods that require the efficient execution of many concurrent and interacting tasks. Methodologies such as rational design (e.g., in materials science), uncertainty quantification (e.g., in engineering), parameter estimation (e.g., for chemical and nuclear potential functions, and in economic energy systems modeling), massive dynamic graph pruning (e.g., in phylogenetic searches), Monte-Carlo-based iterative fixing (e.g., in protein structure prediction), and inverse modeling (e.g., in reservoir simulation) all have these requirements. These many-task applications frequently have aggregate computing needs that demand the fastest computers. For example, proposed next-generation climate model ensemble studies will involve 1,000 or more runs, each requiring 10,000 cores for a week, to characterize model sensitivity to initial condition and parameter uncertainty. The goal of the ExM project is to achieve the technical advances required to execute such many-task applications efficiently, reliably, and easily on petascale and exascale computers. In this way, we will open up extreme-scale computing to new problem solving methods and application classes. In this document, we report on the combined technical progress of the collaborative ExM project, and the institutional financial status of the portion of the project at University of Chicago, over the first 8 months (through April 30, 2011).
UVPROM dosimetry, microdosimetry and applications to SEU and extreme value theory
NASA Astrophysics Data System (ADS)
Scheick, Leif Zebediah
A new method is described for characterizing a device in terms of the statistical distribution of first failures. The method is based on the erasure of a commercial Ultra-Violet erasable Programmable Read Only Memory (UVPROM). The method of readout would be used on a spacecraft or in other restrictive radiation environments. The measurement of the charge remaining on the floating gate is used to determine absorbed dose. The method of determining dose does not require the detector to be destroyed or erased, nor does it affect the ability to take further measurements. This is compared to extreme value theory applied to the statistical distributions that apply to this device. This technique predicts the threshold of Single Event Effects (SEE), like anomalous changes in erasure time in programmable devices due to high microdose energy-deposition events. This technique also allows for advanced, non-destructive screening of single microelectronic devices for predictable response in stressful (i.e., radiation) environments.
PPM mixtures of formaldehyde in gas cylinders: Stability and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wong, K.C.; Miller, S.B.; Patterson, L.M.
1999-07-01
Scott Specialty Gases has been successful in producing stable calibration gases of formaldehyde at low concentration. Critical to this success has been the development of a treatment process for high pressure aluminum cylinders. Formaldehyde cylinders having concentrations of 20 ppm and 4 ppm were found to show only a small decline in concentration over a period of approximately 12 months. Since no NIST traceable formaldehyde standards (or Standard Reference Material) are available, all Scott's formaldehyde cylinders were originally certified by the traditional impinger method. This method involves an extremely tedious purification procedure for 2,4-dinitrophenylhydrazine (2,4-DNPH). A modified version of the impinger method has been developed that does not require extensive reagent purification for formaldehyde analysis. Extremely low formaldehyde blanks have been obtained with the modified method. The HPLC conditions of the original method were used for chromatographic separations. The modified method results in a lower analytical uncertainty for the formaldehyde standard mixtures. Consequently, it is possible to discern small differences between analytical results that are important for the stability study.
EXTENDING THE FLOOR AND THE CEILING FOR ASSESSMENT OF PHYSICAL FUNCTION
Fries, James F.; Lingala, Bharathi; Siemons, Liseth; Glas, Cees A. W.; Cella, David; Hussain, Yusra N; Bruce, Bonnie; Krishnan, Eswar
2014-01-01
Objective The objective of the current study was to improve the assessment of physical function by improving the precision of assessment at the floor (extremely poor function) and at the ceiling (extremely good health) of the health continuum. Methods Under the NIH PROMIS program, we developed new physical function floor and ceiling items to supplement the existing item bank. Using item response theory (IRT) and the standard PROMIS methodology, we developed 30 floor items and 26 ceiling items and administered them during a 12-month prospective observational study of 737 individuals at the extremes of health status. Change over time was compared across anchor instruments and across items by means of effect sizes. Using the observed changes in scores, we back-calculated sample size requirements for the new and comparison measures. Results We studied 444 subjects with chronic illness and/or extreme age, and 293 generally fit subjects including athletes in training. IRT analyses confirmed that the new floor and ceiling items outperformed reference items (p<0.001). The estimated post-hoc sample size requirements were reduced by a factor of two to four at the floor and a factor of two at the ceiling. Conclusion Extending the range of physical function measurement can substantially improve measurement quality, can reduce sample size requirements and improve research efficiency. The paradigm shift from Disability to Physical Function includes the entire spectrum of physical function, signals improvement in the conceptual base of outcome assessment, and may be transformative as medical goals more closely approach societal goals for health. PMID:24782194
Brachial artery protected by wrapped latissimus dorsi muscle flap in high voltage electrical injury
Gencel, E.; Eser, C.; Kokacya, O.; Kesiktas, E.; Yavuz, M.
2016-01-01
Summary High voltage electrical injury can disrupt the vascular system and lead to extremity amputations. It is important to protect main vessels from progressive burn necrosis in order to salvage a limb. The brachial artery should be totally isolated from the burned area by a muscle flap to prevent vessel disruption. In this study, we report the use of a wrap-around latissimus dorsi muscle flap to protect a skeletonized brachial artery in a high voltage electrical injury in order to salvage the upper extremity and restore function. The flap was wrapped around the exposed brachial artery segment, and the luminal status of the artery was assessed using magnetic resonance angiography. No vascular intervention was required. The flap survived completely with good elbow function. Extremity amputation was not encountered. This method using a latissimus dorsi flap allows the surgeon to protect the main upper extremity artery and reconstruct arm defects, which contributes to restoring arm function in high voltage electrical injury. PMID:28149236
Brachial artery protected by wrapped latissimus dorsi muscle flap in high voltage electrical injury.
Gencel, E; Eser, C; Kokacya, O; Kesiktas, E; Yavuz, M
2016-06-30
High voltage electrical injury can disrupt the vascular system and lead to extremity amputations. It is important to protect main vessels from progressive burn necrosis in order to salvage a limb. The brachial artery should be totally isolated from the burned area by a muscle flap to prevent vessel disruption. In this study, we report the use of a wrap-around latissimus dorsi muscle flap to protect a skeletonized brachial artery in a high voltage electrical injury in order to salvage the upper extremity and restore function. The flap was wrapped around the exposed brachial artery segment, and the luminal status of the artery was assessed using magnetic resonance angiography. No vascular intervention was required. The flap survived completely with good elbow function. Extremity amputation was not encountered. This method using a latissimus dorsi flap allows the surgeon to protect the main upper extremity artery and reconstruct arm defects, which contributes to restoring arm function in high voltage electrical injury.
NASA Astrophysics Data System (ADS)
Pankratz, C. K.; Baker, D. N.; Jaynes, A. N.; Elkington, S. R.; Baltzer, T.; Sanchez, F.
2017-12-01
Society's growing reliance on complex and highly interconnected technological systems makes us increasingly vulnerable to the effects of space weather events - maybe more than for any other natural hazard. An extreme solar storm today could conceivably impact hundreds of the more than 1400 operating Earth satellites. Such an extreme storm could cause collapse of the electrical grid on continental scales. The effects on navigation, communication, and remote sensing of our home planet could be devastating to our social functioning. Thus, it is imperative that the scientific community address the question of just how severe events might become. At least as importantly, it is crucial that policy makers and public safety officials be informed by the facts on what might happen during extreme conditions. This requires essentially real-time alerts, warnings, and also forecasts of severe space weather events, which in turn demands measurements, models, and associated data products to be available via the most effective data discovery and access methods possible. Similarly, advancement in the fundamental scientific understanding of space weather processes is also vital, requiring that researchers have convenient and effective access to a wide variety of data sets and models from multiple sources. The space weather research community, as with many scientific communities, must access data from dispersed and often uncoordinated data repositories to acquire the data necessary for the analysis and modeling efforts that advance our understanding of solar influences and space physics on the Earth's environment. The Laboratory for Atmospheric and Space Physics (LASP), as a leading institution in both producing data products and advancing the state of scientific understanding of space weather processes, is well positioned to address many of these issues. In this presentation, we will outline the motivating factors for effective space weather data access, summarize the various data and models that are available, and present methods for meeting the data management and access needs of the disparate communities who require low-latency space weather data and information.
Code of Federal Regulations, 2010 CFR
2010-07-01
§ 355.12 What quantities of extremely hazardous substances trigger emergency planning requirements? (40 CFR, Protection of Environment; Emergency Planning and Notification, Emergency Planning, Who Must Comply; revised as of 2010-07-01)
Replicated Composite Optics Development
NASA Technical Reports Server (NTRS)
Engelhaupt, Darell
1997-01-01
Advanced optical systems for applications such as grazing incidence Wolter I x-ray mirror assemblies require extraordinary mirror surfaces in terms of fine surface finish and figure. The impeccable mirror surface is on the inside of the rotational mirror form. One practical method of producing devices with these requirements is to first fabricate an exterior surface for the optical device and then replicate that surface to obtain the inverse component with lightweight characteristics. The replicated optic is no better than the master or mandrel from which it is made. This task is a continuance of previous studies to identify methods and materials for forming these extremely low roughness optical components.
Developing the Cleanliness Requirements for an Organic-detection Instrument MOMA-MS
NASA Technical Reports Server (NTRS)
Perry, Radford; Canham, John; Lalime, Erin
2015-01-01
The cleanliness requirements for an organic-detection instrument, like the Mars Organic Molecule Analyzer Mass Spectrometer (MOMA-MS), on a Planetary Protection Class IVb mission can be extremely stringent. These include surface molecular and particulate contamination, outgassing, and bioburden. The prime contractor for the European Space Agency's ExoMars 2018 project, Thales Alenia Space Italy, provided requirements based on a standard, conservative approach of defining limits, which yielded levels that are unverifiable by standard cleanliness verification methods. Additionally, the conservative method for determining contamination surface area uses underestimation while the conservative bioburden surface area relies on overestimation, which results in inconsistencies in the normalized reporting. This presentation will provide a survey of the challenge of defining requirements that can be reasonably verified and still remain appropriate to the core science of the ExoMars mission.
Requirements UML Tool (RUT) Expanded for Extreme Programming (CI02)
NASA Technical Reports Server (NTRS)
McCoy, James R.
2003-01-01
A procedure for capturing and managing system requirements that incorporates XP user stories. Because costs associated with identifying problems in requirements increase dramatically over the lifecycle of a project, a method for identifying sources of software risk in user stories is urgently needed. This initiative aims to determine a set of guidelines for user stories that will result in high-quality requirements. To further this initiative, a tool is needed to analyze user stories: one that can assess the quality of individual user stories, detect sources of software risk, produce software metrics, and identify areas in user stories that can be improved.
Spacecraft Dynamics and Control Program at AFRPL
NASA Technical Reports Server (NTRS)
Das, A.; Slimak, L. K. S.; Schloegel, W. T.
1986-01-01
A number of future DOD and NASA spacecraft such as the space based radar will be not only an order of magnitude larger in dimension than the current spacecraft, but will exhibit extreme structural flexibility with very low structural vibration frequencies. Another class of spacecraft (such as the space defense platforms) will combine large physical size with extremely precise pointing requirement. Such problems require a total departure from the traditional methods of modeling and control system design of spacecraft where structural flexibility is treated as a secondary effect. With these problems in mind, the Air Force Rocket Propulsion Laboratory (AFRPL) initiated research to develop dynamics and control technology so as to enable the future large space structures (LSS). AFRPL's effort in this area can be subdivided into the following three overlapping areas: (1) ground experiments, (2) spacecraft modeling and control, and (3) sensors and actuators. Both the in-house and contractual efforts of the AFRPL in LSS are summarized.
If We Can't Predict Solar Cycle 24, What About Solar Cycle 34?
NASA Technical Reports Server (NTRS)
Pesnell, William Dean
2008-01-01
Predictions of solar activity in Solar Cycle 24 range from 50% larger than SC 23 to the onset of a Grand Minimum. Because low levels of solar activity are associated with global cooling in paleoclimate and isotopic records, anticipating these extremes is required in any long-term extrapolation of climate variability. Climate models often look forward 100 or more years, which would mean 10 solar cycles into the future. Predictions of solar activity are derived from a number of methods, most of which, such as climatology and physics-based models, will be familiar to atmospheric scientists. More than 50 predictions of the maximum amplitude of SC 24 published before solar minimum will be discussed. Descriptions of several methods that result in the extreme predictions and some anticipation of even longer term predictions will be presented.
The Application of Weikart's Theories in Teaching Non-English Speaking Students How to Read.
ERIC Educational Resources Information Center
Layton, Kent
Non-English speaking students of average intelligence experience extreme frustration when learning to read. The frustration is partly a result of simultaneous requirements to speak, read, listen, and write in the new language. It is also possible that the teaching methods and strategies employed by the teachers could be harmful to non-English…
Magnetic Bubble Memories for Data Collection in Sounding Rockets,
1982-01-29
generate interest in bubbles as a mass storage device for microprocessor-based equipment, manufacturers have come up with a variety of diversified...absence of a bubble represents a "0". With diameters on the order of 1 to 5 micrometers, these bubbles are so small that extremely tiny chips can hold...methods of transfer: polled I/O, interrupt driven I/O, and direct memory access (DMA). The first two methods require the host processor be involved
The development of a super-fine-grained nuclear emulsion
NASA Astrophysics Data System (ADS)
Asada, Takashi; Naka, Tatsuhiro; Kuwabara, Ken-ichi; Yoshimoto, Masahiro
2017-06-01
A nuclear emulsion with micronized crystals is required for the tracking detection of submicron ionizing particles, which are one of the targets of dark-matter detection and other techniques. We found that a new production method, called the PVA-gelatin mixing method (PGMM), could effectively control crystal size from 20 nm to 50 nm. We called the two types of emulsion produced with the new method the nano imaging tracker and the ultra-nano imaging tracker. Their composition and spatial resolution were measured, and the results indicate that these emulsions detect extremely short tracks.
The Detection Method of Escherichia coli in Water Resources: A Review
NASA Astrophysics Data System (ADS)
Nurliyana, M. R.; Sahdan, M. Z.; Wibowo, K. M.; Muslihati, A.; Saim, H.; Ahmad, S. A.; Sari, Y.; Mansor, Z.
2018-04-01
This article reviews several approaches for Escherichia coli (E. coli) bacteria detection, from conventional methods through emerging methods to biosensor-based techniques. Detection and enumeration of E. coli bacteria usually take a long time because assessment is normally laboratory based: culturing the samples requires 24 to 72 hours after sampling before results are available. Although faster techniques for detecting E. coli in water, such as the Polymerase Chain Reaction (PCR) and the Enzyme-Linked Immunosorbent Assay (ELISA), have been developed, they still require transporting samples from the water resource to the laboratory, and their high cost, complicated equipment, complex procedures, and need for skilled specialists limit their widespread use in water quality monitoring. Recently, the development of biosensor devices that are easy to use, portable, highly sensitive, and selective has become indispensable for detecting extremely low concentrations of pathogenic E. coli bacteria in water samples.
Relative optical navigation around small bodies via Extreme Learning Machine
NASA Astrophysics Data System (ADS)
Law, Andrew M.
To perform close proximity operations in a low-gravity environment, relative and absolute positions are vital information for the maneuver. Hence navigation is inseparably integrated in space travel. The Extreme Learning Machine (ELM) is presented as an optical navigation method around small celestial bodies. Optical navigation uses visual observation instruments such as a camera to acquire useful data and determine spacecraft position. The required input data for operation are merely a single image strip and a nadir image. ELM is a machine learning Single Layer Feed-forward Network (SLFN), a type of neural network (NN). The algorithm is developed on the premise that input weights and biases can be randomly assigned and do not require back-propagation. The learned model is the set of output layer weights, which are used to calculate a prediction. Together, Extreme Learning Machine Optical Navigation (ELM OpNav) utilizes optical images and the ELM algorithm to train the machine to navigate around a target body. In this thesis, the asteroid Vesta is the designated celestial body. The trained ELMs estimate the position of the spacecraft during operation with a single data set. The results show the approach is promising and potentially suitable for on-board navigation.
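As a rough illustration of the training scheme the abstract describes (random, untrained hidden-layer weights followed by a linear least-squares solve for the output weights), the following minimal Python sketch trains an ELM regressor on synthetic data. The feature matrix, targets, and layer size are illustrative assumptions, not the ELM OpNav pipeline itself.

```python
# Minimal ELM regression sketch, assuming a generic feature matrix X and
# target vector y; names, sizes, and data are illustrative only.
import numpy as np

def elm_train(X, y, n_hidden=100, seed=0):
    """Randomly assign input weights and biases, then solve the output
    weights by linear least squares (no back-propagation)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn a smooth mapping from image-derived features to a position coordinate.
rng = np.random.default_rng(1)
X = rng.random((200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.standard_normal(200)
W, b, beta = elm_train(X, y)
print("mean abs error:", np.abs(elm_predict(X, W, b, beta) - y).mean())
```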
Johnson, S; Hennessy, E; Smith, R; Trikic, R; Wolke, D; Marlow, N
2009-07-01
To assess academic attainment and special educational needs (SEN) in extremely preterm children in middle childhood. Of 307 extremely preterm (≤25 weeks) survivors born in the UK and Ireland in 1995, 219 (71%) were re-assessed at 11 years of age and compared to 153 classmates born at term, using standardised tests of cognitive ability and academic attainment and teacher reports of school performance and SEN. Multiple imputation was used to correct for selective dropout. Extremely preterm children had significantly lower scores than classmates for cognitive ability (-20 points; 95% CI -23 to -17), reading (-18 points; -22 to -15) and mathematics (-27 points; -31 to -23). Twenty-nine (13%) extremely preterm children attended special school. In mainstream schools, 105 (57%) extremely preterm children had SEN (OR 10; 6 to 18) and 103 (55%) required SEN resource provision (OR 10; 6 to 18). Teachers rated 50% of extremely preterm children as having below average attainment compared with 5% of classmates (OR 18; 8 to 41). Extremely preterm children who entered compulsory education an academic year early due to preterm birth had similar academic attainment but required more SEN support (OR 2; 1.0 to 3.6). Extremely preterm survivors remain at high risk for learning impairments and poor academic attainment in middle childhood. A significant proportion require full-time specialist education and over half of those attending mainstream schools require additional health or educational resources to access the national curriculum. The prevalence and impact of SEN are likely to increase as these children approach the transition to secondary school.
Probabilistic safety assessment of the design of tall buildings under extreme load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Králik, Juraj, E-mail: juraj.kralik@stuba.sk
2016-06-08
The paper describes some experiences from the deterministic and probabilistic analysis of the safety of tall building structures. The methods and requirements of Eurocode EN 1990, standard ISO 2394 and the JCSS are presented. The uncertainties of the model and of the resistance of the structures are considered using simulation methods. The MONTE CARLO, LHS and RSM probabilistic methods are compared with the deterministic results. An example probability analysis of the safety of a tall building demonstrates the effectiveness of probability-based design of structures using the Finite Element Method.
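As a hedged illustration of the crude Monte Carlo step mentioned above, the sketch below estimates a failure probability for a generic limit state g = R - E (resistance minus load effect). The distributions and parameter values are invented for the example and are not taken from the tall-building study; the LHS and RSM variants are not shown.

```python
# Crude Monte Carlo estimate of a failure probability for a generic
# limit state g = R - E; distributions and parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
R = rng.lognormal(mean=np.log(300.0), sigma=0.10, size=n)  # resistance [kN]
E = rng.gumbel(loc=180.0, scale=15.0, size=n)              # extreme load effect [kN]
pf = np.mean(R - E < 0.0)                                  # estimated failure probability
se = np.sqrt(pf * (1.0 - pf) / n)                          # Monte Carlo standard error
print(f"pf ~ {pf:.2e} +/- {se:.1e}")
```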
Impact of extreme weather events and climate change for health and social care systems.
Curtis, Sarah; Fair, Alistair; Wistow, Jonathan; Val, Dimitri V; Oven, Katie
2017-12-05
This review, commissioned by the Research Councils UK Living With Environmental Change (LWEC) programme, concerns research on the impacts on health and social care systems in the United Kingdom of extreme weather events, under conditions of climate change. Extreme weather events considered include heatwaves, coldwaves and flooding. Using a structured review method, we consider evidence regarding the currently observed and anticipated future impacts of extreme weather on health and social care systems and the potential of preparedness and adaptation measures that may enhance resilience. We highlight a number of general conclusions which are likely to be of international relevance, although the review focussed on the situation in the UK. Extreme weather events impact the operation of health services through the effects on built, social and institutional infrastructures which support health and health care, and also because of changes in service demand as extreme weather impacts on human health. Strategic planning for extreme weather and impacts on the care system should be sensitive to within country variations. Adaptation will require changes to built infrastructure systems (including transport and utilities as well as individual care facilities) and also to institutional and social infrastructure supporting the health care system. Care sector organisations, communities and individuals need to adapt their practices to improve resilience of health and health care to extreme weather. Preparedness and emergency response strategies call for action extending beyond the emergency response services, to include health and social care providers more generally.
The use of historical information for regional frequency analysis of extreme skew surge
NASA Astrophysics Data System (ADS)
Frau, Roberto; Andreewsky, Marc; Bernardara, Pietro
2018-03-01
The design of effective coastal protection requires an adequate estimate of the annual occurrence probability of rare events associated with return periods of up to 10³ years. Regional frequency analysis (RFA) has been proven to be an applicable way to estimate extreme events by pooling regional data into large, spatially distributed datasets. Nowadays, historical data are available to provide new insight into past event estimation. Using historical information can increase the precision and reliability of regional extreme quantile estimation. However, historical data come from significant extreme events that were not recorded by tide gauges; they are typically isolated observations, unlike the continuous data from systematic tide gauge measurements. This complicates the definition of the duration of the observation period, which is crucial for estimating the frequency of extreme occurrences. For this reason, we introduce here the concept of credible duration. The proposed RFA method (hereinafter referenced as FAB, from the names of the authors) allows the use of historical data together with systematic data as a result of the credible duration concept.
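For readers who want a concrete anchor for the return-level language used here, the sketch below fits a GEV distribution to synthetic annual maxima and reads off a 1000-year level with SciPy. It illustrates standard at-site return-level estimation only; it does not implement the FAB regional method or the credible-duration treatment of historical data.

```python
# Generic at-site return-level sketch on synthetic annual maxima.
import numpy as np
from scipy.stats import genextreme

annual_max_surge = np.random.default_rng(2).gumbel(1.0, 0.3, size=60)  # synthetic skew surges [m]
shape, loc, scale = genextreme.fit(annual_max_surge)
T = 1000.0                                                  # target return period [years]
z_T = genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
print(f"{T:.0f}-year skew surge estimate: {z_T:.2f} m")
```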
Characteristics and safety assessment of intractable proteins in genetically modified crops.
Bushey, Dean F; Bannon, Gary A; Delaney, Bryan F; Graser, Gerson; Hefford, Mary; Jiang, Xiaoxu; Lee, Thomas C; Madduri, Krishna M; Pariza, Michael; Privalle, Laura S; Ranjan, Rakesh; Saab-Rincon, Gloria; Schafer, Barry W; Thelen, Jay J; Zhang, John X Q; Harper, Marc S
2014-07-01
Genetically modified (GM) crops may contain newly expressed proteins that are described as "intractable". Safety assessment of these proteins may require some adaptations to the current assessment procedures. Intractable proteins are defined here as those proteins with properties that make it extremely difficult or impossible with current methods to express in heterologous systems; isolate, purify, or concentrate; quantify (due to low levels); demonstrate biological activity; or prove equivalency with plant proteins. Five classes of intractable proteins are discussed here: (1) membrane proteins, (2) signaling proteins, (3) transcription factors, (4) N-glycosylated proteins, and (5) resistance proteins (R-proteins, plant pathogen recognition proteins that activate innate immune responses). While the basic tiered weight-of-evidence approach for assessing the safety of GM crops proposed by the International Life Sciences Institute (ILSI) in 2008 is applicable to intractable proteins, new or modified methods may be required. For example, the first two steps in Tier I (hazard identification) analysis, gathering of applicable history of safe use (HOSU) information and bioinformatics analysis, do not require protein isolation. The extremely low level of expression of most intractable proteins should be taken into account while assessing safety of the intractable protein in GM crops. If Tier II (hazard characterization) analyses requiring animal feeding are judged to be necessary, alternatives to feeding high doses of pure protein may be needed. These alternatives are discussed here. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, J.; Wasko, C.; Johnson, F.; Evans, J. P.; Sharma, A.
2018-05-01
The spatial extent and organization of extreme storm events has important practical implications for flood forecasting. Recently, conflicting evidence has been found on the observed changes of storm spatial extent with increasing temperatures. To further investigate this question, a regional climate model assessment is presented for the Greater Sydney region, in Australia. Two regional climate models were considered: the first a convection-resolving simulation at 2-km resolution, the second a resolution of 10 km with three different convection parameterizations. Both the 2- and the 10-km resolutions that used the Betts-Miller-Janjic convective scheme simulate decreasing storm spatial extent with increasing temperatures for 1-hr duration precipitation events, consistent with the observation-based study in Australia. However, other observed relationships of extreme rainfall with increasing temperature were not well represented by the models. Improved methods for considering storm organization are required to better understand potential future changes.
Diagnosis and Treatment of Lower Extremity Deep Vein Thrombosis: Korean Practice Guidelines
Min, Seung-Kee; Kim, Young Hwan; Joh, Jin Hyun; Kang, Jin Mo; Park, Ui Jun; Kim, Hyung-Kee; Chang, Jeong-Hwan; Park, Sang Jun; Kim, Jang Yong; Bae, Jae Ik; Choi, Sun Young; Kim, Chang Won; Park, Sung Il; Yim, Nam Yeol; Jeon, Yong Sun; Yoon, Hyun-Ki; Park, Ki Hyuk
2016-01-01
Lower extremity deep vein thrombosis is a serious medical condition that can result in death or major disability due to pulmonary embolism or post-thrombotic syndrome. Appropriate diagnosis and treatment are required to improve symptoms and salvage the affected limb. Early thrombus clearance rapidly resolves symptoms related to venous obstruction, restores valve function and reduces the incidence of post-thrombotic syndrome. Recently, endovascular treatment has been established as a standard method for early thrombus removal. However, there are a variety of views regarding the indications and procedures among medical institutions and operators. Therefore, we intend to provide evidence-based guidelines for diagnosis and treatment of lower extremity deep vein thrombosis by multidisciplinary consensus. These guidelines are the result of a close collaboration between interventional radiologists and vascular surgeons. The goals of these guidelines are to improve treatment, to serve as a guide to the clinician, and consequently to contribute to public health care. PMID:27699156
Extreme value theory applied to the definition of bathing water quality discounting limits.
Haggarty, R A; Ferguson, C A; Scott, E M; Iroegbu, C; Stidson, R
2010-02-01
The European Community Bathing Water Directive (European Parliament, 2006) set compliance standards for bathing waters across Europe, with minimum standards for microbiological indicators to be attained at all locations by 2015. The Directive allows up to 15% of samples affected by short-term pollution episodes to be disregarded from the figures used to classify bathing waters, provided certain management criteria have been met, including informing the public of short-term water pollution episodes. Therefore, a scientifically justifiable discounting limit is required which could be used as a management tool to determine the samples that should be removed. This paper investigates different methods of obtaining discounting limits, focusing in particular on extreme value methodology applied to data from Scottish bathing waters. Return level based limits derived from threshold models applied at a site-specific level improved the percentage of sites which met at least the minimum required standard. This approach provides a method of obtaining limits which identify the samples that should be removed from compliance calculations, although care has to be taken in terms of the quantity of data which is removed. (c) 2009 Elsevier Ltd. All rights reserved.
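As a rough, generic illustration of a return-level-based discounting limit derived from a threshold (peaks-over-threshold) model, the sketch below fits a generalised Pareto distribution to excesses over a high threshold and computes the level exceeded on average once in a chosen number of samples. The threshold choice, the synthetic data, and the return period are assumptions for the example, not values from the Scottish bathing-water analysis.

```python
# Peaks-over-threshold sketch for a return-level-based limit.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
counts = rng.lognormal(mean=4.0, sigma=1.0, size=500)   # synthetic indicator counts
u = np.quantile(counts, 0.95)                           # high threshold
excess = counts[counts > u] - u
c, _, scale = genpareto.fit(excess, floc=0.0)           # GPD fit to threshold excesses
rate = excess.size / counts.size                        # per-sample exceedance probability
m = 200.0                                               # one exceedance per m samples on average
limit = u + genpareto.ppf(1.0 - 1.0 / (m * rate), c, loc=0.0, scale=scale)
print(f"return-level-based discounting limit: {limit:.0f}")
```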
Using SpF to Achieve Petascale for Legacy Pseudospectral Applications
NASA Technical Reports Server (NTRS)
Clune, Thomas L.; Jiang, Weiyuan
2014-01-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical kernels that can be performed entirely in-processor. The granularity of domain decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe our experience in porting legacy pseudospectral models, MoSST and DYNAMO, to use SpF as well as present preliminary performance results provided by the improved scalability.
A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine
Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini
2013-01-01
A new machine learning method referred to as F-score_ELM was proposed to classify the lying and truth-telling using the electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses from these subjects. Then, a recently-developed classifier called extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of the hidden nodes of ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models including the training and testing time, sensitivity and specificity from the training and testing sets, as well as network size. The experimental results showed that the number of the hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
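To make the F-score feature-selection step concrete, the sketch below computes the standard two-class F-score for each column of a feature matrix and ranks the columns. The data and array names are synthetic stand-ins, not the EEG probe-response features of the study, and the grid search over ELM hidden nodes is not reproduced.

```python
# Two-class F-score feature ranking on a synthetic feature matrix.
import numpy as np

def f_scores(X, y):
    """Return one F-score per column of X for binary labels y in {0, 1}."""
    Xp, Xn = X[y == 1], X[y == 0]
    num = (Xp.mean(0) - X.mean(0)) ** 2 + (Xn.mean(0) - X.mean(0)) ** 2
    den = Xp.var(0, ddof=1) + Xn.var(0, ddof=1)   # within-class scatter
    return num / den

rng = np.random.default_rng(4)
y = rng.integers(0, 2, size=300)
X = rng.normal(size=(300, 31))
X[:, 0] += 2.0 * y                           # make feature 0 genuinely discriminative
ranking = np.argsort(f_scores(X, y))[::-1]
print("top-ranked features:", ranking[:5])   # feature 0 should come first
```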
Electrostatic particle trap for ion beam sputter deposition
Vernon, Stephen P.; Burkhart, Scott C.
2002-01-01
A method and apparatus for the interception and trapping of or reflection of charged particulate matter generated in ion beam sputter deposition. The apparatus involves an electrostatic particle trap which generates electrostatic fields in the vicinity of the substrate on which target material is being deposited. The electrostatic particle trap consists of an array of electrode surfaces, each maintained at an electrostatic potential, and with their surfaces parallel or perpendicular to the surface of the substrate. The method involves interception and trapping of or reflection of charged particles achieved by generating electrostatic fields in the vicinity of the substrate, and configuring the fields to force the charged particulate material away from the substrate. The electrostatic charged particle trap enables prevention of charged particles from being deposited on the substrate thereby enabling the deposition of extremely low defect density films, such as required for reflective masks of an extreme ultraviolet lithography (EUVL) system.
Johnson, Mitchell E; Landers, James P
2004-11-01
Laser-induced fluorescence is an extremely sensitive method for detection in chemical separations. In addition, it is well-suited to detection in small volumes, and as such is widely used for capillary electrophoresis and microchip-based separations. This review explores the detailed instrumental conditions required for sub-zeptomole, sub-picomolar detection limits. The key to achieving the best sensitivity is to use an excitation and emission volume that is matched to the separation system and that, simultaneously, will keep scattering and luminescence background to a minimum. We discuss how this is accomplished with confocal detection, 90 degree on-capillary detection, and sheath-flow detection. It is shown that each of these methods has its advantages and disadvantages, but that all can be used to produce extremely sensitive detectors for capillary- or microchip-based separations. Analysis of these capabilities allows prediction of the optimal means of achieving ultrasensitive detection on microchips.
A statistical model of extreme storm rainfall
NASA Astrophysics Data System (ADS)
Smith, James A.; Karr, Alan F.
1990-02-01
A model of storm rainfall is developed for the central Appalachian region of the United States. The model represents the temporal occurrence of major storms and, for a given storm, the spatial distribution of storm rainfall. Spatial inhomogeneities of storm rainfall and temporal inhomogeneities of the storm occurrence process are explicitly represented. The model is used for estimating recurrence intervals of extreme storms. The parameter estimation procedure developed for the model is based on the substitution principle (method of moments) and requires data from a network of rain gages. The model is applied to a 5000 mi² (12,950 km²) region in the Valley and Ridge Province of Virginia and West Virginia.
NASA Astrophysics Data System (ADS)
Harudin, N.; Jamaludin, K. R.; Muhtazaruddin, M. Nabil; Ramlie, F.; Muhamad, Wan Zuki Azman Wan
2018-03-01
The T-Method is one of the techniques within the Mahalanobis-Taguchi System, developed specifically for multivariate prediction. Prediction with the T-Method is possible even with a very limited sample size. The user must clearly understand the trend of the population data, since the method does not account for outliers within it. Outliers can cause apparent non-normality, and classical methods then break down. Robust parameter estimates exist that provide satisfactory results both when the data contain outliers and when they do not; among them are the robust location and scale estimators Shamos-Bickel (SB) and Hodges-Lehmann (HL), which can be used in place of the classical mean and standard deviation. Embedding these into the normalization stage of the T-Method may help enhance its accuracy and allows the robustness of the T-Method itself to be analysed. For the higher-sample-size case study, the T-Method gave the lowest average error percentage (3.09%) on data with extreme outliers, while HL and SB gave the lowest error percentage (4.67%) for data without extreme outliers, with only a small difference from the T-Method. The trend in prediction error is reversed for the lower-sample-size case study. The results show that with a minimum sample size, where outliers are always at low risk, the T-Method performs better, while for a higher sample size with extreme outliers the T-Method also shows better prediction than the alternatives. For the case studies conducted in this research, the standard T-Method normalization gives satisfactory results, and it is not worthwhile to replace it with HL and SB (or with the classical mean and standard deviation), since doing so has only a minimal effect on the error percentages. Normalization using the T-Method is still considered to carry a lower risk with respect to the effect of outliers.
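For concreteness, the sketch below computes the two robust estimators named above, the Hodges-Lehmann location estimate and the Shamos-Bickel scale estimate, and contrasts them with the classical mean and standard deviation on a small sample containing one extreme outlier. The consistency constant and the toy data are standard textbook choices, not values from the case studies, and the T-Method normalization itself is not reproduced.

```python
# Hodges-Lehmann (HL) location and Shamos-Bickel (SB) scale estimates
# versus the classical mean and standard deviation.
import numpy as np
from itertools import combinations

def hodges_lehmann(x):
    """Median of the Walsh averages (pairwise means, including each point itself)."""
    pair_means = [(a + b) / 2.0 for a, b in combinations(x, 2)]
    return np.median(np.concatenate([np.asarray(x), pair_means]))

def shamos_bickel(x):
    """Median absolute pairwise difference, scaled for consistency under normality."""
    pair_diffs = [abs(a - b) for a, b in combinations(x, 2)]
    return 1.0483 * np.median(pair_diffs)

x = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 30.0])       # one extreme outlier
print("classical:", np.mean(x), np.std(x, ddof=1))      # pulled strongly by the outlier
print("robust:   ", hodges_lehmann(x), shamos_bickel(x))
```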
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report presents two numerical methods considered for the computation of fuel-optimal, low-thrust orbit transfers in large numbers of burns. The origins of these methods are observations made with the extremal solutions of transfers in small numbers of burns; there seems to exist a trend such that the longer the time allowed to perform an optimal transfer the less fuel that is used. These longer transfers are obviously of interest since they require a motor of low thrust; however, we also find a trend that the longer the time allowed to perform the optimal transfer the more burns are required to satisfy optimality. Unfortunately, this usually increases the difficulty of computation. Both of the methods described use small-numbered burn solutions to determine solutions in large numbers of burns. One method is a homotopy method that corrects for problems that arise when a solution requires a new burn or coast arc for optimality. The other method is to simply patch together long transfers from smaller ones. An orbit correction problem is solved to develop this method. This method may also lead to a good guidance law for transfer orbits with long transfer times.
Kakuda, Tsuneo; Shojo, Hideki; Tanaka, Mayumi; Nambiar, Phrabhakaran; Minaguchi, Kiyoshi; Umetsu, Kazuo; Adachi, Noboru
2016-01-01
Mitochondrial DNA (mtDNA) serves as a powerful tool for exploring matrilineal phylogeographic ancestry, as well as for analyzing highly degraded samples, because of its polymorphic nature and high copy numbers per cell. The recent advent of complete mitochondrial genome sequencing has led to improved techniques for phylogenetic analyses based on mtDNA, and many multiplex genotyping methods have been developed for the hierarchical analysis of phylogenetically important mutations. However, few high-resolution multiplex genotyping systems for analyzing East-Asian mtDNA can be applied to extremely degraded samples. Here, we present a multiplex system for analyzing mitochondrial single nucleotide polymorphisms (mtSNPs), which relies on a novel amplified product-length polymorphisms (APLP) method that uses inosine-flapped primers and is specifically designed for the detailed haplogrouping of extremely degraded East-Asian mtDNAs. We used fourteen 6-plex polymerase chain reactions (PCRs) and subsequent electrophoresis to examine 81 haplogroup-defining SNPs and 3 insertion/deletion sites, and we were able to securely assign the studied mtDNAs to relevant haplogroups. Our system requires only 1×10⁻¹³ g (100 fg) of crude DNA to obtain a full profile. Owing to its small amplicon size (<110 bp), this new APLP system was successfully applied to extremely degraded samples for which direct sequencing of hypervariable segments using mini-primer sets was unsuccessful, and proved to be more robust than conventional APLP analysis. Thus, our new APLP system is effective for retrieving reliable data from extremely degraded East-Asian mtDNAs. PMID:27355212
Cazelle, Elodie; Eskes, Chantra; Hermann, Martina; Jones, Penny; McNamee, Pauline; Prinsen, Menk; Taylor, Hannah; Wijnands, Marcel V W
2014-06-01
A.I.S.E. investigated the suitability of histopathological evaluations as an additional endpoint to the regulatory adopted ICE in vitro test method (OECD TG 438) to identify non-extreme pH detergent and cleaning products that require classification as EU CLP/UN GHS Category 1 (serious eye damage). To this aim, a total of 30 non-extreme pH products covering the range of in vivo classifications for eye irritation, and representing various product categories were tested. Epithelium vacuolation (mid and lower layers) and erosion (at least moderate) were found to be the most relevant histopathological effects induced by products classified in vivo as Category 1. Histopathology criteria specifically developed for non-extreme pH detergent and cleaning products were shown to correctly identify materials classified as Category 1 based on in vivo persistent effects, and to significantly increase the overall sensitivity of the standard ICE prediction model for Category 1 identification (to 75%) whilst maintaining a good concordance (73%). In contrast, use of EU CLP additivity approach for classification of mixtures was considerably less predictive, with a concordance of only 27%, and 100% over-predictions of non-Category 1 products. As such, use of histopathology as an addition to the ICE test method was found suitable to identify EU CLP/UN GHS Category 1 non-extreme pH detergent and cleaning products and to allow a better discrimination from Category 2 products. Copyright © 2014 Elsevier Ltd. All rights reserved.
Strasberg, Steven M; Gouma, Dirk J
2012-01-01
Objectives Extreme vasculobiliary injuries usually involve major hepatic arteries and portal veins. They are rare, but have severe consequences, including rapid infarction of the liver. The pathogenesis of these injuries is not well understood. The purpose of this study was to elucidate the mechanism of injury through an analysis of clinical records, particularly the operative notes of the index procedure. Methods Biliary injury databases in two institutions were searched for data on extreme vasculobiliary injuries. Operative notes for the index procedure (cholecystectomy) were requested from the primary institutions. These notes and the treatment records of the tertiary centres to which the patients had been referred were examined. Radiographs from the primary institutions, when available, as well as those from the tertiary centres, were studied. Results Eight patients with extreme vasculobiliary injuries were found. Most had the following features in common. The operation had been started laparoscopically and converted to an open procedure because of severe chronic or acute inflammation. Fundus-down cholecystectomy had been attempted. Severe bleeding had been encountered as a result of injury to a major portal vein and hepatic artery. Four patients required right hepatectomy and one required an orthotopic liver transplant. Four of the eight patients have died and one remains under treatment. Conclusions Extreme vasculobiliary injuries tend to occur when fundus-down cholecystectomy is performed in the presence of severe inflammation. Contractive inflammation thickens and shortens the cystic plate, making separation of the gallbladder from the liver hazardous. PMID:22151444
Extreme river flow dependence in Northern Scotland
NASA Astrophysics Data System (ADS)
Villoria, M. Franco; Scott, M.; Hoey, T.; Fischbacher-Smith, D.
2012-04-01
Various methods for the spatial analysis of hydrologic data have been developed recently. Here we present results using the conditional probability approach proposed by Keef et al. [Appl. Stat. (2009): 58, 601-18] to investigate spatial interdependence in extreme river flows in Scotland. This approach does not require the specification of a correlation function, being most suitable for relatively small geographical areas. The work is motivated by the Flood Risk Management (Scotland) Act 2009, which requires maps of flood risk that take account of spatial dependence in extreme river flow. The method is based on two conditional measures of spatial flood risk: firstly, the conditional probability PC(p) that a set of sites Y = (Y1, ..., Yd) within a region C of interest exceeds a flow threshold Qp at time t (or any lag of t), given that the specified conditioning site X > Qp; and, secondly, the expected number of sites within C that will exceed a flow Qp on average (given that X > Qp). The conditional probabilities are estimated using the conditional distribution of Y | X = x (for large x), which can be modeled using a semi-parametric approach (Heffernan and Tawn [Roy. Statist. Soc. Ser. B (2004): 66, 497-546]). Once the model is fitted, pseudo-samples can be generated to estimate functionals of the joint tails of the distribution of (Y, X). Conditional return level plots were directly compared to traditional return level plots, thus improving our understanding of the dependence structure of extreme river flow events. Confidence intervals were calculated using block bootstrapping methods (100 replicates). We report results from applying this approach to a set of four rivers (Dulnain, Lossie, Ewe and Ness) in Northern Scotland. These sites were chosen based on data quality, spatial location and catchment characteristics. The river Ness, being the largest (catchment size 1839.1 km²), was chosen as the conditioning river. Both the Ewe (441.1 km²) and Ness catchments have predominantly impermeable bedrock, with the Ewe catchment being very wet. The Lossie (216 km²) and Dulnain (272.2 km²) both contain significant areas of glacial deposits. River flow in the Dulnain is usually affected by snowmelt. In all cases, the conditional probability of each of the three rivers (Dulnain, Lossie, Ewe) decreases as the event in the conditioning river (Ness) becomes more extreme. The Ewe, despite being the furthest of the three sites from the Ness, shows the strongest dependence, with relatively high (>0.4) conditional probabilities even for very extreme events (>0.995). Although the Lossie is closer geographically to the Ness than the Ewe, it shows relatively low conditional probabilities and can be considered independent of the Ness for very extreme events (>0.990). The conditional probabilities seem to reflect the different catchment characteristics and dominant precipitation generating events, with the Ewe being more similar to the Ness than the other two rivers. This interpretation suggests that the conditional method may yield improved estimates of extreme events, but the approach is time consuming. An alternative model that is easier to implement, using spatial quantile regression, is currently being investigated, which would also allow the introduction of further covariates, essential as the effects of climate change are incorporated into estimation procedures.
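To give a concrete feel for the conditional measure described above, the purely empirical sketch below estimates P(Y > Qp | X > Qp) between a conditioning gauge X and a dependent gauge Y on synthetic daily flows. The real analysis fits the semi-parametric Heffernan-Tawn model and bootstraps confidence intervals; none of that is reproduced here, and the data and dependence strength are invented.

```python
# Empirical conditional exceedance probability between two gauges.
import numpy as np

rng = np.random.default_rng(5)
n = 5000
X = rng.gumbel(size=n)             # daily flows at the conditioning site
Y = 0.6 * X + rng.gumbel(size=n)   # a dependent site with moderate tail dependence
p = 0.99
qx, qy = np.quantile(X, p), np.quantile(Y, p)
p_cond = np.mean(Y[X > qx] > qy)   # P(Y exceeds its p-quantile | X exceeds its p-quantile)
print(f"P(Y extreme | X extreme) ~ {p_cond:.2f} (vs {1 - p:.2f} under independence)")
```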
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
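As a minimal illustration of the empirical quantile mapping idea that the QMα baseline builds on, the sketch below maps model precipitation onto the observed distribution through empirical CDFs. It is a generic sketch on synthetic gamma-distributed data; the parametric tail handling and shape-parameter control of the QMβ variants are not implemented here.

```python
# Empirical quantile mapping of RCM precipitation onto observations.
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Correct model_future using the transfer function built from the
    historical model and observed distributions."""
    probs = np.searchsorted(np.sort(model_hist), model_future) / model_hist.size
    probs = np.clip(probs, 0.0, 1.0)        # values beyond the calibration range hit the ends
    return np.quantile(obs_hist, probs)     # invert through the observed quantiles

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 5.0, size=3000)        # observed daily precipitation [mm]
mod = 0.7 * rng.gamma(2.0, 5.0, size=3000)  # dry-biased RCM hindcast
fut = 0.8 * rng.gamma(2.0, 6.0, size=3000)  # RCM scenario to be corrected
corrected = quantile_map(mod, obs, fut)
print(obs.mean(), fut.mean(), corrected.mean())
```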
Input reconstruction of chaos sensors.
Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin
2008-06-01
Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This measurement signal reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested measurement signal reconstruction method.
A nonrecursive 'Order N' preconditioned conjugate gradient/range space formulation of MDOF dynamics
NASA Technical Reports Server (NTRS)
Kurdila, A. J.; Menon, R.; Sunkel, John
1991-01-01
This paper addresses the requirements of present-day mechanical system simulations for algorithms that induce parallelism on a fine scale and for transient simulation methods that are automatically load balancing for a wide collection of system topologies and hardware configurations. To this end, a combination range space/preconditioned conjugate gradient formulation of multidegree-of-freedom dynamics is developed which, by employing regular ordering of the system connectivity graph, makes it possible to derive an extremely efficient preconditioner from the range space metric (as opposed to the system coefficient matrix). Because of the effectiveness of the preconditioner, the method can achieve performance rates that depend linearly on the number of substructures. The method, termed 'Order N', does not require the assembly of system mass or stiffness matrices, and is therefore amenable to implementation on workstations. Using this method, a 13-substructure model of the Space Station was constructed.
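For readers unfamiliar with the preconditioned conjugate gradient iteration at the heart of the formulation, the sketch below solves a small symmetric positive-definite system with a simple Jacobi (diagonal) preconditioner. The Jacobi choice and the test matrix are placeholders; the paper's range-space-metric preconditioner and 'Order N' structure are not reproduced.

```python
# Preconditioned conjugate gradient with a Jacobi preconditioner.
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Solve A x = b for SPD A, given the action of a preconditioner M^-1."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 50
A = np.diag(np.arange(1.0, n + 1.0)) + 0.01 * np.ones((n, n))  # SPD test matrix
b = np.ones(n)
M_inv = np.diag(1.0 / np.diag(A))                              # Jacobi preconditioner
x = pcg(A, b, M_inv)
print("residual norm:", np.linalg.norm(A @ x - b))
```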
Environmental Symposium Held in Crystal City, Virginia on May 5-6, 1992
1992-05-01
addition, the Act creates a new program designed to prevent sudden, accidental releases of extremely hazardous substances. Generally, the Act sets forth a... prevention of sudden... The owner or operator of any facility handling an extremely hazardous substance will also be required to prepare and implement a risk management plan to detect and prevent or minimize the potential for an accidental release of extremely hazardous substances. EPA may require that such
Russell, Jeffrey A; Shave, Ruth M; Kruse, David W; Nevill, Alan M; Koutedakis, Yiannis; Wyon, Matthew A
2011-06-01
Female ballet dancers require extreme ankle motion to attain the demi-plié (weight-bearing full dorsiflexion [DF]) and en pointe (weight-bearing full plantar flexion [PF]) positions of ballet. However, techniques for assessing this amount of motion have not yet received sufficient scientific scrutiny. Therefore, the purpose of this study was to examine possible differences between weight-bearing goniometric and radiographic ankle range of motion measurements in female ballet dancers. Ankle range of motion in 8 experienced female ballet dancers was assessed by goniometry and 2 radiographic measurement methods. The latter were performed on 3 mediolateral x-rays, in demi-plié, neutral, and en pointe positions; one of them used the same landmarks as goniometry. DF values were not significantly different among the methods, but PF values were (P < .05). Not only was PF of the talocrural joint significantly less than the other 2 measurements (P < .001), but PF from the goniometric method applied to the x-rays was also significantly less than PF obtained from clinical goniometry (P < .05). These data provide insight into the extreme ankle and foot motion, particularly PF, required in female ballet dancers and suggest that goniometry may not be ideal for assessing ankle range of motion in these individuals. Therefore, further research is needed to standardize how DF and PF are measured in ballet dancers. Diagnostic, Level I.
Empirical Bayes estimation of proportions with application to cowbird parasitism rates
Link, W.A.; Hahn, D.C.
1996-01-01
Bayesian models provide a structure for studying collections of parameters such as are considered in the investigation of communities, ecosystems, and landscapes. This structure allows for improved estimation of individual parameters, by considering them in the context of a group of related parameters. Individual estimates are differentially adjusted toward an overall mean, with the magnitude of their adjustment based on their precision. Consequently, Bayesian estimation allows for a more credible identification of extreme values in a collection of estimates. Bayesian models regard individual parameters as values sampled from a specified probability distribution, called a prior. The requirement that the prior be known is often regarded as an unattractive feature of Bayesian analysis and may be the reason why Bayesian analyses are not frequently applied in ecological studies. Empirical Bayes methods provide an alternative approach that incorporates the structural advantages of Bayesian models while requiring a less stringent specification of prior knowledge. Rather than requiring that the prior distribution be known, empirical Bayes methods require only that it be in a certain family of distributions, indexed by hyperparameters that can be estimated from the available data. This structure is of interest per se, in addition to its value in allowing for improved estimation of individual parameters; for example, hypotheses regarding the existence of distinct subgroups in a collection of parameters can be considered under the empirical Bayes framework by allowing the hyperparameters to vary among subgroups. Though empirical Bayes methods have been applied in a variety of contexts, they have received little attention in the ecological literature. We describe the empirical Bayes approach in application to estimation of proportions, using data obtained in a community-wide study of cowbird parasitism rates for illustration. Since observed proportions based on small sample sizes are heavily adjusted toward the mean, extreme values among empirical Bayes estimates identify those species for which there is the greatest evidence of extreme parasitism rates. Applying a subgroup analysis to our data on cowbird parasitism rates, we conclude that parasitism rates for Neotropical Migrants as a group are no greater than those of Resident/Short-distance Migrant species in this forest community. Our data and analyses demonstrate that the parasitism rates for certain Neotropical Migrant species are remarkably low (Wood Thrush and Rose-breasted Grosbeak) while those for others are remarkably high (Ovenbird and Red-eyed Vireo).
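To make the empirical Bayes machinery concrete, the sketch below fits a beta-binomial model to a small set of per-species counts: the beta hyperparameters are estimated by maximizing the marginal likelihood, and each observed proportion is then shrunk toward the group mean in proportion to its sample size. The counts are invented, not the cowbird data, and the subgroup analysis described above is not included.

```python
# Empirical Bayes shrinkage of binomial proportions under a beta prior.
import numpy as np
from scipy.optimize import minimize
from scipy.special import betaln, gammaln

k = np.array([0, 1, 2, 5, 8, 3, 0, 12])      # "parasitized" counts per species (invented)
n = np.array([4, 6, 5, 20, 15, 30, 3, 18])   # nests monitored per species (invented)

def neg_marginal_loglik(log_ab):
    a, b = np.exp(log_ab)                    # keep the hyperparameters positive
    log_choose = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    ll = log_choose + betaln(k + a, n - k + b) - betaln(a, b)   # beta-binomial marginal
    return -ll.sum()

res = minimize(neg_marginal_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
a_hat, b_hat = np.exp(res.x)
posterior_mean = (k + a_hat) / (n + a_hat + b_hat)   # shrunk estimates
print(np.round(k / n, 2))                            # raw proportions
print(np.round(posterior_mean, 2))                   # shrunk toward the group mean
```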
Application of symbolic/numeric matrix solution techniques to the NASTRAN program
NASA Technical Reports Server (NTRS)
Buturla, E. M.; Burroughs, S. H.
1977-01-01
The matrix-solving algorithm of any finite element program is extremely important, since solution of the matrix equations requires a large amount of elapsed time due to null calculations and excessive input/output operations. An alternate method of solving the matrix equations is presented. A symbolic processing step followed by numeric solution yields the solution very rapidly and is especially useful for nonlinear problems.
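As a loose modern analogy to the symbolic-then-numeric idea (analyse the sparsity structure once, then reuse it for fast numeric solves that skip null operations), the sketch below factorizes a sparse banded system with SciPy's sparse LU and reuses the factorization for several right-hand sides. This is an illustration of the general pattern, not the NASTRAN implementation described in the report.

```python
# Factor once, solve many: sparse LU reused across right-hand sides.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

n = 1000
# Sparse banded system whose structure is fixed while values/RHS are reused.
A = sp.diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
lu = splu(A)                        # structure analysis plus numeric factorization
for rhs_id in range(3):             # several right-hand sides reuse the factorization
    b = np.random.default_rng(rhs_id).random(n)
    x = lu.solve(b)
    print(rhs_id, np.linalg.norm(A @ x - b))
```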
Resonance Raman Spectroscopy of Extreme Nanowires and Other 1D Systems
Smith, David C.; Spencer, Joseph H.; Sloan, Jeremy; McDonnell, Liam P.; Trewhitt, Harrison; Kashtiban, Reza J.; Faulques, Eric
2016-01-01
This paper briefly describes how nanowires with diameters corresponding to 1 to 5 atoms can be produced by melting a range of inorganic solids in the presence of carbon nanotubes. These nanowires are extreme in the sense that they are the limit of miniaturization of nanowires, and their behavior is not always a simple extrapolation of the behavior of larger nanowires as their diameter decreases. The paper then describes the methods required to obtain Raman spectra from extreme nanowires and the fact that, because of the van Hove singularities that 1D systems exhibit in their optical density of states, determining the correct choice of photon excitation energy is critical. It describes the techniques required to determine the photon energy dependence of the resonances observed in Raman spectroscopy of 1D systems and, in particular, how to obtain measurements of Raman cross-sections with better than 8% noise and measure the variation in the resonance as a function of sample temperature. The paper describes the importance of ensuring that the Raman scattering is linearly proportional to the laser excitation intensity. It also describes how to use the polarization dependence of the Raman scattering to separate Raman scattering of the encapsulated 1D systems from that of other extraneous components in any sample. PMID:27168195
Kim, Jong Moon; Je, Hyun Dong; Kim, Hyeong-Dong
2017-01-01
[Purpose] To investigate the effects of a pelvic compression belt (PCB) and chair height on the kinematics and kinetics of the lower extremity during sit-to-stand (STS) maneuvers in healthy people. [Subjects and Methods] Twenty-two people participated in this study. They were required to perform STS maneuvers under four conditions. Hip joint moment and angular displacement of the hip, knee, and ankle were measured. A PCB was also applied below the anterior superior iliac spine. [Results] The angular displacement of the ankle joint increased while performing STS maneuvers from a normal chair with a PCB in phase 1, and decreased during phase 2 when performing STS maneuvers from a high chair. The overall angular displacement in phase 3 was decreased while rising from a chair with a PCB and rising from a high chair. When performing STS maneuvers from a high chair, the angular displacement of the hip, knee, and ankle joints decreased considerably in phase 3. This decreased lower extremity motion in phase 3 indicated that participants required less momentum to complete the maneuver. [Conclusion] The results of this study suggest that a PCB might be appropriate for patients with pelvic girdle pain and lower back pain related to pregnancy. PMID:28878454
Survey of upper extremity injuries among martial arts participants.
Diesselhorst, Matthew M; Rayan, Ghazi M; Pasque, Charles B; Peyton Holder, R
2013-01-01
To survey participants at various experience levels of different martial arts (MA) about upper extremity injuries sustained during training and fighting. A 21-question survey was designed and utilised. The survey was divided into four groups (Demographics, Injury Description, Injury Mechanism, and Miscellaneous Information) to gain knowledge about upper extremity injuries sustained during martial arts participation. Chi-square testing was utilised to assess for significant associations. Males comprised 81% of respondents. Involvement in multiple forms of MA was the most prevalent (38%). The hand/wrist was the most common area injured (53%), followed by the shoulder/upper arm (27%) and the forearm/elbow (19%). Joint sprains/muscle strains were the most frequent injuries reported overall (47%), followed by abrasions/bruises (26%). Dislocations of the upper extremity were reported by 47% of participants while fractures occurred in 39%. Surgery was required for 30% of participants. Females were less likely to require surgery and more likely to have shoulder and elbow injuries. Males were more likely to have hand injuries. Participants in Karate and Tae Kwon Do were more likely to have injuries to their hands, while participants in multiple forms were more likely to sustain injuries to their shoulders/upper arms and more likely to develop chronic upper extremity symptoms. With advanced levels of training, the likelihood of developing chronic upper extremity symptoms increases, and multiple surgeries were required. Hand protection was associated with a lower risk of hand injuries. Martial arts can be associated with substantial upper extremity injuries that may require surgery and extended time away from participation. Injuries may result in chronic upper extremity symptoms. Hand protection is important for reducing injuries to the hand and wrist.
Three-dimensional laser window formation for industrial application
NASA Technical Reports Server (NTRS)
Verhoff, Vincent G.; Kowalski, David
1993-01-01
The NASA Lewis Research Center has developed and implemented a unique process for forming flawless three-dimensional, compound-curvature laser windows to extreme accuracies. These windows represent an integral component of specialized nonintrusive laser data acquisition systems that are used in a variety of compressor and turbine research testing facilities. These windows are molded to the flow surface profile of turbine and compressor casings and are required to withstand extremely high pressures and temperatures. This method of glass formation could also be used to form compound-curvature mirrors that would require little polishing and for a variety of industrial applications, including research view ports for testing devices and view ports for factory machines with compound-curvature casings. Currently, sodium-alumino-silicate glass is recommended for three-dimensional laser windows because of its high strength due to chemical strengthening and its optical clarity. This paper discusses the main aspects of three-dimensional laser window formation. It focuses on the unique methodology and the peculiarities that are associated with the formation of these windows.
Alves, Gelio; Yu, Yi-Kuo
2016-09-01
There is a growing trend for biomedical researchers to extract evidence and draw conclusions from mass spectrometry based proteomics experiments, the cornerstone of which is peptide identification. Inaccurate assignments of peptide identification confidence thus may have far-reaching and adverse consequences. Although some peptide identification methods report accurate statistics, they have been limited to certain types of scoring function. The extreme value statistics based method, while more general in the scoring functions it allows, demands accurate parameter estimates and requires, at least in its original design, excessive computational resources. Improving the parameter estimate accuracy and reducing the computational cost for this method has two advantages: it provides another feasible route to accurate significance assessment, and it could provide reliable statistics for scoring functions yet to be developed. We have formulated and implemented an efficient algorithm for calculating the extreme value statistics for peptide identification applicable to various scoring functions, bypassing the need for searching large random databases. The source code, implemented in C++ on a Linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Contact: yyu@ncbi.nlm.nih.gov. Supplementary data are available at Bioinformatics online. Published by Oxford University Press 2016. This work is written by US Government employees and is in the public domain in the US.
Sallés, Laia; Martín-Casas, Patricia; Gironès, Xavier; Durà, María José; Lafuente, José Vicente; Perfetti, Carlo
2017-04-01
[Purpose] This study aims to describe a protocol based on neurocognitive therapeutic exercises and determine its feasibility and usefulness for upper extremity functionality when compared with a conventional protocol. [Subjects and Methods] Eight subacute stroke patients were randomly assigned to a conventional (control group) or neurocognitive (experimental group) treatment protocol. Both lasted 30 minutes, 3 times a week for 10 weeks and assessments were blinded. Outcome measures included: Motor Evaluation Scale for Upper Extremity in Stroke Patients, Motricity Index, Revised Nottingham Sensory Assessment and Kinesthetic and Visual Imagery Questionnaire. Descriptive measures and nonparametric statistical tests were used for analysis. [Results] The results indicate a more favorable clinical progression in the neurocognitive group regarding upper extremity functional capacity with achievement of the minimal detectable change. The functionality results are related with improvements on muscle strength and sensory discrimination (tactile and kinesthetic). [Conclusion] Despite not showing significant group differences between pre and post-treatment, the neurocognitive approach could be a safe and useful strategy for recovering upper extremity movement following stroke, especially regarding affected hands, with better and longer lasting results. Although this work shows this protocol's feasibility with the panel of scales proposed, larger studies are required to demonstrate its effectiveness.
Sallés, Laia; Martín-Casas, Patricia; Gironès, Xavier; Durà, María José; Lafuente, José Vicente; Perfetti, Carlo
2017-01-01
[Purpose] This study aims to describe a protocol based on neurocognitive therapeutic exercises and determine its feasibility and usefulness for upper extremity functionality when compared with a conventional protocol. [Subjects and Methods] Eight subacute stroke patients were randomly assigned to a conventional (control group) or neurocognitive (experimental group) treatment protocol. Both lasted 30 minutes, 3 times a week for 10 weeks and assessments were blinded. Outcome measures included: Motor Evaluation Scale for Upper Extremity in Stroke Patients, Motricity Index, Revised Nottingham Sensory Assessment and Kinesthetic and Visual Imagery Questionnaire. Descriptive measures and nonparametric statistical tests were used for analysis. [Results] The results indicate a more favorable clinical progression in the neurocognitive group regarding upper extremity functional capacity with achievement of the minimal detectable change. The functionality results are related with improvements on muscle strength and sensory discrimination (tactile and kinesthetic). [Conclusion] Despite not showing significant group differences between pre and post-treatment, the neurocognitive approach could be a safe and useful strategy for recovering upper extremity movement following stroke, especially regarding affected hands, with better and longer lasting results. Although this work shows this protocol’s feasibility with the panel of scales proposed, larger studies are required to demonstrate its effectiveness. PMID:28533607
Volcanic hazard assessment for the Canary Islands (Spain) using extreme value theory
NASA Astrophysics Data System (ADS)
Sobradelo, R.; Martí, J.; Mendoza-Rosas, A. T.; Gómez, G.
2011-10-01
The Canary Islands are an active volcanic region, densely populated and visited by several million tourists every year. Nearly twenty eruptions have been reported through written chronicles in the last 600 yr, suggesting that the probability of a new eruption in the near future is far from zero. This shows the importance of assessing and monitoring the volcanic hazard of the region in order to reduce and manage its potential volcanic risk, and ultimately contribute to the design of appropriate preparedness plans. Hence, the probabilistic analysis of the volcanic eruption time series for the Canary Islands is an essential step for the assessment of volcanic hazard and risk in the area. Such a series describes complex processes involving different types of eruptions over different time scales. Here we propose a statistical method for calculating the probabilities of future eruptions which is most appropriate given the nature of the documented historical eruptive data. We first characterize the eruptions by their magnitudes, and then carry out a preliminary analysis of the data to establish the requirements for the statistical method. Past studies of eruptive time series used conventional statistics and treated the series as a homogeneous process. In this paper, we will use a method that accounts for the time-dependence of the series and includes rare or extreme events, in the form of a few data points from large eruptions, since these data require special methods of analysis. Hence, we will use a statistical method from extreme value theory. In particular, we will apply a non-homogeneous Poisson process to the historical eruptive data of the Canary Islands to estimate the probability of having at least one volcanic event of a magnitude greater than one in the upcoming years. This is done in three steps: first, we analyze the historical eruptive series to assess independence and homogeneity of the process. Second, we perform a Weibull analysis of the distribution of repose times between successive eruptions. Third, we analyze the non-homogeneous Poisson process with a generalized Pareto distribution as the intensity function.
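A minimal illustrative sketch of the peaks-over-threshold logic behind this kind of analysis, written in Python with SciPy. The eruption years, magnitudes, thresholds, and horizon below are invented for illustration, and the constant Poisson rate is a simplification of the authors' non-homogeneous model:

import numpy as np
from scipy import stats

# Illustrative catalogue only (not the Canary Islands record)
years = np.array([1430, 1470, 1492, 1585, 1646, 1677, 1704, 1706, 1730, 1798, 1824, 1909, 1971])
mags = np.array([2.1, 3.0, 2.4, 3.2, 2.2, 2.6, 3.1, 2.8, 2.3, 2.5, 2.2, 3.4, 2.7])

m0, m_star, T = 2.0, 3.0, 50.0                      # base threshold, level of interest, horizon (yr)
excess = mags[mags > m0] - m0
c, _, scale = stats.genpareto.fit(excess, floc=0.0)  # GPD fit to magnitude excesses

rate_above_m0 = (mags > m0).sum() / (years.max() - years.min())   # eruptions/yr above m0
p_tail = stats.genpareto.sf(m_star - m0, c, loc=0.0, scale=scale)  # P(M > m_star | M > m0)
p_at_least_one = 1.0 - np.exp(-rate_above_m0 * p_tail * T)         # Poisson approximation
print(p_at_least_one)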
Knowles, Martyn; Nation, David A; Timaran, David E; Gomez, Luis F; Baig, M Shadman; Valentine, R James; Timaran, Carlos H
2015-01-01
Fenestrated endovascular aortic aneurysm repair (FEVAR) is an alternative to open repair in patients with complex abdominal aortic aneurysms who are neither fit nor suitable for standard open or endovascular repair. Chimney and snorkel grafts are other endovascular alternatives but frequently require bilateral upper extremity access that has been associated with a 3% to 10% risk of stroke. However, upper extremity access is also frequently required for FEVAR because of the caudal orientation of the visceral vessels. The purpose of this study was to assess the use of upper extremity access for FEVAR and the associated morbidity. During a 5-year period, 148 patients underwent FEVAR, and upper extremity access for FEVAR was used in 98 (66%). Outcomes were compared between those who underwent upper extremity access and those who underwent femoral access alone. The primary end point was a cerebrovascular accident or transient ischemic attack, and the secondary end point was local access site complications. The mean number of fenestrated vessels was 3.07 ± 0.81 (median, 3) for a total of 457 vessels stented. Percutaneous upper extremity access was used in 12 patients (12%) and open access in 86 (88%). All patients who required a sheath size >7F underwent high brachial open access, with the exception of one patient who underwent percutaneous axillary access with a 12F sheath. The mean sheath size was 10.59F ± 2.51F (median, 12F), which was advanced into the descending thoracic aorta, allowing multiple wire and catheter exchanges. One hemorrhagic stroke (one of 98 [1%]) occurred in the upper extremity access group, and one ischemic stroke (one of 54 [2%]) occurred in the femoral-only access group (P = .67). The stroke in the upper extremity access group occurred 5 days after FEVAR and was related to uncontrolled hypertension, whereas the stroke in the femoral group occurred on postoperative day 3. Neither patient had signs or symptoms of a stroke immediately after FEVAR. The right upper extremity was accessed six times without a stroke (0%) compared with the left being accessed 92 times with one stroke (1%; P = .8). Four patients (4%) had local complications related to upper extremity access. One (1%) required exploration for an expanding hematoma after manual compression for a 7F sheath, one (1%) required exploration for hematoma and neurologic symptoms after open access for a 12F sheath, and two patients (2%) with small hematomas did not require intervention. Two (two of 12 [17%]) of these complications were in the percutaneous access group, which were significantly more frequent than in the open group (two of 86 [2%]; P = .02). Upper extremity access appears to be a safe and feasible approach for patients undergoing FEVAR. Open exposure in the upper extremity may be safer than percutaneous access during FEVAR. Unlike chimney and snorkel grafts, upper extremity access during FEVAR is not associated with an increased risk of stroke, despite the need for multiple visceral vessel stenting. Copyright © 2015 Society for Vascular Surgery. All rights reserved.
NASA Astrophysics Data System (ADS)
House, B. M.; Norris, R. D.
2017-12-01
The Early Eocene Climatic Optimum (EECO) around 50 Ma was a sustained period of extreme global warmth, with ocean bottom water temperatures of up to 12 °C. The marine biologic response to such climatic extremes is unclear, however, in part because proxies that integrate ecosystem-wide productivity signals are scarce. While the accumulation of marine barite (BaSO4) is one such proxy, its applicability has remained limited due to the difficulty in reliably quantifying barite. Discrete measurements of barite content in marine sediments are laborious, and indirect estimates provide unclear results. We have developed a fast, high-throughput method for reliable measurement of barite content that relies on selective extraction of barite rather than sample digestion and quantification of the remaining barite. Tests of the new method reveal that it gives the expected results for a wide variety of sediment types and can quantitatively extract 10-100 times the amount of barite typically encountered in natural sediments. Altogether, our method provides an estimated ten-fold increase in analysis efficiency over current sample digestion methods and also works reliably on small (about 1 g or less) sediment samples. Furthermore, the instrumentation requirements of this method are minor, so samples can be analyzed in shipboard labs to generate real-time paleoproductivity records during coring expeditions. Because of the magnitude of throughput improvement, this new technique will permit the generation of large datasets needed to address previously intractable paleoclimate and paleoceanographic questions. One such question is how export productivity changes during climatic extremes. We used our new method to analyze globally distributed sediment cores to determine whether the EECO represented a period of anomalous export productivity, either due to higher rates of primary production or to more vigorous heterotrophic metabolisms. An increase in export productivity could provide a mechanism for exiting periods of extreme warmth, and understanding the interplay between temperature, atmospheric CO2 levels, and export productivity during the EECO will help clarify how the marine biologic system functions as a whole.
A new method of sweat testing: the CF Quantum® sweat test
Rock, Michael J.; Makholm, Linda; Eickhoff, Jens
2015-01-01
Background: Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test. This study tests the diagnostic accuracy and analytic validity of the CFQT. Methods: Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. Results: The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97–0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94–100%) and 96% (95% confidence interval: 89–99%), respectively. In one center in this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher for the CFQT method (16.5%) compared to conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions: The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. PMID:24862724
Literature Review and Annotated Bibliography: Water Requirements of Desert Ungulates
Cain, James W.; Krausman, Paul R.; Rosenstock, Steven S.; Turner, Jack C.
2005-01-01
Executive Summary Ungulates adapted to desert areas are able to survive extreme temperatures and limited water availability. This ability is largely due to behavioral, morphological, and physiological adaptations that allow these animals to avoid or tolerate extreme environmental conditions. The physiological adaptations possessed by ungulates for thermoregulation and maintenance of water balance have been the subject of numerous studies involving a wide range of species. In this report we review the behavioral, morphological, and physiological mechanisms used by ungulates and other desert mammals to maintain water and temperature balance in arid environments. We also review some of the more commonly used methods for studying the physiological mechanisms involved in water balance and thermoregulation, and the influence of dehydration on these mechanisms.
Topographic relationships for design rainfalls over Australia
NASA Astrophysics Data System (ADS)
Johnson, F.; Hutchinson, M. F.; The, C.; Beesley, C.; Green, J.
2016-02-01
Design rainfall statistics are the primary inputs used to assess flood risk across river catchments. These statistics normally take the form of Intensity-Duration-Frequency (IDF) curves that are derived from extreme value probability distributions fitted to observed daily, and sub-daily, rainfall data. The design rainfall relationships are often required for catchments where there are limited rainfall records, particularly catchments in remote areas with high topographic relief and hence some form of interpolation is required to provide estimates in these areas. This paper assesses the topographic dependence of rainfall extremes by using elevation-dependent thin plate smoothing splines to interpolate the mean annual maximum rainfall, for periods from one to seven days, across Australia. The analyses confirm the important impact of topography in explaining the spatial patterns of these extreme rainfall statistics. Continent-wide residual and cross validation statistics are used to demonstrate the 100-fold impact of elevation in relation to horizontal coordinates in explaining the spatial patterns, consistent with previous rainfall scaling studies and observational evidence. The impact of the complexity of the fitted spline surfaces, as defined by the number of knots, and the impact of applying variance stabilising transformations to the data, were also assessed. It was found that a relatively large number of 3570 knots, suitably chosen from 8619 gauge locations, was required to minimise the summary error statistics. Square root and log data transformations were found to deliver marginally superior continent-wide cross validation statistics, in comparison to applying no data transformation, but detailed assessments of residuals in complex high rainfall regions with high topographic relief showed that no data transformation gave superior performance in these regions. These results are consistent with the understanding that in areas with modest topographic relief, as for most of the Australian continent, extreme rainfall is closely aligned with elevation, but in areas with high topographic relief the impacts of topography on rainfall extremes are more complex. The interpolated extreme rainfall statistics, using no data transformation, have been used by the Australian Bureau of Meteorology to produce new IDF data for the Australian continent. The comprehensive methods presented for the evaluation of gridded design rainfall statistics will be useful for similar studies, in particular the importance of balancing the need for a continentally-optimum solution that maintains sufficient definition at the local scale.
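As a rough illustration of the elevation-dependent spline idea (not the interpolation software used for the Bureau of Meteorology products), the Python sketch below treats scaled elevation as a third coordinate in a thin-plate spline fitted to synthetic gauge data; the gauge values, the elevation scaling factor and the smoothing value are all assumptions:

import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
lon, lat = rng.uniform(115, 150, 200), rng.uniform(-40, -12, 200)   # synthetic gauge locations
elev = rng.uniform(0, 1500, 200)                                    # metres
rain = 40 + 0.03 * elev + rng.normal(0, 5, 200)                     # synthetic mean annual maximum rainfall (mm)

elev_scale = 100.0                                  # emphasises elevation relative to horizontal degrees (assumption)
pts = np.column_stack([lon, lat, elev / elev_scale])
spline = RBFInterpolator(pts, rain, kernel='thin_plate_spline', smoothing=1.0)

query = np.array([[147.0, -36.5, 900.0 / elev_scale]])   # hypothetical ungauged site
print(spline(query))                                      # interpolated rainfall statistic at that site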
Microsurgery within reconstructive surgery of extremities.
Pheradze, I; Pheradze, T; Tsilosani, G; Goginashvili, Z; Mosiava, T
2006-05-01
Reconstructive surgery of the extremities is an object of special attention among surgeons. Damage to vessels and nerves and deficiency of soft tissue and bone, associated with infection, result in a complete loss of extremity function and raise the question of amputation. The goal of the study was to evaluate the role of microsurgery in reconstructive surgery of the limbs. We operated on 294 patients with various diseases and injuries of the extremities: pathology of nerves, vessels, and tissue loss. An original method of treatment of large simultaneous functional defects of limbs was used. Good functional and aesthetic results were obtained. Results of reconstructive operations on extremities might be improved by the use of microsurgical methods. Microsurgery is deemed the method of choice for reconstructive surgery of the extremities, as the outcomes achieved through microsurgical techniques significantly surpass those obtained through routine surgical methods.
April 22, 1987: This FR established the list of extremely hazardous substances (EHSs) and their threshold planning quantities (TPQs). Also codified reporting and notification requirements for facilities with EHS. Do not use for current compliance purposes.
Ultralow-mass solar-array designs for Halley's comet rendezvous mission
NASA Technical Reports Server (NTRS)
Costogue, E. N.; Rayl, G.
1978-01-01
This paper describes the conceptual design study results of photovoltaic arrays capable of powering a Halley's comet rendezvous mission. This mission would be Shuttle-launched, employ a unique form of propulsion (ion drive) which requires high power levels for operation, and operate at distances between 0.6 and 4.5 AU. These requirements make it necessary to develop arrays with an extremely high power-to-mass ratio (200 W/kg). In addition, the dual requirements of providing ion thruster power as well as housekeeping power lead to the development of unique methods for mode switching. Both planar and variable-concentrator-enhanced array concepts using ultrathin (50 micron) high-efficiency (up to 12.5%) silicon solar cells coupled with thin (75 micron) plastic encapsulants are considered. In order to satisfy the Shuttle launch environment, it was necessary to provide novel methods of both storing and deploying these arrays.
2011-12-15
... for Retrofit Design of Submarine Actuation Systems. Energy Storage for Electric Actuators (Grant N00014-08-1-0424). ... are used to derive power and energy storage requirements for control surface actuation during extreme submarine maneuvers, such as emergency ..., and for initially sizing system components. Subject terms: submarines, electromagnetic actuators, energy storage, simulation-based design.
NASA Technical Reports Server (NTRS)
Kellner, A.
1987-01-01
Extremely large knowledge sources and efficient knowledge access characterizing future real-life artificial intelligence applications represent crucial requirements for on-board artificial intelligence systems due to obvious computer time and storage constraints on spacecraft. A type of knowledge representation and corresponding reasoning mechanism is proposed which is particularly suited for the efficient processing of such large knowledge bases in expert systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Indelicato, Daniel J.; Keole, Sameer R.; Shahlaee, Amir H.
2008-11-01
Purpose: More than 70% of Ewing tumors occur in the extremities and pelvis. This study identified factors influencing local control and functional outcomes after management with definitive radiotherapy (RT). Patients and Methods: A total of 75 patients with a localized Ewing tumor of the extremity or pelvis were treated with definitive RT at the University of Florida between 1970 and 2006 (lower extremity tumors in 30, pelvic tumors in 26, and upper extremity tumors in 19). RT was performed on a once-daily (40%) or twice-daily (60%) basis. The median dose was 55.2 Gy in 1.8-Gy daily fractions or 55.0 Gy in 1.2-Gy twice-daily fractions. The median observed follow-up was 4.7 years. Functional outcome was assessed using the Toronto Extremity Salvage Score. Results: The 10-year actuarial overall survival, cause-specific survival, freedom from relapse, and local control rates were 48%, 48%, 42%, and 71%, respectively. Of the 72 patients, 3 required salvage amputation. Inferior cause-specific survival was associated with larger tumors (81% for tumors <8 cm vs. 39% for tumors ≥8 cm, p < 0.05). No patient characteristics or treatment variables were predictive of local failure. No fractures occurred in patients treated with hyperfractionation or with tumors of the distal extremities. Severe late complications were more frequently associated with use of <8-MV photons and fields encompassing the entire bone or hemipelvis. A significantly better Toronto Extremity Salvage Score was associated with a late-effect biologically effective dose of <91.7 Gy₃. Conclusions: Limb preservation was effectively achieved through definitive RT. Treating limited field sizes with hyperfractionated high-energy RT could minimize long-term complications and provides superior functional outcomes.
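For context on the Gy₃ notation above: it denotes a late-effect biologically effective dose computed with the standard linear-quadratic model using α/β = 3 Gy for late-responding tissues. The definition below is the conventional textbook formula, not a quantity reported by the study beyond the abstract:

\mathrm{BED} = D\left(1 + \frac{d}{\alpha/\beta}\right), \qquad \alpha/\beta = 3\ \mathrm{Gy}\ \text{(late effects)},

where D is the total dose and d the dose per fraction. As an illustrative check, the median conventional schedule of 55.2 Gy in 1.8-Gy fractions gives 55.2 × (1 + 1.8/3) ≈ 88.3 Gy₃, and the hyperfractionated 55.0 Gy in 1.2-Gy fractions gives 55.0 × (1 + 1.2/3) = 77.0 Gy₃, both below the 91.7 Gy₃ level cited above.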
Merians, Alma S; Fluet, Gerard G; Qiu, Qinyin; Saleh, Soha; Lafond, Ian; Davidow, Amy; Adamovich, Sergei V
2011-05-16
Recovery of upper extremity function is particularly recalcitrant to successful rehabilitation. Robotic-assisted arm training devices integrated with virtual targets or complex virtual reality gaming simulations are being developed to deal with this problem. Neural control mechanisms indicate that reaching and hand-object manipulation are interdependent, suggesting that training on tasks requiring coordinated effort of both the upper arm and hand may be a more effective method for improving recovery of real world function. However, most robotic therapies have focused on training the proximal, rather than distal effectors of the upper extremity. This paper describes the effects of robotically-assisted, integrated upper extremity training. Twelve subjects post-stroke were trained for eight days on four upper extremity gaming simulations using adaptive robots during 2-3 hour sessions. The subjects demonstrated improved proximal stability, smoothness and efficiency of the movement path. This was in concert with improvement in the distal kinematic measures of finger individuation and improved speed. Importantly, these changes were accompanied by a robust 16-second decrease in overall time in the Wolf Motor Function Test and a 24-second decrease in the Jebsen Test of Hand Function. Complex gaming simulations interfaced with adaptive robots requiring integrated control of shoulder, elbow, forearm, wrist and finger movements appear to have a substantial effect on improving hemiparetic hand function. We believe that the magnitude of the changes and the stability of the patient's function prior to training, along with maintenance of several aspects of the gains demonstrated at retention make a compelling argument for this approach to training.
Code of Federal Regulations, 2011 CFR
2011-07-01
40 CFR 355.12: What quantities of extremely hazardous substances trigger emergency planning requirements? (Protection of Environment; Environmental Protection Agency (continued); Superfund, Emergency Planning, and Community Right-to-Know Programs...)
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile, along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
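The Monte Carlo combination described in these two abstracts can be sketched in a few lines of Python; the amplitudes and the chosen percentile below are illustrative values, not the spreadsheet macro from the papers:

import numpy as np

rng = np.random.default_rng(1)
amp_sine, sigma_random = 2.0, 1.0                 # assumed harmonic amplitude and random 1-sigma load
n = 1_000_000
phase = rng.uniform(0.0, 2.0 * np.pi, n)          # uniformly distributed sine phase
combined = amp_sine * np.sin(phase) + rng.normal(0.0, sigma_random, n)

percentile = 99.87                                # roughly a one-sided 3-sigma level (assumption)
design_load = np.percentile(combined, percentile) # combined load at the chosen CDF percentile
print(design_load)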
NASA Astrophysics Data System (ADS)
Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu
2017-10-01
Environmental changes have brought about significant changes and challenges to water resources and their management around the world; these include increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and especially much more frequent extreme precipitation events, all of which greatly affect water resources and the development of the social economy. In this study, we take extreme precipitation events in the Midwest of Jilin Province as an example; daily precipitation data during 1960-2014 are used. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and then the Kolmogorov-Smirnov (K-S) test is employed to determine the optimal probability distribution function of each extreme precipitation indicator. On this basis, a nonparametric estimation method for copulas and the Akaike Information Criterion (AIC) are adopted to determine the bivariate copula function. Finally, we analyze the characteristics of the single-variable extremes and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far less than that in subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a trend of growth; there are significant differences in the spatiotemporal distribution of extreme precipitation events. The joint return period gets shorter from west to east. The spatial distribution of the co-occurrence return period shows the opposite pattern, and the co-occurrence return period is longer than the joint return period.
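A minimal Python sketch of the distribution-selection step, using synthetic data; the candidate distributions below are assumptions for illustration and are not necessarily those tested by the authors:

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
ep = rng.gamma(shape=2.0, scale=30.0, size=55)        # synthetic extreme-precipitation indicator values (mm)

candidates = {'gamma': stats.gamma, 'gumbel_r': stats.gumbel_r, 'lognorm': stats.lognorm}
results = {}
for name, dist in candidates.items():
    params = dist.fit(ep)                             # fit each candidate to the indicator series
    ks_stat, p_value = stats.kstest(ep, dist.cdf, args=params)
    results[name] = (ks_stat, p_value)

best = min(results, key=lambda k: results[k][0])      # smallest K-S statistic wins
print(best, results[best])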
A characterization of workflow management systems for extreme-scale applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last few years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
Fatigue influences lower extremity angular velocities during a single-leg drop vertical jump.
Tamura, Akihiro; Akasaka, Kiyokazu; Otsudo, Takahiro; Shiozawa, Junya; Toda, Yuka; Yamada, Kaori
2017-03-01
[Purpose] Fatigue alters lower extremity landing strategies and decreases the ability to attenuate impact during landing. The purpose of this study was to reveal the influence of fatigue on dynamic alignment and joint angular velocities in the lower extremities during a single-leg landing. [Subjects and Methods] Thirty-four female college students were randomly assigned to either the fatigue or the control group. The fatigue group performed single-leg drop vertical jumps before and after the fatigue protocol, which was performed using a bike ergometer. Lower extremity kinematic data were acquired using a three-dimensional motion analysis system. The ratio (%) of each variable between the pre-fatigue and post-fatigue protocols was calculated to compare differences between the groups. [Results] Peak hip and knee flexion angular velocities increased significantly in the fatigue group compared with the control group. Furthermore, hip flexion angular velocity increased significantly between the groups at 40 milliseconds after initial ground contact. [Conclusion] Fatigue reduced the ability to attenuate impact by increasing angular velocities in the direction of hip and knee flexion during landings. These findings indicate a requirement to evaluate movement quality over time by measuring hip and knee flexion angular velocities during landings under fatigue conditions.
A characterization of workflow management systems for extreme-scale applications
Ferreira da Silva, Rafael; Filgueira, Rosa; Pietri, Ilia; ...
2017-02-16
The automation of the execution of computational tasks is at the heart of improving scientific productivity. Over the last few years, scientific workflows have been established as an important abstraction that captures data processing and computation of large and complex scientific applications. By allowing scientists to model and express entire data processing steps and their dependencies, workflow management systems relieve scientists from the details of an application and manage its execution on a computational infrastructure. As the resource requirements of today's computational and data science applications that process vast amounts of data keep increasing, there is a compelling case for a new generation of advances in high-performance computing, commonly termed extreme-scale computing, which will bring forth multiple challenges for the design of workflow applications and management systems. This paper presents a novel characterization of workflow management systems using features commonly associated with extreme-scale computing applications. We classify 15 popular workflow management systems in terms of workflow execution models, heterogeneous computing environments, and data access methods. Finally, the paper also surveys workflow applications and identifies gaps for future research on the road to extreme-scale workflows and management systems.
NASA Astrophysics Data System (ADS)
Wei, Ping; Li, Xinyang; Luo, Xi; Li, Jianfeng
2018-02-01
The centroid method is commonly adopted to locate the spot in the sub-apertures of the Shack-Hartmann wavefront sensor (SH-WFS), and image preprocessing is required before calculating the spot location because the centroid method is extremely sensitive to noise. In this paper, the SH-WFS image was simulated according to the characteristics of the noise, background, and intensity distribution. Optimal parameters of the SH-WFS image preprocessing method were put forward for different signal-to-noise ratio (SNR) conditions, with the wavefront reconstruction error considered as the evaluation index. Two methods of image preprocessing, the thresholding method and windowing combined with thresholding, were compared by studying the applicable range of SNR and analyzing the stability of the two methods, respectively.
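A minimal Python sketch of a thresholded centroid on a synthetic sub-aperture spot, illustrating the preprocessing-plus-centroid step discussed above; the image size, noise level and thresholding rule are assumptions, not the parameters studied in the paper:

import numpy as np

rng = np.random.default_rng(3)
ny, nx = 16, 16
y, x = np.mgrid[0:ny, 0:nx]
spot = np.exp(-((x - 9.3) ** 2 + (y - 6.7) ** 2) / (2 * 1.5 ** 2))   # true spot centre at (x, y) = (9.3, 6.7)
img = spot + rng.normal(0.0, 0.02, spot.shape) + 0.01                # add noise and a uniform background

threshold = img.mean() + 3.0 * img.std()          # assumed thresholding rule
proc = np.where(img > threshold, img - threshold, 0.0)

total = proc.sum()
cx = (x * proc).sum() / total                     # centroid estimate, pixels
cy = (y * proc).sum() / total
print(cx, cy)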
Biota and Biomolecules in Extreme Environments on Earth: Implications for Life Detection on Mars
Aerts, Joost W.; Röling, Wilfred F.M.; Elsaesser, Andreas; Ehrenfreund, Pascale
2014-01-01
The three main requirements for life as we know it are the presence of organic compounds, liquid water, and free energy. Several groups of organic compounds (e.g., amino acids, nucleobases, lipids) occur in all life forms on Earth and are used as diagnostic molecules, i.e., biomarkers, for the characterization of extant or extinct life. Due to their indispensability for life on Earth, these biomarkers are also prime targets in the search for life on Mars. Biomarkers degrade over time; in situ environmental conditions influence the preservation of those molecules. Nonetheless, upon shielding (e.g., by mineral surfaces), particular biomarkers can persist for billions of years, making them of vital importance in answering questions about the origins and limits of life on early Earth and Mars. The search for organic material and biosignatures on Mars is particularly challenging due to the hostile environment and its effect on organic compounds near the surface. In support of life detection on Mars, it is crucial to investigate analogue environments on Earth that resemble best past and present Mars conditions. Terrestrial extreme environments offer a rich source of information allowing us to determine how extreme conditions affect life and molecules associated with it. Extremophilic organisms have adapted to the most stunning conditions on Earth in environments with often unique geological and chemical features. One challenge in detecting biomarkers is to optimize extraction, since organic molecules can be low in abundance and can strongly adsorb to mineral surfaces. Methods and analytical tools in the field of life science are continuously improving. Amplification methods are very useful for the detection of low concentrations of genomic material but most other organic molecules are not prone to amplification methods. Therefore, a great deal depends on the extraction efficiency. The questions “what to look for”, “where to look”, and “how to look for it” require more of our attention to ensure the success of future life detection missions on Mars. PMID:25370528
Biota and biomolecules in extreme environments on Earth: implications for life detection on Mars.
Aerts, Joost W; Röling, Wilfred F M; Elsaesser, Andreas; Ehrenfreund, Pascale
2014-10-13
The three main requirements for life as we know it are the presence of organic compounds, liquid water, and free energy. Several groups of organic compounds (e.g., amino acids, nucleobases, lipids) occur in all life forms on Earth and are used as diagnostic molecules, i.e., biomarkers, for the characterization of extant or extinct life. Due to their indispensability for life on Earth, these biomarkers are also prime targets in the search for life on Mars. Biomarkers degrade over time; in situ environmental conditions influence the preservation of those molecules. Nonetheless, upon shielding (e.g., by mineral surfaces), particular biomarkers can persist for billions of years, making them of vital importance in answering questions about the origins and limits of life on early Earth and Mars. The search for organic material and biosignatures on Mars is particularly challenging due to the hostile environment and its effect on organic compounds near the surface. In support of life detection on Mars, it is crucial to investigate analogue environments on Earth that resemble best past and present Mars conditions. Terrestrial extreme environments offer a rich source of information allowing us to determine how extreme conditions affect life and molecules associated with it. Extremophilic organisms have adapted to the most stunning conditions on Earth in environments with often unique geological and chemical features. One challenge in detecting biomarkers is to optimize extraction, since organic molecules can be low in abundance and can strongly adsorb to mineral surfaces. Methods and analytical tools in the field of life science are continuously improving. Amplification methods are very useful for the detection of low concentrations of genomic material but most other organic molecules are not prone to amplification methods. Therefore, a great deal depends on the extraction efficiency. The questions "what to look for", "where to look", and "how to look for it" require more of our attention to ensure the success of future life detection missions on Mars.
In situ methods for Li-ion battery research: A review of recent developments
NASA Astrophysics Data System (ADS)
Harks, P. P. R. M. L.; Mulder, F. M.; Notten, P. H. L.
2015-08-01
A considerable amount of research is being directed towards improving lithium-ion batteries in order to meet today's market demands. In particular in situ investigations of Li-ion batteries have proven extremely insightful, but require the electrochemical cell to be fully compatible with the conditions of the testing method and are therefore often challenging to execute. Advantageously, in the past few years significant progress has been made with new, more advanced, in situ techniques. Herein, a comprehensive overview of in situ methods for studying Li-ion batteries is given, with the emphasis on new developments and reported experimental highlights.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berco, Dan, E-mail: danny.barkan@gmail.com; Tseng, Tseung-Yuen, E-mail: tseng@cc.nctu.edu.tw
This study presents an evaluation method for resistive random access memory retention reliability based on the Metropolis Monte Carlo algorithm and Gibbs free energy. The method, which does not rely on a time evolution, provides an extremely efficient way to compare the relative retention properties of metal-insulator-metal structures. It requires a small number of iterations and may be used for statistical analysis. The presented approach is used to compare the relative robustness of a single layer ZrO₂ device with a double layer ZnO/ZrO₂ one, and obtain results which are in good agreement with experimental data.
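A generic sketch of the Metropolis acceptance rule that such an evaluation relies on, written in Python; the energy scale and the proposal distribution are illustrative and this is not the authors' device model:

import numpy as np

rng = np.random.default_rng(4)
kT = 0.025                           # eV, roughly room temperature (assumption)
energy = 0.0
for _ in range(10_000):
    delta_g = rng.normal(0.0, 0.05)  # assumed Gibbs free-energy change of a proposed move (eV)
    # Metropolis rule: always accept downhill moves, accept uphill moves with Boltzmann probability
    if delta_g <= 0 or rng.random() < np.exp(-delta_g / kT):
        energy += delta_g
print(energy)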
Bayes plus Brass: Estimating Total Fertility for Many Small Areas from Sparse Census Data
Schmertmann, Carl P.; Cavenaghi, Suzana M.; Assunção, Renato M.; Potter, Joseph E.
2013-01-01
Small-area fertility estimates are valuable for analysing demographic change, and important for local planning and population projection. In countries lacking complete vital registration, however, small-area estimates are possible only from sparse survey or census data that are potentially unreliable. Such estimation requires new methods for old problems: procedures must be automated if thousands of estimates are required, they must deal with extreme sampling variability in many areas, and they should also incorporate corrections for possible data errors. We present a two-step algorithm for estimating total fertility in such circumstances, and we illustrate by applying the method to 2000 Brazilian Census data for over five thousand municipalities. Our proposed algorithm first smoothes local age-specific rates using Empirical Bayes methods, and then applies a new variant of Brass’s P/F parity correction procedure that is robust under conditions of rapid fertility decline. PMID:24143946
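A minimal sketch of the Empirical Bayes smoothing idea in the first step, shrinking synthetic small-area rates toward the overall rate with a gamma-Poisson model; the prior strength is an assumption and the estimator used in the paper may differ:

import numpy as np

rng = np.random.default_rng(5)
exposure = rng.integers(50, 2000, size=10).astype(float)   # women-years in one age group, by area (synthetic)
true_rate = 0.12
births = rng.poisson(true_rate * exposure)                 # observed birth counts (synthetic)

overall_rate = births.sum() / exposure.sum()
prior_strength = 500.0                     # assumed prior "exposure"; larger means more shrinkage
alpha = overall_rate * prior_strength      # gamma prior shape
beta = prior_strength                      # gamma prior rate

raw = births / exposure
smoothed = (births + alpha) / (exposure + beta)   # posterior mean rate under the gamma-Poisson model
print(np.column_stack([raw, smoothed]))           # small areas are pulled toward the overall rate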
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini
2018-04-01
The increasing trend in the intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. Thus, the objective of the present study is the comparison of several statistical methods of extreme value theory (EVT) in order to identify which is the most appropriate for analyzing the behavior of extreme precipitation and of high and low temperature events in the Mediterranean region. The extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques, and consequently both the generalized extreme value (GEV) and generalized Pareto distributions (GPDs) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing the extremes. Moreover, this study evaluates maximum likelihood estimation, the L-moments, and the Bayesian method, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can characterize accurately both precipitation and temperature extreme events. Additionally, the GEV distribution with the Bayesian method is proven to be appropriate, especially for the greatest values of the extremes. Another important objective of this investigation was the estimation of the precipitation and temperature return levels for three return periods (50, 100, and 150 years), classifying the data into groups with similar characteristics. Finally, the return level values were estimated with both the GEV and the GPD and with the three different estimation methods, revealing that the selected method can affect the return level values for both precipitation and temperature.
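A minimal Python sketch of one of the estimator/return-period combinations compared above: a maximum likelihood GEV fit to synthetic block maxima and the corresponding 50-year return level (the data and parameters are invented for illustration):

import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
annual_max = stats.genextreme.rvs(c=-0.1, loc=30.0, scale=8.0, size=60, random_state=rng)  # synthetic annual maxima

c, loc, scale = stats.genextreme.fit(annual_max)         # maximum likelihood fit (SciPy uses c = -shape convention)
T = 50.0
return_level = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)  # level exceeded once in T years on average
print(return_level)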
Tan, Qunyou; Zhang, Li; Zhang, Liangke; Teng, Yongzhen; Zhang, Jingqing
2012-01-01
Pyridostigmine bromide (PTB) is a highly soluble and extremely bitter drug. Here, an economic complexation technology combined with a direct tablet compression method has been developed to meet the requirements of a patient-friendly dosage form known as taste-masked dispersible tablets loaded with PTB (TPDPTs): (1) TPDPTs should have optimal disintegration and good physical resistance (hardness); (2) a low-cost, simple but practical preparation method suitable for industrial production is preferred from a cost perspective. Physicochemical properties of the inclusion complex of PTB with beta-cyclodextrin were investigated by Fourier transform infrared spectroscopy, differential scanning calorimetry and UV spectroscopy. An orthogonal design was chosen to properly formulate TPDPTs. All volunteers regarded the bitterness of TPDPTs as acceptable. The properties of TPDPTs, including disintegration time, weight variation, friability, hardness, dispersible uniformity and drug content, were evaluated. The dissolution profile of TPDPTs in distilled water exhibited a fast rate. Pharmacokinetic results demonstrated that TPDPTs and the commercial tablets were bioequivalent.
Efficient statistically accurate algorithms for the Fokker-Planck equation in large dimensions
NASA Astrophysics Data System (ADS)
Chen, Nan; Majda, Andrew J.
2018-02-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace and is therefore computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Different from traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method only requires on the order of O(100) ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
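A toy Python sketch of the hybrid mixture idea, using synthetic ensemble statistics; the conditional means and covariance are assumed to be given (in the actual method they come from closed analytical formulae), and this is not the authors' solver:

import numpy as np
from scipy.stats import multivariate_normal, gaussian_kde

rng = np.random.default_rng(8)
n_ens = 100
cond_means = rng.normal(0.0, 1.0, (n_ens, 3))   # per-member conditional means in the "high-dimensional" variables
cond_cov = 0.2 * np.eye(3)                      # shared conditional covariance (assumption)

def mixture_pdf(x):
    # full PDF in the high-dimensional subspace: average of the per-member conditional Gaussians
    return np.mean([multivariate_normal.pdf(x, mean=m, cov=cond_cov) for m in cond_means])

low_dim_samples = rng.standard_t(df=3, size=n_ens)   # fat-tailed "observed" low-dimensional variable
kde = gaussian_kde(low_dim_samples)                  # non-parametric kernel density estimate in that subspace

print(mixture_pdf(np.zeros(3)), kde(0.0))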
A practical deconvolution algorithm in multi-fiber spectra extraction
NASA Astrophysics Data System (ADS)
Zhang, Haotong; Li, Guangwei; Bai, Zhongrui
2015-08-01
The deconvolution algorithm is a very promising method in multi-fiber spectroscopy data reduction; it can extract spectra to the photon noise level as well as improve the spectral resolution, but, as mentioned in Bolton & Schlegel (2010), it is limited by its huge computation requirement and thus cannot be implemented directly in actual data reduction. We develop a practical algorithm to solve the computation problem. The new algorithm can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. We further consider the influence of noise, which makes deconvolution an intrinsically ill-posed problem. We modify our method with a Tikhonov regularization term to suppress the noise induced by the method. A series of simulations based on LAMOST data are carried out to test our method under more realistic situations with Poisson noise and extreme cross talk, i.e., where the fiber-to-fiber distance is comparable to the FWHM of the fiber profile. Compared with the results of traditional extraction methods, i.e., the aperture extraction method and the profile fitting method, our method shows both higher S/N and higher spectral resolution. The computation time for a noise-added image with 250 fibers and 4k pixels in the wavelength direction is about 2 hours when the fiber cross talk is not in the extreme case, and 3.5 hours for extreme fiber cross talk. We finally apply our method to real LAMOST data. We find that the 1D spectrum extracted by our method has both higher SNR and resolution than those from the traditional methods, but there are still some suspicious weak features, possibly caused by the noise sensitivity of the method, around the strong emission lines. How to further attenuate the noise influence will be the topic of our future work. As we have demonstrated, multi-fiber spectra extracted by our method will have higher resolution and signal to noise ratio and thus will provide more accurate information (such as higher radial velocity and metallicity measurement accuracy in stellar physics) to astronomers than traditional methods.
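A tiny synthetic example of Tikhonov-regularised extraction, minimising ||Af - b||² + λ||f||², as a stand-in for the regularised deconvolution described above; the PSF widths, noise level and λ are assumptions, and the real problem is far larger:

import numpy as np

rng = np.random.default_rng(7)
n_pix, n_flux = 120, 40
centers = np.linspace(5, n_pix - 5, n_flux)
rows = np.arange(n_pix)[:, None]
A = np.exp(-0.5 * ((rows - centers[None, :]) / 2.0) ** 2)   # columns are Gaussian PSF profiles

true_flux = 1.0 + 0.5 * np.sin(np.linspace(0, 6, n_flux))
b = A @ true_flux + rng.normal(0.0, 0.05, n_pix)            # noisy detector column

lam = 1e-2                                                   # regularisation strength (assumed)
lhs = A.T @ A + lam * np.eye(n_flux)                         # normal equations with Tikhonov term
flux_hat = np.linalg.solve(lhs, A.T @ b)
print(np.abs(flux_hat - true_flux).max())                    # recovery error of the extracted spectrum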
Digital image profilers for detecting faint sources which have bright companions
NASA Technical Reports Server (NTRS)
Morris, Elena; Flint, Graham; Slavey, Robert
1992-01-01
For this program, an image profiling system was developed which offers the potential for detecting extremely faint optical sources that are located in close proximity to bright companions. The approach employed is novel in three respects. First, it does not require an optical system wherein extraordinary measures must be taken to minimize diffraction and scatter. Second, it does not require detectors possessing either extreme uniformity in sensitivity or extreme temporal stability. Finally, the system can readily be calibrated, or nulled, in space by testing against an unresolved singular stellar source.
Extremely low-outgassing material: 0.2% beryllium copper alloy
NASA Astrophysics Data System (ADS)
Watanabe, Fumio
2004-01-01
Exploration for low-outgassing materials for use in ultrahigh vacuum and extreme high-vacuum systems is one of the most important topics for a vacuum researcher. We have found that a copper alloy containing 0.2% beryllium (0.2% BeCu) can attain an extremely low hydrogen outgassing rate on the order of 10⁻¹⁴ Pa (H₂) m/s. Almost the entire surface of 0.2% BeCu is dominated by a BeO layer after a 400 °C×72 h prebakeout treatment in an ultrahigh vacuum. This layer functions as a barrier to the processes of oxidization and permeation of hydrogen. In addition, this layer resists carbon contamination. Temperature-programmed desorption spectra show only a single peak for water at 150 °C and small quantities of any other desorption gases. Therefore, an in situ bakeout process in which the temperature simply ramps up to 150 °C and immediately ramps back down is enough for degassing; it does not require an ordinary sustained-temperature bakeout. Using an outgassing sample consisting of 0.2% BeCu disks housed in a 0.2% BeCu nipple chamber, a lowest outgassing rate of 5.6×10⁻¹⁴ Pa (H₂) m/s was measured by the pressure-rise method after pump cutoff. The pressure-rise versus time curve was completely nonlinear. It rises over time to a constant slope of 1/2 in a log-log plot, due to hydrogen diffusion from the bulk, but this requires over a week at room temperature. The hydrogen outgassing from the 0.2% BeCu bulk is completely dominated by a diffusion-limited mechanism. This article will describe why we obtain such low outgassing rates with 0.2% BeCu. It is based on the observed surface changes with prebakeout treatment seen by x-ray photoelectron spectroscopy, and the improvement of hydrogen outgassing measurements by the pressure-rise method. A comparison is made to ordinary stainless steel. In addition, the concept of an outgassing reduction method will be discussed from a review of the published ultralow-outgassing data and reduction methods.
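A short worked example of the pressure-rise estimate of an outgassing rate, q = V(dP/dt)/A, with assumed numbers rather than the paper's apparatus dimensions:

# Pressure-rise estimate of a specific outgassing rate, q = V * (dP/dt) / A
# Units: Pa * m^3 / (s * m^2) = Pa * m / s
volume = 1.0e-3            # chamber volume, m^3 (assumption)
area = 5.0e-2              # outgassing surface area, m^2 (assumption)
dP = 2.8e-9                # pressure rise in Pa over the interval below (assumption)
dt = 1.0e3                 # interval length, s

q = volume * dP / (dt * area)
print(q)                   # ~5.6e-14 Pa·m/s for these illustrative numbers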
Translations on USSR Science and Technology, Physical Sciences and Technology, Number 16
1977-08-05
Investigation of splitting of light nuclei with high-energy γ-rays with the method of Wilson's chamber operating in powerful beams of electronic ... boast high reliability, high speed, and extremely modest power requirements. Information on the screen: visual display devices greatly facilitate ... The area of application of these units includes navigation, control of power systems, machine tools, and manufacturing processes. The capabilities of
A Review of United States Air Force and Department of Defense Aerospace Propulsion Needs
2006-01-01
... evolved expendable launch vehicle; EHF, extremely high frequency; EMA, electromechanical actuator; EMDP, engine model derivative program; EMTVA ... condition. A key aspect of the model was which of the two methods was used—parameters of the system or propulsion variables produced in the design ... models for turbopump analysis and design. In addition, the skills required to design a high-performance turbopump are very specialized and must be
NASA Astrophysics Data System (ADS)
Hollingsworth, Peter Michael
The drive toward robust systems design, especially with respect to system affordability throughout the system life-cycle, has led to the development of several advanced design methods. While these methods have been extremely successful in satisfying the needs for which they were developed, they inherently leave a critical area unaddressed. None of them fully considers the effect of requirements on the selection of solution systems. The goal of all current modern design methodologies is to bring knowledge forward in the design process to the regions where more design freedom is available and design changes cost less. Therefore, it seems reasonable to consider the point in the design process where the greatest restrictions are placed on the final design: the point at which the system-level requirements are set. Historically, the requirements have been treated as something handed down from above. However, neither the customer nor the solution provider completely understands all of the options available in the broader requirements space. If a method were developed that provided the ability to understand the full scope of the requirements space, it would allow for a better comparison of potential solution systems with respect to both the current and potential future requirements. The key to a requirements-conscious method is to treat requirements differently from the traditional approach. The method proposed herein is known as Requirements Controlled Design (RCD). By treating the requirements as a set of variables that control the behavior of the system, instead of variables that only define the response of the system, it is possible to determine a priori what portions of the requirements space any given system is capable of satisfying. Additionally, it should be possible to identify which systems can satisfy a given set of requirements and the locations where a small change in one or more requirements poses a significant risk to a design program. This thesis puts forth the theory and methodology to enable RCD, and details and validates a specific method called the Modified Strength Pareto Evolutionary Algorithm (MSPEA).
NASA Technical Reports Server (NTRS)
Roth, Don J.; Kautz, Harold E.; Abel, Phillip B.; Whalen, Mike F.; Hendricks, J. Lynne; Bodis, James R.
2000-01-01
Surface topography, which significantly affects the performance of many industrial components, is normally measured with diamond-tip profilometry over small areas or with optical scattering methods over larger areas. To develop air-coupled surface profilometry, the NASA Glenn Research Center at Lewis Field initiated a Space Act Agreement with Sonix, Inc., through two Glenn programs, the Advanced High Temperature Engine Materials Program (HITEMP) and COMMTECH. The work resulted in quantitative surface topography profiles obtained using only high-frequency, focused ultrasonic pulses in air. The method is nondestructive, noninvasive, and noncontact, and it does not require light-reflective surfaces. Air surface profiling may be desirable when diamond-tip or laser-based methods are impractical, such as over large areas, when a significant depth range is required, or for curved surfaces. When the configuration is optimized, the method is reasonably rapid, and all the quantitative analysis facilities are online, including two- and three-dimensional visualization, extreme value filtering (for faulty data), and leveling.
SpF: Enabling Petascale Performance for Pseudospectral Dynamo Models
NASA Astrophysics Data System (ADS)
Jiang, W.; Clune, T.; Vriesema, J.; Gutmann, G.
2013-12-01
Pseudospectral (PS) methods possess a number of characteristics (e.g., efficiency, accuracy, natural boundary conditions) that are extremely desirable for dynamo models. Unfortunately, dynamo models based upon PS methods face a number of daunting challenges, which include exposing additional parallelism, leveraging hardware accelerators, exploiting hybrid parallelism, and improving the scalability of global memory transposes. Although these issues are a concern for most models, solutions for PS methods tend to require far more pervasive changes to underlying data and control structures. Further, improvements in performance in one model are difficult to transfer to other models, resulting in significant duplication of effort across the research community. We have developed an extensible software framework for pseudospectral methods called SpF that is intended to enable extreme scalability and optimal performance. High-level abstractions provided by SpF unburden applications of the responsibility of managing domain decomposition and load balance while reducing the changes in code required to adapt to new computing architectures. The key design concept in SpF is that each phase of the numerical calculation is partitioned into disjoint numerical 'kernels' that can be performed entirely in-processor. The granularity of domain-decomposition provided by SpF is only constrained by the data-locality requirements of these kernels. SpF builds on top of optimized vendor libraries for common numerical operations such as transforms, matrix solvers, etc., but can also be configured to use open source alternatives for portability. SpF includes several alternative schemes for global data redistribution and is expected to serve as an ideal testbed for further research into optimal approaches for different network architectures. In this presentation, we will describe the basic architecture of SpF as well as preliminary performance data and experience with adapting legacy dynamo codes. We will conclude with a discussion of planned extensions to SpF that will provide pseudospectral applications with additional flexibility with regard to time integration, linear solvers, and discretization in the radial direction.
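To make the kernel-plus-transpose pattern concrete, here is a serial sketch of the underlying idea (not SpF's actual interface): each stage transforms only the axis it holds completely in memory, and a transpose stands in for the distributed all-to-all redistribution that a framework like SpF would manage.

```python
import numpy as np

# Serial stand-in for the kernel-plus-transpose pattern; np.fft replaces the
# optimized vendor transforms and .T.copy() stands in for the distributed
# all-to-all transpose a parallel framework would manage.
nx, ny = 256, 128
field = np.random.rand(nx, ny)

# Kernel 1: 1-D transforms along the axis each "pencil" holds completely (y)
stage1 = np.fft.fft(field, axis=1)

# Global redistribution: re-partition so the other axis becomes local
stage1_t = stage1.T.copy()

# Kernel 2: 1-D transforms along the newly local axis (the original x)
spectrum = np.fft.fft(stage1_t, axis=1).T

# Same result as a direct 2-D transform
assert np.allclose(spectrum, np.fft.fft2(field))
```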
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
Stress fractures of the ribs and upper extremities: causation, evaluation, and management.
Miller, Timothy L; Harris, Joshua D; Kaeding, Christopher C
2013-08-01
Stress fractures are common troublesome injuries in athletes and non-athletes. Historically, stress fractures have been thought to predominate in the lower extremities secondary to the repetitive stresses of impact loading. Stress injuries of the ribs and upper extremities are much less common and often unrecognized. Consequently, these injuries are often omitted from the differential diagnosis of rib or upper extremity pain. Given the infrequency of this diagnosis, few case reports or case series have reported on their precipitating activities and common locations. Appropriate evaluation for these injuries requires a thorough history and physical examination. Radiographs may be negative early, requiring bone scintigraphy or MRI to confirm the diagnosis. Nonoperative and operative treatment recommendations are made based on location, injury classification, and causative activity. An understanding of the most common locations of upper extremity stress fractures and their associated causative activities is essential for prompt diagnosis and optimal treatment.
Analysis of the dependence of extreme rainfalls
NASA Astrophysics Data System (ADS)
Padoan, Simone; Ancey, Christophe; Parlange, Marc
2010-05-01
The aim of spatial analysis is to quantitatively describe the behavior of environmental phenomena such as precipitation levels, wind speed, or daily temperatures. A number of generic approaches to spatial modeling have been developed [1], but these are not necessarily ideal for handling extremal aspects given their focus on mean process levels. The areal modelling of the extremes of a natural process observed at points in space is important in environmental statistics; for example, understanding extremal spatial rainfall is crucial in flood protection. In light of recent concerns over climate change, the use of robust mathematical and statistical methods for such analyses has grown in importance. Multivariate extreme value models and the class of max-stable processes [2] have a similar asymptotic motivation to the univariate Generalized Extreme Value (GEV) distribution, but provide a general approach to modeling extreme processes that incorporates temporal or spatial dependence. Statistical methods for max-stable processes and data analyses of practical problems are discussed by [3] and [4]. This work illustrates methods for the statistical modelling of spatial extremes and gives examples of their use by means of an extremal data analysis of Swiss precipitation levels. [1] Cressie, N. A. C. (1993). Statistics for Spatial Data. Wiley, New York. [2] de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. Springer, USA. [3] Padoan, S. A., Ribatet, M. and Sisson, S. A. (2009). Likelihood-Based Inference for Max-Stable Processes. Journal of the American Statistical Association, Theory & Methods. In press. [4] Davison, A. C. and Gholamrezaee, M. (2009). Geostatistics of Extremes. Journal of the Royal Statistical Society, Series B. To appear.
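As a marginal building block for such analyses, a univariate GEV fit to site-level annual maxima can be sketched as follows; this uses scipy's genextreme (whose shape parameter carries the opposite sign to the usual GEV ξ) on synthetic data, and it is not the authors' max-stable procedure.

```python
import numpy as np
from scipy import stats

# Synthetic annual precipitation maxima at one site [mm/day]
rng = np.random.default_rng(0)
annual_maxima = stats.genextreme.rvs(c=-0.1, loc=60.0, scale=15.0,
                                     size=50, random_state=rng)

# Fit the GEV; scipy's c equals -xi relative to the usual GEV shape convention
shape, loc, scale = stats.genextreme.fit(annual_maxima)

# 100-year return level: the 0.99 quantile of the fitted annual-maximum law
rl_100 = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
print(f"fitted shape={shape:.2f}, loc={loc:.1f}, scale={scale:.1f}")
print(f"estimated 100-year return level: {rl_100:.1f} mm/day")
```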
NASA Astrophysics Data System (ADS)
Tichý, Vladimír; Hudec, René; Němcová, Šárka
2016-06-01
The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to simulate. The method presented simulates only a few rays and is therefore extremely efficient. Moreover, to simplify the equations, a specific mathematical formalism is used. Only a few simple equations are used, so the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.
Magneto-optical cooling of atoms.
Raizen, Mark G; Budker, Dmitry; Rochester, Simon M; Narevicius, Julia; Narevicius, Edvardas
2014-08-01
We propose an alternative method to laser cooling. Our approach utilizes the extreme brightness of a supersonic atomic beam, and the adiabatic atomic coilgun to slow atoms in the beam or to bring them to rest. We show how internal-state optical pumping and stimulated optical transitions, combined with magnetic forces, can be used to cool the translational motion of atoms. This approach does not rely on momentum transfer from photons to atoms, as in laser cooling. We predict that our method can surpass laser cooling in terms of flux of ultracold atoms and phase-space density, with lower required laser power.
Rasch validation of the Arabic version of the lower extremity functional scale.
Alnahdi, Ali H
2018-02-01
The purpose of this study was to examine the internal construct validity of the Arabic version of the Lower Extremity Functional Scale (20-item Arabic LEFS) using Rasch analysis. Patients (n = 170) with lower extremity musculoskeletal dysfunction were recruited. Rasch analysis of the 20-item Arabic LEFS was performed. Once the initial Rasch analysis indicated that the 20-item Arabic LEFS did not fit the Rasch model, follow-up analyses were conducted to improve the fit of the scale to the Rasch measurement model. These modifications included removing misfitting individuals, changing the item scoring structure, removing misfitting items, and addressing bias caused by response dependency between items and differential item functioning (DIF). Initial analysis indicated deviation of the 20-item Arabic LEFS from the Rasch model. Disordered thresholds in eight items and response dependency between six items were detected, and the scale as a whole did not meet the requirement of unidimensionality. Refinements led to a 15-item Arabic LEFS that demonstrated excellent internal consistency (person separation index [PSI] = 0.92) and satisfied all the requirements of the Rasch model. Rasch analysis did not support the 20-item Arabic LEFS as a unidimensional measure of lower extremity function. The refined 15-item Arabic LEFS met all the requirements of the Rasch model and hence is a valid objective measure of lower extremity function. The Rasch-validated 15-item Arabic LEFS needs to be further tested in an independent sample to confirm its fit to the Rasch measurement model. Implications for Rehabilitation: The validity of the 20-item Arabic Lower Extremity Functional Scale to measure lower extremity function is not supported. The 15-item Arabic version of the LEFS is a valid measure of lower extremity function and can be used to quantify lower extremity function in patients with lower extremity musculoskeletal disorders.
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo
2016-09-01
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
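A rough sketch of the transformed-stationary workflow (not the tsEva toolbox itself) might look like the following; the window length, the synthetic series, and the back-transform step are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def running_stats(x, window):
    """Running mean and std via a simple moving-average (low-pass) filter."""
    kernel = np.ones(window) / window
    mean = np.convolve(x, kernel, mode="same")
    var = np.convolve((x - mean) ** 2, kernel, mode="same")
    return mean, np.sqrt(var)

# Synthetic daily series with a slow trend in both mean and variability
t = np.arange(40 * 365)
x = (2.0 + 0.0001 * t) + (1.0 + 0.00005 * t) * np.random.default_rng(1).gumbel(size=t.size)

# (i) transform to an approximately stationary series
mu_t, sigma_t = running_stats(x, window=5 * 365)
y = (x - mu_t) / sigma_t

# stationary EVA on annual maxima of the transformed series
annual_max = y.reshape(40, 365).max(axis=1)
shape, loc, scale = stats.genextreme.fit(annual_max)

# (ii) reverse-transform a return level to the non-stationary scale at time t0
t0 = -1                                   # e.g. end of the record
rl_stationary = stats.genextreme.ppf(0.99, shape, loc=loc, scale=scale)
rl_t0 = mu_t[t0] + sigma_t[t0] * rl_stationary
print(f"100-year return level at end of record: {rl_t0:.2f}")
```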
Sustained Aeration of Infant Lungs (SAIL) trial: study protocol for a randomized controlled trial.
Foglia, Elizabeth E; Owen, Louise S; Thio, Marta; Ratcliffe, Sarah J; Lista, Gianluca; Te Pas, Arjan; Hummler, Helmut; Nadkarni, Vinay; Ades, Anne; Posencheg, Michael; Keszler, Martin; Davis, Peter; Kirpalani, Haresh
2015-03-15
Extremely preterm infants require assistance recruiting the lung to establish a functional residual capacity after birth. Sustained inflation (SI) combined with positive end expiratory pressure (PEEP) may be a superior method of aerating the lung compared with intermittent positive pressure ventilation (IPPV) with PEEP in extremely preterm infants. The Sustained Aeration of Infant Lungs (SAIL) trial was designed to study this question. This multisite prospective randomized controlled unblinded trial will recruit 600 infants of 23 to 26 weeks gestational age who require respiratory support at birth. Infants in both arms will be treated with PEEP of 5 to 7 cm H2O throughout the resuscitation. The study intervention consists of performing an initial SI (20 cm H2O for 15 seconds) followed by a second SI (25 cm H2O for 15 seconds), and then PEEP with or without IPPV, as needed. The control group will be treated with initial IPPV with PEEP. The primary outcome is the combined endpoint of bronchopulmonary dysplasia or death at 36 weeks post-menstrual age. www.clinicaltrials.gov, trial identifier NCT02139800, registered 13 May 2014.
Development of a miniature Stirling cryocooler for LWIR small satellite applications
NASA Astrophysics Data System (ADS)
Kirkconnell, C. S.; Hon, R. C.; Perella, M. D.; Crittenden, T. M.; Ghiaasiaan, S. M.
2017-05-01
The optimum small satellite (SmallSat) cryocooler system must be extremely compact and lightweight, achieved in this paper by operating a linear cryocooler at a frequency of approximately 300 Hz. Operation at this frequency, which is well in excess of the 100-150 Hz reported in recent papers on related efforts, requires an evolution beyond the traditional Oxford-class, flexure-based methods of setting the mechanical resonance. A novel approach that optimizes the electromagnetic design and the mechanical design together to simultaneously achieve the required dynamic and thermodynamic performances is described. Since highly miniaturized pulse tube coolers are fundamentally ill-suited for the sub-80K temperature range of interest because the boundary layer losses inside the pulse tube become dominant at the associated very small pulse tube size, a moving displacer Stirling cryocooler architecture is used. Compact compressor mechanisms developed on a previous program are reused for this design, and they have been adapted to yield an extremely compact Stirling warm end motor mechanism. Supporting thermodynamic and electromagnetic analysis results are reported.
Receive Mode Analysis and Design of Microstrip Reflectarrays
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam
2011-01-01
Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.
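For orientation, the phase each element must supply for a single collimated beam follows the standard reflectarray relation (compensate the feed path length, then add the progressive phase for the chosen beam direction); the sketch below applies that textbook relation with hypothetical geometry and is not the receive-mode code described above.

```python
import numpy as np

# Standard reflectarray phase relation with hypothetical geometry.
c = 3.0e8
freq = 32.0e9                      # example Ka-band frequency [Hz]
k0 = 2 * np.pi * freq / c

feed = np.array([0.0, 0.0, 0.30])  # feed phase centre [m]
beam = np.array([0.0, np.sin(np.radians(25.0)), np.cos(np.radians(25.0))])  # beam unit vector

# Element grid in the reflectarray plane z = 0 (half-wavelength spacing)
d = 0.5 * c / freq
xs = np.arange(-20, 21) * d
ys = np.arange(-20, 21) * d
X, Y = np.meshgrid(xs, ys)
elements = np.stack([X, Y, np.zeros_like(X)], axis=-1)

# Required phase: compensate the feed path delay, add the progressive phase
# that steers the beam toward 'beam' (folded into [0, 2*pi))
feed_path = np.linalg.norm(elements - feed, axis=-1)
phase_req = np.mod(k0 * (feed_path - elements @ beam), 2 * np.pi)

print(phase_req.shape)             # (41, 41) map of required phases [rad]
```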
Almonroeder, Thomas Gus; Kernozek, Thomas; Cobb, Stephen; Slavens, Brooke; Wang, Jinsung; Huddleston, Wendy
2018-05-01
Study Design Cross-sectional study. Background The drop vertical jump task is commonly used to screen for anterior cruciate ligament injury risk; however, its predictive validity is limited. The limited predictive validity of the drop vertical jump task may be due to not imposing the cognitive demands that reflect sports participation. Objectives To investigate the influence of additional cognitive demands on lower extremity mechanics during execution of the drop vertical jump task. Methods Twenty uninjured women (age range, 18-25 years) were required to perform the standard drop vertical jump task, as well as drop vertical jumps that included additional cognitive demands. The additional cognitive demands were related to attending to an overhead goal (ball suspended overhead) and/or temporal constraints on movement selection (decision making). Three-dimensional ground reaction forces and lower extremity mechanics were compared between conditions. Results The inclusion of the overhead goal resulted in higher peak vertical ground reaction forces and lower peak knee flexion angles in comparison to the standard drop vertical jump task. In addition, participants demonstrated greater peak knee abduction angles when trials incorporated temporal constraints on decision making and/or required participants to attend to an overhead goal, in comparison to the standard drop vertical jump task. Conclusion Imposing additional cognitive demands during execution of the drop vertical jump task influenced lower extremity mechanics in a manner that suggested increased loading of the anterior cruciate ligament. Tasks utilized in anterior cruciate ligament injury risk screening may benefit from more closely reflecting the cognitive demands of the sports environment. J Orthop Sports Phys Ther 2018;48(5):381-387. Epub 10 Jan 2018. doi:10.2519/jospt.2018.7739.
Estimating extreme river discharges in Europe through a Bayesian network
NASA Astrophysics Data System (ADS)
Paprotny, Dominik; Morales-Nápoles, Oswaldo
2017-06-01
Large-scale hydrological modelling of flood hazards requires adequate extreme discharge data. In practice, models based on physics are applied alongside those utilizing only statistical analysis. The former require enormous computational power, while the latter are mostly limited in accuracy and spatial coverage. In this paper we introduce an alternative statistical approach based on Bayesian networks (BNs), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables representing the geographical characteristics of their catchments. Annual maxima of daily discharges from more than 1800 river gauges (stations with catchment areas ranging from 1.4 to 807 000 km2) were collected, together with information on terrain, land use and local climate. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that using this method, mean annual maxima and return periods of discharges could be estimated with an accuracy similar to existing studies using physical models for Europe and better than a comparable global statistical model. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods, and remains the same in a split-sample validation. Though discharge prediction under climate change is not the main scope of this paper, the BN was applied to a large domain covering all sizes of rivers in the continent both for present and future climate, as an example. Results show substantial variation in the influence of climate change on river discharges. The model can be used to provide quick estimates of extreme discharges at any location for the purpose of obtaining input information for hydraulic modelling.
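The dependence modelling in such a network rests on bivariate (normal) copulas between variables; a toy sketch of that building block, fitted on ranks and used for a conditional estimate, is shown below with entirely synthetic data and is not the authors' model.

```python
import numpy as np
from scipy import stats

# Synthetic catchment descriptor and annual-maximum discharge
rng = np.random.default_rng(2)
area = rng.lognormal(mean=6.0, sigma=1.5, size=500)           # km^2
discharge = 0.5 * area ** 0.8 * rng.lognormal(0.0, 0.4, 500)  # m^3/s

def normal_scores(x):
    """Transform a margin to standard normal scores via empirical ranks."""
    ranks = stats.rankdata(x) / (len(x) + 1)
    return stats.norm.ppf(ranks)

z_a, z_q = normal_scores(area), normal_scores(discharge)
rho = np.corrcoef(z_a, z_q)[0, 1]                 # Gaussian-copula correlation

# Conditional median of discharge given a new catchment area of 1000 km^2
u_new = stats.percentileofscore(area, 1000.0) / 100.0
z_new = stats.norm.ppf(u_new)
z_q_med = rho * z_new                             # conditional median in z-space
u_q_med = stats.norm.cdf(z_q_med)
q_med = np.quantile(discharge, u_q_med)           # back to physical units
print(f"copula correlation: {rho:.2f}, conditional median discharge ~ {q_med:.1f} m^3/s")
```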
NASA Astrophysics Data System (ADS)
Durfee, David; Johnson, Walter; McLeod, Scott
2007-04-01
Uncooled microbolometer sensors used in modern infrared night vision systems such as driver vehicle enhancement (DVE) or thermal weapons sights (TWS) require a mechanical shutter. Although much consideration is given to the performance requirements of the sensor, supporting electronic components, and imaging optics, the shutter technology required to survive in combat is typically the last consideration in the system design. Electro-mechanical shutters used in military IR applications must be reliable in temperature extremes from a low of -40°C to a high of +70°C. They must be extremely lightweight while having the ability to withstand the high vibration and shock forces associated with systems mounted in military combat vehicles, weapon telescopic sights, or downed unmanned aerial vehicles (UAVs). Electro-mechanical shutters must have minimal power consumption and contain circuitry integrated into the shutter to manage battery power while simultaneously adapting to changes in electrical component operating parameters caused by extreme temperature variations. The technology required to produce a miniature electro-mechanical shutter capable of fitting into a rifle scope with these capabilities requires innovations in mechanical design, material science, and electronics. This paper describes a new, miniature electro-mechanical shutter technology with integrated power management electronics designed for extreme-service infrared night vision systems.
Patel, Jigna; Qiu, Qinyin; Yarossi, Mathew; Merians, Alma; Massood, Supriya; Tunik, Eugene; Adamovich, Sergei; Fluet, Gerard
2017-07-01
Explore the potential benefits of using priming methods prior to an active hand task in the acute phase post-stroke in persons with severe upper extremity hemiparesis. Five individuals were trained using priming techniques including virtual reality (VR) based visual mirror feedback and contralaterally controlled passive movement strategies prior to training with an active pinch force modulation task. Clinical, kinetic, and neurophysiological measurements were taken pre and post the training period. Clinical measures were taken at six months post training. The two priming simulations and active training were well tolerated early after stroke. Priming effects were suggested by increased maximal pinch force immediately after visual and movement based priming. Despite having no clinically observable movement distally, the subjects were able to volitionally coordinate isometric force and muscle activity (EMG) in a pinch tracing task. The Root Mean Square Error (RMSE) of force during the pinch trace task gradually decreased over the training period suggesting learning may have occurred. Changes in motor cortical neurophysiology were seen in the unaffected hemisphere using Transcranial Magnetic Stimulation (TMS) mapping. Significant improvements in motor recovery as measured by the Action Research Arm Test (ARAT) and the Upper Extremity Fugl Meyer Assessment (UEFMA) were demonstrated at six months post training by three of the five subjects. This study suggests that an early hand-based intervention using visual and movement based priming activities and a scaled motor task allows participation by persons without the motor control required for traditionally presented rehabilitation and testing. Implications for Rehabilitation Rehabilitation of individuals with severely paretic upper extremities after stroke is challenging due to limited movement capacity and few options for therapeutic training. Long-term functional recovery of the arm after stroke depends on early return of active hand control, establishing a need for acute training methods focused distally. This study demonstrates the feasibility of an early hand-based intervention using virtual reality based priming and scaled motor activities which can allow for participation by persons without the motor control required for traditionally presented rehabilitation and testing.
Extremal Correlators in the AdS/CFT Correspondence
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Freedman, Daniel Z.; Mathur, Samir D.; Matusis, Alec; Rastelli, Leonardo
The non-renormalization of the 3-point functions
Attribution of extreme weather and climate-related events.
Stott, Peter A; Christidis, Nikolaos; Otto, Friederike E L; Sun, Ying; Vanderlinden, Jean-Paul; van Oldenborgh, Geert Jan; Vautard, Robert; von Storch, Hans; Walton, Peter; Yiou, Pascal; Zwiers, Francis W
2016-01-01
Extreme weather and climate-related events occur in a particular place, by definition, infrequently. It is therefore challenging to detect systematic changes in their occurrence given the relative shortness of observational records. However, there is a clear interest from outside the climate science community in the extent to which recent damaging extreme events can be linked to human-induced climate change or natural climate variability. Event attribution studies seek to determine to what extent anthropogenic climate change has altered the probability or magnitude of particular events. They have shown clear evidence for human influence having increased the probability of many extremely warm seasonal temperatures and reduced the probability of extremely cold seasonal temperatures in many parts of the world. The evidence for human influence on the probability of extreme precipitation events, droughts, and storms is more mixed. Although the science of event attribution has developed rapidly in recent years, geographical coverage of events remains patchy and based on the interests and capabilities of individual research groups. The development of operational event attribution would allow a more timely and methodical production of attribution assessments than currently obtained on an ad hoc basis. For event attribution assessments to be most useful, remaining scientific uncertainties need to be robustly assessed and the results clearly communicated. This requires the continuing development of methodologies to assess the reliability of event attribution results and further work to understand the potential utility of event attribution for stakeholder groups and decision makers. WIREs Clim Change 2016, 7:23-41. doi: 10.1002/wcc.380 For further resources related to this article, please visit the WIREs website.
Optical characterization of high speed microscanners based on static slit profiling method
NASA Astrophysics Data System (ADS)
Alaa Elhady, A.; Sabry, Yasser M.; Khalil, Diaa
2017-01-01
Optical characterization of high-speed microscanners is a challenging task that usually requires special high-speed, extremely expensive camera systems. This paper presents a novel, simple method to characterize the scanned beam spot profile and size in high-speed optical scanners under operation. It allows measuring the beam profile and the spot sizes at different scanning angles. The method is analyzed theoretically and applied experimentally to the characterization of a Micro-Electro-Mechanical Systems (MEMS) scanner operating at 2.6 kHz. The variation of the spot size versus the scanning angle, up to ±15°, is extracted and the dynamic bending curvature effect of the micromirror is predicted.
Pruitt, Valerie M
2006-01-01
Work-related upper extremity burns often occur. The cause directs the course of action. Thermal burns should be assessed for system alterations, and depth of burn should be determined. Deep partial-thickness burns and more severe burns require a specialist evaluation. Chemical burns must be irrigated and the agent identified. Some chemical burns, such as those that involve phenols and metal fragments, require specific topical applications before water lavage. Hydrofluoric acid burns can cause life-threatening electrolyte abnormalities with a small, highly concentrated acid burn. The goal with any extremity burn is to provide the patient with a multidisciplinary team approach to achieve a functional, usable extremity.
An Automatic Orthonormalization Method for Solving Stiff Boundary-Value Problems
NASA Astrophysics Data System (ADS)
Davey, A.
1983-08-01
A new initial-value method is described, based on a remark by Drury, for solving stiff linear differential two-point eigenvalue and boundary-value problems. The method is extremely reliable, it is especially suitable for high-order differential systems, and it is capable of accommodating realms of stiffness which other methods cannot reach. The key idea behind the method is to decompose the stiff differential operator into two non-stiff operators, one of which is nonlinear. The nonlinear one is specially chosen so that it advances an orthonormal frame; indeed, the method is essentially a kind of automatic orthonormalization. The second operator is auxiliary, but it is needed to determine the required function. The usefulness of the method is demonstrated by calculating some eigenfunctions for an Orr-Sommerfeld problem when the Reynolds number is as large as 10°.
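The flavor of the orthonormalization idea can be illustrated with the classical approach that the method automates: march several independent solutions of a stiff linear system and periodically re-orthonormalize them. The sketch below is only illustrative (the test operator, leg length, and tolerances are made up, and it is not Davey's exact scheme).

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(x, Y_flat, A):
    """Right-hand side for two solution vectors stored as matrix columns."""
    Y = Y_flat.reshape(4, 2)
    return (A @ Y).ravel()

# Stiff linear operator with widely separated growth rates (hypothetical)
A = np.diag([-50.0, -1.0, 1.0, 50.0])

# Start from two orthonormal solution vectors
Y = np.linalg.qr(np.random.default_rng(3).standard_normal((4, 2)))[0]
x = 0.0
for _ in range(20):                       # 20 short integration legs
    sol = solve_ivp(rhs, (x, x + 0.1), Y.ravel(), args=(A,), rtol=1e-8)
    x = sol.t[-1]
    Y = sol.y[:, -1].reshape(4, 2)
    Y, _ = np.linalg.qr(Y)                # re-orthonormalize to control growth
print("columns remain orthonormal:", np.allclose(Y.T @ Y, np.eye(2)))
```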
Optimal dynamic remapping of parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Reynolds, Paul F., Jr.
1987-01-01
A large class of computations is characterized by a sequence of phases, with phase changes occurring unpredictably. We consider the decision problem of remapping workload to processors in a parallel computation when the utility of remapping and the future behavior of the workload are uncertain; execution requirements are stable within a given phase but may change radically between phases. For these problems, a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost is addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that, except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
Center for Intelligent Control Systems
1992-12-01
difficult than anyone expected 50 years ago, and it now seems that it will require inputs from such diverse fields as brain and cognitive science... [Fragment of the center's technical report index, including entries attributed to Willsky, A.S.; Fleming, W.H. and Souganidis, P.E., "Two-Player, Zero-Sum Differential Games" (5/1/87); Geman, S., "Statistical Methods for..."; Mansour, Y. and Shavit, N.; Tsitsiklis, J.N., "Extremal Properties of Likelihood-Ratio Quantizers" (11/1/89); and Awerbuch, B., "Online Tracking of Mobile..."]
Bridging paradigms: hybrid mechanistic-discriminative predictive models.
Doyle, Orla M; Tsaneva-Atansaova, Krasimira; Harte, James; Tiffin, Paul A; Tino, Peter; Díaz-Zuccarini, Vanessa
2013-03-01
Many disease processes are extremely complex and characterized by multiple stochastic processes interacting simultaneously. Current analytical approaches have included mechanistic models and machine learning (ML), which are often treated as orthogonal viewpoints. However, to facilitate truly personalized medicine, new perspectives may be required. This paper reviews the use of both mechanistic models and ML in healthcare as well as emerging hybrid methods, which are an exciting and promising approach for biologically based, yet data-driven advanced intelligent systems.
Real-time attitude determination and gyro calibration
NASA Technical Reports Server (NTRS)
Challa, M.; Filla, O.; Sedlak, J.; Chu, D.
1993-01-01
We present results for two real-time filters prototyped for the Compton Gamma Ray Observatory (GRO), the Extreme Ultraviolet Explorer (EUVE), the Cosmic Background Explorer (COBE), and the next generation of Geostationary Operational Environmental Satellites (GOES). Both real and simulated data were used to solve for attitude and gyro biases. These filters promise advantages over single-frame and batch methods for missions like GOES, where startup and transfer-orbit operations require quick knowledge of attitude and gyro biases.
NASA Technical Reports Server (NTRS)
Payne, L. L.
1982-01-01
The strength of the bond between optically contacted quartz surfaces was investigated. The Gravity Probe-B (GP-B) experiment to test the theories of general relativity requires extremely precise measurements. The quartz components of the instruments that make these measurements must be held together in a very stable unit. Optical contacting is suggested as a possible method of joining these components. The fundamental forces involved in optical contacting are reviewed, and calculations of these forces are related to the results obtained in experiments.
New control strategies for neuroprosthetic systems.
Crago, P E; Lan, N; Veltink, P H; Abbas, J J; Kantor, C
1996-04-01
The availability of techniques to artificially excite paralyzed muscles opens enormous potential for restoring both upper and lower extremity movements with neuroprostheses. Neuroprostheses must stimulate muscle, and control and regulate the artificial movements produced. Control methods to accomplish these tasks include feedforward (open-loop), feedback, and adaptive control. Feedforward control requires a great deal of information about the biomechanical behavior of the limb. For the upper extremity, an artificial motor program was developed to provide such movement program input to a neuroprosthesis. In lower extremity control, one group achieved their best results by attempting to meet naturally perceived gait objectives rather than to follow an exact joint angle trajectory. Adaptive feedforward control, as implemented in the cycle-to-cycle controller, gave good compensation for the gradual decrease in performance observed with open-loop control. A neural network controller was able to control its system to customize stimulation parameters in order to generate a desired output trajectory in a given individual and to maintain tracking performance in the presence of muscle fatigue. The authors believe that practical FNS control systems must exhibit many of these features of neurophysiological systems.
Preliminary research of a novel center-driven robot for upper extremity rehabilitation.
Cao, Wujing; Zhang, Fei; Yu, Hongliu; Hu, Bingshan; Meng, Qiaoling
2018-01-19
Loss of upper limb function often appears after stroke. Robot-assisted systems are becoming increasingly common in upper extremity rehabilitation. A rehabilitation robot provides intensive motor therapy, which can be performed in a repetitive, accurate, and controllable manner. This study proposes a novel center-driven robot for upper extremity rehabilitation. A new power transmission mechanism is designed to transfer power to the elbow and shoulder joints from three motors located on the base. The forward and inverse kinematics equations of the center-driven robot (CENTROBOT) are deduced separately. The theoretical values of the range of joint movements are obtained with the Denavit-Hartenberg parameter method. A prototype of the CENTROBOT is developed and tested. Elbow flexion/extension, shoulder flexion/extension, and shoulder adduction/abduction can be realized by the center-driven robot. The measured joint angles are in conformity with the theoretical values. The CENTROBOT reduces the overall size of the robot arm and the influence of motor noise, radiation, and other adverse factors by placing all motors on the base. It can satisfy the requirements of power and movement transmission of the robot arm.
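A generic forward-kinematics sketch in the Denavit-Hartenberg convention gives a sense of the computation involved; the DH table, joint ordering, and link lengths below are hypothetical placeholders, not the CENTROBOT's published parameters.

```python
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(q, upper_arm=0.30, forearm=0.25):
    """q = [shoulder abd/add, shoulder flex/ext, elbow flex/ext] in radians."""
    dh_rows = [
        (q[0], 0.0, 0.0,       np.pi / 2),  # shoulder abduction/adduction
        (q[1], 0.0, upper_arm, 0.0),        # shoulder flexion/extension
        (q[2], 0.0, forearm,   0.0),        # elbow flexion/extension
    ]
    T = np.eye(4)
    for row in dh_rows:
        T = T @ dh_transform(*row)
    return T[:3, 3]                          # wrist position in the base frame

print(forward_kinematics(np.radians([20.0, 45.0, 30.0])))
```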
Rankin, Jeffery W; Kwarciak, Andrew M; Richter, W Mark; Neptune, Richard R
2012-11-01
The majority of manual wheelchair users will experience upper extremity injuries or pain, in part due to the high force requirements, repetitive motion and extreme joint postures associated with wheelchair propulsion. Recent studies have identified cadence, contact angle and peak force as important factors for reducing upper extremity demand during propulsion. However, studies often make comparisons between populations (e.g., able-bodied vs. paraplegic) or do not investigate specific measures of upper extremity demand. The purpose of this study was to use a musculoskeletal model and forward dynamics simulations of wheelchair propulsion to investigate how altering cadence, peak force and contact angle influence individual muscle demand. Forward dynamics simulations of wheelchair propulsion were generated to emulate group-averaged experimental data during four conditions: 1) self-selected propulsion technique, and while 2) minimizing cadence, 3) maximizing contact angle, and 4) minimizing peak force using biofeedback. Simulations were used to determine individual muscle mechanical power and stress as measures of muscle demand. Minimizing peak force and cadence had the lowest muscle power requirements. However, minimizing peak force increased cadence and recovery power, while minimizing cadence increased average muscle stress. Maximizing contact angle increased muscle stress and had the highest muscle power requirements. Minimizing cadence appears to have the most potential for reducing muscle demand and fatigue, which could decrease upper extremity injuries and pain. However, altering any of these variables to extreme values appears to be less effective; instead small to moderate changes may better reduce overall muscle demand. Copyright © 2012 Elsevier Ltd. All rights reserved.
Development of replicated optics for AXAF-1 XDA testing
NASA Technical Reports Server (NTRS)
Engelhaupt, Darell; Wilson, Michele; Martin, Greg
1995-01-01
Advanced optical systems for applications such as grazing incidence Wolter I x-ray mirror assemblies require extraordinary mirror surfaces in terms of fine finish and surface figure. The impeccable mirror surface is on the inside of the rotational mirror form. One practical method of producing devices with these requirements is to first fabricate an exterior surface for the optical device then replicate that surface to have the inverse component with lightweight characteristics. The replicated optic is not better than the master or mandrel from which it is made. This task identifies methods and materials for forming these extremely low roughness optical components. The objectives of this contract were to (1) prepare replication samples of electroless nickel coated aluminum, and determine process requirements for plating XDA test optic; (2) prepare and assemble plating equipment required to process a demonstration optic; (3) characterize mandrels, replicas and test samples for residual stress, surface contamination and surface roughness and figure using equipment at MSFC and; (4) provide technical expertise in establishing the processes, procedures, supplies and equipment needed to process the XDA test optics.
MaRIE: an experimental facility concept revolutionizing materials in extremes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnes, Cris W
The Matter-Radiation Interactions in Extremes (MaRIE) project intends to create an experimental facility that will revolutionize the control of materials in extremes. That control extends to extreme regimes where solid material has failed and begins to flow - the regimes of fluid dynamics and turbulent mixing. This presentation introduces the MaRIE facility concept, demonstrates examples of the science case that determines its functional requirements, and kicks off the discussion of the decadal scientific challenges of mixing in extremes, including those MaRIE might address.
Extreme Programming: Maestro Style
NASA Technical Reports Server (NTRS)
Norris, Jeffrey; Fox, Jason; Rabe, Kenneth; Shu, I-Hsiang; Powell, Mark
2009-01-01
"Extreme Programming: Maestro Style" is the name of a computer programming methodology that has evolved as a custom version of a methodology, called extreme programming that has been practiced in the software industry since the late 1990s. The name of this version reflects its origin in the work of the Maestro team at NASA's Jet Propulsion Laboratory that develops software for Mars exploration missions. Extreme programming is oriented toward agile development of software resting on values of simplicity, communication, testing, and aggressiveness. Extreme programming involves use of methods of rapidly building and disseminating institutional knowledge among members of a computer-programming team to give all the members a shared view that matches the view of the customers for whom the software system is to be developed. Extreme programming includes frequent planning by programmers in collaboration with customers, continually examining and rewriting code in striving for the simplest workable software designs, a system metaphor (basically, an abstraction of the system that provides easy-to-remember software-naming conventions and insight into the architecture of the system), programmers working in pairs, adherence to a set of coding standards, collaboration of customers and programmers, frequent verbal communication, frequent releases of software in small increments of development, repeated testing of the developmental software by both programmers and customers, and continuous interaction between the team and the customers. The environment in which the Maestro team works requires the team to quickly adapt to changing needs of its customers. In addition, the team cannot afford to accept unnecessary development risk. Extreme programming enables the Maestro team to remain agile and provide high-quality software and service to its customers. However, several factors in the Maestro environment have made it necessary to modify some of the conventional extreme-programming practices. The single most influential of these factors is that continuous interaction between customers and programmers is not feasible.
Ainsbury, Elizabeth; Badie, Christophe; Barnard, Stephen; Manning, Grainne; Moquet, Jayne; Abend, Michael; Antunes, Ana Catarina; Barrios, Lleonard; Bassinet, Celine; Beinke, Christina; Bortolin, Emanuela; Bossin, Lily; Bricknell, Clare; Brzoska, Kamil; Buraczewska, Iwona; Castaño, Carlos Huertas; Čemusová, Zina; Christiansson, Maria; Cordero, Santiago Mateos; Cosler, Guillaume; Monaca, Sara Della; Desangles, François; Discher, Michael; Dominguez, Inmaculada; Doucha-Senf, Sven; Eakins, Jon; Fattibene, Paola; Filippi, Silvia; Frenzel, Monika; Georgieva, Dimka; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Hadjiiska, Ljubomira; Hristova, Rositsa; Karakosta, Maria; Kis, Enikő; Kriehuber, Ralf; Lee, Jungil; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Macaeva, Ellina; Majewski, Matthaeus; Vanda Martins, S; McKeever, Stephen W S; Meade, Aidan; Medipally, Dinesh; Meschini, Roberta; M'kacher, Radhia; Gil, Octávia Monteiro; Montero, Alegria; Moreno, Mercedes; Noditi, Mihaela; Oestreicher, Ursula; Oskamp, Dominik; Palitti, Fabrizio; Palma, Valentina; Pantelias, Gabriel; Pateux, Jerome; Patrono, Clarice; Pepe, Gaetano; Port, Matthias; Prieto, María Jesús; Quattrini, Maria Cristina; Quintens, Roel; Ricoul, Michelle; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Sholom, Sergey; Sommer, Sylwester; Staynova, Albena; Strunz, Sonja; Terzoudi, Georgia; Testa, Antonella; Trompier, Francois; Valente, Marco; Hoey, Olivier Van; Veronese, Ivan; Wojcik, Andrzej; Woda, Clemens
2017-01-01
RENEB, 'Realising the European Network of Biodosimetry and Physical Retrospective Dosimetry,' is a network for research and emergency response mutual assistance in biodosimetry within the EU. Within this extremely active network, a number of new dosimetry methods have recently been proposed or developed. There is a requirement to test and/or validate these candidate techniques and inter-comparison exercises are a well-established method for such validation. The authors present details of inter-comparisons of four such new methods: dicentric chromosome analysis including telomere and centromere staining; the gene expression assay carried out in whole blood; Raman spectroscopy on blood lymphocytes, and detection of radiation-induced thermoluminescent signals in glass screens taken from mobile phones. In general the results show good agreement between the laboratories and methods within the expected levels of uncertainty, and thus demonstrate that there is a lot of potential for each of the candidate techniques. Further work is required before the new methods can be included within the suite of reliable dosimetry methods for use by RENEB partners and others in routine and emergency response scenarios.
Does perception of catheterization limit its use in pediatric UTI?
Selekman, Rachel E; Sanford, Melissa T; Ko, Lauren N; Allen, I Elaine; Copp, Hillary L
2017-02-01
Urinary tract infections (UTIs) affect 3-8% of febrile children annually, but correctly diagnosing UTI in young children can present a challenge. Diagnosis requires a non-contaminated urine sample, which requires catheterization or suprapubic aspiration in infants and young children that have not completed toilet training. To improve adherence to these guidelines, it is critical to understand the barriers to urine testing and catheterization. The purpose of this study was to investigate parental perception of pediatric UTI evaluation to better understand factors that impede urine testing prior to treatment of suspected UTI. We conducted an electronic, cross-sectional survey via social media targeting parents of children with a history of UTI. Participants were queried regarding demographics, urine specimen collection method, factors influencing urine collection method, and perception of the experience. Multivariable logistic regression was used to assess factors associated with catheterization distress and urine testing. Of 2726 survey respondents, > 80% were female and White; 74% of the children with a history of UTI were female. Fifty-six percent of parents perceived extreme distress with catheterization. Among parents whose child was catheterized, extreme distress was less likely perceived if the parent was White (OR 0.6, 95% CI 0.4-0.9) or if the child was circumcised (OR 0.7, 95% CI 0.4-0.98). Among those whose child was not catheterized, extreme distress was more likely if parents had a college education (OR 3.2, 95% CI 2.2-4.5) and the child was more than 1 year old (OR 1.7, 95% CI 1.2-2.5). Catheterization was less likely to be withheld if parents had a college education (OR 0.1, 95% CI 0.1-0.2), and if the child was circumcised (OR 0.5, 95% CI 0.3-0.8) or had only one UTI (OR 0.6, 95% CI 0.4-0.8) (Table). Parental education level, child age, and circumcision status play an important role in the subjective distress associated with catheterization. This highlights the substantial impact of parental factors on adherence to guidelines for children suspected of UTI. For example, college-educated parents were more likely to be offered catheterization. However, these parents are also more likely to associate the catheterization experience with extreme distress, possibly limiting their likelihood of consent to this procedure. More studies are required to better understand the impact of these factors on catheterization. But, it is clear that parental input has a substantial impact on the evaluation of their child's suspected UTI. Copyright © 2016. Published by Elsevier Ltd.
NASA Astrophysics Data System (ADS)
Gooré Bi, Eustache; Gachon, Philippe; Vrac, Mathieu; Monette, Frédéric
2017-02-01
Changes in extreme precipitation should be one of the primary impacts of climate change (CC) in urban areas. To assess these impacts, rainfall data from climate models are commonly used. The main goal of this paper is to report on the state of knowledge and recent works on the study of CC impacts with a focus on urban areas, in order to produce an integrated review of various approaches to which future studies can then be compared or constructed. Model output statistics (MOS) methods are increasingly used in the literature to study the impacts of CC in urban settings. A review of previous works highlights the non-stationarity nature of future climate data, underscoring the need to revise urban drainage system design criteria. A comparison of these studies is made difficult, however, by the numerous sources of uncertainty arising from a plethora of assumptions, scenarios, and modeling options. All the methods used do, however, predict increased extreme precipitation in the future, suggesting potential risks of combined sewer overflow frequencies, flooding, and back-up in existing sewer systems in urban areas. Future studies must quantify more accurately the different sources of uncertainty by improving downscaling and correction methods. New research is necessary to improve the data validation process, an aspect that is seldom reported in the literature. Finally, the potential application of non-stationarity conditions into generalized extreme value (GEV) distribution should be assessed more closely, which will require close collaboration between engineers, hydrologists, statisticians, and climatologists, thus contributing to the ongoing reflection on this issue of social concern.
Cole, Conrad R.; Hansen, Nellie I.; Higgins, Rosemary D.; Ziegler, Thomas R.; Stoll, Barbara J.
2009-01-01
OBJECTIVES The objective of this study was to determine the (1) incidence of short bowel syndrome in very low birth weight (<1500 g) infants, (2) associated morbidity and mortality during initial hospitalization, and (3) impact on short-term growth and nutrition in extremely low birth weight (<1000 g) infants. METHODS Infants who were born from January 1, 2002, through June 30, 2005, and enrolled in the National Institute of Child Health and Human Development Neonatal Research Network were studied. Risk factors for developing short bowel syndrome as a result of partial bowel resection (surgical short bowel syndrome) and outcomes were evaluated for all neonates until hospital discharge, death, or 120 days. Extremely low birth weight survivors were further evaluated at 18 to 22 months’ corrected age for feeding methods and growth. RESULTS The incidence of surgical short bowel syndrome in this cohort of 12 316 very low birth weight infants was 0.7%. Necrotizing enterocolitis was the most common diagnosis associated with surgical short bowel syndrome. More very low birth weight infants with short bowel syndrome (20%) died during initial hospitalization than those without necrotizing enterocolitis or short bowel syndrome (12%) but fewer than the infants with surgical necrotizing enterocolitis without short bowel syndrome (53%). Among 5657 extremely low birth weight infants, the incidence of surgical short bowel syndrome was 1.1%. At 18 to 22 months, extremely low birth weight infants with short bowel syndrome were more likely to still require tube feeding (33%) and to have been rehospitalized (79%). Moreover, these infants had growth delay with shorter lengths and smaller head circumferences than infants without necrotizing enterocolitis or short bowel syndrome. CONCLUSIONS Short bowel syndrome is rare in neonates but has a high mortality rate. At 18 to 22 months’ corrected age, extremely low birth weight infants with short bowel syndrome were more likely to have growth failure than infants without short bowel syndrome. PMID:18762491
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... requirements at the close when extreme order imbalances may cause significant dislocation to the closing price. The rule has operated on a pilot basis since April 2009 ("Extreme Order Imbalances Pilot" or "Pilot"... suspend NYSE Amex Equities Rules 52 (Hours of Operation) to resolve an extreme order imbalance that may...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-15
... certain rule requirements at the close when extreme order imbalances may cause significant dislocation to the closing price ("Extreme Order Imbalances Pilot" or "Pilot") until December 1, 2010. ... an extreme order imbalance that may result in a price dislocation at the close as a result of an...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-05
... suspend certain rule requirements at the close when extreme order imbalances may cause significant dislocation to the closing price ("Extreme Order Imbalances Pilot" or "Pilot"). The Pilot has recently... resolve an extreme order imbalance that may result in a price dislocation at the close as a result of an...
NASA Technical Reports Server (NTRS)
Lee, Jonghyun; Hyers, Robert W.; Rogers, Jan R.; Rathz, Thomas J.; Choo, Hahn; Liaw, Peter
2006-01-01
Responsive access to space requires re-use of components such as rocket nozzles that operate at extremely high temperatures. For such applications, new ultra-high-temperature materials that can operate over 2,000 C are required. At temperatures higher than fifty percent of the melting temperature, characterization of creep properties is indispensable. Since conventional methods for the measurement of creep are limited to below 1,700 C, a new technique that can be applied at higher temperatures is strongly needed. This research develops a non-contact method for the measurement of creep at temperatures over 2,300 C. Using the electrostatic levitator at NASA MSFC, a spherical sample was rotated to cause creep deformation by centrifugal acceleration. The deforming sample was captured with a digital camera and analyzed to measure creep deformation. Numerical and analytical analyses have also been conducted to compare with the experimental results. Analytical, numerical, and experimental results showed good agreement with one another.
Indurkhya, Sagar; Beal, Jacob
2010-01-06
ODE simulations of chemical systems perform poorly when some of the species have extremely low concentrations. Stochastic simulation methods, which can handle this case, have been impractical for large systems due to computational complexity. We observe, however, that when modeling complex biological systems: (1) a small number of reactions tend to occur a disproportionately large percentage of the time, and (2) a small number of species tend to participate in a disproportionately large percentage of reactions. We exploit these properties in LOLCAT Method, a new implementation of the Gillespie Algorithm. First, factoring reaction propensities allows many propensities dependent on a single species to be updated in a single operation. Second, representing dependencies between reactions with a bipartite graph of reactions and species requires storage that scales only with the number of reactions and species, rather than the larger storage required for a dependency graph containing only reactions. Together, these improvements allow our implementation of LOLCAT Method to execute orders of magnitude faster than currently existing Gillespie Algorithm variants when simulating several yeast MAPK cascade models.
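For context, LOLCAT Method accelerates the standard direct-method Gillespie stochastic simulation algorithm. The sketch below shows only that unoptimized baseline (not the propensity factoring or bipartite dependency graph described above); the two-reaction system and rate constants are illustrative assumptions.

    import numpy as np

    def gillespie_direct(x0, stoich, propensities, t_end, seed=0):
        """Baseline direct-method Gillespie SSA (no LOLCAT optimizations)."""
        rng = np.random.default_rng(seed)
        t, x = 0.0, np.array(x0, dtype=float)
        times, states = [t], [x.copy()]
        while t < t_end:
            a = np.array([f(x) for f in propensities])  # reaction propensities
            a0 = a.sum()
            if a0 <= 0:                                  # nothing can fire
                break
            t += rng.exponential(1.0 / a0)               # time to next reaction
            j = rng.choice(len(a), p=a / a0)             # which reaction fires
            x += stoich[j]                               # apply stoichiometry
            times.append(t)
            states.append(x.copy())
        return np.array(times), np.array(states)

    # Illustrative two-reaction system: A -> B (k1 = 1.0) and B -> A (k2 = 0.5)
    stoich = np.array([[-1, 1], [1, -1]], dtype=float)
    props = [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]]
    times, states = gillespie_direct([100, 0], stoich, props, t_end=10.0)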
Computer aided flexible envelope designs
NASA Technical Reports Server (NTRS)
Resch, R. D.
1975-01-01
Computer aided design methods are presented for the design and construction of strong, lightweight structures which require complex and precise geometric definition. The first, flexible structures, is a unique system for modeling folded plate structures and space frames. It is possible to continuously vary the geometry of a space frame to produce large, clear spans with curvature. The second method deals with developable surfaces, where both folding and bending are explored within the constraints of available building materials, and where minimal distortion yields maximum design capability. Alternative inexpensive fabrication techniques are being developed to achieve computer-defined enclosures which are extremely lightweight and mathematically highly precise.
Schuettler, M; Stiess, S; King, B V; Suaning, G J
2005-03-01
A new method for fabrication of microelectrode arrays composed of traditional implant materials is presented. The main construction principle is the use of spun-on medical grade silicone rubber as the insulating substrate material and platinum foil as the conductor (tracks, pads and electrodes). The silicone rubber and the platinum foil are patterned by laser cutting using an Nd:YAG laser and a microcontroller-driven, stepper-motor operated x-y table. The method does not require expensive clean room facilities and offers an extremely short design-to-prototype time of less than 1 day. First prototypes demonstrate a minimal achievable feature size of about 30 µm.
Application of short-data methods on extreme surge levels
NASA Astrophysics Data System (ADS)
Feng, X.
2014-12-01
Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets shorter than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida Coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict the extreme surge levels with warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 degree Celsius of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and the hurricane and storm surge forecasting and warning system.
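The abstract does not specify how the non-stationary GEV is parameterized. A minimal sketch, assuming the location parameter varies linearly with a sea surface temperature anomaly and fitting by maximum likelihood, might look like the following; all variable names and data are illustrative, and note that scipy's genextreme uses the shape convention c = -xi.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genextreme

    def nonstationary_gev_nll(params, surge, sst):
        """Negative log-likelihood of a GEV whose location depends on SST."""
        mu0, mu1, sigma, c = params            # location = mu0 + mu1 * sst
        if sigma <= 0:
            return np.inf
        mu = mu0 + mu1 * sst
        return -np.sum(genextreme.logpdf(surge, c, loc=mu, scale=sigma))

    # Hypothetical annual surge maxima (m) and SST anomalies (deg C)
    rng = np.random.default_rng(1)
    sst = rng.normal(0.0, 0.5, 60)
    surge = genextreme.rvs(-0.1, loc=1.0 + 0.3 * sst, scale=0.25, random_state=2)

    fit = minimize(nonstationary_gev_nll, x0=[1.0, 0.0, 0.3, -0.1],
                   args=(surge, sst), method="Nelder-Mead")
    mu0, mu1, sigma, c = fit.x
    # 100-year return level under an assumed +1 deg C SST anomaly
    rl_100 = genextreme.ppf(1 - 1 / 100, c, loc=mu0 + mu1 * 1.0, scale=sigma)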
The European Water Framework Directive: Challenges For A New Type of Social and Policy Analysis
NASA Astrophysics Data System (ADS)
Pahl-Wostl, C.
Water resources management is facing increasing uncertainties in all areas. Socio-economic boundary conditions change quickly and require more flexible management strategies. Climate change, for example, results in an increase in uncertainties, in particular extreme events. Given the fact that current management practices deal with extreme events by designing the technical systems to manage the most extreme of all cases (e.g. higher dams for protection against extreme floods, larger water reservoirs for droughts and to meet daily peak demand), a serious problem is posed for long-term planning and risk management. Engineering planning has perceived the human dimension as exogenous boundary conditions. Legislation focused largely on the environmental and technological dimensions that set limits and prescribe new technologies without taking the importance of institutional change into account. However, technology is only the "hardware" and it is becoming increasingly obvious that the "software", the social dimension, has to become part of planning and management processes. Hence, the inclusion of the human dimension into integrated models and processes will be valuable in supporting the introduction of new elements into planning processes in water resources management. With the European Water Framework Directive, environmental policy enters a new era. The traditional approach to solving isolated environmental problems with technological fixes and end-of-pipe solutions has started to shift towards a more thoughtful attitude which involves the development of integrated approaches to problem solving. The WFD introduces the river basin as the management unit, thus following the experience of some European countries (e.g. France) and the example of the management of some international rivers (e.g. the Rhine). Overall the WFD represents a general shift towards a polycentric understanding of policy making that requires the involvement of stakeholders as active participants in the policy process at different levels of societal organization. The WFD requires the inclusion of stakeholders in the process of developing and adopting a river basin management plan. In order to improve stakeholder-based policy design and modeling processes, innovation and research are required in linking analytical methods and participatory approaches. Factual knowledge and analytical techniques have to be combined with local knowledge and the subjective perceptions of the various stakeholder groups. The talk will summarize current approaches and point out research needs.
Explosive compaction of aluminum oxide modified by multiwall carbon nanotubes
NASA Astrophysics Data System (ADS)
Buzyurkin, A. E.; Kraus, E. I.; Lukyanov, Ya L.
2018-04-01
This paper presents experiments and numerical research on the explosive compaction of aluminum oxide powder modified by multiwall carbon nanotubes (MWCNT), together with modeling of the stress state behind the shock front under shock loading. The aim of this study was to obtain a durable low-porosity compact sample. Explosive compaction technology is used for this problem because aluminum oxide is an extremely hard and refractory material; therefore, its compaction by traditional methods requires special equipment and considerable expense.
NASA Technical Reports Server (NTRS)
Willden, Kurtis S. (Inventor)
1995-01-01
A reusable laminate mandrel which is unaffected by extreme temperature changes. The flexible laminate mandrel is composed of sheets stacked to produce the required configuration and a cover wrap that applies pressure to the mandrel laminate, maintaining the stack cross-section. After use, the mandrels can be removed, disassembled, and reused. In the method of extracting the flexible mandrel from one end of a composite stiffener, individual laminae of the flexible mandrel, or all of them at once, are extracted, depending on the severity of the contour.
Low-Cost Deposition Methods for Transparent Thin-Film Transistors
2003-09-26
theoretical limit is estimated to be ∼10 cm2/V s. [9] The largest organic TFT mobility reported is 2.7 cm2/V s for pentacene which is approaching the...refractory materials require the use of an electron beam. A directed electron beam is capable of locally heating source material to extremely high...Haboeck, M. Stassburg, M. Strassburg, G. Kaczmarczyk, A. Hoffman, and C. Thomsen, “Nitrogen-related local vibrational modes in ZnO:N,” Appl. Phys
Perspectives in astrophysical databases
NASA Astrophysics Data System (ADS)
Frailis, Marco; de Angelis, Alessandro; Roberto, Vito
2004-07-01
Astrophysics has become a domain extremely rich in scientific data. Data mining tools are needed for information extraction from such large data sets. This calls for an approach to data management emphasizing the efficiency and simplicity of data access; efficiency is obtained using multidimensional access methods and simplicity is achieved by properly handling metadata. Moreover, clustering and classification techniques on large data sets pose additional requirements in terms of computation and memory scalability and interpretability of results. In this study we review some possible solutions.
Extreme Trust Region Policy Optimization for Active Object Recognition.
Liu, Huaping; Wu, Yupei; Sun, Fuchun
2018-06-01
In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine and, therefore, leads to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.
Global coastal flood hazard mapping
NASA Astrophysics Data System (ADS)
Eilander, Dirk; Winsemius, Hessel; Ward, Philip; Diaz Loaiza, Andres; Haag, Arjen; Verlaan, Martin; Luo, Tianyi
2017-04-01
Over 10% of the world's population lives in low-lying coastal areas (up to 10m elevation). Many of these areas are prone to flooding from tropical storm surges or extra-tropical high sea levels in combination with high tides. A 1 in 100 year extreme sea level is estimated to expose 270 million people and 13 trillion USD worth of assets to flooding. Coastal flood risk is expected to increase due to drivers such as ground subsidence, intensification of tropical and extra-tropical storms, sea level rise and socio-economic development. For better understanding of the hazard and drivers to global coastal flood risk, a globally consistent analysis of coastal flooding is required. In this contribution we present a comprehensive global coastal flood hazard mapping study. Coastal flooding is estimated using a modular inundation routine, based on a vegetation corrected SRTM elevation model and forced by extreme sea levels. Per tile, either a simple GIS inundation routine or a hydrodynamic model can be selected. The GIS inundation method projects extreme sea levels to land, taking into account physical obstructions and dampening of the surge level land inwards. For coastlines with steep slopes or where local dynamics play a minor role in flood behavior, this fast GIS method can be applied. Extreme sea levels are derived from the Global Tide and Surge Reanalysis (GTSR) dataset. Future sea level projections are based on probabilistic sea level rise for RCP 4.5 and RCP 8.5 scenarios. The approach is validated against observed flood extents from ground and satellite observations. The results will be made available through the online Aqueduct Global Flood Risk Analyzer of the World Resources Institute.
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
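The thesis abstract does not give the algorithm's details. As a rough sketch of the general idea, the lower-tail threshold can be obtained by inverting the CDF of a kernel density estimate fitted to the daily temperatures; how the one-in-N event maps to a tail probability p is an assumption here, as are the function names and data.

    import numpy as np
    from scipy.stats import gaussian_kde

    def kde_lower_quantile(temps, p, tol=1e-3):
        """Temperature t such that the KDE-estimated P(T <= t) equals p."""
        kde = gaussian_kde(temps)
        lo, hi = temps.min() - 20.0, temps.max()   # bracket for bisection
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if kde.integrate_box_1d(-np.inf, mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Hypothetical daily average wind-adjusted winter temperatures (deg C)
    temps = np.random.default_rng(0).normal(-5.0, 8.0, 3000)
    threshold = kde_lower_quantile(temps, p=0.001)  # assumed tail probability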
NASA Astrophysics Data System (ADS)
Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei
2016-05-01
Regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and providing scientific background and practical assistance in formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve the main goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, the generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fitting distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were larger and the 90 % error bounds were wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained, indicating that there are two regions with the highest precipitation extremes and a large region with low precipitation extremes. However, the regions with low precipitation extremes are among the most developed and densely populated regions of the country, and floods will cause great loss of human life and property damage due to the high vulnerability. The study methods and procedure demonstrated in this paper will provide a useful reference for frequency analysis of precipitation extremes in large regions, and the findings of the paper will be beneficial to flood control and management in the study area.
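As background to the LMIF procedure, the sample L-moments and L-moment ratios used for homogeneity testing and distribution selection can be computed from probability-weighted moments. The sketch below uses Hosking's standard unbiased estimators and synthetic data; it is not code from the paper.

    import numpy as np

    def sample_lmoments(x):
        """First four sample L-moments via unbiased probability-weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(n)                         # 0-based ranks of the ordered sample
        b0 = x.mean()
        b1 = np.sum(i * x) / (n * (n - 1))
        b2 = np.sum(i * (i - 1) * x) / (n * (n - 1) * (n - 2))
        b3 = np.sum(i * (i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2) * (n - 3))
        l1 = b0                                  # L-location (mean)
        l2 = 2 * b1 - b0                         # L-scale
        l3 = 6 * b2 - 6 * b1 + b0
        l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
        return l1, l2, l3 / l2, l4 / l2          # l1, l2, L-skewness, L-kurtosis

    # Hypothetical annual maximum daily precipitation at one site (mm)
    amax = np.random.default_rng(0).gumbel(80.0, 20.0, 50)
    l1, l2, t3, t4 = sample_lmoments(amax)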
Method for thermal and structural evaluation of shallow intense-beam deposition in matter
NASA Astrophysics Data System (ADS)
Pilan Zanoni, André
2018-05-01
The projected range of high-intensity proton and heavy-ion beams at energies below a few tens of MeV/A in matter can be as short as a few micrometers. For the evaluation of temperature and stresses from such a shallow beam energy deposition in matter, conventional numerical 3D models require minuscule element sizes to maintain an acceptable element aspect ratio, as well as extremely short time steps for numerical convergence. In order to simulate the energy deposition using a manageable number of elements, this article presents a method using layered elements. The method is applied to beam stoppers and to accidental intense-beam impact onto UHV sector valves. In those cases the thermal results from the new method are congruent with those from conventional solid-element and adiabatic models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinzig, M.; DeYong, G.D.; Anglin, R.J.
1993-12-01
The MetalTrace method, which consists of an anion-exchange separation coupled with a spectrophotometric quantification, was used to determine lead and cadmium in sulfuric acid-hydrogen peroxide digests of soils and sludges and hydrobromic acid extracts of soils. Cadmium only was determined in sulfuric acid-hydrogen peroxide digests of fertilizers because no standards were available with certified lead contents. The selectivity provided by the anion-exchange separation allowed the use of a spectrophotometric indicator with an extremely high extinction coefficient so that detection limits in the low parts per million range could be attained. The results obtained using this method compared favorably with those obtained using much more expensive methods requiring more specialized training and equipment.
Arulkumar, Subramanian; Sabesan, Muthukumaran
2010-01-01
Background: Development of biologically inspired experimental processes for the synthesis of nanoparticles is evolving into an important branch of nanotechnology. Methods: The bioreduction behavior of plant seed extract of Mucuna pruriens in the synthesis of silver nanoparticles was investigated employing UV/visible spectrophotometry, X-ray diffraction (XRD), transmission electron microscopy (TEM), and Fourier transform infrared (FT-IR) spectroscopy. Results: M. pruriens was found to exhibit strong potential for rapid reduction of silver ions. The formation of nanoparticles by this method is extremely rapid, requires no toxic chemicals, and the nanoparticles are stable for several months. Conclusion: The main conclusion is that the bioreduction method to produce nanoparticles is a good alternative to electrochemical methods and is expected to be biocompatible. PMID:21808573
Using groundwater levels to estimate recharge
Healy, R.W.; Cook, P.G.
2002-01-01
Accurate estimation of groundwater recharge is extremely important for proper management of groundwater systems. Many different approaches exist for estimating recharge. This paper presents a review of methods that are based on groundwater-level data. The water-table fluctuation method may be the most widely used technique for estimating recharge; it requires knowledge of specific yield and changes in water levels over time. Advantages of this approach include its simplicity and an insensitivity to the mechanism by which water moves through the unsaturated zone. Uncertainty in estimates generated by this method relates to the limited accuracy with which specific yield can be determined and to the extent to which assumptions inherent in the method are valid. Other methods that use water levels (mostly based on the Darcy equation) are also described. The theory underlying the methods is explained. Examples from the literature are used to illustrate applications of the different methods.
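In its simplest form the water-table fluctuation method estimates recharge as R = Sy * dH/dt applied to water-level rises. A minimal sketch under assumed inputs follows; a more careful implementation would measure each rise against an extrapolated antecedent recession curve rather than the previous reading.

    import numpy as np

    def wtf_recharge(levels_m, specific_yield):
        """Water-table fluctuation method: total recharge from water-level rises.

        Credits only intervals where the water table rises (declines are
        attributed to drainage and evapotranspiration).  Returns recharge in
        metres of water over the record.
        """
        dh = np.diff(np.asarray(levels_m, dtype=float))
        return specific_yield * dh[dh > 0].sum()

    # Hypothetical daily water-table elevations (m) and a specific yield of 0.15
    levels = [10.00, 10.02, 10.08, 10.05, 10.20, 10.18, 10.15]
    total_recharge_m = wtf_recharge(levels, specific_yield=0.15)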
The least-squares finite element method for low-mach-number compressible viscous flows
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao
1994-01-01
The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, for which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of a staggered grid or a preconditioning technique, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.
Controlling the net charge on a nanoparticle optically levitated in vacuum
NASA Astrophysics Data System (ADS)
Frimmer, Martin; Luszcz, Karol; Ferreiro, Sandra; Jain, Vijay; Hebestreit, Erik; Novotny, Lukas
2017-06-01
Optically levitated nanoparticles in vacuum are a promising model system to test physics beyond our current understanding of quantum mechanics. Such experimental tests require extreme control over the dephasing of the levitated particle's motion. If the nanoparticle carries a finite net charge, it experiences a random Coulomb force due to fluctuating electric fields. This dephasing mechanism can be fully excluded by discharging the levitated particle. Here, we present a simple and reliable technique to control the charge on an optically levitated nanoparticle in vacuum. Our method is based on the generation of charges in an electric discharge and does not require additional optics or mechanics close to the optical trap.
Vial Organic™: Organic Chemistry Labs for High School and Junior College
NASA Astrophysics Data System (ADS)
Russo, Thomas J.; Meszaros, Mark
1999-01-01
Vial Organic is the most economical, safe, and time-effective method of performing organic chemistry experiments. Activities are carried out in low-cost, sealed vials. Vial Organic is extremely safe because only micro quantities of reactants are used, reactants are contained in tightly sealed vials, and only water baths are used for temperature control. Vial Organic laboratory activities are easily performed within one 50-minute class period. When heat is required, a simple hot-water bath is prepared from a beaker of water and an inexpensive immersion heater. The low cost, ease of use, and relatively short time requirement will allow organic chemistry to be experienced by more students with less confusion and intimidation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchings, L J; Foxall, W; Rambo, J
2005-02-14
Yucca Mountain licensing will require estimation of ground motions from probabilistic seismic hazard analyses (PSHA) with annual probabilities of exceedance on the order of 10^-6 to 10^-7 per year or smaller, which correspond to much longer earthquake return periods than most previous PSHA studies. These long return periods for the Yucca Mountain PSHA result in estimates of ground motion that are extremely high (~10 g) and that are believed to be physically unrealizable. However, there is at present no generally accepted method to bound ground motions either by showing that the physical properties of materials cannot maintain such extreme motions, or that the energy release by the source for such large motions is physically impossible. The purpose of this feasibility study is to examine recorded ground motion and rock property data from nuclear explosions to determine their usefulness for studying the ground motion from extreme earthquakes. The premise is that nuclear explosions are an extreme energy density source, and that the recorded ground motion will provide useful information about the limits of ground motion from extreme earthquakes. The data were categorized by the source and rock properties, and evaluated as to what extent non-linearity in the material has affected the recordings. They also compiled existing results of non-linear dynamic modeling of the explosions carried out by LLNL and other institutions. They conducted an extensive literature review to outline current understanding of extreme ground motion. They also analyzed the data in terms of estimating maximum ground motions at Yucca Mountain.
NASA Astrophysics Data System (ADS)
de Ruiter, Marleen; Hudson, Paul; de Ruig, Lars; Kuik, Onno; Botzen, Wouter
2017-04-01
This paper provides an analysis of the insurance schemes that cover extreme weather events in twelve different EU countries and the risk reduction incentives offered by these schemes. Economic impacts of extreme weather events in many regions in Europe and elsewhere are on the rise due to climate change and increasing exposure as driven by urban development. In an attempt to manage impacts from extreme weather events, natural disaster insurance schemes can provide incentives for taking measures that limit weather-related risks. Insurance companies can influence public risk management policies and risk-reducing behaviour of policyholders by "rewarding behaviour that reduces risks and potential damages" (Botzen and Van den Bergh, 2008, p. 417). Examples of insurance market systems that directly or indirectly aim to incentivize risk reduction with varying degrees of success are: the U.S. National Flood Insurance Programme; the French Catastrophes Naturelles system; and the U.K. Flood Re program which requires certain levels of protection standards for properties to be insurable. In our analysis, we distinguish between four different disaster types (i.e. coastal and fluvial floods, droughts and storms) and three different sectors (i.e. residential, commercial and agriculture). The selected case studies also provide a wide coverage of different insurance market structures, including public, private and public-private insurance provision, and different methods of coping with extreme loss events, such as re-insurance, governmental aid and catastrophe bonds. The analysis of existing mechanisms for risk reduction incentives provides recommendations about incentivizing adaptive behaviour, in order to assist policy makers and other stakeholders in designing more effective insurance schemes for extreme weather risks.
Physical Exam Risk Factors for Lower Extremity Injury in High School Athletes: A Systematic Review
Onate, James A.; Everhart, Joshua S.; Clifton, Daniel R.; Best, Thomas M.; Borchers, James R.; Chaudhari, Ajit M.W.
2016-01-01
Objective A stated goal of the preparticipation physical evaluation (PPE) is to reduce musculoskeletal injury, yet the musculoskeletal portion of the PPE is reportedly of questionable use in assessing lower extremity injury risk in high school-aged athletes. The objectives of this study are: (1) to identify clinical assessment tools demonstrated to effectively determine lower extremity injury risk in a prospective setting, and (2) to critically assess the methodological quality of prospective lower extremity risk assessment studies that use these tools. Data Sources A systematic search was performed in PubMed, CINAHL, UpToDate, Google Scholar, Cochrane Reviews, and SportDiscus. Inclusion criteria were prospective injury risk assessment studies involving athletes primarily ages 13 to 19 that used screening methods that did not require highly specialized equipment. Methodological quality was evaluated with a modified physiotherapy evidence database (PEDro) scale. Main Results Nine studies were included. The mean modified PEDro score was 6.0/10 (SD, 1.5). Multidirectional balance (odds ratio [OR], 3.0; CI, 1.5–6.1; P < 0.05) and physical maturation status (P < 0.05) were predictive of overall injury risk; knee hyperextension was predictive of anterior cruciate ligament injury (OR, 5.0; CI, 1.2–18.4; P < 0.05), hip external:internal rotator strength ratio of patellofemoral pain syndrome (P = 0.02), and foot posture index of ankle sprain (r = −0.339, P = 0.008). Conclusions Minimal prospective evidence supports or refutes the use of the functional musculoskeletal exam portion of the current PPE to assess lower extremity injury risk in high school athletes. Limited evidence does support inclusion of multidirectional balance assessment and physical maturation status in a musculoskeletal exam, as both are generalizable risk factors for lower extremity injury. PMID:26978166
Code of Federal Regulations, 2010 CFR
2010-07-01
... for NRLM diesel fuel; (2) Bond rating of entity that owns the refinery (in the case of joint ventures, include the bond rating of the joint venture entity and the bond ratings of all partners; in the case of... relief from the requirements of this subpart in case of extreme hardship circumstances? 80.560 Section 80...
Advanced fast 3D DSA model development and calibration for design technology co-optimization
NASA Astrophysics Data System (ADS)
Lai, Kafai; Meliorisz, Balint; Muelders, Thomas; Welling, Ulrich; Stock, Hans-Jürgen; Marokkey, Sajan; Demmerle, Wolfgang; Liu, Chi-Chun; Chi, Cheng; Guo, Jing
2017-04-01
Direct Optimization (DO) of a 3D DSA model is a better approach for a DTCO study, in terms of accuracy and speed, than a Cahn-Hilliard equation solver. DO's shorter run time (10X to 100X faster) and linear scaling make it scalable to the area required for a DTCO study. However, the lack of temporal data output, in contrast to prior art, requires a new calibration method. The new method involves a specific set of calibration patterns. The design of the calibration patterns is extremely important for obtaining robust model parameters when temporal data are absent. A model calibrated to a Hybrid DSA system with a set of device-relevant constructs indicates the effectiveness of using non-temporal data. Preliminary model prediction using programmed defects on chemo-epitaxy shows encouraging results that agree qualitatively well with theoretical predictions from strong segregation theory.
An Improved Neutron Transport Algorithm for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.
2010-01-01
Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.
NASA Astrophysics Data System (ADS)
Hartmann, Alexander K.; Weigt, Martin
2005-10-01
A concise, comprehensive introduction to the topic of statistical physics of combinatorial optimization, bringing together theoretical concepts and algorithms from computer science with analytical methods from physics. The result bridges the gap between statistical physics and combinatorial optimization, investigating problems taken from theoretical computing, such as the vertex-cover problem, with the concepts and methods of theoretical physics. The authors cover rapid developments and analytical methods that are both extremely complex and, so far, spread mainly by word of mouth, providing all the necessary basics in the required detail. Throughout, the algorithms are shown with examples and calculations, while the proofs are given in a way suitable for graduate students, post-docs, and researchers. Ideal for newcomers to this young, multidisciplinary field.
Temporary Thermocouple Attachment for Thermal/Vacuum Testing at Non-Extreme Temperatures
NASA Technical Reports Server (NTRS)
Ungar, Eugene K.; Wright, Sarah E.
2016-01-01
Post-test examination and data analysis that followed a two-week-long vacuum test showed that numerous self-stick thermocouples became detached from the test article. The thermocouples were reattached with thermally conductive epoxy and the test was repeated to obtain the required data. Because the thermocouple detachment resulted in significant expense and rework, it was decided to investigate the temporary attachment methods used around NASA and to perform a test to assess their efficacy. The present work describes the original test and the analysis that showed the thermocouples had become detached, describes the temporary attachment methods assessed in the retest and in the thermocouple attachment test, and makes a recommendation on attachment methods for future tests.
Liu, Yuxuan; Huang, Xiangyi; Ren, Jicun
2016-01-01
CE is an ideal analytical method for extremely volume-limited biological microenvironments. However, the small injection volume makes it a challenge to achieve highly sensitive detection. Chemiluminescence (CL) detection provides low background with excellent sensitivity because it requires no light source. The coupling of CL with CE and MCE has become a powerful analytical method. So far, this method has been widely applied to chemical analysis, bioassays, drug analysis, and environmental analysis. In this review, we first introduce some developments in CE-CL and MCE-CL systems, then put the emphasis on applications in the last 10 years, and finally discuss future prospects.
1987-09-21
objectives of our program are to isolate and characterize a fully active DNA-dependent RNA polymerase from the extremely halophilic archaebacteria of the genus...operons in H. marismortui. The Halobacteriaceae are extreme halophiles. They require 3.5 M NaCl for optimal growth and no growth is observed below 2...was difficult to perform due to the extreme genetic instability in this strain (6). In contrast, the genome of the extreme halophilic and prototrophic
8 CFR 216.5 - Waiver of requirement to file joint petition to remove conditions by alien spouse.
Code of Federal Regulations, 2011 CFR
2011-01-01
... conditional resident but during the marriage the alien spouse or child was battered by or subjected to extreme... alien's claim of having been battered or subjected to extreme mental cruelty. A conditional resident who entered into the qualifying marriage in good faith, and who was battered or was the subject of extreme...
8 CFR 216.5 - Waiver of requirement to file joint petition to remove conditions by alien spouse.
Code of Federal Regulations, 2010 CFR
2010-01-01
... conditional resident but during the marriage the alien spouse or child was battered by or subjected to extreme... alien's claim of having been battered or subjected to extreme mental cruelty. A conditional resident who entered into the qualifying marriage in good faith, and who was battered or was the subject of extreme...
New quantitative method for evaluation of motor functions applicable to spinal muscular atrophy.
Matsumaru, Naoki; Hattori, Ryo; Ichinomiya, Takashi; Tsukamoto, Katsura; Kato, Zenichiro
2018-03-01
The aim of this study was to develop and introduce a new method to quantify motor functions of the upper extremity. The movement was recorded using a three-dimensional motion capture system, and the movement trajectory was analyzed using two newly developed indices, which measure precise repeatability and directional smoothness. Our target task was shoulder flexion repeated ten times. We applied our method to a healthy adult without and with a weight, simulating muscle impairment. We also applied our method to assess the efficacy of a drug therapy for amelioration of motor functions in a non-ambulatory patient with spinal muscular atrophy. Movement trajectories before and after thyrotropin-releasing hormone therapy were analyzed. In the healthy adult, we found that the values of both indices increased significantly when holding a weight, so that the weight-induced deterioration in motor function was successfully detected. In the efficacy assessment of drug therapy in the patient, the directional smoothness index successfully detected improvements in motor function, which were also clinically observed by the patient's doctors. We have developed a new quantitative evaluation method for motor functions of the upper extremity. The clinical usability of this method is also greatly enhanced by reducing the required number of body-attached markers to only one. This simple but universal approach to quantifying motor functions will provide additional insights into the clinical phenotypes of various neuromuscular diseases and developmental disorders.
Xu, Chao; Fang, Jian; Shen, Hui; Wang, Yu-Ping; Deng, Hong-Wen
2018-01-25
Extreme phenotype sampling (EPS) is a broadly used design to identify candidate genetic factors contributing to the variation of quantitative traits. By enriching the signals in extreme phenotypic samples, EPS can boost the association power compared to random sampling. Most existing statistical methods for EPS examine the genetic factors individually, even though many quantitative traits have multiple genetic factors underlying their variation. It is desirable to model the joint effects of genetic factors, which may increase the power and identify novel quantitative trait loci under EPS. The joint analysis of genetic data in high-dimensional situations requires specialized techniques, e.g., the least absolute shrinkage and selection operator (LASSO). Although there is extensive research and application related to LASSO, the statistical inference and testing for the sparse model under EPS remain unknown. We propose a novel sparse model (EPS-LASSO) with a hypothesis test for high-dimensional regression under EPS based on a decorrelated score function. Comprehensive simulation shows that EPS-LASSO outperforms existing methods with stable type I error and FDR control. EPS-LASSO provides consistent power for both low- and high-dimensional situations compared with the other methods dealing with high-dimensional situations. The power of EPS-LASSO is close to that of other low-dimensional methods when the causal effect sizes are small and is superior when the effects are large. Applying EPS-LASSO to a transcriptome-wide gene expression study for obesity reveals 10 significant body mass index associated genes. Our results indicate that EPS-LASSO is an effective method for EPS data analysis, which can account for correlated predictors. The source code is available at https://github.com/xu1912/EPSLASSO.
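The decorrelated-score inference that distinguishes EPS-LASSO is beyond a short sketch, but the underlying sparse regression step is ordinary LASSO applied to an extreme-phenotype subsample. A minimal illustration on simulated data follows; all variable names, sample sizes, and effect sizes are assumptions, and naive LASSO on the tails ignores the ascertainment issue that the paper's test is designed to handle.

    import numpy as np
    from sklearn.linear_model import LassoCV

    rng = np.random.default_rng(0)

    # Simulate genotypes (n samples x p variants) and a quantitative trait
    n, p = 2000, 500
    X = rng.binomial(2, 0.3, size=(n, p)).astype(float)
    beta = np.zeros(p)
    beta[:5] = 0.5                                    # five causal variants
    y = X @ beta + rng.normal(0.0, 1.0, n)

    # Extreme phenotype sampling: keep only the lower and upper 10% tails
    q_lo, q_hi = np.quantile(y, [0.10, 0.90])
    keep = (y <= q_lo) | (y >= q_hi)

    # LASSO with cross-validated penalty on the extreme-phenotype subsample
    model = LassoCV(cv=5).fit(X[keep], y[keep])
    selected = np.flatnonzero(model.coef_ != 0)       # candidate variants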
Soft-Tissue Injuries Associated With High-Energy Extremity Trauma: Principles of Management.
Norris; Kellam
1997-01-01
The management of high-energy extremity trauma has evolved over the past several decades, and appropriate treatment of associated soft-tissue injuries has proved to be an important factor in achieving a satisfactory outcome. Early evaluation of the severely injured extremity is crucial. Severe closed injuries require serial observation of the soft tissues and early skeletal stabilization. Open injuries require early aggressive debridement of the soft tissues followed by skeletal stabilization. Temporary wound dressings should remain in place until definitive soft-tissue coverage has been obtained. Definitive soft-tissue closure will be expedited by serial debridements performed every 48 to 72 hours in a sterile environment. Skeletal union is facilitated by early bone grafting and/or modification of the stabilizing device. Aggressive rehabilitation, including early social reintegration, is crucial for a good functional outcome. Adherence to protocols is especially beneficial in the management of salvageable severely injured extremities.
Boucher, Florian C.; Thuiller, Wilfried; Roquet, Cristina; Douzet, Rolland; Aubert, Serge; Alvarez, Nadir; Lavergne, Sébastien
2014-01-01
Relatively few species have been able to colonize extremely cold alpine environments. We investigate the role played by the cushion life form in the evolution of climatic niches in the plant genus Androsace s.l., which spreads across the mountain ranges of the Northern Hemisphere. Using robust methods that account for phylogenetic uncertainty, intraspecific variability of climatic requirements, and different life-history evolution scenarios, we show that the climatic niches of Androsace s.l. exhibit low phylogenetic signal and that they evolved relatively recently and punctually. Models of niche evolution fitted onto phylogenies show that the cushion life form has been a key innovation providing the opportunity to occupy extremely cold environments, thus contributing to rapid climatic niche diversification in the genus Androsace s.l. We then propose a plausible scenario for the adaptation of plants to alpine habitats. PMID:22486702
Design of thermocouple probes for measurement of rocket exhaust plume temperatures
NASA Astrophysics Data System (ADS)
Warren, R. C.
1994-06-01
This paper summarizes a literature survey on high temperature measurement and describes the design of probes used in plume measurements. There were no reported cases of measurements in extreme environments such as exist in solid rocket exhausts, but there were a number of thermocouple designs which had been used under less extreme conditions and which could be further developed. Tungsten-rhenium (W-Re) thermocouples had the combined properties of strength at high temperatures, high thermoelectric emf, and resistance to chemical attack. A shielded probe was required, both to protect the thermocouple junction and to minimise radiative heat losses. After some experimentation, a twin-shielded design made from molybdenum gave acceptable results. Corrections for thermal conduction losses were made based on a method obtained from the literature. Radiation losses were minimized with this probe design, and corrections for these losses were too complex and unreliable to be included.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
Low Contrast Dose Catheter-Directed CT Angiography (CCTA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Formosa, Amanda, E-mail: amandaformosa@yahoo.ca; Santos, Denise May, E-mail: contact@denisemaysantos.com; Marcuzzi, Daniel
2016-04-15
Purpose: Catheter-directed computed tomography angiography (CCTA) has been shown to reduce the contrast volumes required in conventional CTA, thus minimizing the risk of contrast-induced nephropathy (CIN). Materials and Methods: A retrospective analysis was performed on cases where CCTA was used to assess access vessels prior to transfemoral aortic valve implantation (TAVI, n = 53), abdominal aortic aneurysm assessment for endovascular aneurysm repair (EVAR, n = 11), and peripheral vascular disease (PVD, n = 24). Results: We show that CCTA can image vasculature with adequate diagnostic detail to allow assessment of lower extremity disease, anatomic suitability for EVAR, as well as potential contraindications to TAVI. Average contrast volumes for pre-TAVI, pre-EVAR, and PVD cases were 7, 11, and 28 mL, respectively. Conclusion: This study validates the use of CCTA in obtaining diagnostic images of the abdominal and pelvic vessels and in imaging lower extremity vasculature.
Single-shot spectro-temporal characterization of XUV pulses from a seeded free-electron laser
De Ninno, Giovanni; Gauthier, David; Mahieu, Benoît; Ribič, Primož Rebernik; Allaria, Enrico; Cinquegrana, Paolo; Danailov, Miltcho Bojanov; Demidovich, Alexander; Ferrari, Eugenio; Giannessi, Luca; Penco, Giuseppe; Sigalotti, Paolo; Stupar, Matija
2015-01-01
Intense ultrashort X-ray pulses produced by modern free-electron lasers (FELs) allow one to probe biological systems, inorganic materials and molecular reaction dynamics with nanoscale spatial and femtosecond-scale temporal resolution. These experiments require the knowledge, and possibly the control, of the spectro-temporal content of individual pulses. FELs relying on seeding have the potential to produce spatially and temporally fully coherent pulses. Here we propose and implement an interferometric method, which allows us to carry out the first complete single-shot spectro-temporal characterization of the pulses generated by an FEL in the extreme ultraviolet spectral range. Moreover, we provide the first direct evidence of the temporal coherence of a seeded FEL working in the extreme ultraviolet spectral range and show the way to control the light generation process to produce Fourier-limited pulses. Experiments are carried out at the FERMI FEL in Trieste. PMID:26290320
Dual-domain lateral shearing interferometer
Naulleau, Patrick P.; Goldberg, Kenneth Alan
2004-03-16
The phase-shifting point diffraction interferometer (PS/PDI) was developed to address the problem of at-wavelength metrology of extreme ultraviolet (EUV) optical systems. Although extremely accurate, the fact that the PS/PDI is limited to use with coherent EUV sources, such as undulator radiation, is a drawback for its widespread use. An alternative to the PS/PDI, with relaxed coherence requirements, is lateral shearing interferometry (LSI). The use of a cross-grating, carrier-frequency configuration to characterize a large-field 4x-reduction EUV lithography optic is demonstrated. The results obtained are directly compared with PS/PDI measurements. A defocused implementation of the lateral shearing interferometer is described in which an image-plane filter allows both phase-shifting and Fourier wavefront recovery. The two wavefront recovery methods can be combined in a dual-domain technique providing suppression of noise added by self-interference of high-frequency components in the test-optic wavefront.
A new method of sweat testing: the CF Quantum®sweat test.
Rock, Michael J; Makholm, Linda; Eickhoff, Jens
2014-09-01
Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and evaluates the diagnostic accuracy and analytic validity of the CFQT. Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficients of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. At one center in this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher with the CFQT method (16.5%) compared to conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests.
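For readers who want to reproduce this kind of method comparison on their own data, the headline statistics (Pearson correlation between paired measurements, and sensitivity/specificity against a reference classification) can be computed as below. The numbers and the 60 mmol/L cut-off used here are purely illustrative, not the study data.

    import numpy as np
    from scipy.stats import pearsonr

    def sensitivity_specificity(test_positive, reference_positive):
        """Sensitivity and specificity of a binary test against a reference."""
        t = np.asarray(test_positive, dtype=bool)
        r = np.asarray(reference_positive, dtype=bool)
        tp = np.sum(t & r)
        fn = np.sum(~t & r)
        tn = np.sum(~t & ~r)
        fp = np.sum(t & ~r)
        return tp / (tp + fn), tn / (tn + fp)

    # Illustrative paired sweat chloride values (mmol/L): CFQT vs conventional
    cfqt = np.array([95.0, 102.0, 88.0, 15.0, 22.0, 30.0, 110.0, 12.0, 40.0, 25.0])
    conv = np.array([98.0, 100.0, 90.0, 14.0, 20.0, 28.0, 105.0, 13.0, 38.0, 27.0])

    r, _ = pearsonr(cfqt, conv)                          # agreement between methods
    sens, spec = sensitivity_specificity(cfqt >= 60.0,   # assumed diagnostic cut-off
                                         conv >= 60.0)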
Efficient Statistically Accurate Algorithms for the Fokker-Planck Equation in Large Dimensions
NASA Astrophysics Data System (ADS)
Chen, N.; Majda, A.
2017-12-01
Solving the Fokker-Planck equation for high-dimensional complex turbulent dynamical systems is an important and practical issue. However, most traditional methods suffer from the curse of dimensionality and have difficulties in capturing the fat-tailed, highly intermittent probability density functions (PDFs) of complex systems in turbulence, neuroscience, and excitable media. In this article, efficient statistically accurate algorithms are developed for solving both the transient and the equilibrium solutions of Fokker-Planck equations associated with high-dimensional nonlinear turbulent dynamical systems with conditional Gaussian structures. The algorithms involve a hybrid strategy that requires only a small number of ensembles. Here, a conditional Gaussian mixture in a high-dimensional subspace, obtained via an extremely efficient parametric method, is combined with a judicious non-parametric Gaussian kernel density estimation in the remaining low-dimensional subspace. In particular, the parametric method, which is based on an effective data assimilation framework, provides closed analytical formulae for determining the conditional Gaussian distributions in the high-dimensional subspace. Therefore, it is computationally efficient and accurate. The full non-Gaussian PDF of the system is then given by a Gaussian mixture. Unlike traditional particle methods, each conditional Gaussian distribution here covers a significant portion of the high-dimensional PDF. Therefore a small number of ensembles is sufficient to recover the full PDF, which overcomes the curse of dimensionality. Notably, the mixture distribution has significant skill in capturing the transient behavior with fat tails of the high-dimensional non-Gaussian PDFs, and this facilitates the algorithms in accurately describing the intermittency and extreme events in complex turbulent systems. It is shown in a stringent set of test problems that the method requires only on the order of 100 ensembles to successfully recover the highly non-Gaussian transient PDFs in up to 6 dimensions with only small errors.
Extreme-scale Algorithms and Solver Resilience
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dongarra, Jack
A widening gap exists between the peak performance of high-performance computers and the performance achieved by complex applications running on these platforms. Over the next decade, extreme-scale systems will present major new challenges to algorithm development that could amplify this mismatch in such a way that it prevents the productive use of future DOE Leadership computers due to the following: extreme levels of parallelism due to multicore processors; an increase in system fault rates, requiring algorithms to be resilient beyond just checkpoint/restart; complex memory hierarchies and costly data movement in both energy and performance; heterogeneous system architectures (mixing CPUs, GPUs, etc.); and conflicting goals of performance, resilience, and power requirements.
Fluet, Gerard G.; Patel, Jigna; Qiu, Qinyin; Yarossi, Matthew; Massood, Supriya; Adamovich, Sergei V.; Tunik, Eugene; Merians, Alma S.
2016-01-01
Purpose: The complexity of upper extremity (UE) behavior requires recovery of near-normal neuromuscular function to minimize residual disability following a stroke. This requirement places a premium on spontaneous recovery and neuroplastic adaptation to rehabilitation by the lesioned hemisphere. Motor skill learning is frequently cited as a requirement for neuroplasticity. Studies examining the links between training, motor learning, neuroplasticity, and improvements in hand motor function are indicated. Methods: This case study describes a patient with slowly recovering hand and finger movement (Total Upper Extremity Fugl–Meyer examination score = 25/66, Wrist and Hand items = 2/24 on poststroke day 37) following a stroke. The patient received an intensive eight-session intervention utilizing simulated activities that focused on the recovery of finger extension, finger individuation, and pinch-grasp force modulation. Results: Over the eight sessions, the patient demonstrated improvements on untrained transfer tasks, which suggests that motor learning had occurred, as well as a dramatic increase in hand function and corresponding expansion of the cortical motor map area representing several key muscles of the paretic hand. Recovery of hand function and motor map expansion continued after discharge through the three-month retention testing. Conclusion: This case study describes a neuroplasticity-based intervention for UE hemiparesis and a model for examining the relationship between training, motor skill acquisition, neuroplasticity, and motor function changes. PMID:27669997
Comparison of joint kinetics during free weight and flywheel resistance exercise.
Chiu, Loren Z F; Salem, George J
2006-08-01
The most common modality for resistance exercise is free weight resistance. Alternative methods of providing external resistance have been investigated, in particular for use in microgravity environments such as space flight. One alternative modality is flywheel inertial resistance, which generates resistance as a function of the mass, distribution of mass, and angular acceleration of the flywheel. The purpose of this investigation was to characterize net joint kinetics of multijoint exercises performed with a flywheel inertial resistance device in comparison to free weights. Eleven trained men and women performed the front squat, lunge, and push press on separate days with free weight or flywheel resistance, while instrumented for biomechanical analysis. Front squats performed with flywheel resistance required greater contribution of the hip and ankle, and less contribution of the knee, compared to free weight. Push presses performed with flywheel resistance had similar impulse requirements at the knee compared to free weight, but greater impulse requirement at the hip and ankle. As used in this investigation, flywheel inertial resistance increases the demand on the hip extensors and ankle plantarflexors and decreases the mechanical demand on the knee extensors for lower extremity exercises such as the front squat and lunge. Exercises involving dynamic lower and upper extremity actions, such as the push press, may benefit from flywheel inertial resistance, due to the increased mechanical demand on the knee extensors.
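Since flywheel inertial resistance is generated by the flywheel's moment of inertia and angular acceleration, a minimal illustration of that relation follows; the flywheel dimensions are hypothetical, not the device used in the study.

```python
# Illustrative calculation of flywheel inertial resistance: torque is the
# product of the flywheel's moment of inertia and its angular acceleration.
# Dimensions below are hypothetical, not the device used in the study.

def disc_inertia(mass_kg, radius_m):
    """Moment of inertia of a uniform disc about its axis: I = 1/2 m r^2."""
    return 0.5 * mass_kg * radius_m ** 2

def flywheel_torque(inertia, angular_accel):
    """Resistive torque (N*m) = I (kg*m^2) * alpha (rad/s^2)."""
    return inertia * angular_accel

I = disc_inertia(mass_kg=10.0, radius_m=0.15)
print(flywheel_torque(I, angular_accel=20.0))   # N*m for an assumed acceleration
```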
NASA Astrophysics Data System (ADS)
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of the system as new lots have to wait until the previous lot is measured. One solution is to use a less dense overlay sampling scheme and computationally up-sample the data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
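A hedged sketch of the hybrid fingerprint idea described above follows: a dense model-based overlay map is pulled toward sparse measured values near the measurement sites so that local errors are preserved. The Gaussian-weight blending scheme is an assumption for illustration, not the system described in the paper.

```python
# Hedged sketch of the hybrid idea: start from a smooth, model-based overlay
# fingerprint on a dense grid, then pull it toward sparse measurements near the
# measured sites so that local errors survive. The weighting scheme is an
# assumption for illustration, not the system described in the paper.
import numpy as np

def hybrid_fingerprint(grid_xy, model_grid, meas_xy, meas_val, length=10.0):
    """Blend dense model values with sparse measurements via Gaussian weights."""
    out = model_grid.copy()
    for (mx, my), v in zip(meas_xy, meas_val):
        d2 = (grid_xy[:, 0] - mx) ** 2 + (grid_xy[:, 1] - my) ** 2
        w = np.exp(-d2 / (2 * length ** 2))
        out += w * (v - out)          # move toward the measurement where w ~ 1
    return out

grid = np.array([[x, y] for x in range(0, 100, 5) for y in range(0, 100, 5)], float)
model = 0.001 * grid[:, 0]            # toy global fingerprint (nm of overlay)
meas_sites = [(50.0, 50.0)]
meas_vals = [0.2]
print(hybrid_fingerprint(grid, model, meas_sites, meas_vals).max())
```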
Badash, Ido; Burtt, Karen E; Leland, Hyuma A; Gould, Daniel J; Rounds, Alexis D; Azadgoli, Beina; Patel, Ketan M; Carey, Joseph N
2017-10-01
Traumatic lower extremity fractures with compromised arterial flow are limb-threatening injuries. A retrospective review of 158 lower extremities with traumatic fractures, including 26 extremities with arterial injuries, was performed to determine the effects of vascular compromise on flap survival, successful limb salvage and complication rates. Patients with arterial injuries had a larger average flap surface area (255.1 vs 144.6 cm2, P = 0.02) and a greater number of operations (4.7 vs 3.8, P = 0.01) than patients without vascular compromise. Patients presenting with vascular injury were also more likely to require fasciotomy [odds ratio (OR): 6.5, confidence interval (CI): 2.3-18.2] and to have a nerve deficit (OR: 16.6, CI: 3.9-70.0), fracture of the distal third of the leg (OR: 2.9, CI: 1.15-7.1) and intracranial hemorrhage (OR: 3.84, CI: 1.1-12.9). After soft tissue reconstruction, patients with arterial injuries had a higher rate of amputation (OR: 8.5, CI: 1.3-53.6) and flap failure requiring a return to the operating room (OR: 4.5, CI: 1.5-13.2). Arterial injury did not correlate with infection or overall complication rate. In conclusion, arterial injuries resulted in significant complications for patients with lower extremity fractures requiring flap coverage, although limb salvage was still effective in most cases.
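As a reminder of how the reported odds ratios and confidence intervals are computed, the following sketch applies the standard log-odds formula to a hypothetical 2x2 table; the counts are not from the study.

```python
# Sketch of the odds-ratio arithmetic reported above, on a hypothetical 2x2 table.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and 95% CI from exposed (with/without outcome) = a, b and unexposed = c, d."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

print(odds_ratio_ci(10, 16, 12, 120))   # hypothetical counts
```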
Statistic analysis of annual total ozone extremes for the period 1964-1988
NASA Technical Reports Server (NTRS)
Krzyscin, Janusz W.
1994-01-01
Annual extremes of total column amount of ozone (in the period 1964-1988) from a network of 29 Dobson stations have been examined using extreme value analysis. The extremes have been calculated as the highest deviation of daily mean total ozone from its long-term monthly mean, normalized by the monthly standard deviations. The extremes have been selected from the direct-Sun total ozone observations only. The extremes resulting from abrupt changes in ozone (day-to-day changes greater than 20 percent) have not been considered. The ordered extremes (maxima in ascending order, minima in descending order) have been fitted to one of three forms of the Fisher-Tippett extreme value distribution by the nonlinear least squares method (Levenberg-Marquardt method). We have found that the ordered extremes from a majority of Dobson stations lie close to the Fisher-Tippett type III distribution. The extreme value analysis of the composite annual extremes (combined from averages of the annual extremes selected at individual stations) has shown that the composite maxima are fitted by the Fisher-Tippett type III and the composite minima by the Fisher-Tippett type I. The difference between the Fisher-Tippett types of the composite extremes seems to be related to the ozone downward trend. Extreme value prognoses for the period 1964-2014 (derived from the data taken at all analyzed stations, the North American stations, and the European stations) have revealed that the prognostic extremes are close to the largest annual extremes in the period 1964-1988 and there are only small regional differences in the prognoses.
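A generic modern sketch of this kind of extreme value fitting (not the original 1994 analysis) uses the generalized extreme value distribution, which contains the three Fisher-Tippett types as special cases of its shape parameter:

```python
# Generic sketch (not the original 1994 analysis): fit annual normalized ozone
# extremes to a generalized extreme value (GEV) distribution, which contains
# the three Fisher-Tippett types as special cases of its shape parameter.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
annual_maxima = stats.genextreme.rvs(c=0.2, loc=2.0, scale=0.5, size=25, random_state=rng)

shape, loc, scale = stats.genextreme.fit(annual_maxima)
# In scipy's convention, shape > 0 corresponds to a bounded (Type III) tail.
print(shape, loc, scale)

# 50-year return level implied by the fitted distribution.
print(stats.genextreme.ppf(1 - 1 / 50, shape, loc=loc, scale=scale))
```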
NASA Astrophysics Data System (ADS)
Pang, Linyong; Hu, Peter; Satake, Masaki; Tolani, Vikram; Peng, Danping; Li, Ying; Chen, Dongxue
2011-11-01
According to the ITRS roadmap, mask defects are among the top technical challenges in introducing extreme ultraviolet (EUV) lithography into production. Making a multilayer-defect-free EUV blank is not possible today, and is unlikely to happen in the next few years. This means that EUV must work with multilayer defects present on the mask. The method proposed by Luminescent is to compensate for the effects of multilayer defects on images by modifying the absorber patterns. The effect of a multilayer defect is to distort the images of adjacent absorber patterns. Although the defect cannot be repaired, the images may be restored to their desired targets by changing the absorber patterns. This method was first introduced in our paper at BACUS 2010, which described a simple pixel-based compensation algorithm using a fast multilayer model. The fast model made it possible to complete the compensation calculations in seconds, instead of the days or weeks required for rigorous finite-difference time-domain (FDTD) simulations. Our SPIE 2011 paper introduced an advanced compensation algorithm using the Level Set Method for 2D absorber patterns. In this paper, the method is extended to consider the process window and to allow repair-tool constraints, such as permitting etching but not deposition. The multilayer defect growth model is also enhanced so that the multilayer defect can be "inverted", or recovered from the top-layer profile using a calibrated model.
NASA Astrophysics Data System (ADS)
Lajeunesse, E.; Delacourt, C.; Allemand, P.; Limare, A.; Dessert, C.; Ammann, J.; Grandjean, P.
2010-12-01
A series of recent works has underlined that the flux of material exported out of a watershed is dramatically increased during extreme climatic events, such as storms, tropical cyclones and hurricanes [Dadson et al., 2003 and 2004; Hilton et al., 2008]. Indeed, the exceptionally high rainfall rates reached during these events trigger runoff and landsliding, which destabilize slopes and accumulate a significant amount of sediment in flooded rivers. This observation raises the question of the control that extreme climatic events might exert on the denudation rate and the morphology of watersheds. Addressing this question requires measuring sediment transport in flooded rivers. However, most conventional sediment monitoring techniques rely on manually operated measurements, which cannot be performed during extreme climatic events. Monitoring riverine sediment transport during extreme climatic events therefore remains a challenging issue because of the lack of instruments and methodologies adapted to such extreme conditions. In this paper, we present a new methodology aimed at estimating the impact of extreme events on sediment transport in rivers. Our approach relies on the development of two instruments. The first is an in-situ optical instrument, based on a LISST-25X sensor, capable of measuring both the water level and the concentration of suspended matter in rivers with a time step ranging from one measurement every hour at low flow to one measurement every 2 minutes during a flood. The second is a remote-controlled drone helicopter used to acquire high-resolution stereophotogrammetric images of the river bed, which are used to compute DEMs and to estimate how flash floods affect the granulometry and the morphology of the river. These two instruments were developed and tested during a 1.5-year field survey performed from June 2007 to January 2009 on the Capesterre river, located on Basse-Terre island (Guadeloupe archipelago, Lesser Antilles Arc).
The end of trend-estimation for extreme floods under climate change?
NASA Astrophysics Data System (ADS)
Schulz, Karsten; Bernhardt, Matthias
2016-04-01
An increased risk of flood events is one of the major threats under future climate change conditions. Therefore, many recent studies have investigated trends in extreme flood occurrences using historic long-term river discharge data as well as simulations from combined global/regional climate and hydrological models. Severe floods are relatively rare events, and the robust estimation of their probability of occurrence requires long time series of data (6). Following a method outlined by the IPCC research community, trends in extreme floods are calculated based on the difference of discharge values exceeding, e.g., a 100-year level (Q100) between two 30-year windows, which represent prevailing conditions in a reference and a future time period, respectively. Following this approach, we analysed multiple, synthetically derived 2,000-year, trend-free, yearly maximum runoff series generated using three different extreme value distributions (EVD). The parameters were estimated from long-term runoff data of four large European watersheds (Danube, Elbe, Rhine, Thames). Both the Q100 values estimated from 30-year moving windows and the subsequently derived trends showed enormous variations with time: for example, fitting the extreme value (Gumbel) distribution to the Danube data, trends of Q100 in the synthetic time series range from -4,480 to 4,028 m³/s per 100 years (Q100 = 10,071 m³/s, for reference). Similar results were found when applying other extreme value distributions (Weibull and log-normal) to all of the watersheds considered. This variability, or "background noise", in estimating trends in flood extremes makes it almost impossible to significantly distinguish any real trend in observed as well as modelled data when such an approach is applied. These uncertainties, even though known in principle, are hardly addressed and discussed by the climate change impact community. Any decision making and flood risk management, including the dimensioning of flood protection measures, that is based on such studies might therefore be fundamentally flawed.
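The experiment described above can be sketched as follows with hypothetical Gumbel parameters: a long, trend-free series of annual maxima, Q100 estimated in 30-year windows, and the spread of apparent trends between windows.

```python
# Sketch of the experiment described above, with hypothetical parameters:
# a long, trend-free series of Gumbel annual maxima, Q100 estimated in
# 30-year windows, and the spread of apparent "trends" between windows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_years = 2000
annual_max = stats.gumbel_r.rvs(loc=5000, scale=1500, size=n_years, random_state=rng)

def q100(sample):
    loc, scale = stats.gumbel_r.fit(sample)
    return stats.gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)

window = 30
q100_series = np.array([q100(annual_max[i:i + window])
                        for i in range(0, n_years - window)])
# Apparent change between windows 100 years apart, although the series is trend-free.
diffs = q100_series[100:] - q100_series[:-100]
print(diffs.min(), diffs.max())
```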
NASA Technical Reports Server (NTRS)
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael G.; Gershunov, Alexander; Gutowski, William J., Jr.; Gyakum, John R.; Katz, Richard W.;
2015-01-01
The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic- to planetary-scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and they underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.
NASA Astrophysics Data System (ADS)
Imani, Moslem; Kao, Huan-Chin; Lan, Wen-Hau; Kuo, Chung-Yen
2018-02-01
The analysis and the prediction of sea level fluctuations are core requirements of marine meteorology and operational oceanography. Estimates of sea level with hours-to-days warning times are especially important for low-lying regions and coastal zone management. The primary purpose of this study is to examine the applicability and capability of extreme learning machine (ELM) and relevance vector machine (RVM) models for predicting sea level variations and compare their performances with powerful machine learning methods, namely, support vector machine (SVM) and radial basis function (RBF) models. The input dataset from the period of January 2004 to May 2011 used in the study was obtained from the Dongshi tide gauge station in Chiayi, Taiwan. Results showed that the ELM and RVM models outperformed the other methods. The performance of the RVM approach was superior in predicting the daily sea level time series, given the minimum root mean square error of 34.73 mm and the maximum coefficient of determination (R²) of 0.93 during the testing periods. Furthermore, the obtained results were in close agreement with the original tide-gauge data, which indicates that the RVM approach is a promising alternative method for time series prediction and could be successfully used for daily sea level forecasts.
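A minimal extreme learning machine regressor, which illustrates the method class used in the study (not the authors' implementation or data), can be sketched as:

```python
# Minimal extreme learning machine (ELM) regressor sketch: a random hidden
# layer followed by a least-squares output layer. This illustrates the method
# class used in the study, not the authors' implementation or data.
import numpy as np

class ELMRegressor:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)          # random feature map
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy daily sea-level-like series: predict the next value from the past 7 days.
t = np.arange(3000)
series = 100 * np.sin(2 * np.pi * t / 365.25) + np.random.default_rng(1).normal(0, 5, t.size)
X = np.array([series[i:i + 7] for i in range(len(series) - 8)])
y = series[7:-1]
model = ELMRegressor().fit(X[:2500], y[:2500])
rmse = np.sqrt(np.mean((model.predict(X[2500:]) - y[2500:]) ** 2))
print(rmse)
```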
Sood, Aditya; Therattil, Paul J; Russo, Gerardo; Lee, Edward S
2017-01-01
Objective: The latissimus dorsi flap is a workhorse for plastic surgeons, being used for many years for soft-tissue coverage of the upper extremity as well as for functional reconstruction to restore motion to the elbow and shoulder. The authors present a case of functional latissimus dorsi transfer for restoration of elbow flexion and review the literature on technique and outcomes. Methods: A literature review was performed using MEDLINE and the Cochrane Collaboration Library for primary research articles on functional latissimus dorsi flap transfer. Data related to surgical techniques and outcomes were extracted. Results: The literature search yielded 13 relevant studies, with a total of 52 patients who received pedicled, functional latissimus dorsi flaps for upper-extremity reconstruction. The most common etiology requiring reconstruction was closed brachial plexus injury (n = 13). After flap transfer, 98% of patients were able to flex the elbow against gravity and 82.3% were able to flex against resistance. In the presented case, a 77-year-old man underwent resection of myxofibrosarcoma of the upper arm with elbow prosthesis placement and functional latissimus dorsi transfer. The patient was able to actively flex against gravity at 3-month follow-up. Conclusions: A review of the literature shows that nearly all patients undergoing functional latissimus dorsi transfer for upper-extremity reconstruction regain at least motion against gravity whereas a large proportion regain motion against resistance. Considerations when planning for functional latissimus dorsi transfer include patient positioning, appropriate tensioning of the muscle, safe inset, polarity, management of other affected upper-extremity joints, and educating patients on the expected outcomes.
Hedman, Travis L; Chapman, Ted T; Dewey, William S; Quick, Charles D; Wolf, Steven E; Holcomb, John B
2007-01-01
Burn therapists routinely are tasked to position the lower extremities of burn patients for pressure ulcer prevention, skin graft protection, donor site ventilation, and edema reduction. We developed two durable and low-maintenance devices that allow effective positioning of the lower extremities. The high-profile and low-profile leg net devices were simple to fabricate and maintain. The frame was assembled using a three-quarter-inch diameter copper pipe and copper fittings (45 degrees, 90 degrees, and tees). A double layer of elasticized tubular netting was pulled over the frame and doubled back for leg support to complete the devices. The devices can be placed on any bed surface. The netting can be exchanged when soiled and the frame can be disinfected between patients using standard techniques. Both devices were used on approximately 250 patients for a total of 1200 treatment days. No incidence of pressure ulcer was observed, and graft take was not adversely affected. The devices have not required repairs or replacement. Medical providers reported they are easy to apply and effectively maintain proper positioning throughout application. Neither device interfered with the application of other positioning devices. Both devices were found to be an effective method of positioning lower extremities to prevent pressure ulcer, minimize graft loss and donor site morbidity, and reduce edema. The devices allowed for proper wound ventilation and protected grafted lower extremities on any bed surface. The devices are simple to fabricate and maintain. Both devices can be effectively used simultaneously with other positioning devices.
The limits for life under multiple extremes.
Harrison, Jesse P; Gheeraert, Nicolas; Tsigelnitskiy, Dmitry; Cockell, Charles S
2013-04-01
Life on Earth is limited by physical and chemical extremes that define the 'habitable space' within which it operates. Aside from its requirement for liquid water, no definite limits have been established for life under any extreme. Here, we employ growth data published for 67 prokaryotic strains to explore the limitations for microbial life under combined extremes of temperature, pH, salt (NaCl) concentrations, and pressure. Our review reveals a fundamental lack of information on the tolerance of microorganisms to multiple extremes that impedes several areas of science, ranging from environmental and industrial microbiology to the search for extraterrestrial life. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Filippa, Gianluca; Cremonese, Edoardo; Galvagno, Marta; Migliavacca, Mirco; Morra di Cella, Umberto; Petey, Martina; Siniscalco, Consolata
2015-12-01
The increasingly important effect of climate change and extremes on alpine phenology highlights the need to establish accurate monitoring methods to track inter-annual variation (IAV) and long-term trends in plant phenology. We evaluated four different indices of phenological development (two for plant productivity, i.e., green biomass and leaf area index; two for plant greenness, i.e., greenness from visual inspection and from digital images) from a 5-year monitoring of ecosystem phenology, here defined as the seasonal development of the grassland canopy, in a subalpine grassland site (NW Alps). Our aim was to establish an effective observation strategy that enables the detection of shifts in grassland phenology in response to climate trends and meteorological extremes. The seasonal development of the vegetation at this site appears strongly controlled by snowmelt mostly in its first stages and to a lesser extent in the overall development trajectory. All indices were able to detect an anomalous beginning of the growing season in 2011 due to an exceptionally early snowmelt, whereas only some of them revealed a later beginning of the growing season in 2013 due to a late snowmelt. A method is developed to derive the number of samples that maximise the trade-off between sampling effort and accuracy in IAV detection in the context of long-term phenology monitoring programmes. Results show that spring phenology requires a smaller number of samples than autumn phenology to track a given target of IAV. Additionally, productivity indices (leaf area index and green biomass) have a higher sampling requirement than greenness derived from visual estimation and from the analysis of digital images. Of the latter two, the analysis of digital images stands out as the more effective, rapid and objective method to detect IAV in vegetation development.
Instrument control software requirement specification for Extremely Large Telescopes
NASA Astrophysics Data System (ADS)
Young, Peter J.; Kiekebusch, Mario J.; Chiozzi, Gianluca
2010-07-01
Engineers in several observatories are now designing the next generation of optical telescopes, the Extremely Large Telescopes (ELT). These are very complex machines that will host sophisticated astronomical instruments to be used for a wide range of scientific studies. In order to carry out scientific observations, a software infrastructure is required to orchestrate the control of the multiple subsystems and functions. This paper will focus on describing the considerations, strategies and main issues related to the definition and analysis of the software requirements for the ELT's Instrument Control System using modern development processes and modelling tools like SysML.
Caldwell, Michelle; Dickerhoof, Erica; Hall, Anastasia; Odakura, Bryan; Fanchiang, Hsin-Chen
2014-01-01
Objective. To describe and analyze the potential use of games in the commercially available EyeToy Play and EyeToy Play 2 on required/targeted training skills and feedback provided for clinical application. Methods. A summary table including all games was created. Two movement experts naïve to the software validated required/targeted training skills and feedback for 10 randomly selected games. Ten healthy school-aged children played to further validate the required/targeted training skills. Results. All but two of the required/targeted training skills (muscular and cardiovascular endurance) had excellent agreement, and there was 100% agreement on feedback. Children's performance in required/targeted training skills (number of unilateral reaches and bilateral reaches, speed, muscular endurance, and cardiovascular endurance) significantly differed between games (P < .05). Conclusion. EyeToy Play games could be used to train children's arm function. However, a careful evaluation of the games is needed since performance might not be consistent between players and therapists' interpretation. PMID:25610652
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
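The validation style described above, distributional tests plus temporal-sequencing tests on an annual 3-day peak-flow index, can be sketched on synthetic data as:

```python
# Sketch of the validation style described above, on synthetic data: compute an
# annual 3-day peak-flow index from daily flows, then test distributional
# equality (Kolmogorov-Smirnov) and temporal sequencing (correlation) between
# "observed" and "downscaled-driven" series.
import numpy as np
from scipy import stats

def annual_3day_peak(daily_flow, days_per_year=365):
    years = len(daily_flow) // days_per_year
    peaks = []
    for k in range(years):
        yr = daily_flow[k * days_per_year:(k + 1) * days_per_year]
        rolling = np.convolve(yr, np.ones(3) / 3, mode="valid")   # 3-day means
        peaks.append(rolling.max())
    return np.array(peaks)

rng = np.random.default_rng(7)
obs = rng.gamma(shape=2.0, scale=50.0, size=365 * 30)
sim = obs + rng.normal(0, 10, obs.size)          # imperfect "downscaled" surrogate

p_obs, p_sim = annual_3day_peak(obs), annual_3day_peak(sim)
print(stats.ks_2samp(p_obs, p_sim))              # equality of distributions
print(stats.spearmanr(p_obs, p_sim))             # temporal sequencing
```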
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, Arelia T.; Cannon, Alex J.
2016-04-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e. correlation tests) and distributional properties (i.e. tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), the climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3-day peak flow and 7-day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational data sets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational data set. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7-day low-flow events, regardless of reanalysis or observational data set. Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis data sets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical data sets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
Application of redundancy in the Saturn 5 guidance and control system
NASA Technical Reports Server (NTRS)
Moore, F. B.; White, J. B.
1976-01-01
The Saturn launch vehicle's guidance and control system is so complex that the reliability of a simplex system is not adequate to fulfill mission requirements. Thus, to achieve the desired reliability, redundancy encompassing a wide range of types and levels was employed. At one extreme, the lowest level, basic components (resistors, capacitors, relays, etc.) are employed in series, parallel, or quadruplex arrangements to insure continued system operation in the presence of possible failure conditions. At the other extreme, the highest level, complete subsystem duplication is provided so that a backup subsystem can be employed in case the primary system malfunctions. In between these two extremes, many other redundancy schemes and techniques are employed at various levels. Basic redundancy concepts are covered to gain insight into the advantages obtained with various techniques. Points and methods of application of these techniques are included. The theoretical gain in reliability resulting from redundancy is assessed and compared to a simplex system. Problems and limitations encountered in the practical application of redundancy are discussed as well as techniques verifying proper operation of the redundant channels. As background for the redundancy application discussion, a basic description of the guidance and control system is included.
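The theoretical reliability gain from redundancy mentioned above can be illustrated with textbook formulas for independent failures; the single-channel reliability used here is hypothetical.

```python
# Illustrative reliability arithmetic for the redundancy levels discussed above,
# assuming independent component failures (a textbook simplification).

def parallel(r, n):
    """Reliability of n identical components in parallel (any one suffices)."""
    return 1 - (1 - r) ** n

def tmr(r):
    """Triple modular redundancy with perfect voter: at least 2 of 3 must work."""
    return 3 * r ** 2 - 2 * r ** 3

r = 0.95                       # hypothetical single-channel reliability
print("simplex ", r)
print("dual    ", parallel(r, 2))
print("tmr     ", tmr(r))
```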
Siu, Ho Chit; Arenas, Ana M; Sun, Tingxiao; Stirling, Leia A
2018-02-05
Upper-extremity exoskeletons have demonstrated potential as augmentative, assistive, and rehabilitative devices. Typical control of upper-extremity exoskeletons has relied on switches, force/torque sensors, and surface electromyography (sEMG), but these systems are usually reactive and/or rely on entirely hand-tuned parameters. sEMG-based systems may be able to provide anticipatory control, since they interface directly with muscle signals, but typically require expert placement of sensors on muscle bodies. We present an implementation of an adaptive sEMG-based exoskeleton controller that learns a mapping between muscle activation and the desired system state during interaction with a user, generating a personalized sEMG feature classifier to allow for anticipatory control. This system is robust to novice placement of sEMG sensors, as well as subdermal muscle shifts. We validate this method with 18 subjects using a thumb exoskeleton to complete a book-placement task. This learning-from-demonstration system for exoskeleton control allows for very short training times, as well as the potential for improvement in intent recognition over time, and adaptation to physiological changes in the user, such as those due to fatigue.
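A hedged sketch of the learning-from-demonstration idea follows: simple per-channel RMS features are extracted from sEMG windows and a classifier maps them to a desired exoskeleton state. The feature choice, classifier, and synthetic data are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch of the learning-from-demonstration idea: extract simple sEMG
# features (per-channel RMS over a short window) and train a classifier that
# maps them to a desired exoskeleton state. Feature choice and classifier are
# assumptions for illustration, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

def rms_features(window):
    """Root-mean-square of each sEMG channel over a window (samples x channels)."""
    return np.sqrt(np.mean(np.square(window), axis=0))

rng = np.random.default_rng(0)
# Synthetic demonstration data: 200 windows, 4 channels, labels 0 = relax, 1 = grasp.
windows = rng.normal(0, 1, size=(200, 100, 4))
labels = rng.integers(0, 2, size=200)
windows[labels == 1] *= 3.0                     # "grasp" windows have higher activation

X = np.array([rms_features(w) for w in windows])
clf = LogisticRegression().fit(X, labels)
print(clf.score(X, labels))
```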
Application of Radar-Rainfall Estimates to Probable Maximum Precipitation in the Carolinas
NASA Astrophysics Data System (ADS)
England, J. F.; Caldwell, R. J.; Sankovich, V.
2011-12-01
Extreme storm rainfall data are essential in the assessment of potential impacts on design precipitation amounts, which are used in flood design criteria for dams and nuclear power plants. Probable Maximum Precipitation (PMP) from National Weather Service Hydrometeorological Report 51 (HMR51) is currently used for design rainfall estimates in the eastern U.S. The extreme storm database associated with the report has not been updated since the early 1970s. In the past several decades, several extreme precipitation events have occurred that have the potential to alter the PMP values, particularly across the Southeast United States (e.g., Hurricane Floyd 1999). Unfortunately, these and other large precipitation-producing storms have not been analyzed with the detail required for application in design studies. This study focuses on warm-season tropical cyclones (TCs) in the Carolinas, as these systems are the critical maximum rainfall mechanisms in the region. The goal is to discern if recent tropical events may have reached or exceeded current PMP values. We have analyzed 10 storms using modern datasets and methodologies that provide enhanced spatial and temporal resolution relative to point measurements used in past studies. Specifically, hourly multisensor precipitation reanalysis (MPR) data are used to estimate storm total precipitation accumulations at various durations throughout each storm event. The accumulated grids serve as input to depth-area-duration calculations. Individual storms are then maximized using back-trajectories to determine source regions for moisture. The development of open source software has made this process time and resource efficient. Based on the current methodology, two of the ten storms analyzed have the potential to challenge HMR51 PMP values. Maximized depth-area curves for Hurricane Floyd indicate exceedance at 24- and 72-hour durations for large area sizes, while Hurricane Fran (1996) appears to exceed PMP at large area sizes for short-duration, 6-hour storms. Utilizing new methods and data, however, requires careful consideration of the potential limitations and caveats associated with the analysis and further evaluation of the newer storms within the context of historical storms from HMR51. Here, we provide a brief background on extreme rainfall in the Carolinas, along with an overview of the methods employed for converting MPR to depth-area relationships. Discussion of the issues and limitations, evaluation of the various techniques, and comparison to HMR51 storms and PMP values are also presented.
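A much-simplified depth-area-duration computation on a synthetic hourly grid can be sketched as below; real DAD analyses track contiguous storm areas, whereas this cell-ranking shortcut and the grid spacing are only illustrative.

```python
# Simplified depth-area-duration (DAD) sketch on a synthetic hourly grid:
# for each duration, accumulate rainfall, then for each area size take the mean
# depth over that many wettest cells. Real DAD analyses track contiguous areas;
# this cell-ranking shortcut is only for illustration.
import numpy as np

rng = np.random.default_rng(3)
hourly = rng.gamma(0.3, 2.0, size=(72, 50, 50))     # 72 h of rain on a 50x50 grid (mm)
cell_area_km2 = 16.0                                 # hypothetical ~4 km grid spacing

def dad_table(hourly, durations=(6, 24, 72), n_cells=(1, 10, 100)):
    rows = {}
    for d in durations:
        # maximum running d-hour accumulation at each cell
        csum = np.cumsum(hourly, axis=0)
        accum = np.max(csum[d:] - csum[:-d], axis=0) if d < hourly.shape[0] else csum[-1]
        flat = np.sort(accum.ravel())[::-1]
        rows[d] = {n * cell_area_km2: flat[:n].mean() for n in n_cells}
    return rows

for dur, row in dad_table(hourly).items():
    print(dur, "h:", row)
```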
Spoilt for choice - A comparison of downscaling approaches for hydrological impact studies
NASA Astrophysics Data System (ADS)
Rössler, Ole; Fischer, Andreas; Kotlarski, Sven; Keller, Denise; Liniger, Mark; Weingartner, Rolf
2017-04-01
With the increasing number of available climate downscaling approaches, users are often faced with the luxury problem of which downscaling method to apply in a climate change impact assessment study. In Switzerland, for instance, the new generation of local-scale climate scenarios CH2018 will be based on quantile mapping (QM), replacing the previous delta change (DC) method. In parallel to those two methods, a multi-site weather generator (WG) was developed to meet specific user needs. This poses the question of which downscaling method is the most suitable for a given application. Here, we analyze the differences between the three approaches in terms of hydro-meteorological responses in the Swiss pre-Alps, considering both mean values and indices of extremes. The comparison of the three approaches was carried out in the framework of a hydrological impact assessment study that focused on different runoff characteristics and their related meteorological indices in the meso-scale catchment of the river Thur (1700 km2), Switzerland. For this purpose, we set up the hydrological model WaSiM-ETH under present (1980-2009) and under future conditions (2070-2099), assuming the SRES A1B emission scenario. Input to the three downscaling approaches were 10 GCM-RCM simulations of the ENSEMBLES project, while eight meteorological station observations served as the reference. All station data, observed and downscaled, were interpolated to obtain meteorological fields of temperature and precipitation required by the hydrological model. For the present-day reference period, we evaluated the ability of each downscaling method to reproduce today's hydro-meteorological patterns. In the scenario runs, we focused on the comparison of change signals for each hydro-meteorological parameter generated by the three downscaling techniques. The evaluation exercise reveals that QM and WG perform equally well in representing present-day average conditions, but that QM outperforms WG in reproducing indices related to extreme conditions like the number of drought events or multi-day rain sums. In terms of mean monthly discharge changes, the three downscaling methods reveal notable differences: DC shows the strongest change signal in summer and a less pronounced one in winter. Regarding some extreme features of runoff, like the frequency of droughts and the low-flow level, DC shows change signals similar to QM and WG. This was unexpected, as DC is commonly reported to fail in projecting changes in extremes. In contrast, QM mostly shows the strongest change signals for the 10 different extreme-related indices, due to its ability to pick up more features of the climate change signals from the RCM. This indicates that DC and also WG miss some aspects, especially for flood-related indices. Hence, depending on the target variable of interest, DC and QM typically provide the full range of change signals, while WG mostly lies in between both methods. However, WG offers the great advantage of multiple realizations combined with inter-variable consistency.
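Minimal empirical versions of two of the compared approaches, delta change and quantile mapping, can be sketched on synthetic daily precipitation as follows; the data and change signal are toy values.

```python
# Minimal empirical versions of two downscaling approaches compared above,
# on synthetic data: delta change (shift observations by the modelled change)
# and quantile mapping (map model quantiles onto observed quantiles).
import numpy as np

rng = np.random.default_rng(5)
obs = rng.gamma(2.0, 3.0, 10_000)            # observed daily precipitation (mm)
mod_hist = rng.gamma(2.0, 2.5, 10_000)       # model, historical period (biased)
mod_fut = mod_hist * 1.15                    # model, future period (toy signal)

def delta_change(obs, mod_hist, mod_fut):
    """Scale observations by the model's mean change signal."""
    return obs * (mod_fut.mean() / mod_hist.mean())

def quantile_mapping(mod_fut, mod_hist, obs, n_q=100):
    """Map future model values through historical model->observed quantiles."""
    q = np.linspace(0, 1, n_q)
    return np.interp(np.interp(mod_fut, np.quantile(mod_hist, q), q),
                     q, np.quantile(obs, q))

print(delta_change(obs, mod_hist, mod_fut).mean())
print(quantile_mapping(mod_fut, mod_hist, obs).mean())
```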
Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes
NASA Astrophysics Data System (ADS)
Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.
2017-12-01
We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
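The index-gauge aggregation and disaggregation bookkeeping described above can be sketched as below; the Bayesian non-stationary frequency fit itself is not reproduced, and the return level used is a crude empirical stand-in.

```python
# Sketch of the index-gauge bookkeeping described above (the Bayesian
# non-stationary fit itself is not reproduced): per-year seasonal maxima are
# averaged over the watershed to form an index gauge, and per-cell ratios are
# kept so an index-gauge return level can be mapped back onto the grid.
import numpy as np

rng = np.random.default_rng(11)
years, ny, nx = 36, 10, 12
seasonal_max = rng.gamma(3.0, 10.0, size=(years, ny, nx))    # mm, per cell per year

index_gauge = seasonal_max.mean(axis=(1, 2))                 # one value per year
ratios = seasonal_max / index_gauge[:, None, None]           # cell / index gauge
mean_ratio = ratios.mean(axis=0)                             # climatological ratio map

# Given an index-gauge return level (e.g. from a fitted frequency curve),
# translate it to a gridded map with the disaggregation ratios.
index_gauge_100yr = np.quantile(index_gauge, 0.99)           # crude stand-in
gridded_100yr = index_gauge_100yr * mean_ratio
print(gridded_100yr.shape, gridded_100yr.max())
```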
Modern methods of cost saving of the production activity in construction
NASA Astrophysics Data System (ADS)
Silka, Dmitriy
2017-10-01
Every time the economy faces a recession, cost-saving questions acquire increased urgency. This article shows how companies in the construction industry have switched to a new kind of economic relations over recent years. It is noted that the dominant type of economic relations does not allow companies to quickly reorient toward the necessary tools in accordance with the new requirements of economic activity. Successful experience gained in the new environment is therefore in demand. Cost-saving methods proven in other industries are proposed for achieving efficiency and competitiveness of construction companies. The analysis is performed using the example of the retail sector, which, according to authoritative analytical reviews, is extremely innovative at both the local and global economic levels. Among the methods proposed, a special place is occupied by those based on today's unprecedented opportunities for communication and information exchange.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassanein, Ahmed; Konkashbaev, Isak
A device and method for generating extremely short-wave ultraviolet electromagnetic waves uses two intersecting plasma beams generated by two plasma accelerators. The intersection of the two plasma beams emits electromagnetic radiation, in particular radiation at extreme ultraviolet wavelengths. In the preferred orientation, two axially aligned, counter-streaming plasmas collide to produce an intense source of electromagnetic radiation at the 13.5 nm wavelength. The Mather-type plasma accelerators can utilize tin- or lithium-covered electrodes. Tin, lithium, or xenon can be used as the photon-emitting gas source.
Challenges and opportunities in the manufacture and expansion of cells for therapy.
Maartens, Joachim H; De-Juan-Pardo, Elena; Wunner, Felix M; Simula, Antonio; Voelcker, Nicolas H; Barry, Simon C; Hutmacher, Dietmar W
2017-10-01
Laboratory-based ex vivo cell culture methods are largely manual in their manufacturing processes. This makes it extremely difficult to meet regulatory requirements for process validation, quality control and reproducibility. Cell culture concepts with a translational focus need to embrace a more automated approach where cell yields are able to meet the quantitative production demands, the correct cell lineage and phenotype is readily confirmed and reagent usage has been optimized. Areas covered: This article discusses the obstacles inherent in classical laboratory-based methods, their concomitant impact on cost-of-goods and that a technology step change is required to facilitate translation from bed-to-bedside. Expert opinion: While traditional bioreactors have demonstrated limited success where adherent cells are used in combination with microcarriers, further process optimization will be required to find solutions for commercial-scale therapies. New cell culture technologies based on 3D-printed cell culture lattices with favourable surface to volume ratios have the potential to change the paradigm in industry. An integrated Quality-by-Design /System engineering approach will be essential to facilitate the scaled-up translation from proof-of-principle to clinical validation.
NASA Technical Reports Server (NTRS)
Scheper, C.; Baker, R.; Frank, G.; Yalamanchili, S.; Gray, G.
1992-01-01
Systems for Space Defense Initiative (SDI) space applications typically require both high performance and very high reliability. These requirements present the systems engineer evaluating such systems with the extremely difficult problem of conducting performance and reliability trade-offs over large design spaces. A controlled development process supported by appropriate automated tools must be used to assure that the system will meet design objectives. This report describes an investigation of methods, tools, and techniques necessary to support performance and reliability modeling for SDI systems development. Models of the JPL Hypercubes, the Encore Multimax, and the C.S. Draper Lab Fault-Tolerant Parallel Processor (FTPP) parallel-computing architectures using candidate SDI weapons-to-target assignment algorithms as workloads were built and analyzed as a means of identifying the necessary system models, how the models interact, and what experiments and analyses should be performed. As a result of this effort, weaknesses in the existing methods and tools were revealed and capabilities that will be required for both individual tools and an integrated toolset were identified.
Carvalho, Rimenys J; Cruz, Thayana A
2018-01-01
High-throughput screening (HTS) systems have emerged as important tools to provide fast and low-cost evaluation of several conditions at once, since they require small quantities of material and small sample volumes. These characteristics are extremely valuable for experiments with large numbers of variables, enabling the application of design of experiments (DoE) strategies or simple experimental planning approaches. Once the capacity of HTS systems to mimic chromatographic purification steps was established, several studies were performed successfully, including scale-down purification. Here, we propose a method for studying different purification conditions that can be used for any recombinant protein, including complex and glycosylated proteins, using low-binding filter microplates.
A dynamic method for magnetic torque measurement
NASA Technical Reports Server (NTRS)
Lin, C. E.; Jou, H. L.
1994-01-01
In a magnetic suspension system, accurate force measurement results in better control performance in the test section, especially when a wider range of operation is required. Although many useful methods have been developed to obtain the desired model, significant error is inevitable since the magnetic field distribution of a large-gap magnetic suspension system is extremely nonlinear. This paper proposes a simple approach to measuring the magnetic torque of a magnetic suspension system using an angular photo encoder. The magnetic torque is derived from measurements of the change in angular velocity. The proposed idea is described and implemented to obtain the desired data, and it is useful for calculating the magnetic force in the magnetic suspension system.
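The dynamic measurement idea, converting encoder velocity change to torque through a known moment of inertia, can be sketched as follows; the numerical values are hypothetical.

```python
# Sketch of the dynamic measurement idea: read angular position from an
# encoder, differentiate twice to get angular acceleration, and convert to
# torque with the known moment of inertia of the suspended model. Values are
# hypothetical.
import numpy as np

def torque_from_encoder(theta_rad, dt_s, inertia_kg_m2):
    omega = np.gradient(theta_rad, dt_s)        # angular velocity
    alpha = np.gradient(omega, dt_s)            # angular acceleration
    return inertia_kg_m2 * alpha                # tau = I * alpha

t = np.arange(0, 1, 1e-3)
theta = 0.5 * 4.0 * t ** 2                      # constant 4 rad/s^2 spin-up
print(torque_from_encoder(theta, 1e-3, inertia_kg_m2=0.02)[10])   # ~0.08 N*m
```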
A Novel Method of Preparation of Inorganic Glasses by Microwave Irradiation
NASA Astrophysics Data System (ADS)
Vaidhyanathan, B.; Ganguli, Munia; Rao, K. J.
1994-12-01
Microwave heating is shown to provide an extremely facile and automatically temperature-controlled route to the synthesis of glasses. Glass-forming compositions of several traditional and novel glasses were melted in a kitchen microwave oven, typically within 5 min and quenched into glasses. This is only a fraction of the time required in normal glass preparation methods. The rapidity of melting minimizes undesirable features such as loss of components of the glass, variation of oxidation states of metal ions, and oxygen loss leading to reduced products in the glass such as metal particles. This novel procedure of preparation is applicable when at least one of the components of the glass-forming mixture absorbs microwaves.
Field Tests of the Magnetotelluric Method to Detect Gas Hydrates, Mallik, Mackenzie Delta, Canada
NASA Astrophysics Data System (ADS)
Craven, J. A.; Roberts, B.; Bellefleur, G.; Spratt, J.; Wright, F.; Dallimore, S. R.
2008-12-01
The magnetotelluric method is not generally utilized at extreme latitudes, due primarily to difficulties in making the good electrical contact with the ground required to measure the electric field. As such, the magnetotelluric technique has not previously been investigated as a way to directly detect gas hydrates in onshore permafrost environments. We present the results of preliminary field tests at Mallik, Northwest Territories, Canada, that demonstrate that good-quality magnetotelluric data can be obtained in this environment using specialized electrodes and buffer amplifiers similar to those utilized by Wannamaker et al. (2004). This result suggests that subsurface images from larger magnetotelluric surveys will be useful complements to other techniques for detecting, quantifying, and characterizing gas hydrates.
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations that require detailed knowledge of an instrument's spectral characteristics. Also examined are the errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral. In the 8-14 micron wavelength interval, errors are at most on the order of 3 C for the extremes of the earth's temperature and emissivity; for more practical limits, errors are less than 0.5 C.
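The abstract contrasts the authors' blackbody-calibration and graphical approach with the simpler Stefan-Boltzmann approximation. The sketch below illustrates only that simpler graybody correction, not the paper's Planck-weighted method; the emissivity and temperature values are assumptions chosen for illustration.

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def graybody_surface_temperature(t_apparent_K, emissivity, t_sky_K=None):
    """Correct an apparent (blackbody-equivalent) radiometric temperature
    for a graybody surface using the Stefan-Boltzmann approximation.

    If a background (sky) temperature is supplied, the reflected downwelling
    component (1 - emissivity) * sigma * T_sky^4 is subtracted first.
    """
    measured_radiance = SIGMA * t_apparent_K**4
    reflected = (1.0 - emissivity) * SIGMA * t_sky_K**4 if t_sky_K is not None else 0.0
    surface_radiance = (measured_radiance - reflected) / emissivity
    return (surface_radiance / SIGMA) ** 0.25

# Example: apparent temperature 300 K, emissivity 0.95, cold sky at 260 K (hypothetical values)
print(graybody_surface_temperature(300.0, 0.95, 260.0))
```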
Air sampling with solid phase microextraction
NASA Astrophysics Data System (ADS)
Martos, Perry Anthony
There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly and simple to use, yet with detection limits, accuracy and precision equal to or better than standard methods. Solid phase microextraction (SPME) satisfies these conditions. Analyte detection limits, accuracy and precision of analysis with SPME are typically better than with conventional air sampling methods. Yet air sampling with SPME requires no pumps or solvents, is re-usable and extremely simple to use, is completely compatible with current chromatographic equipment, and requires only a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The log of analyte distribution coefficients for PDMS is linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the chromatogram from the analysis of the PDMS air sampler itself yields the calibration parameters used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinylbenzene (PDMS/DVB), onto which o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed from the reaction can be detected with conventional gas chromatographic detectors. Typical grab sampling times were as short as 5 seconds. With 300 seconds of sampling, the formaldehyde detection limit was 2.1 ppbv, better than any other 5-minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material and coffee grounds for formaldehyde and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested in side-by-side comparisons with standard air sampling methods, demonstrating the tremendous utility of SPME as an air sampler.
The importance of range edges for an irruptive species during extreme weather events
Bateman, Brooke L.; Pidgeon, Anna M.; Radeloff, Volker C.; Allstadt, Andrew J.; Akçakaya, H. Resit; Thogmartin, Wayne E.; Vavrus, Stephen J.; Heglund, Patricia J.
2015-01-01
In a changing climate where more frequent extreme weather may be more common, conservation strategies for weather-sensitive species may require consideration of habitat in the edges of species’ ranges, even though non-core areas may be unoccupied in ‘normal’ years. Our results highlight the conservation importance of range edges in providing refuge from extreme events, such as drought, and climate change.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small, local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study assessed the applicability of non-parametric, parametric, and detrended fluctuation analysis (DFA) methods for determining the extreme precipitation threshold (EPT) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and to the selection of a percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which cannot provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, providing a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution. The consistency of the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) of daily precipitation further shows that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
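As an illustration of the non-parametric percentile method discussed above, the following minimal sketch computes a per-station threshold from daily data. The 95th percentile and the 1 mm wet-day cutoff are assumptions for illustration only, not the thresholds used in the study, and the station records are synthetic.

```python
import numpy as np

def percentile_threshold(daily_precip_mm, q=95.0, wet_day_mm=1.0):
    """Non-parametric extreme-precipitation threshold for one station:
    the q-th percentile of wet-day (>= wet_day_mm) precipitation."""
    precip = np.asarray(daily_precip_mm, dtype=float)
    wet_days = precip[precip >= wet_day_mm]
    return np.percentile(wet_days, q)

# Hypothetical 30-year records for two stations show how the threshold varies spatially
rng = np.random.default_rng(0)
for name, scale in [("coastal", 12.0), ("inland", 6.0)]:
    series = rng.gamma(shape=0.8, scale=scale, size=365 * 30)
    print(name, round(percentile_threshold(series, q=95.0), 1), "mm/day")
```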
Tian, He; Zhao, Lianfeng; Wang, Xuefeng; Yeh, Yao-Wen; Yao, Nan; Rand, Barry P; Ren, Tian-Ling
2017-12-26
Extremely low energy consumption neuromorphic computing is required to achieve massively parallel information processing on par with the human brain. To achieve this goal, resistive memories based on materials with ionic transport and extremely low operating current are required. Extremely low operating current allows for low power operation by minimizing the program, erase, and read currents. However, materials currently used in resistive memories, such as defective HfOx, AlOx, TaOx, etc., cannot suppress electronic transport (i.e., leakage current) while allowing good ionic transport. Here, we show that 2D Ruddlesden-Popper phase hybrid lead bromide perovskite single crystals are promising materials for low operating current nanodevice applications because of their mixed electronic and ionic transport and ease of fabrication. Ionic transport in the exfoliated 2D perovskite layer is evident via the migration of bromide ions. Filaments with a diameter of approximately 20 nm are visualized, and resistive memories with extremely low program current down to 10 pA are achieved, a value at least 1 order of magnitude lower than conventional materials. The ionic migration and diffusion as an artificial synapse is realized in the 2D layered perovskites at the pA level, which can enable extremely low energy neuromorphic computing.
Evaluation of extreme temperature events in northern Spain based on process control charts
NASA Astrophysics Data System (ADS)
Villeta, M.; Valencia, J. L.; Saá, A.; Tarquis, A. M.
2018-02-01
Extreme climate events have recently attracted the attention of a growing number of researchers because these events impose a large cost on agriculture and associated insurance planning. This study focuses on extreme temperature events and proposes a new method for their evaluation based on statistical process control tools, which are unusual in climate studies. A series of minimum and maximum daily temperatures for 12 geographical areas of a Spanish region between 1931 and 2009 were evaluated by applying statistical process control charts to statistically test whether evidence existed for an increase or a decrease of extreme temperature events. Specification limits were determined for each geographical area and used to define four types of extreme anomalies: lower and upper extremes for the minimum and maximum anomalies. A new binomial Markov extended process that considers the autocorrelation between extreme temperature events was generated for each geographical area and extreme anomaly type to establish the attribute control charts for the annual fraction of extreme days and to monitor the occurrence of annual extreme days. This method was used to assess the significance of changes and trends of extreme temperature events in the analysed region. The results demonstrate the effectiveness of an attribute control chart for evaluating extreme temperature events. For example, the evaluation of extreme maximum temperature events using the proposed statistical process control charts was consistent with the evidence of an increase in maximum temperatures during the last decades of the last century.
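To make the control-chart idea concrete, the sketch below builds a standard Shewhart attribute (p) chart for the annual fraction of extreme days. It assumes independence between days, whereas the paper's binomial Markov-extended process explicitly models autocorrelation; the yearly counts are synthetic and purely illustrative.

```python
import numpy as np

def p_chart_limits(extreme_days, days_per_year=365):
    """Attribute (p) control chart limits for the annual fraction of extreme
    days, ignoring the autocorrelation the paper's extended model accounts for."""
    p = np.asarray(extreme_days, dtype=float) / days_per_year
    p_bar = p.mean()
    sigma = np.sqrt(p_bar * (1 - p_bar) / days_per_year)
    ucl = p_bar + 3 * sigma
    lcl = max(p_bar - 3 * sigma, 0.0)
    return p_bar, lcl, ucl, p

# Hypothetical counts of extreme-maximum days per year for one area, 1931-2009
rng = np.random.default_rng(7)
counts = rng.binomial(365, 0.03, size=79)
p_bar, lcl, ucl, p = p_chart_limits(counts)
print(f"centre = {p_bar:.3f}, limits = [{lcl:.3f}, {ucl:.3f}]")
print("years signalling an excess of extreme days:", int((p > ucl).sum()))
```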
Extreme values in the Chinese and American stock markets based on detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia
2015-10-01
This paper presents a comparative analysis of extreme values in the Chinese and American stock markets based on the detrended fluctuation analysis (DFA) algorithm, using daily data of the Shanghai composite index and the Dow Jones Industrial Average. The empirical results indicate that the multifractal detrended fluctuation analysis (MF-DFA) method is more objective than the traditional percentile method. The range of extreme values of the Dow Jones Industrial Average is smaller than that of the Shanghai composite index, and the extreme values of the Dow Jones Industrial Average show stronger temporal clustering. The extreme values of both the Chinese and American stock markets are concentrated in 2008, which is consistent with the financial crisis of 2008. Moreover, we investigate whether extreme events affect the cross-correlation between the Chinese and American stock markets using the multifractal detrended cross-correlation analysis algorithm. The results show that extreme events have no bearing on the cross-correlation between the Chinese and American stock markets.
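For readers unfamiliar with the underlying algorithm, the sketch below implements plain (monofractal) DFA: integrate the demeaned series, detrend it box by box, and read the scaling exponent from the log-log slope of the fluctuation function. The paper itself uses the multifractal extension (MF-DFA), which generalizes this with q-th order moments; the input series here is synthetic.

```python
import numpy as np

def dfa(x, scales):
    """Detrended fluctuation analysis of a 1-D series.
    Returns the fluctuation function F(s) for each box size in `scales`."""
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())          # integrated, mean-removed series
    fluctuations = []
    for s in scales:
        n_boxes = len(profile) // s
        rms = []
        for i in range(n_boxes):
            segment = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, segment, 1), t)  # local linear trend
            rms.append(np.sqrt(np.mean((segment - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    return np.array(fluctuations)

# Scaling exponent alpha from the log-log slope (synthetic, uncorrelated returns)
rng = np.random.default_rng(1)
returns = rng.normal(size=5000)
scales = np.array([16, 32, 64, 128, 256])
F = dfa(returns, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print("DFA exponent:", round(alpha, 2))   # ~0.5 expected for uncorrelated noise
```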
NASA Technical Reports Server (NTRS)
Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)
2004-01-01
A method for designing and assembling a high performance catalyst bed gas generator for use in decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system, and incorporates a robust, high temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.
Formal design specification of a Processor Interface Unit
NASA Technical Reports Server (NTRS)
Fura, David A.; Windley, Phillip J.; Cohen, Gerald C.
1992-01-01
This report describes work to formally specify the requirements and design of a processor interface unit (PIU), a single-chip subsystem providing memory-interface, bus-interface, and additional support services for a commercial microprocessor within a fault-tolerant computer system. This system, the Fault-Tolerant Embedded Processor (FTEP), is targeted towards applications in avionics and space requiring extremely high levels of mission reliability, extended maintenance-free operation, or both. The need for high-quality design assurance in such applications is an undisputed fact, given the disastrous consequences that even a single design flaw can produce. Thus, the further development and application of formal methods to fault-tolerant systems is of critical importance as these systems see increasing use in modern society.
Ulnar nerve injury associated with trampoline injuries.
Maclin, Melvin M; Novak, Christine B; Mackinnon, Susan E
2004-08-01
This study reports three cases of ulnar neuropathy after trampoline injuries in children. A chart review was performed on children who sustained an ulnar nerve injury from a trampoline accident. In all cases, surgical intervention was required. Injuries included upper-extremity fractures in two cases and an upper-extremity laceration in one case. All cases required surgical exploration with internal neurolysis and ulnar nerve transposition. Nerve grafts were used in two cases and an additional nerve transfer was used in one case. All patients had return of intrinsic hand function and sensation after surgery. Children should be followed for evolution of ulnar nerve neuropathy after upper-extremity injury with consideration for electrical studies and surgical exploration if there is no improvement after 3 months.
Euthanasia of Cattle: Practical Considerations and Application.
Shearer, Jan Keith
2018-04-17
Acceptable methods for the euthanasia of cattle include overdose of an anesthetic, gunshot and captive bolt. The use of anesthetics for euthanasia is costly and complicates carcass disposal. These issues can be avoided by use of a physical method such as gunshot or captive bolt; however, each requires that certain conditions be met to assure an immediate loss of consciousness and death. For example, the caliber of firearm and type of bullet are important considerations when gunshot is used. When captive bolt is used, a penetrating captive bolt loaded with the appropriate powder charge, accompanied by a follow-up (adjunctive) step to assure death, is required. The success of physical methods also requires careful selection of the anatomic site for entry of a "free bullet" or "bolt" in the case of penetrating captive bolt. Disease eradication plans for animal health emergencies necessitate methods of euthanasia that will facilitate rapid and efficient depopulation of animals while preserving their welfare to the greatest extent possible. A portable pneumatic captive bolt device has been developed and validated as effective for use in mass depopulation scenarios. Finally, while most tend to focus on the technical aspects of euthanasia, it is extremely important that no one forget the human cost for those who may be required to perform the task of euthanasia on a regular basis. Symptoms including depression, grief, sleeplessness and destructive behaviors including alcoholism and drug abuse are not uncommon among those who participate in the euthanasia of animals.
Multibody dynamics model building using graphical interfaces
NASA Technical Reports Server (NTRS)
Macala, Glenn A.
1989-01-01
In recent years, the extremely laborious task of manually deriving equations of motion for the simulation of multibody spacecraft dynamics has largely been eliminated. Instead, the dynamicist now works with commonly available general-purpose dynamics simulation programs which generate the equations of motion either explicitly or implicitly via computer code. The user interface to these programs has predominantly been via input data files, each with its own required format and peculiarities, causing errors and frustration during program setup. Recent progress in a more natural method of data input for dynamics programs, the graphical interface, is described.
Rapid identification of group JK and other corynebacteria with the Minitek system.
Slifkin, M; Gil, G M; Engwall, C
1986-01-01
Forty primary clinical isolates and 50 stock cultures of corynebacteria and coryneform bacteria were tested with the Minitek system (BBL Microbiology Systems, Cockeysville, Md.). The Minitek correctly identified all of these organisms, including JK group isolates, within 12 to 18 h of incubation. The method does not require serum supplements for testing carbohydrate utilization by the bacteria. The Minitek system is an extremely simple and rapid way to identify the JK group, as well as many other corynebacteria, by established identification schemata for these bacteria. PMID:3091632
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
A kinematic method for footstrike pattern detection in barefoot and shod runners.
Altman, Allison R; Davis, Irene S
2012-02-01
Footstrike patterns during running can be classified discretely into a rearfoot strike, midfoot strike and forefoot strike by visual observation. However, the footstrike pattern can also be classified on a continuum, ranging from 0% to 100% (extreme rearfoot to extreme forefoot) using the strike index, a measure requiring force plate data. When force data are not available, an alternative method to quantify the strike pattern must be used. The purpose of this paper was to quantify the continuum of foot strike patterns using an easily attainable kinematic measure, and compare it to the strike index measure. Force and kinematic data from twenty subjects were collected as they ran across an embedded force plate. Strike index and the footstrike angle were identified for the four running conditions of rearfoot strike, midfoot strike and forefoot strike, as well as barefoot. The footstrike angle was calculated as the angle of the foot with respect to the ground in the sagittal plane. Results indicated that the footstrike angle was significantly correlated with strike index. The linear regression model suggested that strike index can be accurately estimated, in both barefoot and shod conditions, in the absence of force data. Copyright © 2011 Elsevier B.V. All rights reserved.
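The relationship described above, estimating strike index from the sagittal-plane footstrike angle, amounts to a simple linear regression. The sketch below shows that calculation on invented paired observations; the coefficients and values are assumptions for illustration and do not reproduce the paper's regression model.

```python
import numpy as np

# Hypothetical paired observations: footstrike angle (degrees, positive = dorsiflexed
# rearfoot landing) and strike index (% of foot length from the heel).
footstrike_angle = np.array([25.0, 18.0, 12.0, 6.0, 1.0, -5.0, -10.0])
strike_index = np.array([5.0, 15.0, 28.0, 40.0, 55.0, 70.0, 85.0])

# Ordinary least-squares fit: strike_index ~ b0 + b1 * footstrike_angle
b1, b0 = np.polyfit(footstrike_angle, strike_index, 1)
r = np.corrcoef(footstrike_angle, strike_index)[0, 1]
print(f"SI = {b0:.1f} + {b1:.1f} * angle,  r = {r:.2f}")

# Predict strike index for a runner landing at 8 degrees without force-plate data
print("Predicted SI:", round(b0 + b1 * 8.0, 1), "%")
```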
Extracting archaeal populations from iron oxidizing systems
NASA Astrophysics Data System (ADS)
Whitmore, L. M.; Hutchison, J.; Chrisler, W.; Jay, Z.; Moran, J.; Inskeep, W.; Kreuzer, H.
2013-12-01
Unique environments in Yellowstone National Park offer exceptional conditions for studying microorganisms in extreme and constrained systems. However, samples from some extreme systems often contain inorganic components that pose complications during microbial and molecular analysis. Several archaeal species are found in acidic, geothermal ferric-oxyhydroxide mats; these species have been shown to adhere to mineral surfaces in flocculated colonies. For optimal microbial analysis (microscopy, flow cytometry, genomic extractions, proteomic analysis, stable isotope analysis, and others), improved techniques are needed to better facilitate cell detachment and separation from mineral surfaces. These techniques must preserve cell structure while simultaneously minimizing organic carryover to downstream analyses. Several methods have been developed for removing sediments from mixed prokaryotic populations, including ultra-centrifugation, Nycodenz gradients, sucrose cushions, and cell straining. In this study we conduct a comparative analysis of mechanisms used to detach archaeal cell populations from the mineral interface. Specifically, we evaluated mechanical and chemical approaches for cell separation and homogenization. Methods were compared using confocal microscopy, flow cytometry analyses, and real-time PCR detection. The methodology and approaches identified will be used to optimize biomass collection from environmental specimens or isolates grown with solid phases.
Practical considerations in the development of hemoglobin-based oxygen therapeutics.
Kim, Hae Won; Estep, Timothy N
2012-09-01
The development of hemoglobin based oxygen therapeutics (HBOCs) requires consideration of a number of factors. While the enabling technology derives from fundamental research on protein biochemistry and biological interactions, translation of these research insights into usable medical therapeutics demands the application of considerable technical expertise and consideration and reconciliation of a myriad of manufacturing, medical, and regulatory requirements. The HBOC development challenge is further exacerbated by the extremely high intravenous doses required for many of the indications contemplated for these products, which in turn implies an extremely high level of purity is required. This communication discusses several of the important product configuration and developmental considerations that impact the translation of fundamental research discoveries on HBOCs into usable medical therapeutics.
Paik, Young-Rim; Lee, Jeong-Hoon; Lee, Doo-Ho; Park, Hee-Su; Oh, Dong-Hwan
2017-12-01
[Purpose] This study investigated the effects of mirror therapy and neuromuscular electrical stimulation on upper extremity function in stroke patients. [Subjects and Methods] Eight stroke patients were recruited. All patients were treated with mirror therapy and neuromuscular electrical stimulation five times per week for 4 weeks. Upper limb function was evaluated using the upper-extremity portion of the Fugl-Meyer Assessment. [Results] The Fugl-Meyer Assessment showed significant improvement from before to after the intervention. [Conclusion] In this study, mirror therapy combined with neuromuscular electrical stimulation was an effective method for upper extremity function recovery in stroke patients.
OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES
DOE Office of Scientific and Technical Information (OSTI.GOV)
S. BOETTCHER; A. PERCUS
2000-08-01
We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by "self-organized criticality," a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire "gene pool" of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called "avalanches," ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Such phase transitions are found in the parameter space of most optimization problems and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, and hence valuable in elucidating the origin of computational complexity.
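A minimal sketch of the idea, assuming a random Ising spin glass as the combinatorial problem: each variable gets a local "fitness", one of the worst is selected with a rank-biased probability (the single adjustable parameter tau), and it is replaced unconditionally. The problem instance, tau value and step count below are assumptions for illustration, not the authors' benchmarks.

```python
import numpy as np

def tau_eo_ising(J, tau=1.4, steps=20000, seed=0):
    """tau-EO sketch for minimizing the Ising spin-glass energy
    H = -sum_{i<j} J_ij s_i s_j. Each step, spins are ranked by local fitness
    and one is picked with probability ~ rank^(-tau), then flipped."""
    rng = np.random.default_rng(seed)
    n = J.shape[0]
    s = rng.choice([-1, 1], size=n)
    best_s, best_E = s.copy(), -0.5 * s @ J @ s
    ranks = np.arange(1, n + 1)
    p = ranks ** (-tau)
    p /= p.sum()
    for _ in range(steps):
        fitness = s * (J @ s)                # low (negative) = "unhappy" spin
        order = np.argsort(fitness)          # worst first
        k = order[rng.choice(n, p=p)]        # rank-biased pick of an unhappy spin
        s[k] = -s[k]                         # unconditional replacement (no accept test)
        E = -0.5 * s @ J @ s
        if E < best_E:
            best_s, best_E = s.copy(), E
    return best_s, best_E

rng = np.random.default_rng(2)
n = 64
J = rng.normal(size=(n, n)); J = (J + J.T) / 2; np.fill_diagonal(J, 0)
print("best energy found:", round(tau_eo_ising(J)[1], 2))
```

Note the contrast with simulated annealing: there is no temperature schedule and no acceptance criterion; large avalanches arise simply from always perturbing a poorly fit element.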
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-06-19
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of single-hidden-layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized simultaneously by QPSO before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results demonstrate that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
Slice sampling technique in Bayesian extreme of gold price modelling
NASA Astrophysics Data System (ADS)
Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham
2013-09-01
In this paper, a simulation study of Bayesian extreme values using Markov chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates, with lower RMSE, than the other methods. Finally, we successfully employed this procedure to estimate the parameters of extreme Malaysian gold prices from 2000 to 2011.
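To illustrate the sampler itself, the sketch below implements a univariate slice sampler (stepping-out and shrinkage) and applies it to the posterior of a Gumbel location parameter under simplifying assumptions: a flat prior, scale fixed at 1, and synthetic data. This is not the authors' gold-price model, only a demonstration of the algorithm they employ.

```python
import numpy as np

def slice_sample(logpdf, x0, n_samples, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage (Neal, 2003)."""
    rng = rng or np.random.default_rng()
    samples = np.empty(n_samples)
    x = x0
    for i in range(n_samples):
        log_y = logpdf(x) - rng.exponential(1.0)     # auxiliary slice level (log scale)
        left = x - w * rng.uniform()
        right = left + w
        while logpdf(left) > log_y:                  # step out to cover the slice
            left -= w
        while logpdf(right) > log_y:
            right += w
        while True:                                  # shrink until a point is accepted
            x_new = rng.uniform(left, right)
            if logpdf(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples[i] = x
    return samples

# Posterior of the Gumbel location mu (scale = 1, flat prior), synthetic data
rng = np.random.default_rng(3)
data = rng.gumbel(loc=5.0, scale=1.0, size=200)

def log_post(mu):
    z = data - mu
    return np.sum(-z - np.exp(-z))                   # Gumbel log-likelihood with scale 1

draws = slice_sample(log_post, x0=data.mean(), n_samples=2000, rng=rng)
print("posterior mean of mu:", round(draws.mean(), 2))
```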
NASA Astrophysics Data System (ADS)
Nunes, Ana
2015-04-01
Extreme meteorological events played an important role in catastrophic occurrences observed in the past over densely populated areas in Brazil. This motivated the proposal of an integrated system for the analysis and assessment of vulnerability and risk caused by extreme events in urban areas that are particularly affected by complex topography. That requires a multi-scale approach, centered on a regional modeling system consisting of a regional (spectral) climate model coupled to a land-surface scheme. This regional modeling system employs a boundary forcing method based on scale-selective bias correction and assimilation of satellite-based precipitation estimates. Scale-selective bias correction is a method similar to the spectral nudging technique for dynamical downscaling that allows internal modes to develop in agreement with the large-scale features, while the precipitation assimilation procedure improves the modeled deep convection and drives the land-surface scheme variables. Here, the scale-selective bias correction acts only on the rotational part of the wind field, letting the precipitation assimilation procedure correct moisture convergence, in order to reconstruct South American current climate within the South American Hydroclimate Reconstruction Project. The hydroclimate reconstruction outputs might eventually provide improved initial conditions for high-resolution numerical integrations in metropolitan regions, generating more reliable short-term precipitation predictions, and providing accurate hydrometeorological variables to higher-resolution geomorphological models. Better representation of deep convection at intermediate scales is relevant when the resolution of the regional modeling system is refined to meet the scale of geomorphological dynamic models of stability and mass movement, assisting in the assessment of risk areas and the estimation of terrain stability over complex topography. The reconstruction of past extreme events also supports the development of a decision-making system for natural and social disasters, reducing their impacts. Numerical experiments using this regional modeling system successfully modeled severe weather events in Brazil. Comparisons with the NCEP Climate Forecast System Reanalysis outputs were made at resolutions of about 40 and 25 km of the regional climate model.
To the fringe and back: Violent extremism and the psychology of deviance.
Kruglanski, Arie W; Jasko, Katarzyna; Chernikova, Marina; Dugas, Michelle; Webber, David
2017-04-01
We outline a general psychological theory of extremism and apply it to the special case of violent extremism (VE). Extremism is defined as motivated deviance from general behavioral norms and is assumed to stem from a shift from a balanced satisfaction of basic human needs afforded by moderation to a motivational imbalance wherein a given need dominates the others. Because motivational imbalance is difficult to sustain, only few individuals do, rendering extreme behavior relatively rare, hence deviant. Thus, individual dynamics translate into social patterns wherein majorities of individuals practice moderation, whereas extremism is the province of the few. Both extremism and moderation require the ability to successfully carry out the activities that these demand. Ability is partially determined by the activities' difficulty, controllable in part by external agents who promote or oppose extremism. Application of this general framework to VE identifies the specific need that animates it and offers broad guidelines for addressing this pernicious phenomenon. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Data inversion algorithm development for the halogen occultation experiment
NASA Technical Reports Server (NTRS)
Gordley, Larry L.; Mlynczak, Martin G.
1986-01-01
The successful retrieval of atmospheric parameters from radiometric measurements requires not only the ability to do ideal radiometric calculations but also a detailed understanding of instrument characteristics. Therefore, a considerable amount of time was spent on instrument characterization in the form of test data analysis and mathematical formulation. Analyses of solar-to-reference interference (electrical cross-talk), detector nonuniformity, instrument balance error, electronic filter time constants and noise character were conducted. A second area of effort was the development of techniques for the ideal radiometric calculations required for Halogen Occultation Experiment (HALOE) data reduction. The computer code for these calculations must be extremely complex and fast. A scheme for meeting these requirements was defined, and the algorithms needed for implementation are currently under development. A third area of work included consulting on the implementation of the Emissivity Growth Approximation (EGA) method of absorption calculation into a HALOE broadband radiometer channel retrieval algorithm.
NASA Astrophysics Data System (ADS)
Fekete, B. M.; Afshari Tork, S.; Vorosmarty, C. J.
2015-12-01
Characterizing hydrological extreme events and assessing their societal impacts is a perpetual challenge for hydrologists. Climate models predict that the anticipated temperature rise will lead to an intensification of the hydrological cycle and a corresponding increase in the recurrence and severity of extreme events. The societal impacts of hydrological extremes are interlinked with anthropogenic activities; therefore, damages to manmade infrastructure are rarely a good measure of an extreme event's magnitude. Extreme events are rare by definition, so detecting change in their distributions requires long-term observational records. Currently, only in-situ monitoring time series have the temporal extent necessary for assessing the recurrence probabilities of extreme events, but they frequently lack spatial coverage. Satellite remote sensing is often advocated to provide the required spatial coverage, but satellites have to compromise between spatial and temporal resolutions. Furthermore, the retrieval algorithms are often as complex as comparable hydrological models, with a similar degree of uncertainty in their parameterization and in the validity of the final data products. In addition, anticipated changes over time in the recurrence frequencies of extreme events invalidate the stationarity assumption, which is the basis for using past observations to predict the probabilities of future extreme events. Probably the best approach to more robust prediction of extreme events is the integration of the available data (in-situ and remote sensing) in comprehensive data assimilation frameworks built on top of adequate hydrological modeling platforms. Our presentation will provide an overview of the current state of hydrological models in support of data assimilation and of viable pathways to integrate in-situ and remote sensing observations for flood prediction. We will demonstrate the use of socio-economic data in combination with hydrological data assimilation to assess resiliency to extreme flood events.
MEMS Micro-Valve for Space Applications
NASA Technical Reports Server (NTRS)
Chakraborty, I.; Tang, W. C.; Bame, D. P.; Tang, T. K.
1998-01-01
We report on the development of a Micro-ElectroMechanical Systems (MEMS) valve that is designed to meet the rigorous performance requirements for a variety of space applications, such as micropropulsion, in-situ chemical analysis of other planets, or micro-fluidics experiments in micro-gravity. These systems often require very small yet reliable silicon valves with extremely low leak rates and long shelf lives. Also, they must survive the perils of space travel, which include unstoppable radiation, monumental shock and vibration forces, as well as extreme variations in temperature. Currently, no commercial MEMS valve meets these requirements. We at JPL are developing a piezoelectric MEMS valve that attempts to address the unique problem of space. We begin with proven configurations that may seem familiar. However, we have implemented some major design innovations that should produce a superior valve. The JPL micro-valve is expected to have an extremely low leak rate, limited susceptibility to particulates, vibration or radiation, as well as a wide operational temperature range.
Ege, Tolga; Unlu, Aytekin; Tas, Huseyin; Bek, Dogan; Turkan, Selim; Cetinkaya, Aytac
2015-01-01
Background: The decision between limb salvage and amputation is generally aided by several trauma scoring systems, such as the mangled extremity severity score (MESS). However, the reliability of these injury scores in the setting of open fractures due to explosives and missiles is challenging. Mortality and morbidity from extremity trauma due to firearms are generally associated with delay in revascularization, injury mechanism, anatomy of the injured site, associated injuries, age and environmental circumstances. The purpose of this retrospective study was to evaluate the extent of extremity injuries due to ballistic missiles and to assess the reliability of the mangled extremity severity score (MESS) in both upper and lower extremities. Materials and Methods: Between 2004 and 2014, 139 Gustilo-Anderson Type III open fractures of the upper and lower extremities were enrolled in the study. Data on patient age, firearm type, transport time from the field to the hospital (and the transport method), injury severity scores, MESS scores, fracture types, amputation levels, bone fixation methods, and postoperative infections and complications were retrieved from the two level-2 trauma centers' databases. Sensitivity, specificity, and positive and negative predictive values of the MESS were calculated to determine its ability to indicate amputation in the mangled limb. Results: Amputation was performed in 39 extremities and limb salvage attempted in 100 extremities. The mean followup time was 14.6 months (range 6–32 months). In the amputated group, the mean MESS scores for the upper and lower extremity were 8.8 (range 6–11) and 9.24 (range 6–11), respectively. In the limb salvage group, the mean MESS scores for the upper and lower extremities were 5.29 (range 4–7) and 5.19 (range 3–8), respectively. The sensitivity of the MESS in upper and lower extremities was 80% and 79.4%, and the positive predictive values were 55.55% and 83.3%, respectively. The specificity of the MESS for upper and lower extremities was 84% and 86.6%; the negative predictive values were 95.45% and 90.2%, respectively. Conclusion: The MESS is not predictive in combat-related extremity injuries, especially for scores between 6 and 8. Limb ischemia and the presence or absence of shock can be used in initial decision-making for amputation. PMID:26806974
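The diagnostic metrics reported above follow directly from a 2x2 table of predicted versus actual amputations. The short sketch below shows that arithmetic; the counts are hypothetical and are not the paper's raw data, which are not reproduced here.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from 2x2 counts, where a
    'positive' MESS (at or above the chosen threshold) predicts amputation."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical counts only, chosen to be consistent with a 39 amputation / 100 salvage cohort
print(diagnostic_metrics(tp=31, fp=15, fn=8, tn=85))
```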
Estimating extreme losses for the Florida Public Hurricane Model—part II
NASA Astrophysics Data System (ADS)
Gulati, Sneh; George, Florence; Hamid, Shahid
2018-02-01
Rising global temperatures are leading to an increase in the number of extreme events and losses (http://www.epa.gov/climatechange/science/indicators/). Accurate estimation of these extreme losses is critical to insurance companies intending to protect themselves against them. In a previous paper, Gulati et al. (2014) discussed probable maximum loss (PML) estimation for the Florida Public Hurricane Loss Model (FPHLM) using parametric and nonparametric methods. In this paper, we investigate the use of semi-parametric methods to do the same. Detailed analysis of the data shows that the annual losses from the FPHLM do not tend to be very heavy tailed, and therefore neither the popular Hill's method nor the moment estimator works well. However, Pickands' estimator with a threshold around the 84th percentile provides a good fit for the extreme quantiles of the losses.
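For readers unfamiliar with the semi-parametric estimator named above, the sketch below computes the classical Pickands (1975) estimate of the extreme value index from upper order statistics, evaluated at a few threshold choices. The loss sample is synthetic (a Pareto draw with true index 0.25); neither the data nor the specific thresholds correspond to the FPHLM analysis.

```python
import numpy as np

def pickands_xi(sample, k):
    """Pickands (1975) estimator of the extreme value index xi, using the upper
    order statistics X_(n-k+1), X_(n-2k+1), X_(n-4k+1); requires 4k <= n."""
    x = np.sort(np.asarray(sample, dtype=float))      # ascending order statistics
    n = len(x)
    if 4 * k > n:
        raise ValueError("need 4k <= n")
    a, b, c = x[n - k], x[n - 2 * k], x[n - 4 * k]    # k-th, 2k-th, 4k-th largest values
    return np.log((a - b) / (b - c)) / np.log(2.0)

# Synthetic annual losses from a moderately heavy-tailed law (true xi = 0.25)
rng = np.random.default_rng(4)
losses = rng.pareto(a=4.0, size=2000)
for k in (25, 50, 100):
    print(f"k = {k:3d}   xi_hat = {pickands_xi(losses, k):+.2f}")
```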
Badtieva, V A; Kniazeva, T A; Apkhanova, T V
2010-01-01
The present review of the literature data highlights modern approaches to and major trends in diagnostics and conservative treatment of lymphedema of the lower extremities based on the generalized world experience. Patients with lymphedema of the lower extremities comprise a "difficult to manage" group because the disease is characterized by steady progression and marked refractoriness to various conservative therapeutic modalities creating problems for both the patient and the attending physician. Modern methods for the diagnosis of lymphedema are discussed with special reference to noninvasive and minimally invasive techniques (such as lymphoscintiography, computed tomography, MRT, laser Doppler flowmetry, etc.). During the last 20 years, combined conservative therapy has been considered as the method of choice for the management of different stages and forms of lymphedema of the lower extremities in foreign clinics. The basis of conservative therapy is constituted by manual lymph drainage (MLD), compression bandages using short-stretch materials, physical exercises, and skin care (using the method of M. Foldi). Also reviewed are the main physiobalneotherapeutic methods traditionally widely applied for the treatment of lymphedema of the lower extremities in this country. Original methods for the same purpose developed by the authors are described including modifications of cryotherapy, pulsed matrix laserotherapy, hydro- and balneotherapy. Mechanisms of their therapeutic action on the main pathogenetic factors responsible for the development of lymphedema (with special reference to lymph transport and formation) are discussed. The principles of combined application of physiotherapeutic methods for the rehabilitative treatment of patients presenting with lymphedema of the lower extremities are briefly substantiated. Special emphasis is laid on their influence on major components of the pathological process.
Fighting Illiteracy in the Arab World
ERIC Educational Resources Information Center
Hammud, Muwafaq Abu; Jarrar, Amani G.
2017-01-01
Fighting illiteracy in the Arab world is becoming an urgent necessity, particularly in the face of poverty, ignorance and extremism, which impede the required economic, social, political and cultural development processes. Extremism, violence and terrorism in the Arab world can only be eliminated by spreading knowledge and fighting illiteracy. The study…
Science-Driven Approach to Disaster Risk and Crisis Management
NASA Astrophysics Data System (ADS)
Ismail-Zadeh, A.
2014-12-01
Disasters due to natural extreme events continue to grow in number and intensity. Disaster risk and crisis management requires long-term planning, and to undertake that planning, a science-driven approach is needed to understand and assess disaster risks and to help in impact assessment and in recovery processes after a disaster. Science is used in assessments and rapid modeling of the disaster impact, in forecasting triggered hazards and risk (e.g., a tsunami or a landslide after a large earthquake), in contacts with and medical treatment of the affected population, and in other actions. At the response stage, science helps to routinely analyze the disaster that occurred (e.g., the physical processes that led to the extreme event, hidden vulnerabilities, etc.). At the recovery stage, natural scientists improve the existing regional hazard assessments, and engineers apply new science to produce new materials and technologies for safer houses and infrastructure. At the disaster risk mitigation stage, new scientific methods and approaches are developed to study natural extreme events, the vulnerability of society is periodically investigated, measures for increasing the resilience of society to extremes are developed, and existing disaster management regulations are improved. At the preparedness stage, integrated research on disaster risks should be developed to understand the roots of potential disasters. Enhanced forecasting and early warning systems should be developed to reduce predictive uncertainties, and comprehensive disaster risk assessment should be undertaken at local, regional, national and global levels. Science education should be improved by introducing a trans-disciplinary approach to disaster risks. Science can help society by improving awareness about extreme events, enhancing risk communication with policy makers, the media and society, and assisting disaster risk management authorities in the organization of local and regional training and exercises.
Extreme habitats as refuge from parasite infections? Evidence from an extremophile fish
NASA Astrophysics Data System (ADS)
Tobler, Michael; Schlupp, Ingo; García de León, Francisco J.; Glaubrecht, Matthias; Plath, Martin
2007-05-01
Living in extreme habitats typically requires costly adaptations of any organism tolerating these conditions, but very little is known about potential benefits that trade off these costs. We suggest that extreme habitats may function as refuge from parasite infections, since parasites can become locally extinct either directly, through selection by an extreme environmental parameter on free-living parasite stages, or indirectly, through selection on other host species involved in its life cycle. We tested this hypothesis in a small freshwater fish, the Atlantic molly ( Poecilia mexicana) that inhabits normal freshwaters as well as extreme habitats containing high concentrations of toxic hydrogen sulfide. Populations from such extreme habitats are significantly less parasitized by the trematode Uvulifer sp. than a population from a non-sulfidic habitat. We suggest that reduced parasite prevalence may be a benefit of living in sulfidic habitats.
Mozheiko, E Yu; Prokopenko, S V; Alekseevich, G V
To justify the choice of methods for restoring fine hand function depending on the severity of motor impairment in the upper extremity. Eighty-eight patients were randomized into 3 groups: 1) the mCIMT group, 2) the 'touch glove' group, 3) the control group. For assessment of motor function of the upper extremity, the Fugl-Meyer Assessment Upper Extremity, the Nine-Hole Peg Test and the Motor Assessment Scale were used. Assessment of the non-use phenomenon was carried out with the Motor Activity Log scale. At the stage of severe motor dysfunction, there was recovery of the proximal parts of the arm in all groups, and no method was superior to the others. In cases of moderate motor deficiency of the upper extremity, the most effective method was the one based on the principle of biological feedback, the 'touch glove'. In the group with mild motor dysfunction, the best recovery was achieved in the mCIMT group.
Advanced Flip Chips in Extreme Temperature Environments
NASA Technical Reports Server (NTRS)
Ramesham, Rajeshuni
2010-01-01
The use of underfill materials is necessary with flip-chip interconnect technology to redistribute stresses due to mismatched coefficients of thermal expansion (CTEs) between dissimilar materials in the overall assembly. Underfills are formulated using organic polymers and possibly inorganic filler materials. There are a few ways to apply underfills with flip-chip technology. Traditional capillary-flow underfill materials now possess high flow speed and reduced time to cure, but they still require additional processing steps beyond the typical surface-mount technology (SMT) assembly process. Studies were conducted using underfills in a temperature range of -190 to 85 C, which resulted in an increase of reliability by one to two orders of magnitude. Thermal shock of the flip-chip test articles was designed to induce failures at the interconnect sites (-40 to 100 C). The study of the reliability of flip chips using underfills in the extreme temperature region is of significant value for space applications, and this technology is considered an enabling technology for future space missions. Flip-chip interconnect technology is an advanced electrical interconnection approach in which the silicon die or chip is electrically connected, face down, to the substrate by reflowing solder bumps on area-array metallized terminals on the die to matching footprints of solder-wettable pads on the chosen substrate. This advanced flip-chip interconnect technology significantly improves the performance of high-speed systems and offers productivity enhancement over manual wire bonding, self-alignment during die joining, low lead inductances, and a reduced need for attachment of precious metals. The use of commercially developed no-flow fluxing underfills provides a means of reducing the processing steps employed in traditional capillary-flow methods to enhance SMT compatibility. Reliability of flip chips may be significantly increased by matching or tailoring the CTEs of the substrate material, the silicon die or chip, and the underfill materials. Advanced packaging interconnect technology such as flip-chip interconnect test boards has been subjected to various extreme temperature ranges that cover military specifications and extreme Mars and asteroid environments. The eventual goal of each process step, and of the entire process, is to produce components with 100 percent interconnection and to satisfy the reliability requirements. Underfill materials, in general, may need to meet demanding end-use requirements such as low warpage, low stress, fine pitch, high reliability, and high adhesion.
Challenges and requirements of mask data processing for multi-beam mask writer
NASA Astrophysics Data System (ADS)
Choi, Jin; Lee, Dong Hyun; Park, Sinjeung; Lee, SookHyun; Tamamushi, Shuichi; Shin, In Kyun; Jeon, Chan Uk
2015-07-01
To overcome the resolution and throughput limits of current mask writers for advanced lithography technologies, e-beam writer platforms have evolved through developments in writer hardware and software. In particular, aggressive optical proximity correction (OPC) for the unprecedented extension of optical lithography, together with the need for low-sensitivity resists for high resolution, exposes the limits of the variable-shaped-beam writer widely used in mass production. The multi-beam mask writer is an attractive candidate for photomask writing for sub-10nm devices because of its high speed and large degree of freedom, which enable high dose and dose modulation for each pixel. However, the higher dose and the almost unlimited appetite for dose modulation challenge mask data processing (MDP) in terms of extreme data volume and correction method. Here, we discuss the requirements of mask data processing for the multi-beam mask writer and present new challenges in data format, data flow, and correction method for user and supplier MDP tools.
NASA Astrophysics Data System (ADS)
Hamm, L. L.; Vanbrunt, V.
1982-08-01
The numerical solution of the ordinary differential equation that describes the high-pressure vapor-liquid equilibria of a binary system, where one of the components is supercritical and exists as a noncondensable gas in the pure state, is considered with emphasis on the implicit Runge-Kutta and orthogonal collocation methods. Preliminary results indicate that the implicit Runge-Kutta method is superior. Due to the extreme nonlinearity of thermodynamic properties in the region near the critical locus, an extended cubic spline fitting technique is devised for correlating the P-x data. The least-squares criterion is employed in smoothing the experimental data. The technique could easily be applied to any thermodynamic data by changing the endpoint requirements. The volumetric behavior of the systems must be given or predicted in order to perform thermodynamic consistency tests. A general procedure is developed for predicting the required volumetric behavior, and some indication of the expected limit of accuracy is given.
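To illustrate the least-squares spline correlation of P-x data in a generic way, the sketch below fits an ordinary smoothing cubic spline with SciPy and evaluates the slope dP/dx1 needed in consistency tests. The data are hypothetical, and this is not the authors' "extended" spline technique with special endpoint requirements, only the standard approach it builds on.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Hypothetical isothermal P-x data for a binary system approaching the critical region.
x1 = np.linspace(0.0, 0.6, 13)                        # liquid mole fraction of the light component
P = 2.0 + 18.0 * x1 + 40.0 * x1**2                    # "true" pressure, arbitrary units
P_noisy = P + np.random.default_rng(8).normal(scale=0.15, size=x1.size)

# Least-squares smoothing cubic spline; the parameter s trades fidelity for smoothness.
spline = UnivariateSpline(x1, P_noisy, k=3, s=len(x1) * 0.15**2)
dP_dx = spline.derivative()                           # slope used in thermodynamic consistency tests
print("P and dP/dx1 at x1 = 0.3:", float(spline(0.3)), float(dP_dx(0.3)))
```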
NASA Technical Reports Server (NTRS)
Thomes, W. Joe; Ott, Melanie N.; Chuska, Richard; Switzer, Robert; Onuma, Eleanya; Blair, Diana; Frese, Erich; Matyseck, Marc
2016-01-01
Fiber optic assemblies have been used on spaceflight missions for many years as an enabling technology for routing, transmitting, and detecting optical signals. Due to the overwhelming success of NASA in implementing fiber optic assemblies on spaceflight science-based instruments, system scientists increasingly request fibers that perform in extreme environments while still maintaining very high optical transmission, stability, and reliability. Many new applications require fiber optic assemblies that will operate down to cryogenic temperatures as low as 20 Kelvin. Operating with little loss in optical throughput at these extreme temperatures requires a system-level approach, from how the fiber assembly is manufactured to how it is held, routed, and integrated. The NASA Goddard Code 562 Photonics Group has been designing, manufacturing, testing, and integrating fiber optics for spaceflight and other high-reliability applications for nearly 20 years. Design techniques and lessons learned over the years are consistently applied to developing new fiber optic assemblies that meet these demanding environments. System-level trades, fiber assembly design methods, manufacturing, testing, and integration will be discussed. Specific recent examples of ground support equipment for the James Webb Space Telescope (JWST); the Ice, Cloud and Land Elevation Satellite-2 (ICESat-2); and others will be included.
Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.
Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung
2018-02-03
A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error in accurately detecting the pupil center and corneal reflection center increases in a car environment due to various environmental light changes, reflections on the glasses surface, and motion and optical blurring of the captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor that considers driver head and eye movement and does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on the open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than previous gaze classification methods.
Kim, Jihun; Kim, Jonghong; Jang, Gil-Jin; Lee, Minho
2017-03-01
Deep learning has received significant attention recently as a promising solution to many problems in the area of artificial intelligence. Among several deep learning architectures, convolutional neural networks (CNNs) demonstrate superior performance when compared to other machine learning methods in the applications of object detection and recognition. We use a CNN for image enhancement and the detection of driving lanes on motorways. In general, the process of lane detection consists of edge extraction and line detection. A CNN can be used to enhance the input images before lane detection by excluding noise and obstacles that are irrelevant to the edge detection result. However, training conventional CNNs requires considerable computation and a big dataset. Therefore, we suggest a new learning algorithm for CNNs using an extreme learning machine (ELM). The ELM is a fast learning method used to calculate network weights between output and hidden layers in a single iteration and thus, can dramatically reduce learning time while producing accurate results with minimal training data. A conventional ELM can be applied to networks with a single hidden layer; as such, we propose a stacked ELM architecture in the CNN framework. Further, we modify the backpropagation algorithm to find the targets of hidden layers and effectively learn network weights while maintaining performance. Experimental results confirm that the proposed method is effective in reducing learning time and improving performance. Copyright © 2016 Elsevier Ltd. All rights reserved.
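The single-step weight computation that makes ELM training fast can be illustrated in a few lines. The sketch below is a generic single-hidden-layer ELM in Python/NumPy with random hidden weights and a pseudo-inverse solution for the output weights; it is not the stacked CNN-ELM architecture of the paper, and all sizes and names are illustrative.

    import numpy as np

    def elm_train(X, T, n_hidden=256, seed=0):
        """Minimal extreme learning machine: random hidden layer, closed-form output weights."""
        rng = np.random.default_rng(seed)
        W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights (never trained)
        b = rng.normal(size=n_hidden)                  # random hidden biases
        H = np.tanh(X @ W + b)                         # hidden-layer activations
        beta = np.linalg.pinv(H) @ T                   # least-squares output weights in one step
        return W, b, beta

    def elm_predict(X, W, b, beta):
        return np.tanh(X @ W + b) @ beta

    # toy usage: 100 samples, 20 features, 3 one-hot classes
    X = np.random.rand(100, 20)
    T = np.eye(3)[np.random.randint(0, 3, 100)]
    W, b, beta = elm_train(X, T)
    labels = elm_predict(X, W, b, beta).argmax(axis=1)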
NASA Astrophysics Data System (ADS)
Thornton, James
2016-04-01
December 2015 was recently confirmed as the UK's wettest month on record by the Met Office. The most extreme precipitation was associated with three extratropical storm systems, named Desmond, Eva and Frank by the pilot Met Éireann/Met Office "Name our storms" project. In response, river levels reached new maxima at many locations across Northern England. Property damage was widespread, with at least 16,000 homes in England flooded. As with recent predecessors, these events reinvigorated public debate about the extent to which natural weather variability, anthropogenic climate change, increased urbanisation and/or other changes in catchment and river management might be responsible for apparent increases in flood frequency and severity. Change detection and attribution science is required to inform the debate, but is complicated by the short (typically ~ 35 years) river flow records available. Running a large number of coupled climate and hydrological model simulations is a powerful way of addressing the 'attribution question' with respect to the hypothesised climate forcing, for example, albeit one that remains largely in the research domain at present. In the meantime, flood-frequency analysis of available records still forms the bedrock of practice in the water industry; the results are used routinely in the design of new defence structures and in the development of flood hazard maps, amongst other things. In such analyses, it is usual for the records to be assumed stationary. In this context, the specific aims of this research are twofold: • To investigate whether, under the assumption of stationarity, the outputs of standard flood-frequency modelling methods (both 'single-site' and 'spatially pooled' methods) differ significantly depending on whether the new peaks are included or excluded, and; • To assess the continued validity of previous conclusions regarding trends in English river flows by reapplying simple statistical tests, such as the Mann-Kendall test (sketched below), to data series with the new peaks included. Overall, the research seeks to explore the robustness of commonly-employed statistical flood estimation methods to instrumentally unprecedented extremes. Should it be found that the new records do indeed represent paradigm-shifting 'leverage points', then the suggestion of the Deputy Chief Executive of the Environment Agency, David Rooke - that a "complete rethink" of flood mitigation matters is required in our world of "unknown extremes" - must be given sufficient attention.
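For reference, the Mann-Kendall test mentioned above can be applied to an annual-maximum series in a few lines. This is a minimal two-sided version without tie or autocorrelation corrections, and the flow values are invented for illustration.

    import numpy as np
    from scipy.stats import norm

    def mann_kendall(x):
        """Two-sided Mann-Kendall trend test (no tie correction) on a 1-D series."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var_s = n * (n - 1) * (2 * n + 5) / 18.0
        z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0   # continuity-corrected statistic
        p = 2 * (1 - norm.cdf(abs(z)))
        return s, z, p

    # illustrative annual maximum flows, with and without a new record peak appended
    amax = [310, 295, 420, 388, 276, 510, 365, 298, 433]
    print(mann_kendall(amax))
    print(mann_kendall(amax + [612]))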
Outcome Trajectories in Extremely Preterm Infants
Carlo, Waldemar A.; Tyson, Jon E.; Langer, John C.; Walsh, Michele C.; Parikh, Nehal A.; Das, Abhik; Van Meurs, Krisa P.; Shankaran, Seetha; Stoll, Barbara J.; Higgins, Rosemary D.
2012-01-01
OBJECTIVE: Methods are required to predict prognosis with changes in clinical course. Death or neurodevelopmental impairment in extremely premature neonates can be predicted at birth/admission to the ICU by considering gender, antenatal steroids, multiple birth, birth weight, and gestational age. Predictions may be improved by using additional information available later during the clinical course. Our objective was to develop serial predictions of outcome by using prognostic factors available over the course of NICU hospitalization. METHODS: Data on infants with birth weight ≤1.0 kg admitted to 18 large academic tertiary NICUs during 1998–2005 were used to develop multivariable regression models following stepwise variable selection. Models were developed by using all survivors at specific times during hospitalization (in delivery room [n = 8713], 7-day [n = 6996], 28-day [n = 6241], and 36-week postmenstrual age [n = 5118]) to predict death or death/neurodevelopmental impairment at 18 to 22 months. RESULTS: Prediction of death or neurodevelopmental impairment in extremely premature infants is improved by using information available later during the clinical course. The importance of birth weight declines, whereas the importance of respiratory illness severity increases with advancing postnatal age. The c-statistic in validation models ranged from 0.74 to 0.80 with misclassification rates ranging from 0.28 to 0.30. CONCLUSIONS: Dynamic models of the changing probability of individual outcome can improve outcome predictions in preterm infants. Various current and future scenarios can be modeled by input of different clinical possibilities to develop individual “outcome trajectories” and evaluate impact of possible morbidities on outcome. PMID:22689874
NASA Astrophysics Data System (ADS)
Matthies, A.; Leckebusch, G. C.; Rohlfing, G.; Ulbrich, U.
2009-04-01
Extreme weather events such as thunderstorms, hail and heavy rain or snowfall can pose a threat to human life and to considerable tangible assets. Yet there is a lack of knowledge about present-day climatological risk and its economic effects, and about its changes due to rising greenhouse gas concentrations. Therefore, parts of the economy particularly sensitive to extreme weather events, such as insurance companies and airports, require regional risk analyses, early warning and prediction systems to cope with such events. Such an attempt is made for southern Germany, in close cooperation with stakeholders. Comparing ERA40 and station data with impact records of Munich Re and Munich Airport, the 90th percentile was found to be a suitable threshold for extreme, impact-relevant precipitation events. Different methods for the classification of the causative synoptic situations have been tested on ERA40 reanalyses. An objective scheme for the classification of Lamb's circulation weather types (CWTs) has proved to be most suitable for correct classification of the large-scale flow conditions. Certain CWTs have turned out to be prone to heavy precipitation, while others carry a very low risk of such events. Other large-scale parameters are tested in connection with CWTs to find a combination with the highest skill for identifying extreme precipitation events in climate model data (ECHAM5 and CLM). For example, vorticity advection at 700 hPa shows good results, but assumes knowledge of regional orographic particularities. Therefore, ongoing work is focused on additional testing of parameters that indicate deviations from a basic state of the atmosphere, such as the Eady Growth Rate or the newly developed Dynamic State Index. Evaluation results will be used to estimate the skill of the regional climate model CLM concerning the simulation of frequency and intensity of the extreme weather events. Data of the A1B scenario (2000-2050) will be examined for a possible climate change signal.
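As a simple illustration of the threshold step described above, a 90th-percentile threshold over wet days can be computed and used to flag impact-relevant days; the 0.1 mm wet-day cut-off and the gamma-distributed toy data are assumptions for the sketch, not values from the study.

    import numpy as np

    def extreme_days(precip_mm, q=90.0, wet_cutoff=0.1):
        """Flag days at or above the q-th percentile of wet-day precipitation."""
        precip_mm = np.asarray(precip_mm, dtype=float)
        wet = precip_mm[precip_mm > wet_cutoff]
        threshold = np.percentile(wet, q)
        return threshold, precip_mm >= threshold

    rain = np.random.gamma(shape=0.4, scale=8.0, size=3650)   # ten years of toy daily totals
    thr, mask = extreme_days(rain)
    print(f"90th-percentile threshold: {thr:.1f} mm, extreme days: {mask.sum()}")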
Experiences and issues with plastic parts at cold temperatures
NASA Technical Reports Server (NTRS)
Sandor, Mike; Agarwal, Shri
2005-01-01
Missions to Mars, other planets, and asteroids require electronic parts to operate and survive in extreme cold conditions. At extreme cold temperatures many types of cold-related failures can occur. Office 514 is currently evaluating plastic parts under various cold temperature conditions and applications. Evaluations, screens, and qualifications are conducted on flight parts.
Reproducing an extreme flood with uncertain post-event information
NASA Astrophysics Data System (ADS)
Fuentes-Andino, Diana; Beven, Keith; Halldin, Sven; Xu, Chong-Yu; Reynolds, José Eduardo; Di Baldassarre, Giuliano
2017-07-01
Studies for the prevention and mitigation of floods require information on discharge and extent of inundation, commonly unavailable or uncertain, especially during extreme events. This study was initiated by the devastating flood in Tegucigalpa, the capital of Honduras, when Hurricane Mitch struck the city. In this study we hypothesized that it is possible to estimate, in a trustworthy way considering large data uncertainties, this extreme 1998 flood discharge and the extent of the inundations that followed from a combination of models and post-event measured data. Post-event data collected in 2000 and 2001 were used to estimate discharge peaks, times of peak, and high-water marks. These data were used in combination with rain data from two gauges to drive and constrain a combination of well-known modelling tools: TOPMODEL, Muskingum-Cunge-Todini routing, and the LISFLOOD-FP hydraulic model. Simulations were performed within the generalized likelihood uncertainty estimation (GLUE) uncertainty-analysis framework. The model combination predicted peak discharge, times of peaks, and more than 90 % of the observed high-water marks within the uncertainty bounds of the evaluation data. This allowed an inundation likelihood map to be produced. Observed high-water marks could not be reproduced at a few locations on the floodplain. Identifications of these locations are useful to improve model set-up, model structure, or post-event data-estimation methods. Rainfall data were of central importance in simulating the times of peak and results would be improved by a better spatial assessment of rainfall, e.g. from radar data or a denser rain-gauge network. Our study demonstrated that it was possible, considering the uncertainty in the post-event data, to reasonably reproduce the extreme Mitch flood in Tegucigalpa in spite of no hydrometric gauging during the event. The method proposed here can be part of a Bayesian framework in which more events can be added into the analysis as they become available.
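The GLUE step can be pictured as follows: runs whose error measure passes a behavioural threshold are retained, their scores become likelihood weights, and weighted quantiles of the simulated discharge give the uncertainty bounds. The inverse-error likelihood and every number below are illustrative assumptions, not the likelihood measure or limits used in the study.

    import numpy as np

    def glue_weights(errors, threshold):
        """Keep behavioural parameter sets and turn their scores into normalized likelihood weights."""
        errors = np.asarray(errors, dtype=float)               # e.g. an RMSE per Monte Carlo run
        scores = np.where(errors <= threshold, 1.0 / errors, 0.0)
        return scores / scores.sum()

    def glue_bounds(sim, weights, lo=0.05, hi=0.95):
        """Weighted quantiles of a simulated quantity (e.g. peak discharge) across all runs."""
        order = np.argsort(sim)
        cdf = np.cumsum(weights[order])
        return np.interp([lo, hi], cdf, np.asarray(sim)[order])

    rng = np.random.default_rng(1)
    err = rng.uniform(0.2, 2.0, 1000)                # toy error measures for 1000 runs
    peak_q = rng.normal(800.0, 120.0, 1000)          # toy simulated peak discharge (m3/s)
    w = glue_weights(err, threshold=1.0)
    print(glue_bounds(peak_q, w))                    # 5-95 % uncertainty bounds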
Electrochemical Biosensors for Rapid Detection of Foodborne Salmonella: A Critical Overview
Cinti, Stefano; Volpe, Giulia; Piermarini, Silvia; Delibato, Elisabetta; Palleschi, Giuseppe
2017-01-01
Salmonella has represented the most common and primary cause of food poisoning in many countries for at least over 100 years. Its detection is still primarily based on traditional microbiological culture methods which are labor-intensive, extremely time consuming, and not suitable for testing a large number of samples. Accordingly, great efforts to develop rapid, sensitive and specific methods, easy to use, and suitable for multi-sample analysis, have been made and continue. Biosensor-based technology has all the potentialities to meet these requirements. In this paper, we review the features of the electrochemical immunosensors, genosensors, aptasensors and phagosensors developed in the last five years for Salmonella detection, focusing on the critical aspects of their application in food analysis. PMID:28820458
Tschan, Mathieu J.‐L.; Ieong, Nga Sze; Todd, Richard; Everson, Jack
2017-01-01
Abstract Poly(ortho ester)s (POEs) are well‐known for their surface‐eroding properties and hence present unique opportunities for controlled‐release and tissue‐engineering applications. Their development and wide‐spread investigation has, however, been severely limited by challenging synthetic requirements that incorporate unstable intermediates and are therefore highly irreproducible. Herein, the first catalytic method for the synthesis of POEs using air‐ and moisture‐stable vinyl acetal precursors is presented. The synthesis of a range of POE structures is demonstrated, including those that are extremely difficult to achieve by other synthetic methods. Furthermore, application of this chemistry permits efficient installation of functional groups through ortho ester linkages on an aliphatic polycarbonate. PMID:29087610
Method for fabricating beryllium-based multilayer structures
Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.
2003-02-18
Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Zakhnini, Abdelhamid; Gründig, Marion; Lippmann-Pipke, Johanna
2016-08-01
Clay plays a prominent role as barrier material in the geosphere. The small particle sizes cause extremely small pore sizes and induce low permeability and high sorption capacity. Transport of dissolved species by molecular diffusion, driven only by a concentration gradient, is less sensitive to the pore size. Heterogeneous structures on the centimetre scale could cause heterogeneous effects, like preferential transport zones, which are difficult to assess. Laboratory measurements with diffusion cells yield limited information on heterogeneity, and pore space imaging methods have to consider scale effects. We established positron emission tomography (PET), applying a high-resolution PET scanner as a spatially resolved quantitative method for direct laboratory observation of the molecular diffusion process of a PET tracer on the prominent scale of 1-100 mm. Although PET is rather insensitive to bulk effects, quantification required significant improvements of the image reconstruction procedure with respect to Compton scatter and attenuation. The experiments were conducted with 22Na and 124I over periods of 100 and 25 days, respectively. From the images we derived trustable anisotropic diffusion coefficients and, in addition, we identified indications of preferential transport zones. We thus demonstrated the unique potential of the PET imaging modality for geoscientific process monitoring under conditions where other methods fail, taking advantage of the extremely high detection sensitivity that is typical of radiotracer applications.
NASA Technical Reports Server (NTRS)
Goldberg, Robert K.; Stouffer, Donald C.
1998-01-01
Recently applications have exposed polymer matrix composite materials to very high strain rate loading conditions, requiring an ability to understand and predict the material behavior under these extreme conditions. In this second paper of a two part report, a three-dimensional composite micromechanical model is described which allows for the analysis of the rate dependent, nonlinear deformation response of a polymer matrix composite. Strain rate dependent inelastic constitutive equations utilized to model the deformation response of a polymer are implemented within the micromechanics method. The deformation response of two representative laminated carbon fiber reinforced composite materials with varying fiber orientation has been predicted using the described technique. The predicted results compare favorably to both experimental values and the response predicted by the Generalized Method of Cells, a well-established micromechanics analysis method.
Development of an Optimum Interpolation Analysis Method for the CYBER 205
NASA Technical Reports Server (NTRS)
Nestler, M. S.; Woollen, J.; Brin, Y.
1985-01-01
A state-of-the-art technique to assimilate the diverse observational database obtained during FGGE, and thus create initial conditions for numerical forecasts is described. The GLA optimum interpolation (OI) analysis method analyzes pressure, winds, and temperature at sea level, mixing ratio at six mandatory pressure levels up to 300 mb, and heights and winds at twelve levels up to 50 mb. Conversion to the CYBER 205 required a major re-write of the Amdahl OI code to take advantage of the CYBER vector processing capabilities. Structured programming methods were used to write the programs and this has resulted in a modular, understandable code. Among the contributors to the increased speed of the CYBER code are a vectorized covariance-calculation routine, an extremely fast matrix equation solver, and an innovative data search and sort technique.
21st Century Changes in Precipitation Extremes Based on Resolved Atmospheric Patterns
NASA Astrophysics Data System (ADS)
Gao, X.; Schlosser, C. A.; O'Gorman, P. A.; Monier, E.
2014-12-01
Global warming is expected to alter the frequency and/or magnitude of extreme precipitation events. Such changes could have substantial ecological, economic, and sociological consequences. However, climate models in general do not correctly reproduce the frequency distribution of precipitation, especially at the regional scale. In this study, a validated analogue method is employed to diagnose the potential future shifts in the probability of extreme precipitation over the United States under global warming. The method is based on the use of the resolved large-scale meteorological conditions (i.e. flow features, moisture supply) to detect the occurrence of extreme precipitation. The CMIP5 multi-model projections have been compiled for two radiative forcing scenarios (Representative Concentration Pathways 4.5 and 8.5). We further analyze the accompanying circulation features and their changes that may be responsible for shifts in extreme precipitation in response to a changed climate. The application of such an analogue method to detect other types of hazard events, e.g. landslides, is also explored. The results from this study may guide hazardous weather watches and help society develop adaptive strategies for preventing catastrophic losses.
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability (TAU) analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher-order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
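A hedged sketch of the kind of event record and rule-based check described above; the field names, the correction factor value, and the retry heuristic are placeholders rather than the published schema or coding criteria.

    from dataclasses import dataclass

    @dataclass
    class InterfaceEvent:
        session_id: str
        timestamp: float      # seconds since task start
        widget: str           # UI element that fired the event
        subgoal: str          # higher-order subgoal the event was mapped to

    def flag_usability_problems(events, expected_duration, correction=1.3, max_retries=2):
        """Flag a subgoal as a usability problem if it runs too long or is retried too often.
        `correction` rescales thresholds for the slower pace of thinking-aloud users."""
        by_subgoal = {}
        for e in events:
            by_subgoal.setdefault(e.subgoal, []).append(e)
        problems = []
        for subgoal, evs in by_subgoal.items():
            duration = max(e.timestamp for e in evs) - min(e.timestamp for e in evs)
            retries = sum(1 for e in evs if e.widget.endswith("retry"))
            if duration > correction * expected_duration.get(subgoal, float("inf")) or retries > max_retries:
                problems.append(subgoal)
        return problems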
Closed Loop, DM Diversity-based, Wavefront Correction Algorithm for High Contrast Imaging Systems
NASA Technical Reports Server (NTRS)
Give'on, Amir; Belikov, Ruslan; Shaklan, Stuart; Kasdin, Jeremy
2007-01-01
High contrast imaging from space relies on coronagraphs to limit diffraction and a wavefront control system to compensate for imperfections in both the telescope optics and the coronagraph. The extreme contrast required (up to 10^-10 for terrestrial planets) puts severe requirements on the wavefront control system, as the achievable contrast is limited by the quality of the wavefront. This paper presents a general closed loop correction algorithm for high contrast imaging coronagraphs that minimizes the energy in a predefined region in the image where terrestrial planets could be found. The estimation part of the algorithm reconstructs the complex field in the image plane using phase diversity caused by the deformable mirror. This method has been shown to achieve faster and better correction than classical speckle nulling.
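Assuming a linearized response matrix G from deformable-mirror actuator commands to the complex field at the dark-hole pixels, the energy-minimizing command update reduces to a regularized least-squares solve. The sketch below illustrates that generic idea only; it is not the authors' estimator, and all sizes and the regularization value are assumptions.

    import numpy as np

    def correction_step(E_est, G, reg=1e-6):
        """One closed-loop iteration: choose DM commands u minimizing |E_est + G u|^2.
        E_est: estimated complex field at dark-hole pixels, shape (n_pix,)
        G:     complex Jacobian from actuators to field, shape (n_pix, n_act)"""
        A = np.vstack([G.real, G.imag])                 # stack real/imag parts -> real least squares
        b = -np.concatenate([E_est.real, E_est.imag])
        u = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ b)
        return u

    rng = np.random.default_rng(0)
    G = rng.normal(size=(500, 64)) + 1j * rng.normal(size=(500, 64))
    E = rng.normal(size=500) + 1j * rng.normal(size=500)
    u = correction_step(E, G)
    print(np.linalg.norm(E + G @ u) / np.linalg.norm(E))   # fraction of field amplitude remaining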
NASA Technical Reports Server (NTRS)
Lightsey, W. D.
1990-01-01
A digital computer simulation is used to determine if the extreme ultraviolet explorer (EUVE) reaction wheels can provide sufficient torque and momentum storage capability to meet the space infrared telescope facility (SIRTF) maneuver requirements. A brief description of the pointing control system (PCS) and the sensor and actuator dynamic models used in the simulation is presented. A model to represent a disturbance such as fluid sloshing is developed. Results developed with the simulation, and a discussion of these results are presented.
Viscosity models for pure hydrocarbons at extreme conditions: A review and comparative study
Baled, Hseen O.; Gamwo, Isaac K.; Enick, Robert M.; ...
2018-01-12
Here, viscosity is a critical fundamental property required in many applications in the chemical and oil industries. In this review the performance of seven select viscosity models, representative of various predictive and correlative approaches, is discussed and evaluated by comparison to experimental data of 52 pure hydrocarbons including straight-chain alkanes, branched alkanes, cycloalkanes, and aromatics. This analysis considers viscosity data to extremely high-temperature, high-pressure conditions up to 573 K and 300 MPa. Unsatisfactory results are found, particularly at high pressures, with the Chung-Ajlan-Lee-Starling, Pedersen-Fredenslund, and Lohrenz-Bray-Clark models commonly used for oil reservoir simulation. If sufficient experimental viscosity data are readily available to determine model-specific parameters, the free volume theory and the expanded fluid theory models provide generally comparable results that are superior to those obtained with the friction theory, particularly at pressures higher than 100 MPa. Otherwise, the entropy scaling method by Lötgering-Lin and Gross is recommended as the best predictive model.
Extremely efficient flexible organic light-emitting diodes with modified graphene anode
NASA Astrophysics Data System (ADS)
Han, Tae-Hee; Lee, Youngbin; Choi, Mi-Ri; Woo, Seong-Hoon; Bae, Sang-Hoon; Hong, Byung Hee; Ahn, Jong-Hyun; Lee, Tae-Woo
2012-02-01
Although graphene films have a strong potential to replace indium tin oxide anodes in organic light-emitting diodes (OLEDs), to date, the luminous efficiency of OLEDs with graphene anodes has been limited by a lack of efficient methods to improve the low work function and reduce the sheet resistance of graphene films to the levels required for electrodes. Here, we fabricate flexible OLEDs by modifying the graphene anode to have a high work function and low sheet resistance, and thus achieve extremely high luminous efficiencies (37.2 lm W-1 in fluorescent OLEDs, 102.7 lm W-1 in phosphorescent OLEDs), which are significantly higher than those of optimized devices with an indium tin oxide anode (24.1 lm W-1 in fluorescent OLEDs, 85.6 lm W-1 in phosphorescent OLEDs). We also fabricate flexible white OLED lighting devices using the graphene anode. These results demonstrate the great potential of graphene anodes for use in a wide variety of high-performance flexible organic optoelectronics.
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
Experimental violation of Bell inequalities for multi-dimensional systems
Lo, Hsin-Pin; Li, Che-Ming; Yabushita, Atsushi; Chen, Yueh-Nan; Luo, Chih-Wei; Kobayashi, Takayoshi
2016-01-01
Quantum correlations between spatially separated parts of a d-dimensional bipartite system (d ≥ 2) have no classical analog. Such correlations, also called entanglement, are not only conceptually important, but also have a profound impact on information science. In theory, the violation of Bell inequalities based on local realistic theories for d-dimensional systems provides evidence of quantum nonlocality. Experimental verification is required to confirm whether a quantum system of extremely large dimension can possess this feature; however, such verification has never been performed for large dimensions. Here, we report that Bell inequalities are experimentally violated for bipartite quantum systems of dimensionality d = 16 with the usual ensembles of polarization-entangled photon pairs. We also estimate that our entanglement source violates Bell inequalities for extremely high dimensionality of d > 4000. The designed scenario offers a possible new method to investigate the entanglement of multipartite systems of large dimensionality and their application in quantum information processing. PMID:26917246
Brigham, Mark E.; Payne, Gregory A.; Andrews, William J.; Abbott, Marvin M.
2002-01-01
The sampling network was evaluated with respect to areal coverage, sampling frequency, and analytical schedules. Areal coverage could be expanded to include one additional watershed that is not part of the current network. A new sampling site on the North Canadian River might be useful because of expanding urbanization west of the city, but sampling at some other sites could be discontinued or reduced based on comparisons of data between the sites. Additional real-time or periodic monitoring for dissolved oxygen may be useful to prevent anoxic conditions in pools behind new low-water dams. The sampling schedules, both monthly and quarterly, are adequate to evaluate trends, but additional sampling during flow extremes may be needed to quantify loads and evaluate water quality during flow extremes. Emerging water-quality issues may require sampling for volatile organic compounds, sulfide, total phosphorus, chlorophyll-a, Escherichia coli, and enterococci, as well as use of more sensitive laboratory analytical methods for determination of cadmium, mercury, lead, and silver.
Gaoua, Nadia; de Oliveira, Rita F.; Hunter, Steve
2017-01-01
Different professional domains require high levels of physical performance alongside fast and accurate decision-making. Construction workers, police officers, firefighters, elite sports men and women, the military and emergency medical professionals are often exposed to hostile environments with limited options for behavioral coping strategies. In this (mini) review we use football refereeing as an example to discuss the combined effect of intense physical activity and extreme temperatures on decision-making and suggest an explicative model. In professional football, competitions can be played in temperatures ranging from -5°C in Norway to 30°C in Spain, for example. Despite these conditions, the referee's responsibility is to consistently apply the laws fairly and uniformly, and to ensure the rules are followed without waning or adversely influencing the competitiveness of the play. However, strenuous exercise in extreme environments imposes increased physiological and psychological stress that can affect decision-making. Therefore, the physical exertion required to follow the game and the thermal strain from the extreme temperatures may hinder the ability of referees to make fast and accurate decisions. Here, we review literature on the physical and cognitive requirements of football refereeing and how extreme temperatures may affect referees' decisions. Research suggests that both hot and cold environments have a negative impact on decision-making but data specific to decision-making is still lacking. A theoretical model of decision-making under the constraint of intense physical activity and thermal stress is suggested. Future naturalistic studies are needed to validate this model and provide clear recommendations for mitigating strategies. PMID:28912742
Visual soil evaluation - future research requirements
NASA Astrophysics Data System (ADS)
Emmet-Booth, Jeremy; Forristal, Dermot; Fenton, Owen; Ball, Bruce; Holden, Nick
2017-04-01
A review of Visual Soil Evaluation (VSE) techniques (Emmet-Booth et al., 2016) highlighted their established utility for soil quality assessment, though some limitations were identified: (1) The examination of aggregate size, visible intra-porosity and shape forms a key assessment criterion in almost all methods, thus limiting evaluation to structural form. The addition of criteria that holistically examine structure may be desirable. For example, structural stability can be indicated using dispersion tests or examining soil surface crusting, while the assessment of soil colour may indirectly indicate soil organic matter content, a contributor to stability. Organic matter assessment may also indicate structural resilience, along with rooting, earthworm numbers or shrinkage cracking. (2) Soil texture may influence results or impede method deployment. Modification of procedures to account for extreme texture variation is desirable. For example, evidence of compaction in sandy or single-grain soils greatly differs from that in clayey soils. Some procedures incorporate separate classification systems or adjust deployment based on texture. (3) Research into impacts of soil moisture content on VSE evaluation criteria is required. Criteria such as rupture resistance and shape may be affected by moisture content. It is generally recommended that methods are deployed on moist soils, and quantification of the influence of moisture variation on results is necessary. (4) Robust sampling strategies for method deployment are required. Dealing with spatial variation differs between methods, but where methods can be deployed over large areas, clear instruction on sampling is required. Additionally, as emphasis has been placed on the agricultural production function of soil, the ability of VSE to explore structural quality in terms of carbon storage, water purification and biodiversity support also requires research. Reference: Emmet-Booth, J.P., Forristal, P.D., Fenton, O., Ball, B.C. & Holden, N.M. 2016. A review of visual soil evaluation techniques for soil structure. Soil Use and Management, 32, 623-634.
NASA Astrophysics Data System (ADS)
Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme
2013-04-01
Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here at one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is computation of the joint probability of simultaneous wave height and still sea level; the second is interpretation of those joint probabilities to assess a sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme-value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but needs more calculation time, and classical ocean-engineering design contours of inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two different approaches with the two different methods. To be able to use both the Monte Carlo simulation and design contours methods, wave set-up is estimated with a simple empirical formula. We show advantages of the conditional approach compared to the multivariate extreme-value approach when extreme sea levels occur when either surge or wave height is large. We discuss the validity of the ocean-engineering design contours method, which is an alternative when computation of sea levels is too complex to use the Monte Carlo simulation method.
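The Monte Carlo route can be sketched as below: sample still water level and wave height jointly, add an empirical wave set-up term, and read the return level off the simulated annual maxima. The Gaussian-copula dependence, the marginal parameters, and the 0.2·Hs set-up fraction are generic placeholders, not the fitted models or formula used in the study.

    import numpy as np

    def mc_return_level(n_years=100000, T_return=100, seed=0):
        """Monte Carlo estimate of the T-year total sea level (still water level + wave set-up)."""
        rng = np.random.default_rng(seed)
        z = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n_years)
        still_level = 4.0 + 0.4 * z[:, 0]          # toy annual-max still water level (m)
        hs = np.exp(1.0 + 0.3 * z[:, 1])           # toy annual-max significant wave height (m)
        total = still_level + 0.2 * hs             # generic empirical wave set-up fraction
        return np.quantile(total, 1.0 - 1.0 / T_return)

    print(f"100-year level ~ {mc_return_level():.2f} m (illustrative values only)")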
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj
2016-01-01
This invited talk will give a brief overview of the integrated heat-shield system design that requires seams and of the extreme environmental conditions under which HEEET must demonstrate thermal performance without failure. We have tested HEEET across many different facilities and at conditions that are extreme. The presentation will highlight the performance of both the acreage material and the integrated seam at these conditions. The invited talks are 10 min, and hence this presentation will be short.
NASA Technical Reports Server (NTRS)
Kolawa, Elizabeth; Chen, Yuan; Mojarradi, Mohammad M.; Weber, Carissa Tudryn; Hunter, Don J.
2013-01-01
This paper describes the technology development and infusion of a motor drive electronics assembly for Mars Curiosity Rover under space extreme environments. The technology evaluation and qualification as well as space qualification of the assembly are detailed and summarized. Because of the uncertainty of the technologies operating under the extreme space environments and that a high level reliability was required for this assembly application, both component and assembly board level qualifications were performed.
A New Integrated Threshold Selection Methodology for Spatial Forecast Verification of Extreme Events
NASA Astrophysics Data System (ADS)
Kholodovsky, V.
2017-12-01
Extreme weather and climate events such as heavy precipitation, heat waves and strong winds can cause extensive damage to society in terms of human lives and financial losses. As climate changes, it is important to understand how extreme weather events may change as a result. Climate and statistical models are often independently used to model those phenomena. To better assess performance of the climate models, a variety of spatial forecast verification methods have been developed. However, spatial verification metrics that are widely used in comparing mean states, in most cases, do not have an adequate theoretical justification to benchmark extreme weather events. We propose a new integrated threshold selection methodology for spatial forecast verification of extreme events that couples existing pattern recognition indices with high threshold choices. This integrated approach has three main steps: 1) dimension reduction; 2) geometric domain mapping; and 3) threshold clustering. We apply this approach to an observed precipitation dataset over CONUS. The results are evaluated by displaying threshold distributions seasonally, monthly and annually. The method offers the user the flexibility of selecting a high threshold that is linked to desired geometrical properties. The proposed high-threshold methodology could either complement existing spatial verification methods, where threshold selection is arbitrary, or be directly applicable in extreme value theory.
Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong
2017-01-01
A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification. PMID:28629202
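The composite-kernel KELM at the core of the method can be sketched with fixed kernel weights as below; in the paper those weights, the kernel parameters, and the regularization constant are optimized jointly by QPSO, so every value here is illustrative.

    import numpy as np

    def rbf(X, Y, gamma=0.5):                       # Gaussian base kernel
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def poly(X, Y, degree=2, c=1.0):                # polynomial base kernel
        return (X @ Y.T + c) ** degree

    def kelm_train(X, T, weights=(0.7, 0.3), C=10.0):
        """Kernel ELM with a weighted composite kernel; closed-form solution for the coefficients."""
        K = weights[0] * rbf(X, X) + weights[1] * poly(X, X)
        return np.linalg.solve(K + np.eye(len(X)) / C, T)

    def kelm_predict(Xte, Xtr, alpha, weights=(0.7, 0.3)):
        K = weights[0] * rbf(Xte, Xtr) + weights[1] * poly(Xte, Xtr)
        return K @ alpha

    # toy usage: 60 e-nose samples, 8 sensor features, 3 gas classes (one-hot targets)
    Xtr = np.random.rand(60, 8)
    T = np.eye(3)[np.random.randint(0, 3, 60)]
    alpha = kelm_train(Xtr, T)
    labels = kelm_predict(np.random.rand(10, 8), Xtr, alpha).argmax(axis=1)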
Code of Federal Regulations, 2010 CFR
2010-01-01
... 16 Commercial Practices 2 2010-01-01 2010-01-01 false Method for determining extremely flammable and flammable contents of self-pressurized containers. 1500.45 Section 1500.45 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS HAZARDOUS SUBSTANCES AND...
Automatic Debugging Support for UML Designs
NASA Technical Reports Server (NTRS)
Schumann, Johann; Swanson, Keith (Technical Monitor)
2001-01-01
Design of large software systems requires rigorous application of software engineering methods covering all phases of the software process. Debugging during the early design phases is extremely important, because late bug-fixes are expensive. In this paper, we describe an approach which facilitates debugging of UML requirements and designs. The Unified Modeling Language (UML) is a set of notations for object-oriented design of a software system. We have developed an algorithm which translates requirement specifications in the form of annotated sequence diagrams into structured statecharts. This algorithm detects conflicts between sequence diagrams and inconsistencies in the domain knowledge. After synthesizing statecharts from sequence diagrams, these statecharts usually are subject to manual modification and refinement. By using the "backward" direction of our synthesis algorithm, we are able to map modifications made to the statechart back into the requirements (sequence diagrams) and check for conflicts there. Conflicts detected by our algorithm are fed back to the user and form the basis for deduction-based debugging of requirements and domain theory in very early development stages. Our approach makes it possible to generate explanations of why there is a conflict and which parts of the specifications are affected.
Monazzam, Shafagh; Goodell, Parker B; Salcedo, Edgardo S; Nelson, Sandahl H; Wolinsky, Philip R
2017-01-01
Computed tomography angiogram (CTA) is frequently utilized to detect vascular injuries even without examination findings indicating a vascular injury. We had the following hypotheses: (1) a CTA for lower extremity fractures with no clinical signs of a vascular injury is not indicated, and (2) fracture location and pattern would correlate with the risk of a vascular injury. A retrospective review was conducted on patients who had an acute lower extremity fracture(s) and a CTA. Their charts were reviewed for multiple factors including the presence or absence of hard or soft signs of a vascular injury, soft tissue status, and fracture location/pattern. Every CTA radiology report was reviewed and any vascular intervention or amputation resulting from a vascular injury was recorded. Statistical analysis was performed. Of the 275 CTAs of fractured extremities reviewed, 80 (29%) had a positive CTA finding and 16 (6%) required treatment. A total of 109 (40%) of the extremities had no hard or soft signs; all had normal CTAs. Having at least one hard or soft sign was a significant risk factor for having a positive CTA. An open fracture, isolated proximal third fibula fracture, distal and shaft tibia fractures, and the presence of multiple fractures in one extremity were also associated with an increased risk for having a positive CTA. We found no evidence to support the routine use of CTAs to evaluate lower extremity fractures unless at least one hard or soft sign is present. The presence of an open fracture, distal tibia or tibial shaft fractures, multiple fractures in one extremity, and/or an isolated proximal third fibula fracture increases the risk of having a finding consistent with a vascular injury on a CTA. Only 6% of the cases required treatment, and all of them had diminished or absent distal pulses on presentation. Diagnostic test, level III.
Identifying Heat Waves in Florida: Considerations of Missing Weather Data
Leary, Emily; Young, Linda J.; DuClos, Chris; Jordan, Melissa M.
2015-01-01
Background Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. Objectives To identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. Methods In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Results Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Conclusions Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised. PMID:26619198
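A minimal illustration of the two ingredients discussed above, gap handling and a percentile-based extreme-heat threshold, using plain temporal interpolation as a stand-in for the spatial and spatio-temporal imputation models; the 95th percentile, the three-day run length, and the toy data are assumptions.

    import numpy as np
    import pandas as pd

    def impute_and_flag(series, q=0.95, min_run=3):
        """Fill short gaps in a daily max-temperature series by time interpolation, then flag
        days that are the min_run-th (or later) consecutive day at or above the q-th percentile."""
        filled = series.interpolate(method="time", limit=2)      # only bridge short gaps
        threshold = filled.quantile(q)
        hot = filled >= threshold
        run_length = hot.astype(int).groupby((~hot).cumsum()).cumsum()
        return filled, threshold, run_length >= min_run

    idx = pd.date_range("2012-06-01", periods=92, freq="D")
    temps = pd.Series(30 + 5 * np.random.rand(92), index=idx)    # toy warm-season series
    temps.iloc[40:42] = np.nan                                   # a short gap in the record
    filled, thr, heat_flags = impute_and_flag(temps)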
Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João.
2017-04-01
Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid point extreme precipitation scales more strongly than the percentage increase (˜7% K-1) predicted by the Clausius-Clapeyron (CC) relationship, extremes for zonally and temporally averaged precipitation follow a slight sub-CC scaling, in agreement with results from Climate Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.
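The roughly 7% K-1 benchmark quoted above follows from the Clausius-Clapeyron relation for saturation vapour pressure; a back-of-envelope version, evaluated near a typical surface temperature of 288 K, is

    \frac{1}{e_s}\frac{de_s}{dT} = \frac{L_v}{R_v T^2}
    \approx \frac{2.5\times 10^{6}\ \mathrm{J\,kg^{-1}}}{461\ \mathrm{J\,kg^{-1}\,K^{-1}}\,(288\ \mathrm{K})^2}
    \approx 0.065\ \mathrm{K^{-1}} \approx 7\%\ \mathrm{K^{-1}},

so extremes that simply track boundary-layer moisture would intensify at about this rate, while super- or sub-CC scaling points to the dynamical contributions discussed in the abstract.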
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kerner, Ryan; Mann, R.B.
We investigate quantum tunnelling methods for calculating black hole temperature, specifically the null-geodesic method of Parikh and Wilczek and the Hamilton-Jacobi Ansatz method of Angheben et al. We consider application of these methods to a broad class of spacetimes with event horizons, including Rindler and nonstatic spacetimes such as Kerr-Newman and Taub-NUT. We obtain a general form for the temperature of Taub-NUT-AdS black holes that is commensurate with other methods. We examine the limitations of these methods for extremal black holes, taking the extremal Reissner-Nordstrom spacetime as a case in point.
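As a worked orientation example of the null-geodesic (Parikh-Wilczek) method in its simplest setting, a Schwarzschild hole emitting a quantum of energy ω (units ħ = c = G = k_B = 1) gives

    \Gamma \sim e^{-2\,\mathrm{Im}\,S}, \qquad
    \mathrm{Im}\,S = \mathrm{Im}\!\int_{r_{\rm in}}^{r_{\rm out}} p_r\,dr = 4\pi\omega M + \mathcal{O}(\omega^2)
    \;\Rightarrow\; \Gamma \sim e^{-8\pi\omega M} \equiv e^{-\omega/T_H}
    \;\Rightarrow\; T_H = \frac{1}{8\pi M} = \frac{\kappa}{2\pi},

the standard Hawking temperature; the nonstatic and extremal cases analyzed in the paper require more care, as the abstract notes.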
Radioactive contamination of scintillators
NASA Astrophysics Data System (ADS)
Danevich, F. A.; Tretyak, V. I.
2018-03-01
Low counting experiments (search for double β decay and dark matter particles, measurements of neutrino fluxes from different sources, search for hypothetical nuclear and subnuclear processes, low background α, β, γ spectrometry) require extremely low background of a detector. Scintillators are widely used to search for rare events both as conventional scintillation detectors and as cryogenic scintillating bolometers. Radioactive contamination of a scintillation material plays a key role to reach low level of background. Origin and nature of radioactive contamination of scintillators, experimental methods and results are reviewed. A programme to develop radiopure crystal scintillators for low counting experiments is discussed briefly.
New advances in non-dispersive IR technology for CO2 detection
NASA Technical Reports Server (NTRS)
Small, John W.; Odegard, Wayne L.
1988-01-01
This paper discusses new technology developments in CO2 detection using Non-Dispersive Infrared (NDIR) techniques. The method described has successfully been used in various applications and environments. It has exhibited extremely reliable long-term stability without the need of routine calibration. The analysis employs a dual wavelength, differential detection approach with compensating circuitry for component aging and dirt accumulation on optical surfaces. The instrument fails 'safe' and provides the operator with a 'fault' alarm in the event of a system failure. The NDIR analyzer described has been adapted to NASA Space Station requirements.
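The dual-wavelength, differential principle can be illustrated with a simplified Beer-Lambert inversion: the reference channel (a non-absorbing wavelength) cancels common-mode drift from source ageing and dirty optics. The absorption coefficient, path length, and function name below are placeholders, not the instrument's calibration.

    import numpy as np

    def co2_from_ndir(I_active, I_reference, I0_active, I0_reference, k=1.0, path_cm=10.0):
        """Drift-compensated NDIR estimate: ratio the active and reference channels, then invert
        a simplified (linear) Beer-Lambert law to get a concentration in arbitrary units."""
        ratio = (I_active / I_reference) / (I0_active / I0_reference)  # compensated transmittance
        absorbance = -np.log(ratio)
        return absorbance / (k * path_cm)

    print(co2_from_ndir(I_active=0.82, I_reference=0.97, I0_active=1.00, I0_reference=1.00))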
Research on vacuum utraviolet calibration technology
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Gao, Shumin; Sun, Hongsheng; Chen, Yinghang; Wei, Jianqiang
2014-11-01
The importance of extreme ultraviolet (EUV) and far ultraviolet (FUV) calibration is growing fast as vacuum ultraviolet payloads are widely used in national space programs. A calibration device has been established specifically to meet the requirements of EUV and FUV metrology and measurement. Spectral radiation and detector relative spectral response at EUV and FUV wavelengths can be calibrated with accuracies of 26% and 20%, respectively. The setup of the device, the theoretical model and the method for traceability of values are introduced, and measurement of detector relative spectral response from 30 nm to 200 nm is presented in this paper. The calibration device plays an important role in national space research.
Code of Federal Regulations, 2010 CFR
2010-01-01
... extremely flammable contents of self-pressurized containers. 1500.46 Section 1500.46 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION FEDERAL HAZARDOUS SUBSTANCES ACT REGULATIONS HAZARDOUS SUBSTANCES AND... extremely flammable contents of self-pressurized containers. Use the apparatus described in § 1500.43a. Use...
Partition of unity finite element method for quantum mechanical materials calculations
Pask, J. E.; Sukumar, N.
2016-11-09
The current state of the art for large-scale quantum-mechanical simulations is the planewave (PW) pseudopotential method, as implemented in codes such as VASP, ABINIT, and many others. However, since the PW method uses a global Fourier basis, with strictly uniform resolution at all points in space, it suffers from substantial inefficiencies in calculations involving atoms with localized states, such as first-row and transition-metal atoms, and requires significant nonlocal communications, which limit parallel efficiency. Real-space methods such as finite-differences (FD) and finite-elements (FE) have partially addressed both resolution and parallel-communications issues but have been plagued by one key disadvantage relative to PW: an excessive number of degrees of freedom (basis functions) needed to achieve the required accuracies. In this paper, we present a real-space partition of unity finite element (PUFE) method to solve the Kohn–Sham equations of density functional theory. In the PUFE method, we build the known atomic physics into the solution process using partition-of-unity enrichment techniques in finite element analysis. The method developed herein is completely general, applicable to metals and insulators alike, and particularly efficient for deep, localized potentials, as occur in calculations at extreme conditions of pressure and temperature. Full self-consistent Kohn–Sham calculations are presented for LiH, involving light atoms, and CeAl, involving heavy atoms with large numbers of atomic-orbital enrichments. We find that the new PUFE approach attains the required accuracies with substantially fewer degrees of freedom, typically by an order of magnitude or more, than the PW method. As a result, we compute the equation of state of LiH and show that the computed lattice constant and bulk modulus are in excellent agreement with reference PW results, while requiring an order of magnitude fewer degrees of freedom to obtain.
NASA Astrophysics Data System (ADS)
Becker, A.; Burroughs, R.
2014-12-01
This presentation discusses a new method to assess vulnerability and resilience strategies for stakeholders of coastal-dependent transportation infrastructure, such as seaports. Much coastal infrastructure faces increasing risk from extreme events resulting from sea level rise and tropical storms. As seen after Hurricane Sandy, natural disasters result in economic costs, damages to the environment, and negative consequences for residents' quality of life. In the coming decades, tough decisions will need to be made about investment measures to protect critical infrastructure. Coastal communities will need to weigh the costs and benefits of a new storm barrier, for example, against those of retrofitting, elevating or simply doing nothing. These decisions require understanding the priorities and concerns of stakeholders. For ports, these include shippers, insurers, tenants, and ultimate consumers of the port cargo on a local and global scale, all of whom have a stake in addressing port vulnerabilities. Decision-makers in exposed coastal areas need tools to understand stakeholders' concerns and perceptions of potential resilience strategies. For ports, they need answers to: 1) How will stakeholders be affected? 2) What strategies could be implemented to build resilience? 3) How effectively would the strategies mitigate stakeholder concerns? 4) What level of time and investment would strategies require? 5) Which stakeholders could/should take responsibility? Our stakeholder-based method provides answers to questions 1-3 and forms the basis for further work to address 4 and 5. Together with an expert group, we developed a pilot study for stakeholders of Rhode Island's critical energy port, the Port of Providence. Our method uses a plausible extreme storm scenario with localized visualizations and a portfolio of potential resilience strategies. We tailor a multi-criteria decision analysis tool and, through a series of workshops, we use the storm scenario, resilience strategies, and decision tool to elicit perceptions and priorities of port stakeholders. Results provide new knowledge to assist decision-makers in allocating investments of time, money, and staff resources. We intend for our method to be utilized in other port communities around Rhode Island and in other coastal states.
SDCLIREF - A sub-daily gridded reference dataset
NASA Astrophysics Data System (ADS)
Wood, Raul R.; Willkofer, Florian; Schmid, Franz-Josef; Trentini, Fabian; Komischke, Holger; Ludwig, Ralf
2017-04-01
Climate change is expected to impact the intensity and frequency of hydrometeorological extreme events. In order to adequately capture and analyze extreme rainfall events, in particular when assessing flood and flash flood situations, data are required at high spatial and sub-daily resolution, which are often not available in sufficient density and over extended time periods. The ClimEx project (Climate Change and Hydrological Extreme Events) addresses the alteration of hydrological extreme events under climate change conditions. In order to differentiate between a clear climate change signal and the limits of natural variability, unique single-model regional climate model ensembles (CRCM5 driven by CanESM2, RCP8.5) were created for a European and a North-American domain, each comprising 50 members of 150 years (1951-2100). In combination with the CORDEX database, this newly created ClimEx ensemble is a one-of-a-kind model dataset to analyze changes of sub-daily extreme events. For the purpose of bias-correcting the regional climate model ensembles, as well as for the baseline calibration and validation of hydrological catchment models, a new sub-daily (3h) high-resolution (500m) gridded reference dataset (SDCLIREF) was created for a domain covering the Upper Danube and Main watersheds (~100,000 km²). As the sub-daily observations lack a continuous time series for the reference period 1980-2010, a suitable method was needed to bridge the gaps in the discontinuous time series. The Method of Fragments (Sharma and Srikanthan (2006); Westra et al. (2012)) was applied to transform daily observations to sub-daily rainfall events, to extend the time series, and to densify the station network. Prior to applying the Method of Fragments and creating the gridded dataset using rigorous interpolation routines, observations operated by several institutions in three countries (Germany, Austria, Switzerland) were collected and quality-controlled. Among others, the quality control checked for steps, extensive dry seasons, temporal consistency and maximum hourly values. The resulting SDCLIREF dataset provides a robust precipitation reference for hydrometeorological applications at unprecedentedly high spatio-temporal resolution. References: Sharma, A.; Srikanthan, S. (2006): Continuous Rainfall Simulation: A Nonparametric Alternative. In: 30th Hydrology and Water Resources Symposium, 4-7 December 2006, Launceston, Tasmania. Westra, S.; Mehrotra, R.; Sharma, A.; Srikanthan, R. (2012): Continuous rainfall simulation. 1. A regionalized subdaily disaggregation approach. In: Water Resour. Res. 48 (1). DOI: 10.1029/2011WR010489.
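To make the disaggregation step concrete, the sketch below implements a deliberately simplified, nearest-neighbour variant of the Method of Fragments on synthetic 3-hourly data; the array shapes, the single-neighbour matching rule, and all numbers are illustrative assumptions, not the SDCLIREF procedure.

```python
import numpy as np

def build_fragment_library(subdaily):
    """Build a library of daily fragment vectors from a (n_days, 8) array
    of 3-hourly rainfall totals (8 intervals per day)."""
    daily_totals = subdaily.sum(axis=1)
    wet = daily_totals > 0                                  # fragments are defined for wet days only
    fragments = subdaily[wet] / daily_totals[wet, None]     # each fragment row sums to 1
    return daily_totals[wet], fragments

def disaggregate(daily_obs, lib_totals, lib_fragments):
    """Disaggregate daily totals to 3-hourly values by borrowing the fragment
    vector of the library day with the most similar daily total."""
    out = np.zeros((len(daily_obs), lib_fragments.shape[1]))
    for i, total in enumerate(daily_obs):
        if total <= 0:
            continue                                        # dry days stay dry
        j = int(np.argmin(np.abs(lib_totals - total)))      # nearest-neighbour match
        out[i] = total * lib_fragments[j]
    return out

# toy demonstration with synthetic data
rng = np.random.default_rng(0)
library_subdaily = rng.gamma(0.4, 2.0, size=(365, 8)) * (rng.random((365, 8)) < 0.3)
lib_totals, lib_fragments = build_fragment_library(library_subdaily)
daily_series = rng.gamma(0.5, 6.0, size=30) * (rng.random(30) < 0.5)
subdaily_series = disaggregate(daily_series, lib_totals, lib_fragments)
# daily mass is conserved by construction
print(np.allclose(subdaily_series.sum(axis=1), daily_series))
```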
Examining Risk-Taking Behavior and Sensation Seeking Requirement in Extreme Athletes
ERIC Educational Resources Information Center
Agilonu, Ali; Bastug, Gulsum; Mutlu, Tonguc Osman; Pala, Adem
2017-01-01
Extreme sports are sport branches which involve more action, adventure, risk and difficulty than other sports. Special materials are used in branches such as surfing, kite surfing, sailing, snowboarding, paragliding, diving, mountaineering and motor sports, and adrenaline release is greater than in other sport branches. On the…
Epidemic failure detection and consensus for extreme parallelism
Katti, Amogh; Di Fatta, Giuseppe; Naughton, Thomas; ...
2017-02-01
Future extreme-scale high-performance computing systems will be required to work under frequent component failures. The MPI Forum's User Level Failure Mitigation proposal has introduced an operation, MPI_Comm_shrink, to synchronize the alive processes on the list of failed processes, so that applications can continue to execute even in the presence of failures by adopting algorithm-based fault tolerance techniques. This MPI_Comm_shrink operation requires a failure detection and consensus algorithm. This paper presents three novel failure detection and consensus algorithms using gossiping. The proposed algorithms were implemented and tested using the Extreme-scale Simulator. The results show that in all algorithms the number of gossip cycles to achieve global consensus scales logarithmically with system size. The second algorithm also shows better scalability in terms of memory and network bandwidth usage and a perfect synchronization in achieving global consensus. The third approach is a three-phase distributed failure detection and consensus algorithm and provides consistency guarantees even in very large and extreme-scale systems while at the same time being memory and bandwidth efficient.
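The abstract does not give the algorithms themselves; the toy simulation below only illustrates the general gossip-dissemination idea and why the number of cycles tends to grow roughly logarithmically with system size. The fanout, the single initial detector, and the termination test are assumptions for the sketch, not the paper's three algorithms.

```python
import random

def gossip_cycles_to_consensus(n_procs, n_failed=1, fanout=1, seed=0):
    """Toy gossip simulation: each alive process tells `fanout` random peers
    everything it currently knows about failed processes, once per cycle.
    Returns the number of cycles until every alive process knows the full
    failed-process list (a stand-in for reaching global agreement)."""
    rng = random.Random(seed)
    failed = set(range(n_failed))
    alive = [p for p in range(n_procs) if p not in failed]
    # initially only one alive process has detected the failures
    knowledge = {p: set() for p in alive}
    knowledge[alive[0]] = set(failed)
    cycles = 0
    while any(knowledge[p] != failed for p in alive):
        cycles += 1
        updates = {p: set(knowledge[p]) for p in alive}
        for p in alive:
            for q in rng.sample(alive, k=fanout):
                updates[q] |= knowledge[p]
        knowledge = updates
        if cycles > 10_000:      # safety guard for the sketch
            break
    return cycles

for n in (64, 256, 1024, 4096):
    print(n, gossip_cycles_to_consensus(n))
```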
Bowerman, Erin Anne; Whatman, Chris; Harris, Nigel; Bradshaw, Elizabeth
2015-06-01
The objective of this study was to review the evidence for selected risk factors of lower extremity overuse injuries in young elite female ballet dancers. An electronic search of key databases from 1969 to July 2013 was conducted using the keywords dancers, ballet dancers, athletes, adolescent, adolescence, young, injury, injuries, risk, overuse, lower limb, lower extremity, lower extremities, growth, maturation, menarche, alignment, and biomechanics. Thirteen published studies were retained for review. Results indicated that there is a high incidence of lower extremity overuse injuries in the target population. Primary risk factors identified included maturation, growth, and poor lower extremity alignment. Strong evidence from well-designed studies indicates that young elite female ballet dancers suffer from delayed onset of growth, maturation, menarche, and menstrual irregularities. However, there is little evidence that this deficit increases the risk of overuse injury, with the exception of stress fractures. Similarly, there is minimal evidence linking poor lower extremity alignment to increased risk of overuse injury. It is concluded that further prospective, longitudinal studies are required to clarify the relationship between growth, maturation, menarche, and lower extremity alignment, and the risk of lower extremity overuse injury in young elite female ballet dancers.
Exploring regional stakeholder needs and requirements in terms of Extreme Weather Event Attribution
NASA Astrophysics Data System (ADS)
Schwab, M.; Meinke, I.; Vanderlinden, J. P.; Touili, N.; Von Storch, H.
2015-12-01
Extreme event attribution has increasingly received attention in the scientific community. It may also serve decision-making at the regional level, where much of the climate change impact mitigation takes place. Nevertheless, little is known to date about the requirements of regional actors in terms of extreme event attribution. We have therefore analysed these requirements using the example of regional decision-makers responsible for climate change-related activities and/or concerned with storm surge risks at the German Baltic Sea and heat wave risks in the Greater Paris area. In order to explore whether stakeholders find scientific knowledge from extreme event attribution useful and how this information might be relevant to their decision-making, we consulted a diverse set of actors engaged in the assessment, mitigation and communication of storm surge, heat wave, and climate change-related risks. Extreme event attribution knowledge was perceived to be most useful for public and political awareness-raising, but was of little or no relevance for the consulted stakeholders themselves. They did not consider that it would support adaptation planning, as is sometimes argued in the literature. The consulted coastal protection, health, and urban adaptation planners needed reliable statements about possible future changes in extreme events rather than causal statements about past events. To enhance salience, a suitable product of event attribution should be linked to regional problems, vulnerabilities, and impacts of climate change. Given that the tolerance of uncertainty is rather low, most of the stakeholders also stated that a suitable product of event attribution should come from a trusted "honest broker" and be published later with smaller uncertainties rather than sooner with larger ones. Institutional mechanisms, like regional climate services, which enable and foster communication, translation and mediation across the boundaries between knowledge and action, can help fulfill such requirements. This is of particular importance for extreme event attribution, which is often understood as science producing complex and abstract information attached to large uncertainties. Such services can act as an interface for creating the necessary mutual understanding by being in a continuous dialogue with both science and stakeholders.
Consistency of extreme flood estimation approaches
NASA Astrophysics Data System (ADS)
Felder, Guido; Paquet, Emmanuel; Penot, David; Zischg, Andreas; Weingartner, Rolf
2017-04-01
Estimates of low-probability flood events are frequently used for the planning of infrastructure as well as for determining the dimensions of flood protection measures. There are several well-established methodical procedures to estimate low-probability floods. However, a global assessment of the consistency of these methods is difficult to achieve, since the "true value" of an extreme flood is not observable. Nevertheless, a detailed comparison performed on a given case study brings useful information about the statistical and hydrological processes involved in the different methods. In this study, the following three approaches for estimating low-probability floods are compared: a purely statistical approach (ordinary extreme value statistics), a statistical approach based on stochastic rainfall-runoff simulation (the SCHADEX method), and a deterministic approach (physically based PMF estimation). These methods are tested for two Swiss catchments. The results and some intermediate variables are used to assess the potential strengths and weaknesses of each method, as well as to evaluate the consistency of these methods.
16 CFR 1500.44 - Method for determining extremely flammable and flammable solids.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and flammable solids. 1500.44 Section 1500.44 Commercial Practices CONSUMER PRODUCT SAFETY COMMISSION... ENFORCEMENT REGULATIONS § 1500.44 Method for determining extremely flammable and flammable solids. (a... with inner dimensions 6 inches long × 1 inch wide × one-fourth inch deep. (2) Rigid and pliable solids...
NASA Astrophysics Data System (ADS)
Dibike, Y. B.; Eum, H. I.; Prowse, T. D.
2017-12-01
Flows originating from alpine-dominated cold-region watersheds typically experience extended winter low flows followed by spring snowmelt and summer rainfall driven high flows. In a warmer climate, there will be a temperature-induced shift in precipitation from snow towards rain as well as changes in snowmelt timing, affecting the frequency of extreme high and low flow events, which could significantly alter ecosystem services. This study examines the potential changes in the frequency and severity of hydrologic extremes in the Athabasca River watershed in Alberta, Canada, based on the Variable Infiltration Capacity (VIC) hydrologic model and selected, statistically downscaled climate change scenario data from the latest Coupled Model Intercomparison Project (CMIP5). The sensitivity of these projected changes is also examined by applying different extreme flow analysis methods. The hydrological model projections show an overall increase in mean annual streamflow in the watershed and a corresponding shift of the freshet timing to an earlier period. Most of the streams are projected to experience increases during the winter and spring seasons and decreases during the summer and early fall seasons, with an overall projected increase in extreme high flows, especially for low-frequency events. While the middle and lower parts of the watershed are characterised by projected increases in extreme high flows, the high-elevation alpine region is mainly characterised by corresponding decreases in extreme low flow events. However, the magnitude of projected changes in extreme flows varies over a wide range, especially for low-frequency events, depending on the climate scenario and period of analysis, and sometimes in a nonlinear way. Nonetheless, the sensitivity of the projected changes to the statistical method of analysis is found to be relatively small compared to the inter-model variability.
Choe, Seungho; Hecht, Karen A.; Grabe, Michael
2008-01-01
Continuum electrostatic approaches have been extremely successful at describing the charged nature of soluble proteins and how they interact with binding partners. However, it is unclear whether continuum methods can be used to quantitatively understand the energetics of membrane protein insertion and stability. Recent translation experiments suggest that the energy required to insert charged peptides into membranes is much smaller than predicted by present continuum theories. Atomistic simulations have pointed to bilayer inhomogeneity and membrane deformation around buried charged groups as two critical features that are neglected in simpler models. Here, we develop a fully continuum method that circumvents both of these shortcomings by using elasticity theory to determine the shape of the deformed membrane and then subsequently uses this shape to carry out continuum electrostatics calculations. Our method does an excellent job of quantitatively matching results from detailed molecular dynamics simulations at a tiny fraction of the computational cost. We expect that this method will be ideal for studying large membrane protein complexes. PMID:18474636
Variance analysis of forecasted streamflow maxima in a wet temperate climate
NASA Astrophysics Data System (ADS)
Al Aamery, Nabil; Fox, James F.; Snyder, Mark; Chandramouli, Chandra V.
2018-05-01
Coupling global climate models, hydrologic models and extreme value analysis provides a method to forecast streamflow maxima; however, the elusive variance structure of the results hinders confidence in application. Directly correcting the bias of forecasts using the relative change between forecast and control simulations has been shown to marginalize hydrologic uncertainty, reduce model bias, and remove systematic variance when predicting mean monthly and mean annual streamflow, prompting our investigation for streamflow maxima. We assess the variance structure of streamflow maxima using realizations of emission scenario, global climate model type and project phase, downscaling methods, bias correction, extreme value methods, and hydrologic model inputs and parameterization. Results show that the relative change of streamflow maxima was not dependent on systematic variance from the annual maxima versus peak-over-threshold method applied, although we stress that researchers must strictly adhere to the rules of extreme value theory when applying the peak-over-threshold method. Regardless of which method is applied, extreme value model fitting does add variance to the projection, and the variance is an increasing function of the return period. Unlike the relative change of mean streamflow, results show that the variance of the maxima's relative change was dependent on all climate model factors tested as well as hydrologic model inputs and calibration. Ensemble projections forecast an increase of streamflow maxima for 2050 with pronounced forecast standard error, including increases of +30(±21), +38(±34) and +51(±85)% for 2-, 20- and 100-year streamflow events for the wet temperate region studied. The variance of maxima projections was dominated by climate model factors and extreme value analyses.
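As a pointer to what the annual-maxima branch of such an extreme value analysis looks like in practice, the sketch below fits a GEV distribution to synthetic annual maxima with scipy and reads off return levels; the data, parameter values, and return periods are illustrative, not the study's.

```python
import numpy as np
from scipy import stats

# synthetic "annual maximum streamflow" sample (m^3/s), a stand-in for model output
annual_maxima = stats.genextreme.rvs(c=-0.1, loc=300.0, scale=80.0,
                                     size=60, random_state=42)

# fit a GEV distribution to the annual maxima
c, loc, scale = stats.genextreme.fit(annual_maxima)

# return level for a T-year event: the (1 - 1/T) quantile of the fitted GEV
for T in (2, 20, 100):
    level = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)
    print(f"{T:>3}-year return level ≈ {level:.0f} m³/s")
```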
On Light-Like Extremal Surfaces in Curved Spacetimes
NASA Astrophysics Data System (ADS)
Huang, Shou-Jun; He, Chun-Lei
2014-01-01
In this paper, we are concerned with light-like extremal surfaces in curved spacetimes. It is interesting to find that, under a diffeomorphic transformation of variables, the light-like extremal surfaces can be described by a system of nonlinear geodesic equations. In particular, we investigate the light-like extremal surfaces in Schwarzschild spacetime in detail, and some new special solutions are derived systematically with the aim of comparing them with known results and illustrating the method.
NASA Astrophysics Data System (ADS)
Deidda, Roberto; Mamalakis, Antonis; Langousis, Andreas
2015-04-01
One of the most crucial issues in statistical hydrology is the estimation of extreme rainfall from data. To that end, based on asymptotic arguments from Extreme Excess (EE) theory, several studies have focused on developing new, or improving existing, methods to fit a Generalized Pareto Distribution (GPD) model to rainfall excesses above a properly selected threshold u. The latter is generally determined using various approaches that can be grouped into three basic classes: a) non-parametric methods that locate the changing point between extreme and non-extreme regions of the data, b) graphical methods where one studies the dependence of the GPD parameters (or related metrics) on the threshold level u, and c) goodness-of-fit (GoF) metrics that, for a certain level of significance, locate the lowest threshold u at which a GPD model is applicable. In this work, we review representative methods for GPD threshold detection, discuss fundamental differences in their theoretical bases, and apply them to daily rainfall records from the NOAA-NCDC open-access database (http://www.ncdc.noaa.gov/oa/climate/ghcn-daily/). We find that non-parametric methods that locate the changing point between extreme and non-extreme regions of the data are generally not reliable, while graphical methods and GoF metrics that rely on limiting arguments for the upper distribution tail lead to unrealistically high thresholds u. The latter is expected, since one checks the validity of the limiting arguments rather than the applicability of a GPD distribution model. Better performance is demonstrated by graphical methods and GoF metrics that rely on GPD properties. Finally, we discuss the effects of data quantization (common in hydrologic applications) on the estimated thresholds. Acknowledgments: The research project is implemented within the framework of the Action «Supporting Postdoctoral Researchers» of the Operational Program "Education and Lifelong Learning" (Action's Beneficiary: General Secretariat for Research and Technology), and is co-financed by the European Social Fund (ESF) and the Greek State.
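A minimal peaks-over-threshold sketch in the spirit of the abstract: a GPD is fitted to excesses above a threshold chosen here by a simple quantile rule (not by the selection methods the study reviews), and an exceedance probability is read off. All numbers are synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# synthetic daily rainfall (mm): many dry/light days, occasional heavy ones
rain = rng.gamma(shape=0.6, scale=8.0, size=20000)

u = np.quantile(rain, 0.95)             # illustrative threshold choice
excesses = rain[rain > u] - u

# fit the GPD to the excesses, keeping the location fixed at zero
shape, loc, scale = stats.genpareto.fit(excesses, floc=0.0)
print(f"threshold u = {u:.1f} mm, GPD shape = {shape:.3f}, scale = {scale:.2f}")

# Pr[X > x] = Pr[X > u] * Pr[X - u > x - u | X > u]
x = 80.0
p_exc = (excesses.size / rain.size) * stats.genpareto.sf(x - u, shape, loc=0.0, scale=scale)
print(f"Pr[daily rainfall > {x} mm] ≈ {p_exc:.2e}")
```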
Warmest extreme year in U.S. history alters thermal requirements for tree phenology.
Carter, Jacob M; Orive, Maria E; Gerhart, Laci M; Stern, Jennifer H; Marchin, Renée M; Nagel, Joane; Ward, Joy K
2017-04-01
The frequency of extreme warm years is increasing across the majority of the planet. Shifts in plant phenology in response to extreme years can influence plant survival, productivity, and synchrony with pollinators/herbivores. Despite extensive work on plant phenological responses to climate change, little is known about responses to extreme warm years, particularly at the intraspecific level. Here we investigate 43 populations of white ash trees (Fraxinus americana) from throughout the species range that were all grown in a common garden. We compared the timing of leaf emergence during the warmest year in U.S. history (2012) with relatively non-extreme years. We show that (a) leaf emergence among white ash populations was accelerated by 21 days on average during the extreme warm year of 2012 relative to non-extreme years; (b) rank order for the timing of leaf emergence was maintained among populations across extreme and non-extreme years, with southern populations emerging earlier than northern populations; (c) greater amounts of warming units accumulated prior to leaf emergence during the extreme warm year relative to non-extreme years, and this constrained the potential for even earlier leaf emergence by an average of 9 days among populations; and (d) the extreme warm year reduced the reliability of a relevant phenological model for white ash by producing a consistent bias toward earlier predicted leaf emergence relative to observations. These results demonstrate a critical need to better understand how extreme warm years will impact tree phenology, particularly at the intraspecific level.
The Effects of Load Carriage and Muscle Fatigue on Lower-Extremity Joint Mechanics
ERIC Educational Resources Information Center
Wang, He; Frame, Jeff; Ozimek, Elicia; Leib, Daniel; Dugan, Eric L.
2013-01-01
Military personnel are commonly afflicted by lower-extremity overuse injuries. Load carriage and muscular fatigue are major stressors during military basic training. Purpose: To examine effects of load carriage and muscular fatigue on lower-extremity joint mechanics during walking. Method: Eighteen men performed the following tasks: unloaded…
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well-known stagewise additive modeling using a multiclass exponential (SAMME) boosting algorithm is extended to address problems where there exists a natural order in the targets, using a cost-sensitive approach. The proposed ensemble model uses an extreme learning machine (ELM) model as a base classifier (with the Gaussian kernel and the additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to state-of-the-art boosting algorithms, in particular those using ELM as the base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The adoption of the weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns, according to the order of the targets, further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
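For readers unfamiliar with the base learner, the sketch below shows a bare-bones ELM with the closed-form (optionally sample-weighted) regularized least-squares solve for the output weights; the Gaussian kernel, cost model, and SAMME boosting loop of the paper are omitted, and the toy data and hyperparameters are assumptions.

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, reg=1e-2, sample_weight=None, seed=0):
    """Fit an extreme learning machine: random hidden layer, then a closed-form
    (optionally weighted) regularized least-squares solve for the output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights, kept fixed
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    if sample_weight is None:
        sample_weight = np.ones(len(X))
    D = np.diag(sample_weight)
    # beta = (H' D H + reg I)^-1 H' D T
    beta = np.linalg.solve(H.T @ D @ H + reg * np.eye(n_hidden), H.T @ D @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy regression: learn y = sin(x)
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y, n_hidden=40)
print("train RMSE:", np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```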
Stadler, Julia; Eder, Johanna; Pratscher, Barbara; Brandt, Sabine; Schneller, Doris; Müllegger, Robert; Vogl, Claus; Trautinger, Franz; Brem, Gottfried; Burgstaller, Joerg P.
2015-01-01
Cell-free circulating tumor DNA in the plasma of cancer patients has become a common point of interest as indicator of therapy options and treatment response in clinical cancer research. Especially patient- and tumor-specific single nucleotide variants that accurately distinguish tumor DNA from wild type DNA are promising targets. The reliable detection and quantification of these single-base DNA variants is technically challenging. Currently, a variety of techniques is applied, with no apparent “gold standard”. Here we present a novel qPCR protocol that meets the conditions of extreme sensitivity and specificity that are required for detection and quantification of tumor DNA. By consecutive application of two polymerases, one of them designed for extreme base-specificity, the method reaches unprecedented sensitivity and specificity. Three qPCR assays were tested with spike-in experiments, specific for point mutations BRAF V600E, PTEN T167A and NRAS Q61L of melanoma cell lines. It was possible to detect down to one copy of tumor DNA per reaction (Poisson distribution), at a background of up to 200 000 wild type DNAs. To prove its clinical applicability, the method was successfully tested on a small cohort of BRAF V600E positive melanoma patients. PMID:26562020
Analysis of real-time vibration data
Safak, E.
2005-01-01
In recent years, a few structures have been instrumented to provide continuous vibration data in real time, recording not only large-amplitude motions generated by extreme loads, but also small-amplitude motions generated by ambient loads. The main objective in continuous recording is to track any changes in structural characteristics, and to detect damage after an extreme event, such as an earthquake or explosion. The Fourier-based spectral analysis methods have been the primary tool to analyze vibration data from structures. In general, such methods do not work well for real-time data, because real-time data are mainly composed of ambient vibrations with very low amplitudes and signal-to-noise ratios. The long duration, linearity, and the stationarity of ambient data, however, allow us to utilize statistical signal processing tools, which can compensate for the adverse effects of low amplitudes and high noise. The analysis of real-time data requires tools and techniques that can be applied in real-time; i.e., data are processed and analyzed while being acquired. This paper presents some of the basic tools and techniques for processing and analyzing real-time vibration data. The topics discussed include utilization of running time windows, tracking mean and mean-square values, filtering, system identification, and damage detection.
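As an illustration of the kind of running statistics mentioned above (running windows, tracking of mean and mean-square values), here is a small exponentially weighted sketch on a synthetic ambient-vibration record; the smoothing constant and the level change are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def running_stats(x, alpha=0.01):
    """Exponentially weighted running mean and mean-square of a sampled signal.
    alpha sets the effective window length (roughly 1/alpha samples)."""
    mean = np.empty_like(x)
    mean_sq = np.empty_like(x)
    m, ms = x[0], x[0] ** 2
    for i, xi in enumerate(x):
        m = (1.0 - alpha) * m + alpha * xi
        ms = (1.0 - alpha) * ms + alpha * xi ** 2
        mean[i], mean_sq[i] = m, ms
    return mean, mean_sq

# synthetic ambient-vibration record with a sudden change in level
rng = np.random.default_rng(0)
signal = np.concatenate([0.1 * rng.standard_normal(5000),
                         0.3 * rng.standard_normal(5000)])
mean, mean_sq = running_stats(signal)
rms = np.sqrt(mean_sq)
print("running RMS before/after change:", rms[4000].round(3), rms[9000].round(3))
```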
Research on the remote sensing methods of drought monitoring in Chongqing
NASA Astrophysics Data System (ADS)
Yang, Shiqi; Tang, Yunhui; Gao, Yanghua; Xu, Yongjin
2011-12-01
There are regional and periodic droughts in Chongqing, which seriously impact agricultural production and people's lives. This study attempted to monitor drought in Chongqing, a region with complex terrain, using MODIS data. First, we analyzed and compared three remote sensing methods for drought monitoring (time series of vegetation index, temperature vegetation dryness index (TVDI), and vegetation supply water index (VSWI)) for the severe drought of 2006. Then we developed a remote sensing based drought monitoring model for Chongqing by combining soil moisture data and meteorological data. The results showed that the three remote sensing based drought monitoring models performed reasonably well in detecting the occurrence of drought in Chongqing. However, the time series of vegetation index is more sensitive in the temporal domain but weaker in the spatial domain. Although TVDI and VSWI can reproduce, in the spatial domain, the whole course of the severe 2006 summer drought, from onset through intensification, relief, renewed intensification and complete remission, TVDI requires that both extremely dry and extremely moist conditions exist in the study area, which is rarely the case in Chongqing. VSWI is simple and practicable, and the correlation coefficient between VSWI and soil moisture reaches significant levels. In summary, VSWI is the best model for summer drought monitoring in Chongqing.
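The abstract does not state the index formulas. Assuming the common definition of VSWI as the ratio of NDVI to land-surface (canopy) temperature, a minimal computation looks like the sketch below; the assumed formula, the vegetation-mask threshold, and the toy arrays are illustrative only, not the study's processing chain.

```python
import numpy as np

def vswi(ndvi, lst_kelvin):
    """Vegetation Supply Water Index, here assumed as VSWI = NDVI / LST
    (lower values indicating drier conditions). Non-vegetated pixels are masked."""
    ndvi = np.where(ndvi > 0.1, ndvi, np.nan)   # crude vegetation mask (assumption)
    return ndvi / lst_kelvin

# synthetic 3x3 "scene": NDVI and land-surface temperature (K)
ndvi = np.array([[0.65, 0.55, 0.20],
                 [0.70, 0.40, 0.05],
                 [0.60, 0.35, 0.50]])
lst = np.array([[298.0, 303.0, 310.0],
                [297.0, 308.0, 315.0],
                [300.0, 309.0, 305.0]])
print(np.round(vswi(ndvi, lst) * 1000, 2))      # scaled for readability
```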
A NOISE ADAPTIVE FUZZY EQUALIZATION METHOD FOR PROCESSING SOLAR EXTREME ULTRAVIOLET IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Druckmueller, M., E-mail: druckmuller@fme.vutbr.cz
A new image enhancement tool ideally suited for the visualization of fine structures in extreme ultraviolet images of the corona is presented in this paper. The Noise Adaptive Fuzzy Equalization method is particularly suited for the exceptionally high dynamic range images from the Atmospheric Imaging Assembly instrument on the Solar Dynamics Observatory. This method produces artifact-free images and gives significantly better results than methods based on convolution or Fourier transform which are often used for that purpose.
NASA Astrophysics Data System (ADS)
Wang, Yuan
2015-10-01
The recent study "Trends of Extreme Precipitation in Eastern China and Their Possible Causes" attributed the observed decrease/increase of light/heavy precipitation in eastern China to global warming rather than to regional aerosol effects. However, there is compelling evidence from previous long-term observations and numerical modeling studies suggesting that anthropogenic pollution is closely linked to the recent changes in precipitation intensity, because aerosols considerably modulate cloud physical properties in eastern China. Clearly, a quantitative assessment of the aerosol and greenhouse effects on the regional scale is required to identify the primary cause of the extreme precipitation changes.
Evaluation of COTS Electronic Parts for Extreme Temperature Use in NASA Missions
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad; Elbuluk, Malik
2008-01-01
Electronic systems capable of extreme temperature operation are required for many future NASA space exploration missions where it is desirable to have smaller, lighter, and less expensive spacecraft and probes. Presently, spacecraft on-board electronics are maintained at about room temperature by use of thermal control systems. An Extreme Temperature Electronics Program at the NASA Glenn Research Center focuses on development of electronics suitable for space exploration missions. The effects of exposure to extreme temperatures and thermal cycling are being investigated for commercial-off-the-shelf components as well as for components specially developed for harsh environments. An overview of this program along with selected data is presented.
Self-Recovery Experiments in Extreme Environments Using a Field Programmable Transistor Array
NASA Technical Reports Server (NTRS)
Stoica, Adrian; Keymeulen, Didier; Arslan, Tughrul; Duong, Vu; Zebulum, Ricardo; Ferguson, Ian; Guo, Xin
2004-01-01
Temperature and radiation tolerant electronics, as well as long life survivability are key capabilities required for future NASA missions. Current approaches to electronics for extreme environments focus on component level robustness and hardening. However, current technology can only ensure very limited lifetime in extreme environments. This paper describes novel experiments that allow adaptive in-situ circuit redesign/reconfiguration during operation in extreme temperature and radiation environments. This technology would complement material/device advancements and increase the mission capability to survive harsh environments. The approach is demonstrated on a mixed-signal programmable chip (FPTA-2), which recovers functionality for temperatures until 28 C and with total radiation dose up to 250kRad.
Zhan, Yi; Fu, Guo; Zhou, Xiang; He, Bo; Yan, Li-Wei; Zhu, Qing-Tang; Gu, Li-Qiang; Liu, Xiao-Lin; Qi, Jian
2017-12-01
Complex extremity trauma commonly involves both soft tissue and vascular injuries. Traditional two-stage surgical repair may delay rehabilitation and functional recovery, as well as increase the risk of infections. We report a single-stage reconstructive surgical method that repairs soft tissue defects and vascular injuries with flow-through free flaps to improve functional outcomes. Between March 2010 and December 2016 in our hospital, 5 patients with severe upper extremity trauma received single-stage reconstructive surgery, in which a flow-through anterolateral thigh free flap was applied to repair soft tissue defects and vascular injuries simultaneously. Cases of injured artery were reconstructed with the distal trunk of the descending branch of the lateral circumflex femoral artery. A segment of adjacent vein was used if there was a second artery injury. Patients were followed to evaluate their functional recoveries, and received computed tomography angiography examinations to assess peripheral circulation. Two patients had post-operative thumb necrosis; one required amputation, and the other was healed after debridement and abdominal pedicle flap repair. The other 3 patients had no major complications (infection, necrosis) to the recipient or donor sites after surgery. All the patients had achieved satisfactory functional recovery by the end of the follow-up period. Computed tomography angiography showed adequate circulation in the peripheral vessels. The success of these cases shows that one-step reconstructive surgery with flow-through anterolateral thigh free flaps can be a safe and effective treatment option for patients with complex upper extremity trauma with soft tissue defects and vascular injuries. Copyright © 2017. Published by Elsevier Ltd.
Kindergarten classroom functioning of extremely preterm/extremely low birth weight children.
Wong, Taylor; Taylor, H Gerry; Klein, Nancy; Espy, Kimberly A; Anselmo, Marcia G; Minich, Nori; Hack, Maureen
2014-12-01
Cognitive, behavioral, and learning problems are evident in extremely preterm/extremely low birth weight (EPT/ELBW, <28 weeks gestational age or <1000 g) children by early school age. However, we know little about how they function within the classroom once they start school. To determine how EPT/ELBW children function in kindergarten classrooms compared to term-born normal birth weight (NBW) classmates and identify factors related to difficulties in classroom functioning. A 2001-2003 birth cohort of 111 EPT/ELBW children and 110 NBW classmate controls were observed in regular kindergarten classrooms during a 1-hour instructional period using a time-sample method. The groups were compared on frequencies of individual teacher attention, competing or off-task behaviors, task management/preparation, and academic responding. Regression analysis was also conducted within the EPT/ELBW group to examine associations of these measures with neonatal and developmental risk factors, kindergarten neuropsychological and behavioral assessments, and classroom characteristics. The EPT/ELBW group received more individual teacher attention and was more often off-task than the NBW controls. Poorer classroom functioning in the EPT/ELBW group was associated with higher neonatal and developmental risk, poorer executive function skills, more negative teacher ratings of behavior and learning progress, and classroom characteristics. EPT/ELBW children require more teacher support and are less able to engage in instructional activities than their NBW classmates. Associations of classroom functioning with developmental history and cognitive and behavioral traits suggest that these factors may be useful in identifying the children most in need of special educational interventions. Copyright © 2014 Elsevier Ltd. All rights reserved.
Research progress of extreme climate and its vegetation response
NASA Astrophysics Data System (ADS)
Cui, Xiaolin; Wei, Xiaoqing; Wang, Tao
2017-08-01
The IPCC's fifth assessment report indicates that climate warming is unquestionable and that the frequency and intensity of extreme weather events may increase; extreme weather events can destroy the growth conditions of vegetation that is otherwise in a stable condition. Therefore, it is essential to research the formation of extreme weather events and their ecological response, both in terms of scientific development and the needs of societal development. This paper mainly examines these issues from the following aspects: (1) the definition of extreme climate events and the methods for studying the associated response of vegetation; (2) the research progress on extreme climate events and their vegetation response; and (3) the future direction of research on extreme climate and its vegetation response.
Modified Inverse First Order Reliability Method (I-FORM) for Predicting Extreme Sea States.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey Celia; Sallaberry, Cedric Jean-Marie; Dallman, Ann Renee
Environmental contours describing extreme sea states are generated as the input for numerical or physical model simulations as part of the standard current practice for designing marine structures to survive extreme sea states. Such environmental contours are characterized by combinations of significant wave height and energy period values calculated for a given recurrence interval using a set of data based on hindcast simulations or buoy observations over a sufficient period of record. The use of the inverse first-order reliability method (IFORM) is standard design practice for generating environmental contours. In this paper, the traditional application of the IFORM to generating environmental contours representing extreme sea states is described in detail and its merits and drawbacks are assessed. The application of additional methods for analyzing sea state data, including the use of principal component analysis (PCA) to create an uncorrelated representation of the data under consideration, is proposed. A reexamination of the components of the IFORM application to the problem at hand, including the use of new distribution fitting techniques, is shown to contribute to the development of more accurate and reasonable representations of extreme sea states for use in survivability analysis for marine structures. Keywords: Inverse FORM, Principal Component Analysis, Environmental Contours, Extreme Sea State Characterization, Wave Energy Converters
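Only the PCA ingredient named above is easy to show briefly; the sketch below decorrelates synthetic significant-wave-height/energy-period pairs with an eigendecomposition of the sample covariance. The IFORM contour construction itself is not reproduced here, and the synthetic joint distribution is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
# synthetic, positively correlated (wave height, energy period) pairs
hs = rng.lognormal(mean=0.5, sigma=0.4, size=5000)
te = 4.0 + 1.5 * hs + rng.normal(scale=0.8, size=5000)
data = np.column_stack([hs, te])

# principal component analysis: rotate into uncorrelated coordinates
centered = data - data.mean(axis=0)
cov = np.cov(centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
components = centered @ eigvecs          # decorrelated representation

print("correlation before PCA:", np.round(np.corrcoef(data.T)[0, 1], 3))
print("correlation after  PCA:", np.round(np.corrcoef(components.T)[0, 1], 3))
```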
Adaptive Nulling for Interferometric Detection of Planets
NASA Technical Reports Server (NTRS)
Lay, Oliver P.; Peters, Robert D.
2010-01-01
An adaptive-nulling method has been proposed to augment the nulling-optical- interferometry method of detection of Earth-like planets around distant stars. The method is intended to reduce the cost of building and aligning the highly precise optical components and assemblies needed for nulling. Typically, at the mid-infrared wavelengths used for detecting planets orbiting distant stars, a star is millions of times brighter than an Earth-sized planet. In order to directly detect the light from the planet, it is necessary to remove most of the light coming from the star. Nulling interferometry is one way to suppress the light from the star without appreciably suppressing the light from the planet. In nulling interferometry in its simplest form, one uses two nominally identical telescopes aimed in the same direction and separated laterally by a suitable distance. The light collected by the two telescopes is processed through optical trains and combined on a detector. The optical trains are designed such that the electric fields produced by an on-axis source (the star) are in anti-phase at the detector while the electric fields from the planet, which is slightly off-axis, combine in phase, so that the contrast ratio between the star and the planet is greatly decreased. If the electric fields from the star are exactly equal in amplitude and opposite in phase, then the star is effectively nulled out. Nulling is effective only if it is complete in the sense that it occurs simultaneously in both polarization states and at all wavelengths of interest. The need to ensure complete nulling translates to extremely tight demands upon the design and fabrication of the complex optical trains: The two telescopes must be highly symmetric, the reflectivities of the many mirrors in the telescopes and other optics must be carefully tailored, the optical coatings must be extremely uniform, sources of contamination must be minimized, optical surfaces must be nearly ideal, and alignments must be extremely precise. Satisfaction of all of these requirements entails substantial cost.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Bruce T.
2015-12-11
Problem: The overall goal of this proposal is to detect observed seasonal-mean precipitation variations and extreme event occurrences over the United States. Detection, e.g. the process of demonstrating that an observed change in climate is unusual, first requires some means of estimating the range of internal variability absent any external drivers. Ideally, the internal variability would be derived from the observations themselves; however, generally the observed variability is a confluence of both internal variability and variability in response to external drivers. Further, numerical climate models—the standard tool for detection studies—have their own estimates of intrinsic variability, which may differ substantially from that found in the observed system as well as other model systems. These problems are further compounded for weather and climate extremes, which as singular events are particularly ill-suited for detection studies because of their infrequent occurrence, limited spatial range, and underestimation within global and even regional numerical models. Rationale: As a basis for this research we will show how stochastic daily-precipitation models—models in which the simulated interannual-to-multidecadal precipitation variance is purely the result of the random evolution of daily precipitation events within a given time period—can be used to address many of these issues simultaneously. Through the novel application of these well-established models, we can first estimate the changes/trends in various means and extremes that can occur even with fixed daily-precipitation characteristics, e.g. that can occur simply as a result of the stochastic evolution of daily weather events within a given climate. Detection of a change in the observed climate—either naturally or anthropogenically forced—can then be defined as any change relative to this stochastic variability, e.g. as changes/trends in the means and extremes that could only have occurred through a change in the underlying climate. As such, this method is capable of detecting "hot spot" regions—as well as "flare ups" within the hot spot regions—that have experienced interannual to multi-decadal scale variations and trends in seasonal-mean precipitation and extreme events. Further, by applying the same methods to numerical climate models we can discern the fidelity of the current-generation climate models in representing detectability within the observed climate system. In this way, we can objectively determine the utility of these model systems for performing detection studies of historical and future climate change.
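One common construction of such a stochastic daily-precipitation model is a first-order Markov wet/dry occurrence chain with gamma-distributed wet-day amounts; the sketch below uses that assumed form (not necessarily the proposal's model) to generate the purely stochastic year-to-year spread in seasonal totals that the detection argument relies on. All parameter values are illustrative.

```python
import numpy as np

def simulate_season(n_days, p_wet_given_dry, p_wet_given_wet, shape, scale, rng):
    """One season of daily precipitation from a first-order Markov occurrence
    chain with gamma-distributed wet-day amounts; returns the seasonal total."""
    wet = rng.random() < p_wet_given_dry
    total = 0.0
    for _ in range(n_days):
        if wet:
            total += rng.gamma(shape, scale)
        p = p_wet_given_wet if wet else p_wet_given_dry
        wet = rng.random() < p
    return total

rng = np.random.default_rng(7)
# many synthetic "years" with identical daily-precipitation statistics
seasonal_totals = np.array([simulate_season(90, 0.3, 0.6, 0.7, 8.0, rng)
                            for _ in range(1000)])
# the spread below is internal (stochastic) variability only
print("mean seasonal total:", seasonal_totals.mean().round(1))
print("5th-95th percentile range:",
      np.percentile(seasonal_totals, [5, 95]).round(1))
```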
A short note on the use of the red-black tree in Cartesian adaptive mesh refinement algorithms
NASA Astrophysics Data System (ADS)
Hasbestan, Jaber J.; Senocak, Inanc
2017-12-01
Mesh adaptivity is an indispensable capability to tackle multiphysics problems with large disparity in time and length scales. With the availability of powerful supercomputers, there is a pressing need to extend time-proven computational techniques to extreme-scale problems. Cartesian adaptive mesh refinement (AMR) is one such method that enables simulation of multiscale, multiphysics problems. AMR is based on construction of octrees. Originally, an explicit tree data structure was used to generate and manipulate an adaptive Cartesian mesh. At least eight pointers are required in an explicit approach to construct an octree. Parent-child relationships are then used to traverse the tree. An explicit octree, however, is expensive in terms of memory usage and the time it takes to traverse the tree to access a specific node. For these reasons, implicit pointerless methods have been pioneered within the computer graphics community, motivated by applications requiring interactivity and realistic three dimensional visualization. Lewiner et al. [1] provides a concise review of pointerless approaches to generate an octree. Use of a hash table and Z-order curve are two key concepts in pointerless methods that we briefly discuss next.
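The two pointerless-octree ingredients named in the note, a Z-order (Morton) key and a hash table, can be sketched as follows; the 10-bit coordinate width, the (level, code) key layout, and the parent/child arithmetic are illustrative choices, not a specific published implementation.

```python
def part1by2(x):
    """Spread the bits of a 10-bit integer so they occupy every third bit."""
    x &= 0x3FF
    x = (x | (x << 16)) & 0x030000FF
    x = (x | (x << 8)) & 0x0300F00F
    x = (x | (x << 4)) & 0x030C30C3
    x = (x | (x << 2)) & 0x09249249
    return x

def morton3d(i, j, k):
    """Interleave three 10-bit indices into a single Z-order (Morton) key."""
    return part1by2(i) | (part1by2(j) << 1) | (part1by2(k) << 2)

# a pointerless octree: nodes live in a hash table keyed by (level, morton_code),
# so parents and children are found by arithmetic on the key instead of pointers
octree = {}
level, i, j, k = 3, 5, 2, 7
code = morton3d(i, j, k)
octree[(level, code)] = {"refined": False}

parent_key = (level - 1, code >> 3)                         # drop 3 bits: one level up
child_keys = [(level + 1, (code << 3) | c) for c in range(8)]
print(f"node key = (level={level}, code={code:b}), "
      f"parent = {parent_key}, children = {len(child_keys)}")
```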
Efficient SRAM yield optimization with mixture surrogate modeling
NASA Astrophysics Data System (ADS)
Zhongjian, Jiang; Zuochang, Ye; Yan, Wang
2016-12-01
Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Although fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires a large number of SPICE simulations, and circuit simulation accounts for the largest proportion of the time spent in the yield calculation. In this paper, a new method is proposed to address this issue. The key idea is to establish an efficient mixture surrogate model based on the design variables and process variables. The model is constructed from a moderate number of sample points obtained by SPICE simulation, and these points are used to train the mixture surrogate model with the lasso algorithm. Experimental results show that the proposed model is able to calculate the yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we develop an accelerated algorithm to further enhance the speed of the yield calculation. It is suitable for high-dimensional process variables and multi-performance applications.
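The sketch below illustrates the surrogate idea in its simplest form: a sparse polynomial model fitted with the lasso replaces the expensive circuit simulator for Monte Carlo yield estimation. The analytic stand-in for SPICE, the polynomial features, and the failure criterion are assumptions; the paper's mixture surrogate is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

def fake_spice_metric(x):
    """Stand-in for an expensive SPICE measurement (e.g., a read margin)."""
    return 0.4 - 0.8 * x[:, 0] + 0.3 * x[:, 1] ** 2 - 0.5 * x[:, 0] * x[:, 2]

# "expensive" training set: a few hundred sampled process/design points
X_train = rng.standard_normal((400, 4))
y_train = fake_spice_metric(X_train) + 0.01 * rng.standard_normal(400)

# sparse polynomial surrogate fitted with the lasso
poly = PolynomialFeatures(degree=2, include_bias=False)
surrogate = Lasso(alpha=1e-3).fit(poly.fit_transform(X_train), y_train)

# cheap Monte Carlo on the surrogate to estimate a failure probability
X_mc = rng.standard_normal((200_000, 4))
y_mc = surrogate.predict(poly.transform(X_mc))
print("estimated failure rate:", np.mean(y_mc < 0.0))
```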
Wilson, Kate E; Marouga, Rita; Prime, John E; Pashby, D Paul; Orange, Paul R; Crosier, Steven; Keith, Alexander B; Lathe, Richard; Mullins, John; Estibeiro, Peter; Bergling, Helene; Hawkins, Edward; Morris, Christopher M
2005-10-01
Comparative proteomic methods are rapidly being applied to many different biological systems including complex tissues. One pitfall of these methods is that in some cases, such as oncology and neuroscience, tissue complexity requires isolation of specific cell types and sample is limited. Laser microdissection (LMD) is commonly used for obtaining such samples for proteomic studies. We have combined LMD with sensitive thiol-reactive saturation dye labelling of protein samples and 2-D DIGE to identify protein changes in a test system, the isolated CA1 pyramidal neurone layer of a transgenic (Tg) rat carrying a human amyloid precursor protein transgene. Saturation dye labelling proved to be extremely sensitive, with a spot map of over 5,000 proteins being readily produced from 5 µg total protein, and over 100 proteins being significantly altered at p < 0.0005. Of the proteins identified, all showed coherent changes associated with transgene expression. It was, however, difficult to identify significantly different proteins using PMF and MALDI-TOF on gels containing less than 500 µg total protein. The use of saturation dye labelling of limiting samples will therefore require the use of highly sensitive MS techniques to identify the significantly altered proteins isolated using methods such as LMD.
A systematic review of hypofractionation for primary management of prostate cancer.
Koontz, Bridget F; Bossi, Alberto; Cozzarini, Cesare; Wiegel, Thomas; D'Amico, Anthony
2015-10-01
Technological advances in radiation therapy delivery have permitted the use of high-dose-per-fraction radiation therapy (RT) for early-stage prostate cancer (PCa). Level 1 evidence supporting the safety and efficacy of hypofractionated RT is evolving as this modality becomes more widely utilized and refined. To perform a systematic review of the current evidence on the safety and efficacy of hypofractionated RT for early-stage PCa and to provide in-context recommendations for current application of this technology. Embase, PubMed, and Scopus electronic databases were queried for English-language articles from January 1990 through June 2014. Prospective studies with a minimum of 50 patients were included. Separate consideration was made for studies involving moderate hypofractionation (doses of 2.5-4Gy per fraction) and extreme hypofractionation (5-10Gy in 4-7 fractions). Six relatively small superiority designed randomized trials of standard fractionation versus moderate hypofractionation in predominantly low- and intermediate-risk PCa have been published with follow-up ranging from 4 to 8 yr, noting similar biochemical control (5-yr freedom from biochemical failure in modern studies is >80% for low-risk and intermediate-risk patients) and late grade ≥2 genitourinary and gastrointestinal toxicities (between 2% and 20%). Noninferiority studies are pending. In prospective phase 2 studies, extreme hypofractionation has promising 2- to 5-yr biochemical control rates of >90% for low-risk patients. Results from a randomized trial are expected in 2015. Moderate hypofractionation has 5-yr data to date establishing safety compared with standard fractionation, but 10-yr outcomes and longer follow-up are needed to establish noninferiority for clinical effectiveness. Extreme hypofractionation is promising but as yet requires reporting of randomized data prior to application outside of a clinical protocol. Hypofractionation for prostate cancer delivers relatively high doses of radiation per treatment. Prospective studies support the safety of moderate hypofractionation, while extreme fractionation may have greater toxicity. Both show promising cancer control but long-term results of noninferiority studies of both methods are required before use in routine treatment outside of clinical protocols. Copyright © 2014 European Association of Urology. Published by Elsevier B.V. All rights reserved.
Qiu, Qinyin; Adamovich, Sergei; Saleh, Soha; Lafond, Ian; Merians, Alma S.; Fluet, Gerard G.
2015-01-01
Nine children with cerebral palsy and nine adults with stroke were trained using 5 different upper extremity simulations using the NJIT-RAVR system for approximately nine to twelve hours over a three week period. Both groups made improvements in clinical measurements of upper extremity function and reaching kinematics. Patterns and magnitudes of improvement differ between the two groups. Responses to training required adjustment of the robotic system to accommodate the rehabilitation needs of children with cerebral palsy. PMID:22275632
Social selection is a powerful explanation for prosociality.
Nesse, Randolph M
2016-01-01
Cultural group selection helps explain human cooperation, but social selection offers a complementary, more powerful explanation. Just as sexual selection shapes extreme traits that increase matings, social selection shapes extreme traits that make individuals preferred social partners. Self-interested partner choices create strong and possibly runaway selection for prosocial traits, without requiring group selection, kin selection, or reciprocity.
ERIC Educational Resources Information Center
Southgate, Erica; Brosnan, Caragh; Lempp, Heidi; Kelly, Brian; Wright, Sarah; Outram, Sue; Bennett, Anna
2017-01-01
Higher education is understood as essential to enabling social mobility. Research and policy have centred on access to university, but recently attention has turned to the journey of social mobility itself--and its costs. Long-distance or "extreme" social mobility journeys particularly require analysis. This paper examines journeys of…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-08
... Proposed Rule Change Amending NYSE Rule 123C(8)(a)(1) To Extend Operation of the Extreme Order Imbalances... suspend certain rule requirements at the close when extreme order imbalances may cause significant... Imbalances Pilot'' or ``Pilot'').\\3\\ Through this filing, NYSE proposes to extend the Pilot until the earlier...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-08
... suspend certain rule requirements at the close when extreme order imbalances may cause significant... Imbalances Pilot'' or ``Pilot'').\\4\\ Through this filing, NYSE proposes to extend the Pilot until the earlier... suspend NYSE Rules 52 (Hours of Operation) to resolve an extreme order imbalance that may result in a...
NASA Astrophysics Data System (ADS)
Nair, B. G.; Winter, N.; Daniel, B.; Ward, R. M.
2016-07-01
Direct measurement of the flow of electric current during VAR is extremely difficult due to the aggressive environment, as the arc process itself controls the distribution of current. In previous studies the technique of "magnetic source tomography" was presented; this was shown to be effective, but it used a computationally intensive iterative method to analyse the distribution of arc centre position. In this paper we present faster computational methods, requiring less numerical optimisation, to determine the centre position of a single distributed arc both numerically and experimentally. Numerical validation of the algorithms was carried out on models and experimental validation on measurements of titanium and nickel alloys (Ti6Al4V and INCONEL 718). The results are used to comment on the effects of process parameters on arc behaviour during VAR.
Study of high speed complex number algorithms. [for determining antenna for field radiation patterns
NASA Technical Reports Server (NTRS)
Heisler, R.
1981-01-01
A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shapes. Numerical results were computed for both gain and phase and are compared with other published work.
Keane, Robert E.; Burgan, Robert E.; Van Wagtendonk, Jan W.
2001-01-01
Fuel maps are essential for computing spatial fire hazard and risk and simulating fire growth and intensity across a landscape. However, fuel mapping is an extremely difficult and complex process requiring expertise in remotely sensed image classification, fire behavior, fuels modeling, ecology, and geographical information systems (GIS). This paper first presents the challenges of mapping fuels: canopy concealment, fuelbed complexity, fuel type diversity, fuel variability, and fuel model generalization. Then, four approaches to mapping fuels are discussed with examples provided from the literature: (1) field reconnaissance; (2) direct mapping methods; (3) indirect mapping methods; and (4) gradient modeling. A fuel mapping method is proposed that uses current remote sensing and image processing technology. Future fuel mapping needs are also discussed which include better field data and fuel models, accurate GIS reference layers, improved satellite imagery, and comprehensive ecosystem models.
Composite membranes from photochemical synthesis of ultrathin polymer films
NASA Astrophysics Data System (ADS)
Liu, Chao; Martin, Charles R.
1991-07-01
There has recently been a resurgence of interest in synthetic membranes and membrane-based processes [1-12]. This is motivated by a wide variety of technological applications, such as chemical separations [1-7], bioreactors and sensors [8,9], energy conversion [10,11] and drug-delivery systems [12]. Many of these technologies require the ability to prepare extremely thin, defect-free synthetic (generally polymeric) films, which are supported on microporous supports to form composite membranes. Here we describe a method for producing composite membranes of this sort that incorporate high-quality polymer films less than 50 nm thick. The method involves interfacial photopolymerization of a thin polymer film on the surface of the microporous substrate. We have been able to use this technique to synthesize a variety of functionalized ultrathin films based on electroactive, photoactive and ion-exchange polymers. We demonstrate the method here with composite membranes that show exceptional gas-transport properties.
Methodology for worker neutron exposure evaluation in the PDCF facility design.
Scherpelz, R I; Traub, R J; Pryor, K H
2004-01-01
A project headed by Washington Group International is meant to design the Pit Disassembly and Conversion Facility (PDCF) to convert the plutonium pits from excessed nuclear weapons into plutonium oxide for ultimate disposition. Battelle staff are performing the shielding calculations that will determine appropriate shielding so that the facility workers will not exceed target exposure levels. The target exposure levels for workers in the facility are 5 mSv/y for the whole body and 100 mSv/y for the extremity, which presents a significant challenge to the designers of a facility that will process tons of radioactive material. The design effort depended on shielding calculations to determine appropriate thickness and composition for glove box walls, and concrete wall thicknesses for storage vaults. Pacific Northwest National Laboratory (PNNL) staff used ORIGEN-S and SOURCES to generate gamma and neutron source terms, and the Monte Carlo neutron photon transport code MCNP-4C to calculate the radiation transport in the facility. The shielding calculations were performed by a team of four scientists, so it was necessary to develop a consistent methodology. There was also a requirement for the study to be cost-effective, so efficient methods of evaluation were required. The calculations were subject to rigorous scrutiny by internal and external reviewers, so acceptability was a major feature of the methodology. Some of the issues addressed in the development of the methodology included selecting appropriate dose factors, developing a method for handling extremity doses, adopting an efficient method for evaluating effective dose equivalent in a non-uniform radiation field, modelling the reinforcing steel in concrete, and modularising the geometry descriptions for efficiency. The relative importance of the neutron dose equivalent compared with the gamma dose equivalent varied substantially depending on the specific shielding conditions, and lessons were learned from this effect. This paper addresses these issues and the resulting methodology.
Kaakinen, M; Huttunen, S; Paavolainen, L; Marjomäki, V; Heikkilä, J; Eklund, L
2014-01-01
Phase-contrast illumination is a simple and the most commonly used microscopic method for observing nonstained living cells. Automatic cell segmentation and motion analysis provide tools to analyze single-cell motility in large cell populations. However, the challenge is to find a sophisticated method that is sufficiently accurate to generate reliable results, robust enough to function under the wide range of illumination conditions encountered in phase-contrast microscopy, and computationally light enough for efficient analysis of large numbers of cells and image frames. To develop better automatic tools for the analysis of low-magnification phase-contrast images in time-lapse cell migration movies, we investigated the performance of a cell segmentation method based on the intrinsic properties of maximally stable extremal regions (MSER). MSER was found to be reliable and effective in a wide range of experimental conditions. Compared to commonly used segmentation approaches, MSER required negligible preoptimization steps, thus dramatically reducing the computation time. To analyze cell migration characteristics in time-lapse movies, the MSER-based automatic cell detection was accompanied by a Kalman filter multiobject tracker that efficiently tracked individual cells even in confluent cell populations. This allowed quantitative cell motion analysis resulting in accurate measurements of the migration magnitude and direction of individual cells, as well as characteristics of collective migration of cell groups. Our results demonstrate that MSER accompanied by temporal data association is a powerful tool for accurate and reliable analysis of the dynamic behaviour of cells in phase-contrast image sequences. These techniques tolerate varying and nonoptimal imaging conditions and, owing to their relatively light computational requirements, should help to resolve problems in computationally demanding and often time-consuming large-scale dynamical analysis of cultured cells. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.
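As a rough illustration of the detection step only (not the authors' pipeline), the sketch below runs OpenCV's MSER implementation on a synthetic grayscale frame standing in for a phase-contrast image; the image, the default MSER parameters, and the hand-off to a Kalman-filter tracker are assumptions made for the example.

```python
import cv2
import numpy as np

# Synthetic stand-in for a low-magnification phase-contrast frame:
# a dim background with a few darker "cells".
frame = np.full((256, 256), 180, dtype=np.uint8)
rng = np.random.default_rng(0)
for cx, cy in rng.integers(30, 226, size=(8, 2)):
    cv2.circle(frame, (int(cx), int(cy)), 10, 90, -1)

# MSER needs essentially no pre-optimisation; only the stability delta and
# area bounds would typically be tuned to the expected cell size.
mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(frame)

# Reduce each region to a centroid, ready to hand to a Kalman-filter
# multi-object tracker (not shown) for frame-to-frame association.
centroids = [pts.reshape(-1, 2).mean(axis=0) for pts in regions]
print(f"{len(centroids)} candidate regions detected")
```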
Picometer stable scan mechanism for gravitational wave detection in space: LISA PAAM
NASA Astrophysics Data System (ADS)
Pijnenburg, J. A. C. M.; Rijnveld, N.
2017-11-01
Detection and observation of gravitational waves requires extreme stability in the frequency range 0.03 mHz to 1 Hz. The Laser Interferometer Space Antenna (LISA) mission will attain this by creating a giant interferometer in space, based on free floating proof masses in three spacecraft. Due to orbit evolution and time delay in the interferometer arms, the direction of transmitted light changes. To solve this problem, a picometer stable Point-Ahead Angle Mechanism (PAAM) was designed, realized and successfully tested. The PAAM concept is based on a rotatable mirror. The critical requirements are the contribution to the optical path length (less than 1.4 pm/√Hz) and the angular jitter (less than 8 nrad/√Hz). Extreme dimensional stability is achieved by manufacturing a monolithic Haberland hinge mechanism out of Ti6Al4V, through high precision wire erosion. Extreme thermal stability is realized by placing the thermal center on the surface of the mirror. Because of piezo actuator noise and leakage, the PAAM has to be controlled in closed loop. To meet the requirements at low frequencies, an active target capacitance-to-digital converter is used. Interferometric measurements with a triangular resonant cavity in vacuum proved that the PAAM meets the requirements.
Grotjahn, Richard; Black, Robert; Leung, Ruby; ...
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is upon extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large scale meteorological patterns (LSMPs). Methods used to define extreme events statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency, underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms such as the effects of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs more specifically to understand the role of LSMPs on past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert-Gallup, Aubrey Celia; Lewis, John R.; Brooks, Dusty Marie
This report describes the methods, results, and conclusions of the analysis of 11 scenarios defined to exercise various options available in the xLPR (Extremely Low Probability of Rupture) Version 2.0 code. The scope of the scenario analysis is three-fold: (i) exercise the various options and components comprising xLPR v2.0 and defining each scenario; (ii) develop and exercise methods for analyzing and interpreting xLPR v2.0 outputs; and (iii) exercise the various sampling options available in xLPR v2.0. The simulation workflow template developed during the course of this effort helps to form a basis for the application of the xLPR code to problems with similar inputs and probabilistic requirements and address in a systematic manner the three points covered by the scope.
Accurate and efficient calculation of response times for groundwater flow
NASA Astrophysics Data System (ADS)
Carr, Elliot J.; Simpson, Matthew J.
2018-03-01
We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L^2/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
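As a hedged sketch of the moment-based idea (the notation here is mine, not necessarily the authors'): for hydraulic head h(x,t) relaxing from an initial state h_0(x) to the steady state h_inf(x), one may write

```latex
F(x,t) = 1 - \frac{h(x,t) - h_\infty(x)}{h_0(x) - h_\infty(x)}, \qquad
f(x,t) = \frac{\partial F(x,t)}{\partial t}, \qquad
M_k(x) = \int_0^\infty t^{k} f(x,t)\,\mathrm{d}t,
```

so that M_1(x) is the mean action time, and the improved response-time estimate is built from the first k raw moments together with the asymptotic behaviour of the cumulative distribution F. For flow in a homogeneous aquifer of length L, this construction is consistent with the heuristic scaling T proportional to L^2/D quoted above.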
Frazin, Richard A
2016-04-01
A new generation of telescopes with mirror diameters of 20 m or more, called extremely large telescopes (ELTs), has the potential to provide unprecedented imaging and spectroscopy of exoplanetary systems, if the difficulties in achieving the extremely high dynamic range required to differentiate the planetary signal from the star can be overcome to a sufficient degree. Fully utilizing the potential of ELTs for exoplanet imaging will likely require simultaneous and self-consistent determination of both the planetary image and the unknown aberrations in multiple planes of the optical system, using statistical inference based on the wavefront sensor and science camera data streams. This approach promises to overcome the most important systematic errors inherent in the various schemes based on differential imaging, such as angular differential imaging and spectral differential imaging. This paper is the first in a series on this subject, in which a formalism is established for the exoplanet imaging problem, setting the stage for the statistical inference methods to follow in the future. Every effort has been made to be rigorous and complete, so that validity of approximations to be made later can be assessed. Here, the polarimetric image is expressed in terms of aberrations in the various planes of a polarizing telescope with an adaptive optics system. Further, it is shown that current methods that utilize focal plane sensing to correct the speckle field, e.g., electric field conjugation, rely on the tacit assumption that aberrations on multiple optical surfaces can be represented as aberration on a single optical surface, ultimately limiting their potential effectiveness for ground-based astronomy.
Patel, Jigna; Qiu, Qinyin; Yarossi, Mathew; Merians, Alma; Massood, Supriya; Tunik, Eugene; Adamovich, Sergei; Fluet, Gerard
2016-01-01
Purpose: Explore the potential benefits of using priming methods prior to an active hand task in the acute phase post-stroke in persons with severe upper extremity hemiparesis. Methods: Five individuals were trained using priming techniques, including virtual reality (VR) based visual mirror feedback and contralaterally controlled passive movement strategies, prior to training with an active pinch force modulation task. Clinical, kinetic, and neurophysiological measurements were taken before and after the training period. Clinical measures were also taken at six months post training. Results: The two priming simulations and active training were well tolerated early after stroke. Priming effects were suggested by increased maximal pinch force immediately after visual and movement based priming. Despite having no clinically observable movement distally, the subjects were able to volitionally coordinate isometric force and muscle activity (EMG) in a pinch tracing task. The Root Mean Square Error (RMSE) of force during the pinch tracing task gradually decreased over the training period, suggesting that learning may have occurred. Changes in motor cortical neurophysiology were seen in the unaffected hemisphere using Transcranial Magnetic Stimulation (TMS) mapping. Significant improvements in motor recovery, as measured by the Action Research Arm Test (ARAT) and the Upper Extremity Fugl-Meyer Assessment (UEFMA), were demonstrated at six months post training by three of the five subjects. Conclusion: This study suggests that an early hand-based intervention using visual and movement based priming activities and a scaled motor task allows participation by persons without the motor control required for traditionally presented rehabilitation and testing. PMID:27636200
Effect of low temperatures on osseous and cartilaginous tissues
NASA Technical Reports Server (NTRS)
Pankov, Y. Y.; Babiychuk, G. A.; Malyshkina, S. V.; Zhigun, A. I.
1983-01-01
The use of extreme cold to treat tumoral afflictions of the extremities is discussed. Cryogenic methods and instruments are discussed, and the levels of accumulated knowledge in this area (as well as the areas still in question) are evaluated. The overall promise for cryogenic methods of treatment is acknowledged, and areas which need further development are noted.
The application of the statistical theory of extreme values to gust-load problems
NASA Technical Reports Server (NTRS)
Press, Harry
1950-01-01
An analysis is presented which indicates that the statistical theory of extreme values is applicable to the problem of predicting the frequency of encountering the larger gust loads and gust velocities, both for specific test conditions and for commercial transport operations. The extreme-value theory provides an analytic form for the distributions of maximum values of gust load and velocity. Methods of fitting the distribution are given, along with a method of estimating the reliability of the predictions. The theory of extreme values is applied to available load data from commercial transport operations. The results indicate that the estimates of the frequency of encountering the larger loads are more consistent with the data and more reliable than those obtained in previous analyses. (author)
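As a modern, hedged analogue of this kind of fit (not the report's original fitting procedure), the following sketch fits a Type I (Gumbel) extreme-value distribution to synthetic per-flight maximum gust loads with SciPy and estimates the frequency of exceeding a given load; the data and threshold are illustrative only.

```python
import numpy as np
from scipy import stats

# Synthetic stand-in for per-flight maximum gust loads (arbitrary units).
rng = np.random.default_rng(1)
flight_maxima = rng.gumbel(loc=1.0, scale=0.25, size=500)

# Fit the Type I (Gumbel) extreme-value distribution to the maxima.
loc, scale = stats.gumbel_r.fit(flight_maxima)

# Expected frequency of encountering a load larger than 2.0,
# expressed as flights per exceedance.
p_exceed = stats.gumbel_r.sf(2.0, loc, scale)
print(f"loc={loc:.3f}, scale={scale:.3f}, ~1 exceedance per {1/p_exceed:.0f} flights")
```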
Method for extreme ultraviolet lithography
Felter, T. E.; Kubiak, Glenn D.
1999-01-01
A method of producing a patterned array of features, in particular, gate apertures, in the size range 0.4-0.05 µm using projection lithography and extreme ultraviolet (EUV) radiation. A high energy laser beam is used to vaporize a target material in order to produce a plasma which, in turn, produces extreme ultraviolet radiation of a characteristic wavelength of about 13 nm for lithographic applications. The radiation is transmitted by a series of reflective mirrors to a mask which bears the pattern to be printed. The demagnified focused mask pattern is, in turn, transmitted by means of appropriate optics and in a single exposure, to a substrate coated with photoresists designed to be transparent to EUV radiation and also satisfy conventional processing methods.
Method for extreme ultraviolet lithography
Felter, T. E.; Kubiak, G. D.
2000-01-01
A method of producing a patterned array of features, in particular, gate apertures, in the size range 0.4-0.05 µm using projection lithography and extreme ultraviolet (EUV) radiation. A high energy laser beam is used to vaporize a target material in order to produce a plasma which, in turn, produces extreme ultraviolet radiation of a characteristic wavelength of about 13 nm for lithographic applications. The radiation is transmitted by a series of reflective mirrors to a mask which bears the pattern to be printed. The demagnified focused mask pattern is, in turn, transmitted by means of appropriate optics and in a single exposure, to a substrate coated with photoresists designed to be transparent to EUV radiation and also satisfy conventional processing methods.
Computation of elementary modes: a unifying framework and the new binary approach
Gagneur, Julien; Klamt, Steffen
2004-01-01
Background: Metabolic pathway analysis has been recognized as a central approach to the structural analysis of metabolic networks. The concept of elementary (flux) modes provides a rigorous formalism to describe and assess pathways and has proven to be valuable for many applications. However, computing elementary modes is a hard computational task. In recent years, a multiplicity of algorithms dedicated to it has appeared, so a summarizing point of view and continued improvement of the current methods are required. Results: We show that computing the set of elementary modes is equivalent to computing the set of extreme rays of a convex cone. This standard mathematical representation provides a unified framework that encompasses the most prominent algorithmic methods that compute elementary modes and allows a clear comparison between them. Taking lessons from this benchmark, we here introduce a new method, the binary approach, which computes the elementary modes as binary patterns of participating reactions from which the respective stoichiometric coefficients can be computed in a post-processing step. We implemented the binary approach in FluxAnalyzer 5.1, a software that is free for academics. The binary approach decreases the memory demand by up to 96% without loss of speed, giving the most efficient method available for computing elementary modes to date. Conclusions: The equivalence between elementary modes and extreme ray computations offers opportunities for employing tools from polyhedral computation for metabolic pathway analysis. The new binary approach introduced herein was derived from this general theoretical framework and facilitates the computation of elementary modes in considerably larger networks. PMID:15527509
Aziz, Faisal; Lehman, Erik; Blebea, John; Lurie, Fedor
2017-01-01
Background: Deep venous thrombosis after any surgical operation is considered a preventable complication. Lower extremity bypass surgery is a commonly performed operation to improve blood flow to lower extremities in patients with severe peripheral arterial disease. Despite advances in endovascular surgery, lower extremity arterial bypass remains the gold standard treatment for severe, symptomatic peripheral arterial disease. The purpose of this study is to identify the clinical risk factors associated with development of deep venous thrombosis after lower extremity bypass surgery. Methods: The American College of Surgeons' NSQIP database was utilized and all lower extremity bypass procedures performed in 2013 were examined. Patient and procedural characteristics were evaluated. Univariate and multivariate logistic regression analysis was used to determine independent risk factors for the development of postoperative deep venous thrombosis. Results: A total of 2646 patients (65% males and 35% females) underwent lower extremity open revascularization during the year 2013. The following factors were found to be significantly associated with postoperative deep venous thrombosis: transfusion >4 units of packed red blood cells (odds ratio (OR) = 5.21, confidence interval (CI) = 1.29-22.81, p = 0.03), postoperative urinary tract infection (OR = 12.59, CI = 4.12-38.48, p < 0.01), length of hospital stay >28 days (OR = 9.30, CI = 2.79-30.92, p < 0.01), bleeding (OR = 2.93, CI = 1.27-6.73, p = 0.01), deep wound infection (OR = 3.21, CI = 1.37-7.56, p < 0.01), and unplanned reoperation (OR = 4.57, CI = 2.03-10.26, p < 0.01). Of these, multivariable analysis identified the factors independently associated with development of deep venous thrombosis after lower extremity bypass surgery to be unplanned reoperation (OR = 3.57, CI = 1.54-8.30, p < 0.01), reintubation (OR = 8.93, CI = 2.66-29.97, p < 0.01), and urinary tract infection (OR = 7.64, CI = 2.27-25.73, p < 0.01). Presence of all three factors was associated with a 54% incidence of deep venous thrombosis. Conclusions: Development of deep venous thrombosis after lower extremity bypass is a serious but infrequent complication. Patients who require unplanned return to the operating room, require reintubation, or develop a postoperative urinary tract infection are at high risk for developing postoperative deep venous thrombosis. Increased monitoring of these patients and ensuring adequate deep venous thrombosis prophylaxis for such patients is suggested.
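As a hedged sketch of the kind of multivariable analysis described (not the authors' actual NSQIP workflow), the following Python example fits a logistic regression on synthetic binary risk factors and reports adjusted odds ratios; the predictors, prevalences, and coefficients are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in: binary predictors (unplanned reoperation, reintubation,
# postoperative UTI) and a binary DVT outcome for ~2600 patients.
rng = np.random.default_rng(2)
n = 2646
X = rng.binomial(1, [0.08, 0.03, 0.04], size=(n, 3)).astype(float)
logit_p = -4.5 + X @ np.array([1.3, 2.2, 2.0])       # assumed true effects
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Multivariable logistic regression; exponentiated coefficients are
# adjusted odds ratios, exponentiated bounds are 95% confidence intervals.
model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```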
Changes in the probability of co-occurring extreme climate events
NASA Astrophysics Data System (ADS)
Diffenbaugh, N. S.
2017-12-01
Extreme climate events such as floods, droughts, heatwaves, and severe storms exert acute stresses on natural and human systems. When multiple extreme events co-occur, either in space or time, the impacts can be substantially compounded. A diverse set of human interests - including supply chains, agricultural commodities markets, reinsurance, and deployment of humanitarian aid - have historically relied on the rarity of extreme events to provide a geographic hedge against the compounded impacts of co-occurring extremes. However, changes in the frequency of extreme events in recent decades imply that the probability of co-occurring extremes is also changing, and is likely to continue to change in the future in response to additional global warming. This presentation will review the evidence for historical changes in extreme climate events and the response of extreme events to continued global warming, and will provide some perspective on methods for quantifying changes in the probability of co-occurring extremes in the past and future.
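A toy calculation (assumptions entirely mine) illustrates why the co-occurrence probability responds so strongly to changes in the marginal frequencies: if two regions experience a given extreme independently, the joint probability is the product of the marginals, so doubling both marginal frequencies quadruples the joint risk.

```python
# Two regions with independent annual extreme-event probabilities.
p1_hist, p2_hist = 0.05, 0.05          # historical 1-in-20-year events
p1_now,  p2_now  = 0.10, 0.10          # both frequencies doubled

joint_hist = p1_hist * p2_hist          # 0.0025, i.e. 1-in-400 years
joint_now  = p1_now  * p2_now           # 0.01,   i.e. 1-in-100 years
print(joint_now / joint_hist)           # -> 4.0: joint risk quadruples
```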
NASA Astrophysics Data System (ADS)
Yin, Yixing; Chen, Haishan; Xu, Chongyu; Xu, Wucheng; Chen, Changchun
2014-05-01
The regionalization methods which 'trade space for time' by including several at-site data records in the frequency analysis are an efficient tool to improve the reliability of extreme quantile estimates. With the main aims of improving the understanding of the regional frequency of extreme precipitation and of providing scientific and practical background for formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region, this paper applies the L-moment-based index-flood (LMIF) method, one of the popular regionalization methods, to the regional frequency analysis of extreme precipitation; particular attention is paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered in most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, the generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fit distributions for most of the subregions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates, taking inter-site dependence into consideration. The results showed that the root mean square errors (RMSEs) were bigger and the 90% error bounds wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years were obtained, indicating two regions with the highest precipitation extremes (the southeastern coastal area of Zhejiang Province and the southwestern part of Anhui Province) and a large region with low precipitation extremes in the north and middle parts of Zhejiang Province, Shanghai City, and Jiangsu Province. However, the central areas with low precipitation extremes are the most developed and densely populated parts of the study area, so floods there would cause great loss of human life and property. These findings will help policymakers and stakeholders in water resource management formulate regional development strategies against the menace of frequently emerging floods.
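As a hedged, at-site analogue of the distribution-fitting step (using maximum likelihood rather than the L-moment index-flood procedure of the paper), the sketch below fits a GEV distribution to synthetic annual maximum precipitation and estimates the 100-year quantile; the data and parameter values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic annual maximum precipitation series (mm), 60 years.
annual_max_precip = stats.genextreme.rvs(c=-0.1, loc=120, scale=35,
                                         size=60, random_state=3)

# Fit the GEV and estimate the 100-year extreme precipitation quantile.
c, loc, scale = stats.genextreme.fit(annual_max_precip)
q100 = stats.genextreme.ppf(1 - 1/100, c, loc=loc, scale=scale)
print(f"shape={c:.2f}, 100-year quantile ~ {q100:.0f} mm")
```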
Multi-window detection for P-wave in electrocardiograms based on bilateral accumulative area.
Chen, Riqing; Huang, Yingsong; Wu, Jian
2016-11-01
P-wave detection is one of the most challenging aspects in electrocardiograms (ECGs) due to its low amplitude, low frequency, and variable waveforms. This work introduces a novel multi-window detection method for P-wave delineation based on the bilateral accumulative area. The bilateral accumulative area is calculated by summing the areas covered by the P-wave curve with left and right sliding windows. The onset and offset of a positive P-wave correspond to the local maxima of the area detector. The position drift and difference in area variation of local extreme points with different windows are used to systematically combine multi-window and 12-lead synchronous detection methods, which are used to screen the optimization boundary points from all extreme points of different window widths and adaptively match the P-wave location. The proposed method was validated with ECG signals from various databases, including the Standard CSE Database, T-Wave Alternans Challenge Database, PTB Diagnostic ECG Database, and the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database. The average sensitivity Se was 99.44% with a positive predictivity P+ of 99.37% for P-wave detection. Standard deviations of 3.7 and 4.3ms were achieved for the onset and offset of P-waves, respectively, which is in agreement with the accepted tolerances required by the CSE committee. Compared with well-known delineation methods, this method can achieve high sensitivity and positive predictability using a simple calculation process. The experiment results suggest that the bilateral accumulative area could be an effective detection tool for ECG signal analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
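The following Python sketch is one plausible reading of the windowed-area idea, not the authors' exact detector: accumulate the signal over a window to the left and to the right of every sample, repeat for several window widths as in a multi-window scheme, and use extrema of the resulting curves to bracket a low-amplitude wave. The synthetic signal, window widths, and the crude onset criterion are all assumptions for illustration.

```python
import numpy as np

def windowed_areas(x, w):
    """Left- and right-sided accumulated areas of width w at every sample."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    idx = np.arange(len(x))
    left = c[idx + 1] - c[np.maximum(idx + 1 - w, 0)]      # window ending at each sample
    right = c[np.minimum(idx + w, len(x))] - c[idx]        # window starting at each sample
    return left, right

fs = 500                                         # Hz, assumed sampling rate
t = np.arange(0, 0.4, 1 / fs)
signal = 0.05 * np.exp(-((t - 0.2) / 0.02) ** 2)  # synthetic positive P-wave bump

# Multi-window screening: for each width, keep the sample where the
# right-window area most exceeds the left-window area (a crude onset candidate).
for w_ms in (30, 40, 50):
    w = int(w_ms * fs / 1000)
    left, right = windowed_areas(signal, w)
    onset = int(np.argmax(right - left))
    print(f"window {w_ms} ms -> onset candidate at {t[onset]*1000:.0f} ms")
```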
Park, Jung Ho; Kim, Hee-Chun; Lee, Jae Hoon; Kim, Jin Soo; Roh, Si Young; Yi, Cheol Ho; Kang, Yoon Kyoo; Kwon, Bum Sun
2009-05-01
While the lower extremities support the weight and move the body, the upper extremities are essential for the activities of daily living, which require many detailed movements. Therefore, a disability of the upper extremity function should include a limitation of all motions of the joints and sensory loss, which affects the activities. In this study, disabilities of the upper extremities were evaluated according to the following conditions: 1) amputation, 2) joint contracture, 3) diseases of upper extremity, 4) weakness, 5) sensory loss of the finger tips, and 6) vascular and lymphatic diseases. The order of 1) to 6) is the order of major disability and there is no need to evaluate a lower order disability when a higher order one exists in the same joint or a part of the upper extremity. However, some disabilities can be either added or substituted when there are special contributions from multiple disabilities. An upper extremity disability should be evaluated after the completion of treatment and full adaptation when further functional changes are not expected. The dominance of the right or left hand before the disability should not be considered when there is a higher rate of disability.
Progress on glass ceramic ZERODUR enabling nanometer precision
NASA Astrophysics Data System (ADS)
Jedamzik, Ralf; Kunisch, Clemens; Nieder, Johannes; Weber, Peter; Westerhoff, Thomas
2016-03-01
The semiconductor industry is making continuous progress in shrinking feature sizes, developing technologies and processes to achieve feature sizes below 10 nm. The overlay specification required for successful production is in the range of one nanometer or even smaller. Consequently, materials designed into the metrology systems of exposure or inspection tools need to fulfill ever tighter specifications on the coefficient of thermal expansion (CTE). The glass ceramic ZERODUR® is a well-established material in critical components of microlithography wafer steppers and is offered with an extremely low coefficient of thermal expansion, the tightest tolerance available on the market. SCHOTT is continuously improving its manufacturing processes and its methods to measure and characterize the CTE behavior of ZERODUR®. This paper focuses on the "Advanced Dilatometer" for determination of the CTE, developed at SCHOTT in recent years and introduced into production in Q1 2015. The achievements in improving the absolute CTE measurement accuracy and reproducibility are described in detail and compared to the CTE measurement accuracy reported by the Physikalisch-Technische Bundesanstalt (PTB), the National Metrology Institute of Germany. CTE homogeneity is of the highest importance for achieving nanometer precision at larger scales. Additionally, the paper presents data on short-scale CTE homogeneity and its improvement over the last two years. The data presented in this paper explain the capability of ZERODUR® to enable the extreme precision required for future generations of lithography equipment and processes.
Modeling Hydrological Extremes in the Anthropocene
NASA Astrophysics Data System (ADS)
Di Baldassarre, Giuliano; Martinez, Fabian; Kalantari, Zahra; Viglione, Alberto
2017-04-01
Hydrological studies have investigated human impacts on hydrological extremes, i.e. droughts and floods, while social studies have explored human responses and adaptation to them. Yet, there is still little understanding about the dynamics resulting from two-way feedbacks, i.e. both impacts and responses. Traditional risk assessment methods therefore fail to assess future dynamics, and thus risk reduction strategies built on these methods can lead to unintended consequences in the medium to long term. Here we review the dynamics resulting from the reciprocal links between society and hydrological extremes, and describe initial efforts to model floods and droughts in the Anthropocene. In particular, we first discuss the need for a novel approach to explicitly account for human interactions with both hydrological extremes, and then present a stylized model simulating the reciprocal effects between droughts, floods and reservoir operation rules. Unprecedented opportunities offered by the growing availability of global data and worldwide archives to uncover the mutual shaping of hydrological extremes and society across places and scales are also discussed.
Final Rule: Community Right-To-Know Reporting Requirements Federal Register Notice
Final reporting thresholds and threshold planning quantity (TPQ) for extremely hazardous substances (EHS) and non-EHS hazardous chemicals, required under Emergency Planning and Community Right-to-Know Act, Superfund Amendments and Reauthorization Act.
Improving the Accuracy of Estimation of Climate Extremes
NASA Astrophysics Data System (ADS)
Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.
2010-12-01
Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.
Parra-Robles, Juan; Cross, Albert R; Santyr, Giles E
2005-05-01
Hyperpolarized noble gases (HNGs) provide exciting possibilities for MR imaging at ultra-low magnetic field strengths (<0.15 T) due to the extremely high polarizations available from optical pumping. The fringe field of many superconductive magnets used in clinical MR imaging can provide a stable magnetic field for this purpose. In addition to offering the benefit of HNG MR imaging alongside conventional high field proton MRI, this approach offers the other useful advantage of providing different field strengths at different distances from the magnet. However, the extremely strong field gradients associated with the fringe field present a major challenge for imaging since impractically high active shim currents would be required to achieve the necessary homogeneity. In this work, a simple passive shimming method based on the placement of a small number of ferromagnetic pieces is proposed to reduce the fringe field inhomogeneities to a level that can be corrected using standard active shims. The method explicitly takes into account the strong variations of the field over the volume of the ferromagnetic pieces used to shim. The method is used to obtain spectra in the fringe field of a high-field (1.89 T) superconducting magnet from hyperpolarized 129Xe gas samples at two different ultra-low field strengths (8.5 and 17 mT). The linewidths of spectra measured from imaging phantoms (30 Hz) indicate a homogeneity sufficient for MRI of the rat lung.
Lock hopper valves for coal gasification plant service
NASA Technical Reports Server (NTRS)
Schoeneweis, E. F.
1977-01-01
Although the operating principle of the lock hopper system is extremely simple, valve applications involving this service in coal gasification plants are likewise extremely difficult. The difficulties center on the requirement of handling highly erosive pulverized coal or char (in either dry or slurry form) combined with the requirement of providing tight sealing against high-pressure (possibly very hot) gas. Operating pressures and temperatures in these applications typically range up to 1600 psi (110 bar) and 600 F (316 C), with certain process requirements going even higher. In addition, and of primary concern, is the need for reliable operation over long service periods with provision for practical and economical maintenance. Currently available data indicate a requirement for something on the order of 20,000 to 30,000 open-close cycles per year and a desire to operate at least that long without valve failure.
Determination of three-dimensional joint loading within the lower extremities in snowboarding.
Krüger, Andreas; McAlpine, Paul; Borrani, Fabio; Edelmann-Nusser, Jürgen
2012-02-01
In the biomechanical literature only a few studies are available focusing on the determination of joint loading within the lower extremities in snowboarding. These studies are limited to analysis in a restricted capture volume due to the use of optical video-based systems. To overcome this restriction the aim of the present study was to develop a method to determine net joint moments within the lower extremities in snowboarding for complete measurement runs. An experienced snowboarder performed several runs equipped with two custom-made force plates as well as a full-body inertial measurement system. A rigid, multi-segment model was developed to describe the motion and loads within the lower extremities. This model is based on an existing lower-body model and designed to be run by the OpenSim software package. Measured kinetic and kinematic data were imported into the OpenSim program and inverse dynamic calculations were performed. The results illustrate the potential of the developed method for the determination of joint loadings within the lower extremities for complete measurement runs in a real snowboarding environment. The calculated net joint moments of force are reasonable in comparison to the data presented in the literature. A good reliability of the method seems to be indicated by the low data variation between different turns. Due to the unknown accuracy of this method the application for inter-individual studies as well as studies of injury mechanisms may be limited. For intra-individual studies comparing different snowboarding techniques as well as different snowboard equipment the method seems to be beneficial. The validity of the method needs to be studied further.
NASA Astrophysics Data System (ADS)
Takeda, Shun; Kumagai, Hiroshi
2018-02-01
Hyperpolarized (HP) noble gas has attracted attention in NMR/MRI. In an ultra-low magnetic field, signal enhancement by HP noble gas is essential because the reduction in signal intensity is severe. One method of generating HP noble gas is spin-exchange optical pumping (SEOP), which uses selective excitation of the electrons of an alkali metal vapor and transfer of spin to the noble-gas nuclei through collisions. Although SEOP does not require extreme cooling or a strong magnetic field, it generally requires large-scale equipment, including a high-power light source, to generate HP noble gas with high efficiency. In this study, we construct a simple generation system for HP xenon-129 by SEOP in an ultra-low magnetic field (up to 1 mT) with a small-scale light source (about 1 W). In addition, we measure the in situ NMR signal at the same time and examine efficient conditions for SEOP in ultra-low magnetic fields.
Lunar Habitat Optimization Using Genetic Algorithms
NASA Technical Reports Server (NTRS)
SanScoucie, M. P.; Hull, P. V.; Tinker, M. L.; Dozier, G. V.
2007-01-01
Long-duration surface missions to the Moon and Mars will require bases to accommodate habitats for the astronauts. Transporting the materials and equipment required to build the necessary habitats is costly and difficult. The materials chosen for the habitat walls play a direct role in protection against each of the mentioned hazards. Choosing the best materials, their configuration, and the amount required is extremely difficult due to the immense size of the design region. Clearly, an optimization method is warranted for habitat wall design. Standard optimization techniques are not suitable for problems with such large search spaces; therefore, a habitat wall design tool utilizing genetic algorithms (GAs) has been developed. GAs use a "survival of the fittest" philosophy where the most fit individuals are more likely to survive and reproduce. This habitat design optimization tool is a multiobjective formulation of up-mass, heat loss, structural analysis, meteoroid impact protection, and radiation protection. This Technical Publication presents the research and development of this tool as well as a technique for finding the optimal GA search parameters.
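The sketch below is a minimal, single-objective illustration of the GA machinery described above (the NASA tool is a multiobjective formulation with real structural, thermal, and radiation models). A wall is represented as a vector of layer thicknesses; the layer densities, shielding factors, protection requirement, and GA settings are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_layers, pop_size, generations = 4, 60, 200
density = np.array([2.7, 1.0, 7.8, 1.4])      # hypothetical layer densities
shield  = np.array([1.0, 2.5, 0.8, 3.0])      # hypothetical shielding per unit thickness
required_protection = 10.0

def fitness(pop):
    # Minimize up-mass while meeting the protection requirement via a penalty.
    mass = pop @ density
    protection = pop @ shield
    penalty = np.maximum(required_protection - protection, 0.0) * 100.0
    return -(mass + penalty)                   # higher is fitter

pop = rng.uniform(0.0, 5.0, size=(pop_size, n_layers))
for _ in range(generations):
    f = fitness(pop)
    # Tournament selection: the fitter of two random individuals becomes a parent.
    i, j = rng.integers(0, pop_size, (2, pop_size))
    parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
    # Uniform crossover between shuffled parents, then sparse Gaussian mutation.
    mates = parents[rng.permutation(pop_size)]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates)
    children += rng.normal(0.0, 0.1, children.shape) * (rng.random(children.shape) < 0.2)
    pop = np.clip(children, 0.0, 5.0)

best = pop[np.argmax(fitness(pop))]
print("best thicknesses:", np.round(best, 2), " mass:", round(best @ density, 2))
```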
Advanced Stirling Duplex Materials Assessment for Potential Venus Mission Heater Head Application
NASA Technical Reports Server (NTRS)
Ritzert, Frank; Nathal, Michael V.; Salem, Jonathan; Jacobson, Nathan; Nesbitt, James
2011-01-01
This report will address materials selection for components in a proposed Venus lander system. The lander would use active refrigeration to allow Space Science instrumentation to survive the extreme environment that exists on the surface of Venus. The refrigeration system would be powered by a Stirling engine-based system and is termed the Advanced Stirling Duplex (ASD) concept. Stirling engine power conversion in its simplest definition converts heat from radioactive decay into electricity. Detailed design decisions will require iterations between component geometries, materials selection, system output, and tolerable risk. This study reviews potential component requirements against known materials performance. A lower risk, evolutionary advance in heater head materials could be offered by nickel-base superalloy single crystals, with expected capability of approximately 1100C. However, the high temperature requirements of the Venus mission may force the selection of ceramics or refractory metals, which are more developmental in nature and may not have a well-developed database or a mature supporting technology base such as fabrication and joining methods.
Mesh Convergence Requirements for Composite Damage Models
NASA Technical Reports Server (NTRS)
Davila, Carlos G.
2016-01-01
The ability of the finite element method to accurately represent the response of objects with intricate geometry and loading makes it an extremely versatile technique for structural analysis. Finite element analysis is routinely used in industry to calculate deflections, stress concentrations, natural frequencies, buckling loads, and much more. The method works by discretizing complex problems into smaller, simpler approximations that are valid over small uniform domains. For common analyses, the maximum element size that can be used can often be determined by experience. However, to verify the quality of a solution, analyses with several levels of mesh refinement should be performed to ensure that the solution has converged. In recent years, the finite element method has been used to calculate the resistance of structures, and in particular that of composite structures. A number of techniques, such as cohesive zone modeling, the virtual crack closure technique, and continuum damage modeling, have emerged that can be used to predict cracking, delaminations, fiber failure, and other composite damage modes that lead to structural collapse. However, damage models present mesh refinement requirements that are not well understood. In this presentation, we examine different mesh refinement issues related to the representation of damage in composite materials. Damage process zone sizes and their corresponding mesh requirements will be discussed. The difficulties of modeling discontinuities and the associated need for regularization techniques will be illustrated, and some unexpected element size constraints will be presented. Finally, some of the difficulties in constructing models of composite structures capable of predicting transverse matrix cracking will be discussed. It will be shown that predicting the initiation and propagation of transverse matrix cracks, their density, and their saturation may require models that are significantly more refined than those that have been contemplated in the past.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
[Ultrasound examination for lower extremity deep vein thrombosis].
Toyota, Kosaku
2014-09-01
Surgery is known to be a major risk factor of vein thrombosis. Progression from lower extremity deep vein thrombosis (DVT) to pulmonary embolism can lead to catastrophic outcome, although the incidence ratio is low. The ability to rule in or rule out DVT is becoming essential for anesthesiologists. Non-invasive technique of ultrasonography is a sensitive and specific tool for the assessment of lower extremity DVT. This article introduces the basics and practical methods of ultrasound examination for lower extremity DVT.
NASA Astrophysics Data System (ADS)
Zhang, Chuanqing; Feng, Xiating; Zhou, Hui; Qiu, Shili; Wu, Wenping
2012-05-01
The headrace tunnels at the Jinping II Hydropower Station cross the Jinping Mountain with a maximum overburden depth of 2,525 m, where 80% of the strata along the tunnels consist of marble. A number of extremely intense rockbursts occurred during the excavation of the auxiliary tunnels and the drainage tunnel. In particular, a tunnel boring machine (TBM) was destroyed by an extremely intense rockburst in a 7.2-m-diameter drainage tunnel. Two of the four subsequent 12.4-m-diameter headrace tunnels will be excavated with larger size TBMs, where a high risk of extremely intense rockbursts exists. Herein, a top pilot tunnel preconditioning method is proposed to minimize this risk, in which a drilling and blasting method is first recommended for the top pilot tunnel excavation and support, and then the TBM excavation of the main tunnel is conducted. In order to evaluate the mechanical effectiveness of this method, numerical simulation analyses using the failure approaching index, energy release rate, and excess shear stress indices are carried out. Its construction feasibility is discussed as well. Moreover, a microseismic monitoring technique is used in the experimental tunnel section for the real-time monitoring of the microseismic activities of the rock mass in TBM excavation and for assessing the effect of the top pilot tunnel excavation in reducing the risk of rockbursts. This method is applied to two tunnel sections prone to extremely intense rockbursts and leads to a reduction in the risk of rockbursts in TBM excavation.
Farahmand, Shervin; Ahmadi, Omid; Dehpour, Ahmadreza; Khashayar, Patricia
2012-01-01
The present study aims to assess the influence of ultra-low doses of opioid antagonists on the analgesic properties of opioids and their side effects. In the present randomized, double-blind controlled trial, the influence of the combination of ultra-low-dose naltrexone and morphine on the total opioid requirement and the frequency of the subsequent side effects was compared with that of morphine alone (added with placebo) in patients with trauma in the upper or lower extremities. Although the morphine and naltrexone group required 0.04 mg more opioids during the study period, there was no significant difference between the opioid requirements of the 2 groups. Nausea was less frequently reported in patients receiving morphine and naltrexone. The combination of ultra-low-dose naltrexone and morphine in extremity trauma does not affect the opioid requirements; it, however, lowers the risk of nausea. Copyright © 2012 Elsevier Inc. All rights reserved.
Data entry errors and design for model-based tight glycemic control in critical care.
Ward, Logan; Steel, James; Le Compte, Aaron; Evans, Alicia; Tan, Chia-Siong; Penning, Sophie; Shaw, Geoffrey M; Desaive, Thomas; Chase, J Geoffrey
2012-01-01
Tight glycemic control (TGC) has shown benefits but has been difficult to achieve consistently. Model-based methods and computerized protocols offer the opportunity to improve TGC quality but require human data entry, particularly of blood glucose (BG) values, which can be significantly prone to error. This study presents the design and optimization of data entry methods to minimize error for a computerized and model-based TGC method prior to pilot clinical trials. To minimize data entry error, two tests were carried out to optimize a method with errors less than the 5%-plus reported in other studies. Four initial methods were tested on 40 subjects in random order, and the best two were tested more rigorously on 34 subjects. The tests measured entry speed and accuracy. Errors were reported as corrected and uncorrected errors, with the sum comprising a total error rate. The first set of tests used randomly selected values, while the second set used the same values for all subjects to allow comparisons across users and direct assessment of the magnitude of errors. These research tests were approved by the University of Canterbury Ethics Committee. The final data entry method tested reduced errors to less than 1-2%, a 60-80% reduction from reported values. The magnitude of errors was clinically significant, typically around 10.0 mmol/liter or an order of magnitude, but occurred only for extreme values of BG < 2.0 mmol/liter or BG > 15.0-20.0 mmol/liter, both of which could be easily corrected with automated checking of extreme values for safety. The data entry method selected significantly reduced data entry errors in the limited design tests presented, and is in use on a clinical pilot TGC study. The overall approach and testing methods are easily performed and generalizable to other applications and protocols. © 2012 Diabetes Technology Society.
8 CFR 1216.5 - Waiver of requirement to file joint petition to remove conditions by alien spouse.
Code of Federal Regulations, 2011 CFR
2011-01-01
... child was battered by or subjected to extreme cruelty committed by the citizen or permanent resident... pertinent by the director. (3) Application for waiver based on alien's claim of having been battered or... faith, and who was battered or was the subject of extreme cruelty or whose child was battered by or was...
8 CFR 1216.5 - Waiver of requirement to file joint petition to remove conditions by alien spouse.
Code of Federal Regulations, 2010 CFR
2010-01-01
... child was battered by or subjected to extreme cruelty committed by the citizen or permanent resident... pertinent by the director. (3) Application for waiver based on alien's claim of having been battered or... faith, and who was battered or was the subject of extreme cruelty or whose child was battered by or was...
Research on Nitride Thin Films, Advanced Plasma Diagnostics, and Charged-Particle Processes
2006-07-01
Additionally, these components are being placed closer to the point of use--requiring that they operate in extreme temperature environments... reasons for component failure. To operate in extreme temperature environments, electronic and electrical components must withstand higher ambient... hybrid and plug-in hybrid-powered automobiles, heart defibrillators, and industrial equipment will benefit from a new generation of capacitors. High
Deformation mechanisms in a coal mine roadway in extremely swelling soft rock.
Li, Qinghai; Shi, Weiping; Yang, Renshu
2016-01-01
Roadway support in swelling soft rock is one of the challenging problems in mining. For most geological conditions, combinations of two or more supporting approaches can meet the requirements of most roadways; however, in extremely swelling soft rock, even combined approaches could not control the large deformations. The purpose of this work was to probe the roadway deformation mechanisms in extremely swelling soft rock. Based on the main return airway of a coal mine, deformation monitoring and geomechanical analysis were conducted, and a plastic zone mechanical model was analysed. Results indicated that this soft rock had a very high swelling potential. When the ground stress acted alone, the support strength needed in situ was not too large and combined supporting approaches could meet this requirement; however, when the swelling potential was released, the roadway would undergo permanent deformation. When the loose zone within the surrounding rock reached 3 m, the remote stress p∞ and the supporting stress P exhibited a linear relationship; that is, the greater the swelling stress, the more difficult roadway support would be. Hence, in this extremely swelling soft rock, a better way to control roadway deformation is to control the release of the surrounding rock's swelling potential.
Reconstructing metabolic flux vectors from extreme pathways: defining the alpha-spectrum.
Wiback, Sharon J; Mahadevan, Radhakrishnan; Palsson, Bernhard Ø
2003-10-07
The move towards genome-scale analysis of cellular functions has necessitated the development of analytical (in silico) methods to understand such large and complex biochemical reaction networks. One such method is extreme pathway analysis, which uses stoichiometry and thermodynamic irreversibility to define mathematically unique, systemic metabolic pathways. These extreme pathways form the edges of a high-dimensional convex cone in the flux space that contains all the attainable steady state solutions, or flux distributions, for the metabolic network. By definition, any steady state flux distribution can be described as a nonnegative linear combination of the extreme pathways. To date, much effort has been focused on calculating, defining, and understanding these extreme pathways. However, little work has been performed to determine how these extreme pathways contribute to a given steady state flux distribution. This study represents an initial effort aimed at defining how physiological steady state solutions can be reconstructed from a network's extreme pathways. In general, there is not a unique set of nonnegative weightings on the extreme pathways that produce a given steady state flux distribution but rather a range of possible values. This range can be determined using linear optimization to maximize and minimize the weightings of a particular extreme pathway in the reconstruction, resulting in what we have termed the alpha-spectrum. The alpha-spectrum defines which extreme pathways can and cannot be included in the reconstruction of a given steady state flux distribution and to what extent they individually contribute to the reconstruction. It is shown that accounting for transcriptional regulatory constraints can considerably shrink the alpha-spectrum. The alpha-spectrum is computed and interpreted for two cases: first, optimal states of a skeleton representation of core metabolism that include transcriptional regulation, and second, human red blood cell metabolism under various physiological, non-optimal conditions.
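A hedged sketch of the linear-optimization step: if the columns of P are extreme pathways and v is a steady-state flux distribution, the admissible range of each pathway weighting alpha_i follows from minimizing and maximizing alpha_i subject to P alpha = v, alpha >= 0. The small matrix and flux vector below are invented purely to give a non-unique reconstruction, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical extreme-pathway matrix (columns = pathways) and flux vector.
P = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 1.0]])
v = np.array([2.0, 2.0, 3.0])   # reachable, e.g. by alpha = (1, 1, 1, 1), but not uniquely

# For each pathway, minimize and maximize its weighting subject to
# P @ alpha = v, alpha >= 0; the resulting intervals form the alpha-spectrum.
spectrum = []
for i in range(P.shape[1]):
    c = np.zeros(P.shape[1]); c[i] = 1.0
    lo = linprog(c,  A_eq=P, b_eq=v, bounds=(0, None)).fun
    hi = -linprog(-c, A_eq=P, b_eq=v, bounds=(0, None)).fun
    spectrum.append((round(lo, 3), round(hi, 3)))
print(spectrum)
```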
Studying Weather and Climate Extremes in a Non-stationary Framework
NASA Astrophysics Data System (ADS)
Wu, Z.
2010-12-01
The study of weather and climate extremes often uses the theory of extreme values. Such a detection method has a major problem: to obtain the probability distribution of extremes, one has to implicitly assume the Earth's climate is stationary over a long period within which the climatology is defined. While such detection makes some sense in a purely statistical view of stationary processes, it can lead to misleading statistical properties of weather and climate extremes caused by long term climate variability and change, and may also cause enormous difficulty in attributing and predicting these extremes. To alleviate this problem, here we report a novel non-stationary framework for studying weather and climate extremes. In this new framework, the weather and climate extremes will be defined as timescale-dependent quantities derived from the anomalies with respect to non-stationary climatologies of different timescales. With this non-stationary framework, the non-stationary and nonlinear nature of the climate system will be taken into account; and the attribution and the prediction of weather and climate extremes can then be separated into 1) the change of the statistical properties of the weather and climate extremes themselves and 2) the background climate variability and change. The new non-stationary framework will use the ensemble empirical mode decomposition (EEMD) method, which is a recent major improvement of the Hilbert-Huang Transform for time-frequency analysis. Using this tool, we will adaptively decompose various weather and climate data from observations and climate models in terms of the components of the various natural timescales contained in the data. With such decompositions, the non-stationary statistical properties (both spatial and temporal) of weather and climate anomalies and of their corresponding climatologies will be analyzed and documented.
Liu, Yang; Zhang, Mingqing; Fang, Xiuqi
2018-03-20
By merging reconstructed phenological series from published articles with observations of the China Phenology Observation Network (CPON), the first blooming date of Amygdalus davidiana (FBA) in Beijing between 1741 and 2000 is reconstructed. The Butterworth method is used to remove the multi-year variations and generate the phenological series of annual variations in the first blooming date of A. davidiana. The extreme delay years in the phenological series are identified using the percentage threshold method. The characteristics of the extreme delays and the correspondence of these events with natural forcings are analysed. The main results are as follows. In the annual phenological series, the extreme delays mostly appeared as single years; only A.D. 1800-1801, 1816-1817 and 1983-1984 were events of two consecutive extreme years. Approximately 85% of the extreme delays occurred within 1-2 years after large volcanic eruptions (VEI ≥ 4) on the eastern or western rim of the Pacific Ocean, and the same proportion of the extreme delays followed El Niño events. About 73% of the extreme delay years fall, in that year or the previous year, in the valleys of sunspot cycles or in the Dalton minimum period. According to the certainty factor (CF), large eruptions have the greatest influence on the extreme delays, sunspot activity the second greatest, and ENSO the least. An extremely delayed phenological year is most likely to occur after a large eruption, particularly when the eruption falls in an El Niño year and the preceding years lie in the descending portion or valley of the sunspot cycle.
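A minimal sketch of a percentile-threshold extreme-year detection of the kind described above is given below; the synthetic anomaly series and the 90th-percentile cut-off are illustrative assumptions, not the study's data or threshold.

```python
"""Minimal sketch of percentile-threshold extreme-year detection,
assuming `anomalies` holds a detrended first-blooming-date series
(positive = delayed) indexed by year; data and threshold are placeholders.
"""
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1741, 2001)
anomalies = rng.normal(0.0, 5.0, size=years.size)   # placeholder anomalies (days)

threshold = np.percentile(anomalies, 90)             # upper-tail cut-off
extreme_delay_years = years[anomalies > threshold]
print(extreme_delay_years)
```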
A Simulation Study on Methods of Correcting for the Effects of Extreme Response Style
ERIC Educational Resources Information Center
Wetzel, Eunike; Böhnke, Jan R.; Rose, Norman
2016-01-01
The impact of response styles such as extreme response style (ERS) on trait estimation has long been a matter of concern to researchers and practitioners. This simulation study investigated three methods that have been proposed for the correction of trait estimates for ERS effects: (a) mixed Rasch models, (b) multidimensional item response models,…
Structures and Materials Technologies for Extreme Environments Applied to Reusable Launch Vehicles
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.; Clay, Christopher; Rezin, Marc
2003-01-01
This paper provides an overview of the evolution of structures and materials technology approaches to survive the challenging extreme environments encountered by earth-to-orbit space transportation systems, with emphasis on more recent developments in the USA. The evolution of technology requirements and experience in the various approaches to meeting these requirements has significantly influenced the technology approaches. While previous goals were primarily performance driven, more recently dramatic improvements in costs/operations and in safety have been paramount goals. Technologies that focus on the cost/operations and safety goals in the area of hot structures and thermal protection systems for reusable launch vehicles are presented. Assessments of the potential ability of the various technologies to satisfy the technology requirements, and their current technology readiness status are also presented.
Solar Probe Plus MAG Sensor Thermal Design for Low Heater Power and Extreme Thermal Environment
NASA Technical Reports Server (NTRS)
Choi, Michael K.
2015-01-01
The heater power available for the Solar Probe Plus FIELDS MAG sensor is less than half of the heritage value for other missions. Nominally the MAG sensors are in the spacecraft's umbra. In the worst hot case, approximately 200 spacecraft communication downlinks, up to 10 hours each, are required at 0.7 AU. These downlinks require the spacecraft to slew 45 deg. about the Y-axis, exposing the MAG sensors and boom to sunlight. This paper presents the thermal design to meet the MAG sensor thermal requirements in the extreme thermal environment and with low heater power. A thermal balance test on the MAG sensor engineering model has verified the thermal design and correlated the thermal model for flight temperature predictions.
Exploring high power, extreme wavelength operating potential of rare-earth-doped silica fiber
NASA Astrophysics Data System (ADS)
Zhou, Pu; Li, Ruixian; Xiao, Hu; Huang, Long; Zhang, Hanwei; Leng, Jinyong; Chen, Zilun; Xu, Jiangmin; Wu, Jian; Wang, Xiong
2017-08-01
Ytterbium-doped fiber lasers (YDFL) and thulium-doped fiber lasers (TDFL) have been two of the most widely studied fiber lasers in recent years. Although both silica-based ytterbium-doped and thulium-doped fibers have wide emission bands (more than 200 nm and 400 nm, respectively), the operating spectral regions of previously demonstrated high power YDFLs and TDFLs fall into 1060-1100 nm and 1900-2050 nm. Power scaling of YDFLs and TDFLs at short- or long-wavelength bands, and especially at extreme wavelengths, is highly desired in a large variety of application fields, but is quite challenging due to small net gain and strong amplified spontaneous emission (ASE). In this paper, we present our group's study of extreme wavelength operation of high power YDFLs and TDFLs. Comprehensive mathematical models are built to investigate the feasibility of high power operation and to propose effective technical methods for achieving it. We have achieved (1) a diode-pumped 1150 nm long-wavelength YDFL with 120-watt level output power, (2) a diode-pumped 1178 nm long-wavelength YDFL operating at high temperature with 30-watt level output power, (3) a random-laser-pumped 2153 nm long-wavelength TDFL with 20-watt level output power, and (4) a diode-pumped 1018 nm short-wavelength YDFL with a record 2 kilowatt output power achieved by using a home-made fiber combiner.
Mirkarimi, P B; Baker, S L; Montcalm, C; Folta, J A
2001-01-01
Extreme-ultraviolet lithography requires expensive multilayer-coated Zerodur or ULE optics with extremely tight figure and finish specifications. Therefore it is desirable to develop methods to recover these optics if they are coated with nonoptimum multilayer films or in the event that the coating deteriorates over time owing to long-term exposure to radiation, corrosion, or surface contamination. We evaluate recoating, reactive-ion etching, and wet-chemical techniques for the recovery of Mo/Si and Mo/Be multilayer films upon Zerodur and ULE test optics. The recoating technique was successfully employed in the recovery of Mo/Si-coated optics but has the drawback of limited applicability. A chlorine-based reactive-ion etch process was successfully used to recover Mo/Si-coated optics, and a particularly large process window was observed when ULE optics were employed; this is advantageous for large, curved optics. Dilute HCl wet-chemical techniques were developed and successfully demonstrated for the recovery of Mo/Be-coated optics as well as for Mo/Si-coated optics when Mo/Be release layers were employed; however, there are questions about the extendability of the HCl process to large optics and multiple coat-and-strip cycles. The technique of using carbon barrier layers to protect the optic during removal of Mo/Si in HF:HNO(3) also showed promise.
Method For Synthesizing Extremely High-Temperature Melting Materials
Saboungi, Marie-Louise; Glorieux, Benoit
2005-11-22
The invention relates to a method of synthesizing high-temperature melting materials. More specifically the invention relates to a containerless method of synthesizing very high temperature melting materials such as borides, carbides and transition-metal, lanthanide and actinide oxides, using an Aerodynamic Levitator and a laser. The object of the invention is to provide a method for synthesizing extremely high-temperature melting materials that are otherwise difficult to produce, without the use of containers, allowing the manipulation of the phase (amorphous/crystalline/metastable) and permitting changes of the environment such as different gaseous compositions.
Method for synthesizing extremely high-temperature melting materials
Saboungi, Marie-Louise; Glorieux, Benoit
2007-11-06
The invention relates to a method of synthesizing high-temperature melting materials. More specifically the invention relates to a containerless method of synthesizing very high temperature melting materials such as carbides and transition-metal, lanthanide and actinide oxides, using an aerodynamic levitator and a laser. The object of the invention is to provide a method for synthesizing extremely high-temperature melting materials that are otherwise difficult to produce, without the use of containers, allowing the manipulation of the phase (amorphous/crystalline/metastable) and permitting changes of the environment such as different gaseous compositions.
Approximate matching of regular expressions.
Myers, E W; Miller, W
1989-01-01
Given a sequence A and regular expression R, the approximate regular expression matching problem is to find a sequence matching R whose optimal alignment with A is the highest scoring of all such sequences. This paper develops an algorithm to solve the problem in time O(MN), where M and N are the lengths of A and R. Thus, the time requirement is asymptotically no worse than for the simpler problem of aligning two fixed sequences. Our method is superior to an earlier algorithm by Wagner and Seiferas in several ways. First, it treats real-valued costs, in addition to integer costs, with no loss of asymptotic efficiency. Second, it requires only O(N) space to deliver just the score of the best alignment. Finally, its structure permits implementation techniques that make it extremely fast in practice. We extend the method to accommodate gap penalties, as required for typical applications in molecular biology, and further refine it to search for substrings of A that strongly align with a sequence in R, as required for typical database searches. We also show how to deliver an optimal alignment between A and R in only O(N + log M) space using O(MN log M) time. Finally, an O(MN(M + N) + N² log N) time algorithm is presented for alignment scoring schemes where the cost of a gap is an arbitrary increasing function of its length.
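To illustrate the O(MN) alignment-scoring idea on which the paper builds, the following sketch implements the classic dynamic-programming recurrence for the simpler case of two fixed sequences (not a full regular expression); the scoring values are arbitrary placeholders.

```python
"""Illustrative O(MN) alignment-scoring DP for two fixed sequences,
a simplified special case of the regular-expression problem above.
"""
def best_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    m, n = len(a), len(b)
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + gap
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + s,   # match / substitution
                           dp[i - 1][j] + gap,     # gap in b
                           dp[i][j - 1] + gap)     # gap in a
    return dp[m][n]

print(best_alignment_score("ACGTAG", "ACTTAG"))
```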
NASA Astrophysics Data System (ADS)
Lu, Shan; Zhang, Hanmo
2016-01-01
To meet the requirement of autonomous orbit determination, this paper proposes a fast curve fitting method based on earth ultraviolet features to obtain an accurate earth vector direction, in order to achieve high precision autonomous navigation. First, combining the stable characteristics of earth ultraviolet radiance with atmospheric radiation transmission model software, the paper simulates the earth ultraviolet radiation model at different times and chooses the proper observation band. Then a fast improved edge extraction method combining the Sobel operator and local binary patterns (LBP) is utilized, which can both eliminate noise efficiently and extract earth ultraviolet limb features accurately. The earth's centroid locations on simulated images are estimated via least-squares fitting using part of the limb edges. Finally, taking advantage of the estimated earth vector direction and earth distance, an Extended Kalman Filter (EKF) is applied to realize autonomous navigation. Experimental results indicate that the proposed method can achieve sub-pixel earth centroid location estimation and greatly enhance autonomous celestial navigation precision.
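The least-squares limb-fitting step can be illustrated with an algebraic (Kasa-style) circle fit to edge pixels, sketched below on synthetic data; the fitting scheme and numbers are assumptions for illustration, not necessarily the paper's exact formulation.

```python
"""Minimal sketch of a least-squares circle fit to limb-edge pixels,
assuming `x`, `y` are extracted edge coordinates; synthetic data only.
"""
import numpy as np

# Synthetic limb points on a circle of centre (320, 240), radius 150, plus noise.
rng = np.random.default_rng(1)
theta = np.linspace(0.2, 1.8, 200)                  # partial limb only
x = 320 + 150 * np.cos(theta) + rng.normal(0, 0.5, theta.size)
y = 240 + 150 * np.sin(theta) + rng.normal(0, 0.5, theta.size)

# Solve x^2 + y^2 = 2*cx*x + 2*cy*y + c in the least-squares sense.
A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
b = x**2 + y**2
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
cx, cy, c = sol
radius = np.sqrt(c + cx**2 + cy**2)
print(f"centroid = ({cx:.2f}, {cy:.2f}), radius = {radius:.2f}")
```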
Systematic Serendipity: A Method to Discover the Anomalous
NASA Astrophysics Data System (ADS)
Giles, Daniel; Walkowicz, Lucianne
2018-01-01
One of the challenges in the era of big data astronomical surveys is identifying anomalous data, data that exhibit as-of-yet unobserved behavior. These data may result from systematic errors, extreme (or rare) forms of known phenomena, or, most interestingly, truly novel phenomena that have historically required a trained eye and often fortuitous circumstance to identify. We describe a method that uses machine clustering techniques to discover anomalous data in Kepler lightcurves, as a step towards systematizing the detection of novel phenomena in the era of LSST. As a proof of concept, we apply our anomaly detection method to Kepler data including Boyajian's Star (KIC 8462852). We examine quarters 4, 8, 11, and 16 of the Kepler data, which contain Boyajian's Star acting normally (quarters 4 and 11) and anomalously (quarters 8 and 16). We demonstrate that our method is capable of identifying Boyajian's Star's anomalous behavior in the quarters of interest, and we further identify other anomalous light curves that exhibit a range of interesting variability.
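One possible realization of clustering-based anomaly flagging is sketched below: light-curve features are clustered and the most isolated points are reported; the feature set, cluster count and cut-off are illustrative assumptions, not the authors' choices.

```python
"""Toy sketch of flagging anomalies by clustering feature vectors and
scoring distance to the nearest cluster centre; synthetic features only.
"""
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Placeholder "light-curve features" (e.g. variance, skew, periodogram peaks).
features = rng.normal(size=(1000, 5))
features[::200] += 6.0                     # inject a few synthetic anomalies

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
dist = np.min(km.transform(features), axis=1)      # distance to nearest centre
cutoff = np.percentile(dist, 99)                   # flag the most isolated 1%
anomalies = np.where(dist > cutoff)[0]
print(anomalies)
```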
Explicit formulation of second and third order optical nonlinearity in the FDTD framework
NASA Astrophysics Data System (ADS)
Varin, Charles; Emms, Rhys; Bart, Graeme; Fennel, Thomas; Brabec, Thomas
2018-01-01
The finite-difference time-domain (FDTD) method is a flexible and powerful technique for rigorously solving Maxwell's equations. However, three-dimensional optical nonlinearity in current commercial and research FDTD software requires solving iteratively an implicit form of Maxwell's equations over the entire numerical space and at each time step. Reaching numerical convergence demands significant computational resources, and practical implementation often requires major modifications to the core FDTD engine. In this paper, we present an explicit method to include second and third order optical nonlinearity in the FDTD framework based on a nonlinear generalization of the Lorentz dispersion model. A formal derivation of the nonlinear Lorentz dispersion equation is also provided, starting from the quantum mechanical equations describing nonlinear optics in the two-level approximation. With the proposed approach, numerical integration of optical nonlinearity and dispersion in FDTD is intuitive, transparent, and fully explicit. A strong-field formulation is also proposed, which opens an interesting avenue for FDTD-based modelling of the extreme nonlinear optics phenomena involved in laser filamentation and femtosecond micromachining of dielectrics.
Processes involved in the development of latent fingerprints using the cyanoacrylate fuming method.
Lewis, L A; Smithwick, R W; Devault, G L; Bolinger, B; Lewis, S A
2001-03-01
Chemical processes involved in the development of latent fingerprints using the cyanoacrylate fuming method have been studied. Two major types of latent prints have been investigated: clean and oily prints. Scanning electron microscopy (SEM) has been used as a tool for determining the morphology of the polymer developed separately on clean and oily prints after cyanoacrylate fuming. A correlation between the chemical composition of an aged latent fingerprint, prior to development, and the quality of a developed fingerprint has been observed in the morphology. The moisture in the print prior to fuming has been found to be more important than the moisture in the air during fuming for the development of a useful latent print. In addition, the amount of time required to develop a high quality latent print has been found to be within 2 min. The cyanoacrylate polymerization process is extremely rapid. When heat is used to accelerate the fuming process, typically a period of 2 min is required to develop the print. The optimum development time depends upon the concentration of cyanoacrylate vapors within the enclosure.
Risk assessment of precipitation extremes in northern Xinjiang, China
NASA Astrophysics Data System (ADS)
Yang, Jun; Pei, Ying; Zhang, Yanwei; Ge, Quansheng
2018-05-01
This study was conducted using daily precipitation records gathered at 37 meteorological stations in northern Xinjiang, China, from 1961 to 2010. We used extreme value theory, fitting the generalized extreme value (GEV) and generalized Pareto distribution (GPD) statistical distribution functions to precipitation extremes with different return periods, to estimate the risks of precipitation extremes and to diagnose aridity-humidity environmental variation and the corresponding spatial patterns in northern Xinjiang. Spatiotemporal patterns of daily maximum precipitation showed that the aridity-humidity conditions of northern Xinjiang could be well represented by the return periods of the precipitation data. Indices of daily maximum precipitation were effective in the prediction of floods in the study area. By analyzing projections of daily maximum precipitation for return periods of 2, 5, 10, 30, 50, and 100 years, we conclude that the flood risk will gradually increase in northern Xinjiang. GEV modeling yielded the best results, proving extremely valuable: in the example analyses of extreme precipitation, the GEV statistical model was superior in reproducing the observed extremes, while the GPD results better reflect annual precipitation. For most of the estimated sites, the 2- and 5-year return levels from the GPD were slightly greater than those from the GEV. The study found that extreme precipitation exceeding a certain threshold will cause a flood disaster; therefore, predicting future extreme precipitation may aid flood-disaster warnings. A suitable policy concerning effective water resource management is thus urgently required.
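The GEV fitting and return-level estimation described above can be sketched as follows; the synthetic annual-maximum series is a placeholder for the Xinjiang station records.

```python
"""Sketch of fitting a GEV distribution to annual maximum daily
precipitation and estimating return levels; synthetic data only.
"""
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
annual_max = rng.gumbel(loc=25.0, scale=8.0, size=50)   # mm, placeholder

shape, loc, scale = stats.genextreme.fit(annual_max)
for T in (2, 5, 10, 30, 50, 100):
    # Return level: quantile with exceedance probability 1/T per year.
    level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T:3d}-year return level: {level:6.1f} mm")
```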
Photoresist composition for extreme ultraviolet lithography
Felter, T. E.; Kubiak, G. D.
1999-01-01
A method of producing a patterned array of features, in particular, gate apertures, in the size range 0.4-0.05 μm using projection lithography and extreme ultraviolet (EUV) radiation. A high energy laser beam is used to vaporize a target material in order to produce a plasma which, in turn, produces extreme ultraviolet radiation of a characteristic wavelength of about 13 nm for lithographic applications. The radiation is transmitted by a series of reflective mirrors to a mask which bears the pattern to be printed. The demagnified focused mask pattern is, in turn, transmitted, by means of appropriate optics and in a single exposure, to a substrate coated with photoresists designed to be transparent to EUV radiation and also satisfy conventional processing methods. A photoresist composition for extreme ultraviolet radiation comprises boron carbide polymers, hydrochlorocarbons and mixtures thereof.
Use of Magnetic Resonance Imaging to Monitor Iron Overload
Wood, John C.
2014-01-01
Treatment of iron overload requires robust estimates of total body iron burden and its response to iron chelation therapy. Compliance with chelation therapy varies considerably among patients and individual reporting is notoriously unreliable. Even with perfect compliance, intersubject variability in chelator effectiveness is extremely high, necessitating reliable iron estimates to guide dose titration. In addition, each chelator has a unique profile with respect to clearing iron stores from different organs. This chapter will present the tools available to clinicians monitoring their patients, focusing on non-invasive magnetic resonance imaging methods because they have become the de facto standard of care. PMID:25064711
Unstructured Cartesian/prismatic grid generation for complex geometries
NASA Technical Reports Server (NTRS)
Karman, Steve L., Jr.
1995-01-01
The generation of a hybrid grid system for discretizing complex three dimensional (3D) geometries is described. The primary grid system is an unstructured Cartesian grid automatically generated using recursive cell subdivision. This grid system is sufficient for computing Euler solutions about extremely complex 3D geometries. A secondary grid system, using triangular-prismatic elements, may be added for resolving the boundary layer region of viscous flows near surfaces of solid bodies. This paper describes the grid generation processes used to generate each grid type. Several example grids are shown, demonstrating the ability of the method to discretize complex geometries, with very little pre-processing required by the user.
Continuous-variable quantum computing in optical time-frequency modes using quantum memories.
Humphreys, Peter C; Kolthammer, W Steven; Nunn, Joshua; Barbieri, Marco; Datta, Animesh; Walmsley, Ian A
2014-09-26
We develop a scheme for time-frequency encoded continuous-variable cluster-state quantum computing using quantum memories. In particular, we propose a method to produce, manipulate, and measure two-dimensional cluster states in a single spatial mode by exploiting the intrinsic time-frequency selectivity of Raman quantum memories. Time-frequency encoding enables the scheme to be extremely compact, requiring a number of memories that are a linear function of only the number of different frequencies in which the computational state is encoded, independent of its temporal duration. We therefore show that quantum memories can be a powerful component for scalable photonic quantum information processing architectures.
Ultra-low power operation of self-heated, suspended carbon nanotube gas sensors
NASA Astrophysics Data System (ADS)
Chikkadi, Kiran; Muoth, Matthias; Maiwald, Verena; Roman, Cosmin; Hierold, Christofer
2013-11-01
We present a suspended carbon nanotube gas sensor that senses NO2 at ambient temperature and recovers from gas exposure at an extremely low power of 2.9 μW by exploiting the self-heating effect for accelerated gas desorption. The recovery time of 10 min is two orders of magnitude faster than non-heated recovery at ambient temperature. This overcomes an important bottleneck for the practical application of carbon nanotube gas sensors. Furthermore, the method is easy to implement in sensor systems and requires no additional components, paving the way for ultra-low power, compact, and highly sensitive gas sensors.
2014-03-14
with expected changes due to climate change. (tropicals and extra-tropicals) Ivan provided some good information on work being done on tropical...Pattiaratchi, C., Jensen, J., 2013. Estimating extreme water level probabilities: a comparison of the direct methods and recommendations for best practise ...sites: site-by-site analyses. Proudman Oceanographic Laboratory , Internal Document, No. 65, 229pp. Dixon, M.J., Tawn, J.A. (1995) Extreme sea-levels
NASA Astrophysics Data System (ADS)
Li, L.; Xu, C.-Y.; Engeland, K.
2012-04-01
With respect to model calibration, parameter estimation and analysis of uncertainty sources, different approaches have been used in hydrological models. The Bayesian method is one of the most widely used methods for uncertainty assessment of hydrological models, as it incorporates different sources of information into a single analysis through Bayes' theorem. However, none of these applications treats the uncertainty in the extreme flows of hydrological model simulations well. This study proposes a Bayesian modularization approach for uncertainty assessment of conceptual hydrological models that takes the extreme flows into account. It includes a comprehensive comparison and evaluation of uncertainty assessments by the new Bayesian modularization approach and traditional Bayesian models using the Metropolis-Hastings (MH) algorithm with the daily hydrological model WASMOD. Three likelihood functions are used in combination with the traditional Bayesian approach: the AR(1) plus Normal, time-period-independent model (Model 1); the AR(1) plus Normal, time-period-dependent model (Model 2); and the AR(1) plus multi-normal model (Model 3). The results reveal that (1) the simulations derived from the Bayesian modularization method are more accurate, with the highest Nash-Sutcliffe efficiency value, and (2) the Bayesian modularization method performs best in uncertainty estimates of the entire flows and in terms of application and computational efficiency. The study thus introduces a new approach for reducing the effect of extreme flows on the discharge uncertainty assessment of hydrological models via Bayesian inference. Keywords: extreme flow, uncertainty assessment, Bayesian modularization, hydrological model, WASMOD
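A bare-bones random-walk Metropolis-Hastings sampler of the kind used in such Bayesian calibrations is sketched below; the Gaussian likelihood and toy one-parameter model are illustrative assumptions, not the WASMOD setup.

```python
"""Minimal random-walk Metropolis-Hastings sketch for a one-parameter
posterior; the "discharge" data and error model are placeholders.
"""
import numpy as np

rng = np.random.default_rng(4)
obs = rng.normal(2.0, 0.5, size=100)                 # placeholder "discharge"

def log_post(theta):
    # Flat prior on theta; Gaussian error model with known sigma = 0.5.
    return -0.5 * np.sum((obs - theta) ** 2) / 0.5**2

theta, samples = 0.0, []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.2)              # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                                 # accept
    samples.append(theta)

print(np.mean(samples[1000:]), np.std(samples[1000:]))
```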
Entropy in bimolecular simulations: A comprehensive review of atomic fluctuations-based methods.
Kassem, Summer; Ahmed, Marawan; El-Sheikh, Salah; Barakat, Khaled H
2015-11-01
Entropy of binding constitutes a major, and in many cases a detrimental, component of the binding affinity in biomolecular interactions. While the enthalpic part of the binding free energy is easier to calculate, estimating the entropy of binding is far more complicated. A precise evaluation of entropy requires a comprehensive exploration of the complete phase space of the interacting entities. As this task is extremely hard to accomplish in the context of conventional molecular simulations, calculating entropy has involved many approximations. Most of these gold-standard methods have focused on developing a reliable estimation of the conformational part of the entropy. Here, we review these methods with a particular emphasis on the different techniques that extract entropy from atomic fluctuations. The theoretical formalism behind each method is explained, highlighting its strengths as well as its limitations, followed by a description of a number of case studies for each method. We hope that this brief, yet comprehensive, review provides a useful tool to understand these methods and realize the practical issues that may arise in such calculations.
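As one example of the covariance-based techniques such reviews cover, a rough Schlitter-style entropy estimate from atomic fluctuations is sketched below; the trajectory is random placeholder data and unit handling is deliberately minimal (SI throughout).

```python
"""Rough sketch of a Schlitter-style entropy estimate from the
mass-weighted covariance of atomic fluctuations; placeholder data.
"""
import numpy as np

kB   = 1.380649e-23        # J/K
hbar = 1.054571817e-34     # J*s
T    = 300.0               # K

rng = np.random.default_rng(5)
n_atoms, n_frames = 20, 2000
masses = np.full(n_atoms, 12.0 * 1.66054e-27)          # kg, placeholder carbons
# Placeholder trajectory: fluctuations of ~0.05 nm around the mean (metres).
traj = rng.normal(0.0, 5e-11, size=(n_frames, 3 * n_atoms))

X = traj - traj.mean(axis=0)
cov = (X.T @ X) / n_frames                              # positional covariance (m^2)
m3 = np.repeat(masses, 3)                               # mass per Cartesian coordinate
mw_cov = np.sqrt(m3)[:, None] * cov * np.sqrt(m3)[None, :]

# S <= 0.5 kB ln det[1 + kB T e^2/hbar^2 * M sigma], via eigenvalues.
eig = np.linalg.eigvalsh(mw_cov)
eig = eig[eig > 0]
S = 0.5 * kB * np.sum(np.log1p(kB * T * np.e**2 / hbar**2 * eig))
print(f"Schlitter entropy estimate: {S:.3e} J/K")
```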
Identifying Heat Waves in Florida: Considerations of Missing Weather Data.
Leary, Emily; Young, Linda J; DuClos, Chris; Jordan, Melissa M
2015-01-01
Using current climate models, regional-scale changes for Florida over the next 100 years are predicted to include warming over terrestrial areas and very likely increases in the number of high temperature extremes. No uniform definition of a heat wave exists. Most past research on heat waves has focused on evaluating the aftermath of known heat waves, with minimal consideration of missing exposure information. The objective of this study was to identify and discuss methods of handling and imputing missing weather data and how those methods can affect identified periods of extreme heat in Florida. In addition to ignoring missing data, temporal, spatial, and spatio-temporal models are described and utilized to impute missing historical weather data from 1973 to 2012 from 43 Florida weather monitors. Calculated thresholds are used to define periods of extreme heat across Florida. Modeling of missing data and imputing missing values can affect the identified periods of extreme heat, either through the missing data itself or through the computed thresholds. The differences observed are related to the amount of missingness during June, July, and August, the warmest months of the warm season (April through September). Missing data considerations are important when defining periods of extreme heat. Spatio-temporal methods are recommended for data imputation. A heat wave definition that incorporates information from all monitors is advised.
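A small sketch contrasting "ignore the gaps" with simple temporal interpolation before computing an extreme-heat threshold is given below; pandas and the 95th-percentile cut-off are illustrative assumptions, not the study's spatio-temporal models.

```python
"""Sketch: effect of a simple gap-filling choice on a percentile-based
extreme-heat threshold; synthetic daily maximum temperatures.
"""
import numpy as np
import pandas as pd

rng = np.random.default_rng(6)
days = pd.date_range("2010-06-01", "2010-08-31", freq="D")
tmax = pd.Series(32 + 4 * rng.standard_normal(days.size), index=days)
tmax.iloc[rng.choice(days.size, 10, replace=False)] = np.nan   # simulate gaps

thr_ignore = tmax.dropna().quantile(0.95)
thr_interp = tmax.interpolate(method="time").quantile(0.95)
print(f"threshold ignoring gaps: {thr_ignore:.2f}  "
      f"after temporal interpolation: {thr_interp:.2f}")
```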
A perturbation approach for assessing trends in precipitation extremes across Iran
NASA Astrophysics Data System (ADS)
Tabari, Hossein; AghaKouchak, Amir; Willems, Patrick
2014-11-01
Extreme precipitation events have attracted a great deal of attention among the scientific community because of their devastating consequences on human livelihood and socio-economic development. To assess changes in precipitation extremes in a given region, it is essential to analyze decadal oscillations in precipitation extremes. This study examines temporal oscillations in precipitation data in several sub-regions of Iran using a novel quantile perturbation method during 1980-2010. Precipitation data from NASA's Modern-Era Retrospective Analysis for Research and Applications-Land (MERRA-Land) are used in this study. The results indicate significant anomalies in precipitation extremes in the northwest and southeast regions of Iran. Analysis of extreme precipitation perturbations reveals that perturbations for the monthly aggregation level are generally lower than the annual perturbations. Furthermore, high-oscillation and low-oscillation periods are found in extreme precipitation quantiles across different seasons. In all selected regions, a significant anomaly (i.e., extreme wet/dry conditions) in precipitation extremes is observed during spring.
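The basic idea of a quantile perturbation analysis can be sketched as follows: empirical quantiles of extremes in a sliding block are compared with the matching quantiles of the full record; block length, probability levels and data are illustrative assumptions, not the study's configuration.

```python
"""Bare-bones sketch of the quantile-perturbation idea on a synthetic
annual-maximum series; all numbers are placeholders.
"""
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1980, 2011)
annual_max = rng.gamma(shape=4.0, scale=10.0, size=years.size)   # placeholder (mm)

probs = np.array([0.75, 0.9, 0.95])
baseline_q = np.quantile(annual_max, probs)

block = 10
for start in range(0, years.size - block + 1):
    block_q = np.quantile(annual_max[start:start + block], probs)
    perturb = block_q / baseline_q        # >1: wetter-than-baseline extremes
    print(years[start], years[start + block - 1], np.round(perturb, 2))
```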
NASA Technical Reports Server (NTRS)
Milesi, Cristina; Costa-Cabral, Mariza; Rath, John; Mills, William; Roy, Sujoy; Thrasher, Bridget; Wang, Weile; Chiang, Felicia; Loewenstein, Max; Podolske, James
2014-01-01
Water resource managers planning for the adaptation to future events of extreme precipitation now have access to high resolution downscaled daily projections derived from statistical bias correction and constructed analogs. We also show that along the Pacific Coast the Northern Oscillation Index (NOI) is a reliable predictor of storm likelihood, and therefore a predictor of seasonal precipitation totals and of the likelihood of extremely intense precipitation. Such time series can be used to project intensity-duration curves into the future or as input to stormwater models. However, few climate projection studies have explored the impact of the type of downscaling method used on the range and uncertainty of predictions for local flood protection studies. Here we present a study of the future climate flood risk at NASA Ames Research Center, located in the South Bay Area, by comparing the range of predictions of extreme precipitation events calculated from three sets of time series downscaled from CMIP5 data: 1) the Bias Correction Constructed Analogs method dataset downscaled to a 1/8 degree grid (12 km); 2) the Bias Correction Spatial Disaggregation method downscaled to a 1 km grid; 3) a statistical model of extreme daily precipitation events and projected NOI from CMIP5 models. In addition, predicted years of extreme precipitation are used to estimate the risk of overtopping of the retention pond located on the site through simulations of the EPA SWMM hydrologic model. Preliminary results indicate that the intensity of extreme precipitation events is expected to increase and flood the NASA Ames retention pond. The results from these estimations will assist flood protection managers in planning for infrastructure adaptations.
Gao, Yubo
2011-01-01
OBJECTIVE The purpose of this study was to examine the demographic and hospitalization characteristics of children hospitalized with lower extremity fractures in the United States in 2006. METHODS Children aged 0 to 20 years with a diagnosis of lower extremity fracture in the 2006 Healthcare Cost and Utilization Project Kids’ Inpatient Database (KID) were included. Lower extremity fractures were defined by International Classification of Diseases, 9th Revision, Clinical Modification codes 820-829 under “Injury and Poisoning (800-999).” Patient demographic and hospitalization-related data were analyzed by chi-square testing and unbalanced analysis of variance. RESULTS There were more boys than girls with lower extremity fractures, and 53% had private insurance as their primary payer. About one half of the children were between the ages of 13 and 20 years, but all ages from 0 to 20 were represented. White children accounted for 56%. Urban hospitalizations accounted for 93% of cases and 66% of admissions were to teaching hospitals. The average length of stay (LOS) across all patients was 4.04 days, and infant patients had the longest average LOS of 5.46 days. The average number of diagnoses per patient was 3.07, and the average number of procedures per patient was 2.21. The average charge per discharge was $35,236, and the oldest patients had the largest average charge of $41,907. The average number of comorbidities increased with increasing patient age. The mortality risk was 55.6% greater in non-teaching hospitals than in teaching hospitals, and at least ten times higher in rural hospitals than in urban hospitals. CONCLUSIONS This study provides an understanding of the demographic and hospitalization characteristics of children with lower extremity fractures in the United States in 2006. This information may be useful in implementing measures to help prevent similar injuries in the future. Further research is required to determine the causality of the associations found, including the increased mortality risk for this population at rural and non-teaching hospitals. PMID:22096438
14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring
Code of Federal Regulations, 2013 CFR
2013-01-01
... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...
14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring
Code of Federal Regulations, 2011 CFR
2011-01-01
... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...
14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring
Code of Federal Regulations, 2012 CFR
2012-01-01
... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...
14 CFR Appendix D to Part 417 - Flight Termination Systems, Components, Installation, and Monitoring
Code of Federal Regulations, 2014 CFR
2014-01-01
... other propulsion system. D417.5Flight termination system design (a) Reliability prediction. A flight... design margin required by this appendix. As an alternative to subjecting the flight termination system to... the component is heated or cooled to achieve the required dwell time at one extreme of the required...
A maximally stable extremal region based scene text localization method
NASA Astrophysics Data System (ADS)
Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei
2015-07-01
Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of the fitted ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal regions projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
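The MSER candidate-extraction step can be sketched with OpenCV as below; the geometric filter is a crude stand-in for the paper's ellipse-fitting and character-distribution criteria, and the input image path is hypothetical.

```python
"""Sketch of MSER-based character-candidate extraction with OpenCV;
the filtering step below is only a rough placeholder.
"""
import cv2

img = cv2.imread("scene.jpg")                      # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(gray)

candidates = []
for (x, y, w, h) in bboxes:
    aspect = w / float(h)
    if 50 < w * h < 0.1 * gray.size and 0.1 < aspect < 10:   # crude size/shape filter
        candidates.append((int(x), int(y), int(w), int(h)))

for (x, y, w, h) in candidates:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("candidates.png", img)
```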
Characterizing the Spatial Contiguity of Extreme Precipitation over the US in the Recent Past
NASA Astrophysics Data System (ADS)
Touma, D. E.; Swain, D. L.; Diffenbaugh, N. S.
2016-12-01
The spatial characteristics of extreme precipitation over an area can define the hydrologic response in a basin, subsequently affecting the flood risk in the region. Here, we examine the spatial extent of extreme precipitation in the US by defining its "footprint": a contiguous area of rainfall exceeding a certain threshold (e.g., 90th percentile) on a given day. We first characterize the climatology of extreme rainfall footprint sizes across the US from 1980-2015 using Daymet, a high-resolution observational gridded rainfall dataset. We find that there are distinct regional and seasonal differences in average footprint sizes of extreme daily rainfall. In the winter, the Midwest shows footprints exceeding 500,000 sq. km while the Front Range exhibits footprints of 10,000 sq. km. In contrast, the average summer footprint size is generally smaller and more uniform across the US, ranging from 10,000 sq. km in the Southwest to 100,000 sq. km in Montana and North Dakota. Moreover, we find significant increasing trends in average footprint size between 1980 and 2015, specifically in the Southwest in the winter and the Northeast in the spring. While gridded daily rainfall datasets provide a practical framework for calculating footprint size, this calculation heavily depends on the interpolation methods used in creating the dataset. Therefore, we also assess footprint size using the GHCN-Daily station network and use geostatistical methods to define footprints of extreme rainfall directly from station data. Compared to the findings from Daymet, preliminary results using this method show fewer small daily footprints over the US, while large footprints are of similar number and magnitude to those from Daymet. Overall, defining the spatial characteristics of extreme rainfall, as well as the observed and expected changes in these characteristics, allows us to better understand the hydrologic response to extreme rainfall and to better characterize flood risks.
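A footprint computation of this kind can be sketched with a connected-component labelling of grid cells exceeding the local 90th percentile; the gridded field below is synthetic rather than Daymet, and the 1 km cell area is an assumption.

```python
"""Sketch of a daily extreme-precipitation "footprint": label contiguous
cells above the per-cell 90th percentile and report the largest area.
"""
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
daily = rng.gamma(2.0, 3.0, size=(365, 100, 100))        # placeholder record (mm)
thresh = np.percentile(daily, 90, axis=0)                 # per-cell 90th percentile

day = daily[200]                                          # one example day
exceed = day > thresh
labels, n = ndimage.label(exceed)                         # 4-connected components
if n:
    sizes = ndimage.sum(exceed, labels, index=range(1, n + 1))
    cell_area_km2 = 1.0                                   # assumed 1 km grid
    print(f"largest footprint: {sizes.max() * cell_area_km2:.0f} km^2")
```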
Augmented kludge waveforms for detecting extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.
2017-08-01
The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.
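The waveform-overlap figure of merit quoted above can be illustrated with a simple unweighted inner product; a real LISA analysis would weight the inner product by the detector noise spectrum, and the signals below are toy sinusoids rather than kludge waveforms.

```python
"""Toy sketch of a normalised waveform overlap (white-noise inner product)."""
import numpy as np

def overlap(h1, h2):
    # Normalised inner product; 1.0 means identical up to amplitude.
    inner = lambda a, b: np.sum(a * np.conj(b)).real
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))

t = np.linspace(0.0, 3600.0, 10000)
h_fiducial = np.sin(2 * np.pi * 5e-3 * t)
h_kludge = np.sin(2 * np.pi * 5.0005e-3 * t + 0.01)       # slightly mismatched
print(f"overlap = {overlap(h_fiducial, h_kludge):.4f}")
```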
Autonomous satellite command and control: A comparison with other military systems
NASA Technical Reports Server (NTRS)
Kruchten, Robert J.; Todd, Wayne
1988-01-01
Existing satellite concepts of operation depend on readily available experts and are extremely manpower intensive. Areas of expertise required include mission planning, mission data interpretation, telemetry monitoring, and anomaly resolution. The concepts of operation have evolved to their current state in part because space systems have tended to be treated more as research and development assets than as operational assets. These methods of satellite command and control will be inadequate in the future because of the availability, survivability, and capability of human experts. Because space systems have extremely high reliability and limited access, they offer challenges not found in other military systems. Thus, automation techniques used elsewhere are not necessarily applicable to space systems. A program to make satellites much more autonomous has been developed, using a variety of advanced software techniques. The problem the program is addressing, some possible solutions, the goals of the Rome Air Development Center (RADC) program, the rationale as to why the goals are reasonable, and the current program status are discussed. Also presented are some of the concepts used in the program and how they differ from more traditional approaches.
Thermal conduction properties of Mo/Si multilayers for extreme ultraviolet optics
NASA Astrophysics Data System (ADS)
Bozorg-Grayeli, Elah; Li, Zijian; Asheghi, Mehdi; Delgado, Gil; Pokrovsky, Alexander; Panzer, Matthew; Wack, Daniel; Goodson, Kenneth E.
2012-10-01
Extreme ultraviolet (EUV) lithography requires nanostructured optical components, whose reliability can be influenced by radiation absorption and thermal conduction. Thermal conduction analysis is complicated by sub-continuum electron and phonon transport and the lack of thermal property data. This paper measures and interprets thermal property data, and their evolution due to heating exposure, for Mo/Si EUV mirrors with 6.9 nm period and Mo/Si thickness ratios of 0.4/0.6 and 0.6/0.4. We use time-domain thermoreflectance and the 3ω method to estimate the thermal resistance between the Ru capping layer and the Mo/Si multilayers (RRu-Mo/Si = 1.5 m2 K GW-1), as well as the out-of-plane thermal conductivity (kMo/Si ≈ 1.1 W m-1 K-1) and thermal anisotropy (η = 13). This work also reports the impact of annealing on thermal conduction in a co-deposited MoSi2 layer, increasing the thermal conductivity from 1.7 W m-1 K-1 in the amorphous phase to 2.8 W m-1 K-1 in the crystalline phase.
Sport events and climate for visitors—the case of FIFA World Cup in Qatar 2022
NASA Astrophysics Data System (ADS)
Matzarakis, Andreas; Fröhlich, Dominik
2015-04-01
The effect of weather on sport events is not well studied. It requires special attention if the event is taking place at a time and place with extreme weather situations. For the world soccer championship in Qatar (Doha 2022), a human biometeorological analysis has been performed in order to identify the time of the year that is most suitable in terms of thermal comfort for visitors attending the event. The analysis is based on thermal indices like the Physiologically Equivalent Temperature (PET). The results show that this kind of event may not be appropriate for visitors if it is held during months with extreme conditions. For Doha, this is the period from May to September, when conditions during a large majority of hours of the day cause strong heat stress for visitors. A more appropriate time would be the months November to February, when thermally comfortable conditions are much more frequent. The methods applied here can quantify the thermal conditions and show limitations and possibilities for specific events and locations.
Achieving accuracy in first-principles calculations at extreme temperature and pressure
NASA Astrophysics Data System (ADS)
Mattsson, Ann; Wills, John
2013-06-01
First-principles calculations are increasingly used to provide EOS data at pressures and temperatures where experimental data is difficult or impossible to obtain. The lack of experimental data, however, also precludes validation of the calculations in those regimes. Factors influencing the accuracy of first-principles data include theoretical approximations, and computational approximations used in implementing and solving the underlying equations. The first category includes approximate exchange-correlation functionals and wave equations simplifying the Dirac equation. In the second category are, e.g., basis completeness and pseudo-potentials. While the first category is extremely hard to assess without experimental data, inaccuracies of the second type should be well controlled. We are using two rather different electronic structure methods (VASP and RSPt) to make explicit the requirements for accuracy of the second type. We will discuss the VASP Projector Augmented Wave potentials, with examples for Li and Mo. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Zürch, Michael; Foertsch, Stefan; Matzas, Mark; Pachmann, Katharina; Kuth, Rainer; Spielmann, Christian
2014-10-01
In cancer treatment, it is highly desirable to classify single cancer cells in real time. The standard method is polymerase chain reaction requiring a substantial amount of resources and time. Here, we present an innovative approach for rapidly classifying different cell types: we measure the diffraction pattern of a single cell illuminated with coherent extreme ultraviolet (XUV) laser-generated radiation. These patterns allow distinguishing different breast cancer cell types in a subsequent step. Moreover, the morphology of the object can be retrieved from the diffraction pattern with submicron resolution. In a proof-of-principle experiment, we prepared single MCF7 and SKBR3 breast cancer cells on gold-coated silica slides. The output of a laser-driven XUV light source is focused onto a single unstained and unlabeled cancer cell. With the resulting diffraction pattern, we could clearly identify the different cell types. With an improved setup, it will not only be feasible to classify circulating tumor cells with a high throughput, but also to identify smaller objects such as bacteria or even viruses.
Morality Principles for Risk Modelling: Needs and Links with the Origins of Plausible Inference
NASA Astrophysics Data System (ADS)
Solana-Ortega, Alberto; Solana, Vicente
2009-12-01
In comparison with the foundations of probability calculus, the inescapable and controversial issue of how to assign probabilities has only recently become a matter of formal study. The introduction of information as a technical concept was a milestone, but the most promising entropic assignment methods still face unsolved difficulties, manifesting the incompleteness of plausible inference theory. In this paper we examine the situation faced by risk analysts in the critical field of extreme events modelling, where the former difficulties are especially visible, due to scarcity of observational data, the large impact of these phenomena and the obligation to assume professional responsibilities. To respond to the claim for a sound framework to deal with extremes, we propose a metafoundational approach to inference, based on a canon of extramathematical requirements. We highlight their strong moral content, and show how this emphasis in morality, far from being new, is connected with the historic origins of plausible inference. Special attention is paid to the contributions of Caramuel, a contemporary of Pascal, unfortunately ignored in the usual mathematical accounts of probability.
Human survivability of extreme impacts in free-fall.
DOT National Transportation Integrated Search
1963-08-01
Human deceleration tolerances beyond the limits imposed by voluntary experimental methods were studied by means of intensive case histories of 137 individuals who have survived extremely abrupt impacts in accidental, suicidal, and homicidal free-fall...
NASA Technical Reports Server (NTRS)
1977-01-01
A lap in this instance is not a midriff but a tool for precision polishing and grinding. During the Saturn V moonbooster program, Marshall Space Flight Center found a need for a better lap. The need arose from the exquisitely precise tolerances required for parts of the launch vehicle's guidance and control system. So William J. Abernathy, a former Marshall employee, built a better lap; he invented a method for charging aluminum lap plates with diamond powder, then hard-anodizing them. The resulting lap produces a high polish on materials ranging from the softest aluminum to the hardest ceramics. It operates faster, wears longer and requires less reworking. Abernathy got NASA's permission to obtain a personal patent and he formed the one-man Abernathy Laps Co. in Huntsville, which produces a variety of laps. One of Abernathy's customers is Bell Aerospace Textron, Buffalo, which uses the laps to finish polish delicate instrument parts produced for NASA's Viking and other space programs. Says a Bell official: "Time needed (with the Abernathy lap) is a fraction of that required by conventional methods. The result is extremely accurate flatness and surface finish." Abernathy is providing laps for other manufacturing applications and for preparation of metallurgical specimens. The business is small but steady, and Abernathy plans expansion into other markets.
Code of Federal Regulations, 2012 CFR
2012-01-01
... grams) Objectionable seeds (number in 500 grams) Chalky kernels 1,3 (percent) Color requirements 1...) has any commercially objectionable foreign odor; (e) has a badly damaged or extremely red appearance...
Grenier, Jordane G.; Millet, Guillaume Y.; Peyrot, Nicolas; Samozino, Pierre; Oullion, Roger; Messonnier, Laurent; Morin, Jean-Benoît
2012-01-01
Trekking and military missions generally consist of carrying heavy loads for extreme durations. These factors have been separately shown to be sources of neuromuscular (NM) fatigue and locomotor alterations. However, the question of their combined effects remains unresolved, and addressing this issue required a representative context. Purpose The aim was to investigate the effects of extreme-duration heavy load carriage on NM function and walking characteristics. Methods Ten experienced infantrymen performed a 21-h simulated military mission (SMM) in a middle-mountain environment with equipment weighing ∼27 kg during battles and ∼43 kg during marches. NM function was evaluated for knee extensors (KE) and plantar flexors (PF) pre- and immediately post-SMM using isometric maximal voluntary contraction (MVC) measurement, neural electrical stimulation and surface EMG. The twitch-interpolation method was used to assess central fatigue. Peripheral changes were examined by stimulating the muscle in the relaxed state. The energy cost, mechanical work and spatio-temporal pattern of walking were also evaluated pre-/post-SMM on an instrumented treadmill in three equipment conditions: Sportswear, Battle and March. Results After the SMM, MVC declined by −10.2±3.6% for KE (P<0.01) and −10.7±16.1% for PF (P = 0.06). The origin of fatigue was essentially peripheral for both muscle groups. A trend toward low-frequency fatigue was detected for KE (5.5%, P = 0.08). These moderate NM alterations were concomitant with a large increase in perceived fatigue from pre- (rating of 8.3±2.2) to post-SMM (15.9±2.1, P<0.01). The SMM-related fatigue did not alter walking energetics or mechanics, and the different equipment carried on the treadmill did not interact with this fatigue either. Conclusion This study reports the first data on the physiological and biomechanical consequences of extreme-duration heavy load carriage. Unexpectedly, NM function alterations due to the 21-h SMM were moderate and did not alter walking characteristics. Clinical Trial Registration Name: Effect of prolonged military exercises with high load carriage on neuromuscular fatigue and physiological/biomechanical responses. Number: NCT01127191. PMID:22927995
Quinn, Terrance; Sinkala, Zachariah
2014-01-01
We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
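Fitting Gumbel parameters to a sample of alignment scores, as one would do to calibrate significance estimates, can be sketched as follows; the scores are simulated placeholders rather than BLOSUM-based gapped alignments.

```python
"""Sketch of fitting Gumbel (extreme-value) parameters to alignment
scores and computing a tail probability; synthetic scores only.
"""
import numpy as np
from scipy import stats

rng = np.random.default_rng(9)
scores = rng.gumbel(loc=30.0, scale=6.0, size=2000)   # placeholder alignment scores

loc, scale = stats.gumbel_r.fit(scores)
# P-value of an observed score s under the fitted null distribution.
s = 55.0
p_value = stats.gumbel_r.sf(s, loc=loc, scale=scale)
print(f"mu = {loc:.2f}, beta = {scale:.2f}, P(score >= {s}) = {p_value:.2e}")
```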
Environmental Conditions for Space Flight Hardware: A Survey
NASA Technical Reports Server (NTRS)
Plante, Jeannette; Lee, Brandon
2005-01-01
Interest in generalizing the physical environment experienced by NASA hardware, spanning the natural Earth environment (on the launch pad), the man-made environment on Earth (storage, acceptance and qualification testing), the launch environment, and the space environment, is aimed at finding commonality among our hardware in an effort to reduce cost and complexity. NASA is entering a period of increase in its number of planetary missions and it is important to understand how our qualification requirements will evolve with and track these new environments. Environmental conditions are described for NASA projects in several ways for the different periods of the mission life cycle. At the beginning, the mission manager defines survivability requirements based on the mission length, orbit, launch date, launch vehicle, and other factors, such as the use of reactor engines. Margins are then applied to these values (temperature extremes, vibration extremes, radiation tolerances, etc.) and a new set of conditions is generalized for design requirements. Mission assurance documents will then assign an additional margin for reliability, and a third set of values is provided for use during testing. A fourth set of environmental condition values may evolve intermittently from heritage hardware that has been tested to a level beyond the actual mission requirement. These various sets of environment figures can make it quite confusing and difficult to capture common hardware environmental requirements. Environmental requirement information can be found in a wide variety of places. The most obvious is with the individual projects. We can easily get answers to questions about temperature extremes being used and radiation tolerance goals, but it is more difficult to map the answers to the process that created these requirements: for design, for qualification, and for the actual environment with no margin applied. Not everyone assigned to a NASA project may have that kind of insight, as many have only the environmental requirement numbers needed to do their jobs but do not necessarily have a programmatic-level understanding of how all of the environmental requirements fit together.
Experience of 14 years of emergency reconstruction of electrical injuries.
Zhu, Zhi-Xiang; Xu, Xiao-Guang; Li, Wei-Ping; Wang, Dao-Xin; Zhang, Li-Yong; Chen, Li-Ying; Liu, Tian-yi
2003-02-01
Although there have been great advances in the treatment of electrical injuries in the last 20 years, the extremity loss ratio in electrical injuries remains at an unacceptably high level. The primary cause is progressive tissue necrosis, which results in the continuous extension of necrosis in the wound, leading to loss of the whole injured extremity. This study reports attempts to break the dangerous tissue necrosis cycle and save the form and function of damaged extremities. After 14 years of systematic experimental and clinical studies a successful comprehensive urgent reconstruction alternative (CURA) for electrical injuries is proposed. CURA includes: debriding the wound as early as possible after injury; preserving the vital tissue structures as much as possible, such as nerves, vessels, joints, tendons and bone, even though they have undergone devitalization or local necrosis; repairing these vital tissues during the first surgery if functional reconstruction requires it; protecting the wound bed by covering with tissue flaps of rich blood supply; improving flap survival through moist dressings supported by continuous irrigation beneath the flaps for a 24-72 h period after surgery, with measures to control local infection; and last, giving general systemic treatment with vasoactive agents and antibiotics. Four hundred and fifty-nine wounds in 155 patients suffering from electrical injuries were successfully treated with this technique between 1986 and 2000 and are reported in this paper. Satisfactory results were obtained, with the extremity loss proportion reduced to less than 9% compared with 41.5% during the 10 years before 1984 in the same hospital. The authors suggest that CURA is an effective and workable method for treatment of electrical injuries.
Work activities and musculoskeletal complaints among preschool workers.
Grant, K A; Habes, D J; Tepper, A L
1995-12-01
The potential for musculoskeletal trauma among preschool workers has been largely unexplored in the United States. This case report describes an investigation conducted to identify and evaluate possible causes of back and lower extremity pain among 22 workers at a Montessori day care facility. Investigators met with and distributed a questionnaire to school employees, and made measurements of workstation and furniture dimensions. Investigators also recorded the normal work activities of school employees on videotape, and performed a work sampling study to estimate the percentage of time employees spend performing various tasks and in certain postures. Questionnaire results from 18 employees indicated that back pain/discomfort was a common musculoskeletal complaint, reported by 61% of respondents. Neck/shoulder pain, lower extremity pain and hand/wrist pain were reported by 33, 33 and 11% of respondents, respectively. Observation and analysis of work activities indicated that employees spend significant periods of time kneeling, sitting on the floor, squatting, or bending at the waist. Furthermore, staff members who work with smaller children (i.e. six weeks to 18 months of age) performed more lifts and assumed more awkward lower extremity postures than employees who work with older children (3-4 years of age). Analysis of two lifting tasks using the revised NIOSH lifting equation indicated that employees who handle small children may be at increased risk of lifting-related low back pain. Investigators concluded that day care employees at this facility are at increased risk of low back pain and lower extremity (i.e. knee) injury due to work activities that require awkward or heavy lifts, and static working postures. Recommendations for reducing or eliminating these risks by modifying the workplace and changing the organization and methods of work are presented.
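The revised NIOSH lifting equation cited above reduces a lift to a Recommended Weight Limit (RWL) and a Lifting Index (LI). The sketch below is a generic, hedged illustration of that calculation, not the investigators' analysis: the task geometry is hypothetical, and the frequency and coupling multipliers, which normally come from the published NIOSH tables, are supplied as example constants.

```python
# Illustrative sketch of the revised NIOSH lifting equation (metric form).
# Task values are hypothetical; FM and CM would normally be looked up in the
# published NIOSH frequency and coupling tables.
def recommended_weight_limit(H, V, D, A, FM=0.88, CM=0.95):
    """RWL in kg. H, V, D in cm; A (asymmetry angle) in degrees."""
    LC = 23.0                         # load constant, kg
    HM = min(25.0 / H, 1.0)           # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75.0)  # vertical multiplier
    DM = min(0.82 + 4.5 / D, 1.0)     # distance multiplier
    AM = 1.0 - 0.0032 * A             # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Hypothetical lift of a 10 kg child from near floor level with a 30-degree twist
rwl = recommended_weight_limit(H=40, V=20, D=55, A=30)
lifting_index = 10.0 / rwl            # LI > 1 suggests elevated low-back-injury risk
print(f"RWL = {rwl:.1f} kg, LI = {lifting_index:.2f}")
```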
Identification of Extremely Premature Infants at High Risk of Rehospitalization
Carlo, Waldemar A.; McDonald, Scott A.; Yao, Qing; Das, Abhik; Higgins, Rosemary D.
2011-01-01
OBJECTIVE: Extremely low birth weight infants often require rehospitalization during infancy. Our objective was to identify at the time of discharge which extremely low birth weight infants are at higher risk for rehospitalization. METHODS: Data from extremely low birth weight infants in Eunice Kennedy Shriver National Institute of Child Health and Human Development Neonatal Research Network centers from 2002–2005 were analyzed. The primary outcome was rehospitalization by the 18- to 22-month follow-up, and secondary outcome was rehospitalization for respiratory causes in the first year. Using variables and odds ratios identified by stepwise logistic regression, scoring systems were developed with scores proportional to odds ratios. Classification and regression-tree analysis was performed by recursive partitioning and automatic selection of optimal cutoff points of variables. RESULTS: A total of 3787 infants were evaluated (mean ± SD birth weight: 787 ± 136 g; gestational age: 26 ± 2 weeks; 48% male, 42% black). Forty-five percent of the infants were rehospitalized by 18 to 22 months; 14.7% were rehospitalized for respiratory causes in the first year. Both regression models (area under the curve: 0.63) and classification and regression-tree models (mean misclassification rate: 40%–42%) were moderately accurate. Predictors for the primary outcome by regression were shunt surgery for hydrocephalus, hospital stay of >120 days for pulmonary reasons, necrotizing enterocolitis stage II or higher or spontaneous gastrointestinal perforation, higher fraction of inspired oxygen at 36 weeks, and male gender. By classification and regression-tree analysis, infants with hospital stays of >120 days for pulmonary reasons had a 66% rehospitalization rate compared with 42% without such a stay. CONCLUSIONS: The scoring systems and classification and regression-tree analysis models identified infants at higher risk of rehospitalization and might assist planning for care after discharge. PMID:22007016
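The modeling strategy described (stepwise logistic regression with a score proportional to the odds ratios, plus classification and regression trees) can be illustrated generically. The sketch below uses synthetic binary predictors loosely named after the reported risk factors; it is not the study's data, variable definitions, or fitted models.

```python
# Hedged sketch of the general modeling approach (not the study's data or model):
# fit a logistic regression and a classification tree to synthetic binary
# predictors of rehospitalization, then report discrimination (ROC AUC).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 3000
# Hypothetical predictors: long pulmonary stay, NEC/perforation, shunt surgery, male sex
X = rng.integers(0, 2, size=(n, 4))
logit = -1.0 + 1.0 * X[:, 0] + 0.7 * X[:, 1] + 0.6 * X[:, 2] + 0.3 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # simulated outcome

logreg = LogisticRegression().fit(X, y)
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100).fit(X, y)

print("odds ratios:", np.round(np.exp(logreg.coef_[0]), 2))
print("logistic AUC:", round(roc_auc_score(y, logreg.predict_proba(X)[:, 1]), 2))
print("tree AUC:", round(roc_auc_score(y, tree.predict_proba(X)[:, 1]), 2))
```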
Dornseifer, Ulf; Kleeberger, Charlotte; Kargl, Lukas; Schönberger, Markus; Rohde, Daniel; Ninkovic, Milomir; Schilling, Arndt
2017-03-01
Background: The current standard to gradually adapt the fragile perfusion in lower extremity free flaps to an upright posture is the dangling maneuver. This type of flap training neither fits the orthostatic target load of an upright posture, nor does it help mobilize patients effectively. In this study, we quantitatively analyzed the training effects of early and full mobilization on flap perfusion. Methods: A total of 15 patients with gracilis flaps for distal lower extremity reconstruction were included. Flap training was performed daily by mobilizing the patients on a tilt table into a fully upright posture for 5 minutes between the third and fifth postoperative days (PODs). Changes in micro- and macrocirculation were analyzed by laser Doppler flowmetry, remission spectroscopy, and an implanted Doppler probe. Results: All flaps healed without complications. Yet, in three patients, the increased orthostatic load required an adjustment of the training duration due to critically low blood flow. The others showed increasing compensation in the microcirculation. When tilting the patients, blood flow and oxygen saturation dropped significantly less on POD5 than on POD3. Furthermore, a significant increase of the blood flow was noted after an initial decrease during the mobilization on all days. An increasing compensation in the macrocirculation could not be determined. Conclusion: Full mobilization of patients with lower extremity free flaps can be performed safely under perfusion monitoring, starting as early as POD3. Additionally, monitoring allows consideration of the individual orthostatic competence and, therefore, exploitation of the maximum mobilization potential.
Designing instrumented walker to measure upper-extremity's efforts: A case study.
Khodadadi, Mohammad; Baniasad, Mina Arab; Arazpour, Mokhtar; Farahmand, Farzam; Zohoor, Hassan
2018-02-26
Shoulder pain is highly prevalent among walker users with spinal cord injury (SCI), and options for economically measuring grip forces in walkers are limited, which drove the need to create one. This article describes a method to obtain upper-extremity forces and moments in a person with SCI by designing an appropriate instrumented walker. First, since commercial multidirectional loadcells are too expensive, custom loadcells were fabricated. A complete gait analysis was then performed with a VICON motion analysis system, using the inverse dynamics method to compute upper-extremity efforts. The results for a person with SCI using a two-wheel walker at low and high heights and a basic walker show higher shoulder and elbow flexion-extension moments, higher shoulder forces in the superior-inferior direction, and higher elbow and wrist forces in the anterior-posterior direction. The results do not differ greatly between the two types of walker. Using the proposed method, upper-extremity forces and moments were obtained and compared between the two walkers.
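The paper's custom loadcells are not described in detail here, but a common way to turn raw bridge voltages from a multi-axis loadcell into forces and moments is a least-squares calibration matrix estimated from known applied loads. The sketch below illustrates that generic step on assumed synthetic data; it is not the authors' calibration procedure.

```python
# Assumed illustration (not the paper's procedure): least-squares calibration of a
# custom six-channel loadcell, mapping raw bridge voltages to forces and moments
# using a set of known applied loads. All numbers are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_loads = 6, 40
true_S = rng.normal(size=(n_channels, 6))             # hypothetical sensitivity (V per unit load)
applied = rng.uniform(-100, 100, size=(n_loads, 6))   # known [Fx, Fy, Fz, Mx, My, Mz]
voltages = applied @ true_S.T + rng.normal(0, 1e-3, size=(n_loads, n_channels))

# Calibration matrix C such that load ≈ voltage @ C (ordinary least squares)
C, *_ = np.linalg.lstsq(voltages, applied, rcond=None)
residual = np.abs(voltages @ C - applied).max()
print(f"max calibration residual: {residual:.4f}")
```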
Comparison of iSTAT and EPOC Blood Analyzers
2017-10-25
requires accurate blood analysis across a range of environmental conditions and, in extreme circumstances, use beyond the expiration date. We compared... analysis across a range of environmental conditions and, in extreme circumstances, use beyond the expiration date. We compared gold standard laboratory...temperatures for either device can result in spurious results, particularly for blood gases. 2.0 BACKGROUND Blood analysis is a critical aspect of
Powder-Metallurgical Bearings For Turbopumps
NASA Technical Reports Server (NTRS)
Bhat, B. N.; Humphries, T. S.; Thom, R. L.; Moxson, V.; Friedman, G. I.; Dolan, F. J.; Shipley, R. J.
1993-01-01
Bearings fabricated by powder metallurgy developed for use in machines subjected to extremes of temperature, rolling-contact cyclic stresses, and oxidizing or otherwise corrosive fluids. Bearings also extend operating lives of other machines in which bearings required to resist extreme thermal, mechanical, and chemical stresses. One alloy exhibiting outstanding properties was MRC-2001. Resistance to fatigue, stress corrosion cracking, and wear found superior to that of 440C stainless steel.
Lorantfy, Bettina; Seyer, Bernhard; Herwig, Christoph
2014-01-25
Extreme halophilic Archaea are extremophiles that thrive in hypersaline environments of up to 3-5 M sodium chloride concentration. Although their ecology and physiology are well characterized at the microbiological level, little emphasis has been placed on quantitative bioprocess development with extreme halophiles. The goal of this study was to establish, on the one hand, a methodological basis for quantitative bioprocess analysis of extreme halophilic Archaea, using an extreme halophilic strain as an example. First, a corrosion-resistant bioreactor setup, a novel requirement for extreme halophiles, was implemented. Then, paying special attention to total bioprocess quantification approaches, an indirect method for biomass quantification using on-line process signals was introduced. Subsequently, robust quantitative data evaluation methods for halophiles could be developed, providing defined and controlled cultivation conditions in the bioreactor and therefore suitable quality of on-line as well as off-line datasets. On the other hand, new physiological results for extreme halophiles in the bioreactor were also obtained based on the quantitative methodological tools. For the first time, quantitative data on stoichiometry and kinetics were collected and evaluated on different carbon sources. The results on various substrates were interpreted, with proposed metabolic mechanisms, by linking them to the reported primary carbon metabolism of extreme halophilic Archaea. Moreover, results of chemostat cultures demonstrated that extreme halophilic organisms show Monod kinetics on different sole carbon sources. A diauxic growth pattern was described on a mixture of substrates in batch cultivations. In addition, the methodologies presented here enable one to characterize the utilized strain Haloferax mediterranei (HFX) as a potential new host organism. Thus, this study offers a strong methodological basis as well as a fundamental physiological assessment for bioreactor quantification of extreme halophiles that can serve as primary knowledge for applications of extreme halophiles in biotechnology.
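The reported Monod behavior in chemostat culture lends itself to a simple parameter-fitting sketch: at steady state the specific growth rate equals the dilution rate D, so mu_max and Ks can be estimated from residual substrate concentrations measured at several dilution rates. The data and parameter values below are hypothetical and only illustrate the fitting step, not the study's measurements.

```python
# Hedged sketch with hypothetical data (not the study's): fit Monod parameters
# from chemostat steady states, where the growth rate equals the dilution rate D.
import numpy as np
from scipy.optimize import curve_fit

def monod(S, mu_max, Ks):
    """Monod specific growth rate as a function of residual substrate S."""
    return mu_max * S / (Ks + S)

# Hypothetical steady-state data: residual substrate S (g/L) at dilution rate D (1/h)
S = np.array([0.05, 0.12, 0.25, 0.60, 1.50])
D = monod(S, 0.11, 0.30) + np.random.default_rng(3).normal(0, 0.002, S.size)

(mu_max, Ks), _ = curve_fit(monod, S, D, p0=[0.1, 0.2])
print(f"mu_max = {mu_max:.3f} 1/h, Ks = {Ks:.3f} g/L")
```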
A comparative assessment of statistical methods for extreme weather analysis
NASA Astrophysics Data System (ADS)
Schlögl, Matthias; Laaha, Gregor
2017-04-01
Extreme weather exposure assessment is of major importance for scientists and practitioners alike. We compare different extreme value approaches and fitting methods with respect to their value for assessing extreme precipitation and temperature impacts. Based on an Austrian data set from 25 meteorological stations representing diverse meteorological conditions, we assess the added value of partial duration series over the standardly used annual maxima series in order to give recommendations for performing extreme value statistics of meteorological hazards. Results show the merits of the robust L-moment estimation, which yielded better results than maximum likelihood estimation in 62 % of all cases. At the same time, the results question the general assumption that the threshold excess approach (employing partial duration series, PDS) is superior to the block maxima approach (employing annual maxima series, AMS) due to information gain. For low return periods (non-extreme events) the PDS approach tends to overestimate return levels as compared to the AMS approach, whereas the opposite behavior was found for high return levels (extreme events). In extreme cases, an inappropriate threshold was shown to lead to considerable biases that may far outweigh the possible gain of information from including additional extreme events. This effect was visible neither from the square-root criterion nor from the standardly used graphical diagnosis (mean residual life plot), but only from a direct comparison of AMS and PDS in synoptic quantile plots. We therefore recommend performing the AMS and PDS approaches simultaneously in order to select the best-suited approach. This will make the analyses more robust, not only in cases where threshold selection and dependency introduce biases to the PDS approach, but also in cases where the AMS contains non-extreme events that may introduce similar biases. For assessing the performance of extreme events we recommend conditional performance measures that focus on rare events only, in addition to standardly used unconditional indicators. The findings of this study are of relevance for a broad range of environmental variables, including meteorological and hydrological quantities.
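The AMS/PDS comparison discussed above can be sketched on synthetic data: a GEV distribution fitted to annual maxima and a generalized Pareto distribution fitted to threshold excesses, each used to estimate the same return level. Note that scipy's fits are maximum likelihood; the L-moment estimation favored in the paper would require an additional package (e.g. lmoments3). All values below are synthetic, not the Austrian station data.

```python
# Hedged illustration of the AMS vs. PDS comparison on synthetic daily rainfall:
# GEV fitted to annual maxima and GPD fitted to threshold excesses, each used to
# estimate the 100-year return level.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
years, days = 50, 365
daily = stats.gamma.rvs(a=0.4, scale=8.0, size=(years, days), random_state=rng)  # synthetic mm/day

T = 100.0                                     # return period in years

# AMS / block maxima: GEV fitted to annual maxima
ams = daily.max(axis=1)
c, loc, scale = stats.genextreme.fit(ams)
rl_ams = stats.genextreme.ppf(1.0 - 1.0 / T, c, loc=loc, scale=scale)

# PDS / threshold excesses: GPD fitted above a high threshold
u = np.quantile(daily, 0.99)                  # threshold choice is itself critical (see text)
excess = daily[daily > u] - u
xi, _, sigma = stats.genpareto.fit(excess, floc=0.0)
rate = excess.size / years                    # mean number of exceedances per year
rl_pds = u + stats.genpareto.ppf(1.0 - 1.0 / (rate * T), xi, loc=0.0, scale=sigma)

print(f"100-year return level: AMS/GEV {rl_ams:.1f} mm, PDS/GPD {rl_pds:.1f} mm")
```

As the abstract warns, the PDS estimate is sensitive to the threshold choice; here the empirical 99th percentile is used purely for illustration.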
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in the subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are demonstrated using a sample problem in slope stability, which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed based on as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
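The idea is easy to sketch in a toy setting: a handful of runs of an "expensive" model are used to fit a quadratic response surface, which then carries the full Monte Carlo analysis. The model, design points, and input distributions below are hypothetical stand-ins, not the report's slope-stability code or centrifuge data.

```python
# Hedged toy example (not the report's slope-stability code): fit a quadratic
# response surface to a few runs of an "expensive" model of two random inputs,
# then use the cheap surrogate for a Monte Carlo probability estimate.
import numpy as np

def expensive_model(x1, x2):
    # Stand-in for the long-running code; returns a hypothetical safety factor
    return 1.8 - 0.4 * x1 - 0.3 * x2 + 0.05 * x1 * x2

# Small design of code calculations (9 points on a 3 x 3 grid)
pts = np.array([(a, b) for a in (-1.0, 0.0, 1.0) for b in (-1.0, 0.0, 1.0)])
y = np.array([expensive_model(a, b) for a, b in pts])

# Quadratic response surface y ≈ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
X = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1],
                     pts[:, 0]**2, pts[:, 1]**2, pts[:, 0] * pts[:, 1]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Monte Carlo on the surrogate only: probability that the safety factor drops below 1
rng = np.random.default_rng(5)
s1, s2 = rng.normal(0.0, 1.0, size=(2, 100_000))
sf = (beta[0] + beta[1]*s1 + beta[2]*s2
      + beta[3]*s1**2 + beta[4]*s2**2 + beta[5]*s1*s2)
print("P(safety factor < 1) ≈", (sf < 1.0).mean())
```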
Atomic-resolution transmission electron microscopy of electron beam–sensitive crystalline materials
NASA Astrophysics Data System (ADS)
Zhang, Daliang; Zhu, Yihan; Liu, Lingmei; Ying, Xiangrong; Hsiung, Chia-En; Sougrat, Rachid; Li, Kun; Han, Yu
2018-02-01
High-resolution imaging of electron beam–sensitive materials is one of the most difficult applications of transmission electron microscopy (TEM). The challenges are manifold, including the acquisition of images with extremely low beam doses, the time-constrained search for crystal zone axes, the precise image alignment, and the accurate determination of the defocus value. We develop a suite of methods to fulfill these requirements and acquire atomic-resolution TEM images of several metal organic frameworks that are generally recognized as highly sensitive to electron beams. The high image resolution allows us to identify individual metal atomic columns, various types of surface termination, and benzene rings in the organic linkers. We also apply our methods to other electron beam–sensitive materials, including the organic-inorganic hybrid perovskite CH3NH3PbBr3.
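One of the listed ingredients, precise alignment of low-dose images, is commonly handled by cross-correlation-based drift correction followed by frame summation. The sketch below illustrates that generic step with an FFT phase-correlation estimator on synthetic frames; it is an assumption-laden stand-in, not the authors' actual acquisition or alignment pipeline.

```python
# Hedged stand-in (not the authors' pipeline): drift correction of low-dose frames
# by FFT phase correlation, followed by shift-and-sum to accumulate signal.
import numpy as np
from scipy.ndimage import shift as nd_shift

def estimate_drift(ref, img):
    """Integer-pixel drift (dy, dx) such that img is approximately ref shifted by (dy, dx)."""
    F = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above N/2 correspond to negative (wrapped) shifts
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

def align_and_sum(frames):
    """Register every frame onto the first one and sum them."""
    ref = frames[0].astype(float)
    total = ref.copy()
    for img in frames[1:]:
        dy, dx = estimate_drift(ref, img)
        total += nd_shift(img.astype(float), (-dy, -dx), order=1, mode="nearest")
    return total

# Tiny synthetic demo: four noisy frames of the same pattern with growing drift
rng = np.random.default_rng(6)
base = rng.random((128, 128))
frames = [nd_shift(base, (k, 2 * k)) + rng.normal(0, 0.05, base.shape) for k in range(4)]
summed = align_and_sum(frames)
```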