Calendar methods of fertility regulation: a rule of thumb.
Colombo, B; Scarpa, B
1996-01-01
"[Many] illiterate women, particularly in the third world, find [it] difficult to apply usual calendar methods for the regulation of fertility. Some of them are even unable to make simple subtractions. In this paper we are therefore trying to evaluate the applicability and the efficiency of an extremely simple rule which entails only [the ability to count] a number of days, and always the same way." (SUMMARY IN ITA) excerpt
ERIC Educational Resources Information Center
Britos, Leticia; Goyenola, Guillermo; Orono, Silvia Umpierrez
2004-01-01
An extremely simple, inexpensive, and safe method is presented, which emulates nucleic acids isolation and electrophoretic analysis as performed in a research environment, in the context of a secondary school hands-on activity. The protocol is amenable to an interdisciplinary approach, taking into consideration the electrical and chemical…
Jig For Stereoscopic Photography
NASA Technical Reports Server (NTRS)
Nielsen, David J.
1990-01-01
Separations between views adjusted precisely for best results. Simple jig adjusted to set precisely, distance between right and left positions of camera used to make stereoscopic photographs. Camera slides in slot between extreme positions, where it takes stereoscopic pictures. Distance between extreme positions set reproducibly with micrometer. In view of trend toward very-large-scale integration of electronic circuits, training method and jig used to make training photographs useful to many companies to reduce cost of training manufacturing personnel.
Kwan, Johnny S H; Kung, Annie W C; Sham, Pak C
2011-09-01
Selective genotyping can increase power in quantitative trait association. One example of selective genotyping is two-tail extreme selection, but simple linear regression analysis gives a biased genetic effect estimate. Here, we present a simple correction for the bias.
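The bias that the correction addresses is easy to reproduce by simulation; the trait model, allele frequency, selection fraction, and `ols_slope` helper below are illustrative assumptions, and the authors' actual correction is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated quantitative trait: y = beta*g + e, with an additive genotype g
n, beta = 200_000, 0.3
g = rng.binomial(2, 0.3, size=n).astype(float)   # allele count 0/1/2
y = beta * g + rng.normal(size=n)

# Two-tail extreme selection: genotype only the bottom and top 10% of y
lo, hi = np.quantile(y, [0.10, 0.90])
sel = (y <= lo) | (y >= hi)

def ols_slope(x, t):
    """Slope of a simple linear regression of t on x."""
    xc = x - x.mean()
    return (xc * (t - t.mean())).sum() / (xc * xc).sum()

full_est = ols_slope(g, y)           # close to the true beta
sel_est = ols_slope(g[sel], y[sel])  # inflated under two-tail extreme selection
```

Running this shows the naive slope in the selected sample is several times the true effect, which is the bias the paper corrects.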
Hedman, Travis L; Chapman, Ted T; Dewey, William S; Quick, Charles D; Wolf, Steven E; Holcomb, John B
2007-01-01
Burn therapists routinely are tasked to position the lower extremities of burn patients for pressure ulcer prevention, skin graft protection, donor site ventilation, and edema reduction. We developed two durable and low-maintenance devices that allow effective positioning of the lower extremities. The high-profile and low-profile leg net devices were simple to fabricate and maintain. The frame was assembled using a three-quarter-inch diameter copper pipe and copper fittings (45 degrees, 90 degrees, and tees). A double layer of elasticized tubular netting was pulled over the frame and doubled back for leg support to complete the devices. The devices can be placed on any bed surface. The netting can be exchanged when soiled and the frame can be disinfected between patients using standard techniques. Both devices were used on approximately 250 patients for a total of 1200 treatment days. No incidence of pressure ulcer was observed, and graft take was not adversely affected. The devices have not required repairs or replacement. Medical providers reported they are easy to apply and effectively maintain proper positioning throughout application. Neither device interfered with the application of other positioning devices. Both devices were found to be an effective method of positioning lower extremities to prevent pressure ulcer, minimize graft loss and donor site morbidity, and reduce edema. The devices allowed for proper wound ventilation and protected grafted lower extremities on any bed surface. The devices are simple to fabricate and maintain. Both devices can be effectively used simultaneously with other positioning devices.
Our method of correcting cryptotia.
Yanai, A; Tange, I; Bandoh, Y; Tsuzuki, K; Sugino, H; Nagata, S
1988-12-01
Our technique for the correction of cryptotia using both Z-plasty and the advancement flap is described. The main advantages are the simple design of the skin incision and the possibility of its application to cryptotia other than severe cartilage deformity and extreme lack of skin.
ROOM TEMPERATURE BULK AND TEMPLATE-FREE SYNTHESIS OF LEUCOEMERALDINE POLYANILINE NANOFIBERS
An extremely simple single-step method is described for the bulk synthesis of nanofibers of the electronic polymer polyaniline in the fully reduced state (leucoemeraldine form) without using any reducing agents, surfactants, and/or large amounts of insoluble templates. Chemical oxida...
NASA Astrophysics Data System (ADS)
Tichý, Vladimír; Hudec, René; Němcová, Šárka
2016-06-01
The algorithm presented is intended mainly for lobster eye optics. This type of optics (and some similar types) allows for a simplification of the classical ray-tracing procedure, which requires a great many rays to simulate. The method presented simulates only a few rays and is therefore extremely efficient. Moreover, a specific mathematical formalism is used to simplify the equations. Only a few simple equations are needed, so the program code can be simple as well. The paper also outlines how to apply the method to some other reflective optical systems.
Building Flexible User Interfaces for Solving PDEs
NASA Astrophysics Data System (ADS)
Logg, Anders; Wells, Garth N.
2010-09-01
FEniCS is a collection of software tools for the automated solution of differential equations by finite element methods. In this note, we describe how FEniCS can be used to solve a simple nonlinear model problem with varying levels of automation. At one extreme, FEniCS provides tools for the fully automated and adaptive solution of nonlinear partial differential equations. At the other extreme, FEniCS provides a range of tools that allow the computational scientist to experiment with novel solution algorithms.
USDA-ARS?s Scientific Manuscript database
Recently several isothermal amplification techniques have been developed that are extremely tolerant towards inhibitors present in many plant extracts. Recombinase polymerase amplification (RPA) assays for the genus Phytophthora have been developed which provide a simple and rapid method to macerate...
Bidirectional extreme learning machine for regression problem and its learning effectiveness.
Yang, Yimin; Wang, Yaonan; Yuan, Xiaofang
2012-09-01
The learning speed of neural networks is in general far slower than required, which has been a major bottleneck for many applications. Recently, a simple and efficient learning method, referred to as extreme learning machine (ELM), was proposed by Huang and colleagues; it has shown that, compared to some conventional methods, the training time of neural networks can be reduced by a thousand times. However, one of the open problems in ELM research is whether the number of hidden nodes can be further reduced without affecting learning effectiveness. This brief proposes a new learning algorithm, called bidirectional extreme learning machine (B-ELM), in which some hidden nodes are not randomly selected. In theory, this algorithm tends to reduce network output error to 0 at an extremely early learning stage. Furthermore, we find a relationship between the network output error and the network output weights in the proposed B-ELM. Simulation results demonstrate that the proposed method can be tens to hundreds of times faster than other incremental ELM algorithms.
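The original ELM recipe that B-ELM extends fits in a few lines: the hidden-layer weights are drawn at random and only the output weights are solved by least squares. A minimal sketch on a toy regression problem (the dataset and node count are illustrative, and this is not the B-ELM algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: learn y = sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
y = np.sin(X).ravel()

L = 50  # number of hidden nodes (illustrative)

# ELM: hidden-layer weights and biases are random and never trained
W = rng.normal(size=(X.shape[1], L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)          # hidden-layer output matrix

# Only the output weights are fitted, via the Moore-Penrose pseudoinverse
beta = np.linalg.pinv(H) @ y

y_hat = H @ beta
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
```

Because training reduces to one linear solve, the speedups quoted in the abstract follow directly from avoiding iterative gradient descent.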
Vitamins B2, B1, C, tea polyphenols, and natural surfactants, which function both as reducing and capping agents, provide extremely simple, one-pot, green synthetic methods to bulk quantities of nanomaterials in water. Shape-controlled synthesis of noble nanostructures via microw...
BIOMIMETIC APPROACH TO SUSTAINABLE NANOMATERIALS AND SAFER APPLICATION IN CATALYSIS AND REMEDIATION
Vitamins B1, B2, C, and tea polyphenols which function both as reducing and capping agents, provide extremely simple, one-pot, green synthetic methods to bulk quantities of nanomaterials in water. Shape-controlled synthesis of noble nanostructures via microwave (MW)-assisted spon...
Biomimetic Approach to Nanomaterials and Their Safer Application in Catalysis and Remediation
Vitamins B1, B2, C, and tea polyphenols which function both as reducing and capping agents, provide extremely simple, one-pot, green synthetic methods to bulk quantities of nanomaterials in water. Shape-controlled synthesis of noble nanostructures via microwave (MW)-assisted spon...
Modeling method of time sequence model based grey system theory and application proceedings
NASA Astrophysics Data System (ADS)
Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang
2015-12-01
This article gives a modeling method for the grey system GM(1,1) model based on reusing information and grey system theory. This method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. On this basis, we give a syphilis trend forecasting method based on reusing information and the grey system GM(1,1) model.
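For reference, the conventional GM(1,1) baseline that the article builds on can be sketched as follows; the five-point series is illustrative, and the reuse-of-information extension is not reproduced:

```python
import numpy as np

def gm11(x0, n_pred=3):
    """Fit the conventional grey model GM(1,1) to a short positive series
    and return fitted values plus n_pred out-of-sample predictions."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                      # accumulated generating operation
    z1 = 0.5 * (x1[1:] + x1[:-1])           # background (mean-generated) values
    # Least-squares estimate of development coefficient a and grey input b
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    # Time response function of the whitened equation, then restore by differencing
    k = np.arange(len(x0) + n_pred)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])
    return x0_hat[:len(x0)], x0_hat[len(x0):]

fitted, preds = gm11([10.0, 11.0, 12.1, 13.3, 14.6])
```

On a near-exponential series like this one, the fitted values track the data closely and the forecasts continue the growth trend, which is the behavior GM(1,1) is designed for.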
Prince, Linda M
2015-01-01
Inter-simple sequence repeat PCR (ISSR-PCR) is a fast, inexpensive genotyping technique based on length variation in the regions between microsatellites. The method requires no species-specific prior knowledge of microsatellite location or composition. Very small amounts of DNA are required, making this method ideal for organisms of conservation concern, or where the quantity of DNA is extremely limited due to organism size. ISSR-PCR can be highly reproducible but requires careful attention to detail. Optimization of DNA extraction, fragment amplification, and normalization of fragment peak heights during fluorescent detection are critical steps to minimizing the downstream time spent verifying and scoring the data.
The Analysis of Spontaneous Processes Using Equilibrium Thermodynamics
ERIC Educational Resources Information Center
Honig, J. M.; Ben-Amotz, Dor
2006-01-01
The derivations based on the use of deficit functions provide a simple means of demonstrating the extremum conditions that are applicable to various thermodynamic functions. The method shows that the maximum quantity of work is available from a system only when the processes are carried out reversibly, since irreversible (spontaneous)…
Green synthesis of nanomaterials and sustainable applications of nano-catalysts
Green synthesis efforts involving the use of vitamins B1, B2, C, and tea and wine polyphenols, which function both as reducing and capping agents, will be presented; these enable extremely simple, one-pot, green synthetic methods to nanomaterials in water.1a Shape-controlled synth...
Liew, S K; Carlson, N W
1992-05-20
A simple method for obtaining a collimated, near-unity-aspect-ratio output beam from laser sources with extremely large (>100:1) aspect ratios is demonstrated by using a distributed-feedback grating surface-emitting laser. Far-field power-in-the-bucket measurements of the laser indicate good beam quality with a high Strehl ratio.
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine if uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results of the 6-hour, 10-year return level adjusted with the simple change factor method using four climate model simulations of two different spatial resolutions show that uncertainty is highest in the estimation of the GEV parameters.
The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
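The first of the four adjustment methods above (a change factor applied to observed return levels) can be sketched as follows; all three annual-maximum series below are synthetic stand-ins, not Pittsburgh observations or NA-CORDEX output:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)

# Synthetic annual-maximum 6-hour rainfall series (mm): observations plus a
# climate model's historical and future periods (all numbers illustrative)
obs_ams = genextreme.rvs(-0.1, loc=30, scale=8, size=200, random_state=rng)
rcm_hist = genextreme.rvs(-0.1, loc=28, scale=8, size=200, random_state=rng)
rcm_fut = genextreme.rvs(-0.1, loc=34, scale=9, size=200, random_state=rng)

def return_level(sample, T):
    """GEV return level for a T-year return period from an annual-max sample."""
    c, loc, scale = genextreme.fit(sample)
    return genextreme.ppf(1 - 1 / T, c, loc=loc, scale=scale)

T = 10
# Change factor: ratio of the model's future to historical return levels,
# applied to the observed return level at the same duration and period
cf = return_level(rcm_fut, T) / return_level(rcm_hist, T)
updated = cf * return_level(obs_ams, T)
```

The appeal of this method is that the model only supplies a relative change, so systematic RCM biases largely cancel in the ratio.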
NASA Astrophysics Data System (ADS)
Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.
2017-08-01
The advantage of division of focal plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so due to their spatially modulated arrangement of the pixel-to-pixel polarizers and often result in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELM) that attempt to recover missing high frequency and low frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high and low frequency content from demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
Response of Simple, Model Systems to Extreme Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ewing, Rodney C.; Lang, Maik
2015-07-30
The focus of the research was on the application of high-pressure/high-temperature techniques, together with intense energetic ion beams, to the study of the behavior of simple oxide systems (e.g., SiO2, GeO2, CeO2, TiO2, HfO2, SnO2, ZnO and ZrO2) under extreme conditions. These simple stoichiometries provide unique model systems for the analysis of structural responses to pressure up to and above 1 Mbar, temperatures of up to several thousands of kelvin, and the extreme energy density generated by energetic heavy ions (tens of keV/atom). The investigations included systematic studies of radiation- and pressure-induced amorphization of high P-T polymorphs. By studying the response of simple stoichiometries that have multiple structural “outcomes”, we have established the basic knowledge required for the prediction of the response of more complex structures to extreme conditions. We especially focused on the amorphous state and characterized the different non-crystalline structure-types that result from the interplay of radiation and pressure. For such experiments, we made use of recent technological developments, such as the perforated diamond-anvil cell and in situ investigation using synchrotron x-ray sources. We have been particularly interested in using extreme pressures to alter the electronic structure of a solid prior to irradiation. We expected that the effects of modified band structure would be evident in the track structure and morphology, information which is much needed to describe theoretically the fundamental physics of track-formation. Finally, we investigated the behavior of different simple-oxide, composite nanomaterials (e.g., uncoated nanoparticles vs. core/shell systems) under coupled, extreme conditions. This provided insight into surface and boundary effects on phase stability under extreme conditions.
An extremely simple green approach that generates bulk quantities of nanocrystals of noble metals such as silver (Ag) and palladium (Pd) using coffee and tea extract at room temperature is described. The single-pot method uses no surfactant, capping agent, and/or template. The ob...
Vitamins B1,1a B2, C,1b and tea polyphenols1c which function both as reducing and capping agents, provide extremely simple, one-pot, green synthetic methods to bulk quantities of nanomaterials in water. Shape-controlled synthesis of noble nanostructures via microwave (MW)-assiste...
NASA Astrophysics Data System (ADS)
Song, Tingting; Liu, Qi; Liu, Jingyuan; Yang, Wanlu; Chen, Rongrong; Jing, Xiaoyan; Takahashi, Kazunobu; Wang, Jun
2015-11-01
Inspired by natural plants such as Nepenthes pitcher plants, super-slippery surfaces have been developed to improve the attributes of repellent surfaces. In this report, super-slippery porous anodic aluminium oxide (AAO) surfaces were fabricated by a simple and reproducible method. First, the aluminium substrates were treated by an anodic process producing micro-nano structured sheet-layered pores, and then immersed in methyl silicone oil, fluoroalkylsilane (FAS), and DuPont Krytox, respectively, generating super-slippery surfaces. Such a material, with excellent anti-corrosion properties obtained through a simple and repeatable method, may be a potential candidate for metallic applications in anti-corrosion and extreme environments.
Combining local search with co-evolution in a remarkably simple way
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boettcher, S.; Percus, A.
2000-05-01
The authors explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by self-organized criticality, a concept introduced to describe emergent complexity in physical systems. In contrast to genetic algorithms, which operate on an entire gene-pool of possible solutions, extremal optimization successively replaces extremely undesirable elements of a single sub-optimal solution with new, random ones. Large fluctuations, or avalanches, ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements heuristics inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Phase transitions are found in many combinatorial optimization problems, and have been conjectured to occur in the region of parameter space containing the hardest instances. We demonstrate how extremal optimization can be implemented for a variety of hard optimization problems. We believe that this will be a useful tool in the investigation of phase transitions in combinatorial optimization, thereby helping to elucidate the origin of computational complexity.
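The update rule can be sketched on a toy MAX-CUT instance; the graph, step count, and the tau-EO rank-selection variant (tau = 1.4 is a typical value from the extremal-optimization literature) are illustrative choices, not the authors' benchmark setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy MAX-CUT instance: random graph, spins +1/-1 encode the two partitions
n = 30
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.2]
state = rng.choice([-1, 1], size=n)
tau = 1.4  # the single adjustable parameter (rank-selection exponent)

def cut_size(s):
    return sum(int(s[i] != s[j]) for i, j in edges)

def node_fitness(s):
    """Per-node fitness: fraction of incident edges that are cut (higher = better)."""
    cut, deg = np.zeros(n), np.zeros(n)
    for i, j in edges:
        deg[i] += 1; deg[j] += 1
        if s[i] != s[j]:
            cut[i] += 1; cut[j] += 1
    return cut / np.maximum(deg, 1)

init_cut = cut_size(state)
best = state.copy()
p = np.arange(1, n + 1, dtype=float) ** -tau
p /= p.sum()
for _ in range(2000):
    ranks = np.argsort(node_fitness(state))  # worst (least fit) node first
    k = rng.choice(n, p=p)                   # rank k picked with prob ~ (k+1)^-tau
    state[ranks[k]] *= -1                    # unconditionally flip that node
    if cut_size(state) > cut_size(best):
        best = state.copy()
```

Note that the flip is always accepted, even when it worsens the cut; this is what produces the avalanche-like fluctuations that let the search escape local optima.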
Khisamutdinov, Emil F; Bui, My Nguyen Hoan; Jasinski, Daniel; Zhao, Zhengyi; Cui, Zheng; Guo, Peixuan
2015-01-01
Precise shape control of architectures at the nanometer scale is an intriguing but extremely challenging task. RNA has recently emerged as a unique material and thermostable building block for use in nanoparticle construction. Here, we describe a simple method, from design to synthesis, for RNA triangles, squares, and pentagons, obtained by stretching the RNA 3WJ native angle from 60° to 90° and 108°, using the three-way junction (3WJ) of the pRNA from the bacteriophage phi29 dsDNA packaging motor. These methods for the construction of elegant polygons can be applied to other RNA building blocks, including the utilization and application of RNA 4-way, 5-way, and other multi-way junctions.
Miao, Meng; Zhao, Gaosheng; Xu, Li; Dong, Junguo; Cheng, Ping
2018-03-01
A direct analytical method based on spray-inlet microwave plasma torch tandem mass spectrometry was applied to simultaneously determine 4 phthalate esters (PAEs), namely benzyl butyl phthalate, diethyl phthalate, dipentyl phthalate, and dodecyl phthalate, with extremely high sensitivity in spirits without sample treatment. Among the 4 brands of spirit products, 3 kinds of PAE compounds were directly determined at very low concentrations, from 1.30 to 114 ng·g⁻¹. Compared with other online and off-line methods, the spray-inlet microwave plasma torch tandem mass spectrometry technique is extremely simple, rapid, sensitive, and highly efficient, providing an ideal screening tool for PAEs in spirits. Copyright © 2017 John Wiley & Sons, Ltd.
Atmospheric simulations of extreme surface heating episodes on simple hills
W.E. Heilman
1992-01-01
A two-dimensional nonhydrostatic atmospheric model was used to simulate the circulation patterns (wind and vorticity) and turbulence energy fields associated with lines of extreme surface heating on simple two-dimensional hills. Heating-line locations and ambient crossflow conditions were varied to qualitatively determine the impact of terrain geometry on the...
Fenton, A H
1976-07-01
The construction of an interim overdenture using existing removable partial dentures with natural tooth crowns and artificial teeth can be a simple and economical method of providing patients with dentures while tissues heal and teeth are prepared and restored. A more definite prognosis for both the patient and his remaining dentition can be established before the final overdenture is completed. The procedures necessary to provide three types of interim overdentures have been outlined. Patients tolerate this method of changing their dentitions extremely well.
NASA Astrophysics Data System (ADS)
Kergadallan, Xavier; Bernardara, Pietro; Benoit, Michel; Andreewsky, Marc; Weiss, Jérôme
2013-04-01
Estimating the probability of occurrence of extreme sea levels is a central issue for the protection of the coast. Return periods of sea level with wave set-up contribution are estimated here for one site: Cherbourg, France, in the English Channel. The methodology follows two steps: the first is the computation of the joint probability of simultaneous wave height and still sea level; the second is the interpretation of that joint probability to assess the sea level for a given return period. Two different approaches were evaluated to compute the joint probability of simultaneous wave height and still sea level: the first is a multivariate extreme value distribution of logistic type, in which all components of the variables become large simultaneously; the second is a conditional approach for multivariate extreme values, in which only one component of the variables has to be large. Two different methods were applied to estimate the sea level with wave set-up contribution for a given return period: Monte Carlo simulation, in which the estimation is more accurate but requires more computation time, and classical ocean engineering design contours of the inverse-FORM type, in which the method is simpler and allows more complex estimation of the wave set-up part (wave propagation to the coast, for example). We compare results from the two approaches with the two methods. To be able to use both the Monte Carlo simulation and design contour methods, wave set-up is estimated with a simple empirical formula. We show the advantages of the conditional approach over the multivariate extreme value approach when extreme sea levels occur when either the surge or the wave height is large. We discuss the validity of the ocean engineering design contour method, which is an alternative when the computation of sea levels is too complex to use the Monte Carlo simulation method.
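The Monte Carlo branch of such a comparison can be sketched as follows; the Gumbel marginals, the Gaussian copula for dependence, and the 20%-of-Hs set-up formula are all placeholder assumptions, not the Cherbourg models:

```python
import numpy as np
from scipy.stats import norm, gumbel_r

rng = np.random.default_rng(7)

# Joint samples of still sea level and wave height via a Gaussian copula;
# marginals, correlation, and the set-up formula are illustrative only
n_years, events_per_year, rho = 2000, 100, 0.5
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]],
                            size=n_years * events_per_year)
u = norm.cdf(z)
swl = gumbel_r.ppf(u[:, 0], loc=4.0, scale=0.15)  # still sea level (m)
hs = gumbel_r.ppf(u[:, 1], loc=2.0, scale=0.60)   # significant wave height (m)

# Simple empirical wave set-up formula (assumed here: 20% of Hs)
level = swl + 0.2 * hs

# Monte Carlo return level: quantile of the simulated annual maxima
ann_max = level.reshape(n_years, events_per_year).max(axis=1)
rl100 = np.quantile(ann_max, 1 - 1 / 100)  # ~100-year total sea level
```

The cost of this accuracy is the sample size: resolving long return periods requires simulating many synthetic years, which is why the abstract contrasts it with the cheaper design-contour method.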
NASA Astrophysics Data System (ADS)
Mentaschi, Lorenzo; Vousdoukas, Michalis; Voukouvalas, Evangelos; Sartini, Ludovica; Feyen, Luc; Besio, Giovanni; Alfieri, Lorenzo
2016-09-01
Statistical approaches to study extreme events require, by definition, long time series of data. In many scientific disciplines, these series are often subject to variations at different temporal scales that affect the frequency and intensity of their extremes. Therefore, the assumption of stationarity is violated and alternative methods to conventional stationary extreme value analysis (EVA) must be adopted. Using the example of environmental variables subject to climate change, in this study we introduce the transformed-stationary (TS) methodology for non-stationary EVA. This approach consists of (i) transforming a non-stationary time series into a stationary one, to which the stationary EVA theory can be applied, and (ii) reverse transforming the result into a non-stationary extreme value distribution. As a transformation, we propose and discuss a simple time-varying normalization of the signal and show that it enables a comprehensive formulation of non-stationary generalized extreme value (GEV) and generalized Pareto distribution (GPD) models with a constant shape parameter. A validation of the methodology is carried out on time series of significant wave height, residual water level, and river discharge, which show varying degrees of long-term and seasonal variability. The results from the proposed approach are comparable with the results from (a) a stationary EVA on quasi-stationary slices of non-stationary series and (b) the established method for non-stationary EVA. However, the proposed technique comes with advantages in both cases. For example, in contrast to (a), the proposed technique uses the whole time horizon of the series for the estimation of the extremes, allowing for a more accurate estimation of large return levels. Furthermore, with respect to (b), it decouples the detection of non-stationary patterns from the fitting of the extreme value distribution. 
As a result, the steps of the analysis are simplified and intermediate diagnostics are possible. In particular, the transformation can be carried out by means of simple statistical techniques such as low-pass filters based on the running mean and the standard deviation, and the fitting procedure is a stationary one with a few degrees of freedom and is easy to implement and control. An open-source MATLAB toolbox has been developed to cover this methodology, which is available at https://github.com/menta78/tsEva/ (Mentaschi et al., 2016).
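A minimal sketch of the transformed-stationary idea, using a running-mean/running-std low-pass transform and a stationary GEV fit on a synthetic series (the trend and window length are illustrative; this is not the tsEva toolbox code):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Synthetic daily series with slow trends in both mean and variability
t = np.arange(30 * 365)
x = (10 + 0.0005 * t) + (1 + 0.0002 * t) * rng.gumbel(size=t.size)

def running_mean(arr, win):
    """Crude low-pass filter: centered running mean with edge padding."""
    pad = np.pad(arr, win // 2, mode="edge")
    sm = np.convolve(pad, np.ones(win) / win, mode="same")
    return sm[win // 2 : win // 2 + arr.size]

win = 5 * 365
mu_t = running_mean(x, win)                         # slowly varying mean
sd_t = np.sqrt(running_mean((x - mu_t) ** 2, win))  # slowly varying std
y = (x - mu_t) / sd_t                               # transformed, ~stationary

# Stationary EVA on the transformed series: GEV fit to its annual maxima
c, loc, scale = genextreme.fit(y.reshape(30, 365).max(axis=1))
z100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)

# Reverse transform: a time-varying 100-year return level
rl100_t = mu_t + sd_t * z100
```

Because the non-stationarity lives entirely in mu_t and sd_t, the GEV fit itself stays stationary with few degrees of freedom, which is the decoupling the abstract describes.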
Kim, Seong-Gil
2018-01-01
Background The purpose of this study was to investigate the effect of ankle ROM and lower-extremity muscle strength on static balance control ability in young adults. Material/Methods This study was conducted with 65 young adults, but 10 young adults dropped out during the measurement, so 55 young adults (male: 19, female: 36) completed the study. Postural sway (length and velocity) was measured with eyes open and closed, and ankle ROM (AROM and PROM of dorsiflexion and plantarflexion) and lower-extremity muscle strength (flexor and extensor of hip, knee, and ankle joint) were measured. Pearson correlation coefficient was used to examine the correlation between variables and static balance ability. Simple linear regression analysis and multiple linear regression analysis were used to examine the effect of variables on static balance ability. Results In correlation analysis, plantarflexion ROM (AROM and PROM) and lower-extremity muscle strength (except hip extensor) were significantly correlated with postural sway (p<0.05). In simple correlation analysis, all variables that passed the correlation analysis procedure had significant influence (p<0.05). In multiple linear regression analysis, plantar flexion PROM with eyes open significantly influenced sway length (B=0.681) and sway velocity (B=0.011). Conclusions Lower-extremity muscle strength and ankle plantarflexion ROM influenced static balance control ability, with ankle plantarflexion PROM showing the greatest influence. Therefore, both contractile structures and non-contractile structures should be of interest when considering static balance control ability improvement. PMID:29760375
Kim, Seong-Gil; Kim, Wan-Soo
2018-05-15
BACKGROUND The purpose of this study was to investigate the effect of ankle ROM and lower-extremity muscle strength on static balance control ability in young adults. MATERIAL AND METHODS This study was conducted with 65 young adults, but 10 young adults dropped out during the measurement, so 55 young adults (male: 19, female: 36) completed the study. Postural sway (length and velocity) was measured with eyes open and closed, and ankle ROM (AROM and PROM of dorsiflexion and plantarflexion) and lower-extremity muscle strength (flexor and extensor of hip, knee, and ankle joint) were measured. Pearson correlation coefficient was used to examine the correlation between variables and static balance ability. Simple linear regression analysis and multiple linear regression analysis were used to examine the effect of variables on static balance ability. RESULTS In correlation analysis, plantarflexion ROM (AROM and PROM) and lower-extremity muscle strength (except hip extensor) were significantly correlated with postural sway (p<0.05). In simple correlation analysis, all variables that passed the correlation analysis procedure had significant influence (p<0.05). In multiple linear regression analysis, plantar flexion PROM with eyes open significantly influenced sway length (B=0.681) and sway velocity (B=0.011). CONCLUSIONS Lower-extremity muscle strength and ankle plantarflexion ROM influenced static balance control ability, with ankle plantarflexion PROM showing the greatest influence. Therefore, both contractile structures and non-contractile structures should be of interest when considering static balance control ability improvement.
A Method for Automated Detection of Usability Problems from Client User Interface Events
Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.
2005-01-01
Think-aloud usability analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher-order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using TAU and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121
DOE Office of Scientific and Technical Information (OSTI.GOV)
Quon, Eliot; Platt, Andrew; Yu, Yi-Hsiang
Extreme loads are often a key cost driver for wave energy converters (WECs). As an alternative to exhaustive Monte Carlo or long-term simulations, the most likely extreme response (MLER) method allows mid- and high-fidelity simulations to be used more efficiently in evaluating WEC response to events at the edges of the design envelope, and is therefore applicable to system design analysis. The study discussed in this paper applies the MLER method to investigate the maximum heave, pitch, and surge force of a point absorber WEC. Most likely extreme waves were obtained from a set of wave statistics data based on spectral analysis and the response amplitude operators (RAOs) of the floating body; the RAOs were computed from a simple radiation-and-diffraction-theory-based numerical model. A weakly nonlinear numerical method and a computational fluid dynamics (CFD) method were then applied to compute the short-term response to the MLER wave. Effects of nonlinear wave and floating body interaction on the WEC under the anticipated 100-year waves were examined by comparing the results from the linearly superimposed RAOs, the weakly nonlinear model, and CFD simulations. Overall, the MLER method was successfully applied. In particular, when coupled to a high-fidelity CFD analysis, the nonlinear fluid dynamics can be readily captured.
Calculating p-values and their significances with the Energy Test for large datasets
NASA Astrophysics Data System (ADS)
Barter, W.; Burr, C.; Parkes, C.
2018-04-01
The energy test method is a multi-dimensional test of whether two samples are consistent with arising from the same underlying population, through the calculation of a single test statistic (called the T-value). The method has recently been used in particle physics to search for samples that differ due to CP violation. The generalised extreme value function has previously been used to describe the distribution of T-values under the null hypothesis that the two samples are drawn from the same underlying population. We show that, in a simple test case, the distribution is not sufficiently well described by the generalised extreme value function. We present a new method, where the distribution of T-values under the null hypothesis when comparing two large samples can be found by scaling the distribution found when comparing small samples drawn from the same population. This method can then be used to quickly calculate the p-values associated with the results of the test.
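As a concrete illustration, the T-value statistic (and a small-sample null distribution against which observed T-values can be compared) can be sketched in Python. The Gaussian distance kernel and the permutation null are common choices in the energy-test literature, assumed here for illustration rather than quoted from this paper:

```python
import numpy as np

def energy_T(x, y, delta=1.0):
    """Energy test statistic (Aslan-Zech form) with a Gaussian distance kernel.

    x, y : (n, d) arrays of d-dimensional points from the two samples.
    Larger T indicates the samples are less consistent with a common population.
    """
    def psi_sum(a, b):
        # sum of exp(-d^2 / 2 delta^2) over all pairs of rows (a_i, b_j)
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * delta**2)).sum()

    n1, n2 = len(x), len(y)
    # subtract the n self-pairs (psi(0) = 1), then halve to count each pair once
    t11 = 0.5 * (psi_sum(x, x) - n1) / (n1 * (n1 - 1))
    t22 = 0.5 * (psi_sum(y, y) - n2) / (n2 * (n2 - 1))
    t12 = psi_sum(x, y) / (n1 * n2)
    return t11 + t22 - t12

def permutation_pvalue(x, y, n_perm=200, delta=1.0, seed=0):
    """Estimate the null distribution of T by pooling and re-splitting."""
    rng = np.random.default_rng(seed)
    t_obs = energy_T(x, y, delta)
    pooled = np.vstack([x, y])
    n1 = len(x)
    t_null = []
    for _ in range(n_perm):
        rng.shuffle(pooled)  # shuffle rows in place
        t_null.append(energy_T(pooled[:n1], pooled[n1:], delta))
    return t_obs, float(np.mean(np.array(t_null) >= t_obs))
```

Two well-separated samples give a visibly larger T than two samples drawn from the same population, which is the behaviour the p-value calculation relies on.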
CALCULATION OF NONLINEAR CONFIDENCE AND PREDICTION INTERVALS FOR GROUND-WATER FLOW MODELS.
Cooley, Richard L.; Vecchia, Aldo V.
1987-01-01
A method is derived to efficiently compute nonlinear confidence and prediction intervals on any function of parameters derived as output from a mathematical model of a physical system. The method is applied to the problem of obtaining confidence and prediction intervals for manually-calibrated ground-water flow models. To obtain confidence and prediction intervals resulting from uncertainties in parameters, the calibrated model and information on extreme ranges and ordering of the model parameters within one or more independent groups are required. If random errors in the dependent variable are present in addition to uncertainties in parameters, then calculation of prediction intervals also requires information on the extreme range of error expected. A simple Monte Carlo method is used to compute the quantiles necessary to establish probability levels for the confidence and prediction intervals. Application of the method to a hypothetical example showed that inclusion of random errors in the dependent variable in addition to uncertainties in parameters can considerably widen the prediction intervals.
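The Monte Carlo quantile step can be sketched as follows. Uniform sampling within the extreme parameter ranges is an assumption made for illustration, and the paper's ordering constraints within parameter groups are omitted:

```python
import numpy as np

def monte_carlo_interval(model, param_ranges, n_draws=5000, level=0.95,
                         error_sd=0.0, seed=0):
    """Propagate parameter uncertainty through `model` and return an interval
    on its scalar output.

    model        : function mapping a parameter vector to a scalar output
    param_ranges : list of (low, high) extreme ranges, one per parameter
    error_sd     : > 0 adds random error in the dependent variable, widening
                   the confidence interval into a prediction interval
    """
    rng = np.random.default_rng(seed)
    lows, highs = np.array(param_ranges).T
    outputs = np.empty(n_draws)
    for i in range(n_draws):
        theta = rng.uniform(lows, highs)  # draw within the extreme ranges
        outputs[i] = model(theta) + rng.normal(0.0, error_sd)
    alpha = (1.0 - level) / 2.0
    return np.quantile(outputs, [alpha, 1.0 - alpha])
```

With `error_sd > 0` the resulting interval is wider, mirroring the paper's observation that random errors in the dependent variable can considerably widen the prediction intervals.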
Theoretical study of mixing in liquid clouds – Part 1: Classical concepts
Korolev, Alexei; Khain, Alex; Pinsky, Mark; ...
2016-07-28
The present study considers the final stages of in-cloud mixing in the framework of the classical concepts of homogeneous and extreme inhomogeneous mixing. Simple analytical relationships between basic microphysical parameters were obtained for homogeneous and extreme inhomogeneous mixing based on adiabatic considerations. It was demonstrated that during homogeneous mixing the functional relationships between the moments of the droplet size distribution hold only during the primary stage of mixing. Subsequent random mixing between already mixed parcels and undiluted cloud parcels breaks these relationships. However, during extreme inhomogeneous mixing the functional relationships between the microphysical parameters hold both for primary and subsequent mixing. The obtained relationships can be used to identify the type of mixing from in situ observations. The effectiveness of the developed method was demonstrated using in situ data collected in convective clouds. It was found that for the specific set of in situ measurements the interaction between cloudy and entrained environments was dominated by extreme inhomogeneous mixing.
Thermal Degradation Characteristics of Oil Filled Cable Joint with Extremely Degraded tan δ Oil
NASA Astrophysics Data System (ADS)
Ide, Kenichi; Nakade, Masahiko; Takahashi, Tohru; Nakajima, Takenori
Oil-filled (OF) cable has long been used for 66∼500 kV extra-high-voltage transmission. Extremely degraded oil (tanδ of several tens of percent, for example) is sometimes found in joint boxes and similar locations. Calculations of tanδ on a simple paper/oil combination model show that the tanδ of paper impregnated with such high-tanδ oil is extremely high and should result in thermal breakdown. However, no such event has occurred so far in actually operated transmission lines. This fact suggests that some mechanism suppresses tanδ in paper insulation impregnated with degraded oil. We therefore investigated in detail the tanδ characteristics of paper impregnated with extremely high tanδ oil. In addition, based on the investigation results, we developed a method for simulating the heat generated by dielectric loss in OF cable joints containing degraded tanδ oil.
Multifunctional transparent ZnO nanorod films.
Kwak, Geunjae; Jung, Sungmook; Yong, Kijung
2011-03-18
Transparent ZnO nanorod (NR) films that exhibit extreme wetting states (either superhydrophilicity or superhydrophobicity through surface chemical modification), high transmittance, UV protection and antireflection have been prepared via the facile ammonia hydrothermal method. The periodic 1D ZnO NR arrays showed extreme wetting states as well as antireflection properties due to their unique surface structure and prevented the UVA region from penetrating the substrate due to the unique material property of ZnO. Because of the simple, time-efficient and low temperature preparation process, ZnO NR films with useful functionalities are promising for fabrication of highly light transmissive, antireflective, UV protective, antifogging and self-cleaning optical materials to be used for optical devices and photovoltaic energy devices.
Taxonomic study of extreme halophilic archaea isolated from the "Salar de Atacama", Chile.
Lizama, C; Monteoliva-Sánchez, M; Prado, B; Ramos-Cormenzana, A; Weckesser, J; Campos, V
2001-11-01
A large number of halophilic bacteria were isolated between 1984 and 1992 from the Atacama Saltern (northern Chile). For this study, 82 strains of extremely halophilic archaea were selected. Characterization was performed using phenotypic characters, including morphological, physiological, biochemical and nutritional properties and antimicrobial susceptibility tests. The results, together with those from reference strains, were subjected to numerical analysis using the Simple Matching (S(SM)) coefficient and clustered by the unweighted pair group method with arithmetic mean (UPGMA). Fifteen phena were obtained at a 70% similarity level. The results reveal a high diversity among the halophilic archaea isolated. Representative strains from the phena were chosen to determine their DNA base composition and their percentage of DNA-DNA similarity with reference strains. The 16S rRNA studies showed that some of these strains constitute new taxa of extremely halophilic archaea.
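The numerical taxonomy step (Simple Matching coefficient followed by UPGMA clustering) is easy to reproduce. A minimal sketch with made-up binary character data follows; note that for binary characters SciPy's Hamming distance equals 1 − S(SM):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# rows = strains, columns = binary phenotypic characters (1 = positive test);
# the data here are invented for illustration
X = np.array([[1, 1, 0, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0]])

# Hamming distance = fraction of characters that disagree = 1 - S_SM
d = pdist(X, metric="hamming")

# UPGMA = average-linkage hierarchical clustering
Z = linkage(d, method="average")

# cut the dendrogram at 70% similarity, i.e. at distance 0.3, to form phena
phena = fcluster(Z, t=0.3, criterion="distance")
```

Strains agreeing on most characters end up in the same phenon, while strains with few shared characters are separated at the 70% similarity cut.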
NASA Astrophysics Data System (ADS)
Huang, Haiyun; Zhang, Junping; Li, Yonghe
2018-05-01
Under the weight-based toll policy, weigh-in-motion data were collected at a toll station on the Jing-Zhu Expressway, and a statistical analysis of the vehicle load data was carried out. To calculate operating vehicle load effects on bridges, random traffic flow was generated by the Monte Carlo method and applied with the influence-line loading method to obtain the maximum bending moment effect of simply supported beams. The extreme value type I distribution and the normal distribution were used to model the distribution of the maximum bending moment effect. The maximum load effects were then predicted by extrapolation using the Rice formula and the extreme value type I distribution. Comparison with the vehicle load effect given by the current specification provides references for the management of operating vehicles and the revision of bridge specifications.
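The extreme value type I (Gumbel) fitting and extrapolation step can be sketched with SciPy on synthetic load effects; the Rice-formula extrapolation is not reproduced here, and the numbers are purely illustrative:

```python
import numpy as np
from scipy import stats

# hypothetical daily-maximum bending moments (kN*m) from simulated traffic flow
rng = np.random.default_rng(0)
daily_max = stats.gumbel_r.rvs(loc=1000.0, scale=80.0, size=365,
                               random_state=rng)

# fit the extreme value type I (Gumbel) distribution to the daily maxima
loc, scale = stats.gumbel_r.fit(daily_max)

# predicted load effect for a 1000-day return period:
# invert the CDF at non-exceedance probability 1 - 1/1000
predicted = stats.gumbel_r.ppf(1.0 - 1.0 / 1000.0, loc, scale)
```

The predicted value extrapolates beyond the bulk of the simulated record, which is exactly the role the extreme value distribution plays in the load-effect prediction described above.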
A discrete-time adaptive control scheme for robot manipulators
NASA Technical Reports Server (NTRS)
Tarokh, M.
1990-01-01
A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
Wang, Kai; Shi, Yantao; Li, Bo; Zhao, Liang; Wang, Wei; Wang, Xiangyuan; Bai, Xiaogong; Wang, Shufeng; Hao, Ce; Ma, Tingli
2016-03-02
Inorganic electron-selective layers (ESLs) are fabricated at extremely low temperatures of 70°C or even 25°C by a simple solution route. This is of great significance because the attained PCEs confirm the feasibility of room-temperature coating of inorganic amorphous ESLs through a solution method for the first time. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Return period curves for extreme 5-min rainfall amounts at the Barcelona urban network
NASA Astrophysics Data System (ADS)
Lana, X.; Casas-Castillo, M. C.; Serra, C.; Rodríguez-Solà, R.; Redaño, A.; Burgueño, A.; Martínez, M. D.
2018-03-01
Heavy rainfall episodes are relatively common in the conurbation of Barcelona and neighbouring cities (NE Spain), usually due to storms generated by convective phenomena in summer and by eastern and south-eastern advections in autumn. Prevention of local flood episodes and proper design of urban drainage have to take into account the spread of rainfall intensity rather than a simple evaluation of daily rainfall amounts. The database comes from 5-min rain amounts recorded by tipping-bucket gauges in the Barcelona urban network over the years 1994-2009. From these data, extreme 5-min rain amounts are selected by applying the peaks-over-threshold method with thresholds derived from both the 95th percentile and the mean excess plot. The return period curves are derived from their statistical distribution for every gauge, describing in detail the expected extreme 5-min rain amounts across the urban network. These curves are compared with those derived from annual extreme time series. In this way, areas in Barcelona subject to different levels of flood risk, from the point of view of rainfall intensity, are detected. Additionally, global time trends in extreme 5-min rain amounts are quantified for the whole network and found to be statistically insignificant.
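A peaks-over-threshold return-level computation of the kind described can be sketched as follows. The generalized Pareto fit to excesses over the 95th-percentile threshold is a standard choice; the function name and the synthetic data are illustrative:

```python
import numpy as np
from scipy import stats

def return_level(amounts, years, T_return, q=0.95):
    """Return level of a T_return-year event from peaks-over-threshold.

    amounts  : array of 5-min rainfall amounts (mm)
    years    : record length in years
    T_return : return period in years
    """
    u = np.quantile(amounts, q)        # threshold from the 95th percentile
    excesses = amounts[amounts > u] - u
    # fit the generalized Pareto distribution to the threshold excesses
    shape, _, scale = stats.genpareto.fit(excesses, floc=0.0)
    lam = len(excesses) / years        # mean number of exceedances per year
    # invert the GPD at the probability matching one exceedance per T years
    return u + stats.genpareto.ppf(1.0 - 1.0 / (lam * T_return),
                                   shape, loc=0.0, scale=scale)
```

Evaluating the function over a grid of return periods traces out the return period curve for one gauge.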
Fabrication of wafer-scale nanopatterned sapphire substrate through phase separation lithography
NASA Astrophysics Data System (ADS)
Guo, Xu; Ni, Mengyang; Zhuang, Zhe; Dai, Jiangping; Wu, Feixiang; Cui, Yushuang; Yuan, Changsheng; Ge, Haixiong; Chen, Yanfeng
2016-04-01
A phase separation lithography (PSL) based on a polymer blend provides an extremely simple, low-cost, and high-throughput way to fabricate wafer-scale disordered nanopatterns. This method was introduced to fabricate nanopatterned sapphire substrates (NPSSs) for GaN-based light-emitting diodes (LEDs). The PSL process involved only spin-coating a polystyrene (PS)/polyethylene glycol (PEG) polymer blend onto the sapphire substrate, followed by development with deionized water to remove the PEG moiety. The PS nanoporous network was facilely obtained, and its structural parameters could be effectively tuned by controlling the PS/PEG weight ratio of the spin-coating solution. 2-in. wafer-scale NPSSs were conveniently achieved through the PS nanoporous network in combination with traditional nanofabrication methods, such as O2 reactive ion etching (RIE), e-beam evaporation deposition, liftoff, and chlorine-based RIE. To investigate the performance of such NPSSs, typical blue LEDs with emission wavelengths of ~450 nm were grown by metal-organic chemical vapor deposition on an NPSS and on a flat sapphire substrate (FSS). The integral photoluminescence (PL) intensity of the NPSS LED was enhanced by 32.3 % compared to that of the FSS LED. The low relative standard deviation of 4.7 % for PL mappings of the NPSS LED indicated high uniformity of the PL data across the whole 2-in. wafer. The extreme simplicity, low cost, high throughput, and wafer-scale capability of the process make PSL a potential method for production of nanopatterned sapphire substrates.
NASA Astrophysics Data System (ADS)
Pesin, A.; Pustovoytov, D.; Shveyova, T.; Vafin, R.
2017-12-01
The level of shear strain and equivalent strain plays a key role in the possibility of using the asymmetric rolling process as a method of severe plastic deformation. The strain mode (pure shear or simple shear) can very strongly affect the equivalent strain and the grain refinement of the material. This paper presents FEM simulation results comparing the equivalent strain in aluminium alloy 5083 processed by single-pass equal channel angular pressing (simple shear), symmetric rolling (pure shear) and asymmetric rolling (simultaneous pure and simple shear). A nonlinear effect of the roll speed ratio on the deformation characteristics during asymmetric rolling was found. An extremely high equivalent strain, up to e = 4.2, was reached in single-pass asymmetric rolling. The influence of the shear strain on the level of equivalent strain is discussed. The finite element analysis of the deformation characteristics presented in this study can be used to optimize the asymmetric rolling process as a method of severe plastic deformation.
A method of batch-purifying microalgae with multiple antibiotics at extremely high concentrations
NASA Astrophysics Data System (ADS)
Han, Jichang; Wang, Song; Zhang, Lin; Yang, Guanpin; Zhao, Lu; Pan, Kehou
2016-01-01
Axenic microalgal strains are highly valued in diverse microalgal studies and applications. Antibiotics, alone or in combination, are often used to avoid bacterial contamination during microalgal isolation and culture. In our preliminary trials, we found that many microalgae ceased growing in antibiotics at extremely high concentrations but could resume growth quickly when returned to an antibiotics-free liquid medium and formed colonies when spread on a solid medium. We developed a simple and highly efficient method of obtaining axenic microalgal cultures based on this observation. First, microalgal strains of different species or strains were treated with a mixture of ampicillin, gentamycin sulfate, kanamycin, neomycin and streptomycin (each at a concentration of 600 mg/L) for 3 days; they were then transferred to antibiotics-free medium for 5 days; and finally they were spread on solid f/2 media to allow algal colonies to form. With this method, five strains of Nannochloropsis sp. (Eustigmatophyceae), two strains of Cylindrotheca sp. (Bacillariophyceae), two strains of Tetraselmis sp. (Chlorodendrophyceae) and one strain of Amphikrikos sp. (Trebouxiophyceae) were purified successfully. The method shows promise for batch-purifying microalgal cultures.
A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine
Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini
2013-01-01
A new machine learning method referred to as F-score_ELM was proposed to classify the lying and truth-telling using the electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses from these subjects. Then, a recently-developed classifier called extreme learning machine (ELM) was combined with F-score, a simple but effective feature selection method, to jointly optimize the number of the hidden nodes of ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models including the training and testing time, sensitivity and specificity from the training and testing sets, as well as network size. The experimental results showed that the number of the hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
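The F-score ranking used before the ELM stage can be sketched using the common Chen-and-Lin definition, which is assumed here rather than quoted from the paper:

```python
import numpy as np

def f_scores(X, y):
    """F-score of each feature for a two-class problem (Chen & Lin form).

    X : (n_samples, n_features) feature matrix
    y : binary labels (1 = one class, e.g. lying; 0 = the other)
    Higher score means better between-class separation for that feature.
    """
    Xp, Xn = X[y == 1], X[y == 0]
    mean, mp, mn = X.mean(0), Xp.mean(0), Xn.mean(0)
    # between-class scatter of the two class means around the overall mean
    num = (mp - mean) ** 2 + (mn - mean) ** 2
    # within-class scatter (unbiased variances of the two classes)
    den = Xp.var(0, ddof=1) + Xn.var(0, ddof=1)
    return num / den

def top_k_features(X, y, k):
    """Indices of the k best features; the subset size would then be tuned
    jointly with the ELM hidden-node count by grid search."""
    return np.argsort(f_scores(X, y))[::-1][:k]
```

In the paper's scheme, the grid search would sweep `k` together with the number of ELM hidden nodes and keep the combination with the best cross-validated accuracy.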
de Rham, Claudia; Motohashi, Hayato
2017-03-07
We study the development of caustics in shift-symmetric scalar field theories by focusing on simple waves with an SO(p) symmetry in an arbitrary number of space dimensions. We show that the pure Galileon, the DBI-Galileon, and the extreme-relativistic Galileon naturally emerge as the unique set of caustic-free theories, highlighting a link between the caustic-free condition for simple SO(p) waves and the existence of either a global Galilean symmetry or a global (extreme-)relativistic Galilean symmetry.
Character expansion methods for matrix models of dually weighted graphs
NASA Astrophysics Data System (ADS)
Kazakov, Vladimir A.; Staudacher, Matthias; Wynter, Thomas
1996-04-01
We consider generalized one-matrix models in which external fields allow control over the coordination numbers on both the original and dual lattices. We rederive in a simple fashion a character expansion formula for these models, originally due to Itzykson and Di Francesco, and then demonstrate how to take the large N limit of this expansion. The relationship to the usual matrix model resolvent is elucidated. Our methods give as a by-product an extremely simple derivation of the Migdal integral equation describing the large N limit of the Itzykson-Zuber formula. We illustrate and check our methods by analysing a number of models solvable by traditional means. We then proceed to solve a new model: a sum over planar graphs possessing even coordination numbers on both the original and the dual lattice. We conclude by formulating the equations for the case of arbitrary sets of even, self-dual coupling constants. This opens the way for studying the deep problems of phase transitions from random to flat lattices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jiali; Han, Yuefeng; Stein, Michael L.
2016-02-10
The Weather Research and Forecasting (WRF) model downscaling skill in extreme maximum daily temperature is evaluated by using the generalized extreme value (GEV) distribution. While the GEV distribution has been used extensively in climatology and meteorology for estimating probabilities of extreme events, accurately estimating GEV parameters based on data from a single pixel can be difficult, even with fairly long data records. This work proposes a simple method assuming that the shape parameter, the most difficult of the three parameters to estimate, does not vary over a relatively large region. This approach is applied to evaluate 31-year WRF-downscaled extreme maximum temperature through comparison with North American Regional Reanalysis (NARR) data. Uncertainty in GEV parameter estimates and the statistical significance of the differences in estimates between WRF and NARR are accounted for by bootstrap resampling. Despite certain biases over parts of the United States, overall, WRF shows good agreement with NARR in the spatial pattern and magnitudes of GEV parameter estimates. Both WRF and NARR show a significant increase in extreme maximum temperature over the southern Great Plains and southeastern United States in January and over the western United States in July. The GEV model shows clear benefits from the regionally constant shape parameter assumption, for example, leading to estimates of the location and scale parameters of the model that show coherent spatial patterns.
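The regionally constant shape-parameter idea can be sketched with SciPy's GEV tools. The pooling strategy below (average the per-pixel shape estimates, then refit location and scale with the shape held fixed) is a simplified stand-in for the paper's estimation procedure:

```python
import numpy as np
from scipy import stats

def regional_gev_fit(block_maxima_by_pixel):
    """Fit GEVs to several pixels with a single regional shape parameter.

    block_maxima_by_pixel : list of 1-D arrays, one array of annual (block)
                            maxima per pixel.
    Returns the shared shape and a (loc, scale) pair per pixel.
    """
    # crude regional shape: fit each pixel separately, average the shapes
    shapes = [stats.genextreme.fit(x)[0] for x in block_maxima_by_pixel]
    c = float(np.mean(shapes))
    # refit location and scale per pixel with the shape held fixed (fc=c)
    loc_scale = [stats.genextreme.fit(x, fc=c)[1:]
                 for x in block_maxima_by_pixel]
    return c, loc_scale
```

Fixing the shape stabilizes the per-pixel fits, which is the benefit the abstract attributes to the regionally constant shape assumption.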
Complexity-aware simple modeling.
Gómez-Schiavon, Mariana; El-Samad, Hana
2018-02-26
Mathematical models continue to be essential for deepening our understanding of biology. On one extreme, simple or small-scale models help delineate general biological principles. However, the parsimony of detail in these models as well as their assumption of modularity and insulation make them inaccurate for describing quantitative features. On the other extreme, large-scale and detailed models can quantitatively recapitulate a phenotype of interest, but have to rely on many unknown parameters, making them often difficult to parse mechanistically and to use for extracting general principles. We discuss some examples of a new approach-complexity-aware simple modeling-that can bridge the gap between the small-scale and large-scale approaches. Copyright © 2018 Elsevier Ltd. All rights reserved.
José Daniel, Flores-Alatriste; Karla Georgina, Saldivar-Gutiérrez; Josué Sarmiento-Ángeles; Jaime Claudio, Granados-Marin; Marco Antonio, Olaya-Rivera; Stark, Carlotta; Hugo, Flores-Navarro; Jaroslav, Stern-Colin
2015-07-01
Infection by human papillomavirus (HPV) is a major global health problem and the main risk factor for cervical cancer, which carries high morbidity and mortality. Simple diagnostic methods, such as naked-eye visual inspection of the cervix after application of 5% acetic acid (VAT) or of an iodine solution (tincture of iodine), can detect early lesions; sensitivity varies from 87 to 99% and specificity from 23 to 87%. The aim was to find the proportion of human papillomavirus infection in a population living in extreme poverty. A linear, observational and descriptive pilot study was performed in patients from marginalized communities living in extreme poverty in Chiapas (Mexico) from 1 to 30 November 2013. The presence of acetowhite lesions suggestive of human papillomavirus was assessed, and a medical history covering risk factors was taken for every patient. A total of 214 women were studied, aged 19 to 78 years with a median age of 37 years. Of these, 66 (31%) had acetowhite lesions consistent with human papillomavirus at the time of the study. Marginalized populations have a higher risk of human papillomavirus infection and consequently a high rate of progression to cervical cancer, owing to their sociodemographic characteristics, risk factors and lack of health resources. Diagnostic tests such as simple visual inspection with acetic acid are well suited to populations like this one.
Methods for identification and verification using vacuum XRF system
NASA Technical Reports Server (NTRS)
Kaiser, Bruce (Inventor); Schramm, Fred (Inventor)
2005-01-01
Apparatus and methods in which one or more elemental taggants that are intrinsically located in an object are detected by x-ray fluorescence analysis under vacuum conditions to identify or verify the object's elemental content for elements with lower atomic numbers. By using x-ray fluorescence analysis, the apparatus and methods of the invention are simple and easy to use, as well as provide detection by a non line-of-sight method to establish the origin of objects, as well as their point of manufacture, authenticity, verification, security, and the presence of impurities. The invention is extremely advantageous because it provides the capability to measure lower atomic number elements in the field with a portable instrument.
Extremely simple holographic projection of color images
NASA Astrophysics Data System (ADS)
Makowski, Michal; Ducin, Izabela; Kakarenko, Karol; Suszek, Jaroslaw; Kolodziejczyk, Andrzej; Sypek, Maciej
2012-03-01
A very simple scheme of holographic projection is presented, with experimental results showing good-quality image projection without any imaging lens. This technique can be regarded as an alternative to classic projection methods. It is based on the reconstruction of real images from three phase-iterated Fourier holograms. The illumination is performed with three laser beams of primary colors. A divergent wavefront geometry is used to achieve an increased throw angle of the projection compared to plane-wave illumination. Light fibers are used for light guidance in order to keep the setup as simple as possible and to provide point-like sources of high-quality divergent wavefronts at an optimized position against the light modulator. Absorbing spectral filters are implemented to multiplex three holograms on a single phase-only spatial light modulator. Hence color mixing occurs without any time-division methods, which cause rainbow effects and color flicker. The zero diffractive order with divergent illumination is practically invisible, and the speckle field is effectively suppressed with phase optimization and time-averaging techniques. The main advantages of the proposed concept are: a very simple and highly miniaturizable configuration; the absence of any lens; a single LCoS (Liquid Crystal on Silicon) modulator; strong resistance to imperfections and obstructions of the spatial light modulator such as dead pixels, dust, mud, fingerprints etc.; and simple calculations based on the Fast Fourier Transform (FFT), easily processed in real time on a GPU (Graphics Processing Unit).
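The phase-iterated Fourier hologram computation can be illustrated with a generic Gerchberg-Saxton-style loop, a common stand-in for this class of algorithm rather than a reproduction of the authors' exact method:

```python
import numpy as np

def iterated_fourier_hologram(target_intensity, n_iter=100, seed=0):
    """Compute a phase-only hologram whose far field approximates the target.

    Alternates between the hologram plane (unit amplitude, phase free) and
    the image plane (target amplitude imposed, phase free).
    """
    rng = np.random.default_rng(seed)
    target_amp = np.sqrt(target_intensity)
    phase = rng.uniform(0.0, 2.0 * np.pi, target_amp.shape)
    for _ in range(n_iter):
        # image-plane constraint: keep the phase, impose the target amplitude
        field = target_amp * np.exp(1j * phase)
        holo = np.fft.ifft2(field)
        # hologram-plane constraint: phase-only (unit amplitude)
        holo = np.exp(1j * np.angle(holo))
        # propagate back to the image plane and keep the resulting phase
        phase = np.angle(np.fft.fft2(holo))
    return np.angle(holo)
```

Displaying the returned phase on a phase-only modulator and illuminating it reconstructs the target image as the Fourier transform of the hologram, up to residual speckle.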
NASA Astrophysics Data System (ADS)
Lenderink, Geert; Attema, Jisk
2015-08-01
Scenarios of future changes in small scale precipitation extremes for the Netherlands are presented. These scenarios are based on a new approach whereby changes in precipitation extremes are set proportional to the change in water vapor amount near the surface as measured by the 2m dew point temperature. This simple scaling framework allows the integration of information derived from: (i) observations, (ii) a new unprecedentedly large 16 member ensemble of simulations with the regional climate model RACMO2 driven by EC-Earth, and (iii) short term integrations with a non-hydrostatic model Harmonie. Scaling constants are based on subjective weighting (expert judgement) of the three different information sources taking also into account previously published work. In all scenarios local precipitation extremes increase with warming, yet with broad uncertainty ranges expressing incomplete knowledge of how convective clouds and the atmospheric mesoscale circulation will react to climate change.
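The scaling framework amounts to a one-line relation. In the sketch below the scaling constant is illustrative (7%/K is the Clausius-Clapeyron rate, 14%/K a super-CC rate often reported for convective showers), not a value taken from the scenarios themselves:

```python
def scaled_extreme(intensity_now, d_dewpoint, rate=0.07):
    """Future extreme precipitation intensity under dew-point scaling.

    intensity_now : present-day extreme intensity (e.g. mm/h)
    d_dewpoint    : projected change in 2 m dew point temperature (K)
    rate          : fractional increase per K (0.07 = Clausius-Clapeyron,
                    0.14 = an often-quoted super-CC rate; illustrative)
    """
    return intensity_now * (1.0 + rate) ** d_dewpoint
```

For example, a 2 K dew-point rise at the 14%/K rate turns a 100 mm/h extreme into roughly 130 mm/h.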
[Principle of LAMP method--a simple and rapid gene amplification method].
Ushikubo, Hiroshi
2004-06-01
So far, nucleic acid testing (NAT) has been employed in various fields, including infectious disease diagnosis. However, due to its complicated procedures and relatively high cost, it has not been widely utilized in many actual diagnostic applications. We have therefore developed a simple and rapid gene amplification technology, the loop-mediated isothermal amplification (LAMP) method, which has shown prominent results surpassing the performance of conventional gene amplification methods. The LAMP method has three main features: (1) the entire reaction is carried out under isothermal conditions; (2) the amplification efficiency is extremely high, and a tremendous amount of amplification product is obtained; and (3) the reaction is highly specific. Furthermore, a rapid LAMP method developed from the standard LAMP method, which adds loop primers, reduces the amplification time from the previous 1 hour to less than 30 minutes. An enormous amount of white magnesium pyrophosphate precipitate is produced as a by-product of the amplification; therefore, direct visual detection is possible without any reaction indicators or detection equipment. We believe LAMP technology, with the integration of these features, can rightly apply to clinical genetic testing, food and environmental analysis, as well as NAT in other fields.
Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2001-01-01
A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.
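A real-number-encoded genetic algorithm of the general kind described can be sketched as follows, demonstrated on a simple hill-climbing problem; tournament selection, blend crossover and Gaussian mutation are illustrative operator choices, not necessarily those of the paper:

```python
import numpy as np

def real_coded_ga(fitness, bounds, pop_size=40, n_gen=100,
                  mut_sd=0.1, seed=0):
    """Maximize `fitness` over box `bounds` with a real-number encoding.

    Individuals are plain float vectors (no bit strings), so crossover and
    mutation act directly on the design variables.
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        new = [pop[fit.argmax()].copy()]          # elitism: keep the best
        while len(new) < pop_size:
            # binary tournament selection of two parents
            a, b = rng.integers(pop_size, size=(2, 2))
            p1 = pop[a[0]] if fit[a[0]] > fit[a[1]] else pop[a[1]]
            p2 = pop[b[0]] if fit[b[0]] > fit[b[1]] else pop[b[1]]
            # blend crossover: random convex combination of the parents
            w = rng.uniform(size=len(bounds))
            child = w * p1 + (1.0 - w) * p2
            # Gaussian mutation, clipped back into the design box
            child += rng.normal(0.0, mut_sd, size=len(bounds))
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()
```

In the paper the fitness evaluation would be a flow solve (Euler or nonlinear potential); here any scalar objective stands in for it.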
Ståhlberg, Anders; Krzyzanowski, Paul M; Jackson, Jennifer B; Egyud, Matthew; Stein, Lincoln; Godfrey, Tony E
2016-06-20
Detection of cell-free DNA in liquid biopsies offers great potential for use in non-invasive prenatal testing and as a cancer biomarker. Fetal and tumor DNA fractions however can be extremely low in these samples and ultra-sensitive methods are required for their detection. Here, we report an extremely simple and fast method for introduction of barcodes into DNA libraries made from 5 ng of DNA. Barcoded adapter primers are designed with an oligonucleotide hairpin structure to protect the molecular barcodes during the first rounds of polymerase chain reaction (PCR) and prevent them from participating in mis-priming events. Our approach enables high-level multiplexing and next-generation sequencing library construction with flexible library content. We show that uniform libraries of 1-, 5-, 13- and 31-plex can be generated. Utilizing the barcodes to generate consensus reads for each original DNA molecule reduces background sequencing noise and allows detection of variant alleles below 0.1% frequency in clonal cell line DNA and in cell-free plasma DNA. Thus, our approach bridges the gap between the highly sensitive but specific capabilities of digital PCR, which only allows a limited number of variants to be analyzed, with the broad target capability of next-generation sequencing which traditionally lacks the sensitivity to detect rare variants. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
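The barcode-consensus idea that suppresses sequencing noise can be shown with a toy example: a per-position majority vote within each barcode family, where the family-size cutoff is an assumed parameter rather than one taken from the paper:

```python
from collections import Counter, defaultdict

def consensus_reads(reads_with_barcodes, min_family=3):
    """Collapse reads sharing a molecular barcode into one consensus read.

    Random per-read sequencing errors are voted out, while a true variant
    (present in every read of a family, because all reads derive from the
    same original molecule) survives.
    """
    families = defaultdict(list)
    for barcode, seq in reads_with_barcodes:
        families[barcode].append(seq)
    consensi = {}
    for barcode, seqs in families.items():
        if len(seqs) < min_family:
            continue  # too few reads to form a trustworthy consensus
        # majority base at each position across the family's reads
        consensi[barcode] = "".join(Counter(col).most_common(1)[0][0]
                                    for col in zip(*seqs))
    return consensi
```

A family of three reads in which one read carries a lone error at one position still yields the error-free consensus, which is how consensus reads push the detectable variant frequency below the raw sequencing error rate.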
Horno, J; González-Caballero, F; González-Fernández, C F
1990-01-01
Simple techniques of network thermodynamics are used to obtain the numerical solution of the Nernst-Planck and Poisson equation system. A network model for a particular physical situation, namely ionic transport through a thin membrane with simultaneous diffusion, convection and electric current, is proposed. Concentration and electric field profiles across the membrane, as well as the diffusion potential, have been simulated using the electric circuit simulation program SPICE. The method is quite general and extremely efficient, permitting treatment of multi-ion systems whatever the boundary and experimental conditions may be.
Surface-enhanced Raman spectroscopy on coupled two-layer nanorings
NASA Astrophysics Data System (ADS)
Hou, Yumin; Xu, Jun; Wang, Pengwei; Yu, Dapeng
2010-05-01
A reproducible quasi-three-dimensional structure, composed of top and bottom concentric nanorings with the same periodicity but different widths and no overlap in the perpendicular direction, is built up by a separation-layer method, which results in huge enhancement of surface-enhanced Raman spectroscopy (SERS) signals due to the coupling of plasmons. Simulations show plasmonic focusing with "hot arcs" of electromagnetic enhancement, meeting the needs of quantitative SERS with extremely high sensitivity. In addition, the separation-layer method opens a simple and effective way to adjust the coupling of plasmons among nanostructures, which is essential for the fabrication of SERS-based sensors.
Methods of alleviation of ionospheric scintillation effects on digital communications
NASA Technical Reports Server (NTRS)
Massey, J. L.
1974-01-01
The degradation of the performance of digital communication systems because of ionospheric scintillation effects can be reduced either by diversity techniques or by coding. The effectiveness of traditional space-diversity, frequency-diversity and time-diversity techniques is reviewed and design considerations isolated. Time-diversity signaling is then treated as an extremely simple form of coding. More advanced coding methods, such as diffuse threshold decoding and burst-trapping decoding, which appear attractive in combatting scintillation effects are discussed and design considerations noted. Finally, adaptive coding techniques appropriate when the general state of the channel is known are discussed.
Note: A simple sample transfer alignment for ultra-high vacuum systems.
Tamtögl, A; Carter, E A; Ward, D J; Avidor, N; Kole, P R; Jardine, A P; Allison, W
2016-06-01
The alignment of ultra-high-vacuum sample transfer systems can be problematic when there is no direct line of sight to assist the user. We present the design of a simple and inexpensive system which greatly simplifies the alignment of sample transfer devices. Our method is based on the adaptation of a commercial digital camera which provides live views from within the vacuum chamber. The camera images are further processed by an image recognition and processing code which determines any misalignment and reports it to the user. The installation has proven extremely useful for aligning the sample with respect to the transfer mechanism. Furthermore, the alignment software can be easily adapted to other systems.
Sleigh, Alison; Lupson, Victoria; Thankamony, Ajay; Dunger, David B; Savage, David B; Carpenter, T Adrian; Kemp, Graham J
2016-01-11
The growing recognition of diseases associated with dysfunction of mitochondria poses an urgent need for simple measures of mitochondrial function. Assessment of the kinetics of replenishment of the phosphocreatine pool after exercise using (31)P magnetic resonance spectroscopy can provide an in vivo measure of mitochondrial function; however, the wider application of this technique appears limited by complex or expensive MR-compatible exercise equipment and protocols not easily tolerated by frail participants or those with reduced mental capacity. Here we describe a novel in-scanner exercise method which is patient-focused, inexpensive, remarkably simple and highly portable. The device exploits an MR-compatible high-density material (BaSO4) to form a weight which is attached directly to the ankle, and a one-minute dynamic knee extension protocol produced highly reproducible measurements of post-exercise PCr recovery kinetics in both healthy subjects and patients. As sophisticated exercise equipment is unnecessary for this measurement, our extremely simple design provides an effective and easy-to-implement apparatus that is readily translatable across sites. Its design, being tailored to the needs of the patient, makes it particularly well suited to clinical applications, and we argue the potential of this method for investigating in vivo mitochondrial function in new cohorts of growing clinical interest.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, Arka; Dalal, Neal, E-mail: abanerj6@illinois.edu, E-mail: dalaln@illinois.edu
We present a new method for simulating cosmologies that contain massive particles with thermal free streaming motion, such as massive neutrinos or warm/hot dark matter. This method combines particle and fluid descriptions of the thermal species to eliminate the shot noise known to plague conventional N-body simulations. We describe this method in detail, along with results for a number of test cases to validate our method, and check its range of applicability. Using this method, we demonstrate that massive neutrinos can produce a significant scale-dependence in the large-scale biasing of deep voids in the matter field. We show that this scale-dependence may be quantitatively understood using an extremely simple spherical expansion model which reproduces the behavior of the void bias for different neutrino parameters.
Probabilistic forecasting for extreme NO2 pollution episodes.
Aznarte, José L
2017-10-01
In this study, we investigate the suitability of quantile regression for predicting extreme concentrations of NO2. Contrary to the usual point forecasting, where a single value is forecast for each horizon, probabilistic forecasting through quantile regression allows prediction of the full probability distribution, which in turn allows building models specifically fit for the tails of this distribution. Using data from the city of Madrid, including NO2 concentrations as well as meteorological measures, we build models that predict extreme NO2 concentrations, outperforming point-forecasting alternatives, and we show that the predictions are accurate, reliable, and sharp. We also study the relative importance of the independent variables involved, and show that the variables important for the median quantile differ from those important for the upper quantiles. Furthermore, we present a method to compute the probability of exceeding thresholds, a simple and comprehensible way to present probabilistic forecasts that maximizes their usefulness.
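The exceedance-probability step can be sketched generically: given a set of predicted quantiles for a future concentration, the probability of exceeding a threshold follows by inverting the predicted quantile function. The quantile values below are synthetic stand-ins for quantile-regression output, not Madrid forecasts, and the 180 threshold is purely illustrative:

```python
import numpy as np

taus = np.linspace(0.05, 0.99, 95)             # quantile levels the model predicts
rng = np.random.default_rng(0)
sample = rng.normal(120.0, 30.0, 200_000)      # synthetic "forecast distribution"
q_pred = np.quantile(sample, taus)             # stand-in for quantile-regression output

def prob_exceed(threshold, taus, q_pred):
    """P(X > threshold): invert the predicted quantile function by interpolation."""
    if threshold <= q_pred[0]:
        return 1.0 - taus[0]
    if threshold >= q_pred[-1]:
        return 1.0 - taus[-1]
    return 1.0 - np.interp(threshold, q_pred, taus)

print(prob_exceed(180.0, taus, q_pred))        # about 0.023 for this synthetic case
```

Note `np.interp` requires the predicted quantiles to be non-decreasing, which quantile-crossing-free models guarantee by construction.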
Dupuytren disease: on our way to a cure?
Degreef, Ilse; De Smet, Luc
2013-06-01
Despite its high prevalence, the clinical presentation and severity of Dupuytren disease are extremely variable. The disease features a broad spectrum of symptoms, from simple nodules without the slightest clinical impact to an extremely disabling form requiring multiple surgical procedures, sometimes even partial hand amputations. Recurrence after surgery is considered a failure by both patient and surgeon, but its definition is vague. The term 'recontracture' was coined by a patient and reflects the disappointment of recurrent disease. Whether or not a treatment option will ensure a definitive result may depend more on the severity of the disease, which is patient specific, than on the treatment method itself. If a patient presents with Dupuytren disease, one should not merely evaluate the hands. Different features of the clinical and personal history may uncover a severe fibrosis diathesis, and both correct information for the patient and an individualized treatment plan are needed. In the near future, a simple genetic test may help to identify patients at risk. Similar to the evolving knowledge and treatment modalities seen in rheumatoid arthritis, treatment of Dupuytren disease is likely to advance in the direction of disease control with pharmacotherapy and single-shot minimally invasive enzymatic fasciotomy with collagenase to correct established contractures.
Extreme Learning Machine and Particle Swarm Optimization in optimizing CNC turning operation
NASA Astrophysics Data System (ADS)
Janahiraman, Tiagrajah V.; Ahmad, Nooraziah; Hani Nordin, Farah
2018-04-01
The CNC machine is controlled by manipulating cutting parameters that directly influence process performance. Many optimization methods have been applied to obtain the optimal cutting parameters for a desired performance function. Nonetheless, industry still uses traditional techniques to obtain those values, largely for lack of familiarity with optimization techniques. Therefore, a simple, easy-to-implement Optimal Cutting Parameters Selection System is introduced to help manufacturers easily understand and determine the best optimal parameters for their turning operations. The system consists of two stages: modelling and optimization. For modelling the input-output and in-process parameters, a hybrid of Extreme Learning Machine and Particle Swarm Optimization is applied; this modelling technique tends to converge faster than other artificial intelligence techniques and gives accurate results. For the optimization stage, Particle Swarm Optimization is again used to obtain the optimal cutting parameters based on the performance function preferred by the manufacturer. Overall, the system can narrow the gap between academia and industry by introducing a simple yet easy-to-implement optimization technique that gives accurate results quickly.
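A minimal PSO loop of the kind used in such systems fits in a few lines. The cost surface standing in for the machining performance model is hypothetical (a quadratic in cutting speed and feed), not the paper's ELM surrogate:

```python
import numpy as np

def cost(v, f):
    """Hypothetical surrogate: penalty vs cutting speed v (m/min) and feed f (mm/rev)."""
    return (v - 250.0) ** 2 / 1000.0 + 100.0 * (f - 0.15) ** 2

rng = np.random.default_rng(1)
n, iters = 20, 200
lo, hi = np.array([100.0, 0.05]), np.array([400.0, 0.40])   # parameter bounds
x = rng.uniform(lo, hi, (n, 2))
vel = np.zeros((n, 2))
pbest = x.copy()
pbest_val = np.array([cost(*p) for p in x])
g = pbest[pbest_val.argmin()].copy()                         # global best

w, c1, c2 = 0.7, 1.5, 1.5                                    # inertia, cognitive, social
for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = np.clip(x + vel, lo, hi)
    vals = np.array([cost(*p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    g = pbest[pbest_val.argmin()].copy()

print(g)   # converges toward the surrogate's optimum near (250, 0.15)
```

In the paper's setup the quadratic would be replaced by the trained ELM model of the turning process, with the same swarm loop searching the cutting-parameter box.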
Concrete Condition Assessment Using Impact-Echo Method and Extreme Learning Machines
Zhang, Jing-Kui; Yan, Weizhong; Cui, De-Mi
2016-01-01
The impact-echo (IE) method is a popular non-destructive testing (NDT) technique widely used for measuring the thickness of plate-like structures and for detecting certain defects inside concrete elements or structures. However, the IE method is not effective for full condition assessment (i.e., defect detection, defect diagnosis, defect sizing and location), because the simple frequency spectrum analysis involved in the existing IE method is not sufficient to capture the IE signal patterns associated with different conditions. In this paper, we attempt to enhance the IE technique and enable it for full condition assessment of concrete elements by introducing advanced machine learning techniques for performing comprehensive analysis and pattern recognition of IE signals. Specifically, we use wavelet decomposition for extracting signatures or features out of the raw IE signals and apply extreme learning machine, one of the recently developed machine learning techniques, as classification models for full condition assessment. To validate the capabilities of the proposed method, we build a number of specimens with various types, sizes, and locations of defects and perform IE testing on these specimens in a lab environment. Based on analysis of the collected IE signals using the proposed machine learning based IE method, we demonstrate that the proposed method is effective in performing full condition assessment of concrete elements or structures. PMID:27023563
Optical characterization of high speed microscanners based on static slit profiling method
NASA Astrophysics Data System (ADS)
Alaa Elhady, A.; Sabry, Yasser M.; Khalil, Diaa
2017-01-01
Optical characterization of high-speed microscanners is a challenging task that usually requires specialized, extremely expensive high-speed camera systems. This paper presents a novel, simple method to characterize the scanned beam spot profile and size in high-speed optical scanners under operation. It allows measuring the beam profile and the spot sizes at different scanning angles. The method is analyzed theoretically and applied experimentally to the characterization of a microelectromechanical systems (MEMS) scanner operating at 2.6 kHz. The variation of the spot size versus the scanning angle, up to ±15°, is extracted and the dynamic bending curvature effect of the micromirror is predicted.
Model reductions using a projection formulation
NASA Technical Reports Server (NTRS)
De Villemagne, Christian; Skelton, Robert E.
1987-01-01
A new methodology for model reduction of MIMO systems exploits the notion of an oblique projection. A reduced model is uniquely defined by a projector whose range space, and the orthogonal complement of whose null space, are chosen among the ranges of generalized controllability and observability matrices. The reduced-order models match various combinations (chosen by the designer) of four types of parameters of the full-order system, associated with (1) low-frequency response, (2) high-frequency response, (3) low-frequency power spectral density, and (4) high-frequency power spectral density. Thus, the proposed method is a computationally simple substitute for many existing methods, has extreme flexibility to embrace combinations of existing methods, and offers some new features.
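The flavor of the method can be shown in its simplest instance: an oblique (Petrov-Galerkin) projection onto Krylov subspaces built from the controllability and observability matrices, which makes the reduced model match leading Markov parameters (the high-frequency moments, case (2) above). This is a sketch of that one special case, not the paper's general formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 6, 2
A = rng.normal(size=(n, n)) / n
b = rng.normal(size=(n, 1))
c = rng.normal(size=(1, n))

# Range space from the controllability matrix, complementary space
# from the observability matrix: an oblique (not orthogonal) projection.
V = np.hstack([b, A @ b])            # K_2(A, b)
W = np.hstack([c.T, A.T @ c.T])      # K_2(A^T, c^T)
E = W.T @ V                          # normalization of the oblique projector

Ar = np.linalg.solve(E, W.T @ A @ V)
br = np.linalg.solve(E, W.T @ b)
cr = c @ V

# The order-2 model matches the first 4 Markov parameters c A^k b.
for k in range(4):
    full = (c @ np.linalg.matrix_power(A, k) @ b)[0, 0]
    red = (cr @ np.linalg.matrix_power(Ar, k) @ br)[0, 0]
    print(k, full, red)
```

Choosing low-frequency moments or spectral-density moments instead amounts to building V and W from different generalized controllability/observability matrices, which is the designer freedom the abstract describes.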
NUCLEON-mission: A New Approach to Cosmic Rays Investigation
NASA Technical Reports Server (NTRS)
Adams, J.; Bashindzhagyan, G.; Chilingarian, A.; Drury, L.; Egorov, N.; Golubkov, S.; Grebenyuk, V.; Korotkova, N.; Mashkantcev, A.; Nanjo, H.;
2001-01-01
A new approach to cosmic ray investigation is proposed. The main idea is to combine two experimental methods (KLEM and UHIS) in the NUCLEON Project. The KLEM (Kinematic Lightweight Energy Meter) method is used to study the chemical composition and elemental energy spectra of galactic CRs in the extremely wide energy range 10(exp 11)-10(exp 15) eV. The UHIS (Ultra Heavy Isotope Spectrometer) method is used to register the fluxes of ultra-heavy CR nuclei beyond the iron peak. The combination of the two techniques will not be a simple mechanical unification of two instruments in one block, but will create a unique instrument with a number of advantages.
Tan, Qunyou; Zhang, Li; Zhang, Liangke; Teng, Yongzhen; Zhang, Jingqing
2012-01-01
Pyridostigmine bromide (PTB) is a highly soluble and extremely bitter drug. Here, an economical complexation technology combined with a direct tablet compression method has been developed to meet the requirements of a patient-friendly dosage form, taste-masked dispersible tablets loaded with PTB (TPDPTs): (1) TPDPTs should have optimal disintegration and good physical resistance (hardness); (2) a low-cost, simple but practical preparation method suitable for industrial production is preferred from a cost perspective. Physicochemical properties of the inclusion complex of PTB with beta-cyclodextrin were investigated by Fourier transform infrared spectroscopy, differential scanning calorimetry, and UV spectroscopy. An orthogonal design was chosen to properly formulate TPDPTs. All volunteers rated the bitterness of TPDPTs as acceptable. The properties of TPDPTs, including disintegration time, weight variation, friability, hardness, dispersion uniformity, and drug content, were evaluated. TPDPTs dissolved rapidly in distilled water. Pharmacokinetic results demonstrated that TPDPTs and the commercial tablets were bioequivalent.
Extremely Robust and Patternable Electrodes for Copy-Paper-Based Electronics.
Ahn, Jaeho; Seo, Ji-Won; Lee, Tae-Ik; Kwon, Donguk; Park, Inkyu; Kim, Taek-Soo; Lee, Jung-Yong
2016-07-27
We propose a fabrication process for extremely robust and easily patternable silver nanowire (AgNW) electrodes on paper. Using an auxiliary donor layer and a simple laminating process, AgNWs can be easily transferred to copy paper as well as various other substrates using a dry process. Intercalating a polymeric binder between the AgNWs and the substrate through a simple printing technique enhances adhesion, not only guaranteeing high foldability of the electrodes, but also facilitating selective patterning of the AgNWs. Using the proposed process, extremely crease-tolerant electronics based on copy paper can be fabricated, such as a printed circuit board for a 7-segment display, portable heater, and capacitive touch sensor, demonstrating the applicability of the AgNWs-based electrodes to paper electronics.
Lee, Jin-Woong; Chung, Jiyong; Cho, Min-Young; Timilsina, Suman; Sohn, Keemin; Kim, Ji Sik; Sohn, Kee-Sun
2018-06-20
An extremely simple bulk sheet made of a piezoresistive carbon nanotube (CNT)-Ecoflex composite can act as a smart keypad that is portable, disposable, and flexible enough to be carried crushed inside the pocket of a pair of trousers. Both a rigid-button-imbedded, rollable (or foldable) pad and a patterned flexible pad have been introduced for use as portable keyboards. Herein, we suggest a bare, bulk, macroscale piezoresistive sheet as a replacement for these complex devices that are achievable only through high-cost fabrication processes such as patterning-based coating, printing, deposition, and mounting. A deep-learning technique based on deep neural networks (DNN) enables this extremely simple bulk sheet to play the role of a smart keypad without the use of complicated fabrication processes. To develop this keypad, instantaneous electrical resistance change was recorded at several locations on the edge of the sheet along with the exact information on the touch position and pressure for a huge number of random touches. The recorded data were used for training a DNN model that could eventually act as a brain for a simple sheet-type keypad. This simple sheet-type keypad worked perfectly and outperformed all of the existing portable keypads in terms of functionality, flexibility, disposability, and cost.
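The data-driven readout can be caricatured without a deep network: record the edge-response "fingerprint" of each calibrated touch position, then locate a new touch by nearest fingerprint. Everything below (the four-sensor resistance model, the calibration grid, the lookup in place of the trained DNN) is a hypothetical stand-in for the paper's measured CNT-Ecoflex data and deep-learning model:

```python
import numpy as np

corners = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)  # edge-sensor positions

def fingerprint(pos):
    """Hypothetical model: each sensor's resistance change decays with distance to the touch."""
    d = np.linalg.norm(corners - pos, axis=1)
    return 1.0 / (d + 0.2)

# Calibration: record fingerprints for a grid of known touch positions,
# mimicking the paper's large set of random touches with known locations.
grid = np.array([[x, y] for x in np.linspace(0, 1, 11) for y in np.linspace(0, 1, 11)])
table = np.array([fingerprint(p) for p in grid])

def locate(reading):
    """Nearest-fingerprint lookup standing in for the trained DNN."""
    return grid[np.argmin(np.linalg.norm(table - reading, axis=1))]

touch = np.array([0.3, 0.7])
print(locate(fingerprint(touch)))
```

The DNN in the paper generalizes this idea: instead of memorizing a grid, it learns a continuous map from edge signals to touch position and pressure.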
Van Hove singularities and spectral smearing in high-temperature superconducting H3S
NASA Astrophysics Data System (ADS)
Quan, Yundi; Pickett, Warren E.
2016-03-01
The superconducting phase of hydrogen sulfide at Tc=200 K observed by Drozdov and collaborators at pressures around 200 GPa is simple bcc Im-3m H3S, from a combination of theoretical and experimental confirmation. The various "extremes" that are involved—high pressure implying extreme reduction of volume, an extremely high H phonon energy scale around 1400 K, an extremely high temperature for a superconductor—necessitate a close look at new issues raised by these characteristics in relation to high Tc itself. First-principles methods are applied to analyze the H3S electronic structure, beginning with the effect of sulfur and then focusing on the origin and implications of the two van Hove singularities (vHs) providing an impressive peak in the density of states near the Fermi energy. Implications arising from strong-coupling Migdal-Eliashberg theory are studied. It becomes evident that electron spectral density smearing due to virtual phonon emission and absorption must be accounted for in a correct understanding of this unusual material and to obtain accurate theoretical predictions. Means for increasing Tc in H3S-like materials are noted.
van Hove Singularities and Spectral Smearing in High Temperature Superconducting H3S
NASA Astrophysics Data System (ADS)
Quan, Yundi; Pickett, Warren E.
The superconducting phase of hydrogen sulfide at Tc=200 K observed by Drozdov and collaborators at pressures around 200 GPa is simple bcc Im-3m H3S, which reopens questions about what is achievable in high Tc. The various "extremes" that are involved - high pressure, implying extreme reduction of volume; an extremely high H phonon energy scale around 1400 K; an extremely high temperature for a superconductor - necessitate a close look at new issues raised by these characteristics in relation to high Tc. We have applied first-principles methods to analyze the H3S electronic structure, particularly the van Hove singularities (vHs) and the effect of sulfur. Focusing on the two closely spaced vHs near the Fermi level that give rise to the impressively sharp peak in the density of states, the implications of strong-coupling Migdal-Eliashberg theory are assessed. The electron spectral density smearing due to virtual phonon emission and absorption, as done in earlier days for A15 superconductors, must be included explicitly to obtain accurate theoretical predictions and a correct understanding. Means for increasing Tc in H3S-like materials will be mentioned. NSF DMR Grant 1207622.
Tissue cell assisted fabrication of tubular catalytic platinum microengines
NASA Astrophysics Data System (ADS)
Wang, Hong; Moo, James Guo Sheng; Pumera, Martin
2014-09-01
We report a facile platform for mass production of robust self-propelled tubular microengines. Tissue cells extracted from fruits of banana and apple, Musa acuminata and Malus domestica, are used as the support on which a thin platinum film is deposited by means of physical vapor deposition. Upon sonication of the cells/Pt-coated substrate in water, microscrolls of highly uniform sizes are spontaneously formed. Tubular microengines fabricated with the fruit cell assisted method exhibit a fast motion of ~100 bodylengths per s (~1 mm s-1). An extremely simple and affordable platform for mass production of the micromotors is crucial for the envisioned swarms of thousands and millions of autonomous micromotors performing biomedical and environmental remediation tasks. Electronic supplementary information (ESI) available: Related video. See DOI: 10.1039/c4nr03720k
Lederer, W J
1983-09-01
A device is described that is capable of rapidly moving microelectrodes and micropipettes over distances up to 15 μm. This piezoelectric translator uses the diaphragm from virtually any available piezoelectric buzzer in combination with simple physical support and drive electronics. All of the necessary details for the construction of this small device are presented. Each finished unit is about 2 cm long with a diameter of 2 cm and can be readily adapted to existing manipulators. The translator has been found useful in aiding the independent penetration by one or more microelectrodes of single cells or of more complicated multicellular preparations (including those that lie behind a connective tissue layer). This new device offers fine control of microelectrode motion that cannot be obtained by the other methods used to aid microelectrode and micropipette penetration of cell membranes (e.g., capacitance overcompensation, "ringing in" or "tickling", or tapping the manipulator base). Finally, the device described in this paper is extremely simple and inexpensive to build.
Application of FTA technology to extraction of sperm DNA from mixed body fluids containing semen.
Fujita, Yoshihiko; Kubo, Shin-ichi
2006-01-01
FTA technology is a novel method designed to simplify the collection, shipment, archiving, and purification of nucleic acids from a wide variety of biological sources. In this study, we report a rapid and simple method of extracting sperm DNA when body fluids mixed with semen were collected on FTA cards. After proteinase K digestion of the sperm and body fluid mixture, the washed pellet suspension as the sperm fraction and the concentrated supernatant as the epithelial cell fraction were respectively applied to FTA cards containing DTT. The FTA cards were dried, then added directly to a polymerase chain reaction (PCR) mix and processed by PCR. The time required from separation of the mixed fluid to extraction of both sperm- and epithelial-origin DNA was only about 2.5-3 h. Furthermore, the procedure was extremely simple. We consider this FTA-card DNA extraction procedure suitable for routine casework.
Nonlinear power spectrum from resummed perturbation theory: a leap beyond the BAO scale
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anselmi, Stefano; Pietroni, Massimo, E-mail: anselmi@ieec.uab.es, E-mail: massimo.pietroni@pd.infn.it
2012-12-01
A new computational scheme for the nonlinear cosmological matter power spectrum (PS) is presented. Our method is based on evolution equations in time, which can be cast in a form extremely convenient for fast numerical evaluations. A nonlinear PS is obtained in a time comparable to that needed for a simple 1-loop computation, and the numerical implementation is very simple. Our results agree with N-body simulations at the percent level in the BAO range of scales, and at the few-percent level up to k ≅ 1 h/Mpc at z ≳ 0.5, thereby opening the possibility of applying this tool to scales interesting for weak lensing. We clarify the approximations inherent to this approach as well as its relations to previous ones, such as the Time Renormalization Group, and the multi-point propagator expansion. We discuss possible lines of improvement of the method and its intrinsic limitations by multi-streaming at small scales and low redshifts.
Simultaneous determination of all polyphenols in vegetables, fruits, and teas.
Sakakibara, Hiroyuki; Honda, Yoshinori; Nakagawa, Satoshi; Ashida, Hitoshi; Kanazawa, Kazuki
2003-01-29
Polyphenols, which have beneficial effects on health and occur ubiquitously in plant foods, are extremely diverse. We developed a method for simultaneously determining all the polyphenols in foodstuffs, using HPLC and a photodiode array to construct a library comprising retention times, spectra of aglycons, and respective calibration curves for 100 standard chemicals. The food was homogenized in liquid nitrogen, lyophilized, extracted with 90% methanol, and subjected to HPLC without hydrolysis. The recovery was 68-92%, and the variation in reproducibility ranged between 1 and 9%. The HPLC eluted polyphenols with good resolution within 95 min in the following order: simple polyphenols, catechins, anthocyanins, glycosides of flavones, flavonols, isoflavones and flavanones, their aglycons, anthraquinones, chalcones, and theaflavins. All the polyphenols in 63 vegetables, fruits, and teas were then examined in terms of content and class. The present method offers accuracy by avoiding the decomposition of polyphenols during hydrolysis, the ability to determine aglycons separately from glycosides, and information on simple polyphenol levels simultaneously.
Active disturbance rejection controller for chemical reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Both, Roxana; Dulf, Eva H.; Muresan, Cristina I., E-mail: roxana.both@aut.utcluj.ro
2015-03-10
In the petrochemical industry, the synthesis of 2-ethyl-hexanol oxo-alcohols (plasticizer alcohols) is of high importance, being achieved through hydrogenation of 2-ethyl-hexenal inside catalytic trickle-bed three-phase reactors. For this type of process the use of advanced control strategies is suitable due to their nonlinear behavior and extreme sensitivity to load changes and other disturbances. Due to the complexity of the mathematical model, one approach was to use a simple linear model of the process in combination with an advanced control algorithm, such as robust control, which takes into account the model uncertainties, the disturbances, and command signal limitations. However, the resulting controller is complex and requires costly hardware. This paper proposes a simple integer-order control scheme using a linear model of the process, based on the active disturbance rejection method. By treating the model dynamics as a common disturbance and actively rejecting it, active disturbance rejection control (ADRC) can achieve the desired response. Simulation results are provided to demonstrate the effectiveness of the proposed method.
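The core ADRC idea, estimating the lumped "total disturbance" with an extended state observer (ESO) and cancelling it in the control law, fits in a few lines. The first-order plant, disturbance, and all gains below are illustrative, not the reactor model from the paper:

```python
# Plant unknown to the controller: y' = -2*y + d + u, constant disturbance d.
a_true, d_true, b0 = 2.0, 0.5, 1.0
dt, T, r = 1e-3, 5.0, 1.0           # step, horizon, setpoint

wo, kp = 20.0, 5.0                  # observer bandwidth, proportional gain
l1, l2 = 2 * wo, wo ** 2            # ESO gains placing both observer poles at -wo

y, z1, z2 = 0.0, 0.0, 0.0           # plant output; ESO state and disturbance estimates
for _ in range(int(T / dt)):
    # Control: cancel the estimated total disturbance, drive y toward r.
    u = (kp * (r - z1) - z2) / b0
    # Plant (Euler step)
    y += dt * (-a_true * y + d_true + b0 * u)
    # ESO: z1 tracks y, z2 tracks the total disturbance f = -a*y + d
    e = y - z1
    z1 += dt * (z2 + b0 * u + l1 * e)
    z2 += dt * (l2 * e)

print(round(y, 3))   # settles at the setpoint r
```

At equilibrium the observer forces z2 to equal the true total disturbance, so the closed loop reduces to y' = kp(r - y) regardless of the unknown plant parameters, which is exactly the "treat the model dynamics as a disturbance" idea.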
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but with much lower computational cost due to an extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections result in only slightly suboptimal performance.
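The "extremely simple training step" is literal: the hidden weights are random and never trained, and only the output layer is solved, by ridge regression. A self-contained sketch on toy data (the blobs stand in for pixel spectra; the hidden-neuron count and regularization value are exactly the kind of parameters the paper studies):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two Gaussian blobs in 5 dimensions.
X0 = rng.normal(loc=-1.0, scale=0.7, size=(200, 5))
X1 = rng.normal(loc=+1.0, scale=0.7, size=(200, 5))
X = np.vstack([X0, X1])
t = np.hstack([-np.ones(200), np.ones(200)])

L_hidden, lam = 50, 1e-2                  # hidden neurons, ridge parameter
W = rng.normal(size=(5, L_hidden))        # random input weights (never trained)
b = rng.normal(size=L_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))    # sigmoidal hidden-layer outputs

# The only "training": solve (H'H + lam*I) beta = H't for the output weights.
beta = np.linalg.solve(H.T @ H + lam * np.eye(L_hidden), H.T @ t)

acc = np.mean(np.sign(H @ beta) == t)
print(acc)   # near-perfect separation on this easy toy set
```

Because training is a single linear solve, sweeping `L_hidden` or `lam` (the sensitivity the abstract discusses) costs one solve per setting, which is why ELM parameter studies are so much cheaper than retraining an SVM.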
1979-05-01
and social problems, does not lend itself to a single or simple solution. This is why we must all be involved. For this reason we believe that...of admission to decisionmaking. At times the implications of this relatively simple premise are not minor. Many people beginning community...involvement programs have found it extremely difficult to locate technical people able to translate technical reports into simple, everyday English. There
Electron capture rates in stars studied with heavy ion charge exchange reactions
NASA Astrophysics Data System (ADS)
Bertulani, C. A.
2018-01-01
Indirect methods using nucleus-nucleus reactions at high energies (here, high energies mean ~ 50 MeV/nucleon and higher) are now routinely used to extract information of interest for nuclear astrophysics. This is of extreme relevance as many of the nuclei involved in stellar evolution are short-lived. Therefore, indirect methods became the focus of recent studies carried out in major nuclear physics facilities. Among such methods, heavy ion charge exchange is thought to be a useful tool to infer Gamow-Teller matrix elements needed to describe electron capture rates in stars and also double beta-decay experiments. In this short review, I provide a theoretical guidance based on a simple reaction model for charge exchange reactions.
Functional renormalization group and Kohn-Sham scheme in density functional theory
NASA Astrophysics Data System (ADS)
Liang, Haozhao; Niu, Yifei; Hatsuda, Tetsuo
2018-04-01
Deriving an accurate energy density functional is one of the central problems in condensed matter physics, nuclear physics, and quantum chemistry. We propose a novel method to deduce the energy density functional by combining the idea of the functional renormalization group and the Kohn-Sham scheme in density functional theory. The key idea is to solve the renormalization group flow for the effective action decomposed into the mean-field part and the correlation part. We also propose a simple practical method to quantify the uncertainty associated with the truncation of the correlation part. By taking the φ4 theory in zero dimension as a benchmark, we demonstrate that our method shows extremely fast convergence to the exact result even in the highly strong-coupling regime.
Battistoni, Andrea; Bencivenga, Filippo; Fioretto, Daniele; Masciovecchio, Claudio
2014-10-15
In this Letter, we present a simple method to avoid the well-known spurious contributions in the Brillouin light scattering (BLS) spectrum arising from the finite aperture of collection optics. The method relies on the use of special spatial filters able to select the scattered light with arbitrary precision around a given value of the momentum transfer (Q). We demonstrate the effectiveness of such filters by analyzing the BLS spectra of a reference sample as a function of scattering angle. This practical and inexpensive method could be an extremely useful tool to fully exploit the potentiality of Brillouin acoustic spectroscopy, as it will easily allow for effective Q-variable experiments with unparalleled luminosity and resolution.
Air sampling with solid phase microextraction
NASA Astrophysics Data System (ADS)
Martos, Perry Anthony
There is an increasing need for simple yet accurate air sampling methods. The acceptance of new air sampling methods requires compatibility with conventional chromatographic equipment, and the new methods have to be environmentally friendly and simple to use, yet with equal or better detection limits, accuracy, and precision than standard methods. Solid phase microextraction (SPME) satisfies these conditions. Analyte detection limits, accuracy, and precision of analysis with SPME are typically better than with conventional air sampling methods. Yet air sampling with SPME requires no pumps or solvents, is re-usable, extremely simple to use, completely compatible with current chromatographic equipment, and requires only a small capital investment. The first SPME fiber coating used in this study was poly(dimethylsiloxane) (PDMS), a hydrophobic liquid film, used to sample a large range of airborne hydrocarbons such as benzene and octane. Quantification without an external calibration procedure is possible with this coating. The physical and chemical properties of this coating are well understood and are quite similar to those of the siloxane stationary phase used in capillary columns. The logarithms of analyte distribution coefficients for PDMS are linearly related to chromatographic retention indices and to the inverse of temperature. Therefore, the chromatogram from the analysis of the PDMS air sampler itself yields the calibration parameters which are used to quantify unknown airborne analyte concentrations (ppbv to ppmv range). The second fiber coating used in this study was PDMS/divinylbenzene (PDMS/DVB), onto which o-(2,3,4,5,6-pentafluorobenzyl)hydroxylamine (PFBHA) was adsorbed for the on-fiber derivatization of gaseous formaldehyde (ppbv range), with and without external calibration. The oxime formed by the reaction can be detected with conventional gas chromatographic detectors.
Typical grab sampling times were as short as 5 seconds. With 300 seconds of sampling, the formaldehyde detection limit was 2.1 ppbv, better than any other 5-minute sampling device for formaldehyde. The first-order rate constant for product formation was used to quantify formaldehyde concentrations without a calibration curve. This spot sampler was used to sample the headspace of hair gel, particle board, plant material and coffee grounds for formaldehyde and other carbonyl compounds, with extremely promising results. The SPME sampling devices were also used for time-weighted average sampling (30 minutes to 16 hours). Finally, the four new SPME air sampling methods were field tested with side-by-side comparisons to standard air sampling methods, demonstrating the considerable promise of SPME as an air sampler.
Return levels of temperature extremes in southern Pakistan
NASA Astrophysics Data System (ADS)
Zahid, Maida; Blender, Richard; Lucarini, Valerio; Caterina Bramati, Maria
2017-12-01
Southern Pakistan (Sindh) is one of the hottest regions in the world and is highly vulnerable to temperature extremes. In order to improve rural and urban planning, it is useful to gather information about the recurrence of temperature extremes. In this work, return levels of the daily maximum temperature Tmax and of the daily maximum wet-bulb temperature TWmax are estimated. We adopt the peaks over threshold (POT) method, which has not yet been used for similar studies in this region. Two main datasets are analyzed: temperatures observed at nine meteorological stations in southern Pakistan from 1980 to 2013, and the ERA-Interim (ECMWF reanalysis) data for the nearest corresponding locations. The analysis provides the 2-, 5-, 10-, 25-, 50-, and 100-year return levels (RLs) of temperature extremes. The 90% quantile is found to be a suitable threshold for all stations. We find that the RLs of the observed Tmax are above 50 °C at the northern stations and above 45 °C at the southern stations. The RLs of the observed TWmax exceed 35 °C in the region, which is considered a limit of survivability. The RLs estimated from the ERA-Interim data are lower by 3 to 5 °C than the RLs assessed for the nine meteorological stations. A simple bias correction applied to the ERA-Interim data markedly improves the RLs, yet discrepancies are still present. The results have potential implications for the risk assessment of extreme temperatures in Sindh.
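The POT workflow above can be sketched numerically. The block below is a minimal illustration, not the paper's analysis: it assumes an exponential excess distribution (a generalized Pareto with shape ξ = 0) over the 90% quantile threshold, and the synthetic temperature series is invented for the example.

```python
import numpy as np

def pot_return_levels(temps, n_years, periods, q=0.90):
    """Peaks-over-threshold return levels; a minimal sketch assuming an
    exponential excess distribution (GPD with shape xi = 0)."""
    u = np.quantile(temps, q)        # threshold at the 90% quantile
    excess = temps[temps > u] - u
    sigma = excess.mean()            # exponential scale (MLE)
    lam = excess.size / n_years      # mean exceedances per year
    # m-year return level: exceeded on average once every m years
    return {m: u + sigma * np.log(lam * m) for m in periods}

# synthetic daily Tmax series standing in for a 34-year station record
rng = np.random.default_rng(0)
tmax = 36 + rng.gumbel(2.0, 2.5, size=34 * 365)
rls = pot_return_levels(tmax, n_years=34, periods=[2, 10, 100])
```

A full analysis would estimate the GPD shape parameter as well; with a heavy-tailed fit the longer return levels spread further apart than in this exponential sketch.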
The use of bioimpedance analysis to evaluate lymphedema.
Warren, Anne G; Janz, Brian A; Slavin, Sumner A; Borud, Loren J
2007-05-01
Lymphedema, a chronic disfiguring condition resulting from lymphatic dysfunction or disruption, can be difficult to accurately diagnose and manage. Of particular challenge is identifying the presence of clinically significant limb swelling through simple and noninvasive methods. Many historical and currently used techniques for documenting differences in limb volume, including volume displacement and circumferential measurements, have proven difficult and unreliable. Bioimpedance spectroscopy analysis, a technology that uses resistance to electrical current in comparing the composition of fluid compartments within the body, has been considered a cost-effective and reproducible alternative for evaluating patients with suspected lymphedema. All patients were recruited through the Beth Israel Deaconess Medical Center Lymphedema Clinic. A total of 15 patients (mean age: 55.2 years) with upper-extremity or lower-extremity lymphedema as documented by lymphoscintigraphy underwent bioimpedance spectroscopy analysis using an Impedimed SFB7 device. Seven healthy medical students and surgical residents (mean age: 26.9 years) were selected to serve as normal controls. All study participants underwent analysis of both limbs, which allowed participants to act as their own controls. The multifrequency bioimpedance device documented impedance values for each limb, with lower values correlating with higher levels of accumulated protein-rich edematous fluid. In lymphedema patients, the average impedance ratio of the affected limb to the unaffected limb was 0.9 (range: 0.67 to 1.01). In the control group, the average impedance ratio of the participants' dominant limb to their nondominant limb was 0.99 (range: 0.95 to 1.02) (P = 0.01). Bioimpedance spectroscopy can be used as a reliable and accurate tool for documenting the presence of lymphedema in patients with either upper- or lower-extremity swelling. 
Measurement with the device is quick and simple and results are reproducible among patients. Given significant limitations with other methods of evaluating lymphedema, the use of bioimpedance analysis may aid in the diagnosis of lymphedema and allow for tracking patients over time as they proceed with treatment of their disease.
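The diagnostic quantity in this study is the inter-limb impedance ratio, which is straightforward to compute. The readings and the 0.95 cutoff below are hypothetical illustrations; the study reports mean ratios of 0.9 (patients) and 0.99 (controls) but does not define a diagnostic threshold here.

```python
def impedance_ratio(z_affected_ohm, z_unaffected_ohm):
    """Inter-limb impedance ratio; lower values suggest more
    protein-rich edema fluid in the affected limb."""
    return z_affected_ohm / z_unaffected_ohm

# hypothetical readings chosen to reproduce the reported group means;
# the 0.95 flag below is illustrative, not the study's criterion
patient = impedance_ratio(270.0, 300.0)   # 0.90, like the patient mean
control = impedance_ratio(297.0, 300.0)   # 0.99, like the control mean
flagged = patient < 0.95
```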
Statistical complexity without explicit reference to underlying probabilities
NASA Astrophysics Data System (ADS)
Pennini, F.; Plastino, A.
2018-06-01
We show that extremely simple systems of a not too large number of particles can be simultaneously thermally stable and complex. To this end, we extend the notion of statistical complexity to simple configurations of non-interacting particles, without appeal to probabilities, and discuss configurational properties.
Synergetic approach for simple and rapid conjugation of gold nanoparticles with oligonucleotides.
Li, Jiuxing; Zhu, Binqing; Yao, Xiujie; Zhang, Yicong; Zhu, Zhi; Tu, Song; Jia, Shasha; Liu, Rudi; Kang, Huaizhi; Yang, Chaoyong James
2014-10-08
Attaching thiolated DNA to gold nanoparticles (AuNPs) has been extremely important in nanobiotechnology because DNA-AuNPs combine the programmability and molecular recognition properties of the biopolymers with the optical, thermal, and catalytic properties of the inorganic nanomaterials. However, current standard protocols to attach thiolated DNA to AuNPs involve time-consuming, tedious steps and do not perform well for large AuNPs, thereby greatly restricting applications of DNA-AuNPs. Here we demonstrate a rapid and facile strategy to attach thiolated DNA to AuNPs based on the excellent stabilization effect of mPEG-SH on AuNPs. AuNPs are first protected by mPEG-SH in the presence of Tween 20, which results in excellent stability of the AuNPs in high ionic strength environments and at extreme pHs. A high concentration of NaCl can be applied to the mixture of DNA and AuNPs directly, allowing highly efficient DNA attachment to the AuNP surface by minimizing electrostatic repulsion. The entire DNA loading process can be completed in 1.5 h with only a few simple steps. DNA-loaded AuNPs are stable for more than 2 weeks at room temperature, and they can precisely hybridize with the complementary sequence, which was applied to prepare core-satellite nanostructures. Moreover, a cytotoxicity assay confirmed that the DNA-AuNPs synthesized by this method exhibit lower cytotoxicity than those prepared by current standard methods. The proposed method provides a new way to stabilize AuNPs for rapid and facile loading of thiolated DNA onto AuNPs and will find wide applications in many areas requiring DNA-AuNPs, including diagnosis, therapy, and imaging.
Vehicle Speed and Length Estimation Using Data from Two Anisotropic Magneto-Resistive (AMR) Sensors
Markevicius, Vytautas; Navikas, Dangirutis; Valinevicius, Algimantas; Zilys, Mindaugas
2017-01-01
Methods for estimating a car's length are presented in this paper, together with the results achieved using a self-designed system equipped with two anisotropic magneto-resistive (AMR) sensors placed on a road lane. The purpose of the research was to compare the lengths of mid-size cars, i.e., family cars (hatchbacks), saloons (sedans), station wagons and SUVs. Four methods were used in the research: a simple threshold-based method, a threshold method based on the moving average and standard deviation, a two-extreme-peak detection method and a method based on amplitude and time normalization using linear extrapolation (or interpolation). The results were obtained by analyzing changes in both the magnitude and the absolute z-component of the magnetic field. The tests, which were performed in four different Earth directions, show differences in the values of the estimated lengths. The magnitude-based results for cars driving from south to north were up to 1.2 m higher than the other results achieved using the threshold methods. Smaller differences in lengths were observed when the distances were measured between two extreme peaks in the car magnetic signatures. The results are summarized in tables together with the errors of the estimated lengths. The maximal errors relative to the real lengths were up to 22%. PMID:28771171
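The two-extreme-peak idea can be sketched as follows, under invented parameters (sample rate, sensor spacing, synthetic Gaussian signatures). Estimating speed via cross-correlation between the two sensors is an assumption of this sketch, not necessarily the paper's exact procedure.

```python
import numpy as np

FS = 1000.0       # sample rate, Hz (assumed)
SPACING = 1.0     # distance between the two AMR sensors, m (assumed)

def speed_and_length(sig_a, sig_b):
    """Sketch of the two-extreme-peak method: speed from the lag
    between the two sensors' signatures, length from the time between
    the two extreme peaks of one signature."""
    # lag (samples) maximizing the cross-correlation of the signatures
    lag = np.argmax(np.correlate(sig_b, sig_a, mode="full")) - (sig_a.size - 1)
    speed = SPACING / (lag / FS)
    # two extreme peaks: strongest peak in each half of the signature
    half = sig_a.size // 2
    i1 = np.argmax(sig_a[:half])
    i2 = half + np.argmax(sig_a[half:])
    length = speed * (i2 - i1) / FS
    return speed, length

# synthetic magnetic signature: two bumps 0.2 s apart; sensor B sees
# the same signature delayed by 50 samples
t = np.arange(1000)
sig_a = np.exp(-((t - 300) ** 2) / 200) + np.exp(-((t - 500) ** 2) / 200)
sig_b = np.roll(sig_a, 50)
v, car_len = speed_and_length(sig_a, sig_b)
```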
Rapid identification of group JK and other corynebacteria with the Minitek system.
Slifkin, M; Gil, G M; Engwall, C
1986-01-01
Forty primary clinical isolates and 50 stock cultures of corynebacteria and coryneform bacteria were tested with the Minitek system (BBL Microbiology Systems, Cockeysville, Md.). The Minitek correctly identified all of these organisms, including JK group isolates, within 12 to 18 h of incubation. The method does not require serum supplements for testing carbohydrate utilization by the bacteria. The Minitek system is an extremely simple and rapid way to identify the JK group, as well as many other corynebacteria, by established identification schemata for these bacteria. PMID:3091632
NASA Astrophysics Data System (ADS)
Lindsey, Rebecca; Goldman, Nir; Fried, Laurence
2017-06-01
Atomistic modeling of chemistry at extreme conditions remains a challenge, despite continuing advances in computing resources and simulation tools. While first principles methods provide a powerful predictive tool, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude extension of such models to molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions for atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including greater than 2-body interactions, model transferability to different state points, and discuss approaches to ensure smooth and reasonable model shape outside of the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
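The force-matching step described above can be illustrated with NumPy's Chebyshev utilities. A Lennard-Jones force curve stands in for the DFT training forces here, purely as a hypothetical signal; the paper's actual models are n-body and fit to real DFT trajectories.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# stand-in "DFT" pair forces: a Lennard-Jones force curve, used only
# as an invented training signal for this sketch
r = np.linspace(0.95, 2.5, 200)
f_dft = 24.0 * (2.0 / r**13 - 1.0 / r**7)

# map distances to [-1, 1] and least-squares fit a Chebyshev series
s = 2.0 * (r - r.min()) / (r.max() - r.min()) - 1.0
coef = C.chebfit(s, f_dft, deg=16)
f_fit = C.chebval(s, coef)
rmse = np.sqrt(np.mean((f_fit - f_dft) ** 2))
```

Because the fitted function is smooth on the sampled interval, the Chebyshev coefficients decay rapidly and a modest degree reproduces the forces essentially exactly; controlling the model's shape outside the sampled distance range is the harder problem the abstract alludes to.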
NASA Astrophysics Data System (ADS)
Lindsey, Rebecca; Goldman, Nir; Fried, Laurence
Understanding chemistry at extreme conditions is crucial in fields including geochemistry, astrobiology, and alternative energy. First principles methods can provide valuable microscopic insights into such systems while circumventing the risks of physical experiments; however, the time and length scales associated with chemistry at extreme conditions (ns and μm, respectively) largely preclude extension of such models to molecular dynamics. In this work, we develop a simulation approach that retains the accuracy of density functional theory (DFT) while decreasing computational effort by several orders of magnitude. We generate n-body descriptions for atomic interactions by mapping forces arising from short DFT trajectories onto simple Chebyshev polynomial series. We examine the importance of including greater than 2-body interactions, model transferability to different state points, and discuss approaches to ensure smooth and reasonable model shape outside of the distance domain sampled by the DFT training set. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Vibrational pumping and heating under SERS conditions: fact or myth?
Le Ru, E C; Etchegoin, P G
2006-01-01
We address in this paper the long debated issue of the possibility of vibrational pumping under Surface Enhanced Raman Scattering (SERS) conditions, both theoretically and experimentally. We revisit with simple theoretical models the mechanisms of vibrational pumping and its relation to heating. This presentation provides a clear classification of the various regimes of heating/pumping, from simple global laser heating to selective pumping of a single vibrational mode. We also propose the possibility of extreme pumping driven by stimulated phonon emission, and we introduce and apply a new experimental technique to study these effects in SERS. Our method relies on correlations between Raman peak parameters, and cross-correlation for two Raman peaks. We find strong evidence for local and dynamical heating, but no convincing evidence for selective pumping under our specific experimental SERS conditions.
Carvalho, Rimenys J; Cruz, Thayana A
2018-01-01
High-throughput screening (HTS) systems have emerged as important tools for fast and low-cost evaluation of several conditions at once, since they require only small quantities of material and small sample volumes. These characteristics are extremely valuable for experiments with a large number of variables, enabling the application of design of experiments (DoE) strategies or simple experimental planning approaches. Once the capacity of HTS systems to mimic chromatographic purification steps was established, several studies were performed successfully, including scale-down purification. Here, we propose a method for studying different purification conditions that can be used for any recombinant protein, including complex and glycosylated proteins, using low-binding filter microplates.
Emissivity correction for interpreting thermal radiation from a terrestrial surface
NASA Technical Reports Server (NTRS)
Sutherland, R. A.; Bartholic, J. F.; Gerber, J. F.
1979-01-01
A general method of accounting for emissivity in making temperature determinations of graybody surfaces from radiometric data is presented. The method differs from previous treatments in that a simple blackbody calibration and graphical approach is used rather than numerical integrations which require detailed knowledge of an instrument's spectral characteristics. Also, errors caused by approximating instrumental response with the Stefan-Boltzmann law rather than with an appropriately weighted Planck integral are examined. In the 8-14 micron wavelength interval, it is shown that errors are at most on the order of 3 °C for the extremes of the earth's temperature and emissivity. For more practical limits, however, errors are less than 0.5 °C.
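A minimal broadband sketch of the graybody inversion is shown below. It uses exactly the Stefan-Boltzmann approximation whose errors the paper quantifies (rather than the weighted Planck integral), and the 240 K background sky temperature is an assumed value.

```python
def graybody_temperature(tb_K, emissivity, sky_K=240.0):
    """Broadband graybody inversion (Stefan-Boltzmann approximation):
    sigma*Tb^4 = eps*sigma*Ts^4 + (1 - eps)*sigma*Tsky^4,
    where sigma cancels; Tb is the measured brightness temperature."""
    ts4 = (tb_K**4 - (1.0 - emissivity) * sky_K**4) / emissivity
    return ts4 ** 0.25

# round trip: a 300 K surface with emissivity 0.95 under a 240 K sky
eps, ts_true, tsky = 0.95, 300.0, 240.0
tb = (eps * ts_true**4 + (1 - eps) * tsky**4) ** 0.25
ts = graybody_temperature(tb, eps, tsky)
```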
Test Generation Algorithm for Fault Detection of Analog Circuits Based on Extreme Learning Machine
Zhou, Jingyu; Tian, Shulin; Yang, Chenglin; Ren, Xuelong
2014-01-01
This paper proposes a novel test generation algorithm based on the extreme learning machine (ELM); the algorithm is cost-effective and low-risk for an analog device under test (DUT). The method uses test patterns derived from the test generation algorithm to stimulate the DUT, and then samples the output responses of the DUT for fault classification and detection. The novel ELM-based test generation algorithm contains three main innovations. First, the algorithm saves time by classifying the response space with ELM. Second, it avoids loss of test precision when the number of impulse-response samples is reduced. Third, a new test signal generation process and a test structure for the test generation algorithm are presented, both of which are very simple. Finally, these improvements are confirmed in experiments. PMID:25610458
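The core of an ELM can be sketched in a few lines: a random, untrained hidden layer followed by output weights solved in closed form by least squares. The toy two-cluster "nominal vs faulty" data below is invented for illustration and is unrelated to the paper's actual analog test responses.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme learning machine sketch: random fixed hidden layer,
    output weights solved by least squares (no iterative training)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                     # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# toy fault classification: two response clusters (nominal vs faulty)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.r_[np.zeros(50), np.ones(50)]
W, b, beta = elm_train(X, y)
acc = np.mean((elm_predict(X, W, b, beta) > 0.5) == y)
```

The single least-squares solve is what makes ELM fast relative to backpropagation-trained networks, which is presumably why it suits the response-space classification step described in the abstract.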
Parsimonious nonstationary flood frequency analysis
NASA Astrophysics Data System (ADS)
Serago, Jake M.; Vogel, Richard M.
2018-02-01
There is now widespread awareness of the impact of anthropogenic influences on extreme floods (and droughts) and thus an increasing need for methods to account for such influences when estimating a frequency distribution. We introduce a parsimonious approach to nonstationary flood frequency analysis (NFFA) based on a bivariate regression equation which describes the relationship between annual maximum floods, x, and an exogenous variable which may explain the nonstationary behavior of x. The conditional mean, variance and skewness of both x and y = ln(x) are derived and combined with numerous common probability distributions, including the lognormal, generalized extreme value and log Pearson type III models, resulting in a very simple and general approach to NFFA. Our approach offers several advantages over existing approaches, including parsimony, ease of use, graphical display, prediction intervals, and opportunities for uncertainty analysis. We introduce nonstationary probability plots and document how such plots can be used to assess the improved goodness of fit associated with NFFA.
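The regression backbone of the approach can be illustrated as follows. The covariate, coefficients, and lognormal conditional quantile below are invented for the example; the paper additionally treats conditional skewness and other distributions (GEV, log Pearson type III).

```python
import numpy as np

# synthetic annual-maximum floods whose log-mean trends with an
# exogenous covariate z (e.g. an urbanization index); all numbers here
# are illustrative, not from the paper
rng = np.random.default_rng(2)
z = np.linspace(0.0, 1.0, 60)
y = 5.0 + 1.2 * z + rng.normal(0.0, 0.3, 60)    # y = ln(annual max flood)
x = np.exp(y)

# bivariate regression of y on z gives the conditional mean;
# the residual spread gives the conditional standard deviation
b1, b0 = np.polyfit(z, y, 1)
resid = y - (b0 + b1 * z)
s = resid.std(ddof=2)

# conditional 99th-percentile flood under a lognormal assumption, at z = 1
q99 = np.exp(b0 + b1 * 1.0 + 2.326 * s)
```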
Teaching the Concept of Breakdown Point in Simple Linear Regression.
ERIC Educational Resources Information Center
Chan, Wai-Sum
2001-01-01
Most introductory textbooks on simple linear regression analysis mention the fact that extreme data points have a great influence on ordinary least-squares regression estimation; however, not many textbooks provide a rigorous mathematical explanation of this phenomenon. Suggests a way to fill this gap by teaching students the concept of breakdown…
The anesthesia and brain monitor (ABM). Concept and performance.
Kay, B
1984-01-01
Three integral components of the ABM, the frontalis electromyogram (EMG), the processed unipolar electroencephalogram (EEG) and the neuromuscular transmission monitor (NMT), were compared with standard research methods, and their clinical utility indicated. The EMG was compared with the method of Dundee et al (2) for measuring the induction dose of thiopentone; the EEG was compared with the SLE Galileo E8-b and the NMT was compared with the Medelec MS6. In each case, correlation of results was extremely high, and the ABM offered some advantages over the standard research methods. We conclude that each of the integral units of the ABM is simple to apply and interpret, yet as accurate as standard apparatus used for research. In addition, the ABM offers excellent display and recording facilities and alarm systems.
Design of nuclease-based target recycling signal amplification in aptasensors.
Yan, Mengmeng; Bai, Wenhui; Zhu, Chao; Huang, Yafei; Yan, Jiao; Chen, Ailiang
2016-03-15
Compared with conventional antibody-based immunoassay methods, aptasensors based on nucleic acid aptamers have made at least two significant breakthroughs. One is that aptamers are more easily used for developing various simple and rapid homogeneous detection methods by "sample in, signal out" without multi-step washing. The other is that aptamers are more easily employed for developing highly sensitive detection methods using various nucleic acid-based signal amplification approaches. As many substances playing regulatory roles in physiology or pathology exist at extremely low concentrations, and many chemical contaminants occur in trace amounts in food or the environment, aptasensors with signal amplification contribute greatly to the detection of such targets. Among the signal amplification approaches in highly sensitive aptasensors, nuclease-based target recycling signal amplification has recently become a research focus because it offers easy design, simple operation, and rapid reaction and can be readily developed for homogeneous assays. In this review, we summarize recent advances in the development of various nuclease-based target recycling signal amplification schemes with the aim of providing a general guide for the design of aptamer-based ultrasensitive biosensing assays. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to solve many problems in climate science and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have recently been proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high-resolution climate data from low-resolution climate data. Based on our preliminary work, we present two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs the resolution of the input to the localization CNNs. We present the best-performing pixel recursive super resolution network, which synthesizes details of tropical cyclones in the ground truth data while enhancing their resolution. This approach not only dramatically reduces human effort, but also suggests the possibility of reducing the computing cost required for the downscaling process used to increase data resolution.
A dynamical systems approach to studying midlatitude weather extremes
NASA Astrophysics Data System (ADS)
Messori, Gabriele; Caballero, Rodrigo; Faranda, Davide
2017-04-01
Extreme weather occurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. The ability to predict these events is therefore a topic of crucial importance. Here we propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We show that simple dynamical systems metrics can be used to identify sets of large-scale atmospheric flow patterns with similar spatial structure and temporal evolution on time scales of several days to a week. In regions where these patterns favor extreme weather, they afford a particularly good predictability of the extremes. We specifically test this technique on the atmospheric circulation in the North Atlantic region, where it provides predictability of large-scale wintertime surface temperature extremes in Europe up to 1 week in advance.
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
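The fractional factorial idea can be made concrete with the smallest Taguchi array, L4(2³): four trials suffice to estimate the main effects of three two-level factors (interactions, as the abstract notes, are not resolved). The responses below are invented for illustration.

```python
import numpy as np

# Taguchi L4(2^3) orthogonal array: 4 trials, three 2-level factors;
# every pair of columns contains each level combination exactly once
L4 = np.array([[0, 0, 0],
               [0, 1, 1],
               [1, 0, 1],
               [1, 1, 0]])

# hypothetical measured responses for the four trials
y = np.array([12.0, 15.0, 20.0, 23.0])

# main effect of each factor: mean response at level 1 minus level 0
effects = [y[L4[:, j] == 1].mean() - y[L4[:, j] == 0].mean()
           for j in range(3)]
```

With these invented responses, factor A dominates (effect 8), factor B is moderate (effect 3), and factor C is inert (effect 0), which is the kind of ranking students would extract in a classroom exercise.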
Simplified Method for Groundwater Treatment Using Dilution and Ceramic Filter
NASA Astrophysics Data System (ADS)
Musa, S.; Ariff, N. A.; Kadir, M. N. Abdul; Denan, F.
2016-07-01
Groundwater is a natural resource that is not immune to pollutants. Increasing municipal, industrial, agricultural or extreme land use activities have resulted in groundwater contamination, as occurred at the Research Centre for Soft Soil Malaysia (RECESS), Universiti Tun Hussein Onn Malaysia (UTHM). Thus, the aim of this study is to treat groundwater using rainwater and a simple ceramic filter as treatment agents. The treatment uses rainwater dilution, ceramic filtration and a combined dilution-and-filtration method as alternative treatments which are simple and more practical than modern or chemical methods. Dilution alone achieved a 57% reduction relative to the initial condition. Filtration alone brought 86% of the groundwater parameters within the standards, with only chloride failing. The combined dilution-and-filtration method gave favorable results, bringing 100% of the parameters, notably sulfate and chloride, within the standards of the Ministry of Health and the Interim National Drinking Water Quality Standard for groundwater such as that found at RECESS, UTHM. As a result, the raw water can be made clean and safe for drinking. This also shows that the method used in this study is very effective in improving groundwater quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
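The excluded-area reasoning can be sketched as follows. The value η_c ≈ 1.128 is the well-known reduced critical density for equal 2D disks; the number-fraction-weighted averaging in the binary function is an illustrative guess at the flavor of the extension, not the paper's derived expression.

```python
import math

ETA_C = 1.128  # near-invariant reduced critical density for 2D disks

def monodisperse_threshold(r):
    """Critical number density and covered-area fraction for equal
    disks of radius r in continuum percolation."""
    n_c = ETA_C / (math.pi * r**2)
    phi_c = 1.0 - math.exp(-ETA_C)   # ~0.676, the classic disk value
    return n_c, phi_c

def binary_disk_threshold(r1, r2, x1):
    """Hypothetical extension to a binary disk mix: the
    number-fraction-weighted disk area replaces pi*r^2 (illustrative
    averaging only; the paper derives the precise 2D binary form)."""
    a_bar = x1 * math.pi * r1**2 + (1.0 - x1) * math.pi * r2**2
    return ETA_C / a_bar

n_c, phi_c = monodisperse_threshold(1.0)
n_c_equal = binary_disk_threshold(1.0, 1.0, 0.3)   # reduces to n_c
```

The appeal of such closed forms over Monte Carlo is evident: they evaluate in microseconds, so a composite designer can scan filler size ratios and loadings interactively.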
Detonation product EOS studies: Using ISLS to refine CHEETAH
NASA Astrophysics Data System (ADS)
Zaug, Joseph; Fried, Larry; Hansen, Donald
2001-06-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a suite of non-ideal simple fluids and fluid mixtures. Impulsive Stimulated Light Scattering conducted in the diamond-anvil cell offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model CHEETAH. Computational models are systematically improved with each addition of experimental data. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
Detonation Product EOS Studies: Using ISLS to Refine Cheetah
NASA Astrophysics Data System (ADS)
Zaug, J. M.; Howard, W. M.; Fried, L. E.; Hansen, D. W.
2002-07-01
Knowledge of an effective interatomic potential function underlies any effort to predict or rationalize the properties of solids and liquids. The experiments we undertake are directed towards determination of equilibrium and dynamic properties of simple fluids at densities sufficiently high that traditional computational methods and semi-empirical forms successful at ambient conditions may require reconsideration. In this paper we present high-pressure and temperature experimental sound speed data on a simple fluid, methanol. Impulsive Stimulated Light Scattering (ISLS) conducted on diamond-anvil cell (DAC) encapsulated samples offers an experimental approach to determine cross-pair potential interactions through equation of state determinations. In addition the kinetics of structural relaxation in fluids can be studied. We compare our experimental results with our thermochemical computational model Cheetah. Experimentally grounded computational models provide a good basis to confidently understand the chemical nature of reactions at extreme conditions.
NASA Technical Reports Server (NTRS)
1981-01-01
The set of computer programs described allows for data definition, data input, and data transfer between the LSI-11 microcomputers and the VAX-11/780 minicomputer. Program VAXCOM allows for a simple method of textual file transfer from the LSI to the VAX. Program LSICOM allows for easy file transfer from the VAX to the LSI. Program TTY changes the LSI-11 operator's console to the LSI's printing device. Program DICTIN provides a means for defining a data set for input to either computer. Program DATAIN is a simple-to-operate data entry program capable of building data files on either machine. Program LEDITV is an extremely powerful, easy-to-use, line-oriented text editor. Program COPYSBF is designed to print textual files on the line printer without character loss from FORTRAN carriage control or wide record transfer.
NASA Technical Reports Server (NTRS)
Holmes, Thomas; Owe, Manfred; deJeu, Richard
2007-01-01
Two data sets of experimental field observations covering a range of meteorological conditions are used to investigate the possibility of modeling near-surface soil temperature profiles in a bare soil. It is shown that commonly used heat flow methods that assume a constant ground heat flux cannot be used to model the extreme variations in temperature that occur near the surface. This paper proposes a simple approach for modeling surface soil temperature profiles from a single-depth observation. The approach consists of two parts: 1) modeling an instantaneous ground heat flux profile based on net radiation and the ground heat flux at 5 cm depth; 2) using this ground heat flux profile to extrapolate a single temperature observation to a continuous near-surface temperature profile. The new model is validated with an independent data set from a different soil and under a range of meteorological conditions.
A discrete Markov metapopulation model for persistence and extinction of species.
Thompson, Colin J; Shtilerman, Elad; Stone, Lewi
2016-09-07
A simple discrete-generation Markov metapopulation model is formulated for studying the persistence and extinction dynamics of a species in a given region which is divided into a large number of sites or patches. Assuming a linear site occupancy probability from one generation to the next, we obtain exact expressions for the time evolution of the expected number of occupied sites and the mean time to extinction (MTE). Under quite general conditions we show that the MTE is, to leading order, proportional to the logarithm of the initial number of occupied sites, in precise agreement with similar expressions for continuous time-dependent stochastic models. Our key contribution is a novel application of generating function techniques and simple asymptotic methods to obtain a second-order asymptotic expression for the MTE which is extremely accurate over the entire range of model parameter values. Copyright © 2016 Elsevier Ltd. All rights reserved.
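The logarithmic scaling of the MTE can be checked with a toy Monte Carlo. The chain below, in which each occupied site independently persists to the next generation with probability s, is a simplified stand-in for the paper's linear-occupancy Markov model; the parameter values are invented.

```python
import numpy as np

def mean_time_to_extinction(n0, s=0.8, trials=2000, seed=3):
    """Monte Carlo MTE for a toy discrete-generation chain: each of the
    n occupied sites survives to the next generation with probability s
    (subcritical, so extinction is certain)."""
    rng = np.random.default_rng(seed)
    times = np.empty(trials)
    for k in range(trials):
        n, t = n0, 0
        while n > 0:
            n = rng.binomial(n, s)   # occupied sites next generation
            t += 1
        times[k] = t
    return times.mean()

# MTE for geometrically spaced initial occupancies: the successive
# differences should be roughly equal, i.e. MTE ~ log(n0)
mtes = {n0: mean_time_to_extinction(n0) for n0 in (4, 16, 64)}
```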
Removal of obstructing T-tube and stabilization of the airway.
Athavale, Sanjay M; Dang, Jennifer; Rangarajan, Sanjeet; Garrett, Gaelyn
2011-05-01
Although they are extremely effective in maintaining tracheal and subglottic patency, T-tubes themselves can result in airway obstruction from plugging. Many practitioners educate patients on placing a small (5.0) endotracheal tube (ETT) through the tracheal limb of the T-tube if they develop airway obstruction. Unfortunately, this can be a difficult task to complete during acute airway obstruction. In this article, we describe a simple set of steps for rapid relief of airway obstruction and stabilization of the airway in the event of T-tube obstruction. This method requires removal of the T-tube with a Kelly clamp and stabilization of the airway with a tracheostomy tube. Although it is simple, we hope that this technique will prevent morbidity and mortality from acute airway obstructions related to T-tubes. Copyright © 2011 The American Laryngological, Rhinological, and Otological Society, Inc.
Shrivas, Kamlesh; Wu, Hui-Fen
2007-11-02
A simple and rapid sample cleanup and preconcentration method for the quantitative determination of caffeine in one drop of beverages and foods by gas chromatography/mass spectrometry (GC/MS) has been proposed using drop-to-drop solvent microextraction (DDSME). The optimum experimental conditions for DDSME were: chloroform as the extraction solvent, a 5 min extraction time, a 0.5 microL exposure volume of the extraction phase, and no salt addition at room temperature. The optimized methodology exhibited good linearity between 0.05 and 5.0 microg/mL with a correlation coefficient of 0.980. The relative standard deviation (RSD) and limit of detection (LOD) of the DDSME/GC/MS method were 4.4% and 4.0 ng/mL, respectively. Relative recoveries of caffeine in beverages and foods were 96.6-101%, showing the good reliability of the method. DDSME avoids the major disadvantages of conventional caffeine extraction methods, such as large organic solvent and sample consumption and long sample pre-treatment. This approach shows that the DDSME/GC/MS technique can be applied as a simple, fast and feasible diagnostic tool for environmental, food and biological applications involving extremely small amounts of real sample.
Adaptive Online Sequential ELM for Concept Drift Tackling
Basaruddin, Chan
2016-01-01
A machine learning method needs to adapt to changes in the environment over time. Such changes are known as concept drift. In this paper, we propose a concept-drift tackling method as an enhancement of Online Sequential Extreme Learning Machine (OS-ELM) and Constructive Enhancement OS-ELM (CEOS-ELM) by adding adaptive capability for classification and regression problems. The scheme is named adaptive OS-ELM (AOS-ELM). It is a single-classifier scheme that works well to handle real drift, virtual drift, and hybrid drift. The AOS-ELM also works well for sudden drift and recurrent context change types. The scheme is a simple unified method implemented in a few lines of code. We evaluated AOS-ELM on regression and classification problems by using public concept-drift data sets (SEA and STAGGER) and other public data sets such as MNIST, USPS, and IDS. Experiments show that our method gives a higher kappa value compared to the multiclassifier ELM ensemble. Even though AOS-ELM in practice does not need an increase in hidden nodes, we address some issues related to increasing the hidden nodes, such as error conditions and rank values. We propose taking the rank of the pseudoinverse matrix as an indicator parameter to detect an “underfitting” condition. PMID:27594879
Kim, Ah-Young; Choi, Myoung Su
2015-05-14
Canine fossa puncture (CFP) combined with endoscopic sinus surgery is a simple and effective method for treating antrochoanal polyps, particularly those that originate in the anterior, inferior or medial aspect of the antrum. Several complications can occur following CFP, including facial paraesthesia and dental numbness. However, facial palsy is extremely rare after CFP. We postulated that a possible mechanism of facial palsy is pressure injury to the soft tissues adjacent to the puncture site, which can damage the buccal branch of the facial nerve during CFP. 2015 BMJ Publishing Group Ltd.
Collisional-radiative switching - A powerful technique for converging non-LTE calculations
NASA Technical Reports Server (NTRS)
Hummer, D. G.; Voels, S. A.
1988-01-01
A very simple technique has been developed to converge statistical-equilibrium and model-atmosphere calculations in extreme non-LTE conditions when the usual iterative methods fail to converge from an LTE starting model. The proposed technique is based on a smooth transition from a collision-dominated LTE situation to the desired non-LTE conditions in which radiation dominates, at least in the most important transitions. The proposed approach was used to successfully compute stellar models with He abundances of 0.20, 0.30, and 0.50; Teff = 30,000 K; and log g = 2.9.
Oscillations of a Simple Pendulum with Extremely Large Amplitudes
ERIC Educational Resources Information Center
Butikov, Eugene I.
2012-01-01
Large oscillations of a simple rigid pendulum with amplitudes close to 180° are treated on the basis of a physically justified approach in which the cycle of oscillation is divided into several stages. The major part of the almost closed circular path of the pendulum is approximated by the limiting motion, while the motion in the vicinity…
Music through the Skin--Simple Demonstration of Human Electrical Conductivity
ERIC Educational Resources Information Center
Vollmer, M.; Möllmann, K. P.
2016-01-01
The conduction of electricity is an important topic for any basic physics course. Issues of safety often result in teacher demonstration experiments in front of the class or in extremely simple though--for students--not really fascinating (not to say boring) hands-on activities for everybody using 1.5 V batteries, cables, light bulbs, etc. Here…
On extreme points of the diffusion polytope
Hay, M. J.; Schiff, J.; Fisch, N. J.
2017-01-04
Here, we consider a class of diffusion problems defined on simple graphs in which the populations at any two vertices may be averaged if they are connected by an edge. The diffusion polytope is the convex hull of the set of population vectors attainable using finite sequences of these operations. A number of physical problems have linear programming solutions taking the diffusion polytope as the feasible region, e.g. the free energy that can be removed from plasma using waves, so there is a need to describe and enumerate its extreme points. We also review known results for the case of the complete graph Kn, and study a variety of problems for the path graph Pn and the cyclic graph Cn. Finally, we describe the different kinds of extreme points that arise, and identify the diffusion polytope in a number of simple cases. In the case of increasing initial populations on Pn the diffusion polytope is topologically an n-dimensional hypercube.
Color images of Kansas subsurface geology from well logs
Collins, D.R.; Doveton, J.H.
1986-01-01
Modern wireline log combinations give highly diagnostic information that goes beyond the basic shale content, pore volume, and fluid saturation of older logs. Pattern recognition of geology from logs is made conventionally through either the examination of log overlays or log crossplots. Both methods can be combined through the use of color as a medium of information by setting the three color primaries of blue, green, and red light as axes of three dimensional color space. Multiple log readings of zones are rendered as composite color mixtures which, when plotted sequentially with depth, show lithological successions in a striking manner. The method is extremely simple to program and display on a color monitor. Illustrative examples are described from the Kansas subsurface. © 1986.
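The colour-composite idea — three log curves driving the blue, green and red primaries — reduces to a few lines of code. The sketch below is a hedged illustration (the min-max scaling and the assignment of curves to channels are assumptions, not taken from the paper): each curve is rescaled to 0-255 over its own range, and the three scaled values form the colour triple plotted against depth.

```python
def logs_to_rgb(blue_log, green_log, red_log):
    """Render three wireline-log curves as one colour track: each curve is
    rescaled to 0-255 over its own range, and the scaled values drive the
    blue, green and red primaries of a depth-sequential colour plot.
    (Curve assignment and min-max scaling are illustrative choices.)"""
    def scale(curve):
        lo, hi = min(curve), max(curve)
        span = (hi - lo) or 1.0                 # guard against a flat curve
        return [round(255 * (v - lo) / span) for v in curve]
    # one (R, G, B) triple per depth sample
    return list(zip(scale(red_log), scale(green_log), scale(blue_log)))
```

Zones with similar three-log signatures get similar colours, so lithological successions appear as colour bands down the depth track.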
Repeated Solid-state Dewetting of Thin Gold Films for Nanogap-rich Plasmonic Nanoislands.
Kang, Minhee; Park, Sang-Gil; Jeong, Ki-Hun
2015-10-15
This work reports a facile wafer-level fabrication for nanogap-rich gold nanoislands for highly sensitive surface enhanced Raman scattering (SERS) by repeating solid-state thermal dewetting of thin gold film. The method provides enlarged gold nanoislands with small gap spacing, which increase the number of electromagnetic hotspots and thus enhance the extinction intensity as well as the tunability for plasmon resonance wavelength. The plasmonic nanoislands from repeated dewetting substantially increase SERS enhancement factor over one order-of-magnitude higher than those from a single-step dewetting process and they allow ultrasensitive SERS detection of a neurotransmitter with extremely low Raman activity. This simple method provides many opportunities for engineering plasmonics for ultrasensitive detection and highly efficient photon collection.
Predictive Thermal Control Applied to HabEx
NASA Technical Reports Server (NTRS)
Brooks, Thomas E.
2017-01-01
Exoplanet science can be accomplished with a telescope that has an internal coronagraph or with an external starshade. An internal coronagraph architecture requires extreme wavefront stability (10 pm change/10 minutes for 10^-10 contrast), so every source of wavefront error (WFE) must be controlled. Analysis has been done to estimate the thermal stability required to meet the wavefront stability requirement. This paper illustrates the potential of a new thermal control method called predictive thermal control (PTC) to achieve the required thermal stability. A simple development test using PTC indicates that PTC may meet the thermal stability requirements. Further testing of the PTC method in flight-like environments will be conducted in the X-ray and Cryogenic Facility (XRCF) at Marshall Space Flight Center (MSFC).
An algorithm for determining the rotation count of pulsars
NASA Astrophysics Data System (ADS)
Freire, Paulo C. C.; Ridolfi, Alessandro
2018-06-01
We present here a simple, systematic method for determining the correct global rotation count of a radio pulsar, an essential step for the derivation of an accurate phase-coherent ephemeris. We then build on this method by developing a new algorithm for determining the global rotation count for pulsars with sparse timing data sets. This makes it possible to obtain phase-coherent ephemerides for pulsars for which this has been impossible until now. As an example, we do this for PSR J0024-7205aa, an extremely faint millisecond pulsar (MSP) recently discovered in the globular cluster 47 Tucanae. This algorithm has the potential to significantly reduce the number of observations and the amount of telescope time needed to follow up on new pulsar discoveries.
[Nailfold capillaroscopy: relevance to the practice of rheumatology].
Souza, Eduardo José do Rosário E; Kayser, Cristiane
2015-01-01
Nailfold capillaroscopy is a simple, low-cost method that is extremely important in the evaluation of patients with Raynaud's phenomenon and of patients with systemic sclerosis (SSc) spectrum diseases. Besides its importance for the early diagnosis of SSc, nailfold capillaroscopy is a useful tool for identifying scleroderma patients at high risk of developing vascular and visceral complications and death. The inclusion of capillaroscopy in the new classification criteria for SSc of the American College of Rheumatology (ACR) and European League Against Rheumatism (EULAR) gives a new impetus to the use and dissemination of the method. In this paper, we present a didactic, non-systematic review of the subject, with emphasis on recently described advances. Copyright © 2014 Elsevier Editora Ltda. All rights reserved.
Imaging with hypertelescopes: a simple modal approach
NASA Astrophysics Data System (ADS)
Aime, C.
2008-05-01
Aims: We give a simple analysis of imaging with hypertelescopes, a technique proposed by Labeyrie to produce snapshot images using arrays of telescopes. The approach is modal: we describe the transformations induced by the densification onto a sinusoidal decomposition of the focal image instead of the usual point spread function approach. Methods: We first express the image formed at the focus of a diluted array of apertures as the product R_0(α) X_F(α) of the diffraction pattern of the elementary apertures R_0(α) by the object-dependent interference term X_F(α) between all apertures. The interference term, which can be written in the form of a Fourier series for an extremely diluted array, produces replications of the object, which makes observing the image difficult. We express the focal image after the densification using the approach of Tallon and Tallon-Bosc. Results: The result is very simple for an extremely diluted array. We show that the focal image in a periscopic densification of the array can be written as R_0(α) X_F(α/γ), where γ is the factor of densification. There is a dilatation of the interference term while the diffraction term is unchanged. After de-zooming, the image can be written as γ² X_F(α) R_0(γα), an expression which clearly indicates that the final image corresponds to the center of the Fizeau image intensified by γ². The imaging limitations of hypertelescopes are therefore those of the original configuration. The effect of the suppression of image replications is illustrated in a numerical simulation for a fully redundant configuration and a non-redundant one.
Frenz, Patricia; González, Claudia
2010-09-01
The infant mortality gradient by maternal education is a good indicator of the health impact of the social inequalities that prevail in Chile. We propose a systematic method of analysis, using simple epidemiological measures, for the comparison of differential health risks between social groups that change over time. Data and statistics on births and infant deaths, obtained from the Ministry of Health, were used. Five strata of maternal schooling were defined and various measures were calculated to compare infant mortality according to maternal education in the periods 1998-2001 and 2001-2003. Of particular interest is the distinction between a measure of effect, the Relative Risk (RR), which indicates the size of the gap between socioeconomic extremes and the etiological strength of low maternal schooling on infant mortality, and a measure of global impact, the Population Attributable Risk (PAR%), which takes into account the whole socioeconomic distribution and permits comparisons over time independently of the variability in the proportions of the different social strata. The comparison of these measures in the two periods studied reveals an increase in the infant mortality gap between maternal educational extremes measured by the RR, but a stabilization in the population impact of low maternal schooling. These results can be explained by a decline in the proportion of mothers in the lowest educational level and an increase in the proportion in the highest group.
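The two measures contrasted in this abstract are standard epidemiological formulas and can be computed in a few lines. In the sketch below the stratified counts are invented for illustration, not the Chilean data:

```python
def rr_and_par(deaths, births):
    """Relative risk (RR) between the extreme maternal-schooling strata and
    population attributable risk percent (PAR%) for stratified infant
    mortality counts. Lists run from lowest to highest schooling stratum."""
    rates = [d / b for d, b in zip(deaths, births)]
    rr = rates[0] / rates[-1]                  # gap between socioeconomic extremes
    overall = sum(deaths) / sum(births)        # rate in the whole population
    par_pct = 100 * (overall - rates[-1]) / overall
    return rr, par_pct
```

RR tracks only the extreme strata, while PAR% shifts as the population redistributes across strata — exactly the distinction the abstract draws between a widening gap and a stable population impact.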
Ultrastable Quantum Dot Composite Films under Severe Environments.
Yang, Zunxian; Zhang, Yuxiang; Liu, Jiahui; Ai, Jingwei; Lai, Shouqiang; Zhao, Zhiwei; Ye, Bingqing; Ruan, Yushuai; Guo, Tailiang; Yu, Xuebin; Chen, Gengxu; Lin, Yuanyuan; Xu, Sheng
2018-05-09
Semiconductor quantum dots (QDs) have attracted extensive attention because of their remarkable optical and electrical characteristics. However, the practical application of QDs, and of QD composite films in particular, has been greatly hindered, mainly owing to their intrinsic drawback of extreme instability in oxygen and water environments. Herein, a simple method has been employed to greatly enhance the stability of CdxZn1-xSeyS1-y QD composite films by combining CdxZn1-xSeyS1-y QDs with poly(vinylidene) fluoride (PVDF), which is characterized by closely arranged molecular chains and strong hydrogen bonds. QD/PVDF composite films offer many particular advantages, such as easy processing, low cost, large-area fabrication, and especially extreme stability, even in boiling water for more than 240 min. By employing K2SiF6:Mn4+ as a red phosphor, a prototype white light-emitting diode (WLED) with color coordinates of (0.3307, 0.3387), a Tc of 5568 K, and a color gamut of 112.1% NTSC (1931) at 20 mA has been fabricated, and there is little variation under different excitation currents, indicating that QD/PVDF composite films fabricated by this simple blade-coating process are ideal candidates for liquid-crystal display backlight use via assembly into a WLED on a large scale, owing to their ultrahigh stability under severe environments.
New quantitative method for evaluation of motor functions applicable to spinal muscular atrophy.
Matsumaru, Naoki; Hattori, Ryo; Ichinomiya, Takashi; Tsukamoto, Katsura; Kato, Zenichiro
2018-03-01
The aim of this study was to develop and introduce a new method to quantify motor functions of the upper extremity. The movement was recorded using a three-dimensional motion capture system, and the movement trajectory was analyzed using two newly developed indices, which measure repeatability and directional smoothness. Our target task was shoulder flexion repeated ten times. We applied our method to a healthy adult without and with a weight, simulating muscle impairment. We also applied our method to assess the efficacy of a drug therapy for amelioration of motor functions in a non-ambulatory patient with spinal muscular atrophy. Movement trajectories before and after thyrotropin-releasing hormone therapy were analyzed. In the healthy adult, the values of both indices increased significantly when holding a weight, so the weight-induced deterioration in motor function was successfully detected. In the efficacy assessment of drug therapy in the patient, the directional smoothness index successfully detected improvements in motor function, which were also observed clinically by the patient's doctors. We have developed a new quantitative evaluation method for motor functions of the upper extremity. The clinical usability of this method is also greatly enhanced by reducing the required number of body-attached markers to only one. This simple but universal approach to quantifying motor functions will provide additional insights into the clinical phenotypes of various neuromuscular diseases and developmental disorders. Copyright © 2017 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
An empirical approach to improving tidal predictions using recent real-time tide gauge data
NASA Astrophysics Data System (ADS)
Hibbert, Angela; Royston, Samantha; Horsburgh, Kevin J.; Leach, Harry
2014-05-01
Classical harmonic methods of tidal prediction are often problematic in estuarine environments due to the distortion of tidal fluctuations in shallow water, which results in a disparity between predicted and observed sea levels. This is of particular concern in the Bristol Channel, where the error associated with tidal predictions is potentially greater due to an unusually large tidal range of around 12 m. As such predictions are fundamental to the short-term forecasting of High Water (HW) extremes, it is vital that alternative solutions are found. In a pilot study, using a year-long observational sea level record from the Port of Avonmouth in the Bristol Channel, the UK National Tidal and Sea Level Facility (NTSLF) tested the potential for reducing tidal prediction errors using three alternatives to the Harmonic Method of tidal prediction. The three methods evaluated were (1) Artificial Neural Network (ANN) models, (2) the Species Concordance technique, and (3) a simple empirical procedure for correcting Harmonic Method High Water predictions based upon a few recent observations (referred to as the Empirical Correction Method). This latter method was then successfully applied to sea level records from a further 42 of the 45 tide gauges that comprise the UK Tide Gauge Network. Consequently, it is to be incorporated into the operational systems of the UK Coastal Monitoring and Forecasting Partnership in order to improve short-term sea level predictions for the UK and, in particular, the accurate estimation of HW extremes.
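The abstract gives no formula for the Empirical Correction Method; a plausible minimal reading — shifting the next harmonic high-water prediction by the mean error of the last few HW events — can be sketched as follows (the averaging window, the plain-mean correction, and all variable names are assumptions, not NTSLF's actual procedure):

```python
def corrected_high_water(next_prediction, recent_predictions, recent_observations, window=5):
    """Shift the next harmonic high-water prediction by the mean error
    (observed minus predicted) over the most recent HW events.
    A hedged, minimal reading of an empirical correction scheme."""
    pairs = list(zip(recent_predictions, recent_observations))[-window:]
    mean_error = sum(obs - pred for pred, obs in pairs) / len(pairs)
    return next_prediction + mean_error
```

The appeal of such a scheme is that it needs only a short buffer of recent real-time gauge data, which is why it could be rolled out across an entire tide gauge network.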
An extended abstract: A heuristic repair method for constraint-satisfaction and scheduling problems
NASA Technical Reports Server (NTRS)
Minton, Steven; Johnston, Mark D.; Philips, Andrew B.; Laird, Philip
1992-01-01
The work described in this paper was inspired by a surprisingly effective neural network developed for scheduling astronomical observations on the Hubble Space Telescope. Our heuristic constraint satisfaction problem (CSP) method was distilled from an analysis of the network. In the process of carrying out the analysis, we discovered that the effectiveness of the network has little to do with its connectionist implementation. Furthermore, the ideas employed in the network can be implemented very efficiently within a symbolic CSP framework. The symbolic implementation is extremely simple. It also has the advantage that several different search strategies can be employed, although we have found that hill-climbing methods are particularly well-suited for the applications that we have investigated. We begin the paper with a brief review of the neural network. Following this, we describe our symbolic method for heuristic repair.
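The heuristic repair idea distilled from the network is essentially min-conflicts hill climbing: start from a complete (conflicted) assignment and repeatedly repair one conflicted variable. A minimal sketch on the classic n-queens CSP — an illustrative benchmark, not the HST scheduling problem — looks like this:

```python
import random

def min_conflicts_queens(n=16, max_steps=10_000, seed=0):
    """Heuristic repair on n-queens: start from a complete random assignment,
    then repeatedly move one conflicted queen to a column that minimises
    its conflicts (the min-conflicts hill-climbing heuristic)."""
    rng = random.Random(seed)
    cols = [rng.randrange(n) for _ in range(n)]   # cols[r] = column of the queen in row r

    def conflicts(r, c):
        # queens attack along columns and the two diagonals
        return sum(1 for r2 in range(n)
                   if r2 != r and (cols[r2] == c or abs(cols[r2] - c) == abs(r2 - r)))

    for _ in range(max_steps):
        conflicted = [r for r in range(n) if conflicts(r, cols[r]) > 0]
        if not conflicted:
            return cols                           # consistent solution found
        r = rng.choice(conflicted)
        scores = [conflicts(r, c) for c in range(n)]
        best = min(scores)
        cols[r] = rng.choice([c for c in range(n) if scores[c] == best])
    return None                                   # step budget exhausted
```

As the abstract notes, the repair heuristic is independent of any connectionist implementation: the whole procedure is a plain symbolic loop, and the hill-climbing strategy could be swapped for other search strategies.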
NASA Astrophysics Data System (ADS)
Roberts, Michael J.; Braun, Noah O.; Sinclair, Thomas R.; Lobell, David B.; Schlenker, Wolfram
2017-09-01
We compare predictions of a simple process-based crop model (Soltani and Sinclair 2012), a simple statistical model (Schlenker and Roberts 2009), and a combination of both models to actual maize yields on a large, representative sample of farmer-managed fields in the Corn Belt region of the United States. After statistical post-model calibration, the process model (Simple Simulation Model, or SSM) predicts actual outcomes slightly better than the statistical model, but the combined model performs significantly better than either model. The SSM, statistical model and combined model all show similar relationships with precipitation, while the SSM better accounts for temporal patterns of precipitation, vapor pressure deficit and solar radiation. The statistical and combined models show a more negative impact associated with extreme heat for which the process model does not account. Due to the extreme heat effect, predicted impacts under uniform climate change scenarios are considerably more severe for the statistical and combined models than for the process-based model.
Extreme Sea Conditions in Shallow Water: Estimation based on in-situ measurements
NASA Astrophysics Data System (ADS)
Le Crom, Izan; Saulnier, Jean-Baptiste
2013-04-01
The design of marine renewable energy devices and components is based, among other things, on the assessment of the environmental extreme conditions (winds, currents, waves, and water level) that must be combined in order to evaluate the maximal loads on a floating/fixed structure, and on the anchoring system, over a given return period. Measuring devices are generally deployed at sea over relatively short durations (a few months to a few years), typically when describing water free-surface elevation, so extrapolation methods based on hindcast data (and therefore on wave simulation models) have to be used. How to combine the action of the different loads (winds and waves, for instance) in a realistic way, and which correlation of return periods should be used, are highly topical issues. However, the assessment of the extreme condition itself remains a not-fully-solved, crucial, and sensitive task. Above all, in shallow water the extreme wave height, Hmax, is the most significant contribution in the dimensioning process of marine renewable energy devices. As a case study, existing methodologies for deep water have been applied to SEMREV, the French marine energy test site. The interest of this study, especially at this location, goes beyond the simple application to the deployment of SEMREV's wave energy converters and floating wind turbines, as it could also be extended to the Banc de Guérande offshore wind farm planned close by, and more generally to pipes and communication cables, for which this is a recurrent problem. The paper will first present the existing measurements (wave and wind on site), the prediction chain that has been developed via wave models, and the extrapolation methods applied to hindcast data, and will try to formulate recommendations for improving this assessment in shallow water.
Global coastal flood hazard mapping
NASA Astrophysics Data System (ADS)
Eilander, Dirk; Winsemius, Hessel; Ward, Philip; Diaz Loaiza, Andres; Haag, Arjen; Verlaan, Martin; Luo, Tianyi
2017-04-01
Over 10% of the world's population lives in low-lying coastal areas (up to 10 m elevation). Many of these areas are prone to flooding from tropical storm surges or extra-tropical high sea levels in combination with high tides. A 1-in-100-year extreme sea level is estimated to expose 270 million people and 13 trillion USD worth of assets to flooding. Coastal flood risk is expected to increase due to drivers such as ground subsidence, intensification of tropical and extra-tropical storms, sea level rise and socio-economic development. For a better understanding of the hazard and the drivers of global coastal flood risk, a globally consistent analysis of coastal flooding is required. In this contribution we present a comprehensive global coastal flood hazard mapping study. Coastal flooding is estimated using a modular inundation routine, based on a vegetation-corrected SRTM elevation model and forced by extreme sea levels. Per tile, either a simple GIS inundation routine or a hydrodynamic model can be selected. The GIS inundation method projects extreme sea levels onto land, taking into account physical obstructions and attenuation of the surge level inland. For coastlines with steep slopes, or where local dynamics play a minor role in flood behavior, this fast GIS method can be applied. Extreme sea levels are derived from the Global Tide and Surge Reanalysis (GTSR) dataset. Future sea level projections are based on probabilistic sea level rise for the RCP 4.5 and RCP 8.5 scenarios. The approach is validated against observed flood extents from ground and satellite observations. The results will be made available through the online Aqueduct Global Flood Risk Analyzer of the World Resources Institute.
NASA Astrophysics Data System (ADS)
Padmanabhan, Saraswathi; Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke M.; Padmanabhan, Parasuraman
2010-01-01
A simple optical method using a hollow-core photonic crystal fiber for protein detection is described. In this study, estrogen receptor (ER) from MCF-7 breast carcinoma cell lysates immobilized inside a hollow-core photonic crystal fiber was detected using an anti-ER primary antibody with either Alexa™ Fluor 488 (green fluorescent dye) or 555 (red fluorescent dye) labeled goat anti-rabbit IgG as the secondary antibody. The fluorescence fingerprints of the ERα protein were observed under a fluorescence microscope, and its optical characteristics were analyzed. The ERα protein detection by this proposed method is based on immunobinding from a sample volume as low as 50 nL. This method is expected to offer great potential as a biosensor for medical diagnostics and therapeutic applications.
Simplified estimation of age-specific reference intervals for skewed data.
Wright, E M; Royston, P
1997-12-30
Age-specific reference intervals are commonly used in medical screening and clinical practice, where interest lies in the detection of extreme values. Many different statistical approaches have been published on this topic. The advantages of a parametric method are that it necessarily produces smooth centile curves, the entire density is estimated, and an explicit formula is available for the centiles. The method proposed here is a simplified version of a recent approach by Royston and Wright. Basic transformations of the data and multiple regression techniques are combined to model the mean, standard deviation and skewness. Using these simple tools, which are implemented in almost all statistical computer packages, age-specific reference intervals may be obtained. The scope of the method is illustrated by fitting models to several real data sets and assessing each model using goodness-of-fit techniques.
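A drastically simplified version of this idea — one least-squares line for the mean, a second for the SD (via scaled absolute residuals, using E|X| = sd·sqrt(2/π)), combined as mean(age) ± z·SD(age) — can be sketched as follows. This is an illustration of the general approach only; it omits the skewness transformation and the goodness-of-fit assessment central to the actual method:

```python
import math
from statistics import NormalDist

def reference_interval(ages, values, tail=0.025):
    """Age-specific reference interval from two least-squares lines:
    one for the mean and one for the SD, the latter fitted to absolute
    residuals rescaled by sqrt(pi/2) so they estimate the SD."""
    n = len(ages)
    abar = sum(ages) / n
    vbar = sum(values) / n
    sxx = sum((a - abar) ** 2 for a in ages)
    slope = sum((a - abar) * (v - vbar) for a, v in zip(ages, values)) / sxx
    intercept = vbar - slope * abar

    def mean(age):
        return intercept + slope * age

    # absolute residuals, rescaled so their regression line estimates the SD
    res = [abs(v - mean(a)) * math.sqrt(math.pi / 2) for a, v in zip(ages, values)]
    rbar = sum(res) / n
    rslope = sum((a - abar) * (r - rbar) for a, r in zip(ages, res)) / sxx
    rintercept = rbar - rslope * abar

    z = NormalDist().inv_cdf(tail)             # e.g. about -1.96 for a 95% interval

    def interval(age):
        sd = rintercept + rslope * age
        return mean(age) + z * sd, mean(age) - z * sd   # (lower, upper) centiles
    return interval
```

Because everything reduces to least squares and a normal quantile, the same construction runs in any statistical package, which is the point the abstract makes about the method's simplicity.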
NASA Astrophysics Data System (ADS)
Pang, Linyong; Hu, Peter; Satake, Masaki; Tolani, Vikram; Peng, Danping; Li, Ying; Chen, Dongxue
2011-11-01
According to the ITRS roadmap, mask defects are among the top technical challenges to introducing extreme ultraviolet (EUV) lithography into production. Making a multilayer defect-free EUV blank is not possible today, and is unlikely to happen in the next few years. This means that EUV must work with multilayer defects present on the mask. The method proposed by Luminescent is to compensate for the effects of multilayer defects on images by modifying the absorber patterns. The effect of a multilayer defect is to distort the images of adjacent absorber patterns. Although the defect cannot be repaired, the images may be restored to their desired targets by changing the absorber patterns. This method was first introduced in our paper at BACUS 2010, which described a simple pixel-based compensation algorithm using a fast multilayer model. The fast model made it possible to complete the compensation calculations in seconds, instead of the days or weeks required for rigorous finite-difference time-domain (FDTD) simulations. Our SPIE 2011 paper introduced an advanced compensation algorithm using the Level Set Method for 2D absorber patterns. In this paper the method is extended to consider process window and to allow for repair tool constraints, such as permitting etching but not deposition. The multilayer defect growth model is also enhanced so that the multilayer defect can be "inverted", or recovered from the top-layer profile using a calibrated model.
Improving the Statistical Modeling of the TRMM Extreme Precipitation Monitoring System
NASA Astrophysics Data System (ADS)
Demirdjian, L.; Zhou, Y.; Huffman, G. J.
2016-12-01
This project improves upon an existing extreme precipitation monitoring system based on the Tropical Rainfall Measuring Mission (TRMM) daily product (3B42) using new statistical models. The proposed system utilizes a regional modeling approach, where data from similar grid locations are pooled to increase the quality and stability of the resulting model parameter estimates to compensate for the short data record. The regional frequency analysis is divided into two stages. In the first stage, the region defined by the TRMM measurements is partitioned into approximately 27,000 non-overlapping clusters using a recursive k-means clustering scheme. In the second stage, a statistical model is used to characterize the extreme precipitation events occurring in each cluster. Instead of utilizing the block-maxima approach used in the existing system, where annual maxima are fit to the Generalized Extreme Value (GEV) probability distribution at each cluster separately, the present work adopts the peak-over-threshold (POT) method of classifying points as extreme if they exceed a pre-specified threshold. Theoretical considerations motivate the use of the Generalized-Pareto (GP) distribution for fitting threshold exceedances. The fitted parameters can be used to construct simple and intuitive average recurrence interval (ARI) maps which reveal how rare a particular precipitation event is given its spatial location. The new methodology eliminates much of the random noise that was produced by the existing models due to a short data record, producing more reasonable ARI maps when compared with NOAA's long-term Climate Prediction Center (CPC) ground based observations. The resulting ARI maps can be useful for disaster preparation, warning, and management, as well as increased public awareness of the severity of precipitation events. Furthermore, the proposed methodology can be applied to various other extreme climate records.
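The peaks-over-threshold step described above can be sketched as follows; the synthetic daily series, the threshold quantile, and the units are stand-in assumptions for illustration, not TRMM data:

```python
# Hedged POT sketch: fit a Generalized Pareto distribution to daily-rainfall
# exceedances of a high threshold, then convert a depth to an average
# recurrence interval (ARI).
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
daily_rain = rng.exponential(scale=8.0, size=20 * 365)  # mm/day, synthetic

threshold = np.quantile(daily_rain, 0.95)               # a common POT choice
exceed = daily_rain[daily_rain > threshold] - threshold
shape, loc, scale = genpareto.fit(exceed, floc=0.0)     # fix location at 0

def ari_years(depth_mm):
    """ARI of a daily depth: 1 / (expected exceedances per year)."""
    rate_per_year = 365 * (exceed.size / daily_rain.size)
    p_exceed = genpareto.sf(depth_mm - threshold, shape, loc=0.0, scale=scale)
    return 1.0 / (rate_per_year * p_exceed)
```

Evaluating `ari_years` over a grid of depths at each cluster is what produces the ARI maps described in the abstract.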
Hierarchical extreme learning machine based reinforcement learning for goal localization
NASA Astrophysics Data System (ADS)
AlDahoul, Nouar; Zaw Htike, Zaw; Akmeliawati, Rini
2017-03-01
The objective of goal localization is to find the location of goals in noisy environments. Simple actions are performed to move the agent towards the goal. The goal detector should be capable of minimizing the error between the predicted locations and the true ones. Few regions need to be processed by the agent, to reduce the computational effort and increase the speed of convergence. In this paper, a reinforcement learning (RL) method was utilized to find an optimal series of actions to localize the goal region. The visual data, a set of images, are high-dimensional unstructured data and need to be represented efficiently to get a robust detector. Different deep reinforcement learning models have already been used to localize a goal, but most of them take a long time to learn the model. This long learning time results from the weight fine-tuning stage that is applied iteratively to find an accurate model. The Hierarchical Extreme Learning Machine (H-ELM) was used as a fast deep model that does not fine-tune the weights: hidden weights are generated randomly and output weights are calculated analytically. The H-ELM algorithm was used in this work to find good features for effective representation. This paper proposes a combination of the Hierarchical Extreme Learning Machine and reinforcement learning to find an optimal policy directly from visual input. This combination outperforms other methods in terms of accuracy and learning speed. The simulations and results were analysed using MATLAB.
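The core ELM idea, random fixed hidden weights with output weights solved analytically, can be shown in a minimal single-layer sketch (the paper's full hierarchical H-ELM and the RL loop are omitted; the toy regression task is an assumption):

```python
# Minimal extreme learning machine sketch: no iterative weight tuning.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (200, 2))
y = np.sin(X[:, 0]) + X[:, 1] ** 2           # toy regression target

n_hidden = 100
W = rng.normal(size=(2, n_hidden))           # random input weights (fixed)
b = rng.normal(size=n_hidden)                # random biases (fixed)

H = np.tanh(X @ W + b)                       # hidden-layer activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None) # output weights, solved in one step

y_hat = H @ beta
mse = np.mean((y - y_hat) ** 2)
```

The single least-squares solve is what makes training orders of magnitude faster than backpropagation-style fine-tuning.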
Censored rainfall modelling for estimation of fine-scale extremes
NASA Astrophysics Data System (ADS)
Cross, David; Onof, Christian; Winter, Hugo; Bernardara, Pietro
2018-01-01
Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett-Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett-Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
Javorska, Lenka; Krcmova, Lenka Kujovska; Solich, Petr; Kaska, Milan
2017-08-05
Management of the therapy of life-threatening bacterial infections depends heavily on optimal antibiotic treatment. Achieving the correct vancomycin dosage in blood and target tissues can be complicated in special situations, e.g., where large fluid sequestration and/or acute renal failure occur. A UHPLC-MS/MS method operating in electrospray (ESI) positive ion mode was applied for the determination of vancomycin in serum, urine and peritoneal/pleural effusion. Sample pretreatment consisted of dilution and simple protein precipitation, requiring only a small volume (50 μL) of serum, urine or peritoneal/pleural effusion. The separation of vancomycin was performed on a Meteoric Core C18 BIO column (100×4.6 mm, 2.7 μm) by gradient elution with 0.1% formic acid in water and acetonitrile. The total time of analysis was 4.5 min. The method was found to be linear in the range of 2-60 μM (or 0.5-10 μM) for serum, 0.27-10 μM (or 2-60 μM) for peritoneal/pleural effusion and 25-300 μM for urine, which was adequate for the determination of vancomycin in patient samples. The intra- and inter-day precision was below 8% RSD, and accuracy was from 89 to 104%. The UHPLC-MS/MS method offers a fast and reliable approach to determining vancomycin concentrations in three different human body fluid samples (serum, urine and peritoneal/pleural effusion) with a simple sample pretreatment that is the same for all selected specimens. This method should be applicable to large sample series in clinical (pharmacokinetic/pharmacodynamic) studies. Copyright © 2017 Elsevier B.V. All rights reserved.
ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1990-01-01
For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
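The contrast the abstract draws, oscillatory second-order schemes versus limited, non-oscillatory ones, can be illustrated on 1D linear advection. This toy uses a generic minmod limiter, not the ULTRA-SHARP universal limiter itself, and the grid size and Courant number are arbitrary choices:

```python
# Hedged sketch: MUSCL-type advection of a step profile with a minmod
# slope limiter. The limited scheme stays within the initial min/max
# (non-oscillatory) while remaining sharper than first-order upwind.
import numpy as np

def minmod(a, b):
    """Zero at extrema / opposite signs; smaller-magnitude slope otherwise."""
    return np.where(a * b > 0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

n, c = 100, 0.5                                  # cells, Courant number
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)    # step initial condition

for _ in range(40):
    du = np.diff(u, append=u[-1])                # forward differences
    dl = np.diff(u, prepend=u[0])                # backward differences
    slope = minmod(dl, du)                       # limited slope per cell
    face = u + 0.5 * (1.0 - c) * slope           # right-face value (upwind, flow to the right)
    flux = c * face                              # flux through each cell's right face
    u[1:] -= flux[1:] - flux[:-1]                # conservative update (inflow cell fixed)
```

An unlimited second-order scheme run on the same step produces over- and undershoots at the discontinuity; the limiter removes them without smearing the front as badly as first-order upwinding.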
Keskinbora, Kahraman; Grévent, Corinne; Eigenthaler, Ulrike; Weigand, Markus; Schütz, Gisela
2013-11-26
A significant challenge to the wide utilization of X-ray microscopy lies in the difficulty of fabricating adequate high-resolution optics. To date, electron beam lithography has been the dominant technique for the fabrication of the diffractive focusing optics called Fresnel zone plates (FZPs), even though this preparation method is usually very complicated and is composed of many fabrication steps. In this work, we demonstrate an alternative method that allows the direct, simple, and fast fabrication of FZPs using focused Ga+ ion beam lithography, practically in a single step. This method enabled us to prepare a high-resolution FZP in less than 13 min. The performance of the FZP was evaluated in a scanning transmission soft X-ray microscope, where nanostructures as small as sub-29 nm in width were clearly resolved, with an ultimate cutoff resolution of 24.25 nm, demonstrating the highest first-order resolution for any FZP fabricated by the ion beam lithography technique. This rapid and simple fabrication scheme illustrates the capabilities and the potential of direct ion beam lithography (IBL) and is expected to increase the accessibility of high-resolution optics to a wider community of researchers working on soft X-ray and extreme ultraviolet microscopy using synchrotron radiation and advanced laboratory sources.
Dynamical systems proxies of atmospheric predictability and mid-latitude extremes
NASA Astrophysics Data System (ADS)
Messori, Gabriele; Faranda, Davide; Caballero, Rodrigo; Yiou, Pascal
2017-04-01
Extreme weather ocurrences carry enormous social and economic costs and routinely garner widespread scientific and media coverage. Many extremes (for e.g. storms, heatwaves, cold spells, heavy precipitation) are tied to specific patterns of midlatitude atmospheric circulation. The ability to identify these patterns and use them to enhance the predictability of the extremes is therefore a topic of crucial societal and economic value. We propose a novel predictability pathway for extreme events, by building upon recent advances in dynamical systems theory. We use two simple dynamical systems metrics - local dimension and persistence - to identify sets of similar large-scale atmospheric flow patterns which present a coherent temporal evolution. When these patterns correspond to weather extremes, they therefore afford a particularly good forward predictability. We specifically test this technique on European winter temperatures, whose variability largely depends on the atmospheric circulation in the North Atlantic region. We find that our dynamical systems approach provides predictability of large-scale temperature extremes up to one week in advance.
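The local-dimension metric can be sketched with the extreme-value estimator commonly used in this literature: exceedances of the negative log-distance to a reference state are approximately exponential, with rate equal to the local dimension. The uniform 2D point cloud below is a stand-in for atmospheric flow fields, chosen so the expected answer (≈ 2) is known:

```python
# Hedged sketch of the EVT local-dimension estimate at one reference state.
import numpy as np

rng = np.random.default_rng(5)
pts = rng.uniform(size=(20000, 2))          # "trajectory" snapshots in 2D
zeta = np.array([0.5, 0.5])                 # reference state of interest

g = -np.log(np.linalg.norm(pts - zeta, axis=1))  # closeness observable
u = np.quantile(g, 0.98)                    # high threshold (a choice)
excess = g[g > u] - u                       # approximately Exp(local dim)
local_dim = 1.0 / excess.mean()             # rate of the exponential tail
```

For real circulation fields, each snapshot is flattened to a vector and the same recipe is applied, with persistence estimated from how long consecutive snapshots stay above the threshold.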
NASA Astrophysics Data System (ADS)
Bender, Carl
2017-01-01
The theory of complex variables is extremely useful because it helps to explain the mathematical behavior of functions of a real variable. Complex variable theory also provides insight into the nature of physical theories. For example, it provides a simple and beautiful picture of quantization and it explains the underlying reason for the divergence of perturbation theory. By using complex-variable methods one can generalize conventional Hermitian quantum theories into the complex domain. The result is a new class of parity-time-symmetric (PT-symmetric) theories whose remarkable physical properties have been studied and verified in many recent laboratory experiments.
The vacuum splint: an aid in emergency splinting of fractures
Letts, R. M.; Hobson, D. A.
1973-01-01
The vacuum splint has been shown to be a simple, safe and effective method of emergency splinting of fractured extremities. The splint is simply constructed from clear vinyl sheeting and contains 2-mm expanded polystyrene balls. Evacuation of air causes the splint to become rigid, thereby providing stability and immobilization of the limb. The splint is radiolucent, containing no obstructive metal components that would interfere with the radiographic appearance of the injured limb. The ease of application of this splint makes it especially effective for the emergency splinting of fractures in children. PMID:4742489
Simplified formula for mean cycle-slip time of phase-locked loops with steady-state phase error.
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1972-01-01
Previous work shows that the mean time from lock to a slipped cycle of a phase-locked loop is given by a certain double integral. Accurate numerical evaluation of this formula for the second-order loop is extremely vexing because the difference between exponentially large quantities is involved. The presented article demonstrates a method in which a much-reduced precision program can be used to obtain the mean first-cycle slip time for a loop of arbitrary degree tracking at a specified SNR and steady-state phase error. It also presents a simple approximate formula that is asymptotically tight at higher loop SNR.
A Mild, Ferrocene-Catalyzed C–H Imidation of (Hetero)Arenes
2015-01-01
A simple method for direct C–H imidation is reported using a new perester-based self-immolating reagent and a base-metal catalyst. The succinimide products obtained can be easily deprotected in situ (if desired) to reveal the corresponding anilines directly. The scope of the reaction is broad, the conditions are extremely mild, and the reaction is tolerant of oxidizable and acid-labile functionality, multiple heteroatoms, and aryl iodides. Mechanistic studies indicate that ferrocene (Cp2Fe) plays the role of an electron shuttle in the decomposition of the perester reagent, delivering a succinimidyl radical ready to add to an aromatic system. PMID:24654983
Analysis of magnitude and duration of floods and droughts in the context of climate change
NASA Astrophysics Data System (ADS)
Eshetu Debele, Sisay; Bogdanowicz, Ewa; Strupczewski, Witold
2016-04-01
Research and scientific information are key elements of any decision-making process. There is also a strong need for tools to describe and compare in a concise way the regime of hydrological extreme events in the context of presumed climate change. To meet these demands, two complementary methods for estimating high- and low-flow frequency characteristics are proposed. Both methods deal with the duration and magnitude of extreme events. The first one, "flow-duration-frequency" (known as QdF), has already been applied successfully to low-flow analysis, flood flows and rainfall intensity. The second one, called "duration-flow-frequency" (DqF), was proposed by Strupczewski et al. in 2010 for flood frequency analysis. The two methods differ in the treatment of flow and duration. In the QdF method the duration (d consecutive days) is a chosen fixed value, and the frequency analysis concerns the annual or seasonal series of the mean value of flows exceeded (in the case of floods) or non-exceeded (in the case of droughts) within the d-day period. In the second method, DqF, the flows are treated as fixed thresholds, and the durations of flows exceeding (floods) or non-exceeding (droughts) these thresholds are the subject of frequency analysis. The comparison of characteristics of floods and droughts in the reference period and under future climate conditions for catchments studied within the CHIHE project is presented, and a simple way to show the results to non-professionals and decision-makers is proposed. The work was undertaken within the project "Climate Change Impacts on Hydrological Extremes (CHIHE)", which is supported by the Norway-Poland Grants Program administered by the Norwegian Research Council. The observed time series were provided by the Institute of Meteorology and Water Management (IMGW), Poland. Strupczewski, W. G., Kochanek, K., Markiewicz, I., Bogdanowicz, E., Weglarczyk, S., & Singh, V. P. (2010). On the Tails of Distributions of Annual Peak Flow. Hydrology Research, 42, 171-192. http://dx.doi.org/10.2166/nh.2011.062
NASA Astrophysics Data System (ADS)
Giebink, Noel; Wiederrecht, Gary; Wasielewski, Michael
2011-03-01
Luminescent solar concentrators (LSCs) were developed over three decades ago as a simple route to obtaining a high concentration ratio for photovoltaic cells without tracking the sun. In principle, high concentration ratios (of order 100) are possible for commonly used chromophores. In practice, however, there is typically an overlap between the chromophore absorption and emission spectra that, although small, ultimately leads to unacceptable reabsorption losses, limiting the concentration ratio to ~10 and hence the utility of LSCs to date. We introduce a simple, all-optical means of avoiding reabsorption loss by "resonance shifting" from a bilayer cavity that consists of an absorber/emitter waveguide lying upon a low-refractive-index layer supported by a transparent substrate. Emission is evanescently coupled into the substrate at sharply defined angles and hence, by varying the cavity thickness over the device area, the original absorption resonance can be avoided at each bounce, allowing for extremely low propagation loss to the substrate edges and hence an increase in the optical concentration ratio. We validate this concept for absorber/emitter layers composed of both a typical luminescent polymer and inorganic semiconductor nanocrystals, demonstrating near-lossless propagation in each case.
Wang, Rui; Zhang, Fang; Wang, Liu; Qian, Wenjuan; Qian, Cheng; Wu, Jian; Ying, Yibin
2017-04-18
On-site monitoring of the plantation of genetically modified (GM) crops is of critical importance in the agricultural industry throughout the world. In this paper, a simple, visual and instrument-free method for instant on-site detection of GTS 40-3-2 soybean has been developed. It is based on body-heat recombinase polymerase amplification (RPA) followed by naked-eye detection via a fluorescent DNA dye. Combined with an extremely simplified sample preparation, the whole detection process can be accomplished within 10 min, and the fluorescent results can be photographed with an accompanying smartphone. Results demonstrated a 100% detection rate for screening of practical GTS 40-3-2 soybean samples by 20 volunteers under different ambient temperatures. This method is not only suitable for on-site detection of GM crops but also demonstrates great potential to be applied in other fields.
Simulation of rare events in quantum error correction
NASA Astrophysics Data System (ADS)
Bravyi, Sergey; Vargo, Alexander
2013-12-01
We consider the problem of calculating the logical error probability for a stabilizer quantum code subject to random Pauli errors. To access the regime of large code distances, where logical errors are extremely unlikely, we adopt the splitting method widely used in Monte Carlo simulations of rare events, together with Bennett's acceptance ratio method for estimating the free energy difference between two canonical ensembles. To illustrate the power of these methods in the context of error correction, we calculate the logical error probability PL for the two-dimensional surface code on a square lattice with a pair of holes for all code distances d ≤ 20 and all error rates p below the fault-tolerance threshold. Our numerical results confirm the expected exponential decay PL ~ exp[-α(p)d] and provide a simple fitting formula for the decay rate α(p). Both noiseless and noisy syndrome readout circuits are considered.
Process for measuring degradation of sulfur hexafluoride in high voltage systems
Sauers, Isidor
1986-01-01
This invention is a method of detecting the presence of toxic and corrosive by-products in high voltage systems produced by electrically induced degradation of SF6 insulating gas in the presence of certain impurities. It is an improvement over previous methods because it is extremely sensitive, detecting by-products present in parts-per-billion concentrations, and because the device employed is of a simple design and takes advantage of the by-products' natural affinity for fluoride ions. The method employs an ion-molecule reaction cell in which negative ions of the by-products are produced by fluorine attachment. These ions are admitted to a negative-ion mass spectrometer and identified by their spectra. This spectrometry technique is an improvement over conventional techniques because the negative ion peaks are strong and not obscured by the major ion spectrum of the SF6 component, as is the case in positive-ion mass spectrometry.
The Coast Artillery Journal. Volume 57, Number 6, December 1922
1922-12-01
theorems; Chapter III, to application; Chapters IV, V and VI, to infinitesimals and differentials, trigonometric functions, and logarithms and...taneously." There are chapters on complex numbers with simple and direct discussion of the roots of unity; on elementary theorems on the roots of an...through the centuries from the time of Pythagoras, an interest shared on the one extreme by nearly every noted mathematician and on the other extreme by
Cicone, A; Liu, J; Zhou, H
2016-04-13
Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes; however, the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification method in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we also propose a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
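The matched filter mentioned above scores each pixel by projecting its background-subtracted spectrum onto the whitened target signature. A sketch on synthetic data (the band count, covariance, plume signature and plume strength are all assumptions for illustration):

```python
# Hedged matched-filter sketch: score = (x - mu)^T Sigma^-1 s / sqrt(s^T Sigma^-1 s).
import numpy as np

rng = np.random.default_rng(3)
bands, n_pix = 10, 5000
mu = rng.normal(size=bands)
A = rng.normal(size=(bands, bands))
Sigma = A @ A.T / bands + 0.1 * np.eye(bands)    # background covariance

bg = rng.multivariate_normal(mu, Sigma, size=n_pix)
s = rng.normal(size=bands)                       # known target (plume) signature
plume = bg[:50] + 1.5 * s                        # 50 pixels contain the target

X = np.vstack([bg[50:], plume])                  # scene: plume pixels last
Si = np.linalg.inv(np.cov(X.T))                  # estimated inverse covariance
mhat = X.mean(axis=0)                            # estimated background mean

scores = (X - mhat) @ Si @ s / np.sqrt(s @ Si @ s)
```

Thresholding `scores` gives the noisy plume mask that a post-processing step such as MIF would then clean up.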
NASA Astrophysics Data System (ADS)
Vainshtein, Sergey N.; Duan, Guoyong; Mikhnev, Valeri A.; Zemlyakov, Valery E.; Egorkin, Vladimir I.; Kalyuzhnyy, Nikolay A.; Maleev, Nikolai A.; Näpänkangas, Juha; Sequeiros, Roberto Blanco; Kostamovaara, Juha T.
2018-05-01
Progress in terahertz spectroscopy and imaging is mostly associated with femtosecond laser-driven systems, while solid-state sources, mainly sub-millimetre integrated circuits, are still in an early development phase. As simple and cost-efficient an emitter as a Gunn oscillator could cause a breakthrough in the field, provided its frequency limitations could be overcome. Proposed here is an application of the recently discovered collapsing field domains effect that permits sub-THz oscillations in sub-micron semiconductor layers thanks to nanometer-scale powerfully ionizing domains arising due to negative differential mobility in extreme fields. This shifts the frequency limit by an order of magnitude relative to the conventional Gunn effect. Our first miniature picosecond pulsed sources cover the 100-200 GHz band and promise milliwatts up to ˜500 GHz. Thanks to the method of interferometrically enhanced time-domain imaging proposed here and the low single-shot jitter of ˜1 ps, our simple imaging system provides sufficient time-domain imaging contrast for fresh-tissue terahertz histology.
Modified surface of titanium dioxide nanoparticles-based biosensor for DNA detection
NASA Astrophysics Data System (ADS)
Nadzirah, Sh.; Hashim, U.; Rusop, M.
2018-05-01
A new technique was used to develop a simple and selective picoammeter DNA biosensor for identification of E. coli O157:H7. The biosensor was fabricated from titanium dioxide nanoparticles that were synthesized by the sol-gel method and spin-coated onto a silicon dioxide substrate. 3-Aminopropyl triethoxysilane (APTES) was used to modify the surface of the TiO2. A simple surface modification approach was applied: a single drop of APTES onto the TiO2 nanoparticle surface. Carboxyl-modified probe DNA was bound onto the surface of the APTES/TiO2 without any amplifier element. The electrical signal was used as the indicator to differentiate each step (surface modification of TiO2 and probe DNA immobilization). The I-V measurements indicate extremely low current flow (picoampere range) through the device: 2.8138E-10 A for pure TiO2 nanoparticles, 2.8124E-10 A after APTES modification and 3.5949E-10 A after probe DNA immobilization.
Meeks, Kelsey; Pantoya, Michelle L.; Green, Micah; ...
2017-06-01
For dispersions containing a single type of particle, it has been observed that the onset of percolation coincides with a critical value of volume fraction. When the volume fraction is calculated based on excluded volume, this critical percolation threshold is nearly invariant to particle shape. The critical threshold has been calculated to high precision for simple geometries using Monte Carlo simulations, but this method is slow at best, and infeasible for complex geometries. This article explores an analytical approach to the prediction of percolation threshold in polydisperse mixtures. Specifically, this paper suggests an extension of the concept of excluded volume, and applies that extension to the 2D binary disk system. The simple analytical expression obtained is compared to Monte Carlo results from the literature. In conclusion, the result may be computed extremely rapidly and matches key parameters closely enough to be useful for composite material design.
Coulomb explosion of uniformly charged spheroids
NASA Astrophysics Data System (ADS)
Grech, M.; Nuter, R.; Mikaberidze, A.; di Cintio, P.; Gremillet, L.; Lefebvre, E.; Saalmann, U.; Rost, J. M.; Skupin, S.
2011-11-01
A simple, semianalytical model is proposed for nonrelativistic Coulomb explosion of a uniformly charged spheroid. This model allows us to derive the time-dependent particle energy distributions. Simple expressions are also given for the characteristic explosion time and maximum particle energies in the limits of extreme prolate and oblate spheroids as well as for the sphere. Results of particle simulations are found to be in remarkably good agreement with the model.
Extremely frequency-widened terahertz wave generation using Cherenkov-type radiation.
Suizu, Koji; Koketsu, Kaoru; Shibuya, Takayuki; Tsutsui, Toshihiro; Akiba, Takuya; Kawase, Kodo
2009-04-13
Terahertz (THz) wave generation based on nonlinear frequency conversion is a promising way of realizing a tunable, monochromatic, bright THz-wave source. The development of an efficient and widely tunable THz-wave source depends on the discovery of a novel, brilliant nonlinear crystal. The important factors for a nonlinear crystal for THz-wave generation are (1) high nonlinearity and (2) good transparency in the THz frequency region. Unfortunately, many nonlinear crystals absorb strongly in the THz frequency region, which limits efficient and widely tunable THz-wave generation. Here, we show that Cherenkov radiation with a waveguide structure is an effective strategy for achieving an efficient and extremely widely tunable THz-wave source. We fabricated a MgO-doped lithium niobate slab waveguide 3.8 μm thick and demonstrated difference-frequency generation of THz waves with Cherenkov phase matching. Extremely frequency-widened THz-wave generation, from 0.1 to 7.2 THz, without structural dips, was successfully obtained. The tuning frequency range of the waveguided Cherenkov radiation source was extremely wide compared to that of an injection-seeded terahertz parametric generator. The tuning range obtained in this work is, to our knowledge, the widest reported for THz-wave generation using a lithium niobate crystal. The highest THz-wave energy obtained was about 3.2 pJ, and the energy conversion efficiency was about 10^-5 %. The method can easily be applied to many conventional nonlinear crystals, enabling simple, reasonable, compact, highly efficient and ultra-broadband THz-wave sources.
NASA Astrophysics Data System (ADS)
Aronica, G. T.; Candela, A.
2007-12-01
In this paper a Monte Carlo procedure for deriving frequency distributions of peak flows using a semi-distributed stochastic rainfall-runoff model is presented. The rainfall-runoff model used here is a very simple one, with a limited number of parameters, and practically does not require any calibration, resulting in a robust tool for those catchments which are partially or poorly gauged. The procedure is based on three modules: a stochastic rainfall generator module, a hydrologic loss module and a flood routing module. In the rainfall generator module the rainfall storm, i.e. the maximum rainfall depth for a fixed duration, is assumed to follow the two-component extreme value (TCEV) distribution, whose parameters have been estimated at regional scale for Sicily. The catchment response has been modelled by using the Soil Conservation Service-Curve Number (SCS-CN) method, in a semi-distributed form, for the transformation of total rainfall to effective rainfall, and a simple form of IUH for the flood routing. Here, the SCS-CN method is implemented in probabilistic form with respect to prior-to-storm conditions, allowing one to relax the classical iso-frequency assumption between rainfall and peak flow. The procedure is tested on six practical case studies where synthetic FFCs (flood frequency curves) were obtained from the model variable distributions by simulating 5000 flood events, combining 5000 values of total rainfall depth for the storm duration with antecedent moisture conditions (AMC). The application of this procedure showed how the Monte Carlo simulation technique can reproduce the observed flood frequency curves with reasonable accuracy over a wide range of return periods, using a simple and parsimonious approach, limited data input and no calibration of the rainfall-runoff model.
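The Monte Carlo chain can be sketched as follows; a Gumbel draw stands in for the regional TCEV rainfall generator, the CN range mimicking variable AMC conditions is an assumption, and the flood-routing (IUH) step is omitted:

```python
# Hedged sketch: sample storm depths, convert to runoff with the standard
# SCS-CN relation, and read a derived frequency curve off the simulated events.
import numpy as np

rng = np.random.default_rng(4)
n_events = 5000

# Storm rainfall depth P (mm): Gumbel stand-in for the TCEV generator.
P = np.clip(rng.gumbel(loc=40.0, scale=15.0, size=n_events), 0.0, None)

# SCS-CN runoff (mm); CN drawn per event to mimic variable AMC conditions.
CN = rng.uniform(60.0, 90.0, size=n_events)
S = 25400.0 / CN - 254.0                 # potential maximum retention (mm)
Ia = 0.2 * S                             # initial abstraction
Q = np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

# Empirical event-based frequency curve: runoff quantile vs return period T.
T = np.array([2, 10, 50, 100])
q_T = np.quantile(Q, 1.0 - 1.0 / T)
```

Pairing each rainfall draw with an independently sampled moisture state is what relaxes the iso-frequency assumption: a given return-period flood no longer comes from the same-return-period storm.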
Resonance Raman Spectroscopy of Extreme Nanowires and Other 1D Systems
Smith, David C.; Spencer, Joseph H.; Sloan, Jeremy; McDonnell, Liam P.; Trewhitt, Harrison; Kashtiban, Reza J.; Faulques, Eric
2016-01-01
This paper briefly describes how nanowires with diameters corresponding to 1 to 5 atoms can be produced by melting a range of inorganic solids in the presence of carbon nanotubes. These nanowires are extreme in the sense that they represent the limit of miniaturization of nanowires, and their behavior is not always a simple extrapolation of the behavior of larger nanowires as the diameter decreases. The paper then describes the methods required to obtain Raman spectra from extreme nanowires and explains that, because of the van Hove singularities that 1D systems exhibit in their optical density of states, the correct choice of photon excitation energy is critical. It describes the techniques required to determine the photon energy dependence of the resonances observed in Raman spectroscopy of 1D systems, in particular how to measure Raman cross-sections with better than 8% noise and how to measure the variation of the resonance as a function of sample temperature. The paper describes the importance of ensuring that the Raman scattering is linearly proportional to the laser excitation intensity. It also describes how to use the polarization dependence of the Raman scattering to separate the Raman scattering of the encapsulated 1D systems from that of other extraneous components in a sample. PMID:27168195
Venkatesh, S; Li, J; Xu, Y; Vishnuvajjala, R; Anderson, B D
1996-10-01
The selection of cosalane (NSC 658586) by the National Cancer Institute for further development as a potential drug candidate for the treatment of AIDS led to the exploration of the solubility behavior of this extremely hydrophobic drug, which has an intrinsic solubility (S0) approaching 1 ng/ml. This study describes attempts to reliably measure the intrinsic solubility of cosalane and examine its pH-solubility behavior. S0 was estimated by five different strategies: (a) direct determination in an aqueous suspension; (b) facilitated dissolution; (c) estimation from the octanol/water partition coefficient and octanol solubility; (d) application of an empirical equation based on melting point and partition coefficient; and (e) estimation from the hydrocarbon solubility and functional group contributions for transfer from hydrocarbon to water. S0 estimates using these five methods varied over a 5 x 10^7-fold range. Method (a) yielded the highest values, two orders of magnitude greater than those obtained by method (b) (facilitated dissolution, 1.4 +/- 0.5 ng/ml). Method (c) gave a value 20-fold higher, while that from method (d) was in fair agreement with that from facilitated dissolution. Method (e) yielded a value several orders of magnitude lower than the other methods. A molecular dynamics simulation suggests that folded conformations not accounted for by group contributions may reduce cosalane's effective hydrophobicity. Ionic equilibria calculations for this weak diprotic acid suggested a 100-fold increase in solubility per unit increase in pH. The pH-solubility profile of cosalane at 25 degrees C agreed closely with theory. These studies highlight the difficulty of determining the solubility of very poorly soluble compounds and the possible advantage of the facilitated dissolution method. The diprotic nature of cosalane enabled a solubility enhancement of > 10^7-fold by simple pH adjustment.
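The quoted 100-fold gain per pH unit follows directly from the ionic equilibria of a diprotic acid, S(pH) = S0 (1 + Ka1/[H+] + Ka1·Ka2/[H+]^2): once the dianion dominates, log S rises with slope 2. A small Python check of this relation, under assumed pKa values chosen purely for illustration (the abstract does not report cosalane's actual constants):

```python
def total_solubility(s0, pka1, pka2, ph):
    """Total solubility of a diprotic acid HA2:
    S = S0 * (1 + Ka1/[H+] + Ka1*Ka2/[H+]**2)."""
    h = 10.0 ** (-ph)
    ka1, ka2 = 10.0 ** (-pka1), 10.0 ** (-pka2)
    return s0 * (1.0 + ka1 / h + ka1 * ka2 / h ** 2)

# Hypothetical constants: intrinsic solubility ~1.4 ng/ml and assumed pKa's of 4 and 5.
s0, pka1, pka2 = 1.4e-9, 4.0, 5.0
# In the dianion-dominated region, each pH unit raises S about 100-fold.
ratio = total_solubility(s0, pka1, pka2, 8.0) / total_solubility(s0, pka1, pka2, 7.0)
```

At pH well below pKa1 the expression reduces to S0, reproducing the flat intrinsic-solubility plateau at acidic pH.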
Correlation dimension and phase space contraction via extreme value theory
NASA Astrophysics Data System (ADS)
Faranda, Davide; Vaienti, Sandro
2018-04-01
We show how to obtain theoretical and numerical estimates of the correlation dimension and phase space contraction by using extreme value theory. The maxima of suitable observables sampled along the trajectory of a chaotic dynamical system converge asymptotically to classical extreme value laws in which (i) the inverse of the scale parameter gives the correlation dimension and (ii) the extremal index is associated with the rate of phase space contraction for backward iteration, which in dimensions 1 and 2 is closely related to the positive Lyapunov exponent and in higher dimensions to the metric entropy. We call it the Dynamical Extremal Index. Numerical estimates are straightforward to obtain, as they involve just a simple fit to a univariate distribution. Numerical tests range from low-dimensional maps to generalized Hénon maps and climate data. The estimates of the indicators are particularly robust even with relatively short time series.
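The "simple fit to a univariate distribution" can be illustrated on a toy set of known dimension: block maxima of the observable -log(distance to a reference point) converge to a Gumbel law, and the inverse of its scale parameter recovers the local dimension. A Python sketch under simplifying assumptions (i.i.d. uniform points on the unit square, so dimension 2, and a moment estimator for the Gumbel scale rather than a full maximum-likelihood fit):

```python
import numpy as np

rng = np.random.default_rng(1)
zeta = np.array([0.5, 0.5])                      # reference point
pts = rng.uniform(0.0, 1.0, size=(200_000, 2))   # toy "trajectory" on [0,1]^2
g = -np.log(np.linalg.norm(pts - zeta, axis=1))  # EVT observable

# Block maxima of g converge to a Gumbel law; estimate its scale by moments:
# std(Gumbel) = pi * scale / sqrt(6).
block = 1000
maxima = g[: (len(g) // block) * block].reshape(-1, block).max(axis=1)
scale = np.sqrt(6.0) * maxima.std(ddof=1) / np.pi
dim_estimate = 1.0 / scale                       # inverse scale ~ dimension (here 2)
```

For points drawn from a chaotic attractor instead of a uniform square, the same fit yields the correlation dimension at zeta.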
Music through the skin—simple demonstration of human electrical conductivity
NASA Astrophysics Data System (ADS)
Vollmer, M.; Möllmann, K. P.
2016-05-01
The conduction of electricity is an important topic for any basic physics course. Safety concerns often result either in teacher demonstration experiments in front of the class or in extremely simple though, for students, not really fascinating (not to say boring) hands-on activities for everybody using 1.5 V batteries, cables, light bulbs etc. Here we briefly review some basic facts about the conduction of electricity through the human body and report a simple, safe, and awe-inspiring electrical conduction experiment which can be performed with little preparation by a teacher involving a whole class of, say, 20 students.
NASA Astrophysics Data System (ADS)
Lea, J.
2017-12-01
The quantification of glacier change is a key variable within glacier monitoring, with the method used potentially being crucial to ensuring that data can be appropriately compared with environmental data. The topic and timescales of study (e.g. land/marine terminating environments; sub-annual/decadal/centennial/millennial timescales) often mean that different methods are more suitable for different problems. However, depending on the GIS/coding expertise of the user, some methods can potentially be time consuming to undertake, making large-scale studies problematic. In addition, examples exist where different users have nominally applied the same methods in different studies, though with minor methodological inconsistencies in their approach. In turn, this will have implications for data homogeneity where regional/global datasets may be constructed. Here, I present a simple toolbox scripted in a Matlab® environment that requires only glacier margin and glacier centreline data to quantify glacier length, glacier change between observations, rate of change, in addition to other metrics. The toolbox includes the option to apply the established centreline or curvilinear box methods, or a new method: the variable box method - designed for tidewater margins where box width is defined as the total width of the individual terminus observation. The toolbox is extremely flexible, and has the option to be applied as either Matlab® functions within user scripts, or via a graphical user interface (GUI) for those unfamiliar with a coding environment. In both instances, there is potential to apply the methods quickly to large datasets (100s-1000s of glaciers, with potentially similar numbers of observations each), thus ensuring large scale methodological consistency (and therefore data homogeneity) and allowing regional/global scale analyses to be achievable for those with limited GIS/coding experience. 
The toolbox has been evaluated against idealised scenarios demonstrating its accuracy, while feedback from undergraduate students who have trialled the toolbox is that it is intuitive and simple to use. When released, the toolbox will be free and open source allowing users to potentially modify, improve and expand upon the current version.
Extremism without extremists: Deffuant model with emotions
NASA Astrophysics Data System (ADS)
Sobkowicz, Pawel
2015-03-01
The frequent occurrence of extremist views in many social contexts, often growing from small minorities to almost total majority, poses a significant challenge for democratic societies. The phenomenon can be described within the sociophysical paradigm. We present a modified version of the continuous bounded confidence opinion model, including a simple description of the influence of emotions on tolerances, and eventually on the evolution of opinions. Allowing for a psychologically based correlation between extreme opinions, high emotions and low tolerance for other people's views leads to quick dominance of the extreme views within the studied model, without introducing a special class of agents, as has been done in previous works. This dominance occurs even if the initial number of people with extreme opinions is very small. Possible suggestions related to mitigation of the process are briefly discussed.
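A minimal sketch of a bounded-confidence (Deffuant-type) update in which tolerance shrinks as opinions become more extreme, in the spirit described above. The specific tolerance function and parameters below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu, steps = 200, 0.3, 100_000
x = rng.uniform(-1.0, 1.0, n)            # opinions in [-1, 1]

def tolerance(op):
    # Assumed emotion coupling: extreme opinions -> high emotion -> low tolerance.
    return 0.8 * (1.0 - abs(op)) + 0.05

for _ in range(steps):
    i, j = rng.integers(0, n, 2)
    if i == j:
        continue
    d = x[j] - x[i]
    # Interact only if both agents tolerate the disagreement; then converge.
    if abs(d) < min(tolerance(x[i]), tolerance(x[j])):
        x[i] += mu * d
        x[j] -= mu * d
```

Because extremists barely move (their tolerance is near the floor of 0.05) while moderates keep converging toward whoever they meet, opinion clusters drift toward the extremes without any special agent class.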
Using Extreme Tropical Precipitation Statistics to Constrain Future Climate States
NASA Astrophysics Data System (ADS)
Igel, M.; Biello, J. A.
2017-12-01
Tropical precipitation is characterized by a rapid growth in mean intensity as the column humidity increases. This behavior is examined in both a cloud resolving model and with high-resolution observations of precipitation and column humidity from CloudSat and AIRS, respectively. The model and the observations exhibit remarkable consistency and suggest a new paradigm for extreme precipitation. We show that the total precipitation can be decomposed into a product of contributions from a mean intensity, a probability of precipitation, and a global PDF of column humidity values. We use the modeling and observational results to suggest simple, analytic forms for each of these functions. The analytic representations are then used to construct a simple expression for the global accumulated precipitation as a function of the parameters of each of the component functions. As the climate warms, extreme precipitation intensity and global precipitation are expected to increase, though at different rates. When these predictions are incorporated into the new analytic expression for total precipitation, predictions can be made for changes due to global warming in the probability of precipitation and the PDF of column humidity. We show that strong constraints can be imposed on the future shape of the PDF of column humidity but that only weak constraints can be set on the probability of precipitation. These are largely imposed by the intensification of extreme precipitation. This result suggests that understanding precisely how extreme precipitation responds to climate warming is critical to predicting other impactful properties of global hydrology. The new framework can also be used to confirm or discount existing theories for shifting precipitation.
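The decomposition described above, total precipitation as the integral over column humidity of mean intensity × precipitation probability × the humidity PDF, can be written out numerically. The analytic forms below are illustrative placeholders with assumed parameters, not the fitted functions from the study:

```python
import numpy as np

w = np.linspace(0.0, 90.0, 901)          # column humidity grid (mm)
dw = w[1] - w[0]

# Assumed analytic shapes (illustrative only).
intensity = 0.5 * np.maximum(w - 50.0, 0.0) ** 1.5    # mean rain intensity: sharp ramp
prob_rain = 1.0 / (1.0 + np.exp(-(w - 60.0) / 5.0))   # probability of precipitating
pdf = np.exp(-0.5 * ((w - 55.0) / 12.0) ** 2)
pdf /= pdf.sum() * dw                                  # normalized humidity PDF

# Accumulated precipitation = integral of the product of the three components.
total = np.sum(intensity * prob_rain * pdf) * dw
```

Shifting or reshaping any one component (e.g. steepening the intensity ramp under warming) while holding `total` to a prescribed change is exactly the kind of constraint exercise the abstract describes.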
Forces and Holes in Liquid Surfaces and Soap Films: A Simple Measurement of a Not-So-Simple Effect
ERIC Educational Resources Information Center
Gratton, Luigi M.; Oss, Stefano
2004-01-01
In this article we show how to verify that in a fluid surface or film the value of the surface tension (i.e. the free energy per unit area) does not depend on the area of the film itself. The experimental evidence discussed can be obtained extremely simply yet with great accuracy. This experiment is important in that it leads to a deeper…
Robust 1-Bit Compressive Sensing via Binary Stable Embeddings of Sparse Vectors
2011-04-15
The quantizer is reduced to a simple comparator that tests for values above or below zero, enabling extremely simple, efficient, and fast quantization.
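The 1-bit measurement scheme just described reduces the quantizer to a sign comparator. A minimal Python sketch of taking such measurements of a sparse vector; the recovery step itself (e.g. binary iterative hard thresholding) is beyond this snippet, and the matrix size and support are arbitrary choices:

```python
import numpy as np

def one_bit_quantize(x):
    """1-bit quantizer: keep only the sign of each measurement (comparator output)."""
    return np.where(x >= 0.0, 1.0, -1.0)

rng = np.random.default_rng(3)
phi = rng.standard_normal((100, 50))      # random Gaussian measurement matrix
s = np.zeros(50)
s[[3, 17, 41]] = [1.0, -2.0, 0.5]         # sparse signal (3 nonzeros out of 50)
y = one_bit_quantize(phi @ s)             # measurements: just comparator bits
```

All amplitude information is discarded, so only the direction of `s` (its position on the unit sphere) can be recovered, which is precisely the setting of binary stable embeddings.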
Recovering Galaxy Properties Using Gaussian Process SED Fitting
NASA Astrophysics Data System (ADS)
Iyer, Kartheik; Awan, Humna
2018-01-01
Information about physical quantities such as the stellar mass, star formation rates, and ages of distant galaxies is contained in their spectral energy distributions (SEDs), obtained through photometric surveys such as SDSS, CANDELS, and LSST. However, noise in the photometric observations is often a problem, and using naive machine learning methods to estimate physical quantities can result in overfitting the noise, or converging on solutions that lie outside the physical regime of parameter space. We use Gaussian Process regression trained on a sample of SEDs corresponding to galaxies from a semi-analytic model (Somerville+15a) to estimate their stellar masses, and compare its performance to a variety of different methods, including simple linear regression, Random Forests, and k-Nearest Neighbours. We find that the Gaussian Process method is robust to noise and predicts not only stellar masses but also their uncertainties. The method is also robust in cases where the distribution of the training data is not identical to that of the target data, which can be extremely useful when generalized to more subtle galaxy properties.
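A compact stand-in for the regression comparison: Gaussian process regression with an RBF kernel, written directly in NumPy, returns both a prediction and its uncertainty, the property the abstract highlights. The 1-D toy data below merely stand in for SED photometry, and the kernel hyperparameters are assumed rather than optimized:

```python
import numpy as np

def gp_predict(x_train, y_train, x_test, length=1.0, noise=0.1):
    """GP regression with an RBF kernel; returns predictive mean and std."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)
    K = k(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    # Predictive variance: prior variance (1) minus the explained part.
    var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
    return mean, np.sqrt(np.maximum(var, 0.0))

# Toy "survey": noisy observations of a smooth underlying relation.
rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-3.0, 3.0, 40))
y = np.sin(x) + 0.1 * rng.standard_normal(40)
mean, std = gp_predict(x, y, np.array([0.0, 1.5]))
```

Unlike a Random Forest or k-NN point estimate, the returned `std` grows automatically in regions where training data are sparse, which is what makes the method robust to train/target distribution mismatch.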
NASA Astrophysics Data System (ADS)
da Silva Oliveira, C. I.; Martinez-Martinez, D.; Al-Rjoub, A.; Rebouta, L.; Menezes, R.; Cunha, L.
2018-04-01
In this paper, we present a statistical method for evaluating the degree of transparency of a thin film. To do so, the color coordinates are measured on different substrates, and their standard deviation is evaluated. For low values, the color depends on the film and not on the substrate, and intrinsic colors are obtained. In contrast, transparent films lead to high values of the standard deviation, since the color coordinates depend on the substrate. Between these two extremes, colored films with a certain degree of transparency can be found. This method allows an objective and simple evaluation of the transparency of any film, improving on subjective visual inspection and avoiding the thickness-related problems of optical spectroscopy evaluation. Zirconium oxynitride films deposited on three different substrates (Si, steel and glass) are used to test the validity of this method, whose results have been validated with optical spectroscopy and agree with the visual impression of the samples.
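The statistic is simply the spread of measured color coordinates across substrates. A Python sketch with hypothetical CIELAB readings (the actual coordinates and thresholds from the paper are not given here):

```python
import numpy as np

def transparency_score(lab_by_substrate):
    """Norm of the per-channel standard deviation of color coordinates
    measured on different substrates; high spread suggests transparency."""
    lab = np.asarray(lab_by_substrate, dtype=float)   # shape (n_substrates, 3)
    return float(np.linalg.norm(lab.std(axis=0)))

# Hypothetical L*a*b* readings of the same film on Si, steel and glass.
opaque_film = [[52.1, 3.0, 18.2], [51.8, 3.2, 18.5], [52.3, 2.9, 18.0]]
transparent_film = [[45.0, 1.0, 5.0], [62.0, -2.0, 1.0], [88.0, 0.5, -3.0]]
score_opaque = transparency_score(opaque_film)
score_transparent = transparency_score(transparent_film)
```

The opaque film's color barely changes with substrate (small score: intrinsic color), while the transparent film inherits each substrate's appearance (large score).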
A single pH fluorescent probe for biosensing and imaging of extreme acidity and extreme alkalinity.
Chao, Jian-Bin; Wang, Hui-Juan; Zhang, Yong-Bin; Li, Zhi-Qing; Liu, Yu-Hong; Huo, Fang-Jun; Yin, Cai-Xia; Shi, Ya-Wei; Wang, Juan-Juan
2017-07-04
A simple tailor-made pH fluorescent probe, 2-benzothiazole (N-ethylcarbazole-3-yl) hydrazone (Probe), is facilely synthesized by the condensation reaction of 2-hydrazinobenzothiazole with N-ethylcarbazole-3-formaldehyde, and is useful for quantitatively monitoring extremely acidic and alkaline pH. The pH titrations indicate that Probe displays a remarkable emission enhancement with a pKa of 2.73 and responds linearly to minor pH fluctuations within the extremely acidic range of 2.21-3.30. Interestingly, Probe also exhibits strong pH-dependent characteristics with a pKa of 11.28 and a linear response over the extremely alkaline range of 10.41-12.43. In addition, Probe shows a large Stokes shift of 84 nm under extremely acidic and alkaline conditions, high selectivity, excellent sensitivity, good water solubility and fine stability, all of which are favorable for intracellular pH imaging. The probe is further successfully applied to image extremely acidic and alkaline pH fluctuations in E. coli cells.
NASA Astrophysics Data System (ADS)
Ansari, S. M.; Suryawanshi, S. R.; More, M. A.; Sen, Debasis; Kolekar, Y. D.; Ramana, C. V.
2018-06-01
We report on the field-emission properties of structure/morphology-controlled nano-CoFe2O4 (CFO) synthesized via a simple, low-temperature chemical method. Structural analyses indicate that the spongy CFO (approximately 2.96 nm) is nano-structured, spherical, uniformly distributed, cubic-structured and porous. Field emission studies reveal that CFO exhibits a low turn-on field (4.27 V/μm) and a high emission current density (775 μA/cm2) at a low applied electric field of 6.80 V/μm. In addition, extremely good emission current stability is obtained at a pre-set value of 1 μA, together with a high emission spot density over a large area (2 × 2 cm2), suggesting the suitability of these materials for practical applications in vacuum micro-/nano-electronics.
Ishida, Haruki; Kagawa, Keiichiro; Komuro, Takashi; Zhang, Bo; Seo, Min-Woong; Takasawa, Taishi; Yasutomi, Keita; Kawahito, Shoji
2018-01-01
A probabilistic method to remove random telegraph signal (RTS) noise and to increase the signal level is proposed, and was verified by simulation based on measured real sensor noise. Although semi-photon-counting-level (SPCL) ultra-low-noise complementary metal-oxide-semiconductor (CMOS) image sensors (CISs) with high-conversion-gain pixels have emerged, they still suffer from large RTS noise, which is inherent to CISs. The proposed method utilizes a multi-aperture (MA) camera composed of multiple sets of an SPCL CIS and a moderately fast, compact imaging lens to emulate a very fast single lens. Thanks to the redundancy of the MA camera, the RTS noise is removed by maximum likelihood estimation, with the noise characteristics modeled by a probability density distribution. In the proposed method, the photon shot noise is also relatively reduced because of the averaging effect, where the pixel values of all the apertures are considered. An extremely low-light condition, in which the maximum number of electrons per aperture was only 2 e−, was simulated. PSNRs of a test image for simple averaging, selective averaging (our previous method), and the proposed method were 11.92 dB, 11.61 dB, and 13.14 dB, respectively. The selective averaging, which can remove RTS noise, was worse than simple averaging because it ignores the pixels with RTS noise, so the photon shot noise was less improved. The simulation results showed that the proposed method provided the best noise reduction performance. PMID:29587424
NASA Technical Reports Server (NTRS)
Lee, Michael
1995-01-01
Since the original post-launch calibration of the FHSTs (Fixed Head Star Trackers) on EUVE (Extreme Ultraviolet Explorer) and UARS (Upper Atmosphere Research Satellite), the Flight Dynamics task has continued to analyze the FHST performance. The algorithm used for inflight alignment of spacecraft sensors is described and the equations for the errors in the relative alignment for the simple two-star-tracker case are shown. Simulated data and real data are used to compute the covariance of the relative alignment errors. Several methods for correcting the alignment are compared and the results analyzed. The specific problems seen on orbit with UARS and EUVE are then discussed. UARS has experienced anomalous tracker performance on an FHST, resulting in continuous variation in apparent tracker alignment. On EUVE, the FHST residuals from the attitude determination algorithm showed a dependence on the direction of roll during survey mode. This dependence is traced back to time tagging errors, and the original post-launch alignment is found to be in error due to the impact of the time tagging errors on the alignment algorithm. The methods used by the FDF (Flight Dynamics Facility) to correct for these problems are described.
Research on the remote sensing methods of drought monitoring in Chongqing
NASA Astrophysics Data System (ADS)
Yang, Shiqi; Tang, Yunhui; Gao, Yanghua; Xu, Yongjin
2011-12-01
There are regional and periodic droughts in Chongqing, which seriously impact agricultural production and people's lives. This study attempted to monitor drought in Chongqing, with its complex terrain, using MODIS data. First, we analyzed and compared three remote sensing methods for drought monitoring (time series of vegetation index, temperature vegetation dryness index (TVDI), and vegetation supply water index (VSWI)) for the severe drought of 2006. Then we developed a remote-sensing-based drought monitoring model for Chongqing by combining soil moisture data and meteorological data. The results showed that the three remote-sensing-based drought monitoring models performed reasonably well in detecting the occurrence of drought in Chongqing. However, the time series of vegetation index is more sensitive in the temporal pattern but less so in the spatial pattern. Although TVDI and VSWI can retrieve, in the spatial domain, the whole progression of the severe 2006 summer drought, from onset through intensification, relief, renewed intensification and complete remission, TVDI requires that both extreme drought and extreme moisture conditions exist in the study area, which is difficult to satisfy in Chongqing. VSWI is simple and practicable, and the correlation coefficient between VSWI and soil moisture data reaches significant levels. In summary, VSWI is the best model for summer drought monitoring in Chongqing.
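VSWI is commonly computed as the ratio of a vegetation index to land surface temperature, which is consistent with its description here as simple and practicable. A minimal sketch with hypothetical pixel values (the paper's exact formulation and MODIS band choices are not given in the abstract):

```python
def vswi(ndvi, lst_kelvin):
    """Vegetation Supply Water Index as a ratio of NDVI to land surface
    temperature; lower values suggest canopy water stress."""
    return ndvi / lst_kelvin

# Hypothetical MODIS-derived pixels: well-watered vs. drought-stressed.
wet = vswi(0.75, 298.0)   # vigorous, cool canopy
dry = vswi(0.35, 312.0)   # sparse, hot canopy
```

A drought-stressed pixel combines reduced greenness with elevated surface temperature, so the two effects reinforce each other in a single scalar that can be correlated directly with soil moisture.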
A new supernova light curve modeling program
NASA Astrophysics Data System (ADS)
Jäger, Zoltán; Nagy, Andrea P.; Biro, Barna I.; Vinkó, József
2017-12-01
Supernovae are extremely energetic explosions that mark the violent deaths of various types of stars. Studying such cosmic explosions is important for several reasons. Supernovae play a key role in cosmic nucleosynthesis processes, and they are also the anchors of methods for measuring extragalactic distances. Several exotic physical processes take place in the expanding ejecta produced by the explosion. We have developed a fast and simple semi-analytical code to model the light curve of core-collapse supernovae. This allows the determination of their most important basic physical parameters, such as the radius of the progenitor star, the mass of the ejected envelope, and the mass of the radioactive nickel synthesized during the explosion, among others.
Controlling the net charge on a nanoparticle optically levitated in vacuum
NASA Astrophysics Data System (ADS)
Frimmer, Martin; Luszcz, Karol; Ferreiro, Sandra; Jain, Vijay; Hebestreit, Erik; Novotny, Lukas
2017-06-01
Optically levitated nanoparticles in vacuum are a promising model system to test physics beyond our current understanding of quantum mechanics. Such experimental tests require extreme control over the dephasing of the levitated particle's motion. If the nanoparticle carries a finite net charge, it experiences a random Coulomb force due to fluctuating electric fields. This dephasing mechanism can be fully excluded by discharging the levitated particle. Here, we present a simple and reliable technique to control the charge on an optically levitated nanoparticle in vacuum. Our method is based on the generation of charges in an electric discharge and does not require additional optics or mechanics close to the optical trap.
Vial Organic™: Organic Chemistry Labs for High School and Junior College
NASA Astrophysics Data System (ADS)
Russo, Thomas J.; Meszaros, Mark
1999-01-01
Vial Organic is the most economical, safe, and time-effective method of performing organic chemistry experiments. Activities are carried out in low-cost, sealed vials. Vial Organic is extremely safe because only micro quantities of reactants are used, reactants are contained in tightly sealed vials, and only water baths are used for temperature control. Vial Organic laboratory activities are easily performed within one 50-minute class period. When heat is required, a simple hot-water bath is prepared from a beaker of water and an inexpensive immersion heater. The low cost, ease of use, and relatively short time requirement will allow organic chemistry to be experienced by more students with less confusion and intimidation.
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective: Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods: We compared the performance of the NLP algorithm to 1) results of the gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model); and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. Results: We iteratively refined the NLP algorithm in the training set, including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions: A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
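The reported comparison metrics follow from a standard confusion matrix. A small Python helper with illustrative counts only (the paper reports rates, not the raw confusion matrix):

```python
def classifier_metrics(tp, fp, tn, fn):
    """Accuracy, positive predictive value (PPV) and specificity
    from confusion-matrix counts, as compared in the study."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)
    specificity = tn / (tn + fp)
    return accuracy, ppv, specificity

# Hypothetical counts summing to the testing-set size of 634.
acc, ppv, spec = classifier_metrics(tp=280, fp=22, tn=300, fn=32)
```

Note that PPV and specificity both penalize false positives, which is why the billing-code models, prone to over-coding, fall furthest behind the NLP algorithm on exactly those two metrics.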
High order harmonic generation in rare gases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budil, Kimberly Susan
1994-05-01
The process of high order harmonic generation in atomic gases has shown great promise as a method of generating extremely short wavelength radiation, extending far into the extreme ultraviolet (XUV). The process is conceptually simple. A very intense laser pulse (I ~ 10^13-10^14 W/cm^2) is focused into a dense (~10^17 particles/cm^3) atomic medium, causing the atoms to become polarized. These atomic dipoles are then coherently driven by the laser field and begin to radiate at odd harmonics of the laser field. This dissertation is a study of both the physical mechanism of harmonic generation and its development as a source of coherent XUV radiation. Recently, a semiclassical theory has been proposed which provides a simple, intuitive description of harmonic generation. In this picture the process is treated in two steps. The atom ionizes via tunneling, after which its classical motion in the laser field is studied. Electron trajectories which return to the vicinity of the nucleus may recombine and emit a harmonic photon, while those which do not return will ionize. An experiment was performed to test the validity of this model, wherein the trajectory of the electron as it orbits the nucleus or ion core is perturbed by driving the process with elliptically, rather than linearly, polarized laser radiation. The semiclassical theory predicts a rapid turn-off of harmonic production as the ellipticity of the driving field is increased. This decrease in harmonic production is observed experimentally, and a simple quantum mechanical theory is used to model the data. The second major focus of this work was the development of the harmonic source. A series of experiments was performed examining the spatial profiles of the harmonics. The quality of the spatial profile is crucial if the harmonics are to be used as the source for experiments, particularly if they must be refocused.
Wave Propagation in Discontinuous Media by the Scattering Matrix Method
NASA Astrophysics Data System (ADS)
Perino, A.; Orta, R.; Barla, G.
2012-09-01
Propagation of elastic waves in discontinuous media is studied in this paper by the scattering matrix method (SMM). An electromagnetic transmission line analogy is also used to set up the mathematical model. The SMM operates in the frequency domain and allows for all wave polarizations (P, SV and SH). Rock masses are examples of discontinuous media in which the discontinuities (fractures or joints) influence wave propagation. Both elastic and viscoelastic joints are considered and the latter are described by Kelvin-Voigt, Maxwell and Burgers models. Rock joints with Coulomb slip behavior are also analyzed, by applying the averaging principle of Caughy (J Appl Mech 27:640-643, 1960). The evaluation of the effects of periodic discontinuities in a homogeneous medium is presented by introducing the concept of Bloch waves. The dispersion curves of these waves are useful to explain the existence of frequency bands of strong attenuation, also in the case of lossless (perfectly elastic) structures. Simple expressions of transmission and reflection coefficients are obtained. Finally, the SMM results are compared with those computed via the distinct element method (DEM). The comparisons are performed on a medium with joints with Coulomb slip behavior and the agreement is satisfactory, although the SMM must be applied in conjunction with the equivalent linearization technique. Even if the DEM is much more general, the SMM in these simple cases is extremely faster and provides a higher physical insight.
Stephan, Anett; Hahn-Löbmann, Simone; Rosche, Fred; Buchholz, Mirko; Giritch, Anatoli; Gleba, Yuri
2017-12-29
Colicins are natural non-antibiotic bacterial proteins with a narrow spectrum but an extremely high antibacterial activity. These proteins are promising food additives for the control of major pathogenic Shiga toxin-producing E. coli serovars in meats and produce. In the USA, colicins produced in edible plants such as spinach and leafy beets have already been accepted by the U. S. Food and Drug Administration (FDA) and U. S. Department of Agriculture (USDA) as food-processing antibacterials through the GRAS (generally recognized as safe) regulatory review process. Nicotiana benthamiana , a wild relative of tobacco, N. tabacum , has become the preferred production host plant for manufacturing recombinant proteins-including biopharmaceuticals, vaccines, and biomaterials-but the purification procedures that have been employed thus far are highly complex and costly. We describe a simple and inexpensive purification method based on specific acidic extraction followed by one chromatography step. The method provides for a high recovery yield of purified colicins, as well as a drastic reduction of nicotine to levels that could enable the final products to be used on food. The described purification method allows production of the colicin products at a commercially viable cost of goods and might be broadly applicable to other cost-sensitive proteins.
Statistical Methods for Quantifying the Variability of Solar Wind Transients of All Sizes
NASA Astrophysics Data System (ADS)
Tindale, E.; Chapman, S. C.
2016-12-01
The solar wind is inherently variable across a wide range of timescales, from small-scale turbulent fluctuations to the 11-year periodicity induced by the solar cycle. Each solar cycle is unique, and this change in overall cycle activity is coupled from the Sun to Earth via the solar wind, leading to long-term trends in space weather. Our work [Tindale & Chapman, 2016] applies novel statistical methods to solar wind transients of all sizes, to quantify the variability of the solar wind associated with the solar cycle. We use the same methods to link solar wind observations with those on the Sun and Earth. We use Wind data to construct quantile-quantile (QQ) plots comparing the statistical distributions of multiple commonly used solar wind-magnetosphere coupling parameters between the minima and maxima of solar cycles 23 and 24. We find that in each case the distribution is multicomponent, ranging from small fluctuations to extreme values, with the same functional form at all phases of the solar cycle. The change in PDF is captured by a simple change of variables, which is independent of the PDF model. Using this method we can quantify the quietness of the cycle 24 maximum, identify which variable drives the changing distribution of composite parameters such as ɛ, and we show that the distribution of ɛ is less sensitive to changes in its extreme values than that of its constituents. After demonstrating the QQ method on solar wind data, we extend the analysis to include solar and magnetospheric data spanning the same time period. We focus on GOES X-ray flux and WDC AE index data. Finally, having studied the statistics of transients across the full distribution, we apply the same method to time series of extreme bursts in each variable. Using these statistical tools, we aim to track the solar cycle-driven variability from the Sun through the solar wind and into the Earth's magnetosphere. Tindale, E. and S.C. Chapman (2016), Geophys. Res. 
Lett., 43(11), doi: 10.1002/2016GL068920.
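The QQ comparison described above can be sketched in a few lines: matched quantiles of two samples are plotted against each other, and a linear relation in log space indicates that the two distributions share a functional form and differ only by a simple change of variables. This is an illustrative sketch using synthetic lognormal data, not the authors' code or the Wind dataset.

```python
import numpy as np

def qq_points(x, y, n_q=99):
    """Matched quantiles of two samples, as plotted in a QQ comparison."""
    probs = np.linspace(0.01, 0.99, n_q)
    return np.quantile(x, probs), np.quantile(y, probs)

rng = np.random.default_rng(0)
solar_min = rng.lognormal(mean=0.0, sigma=1.0, size=20000)  # stand-in "quiet" sample
solar_max = 2.5 * solar_min                                 # same form, rescaled

# In log space a pure rescaling shows up as a QQ line of slope 1 whose
# offset is the log of the change-of-variables factor.
qx, qy = qq_points(np.log(solar_min), np.log(solar_max))
slope, offset = np.polyfit(qx, qy, 1)
print(round(slope, 2), round(offset, 2))  # → 1.0 0.92  (log 2.5 ≈ 0.92)
```

A curved or broken QQ line would instead signal a genuine change of distributional form between the two phases.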
Panel methods: An introduction
NASA Technical Reports Server (NTRS)
Erickson, Larry L.
1990-01-01
Panel methods are numerical schemes for solving the Prandtl-Glauert equation for linear, inviscid, irrotational flow about aircraft flying at subsonic or supersonic speeds. The tools at the panel-method user's disposal are (1) surface panels of source-doublet-vorticity distributions that can represent nearly arbitrary geometry, and (2) extremely versatile boundary condition capabilities that can frequently be used for creative modeling. Panel-method capabilities and limitations, basic concepts common to all panel-method codes, different choices that were made in the implementation of these concepts into working computer programs, and various modeling techniques involving boundary conditions, jump properties, and trailing wakes are discussed. An approach for extending the method to nonlinear transonic flow is also presented. Three appendices supplement the main text. In appendix 1, additional detail is provided on how the basic concepts are implemented into a specific computer program (PANAIR). In appendix 2, it is shown how to evaluate analytically the fundamental surface integral that arises in the expressions for influence coefficients, and how to evaluate its jump property. In appendix 3, a simple example is used to illustrate the so-called finite part of the improper integrals.
NASA Technical Reports Server (NTRS)
Maskew, Brian
1987-01-01
The VSAERO low order panel method formulation is described for the calculation of subsonic aerodynamic characteristics of general configurations. The method is based on piecewise constant doublet and source singularities. Two forms of the internal Dirichlet boundary condition are discussed and the source distribution is determined by the external Neumann boundary condition. A number of basic test cases are examined. Calculations are compared with higher order solutions for a number of cases. It is demonstrated that for comparable density of control points where the boundary conditions are satisfied, the low order method gives comparable accuracy to the higher order solutions. It is also shown that problems associated with some earlier low order panel methods, e.g., leakage in internal flows and junctions and also poor trailing edge solutions, do not appear for the present method. Further, the application of the Kutta conditions is extremely simple; no extra equation or trailing edge velocity point is required. The method has very low computing costs and this has made it practical for application to nonlinear problems requiring iterative solutions for wake shape and surface boundary layer effects.
A dynamic traction splint for the management of extrinsic tendon tightness.
Dovelle, S; Heeter, P K; Phillips, P D
1987-02-01
The dynamic traction splint designed by therapists at Walter Reed Army Medical Center is used for the management of extrinsic extensor tendon tightness commonly seen in brachial plexus injuries and traumatic soft tissue injuries of the upper extremity. The two components of the splint allow for simultaneous maximum flexion of the MCP and IP joints. This simple and economical splint provides an additional modality to any occupational therapy service involved in the management of upper extremity disorders.
Ideological Responses to the EU Refugee Crisis
van Prooijen, Jan-Willem; Krouwel, André P. M.; Emmer, Julia
2017-01-01
The 2016 European Union (EU) refugee crisis exposed a fundamental distinction in political attitudes between the political left and right. Previous findings suggest, however, that besides political orientation, ideological strength (i.e., political extremism) is also relevant to understand such distinctive attitudes. Our study reveals that the political right is more anxious, and the political left experiences more self-efficacy, about the refugee crisis. At the same time, the political extremes—at both sides of the spectrum—are more likely than moderates to believe that the solution to this societal problem is simple. Furthermore, both extremes experience more judgmental certainty about their domain-specific knowledge of the refugee crisis, independent of their actual knowledge. Finally, belief in simple solutions mediated the relationship between ideology and judgmental certainty, but only among political extremists. We conclude that both ideological orientation and strength matter to understand citizens’ reactions to the refugee crisis. PMID:29593852
NASA Technical Reports Server (NTRS)
Levy, Samuel; Krupen, Philip
1943-01-01
The von Karman equations for flat plates are solved beyond the buckling load up to edge strains equal to eight times the buckling strain, for the extreme case of rigid clamping along the edges parallel to the load. Deflections, bending stresses, and membrane stresses are given as a function of end compressive load. The theoretical values of effective width are compared with the values derived for simple support along the edges parallel to the load. The increase in effective width due to rigid clamping drops from about 20 percent near the buckling strain to about 8 percent at an edge strain equal to eight times the buckling strain. Experimental values of effective width in the elastic range reported in NACA Technical Note No. 684 lie between the theoretical curves for the extremes of simple support and rigid clamping.
Augmented kludge waveforms for detecting extreme-mass-ratio inspirals
NASA Astrophysics Data System (ADS)
Chua, Alvin J. K.; Moore, Christopher J.; Gair, Jonathan R.
2017-08-01
The extreme-mass-ratio inspirals (EMRIs) of stellar-mass compact objects into massive black holes are an important class of source for the future space-based gravitational-wave detector LISA. Detecting signals from EMRIs will require waveform models that are both accurate and computationally efficient. In this paper, we present the latest implementation of an augmented analytic kludge (AAK) model, publicly available at https://github.com/alvincjk/EMRI_Kludge_Suite as part of an EMRI waveform software suite. This version of the AAK model has improved accuracy compared to its predecessors, with two-month waveform overlaps against a more accurate fiducial model exceeding 0.97 for a generic range of sources; it also generates waveforms 5-15 times faster than the fiducial model. The AAK model is well suited for scoping out data analysis issues in the upcoming round of mock LISA data challenges. A simple analytic argument shows that it might even be viable for detecting EMRIs with LISA through a semicoherent template bank method, while the use of the original analytic kludge in the same approach will result in around 90% fewer detections.
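The waveform overlaps quoted above are normalized inner products between two waveforms. The sketch below uses a flat (white-noise) inner product and synthetic sinusoidal stand-in "waveforms"; the real calculation weights the inner product by the LISA noise spectral density, so this only illustrates the shape of the computation.

```python
import numpy as np

def overlap(a, b):
    """Normalized inner product <a,b>/sqrt(<a,a><b,b>); with a flat
    (white) noise weighting this is the waveform overlap."""
    inner = lambda x, y: float(np.sum(x * y))
    return inner(a, b) / np.sqrt(inner(a, a) * inner(b, b))

t = np.linspace(0.0, 10.0, 4000)
fiducial = np.sin(2 * np.pi * 1.0 * t)       # stand-in "accurate" waveform
kludge = np.sin(2 * np.pi * 1.0 * t + 0.05)  # same waveform with a small phase error

print(round(overlap(fiducial, kludge), 4))   # close to cos(0.05) ≈ 0.9988
```

An overlap of 0.97 over two months, as quoted for the AAK model, means the kludge loses only a few percent of the matched-filter signal-to-noise ratio relative to the fiducial model.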
Hu, Ying; Ren, Jie; Peng, Zhao; Umana, Arnoldo A; Le, Ha; Danilova, Tatiana; Fu, Junjie; Wang, Haiyan; Robertson, Alison; Hulbert, Scot H; White, Frank F; Liu, Sanzhen
2018-01-01
Goss's wilt (GW) of maize is caused by the Gram-positive bacterium Clavibacter michiganensis subsp. nebraskensis (Cmn) and has spread in recent years throughout the Great Plains, posing a threat to production. The genetic basis of plant resistance is unknown. Here, a simple method for quantifying disease symptoms was developed and used to select cohorts of highly resistant and highly susceptible lines known as extreme phenotypes (XP). Copy number variation (CNV) analyses using whole genome sequences of bulked XP revealed 141 genes containing CNV between the two XP groups. The CNV genes include the previously identified common rust resistance locus rp1. Multiple Rp1 accessions with distinct rp1 haplotypes in an otherwise susceptible accession exhibited hypersensitive responses upon inoculation. GW provides an excellent system for the genetic dissection of diseases caused by closely related subspecies of C. michiganensis. Further work will facilitate breeding strategies to control GW and provide needed insight into the resistance mechanisms of important related diseases such as bacterial canker of tomato and bacterial ring rot of potato.
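A bulked-XP copy-number comparison of the kind described can be illustrated with the normalized read-depth log2 ratio between the two pools; the per-gene depths and the cutoff below are hypothetical, not values from the study.

```python
import numpy as np

def cnv_candidates(depth_res, depth_sus, log2_cut=1.0):
    """Flag genes whose normalized read-depth ratio between the resistant
    and susceptible bulks suggests copy number variation."""
    res = depth_res / depth_res.sum()              # normalize for library size
    sus = depth_sus / depth_sus.sum()
    log2_ratio = np.log2(res / sus)
    return np.where(np.abs(log2_ratio) >= log2_cut)[0], log2_ratio

# toy per-gene depths: gene 2 duplicated in the resistant bulk, gene 5 depleted
depth_res = np.array([100, 95, 210, 102, 99, 12, 101], float)
depth_sus = np.array([98, 101, 100, 97, 103, 99, 100], float)

idx, ratios = cnv_candidates(depth_res, depth_sus)
print(idx)  # indices of CNV candidate genes
```

A real analysis would additionally model depth noise and multiple-testing across the genome; this sketch only shows the core ratio statistic.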
Agent Model Development for Assessing Climate-Induced Geopolitical Instability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boslough, Mark B.; Backus, George A.
2005-12-01
We present the initial stages of development of new agent-based computational methods to generate and test hypotheses about linkages between environmental change and international instability. This report summarizes the first year's effort of an originally proposed three-year Laboratory Directed Research and Development (LDRD) project. The preliminary work focused on a set of simple agent-based models and benefited from lessons learned in previous related projects and case studies of human response to climate change and environmental scarcity. Our approach was to define a qualitative model using extremely simple cellular agent models akin to Lovelock's Daisyworld and Schelling's segregation model. Such models do not require significant computing resources, and users can modify behavior rules to gain insights. One of the difficulties in agent-based modeling is finding the right balance between model simplicity and real-world representation. Our approach was to keep agent behaviors as simple as possible during the development stage (described herein) and to ground them with a realistic geospatial Earth system model in subsequent years. This work is directed toward incorporating projected climate data--including various CO2 scenarios from the Intergovernmental Panel on Climate Change (IPCC) Third Assessment Report--and ultimately toward coupling a useful agent-based model to a general circulation model.
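A minimal Schelling-style cellular agent model of the kind the report takes as a starting point fits in a few dozen lines. The grid size, vacancy fraction, and tolerance threshold below are illustrative choices, not the report's parameters.

```python
import random

def schelling(size=20, frac_empty=0.1, threshold=0.5, steps=10000, seed=1):
    """Minimal Schelling segregation model: an agent unhappy with its
    neighbourhood (fewer than `threshold` like neighbours) moves to a
    random empty cell. Returns the mean like-neighbour fraction."""
    rng = random.Random(seed)
    n_empty = int(size * size * frac_empty)
    cells = [0] * n_empty + [1, 2] * ((size * size - n_empty) // 2)
    cells += [1] * (size * size - len(cells))  # pad if counts were odd
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def like_fraction(r, c):
        """Fraction of occupied Moore neighbours (torus) matching agent (r, c)."""
        me, like, other = grid[r][c], 0, 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                v = grid[(r + dr) % size][(c + dc) % size]
                if v == me:
                    like += 1
                elif v != 0:
                    other += 1
        return like / (like + other) if like + other else 1.0

    for _ in range(steps):
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r][c] != 0 and like_fraction(r, c) < threshold:
            while True:  # relocate the unhappy agent to a random empty cell
                er, ec = rng.randrange(size), rng.randrange(size)
                if grid[er][ec] == 0:
                    break
            grid[er][ec], grid[r][c] = grid[r][c], 0

    occupied = [(r, c) for r in range(size) for c in range(size) if grid[r][c]]
    return sum(like_fraction(r, c) for r, c in occupied) / len(occupied)

print(round(schelling(), 2))  # mean like-neighbour fraction after relaxation
```

Even with mildly tolerant agents, local relocation rules drive the mean like-neighbour fraction well above its random initial value, which is the emergent-segregation insight such toy models are used to convey.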
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mellors, R J
The Comprehensive Nuclear Test Ban Treaty (CTBT) includes provisions for an on-site inspection (OSI), which allows the use of specific techniques to detect underground anomalies including cavities and rubble zones. One permitted technique is active seismic surveys such as seismic refraction or reflection. The purpose of this report is to conduct some simple modeling to evaluate the potential use of seismic reflection in detecting cavities and to test the use of open-source software in modeling possible scenarios. It should be noted that OSI inspections are conducted under specific constraints regarding duration and logistics. These constraints are likely to significantly impact active seismic surveying, as a seismic survey typically requires considerable equipment, effort, and expertise. For the purposes of this study, which is a first-order feasibility study, these issues will not be considered. This report provides a brief description of the seismic reflection method along with some commonly used software packages. This is followed by an outline of a simple processing stream based on a synthetic model, along with results from a set of models representing underground cavities. A set of scripts used to generate the models are presented in an appendix. We do not consider detection of underground facilities in this work and the geologic setting used in these tests is an extremely simple one.
A NEW METHOD OF SWEAT TESTING: THE CF QUANTUM® SWEAT TEST
Rock, Michael J.; Makholm, Linda; Eickhoff, Jens
2015-01-01
Background Conventional methods of sweat testing are time-consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and tests the diagnostic accuracy and analytic validity of the CFQT. Methods Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficients of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. Results The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97–0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94–100%) and 96% (95% confidence interval: 89–99%), respectively. In one center of this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher for the CFQT method (16.5%) than for conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. PMID:24862724
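Sensitivity and specificity with confidence intervals, as reported above, come from a 2x2 table of test results against the reference diagnosis. The counts below are hypothetical, and the Wilson score interval is one common choice of interval method (the study does not state which method it used).

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

# hypothetical 2x2 counts: new test vs. reference diagnosis
tp, fn = 60, 0   # all 60 CF patients detected
tn, fp = 86, 4   # 4 false positives among 90 non-CF subjects

sens = tp / (tp + fn)
spec = tn / (tn + fp)
print(sens, round(spec, 3), [round(x, 3) for x in wilson_ci(tn, tn + fp)])
```

The Wilson interval, unlike the naive normal approximation, behaves sensibly for proportions near 0 or 1, which matters here since sensitivities close to 100% are exactly the regime reported.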
Thermophysical properties of simple liquid metals: A brief review of theory
NASA Technical Reports Server (NTRS)
Stroud, David
1993-01-01
In this paper, we review the current theory of the thermophysical properties of simple liquid metals. The emphasis is on thermodynamic properties, but we also briefly discuss the nonequilibrium properties of liquid metals. We begin by defining a 'simple liquid metal' as one in which the valence electrons interact only weakly with the ionic cores, so that the interaction can be treated by perturbation theory. We then write down the equilibrium Hamiltonian of a liquid metal as a sum of five terms: the bare ion-ion interaction, the electron-electron interaction, the bare electron-ion interaction, and the kinetic energies of electrons and ions. Since the electron-ion interaction can be treated perturbatively, the electronic part contributes in two ways to the Helmholtz free energy: it gives a density-dependent term which is independent of the arrangement of ions, and it acts to screen the ion-ion interaction, giving rise to effective ion-ion pair potentials which are, in general, density-dependent. After sketching the form of a typical pair potential, we briefly enumerate some methods for calculating the ionic distribution function and hence the Helmholtz free energy of the liquid: Monte Carlo simulations, molecular dynamics simulations, and thermodynamic perturbation theory. The final result is a general expression for the Helmholtz free energy of the liquid metal. It can be used to calculate a wide range of thermodynamic properties of simple metal liquids, which we enumerate. They include not only a range of thermodynamic coefficients of both metals and alloys, but also many aspects of the phase diagram, including freezing curves of pure elements and phase diagrams of liquid alloys (including liquidus and solidus curves).
We briefly mention some key discoveries resulting from previous applications of this method, and point out that the same methods work for other materials not normally considered to be liquid metals (such as colloidal suspensions, in which the suspended microspheres behave like ions screened by the salt solution in which they are suspended). We conclude with a brief discussion of some non-equilibrium (i.e., transport) properties which can be treated by an extension of these methods. These include electrical resistivity, thermal conductivity, viscosity, atomic self-diffusion coefficients, concentration diffusion coefficients in alloys, surface tension and thermal emissivity. Finally, we briefly mention two methods by which the theory might be extended to non-simple liquid metals: these are empirical techniques (i.e., empirical two- and three-body potentials), and numerical many-body approaches. Both may be potentially applicable to extremely complex systems, such as nonstoichiometric liquid semiconductor alloys.
The coefficient of restitution of pressurized balls: a mechanistic model
NASA Astrophysics Data System (ADS)
Georgallas, Alex; Landry, Gaëtan
2016-01-01
Pressurized, inflated balls used in professional sports are regulated so that their behaviour upon impact can be anticipated, allowing the game to have its distinctive character. However, the dynamics governing the impacts of such balls, even on stationary hard surfaces, can be extremely complex. The energy transformations, which arise from the compression of the gas within the ball and from the shear forces associated with the deformation of the wall, are examined in this paper. We develop a simple mechanistic model of the dependence of the coefficient of restitution, e, upon both the gauge pressure, P_G, of the gas and the shear modulus, G, of the wall. The model is validated using the results from a simple series of experiments on three different sports balls. The fits to the data are extremely good for P_G > 25 kPa, and consistent values of G are obtained for the wall material. As far as the authors can tell, this simple, mechanistic model of the pressure dependence of the coefficient of restitution is the first in the literature. Keywords: coefficient of restitution, dynamics, inflated balls, pressure, impact model
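For a drop test onto a stationary hard surface, the coefficient of restitution follows from the drop and rebound heights as e = sqrt(h_rebound / h_drop), since e is the ratio of outgoing to incoming speed. The data below are hypothetical, intended only to show the shape of the calculation, not the paper's measurements or its mechanistic model for e(P_G, G).

```python
import math

def restitution_from_heights(h_drop, h_rebound):
    """Coefficient of restitution from a drop test:
    e = v_out / v_in = sqrt(h_rebound / h_drop) for free fall."""
    return math.sqrt(h_rebound / h_drop)

# illustrative (hypothetical) rebound heights vs. gauge pressure, 1 m drop
for p_kpa, h_rebound in [(30, 0.52), (50, 0.58), (70, 0.62)]:
    print(p_kpa, round(restitution_from_heights(1.0, h_rebound), 3))
```

Repeating such drops over a range of gauge pressures yields the e(P_G) curve to which a mechanistic model like the paper's can then be fitted.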
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frey, Brian J.; Kuang, Ping; Hsieh, Mei-Li
A 900 nm thick TiO2 simple cubic photonic crystal with lattice constant 450 nm was fabricated and used to experimentally validate a newly-discovered mechanism for extreme light-bending. Absorption enhancement was observed extending 1–2 orders of magnitude over that of a reference TiO2 film. Several enhancement peaks in the region from 600–950 nm were identified, which far exceed both the ergodic fundamental limit and the limit based on surface-gratings, with some peaks exceeding 100 times enhancement. These results are attributed to radically sharp refraction where the optical path length approaches infinity due to the Poynting vector lying nearly parallel to the photonic crystal interface. The observed phenomena follow directly from the simple cubic symmetry of the photonic crystal, and can be achieved by integrating the light-trapping architecture into the absorbing volume. These results are not dependent on the material used, and can be applied to any future light trapping applications such as phosphor-converted white light generation, water-splitting, or thin-film solar cells, where increased response in areas of weak absorption is desired.
An extremely simple macroscale electronic skin realized by deep machine learning.
Sohn, Kee-Sun; Chung, Jiyong; Cho, Min-Young; Timilsina, Suman; Park, Woon Bae; Pyo, Myungho; Shin, Namsoo; Sohn, Keemin; Kim, Ji Sik
2017-09-11
Complicated structures consisting of multi-layers with a multi-modal array of device components, i.e., so-called patterned multi-layers, and their corresponding circuit designs for signal readout and addressing are used to achieve a macroscale electronic skin (e-skin). In contrast to this common approach, we realized an extremely simple macroscale e-skin solely by employing a single-layered piezoresistive MWCNT-PDMS composite film with neither nano-, micro-, nor macro-patterns. Deep machine learning made it possible for such a simple bulk material to play the role of a smart sensory device. A deep neural network (DNN) enabled us to process the electrical resistance change induced by applied pressure and thereby to instantaneously evaluate the pressure level and the exact position under pressure. The great potential of this concept for pressure-distribution sensing over a macroscale area could expand its use beyond e-skin applications to other high-end applications such as touch panels, portable flexible keyboards, sign language interpreting gloves, safety diagnosis of civil infrastructure, and the diagnosis of motility and peristalsis disorders in the gastrointestinal tract.
Climate Change Impact on Variability of Rainfall Intensity in Upper Blue Nile Basin, Ethiopia
NASA Astrophysics Data System (ADS)
Worku, L. Y.
2015-12-01
Extreme rainfall events are a major problem in Ethiopia, with the resulting floods causing significant damage to agriculture, ecology, and infrastructure, disruption to human activities, loss of property, loss of lives, and disease outbreaks. The aim of this study was to explore likely changes in precipitation extremes due to future climate change. The study specifically focuses on the impact of future climate change on the variability of rainfall intensity-duration-frequency in the Upper Blue Nile basin. Precipitation data from two Global Climate Models (GCMs), HadCM3 and CGCM3, were used in the study. Rainfall frequency analysis was carried out to estimate quantiles with different return periods. The Probability Weighted Moments (PWM) method was used for parameter estimation, and L-Moment Ratio Diagrams (LMRDs) were used to find the best parent distribution for each station. The parent distributions identified by the frequency analysis are Generalized Logistic (GLOG), Generalized Extreme Value (GEV), and Gamma & Pearson III (P3). After the quantiles were estimated, a simple disaggregation model was applied to derive sub-daily rainfall data. Finally, the disaggregated rainfall was fitted to derive IDF curves; the results show that rainfall intensity is expected to increase in most parts of the basin. Based on the two GCM outputs, the study indicates a likely increase of precipitation extremes over the Blue Nile basin under the changing climate. The results should be interpreted with caution, as GCM outputs in this part of the world carry large uncertainty.
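The quantile-estimation step of such a frequency analysis can be illustrated with a simple Gumbel (EV1) fit by the method of moments. The study itself uses PWM estimation with GLOG/GEV/P3 parent distributions, so this stand-in, with hypothetical annual maxima, only shows the shape of the return-level computation.

```python
import math
import statistics

def gumbel_quantile(annual_maxima, return_period):
    """Return level for a given return period from a Gumbel (EV1) fit by
    the method of moments (a simpler stand-in for PWM/GEV fitting)."""
    mean = statistics.mean(annual_maxima)
    std = statistics.stdev(annual_maxima)
    scale = std * math.sqrt(6) / math.pi       # EV1 moment relations
    loc = mean - 0.5772 * scale                # 0.5772 = Euler-Mascheroni constant
    p = 1.0 - 1.0 / return_period              # non-exceedance probability
    return loc - scale * math.log(-math.log(p))

# hypothetical annual maximum daily rainfall (mm) at one station
amax = [42, 55, 38, 61, 47, 70, 52, 44, 66, 58, 49, 73]
for t in (2, 10, 100):
    print(t, round(gumbel_quantile(amax, t), 1))
```

Repeating this for each storm duration after disaggregation, and plotting intensity against duration for each return period, yields the IDF curves discussed in the abstract.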
Spyratos, Dionisios; Haidich, Anna-Bettina; Chloros, Diamantis; Michalopoulou, Dionisia; Sichletidis, Lazaros
2017-01-01
Even though the diagnosis of chronic obstructive pulmonary disease (COPD) is easy and based mainly on spirometry and symptoms, the prevalence of underdiagnosis is extremely high. The use of simple screening tools (e.g., questionnaires, hand-held spirometers) has proved to be a practical method for COPD case finding. Nevertheless, the most appropriate target group within the general population has not yet been specified. The aim of the present study was to compare 3 screening questionnaires among smokers aged >40 years in the primary care setting. We excluded all subjects with a previous medical diagnosis of bronchial asthma or chronic pulmonary disease other than COPD. All participants were in a stable clinical condition, filled in the International Primary Care Airways Group (IPAG) questionnaire, the COPD Population Screener (COPD-PS) questionnaire, and the Lung Function Questionnaire (LFQ), and underwent spirometry. Medical diagnosis of COPD was established by an experienced pulmonologist. We studied 3,234 subjects during a 3.5-year period. COPD prevalence was 10.9% (52.1% underdiagnosis). All 3 questionnaires showed extremely high negative predictive values (94-96%), so a negative result safely excludes the diagnosis of COPD. The area under the curve was similar across the 3 questionnaires (AUCROC: 0.794-0.809). The COPD-PS questionnaire demonstrated the highest positive predictive value (41%) of the three. On the other hand, the IPAG questionnaire and LFQ demonstrated higher sensitivities than the COPD-PS, resulting in lower percentages of missed cases. The three validated screening questionnaires for COPD thus demonstrated different diagnostic characteristics. © 2017 S. Karger AG, Basel.
Accurate screening for insulin resistance in PCOS women using fasting insulin concentrations.
Lunger, Fabian; Wildt, Ludwig; Seeber, Beata
2013-06-01
The aims of this cross-sectional study were to evaluate the relative agreement of both static and dynamic methods of diagnosing insulin resistance (IR) in women with polycystic ovary syndrome (PCOS) and to suggest a simple screening method for IR. All participants underwent serial blood draws for hormonal profiling and lipid assessment, a 3 h, 75 g load oral glucose tolerance test (OGTT) with glucose and insulin measured every 15 min, and an ACTH stimulation test. The prevalence of IR ranged from 12.2% to 60.5%, depending on the IR index used. Based on the largest area under the curve in receiver operating characteristic (ROC) analyses, the dynamic indices outperformed the static indices, with the glucose to insulin ratio and fasting insulin (fInsulin) demonstrating the best diagnostic properties. Applying two cut-offs representing fInsulin extremes (<7 and >13 mIU/l, respectively) gave the diagnosis in 70% of the patients with high accuracy. Currently utilized indices for assessing IR give highly variable results in women with PCOS. The most accurate indices, based on dynamic testing, can be time-consuming and labor-intensive. We suggest the use of fInsulin as a simple screening test, which can reduce the number of OGTTs needed to routinely assess insulin resistance in women with PCOS.
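The two-cut-off screening rule suggested in the abstract amounts to a simple triage function: values below the lower cut-off rule IR out, values above the upper cut-off rule it in, and only the intermediate band is referred for dynamic (OGTT) testing. The sketch below is an illustrative reading of that rule, not clinical software.

```python
def insulin_triage(fasting_insulin_miu_l):
    """Two-cut-off fInsulin screening rule from the abstract (mIU/l):
    <7 -> IR unlikely, >13 -> IR likely, otherwise refer for an OGTT."""
    if fasting_insulin_miu_l < 7:
        return "IR unlikely"
    if fasting_insulin_miu_l > 13:
        return "IR likely"
    return "indeterminate: refer for OGTT"

print(insulin_triage(5.2))   # -> IR unlikely
print(insulin_triage(15.8))  # -> IR likely
print(insulin_triage(9.0))   # -> indeterminate: refer for OGTT
```

Triaging the clear extremes first is what lets the rule resolve about 70% of patients without the time-consuming dynamic test.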
Nakamura, Mikiko; Suzuki, Ayako; Akada, Junko; Yarimizu, Tohru; Iwakiri, Ryo; Hoshida, Hisashi; Akada, Rinji
2015-08-01
Escherichia coli plasmids are commonly used for gene expression experiments in mammalian cells, while PCR-amplified DNAs are rarely used even though PCR is a much faster and easier method to construct recombinant DNAs. One difficulty may be the limited amount of DNA produced by PCR. For direct utilization of PCR-amplified DNA in transfection experiments, efficient transfection with a smaller amount of DNA must be attained. For this purpose, we investigated two enhancer reagents, polyethylene glycol and tRNA, for a chemical transfection method. Added to a commercial transfection reagent, the enhancers individually and synergistically yielded higher transfection efficiency, applicable to several mammalian cell lines in a 96-well plate. By taking advantage of a simple transfection procedure using PCR-amplified DNA, the SV40 and rabbit β-globin terminator lengths were minimized. The terminators are short enough to design into oligonucleotides; thus, terminator primers can be used for the construction and analysis of numerous mutations, deletions, insertions, and tag-fusions at the 3'-terminus of any gene. PCR-mediated gene manipulation with terminator primers will transform gene expression experiments by allowing extremely simple, high-throughput experiments with small-scale, multi-well mammalian cell cultures.
Deployed Force Waste Management
2004-11-01
[Table fragments: climate categories from humid coastal desert (B3) and cold (C0-C2) through severe and extreme cold (C3, C4) affect the effectiveness and efficiency of particular treatment and disposal options; options range from simple disposal on the ground surface (e.g., on spinifex), through commercially available and easily deployable systems with some construction by engineers required, to simple but specialised equipment.]
A simple repeat polymorphism in the MITF-M promoter is a key regulator of white spotting in dogs.
Baranowska Körberg, Izabella; Sundström, Elisabeth; Meadows, Jennifer R S; Rosengren Pielberg, Gerli; Gustafson, Ulla; Hedhammar, Åke; Karlsson, Elinor K; Seddon, Jennifer; Söderberg, Arne; Vilà, Carles; Zhang, Xiaolan; Åkesson, Mikael; Lindblad-Toh, Kerstin; Andersson, Göran; Andersson, Leif
2014-01-01
The white spotting locus (S) in dogs is colocalized with the MITF (microphthalmia-associated transcription factor) gene. The phenotypic effects of the four S alleles range from solid colour (S) to extreme white spotting (s(w)). We have investigated four candidate mutations associated with the s(w) allele: a SINE insertion, a SNP at a conserved site, and a simple repeat polymorphism, all associated with the MITF-M promoter, as well as a 12 base pair deletion in exon 1B. The variants associated with white spotting at all four loci were also found among wolves, and we conclude that none of these could be a sole causal mutation, at least not for extreme white spotting. We propose that the three canine white spotting alleles are not caused by three independent mutations but represent haplotype effects due to different combinations of causal polymorphisms. The simple repeat polymorphism showed extensive diversity both in dogs and wolves, and allele-sharing was common between wolves and white spotted dogs but was non-existent between solid and spotted dogs as well as between wolves and solid dogs. This finding was unexpected as Solid is assumed to be the wild-type allele. The data indicate that the simple repeat polymorphism has been a target for selection during dog domestication and breed formation. We also evaluated the significance of the three MITF-M associated polymorphisms with a Luciferase assay, and found conclusive evidence that the simple repeat polymorphism affects promoter activity. Three alleles associated with white spotting gave consistently lower promoter activity compared with the allele associated with solid colour. We propose that the simple repeat polymorphism affects cooperativity between transcription factors binding on either flanking side of the repeat. Thus, both genetic and functional evidence show that the simple repeat polymorphism is a key regulator of white spotting in dogs.
DSP+FPGA-based real-time histogram equalization system of infrared image
NASA Astrophysics Data System (ADS)
Gu, Dongsheng; Yang, Nansheng; Pi, Defu; Hua, Min; Shen, Xiaoyan; Zhang, Ruolan
2001-10-01
Histogram modification is a simple but effective method to enhance an infrared image. Several methods exist to equalize an infrared image's histogram, suited to the differing characteristics of infrared images: the traditional HE (histogram equalization) method and the improved HP (histogram projection) and PE (plateau equalization) methods, among others. Realizing all of these methods in a single system demands a large amount of memory and extremely high speed. In our system, we introduce DSP+FPGA-based real-time processing technology to accomplish this: the FPGA realizes the part common to these methods, while the DSP handles the parts that differ. The method and its parameters can be selected from a keyboard or a computer, making the system powerful yet easy to operate and maintain. In this article, we present the system diagram and the software flow chart of the methods, and conclude with an infrared image and its histogram before and after HE processing.
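Of the methods named above, plateau equalization (PE) is the easiest to sketch: the histogram is clipped at a plateau value before the cumulative mapping is built, so large uniform backgrounds cannot dominate the display range. The NumPy sketch below is illustrative only; it is not the paper's DSP+FPGA implementation, and the plateau value and 8-bit output range are assumptions.

```python
import numpy as np

def plateau_equalization(img, plateau):
    """Plateau-equalize an integer-valued infrared frame (illustrative sketch).

    Each histogram bin is clipped at `plateau` before the cumulative
    distribution is built; with a very large plateau this reduces to
    ordinary histogram equalization (HE)."""
    hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)
    clipped = np.minimum(hist, plateau)           # PE step: clip each bin
    cdf = np.cumsum(clipped).astype(np.float64)
    cdf /= cdf[-1]                                # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)    # map to an 8-bit display range
    return lut[img]
```

With `plateau` set above the largest bin count, the same routine performs plain HE, which mirrors how the paper factors the methods into a common part (FPGA) and a method-specific part (DSP).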
NASA Astrophysics Data System (ADS)
Okada, Yukimasa; Ono, Kouichi; Eriguchi, Koji
2017-06-01
Aggressive shrinkage and the geometrical transition to three-dimensional structures in metal-oxide-semiconductor field-effect transistors (MOSFETs) lead to potentially serious problems in plasma processing, such as plasma-induced physical damage (PPD). For the precise control of material processing and future device designs, it is extremely important to clarify the depth and energy profiles of PPD. Conventional methods to estimate the PPD profile (e.g., wet etching) are time-consuming. In this study, we propose an advanced method using a simple capacitance-voltage (C-V) measurement. The method first assumes the depth and energy profiles of defects in Si substrates, and then optimizes those profiles to reproduce the measured C-V curves. We applied this methodology to evaluate defect generation in (100), (111), and (110) Si substrates. No orientation dependence was found for the surface-oxide layers, whereas a large number of defects was found in the case of (110). The damaged layer thickness and areal density were estimated. This method provides the highly sensitive PPD prediction indispensable for designing future low-damage plasma processes.
Environmental Pressure May Change the Composition of Protein Disorder in Prokaryotes
Vicedo, Esmeralda; Schlessinger, Avner; Rost, Burkhard
2015-01-01
Many prokaryotic organisms have adapted to incredibly extreme habitats. The genomes of such extremophiles differ from their non-extremophile relatives. For example, some proteins in thermophiles sustain high temperatures by being more compact than homologs in non-extremophiles. Conversely, some proteins have increased volumes to compensate for freezing effects in psychrophiles that survive in the cold. Here, we revealed that some differences in organisms surviving in extreme habitats correlate with a simple single feature, namely the fraction of proteins predicted to have long disordered regions. We predicted disorder with different methods for 46 completely sequenced organisms from diverse habitats and found a correlation between protein disorder and the extremity of the environment. More specifically, the overall percentage of proteins with long disordered regions tended to be more similar between organisms of similar habitats than between organisms of similar taxonomy. For example, predictions tended to detect substantially more proteins with long disordered regions in prokaryotic halophiles (which survive high salt) than in their taxonomic neighbors. Another peculiar environment is that of high radiation, survived, e.g., by Deinococcus radiodurans. The relatively high fraction of disorder predicted in this extremophile might provide a shield against mutations. Although our analysis fails to establish causation, the observed correlation between such a simplistic, coarse-grained, microscopic molecular feature (disorder content) and a macroscopic variable (habitat) remains stunning. PMID:26252577
Parra-Robles, Juan; Cross, Albert R; Santyr, Giles E
2005-05-01
Hyperpolarized noble gases (HNGs) provide exciting possibilities for MR imaging at ultra-low magnetic field strengths (<0.15 T) due to the extremely high polarizations available from optical pumping. The fringe field of many superconductive magnets used in clinical MR imaging can provide a stable magnetic field for this purpose. In addition to offering the benefit of HNG MR imaging alongside conventional high field proton MRI, this approach offers the other useful advantage of providing different field strengths at different distances from the magnet. However, the extremely strong field gradients associated with the fringe field present a major challenge for imaging since impractically high active shim currents would be required to achieve the necessary homogeneity. In this work, a simple passive shimming method based on the placement of a small number of ferromagnetic pieces is proposed to reduce the fringe field inhomogeneities to a level that can be corrected using standard active shims. The method explicitly takes into account the strong variations of the field over the volume of the ferromagnetic pieces used to shim. The method is used to obtain spectra in the fringe field of a high-field (1.89 T) superconducting magnet from hyperpolarized 129Xe gas samples at two different ultra-low field strengths (8.5 and 17 mT). The linewidths of spectra measured from imaging phantoms (30 Hz) indicate a homogeneity sufficient for MRI of the rat lung.
Applied extreme-value statistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kinnison, R.R.
1983-05-01
The statistical theory of extreme values is a well established part of theoretical statistics. Unfortunately, it is seldom part of applied statistics and is infrequently a part of statistical curricula except in advanced studies programs. This has resulted in the impression that it is difficult to understand and not of practical value. In recent environmental and pollution literature, several short articles have appeared with the purpose of documenting all that is necessary for the practical application of extreme value theory to field problems (for example, Roberts, 1979). These articles are so concise that only a statistician can recognize all the subtleties and assumptions necessary for the correct use of the material presented. The intent of this text is to expand upon several recent articles, and to provide the necessary statistical background so that the non-statistician scientist can recognize an extreme value problem when it occurs in his work, be confident in handling simple extreme value problems himself, and know when the problem is statistically beyond his capabilities and requires consultation.
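As a concrete instance of the "simple extreme value problems" the text wants non-statisticians to handle, the sketch below fits a Type I (Gumbel) distribution to a synthetic annual-maximum series and reads off a 100-year return level. The data, parameter values, and use of SciPy are assumptions for illustration, not from the text.

```python
import numpy as np
from scipy import stats

# Synthetic annual-maximum series (e.g., yearly peak pollutant concentrations).
rng = np.random.default_rng(0)
annual_max = stats.gumbel_r.rvs(loc=50.0, scale=8.0, size=200, random_state=rng)

# Fit the Type I (Gumbel) extreme-value distribution by maximum likelihood.
loc, scale = stats.gumbel_r.fit(annual_max)

# T-year return level: the value exceeded on average once every T years.
T = 100
return_level = stats.gumbel_r.ppf(1.0 - 1.0 / T, loc=loc, scale=scale)
```

The return level is simply the (1 - 1/T) quantile of the fitted distribution, which is the quantity most field problems ultimately ask for.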
NASA Astrophysics Data System (ADS)
Arnaud, Patrick; Cantet, Philippe; Odry, Jean
2017-11-01
Flood frequency analyses (FFAs) are needed for flood risk management. Many methods exist, ranging from classical purely statistical approaches to more complex approaches based on process simulation. The results of these methods are associated with uncertainties that are sometimes difficult to estimate due to the complexity of the approaches or the number of parameters, especially for process simulation. This is the case for the simulation-based FFA approach called SHYREG presented in this paper, in which a rainfall generator is coupled with a simple rainfall-runoff model; we attempt to quantify the uncertainties due to the estimation of the seven parameters needed to estimate flood frequencies. The six parameters of the rainfall generator are mean values, so their theoretical distribution is known and can be used to estimate the generator uncertainties. In contrast, the theoretical distribution of the single hydrological model parameter is unknown; consequently, a bootstrap method is applied to estimate the calibration uncertainties. The propagation of uncertainty from the rainfall generator to the hydrological model is also taken into account. This method is applied to 1112 basins throughout France. Uncertainties coming from the SHYREG method and from purely statistical approaches are compared, and the results are discussed according to the length of the recorded observations, basin size and basin location. Uncertainties of the SHYREG method decrease as the basin size increases or as the length of the recorded flow increases. Moreover, the results show that the confidence intervals of the SHYREG method are relatively small despite the complexity of the method and the number of parameters (seven). This is due to the stability of the parameters and takes into account the dependence of uncertainties due to the rainfall model and the hydrological calibration.
Indeed, the uncertainties on the flow quantiles are on the same order of magnitude as those associated with the use of a statistical law with two parameters (here generalised extreme value Type I distribution) and clearly lower than those associated with the use of a three-parameter law (here generalised extreme value Type II distribution). For extreme flood quantiles, the uncertainties are mostly due to the rainfall generator because of the progressive saturation of the hydrological model.
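The bootstrap step used for the hydrological parameter can be illustrated generically: resample the calibration data with replacement, re-estimate the quantile each time, and take percentiles of the resulting estimates as a confidence interval. The code below is a textbook percentile bootstrap under assumed synthetic data, not the SHYREG implementation itself.

```python
import numpy as np

def bootstrap_quantile_ci(flows, q=0.99, n_boot=2000, alpha=0.10, seed=1):
    """Percentile-bootstrap confidence interval for a flow quantile.

    Generic sketch: resample the observed flows with replacement,
    recompute the q-quantile each time, and return the central
    (1 - alpha) interval of the bootstrap estimates."""
    rng = np.random.default_rng(seed)
    n = len(flows)
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        resample = rng.choice(flows, size=n, replace=True)
        estimates[b] = np.quantile(resample, q)
    lo, hi = np.quantile(estimates, [alpha / 2.0, 1.0 - alpha / 2.0])
    return lo, hi
```

The width of this interval is what shrinks as the record length grows, matching the behaviour the paper reports for longer flow series.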
Kang, Xinchen; Zhang, Jianling; Shang, Wenting; Wu, Tianbin; Zhang, Peng; Han, Buxing; Wu, Zhonghua; Mo, Guang; Xing, Xueqing
2014-03-12
Stable porous ionic liquid-water gel induced by inorganic salts was created for the first time. The porous gel was used to develop a one-step method to synthesize supported metal nanocatalysts. Au/SiO2, Ru/SiO2, Pd/Cu(2-pymo)2 metal-organic framework (Cu-MOF), and Au/polyacrylamide (PAM) were synthesized, in which the supports had hierarchical meso- and macropores, the size of the metal nanocatalysts could be very small (<1 nm), and the size distribution was very narrow even when the metal loading amount was as high as 8 wt %. The catalysts were extremely active, selective, and stable for oxidative esterification of benzyl alcohol to methyl benzoate, benzene hydrogenation to cyclohexane, and oxidation of benzyl alcohol to benzaldehyde because they combined the advantages of the nanocatalysts of small size and hierarchical porosity of the supports. In addition, this method is very simple.
Global behavior of a vibro-impact system with asymmetric clearances
NASA Astrophysics Data System (ADS)
Li, Guofang; Ding, Wangcai
2018-06-01
A simple dynamic model of a vibro-impact system subjected to harmonic excitation with two asymmetric clearances is considered. A semi-analytical method for obtaining periodic solutions of the vibro-impact system is proposed. The diversity and evolution of the fundamental periodic impact motions are analyzed. The formation mechanism of complete chattering-impact periodic motion with sticking, under the influence of grazing bifurcation, is analyzed. The transitional law of periodic motions within the periodic-inclusion regions is presented. The coexistence of periodic motions and the extreme sensitivity to initial values within the high-frequency region are studied. The global distribution of the periodic and chaotic motions of the system is obtained by the state-parameter space co-simulation method, which few studies have considered before. The distributions of the attractors and the attracting domains corresponding to different periodic motions are also studied.
Skin-inspired hydrogel-elastomer hybrids with robust interfaces and functional microstructures
NASA Astrophysics Data System (ADS)
Yuk, Hyunwoo; Zhang, Teng; Parada, German Alberto; Liu, Xinyue; Zhao, Xuanhe
2016-06-01
Inspired by mammalian skins, soft hybrids integrating the merits of elastomers and hydrogels have potential applications in diverse areas including stretchable and bio-integrated electronics, microfluidics, tissue engineering, soft robotics and biomedical devices. However, existing hydrogel-elastomer hybrids have limitations such as weak interfacial bonding, low robustness and difficulties in patterning microstructures. Here, we report a simple yet versatile method to assemble hydrogels and elastomers into hybrids with extremely robust interfaces (interfacial toughness over 1,000 Jm-2) and functional microstructures such as microfluidic channels and electrical circuits. The proposed method is generally applicable to various types of tough hydrogels and diverse commonly used elastomers including polydimethylsiloxane Sylgard 184, polyurethane, latex, VHB and Ecoflex. We further demonstrate applications enabled by the robust and microstructured hydrogel-elastomer hybrids including anti-dehydration hydrogel-elastomer hybrids, stretchable and reactive hydrogel-elastomer microfluidics, and stretchable hydrogel circuit boards patterned on elastomer.
Comparative economics of space resource utilization
NASA Technical Reports Server (NTRS)
Cutler, Andrew Hall
1991-01-01
Physical economic factors such as mass payback ratio, total payback ratio, and capital payback time are discussed and used to compare the economics of using resources from the Moon, Mars and its moons, and near Earth asteroids to serve certain near term markets such as propellant in low Earth orbit or launched mass reduction for lunar and Martian exploration. Methods for accounting for the time cost of money in simple figures of merit such as MPRs are explored and applied to comparisons such as those between lunar, Martian, and asteroidal resources. Methods for trading off capital and operating costs to compare schemes with substantially different capital to operating cost ratio are presented and discussed. Areas where further research or engineering would be extremely useful in reducing economic uncertainty are identified, as are areas where economic merit is highly sensitive to engineering performance - as well as areas where such sensitivity is surprisingly low.
Şenel, Mehmet
2015-03-01
A film of chitosan-polypyrrole-gold nanoparticles was fabricated by an in-situ chemical synthesis method, and its application in a glucose biosensor was investigated. The obtained biosensor exhibited a high and reproducible sensitivity of 0.58 μA/mM, a response time of ~4 s, a linear dynamic range from 1 to 20 mM, a correlation coefficient of R(2)=0.9981, and a limit of detection (LOD), based on an S/N ratio of 3, of 0.068 mM. A value of 1.83 mM was obtained for the apparent Michaelis-Menten constant. The resulting bio-nanocomposite provided a suitable environment for the enzyme to retain its bioactivity under considerably extreme conditions, and the gold nanoparticles decorating the bio-nanocomposite offer good affinity to the enzyme.
Vacuum-assisted cell loading enables shear-free mammalian microfluidic culture
Kolnik, Martin; Tsimring, Lev S; Hasty, Je
2012-01-01
Microfluidic perfusion cultures for mammalian cells provide a novel means for probing single-cell behavior but require the management of culture parameters such as flow-induced shear stress. Methods to eliminate shear stress generally focus on capturing cells in regions with high resistance to fluid flow. Here, we present a novel trapping design to easily and reliably load a high density of cells into culture chambers that are extremely isolated from potentially damaging flow effects. We utilize a transient on-chip vacuum to remove air from the culture chambers and rapidly replace the volume with a liquid cell suspension. We demonstrate the ability of this simple and robust method to load and culture three commonly used cell lines. We show how the incorporation of an on-chip function generator can be used for dynamic stimulation of cells during long-term continuous perfusion culture. PMID:22961584
Beware! A simple renal cyst could be a hydatid cyst.
Sehgal, Nidhi; Priyadarshi, Vinod
2017-01-01
The kidney is one of the most common sites for cyst formation in the body, and the management of simple cysts is dictated entirely by their symptoms and complications. Surgical decortication is an established treatment for a large and symptomatic simple renal cyst. On the other hand, hydatid cysts of the kidney are usually multiloculated complex or calcified cysts and are quite rare. Their surgical treatment also differs and requires complete excision with pericystectomy or partial/complete nephrectomy depending upon residual functional parenchyma, using extreme caution to avoid spillage, recurrence or the development of severe anaphylactic shock. A simple cyst harboring a hydatid cyst is highly uncommon and quite dangerous; if not diagnosed preoperatively, it can create huge trouble for both the patient and the operating surgeon, as happened in the present case.
Photoacoustic sensor for medical diagnostics
NASA Astrophysics Data System (ADS)
Wolff, Marcus; Groninga, Hinrich G.; Harde, Hermann
2004-03-01
The development of new optical sensor technologies has a major impact on the progress of diagnostic methods. Of the permanently increasing number of non-invasive breath tests, the 13C-Urea Breath Test (UBT) for the detection of Helicobacter pylori is the most prominent. However, many recent developments, like the detection of cancer by breath test, go beyond gastroenterological applications. We present a new detection scheme for breath analysis that employs an especially compact and simple set-up. Photoacoustic Spectroscopy (PAS) represents an offset-free technique that allows for short absorption paths and small sample cells. Using a single-frequency diode laser and taking advantage of acoustical resonances of the sample cell, we performed extremely sensitive and selective measurements. The smart data processing method contributes to the extraordinary sensitivity and selectivity as well. Also, the reasonable acquisition cost and low operational cost make this detection scheme attractive for many biomedical applications. The experimental set-up and data processing method, together with exemplary isotope-selective measurements on carbon dioxide, are presented.
Pneumatic gap sensor and method
Bagdal, Karl T.; King, Edward L.; Follstaedt, Donald W.
1992-01-01
An apparatus and method for monitoring and maintaining a predetermined width in the gap between a casting nozzle and a casting wheel, wherein the gap is monitored by means of at least one pneumatic gap sensor. The pneumatic gap sensor is mounted on the casting nozzle in proximity to the casting surface and is connected by means of a tube to a regulator and a transducer. The regulator provides a flow of gas through a restrictor to the pneumatic gap sensor, and the transducer translates the changes in the gas pressure caused by the proximity of the casting wheel to the pneumatic gap sensor outlet into a signal intelligible to a control device. The relative positions of the casting nozzle and casting wheel can thereby be selectively adjusted to continually maintain a predetermined distance between their adjacent surfaces. The apparatus and method enable accurate monitoring of the actual casting gap in a simple and reliable manner resistant to the extreme temperatures and otherwise hostile casting environment.
Pneumatic gap sensor and method
Bagdal, K.T.; King, E.L.; Follstaedt, D.W.
1992-03-03
An apparatus and method for monitoring and maintaining a predetermined width in the gap between a casting nozzle and a casting wheel, wherein the gap is monitored by means of at least one pneumatic gap sensor. The pneumatic gap sensor is mounted on the casting nozzle in proximity to the casting surface and is connected by means of a tube to a regulator and a transducer. The regulator provides a flow of gas through a restrictor to the pneumatic gap sensor, and the transducer translates the changes in the gas pressure caused by the proximity of the casting wheel to the pneumatic gap sensor outlet into a signal intelligible to a control device. The relative positions of the casting nozzle and casting wheel can thereby be selectively adjusted to continually maintain a predetermined distance between their adjacent surfaces. The apparatus and method enable accurate monitoring of the actual casting gap in a simple and reliable manner resistant to the extreme temperatures and otherwise hostile casting environment. 6 figs.
Multiscale Modeling of UHTC: Thermal Conductivity
NASA Technical Reports Server (NTRS)
Lawson, John W.; Murry, Daw; Squire, Thomas; Bauschlicher, Charles W.
2012-01-01
We are developing a multiscale framework in computational modeling for the ultra high temperature ceramics (UHTC) ZrB2 and HfB2. These materials are characterized by high melting point, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical and thermal properties. From these results, a database was constructed to fit a Tersoff style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.
Exfoliation of non-oxidized graphene flakes for scalable conductive film.
Park, Kwang Hyun; Kim, Bo Hyun; Song, Sung Ho; Kwon, Jiyoung; Kong, Byung Seon; Kang, Kisuk; Jeon, Seokwoo
2012-06-13
The increasing demand for graphene has required a new route for its mass production without causing extreme damages. Here we demonstrate a simple and cost-effective intercalation based exfoliation method for preparing high quality graphene flakes, which form a stable dispersion in organic solvents without any functionalization and surfactant. Successful intercalation of alkali metal between graphite interlayers through liquid-state diffusion from ternary KCl-NaCl-ZnCl(2) eutectic system is confirmed by X-ray diffraction and X-ray photoelectric spectroscopy. Chemical composition and morphology analyses prove that the graphene flakes preserve their intrinsic properties without any degradation. The graphene flakes remain dispersed in a mixture of pyridine and salts for more than 6 months. We apply these results to produce transparent conducting (∼930 Ω/□ at ∼75% transmission) graphene films using the modified Langmuir-Blodgett method. The overall results suggest that our method can be a scalable (>1 g/batch) and economical route for the synthesis of nonoxidized graphene flakes.
Optoelectronically probing the density of nanowire surface trap states to the single state limit
NASA Astrophysics Data System (ADS)
Dan, Yaping
2015-02-01
Surface trap states play a dominant role in the optoelectronic properties of nanoscale devices. Understanding the surface trap states allows us to properly engineer the device surfaces for better performance. But characterization of surface trap states at nanoscale has been a formidable challenge using the traditional capacitive techniques. Here, we demonstrate a simple but powerful optoelectronic method to probe the density of nanowire surface trap states to the single state limit. In this method, we choose to tune the quasi-Fermi level across the bandgap of a silicon nanowire photoconductor, allowing for capture and emission of photogenerated charge carriers by surface trap states. The experimental data show that the energy density of nanowire surface trap states is in a range from 109 cm-2/eV at deep levels to 1012 cm-2/eV near the conduction band edge. This optoelectronic method allows us to conveniently probe trap states of ultra-scaled nano/quantum devices at extremely high precision.
Qian, Wenjuan; Lu, Ying; Meng, Youqing; Ye, Zunzhong; Wang, Liu; Wang, Rui; Zheng, Qiqi; Wu, Hui; Wu, Jian
2018-06-06
'Candidatus Liberibacter asiaticus' (Las) is the most prevalent bacterium associated with huanglongbing, which is one of the most destructive diseases of citrus. In this paper, an extremely rapid and simple method for field detection of Las from leaf samples, based on recombinase polymerase amplification (RPA), is described. Three RPA primer pairs were designed and evaluated. RPA amplification was optimized so that it could be accomplished within 10 min. In combination with crude DNA extraction by a 50-fold dilution after 1 min of grinding in 0.5 M sodium hydroxide, and visual detection via fluorescent DNA dye (positive samples display obvious green fluorescence while negative samples remain colorless), the whole detection process can be accomplished within 15 min. The sensitivity and specificity of this RPA-based method were evaluated and proven to be equal to those of real-time PCR. The reliability of this method was also verified by analyzing field samples.
Zhang, Yuan Z; Lu, Sheng; Zhang, Hui Q; Jin, Zhong M; Zhao, Jian M; Huang, Jian; Zhang, Zhi F
2016-10-01
The success of total knee arthroplasty (TKA) depends on many factors; the position of the prosthesis is vitally important. The purpose of the present study was to evaluate the value of computer-aided establishment of the lower extremity mechanical axis in TKA using digital technology. A total of 36 patients undergoing TKA were randomly divided into a computer-aided navigation template group (NT) and a conventional intramedullary positioning group (CIP). Three-dimensional (3D) CT scanning images of the hip, knee, and ankle were obtained in the NT group. X-ray images and CT scans were transferred into 3D reconstruction software. A 3D bone model of the hip, knee, and ankle, as well as the modified loading, was reconstructed and saved in stereolithographic format. In the 3D reconstruction model, the mechanical axis of the lower limb was determined, and the navigation templates were produced as an accurate model using a rapid prototyping technique. The TKA in the CIP group was performed as a routine operation. CT scans were performed postoperatively to evaluate the accuracy of the two TKA methods. The average operative time of the NT group procedures was [Formula: see text] min, shorter than that of the conventional procedures ([Formula: see text] min). The coronal femoral angle, coronal tibial angle, and posterior tibial slope were [Formula: see text], [Formula: see text], [Formula: see text] in the NT group and [Formula: see text], [Formula: see text], [Formula: see text] in the CIP group, respectively. Statistically significant group differences were found. The navigation template produced from the mechanical axis of the lower extremity may provide a relatively accurate and simple method for TKA.
Comparative analysis of port tariffs in the ESCAP region
DOT National Transportation Integrated Search
2002-01-01
Ports of the Economic and Social Commission for Asia and the Pacific (ESCAP) region have long-established tariff structures. Some tariffs are extremely complex while others are relatively simple. There is, however, an increasing desire on the part of...
Programed asynchronous serial data interrogation in a two-computer system
NASA Technical Reports Server (NTRS)
Schneberger, N. A.
1975-01-01
Technique permits redundant computers, with one unit in control mode and one in MONITOR mode, to interrogate the same serial data source. Its use for program-controlled serial data transfer results in extremely simple hardware and software mechanization.
Wind data for wind driven plant. [site selection for optimal performance
NASA Technical Reports Server (NTRS)
Stodhart, A. H.
1973-01-01
Simple, averaged wind velocity data provide information on energy availability, facilitate generator site selection and enable appropriate operating ranges to be established for windpowered plants. They also provide a basis for the prediction of extreme wind speeds.
USE OF A SIMPLE THERMALISED NEUTRON FIELD FOR QUALITY ACCEPTANCE OF WHOLE BODY TLDS.
Gilvin, P J; Baker, S T; Eakins, J S; Tanner, R J
2016-09-01
The individual monitoring service of Public Health England (PHE) uses Harshaw™ whole-body and extremity thermoluminescent dosemeters (TLDs) with high-sensitivity lithium fluoride LiF:Mg,Cu,P, together with Harshaw 8800™ automated readers. The neutron-insensitive, (6)Li-depleted variety of TLD material is used by PHE because the service provides separate neutron and photon dosemeters; the neutron dosemeters are not sensitive to photons and vice versa. Since insensitivity to neutrons is a supply requirement for TLDs, there is a need to test every new (annual) consignment for this. Because it is thermal neutrons that produce a response in (6)Li TLDs, a thermal field is needed. To this end, PHE has adopted the simple approach of sandwiching the TLDs between two ISO water-filled slab phantoms. In this arrangement, the fast neutrons from an Am-Be source are effectively thermalised. Details of the method are given, together with the results of supporting MCNP calculations and some typical results.
Long, Jiangyou; Fan, Peixun; Gong, Dingwei; Jiang, Dafa; Zhang, Hongjun; Li, Lin; Zhong, Minlin
2015-05-13
Superhydrophobic surfaces with tunable water adhesion have attracted much interest in fundamental research and practical applications. In this paper, we used a simple method to fabricate superhydrophobic surfaces with tunable water adhesion. Periodic microstructures with different topographies were fabricated on copper surface via femtosecond (fs) laser irradiation. The topography of these microstructures can be controlled by simply changing the scanning speed of the laser beam. After surface chemical modification, these as-prepared surfaces showed superhydrophobicity combined with different adhesion to water. Surfaces with deep microstructures showed self-cleaning properties with extremely low water adhesion, and the water adhesion increased when the surface microstructures became flat. The changes in surface water adhesion are attributed to the transition from Cassie state to Wenzel state. We also demonstrated that these superhydrophobic surfaces with different adhesion can be used for transferring small water droplets without any loss. We demonstrate that our approach provides a novel but simple way to tune the surface adhesion of superhydrophobic metallic surfaces for good potential applications in related areas.
Forsman, Zac H.; Toonen, Robert J.
2018-01-01
Species within the scleractinian genus Pocillopora Lamarck 1816 exhibit extreme phenotypic plasticity, making identification based on morphology difficult. However, the mitochondrial open reading frame (mtORF) marker provides a useful genetic tool for identification of most species in this genus, with a notable exception of P. eydouxi and P. meandrina. Based on recent genomic work, we present a quick and simple, gel-based restriction fragment length polymorphism (RFLP) method for the identification of all six Pocillopora species occurring in Hawai‘i by amplifying either the mtORF region, a newly discovered histone region, or both, and then using the restriction enzymes targeting diagnostic sequences we unambiguously identify each species. Using this approach, we documented frequent misidentification of Pocillopora species based on colony morphology. We found that P. acuta colonies are frequently mistakenly identified as P. damicornis in Kāne‘ohe Bay, O‘ahu. We also found that P. meandrina likely has a northern range limit in the Northwest Hawaiian Islands, above which P. ligulata was regularly mistaken for P. meandrina. PMID:29441239
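The gel-based RFLP logic above, predicting which fragment lengths a digest should yield so that band patterns distinguish species, can be mimicked in silico. The helper below is a toy model under assumed inputs (a made-up sequence and the EcoRI site GAATTC, not the diagnostic Pocillopora enzymes), cutting bluntly at the start of each recognition site.

```python
def rflp_fragments(seq, site):
    """Predicted fragment lengths after cutting `seq` at every occurrence
    of the recognition `site` (toy model: blunt cut at the site's start)."""
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [bounds[j + 1] - bounds[j] for j in range(len(bounds) - 1)]
```

Two species whose marker sequences differ at the recognition site yield different fragment lists, and hence different band patterns on a gel.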
NASA Astrophysics Data System (ADS)
Mahdavi, Ali; Seyyedian, Hamid
2014-05-01
This study presents a semi-analytical solution for steady groundwater flow in trapezoidal-shaped aquifers in response to an areal diffusive recharge. The aquifer is homogeneous, anisotropic and interacts with four surrounding constant-head streams. The flow field in this laterally bounded aquifer system is efficiently constructed by means of variational calculus, by minimizing a properly defined penalty function for the associated boundary value problem. Simple yet demonstrative scenarios are defined to investigate anisotropy effects on the water table variation. Qualitative examination of the resulting equipotential contour maps and velocity vector field illustrates the validity of the method, especially in the vicinity of boundary lines. Extension to the case of a triangular-shaped aquifer with or without an impervious boundary line is also demonstrated through a hypothetical example problem. The present solution benefits from an extremely simple mathematical expression and shows close agreement with the numerical results obtained from MODFLOW. Overall, the solution may be used to conduct sensitivity analysis on various hydrogeological parameters that affect water table variation in aquifers defined in trapezoidal or triangular-shaped domains.
Multi-window detection for P-wave in electrocardiograms based on bilateral accumulative area.
Chen, Riqing; Huang, Yingsong; Wu, Jian
2016-11-01
P-wave detection is one of the most challenging aspects of electrocardiogram (ECG) analysis due to the P-wave's low amplitude, low frequency, and variable waveforms. This work introduces a novel multi-window detection method for P-wave delineation based on the bilateral accumulative area, calculated by summing the areas covered by the P-wave curve within left and right sliding windows. The onset and offset of a positive P-wave correspond to local maxima of the area detector. The position drift and the difference in area variation of local extreme points across different windows are used to combine multi-window and 12-lead synchronous detection, screening the optimal boundary points from all extreme points over different window widths and adaptively matching the P-wave location. The proposed method was validated with ECG signals from several databases, including the Standard CSE Database, the T-Wave Alternans Challenge Database, the PTB Diagnostic ECG Database, and the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database. The average sensitivity Se was 99.44% with a positive predictivity P+ of 99.37% for P-wave detection. Standard deviations of 3.7 and 4.3 ms were achieved for the onset and offset of P-waves, respectively, within the accepted tolerances required by the CSE committee. Compared with well-known delineation methods, this method achieves high sensitivity and positive predictivity with a simple calculation process. The experimental results suggest that the bilateral accumulative area could be an effective detection tool for ECG signal analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
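As a rough illustration of the area detector described above, here is a minimal single-window sketch; the published method combines several window widths and 12-lead synchronous detection, which are omitted, and the signal and window size are invented for the example.

```python
import numpy as np

def bilateral_area(x, w):
    """Bilateral accumulative area with a single window width w: the sum
    of the areas under the curve in the left and right sliding windows.
    (The published detector combines several window widths and 12-lead
    synchronous detection, which this sketch omits.)"""
    c = np.concatenate(([0.0], np.cumsum(np.abs(x))))  # prefix sums of |x|
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w)
        out[i] = (c[i] - c[lo]) + (c[hi] - c[i])  # left area + right area
    return out

# Toy "P wave": a small positive hump on a flat baseline
t = np.linspace(0.0, 1.0, 200)
signal = np.exp(-((t - 0.5) ** 2) / 0.002)
detector = bilateral_area(signal, w=20)
peak = int(np.argmax(detector))  # detector maximum lands on the hump
```

In the full method the onset and offset are picked from local maxima of this detector across window widths, rather than from its global maximum as here.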
Kloos, Karin; Schloter, Michael; Meyer, Ortwin
2006-11-01
Acid resins are residues produced by a recycling process for used oils that was in use in the forties and fifties of the last century. The resin-like material is highly contaminated with mineral oil hydrocarbons, extremely acidic, and co-contaminated with substituted and aromatic hydrocarbons as well as heavy metals. To determine the potential for microbial biodegradation, the acid resin deposit and its surroundings were screened for microbial activity by soil respiration measurements. No microbial activity was found in the core deposit; however, biodegradation of hydrocarbons was possible in zones with a lower degree of contamination surrounding the deposit. An extremely acidophilic microbial community was detected close to the core deposit. With a simple ecotoxicological approach it could be shown that the pure acid resin, which formed the major part of the core deposit, was toxic to the indigenous microflora due to its extremely low pH of 0-1.
Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Lauer, Frank
This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale and its performance properties can be evaluated. The results of an initial prototype are encouraging: a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
Capability of insulator study by photoemission electron microscopy at SPring-8.
Ohkochi, Takuo; Kotsugi, Masato; Yamada, Keisuke; Kawano, Kenji; Horiba, Koji; Kitajima, Fumio; Oura, Masaki; Shiraki, Susumu; Hitosugi, Taro; Oshima, Masaharu; Ono, Teruo; Kinoshita, Toyohiko; Muro, Takayuki; Watanabe, Yoshio
2013-07-01
An observation method for photoemission electron microscopy (PEEM) on insulating samples has been established in an extremely simple way. Surface conductivity is induced locally on an insulating surface by continuous irradiation with soft X-rays, and Au films close to the area of interest allow the accumulated charges on the insulated area to be released to ground level. Magnetic domain observations of a NiZn ferrite, local X-ray absorption spectroscopy of sapphire, high-resolution imaging of a poorly conducting Li0.9CoO2 film surface, and Au pattern evaporation on a fine rock particle are demonstrated. Using this technique, all users' experiments on poorly conducting samples have been performed successfully at the PEEM experimental station of SPring-8.
Emission Measure Distribution and Heating of Two Active Region Cores
NASA Technical Reports Server (NTRS)
Tripathi, Durgesh; Klimchuk, James A.; Mason, Helen E.
2011-01-01
Using data from the Extreme-ultraviolet Imaging Spectrometer aboard Hinode, we have studied the coronal plasma in the core of two active regions. Concentrating on the area between opposite-polarity moss, we found emission measure distributions having an approximate power-law form EM ∝ T^2.4 from log T = 5.55 up to a peak at log T = 6.57. The observations are explained extremely well by a simple nanoflare model. However, in the absence of additional constraints, the observations could possibly also be explained by steady heating.
C-H carbonylation: In situ acyl triflates ace it
NASA Astrophysics Data System (ADS)
Lee, Yong Ho; Morandi, Bill
2018-02-01
A simple palladium catalyst mediates the facile formation of aroyl triflates, an extremely reactive class of electrophiles. These intermediates, generated in situ, enable the Friedel-Crafts acylation of traditionally unreactive arenes, addressing a significant gap in C-H carbonylation methodology.
ERIC Educational Resources Information Center
Niagi, John; Warner, John; Andreesco, Silvana
2007-01-01
The study describes the development of new biosensors based on metal nanoparticles, chosen for their high surface area and strong binding ability. The adopted procedure is extremely simple and versatile and can be used in various applications of electrochemistry.
The Impact of Different Environmental Conditions on Cognitive Function: A Focused Review
Taylor, Lee; Watkins, Samuel L.; Marshall, Hannah; Dascombe, Ben J.; Foster, Josh
2016-01-01
Cognitive function defines performance in objective tasks that require conscious mental effort. Extreme environments, namely heat, hypoxia, and cold, can all alter human cognitive function due to a variety of psychological and/or biological processes. The aims of this Focused Review were to discuss (1) the current state of knowledge on the effects of heat, hypoxic and cold stress on cognitive function, (2) the potential mechanisms underpinning these alterations, and (3) plausible interventions that may maintain cognitive function upon exposure to each of these environmental stressors. The available evidence suggests that the effects of heat, hypoxia, and cold stress on cognitive function are both task and severity dependent. Complex tasks are particularly vulnerable to extreme heat stress, whereas both simple and complex task performance appear to be vulnerable even at moderate altitudes. Cold stress also appears to negatively impact both simple and complex task performance; however, the research in this area is sparse in comparison to heat and hypoxia. In summary, this Focused Review provides updated knowledge regarding the effects of extreme environmental stressors on cognitive function and their biological underpinnings. Tyrosine supplementation may help individuals maintain cognitive function in very hot, hypoxic, and/or cold conditions, although more research is needed to clarify this and other postulated interventions. PMID:26779029
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events in regional climate model (RCM) output. The new QM version (QMβ) combines parametric and nonparametric bias correction methods, and the new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods; the split-sample approach mimics the application to future climate scenarios, and a cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
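For readers unfamiliar with the baseline, the standard empirical quantile mapping that the paper extends can be sketched roughly as follows: map each value through the model CDF, then back through the observed inverse CDF. The gamma-distributed toy data are illustrative, not RCM output, and the paper's parametric QMβ extensions are omitted.

```python
import numpy as np

def quantile_map(model, obs, values):
    """Empirical (nonparametric) quantile mapping: send each value
    through the empirical model CDF, then back through the empirical
    observed inverse CDF."""
    ranks = np.searchsorted(np.sort(model), values, side="right") / len(model)
    return np.quantile(np.sort(obs), np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(0)
simulated = rng.gamma(2.0, 3.0, 5000)  # toy "RCM" precipitation, wet-biased
observed = rng.gamma(2.0, 2.0, 5000)   # toy "station" precipitation
corrected = quantile_map(simulated, observed, simulated)
```

The instability the paper describes arises exactly here: for values beyond the calibration sample (new extremes), the empirical correction function above has no information, which motivates the smoother parametric tail of QMβ.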
Huang, Yue; Zheng, Han; Liu, Chi; Ding, Xinghao; Rohde, Gustavo K
2017-11-01
Epithelium-stroma classification is a necessary preprocessing step in histopathological image analysis. Current deep learning based recognition methods for histology data require collection of large volumes of labeled data in order to train a new neural network when there are changes to the image acquisition procedure. However, it is extremely expensive for pathologists to manually label sufficient volumes of data for each pathology study in a professional manner, which limits real-world applications. A very simple but effective deep learning method, which introduces the concept of unsupervised domain adaptation into a simple convolutional neural network (CNN), is proposed in this paper. Inspired by transfer learning, our approach assumes that the training data and testing data follow different distributions, and applies an adaptation operation to more accurately estimate the kernels the CNN uses for feature extraction, enhancing performance by transferring knowledge from labeled data in the source domain to unlabeled data in the target domain. The model has been evaluated using three independent public epithelium-stroma datasets by cross-dataset validation. The experimental results demonstrate that for epithelium-stroma classification, the proposed framework outperforms the state-of-the-art deep neural network model, and it also achieves better performance than other existing deep domain adaptation methods. The proposed model can be considered a better option for real-world applications in histopathological image analysis, since it does not require large-scale labeled data in each specified domain.
Huang, Huacheng; Ning, Yanli; Zhang, Bucheng; Lou, Cen
2015-01-01
Carbon-11-raclopride (¹¹C-R) is a positron-emitting radiotracer successfully used for the study of cognitive control and widely applied in PET imaging. A simple automated preparation of ¹¹C-R by the reaction of carbon-11-methyl triflate (¹¹C-MeOTf) or ¹¹C-methyl iodide (¹¹C-MeI) with demethylraclopride is described. Specifically, we used a simple setup with an additional "U" reaction vessel for ¹¹C-MeOTf compared with ¹¹C-MeI, and assessed the influence of several solvents and of the amount of the precursor on ¹¹C-methylation of demethylraclopride by the bubbling method. A reversal of the retention order between the product and its precursor was achieved for ¹¹C-R, enabling collection of the purified ¹¹C-R from the HPLC column after a shorter retention time. With the improved radiosynthesis and purification strategy, ¹¹C-R could be prepared with a higher radiochemical yield than in previous studies: 76% for ¹¹C-MeOTf and >26% for ¹¹C-MeI, with better radiochemical purity (>99% for both) compared to the purity previously obtained for ¹¹C-R using an HPLC method with acetonitrile as part of the mobile phase. Furthermore, by using ethanol as the organic modifier, residual solvent analysis prior to human injection could be avoided and ¹¹C-R could be injected directly following simple dilution and sterile filtration. The improved radiosynthesis and HPLC purification, in combination with an ethanol-containing eluent, greatly shortened the preparation time of ¹¹C-R, gave a higher radiochemical yield and purity, and can be used for multiple and faster syntheses of ¹¹C-R and probably of other ¹¹C-labeled radiopharmaceuticals.
NASA Astrophysics Data System (ADS)
Vanderlinden, J. P.; Fellmer, M.; Capellini, N.; Meinke, I.; Remvikos, Y.; Bray, D.; Pacteau, C.; Von Storch, H.
2014-12-01
Attribution of extreme weather events has recently generated a lot of interest simultaneously within the general public, the scientific community, and stakeholders affected by meteorological extremes. This interest calls for the need to explore the potential convergence of current attribution science with the desires and needs of stakeholders. Such an enquiry contributes to the development of climate services aiming at quantifying the human responsibility for particular events. Through interviews with climate scientists, through analysis of the press coverage of extreme meteorological events, and through stakeholder (private sector, government services, and local and regional government) focus groups, we analyze how the concepts associated with extreme event attribution are socially represented. From the corpuses generated in the course of this enquiry, we build a grounded, bottom-up theorization of extreme weather event attribution. This bottom-up theorization allows for a framing of potential climate services in a way that is attuned to the needs and expectations of the stakeholders. From apparently simple formulations: "what is an extreme event?", "what makes it extreme?", "what is meant by attribution of extreme weather events?", "what do we want to attribute?", "what is a climate service?", we demonstrate the polysemy of these terms and propose ways to address the challenges associated with the juxtaposition of four highly loaded concepts: extreme - event - attribution - climate services.
NASA Technical Reports Server (NTRS)
Aftosmis, M. J.; Berger, M. J.; Murman, S. M.; Kwak, Dochan (Technical Monitor)
2002-01-01
The proposed paper will present recent extensions in the development of an efficient Euler solver for adaptively refined Cartesian meshes with embedded boundaries. The paper will focus on extensions of the basic method to include solution adaptation, time-dependent flow simulation, and arbitrary rigid domain motion. The parallel multilevel method makes use of on-the-fly parallel domain decomposition to achieve extremely good scalability on large numbers of processors, and is coupled with an automatic coarse mesh generation algorithm for efficient processing by a multigrid smoother. Numerical results are presented demonstrating parallel speed-ups of up to 435 on 512 processors. Solution-based adaptation may be keyed off truncation error estimates using tau-extrapolation or a variety of feature-detection-based refinement parameters. The multigrid method is extended to time-dependent flows through the use of a dual-time approach. The extension to rigid domain motion uses an Arbitrary Lagrangian-Eulerian (ALE) formulation, and results will be presented for a variety of two- and three-dimensional example problems with both simple and complex geometry.
Metal tube reducer is inexpensive and simple to operate
NASA Technical Reports Server (NTRS)
Mayfield, R. M.
1967-01-01
Low-cost metal tube reducer accepts tubing up to 1 inch outer diameter and can reduce this diameter to less than 1/2 inch with controlled wall thickness. This device can reduce all of the tube without waste. It produces extremely good surface finishes.
ERIC Educational Resources Information Center
Bishara, Monica
1990-01-01
Shows how high school students used foam carpet padding to create forms for still-life drawing. Discusses learning to progress from simple-line drawing to a three-dimensional image. Identifies the drawing of shadows, and extreme light and dark values, as points that need to be emphasized repeatedly. (KM)
NASA Astrophysics Data System (ADS)
Casas-Castillo, M. Carmen; Rodríguez-Solà, Raúl; Navarro, Xavier; Russo, Beniamino; Lastra, Antonio; González, Paula; Redaño, Angel
2018-01-01
The fractal behavior of extreme rainfall intensities registered between 1940 and 2012 by the Retiro Observatory of Madrid (Spain) has been examined, and a simple scaling regime ranging from 25 min to 3 days of duration has been identified. Thus, an intensity-duration-frequency (IDF) master equation of the location has been constructed in terms of the simple scaling formulation. The scaling behavior of probable maximum precipitation (PMP) for durations between 5 min and 24 h has also been verified. For the statistical estimation of the PMP, an envelope curve of the frequency factor (k_m) based on a total of 10,194 station-years of annual maximum rainfall from 258 stations in Spain has been developed. This curve could be useful for estimating suitable values of PMP at any point of the Iberian Peninsula from the basic statistical parameters (mean and standard deviation) of its rainfall series.
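The simple-scaling IDF formulation referred to above can be written generically as follows; this is the standard textbook form, not the exponent or moments actually fitted for Madrid.

```latex
% Simple-scaling hypothesis: the annual-maximum intensity of duration d
% equals, in distribution, a rescaled intensity of a reference duration D
I_d \overset{\mathrm{dist}}{=} \left(\frac{d}{D}\right)^{-\eta} I_D
% which leads to an IDF master equation of the form
i(d,T) = d^{-\eta}\left(\mu_1 + \sigma_1 K_T\right)
% where \mu_1 and \sigma_1 are the mean and standard deviation of the
% unit-duration intensity and K_T is the frequency factor for return period T
```

A single scaling exponent η estimated from the data then generates the whole family of IDF curves, which is why identifying the scaling regime (25 min to 3 days here) is the key step.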
Segmentation in Tardigrada and diversification of segmental patterns in Panarthropoda.
Smith, Frank W; Goldstein, Bob
2017-05-01
The origin and diversification of segmented metazoan body plans has fascinated biologists for over a century. The superphylum Panarthropoda includes three phyla of segmented animals: Euarthropoda, Onychophora, and Tardigrada. This superphylum includes representatives with relatively simple and representatives with relatively complex segmented body plans. At one extreme of this continuum, euarthropods exhibit an incredible diversity of serially homologous segments. Furthermore, distinct tagmosis patterns are exhibited by different classes of euarthropods. At the other extreme, all tardigrades share a simple segmented body plan that consists of a head and four leg-bearing segments. The modular body plans of panarthropods make them a tractable model for understanding diversification of animal body plans more generally. Here we review results of recent morphological and developmental studies of tardigrade segmentation. These results complement investigations of segmentation processes in other panarthropods and paleontological studies to illuminate the earliest steps in the evolution of panarthropod body plans. Copyright © 2016 Elsevier Ltd. All rights reserved.
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment to airborne particulate matter (PM). The method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e. work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. Precision and accuracy of each component, as well as the integrated method performance, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e. work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
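The location-activity apportioning step can be caricatured with a toy rule like the one below. The category names follow the abstract, but the decision order and the speed threshold are assumptions for illustration only; the paper's actual algorithm and thresholds are not given here.

```python
def apportion(speed_m_s, at_home, at_work):
    """Toy temporospatial apportioning rule: test known locations first,
    then use speed to flag transit. Threshold of 1.5 m/s (roughly faster
    than walking pace) is an assumed value, not from the paper."""
    if at_home:
        return "home"
    if at_work:
        return "work/school"
    if speed_m_s > 1.5:
        return "transit"
    return "other"

labels = [
    apportion(12.0, at_home=False, at_work=False),  # moving fast: commuting
    apportion(0.0, at_home=False, at_work=True),    # stationary at work
    apportion(0.0, at_home=True, at_work=False),    # stationary at home
]
```

In the real system the `at_home`/`at_work` flags would come from comparing GPS fixes against geocoded home and work locations, and speed from successive fixes.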
Bakand, S; Winder, C; Khalil, C; Hayes, A
2005-12-01
Exposure to occupational and environmental contaminants is a major contributor to human health problems. Inhalation of gases, vapors, aerosols, and mixtures of these can cause a wide range of adverse health effects, ranging from simple irritation to systemic diseases. Despite significant achievements in the risk assessment of chemicals, the toxicological database, particularly for industrial chemicals, remains limited. Considering there are approximately 80,000 chemicals in commerce, and an extremely large number of chemical mixtures, in vivo testing of this large number is unachievable from both economical and practical perspectives. While in vitro methods are capable of rapidly providing toxicity information, regulatory agencies in general are still cautious about the replacement of whole-animal methods with new in vitro techniques. Although studying the toxic effects of inhaled chemicals is a complex subject, recent studies demonstrate that in vitro methods may have significant potential for assessing the toxicity of airborne contaminants. In this review, current toxicity test methods for risk evaluation of industrial chemicals and airborne contaminants are presented. To evaluate the potential applications of in vitro methods for studying respiratory toxicity, more recent models developed for toxicity testing of airborne contaminants are discussed.
Facile preparation of super durable superhydrophobic materials.
Wu, Lei; Zhang, Junping; Li, Bucheng; Fan, Ling; Li, Lingxiao; Wang, Aiqin
2014-10-15
Low stability and complicated, expensive fabrication procedures seriously hinder practical applications of superhydrophobic materials. Here we report an extremely simple method for preparing super durable superhydrophobic materials, e.g., textiles and sponges, by dip coating in fluoropolymers (FPs). The morphology, surface chemical composition, and mechanical, chemical and environmental stabilities of the superhydrophobic textiles were investigated. The results show how simple the preparation of super durable superhydrophobic textiles can be! The superhydrophobic textiles outperform their natural counterparts and most of the state-of-the-art synthetic superhydrophobic materials in stability. Intensive mechanical abrasion, long immersion in various liquids and repeated washing have no obvious influence on the superhydrophobicity. Water drops are spherical in shape on the samples and easily roll off even after these harsh stability tests. In addition, this simple dip coating approach is applicable to various synthetic and natural textiles and can be easily scaled up. Furthermore, the results prove that a two-tier roughness is helpful but not essential for creating super durable superhydrophobic textiles: the combination of the microscale roughness of textiles with materials of very low surface tension is enough. Following the same procedure, superhydrophobic polyurethane sponges can be prepared, which show high oil absorbency, oil/water separation efficiency and stability. Copyright © 2014 Elsevier Inc. All rights reserved.
Pírez, Macarena; Gonzalez-Sapienza, Gualberto; Sienra, Daniel; Ferrari, Graciela; Last, Michael; Last, Jerold A; Brena, Beatriz M
2013-01-15
In recent years, the international demand for commodities has prompted enormous growth in agriculture in most South American countries. Due to intensive use of fertilizers, cyanobacterial blooms have become a recurrent phenomenon throughout the continent, but their potential health risk remains largely unknown due to the lack of analytical capacity. In this paper we report the main results and conclusions of more than five years of systematic monitoring of cyanobacterial blooms in 20 beaches of Montevideo, Uruguay, on the Rio de la Plata, the fifth largest basin in the world. A locally developed microcystin ELISA was used to establish a sustainable monitoring program that revealed seasonal peaks of extremely high toxicity, more than one-thousand-fold greater than the WHO limit for recreational water. Comparison with cyanobacterial cell counts and chlorophyll-a determination, two commonly used parameters for indirect estimation of toxicity, showed that such indicators can be highly misleading. On the other hand, the accumulated experience led to the definition of a simple criterion for visual classification of blooms, that can be used by trained lifeguards and technicians to take rapid on-site decisions on beach management. The simple and low cost approach is broadly applicable to risk assessment and risk management in developing countries. Copyright © 2012 Elsevier Ltd. All rights reserved.
Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall
NASA Astrophysics Data System (ADS)
Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik
2016-02-01
Rainfall is one of the climatic elements with high diversity, and extreme rainfall in particular has many negative impacts, so methods are needed to minimize the damage that may occur. Global circulation models (GCMs) are currently the best tools for forecasting global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique to develop the relationship between GCM output as global-scale independent variables and rainfall as a local-scale response variable. Using GCM output directly is difficult when assessed against observations because it is high-dimensional and its variables are multicollinear. Common methods used to handle this problem are principal component analysis (PCA) and partial least squares regression; a newer alternative is the lasso, which has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall at the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (extreme wet in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
Extreme wind-wave modeling and analysis in the south Atlantic ocean
NASA Astrophysics Data System (ADS)
Campos, R. M.; Alves, J. H. G. M.; Guedes Soares, C.; Guimaraes, L. G.; Parente, C. E.
2018-04-01
A set of wave hindcasts is constructed using two different types of wind calibration, followed by an additional test retuning the input source term Sin in the wave model. The goal is to improve the simulation of extreme wave events in the South Atlantic Ocean without compromising average conditions. Wind fields are based on the Climate Forecast System Reanalysis (CFSR/NCEP). The first wind calibration applies a simple linear regression model, with coefficients obtained from the comparison of CFSR against buoy data. The second is a method in which deficiencies of the CFSR associated with severe sea state events are remedied by replacing "defective" winds with satellite data within cyclones. A total of six wind datasets forced WAVEWATCH III, and three additional tests with modified Sin lead to a total of nine wave hindcasts that are evaluated against satellite and buoy data for ambient and extreme conditions. The target variable is the significant wave height (Hs). With increasing sea-state severity, the hindcast underestimation grows progressively and can be expressed as a function of percentiles. The wind calibration using a linear regression function shows results similar to the adjustment of the Sin term (increase of the βmax parameter) in WAVEWATCH III: it effectively reduces the average bias of Hs but cannot avoid the increase of errors with percentiles. The use of blended scatterometer winds within cyclones reduces the growing wave hindcast errors, mainly above the 93rd percentile, and leads to a better representation of Hs at the peak of the storms. The combination of linear regression calibration of non-cyclonic winds with scatterometer winds within the cyclones generated a wave hindcast with small errors from calm to extreme conditions. This approach reduced the percentage error of Hs from 14% to less than 8% for extreme waves, while also improving the RMSE.
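The first calibration step, a linear regression of buoy winds against reanalysis winds, can be sketched as follows. All numbers are synthetic, not CFSR or real buoy data.

```python
import numpy as np

# Sketch of linear-regression wind calibration: fit buoy-observed wind
# speed against collocated model wind speed, then apply the fitted line
# to the full model field.
rng = np.random.default_rng(1)
model_wind = rng.weibull(2.0, 1000) * 8.0            # model wind speeds (m/s)
true_wind = 1.15 * model_wind + 0.5                  # model biased low (toy truth)
buoy_wind = true_wind + rng.normal(0.0, 0.8, 1000)   # noisy buoy observations

slope, intercept = np.polyfit(model_wind, buoy_wind, 1)  # least-squares fit
calibrated = slope * model_wind + intercept

bias_before = float(np.mean(model_wind - true_wind))
bias_after = float(np.mean(calibrated - true_wind))      # much closer to zero
```

As the abstract notes, such a fit removes the average bias but not the percentile-dependent error growth, which is why the cyclone-wind replacement is needed for the extremes.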
Encouraging moderation: clues from a simple model of ideological conflict.
Marvel, Seth A; Hong, Hyunsuk; Papush, Anna; Strogatz, Steven H
2012-09-14
Some of the most pivotal moments in intellectual history occur when a new ideology sweeps through a society, supplanting an established system of beliefs in a rapid revolution of thought. Yet in many cases the new ideology is as extreme as the old. Why is it then that moderate positions so rarely prevail? Here, in the context of a simple model of opinion spreading, we test seven plausible strategies for deradicalizing a society and find that only one of them significantly expands the moderate subpopulation without risking its extinction in the process.
Receive Mode Analysis and Design of Microstrip Reflectarrays
NASA Technical Reports Server (NTRS)
Rengarajan, Sembiam
2011-01-01
Traditionally microstrip or printed reflectarrays are designed using the transmit mode technique. In this method, the size of each printed element is chosen so as to provide the required value of the reflection phase such that a collimated beam results along a given direction. The reflection phase of each printed element is approximated using an infinite array model. The infinite array model is an excellent engineering approximation for a large microstrip array since the size or orientation of elements exhibits a slow spatial variation. In this model, the reflection phase from a given printed element is approximated by that of an infinite array of elements of the same size and orientation when illuminated by a local plane wave. Thus the reflection phase is a function of the size (or orientation) of the element, the elevation and azimuth angles of incidence of a local plane wave, and polarization. Typically, one computes the reflection phase of the infinite array as a function of several parameters such as size/orientation, elevation and azimuth angles of incidence, and in some cases for vertical and horizontal polarization. The design requires the selection of the size/orientation of the printed element to realize the required phase by interpolating or curve fitting all the computed data. This is a substantially complicated problem, especially in applications requiring a computationally intensive commercial code to determine the reflection phase. In dual polarization applications requiring rectangular patches, one needs to determine the reflection phase as a function of five parameters (dimensions of the rectangular patch, elevation and azimuth angles of incidence, and polarization). This is an extremely complex problem. The new method employs the reciprocity principle and reaction concept, two well-known concepts in electromagnetics to derive the receive mode analysis and design techniques. 
In the "receive mode design" technique, the reflection phase is computed for a plane wave incident on the reflectarray from the direction of the beam peak. In antenna applications with a single collimated beam, this method is extremely simple since all printed elements see the same angles of incidence. Thus the number of parameters is reduced by two when compared to the transmit mode design. The reflection phase computation as a function of five parameters in the rectangular patch array discussed previously is reduced to a computational problem with three parameters in the receive mode. Furthermore, if the beam peak is in the broadside direction, the receive mode design is polarization independent and the reflection phase computation is a function of two parameters only. For a square patch array, it is a function of the size, one parameter only, thus making it extremely simple.
GREENER SYNTHESIS OF NOBLE METAL NANOSTRUCTURES AND NANOCOMPOSITES
A brief account of a greener preparation of nanoparticles which reduces or eliminates the use and generation of hazardous substances is presented. The utility of vitamins B1 and B2, which can function both as reducing and capping agents, provides an extremely simple, one-pot, gre...
3000 Years of Breeding for Drought Tolerance in Soybean
USDA-ARS?s Scientific Manuscript database
Plants and animals have both suffered through extreme environments over the long evolutionary history of life. Amazing adaptations such as camels and cactus have occurred. On the whole however, evolutionary adaptation to stress has been greater in plants than in animals. The simple difference betwe...
New statistical downscaling for Canada
NASA Astrophysics Data System (ADS)
Murdock, T. Q.; Cannon, A. J.; Sobie, S.
2013-12-01
This poster will document the production of a set of statistically downscaled future climate projections for Canada based on the latest available RCM and GCM simulations - the North American Regional Climate Change Assessment Program (NARCCAP; Mearns et al. 2007) and the Coupled Model Intercomparison Project Phase 5 (CMIP5). The main stages of the project included (1) downscaling method evaluation, (2) scenario selection, (3) production of statistically downscaled results, and (4) applications of results. We build upon a previous downscaling evaluation project (Bürger et al. 2012, Bürger et al. 2013) in which a quantile-based method (Bias Correction/Spatial Disaggregation - BCSD; Werner 2011) provided high skill compared with four other methods representing the majority of types of downscaling used in Canada. Additional quantile-based methods (Bias-Correction/Constructed Analogues, Maurer et al. 2010; and Bias-Correction/Climate Imprint, Hunter and Meentemeyer 2005) were evaluated. A subset of 12 CMIP5 simulations was chosen based on an objective set of selection criteria. This included hemispheric skill assessment based on the CLIMDEX indices (Sillmann et al. 2013), historical criteria used previously at the Pacific Climate Impacts Consortium (Werner 2011), and refinement based on a modified clustering algorithm (Houle et al. 2012; Katsavounidis et al. 1994). Statistical downscaling was carried out on the NARCCAP ensemble and a subset of the CMIP5 ensemble. We produced downscaled scenarios over Canada at a daily time resolution and 300 arc second (~10 km) spatial resolution from historical runs for 1951-2005 and from RCP 2.6, 4.5, and 8.5 projections for 2006-2100. The ANUSPLIN gridded daily dataset (McKenney et al. 2011) was used as a target. It has national coverage, spans the historical period of interest 1951-2005, and has daily time resolution. It uses interpolation of station data based on thin-plate splines.
This type of method has been shown to have superior skill in interpolating RCM data over North America (McGinnis et al. 2012). An early application of the new dataset was to provide projections of climate extremes for adaptation planning by the British Columbia Ministry of Transportation and Infrastructure. Recently, certain stretches of highway have experienced extreme precipitation events resulting in substantial damage to infrastructure. As part of the planning process to refurbish or replace components of these highways, information about the magnitude and frequency of future extreme events is needed to inform the infrastructure design. The increased resolution provided by downscaling improves the representation of topographic features, particularly valley temperature and precipitation effects. A range of extreme values, from simple daily maxima and minima to complex multi-day and threshold-based climate indices, was computed and analyzed from the downscaled output. Selected results from this process, and how the projections of precipitation extremes are being used in the context of highway infrastructure planning in British Columbia, will be presented.
McDonnell, Mark D.; Tissera, Migel D.; Vladusich, Tony; van Schaik, André; Tapson, Jonathan
2015-01-01
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the ‘Extreme Learning Machine’ (ELM) approach, which also enables a very rapid training time (∼10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to be capable of achieving less than 5.5% error rates on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random ‘receptive field’ sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close to state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should cause it to be given greater consideration, either as a standalone method for simpler problems, or as the final classification stage in deep neural networks applied to more difficult problems. PMID:26262687
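A minimal sketch of the ELM variant described above - fixed, sparse, random "receptive field" input weights plus a least-squares-trained output layer - on synthetic data. The data, patch size of 8, hidden-layer size, and ridge parameter are placeholders, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data standing in for flattened image vectors.
n, d, n_hidden, n_classes = 300, 64, 200, 3
X = rng.normal(size=(n, d))
y = rng.integers(0, n_classes, size=n)
T = np.eye(n_classes)[y]                      # one-hot targets

# Random, sparse input weights: each hidden unit sees only a random
# contiguous "patch" of the input, so most weights are exactly zero.
W = np.zeros((d, n_hidden))
for j in range(n_hidden):
    lo = rng.integers(0, d - 8)
    W[lo:lo + 8, j] = rng.normal(size=8)      # small random receptive field

H = np.tanh(X @ W)                            # hidden activations (never trained)

# Only the output weights are trained, by regularized least squares.
lam = 1e-2
B = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)

pred = np.argmax(H @ B, axis=1)
train_acc = np.mean(pred == y)
```

The single linear solve is why training is so fast compared with iterative backpropagation; the patch construction gives the ~90% sparsity the abstract describes (here (64-8)/64 = 87.5% of input weights are zero).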
A laboratory investigation into microwave backscattering from sea ice. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Bredow, Jonathan W.
1989-01-01
The sources of scattering of artificial sea ice were determined, backscatter measurements were compared semi-quantitatively with theoretical predictions, and inexpensive polarimetric radars were developed for sea ice backscatter studies. A brief review of the dielectric properties of sea ice and of commonly used surface and volume scattering theories is presented. A description is provided of the backscatter measurements performed and the experimental techniques used. The development of inexpensive short-range polarimetric radars is discussed. The steps taken to add polarimetric capability to a simple FM-CW radar are considered, as are sample polarimetric phase measurements of the radar. Ice surface characterization data and techniques are discussed, including computation of surface rms height and correlation length and air bubble distribution statistics. A method is also presented for estimating the standard deviation of rms height and correlation length for cases with few data points. Comparisons were made between backscatter measurements and theory. It was determined that backscatter from an extremely smooth saline ice surface at C band cannot be attributed to surface scatter alone. It was found that snow cover had a significant influence on backscatter from extremely smooth saline ice at C band.
Wu, Xialu; Ding, Nini; Zhang, Wenhua; Xue, Fei; Hor, T S Andy
2015-07-20
The use of simple self-assembly methods to direct or engineer porosity or channels of desirable functionality is a major challenge in the field of metal-organic frameworks. We herein report a series of frameworks built by modifying the square ring structure of [{Cu2(5-dmpy)2(L1)2(H2O)(MeOH)}2{ClO4}4]·4MeOH (1·4MeOH, 5-dmpy = 5,5'-dimethyl-2,2'-bipyridine, HL1 = 4-pyridinecarboxylic acid). Use of pyridyl carboxylates as directional spacers in a bipyridyl-chelated Cu(II) system led to the growth of the square unit into other configurations, namely square ring, square chain, and square tunnel. Another remarkable characteristic is that the novel use of two isomers of pyridinyl-acrylic acid directs selectively toward two different extreme tubular forms: aligned stacking of discrete hexagonal rings, and crack-free one-dimensional continuum polymers. This provides a unique example of two extreme forms of copper nanotubes from two isomeric spacers. All of the reactions are performed in a one-pot self-assembly process at room temperature, while the topological selectivity is determined exclusively by the skeletal characteristics of the spacers.
Bonenkamp, J J; Thompson, J F; de Wilt, J H; Doubrovsky, A; de Faria Lima, R; Kam, P C A
2004-12-01
Isolated limb infusion (ILI) is a simple yet effective alternative to conventional isolated limb perfusion for the treatment of advanced melanoma of the extremities. The study group comprised 13 patients with very advanced limb disease who had failed to achieve a satisfactory response to one or more ILIs with melphalan, and in whom amputation was the only other realistic treatment option. The aim of this study was to evaluate the efficacy and toxicity of ILI with fotemustine after systemic chemosensitisation with dacarbazine (DTIC). Complete remission was achieved in four patients and partial remission in eight patients, with a median response duration of 3 months. Limb salvage was achieved in five of 12 assessable patients (42%). Limb toxicity peaked 9 days after ILI; two patients experienced Wieberdink grade IV (severe) toxicity and four patients had grade V toxicity (requiring early amputation). ILI with fotemustine after DTIC chemosensitisation can be successful when gross limb disease has not been controlled by one or more ILIs with melphalan. However, it cannot be recommended as a routine method of treatment for advanced melanoma of the extremities because of the high incidence of severe limb toxicity.
Intensity changes in future extreme precipitation: A statistical event-based approach.
NASA Astrophysics Data System (ADS)
Manola, Iris; van den Hurk, Bart; de Moel, Hans; Aerts, Jeroen
2017-04-01
Short-lived precipitation extremes are often responsible for hazards in urban and rural environments, with economic and environmental consequences. Precipitation intensity is expected to increase by about 7% per degree of warming, according to the Clausius-Clapeyron (CC) relation. However, observations often show a much stronger increase in sub-daily values. In particular, the behavior of hourly summer precipitation from radar observations with the dew point temperature (the Pi-Td relation) for the Netherlands suggests that for moderate to warm days the intensification of precipitation can exceed 21% per degree of warming, that is, 3 times the expected CC rate. The rate of change depends on the initial precipitation intensity: low percentiles increase at a rate below CC, medium percentiles at 2CC, and moderate-high and high percentiles at 3CC. This non-linear statistical Pi-Td relation is suggested for use as a delta-transformation to project how a historic extreme precipitation event would intensify under future, warmer conditions. Here, the Pi-Td relation is applied to a selected historic extreme precipitation event to 'up-scale' its intensity to warmer conditions. Additionally, the selected historic event is simulated in the high-resolution, convection-permitting weather model Harmonie. The initial and boundary conditions are altered to represent future conditions. The comparison between the statistical and the numerical method of projecting the historic event to future conditions showed comparable intensity changes, which, depending on the initial percentile intensity, range from below CC to a 3CC rate of change per degree of warming. The model tends to overestimate the future intensities for the low and the very high percentiles, and the clouds are somewhat displaced due to small wind and convection changes.
The total spatial cloud coverage in the model remains unchanged, as in the statistical method. The advantage of the suggested Pi-Td method of projecting future precipitation events from historic events is that it is simple to use and less expensive, in time, computation, and resources, than a numerical model. The outcome can be used directly for hydrological and climatological studies and for impact analyses such as flood risk assessments.
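A toy version of such a percentile-dependent delta-transformation is easy to state in code. The piecewise CC/2CC/3CC rates below mirror the percentile bands quoted above, but the band thresholds and the synthetic event are illustrative stand-ins, not the fitted Pi-Td curve.

```python
import numpy as np

CC = 0.07  # Clausius-Clapeyron rate: ~7% intensity increase per K

def delta_transform(rain, d_td):
    """Scale an observed precipitation field to warmer conditions.

    Each cell is scaled at a rate depending on its percentile within the
    event: low percentiles at 1xCC, the highest at 3xCC. The band edges
    (0.5, 0.9) are illustrative, not the empirical Pi-Td relation.
    """
    pct = np.argsort(np.argsort(rain)) / (len(rain) - 1)  # empirical percentile
    factor_cc = np.where(pct < 0.5, 1.0, np.where(pct < 0.9, 2.0, 3.0))
    return rain * (1.0 + factor_cc * CC) ** d_td

rng = np.random.default_rng(2)
event = rng.gamma(0.8, 4.0, size=1000)       # synthetic hourly intensities (mm/h)
future = delta_transform(event, d_td=2.0)    # project to +2 K dew point
```

The key property is that the most intense cells are amplified the most, so the transformed event sharpens rather than scaling uniformly.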
Yeo, Junyeob; Hong, Sukjoon; Lee, Daehoo; Hotz, Nico; Lee, Ming-Tsang; Grigoropoulos, Costas P.; Ko, Seung Hwan
2012-01-01
Flexible electronics opened a new class of future electronics. The foldable, light and durable nature of flexible electronics allows vast flexibility in applications such as display, energy devices and mobile electronics. Even though conventional electronics fabrication methods are well developed for rigid substrates, direct application or slight modification of conventional processes for flexible electronics fabrication cannot work. The future flexible electronics fabrication requires totally new low-temperature process development optimized for flexible substrate and it should be based on new material too. Here we present a simple approach to developing a flexible electronics fabrication without using conventional vacuum deposition and photolithography. We found that direct metal patterning based on laser-induced local melting of metal nanoparticle ink is a promising low-temperature alternative to vacuum deposition– and photolithography-based conventional metal patterning processes. The “digital” nature of the proposed direct metal patterning process removes the need for expensive photomask and allows easy design modification and short turnaround time. This new process can be extremely useful for current small-volume, large-variety manufacturing paradigms. Besides, simple, scalable, fast and low-temperature processes can lead to cost-effective fabrication methods on a large-area polymer substrate. The developed process was successfully applied to demonstrate high-quality Ag patterning (2.1 µΩ·cm) and high-performance flexible organic field effect transistor arrays. PMID:22900011
Dervisevic, Muamer; Senel, Mehmet; Sagir, Tugba; Isik, Sevim
2017-04-15
The detection of cancer cells through an important molecular recognition target such as sialic acid is significant for clinical diagnosis and treatment. Many electrochemical cytosensors have been developed for cancer cell detection, but most involve complicated fabrication processes, which results in poor reproducibility and reliability. In this study, a simple, low-cost, and highly sensitive electrochemical cytosensor was designed based on boronic acid-functionalized polythiophene. Fabrication used a simple single-step procedure: coating a pencil graphite electrode (PGE) by electro-polymerization of 3-thienylboronic acid and thiophene. Electrochemical impedance spectroscopy and cyclic voltammetry were used as analytical methods to optimize and measure the analytical performance of the PGE/P(TBA0.5-Th0.5)-based electrode. The cytosensor showed extremely good analytical performance in the detection of cancer cells, with a linear range of 1×10^1 to 1×10^6 cells mL^-1, a low detection limit of 10 cells mL^-1, and an incubation time of 10 min. Beyond its excellent analytical performance, it showed high selectivity toward AGS cancer cells compared to HEK 293 normal cells and bone marrow mesenchymal stem cells (BM-hMSCs). This method is promising for future applications in early-stage cancer diagnosis. Copyright © 2016 Elsevier B.V. All rights reserved.
Silva, Michelli Massaroli da; Andrade, Moacir Dos Santos; Bauermeister, Anelize; Merfa, Marcus Vinícius; Forim, Moacir Rossi; Fernandes, João Batista; Vieira, Paulo Cezar; Silva, Maria Fátima das Graças Fernandes da; Lopes, Norberto Peporine; Machado, Marcos Antônio; Souza, Alessandra Alves de
2017-06-13
Diketopiperazines can be generated by non-enzymatic cyclization of linear dipeptides at extreme temperature or pH, and the complex media used to culture bacteria and fungi, including phytone peptone and trypticase peptone, can also produce cyclic peptides on heat sterilization. As a result, it is not always clear whether many diketopiperazines reported in the literature are artifacts formed by the different complex media used for microorganism growth. An ideal method for analysis of these compounds should establish whether they are synthesized de novo from the products of primary metabolism and thus deliver true diketopiperazines. A simple defined medium (X. fastidiosa medium, or XFM) containing a single carbon source and no preformed amino acids has emerged as a method with particularly high potential for the growth of X. fastidiosa and the production of genuine natural products. In this work, we identified a range of diketopiperazines from X. fastidiosa 9a5c grown in XFM, using Ultra-Fast Liquid Chromatography coupled with mass spectrometry. Diketopiperazines are reported for the first time from X. fastidiosa, which is responsible for citrus variegated chlorosis. We also report fatty acids from X. fastidiosa, which were not biologically active as diffusible signals; the role of diketopiperazines in signal transduction remains unknown.
Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT
NASA Technical Reports Server (NTRS)
Fagundo, Arturo
1994-01-01
Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.
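A minimal illustration of the hierarchical idea, assuming a toy duplex subsystem and independence between subsystems. The thesis's technique passes multiple aggregate states upward and handles more general dependence; here only a single "up" probability is passed, and all rates are invented.

```python
import numpy as np

# Toy continuous-time Markov chain for one duplex subsystem:
# states = (2 up, 1 up, failed); failure rate lam per component.
lam = 1e-3          # failures per hour (assumed for illustration)
Q = np.array([[-2 * lam, 2 * lam, 0.0],
              [0.0,      -lam,    lam],
              [0.0,       0.0,    0.0]])   # generator matrix

def subsystem_reliability(t, steps=20000):
    """P(subsystem still up at time t), by Euler integration of dp/dt = p Q."""
    p = np.array([1.0, 0.0, 0.0])
    dt = t / steps
    for _ in range(steps):
        p = p + dt * (p @ Q)
    return p[0] + p[1]          # aggregate "up" probability passed upward

# Higher level: suppose the system is two such subsystems in series and
# independent, so the full (9-state) Markov model never has to be built;
# only the aggregates are combined.
t = 1000.0
r_sub = subsystem_reliability(t)
r_sys = r_sub ** 2
```

This is where the state-space saving comes from: the higher-level model sees one number per subsystem instead of the subsystem's full state space, at the cost of the integration-error issues the abstract mentions.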
Testing the Role of Multicopy Plasmids in the Evolution of Antibiotic Resistance.
Escudero, Jose Antonio; MacLean, R Craig; San Millan, Alvaro
2018-05-02
Multicopy plasmids are extremely abundant in prokaryotes but their role in bacterial evolution remains poorly understood. We recently showed that the increase in gene copy number per cell provided by multicopy plasmids could accelerate the evolution of plasmid-encoded genes. In this work, we present an experimental system to test the ability of multicopy plasmids to promote gene evolution. Using simple molecular biology methods, we constructed a model system in which an antibiotic resistance gene can be inserted into Escherichia coli MG1655, either in the chromosome or on a multicopy plasmid. We use an experimental evolution approach to propagate the different strains under increasing concentrations of antibiotics, and we measure the survival of bacterial populations over time. The antibiotic molecule and the resistance gene are chosen such that the gene can only confer resistance through the acquisition of mutations. This "evolutionary rescue" approach provides a simple method to test the potential of multicopy plasmids to promote the acquisition of antibiotic resistance. In the next step of the experimental system, the molecular bases of antibiotic resistance are characterized. To identify the mutations responsible for the acquisition of antibiotic resistance, we use deep DNA sequencing of samples obtained from whole populations and clones. Finally, to confirm the role of the mutations in the gene under study, we reconstruct them in the parental background and test the resistance phenotype of the resulting strains.
Microsurgery within reconstructive surgery of extremities.
Pheradze, I; Pheradze, T; Tsilosani, G; Goginashvili, Z; Mosiava, T
2006-05-01
Reconstructive surgery of the extremities is an object of special attention among surgeons. Vessel and nerve damage and deficiency of soft tissue or bone, associated with infection, result in complete loss of extremity function and raise the question of amputation. The goal of the study was to evaluate the role of microsurgery in reconstructive surgery of limbs. We operated on 294 patients with various diseases and injuries of the extremities: pathology of nerves and vessels, and tissue loss. An original method of treatment of large simultaneous functional defects of limbs was used. Good functional and aesthetic results were obtained. Results of reconstructive operations on extremities can be improved by using microsurgical methods. Microsurgery is deemed the method of choice for reconstructive surgery of the extremities, as the outcomes achieved through microsurgical technique significantly surpass those obtained through routine surgical methods.
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2001-04-01
The Parker-Sochacki Method - A Powerful New Method for Solving Systems of Differential Equations. Joseph W. Rudmin (Physics Dept., James Madison University). A new method for solving systems of differential equations will be presented, developed by J. Edgar Parker and James Sochacki of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The method yields high-degree solutions: 20th degree is easily obtainable. It is conceptually simple, fast, and extremely general. It has been applied to over a hundred systems of differential equations, some of which were previously unsolved, and has yet to fail on any system for which the Maclaurin series converges. The method is non-recursive: each coefficient in the series is calculated just once, in closed form, and its accuracy is limited only by the digital accuracy of the computer. Although the original differential equations may include any mathematical functions, the computational method uses ONLY the operations of addition, subtraction, and multiplication. Furthermore, it is perfectly suited to parallel-processing computer languages. Those who learn this method will never use Runge-Kutta or predictor-corrector methods again. Examples will be presented, including the classical many-body problem.
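For a polynomial ODE the approach reduces to a Cauchy-product recurrence on the Maclaurin coefficients, using only additions and multiplications. A sketch for the test equation y' = y², y(0) = 1, chosen here because its exact series is known (it is not an example from the talk):

```python
from fractions import Fraction

def maclaurin_coeffs(degree):
    """Maclaurin coefficients of the solution of y' = y^2, y(0) = 1.

    Matching series term by term gives (n+1) c[n+1] = sum_i c[i] * c[n-i],
    a Cauchy-product recurrence: each coefficient is computed exactly once,
    in closed form, from earlier coefficients.
    """
    c = [Fraction(1)]
    for n in range(degree):
        conv = sum(c[i] * c[n - i] for i in range(n + 1))
        c.append(conv / (n + 1))
    return c

coeffs = maclaurin_coeffs(20)   # a 20th-degree solution is easy to obtain
# The exact solution is y = 1/(1 - t), whose series has every coefficient 1.
```

Evaluating the degree-20 polynomial then approximates the solution inside the radius of convergence; using `Fraction` keeps the coefficients in exact algebraic form, as the abstract describes.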
Lee, Seung-Heon; Lu, Jian; Lee, Seung-Jun; Han, Jae-Hyun; Jeong, Chan-Uk; Lee, Seung-Chul; Li, Xian; Jazbinšek, Mojca; Yoon, Woojin; Yun, Hoseop; Kang, Bong Joo; Rotermund, Fabian; Nelson, Keith A; Kwon, O-Pil
2017-08-01
Highly efficient nonlinear optical organic crystals are very attractive for various photonic applications including terahertz (THz) wave generation. Up to now, only two classes of ionic crystals based on either pyridinium or quinolinium with extremely large macroscopic optical nonlinearity have been developed. This study reports on a new class of organic nonlinear optical crystals introducing electron-accepting benzothiazolium, which exhibit higher electron-withdrawing strength than pyridinium and quinolinium in benchmark crystals. The benzothiazolium crystals consisting of new acentric core HMB (2-(4-hydroxy-3-methoxystyryl)-3-methylbenzo[d]thiazol-3-ium) exhibit extremely large macroscopic optical nonlinearity with optimal molecular ordering for maximizing the diagonal second-order nonlinearity. HMB-based single crystals prepared by simple cleaving method satisfy all required crystal characteristics for intense THz wave generation such as large crystal size with parallel surfaces, moderate thickness and high optical quality with large optical transparency range (580-1620 nm). Optical rectification of 35 fs pulses at the technologically very important wavelength of 800 nm in 0.26 mm thick HMB crystal leads to one order of magnitude higher THz wave generation efficiency with remarkably broader bandwidth compared to standard inorganic 0.5 mm thick ZnTe crystal. Therefore, newly developed HMB crystals introducing benzothiazolium with extremely large macroscopic optical nonlinearity are very promising materials for intense broadband THz wave generation and other nonlinear optical applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Determination of the maximum-depth to potential field sources by a maximum structural index method
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.
2013-01-01
A simple and fast determination of the limiting depth to the sources may represent a significant help in data interpretation. To this end we explore the possibility of determining those source parameters shared by all classes of models fitting the data. One approach is to determine the maximum depth-to-source compatible with the measured data, for example by using the well-known Bott-Smith rules. These rules involve only the knowledge of the field and its horizontal gradient maxima, and are independent of the density contrast. Thanks to the direct relationship between structural index and depth to sources, we work out a simple and fast strategy to obtain the maximum depth using semi-automated methods, such as Euler deconvolution or the depth-from-extreme-points (DEXP) method. The proposed method consists of estimating the maximum depth as the one obtained for the highest allowable value of the structural index (Nmax). Nmax may be easily determined, since it depends only on the dimensionality of the problem (2D/3D) and on the nature of the analyzed field (e.g., gravity or magnetic). We tested our approach on synthetic models against the results obtained by the classical Bott-Smith formulas, and the results are in fact very similar, confirming the validity of the method. However, while the Bott-Smith formulas are restricted to the gravity field only, our method is applicable also to the magnetic field and to any derivative of the gravity and magnetic fields. Our method yields a useful criterion to assess the source model based on the (∂f/∂x)max/fmax ratio. The usefulness of the method in real cases is demonstrated for a salt wall in the Mississippi basin, where the estimated maximum depth agrees with the seismic information.
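A toy numerical check of the underlying idea - that a depth estimate follows from the field maximum and the horizontal-gradient maximum alone - for one specific 2-D source. This is not the Bott-Smith or DEXP implementation itself; the geometry, units, and the 8√3/9 factor (derived analytically for this source) are particular to the example.

```python
import numpy as np

# Synthetic 2-D gravity profile over a buried horizontal line mass at
# depth z0: g(x) = A * z0 / (x^2 + z0^2). Units are arbitrary.
z0, A = 2.0, 1.0
x = np.arange(-50, 50, 0.01)
g = A * z0 / (x ** 2 + z0 ** 2)

g_max = g.max()
dg_max = np.abs(np.gradient(g, x)).max()

# For this source geometry the ratio g_max / |dg/dx|_max equals
# (8*sqrt(3)/9) * z0, so the depth is recoverable from the two maxima
# alone - the same kind of field-and-gradient rule the abstract invokes.
z_est = (9 / (8 * np.sqrt(3))) * g_max / dg_max
```

For an arbitrary unknown source the same two maxima bound, rather than pinpoint, the depth, which is exactly the "maximum depth" reading the abstract gives the rule.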
Approximating Long-Term Statistics Early in the Global Precipitation Measurement Era
NASA Technical Reports Server (NTRS)
Stanley, Thomas; Kirschbaum, Dalia B.; Huffman, George J.; Adler, Robert F.
2017-01-01
Long-term precipitation records are vital to many applications, especially the study of extreme events. The Tropical Rainfall Measuring Mission (TRMM) has served this need, but TRMM's successor mission, Global Precipitation Measurement (GPM), does not yet provide a long-term record. Quantile mapping, the conversion of values across paired empirical distributions, offers a simple, established means to approximate such long-term statistics, but only within appropriately defined domains. This method was applied to a case study in Central America, demonstrating that quantile mapping between TRMM and GPM data maintains the performance of a real-time landslide model. Use of quantile mapping could bring the benefits of the latest satellite-based precipitation dataset to existing user communities such as those for hazard assessment, crop forecasting, numerical weather prediction, and disease tracking.
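Quantile mapping as described converts a value through the source record's empirical CDF and then through the target record's inverse CDF. A minimal sketch, with a hypothetical `quantile_map` helper and toy samples standing in for the GPM and TRMM records:

```python
import numpy as np

def quantile_map(values, src_sample, dst_sample):
    """Map values from the source distribution to the destination distribution
    by matching empirical quantiles (CDF of src -> inverse CDF of dst)."""
    src = np.sort(src_sample)
    dst = np.sort(dst_sample)
    q = np.interp(values, src, np.linspace(0.0, 1.0, src.size))   # empirical CDF
    return np.interp(q, np.linspace(0.0, 1.0, dst.size), dst)     # inverse CDF

# Toy check: if the "TRMM" record is exactly twice the "GPM" record,
# mapping a GPM value of 50 should land near 100.
gpm = np.arange(1.0, 101.0)
trmm = 2.0 * np.arange(1.0, 101.0)
print(quantile_map(50.0, gpm, trmm))
```

In practice the paired samples would be co-located precipitation records from the two missions, and the mapping would be built separately within each appropriately defined domain.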
NASA Astrophysics Data System (ADS)
Shimada, M.; Shimada, J.; Tsunashima, K.; Aoyama, C.
2017-12-01
Methane hydrate is anticipated to be an unconventional natural gas energy resource. Two types of methane hydrate are known to exist, distinguished by their settings: the "shallow" type and the "sand layer" type. The shallow type is considered advantageous due to its high purity and simpler exploration. However, few extraction techniques have been developed for it. Currently, heating and depressurization are used as methods to collect sand-layer methane hydrate, but these methods are still under examination and not yet implemented. This is probably because fossil fuel is used for the extraction process instead of natural energy. It is necessary to utilize natural energy rather than relying on fossil fuel, which is why sunlight is believed to be the most significant alternative. Solar power generation is commonly used to harness sunlight, but this process incurs substantial energy loss, since solar energy converted to electricity must then be converted back into heat energy. A new method is contrived to accelerate the decomposition of methane hydrate with direct sunlight delivered through optical fibers. The authors will present details of this new method to collect methane hydrate with direct sunlight exposure.
An extremely simple green approach is described that generates bulk quantities of nanofibers of the electronic polymer polyaniline in fully reduced state (leucoemeraldine form) in one step without using any reducing agent, surfactants, and/or large amounts of insoluble templates....
Greener Biomimetic Approach to the Synthesis of Nanomaterials and Nanocomposite
A brief account of greener production of nanoparticles which reduces or eliminates the use and generation of hazardous substances is presented. The utility of vitamins B1 and B2, which can function both as reducing and capping agents, provides an extremely simple, one-pot, greene...
Risk Reduction via Greener Synthesis of Noble Metal Nanostructures and Nanocomposites (Presentation)
A brief account of greener production of nanoparticles which reduces or eliminates the use and generation of hazardous substances is presented. The utility of vitamins B1 and B2, which can function both as reducing and capping agents, provides an extremely simple, one-pot, greene...
Therapeutic hand-exercising device with cycling pressure value
NASA Technical Reports Server (NTRS)
Barthlome, D. E.
1974-01-01
Device exercises hands of persons whose fingers are generally straight and need to be flexed inward toward palms of hands. Device is extremely simple in design, which reduces costs, and fits all hand sizes. Patient can instantly free hand from device by pulling flap free from wrist straps.
Greener Synthesis of Nanomaterials: Risk Reduction Strategies and Environmental Applications
A brief account of a greener preparation of nanoparticles which reduces or eliminates the use and generation of hazardous substances is presented. The utility of vitamins B1, B2, and C which can function both as reducing and capping agents, provides an extremely simple and greene...
Using a Simple Parcel Model to Investigate the Haines Index
Mary Ann Jenkins; Steven K. Krueger; Ruiyu Sun
2003-01-01
The Haines Index (Haines 1988) is a fire-weather index, based on stability and moisture conditions of the lower atmosphere, that rates the potential for large fire growth or extreme fire behavior. The Haines Index is calculated by adding a temperature (stability) term A to a moisture term B.
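The A-plus-B construction can be sketched for the mid-level variant of the index; the thresholds below follow the commonly tabulated mid-level scoring and should be treated as illustrative, since the paper itself does not list them here.

```python
def haines_mid(t850_c, t700_c, td850_c):
    """Mid-level Haines Index: stability term A (850-700 hPa temperature
    difference) plus moisture term B (850 hPa dewpoint depression), each
    scored 1-3. Thresholds follow the commonly tabulated mid-level variant."""
    lapse = t850_c - t700_c
    a = 1 if lapse <= 5 else (2 if lapse <= 10 else 3)
    depression = t850_c - td850_c
    b = 1 if depression <= 5 else (2 if depression <= 12 else 3)
    return a + b   # 2 (low potential) .. 6 (high potential)

print(haines_mid(20.0, 5.0, 2.0))   # very unstable, very dry sounding -> 6
```

Low- and high-elevation variants use different pressure levels and cutoffs; only the additive A + B structure is common to all three.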
NASA Astrophysics Data System (ADS)
Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu
2017-10-01
Environmental changes have brought about significant changes and challenges to water resources and management around the world; these include increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and especially much more frequent extreme precipitation events, all of which greatly affect water resources and the development of the social economy. In this study, we take extreme precipitation events in the Midwest of Jilin Province as an example, using daily precipitation data from 1960-2014. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and the Kolmogorov-Smirnov (K-S) test is then employed to determine the optimal probability distribution function for each indicator. On this basis, a nonparametric estimation method for copulas and the Akaike Information Criterion (AIC) method are adopted to determine the bivariate copula function. Finally, we analyze the single-variable extremes and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far less than that in subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a trend of growth; there are significant spatiotemporal differences in extreme precipitation events. The joint return period gets shorter from the west to the east. The spatial distribution of the co-occurrence return period shows the opposite trend, and it is longer than the joint return period.
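The K-S selection step, fitting several candidate families to an indicator and keeping the one with the smallest K-S distance, can be sketched with SciPy; the candidate families and the synthetic data below are our illustrative choices, not the paper's.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in for one extreme-precipitation indicator (e.g. event totals, mm)
data = rng.gamma(shape=2.0, scale=10.0, size=2000)

candidates = [stats.gamma, stats.lognorm, stats.weibull_min]
results = []
for dist in candidates:
    params = dist.fit(data, floc=0)                  # fix location at zero for positive data
    d_stat, p_val = stats.kstest(data, dist.name, args=params)
    results.append((d_stat, p_val, dist.name))

d_best, p_best, name_best = min(results)             # smallest K-S distance wins
print(name_best, round(d_best, 4))
```

The paper additionally ranks the fitted bivariate copulas by AIC; the same fit-then-score pattern applies there, with the copula log-likelihood in place of the K-S distance.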
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini
2018-04-01
The increasing trend in the intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. Thus, the objective of the present study is the comparison of several statistical methods of extreme value theory (EVT) in order to identify which is the most appropriate for analyzing the behavior of extreme precipitation and high and low temperature events in the Mediterranean region. Extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques, and consequently both the generalized extreme value (GEV) and generalized Pareto distributions (GPDs) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing extremes. Moreover, this study evaluates the maximum likelihood estimation, L-moments, and Bayesian methods, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can accurately characterize both precipitation and temperature extreme events. Additionally, the GEV distribution with the Bayesian method proved appropriate, especially for the greatest extremes. Another important objective of this investigation was the estimation of the precipitation and temperature return levels for three return periods (50, 100, and 150 years), classifying the data into groups with similar characteristics. Finally, the return level values were estimated with both the GEV and GPD and with the three different estimation methods, revealing that the selected method can affect the return level values for both precipitation and temperature.
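The block-maxima/GEV return-level computation described above can be sketched in a few lines: the T-year return level is the (1 - 1/T) quantile of the fitted annual-maximum distribution. The synthetic data and maximum-likelihood fit below are illustrative (the paper also compares L-moments and Bayesian estimators).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic annual block maxima (e.g. mm/day); Gumbel is the shape = 0 GEV case
annual_max = rng.gumbel(loc=50.0, scale=10.0, size=100)

# Fit the GEV by maximum likelihood, then read off return levels as quantiles
shape, loc, scale = stats.genextreme.fit(annual_max)
for T in (50, 100, 150):
    level = stats.genextreme.ppf(1.0 - 1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f}")
```

The POT/GPD route is analogous but conditions on threshold exceedances, so the return level also involves the exceedance rate, not just a quantile of the fitted distribution.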
Perucho, Beatriz; Micó, Vicente
2014-01-01
Progressive addition lenses (PALs) are engraved with permanent marks at standardized locations in order to guarantee correct centering and alignment throughout the manufacturing and mounting processes. Out of the production line, engraved marks provide useful information about the PAL and act as locator marks for re-inking the removable marks. Even though those marks should be visible by simple visual inspection with the naked eye, engraved marks are often faint and weak, obscured by scratches, and partially occluded and difficult to recognize on tinted or antireflection-coated lenses. Here, we present an extremely simple optical device (named the wavefront holoscope) for visualization and characterization of permanent marks in PALs based on digital in-line holography. Essentially, a point source of coherent light illuminates the engraved mark placed just before a CCD camera that records a classical Gabor in-line hologram. The recorded hologram is then digitally processed to provide a set of high-contrast images of the engraved marks. Experimental results are presented showing the applicability of the proposed method as a new ophthalmic instrument for visualization and characterization of engraved marks in PALs.
Zhang, Quanxin; Zhang, Geping; Sun, Xiaofeng; Yin, Keyang; Li, Hongguang
2017-05-31
Dye-sensitized solar cells (DSSCs) are highly promising since they can potentially help solve global energy issues. The development of new photosensitizers is the key to fully realizing the prospects envisioned for DSSCs. Being cheap and nontoxic, carbon quantum dots (CQDs) have emerged as attractive candidates for this purpose. However, current methodologies to build up CQD-sensitized solar cells (CQDSCs) result in imperfect devices with extremely low power conversion efficiencies (PCEs). Herein, we present a simple strategy of growing CQDs onto TiO₂ surfaces in situ. The CQDs/TiO₂ hybridized photoanode was then used to construct a solar cell with an improved PCE of 0.87%, which is higher than that of all reported CQDSCs prepared by the simple post-adsorption method. This result indicates that the in situ growth strategy has great advantages in optimizing the performance of CQDSCs. In addition, we have also found that the mechanisms dominating the performance of CQDSCs differ from those behind solar cells using inorganic semiconductor quantum dots (ISQDs) as the photosensitizers, which reconfirms the conclusion that the characteristics of CQDs differ from those of ISQDs.
Kim, Ki-Joong; Lu, Ping; Culp, Jeffrey T; Ohodnicki, Paul R
2018-02-23
Integration of optical fiber with sensitive thin films offers great potential for the realization of novel chemical sensing platforms. In this study, we present a simple design strategy and high performance of nanoporous metal-organic framework (MOF) based optical gas sensors, which enable detection of a wide range of concentrations of small molecules based upon extremely small differences in refractive indices as a function of analyte adsorption within the MOF framework. Thin and compact MOF films can be uniformly formed and tightly bound on the surface of etched optical fiber through a simple solution method, which is critical for manufacturability of MOF-based sensor devices. The resulting sensors show high sensitivity/selectivity to CO2 gas relative to other small gases (H2, N2, O2, and CO) with rapid (
Simple Numerical Modelling for Gasdynamic Design of Wave Rotors
NASA Astrophysics Data System (ADS)
Okamoto, Koji; Nagashima, Toshio
The precise estimation of pressure waves generated in the passages is a crucial factor in wave rotor design. However, it is difficult to estimate the pressure wave analytically, e.g. by the method of characteristics, because the mechanism of pressure-wave generation and propagation in the passages is extremely complicated as compared to that in a shock tube. In this study, a simple numerical modelling scheme was developed to facilitate the design procedure. This scheme considers the three dominant factors in the loss mechanism —gradual passage opening, wall friction and leakage— for simulating the pressure waves precisely. The numerical scheme itself is based on the one-dimensional Euler equations with appropriate source terms to reduce the calculation time. The modelling of these factors was verified by comparing the results with those of a two-dimensional numerical simulation, which were previously validated by the experimental data in our previous study. Regarding wave rotor miniaturization, the leakage flow effect, which involves the interaction between adjacent cells, was investigated extensively. A port configuration principle was also examined and analyzed in detail to verify the applicability of the present numerical modelling scheme to the wave rotor design.
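The one-dimensional Euler-with-source-terms approach described above can be sketched minimally. The snippet below uses a Lax-Friedrichs update on a periodic shock-tube-like state; the scheme and all parameters are our illustrative choices, not the authors' modelling, and the loss-mechanism source terms (passage opening, wall friction, leakage) would enter as additional right-hand-side terms.

```python
import numpy as np

# Minimal periodic 1-D Euler solver (Lax-Friedrichs), conserved variables
# U = [rho, rho*u, E] for a gamma-law gas.
gamma = 1.4
nx, dx, dt = 200, 1.0 / 200, 1e-4

x = (np.arange(nx) + 0.5) * dx
rho = np.where(x < 0.5, 1.0, 0.125)          # shock-tube-like initial state
u = np.zeros(nx)
p = np.where(x < 0.5, 1.0, 0.1)
U = np.array([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

def flux(U):
    rho, mom, E = U
    u = mom / rho
    p = (gamma - 1) * (E - 0.5 * rho * u**2)
    return np.array([mom, mom * u + p, (E + p) * u])

mass0 = U[0].sum() * dx
for _ in range(500):
    F = flux(U)
    # Lax-Friedrichs update on a periodic domain (source terms would be added here)
    U = 0.5 * (np.roll(U, -1, axis=1) + np.roll(U, 1, axis=1)) \
        - 0.5 * dt / dx * (np.roll(F, -1, axis=1) - np.roll(F, 1, axis=1))

print(abs(U[0].sum() * dx - mass0))   # conservative scheme: mass drift ~ round-off
```

A production wave-rotor code would replace Lax-Friedrichs with a less diffusive scheme and add the calibrated source terms, but the conservative update structure is the same.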
Thirty years since diffuse sound reflection by maximum length sequences
NASA Astrophysics Data System (ADS)
Cox, Trevor J.; D'Antonio, Peter
2005-09-01
This year celebrates the 30th anniversary of Schroeder's seminal paper on sound scattering from maximum length sequences. This paper, along with Schroeder's subsequent publication on quadratic residue diffusers, broke new ground, because they contained simple recipes for designing diffusers with known acoustic performance. So, what has happened in the intervening years? As with most areas of engineering, the room acoustic diffuser has been greatly influenced by the rise of digital computing technologies. Numerical methods have become much more powerful, and this has enabled predictions of surface scattering to greater accuracy and for larger scale surfaces than previously possible. Architecture has also gone through a revolution where the forms of buildings have become more extreme and sculptural. Acoustic diffuser designs have had to keep pace with this to produce shapes and forms that are desirable to architects. To achieve this, design methodologies have moved away from Schroeder's simple equations to brute force optimization algorithms. This paper will look back at the past development of the modern diffuser, explaining how the principles of diffuser design have been devised and revised over the decades. The paper will also look at the present state-of-the art, and dreams for the future.
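Schroeder's "simple recipes" mentioned above reduce, for the quadratic residue diffuser, to one line of modular arithmetic: well n of an N-well period has depth proportional to n² mod N. The prime N, design frequency, and sound speed below are illustrative values.

```python
# Schroeder quadratic residue diffuser: well n of an N-well period has depth
# d_n = (n^2 mod N) * lambda0 / (2N), with design wavelength lambda0 = c / f0.
N, f0, c = 7, 500.0, 343.0           # prime well count, design frequency (Hz), sound speed (m/s)
s = [(n * n) % N for n in range(N)]  # quadratic residue sequence
depths_mm = [sn * c / (2 * N * f0) * 1000 for sn in s]
print(s)                             # [0, 1, 4, 2, 2, 4, 1]
print([round(d, 1) for d in depths_mm])
```

The modern optimized diffusers discussed in the paper abandon this closed form in favor of brute-force search, but the recipe above is the historical starting point.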
Recent developments in optical detection methods for microchip separations.
Götz, Sebastian; Karst, Uwe
2007-01-01
This paper summarizes the features and performances of optical detection systems currently applied in order to monitor separations on microchip devices. Fluorescence detection, which delivers very high sensitivity and selectivity, is still the most widely applied method of detection. Instruments utilizing laser-induced fluorescence (LIF) and lamp-based fluorescence along with recent applications of light-emitting diodes (LED) as excitation sources are also covered in this paper. Since chemiluminescence detection can be achieved using extremely simple devices which no longer require light sources and optical components for focusing and collimation, interesting approaches based on this technique are presented, too. Although UV/vis absorbance is a detection method that is commonly used in standard desktop electrophoresis and liquid chromatography instruments, it has not yet reached the same level of popularity for microchip applications. Current applications of UV/vis absorbance detection to microchip separations and innovative approaches that increase sensitivity are described. This article, which contains 85 references, focuses on developments and applications published within the last three years, points out exciting new approaches, and provides future perspectives on this field.
Dynamic mask for producing uniform or graded-thickness thin films
Folta, James A [Livermore, CA
2006-06-13
A method for producing single layer or multilayer films with high thickness uniformity or thickness gradients. The method utilizes a moving mask which blocks some of the flux from a sputter target or evaporation source before it deposits on a substrate. The velocity and position of the mask are computer controlled to precisely tailor the film thickness distribution. The method is applicable to any type of vapor deposition system, but is particularly useful for ion beam sputter deposition and evaporation deposition; it enables a high degree of uniformity for ion beam deposition, even for near-normal incidence of deposition species, which may be critical for producing low-defect multilayer coatings, such as those required for masks for extreme ultraviolet lithography (EUVL). The mask can have a variety of shapes, from a simple solid paddle to a larger mask with a shaped hole through which the flux passes. The motion of the mask can be linear or rotational, and the mask can be moved to make single or multiple passes in front of the substrate per layer, and can pass completely or partially across the substrate.
NASA Technical Reports Server (NTRS)
Lawson, John W.; Daw, Murray S.; Squire, Thomas H.; Bauschlicher, Charles W.
2012-01-01
We are developing a multiscale computational modeling framework for the ultra-high temperature ceramics (UHTCs) ZrB2 and HfB2. These materials are characterized by high melting points, good strength, and reasonable oxidation resistance. They are candidate materials for a number of applications in extreme environments, including sharp leading edges of hypersonic aircraft. In particular, we used a combination of ab initio methods, atomistic simulations, and continuum computations to obtain insights into fundamental properties of these materials. Ab initio methods were used to compute basic structural, mechanical, and thermal properties. From these results, a database was constructed to fit a Tersoff-style interatomic potential suitable for atomistic simulations. These potentials were used to evaluate the lattice thermal conductivity of single crystals and the thermal resistance of simple grain boundaries. Finite element method (FEM) computations using atomistic results as inputs were performed with meshes constructed on SEM images, thereby modeling the realistic microstructure. These continuum computations showed the reduction in thermal conductivity due to the grain boundary network.
G0-WISHART Distribution Based Classification from Polarimetric SAR Images
NASA Astrophysics Data System (ADS)
Hu, G. C.; Zhao, Q. H.
2017-09-01
Enormous scientific and technical developments have been carried out over recent decades to further improve remote sensing, particularly the Polarimetric Synthetic Aperture Radar (PolSAR) technique, so classification methods based on PolSAR images have received much attention from scholars and related departments around the world. The multilook polarimetric G0-Wishart model is a more flexible model which describes homogeneous, heterogeneous, and extremely heterogeneous regions in the image. Moreover, the polarimetric G0-Wishart distribution does not include the modified Bessel function of the second kind; it is a simple statistical distribution model with fewer parameters. To prove its feasibility, a classification process has been tested on a fully polarized Synthetic Aperture Radar (SAR) image with this method. First, multilook polarimetric SAR data processing and a speckle filter are applied to reduce the influence of speckle on the classification result. The image is initially classified into sixteen classes by H/A/α decomposition. The ICM algorithm is then used to classify features based on the G0-Wishart distance. Qualitative and quantitative results show that the proposed method can classify polarimetric SAR data effectively and efficiently.
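The distance-based assignment underlying such classifiers can be sketched with the baseline complex-Wishart distance (without the G0 texture extension this paper adds); the formula below is the standard Wishart classifier distance, shown here as an illustration rather than the authors' G0-Wishart variant.

```python
import numpy as np

def wishart_distance(C, sigma):
    """Standard Wishart classifier distance between a pixel's sample
    covariance C and a class-center covariance sigma:
    d = ln det(sigma) + Tr(sigma^{-1} C)."""
    sign, logdet = np.linalg.slogdet(sigma)
    return logdet + np.trace(np.linalg.solve(sigma, C)).real

# ICM-style assignment: each pixel goes to the class with minimum distance
C = np.eye(3)                                  # toy 3x3 polarimetric covariance
centers = [np.eye(3), 2.0 * np.eye(3)]         # toy class centers
dists = [wishart_distance(C, s) for s in centers]
print(dists, "-> class", int(np.argmin(dists)))
```

In the paper's scheme the same minimum-distance step runs inside ICM iterations, with the G0-Wishart distance replacing the baseline above.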
Hydroelectric power plant on a paper strip.
Das, Sankha Shuvra; Kar, Shantimoy; Anwar, Tarique; Saha, Partha; Chakraborty, Suman
2018-05-03
We exploit the combined advantages of electrokinetics and the tortuosity of a cellulose-based paper network on laboratory-grade filter paper for the development of a simple, inexpensive, yet extremely robust (showing constant performance for 12 days) 'paper-and-pencil'-based device for energy harvesting applications. We successfully harvest a maximum output power of ∼640 pW in a single channel, which is improved significantly (by ∼100 times) with the use of a multichannel microfluidic array (maximum of up to 20 channels). Furthermore, we also provide theoretical insights into the observed phenomenon and show that the experimentally observed trends agree well with our theoretical calculations. Thus, we envisage that such ultra-low-cost devices may turn out to be extremely useful in energizing analytical microdevices in resource-limited settings, for instance, in extreme point-of-care diagnostic applications.
Ansari, Majid; Nourian, Ruhollah; Khodaee, Morteza
With the increasing popularity of mountain biking, also known as off-road cycling, and riders pushing the sport to extremes, there has been a corresponding increase in injury. Almost two-thirds of acute injuries involve the upper extremities, and a similar proportion of overuse injuries affect the lower extremities. Mountain biking appears to be a high-risk sport for severe spine injuries. New injury patterns are observed with the popularity of mountain bike trail parks and freeride cycling. Using protective gear, improving technical proficiency, and physical fitness may somewhat decrease the risk of injuries. Simple modifications in bicycle-rider interface areas and in the bicycle itself (bike fit) may also decrease some overuse injuries. Bike fit provides the clinician with postural correction during the sport. In this review, we also discuss the importance of race-day management strategies and monitoring of injury trends.
Wigner phase space distribution via classical adiabatic switching
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bose, Amartya; Makri, Nancy; Department of Physics, University of Illinois, 1110 W. Green Street, Urbana, Illinois 61801
2015-09-21
Evaluation of the Wigner phase space density for systems of many degrees of freedom presents an extremely demanding task because of the oscillatory nature of the Fourier-type integral. We propose a simple and efficient, approximate procedure for generating the Wigner distribution that avoids the computational difficulties associated with the Wigner transform. Starting from a suitable zeroth-order Hamiltonian, for which the Wigner density is available (either analytically or numerically), the phase space distribution is propagated in time via classical trajectories, while the perturbation is gradually switched on. According to the classical adiabatic theorem, each trajectory maintains a constant action if the perturbation is switched on infinitely slowly. We show that the adiabatic switching procedure produces the exact Wigner density for harmonic oscillator eigenstates and also for eigenstates of anharmonic Hamiltonians within the Wentzel-Kramers-Brillouin (WKB) approximation. We generalize the approach to finite temperature by introducing a density rescaling factor that depends on the energy of each trajectory. Time-dependent properties are obtained simply by continuing the integration of each trajectory under the full target Hamiltonian. Further, by construction, the generated approximate Wigner distribution is invariant under classical propagation, and thus, thermodynamic properties are strictly preserved. Numerical tests on one-dimensional and dissipative systems indicate that the method produces results in very good agreement with those obtained by full quantum mechanical methods over a wide temperature range. The method is simple and efficient, as it requires no input besides the force fields required for classical trajectory integration, and is ideal for use in quasiclassical trajectory calculations.
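The switching procedure can be sketched in one dimension: sample the exact harmonic-oscillator Wigner density, then propagate classically while slowly ramping on a perturbation. The quartic perturbation, linear ramp, and all parameters below are our illustrative choices (hbar = m = omega = 1), not the paper's test systems.

```python
import numpy as np

# Sample the exact ground-state Wigner density of the harmonic zeroth-order
# Hamiltonian: a Gaussian with variance 1/2 in both q and p (hbar = m = omega = 1).
rng = np.random.default_rng(2)
n_traj, eps = 2000, 0.05
q = rng.normal(0.0, np.sqrt(0.5), n_traj)
p = rng.normal(0.0, np.sqrt(0.5), n_traj)

dt, n_steps = 0.01, 20000                    # slow ramp: total switching time T = 200

def force(q, lam):
    # Harmonic force plus gradually switched-on quartic perturbation lam*eps*q^4
    return -q - lam * 4.0 * eps * q**3

for k in range(n_steps):
    lam = (k + 1) / n_steps                  # linear switching function, 0 -> 1
    p += 0.5 * dt * force(q, lam)            # velocity-Verlet step
    q += dt * p
    p += 0.5 * dt * force(q, lam)

energy = 0.5 * p**2 + 0.5 * q**2 + eps * q**4
print(energy.mean())                         # slightly above the harmonic 1/2
```

Because each trajectory approximately conserves its action during the slow ramp, the ensemble relaxes onto the perturbed Hamiltonian's Wigner-like distribution; the mean energy rises slightly above the harmonic zero-point value, consistent with the positive quartic term.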
Confidence of compliance: a Bayesian approach for percentile standards.
McBride, G B; Ellis, J C
2001-04-01
Rules for assessing compliance with percentile standards commonly limit the number of exceedances permitted in a batch of samples taken over a defined assessment period. Such rules are commonly developed using classical statistical methods. Results from alternative Bayesian methods are presented (using beta-distributed prior information and a binomial likelihood), resulting in "confidence of compliance" graphs. These allow simple reading of the consumer's and the supplier's risks for any proposed rule. The influence of the prior assumptions required by the Bayesian technique on the confidence results is demonstrated, using two reference priors (uniform and Jeffreys') and also using optimistic and pessimistic user-defined priors. All four give less pessimistic results than does the classical technique, because interpreting classical results as "confidence of compliance" actually invokes a Bayesian approach with an extreme prior distribution. Jeffreys' prior is shown to be the most generally appropriate choice of prior distribution. Cost savings can be expected using rules based on this approach.
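The beta-binomial computation behind the "confidence of compliance" graphs is a one-liner with SciPy. The 95th-percentile standard (p0 = 0.05) and sample counts below are illustrative; the Jeffreys default reflects the paper's conclusion about the most generally appropriate prior.

```python
from scipy.stats import beta

def confidence_of_compliance(k_exceed, n_samples, p0=0.05, a=0.5, b=0.5):
    """Posterior probability that the true exceedance rate p meets the
    standard (p <= p0), using a Beta(a, b) prior and a binomial likelihood.
    Defaults to the Jeffreys prior (a = b = 0.5)."""
    return beta.cdf(p0, a + k_exceed, b + n_samples - k_exceed)

# A 95th-percentile standard: more observed exceedances -> less confidence.
for k in (0, 2, 5):
    print(k, round(confidence_of_compliance(k, 100), 3))
```

With no data and a uniform prior the confidence collapses to p0 itself, which is the sanity check that the posterior reduces to the prior.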
Compensation of high order harmonic long quantum-path attosecond chirp
NASA Astrophysics Data System (ADS)
Guichard, R.; Caillat, J.; Lévêque, C.; Risoud, F.; Maquet, A.; Taïeb, R.; Zaïr, A.
2017-12-01
We propose a method to compensate for the extreme ultraviolet (XUV) attosecond chirp associated with the long quantum path in the high harmonic generation process. Our method employs an isolated attosecond pulse (IAP), issued from the short-trajectory contribution in a primary target, to assist the infrared driving field in producing high harmonics from the long trajectory in a secondary target. In our simulations, based on the resolution of the time-dependent Schrödinger equation, the resulting high harmonics present a clear phase compensation of the long quantum-path contribution, yielding a nearly Fourier-transform-limited attosecond XUV pulse. Employing time-frequency analysis of the high harmonic dipole, we found that the compensation is not a simple far-field photonic interference between the IAP and the long-path harmonic emission, but a coherent phase transfer from the weak IAP to the long quantum-path electronic wavepacket. Our approach opens the route to utilizing the long quantum path for the production and applications of attosecond pulses.
Chemically programmed self-sorting of gelator networks.
Morris, Kyle L; Chen, Lin; Raeburn, Jaclyn; Sellick, Owen R; Cotanda, Pepa; Paul, Alison; Griffiths, Peter C; King, Stephen M; O'Reilly, Rachel K; Serpell, Louise C; Adams, Dave J
2013-01-01
Controlling the order and spatial distribution of self-assembly in multicomponent supramolecular systems could underpin exciting new functional materials, but it is extremely challenging. When a solution of different components self-assembles, the molecules can either coassemble, or self-sort, where a preference for like-like intermolecular interactions results in coexisting, homomolecular assemblies. A challenge is to produce generic and controlled 'one-pot' fabrication methods to form separate ordered assemblies from 'cocktails' of two or more self-assembling species, which might have relatively similar molecular structures and chemistry. Self-sorting in supramolecular gel phases is hence rare. Here we report the first example of the pH-controlled self-sorting of gelators to form self-assembled networks in water. Uniquely, the order of assembly can be predefined. The assembly of each component is preprogrammed by the pK(a) of the gelator. This pH-programming method will enable higher level, complex structures to be formed that cannot be accessed by simple thermal gelation.
Daré, Joyce K; Silva, Cristina F; Freitas, Matheus P
2017-10-01
Soil sorption of insecticides employed in agriculture is an important parameter for probing the environmental fate of organic chemicals. Therefore, methods for the prediction of soil sorption of new agrochemical candidates, as well as for the rationalization of the molecular characteristics responsible for a given sorption profile, are extremely beneficial for the environment. A quantitative structure-property relationship method based on chemical structure images as molecular descriptors provided a reliable model for the soil sorption prediction of 24 widely used organophosphorus insecticides. By means of contour maps obtained from the partial least squares regression coefficients and the variable importance in projection scores, key molecular moieties were targeted for possible structural modification, in order to obtain novel and more environmentally friendly insecticide candidates. The image-based descriptors applied encode molecular arrangement, atom connectivity, group size, and polarity; consequently, the findings in this work cannot be achieved by a simple relationship with hydrophobicity, usually described by the octanol-water partition coefficient. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Kim, Kyoohyun; Park, Yongkeun
2017-05-01
Optical trapping can manipulate the three-dimensional (3D) motion of spherical particles based on the simple prediction of optical forces and the responding motion of samples. However, controlling the 3D behaviour of non-spherical particles with arbitrary orientations is extremely challenging, due to experimental difficulties and extensive computations. Here, we achieve the real-time optical control of arbitrarily shaped particles by combining the wavefront shaping of a trapping beam and measurements of the 3D refractive index distribution of samples. Engineering the 3D light field distribution of a trapping beam based on the measured 3D refractive index map of samples generates a light mould, which can manipulate colloidal and biological samples with arbitrary orientations and/or shapes. The present method provides stable control of the orientation and assembly of arbitrarily shaped particles without knowing a priori information about the sample geometry. The proposed method can be directly applied in biophotonics and soft matter physics.
a New Gas Correlation Radiometer for Remote Sounding of Carbon Monoxide
NASA Astrophysics Data System (ADS)
Tikhomirov, Alexey; Drummond, James
Carbon monoxide (CO) is an extremely important component of the Earth's atmosphere, since it is an indicator of air quality and plays a major role in tropospheric chemistry. Experimental data on the CO mixing ratio distribution are necessary to study the long-range transport of pollutants and are used along with models in understanding the CO budget. Remote sounding techniques from space are very advantageous for global monitoring of CO. The gas correlation radiometry method has been successfully employed on a number of satellite-based instruments for remote sounding of atmospheric gases for several decades. In this report a new concept of gas correlation radiometer for remote sounding of carbon monoxide from space is described. A length-modulated cell, used for the first time on the MOPITT instrument, coupled with a static dual-detector-per-channel architecture, underlies the optical design of the new sounder. The main goal of the design is to produce an extremely simple and compact system which will in turn lead to a small space instrument. A laboratory prototype of the radiometer has been built at Dalhousie University. Its characteristics are being investigated to verify the new concept. The sources of optical imbalance will be discussed, as well as the methods for optical imbalance characterization and minimization. The results of the radiometer calibration and laboratory measurements of CO are presented. This work is supported by the Canadian Space Agency, the Canadian Foundation for Innovation, the Atlantic Innovation Fund/Nova Scotia Research Innovation Trust and Dalhousie University.
A regional strategy for ecological sustainability: A case study in Southwest China.
Wu, Xue; Liu, Shiliang; Cheng, Fangyan; Hou, Xiaoyun; Zhang, Yueqiu; Dong, Shikui; Liu, Guohua
2018-03-01
Partitioning, a method considering environmental protection and development potential, is an effective way to provide regional management strategies to maintain ecological sustainability. In this study, we provide a large-scale regional division approach and present a strategy for Southwest China, which also has extremely high development potential because of the "Western development" policy. Based on the superposition of 15 factors, including species diversity, pattern restriction, agricultural potential, accessibility, urbanization potential, and topographical limitations, the environmental value and development benefit in the region were quantified spatially by weighting the sum of indicators within environmental and development categories. By comparing the scores with their respective median values, the study area was divided into four different strategy zones: Conserve zones (34.94%), Construction zones (32.95%), Conflict zones (16.96%), and Low-tension zones (15.16%). The Conflict zones in which environmental value and development benefit were both higher than the respective medians were separated further into the following 5 levels: Extreme conflict (36.20%), Serious conflict (28.07%), Moderate conflict (12.28%), Minor conflict (6.55%), and Slight conflict (16.91%). We found that 9.04% of nature reserves were in Conflict zones, and thus should be given more attention. This study provides a simple and feasible method for regional partitioning, as well as comprehensive support that weighs both the environmental value and development benefit for China's current Ecological Red Line and space planning and for regional management in similar situations. Copyright © 2017 Elsevier B.V. All rights reserved.
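The median-split zoning step described above can be sketched as follows. This is a minimal illustration on synthetic scores, not the authors' 15-factor weighting; the variable names and random inputs are hypothetical.

```python
import numpy as np

def classify_zones(env_score, dev_score):
    """Assign each grid cell to a strategy zone by comparing its
    environmental value and development benefit to the regional medians."""
    env_med = np.median(env_score)
    dev_med = np.median(dev_score)
    zones = np.empty(env_score.shape, dtype=object)
    zones[(env_score >= env_med) & (dev_score < dev_med)] = "Conserve"
    zones[(env_score < env_med) & (dev_score >= dev_med)] = "Construction"
    zones[(env_score >= env_med) & (dev_score >= dev_med)] = "Conflict"
    zones[(env_score < env_med) & (dev_score < dev_med)] = "Low-tension"
    return zones

# Hypothetical per-cell composite scores standing in for the weighted indicator sums.
rng = np.random.default_rng(0)
env = rng.random(1000)
dev = rng.random(1000)
zones = classify_zones(env, dev)
```

Cells in the "Conflict" quadrant could then be subdivided further (e.g. by quantiles of the combined score) to mirror the paper's five conflict levels.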
NASA Astrophysics Data System (ADS)
Zhao, Guangju; Zhai, Jianqing; Tian, Peng; Zhang, Limei; Mu, Xingmin; An, Zhengfeng; Han, Mengwei
2017-08-01
Assessing regional patterns and trends in extreme precipitation is crucial for facilitating flood control and drought adaptation because extreme climate events have more damaging impacts on society and ecosystems than simple shifts in the mean values. In this study, we employed daily precipitation data from 231 climate stations spanning 1961 to 2014 to explore the changes in precipitation extremes on the Loess Plateau, China. Nine of the 12 extreme precipitation indices suggested decreasing trends, and only the annual total wet-day precipitation (PRCPTOT) and R10 declined significantly (-0.69 mm/a and -0.023 days/a, respectively, at the 95% confidence level). The spatial patterns of all the extreme precipitation indices indicated mixed trends on the Loess Plateau, with decreasing trends in the precipitation extremes at the majority of the stations examined in the Fen-Wei River valley and the high-plain plateau. Most of the extreme precipitation indices showed apparent regional differences, whereas R25 and R20 had spatially similar patterns on the Loess Plateau, with many stations revealing no trends. In addition, we found a potential decreasing trend in rainfall amounts and rainy days and increasing trends in rainfall intensities and storm frequencies in some regions, owing to increasing precipitation events in recent years. The relationships between extreme rainfall events and atmospheric circulation indices suggest that the weakening trend in the East Asian summer monsoon has limited the northward extension of the rainfall belt into northern China, thereby leading to a decrease in rainfall on the Loess Plateau.
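A trend such as the reported -0.69 mm/a decline in PRCPTOT can be estimated with an ordinary least-squares fit and tested for significance. The sketch below uses a synthetic station series with a built-in decline; the data are invented for illustration, and the paper's own trend tests may differ (e.g. Mann-Kendall).

```python
import numpy as np
from scipy import stats

# Hypothetical annual PRCPTOT series (mm) for 1961-2014: a known -0.69 mm/a
# trend plus noise; a real analysis would use observed station data.
years = np.arange(1961, 2015)
rng = np.random.default_rng(1)
prcptot = 450.0 - 0.69 * (years - years[0]) + rng.normal(0.0, 5.0, years.size)

# Ordinary least-squares trend with its p-value.
res = stats.linregress(years, prcptot)
slope_mm_per_year = res.slope          # estimated trend, mm/a
significant_95 = res.pvalue < 0.05     # significant at the 95% confidence level?
```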
[Management and orientation of a hand laceration].
Masmejean, Emmanuel
2013-11-01
Proper management and orientation of a hand laceration by the general physician is essential. Anatomical knowledge helps to judge, after examination, the need for surgical exploration under local anesthesia. The stakes are both prognostic and economic. Three clinical pictures are distinguished: the simple superficial wound requiring only a clinical check at 2 days' follow-up, the dubious wound that needs to be referred to a specialized center, and the wound requiring care in an emergency hand unit. Extremely urgent wounds are devascularizations, amputations and high-pressure injection injuries. Bites and wounds along a tendon path require surgical exploration. The bandage should be as simple as possible in order to allow early motion. No antibiotic is given preventively except for bites, open fractures and/or delayed treatment. Outpatient surgery under local anesthesia simplifies management.
Simple Predictions Fueled by Capacity Limitations: When Are They Successful?
ERIC Educational Resources Information Center
Gaissmaier, Wolfgang; Schooler, Lael J.; Rieskamp, Jorg
2006-01-01
Counterintuitively, Y. Kareev, I. Lieberman, and M. Lev (1997) found that a lower short-term memory capacity benefits performance on a correlation detection task. They assumed that people with low short-term memory capacity (low spans) perceived the correlations as more extreme because they relied on smaller samples, which are known to exaggerate…
ERIC Educational Resources Information Center
Vollmer, Michael; Mollmann, Klaus-Peter
2012-01-01
We present fascinating simple demonstration experiments recorded with high-speed cameras in the field of fluid dynamics. Examples include oscillations of falling droplets, effects happening upon impact of a liquid droplet into a liquid, the disintegration of extremely large droplets in free fall and the consequences of incompressibility. (Contains…
Tracking Hazard Analysis Data in a Jungle of Changing Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Robin S.; Young, Jonathan
2006-05-16
Tracking hazard analysis data during the 'life cycle' of a project can be an extremely complicated task. However, a few simple rules, used consistently, can give you the edge that will save countless headaches and provide the information that will help integrate the hazard analysis and design activities even if performed in parallel.
Plains Culture Area. Native American Curriculum Series.
ERIC Educational Resources Information Center
Ross, Cathy; Fernandes, Roger
One in a series of Native American instructional materials, this booklet introduces elementary students to the tribes of the plains culture area, extending from the Rocky Mountains to the Mississippi River and from Texas to Canada. Written in simple language, the booklet begins with a brief description of the region--its extreme climate and the…
Conservation strategies for forest gene resources
F. Thomas Ledig
1986-01-01
Gene conservation has three facets: (1) the maintenance of diversity in production plantations to buffer against vulnerability to pests and climatic extremes; (2) the preservation of genes for their future value in breeding; (3) the protection of species to promote ecosystem stability. Maintaining diversity as a hedge against damaging agents is a simple strategy in...
Changing Fonts in Education: How the Benefits Vary with Ability and Dyslexia
ERIC Educational Resources Information Center
French, M. M. J.; Blood, Arabella; Bright, N. D.; Futak, Dez; Grohmann, M. J.; Hasthorpe, Alex; Heritage, John; Poland, Remy L.; Reece, Simon; Tabor, Jennifer
2013-01-01
Previous research has shown that presenting educational materials in slightly harder to read fonts than is typical engenders deeper processing. This leads to better retention and subsequent recall of information. Before this extremely simple-to-implement and cost-effective adaptation can be made routinely to educational materials, it needs to be…
Dark Energy and the Cosmological Constant: A Brief Introduction
ERIC Educational Resources Information Center
Harvey, Alex
2009-01-01
The recently observed acceleration of the expansion of the universe is a topic of intense interest. The favoured causes are the "cosmological constant" or "dark energy". The former, which appears in the Einstein equations as the term Λg_μν, provides an extremely simple, well-defined mechanism for the acceleration. However,…
Stripped-Down Poker: A Classroom Game with Signaling and Bluffing
ERIC Educational Resources Information Center
Reiley, David H.; Urbancic, Michael B.; Walker, Mark
2008-01-01
The authors present a simplified, "stripped-down" version of poker as an instructional classroom game. Although Stripped-Down Poker is extremely simple, it nevertheless provides an excellent illustration of a number of topics: signaling, bluffing, mixed strategies, the value of information, and Bayes's Rule. The authors begin with a description of…
Cardiac Arrhythmias in Experimental Syncope
1958-11-01
Cardiac arrhythmias were induced by stress procedures and were frequently induced by respiratory maneuvers without syncope. Intravenous administration of atropine apparently prevented recurrence of cardiac arrhythmia induced by respiratory maneuvers. Significant cardiac arrhythmia was also noted in simple orthostatic syncope. Loss of consciousness presents a…
Hot spots of multivariate extreme anomalies in Earth observations
NASA Astrophysics Data System (ADS)
Flach, M.; Sippel, S.; Bodesheim, P.; Brenning, A.; Denzler, J.; Gans, F.; Guanche, Y.; Reichstein, M.; Rodner, E.; Mahecha, M. D.
2016-12-01
Anomalies in Earth observations might indicate data quality issues, extremes or a change in the underlying processes of a highly multivariate system. Considering the multivariate constellation of variables for extreme detection therefore yields crucial additional information over conventional univariate approaches. We highlight areas in which multivariate extreme anomalies are more likely to occur, i.e. hot spots of extremes in global atmospheric Earth observations that impact the biosphere. In addition, we present the year of the most unusual multivariate extreme between 2001 and 2013 and show that these years coincide with well-known high-impact extremes. Technically, we account for multivariate extremes using three algorithms adapted from computer science applications: an ensemble of the k-nearest-neighbours mean distance, a kernel density estimation, and an approach based on recurrences. However, the impact of atmospheric extremes on the biosphere depends largely on what is considered normal, i.e. the shape of the mean seasonal cycle and its inter-annual variability. We identify regions with similar mean seasonality by means of dimensionality reduction in order to estimate, within each region, both the 'normal' variance and robust thresholds for detecting extremes. We also account for challenges such as heteroscedasticity in northern latitudes. Apart from hot-spot areas, anomalies that can be detected only by a multivariate approach, and not by a simple univariate one, are of particular interest, since such anomalous constellations of atmospheric variables matter when they impact the biosphere. A case study of the multivariate constellation of one such anomalous part of a time series indicates that multivariate anomaly detection can provide novel insights into Earth observations.
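A minimal sketch of one of the three detectors named above, the k-nearest-neighbours mean-distance score, might look as follows. It assumes plain Euclidean distances on a small synthetic cloud; the data and the choice of k are hypothetical, and the paper's ensemble version is more elaborate.

```python
import numpy as np

def knn_mean_distance_score(X, k=5):
    """Anomaly score = mean Euclidean distance to the k nearest neighbours.
    Points lying far from the bulk of the multivariate cloud score high."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    np.fill_diagonal(d2, np.inf)                         # exclude self-distance
    knn = np.sort(np.sqrt(d2), axis=1)[:, :k]            # k smallest distances per point
    return knn.mean(axis=1)

# Hypothetical multivariate "observations": a Gaussian cloud plus one injected
# anomalous constellation that no single variable alone would flag as extreme.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (200, 3))
X[0] = [8.0, 8.0, 8.0]
scores = knn_mean_distance_score(X)
```

The injected point receives by far the highest score; thresholding the scores (e.g. at a high quantile) yields the detected extremes.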
Zhang, Tianmu; Shi, Changsheng; Zhao, Chenyang; Wu, Zhongbin; Chen, Jiangshan; Xie, Zhiyuan; Ma, Dongge
2018-03-07
Phosphorescent organic light-emitting diodes (OLEDs) possess the property of high efficiency but suffer serious efficiency roll-off at high luminance. Herein, we manufactured high-efficiency phosphorescent OLEDs with extremely low roll-off by locating the ultrathin emitting layer (UEML) away from the high-concentration exciton formation region. The strategic exciton management in this simple UEML architecture greatly suppressed exciton annihilation owing to the expansion of the exciton diffusion region; thus, the efficiency roll-off at high luminance was significantly improved. The resulting green phosphorescent OLEDs exhibited a maximum external quantum efficiency of 25.5%, current efficiency of 98.0 cd A⁻¹, and power efficiency of 85.4 lm W⁻¹; they still had 25.1%, 94.9 cd A⁻¹, and 55.5 lm W⁻¹ at 5000 cd m⁻² and retained 24.3%, 92.7 cd A⁻¹, and 49.3 lm W⁻¹ at 10 000 cd m⁻², respectively. Compared with the usual structures, the improvement demonstrated in this work displays potential value in applications.
NASA Astrophysics Data System (ADS)
Pradas, Marc; Pumir, Alain; Huber, Greg; Wilkinson, Michael
2017-07-01
Chaos is widely understood as being a consequence of sensitive dependence upon initial conditions. This is the result of an instability in phase space, which separates trajectories exponentially. Here, we demonstrate that this criterion should be refined. Despite their overall intrinsic instability, trajectories may be very strongly convergent in phase space over extremely long periods, as revealed by our investigation of a simple chaotic system (a realistic model for small bodies in a turbulent flow). We establish that this strong convergence is a multi-faceted phenomenon, in which the clustering is intense, widespread and balanced by lacunarity of other regions. Power laws, indicative of scale-free features, characterize the distribution of particles in the system. We use large-deviation and extreme-value statistics to explain the effect. Our results show that the interpretation of the ‘butterfly effect’ needs to be carefully qualified. We argue that the combination of mixing and clustering processes makes our specific model relevant to understanding the evolution of simple organisms. Lastly, this notion of convergent chaos, which implies the existence of conditions for which uncertainties are unexpectedly small, may also be relevant to the valuation of insurance and futures contracts.
Rate of muscle contraction is associated with cognition in women, not in men.
Tian, Qu; Osawa, Yusuke; Resnick, Susan M; Ferrucci, Luigi; Studenski, Stephanie A
2018-05-08
In older persons, lower hand grip strength is associated with poorer cognition. Little is known about how the rate of muscle contraction relates to cognition and upper extremity motor function, and sex differences are understudied. Linear regression, adjusting for age, race, education, body mass index, appendicular lean mass, and knee pain, was used to assess sex-specific cross-sectional associations of peak torque, rate of torque development (RTD) and rate of velocity development (RVD) with cognition and upper extremity motor function. In men (n=447), higher rate-adjusted peak torque and a greater RVD were associated with faster simple finger-tapping speed, and a greater RVD was associated with higher nondominant pegboard performance. In women (n=447), higher peak torque was not associated with any measures, but a greater RTD was associated with faster simple tapping speed and higher language performance, and a greater RVD was associated with higher executive function, attention, memory, and nondominant pegboard performance. In women with low isokinetic peak torque, RVD was associated with attention and memory. RVD capacity may reflect neural health, especially in women with low muscle strength.
Analysis of the dependence of extreme rainfalls
NASA Astrophysics Data System (ADS)
Padoan, Simone; Ancey, Christophe; Parlange, Marc
2010-05-01
The aim of spatial analysis is to quantitatively describe the behavior of environmental phenomena such as precipitation levels, wind speed or daily temperatures. A number of generic approaches to spatial modeling have been developed [1], but these are not necessarily ideal for handling extremal aspects given their focus on mean process levels. The areal modelling of the extremes of a natural process observed at points in space is important in environmental statistics; for example, understanding extremal spatial rainfall is crucial in flood protection. In light of recent concerns over climate change, the use of robust mathematical and statistical methods for such analyses has grown in importance. Multivariate extreme value models and the class of max-stable processes [2] have a similar asymptotic motivation to the univariate Generalized Extreme Value (GEV) distribution, but provide a general approach to modeling extreme processes that incorporates temporal or spatial dependence. Statistical methods for max-stable processes and data analyses of practical problems are discussed in [3] and [4]. This work illustrates methods for the statistical modelling of spatial extremes and gives examples of their use by means of an extremal data analysis of real Swiss precipitation levels. [1] Cressie, N. A. C. (1993). Statistics for Spatial Data. Wiley, New York. [2] de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. Springer, USA. [3] Padoan, S. A., Ribatet, M. and Sisson, S. A. (2009). Likelihood-Based Inference for Max-Stable Processes. Journal of the American Statistical Association, Theory & Methods. In press. [4] Davison, A. C. and Gholamrezaee, M. (2009). Geostatistics of extremes. Journal of the Royal Statistical Society, Series B. To appear.
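For the univariate building block mentioned above, the GEV distribution, a block-maxima fit can be sketched with `scipy.stats.genextreme`. The simulated daily series is hypothetical and stands in for real precipitation records; max-stable modelling of the spatial dependence is beyond this sketch.

```python
import numpy as np
from scipy.stats import genextreme

# Hypothetical stand-in for 60 years of daily rainfall at one station (mm).
rng = np.random.default_rng(42)
daily = rng.gamma(shape=0.7, scale=12.0, size=(60, 365))
annual_max = daily.max(axis=1)          # block maxima: one value per year

# Fit the univariate GEV to the block maxima and estimate a return level.
# Note: scipy's shape parameter c is the negative of the usual GEV xi.
shape, loc, scale = genextreme.fit(annual_max)
rl_100 = genextreme.ppf(1 - 1 / 100, shape, loc, scale)  # 100-year return level
```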
NASA Astrophysics Data System (ADS)
Yao, Jun
2010-05-01
Geo-microbes and their functions have been widespread ever since life appeared on the Earth. Geo-microbiological processes have left a rich and colorful material record in the geological bodies of the Earth, the most critical part of which is the variety of organic signatures and forms of organic matter derived from bio-organisms; oil fields are the most ideal geological locations for preserving this organic matter. Sedimentary rocks that have produced, or might produce, petroleum and natural gas under natural conditions, also known as source (parent) rocks, are the product of the interaction between the life system and the Earth's environmental system under specific geological conditions, and they integrate the whole microbial ecosystem over geological time. The microbial community in the extreme geological environment of the Dagang Oilfield is relatively simple, so it is comparatively easy to investigate the special relationship between geo-microbes and biogeochemistry there. We have gathered a large amount of information on the geological conditions and biological species of the Dagang Oilfield; moreover, we have isolated a number of archaeal strains with different extremophilic capacities from core samples collected in the field. At present, we are proceeding with cooperative research on these strains with the School of the Environment at Yale University and the Institute of the Earth's Biosphere. In the future, we will work together to carry out geological surveys in the field using first-class international equipment and methods, and to study the geological environment of the Dagang Oilfield using isotope techniques and mineral phase analysis. Meanwhile, we will undertake on-line monitoring, under laboratory conditions, of the overall microbial activity of the collected geological samples, the specific metabolic activity of the extremophilic strains, and the biomarkers produced during their metabolism.
Based on the research outlined above, we aim to reveal the mechanism of interaction between the special geological environment of the Dagang Oilfield and its extreme geo-microbes, so as to clarify the effects of the oilfield environment on the geo-microbes and, in particular, the effects of these geo-microbes on the geological environment, which may provide a practical and theoretical basis for why the Dagang Oilfield produces oil. Acknowledgement: This work was supported in part by grants from the National Outstanding Youth Research Foundation of China (40925010), an International Joint Key Project from the National Natural Science Foundation of China (40920134003), the National Natural Science Foundation of China (40873060), an International Joint Key Project from the Chinese Ministry of Science and Technology (2009DFA92830), and the 111 Project (08030).
Exploring the Extreme Universe with the Fermi Gamma-Ray Space Telescope
NASA Technical Reports Server (NTRS)
Thompson, D. J.
2010-01-01
Because high-energy gamma rays are produced by powerful sources, the Fermi Gamma-ray Space Telescope provides a window on extreme conditions in the Universe. Some key observations of the constantly changing gamma-ray sky include: (1) Gamma-rays from pulsars appear to come from a region well above the surface of the neutron star; (2) Multiwavelength studies of blazars show that simple models of jet emission are not always adequate to explain what is seen; (3) Gamma-ray bursts can constrain models of quantum gravity; (4) Cosmic-ray electrons at energies approaching 1 TeV suggest a local source for some of these particles.
An extraordinary directive radiation based on optical antimatter at near infrared.
Mocella, Vito; Dardano, Principia; Rendina, Ivo; Cabrini, Stefano
2010-11-22
In this paper we discuss and experimentally demonstrate that in a quasi-zero-average-refractive-index (QZAI) metamaterial, for a divergent source in the near infrared (λ = 1.55 μm), the light scattered out is extremely directive (Δθ(out) = 0.06°), coupling with a diffraction order of the alternating complementary-media grating. The measurements also prove, with a high degree of accuracy, the excellent vertical confinement of the beam even in the air regions of the metamaterial, in the absence of any simple vertical confinement mechanism. This extremely sensitive device works over a large contact area and opens new perspectives for integrated spectroscopy.
The reconstructive microsurgery ladder in orthopaedics.
Tintle, Scott M; Levin, L Scott
2013-03-01
Since the introduction of the operating microscope by Julius Jacobson in 1960, reconstructive microsurgery has become an integral part of extremity reconstruction and orthopaedics. During World War I, with the influx of severe extremity trauma, Harold Gillies introduced the concept of the reconstructive ladder for wound closure. The reconstructive ladder proceeds from simple to complex means of attaining wound closure. Over the last half century, microsurgery has continued to evolve and progress, and we now have a microsurgical reconstructive ladder, based upon the early work on revascularization and replantation and extending through the procedures described in this article. Copyright © 2013. Published by Elsevier Ltd.
Shen, Feng; Du, Wenbin; Kreutz, Jason E; Fok, Alice; Ismagilov, Rustem F
2010-10-21
This paper describes a SlipChip to perform digital PCR in a very simple and inexpensive format. The fluidic path for introducing the sample combined with the PCR mixture was formed using elongated wells in the two plates of the SlipChip, designed to overlap during sample loading. This fluidic path was broken up by simple slipping of the two plates, which removed the overlap among the wells and brought each well in contact with a reservoir preloaded with oil, generating 1280 reaction compartments (2.6 nL each) simultaneously. After thermal cycling, end-point fluorescence intensity was used to detect the presence of nucleic acid. Digital PCR on the SlipChip was tested quantitatively by using Staphylococcus aureus genomic DNA. As the concentration of the template DNA in the reaction mixture was diluted, the fraction of positive wells decreased as expected from the statistical analysis. No cross-contamination was observed during the experiments. At the extremes of the dynamic range of digital PCR, the standard confidence interval determined using a normal approximation of the binomial distribution is not satisfactory; therefore, statistical analysis based on the score method was used to establish the confidence intervals. The SlipChip provides a simple strategy for counting nucleic acids by using PCR. It may find use in research applications such as single-cell analysis, prenatal diagnostics, and point-of-care diagnostics, and could become valuable for diagnostics in resource-limited areas after integration with isothermal nucleic acid amplification technologies and a visual readout.
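The end-point counting statistics described above, Poisson inversion of the positive-well fraction plus a score-method (Wilson) confidence interval, can be sketched as follows. The well counts are hypothetical and this is not the authors' code; it only illustrates why the score interval behaves better than the normal approximation near the extremes of the dynamic range.

```python
import math

def digital_pcr_estimate(positive, total, z=1.96):
    """Estimate mean template copies per well from the positive-well fraction,
    with a Wilson score confidence interval on that fraction (better behaved
    at the extremes than the normal approximation)."""
    p = positive / total
    # Wilson score interval for a binomial proportion
    denom = 1 + z * z / total
    centre = (p + z * z / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    lo, hi = centre - half, centre + half
    # Poisson inversion: mean copies per well, lambda = -ln(1 - p)
    lam = -math.log(1 - p) if p < 1 else float("inf")
    return lam, (lo, hi)

# Hypothetical run: 320 of the chip's 1280 compartments fluoresce positive.
lam, (lo, hi) = digital_pcr_estimate(positive=320, total=1280)
```

Applying the same inversion to `lo` and `hi` turns the interval on the fraction into an interval on copies per well (and, with the 2.6 nL volume, on concentration).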
Concrete ensemble Kalman filters with rigorous catastrophic filter divergence
Kelly, David; Majda, Andrew J.; Tong, Xin T.
2015-01-01
The ensemble Kalman filter and ensemble square root filters are data assimilation methods used to combine high-dimensional, nonlinear dynamical models with observed data. Ensemble methods are indispensable tools in science and engineering and have enjoyed great success in geophysical sciences, because they allow for computationally cheap low-ensemble-state approximation for extremely high-dimensional turbulent forecast models. From a theoretical perspective, the dynamical properties of these methods are poorly understood. One of the central mysteries is the numerical phenomenon known as catastrophic filter divergence, whereby ensemble-state estimates explode to machine infinity, despite the true state remaining in a bounded region. In this article we provide a breakthrough insight into the phenomenon, by introducing a simple and natural forecast model that transparently exhibits catastrophic filter divergence under all ensemble methods and a large set of initializations. For this model, catastrophic filter divergence is not an artifact of numerical instability, but rather a true dynamical property of the filter. The divergence is not only validated numerically but also proven rigorously. The model cleanly illustrates mechanisms that give rise to catastrophic divergence and confirms intuitive accounts of the phenomena given in past literature. PMID:26261335
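As context for the filters discussed above, a minimal perturbed-observation ensemble Kalman analysis step might look like the following. The two-state toy system is invented for illustration and deliberately benign; it does not reproduce the paper's divergence model.

```python
import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Perturbed-observation EnKF analysis step.
    Xf: forecast ensemble (n_state, n_ens); y: observation vector;
    H: observation operator; R: observation error covariance."""
    n_ens = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)        # ensemble anomalies
    Pf = A @ A.T / (n_ens - 1)                     # sample forecast covariance
    S = H @ Pf @ H.T + R
    K = Pf @ H.T @ np.linalg.solve(S, np.eye(S.shape[0]))  # Kalman gain
    # Perturb the observation independently for each member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return Xf + K @ (Y - H @ Xf)

# Hypothetical two-state system observed directly with small noise.
rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0])
H = np.eye(2)
R = 0.1 * np.eye(2)
Xf = truth[:, None] + rng.normal(0.0, 1.0, (2, 40))   # 40-member forecast ensemble
y = truth + rng.multivariate_normal(np.zeros(2), R)
Xa = enkf_analysis(Xf, y, H, R, rng)
```

In this benign setting the analysis ensemble contracts toward the observation; the paper's point is that for certain forecast models the same update can instead drive the ensemble to machine infinity.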
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one-year extension), we have explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple-zone grids and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree, a simple hierarchical data model that approximates the samples in the regions covered by each node of the tree, together with an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
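The node-wise approximation plus error metric described above might be sketched like this: each k-d tree node stores the mean of its samples (the approximation) and the largest deviation from that mean (a simple per-region error metric). The sampled field and the leaf size are hypothetical; the grant work used real CFD grids.

```python
import numpy as np

def build_kdtree(points, values, depth=0, leaf_size=16):
    """Build a k-d hierarchy over scattered samples. Every node stores the
    mean of its samples and the maximum deviation from that mean, so a
    renderer can stop descending once the node's error is acceptable."""
    node = {
        "mean": float(values.mean()),
        "error": float(np.abs(values - values.mean()).max()),
        "n": len(values),
    }
    if len(values) > leaf_size:
        axis = depth % points.shape[1]            # cycle through the spatial axes
        order = np.argsort(points[:, axis])       # median split along this axis
        mid = len(order) // 2
        lo, hi = order[:mid], order[mid:]
        node["children"] = (
            build_kdtree(points[lo], values[lo], depth + 1, leaf_size),
            build_kdtree(points[hi], values[hi], depth + 1, leaf_size),
        )
    return node

# Hypothetical scalar field sampled at 500 scattered 3D points.
rng = np.random.default_rng(0)
pts = rng.random((500, 3))
vals = pts[:, 0] ** 2
tree = build_kdtree(pts, vals)
```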
NASA Astrophysics Data System (ADS)
Cao, Qian; Thawait, Gaurav; Gang, Grace J.; Zbijewski, Wojciech; Reigel, Thomas; Brown, Tyler; Corner, Brian; Demehri, Shadpour; Siewerdsen, Jeffrey H.
2015-02-01
Joint space morphology can be indicative of the risk, presence, progression, and/or treatment response of disease or trauma. We describe a novel methodology of characterizing joint space morphology in high-resolution 3D images (e.g. cone-beam CT (CBCT)) using a model based on elementary electrostatics that overcomes a variety of basic limitations of existing 2D and 3D methods. The method models each surface of a joint as a conductor at fixed electrostatic potential and characterizes the intra-articular space in terms of the electric field lines resulting from the solution of Gauss’ Law and the Laplace equation. As a test case, the method was applied to discrimination of healthy and osteoarthritic subjects (N = 39) in 3D images of the knee acquired on an extremity CBCT system. The method demonstrated improved diagnostic performance (area under the receiver operating characteristic curve, AUC > 0.98) compared to simpler methods of quantitative measurement and qualitative image-based assessment by three expert musculoskeletal radiologists (AUC = 0.87, p-value = 0.007). The method is applicable to simple (e.g. the knee or elbow) or multi-axial joints (e.g. the wrist or ankle) and may provide a useful means of quantitatively assessing a variety of joint pathologies.
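The electrostatic model can be illustrated with a toy Laplace solve: treat the two articular surfaces as conductors held at potentials 1 and 0 and relax the potential in the gap; the field of the converged solution spans the joint space. The flat-plate geometry and grid below are hypothetical and far simpler than a real 3D joint.

```python
import numpy as np

def solve_laplace(mask_top, mask_bot, shape, n_iter=2000):
    """Jacobi relaxation for the 2D Laplace equation. mask_top / mask_bot
    mark cells on the two 'conductor' surfaces, held at potentials 1 and 0;
    the potential in the gap between them relaxes freely."""
    phi = np.zeros(shape)
    for _ in range(n_iter):
        phi[mask_top] = 1.0
        phi[mask_bot] = 0.0
        phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                                  phi[1:-1, :-2] + phi[1:-1, 2:])
    phi[mask_top] = 1.0
    phi[mask_bot] = 0.0
    return phi

# Toy 'joint': two flat surfaces at rows 10 and 30 of a 41x41 grid.
shape = (41, 41)
top = np.zeros(shape, bool); top[10, :] = True
bot = np.zeros(shape, bool); bot[30, :] = True
phi = solve_laplace(top, bot, shape)
Ey, Ex = np.gradient(-phi)   # field components; lines run from row 10 to row 30
```

Tracing streamlines of (Ex, Ey) from one surface to the other gives the curve lengths the paper uses to characterize the intra-articular space.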
Gynecomastia: the horizontal ellipse method for its correction.
Gheita, Alaa
2008-09-01
Gynecomastia is an extremely disturbing deformity affecting males, especially when it occurs in young subjects. Such subjects generally have no hormonal anomalies, and thus either liposuction or surgical intervention, depending on the type and consistency of the breast, is required for treatment. If there is slight hypertrophy alone with no ptosis, then subcutaneous mastectomy is usually sufficient. However, when hypertrophy and/or ptosis is present, corrective surgery on the skin and breast is mandatory to obtain a good cosmetic result. Most of the procedures suggested for reduction of the male breast are derived from reduction mammaplasty methods used for females. They have some disadvantages, mainly the multiple scars, which remain apparent in males, an unusual shape, and a lack of symmetry with regard to the size of the breasts and/or the nipple position. The author presents a new, simple method that has proven superior to the previous methods described so far. It consists of a horizontal excision ellipse of the breast's redundant skin and deep excess tissue, and a superior pedicle flap carrying the areola-nipple complex to its new site on the chest wall. The method described yields excellent shape, symmetry, and minimal scars. The new method for treating gynecomastia is described in detail, its early and late operative results are shown, and its advantages are discussed.
TCC: an R package for comparing tag count data with robust normalization strategies
2013-01-01
Background: Differential expression analysis based on “next-generation” sequencing technologies is a fundamental means of studying RNA expression. We recently developed a multi-step normalization method (called TbT) for two-group RNA-seq data with replicates and demonstrated that the statistical methods available in four R packages (edgeR, DESeq, baySeq, and NBPSeq) together with TbT can produce a well-ranked gene list in which true differentially expressed genes (DEGs) are top-ranked and non-DEGs are bottom-ranked. However, the advantages of the current TbT method come at the cost of a huge computation time. Moreover, the R packages did not have normalization methods based on such a multi-step strategy. Results: TCC (an acronym for Tag Count Comparison) is an R package that provides a series of functions for differential expression analysis of tag count data. The package incorporates multi-step normalization methods, whose strategy is to remove potential DEGs before performing the data normalization. The normalization function based on this DEG elimination strategy (DEGES) includes (i) the original TbT method based on DEGES for two-group data with or without replicates, (ii) much faster methods for two-group data with or without replicates, and (iii) methods for multi-group comparison. TCC provides a simple unified interface to perform such analyses with combinations of functions provided by edgeR, DESeq, and baySeq. Additionally, a function for generating simulation data under various conditions and alternative DEGES procedures consisting of functions in the existing packages are provided. Bioinformatics scientists can use TCC to evaluate their methods, and biologists familiar with other R packages can easily learn what is done in TCC. Conclusion: DEGES in TCC is essential for accurate normalization of tag count data, especially when up- and down-regulated DEGs in one of the samples are extremely biased in their number.
TCC is useful for analyzing tag count data in various scenarios ranging from unbiased to extremely biased differential expression. TCC is available at http://www.iu.a.u-tokyo.ac.jp/~kadota/TCC/ and will appear in Bioconductor (http://bioconductor.org/) from ver. 2.13. PMID:23837715
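The DEGES idea (estimate normalization factors, drop apparent DEGs, re-estimate from the rest) can be sketched compactly. The following Python fragment is our own illustrative simplification, not the TCC API: a median log-ratio stands in for the package's normalization factors, and a simple deviation cutoff stands in for its statistical DEG tests.

```python
import math
from statistics import median

def deges_factors(a, b, frac=0.3, iters=3):
    """DEGES-style normalization sketch for a two-sample comparison:
    (1) compute per-gene log2 ratios between the samples,
    (2) take the median ratio over currently kept genes as the size factor,
    (3) drop the `frac` of genes deviating most from it (the "potential
        DEGs"), and re-estimate; iterate.
    a, b: per-gene read counts for the two samples."""
    lr = [math.log2((x + 0.5) / (y + 0.5)) for x, y in zip(a, b)]
    keep = list(range(len(a)))
    for _ in range(iters):
        m = median(lr[g] for g in keep)          # log2 size factor
        dev = sorted(range(len(a)), key=lambda g: abs(lr[g] - m))
        keep = dev[: max(1, int(len(a) * (1 - frac)))]
    return 2 ** m   # a-to-b scaling factor from non-DEG genes only

# Toy data: gene 0 is strongly up in sample a and would bias a naive
# total-count normalization; DEGES excludes it before re-estimating.
a = [500, 10, 20, 30, 40]
b = [50, 10, 20, 30, 40]
print(deges_factors(a, b, frac=0.2))   # → 1.0 (genes 1-4 agree)
```

Real DEGES replaces both stand-ins with established components (TMM or DESeq factors, and edgeR/DESeq/baySeq tests), but the loop structure is the same.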
Entropy emission properties of near-extremal Reissner-Nordström black holes
NASA Astrophysics Data System (ADS)
Hod, Shahar
2016-05-01
Bekenstein and Mayo have revealed an interesting property of evaporating (3+1)-dimensional Schwarzschild black holes: their entropy emission rates Ṡ_Sch are related to their energy emission rates P by the simple relation Ṡ_Sch = C_Sch × (P/ħ)^(1/2), where C_Sch is a numerically computed dimensionless coefficient. Remembering that (1+1)-dimensional perfect black-body emitters are characterized by the same functional relation, Ṡ_(1+1) = C_(1+1) × (P/ħ)^(1/2) [with C_(1+1) = (π/3)^(1/2)], Bekenstein and Mayo have concluded that, in their entropy emission properties, (3+1)-dimensional Schwarzschild black holes behave effectively as (1+1)-dimensional entropy emitters. Later studies have shown that this intriguing property is actually a generic feature of all radiating (D+1)-dimensional Schwarzschild black holes. One naturally wonders whether all black holes behave as simple (1+1)-dimensional entropy emitters. In order to address this interesting question, we shall study in this paper the entropy emission properties of Reissner-Nordström black holes. We shall show, in particular, that the physical properties which characterize the neutral sector of the Hawking emission spectra of these black holes can be studied analytically in the near-extremal T_BH → 0 regime (here T_BH is the Bekenstein-Hawking temperature of the black hole). We find that the Hawking radiation spectra of massless neutral scalar fields and coupled electromagnetic-gravitational fields are characterized by the nontrivial entropy-energy relations Ṡ_RN^Scalar = -C_RN^Scalar × (A·P³/ħ³)^(1/4) ln(A·P/ħ) and Ṡ_RN^(Elec-Grav) = -C_RN^(Elec-Grav) × (A⁴·P⁹/ħ⁹)^(1/10) ln(A·P/ħ) in the near-extremal T_BH → 0 limit (here C_RN^Scalar and C_RN^(Elec-Grav) are analytically calculated dimensionless coefficients and A is the surface area of the Reissner-Nordström black hole). Our analytical results therefore indicate that not all black holes behave as simple (1+1)-dimensional entropy emitters.
Yield of computed tomography of the cervical spine in cases of simple assault.
Uriell, Matthew L; Allen, Jason W; Lovasik, Brendan P; Benayoun, Marc D; Spandorfer, Robert M; Holder, Chad A
2017-01-01
Computed tomography (CT) of the cervical spine (C-spine) is routinely ordered for low-impact, non-penetrating or "simple" assault at our institution and others. Common clinical decision tools for C-spine imaging in the setting of trauma include the National Emergency X-Radiography Utilization Study (NEXUS) and the Canadian Cervical Spine Rule for Radiography (CCR). While NEXUS and CCR have served to decrease the amount of unnecessary imaging of the C-spine, overutilization of CT is still of concern. A retrospective, cross-sectional study was performed of the electronic medical record (EMR) database at an urban, Level I Trauma Center over a 6-month period for patients receiving a C-spine CT. The primary outcome of interest was prevalence of cervical spine fracture. Secondary outcomes of interest included appropriateness of C-spine imaging after retrospective application of NEXUS and CCR. The hypothesis was that fracture rates within this patient population would be extremely low. No C-spine fractures were identified in the 460 patients who met inclusion criteria. Approximately 29% of patients did not warrant imaging by CCR, and 25% by NEXUS. Of note, approximately 44% of patients were indeterminate for whether imaging was warranted by CCR, with the most common reason being lack of assessment for active neck rotation. Cervical spine CT is overutilized in the setting of simple assault, despite established clinical decision rules. With no fractures identified regardless of other factors, the likelihood that a CT of the cervical spine will identify clinically significant findings in the setting of "simple" assault is extremely low, approaching zero. At minimum, adherence to CCR and NEXUS within this patient population would serve to reduce both imaging costs and population radiation dose exposure. Copyright © 2016 Elsevier Ltd. All rights reserved.
A new method of sweat testing: the CF Quantum® sweat test.
Rock, Michael J; Makholm, Linda; Eickhoff, Jens
2014-09-01
Conventional methods of sweat testing are time-consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and tests the diagnostic accuracy and analytic validity of the CFQT. Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficient of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. In one center of this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher with the CFQT method (16.5%) than with conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
Risk assessment of tropical cyclone rainfall flooding in the Delaware River Basin
NASA Astrophysics Data System (ADS)
Lu, P.; Lin, N.; Smith, J. A.; Emanuel, K.
2016-12-01
Rainfall-induced inland flooding is a leading cause of death, injury, and property damage from tropical cyclones (TCs). In the context of climate change, it has been shown that extreme precipitation from TCs is likely to increase during the 21st century. Assessing the long-term risk of inland flooding associated with landfalling TCs is therefore an important task. Standard risk assessment techniques, which are based on observations from rain gauges and stream gauges, are not broadly applicable to TC-induced flooding, since TCs are rare, extreme events with very limited historical observations at any specific location. Also, rain gauges and stream gauges can hardly capture the complex spatial variation of TC rainfall and flooding. Furthermore, the utility of historically based assessments is compromised by climate change. Regional dynamical downscaling models can resolve many features of TC precipitation. In terms of risk assessment, however, it is computationally demanding to run such models to obtain a long-term climatology of TC-induced flooding. Here we apply a computationally efficient climatological-hydrological method to assess the risk of inland flooding associated with landfalling TCs. It includes: 1) a deterministic TC climatology modeling method to generate large numbers of synthetic TCs with physically correlated characteristics (i.e., track, intensity, size) under observed and projected climates; 2) a simple physics-based tropical cyclone rainfall model which is able to simulate rainfall fields associated with each synthetic storm; and 3) a hydrologic modeling system that takes in rainfall fields to simulate flood peaks over an entire drainage basin. We will present results of this method applied to the Delaware River Basin in the mid-Atlantic US.
NASA Astrophysics Data System (ADS)
Bravo, Mikel; Angulo-Vinuesa, Xabier; Martin-Lopez, Sonia; Lopez-Amo, Manuel; Gonzalez-Herraez, Miguel
2013-05-01
High-Q resonators have been widely used for sensing purposes. High Q factors normally lead to sharp spectral peaks, which accordingly provide strong sensitivity in spectral interrogation methods. In this work we employ a low-Q ring resonator to develop a high-sensitivity, sub-micrometric-resolution displacement sensor. We use the slow-light effects occurring close to the critical coupling regime to achieve high sensitivity in the device. By tuning the losses in the cavity close to critical coupling, extremely high group delay variations can be achieved, which in turn introduce strong enhancements of the absorption of the structure. We first validate the concept using an Optical Vector Analyzer (OVA) and then propose a simple functional scheme for achieving low-cost interrogation of sensors of this kind.
Finite temperature properties of clusters by replica exchange metadynamics: the water nonamer.
Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xin-Gao
2011-03-02
We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. On the basis of a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic, or especially configurational, the latter often forgotten in many cluster studies, are automatically included. For the present demonstration, we choose the water nonamer (H2O)9, an extremely simple cluster, which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.
Finite Temperature Properties of Clusters by Replica Exchange Metadynamics: The Water Nonamer
NASA Astrophysics Data System (ADS)
Zhai, Yingteng; Laio, Alessandro; Tosatti, Erio; Gong, Xingao
2012-02-01
We introduce an approach for the accurate calculation of thermal properties of classical nanoclusters. Based on a recently developed enhanced sampling technique, replica exchange metadynamics, the method yields the true free energy of each relevant cluster structure, directly sampling its basin and measuring its occupancy in full equilibrium. All entropy sources, whether vibrational, rotational anharmonic and especially configurational -- the latter often forgotten in many cluster studies -- are automatically included. For the present demonstration we choose the water nonamer (H2O)9, an extremely simple cluster which nonetheless displays a sufficient complexity and interesting physics in its relevant structure spectrum. Within a standard TIP4P potential description of water, we find that the nonamer second relevant structure possesses a higher configurational entropy than the first, so that the two free energies surprisingly cross for increasing temperature.
NASA Technical Reports Server (NTRS)
Bhandari, P.; Wu, Y. C.; Roschke, E. J.
1989-01-01
A simple solar flux calculation algorithm for a cylindrical cavity type solar receiver has been developed and implemented on an IBM PC-AT. Using cone optics, the contour error method is utilized to handle the slope error of a paraboloidal concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axisymmetric receiver geometries.
Spectrophotometry of cerebrospinal fluid in subacute and chronic subdural haematomas
Kjellin, K. G.; Steiner, L.
1974-01-01
Spectrophotometric examinations were performed on cerebrospinal and subdural fluids in subacute (five patients) and chronic (20 patients) subdural haematomas, with special reference to the diagnostic aid of CSF spectrophotometry. Spectrophotometric xanthochromia of haemorrhagic origin was found in all CSFs examined, while definite visible xanthochromia was observed in only 28% and the CSF was judged as colourless in 52% of those cases. Characteristic bleeding patterns were found spectrophotometrically in all the 20 CSFs examined within 24 hours after lumbar puncture, haematoma patterns being detected in 90-95% of the cases. In many cases the electrophoretically separated protein fractions of CSF and subdural fluids were spectrophotometrically examined. In conclusion, CSF spectrophotometry is a simple, fast, and extremely sensitive method, which in our opinion should be used routinely in the diagnosis of suspected subdural haematomas, if lumbar puncture is not contraindicated. PMID:4140892
CRISPR/Cas9 mediates efficient conditional mutagenesis in Drosophila.
Xue, Zhaoyu; Wu, Menghua; Wen, Kejia; Ren, Menda; Long, Li; Zhang, Xuedi; Gao, Guanjun
2014-09-05
Existing transgenic RNA interference (RNAi) methods greatly facilitate functional genome studies via controlled silencing of targeted mRNA in Drosophila. Although the RNAi approach is extremely powerful, concerns still linger about its low efficiency. Here, we developed a CRISPR/Cas9-mediated conditional mutagenesis system by combining tissue-specific expression of Cas9 driven by the Gal4/upstream activating site system with various ubiquitously expressed guide RNA transgenes to effectively inactivate gene expression in a temporally and spatially controlled manner. Furthermore, by including multiple guide RNAs in a transgenic vector to target a single gene, we achieved a high degree of gene mutagenesis in specific tissues. The CRISPR/Cas9-mediated conditional mutagenesis system provides a simple and effective tool for gene function analysis, and complements the existing RNAi approach. Copyright © 2014 Xue et al.
Extreme current fluctuations in lattice gases: Beyond nonequilibrium steady states
NASA Astrophysics Data System (ADS)
Meerson, Baruch; Sasorov, Pavel V.
2014-01-01
We use the macroscopic fluctuation theory (MFT) to study large current fluctuations in nonstationary diffusive lattice gases. We identify two universality classes of these fluctuations, which we call elliptic and hyperbolic. They emerge in the limit when the deterministic mass flux is small compared to the mass flux due to the shot noise. The two classes are determined by the sign of the compressibility of the effective fluid obtained by mapping the MFT into an inviscid hydrodynamics. An example of the elliptic class is the symmetric simple exclusion process, where, for some initial conditions, we can solve the effective hydrodynamics exactly. This leads to the super-Gaussian extreme current statistics conjectured by Derrida and Gerschenfeld [J. Stat. Phys. 137, 978 (2009), 10.1007/s10955-009-9830-1] and yields the optimal path of the system. For models of the hyperbolic class, the deterministic mass flux cannot be neglected, leading to a different extreme current statistics.
Persistence Mapping Using EUV Solar Imager Data
NASA Technical Reports Server (NTRS)
Thompson, B. J.; Young, C. A.
2016-01-01
We describe a simple image processing technique that is useful for the visualization and depiction of gradually evolving or intermittent structures in solar physics extreme-ultraviolet imagery. The technique is an application of image segmentation, which we call "Persistence Mapping," to isolate extreme values in a data set, and is particularly useful for the problem of capturing phenomena that are evolving in both space and time. While integration or "time-lapse" imaging uses the full sample (of size N), Persistence Mapping rejects (N - 1)/N of the data set and identifies the most relevant 1/N values using the following rule: if a pixel reaches an extreme value, it retains that value until that value is exceeded. The simplest examples isolate minima and maxima, but any quantile or statistic can be used. This paper demonstrates how the technique has been used to extract the dynamics in the long-term evolution of comet tails, erupting material, and EUV dimming regions.
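The retention rule above amounts to a running extreme along the time axis of the image stack. A minimal sketch in Python/NumPy (the synthetic stack is our own stand-in for EUV imagery):

```python
import numpy as np

def persistence_map(frames, mode="max"):
    """Persistence Mapping: each pixel retains its extreme value until
    that value is exceeded later in the sequence.

    frames: array of shape (T, H, W). Returns an array of the same
    shape, where frame t holds the running extreme over frames 0..t."""
    op = np.maximum if mode == "max" else np.minimum
    return op.accumulate(frames, axis=0)

# Synthetic example: a transient brightening that moves across a
# 1-D "image" of 5 pixels over 4 time steps.
stack = np.zeros((4, 5))
for t in range(4):
    stack[t, t] = 1.0          # brightening at pixel t, frame t

persisted = persistence_map(stack)
print(persisted[-1])           # the final map keeps every brightening
```

Time-lapse averaging would dilute each transient by 1/T; the persistence map keeps them all at full amplitude, which is the point of the technique.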
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ray-Chaudhuri, A.K.; Ng, W.; Cerrina, F.
1995-11-01
Multilayer-coated imaging systems for extreme ultraviolet (EUV) lithography at 13 nm represent a significant challenge for alignment and characterization. The standard practice of utilizing visible light interferometry fundamentally provides an incomplete picture since this technique fails to account for phase effects induced by the multilayer coating. Thus the development of optical techniques at the functional EUV wavelength is required. We present the development of two EUV optical tests based on Foucault and Ronchi techniques. These relatively simple techniques are extremely sensitive due to the factor of 50 reduction in wavelength. Both techniques were utilized to align a Mo-Si multilayer-coated Schwarzschild camera. By varying the illumination wavelength, phase shift effects due to the interplay of multilayer coating and incident angle were uniquely detected. © 1995 American Vacuum Society.
Enhanced Sintering of β"-Al2O3/YSZ with the Sintering Aids of TiO2 and MnO2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Xiaochuan; Li, Guosheng; Kim, Jin Yong
2015-07-11
β"-Al2O3 has been the dominant choice for the electrolyte material of sodium batteries because of its high ionic conductivity, excellent stability with the electrode materials, satisfactory mechanical strength, and low material cost. To achieve adequate electrical and mechanical performance, sintering of β"-Al2O3 is typically carried out at temperatures above 1600 °C with deliberate efforts to control the phase, composition, and microstructure. Here, we report a simple method to fabricate β"-Al2O3/YSZ electrolyte at relatively low temperatures. With boehmite as the starting material, single-phase β"-Al2O3 can be achieved at temperatures as low as 1200 °C. TiO2 was found to be extremely effective as a sintering aid for the densification of β"-Al2O3, and similar behavior was observed with MnO2 for YSZ. With the addition of 2 mol% TiO2 and 5 mol% MnO2, the β"-Al2O3/YSZ composite could be densified at temperatures as low as 1400 °C with a fine microstructure and good electrical/mechanical performance. This study demonstrates a new approach to the synthesis and sintering of the β"-Al2O3/YSZ composite, representing a simple and low-cost method for fabricating high-performance β"-Al2O3/YSZ electrolyte.
Ye, Xin; Xu, Jin; Lu, Lijuan; Li, Xinxin; Fang, Xueen; Kong, Jilie
2018-08-14
The use of paper-based methods for clinical diagnostics is a rapidly expanding research topic attracting a great deal of interest. Some groups have attempted to realize an integrated nucleic acid test on a single microfluidic paper chip, including extraction, amplification, and readout functions. However, these studies were not able to overcome complex modification and fabrication requirements, long turn-around times, or the need for sophisticated equipment like pumps, thermal cyclers, or centrifuges. Here, we report an extremely simple paper-based test for the point-of-care diagnosis of rotavirus A, one of the most common pathogens causing pediatric gastroenteritis. This paper-based test could perform nucleic acid extraction within 5 min, took 25 min to amplify the target sequence, and the result was visible to the naked eye immediately afterward or quantifiable by UV-Vis absorbance. This low-cost method does not require extra equipment and is easy to use either in a lab or at the point-of-care. The detection limit for rotavirus A was found to be 1 × 10^3 copies/mL. In addition, 100% sensitivity and specificity were achieved when testing 48 clinical stool samples. In conclusion, the present paper-based test fulfills the main requirements for a point-of-care diagnostic tool, and has the potential to be applied to disease prevention, control, and precision diagnosis. Copyright © 2018 Elsevier B.V. All rights reserved.
Wealth distribution of simple exchange models coupled with extremal dynamics
NASA Astrophysics Data System (ADS)
Bagatella-Flores, N.; Rodríguez-Achach, M.; Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2015-01-01
Punctuated Equilibrium (PE) states that after long periods of evolutionary quiescence, species evolution can take place in short time intervals, where sudden differentiation makes new species emerge and drives others extinct. In this paper, we introduce and study the effect of punctuated equilibrium on two different asset exchange models: the yard sale model (YS, the winner gets a random fraction of the poorer player's wealth) and the theft and fraud model (TF, the winner gets a random fraction of the loser's wealth). The resulting wealth distribution is characterized using the Gini index. To do this, we treat PE as a perturbation applied with probability ρ and compare the resulting values of the Gini index at increasing values of ρ in both models. We find that in the TF model the Gini index decreases as the perturbation ρ increases, showing no dependence on the number of agents. For YS, in contrast, we observe a phase transition at around ρc = 0.79. For perturbations ρ < ρc the Gini index approaches one as time increases (an extreme wealth-condensation state), whereas for perturbations greater than or equal to ρc the Gini index stays below one, and the system avoids this extreme state. We show that both simple exchange models coupled with PE dynamics give more realistic results. In particular, for YS we observe a power-law decay of the wealth distribution.
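For concreteness, the two exchange rules and the Gini characterization can be sketched as follows. This is an illustrative simplification, not the authors' code: the 50/50 winner rule, the random stake fraction, and the parameters are our assumptions, and the PE perturbation is omitted.

```python
import random

def gini(w):
    """Gini index of a wealth list: 0 = equality, 1 = condensation."""
    w = sorted(w)
    n = len(w)
    cum = sum((i + 1) * x for i, x in enumerate(w))
    return (2 * cum) / (n * sum(w)) - (n + 1) / n

def exchange(wealth, model, steps, rng):
    """Pairwise asset exchange under one of the two rules above:
    'YS' (yard sale): stake is a random fraction of the POORER
    player's wealth; 'TF' (theft and fraud): stake is a random
    fraction of the LOSER's wealth. Total wealth is conserved."""
    n = len(wealth)
    for _ in range(steps):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        winner, loser = (i, j) if rng.random() < 0.5 else (j, i)
        f = rng.random()
        if model == "YS":
            stake = f * min(wealth[i], wealth[j])
        else:  # "TF"
            stake = f * wealth[loser]
        wealth[winner] += stake
        wealth[loser] -= stake
    return wealth

rng = random.Random(1)
w_ys = exchange([1.0] * 100, "YS", 20000, rng)
w_tf = exchange([1.0] * 100, "TF", 20000, rng)
print(round(gini(w_ys), 2), round(gini(w_tf), 2))
```

In the paper's setup, PE would additionally perturb the population with probability ρ at each step; the phase transition at ρc is a property of that coupled dynamics, not of the bare exchange rules shown here.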
Optimal dynamic remapping of parallel computations
NASA Technical Reports Server (NTRS)
Nicol, David M.; Reynolds, Paul F., Jr.
1987-01-01
A large class of computations are characterized by a sequence of phases, with phase changes occurring unpredictably. The decision problem was considered regarding the remapping of workload to processors in a parallel computation when the utility of remapping and the future behavior of the workload are uncertain, and phases exhibit stable execution requirements during a given phase, but requirements may change radically between phases. For these problems a workload assignment generated for one phase may hinder performance during the next phase. This problem is treated formally for a probabilistic model of computation with at most two phases. The fundamental problem of balancing the expected remapping performance gain against the delay cost was addressed. Stochastic dynamic programming is used to show that the remapping decision policy minimizing the expected running time of the computation has an extremely simple structure. Because the gain may not be predictable, the performance of a heuristic policy that does not require estimation of the gain is examined. The heuristic method's feasibility is demonstrated by its use on an adaptive fluid dynamics code on a multiprocessor. The results suggest that except in extreme cases, the remapping decision problem is essentially that of dynamically determining whether gain can be achieved by remapping after a phase change. The results also suggest that this heuristic is applicable to computations with more than two phases.
Tensor manifold-based extreme learning machine for 2.5-D face recognition
NASA Astrophysics Data System (ADS)
Chong, Lee Ying; Ong, Thian Song; Teoh, Andrew Beng Jin
2018-01-01
We explore the use of the Gabor regional covariance matrix (GRCM), a flexible matrix-based descriptor that embeds the Gabor features in the covariance matrix, as a 2.5-D facial descriptor and an effective means of feature fusion for 2.5-D face recognition problems. Despite its promise, matching is not a trivial problem for GRCM since it is a special instance of a symmetric positive definite (SPD) matrix that resides in non-Euclidean space as a tensor manifold. This implies that GRCM is incompatible with the existing vector-based classifiers and distance matchers. Therefore, we bridge the gap of the GRCM and extreme learning machine (ELM), a vector-based classifier for the 2.5-D face recognition problem. We put forward a tensor manifold-compliant ELM and its two variants by embedding the SPD matrix randomly into reproducing kernel Hilbert space (RKHS) via tensor kernel functions. To preserve the pair-wise distance of the embedded data, we orthogonalize the random-embedded SPD matrix. Hence, classification can be done using a simple ridge regressor, an integrated component of ELM, on the random orthogonal RKHS. Experimental results show that our proposed method is able to improve the recognition performance and further enhance the computational efficiency.
Jonker, Dirk; Gustafsson, Ewa; Rolander, Bo; Arvidsson, Inger; Nordander, Catarina
2015-01-01
A new health surveillance protocol for work-related upper-extremity musculoskeletal disorders has been validated by comparing the results with a reference protocol. The studied protocol, Health Surveillance in Adverse Ergonomics Conditions (HECO), is a new version of the reference protocol modified for application in the Occupational Health Service (OHS). The HECO protocol contains both a screening part and a diagnosing part. Sixty-three employees were examined. The screening in HECO did not miss any diagnosis found when using the reference protocol, but in comparison to the reference protocol considerable time savings could be achieved. Fair to good agreement between the protocols was obtained for one or more diagnoses in neck/shoulders (86%, k = 0.62) and elbow/hands (84%, k = 0.49). Therefore, the results obtained using the HECO protocol can be compared with a reference material collected with the reference protocol, and thus provide information of the magnitude of disorders in an examined work group. Practitioner Summary: The HECO protocol is a relatively simple physical examination protocol for identification of musculoskeletal disorders in the neck and upper extremities. The protocol is a reliable and cost-effective tool for the OHS to use for occupational health surveillance in order to detect workplaces at high risk for developing musculoskeletal disorders.
Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.
2012-01-01
We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
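The event metrics described (frequency, duration, magnitude, and area of exceedances over a temperature threshold) can be computed from an hourly series with a short routine. The function name and output keys below are our own illustration, not the authors' implementation:

```python
def event_metrics(temps, threshold):
    """Summarize runs of consecutive values above `threshold`:
    frequency - number of exceedance events
    duration  - length of the longest event, in samples
    magnitude - highest excess over the threshold in any event
    area      - total degree-hours above the threshold."""
    events, run = [], []
    for t in temps:
        if t > threshold:
            run.append(t)
        elif run:
            events.append(run)
            run = []
    if run:
        events.append(run)
    return {
        "frequency": len(events),
        "duration": max((len(e) for e in events), default=0),
        "magnitude": (max(max(e) for e in events) - threshold)
                     if events else 0.0,
        "area": sum(t - threshold for e in events for t in e),
    }

# Hourly water temperatures (°C) with two exceedances over 20 °C:
series = [19.0, 21.0, 22.5, 19.5, 20.5, 19.0]
print(event_metrics(series, 20.0))
```

Run over each threshold of interest (16, 18, 20, 22 °C in the paper) to build the predictor set for the discriminant model.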
Projections of West African summer monsoon rainfall extremes from two CORDEX models
NASA Astrophysics Data System (ADS)
Akinsanola, A. A.; Zhou, Wen
2018-05-01
Global warming has a profound impact on the vulnerable environment of West Africa; hence, robust climate projection, especially of rainfall extremes, is quite important. Based on two representative concentration pathway (RCP) scenarios, projected changes in extreme summer rainfall events over West Africa were investigated using data from the Coordinated Regional Climate Downscaling Experiment models. Eight (8) extreme rainfall indices (CDD, CWD, r10mm, r20mm, PRCPTOT, R95pTOT, rx5day, and sdii) defined by the Expert Team on Climate Change Detection and Indices were used in the study. The performance of the regional climate model (RCM) simulations was validated by comparing with GPCP and TRMM observation data sets. Results show that the RCMs reasonably reproduced the observed pattern of extreme rainfall over the region and further added significant value to the driven GCMs over some grids. Compared to the baseline period 1976-2005, future changes (2070-2099) in summer rainfall extremes under the RCP4.5 and RCP8.5 scenarios show statistically significant decreasing total rainfall (PRCPTOT), while consecutive dry days and extreme rainfall events (R95pTOT) are projected to increase significantly. There are obvious indications that simple rainfall intensity (sdii) will increase in the future. This does not amount to an increase in total rainfall but suggests a likelihood of greater intensity of rainfall events. Overall, our results project that West Africa may suffer more natural disasters such as droughts and floods in the future.
Hansen, Ulf-Peter; Rauh, Oliver; Schroeder, Indra
2016-01-01
The calculation of flux equations or current-voltage relationships in reaction kinetic models with a high number of states can be very cumbersome. Here, a recipe based on an arrow scheme is presented, which yields straightforward access to the minimum form of the flux equations and the occupation probability of the involved states in cyclic and linear reaction schemes. This is extremely simple for cyclic schemes without branches. If branches are involved, the effort of setting up the equations is somewhat higher. However, here too a straightforward recipe making use of so-called reserve factors is provided for incorporating the branches into the cyclic scheme, enabling a simple treatment of such cases as well.
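For the simplest case discussed above, a cycle without branches, the result of any such recipe can be cross-checked numerically by solving the steady-state master equation directly. A minimal sketch for a three-state cycle (this is not the arrow-scheme recipe itself, and the rate constants are invented):

```python
# Numerical cross-check for a 3-state cycle: solve the steady-state master
# equation, then read off the cycle flux. Rates k[i][j] (state i -> j) are
# invented; the arrow-scheme recipe of the paper is not reproduced here.

def steady_state(k, n=3):
    # A p = b encodes inflow - outflow = 0 per state, plus normalization
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                A[j][i] += k[i][j]      # inflow to j from i
                A[i][i] -= k[i][j]      # outflow from i
    A[n - 1] = [1.0] * n                # last row: probabilities sum to 1
    b = [0.0] * (n - 1) + [1.0]
    for c in range(n):                  # Gaussian elimination, partial pivoting
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p], b[c], b[p] = A[p], A[c], b[p], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            A[r] = [x - f * y for x, y in zip(A[r], A[c])]
            b[r] -= f * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

k = {0: {1: 1.0, 2: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {0: 1.0, 1: 1.0}}
p = steady_state(k)
flux = k[0][1] * p[0] - k[1][0] * p[1]   # net flux on the 0 -> 1 edge
print([round(pi, 3) for pi in p], round(flux, 6))
```

With these symmetric rates the occupations come out equal and the net flux vanishes, as detailed balance requires; asymmetric rates produce a nonzero cycle flux.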
Inexpensive high vacuum feedthroughs.
NASA Technical Reports Server (NTRS)
Gerber, S.; Post, D.
1973-01-01
Description of the use of rigid coaxial cable in the construction of high vacuum coaxial and coaxial push-pull rotary motion feedthroughs. This type of feedthrough is shown to be extremely cheap and simple to make and modify. It can be used for moderately high voltages and provides a continuous, well shielded, low-noise feedthrough cable in any desired configuration.
Forecast skill of synoptic conditions associated with Santa Ana winds in Southern California
Charles Jones; Francis Fujioka; Leila M.V. Carvalho
2010-01-01
Santa Ana winds (SAW) are synoptically driven mesoscale winds observed in Southern California usually during late fall and winter. Because of the complex topography of the region, SAW episodes can sometimes be extremely intense and pose significant environmental hazards, especially during wildfire incidents. A simple set of criteria was used to identify synoptic-scale...
A Simple Microscopy Assay to Teach the Processes of Phagocytosis and Exocytosis
ERIC Educational Resources Information Center
Gray, Ross; Gray, Andrew; Fite, Jessica L.; Jordan, Renee; Stark, Sarah; Naylor, Kari
2012-01-01
Phagocytosis and exocytosis are two cellular processes involving membrane dynamics. While it is easy to understand the purpose of these processes, it can be extremely difficult for students to comprehend the actual mechanisms. As membrane dynamics play a significant role in many cellular processes ranging from cell signaling to cell division to…
Photovoltaic effect in ferroelectric ceramics
NASA Technical Reports Server (NTRS)
Epstein, D. J.; Linz, A.; Jenssen, H. P.
1982-01-01
The ceramic structure was simulated in a form that is more tractable to correlation between experiment and theory. Single crystals (of barium titanate) were fabricated in a simple corrugated structure in which the pedestals of the corrugation simulated the grain while the intervening cuts could be filled with materials simulating the grain boundaries. The observed photovoltages were extremely small (100 mV).
A Simple Flotation De-Inking Experiment for the Recycling of Paper
ERIC Educational Resources Information Center
Venditti, Richard A.
2004-01-01
A laboratory exercise for the flotation de-inking of wastepaper is described, which consists of disintegrating printed wastepaper in a blender and then removing the ink or toner contaminants by pumping air bubbles through the suspension using an aquarium pump or other source of air bubbles. The exercise has proven extremely reliable and consistent in…
Communicating Our Science to Our Customers: Drug Discovery in Five Simple Experiments.
Pearson, Lesley-Anne; Foley, David William
2017-02-09
The complexities of modern drug discovery, an interdisciplinary process that often takes years and costs billions, can be extremely challenging to explain to a public audience. We present details of a 30-minute demonstrative lecture that uses well-known experiments to illustrate key concepts in drug discovery, including synthesis, assay, and metabolism.
ERIC Educational Resources Information Center
Coordination in Development, New York, NY.
This booklet was produced in response to the growing need for reliable environmental assessment techniques that can be applied to small-scale development projects. The suggested techniques emphasize low-technology environmental analysis. Although these techniques may lack precision, they can be extremely valuable in helping to assure the success…
A simple second-order digital phase-locked loop.
NASA Technical Reports Server (NTRS)
Tegnelia, C. R.
1972-01-01
A simple second-order digital phase-locked loop has been designed for the Viking Orbiter 1975 command system. Excluding analog-to-digital conversion, implementation of the loop requires only an adder/subtractor, two registers, and a correctable counter with control logic. The loop considers only the polarity of phase error and corrects system clocks according to a filtered sequence of this polarity. The loop is insensitive to input gain variation, and therefore offers the advantage of stable performance over long life. Predictable performance is guaranteed by extreme reliability of acquisition, yet in the steady state the loop produces only a slight degradation with respect to analog loop performance.
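The polarity-only correction described above can be illustrated with a toy simulation: a second-order loop that sees only the sign of the phase error and corrects through a proportional and an integral path. The gains, initial phase offset, and noise-free setup below are illustrative assumptions, not Viking Orbiter design values.

```python
# Toy "bang-bang" second-order loop in the spirit of the abstract: only the
# polarity of the phase error is observed. Gains and initial conditions are
# invented for illustration.

def run_loop(freq_offset, steps=600, kp=0.05, ki=0.005):
    phase_err = 0.4          # initial phase error (arbitrary units)
    integ = 0.0              # integral path: learns the frequency offset
    for _ in range(steps):
        s = 1.0 if phase_err >= 0 else -1.0   # polarity of phase error only
        integ += ki * s                        # second-order (integral) term
        phase_err += freq_offset - (kp * s + integ)
    return phase_err, integ

pe, integ = run_loop(0.02)
print(round(pe, 3), round(integ, 3))
```

Because only the sign of the error is used, the behaviour is unchanged by scaling the input, which is the insensitivity to input gain variation that the abstract emphasizes; the price is a small steady-state dither rather than exact lock.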
NASA Astrophysics Data System (ADS)
Liu, Xiaodong
2017-08-01
A sampling method using the scattering amplitude is proposed for shape and location reconstruction in inverse acoustic scattering problems. Only matrix multiplication is involved in the computation, so the novel sampling method is very simple to implement. With the help of the factorization of the far field operator, we establish an inf-criterion for the characterization of the underlying scatterers. This result is then used to give a lower bound of the proposed indicator functional for sampling points inside the scatterers. For sampling points outside the scatterers, we show that the indicator functional decays like the Bessel functions as the sampling point moves away from the boundary of the scatterers. We also show that the proposed indicator functional depends continuously on the scattering amplitude, which further implies that the novel sampling method is extremely stable with respect to errors in the data. Unlike classical sampling methods such as the linear sampling method or the factorization method, from the numerical point of view the novel indicator takes its maximum near the boundary of the underlying target and decays like the Bessel functions as the sampling points move away from the boundary. The numerical simulations also show that the proposed sampling method can handle the multiple, multiscale case, even when the different components are close to each other.
Jorgensen, Martin Gronbech; Paramanathan, Sentha; Ryg, Jesper; Masud, Tahir; Andersen, Stig
2015-07-10
Reaction time (RT) has been associated with falls in older adults, but is not routinely tested in clinical practice. A simple, portable, inexpensive and reliable method for measuring RT is desirable for clinical settings. We therefore developed custom software that utilizes the portable and low-cost standard Nintendo Wii board (NWB) to record RT. The aims of the study were to (1) explore whether the test could differentiate old and young adults, (2) study learning effects between test sessions, and (3) examine reproducibility. A young (n = 25, age 20-35 years, mean BMI of 22.6) and an old (n = 25, age ≥65 years, mean BMI of 26.3) study population were enrolled in this within- and between-day reproducibility study. A standard NWB was used along with the custom software to obtain RT from participants in milliseconds. A mixed effect model was initially used to explore systematic differences associated with age and test session. Reproducibility was then expressed by Intraclass Correlation Coefficients (ICC), Coefficient of Variation (CV), and Typical Error (TE). The RT test was able to differentiate the old group from the young group in both the upper extremity test (p < 0.001; -170.7 ms (95%CI -209.4; -132.0)) and the lower extremity test (p < 0.001; -224.3 ms (95%CI -274.6; -173.9)). Moreover, the mixed effect model showed no significant learning effect between sessions, with the exception of the lower extremity test between sessions one and three for the young group (-35.5 ms; 4.6%; p = 0.02). Good within- and between-day reproducibility (ICC: 0.76-0.87; CV: 8.5-12.9; TE: 45.7-95.1 ms) was achieved for both the upper and lower extremity tests with the fastest of three trials in both groups. A low-cost and portable reaction test utilizing a standard Nintendo Wii board showed good reproducibility, no or little systematic learning effect across test sessions, and could differentiate between young and older adults in both upper and lower extremity tests.
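Two of the reproducibility statistics quoted above can be sketched in a few lines. The sketch uses Hopkins-style definitions (typical error as the standard deviation of test-retest differences over the square root of two, CV as the typical error relative to the grand mean); the reaction-time data are invented and the authors' exact formulas may differ.

```python
# Typical error (TE) and coefficient of variation (CV) for a test-retest
# design; definitions follow Hopkins' conventions, data are invented.
import math, statistics

def typical_error(trial1, trial2):
    diffs = [b - a for a, b in zip(trial1, trial2)]
    return statistics.stdev(diffs) / math.sqrt(2)

def cv_percent(trial1, trial2):
    grand_mean = statistics.mean(list(trial1) + list(trial2))
    return 100 * typical_error(trial1, trial2) / grand_mean

rt1 = [520, 480, 610, 555, 470]      # session 1 reaction times, ms
rt2 = [505, 490, 590, 560, 465]      # session 2 reaction times, ms
print(round(typical_error(rt1, rt2), 1), round(cv_percent(rt1, rt2), 1))
```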
Vincent, Joshua I; MacDermid, Joy C; Michlovitz, Susan L; Rafuse, Richard; Wells-Rowsell, Christina; Wong, Owen; Bisbee, Leslie
2014-01-01
Longitudinal clinical measurement study. The push-off test (POT) is a novel and simple measure of upper extremity weight-bearing that can be measured with a grip dynamometer. There are no published studies on the validity and reliability of the POT, and the relationship between upper extremity self-report activity/participation and impairment measures remains unexplored. The primary purpose of this study is to estimate the intra- and inter-rater reliability and construct validity of the POT. The secondary purpose is to estimate the relationship between upper extremity self-report activity/participation questionnaires and impairment measures. A convenience sample of 22 patients with wrist or elbow injuries were tested for POT, wrist/elbow range of motion (ROM), isometric wrist extension strength (WES), and grip strength; and completed two self-report activity/participation questionnaires: Disabilities of the Arm, Shoulder and Hand (DASH) and Work Limitations Questionnaire (WLQ-26). The POT's inter- and intra-rater reliability and construct validity were tested. Pearson's correlations were run between the impairment measures and self-report questionnaires to examine the relationships among them. The POT demonstrated high inter-rater reliability (ICC affected = 0.97; 95% C.I. 0.93-0.99; ICC unaffected = 0.85; 95% C.I. 0.68-0.94) and intra-rater reliability (ICC affected = 0.96; 95% C.I. 0.92-0.97; ICC unaffected = 0.92; 95% C.I. 0.85-0.97). The POT correlated moderately with the DASH (r = -0.47; p = 0.03). In examining the relationship between upper extremity self-reported activity/participation questionnaires and impairment measures, the strongest correlation was between the DASH and the POT (r = -0.47; p = 0.03), and none of the correlations with the other physical impairment measures reached significance. At-work disability demonstrated insignificant correlations with physical impairments.
The POT provides a reliable and easily administered quantitative measure of the ability to bear load through an injured arm. Preliminary evidence supports a moderate relationship between load bearing measured by the POT and upper extremity function measured by the DASH. 1b. Copyright © 2014 Hanley & Belfus. Published by Elsevier Inc. All rights reserved.
Probabilistic liver atlas construction.
Dura, Esther; Domingo, Juan; Ayala, Guillermo; Marti-Bonmati, Luis; Goceri, E
2017-01-13
Anatomical atlases are 3D volumes or shapes representing an organ or structure of the human body. They contain either the prototypical shape of the object of interest together with other shapes representing its statistical variations (statistical atlas) or a probability map of belonging to the object (probabilistic atlas). Probabilistic atlases are mostly built with simple estimations involving only the data at each spatial location. A new method for probabilistic atlas construction that uses a generalized linear model is proposed. This method aims to improve the estimation of the probability of being covered by the liver. Furthermore, all methods to build an atlas involve prior coregistration of the sample of shapes available. The influence of the geometrical transformation adopted for registration on the quality of the final atlas has not been sufficiently investigated. The ability of an atlas to adapt to a new case is one of the most important quality criteria that should be taken into account. The presented experiments show that some methods for atlas construction are severely affected by the previous coregistration step. We show the good performance of the new approach. Furthermore, results suggest that extremely flexible registration methods are not always beneficial, since they can reduce the variability of the atlas and hence its ability to give sensible values of probability when used as an aid in segmentation of new cases.
Extreme Precipitation in Poland in the Years 1951-2010
NASA Astrophysics Data System (ADS)
Malinowska, Miroslawa
2017-12-01
The characteristics of extreme precipitation, including the dominant trends, were analysed for eight stations located in different parts of Poland for the period 1951-2010. Five indices enabling the assessment of the intensity and frequency of both extremely dry and wet conditions were applied. The indices included the number of days with precipitation ≥10 mm·d⁻¹ (R10), maximum number of consecutive dry days (CDD), maximum 5-day precipitation total (R5d), simple daily intensity index (SDII), and the fraction of annual total precipitation due to events exceeding the 95th percentile calculated for the period 1961-1990. Annual trends were calculated using the standard linear regression method, while the fit of the model was assessed with the F-test at the 95% confidence level. The analysed changes in extreme precipitation showed mixed patterns. A significant positive trend in the number of days with precipitation ≥10 mm·d⁻¹ (R10) was observed in central Poland, while a significant negative one, in south-eastern Poland. Based on the analysis of maximum 5-day precipitation totals (R5d), statistically significant positive trends in north-western, western and eastern parts of the country were detected, while negative trends were found in the central and northeastern parts. Daily precipitation, expressed as the simple daily intensity index (SDII), increased over time in northern and central Poland. In southern Poland, the variation of the SDII index showed non-significant negative tendencies. Finally, the fraction of annual total precipitation due to events exceeding the 1961-1990 95th percentile increased at one station only, namely, in Warsaw. The indicator referring to dry conditions, i.e., the maximum number of consecutive dry days (CDD), displayed negative trends throughout the surveyed area, with the exception of Szczecin, which represents north-western Poland.
A nested observation and model approach to non linear groundwater surface water interactions.
NASA Astrophysics Data System (ADS)
van der Velde, Y.; Rozemeijer, J. C.; de Rooij, G. H.
2009-04-01
Surface water quality measurements in The Netherlands are scattered in time and space. Therefore, water quality status and its variations and trends are difficult to determine. In order to reach the water quality goals according to the European Water Framework Directive, we need to improve our understanding of the dynamics of surface water quality and the processes that affect it. In heavily drained lowland catchments, groundwater influences the discharge towards the surface water network in many complex ways. In particular, a strongly seasonal contracting and expanding system of discharging ditches and streams affects discharge and solute transport. At a tube-drained field site, the tube drain flux and the combined flux of all other flow routes toward a stretch of 45 m of surface water have been measured for a year. The groundwater levels at various locations in the field and the discharge at two nested catchment scales have also been monitored. The unique reaction of individual flow routes to rainfall events at the field site allowed us to separate the discharge at a 4 ha catchment and at a 6 km2 catchment into flow route contributions. The results of this nested experimental setup, combined with the results of a distributed hydrological model, have led to the formulation of a process model approach that focuses on the spatial variability of discharge generation driven by temporal and spatial variations in groundwater levels. The main idea of this approach is that discharge is not generated by catchment-average storages or groundwater heads, but mainly by point-scale extremes, i.e., extremely low permeability, extremely high groundwater heads, or extremely low surface elevations, all leading to catchment discharge. We focused on describing the spatial extremes in point-scale storages, and this led to a simple and measurable expression that governs the non-linear groundwater-surface water interaction.
We will present the analysis of the field site data to demonstrate the potential of nested-scale, high frequency observations. The distributed hydrological model results will be used to show transient catchment scale relations between groundwater levels and discharges. These analyses lead to a simple expression that can describe catchment scale groundwater surface water interactions.
NASA Astrophysics Data System (ADS)
von Korff Schmising, Clemens; Weder, David; Noll, Tino; Pfau, Bastian; Hennecke, Martin; Strüber, Christian; Radu, Ilie; Schneider, Michael; Staeck, Steffen; Günther, Christian M.; Lüning, Jan; Merhe, Alaa el dine; Buck, Jens; Hartmann, Gregor; Viefhaus, Jens; Treusch, Rolf; Eisebitt, Stefan
2017-05-01
A new device for polarization control at the free electron laser facility FLASH1 at DESY has been commissioned for user operation. The polarizer is based on phase retardation upon reflection off metallic mirrors. Its performance is characterized in three independent measurements and confirms the theoretical predictions of efficient and broadband generation of circularly polarized radiation in the extreme ultraviolet spectral range from 35 eV to 90 eV. The degree of circular polarization reaches up to 90% while maintaining high total transmission values exceeding 30%. The simple design of the device allows straightforward alignment for user operation and rapid switching between left and right circularly polarized radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sandhu, Arvinder S.; Gagnon, Etienne; Paul, Ariel
2006-12-15
We present evidence for a new regime of high-harmonic generation in a waveguide where bright, sub-optical-cycle, quasimonochromatic, extreme ultraviolet (EUV) light is generated via a mechanism that is relatively insensitive to carrier-envelope phase fluctuations. The interplay between the transient plasma which determines the phase matching conditions and the instantaneous laser intensity which drives harmonic generation gives rise to a new nonlinear stabilization mechanism in the waveguide, localizing the phase-matched EUV emission to within sub-optical-cycle duration. The sub-optical-cycle EUV emission generated by this mechanism can also be selectively optimized in the spectral domain by simple tuning of parameters.
Multifocal nerve lesions and LZTR1 germline mutations in segmental schwannomatosis.
Farschtschi, Said; Mautner, Victor-Felix; Pham, Mirko; Nguyen, Rosa; Kehrer-Sawatzki, Hildegard; Hutter, Sonja; Friedrich, Reinhard E; Schulz, Alexander; Morrison, Helen; Jones, David T W; Bendszus, Martin; Bäumer, Philipp
2016-10-01
Schwannomatosis is a genetic disorder characterized by the occurrence of multiple peripheral schwannomas. Segmental schwannomatosis is diagnosed when schwannomas are restricted to 1 extremity and is thought to be caused by genetic mosaicism. We studied 5 patients with segmental schwannomatosis through microstructural magnetic resonance neurography and mutation analysis of NF2, SMARCB1, and LZTR1. In 4 of 5 patients, subtle fascicular nerve lesions were detected in clinically unaffected extremities. Two patients exhibited LZTR1 germline mutations. This appears contrary to a simple concept of genetic mosaicism and suggests more complex and heterogeneous mechanisms underlying the phenotype of segmental schwannomatosis than previously thought. Ann Neurol 2016;80:625-628. © 2016 American Neurological Association.
Cabrera, Carlos; Chang, Lei; Stone, Mars; Busch, Michael; Wilson, David H
2015-11-01
Nucleic acid testing (NAT) has become the standard for high sensitivity in detecting low levels of virus. However, adoption of NAT can be cost prohibitive in low-resource settings where access to extreme sensitivity could be clinically advantageous for early detection of infection. We report development and preliminary validation of a simple, low-cost, fully automated digital p24 antigen immunoassay with the sensitivity of quantitative NAT viral load (NAT-VL) methods for detection of acute HIV infection. We developed an investigational 69-min immunoassay for p24 capsid protein for use on a novel digital analyzer on the basis of single-molecule-array technology. We evaluated the assay for sensitivity by dilution of standardized preparations of p24, cultured HIV, and preseroconversion samples. We characterized analytical performance and concordance with 2 NAT-VL methods and 2 contemporary p24 Ag/Ab combination immunoassays with dilutions of viral isolates and samples from the earliest stages of HIV infection. Analytical sensitivity was 0.0025 ng/L p24, equivalent to 60 HIV RNA copies/mL. The limit of quantification was 0.0076 ng/L, and imprecision across 10 runs was <10% for samples as low as 0.09 ng/L. Clinical specificity was 95.1%. Sensitivity concordance vs NAT-VL on dilutions of preseroconversion samples and Group M viral isolates was 100%. The digital immunoassay exhibited >4000-fold greater sensitivity than contemporary immunoassays for p24 and sensitivity equivalent to that of NAT methods for early detection of HIV. The data indicate that NAT-level sensitivity for acute HIV infection is possible with a simple, low-cost digital immunoassay. © 2015 American Association for Clinical Chemistry.
Simple Test Functions in Meshless Local Petrov-Galerkin Methods
NASA Technical Reports Server (NTRS)
Raju, Ivatury S.
2016-01-01
Two meshless local Petrov-Galerkin (MLPG) methods, based on two different trial functions but using the same simple linear test function, were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. Both methods were tested on various patch test problems and passed them successfully. The methods were then applied to various beam vibration problems and to problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is particularly attractive, as it is simple, accurate, and robust.
Application of short-data methods on extreme surge levels
NASA Astrophysics Data System (ADS)
Feng, X.
2014-12-01
Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets spanning less than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida coasts, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict extreme surge levels under warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 degree Celsius of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and the hurricane and storm surge forecasting and warning system.
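The GEV-based return-level calculation named above follows a standard closed form. A minimal sketch with invented parameters (not the study's estimates):

```python
# T-year return level from a fitted GEV distribution. For shape xi != 0 the
# level is mu + sigma/xi * (y**(-xi) - 1) with y = -ln(1 - 1/T); the xi -> 0
# limit is the Gumbel formula. Parameter values below are invented.
import math

def gev_return_level(mu, sigma, xi, T):
    """Level exceeded once in T years on average (annual-maxima framing)."""
    y = -math.log(1.0 - 1.0 / T)            # Gumbel reduced variate
    if abs(xi) < 1e-9:                      # Gumbel limit as shape -> 0
        return mu - sigma * math.log(y)
    return mu + (sigma / xi) * (y ** (-xi) - 1.0)

# hypothetical surge parameters in metres: location 1.2, scale 0.4, shape 0.1
print(round(gev_return_level(1.2, 0.4, 0.1, 100), 2))
```

A non-stationary variant, as in the abstract, would let the location and scale parameters depend on a covariate such as sea surface temperature.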
Extreme Trust Region Policy Optimization for Active Object Recognition.
Liu, Huaping; Wu, Yupei; Sun, Fuchun
2018-06-01
In this brief, we develop a deep reinforcement learning method to actively recognize objects by choosing a sequence of actions for an active camera that helps to discriminate between the objects. The method is realized using trust region policy optimization, in which the policy is realized by an extreme learning machine, leading to an efficient optimization algorithm. The experimental results on the publicly available data set show the advantages of the developed extreme trust region optimization method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kwon, Ryun-Young; Chae, Jongchul; Zhang Jie
2010-05-01
We measure the heights of EUV bright points (BPs) above the solar surface by applying a stereoscopic method to the data taken by the Solar TErrestrial RElations Observatory/SECCHI/Extreme UltraViolet Imager (EUVI). We have developed a three-dimensional reconstruction method for point-like features such as BPs using the simple principle that the position of a point in three-dimensional space is specified as the intersection of two lines of sight. From a set of data consisting of EUVI 171 Å, 195 Å, 284 Å, and 304 Å images taken on 11 days arbitrarily selected during a period of 14 months, we have identified and analyzed 210 individual BPs that were visible on all four passband images and smaller than 30 Mm. The BPs seen in the 304 Å images have an average height of 4.4 Mm, and are often associated with the legs of coronal loops. In the 171 Å, 195 Å, and 284 Å images the BPs appear loop-shaped, and have average heights of 5.1, 6.7, and 6.1 Mm, respectively. Moreover, there is a tendency for overlying loops to be filled with hotter plasmas. The average heights of BPs in the 171 Å, 195 Å, and 284 Å passbands are roughly twice the corresponding average lengths. Our results support the notion that an EUV BP represents a system of small loops with temperature stratification like flaring loops, consistent with a magnetic reconnection origin.
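The "intersection of two lines of sight" principle can be sketched as the midpoint of the shortest segment between two generally skew 3D lines. Positions and directions below are illustrative, not STEREO ephemeris data.

```python
# Triangulating a point feature from two viewpoints: minimize the distance
# |(p1 + t*d1) - (p2 + s*d2)| over t, s (a standard 2x2 linear solve) and
# return the midpoint of the closest-approach segment. Vectors are invented.

def closest_point(p1, d1, p2, d2):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b               # zero only for parallel lines
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * u for p, u in zip(p1, d1)]    # closest point on line 1
    q2 = [p + s * u for p, u in zip(p2, d2)]    # closest point on line 2
    return [(x + y) / 2 for x, y in zip(q1, q2)]

# two viewpoints whose sight lines meet at (1, 1, 0)
print(closest_point([0, 0, 0], [1, 1, 0], [2, 0, 0], [-1, 1, 0]))  # [1.0, 1.0, 0.0]
```

With real observations the two sight lines never intersect exactly, so the midpoint of the closest-approach segment is the natural estimate of the feature's 3D position.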
Estimating the extreme low-temperature event using nonparametric methods
NASA Astrophysics Data System (ADS)
D'Silva, Anisha
This thesis presents a new method of estimating the one-in-N low temperature threshold using a non-parametric statistical method called kernel density estimation applied to daily average wind-adjusted temperatures. We apply our One-in-N Algorithm to local gas distribution companies (LDCs), as they have to forecast the daily natural gas needs of their consumers. In winter, demand for natural gas is high. Extreme low temperature events are not directly related to an LDC's gas demand forecasting, but knowledge of extreme low temperatures is important to ensure that an LDC has enough capacity to meet customer demands when extreme low temperatures are experienced. We present a detailed explanation of our One-in-N Algorithm and compare it to the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution. We show that our One-in-N Algorithm estimates the one-in-N low temperature threshold more accurately than the methods using the generalized extreme value distribution, the normal distribution, and the variance-weighted composite distribution according to the root mean square error (RMSE) measure at a 5% level of significance. The One-in-N Algorithm is tested by counting the number of times the daily average wind-adjusted temperature is less than or equal to the one-in-N low temperature threshold.
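One reading of the KDE-based approach above can be sketched as follows: fit a Gaussian kernel density to the daily averages, then invert its cumulative distribution by bisection at a target probability p (e.g. p = 1/N). Silverman's bandwidth rule and the synthetic data are our assumptions, not necessarily the thesis's exact choices.

```python
# Gaussian-kernel density threshold estimation (a sketch): find the level
# whose KDE cumulative probability equals p. Bandwidth rule and data are
# invented assumptions.
import math, statistics

def bandwidth(data):                        # Silverman's rule of thumb
    return 1.06 * statistics.stdev(data) * len(data) ** -0.2

def kde_cdf(x, data, h):                    # KDE cumulative distribution
    return sum(0.5 * (1 + math.erf((x - d) / (h * math.sqrt(2)))) for d in data) / len(data)

def one_in_n_threshold(data, p):
    h = bandwidth(data)
    lo, hi = min(data) - 10 * h, max(data) + 10 * h
    for _ in range(200):                    # bisection on the monotone CDF
        mid = (lo + hi) / 2
        if kde_cdf(mid, data, h) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

temps = [-12, -8, -15, -3, -7, -10, -5, -9, -11, -6, -4, -14, -2, -8, -13]
thr = one_in_n_threshold(temps, 0.05)       # 5% lower-tail threshold, °C
print(round(thr, 2))
```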
NASA Astrophysics Data System (ADS)
Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei
2016-05-01
The regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and at providing scientific background and practical assistance in formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve these goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, the generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fitting distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates, taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were larger and the 90% error bounds wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve.
The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained, indicating two regions with the highest precipitation extremes and a large region with low precipitation extremes. However, the regions with low precipitation extremes are among the most developed and densely populated regions of the country, where floods would cause great loss of human life and property damage due to the high vulnerability. The study methods and procedure demonstrated in this paper provide a useful reference for frequency analysis of precipitation extremes in large regions, and the findings will be beneficial for flood control and management in the study area.
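The first building blocks of the LMIF method mentioned above are the sample L-moments. A minimal sketch of Hosking's unbiased estimators of the first two L-moments and the L-CV, with an invented rainfall sample:

```python
# Sample L-moments: l1 (location), l2 (scale), and the L-CV t = l2/l1 used
# in heterogeneity screening. Estimators follow Hosking's probability-
# weighted-moment formulas; the annual-maxima sample is invented.

def l_moments(sample):
    x = sorted(sample)
    n = len(x)
    b0 = sum(x) / n                                     # PWM beta_0
    b1 = sum(i * xi for i, xi in enumerate(x)) / (n * (n - 1))  # PWM beta_1
    l1 = b0
    l2 = 2 * b1 - b0
    return l1, l2, l2 / l1

annual_maxima = [52.1, 63.5, 41.8, 97.9, 58.6, 74.4, 66.0, 122.2]  # mm
l1, l2, t = l_moments(annual_maxima)
print(round(l1, 2), round(l2, 2), round(t, 3))
```

In the index-flood framing, quantiles at a site are the site's index value (e.g. l1) times a regional growth curve fitted from the pooled L-moment ratios.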
NASA Astrophysics Data System (ADS)
Odagiri, Kenta; Takatsuka, Kazuo
2009-02-01
We report a comparative study of pattern formation between cellular automata (CA) and reaction-diffusion equations (RD) applied to the morphology of bacterial colony formation. To do so, we began with an extremely simple model designed to realize autocatalytic proliferation of bacteria (denoted X) fed with nutrient (N), and their transition to an inactive (prespore) state P1 due to starvation: X+N→2X and X→P1, respectively. It was found numerically that while the CA could successfully generate rich patterns ranging from the circular fat structure to the viscous-finger-like complicated one, the naive RD reproduced only the circular pattern and failed to give a finger structure. Augmenting the RD equations with two physical factors, (i) a threshold effect in the dynamics of X+N→2X (breaking the continuity limit of RD) and (ii) internal noise with an onset threshold (breaking the inherent symmetry of RD), we found that the viscous-finger-like realistic patterns are indeed recovered by the modified RD. This highlights the important difference between CA and RD and, at the same time, clarifies the factors necessary for complicated patterns to emerge in such a surprisingly simple model system.
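The CA side of such a model can be sketched as a lattice update rule implementing X+N→2X and X→P1; the grid size, neighbourhood, and update order below are illustrative choices, not the authors' exact scheme.

```python
import random

random.seed(1)
SIZE = 21
# Cell states: 'N' nutrient only, 'X' active bacteria, 'P' inactive (prespore)
grid = [['N'] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = 'X'          # single inoculum at the centre

def step(grid):
    new = [row[:] for row in grid]
    for i in range(SIZE):
        for j in range(SIZE):
            if grid[i][j] != 'X':
                continue
            nbrs = [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < SIZE and 0 <= j + dj < SIZE]
            nutrient = [(a, b) for a, b in nbrs if grid[a][b] == 'N']
            if nutrient:
                a, b = random.choice(nutrient)
                new[a][b] = 'X'           # X + N -> 2X : colonise a nutrient neighbour
            else:
                new[i][j] = 'P'           # X -> P1 : starvation, no nutrient left nearby
    return new

for _ in range(8):
    grid = step(grid)

active = sum(row.count('X') for row in grid)
inactive = sum(row.count('P') for row in grid)
```

Even this toy version shows the paper's qualitative point: the active front keeps growing where nutrient remains, while interior cells starve into the prespore state.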
Ribosomal Binding Site Switching: An Effective Strategy for High-Throughput Cloning Constructions
Li, Yunlong; Zhang, Yong; Lu, Pei; Rayner, Simon; Chen, Shiyun
2012-01-01
Direct cloning of PCR fragments by TA cloning or blunt-end ligation are two simple methods that would greatly benefit high-throughput (HTP) cloning construction if their efficiency could be improved. In this study, we have developed a ribosomal binding site (RBS) switching strategy for direct cloning of PCR fragments. The RBS is an A/G-rich region upstream of the translational start codon and is essential for gene expression. Changing A/G to T/C in the RBS blocks its activity and thereby abolishes gene expression. Based on this property, we introduced an inactive RBS upstream of a selectable marker gene and designed a fragment insertion site within this inactive RBS. Forward and reverse insertions of specifically tailed fragments form an active and an inactive RBS, respectively, so all background from vector self-ligation and reverse fragment insertions is eliminated because the marker gene is not expressed. The effectiveness of our strategy for TA cloning and blunt-end ligation is confirmed. Application of this strategy to gene over-expression, a bacterial two-hybrid system, a bacterial one-hybrid system, and promoter bank construction is also verified. The advantages of this simple procedure, together with its low cost and high efficiency, make our strategy extremely useful in HTP cloning construction. PMID:23185557
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Ki-Joong; Lu, Ping; Culp, Jeffrey T.
Integration of optical fiber with sensitive thin films offers great potential for the realization of novel chemical sensing platforms. In this study, we present a simple design strategy and high performance of nanoporous metal–organic framework (MOF) based optical gas sensors, which enables detection of a wide range of concentrations of small molecules based upon extremely small differences in refractive indices as a function of analyte adsorption within the MOF framework. Thin and compact MOF films can be uniformly formed and tightly bound on the surface of etched optical fiber through a simple solution method which is critical for manufacturability of MOF-based sensor devices. The resulting sensors show high sensitivity/selectivity to CO2 gas relative to other small gases (H2, N2, O2, and CO) with rapid (< tens of seconds) response time and excellent reversibility, which can be well correlated to the physisorption of gases into a nanoporous MOF. We propose a refractive index based sensing mechanism for the MOF-integrated optical fiber platform which results in an amplification of inherent optical absorption present within the MOF-based sensing layer with increasing values of effective refractive index associated with adsorption of gases.
2018-01-18
A critical assessment of two types of personal UV dosimeters.
Seckmeyer, Gunther; Klingebiel, Marcus; Riechelmann, Stefan; Lohse, Insa; McKenzie, Richard L; Liley, J Ben; Allen, Martin W; Siani, Anna-Maria; Casale, Giuseppe R
2012-01-01
Doses of erythemally weighted irradiance derived from polysulphone (PS) and electronic ultraviolet (EUV) dosimeters have been compared with measurements obtained using a reference spectroradiometer. PS dosimeters showed mean absolute deviations of 26% with a maximum deviation of 44%, while the calibrated EUV dosimeters showed mean absolute deviations of 15% (maximum 33%) around noon during several test days in the northern hemisphere autumn. In the case of the EUV dosimeters, measurements with various cut-off filters showed that part of the deviation from the CIE erythema action spectrum was due to a small but significant sensitivity to visible radiation that varies between devices and may be avoided by careful preselection. Usually the method of calibrating UV sensors by direct comparison with a reference instrument leads to reliable results. However, in some circumstances the quality of measurements made with simple sensors may be overestimated. In the extreme case, a simple pyranometer can be used as a UV instrument, providing acceptable results for cloudless skies but very poor results under cloudy conditions. It is concluded that while UV dosimeters are useful for their design purpose, namely to estimate personal UV exposures, they should not be regarded as an inexpensive replacement for meteorological-grade instruments. © 2011 Wiley Periodicals, Inc. Photochemistry and Photobiology © 2011 The American Society of Photobiology.
NASA Astrophysics Data System (ADS)
Hamimi, Z.; Kassem, O. M. K.; El-Sabrouty, M. N.
2015-09-01
The rotation of rigid objects within a flowing viscous medium is a function of several factors, including the degree of non-coaxiality. The relationship between the orientation of such objects and their aspect ratio can be used in vorticity analyses in a variety of geological settings. A method for quantitative estimation of the kinematic vorticity number (Wm) has been applied using rotated rigid objects, such as quartz and feldspar objects. The kinematic vorticity number determined for the high-temperature mylonitic Abt schist in the Al Amar area, easternmost Arabian Shield, ranges from ˜0.8 to 0.9. Results from the vorticity and strain analyses indicate that deformation in the area deviated from simple shear. It is concluded that nappe stacking occurred early, during an earlier thrusting event, probably by brittle imbrication. Ductile strain was superimposed on the nappe structure at high pressure, as revealed by a penetrative subhorizontal foliation developed subparallel to the tectonic contacts with the underlying and overlying nappes. Accumulation of ductile strain during underplating was not by simple shear but involved a component of vertical shortening, which caused the subhorizontal foliation in the Al Amar area. In most cases, this foliation formed concurrently with thrust-sheet imbrication, indicating that nappe stacking was associated with vertical shortening.
NASA Astrophysics Data System (ADS)
Chen, Tsing-Chang; Yen, Ming-Cheng; Wu, Kuang-Der; Ng, Thomas
1992-08-01
The time evolution of the Indian monsoon is closely related to the locations of the northward-migrating monsoon troughs and ridges, which can be well depicted with the 30-60-day filtered 850-mb streamfunction. Thus, long-range forecasts of the large-scale low-level monsoon can be obtained from forecasts of the filtered 850-mb streamfunction. These long-range forecasts were made in this study in terms of an Auto-Regressive (AR) Moving-Average process. The historical series of the AR model were constructed from 4-month time series of the 30-60-day filtered 850-mb streamfunction [ψ̃(850 mb)]. However, the phase of the last low-frequency cycle in the ψ̃(850 mb) time series can be skewed by the bandpass filtering. To reduce this phase skewness, a simple scheme is introduced. With this phase modification of the filtered 850-mb streamfunction, we performed pilot forecast experiments for three summers with the AR forecast process. The forecast errors in the positions of the northward-propagating monsoon troughs and ridges at Day 20 are generally within the range of 1-2 days behind the observed, except in some extreme cases.
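The AR forecasting step can be illustrated on a synthetic low-frequency series; the sinusoidal series and the order-2 model below are stand-ins, not the study's actual filtered streamfunction data.

```python
import math

# Synthetic stand-in for a 30-60-day filtered streamfunction series:
# a slow oscillation with a 45-day period (the real series would come
# from bandpass-filtered 850-mb streamfunction data)
series = [math.sin(2 * math.pi * t / 45.0) for t in range(120)]

def fit_ar2(x):
    """Least-squares fit of x_t = a1*x_{t-1} + a2*x_{t-2} (2x2 normal equations)."""
    s11 = s12 = s22 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t-1] * x[t-1]; s12 += x[t-1] * x[t-2]; s22 += x[t-2] * x[t-2]
        b1  += x[t]   * x[t-1]; b2  += x[t]   * x[t-2]
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

a1, a2 = fit_ar2(series)

# Iterate the fitted model 20 days beyond the end of the record
hist = series[:]
for _ in range(20):
    hist.append(a1 * hist[-1] + a2 * hist[-2])

forecast_day20 = hist[-1]
truth_day20 = math.sin(2 * math.pi * 139 / 45.0)  # day 139 of the same oscillation
```

For a clean oscillation an AR(2) model is exact (a1 = 2cos ω, a2 = -1), which is why low-frequency filtered fields are attractive targets for this kind of statistical long-range forecast.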
Simple, empirical approach to predict neutron capture cross sections from nuclear masses
NASA Astrophysics Data System (ADS)
Couture, A.; Casten, R. F.; Cakirli, R. B.
2017-12-01
Background: Neutron capture cross sections are essential to understanding the astrophysical s and r processes, to the modeling of nuclear reactor design and performance, and to a wide variety of nuclear forensics applications. Often, cross sections are needed for nuclei where experimental measurements are difficult. Enormous effort, over many decades, has gone into attempting to develop sophisticated statistical reaction models to predict these cross sections. Such work has met with some success but is often unable to reproduce measured cross sections to better than 40%, and has limited predictive power, with predictions from different models rapidly diverging by an order of magnitude a few nucleons from the last measurement. Purpose: To develop a new approach to predicting neutron capture cross sections over broad ranges of nuclei that accounts for their values where known and has reliable predictive power, with small uncertainties, for many nuclei where they are unknown. Methods: Experimental neutron capture cross sections were compared to empirical mass observables in regions of similar structure. Results: We present an extremely simple method, based solely on empirical mass observables, that correlates neutron capture cross sections in the critical energy range from a few keV to a couple of hundred keV. We show that regional cross sections in medium and heavy mass nuclei are compactly correlated with the two-neutron separation energy. These correlations are easily used to predict unknown cross sections, often converting the usual extrapolations into more reliable interpolations. The method almost always reproduces existing data to within 25%, and estimated uncertainties are below about 40% up to 10 nucleons beyond known data. Conclusions: Neutron capture cross sections display a surprisingly strong connection to the two-neutron separation energy, a nuclear structure property.
The simple, empirical correlations uncovered provide model-independent predictions of neutron capture cross sections extending far from stability, including for nuclei of the highest sensitivity to r-process nucleosynthesis.
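The kind of regional correlation described can be sketched as a straight-line fit of the logarithm of the cross section against S2n, used to interpolate for a nucleus without data. All numbers below are invented for illustration; they are not measured values.

```python
import math

# Hypothetical (S2n [MeV], averaged capture cross section [mb]) pairs
# for a region of nuclei with similar structure -- illustrative only
data = [(12.1, 620.0), (12.8, 480.0), (13.5, 390.0), (14.3, 300.0), (15.0, 240.0)]

# Linear least-squares fit of log10(sigma) against S2n,
# the compact regional correlation the paper describes
n = len(data)
sx  = sum(s for s, _ in data)
sy  = sum(math.log10(c) for _, c in data)
sxx = sum(s * s for s, _ in data)
sxy = sum(s * math.log10(c) for s, c in data)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def predict_sigma(s2n):
    """Predict a capture cross section from the regional S2n correlation."""
    return 10 ** (intercept + slope * s2n)

# Interpolate for a nucleus inside the region with no measurement
sigma_unknown = predict_sigma(13.0)
```

Because S2n is known (or well extrapolated) far from stability, a fit like this turns a cross-section extrapolation into an interpolation in the mass observable.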
ERIC Educational Resources Information Center
Dahl, Robyn Mieko; Droser, Mary L.
2016-01-01
University earth science departments seeking to establish meaningful geoscience outreach programs often pursue large-scale, grant-funded programs. Although this type of outreach is highly successful, it is also extremely costly, and grant funding can be difficult to secure. Here, we present the Geoscience Education Outreach Program (GEOP), a…
Bryan A. Black; Daniel Griffin; Peter van der Sleen; Alan D. Wanamaker; James H. Speer; David C. Frank; David W. Stahle; Neil Pederson; Carolyn A. Copenheaver; Valerie Trouet; Shelly Griffin; Bronwyn M. Gillanders
2016-01-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time...
"In Situ" Observation of a Soap-Film Catenoid--A Simple Educational Physics Experiment
ERIC Educational Resources Information Center
Ito, Masato; Sato, Taku
2010-01-01
The solution to the Euler-Lagrange equation is an extremal functional. To understand that the functional is stationary at local extrema (maxima or minima), we propose a physics experiment that involves using a soap film to form a catenoid. A catenoid is a surface that is formed between two coaxial circular rings and is classified mathematically as…
Theory of Mind Deficits in Children with Fragile X Syndrome
ERIC Educational Resources Information Center
Cornish, K.; Burack, J. A.; Rahman, A.; Munir, F.; Russo, N.; Grant, C.
2005-01-01
Given the consistent findings of theory of mind deficits in children with autism, it would be extremely beneficial to examine the profile of theory of mind abilities in other clinical groups such as fragile X syndrome (FXS) and Down syndrome (DS). The aim of the present study was to assess whether boys with FXS are impaired in simple social…
Triboelectric-generator-driven pulse electrodeposition for micropatterning.
Zhu, Guang; Pan, Caofeng; Guo, Wenxi; Chen, Chih-Yen; Zhou, Yusheng; Yu, Ruomeng; Wang, Zhong Lin
2012-09-12
By converting ambient energy into electricity, energy harvesting is capable of at least offsetting, or even replacing, the reliance of small portable electronics on traditional power supplies such as batteries. Here we demonstrate a novel and simple generator with extremely low cost for efficiently harvesting mechanical energy that is typically present in the form of vibrations and random displacements/deformations. Owing to the coupling of contact charging and electrostatic induction, electric generation was achieved through a cycled process of contact and separation between two polymer films. A detailed theory is developed for understanding the proposed mechanism. The instantaneous electric power density reached as high as 31.2 mW/cm(3) at a maximum open-circuit voltage of 110 V. Furthermore, the generator was successfully used, without electric storage, as a direct power source for pulse electrodeposition (PED) of micro/nanocrystalline silver structures. The cathodic current efficiency reached up to 86.6%. Not only does this work present a new type of generator featuring simple fabrication, large electric output, excellent robustness, and extremely low cost, it also extends the application of energy-harvesting technology to the field of electrochemistry, with further applications including, but not limited to, pollutant degradation, corrosion protection, and water splitting.
Andromeda IV: A new local volume very metal-poor galaxy
NASA Astrophysics Data System (ADS)
Pustilnik, S. A.; Tepliakova, A. L.; Kniazev, A. Y.; Burenkov, A. N.
2008-06-01
And IV is a low surface brightness (LSB) dwarf galaxy at a distance of 6.1 Mpc, projected close to M 31. In this paper the results of spectroscopy of the two brightest HII regions of And IV with the SAO 6-m telescope (BTA) are presented. In the spectra of both, the faint [OIII] λ4363 Å line was detected, which allowed us to determine their O/H by the classical Te method. Their values of 12+log(O/H) are 7.49±0.06 and 7.55±0.23, respectively. Comparison of the direct O/H determinations with the two most reliable semi-empirical and empirical methods shows good consistency between the methods. For the And IV absolute blue magnitude, MB = -12.6, our value of O/H corresponds to the ‘standard’ relation between O/H and LB for dwarf irregular galaxies (DIGs). And IV appears to be a new representative of the extremely metal-deficient gas-rich galaxies in the Local Volume. The very large range of M(HI) for LSB galaxies with similar metallicities and luminosities indicates that simple models of LSBG chemical evolution are too limited to predict such striking diversity.
A level set method for determining critical curvatures for drainage and imbibition.
Prodanović, Masa; Bryant, Steven L
2006-12-15
An accurate description of the mechanics of pore level displacement of immiscible fluids could significantly improve the predictions from pore network models of capillary pressure-saturation curves, interfacial areas and relative permeability in real porous media. If we assume quasi-static displacement, at constant pressure and surface tension, pore scale interfaces are modeled as constant mean curvature surfaces, which are not easy to calculate. Moreover, the extremely irregular geometry of natural porous media makes it difficult to evaluate surface curvature values and corresponding geometric configurations of two fluids. Finally, accounting for the topological changes of the interface, such as splitting or merging, is nontrivial. We apply the level set method for tracking and propagating interfaces in order to robustly handle topological changes and to obtain geometrically correct interfaces. We describe a simple but robust model for determining critical curvatures for throat drainage and pore imbibition. The model is set up for quasi-static displacements but it nevertheless captures both reversible and irreversible behavior (Haines jump, pore body imbibition). The pore scale grain boundary conditions are extracted from model porous media and from imaged geometries in real rocks. The method gives quantitative agreement with measurements and with other theories and computational approaches.
Normal tissue complication probability modelling of tissue fibrosis following breast radiotherapy
NASA Astrophysics Data System (ADS)
Alexander, M. A. R.; Brooks, W. A.; Blake, S. W.
2007-04-01
Cosmetic late effects of radiotherapy, such as tissue fibrosis, are increasingly regarded as important. It is generally considered that the complication probability of a radiotherapy plan depends on dose uniformity and can be reduced by using better compensation to remove dose hotspots. This work aimed to model the effects of improved dose homogeneity on complication probability. The Lyman and relative seriality NTCP models were fitted to clinical fibrosis data for the breast collated from the literature. Breast outlines were obtained from a commercially available Rando phantom using the Osiris system. Multislice breast treatment plans were produced using a variety of compensation methods. Dose-volume histograms (DVHs) obtained for each treatment plan were reduced to simple numerical parameters using the equivalent uniform dose and effective volume DVH reduction methods. These parameters were input into the models to obtain complication probability predictions. The fitted model parameters were consistent with a parallel tissue architecture. Conventional clinical plans generally showed decreasing complication probabilities with increasing compensation sophistication. Extremely homogeneous plans representing idealized IMRT treatments showed increased complication probabilities compared with conventional planning methods, as a result of increased dose to areas receiving sub-prescription doses under conventional techniques.
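The Lyman NTCP calculation this kind of study relies on has a compact standard form: reduce the DVH to an equivalent uniform dose (EUD), then evaluate a normal cumulative distribution in the reduced variable. The parameter values and toy plans below are illustrative assumptions, not the fitted values from the paper.

```python
import math

def eud(dvh, n):
    """Equivalent uniform dose from a differential DVH.
    dvh: list of (dose_Gy, fractional_volume) bins; n: Lyman volume parameter."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

def lyman_ntcp(dvh, td50, m, n):
    """Lyman NTCP: Phi((EUD - TD50) / (m * TD50)), via the error function."""
    t = (eud(dvh, n) - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Two hypothetical breast plans with the same mean dose (50 Gy):
# perfectly homogeneous vs. one carrying a 10% hotspot volume
uniform_plan = [(50.0, 1.0)]
hotspot_plan = [(49.0, 0.9), (59.0, 0.1)]

params = dict(td50=55.0, m=0.15, n=0.8)   # illustrative parameter values only
p_uniform = lyman_ntcp(uniform_plan, **params)
p_hotspot = lyman_ntcp(hotspot_plan, **params)
```

With a large n (strong volume effect, as in a parallel architecture) the hotspot barely moves the EUD, which is the mechanism behind the paper's observation that homogeneity alone does not guarantee a lower complication probability.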
A simple encoding method for Sigma-Delta ADC based biopotential acquisition systems.
Guerrero, Federico N; Spinelli, Enrique M
2017-10-01
Sigma-Delta analogue-to-digital converters allow acquiring the full dynamic range of biomedical signals at the electrodes, resulting in less complex hardware and increased measurement robustness. However, the increased data size per sample (typically 24 bits) demands the transmission of extremely large volumes of data across the isolation barrier, thus increasing power consumption on the patient side. This problem is accentuated when a large number of channels is used, as in current 128-256-electrode biopotential acquisition systems, which usually opt for an optical fibre link to the computer. An analogous problem occurs for simpler low-power acquisition platforms that transmit data through a wireless link to a computing platform. In this paper, a low-complexity encoding method is presented that decreases sample data size losslessly while preserving the full DC-coupled signal. The method achieved an average compression ratio of 2.3, evaluated over an ECG and EMG signal bank acquired with equipment based on Sigma-Delta converters. It demands a very low processing load: a C-language implementation is presented that achieved an average execution time of 110 clock cycles on an 8-bit microcontroller.
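The paper's exact encoding is not reproduced here, but a generic lossless scheme in the same spirit, delta coding with an escape byte for large jumps, shows how slowly varying DC-coupled samples shrink from three bytes to roughly one byte each.

```python
def encode(samples):
    """Delta-encode 24-bit samples: differences in -127..127 fit in one
    byte; otherwise an escape byte 0x80 precedes the full 3-byte sample.
    Lossless, and the DC level is preserved."""
    out = bytearray()
    prev = 0
    for s in samples:
        d = s - prev
        if -127 <= d <= 127:
            out.append(d & 0xFF)           # never 0x80, since d != -128
        else:
            out.append(0x80)               # escape marker
            out += s.to_bytes(3, "big", signed=True)
        prev = s
    return bytes(out)

def decode(blob):
    samples, prev, i = [], 0, 0
    while i < len(blob):
        b = blob[i]
        if b == 0x80:                       # escaped full sample
            prev = int.from_bytes(blob[i + 1:i + 4], "big", signed=True)
            i += 4
        else:
            prev += b - 256 if b > 127 else b   # sign-extend the delta byte
            i += 1
        samples.append(prev)
    return samples

# A slowly varying signal with a large DC offset (loosely ECG-like)
sig = [100000 + (t % 50) for t in range(1000)]
blob = encode(sig)
ratio = 3 * len(sig) / len(blob)   # bytes of raw 24-bit data vs. encoded bytes
```

On such a signal nearly every sample costs one byte, so the ratio approaches 3; real biopotential data with larger sample-to-sample swings would land lower, in the neighbourhood of the 2.3 the paper reports.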
Sved, J A; Yu, H; Dominiak, B; Gilchrist, A S
2003-02-01
Long-range dispersal of a species may involve either a single long-distance movement from a core population or spreading via unobserved intermediate populations. Where the new populations originate as small propagules, genetic drift may be extreme and gene frequency or assignment methods may not prove useful in determining the relation between the core population and outbreak samples. We describe computationally simple resampling methods for use in this situation to distinguish between the different modes of dispersal. First, estimates of heterozygosity can be used to test for direct sampling from the core population and to estimate the effective size of intermediate populations. Second, a test of sharing of alleles, particularly rare alleles, can show whether outbreaks are related to each other rather than arriving as independent samples from the core population. The shared-allele statistic also serves as a genetic distance measure that is appropriate for small samples. These methods were applied to data on a fruit fly pest species, Bactrocera tryoni, which is quarantined from some horticultural areas in Australia. We concluded that the outbreaks in the quarantine zone came from a heterogeneous set of genetically differentiated populations, possibly ones that overwinter in the vicinity of the quarantine zone.
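Both statistics described, heterozygosity and allele sharing, can be computed in a few lines; the allele data below are hypothetical and the sharing measure shown is one simple variant of many.

```python
from collections import Counter

def expected_heterozygosity(alleles):
    """Unbiased expected heterozygosity from a list of alleles at one locus:
    (n/(n-1)) * (1 - sum of squared allele frequencies)."""
    n = len(alleles)
    freqs = [c / n for c in Counter(alleles).values()]
    return (n / (n - 1)) * (1.0 - sum(p * p for p in freqs))

def shared_alleles(sample_a, sample_b):
    """Proportion of distinct alleles in A also present in B; 1 minus this
    acts as a crude genetic distance suitable for small samples."""
    a, b = set(sample_a), set(sample_b)
    return len(a & b) / len(a)

core      = [1, 1, 2, 2, 3, 3, 4, 5]   # hypothetical microsatellite alleles
outbreak1 = [1, 1, 2, 3]               # drifted propagule: fewer alleles
outbreak2 = [1, 2, 2, 3]

h_core = expected_heterozygosity(core)
h_out  = expected_heterozygosity(outbreak1)
share  = shared_alleles(outbreak1, outbreak2)
```

Reduced heterozygosity relative to the core flags a bottlenecked intermediate population, while high allele sharing between outbreak samples (especially of rare alleles) suggests they are related rather than independent draws from the core.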
NASA Astrophysics Data System (ADS)
Haustein, Karsten; Otto, Friederike; Uhe, Peter; Allen, Myles; Cullen, Heidi
2016-04-01
Within the last decade, extreme weather event attribution has emerged as a new field of science and garnered increasing attention from the wider scientific community and the public. Numerous methods have been put forward to determine the contribution of anthropogenic climate change to individual extreme weather events. So far, nearly all such analyses have been done months after an event happened. First, we present our newly established method, which can assess the fraction of attributable risk (FAR) of a severe weather event due to an external driver in real time. The method builds on a large ensemble of atmosphere-only GCM/RCM simulations forced by seasonal-forecast sea surface temperatures (SSTs). Taking the UK 2013/14 winter floods as an example, we demonstrate that the change in risk for heavy rainfall during the England floods due to anthropogenic climate change is of similar magnitude using either observed or seasonal-forecast SSTs. While FAR is assumed, to a first approximation, to be independent of event-specific dynamic contributions due to anomalous circulation patterns, the risk of an event occurring under current conditions is clearly a function of the state of the atmosphere. The shorter the event, the more it is a result of chaotic internal weather variability. Hence we are interested in (1) attributing the event to thermodynamic and dynamic causes and (2) establishing a sensible time-scale over which we can make a useful and potentially robust attribution statement with regard to event-specific dynamics. Having tested the dynamic response of our model to SST conditions in January 2014, we find that observed SSTs are required to establish a discernible link between anomalous ocean temperatures and the atmospheric circulation over the North Atlantic in general and the UK in particular.
However, for extreme events occurring under strongly anomalous SST patterns associated with known low-frequency climate modes such as El Niño or La Niña, forecast SSTs can provide sufficient guidance to determine the dynamic contribution to the event on the basis of monthly mean values. No such link can be made (for the North Atlantic/Western Europe region) on shorter time-scales, unless the observed state of the circulation is taken as the reference for the model analysis (e.g. Christidis et al. 2014). We present results from our most recent attribution analysis of the December 2015 UK floods (Storms Desmond and Eva), during which we find a robust teleconnection between Pacific SSTs and North Atlantic jet-stream anomalies. This holds for both experiments, with forecast and with observed SSTs. We propose a fast and simple analysis method based on the comparison of current climatological circulation patterns with actual and natural conditions. Alternative methods are discussed and analysed regarding their potential for fast-track attribution of the role of dynamics. We also briefly revisit the issue of internal vs. forced dynamic contributions.
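The FAR quantity at the heart of such methods has a simple closed form, FAR = 1 − p_nat/p_act, estimated from exceedance counts in two large ensembles; the counts below are invented for illustration.

```python
def fraction_attributable_risk(exceed_actual, n_actual, exceed_natural, n_natural):
    """FAR = 1 - p_nat / p_act, where p_act is the probability of exceeding
    the event threshold in the actual-forcing ensemble and p_nat the same
    probability in the counterfactual natural-forcing ensemble."""
    p_act = exceed_actual / n_actual
    p_nat = exceed_natural / n_natural
    return 1.0 - p_nat / p_act

# Hypothetical counts: a rainfall threshold exceeded in 80 of 2000
# actual-forcing members but only 50 of 2000 natural-forcing members
far = fraction_attributable_risk(80, 2000, 50, 2000)
```

A FAR of 0.375 under these invented counts would read as "about 37% of the event's risk is attributable to the external driver"; in practice the counts come with sampling uncertainty that must be propagated.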
Intentional Voice Command Detection for Trigger-Free Speech Interface
NASA Astrophysics Data System (ADS)
Obuchi, Yasunari; Sumiyoshi, Takashi
In this paper we introduce a new framework of audio processing, which is essential to achieve a trigger-free speech interface for home appliances. If the speech interface works continually in real environments, it must extract occasional voice commands and reject everything else. It is extremely important to reduce the number of false alarms because the number of irrelevant inputs is much larger than the number of voice commands even for heavy users of appliances. The framework, called Intentional Voice Command Detection, is based on voice activity detection, but enhanced by various speech/audio processing techniques such as emotion recognition. The effectiveness of the proposed framework is evaluated using a newly-collected large-scale corpus. The advantages of combining various features were tested and confirmed, and the simple LDA-based classifier demonstrated acceptable performance. The effectiveness of various methods of user adaptation is also discussed.
The multilayer temporal network of public transport in Great Britain
NASA Astrophysics Data System (ADS)
Gallotti, Riccardo; Barthelemy, Marc
2015-01-01
Despite the widespread availability of information concerning public transport coming from different sources, it is extremely hard to have a complete picture, in particular at a national scale. Here, we integrate timetable data obtained from the United Kingdom open-data program together with timetables of domestic flights, and obtain a comprehensive snapshot of the temporal characteristics of the whole UK public transport system for a week in October 2010. In order to focus on multi-modal aspects of the system, we use a coarse graining procedure and define explicitly the coupling between different transport modes such as connections at airports, ferry docks, rail, metro, coach and bus stations. The resulting weighted, directed, temporal and multilayer network is provided in simple, commonly used formats, ensuring easy access and the possibility of a straightforward use of old or specifically developed methods on this new and extensive dataset.
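A temporal, multilayer edge list of the kind released supports time-respecting queries directly; the toy timetable below (a connection-scan-style earliest-arrival pass) is a minimal sketch, not the published dataset's actual format.

```python
# Each connection: (origin, destination, layer, departure_min, arrival_min)
edges = [
    ("A", "B", "bus",    0,  30),
    ("B", "C", "rail",  40,  70),
    ("A", "C", "coach",  5, 120),
    ("B", "C", "bus",   35, 110),
]

def earliest_arrival(edges, source, target, start=0):
    """Earliest arrival in a temporal multilayer edge list: scan connections
    in departure order; one is usable only if it departs at or after the
    time we reached its origin (transfers between layers are implicit)."""
    best = {source: start}
    for o, d, _layer, dep, arr in sorted(edges, key=lambda e: e[3]):
        if o in best and dep >= best[o] and arr < best.get(d, float("inf")):
            best[d] = arr
    return best.get(target)

t = earliest_arrival(edges, "A", "C")   # bus A->B then rail B->C beats the direct coach
```

The same scan works unchanged across modes because the layer is just an edge attribute, which is exactly what makes the coupled multilayer representation convenient for multi-modal analysis.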
The Torsion of Members Having Sections Common in Aircraft Construction
NASA Technical Reports Server (NTRS)
Trayer, George W; March, H W
1930-01-01
Within recent years a great variety of approximate torsion formulas and drafting-room processes have been advocated. In some of these, especially where mathematical considerations are involved, the results are extremely complex and are not generally intelligible to engineers. The principal object of this investigation was to determine, by experiment and theoretical investigation, how accurate the more common of these formulas are and on what assumptions they are founded, and, if none of the proposed methods proved reasonably accurate in practice, to produce simple, practical formulas from reasonably correct assumptions, backed by experiment. A second object was to collect in readily accessible form the most useful of the known results for the more common sections. Formulas for all the important solid sections that have yielded to mathematical treatment are listed. There follows a discussion of the torsion of tubular rods, with formulas both rigorous and approximate.
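One classical example of the "simple, practical formulas" sought is the approximate St. Venant torsion constant for a solid rectangle (Timoshenko's form), checked here against the known square-bar value of about 0.1406·s⁴.

```python
def torsion_constant_rect(a, b):
    """Approximate St. Venant torsion constant J of a solid rectangular
    section with long side a and short side b (a >= b). Classical
    engineering approximation, accurate to well under 1%:
        J ~ a*b^3 * (1/3 - 0.21*(b/a)*(1 - b^4/(12*a^4)))
    The twist per unit length is then T / (G*J)."""
    assert a >= b > 0
    return a * b**3 * (1.0 / 3.0 - 0.21 * (b / a) * (1.0 - b**4 / (12.0 * a**4)))

# Square bar of unit side: the exact series solution gives J = 0.1406 * s^4
J_square = torsion_constant_rect(1.0, 1.0)

# Thin strip limit: J approaches a*b^3/3 as a/b grows
J_strip = torsion_constant_rect(20.0, 1.0)
```

Formulas of exactly this shape are what the report contrasts against both the rigorous series solutions and the rougher drafting-room rules.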
Multi-scale silica structures for improved point of care detection
NASA Astrophysics Data System (ADS)
Lin, Sophia; Lin, Lancy; Cho, Eunbyul; Pezzani, Gaston A. O.; Khine, Michelle
2017-03-01
The need for sensitive, portable diagnostic tests at the point of care persists. We report a simple method to obtain improved detection of biomolecules by a two-fold mechanism. Silica (SiO2) is coated on pre-stressed thermoplastic shrink-wrap film. When the film retracts, the resulting micro- and nanostructures yield far-field fluorescence signal enhancements over their planar or wrinkled counterparts. Because the film shrinks by 95% in surface area, there is also a 20x concentration effect. The SiO2-structured substrate is therefore used for improved detection of labeled proteins and DNA hybridization in both fluorescence and bright-field modes. Through optical characterization studies, we attribute the fluorescence signal enhancements of 100x to increased surface density and light scattering from the rough SiO2 structures. Combined with our open-channel self-wicking microfluidics, we can achieve extremely low-cost yet sensitive point-of-care diagnostics.
Geoelectrical mapping and groundwater contamination
NASA Astrophysics Data System (ADS)
Blum, Rainer
The specific electrical resistivity of near-surface materials is mainly controlled by the groundwater content and is thus extremely sensitive to any change in ion content. Geoelectric mapping is a well-established, simple, and inexpensive technique for observing areal distributions of apparent specific electrical resistivities. These are a composite result of the true resistivities in the subsurface, and with some additional information the mapping of apparent resistivities can help to delineate low-resistivity groundwater contamination, typically observed downstream from sanitary landfills and other waste sites. The presence of other good conductors close to the surface, mainly clays, is a serious noise source and has to be sorted out by supporting observations of conductivities in wells and by geoelectric depth soundings. The method may be used to map the extent of groundwater contamination at a specific time, as well as the change of a contamination plume with time by carrying out repeated measurements. Examples of both are presented.
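Apparent resistivity in such mapping follows from the measured voltage and injected current via a geometric factor; for a Wenner array with electrode spacing a it is ρ_a = 2πa·ΔV/I. The readings below are hypothetical.

```python
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current):
    """Apparent resistivity (ohm-m) for a Wenner electrode array:
    rho_a = 2 * pi * a * (dV / I), with equal spacing a between the
    four collinear electrodes."""
    return 2.0 * math.pi * spacing_m * delta_v / current

# Hypothetical field reading: 10 m spacing, 0.25 V measured, 0.1 A injected
rho = wenner_apparent_resistivity(10.0, 0.25, 0.1)   # ~157 ohm-m
```

Mapped over a grid of stations, anomalously low ρ_a values downstream of a landfill are the signature the abstract describes, once clay-rich zones have been ruled out with depth soundings.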
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chakraborty, Sudipta; Nelson, Austin; Hoke, Anderson
2016-12-12
Traditional testing methods fall short in evaluating interactions between multiple smart inverters providing advanced grid support functions, because such interactions depend largely on the inverters' placement on the electric distribution system and the impedances between them. Even though utilities have raised significant concerns about the effects of such interactions, little effort has been made to evaluate them. In this paper, power hardware-in-the-loop (PHIL) testing was used to evaluate autonomous volt-var operation of multiple smart photovoltaic (PV) inverters connected to a simple distribution feeder model. The results provided in this paper show that, depending on volt-var control (VVC) parameters and grid parameters, interaction between inverters, and between an inverter and the grid, is possible in some extreme cases with very high VVC slopes, fast response times, and large VVC response delays.
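The autonomous volt-var behavior under test can be sketched as a piecewise-linear droop curve. The deadband, slope, and var limit below are illustrative values only, not the VVC settings used in the paper:

```python
def volt_var(v_pu, deadband=0.02, slope=10.0, q_max=0.44):
    """Piecewise-linear volt-var droop: reactive power (p.u.) vs. voltage (p.u.).

    Inside the deadband around 1.0 p.u. no reactive power is exchanged;
    outside it, Q ramps linearly with the given slope until saturating
    at +/- q_max.  (Illustrative parameter values, not the paper's.)"""
    dv = v_pu - 1.0
    if abs(dv) <= deadband:
        return 0.0
    # Absorb vars (negative Q) for high voltage, inject for low voltage
    q = -slope * (dv - deadband if dv > 0 else dv + deadband)
    return max(-q_max, min(q_max, q))
```

A steeper `slope` makes the inverter respond more aggressively to small voltage deviations, which is exactly the regime in which the paper observed inverter-to-inverter interactions.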
Mao, Li; Liu, Yu-Xiang; Huang, Chun-Hua; Gao, Hui-Ying; Kalyanaraman, Balaraman; Zhu, Ben-Zhan
2015-07-07
Their ubiquitous distribution, coupled with their carcinogenicity, has raised public concern about the potential risks that halogenated aromatic compounds (XAr) pose to both human health and the ecosystem. Recently, advanced oxidation processes (AOPs) have been increasingly favored as an "environmentally green" technology for the remediation of such recalcitrant and highly toxic XAr. Here, we show that AOP-mediated degradation of the priority pollutant pentachlorophenol and all other XAr produces an intrinsic chemiluminescence that depends directly on the generation of extremely reactive hydroxyl radicals. We propose that the hydroxyl radical-dependent formation of quinoid intermediates and electronically excited carbonyl species is responsible for this unusual chemiluminescence. A rapid, sensitive, simple, and effective chemiluminescence method was developed to quantify trace amounts of XAr and to monitor their real-time degradation kinetics. These findings may have broad biological and environmental implications for future research on this important class of halogenated persistent organic pollutants.
NASA Astrophysics Data System (ADS)
Yamada, Takahiro; Watanabe, Kenta; Nozaki, Mikito; Yamada, Hisashi; Takahashi, Tokio; Shimizu, Mitsuaki; Yoshigoe, Akitaka; Hosoi, Takuji; Shimura, Takayoshi; Watanabe, Heiji
2018-01-01
A simple and feasible method for fabricating high-quality and highly reliable GaN-based metal-oxide-semiconductor (MOS) devices was developed. Direct chemical vapor deposition of SiO2 films on GaN substrates, forming Ga-oxide interlayers, was carried out to fabricate SiO2/GaOx/GaN stacked structures. Although well-behaved, hysteresis-free GaN-MOS capacitors with extremely low interface state densities below 10^10 cm^-2 eV^-1 were obtained by postdeposition annealing, Ga diffusion into the overlying SiO2 layers severely degraded the dielectric breakdown characteristics. However, this problem could be solved by rapid thermal processing, leading to superior performance of the GaN-MOS devices in terms of interface quality, insulating properties, and gate dielectric reliability.
Facile Synthesis and Catalysis of Pure-Silica and Heteroatom LTA
Boal, Ben W.; Schmidt, Joel E.; Deimund, Mark A.; ...
2015-11-05
Zeolite A (LTA) has many large-scale uses in separations and ion-exchange applications. Because of its high aluminum content and lack of high-temperature stability, applications in catalysis, while highly desired, have been extremely limited. Herein, we report a robust method to prepare pure-silica, aluminosilicate (product Si/Al = 12–42), and titanosilicate LTA in fluoride media using a simple, imidazolium-based organic structure-directing agent. The aluminosilicate material is an active catalyst for the methanol-to-olefins reaction, with higher product selectivities to butenes as well as C5 and C6 products than the commercialized silicoaluminophosphate and zeolite analogues that both have the chabazite framework (SAPO-34 and SSZ-13, respectively). Furthermore, the crystal structures of the as-made and calcined pure-silica materials were solved using single-crystal X-ray diffraction, providing information about the occluded organics and fluoride as well as structural information.
Ultrafast dynamics of low-energy electron attachment via a non-valence correlation-bound state
NASA Astrophysics Data System (ADS)
Rogers, Joshua P.; Anstöter, Cate S.; Verlet, Jan R. R.
2018-03-01
The primary electron-attachment process in electron-driven chemistry represents one of the most fundamental chemical transformations, with wide-ranging importance in science and technology. However, the mechanistic detail of the seemingly simple reaction of an electron and a neutral molecule to form an anion remains poorly understood, particularly at very low electron energies. Here, time-resolved photoelectron imaging was used to probe the electron-attachment process to a non-polar molecule. An initially populated diffuse non-valence state of the anion, bound by correlation forces, evolves coherently in ∼30 fs into a valence state of the anion. The extreme efficiency with which the correlation-bound state serves as a doorway state for low-energy electron attachment explains a number of electron-driven processes, such as anion formation in the interstellar medium and electron attachment to fullerenes.
A fast, preconditioned conjugate gradient Toeplitz solver
NASA Technical Reports Server (NTRS)
Pan, Victor; Schrieber, Robert
1989-01-01
A simple factorization is given of an arbitrary hermitian, positive definite matrix in which the factors are well-conditioned, hermitian, and positive definite. In fact, given knowledge of the extreme eigenvalues of the original matrix A, an optimal improvement can be achieved, making the condition numbers of each of the two factors equal to the square root of the condition number of A. This technique is then applied to the solution of hermitian, positive definite Toeplitz systems. Large linear systems with hermitian, positive definite Toeplitz matrices arise in some signal processing applications. A stable fast algorithm is given for solving these systems, based on the preconditioned conjugate gradient method. The algorithm exploits Toeplitz structure to reduce the cost of an iteration to O(n log n) by applying the fast Fourier transform to compute matrix-vector products. The matrix factorization is used as a preconditioner.
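The O(n log n) matrix-vector product at the heart of such an iteration can be sketched with the standard circulant-embedding trick, wrapped in a plain conjugate gradient loop. This sketch omits the paper's factorization-based preconditioner and assumes a real symmetric positive definite Toeplitz matrix:

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply a Toeplitz matrix (first column c, first row r) by x
    in O(n log n): embed it in a 2n x 2n circulant matrix, which the
    FFT diagonalizes, and keep the first n entries of the product."""
    n = len(x)
    # First column of the circulant embedding: [c, 0, r[n-1], ..., r[1]]
    col = np.concatenate([c, [0.0], r[:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

def cg_toeplitz(c, r, b, tol=1e-10, maxiter=200):
    """Unpreconditioned conjugate gradient for a symmetric positive
    definite Toeplitz system, using the fast matvec above.  (The paper
    accelerates this with a factorization-based preconditioner.)"""
    n = len(b)
    x = np.zeros(n)
    res = b - toeplitz_matvec(c, r, x)
    p = res.copy()
    rs = res @ res
    for _ in range(maxiter):
        Ap = toeplitz_matvec(c, r, p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        res -= alpha * Ap
        rs_new = res @ res
        if np.sqrt(rs_new) < tol:
            break
        p = res + (rs_new / rs) * p
        rs = rs_new
    return x
```

Each iteration costs two FFTs of length 2n instead of a dense O(n^2) multiply, which is the source of the O(n log n) per-iteration cost cited in the abstract.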
NASA Astrophysics Data System (ADS)
Nakanishi, Hideya; Imazu, Setsuo; Ohsuna, Masaki; Kojima, Mamoru; Nonomura, Miki; Shoji, Mamoru; Emoto, Masahiko; Yoshida, Masanobu; Iwata, Chie; Miyake, Hitoshi; Nagayama, Yoshio; Kawahata, Kazuo
To deal with endless data streams acquired in LHD steady-state experiments, the LHD data acquisition system was designed with a simple concept that divides a long pulse into a consecutive series of 10-s “subshots”. The latest digitizers, however, which use high-speed PCI-Express technology, output nonstop gigabyte-per-second data streams for which a 10-s subshot would be extremely large; these digitizers need shorter subshot intervals, less than 10 s long. In contrast, steady-state fusion plants need uninterrupted monitoring of the environment and device soundness, and adopt longer subshot lengths of either 10 min or 1 day. To cope with both uninterrupted monitoring and ultra-fast diagnostics, the ability to vary the subshot length according to the type of operation is required. In this study, a design modification enabling variable subshot lengths was implemented, and its practical effectiveness in LHD was verified.
NASA Astrophysics Data System (ADS)
Coronel-Brizio, H. F.; Hernández-Montoya, A. R.
2005-08-01
The so-called Pareto-Levy or power-law distribution has been successfully used as a model for the probabilities associated with extreme variations of stock market indexes worldwide. The selection of the threshold parameter from empirical data, and consequently the determination of the exponent of the distribution, is often done using a simple graphical method on a log-log scale, in which a power-law probability plot shows a straight line with slope equal to the exponent of the power-law distribution. This procedure can be considered subjective, particularly with regard to the choice of the threshold or cutoff parameter. In this work, a more objective procedure based on a statistical measure of discrepancy between the empirical and the Pareto-Levy distribution is presented. The technique is illustrated for data sets from the New York Stock Exchange (DJIA) and the Mexican Stock Market (IPC).
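A discrepancy-based threshold selection of this kind can be sketched as follows. This sketch uses the Kolmogorov-Smirnov distance as the discrepancy measure and the maximum-likelihood (Hill) exponent estimate; the paper's exact statistic may differ:

```python
import numpy as np

def fit_power_law_tail(data, min_tail=100):
    """Select the power-law threshold xmin by minimizing a discrepancy
    (here the Kolmogorov-Smirnov distance) between the empirical tail
    and the fitted Pareto distribution, re-estimating the exponent by
    maximum likelihood at each candidate threshold."""
    data = np.sort(np.asarray(data, dtype=float))
    best_ks, best_xmin, best_alpha = np.inf, None, None
    for xmin in np.unique(data):
        tail = data[data >= xmin]
        if len(tail) < min_tail:
            break  # tails smaller than this give unreliable fits
        # MLE of the density exponent alpha for the tail above xmin
        alpha = 1.0 + len(tail) / np.sum(np.log(tail / xmin))
        # KS distance between empirical and fitted Pareto CDFs
        emp = np.arange(1, len(tail) + 1) / len(tail)
        fit = 1.0 - (xmin / tail) ** (alpha - 1.0)
        ks = np.max(np.abs(emp - fit))
        if ks < best_ks:
            best_ks, best_xmin, best_alpha = ks, xmin, alpha
    return best_xmin, best_alpha, best_ks
```

Unlike eyeballing a straight line on a log-log plot, this makes the threshold choice reproducible: two analysts running the same code on the same index returns get the same cutoff and exponent.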
Review of FD-TD numerical modeling of electromagnetic wave scattering and radar cross section
NASA Technical Reports Server (NTRS)
Taflove, Allen; Umashankar, Korada R.
1989-01-01
Applications of the finite-difference time-domain (FD-TD) method for numerical modeling of electromagnetic wave interactions with structures are reviewed, concentrating on scattering and radar cross section (RCS). A number of two- and three-dimensional examples of FD-TD modeling of scattering and penetration are provided. The objects modeled range in nature from simple geometric shapes to extremely complex aerospace and biological systems. Rigorous analytical or experimental validations are provided for the canonical shapes, and it is shown that FD-TD predictive data for near fields and RCS are in excellent agreement with the benchmark data. It is concluded that with continuing advances in FD-TD modeling theory for target features relevant to the RCS problems and in vector and concurrent supercomputer technology, it is likely that FD-TD numerical modeling will occupy an important place in RCS technology in the 1990s and beyond.
On the nonlinearity of spatial scales in extreme weather attribution statements
NASA Astrophysics Data System (ADS)
Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos
2018-04-01
In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to the question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event, some of which are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This procedure is simple enough to approximate timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models, and that the sensitivity to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
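The "change in probability of occurrence" in such attribution statements is conventionally summarized as a probability ratio between factual (with anthropogenic forcing) and counterfactual ensembles. A minimal threshold-exceedance sketch of that quantity, not the authors' full pipeline:

```python
import numpy as np

def probability_ratio(factual, counterfactual, threshold):
    """Probability ratio PR = p1/p0: the exceedance probability of an
    extreme-event threshold in simulations with anthropogenic forcing
    (factual) versus without it (counterfactual).  PR > 1 means the
    forcing made threshold exceedance more likely."""
    p1 = np.mean(np.asarray(factual) > threshold)
    p0 = np.mean(np.asarray(counterfactual) > threshold)
    return np.inf if p0 == 0 else p1 / p0
```

In practice the exceedance samples would come from many ensemble members pooled over the chosen spatial and temporal scale, which is exactly where the scale sensitivity studied in the paper enters.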
Explicit Computations of Instantons and Large Deviations in Beta-Plane Turbulence
NASA Astrophysics Data System (ADS)
Laurie, J.; Bouchet, F.; Zaboronski, O.
2012-12-01
We use a path-integral formalism and instanton theory to make explicit analytical predictions about large deviations and rare events in beta-plane turbulence. The path-integral formalism is a concise way to obtain large-deviation results in dynamical systems forced by random noise. In the simplest cases it leads to the same results as the Freidlin-Wentzell theory, but it has a wider range of applicability. This approach, however, is usually severely limited by the complexity of the theoretical problems: it provides explicit results in a fairly small number of models, often extremely simple ones with only a few degrees of freedom. Few exceptions exist outside the realm of equilibrium statistical physics. We show that the barotropic model of beta-plane turbulence is one of these non-equilibrium exceptions. We describe sets of explicit solutions to the instanton equation and precise derivations of the action functional (or large-deviation rate function). The reason such exact computations are possible is the existence of hidden symmetries and conservation laws for the instanton dynamics. We outline several applications of this approach. For instance, we compute explicitly the very low probability of observing flows with an energy much larger or smaller than the typical one. Moreover, we consider regimes in which the system has multiple attractors (corresponding to different numbers of alternating jets) and discuss the computation of transition probabilities between two such attractors. These extremely rare events are of the utmost importance, as the dynamics undergo qualitative macroscopic changes during such transitions.
NASA Astrophysics Data System (ADS)
Bhardwaj, Alok; Ziegler, Alan D.; Wasson, Robert J.; Chow, Winston; Sharma, Mukat L.
2017-04-01
Extreme monsoon rainfall is the primary cause of floods and of secondary hazards such as landslides in the Indian Himalaya, so understanding extreme monsoon rainfall is required for studying these natural hazards. In this work, we examine the characteristics of extreme monsoon rainfall, including its intensity and frequency, in the Garhwal Himalaya in India, focusing on the Mandakini River Catchment, the site of a devastating flood and multiple large landslides in 2013. We use two long-term gridded rainfall data sets: the Asian Precipitation Highly Resolved Observational Data Integration Towards Evaluation of Water Resources (APHRODITE) product, with daily rainfall data for 1951-2007, and the India Meteorological Department (IMD) product, with daily rainfall data for 1901-2013. The Mann-Kendall test and the Sen slope estimator are used to assess, respectively, the statistical significance and the magnitude of trends in the intensity and frequency of extreme monsoon rainfall, at a significance level of 0.05. Autocorrelation in the extreme-rainfall time series is identified and reduced using four methods: pre-whitening, trend-free pre-whitening, variance correction, and block bootstrap. We define the extreme monsoon rainfall threshold as the 99th percentile of the rainfall time series; any rainfall depth greater than the 99th percentile is considered extreme. With the IMD data set, significant increasing trends in the intensity and frequency of extreme rainfall, with slope magnitudes of 0.55 and 0.02 respectively, were obtained in the north of the Mandakini Catchment, as identified by all four methods. A significant increasing trend in intensity, with a slope magnitude of 0.3, is found in the middle of the catchment, as identified by all methods except the block bootstrap.
In the south of the catchment, a significant increasing trend in intensity was obtained, with a slope magnitude of 0.86 for the pre-whitening method and 0.28 for the trend-free pre-whitening and variance-correction methods. Further, an increasing trend in frequency, with a slope magnitude of 0.01, was identified in the south of the catchment by all methods except the block bootstrap. With the APHRODITE data set, we obtained a significant increasing trend in intensity, with a slope magnitude of 1.27, in the middle of the catchment, as identified by all four methods. Collectively, both data sets show signals of increasing intensity, and the IMD data also indicate increasing frequency, in the Mandakini Catchment. The increasing occurrence of extreme events is becoming more disastrous because of the growing human population and infrastructure in the Mandakini Catchment; the 2013 flood caused by extreme rainfall, for example, was catastrophic in terms of loss of human and animal lives and destruction of the local economy. We believe our results will improve understanding of extreme rainfall events in the Mandakini Catchment and in the Indian Himalaya.
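The two trend statistics named above can be sketched in a few lines. This is a bare-bones version: it omits the tie correction in the variance and the autocorrelation corrections (pre-whitening, trend-free pre-whitening, variance correction, block bootstrap) applied in the study:

```python
import numpy as np

def mann_kendall_sen(x):
    """Mann-Kendall trend statistic S with its normal approximation z,
    plus the Sen slope (median of all pairwise slopes).  |z| > 1.96
    indicates a significant monotonic trend at the 0.05 level.
    (No tie or autocorrelation corrections in this sketch.)"""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = 0.0
    slopes = []
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += np.sign(x[j] - x[i])          # count concordant minus discordant pairs
            slopes.append((x[j] - x[i]) / (j - i))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0    # variance of S under no trend
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, z, float(np.median(slopes))
```

Because both statistics depend only on pairwise orderings and medians, they are robust to outliers, which is why they are standard for rainfall-extreme trend detection.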
Auxiliary variables for numerically solving nonlinear equations with softly broken symmetries.
Olum, Ken D; Masoumi, Ali
2017-06-01
General methods for solving simultaneous nonlinear equations work by generating a sequence of approximate solutions that successively improve a measure of the total error. However, if the total-error function has a narrow curved valley, the available techniques tend to find the solution only after a very large number of steps, if ever: the solver first converges rapidly to the valley, but once there it converges extremely slowly to the solution. In this paper we show that in the specific, physically important case where these valleys are the result of a softly broken symmetry, the solution can often be found much more quickly by adding the generators of the softly broken symmetry as auxiliary variables. This makes the number of variables exceed the number of equations, so there is a family of solutions, any one of which is acceptable. We present a procedure for finding solutions in this case and apply it to several simple examples and to an important problem in the physics of false-vacuum decay. We also provide a Mathematica package that implements Powell's hybrid method, generalized to allow more variables than equations.
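The idea of tolerating more variables than equations can be illustrated with a toy Gauss-Newton iteration built on the Moore-Penrose pseudoinverse. This is an illustration of the underdetermined-solve concept only, not the authors' procedure (they generalize Powell's hybrid method), and the residual system below is an invented toy example of a softly broken translation symmetry:

```python
import numpy as np

def gauss_newton_underdetermined(f, jac, x0, tol=1e-12, maxiter=50):
    """Gauss-Newton iteration using the pseudoinverse of the Jacobian,
    so the variable count may exceed the equation count.  With auxiliary
    symmetry variables added, any point on the resulting solution family
    is acceptable.  (Toy sketch; not the paper's Powell-hybrid solver.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            break
        x = x - np.linalg.pinv(jac(x)) @ r
    return x

# Toy residuals with a softly broken translation symmetry: for small eps
# the residual is nearly invariant under (x, y) -> (x + t, y - t).
# Treating the generator coordinate t as an auxiliary variable z[2]
# gives 3 variables for 2 equations.
eps = 1e-6
def f(z):
    x, y, t = z
    return np.array([(x + t) + (y - t) - 2.0, eps * ((x + t) - 1.0)])
def jac(z):
    return np.array([[1.0, 1.0, 0.0], [eps, 0.0, eps]])

sol = gauss_newton_underdetermined(f, jac, np.array([5.0, -3.0, 0.0]))
```

Since the residuals here are linear, one pseudoinverse step lands on the solution family exactly; for genuinely nonlinear problems the auxiliary variable instead lets the iterate slide along the valley floor instead of zigzagging across it.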
Nakano, Sachie; Tsukimura, Takahiro; Togawa, Tadayasu; Ohashi, Toya; Kobayashi, Masahisa; Takayama, Katsuyoshi; Kobayashi, Yukuharu; Abiko, Hiroshi; Satou, Masatsugu; Nakahata, Tohru; Warnock, David G; Sakuraba, Hitoshi; Shibasaki, Futoshi
2015-01-01
We developed an immunochromatography-based assay for detecting antibodies against recombinant α-galactosidase A proteins in serum. Twenty-nine serum samples from Fabry patients who had received enzyme replacement therapy with agalsidase alpha and/or agalsidase beta were evaluated with this assay, and the results clearly revealed that the patients exhibited the same level of antibodies against both agalsidase alpha and agalsidase beta, regardless of the species of recombinant α-galactosidase A used for enzyme replacement therapy. A conventional enzyme-linked immunosorbent assay supported these results. Taken together, these findings suggest that enzyme replacement therapy with agalsidase alpha or agalsidase beta generates antibodies against epitopes common to both preparations. Most of the patients who showed an immunopositive reaction exhibited the classic Fabry phenotype and harbored gene mutations affecting the biosynthesis of α-galactosidase A. Because immunochromatography is a simple, convenient assay system that can be used at the bedside, this method should be extremely useful for quick evaluation or first-line screening of serum antibodies against agalsidase alpha or agalsidase beta in Fabry disease patients receiving enzyme replacement therapy.
NASA Astrophysics Data System (ADS)
Gîrgel, I.; Šatka, A.; Priesol, J.; Coulon, P.-M.; Le Boulbar, E. D.; Batten, T.; Allsopp, D. W. E.; Shields, P. A.
2018-04-01
III-nitride nanostructures are of interest for a new generation of light-emitting diodes (LEDs). However, the characterization of doping incorporation in nanorod (NR) structures, which is essential for creating p-n junction diodes, is extremely challenging. This is because the established electrical measurement techniques (such as capacitance–voltage or Hall-effect methods) require a simple sample geometry and reliable ohmic contacts, both of which are difficult to achieve in nanoscale devices. The need for homogeneous, conformal n-type or p-type layers in core–shell nanostructures magnifies these challenges. Consequently, we demonstrate how a combination of non-contact methods (micro-photoluminescence, micro-Raman, and cathodoluminescence), together with electron-beam-induced current, can be used to analyze the uniformity of magnesium incorporation in core–shell NRs and to make a first estimate of doping levels from the evolution of band transitions, strain, and current mapping. These techniques have been used to optimize the growth of core–shell nanostructures for electrical carrier injection, a significant milestone for their use in LEDs.
NASA Astrophysics Data System (ADS)
Kim, Byoung Soo; Kim, Hyun Jin; An, Suyeong; Chi, Sangwon; Kim, Junseok; Lee, Jonghwi
2017-07-01
Recently, numerous attempts have been made to engineer micro- and nano-porous surface patterns or to develop convenient preparation methods for the practical applications of self-cleaning surfaces, water-repellent surfaces, novel textures, etc. Herein, we introduce a simple, cheap, and repeatable crystallization-based method to produce porous surface structures, on any surface of already fabricated polymeric materials. Contact of the solvent phase with cooled polymer surfaces enabled the limited dissolution of the surfaces and the subsequent extremely fast melt crystallization of the solvent. After removing the crystals, various micro- and nano-porous patterns were obtained, whose pore sizes ranged over three orders of magnitude. Pore depth was linearly dependent on the dissolution time. Crystal growth was mainly directed normal to the surfaces, but it was also controlled in-plane, resulting in cylindrical or lamellar structures. Superhydrophobic surfaces were successfully prepared on both polystyrene and polycarbonate. This process offers a novel surface engineering tool for a variety of polymer surfaces, whose topology can be conveniently controlled over a wide range by crystal engineering.
NASA Astrophysics Data System (ADS)
Mayabadi, A. H.; Waman, V. S.; Kamble, M. M.; Ghosh, S. S.; Gabhale, B. B.; Rondiya, S. R.; Rokade, A. V.; Khadtare, S. S.; Sathe, V. G.; Pathan, H. M.; Gosavi, S. W.; Jadkar, S. R.
2014-02-01
Nanocrystalline thin films of TiO2 were prepared on glass substrates from an aqueous solution of TiCl3 and NH4OH at room temperature using the simple and cost-effective chemical bath deposition (CBD) method. The influence of deposition time on structural, morphological and optical properties was systematically investigated. TiO2 transition from a mixed anatase-rutile phase to a pure rutile phase was revealed by low-angle XRD and Raman spectroscopy. Rutile phase formation was confirmed by FTIR spectroscopy. Scanning electron micrographs revealed that the multigrain structure of as-deposited TiO2 thin films was completely converted into semi-spherical nanoparticles. Optical studies showed that rutile thin films had a high absorption coefficient and a direct bandgap. The optical bandgap decreased slightly (3.29-3.07 eV) with increasing deposition time. The ease of deposition of rutile thin films at low temperature is useful for the fabrication of extremely thin absorber (ETA) solar cells, dye-sensitized solar cells, and gas sensors.
He, Xuming; Ng, K W; Shi, Jian
2003-02-15
When age-specific percentile curves are constructed for several correlated variables, the marginal method of handling one variable at a time has typically been used. We address the question, frequently asked by practitioners, of whether we can achieve efficiency gains by joint estimation. We focus on a simple but common method of Box-Cox transformation and assess the statistical impact of a joint transformation to multivariate normality on the percentile curve estimation for correlated variables. We find that there is little gain from the joint transformation for estimating percentiles around the median but a noticeable reduction in variances is possible for estimating extreme percentiles that are usually of main interest in medical and biological applications. Our study is motivated by problems in constructing percentile charts for IgG subclasses of children and for blood pressures in adult populations, both of which are discussed in the paper as examples, and yet our general findings are applicable to a wide range of other problems. Copyright 2003 John Wiley & Sons, Ltd.
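The marginal (one-variable-at-a-time) method discussed above can be sketched as follows; this is a generic Box-Cox percentile estimate, assuming the transformed data are well approximated by a normal distribution, and it is not the authors' joint multivariate procedure:

```python
import numpy as np
from scipy import special, stats

def boxcox_percentiles(sample, probs=(0.05, 0.5, 0.95)):
    """Marginal Box-Cox percentile estimation: transform the sample to
    approximate normality (lambda chosen by maximum likelihood), take
    normal quantiles on the transformed scale, and back-transform.
    The joint method would instead transform all correlated variables
    to multivariate normality together."""
    y, lam = stats.boxcox(np.asarray(sample, dtype=float))
    mu, sigma = y.mean(), y.std(ddof=1)
    q = mu + sigma * stats.norm.ppf(probs)       # quantiles on the normal scale
    return special.inv_boxcox(q, lam)            # back to the original scale
```

The paper's finding is that for quantiles near the median this marginal estimate is already nearly as efficient as the joint one, with the gains from joint transformation concentrated in the extreme percentiles.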
NASA Astrophysics Data System (ADS)
Presnov, Denis E.; Bozhev, Ivan V.; Miakonkikh, Andrew V.; Simakin, Sergey G.; Trifonov, Artem S.; Krupenin, Vladimir A.
2018-02-01
We present an original method for fabricating a sensitive field/charge sensor based on a field-effect transistor (FET) with a nanowire channel, using CMOS-compatible processes only. A FET with a kink-like silicon nanowire channel was fabricated from an inhomogeneously doped silicon-on-insulator wafer very close (~100 nm) to the extremely sharp corner of a silicon chip forming a local probe. A single e-beam lithographic process with a shadow deposition technique, followed by two separate reactive-ion etching processes, was used to define the narrow semiconductor nanowire channel. The sensor's charge sensitivity was evaluated to be in the range of 0.1-0.2 e/√Hz from the analysis of its transport and noise characteristics. The proposed method provides a good opportunity for the relatively simple manufacture of a local field sensor for measuring electric field distributions, potential profiles, and charge dynamics for a wide range of mesoscopic objects. Diagnostic systems and devices based on such sensors can be used in various fields of physics, chemistry, materials science, biology, electronics, medicine, etc.
Yang, Changju; Kim, Hyongsuk; Adhikari, Shyam Prasad; Chua, Leon O.
2016-01-01
A hybrid learning method combining software-based backpropagation learning and hardware-based random weight change (RWC) learning is proposed for the development of circuit-based neural networks. Backpropagation is known as one of the most efficient learning algorithms; its weak point is that its hardware implementation is extremely difficult. The RWC algorithm, by contrast, is very easy to implement in hardware circuits but takes too many iterations to learn. The proposed algorithm is a hybrid of the two: the main learning is first performed with a software version of the backpropagation algorithm, and the learned weights are then transplanted onto a hardware version of the neural circuit. At the time of the weight transplantation, a significant amount of output error occurs due to the characteristic differences between the software and the hardware. In the proposed method, this error is reduced via complementary learning with the RWC algorithm, implemented in simple hardware. The usefulness of the proposed hybrid learning system is verified via simulations on several classical learning problems. PMID:28025566
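The complementary RWC fine-tuning stage can be sketched as a greedy variant of random weight change: perturb every weight by a fixed +/-delta and keep the perturbation only when the error drops. This is a simplified illustration of the RWC idea, not the exact update rule of the paper's hardware:

```python
import random

def rwc_finetune(weights, loss, delta=0.01, iters=2000, seed=0):
    """Random-weight-change fine-tuning: perturb every weight by
    +/-delta simultaneously and accept the move only if the loss
    decreases.  Hardware-friendly because it needs no gradients, only
    a loss comparison.  (Greedy simplification of RWC for illustration.)"""
    rng = random.Random(seed)
    w = list(weights)
    best = loss(w)
    for _ in range(iters):
        step = [delta if rng.random() < 0.5 else -delta for _ in w]
        cand = [wi + si for wi, si in zip(w, step)]
        e = loss(cand)
        if e < best:          # keep only improving perturbations
            w, best = cand, e
    return w, best
```

In the hybrid scheme this loop would start from the backpropagation-trained weights after transplantation, so only the small software-to-hardware mismatch remains to be corrected.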
Detection of timescales in evolving complex systems
Darst, Richard K.; Granell, Clara; Arenas, Alex; Gómez, Sergio; Saramäki, Jari; Fortunato, Santo
2016-01-01
Most complex systems are intrinsically dynamic in nature. The evolution of a dynamic complex system is typically represented as a sequence of snapshots, where each snapshot describes the configuration of the system at a particular instant of time. This is often done using constant intervals, but a better approach would be to define dynamic intervals that match the evolution of the system’s configuration. To this end, we propose a method that aims at detecting evolutionary changes in the configuration of a complex system, and generates intervals accordingly. We show that evolutionary timescales can be identified by looking for peaks in the similarity between the sets of events on consecutive time intervals of data. Tests on simple toy models reveal that the technique is able to detect evolutionary timescales of time-varying data both when the evolution is smooth and when it changes sharply. This is further corroborated by analyses of several real datasets. Our method is scalable to extremely large datasets and is computationally efficient, allowing a quick, parameter-free detection of multiple timescales in the evolution of a complex system. PMID:28004820
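The similarity-between-consecutive-intervals idea can be sketched with the Jaccard index over event sets; this is a simplified version of the notion described above (the paper's similarity measure and interval-generation scheme may differ):

```python
def interval_similarity(events, boundaries):
    """Jaccard similarity between the sets of events observed in
    consecutive time intervals.  `events` is a list of (time, label)
    pairs; `boundaries` defines half-open intervals [b_k, b_{k+1}).
    Drops in this similarity curve mark configuration changes, so its
    peaks delimit candidate evolutionary timescales."""
    sims = []
    for a, b, c in zip(boundaries, boundaries[1:], boundaries[2:]):
        s1 = {e for t, e in events if a <= t < b}   # events in [a, b)
        s2 = {e for t, e in events if b <= t < c}   # events in [b, c)
        union = s1 | s2
        sims.append(len(s1 & s2) / len(union) if union else 1.0)
    return sims
```

A boundary where the similarity collapses (e.g. from 1.0 to 0.0 below) is exactly the kind of sharp configuration change the method is designed to detect.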
Nano-soldering of magnetically aligned three-dimensional nanowire networks.
Gao, Fan; Gu, Zhiyong
2010-03-19
It is extremely challenging to fabricate 3D integrated nanostructures and hybrid nanoelectronic devices. In this paper, we report a simple and efficient method to simultaneously assemble and solder nanowires into ordered, 3D, electrically conductive nanowire networks. Nano-solders such as tin were fabricated onto both ends of multi-segmented nanowires by a template-assisted electrodeposition method. These nanowires were then self-assembled and soldered into large-scale 3D network structures by magnetic-field-assisted assembly in a liquid medium with a high boiling point. The formation of junctions/interconnects between the nanowires and the scale of the assembly depended on the solder reflow temperature and the strength of the magnetic field. The size of the assembled nanowire networks ranged from tens of microns to millimeters. The electrical characteristics of the 3D nanowire networks were measured by regular current-voltage (I-V) measurements using a probe station with micropositioners. Nano-solders, when combined with assembly techniques, can be used to efficiently connect and join nanowires with low contact resistance, making them very well suited for sensor integration as well as nanoelectronic device fabrication.
Engineered nanoconstructs for the multiplexed and sensitive detection of high-risk pathogens
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Ji-Eun; Jeong, Yoon; Lee, Kwan Hong; Hwang, Jangsun; Hong, Jongwook; Park, Hansoo; Choi, Jonghoon
2016-01-01
Many countries categorize the causative agents of severe infectious diseases as high-risk pathogens. Given their extreme infectivity and potential to be used as biological weapons, a rapid and sensitive method for detection of high-risk pathogens (e.g., Bacillus anthracis, Francisella tularensis, Yersinia pestis, and Vaccinia virus) is highly desirable. Here, we report the construction of a novel detection platform comprising two units: (1) magnetic beads separately conjugated with multiple capturing antibodies against four different high-risk pathogens for simple and rapid isolation, and (2) genetically engineered apoferritin nanoparticles conjugated with multiple quantum dots and detection antibodies against four different high-risk pathogens for signal amplification. For each high-risk pathogen, we demonstrated at least a 10-fold increase in sensitivity compared to traditional lateral flow devices that utilize enzyme-based detection methods. Multiplexed detection of high-risk pathogens in a sample was also successful by using the nanoconstructs harboring the dye molecules with fluorescence at different wavelengths. We ultimately envision the use of this novel nanoprobe detection platform in future applications that require highly sensitive on-site detection of high-risk pathogens.
Seizure drawings: insight into the self-image of children with epilepsy.
Stafstrom, Carl E; Havlena, Janice
2003-02-01
Epilepsy is a chronic disorder that is associated with numerous psychological challenges, especially in children. Drawings have been underutilized as a method to obtain insight into psychological issues in children with epilepsy. We asked 105 children with epilepsy, ages 5 to 18 years, to draw a picture of what it is like to have a seizure. Across ages and epilepsy syndromes, the drawings showed evidence of impaired self-concept, low self-esteem, and a sense of helplessness and vulnerability. Overall, the drawings of human figures were less developed than expected for chronological age. In some drawings, indicators of underlying depression were found. When considered by epilepsy syndrome or seizure type, some specific artistic features were noted. Children with simple partial (motor) seizures drew distorted body parts, especially limbs. Those with complex partial seizures depicted sensory symptoms and mental status changes such as confusion. Children with generalized tonic-clonic seizures showed shaking extremities. Drawings by children with absence seizures illustrated mainly staring. In conclusion, drawings are a powerful method to examine the self-concept of children with epilepsy and gain insight into their feelings about themselves and their world.
Detecting occlusion inside a ventricular catheter using photoacoustic imaging through skull
NASA Astrophysics Data System (ADS)
Tavakoli, Behnoosh; Guo, Xiaoyu; Taylor, Russell H.; Kang, Jin U.; Boctor, Emad M.
2014-03-01
Ventricular catheters are used to treat hydrocephalus by diverting excess cerebrospinal fluid (CSF) to a reabsorption site so as to regulate the intracranial pressure. The failure rate of these shunts is extremely high due to ingrown tissue that blocks the CSF flow. We have studied a method to image the occlusion inside the shunt through the skull. In this approach, pulsed laser light coupled into an optical fiber illuminates the occluding tissue inside the catheter, and an external ultrasound transducer detects the generated photoacoustic signal. The feasibility of this method was investigated using a phantom made of Ovis aries brain tissue and an adult human skull. We were able to image the target inside the shunt, located 20 mm deep inside the brain, through about 4 mm of skull bone. This study could lead to the development of a simple, safe and non-invasive device for percutaneous restoration of patency to occluded shunts. This would eliminate the need for surgical replacement of occluded catheters, which exposes patients to risks including hemorrhage and brain injury.
den Braver, Michiel W; Vermeulen, Nico P E; Commandeur, Jan N M
2017-03-01
Modification of cellular macromolecules by reactive drug metabolites is considered to play an important role in the initiation of tissue injury by many drugs. Detection and identification of reactive intermediates is often performed by analyzing the conjugates formed after trapping by glutathione (GSH). Although the sensitivity of modern mass spectrometric methods is extremely high, absolute quantification of GSH-conjugates is critically dependent on the availability of authentic references. Although 1H NMR is currently the method of choice for quantification of metabolites formed biosynthetically, its intrinsically low sensitivity can be a limiting factor in the quantification of GSH-conjugates, which generally are formed at low levels. In the present study, a simple but sensitive and generic method for absolute quantification of GSH-conjugates is presented. The method is based on quantitative alkaline hydrolysis of GSH-conjugates and subsequent quantification of glutamic acid and glycine by HPLC after precolumn derivatization with o-phthaldialdehyde/N-acetylcysteine (OPA/NAC). Because of the lower stability of the glycine OPA/NAC derivative, quantification of the glutamic acid OPA/NAC derivative appeared most suitable for quantification of GSH-conjugates. The novel method was used to quantify the concentrations of GSH-conjugates of diclofenac, clozapine and acetaminophen; quantification was consistent with 1H NMR, but with a more than 100-fold lower detection limit for absolute quantification. Copyright © 2017. Published by Elsevier B.V.
NASA Astrophysics Data System (ADS)
Alam, Md. Fazle; Laskar, Amaj Ahmed; Ahmed, Shahbaz; Shaida, Mohd. Azfar; Younus, Hina
2017-08-01
Melamine toxicity has recently attracted worldwide attention as it causes renal failure and the death of humans and animals. Therefore, developing a simple, fast and sensitive method for the routine detection of melamine is the need of the hour. Herein, we have developed a selective colorimetric method for the detection of melamine in milk samples based upon the in-situ formation of silver nanoparticles (AgNPs) via tannic acid. The AgNPs thus formed were characterized by UV-visible spectrophotometry, transmission electron microscopy (TEM), zeta-potential analysis and dynamic light scattering (DLS). The AgNPs were used to detect melamine under in vitro conditions and in raw milk spiked with melamine. Under optimal conditions, melamine could be selectively detected in vitro within the concentration range of 0.05-1.4 μM with a limit of detection (LOD) of 0.01 μM, which is lower than the strictest melamine safety requirement of 1 ppm. In spiked raw milk, the recovery percentage range was 99.5-106.5% for liquid milk and 98.5-105.5% for powdered milk. The present method shows extreme selectivity, with no significant interference from other substances such as urea, glucose, glycine and ascorbic acid. This assay method does not utilize organic cosolvents, enzymatic reactions, light-sensitive dye molecules or sophisticated instrumentation, thereby overcoming some of the limitations of other conventional methods.
Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications
NASA Astrophysics Data System (ADS)
Wang, K.; Lettenmaier, D. P.
2017-12-01
Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1, 2, 3, 6, 12, 24, and 48 hours). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where the location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves from the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity, for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
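As a simplified stand-in for the nonstationary fit described above, the sketch below fits a Gumbel distribution (the zero-shape member of the GEV family) with a linearly time-varying location parameter, by maximum likelihood rather than the paper's Bayesian method. The synthetic data, parameter values, and optimizer choice are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)
n = 200
t = np.arange(n)
# Synthetic annual-maximum series with a linear trend in the location parameter.
true_loc, true_trend, true_scale = 30.0, 0.05, 2.0
x = rng.gumbel(loc=true_loc + true_trend * t, scale=true_scale)

def neg_log_lik(params):
    """Gumbel negative log-likelihood with time-varying location mu0 + mu1 * t."""
    mu0, mu1, log_sigma = params
    sigma = np.exp(log_sigma)          # log-parameterization keeps sigma > 0
    z = (x - (mu0 + mu1 * t)) / sigma
    return np.sum(np.log(sigma) + z + np.exp(-z))

res = minimize(neg_log_lik, x0=[x.mean(), 0.0, np.log(x.std())],
               method="Nelder-Mead", options={"maxiter": 5000})
mu0_hat, mu1_hat, log_sigma_hat = res.x
```

With the trend term recovered, a T-year quantile can then be evaluated at any reference year, which is exactly where stationary and nonstationary design estimates diverge.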
A simple miniature device for wireless stimulation of neural circuits in small behaving animals.
Zhang, Yisi; Langford, Bruce; Kozhevnikov, Alexay
2011-10-30
The use of wireless neural stimulation devices offers significant advantages for neural stimulation experiments in behaving animals. We demonstrate a simple, low-cost and extremely lightweight wireless neural stimulation device which is made from off-the-shelf components. The device has low power consumption and does not require a high-power RF preamplifier. Neural stimulation can be carried out in either a voltage source mode or a current source mode. Using the device, we carry out wireless stimulation in the premotor brain area HVC of a songbird and demonstrate that such stimulation causes rapid perturbations of the acoustic structure of the song. Published by Elsevier B.V.
Bias and Stability of Single Variable Classifiers for Feature Ranking and Selection
Fakhraei, Shobeir; Soltanian-Zadeh, Hamid; Fotouhi, Farshad
2014-01-01
Feature rankings are often used for supervised dimension reduction, especially when the discriminating power of each feature is of interest, the dimensionality of the dataset is extremely high, or computational power is limited to perform more complicated methods. In practice, it is recommended to start dimension reduction via simple methods such as feature rankings before applying more complex approaches. Single Variable Classifier (SVC) ranking is a feature ranking based on the predictive performance of a classifier built using only a single feature. While benefiting from the capabilities of classifiers, this ranking method is not as computationally intensive as wrappers. In this paper, we report the results of an extensive study on the bias and stability of such a feature ranking method. We study whether the classifiers influence the SVC rankings or whether the discriminative power of the features themselves has a dominant impact on the final rankings. We show that the common intuition of using the same classifier for feature ranking and final classification does not always result in the best prediction performance. We then study whether heterogeneous classifier ensemble approaches provide more unbiased rankings and whether they improve final classification performance. Furthermore, we calculate an empirical prediction performance loss for using the same classifier in SVC feature ranking and final classification relative to the optimal choices. PMID:25177107
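A minimal sketch of SVC ranking, assuming a decision stump (one-feature threshold rule) as the single-variable classifier and 2-fold cross-validated accuracy as the score; the synthetic data and classifier choice are illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
y = rng.integers(0, 2, n)
X = np.column_stack([
    y + rng.normal(0.0, 0.5, n),   # strongly informative feature
    y + rng.normal(0.0, 2.0, n),   # weakly informative feature
    rng.normal(0.0, 1.0, n),       # pure noise feature
])

def stump_accuracy(feature, labels):
    """Score one feature with a Single Variable Classifier: a threshold
    rule (decision stump) evaluated by 2-fold cross-validation."""
    accs = []
    for fold in (0, 1):
        test = np.arange(len(labels)) % 2 == fold
        f_tr, y_tr = feature[~test], labels[~test]
        f_te, y_te = feature[test], labels[test]
        # Threshold halfway between the class means on the training fold.
        thr = 0.5 * (f_tr[y_tr == 0].mean() + f_tr[y_tr == 1].mean())
        sign = 1.0 if f_tr[y_tr == 1].mean() > thr else -1.0
        pred = (sign * (f_te - thr) > 0).astype(int)
        accs.append(np.mean(pred == y_te))
    return float(np.mean(accs))

scores = [stump_accuracy(X[:, j], y) for j in range(X.shape[1])]
ranking = np.argsort(scores)[::-1]     # feature indices, best first
```

Swapping the stump for any other classifier changes only `stump_accuracy`, which is the knob whose influence on the ranking the paper studies.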
Using a Euclid distance discriminant method to find protein coding genes in the yeast genome.
Zhang, Chun-Ting; Wang, Ju; Zhang, Ren
2002-02-01
The Euclid distance discriminant method is used to find protein coding genes in the yeast genome, based on the single nucleotide frequencies at the three codon positions in the ORFs. The method is extremely simple and may be extended to find genes in prokaryotic genomes or eukaryotic genomes with fewer introns. Six-fold cross-validation tests have demonstrated that the accuracy of the algorithm is better than 93%. Based on this, it is found that the total number of protein coding genes in the yeast genome is at most 5579, about 3.8-7.0% less than the currently widely accepted figure of 5800-6000. The base compositions at the three codon positions are analyzed in detail using a graphic method. The result shows that the preferred codons adopted by yeast genes are of the RGW type, where R, G and W indicate purine, non-G and A/T, respectively, whereas the 'codons' in the intergenic sequences are of the form NNN, where N denotes any base. This fact constitutes the basis of the algorithm to distinguish between coding and non-coding ORFs in the yeast genome. The names of the putative non-coding ORFs are listed here in detail.
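The discriminant itself is simple enough to sketch: represent each ORF by its 12 single-nucleotide frequencies (A, C, G, T at the three codon positions) and assign it to whichever class mean is nearer in Euclidean distance. The position-specific biases below are synthetic stand-ins, not the actual yeast statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
BASES = np.array(list("ACGT"))

def make_seq(pos_probs, n_codons=100):
    """Random sequence with independent base probabilities per codon position."""
    cols = [rng.choice(BASES, size=n_codons, p=p) for p in pos_probs]
    return "".join("".join(c) for c in zip(*cols))

def freq_vector(seq):
    """12-dim vector: A, C, G, T frequencies at codon positions 1-3."""
    return np.array([np.mean(np.array(list(seq[pos::3])) == b)
                     for pos in range(3) for b in "ACGT"])

# Synthetic position-specific biases standing in for coding ORFs (RGW-like
# preference) versus uniform "NNN" intergenic ORFs.
coding_probs = [[.35, .15, .35, .15], [.20, .30, .20, .30], [.15, .20, .15, .50]]
noncod_probs = [[.25, .25, .25, .25]] * 3

# Class means estimated from a small training set of each class.
mu_c = np.mean([freq_vector(make_seq(coding_probs)) for _ in range(50)], axis=0)
mu_n = np.mean([freq_vector(make_seq(noncod_probs)) for _ in range(50)], axis=0)

def classify(seq):
    v = freq_vector(seq)
    return "coding" if np.linalg.norm(v - mu_c) < np.linalg.norm(v - mu_n) else "non-coding"

trials = [(make_seq(coding_probs), "coding") for _ in range(20)] + \
         [(make_seq(noncod_probs), "non-coding") for _ in range(20)]
accuracy = np.mean([classify(s) == label for s, label in trials])
```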
Katharopoulos, Efstathios; Touloupi, Katerina; Touraki, Maria
2016-08-01
The present study describes the development of a simple and efficient screening system that allows identification and quantification of nine bacteriocins produced by Lactococcus lactis. Cell-free L. lactis extracts presented a broad spectrum of antibacterial activity against Gram-negative bacteria, Gram-positive bacteria, and fungi. The characterization of their sensitivity to pH and heat showed that the extracts retained their antibacterial activity at extreme pH values and over a wide temperature range. The loss of antibacterial activity following treatment of the extracts with lipase or protease suggests a lipoproteinaceous nature of the produced antimicrobials. The extracts were subjected to a purification protocol that employs a two-phase extraction using ammonium sulfate precipitation and organic solvent precipitation, followed by ion exchange chromatography, solid phase extraction and HPLC. In the nine fractions that presented antimicrobial activity, bacteriocins were quantified by the turbidimetric method using a standard curve of nisin and by the HPLC method with nisin as the external standard, with both methods producing comparable results. Turbidimetry is suitable for the qualitative determination of bacteriocins, but the only method able to both separate and quantify the bacteriocins with increased sensitivity, accuracy, and precision is HPLC. Copyright © 2016 Elsevier B.V. All rights reserved.
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
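One of the ingredients mentioned above, numerical derivatives with complex variables, works by evaluating the function at a complex-perturbed point: f'(x) ≈ Im f(x + ih)/h, which involves no subtraction and therefore no cancellation error, even for a tiny step. A standalone illustration on a stand-in scalar cost function (not the fuel cell model):

```python
import numpy as np

def f(x):
    """Stand-in scalar cost function (smooth and analytic)."""
    return np.exp(x) * np.sin(x)

def fprime_exact(x):
    return np.exp(x) * (np.sin(x) + np.cos(x))

x0 = 1.5

# Complex-step derivative: perturb along the imaginary axis. The step can be
# far below machine precision because nothing is subtracted.
h = 1e-20
d_cstep = np.imag(f(x0 + 1j * h)) / h

# Central finite difference for comparison: the step size must trade
# truncation error against round-off error.
d_fd = (f(x0 + 1e-6) - f(x0 - 1e-6)) / (2e-6)

err_cstep = abs(d_cstep - fprime_exact(x0))
err_fd = abs(d_fd - fprime_exact(x0))
```

The complex-step result is accurate to machine precision, which is why it is attractive for verifying the adjoint sensitivities computed in work like this.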
NASA Astrophysics Data System (ADS)
Haruki, W.; Iseri, Y.; Takegawa, S.; Sasaki, O.; Yoshikawa, S.; Kanae, S.
2016-12-01
Natural disasters caused by heavy rainfall occur every year in Japan, and effective countermeasures against such events are important. In 2015, a catastrophic flood occurred in the Kinu river basin, which is located in the northern part of the Kanto region. The remarkable feature of this flood event was not only the intensity of rainfall but also the spatial characteristics of the heavy rainfall area. The flood was caused by the continuous overlapping of a heavy rainfall area over the Kinu river basin, suggesting that consideration of spatial extent is quite important in assessing the impacts of heavy rainfall events. However, the spatial extent of heavy rainfall events cannot be properly measured by rain gauges at observation points. On the other hand, radar observations provide spatially and temporally high-resolution rainfall data that are useful for capturing the characteristics of heavy rainfall events. For effective long-term countermeasures, extreme heavy rainfall scenarios that consider rainfall area and distribution are required. In this study, a new method for generating extreme heavy rainfall events using Monte Carlo simulation has been developed in order to produce such scenarios. This study used AMeDAS analyzed precipitation data, a high-resolution gridded precipitation dataset produced by the Japan Meteorological Agency. Depth-area-duration (DAD) analysis was conducted to extract past extreme rainfall events, considering time and spatial scales. In the Monte Carlo simulation, extreme rainfall events are generated based on the events extracted by the DAD analysis. The events are generated for a specific region of Japan, and the types of generated events can be changed by varying the parameters. For application of this method, we focused on the Kanto region; as a result, 3000 years of rainfall data were generated.
The 100-year probable rainfall and the return period of the 2015 Kinu River Basin flood are obtained using the generated data. We compared the 100-year probable rainfall calculated by this method with that from a traditional method. The newly developed method enables us to generate extreme rainfall events that consider time and spatial scales and to produce extreme rainfall scenarios.
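Once a long synthetic record exists, the frequency statistics mentioned above reduce to simple quantile and exceedance-frequency computations. The sketch below uses Gumbel-distributed stand-in data in place of the generated 3000-year record; the distribution and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
# Stand-in for 3000 years of generated annual-maximum basin rainfall (mm).
annual_max = rng.gumbel(loc=100, scale=30, size=3000)

# 100-year probable rainfall: the value with annual non-exceedance
# probability 1 - 1/100.
r100 = np.quantile(annual_max, 1 - 1 / 100)

# Empirical return period of a given event magnitude (e.g. an observed flood):
# the reciprocal of its annual exceedance frequency in the generated record.
event = 250.0
exceed_prob = np.mean(annual_max > event)
return_period = 1 / exceed_prob
```

With only a 25-50 year observed record these statistics require extrapolation through a fitted distribution; a 3000-year generated record lets them be read off empirically.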
Statistic analysis of annual total ozone extremes for the period 1964-1988
NASA Technical Reports Server (NTRS)
Krzyscin, Janusz W.
1994-01-01
Annual extremes of the total column amount of ozone (in the period 1964-1988) from a network of 29 Dobson stations have been examined using extreme value analysis. The extremes have been calculated as the highest deviation of daily mean total ozone from its long-term monthly mean, normalized by the monthly standard deviations. The extremes have been selected from direct-Sun total ozone observations only. Extremes resulting from abrupt changes in ozone (day-to-day changes greater than 20 percent) have not been considered. The ordered extremes (maxima in ascending order, minima in descending order) have been fitted to one of the three forms of the Fisher-Tippett extreme value distribution by the nonlinear least-squares (Levenberg-Marquardt) method. We have found that the ordered extremes from a majority of Dobson stations lie close to Fisher-Tippett type III. The extreme value analysis of the composite annual extremes (combined from averages of the annual extremes selected at individual stations) has shown that the composite maxima are fitted by the Fisher-Tippett type III and the composite minima by the Fisher-Tippett type I. The difference between the Fisher-Tippett types of the composite extremes seems to be related to the ozone downward trend. Extreme value prognoses for the period 1964-2014 (derived from data taken at all analyzed stations, the North American stations, and the European stations) have revealed that the prognostic extremes are close to the largest annual extremes in the period 1964-1988 and that there are only small regional differences in the prognoses.
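The fitting step, ordered extremes matched to an extreme value distribution by Levenberg-Marquardt least squares, can be sketched for the type I (Gumbel) case; the synthetic 25-value sample and the plotting-position formula are illustrative assumptions, not the ozone data.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
# 25 ordered annual extremes (one per year, as in a 1964-1988 record).
extremes = np.sort(rng.gumbel(loc=2.0, scale=0.8, size=25))

# Plotting positions: empirical non-exceedance probabilities m/(n+1)
# for the ordered sample.
n = extremes.size
p = np.arange(1, n + 1) / (n + 1)

def gumbel_quantile(prob, mu, beta):
    """Fisher-Tippett type I (Gumbel) quantile function."""
    return mu - beta * np.log(-np.log(prob))

# curve_fit with no bounds uses Levenberg-Marquardt nonlinear least squares,
# matching the ordered extremes to the fitted quantiles.
(mu_hat, beta_hat), _ = curve_fit(gumbel_quantile, p, extremes, p0=[0.0, 1.0])
```

The same least-squares comparison repeated for the type II and III quantile functions would identify which of the three Fisher-Tippett forms the ordered sample lies closest to.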
Solution to the Problem of Calibration of Low-Cost Air Quality Measurement Sensors in Networks.
Miskell, Georgia; Salmond, Jennifer A; Williams, David E
2018-04-27
We provide a simple, remote, continuous calibration technique suitable for application in a hierarchical network featuring a few well-maintained, high-quality instruments ("proxies") and a larger number of low-cost devices. The ideas are grounded in a clear definition of the purpose of a low-cost network, defined here as providing reliable information on air quality at small spatiotemporal scales. The technique assumes linearity of the sensor signal. It derives running slope and offset estimates by matching mean and standard deviations of the sensor data to values derived from proxies over the same time. The idea is extremely simple: choose an appropriate proxy and an averaging-time that is sufficiently long to remove the influence of short-term fluctuations but sufficiently short that it preserves the regular diurnal variations. The use of running statistical measures rather than cross-correlation of sites means that the method is robust against periods of missing data. Ideas are first developed using simulated data and then demonstrated using field data, at hourly and 1 min time-scales, from a real network of low-cost semiconductor-based sensors. Despite the almost naïve simplicity of the method, it was robust for both drift detection and calibration correction applications. We discuss the use of generally available geographic and environmental data as well as microscale land-use regression as means to enhance the proxy estimates and to generalize the ideas to other pollutants with high spatial variability, such as nitrogen dioxide and particulates. These improvements can also be used to minimize the required number of proxy sites.
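The calibration rule itself, matching the running mean and standard deviation of the sensor to those of the proxy, fits in a few lines. The simulated signals and the three-day window below are illustrative assumptions, not the field configuration used in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
hours = np.arange(24 * 14)                          # two weeks of hourly data
truth = 20 + 10 * np.sin(2 * np.pi * hours / 24)    # diurnal pollutant signal
proxy = truth + rng.normal(0, 1, truth.size)        # trusted reference site
sensor = 2.0 * truth + 15.0 + rng.normal(0, 1, truth.size)  # drifted device

window = 24 * 3   # long enough to smooth short-term fluctuations,
                  # short enough to preserve the diurnal cycle

corrected = np.full(truth.size, np.nan)
for i in range(window, truth.size):
    s = sensor[i - window:i]
    p = proxy[i - window:i]
    gain = p.std() / s.std()              # match the standard deviations...
    offset = p.mean() - gain * s.mean()   # ...and the means over the window
    corrected[i] = gain * sensor[i] + offset

valid = slice(window, truth.size)
rmse_raw = np.sqrt(np.mean((sensor[valid] - truth[valid]) ** 2))
rmse_cor = np.sqrt(np.mean((corrected[valid] - truth[valid]) ** 2))
```

Because the slope and offset come from running statistics rather than point-by-point cross-correlation, gaps in either record only shorten the window rather than breaking the estimate.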
Wang, Peng; Kim, Mijin; Peng, Zhiwei; Sun, Chuan-Fu; Mok, Jasper; Lieberman, Anna; Wang, YuHuang
2017-09-26
Attaining aqueous solutions of individual, long single-walled carbon nanotubes is a critical first step for harnessing the extraordinary properties of these materials. However, the widely used ultrasonication-ultracentrifugation approach and its variants inadvertently cut the nanotubes into short pieces. The process is also time-consuming and difficult to scale. Here we present an unexpectedly simple solution to this decade-old challenge by directly neutralizing a nanotube-chlorosulfonic acid solution in the presence of sodium deoxycholate. This straightforward superacid-surfactant exchange eliminates the need for both ultrasonication and ultracentrifugation altogether, allowing aqueous solutions of individual nanotubes to be prepared within minutes and preserving the full length of the nanotubes. We found that the average length of the processed nanotubes is more than 350% longer than sonicated controls, with a significant fraction approaching ∼9 μm, a length that is limited only by the raw material. The nondestructive nature is manifested by an extremely low density of defects, bright and homogeneous photoluminescence in the near-infrared, and ultrahigh electrical conductivity in transparent thin films (130 Ω/sq at 83% transmittance), which well exceeds that of indium tin oxide. Furthermore, we demonstrate that our method is fully compatible with established techniques for sorting nanotubes by their electronic structures and can also be readily applied to graphene. This surprisingly simple method thus enables nondestructive aqueous solution processing of high-quality carbon nanomaterials at large scale and low cost, with the potential for a wide range of fundamental studies and applications, including, for example, transparent conductors, near-infrared imaging, and high-performance electronics.
NASA Astrophysics Data System (ADS)
González-Llana, Arturo; González-Bárcena, David; Pérez-Grande, Isabel; Sanz-Andrés, Ángel
2018-07-01
The selection of the extreme thermal environmental conditions (albedo coefficient and Earth infrared radiation) for the thermal design of stratospheric balloon missions is usually based on the methodologies applied in space missions. However, the particularities of stratospheric balloon missions, such as the much longer residence time of the balloon payload over a given area, make necessary an approach centered on the actual environment the balloon is going to encounter, in terms of geographic area and season of flight. In this sense, this work is focused on stratospheric balloon missions circumnavigating the North Pole during the summer period. Pairs of albedo and Earth infrared radiation satellite data restricted to this area and season of interest have been treated statistically. Furthermore, the environmental conditions leading to the extreme temperatures of the payload depend in turn on the surface finish, and more particularly on the ratio between the solar absorptance and the infrared emissivity, α/ε. A simple but representative thermal model of a balloon and its payload has been set up in order to identify the pairs of albedo coefficient and Earth infrared radiation leading to extreme temperatures for each value of α/ε.
NASA Technical Reports Server (NTRS)
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael G.; Gershunov, Alexander; Gutowski, William J., Jr.; Gyakum, John R.; Katz, Richard W.;
2015-01-01
The objective of this paper is to review statistical methods, dynamics, modeling efforts, and trends related to temperature extremes, with a focus upon extreme events of short duration that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). The statistics, dynamics, and modeling sections of this paper are written to be autonomous and so can be read separately. Methods to define extreme event statistics and to identify and connect LSMPs to extreme temperature events are presented. Recent advances in statistical techniques connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. Various LSMPs, ranging from synoptic to planetary scale structures, are associated with extreme temperature events. Current knowledge about the synoptics and the dynamical mechanisms leading to the associated LSMPs is incomplete. Systematic studies of the physics of LSMP life cycles, comprehensive model assessment of LSMP-extreme temperature event linkages, and LSMP properties are needed. Generally, climate models capture observed properties of heat waves and cold air outbreaks with some fidelity. However, they overestimate warm wave frequency and underestimate cold air outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Modeling studies have identified the impact of large-scale circulation anomalies and land-atmosphere interactions on changes in extreme temperatures. However, few studies have examined changes in LSMPs to more specifically understand the role of LSMPs in past and future extreme temperature changes. Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated. The paper concludes with unresolved issues and research questions.
Spectra of Adjacency Matrices in Networks of Extreme Introverts and Extroverts
NASA Astrophysics Data System (ADS)
Bassler, Kevin E.; Ezzatabadipour, Mohammadmehdi; Zia, R. K. P.
Many interesting properties were discovered in recent studies of preferred degree networks, suitable for describing the social behavior of individuals who tend to prefer a certain number of contacts. In an extreme version (coined the XIE model), introverts always cut links while extroverts always add them. While the intra-group links are static, the cross-links are dynamic and lead to an ensemble of bipartite graphs, with extraordinary correlations between the elements of the incidence matrix, n_ij. In the steady state, this system can be regarded as one in thermal equilibrium with long-ranged interactions between the n_ij's, and displays an extreme Thouless effect. Here, we report simulation studies of a different perspective on these networks, namely, the spectra associated with this ensemble of adjacency matrices {a_ij}. As a baseline, we first consider the spectra associated with a simple random (Erdős-Rényi) ensemble of bipartite graphs, where simulation results can be understood analytically. Work supported by the NSF through Grant DMR-1507371.
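The Erdős-Rényi bipartite baseline can be reproduced numerically in a few lines: draw a random incidence matrix, assemble the block adjacency matrix, and diagonalize. The group sizes and link probability below are illustrative assumptions; one analytically understood property of any bipartite graph, the ± symmetry of its spectrum (with |N_I - N_E| structural zero eigenvalues), is checked at the end.

```python
import numpy as np

rng = np.random.default_rng(11)
NI, NE = 60, 40    # sizes of the two groups (cross-links only, as a baseline)
prob = 0.5         # Erdős-Rényi: each cross-link present independently

# Incidence matrix n_ij of the bipartite graph, and the full adjacency matrix.
n = (rng.random((NI, NE)) < prob).astype(float)
A = np.block([[np.zeros((NI, NI)), n],
              [n.T, np.zeros((NE, NE))]])

eigs = np.sort(np.linalg.eigvalsh(A))

# Bipartite spectra are symmetric about zero: eigenvalues are the +/- pairs
# of the singular values of n, padded with |NI - NE| zeros.
symmetry_error = np.max(np.abs(eigs + eigs[::-1]))
n_zero = int(np.sum(np.abs(eigs) < 1e-8))
```

Averaging such spectra over many draws of n approximates the ensemble spectral density against which the correlated XIE ensemble can be compared.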
PERSISTENCE MAPPING USING EUV SOLAR IMAGER DATA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, B. J.; Young, C. A., E-mail: barbara.j.thompson@nasa.gov
We describe a simple image processing technique that is useful for the visualization and depiction of gradually evolving or intermittent structures in solar physics extreme-ultraviolet imagery. The technique is an application of image segmentation, which we call "Persistence Mapping," to isolate extreme values in a data set, and is particularly useful for the problem of capturing phenomena that are evolving in both space and time. While integration or "time-lapse" imaging uses the full sample (of size N), Persistence Mapping rejects (N - 1)/N of the data set and identifies the most relevant 1/N values using the following rule: if a pixel reaches an extreme value, it retains that value until that value is exceeded. The simplest examples isolate minima and maxima, but any quantile or statistic can be used. This paper demonstrates how the technique has been used to extract the dynamics in the long-term evolution of comet tails, erupting material, and EUV dimming regions.
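For the maximum case, the retention rule is exactly a running elementwise maximum over the image sequence. A minimal sketch on synthetic data (a moving bright feature standing in for, e.g., erupting material; not actual EUV imagery):

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W = 50, 16, 16
frames = rng.random((T, H, W)) * 0.1      # faint, fluctuating background

# A transient bright feature that moves one pixel per frame along the
# diagonal: no single frame contains more than one bright pixel.
for t in range(H):
    frames[t, t, t] = 1.0

# Persistence Mapping for maxima: each pixel keeps the most extreme value
# it has reached so far.
persist = np.maximum.accumulate(frames, axis=0)
final_map = persist[-1]

# The final map shows the feature's entire track at once.
track = np.diagonal(final_map)
```

Isolating minima instead is `np.minimum.accumulate`, and other statistics (e.g. a running quantile) follow the same retain-until-exceeded pattern.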
Utilization of the concentric circle model in clinical nursing: a review.
Kazuma, K
1999-12-01
In this article, I review applications of the concentric circle model in clinical nursing. The concentric circle model is based on the cross-sectional shape of the body extremities at several points, and can be used in the areas of both kinesiology and nutritional science. This model makes it possible to calculate the cross-sectional area of muscles from measurement of the circumference of the extremities and the thickness of adipose (fatty) tissue. Then, changes in muscle strength or nutritional status can be inferred or assessed from these data. This model requires only simple and non-invasive measurements, and this is a significant and essential characteristic for its use by nurses, both in clinical and research applications.
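The concentric-circle calculation described here can be sketched with simple geometry: the measured circumference fixes the outer radius, and subtracting the adipose-layer thickness leaves the inner (muscle) compartment. This is an illustrative sketch only; the function name is ours, and clinical protocols also correct for skin thickness and bone area:

```python
import math

def muscle_cross_sectional_area(circumference_cm, fat_thickness_cm):
    """Illustrative concentric-circle estimate (hypothetical helper).

    The limb cross-section is modelled as concentric circles: the outer
    radius comes from the measured circumference, and the adipose layer
    thickness is subtracted to leave the muscle compartment.
    """
    outer_radius = circumference_cm / (2 * math.pi)
    muscle_radius = outer_radius - fat_thickness_cm
    if muscle_radius <= 0:
        raise ValueError("fat thickness exceeds limb radius")
    return math.pi * muscle_radius ** 2   # area in cm^2

# Mid-arm circumference of 30 cm with a 0.5 cm adipose layer:
area = muscle_cross_sectional_area(30.0, 0.5)
```

Only a tape measure and a skinfold caliper are needed for the inputs, which matches the abstract's point about simple, non-invasive measurements.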
Plasma-assisted oxide removal from ruthenium-coated EUV optics
NASA Astrophysics Data System (ADS)
Dolgov, A.; Lee, C. J.; Bijkerk, F.; Abrikosov, A.; Krivtsun, V. M.; Lopaev, D.; Yakushev, O.; van Kampen, M.
2018-04-01
An experimental study of oxide reduction at the surface of ruthenium layers on top of multilayer mirrors and thin Ru/Si films is presented. Oxidation and reduction processes were observed under conditions close to those relevant for extreme ultraviolet lithography. The oxidized ruthenium surface was exposed to a low-temperature hydrogen plasma, similar to the plasma induced by extreme ultraviolet radiation. The experiments show that hydrogen ions are the main reducing agent. Furthermore, the addition of hydrogen radicals increases the reduction rate beyond that expected from simple flux calculations. We show that low-temperature hydrogen plasmas can be effective for reducing oxidized top surfaces. Our proof-of-concept experiments show that an in situ, EUV-generated plasma cleaning technology is feasible.
Changes in atmospheric circulation patterns affect midcontinent wetlands sensitive to climate
LaBaugh, J.W.; Winter, T.C.; Swanson, G.A.; Rosenberry, D.
1996-01-01
Twenty-seven years of data from midcontinent wetlands indicate that the response of these wetlands to extremes in precipitation-drought and deluge-persists beyond the extreme events. Chemical changes transcend such simple relations as increased salinity during dry periods because drought provides mechanisms for removal of salt by deflation and seepage to groundwater. Inundation of vegetation zones including rooted or floating mats of cattail (Typha glauca) can stimulate sulfate reduction and shift the anion balance from sulfate to bicarbonate dominance. Disruptions in the circulation of moisture-laden air masses over the midcontinent, as in the drought of 1988 and the deluge of 1993, have a major effect on these wetlands, which are representatives of the primary waterfowl breeding habitat of the continent.
Hydrologic extremes - an intercomparison of multiple gridded statistical downscaling methods
NASA Astrophysics Data System (ADS)
Werner, A. T.; Cannon, A. J.
2015-06-01
Gridded statistical downscaling methods are the main means of preparing climate model data to drive distributed hydrological models. Past work on the validation of climate downscaling methods has focused on temperature and precipitation, with less attention paid to the ultimate outputs from hydrological models. Also, as attention shifts towards projections of extreme events, downscaling comparisons now commonly assess methods in terms of climate extremes, but hydrologic extremes are less well explored. Here, we test the ability of gridded downscaling models to replicate historical properties of climate and hydrologic extremes, as measured in terms of temporal sequencing (i.e., correlation tests) and distributional properties (i.e., tests for equality of probability distributions). Outputs from seven downscaling methods - bias correction constructed analogues (BCCA), double BCCA (DBCCA), BCCA with quantile mapping reordering (BCCAQ), bias correction spatial disaggregation (BCSD), BCSD using minimum/maximum temperature (BCSDX), climate imprint delta method (CI), and bias corrected CI (BCCI) - are used to drive the Variable Infiltration Capacity (VIC) model over the snow-dominated Peace River basin, British Columbia. Outputs are tested using split-sample validation on 26 climate extremes indices (ClimDEX) and two hydrologic extremes indices (3 day peak flow and 7 day peak flow). To characterize observational uncertainty, four atmospheric reanalyses are used as climate model surrogates and two gridded observational datasets are used as downscaling target data. The skill of the downscaling methods generally depended on reanalysis and gridded observational dataset. However, CI failed to reproduce the distribution and BCSD and BCSDX the timing of winter 7 day low flow events, regardless of reanalysis or observational dataset. 
Overall, DBCCA passed the greatest number of tests for the ClimDEX indices, while BCCAQ, which is designed to more accurately resolve event-scale spatial gradients, passed the greatest number of tests for hydrologic extremes. Non-stationarity in the observational/reanalysis datasets complicated the evaluation of downscaling performance. Comparing temporal homogeneity and trends in climate indices and hydrological model outputs calculated from downscaled reanalyses and gridded observations was useful for diagnosing the reliability of the various historical datasets. We recommend that such analyses be conducted before such data are used to construct future hydro-climatic change scenarios.
Non-Cell-Adhesive Substrates for Printing of Arrayed Biomaterials
Appel, Eric A.; Larson, Benjamin L.; Luly, Kathryn M.; Kim, Jinseong D.
2015-01-01
Cellular microarrays have become extremely useful in expediting the investigation of large libraries of (bio)materials for both in vitro and in vivo biomedical applications. We have developed an exceedingly simple strategy for the fabrication of non-cell-adhesive substrates supporting the immobilization of diverse (bio)material features, including both monomeric and polymeric adhesion molecules (e.g. RGD and polylysine), hydrogels, and polymers. PMID:25430948
Gounaridis, Lefteris; Groumas, Panos; Schreuder, Erik; Heideman, Rene; Avramopoulos, Hercules; Kouloumentas, Christos
2016-04-04
It is still a common belief that ultra-high quality-factors (Q-factors) are a prerequisite in optical resonant cavities for high refractive index resolution and low detection limit in biosensing applications. In combination with the ultra-short steps that are necessary when the measurement of the resonance shift relies on the wavelength scanning of a laser source and conventional methods for data processing, the high Q-factor requirement makes these biosensors extremely impractical. In this work we analyze an alternative processing method based on the fast Fourier transform, and show through Monte-Carlo simulations that improvement by 2-3 orders of magnitude can be achieved in the resolution and the detection limit of the system in the presence of amplitude and spectral noise. More significantly, this improvement is maximum for low Q-factors around 10^4 and is present also for high intra-cavity losses and large scanning steps, making the designs compatible with the low-cost aspect of lab-on-a-chip technology. Using a micro-ring resonator as model cavity and a system design with low Q-factor (10^4), low amplitude transmission (0.85) and relatively large scanning step (0.25 pm), we show that resolution close to 0.01 pm and detection limit close to 10^-7 RIU can be achieved, improving the sensing performance by more than 2 orders of magnitude compared to the performance of systems relying on a simple peak search processing method. The improvement in the limit of detection is present even when the simple method is combined with ultra-high Q-factors and ultra-short scanning steps due to the trade-off between the system resolution and sensitivity. Early experimental results are in agreement with the trends of the numerical studies.
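One way sub-step resolution can arise from Fourier processing: a ring-resonator transmission spectrum is periodic in wavelength, so a resonance shift appears as a phase rotation of the dominant FFT component. The sketch below is a generic FFT-phase shift estimator, not necessarily the exact algorithm of the paper above:

```python
import numpy as np

def fft_phase_shift(spec_ref, spec_shifted, d_lambda):
    """Estimate a spectral shift from the FFT phase (illustrative).

    For a periodic resonance spectrum, a wavelength shift of delta
    multiplies FFT bin k by exp(-2*pi*i*k*delta/(n*d_lambda)); reading
    the phase of the dominant bin recovers shifts far smaller than the
    scanning step d_lambda.
    """
    A = np.fft.rfft(spec_ref - spec_ref.mean())
    B = np.fft.rfft(spec_shifted - spec_shifted.mean())
    k = np.argmax(np.abs(A[1:])) + 1          # dominant spectral component
    dphi = np.angle(B[k] / A[k])              # phase rotation between scans
    n = len(spec_ref)
    return -dphi * n * d_lambda / (2 * np.pi * k)
```

Because the estimate averages over the whole scan rather than a single peak sample, it is far less sensitive to amplitude noise than a simple peak search.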
Estimation of methanogen biomass via quantitation of coenzyme M
Elias, Dwayne A.; Krumholz, Lee R.; Tanner, Ralph S.; Suflita, Joseph M.
1999-01-01
Determination of the role of methanogenic bacteria in an anaerobic ecosystem often requires quantitation of the organisms. Because of the extreme oxygen sensitivity of these organisms and the inherent limitations of cultural techniques, an accurate biomass value is very difficult to obtain. We standardized a simple method for estimating methanogen biomass in a variety of environmental matrices. In this procedure we used the thiol biomarker coenzyme M (CoM) (2-mercaptoethanesulfonic acid), which is known to be present in all methanogenic bacteria. A high-performance liquid chromatography-based method for detecting thiols in pore water (A. Vairavamurthy and M. Mopper, Anal. Chim. Acta 78:363–370, 1990) was modified in order to quantify CoM in pure cultures, sediments, and sewage water samples. The identity of the CoM derivative was verified by using liquid chromatography-mass spectroscopy. The assay was linear for CoM amounts ranging from 2 to 2,000 pmol, and the detection limit was 2 pmol of CoM/ml of sample. CoM was not adsorbed to sediments. The methanogens tested contained an average of 19.5 nmol of CoM/mg of protein and 0.39 ± 0.07 fmol of CoM/cell. Environmental samples contained an average of 0.41 ± 0.17 fmol/cell based on most-probable-number estimates. CoM was extracted by using 1% tri-(N)-butylphosphine in isopropanol. More than 90% of the CoM was recovered from pure cultures and environmental samples. We observed no interference from sediments in the CoM recovery process, and the method could be completed aerobically within 3 h. Freezing sediment samples resulted in 46 to 83% decreases in the amounts of detectable CoM, whereas freezing had no effect on the amounts of CoM determined in pure cultures. The method described here provides a quick and relatively simple way to estimate methanogenic biomass.
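Given the per-cell CoM content reported in this abstract (~0.39 fmol/cell in pure cultures), converting a measured CoM amount to a biomass estimate is a one-line calculation. A sketch using the paper's numbers (the function name is ours):

```python
def cells_from_com(com_pmol, com_per_cell_fmol=0.39):
    """Convert a CoM measurement to an approximate methanogen cell count.

    Uses the per-cell CoM content reported above (~0.39 fmol/cell for
    pure cultures; ~0.41 fmol/cell for environmental samples).
    1 pmol = 1000 fmol.
    """
    return com_pmol * 1000.0 / com_per_cell_fmol

# The reported detection limit of 2 pmol CoM/ml corresponds to roughly
# 5 x 10^3 cells per ml of sample:
print(round(cells_from_com(2.0)))  # 5128
```

For environmental samples, passing `com_per_cell_fmol=0.41` uses the most-probable-number-based figure instead.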
Bayesian Non-Stationary Index Gauge Modeling of Gridded Precipitation Extremes
NASA Astrophysics Data System (ADS)
Verdin, A.; Bracken, C.; Caldwell, J.; Balaji, R.; Funk, C. C.
2017-12-01
We propose a Bayesian non-stationary model to generate watershed scale gridded estimates of extreme precipitation return levels. The Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) dataset is used to obtain gridded seasonal precipitation extremes over the Taylor Park watershed in Colorado for the period 1981-2016. For each year, grid cells within the Taylor Park watershed are aggregated to a representative "index gauge," which is input to the model. Precipitation-frequency curves for the index gauge are estimated for each year, using climate variables with significant teleconnections as proxies. Such proxies enable short-term forecasting of extremes for the upcoming season. Disaggregation ratios of the index gauge to the grid cells within the watershed are computed for each year and preserved to translate the index gauge precipitation-frequency curve to gridded precipitation-frequency maps for select return periods. Gridded precipitation-frequency maps are of the same spatial resolution as CHIRPS (0.05° x 0.05°). We verify that the disaggregation method preserves spatial coherency of extremes in the Taylor Park watershed. Validation of the index gauge extreme precipitation-frequency method consists of ensuring extreme value statistics are preserved on a grid cell basis. To this end, a non-stationary extreme precipitation-frequency analysis is performed on each grid cell individually, and the resulting frequency curves are compared to those produced by the index gauge disaggregation method.
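The disaggregation step described above can be sketched in a few lines. The abstract does not specify the aggregation operator, so the spatial mean used here as the "index gauge" is an assumption, and the function name is ours:

```python
import numpy as np

def disaggregate(grid_precip, index_return_level):
    """Sketch of the index-gauge disaggregation step (assumed form).

    grid_precip: seasonal precipitation extreme at each cell (2-D array).
    The index gauge is taken as the spatial mean; each cell's ratio to
    the index gauge is preserved and applied to the index-gauge return
    level to yield a gridded return-level map.
    """
    index_gauge = grid_precip.mean()            # representative index gauge
    ratios = grid_precip / index_gauge          # preserved disaggregation ratios
    return ratios * index_return_level          # gridded frequency map

grid = np.array([[10., 20.], [30., 40.]])       # toy 2x2 watershed
mapped = disaggregate(grid, index_return_level=50.0)
```

By construction the mapped field keeps the spatial pattern of the input extremes while its index-gauge value matches the return level, which is the coherency property the abstract verifies.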
Life at extreme elevations on Atacama volcanoes: the closest thing to Mars on Earth?
Schmidt, S K; Gendron, E M S; Vincent, K; Solon, A J; Sommers, P; Schubert, Z R; Vimercati, L; Porazinska, D L; Darcy, J L; Sowell, P
2018-03-20
Here we describe recent breakthroughs in our understanding of microbial life in dry volcanic tephra ("soil") that covers much of the surface area of the highest elevation volcanoes on Earth. Dry tephra above 6000 m.a.s.l. is perhaps the best Earth analog for the surface of Mars because these "soils" are acidic, extremely oligotrophic, exposed to a thin atmosphere, high UV fluxes, and extreme temperature fluctuations across the freezing point. The simple microbial communities found in these extreme sites have among the lowest alpha diversity of any known earthly ecosystem and contain bacteria and eukaryotes that are uniquely adapted to these extreme conditions. The most abundant eukaryotic organism across the highest elevation sites is a Naganishia species that is metabolically versatile, can withstand high levels of UV radiation and can grow at sub-zero temperatures, and during extreme diurnal freeze-thaw cycles (e.g. −10 to +30 °C). The most abundant bacterial phylotype at the highest dry sites sampled (6330 m.a.s.l. on Volcán Llullaillaco) belongs to the enigmatic B12-WMSP1 clade which is related to the Ktedonobacter/Thermosporothrix clade that includes versatile organisms with the largest known bacterial genomes. Close relatives of B12-WMSP1 are also found in fumarolic soils on Volcán Socompa and in oligotrophic, fumarolic caves on Mt. Erebus in Antarctica. In contrast to the extremely low diversity of dry tephra, fumaroles found at over 6000 m.a.s.l. on Volcán Socompa support very diverse microbial communities with alpha diversity levels rivalling those of low elevation temperate soils. Overall, the high-elevation biome of the Atacama region provides perhaps the best "natural experiment" in which to study microbial life in both its most extreme setting (dry tephra) and in one of its least extreme settings (fumarolic soils).
Management of defects on lower extremities with the use of matriderm and skin graft.
Choi, Jun-Young; Kim, Seong-Hun; Oh, Gwang-Jin; Roh, Si-Gyun; Lee, Nae-Ho; Yang, Kyung-Moo
2014-07-01
The reconstruction of large skin and soft tissue defects on the lower extremities is challenging. The skin graft is a simple and frequently used method for covering a skin defect. However, poor skin quality and architecture are well-known problems that lead to scar contracture. The collagen-elastin matrix, Matriderm, has been used to improve the quality of skin grafts; however, no statistical and objective review of the results has been reported. Thirty-four patients (23 male and 11 female) who previously received a skin graft and simultaneous application of Matriderm between January 2010 and June 2012 were included in this study. The quality of the skin graft was evaluated using Cutometer, occasionally accompanied by pathologic findings. All 34 patients showed good skin quality compared to a traditional skin graft and were satisfied with their results. The statistical data for the measurement of the mechanical properties of the skin were similar to those for normal skin. In addition, there was no change in the engraftment rate. The biggest problem of a traditional skin graft is scar contracture. However, the dermal matrix presents an improvement in skin quality with elastin and collagen. Therefore, a skin graft along with a simultaneous application of Matriderm is safe and effective and leads to a significantly better outcome from the perspective of skin elasticity.
Computing return times or return periods with rare event algorithms
NASA Astrophysics Data System (ADS)
Lestang, Thibault; Ragone, Francesco; Bréhier, Charles-Edouard; Herbert, Corentin; Bouchet, Freddy
2018-04-01
The average time between two occurrences of the same event, referred to as its return time (or return period), is a useful statistical concept for practical applications. For instance, insurers or public agencies may be interested in the return time of a 10 m flood of the Seine river in Paris. However, due to their scarcity, reliably estimating return times for rare events is very difficult using either observational data or direct numerical simulations. For rare events, an estimator for return times can be built from the extrema of the observable on trajectory blocks. Here, we show that this estimator can be improved to remain accurate for return times of the order of the block size. More importantly, we show that this approach can be generalised to estimate return times from numerical algorithms specifically designed to sample rare events. So far, those algorithms have typically computed probabilities rather than return times. The approach we propose provides a computationally extremely efficient way to estimate numerically the return times of rare events for a dynamical system, gaining several orders of magnitude of computational costs. We illustrate the method on two kinds of observables, instantaneous and time-averaged, using two different rare event algorithms, for a simple stochastic process, the Ornstein–Uhlenbeck process. As an example of realistic applications to complex systems, we finally discuss extreme values of the drag on an object in a turbulent flow.
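The block-extrema construction mentioned in this abstract can be sketched as follows. This is a minimal illustration under our own assumptions (plain white noise instead of an Ornstein–Uhlenbeck trajectory, and a standard block-maxima formula); the paper's improved estimator and its rare-event-algorithm generalisation go further:

```python
import numpy as np

def return_times_block_maxima(x, dt, block_len):
    """Hedged sketch of a block-maxima return-time estimator.

    Split the series into blocks of `block_len` samples, take the
    maximum of each block, and associate the m-th largest maximum a_m
    with the return time r(a_m) = -tau / ln(1 - m/M), where tau is the
    block duration and M the number of blocks (this reduces to the
    naive tau*M/m for very rare events).
    """
    n_blocks = len(x) // block_len
    maxima = np.sort(
        x[: n_blocks * block_len].reshape(n_blocks, block_len).max(axis=1)
    )[::-1]                                   # block maxima, descending
    tau = block_len * dt
    ranks = np.arange(1, n_blocks)            # m = 1 .. M-1
    r = -tau / np.log1p(-ranks / n_blocks)    # log1p(-q) == log(1 - q)
    return maxima[: n_blocks - 1], r

# Return-time curve for a toy series (white noise, purely illustrative):
rng = np.random.default_rng(0)
levels, rt = return_times_block_maxima(rng.standard_normal(10_000),
                                       dt=1.0, block_len=100)
```

Higher thresholds map to longer return times, and the estimate degrades as the return time approaches the block duration, which is precisely the regime the paper's refinement targets.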
NASA Astrophysics Data System (ADS)
Ghosh, S. B.; Bhattacharya, K.; Nayak, S.; Mukherjee, P.; Salaskar, D.; Kale, S. P.
2015-09-01
Definitive identification of microorganisms, including pathogenic and non-pathogenic bacteria, is extremely important for a wide variety of applications including food safety, environmental studies, bio-terrorism threats, microbial forensics, criminal investigations and above all disease diagnosis. Although extremely powerful techniques such as those based on PCR and microarrays exist, they require sophisticated laboratory facilities along with elaborate sample preparation by trained researchers. Among different spectroscopic techniques, FTIR was used in the 1980s and 90s for bacterial identification. In the present study five species of Bacillus were isolated from the aerobic predigester chamber of Nisargruna Biogas Plant (NBP) and were identified to the species level by biochemical and molecular biological (16S ribosomal DNA sequence) methods. Those organisms were further checked by solid state spectroscopic absorbance measurements using a wide range of electromagnetic radiation (wavelength 200 nm to 25,000 nm) encompassing UV, visible, near Infrared and Infrared regions. UV-Vis and NIR spectroscopy was performed on dried bacterial cell suspension on silicon wafer in specular mode while FTIR was performed on KBr pellets containing the bacterial cells. Consistent and reproducible species specific spectra were obtained and sensitivity up to a level of 1000 cells was observed in FTIR with a DTGS detector. This clearly shows the potential of solid state spectroscopic techniques for simple, easy to implement, reliable and sensitive detection of bacteria from environmental samples.
Moradi, Sona; Hadjesfandiari, Narges; Toosi, Salma Fallah; Kizhakkedathu, Jayachandran N; Hatzikiriakos, Savvas G
2016-07-13
In order to design antithrombotic implants, the effect of extreme wettability (superhydrophilicity to superhydrophobicity) on the biocompatibility of the metallic substrates (stainless steel and titanium) was investigated. The wettability of the surface was altered by chemical treatments and laser ablation methods. The chemical treatments generated different functional groups and chemical compositions, as evident from XPS analysis. The micro/nanopatterning by laser ablation resulted in three different pattern geometries with different surface roughnesses and, consequently, wettabilities. The patterned surfaces were further modified with chemical treatments to generate a wide range of surface wettability. The influence of chemical functional groups, pattern geometry, and surface wettability on protein adsorption and platelet adhesion was studied. On chemically treated flat surfaces, the type of hydrophilic treatment was shown to be a contributing factor that determines the platelet adhesion, since the hydrophilic oxidized substrates exhibit less platelet adhesion in comparison to the control untreated or acid treated surfaces. Also, the surface morphology, surface roughness, and superhydrophobic character of the surfaces are contributing factors to platelet adhesion on the surface. Our results show that superhydrophobic cauliflower-like patterns are highly resistant to platelet adhesion, possibly due to the stability of the Cassie-Baxter state for this pattern compared to others. Our results also show that simple surface treatments on metals offer a novel way to improve the hemocompatibility of metallic substrates.
Selection criteria for wear resistant powder coatings under extreme erosive wear conditions
NASA Astrophysics Data System (ADS)
Kulu, P.; Pihl, T.
2002-12-01
Wear-resistant thermal spray coatings for sliding wear are hard but brittle (such as carbide and oxide based coatings), which makes them useless under impact loading conditions and sensitive to fatigue. Under extreme conditions of erosive wear (impact loading, high hardness of abrasives, and high velocity of abradant particles), composite coatings ensure an optimal combination of hardness and toughness. The article describes tungsten carbide-cobalt (WC-Co) systems and self-fluxing alloys containing tungsten carbide based hardmetal particles [NiCrSiB-(WC-Co)] deposited by the detonation gun, continuous detonation spraying, and spray fusion processes. Different powder compositions and processes were studied, and the effects of the coating structure and wear parameters on the wear resistance of the coatings are evaluated. The dependence of the wear resistance of sprayed and fused coatings on their hardness is discussed, and hardness criteria for coating selection are proposed. The so-called “double cemented” structure of WC-Co based hardmetal or metal matrix composite coatings, as compared with a simple cobalt matrix containing particles of WC, was found to be optimal. Structural criteria for coating selection are provided. To assist the end user in selecting an optimal deposition method and materials, coating selection diagrams of wear resistance versus hardness are given. This paper also discusses the cost-effectiveness of coatings in the application areas that are more sensitive to cost, and composite coatings based on recycled materials are offered.
Jonker, Dirk; Gustafsson, Ewa; Rolander, Bo; Arvidsson, Inger; Nordander, Catarina
2015-01-01
A new health surveillance protocol for work-related upper-extremity musculoskeletal disorders has been validated by comparing the results with a reference protocol. The studied protocol, Health Surveillance in Adverse Ergonomics Conditions (HECO), is a new version of the reference protocol modified for application in the Occupational Health Service (OHS). The HECO protocol contains both a screening part and a diagnosing part. Sixty-three employees were examined. The screening in HECO did not miss any diagnosis found when using the reference protocol, but in comparison to the reference protocol considerable time savings could be achieved. Fair to good agreement between the protocols was obtained for one or more diagnoses in neck/shoulders (86%, k = 0.62) and elbow/hands (84%, k = 0.49). Therefore, the results obtained using the HECO protocol can be compared with a reference material collected with the reference protocol, and thus provide information of the magnitude of disorders in an examined work group. Practitioner Summary: The HECO protocol is a relatively simple physical examination protocol for identification of musculoskeletal disorders in the neck and upper extremities. The protocol is a reliable and cost-effective tool for the OHS to use for occupational health surveillance in order to detect workplaces at high risk for developing musculoskeletal disorders. PMID:25761380
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hassanein, Ahmed; Konkashbaev, Isak
A device and method for generating extremely short-wave ultraviolet electromagnetic waves uses two intersecting plasma beams generated by two plasma accelerators. The intersection of the two plasma beams emits electromagnetic radiation, in particular radiation at extreme ultraviolet wavelengths. In the preferred orientation, two axially aligned counter-streaming plasmas collide to produce an intense source of electromagnetic radiation at the 13.5 nm wavelength. The Mather-type plasma accelerators can utilize tin- or lithium-covered electrodes. Tin, lithium, or xenon can be used as the photon-emitting gas source.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly on a relatively small and local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs) and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute critical value method is easy to use but unable to reflect differences in the spatial distribution of rainfall. The non-parametric percentile method can account for the spatial distribution of precipitation, but the threshold value is sensitive to the size of the rainfall data series and to the choice of percentile, which makes it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution.
The consistency between the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) for the daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
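The non-parametric percentile method discussed in this abstract is simple to sketch. The percentile level and wet-day cutoff below are common choices in the literature, not values taken from this study, and the function name is ours:

```python
import numpy as np

def percentile_thresholds(daily_precip, q=95.0, wet_day=0.1):
    """Non-parametric percentile thresholds, one per station (sketch).

    `daily_precip` has shape (days, stations). Following common
    practice, the percentile is computed over wet days only
    (>= `wet_day` mm); both q and the cutoff are assumptions.
    """
    return np.array([
        np.percentile(col[col >= wet_day], q) for col in daily_precip.T
    ])
```

Computing one threshold per station is exactly what lets this method reflect spatial differences in rainfall; its drawback, as the abstract notes, is the sensitivity of the result to `q` and to the record length.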
Evaluation of extreme temperature events in northern Spain based on process control charts
NASA Astrophysics Data System (ADS)
Villeta, M.; Valencia, J. L.; Saá, A.; Tarquis, A. M.
2018-02-01
Extreme climate events have recently attracted the attention of a growing number of researchers because these events impose a large cost on agriculture and associated insurance planning. This study focuses on extreme temperature events and proposes a new method for their evaluation based on statistical process control tools, which are unusual in climate studies. A series of minimum and maximum daily temperatures for 12 geographical areas of a Spanish region between 1931 and 2009 were evaluated by applying statistical process control charts to statistically test whether evidence existed for an increase or a decrease of extreme temperature events. Specification limits were determined for each geographical area and used to define four types of extreme anomalies: lower and upper extremes for the minimum and maximum anomalies. A new binomial Markov extended process that considers the autocorrelation between extreme temperature events was generated for each geographical area and extreme anomaly type to establish the attribute control charts for the annual fraction of extreme days and to monitor the occurrence of annual extreme days. This method was used to assess the significance of changes and trends of extreme temperature events in the analysed region. The results demonstrate the effectiveness of an attribute control chart for evaluating extreme temperature events. For example, the evaluation of extreme maximum temperature events using the proposed statistical process control charts was consistent with the evidence of an increase in maximum temperatures during the last decades of the last century.
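A plain binomial attribute chart (p-chart) illustrates the control-chart idea in this abstract. This sketch ignores the autocorrelation between extreme days that the study's extended binomial Markov process accounts for, and the function name is ours:

```python
import numpy as np

def p_chart_limits(extreme_days, n_days=365):
    """Three-sigma p-chart for the annual fraction of extreme days.

    A plain binomial control chart, shown only to illustrate the idea;
    the study above additionally models autocorrelation between
    extreme days.
    """
    p = np.asarray(extreme_days, dtype=float) / n_days  # annual fractions
    p_bar = p.mean()                                    # centre line
    sigma = np.sqrt(p_bar * (1 - p_bar) / n_days)
    ucl = p_bar + 3 * sigma                             # upper control limit
    lcl = max(p_bar - 3 * sigma, 0.0)                   # lower control limit
    signals = np.where((p > ucl) | (p < lcl))[0]        # out-of-control years
    return ucl, lcl, signals

# Twenty ordinary years (~18 extreme days each) and one unusual year:
ucl, lcl, out_years = p_chart_limits([18] * 20 + [40])
```

A year plotting above the upper limit flags a significant excess of extreme days, which is how trends such as the reported increase in maximum temperatures show up on the chart.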
NASA Astrophysics Data System (ADS)
Thornton, James
2016-04-01
December 2015 was recently confirmed as the UK's wettest month on record by the Met Office. The most extreme precipitation was associated with three extratropical storm systems, named Desmond, Eva and Frank by the pilot Met Éireann/Met Office "Name our storms" project. In response, river levels reached new maxima at many locations across Northern England. Property damage was widespread, with at least 16,000 homes in England flooded. As with recent predecessors, these events reinvigorated public debate about the extent to which natural weather variability, anthropogenic climate change, increased urbanisation and/or other changes in catchment and river management might be responsible for apparent increases in flood frequency and severity. Change detection and attribution science is required to inform the debate, but is complicated by the short (typically ~ 35 years) river flow records available. Running a large number of coupled climate and hydrological model simulations is a powerful way of addressing the 'attribution question' with respect to the hypothesised climate forcing, for example, albeit one that remains largely in the research domain at present. In the meantime, flood-frequency analysis of available records still forms the bedrock of practice in the water industry; the results are used routinely in the design of new defence structures and in the development of flood hazard maps, amongst other things. In such analyses, it is usual for the records to be assumed stationary. 
In this context, the specific aims of this research are twofold: • To investigate whether, under the assumption of stationarity, the outputs of standard flood-frequency modelling methods (both 'single-site' and 'spatially pooled' methods) differ significantly depending on whether the new peaks are included or excluded; and • To assess whether previous conclusions regarding trends in English river flows still hold, by reapplying simple statistical tests, such as the Mann-Kendall test, to data series with the new peaks included. Overall, the research seeks to explore the robustness of commonly employed statistical flood estimation methods to instrumentally unprecedented extremes. Should it be found that the new records do indeed represent paradigm-shifting 'leverage points', then the suggestion of the Deputy Chief Executive of the Environment Agency, David Rooke - that a "complete rethink" of flood mitigation matters is required in our world of "unknown extremes" - must be given sufficient attention.
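The Mann-Kendall test named above is simple enough to restate in a few lines; this is a minimal sketch without the tie correction, applied to a made-up rising series rather than real flow records:

```python
import math

def mann_kendall(x):
    """Mann-Kendall trend test: S statistic and normal-approximation
    z-score (no tie correction, so suited to continuous series)."""
    n = len(x)
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])   # +1 concordant, -1 discordant
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s, z = mann_kendall([1, 2, 3, 4, 5, 7, 6, 8, 9, 10])
```

|z| > 1.96 rejects the no-trend hypothesis at the 5% level; appending an unprecedented peak to the end of a short record can move z across that threshold, which is exactly the sensitivity the study probes.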
Extreme values in the Chinese and American stock markets based on detrended fluctuation analysis
NASA Astrophysics Data System (ADS)
Cao, Guangxi; Zhang, Minjia
2015-10-01
This paper presents a comparative analysis of extreme values in the Chinese and American stock markets based on the detrended fluctuation analysis (DFA) algorithm, using daily data of the Shanghai Composite Index and the Dow Jones Industrial Average. The empirical results indicate that the multifractal detrended fluctuation analysis (MF-DFA) method is more objective than the traditional percentile method. The range of extreme values of the Dow Jones Industrial Average is smaller than that of the Shanghai Composite Index, and the extreme values of the Dow Jones Industrial Average are more strongly clustered in time. Extreme values in both the Chinese and American stock markets are concentrated in 2008, consistent with the financial crisis of that year. Moreover, we investigate whether extreme events affect the cross-correlation between the Chinese and American stock markets using the multifractal detrended cross-correlation analysis algorithm. The results show that extreme events do not affect the cross-correlation between the two markets.
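For orientation, the monofractal core of the DFA algorithm (the q = 2 case that MF-DFA generalizes) can be sketched as follows; the white-noise input is illustrative, not stock-market data:

```python
import numpy as np

def dfa(x, scales):
    """Monofractal DFA with order-1 detrending: returns the fluctuation
    function F(s) for each scale s (this is the q = 2 core of MF-DFA)."""
    y = np.cumsum(x - np.mean(x))          # integrated profile
    F = []
    for s in scales:
        n_seg = len(y) // s
        sq = []
        for v in range(n_seg):             # non-overlapping segments
            seg = y[v * s:(v + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)
            sq.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
F = dfa(noise, scales)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
```

For white noise the scaling exponent alpha is near 0.5, while persistent series give alpha > 0.5; MF-DFA repeats this computation over a range of moments q to expose multifractality.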
Mapping axonal density and average diameter using non-monotonic time-dependent gradient-echo MRI
NASA Astrophysics Data System (ADS)
Nunes, Daniel; Cruz, Tomás L.; Jespersen, Sune N.; Shemesh, Noam
2017-04-01
White matter (WM) microstructural features, such as axonal density and average diameter, are crucial to the normal function of the central nervous system (CNS), as they are closely related to axonal conduction velocities. Conversely, disruptions of these microstructural features may result in severe neurological deficits, suggesting that their noninvasive mapping could be an important step towards diagnosing and following pathophysiology. Whereas diffusion-based MRI methods have been proposed to map these features, they typically entail the application of powerful gradients, which are rarely available in the clinic, or extremely long acquisition schemes to extract information from parameter-intensive models. In this study, we suggest that simple and time-efficient multi-gradient-echo (MGE) MRI can be used to extract the axonal density from susceptibility-driven non-monotonic decay in the time-dependent signal. We show, both theoretically and with simulations, that a non-monotonic signal decay will occur for multi-compartmental microstructures - such as axons and extra-axonal spaces, which were here used as a simple model for the microstructure - and that, for axons parallel to the main magnetic field, the axonal density can be extracted. We then demonstrate experimentally, in ex-vivo rat spinal cords, that the different tracts - characterized by different microstructures - can be clearly contrasted using the MGE-derived maps. When the quantitative results are compared against ground-truth histology, they reflect the axonal fraction (though with a bias, as evident from Bland-Altman analysis). The extra-axonal fraction can likewise be estimated. These results suggest that our model is oversimplified, yet they also evidence the potential and usefulness of the approach for mapping underlying microstructures using a simple and time-efficient MRI sequence.
We further show that a simple general-linear-model can predict the average axonal diameters from the four model parameters, and map these average axonal diameters in the spinal cords. While clearly further modelling and theoretical developments are necessary, we conclude that salient WM microstructural features can be extracted from simple, SNR-efficient multi-gradient echo MRI, and that this paves the way towards easier estimation of WM microstructure in vivo.
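A minimal simulation shows why a two-compartment microstructure yields the non-monotonic MGE decay the method exploits; all parameter values here (water fractions, T2 values, frequency offset) are made-up illustrations, not the paper's fitted model:

```python
import numpy as np

# Toy two-compartment gradient-echo signal: intra-axonal water at a
# susceptibility-induced frequency offset beats against extra-axonal
# water, giving a non-monotonic magnitude decay.
t = np.linspace(0, 0.06, 600)          # echo times, s
f_axon = 0.4                            # axonal water fraction (assumed)
d_omega = 2 * np.pi * 40                # 40 Hz offset (assumed)
signal = (f_axon * np.exp(-t / 0.05 + 1j * d_omega * t)
          + (1 - f_axon) * np.exp(-t / 0.07))
mag = np.abs(signal)

# the magnitude dips and partially recovers, so it is not monotonic
diffs = np.diff(mag)
non_monotonic = bool((diffs > 0).any() and (diffs < 0).any())
```

The beating between the frequency-shifted axonal pool and the extra-axonal pool makes the magnitude dip and partially recover, and the depth of that dip carries information about the axonal water fraction.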
A SIMPLE METHOD FOR THE EXTRACTION AND QUANTIFICATION OF PHOTOPIGMENTS FROM SYMBIODINIUM SPP.
John E. Rogers and Dragoslav Marcovich. Submitted. Simple Method for the Extraction and Quantification of Photopigments from Symbiodinium spp.. Limnol. Oceanogr. Methods. 19 p. (ERL,GB 1192).
We have developed a simple, mild extraction procedure using methanol which, when...
Fang, Cheng; Butler, David Lee
2013-05-01
In this paper, an innovative method for CMM (coordinate measuring machine) self-calibration is proposed. In contrast to conventional CMM calibration, which relies heavily on a high-precision reference standard such as a laser interferometer, the proposed calibration method is based on a low-cost artefact fabricated from commercially available precision ball bearings. By optimizing the mathematical model and rearranging the data sampling positions, the experimental process and data analysis can be simplified. In mathematical terms, the number of samples can be minimized by eliminating the redundant equations among those configured by the experimental data array. The section lengths of the artefact are measured at the arranged positions, from which an equation set can be configured to determine the measurement errors at the corresponding positions. With the proposed method, the equation set is short of one equation; this can be supplemented either by measuring the total length of the artefact with a higher-precision CMM or by calibrating the single-point error at the extreme position with a laser interferometer. In this paper, the latter is selected. With spline interpolation, the error compensation curve can be determined. To verify the proposed method, a simple calibration system was set up on a commercial CMM. Experimental results showed that, with the error compensation curve, the measurement uncertainty can be reduced to 50% of its original value.
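The rank deficiency the paper describes - the equation set being short of one equation - can be seen in a toy version of the self-calibration system; the error values and noise-free "measurements" below are fabricated for illustration:

```python
import numpy as np

# Measuring artefact sections at shifted positions yields differences of
# the unknown positional errors e_k; those difference equations alone are
# rank-deficient by one, so we anchor e_0 with an independently calibrated
# value (the role of the laser interferometer in the paper).
true_e = np.array([0.0, 1.5, -0.8, 2.1, 0.4])   # µm, made-up ground truth
n = len(true_e)
rows, rhs = [], []
for i in range(n - 1):                            # section i .. i+1 readings
    r = np.zeros(n)
    r[i + 1], r[i] = 1.0, -1.0
    rows.append(r)
    rhs.append(true_e[i + 1] - true_e[i])         # ideal (noise-free) data
anchor = np.zeros(n)
anchor[0] = 1.0                                   # supplementary equation e_0
rows.append(anchor)
rhs.append(true_e[0])
e_hat, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

Without the anchor row the difference equations determine the errors only up to a common offset; the supplementary single-point calibration fixes that offset, after which spline interpolation between the recovered points gives the compensation curve.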
Attribution of Extreme Rainfall Events in the South of France Using EURO-CORDEX Simulations
NASA Astrophysics Data System (ADS)
Luu, L. N.; Vautard, R.; Yiou, P.
2017-12-01
The Mediterranean region regularly undergoes episodes of intense precipitation in the fall season that exceed 300 mm a day. This study focuses on the role of climate change in the dynamics of the events that occur in the South of France. We used an ensemble of 10 EURO-CORDEX model simulations with two horizontal resolutions (EUR-11: 0.11° and EUR-44: 0.44°) for the attribution of extreme fall rainfall in the Cévennes mountain range (South of France). The biases of the simulations were corrected with a simple scaling adjustment and a quantile correction (CDFt). This produces five datasets, including EUR-44 and EUR-11 with and without scaling adjustment and CDFt-EUR-11, on which we test the impact of resolution and bias correction on the extremes. Those datasets, after pooling all models together, are fitted by a stationary Generalized Extreme Value distribution for several periods to estimate a climate change signal in the tail of the distribution of extreme rainfall in the Cévennes region. Those changes are then interpreted with a scaling model that links extreme rainfall to mean and maximum daily temperature. The results show that higher-resolution simulations with bias adjustment provide a robust and confident increase in the intensity and likelihood of occurrence of autumn extreme rainfall in the area in the current climate in comparison with the historical climate. The exceedance probability of a 1-in-1000-year event in the historical climate may increase by a factor of 1.8 under the current climate, with a confidence interval of 0.4 to 5.3, following the CDFt bias-adjusted EUR-11. The change in magnitude appears to follow the Clausius-Clapeyron relation, which indicates a 7% increase in rainfall per 1 °C increase in temperature.
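The risk-ratio logic of such attribution studies can be sketched directly from the GEV exceedance probability; the location, scale and shape parameters below are invented for illustration and are not the fitted EURO-CORDEX values:

```python
import math

def gev_exceedance(x, mu, sigma, xi):
    """P(X > x) for a GEV(mu, sigma, xi) distribution (xi != 0 branch)."""
    t = 1 + xi * (x - mu) / sigma
    return 1 - math.exp(-t ** (-1 / xi))

# illustrative 'historical' vs 'current' fits for daily rainfall (mm)
p_hist = gev_exceedance(300.0, mu=120.0, sigma=40.0, xi=0.1)
p_curr = gev_exceedance(300.0, mu=132.0, sigma=44.0, xi=0.1)
risk_ratio = p_curr / p_hist   # > 1 means the event is more likely now
```

Comparing the exceedance probability of the same rainfall threshold under the "historical" and "current" fits gives the factor-of-increase statement quoted in the abstract.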
Extreme Material Physical Properties and Measurements above 100 tesla
NASA Astrophysics Data System (ADS)
Mielke, Charles
2011-03-01
The National High Magnetic Field Laboratory (NHMFL) Pulsed Field Facility (PFF) at Los Alamos National Laboratory (LANL) offers extreme environments of ultra-high magnetic fields above 100 tesla by use of the single-turn method, as well as fields approaching 100 tesla with more complex methods. The challenge of metrology in these extreme field-generating devices is complicated by the millions of amperes of current and tens of thousands of volts required to deliver the pulsed power needed for field generation. Methods of detecting the physical properties of materials are essential parts of the science that seeks to understand and eventually control the fundamental functionality of materials in extreme environments. Decoupling the signal of the sample from the electromagnetic interference associated with the magnet system is required to make these state-of-the-art magnetic fields useful to scientists studying materials in high magnetic fields. Cutting-edge methods in use, as well as methods under development, will be presented together with recent results on graphene and high-Tc superconductors, along with the associated challenges. National Science Foundation DMR-Award 0654118.
Bentahir, Mostafa; Laduron, Frederic; Irenge, Leonid; Ambroise, Jérôme; Gala, Jean-Luc
2014-01-01
Separating CBRN mixed samples that contain both chemical and biological warfare agents (CB mixed sample) in liquid and solid matrices remains a very challenging issue. Parameters were set up to assess the performance of a simple filtration-based method, first optimized on separate C- and B-agents and then assessed on a model of a CB mixed sample. In this model, MS2 bacteriophage, Autographa californica nuclear polyhedrosis baculovirus (AcNPV), Bacillus atrophaeus and Bacillus subtilis spores were used as biological agent simulants, whereas ethyl methylphosphonic acid (EMPA) and pinacolyl methylphosphonic acid (PMPA) were used as VX and soman (GD) nerve agent surrogates, respectively. Nanoseparation centrifugal devices with various pore-size cut-offs (30 kD up to 0.45 µm) and three RNA extraction methods (Invisorb, EZ1 and Nuclisens) were compared. RNA (MS2) and DNA (AcNPV) quantification was carried out by means of specific and sensitive quantitative real-time PCRs (qPCR). Liquid chromatography coupled to time-of-flight mass spectrometry (LC/TOFMS) was used for quantifying EMPA and PMPA. Culture methods and qPCR demonstrated that membranes with a 30 kD cut-off retain more than 99.99% of the biological agents (MS2, AcNPV, Bacillus atrophaeus and Bacillus subtilis spores) tested separately. A rapid and reliable separation of CB mixed sample models (MS2/PEG-400 and MS2/EMPA/PMPA) contained in simple liquid or complex matrices such as sand and soil was also successfully achieved on a 30 kD filter, with more than 99.99% retention of MS2 on the filter membrane and up to 99% recovery of PEG-400, EMPA and PMPA in the filtrate. The whole separation process turnaround time (TAT) was less than 10 minutes. The filtration method appears to be rapid, versatile and extremely efficient.
The separation method developed in this work constitutes therefore a useful model for further evaluating and comparing additional separation alternative procedures for a safe handling and preparation of CB mixed samples. PMID:24505375
[Analysis on the accuracy of simple selection method of Fengshi (GB 31)].
Li, Zhixing; Zhang, Haihua; Li, Suhe
2015-12-01
To explore the accuracy of the simple selection method of Fengshi (GB 31). Through study of ancient and modern data, analysis and integration of acupuncture books, comparison of the locations of Fengshi (GB 31) given by doctors of all dynasties, and integration of modern anatomy, the modern simple selection method of Fengshi (GB 31) is made definite, and it is the same as the traditional way. It is believed that the simple selection method is in accord with the human-oriented thought of TCM. Treatment by acupoints should be based on the emerging nature and the individual differences of patients. It is also proposed that Fengshi (GB 31) should be located by integrating the simple method with body-surface anatomical landmarks.
An analytic data analysis method for oscillatory slug tests.
Chen, Chia-Shyun
2006-01-01
An analytical data analysis method is developed for slug tests in partially penetrating wells in confined or unconfined aquifers of high hydraulic conductivity. As adapted from the van der Kamp method, the determination of the hydraulic conductivity is based on the occurrence times and the displacements of the extreme points measured from the oscillatory data and their theoretical counterparts available in the literature. This method is applied to two sets of slug test response data presented by Butler et al.: one set shows slow damping with seven discernible extreme points, and the other shows rapid damping with three extreme points. The estimates of the hydraulic conductivity obtained by the analytic method are in good agreement with those determined by an available curve-matching technique.
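Extracting the occurrence times and displacements of the extreme points from oscillatory data is straightforward; the damped cosine below is a synthetic stand-in for a slug-test record, with made-up damping and frequency:

```python
import numpy as np

# Synthetic oscillatory head response: exp(-0.3 t) * cos(pi t).
t = np.linspace(0, 10, 2001)
h = np.exp(-0.3 * t) * np.cos(2 * np.pi * 0.5 * t)

# Extreme points sit where the slope changes sign.
d = np.diff(h)
idx = np.where((d[:-1] > 0) != (d[1:] > 0))[0] + 1
extreme_times = t[idx]
extreme_disps = h[idx]

# successive extrema of a damped cosine are half a period apart (1 s here)
spacing = np.diff(extreme_times)
```

The spacing of successive extrema gives the oscillation half-period, and the decay of their displacements gives the damping; these are the two quantities that a van der Kamp-type analysis converts into hydraulic conductivity.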
Harp, E.L.; Reid, M.E.; McKenna, J.P.; Michael, J.A.
2009-01-01
Loss of life and property caused by landslides triggered by extreme rainfall events demonstrates the need for landslide-hazard assessment in developing countries where recovery from such events often exceeds the country's resources. Mapping landslide hazards in developing countries where the need for landslide-hazard mitigation is great but the resources are few is a challenging, but not intractable problem. The minimum requirements for constructing a physically based landslide-hazard map from a landslide-triggering storm, using the simple methods we discuss, are: (1) an accurate mapped landslide inventory, (2) a slope map derived from a digital elevation model (DEM) or topographic map, and (3) material strength properties of the slopes involved. Provided that the landslide distribution from a triggering event can be documented and mapped, it is often possible to glean enough topographic and geologic information from existing databases to produce a reliable map that depicts landslide hazards from an extreme event. Most areas of the world have enough topographic information to provide digital elevation models from which to construct slope maps. In the likely event that engineering properties of slope materials are not available, reasonable estimates can be made with detailed field examination by engineering geologists or geotechnical engineers. Resulting landslide hazard maps can be used as tools to guide relocation and redevelopment, or, more likely, temporary relocation efforts during severe storm events such as hurricanes/typhoons to minimize loss of life and property. We illustrate these methods in two case studies of lethal landslides in developing countries: Tegucigalpa, Honduras (during Hurricane Mitch in 1998) and the Chuuk Islands, Micronesia (during Typhoon Chata'an in 2002).
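One of the "simple methods" commonly used for such physically based hazard maps is the infinite-slope stability model, which needs exactly the inputs listed above (a slope map and material strength properties, with the landslide inventory used for calibration). The dry-slope form and all parameter values below are assumptions for illustration:

```python
import math

def factor_of_safety(slope_deg, c, phi_deg, gamma=18.0, z=2.0):
    """Infinite-slope factor of safety (dry case).
    c: cohesion (kPa), phi_deg: friction angle, gamma: unit weight
    (kN/m^3), z: failure-plane depth (m). Values here are illustrative."""
    b = math.radians(slope_deg)
    p = math.radians(phi_deg)
    return (c / (gamma * z * math.sin(b) * math.cos(b))
            + math.tan(p) / math.tan(b))

fs_gentle = factor_of_safety(15, c=5.0, phi_deg=30)   # stable slope
fs_steep = factor_of_safety(45, c=5.0, phi_deg=30)    # FS below 1
```

Cells with a factor of safety near or below 1 are mapped as hazardous; a fuller treatment would add pore-water pressure from the triggering rainfall, which lowers FS during the storm.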
NASA Astrophysics Data System (ADS)
Lüttger, Andrea B.; Feike, Til
2018-04-01
Climate change constitutes a major challenge for high productivity in wheat, the most widely grown crop in Germany. Extreme weather events, including dry spells and heat waves, which negatively affect wheat yields, are expected to aggravate in the future. It is crucial to improve the understanding of the spatiotemporal development of such extreme weather events and the respective crop-climate relationships in Germany. Thus, the present study is a first attempt to evaluate the historic development of relevant drought and heat-related extreme weather events from 1901 to 2010 on county level (NUTS-3) in Germany. Three simple drought indices and two simple heat stress indices were used in the analysis. A continuous increase in dry spells over time was observed across the investigated periods 1901-1930, 1931-1960, 1961-1990, and 1991-2010. Short and medium dry spells, i.e., precipitation-free periods longer than 5 and 8 days, respectively, increased more strongly than longer dry spells (longer than 11 days). The heat-related stress indices with maximum temperatures above 25 and 28 °C during critical wheat growth phases showed no significant increase over the first three periods but an especially sharp increase in the final 1991-2010 period, with the increases being particularly pronounced in parts of southwestern Germany. Trend analysis over the entire 110-year period using the Mann-Kendall test revealed a significant positive trend for all investigated indices except for heat stress above 25 °C during the flowering period. The analysis of county-level yield data from 1981 to 2010 revealed declining spatial yield variability and rather constant temporal yield variability over the three investigated decades (1981-1990, 1991-2000, and 2001-2010). A clear spatial gradient manifested over time, with variability in the west being much smaller than in the east of Germany.
Correlating yield variability with the previously analyzed extreme weather indices revealed strong spatiotemporal fluctuations in explanatory power of the different indices over all German counties and the three time periods. Over the 30 years, yield deviations were increasingly well correlated with heat and drought-related indices, with the number of days with maximum temperature above 25 °C during anthesis showing a sharp increase in explanatory power over entire Germany in the final 2001-2010 period.
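A dry-spell index of the kind used in the study reduces to counting precipitation-free runs in a daily series; the wet-day threshold and the toy precipitation series here are assumptions:

```python
import numpy as np

def count_dry_spells(precip, min_len, wet_thresh=0.1):
    """Count dry spells (runs of days with precipitation below
    wet_thresh, in mm) lasting at least min_len days. Threshold and
    spell lengths are illustrative, not the study's definitions."""
    dry = np.asarray(precip) < wet_thresh
    count, run = 0, 0
    for d in dry:
        run = run + 1 if d else 0
        if d and run == min_len:   # count each spell once, when it
            count += 1             # first reaches the threshold length
    return count

# toy daily precipitation (mm): a 6-day and an 8-day dry run
precip = [0, 0, 0, 0, 0, 0, 2.0, 0, 0, 5.1,
          0, 0, 0, 0, 0, 0, 0, 0, 1.2]
n5 = count_dry_spells(precip, 5)
n8 = count_dry_spells(precip, 8)
```

Applying such counts per county and year, and trend-testing the resulting annual series, reproduces the structure of the historical analysis.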
Simple and efficient LCAO basis sets for the diffuse states in carbon nanostructures.
Papior, Nick R; Calogero, Gaetano; Brandbyge, Mads
2018-06-27
We present a simple way to describe the lowest unoccupied diffuse states in carbon nanostructures in density functional theory calculations using a minimal LCAO (linear combination of atomic orbitals) basis set. By comparison with plane-wave basis calculations, we show how these states can be captured by adding long-range orbitals to the standard LCAO basis sets for the extreme cases of planar sp2 carbon (graphene) and curved carbon (C60). In particular, using long-range Bessel functions as additional basis functions retains a minimal basis size. This provides a smaller and simpler atom-centered basis set compared to the standard pseudo-atomic orbitals (PAOs) with multiple polarization orbitals or to adding non-atom-centered states to the basis.
Tartari, Flamur; Buzo, Stiliano; Vyshka, Gentian
2009-01-01
The apparatus invented by Dr. Luigj Benusi in 1943 in Tirana was a practical application of the Kowarsky technique and the Ambard laws, helping to determine blood urea levels, which is very important in a variety of diseases, mainly kidney disorders. The apparatus was built from very simple laboratory materials, such as glasses, test tubes, corks and volumetric cylinders. Technologically, it was based upon the determination of blood urea through hypobromite; among the advantages of Benusi's apparatus were its extreme simplicity, the small amount of blood needed to produce results (2 milliliters), and the ease of cleaning and handling in everyday practical use.
Simple and compact expressions for neutrino oscillation probabilities in matter
Minakata, Hisakazu; Parke, Stephen J.
2016-01-29
We reformulate perturbation theory for neutrino oscillations in matter with an expansion parameter related to the ratio of the solar to the atmospheric Δm² scales. Unlike previous works, we use a renormalized basis in which certain first-order effects are taken into account in the zeroth-order Hamiltonian. Using this perturbation theory, we derive extremely compact expressions for the neutrino oscillation probabilities in matter. We find, for example, that the ν_e disappearance probability at this order is of a simple two-flavor form with an appropriately identified mixing angle and Δm². Furthermore, despite the exceptional simplicity of their forms, they accommodate all-order effects in θ13 and the matter potential.
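The "simple two-flavor form" referred to above has the generic shape below, written with tildes marking the matter-dressed mixing angle and mass-squared splitting; the precise identification of the tilded quantities at this order is given in the paper, so this is only the standard template:

```latex
P(\nu_e \rightarrow \nu_e) \;\simeq\; 1 - \sin^2 2\tilde{\theta}\,
  \sin^2\!\left( \frac{\Delta \tilde{m}^2 \, L}{4E} \right)
```

Here L is the baseline and E the neutrino energy; in vacuum the tilded quantities reduce to the usual two-flavor mixing angle and Δm².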
Co-occurrence of florid cemento-osseous dysplasia and simple bone cyst: a case report.
Rao, Kumuda Arvind; Shetty, Shishir Ram; Babu, Subhas G; Castelino, Renita Lorina
2011-01-01
The purpose of this report is to present a rare case of co-occurrence of florid cemento-osseous dysplasia with a simple bone cyst in a middle-aged Asian woman. Most reported cases are isolated cases of simple bone cyst or florid cemento-osseous dysplasia; co-occurrence of these two entities is extremely rare. The authors report a 41-year-old female patient with co-occurrence of mandibular florid cemento-osseous dysplasia with a simple bone cyst. A thorough clinical and radiological examination was carried out, and a mandibular cyst with possible co-occurrence of florid cemento-osseous dysplasia was diagnosed. Surgical exploration of the multilocular lesion was performed. Since the patient was symptomatic at the time of presentation, utmost caution was taken during the surgical procedure, as florid cemento-osseous dysplasia is associated with hypo-vascularity of the affected bone. Based on histopathological as well as supporting clinico-radiological findings, a confirmative diagnosis of florid cemento-osseous dysplasia co-occurring with a simple bone cyst was made. The patient was followed up for a period of six months and was reported to be asymptomatic. Timely diagnosis and well-planned treatment are important to obtain a good prognosis when a rare co-occurrence of two or more bone lesions affects the jaws.
Charvat, A; Stasicki, B; Abel, B
2006-03-09
In the present article a novel approach for rapid product screening of fast reactions in IR-laser-heated liquid microbeams in a vacuum is highlighted. From absorbed energies, a shock wave analysis, high-speed laser stroboscopy, and thermodynamic data of high-temperature water, the enthalpy, temperature, density, pressure, and the reaction time window for the hot water filament could be characterized. The experimental conditions (30 kbar, 1750 K, density approximately 1 g/cm3) present during the lifetime of the filament (20-30 ns) were extreme and provided a unique environment for high-temperature water chemistry. To probe the reaction products, liquid beam desorption mass spectrometry was employed. A decisive feature of the technique is that ionic species, as well as neutral products and intermediates, may be detected (neutrals as protonated aggregates) via time-of-flight mass spectrometry without any additional ionization laser. After the explosive disintegration of the superheated beam, high-temperature water reactions are efficiently quenched via expansion and evaporative cooling. For first exploratory experiments on chemistry in ultrahigh-temperature, -pressure and -density water, we chose resorcinol as a benchmark system, simple enough and well studied in high-temperature water environments much below 1000 K. Contrary to the oxidation reactions usually present under less extreme and dense supercritical conditions, we observed hydration and little H-atom abstraction during the narrow time window of the experiment. Small amounts of radicals, but no ionic intermediates other than simple proton adducts, were detected. The experimental findings are discussed in terms of the energetic and dense environment and the small time window for reaction, and they provide firm evidence for additional thermal reaction channels in extreme molecular environments.
Dokekias, A Elira; Ossini, L Ngolet; Tsiba, F O Atipo; Malanda, F; Koko, I; De Montalembert, M
2009-01-01
Homozygous sickle-cell disease (SCD) is responsible for acute complications, especially anaemic crises, and for special situations such as acute chest syndrome, stroke and acute priapism. Pregnancy in sickle-cell disease carries a high risk for the mother and the fetus. In these indications, blood transfusion is the main therapy, aiming to reduce anaemia in order to restore the haemoglobin level or to increase the proportion of normal Hb. This study aims to assess the short-term efficiency of red cell transfusion in the homozygous form of SCD. One hundred and twelve homozygous sickle-cell patients were enrolled in this prospective study: 59 females and 53 males, with a median age of 21.8 years (extremes: 2 and 45 years). These patients mostly have very low incomes. Two groups of patients were included in this study. In the first group, patients presented with acute anaemic crises caused by infectious disease (malaria, bacterial infections). In the second group (20 cases), SCD patients had particular situations: pregnancy (10 cases), stroke (six cases), cardiac failure (two cases) and priapism (two cases). Transfusion treatment in the first group was a simple regimen; transfusion of red cell concentrate increased the median Hb level by 2.9 g/dl (extremes: 1.1 and 4.7). In the second group of patients, 16 cases were transfused by manual partial exchange (1-3) and four patients received a simple transfusion regimen; the median Hb increase was 3.1 g/dl (extremes: 2.4-4.9 g/dl). The HbS percentage reduction after partial transfusion exchange was between -30 and -66.8% (median: -52.6%). Within the limits of our diagnostic possibilities (blood serologic testing), we found no contamination by HIV, HBV or HCV.
NASA Astrophysics Data System (ADS)
Crowcroft, Jon; Madhavapeddy, Anil; Schwarzkopf, Malte; Hong, Theodore; Mortier, Richard
Current opinion and debate surrounding the capabilities and use of the Cloud is particularly strident. By contrast, the academic community has long pursued completely decentralised approaches to service provision. In this paper we contrast these two extremes, and propose an architecture, Droplets, that enables a controlled trade-off between the costs and benefits of each. We also provide indications of implementation technologies and three simple sample applications that substantially benefit by exploiting these trade-offs.
2. West portal of Tunnel 3, oblique view to north-northwest, 135mm lens. Note the simple concrete portal face and wingwalls, characteristic of the later (1923-27) period of construction on the Natron Cutoff. Note also the extreme surface spalling of the concrete, evidence of the severe freeze-thaw cycle at this elevation. - Southern Pacific Railroad Natron Cutoff, Tunnel 3, Milepost 537.77, Odell Lake, Klamath County, OR