Estimation of resist sensitivity for extreme ultraviolet lithography using an electron beam
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oyama, Tomoko Gowa, E-mail: ohyama.tomoko@qst.go.jp; Oshima, Akihiro; Tagawa, Seiichi, E-mail: tagawa@sanken.osaka-u.ac.jp
2016-08-15
It is a challenge to obtain sufficient extreme ultraviolet (EUV) exposure time for fundamental research on developing a new class of high-sensitivity resists for extreme ultraviolet lithography (EUVL), because EUV exposure tools are few and very expensive. In this paper, we introduce an easy method for predicting EUV resist sensitivity by using conventional electron beam (EB) sources. If the chemical reactions induced by the two ionizing sources (EB and EUV) are the same, the required absorbed energies corresponding to each required exposure dose (sensitivity) for the EB and EUV would be almost equivalent. Based on this theory, we calculated the resist sensitivities for the EUV/soft X-ray region. The estimated sensitivities were found to be comparable to the experimentally obtained sensitivities. It was concluded that EB is a very useful exposure tool that accelerates the development of new resists and sensitivity enhancement processes for 13.5 nm EUVL and 6.x nm beyond-EUV lithography (BEUVL).
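The dose-equivalence idea can be sketched numerically. The following is a minimal sketch, not the authors' actual calculation: it assumes the required absorbed energy density in the resist is the same for EB and EUV exposure, and the EB stopping power and EUV linear absorption coefficient used below are hypothetical placeholder values.

```python
# Sketch: convert an EB sensitivity (uC/cm^2) to an equivalent EUV
# sensitivity (mJ/cm^2) assuming equal absorbed energy density.
E_EUV_PHOTON = 92.5    # eV per 13.5 nm EUV photon
E_CHARGE = 1.602e-19   # elementary charge, C (also J per eV)

def euv_dose_from_eb(eb_dose_uc_cm2, stopping_ev_per_nm, alpha_per_um):
    """eb_dose_uc_cm2: EB sensitivity in uC/cm^2;
    stopping_ev_per_nm: assumed energy deposited per electron per nm of resist;
    alpha_per_um: assumed EUV linear absorption coefficient in 1/um."""
    n_e = eb_dose_uc_cm2 * 1e-6 / E_CHARGE          # electrons per cm^2
    # absorbed energy density: eV per cm^2 per nm depth -> eV/cm^3
    e_abs = n_e * stopping_ev_per_nm * 1e7          # 1 cm = 1e7 nm
    alpha_cm = alpha_per_um * 1e4                   # 1/cm
    # thin-film EUV absorption: e_abs ~= photon_fluence * alpha * E_photon
    photons = e_abs / (alpha_cm * E_EUV_PHOTON)     # photons per cm^2
    return photons * E_EUV_PHOTON * E_CHARGE * 1e3  # mJ/cm^2
```

With an EB sensitivity of 30 uC/cm², a stopping power of 2 eV/nm and an absorption coefficient of 4 /µm (all illustrative), this yields an EUV sensitivity on the order of 15 mJ/cm², i.e., in the range typical of chemically amplified EUV resists.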
NASA Astrophysics Data System (ADS)
Dibike, Y. B.; Eum, H. I.; Prowse, T. D.
2017-12-01
Flows originating from alpine-dominated cold-region watersheds typically experience extended winter low flows followed by spring snowmelt and summer rainfall-driven high flows. In a warmer climate, there will be a temperature-induced shift in precipitation from snow towards rain, as well as changes in snowmelt timing, affecting the frequency of extreme high- and low-flow events, which could significantly alter ecosystem services. This study examines the potential changes in the frequency and severity of hydrologic extremes in the Athabasca River watershed in Alberta, Canada, based on the Variable Infiltration Capacity (VIC) hydrologic model and selected, statistically downscaled climate change scenario data from the latest Coupled Model Intercomparison Project (CMIP5). The sensitivity of these projected changes is also examined by applying different extreme flow analysis methods. The hydrological model projections show an overall increase in mean annual streamflow in the watershed and a corresponding shift of the freshet timing to an earlier period. Most of the streams are projected to experience increases during the winter and spring seasons and decreases during the summer and early fall seasons, with overall projected increases in extreme high flows, especially for low-frequency events. While the middle and lower parts of the watershed are characterised by projected increases in extreme high flows, the high-elevation alpine region is mainly characterised by corresponding decreases in extreme low flow events. However, the magnitude of projected changes in extreme flow varies over a wide range, especially for low-frequency events, depending on the climate scenario and period of analysis, and sometimes in a nonlinear way. Nonetheless, the sensitivity of the projected changes to the statistical method of analysis is found to be relatively small compared to the inter-model variability.
NASA Astrophysics Data System (ADS)
Fix, Miranda J.; Cooley, Daniel; Hodzic, Alma; Gilleland, Eric; Russell, Brook T.; Porter, William C.; Pfister, Gabriele G.
2018-03-01
We conduct a case study of observed and simulated maximum daily 8-h average (MDA8) ozone (O3) in three US cities for summers during 1996-2005. The purpose of this study is to evaluate the ability of a high-resolution atmospheric chemistry model to reproduce observed relationships between meteorology and high or extreme O3. We employ regional coupled chemistry-transport model simulations to make three types of comparisons between simulated and observational data, comparing (1) tails of the O3 response variable, (2) distributions of meteorological predictor variables, and (3) sensitivities of high and extreme O3 to meteorological predictors. This last comparison is made using two methods: quantile regression, for the 0.95 quantile of O3, and tail dependence optimization, which is used to investigate even higher O3 extremes. Across all three locations, we find substantial differences between simulations and observational data in both meteorology and meteorological sensitivities of high and extreme O3.
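The 0.95-quantile sensitivity comparison can be illustrated with a toy quantile regression. This is a crude grid-search sketch, not the study's estimator; only the pinball (check) loss it minimizes is standard, and the data in the usage example are synthetic.

```python
import numpy as np

def pinball(y, yhat, q=0.95):
    # pinball (check) loss: asymmetric penalty whose minimizer is the q-quantile
    r = y - yhat
    return np.mean(np.where(r >= 0, q * r, (q - 1) * r))

def quantile_slope(x, y, q=0.95):
    """Crude grid search for the q-quantile regression line y = a + b*x."""
    best = None
    for b in np.linspace(-5, 5, 201):
        resid = y - b * x
        a = np.quantile(resid, q)          # optimal intercept for fixed slope
        loss = pinball(y, a + b * x, q)
        if best is None or loss < best[0]:
            best = (loss, a, b)
    return best[1], best[2]                 # intercept, slope
```

On synthetic data with a true meteorological sensitivity (slope) of 2, the recovered 0.95-quantile slope lands near 2, which is the kind of obs-vs-model slope comparison the study makes with proper quantile regression.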
Cazelle, Elodie; Eskes, Chantra; Hermann, Martina; Jones, Penny; McNamee, Pauline; Prinsen, Menk; Taylor, Hannah; Wijnands, Marcel V W
2015-04-01
A.I.S.E. investigated the suitability of the regulatory adopted ICE in vitro test method (OECD TG 438), with or without histopathology, to identify detergent and cleaning formulations having extreme pH that require classification as EU CLP/UN GHS Category 1. To this aim, 18 extreme pH detergent and cleaning formulations were tested, covering both alkaline and acidic extreme pHs. The ICE standard test method following OECD Test Guideline 438 showed good concordance with in vivo classification (83%) and good and balanced specificity and sensitivity values (83%), which are in line with the performance of currently adopted in vitro test guidelines, confirming its suitability to identify Category 1 extreme pH detergent and cleaning products. In contrast to previous findings obtained with non-extreme pH formulations, the use of histopathology did not improve the sensitivity of the assay whilst it strongly decreased its specificity for the extreme pH formulations. Furthermore, use of non-testing prediction rules for classification showed poor concordance values (33% for the extreme pH rule and 61% for the EU CLP additivity approach) with high rates of over-prediction (100% for the extreme pH rule and 50% for the additivity approach), indicating that these non-testing prediction rules are not suitable to predict Category 1 hazards of extreme pH detergent and cleaning formulations. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Ultrasound examination for lower extremity deep vein thrombosis].
Toyota, Kosaku
2014-09-01
Surgery is known to be a major risk factor for venous thrombosis. Progression from lower extremity deep vein thrombosis (DVT) to pulmonary embolism can lead to a catastrophic outcome, although its incidence is low. The ability to rule in or rule out DVT is becoming essential for anesthesiologists. The non-invasive technique of ultrasonography is a sensitive and specific tool for the assessment of lower extremity DVT. This article introduces the basics and practical methods of ultrasound examination for lower extremity DVT.
Knapp, Alan K.; Avolio, Meghan L.; Beier, Claus; Carroll, Charles J.W.; Collins, Scott L.; Dukes, Jeffrey S.; Fraser, Lauchlan H.; Griffin-Nolan, Robert J.; Hoover, David L.; Jentsch, Anke; Loik, Michael E.; Phillips, Richard P.; Post, Alison K.; Sala, Osvaldo E.; Slette, Ingrid J.; Yahdjian, Laura; Smith, Melinda D.
2017-01-01
Intensification of the global hydrological cycle, ranging from larger individual precipitation events to more extreme multiyear droughts, has the potential to cause widespread alterations in ecosystem structure and function. With evidence that the incidence of extreme precipitation years (defined statistically from historical precipitation records) is increasing, there is a clear need to identify ecosystems that are most vulnerable to these changes and understand why some ecosystems are more sensitive to extremes than others. To date, opportunistic studies of naturally occurring extreme precipitation years, combined with results from a relatively small number of experiments, have provided limited mechanistic understanding of differences in ecosystem sensitivity, suggesting that new approaches are needed. Coordinated distributed experiments (CDEs) arrayed across multiple ecosystem types and focused on water can enhance our understanding of differential ecosystem sensitivity to precipitation extremes, but there are many design challenges to overcome (e.g., cost, comparability, standardization). Here, we evaluate contemporary experimental approaches for manipulating precipitation under field conditions to inform the design of ‘Drought-Net’, a relatively low-cost CDE that simulates extreme precipitation years. A common method for imposing both dry and wet years is to alter each ambient precipitation event. We endorse this approach for imposing extreme precipitation years because it simultaneously alters other precipitation characteristics (i.e., event size) consistent with natural precipitation patterns. However, we do not advocate applying identical treatment levels at all sites – a common approach to standardization in CDEs. This is because interannual precipitation variability differs more than fivefold globally, resulting in a wide range of ecosystem-specific thresholds for defining extreme precipitation years.
For CDEs focused on precipitation extremes, treatments should be based on each site's past climatic characteristics. This approach, though not often used by ecologists, allows ecological responses to be directly compared across disparate ecosystems and climates, facilitating process-level understanding of ecosystem sensitivity to precipitation extremes.
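The site-specific thresholds argued for here can be computed directly from each site's historical record. A minimal sketch follows; the percentile choices are illustrative, not Drought-Net's actual protocol.

```python
import numpy as np

def extreme_year_thresholds(annual_precip, dry_pct=5, wet_pct=95):
    """Site-specific thresholds: an 'extreme' year falls outside the
    historical dry_pct / wet_pct percentiles for that site's record."""
    return (np.percentile(annual_precip, dry_pct),
            np.percentile(annual_precip, wet_pct))
```

Because the thresholds come from each site's own record, a treatment defined as (say) "below the 5th percentile year" imposes a comparably extreme year everywhere, even though the absolute millimetre amounts differ several-fold across sites.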
NASA Astrophysics Data System (ADS)
Hijikata, Hayato; Kozawa, Takahiro; Tagawa, Seiichi; Takei, Satoshi
2009-06-01
A bottom extreme-ultraviolet-sensitive coating (BESC) for evaluation of the absorption coefficients of ultrathin films such as extreme ultraviolet (EUV) resists was developed. This coating consists of a polymer, crosslinker, acid generator, and acid-responsive chromic dye and is formed by a conventional spin-coating method. By heating the film after spin-coating, a crosslinking reaction is induced and the coating becomes insoluble. A typical resist solution can be spin-coated on a substrate covered with the coating film. The evaluation of the linear absorption coefficients of polymer films was demonstrated by measuring the EUV absorption of BESC substrates on which various polymers were spin-coated.
A study of the stress wave factor technique for nondestructive evaluation of composite materials
NASA Technical Reports Server (NTRS)
Sarrafzadeh-Khoee, A.; Kiernan, M. T.; Duke, J. C., Jr.; Henneke, E. G., II
1986-01-01
The acousto-ultrasonic method of nondestructive evaluation is an extremely sensitive means of assessing material response. Efforts continue to complete the understanding of this method. In order to achieve the full sensitivity of the technique, extreme care must be taken in its performance. This report provides an update of the efforts to advance the understanding of this method and to increase its application to the nondestructive evaluation of composite materials. Included are descriptions of a novel optical system that is capable of measuring in-plane and out-of-plane displacements, an IBM PC-based data acquisition system, an extensive data analysis software package, the azimuthal variation of acousto-ultrasonic behavior in graphite/epoxy laminates, and preliminary examination of processing variation in graphite-aluminum tubes.
Johnson, Mitchell E; Landers, James P
2004-11-01
Laser-induced fluorescence is an extremely sensitive method for detection in chemical separations. In addition, it is well-suited to detection in small volumes, and as such is widely used for capillary electrophoresis and microchip-based separations. This review explores the detailed instrumental conditions required for sub-zeptomole, sub-picomolar detection limits. The key to achieving the best sensitivity is to use an excitation and emission volume that is matched to the separation system and that, simultaneously, will keep scattering and luminescence background to a minimum. We discuss how this is accomplished with confocal detection, 90-degree on-capillary detection, and sheath-flow detection. It is shown that each of these methods has its advantages and disadvantages, but that all can be used to produce extremely sensitive detectors for capillary- or microchip-based separations. Analysis of these capabilities allows prediction of the optimal means of achieving ultrasensitive detection on microchips.
A general method for handling missing binary outcome data in randomized controlled trials
Jackson, Dan; White, Ian R; Mason, Dan; Sutton, Stephen
2014-01-01
Aims The analysis of randomized controlled trials with incomplete binary outcome data is challenging. We develop a general method for exploring the impact of missing data in such trials, with a focus on abstinence outcomes. Design We propose a sensitivity analysis where standard analyses, which could include ‘missing = smoking’ and ‘last observation carried forward’, are embedded in a wider class of models. Setting We apply our general method to data from two smoking cessation trials. Participants A total of 489 and 1758 participants from two smoking cessation trials. Measurements The abstinence outcomes were obtained using telephone interviews. Findings The estimated intervention effects from both trials depend on the sensitivity parameters used. The findings differ considerably in magnitude and statistical significance under quite extreme assumptions about the missing data, but are reasonably consistent under more moderate assumptions. Conclusions A new method for undertaking sensitivity analyses when handling missing data in trials with binary outcomes allows a wide range of assumptions about the missing data to be assessed. In two smoking cessation trials the results were insensitive to all but extreme assumptions. PMID:25171441
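The embedding of "missing = smoking" and more moderate analyses in a wider class can be sketched with a single imputation parameter: the assumed abstinence probability among participants with missing outcomes. This is an illustrative simplification of the authors' model, not their actual estimator; the counts in the test are toy values.

```python
def abstinence_rate(successes, observed_n, total_n, p_missing):
    """Estimated abstinence rate in one arm, where p_missing is the
    assumed abstinence probability among the missing participants.
    p_missing = 0 reproduces the 'missing = smoking' convention."""
    missing = total_n - observed_n
    return (successes + p_missing * missing) / total_n

def risk_difference(arm_t, arm_c, p_miss_t, p_miss_c):
    # each arm: (successes, observed_n, total_n); sweep the p_miss
    # sensitivity parameters to see how the effect estimate moves
    return (abstinence_rate(*arm_t, p_miss_t)
            - abstinence_rate(*arm_c, p_miss_c))
```

Sweeping `p_miss_t` and `p_miss_c` over [0, 1] and plotting the risk difference is the simplest version of the sensitivity analysis described here: results that barely move under the sweep are robust to the missing-data assumption.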
A sub-sampled approach to extremely low-dose STEM
Stevens, A.; Luzi, L.; Yang, H.; ...
2018-01-22
The inpainting of deliberately and randomly sub-sampled images offers a potential means to image specimens at a high resolution and under extremely low-dose conditions (≤1 e-/Å2) using a scanning transmission electron microscope. We show that deliberate sub-sampling acquires images at least an order of magnitude faster than conventional low-dose methods for an equivalent electron dose. More importantly, when adaptive sub-sampling is implemented to acquire the images, there is a significant increase in the resolution and sensitivity which accompanies the increase in imaging speed. Lastly, we demonstrate the potential of this method for beam sensitive materials and in-situ observations by experimentally imaging the node distribution in a metal-organic framework.
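The sub-sample-then-inpaint idea can be illustrated with a far simpler stand-in than the paper's reconstruction: fill the unmeasured pixels by iterated neighbour averaging (diffusion). This sketch only conveys the sampling-mask concept; it is not the reconstruction algorithm used in the work.

```python
import numpy as np

def inpaint(image, mask, iters=200):
    """Fill unmeasured pixels (mask == 0) by iterated 4-neighbour
    averaging, holding measured pixels (mask == 1) fixed."""
    out = image * mask                 # zero out the unmeasured pixels
    known = mask.astype(bool)
    for _ in range(iters):
        pad = np.pad(out, 1, mode="edge")
        avg = (pad[:-2, 1:-1] + pad[2:, 1:-1] +
               pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
        out[~known] = avg[~known]      # update only unmeasured pixels
    return out
```

With a 50% random mask, only half the pixels receive dose, so for a fixed dose budget the scan can cover twice the area (or run twice as fast); the reconstruction then recovers an estimate of the full image from the measured subset.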
Miao, Meng; Zhao, Gaosheng; Xu, Li; Dong, Junguo; Cheng, Ping
2018-03-01
A direct analytical method based on spray-inlet microwave plasma torch tandem mass spectrometry was applied to simultaneously determine 4 phthalate esters (PAEs), namely, benzyl butyl phthalate, diethyl phthalate, dipentyl phthalate, and dodecyl phthalate, with extremely high sensitivity in spirits without sample treatment. Among the 4 brands of spirit products, 3 kinds of PAE compounds were directly determined at very low concentrations from 1.30 to 114 ng·g-1. Compared with other online and off-line methods, the spray-inlet microwave plasma torch tandem mass spectrometry technique is extremely simple, rapid, sensitive, and highly efficient, providing an ideal screening tool for PAEs in spirits. Copyright © 2017 John Wiley & Sons, Ltd.
Ege, Tolga; Unlu, Aytekin; Tas, Huseyin; Bek, Dogan; Turkan, Selim; Cetinkaya, Aytac
2015-01-01
Background: The decision between limb salvage and amputation is generally aided by trauma scoring systems such as the mangled extremity severity score (MESS). However, the reliability of these injury scores in the setting of open fractures due to explosives and missiles is questionable. Mortality and morbidity of extremity trauma due to firearms are generally associated with time delay in revascularization, injury mechanism, anatomy of the injured site, associated injuries, age and environmental circumstances. The purpose of this retrospective study was to evaluate the extent of extremity injuries due to ballistic missiles and to assess the reliability of the MESS in both upper and lower extremities. Materials and Methods: Between 2004 and 2014, 139 Gustilo-Anderson Type III open fractures of the upper and lower extremities were enrolled in the study. Data on patient age, firearm type, transport time from the field to the hospital (and the method), injury severity scores, MESS scores, fracture types, amputation levels, bone fixation methods, and postoperative infections and complications were retrieved from the two Level-2 trauma centers' databases. Sensitivity, specificity, and positive and negative predictive values of the MESS were calculated to assess its ability to guide the amputation decision in the mangled limb. Results: Amputation was performed in 39 extremities and limb salvage was attempted in 100 extremities. The mean followup time was 14.6 months (range 6–32 months). In the amputated group, the mean MESS scores for the upper and lower extremity were 8.8 (range 6–11) and 9.24 (range 6–11), respectively. In the limb salvage group, the mean MESS scores for the upper and lower extremities were 5.29 (range 4–7) and 5.19 (range 3–8), respectively. Sensitivity of the MESS in upper and lower extremities was calculated as 80% and 79.4%, and positive predictive values as 55.55% and 83.3%, respectively. 
Specificity of the MESS for upper and lower extremities was 84% and 86.6%; negative predictive values were calculated as 95.45% and 90.2%, respectively. Conclusion: The MESS is not predictive in combat-related extremity injuries, especially for scores between 6 and 8. Limb ischemia and the presence or absence of shock can be used in initial decision-making for amputation. PMID:26806974
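The four screening metrics reported above follow directly from the 2×2 confusion matrix of predicted versus actual amputation. A minimal sketch; the counts in the usage example are toy values, not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 screening metrics, as used to evaluate the MESS:
    tp/fn are amputated limbs scored above/below the cutoff,
    tn/fp are salvaged limbs scored below/above it."""
    return {
        "sensitivity": tp / (tp + fn),   # amputations correctly flagged
        "specificity": tn / (tn + fp),   # salvages correctly cleared
        "ppv": tp / (tp + fp),           # flagged limbs truly needing amputation
        "npv": tn / (tn + fn),           # cleared limbs truly salvageable
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the amputation rate in the cohort, which is one reason the score's predictive values differ between the upper and lower extremity groups.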
A Non-Stationary Approach for Estimating Future Hydroclimatic Extremes Using Monte-Carlo Simulation
NASA Astrophysics Data System (ADS)
Byun, K.; Hamlet, A. F.
2017-12-01
There is substantial evidence that observed hydrologic extremes (e.g. floods, extreme stormwater events, and low flows) are changing and that climate change will continue to alter the probability distributions of hydrologic extremes over time. These non-stationary risks imply that conventional approaches for designing hydrologic infrastructure (or making other climate-sensitive decisions) based on retrospective analysis and stationary statistics will become increasingly problematic through time. To develop a framework for assessing risks in a non-stationary environment, our study develops a new approach using a super ensemble of simulated hydrologic extremes based on Monte Carlo (MC) methods. Specifically, using statistically downscaled future GCM projections from the CMIP5 archive (using the Hybrid Delta (HD) method), we extract daily precipitation (P) and temperature (T) at 1/16 degree resolution based on a group of moving 30-yr windows within a given design lifespan (e.g. 10, 25, 50-yr). Using these T and P scenarios we simulate daily streamflow using the Variable Infiltration Capacity (VIC) model for each year of the design lifespan and fit a Generalized Extreme Value (GEV) probability distribution to the simulated annual extremes. MC experiments are then used to construct a random series of 10,000 realizations of the design lifespan, estimating annual extremes using the estimated unique GEV parameters for each individual year of the design lifespan. Our preliminary results for two watersheds in the Midwest show that there are considerable differences in the extreme values for a given percentile between the conventional and the non-stationary MC approaches. Design standards based on our non-stationary approach are also directly dependent on the design lifespan of infrastructure, a sensitivity which is notably absent from conventional approaches based on retrospective analysis. The experimental approach can be applied to a wide range of hydroclimatic variables of interest.
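The Monte-Carlo construction can be sketched in simplified form. The study fits a GEV to each year of the design lifespan; the sketch below substitutes the Gumbel distribution (the GEV with zero shape parameter) so that inverse-CDF sampling stays dependency-free, and the year-by-year location/scale parameters are hypothetical.

```python
import math
import random

def gumbel_sample(mu, beta, rng):
    # inverse-CDF sampling: x = mu - beta * ln(-ln(U)), U ~ Uniform(0,1)
    return mu - beta * math.log(-math.log(rng.random()))

def lifespan_max_quantile(mus, betas, n_real=10000, q=0.99, seed=1):
    """Non-stationary design lifespan: year i has its own Gumbel
    parameters (mus[i], betas[i]). Returns the q-quantile of the
    lifespan maximum over n_real Monte-Carlo realizations."""
    rng = random.Random(seed)
    maxima = sorted(
        max(gumbel_sample(m, b, rng) for m, b in zip(mus, betas))
        for _ in range(n_real)
    )
    return maxima[int(q * n_real)]
```

Passing a sequence of `mus` that drifts upward over the lifespan (rather than one fixed value) is what makes the design value lifespan-dependent, the sensitivity the abstract notes is absent from stationary retrospective analysis.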
Improved Extreme Learning Machine based on the Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang
2018-03-01
The extreme learning machine (ELM) and its improved variants have weaknesses such as computational complexity and learning error. After in-depth analysis, and drawing on the importance of hidden nodes in SVMs, a novel method for analyzing hidden-node sensitivity is proposed that matches intuitive expectations. Based on this, an improved ELM is proposed: it can remove hidden nodes while still meeting the learning-error target, efficiently managing the number of hidden nodes and thereby improving performance. Comparative tests show that it performs better in learning time, accuracy, and other respects.
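For context, a minimal ELM in its basic form (random hidden layer, least-squares output weights) can be sketched as follows; the paper's node-removal sensitivity criterion is not reproduced here.

```python
import numpy as np

def elm_train(X, y, n_hidden, seed=0):
    """Basic ELM: random, untrained hidden layer; only the output
    weights are fitted, by least squares via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta = np.linalg.pinv(H) @ y                  # least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Because only `beta` is trained, fitting is a single linear solve; the cost and error both grow with the number of hidden nodes, which is exactly the quantity the sensitivity-based pruning in the paper aims to control.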
Absolute sensitivity calibration of an extreme ultraviolet spectrometer for tokamak measurements
NASA Astrophysics Data System (ADS)
Guirlet, R.; Schwob, J. L.; Meyer, O.; Vartanian, S.
2017-01-01
An extreme ultraviolet spectrometer installed on the Tore Supra tokamak has been calibrated in absolute units of brightness in the range 10-340 Å. This was performed by means of a combination of techniques. The range 10-113 Å was absolutely calibrated using an ultrasoft X-ray source emitting six spectral lines in this range. The calibration transfer to the range 113-182 Å was performed using the spectral line intensity branching ratio method. The range 182-340 Å was calibrated using radiative-collisional modelling of spectral line intensity ratios. The maximum sensitivity of the spectrometer was found to lie around 100 Å. Around this wavelength, the sensitivity is fairly flat over an 80 Å wide interval. The spatial variations of sensitivity along the detector assembly were also measured. The observed trend is related to the decrease in quantum efficiency as the angle of the incoming photon trajectories becomes more grazing.
Takeda, Mitsuo
2013-01-01
The paper reviews a technique for fringe analysis referred to as Fourier fringe analysis (FFA) or the Fourier transform method, with a particular focus on its application to metrology of extreme physical phenomena. Examples include the measurement of extremely small magnetic fields with subfluxon sensitivity by electron wave interferometry, subnanometer wavefront evaluation of projection optics for extreme UV lithography, the detection of sub-Ångstrom distortion of a crystal lattice, and the measurement of ultrashort optical pulses in the femtosecond to attosecond range, which show how the advantages of FFA are exploited in these cutting-edge applications.
Frouzan, Arash; Masoumi, Kambiz; Delirroyfard, Ali; Mazdaie, Behnaz; Bagherzadegan, Elnaz
2017-01-01
Background Long bone fractures are common injuries caused by trauma. Some studies have demonstrated that ultrasound has a high sensitivity and specificity in the diagnosis of upper and lower extremity long bone fractures. Objective The aim of this study was to determine the accuracy of ultrasound compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures in trauma patients. Methods This cross-sectional study assessed 100 patients admitted to the emergency department of Imam Khomeini Hospital, Ahvaz, Iran with trauma to the upper and lower extremities, from September 2014 through October 2015. In all patients, ultrasound was performed first, followed by standard plain radiography of the upper and lower limb. Data were analyzed with SPSS version 21 to determine specificity and sensitivity. Results The mean ages of patients with upper and lower limb trauma were 31.43±12.32 years and 29.63±5.89 years, respectively. Radius fractures were the most frequent upper extremity fractures (27%). Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of ultrasound compared with plain radiography in the diagnosis of upper extremity long bone fractures were 95.3%, 87.7%, 87.2% and 96.2%, respectively, and the highest accuracy was observed in left arm fractures (100%). Tibia and fibula fractures were the most frequent lower extremity fractures (89.2%). Sensitivity, specificity, PPV and NPV of ultrasound compared with plain radiography in the diagnosis of lower extremity long bone fractures were 98.6%, 83%, 65.4% and 87.1%, respectively, and the highest accuracy was observed in men, younger patients and femoral fractures. Conclusion The results of this study showed that ultrasound has high accuracy compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures. PMID:28979747
Lamar, William L.; Goerlitz, Donald F.; Law, LeRoy M.
1965-01-01
Pesticides, in minute quantities, may affect the regimen of streams, and because they may concentrate in sediments, aquatic organisms, and edible aquatic foods, their detection and their measurement in the parts-per-trillion range are considered essential. In 1964 the U.S. Geological Survey at Menlo Park, Calif., began research on methods for monitoring pesticides in water. Two systems were selected: electron-capture gas chromatography and microcoulometric-titration gas chromatography. Studies on these systems are now in progress. This report provides current information on the development and application of an electron-capture gas chromatographic procedure. This method is a convenient and extremely sensitive procedure for the detection and measurement of organic pesticides having high electron affinities, notably the chlorinated organic pesticides. The electron-affinity detector is extremely sensitive to these substances, but it is not as sensitive to many other compounds. By this method, a chlorinated organic pesticide may be determined in a sample of convenient size at concentrations as low as the parts-per-trillion range. To ensure greater accuracy in the identifications, the pesticides reported were separated and identified by their retention times on two different types of gas chromatographic columns.
Ege, Tolga; Unlu, Aytekin; Tas, Huseyin; Bek, Dogan; Turkan, Selim; Cetinkaya, Aytac
2015-01-01
The decision between limb salvage and amputation is generally aided by trauma scoring systems such as the Mangled Extremity Severity Score (MESS). However, the reliability of these injury scores in the setting of open fractures caused by explosives and missiles is uncertain. Mortality and morbidity in extremity trauma due to firearms are generally associated with delay in revascularization, injury mechanism, anatomy of the injured site, associated injuries, age, and environmental circumstances. The purpose of this retrospective study was to evaluate the extent of extremity injuries due to ballistic missiles and to assess the reliability of the MESS in both upper and lower extremities. Between 2004 and 2014, 139 Gustilo-Anderson Type III open fractures of the upper and lower extremities were enrolled in the study. Data on patient age, firearm type, transport time from the field to the hospital (and the method of transport), injury severity scores, MESS scores, fracture types, amputation levels, bone fixation methods, and postoperative infections and complications were retrieved from the databases of two level-2 trauma centers. Sensitivity, specificity, and positive and negative predictive values of the MESS were calculated to assess its ability to guide the amputation decision in the mangled limb. Amputation was performed in 39 extremities and limb salvage was attempted in 100. The mean follow-up time was 14.6 months (range 6-32 months). In the amputated group, the mean MESS scores for the upper and lower extremity were 8.8 (range 6-11) and 9.24 (range 6-11), respectively. In the limb-salvage group, the mean MESS scores for the upper and lower extremities were 5.29 (range 4-7) and 5.19 (range 3-8), respectively. Sensitivity of the MESS in the upper and lower extremities was 80% and 79.4%, and positive predictive values were 55.55% and 83.3%, respectively.
Specificity of the MESS for the upper and lower extremities was 84% and 86.6%; negative predictive values were 95.45% and 90.2%, respectively. The MESS is not predictive in combat-related extremity injuries, especially for scores between 6 and 8. Limb ischemia and the presence or absence of shock can be used in initial decision-making for amputation.
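The sensitivity, specificity, and predictive values quoted above all follow from the standard 2×2 confusion-matrix definitions. A minimal Python sketch with hypothetical counts (the abstract does not give the raw 2×2 tables, so the numbers below are illustrative only):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard 2x2 diagnostic-test metrics, returned as fractions."""
    return {
        "sensitivity": tp / (tp + fn),   # amputations correctly flagged
        "specificity": tn / (tn + fp),   # salvaged limbs correctly cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical counts for illustration only (not the study's data):
m = diagnostic_metrics(tp=27, fp=5, tn=80, fn=7)
print({k: round(v, 3) for k, v in m.items()})
```

In the study these metrics were computed separately for upper and lower extremities, with amputation as the positive outcome.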
Zhao, Xian-En; Yan, Ping; Wang, Renjun; Zhu, Shuyun; You, Jinmao; Bai, Yu; Liu, Huwei
2016-08-01
Quantitative analysis of cholesterol and its metabolic steroid hormones plays a vital role in diagnosing endocrine disorders and understanding disease progression, as well as in clinical medicine studies. Because of their extremely low abundance in body fluids, developing a sufficiently sensitive detection method remains challenging. A hyphenated technique of dual ultrasonic-assisted dispersive liquid-liquid microextraction (dual-UADLLME) coupled with microwave-assisted derivatization (MAD) was proposed for sample cleanup, enrichment and sensitivity enhancement. 4'-Carboxy-substituted rosamine (CSR) was synthesized and used as the derivatization reagent. An ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method was developed for the determination of cholesterol and its metabolic steroid hormones in multiple reaction monitoring mode. The parameters of dual-UADLLME, MAD and UHPLC-MS/MS were all optimized. Satisfactory linearity, recovery, repeatability, accuracy and precision, absence of matrix effects, and extremely low limits of detection (LODs, 0.08-0.15 pg mL(-1)) were achieved. Through the combination of dual-UADLLME and MAD, a method for determining cholesterol and its metabolic steroid hormones in human plasma, serum and urine samples was developed and validated with high sensitivity, selectivity and accuracy, and favorable matrix-effect results. Copyright © 2016 John Wiley & Sons, Ltd.
Input reconstruction of chaos sensors.
Yu, Dongchuan; Liu, Fang; Lai, Pik-Yin
2008-06-01
Although the sensitivity of sensors can be significantly enhanced using chaotic dynamics, owing to its extremely sensitive dependence on initial conditions and parameters, reconstructing the measured signal from the distorted sensor response becomes challenging. In this paper we suggest an effective method to reconstruct the measured signal from the distorted (chaotic) response of chaos sensors. This reconstruction method applies neural network techniques for system structure identification and therefore does not require precise information about the sensor's dynamics. We also discuss how to improve the robustness of the reconstruction. Some examples are presented to illustrate the suggested method.
Statistical evaluation of forecasts
NASA Astrophysics Data System (ADS)
Mader, Malenka; Mader, Wolfgang; Gluckman, Bruce J.; Timmer, Jens; Schelter, Björn
2014-08-01
Reliable forecasts of extreme but rare events, such as earthquakes, financial crashes, and epileptic seizures, would render interventions and precautions possible. Therefore, forecasting methods have been developed which intend to raise an alarm if an extreme event is about to occur. In order to statistically validate the performance of a prediction system, it must be compared to the performance of a random predictor, which raises alarms independent of the events. Such a random predictor can be obtained by bootstrapping or analytically. We propose an analytic statistical framework which, in contrast to conventional methods, allows for validating independently the sensitivity and specificity of a forecasting method. Moreover, our method accounts for the periods during which an event has to remain absent or occur after a respective forecast.
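The comparison against a random predictor can be sketched analytically: a predictor that raises an alarm independently in each time bin with probability q "hits" an event whenever at least one alarm falls in the w bins preceding it. A minimal Python sketch under these simplifying assumptions (this is a toy version of the idea, not the authors' full framework, which additionally accounts for forced event absence or occurrence after a forecast):

```python
import math

def random_predictor_sensitivity(q, w):
    """Chance that a predictor raising alarms independently with per-bin
    probability q places at least one alarm in the w bins before an event."""
    return 1.0 - (1.0 - q) ** w

def p_value_sensitivity(hits, n_events, q, w):
    """One-sided binomial p-value: probability that the random predictor
    scores at least `hits` correct forecasts out of n_events events."""
    p = random_predictor_sensitivity(q, w)
    return sum(math.comb(n_events, k) * p**k * (1.0 - p) ** (n_events - k)
               for k in range(hits, n_events + 1))

# 8 of 10 events forecast, versus a random predictor with q=0.05, w=10:
print(round(p_value_sensitivity(8, 10, 0.05, 10), 4))
```

If the p-value is small, the forecasting method's sensitivity is significantly better than chance; an analogous calculation applies to specificity.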
Tellurium: A new sensitive test
Lakin, H.W.; Thompson, C.E.
1963-01-01
A new, extremely sensitive method for the quantitative determination of tellurium is based on the induced precipitation of elemental gold from a 6N HCl solution containing gold chloride, cupric chloride, and hypophosphorous acid; the amount of gold reduced is proportional to the amount of tellurium present. As little as 1 nanogram (1 × 10⁻⁹ g) of tellurium gives a measurable reaction with 1 mg of gold in 50 ml of solution.
Groundwater sensitivity mapping in Kentucky using GIS and digitally vectorized geologic quadrangles
NASA Astrophysics Data System (ADS)
Croskrey, Andrea; Groves, Chris
2008-05-01
Groundwater sensitivity (Ray and O'dell in Environ Geol 22:345-352, 1993a) refers to the inherent ease with which groundwater can be contaminated based on hydrogeologic characteristics. We have developed digital methods for identifying areas of varying groundwater sensitivity for a ten-county area of south-central Kentucky at a scale of 1:100,000. The study area includes extensive limestone karst sinkhole plains, with groundwater extremely sensitive to contamination. Digitally vectorized geologic quadrangles (DVGQs) were combined with elevation data to identify both hydrogeologic groundwater sensitivity regions and zones of "high-risk runoff" where contaminants could be transported in runoff from less sensitive to more sensitive (particularly karst) areas. While future work will fine-tune these maps with additional layers of data (soils, for example) as digital data become available, using DVGQs allows a relatively rapid assessment of groundwater sensitivity for Kentucky at a more useful scale than previously available assessment methods, such as DRASTIC and DIVERSITY.
High-Resolution Seismometer Insensitive to Extremely Strong Magnetic Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramovich, Igor A.
A highly sensitive broadband seismic sensor has been successfully developed for use in the beam-focusing systems of particle accelerators. The sensor is completely insensitive to the extremely strong magnetic fields and hard-radiation conditions that exist at its installation site. A unique remote sensor calibration method has been invented and implemented. Several such sensors were sold to LAPP (LAPP-IN2P3/CNRS-Université de Savoie; Laboratoire d'Annecy-le-Vieux de Physique des Particules).
NASA Astrophysics Data System (ADS)
Leta, O. T.; El-Kadi, A. I.; Dulaiova, H.
2016-12-01
Extreme events, such as flooding and drought, are expected to occur at increased frequencies worldwide as climate change influences the water cycle. This is particularly critical for tropical islands, where local freshwater resources are very sensitive to climate. This study examined the impact of climate change on extreme streamflow, reservoir water volume, and outflow for the Nuuanu watershed, using the Soil and Water Assessment Tool (SWAT) model. Based on the sensitive parameters screened by the Latin Hypercube-One-factor-At-a-Time (LH-OAT) method, SWAT was calibrated and validated against daily streamflow using the SWAT Calibration and Uncertainty Program (SWAT-CUP) at three streamflow gauging stations. Results showed that SWAT adequately reproduced the observed daily streamflow hydrographs at all stations. This was verified with Nash-Sutcliffe Efficiency values in the acceptable range of 0.58 to 0.88, with more than 90% of observations bracketed within the 95% model-prediction uncertainty interval for both calibration and validation periods, signifying the potential applicability of SWAT for future prediction. The climate change impact on extreme flows, reservoir water volume, and outflow was assessed under the Representative Concentration Pathway 4.5 and 8.5 scenarios. We found wide changes in extreme peak and low flows, ranging from -44% to 20% and -50% to -2%, respectively, compared to baseline. Consequently, the amount of water stored in the Nuuanu reservoir is projected to decrease by up to 27%, while the corresponding outflow rates are expected to decrease by up to 37% relative to the baseline. In addition, the stored water and extreme flows are highly sensitive to rainfall change compared to temperature and solar-radiation changes. It is concluded that the changes in extreme low and peak flows can have serious consequences, such as flooding and drought, with detrimental effects on riparian ecological functioning.
This study's results are expected to aid in reservoir operation as well as in identifying appropriate climate change adaptation strategies.
NASA Astrophysics Data System (ADS)
Ishii, Katsuhiro; Nakamura, Sohichiro; Sato, Yuki
2014-08-01
High-sensitivity low-coherence dynamic light scattering (DLS) is applied to the measurement of the particle size distribution of pigments suspended in an ink. This method can be applied to extremely dense and turbid media without dilution. We show the temporal variation of the particle size distribution of thixotropic and sedimentary pigments due to aggregation, agglomeration, and sedimentation. Moreover, we demonstrate the influence of ink dilution on the particle size distribution.
The influence of weather on health-related help-seeking behavior of senior citizens in Hong Kong.
Wong, Ho Ting; Chiu, Marcus Yu Lung; Wu, Cynthia Sau Ting; Lee, Tsz Cheung
2015-03-01
It is believed that extreme hot and cold weather has a negative impact on general health conditions. Much research focuses on mortality, but there is relatively little community health research. This study is aimed at identifying high-risk groups who are sensitive to extreme weather conditions, in particular, very hot and cold days, through an analysis of the health-related help-seeking patterns of over 60,000 Personal Emergency Link (PE-link) users in Hong Kong relative to weather conditions. In the study, 1,659,716 PE-link calls to the help center were analyzed. Results showed that females, older elderly, people who did not live alone, non-subsidized (relatively high-income) users, and those without medical histories of heart disease, hypertension, stroke, and diabetes were more sensitive to extreme weather conditions. The results suggest that using official government weather forecast reports to predict health-related help-seeking behavior is feasible. An evidence-based strategic plan could be formulated by using a method similar to that used in this study to identify high-risk groups. Preventive measures could be established for protecting the target groups when extreme weather conditions are forecasted.
Uncertainty in determining extreme precipitation thresholds
NASA Astrophysics Data System (ADS)
Liu, Bingjun; Chen, Junfan; Chen, Xiaohong; Lian, Yanqing; Wu, Lili
2013-10-01
Extreme precipitation events are rare and occur mostly at a relatively small, local scale, which makes it difficult to set thresholds for extreme precipitation in a large basin. Based on long-term daily precipitation data from 62 observation stations in the Pearl River Basin, this study has assessed the applicability of the non-parametric, parametric, and detrended fluctuation analysis (DFA) methods in determining extreme precipitation thresholds (EPTs), and the certainty of the EPTs from each method. Analyses from this study show that the non-parametric absolute-critical-value method is easy to use but unable to reflect differences in spatial rainfall distribution. The non-parametric percentile method can account for the spatial distribution of precipitation, but its threshold value is sensitive to the size of the rainfall data series and subject to the selection of a percentile, making it difficult to determine reasonable threshold values for a large basin. The parametric method can provide the most apt description of extreme precipitation by fitting extreme precipitation distributions with probability distribution functions; however, the selection of probability distribution functions, the goodness-of-fit tests, and the size of the rainfall data series can greatly affect the fitting accuracy. In contrast to the non-parametric and parametric methods, which are unable to provide EPTs with certainty, the DFA method, although computationally involved, has proven to be the most appropriate method, able to provide a unique set of EPTs for a large basin with uneven spatio-temporal precipitation distribution.
The consistency between the spatial distribution of DFA-based thresholds with the annual average precipitation, the coefficient of variation (CV), and the coefficient of skewness (CS) for the daily precipitation further proves that EPTs determined by the DFA method are more reasonable and applicable for the Pearl River Basin.
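The percentile method's sensitivity to sample size and percentile choice, noted above, is easy to see in code. A minimal per-station sketch (the 1 mm wet-day cutoff and 95th percentile are common illustrative choices, not values taken from the study):

```python
import numpy as np

def percentile_threshold(daily_precip_mm, wet_day_min=1.0, pct=95):
    """Extreme-precipitation threshold as the pct-th percentile of wet days
    (days with precip >= wet_day_min mm); both parameters are exactly the
    subjective choices the text warns about."""
    wet = daily_precip_mm[daily_precip_mm >= wet_day_min]
    return float(np.percentile(wet, pct))

# Applied station by station, each gauge gets its own threshold:
rng = np.random.default_rng(0)
station_record = rng.gamma(shape=0.4, scale=12.0, size=3650)  # synthetic 10-yr record
print(round(percentile_threshold(station_record), 1))
```

Varying `pct` or the record length shifts the threshold, which is why a single percentile rarely gives consistent thresholds across a large, heterogeneous basin.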
NASA Astrophysics Data System (ADS)
Sun, Xiao-Yan; Chu, Dong-Kai; Dong, Xin-Ran; Zhou, Chu; Li, Hai-Tao; Luo-Zhi; Hu, You-Wang; Zhou, Jian-Ying; Cong-Wang; Duan, Ji-An
2016-03-01
A highly sensitive refractive index (RI) sensor based on a Mach-Zehnder interferometer (MZI) in a conventional single-mode optical fiber is proposed, fabricated by a femtosecond-laser transversal-scanning inscription method and chemical etching. A rectangular cavity structure is formed in part of the fiber core and at the core-cladding interface. The MZI sensor shows excellent refractive index sensitivity and linearity, exhibiting an extremely high RI sensitivity of -17197 nm/RIU (refractive index unit) with a linearity of 0.9996 within the refractive index range of 1.3371-1.3407. The experimental results are consistent with theoretical analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qu, Xuanlu M.; Louie, Alexander V.; Ashman, Jonathan
Purpose: Surgery combined with radiation therapy (RT) is the cornerstone of multidisciplinary management of extremity soft tissue sarcoma (STS). Although RT can be given in either the preoperative or the postoperative setting with similar local recurrence and survival outcomes, the side effect profiles, costs, and long-term functional outcomes differ. The aim of this study was to use decision analysis to determine the optimal sequencing of RT with surgery in patients with extremity STS. Methods and Materials: A cost-effectiveness analysis was conducted using a state-transition Markov model, with quality-adjusted life years (QALYs) as the primary outcome. A time horizon of 5 years, a cycle length of 3 months, and a willingness-to-pay threshold of $50,000/QALY were used. One-way deterministic sensitivity analyses were performed to determine the thresholds at which each strategy would be preferred. The robustness of the model was assessed by probabilistic sensitivity analysis. Results: Preoperative RT is a more cost-effective strategy ($26,633/3.00 QALYs) than postoperative RT ($28,028/2.86 QALYs) in our base case scenario. Preoperative RT is the superior strategy with either 3-dimensional conformal RT or intensity-modulated RT. One-way sensitivity analyses identified the relative risk of chronic adverse events as having the greatest influence on the preferred timing of RT. The likelihood of preoperative RT being the preferred strategy was 82% on probabilistic sensitivity analysis. Conclusions: Preoperative RT is more cost effective than postoperative RT in the management of resectable extremity STS, primarily because of the higher incidence of chronic adverse events with RT in the postoperative setting.
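The comparison reported above is an instance of a standard incremental cost-effectiveness calculation. A minimal sketch using the base-case numbers from the abstract (the helper function is illustrative, not the study's Markov model):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost-effectiveness ratio, in $ per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Base case: preoperative RT $26,633 / 3.00 QALYs vs.
# postoperative RT $28,028 / 2.86 QALYs.
delta_cost = 26633 - 28028   # negative: preoperative RT costs less
delta_qaly = 3.00 - 2.86     # positive: preoperative RT adds QALYs

# Cheaper AND more effective, so preoperative RT "dominates": no comparison
# against the $50,000/QALY willingness-to-pay threshold is even needed.
dominant = delta_cost < 0 and delta_qaly > 0
print(dominant)  # True
```

When neither strategy dominates, the ICER is compared against the willingness-to-pay threshold to pick the preferred one.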
An improvement of LLNA:DA to assess the skin sensitization potential of chemicals.
Zhang, Hongwei; Shi, Ying; Wang, Chao; Zhao, Kangfeng; Zhang, Shaoping; Wei, Lan; Dong, Li; Gu, Wen; Xu, Yongjun; Ruan, Hongjie; Zhi, Hong; Yang, Xiaoyan
2017-01-01
We developed a modified local lymph node assay based on ATP (LLNA:DA), termed the Two-Stage LLNA:DA, to further reduce animal numbers in the identification of sensitizers. For the Two-Stage LLNA:DA procedure, 13 chemicals ranging from non-sensitizers to extreme sensitizers were selected. The first stage used a reduced LLNA:DA (rLLNA:DA) to screen out sensitizing chemicals. The second stage used LLNA:DA based on OECD 442(A) to classify the potential sensitizers screened out in the first stage. In the first stage, the stimulation indices (SIs) of methyl methacrylate, salicylic acid, methyl salicylate, ethyl salicylate, isopropanol and propanediol were below 1.8, so these did not need to be tested in the second stage; the others continued to LLNA:DA testing. In the second stage, sodium lauryl sulphate and xylene were classified as weak sensitizers; α-hexyl cinnamic aldehyde and eugenol were moderate sensitizers; benzalkonium chloride and glyoxal were strong sensitizers; and phthalic anhydride was an extreme sensitizer. The 9/9, 11/12, 10/11, and 8/13 (positive or negative only) classifications of the Two-Stage LLNA:DA were consistent with those from the other methods (LLNA, LLNA:DA, GPMT/BT and HMT/HPTA), suggesting that the Two-Stage LLNA:DA has a high rate of agreement with reported data. In conclusion, the Two-Stage LLNA:DA is in line with the "3R" rules and can serve as a modification of LLNA:DA, but needs more study.
Atomic-resolution transmission electron microscopy of electron beam–sensitive crystalline materials
NASA Astrophysics Data System (ADS)
Zhang, Daliang; Zhu, Yihan; Liu, Lingmei; Ying, Xiangrong; Hsiung, Chia-En; Sougrat, Rachid; Li, Kun; Han, Yu
2018-02-01
High-resolution imaging of electron beam–sensitive materials is one of the most difficult applications of transmission electron microscopy (TEM). The challenges are manifold, including the acquisition of images with extremely low beam doses, the time-constrained search for crystal zone axes, the precise image alignment, and the accurate determination of the defocus value. We develop a suite of methods to fulfill these requirements and acquire atomic-resolution TEM images of several metal organic frameworks that are generally recognized as highly sensitive to electron beams. The high image resolution allows us to identify individual metal atomic columns, various types of surface termination, and benzene rings in the organic linkers. We also apply our methods to other electron beam–sensitive materials, including the organic-inorganic hybrid perovskite CH3NH3PbBr3.
Stability and delay sensitivity of neutral fractional-delay systems.
Xu, Qi; Shi, Min; Wang, Zaihua
2016-08-01
This paper generalizes the stability test method via integral estimation for integer-order neutral time-delay systems to neutral fractional-delay systems. The key step in the stability test is the calculation of the number of unstable characteristic roots, described by a definite integral over an interval from zero to a sufficiently large upper limit. Algorithms for correctly estimating the upper limit of the integral are given in two concise ways, parameter dependent or parameter independent. A special feature of the proposed method is that it judges the stability of fractional-delay systems simply by using rough integral estimation. Meanwhile, the paper shows that for some neutral fractional-delay systems, the stability is extremely sensitive to changes in the time delays. Examples are given demonstrating the proposed method as well as the delay sensitivity.
The use of copula functions for predictive analysis of correlations between extreme storm tides
NASA Astrophysics Data System (ADS)
Domino, Krzysztof; Błachowicz, Tomasz; Ciupak, Maurycy
2014-11-01
In this paper we present a method for the quantitative description of weakly predictable extreme hydrological events at an inland sea. Correlations between variations at individual measuring points were investigated using combined statistical methods. As the main tool for this analysis we used a two-dimensional copula function sensitive to correlated extreme effects. Additionally, a newly proposed methodology based on Detrended Fluctuation Analysis (DFA) and Anomalous Diffusion (AD) was used for the prediction of negative and positive auto-correlations and the associated optimum choice of copula functions. As a practical example we analysed maximum storm-tide data recorded at five spatially separated locations on the Baltic Sea. For the analysis we used the Gumbel, Clayton, and Frank copula functions and introduced the reversed Clayton copula. The application of our research model lies in modelling the risk of high storm tides and possible storm flooding.
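To illustrate the copula machinery, here is a minimal sketch of the standard bivariate Clayton copula together with a moment-style parameter estimate from Kendall's tau (the reversed Clayton copula introduced in the paper is its survival version and is not reproduced here; the tau value below is illustrative):

```python
def clayton_cdf(u, v, theta):
    """Bivariate Clayton copula C(u, v); for theta > 0 it has lower-tail
    dependence, which is what suits it to jointly occurring extremes."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_theta_from_tau(tau):
    """For the Clayton family, Kendall's tau = theta / (theta + 2),
    so theta = 2 * tau / (1 - tau)."""
    return 2.0 * tau / (1.0 - tau)

# e.g. an empirical Kendall's tau of 0.5 between two tide gauges:
theta = clayton_theta_from_tau(0.5)   # -> 2.0
print(round(clayton_cdf(0.9, 0.9, theta), 4))
```

In practice the margins are first transformed to uniforms (e.g. via empirical CDFs) before the copula links them.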
Murata, Fernando Henrique Antunes; Ferreira, Marina Neves; Pereira-Chioccola, Vera Lucia; Spegiorin, Lígia Cosentino Junqueira Franco; Meira-Strejevitch, Cristina da Silva; Gava, Ricardo; Silveira-Carvalho, Aparecida Perpétuo; de Mattos, Luiz Carlos; Brandão de Mattos, Cinara Cássia
2017-09-01
Toxoplasmosis during pregnancy can have severe consequences. The use of sensitive and specific serological and molecular methods is extremely important for the correct diagnosis of the disease. We compared the ELISA and ELFA serological methods and the conventional PCR (cPCR), nested PCR and quantitative PCR (qPCR) molecular methods in the diagnosis of Toxoplasma gondii infection in pregnant women without clinical suspicion of toxoplasmosis (G1 = 94) and with clinical suspicion of toxoplasmosis (G2 = 53). The results were compared using the Kappa index, and the sensitivity, specificity, positive predictive value and negative predictive value were calculated. The serological methods showed concordance between ELISA and ELFA, even though ELFA identified more positive cases than ELISA. The molecular methods were discrepant, with cPCR using B22/23 primers having greater sensitivity and lower specificity than the other molecular methods. Copyright © 2017 Elsevier Inc. All rights reserved.
Ståhlberg, Anders; Krzyzanowski, Paul M; Jackson, Jennifer B; Egyud, Matthew; Stein, Lincoln; Godfrey, Tony E
2016-06-20
Detection of cell-free DNA in liquid biopsies offers great potential for use in non-invasive prenatal testing and as a cancer biomarker. Fetal and tumor DNA fractions however can be extremely low in these samples and ultra-sensitive methods are required for their detection. Here, we report an extremely simple and fast method for introduction of barcodes into DNA libraries made from 5 ng of DNA. Barcoded adapter primers are designed with an oligonucleotide hairpin structure to protect the molecular barcodes during the first rounds of polymerase chain reaction (PCR) and prevent them from participating in mis-priming events. Our approach enables high-level multiplexing and next-generation sequencing library construction with flexible library content. We show that uniform libraries of 1-, 5-, 13- and 31-plex can be generated. Utilizing the barcodes to generate consensus reads for each original DNA molecule reduces background sequencing noise and allows detection of variant alleles below 0.1% frequency in clonal cell line DNA and in cell-free plasma DNA. Thus, our approach bridges the gap between the highly sensitive but specific capabilities of digital PCR, which only allows a limited number of variants to be analyzed, with the broad target capability of next-generation sequencing which traditionally lacks the sensitivity to detect rare variants. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
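The error-suppression idea, collapsing all reads that share a molecular barcode into one consensus sequence for the original DNA molecule, can be sketched in a few lines of Python (a toy majority-vote model assuming equal-length reads; the published pipeline's grouping and filtering are more involved):

```python
from collections import Counter, defaultdict

def consensus_reads(reads):
    """Collapse (barcode, sequence) pairs into one consensus sequence per
    original molecule: group reads by barcode, then majority-vote each base
    position, so isolated PCR/sequencing errors are voted away."""
    groups = defaultdict(list)
    for barcode, seq in reads:
        groups[barcode].append(seq)
    consensus = {}
    for barcode, seqs in groups.items():
        consensus[barcode] = "".join(
            Counter(col).most_common(1)[0][0] for col in zip(*seqs))
    return consensus

reads = [("AAT", "ACGT"), ("AAT", "ACGA"), ("AAT", "ACGT"),  # one PCR error
         ("GCC", "TTGA")]
print(consensus_reads(reads))  # {'AAT': 'ACGT', 'GCC': 'TTGA'}
```

Because a true rare variant appears in every read from its molecule while random errors do not, consensus building is what pushes the detection limit below 0.1% allele frequency.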
Cervantes, Barbara; Kirschke, Jan S; Klupp, Elizabeth; Kooijman, Hendrik; Börnert, Peter; Haase, Axel; Rummeny, Ernst J; Karampinos, Dimitrios C
2018-01-01
To design a preparation module for vessel signal suppression in MR neurography of the extremities, which causes minimal attenuation of nerve signal and is highly insensitive to eddy currents and motion. The orthogonally combined motion- and diffusion-sensitized driven equilibrium (OC-MDSDE) preparation was proposed, based on the improved motion- and diffusion-sensitized driven equilibrium methods (iMSDE and FC-DSDE, respectively), with specific gradient design and orientation. OC-MDSDE was desensitized against eddy currents using appropriately designed gradient prepulses. The motion sensitivity and vessel signal suppression capability of OC-MDSDE and its components were assessed in vivo in the knee using 3D turbo spin echo (TSE). Nerve-to-vessel signal ratios were measured for iMSDE and OC-MDSDE in 7 subjects. iMSDE was shown to be highly sensitive to motion with increasing flow sensitization. FC-DSDE showed robustness against motion, but resulted in strong nerve signal loss with diffusion gradients oriented parallel to the nerve. OC-MDSDE showed superior vessel suppression compared to iMSDE and FC-DSDE and maintained high nerve signal. Mean nerve-to-vessel signal ratios in 7 subjects were 0.40 ± 0.17 for iMSDE and 0.63 ± 0.37 for OC-MDSDE. OC-MDSDE combined with 3D TSE in the extremities allows high-near-isotropic-resolution imaging of peripheral nerves with reduced vessel contamination and high nerve signal. Magn Reson Med 79:407-415, 2018. © 2017 Wiley Periodicals, Inc. © 2017 International Society for Magnetic Resonance in Medicine.
Fast Coherent Differential Imaging for Exoplanet Imaging
NASA Astrophysics Data System (ADS)
Gerard, Benjamin; Marois, Christian; Galicher, Raphael; Veran, Jean-Pierre; Macintosh, B.; Guyon, O.; Lozi, J.; Pathak, P.; Sahoo, A.
2018-06-01
Direct detection and detailed characterization of exoplanets using extreme adaptive optics (ExAO) is a key science goal of future extremely large telescopes and space observatories. However, quasi-static wavefront errors will limit the sensitivity of this endeavor. Additional limitations for ground-based telescopes arise from residual AO-corrected atmospheric wavefront errors, which generate short-lived aberrations that average into a halo over a long exposure, also limiting the sensitivity of exoplanet detection. We develop the framework for a solution to both of these problems using the self-coherent camera (SCC), to be applied to ground-based telescopes, called the Fast Atmospheric SCC Technique (FAST). Simulations show that for typical ExAO targets the FAST approach can reach a raw contrast ~100 times better than what is currently achieved with ExAO instruments when extrapolated to an hour of observing time, illustrating that the sensitivity improvement from this method could play an essential role in the future ground-based detection and characterization of lower-mass, colder exoplanets.
Study of complex molecular systems by probe vibrational spectroscopy method
NASA Astrophysics Data System (ADS)
Boldeskul, A. E.; Zatsepin, V. M.; Atakhodjaev, A. K.; Shermatov, A. N.; Ashburiev, R.
1984-03-01
An experimental Raman-spectroscopy study of benzonitrile as a probe in aqueous solutions of sodium lauryl sulphate (SDS) showed the integral moments of the ν(C≡N) line to be extremely sensitive to structural transitions in micellar systems. The central part of the experimental contour was used to determine the integral moments with the help of a line-shape approximant obtained by the Mori method.
USDA-ARS?s Scientific Manuscript database
Rangeland environments are particularly susceptible to erosion due to extreme rainfall events and low vegetation cover. Landowners and managers need access to reliable erosion evaluation methods in order to protect productivity and hydrologic integrity of their rangelands and make resource allocati...
Lu, Huijuan; Wei, Shasha; Zhou, Zili; Miao, Yanzi; Lu, Yi
2015-01-01
The main purpose of traditional classification algorithms in bioinformatics applications is to achieve better classification accuracy. However, these algorithms cannot meet the requirement of minimising the average misclassification cost. In this paper, a new cost-sensitive regularised extreme learning machine (CS-RELM) algorithm is proposed, using probability estimation and misclassification costs to reconstruct the classification results. By improving the classification accuracy on small-sample groups with higher misclassification cost, the new CS-RELM can minimise the overall classification cost. A "rejection cost" was integrated into the CS-RELM algorithm to further reduce the average misclassification cost. Using the Colon Tumour and SRBCT (Small Round Blue Cell Tumour) datasets, CS-RELM was compared with other cost-sensitive algorithms such as the extreme learning machine (ELM), cost-sensitive ELM, regularised ELM, and the cost-sensitive support vector machine (SVM). The experimental results show that CS-RELM with an embedded rejection cost reduced the average misclassification cost and made more credible classification decisions than the other methods.
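The decision rule behind cost-sensitive classification with a rejection option can be sketched independently of the ELM training itself. A minimal Python sketch (the cost matrix and probabilities below are illustrative; this is the generic expected-cost rule, not the CS-RELM algorithm):

```python
import numpy as np

def cost_sensitive_decide(probs, cost_matrix, reject_cost=None):
    """Pick, per sample, the class with minimum expected misclassification
    cost; if reject_cost is given and every class costs more, abstain (-1).
    probs: (n_samples, n_classes) class-probability estimates.
    cost_matrix[i, j]: cost of predicting class j when the truth is class i."""
    probs = np.asarray(probs, dtype=float)
    exp_cost = probs @ np.asarray(cost_matrix, dtype=float)
    labels = exp_cost.argmin(axis=1)
    if reject_cost is not None:
        labels = np.where(exp_cost.min(axis=1) > reject_cost, -1, labels)
    return labels

# Misclassifying the rare tumour class (class 1) costs 5x more:
C = [[0, 1], [5, 0]]
p = [[0.95, 0.05], [0.6, 0.4], [0.5, 0.5]]
print(cost_sensitive_decide(p, C, reject_cost=0.55))  # class 0, abstain, class 1
```

Asymmetric costs shift the decision boundary toward the expensive class, and the rejection option hands genuinely ambiguous samples to a human instead of guessing.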
Ability of Ultrasonography in Detection of Different Extremity Bone Fractures; a Case Series Study
Bozorgi, Farzad; Shayesteh Azar, Massoud; Montazer, Seyed Hossein; Chabra, Aroona; Heidari, Seyed Farshad; Khalilian, Alireza
2017-01-01
Introduction: Although radiography is the gold standard in the evaluation of orthopedic injuries, bedside ultrasonography has several potential advantages, such as avoiding exposure to ionizing radiation, availability in pre-hospital settings, and being widely accessible at the bedside. The aim of the present study is to evaluate the diagnostic accuracy of ultrasonography in the detection of extremity bone fractures. Methods: This case series study was prospectively conducted on multiple-blunt-trauma patients who were 18 years old or older, were hemodynamically stable, had a Glasgow coma scale of 15, and had signs or symptoms of a possible extremity bone fracture. After initial assessment, ultrasonography of the suspected bones was performed by a trained emergency medicine resident, and the prevalence of true positive and false negative findings was calculated against plain radiography. Results: 108 patients with a mean age of 44.6 ± 20.4 years were studied (67.6% male). Analysis was done on 158 fracture sites confirmed with plain radiography. 91 (57.6%) cases were suspected upper-extremity fractures and 67 (42.4%) lower-extremity ones. The most frequent sites of injury were the forearm (36.7%) in the upper limbs and the leg (27.8%) in the lower limbs. True positive and false negative ultrasonography findings were 59 (64.8%) and 32 (35.2%) for the upper and 49 (73.1%) and 18 (26.9%) for the lower extremities, respectively. In addition, true positive and false negative findings for intra-articular fractures were 24 (48%) and 26 (52%), respectively. Conclusion: The present study shows moderate sensitivity (68.3%) of ultrasonography in the detection of extremity bone fractures. Ultrasonography showed the best sensitivity in the detection of femur (100%) and humerus (76.2%) fractures, and low sensitivity in the detection of intra-articular fractures. PMID:28286822
Sensitivity of UK butterflies to local climatic extremes: which life stages are most at risk?
McDermott Long, Osgur; Warren, Rachel; Price, Jeff; Brereton, Tom M; Botham, Marc S; Franco, Aldina M A
2017-01-01
There is growing recognition of the importance of extreme climatic events (ECEs) in determining changes in species populations. In fact, it is often the extent of climate variability that determines a population's ability to persist at a given site. This study examined the impact of ECEs on the resident UK butterfly species (n = 41) over a 37-year period. The study investigated the sensitivity of butterflies to four extremes (drought, extreme precipitation, extreme heat and extreme cold), identified at the site level, across each species' life stages. Variations in the vulnerability of butterflies at the site level were also compared based on three life-history traits (voltinism, habitat requirement and range). This is the first study to examine the effects of ECEs at the site level across all life stages of a butterfly, identifying sensitive life stages and unravelling the role life-history traits play in species sensitivity to ECEs. Butterfly population changes were found to be primarily driven by temperature extremes. Extreme heat was detrimental during overwintering periods and beneficial during adult periods, and extreme cold had opposite impacts on both of these life stages. Previously undocumented detrimental effects were identified for extreme precipitation during the pupal life stage for univoltine species. Generalists were found to have significantly more negative associations with ECEs than specialists. With future projections of warmer, wetter winters and more severe weather events, UK butterflies could come under severe pressure given the findings of this study. © 2016 The Authors. Journal of Animal Ecology © 2016 British Ecological Society.
Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua
2014-01-01
A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting (SCFW) and a fast kernel-based extreme learning machine (KELM) classifier, is introduced for the diagnosis of Parkinson's disease (PD). In the proposed method, SCFW is used as a data preprocessing tool that aims to decrease the variance in features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on KELM performance is investigated in detail. The efficiency and effectiveness of the proposed method are rigorously evaluated on the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), f-measure, and the kappa statistic. Experimental results demonstrate that SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, achieving the highest classification results reported so far under a 10-fold cross-validation scheme: classification accuracy of 99.49%, sensitivity of 100%, specificity of 99.39%, AUC of 99.69%, f-measure of 0.9964, and kappa of 0.9867. Promisingly, the proposed method may serve as a new candidate among powerful methods for the diagnosis of PD. PMID:25484912
Crisp, Jonathan G; Lovato, Luis M; Jang, Timothy B
2010-12-01
Compression ultrasonography of the lower extremity is an established method of detecting proximal lower extremity deep venous thrombosis when performed by a certified operator in a vascular laboratory. Our objective is to determine the sensitivity and specificity of bedside 2-point compression ultrasonography performed in the emergency department (ED) with portable vascular ultrasonography for the detection of proximal lower extremity deep venous thrombosis. We did this by directly comparing emergency physician-performed ultrasonography to lower extremity duplex ultrasonography performed by the Department of Radiology. This was a prospective, cross-sectional study and diagnostic test assessment of a convenience sample of ED patients with a suspected lower extremity deep venous thrombosis, conducted at a single-center, urban, academic ED. All physicians had a 10-minute training session before enrolling patients. ED compression ultrasonography occurred before Department of Radiology ultrasonography and involved identification of 2 specific points: the common femoral and popliteal vessels, with subsequent compression of the common femoral and popliteal veins. The study result was considered positive for proximal lower extremity deep venous thrombosis if either vein was incompressible or a thrombus was visualized. Sensitivity and specificity were calculated with the final radiologist interpretation of the Department of Radiology ultrasonography as the criterion standard. A total of 47 physicians performed 199 2-point compression ultrasonographic examinations in the ED. Median number of examinations per physician was 2 (range 1 to 29 examinations; interquartile range 1 to 5 examinations). There were 45 proximal lower extremity deep venous thromboses observed on Department of Radiology evaluation, all correctly identified by ED 2-point compression ultrasonography. 
All but one of the 153 patients without proximal lower extremity deep venous thrombosis had a negative ED compression ultrasonographic result. The one exception, despite a negative Department of Radiology ultrasonographic result, was found to have decreased compression of the popliteal vein on ED compression ultrasonography, giving a single false-positive result; however, repeated ultrasonography by the Department of Radiology 1 week later showed a popliteal deep venous thrombosis. The sensitivity and specificity of ED 2-point compression ultrasonography for deep venous thrombosis were 100% (95% confidence interval 92% to 100%) and 99% (95% confidence interval 96% to 100%), respectively. Emergency physician-performed 2-point compression ultrasonography of the lower extremity with a portable vascular ultrasonographic machine, conducted in the ED by this physician group and in this patient sample, accurately identified the presence and absence of proximal lower extremity deep venous thrombosis. Copyright © 2010 American College of Emergency Physicians. Published by Mosby, Inc. All rights reserved.
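The abstract does not state which binomial interval was used for the reported confidence bounds, but a Wilson score interval (a standard choice, sketched below as an assumption) reproduces the quoted 92% sensitivity lower bound and 96% specificity lower bound to within rounding:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = p + z**2 / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half) / denom, (centre + half) / denom

sens_lo, sens_hi = wilson_ci(45, 45)    # sensitivity: all 45 DVTs detected
spec_lo, spec_hi = wilson_ci(152, 153)  # specificity: 1 false positive in 153
```

Note that when all 45 cases are detected the point estimate is 100%, yet the interval's lower bound is only 92%, which is why the abstract reports both.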
Stover, Bert; Silverstein, Barbara; Wickizer, Thomas; Martin, Diane P; Kaufman, Joel
2007-06-01
Work-related upper extremity musculoskeletal disorders (MSDs) result in substantial disability and expense. Identifying workers or jobs at high risk can trigger intervention before workers are injured or the condition worsens. We investigated a disability instrument, the QuickDASH, as a workplace screening tool to identify workers at high risk of developing upper extremity MSDs. Subjects included workers reporting recurring upper extremity MSD symptoms in the past 7 days (n = 559). The QuickDASH was reasonably accurate at baseline, with a sensitivity of 73% for MSD diagnosis and 96% for symptom severity. Specificity was 56% for diagnosis and 53% for symptom severity. At 1-year follow-up, sensitivity and specificity for MSD diagnosis were 72% and 54%, respectively, as predicted by the baseline QuickDASH score. For symptom severity, sensitivity and specificity were 86% and 52%. An a priori target of 70% sensitivity and 50% specificity was met for symptom severity, work pace and quality, and MSD diagnosis. The QuickDASH may be useful for identifying jobs or workers at increased risk of upper extremity MSDs, and may provide an efficient health surveillance screening tool for targeting early workplace intervention to prevent upper extremity MSD problems.
The Extreme Ultraviolet Explorer
NASA Technical Reports Server (NTRS)
Malina, R. F.; Bowyer, S.; Lampton, M.; Finley, D.; Paresce, F.; Penegor, G.; Heetderks, H.
1982-01-01
The Extreme Ultraviolet Explorer Mission is described. The purpose of this mission is to search the celestial sphere for astronomical sources of extreme ultraviolet (EUV) radiation (100 to 1000 A). The search will be accomplished with three EUV telescopes, each sensitive to a different band within the EUV range. A fourth telescope will perform a higher-sensitivity search of a limited sample of the sky in a single EUV band. In six months, the entire sky will be scanned at a sensitivity level comparable to existing surveys in other, more traditional astronomical bandpasses.
Liu, Yuxuan; Huang, Xiangyi; Ren, Jicun
2016-01-01
CE is an ideal analytical method for extremely volume-limited biological microenvironments. However, the small injection volume makes highly sensitive detection a challenge. Chemiluminescence (CL) detection offers low background and excellent sensitivity because it requires no light source. The coupling of CL with CE and MCE has become a powerful analytical method, widely applied to chemical analysis, bioassays, drug analysis, and environmental analysis. In this review, we first introduce developments in CE-CL and MCE-CL systems, then emphasize applications from the last 10 years, and finally discuss future prospects. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Bravini, Elisabetta; Giordano, Andrea; Sartorio, Francesco; Ferriero, Giorgio; Vercelli, Stefano
2017-04-01
To investigate the dimensionality and measurement properties of the Italian Lower Extremity Functional Scale using both classical test theory and Rasch analysis, and to provide insights for an improved version of the questionnaire. Rasch analysis of individual patient data. Rehabilitation centre. A total of 135 patients with musculoskeletal diseases of the lower limb. Patients were assessed with the Lower Extremity Functional Scale before and after rehabilitation. Rasch analysis revealed problems related to rating scale category functioning, item fit, and item redundancy. After an iterative process, which reduced the rating scale categories from 5 to 4 and deleted 5 items, the psychometric properties of the Italian Lower Extremity Functional Scale improved. The retained 15 items with a 4-level response format fitted the Rasch model (internal construct validity) and demonstrated unidimensionality and good reliability indices (person-separation reliability 0.92; Cronbach's alpha 0.94). The analysis also showed differential item functioning for six of the retained items. The sensitivity to change of the Italian 15-item Lower Extremity Functional Scale was nearly equal to that of the original version (effect size: 0.93 and 0.98; standardized response mean: 1.20 and 1.28, for the 15-item and 20-item versions, respectively). The original Italian Lower Extremity Functional Scale had unsatisfactory measurement properties; however, removing five items and simplifying the scoring from 5 to 4 levels resulted in a more valid measure with good reliability and sensitivity to change.
Modified Brown-Forsythe Procedure for Testing Interaction Effects in Split-Plot Designs
ERIC Educational Resources Information Center
Vallejo, Guillermo; Ato, Manuel
2006-01-01
The standard univariate and multivariate methods are conventionally used to analyze continuous data from groups by trials repeated measures designs, in spite of being extremely sensitive to departures from the multisample sphericity assumption when group sizes are unequal. However, in the last 10 years several authors have offered alternative…
Robust Regression for Slope Estimation in Curriculum-Based Measurement Progress Monitoring
ERIC Educational Resources Information Center
Mercer, Sterett H.; Lyons, Alina F.; Johnston, Lauren E.; Millhoff, Courtney L.
2015-01-01
Although ordinary least-squares (OLS) regression has been identified as a preferred method to calculate rates of improvement for individual students during curriculum-based measurement (CBM) progress monitoring, OLS slope estimates are sensitive to the presence of extreme values. Robust estimators have been developed that are less biased by…
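To illustrate why robust slope estimators matter for progress monitoring, the sketch below contrasts OLS with the Theil-Sen estimator (the median of all pairwise slopes) on hypothetical weekly CBM scores containing a single aberrant probe. This is a generic example, not the specific estimators evaluated in the article:

```python
from itertools import combinations
from statistics import median

def theil_sen_slope(xs, ys):
    """Median of all pairwise slopes; resistant to outlying observations."""
    slopes = [(ys[j] - ys[i]) / (xs[j] - xs[i])
              for i, j in combinations(range(len(xs)), 2)]
    return median(slopes)

def ols_slope(xs, ys):
    """Ordinary least-squares slope, for comparison."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)

# Hypothetical weekly CBM scores; week 5 is an aberrant probe
weeks = [1, 2, 3, 4, 5, 6, 7, 8]
scores = [10, 12, 14, 16, 2, 20, 22, 24]  # true growth is 2 words/week

robust = theil_sen_slope(weeks, scores)   # 2.0, unaffected by the outlier
naive = ols_slope(weeks, scores)          # ~1.81, dragged down by week 5
```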
Surface plasmon resonance spectroscopy sensor and methods for using same
Anderson, Brian Benjamin; Nave, Stanley Eugene
2002-01-01
A surface plasmon resonance ("SPR") probe with a detachable sensor head, and systems and methods for using it in various applications, are described. The SPR probe couples fiber optic cables directly to an SPR substrate that has a generally planar input surface and a generally curved reflecting surface, such as a substrate formed as a hemisphere. Forming the SPR probe in this manner allows it to be miniaturized and to operate without high-precision, expensive, and bulky collimating or focusing optics. Additionally, the curved reflecting surface of the substrate can be coated with one or multiple patches of sensing medium, allowing the probe to detect multiple analytes of interest or to provide multiple readings for comparison and higher precision. Specific applications for the probe are disclosed, including extremely sensitive relative humidity and dew point detection in, e.g., moisture-sensitive environments such as volatile chemical reactions. The SPR probe operates with a large dynamic range and provides extremely high-quality spectra while being robust enough for field deployment and readily manufacturable.
NASA Astrophysics Data System (ADS)
Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro
2017-10-01
The role of photons in lithography is to transfer the energy and information required for resist pattern formation. In the information-deficit region, a trade-off relationship is observed between line edge roughness (LER) and sensitivity. However, the sensitivity can be increased without increasing LER in the energy-deficit region. In this study, the sensitivity enhancement limit was investigated, assuming line-and-space patterns with a half-pitch of 11 nm. LER was calculated by a Monte Carlo method. It was unrealistic to increase the sensitivity twofold while keeping the line width roughness (LWR) within 10% critical dimension (CD), whereas the twofold sensitivity enhancement with 20% CD LWR was feasible. The requirements are roughly that the sensitization distance should be less than 2 nm and that the total sensitizer concentration should be higher than 0.3 nm⁻³.
van der Burg, Max Post; Tyre, Andrew J
2011-01-01
Wildlife managers often make decisions under considerable uncertainty. In the most extreme case, a complete lack of data leads to uncertainty that is unquantifiable. Information-gap decision theory deals with assessing management decisions under extreme uncertainty, but it is not widely used in wildlife management. Similarly, robust population management methods were developed to deal with uncertainties in multiple-model parameters; however, the two methods have not, as yet, been used in tandem to assess population management decisions. We provide a novel combination of the robust population management approach for matrix models with the information-gap decision theory framework for making conservation decisions under extreme uncertainty. We applied our model to the problem of nest survival management in an endangered bird species, the Mountain Plover (Charadrius montanus). Matrix sensitivities suggested that nest management is unlikely to have a strong effect on population growth rate, confirming previous analyses. However, given the amount of uncertainty about adult and juvenile survival, our analysis suggested that maximizing nest marking effort was the more robust decision for maintaining a stable population. Focusing on the twin concepts of opportunity and robustness in an information-gap model provides a useful method of assessing conservation decisions under extreme uncertainty.
Sensitivity of Rainfall Extremes Under Warming Climate in Urban India
NASA Astrophysics Data System (ADS)
Ali, H.; Mishra, V.
2017-12-01
Extreme rainfall events in urban India have halted transportation, damaged infrastructure, and affected human lives, and rainfall extremes are projected to increase under the future climate. We evaluated the relationship (scaling) between rainfall extremes at different temporal resolutions (daily, 3-hourly, and 30-minute), daily dew point temperature (DPT), and daily air temperature at 850 hPa (T850) for 23 urban areas in India. Daily rainfall extremes obtained from the Global Surface Summary of the Day (GSOD) dataset showed positive regression slopes for most of the cities, with a median of 14%/K for 1979-2013 against both DPT and T850, which is higher than the Clausius-Clapeyron (C-C) rate (~7%/K). Moreover, sub-daily rainfall extremes are even more sensitive to both DPT and T850. For instance, 3-hourly rainfall extremes obtained from the Tropical Rainfall Measuring Mission (TRMM 3B42 V7) showed regression slopes of more than 16%/K against DPT and T850 for 1998-2015. Half-hourly rainfall extremes from the Integrated Multi-satellitE Retrievals (IMERG) of the Global Precipitation Measurement (GPM) mission also showed higher sensitivity to changes in DPT and T850. This super-C-C scaling of rainfall extremes can be attributed to the convective nature of precipitation in India. Our results show that urban India may witness non-stationary rainfall extremes, which, in turn, will affect stormwater designs and the frequency and magnitude of urban flooding.
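The %/K scaling figures come from regressing the logarithm of extreme precipitation on temperature and converting the slope to a percentage change per kelvin. A hedged sketch of that computation (the series below is synthetic, not the GSOD/TRMM data used in the study):

```python
import math

def scaling_rate(temps, extremes):
    """OLS slope of ln(extreme precipitation) on temperature,
    re-expressed as a percentage change per kelvin."""
    n = len(temps)
    logs = [math.log(p) for p in extremes]
    mt, ml = sum(temps) / n, sum(logs) / n
    b = sum((t - mt) * (l - ml) for t, l in zip(temps, logs)) / \
        sum((t - mt) ** 2 for t in temps)
    return (math.exp(b) - 1) * 100

# Synthetic series growing at exactly the ~7 %/K Clausius-Clapeyron rate
temps = [20, 21, 22, 23, 24]
extremes = [50 * 1.07 ** (t - 20) for t in temps]
rate = scaling_rate(temps, extremes)  # recovers 7.0 %/K
```

A fitted slope well above this ~7 %/K benchmark is what the abstract calls super-C-C scaling.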
Chipinda, Itai; Mbiya, Wilbes; Adigun, Risikat Ajibola; Morakinyo, Moshood K.; Law, Brandon F.; Simoyi, Reuben H.; Siegel, Paul D.
2015-01-01
Chemical allergens bind directly, or after metabolic or abiotic activation, to endogenous proteins to become allergenic. Assessment of this initial binding has been suggested as a target for the development of assays to screen chemicals for allergenic potential. We recently reported a nitrobenzenethiol (NBT)-based method for screening thiol-reactive skin sensitizers; however, amine-selective sensitizers are not detected by this assay. In the present study we describe an amine-based (pyridoxylamine; PDA) kinetic assay to complement the NBT assay for identification of amine-selective and non-selective skin sensitizers. UV-Vis spectrophotometry and fluorescence were used to measure PDA reactivity for 57 chemicals, including anhydrides, aldehydes, and quinones, with reaction rates ranging from 116 M⁻¹ s⁻¹ for extreme sensitizers to 6.2 × 10⁻⁶ M⁻¹ s⁻¹ for weak ones. No reactivity towards PDA was observed with the thiol-selective sensitizers, non-sensitizers, or prohaptens. The PDA rate constants correlated significantly with the corresponding murine local lymph node assay (LLNA) threshold EC3 values (R² = 0.76). PDA thus serves as a simple, inexpensive amine-based reagent that shows promise as a preliminary screening tool for electrophilic, amine-selective skin sensitizers. PMID:24333919
You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing
2013-01-01
Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained experimentally cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only protein sequence information. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors using four kinds of protein sequence information. For dimension reduction, PCA was employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and aggregated into a consensus classifier by majority voting. Ensembling the extreme learning machines removes the dependence of results on the initial random weights and improves prediction performance. On the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art support vector machine (SVM). Experimental results demonstrate that the proposed PCA-EELM outperforms SVM under 5-fold cross-validation, and that PCA-EELM runs faster than the PCA-SVM-based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance and less time.
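A single extreme learning machine, the base classifier ensembled above, can be sketched in a few lines of NumPy. The toy below illustrates the closed-form training that makes ELMs fast: the hidden layer is random and untrained, and only the output weights are solved, by least squares. The data, seed, and layer sizes are invented for illustration; this is not the PCA-EELM pipeline itself:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, hidden=50):
    """Extreme learning machine: a random, untrained hidden layer,
    with output weights solved in closed form by least squares."""
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)        # random nonlinear feature map
    beta = np.linalg.pinv(H) @ y  # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem with +/-1 labels (a stand-in for real PPI features)
X = rng.normal(size=(200, 4))
y = np.sign(X[:, 0] + 0.5 * X[:, 1])
W, b, beta = elm_train(X, y)
acc = float(np.mean(np.sign(elm_predict(X, W, b, beta)) == y))
```

Because the hidden weights are random, independently trained ELMs disagree slightly; majority voting over several of them, as in PCA-EELM, averages that variance away.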
Capacity building for hydrological change - using a blended learning approach
NASA Astrophysics Data System (ADS)
Nacken, H.
2015-04-01
Extreme hydrological events have always challenged societies, and there is growing evidence that hydrological extremes have already become more severe in some regions. The Middle East and North Africa (MENA) region is one of the world's most water-scarce and driest regions, with a high dependency on climate-sensitive agriculture. There is an urgent need for capacity building programmes that prepare water professionals and communities to deal with the expected hydrological changes and extremes. The most successful capacity building programmes are country-driven ones that involve a wide range of national stakeholders, have a high degree of in-country ownership, and have an applied character. The method of choice for setting up such capacity building programmes will be blended learning.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Picard, Richard Roy; Bhat, Kabekode Ghanasham
2017-07-18
We examine sensitivity analysis and uncertainty quantification for molecular dynamics simulation. Extreme (large or small) output values for the LAMMPS code often occur at the boundaries of input regions, and uncertainties in those boundary values are overlooked by common SA methods. Similarly, input values for which code outputs are consistent with calibration data can also occur near boundaries. Upon applying approaches in the literature for imprecise probabilities (IPs), much more realistic results are obtained than for the complacent application of standard SA and code calibration.
USDA-ARS?s Scientific Manuscript database
Quantitative PCR (Q-PCR) utilizing specific primer sequences and a fluorogenic, 5’-exonuclease linear hydrolysis probe is well established as a detection and identification method for Phakopsora pachyrhizi, the soybean rust pathogen. Because of the extreme sensitivity of Q-PCR, the DNA of a single u...
Measurement of barrier tissue integrity with an organic electrochemical transistor.
Jimison, Leslie H; Tria, Scherrine A; Khodagholy, Dion; Gurfinkel, Moshe; Lanzarini, Erica; Hama, Adel; Malliaras, George G; Owens, Róisín M
2012-11-20
The integration of an organic electrochemical transistor with human barrier tissue cells provides a novel method for assessing toxicology of compounds in vitro. Minute variations in paracellular ionic flux induced by toxic compounds are measured in real time, with unprecedented temporal resolution and extreme sensitivity. Copyright © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Robot-aided assessment of lower extremity functions: a review.
Maggioni, Serena; Melendez-Calderon, Alejandro; van Asseldonk, Edwin; Klamroth-Marganska, Verena; Lünenburger, Lars; Riener, Robert; van der Kooij, Herman
2016-08-02
The assessment of sensorimotor functions is extremely important to understand the health status of a patient and how it changes over time. Assessments are necessary to plan and adjust therapy in order to maximize the chances of individual recovery. Nowadays, however, assessments are seldom used in clinical practice due to administrative constraints or to inadequate validity, reliability and responsiveness. In clinical trials, more sensitive and reliable measurement scales could unmask changes in physiological variables that would not be visible with existing clinical scores. In recent decades, robotic devices have become available for neurorehabilitation training in clinical centers. Besides training, robotic devices can overcome some of the limitations of traditional clinical assessments by providing more objective, sensitive, reliable and time-efficient measurements. However, it is necessary to understand clinical needs in order to develop novel robot-aided assessment methods that can be integrated into clinical practice. This paper aims to provide researchers and developers in the field of robotic neurorehabilitation with a comprehensive review of assessment methods for the lower extremities. Among the ICF domains, we included those related to lower extremity sensorimotor functions and walking; for each chapter we present and discuss existing assessments used in routine clinical practice and contrast them with state-of-the-art instrumented and robot-aided technologies. Based on the shortcomings of current assessments, the identified clinical needs, and the opportunities offered by robotic devices, we propose future directions for research in rehabilitation robotics. The review and recommendations provided in this paper aim to guide the design of the next generation of robot-aided functional assessments, their validation, and their translation to clinical practice.
LSENS, The NASA Lewis Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
2000-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS (the NASA Lewis kinetics and sensitivity analysis code), are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include: static system; steady, one-dimensional, inviscid flow; incident-shock initiated reaction in a shock tube; and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method (LSODE, the Livermore Solver for Ordinary Differential Equations), which works efficiently for the extremes of very fast and very slow reactions, is used to solve the "stiff" ordinary differential equation systems that arise in chemical kinetics. For static reactions, the code uses the decoupled direct method to calculate sensitivity coefficients of the dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters. Solution methods for the equilibrium and post-shock conditions and for perfectly stirred reactor problems are either adapted from or based on the procedures built into the NASA code CEA (Chemical Equilibrium and Applications).
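The need for an implicit "stiff" integrator such as LSODE can be seen on a toy kinetics-style equation. The sketch below (an illustration, not LSENS code) contrasts backward Euler, which is stable at any step size, with forward Euler at the same step size on a problem with widely separated time scales:

```python
import math

# Toy stiff problem: y' = -k (y - sin t) with k >> 1; the solution
# relaxes almost instantly onto the slow manifold y ~ sin t.
k, h, steps = 1.0e4, 0.01, 200  # h is far above the explicit limit 2/k

def backward_euler():
    """Implicit update y_{n+1} = (y_n + h k sin t_{n+1}) / (1 + h k);
    unconditionally stable, as required for stiff kinetics."""
    y, t = 1.0, 0.0
    for _ in range(steps):
        t += h
        y = (y + h * k * math.sin(t)) / (1 + h * k)
    return y

def forward_euler():
    """Explicit update; the error is amplified by |1 - h k| = 99 per step."""
    y, t = 1.0, 0.0
    for _ in range(steps):
        y += h * (-k) * (y - math.sin(t))
        t += h
    return y

y_implicit = backward_euler()  # tracks sin(2.0) closely
y_explicit = forward_euler()   # diverges and overflows
```

Production stiff solvers such as LSODE automate this idea with higher-order backward differentiation formulas and adaptive step-size control.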
SVM-based automatic diagnosis method for keratoconus
NASA Astrophysics Data System (ADS)
Gao, Yuhong; Wu, Qiang; Li, Jing; Sun, Jiande; Wan, Wenbo
2017-06-01
Keratoconus is a progressive corneal disease that can lead to serious myopia and astigmatism, or even to corneal transplantation if it worsens. Early detection of keratoconus is therefore extremely important for monitoring and controlling the condition. In this paper, we propose an automatic diagnosis algorithm for keratoconus to discriminate normal eyes from keratoconic ones. We select the parameters obtained by Oculyzer as the corneal features, which characterize the cornea both directly and indirectly. In our experiment, 289 normal cases and 128 keratoconus cases were divided into training and test sets. The linear SVM kernel, far better than the other kernels, achieved a sensitivity of 94.94% and a specificity of 97.87% with all parameters included in the model. In single-parameter experiments with the linear kernel, elevation (92.03% sensitivity, 98.61% specificity) and thickness (97.28% sensitivity, 97.82% specificity) showed good classification ability. Combining corneal elevation and thickness, the proposed method reaches 97.43% sensitivity and 99.19% specificity. The experiments demonstrate that the proposed automatic diagnosis method is feasible and reliable.
Akita, Shinsuke; Mitsukawa, Nobuyuki; Kazama, Toshiki; Kuriyama, Motone; Kubota, Yoshitaka; Omori, Naoko; Koizumi, Tomoe; Kosaka, Kentaro; Uno, Takashi; Satoh, Kaneshige
2013-06-01
Lymphoscintigraphy is the gold-standard examination for extremity lymphoedema. Indocyanine green lymphography may be useful for diagnosis as well. We compared the utility of these two examination methods for patients with suspected extremity lymphoedema and for those in whom surgical treatment of lymphoedema was under consideration. A total of 169 extremities with lymphoedema secondary to lymph node dissection and 65 extremities with idiopathic oedema (suspected primary lymphoedema) were evaluated; the utility of indocyanine green lymphography for diagnosis was compared with lymphoscintigraphy. Regression analysis between lymphoscintigraphy type and indocyanine green lymphography stage was conducted in the secondary lymphoedema group. In secondary oedema, the sensitivity of indocyanine green lymphography, compared with lymphoscintigraphy, was 0.972, the specificity was 0.548 and the accuracy was 0.816. When patients with lymphoscintigraphy type I and indocyanine green lymphography stage I were regarded as negative, the sensitivity of indocyanine green lymphography was 0.978, the specificity was 0.925 and the accuracy was 0.953. There was a significant positive correlation between the lymphoscintigraphy type and the indocyanine green lymphography stage. In idiopathic oedema, the sensitivity of indocyanine green lymphography was 0.974, the specificity was 0.778 and the accuracy was 0.892. In secondary lymphoedema, earlier and less severe dysfunction could be detected by indocyanine green lymphography. Indocyanine green lymphography is recommended to determine patients' suitability for lymphaticovenular anastomosis, because its diagnostic ability and its capacity to evaluate disease severity are similar to those of lymphoscintigraphy, with less invasiveness and lower cost.
To detect primary lymphoedema, indocyanine green lymphography should be used first as a screening examination; when the results are positive, lymphoscintigraphy is useful to obtain further information. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Pasanen, Kati; Krosshaug, Tron; Vasankari, Tommi; Kannus, Pekka; Heinonen, Ari; Kujala, Urho M; Avela, Janne; Perttunen, Jarmo; Parkkari, Jari
2018-01-01
Background/aim: Poor frontal plane knee control can manifest as increased dynamic knee valgus during athletic tasks. The purpose of this study was to investigate the association between frontal plane knee control and the risk of acute lower extremity injuries. In addition, we wanted to study whether the single-leg squat (SLS) test can be used as a screening tool to identify athletes with an increased injury risk. Methods: A total of 306 basketball and floorball players participated in the baseline SLS test and a 12-month injury registration follow-up. Acute lower extremity time-loss injuries were registered. Frontal plane knee projection angles (FPKPA) during the SLS were calculated using a two-dimensional video analysis. Results: Athletes displaying a high FPKPA were 2.7 times more likely to sustain a lower extremity injury (adjusted OR 2.67, 95% CI 1.23 to 5.83) and 2.4 times more likely to sustain an ankle injury (OR 2.37, 95% CI 1.13 to 4.98). There was no statistically significant association between FPKPA and knee injury (OR 1.49, 95% CI 0.56 to 3.98). The receiver operating characteristic curve analyses indicated poor combined sensitivity and specificity when FPKPA was used as a screening test for lower extremity injuries (area under the curve of 0.59) and ankle injuries (area under the curve of 0.58). Conclusions: Athletes displaying a large FPKPA in the SLS test had an elevated risk of acute lower extremity and ankle injuries. However, the SLS test is not sensitive and specific enough to be used as a screening tool for future injury risk. PMID:29387448
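Odds ratios with 95% confidence intervals, like those reported above, are conventionally computed on the log-odds scale. A minimal unadjusted sketch (the 2x2 counts in the example are hypothetical, not the study's data; the published ORs were additionally covariate-adjusted):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table:
    a = exposed injured,   b = exposed uninjured,
    c = unexposed injured, d = unexposed uninjured."""
    odds_ratio = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(odds_ratio) - z * se_log_or)
    hi = math.exp(math.log(odds_ratio) + z * se_log_or)
    return odds_ratio, lo, hi

# Hypothetical counts for illustration only:
or_, lo, hi = odds_ratio_ci(a=10, b=20, c=5, d=40)
```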
The effect of uncertainties in distance-based ranking methods for multi-criteria decision making
NASA Astrophysics Data System (ADS)
Jaini, Nor I.; Utyuzhnikov, Sergei V.
2017-08-01
Data in multi-criteria decision making are often imprecise and changeable, so it is important to carry out a sensitivity analysis for the multi-criteria decision-making problem. This paper presents a sensitivity analysis of several ranking techniques based on distance measures in multi-criteria decision making. Two types of uncertainty are considered: the first concerns the input data, while the second concerns the decision maker's preferences (weights). The ranking techniques considered in this study are TOPSIS, the relative distance method and the trade-off ranking method. TOPSIS and the relative distance method measure the distance from an alternative to the ideal and anti-ideal solutions. In turn, trade-off ranking calculates the distance of an alternative to the extreme solutions and to the other alternatives. Several test cases are considered to study the performance of each ranking technique under both types of uncertainty.
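TOPSIS, one of the distance-based ranking techniques under study, scores each alternative by its relative closeness to the ideal solution. A minimal sketch of the standard algorithm (an illustrative implementation, not the authors' code):

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives (rows of X) by relative closeness to the ideal.
    weights[j]: importance of criterion j; benefit[j]: True if criterion j
    is to be maximised, False if it is a cost criterion."""
    R = X / np.linalg.norm(X, axis=0)           # vector-normalise each criterion
    V = R * weights                             # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to ideal solution
    d_neg = np.linalg.norm(V - anti, axis=1)    # distance to anti-ideal solution
    return d_neg / (d_pos + d_neg)              # closeness in [0, 1]; higher is better
```

An alternative that is best on every criterion coincides with the ideal solution and receives a closeness of exactly 1.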
An initial investigation into methods of computing transonic aerodynamic sensitivity coefficients
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1994-01-01
The primary accomplishments of the project are as follows: (1) Using the transonic small perturbation equation as a flowfield model, the project demonstrated that the quasi-analytical method could be used to obtain aerodynamic sensitivity coefficients for airfoils at subsonic, transonic, and supersonic conditions for design variables such as Mach number, airfoil thickness, maximum camber, angle of attack, and location of maximum camber. It was established that the quasi-analytical approach was an accurate method for obtaining aerodynamic sensitivity derivatives for airfoils at transonic conditions and usually more efficient than the finite difference approach. (2) The usage of symbolic manipulation software to determine the appropriate expressions and computer coding associated with the quasi-analytical method for sensitivity derivatives was investigated. Using the three dimensional fully conservative full potential flowfield model, it was determined that symbolic manipulation along with a chain rule approach was extremely useful in developing a combined flowfield and quasi-analytical sensitivity derivative code capable of considering a large number of realistic design variables. (3) Using the three dimensional fully conservative full potential flowfield model, the quasi-analytical method was applied to swept wings (i.e. three dimensional) at transonic flow conditions. (4) The incremental iterative technique has been applied to the three dimensional transonic nonlinear small perturbation flowfield formulation, an equivalent plate deflection model, and the associated aerodynamic and structural discipline sensitivity equations; and coupled aeroelastic results for an aspect ratio three wing in transonic flow have been obtained.
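The overall finite-difference approach against which the quasi-analytical method was benchmarked simply re-runs the analysis at perturbed designs and differences the responses. A minimal sketch, with a stand-in analytic response function in place of a flow solver (the function and values are hypothetical):

```python
def central_difference(f, x, h=1e-6):
    """Overall finite-difference sensitivity: evaluate the 'analysis' f
    at two perturbed designs and difference the responses."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Stand-in 'analysis' (hypothetical; a real case would be a flowfield solve):
response = lambda mach: mach ** 3
# Analytic sensitivity d(M^3)/dM = 3 M^2 = 12 at M = 2
sens = central_difference(response, 2.0)
```

The cost of this approach grows with the number of design variables (one or two extra analyses each), which is why the abstract finds the quasi-analytical route usually more efficient.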
Comparisons of Robustness and Sensitivity between Cancer and Normal Cells by Microarray Data
Chu, Liang-Hui; Chen, Bor-Sen
2008-01-01
Robustness is defined as the ability to maintain performance in the face of perturbations and uncertainties, and sensitivity is a measure of the system deviations generated by perturbations to the system. While cancer appears to be a robust but fragile system, little computational and quantitative evidence demonstrates robustness trade-offs in cancer. Microarrays have been widely applied to decipher gene-expression signatures in human cancer research, and quantification of global gene-expression profiles facilitates precise prediction and modeling of cancer in systems biology. We provide several efficient computational methods, based on system and control theory, to compare robustness and sensitivity between cancer and normal cells using microarray data. Measurement of robustness and sensitivity by a linear stochastic model is introduced in this study; it reproduces oscillations in the feedback loops of p53 and demonstrates robustness trade-offs, i.e., that cancer is a robust system with some extreme fragilities. In addition, we measure the sensitivity of gene expression to perturbations in other gene expression and in kinetic parameters, discuss nonlinear effects in the feedback loops of p53, and extend our method to robustness-based cancer drug design. PMID:19259409
Navarro-Pujalte, Esther; Gacto-Sánchez, Mariano; Montilla-Herrador, Joaquina; Escolar-Reina, Pilar; Ángeles Franco-Sierra, María; Medina-Mirapeix, Francesc
2018-01-12
Prospective longitudinal study. To examine the sensitivity of the Mobility Activities Measure for lower extremities and to compare it to the sensitivity of the Physical Functioning Scale (PF-10) and the Patient-Specific Functional Scale (PSFS) at week 4 and week 8 post-hospitalization in outpatient rehabilitation settings. Mobility Activities Measure is a set of short mobility measures to track outpatient rehabilitation progress: its scales have shown good properties but its sensitivity to change has not been reported. Patients with musculoskeletal conditions were recruited at admission in three outpatient rehabilitation settings in Spain. Data were collected at admission, week 4 and week 8 from an initial sample of 236 patients (mean age ± SD = 36.7 ± 11.1). Mobility Activities Measure scales for lower extremity; PF-10; and PSFS. All the Mobility Activities Measure scales were sensitive to both positive and negative changes (the Standardized Response Means (SRMs) ranged between 1.05 and 1.53 at week 4, and between 0.63 and 1.47 at week 8). The summary measure encompassing the three Mobility Activities Measure scales detected a higher proportion of participants who had improved beyond the minimal detectable change (MDC) than detected by the PSFS and the PF-10 both at week 4 (86.64% vs. 69.81% and 42.23%, respectively) and week 8 (71.14% vs. 55.65% and 60.81%, respectively). The three Mobility Activities Measure scales assessing the lower extremity can be used across outpatient rehabilitation settings to provide consistent and sensitive measures of changes in patients' mobility. Implications for rehabilitation All the scales of the Mobility Activities Measure for the lower extremity were sensitive to both positive and negative change across the follow-up periods. 
Overall, the summary measure encompassing the three Mobility Activities Measure scales for the lower extremity appeared more sensitive to positive changes than the Physical Functioning Scale, especially during the first four weeks of treatment. The summary measure also detected a higher percentage of participants whose positive change exceeded the minimal detectable change than the Patient-Specific Functional Scale and the Physical Functioning Scale at the first follow-up period. Having demonstrated their consistency and sensitivity to change, the three Mobility Activities Measure scales can now be considered for tracking patients' functional progress. The Mobility Activities Measure can therefore be used in patients with musculoskeletal conditions across outpatient rehabilitation settings to provide estimates of change in mobility activities focusing on the lower extremity.
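The standardized response mean (SRM) reported above is the mean of the individual change scores divided by their standard deviation. A minimal sketch (the scores in the example call are hypothetical, not the study's data):

```python
import math

def standardized_response_mean(baseline, follow_up):
    """SRM = mean of individual change scores / SD of change scores.
    Values near or above ~0.8 are conventionally read as large responsiveness."""
    changes = [f - b for b, f in zip(baseline, follow_up)]
    n = len(changes)
    mean_change = sum(changes) / n
    sd_change = math.sqrt(sum((c - mean_change) ** 2 for c in changes) / (n - 1))
    return mean_change / sd_change

# Hypothetical admission and week-4 scores for four patients:
srm = standardized_response_mean([10, 12, 11, 13], [14, 15, 16, 17])
```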
On the nonlinearity of spatial scales in extreme weather attribution statements
NASA Astrophysics Data System (ADS)
Angélil, Oliver; Stone, Daíthí; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; Wehner, Michael; Shiogama, Hideo; Wolski, Piotr; Ciavarella, Andrew; Christidis, Nikolaos
2018-04-01
In the context of ongoing climate change, extreme weather events are drawing increasing attention from the public and news media. A question often asked is how the likelihood of extremes might have been changed by anthropogenic greenhouse-gas emissions. Answers to this question are strongly influenced by the model used and by the duration, spatial extent, and geographic location of the event, some of which are often overlooked. Using output from four global climate models, we provide attribution statements characterised by a change in probability of occurrence due to anthropogenic greenhouse-gas emissions, for rainfall and temperature extremes occurring at seven discretised spatial scales and three temporal scales. An understanding of the sensitivity of attribution statements to a range of spatial and temporal scales of extremes allows for the scaling of attribution statements, rendering them relevant to other extremes having similar but non-identical characteristics. This procedure is simple enough to allow timely estimates of the anthropogenic contribution to the event probability. Furthermore, since real extremes do not have well-defined physical borders, scaling can help quantify uncertainty around attribution results due to uncertainty around the event definition. Results suggest that the sensitivity of attribution statements to spatial scale is similar across models and that the sensitivity of attribution statements to the model used is often greater than the sensitivity to a doubling or halving of the spatial scale of the event. The use of a range of spatial scales allows us to identify a nonlinear relationship between the spatial scale of the event studied and the attribution statement.
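A change-in-probability attribution statement of the kind described is commonly summarised by the probability ratio and the fraction of attributable risk (FAR). A minimal sketch (the probabilities in the example are illustrative, not results from the paper):

```python
def attribution_statement(p_actual, p_natural):
    """Summarise a probabilistic event attribution:
    p_actual  - probability of exceeding the event threshold with
                anthropogenic forcing included,
    p_natural - the same probability in a counterfactual natural world.
    Returns (probability ratio, fraction of attributable risk)."""
    probability_ratio = p_actual / p_natural
    far = 1.0 - p_natural / p_actual  # share of the risk attributable to forcing
    return probability_ratio, far

# Illustrative values: event twice as likely with anthropogenic forcing.
pr, far = attribution_statement(p_actual=0.02, p_natural=0.01)
```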
Bradley, Phelim; Gordon, N. Claire; Walker, Timothy M.; Dunn, Laura; Heys, Simon; Huang, Bill; Earle, Sarah; Pankhurst, Louise J.; Anson, Luke; de Cesare, Mariateresa; Piazza, Paolo; Votintseva, Antonina A.; Golubchik, Tanya; Wilson, Daniel J.; Wyllie, David H.; Diel, Roland; Niemann, Stefan; Feuerriegel, Silke; Kohl, Thomas A.; Ismail, Nazir; Omar, Shaheed V.; Smith, E. Grace; Buck, David; McVean, Gil; Walker, A. Sarah; Peto, Tim E. A.; Crook, Derrick W.; Iqbal, Zamin
2015-01-01
The rise of antibiotic-resistant bacteria has led to an urgent need for rapid detection of drug resistance in clinical samples, and improvements in global surveillance. Here we show how de Bruijn graph representation of bacterial diversity can be used to identify species and resistance profiles of clinical isolates. We implement this method for Staphylococcus aureus and Mycobacterium tuberculosis in a software package (‘Mykrobe predictor') that takes raw sequence data as input, and generates a clinician-friendly report within 3 minutes on a laptop. For S. aureus, the error rates of our method are comparable to gold-standard phenotypic methods, with sensitivity/specificity of 99.1%/99.6% across 12 antibiotics (using an independent validation set, n=470). For M. tuberculosis, our method predicts resistance with sensitivity/specificity of 82.6%/98.5% (independent validation set, n=1,609); sensitivity is lower here, probably because of limited understanding of the underlying genetic mechanisms. We give evidence that minor alleles improve detection of extremely drug-resistant strains, and demonstrate feasibility of the use of emerging single-molecule nanopore sequencing techniques for these purposes. PMID:26686880
Local bias-induced phase transitions
Seal, Katyayani; Baddorf, Arthur P.; Jesse, Stephen; ...
2008-11-27
Electrical bias-induced phase transitions underpin a wide range of applications, from data storage to energy generation and conversion. The mechanisms behind these transitions are often quite complex and in many cases are extremely sensitive to local defects that act as centers for local transformations or pinning. Here, using ferroelectrics as an example, we review methods for probing bias-induced phase transitions and discuss the current limitations and challenges of extending these methods to field-induced phase transitions and electrochemical reactions in energy storage, biological and molecular systems.
Monitoring Ion Implantation Energy Using Non-contact Characterization Methods
NASA Astrophysics Data System (ADS)
Tallian, M.; Pap, A.; Mocsar, K.; Somogyi, A.; Nadudvari, Gy.; Kosztka, D.; Pavelka, T.
2011-01-01
State-of-the-art ultra-shallow junctions are produced using extremely low ion-implantation energies, down to the range of 1-3 keV. This can be achieved by a variety of production techniques; however, there is a significant risk that the actual implantation energy differs from the desired value. To detect this, sensitive measurement methods need to be used. Experiments show that both photomodulated reflection measurements before anneal and junction-photovoltage-based sheet-resistance measurements after anneal are suitable for this purpose.
NASA Astrophysics Data System (ADS)
Casola, J.; Johanson, E.; Groth, P.; Snow, C.; Choate, A.
2012-12-01
Southeastern Pennsylvania Transportation Authority (SEPTA), with support from the Federal Transit Administration, has been investigating its agency's vulnerability to weather-related disruption and damages as a way to inform an overall adaptation strategy for climate variability and change. Exploiting daily rail service records maintained by SEPTA and observations from nearby weather stations, we have developed a methodology for quantifying the sensitivity of SEPTA's Manayunk/Norristown rail line to various weather events (e.g., snow storms, heat waves, heavy rainfall and flooding, tropical storms). For each type of event, sensitivity is equated to the frequency and extent of service disruptions associated with the event, and includes the identification of thresholds beyond which impacts are observed. In addition, we have estimated the monetary costs associated with repair and replacement of infrastructure following these events. Our results have facilitated discussions with SEPTA operational staff, who have outlined the institutional aspects of their preparation and response processes for these weather events. We envision the methodology as being useful for resource and infrastructure managers across the public and private sector, and potentially scalable to smaller or larger operations. There are several advantageous aspects of the method: 1) the quantification of sensitivity, and the coupling of that sensitivity to cost information, provides credible input to SEPTA decision-makers as they establish the priorities and level of investment associated with their adaptation actions for addressing extreme weather; 2) the method provides a conceptual foundation for estimating the magnitude, frequency, and costs of potential future impacts at a local scale, especially with regard to heat waves; 3) the sensitivity information serves as an excellent discussion tool, enabling further research and information gathering about institutional relationships and procedures. 
These relationships and procedures are critical to the effectiveness of preparation for and responses to extreme weather events, but are often not explicitly documented.
USDA-ARS?s Scientific Manuscript database
Historical streamflow data from the Pacific Northwest indicate that the precipitation amount has been the dominant control on the magnitude of low streamflow extremes compared to the air temperature-affected timing of snowmelt runoff. The relative sensitivities of low streamflow to precipitation and...
NASA Astrophysics Data System (ADS)
Parhi, P.; Giannini, A.; Lall, U.; Gentine, P.
2016-12-01
Assessing and managing risks posed by climate variability and change is challenging in the tropics, from both a socio-economic and a scientific perspective. Most of the vulnerable countries with limited climate-adaptation capability lie in the tropics, yet climate projections, particularly of extreme precipitation, are highly uncertain there. The CMIP5 (Coupled Model Intercomparison Project, Phase 5) inter-model range of the sensitivity of extreme precipitation to global temperature under climate change is much larger in the tropics than in the extratropics, ranging from nearly 0% to greater than 30% across models (O'Gorman 2012). The uncertainty is also large in historical gauge- and satellite-based observational records. These large uncertainties in the sensitivity of tropical precipitation extremes highlight the need to better understand how these extremes respond to warming. We hypothesize that one of the factors explaining the large uncertainty is differing sensitivities during different phases of warming. We consider the 'growth' and 'mature' phases of warming in the climate-variability case, typically associated with an El Niño event. In the remote tropics (away from the tropical Pacific Ocean), the precipitation extremes can respond during the two phases through different pathways: (i) a direct and rapidly changing radiative forcing in an atmospheric column, acting top-down through tropospheric warming, and/or (ii) an indirect effect via changes in surface temperature, acting bottom-up through surface water and energy fluxes. We also speculate that the insights gained here might be useful in interpreting the large sensitivity under climate-change scenarios, since the physical mechanisms during the two warming phases of the climate-variability case have some correspondence with increasing and stabilized greenhouse-gas emission scenarios.
Criteria for site selection and frequency allocation (keynote paper), part 5
NASA Technical Reports Server (NTRS)
Rottger, J.
1985-01-01
Technical aspects of site and frequency selection for mesosphere-stratosphere-troposphere (MST) radar were discussed, and recommendations on site selection are presented. Tests for interference should be conducted before selecting a site. A small directional antenna may be suitable for simulating the sidelobe sensitivity of the radars; however, sophisticated data-processing methods make the system sensitivity extremely good, so use of the complete data system to look for interference is recommended. Frequency allocation remains difficult: these radars will make almost continuous use of the 40 to 60 MHz band, which is allocated to other services.
Hesse, Almut; Biyikal, Mustafa; Rurack, Knut; Weller, Michael G
2016-02-01
An improved antibody against the explosive pentaerythritol tetranitrate (PETN) was developed. The immunogen was designed using the concept of bioisosteric replacement, which led to an excellent polyclonal antibody with extreme selectivity and immunoassays of very good sensitivity. Compounds such as nitroglycerine, 2,4,6-trinitrotoluene, 1,3,5-trinitrobenzene, hexogen (RDX), 2,4,6-trinitroaniline, 1,3-dinitrobenzene, octogen (HMX), triacetone triperoxide, ammonium nitrate, 2,4,6-trinitrophenol and nitrobenzene were tested for potential cross-reactivity. The detection limit of a competitive enzyme-linked immunosorbent assay was determined to be around 0.5 µg/l, and the dynamic range of the assay was found to be between 1 and 1000 µg/l, covering three decades of concentration. This work demonstrates the successful application of the bioisosteric concept in immunochemistry through the exchange of a nitroester for a carbonate diester. The antiserum might be used in the development of quick tests, biosensors, microtitration-plate immunoassays, microarrays and other analytical methods for the highly sensitive detection of PETN, an explosive frequently used by terrorists because it is extremely difficult to detect. Copyright © 2015 John Wiley & Sons, Ltd.
Extreme sensitivity biosensing platform based on hyperbolic metamaterials
NASA Astrophysics Data System (ADS)
Sreekanth, Kandammathe Valiyaveedu; Alapan, Yunus; Elkabbash, Mohamed; Ilker, Efe; Hinczewski, Michael; Gurkan, Umut A.; de Luca, Antonio; Strangi, Giuseppe
2016-06-01
Optical sensor technology offers significant opportunities in the field of medical research and clinical diagnostics, particularly for the detection of small numbers of molecules in highly diluted solutions. Several methods have been developed for this purpose, including label-free plasmonic biosensors based on metamaterials. However, the detection of lower-molecular-weight (<500 Da) biomolecules in highly diluted solutions is still a challenging issue owing to their lower polarizability. In this context, we have developed a miniaturized plasmonic biosensor platform based on a hyperbolic metamaterial that can support highly confined bulk plasmon guided modes over a broad wavelength range from visible to near infrared. By exciting these modes using a grating-coupling technique, we achieved different extreme sensitivity modes with a maximum of 30,000 nm per refractive index unit (RIU) and a record figure of merit (FOM) of 590. We report the ability of the metamaterial platform to detect ultralow-molecular-weight (244 Da) biomolecules at picomolar concentrations using a standard affinity model streptavidin-biotin.
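The two headline figures above, sensitivity in nm per refractive index unit and the figure of merit, are linked by the resonance linewidth. A minimal sketch of both definitions (the linewidth in the example is illustrative, not a value taken from the paper):

```python
def refractometric_sensitivity(d_lambda_nm, d_n):
    """Bulk sensitivity S = resonance wavelength shift / refractive-index
    change, in nm per refractive index unit (nm/RIU)."""
    return d_lambda_nm / d_n

def figure_of_merit(sensitivity_nm_per_riu, fwhm_nm):
    """FOM = sensitivity / resonance linewidth (FWHM), in 1/RIU.
    A narrow resonance raises the FOM for the same sensitivity."""
    return sensitivity_nm_per_riu / fwhm_nm

# Illustrative numbers only: a 30 nm shift per 0.001 RIU gives 30,000 nm/RIU.
S = refractometric_sensitivity(d_lambda_nm=30.0, d_n=0.001)
```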
Impacts of urbanization on Indian summer monsoon rainfall extremes
NASA Astrophysics Data System (ADS)
Shastri, Hiteshri; Paul, Supantha; Ghosh, Subimal; Karmakar, Subhankar
2015-01-01
Urban areas have a different climatology with respect to their rural surroundings. Though urbanization is a worldwide phenomenon, it is especially prevalent in India, where urban areas have experienced an unprecedented rate of growth over the last 30 years. Here we take up an observational study to understand the influence of urbanization on the characteristics of precipitation (specifically extremes) in India. We identify 42 urban regions and compare their extreme rainfall characteristics with those of surrounding rural areas. We observe that, on an overall scale, the urban signatures on extreme rainfall are not prominently and consistently visible; rather, they are spatially nonuniform. Zonal analysis reveals significant impacts of urbanization on extreme rainfall in the central and western regions of India. An additional examination, to understand the influence of urbanization on heavy rainfall climatology, is carried out with station-level data using a statistical method, quantile regression. This is performed for the most populated city of India, Mumbai, paired with a nearby non-urban area, Alibaug, both having a similar geographic location. The derived extreme-rainfall regression quantiles reveal the sensitivity of extreme rainfall events to increased urbanization. Overall, the study identifies the climatological zones in India where increased urbanization affects regional rainfall patterns and extremes, with a detailed case study of Mumbai. It also calls attention to the need for further experimental investigation to identify the key climatological processes affected by increased urbanization in different regions of India.
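Quantile regression, the statistical method applied to the Mumbai and Alibaug station data, fits each conditional quantile by minimising the check (pinball) loss rather than the squared error. A minimal sketch of that loss (an illustrative implementation, not the authors' code):

```python
def pinball_loss(y_true, y_pred, tau):
    """Check (pinball) loss minimised by quantile regression at quantile tau.
    Over-predictions are weighted by (1 - tau), under-predictions by tau,
    so the minimiser of a constant prediction is the tau-quantile of y."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        e = y - q
        total += tau * e if e >= 0 else (tau - 1) * e
    return total / len(y_true)
```

For high quantiles (e.g. tau = 0.95, the regime relevant to rainfall extremes) under-prediction is penalised far more heavily than over-prediction, which is what steers the fit toward the upper tail.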
NASA Technical Reports Server (NTRS)
Greene, William H.
1990-01-01
A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type is an overall finite difference method in which the analysis is repeated for perturbed designs. The second type is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed-mode approach resulted in very poor approximations of the stress sensitivities: almost all of the original modes were required for an accurate sensitivity, and for small numbers of modes the accuracy was extremely poor. To overcome this, two semi-analytical techniques were developed. The first accounts for the change in eigenvectors through approximate eigenvector derivatives. The second applies the mode acceleration method of transient analysis to the sensitivity calculations. Both yield accurate stress sensitivities with a small number of modes and at much lower computational cost than recalculating the vibration modes for use in an overall finite difference method.
Overlapping illusions by transformation optics without any negative refraction material.
Sun, Fei; He, Sailing
2016-01-11
A novel method to achieve an overlapping illusion without any negative-refractive-index material is introduced with the help of the optic-null medium (ONM), designed by an extreme spatial stretching transformation. Unlike previous methods of achieving such an optical illusion by transformation optics (TO), our method can achieve power combination and reshape the radiation pattern at the same time. Unlike overlapping illusions based on negative-refractive-index materials, our method is not sensitive to material losses. Other advantages over existing methods are discussed, and numerical simulations are given to verify the performance of the proposed devices.
JPRS Report, Soviet Union, Political Affairs
1988-10-26
Christa Wolf's novel "Patterns of Childhood." This very sensitive and timely work reflects the psychology attending the rise of fascism. We also have...incriminating evidence and extenuating circumstances? Psychologically, it is extremely difficult. There are allegations, there are suspicions, there are...Zionism, which has for a long time been chipping away at the socialist foundation of our society, with its perfidious psychological methods." What does
NASA Astrophysics Data System (ADS)
Schlager, Hans; Arnold, Frank; Aufmhoff, Heinfried; Minikin, Andreas; Baumann, Robert; Simgen, Hardy; Lindemann, Stefan; Rauch, Ludwig; Kaether, Frank; Pirjola, Liisa; Schumann, Ulrich
2014-05-01
We report unique airborne measurements, at the tropopause, of the Fukushima radionuclide Xe-133, aerosol particles (size, shape, number concentration, volatility), and aerosol precursor gases (particularly SO2, HNO3, H2O). Our measurements and accompanying model simulations indicate homogeneous and cosmic-ray-induced aerosol formation at the tropopause. Using an extremely sensitive detection method, we managed to detect Fukushima Xe-133, an ideal transport tracer, at and even above the tropopause. To our knowledge, these airborne Xe-133 measurements are the only ones of their kind. Our investigations represent a striking example of how a pioneering measurement of a Fukushima radionuclide, employing an extremely sensitive method, can lead to new insights into an important atmospheric process. After the accidental Fukushima Xe-133 release (mostly during 11-15 March 2011), we conducted two aircraft missions over Central Europe, on 23 March and 11 April 2011. In the air masses encountered by the research aircraft on 23 March, we detected Fukushima Xe-133, by an extremely sensitive method, at and even above the tropopause. Besides increased concentrations of Xe-133, we also detected increased concentrations of the gases SO2, HNO3, and H2O. The Xe-133 data and accompanying transport model simulations indicate that a West-Pacific Warm Conveyor Belt (WCB) lifted East-Asian planetary boundary layer air to and even above the tropopause, followed by relatively fast quasi-horizontal advection to Europe. Along with Xe-133, anthropogenic SO2, NOx (mostly released from East-Asian ground-level combustion sources), and water vapour were also lifted by the WCB. After the lift, SO2 and NOx experienced efficient solar-UV-driven conversion to the important aerosol precursor gases H2SO4 and HNO3.
Our investigations indicate that increased concentrations of the gases SO2, HNO3, and H2O promoted homogeneous and cosmic-ray-induced aerosol formation at and even above the tropopause.
Patrick R. Kormos; Charlie Luce; Seth J. Wenger; Wouter R. Berghuijs
2016-01-01
Path analyses of historical streamflow data from the Pacific Northwest indicate that the precipitation amount has been the dominant control on the magnitude of low streamflow extremes compared to the air temperature-affected timing of snowmelt runoff. The relative sensitivities of low streamflow to precipitation and temperature changes have important...
Frouzan, Arash; Masoumi, Kambiz; Delirroyfard, Ali; Mazdaie, Behnaz; Bagherzadegan, Elnaz
2017-08-01
Long bone fractures are common injuries caused by trauma. Some studies have demonstrated that ultrasound has a high sensitivity and specificity in the diagnosis of upper and lower extremity long bone fractures. The aim of this study was to determine the accuracy of ultrasound compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures in trauma patients. This cross-sectional study assessed 100 patients admitted to the emergency department of Imam Khomeini Hospital, Ahvaz, Iran with trauma to the upper and lower extremities, from September 2014 through October 2015. In all patients, ultrasound was performed first, followed by standard plain radiography of the upper or lower limb. Data were analyzed with SPSS version 21 to determine specificity and sensitivity. The mean ages of patients with upper and lower limb trauma were 31.43±12.32 years and 29.63±5.89 years, respectively. In the upper extremity, radius fractures were the most frequent fracture type (27%). Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) of ultrasound compared with plain radiography in the diagnosis of upper extremity long bone fractures were 95.3%, 87.7%, 87.2%, and 96.2%, respectively, and the highest accuracy was observed in left arm fractures (100%). In the lower extremity, tibia and fibula fractures were the most frequent types (89.2%). Sensitivity, specificity, PPV, and NPV of ultrasound compared with plain radiography in the diagnosis of lower extremity long bone fractures were 98.6%, 83%, 65.4%, and 87.1%, respectively, and the highest accuracy was observed in men, younger patients, and femoral fractures. The results of this study showed that ultrasound has a high accuracy, compared with plain radiography, in the diagnosis of upper and lower extremity long bone fractures.
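The reported sensitivity, specificity, PPV, and NPV all derive from a standard 2x2 confusion matrix against the radiography reference standard. A short illustration in Python (the counts below are hypothetical, chosen only to show the arithmetic, not the study's raw data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # Standard definitions for a diagnostic test against a reference:
    sensitivity = tp / (tp + fn)   # fraction of true fractures detected
    specificity = tn / (tn + fp)   # fraction of non-fractures ruled out
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical 2x2 counts for illustration:
sens, spec, ppv, npv = diagnostic_metrics(tp=41, fp=6, fn=2, tn=51)
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of fractures in the studied cohort.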
NASA Astrophysics Data System (ADS)
Kulenkampff, Johannes; Zakhnini, Abdelhamid; Gründig, Marion; Lippmann-Pipke, Johanna
2016-08-01
Clay plays a prominent role as a barrier material in the geosphere. The small particle sizes cause extremely small pore sizes and induce low permeability and high sorption capacity. Transport of dissolved species by molecular diffusion, driven only by a concentration gradient, is less sensitive to the pore size. Heterogeneous structures on the centimetre scale could cause heterogeneous effects, like preferential transport zones, which are difficult to assess. Laboratory measurements with diffusion cells yield limited information on heterogeneity, and pore space imaging methods have to consider scale effects. We established positron emission tomography (PET), applying a high-resolution PET scanner, as a spatially resolved quantitative method for direct laboratory observation of the molecular diffusion of a PET tracer on the scale of 1-100 mm. Although PET is rather insensitive to bulk effects, quantification required significant improvements of the image reconstruction procedure with respect to Compton scatter and attenuation. The experiments were conducted with ²²Na and ¹²⁴I over periods of 100 and 25 days, respectively. From the images we derived reliable anisotropic diffusion coefficients and, in addition, identified indications of preferential transport zones. We thus demonstrated the unique potential of the PET imaging modality for geoscientific process monitoring under conditions where other methods fail, taking advantage of the extremely high detection sensitivity that is typical of radiotracer applications.
NASA Astrophysics Data System (ADS)
Bravo, Mikel; Angulo-Vinuesa, Xabier; Martin-Lopez, Sonia; Lopez-Amo, Manuel; Gonzalez-Herraez, Miguel
2013-05-01
High-Q resonators have been widely used for sensing purposes. High Q factors normally lead to sharp spectral peaks, which accordingly provide strong sensitivity in spectral interrogation methods. In this work we employ a low-Q ring resonator to develop a high-sensitivity displacement sensor with sub-micrometric resolution. We use the slow-light effects occurring close to the critical coupling regime to achieve high sensitivity in the device. By tuning the losses in the cavity close to critical coupling, extremely high group delay variations can be achieved, which in turn introduce strong enhancements of the absorption of the structure. We first validate the concept using an Optical Vector Analyzer (OVA) and then propose a simple functional scheme for low-cost interrogation of this kind of sensor.
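The sensitivity enhancement near critical coupling can be seen in the textbook expression for the on-resonance power transmission of an all-pass ring resonator, T = ((t - a)/(1 - t*a))^2, where t is the self-coupling coefficient and a the round-trip amplitude: transmission collapses as the tunable loss brings a toward t. A small numeric sketch (the parameter values are illustrative, not those of the actual device):

```python
def on_resonance_transmission(t, a):
    # Textbook all-pass ring resonator power transmission at resonance:
    #   t = self-coupling coefficient, a = round-trip amplitude factor.
    # Critical coupling (a == t) gives zero transmission.
    return ((t - a) / (1 - t * a)) ** 2

# Tuning the cavity loss toward critical coupling (a -> t) drives the
# transmitted power toward zero, so small loss changes give large signals:
far = on_resonance_transmission(0.90, 0.60)       # far from critical coupling
near = on_resonance_transmission(0.90, 0.89)      # close to critical coupling
critical = on_resonance_transmission(0.90, 0.90)  # exactly critical
```

This steep dependence of transmission (and group delay) on the loss parameter near a = t is what the displacement sensor exploits.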
2013-01-01
Background Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although large amounts of PPI data for different species have been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks; furthermore, the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. For dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of the results on the initial random weights and improves the prediction performance. Conclusions When applied to the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM runs faster than the PCA-SVM based method.
Consequently, the proposed approach can be considered a promising and powerful new tool for predicting PPIs, with excellent performance and less computation time. PMID:23815620
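The core pipeline above (random hidden layer, analytic least-squares output weights, majority vote over an ensemble) can be sketched in a few lines. A minimal illustration on toy data, with hypothetical parameter choices (hidden-layer size, regularization) not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_elm(X, y, n_hidden=50, reg=1e-2):
    # Extreme learning machine: random hidden layer, then a regularized
    # least-squares fit of the output weights (solved analytically).
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    T = np.eye(len(np.unique(y)))[y]   # one-hot targets
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict_elm(model, X):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

def ensemble_predict(models, X):
    # Majority vote over the ensemble removes the dependence of the
    # result on any single draw of random hidden weights.
    votes = np.stack([predict_elm(m, X) for m in models])
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)

# Toy two-class data standing in for PCA-reduced sequence features:
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
models = [train_elm(X, y) for _ in range(7)]
acc = (ensemble_predict(models, X) == y).mean()
```

Each base learner here uses a random tanh feature map rather than the paper's Gaussian kernel, but the voting and closed-form output fit follow the same idea.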
Newborn Jaundice Technologies: Unbound Bilirubin and Bilirubin Binding Capacity In Neonates
Amin, Sanjiv B.; Lamola, Angelo A.
2011-01-01
Neonatal jaundice (hyperbilirubinemia), extremely common in neonates, can be associated with neurotoxicity. A safe level of bilirubin has not been defined in either premature or term infants. Emerging evidence suggests that the level of unbound (or "free") bilirubin has a better sensitivity and specificity than total serum bilirubin for bilirubin-induced neurotoxicity. Although recent studies suggest the usefulness of free bilirubin measurements in managing high-risk neonates, including premature infants, there currently exists no widely available method to assay the serum free bilirubin concentration. To keep pace with the growing demand, in addition to the reevaluation of old methods, several promising new methods are being developed for sensitive, accurate, and rapid measurement of free bilirubin and bilirubin binding capacity. These innovative methods need to be validated before being adopted for clinical use. We provide an overview of some promising methods for free bilirubin and binding capacity measurements, with the goal of enhancing research in this area of active interest and apparent need. PMID:21641486
Metallic superhydrophobic surfaces via thermal sensitization
NASA Astrophysics Data System (ADS)
Vahabi, Hamed; Wang, Wei; Popat, Ketul C.; Kwon, Gibum; Holland, Troy B.; Kota, Arun K.
2017-06-01
Superhydrophobic surfaces (i.e., surfaces extremely repellent to water) allow water droplets to bead up and easily roll off the surface. While a few methods have been developed to fabricate metallic superhydrophobic surfaces, these methods typically involve expensive equipment, environmental hazards, or multi-step processes. In this work, we developed a universal, scalable, solvent-free, one-step methodology based on thermal sensitization to create the appropriate surface texture and fabricate metallic superhydrophobic surfaces. To demonstrate the feasibility of our methodology and elucidate the underlying mechanism, we fabricated superhydrophobic surfaces using ferritic (430) and austenitic (316) stainless steels (representative alloys), with roll-off angles as low as 4° and 7°, respectively. We envision that our approach will enable the fabrication of superhydrophobic metal alloys for a wide range of civilian and military applications.
Ultrananocrystalline Diamond Membranes for Detection of High-Mass Proteins
NASA Astrophysics Data System (ADS)
Kim, H.; Park, J.; Aksamija, Z.; Arbulu, M.; Blick, R. H.
2016-12-01
Mechanical resonators realized on the nanoscale by now offer applications in mass sensing of biomolecules with extraordinary sensitivity. The general idea is that perfect mechanical mass sensors should be of extremely small size to achieve zepto- or yoctogram sensitivity in weighing single molecules similar to a classical scale. However, the small effective size and long response time for weighing biomolecules with a cantilever restricts their usefulness as a high-throughput method. Commercial mass spectrometry (MS), on the other hand, such as electrospray ionization and matrix-assisted laser desorption and ionization (MALDI) time of flight (TOF) and their charge-amplifying detectors are the gold standards to which nanomechanical resonators have to live up to. These two methods rely on the ionization and acceleration of biomolecules and the following ion detection after a mass selection step, such as TOF. The principle we describe here for ion detection is based on the conversion of kinetic energy of the biomolecules into thermal excitation of chemical vapor deposition diamond nanomembranes via phonons followed by phonon-mediated detection via field emission of thermally emitted electrons. We fabricate ultrathin diamond membranes with large lateral dimensions for MALDI TOF MS of high-mass proteins. These diamond membranes are realized by straightforward etching methods based on semiconductor processing. With a minimal thickness of 100 nm and cross sections of up to 400 ×400 μ m2 , the membranes offer extreme aspect ratios. Ion detection is demonstrated in MALDI TOF analysis over a broad range from insulin to albumin. The resulting data in detection show much enhanced resolution as compared to existing detectors, which can offer better sensitivity and overall performance in resolving protein masses.
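In the linear TOF mass-selection step mentioned above, an ion of mass m and charge ze accelerated through a potential U traverses a drift length L in t = L * sqrt(m / (2zeU)), so heavier proteins arrive later and flight time scales with the square root of mass. A small sketch (the accelerating voltage and drift length are illustrative, not the instrument's actual settings):

```python
import math

E = 1.602176634e-19     # elementary charge, C
DA = 1.66053906660e-27  # unified atomic mass unit (dalton), kg

def flight_time(mass_da, charge=1, voltage=20e3, length=1.0):
    # Linear TOF: acceleration through potential U gives kinetic energy
    # z*e*U, so the drift-tube flight time is t = L * sqrt(m / (2*z*e*U)).
    m = mass_da * DA
    return length * math.sqrt(m / (2 * charge * E * voltage))

t_insulin = flight_time(5808)    # insulin, ~5.8 kDa
t_albumin = flight_time(66500)   # albumin, ~66.5 kDa
```

The square-root mass scaling means a detector must resolve microsecond-scale arrival-time differences across the insulin-to-albumin range.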
NASA Astrophysics Data System (ADS)
Kao, S. C.; Naz, B. S.; Gangrade, S.; Ashfaq, M.; Rastogi, D.
2016-12-01
The magnitude and frequency of hydroclimate extremes are projected to increase in the conterminous United States (CONUS), with significant implications for future water resource planning and flood risk management. Nevertheless, apart from the change in the natural environment, the choice of model spatial resolution can also artificially influence the features of simulated extremes. To better understand how the spatial resolution of meteorological forcings may affect hydroclimate projections, we test the runoff sensitivity using the Variable Infiltration Capacity (VIC) model, calibrated for each CONUS 8-digit hydrologic unit (HUC8) at 1/24° (~4 km) grid resolution. The 1980-2012 gridded Daymet and PRISM meteorological observations are used to conduct the 1/24° resolution control simulation. Comparative simulations are achieved by smoothing the 1/24° forcing to 1/12° and 1/8° resolutions, which are then used to drive the VIC model for the CONUS. In addition, we also test how the simulated high and low runoff conditions respond to changes in precipitation (±10%) and temperature (+1°C). The results are further analyzed for various types of hydroclimate extremes across different watersheds in the CONUS. This work helps us understand the sensitivity of simulated runoff to different spatial resolutions of climate forcings, as well as its sensitivity to different watershed sizes and characteristics of extreme events under future climate conditions.
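Smoothing a fine-resolution forcing grid onto coarser grids, as in the comparative simulations above, can be done by simple block averaging (a minimal sketch; the study's actual regridding procedure may differ):

```python
import numpy as np

def coarsen(field, factor):
    # Block-average a fine grid onto a coarser one: for a 1/24-degree
    # grid, factor=2 gives 1/12 degree and factor=3 gives 1/8 degree.
    ny, nx = field.shape
    assert ny % factor == 0 and nx % factor == 0
    return field.reshape(ny // factor, factor,
                         nx // factor, factor).mean(axis=(1, 3))

fine = np.arange(36, dtype=float).reshape(6, 6)
coarse = coarsen(fine, 3)   # 6x6 grid -> 2x2 grid
```

Block averaging conserves the domain mean of the forcing, so differences in the simulated extremes come from the lost spatial variability rather than any change in the total input.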
Water permeation through organic materials
NASA Astrophysics Data System (ADS)
Doughty, D. H.; West, I. A.
1981-09-01
Atmospheric moisture is routinely excluded from weapon systems by the use of elastomer seals at assembly joints and electrical feedthroughs, while internal moisture is minimized by relying on desiccants and on pre-dried components assembled in special low-humidity assembly rooms. Published values of the water permeation coefficient for ethylene-propylene rubber and other o-ring materials are subject to some variability, and the effects of aging on water permeability are unknown. We have thus devised a new and extremely sensitive method for measuring moisture permeation coefficients in organic materials. This method uses dilute tritiated water as a tracer and is approximately two orders of magnitude more sensitive than other methods. We are therefore able to make measurements on materials under STS temperature and humidity conditions. Rate data showing the approach to equilibrium and water permeability values for a variety of elastomers are presented. The test apparatus is also described.
Assessing the impact of climate and land use changes on extreme floods in a large tropical catchment
NASA Astrophysics Data System (ADS)
Jothityangkoon, Chatchai; Hirunteeyakul, Chow; Boonrawd, Kowit; Sivapalan, Murugesu
2013-05-01
In the wake of the recent catastrophic floods in Thailand, there is considerable concern about the safety of large dams designed and built some 50 years ago. In this paper a distributed rainfall-runoff model appropriate for extreme flood conditions is used to generate revised estimates of the Probable Maximum Flood (PMF) for the Upper Ping River catchment (area 26,386 km2) in northern Thailand, upstream of location of the large Bhumipol Dam. The model has two components: a continuous water balance model based on a configuration of parameters estimated from climate, soil and vegetation data and a distributed flood routing model based on non-linear storage-discharge relationships of the river network under extreme flood conditions. The model is implemented under several alternative scenarios regarding the Probable Maximum Precipitation (PMP) estimates and is also used to estimate the potential effects of both climate change and land use and land cover changes on the extreme floods. These new estimates are compared against estimates using other hydrological models, including the application of the original prediction methods under current conditions. Model simulations and sensitivity analyses indicate that a reasonable Probable Maximum Flood (PMF) at the dam site is 6311 m3/s, which is only slightly higher than the original design flood of 6000 m3/s. As part of an uncertainty assessment, the estimated PMF is sensitive to the design method, input PMP, land use changes and the floodplain inundation effect. The increase of PMP depth by 5% can cause a 7.5% increase in PMF. Deforestation by 10%, 20%, 30% can result in PMF increases of 3.1%, 6.2%, 9.2%, respectively. 
The modest increase of the estimated PMF (to just 6311 m3/s) in spite of these changes is due to the factoring of the hydraulic effects of trees and buildings on the floodplain as the flood situation changes from normal floods to extreme floods, when over-bank flows may be the dominant flooding process, leading to a substantial reduction in the PMF estimates.
Engineered nanoconstructs for the multiplexed and sensitive detection of high-risk pathogens
NASA Astrophysics Data System (ADS)
Seo, Youngmin; Kim, Ji-Eun; Jeong, Yoon; Lee, Kwan Hong; Hwang, Jangsun; Hong, Jongwook; Park, Hansoo; Choi, Jonghoon
2016-01-01
Many countries categorize the causative agents of severe infectious diseases as high-risk pathogens. Given their extreme infectivity and potential to be used as biological weapons, a rapid and sensitive method for detection of high-risk pathogens (e.g., Bacillus anthracis, Francisella tularensis, Yersinia pestis, and Vaccinia virus) is highly desirable. Here, we report the construction of a novel detection platform comprising two units: (1) magnetic beads separately conjugated with multiple capturing antibodies against four different high-risk pathogens for simple and rapid isolation, and (2) genetically engineered apoferritin nanoparticles conjugated with multiple quantum dots and detection antibodies against four different high-risk pathogens for signal amplification. For each high-risk pathogen, we demonstrated at least 10-fold increase in sensitivity compared to traditional lateral flow devices that utilize enzyme-based detection methods. Multiplexed detection of high-risk pathogens in a sample was also successful by using the nanoconstructs harboring the dye molecules with fluorescence at different wavelengths. We ultimately envision the use of this novel nanoprobe detection platform in future applications that require highly sensitive on-site detection of high-risk pathogens.
NASA Astrophysics Data System (ADS)
Sippel, Sebastian; Zscheischler, Jakob; Heimann, Martin; Lange, Holger; Mahecha, Miguel D.; van Oldenborgh, Geert Jan; Otto, Friederike E. L.; Reichstein, Markus
2017-01-01
Daily precipitation extremes and annual totals have increased in large parts of the global land area over the past decades. These observations are consistent with theoretical considerations of a warming climate. However, until recently these trends have not been shown to consistently affect dry regions over land. A recent study, published by Donat et al. (2016), identified significant increases in annual-maximum daily extreme precipitation (Rx1d) and annual precipitation totals (PRCPTOT) in dry regions. Here, we revisit the applied methods and explore the sensitivity of changes in precipitation extremes and annual totals to alternative choices of defining a dry region (i.e. in terms of aridity as opposed to precipitation characteristics alone). We find that (a) statistical artifacts introduced by data pre-processing based on a time-invariant reference period lead to an overestimation of the reported trends by up to 40 %, and that (b) the reported trends of globally aggregated extremes and annual totals are highly sensitive to the definition of a dry region of the globe. For example, using the same observational dataset, accounting for the statistical artifacts, and based on different aridity-based dryness definitions, we find a reduction in the positive trend of Rx1d from the originally reported +1.6 % decade⁻¹ to +0.2 to +0.9 % decade⁻¹ (period changes for 1981-2010 averages relative to 1951-1980 are reduced to -1.32 to +0.97 % as opposed to +4.85 % in the original study). If we include additional but less homogenized data to cover larger regions, the global trend increases slightly (Rx1d: +0.4 to +1.1 % decade⁻¹), and in this case we can indeed confirm (partly) significant increases in Rx1d. However, these globally aggregated estimates remain uncertain, as considerable gaps in long-term observations in the Earth's arid and semi-arid regions remain. In summary, adequate data pre-processing and accounting for uncertainties regarding the definition of dryness are crucial to the quantification of spatially aggregated trends in precipitation extremes in the world's dry regions. In view of the high relevance of the question to many potentially affected stakeholders, we call for a well-reflected choice of specific data processing methods and the inclusion of alternative dryness definitions to guarantee that communicated results related to climate change are robust.
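Trend figures like the Rx1d values above are ordinary least-squares slopes expressed as a percentage of the series mean per decade. A minimal sketch of that computation on synthetic data (illustrative values only, not the study's observations):

```python
import numpy as np

def trend_percent_per_decade(years, values):
    # OLS slope of values against years, expressed as percent of the
    # series mean per decade (the unit used for the Rx1d trends above).
    years = np.asarray(years, dtype=float)
    values = np.asarray(values, dtype=float)
    slope = np.polyfit(years, values, 1)[0]   # units: value per year
    return 100.0 * slope * 10.0 / values.mean()

# Synthetic series rising by exactly 1 unit per year from a base of 100:
yrs = np.arange(1951, 2011)
vals = 100.0 + 1.0 * (yrs - yrs.min())
trend = trend_percent_per_decade(yrs, vals)
```

Because the slope is normalized by the series mean, the choice of which stations and years enter the aggregate (the dryness definition and reference period at issue in the study) directly changes the reported percentage.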
Zhang, Yuyang; Xing, Zhen; She, Dejun; Huang, Nan; Cao, Dairong
2018-01-01
Purpose The aim of this study was to prospectively evaluate the repeatability of non–contrast-enhanced lower-extremity magnetic resonance angiography using flow-spoiled fresh blood imaging (FS-FBI). Methods Forty-three healthy volunteers and 15 patients with lower-extremity arterial stenosis were recruited for this study and examined by FS-FBI. Digital subtraction angiography was performed within a week after FS-FBI in the patient group. Repeatability was assessed with the following parameters: grading of image quality, diameter and area of major arteries, and grading of stenosis of lower-extremity arteries. Two experienced radiologists, blinded to patient data, independently evaluated the FS-FBI and digital subtraction angiography images. Intraclass correlation coefficients (ICCs), sensitivity, and specificity were used for statistical analysis. Results The image quality grading of most data was satisfactory. The ICCs for the first and second measures were 0.792 and 0.884 in the femoral segment and 0.803 and 0.796 in the tibiofibular segment for the healthy volunteer group, and 0.873 and 1.000 in the femoral segment and 0.737 and 0.737 in the tibiofibular segment for the patient group. Intraobserver and interobserver agreements on arterial diameter and area were excellent, with ICCs mostly greater than 0.75 in the volunteer group. For stenosis grading, intra- and interobserver ICCs ranged from 0.784 to 0.862 and from 0.778 to 0.854, respectively. Flow-spoiled fresh blood imaging yielded a mean sensitivity and specificity for detecting arterial stenosis or occlusion of at least 90% and 80% for the femoral segment and 86.7% and 93.3% for the tibiofibular segment. Conclusions Lower-extremity angiography with FS-FBI is a reliable and reproducible screening tool for lower-extremity atherosclerotic disease, especially for patients with impaired renal function. PMID:28787351
Cabrera, Carlos; Chang, Lei; Stone, Mars; Busch, Michael; Wilson, David H
2015-11-01
Nucleic acid testing (NAT) has become the standard for high sensitivity in detecting low levels of virus. However, adoption of NAT can be cost prohibitive in low-resource settings where access to extreme sensitivity could be clinically advantageous for early detection of infection. We report development and preliminary validation of a simple, low-cost, fully automated digital p24 antigen immunoassay with the sensitivity of quantitative NAT viral load (NAT-VL) methods for detection of acute HIV infection. We developed an investigational 69-min immunoassay for p24 capsid protein for use on a novel digital analyzer on the basis of single-molecule-array technology. We evaluated the assay for sensitivity by dilution of standardized preparations of p24, cultured HIV, and preseroconversion samples. We characterized analytical performance and concordance with 2 NAT-VL methods and 2 contemporary p24 Ag/Ab combination immunoassays with dilutions of viral isolates and samples from the earliest stages of HIV infection. Analytical sensitivity was 0.0025 ng/L p24, equivalent to 60 HIV RNA copies/mL. The limit of quantification was 0.0076 ng/L, and imprecision across 10 runs was <10% for samples as low as 0.09 ng/L. Clinical specificity was 95.1%. Sensitivity concordance vs NAT-VL on dilutions of preseroconversion samples and Group M viral isolates was 100%. The digital immunoassay exhibited >4000-fold greater sensitivity than contemporary immunoassays for p24 and sensitivity equivalent to that of NAT methods for early detection of HIV. The data indicate that NAT-level sensitivity for acute HIV infection is possible with a simple, low-cost digital immunoassay. © 2015 American Association for Clinical Chemistry.
Cost-sensitive AdaBoost algorithm for ordinal regression based on extreme learning machine.
Riccardi, Annalisa; Fernández-Navarro, Francisco; Carloni, Sante
2014-10-01
In this paper, the well-known stagewise additive modeling using a multiclass exponential loss (SAMME) boosting algorithm is extended, using a cost-sensitive approach, to address problems where there exists a natural order in the targets. The proposed ensemble model uses an extreme learning machine (ELM) model as the base classifier (with the Gaussian kernel and an additional regularization parameter). The closed form of the derived weighted least squares problem is provided, and it is employed to estimate analytically the parameters connecting the hidden layer to the output layer at each iteration of the boosting algorithm. Compared to state-of-the-art boosting algorithms, in particular those using ELM as the base classifier, the suggested technique does not require the generation of a new training dataset at each iteration. The weighted least squares formulation of the problem is presented as an unbiased alternative to the existing ELM boosting techniques. Moreover, the addition of a cost model for weighting the patterns according to the order of the targets further enables the classifier to tackle ordinal regression problems. The proposed method has been validated in an experimental study comparing it with existing ensemble methods and ELM techniques for ordinal regression, showing competitive results.
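The boosting step being extended can be illustrated with the standard SAMME update, here with an optional per-sample cost vector as a simplified stand-in for the paper's ordinal cost model (the closed-form weighted ELM fit itself is omitted):

```python
import numpy as np

def samme_update(weights, y_true, y_pred, n_classes, cost=None):
    # One SAMME boosting round: estimator weight alpha from the weighted
    # classification error, then up-weighting of misclassified samples.
    # The optional `cost` vector scales each sample's penalty, e.g. by
    # how far its prediction lies from the true ordinal target.
    miss = (y_true != y_pred).astype(float)
    if cost is not None:
        miss = miss * cost
    err = np.sum(weights * miss) / np.sum(weights)
    alpha = np.log((1 - err) / err) + np.log(n_classes - 1)
    new_w = weights * np.exp(alpha * miss)
    return alpha, new_w / new_w.sum()

w = np.full(4, 0.25)
y = np.array([0, 1, 2, 1])
p = np.array([0, 1, 2, 2])   # one misclassified sample
alpha, w2 = samme_update(w, y, p, n_classes=3)
```

The log(K - 1) term is what distinguishes multiclass SAMME from binary AdaBoost; with an ordinal cost vector, samples predicted far from their true rank are penalized more heavily in the next round.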
DNA-modified electrodes fabricated using copper-free click chemistry for enhanced protein detection.
Furst, Ariel L; Hill, Michael G; Barton, Jacqueline K
2013-12-31
A method of DNA monolayer formation has been developed using copper-free click chemistry that yields enhanced surface homogeneity and enables variation in the amount of DNA assembled; extremely low-density DNA monolayers, with as little as 5% of the monolayer being DNA, have been formed. These DNA-modified electrodes (DMEs) were characterized visually, with AFM, and electrochemically, and were found to facilitate DNA-mediated reduction of a distally bound redox probe. These low-density monolayers were found to be more homogeneous than traditional thiol-modified DNA monolayers, with greater helix accessibility through an increased surface area-to-volume ratio. Protein binding efficiency of the transcriptional activator TATA-binding protein (TBP) was also investigated on these surfaces and compared to that on DNA monolayers formed with standard thiol-modified DNA. Our low-density monolayers were found to be extremely sensitive to TBP binding, with a signal decrease in excess of 75% for 150 nM protein. This protein was detectable at 4 nM, on the order of its dissociation constant, with our low-density monolayers. The improved DNA helix accessibility and sensitivity of our low-density DNA monolayers to TBP binding reflects the general utility of this method of DNA monolayer formation for DNA-based electrochemical sensor development.
Stadler, Julia; Eder, Johanna; Pratscher, Barbara; Brandt, Sabine; Schneller, Doris; Müllegger, Robert; Vogl, Claus; Trautinger, Franz; Brem, Gottfried; Burgstaller, Joerg P.
2015-01-01
Cell-free circulating tumor DNA in the plasma of cancer patients has become a common point of interest as an indicator of therapy options and treatment response in clinical cancer research. Patient- and tumor-specific single nucleotide variants that accurately distinguish tumor DNA from wild-type DNA are especially promising targets. The reliable detection and quantification of these single-base DNA variants is technically challenging. Currently, a variety of techniques is applied, with no apparent "gold standard". Here we present a novel qPCR protocol that meets the conditions of extreme sensitivity and specificity required for the detection and quantification of tumor DNA. By the consecutive application of two polymerases, one of them designed for extreme base-specificity, the method reaches unprecedented sensitivity and specificity. Three qPCR assays were tested with spike-in experiments, specific for the point mutations BRAF V600E, PTEN T167A, and NRAS Q61L of melanoma cell lines. It was possible to detect down to one copy of tumor DNA per reaction (Poisson distribution) against a background of up to 200,000 wild-type DNA copies. To prove its clinical applicability, the method was successfully tested on a small cohort of BRAF V600E-positive melanoma patients. PMID:26562020
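At such low inputs the number of template copies per reaction follows a Poisson distribution, so the fraction of reactions expected to contain at least one tumor DNA copy is 1 - e^(-λ). A one-line sketch of that relationship:

```python
import math

def detection_probability(mean_copies):
    # With template molecules Poisson-distributed across reactions,
    # P(at least one copy in a reaction) = 1 - exp(-lambda).
    return 1.0 - math.exp(-mean_copies)

# At an average of one copy per reaction, only ~63% of reactions are
# expected to be positive, so replicate reactions are needed:
p1 = detection_probability(1.0)
```

This is why single-copy claims are usually qualified by the Poisson distribution: even a perfectly sensitive assay misses the ~37% of reactions that receive no template molecule.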
Abstraction and model evaluation in category learning.
Vanpaemel, Wolf; Storms, Gert
2010-05-01
Thirty previously published data sets, from seminal category learning tasks, are reanalyzed using the varying abstraction model (VAM). Unlike a prototype-versus-exemplar analysis, which focuses on extreme levels of abstraction only, a VAM analysis also considers the possibility of partial abstraction. Whereas most data sets support no abstraction when only the extreme possibilities are considered, we show that evidence for abstraction can be provided using the broader view on abstraction provided by the VAM. The present results generalize earlier demonstrations of partial abstraction (Vanpaemel & Storms, 2008), in which only a small number of data sets was analyzed. Following the dominant modus operandi in category learning research, Vanpaemel and Storms evaluated the models on their best fit, a practice known to ignore the complexity of the models under consideration. In the present study, in contrast, model evaluation not only relies on the maximal likelihood, but also on the marginal likelihood, which is sensitive to model complexity. Finally, using a large recovery study, it is demonstrated that, across the 30 data sets, complexity differences between the models in the VAM family are small. This indicates that a (computationally challenging) complexity-sensitive model evaluation method is uncalled for, and that the use of a (computationally straightforward) complexity-insensitive model evaluation method is justified.
Gong, Y L; Yang, Z C; Yin, S P; Liu, M X; Zhang, C; Luo, X Q; Peng, Y Z
2016-09-20
To analyze the distribution and drug resistance of pathogens isolated from severely burned patients with bloodstream infection, so as to provide a reference for the clinical treatment of these patients. Blood samples of 162 severely burned patients (including 120 patients with extremely severe burn) with bloodstream infection admitted to our burn ICU from January 2011 to December 2014 were collected. Pathogens were cultured in a fully automated blood culture system, and API bacteria identification panels were used to identify the pathogens. The Kirby-Bauer paper disk diffusion method was used to detect the drug resistance of major Gram-negative and -positive bacteria to 37 antibiotics, including ampicillin, piperacillin, and teicoplanin (resistance to vancomycin was detected by E test), and the drug resistance of fungi to 5 antibiotics, including voriconazole and amphotericin B. The modified Hodge test was used to further identify imipenem- and meropenem-resistant Klebsiella pneumoniae. The D test was used to detect erythromycin-induced clindamycin resistance in Staphylococcus aureus. The pathogen distribution and drug resistance rates were analyzed with WHONET 5.5. The mortality rates and infecting pathogens of patients with extremely severe burn and patients with non-extremely severe burn were recorded. Data were processed with the Wilcoxon rank sum test. (1) A total of 1,658 blood samples were collected during the four years, and 339 (20.4%) strains of pathogens were isolated. The isolation rates of Gram-negative bacteria, Gram-positive bacteria, and fungi were 68.4% (232/339), 24.5% (83/339), and 7.1% (24/339), respectively. The three most frequently isolated pathogens were, in descending order, Acinetobacter baumannii, Staphylococcus aureus, and Pseudomonas aeruginosa. (2) Apart from low drug resistance rates to polymyxin B and minocycline, the drug resistance rates of Acinetobacter baumannii to the other antibiotics were relatively high (81.0%-100.0%).
Pseudomonas aeruginosa was sensitive to polymyxin B but highly resistant to the other antibiotics (57.7%-100.0%). Enterobacter cloacae was sensitive to imipenem and meropenem, while its drug resistance rates to ciprofloxacin, levofloxacin, cefoperazone/sulbactam, cefepime, and piperacillin/tazobactam were 25.0%-49.0%, and those to the other antibiotics were 66.7%-100.0%. Drug resistance rates of Klebsiella pneumoniae to cefoperazone/sulbactam, imipenem, and meropenem were low (5.9%-15.6%; two imipenem- and meropenem-resistant strains were identified by the modified Hodge test), while its drug resistance rates to amoxicillin/clavulanic acid, piperacillin/tazobactam, cefepime, cefoxitin, amikacin, and levofloxacin were 35.3%-47.1%, and those to the other antibiotics were 50.0%-100.0%. (3) Drug resistance rates of methicillin-resistant Staphylococcus aureus (MRSA) to most of the antibiotics were higher than those of methicillin-sensitive Staphylococcus aureus (MSSA). MRSA was sensitive to linezolid, vancomycin, and teicoplanin, while its drug resistance rates to compound sulfamethoxazole, clindamycin, minocycline, and erythromycin were 5.3%-31.6%, and those to the other antibiotics were 81.6%-100.0%. Except for complete resistance to penicillin G and tetracycline, MSSA was sensitive to the other antibiotics. Fourteen Staphylococcus aureus strains showed erythromycin-induced clindamycin resistance. Enterococcus was sensitive to vancomycin and teicoplanin; its drug resistance rates to linezolid, chloramphenicol, nitrofurantoin, and high-level gentamicin were low (10.0%-30.0%), while those to ciprofloxacin, erythromycin, minocycline, and ampicillin were high (60.0%-80.0%). Enterococcus was fully resistant to rifampicin. (4) Fungi were sensitive to amphotericin B, and the drug resistance rates of fungi to voriconazole, fluconazole, itraconazole, and ketoconazole were 7.2%-12.5%.
(5) The mortality of patients with extremely severe burn was higher than that of patients with non-extremely severe burn. The variety of infecting pathogens in patients with extremely severe burn significantly outnumbered that in patients with non-extremely severe burn (Z=-2.985, P=0.005). A wide variety of pathogens is found in severely burned patients with bloodstream infection, with Acinetobacter baumannii, Staphylococcus aureus, and Pseudomonas aeruginosa as the main pathogens, and the drug resistance situation is serious. The infecting pathogens in patients with extremely severe burn are more varied, and the mortality of these patients is higher than that of patients with non-extremely severe burn.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aristov, Andrey I.; Kabashin, Andrei V., E-mail: kabashin@lp3.univ-mrs.fr; Zywietz, Urs
2014-02-17
By using methods of laser-induced transfer combined with nanoparticle lithography, we design and fabricate large-area gold nanoparticle-based metamaterial arrays exhibiting extreme Heaviside-like phase jumps in reflected light due to a strong diffractive coupling of localized plasmons. When employed in sensing schemes, these phase singularities provide a sensitivity of 5 × 10⁴ deg. of phase shift per refractive index unit change, which is comparable with the best values reported for plasmonic biosensors. The implementation of sensor platforms on the basis of such metamaterial arrays promises a drastic improvement in the sensitivity and cost efficiency of plasmonic biosensing devices.
A Novel Approach for Lie Detection Based on F-Score and Extreme Learning Machine
Gao, Junfeng; Wang, Zhao; Yang, Yong; Zhang, Wenjia; Tao, Chunyi; Guan, Jinan; Rao, Nini
2013-01-01
A new machine learning method referred to as F-score_ELM was proposed to classify lying and truth-telling using electroencephalogram (EEG) signals from 28 guilty and innocent subjects. Thirty-one features were extracted from the probe responses of these subjects. Then, a recently developed classifier called the extreme learning machine (ELM) was combined with the F-score, a simple but effective feature selection method, to jointly optimize the number of hidden nodes of the ELM and the feature subset by a grid-searching training procedure. The method was compared to two classification models combining principal component analysis with back-propagation network and support vector machine classifiers. We thoroughly assessed the performance of these classification models, including the training and testing time, the sensitivity and specificity on the training and testing sets, and the network size. The experimental results showed that the number of hidden nodes can be effectively optimized by the proposed method. Also, F-score_ELM obtained the best classification accuracy and required the shortest training and testing time. PMID:23755136
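The two building blocks named above, F-score feature ranking and the ELM, are both compact enough to sketch. The following is a minimal illustration on synthetic data; the feature count matches the abstract's 31 features, but the sample size, hidden-node count, and subset size are assumptions for demonstration, not the paper's settings:

```python
import numpy as np

def f_score(X, y):
    """Chen & Lin style F-score per feature for a two-class problem (y in {0,1})."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)
    return num / den

class ELM:
    """Single-hidden-layer extreme learning machine: random input weights,
    output weights solved in closed form by least squares."""
    def __init__(self, n_hidden, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)
    def _h(self, X):
        return np.tanh(X @ self.W + self.b)
    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        self.beta = np.linalg.pinv(self._h(X)) @ (2 * y - 1)  # targets in {-1,+1}
        return self
    def predict(self, X):
        return (self._h(X) @ self.beta > 0).astype(int)

# Toy data standing in for the 31 EEG features (synthetic, for illustration):
# only the first 5 features carry signal.
rng = np.random.default_rng(1)
X = rng.standard_normal((120, 31))
w = np.zeros(31); w[:5] = 2.0
y = ((X @ w + 0.5 * rng.standard_normal(120)) > 0).astype(int)

ranked = np.argsort(f_score(X, y))[::-1]  # best features first
top = ranked[:8]                          # keep a small subset
acc = (ELM(40).fit(X[:, top], y).predict(X[:, top]) == y).mean()
print(f"training accuracy with top-8 F-score features: {acc:.2f}")
```

In the paper's full procedure, the subset size and the number of hidden nodes would both be chosen by a grid search with cross-validation rather than fixed as here.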
Wang, Yinan; Chan, Wan
2014-06-25
Nephrotoxic and carcinogenic aristolochic acids (AAs) are naturally occurring nitrophenanthrene carboxylic acids in the herbal genus Aristolochia. The misuse of AA-containing herbs in preparing slimming drugs caused hundreds of cases of kidney disease in Belgian women on a slimming regimen in the early 1990s. Accumulating evidence also suggests that prolonged dietary intake of AA-contaminated food is one of the major causes of the Balkan endemic nephropathy first observed in the late 1950s. Therefore, analytical methods of high sensitivity are extremely important for safeguarding against human exposure to AA-containing herbal medicines, herbal remedies, and food composites. In this paper, we describe the development of a new high-performance liquid chromatography with fluorescence detection (HPLC-FLD) method for the sensitive determination of AAs. The method makes use of a novel cysteine-induced denitration reaction that "turns on" the fluorescence of AAs for fluorometric detection. Our results showed that the combination of cysteine-induced denitration and HPLC-FLD analysis allows for sensitive quantification of AA-I and AA-II at detection limits of 27.1 and 25.4 ng/g, respectively. The method was validated and has been successfully applied in quantifying AAs in Chinese herbal medicines.
Impact of extreme weather events and climate change for health and social care systems.
Curtis, Sarah; Fair, Alistair; Wistow, Jonathan; Val, Dimitri V; Oven, Katie
2017-12-05
This review, commissioned by the Research Councils UK Living With Environmental Change (LWEC) programme, concerns research on the impacts on health and social care systems in the United Kingdom of extreme weather events, under conditions of climate change. Extreme weather events considered include heatwaves, coldwaves and flooding. Using a structured review method, we consider evidence regarding the currently observed and anticipated future impacts of extreme weather on health and social care systems and the potential of preparedness and adaptation measures that may enhance resilience. We highlight a number of general conclusions which are likely to be of international relevance, although the review focussed on the situation in the UK. Extreme weather events impact the operation of health services through the effects on built, social and institutional infrastructures which support health and health care, and also because of changes in service demand as extreme weather impacts on human health. Strategic planning for extreme weather and impacts on the care system should be sensitive to within country variations. Adaptation will require changes to built infrastructure systems (including transport and utilities as well as individual care facilities) and also to institutional and social infrastructure supporting the health care system. Care sector organisations, communities and individuals need to adapt their practices to improve resilience of health and health care to extreme weather. Preparedness and emergency response strategies call for action extending beyond the emergency response services, to include health and social care providers more generally.
MEMS cantilever sensor for THz photoacoustic chemical sensing and spectroscopy
NASA Astrophysics Data System (ADS)
Glauvitz, Nathan E.
Sensitive Microelectromechanical System (MEMS) cantilever designs were modeled, fabricated, and tested to measure the photoacoustic (PA) response of gases to terahertz (THz) radiation. Surface and bulk micromachining technologies were employed to create extremely sensitive devices that could detect very small changes in pressure. Fabricated devices were then tested in a custom-made THz PA vacuum test chamber, where the cantilever deflections caused by the photoacoustic effect were measured using laser interferometer and iris beam-clipping methods. The sensitive cantilever designs achieved a normalized noise-equivalent absorption coefficient of 2.83 × 10⁻¹⁰ cm⁻¹ W Hz⁻¹/² using a 25 µW radiation source power and a 1 s sampling time. Traditional gas-phase molecular spectroscopy absorption cells are large and bulky. The outcome of this research was a photoacoustic detection method that is virtually independent of the absorption path length, which allows the chamber dimensions to be greatly reduced, leading to the possibility of a compact, portable chemical detection and spectroscopy system.
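The reported figure of merit can be turned back into a minimum detectable absorption coefficient. A quick sketch, assuming the standard photoacoustic relation alpha_min = NNEA × √Δf / P and an effective detection bandwidth of 1 Hz for the 1 s sampling time (the bandwidth value is an assumption, not stated in the abstract):

```python
# Back out the minimum detectable absorption coefficient implied by the
# reported normalized noise-equivalent absorption (NNEA).
NNEA = 2.83e-10   # cm^-1 W Hz^-1/2 (reported)
P = 25e-6         # W, radiation source power (reported)
bandwidth = 1.0   # Hz, assumed effective bandwidth for 1 s sampling

alpha_min = NNEA * bandwidth ** 0.5 / P
print(f"alpha_min ~ {alpha_min:.2e} cm^-1")  # ~ 1.13e-05 cm^-1
```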
NASA Astrophysics Data System (ADS)
Luo, Qingying; Liu, Lin; Yang, Cai; Yuan, Jing; Feng, Hongtao; Chen, Yan; Zhao, Peng; Yu, Zhiqiang; Jin, Zongwen
2018-03-01
MicroRNAs (miRNAs) are single-stranded endogenous molecules composed of only 18-24 nucleotides which regulate gene expression by controlling the translation of messenger RNAs. Conventional methods based on enzyme-assisted nucleic acid amplification techniques have many problems, such as easy contamination, high cost, susceptibility to false amplification, and a tendency toward sequence mismatches. Here we report a rapid, ratiometric, enzyme-free, sensitive, and highly selective single-step miRNA detection method using three-way-junction-assembled (or self-assembled) FRET probes. The developed strategy can be operated over a linear range from subnanomolar to hundred-nanomolar concentrations of miRNAs. In comparison with traditional approaches, our method showed high sensitivity for miRNA detection and extreme selectivity for the efficient discrimination of single-base mismatches. The results reveal that the strategy paves a new avenue for the design of novel, highly specific probes applicable in diagnostics and potentially in microscopic imaging of miRNAs in real biological environments.
A NEW METHOD OF SWEAT TESTING: THE CF QUANTUM® SWEAT TEST
Rock, Michael J.; Makholm, Linda; Eickhoff, Jens
2015-01-01
Background: Conventional methods of sweat testing are time consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and evaluates the diagnostic accuracy and analytic validity of the CFQT. Methods: Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficients of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. Results: The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. In one center in this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher with the CFQT method (16.5%) than with conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. Conclusions: The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. PMID:24862724
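Sensitivity, specificity, and the binomial confidence intervals quoted above are straightforward to compute from a 2×2 table. A sketch using the Wilson score interval and hypothetical counts chosen to be consistent with the reported 100%/96% figures (the study's actual counts are not given in the abstract):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Illustrative confusion-matrix counts (hypothetical).
tp, fn = 60, 0   # CF patients: CFQT positive / negative
tn, fp = 77, 3   # non-CF patients: CFQT negative / positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"sensitivity = {sensitivity:.3f}, 95% CI = {wilson_ci(tp, tp + fn)}")
print(f"specificity = {specificity:.3f}, 95% CI = {wilson_ci(tn, tn + fp)}")
```

The Wilson interval is one common choice; exact (Clopper-Pearson) intervals, as often used in clinical papers, give slightly different bounds.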
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Qing; Leung, Lai-Yung R.; Rauscher, Sara
This study investigates the resolution dependency of precipitation extremes in an aqua-planet framework. Strong resolution dependency of precipitation extremes is seen over both the tropics and extra-tropics, and the magnitude of this dependency also varies with the dynamical core. Moisture budget analyses based on aqua-planet simulations with the Community Atmosphere Model (CAM) using the Model for Prediction Across Scales (MPAS) and High Order Method Modeling Environment (HOMME) dynamical cores, but the same physics parameterizations, suggest that during precipitation extremes the moisture supply for surface precipitation is mainly derived from advective moisture convergence. The resolution dependency of precipitation extremes mainly originates from advective moisture transport in the vertical direction. At most vertical levels over the tropics and in the lower atmosphere over the subtropics, the vertical eddy transport of the mean moisture field dominates the contribution to precipitation extremes and its resolution dependency. Over the subtropics, the source of moisture, its associated energy, and the resolution dependency during extremes are dominated by the eddy transport of eddy moisture in the mid- and upper troposphere. With both the MPAS and HOMME dynamical cores, the resolution dependency of the vertical advective moisture convergence is mainly explained by dynamical changes (related to vertical velocity, or omega), although the vertical gradients of moisture act like averaging kernels to determine the sensitivity of the overall resolution dependency to the changes in omega at different vertical levels. The natural reduction of variability with coarser resolution, represented by an areal data averaging (aggregation) effect, largely explains the resolution dependency in omega. The thermodynamic changes, which likely result from non-linear feedbacks in response to the large dynamical changes, are small compared to the overall changes in dynamics (omega).
However, after excluding the data aggregation effect in omega, thermodynamic changes become relatively significant in offsetting the effect of dynamics, reducing the differences between the simulated and aggregated results. Compared to MPAS, the stronger vertical motion simulated with HOMME also results in larger resolution dependency. Compared to the simulation at fine resolution, the vertical motion during extremes is insufficiently resolved/parameterized at the coarser resolution even after accounting for the natural reduction in variability with coarser resolution, and this is more distinct in the simulation with HOMME. To reduce uncertainties in simulated precipitation extremes, future development of cloud parameterizations must address their sensitivity to spatial resolution as well as to dynamical cores.
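The budget terms discussed above follow the usual Reynolds decomposition of vertical moisture advection. Written out schematically (the notation is generic, not the paper's exact symbols):

```latex
% Column-integrated moisture budget during a precipitation extreme:
% precipitation P is balanced by evaporation E, moisture storage,
% horizontal moisture convergence, and vertical advective transport.
P \approx E
  - \left\langle \frac{\partial q}{\partial t} \right\rangle
  - \left\langle \nabla \cdot (q\,\mathbf{v}_h) \right\rangle
  - \left\langle \omega \frac{\partial q}{\partial p} \right\rangle

% Splitting omega and q into time means and eddies,
% \omega = \bar{\omega} + \omega', \qquad q = \bar{q} + q',
% the vertical advection term separates into four contributions:
\omega \frac{\partial q}{\partial p}
  = \bar{\omega}\frac{\partial \bar{q}}{\partial p}
  + \omega'\frac{\partial \bar{q}}{\partial p}
  + \bar{\omega}\frac{\partial q'}{\partial p}
  + \omega'\frac{\partial q'}{\partial p}
```

In this notation, the term in omega-prime times the mean moisture gradient is the eddy transport of the mean moisture field (dominant in the tropics per the analysis above), and the term with both primed quantities is the eddy transport of eddy moisture (dominant in the subtropical mid- and upper troposphere).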
Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin
2016-01-01
Embedding cost-sensitive factors into classifiers increases classification stability and reduces classification costs when classifying large-scale, redundant, and imbalanced datasets, such as gene expression data. In this study, we extend our previous work, the Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier. We name the proposed algorithm the cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed a rejection cost into the CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection-cost-embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while the classification accuracy remains competitive. The proposed method can be extended to classification problems of other redundant and imbalanced data.
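The core idea of attaching misclassification and rejection costs to a classifier's decisions can be sketched generically. This is the classical minimum-expected-cost rule with a reject option applied at the decision stage, not the authors' CS-D-ELM itself, and the cost values are illustrative:

```python
import numpy as np

def decide(posteriors, cost, reject_cost):
    """posteriors: (n, k) class probabilities; cost[i, j] = cost of predicting
    class j when the true class is i.  Returns the predicted class index per
    sample, or -1 to reject when every prediction's expected cost exceeds
    the rejection cost."""
    exp_cost = posteriors @ cost              # expected cost of each prediction
    best = exp_cost.argmin(axis=1)
    return np.where(exp_cost.min(axis=1) <= reject_cost, best, -1)

# Imbalanced two-class setting: missing class 1 is 5x worse than class 0.
cost = np.array([[0.0, 1.0],
                 [5.0, 0.0]])
post = np.array([[0.97, 0.03],   # confident class 0
                 [0.30, 0.70],   # leans class 1
                 [0.55, 0.45]])  # ambiguous -> cheaper to reject
decisions = decide(post, cost, reject_cost=0.4)
print(decisions)  # class 0, class 1, reject
```

Raising the rejection cost shrinks the reject region; setting it to infinity recovers the ordinary cost-sensitive classifier.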
Highly polarization sensitive photodetectors based on quasi-1D titanium trisulfide (TiS3)
NASA Astrophysics Data System (ADS)
Liu, Sijie; Xiao, Wenbo; Zhong, Mianzeng; Pan, Longfei; Wang, Xiaoting; Deng, Hui-Xiong; Liu, Jian; Li, Jingbo; Wei, Zhongming
2018-05-01
Photodetectors with high polarization sensitivity are in great demand in advanced optical communication. Here, we demonstrate that photodetectors based on titanium trisulfide (TiS3) are extremely sensitive to polarized light (from the visible to the infrared), due to its reduced in-plane structural symmetry. By density functional theory calculation, TiS3 has a direct bandgap of 1.13 eV. The highest photoresponsivity reaches 2500 A W⁻¹. Moreover, in-plane optical selection caused by strong anisotropy leads to a photoresponsivity ratio between polarization directions of up to 4:1. The angle-dependent photocurrents of TiS3 clearly display strong linear dichroism. The Raman peak at 370 cm⁻¹ is also very sensitive to the polarization direction. The theoretical optical absorption of TiS3, calculated using the HSE06 hybrid functional method, is in qualitative agreement with the observed experimental photoresponsivity.
Magda, Balázs; Dobi, Zoltán; Mészáros, Katalin; Szabó, Éva; Márta, Zoltán; Imre, Tímea; Szabó, Pál T
2017-06-05
The aim of this study was to develop a sensitive, reliable, and high-throughput liquid chromatography-electrospray ionization-tandem mass spectrometry (LC-ESI-MS/MS) method for the simultaneous quantitation of cortisol and cortisone in human saliva. Derivatization with 2-hydrazino-1-methylpyridine (HMP) was one of the most challenging aspects of the method development. The reagent reacts with cortisol and cortisone at 60°C within 1 h, giving mono- and bis-hydrazone derivatives. The derivatization reaction and sample preparation were investigated and discussed in detail. Method sensitivity was improved by charged derivatization and the use of on-line solid-phase extraction (on-line SPE). The lower limit of quantitation (LLOQ) was 5 and 10 pg/ml for cortisol and cortisone, respectively. The developed method was subsequently applied to clinical laboratory measurement of cortisol and cortisone in human saliva. Copyright © 2017 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.
Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculus candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with detection approaches using anisotropic diffusion and no smoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection approaches using anisotropic diffusion and no smoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high-noise CTC images. Thus, the proposed method can potentially improve diagnosis.
Radiation microscope for SEE testing using GeV ions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doyle, Barney Lee; Knapp, James Arthur; Rossi, Paolo
2009-09-01
Radiation Effects Microscopy (REM) is an extremely useful technique in failure analysis of electronic parts used in radiation environments. It also provides much-needed support for the development of radiation-hard components used in spacecraft and nuclear weapons. As IC manufacturing technology progresses, more and more overlayers are used; therefore, the sensitive region of the part is getting farther and farther from the surface. The thickness of these overlayers is so large today that the traditional microbeams used for REM are unable to reach the sensitive regions. As a result, higher, GeV-range ion beam energies have to be used, which are available only at cyclotrons. Since it is extremely complicated to focus these GeV ion beams, a new method has to be developed to perform REM at cyclotrons. We developed a new technique, Ion Photon Emission Microscopy, where instead of focusing the ion beam we use secondary photons emitted from a fluorescent layer on top of the devices being tested to determine the position of the ion hit. By recording this position information in coincidence with an SEE signal, we will be able to identify radiation-sensitive regions of modern electronic parts, which will increase the efficiency of radiation-hard circuits.
Chon, Sung-Bin; Kwak, Young Ho; Hwang, Seung-Sik; Oh, Won Sup; Bae, Jun-Ho
2013-12-01
Detecting severe hyperkalemia is challenging. We explored its prevalence in symptomatic or extreme bradycardia and devised a diagnostic rule. This retrospective cross-sectional study included patients with symptomatic (heart rate [HR] ≤ 50/min with dyspnea, chest pain, altered mentality, dizziness/syncope/presyncope, general weakness, oliguria, or shock) or extreme (HR ≤ 40/min) bradycardia at an emergency department over 46 months. Risk factors for severe hyperkalemia were chosen by multiple logistic regression analysis from history (sex, age, comorbidities, and medications), vital signs, and electrocardiography (ECG; maximum precordial T-wave amplitude, PR, and QRS intervals). The derived diagnostic index was validated using the bootstrapping method. Among the 169 participants enrolled, 87 (51.5%) were female. The mean (SD) age was 71.2 (12.5) years. Thirty-six (21.3%) had severe hyperkalemia. The diagnostic index summed the following (points in parentheses): maximum precordial T-wave ≥ 8.5 mV (2), atrial fibrillation/junctional bradycardia (1), HR ≤ 42/min (1), diltiazem medication (2), and diabetes mellitus (1). The C-statistic was 0.86 (0.80-0.93) and was validated. For scores of 4 or higher, sensitivity was 0.50, specificity was 0.92, and the positive likelihood ratio was 6.02. The "ECG-only index," which sums the 3 ECG findings, had a sensitivity of 0.50, a specificity of 0.90, and a positive likelihood ratio of 5.10 for scores of 3 or higher. Severe hyperkalemia is prevalent in symptomatic or extreme bradycardia and is detectable by quantitative electrocardiographic parameters and history. © 2013.
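A summed point index like this is trivial to operationalize. A sketch using the point values from the abstract; the field names and the example patient are illustrative, and this is for demonstration only, not clinical use:

```python
# Point values taken from the abstract's derived diagnostic index.
CRITERIA = [
    ("t_wave_ge_8_5", 2),    # maximum precordial T-wave >= 8.5 mV
    ("af_or_junctional", 1), # atrial fibrillation / junctional bradycardia
    ("hr_le_42", 1),         # heart rate <= 42 /min
    ("on_diltiazem", 2),     # diltiazem medication
    ("diabetes", 1),         # diabetes mellitus
]

def hyperkalemia_score(findings):
    """Sum the points of the positive findings; a score >= 4 flags possible
    severe hyperkalemia (sensitivity 0.50, specificity 0.92 in the study)."""
    return sum(pts for name, pts in CRITERIA if findings.get(name))

patient = {"t_wave_ge_8_5": True, "on_diltiazem": True, "hr_le_42": False}
score = hyperkalemia_score(patient)
print(score, "flag" if score >= 4 else "no flag")  # 4 flag
```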
Fujii, Naoto; Aoki-Murakami, Erii; Tsuji, Bun; Kenny, Glen P; Nagashima, Kei; Kondo, Narihiko; Nishiyasu, Takeshi
2017-11-01
We evaluated cold sensation at rest and in response to exercise-induced changes in core and skin temperatures in cold-sensitive, exercise-trained females. Fifty-eight trained young females were screened by questionnaire, selecting cold-sensitive (Cold-sensitive, n = 7) and non-cold-sensitive (Control, n = 7) individuals. Participants rested in a room at 29.5°C for ~100 min, after which the ambient temperature was reduced to 23.5°C and they remained resting for 60 min. Participants then performed 30 min of moderate-intensity cycling (50% peak oxygen uptake) followed by a 60-min recovery. Core and mean skin temperatures and cold sensation over the whole body and extremities (fingers and toes) were assessed throughout. Resting core temperature was lower in the Cold-sensitive relative to the Control group (36.4 ± 0.3 vs. 36.7 ± 0.2°C). Core temperature increased to similar levels at end-exercise (~37.2°C) and gradually returned to near pre-exercise resting levels at the end of recovery (>36.6°C). Whole-body cold sensation was greater in the Cold-sensitive relative to the Control group only during rest at a room temperature of 23.5°C, without a difference in mean skin temperature between groups. In contrast, cold sensation of the extremities was greater in the Cold-sensitive group prior to, during, and following exercise, although this was not paralleled by differences in mean extremity skin temperature. We show that young trained females who are sensitive to cold exhibit augmented whole-body cold sensation during rest under temperate ambient conditions; however, this response is diminished during and following exercise. In contrast, cold sensation of the extremities is augmented at rest and persists during and following exercise. © 2017 The Authors. Physiological Reports published by Wiley Periodicals, Inc. on behalf of The Physiological Society and the American Physiological Society.
North Atlantic storm driving of extreme wave heights in the North Sea
NASA Astrophysics Data System (ADS)
Bell, R. J.; Gray, S. L.; Jones, O. P.
2017-04-01
The relationship between storms and extreme ocean waves in the North Sea is assessed using a long-period wave data set and storms identified in the Interim ECMWF Re-Analysis (ERA-Interim). An ensemble sensitivity analysis is used to provide information on the spatial and temporal forcing from mean sea-level pressure and surface wind associated with extreme ocean wave height responses. Extreme ocean waves in the central North Sea arise due to intense extratropical cyclone winds from either the cold conveyor belt (northerly-wind events) or the warm conveyor belt (southerly-wind events). The largest wave heights are associated with northerly-wind events which tend to have stronger wind speeds and occur as the cold conveyor belt wraps rearward round the cyclone to the cold side of the warm front. The northerly-wind events provide a larger fetch to the central North Sea to aid wave growth. Southerly-wind events are associated with the warm conveyor belts of intense extratropical cyclones that develop in the left upper tropospheric jet exit region. Ensemble sensitivity analysis can provide early warning of extreme wave events by demonstrating a relationship between wave height and high pressure to the west of the British Isles for northerly-wind events 48 h prior. Southerly-wind extreme events demonstrate sensitivity to low pressure to the west of the British Isles 36 h prior.
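Ensemble sensitivity analysis of the kind used above amounts to regressing a scalar response onto a forcing field across ensemble members, point by point. A minimal sketch on synthetic data; the grid size, member count, and the planted signal location are assumptions for illustration, standing in for reanalysis mean sea-level pressure fields and observed wave heights:

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_points = 50, 100
mslp = rng.standard_normal((n_members, n_points))  # pressure anomaly "field"
# Response (e.g. max wave height) depends on the field at grid point 10.
wave_height = 2.0 * mslp[:, 10] + 0.3 * rng.standard_normal(n_members)

# Per-point sensitivity = cov(response, field) / var(field), i.e. the
# univariate regression slope at each grid point across members.
anom_J = wave_height - wave_height.mean()
anom_x = mslp - mslp.mean(axis=0)
sensitivity = (anom_x * anom_J[:, None]).mean(axis=0) / anom_x.var(axis=0)

print("most sensitive grid point:", int(np.abs(sensitivity).argmax()))
```

In the real analysis the field at a lead time of 36-48 h before the wave event is used, so large sensitivities flag precursor pressure patterns that can serve as early warning.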
Balantekin, Katherine N; Birch, Leann L; Savage, Jennifer S
2018-04-01
To examine the influence of family, friend, and media factors on weight-control group membership at 15 years, separately and in a combined model. Subjects included 166 15-year-old girls. Latent class analysis identified four patterns of weight-control behaviors: non-dieters, lifestyle, dieters, and extreme dieters. Family (family functioning, priority of family meals, maternal/paternal weight-teasing, and mother's/father's dieting), friend (weight-teasing and dieting), and media (media sensitivity and weekly TV time) variables were included as predictors of weight-control group membership. Family functioning and priority of family meals predicted membership in the extreme dieters group, and maternal weight-teasing predicted membership in both dieters and extreme dieters. Friends' dieting and weight-teasing predicted membership in both dieters and extreme dieters. Media sensitivity was significantly associated with membership in lifestyle, dieters, and extreme dieters. In a combined model with family, friend, and media factors included, the following remained significantly associated with weight-control group membership: family functioning, friends' dieting, and media sensitivity. Family, friends, and the media are three sources of sociocultural influence that play a role in adolescent girls' use of patterns of weight-control behaviors; family functioning was a protective factor, whereas friends' dieting and media sensitivity were risk factors. These findings emphasize the need for multidimensional interventions addressing risk factors for dieting and the use of unhealthy weight-control behaviors at the family, peer, and community (e.g., media) levels.
NASA Astrophysics Data System (ADS)
Hwang, Seonhong; Kim, Seunghyeon; Son, Jongsang; Kim, Youngho
2012-02-01
Manual wheelchair users are at high risk of pain and injury to the upper extremities owing to the mechanical inefficiency of wheelchair propulsion. Kinetic analysis of the upper extremities during manual wheelchair propulsion under various conditions therefore needs to be investigated. We developed and calibrated a wheelchair dynamometer for measuring kinetic parameters during propulsion, and used it to investigate and compare the propulsion torque and power of experienced and novice users under four different conditions. Experienced wheelchair users generated lower torques with more power than novice users and responded alertly and sensitively to changing conditions. We expect that these basic methods and results may help to quantitatively evaluate the mechanical efficiency of manual wheelchair propulsion.
Jeznach, Lillian C; Hagemann, Mark; Park, Mi-Hyun; Tobiason, John E
2017-10-01
Extreme precipitation events are of concern to managers of drinking water sources because these occurrences can affect both water supply quantity and quality. However, little is known about how these low-probability events impact organic matter and nutrient loads to surface water sources and how these loads may impact raw water quality. This study describes a method for evaluating the sensitivity of a water body of interest using watershed input simulations under extreme precipitation events. An example application of the method is illustrated using the Wachusett Reservoir, an oligo-mesotrophic surface water reservoir in central Massachusetts and a major drinking water supply to metropolitan Boston. Extreme precipitation event simulations during the spring and summer resulted in total organic carbon, UV-254 (a surrogate measurement for reactive organic matter), and total algae concentrations at the drinking water intake that exceeded recorded maximums. Nutrient concentrations after storm events were less likely to exceed recorded historical maximums. For this particular reservoir, increasing inter-reservoir transfers of water with lower organic matter content after a large precipitation event has been shown in practice and in model simulations to decrease organic matter levels at the drinking water intake, thereby decreasing treatment-associated oxidant demand, energy for UV disinfection, and the potential for formation of disinfection byproducts. Copyright © 2017 Elsevier Ltd. All rights reserved.
The advance of non-invasive detection methods in osteoarthritis
NASA Astrophysics Data System (ADS)
Dai, Jiao; Chen, Yanping
2011-06-01
Osteoarthritis (OA) is one of the most prevalent chronic diseases, severely affecting patients' quality of life and imposing an economic burden. Detection and evaluation technology can provide basic information for early treatment. A variety of imaging methods for OA are reviewed, including conventional X-ray, computed tomography (CT), ultrasound (US), magnetic resonance imaging (MRI) and near-infrared spectroscopy (NIRS). Among the existing modalities, X-ray offers extremely high spatial resolution; CT is a three-dimensional method with high density resolution; US, as an evaluation method for knee OA, sensitively discriminates between normal and degenerative cartilage; MRI, a sensitive and non-ionizing method, is suitable for detecting early OA, but is too expensive for routine use; and NIRS is a safe, low-cost modality that is also good at detecting early-stage OA. In summary, each method has its own advantages, but NIRS has the broadest application prospects and is likely to enter clinical daily routine and become the gold standard for diagnostic detection.
Wu, Zhenyu; Zou, Ming
2014-10-01
An increasing number of users interact, collaborate, and share information through social networks. Unprecedented growth in social networks is generating a significant amount of unstructured social data. From such data, distilling communities where users have common interests and tracking variations of users' interests over time are important research tracks in fields such as opinion mining, trend prediction, and personalized services. However, these tasks are extremely difficult considering the highly dynamic characteristics of the data. Existing community detection methods are time consuming, making it difficult to process data in real time. In this paper, dynamic unstructured data is modeled as a stream. Tag assignments stream clustering (TASC), an incremental scalable community detection method, is proposed based on locality-sensitive hashing. Both tags and latent interactions among users are incorporated in the method. In our experiments, the social dynamic behaviors of users are first analyzed. The proposed TASC method is then compared with state-of-the-art clustering methods such as StreamKmeans and incremental k-clique; results indicate that TASC can detect communities more efficiently and effectively. Copyright © 2014 Elsevier Ltd. All rights reserved.
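The locality-sensitive hashing underlying TASC can be illustrated with a minimal MinHash sketch over user tag sets. This is only a sketch of the LSH idea, not the paper's algorithm (which also incorporates latent user interactions); the function names, tag sets and signature length below are invented for illustration.

```python
import hashlib

def minhash_signature(tags, num_hashes=32):
    """MinHash signature of a tag set: one min-hash value per seeded hash."""
    sig = []
    for i in range(num_hashes):
        # A distinct hash function per slot, obtained by seeding md5 with i.
        sig.append(min(
            int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
            for t in tags
        ))
    return sig

def estimated_jaccard(sig_a, sig_b):
    """Fraction of matching slots approximates the Jaccard similarity."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

# Users with overlapping interests get similar signatures.
a = minhash_signature({"python", "ml", "data"})
b = minhash_signature({"python", "ml", "web"})
c = minhash_signature({"cooking", "travel"})
```

Signatures are fixed-length regardless of how many tags a user has, which is what makes the comparison incremental and scalable for streaming data.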
Digital PCR analysis of circulating nucleic acids.
Hudecova, Irena
2015-10-01
Detection of plasma circulating nucleic acids (CNAs) requires the use of extremely sensitive and precise methods. The commonly used quantitative real-time polymerase chain reaction (PCR) poses certain technical limitations in relation to the precise measurement of CNAs, whereas the costs of massively parallel sequencing are still relatively high. Digital PCR (dPCR) now represents an affordable and powerful single-molecule counting strategy to detect minute amounts of genetic material, with performance surpassing that of many quantitative methods. Microfluidic (chip) and emulsion (droplet)-based technologies have already been integrated into platforms offering hundreds to millions of nanoliter- or even picoliter-scale reaction partitions. The compelling observations reported in the fields of cancer research, prenatal testing, transplantation medicine and virology support translation of this technology into routine use. Extremely sensitive plasma detection of rare mutations originating from tumor or placental cells among a large background of homologous sequences facilitates unraveling of the early stages of cancer or the detection of fetal mutations. Digital measurement of quantitative changes in plasma CNAs associated with cancer or graft rejection provides valuable information on disease burden, the recipient's immune response, and subsequent therapy. Furthermore, careful quantitative assessment of the viral load offers great value for effective monitoring of antiviral therapy in immunosuppressed or transplant patients. The present review describes the inherent features of dPCR that make it exceptionally robust in precise and sensitive quantification of CNAs. Moreover, I provide an insight into the types of potential clinical applications that have been developed by researchers to date. Copyright © 2015 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
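The single-molecule counting behind dPCR rests on standard Poisson statistics: if template molecules distribute randomly over partitions, the fraction of negative partitions is e^(-λ), so the mean copies per partition is λ = -ln(1 - p) for a positive fraction p. A minimal sketch of this quantification step (the droplet counts and droplet volume below are made-up illustrative values, not from the review):

```python
import math

def dpcr_copies_per_partition(positive, total):
    """Poisson-corrected mean template copies per partition.

    With random partitioning, the negative fraction is exp(-lambda),
    hence lambda = -ln(1 - positive/total)."""
    p = positive / total
    if p >= 1.0:
        raise ValueError("all partitions positive: too concentrated to estimate")
    return -math.log(1.0 - p)

def concentration_copies_per_ul(positive, total, partition_volume_nl):
    """Template concentration in copies per microliter of reaction mix."""
    lam = dpcr_copies_per_partition(positive, total)
    return lam / (partition_volume_nl * 1e-3)  # convert nL to uL

# Example: 5,000 of 20,000 droplets positive, 0.85 nL droplets.
lam = dpcr_copies_per_partition(5000, 20000)
conc = concentration_copies_per_ul(5000, 20000, 0.85)
```

The Poisson correction is what lets dPCR stay accurate even when some partitions contain more than one template molecule.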
Biotechnical use of polymerase chain reaction for microbiological analysis of biological samples.
Lantz, P G; Abu al-Soud, W; Knutsson, R; Hahn-Hägerdal, B; Rådström, P
2000-01-01
Since its introduction in the mid-1980s, polymerase chain reaction (PCR) technology has been recognised as a rapid, sensitive and specific molecular diagnostic tool for the analysis of micro-organisms in clinical, environmental and food samples. Although this technique can be extremely effective with pure solutions of nucleic acids, its sensitivity may be reduced dramatically when applied directly to biological samples. This review describes PCR technology as a microbial detection method, PCR inhibitors in biological samples, and various sample preparation techniques that can be used to facilitate PCR detection, either by separating the micro-organisms from PCR inhibitors and/or by concentrating the micro-organisms to detectable concentrations. Parts of this review are updated and based on a doctoral thesis by Lantz [1] and on a review discussing methods to overcome PCR inhibition in foods [2].
Henderson, Sarah B; Gauld, Jillian S; Rauch, Stephen A; McLean, Kathleen E; Krstic, Nikolas; Hondula, David M; Kosatsky, Tom
2016-11-15
Most excess deaths that occur during extreme hot weather events do not have natural heat recorded as an underlying or contributing cause. This study aims to identify the specific individuals who died because of hot weather using only secondary data. A novel approach was developed in which the expected number of deaths was repeatedly sampled from all deaths that occurred during a hot weather event, and compared with deaths during a control period. The deaths were compared with respect to five factors known to be associated with hot weather mortality. Individuals were ranked by their presence in significant models over 100 trials of 10,000 repetitions. Those with the highest rankings were identified as probable excess deaths. Sensitivity analyses were performed on a range of model combinations. These methods were applied to a 2009 hot weather event in greater Vancouver, Canada. The excess deaths identified were sensitive to differences in model combinations, particularly between univariate and multivariate approaches. One multivariate and one univariate combination were chosen as the best models for further analyses. The individuals identified by multiple combinations suggest that marginalized populations in greater Vancouver are at higher risk of death during hot weather. This study proposes novel methods for classifying specific deaths as expected or excess during a hot weather event. Further work is needed to evaluate performance of the methods in simulation studies and against clinically identified cases. If confirmed, these methods could be applied to a wide range of populations and events of interest.
Highly sensitive Europium doped SrSO4 OSL nanophosphor for radiation dosimetry applications
NASA Astrophysics Data System (ADS)
Patle, Anita; Patil, R. R.; Kulkarni, M. S.; Bhatt, B. C.; Moharil, S. V.
2015-10-01
A highly sensitive europium-doped SrSO4 optically stimulated luminescence (OSL) phosphor was developed by synthesizing a nanophosphor and heat-treating it at 1000 °C. The developed phosphor shows excellent OSL properties, with a sensitivity 1.26 times that of the commercial Al2O3:C (Landauer Inc.) phosphor based on the area-integration method. The sample showed a single TL glow peak around 230 °C, which is reduced by 47% after the OSL readout. A sublinear dose response with saturation around 100 mGy is observed, suggesting that the phosphor is extremely sensitive and hence suitable for detecting very low dose levels. The minimum measurable dose on the set-up used is estimated to be 1.42 μGy. Practically no fading is observed over the first ten days, and the phosphor has excellent reusability. The high sensitivity, low fading and excellent reusability will make this phosphor suitable for radiation dosimetry applications using OSL.
2009-03-01
transition fatigue regimes; however, microplasticity (i.e., heterogeneous plasticity at the scale of microstructure) is relevant to understanding fatigue...and Socie [57] considered the affect of microplastic 14 Microstructure-Sensitive Extreme Value Probabilities for High Cycle Fatigue of Ni-Base...considers the local stress state as affected by intergranular interactions and microplasticity . For the calculations given below, the volumes over which
Kogovšek, P; Hodgetts, J; Hall, J; Prezelj, N; Nikolić, P; Mehle, N; Lenarčič, R; Rotter, A; Dickinson, M; Boonham, N; Dermastia, M; Ravnikar, M
2015-01-01
In Europe the most devastating phytoplasma associated with grapevine yellows (GY) diseases is a quarantine pest, flavescence dorée (FDp), from the 16SrV taxonomic group. The on-site detection of FDp with an affordable device would contribute to faster and more efficient decisions on the control measures for FDp. Therefore, a real-time isothermal LAMP assay for detection of FDp was validated according to the EPPO standards and MIQE guidelines. The LAMP assay was shown to be specific and extremely sensitive, because it detected FDp in all leaf samples that were determined to be FDp infected using quantitative real-time PCR. The whole procedure of sample preparation and testing was designed and optimized for on-site detection and can be completed in one hour. The homogenization procedure of the grapevine samples (leaf vein, flower or berry) was optimized to allow direct testing of crude homogenates with the LAMP assay, without the need for DNA extraction, and was shown to be extremely sensitive. PMID:26146413
Brittle materials at high-loading rates: an open area of research
NASA Astrophysics Data System (ADS)
Forquin, Pascal
2017-01-01
Brittle materials are extensively used in many civil and military applications involving high-strain-rate loadings such as: blasting or percussive drilling of rocks, ballistic impact against ceramic armour or transparent windshields, plastic explosives used to damage or destroy concrete structures, soft or hard impacts against concrete structures and so on. With all of these applications, brittle materials are subjected to intense loadings characterized by medium to extremely high strain rates (few tens to several tens of thousands per second) leading to extreme and/or specific damage modes such as multiple fragmentation, dynamic cracking, pore collapse, shearing, mode II fracturing and/or microplasticity mechanisms in the material. Additionally, brittle materials exhibit complex features such as a strong strain-rate sensitivity and confining pressure sensitivity that justify expending greater research efforts to understand these complex features. Currently, the most popular dynamic testing techniques used for this are based on the use of split Hopkinson pressure bar methodologies and/or plate-impact testing methods. However, these methods do have some critical limitations and drawbacks when used to investigate the behaviour of brittle materials at high loading rates. The present theme issue of Philosophical Transactions A provides an overview of the latest experimental methods and numerical tools that are currently being developed to investigate the behaviour of brittle materials at high loading rates. This article is part of the themed issue 'Experimental testing and modelling of brittle materials at high strain rates'.
Rapid characterization of microorganisms by mass spectrometry--what can be learned and how?
Fenselau, Catherine C
2013-08-01
Strategies for the rapid and reliable analysis of microorganisms have been sought to meet national needs in defense, homeland security, space exploration, food and water safety, and clinical diagnosis. Mass spectrometry has long been a candidate technique because it is extremely rapid and can provide highly specific information. It has excellent sensitivity. Molecular and fragment ion masses provide detailed fingerprints, which can also be interpreted. Mass spectrometry is also a broad band method--everything has a mass--and it is automatable. Mass spectrometry is a physiochemical method that is orthogonal and complementary to biochemical and morphological methods used to characterize microorganisms.
Multi-window detection for P-wave in electrocardiograms based on bilateral accumulative area.
Chen, Riqing; Huang, Yingsong; Wu, Jian
2016-11-01
P-wave detection is one of the most challenging aspects of electrocardiogram (ECG) analysis because of the P-wave's low amplitude, low frequency, and variable waveforms. This work introduces a novel multi-window detection method for P-wave delineation based on the bilateral accumulative area, calculated by summing the areas covered by the P-wave curve within left and right sliding windows. The onset and offset of a positive P-wave correspond to local maxima of the area detector. The position drift and the differences in area variation of local extreme points across different windows are used to systematically combine the multi-window and 12-lead synchronous detection methods, which screen the optimal boundary points from all extreme points over the different window widths and adaptively match the P-wave location. The proposed method was validated with ECG signals from several databases, including the Standard CSE Database, the T-Wave Alternans Challenge Database, the PTB Diagnostic ECG Database, and the St. Petersburg Institute of Cardiological Technics 12-Lead Arrhythmia Database. The average sensitivity Se was 99.44% with a positive predictivity P+ of 99.37% for P-wave detection. Standard deviations of 3.7 and 4.3 ms were achieved for the onset and offset of P-waves, respectively, within the accepted tolerances required by the CSE committee. Compared with well-known delineation methods, this method achieves high sensitivity and positive predictivity with a simple calculation process. The experimental results suggest that the bilateral accumulative area could be an effective detection tool for ECG signal analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
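The core area-detector idea can be sketched in a few lines. This is a simplified toy version of the concept only: the published method adds multi-window screening and 12-lead fusion, and the window width and signal below are arbitrary illustrative values.

```python
def bilateral_area(signal, window):
    """Bilateral accumulative area detector (simplified sketch).

    For each sample, sum the signal over a left-side window and a
    right-side window; peaks of the combined area track slow,
    low-amplitude deflections such as P-waves."""
    n = len(signal)
    detector = [0.0] * n
    for i in range(n):
        left = signal[max(0, i - window):i]
        right = signal[i:min(n, i + window)]
        detector[i] = sum(left) + sum(right)
    return detector

def local_maxima(x):
    """Indices of local maxima; the first sample of a plateau counts as its peak."""
    return [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]

# A toy positive bump centred at index 5: the detector peaks there.
sig = [0, 0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
det = bilateral_area(sig, window=3)
peaks = local_maxima(det)
```

Because the area accumulates over whole windows, the detector is far less sensitive to sample-level noise than a derivative-based delineator would be.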
Cazelle, Elodie; Eskes, Chantra; Hermann, Martina; Jones, Penny; McNamee, Pauline; Prinsen, Menk; Taylor, Hannah; Wijnands, Marcel V W
2014-06-01
A.I.S.E. investigated the suitability of histopathological evaluation as an additional endpoint to the regulatory-adopted ICE in vitro test method (OECD TG 438) for identifying non-extreme pH detergent and cleaning products that require classification as EU CLP/UN GHS Category 1 (serious eye damage). To this aim, a total of 30 non-extreme pH products covering the range of in vivo classifications for eye irritation and representing various product categories were tested. Epithelium vacuolation (mid and lower layers) and erosion (at least moderate) were found to be the most relevant histopathological effects induced by products classified in vivo as Category 1. Histopathology criteria specifically developed for non-extreme pH detergent and cleaning products were shown to correctly identify materials classified as Category 1 based on in vivo persistent effects, and to significantly increase the overall sensitivity of the standard ICE prediction model for Category 1 identification (to 75%) whilst maintaining good concordance (73%). In contrast, the EU CLP additivity approach for classification of mixtures was considerably less predictive, with a concordance of only 27% and 100% over-prediction of non-Category 1 products. As such, use of histopathology in addition to the ICE test method was found suitable for identifying EU CLP/UN GHS Category 1 non-extreme pH detergent and cleaning products and allows better discrimination from Category 2 products. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Wei, Ping; Li, Xinyang; Luo, Xi; Li, Jianfeng
2018-02-01
The centroid method is commonly adopted to locate the spot within the sub-apertures of a Shack-Hartmann wavefront sensor (SH-WFS). Because the centroid method is extremely sensitive to noise, the image must be preprocessed before the spot location is calculated. In this paper, the SH-WFS image was simulated according to the characteristics of the noise, background and intensity distribution. Optimal parameters for SH-WFS image preprocessing were put forward for different signal-to-noise ratio (SNR) conditions, with the wavefront reconstruction error as the evaluation index. Two image preprocessing methods, thresholding and windowing combined with thresholding, were compared by studying their applicable SNR ranges and analyzing their stability.
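The noise sensitivity that motivates preprocessing is visible in the centroid formula itself: every background count contributes to the weighted sum. A minimal thresholding-plus-centroid sketch (the paper's optimal parameters are SNR-dependent; the 5x5 image and threshold value here are invented for illustration):

```python
def thresholded_centroid(image, threshold):
    """Centre-of-mass spot location after threshold subtraction.

    Pixels at or below `threshold` are zeroed and the threshold is
    subtracted from the rest, so a uniform background does not bias
    the centroid."""
    sx = sy = total = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            v = value - threshold
            if v > 0:
                sx += v * x
                sy += v * y
                total += v
    if total == 0:
        raise ValueError("no pixel above threshold")
    return sx / total, sy / total

# A 5x5 sub-aperture with a spot at (2, 2) on a uniform background of 10 counts.
img = [[10] * 5 for _ in range(5)]
img[2][2] = 100
img[2][1] = img[2][3] = img[1][2] = img[3][2] = 40
cx, cy = thresholded_centroid(img, threshold=10)
```

Without the threshold subtraction, the background pixels would pull the centroid toward the geometric centre of the sub-aperture whenever the spot sits off-centre.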
Nasehzadeh, M
2017-01-01
Background and Aims: Short periods of extreme temperature may affect wheat (Triticum aestivum) seed weight, but also quality. Temporal sensitivity to extreme temperature during seed development and maturation was investigated. Methods: Plants of ‘Tybalt’ grown at ambient temperature were moved to growth cabinets at 29/20°C or 34/20°C (2010), or 15/10°C or 34/20°C (2011), for successive 7-d periods from 7 DAA (days after anthesis) onwards, and also for 7–65 DAA in 2011. Seed samples were harvested serially, and moisture content, weight, ability to germinate, subsequent longevity in air-dry storage and bread-making quality were determined. Key Results: High temperature (34/20°C) reduced final seed weight, with greatest temporal sensitivity at 7–14 or 14–21 DAA. Several aspects of bread-making quality were also most sensitive to high temperature then, but whereas protein quality decreased, protein and sulphur concentrations improved. Early exposure to high temperature led to earlier development of the ability to germinate and tolerate desiccation, but had little effect on maximum germination capacity. All treatments at 15/10°C resulted in the ability to germinate declining between 58 and 65 DAA. Early exposure to high temperature hastened the improvement in seed storage longevity, but the subsequent decline in late maturation preceded that in the control. Long (7–65 DAA) exposure to 15/10°C disrupted the development of seed longevity, with no improvement after seed filling ended; longevity improved during maturation drying in the other treatments. Early (7–14 DAA) exposure to high temperature reduced, and low temperature increased, subsequent longevity at harvest maturity, whereas late (35 or 42–49 DAA) exposure to high temperature increased and low temperature reduced it. Conclusions: Temporal sensitivity to extreme temperature was detected; it varied considerably amongst the contrasting seed variables investigated. Subsequent seed longevity at harvest maturity responded negatively to temperature early in development, but positively later in development and throughout maturation. PMID:28637252
NASA Astrophysics Data System (ADS)
Pántano, V. C.; Penalba, O. C.
2013-05-01
Extreme events of temperature and rainfall have a socio-economic impact on the rainfed agricultural production region of Argentina. The magnitude of the impact can be analyzed through the water balance, which integrates the characteristics of the soil and the climate conditions. Changes observed in climate variables during the last decades affected the components of the water balance; as a result, the agricultural border was displaced towards the west, improving the agricultural production of the region. The objective of this work is to analyze how the variability of rainfall and temperature drives the hydric condition of the soil, with special focus on extreme events. The hydric condition of the soil (HC = Excess - Deficit) was estimated from the monthly water balance (Thornthwaite and Mather method, 1957), using monthly potential evapotranspiration (PET) and monthly accumulated rainfall (R) for 33 stations (period 1970-2006). Temperature and rainfall data were provided by the National Weather Service, and the effective soil water capacity was taken from Forte Lay and Spescha (2001). An agricultural extreme condition occurs when soil moisture and rainfall are inadequate or excessive for the development of the crops. In this study, we define an extreme event when the variable is less (greater) than its 20th and 10th (80th and 90th) percentiles. To evaluate how sensitive HC is to water and heat stress in the region, different conditional probabilities were evaluated. There is a weaker response of HC to extremely low PET, while extremely low R drives high values of HC. However, this behavior is not always observed, especially in the western region, where extremely high and low PET show a stronger influence on HC. Finally, to analyze the temporal variability of extreme PET and R leading the hydric condition of the soil, the number of stations presenting extreme conditions was computed for each month.
As an example, interesting results were observed for April. During this month, the water recharge of the soil is crucial to let the winter crops cope with the scarce rainfall of the following months. In 1970, 1974, 1977, 1978 and 1997, more than 50% of the stations were under extremely high PET, while 1970, 1974, 1978 and 1988 presented more than 40% under extremely low R. Thus, the 1970s was the most threatened decade of the period. Since the 1980s (except for 1997), extreme dry events due to one variable or the other have mostly occurred separately, over smaller areas. The response of the spatial distribution of HC is stronger when both variables present extreme conditions. In particular, during 1997 the region presented extremely low values of HC as a consequence of extremely low R and high PET. Communities dependent on agriculture are highly sensitive to climate variability and its extremes. In the studied region, it was shown that scarce water and heat stress contribute to the resulting hydric condition, producing a strong impact on different productive activities. Extreme temperature seems to have a stronger influence on extreme unfavorable hydric conditions.
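The monthly bookkeeping behind HC = Excess - Deficit can be sketched with a single-bucket water balance. This is a crude simplification of the Thornthwaite and Mather method (the full method modulates actual evapotranspiration as storage depletes); the rainfall, PET and capacity values below are illustrative only.

```python
def monthly_water_balance(rain, pet, capacity, storage=0.0):
    """Single-bucket monthly water balance (Thornthwaite-style sketch).

    `rain` and `pet` are monthly series in mm; `capacity` is the
    effective soil water holding capacity. Returns per-month
    (excess, deficit) lists; HC for a month is excess - deficit."""
    excess, deficit = [], []
    for r, e in zip(rain, pet):
        storage += r - e
        if storage > capacity:        # soil full: surplus becomes excess
            excess.append(storage - capacity)
            deficit.append(0.0)
            storage = capacity
        elif storage < 0.0:           # demand unmet: record a deficit
            excess.append(0.0)
            deficit.append(-storage)
            storage = 0.0
        else:                         # demand met from storage
            excess.append(0.0)
            deficit.append(0.0)
    return excess, deficit

rain = [160, 30, 0, 90]   # mm/month (illustrative)
pet  = [40, 60, 80, 50]
exc, dfc = monthly_water_balance(rain, pet, capacity=100)
hc = [e - d for e, d in zip(exc, dfc)]
```

A wet first month overfills the bucket (positive HC), while the dry third month draws storage below zero and registers a deficit (negative HC), mirroring the excess/deficit bookkeeping described above.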
Long-term Changes in Extreme Air Pollution Meteorology and the Implications for Air Quality.
Hou, Pei; Wu, Shiliang
2016-03-31
Extreme air pollution meteorological events, such as heat waves, temperature inversions and atmospheric stagnation episodes, can significantly affect air quality. Based on observational data, we have analyzed the long-term evolution of extreme air pollution meteorology on the global scale and its potential impacts on air quality, especially high-pollution episodes. We have identified significant increasing trends in the occurrence of extreme air pollution meteorological events over the past six decades, especially over continental regions. Statistical analysis combining air quality data and meteorological data further indicates strong sensitivities of air quality (both average air pollutant concentrations and high-pollution episodes) to extreme meteorological events. For example, we find that in the United States the probability of severe ozone pollution during heat waves can be up to seven times the average summertime probability, while temperature inversions in wintertime can enhance the probability of severe particulate matter pollution by more than a factor of two. We have also identified significant seasonal and spatial variations in the sensitivity of air quality to extreme air pollution meteorology.
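Statistics like "seven times the average probability" are conditional-probability enhancements, which can be computed directly from day-level indicator sets. A toy sketch with invented counts (not the study's data):

```python
def conditional_enhancement(polluted, event, days):
    """P(polluted | event) / P(polluted): how much an extreme
    meteorological event raises the chance of a high-pollution day.

    `polluted` and `event` are sets of day indices drawn from `days`."""
    p_polluted = len(polluted) / len(days)
    p_given_event = len(polluted & event) / len(event)
    return p_given_event / p_polluted

# Invented example: a 100-day summer, 5 exceedance days,
# a 10-day heat wave containing 4 of them.
days = set(range(100))
polluted = {1, 2, 3, 10, 11}
heatwave = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
ratio = conditional_enhancement(polluted, heatwave, days)
```

Here P(polluted) = 0.05 but P(polluted | heat wave) = 0.4, an eightfold enhancement, illustrating the kind of sensitivity the abstract reports.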
NASA Astrophysics Data System (ADS)
Bréda, Nathalie; Badeau, Vincent
2008-09-01
The aim of this paper is to illustrate how extreme events can affect forest ecosystems. Forest tree responses can be analysed using dendroecological methods, as tree-ring widths are strongly controlled by climatic and biotic events. Years with such events induce similar tree responses and are called pointer years; they can result from extreme climatic events such as frost, a heat wave, spring waterlogging, drought or insect damage. Forest tree species show contrasting responses to climatic hazards, depending on their sensitivity to water shortage or temperature hardening, as illustrated from our dendrochronological database. For foresters, a drought or a pest outbreak is an extreme event if visible and durable symptoms are induced (leaf discolouration, leaf loss, mortality of perennial organs, tree dieback and mortality). These symptoms are shown here, lagging one or several years behind a climatic or biotic event, for forest decline cases in progress since the 2003 drought or attributed to previous severe droughts or defoliations in France. Tree growth and vitality recovery are illustrated, and the functional interpretation of the long-lasting memory of trees is discussed. A coupled approach linking dendrochronology and ecophysiology helps in discussing the vulnerability of forest stands, and suggests management advice to mitigate extreme drought and cope with selective mortality.
Tran, Thomas; Kostecki, Renata; Catton, Michael; Druce, Julian
2018-05-09
Rapid differentiation of wild-type measles virus from measles vaccine strains is crucial during a measles outbreak and in a measles elimination setting. A real-time RT-PCR for the rapid detection of measles vaccine strains was developed with high specificity and greater sensitivity than traditional measles genotyping methods. The "stressed" minor groove binder TaqMan probe design achieves specificity for vaccine strains only, without compromising sensitivity. This assay has proven extremely useful in outbreak settings, without requiring sequence-based genotyping, for over 4 years at the Regional Measles Reference Laboratory for the Western Pacific Region. Copyright © 2018 Tran et al.
Liu, Zheng-jia; Yu, Xing-xiu; Li, Lei; Huang, Mei
2011-08-01
Based on the ecological sensitivity-resilience-pressure (SRP) conceptual model, and selecting 13 indices including landscape diversity index, soil erosion, and elevation, the vulnerability of the eco-environment in the Yimeng mountainous area of Shandong Province was assessed with the support of GIS, using principal component analysis and the analytic hierarchy method. According to the eco-environmental vulnerability index (EVI) values, the eco-environmental vulnerability of the study area was classified into 5 levels, i.e., slight (<1.8), light (1.8-2.8), moderate (2.8-3.5), heavy (3.5-4.0), and extreme vulnerability (>4.0). In the study area, the moderately vulnerable area occupied 43.3% of the total, while the slightly, lightly, heavily, and extremely vulnerable areas occupied 6.1%, 33.8%, 15.9%, and 0.9%, respectively. The heavily and extremely vulnerable areas were mainly located in the topographically complicated hilly areas or in the hill-plain ecotone with frequent human activities.
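The five-level classification above reduces to a simple threshold rule; a minimal sketch follows (the handling of values exactly on a cut-point is our assumption, since the published intervals share their endpoints):

```python
# Hypothetical sketch of the five-level EVI classification quoted above;
# behaviour at the exact cut-points (1.8, 2.8, 3.5, 4.0) is an assumption.
def classify_evi(evi: float) -> str:
    if evi < 1.8:
        return "slight"
    elif evi < 2.8:
        return "light"
    elif evi < 3.5:
        return "moderate"
    elif evi <= 4.0:
        return "heavy"
    return "extreme"

# Reported area shares per class (percent of the study area)
AREA_SHARES = {"slight": 6.1, "light": 33.8, "moderate": 43.3,
               "heavy": 15.9, "extreme": 0.9}
```

As a sanity check, the reported class shares sum to 100% of the study area.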
Back-illuminated imager and method for making electrical and optical connections to same
NASA Technical Reports Server (NTRS)
Pain, Bedabrata (Inventor)
2010-01-01
Methods for bringing or exposing metal pads or traces to the backside of a backside-illuminated imager allow the pads or traces to reside on the illumination side for electrical connection. These methods provide a solution to a key packaging problem for backside thinned imagers. The methods also provide alignment marks for integrating color filters and microlenses to the imager pixels residing on the frontside of the wafer, enabling high performance multispectral and high sensitivity imagers, including those with extremely small pixel pitch. In addition, the methods incorporate a passivation layer for protection of devices against external contamination, and allow interface trap density reduction via thermal annealing. Backside-illuminated imagers with illumination side electrical connections are also disclosed.
[Methods for determination of cholinesterase activity].
Dingová, D; Hrabovská, A
2015-01-01
Cholinesterases hydrolyze acetylcholine and thus play a key role in cholinergic neurotransmission. Changes in their activities are linked to many diseases (e.g., Alzheimer's disease, Parkinson's disease, lipid disorders). It is therefore important to determine their activity in a fast, simple, and precise way. In this review, different approaches to measuring cholinesterase activities (e.g., pH-dependent, spectrophotometric, radiometric, and histochemical methods, or biosensors) are discussed. The advantages and disadvantages of selected methods (e.g., the most widely used Ellman's assay, the extremely sensitive Johnson-Russell method, and modern techniques based on gold nanoparticles) are compared. This review enables one to choose a suitable method for determining cholinesterase activities with respect to laboratory equipment, type of analysis, pH, temperature range, or special conditions.
Arkusz, Joanna; Stępnik, Maciej; Sobala, Wojciech; Dastych, Jarosław
2010-11-10
The aim of this study was to find differentially regulated genes in THP-1 monocytic cells exposed to sensitizers and nonsensitizers and to investigate if such genes could be reliable markers for an in vitro predictive method for the identification of skin sensitizing chemicals. Changes in expression of 35 genes in the THP-1 cell line following treatment with chemicals of different sensitizing potential (from nonsensitizers to extreme sensitizers) were assessed using real-time PCR. Verification of 13 candidate genes by testing a large number of chemicals (an additional 22 sensitizers and 8 nonsensitizers) revealed that prediction of contact sensitization potential was possible based on evaluation of changes in three genes: IL8, HMOX1 and PAIMP1. In total, changes in expression of these genes allowed correct detection of sensitization potential of 21 out of 27 (78%) test sensitizers. The gene expression levels inside potency groups varied and did not allow estimation of sensitization potency of test chemicals. Results of this study indicate that evaluation of changes in expression of proposed biomarkers in THP-1 cells could be a valuable model for preliminary screening of chemicals to discriminate an appreciable majority of sensitizers from nonsensitizers. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Experimental investigation of the Multipoint Ultrasonic Flowmeter
NASA Astrophysics Data System (ADS)
Jakub, Filipský
2018-06-01
The Multipoint Ultrasonic Flowmeter is a vector tomographic device capable of reconstructing all three components of a velocity field based solely on boundary ultrasonic measurements. Computer simulations have shown the feasibility of such a device and have been published previously. This paper describes an experimental investigation of the achievable accuracy of the method. The doubled acoustic tripoles used to obtain information on the solenoidal part of the vector field exhibit extremely small differences between the times of flight (TOFs) at the individual sensors and are therefore sensitive to parasitic effects in TOF measurement. Sampling at 40 MHz combined with a correlation method is used to measure the TOFs.
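The correlation-based TOF estimate can be illustrated with a toy sketch on synthetic pulses (pure Python; the flowmeter's real signal chain is certainly more elaborate):

```python
# Toy sketch of correlation-based TOF estimation on synthetic sampled pulses.
FS = 40e6  # 40 MHz sampling rate, as in the paper

def tof_by_correlation(ref, sig):
    """Return the lag (in seconds) that maximizes the cross-correlation."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n):
        val = sum(ref[i] * sig[i + lag] for i in range(n - lag))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag / FS

# A pulse delayed by 5 samples corresponds to a TOF of 5 / 40 MHz = 125 ns
ref = [0, 1, 2, 1, 0] + [0] * 15
sig = [0] * 5 + [0, 1, 2, 1, 0] + [0] * 10
```

In practice, sub-sample resolution is obtained by interpolating around the correlation peak, which is where the sensitivity to parasitic effects enters.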
Model-Based, Closed-Loop Control of PZT Creep for Cavity Ring-Down Spectroscopy
McCartt, A D; Ognibene, T J; Bench, G; Turteltaub, K W
2014-01-01
Cavity ring-down spectrometers typically employ a PZT stack to modulate the cavity transmission spectrum. While PZTs ease instrument complexity and aid measurement sensitivity, PZT hysteresis hinders the implementation of cavity-length-stabilized, data-acquisition routines. Once the cavity length is stabilized, the cavity’s free spectral range imparts extreme linearity and precision to the measured spectrum’s wavelength axis. Methods such as frequency-stabilized cavity ring-down spectroscopy have successfully mitigated PZT hysteresis, but their complexity limits commercial applications. Described herein is a single-laser, model-based, closed-loop method for cavity length control. PMID:25395738
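A common way to model PZT creep, and the kind of model such a closed-loop scheme can invert, is the textbook logarithmic form L(t) = L0·(1 + γ·log10(t/t0)); this is a literature-standard form, not necessarily the authors' exact model:

```python
# Textbook logarithmic creep model for PZT actuators (a common form in the
# literature, not necessarily the authors' model).
import math

T0 = 0.1  # reference time in seconds (conventional choice)

def pzt_creep(L0, gamma, t):
    """Displacement at time t > 0 after an initial step L0, creep factor gamma."""
    return L0 * (1.0 + gamma * math.log10(t / T0))

def drive_for_target(target, gamma, t):
    """Invert the model: the initial step that yields `target` at time t
    (a feedforward sketch of model-based creep compensation)."""
    return target / (1.0 + gamma * math.log10(t / T0))
```

Inverting the model in this way is the feedforward half of a compensation loop; the closed-loop controller then trims the residual error.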
Surface-enhanced Raman spectroscopy on coupled two-layer nanorings
NASA Astrophysics Data System (ADS)
Hou, Yumin; Xu, Jun; Wang, Pengwei; Yu, Dapeng
2010-05-01
A reproducible quasi-three-dimensional structure, composed of top and bottom concentric nanorings with the same periodicity but different widths and no overlap in the perpendicular direction, is built up by a separation-layer method, which results in a large enhancement of the surface-enhanced Raman spectroscopy (SERS) signal due to the coupling of plasmons. Simulations show plasmonic focusing with "hot arcs" of electromagnetic enhancement, meeting the need of quantitative SERS with extremely high sensitivities. In addition, the separation-layer method opens a simple and effective way to adjust the coupling of plasmons among nanostructures, which is essential for the fabrication of SERS-based sensors.
den Braver, Michiel W; Vermeulen, Nico P E; Commandeur, Jan N M
2017-03-01
Modification of cellular macromolecules by reactive drug metabolites is considered to play an important role in the initiation of tissue injury by many drugs. Detection and identification of reactive intermediates is often performed by analyzing the conjugates formed after trapping by glutathione (GSH). Although the sensitivity of modern mass spectrometric methods is extremely high, absolute quantification of GSH-conjugates is critically dependent on the availability of authentic references. While ¹H NMR is currently the method of choice for quantification of biosynthetically formed metabolites, its intrinsically low sensitivity can be a limiting factor in quantification of GSH-conjugates, which generally are formed at low levels. In the present study, a simple but sensitive and generic method for absolute quantification of GSH-conjugates is presented. The method is based on quantitative alkaline hydrolysis of GSH-conjugates and subsequent quantification of glutamic acid and glycine by HPLC after precolumn derivatization with o-phthaldialdehyde/N-acetylcysteine (OPA/NAC). Because of the lower stability of the glycine OPA/NAC derivative, quantification via the glutamic acid OPA/NAC derivative proved most suitable for quantification of GSH-conjugates. The novel method was used to quantify the concentrations of GSH-conjugates of diclofenac, clozapine and acetaminophen; quantification was consistent with ¹H NMR, but with a more than 100-fold lower detection limit for absolute quantification. Copyright © 2017. Published by Elsevier B.V.
Wilson, Kate E; Marouga, Rita; Prime, John E; Pashby, D Paul; Orange, Paul R; Crosier, Steven; Keith, Alexander B; Lathe, Richard; Mullins, John; Estibeiro, Peter; Bergling, Helene; Hawkins, Edward; Morris, Christopher M
2005-10-01
Comparative proteomic methods are rapidly being applied to many different biological systems including complex tissues. One pitfall of these methods is that in some cases, such as oncology and neuroscience, tissue complexity requires isolation of specific cell types and sample is limited. Laser microdissection (LMD) is commonly used for obtaining such samples for proteomic studies. We have combined LMD with sensitive thiol-reactive saturation dye labelling of protein samples and 2-D DIGE to identify protein changes in a test system, the isolated CA1 pyramidal neurone layer of a transgenic (Tg) rat carrying a human amyloid precursor protein transgene. Saturation dye labelling proved to be extremely sensitive, with a spot map of over 5,000 proteins being readily produced from 5 µg total protein, with over 100 proteins being significantly altered at p < 0.0005. Of the proteins identified, all showed coherent changes associated with transgene expression. It was, however, difficult to identify significantly different proteins using PMF and MALDI-TOF on gels containing less than 500 µg total protein. The use of saturation dye labelling of limiting samples will therefore require the use of highly sensitive MS techniques to identify the significantly altered proteins isolated using methods such as LMD.
Application of Methods of Numerical Analysis to Physical and Engineering Data.
1980-10-15
directed algorithm would seem to be called for. However, I(0) is itself a random process, making its gradient too unreliable for such a sensitive algorithm...radiation energy on the detector. Active laser systems, on the other hand, have now created the possibility for extremely narrow passband systems...emitted by the earth and its atmosphere. The broad spectral range was selected so that the field of view of the detector could be narrowed to obtain
Damage Effects Identified By Scatter Evaluation Of Supersmooth Surfaces
NASA Astrophysics Data System (ADS)
Stowell, W. K.; Orazio, Fred D.
1983-12-01
The surface quality of optics used in an extremely sensitive laser instrument, such as a Ring Laser Gyro (RLG), is critical. This has led to the development of a Variable Angle Scatterometer at the Air Force Wright Aeronautical Laboratories at Wright-Patterson Air Force Base, which can detect low-level light scatter from the high quality optics used in RLGs, without first overcoating with metals. With this instrument we have been able to identify damage effects that occur during the typical processing and handling of optics, which cause wide variation in subsequent measurements depending on when, in the process, one takes data. These measurements indicate that techniques such as Total Integrated Scatter (TIS) may be inadequate for standards on extremely low scatter optics because of the lack of sensitivity of the method on such surfaces. The general term for optical surfaces better than the lowest level of the scratch-dig standards has become "supersmooth", and is seen in technical literature as well as in advertising. A performance number, such as the Bidirectional Reflectance Distribution Function (BRDF), which can be measured from the uncoated optical surface by equipment such as the Variable Angle Scatterometer (VAS), is proposed as a method of generating better optical surface specifications. Data show that surfaces of average BRDF values near 10 parts per billion per steradian (0.010 ppm/sr) for 0-(301 = 0.5, are now possible and measurable.
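The sensitivity argument can be made concrete with the classical smooth-surface relation TIS ≈ (4πσ/λ)², which links total integrated scatter to RMS roughness σ (standard scatter theory, not taken from this abstract):

```python
# Classical smooth-surface scatter relation (standard theory, not from this
# paper): fractional TIS at normal incidence for RMS roughness sigma_rms.
import math

def tis(sigma_rms, wavelength):
    """Fractional total integrated scatter, TIS = (4*pi*sigma/lambda)**2."""
    return (4.0 * math.pi * sigma_rms / wavelength) ** 2
```

At the HeNe wavelength of 632.8 nm, a supersmooth surface with σ = 0.1 nm gives a TIS of only a few parts per million, near the floor of a TIS instrument, which is exactly the lack of sensitivity the abstract raises.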
Photoacoustic sensor for medical diagnostics
NASA Astrophysics Data System (ADS)
Wolff, Marcus; Groninga, Hinrich G.; Harde, Hermann
2004-03-01
The development of new optical sensor technologies has a major impact on the progress of diagnostic methods. Among the steadily increasing number of non-invasive breath tests, the 13C-Urea Breath Test (UBT) for the detection of Helicobacter pylori is the most prominent. However, many recent developments, like the detection of cancer by breath test, go beyond gastroenterological applications. We present a new detection scheme for breath analysis that employs an especially compact and simple set-up. Photoacoustic Spectroscopy (PAS) is an offset-free technique that allows for short absorption paths and small sample cells. Using a single-frequency diode laser and taking advantage of the acoustic resonances of the sample cell, we performed extremely sensitive and selective measurements. The data processing method also contributes to the extraordinary sensitivity and selectivity. In addition, the reasonable acquisition cost and low operational cost make this detection scheme attractive for many biomedical applications. The experimental set-up and data processing method, together with exemplary isotope-selective measurements on carbon dioxide, are presented.
A magnetic method for determining the geometry of hydraulic fractures
Byerlee, J.D.; Johnston, M.J.S.
1976-01-01
We propose a method that may be used to determine the spatial orientation of the fracture plane developed during hydraulic fracture. In the method, magnetic particles are injected into the crack with the fracturing fluid so as to generate a sheet of magnetized material. Since the magnetization of a body with extreme dimension ratios, such as a crack, exceeds that of an equidimensional body, and since this magnetization is sensitive both to orientation and geometry, this could be used to obtain information about the crack. By measuring the vertical and horizontal components of the magnetic field and field gradients at the earth's surface surrounding the injection well with superconducting magnetometers having 10⁻⁴ gamma sensitivity, and also by measuring field direction within the well itself, it should be possible to calculate the orientation and perhaps infer the approximate geometry of the fracture surface. Experiments on electric field potential operated in conjunction with this experiment could further constrain estimates of shape and orientation. © 1976 Birkhäuser Verlag.
Solid oxide fuel cell simulation and design optimization with numerical adjoint techniques
NASA Astrophysics Data System (ADS)
Elliott, Louie C.
This dissertation reports on the application of numerical optimization techniques as applied to fuel cell simulation and design. Due to the "multi-physics" inherent in a fuel cell, which results in a highly coupled and non-linear behavior, an experimental program to analyze and improve the performance of fuel cells is extremely difficult. This program applies new optimization techniques with computational methods from the field of aerospace engineering to the fuel cell design problem. After an overview of fuel cell history, importance, and classification, a mathematical model of solid oxide fuel cells (SOFC) is presented. The governing equations are discretized and solved with computational fluid dynamics (CFD) techniques including unstructured meshes, non-linear solution methods, numerical derivatives with complex variables, and sensitivity analysis with adjoint methods. Following the validation of the fuel cell model in 2-D and 3-D, the results of the sensitivity analysis are presented. The sensitivity derivative for a cost function with respect to a design variable is found with three increasingly sophisticated techniques: finite difference, direct differentiation, and adjoint. A design cycle is performed using a simple optimization method to improve the value of the implemented cost function. The results from this program could improve fuel cell performance and lessen the world's dependence on fossil fuels.
Ab initio atomic recombination reaction energetics on model heat shield surfaces
NASA Technical Reports Server (NTRS)
Senese, Fredrick; Ake, Robert
1992-01-01
Ab initio quantum mechanical calculations on small hydration complexes involving the nitrate anion are reported. The self-consistent field method with accurate basis sets has been applied to compute completely optimized equilibrium geometries, vibrational frequencies, thermochemical parameters, and stable site labilities of complexes involving 1, 2, and 3 waters. The most stable geometries in the first hydration shell involve in-plane waters bridging pairs of nitrate oxygens with two equal and bent hydrogen bonds. A second extremely labile local minimum involves out-of-plane waters with a single hydrogen bond and lies about 2 kcal/mol higher. The potential in the region of the second minimum is extremely flat and qualitatively sensitive to changes in the basis set; it does not correspond to a true equilibrium structure.
Nanomaterial-based electrochemical sensors for arsenic - A review.
Kempahanumakkagari, Sureshkumar; Deep, Akash; Kim, Ki-Hyun; Kumar Kailasa, Suresh; Yoon, Hye-On
2017-09-15
The existence of arsenic in the environment poses severe global health threats. Considering its toxicity, the sensing of arsenic is extremely important. Due to the complexity of environmental and biological samples, many of the available detection methods for arsenic have serious limitations on selectivity and sensitivity. To improve sensitivity and selectivity and to circumvent interferences, different electrode systems have been developed based on surface modification with nanomaterials including carbonaceous nanomaterials, metallic nanoparticles (MNPs), metal nanotubes (MNTs), and even enzymes. Despite the progress made in electrochemical sensing of arsenic, some issues still need to be addressed to realize cost effective, portable, and flow-injection type sensor systems. The present review provides an in-depth evaluation of the nanoparticle-modified electrode (NME) based methods for the electrochemical sensing of arsenic. NME based sensing systems are projected to become an important option for monitoring hazardous pollutants in both environmental and biological media. Copyright © 2017 Elsevier B.V. All rights reserved.
Comparative economics of space resource utilization
NASA Technical Reports Server (NTRS)
Cutler, Andrew Hall
1991-01-01
Physical economic factors such as mass payback ratio, total payback ratio, and capital payback time are discussed and used to compare the economics of using resources from the Moon, Mars and its moons, and near Earth asteroids to serve certain near term markets such as propellant in low Earth orbit or launched mass reduction for lunar and Martian exploration. Methods for accounting for the time cost of money in simple figures of merit such as MPRs are explored and applied to comparisons such as those between lunar, Martian, and asteroidal resources. Methods for trading off capital and operating costs to compare schemes with substantially different capital to operating cost ratio are presented and discussed. Areas where further research or engineering would be extremely useful in reducing economic uncertainty are identified, as are areas where economic merit is highly sensitive to engineering performance - as well as areas where such sensitivity is surprisingly low.
Nayeri, Fatemeh; Shariat, Mamak; Mousavi Behbahani, Hamid Modarres; Dehghan, Padideh; Ebrahim, Bita
2014-01-01
Hypoglycemia is considered a serious risk factor in neonates. In the majority of cases, it occurs with no clinical symptoms. Accordingly, early diagnosis is extremely important and can reduce morbidity and mortality. The aim of this study was to assess the value of screening blood glucose using a glucometer (a quick and cost-effective diagnostic test) in comparison with the laboratory method. A total of 219 neonates at risk of hypoglycemia were included in this study. Blood glucose was measured both by glucometer and in the laboratory, and capillary blood glucose was measured by glucometer at the same time. The sensitivity and specificity of capillary blood glucose measurement by glucometer were 83.5% and 97.5%, respectively (PPV = 80%, NPV = 98%). Capillary blood glucose measured by glucometer therefore has acceptable sensitivity and specificity for measuring neonatal blood glucose, and measurement by glucometer is recommended as a proper diagnostic test.
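The reported figures follow from the standard 2×2 confusion-matrix definitions; a sketch with illustrative counts (not the study's raw data):

```python
# Standard 2x2 confusion-matrix definitions behind sensitivity, specificity,
# PPV and NPV (the counts in the test below are illustrative, not the study's).
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common hypoglycemia is in the screened population.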
Gadaleta, Alessandro; Biance, Anne-Laure; Siria, Alessandro; Bocquet, Lyderic
2015-05-07
A challenge for the development of nanofluidics is to develop new instrumentation tools, able to probe the extremely small mass transport across individual nanochannels. Such tools are a prerequisite for the fundamental exploration of the breakdown of continuum transport in nanometric confinement. In this letter, we propose a novel method for the measurement of the hydrodynamic permeability of nanometric pores, by diverting the classical technique of Coulter counting to characterize a pressure-driven flow across an individual nanopore. Both the analysis of the translocation rate, as well as the detailed statistics of the dwell time of nanoparticles flowing across a single nanopore, allow us to evaluate the permeability of the system. We reach a sensitivity for the water flow down to a few femtoliters per second, which is more than two orders of magnitude better than state-of-the-art alternative methods.
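The core counting idea reduces to simple arithmetic: if particles translocate at rate J and the suspension contains c particles per unit volume, the volumetric flow is Q = J/c (our simplification; the paper additionally exploits dwell-time statistics):

```python
# Back-of-the-envelope version of the Coulter-counting flow measurement:
# translocation rate J (events/s) from a suspension of number concentration
# c (particles per liter) implies a volumetric flow Q = J / c.
def flow_from_counts(events_per_s, particles_per_liter):
    """Volumetric flow in liters per second."""
    return events_per_s / particles_per_liter

# e.g. 0.5 translocations/s at 1e14 particles/L corresponds to 5 fL/s
```

Numbers of this order match the few-femtoliter-per-second sensitivity quoted in the abstract.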
Computer-aided detection of renal calculi from noncontrast CT images using TV-flow and MSER features
Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B.; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M.
2015-01-01
Purpose: Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. Methods: The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculi candidates. Finally, the authors computed texture and shape features that were imported to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with detection approaches using anisotropic diffusion and no smoothing. Results: At a false positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 0.001) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and no smoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Conclusions: Experimental results demonstrated that TV-flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high-noise CTC images. Thus, the proposed method can potentially improve diagnosis. PMID:25563255
Statistical Evaluation and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, A. M.; McGhee, D. S.
2003-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This Technical Publication examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica to calculate the combined value corresponding to any desired percentile is then presented along with a curve fit to this value. Another Excel macro that calculates the combination using Monte Carlo simulation is shown. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Additionally, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can substantially lower the design loading without losing any of the identified structural reliability.
Statistical Comparison and Improvement of Methods for Combining Random and Harmonic Loads
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; McGhee, David S.
2004-01-01
Structures in many environments experience both random and harmonic excitation. A variety of closed-form techniques has been used in the aerospace industry to combine the loads resulting from the two sources. The resulting combined loads are then used to design for both yield/ultimate strength and high-cycle fatigue capability. This paper examines the cumulative distribution function (CDF) percentiles obtained using each method by integrating the joint probability density function of the sine and random components. A new Microsoft Excel spreadsheet macro that links with the software program Mathematica is then used to calculate the combined value corresponding to any desired percentile along with a curve fit to this value. Another Excel macro is used to calculate the combination using a Monte Carlo simulation. Unlike the traditional techniques, these methods quantify the calculated load value with a consistent percentile. Using either of the presented methods can be extremely valuable in probabilistic design, which requires a statistical characterization of the loading. Also, since the CDF at high probability levels is very flat, the design value is extremely sensitive to the predetermined percentile; therefore, applying the new techniques can lower the design loading substantially without losing any of the identified structural reliability.
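The Monte Carlo combination described in these papers can be sketched in a few lines (the distributional choices here, a Gaussian random component and a uniformly distributed sine phase, are standard assumptions, not necessarily the macro's exact formulation):

```python
# Minimal Monte Carlo sketch of combining a Gaussian random load with a
# harmonic load of random phase, then reading off a high CDF percentile.
import math
import random

def combined_percentile(sigma_rand, sine_amp, pct=0.9987, n=200_000, seed=1):
    """pct-percentile of the combined load magnitude |random + harmonic|."""
    random.seed(seed)
    samples = sorted(
        abs(random.gauss(0.0, sigma_rand)
            + sine_amp * math.sin(random.uniform(0.0, 2.0 * math.pi)))
        for _ in range(n)
    )
    return samples[int(pct * n)]
```

Because the CDF is very flat at such high percentiles, the returned value moves noticeably with `pct`, which is exactly the sensitivity the papers highlight.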
Sertsu, M G; Nardello, M; Giglia, A; Corso, A J; Maurizio, C; Juschkin, L; Nicolosi, P
2015-12-10
Accurate measurements of optical properties of multilayer (ML) mirrors and chemical compositions of interdiffusion layers are particularly challenging to date. In this work, an innovative and nondestructive experimental characterization method for multilayers is discussed. The method is based on extreme ultraviolet (EUV) reflectivity measurements performed on a wide grazing incidence angular range at an energy near the absorption resonance edge of low-Z elements in the ML components. This experimental method combined with the underlying physical phenomenon of abrupt changes of optical constants near EUV resonance edges enables us to characterize optical and structural properties of multilayers with high sensitivity. A major advantage of the method is to perform detailed quantitative analysis of buried interfaces of multilayer structures in a nondestructive and nonimaging setup. Coatings of Si/Mo multilayers on a Si substrate with period d=16.4 nm, number of bilayers N=25, and different capping structures are investigated. Stoichiometric compositions of Si-on-Mo and Mo-on-Si interface diffusion layers are derived. Effects of surface oxidation reactions and carbon contaminations on the optical constants of capping layers and the impact of neighboring atoms' interactions on optical responses of Si and Mo layers are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilcox, Kevin R.; Shi, Zheng; Gherardi, Laureano A.
Climatic changes are altering Earth's hydrological cycle, resulting in altered precipitation amounts, increased interannual variability of precipitation, and more frequent extreme precipitation events. These trends will likely continue into the future, having substantial impacts on net primary productivity (NPP) and associated ecosystem services such as food production and carbon sequestration. Frequently, experimental manipulations of precipitation have linked altered precipitation regimes to changes in NPP. Yet, findings have been diverse and substantial uncertainty still surrounds generalities describing patterns of ecosystem sensitivity to altered precipitation. Additionally, we do not know whether previously observed correlations between NPP and precipitation remain accurate when precipitation changes become extreme. We synthesized results from 83 case studies of experimental precipitation manipulations in grasslands worldwide. Here, we used meta-analytical techniques to search for generalities and asymmetries of aboveground NPP (ANPP) and belowground NPP (BNPP) responses to both the direction and magnitude of precipitation change. Sensitivity (i.e., productivity response standardized by the amount of precipitation change) of BNPP was similar under precipitation additions and reductions, but ANPP was more sensitive to precipitation additions than reductions; this was especially evident in drier ecosystems. Additionally, overall relationships between the magnitude of productivity responses and the magnitude of precipitation change were saturating in form. The saturating form of this relationship was likely driven by ANPP responses to very extreme precipitation increases, although there were limited studies imposing extreme precipitation change, and there was considerable variation among experiments.
Finally, this highlights the importance of incorporating gradients of manipulations, ranging from extreme drought to extreme precipitation increases, into future climate change experiments. Additionally, policy and land management decisions related to global change scenarios should consider how ANPP and BNPP responses may differ, and that ecosystem responses to extreme events might not be predicted from relationships found under moderate environmental changes.
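As a sketch of the sensitivity metric defined above (productivity response standardized by the amount of precipitation change), the following snippet is illustrative only; the function name and the NPP/precipitation numbers are hypothetical, not values from the synthesis.

```python
# Hypothetical illustration of the "sensitivity" metric used in the
# meta-analysis: NPP response per mm of imposed precipitation change.
def precipitation_sensitivity(npp_treatment, npp_control, delta_precip_mm):
    """Return productivity response standardized by precipitation change
    (units: g m^-2 yr^-1 per mm)."""
    if delta_precip_mm == 0:
        raise ValueError("precipitation change must be nonzero")
    return (npp_treatment - npp_control) / delta_precip_mm

# Example: ANPP rises from 300 to 360 g/m^2 under a +200 mm addition.
s_add = precipitation_sensitivity(360.0, 300.0, 200.0)   # 0.3 g m^-2 per mm
# ANPP falls from 300 to 270 g/m^2 under a -200 mm reduction.
s_red = precipitation_sensitivity(270.0, 300.0, -200.0)  # 0.15 g m^-2 per mm
# s_add > s_red mimics the reported asymmetry: ANPP more sensitive to
# additions than to reductions.
```

Standardizing by the magnitude of the manipulation is what makes responses comparable across experiments with very different imposed precipitation changes.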
NASA Astrophysics Data System (ADS)
Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.
2015-12-01
Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinder any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies, most of which have been criticized for physical inconsistency and non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in (i) climatological variables and (ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes, with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme, based on a large regional climate model ensemble generated by the distributed weather@home setup [1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias-correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties.
The present study is a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/
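Bias correction of the kind criticized above can be illustrated with a minimal empirical quantile-mapping sketch; this is a generic scheme on synthetic data, not the ensemble-resampling method introduced in the study.

```python
import numpy as np

# Minimal empirical quantile mapping: map model values through the
# model's empirical CDF onto the observed quantiles.
def quantile_map(model, obs, values):
    """Bias-correct `values` by matching model quantiles to obs quantiles."""
    ranks = np.searchsorted(np.sort(model), values) / float(len(model))
    return np.quantile(obs, np.clip(ranks, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 2.0, size=5000)                 # "observed" variable
model = 1.3 * rng.gamma(2.0, 2.0, size=5000) + 1.0   # biased model output
corrected = quantile_map(model, obs, model)
# The univariate marginal is recovered, but applying this variable by
# variable can break multivariate correlations and physical consistency,
# which is exactly the criticism raised above.
```

The corrected series has approximately the observed mean and distribution, while the raw model output remains biased.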
Chemiluminescence Resonance Energy Transfer-based Detection for Microchip Electrophoresis
Huang, Yong; Shi, Ming; Liu, Rongjun
2010-01-01
Since the channels in micro- and nanofluidic devices are extremely small, sensitive detection is required following microchip electrophoresis (MCE). This work describes a highly sensitive and yet universal detection scheme based on chemiluminescence resonance energy transfer (CRET) for MCE. It was found that an efficient CRET occurred between a luminol donor and a CdTe quantum dot (QD) acceptor in the luminol-NaBrO-QD system, and that it was sensitively suppressed by the presence of certain organic compounds of biological interest including biogenic amines and thiols, amino acids, organic acids, and steroids. These findings allowed the development of sensitive MCE-CL assays for the tested compounds. The proposed MCE-CL methods showed desirable analytical figures of merit, such as a wide concentration range of linear response. Detection limits obtained were ~10⁻⁹ M for biogenic amines including dopamine and epinephrine, and ~10⁻⁸ M for biogenic thiols (e.g. glutathione and acetylcysteine), organic acids (i.e. ascorbic acid and uric acid), estrogens, and native amino acids. These were 10 to 1000 times more sensitive than those of previously reported MCE-based methods with chemiluminescence, electrochemical, or laser-induced fluorescence detection for quantifying the corresponding compounds. To evaluate the applicability of the present MCE-CL method for analyzing real biological samples, it was used to determine amino acids in individual human red blood cells. Nine amino acids including Lys, Ser, Ala, Glu, Trp, etc. were detected. The contents ranged from 3 to 31 amol/cell. The assay proved to be simple, quick, reproducible, and very sensitive. PMID:20121202
B1- non-uniformity correction of phased-array coils without measuring coil sensitivity.
Damen, Frederick C; Cai, Kejia
2018-04-18
Parallel imaging can be used to increase SNR and shorten acquisition times, albeit at the cost of image non-uniformity. B1- non-uniformity correction techniques are confounded by signal that varies not only due to coil-induced B1- sensitivity variation, but also the object's own intrinsic signal. Herein, we propose a method that makes minimal assumptions and uses only the coil images themselves to produce a single combined B1- non-uniformity-corrected complex image with the highest available SNR. A novel background noise classifier is used to select voxels of sufficient quality to avoid the need for regularization. Unique properties of the magnitude and phase were used to reduce the B1- sensitivity to two joint additive models for estimation of the B1- inhomogeneity. The complementary corruption of the imaged object across the coil images is used to abate individual coil correction imperfections. Results are presented from two anatomical cases: (a) an abdominal image that is challenging in both extreme B1- sensitivity and intrinsic tissue signal variation, and (b) a brain image with moderate B1- sensitivity and intrinsic tissue signal variation. A new relative Signal-to-Noise Ratio (rSNR) quality metric is proposed to evaluate the performance of the proposed method and the RF receiving coil array. The proposed method has been shown to be robust to imaged objects with widely inhomogeneous intrinsic signal, and resilient to poorly performing coil elements. Copyright © 2018. Published by Elsevier Inc.
Yang, Weijuan; Zhang, Hongyan; Li, Mengxue; Wang, Zonghua; Zhou, Jie; Wang, Shihua; Lu, Guodong; Fu, FengFu
2014-11-19
As the agent of one of the most destructive and widespread diseases of rice, Magnaporthe oryzae (also called Magnaporthe grisea) has a significant negative impact on rice production. There is therefore still high demand for extremely sensitive and accurate methods for the early diagnosis of Magnaporthe oryzae (M. oryzae). In this study, we developed a novel magnetic-controllable electrochemical biosensor for the ultrasensitive and specific detection of M. oryzae in rice plants, using M. oryzae's chitinases (Mgchi) as the biochemical marker and a rice (Oryza sativa) cDNA-encoded mannose-binding jacalin-related lectin (Osmbl) as the recognition probe. The proposed biosensor combines the merits of chronoamperometry, an electrically magnetic-controllable gold electrode, and magnetic bead (MB)-based palladium nanoparticle (PdNP) catalysis amplification, and offers ultra-high sensitivity and specificity for the detection of trace M. oryzae in rice plants. It can be used to detect M. oryzae in the initial infection stage (before any symptomatic lesions are observed), helping farmers manage the disease in a timely manner. In comparison with previous methods, the proposed method has notable advantages such as higher sensitivity, excellent specificity, short analysis time, robust tolerance of complex matrices, and low cost. The success of this study provides a reliable approach for the early diagnosis and fast screening of M. oryzae in rice plants. Copyright © 2014 Elsevier B.V. All rights reserved.
An AST-ELM Method for Eliminating the Influence of Charging Phenomenon on ECT.
Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao
2017-12-09
Electrical capacitance tomography (ECT) is a promising technology for imaging permittivity distributions in multiphase flow. To reduce the effect of the charging phenomenon on ECT measurement, an improved extreme learning machine method combined with adaptive soft-thresholding (AST-ELM) is presented and studied for image reconstruction. This method provides a nonlinear mapping model between the capacitance values and the medium distribution by machine learning rather than an electromagnetic sensitivity mechanism. Both simulation and experimental tests are carried out to validate the performance of the presented method, and reconstructed images are evaluated by relative error and correlation coefficient. The results illustrate that the image reconstruction accuracy of the proposed AST-ELM method is greatly improved over that of conventional methods in the presence of a charging object.
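The learner at the core of AST-ELM, an extreme learning machine, trains only the output weights on top of a fixed random hidden layer. The sketch below shows a bare-bones ELM on synthetic data; the adaptive soft-thresholding step and the real capacitance-to-image mapping are omitted, so all data and sizes here are illustrative.

```python
import numpy as np

class ELM:
    """Bare-bones extreme learning machine: a fixed random hidden layer,
    with only the output weights fit by least squares."""
    def __init__(self, n_hidden=64, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        d = X.shape[1]
        # Random input weights stay fixed; scaling keeps tanh unsaturated.
        self.W = self.rng.normal(size=(d, self.n_hidden)) / np.sqrt(d)
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random feature map
        self.beta, *_ = np.linalg.lstsq(H, Y, rcond=None)
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy stand-in for the capacitance-to-image mapping (synthetic data):
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                      # "capacitance" vectors
Y = np.sin(X).sum(axis=1, keepdims=True)            # smooth synthetic target
elm = ELM().fit(X[:150], Y[:150])
test_mse = float(np.mean((elm.predict(X[150:]) - Y[150:]) ** 2))
```

Because only the linear output layer is trained, fitting reduces to one least-squares solve, which is why ELMs train quickly compared with iteratively trained networks.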
NASA Astrophysics Data System (ADS)
Cook, L. M.; Samaras, C.; McGinnis, S. A.
2017-12-01
Intensity-duration-frequency (IDF) curves are a common input to urban drainage design, and are used to represent extreme rainfall in a region. As rainfall patterns shift into a non-stationary regime as a result of climate change, these curves will need to be updated with future projections of extreme precipitation. Many regions have begun to update these curves to reflect the trends from downscaled climate models; however, few studies have compared the methods for doing so, or examined the uncertainty that results from the selection of the native grid scale and temporal resolution of the climate model. This study examines the variability in updated IDF curves for Pittsburgh using four different methods for adjusting gridded regional climate model (RCM) outputs into station-scale precipitation extremes: (1) a simple change factor applied to observed return levels, (2) a naïve adjustment of stationary and non-stationary Generalized Extreme Value (GEV) distribution parameters, (3) a transfer function of the GEV parameters from the annual maximum series, and (4) kernel density distribution mapping bias correction of the RCM time series. Return level estimates (rainfall intensities) and confidence intervals from these methods for the 1-hour to 48-hour durations are tested for sensitivity to the underlying spatial and temporal resolution of the climate ensemble from the NA-CORDEX project, as well as the future time period for updating. The first goal is to determine whether uncertainty is highest for: (i) the downscaling method, (ii) the climate model resolution, (iii) the climate model simulation, (iv) the GEV parameters, or (v) the future time period examined. Initial results for the 6-hour, 10-year return level adjusted with the simple change factor method, using four climate model simulations at two different spatial resolutions, show that uncertainty is highest in the estimation of the GEV parameters.
The second goal is to determine if complex downscaling methods and high-resolution climate models are necessary for updating, or if simpler methods and lower resolution climate models will suffice. The final results can be used to inform the most appropriate method and climate model resolutions to use for updating IDF curves for urban drainage design.
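Method (1) above amounts to scaling an observed GEV return level by a model-derived change factor. A minimal sketch follows, with hypothetical GEV parameters and an assumed +12% change factor (neither taken from the study):

```python
import math

def gev_return_level(T, xi, loc, scale):
    """GEV quantile exceeded on average once every T years (xi != 0):
    z_T = loc + (scale/xi) * ((-ln(1 - 1/T))**(-xi) - 1)."""
    y = -math.log(1.0 - 1.0 / T)            # reduced variate
    return loc + (scale / xi) * (y ** (-xi) - 1.0)

# Hypothetical stationary GEV fit to observed 6-h annual maxima (mm):
xi, loc, scale = 0.1, 40.0, 10.0
rl10_obs = gev_return_level(10.0, xi, loc, scale)   # ~65.2 mm
# Simple change factor from the RCM ensemble (assumed +12% here):
rl10_future = 1.12 * rl10_obs
```

The other three methods differ in where the adjustment is applied (to the GEV parameters, to a transfer function, or to the full time series), but all ultimately feed updated return levels back into the IDF curves.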
Haque, Farzin; Lunn, Jennifer; Fang, Huaming; Smithrud, David; Guo, Peixuan
2012-01-01
A highly sensitive and reliable method to sense and identify a single chemical at extremely low concentrations and high contamination is important for environmental surveillance, homeland security, athlete drug monitoring, toxin/drug screening, and early disease diagnosis. This manuscript reports a method for precise detection of single chemicals. The hub of the bacteriophage phi29 DNA packaging motor is a connector consisting of twelve protein subunits encircling a 3.6-nm channel that serves as a path for dsDNA to enter during packaging and to exit during infection. The connector has previously been inserted into a lipid bilayer to serve as a membrane-embedded channel. Herein we report the modification of the phi29 channel to develop a class of sensors to detect single chemicals. Lysine-234 of each protein subunit was mutated to cysteine, generating a 12-SH ring lining the channel wall. Chemicals passing through this robust channel and interacting with the SH groups generated extremely reliable, precise, and sensitive current signatures as revealed by single-channel conductance assays. Ethane (57 Daltons), thymine (167 Daltons), and benzene (105 Daltons) with reactive thioester moieties were clearly discriminated upon interaction with the available set of cysteine residues. The covalent attachment of each analyte induced discrete step-wise blockage in the current signature with a corresponding decrease in conductance due to the physical blocking of the channel. Transient binding of the chemicals also produced characteristic fingerprints that were deduced from the unique blockage amplitude and pattern of the signals. This study shows that the phi29 connector can be used to sense chemicals with reactive thioesters or maleimide using single-channel conductance assays based on their distinct fingerprints. The results demonstrated that this channel system could be further developed into very sensitive sensing devices. PMID:22458779
Economic Evidence on the Health Impacts of Climate Change in Europe
Hutton, Guy; Menne, Bettina
2014-01-01
BACKGROUND In responding to the health impacts of climate change, economic evidence and tools inform decision makers of the efficiency of alternative health policies and interventions. In a time when sweeping budget cuts are affecting all tiers of government, economic evidence on health protection from climate change spending enables comparison with other public spending. METHODS The review included 53 countries of the World Health Organization (WHO) European Region. Literature was obtained using a Medline and Internet search of key terms in published reports and peer-reviewed literature, and from institutions working on health and climate change. Articles were included if they provided economic estimation of the health impacts of climate change or adaptation measures to protect health from climate change in the WHO European Region. Economic studies are classified under health impact cost, health adaptation cost, and health economic evaluation (comparing both costs and impacts). RESULTS A total of 40 relevant studies from Europe were identified, covering the health damage or adaptation costs related to the health effects of climate change and response measures to climate-sensitive diseases. No economic evaluation studies were identified of response measures specific to the impacts of climate change. Existing studies vary in terms of the economic outcomes measured and the methods for evaluation of health benefits. The lack of robust health impact data underlying economic studies significantly affects the availability and precision of economic studies. CONCLUSIONS Economic evidence in European countries on the costs of and response to climate-sensitive diseases is extremely limited and fragmented. 
Further studies are urgently needed that examine health impacts and the costs and efficiency of alternative responses to climate-sensitive health conditions, in particular extreme weather events (other than heat) and potential emerging diseases and other conditions threatening Europe. PMID:25452694
Functional design of electrolytic biosensor
NASA Astrophysics Data System (ADS)
Gamage Preethichandra, D. M.; Mala Ekanayake, E. M. I.; Onoda, M.
2017-11-01
A novel amperometric biosensor based on conjugated polypyrrole (PPy) deposited on a Pt-modified ITO (indium tin oxide) conductive glass substrate and its performance are described. We present a method of developing a highly sensitive and low-cost nano-biosensor for blood glucose measurements. The proposed fabrication method decreases the cost of production significantly, as the amount of noble metal used is minimized. A nano-corrugated PPy substrate was developed through pulsed electrochemical deposition. The sensitivity achieved was 325 mA/(M·cm²) and the linear range of the developed sensor was 50-60 mmol/l. Electrophoresis was then applied to load the glucose oxidase (GOx) onto the PPy substrate. The main reason behind this high enzyme loading is the high electric field applied between the sensor surface (working electrode) and the counter electrode, which pushes the nano-scale enzyme particles floating in the phosphate buffer solution towards the substrate. This novel technique provides extremely high sensitivity and a very wide linear range for the enzyme (GOx), and we therefore conclude that it is a very good technique for loading enzymes onto conducting polymer substrates.
NASA Astrophysics Data System (ADS)
Jia, Yali; Qin, Jia; Zhi, Zhongwei; Wang, Ruikang K.
2011-08-01
The primary pathophysiology of peripheral arterial disease is associated with impaired perfusion to the muscle tissue in the lower extremities. The lack of effective pharmacologic treatments that stimulate vessel collateralization emphasizes the need for an imaging method that can be used to dynamically visualize depth-resolved microcirculation within muscle tissues. Optical microangiography (OMAG) is a recently developed label-free imaging method capable of producing three-dimensional images of dynamic blood perfusion within microcirculatory tissue beds at an imaging depth of up to ~2 mm, with an unprecedented imaging sensitivity of blood flow at ~4 μm/s. In this paper, we demonstrate the utility of OMAG in imaging the detailed blood flow distributions, at a capillary-level resolution, within skeletal muscles of mice. By use of the mouse model of hind-limb ischemia, we show that OMAG can assess the time-dependent changes in muscle perfusion and perfusion restoration along tissue depth. These findings indicate that OMAG can represent a sensitive, consistent technique to effectively study pharmacologic therapies aimed at promoting the growth and development of collateral vessels.
Landscape sensitivity in a dynamic environment
NASA Astrophysics Data System (ADS)
Lin, Jiun-Chuan; Jen, Chia-Horn
2010-05-01
Landscape sensitivity at different scales and for different topics is presented in this study, with a largely methodological approach. According to environmental records from south-eastern Asia, environmental change is strongly related to five factors: the scale of the influenced area, the background environmental characteristics, the magnitude and frequency of events, the thresholds at which hazards occur, and the influence of time. This paper attempts to demonstrate these five points using historical and present-day data. Landscape sensitivity is found to be closely related to the degree of vulnerability of the land and to the processes acting on it, including human activities. The scale of sensitivity and the evaluation of sensitivities are demonstrated with data from across East Asia. The classification methods are based mainly on the analysis of environmental data and records of hazards. Trends in rainfall records, rainfall intensity, temperature change, the magnitude and frequency of earthquakes, dust storms, days of drought, and the number of hazards coincide closely with landscape sensitivities. In conclusion, landscape sensitivities can be classified into four groups: physically stable, physically unstable, unstable, and extremely unstable. This paper explains these differences.
Advancing the sensitivity of selected reaction monitoring-based targeted quantitative proteomics
Shi, Tujin; Su, Dian; Liu, Tao; Tang, Keqi; Camp, David G.; Qian, Wei-Jun; Smith, Richard D.
2012-01-01
Selected reaction monitoring (SRM)—also known as multiple reaction monitoring (MRM)—has emerged as a promising high-throughput targeted protein quantification technology for candidate biomarker verification and systems biology applications. A major bottleneck for current SRM technology, however, is insufficient sensitivity for, e.g., detecting low-abundance biomarkers likely present in the low ng/mL to pg/mL range in human blood plasma or serum, or extremely low-abundance signaling proteins in cells or tissues. Herein we review recent advances in methods and technologies, including front-end immunoaffinity depletion, fractionation, selective enrichment of target proteins/peptides including posttranslational modifications (PTMs), as well as advances in MS instrumentation, which have significantly enhanced the overall sensitivity of SRM assays and enabled the detection of low-abundance proteins at low to sub-ng/mL levels in human blood plasma or serum. General perspectives on the potential of achieving sufficient sensitivity for detection of pg/mL level proteins in plasma are also discussed. PMID:22577010
Liu, Jianfei; Wang, Shijun; Turkbey, Evrim B; Linguraru, Marius George; Yao, Jianhua; Summers, Ronald M
2015-01-01
Renal calculi are common extracolonic incidental findings on computed tomographic colonography (CTC). This work aims to develop a fully automated computer-aided diagnosis system to accurately detect renal calculi on CTC images. The authors developed a total variation (TV) flow method to reduce image noise within the kidneys while maintaining the characteristic appearance of renal calculi. Maximally stable extremal region (MSER) features were then calculated to robustly identify calculus candidates. Finally, the authors computed texture and shape features that were fed to support vector machines for calculus classification. The method was validated on a dataset of 192 patients and compared to a baseline approach that detects calculi by thresholding. The authors also compared their method with detection approaches using anisotropic diffusion and no smoothing. At a false-positive rate of 8 per patient, the sensitivities of the new method and the baseline thresholding approach were 69% and 35% (p < 10⁻³) on all calculi from 1 to 433 mm³ in the testing dataset. The sensitivities of the detection methods using anisotropic diffusion and no smoothing were 36% and 0%, respectively. The sensitivity of the new method increased to 90% if only larger and more clinically relevant calculi were considered. Experimental results demonstrated that TV flow and MSER features are efficient means to robustly and accurately detect renal calculi on low-dose, high-noise CTC images. Thus, the proposed method can potentially improve diagnosis.
Jöres, A P W; Heverhagen, J T; Bonél, H; Exadaktylos, A; Klink, T
2016-02-01
The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). 106 multiple trauma patients (female: 33; male: 73) were retrospectively included in this study. All patients underwent LS of the whole body, including extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2 %, the specificity was 93.3 %, the positive predictive value was 91 %, and the negative predictive value was 57.5 %. The overall sensitivity for vertebral fractures was 16.7 %, and the specificity was 100 %. The sensitivity was 48.7 % and the specificity 98.2 % for all other fractures. Pneumothoraces were detected in 12 patients by CT, but not by LS. 40 extremity fractures were detected by LS, of which 4 fractures were dislocated, and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging, and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT. The overall sensitivity of LS for truncal skeleton injuries in multiple-trauma patients was < 50 %. The diagnostic reference standard MSCT is the preferred and reliable imaging modality. LS may be valuable for quick detection of extremity fractures. © Georg Thieme Verlag KG Stuttgart · New York.
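The accuracy figures above follow from a standard 2×2 confusion matrix against the MSCT reference standard. The counts in this sketch are invented to roughly reproduce the reported rates (the study reports rates, not raw counts):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Hypothetical finding-level counts chosen to roughly match the study:
m = diagnostic_metrics(tp=59, fp=6, fn=61, tn=84)
# sensitivity = 59/120 ~ 49.2%, specificity = 84/90 ~ 93.3%,
# ppv = 59/65 ~ 90.8%, npv = 84/145 ~ 57.9%
```

The combination of high specificity/PPV with low sensitivity/NPV is exactly the pattern that makes LS useful as a fast adjunct but unreliable as a stand-alone screen.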
Extreme rainfall, vulnerability and risk: a continental-scale assessment for South America.
Vörösmarty, Charles J; Bravo de Guenni, Lelys; Wollheim, Wilfred M; Pellerin, Brian; Bjerklie, David; Cardoso, Manoel; D'Almeida, Cassiano; Green, Pamela; Colon, Lilybeth
2013-11-13
Extreme weather continues to preoccupy society as a formidable public safety concern bearing huge economic costs. While attention has focused on global climate change and how it could intensify key elements of the water cycle such as precipitation and river discharge, it is the conjunction of geophysical and socioeconomic forces that shapes human sensitivity and risks to weather extremes. We demonstrate here the use of high-resolution geophysical and population datasets together with documentary reports of rainfall-induced damage across South America over a multi-decadal, retrospective time domain (1960-2000). We define and map extreme precipitation hazard, exposure, affected populations, vulnerability and risk, and use these variables to analyse the impact of floods as a water security issue. Geospatial experiments uncover major sources of risk from natural climate variability and population growth, with change in climate extremes bearing a minor role. While rural populations display greatest relative sensitivity to extreme rainfall, urban settings show the highest rates of increasing risk. In the coming decades, rapid urbanization will make South American cities the focal point of future climate threats but also an opportunity for reducing vulnerability, protecting lives and sustaining economic development through both traditional and ecosystem-based disaster risk management systems.
Wilcox, Kevin R.; Shi, Zheng; Gherardi, Laureano A.; ...
2017-04-02
Climatic changes are altering Earth's hydrological cycle, resulting in altered precipitation amounts, increased interannual variability of precipitation, and more frequent extreme precipitation events. These trends will likely continue into the future, having substantial impacts on net primary productivity (NPP) and associated ecosystem services such as food production and carbon sequestration. Frequently, experimental manipulations of precipitation have linked altered precipitation regimes to changes in NPP. Yet, findings have been diverse and substantial uncertainty still surrounds generalities describing patterns of ecosystem sensitivity to altered precipitation. Additionally, we do not know whether previously observed correlations between NPP and precipitation remain accurate when precipitationmore » changes become extreme. We synthesized results from 83 case studies of experimental precipitation manipulations in grasslands worldwide. Here, we used meta-analytical techniques to search for generalities and asymmetries of aboveground NPP (ANPP) and belowground NPP (BNPP) responses to both the direction and magnitude of precipitation change. Sensitivity (i.e., productivity response standardized by the amount of precipitation change) of BNPP was similar under precipitation additions and reductions, but ANPP was more sensitive to precipitation additions than reductions; this was especially evident in drier ecosystems. Additionally, overall relationships between the magnitude of productivity responses and the magnitude of precipitation change were saturating in form. The saturating form of this relationship was likely driven by ANPP responses to very extreme precipitation increases, although there were limited studies imposing extreme precipitation change, and there was considerable variation among experiments. 
Finally, this highlights the importance of incorporating gradients of manipulations, ranging from extreme drought to extreme precipitation increases, into future climate change experiments. Additionally, policy and land management decisions related to global change scenarios should consider how ANPP and BNPP responses may differ, and that ecosystem responses to extreme events might not be predicted from relationships found under moderate environmental changes.
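The sensitivity metric used in this synthesis (productivity response standardized by the amount of precipitation change) reduces to a simple ratio. The sketch below is illustrative only; the function name and all numbers are hypothetical, not taken from the study:

```python
def npp_sensitivity(npp_control, npp_treatment, ppt_control, ppt_treatment):
    """Productivity response standardized by the amount of precipitation change.

    Positive values mean NPP moves in the same direction as precipitation.
    Units: (g m^-2 yr^-1) per mm of precipitation change.
    """
    delta_npp = npp_treatment - npp_control
    delta_ppt = ppt_treatment - ppt_control
    if delta_ppt == 0:
        raise ValueError("no precipitation change imposed")
    return delta_npp / delta_ppt

# Hypothetical example: a 100 mm addition raising ANPP by 30 g m^-2 yr^-1
# gives a sensitivity of 0.3, while a 100 mm reduction lowering ANPP by
# 20 g m^-2 yr^-1 gives 0.2 -- the kind of addition/reduction asymmetry
# reported above for ANPP.
```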
Wilcox, Kevin R; Shi, Zheng; Gherardi, Laureano A; Lemoine, Nathan P; Koerner, Sally E; Hoover, David L; Bork, Edward; Byrne, Kerry M; Cahill, James; Collins, Scott L; Evans, Sarah; Gilgen, Anna K; Holub, Petr; Jiang, Lifen; Knapp, Alan K; LeCain, Daniel; Liang, Junyi; Garcia-Palacios, Pablo; Peñuelas, Josep; Pockman, William T; Smith, Melinda D; Sun, Shanghua; White, Shannon R; Yahdjian, Laura; Zhu, Kai; Luo, Yiqi
2017-10-01
Large uncertainties in observed daily precipitation extremes over land
NASA Astrophysics Data System (ADS)
Herold, Nicholas; Behrangi, Ali; Alexander, Lisa V.
2017-01-01
We explore uncertainties in observed daily precipitation extremes over the terrestrial tropics and subtropics (50°S-50°N) based on five commonly used products: the Climate Hazards Group InfraRed Precipitation with Stations (CHIRPS) dataset, the Global Precipitation Climatology Centre-Full Data Daily (GPCC-FDD) dataset, the Tropical Rainfall Measuring Mission (TRMM) multi-satellite research product (T3B42 v7), the Precipitation Estimation from Remotely Sensed Information using Artificial Neural Networks-Climate Data Record (PERSIANN-CDR), and the Global Precipitation Climatology Project's One-Degree Daily (GPCP-1DD) dataset. We use the precipitation indices R10mm and Rx1day, developed by the Expert Team on Climate Change Detection and Indices, to explore the behavior of "moderate" and "extreme" extremes, respectively. In order to assess the sensitivity of extreme precipitation to different grid sizes we perform our calculations on four common spatial resolutions (0.25° × 0.25°, 1° × 1°, 2.5° × 2.5°, and 3.75° × 2.5°). The impact of the chosen "order of operation" in calculating these indices is also determined. Our results show that moderate extremes are relatively insensitive to product and resolution choice, while extreme extremes can be very sensitive. For example, at 0.25° × 0.25° quasi-global mean Rx1day values vary from 37 mm in PERSIANN-CDR to 62 mm in T3B42. We find that the interproduct spread becomes prominent at resolutions of 1° × 1° and finer, thus establishing a minimum effective resolution at which observational products agree. Without improvements in interproduct spread, these exceedingly large observational uncertainties at high spatial resolution may limit the usefulness of model evaluations. As has been found previously, resolution sensitivity can be largely eliminated by applying an order of operation where indices are calculated prior to regridding. 
However, this approach is not appropriate when true area averages are desired (e.g., for model evaluations).
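The two ETCCDI indices and the "order of operation" effect described above can be sketched in a few lines. The toy two-cell example and all numbers are hypothetical; real index calculations follow the full ETCCDI definitions (annual counts, calendar handling):

```python
def r10mm(daily_precip_mm):
    """R10mm: count of days with precipitation >= 10 mm (a 'moderate' extreme)."""
    return sum(1 for p in daily_precip_mm if p >= 10.0)

def rx1day(daily_precip_mm):
    """Rx1day: maximum 1-day precipitation total in the period (an 'extreme' extreme)."""
    return max(daily_precip_mm)

# "Order of operation" matters for extremes: regridding (here, a simple
# two-cell average) before computing Rx1day smooths the extremes.
cell_a = [0.0, 5.0, 60.0, 2.0]
cell_b = [0.0, 40.0, 1.0, 3.0]

index_then_regrid = (rx1day(cell_a) + rx1day(cell_b)) / 2
regrid_then_index = rx1day([(a + b) / 2 for a, b in zip(cell_a, cell_b)])
```

Computing the index before regridding preserves the cell-level extremes (50.0 in this toy case), while regridding first damps them (30.5), which is why calculating indices prior to regridding largely removes the resolution sensitivity reported above.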
Abnormal viscoelastic behavior of side-chain liquid-crystal polymers
NASA Astrophysics Data System (ADS)
Gallani, J. L.; Hilliou, L.; Martinoty, P.; Keller, P.
1994-03-01
We show that, contrary to what is commonly believed, the isotropic phase of side-chain liquid-crystal polymers has viscoelastic properties which are totally different from those of ordinary flexible melt polymers. The results can be explained by the existence of a transient network created by the dynamic association of mesogenic groups belonging to different chains. The extremely high sensitivity of the compound to the state of the surfaces with which it is in contact offers us an unexpected method of studying surface states.
1990-12-21
Crawshaw, 1979; White, 1983; Lagerspetz, 1987). In fish under extreme thermal stress, regions of the brain appear to be the most sensitive, and... CRAIG, E. A. 1989. Essential roles of 70 kDa heat-inducible proteins. BioEssays 2: 48-52. CRAWSHAW, L. I. 1976. Effect of... rapid temperature change on mean body temperature and gill ventilation in carp. Amer. J. Physiol. 331: 837-841. CRAWSHAW, L. I. 1979. Responses to rapid
Murnick, Daniel E; Dogru, Ozgur; Ilkmen, Erhan
2008-07-01
We show a new ultrasensitive laser-based analytical technique, intracavity optogalvanic spectroscopy, allowing extremely high sensitivity for detection of ¹⁴C-labeled carbon dioxide. Capable of replacing large accelerator mass spectrometers, the technique quantifies attomoles of ¹⁴C in submicrogram samples. Based on the specificity of narrow laser resonances coupled with the sensitivity provided by standing waves in an optical cavity and detection via impedance variations, limits of detection near 10⁻¹⁵ ¹⁴C/¹²C ratios are obtained. Using a 15-W ¹⁴CO₂ laser, a linear calibration with samples from 10⁻¹⁵ to >1.5 × 10⁻¹² in ¹⁴C/¹²C ratios, as determined by accelerator mass spectrometry, is demonstrated. Possible applications include microdosing studies in drug development, individualized subtherapeutic tests of drug metabolism, carbon dating, and real-time monitoring of atmospheric radiocarbon. The method can also be applied to detection of other trace entities.
Nondestructive evaluation of a ceramic matrix composite material
NASA Technical Reports Server (NTRS)
Grosskopf, Paul P.; Duke, John C., Jr.
1992-01-01
Monolithic ceramic materials have proven their usefulness in many applications; yet their potential for critical structural applications is limited by their sensitivity to small imperfections. To overcome this extreme sensitivity to small imperfections, ceramic matrix composite materials have been developed that can withstand some distributed damage. A borosilicate glass reinforced with several layers of silicon-carbide fiber mat has been studied. Four-point flexure and tension tests were performed not only to determine some of the material properties, but also to initiate a controlled amount of damage within each specimen. Acousto-ultrasonic (AU) measurements were performed periodically during mechanical testing. This paper compares the AU results to the mechanical test results and to data from other nondestructive methods, including acoustic emission monitoring and X-ray radiography. It was found that the AU measurements were sensitive to the damage that had developed within the material.
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan; Bittker, David A.
1993-01-01
A general chemical kinetics and sensitivity analysis code for complex, homogeneous, gas-phase reactions is described. The main features of the code, LSENS, are its flexibility, efficiency and convenience in treating many different chemical reaction models. The models include static system, steady, one-dimensional, inviscid flow, shock initiated reaction, and a perfectly stirred reactor. In addition, equilibrium computations can be performed for several assigned states. An implicit numerical integration method, which works efficiently for the extremes of very fast and very slow reaction, is used for solving the 'stiff' differential equation systems that arise in chemical kinetics. For static reactions, sensitivity coefficients of all dependent variables and their temporal derivatives with respect to the initial values of dependent variables and/or the rate coefficient parameters can be computed. This paper presents descriptions of the code and its usage, and includes several illustrative example problems.
NASA Astrophysics Data System (ADS)
Zboril, Ondrej; Nedoma, Jan; Cubik, Jakub; Novak, Martin; Bednarek, Lukas; Fajkus, Marcel; Vasinek, Vladimir
2016-04-01
Interferometric sensors are highly accurate and sensitive; their extreme sensitivity allows them to sense vibration and acoustic signals. This paper describes a new implementation of a Mach-Zehnder interferometer for sensing vibrations caused by touching window panes. The panes are part of plastic windows in which the reference arm of the interferometer is mounted and isolated inside the frame, while the measuring arm is fixed to the window pane and mounted under the cover of the window frame. This keeps the optical fiber out of sight, and the arrangement forms the basis of a security system. The vibration sensor is built from standard communication-network components: optical fiber according to G.652D and 1x2 splitters with a 1:1 splitting ratio. The interferometer operated at a wavelength of 1550 nm. The paper analyses the sensitivity of the window over a 12x12 matrix of measuring points and reports the sensitivity distribution across the window pane.
Climatic Extremes and Food Grain Production in India
NASA Astrophysics Data System (ADS)
A, A.; Mishra, V.
2015-12-01
Climate change is likely to affect food and water security in India. India has witnessed tremendous growth in its food production since the green revolution. During recent decades, however, food grain yields have been significantly affected by extreme climate and weather events. Air temperature and associated extreme events (number of hot days and hot nights, heat waves) increased significantly during the last 50 years over the majority of India. More remarkably, a substantial increase in mean and extreme temperatures was observed during the winter season. India has also witnessed extreme flood and drought events that have become more frequent during the past few decades. Extreme rainfall during the non-monsoon season adversely affected food grain yields and resulted in tremendous losses in several parts of the country. Here we evaluate the changes in hydroclimatic extremes and their linkage with food grain production in India. We use observed food grain yield data at the district level for the period 1980-2012. We examine the linkages between food grain yield and crop phenology obtained from high-resolution leaf area index and NDVI satellite datasets. We use long-term observed daily precipitation and maximum and minimum temperature data to evaluate changes in extreme events, and statistical models relating crop yields to mean and extreme temperatures for various crops, in order to understand the sensitivity of these crops to changing climatic conditions. We find that some of the major crop types and predominant crop-growing areas show significant sensitivity to changes in extreme climatic conditions in India.
NASA Astrophysics Data System (ADS)
Zhang, Zhihao; Zhang, Chunxi; Xu, Xiaobin
2017-09-01
Small-diameter (cladding and coating diameters of 100 and 135 μm) polarization-maintaining photonic crystal fibres (SDPM-PCFs) possess many unique properties and are extremely suitable for applications in fibre optic gyroscopes. In this study, we have investigated and measured the stress characteristics of an SDPM-PCF using the finite-element method and a Mach-Zehnder interferometer, respectively. Our results reveal a radial and axial sensitivity of 0.315 ppm/N/m and 25.2 ppm per 1 × 10⁵ N/m², respectively, for the SDPM-PCF. These values are 40% smaller than the corresponding parameters of conventional small-diameter (cladding and coating diameters of 80 and 135 μm) panda fibres.
A sub-sampled approach to extremely low-dose STEM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, A.; Luzi, L.; Yang, H.
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that (1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and (2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in-situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
NASA Astrophysics Data System (ADS)
Adhi, H. A.; Wijaya, S. K.; Prawito; Badri, C.; Rezal, M.
2017-03-01
Stroke is a cerebrovascular disease caused by the obstruction of blood flow to the brain. It is the leading cause of death in Indonesia and the second leading cause worldwide, and it is also a major cause of disability. Ischemic stroke accounts for most stroke cases. Obstruction of blood flow can cause tissue damage, which results in electrical changes in the brain that can be observed through the electroencephalogram (EEG). In this study, we present the results of automatic discrimination between ischemic stroke patients and normal subjects based on the scaling exponent of the EEG, obtained through detrended fluctuation analysis (DFA), using an extreme learning machine (ELM) as the classifier. Signal processing was performed on 18 EEG channels in the 0-30 Hz range. The scaling exponents of the subjects were used as the input for the ELM to classify ischemic stroke. Detection performance was assessed by accuracy, sensitivity, and specificity. The proposed method classified ischemic stroke with 84% accuracy, 82% sensitivity, and 87% specificity, using 120 hidden neurons and a sine activation function in the ELM.
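The DFA scaling exponent that feeds the classifier can be sketched as follows. This is a generic, minimal DFA-1 implementation with illustrative window sizes, not the study's code, and the ELM classification stage is omitted:

```python
import math

def dfa_alpha(signal, scales=(4, 8, 16, 32)):
    """Detrended fluctuation analysis: return the scaling exponent alpha.

    alpha ~ 0.5 for uncorrelated (white) noise and ~ 1.5 for Brownian motion;
    stroke-related EEG changes shift this exponent, which is the feature used
    by the classifier described above.
    """
    mean = sum(signal) / len(signal)
    # Integrated (cumulative-sum) profile of the mean-subtracted signal.
    profile, total = [], 0.0
    for x in signal:
        total += x - mean
        profile.append(total)

    log_n, log_f = [], []
    for n in scales:
        sq_sum, count = 0.0, 0
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # Least-squares linear detrend within each non-overlapping window.
            t = list(range(n))
            t_mean, s_mean = (n - 1) / 2.0, sum(seg) / n
            cov = sum((ti - t_mean) * (si - s_mean) for ti, si in zip(t, seg))
            var = sum((ti - t_mean) ** 2 for ti in t)
            slope = cov / var
            for ti, si in zip(t, seg):
                resid = si - (s_mean + slope * (ti - t_mean))
                sq_sum += resid * resid
                count += 1
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sq_sum / count)))

    # Slope of log F(n) versus log n is the scaling exponent alpha.
    ln_mean, lf_mean = sum(log_n) / len(log_n), sum(log_f) / len(log_f)
    num = sum((a - ln_mean) * (b - lf_mean) for a, b in zip(log_n, log_f))
    den = sum((a - ln_mean) ** 2 for a in log_n)
    return num / den
```

In the study's pipeline, one such exponent per EEG channel (18 channels, 0-30 Hz band) would form the feature vector passed to the ELM.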
Rapid Characterization of Microorganisms by Mass Spectrometry—What Can Be Learned and How?
NASA Astrophysics Data System (ADS)
Fenselau, Catherine C.
2013-08-01
Strategies for the rapid and reliable analysis of microorganisms have been sought to meet national needs in defense, homeland security, space exploration, food and water safety, and clinical diagnosis. Mass spectrometry has long been a candidate technique because it is extremely rapid and can provide highly specific information. It has excellent sensitivity. Molecular and fragment ion masses provide detailed fingerprints, which can also be interpreted. Mass spectrometry is also a broad band method—everything has a mass—and it is automatable. Mass spectrometry is a physiochemical method that is orthogonal and complementary to biochemical and morphological methods used to characterize microorganisms.
NASA Technical Reports Server (NTRS)
Taylor, Arthur C., III; Hou, Gene W.
1994-01-01
The straightforward automatic-differentiation and the hand-differentiated incremental iterative methods are interwoven to produce a hybrid scheme that captures some of the strengths of each strategy. With this compromise, discrete aerodynamic sensitivity derivatives are calculated with the efficient incremental iterative solution algorithm of the original flow code. Moreover, the principal advantage of automatic differentiation is retained (i.e., all complicated source code for the derivative calculations is constructed quickly with accuracy). The basic equations for second-order sensitivity derivatives are presented; four methods are compared. Each scheme requires that large systems are solved first for the first-order derivatives and, in all but one method, for the first-order adjoint variables. Of these latter three schemes, two require no solutions of large systems thereafter. For the other two for which additional systems are solved, the equations and solution procedures are analogous to those for the first order derivatives. From a practical viewpoint, implementation of the second-order methods is feasible only with software tools such as automatic differentiation, because of the extreme complexity and large number of terms. First- and second-order sensitivities are calculated accurately for two airfoil problems, including a turbulent flow example; both geometric-shape and flow-condition design variables are considered. Several methods are tested; results are compared on the basis of accuracy, computational time, and computer memory. For first-order derivatives, the hybrid incremental iterative scheme obtained with automatic differentiation is competitive with the best hand-differentiated method; for six independent variables, it is at least two to four times faster than central finite differences and requires only 60 percent more memory than the original code; the performance is expected to improve further in the future.
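The core idea of forward-mode automatic differentiation underlying the schemes compared above, propagating exact first- and second-order derivatives alongside values, can be illustrated with a minimal sketch. This toy class is for illustration only and is unrelated to the actual flow-solver implementation described in the abstract:

```python
class Taylor2:
    """Second-order forward-mode AD value: (f, f', f'') along one direction.

    A minimal sketch of the principle behind automatic differentiation; the
    hybrid scheme described above is far more elaborate (adjoint variables,
    incremental iterative solution of large systems).
    """
    def __init__(self, val, d1=0.0, d2=0.0):
        self.val, self.d1, self.d2 = val, d1, d2

    def __add__(self, other):
        other = other if isinstance(other, Taylor2) else Taylor2(other)
        return Taylor2(self.val + other.val, self.d1 + other.d1, self.d2 + other.d2)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (fg)' = f'g + fg', (fg)'' = f''g + 2 f'g' + fg''.
        other = other if isinstance(other, Taylor2) else Taylor2(other)
        return Taylor2(self.val * other.val,
                       self.d1 * other.val + self.val * other.d1,
                       self.d2 * other.val + 2 * self.d1 * other.d1
                       + self.val * other.d2)
    __rmul__ = __mul__

def derivatives(f, x):
    """Exact first and second derivatives of f at x, free of the truncation
    error that central finite differences introduce."""
    out = f(Taylor2(x, 1.0, 0.0))
    return out.d1, out.d2

# Example: f(x) = x**3 -> f'(2) = 12, f''(2) = 12, exactly.
```

Software AD tools generate this kind of derivative propagation mechanically from existing source code, which is why implementing the second-order methods is feasible despite the large number of terms.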
Opto-mechanical design and development of a 460mm diffractive transmissive telescope
NASA Astrophysics Data System (ADS)
Qi, Bo; Wang, Lihua; Cui, Zhangang; Bian, Jiang; Xiang, Sihua; Ma, Haotong; Fan, Bin
2018-01-01
Using lightweight, replicated diffractive optics, extremely large aperture telescopes can be constructed in space. The transmissive primary significantly reduces the sensitivity to out-of-plane motion compared with reflective systems, while also reducing manufacturing time and cost. This paper focuses on the design, fabrication, and ground demonstration of a 460 mm diffractive transmissive telescope: the primary F/# is 6, the optical field of view is 0.2°, and the imaging bandwidth is 486 nm to 656 nm. The design method of the diffractive optical system was verified, and the ability to capture a high-quality image using the diffractive telescope collection optics was tested. The results show a limiting resolution of 94 lp/mm; the diffractive system has good imaging performance over broad bandwidths. This technology is particularly promising as a means of achieving extremely large optical primaries from compact, lightweight packages.
NASA Astrophysics Data System (ADS)
Wang, L.-P.; Ochoa-Rodríguez, S.; Onof, C.; Willems, P.
2015-09-01
Gauge-based radar rainfall adjustment techniques have been widely used to improve the applicability of radar rainfall estimates to large-scale hydrological modelling. However, their use for urban hydrological applications is limited as they were mostly developed based upon Gaussian approximations and therefore tend to smooth off so-called "singularities" (features of a non-Gaussian field) that can be observed in the fine-scale rainfall structure. Overlooking the singularities could be critical, given that their distribution is highly consistent with that of local extreme magnitudes. This deficiency may cause large errors in the subsequent urban hydrological modelling. To address this limitation and improve the applicability of adjustment techniques at urban scales, a method is proposed herein which incorporates a local singularity analysis into existing adjustment techniques and allows the preservation of the singularity structures throughout the adjustment process. In this paper the proposed singularity analysis is incorporated into the Bayesian merging technique and the performance of the resulting singularity-sensitive method is compared with that of the original Bayesian (non-singularity-sensitive) technique and the commonly used mean field bias adjustment. This test is conducted using as case study four storm events observed in the Portobello catchment (53 km²) (Edinburgh, UK) during 2011 and for which radar estimates, dense rain gauge and sewer flow records, as well as a recently calibrated urban drainage model were available. The results suggest that, in general, the proposed singularity-sensitive method can effectively preserve the non-normality in local rainfall structure, while retaining the ability of the original adjustment techniques to generate nearly unbiased estimates.
Moreover, the ability of the singularity-sensitive technique to preserve the non-normality in rainfall estimates often leads to better reproduction of the urban drainage system's dynamics, particularly of peak runoff flows.
Itter, Malcolm S; Finley, Andrew O; D'Amato, Anthony W; Foster, Jane R; Bradford, John B
2017-06-01
Changes in the frequency, duration, and severity of climate extremes are forecast to occur under global climate change. The impacts of climate extremes on forest productivity and health remain difficult to predict due to potential interactions with disturbance events and forest dynamics: changes in forest stand composition, density, size and age structure over time. Such interactions may lead to non-linear forest growth responses to climate involving thresholds and lag effects. Understanding how forest dynamics influence growth responses to climate is particularly important given stand structure and composition can be modified through management to increase forest resistance and resilience to climate change. To inform such adaptive management, we develop a hierarchical Bayesian state space model in which climate effects on tree growth are allowed to vary over time and in relation to past climate extremes, disturbance events, and forest dynamics. The model is an important step toward integrating disturbance and forest dynamics into predictions of forest growth responses to climate extremes. We apply the model to a dendrochronology data set from forest stands of varying composition, structure, and development stage in northeastern Minnesota that have experienced extreme climate years and forest tent caterpillar defoliation events. Mean forest growth was most sensitive to water balance variables representing climatic water deficit. Forest growth responses to water deficit were partitioned into responses driven by climatic threshold exceedances and interactions with insect defoliation. Forest growth was both resistant and resilient to climate extremes with the majority of forest growth responses occurring after multiple climatic threshold exceedances across seasons and years. Interactions between climate and disturbance were observed in a subset of years with insect defoliation increasing forest growth sensitivity to water availability.
Forest growth was particularly sensitive to climate extremes during periods of high stem density following major regeneration events when average inter-tree competition was high. Results suggest the resistance and resilience of forest growth to climate extremes can be increased through management steps such as thinning to reduce competition during early stages of stand development and small-group selection harvests to maintain forest structures characteristic of older, mature stands. © 2017 by the Ecological Society of America.
Itter, Malcolm S.; Finley, Andrew O.; D'Amato, Anthony W.; Foster, Jane R.; Bradford, John B.
2017-01-01
The impact of radiology expertise upon the localization of subtle pulmonary lesions
NASA Astrophysics Data System (ADS)
Robinson, John W.; Brennan, Patrick C.; Mello-Thoms, Claudia; Lewis, Sarah J.
2016-03-01
Rationale and objectives: This study investigates the influence of radiology expertise on the correct localization of lesions when radiologists complete an observer task. Specifically, the ability to detect pulmonary lesions of different subtleties is explored in relation to radiologists' reported specialty. Materials and methods: Institutional ethics approval was granted. Ten radiologists (5 thoracic, 5 non-thoracic) interpreted 40 posterior-anterior (PA) chest x-rays (CXRs) comprising 21 normal and 19 abnormal cases (solitary pulmonary nodule). The abnormal cases contained a solitary nodule of established subtlety (subtlety 5 = obvious, to subtlety 1 = extremely subtle). Radiologists read the test set and identified any pulmonary nodule using a 1-5 confidence scale (1 = no pulmonary nodule, to 5 = highest confidence that the case contains a pulmonary lesion). The radiologists interpreted the image bank twice, and the cases were randomized for each reader between reads. Results: The Kruskal-Wallis test identified that nodule subtlety significantly influenced the sensitivity of non-thoracic radiologists (P < 0.0001) and thoracic radiologists (P < 0.0001). A Wilcoxon rank test demonstrated a significant difference in sensitivity by radiologist specialization (P = 0.013), with thoracic radiologists performing better than non-thoracic radiologists (mean sensitivity 0.479 and 0.389, respectively). Sensitivity of nodule detection decreased from subtlety 4 to 3, 3 to 2, and 2 to 1 for both non-thoracic and thoracic radiologists, with the subtlety 3 to subtlety 2 transition being significant (P = 0.014) for non-thoracic radiologists; thoracic radiologists demonstrated a decrease, but no transition between subtleties was significant. The most noticeable and interesting effect was for the thoracic radiologists, for whom the mean sensitivities at subtleties 2 and 1 were almost the same and closely comparable to subtlety 3.
Conclusion: Results from this study indicate that expertise in chest radiology significantly impacts the sensitivity of radiologists in detecting pulmonary lesions of varying subtlety. Thoracic radiologists had consistently higher sensitivity for subtle, very subtle, and extremely subtle nodules.
Floods and food security: A method to estimate the effect of inundation on crops availability
NASA Astrophysics Data System (ADS)
Pacetti, Tommaso; Caporali, Enrica; Rulli, Maria Cristina
2017-12-01
The inner connections between floods and food security are extremely relevant, especially in developing countries, where food availability can be highly jeopardized by extreme events that damage the primary means of access to food, i.e. agriculture. A method for evaluating the effects of floods on food supply, consisting of the integration of remote sensing data, agricultural statistics, and water footprint databases, is proposed and applied to two case studies. Based on the existing literature on extreme floods, the events in Bangladesh (2007) and in Pakistan (2010) were selected as exemplary case studies. Results show that the use of remote sensing data combined with other sources of on-site information is particularly useful for assessing the effects of flood events on food availability. The damage caused by floods to agricultural areas is estimated in terms of crop losses and then converted into lost calories and water footprint as complementary indicators. The method is fully repeatable: the remote sensing data sources are valid worldwide, whereas the land use and crop characteristics data are strongly site-specific and need to be carefully evaluated. A sensitivity analysis was carried out on the water depth critical for the crops in Bangladesh, varying the assumed level by ±20%. The results show a difference of 12% in the estimated energy content losses, underlining the importance of accurate data choices.
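The ±20% sensitivity analysis on the critical water depth can be sketched generically. The binary damage model, the field data, and all numbers below are hypothetical, chosen only to show the mechanics of the calculation:

```python
def lost_calories(flood_depth_m, critical_depth_m, area_ha, yield_t_per_ha,
                  kcal_per_tonne):
    """Hypothetical damage model: total crop-calorie loss where the flood
    depth exceeds the assumed critical depth for the crop (binary loss)."""
    if flood_depth_m <= critical_depth_m:
        return 0.0
    return area_ha * yield_t_per_ha * kcal_per_tonne

def depth_sensitivity(critical_depth_m, fields, rel_change=0.20):
    """Re-run the loss estimate with the critical depth varied by +/- 20%.

    `fields` is a list of (flood_depth_m, area_ha, yield_t_per_ha,
    kcal_per_tonne) tuples, one per flooded field.
    """
    results = {}
    for label, factor in (("-20%", 1 - rel_change), ("base", 1.0),
                          ("+20%", 1 + rel_change)):
        thresh = critical_depth_m * factor
        results[label] = sum(
            lost_calories(d, thresh, a, y, k) for d, a, y, k in fields)
    return results
```

Comparing the three totals shows how strongly the lost-calorie estimate depends on the assumed critical depth, which is the comparison reported above for the Bangladesh case.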
NASA Astrophysics Data System (ADS)
Duenkel, Lothar; Eichler, Juergen; Ackermann, Gerhard; Schneeweiss, Claudia
2004-06-01
Holography is the most fascinating technology for three-dimensional imaging, but despite many decades of research, the search for an ideal recording material has never been abandoned. Of all ultra-fine materials, silver bromide emulsions with very small grain sizes have the highest sensitivity. In recent years, however, many traditional manufacturers discontinued their production. Meanwhile, newcomers have succeeded in manufacturing emulsions that are very suitable for holography, with extremely high resolution, brightness, and sensitivity [1]. But two problems may still linger: first, the deficient market situation for production and application in this field; second, the system's reputation of being extremely complicated for laboratory preparation. In this crucial situation, the authors have succeeded in presenting a laboratory procedure that makes do-it-yourself materials available to any expert who is well versed in holography and has normal darkroom equipment [2]. The methodology is based on precipitation using the traditional double-jet method according to Thiry and predecessors [3], but sensitization is carried out by a diffusion process according to the procedure proposed by Blyth et al. [4]. Thus, precipitation and coating on the one hand and sensitization on the other are strictly separated from one another. Efficient desalting is also an important process, warranting the high opto-mechanical quality of the layer. So far, the material has been sensitized only for HeNe laser radiation (632.8 nm). The mean diameter of the silver bromide grains is about 15 nm, as determined by transmission electron microscopy (TEM). Phillips-Bjelkhagen Ultimate (PBU) or Fe³⁺ rehalogenating bleaches are applied successfully [5,6]. As a final result, a new generation of holograms with ultra-high resolution, proper contrast, excellent sharpness, and high brightness has been obtained.
Holography belongs to an advancing technology in which the search for an ideal recording material is still going on. Of these materials, ultrafine-grain silver bromide emulsions are unsurpassed in sensitivity, but in recent years many traditional manufacturers discontinued their production. In this critical situation, the authors have succeeded in developing a new technology for making do-it-yourself materials of very high quality. The procedure involves elements of two different methods: the traditional double-jet method, pouring silver nitrate and potassium bromide into a vigorously stirred gelatin solution, and a diffusion process to sensitize the coated layer efficiently. The material has been sensitized for HeNe laser radiation at 632.8 nm. Denisyuk holograms of real 3D objects were obtained in ultrahigh resolution, with excellent brightness and clarity, using CW-C2 developer and PBU rehalogenation bleach according to Bjelkhagen et al. The material is characterized by TEM, reflection spectroscopy, and other methods. The new results have already been incorporated into university education with great success. The fundamental principles of the methodology as well as new results from applications in intellectual and hybrid systems are reported.
Allergic sensitization: screening methods
2014-01-01
Experimental in silico, in vitro, and rodent models for screening and predicting protein sensitizing potential are discussed, including whether there is evidence of new sensitizations and allergies since the introduction of genetically modified crops in 1996, the importance of linear versus conformational epitopes, and the protein families that become allergens. Some common challenges for predicting protein sensitization are addressed: (a) exposure routes; (b) frequency and dose of exposure; (c) dose-response relationships; (d) the role of digestion, food processing, and the food matrix; (e) the role of infection; (f) the role of the gut microbiota; (g) the influence of the structure and physicochemical properties of the protein; and (h) the genetic background and physiology of consumers. The consensus view is that sensitization screening models are not yet validated to definitively predict the de novo sensitizing potential of a novel protein. However, they would be extremely useful in the discovery and research phases of understanding the mechanisms of food allergy development, and may prove fruitful in providing information for the allergenicity risk assessment of future products on a case-by-case basis. These data and findings were presented at a 2012 international symposium in Prague organized by the Protein Allergenicity Technical Committee of the International Life Sciences Institute's Health and Environmental Sciences Institute. PMID:24739743
A mechanism of extreme growth and reliable signaling in sexually selected ornaments and weapons.
Emlen, Douglas J; Warren, Ian A; Johns, Annika; Dworkin, Ian; Lavine, Laura Corley
2012-08-17
Many male animals wield ornaments or weapons of exaggerated proportions. We propose that increased cellular sensitivity to signaling through the insulin/insulin-like growth factor (IGF) pathway may be responsible for the extreme growth of these structures. We document how rhinoceros beetle horns, a sexually selected weapon, are more sensitive to nutrition and more responsive to perturbation of the insulin/IGF pathway than other body structures. We then illustrate how enhanced sensitivity to insulin/IGF signaling in a growing ornament or weapon would cause heightened condition sensitivity and increased variability in expression among individuals, critical properties of reliable signals of male quality. The possibility that reliable signaling arises as a by-product of the growth mechanism may explain why trait exaggeration has evolved so many different times in the context of sexual selection.
Margin and sensitivity methods for security analysis of electric power systems
NASA Astrophysics Data System (ADS)
Greene, Scott L.
Reliable operation of large scale electric power networks requires that system voltages and currents stay within design limits. Operation beyond those limits can lead to equipment failures and blackouts. Security margins measure the amount by which system loads or power transfers can change before a security violation, such as an overloaded transmission line, is encountered. This thesis shows how to efficiently compute security margins defined by limiting events and instabilities, and the sensitivity of those margins with respect to assumptions, system parameters, operating policy, and transactions. Security margins to voltage collapse blackouts, oscillatory instability, generator limits, voltage constraints and line overloads are considered. The usefulness of computing the sensitivities of these margins with respect to interarea transfers, loading parameters, generator dispatch, transmission line parameters, and VAR support is established for networks as large as 1500 buses. The sensitivity formulas presented apply to a range of power system models. Conventional sensitivity formulas such as line distribution factors, outage distribution factors, participation factors and penalty factors are shown to be special cases of the general sensitivity formulas derived in this thesis. The sensitivity formulas readily accommodate sparse matrix techniques. Margin sensitivity methods are shown to work effectively for avoiding voltage collapse blackouts caused by either saddle node bifurcation of equilibria or immediate instability due to generator reactive power limits. Extremely fast contingency analysis for voltage collapse can be implemented with margin sensitivity based rankings. Interarea transfer can be limited by voltage limits, line limits, or voltage stability. The sensitivity formulas presented in this thesis apply to security margins defined by any limit criteria. 
A method to compute transfer margins by directly locating intermediate events reduces the total number of loadflow iterations required by each margin computation and provides sensitivity information at minimal additional cost. Estimates of the effect of simultaneous transfers on the transfer margins agree well with the exact computations for a network model derived from a portion of the U.S. grid. The accuracy of the estimates over a useful range of conditions and the ease of obtaining the estimates suggest that the sensitivity computations will be of practical value.
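The line distribution factors mentioned above can be illustrated with a toy DC power flow computation. The sketch below uses a hypothetical 3-bus network (not taken from the thesis) and computes injection shift factors: the change in each line flow per unit of power injected at a bus and withdrawn at the slack.

```python
import numpy as np

# Hypothetical 3-bus network: lines given as (from_bus, to_bus, susceptance)
lines = [(0, 1, 10.0), (1, 2, 10.0), (0, 2, 5.0)]
n = 3

# Build the bus susceptance matrix B for the DC power flow model
B = np.zeros((n, n))
for f, t, b in lines:
    B[f, f] += b
    B[t, t] += b
    B[f, t] -= b
    B[t, f] -= b

Bred = B[1:, 1:]  # remove the slack bus (bus 0) row and column

def line_flows(p):
    """Line flows for injections p at buses 1..n-1, withdrawn at the slack."""
    theta = np.zeros(n)
    theta[1:] = np.linalg.solve(Bred, p)
    return np.array([b * (theta[f] - theta[t]) for f, t, b in lines])

# Distribution factors: column k is the flow change on each line per unit
# injection at bus k+1 (a special case of the general sensitivity formulas)
ptdf = np.column_stack([line_flows(e) for e in np.eye(n - 1)])
print(np.round(ptdf, 3))
```

For an injection at bus 1, the flow splits between the direct line and the two-line path in proportion to the path susceptances, which the factors reproduce.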
Mass Spectrometry-based Assay for High Throughput and High Sensitivity Biomarker Verification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Xuejiang; Tang, Keqi
Searching for disease-specific biomarkers has become a major undertaking in the biomedical research field, as the effective diagnosis, prognosis, and treatment of many complex human diseases are largely determined by the availability and the quality of the biomarkers. A successful biomarker, as an indicator of a specific biological or pathological process, is usually selected from a large group of candidates by a strict verification and validation process. To be clinically useful, the validated biomarkers must be detectable and quantifiable by the selected testing techniques in their related tissues or body fluids. Because blood is easily accessible, protein biomarkers would ideally be identified in blood plasma or serum. However, most disease-related protein biomarkers in blood exist at very low concentrations (<1 ng/mL) and are "masked" by many non-relevant species present at concentrations orders of magnitude higher. The extreme requirements on measurement sensitivity, dynamic range, and specificity make method development extremely challenging. Current clinical protein biomarker measurement relies primarily on antibody-based immunoassays, such as ELISA. Although the technique is sensitive and highly specific, the development of a high-quality protein antibody is both expensive and time consuming. The limited capability for assay multiplexing also makes the measurement extremely low-throughput, rendering it impractical when hundreds to thousands of potential biomarkers need to be quantitatively measured across multiple samples. Mass spectrometry (MS)-based assays have recently been shown to be a viable alternative for high-throughput, quantitative candidate protein biomarker verification. Among them, the triple quadrupole MS-based assay is the most promising.
When coupled with liquid chromatography (LC) separation and an electrospray ionization (ESI) source, a triple quadrupole mass spectrometer operating in a special selected reaction monitoring (SRM) mode, also known as multiple reaction monitoring (MRM), is capable of quantitatively measuring hundreds of candidate protein biomarkers from a relevant clinical sample in a single analysis. The specificity, reproducibility, and sensitivity can be as good as ELISA. Furthermore, SRM MS can also quantify protein isoforms and post-translational modifications, for which traditional antibody-based immunoassays often do not exist.
Bucci, Monica; Mandelli, Maria Luisa; Berman, Jeffrey I.; Amirbekian, Bagrat; Nguyen, Christopher; Berger, Mitchel S.; Henry, Roland G.
2013-01-01
Introduction Diffusion MRI tractography has been increasingly used to delineate white matter pathways in vivo, the leading clinical application being presurgical mapping of eloquent regions. However, there are few opportunities to quantify the accuracy or sensitivity of these approaches for delineating white matter fiber pathways in vivo, owing to the lack of a gold standard. Intraoperative electrical stimulation (IES) provides a gold standard for the location and existence of functional motor pathways that can be used to determine the accuracy and sensitivity of fiber tracking algorithms. In this study we used intraoperative stimulation from brain tumor patients as a gold standard to estimate the sensitivity and accuracy of diffusion tensor MRI (DTI) and q-ball models of diffusion with deterministic and probabilistic fiber tracking algorithms for delineation of motor pathways. Methods We used preoperative high angular resolution diffusion MRI (HARDI) data (55 directions, b = 2000 s/mm²) acquired in a clinically feasible time frame from 12 patients who underwent a craniotomy for resection of a cerebral glioma. The corticospinal fiber tracts were delineated with DTI and q-ball models using deterministic and probabilistic algorithms. We used cortical and white matter IES sites as a gold standard for the presence and location of functional motor pathways. Sensitivity was defined as the true positive rate of delineating fiber pathways based on cortical IES stimulation sites. For accuracy and precision of the course of the fiber tracts, we measured the distance between the subcortical stimulation sites and the tractography result. The positive predictive rate of the delineated tracts was assessed by comparing subcortical IES motor function (upper extremity, lower extremity, face) with the connection of the tractography pathway in the motor cortex. Results We obtained 21 cortical and 8 subcortical IES sites from intraoperative mapping of motor pathways.
Probabilistic q-ball had the best sensitivity (79%) as determined from cortical IES, compared to deterministic q-ball (50%), probabilistic DTI (36%), and deterministic DTI (10%). The sensitivity using the q-ball algorithm (65%) was significantly higher than with DTI (23%) (p < 0.001), and the probabilistic algorithms (58%) were more sensitive than deterministic approaches (30%) (p = 0.003). Probabilistic q-ball fiber tracks had the smallest offset to the subcortical stimulation sites. The offsets between diffusion fiber tracks and subcortical IES sites increased significantly in those cases where the diffusion fiber tracks were visibly thinner than expected. There was perfect concordance between the subcortical IES function (e.g. hand stimulation) and the cortical connection of the nearest diffusion fiber track (e.g. upper extremity cortex). Discussion This study highlights the tremendous utility of intraoperative stimulation sites in providing a gold standard against which to evaluate diffusion MRI fiber tracking methods, and has provided an objective standard for evaluating different diffusion models and approaches to fiber tracking. Probabilistic q-ball fiber tractography was significantly better than DTI methods in terms of sensitivity and accuracy of the course through the white matter. The commonly used DTI fiber tracking approach was shown to have very poor sensitivity (as low as 10% for deterministic DTI fiber tracking) for delineation of the lateral aspects of the corticospinal tract in our study. Effects of the tumor/edema resulted in significantly larger offsets between the subcortical IES and the preoperative fiber tracks. The data show that probabilistic HARDI tractography is the most objective and reproducible analysis, but given the small sample and number of stimulation points, generalizations from our results should be made with caution.
Indeed, our results inform the capabilities of preoperative diffusion fiber tracking and indicate that such data should be used carefully when making presurgical and intraoperative management decisions. PMID:24273719
Verheijde, Joseph L; White, Fred; Tompkins, James; Dahl, Peder; Hentz, Joseph G; Lebec, Michael T; Cornwall, Mark
2013-12-01
To investigate the reliability, validity, and sensitivity to change of the Lower Extremity Functional Scale (LEFS) in individuals affected by stroke. The secondary objective was to test the validity and sensitivity of a single-item linear analog scale (LAS) of function. Prospective cohort reliability and validation study. A single rehabilitation department in an academic medical center. Forty-three individuals receiving neurorehabilitation for lower extremity dysfunction after stroke were studied. Their ages ranged from 32 to 95 years, with a mean of 70 years; 77% were men. Test-retest reliability was assessed by calculating the classical intraclass correlation coefficient (ICC) and the Bland-Altman limits of agreement. Validity was assessed by calculating the Pearson correlation coefficient between the instruments. Sensitivity to change was assessed by comparing baseline scores with end-of-treatment scores. Measurements were taken at baseline, after 1-3 days, and at 4 and 8 weeks. The LEFS, Short Form-36 Physical Function Scale, Berg Balance Scale, Six-Minute Walk Test, Five-Meter Walk Test, Timed Up-and-Go Test, and the LAS of function were used. The test-retest reliability of the LEFS was found to be excellent (ICC = 0.96). Correlated with the six other measures of function studied, the validity of the LEFS was found to be moderate to high (r = 0.40-0.71). Regarding sensitivity to change, mean scores from baseline to study end increased by 1.2 SD for the LEFS and by 1.1 SD for the LAS. The LEFS exhibits good reliability, validity, and sensitivity to change in patients with lower extremity impairments secondary to stroke. Therefore, the LEFS can be a clinically efficient outcome measure in the rehabilitation of patients with subacute stroke. The LAS is shown to be a time-saving and reasonable option for tracking changes in a patient's functional status. Copyright © 2013 American Academy of Physical Medicine and Rehabilitation. Published by Elsevier Inc. All rights reserved.
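The two reliability statistics named above can be sketched in a few lines. The example below uses invented test-retest scores (not the study's data) to compute a two-way random, single-measure agreement ICC and the Bland-Altman 95% limits of agreement.

```python
import numpy as np

# Hypothetical LEFS test-retest scores: rows = patients, cols = sessions
scores = np.array([[22, 24], [35, 33], [41, 43], [28, 27],
                   [55, 56], [47, 45], [33, 35], [60, 58]], dtype=float)
n, k = scores.shape

# Two-way ANOVA sums of squares
grand = scores.mean()
ss_total = ((scores - grand) ** 2).sum()
ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between sessions
ss_err = ss_total - ss_rows - ss_cols

msr = ss_rows / (n - 1)
msc = ss_cols / (k - 1)
mse = ss_err / ((n - 1) * (k - 1))

# ICC(2,1): two-way random effects, single measure, absolute agreement
icc = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Bland-Altman 95% limits of agreement on the session differences
diff = scores[:, 1] - scores[:, 0]
loa = (diff.mean() - 1.96 * diff.std(ddof=1),
       diff.mean() + 1.96 * diff.std(ddof=1))
print(round(icc, 3), [round(v, 2) for v in loa])
```

With small within-patient differences relative to the between-patient spread, the ICC is close to 1, mirroring the excellent reliability reported above.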
NASA Astrophysics Data System (ADS)
Smith, M. D.; Knapp, A.; Hoover, D. L.; Avolio, M. L.; Felton, A. J.; Slette, I.; Wilcox, K.
2017-12-01
Climate extremes, such as drought, are increasing in frequency and intensity, and the ecological consequences of these extreme events can be substantial and widespread. Yet, little is known about the factors that determine recovery of ecosystem function post-drought. Such knowledge is particularly important because post-drought recovery periods can be protracted depending on drought legacy effects (e.g., loss of key plant populations, altered community structure and/or biogeochemical processes). These drought legacies may alter ecosystem function for many years post-drought and may affect future sensitivity to climate extremes. With forecasts of more frequent drought, there is an imperative to understand whether and how post-drought legacies will affect ecosystem response to future drought events. To address this knowledge gap, we experimentally imposed, over an eight-year period, two extreme growing-season droughts in a central US grassland, each two years in duration and followed by a two-year recovery period. We found that aboveground net primary productivity (ANPP) declined dramatically with the first drought, accompanied by a large shift in plant species composition (loss of C3 forbs and increase in C4 grasses). This drought legacy, a shift in plant composition, persisted two years post-drought. Yet, despite this legacy, ANPP recovered fully. We expected that the previously droughted grassland would be less sensitive to a second extreme drought because of the shift in plant composition. Contrary to this expectation, previously droughted grassland experienced a greater loss in ANPP than grassland that had not experienced drought. Furthermore, previously droughted grassland did not fully recover after the second drought. Thus, the legacy of drought, a shift in plant community composition, increased ecosystem sensitivity to a future extreme drought event.
Interactions of Mean Climate Change and Climate Variability on Food Security Extremes
NASA Technical Reports Server (NTRS)
Ruane, Alexander C.; McDermid, Sonali; Mavromatis, Theodoros; Hudson, Nicholas; Morales, Monica; Simmons, John; Prabodha, Agalawatte; Ahmad, Ashfaq; Ahmad, Shakeel; Ahuja, Laj R.
2015-01-01
Recognizing that climate change will affect agricultural systems both through mean changes and through shifts in climate variability and associated extreme events, we present preliminary analyses of climate impacts from a network of 1137 crop modeling sites contributed to the AgMIP Coordinated Climate-Crop Modeling Project (C3MP). At each site, sensitivity tests were run according to a common protocol, which enables the fitting of crop model emulators across a range of carbon dioxide, temperature, and water (CTW) changes. C3MP can elucidate several aspects of these changes and quantify crop responses across a wide diversity of farming systems. Here we test the hypothesis that climate change and variability interact in three main ways. First, mean climate changes can affect yields across an entire time period. Second, extreme events (when they do occur) may be more sensitive to climate changes than a year with normal climate. Third, mean climate changes can alter the likelihood of climate extremes, leading to more frequent seasons with anomalies outside of the expected conditions for which management was designed. In this way, shifts in climate variability can result in an increase or reduction of mean yield, as extreme climate events tend to have lower yield than years with normal climate. C3MP maize simulations across 126 farms reveal a clear indication and quantification (as response functions) of mean climate impacts on mean yield and clearly show that mean climate changes will directly affect the variability of yield. Yield reductions from increased climate variability are less clear, as crop models tend to be less sensitive to dangers at the cool and wet extremes of climate variability, likely underestimating losses from waterlogging, floods, and frosts.
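The emulator-fitting step described above can be sketched as follows. This is a minimal illustration, not the C3MP crop models: the yield response, parameter ranges, and run count are all invented. Sensitivity-test runs over CTW perturbations are used to fit a quadratic polynomial emulator by least squares.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200  # number of hypothetical sensitivity-test runs at one site

# Invented CTW perturbation ranges
dT = rng.uniform(-1.0, 8.0, m)        # temperature change, deg C
dP = rng.uniform(0.5, 1.5, m)         # precipitation multiplier
co2 = rng.uniform(360.0, 720.0, m)    # CO2 concentration, ppm

# Invented "crop model" response used to generate the training runs
yields = (100.0 - 2.5 * dT + 30.0 * (dP - 1.0) - 8.0 * (dP - 1.0) ** 2
          + 0.02 * (co2 - 360.0) + rng.standard_normal(m))

# Fit a quadratic polynomial emulator in (dT, dP, co2)
X = np.column_stack([np.ones(m), dT, dP, co2, dT**2, dP**2, dT * dP])
beta, *_ = np.linalg.lstsq(X, yields, rcond=None)

def emulate(t, p, c):
    """Emulated yield for an arbitrary CTW scenario."""
    return float(np.array([1.0, t, p, c, t * t, p * p, t * p]) @ beta)

print(round(emulate(2.0, 1.0, 450.0), 1))
```

Once fitted, the emulator evaluates any CTW scenario cheaply, which is what allows response functions to be mapped across the full scenario space without rerunning the crop model.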
Automated Lab-on-a-Chip Electrophoresis System
NASA Technical Reports Server (NTRS)
Willis, Peter A.; Mora, Maria; Greer, Harold F.; Fisher, Anita M.; Bryant, Sherrisse
2012-01-01
Capillary electrophoresis is an analytical technique that can be used to detect and quantify extremely small amounts of various biological molecules. Part of the search for biochemical traces of life on other planets involves the examination of amino acids, the building blocks of life on Earth. The most sensitive method for detecting amino acids is laser-induced fluorescence. However, since amino acids do not, in general, fluoresce, they must first be reacted with a fluorescent dye label prior to analysis. After this process is completed, the liquid sample must then be transported into the electrophoresis system. If the system is to be reused multiple times, samples must be added and removed each time. In typical laboratories, this process is performed manually by skilled human operators using standard laboratory equipment. This level of human intervention is not possible if the technology is to be deployed on extraterrestrial targets. Microchip capillary electrophoresis (CE) combined with laser-induced fluorescence (LIF) detection was selected as an extremely sensitive method to detect amino acids and other compounds that can be tagged with a fluorescent dye. It is highly desirable to package this technology into an integrated, autonomous, in situ instrument capable of performing CE-LIF on the surface of an extraterrestrial body. However, to be fully autonomous, the CE device must be able to perform a large number of sample preparation and analysis operations without direct human intervention.
Changes in insecticide resistance of the rice striped stem borer (Lepidoptera: Crambidae).
Su, Jianya; Zhang, Zhenzhen; Wu, Min; Gao, Congfen
2014-02-01
Application of insecticides is the most important method to control Chilo suppressalis (Walker) (Lepidoptera: Crambidae), and continuous use of individual insecticides has driven the rapid development of insecticide resistance in C. suppressalis during the past 30 yr. Monitoring insecticide resistance provides information essential for integrated pest management. Insecticide resistance of field populations to monosultap, triazophos, chlorpyrifos, and abamectin in China was examined in 2010 and 2011. The results indicated that the resistance levels of 14 field populations to four insecticides were significantly different. Four populations showed moderate resistance, and other populations possessed low-level resistance or were susceptible to monosultap. Nine populations displayed an extremely high or a high level of resistance to triazophos, whereas four populations were sensitive to this agent. Five populations exhibited a low level of resistance to abamectin, while the others remained sensitive. When compared with historical data, resistance to monosultap and triazophos decreased significantly, and the percentage of populations with high-level or extremely high-level resistance was obviously reduced. By contrast, the resistance to abamectin increased slightly. The increasing and decreasing resistance levels reported in this study highlight the different evolutionary patterns of insecticide resistance in C. suppressalis. An overreliance on one or two insecticides may promote rapid development of resistance. Slow development of resistance to abamectin, which was used mainly in mixtures with other insecticides, implies that the use of insecticide mixtures may be an effective method to delay the evolution of resistance to insecticides.
Ewald, Julie A; Wheatley, Christopher J; Aebischer, Nicholas J; Moreby, Stephen J; Duffield, Simon J; Crick, Humphrey Q P; Morecroft, Michael B
2015-11-01
Cereal fields are central to balancing food production and environmental health in the face of climate change. Within them, invertebrates provide key ecosystem services. Using 42 years of monitoring data collected in southern England, we investigated the sensitivity and resilience of invertebrates in cereal fields to extreme weather events and examined the effect of long-term changes in temperature, rainfall and pesticide use on invertebrate abundance. Of the 26 invertebrate groups examined, eleven proved sensitive to extreme weather events. Average abundance increased in hot/dry years and decreased in cold/wet years for Araneae, Cicadellidae, adult Heteroptera, Thysanoptera, Braconidae, Enicmus and Lathridiidae. The average abundance of Delphacidae, Cryptophagidae and Mycetophilidae increased in both hot/dry and cold/wet years relative to other years. The abundance of all 10 groups usually returned to their long-term trend within a year after the extreme event. For five of them, sensitivity to cold/wet events was lowest (translating into higher abundances) at locations with a westerly aspect. Some long-term trends in invertebrate abundance correlated with temperature and rainfall, indicating that climate change may affect them. However, pesticide use was more important in explaining the trends, suggesting that reduced pesticide use would mitigate the effects of climate change. © 2015 John Wiley & Sons Ltd.
Fast principal component analysis for stacking seismic data
NASA Astrophysics Data System (ADS)
Wu, Juan; Bai, Min
2018-04-01
Stacking seismic data plays an indispensable role in many steps of the seismic data processing and imaging workflow. Optimal stacking of seismic data can help mitigate seismic noise and enhance the principal components to a great extent. Traditional average-based seismic stacking methods cannot achieve optimal performance when the ambient noise is extremely strong. We propose a principal component analysis (PCA) algorithm for stacking seismic data that is insensitive to the noise level. Considering the computational bottleneck of the classic PCA algorithm when processing massive seismic data, we propose an efficient PCA algorithm that makes the method readily applicable in industrial applications. Two numerically designed examples and one real seismic dataset are used to demonstrate the performance of the presented method.
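A minimal sketch of the idea, not the authors' implementation: treating repeated noisy traces as rows of a matrix, the first right singular vector (the leading principal component) recovers the common waveform even when trace amplitudes vary, whereas a plain average weights every trace equally. The wavelet, noise level, and trace count below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
wavelet = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)   # common signal shape

# 40 noisy traces with trace-to-trace amplitude variation
amps = rng.uniform(0.5, 1.5, size=40)
traces = np.outer(amps, wavelet) + 0.3 * rng.standard_normal((40, t.size))

mean_stack = traces.mean(axis=0)   # conventional average-based stack

# PCA stack: the first right singular vector of the trace matrix is the
# direction of maximum shared variance, i.e. an estimate of the waveform
_, _, Vt = np.linalg.svd(traces, full_matrices=False)
pca_stack = Vt[0]

# Singular vectors carry a sign ambiguity, so compare by |correlation|
corr = abs(pca_stack @ wavelet) / (np.linalg.norm(pca_stack)
                                   * np.linalg.norm(wavelet))
print(f"|correlation| with true wavelet: {corr:.3f}")
```

The full SVD shown here is the computational bottleneck the abstract refers to; an efficient variant would replace it with a truncated or randomized decomposition.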
Lu, Bingxin; Leong, Hon Wai
2016-02-01
Genomic islands (GIs) are clusters of functionally related genes acquired by lateral genetic transfer (LGT), and they are present in many bacterial genomes. GIs are extremely important for bacterial research, because they not only promote genome evolution but also contain genes that enhance adaptation and enable antibiotic resistance. Many methods have been proposed to predict GIs, but most rely on either annotations or comparisons with other closely related genomes, and hence cannot easily be applied to new genomes. As the number of newly sequenced bacterial genomes rapidly increases, there is a need for methods that detect GIs based solely on the sequence of a single genome. In this paper, we propose a novel method, GI-SVM, to predict GIs given only the unannotated genome sequence. GI-SVM is based on a one-class support vector machine (SVM), utilizing composition bias in terms of k-mer content. In our evaluations on three real genomes, GI-SVM achieved higher recall than current methods, without much loss of precision. Besides, GI-SVM allows flexible parameter tuning to obtain optimal results for each genome. In short, GI-SVM provides a more sensitive method for researchers interested in a first-pass detection of GIs in newly sequenced genomes.
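The composition-bias signal that GI-SVM exploits can be illustrated without the SVM machinery. In the sketch below, the genome is synthetic, the window and threshold parameters are invented, and a simple distance-to-centroid score stands in for the one-class SVM: windows whose k-mer composition deviates from the genome-wide average are flagged.

```python
import numpy as np
from itertools import product

def kmer_freqs(seq, k=2):
    """Normalized frequency vector over all 4**k k-mers."""
    index = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=k))}
    v = np.zeros(len(index))
    for i in range(len(seq) - k + 1):
        v[index[seq[i:i + k]]] += 1
    return v / max(v.sum(), 1.0)

# Synthetic genome: host background with a composition-biased island inserted
rng = np.random.default_rng(1)
host = "".join(rng.choice(list("ACGT"), size=20000, p=[0.3, 0.2, 0.2, 0.3]))
island = "".join(rng.choice(list("ACGT"), size=2000, p=[0.1, 0.4, 0.4, 0.1]))
genome = host[:10000] + island + host[10000:]   # island at 10000-12000

# Score sliding windows by distance from the genome-wide k-mer composition
win, step = 1000, 500
starts = list(range(0, len(genome) - win + 1, step))
F = np.array([kmer_freqs(genome[s:s + win]) for s in starts])
scores = np.linalg.norm(F - F.mean(axis=0), axis=1)

cut = scores.mean() + 2 * scores.std()   # simple outlier threshold
hits = [s for s, sc in zip(starts, scores) if sc > cut]
print(hits)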
The importance of range edges for an irruptive species during extreme weather events
Bateman, Brooke L.; Pidgeon, Anna M.; Radeloff, Volker C.; Allstadt, Andrew J.; Akçakaya, H. Resit; Thogmartin, Wayne E.; Vavrus, Stephen J.; Heglund, Patricia J.
2015-01-01
In a changing climate where more frequent extreme weather may be more common, conservation strategies for weather-sensitive species may require consideration of habitat in the edges of species’ ranges, even though non-core areas may be unoccupied in ‘normal’ years. Our results highlight the conservation importance of range edges in providing refuge from extreme events, such as drought, and climate change.
Extreme rainfall, vulnerability and risk: a continental-scale assessment for South America
Vorosmarty, Charles J.; de Guenni, Lelys Bravo; Wollheim, Wilfred M.; Pellerin, Brian A.; Bjerklie, David M.; Cardoso, Manoel; D'Almeida, Cassiano; Colon, Lilybeth
2013-01-01
Extreme weather continues to preoccupy society as a formidable public safety concern bearing huge economic costs. While attention has focused on global climate change and how it could intensify key elements of the water cycle such as precipitation and river discharge, it is the conjunction of geophysical and socioeconomic forces that shapes human sensitivity and risks to weather extremes. We demonstrate here the use of high-resolution geophysical and population datasets together with documentary reports of rainfall-induced damage across South America over a multi-decadal, retrospective time domain (1960–2000). We define and map extreme precipitation hazard, exposure, affected populations, vulnerability and risk, and use these variables to analyse the impact of floods as a water security issue. Geospatial experiments uncover major sources of risk from natural climate variability and population growth, with change in climate extremes bearing a minor role. While rural populations display greatest relative sensitivity to extreme rainfall, urban settings show the highest rates of increasing risk. In the coming decades, rapid urbanization will make South American cities the focal point of future climate threats but also an opportunity for reducing vulnerability, protecting lives and sustaining economic development through both traditional and ecosystem-based disaster risk management systems.
A practical deconvolution algorithm in multi-fiber spectra extraction
NASA Astrophysics Data System (ADS)
Zhang, Haotong; Li, Guangwei; Bai, Zhongrui
2015-08-01
The deconvolution algorithm is a very promising method in multi-fiber spectroscopy data reduction: it can extract spectra down to the photon noise level and improve the spectral resolution. But, as mentioned in Bolton & Schlegel (2010), it is limited by its huge computational requirements and thus cannot be implemented directly in actual data reduction. We develop a practical algorithm to solve this computational problem. The new algorithm can deconvolve a 2D fiber spectral image of any size with actual PSFs, which may vary with position. We further consider the influence of noise, which poses an intrinsically ill-posed problem for deconvolution algorithms, and modify our method with a Tikhonov regularization term to suppress the method-induced noise. A series of simulations based on LAMOST data are carried out to test our method under realistic conditions with Poisson noise and extreme cross-talk, i.e., where the fiber-to-fiber distance is comparable to the FWHM of the fiber profile. Compared with the results of traditional extraction methods, i.e., the aperture extraction method and the profile fitting method, our method yields both higher S/N and higher spectral resolution. The computation time for a noise-added image with 250 fibers and 4k pixels in the wavelength direction is about 2 hours when the fiber cross-talk is not extreme, and 3.5 hours in the extreme cross-talk case. We finally apply our method to real LAMOST data. We find that the 1D spectrum extracted by our method has both higher S/N and higher resolution than those from the traditional methods, but there are still some suspicious weak features around the strong emission lines, possibly caused by the noise sensitivity of the method. How to further attenuate the noise influence will be the topic of our future work.
As we have demonstrated, multi-fiber spectra extracted by our method have higher resolution and signal-to-noise ratio and thus will provide astronomers with more accurate information (such as more accurate radial velocity and metallicity measurements in stellar physics) than traditional methods.
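The Tikhonov-regularized deconvolution described above can be sketched in 1D. This is a minimal illustration, not the LAMOST pipeline: the profile positions, PSF width, noise level, and regularization weight are all invented, and the regularized normal equations (AᵀA + λI)x = Aᵀb are solved directly rather than with the paper's efficient scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x_true = np.zeros(n)
x_true[[50, 60, 120]] = [1.0, 0.8, 1.2]   # narrow synthetic "fiber" profiles

# Convolution matrix for a Gaussian PSF (assumed fixed here; real PSFs vary
# with position, which the matrix formulation accommodates naturally)
sigma = 2.0
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
A /= A.sum(axis=1, keepdims=True)

b = A @ x_true + 0.002 * rng.standard_normal(n)   # blurred, noisy observation

# Tikhonov regularization: minimize ||A x - b||^2 + lam * ||x||^2,
# where lam damps the noise amplification of the bare inverse
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(int(np.argmax(x_hat)))   # location of the strongest recovered profile
```

Setting `lam = 0` reproduces the unregularized least-squares deconvolution, whose noise sensitivity is the ill-posedness the abstract refers to.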
Harris, Catherine R.; Osterberg, E. Charles; Sanford, Thomas; Alwaal, Amjad; Gaither, Thomas W.; McAninch, Jack W.; McCulloch, Charles E.; Breyer, Benjamin N.
2016-01-01
Objective To determine which factors are associated with higher urethroplasty procedural costs and whether costs have been increasing or decreasing over time. Identification of the determinants of extreme costs may help reduce cost while maintaining quality. Materials and Methods We conducted a retrospective analysis using the 2001–2010 Healthcare Cost and Utilization Project - Nationwide Inpatient Sample (HCUP-NIS). The HCUP-NIS captures hospital charges, which we converted to cost using the HCUP Cost-to-Charge Ratio. Log-cost linear regression with sensitivity analysis was used to determine variables associated with increased costs. Extreme cost was defined as the top 20th percentile of expenditure, analyzed with logistic regression and expressed as odds ratios (OR). Results A total of 2298 urethroplasties were recorded in the NIS over the study period. The median (interquartile range) calculated cost was $7321 ($5677–$10,000). Patients with multiple comorbid conditions were associated with extreme costs (OR 1.56, 95% CI 1.19–2.04, p=0.02) compared to patients with no comorbid disease. Inpatient complications raised the odds of extreme costs (OR 3.2, 95% CI 2.14–4.75, p<0.001). Graft urethroplasties were associated with extreme costs (OR 1.78, 95% CI 1.2–2.64, p=0.005). Variation in patient age, race, hospital region, bed size, teaching status, payer type, and volume of urethroplasty cases was not associated with extremes of cost. Conclusion Cost variation for perioperative inpatient urethroplasty procedures depends on preoperative patient comorbidities, postoperative complications, and surgical complexity related to graft usage. Procedural cost and cost variation are critical for understanding which aspects of care have the greatest impact on cost. PMID:27107626
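As a minimal illustration of how such odds ratios are computed (the counts below are invented, not the HCUP-NIS data, and the study's ORs were adjusted via logistic regression rather than taken from a raw table), an unadjusted OR with a Woolf-type 95% confidence interval for one binary factor is:

```python
import math

# Hypothetical 2x2 table: rows = multiple comorbidities yes/no,
# cols = extreme cost (top 20th percentile) yes/no
a, b = 120, 340   # comorbid: extreme-cost, not extreme-cost
c, d = 160, 680   # non-comorbid: extreme-cost, not extreme-cost

odds_ratio = (a * d) / (b * c)

# Woolf method: standard error of the log odds ratio
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"OR = {odds_ratio:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval excluding 1.0, as here, corresponds to the statistically significant associations reported above.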
Levanič, Tom; Popa, Ionel; Poljanšek, Simon; Nechita, Constantin
2013-09-01
An increase in temperature and decrease in precipitation pose a major future challenge for sustainable ecosystem management in Romania. To understand ecosystem response and the wider social consequences of environmental change, we constructed a 396-year long (1615-2010) drought-sensitive tree-ring width chronology (TRW) of Pinus nigra var. banatica (Georg. et Ion.) growing on steep slopes and shallow organic soil. We established a statistical relationship between TRW and two meteorological parameters: monthly sum of precipitation (PP) and the standardised precipitation index (SPI). PP and SPI correlate significantly with TRW (r = 0.54 and 0.58) and the relationships are stable in time. Rigorous statistical tests, which measure the accuracy and prediction ability of the model, were all significant. SPI was eventually reconstructed back to 1688, with extreme dry and wet years identified using the percentile method. By means of this reconstruction, we identified two previously unknown extremely dry years in Romania: 1725 and 1782. These 2 years were almost as dry as 1946, which was known as the "year of great famine." Since no historical documents for these 2 years were available in local archives, we compared the results with those from neighbouring countries and found that both years were extremely dry in the wider region (Slovakia, Hungary, Anatolia, Syria, and Turkey). While the 1800-1900 period was relatively mild, with only two moderately extreme years, the 1900-2009 period stood out owing to the very high number of wet and dry extremes: five extremely wet and three extremely dry events (one of them in 1946) were identified.
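The two numerical steps named above, correlating a tree-ring series with a climate index and flagging extreme years by percentile thresholds, can be sketched as follows. The series, years, and the 10th/90th percentile cutoffs are invented for illustration; the paper's actual calibration and extreme definitions may differ.

```python
# Sketch under assumptions: Pearson correlation of TRW vs a climate index,
# and percentile-based extreme-year flagging of a reconstructed SPI series.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def extreme_years(years, values, lo_pct=10, hi_pct=90):
    """Return (dry, wet) year lists below/above the lo/hi percentile cutoffs."""
    ordered = sorted(values)
    lo = ordered[int(len(ordered) * lo_pct / 100)]
    hi = ordered[int(len(ordered) * hi_pct / 100) - 1]
    dry = [yr for yr, v in zip(years, values) if v < lo]
    wet = [yr for yr, v in zip(years, values) if v > hi]
    return dry, wet

# Invented 10-year SPI reconstruction.
years = list(range(2001, 2011))
spi = [0.1, -1.5, 0.3, 0.8, -0.2, 1.9, 0.0, -0.4, 0.5, -0.1]
dry, wet = extreme_years(years, spi)
print(dry, wet)  # one extreme dry and one extreme wet year in this toy series
```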
Recent developments in optical detection methods for microchip separations.
Götz, Sebastian; Karst, Uwe
2007-01-01
This paper summarizes the features and performances of optical detection systems currently applied in order to monitor separations on microchip devices. Fluorescence detection, which delivers very high sensitivity and selectivity, is still the most widely applied method of detection. Instruments utilizing laser-induced fluorescence (LIF) and lamp-based fluorescence along with recent applications of light-emitting diodes (LED) as excitation sources are also covered in this paper. Since chemiluminescence detection can be achieved using extremely simple devices which no longer require light sources and optical components for focusing and collimation, interesting approaches based on this technique are presented, too. Although UV/vis absorbance is a detection method that is commonly used in standard desktop electrophoresis and liquid chromatography instruments, it has not yet reached the same level of popularity for microchip applications. Current applications of UV/vis absorbance detection to microchip separations and innovative approaches that increase sensitivity are described. This article, which contains 85 references, focuses on developments and applications published within the last three years, points out exciting new approaches, and provides future perspectives on this field.
Ion spectrometric detection technologies for ultra-traces of explosives: a review.
Mäkinen, Marko; Nousiainen, Marjaana; Sillanpää, Mika
2011-01-01
In recent years, explosive materials have been widely employed in military applications and civilian conflicts, and their use for hostile purposes has increased considerably. The detection of different kinds of explosive agents has become crucially important for the protection of human lives, infrastructure, and property. Moreover, both environmental aspects, such as the risk of soil and water contamination, and health risks related to the release of explosive particles need to be taken into account. For these reasons, there is a growing need to develop faster and more sensitive methods for detecting explosives. Detection techniques for explosive materials should ideally provide fast real-time analysis with high accuracy and resolution from a minimal quantity of explosive, without complicated sample preparation. In-field analysis of extremely hazardous materials also has to be user-friendly and safe for operators. The two closely related ion spectrometric methods used in explosive analyses are mass spectrometry (MS) and ion mobility spectrometry (IMS). The four requirements (speed, selectivity, sensitivity, and sampling) are fulfilled by both of these methods. Copyright © 2011 Wiley Periodicals, Inc.
Pan, Xiaoming; Zhang, Yanfang; Sha, Xuejiao; Wang, Jing; Li, Jing; Dong, Ping; Liang, Xingguo
2017-03-28
White spot syndrome virus (WSSV) is a major threat to the shrimp farming industry, and so far there is no effective therapy for it; early diagnosis of WSSV is therefore of great importance. However, at the early stage of infection, the extremely low abundance of WSSV DNA challenges the sensitivity and accuracy of PCR detection. To effectively detect low-abundance WSSV, we developed a pre-amplification PCR (pre-amp PCR) method to amplify trace amounts of WSSV DNA from a massive background of genomic DNA. Combined with normal specific PCR, 10 copies of target WSSV genes were detected from a background on the order of 10^10. In particular, multiple target genes could be amplified in a balanced manner with similar efficiency owing to the use of a universal primer. The efficiency of the pre-amp PCR was validated by nested PCR and quantitative PCR, and pre-amp PCR showed higher efficiency than nested PCR when multiple targets were detected. The developed method is particularly suitable for very early diagnosis of WSSV and has the potential to be applied to other low-abundance sample detection cases.
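The detection problem above is ultimately exponential-amplification arithmetic: copy number after n cycles is N = N0 · (1 + E)^n for efficiency E. The sketch below assumes ideal efficiency (E = 1, i.e. doubling per cycle); real pre-amplification efficiency varies per target, which is exactly why the balanced universal-primer design matters.

```python
# Back-of-envelope PCR math: exponential amplification from a tiny start.
# Ideal efficiency assumed; real reactions fall short of doubling.

def pcr_copies(n0, cycles, efficiency=1.0):
    """Copy number after `cycles` rounds: N0 * (1 + E)^cycles."""
    return n0 * (1 + efficiency) ** cycles

amplified = pcr_copies(10, 20)  # 10 starting copies, 20 ideal cycles
print(amplified)                # 10 * 2**20 copies
```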
Using Dictionary Pair Learning for Seizure Detection.
Ma, Xin; Yu, Nana; Zhou, Weidong
2018-02-13
Automatic seizure detection is extremely important in the monitoring and diagnosis of epilepsy. This paper presents a novel method based on dictionary pair learning (DPL) for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. First, wavelet filtering and differential filtering are applied to the EEG data, and a kernel function is used to make the signal linearly separable. In DPL, the synthesis dictionary and analysis dictionary are learned jointly from the original training samples with an alternating minimization method, and sparse coefficients are obtained by linear projection instead of costly [Formula: see text]-norm or [Formula: see text]-norm optimization. Finally, the reconstruction residuals associated with the seizure and nonseizure sub-dictionary pairs are calculated as decision values, and postprocessing is performed to improve the recognition rate and reduce the false detection rate of the system. A total of 530 h of recordings from 20 patients with 81 seizures was used to evaluate the system. Our proposed method achieved an average segment-based sensitivity of 93.39%, specificity of 98.51%, and event-based sensitivity of 96.36%, with a false detection rate of 0.236/h.
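The evaluation metrics quoted above can be made concrete with a small sketch: segment-based sensitivity and specificity from per-segment 0/1 labels, plus a crude false-detection rate per hour. This toy version counts false-positive segments individually; real EEG pipelines (including, presumably, the one in the paper) merge adjacent detections into events before counting, so the numbers here are only illustrative.

```python
# Hedged sketch of segment-based detection metrics (not the paper's code).

def seizure_metrics(truth, pred, segment_seconds):
    """Sensitivity, specificity, and FP segments per hour from 0/1 labels."""
    tp = sum(1 for t, p in zip(truth, pred) if t and p)
    tn = sum(1 for t, p in zip(truth, pred) if not t and not p)
    fp = sum(1 for t, p in zip(truth, pred) if not t and p)
    fn = sum(1 for t, p in zip(truth, pred) if t and not p)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    hours = len(truth) * segment_seconds / 3600.0
    return sens, spec, fp / hours  # crude: counts FP segments, not events

# Toy example: 6 ten-minute segments (1 hour total).
truth = [1, 1, 0, 0, 0, 0]
pred  = [1, 0, 1, 0, 0, 0]
sens, spec, fdr = seizure_metrics(truth, pred, segment_seconds=600)
print(sens, spec, fdr)  # 0.5, 0.75, 1.0 false detections/hour
```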
Bui, Huy; Pham, Van Hoi; Pham, Van Dai; Hoang, Thi Hong Cam; Pham, Thanh Binh; Do, Thuy Chi; Ngo, Quang Minh; Nguyen, Thuy Van
2018-05-07
The vast majority of organic solvents used in industry and laboratories are volatile, hazardous, and toxic organic compounds; they are considered a potent problem for human health and a cause of environmental pollution. Although analytical laboratory methods can determine extremely low solvent concentrations, a sensing method with low cost and high sensitivity remains a conundrum. This paper presents and compares three methods (volatile organic compound (VOC), liquid drop and saturated vapour pressure) for the determination of organic solvents in a liquid environment using a photonic sensor based on nano-porous silicon (pSi) microcavity structures. Among these, the VOC method provides the highest sensitivity at low solvent volume concentrations because it creates a high vapour pressure of the analyte on the sensor surface owing to the capillary deposition of organic solvent into the silicon pores. The VOC method consists of three steps: heating the solution to its boiling temperature, controlling the gas flow through the liquid, and cooling the sensor. It delivers the highest sensitivity of 6.9 nm/% at a concentration of 5%, and the limit of detection (LOD) of the pSi sensor is 0.014% for ethanol in water when using an optical system with a resolution of 0.1 nm. Notably, the VOC method is capable of detecting low volume concentrations of methanol in two tested ethanol solutions of 30% (v/v) and 45% (v/v), with LODs of the pSi sensor down to 0.01% and 0.04%, respectively. This result will help pave the way to controlling the quality of contaminated liquor beverages.
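The quoted ethanol LOD follows directly from the readout resolution divided by the sensor sensitivity (wavelength shift per unit concentration): 0.1 nm / (6.9 nm/%) ≈ 0.014%. The sketch below assumes that simple ratio definition, which matches the abstract's numbers; the paper may use a noise-based LOD definition instead.

```python
# Assumed relation: LOD = spectral resolution / sensitivity.
# Numbers below are the ones quoted in the abstract (ethanol in water).

def limit_of_detection(resolution_nm, sensitivity_nm_per_pct):
    """Smallest resolvable concentration change, in percent."""
    return resolution_nm / sensitivity_nm_per_pct

lod = limit_of_detection(0.1, 6.9)  # 0.1 nm resolution, 6.9 nm/% sensitivity
print(f"{lod:.3f} %")               # ~0.014 %
```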
A new method of sweat testing: the CF Quantum® sweat test.
Rock, Michael J; Makholm, Linda; Eickhoff, Jens
2014-09-01
Conventional methods of sweat testing are time-consuming and have many steps that can and do lead to errors. This study compares conventional sweat testing to a new quantitative method, the CF Quantum® (CFQT) sweat test, and assesses the diagnostic accuracy and analytic validity of the CFQT. Previously diagnosed CF patients and patients who required a sweat test for clinical indications were invited to have the CFQT test performed. Both conventional sweat testing and the CFQT were performed bilaterally on the same day. Pairs of data from each test are plotted as a correlation graph and a Bland-Altman plot. Sensitivity and specificity were calculated, as well as the means and coefficients of variation by test and by extremity. After completing the study, subjects or their parents were asked for their preference between the CFQT and conventional sweat testing. The correlation coefficient between the CFQT and conventional sweat testing was 0.98 (95% confidence interval: 0.97-0.99). The sensitivity and specificity of the CFQT in diagnosing CF were 100% (95% confidence interval: 94-100%) and 96% (95% confidence interval: 89-99%), respectively. In one center in this three-center multicenter study, there were higher sweat chloride values in patients with CF and also more tests that were invalid due to discrepant values between the two extremities. The percentage of invalid tests was higher with the CFQT method (16.5%) than with conventional sweat testing (3.8%) (p < 0.001). In the post-test questionnaire, 88% of subjects/parents preferred the CFQT test. The CFQT is a fast and simple method of quantitative sweat chloride determination. This technology requires further refinement to improve the analytic accuracy at higher sweat chloride values and to decrease the number of invalid tests. Copyright © 2014 European Cystic Fibrosis Society. Published by Elsevier B.V. All rights reserved.
Ma, Qi Yun; Zhang, Ji Quan; Lai, Quan; Zhang, Feng; Dong, Zhen Hua; A, Lu Si
2017-06-18
Fourteen extreme climatic indices related to the main regional meteorological disasters and vegetation growth were calculated based on daily data from 13 meteorological stations during 1960-2014 in Songnen Grassland, Northeast China. The variation trends and the spatial and temporal patterns of extreme climatic events were then analyzed using regression analysis, break trend analysis, the Mann-Kendall test, Sen's slope estimator and the moving t-test method. The results indicated that summer days (SU25), warm days (TX90P), warm nights (TN90P) and warm spell duration (WSDI), representing extremely high temperatures, showed significant increasing trends (P<0.05). Meanwhile, frost days (FD0), cold days (TX10P), cold nights (TN10P) and the cold spell duration indicator (CSDI), representing extremely low temperatures, showed obviously decreasing trends. The magnitudes of changes in the cold indices (FD0, TX10P, TN10P and CSDI) were clearly greater than those of the warm indices (SU25, TX90P, TN90P and WSDI), and changes in night indices were larger than those in day indices. The regional climate warming trend was obvious from 1970 to 2009, and most of the abrupt changes in these indices occurred in this period. The extreme precipitation indices did not show obvious trends; in general, SDII and CDD experienced a slightly decreasing trend while RX5D, R95P, PRCPTOT and CWD witnessed a mildly increasing trend. It may be concluded that the regional climate in Songnen Grassland changed towards warming and slight wetting. The most sensitive regions for extreme temperature were distributed in the south and north, and the extreme temperature indices showed clear spatial differences between the south and the north. As for the spatial variations of the extreme precipitation indices, the climate could be characterized as becoming wetter in the northern region and drier in the southern region, especially in the southwestern region, which carries a high drought risk.
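Two of the trend tools named above, the Mann-Kendall test statistic and Sen's slope estimator, are compact enough to sketch directly. This shows only the S statistic (concordant minus discordant pairs) and the median pairwise slope; the significance test (variance of S, Z score) and the station-level processing are omitted.

```python
# Minimal Mann-Kendall S statistic and Sen's slope for a 1-D series,
# assuming evenly spaced time steps. Significance testing omitted.

def mann_kendall_s(x):
    """S = number of increasing pairs minus number of decreasing pairs."""
    n = len(x)
    return sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1) for j in range(i + 1, n)
    )

def sens_slope(x):
    """Sen's slope: median of all pairwise slopes (x[j]-x[i])/(j-i)."""
    n = len(x)
    slopes = sorted(
        (x[j] - x[i]) / (j - i)
        for i in range(n - 1) for j in range(i + 1, n)
    )
    m = len(slopes)
    return slopes[m // 2] if m % 2 else 0.5 * (slopes[m // 2 - 1] + slopes[m // 2])

print(mann_kendall_s([1, 2, 3, 4]), sens_slope([1, 2, 4]))  # 6, 1.5
```

A positive S with a positive Sen's slope indicates a monotonic increasing trend, as reported for the warm indices above.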
NASA Astrophysics Data System (ADS)
Schroeer, K.; Kirchengast, G.
2016-12-01
Relating precipitation intensity to temperature is a popular approach to assessing potential changes in extreme events in a warming climate, motivated by potential increases in extreme-rainfall-induced hazards such as flash flooding. It has not been addressed, however, whether the temperature-precipitation scaling approach is meaningful on the regional to local level, where the risk of climate and weather impacts is actually managed. Substantial variability in the temperature sensitivity of extreme precipitation has been found, resulting from differing methodological assumptions as well as from the varying climatological settings of the study domains. Two aspects are consistently found. First, temperature sensitivities beyond the expected consistency with the Clausius-Clapeyron (CC) equation are a feature of short-duration, convective, sub-daily to sub-hourly high-percentile rainfall intensities at mid-latitudes. Second, exponential growth ceases or reverts at threshold temperatures that vary from region to region, as moisture supply becomes limited. Analyses of pooled data, or of single or dispersed stations over large areas, make it difficult to estimate the consequences in terms of local climate risk. In this study we test the meaningfulness of the scaling approach from an impact-scale perspective. Temperature sensitivities are assessed using quantile regression on hourly and sub-hourly precipitation data from 189 stations in the Austrian south-eastern Alpine region. The observed scaling rates vary substantially, but distinct regional and seasonal patterns emerge. Sensitivity exceeding CC scaling is seen on the 10-minute scale more than on the hourly scale, in storms shorter than 2 hours duration, and in the shoulder seasons, but it is not necessarily a significant feature of the extremes. To be impact relevant, change rates need to be linked to absolute rainfall amounts.
We show that high scaling rates occur in lower-temperature conditions and thus have a smaller effect on absolute precipitation intensities. While the reporting of mere percentage numbers can be misleading, scaling studies can add value to process understanding on the local scale if the factors that influence scaling rates are considered from both a methodological and a physical perspective.
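A simplified version of the scaling computation discussed above is a log-linear fit of high-percentile precipitation intensity against temperature, compared with the Clausius-Clapeyron expectation of roughly 7% per °C. The paper itself uses quantile regression; the binning-plus-least-squares sketch below, with invented data, only illustrates what a "scaling rate" is.

```python
# Sketch under assumptions: fit ln(P_high) vs T by least squares; the slope
# is the fractional intensity change per degree C. Data are synthetic.
import math

CC_RATE = 0.07  # ~7 %/degC expected from the Clausius-Clapeyron relation

def scaling_rate(temps, p_high):
    """Least-squares slope of ln(p_high) against temperature."""
    n = len(temps)
    logs = [math.log(p) for p in p_high]
    mt, ml = sum(temps) / n, sum(logs) / n
    num = sum((t - mt) * (l - ml) for t, l in zip(temps, logs))
    den = sum((t - mt) ** 2 for t in temps)
    return num / den

# Temperature bins and a synthetic 95th-percentile intensity per bin,
# constructed to double every 10 degC (rate = ln2/10 ~ 6.9 %/degC).
temps = [5, 10, 15, 20]
p95 = [2.0 * 2 ** ((t - 5) / 10) for t in temps]
alpha = scaling_rate(temps, p95)
print(f"{alpha:.4f} per degC vs CC expectation {CC_RATE}")
```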
Climate change, extreme weather events, and US health impacts: what can we say?
Mills, David M
2009-01-01
We address how climate change impacts on a group of extreme weather events could affect US public health. A literature review summarizes arguments for, and evidence of, a climate change signal in selected extreme weather event categories, projections for future events, and potential trends in adaptive capacity and vulnerability in the United States. Western US wildfires already exhibit a climate change signal. The variability within hurricane and extreme precipitation/flood data complicates identifying a similar climate change signal. The health impacts of extreme events are not equally distributed and are very sensitive to a subset of exceptional extreme events. Cumulative uncertainty in forecasting the climate-change-driven characteristics of extreme events and adaptation prevents confidently projecting the future health impacts from hurricanes, wildfires, and extreme precipitation/floods in the United States attributable to climate change.
400 Years of summer hydroclimate from stable isotopes in Iberian trees
NASA Astrophysics Data System (ADS)
Andreu-Hayles, Laia; Ummenhofer, Caroline C.; Barriendos, Mariano; Schleser, Gerhard H.; Helle, Gerhard; Leuenberger, Markus; Gutiérrez, Emilia; Cook, Edward R.
2017-07-01
Tree rings are natural archives that annually record distinct types of past climate variability depending on the parameters measured. Here, we use ring-width and stable isotopes in cellulose of trees from the northwestern Iberian Peninsula (IP) to understand regional summer hydroclimate over the last 400 years and the associated atmospheric patterns. Correlations between tree rings and climate data demonstrate that isotope signatures in the targeted Iberian pine forests are very sensitive to water availability during the summer period, and are mainly controlled by stomatal conductance. Non-linear methods based on extreme events analysis allow for capturing distinct seasonal climatic variability recorded by tree-ring parameters and asymmetric signals of the associated atmospheric features. Moreover, years with extreme high (low) values in the tree-ring records were characterised by coherent large-scale atmospheric circulation patterns with reduced (enhanced) moisture transport onto the northwestern IP. These analyses of extremes revealed that high/low proxy values do not necessarily correspond to mirror images in the atmospheric anomaly patterns, suggesting different drivers of these patterns and the corresponding signature recorded in the proxies. Regional hydroclimate features across the broader IP and western Europe during extreme wet/dry summers detected by the northwestern IP trees compare favourably to independent multicentury sea level pressure and drought reconstructions for Europe. Historical records also validate our findings that attribute non-linear moisture signals recorded by extreme tree-ring values to distinct large-scale atmospheric patterns and allow for 400-year reconstructions of the frequency of occurrence of extreme conditions in late spring and summer hydroclimate.
Extremal Correlators in the Ads/cft Correspondence
NASA Astrophysics Data System (ADS)
D'Hoker, Eric; Freedman, Daniel Z.; Mathur, Samir D.; Matusis, Alec; Rastelli, Leonardo
The non-renormalization of the 3-point functions
400 years of summer hydroclimate from stable isotopes in Iberian trees
NASA Astrophysics Data System (ADS)
Andreu-Hayles, Laia; Ummenhofer, Caroline C.; Barriendos, Mariano; Schleser, Gerhard H.; Helle, Gerhard; Leuenberger, Markus; Gutierrez, Emilia; Cook, Edward R.
2017-04-01
Tree rings are natural archives that annually record distinct types of past climate variability depending on the parameters measured. Here, we use ring-width and stable isotopes in cellulose of trees from the northwestern Iberian Peninsula (IP) to understand regional summer hydroclimate over the last 400 years and the associated atmospheric patterns. Correlations between tree rings and climate data demonstrate that isotope signatures in the targeted Iberian pine forests are very sensitive to water availability during the summer period, and are mainly controlled by stomatal conductance. Non-linear methods based on extreme events analysis allow for capturing distinct seasonal climatic variability recorded by tree-ring parameters and asymmetric signals of the associated atmospheric features. Moreover, years with extreme high (low) values in the tree-ring records were characterised by coherent large-scale atmospheric circulation patterns with reduced (enhanced) moisture transport onto the northwestern IP. These analyses of extremes revealed that high/low proxy values do not necessarily correspond to mirror images in the atmospheric anomaly patterns, suggesting different drivers of these patterns and the corresponding signature recorded in the proxies. Regional hydroclimate features across the broader IP and western Europe during extreme wet/dry summers detected by the northwestern IP trees compare favourably to an independent multicentury sea level pressure and drought reconstruction for Europe. Historical records also validate our findings that attribute non-linear moisture signals recorded by extreme tree-ring values to distinct large-scale atmospheric patterns and allow for 400-yr reconstructions of the frequency of occurrence of extreme conditions in summer hydroclimate. We will discuss how the results for Lillo compare with other records.
Greve, Douglas N; Salat, David H; Bowen, Spencer L; Izquierdo-Garcia, David; Schultz, Aaron P; Catana, Ciprian; Becker, J Alex; Svarer, Claus; Knudsen, Gitte M; Sperling, Reisa A; Johnson, Keith A
2016-05-15
A cross-sectional group study of the effects of aging on brain metabolism as measured with (18)F-FDG-PET was performed using several different partial volume correction (PVC) methods: no correction (NoPVC), Meltzer (MZ), Müller-Gärtner (MG), and the symmetric geometric transfer matrix (SGTM), using 99 subjects aged 65-87 years from the Harvard Aging Brain study. Sensitivity to parameter selection was tested for MZ and MG. The various methods and parameter settings resulted in an extremely wide range of conclusions as to the effects of age on metabolism, from almost no changes to virtually all cortical regions showing a decrease with age. Simulations showed that NoPVC had significant bias that made the age effect on metabolism appear to be much larger and more significant than it is. MZ was found to be the same as NoPVC for liberal brain masks; for conservative brain masks, MZ showed few areas correlated with age. MG and SGTM were found to be similar; however, MG was sensitive to a thresholding parameter that can result in data loss. CSF uptake was surprisingly high, at about 15% of that in gray matter. The exclusion of CSF from SGTM and MG models, which is almost universally done, caused a substantial loss in the power to detect age-related changes. This diversity of results reflects the literature on the metabolism of aging and suggests that extreme care should be taken when applying PVC or interpreting results that have been corrected for partial volume effects. Using the SGTM, significant age-related changes of about 7% per decade were found in frontal and cingulate cortices as well as primary visual and insular cortices. Copyright © 2016 Elsevier Inc. All rights reserved.
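The geometric transfer matrix idea behind SGTM can be illustrated in miniature: observed regional means p are modeled as p = G t, where G[i][j] is the fraction of region j's point-spread-blurred signal landing in region i, and the corrected uptakes t solve that linear system. The 2-region toy case below (gray matter and CSF, echoing the CSF-inclusion point above) uses an invented mixing matrix; the real SGTM builds G from segmented MRI and the scanner's point-spread function.

```python
# Toy GTM partial volume correction: invert a 2x2 mixing model p = G t.

def solve2(G, p):
    """Solve a 2x2 linear system G t = p by Cramer's rule."""
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    t0 = (p[0] * G[1][1] - G[0][1] * p[1]) / det
    t1 = (G[0][0] * p[1] - p[0] * G[1][0]) / det
    return [t0, t1]

# Invented mixing: region 0 (gray matter) spills 20% of its signal into the
# CSF observation; CSF spills 10% back. Columns sum to 1.
G = [[0.8, 0.1],
     [0.2, 0.9]]
true_t = [10.0, 1.5]  # true regional uptakes (GM high, CSF ~15% of GM)
observed = [G[0][0] * true_t[0] + G[0][1] * true_t[1],
            G[1][0] * true_t[0] + G[1][1] * true_t[1]]
corrected = solve2(G, observed)
print(corrected)  # recovers the true uptakes [10.0, 1.5]
```

Dropping the CSF row and column from G, as the abstract notes is commonly done, forces its spilled-in signal to be misattributed, which is one way to see why excluding CSF costs statistical power.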
Gis-Based Multi-Criteria Decision Analysis for Forest Fire Risk Mapping
NASA Astrophysics Data System (ADS)
Akay, A. E.; Erdoğan, A.
2017-11-01
The forested areas along the coastal zone of the Mediterranean region in Turkey are classified as first-degree fire-sensitive areas. Forest fires are a major environmental disaster that affects the sustainability of forest ecosystems. Moreover, forest fires result in substantial economic losses and even threaten human lives. Thus, it is critical to determine the forested areas with fire risk and thereby minimize the damage to forest resources by taking the necessary precautions in these areas. The risk of forest fire can be assessed based on various factors such as forest vegetation structure (tree species, crown closure, tree stage), topographic features (slope and aspect), and climatic parameters (temperature, wind). In this study, a GIS-based Multi-Criteria Decision Analysis (MCDA) method was used to generate a forest fire risk map. The study was implemented in the forested areas within the Yayla Forest Enterprise Chief at the Dursunbey Forest Enterprise Directorate, which is classified as a first-degree fire-sensitive area. In the solution process, the "extAhp 2.0" plug-in running the Analytic Hierarchy Process (AHP) method in ArcGIS 10.4.1 was used to categorize the study area into five fire risk classes, including extreme risk, high risk, moderate risk, and low risk. The results indicated that 23.81% of the area was of extreme risk, while 25.81% was of high risk. The most effective criterion was tree species, followed by tree stage; aspect was the least effective criterion for forest fire risk. It was revealed that GIS techniques integrated with MCDA methods are effective tools to quickly estimate forest fire risk at low cost. The integration of these factors into GIS can be very useful for determining forested areas with high fire risk and also for planning forestry management after fire.
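The AHP weighting step used in the workflow above can be sketched compactly: criterion weights are derived from a pairwise comparison matrix via the column-normalization (approximate principal eigenvector) method. The 3x3 matrix below, comparing tree species, tree stage, and aspect, is invented to mirror the reported criterion ranking; it is not the study's actual comparison matrix.

```python
# Minimal AHP priority derivation by column normalization and row averaging.
# The pairwise matrix is illustrative (species > stage > aspect).

def ahp_weights(M):
    """Criterion weights from a pairwise comparison matrix M (M[i][j] ~ w_i/w_j)."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    norm = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in norm]

# Perfectly consistent judgments with importance ratios 6 : 3 : 1.
M = [[1.0,   2.0,   6.0],   # tree species
     [0.5,   1.0,   3.0],   # tree stage
     [1 / 6, 1 / 3, 1.0]]   # aspect
w = ahp_weights(M)
print(w)  # ~[0.6, 0.3, 0.1]: species dominates, aspect least effective
```

In a full AHP application one would also compute the consistency ratio of M before trusting the weights; the matrix here is constructed to be perfectly consistent.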
Ultra-sensitive detection of leukemia by graphene
NASA Astrophysics Data System (ADS)
Akhavan, Omid; Ghaderi, Elham; Hashemi, Ehsan; Rahighi, Reza
2014-11-01
Graphene oxide nanoplatelets (GONPs) with extremely sharp edges (lateral dimensions ~20-200 nm and thicknesses <2 nm) were applied in extraction of the overexpressed guanine synthesized in the cytoplasm of leukemia cells. The blood serums containing the extracted guanine were used in differential pulse voltammetry (DPV) with reduced graphene oxide nanowall (rGONW) electrodes to develop fast and ultra-sensitive electrochemical detection of leukemia cells at leukemia fractions (LFs) of ~10^-11 (as the lower detection limit). The stability of the DPV signals obtained by oxidation of the extracted guanine on the rGONWs was studied after 20 cycles. Without the guanine extraction, the DPV peaks relating to guanine oxidation of normal and abnormal cells overlapped at LFs <10^-9, and consequently, the performance of rGONWs alone was limited at this level. As a benchmark, DPV using glassy carbon electrodes was able to detect only LFs ~10^-2. The ultra-sensitivity obtained by this combination method (guanine extraction by GONPs followed by guanine oxidation on rGONWs) is five orders of magnitude better than the sensitivity of the best current technologies (e.g., detection of specific mutations by polymerase chain reaction), which are not only expensive but also require a few days for diagnosis. Electronic supplementary information (ESI) available. See DOI: 10.1039/C4NR04589K
Laget, Sophie; Broncy, Lucile; Hormigos, Katia; Dhingra, Dalia M; BenMohamed, Fatima; Capiod, Thierry; Osteras, Magne; Farinelli, Laurent; Jackson, Stephen; Paterlini-Bréchot, Patrizia
2017-01-01
Circulating Tumor Cells (CTC) and Circulating Tumor Microemboli (CTM) are Circulating Rare Cells (CRC) which herald tumor invasion and are expected to provide an opportunity to improve the management of cancer patients. An unsolved technical issue in the CTC field is how to obtain a highly sensitive and unbiased collection of these fragile and heterogeneous cells, in both live and fixed form, for molecular study when they are extremely rare, particularly at the beginning of the invasion process. We report a new protocol to enrich live CTC from blood using ISET® (Isolation by SizE of Tumor/Trophoblastic Cells), an open system originally developed for marker-independent isolation of fixed tumor cells. We assessed the impact of our new enrichment method on live tumor cell antigen expression, cytoskeleton structure, cell viability and ability to expand in culture. We also explored the in vitro performance of ISET® in collecting intact fixed and live cancer cells by using spiking analyses with extremely low numbers of fluorescent cultured cells. We describe results consistently showing the feasibility of isolating fixed and live tumor cells with a Lower Limit of Detection (LLOD) of one cancer cell per 10 mL of blood and a sensitivity at LLOD ranging from 83% to 100%. This very high sensitivity threshold can be maintained when plasma is collected before tumor cell isolation. Finally, we performed a comparative next-generation sequencing (NGS) analysis of tumor cells before and after isolation from blood and culture, establishing the feasibility of NGS analysis of single live and fixed tumor cells enriched from blood by our system. This study provides new protocols for the detection and characterization of CTC collected from blood at the very early steps of tumor invasion.
NASA Astrophysics Data System (ADS)
Han, Jin-Hee
2018-03-01
Recently, the aspect ratio of the capacitors and via holes of memory semiconductor devices has been increasing dramatically in order to store more information in a limited area. A small amount of residue remaining after the etch process on the bottom of a high aspect ratio structure can cause a critical failure in device operation. Back-scattered electrons (BSE) are mainly used for inspecting defects located at the bottom of high aspect ratio structures or for analyzing the overlay of multi-layer structures, because these electrons have high linearity along the direction of emission and a high kinetic energy above 50 eV. However, there is a limitation: BSE inspection cannot detect ultra-thin residue material a few nanometers thick because its surface sensitivity is extremely low. We studied the characteristics of BSE spectra using Monte Carlo simulations for several cases in which high aspect ratio structures contain extremely small residues. Based on the assumption that most of the electrons emitted without energy loss originate at the surface, we selected a detection energy window spanning the 20 eV below the maximum energy of the BSE. This window section is named the high-energy BSE region. Comparing the detection sensitivity of the conventional and the high-energy BSE detection modes, we found that the detection sensitivity for residues 2 nm thick is improved by more than 10 times in the high-energy BSE mode. This BSE technology is a new inspection method that can greatly improve the inspection sensitivity for ultra-thin residual material present in high aspect ratio structures, and its application is expected to expand.
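The windowing step described above, keeping only electrons within 20 eV of the maximum BSE energy on the assumption that near-zero-loss electrons carry the surface-sensitive signal, reduces to a simple filter over an energy spectrum. The spectrum values below are invented; this is not the paper's Monte Carlo code.

```python
# Sketch of the high-energy BSE window: sum detector counts inside
# [E_max - width, E_max]. Spectrum data are illustrative only.

def high_energy_window(energies_ev, counts, width_ev=20.0):
    """Total counts whose energy lies within `width_ev` of the maximum."""
    e_max = max(energies_ev)
    return sum(c for e, c in zip(energies_ev, counts) if e >= e_max - width_ev)

# Toy spectrum: mostly inelastic background, a few near-elastic electrons.
energies = [100.0, 500.0, 985.0, 990.0, 1000.0]
counts   = [5,     3,     2,     4,     1]
window_counts = high_energy_window(energies, counts)
print(window_counts)  # only the three bins within 20 eV of 1000 eV survive
```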
Amplification of Angular Rotations Using Weak Measurements
NASA Astrophysics Data System (ADS)
Magaña-Loaiza, Omar S.; Mirhosseini, Mohammad; Rodenburg, Brandon; Boyd, Robert W.
2014-05-01
We present a weak measurement protocol that permits a sensitive estimation of angular rotations based on the concept of weak-value amplification. The shift in the state of a pointer, in both angular position and the conjugate orbital angular momentum bases, is used to estimate angular rotations. This is done by an amplification of both the real and imaginary parts of the weak value of a polarization operator that has been coupled to the pointer, which is a spatial mode, via a spin-orbit coupling. Our experiment demonstrates the first realization of weak-value amplification in the azimuthal degree of freedom. We have achieved effective amplification factors as large as 100, providing a sensitivity that is on par with more complicated methods that employ quantum states of light or extremely large values of orbital angular momentum.
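The amplification can be reproduced numerically. The sketch below computes the weak value A_w = ⟨f|A|i⟩/⟨f|i⟩ of a Pauli polarization operator for nearly orthogonal pre- and post-selected states; the 0.005 rad offset is an illustrative choice, not a parameter of the experiment:

```python
import numpy as np

# Weak value A_w = <f|A|i> / <f|i> of a polarization operator A for
# pre-selected state |i> and post-selected state |f>.  Choosing |f>
# nearly orthogonal to |i> makes |A_w| >> 1: the amplification factor.
A = np.array([[1, 0], [0, -1]], dtype=complex)       # sigma_z in the H/V basis

def weak_value(A, pre, post):
    return (post.conj() @ A @ pre) / (post.conj() @ pre)

eps = 0.005                                          # post-selection offset (rad)
pre = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)         # diagonal
post = np.array([np.cos(3 * np.pi / 4 + eps),                  # near anti-diagonal
                 np.sin(3 * np.pi / 4 + eps)], dtype=complex)

amp = abs(weak_value(A, pre, post))
print(round(amp))     # ~1/eps = 200
```

Shrinking `eps` raises the amplification factor at the cost of post-selection probability |⟨f|i⟩|², which is the usual weak-value trade-off.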
Skin sensitization remains an important endpoint for consumers, manufacturers and regulators. Although the development of alternative approaches to assess skin sensitization potential has been extremely active over many years, the implication of regulations such as REACH and the ...
Shot-noise-limited optical Faraday polarimetry with enhanced laser noise cancelling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Jiaming; Department of Physics, Indiana University Purdue University Indianapolis, Indianapolis, Indiana 46202; Luo, Le, E-mail: leluo@iupui.edu
2014-03-14
We present a shot-noise-limited measurement of optical Faraday rotations with sub-ten-nanoradian angular sensitivity. This extremely high sensitivity is achieved by using electronic laser noise cancelling and phase-sensitive detection. Specifically, an electronic laser noise canceller with a common mode rejection ratio of over 100 dB was designed and built for enhanced laser noise cancelling. By measuring the Faraday rotation of ambient air, we demonstrate an angular sensitivity of up to 9.0×10⁻⁹ rad/√Hz, which is limited only by the shot noise of the photocurrent of the detector. To date, this is the highest angular sensitivity ever reported for Faraday polarimeters in the absence of cavity enhancement. The measured Verdet constant of ambient air, 1.93(3)×10⁻⁹ rad/(G cm) at 633 nm wavelength, agrees extremely well with earlier experiments using high-finesse optical cavities. Further, we demonstrate the application of this sensitive technique in materials science by measuring the Faraday effect of an ultrathin iron film.
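The reported figures can be sanity-checked against the shot-noise formula for a balanced polarimeter, δθ = 1/(2√Ṅ) with Ṅ the detected photon flux. The ~1 mW detected power below is our assumption, not a value stated in the abstract:

```python
import math

h, c = 6.626e-34, 2.998e8
wavelength = 633e-9        # m
power = 1e-3               # W of detected light -- an assumed value

# Shot-noise-limited angular sensitivity of a balanced polarimeter:
# delta_theta = 1 / (2 sqrt(photon flux)), in rad per sqrt(Hz).
photon_flux = power / (h * c / wavelength)
shot_noise_angle = 1.0 / (2.0 * math.sqrt(photon_flux))
print(f"{shot_noise_angle:.1e} rad/sqrt(Hz)")        # ~9e-9, matching the paper

# Faraday rotation of air in a ~0.5 G geomagnetic-scale field over a 1 m
# path, using the measured Verdet constant:
verdet = 1.93e-9           # rad/(G cm) at 633 nm
theta = verdet * 0.5 * 100.0
print(f"{theta:.1e} rad")  # ~1e-7 rad: nanoradian sensitivity is essential
```
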
Assessing Regional Scale Variability in Extreme Value Statistics Under Altered Climate Scenarios
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunsell, Nathaniel; Mechem, David; Ma, Chunsheng
Recent studies have suggested that low-frequency modes of climate variability can significantly influence regional climate. The climatology associated with extreme events has been shown to be particularly sensitive. This has profound implications for droughts, heat waves, and food production. We propose to examine regional climate simulations conducted over the continental United States by applying a recently developed technique which combines wavelet multi-resolution analysis with information theory metrics. This research is motivated by two fundamental questions concerning the spatial and temporal structure of extreme events: 1) what temporal scales of the extreme value distributions are most sensitive to alteration by low-frequency climate forcings, and 2) what is the nature of the spatial structure of variation in these timescales? The primary objective is to assess to what extent information theory metrics can be useful in characterizing the nature of extreme weather phenomena. Specifically, we hypothesize that (1) changes in the nature of extreme events will impact the temporal probability density functions and that information theory metrics will be sensitive to these changes, and (2) via a wavelet multi-resolution analysis, we will be able to characterize the relative contribution of different timescales to the stochastic nature of extreme events. In order to address these hypotheses, we propose a unique combination of an established regional climate modeling approach and advanced statistical techniques to assess the effects of low-frequency modes on climate extremes over North America. The behavior of climate extremes in RCM simulations for the 20th century will be compared with statistics calculated from the United States Historical Climatology Network (USHCN) and simulations from the North American Regional Climate Change Assessment Program (NARCCAP).
This effort will serve to establish the baseline behavior of climate extremes, the validity of an innovative multi-resolution information theory approach, and the ability of the RCM modeling framework to represent the low-frequency modulation of extreme climate events. Once the skill of the modeling and analysis methodology has been established, we will apply the same approach to the AR5 (IPCC Fifth Assessment Report) climate change scenarios in order to assess how climate extremes, and the influence of low-frequency variability on them, might vary under a changing climate. The research specifically addresses DOE focus area 2, simulation of climate extremes under a changing climate. Specific results will include (1) a better understanding of the spatial and temporal structure of extreme events, (2) a thorough quantification of how extreme values are impacted by low-frequency climate teleconnections, (3) increased knowledge of current regional climate models' ability to ascertain these influences, and (4) a detailed examination of how the distribution of extreme events is likely to change under different climate change scenarios. In addition, this research will assess the ability of the innovative wavelet information theory approach to characterize extreme events. Any and all of these results will greatly enhance society's ability to understand and mitigate the regional ramifications of future global climate change.
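A minimal sketch of the proposed combination, assuming a Haar wavelet basis and a histogram entropy estimator (neither is specified by the proposal), applied to a synthetic temperature-like series with inserted extremes:

```python
import numpy as np

def haar_details(x, levels):
    """One-dimensional Haar multiresolution analysis: return the detail
    coefficients at each dyadic timescale (level 1 = finest)."""
    details = []
    approx = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2))
        approx = (even + odd) / np.sqrt(2)
    return details

def shannon_entropy(coeffs, bins=16):
    """Shannon entropy (bits) of the coefficient distribution at one scale."""
    p, _ = np.histogram(coeffs, bins=bins)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log2(p)).sum())

# Hypothetical daily series: annual cycle + noise + a few inserted extremes;
# scale-by-scale entropy indicates where variability concentrates.
rng = np.random.default_rng(0)
t = np.arange(4096)
series = 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)
series[rng.choice(t.size, 20, replace=False)] += 15    # extreme events

entropies = [shannon_entropy(d) for d in haar_details(series, 5)]
for level, e in enumerate(entropies, start=1):
    print(level, round(e, 2))
```

Comparing these scale-wise entropies between control and forced simulations is one concrete way to quantify which timescales of the extreme-value distribution respond to low-frequency forcing.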
Whittaker, Rachel L; Park, Woojin; Dickerson, Clark R
2018-04-27
Efficient and holistic identification of fatigue-induced movement strategies can be limited by large between-subject variability in descriptors of joint angle data. One promising alternative to traditional or computationally intensive methods is the symbolic motion structure representation algorithm (SMSR), which identifies the basic spatial-temporal structure of joint angle data using string descriptors of temporal joint angle trajectories. This study used the SMSR to identify changes in upper extremity joint angle time series during a repetitive goal-directed task that induced muscle fatigue. Twenty-eight participants (15 M, 13 F) performed a seated repetitive task until fatigued. Upper extremity joint angles were extracted from motion capture for representative task cycles. SMSRs, averages, and ranges of several joint angles were compared at the start and end of the repetitive task to identify kinematic changes with fatigue. At the group level, significant increases in the range of all joint angle data existed, with large between-subject variability that posed a challenge to the interpretation of these fatigue-related changes. However, changes in the SMSRs across participants effectively summarized the adoption of adaptive movement strategies. This establishes SMSR as a viable, logical, and sensitive method of fatigue identification via kinematic changes, with novel application and pragmatism for visual assessment of fatigue development. Copyright © 2018 Elsevier Ltd. All rights reserved.
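A toy version of the symbolic encoding can illustrate the idea: segment a joint-angle trajectory and label each segment by its trend. The segment count, dead-band, and traces below are illustrative choices, not values or data from the study:

```python
import numpy as np

def smsr_string(angles, n_segments=8, tol=1.0):
    """Toy symbolic descriptor in the spirit of SMSR: split the joint-angle
    series into segments and label each Increasing / Decreasing / Constant
    by its net change in degrees (`tol` is an assumed dead-band)."""
    segments = np.array_split(np.asarray(angles, dtype=float), n_segments)
    labels = []
    for seg in segments:
        delta = seg[-1] - seg[0]
        labels.append("I" if delta > tol else "D" if delta < -tol else "C")
    return "".join(labels)

# Hypothetical shoulder-elevation traces over one task cycle (degrees):
t = np.linspace(0.0, 1.0, 200)
fresh = 40 + 30 * np.sin(np.pi * t)                      # smooth raise-lower
fatigued = 50 + 35 * np.minimum(np.sin(np.pi * t), 0.8)  # plateaued strategy

s_fresh, s_fatigued = smsr_string(fresh), smsr_string(fatigued)
print(s_fresh, s_fatigued)   # differing strings flag the changed strategy
```

Because the string discards absolute offsets and amplitudes, it compresses a trajectory into its spatial-temporal structure, which is exactly the property that sidesteps between-subject variability in raw joint angles.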
Rao, Ameya; Long, Hu; Harley-Trochimczyk, Anna; Pham, Thang; Zettl, Alex; Carraro, Carlo; Maboudian, Roya
2017-01-25
A simple and versatile strategy is presented for the localized on-chip synthesis of an ordered metal oxide hollow sphere array directly on a low power microheater platform to form a closely integrated miniaturized gas sensor. Selective microheater surface modification through fluorinated monolayer self-assembly and its subsequent microheater-induced thermal decomposition enables the position-controlled deposition of an ordered two-dimensional colloidal sphere array, which serves as a sacrificial template for metal oxide growth via homogeneous chemical precipitation; this strategy ensures control in both the morphology and placement of the sensing material on only the active heated area of the microheater platform, providing a major advantage over other methods of presynthesized nanomaterial integration via suspension coating or printing. A fabricated tin oxide hollow sphere-based sensor shows high sensitivity (6.5 ppb detection limit) and selectivity toward formaldehyde, and extremely fast response (1.8 s) and recovery (5.4 s) times. This flexible and scalable method can be used to fabricate high performance miniaturized gas sensors with a variety of hollow nanostructured metal oxides for a range of applications, including combining multiple metal oxides for superior sensitivity and tunable selectivity.
NASA Astrophysics Data System (ADS)
Chang, C.; Melkonian, J.; Riha, S. J.; Gu, L.; Sun, Y.
2017-12-01
Improving the sensitivity of methods for crop monitoring and yield forecasting is crucial as the frequency of extreme weather events increases. Conventional remote monitoring methods rely on greenness-based indices such as NDVI and EVI, which do not directly measure photosynthesis and are not sufficiently sensitive to rapid plant stress response. Solar-induced chlorophyll fluorescence (SIF) is a promising new technology that serves as a direct functional proxy of photosynthesis. We developed the first system utilizing dual QE Pro spectrometers to continuously measure the diurnal and seasonal cycle of SIF, and deployed the system in a corn field in upstate New York in 2017. To complement SIF, canopy-level measurements of carbon and water fluxes were also measured, along with concurrent leaf-level measurements of gas exchange and PAM fluorescence, midday water potential, leaf pigments, phenology, LAI, and soil moisture. We show that SIF is well correlated to GPP during the growing season and show that both are controlled by similar environmental conditions including PAR and water availability. We also describe diurnal changes in photosynthesis and plant water status and demonstrate the sensitivity of SIF to diurnal plant response.
Infrared and Raman Microscopy in Cell Biology
Matthäus, Christian; Bird, Benjamin; Miljković, Miloš; Chernenko, Tatyana; Romeo, Melissa; Diem, Max
2009-01-01
This chapter presents novel microscopic methods to monitor cell biological processes of live or fixed cells without the use of any dye, stains, or other contrast agent. These methods are based on spectral techniques that detect inherent spectroscopic properties of biochemical constituents of cells, or parts thereof. Two different modalities have been developed for this task. One of them is infrared micro-spectroscopy, in which an average snapshot of a cell's biochemical composition is collected at a spatial resolution of typically 25 µm. This technique, which is extremely sensitive and can collect such a snapshot in fractions of a second, is particularly suited for studying gross biochemical changes. The other technique, Raman microscopy (also known as Raman micro-spectroscopy), is ideally suited to study variations of cellular composition on the scale of subcellular organelles, since its spatial resolution is as good as that of fluorescence microscopy. Both techniques exhibit the fingerprint sensitivity of vibrational spectroscopy toward biochemical composition, and can be used to follow a variety of cellular processes. PMID:19118679
Comparative sensitizing potencies of fragrances, preservatives, and hair dyes.
Lidén, Carola; Yazar, Kerem; Johansen, Jeanne D; Karlberg, Ann-Therese; Uter, Wolfgang; White, Ian R
2016-11-01
The local lymph node assay (LLNA) is used for assessing sensitizing potential in hazard identification and risk assessment for regulatory purposes. Sensitizing potency on the basis of the LLNA is categorized into extreme (EC3 value of ≤0.2%), strong (>0.2% to ≤2%), and moderate (>2%). To compare the sensitizing potencies of fragrance substances, preservatives, and hair dye substances, which are skin sensitizers that frequently come into contact with the skin of consumers and workers, LLNA results and EC3 values for 72 fragrance substances, 25 preservatives and 107 hair dye substances were obtained from two published compilations of LLNA data and opinions by the Scientific Committee on Consumer Safety and its predecessors. The median EC3 values of fragrances (n = 61), preservatives (n = 19) and hair dyes (n = 59) were 5.9%, 0.9%, and 1.3%, respectively. The majority of sensitizing preservatives and hair dyes are thus strong or extreme sensitizers (EC3 value of ≤2%), and fragrances are mostly moderate sensitizers. Although fragrances are typically moderate sensitizers, they are among the most frequent causes of contact allergy. This indicates that factors other than potency need to be addressed more rigorously in risk assessment and risk management. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
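The potency bands translate directly into a categorization rule; applying it to the reported median EC3 values reproduces the groupings stated above:

```python
def potency_category(ec3_percent):
    """LLNA potency bands used in the study: extreme (EC3 <= 0.2%),
    strong (> 0.2% to <= 2%), moderate (> 2%)."""
    if ec3_percent <= 0.2:
        return "extreme"
    if ec3_percent <= 2.0:
        return "strong"
    return "moderate"

# Median EC3 values reported for the three substance groups:
for group, ec3 in [("fragrances", 5.9), ("preservatives", 0.9), ("hair dyes", 1.3)]:
    print(group, potency_category(ec3))
```
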
Wind and wave extremes over the world oceans from very large ensembles
NASA Astrophysics Data System (ADS)
Breivik, Øyvind; Aarnes, Ole Johan; Abdalla, Saleh; Bidlot, Jean-Raymond; Janssen, Peter A. E. M.
2014-07-01
Global return values of marine wind speed and significant wave height are estimated from very large aggregates of archived ensemble forecasts at +240 h lead time. Long lead time ensures that the forecasts represent independent draws from the model climate. Compared with ERA-Interim, a reanalysis, the ensemble yields higher return estimates for both wind speed and significant wave height. Confidence intervals are much tighter due to the large size of the data set. The period (9 years) is short enough to be considered stationary even with climate change. Furthermore, the ensemble is large enough for nonparametric 100 year return estimates to be made from order statistics. These direct return estimates compare well with extreme value estimates outside areas with tropical cyclones. Like any method employing modeled fields, it is sensitive to tail biases in the numerical model, but we find that the biases are moderate outside areas with tropical cyclones.
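The order-statistics approach can be sketched with synthetic data: given enough independent "model years", the 100-year return value is simply the empirical quantile with annual exceedance probability 1/100, no extreme-value fit required. The Gumbel parameters below are illustrative, not fitted to the paper's fields:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the ensemble aggregate: 20,000 independent "annual maximum"
# draws of significant wave height (Gumbel-like), i.e. ~20,000 model years.
n_years = 20000
annual_max = 8.0 + 2.0 * rng.gumbel(size=n_years)

def return_value(sample, period_years):
    """Nonparametric return value from order statistics: the empirical
    quantile with annual exceedance probability 1/period."""
    return float(np.quantile(sample, 1.0 - 1.0 / period_years))

rv100 = return_value(annual_max, 100)
# Gumbel theory for comparison: x_T = mu + beta * (-ln(-ln(1 - 1/T)))
theory = 8.0 + 2.0 * (-np.log(-np.log(1.0 - 1.0 / 100)))
print(round(rv100, 2), round(theory, 2))   # the two agree closely
```

With a real ensemble the same quantile would be taken over the pooled +240 h forecast fields, relying on the long lead time to make the draws independent.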
Wu, Xiang; Lee, Hyungseok; Bilsel, Osman; ...
2015-01-01
One of the key roadblocks in UCNP development is its extremely limited choices of excitation wavelengths. We report a generic design to program UCNPs to possess highly tunable dye characteristic excitation bands. Using such distinctive properties, we were able to develop a new excitation wavelength selective security imaging. Finally, this work unleashed the greater freedom of the excitation wavelengths of the upconversion nanoparticles and we believe it is a game-changer in the field and this method will enable numerous applications that are currently limited by existing UCNPs.
NASA Technical Reports Server (NTRS)
Smathers, J. B.; Kuykendall, W. E., Jr.; Wright, R. E., Jr.; Marshall, J. R.
1973-01-01
Radioisotope measurement techniques and neutron activation analysis are evaluated for use in identifying and locating contamination sources in space environment simulation chambers. The alpha range method allows the determination of total contaminant concentration in vapor state and condensate state. A Cf-252 neutron activation analysis system for detecting oils and greases tagged with stable elements is described. While neutron activation analysis of tagged contaminants offers specificity, an on-site system is extremely costly to implement and provides only marginal detection sensitivity under even the most favorable conditions.
DePhillipo, Nick; Kimura, Iris; Kocher, Morgan; Hetzler, Ronald
2017-01-01
Background Due to the high number of adolescent athletes and subsequent lower extremity injuries, improvements of injury prevention strategies with emphasis on clinic-based and practical assessments are warranted. Purpose The purpose of this study was to prospectively investigate if a battery of functional performance tests (FPT) could be used as a preseason-screening tool to identify adolescent athletes at risk for sports-related acute lower extremity injury via comparison of injured and uninjured subjects. Methods One hundred adolescent volleyball, basketball and soccer athletes (female, n=62; male, n=38; mean age = 14.4 ± 1.6) participated. The FPT assessment included: triple hop for distance, star excursion balance test, double leg lowering maneuver, drop jump video test, and multi-stage fitness test. Composite scores were calculated using a derived equation. Subjects were monitored throughout their designated sport season(s), which consisted of a six-month surveillance period. The school's certified athletic trainer (ATC) recorded all injuries. Subjects were categorized into groups according to sex and injury incidence (acute lower extremity injury vs. uninjured) for analysis. Results Mean FPT composite scores were significantly lower for the injured compared to the uninjured groups in both sexes (males: 19.06 ± 3.59 vs. 21.90 ± 2.44; females: 19.48 ± 3.35 vs. 22.10 ± 3.06 injured and uninjured, respectively) (p < .05). The receiver-operator characteristic analysis determined the cut-off score at ≤ 20 for both sexes (sensitivity=.71, specificity=.81, for males; sensitivity=.67, specificity=.69, for females) (p<.05) for acute noncontact lower extremity injuries. Significant positive correlations were found between the FPT composite score and the multi-stage fitness test in male subjects (r=.474, p=.003), suggesting a relationship between functional performance, aerobic capacity, and potential injury risk.
Conclusion A comprehensive assessment of functional performance tests may be beneficial to identify high-injury risk adolescents prior to athletic participation. PMID:28515975
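A minimal sketch of the cutoff evaluation (hypothetical scores and outcomes, not the study data): flag athletes with a composite score ≤ 20 as at-risk and compute sensitivity and specificity against the observed injuries:

```python
def screen_metrics(scores, injured, cutoff=20.0):
    """Sensitivity and specificity of flagging athletes with an FPT
    composite score <= cutoff as at-risk (lower score = higher risk)."""
    tp = sum(1 for s, inj in zip(scores, injured) if inj and s <= cutoff)
    fn = sum(1 for s, inj in zip(scores, injured) if inj and s > cutoff)
    tn = sum(1 for s, inj in zip(scores, injured) if not inj and s > cutoff)
    fp = sum(1 for s, inj in zip(scores, injured) if not inj and s <= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical composite scores and injury outcomes (1 = injured):
scores  = [17, 19, 20, 22, 18, 23, 24, 21, 16, 25]
injured = [1,  1,  0,  0,  1,  0,  0,  1,  1,  0]
sens, spec = screen_metrics(scores, injured)
print(sens, spec)   # 0.8 0.8
```

In practice the cutoff itself would come from the ROC curve (the point trading off sensitivity and specificity), which is how the study arrived at ≤ 20.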
Osland, Michael J.; Day, Richard H.; Hall, Courtney T.; Brumfield, Marisa D; Dugas, Jason; Jones, William R.
2017-01-01
Within the context of climate change, there is a pressing need to better understand the ecological implications of changes in the frequency and intensity of climate extremes. Along subtropical coasts, less frequent and warmer freeze events are expected to permit freeze-sensitive mangrove forests to expand poleward and displace freeze-tolerant salt marshes. Here, our aim was to better understand the drivers of poleward mangrove migration by quantifying spatiotemporal patterns in mangrove range expansion and contraction across land-ocean temperature gradients. Our work was conducted in a freeze-sensitive mangrove-marsh transition zone that spans a land-ocean temperature gradient in one of the world's most wetland-rich regions (Mississippi River Deltaic Plain; Louisiana, USA). We used historical air temperature data (1893-2014), alternative future climate scenarios, and coastal wetland coverage data (1978-2011) to investigate spatiotemporal fluctuations and climate-wetland linkages. Our analyses indicate that changes in mangrove coverage have been controlled primarily by extreme freeze events (i.e., air temperatures below a threshold zone of -6.3 to -7.6 °C). Over the past 121 years, mangrove range expansion and contraction have repeatedly occurred across land-ocean temperature gradients. Mangrove resistance, resilience, and dominance were all highest in areas closer to the ocean where temperature extremes were buffered by large expanses of water and saturated soil. Under climate change, these areas will likely serve as local hotspots for mangrove dispersal, growth, range expansion, and displacement of salt marsh. Collectively, our results show that the frequency and intensity of freeze events across land-ocean temperature gradients greatly influences spatiotemporal patterns of range expansion and contraction of freeze-sensitive mangroves.
We expect that, along subtropical coasts, similar processes govern the distribution and abundance of other freeze-sensitive organisms. In broad terms, our findings can be used to better understand and anticipate the ecological effects of changing winter climate extremes, especially within the transition zone between tropical and temperate climates.
A comparison of solute-transport solution techniques based on inverse modelling results
Mehl, S.; Hill, M.C.
2000-01-01
Five common numerical techniques (finite difference, predictor-corrector, total-variation-diminishing, method-of-characteristics, and modified-method-of-characteristics) were tested using simulations of a controlled conservative tracer-test experiment through a heterogeneous, two-dimensional sand tank. The experimental facility was constructed using randomly distributed homogeneous blocks of five sand types. This experimental model provides an outstanding opportunity to compare the solution techniques because of the heterogeneous hydraulic conductivity distribution of known structure, and the availability of detailed measurements with which to compare simulated concentrations. The present work uses this opportunity to investigate how three common types of results - simulated breakthrough curves, sensitivity analysis, and calibrated parameter values - change in this heterogeneous situation, given the different methods of simulating solute transport. The results show that simulated peak concentrations, even at very fine grid spacings, varied because of different amounts of numerical dispersion. Sensitivity analysis results were robust in that they were independent of the solution technique. They revealed extreme correlation between hydraulic conductivity and porosity, and that the breakthrough curve data did not provide enough information about the dispersivities to estimate individual values for the five sands. However, estimated hydraulic conductivity values are significantly influenced by both the large possible variations in model dispersion and the amount of numerical dispersion present in the solution technique.
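The numerical-dispersion effect the comparison hinges on is easy to reproduce: a first-order upwind finite-difference scheme advects a solute pulse stably but smears its peak relative to the exact translated solution. This is a generic textbook demonstration, not the study's transport model:

```python
import numpy as np

nx, nt = 200, 150
courant = 0.5                      # v*dt/dx; <= 1 keeps upwind stable
c = np.zeros(nx)
c[20:40] = 1.0                     # initial solute pulse

# Exact solution: the pulse simply translates by courant*nt cells.
exact = np.zeros(nx)
shift = int(courant * nt)
exact[20 + shift:40 + shift] = 1.0

# First-order upwind update; numpy evaluates the RHS before assigning,
# so this is an explicit time step.
for _ in range(nt):
    c[1:] = c[1:] - courant * (c[1:] - c[:-1])

print(round(exact.max(), 3), round(c.max(), 3))  # peak < 1: numerical dispersion
```

Mass is conserved but the peak concentration drops, which is exactly why simulated peak concentrations in the tank comparison varied between solution techniques even at fine grid spacings.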
NASA Astrophysics Data System (ADS)
Qiu, Hong; Tian, Linwei; Ho, Kin-fai; Yu, Ignatius T. S.; Thach, Thuan-Quoc; Wong, Chit-Ming
2016-05-01
The short-term effects of ambient cold temperature on mortality have been well documented in the literature worldwide. However, less is known about which subpopulations are more vulnerable to death related to extreme cold. We aimed to examine the personal characteristics and underlying causes of death that modified the association between extreme cold and mortality using a case-only approach. Individual information on 197,680 deaths from natural causes, together with daily temperature and air pollution concentrations in the cool season (November-April) during 2002-2011 in Hong Kong, was collected. Extreme cold was defined as days in which the preceding week had a daily maximum temperature at or below the 1st percentile of its distribution. Logistic regression models were used to estimate the modification effects, further controlling for age, seasonal pattern, and air pollution. Sensitivity analyses were conducted by using the 5th percentile as the cutoff point to define extreme cold. Subjects aged 85 and older were more vulnerable to extreme cold, with an odds ratio (OR) of 1.33 (95 % confidence interval (CI), 1.22-1.45). A greater risk of extreme cold-related mortality was observed for total cardiorespiratory diseases and several specific causes, including hypertensive diseases, stroke, congestive heart failure, chronic obstructive pulmonary disease (COPD), and pneumonia. Hypertensive diseases exhibited the greatest vulnerability to extreme cold exposure, with an OR of 1.37 (95 % CI, 1.13-1.65). Sensitivity analyses showed the robustness of these effect modifications. This evidence on which subpopulations are vulnerable to the adverse effects of extreme cold is important for informing public health measures to minimize those effects.
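In its simplest form, the case-only estimate reduces to an odds ratio from a 2x2 table of deaths (the study's models additionally adjust for age, season, and pollution). The counts below are hypothetical, chosen only to land near the reported OR for age ≥ 85:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% Wald CI from a 2x2 table in the case-only design:
    rows = characteristic present/absent among deaths,
    cols = death on an extreme-cold day vs. another cool-season day."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts (not the study's data):
a, b = 400, 5600     # age >= 85: extreme-cold days, other days
c, d = 900, 16700    # younger:   extreme-cold days, other days
or_, lo, hi = odds_ratio_ci(a, b, c, d)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An OR above 1 means that among all deaths, the characteristic (here, advanced age) is over-represented on extreme-cold days, i.e. that group is more vulnerable to cold.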
Qin, Xiao-ying; Li, Guo-xuan; Qin, Ya-zhen; Wang, Yu; Wang, Feng-rong; Liu, Dai-hong; Xu, Lan-ping; Chen, Huan; Han, Wei; Wang, Jing-zhi; Zhang, Xiao-hui; Li, Jin-lan; Li, Ling-di; Liu, Kai-yan; Huang, Xiao-jun
2011-08-01
Analysis of changes in recipient and donor hematopoietic cell origin is extremely useful to monitor the effect of hematopoietic stem cell transplantation (HSCT) and sequential adoptive immunotherapy by donor lymphocyte infusions. We developed a sensitive, reliable and rapid real-time PCR method based on sequence polymorphism systems to quantitatively assess the hematopoietic chimerism after HSCT. A panel of 29 selected sequence polymorphism (SP) markers was screened by real-time PCR in 101 HSCT patients with leukemia and other hematological diseases. The chimerism kinetics of bone marrow samples of 8 HSCT patients in remission and relapse situations were followed longitudinally. Recipient genotype discrimination was possible in 97.0% (98 of 101) with a mean number of 2.5 (1-7) informative markers per recipient/donor pair. Using serial dilutions of plasmids containing specific SP markers, the linear correlation (r) of 0.99, the slope between -3.2 and -3.7 and the sensitivity of 0.1% were proved reproducible. By this method, it was possible to very accurately detect autologous signals in the range from 0.1% to 30%. The accuracy of the method in the very important range of autologous signals below 5% was extraordinarily high (standard deviation <1.85%), which might significantly improve detection accuracy of changes in autologous signals early in the post-transplantation course of follow-up. The main advantage of the real-time PCR method over short tandem repeat PCR chimerism assays is the absence of PCR competition and plateau biases, with demonstrated greater sensitivity and linearity. Finally, we prospectively analyzed bone marrow samples of 8 patients who received allografts and presented the chimerism kinetics of remission and relapse situations that illustrated the sensitivity level and the promising clinical application of this method. 
This SP-based real-time PCR assay provides a rapid, sensitive, and accurate quantitative assessment of mixed chimerism that can be useful in predicting graft rejection and early relapse.
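The quantitative step follows from the standard-curve relation Ct = slope·log10(quantity) + intercept, so a Ct difference converts to a relative quantity of 10^(ΔCt/slope). The slope of -3.45 below is an assumed value inside the reported -3.2 to -3.7 range, and the Ct values are hypothetical:

```python
def percent_recipient(ct_marker, ct_reference, slope=-3.45):
    """Convert real-time PCR Cts to percent autologous (recipient) signal
    via the standard-curve relation Ct = slope*log10(quantity) + intercept,
    i.e. relative quantity = 10 ** ((ct_marker - ct_reference) / slope)."""
    return 100.0 * 10 ** ((ct_marker - ct_reference) / slope)

# A recipient-specific marker crossing threshold 10 cycles after the
# donor+recipient reference corresponds to roughly 0.1% autologous signal,
# i.e. the sensitivity limit reported above:
print(round(percent_recipient(32.0, 22.0), 2))   # ~0.13
```

The near-theoretical slope (a slope of -3.32 corresponds to 100% PCR efficiency) is what makes the log-linear conversion, and hence the 0.1% sensitivity, reliable.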
NASA Astrophysics Data System (ADS)
Li, J.; Santos, J. T.; Sillanpää, M. A.
2018-02-01
A single-electron transistor (SET) can be used as an extremely sensitive charge detector. Mechanical displacements can be converted into charge, and hence, SETs can become sensitive detectors of mechanical oscillations. For studying small-energy oscillations, an important approach to realize the mechanical resonators is to use piezoelectric materials. Besides coupling to traditional electric circuitry, the strain-generated piezoelectric charge allows for measuring ultrasmall oscillations via SET detection. Here, we explore the usage of SETs to detect the shear-mode oscillations of a 6-mm-diameter quartz disk resonator with a resonance frequency around 9 MHz. We measure the mechanical oscillations using either a conventional DC SET, or use the SET as a homodyne or heterodyne mixer, or finally, as a radio-frequency single-electron transistor (RF-SET). The RF-SET readout is shown to be the most sensitive method, allowing us to measure mechanical displacement amplitudes below 10^{-13} m. We conclude that a detection based on a SET offers a potential to reach the sensitivity at the quantum limit of the mechanical vibrations.
Ultrasensitive Biosensors Using Enhanced Fano Resonances in Capped Gold Nanoslit Arrays
Lee, Kuang-Li; Huang, Jhih-Bin; Chang, Jhih-Wei; Wu, Shu-Han; Wei, Pei-Kuen
2015-01-01
Nanostructure-based sensors are capable of sensitive and label-free detection for biomedical applications. However, plasmonic sensors capable of highly sensitive detection with high-throughput and low-cost fabrication techniques are desirable. We show that capped gold nanoslit arrays made by thermal-embossing nanoimprint method on a polymer film can produce extremely sharp asymmetric resonances for a transverse magnetic-polarized wave. An ultrasmall linewidth is formed due to the enhanced Fano coupling between the cavity resonance mode in nanoslits and surface plasmon resonance mode on periodic metallic surface. With an optimal slit length and width, the full width at half-maximum bandwidth of the Fano mode is only 3.68 nm. The wavelength sensitivity is 926 nm/RIU for 60-nm-width and 1,000-nm-period nanoslits. The figure of merit is up to 252. The obtained value is higher than the theoretically estimated upper limits of the prism-coupling SPR sensors and the previously reported record high figure-of-merit in array sensors. In addition, the structure has an ultrahigh intensity sensitivity up to 48,117%/RIU. PMID:25708955
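The figure of merit follows directly from the reported numbers (wavelength sensitivity divided by the Fano resonance linewidth):

```python
# Figure of merit of a refractive-index sensor: wavelength sensitivity
# divided by the resonance linewidth (FWHM), using the reported values.
sensitivity = 926.0   # nm per refractive index unit (RIU)
fwhm = 3.68           # nm, Fano resonance linewidth
fom = sensitivity / fwhm
print(round(fom))     # 252
```
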
Developing a healthcare law library.
Sconyers, J M
1998-01-01
Legal materials are expensive, bulky, and extremely time sensitive. Selecting the appropriate means of ensuring easy access to retrievable, timely legal materials is extremely important to any lawyer. The author gives an overview of the various means of retrieving necessary research, including the strengths and weaknesses of each option.
High Sensitive Scintillation Observations At Very Low Frequencies
NASA Astrophysics Data System (ADS)
Konovalenko, A. A.; Falkovich, I. S.; Kalinichenko, N. N.; Olyak, M. R.; Lecacheux, A.; Rosolen, C.; Bougeret, J.-L.; Rucker, H. O.; Tokarev, Yu.
The observation of interplanetary scintillations of compact radio sources is a powerful method of solar wind diagnostics. The method has been developed mainly at decimeter and meter wavelengths. New possibilities are opened at extremely low frequencies (decameter waves), especially at large elongations. This approach is now being actively developed using the highly effective decameter antennas UTR-2, URAN, and the Nancay Decameter Array. A new class of back-end facilities, such as high-dynamic-range, high-resolution digital spectral processors, together with a dynamic-spectra measurement methodology, gives new opportunities for distinguishing ionospheric from interplanetary scintillations and for observing a large number of radio sources with different angular sizes and elongations, even for rather weak objects.
Improving the performance of extreme learning machine for hyperspectral image classification
NASA Astrophysics Data System (ADS)
Li, Jiaojiao; Du, Qian; Li, Wei; Li, Yunsong
2015-05-01
Extreme learning machine (ELM) and kernel ELM (KELM) can offer performance comparable to the standard powerful classifier, the support vector machine (SVM), but at much lower computational cost due to their extremely simple training step. However, their performance may be sensitive to several parameters, such as the number of hidden neurons. An empirical linear relationship between the number of training samples and the number of hidden neurons is proposed. Such a relationship can be easily estimated with two small training sets and extended to large training sets so as to greatly reduce computational cost. Other parameters, such as the steepness parameter in the sigmoidal activation function and the regularization parameter in the KELM, are also investigated. The experimental results show that classification performance is sensitive to these parameters; fortunately, simple selections can still yield satisfactory performance.
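A minimal sketch may make the "extremely simple training step" concrete: in an ELM the hidden-layer weights are random and fixed, and only the output weights are solved by regularized least squares. This is a generic NumPy illustration, not the authors' code; the toy data, hidden-layer size, and regularization value are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden, C=100.0):
    """Basic ELM training: random hidden layer, least-squares output weights.
    C is a regularization parameter, as in the KELM variant discussed."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # sigmoidal activations
    # Regularized least squares: beta = (H^T H + I/C)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Toy two-class problem: two well-separated Gaussian clusters.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.repeat([0, 1], 50)
T = np.eye(2)[y]                                    # one-hot targets

W, b, beta = elm_train(X, T, n_hidden=20)
acc = np.mean(elm_predict(X, W, b, beta) == y)
print(acc)  # the clusters are easily separable, so accuracy should be ~1.0
```

The only trainable quantity is `beta`, which is why the number of hidden neurons (the size of the linear system) dominates the training cost, as the abstract discusses.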
Milyo, Jeffrey; Mellor, Jennifer M
2003-01-01
Objective: To illustrate the potential sensitivity of ecological associations between mortality and certain socioeconomic factors to different methods of age-adjustment. Data Sources: Secondary analysis employing state-level data from several publicly available sources. Crude and age-adjusted mortality rates for 1990 are obtained from the U.S. Centers for Disease Control. The Gini coefficient for family income and the percent of persons below the federal poverty line are from the U.S. Bureau of Labor Statistics. Putnam's (2000) Social Capital Index was downloaded from ; the Social Mistrust Index was calculated from responses to the General Social Survey, following the method described in Kawachi et al. (1997). All other covariates are obtained from the U.S. Census Bureau. Study Design: We use least squares regression to estimate the effect of several state-level socioeconomic factors on mortality rates. We examine whether these statistical associations are sensitive to the use of alternative methods of accounting for the different age composition of state populations. Following several previous studies, we present results for the case when only mortality rates are age-adjusted. We contrast these results with those obtained from regressions of crude mortality on age variables. Principal Findings: Different age-adjustment methods can cause a change in the sign or statistical significance of the association between mortality and various socioeconomic factors. When age variables are included as regressors, we find no significant association between mortality and income inequality, minority racial concentration, or social capital. Conclusions: Ecological associations between certain socioeconomic factors and mortality may be extremely sensitive to different age-adjustment methods. PMID:14727797
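Direct age standardization, the adjustment at issue here, can be sketched in a few lines. All counts and weights below are hypothetical, chosen only to show how a crude rate and an age-adjusted rate diverge when a state's age mix differs from the standard population:

```python
import numpy as np

# Hypothetical age-banded data for one state (young, middle, old).
deaths = np.array([50, 200, 3000])
pop    = np.array([400_000, 500_000, 100_000])

# Hypothetical standard-population weights (e.g., a national reference
# distribution); direct age-adjustment weights age-specific rates by these.
std_weights = np.array([0.35, 0.50, 0.15])

crude_rate = deaths.sum() / pop.sum() * 100_000                 # per 100,000
age_specific = deaths / pop                                     # rate per band
adjusted_rate = (std_weights * age_specific).sum() * 100_000

print(round(crude_rate, 1), round(adjusted_rate, 1))  # 325.0 vs 474.4
```

Because this toy state is younger than the standard population, its crude rate understates mortality relative to the adjusted rate, which is exactly the kind of discrepancy that can flip the sign of an ecological regression coefficient.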
Meteorological risks and impacts on crop production systems in Belgium
NASA Astrophysics Data System (ADS)
Gobin, Anne
2013-04-01
Extreme weather events such as droughts, heat stress, rain storms and floods can have devastating effects on cropping systems. The perspective of rising risk-exposure is exacerbated further by projected increases of extreme events with climate change. Tighter limits on aid for agricultural damage and an overall reduction of direct income support further erode farmers' resilience. Based on insurance claims, potatoes and rapeseed are the most vulnerable crops, followed by cereals and sugar beets. Damage due to adverse meteorological events depends strongly on crop type, crop stage and soil type. Current knowledge gaps exist in the response of arable crops to the occurrence of extreme events. The degree of temporal overlap between extreme weather events and the sensitive periods of the farming calendar requires a modelling approach to capture the mixture of non-linear interactions between the crop and its environment. The regional crop model REGCROP (Gobin, 2010) enabled examination of the likely frequency and magnitude of drought, heat stress and waterlogging in relation to the cropping season and crop-sensitive stages of six arable crops: winter wheat, winter barley, winter rapeseed, potato, sugar beet and maize. Since crop development is driven by thermal time, crops matured earlier during the warmer 1988-2008 period than during the 1947-1987 period. Drought and heat stress, in particular during the sensitive crop stages, occur at different times in the cropping season and differ significantly between the two climatic periods, 1947-1987 and 1988-2008. Soil moisture deficit increases towards harvesting, such that earlier-maturing winter crops may avoid drought stress that occurs in late spring and summer. This is reflected in a decrease in both magnitude and frequency of soil moisture deficit around the sensitive stages during the 1988-2008 period, when atmospheric drought may be compensated for by soil moisture.
The risk of drought spells during the sensitive stages of summer crops increases and may be further aggravated by atmospheric moisture deficits and heat stress. Summer crops may therefore benefit from earlier planting dates and beneficial moisture conditions during early canopy development, but will suffer from increased drought and heat stress during crop maturity. During the harvesting stages, the number of waterlogged days increases in particular for tuber crops. Physically based crop models assist in understanding the links between different factors causing crop damage. The approach allows for assessing the meteorological impacts on crop growth due to the sensitive stages occurring earlier during the growing season and due to extreme weather events. Though average yields have risen continuously between 1947 and 2008 mainly due to technological advances, there is no evidence that relative tolerance to adverse weather conditions such as atmospheric moisture deficit and temperature extremes has changed.
NASA Astrophysics Data System (ADS)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used for high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies each parallelogram structure to a single link. Because such a model fails to reflect the error characteristics of the parallelogram structures, the accuracy design and kinematic calibration based on it are undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including those of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also determined. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method thoroughly discloses the process of error transmission and enhances the efficacy of accuracy design and calibration.
Imholte, Gregory; Gottardo, Raphael
2017-01-01
Summary The peptide microarray immunoassay simultaneously screens sample serum against thousands of peptides, determining the presence of antibodies bound to array probes. Peptide microarrays tiling immunogenic regions of pathogens (e.g. envelope proteins of a virus) are an important high throughput tool for querying and mapping antibody binding. Because of the assay’s many steps, from probe synthesis to incubation, peptide microarray data can be noisy with extreme outliers. In addition, subjects may produce different antibody profiles in response to an identical vaccine stimulus or infection, due to variability among subjects’ immune systems. We present a robust Bayesian hierarchical model for peptide microarray experiments, pepBayes, to estimate the probability of antibody response for each subject/peptide combination. Heavy-tailed error distributions accommodate outliers and extreme responses, and tailored random effect terms automatically incorporate technical effects prevalent in the assay. We apply our model to two vaccine trial datasets to demonstrate model performance. Our approach enjoys high sensitivity and specificity when detecting vaccine induced antibody responses. A simulation study shows an adaptive thresholding classification method has appropriate false discovery rate control with high sensitivity, and receiver operating characteristics generated on vaccine trial data suggest that pepBayes clearly separates responses from non-responses. PMID:27061097
Workshop on Using NASA Data for Time-Sensitive Applications
NASA Technical Reports Server (NTRS)
Davies, Diane K.; Brown, Molly E.; Murphy, Kevin J.; Michael, Karen A.; Zavodsky, Bradley T.; Stavros, E. Natasha; Carroll, Mark L.
2017-01-01
Over the past decade, there has been an increase in the use of NASA's Earth Observing System (EOS) data and imagery for time-sensitive applications such as monitoring wildfires, floods, and extreme weather events. In September 2016, NASA sponsored a workshop for data users, producers, and scientists to discuss the needs of time-sensitive science applications.
2016-04-01
AFCEC-CX-TY-TR-2016-0007: Handheld chem/biosensor using extreme conformational changes in designed binding proteins to enhance surface plasmon resonance (abstract; reporting period 08/14/2015-03/31/2016; presented in Baltimore, Maryland, 17-21 April 2016). We propose the development of a highly sensitive handheld chem/biosensor device using a novel class of engineered binding proteins.
Charge-transfer-based terbium MOF nanoparticles as fluorescent pH sensor for extreme acidity.
Qi, Zewan; Chen, Yang
2017-01-15
Newly emerged metal-organic frameworks (MOFs) have aroused great interest for designing functional materials by means of their flexible structure and composition. In this study, we used lanthanide Tb3+ ions and small molecular ligands to design and assemble pH-sensitive MOF nanoparticles based on the intramolecular charge-transfer effect. This made-to-order MOF nanoparticle for H+ is highly specific and sensitive and can fluorescently indicate the pH value of strongly acidic solutions, via a preset mechanism, through the luminescence of Tb3+. The long luminescence lifetime of Tb3+ allows concomitant non-specific fluorescence to be eliminated by time-resolved fluorescence techniques, an advantage in sensing H+ in biological media with strong autofluorescence. Our method shows the great potential of MOF structures for designing and constructing sensitive sensing materials for specific analytes directly via the assembly of functional ions/ligands. Copyright © 2016 Elsevier B.V. All rights reserved.
Optofluidic refractometer using resonant optical tunneling effect.
Jian, A Q; Zhang, X M; Zhu, W M; Yu, M
2010-12-30
This paper presents the design and analysis of a liquid refractive index sensor that utilizes a unique physical mechanism, the resonant optical tunneling effect (ROTE). The sensor consists of two hemicylindrical prisms, two air gaps, and a microfluidic channel. All parts can be microfabricated using the optical resin NOA81. Theoretical study shows that this ROTE sensor has an extremely sharp transmission peak and achieves a sensitivity of 760 nm/refractive index unit (RIU) and a detectivity of 85,000 RIU^-1. Although the sensitivity is smaller than that of a typical surface plasmon resonance (SPR) sensor (3200 nm/RIU) and is comparable to that of a 95%-reflectivity Fabry-Pérot (FP) etalon (440 nm/RIU), the detectivity is 17,000 times larger than that of the SPR sensor and 85 times larger than that of the FP etalon. Such a ROTE sensor could potentially achieve an ultrahigh sensitivity of 10^-9 RIU, two orders of magnitude higher than the best results of current methods.
NASA Astrophysics Data System (ADS)
Pestana, Noah Benjamin
Accurate quantification of circulating cell populations is important in many areas of pre-clinical and clinical biomedical research, for example, in the study of cancer metastasis or the immune response following tissue and organ transplants. Normally this is done "ex vivo" by drawing and purifying a small volume of blood and then analyzing it with flow cytometry, hemocytometry or microfluidic devices, but the sensitivity of these techniques is poor and the process of handling samples has been shown to affect cell viability and behavior. More recently, "in vivo flow cytometry" (IVFC) techniques have been developed in which fluorescently-labeled cells flowing in a small blood vessel in the ear or retina are analyzed, but the sensitivity is generally poor due to the small sampling volume. To address this, our group recently developed a method known as "Diffuse Fluorescence Flow Cytometry" (DFFC) that allows detection and counting of rare circulating cells with diffuse photons, offering extremely high single-cell counting sensitivity. In this thesis, an improved DFFC prototype was designed and validated. The chief improvements were three-fold: i) improved optical collection efficiency, ii) improved detection electronics, and iii) a method to mitigate motion artifacts during in vivo measurements. In combination, these improvements yielded an overall instrument detection sensitivity better than 1 cell/mL in vivo, which is the most sensitive IVFC system reported to date. Second, the development and validation of a low-cost microfluidic device reader for analysis of ocular fluids is described. We demonstrate that this device has equivalent or better sensitivity and accuracy compared to a fluorescence microscope, at an order-of-magnitude lower cost and with simplified operation. Future improvements to both instruments are also discussed.
2018-01-01
Mathematical models simulating different representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model. PMID:29518121
Graphene-Reinforced Aluminum Matrix Composites: A Review of Synthesis Methods and Properties
NASA Astrophysics Data System (ADS)
Chen, Fei; Gupta, Nikhil; Behera, Rakesh K.; Rohatgi, Pradeep K.
2018-06-01
Graphene-reinforced aluminum (Gr-Al) matrix nanocomposites (NCs) have attracted strong interest from both research and industry in high-performance weight-sensitive applications. Due to the vastly different bonding characteristics of the Al matrix (metallic) and graphene (in-plane covalent + inter-plane van der Waals), the graphene phase has a general tendency to agglomerate and phase separate in the metal matrix, which is detrimental for the mechanical and chemical properties of the composite. Thus, synthesis of Gr-Al NCs is extremely challenging. This review summarizes the different methods available to synthesize Gr-Al NCs and the resulting properties achieved in these NCs. Understanding the effect of processing parameters on the realized properties opens up the possibility of tailoring the synthesis methods to achieve the desired properties for a given application.
Spectrophotometric determination of traces of boron in high purity silicon
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parashar, D.C.; Sarkar, A.K.; Singh, N.
1989-07-01
A reddish-brown complex is formed between boron and curcumin in a 1:1 mixture of concentrated sulfuric acid and glacial acetic acid. The colored complex is highly selective, stable for about 3 hours, and has its maximum absorbance at 545 nm. The sensitivity of the method is extremely high, with a detection limit of 3 parts per billion based on an absorbance value of 0.004. The interference of some of the important cations and anions relevant to silicon was studied, and it was found that a 100-fold excess of most of these cations and anions does not interfere in the determination of boron. The method was successfully employed for the determination of boron in silicon used in semiconductor devices. The results were verified by the standard addition method.
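The standard-addition verification mentioned at the end can be sketched as follows; the calibration slope and the "true" boron concentration are invented for illustration, not taken from the paper:

```python
import numpy as np

# Standard-addition sketch: known boron spikes are added to the sample,
# absorbance is measured, and the unknown concentration is read off as the
# negated x-intercept of the linear fit. All numbers are made up.
spikes_ppb = np.array([0.0, 5.0, 10.0, 15.0])        # added boron (ppb)
slope_true, c_unknown = 0.00133, 4.0                 # hidden "truth"
absorbance = slope_true * (spikes_ppb + c_unknown)   # ideal linear response

m, b = np.polyfit(spikes_ppb, absorbance, 1)         # fit A = m*spike + b
estimated = b / m                                    # = -x-intercept
print(round(estimated, 2))  # recovers the 4.0 ppb "unknown"
```

Standard addition sidesteps matrix effects because the calibration is built inside the sample itself, which is why it serves well as an independent check on the curcumin method.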
Graphene-Reinforced Aluminum Matrix Composites: A Review of Synthesis Methods and Properties
NASA Astrophysics Data System (ADS)
Chen, Fei; Gupta, Nikhil; Behera, Rakesh K.; Rohatgi, Pradeep K.
2018-03-01
Graphene-reinforced aluminum (Gr-Al) matrix nanocomposites (NCs) have attracted strong interest from both research and industry in high-performance weight-sensitive applications. Due to the vastly different bonding characteristics of the Al matrix (metallic) and graphene (in-plane covalent + inter-plane van der Waals), the graphene phase has a general tendency to agglomerate and phase separate in the metal matrix, which is detrimental for the mechanical and chemical properties of the composite. Thus, synthesis of Gr-Al NCs is extremely challenging. This review summarizes the different methods available to synthesize Gr-Al NCs and the resulting properties achieved in these NCs. Understanding the effect of processing parameters on the realized properties opens up the possibility of tailoring the synthesis methods to achieve the desired properties for a given application.
Sánchez-Pérez, J F; Marín, F; Morales, J L; Cánovas, M; Alhama, F
2018-01-01
Mathematical models simulating different representative engineering problems (atomic dry friction, moving-front problems, and elastic and solid mechanics) are presented in the form of sets of non-linear differential equations, coupled or uncoupled. For different values of the parameters that influence the solution, the problems are numerically solved by the network method, which provides all the variables of the problems. Although the models are extremely sensitive to these parameters, no assumptions are made regarding linearization of the variables. The design of the models, which run on standard electrical circuit simulation software, is explained in detail. The network model results are compared with common numerical methods or experimental data published in the scientific literature to show the reliability of the model.
Fabrication of sub-12 nm thick silicon nanowires by processing scanning probe lithography masks
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyoung Ryu, Yu; Garcia, Ricardo, E-mail: r.garcia@csic.es; Aitor Postigo, Pablo
2014-06-02
Silicon nanowires are key elements for fabricating very sensitive mechanical and electronic devices. We provide a method to fabricate silicon nanowires less than 12 nm thick by combining oxidation scanning probe lithography and anisotropic dry etching. Extremely thin oxide masks (0.3-1.1 nm) are transferred into nanowires 2-12 nm in thickness. The width ratio between the mask and the silicon nanowire is close to one, which implies that the nanowire width is controlled by the feature size of the nanolithography. This method enables the fabrication of very small single silicon nanowires with cross-sections below 100 nm^2. Those values are the smallest obtained with a top-down lithography method.
Absorption Coefficient of a Semiconductor Thin Film from Photoluminescence
NASA Astrophysics Data System (ADS)
Rey, G.; Spindler, C.; Babbe, F.; Rachad, W.; Siebentritt, S.; Nuys, M.; Carius, R.; Li, S.; Platzer-Björkman, C.
2018-06-01
The photoluminescence (PL) of semiconductors can be used to determine their absorption coefficient (α ) using Planck's generalized law. The standard method, suitable only for self-supported thick samples, like wafers, is extended to multilayer thin films by means of the transfer-matrix method to include the effect of the substrate and optional front layers. α values measured on various thin-film solar-cell absorbers by both PL and photothermal deflection spectroscopy (PDS) show good agreement. PL measurements are extremely sensitive to the semiconductor absorption and allow us to advantageously circumvent parasitic absorption from the substrate; thus, α can be accurately determined down to very low values, allowing us to investigate deep band tails with a higher dynamic range than in any other method, including spectrophotometry and PDS.
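A hedged sketch of the underlying inversion: under Planck's generalized law in the Boltzmann limit, with single-pass absorptivity and reflections, interference, and substrate effects ignored (the paper's transfer-matrix treatment handles those), the PL spectrum can be divided by the Planck factor to recover the absorptivity and hence α. The quasi-Fermi-level splitting, temperature, thickness, and absorption-edge shape below are all illustrative assumptions.

```python
import numpy as np

kT = 0.025                      # thermal energy (eV), ~290 K
d = 2e-4                        # film thickness (cm), so alpha is in cm^-1
delta_mu = 1.0                  # quasi-Fermi-level splitting (eV), assumed known

E = np.linspace(1.1, 1.4, 4)    # photon energies (eV)
alpha_true = 1e4 / (1 + np.exp(-(E - 1.2) / 0.01))   # toy absorption edge

# Forward model: generalized Planck law (Boltzmann limit) with
# single-pass absorptivity a = 1 - exp(-alpha * d).
a_true = 1 - np.exp(-alpha_true * d)
pl = a_true * E**2 * np.exp(-(E - delta_mu) / kT)

# Inversion: divide out the Planck factor, then invert the absorptivity.
a_rec = pl / (E**2 * np.exp(-(E - delta_mu) / kT))
alpha_rec = -np.log(1 - a_rec) / d
print(np.allclose(alpha_rec, alpha_true))  # True
```

In practice the inversion requires an absolute (or carefully normalized) PL calibration and knowledge of temperature, which is where the comparison against PDS in the paper becomes the relevant check.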
Quantifying the relationship between extreme air pollution events and extreme weather events
NASA Astrophysics Data System (ADS)
Zhang, Henian; Wang, Yuhang; Park, Tae-Won; Deng, Yi
2017-05-01
Extreme weather events can strongly affect surface air quality, which has become a major environmental factor affecting human health. Here, we examined the relationship between extreme ozone and PM2.5 (particulate matter with an aerodynamic diameter less than 2.5 μm) events and representative meteorological parameters such as daily maximum temperature (Tmax), minimum relative humidity (RHmin), and minimum wind speed (Vmin), using location-specific 95th or 5th percentile thresholds derived from historical reanalysis data (30 years for ozone and 10 years for PM2.5). We found that ozone and PM2.5 extremes were decreasing over the years, reflecting EPA's tightened standards and efforts to reduce the corresponding precursor emissions. Annual ozone and PM2.5 extreme days were highly correlated with Tmax and RHmin, especially in the eastern U.S. They were positively (negatively) correlated with Vmin in urban (rural and suburban) stations. The overlap ratios of ozone extreme days with Tmax extremes were fairly constant, about 32%, and tended to be high in fall and low in winter. Ozone extreme days were most sensitive to Tmax, then RHmin, and least sensitive to Vmin. The majority of ozone extremes occurred when Tmax was between 300 K and 320 K, RHmin was less than 40%, and Vmin was less than 3 m/s. The number of annual extreme PM2.5 days was highly positively correlated with the number of extreme RHmin and Tmax days, with the PM2.5/RHmin correlation highest in urban and suburban regions and the PM2.5/Tmax correlation highest in rural areas. Tmax has more impact on PM2.5 extremes over the eastern U.S. Extreme PM2.5 days were more likely to occur at low RH conditions in the central and southeastern U.S., especially during spring, and at high RH conditions in the northern U.S. and the Great Plains. Most extreme PM2.5 events occurred when Tmax was between 300 K and 320 K and RHmin was between 10% and 50%.
Extreme PM2.5 days usually occurred when Vmin was under 2 m/s. However, during spring in the Southeast and fall in the Northwest, high winds were found to accompany extreme PM2.5 days, likely reflecting the impact of fire emissions.
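The percentile-threshold definition of an extreme day used above can be sketched directly; the synthetic ozone record below stands in for the 30-year reanalysis data:

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 "years" of daily ozone (arbitrary units). As in the study, the
# extreme-event threshold is the location-specific 95th percentile of the
# historical record (a 5th percentile would be used for low extremes).
daily_ozone = rng.gamma(shape=4.0, scale=10.0, size=30 * 365)

threshold = np.percentile(daily_ozone, 95)
extreme_days = daily_ozone > threshold

# By construction, ~5% of historical days exceed a 95th-percentile threshold.
print(round(float(extreme_days.mean()), 3))
```

Counting such exceedance days per year, and their overlap with Tmax/RHmin/Vmin exceedance days, yields the annual extreme-day and overlap-ratio statistics the abstract reports.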
Peripheral Quantitative CT (pQCT) Using a Dedicated Extremity Cone-Beam CT Scanner
Muhit, A. A.; Arora, S.; Ogawa, M.; Ding, Y.; Zbijewski, W.; Stayman, J. W.; Thawait, G.; Packard, N.; Senn, R.; Yang, D.; Yorkston, J.; Bingham, C.O.; Means, K.; Carrino, J. A.; Siewerdsen, J. H.
2014-01-01
Purpose We describe the initial assessment of the peripheral quantitative CT (pQCT) imaging capabilities of a cone-beam CT (CBCT) scanner dedicated to musculoskeletal extremity imaging. The aim is to accurately measure and quantify bone and joint morphology using information automatically acquired with each CBCT scan, thereby reducing the need for a separate pQCT exam. Methods A prototype CBCT scanner providing isotropic, sub-millimeter spatial resolution and soft-tissue contrast resolution comparable or superior to standard multi-detector CT (MDCT) has been developed for extremity imaging, including the capability for weight-bearing exams and multi-mode (radiography, fluoroscopy, and volumetric) imaging. Assessment of pQCT performance included measurement of bone mineral density (BMD), morphometric parameters of subchondral bone architecture, and joint space analysis. Measurements employed phantoms, cadavers, and patients from an ongoing pilot study imaged with the CBCT prototype (at various acquisition, calibration, and reconstruction techniques) in comparison to MDCT (using pQCT protocols for analysis of BMD) and micro-CT (for analysis of subchondral morphometry). Results The CBCT extremity scanner yielded BMD measurement within ±2–3% error in both phantom studies and cadaver extremity specimens. Subchondral bone architecture (bone volume fraction, trabecular thickness, degree of anisotropy, and structure model index) exhibited good correlation with gold standard micro-CT (error ~5%), surpassing the conventional limitations of spatial resolution in clinical MDCT scanners. Joint space analysis demonstrated the potential for sensitive 3D joint space mapping beyond that of qualitative radiographic scores in application to non-weight-bearing versus weight-bearing lower extremities and assessment of phalangeal joint space integrity in the upper extremities. 
Conclusion The CBCT extremity scanner demonstrated promising initial results in accurate pQCT analysis from images acquired with each CBCT scan. Future studies will include improved x-ray scatter correction and image reconstruction techniques to further improve accuracy and to correlate pQCT metrics with known pathology. PMID:25076823
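Quantitative BMD measurement of this kind typically rests on a phantom calibration mapping reconstructed voxel values to equivalent mineral density. A generic linear-calibration sketch follows; all insert densities and voxel values are invented, not the scanner's actual calibration:

```python
import numpy as np

# Phantom-based BMD calibration sketch: inserts of known hydroxyapatite
# density give a linear map from reconstructed voxel values to mg/cm^3.
phantom_density = np.array([0.0, 100.0, 400.0, 800.0])   # mg/cm^3 (assumed)
phantom_voxels  = np.array([10.0, 95.0, 350.0, 690.0])   # scanner units (made up)

m, b = np.polyfit(phantom_voxels, phantom_density, 1)    # least-squares line

measured_voxel = 300.0           # mean voxel value in a bone region of interest
bmd_est = m * measured_voxel + b
print(round(bmd_est, 1))         # estimated BMD of the sample (mg/cm^3)
```

Errors in this calibration line propagate directly into BMD, which is one reason the reported ±2-3% accuracy depends on the acquisition and calibration technique used.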
Tracking rural-to-urban migration in China: Lessons from the 2005 inter-census population survey.
Ebenstein, Avraham; Zhao, Yaohui
2015-01-01
We examined migration in China using the 2005 inter-census population survey, in which migrants were registered at both their place of original (hukou) residence and at their destination. We find evidence that the estimated number of internal migrants in China is extremely sensitive to the enumeration method. We estimate that the traditional destination-based survey method fails to account for more than a third of migrants found using comparable origin-based methods. The 'missing' migrants are disproportionately young, male, and holders of rural hukou. We find that origin-based methods are more effective at capturing migrants who travel short distances for short periods, whereas destination-based methods are more effective when entire households have migrated and no remaining family members are located at the hukou location. We conclude with a set of policy recommendations for the design of population surveys in countries with large migrant populations.
Extreme Sea Conditions in Shallow Water: Estimation based on in-situ measurements
NASA Astrophysics Data System (ADS)
Le Crom, Izan; Saulnier, Jean-Baptiste
2013-04-01
The design of marine renewable energy devices and components is based, among other factors, on the assessment of the extreme environmental conditions (winds, currents, waves, and water level) that must be combined in order to evaluate the maximal loads on a floating or fixed structure and on the anchoring system over a given return period. Measuring devices are generally deployed at sea over relatively short durations (a few months to a few years), typically when describing water free-surface elevation, so extrapolation methods based on hindcast data (and therefore on wave simulation models) have to be used. How to combine the action of the different loads (winds and waves, for instance) in a realistic way, and which correlation of return periods should be used, are highly topical issues. However, the assessment of the extreme condition itself remains a crucial, sensitive, and not fully solved task. Above all in shallow water, the extreme wave height, Hmax, is the most significant contribution in the dimensioning of marine renewable energy devices. As a case study, existing methodologies for deep water have been applied to SEMREV, the French marine energy test site. The interest of this study goes beyond the deployment of SEMREV's wave energy converters and floating wind turbines, as it could also be extended to the Banc de Guérande offshore wind farm planned close by and, more generally, to pipes and communication cables, for which this is a recurring issue. The paper first presents the existing measurements (wave and wind on site) and the prediction chain developed via wave models and the extrapolation methods applied to hindcast data, and then tries to formulate recommendations for improving this assessment in shallow water.
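One common extrapolation route for Hmax, described only generically in the abstract, is an extreme-value fit to annual maxima. A sketch using a generalized extreme value (GEV) distribution on synthetic data follows; the distribution family, parameters, and record length are assumptions, not the site's actual analysis:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(2)

# Synthetic annual-maximum wave heights (m) standing in for hindcast output;
# a real study would use site measurements or wave-model hindcasts.
annual_hmax = genextreme.rvs(c=-0.1, loc=4.0, scale=0.8, size=40,
                             random_state=rng)

# Fit a GEV to the annual maxima and read off the 50-year return level,
# i.e. the height exceeded with probability 1/50 in any given year.
c, loc, scale = genextreme.fit(annual_hmax)
h50 = genextreme.ppf(1 - 1 / 50, c, loc, scale)
print(round(h50, 2))  # 50-year design wave height estimate (m)
```

In shallow water this block-maxima approach needs care, since depth-limited breaking caps Hmax and can invalidate a fit made on unbounded deep-water statistics, which is precisely the issue the paper raises.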
[The use of a detector of the extremely weak radiation as a variometer of gravitation field].
Gorshkov, E S; Bondarenko, E G; Shapovalov, S N; Sokolovskiĭ, V V; Troshichev, O A
2001-01-01
It was shown that a detector of extremely weak radiation with selectively increased sensitivity to the non-electromagnetic components of the spectrum of active physical fields, including the gravitational component, can be used as the basis for constructing a gravitational-field variometer of a new type.
USDA-ARS?s Scientific Manuscript database
Rice (Oryza sativa L.) in the Yangtze River Valley (YRV) suffered serious yield losses in 2003 when an extreme heatwave (HW) hampered the rice reproductive growth phase (RGP). Climate change has induced extreme and asymmetrical fluctuations in temperature during the heat-sensitive stage of the rice growth cycle, i.e., RG...
Skiöld, Sara; Azimzadeh, Omid; Merl-Pham, Juliane; Naslund, Ingemar; Wersall, Peter; Lidbrink, Elisabet; Tapio, Soile; Harms-Ringdahl, Mats; Haghdoost, Siamak
2015-06-01
Radiation therapy is a cornerstone of modern cancer treatment. Understanding the mechanisms behind normal tissue sensitivity is essential in order to minimize adverse side effects while still preventing local cancer recurrence. The aim of this study was to identify biomarkers of radiation sensitivity to enable personalized cancer treatment. To investigate the mechanisms behind radiation sensitivity, a pilot study was conducted in which eight radiation-sensitive and nine normo-sensitive patients were selected from a cohort of 2914 breast cancer patients, based on acute tissue reactions after radiation therapy. Whole blood was sampled and irradiated in vitro with 0, 1, or 150 mGy followed by 3 h incubation at 37°C. The leukocytes of the two groups were isolated and pooled, and protein expression profiles were investigated using the isotope-coded protein labeling (ICPL) method. First, leukocytes from the in vitro irradiated whole blood of normo-sensitive and extremely sensitive patients were compared to the non-irradiated controls. To validate this first study, a second ICPL analysis comparing only the non-irradiated samples was conducted. Both approaches showed unique proteomic signatures separating the two groups at the basal level and after doses of 1 and 150 mGy. Pathway analyses of both proteomic approaches suggest that oxidative stress response, coagulation properties, and acute phase response are hallmarks of radiation sensitivity, supporting our previous study on oxidative stress response. This investigation provides unique characteristics of radiation sensitivity essential for individualized radiation therapy. Copyright © 2015 Elsevier B.V. All rights reserved.
Extreme warming challenges sentinel status of kelp forests as indicators of climate change.
Reed, Daniel; Washburn, Libe; Rassweiler, Andrew; Miller, Robert; Bell, Tom; Harrer, Shannon
2016-12-13
The desire to use sentinel species as early warning indicators of impending climate change effects on entire ecosystems is attractive, but we need to verify that such approaches have sound biological foundations. A recent large-scale warming event in the North Pacific Ocean of unprecedented magnitude and duration allowed us to evaluate the sentinel status of giant kelp, a coastal foundation species that thrives in cold, nutrient-rich waters and is considered sensitive to warming. Here, we show that giant kelp and the majority of species that associate with it did not presage ecosystem effects of extreme warming off southern California despite giant kelp's expected vulnerability. Our results challenge the general perception that kelp-dominated systems are highly vulnerable to extreme warming events and expose the more general risk of relying on supposed sentinel species that are assumed to be very sensitive to climate change.
De Backer, A; Martinez, G T; MacArthur, K E; Jones, L; Béché, A; Nellist, P D; Van Aert, S
2015-04-01
Quantitative annular dark field scanning transmission electron microscopy (ADF STEM) has become a powerful technique for characterising nanoparticles on an atomic scale. Because of their limited size and beam sensitivity, the atomic structure of such particles may be extremely challenging to determine, so keeping the incoming electron dose to a minimum is important. However, this may reduce the reliability of quantitative ADF STEM, which is demonstrated here for nanoparticle atom-counting. Based on experimental ADF STEM images of a real industrial catalyst, we discuss the limits for counting the number of atoms in a projected atomic column with single-atom sensitivity. We diagnose these limits by combining a thorough statistical method with detailed image simulations. Copyright © 2014 Elsevier B.V. All rights reserved.
Ability of Ultrasonography in Detection of Different Extremity Bone Fractures; a Case Series Study.
Bozorgi, Farzad; Shayesteh Azar, Massoud; Montazer, Seyed Hossein; Chabra, Aroona; Heidari, Seyed Farshad; Khalilian, Alireza
2017-01-01
Despite radiography being the gold standard in the evaluation of orthopedic injuries, bedside ultrasonography has several potential advantages, such as avoiding exposure to ionizing radiation, availability in pre-hospital settings, and wide accessibility at the bedside. The aim of the present study is to evaluate the diagnostic accuracy of ultrasonography in detection of extremity bone fractures. This case series study was prospectively conducted on multiple blunt trauma patients who were 18 years old or older, had stable hemodynamics, a Glasgow Coma Scale score of 15, and signs or symptoms of a possible extremity bone fracture. After initial assessment, ultrasonography of the suspected bones was performed by a trained emergency medicine resident, and the prevalence of true positive and false negative findings was calculated against plain radiography. 108 patients with a mean age of 44.6 ± 20.4 years were studied (67.6% male). Analysis was done on 158 fracture sites confirmed with plain radiography. 91 (57.6%) cases were suspected to have upper extremity fracture(s) and 67 (42.4%) lower ones. The most frequent sites of injury were the forearm (36.7%) in the upper limbs and the leg (27.8%) in the lower limbs. True positive and false negative counts for fractures detected by ultrasonography were 59 (64.8%) and 32 (35.2%) for upper and 49 (73.1%) and 18 (26.9%) for lower extremities, respectively. In addition, true positive and false negative counts for intra-articular fractures were 24 (48%) and 26 (52%), respectively. The present study shows moderate sensitivity (68.3%) of ultrasonography in the detection of different extremity bone fractures. Ultrasonography showed the best sensitivity in detection of femur (100%) and humerus (76.2%) fractures. It had low sensitivity in the detection of intra-articular fractures.
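The sensitivity figures quoted in this abstract follow directly from the reported true-positive and false-negative counts; a minimal sketch reproducing them (counts taken from the study):

```python
def sensitivity(tp, fn):
    """Sensitivity (recall) = TP / (TP + FN)."""
    return tp / (tp + fn)

# True-positive / false-negative counts reported in the study
upper = sensitivity(59, 32)              # upper-extremity fractures
lower = sensitivity(49, 18)              # lower-extremity fractures
overall = sensitivity(59 + 49, 32 + 18)  # all 158 confirmed fracture sites

print(f"upper {upper:.1%}, lower {lower:.1%}, overall {overall:.1%}")
```

The upper and lower values reproduce the reported 64.8% and 73.1%; the pooled value (108/158 ≈ 68.4%) matches the reported overall sensitivity of 68.3% up to rounding.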
Chen, Lili; Hao, Yaru
2017-01-01
Preterm birth (PTB) is the leading cause of perinatal mortality and long-term morbidity, which results in significant health and economic problems. The early detection of PTB has great significance for its prevention. The electrohysterogram (EHG), related to uterine contraction, is a noninvasive, real-time, and automatic novel technology which can be used to detect, diagnose, or predict PTB. This paper presents a method for feature extraction and classification of EHG between pregnancy and labour groups, based on the Hilbert-Huang transform (HHT) and extreme learning machine (ELM). For each sample, each channel was decomposed into a set of intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). Then, the Hilbert transform was applied to each IMF to obtain the analytic function. The maximum amplitude of the analytic function was extracted as the feature. The identification model was constructed based on ELM. Experimental results reveal that the best classification performance of the proposed method reaches an accuracy of 88.00%, a sensitivity of 91.30%, and a specificity of 85.19%. The area under the receiver operating characteristic (ROC) curve is 0.88. These results indicate that the method developed in this work could be effective in classifying EHG between pregnancy and labour groups.
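The scalar feature described above (maximum amplitude of the analytic function) can be sketched as follows. For brevity this illustrative version computes the Hilbert envelope of a raw test signal directly, via the standard FFT-based analytic-signal construction, rather than of each IMF; the EMD decomposition step of the published method is omitted here:

```python
import numpy as np

def hilbert_envelope(x):
    """Amplitude envelope of x via the FFT-based analytic signal."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)          # spectral weights: keep DC, double positive freqs
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

def max_envelope_amplitude(x):
    """The HHT-style scalar feature: maximum amplitude of the analytic signal."""
    return hilbert_envelope(x).max()

# Illustrative "channel": a 3 Hz tone of amplitude 2 sampled for 10 s
t = np.linspace(0.0, 10.0, 1000, endpoint=False)
feature = max_envelope_amplitude(2.0 * np.sin(2 * np.pi * 3.0 * t))
print(feature)  # ~2.0, the amplitude of the tone
```

In the full method, one such feature would be extracted per IMF per channel and the resulting vector fed to the ELM classifier.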
Designing ecological climate change impact assessments to reflect key climatic drivers
Sofaer, Helen R.; Barsugli, Joseph J.; Jarnevich, Catherine S.; Abatzoglou, John T.; Talbert, Marian; Miller, Brian W.; Morisette, Jeffrey T.
2017-01-01
Identifying the climatic drivers of an ecological system is a key step in assessing its vulnerability to climate change. The climatic dimensions to which a species or system is most sensitive – such as means or extremes – can guide methodological decisions for projections of ecological impacts and vulnerabilities. However, scientific workflows for combining climate projections with ecological models have received little explicit attention. We review Global Climate Model (GCM) performance along different dimensions of change and compare frameworks for integrating GCM output into ecological models. In systems sensitive to climatological means, it is straightforward to base ecological impact assessments on mean projected changes from several GCMs. Ecological systems sensitive to climatic extremes may benefit from what we term the ‘model space’ approach: a comparison of ecological projections based on simulated climate from historical and future time periods. This approach leverages the experimental framework used in climate modeling, in which historical climate simulations serve as controls for future projections. Moreover, it can capture projected changes in the intensity and frequency of climatic extremes, rather than assuming that future means will determine future extremes. Given the recent emphasis on the ecological impacts of climatic extremes, the strategies we describe will be applicable across species and systems. We also highlight practical considerations for the selection of climate models and data products, emphasizing that the spatial resolution of the climate change signal is generally coarser than the grid cell size of downscaled climate model output. Our review illustrates how an understanding of how climate model outputs are derived and downscaled can improve the selection and application of climatic data used in ecological modeling.
Gardner, Bethany T.; Dale, Ann Marie; Buckner-Petty, Skye; Rachford, Robert; Strickland, Jaime; Kaskutas, Vicki; Evanoff, Bradley
2017-01-01
Purpose Few studies have explored measures of function across a range of health outcomes in a general working population. Using four upper extremity (UE) case definitions from the scientific literature, we described the performance of functional measures of work, activities of daily living, and overall health. Methods A sample of 573 workers completed several functional measures: modified recall versions of the QuickDASH, Levine Functional Status Scale (FSS), DASH Work module (DASH-W), and standard SF-8 physical component score. We determined case status based on four UE case definitions: 1) UE symptoms, 2) UE musculoskeletal disorders (MSD), 3) carpal tunnel syndrome (CTS), and 4) work limitations due to UE symptoms. We calculated effect sizes for each case definition to show the magnitude of the differences that were detected between cases and non-cases for each case definition on each functional measure. Sensitivity and specificity analyses showed how well each measure identified functional impairments across the UE case definitions. Results All measures discriminated between cases and non-cases for each case definition with the largest effect sizes for CTS and work limitations, particularly for the modified FSS and DASH-W measures. Specificity was high and sensitivity was low for outcomes of UE symptoms and UE MSD in all measures. Sensitivity was high for CTS and work limitations. Conclusions Functional measures developed specifically for use in clinical, treatment-seeking populations may identify mild levels of impairment in relatively healthy, active working populations, but measures performed better among workers with CTS or those reporting limitations at work. PMID:26091980
Allowable SEM noise for unbiased LER measurement
NASA Astrophysics Data System (ADS)
Papavieros, George; Constantoudis, Vassilios; Gogolides, Evangelos
2018-03-01
Recently, a novel method for the calculation of unbiased line edge roughness (LER) based on power spectral density (PSD) analysis was proposed. In this paper, an alternative method utilizing the height-height correlation function (HHCF) of edges is first discussed and investigated. The HHCF-based method enables the unbiased determination of the whole triplet of LER parameters: besides the rms value, the correlation length and the roughness exponent. The key to both methods is the sensitivity of the PSD and HHCF to noise at high frequencies and short distances, respectively. Secondly, we elaborate a testbed of synthesized SEM images with controlled LER and noise to justify the effectiveness of the proposed unbiased methods. Our main objective is to find the boundaries, with respect to noise levels and roughness characteristics, within which the methods remain reliable, i.e., the maximum amount of noise for which the output results match the known, controlled inputs. At the same time, we also establish the extremes of the roughness parameters for which the methods retain their accuracy.
ERIC Educational Resources Information Center
Shavinina, Larisa V.
1999-01-01
Examination of the child prodigy phenomenon suggests it is a result of extremely accelerated mental development during sensitive periods that leads to the rapid growth of a child's cognitive resources and their construction into specific exceptional achievements. (Author/DB)
NASA Astrophysics Data System (ADS)
Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro
2017-01-01
The development of lithography processes with sub-10 nm resolution is challenging, and stochastic phenomena such as line width roughness (LWR) are significant problems. In this study, the feasibility of sub-10 nm fabrication using chemically amplified extreme ultraviolet resists with photodecomposable quenchers was investigated from the viewpoint of LWR suppression. The relationship between sensitizer concentration (the sum of the acid generator and photodecomposable quencher concentrations) and resist performance was clarified using a simulation based on the sensitization and reaction mechanisms of chemically amplified resists. For a total sensitizer concentration of 0.5 nm⁻³ and an effective reaction radius for deprotection of 0.1 nm, the reachable half-pitch while maintaining 10% critical dimension (CD) LWR was 11 nm; for 20% CD LWR it was 7 nm. An increase in the effective reaction radius is required to realize sub-10 nm fabrication with 10% CD LWR.
High-resolution crystal spectrometer for the 10-60 Å extreme ultraviolet region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beiersdorfer, P.; Brown, G.V.; Goddard, R.
2004-10-01
A vacuum crystal spectrometer with nominal resolving power approaching 1000 is described for measuring emission lines with wavelengths in the extreme ultraviolet region up to 60 Å. The instrument utilizes a flat octadecyl hydrogen maleate crystal and a thin-window 1D position-sensitive gas proportional detector. This detector employs a 1-μm-thick 100×8 mm² aluminized polyimide window and operates at one atmosphere pressure. The spectrometer has been implemented on the Livermore electron beam ion traps. The performance of the instrument is illustrated in measurements of the newly discovered magnetic-field-sensitive line in Ar⁸⁺.
An overview of the extreme ultraviolet explorer and its scientific program
NASA Technical Reports Server (NTRS)
Malina, Roger F.; Finley, David S.; Jelinsky, Patrick; Vallerga, John; Bowyer, Stuart
1987-01-01
NASA's Extreme Ultraviolet Explorer (EUVE) will carry out an all-sky survey from 8 to 90 nm in four bandpasses; the limiting sensitivity will be 2 to 3 orders of magnitude fainter than the hot white dwarf HZ 43. A deep survey will also be carried out along the ecliptic, with a limiting sensitivity 1 to 2 orders of magnitude fainter than the all-sky survey in the bandpass from 8 to 50 nm. The payload also includes a spectrometer which will be used to observe the brighter sources found in the surveys with a spectral resolution of 1 to 2 Å.
Imaging of upper extremity stress fractures in the athlete.
Anderson, Mark W
2006-07-01
Although it is much less common than injuries in the lower extremities, an upper extremity stress injury can have a significant impact on an athlete. If an accurate and timely diagnosis is to be made, the clinician must have a high index of suspicion of a stress fracture in any athlete who is involved in a throwing, weightlifting, or upper extremity weight-bearing sport and presents with chronic pain in the upper extremity. Imaging should play an integral role in the work-up of these patients; if initial radiographs are unrevealing, further cross-sectional imaging should be strongly considered. Although a three-phase bone scan is highly sensitive in this regard, MRI has become the study of choice at most centers.
NASA Astrophysics Data System (ADS)
Ganesh, Shruthi Vatsyayani
With the advent of microfluidic technologies for molecular diagnostics, much emphasis has been placed on developing diagnostic tools for resource-poor regions in the form of extreme point-of-care devices. To ensure the commercial viability of such a device, there is a need to develop an accurate sample-to-answer system that is robust, portable, and isolated, yet highly sensitive and cost effective. This need has been a driving force for research involving the integration of different microsystems, such as droplet microfluidics and compact-disc (CD) microfluidics, along with sample preparation and detection modules, on a single platform. This work attempts to develop a proof-of-concept prototype of one such device using existing CD microfluidics tools to generate stable droplets for point-of-care (POC) diagnostics. Apart from using a fairly new technique for droplet generation and stabilization, the work aims to develop this method with a focus on diagnostics for rural healthcare. The motivation for this work is first described, with an emphasis on the current need for diagnostic testing in rural healthcare and the general guidelines prescribed by the WHO for such a sample-to-answer system. Furthermore, a background on CD and droplet microfluidics is presented to explain the merits and demerits of each system and the need for integrating the two. This phase of the thesis also covers the different methods employed and demonstrated to generate droplets on a spinning platform. An overview of detection platforms is also presented to highlight the challenges involved in building an extreme point-of-care device. In the third phase of the thesis, the general manufacturing techniques and materials used to accomplish this work are presented. Lastly, design trials for droplet generation are presented.
The shortcomings of these trials are addressed by investigating design modifications and agarose-based droplet generation to ensure a more robust sample processing method. This method is further characterized and compared with a non-agarose-based system, and the results are analyzed. In conclusion, future prospects of this work are discussed in relation to extreme POC applications.
Jaeschke, Roman; Stevens, Scott M.; Goodacre, Steven; Wells, Philip S.; Stevenson, Matthew D.; Kearon, Clive; Schunemann, Holger J.; Crowther, Mark; Pauker, Stephen G.; Makdissi, Regina; Guyatt, Gordon H.
2012-01-01
Background: Objective testing for DVT is crucial because clinical assessment alone is unreliable and the consequences of misdiagnosis are serious. This guideline focuses on the identification of optimal strategies for the diagnosis of DVT in ambulatory adults. Methods: The methods of this guideline follow those described in Methodology for the Development of Antithrombotic Therapy and Prevention of Thrombosis Guidelines: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Results: We suggest that clinical assessment of pretest probability of DVT, rather than performing the same tests in all patients, should guide the diagnostic process for a first lower extremity DVT (Grade 2B). In patients with a low pretest probability of first lower extremity DVT, we recommend initial testing with D-dimer or ultrasound (US) of the proximal veins over no diagnostic testing (Grade 1B), venography (Grade 1B), or whole-leg US (Grade 2B). In patients with moderate pretest probability, we recommend initial testing with a highly sensitive D-dimer, proximal compression US, or whole-leg US rather than no testing (Grade 1B) or venography (Grade 1B). In patients with a high pretest probability, we recommend proximal compression or whole-leg US over no testing (Grade 1B) or venography (Grade 1B). Conclusions: Favored strategies for diagnosis of first DVT combine use of pretest probability assessment, D-dimer, and US. There is lower-quality evidence available to guide diagnosis of recurrent DVT, upper extremity DVT, and DVT during pregnancy. PMID:22315267
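The guideline's pretest-probability logic can be made concrete with Bayes' rule in odds form: a test result scales the pretest odds by its likelihood ratio. The sensitivity and specificity values below are hypothetical placeholders chosen for illustration, not figures from the guideline:

```python
def post_test_probability(pretest_p, sens, spec, positive):
    """Update a pretest probability with a test result via likelihood ratios."""
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec
    lr = sens / (1.0 - spec) if positive else (1.0 - sens) / spec
    odds = pretest_p / (1.0 - pretest_p) * lr
    return odds / (1.0 + odds)

# Hypothetical numbers: 5% pretest probability of DVT and a highly sensitive
# D-dimer assay (sensitivity 0.96, specificity 0.40) -- placeholders only
p_neg = post_test_probability(0.05, 0.96, 0.40, positive=False)
print(f"post-test probability after a negative D-dimer: {p_neg:.2%}")
```

This illustrates why a negative result from a highly sensitive test can safely rule out DVT in low-pretest-probability patients, while high-pretest-probability patients go straight to imaging.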
NASA Astrophysics Data System (ADS)
Sandvik, M. I.; Sorteberg, A.
2013-12-01
Studies (RegClim, 2005; Caroletti & Barstad, 2010; Bengtsson et al., 2009; Trenberth, 1999; Pall et al., 2007) indicate an increased risk of more frequent precipitation extremes in a warming world, which may result in more frequent flooding, avalanches and landslides. Thus, understanding how processes influence extreme precipitation events could lead to a better representation in models used in both research and weather forecasting. The Weather Research and Forecasting (WRF) model was applied to 26 extreme precipitation events on the west coast of Norway between 1980 and 2011. The goal of the study was to see how sensitive the intensity and distribution of precipitation in these case studies were to a warmer/colder Atlantic Ocean, with a uniform change of ±2°C. To ensure that the large-scale system remained the same when the sea surface temperature (SST) was changed, spectral nudging was introduced. To avoid the need for a convective scheme, and the uncertainties it brings, a nested domain with a 2 km grid resolution was used over southern Norway. WRF generally underestimated the daily precipitation. The case studies were divided into two clusters, depending on the wind direction towards the coast, to search for patterns within each cluster. Using the ensemble mean, the percentage changes between the control run and the two sensitivity runs were found to differ between the two clusters.
Repeated Solid-state Dewetting of Thin Gold Films for Nanogap-rich Plasmonic Nanoislands.
Kang, Minhee; Park, Sang-Gil; Jeong, Ki-Hun
2015-10-15
This work reports a facile wafer-level fabrication method for nanogap-rich gold nanoislands for highly sensitive surface-enhanced Raman scattering (SERS) based on repeated solid-state thermal dewetting of thin gold films. The method provides enlarged gold nanoislands with small gap spacing, which increase the number of electromagnetic hotspots and thus enhance the extinction intensity as well as the tunability of the plasmon resonance wavelength. The plasmonic nanoislands from repeated dewetting increase the SERS enhancement factor by over one order of magnitude compared with those from a single-step dewetting process, and they allow ultrasensitive SERS detection of a neurotransmitter with extremely low Raman activity. This simple method provides many opportunities for engineering plasmonics for ultrasensitive detection and highly efficient photon collection.
Electrochemical Biosensors for Rapid Detection of Foodborne Salmonella: A Critical Overview
Cinti, Stefano; Volpe, Giulia; Piermarini, Silvia; Delibato, Elisabetta; Palleschi, Giuseppe
2017-01-01
Salmonella has represented the most common and primary cause of food poisoning in many countries for over 100 years. Its detection is still primarily based on traditional microbiological culture methods, which are labor-intensive, extremely time consuming, and not suitable for testing a large number of samples. Accordingly, great efforts to develop rapid, sensitive and specific methods that are easy to use and suitable for multi-sample analysis have been and continue to be made. Biosensor-based technology has all the potential to meet these requirements. In this paper, we review the features of the electrochemical immunosensors, genosensors, aptasensors and phagosensors developed in the last five years for Salmonella detection, focusing on the critical aspects of their application in food analysis. PMID:28820458
Miniature traveling wave tube and method of making
NASA Technical Reports Server (NTRS)
Kosmahl, Henry G. (Inventor)
1989-01-01
It is an object of the invention to provide a miniature traveling wave tube which will have most of the advantages of solid state circuitry but with higher efficiency and without being highly sensitive to temperature and various types of electromagnetic radiation and subatomic particles as are solid state devices. The traveling wave tube which is about 2.5 cm in length includes a slow wave circuit (SWS) comprising apertured fins with a top cover which is insulated from the fins by strips or rungs of electrically insulating, dielectric material. Another object of the invention is to construct a SWS of extremely small size by employing various grooving or etching methods and by providing insulating strips or rungs by various deposition and masking techniques.
Micromechanical potentiometric sensors
Thundat, Thomas G.
2000-01-01
A microcantilever potentiometric sensor for detecting and measuring physical and chemical parameters in a sample of media is described. The microcantilevered spring element includes at least one chemical coating on a coated region that accumulates a surface charge in response to hydrogen ions, redox potential, or ion concentrations in the sample of media being monitored. The accumulation of surface charge on one surface of the microcantilever, with a differing surface charge on the opposing surface, creates a mechanical stress and a deflection of the spring element. One of a multitude of deflection detection methods may include the use of a laser light source focused on the microcantilever, with a photo-sensitive detector receiving the reflected laser impulses. The microcantilevered spring element is approximately 1 to 100 μm long, approximately 1 to 50 μm wide, and approximately 0.3 to 3.0 μm thick. Deflections of the cantilever can be detected with an accuracy of 0.01 nanometers. The microcantilever apparatus and method of detection require only microliters of a sample to be placed on, or near, the spring element surface. The method is extremely sensitive to the detection of the parameters to be measured.
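The stress-to-deflection transduction described above is commonly modeled with a Stoney-type relation for surface-stress sensors. The formula and the silicon material constants below are standard textbook values and illustrative assumptions, not parameters from the patent:

```python
def stoney_tip_deflection(d_stress, length, thickness, youngs_modulus, poisson):
    """Tip deflection z = 3*(1 - nu)*L**2 * d_sigma / (E * t**2) of a cantilever
    under a differential surface stress d_sigma (N/m); SI units throughout."""
    return (3.0 * (1.0 - poisson) * length**2 * d_stress
            / (youngs_modulus * thickness**2))

# Illustrative silicon cantilever within the dimensions quoted in the abstract
z = stoney_tip_deflection(d_stress=5e-4,         # 0.5 mN/m surface-stress change
                          length=100e-6,         # 100 um long
                          thickness=1e-6,        # 1 um thick
                          youngs_modulus=170e9,  # ~silicon
                          poisson=0.27)
print(f"tip deflection: {z * 1e9:.3f} nm")  # well above the 0.01 nm resolution
```

Even a sub-mN/m surface-stress change thus produces a deflection of several hundredths of a nanometer, within the 0.01 nm detection accuracy claimed for the optical-lever readout.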
Johyama, Y; Yokota, K; Fujiki, Y; Takeshita, T; Morimoto, K
1999-10-01
Methyltetrahydrophthalic anhydride (MTHPA) stimulates the production of specific IgE antibodies which can cause occupational allergy even at extremely low levels of exposure (15-22 micrograms/m3). Safe use in industry demands control of the levels of exposure causing allergic diseases. Thus, the air monitoring of MTHPA is very important, and sensitive methods are required to measure low air concentrations or short-time peak exposures. This paper outlines the use of silica-gel tubes for sampling airborne MTHPA vapour, followed by analysis using gas chromatography with electron-capture detection. No breakthrough was observed at 113, 217, 673 and 830 micrograms/m3 (sampling volume 30, 60, 60 and 20 l, respectively; relative humidity 40-55%). Concentrations > 1.0 microgram/m3 could be quantified at 20-min sampling with a sampling rate of 1 l/min. The present method can also be applied to measurements of exposure to hexahydrophthalic and methylhexahydrophthalic anhydride. The risk of MTHPA exposure in two condenser plants was also assessed by determining MTHPA levels in air of the workplace. In conclusion, our method was found to be reliable and sensitive, and can be applied to the evaluation of MTHPA exposure.
Design of nuclease-based target recycling signal amplification in aptasensors.
Yan, Mengmeng; Bai, Wenhui; Zhu, Chao; Huang, Yafei; Yan, Jiao; Chen, Ailiang
2016-03-15
Compared with conventional antibody-based immunoassay methods, aptasensors based on nucleic acid aptamers have made at least two significant breakthroughs. One is that aptamers are more easily used for developing simple and rapid homogeneous detection methods ("sample in, signal out") without multi-step washing. The other is that aptamers are more easily employed for developing highly sensitive detection methods using various nucleic acid-based signal amplification approaches. As many substances playing regulatory roles in physiology or pathology exist at extremely low concentrations, and many chemical contaminants occur in trace amounts in food or the environment, aptasensors with signal amplification contribute greatly to the detection of such targets. Among the signal amplification approaches in highly sensitive aptasensors, nuclease-based target recycling signal amplification has recently become a research focus because it offers easy design, simple operation, and rapid reaction, and can be readily developed into homogeneous assays. In this review, we summarize recent advances in the development of various nuclease-based target recycling signal amplification schemes with the aim of providing a general guide for the design of aptamer-based ultrasensitive biosensing assays. Copyright © 2015 Elsevier B.V. All rights reserved.
Wong, Chin Lin; Lam, Ai-Leen; Smith, Maree T.; Ghassabian, Sussan
2016-01-01
The direct peptide reactivity assay (DPRA) is a validated method for in vitro assessment of the skin sensitization potential of chemicals. In the present work, we describe a peptide reactivity assay in 96-well plate format and systematically identify the optimal assay conditions for accurate and reproducible classification of chemicals with known sensitizing capacity. The aim of the research is to ensure that the analytical component of the peptide reactivity assay is robust, accurate, and reproducible in accordance with criteria used for the validation of bioanalytical methods. Analytical performance was evaluated using quality control samples (QCs; heptapeptides at low, medium, and high concentrations) and incubation of control chemicals (chemicals with known sensitization capacity: weak, moderate, strong, extreme, and non-sensitizers) with each of three synthetic heptapeptides, viz. Cor1-C420 (Ac-NKKCDLF), cysteine- (Ac-RFAACAA), and lysine- (Ac-RFAAKAA) containing heptapeptides. The optimal incubation temperature for all three heptapeptides was 25°C. Apparent heptapeptide depletion was affected by vial material composition. On incubation of test chemicals with Cor1-C420, peptide depletion was unchanged in polypropylene vials over 3 days of autosampler storage, but this was not the case for borosilicate glass vials. The cysteine-containing heptapeptide concentration was not stable by day 3 post-incubation in borosilicate glass vials. Although the lysine-containing heptapeptide concentration was unchanged in both polypropylene and borosilicate glass vials, the apparent extent of lysine-containing heptapeptide depletion by ethyl acrylate differed between polypropylene (24.7%) and glass (47.3%) vials. Additionally, the peptide-chemical complexes for Cor1-C420-cinnamaldehyde and cysteine-containing heptapeptide-2,4-dinitrochlorobenzene were partially reversible during 3 days of autosampler storage.
These observations further highlight the difficulty in adapting in vitro methods to high-throughput format for screening the skin sensitization potential of large numbers of chemicals whilst ensuring that the data produced are both accurate and reproducible. PMID:27014067
NASA Astrophysics Data System (ADS)
Alam, Md. Fazle; Laskar, Amaj Ahmed; Ahmed, Shahbaz; Shaida, Mohd. Azfar; Younus, Hina
2017-08-01
Melamine toxicity has recently attracted worldwide attention as it causes renal failure and death in humans and animals. Therefore, developing a simple, fast and sensitive method for the routine detection of melamine is the need of the hour. Herein, we have developed a selective colorimetric method for the detection of melamine in milk samples based upon the in-situ formation of silver nanoparticles (AgNPs) via tannic acid. The AgNPs thus formed were characterized by UV-visible spectrophotometry, transmission electron microscopy (TEM), zetasizer measurements and dynamic light scattering (DLS). The AgNPs were used to detect melamine under in vitro conditions and in raw milk spiked with melamine. Under optimal conditions, melamine could be selectively detected in vitro within the concentration range of 0.05-1.4 μM with a limit of detection (LOD) of 0.01 μM, which is lower than the strictest melamine safety requirement of 1 ppm. In spiked raw milk, the recovery percentage range was 99.5-106.5% for liquid milk and 98.5-105.5% for powdered milk. The present method shows extreme selectivity, with no significant interference from other substances such as urea, glucose, glycine and ascorbic acid. This assay method does not utilize organic cosolvents, enzymatic reactions, light-sensitive dye molecules or sophisticated instrumentation, thereby overcoming some of the limitations of other conventional methods.
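As a quick consistency check on the figures above, the 0.01 μM LOD can be converted to mass concentration and compared with the 1 ppm limit. Melamine's molar mass (~126.12 g/mol) is taken from standard reference data, not from the abstract.

```python
# Sanity check: convert the reported LOD (0.01 uM) to mg/L (~ppm for dilute
# aqueous solutions) and compare with the 1 ppm safety limit.
MOLAR_MASS = 126.12          # g/mol for melamine (C3H6N6), standard value
lod_molar = 0.01e-6          # mol/L (0.01 uM, the reported LOD)

lod_mg_per_L = lod_molar * MOLAR_MASS * 1000.0   # g/L -> mg/L
print(f"LOD = {lod_mg_per_L:.5f} mg/L, i.e. well below the 1 ppm limit")
```

The conversion confirms the abstract's claim: 0.01 μM corresponds to roughly 0.0013 ppm, almost three orders of magnitude below the strictest safety requirement.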
Early Lung Cancer Diagnosis by Biosensors
Zhang, Yuqian; Yang, Dongliang; Weng, Lixing; Wang, Lianhui
2013-01-01
Lung cancer poses an extreme threat to human health, and the mortality rate due to lung cancer has not decreased during the last decade. Prognosis or early diagnosis could help reduce the mortality rate. If microRNAs and tumor-associated antigens (TAAs), as well as the corresponding autoantibodies, can be detected prior to clinical diagnosis, the high sensitivity of biosensors makes early diagnosis and prognosis of cancer realizable. This review provides an overview of tumor-associated biomarker identification methods and the biosensor technology available today. Laboratory research utilizing biosensors for early lung cancer diagnosis is highlighted. PMID:23892596
Free-stream disturbance, continuous Eigenfunctions, boundary-layer instability and transition
NASA Technical Reports Server (NTRS)
Grosch, C. E.
1980-01-01
A rational foundation is presented for the application of the continuous eigenfunctions of linear shear flows to transition prediction, and an explicit method is given for carrying out the necessary calculations. The expansions used are shown to be complete. Sample calculations show that a typical boundary layer is very sensitive to vorticity disturbances in the inner boundary layer, near the critical layer. Vorticity disturbances three or four boundary-layer thicknesses above the boundary are nearly uncoupled from the boundary layer, in that the amplitudes of the discrete Tollmien-Schlichting waves are an extremely small fraction of the amplitude of the disturbance.
Brennan, T.M.; Hammons, B.E.; Tsao, J.Y.
1992-12-15
A method for on-line accurate monitoring and precise control of molecular beam epitaxial growth of Groups III-III-V or Groups III-V-V layers in an advanced semiconductor device incorporates reflection mass spectrometry. The reflection mass spectrometry is responsive to intentional perturbations in molecular fluxes incident on a substrate by accurately measuring the molecular fluxes reflected from the substrate. The reflected flux is extremely sensitive to the state of the growing surface and the measurements obtained enable control of newly forming surfaces that are dynamically changing as a result of growth. 3 figs.
Ground level measurements of air conductivities under Florida thunderstorms
NASA Technical Reports Server (NTRS)
Blakeslee, Richard J.; Krider, E. P.
1992-01-01
Values of the positive and negative polar conductivities under summer thunderstorms in Florida are highly variable and exhibit a significant electrode effect, but the total conductivity usually remains close to values found in fair weather, 0.4 to 1.8 × 10⁻¹⁴ S/m. With these values, a method proposed by Krider and Musser (1982) for estimating the total conductivity from changes in the slope of the electric field recovery following a lightning discharge will be extremely sensitive to small time variations in the local Maxwell current density and must be modified to include these effects.
Measuring the absolute carrier-envelope phase of many-cycle laser fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tzallas, P.; Skantzakis, E.; Charalambidis, D.
2010-12-15
The carrier-envelope phase (CEP) of high-peak-power, many-cycle laser fields becomes a crucial parameter when such fields are used, in conjunction with polarization gating techniques, in isolated attosecond (asec) pulse generation. However, its measurement has not been achieved so far. We demonstrate a physical process sensitive to the CEP value of such fields and describe a method for its online shot-to-shot monitoring. This work paves the way for the exploitation of energetic isolated asec pulses in studies of nonlinear extreme ultraviolet (XUV) processes and XUV-pump-XUV-probe experiments with asec resolution.
Brennan, Thomas M.; Hammons, B. Eugene; Tsao, Jeffrey Y.
1992-01-01
A method for on-line accurate monitoring and precise control of molecular beam epitaxial growth of Groups III-III-V or Groups III-V-V layers in an advanced semiconductor device incorporates reflection mass spectrometry. The reflection mass spectrometry is responsive to intentional perturbations in molecular fluxes incident on a substrate by accurately measuring the molecular fluxes reflected from the substrate. The reflected flux is extremely sensitive to the state of the growing surface and the measurements obtained enable control of newly forming surfaces that are dynamically changing as a result of growth.
Towards a General Theory of Extremes for Observables of Chaotic Dynamical Systems.
Lucarini, Valerio; Faranda, Davide; Wouters, Jeroen; Kuna, Tobias
2014-01-01
In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables and the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan-Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
Towards a General Theory of Extremes for Observables of Chaotic Dynamical Systems
NASA Astrophysics Data System (ADS)
Lucarini, Valerio; Faranda, Davide; Wouters, Jeroen; Kuna, Tobias
2014-02-01
In this paper we provide a connection between the geometrical properties of the attractor of a chaotic dynamical system and the distribution of extreme values. We show that the extremes of so-called physical observables are distributed according to the classical generalised Pareto distribution and derive explicit expressions for the scaling and the shape parameter. In particular, we derive that the shape parameter does not depend on the chosen observables, but only on the partial dimensions of the invariant measure on the stable, unstable, and neutral manifolds. The shape parameter is negative and is close to zero when high-dimensional systems are considered. This result agrees with what was derived recently using the generalized extreme value approach. Combining the results obtained using such physical observables and the properties of the extremes of distance observables, it is possible to derive estimates of the partial dimensions of the attractor along the stable and the unstable directions of the flow. Moreover, by writing the shape parameter in terms of moments of the extremes of the considered observable and by using linear response theory, we relate the sensitivity to perturbations of the shape parameter to the sensitivity of the moments, of the partial dimensions, and of the Kaplan-Yorke dimension of the attractor. Preliminary numerical investigations provide encouraging results on the applicability of the theory presented here. The results presented here do not apply for all combinations of Axiom A systems and observables, but the breakdown seems to be related to very special geometrical configurations.
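The peaks-over-threshold picture behind the abstract above can be illustrated numerically. The sketch below iterates the fully chaotic logistic map (an illustrative system, not one studied in the paper), records a logarithmic distance observable around an arbitrary reference point, and fits the generalised Pareto distribution to threshold exceedances by probability-weighted moments; the map, reference point, and threshold level are all assumptions of this demonstration.

```python
import numpy as np

# Iterate the fully chaotic logistic map (r = 4) as a stand-in chaotic system.
n = 100_000
x = np.empty(n)
x[0] = 0.3
for i in range(1, n):
    x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])

# Logarithmic distance observable, peaking at an arbitrary reference x* = 0.7.
# For this observable the exceedance tail is close to exponential (shape ~ 0).
obs = -np.log(np.abs(x - 0.7) + 1e-300)

u = np.quantile(obs, 0.99)              # high threshold (99th percentile)
exc = np.sort(obs[obs > u] - u)         # exceedances above the threshold
m = exc.size

# Probability-weighted-moments (L-moment) fit of the GPD to the exceedances.
b0 = exc.mean()
b1 = np.sum(np.arange(m) / (m - 1) * exc) / m
l2 = 2.0 * b1 - b0
xi = 2.0 - b0 / l2                      # GPD shape parameter
sigma = b0 * (1.0 - xi)                 # GPD scale parameter
print(f"shape xi = {xi:.3f}, scale sigma = {sigma:.3f}, exceedances = {m}")
```

For this logarithmic observable the fitted shape parameter should come out near zero, consistent with the exponential tail of log-distance observables; distance observables with other scalings would pick up the attractor's dimensional information instead.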
Zhang, Jianhua; Nie, Xianzhou; Boquel, Sébastien; Al-Daoud, Fadi; Pelletier, Yvan
2015-12-01
The sensitivity of reverse transcription-polymerase chain reaction (RT-PCR) for virus detection is influenced by many factors, such as the specificity of primers and the quality of templates. These factors become extremely important for successful detection when virus concentration is low. Total RNA isolated from Potato virus Y (PVY)-infected potato plants using the sodium sulfite RNA isolation method or the RNeasy plant mini kit contains a high proportion of host RNA and may also contain trace amounts of phenolic and polysaccharide residues, which may inhibit RT-PCR. The goal of this study was to enhance the sensitivity of PVY detection by reducing host RNA in the extract through differential centrifugation followed by extraction using an RNeasy mini kit (DCR method). One-step RT-PCR had relatively low amplification efficiency for PVY RNA when a high proportion of plant RNA was present. SYBR Green-based real-time RT-PCR showed that the RNA isolated by the DCR method had a higher cycle threshold value (Ct) for the elongation factor 1-α mRNA (Ef1α) of potato than the Ct value of the RNA extracted using the RNeasy plant mini kit, indicating that the DCR method significantly reduced the proportion of potato RNA in the extract. The detectable amount of RNA extracted using the DCR method was <0.001 ng when plant sap from 10 PVY-infected and PVY-free potato leaflets in a 1.5:100 fresh-weight ratio was extracted, compared with 0.01 and 0.02 ng of RNA using the RNeasy plant mini kit and sodium sulfite RNA isolation methods, respectively. Copyright © 2015. Published by Elsevier B.V.
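The Ct comparison above rests on the usual exponential-amplification arithmetic: under ideal efficiency, each additional cycle to threshold corresponds to a factor of two less starting template. A small sketch with made-up Ct values (the abstract does not report the actual numbers):

```python
# Illustrative only: translate a hypothetical Ct shift for the host Ef1a
# transcript into an approximate fold-reduction of host RNA, assuming an
# ideal amplification efficiency of 2 per cycle. The Ct values here are
# invented for the example, not taken from the study.
def fold_change(ct_treated, ct_reference, efficiency=2.0):
    """Relative template abundance: a higher Ct means less starting template."""
    return efficiency ** (ct_treated - ct_reference)

delta = fold_change(ct_treated=24.0, ct_reference=21.0)
print(f"Host RNA reduced ~{delta:.0f}-fold")   # 2**3 = 8-fold
```

In practice the per-cycle efficiency is estimated from a standard curve and is usually slightly below 2, so the ideal-efficiency figure is an upper bound on the fold change.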
Soltaninejad, Mohammadreza; Yang, Guang; Lambrou, Tryphon; Allinson, Nigel; Jones, Timothy L; Barrick, Thomas R; Howe, Franklyn A; Ye, Xujiong
2017-02-01
We propose a fully automated method for detection and segmentation of the abnormal tissue associated with brain tumour (tumour core and oedema) from Fluid-Attenuated Inversion Recovery (FLAIR) Magnetic Resonance Imaging (MRI). The method is based on a superpixel technique and classification of each superpixel. A number of novel image features, including intensity-based, Gabor textons, fractal analysis and curvatures, are calculated from each superpixel within the entire brain area in FLAIR MRI to ensure a robust classification. An extremely randomized trees (ERT) classifier is compared with a support vector machine (SVM) to classify each superpixel into tumour and non-tumour. The proposed method is evaluated on two datasets: (1) our own clinical dataset: 19 MRI FLAIR images of patients with gliomas of grade II to IV, and (2) the BRATS 2012 dataset: 30 FLAIR images with 10 low-grade and 20 high-grade gliomas. The experimental results demonstrate the high detection and segmentation performance of the proposed method using the ERT classifier. For our own cohort, the average detection sensitivity, balanced error rate and Dice overlap measure for the segmented tumour against the ground truth are 89.48%, 6% and 0.91, respectively, while for the BRATS dataset the corresponding evaluation results are 88.09%, 6% and 0.88, respectively. This provides a close match to expert delineation across all grades of glioma, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
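For reference, the Dice overlap and detection sensitivity quoted above are standard segmentation metrics; a minimal sketch on toy binary masks (not the actual FLAIR data) is:

```python
import numpy as np

# Standard overlap metrics for binary segmentation masks, computed here on
# small synthetic masks rather than real MRI data.
def dice(pred, truth):
    """Dice overlap: 2|P & T| / (|P| + |T|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity(pred, truth):
    """Detection sensitivity (recall): TP / (TP + FN)."""
    tp = np.logical_and(pred, truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    return tp / (tp + fn)

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True   # 16-pixel "tumour"
pred  = np.zeros((8, 8), dtype=bool); pred[3:6, 2:6]  = True   # misses one row
print(dice(pred, truth), sensitivity(pred, truth))             # 12/14 dice terms: 24/28, 12/16
```

Here the prediction covers 12 of 16 true pixels with no false positives, giving a sensitivity of 0.75 and a Dice score of 6/7 ≈ 0.857.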
Nunes, Vera L; Beaumont, Mark A; Butlin, Roger K; Paulo, Octávio S
2011-01-01
Identification of loci with adaptive importance is a key step to understand the speciation process in natural populations, because those loci are responsible for phenotypic variation that affects fitness in different environments. We conducted an AFLP genome scan in populations of ocellated lizards (Lacerta lepida) to search for candidate loci influenced by selection along an environmental gradient in the Iberian Peninsula. This gradient is strongly influenced by climatic variables, and two subspecies can be recognized at the opposite extremes: L. lepida iberica in the northwest and L. lepida nevadensis in the southeast. Both subspecies show substantial morphological differences that may be involved in their local adaptation to the climatic extremes. To investigate how the use of a particular outlier detection method can influence the results, a frequentist method, DFDIST, and a Bayesian method, BayeScan, were used to search for outliers influenced by selection. Additionally, the spatial analysis method was used to test for associations of AFLP marker band frequencies with 54 climatic variables by logistic regression. Results obtained with each method highlight differences in their sensitivity. DFDIST and BayeScan detected a similar proportion of outliers (3-4%), but only a few loci were simultaneously detected by both methods. Several loci detected as outliers were also associated with temperature, insolation or precipitation according to spatial analysis method. These results are in accordance with reported data in the literature about morphological and life-history variation of L. lepida subspecies along the environmental gradient. © 2010 Blackwell Publishing Ltd.
NASA Astrophysics Data System (ADS)
Murphy, Conor; Bastola, Satish; Sweeney, John
2013-04-01
Climate change impact and adaptation assessments have traditionally adopted a 'top-down' scenario-based approach, where information from different Global Climate Models (GCMs) and emission scenarios is employed to develop impacts-led adaptation strategies. Due to the tradeoffs between computational cost and the need to include a wide range of GCMs for a fuller characterization of uncertainties, scenarios are better used for sensitivity testing and the appraisal of adaptation options. One common approach to adaptation that has been defined as robust is the use of safety margins. In this work, the sensitivity of the safety margins adopted by the agency responsible for flood risk management in Ireland to the uncertainty in future projections is examined. The sensitivity of fluvial flood risk to climate change is assessed for four Irish catchments using a large number of GCMs (17) forced with three emissions scenarios (SRES A1B, A2, B1) as input to four hydrological models. Both uncertainty within and between hydrological models is assessed using the GLUE framework. Regionalisation is achieved using a change factor method to infer changes in the parameters of a weather generator from monthly GCM output, while flood frequency analysis is conducted using the method of probability weighted moments to fit the Generalised Extreme Value distribution to ~20,000 annual maxima series. The sensitivity of design margins to the uncertainty space considered is visualised using risk response surfaces. The hydrological sensitivity is measured as the percentage change in flood peak for specified recurrence intervals.
Results indicate that there is considerable residual risk associated with allowances of +20% when uncertainties are accounted for, and that the risk of exceedance of design allowances is greatest for more extreme, low-frequency events, with considerable implications for critical infrastructure, e.g., culverts, bridges and flood defences, whose designs are normally associated with such return periods. Sensitivity results show that the impact of climate change is not as great for flood peaks with higher return periods. The average width of the uncertainty range and the size of the range for each catchment reveal that the uncertainties in low-frequency events are greater than in high-frequency events. In addition, the uncertainty interval, estimated as the average width of the uncertainty range of flow for the five return periods, grows wider with a decrease in the runoff coefficient and wetness index of each catchment, both of which tend to increase the nonlinearity in the rainfall response. A key management question that emerges is the acceptability of residual risk where high exposure of vulnerable populations and/or critical infrastructure coincides with high costs of additional capacity in safety margins.
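The flood-frequency step described above, fitting the Generalised Extreme Value (GEV) distribution by probability weighted moments, can be sketched as follows. The synthetic annual-maxima series and parameter values are illustrative (the study's ~20,000 real series are not reproduced here), and the shape-parameter formula is Hosking's classical L-moment approximation.

```python
import math
import numpy as np

def gev_pwm_fit(sample):
    """Fit GEV (location mu, scale sigma, shape xi) by probability weighted
    moments / L-moments, using Hosking's approximation for the shape."""
    x = np.sort(sample)
    n = x.size
    i = np.arange(1, n + 1)
    b0 = x.mean()                                          # unbiased PWM estimators
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2*b1 - b0, 6*b2 - 6*b1 + b0           # first three L-moments
    t3 = l3 / l2                                           # L-skewness
    c = 2.0 / (3.0 + t3) - math.log(2) / math.log(3)
    k = 7.8590 * c + 2.9554 * c * c                        # Hosking shape (xi = -k)
    g = math.gamma(1.0 + k)
    sigma = l2 * k / ((1.0 - 2.0 ** (-k)) * g)
    mu = l1 - sigma * (1.0 - g) / k
    return mu, sigma, -k                                   # xi sign convention

# Synthetic annual maxima drawn from a GEV with known parameters
# (mu=10, sigma=2, xi=0.1) via the inverse CDF.
rng = np.random.default_rng(42)
u = rng.random(5000)
xi_true, mu_true, s_true = 0.1, 10.0, 2.0
sample = mu_true + s_true * ((-np.log(u)) ** (-xi_true) - 1.0) / xi_true

mu, sigma, xi = gev_pwm_fit(sample)
print(f"mu = {mu:.2f}, sigma = {sigma:.2f}, xi = {xi:.3f}")
```

With 5000 synthetic maxima the recovered parameters land close to the true values; real annual-maxima records are far shorter, which is one reason PWM estimators (which are robust for small samples) are popular in flood frequency analysis.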
A comparison of alternative variants of the lead and lag time TTO.
Devlin, Nancy; Buckingham, Ken; Shah, Koonal; Tsuchiya, Aki; Tilling, Carl; Wilkinson, Grahame; van Hout, Ben
2013-05-01
'Lead Time' TTO improves upon conventional TTO by providing a uniform method for eliciting positive and negative values. This research investigates (i) the values generated from different combinations of time in poor health and in full health; and the order in which these appear (lead vs. lag); (ii) whether values concur with participants' views about states; (iii) methods for handling extreme preferences. n = 208 participants valued five EQ-5D states, using two of four variants. Combinations of lead time and health state duration were: 10 years and 20 years; 5 years and 1 year; 5 years and 10 years; and a health state duration of 5 years with a lag time of 10 years. Longer lead times capture more preferences, but may involve a framing effect. Lag time results in less non-trading for mild states, and less time being traded for severe states. Negative values broadly agree with participants' stated opinion that the state is worse than dead. The values are sensitive to the ratio of lead time to duration of poor health, and the order in which these appear (lead vs. lag). It is feasible to handle extreme preferences though challenges remain. Copyright © 2012 John Wiley & Sons, Ltd.
A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.
Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong
2017-01-01
It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images generate so much information that manual analysis is time-consuming and tedious. Among the various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain and is now widely applied in hospitals, because it is helpful for diagnosis, prognosis, and pre-surgical and post-surgical procedures. Automatic detection methods exist; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employed 2D discrete wavelet transform (DWT) and calculated the entropies of the subbands as features. Then, a bat-algorithm-optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10×10-fold cross-validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
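A minimal extreme learning machine of the kind tuned in the paper can be sketched as follows. The random-hidden-layer/least-squares structure is the standard ELM recipe; the toy data, network size, and the omission of the bat-algorithm optimization and DWT-entropy features are assumptions of this sketch.

```python
import numpy as np

class ELM:
    """Extreme learning machine: a random (untrained) hidden layer followed
    by output weights solved in closed form by least squares."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                 # random feature map
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta > 0.5).astype(int)

# Toy two-class problem: two well-separated Gaussian blobs stand in for the
# wavelet-entropy feature vectors used in the paper.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
acc = (ELM().fit(X, y).predict(X) == y).mean()
print(f"training accuracy = {acc:.2f}")
```

The appeal of the ELM is that only the output weights are learned (a single linear solve), which makes wrapping it in a metaheuristic such as the bat algorithm computationally cheap.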
Katharopoulos, Efstathios; Touloupi, Katerina; Touraki, Maria
2016-08-01
The present study describes the development of a simple and efficient screening system that allows identification and quantification of nine bacteriocins produced by Lactococcus lactis. Cell-free L. lactis extracts presented a broad spectrum of antibacterial activity, including activity against Gram-negative bacteria, Gram-positive bacteria, and fungi. The characterization of their sensitivity to pH and heat showed that the extracts retained their antibacterial activity at extreme pH values and over a wide temperature range. The loss of antibacterial activity following treatment of the extracts with lipase or protease suggests a lipoproteinaceous nature of the produced antimicrobials. The extracts were subjected to a purification protocol that employs a two-phase extraction using ammonium sulfate precipitation and organic solvent precipitation, followed by ion exchange chromatography, solid phase extraction and HPLC. In the nine fractions that presented antimicrobial activity, bacteriocins were quantified by the turbidimetric method using a standard curve of nisin and by the HPLC method with nisin as the external standard, with both methods producing comparable results. Turbidimetry is well suited to the qualitative determination of bacteriocins, but the only method able to both separate and quantify the bacteriocins with increased sensitivity, accuracy, and precision is HPLC. Copyright © 2016 Elsevier B.V. All rights reserved.
Can quantile mapping improve precipitation extremes from regional climate models?
NASA Astrophysics Data System (ADS)
Tani, Satyanarayana; Gobiet, Andreas
2015-04-01
The ability of quantile mapping to accurately bias-correct precipitation extremes is investigated in this study. We developed new methods by extending standard quantile mapping (QMα) to improve the quality of bias-corrected extreme precipitation events as simulated by regional climate model (RCM) output. The new QM version (QMβ) was developed by combining parametric and nonparametric bias correction methods. The new nonparametric method is tested with and without a controlling shape parameter (QMβ1 and QMβ0, respectively). Bias corrections are applied to hindcast simulations for a small ensemble of RCMs at six different locations over Europe. We examined the quality of the extremes through split-sample and cross-validation approaches for these three bias correction methods. The split-sample approach mimics the application to future climate scenarios. A cross-validation framework with particular focus on new extremes was developed. Error characteristics, q-q plots and Mean Absolute Error (MAEx) skill scores are used for evaluation. We demonstrate the unstable behaviour of the correction function at higher quantiles with QMα, whereas the correction functions for QMβ0 and QMβ1 are smoother, with QMβ1 providing the most reasonable correction values. The q-q plots demonstrate that all bias correction methods are capable of producing new extremes, but QMβ1 reproduces new extremes with low biases in all seasons compared to QMα and QMβ0. Our results clearly demonstrate the inherent limitations of empirical bias correction methods employed for extremes, particularly new extremes, and our findings reveal that the new bias correction method (QMβ1) produces more reliable climate scenarios for new extremes. These findings present a methodology that can better capture future extreme precipitation events, which is necessary to improve regional climate change impact studies.
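Standard empirical quantile mapping, the QMα baseline against which the new variants are compared, can be sketched as follows. The gamma-distributed "observations" and "model" series are synthetic stand-ins for real precipitation, and the exact QMβ formulations are not reproduced here.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_new):
    """Empirical quantile mapping: look up each model value's quantile in the
    historical model distribution, then read off the observed value at the
    same quantile."""
    q = np.linspace(0.0, 1.0, 101)
    mod_q = np.quantile(model_hist, q)
    obs_q = np.quantile(obs_hist, q)
    p = np.interp(model_new, mod_q, q)     # model value -> quantile
    return np.interp(p, q, obs_q)          # quantile -> observed value

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 5.0, 3000)            # "observed" precipitation
mod = rng.gamma(2.0, 4.0, 3000) + 1.0      # biased model: wrong scale + offset
corrected = quantile_map(mod, obs, mod)
print(f"means: model={mod.mean():.2f}, obs={obs.mean():.2f}, "
      f"corrected={corrected.mean():.2f}")
```

Note the instability the abstract points to: because the mapping is anchored to historical quantiles, values beyond the calibration range (new extremes) fall on the extrapolated or clamped tail of the empirical correction function, which is exactly where the parametric smoothing of the QMβ variants is intended to help.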
Design, assembly, and metrology of an oil-immersion microscope objective with long working distance
NASA Astrophysics Data System (ADS)
Peng, Wei-Jei; Lin, Wen-Lung; Kuo, Hui-Jean; Ho, Cheng-Fang; Hsu, Wei-Yao
2016-10-01
The design, tolerance sensitivity reduction, assembly, and optical bench testing of an oil-immersion microscope objective with long working distance, employed in a lattice light-sheet microscope, are presented in this paper. In this application, the orthogonal excitation and detection objectives are dipped in an oil medium. The excitation objective focuses the incident laser beam to generate fluorescence on the specimen for collection by the detection objective. The excitation objective is custom-designed to meet requirement specifications such as oil immersion, long working distance, and a numerical aperture (NA) of 0.5. To produce an acceptable point spread function (PSF) for effective excitation, the performance of the objective needs to be close to the diffraction limit. Because the tolerance of the modulation transfer function (MTF) becomes increasingly sensitive at higher spatial frequencies, it is extremely critical to maintain performance after manufacture. Consequently, an insensitive optical design is very important for relaxing tolerances. We compare the design with and without tolerance sensitivity reduction, and the as-built MTF shows the result. Furthermore, the method for sensitivity reduction is presented. The opto-mechanical design and assembly method are also discussed. Eventually, an objective with five spherical lenses was fabricated. In the optical bench test, the MTF is sensitive to the depth of the oil, which complicates adjustment. To solve this issue, we made an index-matching lens to replace the oil so that measurement could be performed easily. Finally, the measured MTF of the excitation objective meets the requirement specification, and the objective has been successfully employed in a lattice light-sheet microscope.
NASA Astrophysics Data System (ADS)
Padmanabhan, Saraswathi; Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke M.; Padmanabhan, Parasuraman
2010-01-01
A simple optical method using a hollow-core photonic crystal fiber for protein detection is described. In this study, estrogen receptor (ER) from MCF-7 breast carcinoma cell lysates immobilized inside a hollow-core photonic crystal fiber was detected using an anti-ER primary antibody with either Alexa™ Fluor 488 (green fluorescent dye) or 555 (red fluorescent dye) labeled goat anti-rabbit IgG as the secondary antibody. The fluorescence fingerprints of the ERα protein were observed under a fluorescence microscope, and their optical characteristics were analyzed. ERα protein detection by this proposed method is based on immunobinding from sample volumes as low as 50 nL. This method is expected to offer great potential as a biosensor for medical diagnostics and therapeutic applications.
NMT - A new individual ion counting method: Comparison to a Faraday cup
NASA Astrophysics Data System (ADS)
Burton, Michael; Gorbunov, Boris
2018-03-01
Two sample detectors used to analyze the emission from gas chromatography (GC) columns are the Flame Ionization Detector (FID) and the Electron Capture Detector (ECD). Both of these detectors involve ionization of the sample molecules and then measurement of the electric current in the gas using a Faraday cup. In this paper a newly developed method of ion counting, Nanotechnology Molecular Tagging (NMT), is tested as a replacement for the Faraday cup in GCs. In this method the effective physical volume of individual molecules is enlarged up to 1 billion times, enabling them to be detected by an optical particle counter. It was found that the sensitivity of NMT was considerably greater than that of the Faraday cup. The background in the NMT was circa 200 ions per cm³, corresponding to an extremely low electric current of ∼10⁻¹⁷ A.
Osland, Michael J; Day, Richard H; Hall, Courtney T; Brumfield, Marisa D; Dugas, Jason L; Jones, William R
2017-01-01
Within the context of climate change, there is a pressing need to better understand the ecological implications of changes in the frequency and intensity of climate extremes. Along subtropical coasts, less frequent and warmer freeze events are expected to permit freeze-sensitive mangrove forests to expand poleward and displace freeze-tolerant salt marshes. Here, our aim was to better understand the drivers of poleward mangrove migration by quantifying spatiotemporal patterns of mangrove range expansion and contraction across land-ocean temperature gradients. Our work was conducted in a freeze-sensitive mangrove-marsh transition zone that spans a land-ocean temperature gradient in one of the world's most wetland-rich regions (Mississippi River Deltaic Plain; Louisiana, USA). We used historical air temperature data (1893-2014), alternative future climate scenarios, and coastal wetland coverage data (1978-2011) to investigate spatiotemporal fluctuations and climate-wetland linkages. Our analyses indicate that changes in mangrove coverage have been controlled primarily by extreme freeze events (i.e., air temperatures below a threshold zone of -6.3 to -7.6°C). We expect that, in the past 121 yr, mangrove range expansion and contraction have occurred across land-ocean temperature gradients. Mangrove resistance, resilience, and dominance were all highest in areas closer to the ocean, where temperature extremes were buffered by large expanses of water and saturated soil. Under climate change, these areas will likely serve as local hotspots for mangrove dispersal, growth, range expansion, and displacement of salt marsh. Collectively, our results show that the frequency and intensity of freeze events across land-ocean temperature gradients greatly influence spatiotemporal patterns of range expansion and contraction of freeze-sensitive mangroves.
We expect that, along subtropical coasts, similar processes govern the distribution and abundance of other freeze-sensitive organisms. In broad terms, our findings can be used to better understand and anticipate the ecological effects of changing winter climate extremes, especially within the transition zone between tropical and temperate climates. © 2016 by the Ecological Society of America.
Sensitivity to psychostimulants in mice bred for high and low stimulation to methamphetamine.
Kamens, H M; Burkhart-Kasch, S; McKinnon, C S; Li, N; Reed, C; Phillips, T J
2005-03-01
Methamphetamine (MA) and cocaine induce behavioral effects primarily through modulation of dopamine neurotransmission. However, the genetic regulation of sensitivity to these two drugs may be similar or disparate. Using selective breeding, lines of mice were produced with extreme sensitivity (high MA activation; HMACT) and insensitivity (low MA activation; LMACT) to the locomotor stimulant effects of acute MA treatment. Studies were performed to determine whether there is pleiotropic genetic influence on sensitivity to the locomotor stimulant effect of MA and to other MA- and cocaine-related behaviors. The HMACT line exhibited more locomotor stimulation in response to several doses of MA and cocaine, compared to the LMACT line. Both lines exhibited locomotor sensitization to 2 mg/kg of MA and 10 mg/kg of cocaine; the magnitude of sensitization was similar in the two lines. However, the lines differed in the magnitude of sensitization to a 1 mg/kg dose of MA, a dose that did not produce a ceiling effect that may confound interpretation of studies using higher doses. The LMACT line consumed more MA and cocaine in a two-bottle choice drinking paradigm; the lines consumed similar amounts of saccharin and quinine, although the HMACT line exhibited slightly elevated preference for a low concentration of saccharin. These results suggest that some genes that influence sensitivity to the acute locomotor stimulant effect of MA have a pleiotropic influence on the magnitude of behavioral sensitization to MA and sensitivity to the stimulant effects of cocaine. Further, extreme sensitivity to MA may protect against MA and cocaine self-administration.
Early developmental stages of fish are extremely sensitive to a class of toxic and persistent environmental contaminants known as dioxin-like compounds (DLCs). Most of the toxicological actions of DLCs are mediated via the Aryl hydrocarbon Receptor (AhR) that regulates transcript...
Alfaro-Núñez, Alonzo; Gilbert, M Thomas P
2014-09-01
The Chelonid fibropapilloma-associated herpesvirus (CFPHV) is hypothesized to be the causative agent of fibropapillomatosis, a neoplastic disease in sea turtles, given its consistent detection by PCR in fibropapilloma tumours. CFPHV has also been detected recently by PCR in tissue samples from clinically healthy turtles (not exhibiting fibropapilloma tumours), thus presumably representing latent infections of the pathogen. Given that template copy numbers of viruses in latent infections can be very low, extremely sensitive PCR assays are needed to optimize detection efficiency. In this study, the efficiency of several PCR assays designed for CFPHV detection is explored and compared to a previously published method. The results show that a triplet set of singleplex PCR assays outperforms the other methods, with an approximately 3-fold increase in detection success over the standard assay. Thus, a new assay for the detection of CFPHV DNA markers is presented, and adoption of its methodology is recommended in future CFPHV screens among sea turtles. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Presnov, Denis E.; Bozhev, Ivan V.; Miakonkikh, Andrew V.; Simakin, Sergey G.; Trifonov, Artem S.; Krupenin, Vladimir A.
2018-02-01
We present an original method for fabricating a sensitive field/charge sensor based on a field-effect transistor (FET) with a nanowire channel, using CMOS-compatible processes only. A FET with a kink-like silicon nanowire channel was fabricated from an inhomogeneously doped silicon-on-insulator wafer very close (˜100 nm) to the extremely sharp corner of a silicon chip forming a local probe. A single e-beam lithographic process with a shadow deposition technique, followed by two separate reactive ion etching processes, was used to define the narrow semiconductor nanowire channel. The sensor's charge sensitivity was evaluated to be in the range of 0.1-0.2 e/√Hz from the analysis of its transport and noise characteristics. The proposed method provides a good opportunity for the relatively simple manufacture of a local field sensor for measuring electric field distributions, potential profiles, and charge dynamics for a wide range of mesoscopic objects. Diagnostic systems and devices based on such sensors can be used in various fields of physics, chemistry, materials science, biology, electronics, medicine, etc.
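A charge-noise density quoted in e/√Hz converts to a minimum resolvable charge once a measurement bandwidth is chosen. The sketch below illustrates that standard white-noise scaling; the function name and bandwidth values are ours, not from the abstract.

```python
import math

def min_resolvable_charge(noise_e_per_rthz, bandwidth_hz):
    """Smallest charge step (in electron charges) resolvable at unity SNR,
    given a white charge-noise density (e/sqrt(Hz)) and measurement bandwidth.
    Integrated RMS noise = density * sqrt(bandwidth)."""
    return noise_e_per_rthz * math.sqrt(bandwidth_hz)

# With the reported 0.1-0.2 e/sqrt(Hz) density (using the 0.1 figure here):
q_fast = min_resolvable_charge(0.1, 100.0)  # 100 Hz bandwidth -> 1.0 e
q_slow = min_resolvable_charge(0.1, 1.0)    # 1 Hz bandwidth -> 0.1 e
```

Slower measurements (narrower bandwidth) thus resolve proportionally smaller charge steps, which is why such densities are always quoted per root hertz.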
Highly Sensitive and Selective Gas Sensor Using Hydrophilic and Hydrophobic Graphenes
Some, Surajit; Xu, Yang; Kim, Youngmin; Yoon, Yeoheung; Qin, Hongyi; Kulkarni, Atul; Kim, Taesung; Lee, Hyoyoung
2013-01-01
New hydrophilic 2D graphene oxide (GO) nanosheets with various oxygen functional groups were employed to maintain high sensitivity in highly unfavorable environments (extremely high humidity, strongly acidic or basic conditions). Novel one-headed polymer optical fiber sensor arrays using hydrophilic GO and hydrophobic reduced graphene oxide (rGO) were carefully designed, leading to the selective sensing of volatile organic gases for the first time. The two physically different surfaces of GO and rGO provide the ability to distinguish between tetrahydrofuran (THF) and dichloromethane (MC), respectively, which is the most challenging issue in the area of gas sensors. The eco-friendly physical properties of GO allowed for faster sensing and higher sensitivity than previous results for rGO, even in extreme environments of over 90% humidity, making it the best choice for an environmentally friendly gas sensor. PMID:23736838
Robust, Thin Optical Films for Extreme Environments
NASA Technical Reports Server (NTRS)
2006-01-01
The environment of space presents scientists and engineers with the challenges of a harsh, unforgiving laboratory in which to conduct their scientific research. Solar astronomy and X-ray astronomy are two of the more challenging areas into which NASA scientists delve, as the optics for this high-tech work must be extremely sensitive and accurate, yet also be able to withstand the battering dished out by radiation, extreme temperature swings, and flying debris. Recent NASA work on this rugged equipment has led to the development of a strong, thin film for both space and laboratory use.
Digital image profilers for detecting faint sources which have bright companions
NASA Technical Reports Server (NTRS)
Morris, Elena; Flint, Graham; Slavey, Robert
1992-01-01
For this program, an image profiling system was developed which offers the potential for detecting extremely faint optical sources that are located in close proximity to bright companions. The approach employed is novel in three respects. First, it does not require an optical system wherein extraordinary measures must be taken to minimize diffraction and scatter. Second, it does not require detectors possessing either extreme uniformity in sensitivity or extreme temporal stability. Finally, the system can readily be calibrated, or nulled, in space by testing against an unresolved singular stellar source.
FastID: Extremely Fast Forensic DNA Comparisons
2017-05-19
Darrell O. Ricke, PhD, Bioengineering Systems & Technologies, Massachusetts Institute of Technology Lincoln Laboratory, Lexington, MA, USA (Darrell.Ricke@ll.mit.edu). Abstract: Rapid analysis of DNA forensic samples can have a critical impact on time-sensitive investigations. Analysis of forensic DNA samples by massively parallel sequencing is creating the next gold standard for DNA...
Trchunian, A; Ogandzhanian, E; Sarkisian, E; Gonian, S; Oganesian, A; Oganesian, S
2001-01-01
It was found that "sound" electromagnetic radiation of extremely high frequencies (53.5-68 GHz), i.e., millimeter waves (wavelength range 4.2-5.6 mm), of low intensity (power density 0.01 mW) has a bactericidal effect on Escherichia coli bacteria. Exposure to such irradiation was shown to increase the electrokinetic potential and surface charge density of the bacteria and to decrease the membrane potential. The total secretion of hydrogen ions was suppressed, the H+ flux from the cytoplasm to the medium decreased, and the flux of N,N'-dicyclohexylcarbodiimide-sensitive potassium ions increased, which was accompanied by changes in the stoichiometry of these fluxes and an increase in the sensitivity of H+ fluxes to N,N'-dicyclohexylcarbodiimide. The effects depended on the duration of exposure: as exposure time increased, the bactericidal effect increased, whereas the membranotropic effects decreased. The effects also depended on the growth phase of the bacteria: the irradiation affected cells in the stationary but not in the logarithmic phase. It is assumed that the H+-ATPase complex F0F1 is involved in the membranotropic effects of electromagnetic radiation of extremely high frequencies. Presumably, some compensatory mechanisms eliminate the membranotropic effects.
NASA Astrophysics Data System (ADS)
Buitrago, Elizabeth; Nagahara, Seiji; Yildirim, Oktay; Nakagawa, Hisashi; Tagawa, Seiichi; Meeuwissen, Marieke; Nagai, Tomoki; Naruoka, Takehiko; Verspaget, Coen; Hoefnagels, Rik; Rispens, Gijsbert; Shiraishi, Gosuke; Terashita, Yuichi; Minekawa, Yukie; Yoshihara, Kosuke; Oshima, Akihiro; Vockenhuber, Michaela; Ekinci, Yasin
2016-07-01
Extreme ultraviolet lithography (EUVL, λ=13.5 nm) is the most promising candidate to manufacture electronic devices for future technology nodes in the semiconductor industry. Nonetheless, EUVL still faces many technological challenges as it moves toward high-volume manufacturing (HVM). A key bottleneck from the tool design and performance point of view has been the development of an efficient, high-power EUV light source for high throughput production. Consequently, there has been extensive research on different methodologies to enhance EUV resist sensitivity. Resist performance is measured in terms of its ultimate printing resolution, line width roughness (LWR), sensitivity [S or best energy (BE)], and exposure latitude (EL). However, there are well-known fundamental trade-off relationships (line width roughness, resolution and sensitivity trade-off) among these parameters for chemically amplified resists (CARs). We present early proof-of-principle results for a multiexposure lithography process that has the potential for high sensitivity enhancement without compromising other important performance characteristics by the use of a "Photosensitized Chemically Amplified Resist™" (PSCAR™). With this method, we seek to increase the sensitivity by combining a first EUV pattern exposure with a second UV-flood exposure (λ=365 nm) and the use of a PSCAR. In addition, we have evaluated over 50 different state-of-the-art EUV CARs. Among these, we have identified several promising candidates that simultaneously meet sensitivity, LWR, and EL high-performance requirements with the aim of resolving line space (L/S) features for the 7- and 5-nm logic node [16- and 13-nm half-pitch (HP), respectively] for HVM. Several CARs were additionally found to be well resolved down to 12- and 11-nm HP with minimal pattern collapse and bridging, a remarkable feat for CARs. 
Finally, the performance of two negative tone state-of-the-art alternative resist platforms previously investigated was compared to the CAR performance at and below 16-nm HP resolution, demonstrating the need for alternative resist solutions at 13-nm resolution and below. EUV interference lithography (IL) has provided and continues to provide a simple yet powerful platform for academic and industrial research, enabling the characterization and development of resist materials before commercial EUV exposure tools become available. Our experiments have been performed at the EUV-IL set-up in the Swiss Light Source (SLS) synchrotron facility located at the Paul Scherrer Institute (PSI).
NASA Astrophysics Data System (ADS)
Ghosh, S. B.; Bhattacharya, K.; Nayak, S.; Mukherjee, P.; Salaskar, D.; Kale, S. P.
2015-09-01
Definitive identification of microorganisms, including pathogenic and non-pathogenic bacteria, is extremely important for a wide variety of applications including food safety, environmental studies, bio-terrorism threats, microbial forensics, criminal investigations and, above all, disease diagnosis. Although extremely powerful techniques such as those based on PCR and microarrays exist, they require sophisticated laboratory facilities along with elaborate sample preparation by trained researchers. Among spectroscopic techniques, FTIR was used in the 1980s and 1990s for bacterial identification. In the present study, five species of Bacillus were isolated from the aerobic predigester chamber of a Nisargruna Biogas Plant (NBP) and were identified to the species level by biochemical and molecular biological (16S ribosomal DNA sequence) methods. These organisms were further characterized by solid-state spectroscopic absorbance measurements over a wide range of electromagnetic radiation (wavelength 200 nm to 25,000 nm) encompassing the UV, visible, near-infrared and infrared regions. UV-Vis and NIR spectroscopy were performed on dried bacterial cell suspensions on silicon wafers in specular mode, while FTIR was performed on KBr pellets containing the bacterial cells. Consistent and reproducible species-specific spectra were obtained, and sensitivity down to a level of 1000 cells was observed in FTIR with a DTGS detector. This clearly shows the potential of solid-state spectroscopic techniques for simple, easy-to-implement, reliable and sensitive detection of bacteria from environmental samples.
Selection criteria for wear resistant powder coatings under extreme erosive wear conditions
NASA Astrophysics Data System (ADS)
Kulu, P.; Pihl, T.
2002-12-01
Wear-resistant thermal spray coatings for sliding wear are hard but brittle (such as carbide- and oxide-based coatings), which makes them unsuitable under impact loading conditions and sensitive to fatigue. Under extreme conditions of erosive wear (impact loading, high hardness of abrasives, and high velocity of abradant particles), composite coatings ensure an optimal combination of hardness and toughness. The article describes tungsten carbide-cobalt (WC-Co) systems and self-fluxing alloys containing tungsten carbide based hardmetal particles [NiCrSiB-(WC-Co)] deposited by detonation gun, continuous detonation spraying, and spray-fusion processes. Different powder compositions and processes were studied, and the effects of the coating structure and wear parameters on the wear resistance of the coatings are evaluated. The dependence of the wear resistance of sprayed and fused coatings on their hardness is discussed, and hardness criteria for coating selection are proposed. The so-called “double cemented” structure of WC-Co based hardmetal or metal matrix composite coatings, as compared with a simple cobalt matrix containing WC particles, was found to be optimal. Structural criteria for coating selection are provided. To assist the end user in selecting an optimal deposition method and materials, coating selection diagrams of wear resistance versus hardness are given. The paper also discusses the cost-effectiveness of coatings in cost-sensitive application areas, and composite coatings based on recycled materials are proposed.
Multi-scale silica structures for improved point of care detection
NASA Astrophysics Data System (ADS)
Lin, Sophia; Lin, Lancy; Cho, Eunbyul; Pezzani, Gaston A. O.; Khine, Michelle
2017-03-01
The need for sensitive, portable diagnostic tests at the point of care persists. We report on a simple method to obtain improved detection of biomolecules by a two-fold mechanism. Silica (SiO2) is coated on pre-stressed thermoplastic shrink-wrap film. When the film retracts, the resulting micro- and nanostructures yield far-field fluorescence signal enhancements over their planar or wrinkled counterparts. Because the film shrinks by 95% in surface area, there is also a 20x concentration effect. The SiO2 structured substrate is therefore used for improved detection of labeled proteins and of DNA hybridization, in both fluorescence and bright-field modes. Through optical characterization studies, we attribute the fluorescence signal enhancements of 100x to increased surface density and light scattering from the rough SiO2 structures. Combined with our open-channel self-wicking microfluidics, this can achieve extremely low-cost yet sensitive point-of-care diagnostics.
Real-Time Measurement of Nanotube Resonator Fluctuations in an Electron Microscope
2017-01-01
Mechanical resonators based on low-dimensional materials provide a unique platform for exploring a broad range of physical phenomena. The mechanical vibrational states are indeed extremely sensitive to charges, spins, photons, and adsorbed masses. However, the roadblock is often the readout of the resonator, because the detection of the vibrational states becomes increasingly difficult for smaller resonators. Here, we report an unprecedentedly sensitive method to detect nanotube resonators with effective masses in the 10⁻²⁰ kg range. We use the beam of an electron microscope to resolve the mechanical fluctuations of a nanotube in real time for the first time. We obtain full access to the thermally driven Brownian motion of the resonator, in both the space and time domains. Our results establish the viability of carbon nanotube resonator technology at room temperature and pave the way toward the observation of novel thermodynamic regimes and quantum effects in nanomechanics. PMID:28186773
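The scale of the thermally driven Brownian motion mentioned above can be estimated from equipartition: each harmonic mode carries (1/2)kBT of potential energy. The sketch below is a back-of-envelope estimate, assuming an effective mass of order 10⁻²⁰ kg as in the abstract and a resonance frequency of ~10 MHz, which is our assumption (typical for suspended nanotubes), not a value from the paper.

```python
import math

def thermal_rms_amplitude(mass_kg, freq_hz, temp_k=300.0):
    """RMS displacement of a harmonic resonator mode from equipartition:
    (1/2) m omega^2 <x^2> = (1/2) kB T  =>  x_rms = sqrt(kB T / (m omega^2))."""
    k_b = 1.380649e-23  # Boltzmann constant, J/K
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(k_b * temp_k / (mass_kg * omega ** 2))

# Assumed illustrative parameters: 1e-20 kg effective mass, 10 MHz mode.
x_rms = thermal_rms_amplitude(mass_kg=1e-20, freq_hz=10e6)  # of order 10 nm
```

Displacements of this order (nanometers to tens of nanometers) are what an electron beam parked at the nanotube's edge can plausibly resolve in real time.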
Sehata, Go; Sato, Hiroaki; Ito, Toshihiro; Imaizumi, Yoshitaka; Noro, Taichi; Oishi, Eiji
2015-07-01
We used real-time RT-PCR and virus titration to examine canine distemper virus (CDV) kinetics in peripheral blood and rectal and nasal secretions from 12 experimentally infected dogs. Real-time RT-PCR proved extremely sensitive, and the correlation between the two methods for rectal and nasal (r=0.78, 0.80) samples on the peak day of viral RNA was good. Although the dogs showed diverse symptoms, viral RNA kinetics were similar; the peak of viral RNA in the symptomatic dogs was consistent with the onset of symptoms. These results indicate that real-time RT-PCR is sufficiently sensitive to monitor CDV replication in experimentally infected dogs regardless of the degree of clinical manifestation and suggest that the peak of viral RNA reflects active CDV replication.
Advanced industrial fluorescence metrology used for qualification of high quality optical materials
NASA Astrophysics Data System (ADS)
Engel, Axel; Becker, Hans-Juergen; Sohr, Oliver; Haspel, Rainer; Rupertus, Volker
2003-11-01
Schott Glas develops and produces optical materials for various specialized applications in telecommunications, biomedical, optical, and microlithography technology. The quality requirements for optical materials are extremely high and still increasing. For example, in microlithography applications the impurities of the material are specified to be in the low-ppb range. Usually, impurities in the lower ppb range are determined using analytical methods such as LA-ICP-MS and neutron activation analysis. On the other hand, the absorption and laser resistivity of optical materials are qualified with optical methods such as precision spectral photometers and in-situ transmission measurements using UV lasers. Analytical methods have the drawback that they are time consuming and rather expensive, whereas the sensitivity of the absorption method will not be sufficient to characterize future needs (coefficients much below 10⁻³ cm⁻¹). In order to achieve the current and future quality requirements, a Jobin Yvon FLUOROLOG 3.22 fluorescence spectrometer is employed to enable fast and precise qualification and analysis. The main advantage of this setup is the combination of highest sensitivity (more than one order of magnitude more sensitive than state-of-the-art UV absorption spectroscopy) and fast measurement and evaluation cycles (several minutes compared to the several hours necessary for chemical analysis). An overview of spectral characteristics, obtained using specified standards, is given. Moreover, correlations to material qualities are shown. In particular, we have investigated the elementary fluorescence and absorption of rare-earth-element impurities as well as defect-induced luminescence originating from impurities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Putten, Maurice H. P. M.
2015-09-01
Long gamma-ray bursts (GRBs) associated with supernovae and short GRBs with extended emission (SGRBEE) from mergers are probably powered by black holes as a common inner engine, as their prompt GRB emission satisfies the same Amati correlation in the E_{p,i}-E_{iso} plane. We introduce modified Bardeen equations to identify hyper-accretion driving newly formed black holes in core-collapse supernovae to near-extremal spin as a precursor to prompt GRB emission. Subsequent spin-down is observed in the BATSE catalog of long GRBs. Spin-down provides a natural unification of long durations associated with the lifetime of black hole spin for normal long GRBs and SGRBEEs, given the absence of major fallback matter in mergers. The results point to major emissions unseen in high-frequency gravitational waves. A novel matched filtering method is described for LIGO-Virgo and KAGRA broadband probes of nearby core-collapse supernovae at essentially maximal sensitivity.
Origins of extreme broadening mechanisms in near-edge x-ray spectra of nitrogen compounds
NASA Astrophysics Data System (ADS)
Vinson, John; Jach, Terrence; Elam, W. T.; Denlinger, J. D.
2014-11-01
We demonstrate the observation of many-body lifetime effects in valence-band x-ray emission. A comparison of the N Kα emission of crystalline ammonium nitrate to molecular-orbital calculations revealed an unexpected, extreme broadening of the NO σ recombination—so extensive that it virtually disappears. GW calculations establish that this disappearance is due to a large imaginary component of the self-energy associated with the NO σ orbitals. Building upon density-functional theory, we have calculated radiative transitions from the nitrogen 1s level of ammonium nitrate and ammonium chloride using a Bethe-Salpeter method to include electron-hole interactions. The absorption and emission spectra of both crystals evince large, orbital-dependent sensitivity to molecular dynamics. We demonstrate that many-body effects as well as thermal and zero-point motion are vital for understanding observed spectra. A computational approach using average atomic positions and uniform broadening to account for lifetime and phonon effects is unsatisfactory.
NASA Astrophysics Data System (ADS)
Feilx Kim, Seojin; Jee, Myungkook James
2018-01-01
Measuring the masses of high-z clusters is very important because cluster abundance is extremely sensitive to the cosmological parameters. However, deriving their masses from intracluster-medium properties (i.e., Sunyaev-Zel’dovich or X-ray observations) is not ideal because of departures from hydrostatic equilibrium. Fortunately, the “See Change” Hubble Space Telescope program offers a rare opportunity to measure them using weak gravitational lensing. We study SPT-CL J0205-5829 (z=1.322) and MOO1014+0038 (z=1.24), discovered in the SPT-SZ and MaDCoWS surveys, respectively. Previous non-lensing-based approaches suggest that both targets might be extremely massive clusters. After carefully addressing various possible systematics from the Advanced Camera for Surveys (ACS) and Wide Field Camera 3 (WFC3) images, we successfully detect clear weak-lensing signals. We present their two-dimensional mass maps and compare our weak-lensing masses with previous ICM-based results.
Brigham, Mark E.; Payne, Gregory A.; Andrews, William J.; Abbott, Marvin M.
2002-01-01
The sampling network was evaluated with respect to areal coverage, sampling frequency, and analytical schedules. Areal coverage could be expanded to include one additional watershed that is not part of the current network. A new sampling site on the North Canadian River might be useful because of expanding urbanization west of the city, but sampling at some other sites could be discontinued or reduced based on comparisons of data between the sites. Additional real-time or periodic monitoring for dissolved oxygen may be useful to prevent anoxic conditions in pools behind new low-water dams. The sampling schedules, both monthly and quarterly, are adequate to evaluate trends, but additional sampling during flow extremes may be needed to quantify loads and evaluate water quality during flow extremes. Emerging water-quality issues may require sampling for volatile organic compounds, sulfide, total phosphorus, chlorophyll-a, Escherichia coli, and enterococci, as well as use of more sensitive laboratory analytical methods for determination of cadmium, mercury, lead, and silver.
NASA Astrophysics Data System (ADS)
Kozawa, Takahiro; Santillan, Julius Joseph; Itani, Toshiro
2017-06-01
In lithography using high-energy photons such as an extreme ultraviolet (EUV) radiation, the shot noise of photons is a critical issue. The shot noise is a cause of line edge/width roughness (LER/LWR) and stochastic defect generation and limits the resist performance. In this study, the effects of photodecomposable quenchers were investigated from the viewpoint of the shot noise limit. The latent images of line-and-space patterns with 11 nm half-pitch were calculated using a Monte Carlo method. In the simulation, the effect of secondary electron blur was eliminated to clarify the shot noise limits regarding stochastic phenomena such as LER. The shot noise limit for chemically amplified resists with acid generators and photodecomposable quenchers was approximately the same as that for chemically amplified resists with acid generators and conventional quenchers when the total sensitizer concentration was the same. The effect of photodecomposable quenchers on the shot noise limit was essentially the same as that of acid generators.
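The photon shot noise discussed above arises because the number of photons absorbed per resist pixel is Poisson distributed. The following is a minimal Monte Carlo sketch of that statistic only, not the authors' latent-image simulator; the function names and trial counts are ours.

```python
import math
import random

random.seed(0)

def poisson_sample(lam):
    """Knuth's algorithm: multiply uniform deviates until the product
    drops below exp(-lam). Adequate for the small means used here."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def shot_noise_relative_sigma(mean_photons, trials=20000):
    """Monte Carlo estimate of the relative dose fluctuation (sigma/mean)
    when the absorbed photon count per resist pixel is Poisson distributed."""
    counts = [poisson_sample(mean_photons) for _ in range(trials)]
    mean = sum(counts) / trials
    var = sum((c - mean) ** 2 for c in counts) / trials
    return math.sqrt(var) / mean

# Relative shot noise follows 1/sqrt(N): quadrupling the mean photon
# count per pixel halves the relative fluctuation.
rel10 = shot_noise_relative_sigma(10.0)  # ~1/sqrt(10)
rel40 = shot_noise_relative_sigma(40.0)  # ~1/sqrt(40)
```

This 1/√N behavior is why higher resist sensitivity (fewer absorbed photons per feature) directly worsens stochastic roughness, the trade-off the abstract's quencher study probes.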
Statistical downscaling modeling with quantile regression using lasso to estimate extreme rainfall
NASA Astrophysics Data System (ADS)
Santri, Dewi; Wigena, Aji Hamim; Djuraidah, Anik
2016-02-01
Rainfall is one of the climatic elements with high diversity and has many negative impacts, especially extreme rainfall, so methods are required to minimize the damage that may occur. So far, global circulation models (GCMs) are the best tool for forecasting global climate change, including extreme rainfall. Statistical downscaling (SD) is a technique for developing the relationship between GCM output as global-scale independent variables and rainfall as a local-scale response variable. Using GCM output directly is difficult when assessed against observations because it is high dimensional and its variables are multicollinear. Common methods used to handle this problem are principal component analysis (PCA) and partial least squares regression; a newer alternative is the lasso. The lasso has the advantage of simultaneously controlling the variance of the fitted coefficients and performing automatic variable selection. Quantile regression is a method that can be used to detect extreme rainfall at both the dry and wet extremes. The objective of this study is to model SD using quantile regression with the lasso to predict extreme rainfall in Indramayu. The results showed that extreme rainfall (wet extremes in January, February and December) in Indramayu could be predicted properly by the model at the 90th quantile.
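In its standard form (our notation, not the paper's), quantile regression with a lasso penalty fits the conditional τ-quantile of local rainfall $y_i$ on GCM predictors $\mathbf{x}_i$ by minimizing the pinball (check) loss plus an L1 penalty:

```latex
\min_{\beta_0,\,\beta}\;\sum_{i=1}^{n} \rho_\tau\!\left(y_i - \beta_0 - \mathbf{x}_i^{\top}\beta\right) + \lambda \lVert \beta \rVert_1,
\qquad
\rho_\tau(u) = u\left(\tau - \mathbf{1}\{u < 0\}\right)
```

Here τ = 0.9 targets the 90th quantile used in the study, and λ controls sparsity: larger λ shrinks more GCM predictor coefficients exactly to zero, which is how the lasso performs automatic variable selection under multicollinearity.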
Development of a paper-based carbon nanotube sensing microfluidic device for biological detection.
Yang, Shih-I; Lei, Kin Fong; Tsai, Shiao-Wen; Hsu, Hsiao-Ting
2013-01-01
Carbon nanotubes (CNTs) have been utilized for biological detection due to their extreme sensitivity to biological molecules. A paper-based CNT sensing microfluidic device has been developed for the detection of protein (i.e., biotin-avidin) binding. We have developed a fabrication method that allows controlled deposition of bundled CNTs with well-defined dimensions to form sensors on paper. Polydimethylsiloxane (PDMS) was then used to pattern the hydrophobic boundary on the paper to form the reaction sites. The proposed fabrication method is based on a vacuum filtration process with a metal mask covering a filter paper to define the dimensions of the sensor. The length and width of the CNT-based sensors are readily controlled by the metal mask, and the thickness by the weight of CNT powder used during the filtration process. Homogeneous deposition of CNTs with well-defined dimensions can be achieved. The CNT-based sensor on paper has been demonstrated for the detection of protein binding. Biotin was first immobilized on the CNT sidewalls, and an avidin suspension was applied to the site. The biotin-avidin binding was measured by the resistance change of the sensor, which is a label-free detection method. The results show that CNTs are sensitive to biological molecules and that the proposed paper-based CNT sensing device is a possible candidate for point-of-care biosensors. Thus, electrical bioassays on paper-based microfluidics can be realized to develop low-cost, sensitive, and specific diagnostic devices.
Grunau, R V; Whitfield, M F; Petrie, J H
1994-09-01
High-technology medical care of extremely low-birth-weight (ELBW) infants (< 1001 g) involves repeated medical interventions which are potentially painful and may later affect reaction to pain. At 18 months corrected age, we examined parent ratings of pain sensitivity and how pain sensitivity ratings related to child temperament and parenting style in two groups of ELBW children (49 with a birth weight of 480-800 g and 75 with a birth weight of 801-1000 g) and two control groups: 42 heavier preterm children (1500-2499 g) and 29 full-birth-weight (FBW) children (> 2500 g). Both groups of ELBW toddlers were rated by parents as significantly lower in pain sensitivity compared with both control groups. The relationship between child temperament and pain sensitivity rating varied systematically across the groups. Temperament was strongly related to rated pain sensitivity in the FBW group, moderately related in the heavier preterm and ELBW 801-1000 g groups, and not related in the lowest birth-weight group (< 801 g). Parental style did not mediate ratings of pain sensitivity. The results suggest that parents perceive differences in the pain behavior of ELBW toddlers compared with heavier preterm and FBW toddlers, especially those under 801 g. Longitudinal research into the development of pain behavior in infants who experience lengthy hospitalization is warranted.
NASA Astrophysics Data System (ADS)
Liu, Meixian; Xu, Xianli; Sun, Alex
2015-07-01
Climate extremes can cause devastating damage to human society and ecosystems. Recent studies have drawn many conclusions about trends in climate extremes, but few have focused on quantitative analysis of their spatial variability and underlying mechanisms. By using the techniques of overlapping moving windows, the Mann-Kendall trend test, correlation, and stepwise regression, this study examined the spatial-temporal variation of precipitation extremes and investigated the potential key factors influencing this variation in southwestern (SW) China, a globally important biodiversity hot spot and climate-sensitive region. Results showed that the changing trends of precipitation extremes were not spatially uniform, but the spatial variability of these precipitation extremes decreased from 1959 to 2012. Further analysis found that atmospheric circulations rather than local factors (land cover, topographic conditions, etc.) were the main cause of such precipitation extremes. This study suggests that droughts or floods may become more homogenously widespread throughout SW China. Hence, region-wide assessments and coordination are needed to help mitigate the economic and ecological impacts.
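The Mann-Kendall trend test applied above is a rank-based test whose core statistic is compact enough to sketch. The following is a minimal implementation under our own naming; the tie correction to the variance is omitted for brevity, so it assumes few or no tied values.

```python
import math

def mann_kendall_z(series):
    """Mann-Kendall trend statistic Z for a 1-D series.
    S counts concordant minus discordant pairs over all i < j; under the
    null hypothesis of no trend, Z is approximately standard normal.
    (Tie correction to var(S) omitted in this sketch.)"""
    n = len(series)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            diff = series[j] - series[i]
            s += (diff > 0) - (diff < 0)  # sign of each pairwise difference
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        return (s - 1) / math.sqrt(var_s)  # continuity correction
    if s < 0:
        return (s + 1) / math.sqrt(var_s)
    return 0.0

# A steadily increasing series yields a large positive Z;
# |Z| > 1.96 corresponds to p < 0.05 (two-sided).
z_up = mann_kendall_z([1.0, 2.1, 2.9, 4.2, 5.0, 6.3, 7.1, 8.4])
```

Being rank-based, the test is robust to the skewed distributions typical of precipitation extremes, which is why it is standard in studies like this one.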
Tropical precipitation extremes: Response to SST-induced warming in aquaplanet simulations
NASA Astrophysics Data System (ADS)
Bhattacharya, Ritthik; Bordoni, Simona; Teixeira, João.
2017-04-01
Scaling of tropical precipitation extremes in response to warming is studied in aquaplanet experiments using the global Weather Research and Forecasting (WRF) model. We show how the scaling of precipitation extremes is highly sensitive to spatial and temporal averaging: while instantaneous grid-point extreme precipitation scales more strongly than the percentage increase (~7% K⁻¹) predicted by the Clausius-Clapeyron (CC) relationship, extremes of zonally and temporally averaged precipitation follow a slight sub-CC scaling, in agreement with results from Coupled Model Intercomparison Project (CMIP) models. The scaling depends crucially on the employed convection parameterization. This is particularly true when grid-point instantaneous extremes are considered. These results highlight how understanding the response of precipitation extremes to warming requires consideration of dynamic changes in addition to the thermodynamic response. Changes in grid-scale precipitation, unlike those in convective-scale precipitation, scale linearly with the resolved flow. Hence, dynamic changes include changes in both large-scale and convective-scale motions.
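The ~7% K⁻¹ Clausius-Clapeyron rate compounds over the warming amount. The sketch below illustrates the arithmetic only; the 50 mm/h baseline and the 5% K⁻¹ sub-CC rate are illustrative values of ours, not results from the study.

```python
def scaled_extreme(p0, delta_t, rate_per_k=0.07):
    """Extreme-precipitation intensity after delta_t kelvin of warming,
    compounding a fixed fractional increase per kelvin (Clausius-Clapeyron
    gives roughly 7% per K at near-surface temperatures)."""
    return p0 * (1.0 + rate_per_k) ** delta_t

# 3 K of warming at the CC rate raises a 50 mm/h extreme by ~22.5%:
p_cc = scaled_extreme(50.0, 3.0)         # ~61.3 mm/h
# An assumed sub-CC response (5% per K) grows less:
p_sub = scaled_extreme(50.0, 3.0, 0.05)  # ~57.9 mm/h
```

Super-CC scaling of instantaneous grid-point extremes, as reported above, means the effective rate exceeds 0.07 per kelvin, which the abstract attributes to dynamic contributions on top of this thermodynamic baseline.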
Barbhaiya, Medha; Dong, Yan; Sparks, Jeffrey A; Losina, Elena; Costenbader, Karen H; Katz, Jeffrey N
2017-06-19
Studies of the epidemiology and outcomes of avascular necrosis (AVN) require accurate case-finding methods. The aim of this study was to evaluate performance characteristics of a claims-based algorithm designed to identify AVN cases in administrative data. Using a centralized patient registry from a US academic medical center, we identified all adults aged ≥18 years who underwent magnetic resonance imaging (MRI) of an upper/lower extremity joint during the 1.5-year study period. A radiologist report confirming AVN on MRI served as the gold standard. We examined the sensitivity, specificity, positive predictive value (PPV) and positive likelihood ratio (LR+) of four algorithms (A-D) using International Classification of Diseases, 9th edition (ICD-9) codes for AVN. The algorithms ranged from least stringent (Algorithm A, requiring ≥1 ICD-9 code for AVN [733.4X]) to most stringent (Algorithm D, requiring ≥3 ICD-9 codes, each at least 30 days apart). Among 8200 patients who underwent MRI, 83 (1.0% [95% CI 0.78-1.22]) had AVN by gold standard. Algorithm A yielded the highest sensitivity (81.9%, 95% CI 72.0-89.5), with PPV of 66.0% (95% CI 56.0-75.1). The PPV of algorithm D increased to 82.2% (95% CI 67.9-92.0), although sensitivity decreased to 44.6% (95% CI 33.7-55.9). All four algorithms had specificities >99%. An algorithm that uses a single billing code to screen for AVN among those who had MRI has the highest sensitivity and is best suited for studies in which further medical record review confirming AVN is feasible. Algorithms using multiple billing codes are recommended for use in administrative databases when further AVN validation is not feasible.
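The performance characteristics above follow from standard 2x2-table arithmetic. A brief sketch; the counts used here are only approximately reconstructed from the reported Algorithm A percentages (an assumption for illustration, not the study's raw data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and LR+ from a 2x2 table."""
    sens = tp / (tp + fn)          # true positives / all diseased
    spec = tn / (tn + fp)          # true negatives / all non-diseased
    ppv = tp / (tp + fp)           # true positives / all test-positives
    lr_pos = sens / (1 - spec)     # positive likelihood ratio
    return sens, spec, ppv, lr_pos

# Roughly consistent with Algorithm A: 83 gold-standard cases among 8200,
# sensitivity ~81.9%, PPV ~66.0% (counts back-calculated, hence approximate)
sens, spec, ppv, lr_pos = screening_metrics(tp=68, fp=35, fn=15, tn=8082)
```

Note how a specificity above 99% still coexists with a modest PPV because AVN prevalence in the cohort is only ~1%.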
Laget, Sophie; Dhingra, Dalia M.; BenMohamed, Fatima; Capiod, Thierry; Osteras, Magne; Farinelli, Laurent; Jackson, Stephen; Paterlini-Bréchot, Patrizia
2017-01-01
Circulating Tumor Cells (CTC) and Circulating Tumor Microemboli (CTM) are Circulating Rare Cells (CRC) that herald tumor invasion and are expected to provide an opportunity to improve the management of cancer patients. An unsolved technical issue in the CTC field is how to obtain highly sensitive and unbiased collection of these fragile and heterogeneous cells, in both live and fixed form, for their molecular study when they are extremely rare, particularly at the beginning of the invasion process. We report on a new protocol to enrich live CTC from blood using ISET® (Isolation by SizE of Tumor/Trophoblastic Cells), an open system originally developed for marker-independent isolation of fixed tumor cells. We have assessed the impact of our new enrichment method on live tumor cell antigen expression, cytoskeleton structure, cell viability and ability to expand in culture. We have also explored the in vitro performance of ISET® in collecting intact fixed and live cancer cells, using spiking analyses with extremely low numbers of fluorescent cultured cells. We describe results consistently showing the feasibility of isolating fixed and live tumor cells with a Lower Limit of Detection (LLOD) of one cancer cell per 10 mL of blood and a sensitivity at LLOD ranging from 83 to 100%. This very high sensitivity threshold can be maintained when plasma is collected before tumor cell isolation. Finally, we have performed a comparative next-generation sequencing (NGS) analysis of tumor cells before and after isolation from blood and culture. We established the feasibility of NGS analysis of single live and fixed tumor cells enriched from blood by our system. This study provides new protocols for detection and characterization of CTC collected from blood at the very early steps of tumor invasion. PMID:28060956
Optimal sensitivity for molecular recognition MAC-mode AFM
Schindler; Badt; Hinterdorfer; Kienberger; Raab; Wielert-Badt; Pastushenko
2000-02-01
Molecular recognition force microscopy (MRFM) using the magnetic AC mode (MAC mode) atomic force microscope (AFM) was recently investigated to locate and probe recognition sites. A flexible crosslinker carrying a ligand is bound to the tip for the molecular recognition of receptors on the surface of a sample. In this report, the driving frequency that optimizes the sensitivity (S) is calculated. The sensitivity of MRFM is defined as the relative change of the magnetically excited cantilever deflection amplitude arising from a crosslinker/antibody/antigen connection that is characterized by a very small force constant. The sensitivity is calculated in a damped oscillator model with a given quality factor Q, which, together with the load, defines the frequency response (an unloaded oscillator shows a resonance only for Q > 0.707). If Q < 1, the greatest value of S corresponds to zero driving frequency omega (measured in units of the eigenfrequency). Therefore, for Q < 1, MAC mode has no advantage over DC mode. Two additional extremes are found at omegaL = (1 - 1/Q)^(1/2) and omegaR = (1 + 1/Q)^(1/2), with corresponding sensitivities S(L) = Q^2/(2Q - 1) and S(R) = Q^2/(2Q + 1). The L-extreme exists only for Q > 1, and then S(L) > S(R), i.e. the L-extreme is the main one. For Q > 1, S(L) > 1, and for Q > 2.41, S(R) > 1. These are the critical Q-values above which selecting a driving frequency equal to omegaL or omegaR gives MAC mode an advantage over DC mode. The satisfactory quality of the oscillator model is demonstrated by comparison of some results with those calculated within the classical description of cantilevers.
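The quoted expressions can be evaluated directly; this small sketch simply plugs a quality factor Q into the formulas above and reproduces the stated thresholds:

```python
import math

def mac_mode_extremes(Q):
    """Sensitivity extremes from the abstract for quality factor Q.

    Frequencies are in units of the cantilever eigenfrequency.
    The L-extreme exists only for Q > 1, as stated in the text.
    """
    omega_r = math.sqrt(1 + 1 / Q)
    s_r = Q ** 2 / (2 * Q + 1)
    if Q > 1:
        omega_l = math.sqrt(1 - 1 / Q)
        s_l = Q ** 2 / (2 * Q - 1)
    else:
        omega_l = s_l = None  # no L-extreme below Q = 1
    return omega_l, s_l, omega_r, s_r

# For Q = 2: S(L) = 4/3 > 1 (MAC mode wins at omegaL), S(R) = 4/5 < 1
omega_l, s_l, omega_r, s_r = mac_mode_extremes(2.0)
```

Raising Q past 2.41 pushes S(R) above 1 as well, matching the second critical value given in the abstract.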
Early Ambulation After Microsurgical Reconstruction of the Lower Extremity.
Orseck, Michael J; Smith, Christopher Robert; Kirby, Sean; Trujillo, Manuel
2018-06-01
Successful outcomes after microsurgical reconstruction of the lower extremity include timely return to ambulation. A combination of physical examination, ViOptix tissue oxygen saturation monitoring, and the implantable venous Doppler has shown promise in increasing the sensitivity of current flap monitoring. We have incorporated this combination into our postoperative monitoring protocol in an effort to initiate earlier dependency protocols. A prospective analysis of 36 anterolateral thigh free flaps and radial forearm flaps for lower extremity reconstruction was performed. Indications for reconstruction were acute and chronic wounds, as well as oncologic resection. Twenty-three patients were able to ambulate and 3 were able to dangle their leg on the first postoperative day. One flap showed early mottling that improved immediately after elevation. After reelevation and return to baseline, the dependency protocol was successfully implemented on postoperative day 3. All flaps went on to successful healing. Physical examination, implantable venous Doppler, and ViOptix can be used reliably as an adjunct to increase the sensitivity of detecting poorly performing flaps during the postoperative progression of dependency.
The extreme ultraviolet explorer
NASA Technical Reports Server (NTRS)
Bowyer, Stuart; Malina, Roger F.
1990-01-01
The Extreme Ultraviolet Explorer (EUVE) mission, currently scheduled for launch in September 1991, is described. The primary purpose of the mission is to survey the celestial sphere for astronomical sources of extreme ultraviolet (EUV) radiation. The survey will be accomplished with the use of three EUV telescopes, each sensitive to a different segment of the EUV band. A fourth telescope will perform a high-sensitivity search of a limited sample of the sky in the shortest wavelength bands. The all-sky survey will be carried out in the first six months of the mission and will be made in four bands, or colors. The second phase of the mission, conducted entirely by guest observers selected by NASA, will be devoted to spectroscopic observations of EUV sources. The performance of the instrument components is described. An end-to-end model of the mission, from a stellar source to the resulting scientific data, was constructed. Hypothetical data from astronomical sources processed through this model are shown.
Yang, Zhen; Zhi, Shaotao; Feng, Zhu; Lei, Chong; Zhou, Yong
2018-01-01
A sensitive and innovative assay system based on a micro-MEMS-fluxgate sensor and immunomagnetic bead labels was developed for the rapid analysis of C-reactive protein (CRP). The fluxgate sensor presented in this study was fabricated through standard micro-electro-mechanical system technology. A multi-loop magnetic core made of Fe-based amorphous ribbon was employed as the sensing element, and 3-D solenoid copper coils were used to control the sensing core. Antibody-conjugated immunomagnetic microbeads were strategically utilized as signal tags to label the CRP via the specific conjugation of CRP to polyclonal CRP antibodies. Separate Au film substrates were applied as immunoplatforms to immobilize the CRP-bead labels through classical sandwich assays. Detection and quantification of CRP at different concentrations were implemented by detecting the stray field of CRP-labeled magnetic beads using the newly developed micro-fluxgate sensor. The resulting system exhibited the required sensitivity, stability, reproducibility, and selectivity. A detection limit as low as 0.002 μg/mL CRP with a linearity range from 0.002 μg/mL to 10 μg/mL was achieved, suggesting that the proposed biosystem possesses high sensitivity. In addition to the extremely low detection limit, the proposed method can be easily manipulated and possesses a quick response time. The response time of our sensor was less than 5 s, and the entire detection period for CRP analysis can be completed in less than 30 min using the current method. Given the detection performance and other advantages such as miniaturization, excellent stability and specificity, the proposed biosensor can be considered as a potential candidate for the rapid analysis of CRP, especially for point-of-care platforms. PMID:29601593
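Detection limits like the 0.002 μg/mL reported above are commonly derived from a linear calibration curve plus the 3-sigma rule (LOD = 3·σ_blank / slope). A sketch of that arithmetic; all numbers below are invented for illustration, not measured fluxgate data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

conc = [0.01, 0.1, 1.0, 5.0, 10.0]          # CRP standards, ug/mL (invented)
signal = [12.0 * c + 0.5 for c in conc]     # idealized linear sensor response
slope, intercept = linear_fit(conc, signal)
sigma_blank = 0.008                         # assumed blank noise, same units
lod = 3 * sigma_blank / slope               # 3-sigma detection limit
```

With these made-up values the LOD works out to 0.002 μg/mL, matching the order of magnitude quoted in the abstract by construction.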
FLUXNET to MODIS: Connecting the dots to capture heterogeneous biosphere metabolism
NASA Astrophysics Data System (ADS)
Woods, K. D.; Schwalm, C.; Huntzinger, D. N.; Massey, R.; Poulter, B.; Kolb, T.
2015-12-01
Eddy covariance flux towers provide our most widely distributed network of direct observations for land-atmosphere carbon exchange. Carbon flux sensitivity analysis is a method that uses in situ networks to understand how ecosystems respond to changes in climatic variables. Flux towers concurrently observe key ecosystem metabolic processes (e.g., gross primary productivity) and micrometeorological variation, but only over small footprints. Remotely sensed vegetation indices from MODIS offer continuous observations of the vegetated land surface, but are less direct, as they are based on light use efficiency algorithms rather than ground-based observations. The marriage of these two data products offers an opportunity to validate remotely sensed indices with in situ observations and translate information derived from tower sites to globally gridded products. Here we provide correlations of Enhanced Vegetation Index (EVI), Leaf Area Index (LAI) and MODIS gross primary production with FLUXNET-derived estimates of gross primary production, respiration and net ecosystem exchange. We demonstrate remotely sensed vegetation products that have been transformed to gridded estimates of terrestrial biosphere metabolism on a regional-to-global scale. We demonstrate anomalies in gross primary production, respiration, and net ecosystem exchange as predicted by both MODIS-carbon flux sensitivities and meteorological driver-carbon flux sensitivities. We apply these sensitivities to recent extreme climatic events and demonstrate both our ability to capture changes in biosphere metabolism, and differences in the calculation of carbon flux anomalies based on method. The quantification of co-variation in these two methods of observation is important as it informs both how remotely sensed vegetation indices are correlated with ground-based tower observations, and with what certainty we can expand these observations and relationships.
Economic evidence on the health impacts of climate change in Europe.
Hutton, Guy; Menne, Bettina
2014-01-01
In responding to the health impacts of climate change, economic evidence and tools inform decision makers of the efficiency of alternative health policies and interventions. In a time when sweeping budget cuts are affecting all tiers of government, economic evidence on health protection from climate change spending enables comparison with other public spending. The review included 53 countries of the World Health Organization (WHO) European Region. Literature was obtained using a Medline and Internet search of key terms in published reports and peer-reviewed literature, and from institutions working on health and climate change. Articles were included if they provided economic estimation of the health impacts of climate change or adaptation measures to protect health from climate change in the WHO European Region. Economic studies are classified under health impact cost, health adaptation cost, and health economic evaluation (comparing both costs and impacts). A total of 40 relevant studies from Europe were identified, covering the health damage or adaptation costs related to the health effects of climate change and response measures to climate-sensitive diseases. No economic evaluation studies were identified of response measures specific to the impacts of climate change. Existing studies vary in terms of the economic outcomes measured and the methods for evaluation of health benefits. The lack of robust health impact data underlying economic studies significantly affects the availability and precision of economic studies. Economic evidence in European countries on the costs of and response to climate-sensitive diseases is extremely limited and fragmented. Further studies are urgently needed that examine health impacts and the costs and efficiency of alternative responses to climate-sensitive health conditions, in particular extreme weather events (other than heat) and potential emerging diseases and other conditions threatening Europe.
DNA-labeled clay: A sensitive new method for tracing particle transport
Mahler, B.J.; Winkler, M.; Bennett, P.; Hillis, D.M.
1998-01-01
The behavior of mobile colloids and sediment in most natural environments remains poorly understood, in part because characteristics of existing sediment tracers limit their widespread use. Here we describe the development of a new approach that uses a DNA-labeled montmorillonite clay as a highly sensitive and selective sediment tracer that can potentially characterize sediment and colloid transport in a wide variety of environments, including marine, wetland, ground-water, and atmospheric systems. Characteristics of DNA in natural systems render it unsuitable as an aqueous tracer but admirably suited as a label for tracing particulates. The DNA-labeled-clay approach, using techniques developed from molecular biology, has extremely low detection limits, very specific detection, and a virtually infinite number of tracer signatures. Furthermore, DNA-labeled clay has the same physical characteristics as the particles it is designed to trace, it is environmentally benign, and it can be relatively inexpensively produced and detected. Our initial results show that short (500 base pair) strands of synthetically produced DNA reversibly adsorb to both Na-montmorillonite and powdered silica surfaces via a magnesium bridge. The DNA-montmorillonite surface complexes are stable in calcium-bicarbonate spring waters for periods of up to 18 days and only slowly desorb to the aqueous phase, whereas the silica surface complex is stable only in distilled water. Both materials readily release the adsorbed DNA in dilute EDTA solutions for amplification by the polymerase chain reaction (PCR) and quantification. The stability of the DNA-labeled clay complex suggests that this material would be appropriate for use as an extremely sensitive sediment tracer for flow periods of as long as 2 weeks, and possibly longer.
Boulyga, Sergei F; Heumann, Klaus G
2006-01-01
A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed that allows the measurement of ²³⁶U at concentrations down to 3 × 10⁻¹⁴ g g⁻¹ and of extremely low ²³⁶U/²³⁸U isotope ratios, down to 10⁻⁷, in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5,000 counts fg⁻¹ uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions (UH⁺/U⁺) to a level of 10⁻⁶. An abundance sensitivity of 3 × 10⁻⁷ was observed for ²³⁶U/²³⁸U isotope ratio measurements at mass resolution 4000. The detection limit for ²³⁶U and the lowest detectable ²³⁶U/²³⁸U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values for alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the ²³⁶U/²³⁸U isotope ratio is a much more sensitive and accurate marker of environmental contamination by spent uranium than the ²³⁵U/²³⁸U isotope ratio. The ICP-MS technique allowed for the first time detection of irradiated uranium in soil samples even at distances of more than 200 km north of the Chernobyl NPP (Mogilev region). The concentration of ²³⁶U in the upper 0-10 cm soil layers varied from 2 × 10⁻⁹ g g⁻¹ within radioactive spots close to the Chernobyl NPP to 3 × 10⁻¹³ g g⁻¹ at a sampling site located >200 km from Chernobyl.
Musil, Stanislav; Matoušek, Tomáš; Currier, Jenna M; Stýblo, Miroslav; Dědina, Jiří
2014-10-21
This work describes a method of selective hydride generation-cryotrapping (HG-CT) coupled to an extremely sensitive but simple in-house designed and assembled atomic fluorescence spectrometry (AFS) instrument for the determination of toxicologically important As species. Here, an advanced flame-in-gas-shield atomizer (FIGS) was interfaced to HG-CT and its performance was compared to a standard miniature diffusion flame (MDF) atomizer. A significant improvement in both sensitivity and baseline noise was found, reflected in fourfold improved limits of detection (LODs). The LODs with the FIGS atomizer were 0.44, 0.74, 0.15, 0.17 and 0.67 ng L⁻¹ for arsenite, total inorganic, mono- and dimethylated As, and trimethylarsine oxide, respectively. Moreover, the sensitivities with FIGS and MDF were equal for all As species, allowing for the possibility of single-species standardization with an arsenate standard for accurate quantification of all other As species. The accuracy of HG-CT-AFS with FIGS was verified by speciation analysis in two samples of bottled drinking water and in certified reference materials, NRC CASS-5 (nearshore seawater) and SLRS-5 (river water), which contain traces of methylated As species. As speciation was in agreement with previously reported results, and the sums of all quantified species corresponded with the certified total As. The feasibility of HG-CT-AFS with FIGS was also demonstrated by speciation analysis in microsamples of exfoliated bladder epithelial cells isolated from human urine. The results for the sums of trivalent and pentavalent As species corresponded well with the reference results obtained by HG-CT-ICPMS (inductively coupled plasma mass spectrometry).
Maclachlan, Liam; White, Steven G; Reid, Duncan
2015-08-01
Functional assessments are conducted in both clinical and athletic settings in an attempt to identify those individuals who exhibit movement patterns that may increase their risk of non-contact injury. In place of highly sophisticated three-dimensional motion analysis, functional testing can be completed through observation. To evaluate the validity of movement observation assessments by summarizing the results of articles comparing human observation in real-time or video play-back and three-dimensional motion analysis of lower extremity kinematics during functional screening tests. Systematic review. A computerized systematic search was conducted through Medline, SPORTDiscus, Scopus, CINAHL, and Cochrane health databases between February and April of 2014. Validity studies comparing human observation (real-time or video play-back) to three-dimensional motion analysis of functional tasks were selected. Only studies comprising uninjured, healthy subjects conducting lower extremity functional assessments were appropriate for review. Eligible observers were certified health practitioners or qualified members of sports and athletic training teams that conduct athlete screening. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) was used to appraise the literature. Results are presented in terms of functional tasks. Six studies met the inclusion criteria. Across these studies, two-legged squats, single-leg squats, drop-jumps, and running and cutting manoeuvres were the functional tasks analysed. When compared to three-dimensional motion analysis, observer ratings of lower extremity kinematics, such as knee position in relation to the foot, demonstrated mixed results. Single-leg squats achieved target sensitivity values (≥ 80%) but not specificity values (≥ 50%). Drop-jump task agreement ranged from poor (< 50%) to excellent (> 80%). Two-legged squats achieved 88% sensitivity and 85% specificity. 
Mean underestimations as large as 198 (peak knee flexion) were found in the results of those assessing running and side-step cutting manoeuvres. Variables such as the speed of movement, the methods of rating, the profiles of participants and the experience levels of observers may have influenced the outcomes of functional testing. The small number of studies limits generalizability. Furthermore, this review used two-dimensional video playback for the majority of observations; if the movements had been rated in real time, the results may have been different. Slower, speed-controlled movements using dichotomous ratings reach target sensitivity and demonstrate higher overall levels of agreement. As a result, their utilization in functional screening is advocated. Level of evidence: 1A.
USDA-ARS?s Scientific Manuscript database
Most insects have evolved highly sensitive olfactory systems which respond to odors in their environment. The extremely sensitive nature of the insect olfaction system is enhanced by the ability to learn to associate external stimuli with resources, such as food, hosts, and mates. There have been a ...
Microsurgery within reconstructive surgery of extremities.
Pheradze, I; Pheradze, T; Tsilosani, G; Goginashvili, Z; Mosiava, T
2006-05-01
Reconstructive surgery of the extremities is an object of special attention for surgeons. Vessel and nerve damage and deficiency of soft tissue and bone, associated with infection, result in complete loss of extremity function and raise the question of amputation. The goal of the study was to evaluate the role of microsurgery in reconstructive surgery of limbs. We operated on 294 patients with various diseases and injuries of the extremities: pathology of nerves, vessels, and tissue loss. An original method of treatment of large simultaneous functional defects of limbs was used. Good functional and aesthetic results were obtained. Results of reconstructive operations on extremities can be improved by the use of microsurgical methods. Microsurgery is deemed the method of choice for reconstructive surgery of the extremities, as the outcomes achieved through microsurgical technique significantly surpass those obtained through routine surgical methods.
NASA Astrophysics Data System (ADS)
Hoover, D. L.; Wilcox, K.; Young, K. E.
2017-12-01
Droughts are projected to increase in frequency and intensity with climate change, which may have dramatic and prolonged effects on ecosystem structure and function. There are currently hundreds of published, ongoing, and new drought experiments worldwide aimed to assess ecosystem sensitivities to drought and identify the mechanisms governing ecological resistance and resilience. However, to date, the results from these experiments have varied widely, and thus patterns of drought sensitivities have been difficult to discern. This lack of consensus at the field scale limits the ability of experiments to help improve land surface models, which often fail to realistically simulate ecological responses to extreme events. This is unfortunate because models offer an alternative, yet complementary approach to increase the spatial and temporal assessment of ecological sensitivities to drought that are not possible in the field due to logistical and financial constraints. Here we examined 89 published drought experiments, along with their associated historical precipitation records, to (1) identify where and how drought experiments have been imposed, (2) determine the extremity of drought treatments in the context of historical climate, and (3) assess the influence of precipitation variability on drought experiments. We found an overall bias in drought experiments towards short-term, extreme experiments in water-limited ecosystems. When placed in the context of local historical precipitation, most experimental droughts were extreme, with 61% below the 5th and 43% below the 1st percentile. Furthermore, we found that interannual precipitation variability had a large and potentially underappreciated effect on drought experiments due to the co-varying nature of control and drought treatments. 
Thus, detecting ecological effects in experimental droughts is strongly influenced by the interaction between drought-treatment magnitude, precipitation variability, and key physiological thresholds. The results of this study have important implications for the design and interpretation of drought experiments, as well as for integrating field results with land surface models.
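The percentile comparison described above (a treatment-year total placed against the local historical record) can be sketched as follows; the precipitation values are invented for illustration, not drawn from the 89 reviewed experiments:

```python
def empirical_percentile(value, record):
    """Percent of historical years strictly below `value` -- the basis
    for calling a treatment 'extreme' (e.g. below the 5th percentile)."""
    return 100.0 * sum(1 for r in record if r < value) / len(record)

# Invented 20-year annual precipitation record (mm)
historical = [380, 420, 355, 462, 401, 390, 510, 345, 430, 415,
              398, 370, 488, 405, 360, 442, 395, 410, 385, 450]
mean_ppt = sum(historical) / len(historical)
treatment = 0.34 * mean_ppt                 # a 66% rainfall reduction
pct = empirical_percentile(treatment, historical)  # -> 0.0, below all years
```

A treatment this severe falls below every year in the synthetic record, i.e. well under the 1st percentile, mirroring the paper's finding that most imposed droughts are historically extreme.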
High Resolution Hydro-climatological Projections for Western Canada
NASA Astrophysics Data System (ADS)
Erler, Andre Richard
Accurate identification of the impact of global warming on water resources and hydro-climatic extremes represents a significant challenge to the understanding of climate change on the regional scale. Here an analysis of hydro-climatic changes in western Canada is presented, with specific focus on the Fraser and Athabasca River basins and on changes in hydro-climatic extremes. The analysis is based on a suite of simulations designed to characterize internal variability, as well as model uncertainty. A small ensemble of Community Earth System Model version 1 (CESM1) simulations was employed to generate global climate projections, which were downscaled to 10 km resolution using the Weather Research and Forecasting model (WRF V3.4.1) with several sets of physical parameterizations. Downscaling was performed for a historical validation period and a mid- and end-21st-century projection period, using the RCP8.5 greenhouse gas trajectory. Daily station observations and monthly gridded datasets were used for validation. Changes in hydro-climatic extremes are characterized using extreme value analysis. A novel method of aggregating data from climatologically similar stations was employed to increase the statistical power of the analysis. Changes in mean and extreme precipitation are found to differ strongly between seasons and regions, but (relative) changes in extremes generally follow changes in the (seasonal) mean. At the end of the 21st century, precipitation and precipitation extremes are projected to increase by 30% at the coast in fall and farther inland in winter, while the projected increase in summer precipitation is smaller and changes in extremes are often not statistically significant. Reasons for the differences between seasons, the role of precipitation recycling in atmospheric water transport, and the sensitivity to physics parameterizations are discussed. 
Major changes are projected for the Fraser River basin, including earlier snowmelt and a 50% reduction in peak runoff. Combined with higher evapotranspiration, a significant increase in late summer drought risk is likely, but increasing fall precipitation might also increase the risk of moderate flooding. In the Athabasca River basin, increasing winter precipitation and snowmelt is balanced by increasing evapotranspiration in summer and no significant change in flood or drought risk is projected.
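The extreme value analysis mentioned above can be illustrated with a simple Gumbel (EV1) return-level estimate. This method-of-moments sketch with invented annual maxima is a stand-in for, not a reproduction of, the thesis's actual GEV fitting and station-aggregation procedure:

```python
import math

def gumbel_return_level(data, T):
    """T-year return level from a Gumbel fit by method of moments.

    mu and beta are the location and scale parameters; the return level
    is the value exceeded on average once every T years.
    """
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi          # scale from variance
    mu = mean - 0.5772 * beta                    # location (Euler-Mascheroni)
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Invented annual-maximum daily precipitation series (mm)
annual_max = [42.0, 55.0, 38.0, 61.0, 47.0, 70.0, 52.0, 44.0, 58.0, 49.0]
rl20 = gumbel_return_level(annual_max, 20.0)     # 20-year return level
```

Aggregating data from climatologically similar stations, as the thesis does, effectively lengthens `data` and tightens the confidence interval on such return levels.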
On the Performance of Carbon Nanotubes in Extreme Conditions and in the Presence of Microwaves
2013-01-01
[...] been considered for use as transparent conductors include: transparent conducting oxides (TCOs), intrinsically conducting polymers (ICPs), graphene [...] optical transmission properties, but are extremely sensitive to environmental conditions (such as temperature and humidity). Graphene has recently [...] during the dicing procedure, silver paint was applied to the sample to serve as improvised contact/probe-landing points. Figure 1 shows the CNT thin [...]
Weather based risks and insurances for crop production in Belgium
NASA Astrophysics Data System (ADS)
Gobin, Anne
2014-05-01
Extreme weather events such as late frosts, droughts, heat waves and rain storms can have devastating effects on cropping systems. Damages due to extreme events are strongly dependent on crop type, crop stage, soil type and soil conditions. The perspective of rising risk-exposure is exacerbated further by limited aid received for agricultural damage, an overall reduction of direct income support to farmers and projected intensification of weather extremes with climate change. According to both the agriculture and finance sectors, a risk assessment of extreme weather events and their impact on cropping systems is needed. The impact of extreme weather events, particularly during the sensitive periods of the farming calendar, requires a modelling approach to capture the mixture of non-linear interactions between the crop, its environment and the occurrence of the meteorological event. The risk of soil moisture deficit increases towards harvesting, such that drought stress occurs in spring and summer. Conversely, waterlogging occurs mostly during early spring and autumn. Risks of temperature stress appear during winter and spring for chilling and during summer for heat. Since crop development is driven by thermal time and photoperiod, the regional crop model REGCROP (Gobin, 2010) made it possible to examine the likely frequency, magnitude and impacts of frost, drought, heat stress and waterlogging in relation to the cropping season and sensitive crop stages. The risk profiles were subsequently compared with yields, yield losses and insurance claims for different crops. Physically based crop models such as REGCROP assist in understanding the links between different factors causing crop damage, as demonstrated for cropping systems in Belgium. Extreme weather events have already precipitated contraction of insurance coverage in some markets (e.g. hail insurance), and the process can be expected to continue if the losses or damages from such events increase in the future. 
Climate change will stress this further and impacts on crop growth are expected to be twofold, owing to the sensitive stages occurring earlier during the growing season and to the changes in return period of extreme weather events. Though average yields have risen continuously due to technological advances, there is no evidence that relative tolerance to adverse weather events has improved. The research is funded by the Belgian Science Policy Organisation (Belspo) under contract nr SD/RI/03A.
Huang, Yi; Liu, Dexiang; Tang, Yukuan; Fan, Zhaoyang; Chen, Hanwei; Liu, Xin
2015-01-01
Objectives To compare the image quality and diagnostic performance of two non-contrast enhanced MR angiography (NCE-MRA) techniques using flow-sensitive dephasing (FSD) prepared steady-state free precession (SSFP) and quiescent-interval single-shot (QISS) for the calf arteries in patients with diabetes. Materials and Methods Twenty-six patients underwent the two NCE-MRA techniques followed by contrast-enhanced MRA (CE-MRA) of the lower extremity on a 1.5T MR system. Image quality scores, arterial stenosis scores, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), vessel sharpness, and diagnostic accuracy for detecting more than 50% arterial stenosis were evaluated and statistically compared using CE-MRA as the reference standard. Results All examinations were performed successfully. Of the total 153 calf arterial segments obtained in the 26 patients, FSD and QISS showed no significant difference in the number of diagnostic arterial segments (151 [98%] vs. 147 [96%], respectively, P>0.05). The image quality of FSD was higher than that of QISS in the peroneal artery and posterior tibial artery (P<0.05), but there was no significant difference in the anterior tibial artery (P>0.05). SNR and CNR of FSD were higher than those of QISS (P<0.01), while FSD showed vessel sharpness comparable to that of QISS (P>0.05). The time efficiency of SNR and CNR between FSD and QISS showed no significant difference when taking into account the times for FSD-related scout scans. There was no difference in sensitivity (95% vs. 93%, P>0.05) and negative predictive value (98% vs. 97%, P>0.05) between FSD and QISS for detecting stenosis greater than 50%. However, FSD showed higher specificity (99% vs. 92%, P<0.05) and diagnostic accuracy (98% vs. 92%, P<0.05) compared to QISS.
Conclusion Both FSD and QISS had similarly high sensitivity and negative predictive value for detecting calf arteries with over 50% stenosis, but FSD showed slightly higher diagnostic specificity and better depiction of arterial lesions due to its isotropic submillimeter spatial resolution. QISS, being an easier-to-use and less time-consuming technique, could be a method of choice for rapid screening of arterial disease of the lower extremity. PMID:26035645
Using Blood Indexes to Predict Overweight Statuses: An Extreme Learning Machine-Based Approach
Chen, Huiling; Yang, Bo; Liu, Dayou; Liu, Wenbin; Liu, Yanlong; Zhang, Xiuhua; Hu, Lufeng
2015-01-01
The number of overweight people continues to rise across the world. Studies have shown that being overweight can increase health risks, such as high blood pressure, diabetes mellitus, coronary heart disease, and certain forms of cancer. Therefore, identifying the overweight status in people is critical to prevent and decrease health risks. This study explores a new technique that uses blood and biochemical measurements to recognize the overweight condition. A new machine learning technique, an extreme learning machine, was developed to accurately detect the overweight status from a pool of 225 overweight and 251 healthy subjects. The group included 179 males and 297 females. The detection method was rigorously evaluated against the real-life dataset for accuracy, sensitivity, specificity, and AUC (area under the receiver operating characteristic (ROC) curve) criterion. Additionally, the feature selection was investigated to identify correlating factors for the overweight status. The results demonstrate that there are significant differences in blood and biochemical indexes between healthy and overweight people (p-value < 0.01). According to the feature selection, the most important correlated indexes are creatinine, hemoglobin, hematocrit, uric acid, red blood cells, high density lipoprotein, alanine transaminase, triglyceride, and γ-glutamyl transpeptidase. These are consistent with the results of Spearman test analysis. The proposed method holds promise as a new, accurate method for identifying the overweight status in subjects. PMID:26600199
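The extreme learning machine used above trains only its output layer: hidden-layer weights are drawn at random and frozen, and the output weights come from a single least-squares solve. A minimal sketch follows; the subject count and nine-feature shape mirror the abstract, but the data, labels, and hidden-layer size are simulated stand-ins, not the study's dataset or configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-in for the blood/biochemical dataset: 476 subjects, 9 indexes
X = rng.normal(size=(476, 9))
w_true = rng.normal(size=9)
y = (X @ w_true > 0).astype(float)  # 1 = overweight, 0 = healthy (synthetic labels)

def elm_train(X, y, n_hidden=50):
    """Extreme learning machine: random fixed hidden layer, least-squares output."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta > 0.5).astype(float)

W, b, beta = elm_train(X[:380], y[:380])
accuracy = (elm_predict(X[380:], W, b, beta) == y[380:]).mean()
```

Sensitivity, specificity, and AUC would be computed from the same held-out predictions; the study's feature-selection step, which ranks the blood indexes, is omitted here.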
Terahertz Science and Technology of Macroscopically Aligned Carbon Nanotube Films
NASA Astrophysics Data System (ADS)
Kono, Junichiro
One of the outstanding challenges in nanotechnology is how to assemble individual nano-objects into macroscopic architectures while preserving their extraordinary properties. For example, the one-dimensional character of electrons in individual carbon nanotubes leads to extremely anisotropic transport, optical, and magnetic phenomena, but their macroscopic manifestations have been limited. Here, we describe methods for preparing macroscopic films, sheets, and fibers of highly aligned carbon nanotubes and their applications to basic and applied terahertz studies. Sufficiently thick films act as ideal terahertz polarizers, and appropriately doped films operate as polarization-sensitive, flexible, powerless, and ultra-broadband detectors. Together with recently developed chirality enrichment methods, these developments will ultimately allow us to study dynamic conductivities of interacting one-dimensional electrons in macroscopic single crystals of single-chirality single-wall carbon nanotubes.
NASA Astrophysics Data System (ADS)
Leijala, Ulpu; Björkqvist, Jan-Victor; Johansson, Milla M.; Pellikka, Havu
2017-04-01
Future coastal management continuously strives for more location-exact and precise methods to investigate possible extreme sea level events and to face flooding hazards in the most appropriate way. Evaluating future flooding risks by understanding the behaviour of the joint effect of sea level variations and wind waves is one of the means to make more comprehensive flooding hazard analysis, and may at first seem like a straightforward task to solve. Nevertheless, challenges and limitations such as availability of time series of the sea level and wave height components, the quality of data, significant locational variability of coastal wave height, as well as assumptions to be made depending on the study location, make the task more complicated. In this study, we present a statistical method for combining location-specific probability distributions of water level variations (including local sea level observations and global mean sea level rise) and wave run-up (based on wave buoy measurements). The goal of our method is to obtain a more accurate way to account for the waves when making flooding hazard analysis on the coast compared to the approach of adding a separate fixed wave action height on top of sea-level-based flood risk estimates. As a result of our new method, we gain maximum elevation heights with different return periods of the continuous water mass caused by a combination of both phenomena, "the green water". We also introduce a sensitivity analysis to evaluate the properties and functioning of our method. The sensitivity test is based on using theoretical wave distributions representing different alternatives of wave behaviour in relation to sea level variations. As these wave distributions are merged with the sea level distribution, we get information on how the different wave height conditions and shape of the wave height distribution influence the joint results.
Our method presented here can be used as an advanced tool to minimize over- and underestimation of the combined effect of sea level variations and wind waves, and to help coastal infrastructure planning and support smooth and safe operation of coastal cities in a changing climate.
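Under an independence assumption, merging a sea-level distribution with a wave run-up distribution reduces to convolving their densities, after which return levels for the combined "green water" elevation can be read off the exceedance curve. The Gaussian and Rayleigh shapes below are placeholder stand-ins, not the study's location-specific fits, and the study's treatment of wave-sea-level dependence is not captured here.

```python
import numpy as np

z = np.linspace(-200.0, 400.0, 1201)  # height grid (cm), spacing 0.5 cm
dz = z[1] - z[0]

# Placeholder marginals: Gaussian sea-level variation, Rayleigh wave run-up
sea = np.exp(-0.5 * ((z - 30.0) / 40.0) ** 2)
sea /= sea.sum() * dz
runup = np.where(z >= 0, (z / 50.0**2) * np.exp(-0.5 * (z / 50.0) ** 2), 0.0)
runup /= runup.sum() * dz

# PDF of the summed elevation: discrete convolution of the two marginals
total = np.convolve(sea, runup) * dz
zt = 2 * z[0] + np.arange(total.size) * dz  # grid for the summed variable

# Height exceeded with 1% probability (a crude stand-in for a return level)
exceedance = 1.0 - np.cumsum(total) * dz
level_1pct = zt[np.searchsorted(-exceedance, -0.01)]
```

Swapping in different theoretical run-up distributions and re-running the convolution is the essence of the sensitivity test described above.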
Benchmark solution for the Spencer-Lewis equation of electron transport theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ganapol, B.D.
As integrated circuits become smaller, the shielding of these sensitive components against penetrating electrons becomes extremely critical. Monte Carlo methods have traditionally been the method of choice in shielding evaluations primarily because they can incorporate a wide variety of relevant physical processes. Recently, however, as a result of a more accurate numerical representation of the highly forward-peaked scattering process, S_n methods for one-dimensional problems have been shown to be at least as cost-effective in comparison with Monte Carlo methods. With the development of these deterministic methods for electron transport, a need has arisen to assess the accuracy of proposed numerical algorithms and to ensure their proper coding. It is the purpose of this presentation to develop a benchmark for the Spencer-Lewis equation describing the transport of energetic electrons in solids. The solution will take advantage of the correspondence between the Spencer-Lewis equation and the transport equation describing one-group time-dependent neutron transport.
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response, making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion-limited and kinetically limited recession models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tang, Yanmei; Li, Xinli; Bai, Yan
The measurement of multiphase flow parameters is of great importance in a wide range of industries. In multiphase flow measurement, the signals from the sensors are extremely weak and often buried in strong background noise. It is thus desirable to develop effective signal processing techniques that can detect the weak signal from the sensor outputs. In this paper, two methods, i.e., the lock-in amplifier (LIA) and an improved Duffing chaotic oscillator, are compared for detecting and processing the weak signal. For a sinusoidal signal buried in noise, correlation detection with a sinusoidal reference signal is simulated using the LIA. The improved Duffing chaotic oscillator method, which is based on the Wigner transformation, can restore the signal waveform and detect the frequency. The two methods are combined to detect and extract the weak signal. Simulation results show the effectiveness and accuracy of the proposed improved method. The comparative analysis shows that the improved Duffing chaotic oscillator method can restrain noise strongly since it is sensitive to initial conditions.
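The lock-in (correlation) detection step can be sketched in a few lines: multiply the noisy record by in-phase and quadrature references at the known frequency and average, which rejects noise uncorrelated with the reference. The sampling rate, frequency, and amplitudes below are illustrative choices, not the paper's simulation parameters, and the Duffing oscillator stage is not shown.

```python
import numpy as np

rng = np.random.default_rng(5)

fs, f0 = 10_000.0, 50.0           # sampling rate and signal frequency (Hz)
t = np.arange(0.0, 5.0, 1 / fs)   # 5 s record
amp = 0.05                        # weak signal amplitude, well below the noise
x = amp * np.sin(2 * np.pi * f0 * t) + rng.normal(scale=0.5, size=t.size)

# Lock-in detection: correlate with quadrature references and average;
# the factor 2 undoes the 1/2 from averaging sin^2 (or cos^2)
i_comp = 2 * np.mean(x * np.sin(2 * np.pi * f0 * t))
q_comp = 2 * np.mean(x * np.cos(2 * np.pi * f0 * t))
recovered = np.hypot(i_comp, q_comp)  # estimated signal amplitude
```

Longer averaging windows narrow the effective bandwidth and pull the recovered amplitude closer to the true value, which is the LIA's core trade-off between sensitivity and response time.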
Asymmetrical Pedaling Patterns in Parkinson's Disease Patients
Penko, Amanda L.; Hirsch, Joshua R.; Voelcker-Rehage, Claudia; Martin, Philip E.; Blackburn, Gordon; Alberts, Jay L.
2015-01-01
Background Approximately 1.5 million Americans are affected by Parkinson's disease [1], which includes the symptoms of postural instability and gait dysfunction. Currently, clinical evaluations of postural instability and gait dysfunction consist of a subjective rater assessment of gait patterns using items from the Unified Parkinson's Disease Rating Scale, and assessments can be insensitive to the effectiveness of medical interventions. Current research suggests the importance of cycling for Parkinson's disease patients, and while Parkinson's gait has been evaluated in previous studies, little is known about lower extremity control during cycling. The purpose of this study is to examine the lower extremity coordination patterns of Parkinson's patients during cycling. Methods Twenty-five participants, ages 44-72, with a clinical diagnosis of idiopathic Parkinson's disease participated in an exercise test on a cycle ergometer that was equipped with pedal force measurements. Crank torque, crank angle and power produced by the right and left legs were measured throughout the test to calculate the Symmetry Index at three stages of exercise (20 Watt, 60 Watt, maximum performance). Findings Decreases in Symmetry Index were observed for average power output in Parkinson's patients as workload increased. Maximum power Symmetry Index showed a significant difference in symmetry between performance at both the 20 Watt and 60 Watt stages and the maximal resistance stage. Minimum power Symmetry Index did not show significant differences across the stages of the test. While lower extremity asymmetries were present in Parkinson's patients during pedaling, these asymmetries did not correlate with the postural instability and gait dysfunction scores of the Unified Parkinson's Disease Rating Scale.
Interpretation This pedaling analysis allows for a more sensitive measure of lower extremity function than the Unified Parkinson's Disease Rating Scale and may help to provide unique insight into current and future lower extremity function. PMID:25467810
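A Symmetry Index of the kind used above is commonly defined as the right-left difference normalized by the two-leg mean; the abstract does not give its exact formula, so both the formulation and the per-leg power values below are illustrative assumptions.

```python
def symmetry_index(right, left):
    """Symmetry Index (%): 0 = perfect symmetry; sign indicates the stronger limb.
    A common formulation; the study's exact definition may differ."""
    return 100.0 * (right - left) / (0.5 * (right + left))

# Hypothetical per-leg average power (W) during the 60 W stage
si = symmetry_index(34.0, 26.0)
```

Tracking this index across the 20 W, 60 W, and maximal stages reproduces the kind of workload comparison reported in the abstract.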
Understanding neuromotor strategy during functional upper extremity tasks using symbolic dynamics.
Nathan, Dominic E; Guastello, Stephen J; Prost, Robert W; Jeutter, Dean C
2012-01-01
The ability to model and quantify brain activation patterns that pertain to natural neuromotor strategy of the upper extremities during functional task performance is critical to the development of therapeutic interventions such as neuroprosthetic devices. The mechanisms of information flow, activation sequence and patterns, and the interaction between anatomical regions of the brain that are specific to movement planning, intention and execution of voluntary upper extremity motor tasks were investigated here. This paper presents a novel method using symbolic dynamics (orbital decomposition) and nonlinear dynamic tools of entropy, self-organization and chaos to describe the underlying structure of activation shifts in regions of the brain that are involved with the cognitive aspects of functional upper extremity task performance. Several questions were addressed: (a) How is it possible to distinguish deterministic or causal patterns of activity in brain fMRI from those that are really random or non-contributory to the neuromotor control process? (b) Can the complexity of activation patterns over time be quantified? (c) What are the optimal ways of organizing fMRI data to preserve patterns of activation, activation levels, and extract meaningful temporal patterns as they evolve over time? Analysis was performed using data from a custom developed time resolved fMRI paradigm involving human subjects (N=18) who performed functional upper extremity motor tasks with varying time delays between the onset of intention and onset of actual movements. The results indicate that there is structure in the data that can be quantified through entropy and dimensional complexity metrics and statistical inference, and furthermore, orbital decomposition is sensitive in capturing the transition of states that correlate with the cognitive aspects of functional task performance.
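One ingredient of the analysis above, quantifying the complexity of a symbolized activation sequence, can be sketched with Shannon entropy. The region labels and their probabilities below are hypothetical, and orbital decomposition itself additionally tracks recurrent orbits and their statistical inference, which this fragment omits.

```python
import math
from collections import Counter

import numpy as np

rng = np.random.default_rng(4)
# Hypothetical sequence of dominant-activation region labels over fMRI frames
seq = rng.choice(list("ABCD"), size=500, p=[0.4, 0.3, 0.2, 0.1])

# Shannon entropy (bits) of the symbol distribution: 0 for a constant
# sequence, log2(4) = 2 bits for a uniform one
counts = Counter(seq)
H = -sum((c / len(seq)) * math.log2(c / len(seq)) for c in counts.values())
```

Entropy near its maximum suggests activity indistinguishable from noise; structured, deterministic shifts between regions pull it lower, which is the distinction question (a) in the abstract is after.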
A High-Sensitivity Current Sensor Utilizing CrNi Wire and Microfiber Coils
Xie, Xiaodong; Li, Jie; Sun, Li-Peng; Shen, Xiang; Jin, Long; Guan, Bai-ou
2014-01-01
We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. Our detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity and/or a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications. PMID:24824372
A high-sensitivity current sensor utilizing CrNi wire and microfiber coils.
Xie, Xiaodong; Li, Jie; Sun, Li-Peng; Shen, Xiang; Jin, Long; Guan, Bai-ou
2014-05-12
We obtain an extremely high current sensitivity by wrapping a section of microfiber on a thin-diameter chromium-nickel wire. Our detected current sensitivity is as high as 220.65 nm/A² for a structure length of only 35 μm. Such sensitivity is two orders of magnitude higher than the counterparts reported in the literature. Analysis shows that a higher resistivity and/or a thinner diameter of the metal wire may produce higher sensitivity. The effects of varying the structure parameters on sensitivity are discussed. The presented structure has potential for low-current sensing or highly electrically-tunable filtering applications.
Design and Manufacturing of Extremely Low Mass Flight Systems
NASA Technical Reports Server (NTRS)
Johnson, Michael R.
2002-01-01
Extremely small flight systems pose some unusual design and manufacturing challenges. The small size of the components that make up the system generally must be built with extremely tight tolerances to maintain the functionality of the assembled item. Additionally, the total mass of the system is extremely sensitive to what would be considered small perturbations in a larger flight system. The MUSES C mission, designed, built, and operated by Japan, has a small rover provided by NASA that falls into this small flight system category. This NASA-provided rover is used as a case study of an extremely small flight system design. The issues that were encountered with the rover portion of the MUSES C program are discussed and conclusions about the recommended mass margins at different stages of a small flight system project are presented.
Co-amplification at lower denaturation temperature-PCR: methodology and applications.
Liang, Hui; Chen, Guo-Jie; Yu, Yan; Xiong, Li-Kuan
2018-03-20
Co-amplification at lower denaturation temperature-polymerase chain reaction (COLD-PCR) is a novel form of PCR that selectively denatures and amplifies low-abundance mutations from mixtures of wild-type and mutation-containing sequences, enriching the mutation 10- to 100-fold. Due to the slightly altered melting temperature (Tm) of the double-stranded DNA and the formation of the mutation/wild-type heteroduplex DNA, COLD-PCR methods are sensitive, specific, accurate, cost-effective and easy to perform, and can enrich mutations of any type and at any position, even unknown mutations within amplicons. COLD-PCR and its improved methods are now applied in studies of cancer, microorganisms, prenatal screening, animals and plants. They are extremely useful for early diagnosis, monitoring disease prognosis and treatment efficacy, drug selection, plant breeding, etc. In this review, we introduce the principles, key techniques, derived methods and applications of COLD-PCR.
Enhancement of venous drainage with vein stripper for reversed pedicled neurocutaneous flaps.
Sonmez, Erhan; Silistireli, Özlem Karataş; Karaaslan, Önder; Kamburoğlu, Haldun Onuralp; Safak, Tunc
2013-05-01
The flaps based on the vascular axis of superficial sensitive cutaneous nerves have gained increased popularity in reconstructive surgery because of such major advantages as preservation of major extremity arteries and avoidance of microsurgical procedures. However, postoperative venous congestion resulting in partial or total necrosis is still a common problem for these flaps. The aim of the current study is to introduce a new method for reducing the postoperative venous congestion of the neural island flap, with results from reconstruction of soft tissue defects of the foot and ankle. This method was used to treat 19 patients with various chronic soft tissue defects of the foot and ankle between 2011 and 2012. We observed that the novel method presented in this report enables effective venous drainage, solving the postoperative venous congestion problem of these flaps.
Process for measuring degradation of sulfur hexafluoride in high voltage systems
Sauers, Isidor
1986-01-01
This invention is a method of detecting the presence of toxic and corrosive by-products in high voltage systems produced by electrically induced degradation of SF6 insulating gas in the presence of certain impurities. It is an improvement over previous methods because it is extremely sensitive, detecting by-products present in parts per billion concentrations, and because the device employed is of a simple design and takes advantage of the by-products' natural affinity for fluoride ions. The method employs an ion-molecule reaction cell in which negative ions of the by-products are produced by fluorine attachment. These ions are admitted to a negative ion mass spectrometer and identified by their spectra. This spectrometry technique is an improvement over conventional techniques because the negative ion peaks are strong and not obscured by the major ion spectrum of the SF6 component, as is the case in positive ion mass spectrometry.
Process for measuring degradation of sulfur hexafluoride in high voltage systems
Sauers, I.
1985-04-23
This invention is a method of detecting the presence of toxic and corrosive by-products in high voltage systems produced by electrically induced degradation of SF6 insulating gas in the presence of certain impurities. It is an improvement over previous methods because it is extremely sensitive, detecting by-products present in parts per billion concentrations, and because the device employed is of a simple design and takes advantage of the by-products' natural affinity for fluoride ions. The method employs an ion-molecule reaction cell in which negative ions of the by-products are produced by fluorine attachment. These ions are admitted to a negative ion mass spectrometer and identified by their spectra. This spectrometry technique is an improvement over conventional techniques because the negative ion peaks are strong and not obscured by the major ion spectrum of the SF6 component, as is the case in positive ion mass spectrometry.
Statistical Methods for Quantifying the Variability of Solar Wind Transients of All Sizes
NASA Astrophysics Data System (ADS)
Tindale, E.; Chapman, S. C.
2016-12-01
The solar wind is inherently variable across a wide range of timescales, from small-scale turbulent fluctuations to the 11-year periodicity induced by the solar cycle. Each solar cycle is unique, and this change in overall cycle activity is coupled from the Sun to Earth via the solar wind, leading to long-term trends in space weather. Our work [Tindale & Chapman, 2016] applies novel statistical methods to solar wind transients of all sizes, to quantify the variability of the solar wind associated with the solar cycle. We use the same methods to link solar wind observations with those on the Sun and Earth. We use Wind data to construct quantile-quantile (QQ) plots comparing the statistical distributions of multiple commonly used solar wind-magnetosphere coupling parameters between the minima and maxima of solar cycles 23 and 24. We find that in each case the distribution is multicomponent, ranging from small fluctuations to extreme values, with the same functional form at all phases of the solar cycle. The change in PDF is captured by a simple change of variables, which is independent of the PDF model. Using this method we can quantify the quietness of the cycle 24 maximum, identify which variable drives the changing distribution of composite parameters such as ɛ, and we show that the distribution of ɛ is less sensitive to changes in its extreme values than that of its constituents. After demonstrating the QQ method on solar wind data, we extend the analysis to include solar and magnetospheric data spanning the same time period. We focus on GOES X-ray flux and WDC AE index data. Finally, having studied the statistics of transients across the full distribution, we apply the same method to time series of extreme bursts in each variable. Using these statistical tools, we aim to track the solar cycle-driven variability from the Sun through the solar wind and into the Earth's magnetosphere. Tindale, E. and S.C. Chapman (2016), Geophys. Res. 
Lett., 43(11), doi: 10.1002/2016GL068920.
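The quantile-quantile construction described above can be sketched as follows: compute matched quantiles of the two samples and fit a straight line; a good linear fit means the two PDFs share a functional form and differ only by the fitted change of variables. The samples below are synthetic lognormals built to satisfy exactly such a relation, not Wind data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for a coupling parameter at solar minimum and maximum:
# same functional form, related by the change of variables x -> 2.5 x + 0.3
x_min = rng.lognormal(mean=0.0, sigma=1.0, size=5000)
x_max = 2.5 * rng.lognormal(mean=0.0, sigma=1.0, size=5000) + 0.3

q = np.linspace(0.01, 0.99, 99)
q_min = np.quantile(x_min, q)
q_max = np.quantile(x_max, q)

# A straight QQ line recovers the change of variables (slope a, offset b)
a, b = np.polyfit(q_min, q_max, 1)
```

Because the change of variables is estimated without assuming any PDF model, the same comparison works for composite parameters such as ɛ and for solar or magnetospheric indices.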
Sensitivity assessment of freshwater macroinvertebrates to pesticides using biological traits.
Ippolito, A; Todeschini, R; Vighi, M
2012-03-01
Assessing the sensitivity of different species to chemicals is one of the key points in predicting the effects of toxic compounds in the environment. Trait-based predicting methods have proved to be extremely efficient for assessing the sensitivity of macroinvertebrates toward compounds with non specific toxicity (narcotics). Nevertheless, predicting the sensitivity of organisms toward compounds with specific toxicity is much more complex, since it depends on the mode of action of the chemical. The aim of this work was to predict the sensitivity of several freshwater macroinvertebrates toward three classes of plant protection products: organophosphates, carbamates and pyrethroids. Two databases were built: one with sensitivity data (retrieved, evaluated and selected from the U.S. Environmental Protection Agency ECOTOX database) and the other with biological traits. Aside from the "traditional" traits usually considered in ecological analysis (i.e. body size, respiration technique, feeding habits, etc.), multivariate analysis was used to relate the sensitivity of organisms to some other characteristics which may be involved in the process of intoxication. Results confirmed that, besides traditional biological traits, related to uptake capability (e.g. body size and body shape) some traits more related to particular metabolic characteristics or patterns have a good predictive capacity on the sensitivity to these kinds of toxic substances. For example, behavioral complexity, assumed as an indicator of nervous system complexity, proved to be an important predictor of sensitivity towards these compounds. These results confirm the need for more complex traits to predict effects of highly specific substances. One key point for achieving a complete mechanistic understanding of the process is the choice of traits, whose role in the discrimination of sensitivity should be clearly interpretable, and not only statistically significant.
NASA Astrophysics Data System (ADS)
Guo, Enliang; Zhang, Jiquan; Si, Ha; Dong, Zhenhua; Cao, Tiehua; Lan, Wu
2017-10-01
Environmental changes have brought about significant changes and challenges to water resources and management in the world; these include increasing climate variability, land use change, intensive agriculture, rapid urbanization and industrial development, and especially much more frequent extreme precipitation events. All of these greatly affect water resources and socioeconomic development. In this study, we take extreme precipitation events in the Midwest of Jilin Province as an example; daily precipitation data during 1960-2014 are used. The threshold of extreme precipitation events is defined by the multifractal detrended fluctuation analysis (MF-DFA) method. Extreme precipitation (EP), extreme precipitation ratio (EPR), and intensity of extreme precipitation (EPI) are selected as the extreme precipitation indicators, and then the Kolmogorov-Smirnov (K-S) test is employed to determine the optimal probability distribution function of the extreme precipitation indicators. On this basis, a nonparametric copula estimation method together with the Akaike Information Criterion (AIC) is adopted to determine the bivariate copula function. Finally, we analyze the characteristics of the single-variable extremes and the bivariate joint probability distribution of the extreme precipitation events. The results show that the threshold of extreme precipitation events in semi-arid areas is far less than that in subhumid areas. The extreme precipitation frequency shows a significant decline while the extreme precipitation intensity shows a trend of growth; there are significant spatiotemporal differences in extreme precipitation events. The spatial variation trend of the joint return period gets shorter from the west to the east. The spatial distribution of the co-occurrence return period shows the opposite trend and is longer than the joint return period.
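Two steps of the workflow above, selecting a probability distribution by Kolmogorov-Smirnov distance and reading off a univariate return level, can be sketched with moment fits and a hand-rolled K-S statistic. The data are synthetic, the candidate set is reduced to two distributions for brevity, and the MF-DFA thresholding and copula stages are not shown.

```python
import math
import statistics

import numpy as np

rng = np.random.default_rng(2)
# Synthetic annual-maximum precipitation series (mm); not the Jilin record
ep = 40.0 + 15.0 * rng.gumbel(size=200)

def ks_stat(data, cdf):
    """Kolmogorov-Smirnov distance between a sample and a fitted CDF."""
    x = np.sort(data)
    n = len(x)
    F = np.array([cdf(float(v)) for v in x])
    return max(np.max(np.arange(1, n + 1) / n - F), np.max(F - np.arange(n) / n))

m, s = float(ep.mean()), float(ep.std(ddof=1))
gamma_e = 0.5772156649  # Euler-Mascheroni constant

# Candidate fits by the method of moments: (cdf, ppf) pairs
beta = s * math.sqrt(6) / math.pi          # Gumbel scale
mu = m - gamma_e * beta                    # Gumbel location
candidates = {
    "gumbel": (lambda v: math.exp(-math.exp(-(v - mu) / beta)),
               lambda p: mu - beta * math.log(-math.log(p))),
    "normal": (statistics.NormalDist(m, s).cdf,
               statistics.NormalDist(m, s).inv_cdf),
}

best = min(candidates, key=lambda name: ks_stat(ep, candidates[name][0]))
rl_50 = candidates[best][1](1 - 1 / 50)    # 50-year return level
```

The study's joint and co-occurrence return periods then come from plugging the selected marginals into the fitted bivariate copula.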
Ultra-low power operation of self-heated, suspended carbon nanotube gas sensors
NASA Astrophysics Data System (ADS)
Chikkadi, Kiran; Muoth, Matthias; Maiwald, Verena; Roman, Cosmin; Hierold, Christofer
2013-11-01
We present a suspended carbon nanotube gas sensor that senses NO2 at ambient temperature and recovers from gas exposure at an extremely low power of 2.9 μW by exploiting the self-heating effect for accelerated gas desorption. The recovery time of 10 min is two orders of magnitude faster than non-heated recovery at ambient temperature. This overcomes an important bottleneck for the practical application of carbon nanotube gas sensors. Furthermore, the method is easy to implement in sensor systems and requires no additional components, paving the way for ultra-low power, compact, and highly sensitive gas sensors.
A Sludge Drum in the APNea System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hensley, D.
1998-11-17
The assay of sludge drums pushes the APNea System to a definite extreme. Even though it seems clear that neutron-based assay should be the method of choice for sludge drums, the difficulties posed by this matrix push any NDA technique to its limits. Special emphasis is given here to the differential die-away technique, which appears to approach the desired sensitivity. A parallel analysis of ethafoam drums will be presented, since the ethafoam matrix fits well within the operating range of the APNea System, and, having been part of the early PDP trials, has been assayed by many in the NDA community.
An experimental investigation of Iosipescu specimen for composite materials
NASA Technical Reports Server (NTRS)
Ho, H.; Tsai, M. Y.; Morton, J.; Farley, G. L.
1991-01-01
A detailed experimental evaluation of the Iosipescu specimen tested in the modified Wyoming fixture is presented. Moire interferometry is employed to determine the deformation of unidirectional and cross-ply graphite-epoxy specimens. The results of the moire experiments are compared to those from the traditional strain-gage method. It is shown that the strain-gage readings from one surface of a specimen together with corresponding data from moire interferometry on the opposite face documented an extreme sensitivity of some fiber orientations to twisting. A localized hybrid analysis is introduced to perform efficient reduction of moire data, producing whole-field strain distributions in the specimen test sections.
An efficient surrogate-based simulation-optimization method for calibrating a regional MODFLOW model
NASA Astrophysics Data System (ADS)
Chen, Mingjie; Izady, Azizallah; Abdalla, Osman A.
2017-01-01
Simulation-optimization entails a large number of model simulations, which is computationally intensive or even prohibitive if each model simulation is extremely time-consuming. Statistical models have been examined as surrogates of the high-fidelity physical model during the simulation-optimization process to tackle this problem. Among them, Multivariate Adaptive Regression Splines (MARS), a non-parametric adaptive regression method, is superior in overcoming problems of high dimensionality and discontinuities in the data. Furthermore, the stability and accuracy of a MARS model can be improved by bootstrap aggregating, namely bagging. In this paper, the bagging MARS (BMARS) method is integrated into a surrogate-based simulation-optimization framework to calibrate a three-dimensional MODFLOW model, which is developed to simulate groundwater flow in an arid hardrock-alluvium region in northwestern Oman. The physical MODFLOW model is surrogated by a statistical model developed with the BMARS algorithm. The surrogate model, fitted and validated using a training dataset generated by the physical model, can approximate solutions rapidly. An efficient Sobol' method is employed to calculate global sensitivities of head outputs to input parameters, which are used to analyze their spatiotemporal importance for the model outputs. Only sensitive parameters are included in the calibration process to further improve computational efficiency. The normalized root mean square error (NRMSE) between measured and simulated heads at observation wells is used as the objective function to be minimized during optimization. The reasonable history match between simulated and observed heads demonstrates the feasibility of this highly efficient calibration framework.
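The NRMSE objective described above can be sketched in a few lines; normalizing the RMSE by the observed range is one common convention, assumed here since the abstract does not state which normalization was used:

```python
import numpy as np

def nrmse(observed, simulated):
    """Normalized root mean square error between observed and simulated
    heads. Normalization by the observed range is one common convention;
    the underlying study may use a different one."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    rmse = np.sqrt(np.mean((simulated - observed) ** 2))
    return rmse / (observed.max() - observed.min())
```

An optimizer would minimize this value over the sensitive parameters retained after the Sobol' screening.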
Observation of Anderson localization in disordered nanophotonic structures
NASA Astrophysics Data System (ADS)
Sheinfux, Hanan Herzig; Lumer, Yaakov; Ankonina, Guy; Genack, Azriel Z.; Bartal, Guy; Segev, Mordechai
2017-06-01
Anderson localization is an interference effect crucial to the understanding of waves in disordered media. However, localization is expected to become negligible when the features of the disordered structure are much smaller than the wavelength. Here we experimentally demonstrate the localization of light in a disordered dielectric multilayer with an average layer thickness of 15 nanometers, deep into the subwavelength regime. We observe strong disorder-induced reflections that show that the interplay of localization and evanescence can lead to a substantial decrease in transmission, or the opposite feature of enhanced transmission. This deep-subwavelength Anderson localization exhibits extreme sensitivity: Varying the thickness of a single layer by 2 nanometers changes the reflection appreciably. This sensitivity, approaching the atomic scale, holds the promise of extreme subwavelength sensing.
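The sensitivity of a multilayer's reflection to a single layer's thickness can be illustrated with a standard normal-incidence transfer-matrix calculation; the refractive indices and thicknesses below are illustrative assumptions, not the stack used in the paper:

```python
import cmath

def reflectance(layers, wavelength, n_in=1.0, n_out=1.52):
    """Normal-incidence reflectance of a dielectric multilayer via the
    standard 2x2 characteristic-matrix method. `layers` is a list of
    (refractive_index, thickness) pairs; thickness and wavelength must
    share units. Values are illustrative only."""
    m00, m01, m10, m11 = 1.0, 0.0, 0.0, 1.0  # running product, starts at identity
    for n, d in layers:
        delta = 2.0 * cmath.pi * n * d / wavelength  # phase thickness
        c, s = cmath.cos(delta), cmath.sin(delta)
        a00, a01, a10, a11 = c, 1j * s / n, 1j * n * s, c
        m00, m01, m10, m11 = (m00 * a00 + m01 * a10, m00 * a01 + m01 * a11,
                              m10 * a00 + m11 * a10, m10 * a01 + m11 * a11)
    num = n_in * (m00 + m01 * n_out) - (m10 + m11 * n_out)
    den = n_in * (m00 + m01 * n_out) + (m10 + m11 * n_out)
    return abs(num / den) ** 2
```

Perturbing one entry of `layers` by a few nanometers and recomputing `reflectance` mimics the single-layer sensitivity the experiment reports.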
Tan, Jia-Lian; Yang, Ting-Ting; Liu, Yu; Zhang, Xue; Cheng, Shu-Jin; Zuo, Hua; He, Huawei
2016-05-01
A novel rhodamine-based fluorescent pH probe responding to extremely low pH values has been synthesized and characterized. This probe showed an excellent photophysical response to pH on the basis that the colorless spirocyclic structure under basic conditions opened to a colored and highly fluorescent form under extreme acidity. The quantitative relationship between fluorescence intensity and pH value (1.75-2.62) was consistent with the equilibrium equation pH = pKa + log[(Imax - I)/(I - Imin)]. This sensitive pH probe was also characterized with good reversibility and no interaction with interfering metal ions, and was successfully applied to image Escherichia coli under strong acidity. Copyright © 2015 John Wiley & Sons, Ltd.
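The reported calibration pH = pKa + log[(Imax - I)/(I - Imin)] can be inverted to read a pH value from a measured fluorescence intensity; the pKa in the test below is a hypothetical value for illustration, not the probe's measured constant:

```python
import math

def ph_from_intensity(I, I_max, I_min, pKa):
    """Invert the reported calibration pH = pKa + log10((Imax - I)/(I - Imin)).
    Valid only for Imin < I < Imax, i.e. within the probe's dynamic range."""
    if not (I_min < I < I_max):
        raise ValueError("intensity outside calibrated range")
    return pKa + math.log10((I_max - I) / (I - I_min))
```

At the midpoint intensity the log term vanishes and the returned pH equals pKa, as the equilibrium equation requires.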
Relating precipitation to fronts at a sub-daily basis
NASA Astrophysics Data System (ADS)
Hénin, Riccardo; Ramos, Alexandre M.; Liberato, Margarida L. R.; Gouveia, Célia
2017-04-01
High-impact events over western Iberia include precipitation extremes that are cause for concern as they lead to flooding, landslides, extensive property damage and human casualties. These events are usually associated with low-pressure systems over the North Atlantic moving eastward towards the European western coasts (Liberato and Trigo, 2014). A method to detect fronts and to attribute precipitation amounts to each front is tested, distinguishing between warm and cold fronts. The 6-hourly ERA-Interim 1979-2012 reanalysis with 1°x1° horizontal resolution is used for this purpose. An objective front identification method (the thermal method described in Schemm et al., 2014) is applied to locate fronts over the Northern Hemisphere, using equivalent potential temperature as the thermal parameter. In addition, we defined a square search box of tunable size (2 to 10 degrees) to look for a front in the neighbourhood of a grid point affected by precipitation. A sensitivity analysis is performed and the optimal box size is assessed in order to avoid over- or underestimation of precipitation, in light of the variability and typical dynamics of warm and cold frontal systems over Western Europe. Afterwards, using the extreme-event ranking over Iberia proposed by Ramos et al. (2014), the top-ranked extreme events are selected in order to validate the method with specific case studies. Finally, climatological and trend maps of frontal activity are produced on both annual and seasonal scales. Trend maps show a decrease of frontal precipitation over north-western Europe and a slight increase over south-western Europe, mainly due to warm fronts. REFERENCES Liberato M.L.R. and R.M. Trigo (2014) Extreme precipitation events and related impacts in Western Iberia. Hydrology in a Changing World: Environmental and Human Dimensions. IAHS Red Book No 363, 171-176. ISSN: 0144-7815. Ramos A.M., R.M. 
Trigo and M.L.R. Liberato (2014) A ranking of high-resolution daily precipitation extreme events for the Iberian Peninsula. Atmospheric Science Letters 15, 328-334. doi: 10.1002/asl2.507. Schemm S., I. Rudeva and I. Simmonds (2014) Extratropical fronts in the lower troposphere - global perspectives obtained from two automated methods. Quarterly Journal of the Royal Meteorological Society 141, 1686-1698. doi: 10.1002/qj.2471. ACKNOWLEDGEMENTS This work is supported by FCT project UID/GEO/50019/2013 - Instituto Dom Luiz. Fundação para a Ciência e a Tecnologia, Portugal (FCT) also provides R. Hénin's doctoral grant (PD/BD/114479/2016) and A.M. Ramos' postdoctoral grant (FCT/DFRH/SFRH/BPD/84328/2012).
NASA Astrophysics Data System (ADS)
Lazoglou, Georgia; Anagnostopoulou, Christina; Tolika, Konstantia; Kolyva-Machera, Fotini
2018-04-01
The increasing intensity and frequency of temperature and precipitation extremes during the past decades has substantial environmental and socioeconomic impacts. The objective of the present study is therefore a comparison of several statistical methods from extreme value theory (EVT) in order to identify the most appropriate for analyzing the behavior of extreme precipitation and high- and low-temperature events in the Mediterranean region. Extremes were selected using both the block maxima and the peaks-over-threshold (POT) techniques; consequently, both the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD) were used to fit them. The results were compared in order to select the most appropriate distribution for characterizing extremes. Moreover, this study evaluates the maximum likelihood, L-moments, and Bayesian estimation methods, based on both graphical and statistical goodness-of-fit tests. It was revealed that the GPD can accurately characterize both precipitation and temperature extremes. Additionally, the GEV distribution with the Bayesian method proves appropriate, especially for the largest extremes. Another important objective of this investigation was the estimation of precipitation and temperature return levels for three return periods (50, 100, and 150 years), classifying the data into groups with similar characteristics. Finally, the return levels were estimated with both GEV and GPD and with the three estimation methods, revealing that the chosen method can affect the return level values for both precipitation and temperature.
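As a minimal sketch of the block-maxima/return-level workflow, the following fits a Gumbel distribution (the GEV with zero shape parameter) to annual maxima by the method of moments and evaluates return levels; the study itself used full GEV/GPD fits with maximum likelihood, L-moments, and Bayesian estimators, so this is a simplified stand-in:

```python
import math

def gumbel_fit(maxima):
    """Method-of-moments fit of a Gumbel distribution (GEV with shape 0),
    a common first approximation for block maxima."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    scale = math.sqrt(6.0 * var) / math.pi
    loc = mean - 0.5772156649 * scale  # Euler-Mascheroni constant
    return loc, scale

def return_level(T, loc, scale):
    """T-year return level: the quantile exceeded with probability 1/T
    in any given block (year)."""
    return loc - scale * math.log(-math.log(1.0 - 1.0 / T))
```

For the study's return periods one would evaluate `return_level(T, loc, scale)` for T in (50, 100, 150); longer return periods always give higher levels.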
Picogram determination of N-nitrosodimethylamine in water.
Hu, Ruikang; Zhang, Lifeng; Yang, Zhaoguang
2008-01-01
N-nitrosodimethylamine (NDMA) persistence within surface waters is a major concern for downstream communities exploiting these waters as drinking water supplies. The objective of this study is to develop a novel and efficient analytical method for NDMA via different technologies: pulsed splitless gas chromatography with nitrogen-phosphorus detection (GC-NPD), large volume injection (LVI) gas chromatography-mass spectrometry (GC/MS) via a programmable temperature vaporizer (PTV) inlet or PTV gas chromatography-triple quadrupole mass spectrometry (GC-MS/MS), and continuous liquid-liquid extraction. It was found that the sensitivity required for NDMA analysis by GC-NPD can only be achieved when the NPD bead is extremely clean. LVI via PTV can greatly improve GC-MS system sensitivity for analyzing NDMA. With a DB-624 column (25 m x 200 microm x 1.12 microm) connected in series with a DB-5MS (30 m x 250 microm x 0.25 microm), PTV-GC/MS could overcome the matrix interference for the trace analysis of NDMA. Various instrument conditions were studied in detail, and the optimized process was validated via precision and accuracy studies. The PTV triple quadrupole GC-MS/MS system could efficiently remove the interference on a single DB-5MS (30 m x 250 microm x 0.25 microm) column with good sensitivity and selectivity. The developed methods have been successfully applied to test NDMA in different types of water samples with satisfactory results. (c) IWA Publishing 2008.
The Sensitive Infrared Signal Detection by Sum Frequency Generation
NASA Technical Reports Server (NTRS)
Wong, Teh-Hwa; Yu, Jirong; Bai, Yingxin
2013-01-01
An up-conversion device that converts 2.05-micron light to a 700 nm signal by sum frequency generation using a periodically poled lithium niobate crystal is demonstrated. The achieved 92% up-conversion efficiency paves the way to detecting extremely weak 2.05-micron signals with well-established silicon avalanche photodiode detectors for sensitive lidar applications.
Hu, Yingli; Ding, Meili; Liu, Xiao-Qin; Sun, Lin-Bing; Jiang, Hai-Long
2016-04-28
Based on an organic ligand involving both carboxylate and tetrazole groups, a chemically stable Zn(II) metal-organic framework has been rationally synthesized and behaves as a fluorescence chemosensor for the highly selective and sensitive detection of picric acid, an extremely hazardous and strong explosive.
NASA Astrophysics Data System (ADS)
Fraisse, C.; Pequeno, D.; Staub, C. G.; Perry, C.
2016-12-01
Climate variability, particularly the occurrence of extreme weather conditions such as dry spells and heat stress during sensitive crop developmental phases, can substantially increase the prospect of reduced crop yields. Yield losses or crop failure risk due to stressful weather conditions vary mainly with stress severity and with exposure timing and duration. The magnitude of stress effects is also crop specific, differing in thresholds and adaptation to environmental conditions. To help producers in the Southeast USA mitigate and monitor the risk of crop losses due to extreme weather events, we developed a web-based tool that evaluates the risk of extreme weather events during the season, taking into account the crop development stages. Producers can enter their plans for the upcoming season in a given field (e.g., crop, variety, planting date, acreage), optionally select a specific El Niño Southern Oscillation (ENSO) phase, and are presented with the probabilities (0-100%) of extreme weather events occurring during sensitive phases of the growing season for the selected conditions. The phenology components of the DSSAT models CERES-Maize, CROPGRO-Soybean, CROPGRO-Cotton, and N-Wheat have been translated from FORTRAN into standalone R versions. These models were tested in collaboration with Extension faculty and producers during the 2016 season, and their usefulness for risk mitigation and monitoring was evaluated. A companion AgroClimate app was also developed to help producers track and monitor phenological development during the cropping season.
Lisi Pei; Nathan Moore; Shiyuan Zhong; Lifeng Luo; David W. Hyndman; Warren E. Heilman; Zhiqiu Gao
2014-01-01
Extreme weather and climate events, especially short-term excessive drought and wet periods over agricultural areas, have received increased attention. The Southern Great Plains (SGP) is one of the largest agricultural regions in North America and features the underlying Ogallala-High Plains Aquifer system worth great economic value in large part due to production...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Ying
My graduate research has focused on separation science and bioanalytical analysis, with an emphasis on method development. It includes three major areas: enantiomeric separations using high-performance liquid chromatography (HPLC), super/subcritical fluid chromatography (SFC), and capillary electrophoresis (CE); drug-protein binding studies using CE; and carbohydrate analysis using liquid chromatography-electrospray ionization mass spectrometry (LC-ESI-MS). Enantiomeric separations continue to be extremely important in the pharmaceutical industry. An in-depth evaluation of the enantiomeric separation capabilities of macrocyclic glycopeptide CSPs with SFC mobile phases was conducted using a set of over 100 chiral compounds. It was found that the macrocyclic-based CSPs were able to separate enantiomers of various compounds with different polarities and functionalities. Seventy percent of all separations were achieved in less than 4 min due to the high flow rate (4.0 ml/min) that can be used in SFC. Drug-protein binding is an important process in determining the activity and fate of a drug once it enters the body. Two drug/protein systems have been studied using the frontal analysis CE method. More sensitive fluorescence detection was introduced in this assay, which overcame the low sensitivity that is common when using UV detection for drug-protein studies. In addition, the first use of an argon ion laser with a 257 nm beam coupled with a CCD camera as a frontal analysis detection method enabled the simultaneous observation of drug fluorescence and protein fluorescence. LC-ESI-MS was used for the separation and characterization of underivatized oligosaccharide mixtures. With limits of detection as low as 50 picograms, all individual components of oligosaccharide mixtures (up to 11 glucose units long) were baseline resolved on a Cyclobond I 2000 column and detected using ESI-MS. 
This system is characterized by high chromatographic resolution, high column stability, and high sensitivity. In addition, this method showed potential usefulness for the sensitive and quick analysis of hydrolysis products of polysaccharides, and for trace-level analysis of individual oligosaccharides or oligosaccharide isomers from biological systems.
Hjelmeland, Anna K; Wylie, Philip L; Ebeler, Susan E
2016-02-01
Methoxypyrazines are volatile compounds found in plants, microbes, and insects that have potent vegetal and earthy aromas. With sensory detection thresholds in the low ng L(-1) range, modest concentrations of these compounds can profoundly impact the aroma quality of foods and beverages, and high levels can lead to consumer rejection. The wine industry routinely analyzes the most prevalent methoxypyrazine, 2-isobutyl-3-methoxypyrazine (IBMP), to aid in harvest decisions, since concentrations decrease during berry ripening. In addition to IBMP, three other methoxypyrazines, IPMP (2-isopropyl-3-methoxypyrazine), SBMP (2-sec-butyl-3-methoxypyrazine), and EMP (2-ethyl-3-methoxypyrazine), have been identified in grapes and/or wine and can impact aroma quality. Despite their routine analysis in the wine industry (mostly IBMP), accurate methoxypyrazine quantitation is hindered by two major challenges: sensitivity and resolution. With extremely low sensory detection thresholds (~8-15 ng L(-1) in wine for IBMP), highly sensitive analytical methods are necessary to quantify methoxypyrazines at trace levels. Here we were able to resolve IBMP as well as IPMP, EMP, and SBMP from co-eluting compounds using one-dimensional chromatography coupled to positive chemical ionization tandem mass spectrometry. Three extraction techniques, HS-SPME (headspace solid-phase microextraction), SBSE (stir bar sorptive extraction), and HSSE (headspace sorptive extraction), were validated and compared. A 30 min extraction time was used for the HS-SPME and SBSE techniques, while 120 min was necessary to achieve sufficient sensitivity for HSSE extractions. All extraction methods have limits of quantitation (LOQ) at or below 1 ng L(-1) for all four methoxypyrazines analyzed, i.e., LOQs at or below reported sensory detection limits in wine. The method is high throughput, with resolution of all compounds possible with a relatively rapid 27 min GC oven program. 
Copyright © 2015 Elsevier B.V. All rights reserved.
Chen, Lin; Huang, Ping; Yang, Hui-qing; Deng, Ya-bin; Guo, Meng-lin; Li, Dong-hui
2015-08-01
Determination of chondroitin sulfate in the biomedical field has important value. The conventional methods for the assay of chondroitin sulfate are still unsatisfactory in sensitivity, selectivity, or simplicity. This work aimed at developing a novel method for the sensitive and selective determination of chondroitin sulfate by fluorimetry. We found that some cationic surfactants can efficiently quench the fluorescence of tetrasulfonated aluminum phthalocyanine (AlS4Pc), a strongly fluorescent compound that emits in the red region. However, the fluorescence of this quenched system recovered significantly when chondroitin sulfate (CS) was present. Tetradecyl dimethyl benzyl ammonium chloride (TDBAC), screened from the candidate cationic surfactants, was chosen as the quencher because it shows the most efficient quenching effect. It was found that the fluorescence of AlS4Pc was strongly quenched by TDBAC because of the formation of an association complex between AlS4Pc and TDBAC. Fluorescence of the association complex recovered dramatically after the addition of CS, owing to the ability of chondroitin sulfate to shift the association equilibrium, releasing AlS4Pc and thus increasing the fluorescence of the reaction system. Based on this phenomenon, a simple, accurate, and sensitive method was developed for the quantitative determination of CS. Factors including reaction time and the effects of coexisting substances were investigated and discussed. Under optimum conditions the calibration curve was linear over 0.20-10.0 μg · mL(-1), with a detection limit for CS of 0.070 μg · mL(-1). The method has been applied to the analysis of practical samples with satisfactory results. This work expands the applications of AlS4Pc in the biomedical area.
Comparing regional precipitation and temperature extremes in climate model and reanalysis products
Angélil, Oliver; Perkins-Kirkpatrick, Sarah; Alexander, Lisa V.; ...
2016-07-12
A growing field of research aims to characterise the contribution of anthropogenic emissions to the likelihood of extreme weather and climate events. These analyses can be sensitive to the shapes of the tails of simulated distributions. If tails are found to be unrealistically short or long, the anthropogenic signal emerges more or less clearly, respectively, from the noise of possible weather. Here we compare the chance of daily land-surface precipitation and near-surface temperature extremes generated by three Atmospheric Global Climate Models typically used for event attribution, with distributions from six reanalysis products. The likelihoods of extremes are compared for area-averages over grid-cell and regional-sized spatial domains. Results suggest a bias favouring overly strong attribution estimates for hot and cold events over many regions of Africa and Australia, and a bias favouring overly weak attribution estimates over regions of North America and Asia. For rainfall, results are more sensitive to geographic location. Although the three models show similar results over many regions, they do disagree over others. Equally, results highlight the discrepancy amongst reanalysis products. This emphasises the importance of using multiple reanalysis and/or observation products, as well as multiple models, in event attribution studies.
Quantifying variability in fast and slow solar wind: From turbulence to extremes
NASA Astrophysics Data System (ADS)
Tindale, E.; Chapman, S. C.; Moloney, N.; Watkins, N. W.
2017-12-01
Fast and slow solar wind exhibit variability across a wide range of spatiotemporal scales, with evolving turbulence producing fluctuations on sub-hour timescales and the irregular solar cycle modulating the system over many years. Here, we apply the data quantile-quantile (DQQ) method [Tindale and Chapman 2016, 2017] to over 20 years of Wind data, to study the time evolution of the statistical distribution of plasma parameters in fast and slow solar wind. This model-independent method allows us to simultaneously explore the evolution of fluctuations across all scales. We find a two-part functional form for the statistical distributions of the interplanetary magnetic field (IMF) magnitude and its components, with each region of the distribution evolving separately over the solar cycle. Up to a value of 8nT, turbulent fluctuations dominate the distribution of the IMF, generating the approximately lognormal shape found by Burlaga [2001]. The mean of this core-turbulence region tracks solar cycle activity, while its variance remains constant, independent of the fast or slow state of the solar wind. However, when we test the lognormality of this core-turbulence component over time, we find the model provides a poor description of the data at solar maximum, where sharp peaks in the distribution dominate over the lognormal shape. At IMF values higher than 8nT, we find a separate, extremal distribution component, whose moments are sensitive to solar cycle phase, the peak activity of the cycle and the solar wind state. We further investigate these `extremal' values using burst analysis, where a burst is defined as a continuous period of exceedance over a predefined threshold. This form of extreme value statistics allows us to study the stochastic process underlying the time series, potentially supporting a probabilistic forecast of high-energy events. Tindale, E., and S.C. Chapman (2016), Geophys. Res. Lett., 43(11) Tindale, E., and S.C. 
Chapman (2017), submitted Burlaga, L.F. (2001), J. Geophys. Res., 106(A8)
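The burst analysis described above, continuous periods of exceedance over a predefined threshold, can be sketched as:

```python
def bursts(series, threshold):
    """Identify bursts: maximal runs of consecutive samples strictly
    exceeding a fixed threshold. Returns one (start_index, duration,
    peak_value) tuple per burst."""
    out, start = [], None
    for i, x in enumerate(series):
        if x > threshold and start is None:
            start = i                      # burst begins
        elif x <= threshold and start is not None:
            run = series[start:i]          # burst just ended
            out.append((start, i - start, max(run)))
            start = None
    if start is not None:                  # series ends mid-burst
        run = series[start:]
        out.append((start, len(series) - start, max(run)))
    return out
```

Applied to an IMF magnitude time series with an 8 nT threshold, the resulting durations and peaks give the exceedance statistics underlying a probabilistic forecast of high-energy events.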
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Hyejin; Jeong, Sinyoung; Ko, Eunbyeol
2015-05-15
Surface-enhanced Raman scattering techniques have been widely used for bioanalysis due to their high sensitivity and multiplexing capacity. However, the point-scanning method using a micro-Raman system, which is the most common method in the literature, has the disadvantage of extremely long measurement times for on-chip immunoassays, which use a large chip area of approximately 1-mm scale and a confocal beam spot of ca. 1-μm size. Alternative methods such as sampled-spot scanning with high confocality, and large-area scanning with an enlarged field of view and low confocality, have been utilized to minimize the measurement time. In this study, we analyzed the two methods with respect to signal-to-noise ratio and sampling-induced signal fluctuations to obtain insights into a fast and reliable readout strategy. On this basis, we proposed a methodology for fast and reliable quantitative measurement of the whole chip area. The proposed method adopted a raster scan covering the full 100 μm × 100 μm region as a proof-of-concept experiment, while accumulating signals in the CCD detector as a single spectrum per frame. A single 10 s scan over the 100 μm × 100 μm area yielded much higher sensitivity than sampled-spot scanning measurements, with none of the signal fluctuations attributable to sparse sampling. This readout method can serve as one of the key technologies that will bring quantitative multiplexed detection and analysis into practice.
Multiplex biosensing with highly sensitive magnetic nanoparticle quantification method
NASA Astrophysics Data System (ADS)
Nikitin, M. P.; Orlov, A. V.; Znoyko, S. L.; Bragina, V. A.; Gorshkov, B. G.; Ksenevich, T. I.; Cherkasov, V. R.; Nikitin, P. I.
2018-08-01
Unique properties of magnetic nanoparticles (MNP) have provided many breakthrough solutions for life science. The immense potential of MNP as labels in advanced immunoassays stems from the fact that they, unlike optical labels, can be easily detected inside 3D opaque porous biosensing structures or in colored media, can be manipulated by an external magnetic field, and exhibit high stability and negligible background signal in biological samples. In this research, magnetic nanolabels and an original technique for their quantification by non-linear magnetization have permitted the development of novel methods of multiplex biosensing. Several types of highly sensitive multi-channel readers that offer an extremely wide linear dynamic range were developed to count MNP in different recognition zones for quantitative concentration measurements of various analytes. Four approaches to multiplex biosensing based on MNP have been demonstrated in one-run tests: several 3D porous structures; flat and micropillar microfluidic sensor chips; multi-line lateral flow strips; and a modular architecture of the strips, which is the first 3D multiplexing method to go beyond traditional planar techniques. Detection of cardio- and cancer markers, small molecules, and oligonucleotides was used in the experiments. The analytical characteristics of the developed multiplex methods are on par with modern, time-consuming laboratory techniques. The developed multiplex biosensing platforms are promising for medical and veterinary diagnostics, food inspection, and environmental and security monitoring.
Lee, Jung Ho; Cavagnero, Silvia
2013-01-01
NMR is an extremely powerful, yet insensitive technique. Many available nuclear polarization methods that address sensitivity are not directly applicable to low-concentration biomolecules in liquids and are often too invasive. Photochemically induced dynamic nuclear polarization (photo-CIDNP) is no exception. It needs high-power laser irradiation, which often leads to sample degradation, and photosensitizer reduction. Here, we introduce a novel tri-enzyme system that significantly overcomes the above challenges rendering photo-CIDNP a practically applicable technique for NMR sensitivity enhancement in solution. The specificity of the nitrate reductase (NR) enzyme is exploited to selectively in situ re-oxidize the reduced photo-CIDNP dye FMNH2. At the same time, the oxygen-scavenging ability of glucose oxidase (GO) and catalase (CAT) is synergistically employed to prevent sample photodegradation. The resulting tri-enzyme system (NR-GO-CAT) enables prolonged sensitivity-enhanced data collection in 1D and 2D heteronuclear NMR, leading to the highest photo-CIDNP sensitivity enhancement (48-fold relative to SE-HSQC) achieved to date for amino acids and polypeptides in solution. NR-GO-CAT extends the concentration limit of photo-CIDNP NMR down to the low micromolar range. In addition, sensitivity (relative to the reference SE-HSQC) is found to be inversely proportional to sample concentration, paving the way to the future analysis of even more diluted samples. PMID:23560683
Ely, D. Matthew
2006-01-01
Recharge is a vital component of the ground-water budget and methods for estimating it range from extremely complex to relatively simple. The most commonly used techniques, however, are limited by the scale of application. One method that can be used to estimate ground-water recharge includes process-based models that compute distributed water budgets on a watershed scale. These models should be evaluated to determine which model parameters are the dominant controls in determining ground-water recharge. Seven existing watershed models from different humid regions of the United States were chosen to analyze the sensitivity of simulated recharge to model parameters. Parameter sensitivities were determined using a nonlinear regression computer program to generate a suite of diagnostic statistics. The statistics identify model parameters that have the greatest effect on simulated ground-water recharge and that compare and contrast the hydrologic system responses to those parameters. Simulated recharge in the Lost River and Big Creek watersheds in Washington State was sensitive to small changes in air temperature. The Hamden watershed model in west-central Minnesota was developed to investigate the relations that wetlands and other landscape features have with runoff processes. Excess soil moisture in the Hamden watershed simulation was preferentially routed to wetlands, instead of to the ground-water system, resulting in little sensitivity of any parameters to recharge. Simulated recharge in the North Fork Pheasant Branch watershed, Wisconsin, demonstrated the greatest sensitivity to parameters related to evapotranspiration. Three watersheds were simulated as part of the Model Parameter Estimation Experiment (MOPEX). 
Parameter sensitivities for the MOPEX watersheds, Amite River, Louisiana and Mississippi, English River, Iowa, and South Branch Potomac River, West Virginia, were similar and most sensitive to small changes in air temperature and a user-defined flow routing parameter. Although the primary objective of this study was to identify, by geographic region, the importance of each parameter value to the simulation of ground-water recharge, the secondary objectives proved valuable for future modeling efforts. A rigorous sensitivity analysis can (1) make the calibration process more efficient, (2) guide additional data collection, (3) identify model limitations, and (4) explain simulated results.
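The basic idea of ranking parameters by their effect on simulated recharge can be sketched as a one-at-a-time scaled sensitivity; note this finite-difference form is a simplified stand-in for the nonlinear-regression diagnostic statistics the study actually used:

```python
def sensitivity(model, params, delta=0.01):
    """One-at-a-time scaled sensitivity: relative change in model output
    per relative change in each parameter (an elasticity). `model` maps a
    dict of parameter values to a scalar output such as mean recharge."""
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)   # +1% perturbation
        sens[name] = (model(perturbed) - base) / (base * delta) if base else float("nan")
    return sens
```

Parameters with the largest absolute sensitivities are the dominant controls and the natural focus for calibration and data collection.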
Research on the remote sensing methods of drought monitoring in Chongqing
NASA Astrophysics Data System (ADS)
Yang, Shiqi; Tang, Yunhui; Gao, Yanghua; Xu, Yongjin
2011-12-01
There are regional and periodic droughts in Chongqing, which seriously impact agricultural production and people's lives. This study attempted to monitor drought in Chongqing, with its complex terrain, using MODIS data. First, we analyzed and compared three remote sensing methods for drought monitoring (vegetation index time series, temperature vegetation dryness index (TVDI), and vegetation supply water index (VSWI)) for the severe drought of 2006. Then we developed a remote-sensing-based drought monitoring model for Chongqing by combining soil moisture data and meteorological data. The results showed that all three models performed reasonably well in detecting the occurrence of drought in Chongqing. However, the vegetation index time series is more sensitive temporally than spatially. Although TVDI and VSWI can both retrieve the full spatial progression of the severe summer 2006 drought (onset, intensification, relief, re-intensification, and complete remission), TVDI requires that both extremely dry and extremely moist conditions exist within the study area, which is difficult to satisfy in Chongqing. VSWI is simple and practicable, and its correlation with soil moisture data reaches significant levels. In summary, VSWI is the best model for summer drought monitoring in Chongqing.
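VSWI is commonly computed as the ratio of NDVI to land-surface temperature; since the abstract does not give its exact formula, the sketch below assumes that common form:

```python
import numpy as np

def vswi(ndvi, lst_kelvin):
    """Vegetation Supply Water Index, assumed here as NDVI / LST
    (land-surface temperature in kelvin); lower values indicate drier,
    more water-stressed conditions. Definitions vary between studies."""
    ndvi = np.asarray(ndvi, dtype=float)
    lst = np.asarray(lst_kelvin, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = ndvi / lst
    # mask physically meaningless temperatures
    return np.where(lst > 0, ratio, np.nan)
```

Applied per pixel to MODIS NDVI and LST composites, the resulting index map can then be compared against station soil moisture for validation.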
A Review of Recent Advances in Research on Extreme Heat Events
NASA Technical Reports Server (NTRS)
Horton, Radley M.; Mankin, Justin S.; Lesk, Corey; Coffel, Ethan; Raymond, Colin
2016-01-01
Reviewing recent literature, we report that changes in extreme heat event characteristics such as magnitude, frequency, and duration are highly sensitive to changes in mean global-scale warming. Numerous studies have detected significant changes in the observed occurrence of extreme heat events, irrespective of how such events are defined. Further, a number of these studies have attributed present-day changes in the risk of individual heat events, and the documented global-scale increase in such events, to anthropogenically driven warming. Advances in process-based studies of heat events have focused on the proximate land-atmosphere interactions through soil moisture anomalies, and on changes in the occurrence of the atmospheric circulation patterns associated with mid-latitude heat events. While evidence for a number of hypotheses remains limited, climate change nevertheless points to tail risks of possible changes in heat extremes that could exceed estimates generated from model outputs of mean temperature. We also explore risks associated with compound extreme events and nonlinear impacts associated with extreme heat.
Zhang, Yuyang; Xing, Zhen; She, Dejun; Huang, Nan; Cao, Dairong
The aim of this study was to prospectively evaluate the repeatability of non-contrast-enhanced lower-extremity magnetic resonance angiography using flow-spoiled fresh blood imaging (FS-FBI). Forty-three healthy volunteers and 15 patients with lower-extremity arterial stenosis were recruited and examined with FS-FBI. In the patient group, digital subtraction angiography was performed within a week after FS-FBI. Repeatability was assessed by the following parameters: grading of image quality, diameter and area of the major arteries, and grading of stenosis of the lower-extremity arteries. Two experienced radiologists, blinded to patient data, independently evaluated the FS-FBI and digital subtraction angiography images. Intraclass correlation coefficients (ICCs), sensitivity, and specificity were used for statistical analysis. The image quality of most data was graded as satisfactory. The ICCs for the first and second measurements were 0.792 and 0.884 in the femoral segment and 0.803 and 0.796 in the tibiofibular segment for the healthy volunteer group, and 0.873 and 1.000 in the femoral segment and 0.737 and 0.737 in the tibiofibular segment for the patient group. Intraobserver and interobserver agreement on arterial diameter and area was excellent, with ICCs mostly greater than 0.75 in the volunteer group. For stenosis grading, intraobserver and interobserver ICCs ranged from 0.784 to 0.862 and from 0.778 to 0.854, respectively. FS-FBI detected arterial stenosis or occlusion with a mean sensitivity and specificity of at least 90% and 80% for the femoral segment and 86.7% and 93.3% for the tibiofibular segment. Lower-extremity angiography with FS-FBI is a reliable and reproducible screening tool for lower-extremity atherosclerotic disease, especially for patients with impaired renal function.
Patil, Ajeetkumar; Bhat, Sujatha; Pai, Keerthilatha M; Rai, Lavanya; Kartha, V B; Chidangil, Santhosh
2015-09-08
In recent years, proteomics techniques have advanced tremendously in the life and medical sciences for the detection and identification of proteins in body fluids, tissue homogenates, and cellular samples, in order to understand the biochemical mechanisms leading to different diseases. Common methods include high performance liquid chromatography, 2D-gel electrophoresis, MALDI-TOF-MS, SELDI-TOF-MS, CE-MS, and LC-MS. Our group at Manipal has developed an ultra-sensitive high performance liquid chromatography-laser induced fluorescence (HPLC-LIF) based technique for screening, early detection, and staging of various cancers, using protein profiling of clinical samples such as body fluids, cellular specimens, and biopsy tissue. More than 300 protein profiles of different clinical samples (serum, saliva, cellular samples, and tissue homogenates) from volunteers (normal, and with different pre-malignant/malignant conditions) were recorded using this set-up.
The protein profile data were analyzed using principal component analysis (PCA) for objective classification and detection of malignant, premalignant, and healthy conditions, achieving high sensitivity and specificity. The method is extremely sensitive, with a limit of detection of the order of femtomoles. HPLC-LIF combined with PCA as a potential proteomic method for the diagnosis of oral and cervical cancer is discussed in this paper. This article is part of a Special Issue entitled: Proteomics in India. Copyright © 2015 Elsevier B.V. All rights reserved.
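As a rough illustration of the PCA classification step, the sketch below projects synthetic two-class "protein profiles" onto their leading principal components (random data standing in for HPLC-LIF chromatograms; the class-specific peak positions and intensities are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic "protein profiles": 20 chromatograms x 100 retention-time bins;
# the second half carry an extra class-specific peak (stand-in for malignant samples)
profiles = rng.normal(0.0, 0.1, (20, 100))
profiles[10:, 30:35] += 2.0

# PCA via SVD of the mean-centred data matrix
centred = profiles - profiles.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ Vt[:2].T          # projections onto the first two PCs

# the first principal component separates the two classes
sep = abs(scores[:10, 0].mean() - scores[10:, 0].mean())
print(sep > 1.0)
```

In practice the class separation along the leading components is what yields the objective detection the abstract describes, with a classifier (or simple thresholds) applied in score space.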
Plasmon Ruler with Ångstrom Length Resolution
Hill, Ryan T.; Mock, Jack J.; Hucknall, Angus; Wolter, Scott D.; Jokerst, Nan M.; Smith, David R.; Chilkoti, Ashutosh
2012-01-01
We demonstrate a plasmon nanoruler using a coupled film-nanoparticle (film-NP) format that is well suited for investigating the sensitivity extremes of plasmonic coupling. Because it is relatively straightforward to functionalize bulk, surface plasmon supporting films such as gold, we are able to precisely control plasmonic gap dimensions by creating ultra-thin molecular spacer layers on the gold films, on top of which we immobilize plasmon resonant nanoparticles (NPs). Each immobilized NP becomes coupled to the underlying film and functions as a plasmon nanoruler, exhibiting a distance-dependent resonance red-shift in its peak plasmon wavelength as it approaches the film. Due to the uniformity of response from the film-NPs to separation distance, we are able to use extinction and scattering measurements from ensembles of film-NPs to characterize the coupling effect over a series of very short separation distances, ranging from 5 to 20 Å, and combine these measurements with similar data from larger separation distances extending out to 27 nm. We find that the film-NP plasmon nanoruler is extremely sensitive at very short film-NP separation distances, yielding spectral shifts as large as 5 nm for every 1 Å change in separation distance. The film-NP coupling at extremely small spacings is so uniform and reliable that we are able to usefully probe gap dimensions where the classical Drude model of the conducting electrons in the metals is no longer descriptive; for gap sizes smaller than a few nanometers, either quantum or semi-classical models of the carrier response must be employed to predict the observed wavelength shifts. We find that, despite the limitations, large field enhancements and extreme sensitivity persist down to even the smallest gap sizes. PMID:22966857
Lavell, Cassie H; Zimmer-Gembeck, Melanie J; Farrell, Lara J; Webb, Haley
2014-09-01
Body dysmorphic disorder (BDD) is characterized by extreme preoccupation with perceived deficits in physical appearance, and sufferers experience severe impairment in functioning. Previous research has indicated that individuals with BDD are high in social anxiety and often report being the victims of appearance-based teasing. However, there is little research into the possible mechanisms that might explain these relationships. The current study examined appearance-based rejection sensitivity as a mediator between perceived appearance-based victimization, social anxiety, and body dysmorphic symptoms in a sample of 237 Australian undergraduate psychology students. Appearance-based rejection sensitivity fully mediated the relationship between appearance-based victimization and body dysmorphic symptoms, and partially mediated the relationship between social anxiety and body dysmorphic symptoms. Findings suggest that individuals high in social anxiety, or those with a history of more appearance-based victimization, may develop a bias towards expecting further appearance-based rejection, which may contribute to extreme appearance concerns such as BDD. Copyright © 2014 Elsevier Ltd. All rights reserved.
Extreme ultraviolet patterning of tin-oxo cages
NASA Astrophysics Data System (ADS)
Haitjema, Jarich; Zhang, Yu; Vockenhuber, Michaela; Kazazis, Dimitrios; Ekinci, Yasin; Brouwer, Albert M.
2017-07-01
We report on the extreme ultraviolet (EUV) patterning performance of tin-oxo cages. These cage molecules were already known to function as a negative tone photoresist for EUV radiation, but in this work, we significantly optimized their performance. Our results show that sensitivity and resolution are only meaningful photoresist parameters if the process conditions are optimized. We focus on contrast curves of the materials using large-area EUV exposures and on patterning of the cages using EUV interference lithography. It is shown that baking steps, such as post-exposure baking, can significantly affect the sensitivity and contrast in both the open-frame and the patterning experiments. Increasing the layer thickness reduced the dose necessary to induce a solubility change but decreased the patterning quality. The patterning experiments were affected by minor changes in processing conditions, such as an increased rinsing time. In addition, we show that the anions of the cage can influence the sensitivity and quality of the patterning, probably through their effect on the physical properties of the materials.
Fiber-optic refractometer based on an etched high-Q π-phase-shifted fiber-Bragg-grating.
Zhang, Qi; Ianno, Natale J; Han, Ming
2013-07-10
We present a compact and highly sensitive fiber-optic refractometer based on a high-Q π-phase-shifted fiber Bragg grating (πFBG) that is chemically etched down to the core of the fiber. Due to the π phase shift, a strong πFBG forms a high-Q optical resonator, and its reflection spectrum features an extremely narrow notch that can be used for highly sensitive refractive index measurement. The etched πFBG demonstrated here has a diameter of ~9.3 μm and a length of only 7 mm, leading to a refractive index responsivity of 2.9 nm/RIU (RIU: refractive index unit) at an ambient refractive index of 1.318. The reflection spectrum of the etched πFBG features an extremely narrow notch with a linewidth of only 2.1 pm in water, centered at ~1550 nm, corresponding to a Q-factor of 7.4 × 10^5, which allows for potentially significantly improved sensitivity over refractometers based on regular fiber Bragg gratings.
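The reported Q-factor follows directly from the resonance wavelength and the notch linewidth; a quick check of the arithmetic:

```python
# Q-factor of the etched pi-FBG notch: Q = lambda / FWHM
lam = 1550e-9       # resonance wavelength, m
fwhm = 2.1e-12      # notch linewidth, m
Q = lam / fwhm
print(f"Q = {Q:.2e}")   # ~7.4e5, matching the reported value
```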
A single pH fluorescent probe for biosensing and imaging of extreme acidity and extreme alkalinity.
Chao, Jian-Bin; Wang, Hui-Juan; Zhang, Yong-Bin; Li, Zhi-Qing; Liu, Yu-Hong; Huo, Fang-Jun; Yin, Cai-Xia; Shi, Ya-Wei; Wang, Juan-Juan
2017-07-04
A simple tailor-made pH fluorescent probe, 2-benzothiazole (N-ethylcarbazole-3-yl) hydrazone (Probe), is facilely synthesized by the condensation of 2-hydrazinobenzothiazole with N-ethylcarbazole-3-formaldehyde, and serves as a useful fluorescent probe for quantitatively monitoring extremely acidic and extremely alkaline pH. pH titrations indicate that Probe displays a remarkable emission enhancement with a pKa of 2.73 and responds linearly to minor pH fluctuations within the extremely acidic range of 2.21-3.30. Interestingly, Probe also exhibits strong pH-dependent characteristics with a pKa of 11.28 and a linear response in the extremely alkaline range of 10.41-12.43. In addition, Probe shows a large Stokes shift of 84 nm under extremely acidic and alkaline conditions, high selectivity, excellent sensitivity, good water solubility and fine stability, all of which are favorable for intracellular pH imaging. The probe was further successfully applied to image fluctuations of extremely acidic and alkaline pH values in E. coli cells. Copyright © 2017 Elsevier B.V. All rights reserved.
Zhou, Hong; Liu, Jing; Xu, Jing-Juan; Zhang, Shu-Sheng; Chen, Hong-Yuan
2018-03-21
Modern optical detection technology plays a critical role in current clinical detection due to its high sensitivity and accuracy. However, even higher detection sensitivity is now required, driven by the clinical need for early detection and diagnosis of malignant tumors, which is crucial for tumor therapy. Isothermal amplification of nucleic acids opens up avenues for meeting this requirement. Recent reports have shown that nucleic acid amplification-assisted modern optical sensing interfaces achieve satisfactory sensitivity and accuracy, high speed, and specificity. Compared with isothermal amplification designed to work entirely in a solution system, solid biosensing interfaces demonstrate better stability and sensitivity, owing to their ease of separation from the reaction mixture and the better signal transduction on these optical nano-biosensing interfaces. The flexibility and designability available in constructing these nano-biosensing interfaces also make them promising for the ultrasensitive detection of cancer. In this review, we describe the construction of the burgeoning number of optical nano-biosensing interfaces assisted by nucleic acid amplification strategies, and provide insightful views on: (1) approaches to the smart fabrication of optical nano-biosensing interfaces, (2) biosensing mechanisms based on nucleic acid amplification, and (3) the newest strategies and future perspectives.
Accuracy and sensitivity analysis on seismic anisotropy parameter estimation
NASA Astrophysics Data System (ADS)
Yan, Fuyong; Han, De-Hua
2018-04-01
There is significant uncertainty in measuring Thomsen's parameter δ in the laboratory, even when the dimensions and orientations of the rock samples are known, and still greater challenges are expected when estimating seismic anisotropy parameters from field seismic data. Based on Monte Carlo simulation of a vertical transversely isotropic layer-cake model, using a database of laboratory anisotropy measurements from the literature, we apply the commonly used quartic nonhyperbolic reflection moveout equation to estimate the seismic anisotropy parameters and test its accuracy and sensitivity to source-receiver offset, vertical interval velocity error, and time-picking error. The testing results show that the methodology works perfectly for noise-free synthetic data, even with short spread lengths. However, the method is extremely sensitive to time-picking errors caused by even mild random noise, and it requires the spread length to be greater than the depth of the reflection event. The uncertainties increase rapidly for deeper layers, and the estimated anisotropy parameters can be very unreliable for a layer with more than five overlying layers. It is possible for an isotropic formation to be misinterpreted as strongly anisotropic. The sensitivity analysis should provide useful guidance on how to group reflection events and build a suitable geological model for anisotropy parameter inversion.
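A minimal illustration of this noise sensitivity, assuming a generic quartic moveout parameterization t^2 = a0 + a1*x^2 + a2*x^4 (a sketch, not the authors' exact equation; the layer depth, velocity, and quartic coefficient are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
t0, v = 1.0, 2.0            # zero-offset time (s) and NMO velocity (km/s)
a2_true = -2.0e-4           # small quartic coefficient, s^2/km^4 (anisotropy proxy)

def fit_quartic(xmax_km, sigma_t):
    """Least-squares fit of t^2 = a0 + a1*x^2 + a2*x^4 to (noisy) traveltime picks."""
    x = np.linspace(0.1, xmax_km, 60)
    t = np.sqrt(t0**2 + x**2 / v**2 + a2_true * x**4) + rng.normal(0.0, sigma_t, x.size)
    A = np.vstack([np.ones_like(x), x**2, x**4]).T
    coef, *_ = np.linalg.lstsq(A, t**2, rcond=None)
    return coef[2]           # estimated quartic coefficient

print(fit_quartic(2.0, 0.0))     # noise-free, spread ~ 2x depth: recovers a2
print(fit_quartic(1.0, 5e-4))    # 0.5 ms picking noise, short spread: degrades badly
```

The quartic term only becomes distinguishable from the hyperbolic term at large offsets, which is why the estimate collapses when the spread is short and the picks are noisy.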
Molecular Tools for the Detection of Nitrogen Cycling Archaea
Rusch, Antje
2013-01-01
Archaea are widespread in extreme and temperate environments, and cultured representatives cover a broad spectrum of metabolic capacities, which sets them up for potentially major roles in the biogeochemistry of their ecosystems. The detection, characterization, and quantification of archaeal functions in mixed communities require Archaea-specific primers or probes for the corresponding metabolic genes. Five pairs of degenerate primers were designed to target archaeal genes encoding key enzymes of nitrogen cycling: nitrite reductases NirA and NirB, nitrous oxide reductase (NosZ), nitrogenase reductase (NifH), and nitrate reductases NapA/NarG. Sensitivity towards their archaeal target gene, phylogenetic specificity, and gene specificity were evaluated in silico and in vitro. Owing to their moderate sensitivity/coverage, the novel nirB-targeted primers are suitable for pure culture studies only. The nirA-targeted primers showed sufficient sensitivity and phylogenetic specificity, but poor gene specificity. The primers designed for amplification of archaeal nosZ performed well in all 3 criteria; their discrimination against bacterial homologs appears to be weakened when Archaea are strongly outnumbered by bacteria in a mixed community. The novel nifH-targeted primers showed high sensitivity and gene specificity, but failed to discriminate against bacterial homologs. Despite limitations, 4 of the new primer pairs are suitable tools in several molecular methods applied in archaeal ecology. PMID:23365509
Dahabreh, Issa J.; Sheldrick, Radley C.; Paulus, Jessica K.; Chung, Mei; Varvarigou, Vasileia; Jafri, Haseeb; Rassen, Jeremy A.; Trikalinos, Thomas A.; Kitsios, Georgios D.
2012-01-01
Aims: Randomized controlled trials (RCTs) are the gold standard for assessing the efficacy of therapeutic interventions because randomization protects from biases inherent in observational studies. Propensity score (PS) methods, proposed as a potential solution to confounding of the treatment-outcome association, are widely used in observational studies of therapeutic interventions for acute coronary syndromes (ACS). We aimed to systematically assess agreement between observational studies using PS methods and RCTs on therapeutic interventions for ACS.

Methods and results: We searched for observational studies of interventions for ACS that used PS methods to estimate treatment effects on short- or long-term mortality. Using a standardized algorithm, we matched observational studies to RCTs based on patients' characteristics, interventions, and outcomes ('topics'), and we compared estimates of treatment effect between the two designs. When multiple observational studies or RCTs were identified for the same topic, we performed a meta-analysis and used the summary relative risk for comparisons. We matched 21 observational studies investigating 17 distinct clinical topics to 63 RCTs (median = 3 RCTs per observational study) for short-term (7 topics) and long-term (10 topics) mortality. Estimates from PS analyses differed statistically significantly from randomized evidence in two instances; however, observational studies reported more extreme beneficial treatment effects compared with RCTs in 13 of 17 instances (P = 0.049). Sensitivity analyses limited to large RCTs and using alternative meta-analysis models yielded similar results.

Conclusion: For the treatment of ACS, observational studies using PS methods produce treatment effect estimates that are of more extreme magnitude compared with those from RCTs, although the differences are rarely statistically significant. PMID:22711757
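A minimal sketch of the PS workflow the review evaluates: logistic-regression propensity scores followed by 1:1 nearest-neighbour matching, on synthetic confounded data with an assumed true treatment effect of -0.5 (all model values invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=(n, 2))                        # confounders (e.g. age, severity)
p_treat = 1.0 / (1.0 + np.exp(-(x[:, 0] + x[:, 1])))
treated = rng.random(n) < p_treat                  # sicker patients treated more often
# outcome: assumed true treatment effect of -0.5, confounded by x
y = x[:, 0] + x[:, 1] - 0.5 * treated + rng.normal(0.0, 0.1, n)

naive = y[treated].mean() - y[~treated].mean()     # biased by confounding

# logistic-regression propensity scores, fitted by plain gradient ascent
Xd = np.hstack([np.ones((n, 1)), x])
w = np.zeros(3)
for _ in range(500):
    ps = 1.0 / (1.0 + np.exp(-Xd @ w))
    w += 0.1 * Xd.T @ (treated - ps) / n           # log-likelihood gradient step

# 1:1 nearest-neighbour matching of each treated unit to a control on the PS
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(ps[t_idx, None] - ps[None, c_idx]).argmin(axis=1)]
matched = (y[t_idx] - y[matches]).mean()

print(round(naive, 2), round(matched, 2))   # matching pulls the estimate toward -0.5
```

Residual bias from imperfect matches in the tails of the PS distribution is one mechanism by which PS estimates can remain more extreme than randomized evidence, as the review observes.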
Uncertainty Analysis of Decomposing Polyurethane Foam
NASA Technical Reports Server (NTRS)
Hobbs, Michael L.; Romero, Vicente J.
2000-01-01
Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. 
Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
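The constrained Latin hypercube step can be sketched with SciPy's stratified sampler; the three-parameter linear surrogate below and its sensitivity coefficients are invented stand-ins for the 25-parameter foam model, not the actual response surface:

```python
import numpy as np
from scipy.stats import qmc

def front_velocity(theta):
    """Toy linear surrogate for the decomposition front velocity
    (hypothetical sensitivities per scaled input; not the foam model)."""
    return 1.0 + 0.50 * theta[:, 0] + 0.10 * theta[:, 1] - 0.30 * theta[:, 2]

sampler = qmc.LatinHypercube(d=3, seed=42)
u = sampler.random(n=1000)                                   # stratified on [0,1)^3
theta = qmc.scale(u, l_bounds=[-1, -1, -1], u_bounds=[1, 1, 1])

v = front_velocity(theta)
print(v.mean(), v.std())   # near the analytic mean 1.0 and std sqrt(0.35/3)
```

Because the surrogate is evaluated directly on samples, no derivatives of the response are needed, which is why the LHS route is insensitive to the numerical noise that plagued the mean value method.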
Ghosh, S B; Bhattacharya, K; Nayak, S; Mukherjee, P; Salaskar, D; Kale, S P
2015-09-05
Definitive identification of microorganisms, including pathogenic and non-pathogenic bacteria, is extremely important for a wide variety of applications including food safety, environmental studies, bio-terrorism threats, microbial forensics, criminal investigations and, above all, disease diagnosis. Although extremely powerful techniques such as those based on PCR and microarrays exist, they require sophisticated laboratory facilities along with elaborate sample preparation by trained researchers. Among spectroscopic techniques, FTIR was used in the 1980s and 90s for bacterial identification. In the present study, five species of Bacillus were isolated from the aerobic predigester chamber of the Nisargruna Biogas Plant (NBP) and were identified to the species level by biochemical and molecular biological (16S ribosomal DNA sequence) methods. These isolates were further examined by solid-state spectroscopic absorbance measurements over a wide range of electromagnetic radiation (wavelengths of 200 nm to 25,000 nm) encompassing the UV, visible, near-infrared and infrared regions. UV-Vis and NIR spectroscopy were performed on dried bacterial cell suspensions on silicon wafers in specular mode, while FTIR was performed on KBr pellets containing the bacterial cells. Consistent and reproducible species-specific spectra were obtained, and sensitivity down to a level of 1000 cells was observed in FTIR with a DTGS detector. This clearly shows the potential of solid-state spectroscopic techniques for simple, easy-to-implement, reliable and sensitive detection of bacteria from environmental samples. Copyright © 2015 Elsevier B.V. All rights reserved.
Ultrasensitive Detection of Single-Walled Carbon Nanotubes Using Surface Plasmon Resonance.
Jang, Daeho; Na, Wonhwi; Kang, Minwook; Kim, Namjoon; Shin, Sehyun
2016-01-05
Because single-walled carbon nanotubes (SWNTs) are potentially hazardous materials, linked to cancers and other diseases, any leakage of SWNTs into an aquatic medium such as drinking water would pose a major public health threat. To address this problem, in the present study a highly sensitive, quantitative method for detecting SWNTs in aqueous solution was developed using surface plasmon resonance (SPR) spectroscopy. For highly sensitive and specific detection, strong-affinity biotin-streptavidin conjugation was adopted in the SPR sensing mechanism. During the pretreatment process, the SWNT surface was functionalized and hydrophilized using a thymine-chain-based biotinylated single-strand DNA linker (B-ssDNA) and bovine serum albumin (BSA). The pretreated SWNTs were captured on a sensing film whose surface was immobilized with streptavidin on biotinylated gold film, and the captured SWNTs were measured in real time using SPR spectroscopy. Specific binding of SWNTs was verified through several validation experiments. The present method is capable of detecting SWNTs at concentrations as low as 100 fg/mL, the lowest level reported thus far for carbon-nanotube detection. In addition, the SPR sensor showed a linear response within the range of 100 pg/mL to 200 ng/mL. These findings imply that the present SPR sensing method can detect extremely low levels of SWNTs in an aquatic environment with high sensitivity and specificity, and thus any potential leakage of SWNTs into an aquatic environment can be precisely monitored within a couple of hours.
Analysis of the dependence of extreme rainfalls
NASA Astrophysics Data System (ADS)
Padoan, Simone; Ancey, Christophe; Parlange, Marc
2010-05-01
The aim of spatial analysis is to quantitatively describe the behavior of environmental phenomena such as precipitation levels, wind speed or daily temperatures. A number of generic approaches to spatial modeling have been developed [1], but these are not necessarily ideal for handling extremal aspects, given their focus on mean process levels. The areal modelling of the extremes of a natural process observed at points in space is important in environmental statistics; for example, understanding extremal spatial rainfall is crucial in flood protection. In light of recent concerns over climate change, the use of robust mathematical and statistical methods for such analyses has grown in importance. Multivariate extreme value models and the class of max-stable processes [2] have a similar asymptotic motivation to the univariate generalized extreme value (GEV) distribution, but provide a general approach to modeling extreme processes that incorporates temporal or spatial dependence. Statistical methods for max-stable processes and data analyses of practical problems are discussed by [3] and [4]. This work illustrates methods for the statistical modelling of spatial extremes and gives examples of their use by means of an extremal analysis of real Swiss precipitation data. [1] Cressie, N. A. C. (1993). Statistics for Spatial Data. Wiley, New York. [2] de Haan, L. and Ferreira, A. (2006). Extreme Value Theory: An Introduction. Springer, USA. [3] Padoan, S. A., Ribatet, M. and Sisson, S. A. (2009). Likelihood-Based Inference for Max-Stable Processes. Journal of the American Statistical Association, Theory & Methods. In press. [4] Davison, A. C. and Gholamrezaee, M. (2009). Geostatistics of extremes. Journal of the Royal Statistical Society, Series B. To appear.
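For the univariate GEV building block mentioned above, a fit to annual maxima can be sketched with SciPy (synthetic rainfall data with invented parameters; note that `genextreme` uses c = -ξ relative to the usual GEV shape convention):

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(7)
# synthetic annual-maximum daily rainfall (mm), standing in for station records
annual_max = genextreme.rvs(c=-0.1, loc=50.0, scale=12.0, size=200, random_state=rng)

# maximum-likelihood GEV fit and the 100-year return level
# (the 1 - 1/100 quantile of the annual-maximum distribution)
c, loc, scale = genextreme.fit(annual_max)
rl_100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(round(loc, 1), round(rl_100, 1))
```

Max-stable process models extend exactly this marginal machinery with a spatial dependence structure, which is the part the site-by-site GEV fit cannot capture.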
Extreme data compression for the CMB
NASA Astrophysics Data System (ADS)
Zablocki, Alan; Dodelson, Scott
2016-04-01
We apply Karhunen-Loève methods to cosmic microwave background (CMB) data sets, and show that we can recover the input cosmology and obtain the marginalized likelihoods in Λ cold dark matter cosmologies in under a minute, much faster than Markov chain Monte Carlo methods. This is achieved by forming a linear combination of the power spectra at each multipole l, and solving a system of simultaneous equations such that the Fisher matrix is locally unchanged. Instead of carrying out a full likelihood evaluation over the whole parameter space, we need to evaluate the likelihood only for the parameter of interest, with the data compression effectively marginalizing over all other parameters. The weighting vectors contain insight about the physical effects of the parameters on the CMB anisotropy power spectrum Cl. The shape and amplitude of these vectors give an intuitive feel for the physics of the CMB, the sensitivity of the observed spectrum to cosmological parameters, and the relative sensitivity of different experiments to cosmological parameters. We test this method on exact theory Cl as well as on a Wilkinson Microwave Anisotropy Probe (WMAP)-like CMB data set generated from a random realization of a fiducial cosmology, comparing the compression results to those from a full likelihood analysis using CosmoMC. After showing that the method works, we apply it to the temperature power spectrum from the WMAP seven-year data release, and discuss the successes and limitations of our method as applied to a real data set.
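The idea of weighting the data so that the Fisher information for a parameter survives compression to a single number can be sketched for a toy one-parameter linear model (MOPED-style weights proportional to C^-1 dμ/dθ; the spectrum shape and noise level below are invented, not WMAP values):

```python
import numpy as np

rng = np.random.default_rng(3)
ell = np.arange(2, 502).astype(float)
shape = 1000.0 / (ell * (ell + 1))       # toy model: C_ell = theta * shape
sigma = 0.1 * shape                       # diagonal noise standard deviations
theta_true = 1.3
data = theta_true * shape + rng.normal(0.0, sigma)

# Fisher-preserving weight vector: b proportional to C^-1 * dmu/dtheta
# (here dmu/dtheta = shape, C = diag(sigma^2))
b = shape / sigma**2
y = b @ data                              # 500 numbers compressed to one
theta_hat = y / (b @ shape)               # unbiased estimator from compressed data

print(round(theta_hat, 2))                # close to theta_true
```

For one parameter and Gaussian noise this compressed statistic is lossless: its variance equals the inverse Fisher information of the full data vector, which is why the likelihood can then be evaluated over a single number instead of the whole spectrum.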
A Glycomics Platform for the Analysis of Permethylated Oligosaccharide Alditols
Costello, Catherine E.; Contado-Miller, Joy May; Cipollo, John F.
2007-01-01
This communication reports the development of an LC/MS platform for the analysis of permethylated oligosaccharide alditols that, for the first time, demonstrates routine online oligosaccharide isomer separation of these compounds prior to introduction into the mass spectrometer. The method leverages a high resolution liquid chromatography system with the superior fragmentation pattern characteristics of permethylated oligosaccharide alditols that are dissociated under low-energy collision conditions using quadrupole orthogonal time-of-flight (QoTOF) instrumentation and up to pseudo MS3 mass spectrometry. Glycoforms, including isomers, are readily identified and their structures assigned. The isomer-specific spectra include highly informative cross-ring and elimination fragments, branch position specific signatures and glycosidic bond fragments, thus facilitating linkage, branch and sequence assignment. The method is sensitive and can be applied using as little as 40 fmol of derivatized oligosaccharide. Because permethylation renders oligosaccharides nearly chemically equivalent in the mass spectrometer, the method is semi-quantitative and, in this regard, is comparable to methods reported using high field NMR and capillary electrophoresis. In this post-genomic age, the importance of glycosylation in biological processes has become clear. The nature of many of the important questions in glycomics is such that sample material is often extremely limited, thus necessitating the development of highly sensitive methods for rigorous structural assignment of the oligosaccharides in complex mixtures. The glycomics platform presented here fulfills these criteria and should lead to more facile glycomics analyses. PMID:17719235
Chemical imaging analysis of the brain with X-ray methods
NASA Astrophysics Data System (ADS)
Collingwood, Joanna F.; Adams, Freddy
2017-04-01
Cells employ various metal and metalloid ions to augment the structure and the function of proteins and to assist with vital biological processes. In the brain they mediate biochemical processes, and disrupted metabolism of metals may be a contributing factor in neurodegenerative disorders. In this tutorial review we will discuss the particular role of X-ray methods for elemental imaging analysis of accumulated metal species and metal-containing compounds in biological materials, in the context of post-mortem brain tissue. X-rays have the advantage that they have a short wavelength and can penetrate through a thick biological sample. Many of the X-ray microscopy techniques that provide the greatest sensitivity and specificity for trace metal concentrations in biological materials are emerging at synchrotron X-ray facilities. Here, the extremely high flux available across a wide range of soft and hard X-rays, combined with state-of-the-art focusing techniques and ultra-sensitive detectors, makes it viable to undertake direct imaging of a number of elements in brain tissue. The different methods for synchrotron imaging of metals in brain tissues at regional, cellular, and sub-cellular spatial resolution are discussed. Methods covered include X-ray fluorescence for elemental imaging, X-ray absorption spectrometry for speciation imaging, X-ray diffraction for structural imaging, phase contrast for enhanced contrast imaging and scanning transmission X-ray microscopy for spectromicroscopy. Two- and three-dimensional (confocal and tomographic) imaging methods are considered as well as the correlation of X-ray microscopy with other imaging tools.
Optical imaging of RNAi-mediated silencing of cancer
NASA Astrophysics Data System (ADS)
Ochiya, Takahiro; Honma, Kimi; Takeshita, Fumitaka; Nagahara, Shunji
2008-02-01
RNAi has rapidly become a powerful tool for drug target discovery and validation in an in vitro culture system and, consequently, interest is rapidly growing for extension of its application to in vivo systems, such as animal disease models and human therapeutics. Cancer is one obvious application for RNAi therapeutics, because abnormal gene expression is thought to contribute to the pathogenesis and maintenance of the malignant phenotype of cancer, and thereby many oncogenes and cell-signaling molecules present enticing drug target possibilities. RNAi, potent and specific, could silence tumor-related genes and would appear to be a rational approach to inhibit tumor growth. In subsequent in vivo studies, the appropriate cancer model must be developed for an evaluation of siRNA effects on tumors. How to evaluate the effect of siRNA in an in vivo therapeutic model is also important. Accelerating the analyses of these models and improving their predictive value through whole animal imaging methods, which reveal cancer inhibition in real time and are sensitive to subtle changes, are crucial for rapid advancement of these approaches. Bioluminescent imaging is one of these optically based imaging methods that enable rapid in vivo analyses of a variety of cellular and molecular events with extreme sensitivity.
Fluorescence-Guided Resection of Malignant Glioma with 5-ALA
Kaneko, Sadahiro
2016-01-01
Malignant gliomas are extremely difficult to treat, and no specific curative treatment exists. On the other hand, photodynamic medicine represents a promising technique for neurosurgeons in the treatment of malignant glioma. The resection rate of malignant glioma has increased from 40% to 80% owing to 5-aminolevulinic acid-photodynamic diagnosis (ALA-PDD). Furthermore, ALA is very useful because it has no serious complications. Based on previous research, it is apparent that protoporphyrin IX (PpIX) accumulates abundantly in malignant glioma tissues after ALA administration. Moreover, it is evident that the mechanism underlying PpIX accumulation in malignant glioma tissues involves an abnormality in porphyrin-heme metabolism, specifically decreased ferrochelatase enzyme activity. During resection surgery, the macroscopic fluorescence of PpIX visible to the naked eye is more sensitive than magnetic resonance imaging, and real-time spectral monitoring of PpIX is the most sensitive method. In the future, chemotherapy with new anticancer agents, immunotherapy, and new methods of radiotherapy and gene therapy will be developed; however, ALA will play a key role in malignant glioma treatment before the development of these new treatments. In this paper, we provide an overview and present the results of our clinical research on ALA-PDD. PMID:27429612
Evolution of a genetic polymorphism with climate change in a Mediterranean landscape
Thompson, John; Charpentier, Anne; Bouguet, Guillaume; Charmasson, Faustine; Roset, Stephanie; Buatois, Bruno; Vernet, Philippe; Gouyon, Pierre-Henri
2013-01-01
Many species show changes in distribution and phenotypic trait variation in response to climatic warming. Evidence of genetically based trait responses to climate change is, however, less common. Here, we detected evolutionary variation in the landscape-scale distribution of a genetically based chemical polymorphism in Mediterranean wild thyme (Thymus vulgaris) in association with modified extreme winter freezing events. By comparing current data on morph distribution with that observed in the early 1970s, we detected a significant increase in the proportion of morphs that are sensitive to winter freezing. This increase in frequency was observed in 17 of the 24 populations in which, since the 1970s, annual extreme winter freezing temperatures have risen above the thresholds that cause mortality of freezing-sensitive morphs. Our results provide an original example of rapid ongoing evolutionary change associated with relaxed selection (less extreme freezing events) on a local landscape scale. In species whose distribution and genetic variability are shaped by strong selection gradients, there may be little time lag associated with their ecological and evolutionary response to long-term environmental change. PMID:23382198
The Extreme Ultraviolet Explorer Mission
NASA Technical Reports Server (NTRS)
Bowyer, S.; Malina, R. F.
1991-01-01
The Extreme Ultraviolet Explorer (EUVE) mission, currently scheduled for launch in September 1991, is described. The primary purpose of the mission is to survey the celestial sphere for astronomical sources of extreme ultraviolet (EUV) radiation with the use of three EUV telescopes, each sensitive to a different segment of the EUV band. A fourth telescope is planned to perform a high-sensitivity search of a limited sample of the sky in the shortest wavelength bands. The all-sky survey is planned to be carried out in the first six months of the mission in four bands, or colors: 70-180 A, 170-250 A, 400-600 A, and 500-700 A. The second phase of the mission is devoted to spectroscopic observations of EUV sources. A high-efficiency grazing-incidence spectrometer using variable line-space gratings is planned to provide spectral data with about 1-A resolution. An end-to-end model of the mission, from a stellar source to the resulting scientific data, is presented. Hypothetical data from astronomical sources were processed through this model and are shown.
Attributing extreme precipitation in the Black Sea region to sea surface warming
NASA Astrophysics Data System (ADS)
Meredith, Edmund; Semenov, Vladimir; Maraun, Douglas; Park, Wonsun; Chernokulsky, Alexander
2016-04-01
Higher sea surface temperatures (SSTs) warm and moisten the overlying atmosphere, increasing the low-level atmospheric instability, the moisture available to precipitating systems and, hence, the potential for intense convective systems. Both the Mediterranean and Black Sea regions have seen a steady increase in summertime SSTs since the early 1980s, by over 2 K in places. This raises the question of how this SST increase has affected convective precipitation extremes in the region, and through which mechanisms any effects are manifested. In particular, the Black Sea town of Krymsk suffered an unprecedented precipitation extreme in July 2012, which may have been influenced by Black Sea warming, causing over 170 deaths. To address this question, we adopt two distinct modelling approaches to event attribution and compare their relative merits. In the first, we use the traditional probabilistic event attribution approach involving global climate model ensembles representative of the present and a counterfactual past climate where regional SSTs have not increased. In the second, we use the conditional event attribution approach, taking the 2012 Krymsk precipitation extreme as a showcase example. Under the second approach, we carry out ensemble sensitivity experiments of the Krymsk event at convection-permitting resolution with the WRF regional model, and test the sensitivity of the event to a range of SST forcings. Both experiments show the crucial role of recent Black Sea warming in amplifying the 2012 Krymsk precipitation extreme. In the conditional event attribution approach, though, the explicit simulation of convective processes provides detailed insight into the physical mechanisms behind the extremeness of the event, revealing the dominant role of dynamical (i.e. static stability and vertical motions) over thermodynamical (i.e. increased atmospheric moisture) changes. 
Additionally, the wide range of SST states tested in the regional setup, which would be infeasible under the global modelling approach, reveals that the intensity of the Krymsk event responds highly nonlinearly to Black Sea warming, and suggests a role for regional SST thresholds in more intense coastal convective extremes.
Autoerythrocyte sensitization syndrome presenting with general neurodermatitis
Oh, In Young; Ko, Eun Jung
2013-01-01
Autoerythrocyte sensitization syndrome (AES) was first described by Gardner and Diamond in 1955, when four women with painful bruising were depicted. Patients with AES typically present with the development of recurrent, spontaneous, painful ecchymosis, frequently preceded by a prodrome of pain or itching of the skin. The patients are sensitive to their own red blood cells injected intradermally, and underlying coagulopathies are thought to be absent. We introduce a 70-year-old woman presenting with recurrent episodes of painful bruising on the trunk and extremities. PMID:23956968
ERIC Educational Resources Information Center
Khakzad, Mohammad Reza; Javanbakht, Maryam; Shayegan, Mohammad Reza; Kianoush, Sina; Omid, Fatemeh; Hojati, Maryam; Meshkat, Mojtaba
2012-01-01
C-reactive protein (CRP) is a beneficial diagnostic test for the evaluation of inflammatory response. Extremely low levels of CRP can be detected using high-sensitivity CRP (hs-CRP) test. A considerable body of evidence has demonstrated that inflammatory response has an important role in the pathophysiology of autism. In this study, we evaluated…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, H.T.; Bachalo, W.D.
1984-10-01
The feasibility of developing a particle-sizing instrument for in-situ measurements in industrial environments, based on the method of optical heterodyne or coherent detection, was investigated. The instrument, a coherent optical particle spectrometer, or COPS, is potentially capable of measuring several important particle parameters, such as particle size, number density, and speed, because of the versatility of the optical heterodyne method. Water droplets generated by an aerosol/particle generator were used to test the performance of the COPS. Study findings have shown that the optical setup of the COPS is extremely sensitive to even minute mechanical or acoustic vibrations. At the optimal setup, the COPS performs satisfactorily and has a more than adequate signal-to-noise ratio even with a 0.5 mW He-Ne laser.
Logit-normal mixed model for Indian monsoon precipitation
NASA Astrophysics Data System (ADS)
Dietz, L. R.; Chatterjee, S.
2014-09-01
Describing the nature and variability of Indian monsoon precipitation is a topic of much debate in the current literature. We suggest the use of a generalized linear mixed model (GLMM), specifically, the logit-normal mixed model, to describe the underlying structure of this complex climatic event. Four GLMM algorithms are described and simulations are performed to vet these algorithms before applying them to the Indian precipitation data. The logit-normal model was applied to light, moderate, and extreme rainfall. Findings indicated that physical constructs were preserved by the models, and random effects were significant in many cases. We also found that GLMM estimation methods were sensitive to tuning parameters and assumptions; we therefore recommend the use of multiple methods in applications. This work provides a novel use of GLMM and promotes its addition to the gamut of tools for analysis in studying climate phenomena.
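The logit-normal building block behind such a GLMM can be sketched in a few lines: a latent Gaussian variable (fixed effect plus, in the full model, normally distributed random effects) is pushed through the inverse logit, yielding a variable bounded in (0, 1) suitable for normalized precipitation quantities. The parameter values here are illustrative only, not fitted to the Indian monsoon data:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = -1.0, 0.8              # illustrative fixed-effect mean and spread

def expit(z):
    """Inverse logit, mapping the real line onto (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# Latent Gaussian draws; in the full GLMM, station/year random effects
# would contribute additional normal terms to z before the transform.
z = rng.normal(mu, sigma, size=100_000)
x = expit(z)                       # logit-normal draws, bounded in (0, 1)

print(round(float(x.mean()), 3))   # Monte Carlo mean of the bounded variable
```

Because the transform is nonlinear, the mean of x is not simply expit(mu), which is one reason estimation for these models is delicate and sensitive to algorithmic choices.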
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard, Patrick, E-mail: patrjr@uw.edu; Phillips, Mark; Smith, Wade
Purpose: Create a cost-effectiveness model comparing preoperative intensity modulated radiation therapy (IMRT) versus 3-dimensional conformal radiation therapy (3DCRT) for extremity soft tissue sarcomas. Methods and Materials: Input parameters included 5-year local recurrence rates, rates of acute wound adverse events, and chronic toxicities (edema, fracture, joint stiffness, and fibrosis). Health-state utilities were used to calculate quality-adjusted life years (QALYs). Overall treatment costs per QALY or incremental cost-effectiveness ratio (ICER) were calculated. Roll-back analysis was performed using average costs and utilities to determine the baseline preferred radiation technique. One-way, 2-way, and probabilistic sensitivity analyses (PSA) were performed for input parameters with the largest impact on the ICER. Results: Overall treatment costs were $17,515.58 for 3DCRT compared with $22,920.51 for IMRT. The effectiveness was higher for IMRT (3.68 QALYs) than for 3DCRT (3.35 QALYs). The baseline ICER for IMRT was $16,842.75/QALY, making it the preferable treatment. The ICER was most sensitive to the probability of local recurrence, upfront radiation costs, local recurrence costs, certain utilities (no toxicity/no recurrence, grade 1 toxicity/no local recurrence, grade 4 toxicity/no local recurrence), and life expectancy. Dominance patterns emerged when the cost of 3DCRT exceeded $15,532.05 (IMRT dominates) or the life expectancy was under 1.68 years (3DCRT dominates). Furthermore, preference patterns changed based on the rate of local recurrence (threshold: 13%). The PSA results demonstrated that IMRT was the preferred cost-effective technique for 64% of trials compared with 36% for 3DCRT. Conclusions: Based on our model, IMRT is the preferred technique by lowering rates of local recurrence, severe toxicities, and improving QALYs. From a third-party payer perspective, IMRT should be a supported approach for extremity soft tissue sarcomas.
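The headline comparison reduces to the standard ICER formula, incremental cost divided by incremental effectiveness. A minimal sketch using the rounded costs and QALYs reported above (the helper name `icer` is ours; the small difference from the reported baseline $16,842.75/QALY presumably reflects rounding of the published inputs and the roll-back details of the full decision model):

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Per-patient costs and effectiveness (QALYs) as reported in the abstract.
cost_3dcrt, qaly_3dcrt = 17_515.58, 3.35
cost_imrt, qaly_imrt = 22_920.51, 3.68

ratio = icer(cost_imrt, cost_3dcrt, qaly_imrt, qaly_3dcrt)
print(round(ratio, 2))  # ~16,379 $/QALY from these rounded inputs
```

Since this falls well below common willingness-to-pay thresholds (e.g. $50,000/QALY), IMRT is preferred at baseline, consistent with the roll-back result.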
NASA Astrophysics Data System (ADS)
Gallus, William; Parodi, Antonio; Miglietta, Marcello; Maugeri, Maurizio
2017-04-01
As the global climate has warmed in recent decades, interest has grown in the impacts on extreme events associated with thunderstorms such as tornadoes and intense rainfall that can cause flash flooding. Because warmer temperatures allow the atmosphere to contain larger values of water vapor, it is generally accepted that short-term rainfall may become more intense in a future warmer climate. Regarding tornadoes, it is more difficult to say what might happen since although increased temperatures and humidity in the lowest part of the troposphere should increase thermodynamic instability, allowing for stronger thunderstorm updrafts, vertical wind shear necessary for storm-scale rotation may decrease as the pole to equator temperature gradient weakens. The Mediterranean Sea is an important source for moisture that fuels thunderstorms in Italy, and it has been warming faster than most water bodies in recent decades. The present study uses three methods to gain preliminary insight into the role that the warming Mediterranean may have on tornadoes and thunderstorms with intense rainfall in Italy. First, a historical archive of Italian tornadoes has been updated for the 1990s, and it will be used along with other data from the European Severe Weather Database to discuss possible trends in tornado occurrence. Second, convection-allowing Weather Research and Forecasting (WRF) model simulations have been performed for three extreme events to examine sensitivity to both the sea surface temperatures and other model parameters. These events include a flash flood-producing storm event near Milan, a non-tornadic severe hail event in far northeastern Italy, and the Mira EF-4 tornado of July 2015. Sensitivities in rainfall amount, radar reflectivity and storm structure, and storm rotation will be discussed. 
Finally, changes in the frequency of intense mesoscale convective system events in and near the Ligurian Sea, inferred from the presence of strong convergence lines in EXPRESS-Hydro regional climate model output, will be examined.
Torgomyan, Heghine; Trchounian, Armen
2015-01-01
The effects of extremely high frequency electromagnetic irradiation and antibiotics on Escherichia coli can create new opportunities for applications in different areas: medicine, agriculture, and the food industry. It was previously shown that irradiation changes bacterial sensitivity to antibiotics. In this work, we present results showing that irradiating the antibiotics before adding them to the growth medium produced a stronger bactericidal action than non-irradiated antibiotics. The selected antibiotics (tetracycline, kanamycin, chloramphenicol, and ceftriaxone) were from different groups. The antibiotics were irradiated at low intensity at a frequency of 53 GHz for 1 h. The E. coli growth properties, lag-phase duration and specific growth rate, were markedly changed. The enhanced bacterial sensitivity to irradiated antibiotics is similar to the effects of antibiotics at higher concentrations.
NASA Astrophysics Data System (ADS)
Asano, Atsushi; Maeyoshi, Yuta; Watanabe, Shogo; Saeki, Akinori; Sugimoto, Masaki; Yoshikawa, Masahito; Nanto, Hidehito; Tsukuda, Satoshi; Tanaka, Shun-Ichiro; Seki, Shu
2013-03-01
Cyclodextrins (CDs), which selectively host a wide range of guest molecules in their hydrophobic cavity, were directly fabricated into 1-dimensional nanostructures with an extremely wide surface area by the single-particle nanofabrication technique (SPNT) in the present paper. Copolymers of acrylamide and mono(6-allyl)-β-CD were synthesized, and the crosslinking reaction of the polymer alloys with poly(4-bromostyrene) (PBrS) in SPNT gave nanowires on the quartz substrate with a high number density of 5×10^9 cm^-2. Quartz crystal microbalance (QCM) measurements suggested a 320-fold higher sensitivity for formic acid vapor adsorption on the nanowire-fabricated surfaces compared with that of the thin solid film of PBrS, due to the incorporation of CD units and the extremely wide surface area of the nanowires.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flinn, D.G.; Hall, S.; Morris, J.
This volume describes the background research, the application of the proposed loss evaluation techniques, and the results. The research identified present loss calculation methods as appropriate, provided care was taken to represent the various system elements in sufficient detail. The literature search of past methods and typical data revealed that extreme caution in using typical values (load factor, etc.) should be taken to ensure that all factors were referred to the same time base (daily, weekly, etc.). The performance of the method (and computer program) proposed in this project was determined by comparison of results with a rigorous evaluation of losses on the Salt River Project system. This rigorous evaluation used statistical modeling of the entire system as well as explicit enumeration of all substation and distribution transformers. Further tests were conducted at Public Service Electric and Gas of New Jersey to check the appropriateness of the methods in a northern environment. Finally, sensitivity tests identified the data elements whose inaccuracy would most affect the determination of losses using the method developed in this project.
The WSRT Virgo Hi filament survey. II. Cross correlation data
NASA Astrophysics Data System (ADS)
Popping, A.; Braun, R.
2011-04-01
Context. The extended environment of galaxies contains a wealth of information about the formation and life cycle of galaxies which are regulated by accretion and feedback processes. Observations of neutral hydrogen are routinely used to image the high brightness disks of galaxies and to study their kinematics. Deeper observations will give more insight into the distribution of diffuse gas in the extended halo of the galaxies and the inter-galactic medium, where numerical simulations predict a cosmic web of extended structures and gaseous filaments. Aims: To observe the extended environment of galaxies, column density sensitivities have to be achieved that probe the regime of Lyman limit systems. H i observations are typically limited to a brightness sensitivity of N_HI ~ 10^19 cm^-2, but this must be improved upon by ~2 orders of magnitude. Methods: In this paper we present the interferometric data of the Westerbork Virgo H i Filament Survey (WVFS) - the total power product of this survey has been published in an earlier paper. By observing at extreme hour angles, a filled aperture is simulated of 300 × 25 m in size, that has the typical collecting power and sensitivity of a single dish telescope, but the well-defined bandpass characteristics of an interferometer. With the very good surface brightness sensitivity of the data, we hope to make new H i detections of diffuse systems with moderate angular resolution. Results: The survey maps 135 degrees in Right Ascension between 8 and 17 h and 11 degrees in Declination between - 1 and 10 degrees, including the galaxy filament connecting the Local Group with the Virgo Cluster. Only positive declinations could be completely processed and analysed due to projection effects. A typical flux sensitivity of 6 mJy beam^-1 over 16 km s^-1 is achieved, which corresponds to a brightness sensitivity of N_HI ~ 10^18 cm^-2.
An unbiased search has been done with a high significance threshold, as well as a search with a lower significance limit but requiring an optical counterpart. In total, 199 objects have been detected, of which 17 are new H i detections. Conclusions: By observing at extreme hour angles with the WSRT, a filled aperture can be simulated in projection, with a very good brightness sensitivity, comparable to that of a single dish telescope. Despite some technical challenges, the data provide valuable constraints on faint, circum-galactic H i features. The Appendix is available only in electronic form at http://www.aanda.org
Eyer, H; Metz, H; Preac-Mursic, V
1975-11-21
Comparative examinations with Ampicillin-sensitive and -resistant bacterial strains show that the bactericidal activity of serum depends on the bacterial strain, on the Ampicillin sensitivity of the particular pathogen, and on the number of bacteria per ml (germ count). A bactericidal effect was always obtained with sensitive strains as a result of additional chemotherapy. With several resistant strains a bactericidal effect could not be obtained; in these cases, continuous optimal addition of Ampicillin was the decisive factor. Because the bactericidal process is extremely complicated, one should not draw general conclusions from individual experimental results.
NASA Astrophysics Data System (ADS)
Benedict, James J.; Medeiros, Brian; Clement, Amy C.; Pendergrass, Angeline G.
2017-06-01
Precipitation distributions and extremes play a fundamental role in shaping Earth's climate and yet are poorly represented in many global climate models. Here, a suite of idealized Community Atmosphere Model (CAM) aquaplanet simulations is examined to assess the aquaplanet's ability to reproduce hydroclimate statistics of real-Earth configurations and to investigate sensitivities of precipitation distributions and extremes to model physics, horizontal grid resolution, and ocean type. Little difference in precipitation statistics is found between aquaplanets using time-constant sea-surface temperatures and those implementing a slab ocean model with a 50 m mixed-layer depth. In contrast, CAM version 5.3 (CAM5.3) produces more time mean, zonally averaged precipitation than CAM version 4 (CAM4), while CAM4 generates significantly larger precipitation variance and frequencies of extremely intense precipitation events. The largest model configuration-based precipitation sensitivities relate to choice of horizontal grid resolution in the selected range 1-2°. Refining grid resolution has significant physics-dependent effects on tropical precipitation: for CAM4, time mean zonal mean precipitation increases along the Equator and the intertropical convergence zone (ITCZ) narrows, while for CAM5.3 precipitation decreases along the Equator and the twin branches of the ITCZ shift poleward. Increased grid resolution also reduces light precipitation frequencies and enhances extreme precipitation for both CAM4 and CAM5.3, resulting in better alignment with observational estimates. A discussion of the potential implications these hydrologic cycle sensitivities have on the interpretation of precipitation statistics in future climate projections is also presented.
Finite-frequency sensitivity kernels for head waves
NASA Astrophysics Data System (ADS)
Zhang, Zhigang; Shen, Yang; Zhao, Li
2007-11-01
Head waves are extremely important in determining the structure of the predominantly layered Earth. While several recent studies have shown the diffractive nature and the 3-D Fréchet kernels of finite-frequency turning waves, analogues of head waves in a continuous velocity structure, the finite-frequency effects and sensitivity kernels of head waves are yet to be carefully examined. We present the results of a numerical study focusing on the finite-frequency effects of head waves. Our model has a low-velocity layer over a high-velocity half-space and a cylindrical-shaped velocity perturbation placed beneath the interface at different locations. A 3-D finite-difference method is used to calculate synthetic waveforms. Traveltime and amplitude anomalies are measured by the cross-correlation of synthetic seismograms from models with and without the velocity perturbation and are compared to the 3-D sensitivity kernels constructed from full waveform simulations. The results show that the head wave arrival-time and amplitude are influenced by the velocity structure surrounding the ray path in a pattern that is consistent with the Fresnel zones. Unlike the `banana-doughnut' traveltime sensitivity kernels of turning waves, the traveltime sensitivity of the head wave along the ray path below the interface is weak, but non-zero. Below the ray path, the traveltime sensitivity reaches the maximum (absolute value) at a depth that depends on the wavelength and propagation distance. The sensitivity kernels vary with the vertical velocity gradient in the lower layer, but the variation is relatively small at short propagation distances when the vertical velocity gradient is within the range of the commonly accepted values. Finally, the depression or shoaling of the interface results in increased or decreased sensitivities, respectively, beneath the interface topography.
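The cross-correlation measurement described above can be illustrated with a minimal sketch: the lag that maximizes the cross-correlation of a perturbed synthetic against the reference synthetic gives the traveltime anomaly. The signals below are simple Ricker-wavelet stand-ins, not the paper's 3-D finite-difference output:

```python
import numpy as np

dt = 0.01                       # sampling interval (s)
t = np.arange(0.0, 4.0, dt)

def ricker(t, t0, f=2.0):
    """Ricker wavelet centred at t0 with peak frequency f (Hz)."""
    a = (np.pi * f * (t - t0)) ** 2
    return (1 - 2 * a) * np.exp(-a)

reference = ricker(t, 2.00)     # arrival in the unperturbed model
perturbed = ricker(t, 2.05)     # arrival delayed 0.05 s by a slow anomaly

# Cross-correlate and take the lag of the maximum as the traveltime anomaly.
xcorr = np.correlate(perturbed, reference, mode="full")
lags = (np.arange(xcorr.size) - (reference.size - 1)) * dt
delay = lags[np.argmax(xcorr)]
print(f"measured delay: {delay:.2f} s")
```

In the paper this measured anomaly is then compared against the prediction of the 3-D sensitivity kernel for the known velocity perturbation; the sketch only reproduces the measurement step.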
Amini, Kasra; Savelyev, Evgeny; Brauße, Felix; Berrah, Nora; Bomme, Cédric; Brouard, Mark; Burt, Michael; Christensen, Lauge; Düsterer, Stefan; Erk, Benjamin; Höppner, Hauke; Kierspel, Thomas; Krecinic, Faruk; Lauer, Alexandra; Lee, Jason W. L.; Müller, Maria; Müller, Erland; Mullins, Terence; Redlin, Harald; Schirmel, Nora; Thøgersen, Jan; Techert, Simone; Toleikis, Sven; Treusch, Rolf; Trippel, Sebastian; Ulmer, Anatoli; Vallance, Claire; Wiese, Joss; Johnsson, Per; Küpper, Jochen; Rudenko, Artem; Rouzée, Arnaud; Stapelfeldt, Henrik; Rolles, Daniel; Boll, Rebecca
2018-01-01
We explore time-resolved Coulomb explosion induced by intense, extreme ultraviolet (XUV) femtosecond pulses from a free-electron laser as a method to image photo-induced molecular dynamics in two molecules, iodomethane and 2,6-difluoroiodobenzene. At an excitation wavelength of 267 nm, the dominant reaction pathway in both molecules is neutral dissociation via cleavage of the carbon–iodine bond. This allows investigating the influence of the molecular environment on the absorption of an intense, femtosecond XUV pulse and the subsequent Coulomb explosion process. We find that the XUV probe pulse induces local inner-shell ionization of atomic iodine in dissociating iodomethane, in contrast to non-selective ionization of all photofragments in difluoroiodobenzene. The results reveal evidence of electron transfer from methyl and phenyl moieties to a multiply charged iodine ion. In addition, indications for ultrafast charge rearrangement on the phenyl radical are found, suggesting that time-resolved Coulomb explosion imaging is sensitive to the localization of charge in extended molecules. PMID:29430482
Meteorological risks as drivers of innovation for agroecosystem management
NASA Astrophysics Data System (ADS)
Gobin, Anne; Van de Vyver, Hans; Zamani, Sepideh; Curnel, Yannick; Planchon, Viviane; Verspecht, Ann; Van Huylenbroeck, Guido
2015-04-01
Devastating weather-related events recorded in recent years have captured the interest of the general public in Belgium. The MERINOVA project research hypothesis is that meteorological risks act as drivers of environmental innovation in agro-ecosystem management, which is being tested using a "chain of risk" approach. The major objectives are to (1) assess the probability of extreme meteorological events by means of probability density functions; (2) analyse the impact of extreme events on agro-ecosystems using process-based bio-physical modelling methods; (3) identify the most vulnerable agro-ecosystems using fuzzy multi-criteria and spatial analysis; (4) uncover innovative risk management and adaptation options using actor-network theory and economic modelling; and, (5) communicate to research, policy and practitioner communities using web-based techniques. Generalized Extreme Value (GEV) theory was used to model annual rainfall maxima based on location-, scale- and shape-parameters that determine the centre of the distribution, the spread around that centre, and the upper-tail decay, respectively. Likewise, the distributions of consecutive rainy days, rainfall deficits and extreme 24-hour rainfall were modelled. Spatial interpolation of GEV-derived return levels resulted in maps of extreme precipitation, precipitation deficits and wet periods. The degree of temporal overlap between extreme weather conditions and sensitive periods in the agro-ecosystem was determined using a bio-physically based modelling framework that couples phenological models, a soil water balance, crop growth and environmental models. 20-year return values were derived for frost, heat stress, drought, waterlogging and field access during different sensitive stages for different arable crops. 
Extreme yield values were detected from detrended long term arable yields and relationships were found with soil moisture conditions, heat stress or other meteorological variables during the season. A methodology for identifying agro-ecosystem vulnerability was developed using spatially explicit information and was tested for arable crop production in Belgium. The different components of vulnerability for a region include spatial information on meteorology, soil available water content, soil erosion, the degree of waterlogging, crop share and the diversity of potato varieties. The level of vulnerability and resilience of an agro-ecosystem is also determined by risk management. The types of agricultural risk and their relative importance differ across sectors and farm types. Risk types are further distinguished according to production, market, institutional, financial and liability risks. Strategies are often combined in the risk management strategy of a farmer and include reduction and prevention, mitigation, coping and impact reduction. Based on an extensive literature review, a portfolio of potential strategies was identified at farm, market and policy level. Research hypotheses were tested using an on-line questionnaire on knowledge of agricultural risk, measuring the general risk aversion of the farmer and risk management strategies. The "chain of risk" approach adopted as a research methodology allows for investigating the hypothesis that meteorological risks act as drivers for agricultural innovation. Risks related to extreme weather events in Belgium are mainly caused by heat, frost, excess rainfall, drought and storms, and their impact is predominantly felt by arable, horticultural and extensive dairy farmers. Quantification of the risk is evaluated in terms of probability of occurrence, magnitude, frequency and extent of impact on several agro-ecosystems services. 
The spatial extent of vulnerability is developed by integrating different layers of geo-information, while risk management is analysed using questionnaires and economic modelling methods. Future work will concentrate on the further development and testing of the currently developed modelling methodologies. https://merinova.vito.be The research is funded by the Belgian Science Policy Organisation (Belspo) under contract nr SD/RI/03A.
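The GEV fitting and return-level step described above can be sketched with standard tools. This is a minimal illustration, not the MERINOVA workflow: the annual maxima are synthetic stand-ins for station rainfall records, and all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic annual rainfall maxima (mm); a stand-in for station records.
maxima = stats.genextreme.rvs(c=-0.1, loc=40, scale=10, size=80, random_state=rng)

# Fit the GEV; scipy's shape parameter c controls the upper-tail decay
# (c < 0 corresponds to a heavy, Frechet-type upper tail).
shape, loc, scale = stats.genextreme.fit(maxima)

# 20-year return level: the level exceeded with probability 1/20 in any year.
return_level_20 = stats.genextreme.ppf(1 - 1 / 20, shape, loc=loc, scale=scale)
print(round(float(return_level_20), 1))
```

Fitting the same model at many stations and interpolating the resulting return levels is what produces the extreme-precipitation maps the abstract mentions.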
Ganapathy, Vaidyanathan; Hay, Joel W; Kim, Jae H
2012-02-01
This study evaluated the cost-effectiveness of a 100% human milk-based diet composed of mother's milk fortified with a donor human milk-based human milk fortifier (HMF) versus mother's milk fortified with bovine milk-based HMF to initiate enteral nutrition among extremely premature infants in the neonatal intensive care unit (NICU). A net expected costs calculator was developed to compare the total NICU costs among extremely premature infants who were fed either a bovine milk-based HMF-fortified diet or a 100% human milk-based diet, based on the previously observed risks of overall necrotizing enterocolitis (NEC) and surgical NEC in a randomized controlled study that compared outcomes of these two feeding strategies among 207 very low birth weight infants. The average NICU costs for an extremely premature infant without NEC and the incremental costs due to medical and surgical NEC were derived from a separate analysis of hospital discharges in the state of California in 2007. The sensitivity of cost-effectiveness results to the risks and costs of NEC and to prices of milk supplements was studied. The adjusted incremental costs of medical NEC and surgical NEC over and above the average costs incurred for extremely premature infants without NEC, in 2011 US$, were $74,004 (95% confidence interval, $47,051-$100,957) and $198,040 (95% confidence interval, $159,261-$236,819) per infant, respectively. Extremely premature infants fed with 100% human-milk based products had lower expected NICU length of stay and total expected costs of hospitalization, resulting in net direct savings of 3.9 NICU days and $8,167.17 (95% confidence interval, $4,405-$11,930) per extremely premature infant (p < 0.0001). Costs savings from the donor HMF strategy were sensitive to price and quantity of donor HMF, percentage reduction in risk of overall NEC and surgical NEC achieved, and incremental costs of surgical NEC. 
Compared with feeding extremely premature infants with mother's milk fortified with bovine milk-based supplements, a 100% human milk-based diet that includes mother's milk fortified with donor human milk-based HMF may result in potential net savings on medical care resources by preventing NEC.
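The net-expected-costs comparison reduces to a probability-weighted sum over NEC outcomes. In this sketch, the incremental NEC costs are the figures reported in the abstract (2011 US$), but the NEC risk values are hypothetical placeholders, not the trial's observed rates.

```python
# Incremental NEC costs from the abstract ($74,004 medical, $198,040 surgical,
# 2011 US$); the risk values passed in below are hypothetical placeholders.
COST_MEDICAL_NEC = 74_004
COST_SURGICAL_NEC = 198_040

def expected_nec_cost(p_medical: float, p_surgical: float) -> float:
    """Expected NEC-attributable cost per infant for one feeding strategy."""
    return p_medical * COST_MEDICAL_NEC + p_surgical * COST_SURGICAL_NEC

bovine = expected_nec_cost(p_medical=0.10, p_surgical=0.07)  # hypothetical risks
human = expected_nec_cost(p_medical=0.05, p_surgical=0.02)   # hypothetical risks
print(round(bovine - human, 2))  # expected savings per infant
```

The study's sensitivity analysis amounts to re-running this calculation while varying the risks, the incremental costs, and the price of the donor fortifier.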
Uncertainties in obtaining high reliability from stress-strength models
NASA Technical Reports Server (NTRS)
Neal, Donald M.; Matthews, William T.; Vangel, Mark G.
1992-01-01
There has been a recent interest in determining high statistical reliability in risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining the high reliability values are identified. The reliability is defined as the probability of the strength being greater than the stress over the range of stress values. This method is often referred to as the stress-strength model. A sensitivity analysis was performed involving a comparison of reliability results in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and those that differed slightly from the known, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
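The stress-strength computation, and its sensitivity to the assumed distribution, can be illustrated by Monte Carlo. The distributions and parameters below are hypothetical; the point is that a small, hard-to-detect change in the strength model's lower tail shifts a "high reliability" answer substantially.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Hypothetical stress and strength populations (units arbitrary); in practice
# the true population distributions are unknown, which is the paper's warning.
stress = rng.normal(loc=50.0, scale=5.0, size=n)
strength = rng.normal(loc=80.0, scale=5.0, size=n)

# Reliability = P(strength > stress), estimated by Monte Carlo.
reliability = np.mean(strength > stress)

# A slightly mis-specified strength model (same centre and spread, but a
# heavier lower tail) changes the estimate even though the bulk looks similar.
strength_t = 80.0 + 5.0 * rng.standard_t(df=3, size=n)
reliability_t = np.mean(strength_t > stress)
print(reliability, reliability_t)
```

The gap between the two estimates lives entirely in the distribution tails, which is why the paper turns to extreme value distributions for conservative lower bounds.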
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mirocha, C.J.; Pawlosky, R.J.; Gunther, R.
1989-12-22
Methods of analysis for T-2 toxin, HT-2 and T-2-tetraol in blood and urine were developed using hybrid tandem mass spectrometry, more specifically, Multiple Reaction Monitoring (MRM). Essentially, the mass spectra of the above toxins were obtained in electron impact mode, and the fragments were studied to select appropriate parent and daughter ions for analytical use. As an example, m/z 478 of the trifluoroacetate derivative of T-2 toxin was reacted in the collision chamber (field-free region three) with argon at 23 eV to produce daughters 12, 138 and 180. These were used in method development so that T-2 was detected in a biological matrix with a sensitivity of 1 part per billion. A field method of urine collection was developed for the analysis of T-2 toxin. An attempt was made to find toxic isolates of Fusarium in soils of the Arctic region of Norway that would explain some of the hemorrhagic activity noted with this genus. More specifically, descriptions of toxicity of biological warfare agents originating in Southeast Asia included extreme hemorrhaging. To this end, toxic isolates were found that caused extreme hemorrhaging in rats. The natural product responsible for the toxicity was isolated, purified and characterized as wortmannin. Wortmannin was shown to cause hemorrhaging in the heart, bladder, stomach and thymus. The chemistry, NMR and mass spectra of wortmannin are presented.
NASA Astrophysics Data System (ADS)
Lorente-Plazas, Raquel; Mauger, Guillaume; Salathé, Eric; Mitchell, Todd P.
2017-04-01
Flooding is one of the natural hazards that cause significant economic, ecosystem and human losses every year. A large percentage of the floods in the western U.S. caused by heavy precipitation events are associated with atmospheric rivers (ARs). In a warmer climate, saturation water vapour pressure is expected to increase, which could increase the intensity and frequency of ARs. In this work we address two questions: 1) what large-scale drivers produce the differences among ARs that promote heavy precipitation at different locations, and 2) how will climate change influence ARs and extreme precipitation? Our analysis relies on dynamical downscaling with the Weather Research and Forecasting (WRF) model. The target region is the western U.S. coastline, on a domain with 12-km grid spacing. The regional climate model (RCM) simulations encompass a historical period (1970-2010) and future projections (2020-2060) using NNRP and ECHAM as initial and boundary conditions. Clustering methods are applied to the RCM output to identify regions with similar precipitation variability. In each region, extreme precipitation events, defined by the 99th percentile, are identified and associated with integrated vapor transport (IVT). Results show how the heaviest precipitation in each region is associated with different AR patterns. When an AR makes landfall, the direction and intensity of the IVT determine the areas affected by heavy precipitation. Coastal mountains play a key role, intensifying precipitation along the coastline and limiting inland penetration of the IVT. The shape of the atmospheric rivers is related to differences in 500 hPa geopotential height between the mean state and the extreme-precipitation composites. Areas with the heaviest precipitation are located at the interface of the Z500 differences.
Eberle, Jonas; Warnock, Rachel C M; Ahrens, Dirk
2016-05-05
Defining species units can be challenging, especially during the earliest stages of speciation, when phylogenetic inference and delimitation methods may be compromised by incomplete lineage sorting (ILS) or secondary gene flow. Integrative approaches to taxonomy, which combine molecular and morphological evidence, have the potential to be valuable in such cases. In this study we investigated the South African scarab beetle genus Pleophylla using data collected from 110 individuals of eight putative morphospecies. The dataset included four molecular markers (cox1, 16S, rrnL, ITS1) and morphometric data based on male genital morphology. We applied a suite of molecular and morphological approaches to species delimitation, and implemented a novel Bayesian approach in the software iBPP, which enables continuous morphological trait and molecular data to be combined. Traditional morphology-based species assignments were supported quantitatively by morphometric analyses of the male genitalia (eigenshape analysis, CVA, LDA). While the ITS1-based delineation was also broadly congruent with the morphospecies, the cox1 data resulted in over-splitting (GMYC modelling, haplotype networks, PTP, ABGD). In the most extreme case morphospecies shared identical haplotypes, which may be attributable to ILS based on statistical tests performed using the software JML. We found the strongest support for putative morphospecies based on phylogenetic evidence using the combined approach implemented in iBPP. However, support for putative species was sensitive to the use of alternative guide trees and alternative combinations of priors on the population size (θ) and root age (τ0) parameters, especially when the analysis was based on molecular or morphological data alone. We demonstrate that continuous morphological trait data can be extremely valuable in assessing competing species-delimitation hypotheses.
In particular, we show that the inclusion of morphological data in an integrative Bayesian framework can improve the resolution of inferred species units. However, we also demonstrate that this approach is extremely sensitive to guide tree and prior parameter choice. These parameters should be chosen with caution - if possible - based on independent empirical evidence, or careful sensitivity analyses should be performed to assess the robustness of results. Young species provide exemplars for investigating the mechanisms of speciation and for assessing the performance of tools used to delimit species on the basis of molecular and/or morphological evidence.
Bacon, Charles R.; Grove, Marty; Vazquez, Jorge A.; Coble, Matthew A.
2012-01-01
Answers to many questions in Earth science require chemical analysis of minute volumes of minerals, volcanic glass, or biological materials. Secondary Ion Mass Spectrometry (SIMS) is an extremely sensitive analytical method in which a 5–30 micrometer diameter "primary" beam of charged particles (ions) is focused on a region of a solid specimen to sputter secondary ions from 1–5 nanograms of the sample under high vacuum. The elemental abundances and isotopic ratios of these secondary ions are determined with a mass spectrometer. These results can be used for geochronology to determine the age of a region within a crystal thousands to billions of years old or to precisely measure trace abundances of chemical elements at concentrations as low as parts per billion. A partnership of the U.S. Geological Survey and the Stanford University School of Earth Sciences operates a large SIMS instrument, the Sensitive High-Resolution Ion Microprobe with Reverse Geometry (SHRIMP–RG) on the Stanford campus.
Digital Quantification of Proteins and mRNA in Single Mammalian Cells.
Albayrak, Cem; Jordi, Christian A; Zechner, Christoph; Lin, Jing; Bichsel, Colette A; Khammash, Mustafa; Tay, Savaş
2016-03-17
Absolute quantification of macromolecules in single cells is critical for understanding and modeling biological systems that feature cellular heterogeneity. Here we show extremely sensitive and absolute quantification of both proteins and mRNA in single mammalian cells by a very practical workflow that combines proximity ligation assay (PLA) and digital PCR. This digital PLA method has femtomolar sensitivity, which enables the quantification of very small protein concentration changes over its entire 3-log dynamic range, a quality necessary for accounting for single-cell heterogeneity. We counted both endogenous (CD147) and exogenously expressed (GFP-p65) proteins from hundreds of single cells and determined the correlation between CD147 mRNA and the protein it encodes. Using our data, a stochastic two-state model of the central dogma was constructed and verified using joint mRNA/protein distributions, allowing us to estimate transcription burst sizes and extrinsic noise strength and calculate the transcription and translation rate constants in single mammalian cells. Copyright © 2016 Elsevier Inc. All rights reserved.
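The stochastic two-state model of the central dogma mentioned above can be sketched with a Gillespie (stochastic simulation) algorithm. This is a simplified illustration covering only the mRNA layer of the model (the paper's model jointly treats mRNA and protein), and the rate constants are illustrative, not the fitted single-cell values.

```python
import random

def telegraph_ssa(k_on, k_off, k_tx, k_deg, t_end, seed=1):
    """Gillespie simulation of the two-state (telegraph) model of transcription:
    the promoter toggles ON/OFF; mRNA is made only in the ON state and degrades."""
    rng = random.Random(seed)
    t, on, m = 0.0, False, 0
    while True:
        rates = [k_off if on else k_on,   # promoter switch
                 k_tx if on else 0.0,     # transcription (ON state only)
                 k_deg * m]               # first-order mRNA degradation
        total = sum(rates)
        t += rng.expovariate(total)       # time to next reaction
        if t >= t_end:
            return m                      # mRNA copy number at t_end
        u = rng.random() * total
        if u < rates[0]:
            on = not on
        elif u < rates[0] + rates[1]:
            m += 1
        else:
            m -= 1

# Steady-state mean mRNA = (k_tx / k_deg) * k_on / (k_on + k_off) = 5 here.
counts = [telegraph_ssa(0.5, 0.5, 10.0, 1.0, 50.0, seed=s) for s in range(200)]
print(sum(counts) / len(counts))
```

Distributions of such simulated copy numbers are what get compared against the measured joint mRNA/protein distributions to estimate burst sizes and rate constants.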
Zhang, Hua; Wang, Chen; Sun, Han-Lei; Fu, Gang; Chen, Shu; Zhang, Yue-Jiao; Chen, Bing-Hui; Anema, Jason R.; Yang, Zhi-Lin; Li, Jian-Feng; Tian, Zhong-Qun
2017-01-01
Surface molecular information acquired in situ from a catalytic process can greatly promote the rational design of highly efficient catalysts by revealing structure-activity relationships and reaction mechanisms. Raman spectroscopy can provide this rich structural information, but normal Raman is not sensitive enough to detect trace active species adsorbed on the surface of catalysts. Here we develop a general method for in situ monitoring of heterogeneous catalytic processes through shell-isolated nanoparticle-enhanced Raman spectroscopy (SHINERS) satellite nanocomposites (Au-core silica-shell nanocatalyst-satellite structures), which are stable and have extremely high surface Raman sensitivity. By combining operando SHINERS with density functional theory calculations, we identify the working mechanisms for CO oxidation over PtFe and Pd nanocatalysts, which are typical low- and high-temperature catalysts, respectively. Active species, such as surface oxides, superoxide/peroxide species and Pd–C/Pt–C bonds are directly observed during the reactions. We demonstrate that in situ SHINERS can provide a deep understanding of the fundamental concepts of catalysis. PMID:28537269
Zhang, Quanxin; Zhang, Geping; Sun, Xiaofeng; Yin, Keyang; Li, Hongguang
2017-05-31
Dye-sensitized solar cells (DSSCs) are highly promising since they can potentially help solve global energy issues. The development of new photosensitizers is the key to fully realizing the potential of DSSCs. Being cheap and nontoxic, carbon quantum dots (CQDs) have emerged as attractive candidates for this purpose. However, current methodologies to build up CQD-sensitized solar cells (CQDSCs) result in imperfect devices with extremely low power conversion efficiencies (PCEs). Herein, we present a simple strategy of growing CQDs onto TiO₂ surfaces in situ. The CQDs/TiO₂ hybridized photoanode was then used to construct a solar cell with an improved PCE of 0.87%, which is higher than that of all reported CQDSCs adopting the simple post-adsorption method. This result indicates that an in situ growth strategy has great advantages in terms of optimizing the performance of CQDSCs. In addition, we have also found that the mechanisms dominating the performance of CQDSCs differ from those behind solar cells using inorganic semiconductor quantum dots (ISQDs) as the photosensitizers, which re-confirms the conclusion that the characteristics of CQDs differ from those of ISQDs.
NASA Astrophysics Data System (ADS)
Shokri-Kojori, Hossein; Ji, Yiwen; Han, Xu; Paik, Younghun; Braunschweig, Adam; Kim, Sung Jin
2016-03-01
Localized surface plasmon resonance (LSPR) is a nanoscale phenomenon in which noble-metal nanostructures exhibit strong optical resonance. This plasmon-resonance-based technology enables highly sensitive detection for chemical and biological applications. Recently, we have developed a plasmon field effect transistor (FET) that enables direct plasmonic-to-electric signal conversion with signal amplification. The plasmon FET consists of a back-gated field effect transistor incorporating gold nanoparticles on top of the FET channel. The gold nanostructures are physically separated from the transistor electrodes and can be functionalized for a specific biological application. In this presentation, we report a successful demonstration of a model system to detect Con A proteins using carbohydrate linkers as capture molecules. The plasmon FET detected a very low concentration of Con A (0.006 mg/L) while offering a wide dynamic range of 0.006-50 mg/L. In this demonstration, we used two-color light sources instead of a bulky spectrometer to achieve high sensitivity and wide dynamic range. The details of the two-color differential measurement method will be discussed. This novel protein sensor has several advantages, such as an extremely small size suitable for point-of-care systems, multiplexing capability, and no need for complex optical geometry.
Prospective clinical study to evaluate an oscillometric blood pressure monitor in pet rabbits.
Bellini, Luca; Veladiano, Irene A; Schrank, Magdalena; Candaten, Matteo; Mollo, Antonio
2018-02-27
Rabbits are particularly prone to developing hypotension during sedation or anaesthesia. Values of systolic or mean non-invasive arterial blood pressure below 80 or 60 mmHg, respectively, are common under anaesthesia, even during ongoing surgery. A reliable method of monitoring arterial blood pressure is therefore extremely important, although invasive measurement is not always possible owing to the anatomy and small diameter of the artery. The aim of this study was to evaluate the agreement between a new oscillometric device for non-invasive arterial blood pressure measurement and the invasive method. Moreover, the trending ability of the device (its ability to identify changes in the same direction as the invasive method) was evaluated, as was the sensitivity of the device in identifying hypotension, arbitrarily defined as invasive arterial blood pressure below 80 or 60 mmHg. Bland-Altman analysis for repeated measurements showed poor agreement between the two methods; the oscillometric device overestimated the invasive arterial blood pressure, particularly at high arterial pressure values. The same analysis repeated on oscillometric measurements matching invasive mean pressures lower than or equal to 60 mmHg showed a decrease in bias and narrower limits of agreement between methods. The trending ability of the device, evaluated with both the 4-quadrant plot and the polar plot, was poor. The concordance rate for mean arterial blood pressure was higher than for systolic and diastolic pressure, although below 90%. The sensitivity of the device in detecting hypotension, defined as systolic or mean invasive arterial blood pressure lower than 80 or 60 mmHg, was higher for mean oscillometric pressure than for systolic. A sensitivity of 92% was achieved with an oscillometric cutoff for mean pressure of 65 mmHg instead of 60 mmHg. Non-invasive systolic blood pressure is less sensitive as an indicator of hypotension regardless of the cutoff limit considered.
Although mean invasive arterial blood pressure is overestimated by the device, the sensitivity of this non-invasive oscillometric monitor in detecting invasive mean pressure below 60 mmHg is acceptable, provided a cutoff value of 65 mmHg is used.
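The Bland-Altman analysis used above can be sketched in a few lines. This is the basic single-measurement form (the study used the repeated-measures variant), and the paired readings below are hypothetical, not data from the study.

```python
import statistics

def bland_altman(invasive, oscillometric):
    """Bias (mean difference) and 95% limits of agreement between two methods."""
    diffs = [o - i for i, o in zip(invasive, oscillometric)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired mean arterial pressure readings (mmHg).
inv = [55, 60, 62, 70, 75, 80, 90, 95]
osc = [60, 63, 70, 74, 82, 90, 99, 108]
bias, (lo, hi) = bland_altman(inv, osc)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

A positive bias that grows with pressure, as in this toy data, is the overestimation pattern the study reports for the oscillometric device.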
Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam SM, Jahangir
2017-01-01
As a high performance-cost ratio solution for differential pressure measurement, piezo-resistive differential pressure sensors are widely used in engineering processes. However, their performance is severely affected by the environmental temperature and the static pressure applied to them. In order to correct the non-linear measuring characteristics of the piezo-resistive differential pressure sensor, compensation actions should synthetically consider these two aspects. Advantages such as nonlinear approximation capability, highly desirable generalization ability and computational efficiency make the kernel extreme learning machine (KELM) a practical approach for this critical task. Since the KELM model is intrinsically sensitive to the regularization parameter and the kernel parameter, a searching scheme combining the coupled simulated annealing (CSA) algorithm and the Nelder-Mead simplex algorithm is adopted to find an optimal KELM parameter set. A calibration experiment at different working pressure levels was conducted within the working temperature range to assess the proposed method. In comparison with other compensation models such as the back-propagation neural network (BP), radial basis function neural network (RBF), particle swarm optimization optimized support vector machine (PSO-SVM), particle swarm optimization optimized least squares support vector machine (PSO-LSSVM) and extreme learning machine (ELM), the compensation results show that the presented compensation algorithm exhibits a more satisfactory performance with respect to temperature compensation and synthetic compensation problems. PMID:28422080
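The KELM model at the core of this method has a closed-form solution. This sketch assumes the standard KELM formulation (output weights beta = (I/C + K)^-1 y with an RBF kernel); the CSA plus Nelder-Mead parameter search is not reproduced, and the calibration data are synthetic, not the sensor measurements.

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Gaussian (RBF) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=100.0, gamma=1.0):
    """Kernel extreme learning machine: closed-form output weights
    beta = (I/C + K)^-1 y. C and gamma are the two sensitive parameters
    the paper tunes with CSA + Nelder-Mead (not reproduced here)."""
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(np.eye(len(X)) / C + K, y)
    return lambda Xq: rbf_kernel(Xq, X, gamma) @ beta

# Toy compensation task: learn a nonlinear temperature/static-pressure error.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # (temperature, static pressure)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2   # hypothetical sensor error
model = kelm_fit(X, y, C=1e4, gamma=2.0)
print(float(np.abs(model(X) - y).mean()))      # small training residual
```

In the paper's setting, the learned model maps the raw sensor reading plus temperature and static pressure to a compensated differential pressure value.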
NASA Astrophysics Data System (ADS)
Lin, Pei-Yi; Hagan, Katherine; Fenoglio, Angela; Grant, P. Ellen; Franceschini, Maria Angela
2016-05-01
Low-grade germinal matrix-intraventricular hemorrhage (GM-IVH) is the most common complication in extremely premature neonates. The occurrence of GM-IVH is highly associated with hemodynamic instability in the premature brain, yet the long-term impact of low-grade GM-IVH on cerebral blood flow and neuronal health have not been fully investigated. We used an innovative combination of frequency-domain near infrared spectroscopy and diffuse correlation spectroscopy (FDNIRS-DCS) to measure cerebral oxygen saturation (SO2) and an index of cerebral blood flow (CBFi) at the infant’s bedside and compute an index of cerebral oxygen metabolism (CMRO2i). We enrolled twenty extremely low gestational age (ELGA) neonates (seven with low-grade GM-IVH) and monitored them weekly until they reached full-term equivalent age. During their hospital stay, we observed consistently lower CBFi and CMRO2i in ELGA neonates with low-grade GM-IVH compared to neonates without hemorrhages. Furthermore, lower CBFi and CMRO2i in the former group persists even after the resolution of the hemorrhage. In contrast, SO2 does not differ between groups. Thus, CBFi and CMRO2i may have better sensitivity than SO2 in detecting GM-IVH-related effects on infant brain development. FDNIRS-DCS methods may have clinical benefit for monitoring the evolution of GM-IVH, evaluating treatment response, and potentially predicting neurodevelopmental outcome.
Bottlenecks drive temporal and spatial genetic changes in alpine caddisfly metapopulations.
Shama, Lisa N S; Kubow, Karen B; Jokela, Jukka; Robinson, Christopher T
2011-09-27
Extinction and re-colonisation of local populations is common in ephemeral habitats such as temporary streams. In most cases, such population turnover leads to reduced genetic diversity within populations and increased genetic differentiation among populations due to stochastic founder events, genetic drift, and bottlenecks associated with re-colonisation. Here, we examined the spatio-temporal genetic structure of 8 alpine caddisfly populations inhabiting permanent and temporary streams from four valleys in two regions of the Swiss Alps in years before and after a major stream drying event, the European heat wave in summer 2003. We found that population turnover after 2003 led to a loss of allelic richness and gene diversity but not to significant changes in observed heterozygosity. Within all valleys, permanent and temporary streams in any given year were not differentiated, suggesting considerable gene flow and admixture between streams with differing hydroperiods. Large changes in allele frequencies after 2003 resulted in a substantial increase in genetic differentiation among valleys within one to two years (1-2 generations) driven primarily by drift and immigration. Signatures of genetic bottlenecks were detected in all 8 populations after 2003 using the M-ratio method, but in no populations when using a heterozygosity excess method, indicating differential sensitivity of bottleneck detection methods. We conclude that genetic differentiation among A. uncatus populations changed markedly both temporally and spatially in response to the extreme climate event in 2003. Our results highlight the magnitude of temporal population genetic changes in response to extreme events. More specifically, our results show that extreme events can cause rapid genetic divergence in metapopulations. 
Further studies are needed to determine if recovery from this perturbation through gradual mixing of diverged populations by migration and gene flow leads to the pre-climate event state, or whether the observed changes represent a new genetic equilibrium.
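The M-ratio statistic used for bottleneck detection above is straightforward to compute per locus. A minimal sketch of the Garza-Williamson M-ratio follows; the allele size data are hypothetical, not the study's loci.

```python
def m_ratio(allele_sizes):
    """Garza-Williamson M-ratio for one microsatellite locus:
    M = k / (r + 1), with k the number of distinct alleles and r the allele
    size range in repeat units. Bottlenecks lose alleles faster than they
    shrink the size range, pushing M well below 1."""
    sizes = sorted(set(allele_sizes))
    k = len(sizes)
    r = sizes[-1] - sizes[0]
    return k / (r + 1)

# Hypothetical allele sizes (repeat counts) before and after a bottleneck.
pre_bottleneck = [8, 9, 10, 11, 12, 13, 14]   # full ladder: M = 1.0
post_bottleneck = [8, 11, 14]                 # gaps in the ladder: M = 3/7
print(m_ratio(pre_bottleneck), m_ratio(post_bottleneck))
```

Because the M-ratio reacts to gaps in the allele-size ladder while heterozygosity-excess tests react to allele-frequency distortions, the two methods can disagree, as the study observed.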
Effect of single-site mutations on hydrophobic-polar lattice proteins
NASA Astrophysics Data System (ADS)
Shi, Guangjie; Vogel, Thomas; Wüst, Thomas; Li, Ying Wai; Landau, David P.
2014-09-01
We developed a heuristic method for determining the ground-state degeneracy of hydrophobic-polar (HP) lattice proteins, based on Wang-Landau and multicanonical sampling. It is applied during comprehensive studies of single-site mutations in specific HP proteins with different sequences. The effects in which we are interested include structural changes in ground states and changes in ground-state energy, degeneracy, and thermodynamic properties of the system. Both extremely sensitive and insensitive positions in the HP sequence have been found with respect to mutations. That is, ground-state energies and degeneracies, as well as other thermodynamic and structural quantities, may be either largely unaffected or may change significantly due to mutation.
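The HP model energy function that such sampling explores is simple to state: each pair of hydrophobic monomers that are lattice neighbours but not chain neighbours contributes -1. This sketch shows only the energy function on a 2D square lattice, not the Wang-Landau sampler itself, and the sequence/conformation are toy examples.

```python
def hp_energy(sequence, coords):
    """Energy of an HP lattice conformation: -1 for every pair of hydrophobic
    (H) monomers that are lattice neighbours but not adjacent along the chain."""
    pos = {tuple(c): i for i, c in enumerate(coords)}
    e = 0
    for i, (x, y) in enumerate(coords):
        if sequence[i] != "H":
            continue
        for nb in ((x + 1, y), (x, y + 1)):  # +x/+y only: each pair counted once
            j = pos.get(nb)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                e -= 1
    return e

# A 2D toy: HPPH folded into a unit square gives one H-H contact.
seq = "HPPH"
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(hp_energy(seq, square))  # -1
```

A single-site mutation flips one character of `sequence` from H to P or back, and the ground-state energy and degeneracy are recomputed over all conformations, which is where the sampling method comes in.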
Effective Detection of Mycotoxins by a Highly Luminescent Metal–Organic Framework
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Zhichao; Lustig, William P.; Zhang, Jingming
In this paper, we designed and synthesized a new luminescent metal–organic framework (LMOF). LMOF-241 is highly porous and emits strong blue light with high efficiency. We demonstrate for the first time that very fast and extremely sensitive optical detection can be achieved, making use of the fluorescence quenching of an LMOF material. The compound is responsive to Aflatoxin B1 at parts per billion level, which makes it the best performing luminescence-based chemical sensor to date. We studied the electronic properties of LMOF-241 and selected mycotoxins, as well as the extent of mycotoxin–LMOF interactions, employing theoretical methods. Finally, possible electron and energy transfer mechanisms are discussed.
NMR and MRI apparatus and method
Clarke, John; Kelso, Nathan; Lee, SeungKyun; Moessle, Michael; Myers, Whittier; McDermott, Robert; ten Haken, Bernard; Pines, Alexander; Trabesinger, Andreas
2007-03-06
Nuclear magnetic resonance (NMR) signals are detected in microtesla fields. Prepolarization in millitesla fields is followed by detection with an untuned dc superconducting quantum interference device (SQUID) magnetometer. Because the sensitivity of the SQUID is frequency independent, both signal-to-noise ratio (SNR) and spectral resolution are enhanced by detecting the NMR signal in extremely low magnetic fields, where the NMR lines become very narrow even for grossly inhomogeneous measurement fields. Additional signal to noise benefits are obtained by use of a low noise polarization coil, comprising litz wire or superconducting materials. MRI in ultralow magnetic field is based on the NMR at ultralow fields. Gradient magnetic fields are applied, and images are constructed from the detected NMR signals.
... 3½, kids should have eye health screenings and visual acuity tests (tests that measure sharpness of vision) ... eye rubbing, extreme light sensitivity, poor focusing, poor visual tracking (following an object), abnormal alignment or movement ...
[Measurement of Water COD Based on UV-Vis Spectroscopy Technology].
Wang, Xiao-ming; Zhang, Hai-liang; Luo, Wei; Liu, Xue-mei
2016-01-01
Ultraviolet/visible (UV/Vis) spectroscopy was used to measure the chemical oxygen demand (COD) of water. A total of 135 water samples were collected from Zhejiang province. Raw spectra and three pretreatment methods (Multiplicative Scatter Correction (MSC), Standard Normal Variate (SNV), and first derivatives) were compared to determine the optimal pretreatment for analysis. Spectral variable selection is an important strategy in spectrum modeling because it yields a parsimonious data representation and can lead to multivariate models with better performance. To simplify the calibration models, sensitive wavelengths were then selected from the preprocessed spectra by the competitive adaptive reweighted sampling (CARS), Random Frog, and Successive Genetic Algorithm (GA) methods; each method selected a different number of sensitive wavelengths from the SNV-preprocessed spectra. Partial least squares (PLS) was used to build models on the full spectra, and an Extreme Learning Machine (ELM) was applied to build models on the selected wavelength variables. Overall, the ELM models outperformed the PLS model, and the ELM model built on the CARS-selected wavelengths gave the best results, with a determination coefficient (R2) of 0.82, RMSEP of 14.48, and RPD of 2.34 on the prediction set. These results indicate that UV/Vis spectroscopy at the characteristic wavelengths obtained by CARS variable selection, combined with ELM calibration, is feasible for the rapid and accurate determination of COD in aquaculture water. Moreover, this study lays the foundation for online analysis of aquaculture water and rapid determination of other water quality parameters.
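The calibration scheme the abstract describes (a random hidden layer with an analytic least-squares readout) is what distinguishes an ELM from iteratively trained networks. A minimal sketch follows; the hidden-layer size, weight scaling, and synthetic data are placeholders standing in for the paper's 135-sample spectral data set and CARS-selected wavelengths.

```python
import numpy as np

# Minimal Extreme Learning Machine (ELM) regressor: random, untrained
# hidden layer followed by a least-squares output layer. Toy data only.

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    W = 0.1 * rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                      # random biases
    H = np.tanh(X @ W + b)                             # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # analytic readout
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for CARS-selected wavelengths: 135 samples, 20 variables.
X = rng.normal(size=(135, 20))
y = X @ rng.normal(size=20) + 0.1 * rng.normal(size=135)

model = elm_fit(X[:100], y[:100])                      # calibration set
pred = elm_predict(model, X[100:])                     # prediction set
r2 = 1 - np.sum((y[100:] - pred) ** 2) / np.sum((y[100:] - y[100:].mean()) ** 2)
print(r2)
```

Because the output weights are obtained in closed form, fitting is fast, which is one reason ELM is attractive for online water-quality monitoring of the kind the study envisions.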
Sideband-Separating, Millimeter-Wave Heterodyne Receiver
NASA Technical Reports Server (NTRS)
Ward, John S.; Bumble, Bruce; Lee, Karen A.; Kawamura, Jonathan H.; Chattopadhyay, Goutam; Stek, Paul
2010-01-01
Researchers have demonstrated a submillimeter-wave spectrometer that combines extremely broad bandwidth with extremely high sensitivity and spectral resolution to enable future spacecraft to measure the composition of the Earth's troposphere in three dimensions many times per day at spatial resolutions as high as a few kilometers. Microwave limb sounding is a proven remote-sensing technique that measures thermal emission spectra from molecular gases along limb views of the Earth's atmosphere against a cold space background.
Chau, Q; Bruguier, P
2007-01-01
In nuclear facilities, activities such as reprocessing, recycling, and the production of bare fuel rods expose workers to mixed neutron-photon fields. At several workplaces, particularly in glove boxes, workers expose their hands to these mixed fields. Photon extremity dosimetry is relatively well mastered, whereas neutron dosimetry still raises difficulties. In this context, the Institute for Radiological Protection and Nuclear Safety (IRSN) has proposed a study on a passive neutron extremity dosemeter based on chemically etched CR-39 (PADC: polyallyldiglycolcarbonate), named PN-3, which is already used routinely for whole-body dosimetry. This dosemeter is a plastic chip sensitive to recoil protons; the chemical etching process amplifies the size of each track. The track-counting reading system consists of a microscope, a video camera, and an image analyser, combined with the dose evaluation algorithm. The performance of the PN-3 as a passive individual neutron dosemeter has been studied extensively and demonstrated by several laboratories, and it is used in routine production by different companies. This study focuses on the sensitivity of the extremity dosemeter and on its performance as a function of neutron energy. The dosemeter was exposed to monoenergetic neutron fields under laboratory conditions and to mixed fields in glove boxes at workplaces.
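In its simplest form, the dose evaluation step described (automated track counting followed by an algorithmic conversion) reduces to a calibrated track-density-to-dose conversion. The calibration factor and reading below are hypothetical; the real PN-3 response is strongly energy dependent, which is precisely what the study investigates.

```python
# Sketch of a CR-39 (PADC) dose evaluation: net recoil-proton track
# density divided by a calibration factor gives the personal dose
# equivalent. All numbers here are hypothetical, for illustration only.

def neutron_dose_msv(gross_tracks, background_tracks, area_cm2,
                     cal_tracks_per_cm2_per_msv):
    """Convert counted tracks to neutron dose equivalent in mSv."""
    net_density = (gross_tracks - background_tracks) / area_cm2  # tracks/cm^2
    return net_density / cal_tracks_per_cm2_per_msv

# Hypothetical reading: 460 tracks over 1 cm^2, 60 background tracks,
# calibration of 2000 tracks cm^-2 mSv^-1 at a given neutron energy.
print(neutron_dose_msv(460, 60, 1.0, 2000.0))  # -> 0.2 mSv
```

A routine algorithm would additionally apply energy- and angle-dependent corrections derived from calibrations in reference monoenergetic fields.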