Frouzan, Arash; Masoumi, Kambiz; Delirroyfard, Ali; Mazdaie, Behnaz; Bagherzadegan, Elnaz
2017-08-01
Long bone fractures are common injuries caused by trauma. Some studies have demonstrated that ultrasound has a high sensitivity and specificity in the diagnosis of upper and lower extremity long bone fractures. The aim of this study was to determine the accuracy of ultrasound compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures in traumatic patients. This cross-sectional study assessed 100 patients admitted to the emergency department of Imam Khomeini Hospital, Ahvaz, Iran with trauma to the upper and lower extremities, from September 2014 through October 2015. In all patients, first ultrasound and then standard plain radiography for the upper and lower limb was performed. Data were analyzed by SPSS version 21 to determine the specificity and sensitivity. The mean ages of patients with upper and lower limb trauma were 31.43±12.32 years and 29.63±5.89 years, respectively. Radius fracture was the most frequent compared to other fractures (27%). Sensitivity, specificity, positive predictive value, and negative predictive value of ultrasound compared with plain radiography in the diagnosis of upper extremity long bone fractures were 95.3%, 87.7%, 87.2% and 96.2%, respectively, and the highest accuracy was observed in left arm fractures (100%). Tibia and fibula fractures were the most frequent types compared to other fractures (89.2%). Sensitivity, specificity, PPV and NPV of ultrasound compared with plain radiography in the diagnosis of lower extremity long bone fractures were 98.6%, 83%, 65.4% and 87.1%, respectively, and the highest accuracy was observed in men, lower ages and femoral fractures. The results of this study showed that ultrasound compared with plain radiography has a high accuracy in the diagnosis of upper and lower extremity long bone fractures.
Frouzan, Arash; Masoumi, Kambiz; Delirroyfard, Ali; Mazdaie, Behnaz; Bagherzadegan, Elnaz
2017-01-01
Background Long bone fractures are common injuries caused by trauma. Some studies have demonstrated that ultrasound has a high sensitivity and specificity in the diagnosis of upper and lower extremity long bone fractures. Objective The aim of this study was to determine the accuracy of ultrasound compared with plain radiography in the diagnosis of upper and lower extremity long bone fractures in traumatic patients. Methods This cross-sectional study assessed 100 patients admitted to the emergency department of Imam Khomeini Hospital, Ahvaz, Iran with trauma to the upper and lower extremities, from September 2014 through October 2015. In all patients, first ultrasound and then standard plain radiography for the upper and lower limb was performed. Data were analyzed by SPSS version 21 to determine the specificity and sensitivity. Results The mean ages of patients with upper and lower limb trauma were 31.43±12.32 years and 29.63±5.89 years, respectively. Radius fracture was the most frequent compared to other fractures (27%). Sensitivity, specificity, positive predictive value, and negative predictive value of ultrasound compared with plain radiography in the diagnosis of upper extremity long bone fractures were 95.3%, 87.7%, 87.2% and 96.2%, respectively, and the highest accuracy was observed in left arm fractures (100%). Tibia and fibula fractures were the most frequent types compared to other fractures (89.2%). Sensitivity, specificity, PPV and NPV of ultrasound compared with plain radiography in the diagnosis of lower extremity long bone fractures were 98.6%, 83%, 65.4% and 87.1%, respectively, and the highest accuracy was observed in men, lower ages and femoral fractures. Conclusion The results of this study showed that ultrasound compared with plain radiography has a high accuracy in the diagnosis of upper and lower extremity long bone fractures. PMID:28979747
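For readers who want to reproduce the reported diagnostic indices, the sketch below computes sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix against a radiography reference standard; the counts are illustrative placeholders, not the study's data.

```python
# Minimal sketch: diagnostic accuracy indices from a 2x2 confusion matrix.
# The counts below are illustrative placeholders, not data from the study.

def diagnostic_indices(tp, fp, fn, tn):
    """Return sensitivity, specificity, PPV and NPV as fractions."""
    sensitivity = tp / (tp + fn)   # fraction of true fractures detected by ultrasound
    specificity = tn / (tn + fp)   # fraction of non-fractures correctly ruled out
    ppv = tp / (tp + fp)           # probability of fracture given a positive ultrasound
    npv = tn / (tn + fn)           # probability of no fracture given a negative ultrasound
    return sensitivity, specificity, ppv, npv

if __name__ == "__main__":
    # Hypothetical counts against the plain-radiography reference standard.
    sens, spec, ppv, npv = diagnostic_indices(tp=41, fp=6, fn=2, tn=51)
    print(f"sensitivity={sens:.1%} specificity={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
```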
PHASE QUANTIZATION STUDY OF SPATIAL LIGHT MODULATOR FOR EXTREME HIGH-CONTRAST IMAGING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dou, Jiangpei; Ren, Deqing, E-mail: jpdou@niaot.ac.cn, E-mail: jiangpeidou@gmail.com
2016-11-20
Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problems that is matched to the controllable phase step of the SLM. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10⁻¹⁰. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10⁻¹⁰ in comparison to that by using a deformable mirror.
Phase Quantization Study of Spatial Light Modulator for Extreme High-contrast Imaging
NASA Astrophysics Data System (ADS)
Dou, Jiangpei; Ren, Deqing
2016-11-01
Direct imaging of exoplanets by reflected starlight is extremely challenging due to the large luminosity ratio to the primary star. Wave-front control is a critical technique to attenuate the speckle noise in order to achieve an extremely high contrast. We present a phase quantization study of a spatial light modulator (SLM) for wave-front control to meet the contrast requirement of detection of a terrestrial planet in the habitable zone of a solar-type star. We perform the numerical simulation by employing the SLM with different phase accuracy and actuator numbers, which are related to the achievable contrast. We use an optimization algorithm to solve the quantization problems that is matched to the controllable phase step of the SLM. Two optical configurations are discussed with the SLM located before and after the coronagraph focal plane mask. The simulation result has constrained the specification for SLM phase accuracy in the above two optical configurations, which gives us a phase accuracy of 0.4/1000 and 1/1000 waves to achieve a contrast of 10⁻¹⁰. Finally, we have demonstrated that an SLM with more actuators can deliver a competitive contrast performance on the order of 10⁻¹⁰ in comparison to that by using a deformable mirror.
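As a rough illustration of how the controllable phase step limits wavefront correction, the sketch below quantizes a synthetic wavefront-error map to a 1/1000-wave step and reports the residual RMS; it is not the authors' optimization algorithm or their contrast model, and the input wavefront scale is assumed.

```python
import numpy as np

# Toy sketch: quantize a small wavefront-error map to the SLM's controllable
# phase step and look at the residual RMS error in waves. This only illustrates
# phase quantization, not the paper's optimization algorithm or contrast model;
# the 1/1000-wave step mirrors the quoted requirement.

rng = np.random.default_rng(0)
wavefront = rng.normal(0.0, 0.005, size=(64, 64))    # aberrations in waves (assumed scale)

phase_step = 1.0 / 1000.0                             # controllable phase step in waves
residual = wavefront - np.round(wavefront / phase_step) * phase_step

print(f"residual RMS after quantized correction: {residual.std():.2e} waves")
```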
NASA Astrophysics Data System (ADS)
Kim, S. K.; Lee, J.; Zhang, C.; Ames, S.; Williams, D. N.
2017-12-01
Deep learning techniques have been successfully applied to solve many problems in climate and geoscience using massive-scale observed and modeled data. For extreme climate event detection, several models based on deep neural networks have been recently proposed and attain superior performance that overshadows all previous handcrafted, expert-based methods. The issue arising, though, is that accurate localization of events requires high-quality climate data. In this work, we propose a framework capable of detecting and localizing extreme climate events in very coarse climate data. Our framework is based on two models using deep neural networks: (1) convolutional neural networks (CNNs) to detect and localize extreme climate events, and (2) a pixel recursive super resolution model to reconstruct high resolution climate data from low resolution climate data. Based on our preliminary work, we have presented two CNNs in our framework for different purposes, detection and localization. Our results using CNNs for extreme climate event detection show that simple neural nets can capture the pattern of extreme climate events with high accuracy from very coarse reanalysis data. However, localization accuracy is relatively low due to the coarse resolution. To resolve this issue, the pixel recursive super resolution model reconstructs a higher-resolution input for the localization CNN. We present the best network using the pixel recursive super resolution model, which synthesizes details of tropical cyclones in ground truth data while enhancing their resolution. Therefore, this approach not only dramatically reduces the human effort, but also suggests the possibility of reducing the computing cost required for downscaling to increase the resolution of data.
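A schematic of the detection component only is sketched below as a small PyTorch patch classifier; the layer sizes, the four stacked input variables and the two-class output are assumptions, and the pixel recursive super resolution model is not shown.

```python
import torch
from torch import nn

# Schematic of the detection CNN only: a small convolutional classifier that
# labels a fixed-size patch of a climate field (several reanalysis variables
# stacked as channels) as containing an extreme event or not. Layer sizes,
# the 4-channel input and the 2-class output are assumptions, not the paper's
# architecture; the super resolution model is omitted.

class PatchClassifier(nn.Module):
    def __init__(self, in_channels: int = 4, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 32x32 input -> two 2x2 poolings -> 8x8 feature maps with 32 channels
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 8 * 8, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

model = PatchClassifier()
patch = torch.randn(1, 4, 32, 32)        # one 32x32 patch with 4 stacked variables
print(model(patch).shape)                # torch.Size([1, 2])
```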
Using New Models to Analyze Complex Regularities of the World: Commentary on Musso et al. (2013)
ERIC Educational Resources Information Center
Nokelainen, Petri; Silander, Tomi
2014-01-01
This commentary to the recent article by Musso et al. (2013) discusses issues related to model fitting, comparison of classification accuracy of generative and discriminative models, and two (or more) cultures of data modeling. We start by questioning the extremely high classification accuracy with empirical data from a complex domain. There is…
Validation of China-wide interpolated daily climate variables from 1960 to 2011
NASA Astrophysics Data System (ADS)
Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang
2015-02-01
Temporally and spatially continuous meteorological variables are increasingly in demand to support many different types of applications related to climate studies. Using measurements from 600 climate stations, a thin-plate spline method was applied to generate daily gridded climate datasets for mean air temperature, maximum temperature, minimum temperature, relative humidity, sunshine duration, wind speed, atmospheric pressure, and precipitation over China for the period 1961-2011. A comprehensive evaluation of interpolated climate was conducted at 150 independent validation sites. The results showed superior performance for most of the estimated variables. Except for wind speed, determination coefficients (R²) varied from 0.65 to 0.90, and interpolations showed high consistency with observations. Most of the estimated climate variables showed relatively consistent accuracy among all seasons according to the root mean square error, R², and relative predictive error. The interpolated data correctly predicted the occurrence of daily precipitation at validation sites with an accuracy of 83%. Moreover, the interpolation data successfully explained the interannual variability trend for the eight meteorological variables at most validation sites. Consistent interannual variability trends were observed at 66-95% of the sites for the eight meteorological variables. Accuracy in distinguishing extreme weather events differed substantially among the meteorological variables. The interpolated data identified extreme events for the three temperature variables, relative humidity, and sunshine duration with an accuracy ranging from 63 to 77%. However, for wind speed, air pressure, and precipitation, the interpolation model correctly identified only 41, 48, and 58% of extreme events, respectively. The validation indicates that the interpolations can be applied with high confidence for the three temperature variables, as well as relative humidity and sunshine duration, based on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.
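A minimal sketch of thin-plate spline interpolation of station values onto a regular grid is given below, using SciPy's radial basis function interpolator as a stand-in for the authors' implementation; the station coordinates and temperatures are synthetic, and a real application would also carry elevation as a covariate.

```python
import numpy as np
from scipy.interpolate import Rbf

# Minimal sketch: thin-plate spline interpolation of daily station values onto
# a regular grid, as a stand-in for the paper's China-wide interpolation.
# Station locations and temperatures below are synthetic.

rng = np.random.default_rng(1)
lon = rng.uniform(75, 135, 600)                                # station longitudes (deg E)
lat = rng.uniform(18, 53, 600)                                 # station latitudes (deg N)
tmean = 25 - 0.6 * (lat - 18) + rng.normal(0, 1.0, lon.size)   # synthetic daily mean temperature

spline = Rbf(lon, lat, tmean, function="thin_plate")           # thin-plate spline surface

grid_lon, grid_lat = np.meshgrid(np.linspace(75, 135, 120), np.linspace(18, 53, 70))
tmean_grid = spline(grid_lon, grid_lat)                        # gridded daily field
print(tmean_grid.shape, tmean_grid.min(), tmean_grid.max())
```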
High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning
NASA Technical Reports Server (NTRS)
Hill, Gerald M.
1997-01-01
To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors which are designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons that will be explored in this paper, the Anderson current loop is the preferred method used for signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel, and is detailed.
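The Anderson-loop conditioning itself is not reproduced here; the sketch below only shows the standard Callendar-Van Dusen conversion from a PT100 resistance reading to temperature, the step that follows the current-loop measurement, using the common IEC 60751 coefficients and an illustrative resistance value.

```python
import math

# Sketch: convert a PT100 resistance reading to temperature with the
# Callendar-Van Dusen equation (valid for T >= 0 degC). The Anderson current
# loop only provides the accurately conditioned resistance value and is not
# modeled here. Coefficients are the standard IEC 60751 values.

R0 = 100.0          # ohms at 0 degC for a PT100
A = 3.9083e-3
B = -5.775e-7

def pt100_temperature(resistance_ohms: float) -> float:
    """Invert R = R0 * (1 + A*T + B*T^2) for T >= 0 degC."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - resistance_ohms / R0))) / (2.0 * B)

print(f"{pt100_temperature(119.40):.2f} degC")  # roughly 50 degC for ~119.4 ohms
```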
Extremal optimization for Sherrington-Kirkpatrick spin glasses
NASA Astrophysics Data System (ADS)
Boettcher, S.
2005-08-01
Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to less than 1% accuracy, rational values of ω=2/3 for the finite-size correction exponent and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has been obtained analytically yet. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
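A compact sketch of τ-EO applied to a small SK instance is shown below: spins are ranked by their local fitness and rank k is flipped with probability proportional to k^(-τ). The system size, τ, and the number of updates are illustrative choices, not the paper's settings.

```python
import numpy as np

# Sketch of tau-EO for a small Sherrington-Kirkpatrick instance with +/-J bonds.
# Fitness of spin i is lambda_i = s_i * sum_j J_ij s_j; spins are ranked by
# fitness (most unsatisfied first) and rank k is flipped with probability
# proportional to k^(-tau). Sizes and tau below are illustrative only.

rng = np.random.default_rng(2)
N, tau, updates = 200, 1.4, 20_000

J = rng.choice([-1.0, 1.0], size=(N, N))
J = np.triu(J, 1)
J = J + J.T                                   # symmetric +/-J couplings, zero diagonal

s = rng.choice([-1.0, 1.0], size=N)

ranks = np.arange(1, N + 1)
p = ranks ** (-tau)
p /= p.sum()                                  # power-law rank-selection probabilities

best_energy = np.inf
for _ in range(updates):
    local_field = J @ s
    fitness = s * local_field                 # satisfied spins have positive fitness
    order = np.argsort(fitness)               # most unsatisfied spin first
    k = rng.choice(N, p=p)                    # pick a rank according to k^(-tau)
    s[order[k]] *= -1.0                       # unconditionally flip that spin
    energy = -0.5 * s @ J @ s / np.sqrt(N)    # SK energy with 1/sqrt(N) scaling
    best_energy = min(best_energy, energy)

print(f"best energy per spin: {best_energy / N:.4f}")   # Parisi value is about -0.7632
```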
A Novel Gravity Compensation Method for High Precision Free-INS Based on “Extreme Learning Machine”
Zhou, Xiao; Yang, Gongliu; Cai, Qingzhong; Wang, Jing
2016-01-01
In recent years, with the emergence of high-precision inertial sensors (accelerometers and gyros), gravity compensation has become a major source influencing the navigation accuracy of inertial navigation systems (INS), especially for high-precision INS. This paper presents preliminary results concerning the effect of gravity disturbance on INS. Meanwhile, this paper proposes a novel gravity compensation method for high-precision INS, which estimates the gravity disturbance along the track using the extreme learning machine (ELM) method based on measured gravity data on the geoid, continues the gravity disturbance upward to the height of the INS, and then compensates the obtained gravity disturbance into the error equations of the INS to restrain INS error propagation. The estimation accuracy of the gravity disturbance data is verified by numerical tests. The root mean square error (RMSE) of the ELM estimation method can be improved by 23% and 44% compared with the bilinear interpolation method in plain and mountain areas, respectively. To further validate the proposed gravity compensation method, field experiments with an experimental vehicle were carried out in two regions. Test 1 was carried out in a plain area and Test 2 in a mountain area. The field experiment results also prove that the proposed gravity compensation method can significantly improve the positioning accuracy. During the 2-h field experiments, the positioning accuracy can be improved by 13% and 29%, respectively, in Tests 1 and 2, when the navigation scheme is compensated by the proposed gravity compensation method. PMID:27916856
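The gravity estimator is built on an extreme learning machine; the generic ELM fit (random hidden layer followed by a least-squares solve for the output weights) is sketched below on a synthetic two-dimensional surface. It is not the paper's trained model, and the feature and grid choices are assumptions.

```python
import numpy as np

# Generic extreme learning machine (ELM) regression sketch: a random hidden
# layer followed by a least-squares solve for the output weights, fit here to a
# synthetic 2-D "gravity disturbance" surface. The real method trains on
# measured gravity data on the geoid, which is not reproduced.

rng = np.random.default_rng(3)

X = rng.uniform(-1.0, 1.0, size=(2000, 2))                    # (lat, lon), normalized
y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1]) + 0.01 * rng.normal(size=2000)

n_hidden = 200
W = rng.normal(size=(X.shape[1], n_hidden))                   # random input weights
b = rng.normal(size=n_hidden)                                 # random biases
H = np.tanh(X @ W + b)                                        # hidden-layer outputs

beta, *_ = np.linalg.lstsq(H, y, rcond=None)                  # output weights by least squares

X_test = rng.uniform(-1.0, 1.0, size=(500, 2))
y_test = np.sin(3 * X_test[:, 0]) * np.cos(2 * X_test[:, 1])
y_pred = np.tanh(X_test @ W + b) @ beta
rmse = np.sqrt(np.mean((y_pred - y_test) ** 2))
print(f"ELM test RMSE: {rmse:.4f}")
```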
NASA Astrophysics Data System (ADS)
Keilis-Borok, V. I.; Soloviev, A.; Gabrielov, A.
2011-12-01
We describe a uniform approach to predicting different extreme events, also known as critical phenomena, disasters, or crises. The following types of such events are considered: strong earthquakes; economic recessions (their onset and termination); surges of unemployment; surges of crime; and electoral changes of the governing party. A uniform approach is possible due to the common feature of these events: each of them is generated by a certain hierarchical dissipative complex system. After a coarse-graining, such systems exhibit regular behavior patterns; we look among them for "premonitory patterns" that signal the approach of an extreme event. We introduce a methodology, based on optimal control theory, that assists disaster management in choosing an optimal set of disaster preparedness measures undertaken in response to a prediction. Predictions with their currently realistic (limited) accuracy do allow preventing a considerable part of the damage by a hierarchy of preparedness measures. Accuracy of prediction should be known, but not necessarily high.
NASA Technical Reports Server (NTRS)
Dube, W. P.; Sparks, L. L.; Slifka, A. J.; Bitsy, R. M.
1990-01-01
Advanced aerospace designs require thermal insulation systems which are consistent with cryogenic fluids, high thermal loads, and design restrictions such as weight and volume. To evaluate the thermal performance of these insulating systems, an apparatus capable of measuring thermal conductivity using extreme temperature differences (27 to 1100 K) is being developed. This system is described along with estimates of precision and accuracy in selected operating conditions. Preliminary data are presented.
NASA Astrophysics Data System (ADS)
Dube, W. P.; Sparks, L. L.; Slifka, A. J.; Bitsy, R. M.
Advanced aerospace designs require thermal insulation systems which are consistent with cryogenic fluids, high thermal loads, and design restrictions such as weight and volume. To evaluate the thermal performance of these insulating systems, an apparatus capable of measuring thermal conductivity using extreme temperature differences (27 to 1100 K) is being developed. This system is described along with estimates of precision and accuracy in selected operating conditions. Preliminary data are presented.
Diffractive shear interferometry for extreme ultraviolet high-resolution lensless imaging
NASA Astrophysics Data System (ADS)
Jansen, G. S. M.; de Beurs, A.; Liu, X.; Eikema, K. S. E.; Witte, S.
2018-05-01
We demonstrate a novel imaging approach and associated reconstruction algorithm for far-field coherent diffractive imaging, based on the measurement of a pair of laterally sheared diffraction patterns. The differential phase profile retrieved from such a measurement leads to improved reconstruction accuracy, increased robustness against noise, and faster convergence compared to traditional coherent diffractive imaging methods. We measure laterally sheared diffraction patterns using Fourier-transform spectroscopy with two phase-locked pulse pairs from a high harmonic source. Using this approach, we demonstrate spectrally resolved imaging at extreme ultraviolet wavelengths between 28 and 35 nm.
Advanced Ultrasonic Diagnosis of Extremity Trauma: The Faster Exam
NASA Technical Reports Server (NTRS)
Dulchavsky, S. A.; Henry, S. E.; Moed, B. R.; Diebel, L. N.; Marshburn, T.; Hamilton, D. R.; Logan, J.; Kirkpatrick, A. W.; Williams, D. R.
2002-01-01
Ultrasound is of proven accuracy in abdominal and thoracic trauma and may be useful to diagnose extremity injury in situations where radiography is not available, such as military and space applications. We prospectively evaluated the utility of extremity ultrasound performed by trained, non-physician personnel in patients with extremity trauma, to simulate remote aerospace or military applications. Methods: Patients with extremity trauma were identified by history, physical examination, and radiographic studies. Ultrasound examination was performed bilaterally by nonphysician personnel with a portable ultrasound device using a 10-5 MHz linear probe. Images were video-recorded for later analysis against radiography by Fisher's exact test. The average time of examination was 4 minutes. Ultrasound accurately diagnosed extremity injury in 94% of patients with no false positive exams; accuracy was greatest in mid-shaft locations and least in the metacarpals/metatarsals. Soft tissue/tendon injury was readily visualized. Extremity ultrasound can be performed quickly by nonphysician personnel with excellent accuracy. Blinded verification of the utility of ultrasound in patients with extremity injury should be done to determine if Extremity and Respiratory evaluation should be added to the FAST examination (the FASTER exam) and to verify the technique in remote locations such as military and aerospace applications.
Saini, Harsh; Lal, Sunil Pranit; Naidu, Vimal Vikash; Pickering, Vincel Wince; Singh, Gurmeet; Tsunoda, Tatsuhiko; Sharma, Alok
2016-12-05
High dimensional feature space generally degrades classification in several applications. In this paper, we propose a strategy called gene masking, in which non-contributing dimensions are heuristically removed from the data to improve classification accuracy. Gene masking is implemented via a binary encoded genetic algorithm that can be integrated seamlessly with classifiers during the training phase of classification to perform feature selection. It can also be used to discriminate between features that contribute most to the classification, thereby, allowing researchers to isolate features that may have special significance. This technique was applied on publicly available datasets whereby it substantially reduced the number of features used for classification while maintaining high accuracies. The proposed technique can be extremely useful in feature selection as it heuristically removes non-contributing features to improve the performance of classifiers.
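A stripped-down sketch of the gene-masking idea follows: a binary chromosome switches features on or off and a genetic algorithm keeps the masks that give higher cross-validated accuracy. The GA operators, the kNN classifier, and the synthetic data are simplified assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Sketch of "gene masking": a binary-encoded genetic algorithm searches for a
# feature mask that maximizes cross-validated accuracy. Tournament selection,
# uniform crossover, bit-flip mutation and the kNN classifier are simplified
# stand-ins for the authors' setup; the data are synthetic.

rng = np.random.default_rng(4)
X, y = make_classification(n_samples=300, n_features=60, n_informative=8, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    new_pop = [pop[scores.argmax()].copy()]                       # elitism: keep best mask
    while len(new_pop) < len(pop):
        a, b = rng.choice(len(pop), size=2, replace=False)
        parent1 = pop[a] if scores[a] >= scores[b] else pop[b]    # tournament 1
        a, b = rng.choice(len(pop), size=2, replace=False)
        parent2 = pop[a] if scores[a] >= scores[b] else pop[b]    # tournament 2
        cross = rng.integers(0, 2, size=X.shape[1]).astype(bool)  # uniform crossover
        child = np.where(cross, parent1, parent2)
        flip = rng.random(X.shape[1]) < 0.02                      # bit-flip mutation
        child = np.where(flip, 1 - child, child)
        new_pop.append(child)
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {int(best.sum())} of {X.shape[1]} features, CV accuracy {fitness(best):.3f}")
```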
Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O Halloran, John
2015-01-01
Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its cloud free capability, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud-cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1-98.5%), with differences between the two classifiers being minimal (<0.5%). Increasing levels of post classification filtering led to a decrease in estimated forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%) whereas in Sligo, highest overall accuracy was obtained for the Prime2 dataset (97.43%), although accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting.
Devaney, John; Barrett, Brian; Barrett, Frank; Redmond, John; O'Halloran, John
2015-01-01
Quantification of spatial and temporal changes in forest cover is an essential component of forest monitoring programs. Due to its cloud free capability, Synthetic Aperture Radar (SAR) is an ideal source of information on forest dynamics in countries with near-constant cloud-cover. However, few studies have investigated the use of SAR for forest cover estimation in landscapes with highly sparse and fragmented forest cover. In this study, the potential use of L-band SAR for forest cover estimation in two regions (Longford and Sligo) in Ireland is investigated and compared to forest cover estimates derived from three national (Forestry2010, Prime2, National Forest Inventory), one pan-European (Forest Map 2006) and one global forest cover (Global Forest Change) product. Two machine-learning approaches (Random Forests and Extremely Randomised Trees) are evaluated. Both Random Forests and Extremely Randomised Trees classification accuracies were high (98.1–98.5%), with differences between the two classifiers being minimal (<0.5%). Increasing levels of post classification filtering led to a decrease in estimated forest area and an increase in overall accuracy of SAR-derived forest cover maps. All forest cover products were evaluated using an independent validation dataset. For the Longford region, the highest overall accuracy was recorded with the Forestry2010 dataset (97.42%) whereas in Sligo, highest overall accuracy was obtained for the Prime2 dataset (97.43%), although accuracies of SAR-derived forest maps were comparable. Our findings indicate that spaceborne radar could aid inventories in regions with low levels of forest cover in fragmented landscapes. The reduced accuracies observed for the global and pan-continental forest cover maps in comparison to national and SAR-derived forest maps indicate that caution should be exercised when applying these datasets for national reporting. PMID:26262681
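For readers unfamiliar with the two ensembles compared, the sketch below fits Random Forests and Extremely Randomised Trees with scikit-learn on synthetic, class-imbalanced data and reports overall accuracy; the actual L-band SAR feature extraction and validation data are not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Sketch comparing the two ensemble classifiers evaluated in the study
# (Random Forests vs Extremely Randomised Trees) on synthetic "pixel" features.
# The class imbalance loosely mimics sparse forest cover; nothing here is the
# study's SAR processing chain or validation data.

X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, model in [("Random Forests", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("Extremely Randomised Trees", ExtraTreesClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: overall accuracy {acc:.3f}")
```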
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performances considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, these models can be used to describe part-to-part variations as well as an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
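As a simplified analogue of the substep idea (not CRAM itself), the sketch below advances a small decay chain dN/dt = A N over one step split into m identical substeps; because the substeps are identical, the propagator (here a dense matrix exponential, in CRAM the LU factors of the rational approximation) is formed once and reused. The decay constants and step length are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Simplified analogue of the substep idea for a decay chain dN/dt = A*N.
# CRAM applies a rational approximation and reuses LU factorizations across
# identical substeps; here a dense matrix exponential plays that role.
# Decay constants and the step length are illustrative only.

lam = np.array([1e-3, 5e-2, 2e-1])            # decay constants (1/s), assumed values
A = np.array([[-lam[0],     0.0,     0.0],
              [ lam[0], -lam[1],     0.0],
              [    0.0,  lam[1], -lam[2]]])   # simple three-nuclide chain

N0 = np.array([1.0, 0.0, 0.0])
step, m = 100.0, 8                             # one step of 100 s split into 8 substeps

propagator = expm(A * (step / m))              # formed once, reused for every substep
N = N0.copy()
for _ in range(m):
    N = propagator @ N

print("end-of-step concentrations:", N)
print("reference (single expm)   :", expm(A * step) @ N0)
```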
NASA Astrophysics Data System (ADS)
Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.
2010-07-01
The European Extremely Large Telescope (E-ELT) primary mirror is based on 984 segments that must achieve the required optical performance; each segment must be positioned relative to adjacent segments with nanometer accuracy. CESA designed the M1 Position Actuators (PACT) to comply with the demanding performance requirements of the E-ELT. Three PACTs are located under each segment, controlling the three out-of-plane degrees of freedom (tip, tilt, piston). To achieve high linear accuracy over long operational displacements, PACT uses two stages in series. The first stage, based on a voice coil actuator (VCA), achieves high accuracy over very short travel ranges, while the second stage, based on a brushless DC (BLDC) motor, provides a large stroke range and positions the first stage close to the demanded position. The BLDC motor provides continuous, smooth movement compared to the sudden jumps of a stepper. A gearbox attached to the motor allows a large reduction in power consumption and poses a significant sizing challenge. The PACT space envelope was reduced by means of two flat springs fixed to the VCA, whose main characteristic is a low linear axial stiffness. To achieve the best performance for PACT, sensors have been included in both stages. A rotary encoder is included in the BLDC stage to close the position/velocity control loop. An incremental optical encoder measures the PACT travel range with relative nanometer accuracy and is used to close the position loop of the whole actuator movement. For this purpose, four different optical sensors with different gratings will be evaluated. The control strategy comprises different internal closed loops that work together to achieve the required performance.
NASA Technical Reports Server (NTRS)
1980-01-01
Weed Instrument Inc. produces a line of thermocouples - temperature sensors - for a variety of industrial and research uses. One of the company's newer products is a thermocouple specially designed for high accuracy at extreme temperatures above 3,000 degrees Fahrenheit. Development of sensor brought substantial increases in Weed Instrument sales and employment.
Accuracy of Handheld Blood Glucose Meters at High Altitude
de Vries, Suzanna T.; Fokkert, Marion J.; Dikkeschei, Bert D.; Rienks, Rienk; Bilo, Karin M.; Bilo, Henk J. G.
2010-01-01
Background Due to increasing numbers of people with diabetes taking part in extreme sports (e.g., high-altitude trekking), reliable handheld blood glucose meters (BGMs) are necessary. Accurate blood glucose measurement under extreme conditions is paramount for safe recreation at altitude. Prior studies reported bias in blood glucose measurements using different BGMs at high altitude. We hypothesized that glucose-oxidase based BGMs are more influenced by the lower atmospheric oxygen pressure at altitude than glucose dehydrogenase based BGMs. Methodology/Principal Findings Glucose measurements at simulated altitude of nine BGMs (six glucose dehydrogenase and three glucose oxidase BGMs) were compared to glucose measurement on a similar BGM at sea level and to a laboratory glucose reference method. Venous blood samples of four different glucose levels were used. Moreover, two glucose oxidase and two glucose dehydrogenase based BGMs were evaluated at different altitudes on Mount Kilimanjaro. Accuracy criteria were set at a bias <15% from reference glucose (when >6.5 mmol/L) and <1 mmol/L from reference glucose (when <6.5 mmol/L). No significant difference was observed between measurements at simulated altitude and sea level for either glucose oxidase based BGMs or glucose dehydrogenase based BGMs as a group phenomenon. Two GDH based BGMs did not meet set performance criteria. Most BGMs are generally overestimating true glucose concentration at high altitude. Conclusion At simulated high altitude all tested BGMs, including glucose oxidase based BGMs, did not show influence of low atmospheric oxygen pressure. All BGMs, except for two GDH based BGMs, performed within predefined criteria. At true high altitude one GDH based BGM had best precision and accuracy. PMID:21103399
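The accuracy criterion used in the study translates directly into a small check, sketched below with invented meter and reference readings.

```python
# Sketch of the study's accuracy criterion for a blood glucose meter reading:
# bias < 15% of the reference value when the reference is > 6.5 mmol/L, and an
# absolute difference < 1 mmol/L when the reference is < 6.5 mmol/L.
# The example readings are invented for illustration.

def meets_criterion(meter_mmol_l: float, reference_mmol_l: float) -> bool:
    if reference_mmol_l > 6.5:
        return abs(meter_mmol_l - reference_mmol_l) / reference_mmol_l < 0.15
    return abs(meter_mmol_l - reference_mmol_l) < 1.0

for meter, reference in [(5.8, 5.2), (10.9, 9.1), (13.4, 11.0)]:
    print(f"meter={meter} reference={reference} -> {meets_criterion(meter, reference)}")
```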
Bryan A. Black; Daniel Griffin; Peter van der Sleen; Alan D. Wanamaker; James H. Speer; David C. Frank; David W. Stahle; Neil Pederson; Carolyn A. Copenheaver; Valerie Trouet; Shelly Griffin; Bronwyn M. Gillanders
2016-01-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time...
NASA Astrophysics Data System (ADS)
De Niel, J.; Demarée, G.; Willems, P.
2017-10-01
Governments, policy makers, and water managers are pushed by recent socioeconomic developments such as population growth and increased urbanization, including the occupation of floodplains, to impose very stringent regulations on the design of hydrological structures. These structures need to withstand storms with return periods typically ranging between 1,250 and 10,000 years. Such quantification involves extrapolations of systematically measured instrumental data, possibly complemented by quantitative and/or qualitative historical data and paleoflood data. The accuracy of the extrapolations is, however, highly unclear in practice. In order to evaluate extreme river peak flow extrapolation and accuracy, we studied historical and instrumental data of the past 500 years along the Meuse River. We moreover propose an alternative method for the estimation of the extreme value distribution of river peak flows, based on weather types derived from sea level pressure reconstructions. This approach results in a more accurate estimation of the tail of the distribution, where current methods are underestimating the design levels related to extreme high return periods. The design flood for a 1,250-year return period is estimated at 4,800 m³ s⁻¹ for the proposed method, compared with 3,450 and 3,900 m³ s⁻¹ for a traditional method and a previous study.
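Classical extrapolation to long return periods usually goes through an extreme value fit; a minimal sketch with SciPy's GEV distribution is shown below. The annual-maximum series is synthetic and the sketch illustrates the traditional approach the authors compare against, not their weather-type-based method.

```python
from scipy.stats import genextreme

# Sketch of the traditional approach the paper compares against: fit a GEV
# distribution to annual-maximum peak flows and read off the design flood for
# a chosen return period. The series below is synthetic, not the 500-year
# Meuse record used in the study.

annual_maxima = genextreme.rvs(c=-0.1, loc=1500.0, scale=450.0, size=120, random_state=5)

shape, loc, scale = genextreme.fit(annual_maxima)

return_period = 1250.0                                    # years
design_flood = genextreme.ppf(1.0 - 1.0 / return_period, shape, loc=loc, scale=scale)
print(f"estimated {return_period:.0f}-year peak flow: {design_flood:.0f} m^3/s")
```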
NASA Astrophysics Data System (ADS)
Goldberg, Kenneth A.; Naulleau, Patrick P.; Bokor, Jeffrey; Chapman, Henry N.
2002-07-01
As the quality of optical systems for extreme ultraviolet lithography improves, high-accuracy wavefront metrology for alignment and qualification becomes ever more important. To enable the development of diffraction-limited EUV projection optics, visible-light and EUV interferometries must work in close collaboration. We present a detailed comparison of EUV and visible-light wavefront measurements performed across the field of view of a lithographic-quality EUV projection optical system designed for use in the Engineering Test Stand developed by the Virtual National Laboratory and the EUV Limited Liability Company. The comparisons reveal that the present level of RMS agreement lies in the 0.3-0.4-nm range. Astigmatism is the most significant aberration component for the alignment of this optical system; it is also the dominant term in the discrepancy, and the aberration with the highest measurement uncertainty. With EUV optical systems requiring total wavefront quality in the λEUV/50 range, and even higher surface-figure quality for the individual mirror elements, improved accuracy through future comparisons, and additional studies, are required.
NASA Astrophysics Data System (ADS)
Kumari, Komal; Donzis, Diego
2017-11-01
Highly resolved computational simulations on massively parallel machines are critical in understanding the physics of a vast number of complex phenomena in nature governed by partial differential equations. Simulations at extreme levels of parallelism present many challenges, with communication between processing elements (PEs) being a major bottleneck. In order to fully exploit the computational power of exascale machines, one needs to devise numerical schemes that relax global synchronizations across PEs. These asynchronous computations, however, have a degrading effect on the accuracy of standard numerical schemes. We have developed asynchrony-tolerant (AT) schemes that maintain their order of accuracy despite relaxed communications. We show, analytically and numerically, that these schemes retain their numerical properties with multi-step higher-order temporal Runge-Kutta schemes. We also show that for a range of optimized parameters, the computation time and error for AT schemes are less than for their synchronous counterparts. Stability of the AT schemes, which depends upon the history and random nature of delays, is also discussed. Support from NSF is gratefully acknowledged.
Stover, Bert; Silverstein, Barbara; Wickizer, Thomas; Martin, Diane P; Kaufman, Joel
2007-06-01
Work related upper extremity musculoskeletal disorders (MSD) result in substantial disability, and expense. Identifying workers or jobs with high risk can trigger intervention before workers are injured or the condition worsens. We investigated a disability instrument, the QuickDASH, as a workplace screening tool to identify workers at high risk of developing upper extremity MSDs. Subjects included workers reporting recurring upper extremity MSD symptoms in the past 7 days (n = 559). The QuickDASH was reasonably accurate at baseline with sensitivity of 73% for MSD diagnosis, and 96% for symptom severity. Specificity was 56% for diagnosis, and 53% for symptom severity. At 1-year follow-up sensitivity and specificity for MSD diagnosis was 72% and 54%, respectively, as predicted by the baseline QuickDASH score. For symptom severity, sensitivity and specificity were 86% and 52%. An a priori target sensitivity of 70% and specificity of 50% was met by symptom severity, work pace and quality, and MSD diagnosis. The QuickDASH may be useful for identifying jobs or workers with increased risk for upper extremity MSDs. It may provide an efficient health surveillance screening tool useful for targeting early workplace intervention for prevention of upper extremity MSD problems.
Hemispheric preference and progressive-part or whole practice in beginning typewriting.
Johns, L B
1989-04-01
This investigation explored the interaction of progressive-part versus whole methods of practice with hemispheric preference for processing information and the impact of each upon high school students' speed and accuracy in beginning typewriting. Zenhausern's Differential Hemispheric Activation Test was scored in such a way that it was possible to plot the scores along a continuum. Analysis of variance gave significant F ratios on 3 of the 4 testing days. The continuous scores were divided into five categories: middle, left moderates, right moderates, extreme rights, and extreme lefts. The moderate-left group was consistently the fastest group, and the extreme rights were consistently the slowest group. This difference was significant for all four testing days, with the moderate-left mean speed varying between 4 to 6 words per minute faster each testing day. The extreme rights were consistently the most accurate, even though not statistically significantly so. There was no significant difference between method of practice and typewriting speed or between method of practice and typewriting accuracy; however, on all four testing days the mean gross speed of the whole-practice learning group was 0.73 to 0.99 words per minute faster than the progressive-part group. A two-way analysis of variance indicated no interaction between method of practice and hemispheric preference.
Pop, Tudor Radu; Vesa, Ştefan Cristian; Trifa, Adrian Pavel; Crişan, Sorin; Buzoianu, Anca Dana
2014-01-01
This study investigates the accuracy of two scores in predicting the risk of acute lower extremity deep vein thrombosis. The study included 170 patients [85 (50%) women and 85 (50%) men] who were diagnosed with acute lower extremity deep vein thrombosis (DVT) with duplex ultrasonography. Median age was 62 (52.75; 72) years. The control group consisted of 166 subjects [96 (57.8%) women and 70 (42.2%) men], without DVT, matched for age (± one year) to those in the group with DVT. The patients and controls were selected from those admitted to the internal medicine, cardiology and geriatrics wards within the Municipal Hospital of Cluj-Napoca, Romania, between October 2009 and June 2011. Clinical, demographic and lab data were recorded for each patient. For each patient we calculated the prior risk of DVT using two prediction scores: Caprini and Padua. According to the Padua score only 93 (54.7%) patients with DVT had been at high risk of developing DVT, while 48 (28.9%) of controls were at high risk of developing DVT. When Padua score included PAI-1 4G/5G and MTHFR C677T polymorphisms, the sensitivity increased at 71.7%. Using the Caprini score, we determined that 147 (86.4%) patients with DVT had been at high risk of developing DVT, while 103 (62%) controls were at high risk of developing DVT. A Caprini score higher than 5 was the strongest predictor of acute lower extremity DVT risk. The Caprini prediction score was more sensitive than the Padua score in assessing the high risk of DVT in medical patients. PAI-1 4G/5G and MTHFR C677T polymorphisms increased the sensitivity of Padua score.
Performance of the Micropower Voltage Reference ADR3430 Under Extreme Temperatures
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad
2011-01-01
Electronic systems designed for use in space exploration systems are expected to be exposed to harsh temperatures. For example, operation at cryogenic temperatures is anticipated in space missions such as polar craters of the moon (-223 C), James Webb Space Telescope (-236 C), Mars (-140 C), Europa (-223 C), Titan (-178 C), and other deep space probes away from the sun. Similarly, rovers and landers on the lunar surface, and deep space probes intended for the exploration of Venus, are expected to encounter high temperature extremes. Electronics capable of operation under extreme temperatures would not only meet the requirements of future space-based systems, but would also contribute to enhancing efficiency and improving reliability of these systems through the elimination of the thermal control elements that present electronics need for proper operation under the harsh environment of space. In this work, the performance of a micropower, high accuracy voltage reference was evaluated over a wide temperature range. The Analog Devices ADR3430 chip uses a patented voltage reference architecture to achieve high accuracy, low temperature coefficient, and low noise in a CMOS process [1]. The device combines two voltages of opposite temperature coefficients to create an output voltage that is almost independent of ambient temperature. It is rated for the industrial temperature range of -40 C to +125 C, and is ideal for use in low power precision data acquisition systems and in battery-powered devices. Table 1 shows some of the manufacturer's device specifications.
Cheng, Han-miao; Li, Hong-bin
2015-08-01
The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out in the China Electric Power Research Institute, and results demonstrate that the system surpasses the accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.
NASA Astrophysics Data System (ADS)
Chetty, S.; Field, L. A.
2014-12-01
SWIMS III is a low-cost, autonomous sensor data gathering platform developed specifically for extreme/harsh cold environments. The Arctic Ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally inert materials that, when deployed, will increase the albedo, enabling the formation and/or preservation of multi-year ice. SWIMS III's sophisticated autonomous sensors are designed to measure the albedo, weather, water temperature and other environmental parameters. This platform uses low-cost, high accuracy/precision sensors and an extreme-environment command and data handling computer system with satellite and terrestrial wireless communications. The system also incorporates tilt sensors and sonar-based ice thickness sensors. The system is lightweight and can be deployed by hand by a single person. This presentation covers the technical and design challenges in developing and deploying these platforms.
Extreme Temperature Operation of a 10 MHz Silicon Oscillator Type STCL1100
NASA Technical Reports Server (NTRS)
Patterson, Richard L.; Hammoud, Ahmad
2008-01-01
The performance of an STMicroelectronics 10 MHz silicon oscillator was evaluated under exposure to extreme temperatures. The oscillator was characterized in terms of its output frequency stability, output signal rise and fall times, duty cycle, and supply current. The effects of thermal cycling and re-start capability at extreme low and high temperatures were also investigated. The silicon oscillator chip operated well with good stability in its output frequency over the temperature region of -50 C to +130 C, a range that by far exceeded its recommended specified boundaries of -20 C to +85 C. In addition, this chip, which is a low-cost oscillator designed for use in applications where great accuracy is not required, continued to function at cryogenic temperatures as low as -195 C, but at the expense of a drop in its output frequency. The STCL1100 silicon oscillator was also able to re-start at both -195 C and +130 C, and it exhibited no change in performance due to the thermal cycling. In addition, no physical damage was observed in the packaging material due to extreme temperature exposure and thermal cycling. Therefore, it can be concluded that this device could potentially be used in space exploration missions under extreme temperature conditions in microprocessor and other applications where tight clock accuracy is not critical. Beyond this screening evaluation, however, additional testing is required to fully establish the reliability of these devices and to determine their suitability for long-term use.
Monteiro-Soares, M; Martins-Mendes, D; Vaz-Carneiro, A; Sampaio, S; Dinis-Ribeiro, M
2014-10-01
We systematically review the available systems used to classify diabetic foot ulcers in order to synthesize their methodological quality issues and their accuracy in predicting lower extremity amputation, as this may represent a critical point in these patients' care. Two investigators searched the EBSCO, ISI, PubMed and SCOPUS databases and independently selected studies published until May 2013 and reporting prognostic accuracy and/or reliability of specific systems for patients with diabetic foot ulcer in order to predict lower extremity amputation. We included 25 studies reporting a prevalence of lower extremity amputation between 6% and 78%. Eight different diabetic foot ulcer descriptions and seven prognostic stratification classification systems were addressed, with a variable (1-9) number of factors included, especially peripheral arterial disease (n = 12), infection at the ulcer site (n = 10) or ulcer depth (n = 10). The Meggitt-Wagner, S(AD)SAD and Texas University Classification systems were the most extensively validated, whereas ten classifications were derived or validated only once. Reliability was reported in a single study, and accuracy measures were reported in five studies, with another eight allowing their calculation. Pooled accuracy ranged from 0.65 (for gangrene) to 0.74 (for infection). There are numerous classification systems for diabetic foot ulcer outcome prediction, but only a few studies evaluated their reliability or external validity. Studies rarely validated several systems simultaneously and only a few reported accuracy measures. Further studies assessing reliability and accuracy of the available systems and their composing variables are needed. Copyright © 2014 John Wiley & Sons, Ltd.
Multi-element stochastic spectral projection for high quantile estimation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ko, Jordan, E-mail: jordan.ko@mac.com; Garnier, Josselin
2013-06-15
We investigate quantile estimation by a multi-element generalized Polynomial Chaos (gPC) metamodel, where the exact numerical model is approximated by complementary metamodels in overlapping domains that mimic the model's exact response. The gPC metamodel is constructed by the non-intrusive stochastic spectral projection approach, and function evaluation on the gPC metamodel can be considered as essentially free. Thus, a large number of Monte Carlo samples from the metamodel can be used to estimate the α-quantile, for moderate values of α. As the gPC metamodel is an expansion about the means of the inputs, its accuracy may worsen away from these mean values where the extreme events may occur. By increasing the approximation accuracy of the metamodel, we may eventually improve the accuracy of quantile estimation, but this is very expensive. A multi-element approach is therefore proposed by combining a global metamodel in the standard normal space with supplementary local metamodels constructed in bounded domains about the design points corresponding to the extreme events. To improve the accuracy and to minimize the sampling cost, sparse-tensor and anisotropic-tensor quadratures are tested in addition to the full-tensor Gauss quadrature in the construction of local metamodels; different bounds of the gPC expansion are also examined. The global and local metamodels are combined in the multi-element gPC (MEgPC) approach, and it is shown that MEgPC can be more accurate than Monte Carlo or importance sampling methods for high quantile estimation for input dimensions roughly below N=8, a limit that is very much case- and α-dependent.
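A much-reduced illustration of metamodel-based quantile estimation follows: a one-dimensional Hermite polynomial surrogate is fitted by least squares in a standard normal input, and a high quantile is then estimated by cheap Monte Carlo sampling of the surrogate. The multi-element construction, sparse quadratures and local refinements of the paper are not reproduced.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Reduced illustration of metamodel-based quantile estimation: build a 1-D
# Hermite (probabilists') polynomial surrogate of an expensive model in a
# standard normal input by least squares, then estimate a high quantile from
# cheap Monte Carlo samples of the surrogate. The toy model and sizes below
# are assumptions; the paper's multi-element machinery is omitted.

def expensive_model(x):
    return np.exp(0.4 * x) + 0.1 * x**2          # stand-in for the exact numerical model

rng = np.random.default_rng(6)
x_train = rng.standard_normal(60)                # a modest number of "model runs"
y_train = expensive_model(x_train)

coeffs = He.hermefit(x_train, y_train, deg=6)    # global gPC-like surrogate

x_mc = rng.standard_normal(2_000_000)            # essentially free surrogate evaluations
y_mc = He.hermeval(x_mc, coeffs)

alpha = 0.999
print(f"surrogate {alpha:.3f}-quantile : {np.quantile(y_mc, alpha):.4f}")
print(f"reference {alpha:.3f}-quantile : {np.quantile(expensive_model(x_mc), alpha):.4f}")
```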
Design and Error Analysis of a Vehicular AR System with Auto-Harmonization.
Foxlin, Eric; Calloway, Thomas; Zhang, Hongsheng
2015-12-01
This paper describes the design, development and testing of an AR system that was developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy and precision alignment and calibration of all subsystems in order to avoid mis-registration and "swim". The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. Tracker accuracy is presented with simulation results to predict the registration accuracy. A car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving. Finally, a detailed covariance analysis of AR registration error is derived.
High-accuracy mass spectrometry for fundamental studies.
Kluge, H-Jürgen
2010-01-01
Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.
Accuracy of surgical wound drainage measurements: an analysis and comparison.
Yue, Brian; Nizzero, Danielle; Zhang, Chunxiao; van Zyl, Natasha; Ting, Jeannette
2015-05-01
Surgical drain tube readings can influence the clinical management of the post-operative patient. The accuracy of these readings has not been documented in the current literature, and this experimental study aims to address this paucity. Aliquots (10, 25, 40 and 90 mL) of black tea solution prepared to mimic haemoserous fluid were injected into UnoVac, RedoVac and Jackson-Pratt drain tubes. Nursing and medical staff from a tertiary hospital were asked to estimate drain volumes by direct observation; analysis of variance was performed on the results and the significance level was set at 0.05. Doctors and nurses are equally accurate in estimating drain tube volumes. Jackson-Pratt systems were found to be the most accurate for intermediate volumes of 25 and 40 mL. For extremes of volume (both high and low), all drainage systems were inaccurate. This study suggests that for intermediate volumes (25 and 40 mL), Jackson-Pratt is the drainage system of choice. The accuracy of volume measurement is diminished at the extremes of drain volume; emptying of drainage systems is recommended to avoid overfilling. © 2014 Royal Australasian College of Surgeons.
Ma, Zhiyuan; Luo, Guangchun; Qin, Ke; Wang, Nan; Niu, Weina
2018-03-01
Sensor drift is a common issue in E-Nose systems and various drift compensation methods have received fruitful results in recent years. Although the accuracy for recognizing diverse gases under drift conditions has been largely enhanced, few of these methods considered online processing scenarios. In this paper, we focus on building online drift compensation model by transforming two domain adaptation based methods into their online learning versions, which allow the recognition models to adapt to the changes of sensor responses in a time-efficient manner without losing the high accuracy. Experimental results using three different settings confirm that the proposed methods save large processing time when compared with their offline versions, and outperform other drift compensation methods in recognition accuracy.
Facilitating Co-Design for Extreme-Scale Systems Through Lightweight Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Engelmann, Christian; Lauer, Frank
This work focuses on tools for investigating algorithm performance at extreme scale with millions of concurrent threads and for evaluating the impact of future architecture choices to facilitate the co-design of high-performance computing (HPC) architectures and applications. The approach focuses on lightweight simulation of extreme-scale HPC systems with the needed amount of accuracy. The prototype presented in this paper is able to provide this capability using a parallel discrete event simulation (PDES), such that a Message Passing Interface (MPI) application can be executed at extreme scale, and its performance properties can be evaluated. The results of an initial prototype are encouraging, as a simple 'hello world' MPI program could be scaled up to 1,048,576 virtual MPI processes on a four-node cluster, and the performance properties of two MPI programs could be evaluated at up to 16,384 virtual MPI processes on the same system.
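For context, the kind of MPI program whose execution was simulated at over a million virtual ranks is as small as the sketch below; it is shown with mpi4py rather than C, which is an assumption about the binding, not a detail from the report.

```python
from mpi4py import MPI

# The sort of trivial MPI program that the simulator executed with up to
# 1,048,576 virtual MPI processes on a four-node cluster. Shown with mpi4py
# for brevity; the original work does not specify this binding.

comm = MPI.COMM_WORLD
print(f"hello world from rank {comm.Get_rank()} of {comm.Get_size()}")
```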
Laser interferometric high-precision geometry (angle and length) monitor for JASMINE
NASA Astrophysics Data System (ADS)
Niwa, Y.; Arai, K.; Ueda, A.; Sakagami, M.; Gouda, N.; Kobayashi, Y.; Yamada, Y.; Yano, T.
2008-07-01
The telescope geometry of JASMINE should be stabilized and monitored with an accuracy of about 10 to 100 pm or 10 to 100 prad rms over about 10 hours. For this purpose, a high-precision interferometric laser metrology system is employed. Useful techniques for measuring displacements on extremely small scales are the wave-front sensing method and the heterodyne interferometric method. Experiments for verification of the measurement principles are well advanced.
A new ultra-high-accuracy angle generator: current status and future direction
NASA Astrophysics Data System (ADS)
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that, upon scaling to a full prototype and including additional calibration techniques, we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.
Joshi, Nikita; Lira, Alena; Mehta, Ninfa; Paladino, Lorenzo; Sinert, Richard
2013-01-01
Understanding how history, physical examination, and ultrasonography (US) perform in diagnosing extremity fractures compared with radiography has potential benefits of decreasing radiation exposure, costs, and pain and improving emergency department (ED) resource management and triage time. The authors performed two electronic searches using the PubMed and EMBASE databases for studies published between 1965 and 2012: one based on the inclusion of any patient presenting with extremity injuries suspicious for fracture who underwent history and physical examination, and a separate search for US performed by an emergency physician (EP) with subsequent radiography. The primary outcome was the operating characteristics of ED history, physical examination, and US in diagnosing radiologically proven extremity fractures. The methodologic quality of the studies was assessed using the Quality Assessment of Diagnostic Accuracy Studies tool (QUADAS-2). Nine studies met the inclusion criteria for history and physical examination, while eight studies met the inclusion criteria for US. There was significant heterogeneity in the studies that prevented data pooling. Data were organized into subgroups based on anatomic fracture locations, but heterogeneity within the subgroups also prevented data pooling. The prevalence of fracture varied among the studies from 22% to 70%. Upper extremity physical examination tests have positive likelihood ratios (LRs) ranging from 1.2 to infinity and negative LRs ranging from 0 to 0.8. US sensitivities varied between 85% and 100%, specificities varied between 73% and 100%, positive LRs varied between 3.2 and 56.1, and negative LRs varied between 0 and 0.2. Compared with radiography, EP US is an accurate diagnostic test to rule in or rule out extremity fractures. The diagnostic accuracy for history and physical examination is inconclusive. Future research is needed to understand the accuracy of ED US when combined with history and physical examination for upper and lower extremity fractures. © 2013 by the Society for Academic Emergency Medicine.
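As a reminder of how the operating characteristics quoted throughout these abstracts relate to one another, the short helper below turns a 2x2 table of true/false positives and negatives into sensitivity, specificity, predictive values, and likelihood ratios; the example counts are invented, not taken from any of the reviewed studies.

```python
# Convert a 2x2 diagnostic table into the usual test-performance measures.
def diagnostic_stats(tp, fp, fn, tn):
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec) if spec < 1 else float("inf")  # LR+ is infinite when spec = 1
    lr_neg = (1 - sens) / spec
    return dict(sensitivity=sens, specificity=spec, PPV=ppv, NPV=npv,
                LR_plus=lr_pos, LR_minus=lr_neg)

# Hypothetical counts for illustration only.
print(diagnostic_stats(tp=85, fp=10, fn=5, tn=100))
```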
Predictive Modeling of Risk Associated with Temperature Extremes over Continental US
NASA Astrophysics Data System (ADS)
Kravtsov, S.; Roebber, P.; Brazauskas, V.
2016-12-01
We build a highly accurate, essentially bias-free empirical emulator of atmospheric surface temperature and apply it to meteorological risk assessment over the domain of the continental US. The resulting prediction scheme achieves an order-of-magnitude or larger gain in numerical efficiency compared with schemes based on high-resolution dynamical atmospheric models, leading to unprecedented accuracy of the estimated risk distributions. The empirical model construction methodology is based on our earlier work, but is further modified to account for the influence of large-scale, global climate change on regional US weather and climate. The resulting estimates of the time-dependent, spatially extended probability of temperature extremes over the simulation period can be used as a risk management tool by insurance companies and regulatory governmental agencies.
Kim, Jin Woo; Choo, Ki Seok; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yeom, Jeong A; Jeong, Hee Seok; Choi, Yoon Young; Nam, Kyung Jin; Kim, Chang Won; Jeong, Dong Wook; Lim, Soo Jin
2016-07-01
Multi-detector computed tomography (MDCT) angiography is now used for diagnosing patients with peripheral arterial disease. The radiation dose depends on several factors, such as tube current, tube voltage, and helical pitch. The aim was to assess the diagnostic performance and radiation dose of lower extremity CT angiography (CTA) using a 128-slice dual-source CT at 80 kVp and high pitch in patients with critical limb ischemia (CLI). Twenty-eight patients (mean age, 64.1 years; range, 39-80 years) with CLI were enrolled in this retrospective study and underwent CTA using a 128-slice dual-source CT at 80 kVp and high pitch and subsequent intra-arterial digital subtraction angiography (DSA), which was used as the reference standard for assessing diagnostic performance. For arterial segments with significant disease (>50% stenosis), the overall sensitivity, specificity, and accuracy of lower extremity CTA were 94.8% (95% CI, 91.7-98.0%), 91.5% (95% CI, 87.7-95.2%), and 93.1% (95% CI, 90.6-95.6%), respectively, and its positive and negative predictive values were 91.0% (95% CI, 87.1-95.0%) and 95.1% (95% CI, 92.1-98.1%), respectively. The mean radiation dose delivered to the lower extremities was 266.6 mGy·cm. Lower extremity CTA using a 128-slice dual-source CT at 80 kVp and high pitch was found to have good diagnostic performance for the assessment of patients with CLI using an extremely low radiation dose. © The Foundation Acta Radiologica 2015.
NASA Astrophysics Data System (ADS)
Montereale Gavazzi, G.; Madricardo, F.; Janowski, L.; Kruss, A.; Blondel, P.; Sigovini, M.; Foglini, F.
2016-03-01
Recent technological developments of multibeam echosounder systems (MBES) allow mapping of benthic habitats with unprecedented detail. MBES can now be employed in extremely shallow waters, challenging data acquisition (as these instruments were often designed for deeper waters) and data interpretation (honed on datasets with resolution sometimes orders of magnitude lower). With extremely high-resolution bathymetry and co-located backscatter data, it is now possible to map the spatial distribution of fine-scale benthic habitats, even identifying the acoustic signatures of single sponges. In this context, it is necessary to understand which of the commonly used segmentation methods is best suited to account for such a level of detail. At the same time, new sampling protocols for precisely geo-referenced ground truth data need to be developed to validate the benthic environmental classification. This study focuses on a dataset collected in a shallow (2-10 m deep) tidal channel of the Lagoon of Venice, Italy. Using 0.05-m and 0.2-m raster grids, we compared a range of classifications, both pixel-based and object-based approaches, including manual, Maximum Likelihood Classifier, Jenks Optimization clustering, textural analysis and Object Based Image Analysis. Through a comprehensive and accurately geo-referenced ground truth dataset, we were able to identify five different classes of substrate composition, including sponges, mixed submerged aquatic vegetation, mixed detritic bottom (fine and coarse) and unconsolidated bare sediment. We computed estimates of accuracy (namely Overall, User and Producer Accuracies and the Kappa statistic) by cross-tabulating predicted and reference instances. Overall, pixel-based segmentations produced the highest accuracies, and the accuracy assessment is strongly dependent on the number of classes chosen for the thematic output. Tidal channels in the Venice Lagoon are extremely important in terms of habitats and sediment distribution, particularly within the context of the new tidal barrier being built. However, they had remained largely unexplored until now, because of the surveying challenges. The application of this remote sensing approach, combined with targeted sampling, opens a new perspective in the monitoring of benthic habitats in view of a knowledge-based management of natural resources in shallow coastal areas.
First-Order SPICE Modeling of Extreme-Temperature 4H-SiC JFET Integrated Circuits
NASA Technical Reports Server (NTRS)
Neudeck, Philip G.; Spry, David J.; Chen, Liang-Yu
2016-01-01
A separate submission to this conference reports that 4H-SiC Junction Field Effect Transistor (JFET) digital and analog Integrated Circuits (ICs) with two levels of metal interconnect have reproducibly demonstrated electrical operation at 500 C for more than 1000 hours. While this progress expands the complexity and durability envelope of high-temperature ICs, one important area for further technology maturation is the development of reasonably accurate and accessible computer-aided modeling and simulation tools for circuit design of these ICs. Towards this end, we report on the development and verification of 25 C to 500 C SPICE simulation models of first-order accuracy for this extreme-temperature-durable 4H-SiC JFET IC technology. For maximum availability, the JFET IC modeling is implemented using the baseline-version SPICE NMOS LEVEL 1 model that is common to other variations of SPICE software and, importantly, includes the body-bias effect. The first-order accuracy of these device models is verified by direct comparison with measured experimental device characteristics.
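For orientation, a first-order (Shichman-Hodges style, the equation family behind the SPICE LEVEL 1 model mentioned above) DC drain-current calculation can be written in a few lines. The parameter values below are placeholders, not the fitted 500 C 4H-SiC JFET values, and the simplified equations are only a sketch of the model family.

```python
import math

def level1_id(vgs, vds, vsb, vt0=-9.0, kp=2e-4, w_over_l=10.0,
              gamma=0.5, phi=0.7, lam=0.01):
    # Threshold voltage shifted by the body-bias (back-gate) effect.
    vth = vt0 + gamma * (math.sqrt(2 * phi + max(vsb, 0.0)) - math.sqrt(2 * phi))
    vov = vgs - vth
    if vov <= 0:                                              # cutoff
        return 0.0
    if vds < vov:                                             # triode region
        return kp * w_over_l * (vov * vds - 0.5 * vds ** 2) * (1 + lam * vds)
    return 0.5 * kp * w_over_l * vov ** 2 * (1 + lam * vds)   # saturation

# Drain current of a hypothetical depletion-mode device at Vgs = -2 V, Vds = 10 V.
print(level1_id(vgs=-2.0, vds=10.0, vsb=0.0))
```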
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Han-miao, E-mail: chenghanmiao@hust.edu.cn; Li, Hong-bin, E-mail: lihongbin@hust.edu.cn; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074
The existing electronic transformer calibration systems employing data acquisition cards cannot satisfy some practical applications, because the calibration systems have phase measurement errors when they work in the mode of receiving external synchronization signals. This paper proposes an improved calibration system scheme with phase correction to improve the phase measurement accuracy. We employ an NI PCI-4474 to design a calibration system, and the system has the potential to receive external synchronization signals and reach extremely high accuracy classes. Accuracy verification has been carried out at the China Electric Power Research Institute, and the results demonstrate that the system surpasses accuracy class 0.05. Furthermore, this system has been used to test the harmonics measurement accuracy of all-fiber optical current transformers. In the same process, we have used an existing calibration system, and a comparison of the test results is presented. The system after improvement is suitable for the intended applications.
NASA Technical Reports Server (NTRS)
Gramling, C. J.; Long, A. C.; Lee, T.; Ottenstein, N. A.; Samii, M. V.
1991-01-01
A Tracking and Data Relay Satellite System (TDRSS) Onboard Navigation System (TONS) is currently being developed by NASA to provide a high-accuracy autonomous navigation capability for users of TDRSS and its successor, the Advanced TDRSS (ATDRSS). The fully autonomous user onboard navigation system will support orbit determination, time determination, and frequency determination, based on observation of a continuously available, unscheduled navigation beacon signal. A TONS experiment will be performed in conjunction with the Explorer Platform (EP) Extreme Ultraviolet Explorer (EUVE) mission to flight-qualify TONS Block 1. An overview is presented of TONS and a preliminary analysis of the navigation accuracy anticipated for the TONS experiment. Descriptions of the TONS experiment and the associated navigation objectives, as well as a description of the onboard navigation algorithms, are provided. The accuracy of the selected algorithms is evaluated based on the processing of realistic simulated TDRSS one-way forward-link Doppler measurements. The analysis process is discussed and the associated navigation accuracy results are presented.
Multi-analyte analysis of saliva biomarkers as predictors of periodontal and pre-implant disease
DOE Office of Scientific and Technical Information (OSTI.GOV)
Braun, Thomas; Giannobile, William V; Herr, Amy E
The present invention relates to methods of measuring biomarkers to determine the probability of a periodontal and/or peri-implant disease. More specifically, the invention provides a panel of biomarkers that, when used in combination, can allow determination of the probability of a periodontal and/or peri-implant disease state with extremely high accuracy.
Performance of Improved High-Order Filter Schemes for Turbulent Flows with Shocks
NASA Technical Reports Server (NTRS)
Kotov, Dmitry Vladimirovich; Yee, Helen M C.
2013-01-01
The performance of the filter scheme with improved control of the dissipation parameter has been demonstrated for different flow types. The scheme with a locally determined parameter is shown to obtain more accurate results than its counterparts with a global or constant value. At the same time, no additional tuning is needed to achieve high accuracy of the method when the local technique is used. However, further improvement of the method might be needed for even more complex and/or extreme flows.
Luo, Guangchun; Qin, Ke; Wang, Nan; Niu, Weina
2018-01-01
Sensor drift is a common issue in E-Nose systems, and various drift compensation methods have yielded promising results in recent years. Although the accuracy of recognizing diverse gases under drift conditions has been largely enhanced, few of these methods consider online processing scenarios. In this paper, we focus on building an online drift compensation model by transforming two domain-adaptation-based methods into their online learning versions, which allows the recognition models to adapt to changes in sensor responses in a time-efficient manner without losing high accuracy. Experimental results using three different settings confirm that the proposed methods save substantial processing time compared with their offline versions, and outperform other drift compensation methods in recognition accuracy. PMID:29494543
An extraordinary directive radiation based on optical antimatter at near infrared.
Mocella, Vito; Dardano, Principia; Rendina, Ivo; Cabrini, Stefano
2010-11-22
In this paper we discuss and experimentally demonstrate that, in a quasi-zero-average-refractive-index (QZAI) metamaterial, the light scattered out from a divergent source in the near infrared (λ = 1.55 μm) is extremely directive (Δθ(out) = 0.06°), coupling with a diffraction order of the alternating complementary media grating. The measurements also prove, with a high degree of accuracy, the excellent vertical confinement of the beam even in the air region of the metamaterial, in the absence of any simple vertical confinement mechanism. This extremely sensitive device works over a large contact area and opens new perspectives for integrated spectroscopy.
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-10-01
In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
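A minimal sketch of the peaks-over-threshold step described above: exceedances over a high (here fixed, not seasonally moving) threshold are fitted with a Generalized Pareto Distribution using SciPy; the synthetic series stands in for the Arosa total ozone record.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
ozone = rng.normal(330, 40, 20000)                 # synthetic daily total ozone [DU]

threshold = np.percentile(ozone, 95)               # high-ozone extremes (EHOs)
excess = ozone[ozone > threshold] - threshold

shape, loc, scale = stats.genpareto.fit(excess, floc=0)   # GPD fit to the exceedances
ret_level = threshold + stats.genpareto.ppf(0.99, shape, loc=0, scale=scale)
print(f"GPD shape={shape:.3f}, scale={scale:.1f}, 99% exceedance level={ret_level:.1f} DU")
```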
Extreme events in total ozone over Arosa - Part 1: Application of extreme value theory
NASA Astrophysics Data System (ADS)
Rieder, H. E.; Staehelin, J.; Maeder, J. A.; Peter, T.; Ribatet, M.; Davison, A. C.; Stübi, R.; Weihs, P.; Holawe, F.
2010-05-01
In this study ideas from extreme value theory are for the first time applied in the field of stratospheric ozone research, because statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not adequately address the structure of the extremes. We show that statistical extreme value methods are appropriate to identify ozone extremes and to describe the tails of the Arosa (Switzerland) total ozone time series. In order to accommodate the seasonal cycle in total ozone, a daily moving threshold was determined and used, with tools from extreme value theory, to analyse the frequency of days with extreme low (termed ELOs) and high (termed EHOs) total ozone at Arosa. The analysis shows that the Generalized Pareto Distribution (GPD) provides an appropriate model for the frequency distribution of total ozone above or below a mathematically well-defined threshold, thus providing a statistical description of ELOs and EHOs. The results show an increase in ELOs and a decrease in EHOs during the last decades. The fitted model represents the tails of the total ozone data set with high accuracy over the entire range (including absolute monthly minima and maxima), and enables a precise computation of the frequency distribution of ozone mini-holes (using constant thresholds). Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight into the time series properties. Fingerprints of dynamical (e.g. ENSO, NAO) and chemical features (e.g. strong polar vortex ozone loss), and major volcanic eruptions, can be identified in the observed frequency of extreme events throughout the time series. Overall the new approach to analysis of extremes provides more information on time series properties and variability than previous approaches that use only monthly averages and/or mini-holes and mini-highs.
NASA Astrophysics Data System (ADS)
Tiebin, Wu; Yunlian, Liu; Xinjun, Li; Yi, Yu; Bin, Zhang
2018-06-01
To address the difficulty of quality prediction for sintered ore, a hybrid prediction model is established based on mechanism models of sintering and time-weighted error compensation using the extreme learning machine (ELM). First, mechanism models of drum index, total iron, and alkalinity are constructed according to the chemical reaction mechanisms and conservation of matter in the sintering process. As the process is simplified in the mechanism models, these models are unable to describe the high nonlinearity, so errors are inevitable. For this reason, a time-weighted ELM-based error compensation model is established. Simulation results verify that the hybrid model has high accuracy and can meet the requirements of industrial applications.
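A minimal single-hidden-layer ELM sketch, assuming random input weights and a closed-form ridge-regularized solve for the output weights. The optional exponential sample weighting indicated below is only a stand-in for the paper's time-weighted error compensation, not its actual formulation.

```python
import numpy as np

def elm_train(X, y, n_hidden=50, reg=1e-3, decay=None, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))        # random hidden-layer weights
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                             # hidden-layer activations
    if decay is not None:                              # optional time weighting (assumption)
        sw = np.sqrt(decay ** np.arange(len(y))[::-1]) # newer samples weigh more
        H, y = H * sw[:, None], y * sw
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: learn a smooth error-compensation term from synthetic residuals.
X = np.random.default_rng(1).uniform(-1, 1, size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_train(X, y, decay=0.99)
print(np.abs(elm_predict(X, W, b, beta) - y).mean())
```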
NASA Technical Reports Server (NTRS)
Kranz, David William
2010-01-01
The goal of this research project was to compare and contrast the selected materials used in step measurements during pre-fits of thermal protection system tiles and to compare and contrast the accuracy of measurements made using these selected materials. The reason for conducting this test was to obtain a clearer understanding of which of these materials may yield the highest accuracy in comparison to the completed tile bond. These results in turn will be presented to United Space Alliance and Boeing North America for their own analysis and determination. Aerospace structures operate under extreme thermal environments. Hot external aerothermal environments in high Mach number flights lead to high structural temperatures. The differences in height from one tile to another are very critical during these high Mach reentries. The Space Shuttle Thermal Protection System is a very delicate and highly calculated system. The thermal tiles on the ship are measured to within an accuracy of 0.001 of an inch. The accuracy of these tile measurements is critical to a successful reentry of an orbiter. This is why it is necessary to find the most accurate method for measuring the height of each tile in comparison to each of the other tiles. The test results indicated that there were indeed differences among the selected materials used in step measurements during pre-fits of Thermal Protection System tiles, and that Bees' Wax yielded a higher rate of accuracy when compared to the baseline test. In addition, testing for the effect of experience level on accuracy yielded no evidence of a difference. Lastly, the use of the Trammel tool rather than the Shim Pack yielded variable differences in those tests.
Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom
2016-01-01
The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex. PMID:27500640
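A sketch of the feature-selection step named above (SVM-based recursive feature elimination), wired to an ordinary linear SVM rather than the study's H-ELM classifier. The data shapes mimic the 159-subject setting, but the values, feature count, and selected-feature count are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(159, 300))          # stand-in for cortical features (159 subjects)
y = rng.integers(0, 3, size=159)         # TDC / ADHD-I / ADHD-C labels (synthetic)

# RFE repeatedly drops the features with the smallest linear-SVM weights.
selector = RFE(SVC(kernel="linear"), n_features_to_select=30, step=10)
model = make_pipeline(selector, SVC(kernel="linear"))
print(cross_val_score(model, X, y, cv=5).mean())
```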
Qureshi, Muhammad Naveed Iqbal; Min, Beomjun; Jo, Hang Joon; Lee, Boreom
2016-01-01
The classification of neuroimaging data for the diagnosis of certain brain diseases is one of the main research goals of the neuroscience and clinical communities. In this study, we performed multiclass classification using a hierarchical extreme learning machine (H-ELM) classifier. We compared the performance of this classifier with that of a support vector machine (SVM) and basic extreme learning machine (ELM) for cortical MRI data from attention deficit/hyperactivity disorder (ADHD) patients. We used 159 structural MRI images of children from the publicly available ADHD-200 MRI dataset. The data consisted of three types, namely, typically developing (TDC), ADHD-inattentive (ADHD-I), and ADHD-combined (ADHD-C). We carried out feature selection by using standard SVM-based recursive feature elimination (RFE-SVM) that enabled us to achieve good classification accuracy (60.78%). In this study, we found the RFE-SVM feature selection approach in combination with H-ELM to effectively enable the acquisition of high multiclass classification accuracy rates for structural neuroimaging data. In addition, we found that the most important features for classification were the surface area of the superior frontal lobe, and the cortical thickness, volume, and mean surface area of the whole cortex.
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
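A toy illustration of the iterative-consensus idea: at each position, if the aligned read bases strongly disagree with the reference, the reference base is corrected and the pass repeats. Real iCORN re-aligns the reads with a mapper after every iteration and also evaluates accuracy; here the per-position "pileup" is simply supplied, so this is only a sketch of the correction step, with thresholds chosen arbitrarily.

```python
from collections import Counter

def correct_reference(ref, pileups, min_cov=5, min_frac=0.8, max_iter=3):
    ref = list(ref)
    for _ in range(max_iter):
        changed = False
        for i, bases in enumerate(pileups):        # read bases aligned over position i
            if len(bases) < min_cov:
                continue
            base, count = Counter(bases).most_common(1)[0]
            if base != ref[i] and count / len(bases) >= min_frac:
                ref[i] = base                      # accept the consensus correction
                changed = True
        if not changed:                            # stop when a pass makes no changes
            break
    return "".join(ref)

# Position 3 ('T') is corrected to 'A' given 6/6 read support.
print(correct_reference("ACGTT", ["AAAAA", "CCCCC", "GGGGG", "AAAAAA", "TTTTT"]))
```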
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-01-01
Traffic speed meters are important legal measuring instruments used specifically for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirements on its mounting distance, has no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument. PMID:29621142
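A sketch of how two Doppler measurements can remove the unknown installation angle. The assumed geometry (two antenna beams separated by a known angle delta, each measuring the radial speed v·cos(beam angle)) is an illustrative assumption, not necessarily the paper's actual antenna arrangement.

```python
import math

def true_speed(v1, v2, delta):
    # v1 = v*cos(theta), v2 = v*cos(theta + delta); solve the two equations for v and theta.
    theta = math.atan((math.cos(delta) - v2 / v1) / math.sin(delta))
    return v1 / math.cos(theta), math.degrees(theta)

# Simulated check: a vehicle at 27.8 m/s (~100 km/h), beams at 30 deg and 42 deg.
v, delta = 27.8, math.radians(12.0)
v1, v2 = v * math.cos(math.radians(30)), v * math.cos(math.radians(42))
print(true_speed(v1, v2, delta))   # recovers ~27.8 m/s and the ~30 deg installation angle
```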
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-04-05
Traffic speed meters are important legal measuring instruments used specifically for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specific requirements on its mounting distance, has no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument.
Cost-effective accurate coarse-grid method for highly convective multidimensional unsteady flows
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Niknafs, H. S.
1991-01-01
A fundamentally multidimensional convection scheme is described based on vector transient interpolation modeling rewritten in conservative control-volume form. Vector third-order upwinding is used as the basis of the algorithm; this automatically introduces important cross-difference terms that are absent from schemes using component-wise one-dimensional formulas. Third-order phase accuracy is good; this is important for coarse-grid large-eddy or full simulation. Potential overshoots or undershoots are avoided by using a recently developed universal limiter. Higher order accuracy is obtained locally, where needed, by the cost-effective strategy of adaptive stencil expansion in a direction normal to each control-volume face; this is controlled by monitoring the absolute normal gradient and curvature across the face. Higher (than third) order cross-terms do not appear to be needed. Since the wider stencil is used only in isolated narrow regions (near discontinuities), extremely high (in this case, seventh) order accuracy can be achieved for little more than the cost of a globally third-order scheme.
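A greatly simplified one-dimensional illustration of the ingredients described above: a third-order upwind-biased (QUICK-type) face-value interpolation, with a crude monotonicity bound standing in for the universal limiter. The actual scheme is multidimensional and conservative; this sketch only shows why upwind-biased third-order interpolation plus limiting avoids overshoots, and assumes a uniform grid and positive advection velocity.

```python
import numpy as np

def face_values(phi):
    # Upwind, centre, and downwind cell values for each interior face.
    phi_u, phi_c, phi_d = phi[:-3], phi[1:-2], phi[2:-1]
    face = 0.375 * phi_d + 0.75 * phi_c - 0.125 * phi_u   # QUICK-type interpolation
    lo, hi = np.minimum(phi_c, phi_d), np.maximum(phi_c, phi_d)
    return np.clip(face, lo, hi)                           # crude monotone bound

phi = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])             # step profile
print(face_values(phi))   # no over/undershoots at the discontinuity
```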
Lu, Huijuan; Wei, Shasha; Zhou, Zili; Miao, Yanzi; Lu, Yi
2015-01-01
The main purpose of traditional classification algorithms in bioinformatics applications is to achieve better classification accuracy. However, these algorithms cannot meet the requirement of minimising the average misclassification cost. In this paper, a new cost-sensitive regularised extreme learning machine (CS-RELM) algorithm is proposed, which uses probability estimation and misclassification costs to reconstruct the classification results. By improving the classification accuracy for small-sample classes with higher misclassification cost, the new CS-RELM can minimise the classification cost. A 'rejection cost' was integrated into the CS-RELM algorithm to further reduce the average misclassification cost. Using the Colon Tumour dataset and the SRBCT (Small Round Blue Cells Tumour) dataset, CS-RELM was compared with other cost-sensitive algorithms such as the extreme learning machine (ELM), cost-sensitive extreme learning machine, regularised extreme learning machine, and cost-sensitive support vector machine (SVM). The results of the experiments show that CS-RELM with embedded rejection cost reduces the average misclassification cost and makes more credible classification decisions than the others.
Final Report: Ionization chemistry of high temperature molecular fluids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fried, L E
2007-02-26
With the advent of coupled chemical/hydrodynamic reactive flow models for high explosives, understanding detonation chemistry is of increasing importance to DNT. The accuracy of first-principles detonation codes, such as CHEETAH, is dependent on an accurate representation of the species present under detonation conditions. Ionic species and non-molecular phases are not currently included in coupled chemistry/hydrodynamic simulations. This LDRD will determine the prevalence of such species during high explosive detonations by carrying out experimental and computational investigations of common detonation products under extreme conditions. We are studying the phase diagram of detonation products such as H2O or NH3 and mixtures under conditions of extreme pressure (P > 1 GPa) and temperature (T > 1000 K). Under these conditions, the neutral molecular form of matter transforms to a phase dominated by ions. The phase boundaries of such a region are unknown.
Development of laser interferometric high-precision geometry monitor for JASMINE
NASA Astrophysics Data System (ADS)
Niwa, Yoshito; Arai, Koji; Ueda, Akitoshi; Sakagami, Masaaki; Gouda, Naoteru; Kobayashi, Yukiyasu; Yamada, Yoshiyuki; Yano, Taihei
2008-07-01
The telescope geometry of JASMINE should be stabilized and monitored with an accuracy of about 10 to 100 picometers or 10 to 100 picoradians root-mean-square over about 10 hours. For this purpose, a high-precision interferometric laser metrology system is employed. One useful technique for measuring displacements on extremely small scales is the heterodyne interferometric method. An experiment to verify multi-degree-of-freedom measurement was performed, and mirror motions were successfully monitored in three degrees of freedom.
Jöres, A P W; Heverhagen, J T; Bonél, H; Exadaktylos, A; Klink, T
2016-02-01
The purpose of this study was to evaluate the diagnostic accuracy of full-body linear X-ray scanning (LS) in multiple trauma patients in comparison to 128-multislice computed tomography (MSCT). 106 multiple trauma patients (female: 33; male: 73) were retrospectively included in this study. All patients underwent LS of the whole body, including extremities, and MSCT covering the neck, thorax, abdomen, and pelvis. The diagnostic accuracy of LS for the detection of fractures of the truncal skeleton and pneumothoraces was evaluated in comparison to MSCT by two observers in consensus. Extremity fractures detected by LS were documented. The overall sensitivity of LS was 49.2 %, the specificity was 93.3 %, the positive predictive value was 91 %, and the negative predictive value was 57.5 %. The overall sensitivity for vertebral fractures was 16.7 %, and the specificity was 100 %. The sensitivity was 48.7 % and the specificity 98.2 % for all other fractures. Pneumothoraces were detected in 12 patients by CT, but not by LS. 40 extremity fractures were detected by LS, of which 4 fractures were dislocated, and 2 were fully covered by MSCT. The diagnostic accuracy of LS is limited in the evaluation of acute trauma of the truncal skeleton. LS allows fast whole-body X-ray imaging, and may be valuable for detecting extremity fractures in trauma patients in addition to MSCT. The overall sensitivity of LS for truncal skeleton injuries in multiple-trauma patients was < 50 %. The diagnostic reference standard MSCT is the preferred and reliable imaging modality. LS may be valuable for quick detection of extremity fractures. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Astrophysics Data System (ADS)
Yin, Yixing; Chen, Haishan; Xu, Chong-Yu; Xu, Wucheng; Chen, Changchun; Sun, Shanlei
2016-05-01
Regionalization methods, which "trade space for time" by pooling information from different locations in the frequency analysis, are efficient tools to enhance the reliability of extreme quantile estimates. This paper aims at improving the understanding of the regional frequency of extreme precipitation by using regionalization methods, and providing scientific background and practical assistance in formulating regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region. To achieve these goals, the L-moment-based index-flood (LMIF) method, one of the most popular regionalization methods, is used in the regional frequency analysis of extreme precipitation, with special attention paid to inter-site dependence and its influence on the accuracy of quantile estimates, which has not been considered by most studies using the LMIF method. Extensive data screening for stationarity, serial dependence, and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogeneity analysis. Based on goodness-of-fit statistics and L-moment ratio diagrams, the generalized extreme-value (GEV) and generalized normal (GNO) distributions were identified as the best-fitting distributions for most of the sub-regions, and estimated quantiles for each region were obtained. Monte Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root-mean-square errors (RMSEs) were larger and the 90% error bounds were wider with inter-site dependence than without it, for both the regional growth curve and the quantile curve. The spatial patterns of extreme precipitation with a return period of 100 years were finally obtained, indicating that there are two regions with the highest precipitation extremes and a large region with low precipitation extremes. However, the regions with low precipitation extremes are among the most developed and densely populated regions of the country, and floods would cause great loss of human life and property damage due to the high vulnerability. The study methods and procedure demonstrated in this paper provide a useful reference for frequency analysis of precipitation extremes in large regions, and the findings will be beneficial for flood control and management in the study area.
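A sketch of the at-site quantile step behind such an analysis: fit a GEV distribution to annual maxima and read off the 100-year quantile. The actual study estimates parameters with L-moments and pools standardized data across homogeneous regions; the synthetic maxima below only illustrate the quantile calculation with SciPy's maximum-likelihood fit.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic annual daily-precipitation maxima [mm]; parameters are arbitrary.
annual_max = stats.genextreme.rvs(c=-0.1, loc=80, scale=25, size=60, random_state=rng)

c, loc, scale = stats.genextreme.fit(annual_max)
q100 = stats.genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)   # 100-year event
print(f"estimated 100-year daily precipitation: {q100:.1f} mm")
```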
An Improved Iris Recognition Algorithm Based on Hybrid Feature and ELM
NASA Astrophysics Data System (ADS)
Wang, Juan
2018-03-01
Iris images are easily polluted by noise and uneven illumination. This paper proposes an improved extreme learning machine (ELM) based iris recognition algorithm with hybrid features. 2D Gabor filters and the GLCM are employed to generate a multi-granularity hybrid feature vector: the 2D Gabor filters capture low-to-intermediate-frequency texture information, while the GLCM features capture high-frequency texture information. Finally, we utilize an extreme learning machine for iris recognition. Experimental results reveal that our proposed ELM-based multi-granularity iris recognition algorithm (ELM-MGIR) achieves a higher accuracy of 99.86% and a lower EER of 0.12% while maintaining real-time performance. The proposed ELM-MGIR algorithm outperforms other mainstream iris recognition algorithms.
Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting
2017-01-01
Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, the high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensities from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been observed that the positioning error can be very large in a few cases, which might prevent its use in applications with high-accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning, since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and inertial sensors, among many others) and can therefore provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
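A minimal deterministic-fingerprinting sketch: k-nearest-neighbour regression from received-signal-strength (RSS) vectors to coordinates, plus one crude indicator of a potentially large error (the matched fingerprints being mutually far apart). The radio map, the 10 m spread threshold, and the error indicator are illustrative assumptions, not the paper's analysis.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
rss_map = rng.uniform(-95, -40, size=(500, 8))        # radio map: 500 points, 8 APs [dBm]
xy_map = rng.uniform(0, 50, size=(500, 2))            # corresponding coordinates [m]

knn = KNeighborsRegressor(n_neighbors=3).fit(rss_map, xy_map)

sample = rng.uniform(-95, -40, size=(1, 8))           # an online RSS measurement
estimate = knn.predict(sample)[0]
_, idx = knn.kneighbors(sample)
spread = xy_map[idx[0]].std(axis=0).max()             # dispersion of matched fingerprints [m]
print(estimate, "suspect large error" if spread > 10 else "estimate looks consistent")
```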
Sequencing small genomic targets with high efficiency and extreme accuracy
Schmitt, Michael W.; Fox, Edward J.; Prindle, Marc J.; Reid-Bayliss, Kate S.; True, Lawrence D.; Radich, Jerald P.; Loeb, Lawrence A.
2015-01-01
The detection of minority variants in mixed samples demands methods for enrichment and accurate sequencing of small genomic intervals. We describe an efficient approach based on sequential rounds of hybridization with biotinylated oligonucleotides, enabling more than one-million-fold enrichment of genomic regions of interest. In conjunction with error-correcting double-stranded molecular tags, our approach enables the quantification of mutations in individual DNA molecules. PMID:25849638
A preliminary 6 DOF attitude and translation control system design for Starprobe
NASA Technical Reports Server (NTRS)
Mak, P.; Mettler, E.; Vijayarahgavan, A.
1981-01-01
The extreme thermal environment near perihelion and the high-accuracy gravitational science experiments impose unique design requirements on various subsystems of Starprobe. This paper examines some of these requirements and their impact on the preliminary design of a six-degree-of-freedom attitude and translational control system. Attention is given to design considerations, the baseline attitude/translational control system, system modeling, and simulation studies.
A Novel Extreme Learning Control Framework of Unmanned Surface Vehicles.
Wang, Ning; Sun, Jing-Chao; Er, Meng Joo; Liu, Yan-Cheng
2016-05-01
In this paper, an extreme learning control (ELC) framework using a single-hidden-layer feedforward network (SLFN) with random hidden nodes is proposed for tracking an unmanned surface vehicle subject to unknown dynamics and external disturbances. By combining tracking errors with their derivatives, an error surface and transformed states are defined to encapsulate the unknown dynamics and disturbances into a lumped vector field of the transformed states. The lumped nonlinearity is further identified accurately by an extreme-learning-machine-based SLFN approximator, which requires neither a priori system knowledge nor tuning of input weights. Only the output weights of the SLFN need to be updated, by adaptive projection-based laws derived from the Lyapunov approach. Moreover, an error compensator is incorporated to suppress approximation residuals, thereby contributing to the robustness and global asymptotic stability of the closed-loop ELC system. Simulation studies and comprehensive comparisons demonstrate that the ELC framework achieves high accuracy in both tracking and approximation.
Evidence for Reflected Light from the Most Eccentric Known Exoplanet
NASA Astrophysics Data System (ADS)
Kane, Stephen
2015-12-01
Planets in highly eccentric orbits form a class of objects not seen within our Solar System. The most extreme case known amongst these objects is the planet orbiting HD 20782, with an orbital period of 597 days and an eccentricity of 0.96. Here we present new data and analysis for this system as part of the Transit Ephemeris Refinement and Monitoring Survey (TERMS). New radial velocities acquired during periastron provide incredible accuracy for the planetary orbit and astrometric results that show the companion is indeed planetary in nature. We obtained MOST photometry during a predicted periastron passage that shows evidence of phase variations due to reflected light from the planet. The extreme nature of this planet presents an ideal case from which to test theories regarding the formation of eccentric orbits and the response of atmospheres to extreme changes in flux.
MO-F-CAMPUS-I-01: Accuracy of Radiologists Interpretation of Mammographic Breast Density
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vedantham, S; Shi, L; Karellas, A
2015-06-15
Purpose: Several commercial and non-commercial software packages and techniques are available for determining breast density from mammograms. However, where mandated by law, the breast density information communicated to the subject/patient is based on the radiologist's interpretation of breast density from mammograms. Several studies have reported on the concordance among radiologists in interpreting mammographic breast density. In this work, we investigated the accuracy of radiologists' interpretation of breast density. Methods: Volumetric breast density (VBD) determined from 134 unilateral dedicated breast CT scans from 134 subjects was considered the truth. An MQSA-qualified study radiologist with more than 20 years of breast imaging experience reviewed the DICOM "for presentation" standard 2-view mammograms of the corresponding breasts and assigned BIRADS breast density categories. For statistical analysis, the breast density categories were dichotomized in two ways: fatty vs. dense breasts, where "fatty" corresponds to BIRADS breast density categories A/B and "dense" corresponds to BIRADS breast density categories C/D, and extremely dense vs. fatty to heterogeneously dense breasts, where extremely dense corresponds to BIRADS breast density category D and BIRADS breast density categories A through C were grouped as fatty to heterogeneously dense breasts. Logistic regression models (SAS 9.3) were used to determine the association between the radiologist's interpretation of breast density and VBD from breast CT, from which the area under the ROC curve (AUC) was determined. Results: Both logistic regression models were statistically significant (Likelihood Ratio test, p<0.0001). The accuracy (AUC) of the study radiologist for classification of fatty vs. dense breasts was 88.4% (95% CI: 83-94%) and for classification of extremely dense breasts was 94.3% (95% CI: 90-98%). Conclusion: The accuracy of the radiologist in classifying dense and extremely dense breasts is high. Considering the variability in VBD estimates from commercial software, the breast density information communicated to the patient should be based on the radiologist's interpretation. This work was supported in part by NIH R21 CA176470 and R21 CA134128. The contents are solely the responsibility of the authors and do not reflect the official views of the NIH or NCI.
Cryogenic Behavior of the High Temperature Crystal Oscillator PX-570
NASA Technical Reports Server (NTRS)
Patterson, Richard; Hammoud, Ahmad; Scherer, Steven
2011-01-01
Microprocessors, data-acquisition systems, and electronic controllers usually require timing signals for proper and accurate operation. These signals are, in most cases, provided by circuits that utilize crystal oscillators due to availability, cost, ease of operation, and accuracy. The stability of these oscillators, i.e. the crystal characteristics, is usually governed, among other things, by the ambient temperature. Operation of these devices at extreme temperatures therefore requires the implementation of some temperature-compensation mechanism, either in the manufacturing process of the oscillator part or in the design of the circuit, to maintain stability as well as accuracy. NASA's future missions into deep space and planetary exploration necessitate the operation of electronic instruments and systems in environments where extreme temperatures along with wide-range thermal swings are encountered. Most commercial devices are very limited in terms of their specified operational temperature, while very few custom-made and military-grade parts have the ability to operate in a slightly wider range of temperature. Thus, it becomes mandatory to design and develop circuits that are capable of operating efficiently and reliably under the harsh conditions of space. This report presents the results obtained from the evaluation of a new commercial-off-the-shelf (COTS) crystal oscillator under extreme temperatures. The device selected for evaluation was a 10 MHz, PX-570-series crystal oscillator. This type of device was recently introduced by Vectron International and is designed as a high-temperature oscillator [1]. These parts are fabricated using proprietary manufacturing processes designed specifically for high-temperature and harsh-environment applications [1]. The oscillators have a wide continuous operating temperature range, making them ideal for use in the military and aerospace industries, industrial process control, geophysical fields, avionics, and engine control. They exhibit low jitter and phase noise, consume little power, and are suited for high shock and vibration applications. The unique package design of these crystal oscillators offers a small ceramic package footprint, as well as providing both through-hole mounting and surface mount options.
Matrices pattern using FIB; 'Out-of-the-box' way of thinking.
Fleger, Y; Gotlib-Vainshtein, K; Talyosef, Y
2017-03-01
Focused ion beam (FIB) is an extremely valuable tool in nanopatterning and nanofabrication for potentially high-resolution patterning, especially with regard to He ion beam microscopy. The work presented here demonstrates an 'out-of-the-box' method of writing using FIB, which enables the creation of very large matrices, up to the beam-shift limitation, in short times and with an accuracy unachievable by any other writing technique. The new method allows different shapes to be combined at nanometric dimensions and high resolution over wide ranges. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.
Photonic crystal fiber Fabry-Perot interferometers with high-reflectance internal mirrors
NASA Astrophysics Data System (ADS)
Fan, Rong; Hou, Yuanbin; Sun, Wei
2015-06-01
We demonstrate an in-line micro fiber-optic Fabry-Perot interferometer with an air cavity created by multi-step fusion splicing of a multi-mode photonic crystal fiber (MPCF) to a standard single-mode fiber (SMF). By reshaping the air cavity, the fringe visibility of the interference pattern was increased to 20 dB. Experimental results showed that such a device can be used as a highly sensitive strain sensor with a sensitivity of 4.5 pm/μɛ. Moreover, it offers other outstanding advantages, such as an extremely compact structure, easy fabrication, low cost, and high accuracy.
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, M.; Bowman, B.; Branson, J.
The dominant error source in the force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag in orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out to three days) a dynamically varying high-resolution density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm, which solves for the phases and amplitudes of the diurnal, semidiurnal and terdiurnal variations of thermospheric density in near real time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full-spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
A regressive storm model for extreme space weather
NASA Astrophysics Data System (ADS)
Terkildsen, Michael; Steward, Graham; Neudegg, Dave; Marshall, Richard
2012-07-01
Extreme space weather events, while rare, pose significant risk to society in the form of impacts on critical infrastructure such as power grids, and the disruption of high end technological systems such as satellites and precision navigation and timing systems. There has been an increased focus on modelling the effects of extreme space weather, as well as improving the ability of space weather forecast centres to identify, with sufficient lead time, solar activity with the potential to produce extreme events. This paper describes the development of a data-based model for predicting the occurrence of extreme space weather events from solar observation. The motivation for this work was to develop a tool to assist space weather forecasters in early identification of solar activity conditions with the potential to produce extreme space weather, and with sufficient lead time to notify relevant customer groups. Data-based modelling techniques were used to construct the model, and an extensive archive of solar observation data used to train, optimise and test the model. The optimisation of the base model aimed to eliminate false negatives (missed events) at the expense of a tolerable increase in false positives, under the assumption of an iterative improvement in forecast accuracy during progression of the solar disturbance, as subsequent data becomes available.
A Low Complexity System Based on Multiple Weighted Decision Trees for Indoor Localization
Sánchez-Rodríguez, David; Hernández-Morera, Pablo; Quinteiro, José Ma.; Alonso-González, Itziar
2015-01-01
Indoor position estimation has become an attractive research topic due to growing interest in location-aware services. Nevertheless, satisfying solutions that consider both accuracy and system complexity have not been found. From the perspective of lightweight mobile devices, both are extremely important characteristics, because processor power and energy availability are limited. Hence, an indoor localization system with high computational complexity can cause complete battery drain within a few hours. In our research, we use a data mining technique named boosting to develop a localization system based on multiple weighted decision trees to predict the device location, since it offers high accuracy and low computational complexity. The localization system is built using a dataset from sensor fusion, which combines the strength of radio signals from different wireless local area network access points with device orientation information from a digital compass built into the mobile device, so that extra sensors are unnecessary. Experimental results indicate that the proposed system provides substantial improvements in computational complexity over the widely used traditional fingerprinting methods, as well as better accuracy. PMID:26110413
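A sketch of the boosting idea named above: an ensemble of weighted decision trees mapping Wi-Fi signal strengths plus a compass heading to a discrete location label. The feature layout, label set, and data are assumptions for illustration; the paper's exact boosting variant and weighting are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(4)
X = np.hstack([rng.uniform(-95, -40, size=(600, 6)),      # RSS from 6 access points [dBm]
               rng.uniform(0, 360, size=(600, 1))])       # device orientation [deg]
y = rng.integers(0, 10, size=600)                         # 10 candidate locations (synthetic)

# Shallow trees keep per-query prediction cheap on a mobile device.
clf = GradientBoostingClassifier(n_estimators=50, max_depth=3).fit(X, y)
print(clf.predict(X[:5]))                                 # predicted location labels
```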
Science & Technology Review September/October 2008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bearinger, J P
2008-07-21
This issue has the following articles: (1) Answering Scientists' Most Audacious Questions--Commentary by Dona Crawford; (2) Testing the Accuracy of the Supernova Yardstick--High-resolution simulations are advancing understanding of Type Ia supernovae to help uncover the mysteries of dark energy; (3) Developing New Drugs and Personalized Medical Treatment--Accelerator mass spectrometry is emerging as an essential tool for assessing the effects of drugs in humans; (4) Triage in a Patch--A painless skin patch and accompanying detector can quickly indicate human exposure to biological pathogens, chemicals, explosives, or radiation; and (5) Smoothing Out Defects for Extreme Ultraviolet Lithography--A process for smoothing mask defects helps move extreme ultraviolet lithography one step closer to creating smaller, more powerful computer chips.
Contact high: Mania proneness and positive perception of emotional touches.
Piff, Paul K; Purcell, Amanda; Gruber, June; Hertenstein, Matthew J; Keltner, Dacher
2012-01-01
How do extreme degrees of positive emotion-such as those characteristic of mania-influence emotion perception? The present study investigated how mania proneness, assessed using the Hypomanic Personality Scale, influences the perception of emotion via touch. Using a validated dyadic interaction paradigm for communicating emotion through touch (Hertenstein, Keltner, App, Bulleit, & Jaskolka, 2006), participants (N=53) received eight different touches to their forearm from a stranger and then identified the emotion via forced-choice methodology. Mania proneness predicted increased overall accuracy in touch perception, particularly for positive emotion touches, as well as the over-attribution of positive and under-attribution of negative emotions across all touches. These findings highlight the effects of positive emotion extremes on the perception of emotion in social interactions.
NASA Astrophysics Data System (ADS)
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line-lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (about 3.5 × 10⁵ lines per second per core on a standard current day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
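The paper's adaptive sampling scheme is not reproduced here; the sketch below only shows the per-line quantity it accelerates, a Voigt profile evaluated via the Faddeeva function available in SciPy. Parameter values are illustrative.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def voigt_profile(x, sigma, gamma):
    """Area-normalised Voigt profile at detuning x from line centre,
    with Gaussian standard deviation sigma and Lorentzian HWHM gamma."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Example: one line evaluated on a detuning grid (arbitrary units).
x = np.linspace(-5.0, 5.0, 2001)
phi = voigt_profile(x, sigma=0.5, gamma=0.1)
# The integrated line opacity should be close to 1 over a wide enough grid.
print("integrated profile:", np.sum(phi) * (x[1] - x[0]))
```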
Levanič, Tom; Popa, Ionel; Poljanšek, Simon; Nechita, Constantin
2013-09-01
Increase in temperature and decrease in precipitation pose a major future challenge for sustainable ecosystem management in Romania. To understand ecosystem response and the wider social consequences of environmental change, we constructed a 396-year long (1615-2010) drought sensitive tree-ring width chronology (TRW) of Pinus nigra var. banatica (Georg. et Ion.) growing on steep slopes and shallow organic soil. We established a statistical relationship between TRW and two meteorological parameters-monthly sum of precipitation (PP) and standardised precipitation index (SPI). PP and SPI correlate significantly with TRW (r = 0.54 and 0.58) and are stable in time. Rigorous statistical tests, which measure the accuracy and prediction ability of the model, were all significant. SPI was eventually reconstructed back to 1688, with extreme dry and wet years identified using the percentile method. By means of reconstruction, we identified two so far unknown extremely dry years in Romania--1725 and 1782. Those 2 years are almost as dry as 1946, which was known as the "year of great famine." Since no historical documents for these 2 years were available in local archives, we compared the results with those from neighbouring countries and discovered that both years were extremely dry in the wider region (Slovakia, Hungary, Anatolia, Syria, and Turkey). While the 1800-1900 period was relatively mild, with only two moderately extreme years as far as weather is concerned, the 1900-2009 period was highly salient owing to the very high number of wet and dry extremes--five extremely wet and three extremely dry events (one of them in 1946) were identified.
PEPSI spectro-polarimeter for the LBT
NASA Astrophysics Data System (ADS)
Strassmeier, Klaus G.; Hofmann, Axel; Woche, Manfred F.; Rice, John B.; Keller, Christoph U.; Piskunov, N. E.; Pallavicini, Roberto
2003-02-01
PEPSI (Potsdam Echelle Polarimetric and Spectroscopic Instrument) is to use the unique feature of the LBT and its powerful double mirror configuration to provide high and extremely high spectral resolution full-Stokes four-vector spectra in the wavelength range 450-1100 nm. For the given aperture of 8.4 m in single mirror mode and 11.8 m in double mirror mode, and at a spectral resolution of 40,000-300,000 as designed for the fiber-fed Echelle spectrograph, a polarimetric accuracy between 10⁻⁴ and 10⁻² can be reached for targets with visual magnitudes of up to 17th magnitude. A polarimetric accuracy better than 10⁻⁴ can only be reached either for targets brighter than approximately 10th magnitude together with a substantial trade-off with the spectral resolution, or with spectrum deconvolution techniques. At 10⁻², however, we will be able to observe the brightest AGNs down to 17th magnitude.
Accuracy assessment of a mobile terrestrial lidar survey at Padre Island National Seashore
Lim, Samsung; Thatcher, Cindy A.; Brock, John C.; Kimbrow, Dustin R.; Danielson, Jeffrey J.; Reynolds, B.J.
2013-01-01
The higher point density and mobility of terrestrial laser scanning (light detection and ranging (lidar)) is desired when extremely detailed elevation data are needed for mapping vertically orientated complex features such as levees, dunes, and cliffs, or when highly accurate data are needed for monitoring geomorphic changes. Mobile terrestrial lidar scanners have the capability for rapid data collection on a larger spatial scale compared with tripod-based terrestrial lidar, but few studies have examined the accuracy of this relatively new mapping technology. For this reason, we conducted a field test at Padre Island National Seashore of a mobile lidar scanner mounted on a sport utility vehicle and integrated with a position and orientation system. The purpose of the study was to assess the vertical and horizontal accuracy of data collected by the mobile terrestrial lidar system, which is georeferenced to the Universal Transverse Mercator coordinate system and the North American Vertical Datum of 1988. To accomplish the study objectives, independent elevation data were collected by conducting a high-accuracy global positioning system survey to establish the coordinates and elevations of 12 targets spaced throughout the 12 km transect. These independent ground control data were compared to the lidar scanner-derived elevations to quantify the accuracy of the mobile lidar system. The performance of the mobile lidar system was also tested at various vehicle speeds and scan density settings (e.g. field of view and linear point spacing) to estimate the optimal parameters for desired point density. After adjustment of the lever arm parameters, the final point cloud accuracy was 0.060 m (east), 0.095 m (north), and 0.053 m (height). The very high density of the resulting point cloud was sufficient to map fine-scale topographic features, such as the complex shape of the sand dunes.
Zhou, Shiqi; Lamperski, Stanisław; Zydorczak, Maria
2014-08-14
Monte Carlo (MC) simulation and classical density functional theory (DFT) results are reported for the structural and electrostatic properties of a planar electric double layer containing ions with highly asymmetric diameters or valencies under extreme concentration conditions. In the applied DFT, the excess free energy contribution due to hard-sphere repulsion is treated with a recently elaborated extended form of the fundamental measure functional, and the coupling of Coulombic and short-range hard-sphere repulsion is described by a traditional second-order functional perturbation expansion approximation. Comparison between the MC and DFT results indicates that the validity interval of the traditional DFT approximation extends to ion valences as high as 3 and to size asymmetries up to a diameter ratio of 4, whether the high-valence or large-size ions are co-ions or counter-ions, and to bulk electrolyte concentrations close to the upper limit that the MC simulation can handle. The dependence of DFT accuracy on the ion parameters can be self-consistently explained using arguments from liquid state theory, and new EDL phenomena are observed, such as an overscreening effect due to monovalent counter-ions, an extreme layering effect of counter-ions, and the appearance of a depletion layer with almost no counter- and co-ions.
Kassamali, Rahil Hussein; Hoey, Edward T D; Ganeshan, Arul; Littlehales, Tracey
2013-01-01
This feasibility study aimed to obtain initial data to assess the performance of a novel noncontrast spoiled magnetic resonance (MR) angiography technique (fresh-blood imaging [FBI]) compared to gadolinium-enhanced MR (Gd-MR) angiography for evaluation of the aorto-iliac and lower extremity arteries. Thirteen patients with suspected lower extremity arterial disease who had undergone Gd-MR angiography and FBI at the same session were randomly included in the study. FBI was performed using an ECG-gated flow-spoiled T2-weighted half-Fourier fast spin-echo sequence. For analysis, the aortoiliac and lower limb arteries were divided into 18 anatomical segments. Two blinded readers individually graded image quality of FBI and also assessed the presence and severity of any stenotic lesions. A similar analysis was performed for the Gd-MR angiography images. A total of 385 arterial segments were analyzed; 34 segments were excluded due to degraded image quality (1.3% of Gd-MR vs. 8% of FBI-MR angiography images). FBI-MR angiography had comparable accuracy to Gd-MR angiography for assessment of the above-knee vessels, with high kappa statistics (large arteries, 0.91; small arteries, 0.86), high sensitivity (large arteries, 98.1%; small arteries, 88.6%) and high specificity (large arteries, 97.2%; small arteries, 97.6%) using Gd-MR angiography as the gold standard. Initial results show good agreement between FBI-MR angiography and Gd-MR angiography in the diagnosis of peripheral arterial disease, making FBI a potential alternative in patients with renal impairment. FBI showed highest accuracy in the above-knee vessels. Technological refinements are required to improve accuracy for assessing the calf and pedal vessels.
The diagnosis of aortoiliac disease. A noninvasive femoral cuff technique.
Barringer, M; Poole, G V; Shircliffe, A C; Meredith, J W; Hightower, F; Plonk, G W
1983-01-01
An inexpensive femoral "cuff" developed in this noninvasive vascular laboratory allows pulse volume recordings and systolic pressure measurements of the femoral arteries. Using the parameters 1) femoral/brachial systolic pressure ratio, 2) wave amplitude, and 3) status of the dicrotic notch for assessment of results, it was found that the cuff correctly identified 59 of 62 limbs with at least 50% aortoiliac stenosis, with only two false-positive results, for an accuracy of 97%. The high, wide thigh cuff identified 57 of the 62 limbs, but had 45 false-positive results (77% accuracy). Use of the femoral "cuff" has refined the ability to identify the anatomic location of significant arterial stenoses in the lower extremities. PMID:6824373
Snow Depth from Lidar: Challenges and New Technology for Measurements in Extreme Terrain
NASA Astrophysics Data System (ADS)
Berisford, D. F.; Kadatskiy, V.; Boardman, J. W.; Bormann, K.; Deems, J. S.; Goodale, C. E.; Mattmann, C. A.; Ramirez, P.; Richardson, M.; Painter, T. H.
2014-12-01
The Airborne Snow Observatory (ASO) uses an airborne LiDAR system to measure basin-wide snow depth with cm-scale accuracy at ~1m spatial resolution. This is accomplished by creating a Digital Elevation Model (DEM) over snow-free terrain in the summer, then repeating the flights again when the terrain is snow-covered and subtracting the elevations. Snow Water Equivalent (SWE) is then calculated by incorporating modeled snow density estimates, and when combined with coincident spectrometer albedo measurements, informs distributed hydrologic modeling and runoff prediction. This method provides SWE estimates of unprecedented accuracy and extent compared to traditional snow surveys and towers, and 24hr latency data products through the ASO processing pipeline using Apache Tika and OODT software. The timely ASO outputs support operational decision making by water/dam operators for optimal water management. The water-resource snowpack in the western US lies in remote mountainous terrain, spanning large areas containing steep faces at all aspects, often amongst tree canopy. This extreme terrain presents unusual challenges for LiDAR, and requires high altitude flights to achieve wide area coverage, high point density to capture small terrain features, and the ability to capture all slope aspects without shadowing. These challenges were met by the new state-of-the-art Riegl LMS-Q1560 LiDAR system, which incorporates two independent laser channels and a single rotating mirror. Both lasers and mirror are designed to provide forward, backward, and nadir look capability, which minimizes shadowing and ensures data capture even on very steep slopes. The system is capable of logging more than 10 simultaneous pulses in the air, which allows data collection at extremely high resolution while maintaining very high altitude which reduces complete region acquisition time significantly, and allows data collection over terrain with extreme elevation variation. Our experience to-date includes acquisition of data over terrain relief of more than 3500m, and ranges of up to 6000m in a single swath. We present data acquired during spring of 2013 and 2014 in western Colorado and the central Sierra Nevada, which demonstrates the capability of the new LiDAR technology and shows basin-wide measured snow depth and SWE results.
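As a minimal sketch of the depth-differencing and SWE conversion step described above, assuming co-registered snow-on and snow-free elevation grids and a modelled density field (all values below are synthetic):

```python
import numpy as np

WATER_DENSITY = 1000.0  # kg m^-3

def swe_from_lidar(snow_on_dem, snow_free_dem, snow_density):
    """Snow depth by DEM differencing, converted to snow water equivalent.
    All inputs are co-registered 2-D grids; snow_density in kg m^-3."""
    depth = np.clip(snow_on_dem - snow_free_dem, 0.0, None)  # negative diffs -> 0
    swe = depth * snow_density / WATER_DENSITY                # metres of water
    return depth, swe

# Hypothetical 1 m grids (elevations in metres) and a modelled density field.
rng = np.random.default_rng(2)
snow_free = rng.uniform(2500.0, 2600.0, size=(100, 100))
snow_on = snow_free + rng.uniform(0.0, 2.0, size=(100, 100))
density = np.full((100, 100), 350.0)
depth, swe = swe_from_lidar(snow_on, snow_free, density)
print("mean depth [m]:", depth.mean(), " mean SWE [m w.e.]:", swe.mean())
```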
NASA Astrophysics Data System (ADS)
Vincenti, Henri; Vay, Jean-Luc
2018-07-01
The advent of massively parallel supercomputers, with their distributed-memory technology using many processing units, has favored the development of highly scalable local low-order solvers at the expense of harder-to-scale global very high-order spectral methods. Indeed, FFT-based methods, which were very popular on shared memory computers, have been largely replaced by finite-difference (FD) methods for the solution of many problems, including plasma simulations with electromagnetic Particle-In-Cell methods. For some problems, such as the modeling of so-called "plasma mirrors" for the generation of high-energy particles and ultra-short radiation, we have shown that the inaccuracies of standard FD-based PIC methods prevent modeling on present supercomputers at sufficient accuracy. We demonstrate here that a new method, based on the use of local FFTs, enables ultrahigh-order accuracy with unprecedented scalability, and thus for the first time the accurate modeling of plasma mirrors in 3D.
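The paper's PIC solver is not reproduced here; the toy comparison below only illustrates why FFT-based (spectral) differentiation reaches far higher accuracy than a low-order finite-difference stencil on a smooth periodic field, which is the accuracy/scalability trade-off discussed above.

```python
import numpy as np

# Periodic test field and its exact derivative.
n = 64
L = 2.0 * np.pi
x = np.arange(n) * L / n
f = np.sin(3 * x)
exact = 3 * np.cos(3 * x)

# Second-order centred finite difference.
dx = L / n
fd = (np.roll(f, -1) - np.roll(f, 1)) / (2.0 * dx)

# Spectral derivative: multiply Fourier coefficients by i*k.
k = np.fft.fftfreq(n, d=dx) * 2.0 * np.pi
spec = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

print("FD max error:      ", np.max(np.abs(fd - exact)))
print("spectral max error:", np.max(np.abs(spec - exact)))
```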
Zhao, Xian-En; Yan, Ping; Wang, Renjun; Zhu, Shuyun; You, Jinmao; Bai, Yu; Liu, Huwei
2016-08-01
Quantitative analysis of cholesterol and its metabolic steroid hormones plays a vital role in diagnosing endocrine disorders and understanding disease progression, as well as in clinical medicine studies. Because of their extremely low abundance in body fluids, it remains a challenging task to develop a sensitive detection method. A hyphenated technique of dual ultrasonic-assisted dispersive liquid-liquid microextraction (dual-UADLLME) coupled with microwave-assisted derivatization (MAD) was proposed for cleansing, enrichment and sensitivity enhancement. 4'-Carboxy-substituted rosamine (CSR) was synthesized and used as the derivatization reagent. An ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) method was developed for determination of cholesterol and its metabolic steroid hormones in the multiple reaction monitoring mode. Parameters of dual-UADLLME, MAD and UHPLC-MS/MS were all optimized. Satisfactory linearity, recovery, repeatability, accuracy and precision, absence of matrix effect and extremely low limits of detection (LODs, 0.08-0.15 pg mL⁻¹) were achieved. Through the combination of dual-UADLLME and MAD, a determination method for cholesterol and its metabolic steroid hormones in human plasma, serum and urine samples was developed and validated with high sensitivity, selectivity, accuracy and perfect matrix effect results. Copyright © 2016 John Wiley & Sons, Ltd.
Estimation of local extreme suspended sediment concentrations in California Rivers.
Tramblay, Yves; Saint-Hilaire, André; Ouarda, Taha B M J; Moatar, Florentina; Hecht, Barry
2010-09-01
The total amount of suspended sediment load carried by a stream during a year is usually transported during one or several extreme events related to high river flow and intense rainfall, leading to very high suspended sediment concentrations (SSCs). In this study, quantiles of SSC derived from annual maxima and the 99th percentile of the SSC series are estimated locally in a site-specific approach using regional information. Relationships between physiographic characteristics and the selected indicators were analysed using the 5-km-radius localities draining to each sampling site. Multiple regression models were built to test the regional estimation of these indicators of suspended sediment transport. To assess the accuracy of the estimates, a Jack-Knife re-sampling procedure was used to compute the relative bias and root mean square error of the models. Results show that for the 19 stations considered in California, the extreme SSCs can be estimated with 40-60% uncertainty, depending on the presence of flow regulation in the basin. This modelling approach is likely to prove functional in other Mediterranean-climate watersheds since it appears useful in California, where geologic, climatic, physiographic, and land-use conditions are highly variable. Copyright 2010 Elsevier B.V. All rights reserved.
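A minimal sketch of the Jack-Knife re-sampling step described above, applied to a hypothetical linear regression of an extreme-SSC indicator on two physiographic predictors; the data, predictor names and model form are illustrative assumptions.

```python
import numpy as np

def jackknife_errors(X, y, fit, predict):
    """Leave-one-site-out re-sampling: refit the model without each site,
    predict that site, then summarise relative bias and RMSE."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        model = fit(X[keep], y[keep])
        preds[i] = predict(model, X[i:i + 1])[0]
    rel_bias = np.mean((preds - y) / y)
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    return rel_bias, rmse

# Hypothetical physiographic predictors (e.g. drainage area, slope index) and
# an extreme-SSC indicator (arbitrary units) at 19 sites.
rng = np.random.default_rng(4)
X = rng.normal(size=(19, 2))
y = 10.0 + 1.2 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.4, 19)

fit = lambda A, t: np.linalg.lstsq(np.c_[np.ones(len(t)), A], t, rcond=None)[0]
predict = lambda beta, A: np.c_[np.ones(len(A)), A] @ beta
print("relative bias, RMSE:", jackknife_errors(X, y, fit, predict))
```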
Zheng, Wei; Yan, Xiaoyong; Zhao, Wei; Qian, Chengshan
2017-12-20
A novel large-scale multi-hop localization algorithm based on regularized extreme learning is proposed in this paper. The large-scale multi-hop localization problem is formulated as a learning problem. Unlike other similar localization algorithms, the proposed algorithm overcomes the shortcoming of traditional algorithms that are only applicable to isotropic networks, and therefore has strong adaptability to complex deployment environments. The proposed algorithm is composed of three stages: data acquisition, modeling and location estimation. In the data acquisition stage, the training information between nodes of the given network is collected. In the modeling stage, the model relating hop counts to the physical distances between nodes is constructed using regularized extreme learning. In the location estimation stage, each node finds its specific location in a distributed manner. Theoretical analysis and several experiments show that the proposed algorithm can adapt to different topological environments with low computational cost. Furthermore, high accuracy can be achieved by this method without setting complex parameters.
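The paper's exact network is not specified in the abstract, so the following is only a generic sketch of a regularized ELM regressor under the assumption of a mapping from hop counts to physical distances: a random, untrained hidden layer followed by a ridge-regularised least-squares solution for the output weights. All data and dimensions are hypothetical.

```python
import numpy as np

class RegularizedELM:
    """Minimal regularized ELM regressor: random hidden layer, ridge solution
    for the output weights (no iterative training)."""
    def __init__(self, n_hidden=100, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # beta = (H^T H + reg * I)^-1 H^T y
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Hypothetical training pairs: hop counts to four anchors -> physical distance (m).
rng = np.random.default_rng(5)
hops = rng.integers(1, 15, size=(500, 4)).astype(float)
dist = hops @ np.array([22.0, 18.0, 25.0, 20.0]) + rng.normal(0, 10, 500)
elm = RegularizedELM(n_hidden=80, reg=1e-1).fit(hops, dist)
print("training RMSE [m]:", np.sqrt(np.mean((elm.predict(hops) - dist) ** 2)))
```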
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Zhou, Guanwu; Zhao, Yulong; Guo, Fangfang; Xu, Wenju
2014-01-01
Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed on LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration in the computer. Practical pressure measurement experiments are carried out to verify the system's performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM acquires higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10⁻⁵/°C and 29.5 × 10⁻⁵/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10⁻⁵/°C and 2.1 × 10⁻⁵/°C, respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirement of the sensor. PMID:25006998
NASA Astrophysics Data System (ADS)
Rieder, Harald E.; Staehelin, Johannes; Maeder, Jörg A.; Peter, Thomas; Ribatet, Mathieu; Davison, Anthony C.; Stübi, Rene; Weihs, Philipp; Holawe, Franz
2010-05-01
In this study tools from extreme value theory (e.g. Coles, 2001; Ribatet, 2007) are applied for the first time in the field of stratospheric ozone research, as statistical analysis showed that previously used concepts assuming a Gaussian distribution (e.g. fixed deviations from mean values) of total ozone data do not address the internal data structure concerning extremes adequately. The study illustrates that tools based on extreme value theory are appropriate to identify ozone extremes and to describe the tails of the world's longest total ozone record (Arosa, Switzerland - for details see Staehelin et al., 1998a,b) (Rieder et al., 2010a). A daily moving threshold was implemented for consideration of the seasonal cycle in total ozone. The frequency of days with extreme low (termed ELOs) and extreme high (termed EHOs) total ozone and the influence of those on mean values and trends is analyzed for Arosa total ozone time series. The results show (a) an increase in ELOs and (b) a decrease in EHOs during the last decades and (c) that the overall trend during the 1970s and 1980s in total ozone is strongly dominated by changes in these extreme events. After removing the extremes, the time series shows a strongly reduced trend (reduction by a factor of 2.5 for trend in annual mean). Furthermore, it is shown that the fitted model represents the tails of the total ozone data set with very high accuracy over the entire range (including absolute monthly minima and maxima). Also the frequency distribution of ozone mini-holes (using constant thresholds) can be calculated with high accuracy. Analyzing the tails instead of a small fraction of days below constant thresholds provides deeper insight in time series properties. Excursions in the frequency of extreme events reveal "fingerprints" of dynamical factors such as ENSO or NAO, and chemical factors, such as cold Arctic vortex ozone losses, as well as major volcanic eruptions of the 20th century (e.g. Gunung Agung, El Chichón, Mt. Pinatubo). Furthermore, atmospheric loading in ozone depleting substances lead to a continuous modification of column ozone in the northern hemisphere also with respect to extreme values (partly again in connection with polar vortex contributions). It is shown that application of extreme value theory allows the identification of many more such fingerprints than conventional time series analysis of annual and seasonal mean values. Especially, the analysis shows the strong influence of dynamics, revealing that even moderate ENSO and NAO events have a discernible effect on total ozone (Rieder et al., 2010b). Overall the presented new extremes concept provides new information on time series properties, variability, trends and the influence of dynamics and chemistry, complementing earlier analyses focusing only on monthly (or annual) mean values. References: Coles, S.: An Introduction to Statistical Modeling of Extreme Values, Springer Series in Statistics, ISBN:1852334592, Springer, Berlin, 2001. Ribatet, M.: POT: Modelling peaks over a threshold, R News, 7, 34-36, 2007. Rieder ,H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part I: Application of extreme value theory, to be submitted to ACPD. 
Rieder, H.E., Staehelin, J., Maeder, J.A., Ribatet, M., Stübi, R., Weihs, P., Holawe, F., Peter, T., and A.D., Davison (2010): Extreme events in total ozone over Arosa - Part II: Fingerprints of atmospheric dynamics and chemistry and effects on mean values and long-term changes, to be submitted to ACPD. Staehelin, J., Renaud, A., Bader, J., McPeters, R., Viatte, P., Hoegger, B., Bugnion, V., Giroud, M., and Schill, H.: Total ozone series at Arosa (Switzerland): Homogenization and data comparison, J. Geophys. Res., 103(D5), 5827-5842, doi:10.1029/97JD02402, 1998a. Staehelin, J., Kegel, R., and Harris, N. R.: Trend analysis of the homogenized total ozone series of Arosa (Switzerland), 1929-1996, J. Geophys. Res., 103(D7), 8389-8400, doi:10.1029/97JD03650, 1998b.
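The study uses a daily moving threshold; as a simplified, hypothetical illustration of the peaks-over-threshold machinery from extreme value theory, the sketch below fits a generalized Pareto distribution to low-tail exceedances of a fixed threshold in synthetic deseasonalised ozone anomalies. Threshold choice, data and units are assumptions, not the Arosa analysis itself.

```python
import numpy as np
from scipy.stats import genpareto

# Hypothetical deseasonalised daily total-ozone anomalies (Dobson units).
rng = np.random.default_rng(6)
anomalies = rng.normal(0.0, 25.0, 20000)

# Peaks-over-threshold on the low tail: deficits below the 5th percentile.
threshold = np.quantile(anomalies, 0.05)
exceedances = threshold - anomalies[anomalies < threshold]  # positive deficits

# Fit a generalized Pareto distribution to the exceedances (location fixed at 0).
shape, loc, scale = genpareto.fit(exceedances, floc=0.0)
print("GPD shape:", shape, "scale:", scale)

# Probability that a low-tail day lies at least 50 DU below the threshold.
print("P(deficit > 50 DU | below threshold):", genpareto.sf(50.0, shape, loc, scale))
```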
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vilkas, M J; Ishikawa, Y; Trabert, E
Many-Body Perturbation Theory (MBPT) has been employed to calculate with high wavelength accuracy the extreme ultraviolet (EUV) spectra of F-like to P-like Xe ions. The reliability of the new calculations is discussed using the example of EUV beam-foil spectra of Xe, in which n = 3, Δn = 0 transitions of Na-, Mg-, Al-like, and Si-like ions have been found to dominate. A further comparison is made with spectra from an electron beam ion trap, that is, from a device with a very different (low-density) excitation balance.
Slice sampling technique in Bayesian extreme of gold price modelling
NASA Astrophysics Data System (ADS)
Rostami, Mohammad; Adam, Mohd Bakri; Ibrahim, Noor Akma; Yahya, Mohamed Hisham
2013-09-01
In this paper, a simulation study of Bayesian extreme values using Markov Chain Monte Carlo via the slice sampling algorithm is implemented. We compared the accuracy of slice sampling with other methods for a Gumbel model. This study revealed that the slice sampling algorithm offers more accurate and closer estimates with lower RMSE than the other methods. Finally, we successfully employed this procedure to estimate the parameters of Malaysian extreme gold prices from 2000 to 2011.
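A minimal univariate slice sampler (stepping-out with shrinkage) applied to a hypothetical Gumbel posterior is sketched below; the scale is held fixed and a flat prior on the location is assumed, which is only an illustration and not the study's actual model or data.

```python
import numpy as np

def slice_sample(logp, x0, n_samples, w=1.0, rng=None):
    """Univariate slice sampler with stepping-out and shrinkage."""
    if rng is None:
        rng = np.random.default_rng(0)
    x, out = x0, []
    for _ in range(n_samples):
        logy = logp(x) + np.log(rng.uniform())   # vertical level under the density
        left = x - w * rng.uniform()
        right = left + w
        while logp(left) > logy:                 # step out until outside the slice
            left -= w
        while logp(right) > logy:
            right += w
        while True:                              # shrink until a point is accepted
            xp = rng.uniform(left, right)
            if logp(xp) > logy:
                x = xp
                break
            if xp < x:
                left = xp
            else:
                right = xp
        out.append(x)
    return np.array(out)

# Hypothetical block-maximum data, Gumbel likelihood with fixed scale beta and a
# flat prior on the location mu (log-posterior up to a constant).
rng = np.random.default_rng(7)
data = rng.gumbel(loc=2.0, scale=0.5, size=200)
beta = 0.5
logpost = lambda mu: np.sum(-(data - mu) / beta - np.exp(-(data - mu) / beta))
draws = slice_sample(logpost, x0=data.mean(), n_samples=2000, rng=rng)
print("posterior mean of mu:", draws[500:].mean())
```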
Real-Time Human Detection for Aerial Captured Video Sequences via Deep Models.
AlDahoul, Nouar; Md Sabri, Aznul Qalid; Mansoor, Ali Mohammed
2018-01-01
Human detection in videos plays an important role in various real-life applications. Most traditional approaches depend on utilizing handcrafted features, which are problem-dependent and optimal only for specific tasks. Moreover, they are highly susceptible to dynamical events such as illumination changes, camera jitter, and variations in object sizes. On the other hand, feature learning approaches are cheaper and easier because highly abstract and discriminative features can be produced automatically without the need for expert knowledge. In this paper, we utilize automatic feature learning methods which combine optical flow and three different deep models (i.e., supervised convolutional neural network (S-CNN), pretrained CNN feature extractor, and hierarchical extreme learning machine) for human detection in videos captured using a nonstatic camera on an aerial platform with varying altitudes. The models are trained and tested on the publicly available and highly challenging UCF-ARG aerial dataset. The comparison between these models in terms of training, testing accuracy, and learning speed is analyzed. The performance evaluation considers five human actions (digging, waving, throwing, walking, and running). Experimental results demonstrated that the proposed methods are successful for the human detection task. The pretrained CNN produces an average accuracy of 98.09%. S-CNN produces an average accuracy of 95.6% with soft-max and 91.7% with Support Vector Machines (SVM). H-ELM has an average accuracy of 95.9%. Using a normal Central Processing Unit (CPU), H-ELM's training takes 445 seconds. Learning in S-CNN takes 770 seconds with a high-performance Graphical Processing Unit (GPU).
Estimating missing daily temperature extremes in Jaffna, Sri Lanka
NASA Astrophysics Data System (ADS)
Thevakaran, A.; Sonnadara, D. U. J.
2018-04-01
The accuracy of reconstructing missing daily temperature extremes in the Jaffna climatological station, situated in the northern part of the dry zone of Sri Lanka, is presented. The adopted method utilizes standard departures of daily maximum and minimum temperature values at four neighbouring stations, Mannar, Anuradhapura, Puttalam and Trincomalee, to estimate the standard departures of daily maximum and minimum temperatures at the target station, Jaffna. The daily maximum and minimum temperatures from 1966 to 1980 (15 years) were used to test the validity of the method. The accuracy of the estimation is higher for daily maximum temperature compared to daily minimum temperature. About 95% of the estimated daily maximum temperatures are within ±1.5 °C of the observed values; for daily minimum temperature, the percentage is about 92%. By calculating the standard deviation of the difference between estimated and observed values, we have shown that the error in estimating the daily maximum and minimum temperatures is ±0.7 and ±0.9 °C, respectively. To obtain the best accuracy when estimating the missing daily temperature extremes, it is important to include Mannar, which is the nearest station to the target station, Jaffna. We conclude from the analysis that the method can be applied successfully to reconstruct the missing daily temperature extremes in Jaffna, where no data are available due to frequent disruptions caused by civil unrest and hostilities in the region during the period 1984 to 2000.
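A minimal sketch of the standard-departure method described above: neighbour values are converted to anomalies in units of their own standard deviation, averaged, and rescaled with the target station's climatology. All numbers below are illustrative, not observed values.

```python
import numpy as np

def estimate_missing(target_mean, target_std, neighbour_values,
                     neighbour_means, neighbour_stds):
    """Estimate a missing daily extreme at the target station from the mean
    standard departure of the neighbouring stations on that day."""
    z = (neighbour_values - neighbour_means) / neighbour_stds
    return target_mean + target_std * z.mean()

# Hypothetical daily maxima (deg C) at Mannar, Anuradhapura, Puttalam and
# Trincomalee, with illustrative calendar-day climatologies and Jaffna's own.
neighbour_values = np.array([33.1, 34.0, 33.5, 32.2])
neighbour_means = np.array([32.0, 33.2, 32.8, 31.5])
neighbour_stds = np.array([1.1, 1.3, 1.2, 1.0])
jaffna_mean, jaffna_std = 31.8, 1.0

print("estimated Jaffna Tmax:",
      estimate_missing(jaffna_mean, jaffna_std, neighbour_values,
                       neighbour_means, neighbour_stds))
```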
Accuracy Considerations in Sterile Compounding.
Akers, Michael J
2017-01-01
Published information about the accuracy of filling and closing operations of sterile products is limited and guidelines on the topic are very general. This article highlights the basic principles in sterile-product filling of syringes and vials. Also covered in this article are descriptions of some of the available devices for filling containers, a brief discussion of the advances in vial and syringe filling, a discussion on the advantages and disadvantages of sterile product filling methods, and a discussion on possible problems encountered during filling operations. Because of the extremely high costs of some new drugs, especially biopharmaceuticals, compounding pharmacies may prefer to fill small batches to reduce the risk of unacceptable monetary losses in the event of a manufacturing deviation that results in batch rejection. Copyright© by International Journal of Pharmaceutical Compounding, Inc.
The control of manual entry accuracy in management/engineering information systems, phase 1
NASA Technical Reports Server (NTRS)
Hays, Daniel; Nocke, Henry; Wilson, Harold; Woo, John, Jr.; Woo, June
1987-01-01
It was shown that clerical personnel can be tested for proofreading performance under simulated industrial conditions. A statistical study showed that errors in proofreading follow an extreme value probability distribution. The study also showed that innovative man/machine interfaces can be developed to improve and control accuracy during data entry.
Palaniappan, Rajkumar; Sundaraj, Kenneth; Sundaraj, Sebastian; Huliraj, N; Revadi, S S
2017-06-08
Auscultation is a medical procedure used for the initial diagnosis and assessment of lung and heart diseases. From this perspective, we propose assessing the performance of the extreme learning machine (ELM) classifiers for the diagnosis of pulmonary pathology using breath sounds. Energy and entropy features were extracted from the breath sound using the wavelet packet transform. The statistical significance of the extracted features was evaluated by one-way analysis of variance (ANOVA). The extracted features were inputted into the ELM classifier. The maximum classification accuracies obtained for the conventional validation (CV) of the energy and entropy features were 97.36% and 98.37%, respectively, whereas the accuracies obtained for the cross validation (CRV) of the energy and entropy features were 96.80% and 97.91%, respectively. In addition, maximum classification accuracies of 98.25% and 99.25% were obtained for the CV and CRV of the ensemble features, respectively. The results indicate that the classification accuracy obtained with the ensemble features was higher than those obtained with the energy and entropy features.
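As a hedged sketch of the feature extraction step described above, the code below computes per-node energy and Shannon entropy from a wavelet packet decomposition using PyWavelets; the wavelet, decomposition depth and the synthetic signal are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

def wavelet_packet_features(signal, wavelet="db4", level=4):
    """Energy and Shannon entropy of each terminal wavelet-packet node."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric",
                            maxlevel=level)
    energies, entropies = [], []
    for node in wp.get_level(level, order="freq"):
        coeffs = np.asarray(node.data)
        e = np.sum(coeffs ** 2)
        p = coeffs ** 2 / e if e > 0 else np.ones_like(coeffs) / coeffs.size
        energies.append(e)
        entropies.append(-np.sum(p * np.log2(p + 1e-12)))
    return np.array(energies), np.array(entropies)

# Hypothetical breath-sound segment (e.g. 2 s sampled at 4 kHz).
rng = np.random.default_rng(8)
segment = rng.normal(size=8000)
energy, entropy = wavelet_packet_features(segment)
print("feature vector length:", energy.size + entropy.size)
```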
Single photon detection and timing in the Lunar Laser Ranging Experiment.
NASA Technical Reports Server (NTRS)
Poultney, S. K.
1972-01-01
The goals of the Lunar Laser Ranging Experiment lead to the need for the measurement of a 2.5 sec time interval to an accuracy of a nanosecond or better. The systems analysis which included practical retroreflector arrays, available laser systems, and large telescopes led to the necessity of single photon detection. Operation under all background illumination conditions required auxiliary range gates and extremely narrow spectral and spatial filters in addition to the effective gate provided by the time resolution. Nanosecond timing precision at relatively high detection efficiency was obtained using the RCA C31000F photomultiplier and Ortec 270 constant fraction of pulse-height timing discriminator. The timing accuracy over the 2.5 sec interval was obtained using a digital interval with analog vernier ends. Both precision and accuracy are currently checked internally using a triggerable, nanosecond light pulser. Future measurements using sub-nanosecond laser pulses will be limited by the time resolution of single photon detectors.
NASA Astrophysics Data System (ADS)
Xu, Wenbo; Jing, Shaocai; Yu, Wenjuan; Wang, Zhaoxian; Zhang, Guoping; Huang, Jianxi
2013-11-01
In this study, the high-risk debris flow areas of Sichuan Province, Panzhihua and Liangshan Yi Autonomous Prefecture, were taken as the study areas. Using rainfall and environmental factors as predictors, and based on different prior probability combinations of debris flows, debris flow prediction in these areas was compared between two statistical methods: logistic regression (LR) and Bayes discriminant analysis (BDA). The comprehensive analysis shows that (a) with mid-range prior probabilities, the overall predicting accuracy of BDA is higher than that of LR; (b) with equal and extreme prior probabilities, the overall predicting accuracy of LR is higher than that of BDA; and (c) regional debris flow prediction models based on rainfall factors alone perform worse than those that also introduce environmental factors, and the predicting accuracies for occurrence and nonoccurrence of debris flows change in opposite directions as the supplementary information is added.
Energy calibration of CALET onboard the International Space Station
NASA Astrophysics Data System (ADS)
Asaoka, Y.; Akaike, Y.; Komiya, Y.; Miyata, R.; Torii, S.; Adriani, O.; Asano, K.; Bagliesi, M. G.; Bigongiari, G.; Binns, W. R.; Bonechi, S.; Bongi, M.; Brogi, P.; Buckley, J. H.; Cannady, N.; Castellini, G.; Checchia, C.; Cherry, M. L.; Collazuol, G.; Di Felice, V.; Ebisawa, K.; Fuke, H.; Guzik, T. G.; Hams, T.; Hareyama, M.; Hasebe, N.; Hibino, K.; Ichimura, M.; Ioka, K.; Ishizaki, W.; Israel, M. H.; Javaid, A.; Kasahara, K.; Kataoka, J.; Kataoka, R.; Katayose, Y.; Kato, C.; Kawanaka, N.; Kawakubo, Y.; Kitamura, H.; Krawczynski, H. S.; Krizmanic, J. F.; Kuramata, S.; Lomtadze, T.; Maestro, P.; Marrocchesi, P. S.; Messineo, A. M.; Mitchell, J. W.; Miyake, S.; Mizutani, K.; Moiseev, A. A.; Mori, K.; Mori, M.; Mori, N.; Motz, H. M.; Munakata, K.; Murakami, H.; Nakagawa, Y. E.; Nakahira, S.; Nishimura, J.; Okuno, S.; Ormes, J. F.; Ozawa, S.; Pacini, L.; Palma, F.; Papini, P.; Penacchioni, A. V.; Rauch, B. F.; Ricciarini, S.; Sakai, K.; Sakamoto, T.; Sasaki, M.; Shimizu, Y.; Shiomi, A.; Sparvoli, R.; Spillantini, P.; Stolzi, F.; Takahashi, I.; Takayanagi, M.; Takita, M.; Tamura, T.; Tateyama, N.; Terasawa, T.; Tomida, H.; Tsunesada, Y.; Uchihori, Y.; Ueno, S.; Vannuccini, E.; Wefel, J. P.; Yamaoka, K.; Yanagita, S.; Yoshida, A.; Yoshida, K.; Yuda, T.
2017-05-01
In August 2015, the CALorimetric Electron Telescope (CALET), designed for long exposure observations of high energy cosmic rays, docked with the International Space Station (ISS) and shortly thereafter began to collect data. CALET will measure the cosmic ray electron spectrum over the energy range of 1 GeV to 20 TeV with a very high resolution of 2% above 100 GeV, based on a dedicated instrument incorporating an exceptionally thick 30 radiation-length calorimeter with both total absorption and imaging (TASC and IMC) units. Each TASC readout channel must be carefully calibrated over the extremely wide dynamic range of CALET that spans six orders of magnitude in order to obtain a degree of calibration accuracy matching the resolution of energy measurements. These calibrations consist of calculating the conversion factors between ADC units and energy deposits, ensuring linearity over each gain range, and providing a seamless transition between neighboring gain ranges. This paper describes these calibration methods in detail, along with the resulting data and associated accuracies. The results presented in this paper show that a sufficient accuracy was achieved for the calibrations of each channel in order to obtain a suitable resolution over the entire dynamic range of the electron spectrum measurement.
Multispectral image fusion for illumination-invariant palmprint recognition
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064
High accuracy satellite drag model (HASDM)
NASA Astrophysics Data System (ADS)
Storz, Mark F.; Bowman, Bruce R.; Branson, Major James I.; Casali, Stephen J.; Tobiska, W. Kent
The dominant error source in force models used to predict low-perigee satellite trajectories is atmospheric drag. Errors in operational thermospheric density models cause significant errors in predicted satellite positions, since these models do not account for dynamic changes in atmospheric drag for orbit predictions. The Air Force Space Battlelab's High Accuracy Satellite Drag Model (HASDM) estimates and predicts (out three days) a dynamically varying global density field. HASDM includes the Dynamic Calibration Atmosphere (DCA) algorithm that solves for the phases and amplitudes of the diurnal and semidiurnal variations of thermospheric density near real-time from the observed drag effects on a set of Low Earth Orbit (LEO) calibration satellites. The density correction is expressed as a function of latitude, local solar time and altitude. In HASDM, a time series prediction filter relates the extreme ultraviolet (EUV) energy index E10.7 and the geomagnetic storm index ap, to the DCA density correction parameters. The E10.7 index is generated by the SOLAR2000 model, the first full spectrum model of solar irradiance. The estimated and predicted density fields will be used operationally to significantly improve the accuracy of predicted trajectories for all low-perigee satellites.
Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier
2017-02-15
The randomness, non-stationarity and irregularity of air quality index (AQI) series bring the difficulty of AQI forecasting. To enhance forecast accuracy, a novel hybrid forecasting model combining two-phase decomposition technique and extreme learning machine (ELM) optimized by differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, the complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high frequency IMFs which will increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high frequency IMF is obtained through adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016 collected from Beijing and Shanghai located in China are taken as the test cases to conduct the empirical study. The experimental results show that the proposed hybrid model based on two-phase decomposition technique is remarkably superior to all other considered models for its higher forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
Chen, Jun; Chen, Jianwei; Wang, Sijia; Zhou, Guangmin; Chen, Danqing; Zhang, Huawei; Wang, Hong
2018-04-02
A novel, green, rapid, and precise polar RP-HPLC method has been successfully developed and used to screen for high-yield ectoine strains in marine bacteria. Ectoine is a polar and extremely useful solute which allows microorganisms to survive in extreme environmental salinity. This paper describes a polar HPLC method employing a polar RP-C18 column (5 μm, 250 × 4.6 mm) with pure water as the mobile phase, a column temperature of 30 °C, a flow rate of 1.0 mL/min, and UV detection at a wavelength of 210 nm. Our method validation demonstrates excellent linearity (R² = 0.9993) and accuracy (100.55%), with an LOQ and LOD of 0.372 and 0.123 μg mL⁻¹, respectively. These results clearly indicate that the developed polar RP-HPLC method for the separation and determination of ectoine is superior to earlier protocols.
BELM: Bayesian extreme learning machine.
Soria-Olivas, Emilio; Gómez-Sanchis, Juan; Martín, José D; Vila-Francés, Joan; Martínez, Marcelino; Magdalena, José R; Serrano, Antonio J
2011-03-01
The theory of the extreme learning machine (ELM) has become very popular in the last few years. ELM is a new approach for learning the parameters of the hidden layers of a multilayer neural network (such as the multilayer perceptron or the radial basis function neural network). Its main advantage is the lower computational cost, which is especially relevant when dealing with many patterns defined in a high-dimensional space. This brief proposes a Bayesian approach to ELM, which presents some advantages over other approaches: it allows the introduction of a priori knowledge; it obtains confidence intervals (CIs) without the need to apply methods that are computationally intensive, e.g., bootstrap; and it presents high generalization capabilities. Bayesian ELM is benchmarked against classical ELM in several artificial and real datasets that are widely used for the evaluation of machine learning algorithms. The achieved results show that the proposed approach produces competitive accuracy with some additional advantages, namely, automatic production of CIs, reduction of the probability of model overfitting, and use of a priori knowledge.
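The authors' Bayesian formulation is not reproduced here; as a stand-in sketch, the code below keeps the ELM idea of a fixed random hidden layer and places scikit-learn's BayesianRidge on top of it, so predictions come with uncertainty estimates. The data, hidden-layer size and activation are assumptions.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(9)

# Hypothetical 1-D regression problem.
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, 300)

# Fixed random hidden layer (the ELM part): weights are drawn once, never trained.
n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# Bayesian linear output layer: gives predictive means and standard deviations.
model = BayesianRidge().fit(H, y)
H_new = np.tanh(np.array([[0.5], [2.5]]) @ W + b)
mean, std = model.predict(H_new, return_std=True)
print("predictions:", mean, "+/-", 1.96 * std)
```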
Three-dimensional laser window formation for industrial application
NASA Technical Reports Server (NTRS)
Verhoff, Vincent G.; Kowalski, David
1993-01-01
The NASA Lewis Research Center has developed and implemented a unique process for forming flawless three-dimensional, compound-curvature laser windows to extreme accuracies. These windows represent an integral component of specialized nonintrusive laser data acquisition systems that are used in a variety of compressor and turbine research testing facilities. These windows are molded to the flow surface profile of turbine and compressor casings and are required to withstand extremely high pressures and temperatures. This method of glass formation could also be used to form compound-curvature mirrors that would require little polishing and for a variety of industrial applications, including research view ports for testing devices and view ports for factory machines with compound-curvature casings. Currently, sodium-alumino-silicate glass is recommended for three-dimensional laser windows because of its high strength due to chemical strengthening and its optical clarity. This paper discusses the main aspects of three-dimensional laser window formation. It focuses on the unique methodology and the peculiarities that are associated with the formation of these windows.
Forecasting the value-at-risk of Chinese stock market using the HARQ model and extreme value theory
NASA Astrophysics Data System (ADS)
Liu, Guangqiang; Wei, Yu; Chen, Yongfei; Yu, Jiang; Hu, Yang
2018-06-01
Using intraday data of the CSI300 index, this paper discusses value-at-risk (VaR) forecasting of the Chinese stock market from the perspective of high-frequency volatility models. First, we measure the realized volatility (RV) with 5-minute high-frequency returns of the CSI300 index and then model it with the newly introduced heterogeneous autoregressive quarticity (HARQ) model, which can handle the time-varying coefficients of the HAR model. Second, we forecast the out-of-sample VaR of the CSI300 index by combining the HARQ model and extreme value theory (EVT). Finally, using several popular backtesting methods, we compare the VaR forecasting accuracy of HARQ model with other traditional HAR-type models, such as HAR, HAR-J, CHAR, and SHAR. The empirical results show that the novel HARQ model can beat other HAR-type models in forecasting the VaR of the Chinese stock market at various risk levels.
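The HARQ extension is not reproduced here; the sketch below fits only the baseline HAR regression (next-day realized volatility on daily, weekly and monthly RV averages) by ordinary least squares, on a synthetic RV series standing in for the 5-minute-based measure described above.

```python
import numpy as np

def har_design(rv):
    """Daily, weekly (5-day) and monthly (22-day) RV averages as regressors
    for next-day RV: rv[t+1] ~ rv[t], mean(rv[t-4:t+1]), mean(rv[t-21:t+1])."""
    rows, target = [], []
    for t in range(21, len(rv) - 1):
        rows.append([1.0, rv[t], rv[t - 4:t + 1].mean(), rv[t - 21:t + 1].mean()])
        target.append(rv[t + 1])
    return np.array(rows), np.array(target)

# Hypothetical persistent, positive daily realized-volatility series.
rng = np.random.default_rng(10)
rv = np.empty(1000)
rv[0] = 1.0
for t in range(1, 1000):
    rv[t] = max(0.05, 0.1 + 0.85 * rv[t - 1] + rng.normal(0, 0.1))

X, y = har_design(rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("HAR coefficients (const, daily, weekly, monthly):", beta)
```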
Regulation of memory accuracy with multiple answers: the plurality option.
Luna, Karlos; Higham, Philip A; Martín-Luengo, Beatriz
2011-06-01
We report two experiments that investigated the regulation of memory accuracy with a new regulatory mechanism: the plurality option. This mechanism is closely related to the grain-size option but involves control over the number of alternatives contained in an answer rather than the quantitative boundaries of a single answer. Participants were presented with a slideshow depicting a robbery (Experiment 1) or a murder (Experiment 2), and their memory was tested with five-alternative multiple-choice questions. For each question, participants were asked to generate two answers: a single answer consisting of one alternative and a plural answer consisting of the single answer and two other alternatives. Each answer was rated for confidence (Experiment 1) or for the likelihood of being correct (Experiment 2), and one of the answers was selected for reporting. Results showed that participants used the plurality option to regulate accuracy, selecting single answers when their accuracy and confidence were high, but opting for plural answers when they were low. Although accuracy was higher for selected plural than for selected single answers, the opposite pattern was evident for confidence or likelihood ratings. This dissociation between confidence and accuracy for selected answers was the result of marked overconfidence in single answers coupled with underconfidence in plural answers. We hypothesize that these results can be attributed to overly dichotomous metacognitive beliefs about personal knowledge states that cause subjective confidence to be extreme.
Validation of geometric accuracy of Global Land Survey (GLS) 2000 data
Rengarajan, Rajagopalan; Sampath, Aparajithan; Storey, James C.; Choate, Michael J.
2015-01-01
The Global Land Survey (GLS) 2000 data were generated from Geocover™ 2000 data with the aim of producing a global data set of accuracy better than 25 m Root Mean Square Error (RMSE). An assessment and validation of accuracy of GLS 2000 data set, and its co-registration with Geocover™ 2000 data set is presented here. Since the availability of global data sets that have higher nominal accuracy than the GLS 2000 is a concern, the data sets were assessed in three tiers. In the first tier, the data were compared with the Geocover™ 2000 data. This comparison provided a means of localizing regions of higher differences. In the second tier, the GLS 2000 data were compared with systematically corrected Landsat-7 scenes that were obtained in a time period when the spacecraft pointing information was extremely accurate. These comparisons localize regions where the data are consistently off, which may indicate regions of higher errors. The third tier consisted of comparing the GLS 2000 data against higher accuracy reference data. The reference data were the Digital Ortho Quads over the United States, orthorectified SPOT data over Australia, and high accuracy check points obtained using triangulation bundle adjustment of Landsat-7 images over selected sites around the world. The study reveals that the geometric errors in Geocover™ 2000 data have been rectified in GLS 2000 data, and that the accuracy of GLS 2000 data can be expected to be better than 25 m RMSE for most of its constituent scenes.
Testing of the McMath-Pierce 0.8-Meter East Auxiliary Telescope's Acquisition and Slewing Accuracy
NASA Astrophysics Data System (ADS)
Harshaw, Richard; Ray, Jimmy; Prause, Lori; Douglass, David; Branston, Detrick; Genet, Russell M.
2015-09-01
Following mediocre results with pointing tests of the McMath-Pierce 0.8-meter East Auxiliary Telescope in April 2014, a team of astronomers/engineers met again in May 2014 to test other pointing models and assess the telescope's ability to point with enough accuracy to permit the efficient use of speckle interferometry. Results show that accurate collimation is a pre-requisite for such accuracy. Once attained, the telescope performs extremely well.
Increased Accuracy of Ligand Sensing by Receptor Internalization and Lateral Receptor Diffusion
NASA Astrophysics Data System (ADS)
Aquino, Gerardo; Endres, Robert
2010-03-01
Many types of cells can sense external ligand concentrations with cell-surface receptors at extremely high accuracy. Interestingly, ligand-bound receptors are often internalized, a process also known as receptor-mediated endocytosis. While internalization is involved in a vast number of important functions for the life of a cell, it was recently also suggested to increase the accuracy of sensing ligand, as overcounting of the same ligand molecules is reduced. A similar role may be played by receptor diffusion on the cell membrane. Fast, lateral receptor diffusion is known to be relevant in neurotransmission initiated by release of the neurotransmitter glutamate in the synaptic cleft between neurons. By binding ligand and being removed by diffusion from the region of neurotransmitter release, diffusing receptors can reasonably be expected to reduce the local overcounting of the same ligand molecules in the region of signaling. By extending simple ligand-receptor models to out-of-equilibrium thermodynamics, we show that both receptor internalization and lateral diffusion increase the accuracy with which cells can measure ligand concentrations in the external environment. We confirm this with our model and give quantitative predictions for experimental parameter values, which compare favorably to data from real receptors.
Uprated fine guidance sensor study
NASA Technical Reports Server (NTRS)
1984-01-01
Future orbital observatories will require star trackers of extremely high precision. These sensors must maintain high pointing accuracy and pointing stability simultaneously with a low light level signal from a guide star. To establish the fine guidance sensing requirements and to evaluate candidate fine guidance sensing concepts, the Space Telescope Optical Telescope Assembly was used as the reference optical system. The requirements review was separated into three areas: Optical Telescope Assembly (OTA), fine guidance sensing, and astrometry. The results show that the detectors should be installed directly onto the focal surface presented by the optics. This would maximize throughput and minimize pointing stability error by not incorporating any additional optical elements.
NASA Astrophysics Data System (ADS)
Formetta, Giuseppe; Bell, Victoria; Stewart, Elizabeth
2018-02-01
Regional flood frequency analysis is one of the most commonly applied methods for estimating extreme flood events at ungauged sites or locations with short measurement records. It is based on: (i) the definition of a homogeneous group (pooling-group) of catchments, and on (ii) the use of the pooling-group data to estimate flood quantiles. Although many methods to define a pooling-group (pooling schemes, PS) are based on catchment physiographic similarity measures, in the last decade methods based on flood seasonality similarity have been contemplated. In this paper, two seasonality-based PS are proposed and tested both in terms of the homogeneity of the pooling-groups they generate and in terms of the accuracy in estimating extreme flood events. The method has been applied in 420 catchments in Great Britain (considered as both gauged and ungauged) and compared against the current Flood Estimation Handbook (FEH) PS. Results for gauged sites show that, compared to the current PS, the seasonality-based PS performs better both in terms of homogeneity of the pooling-group and in terms of the accuracy of flood quantile estimates. For ungauged locations, a national-scale hydrological model has been used for the first time to quantify flood seasonality. Results show that in 75% of the tested locations the seasonality-based PS provides an improvement in the accuracy of the flood quantile estimates. The remaining 25% were located in highly urbanized, groundwater-dependent catchments. The promising results support the aspiration that large-scale hydrological models complement traditional methods for estimating design floods.
TRMM- and GPM-based precipitation analysis and modelling in the Tropical Andes
NASA Astrophysics Data System (ADS)
Manz, Bastian; Buytaert, Wouter; Zulkafli, Zed; Onof, Christian
2016-04-01
Despite wide-spread applications of satellite-based precipitation products (SPPs) throughout the TRMM-era, the scarcity of ground-based in-situ data (high density gauge networks, rainfall radar) in many hydro-meteorologically important regions, such as tropical mountain environments, has limited our ability to evaluate both SPPs and individual satellite-based sensors as well as accurately model or merge rainfall at high spatial resolutions, particularly with respect to extremes. This has restricted both the understanding of sensor behaviour and performance controls in such regions as well as the accuracy of precipitation estimates and respective hydrological applications ranging from water resources management to early warning systems. Here we report on our recent research into precipitation analysis and modelling using various TRMM and GPM products (2A25, 3B42 and IMERG) in the tropical Andes. In an initial study, 78 high-frequency (10-min) recording gauges in Colombia and Ecuador are used to generate a ground-based validation dataset for evaluation of instantaneous TRMM Precipitation Radar (TPR) overpasses from the 2A25 product. Detection ability, precipitation time-series, empirical distributions and statistical moments are evaluated with respect to regional climatological differences, seasonal behaviour, rainfall types and detection thresholds. Results confirmed previous findings from extra-tropical regions of over-estimation of low rainfall intensities and under-estimation of the highest 10% of rainfall intensities by the TPR. However, in spite of evident regionalised performance differences as a function of local climatological regimes, the TPR provides an accurate estimate of climatological annual and seasonal rainfall means. On this basis, high-resolution (5 km) climatological maps are derived for the entire tropical Andes. The second objective of this work is to improve the local precipitation estimation accuracy and representation of spatial patterns of extreme rainfall probabilities over the region. For this purpose, an ensemble of high-resolution rainfall fields is generated by stochastic simulation using space-time averaged, coarse-scale (daily, 0.25°) satellite-based rainfall inputs (TRMM 3B42/ -RT) and the high-resolution climatological information derived from the TPR as spatial disaggregation proxies. For evaluation and merging, gridded ground-based rainfall fields are generated from gauge data using sequential simulation. Satellite and ground-based ensembles are subsequently merged using an inverse error weighting scheme. The model was tested over a case study in the Colombian Andes with optional coarse-scale bias correction prior to disaggregation and merging. The resulting outputs were assessed in the context of Generalized Extreme Value theory and showed improved estimation of extreme rainfall probabilities compared to the original TMPA inputs. Initial findings using GPM-IMERG inputs are also presented.
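The merging step described above (inverse error weighting of satellite- and gauge-based ensembles) can be illustrated with a minimal sketch; the field names and error variances below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def inverse_error_merge(sat_field, gauge_field, sat_err_var, gauge_err_var):
    """Merge two gridded rainfall fields with inverse-error-variance weights.

    All arguments are arrays (or scalars for the variances) on the same grid;
    the error variances are assumed known, e.g. from ensemble spread.
    """
    w_sat = 1.0 / sat_err_var
    w_gauge = 1.0 / gauge_err_var
    return (w_sat * sat_field + w_gauge * gauge_field) / (w_sat + w_gauge)

# Illustrative use on a small synthetic grid
sat = np.random.gamma(2.0, 5.0, size=(50, 50))             # satellite rainfall (mm/day)
gauge = sat + np.random.normal(0.0, 2.0, size=sat.shape)   # gauge-based field
merged = inverse_error_merge(sat, gauge, sat_err_var=9.0, gauge_err_var=4.0)
```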
The effect of low velocity impact in the strength characteristics of composite materials laminates
NASA Technical Reports Server (NTRS)
Liebowitz, H.
1983-01-01
The nonlinear vibration response of a double cantilevered beam subjected to pulse loading over a central sector is studied. The initial response is generated in detail to ascertain the energetics of the response. The total energy is used as a gauge of the stability and accuracy of the solution. It is shown that to obtain accurate and stable initial solutions an extremely high spatial and temporal resolution is required. This requirement was only evident through an examination of the energy of the system. It is proposed, therefore, to use the total energy of the system as a necessary stability and accuracy criterion for the nonlinear response of conservative systems. The results also demonstrate that even for moderate nonlinearities, membrane forces have a significant influence on the system.
A novel algorithm for detecting active propulsion in wheelchair users following spinal cord injury.
Popp, Werner L; Brogioli, Michael; Leuenberger, Kaspar; Albisser, Urs; Frotzler, Angela; Curt, Armin; Gassert, Roger; Starkey, Michelle L
2016-03-01
Physical activity in wheelchair-bound individuals can be assessed by monitoring their mobility as this is one of the most intense upper extremity activities they perform. Current accelerometer-based approaches for describing wheelchair mobility do not distinguish between self- and attendant-propulsion and hence may overestimate total physical activity. The aim of this study was to develop and validate an inertial measurement unit based algorithm to monitor wheel kinematics and the type of wheelchair propulsion (self- or attendant-) within a "real-world" situation. Different sensor set-ups were investigated, ranging from a high precision set-up including four sensor modules with a relatively short measurement duration of 24 h, to a less precise set-up with only one module attached at the wheel exceeding one week of measurement because the gyroscope of the sensor was turned off. The "high-precision" algorithm distinguished self- and attendant-propulsion with accuracy greater than 93% whilst the long-term measurement set-up showed an accuracy of 82%. The estimation accuracy of kinematic parameters was greater than 97% for both set-ups. The possibility of having different sensor set-ups allows the use of the inertial measurement units as high precision tools for researchers as well as unobtrusive and simple tools for manual wheelchair users. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
Ultrasonographic identification of the anatomical landmarks that define cervical lymph nodes spaces.
Lenghel, Lavinia Manuela; Baciuţ, Grigore; Botar-Jid, Carolina; Vasilescu, Dan; Bojan, Anca; Dudea, Sorin M
2013-03-01
The localization of cervical lymph nodes is extremely important in practice for the positive and differential diagnosis as well as the staging of cervical lymphadenopathies. Ultrasonography represents the first-line imaging method in the diagnosis of cervical lymphadenopathies due to its excellent resolution and high diagnostic accuracy. The present paper aims to illustrate the ultrasonographic identification of the anatomical landmarks used for the definition of cervical lymphatic spaces. The application of standardized views allows a delineation of clear anatomical landmarks and an accurate localization of the cervical lymph nodes.
Numerical investigation of freak waves
NASA Astrophysics Data System (ADS)
Chalikov, D.
2009-04-01
This paper describes the results of more than 4,000 long-term (up to thousands of peak-wave periods) numerical simulations of nonlinear surface gravity waves, performed to investigate the properties and estimate the statistics of extreme ('freak') waves. A method for solving the 2-D potential wave equations based on conformal mapping is applied to simulate wave evolution from different initial conditions defined by JONSWAP and Pierson-Moskowitz spectra. It is shown that nonlinear wave evolution sometimes results in the appearance of very big waves. The shape of freak waves varies within a wide range: some are sharp-crested, others are asymmetric, with a strong forward inclination. Some can be very big, but not steep enough to create dangerous conditions for vessels (though still dangerous for fixed objects). Initial generation of extreme waves can occur merely as a result of group effects, but in some cases the largest wave suddenly starts to grow. The growth is sometimes followed by a strong concentration of wave energy around a peak vertical, taking place over a few peak-wave periods. The process starts from an individual wave in physical space without significant exchange of energy with surrounding waves. Sometimes a crest-to-trough wave height can be as large as nearly three significant wave heights. On average, only one third of all freak waves come to breaking, creating extreme conditions; however, if a wave height approaches three significant wave heights, all of the freak waves break. The most surprising result was the discovery that the probability of non-dimensional freak waves (normalized by significant wave height) is practically independent of the density of wave energy. This does not mean that the statistics of extreme waves do not depend on wave energy; it rather shows that normalization of wave heights by significant wave height is so effective that the statistics of non-dimensional extreme waves tend to be independent of wave energy. It is naive to expect that high-order moments such as skewness and kurtosis can serve as predictors or even indicators of freak waves. First, these characteristics cannot be calculated reliably from a spectrum that is usually determined with low accuracy; such calculations are unstable to slight perturbations of the spectrum. Second, even if the spectrum is determined with high accuracy (for example, calculated with an exact model), the high-order moments cannot serve as predictors, since they change synchronously with variations of extreme wave heights. The appearance of freak waves occurs simultaneously with an increase of the local kurtosis; hence, kurtosis is simply a passive indicator of the same local geometrical properties of the wave field. This effect disappears completely if the spectrum is calculated over a very wide ensemble of waves, in which case the existence of a freak wave is disguised by other, non-freak waves. Third, all high-order moments depend on the spectral representation: they increase with increasing spectral resolution and cut-off frequency. The statistics of non-dimensional waves, as well as the emergence of extreme waves, are innate properties of a nonlinear wave field. A probability function for steep waves has been constructed; such a function can be used to develop an operational forecast of freak waves based on a standard forecast provided by a third-generation wave prediction model (WAVEWATCH or WAM).
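As a point of reference (not stated in the abstract), a wave is commonly classified as a freak or rogue wave when its crest-to-trough height exceeds roughly twice the significant wave height:

```latex
% Common freak-wave criterion; H is the crest-to-trough height,
% H_s the significant wave height
\frac{H}{H_s} \;>\; 2
```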
The end-to-end simulator for the E-ELT HIRES high resolution spectrograph
NASA Astrophysics Data System (ADS)
Genoni, M.; Landoni, M.; Riva, M.; Pariani, G.; Mason, E.; Di Marcantonio, P.; Disseau, K.; Di Varano, I.; Gonzalez, O.; Huke, P.; Korhonen, H.; Li Causi, Gianluca
2017-06-01
We present the design, architecture and results of the End-to-End simulator model of the high resolution spectrograph HIRES for the European Extremely Large Telescope (E-ELT). This system can be used as a tool to characterize the spectrograph by both engineers and scientists. The model allows simulation of the behavior of photons from the scientific object (modeled bearing in mind the main science drivers) to the detector, also considering calibration light sources, and permits evaluation of the different parameters of the spectrograph design. In this paper, we detail the architecture of the simulator and the computational model, which are strongly characterized by the modularity and flexibility that will be crucial in next-generation astronomical observation projects such as the E-ELT, given their high complexity and long design and development times. Finally, we present synthetic images obtained with the current version of the End-to-End simulator based on the E-ELT HIRES requirements (especially the high radial velocity accuracy). Once ingested in the Data Reduction Software (DRS), they will allow verification that the instrument design can achieve the radial velocity accuracy needed by the HIRES science cases.
High-accuracy microassembly by intelligent vision systems and smart sensor integration
NASA Astrophysics Data System (ADS)
Schilp, Johannes; Harfensteller, Mark; Jacob, Dirk; Schilp, Michael
2003-10-01
Innovative production processes and strategies, from batch production to high volume scale, are playing a decisive role in generating microsystems economically. In particular, assembly processes are crucial operations during the production of microsystems. Due to large batch sizes, many microsystems can be produced economically by conventional assembly techniques using specialized and highly automated assembly systems. At the laboratory stage, microsystems are mostly assembled by hand. Between these extremes there is a wide field of small and medium sized batch production, for which common automated solutions are rarely profitable. For assembly processes at these batch sizes, a flexible automated assembly system has been developed at the iwb. It is based on a modular design. Actuators like grippers, dispensers or other process tools can easily be attached thanks to a special tool changing system, so new joining techniques can easily be implemented. A force sensor and a vision system are integrated into the tool head. The automated assembly processes are based on different optical sensors and smart actuators like high-accuracy robots or linear motors. A fiber optic sensor is integrated in the dispensing module to measure, without contact, the clearance between the dispensing needle and the substrate. Robot vision systems using optical pattern recognition are also implemented as modules. In combination with relative positioning strategies, an assembly accuracy of less than 3 μm can be realized. A laser system is used for manufacturing processes like soldering.
Visser, Bart; De Looze, Michiel; De Graaff, Matthijs; Van Dieën, Jaap
2004-02-05
The objective of the present study was to gain insight into the effects of precision demands and mental pressure on the load of the upper extremity. Two computer mouse tasks were used: an aiming and a tracking task. Upper extremity loading was operationalized as the myo-electric activity of the wrist flexor and extensor and of the trapezius descendens muscles and the applied grip- and click-forces on the computer mouse. Performance measures, reflecting the accuracy in both tasks and the clicking rate in the aiming task, indicated that the levels of the independent variables resulted in distinguishable levels of accuracy and work pace. Precision demands had a small effect on upper extremity loading with a significant increase in the EMG-amplitudes (21%) of the wrist flexors during the aiming tasks. Precision had large effects on performance. Mental pressure had substantial effects on EMG-amplitudes with an increase of 22% in the trapezius when tracking and increases of 41% in the trapezius and 45% and 140% in the wrist extensors and flexors, respectively, when aiming. During aiming, grip- and click-forces increased by 51% and 40% respectively. Mental pressure had small effects on accuracy but large effects on tempo during aiming. Precision demands and mental pressure in aiming and tracking tasks with a computer mouse were found to coincide with increased muscle activity in some upper extremity muscles and increased force exertion on the computer mouse. Mental pressure caused significant effects on these parameters more often than precision demands. Precision and mental pressure were found to have effects on performance, with precision effects being significant for all performance measures studied and mental pressure effects for some of them. The results of this study suggest that precision demands and mental pressure increase upper extremity load, with mental pressure effects being larger than precision effects. The possible role of precision demands as an indirect mental stressor in working conditions is discussed.
Akita, Shinsuke; Mitsukawa, Nobuyuki; Kazama, Toshiki; Kuriyama, Motone; Kubota, Yoshitaka; Omori, Naoko; Koizumi, Tomoe; Kosaka, Kentaro; Uno, Takashi; Satoh, Kaneshige
2013-06-01
Lymphoscintigraphy is the gold-standard examination for extremity lymphoedema. Indocyanine green lymphography may be useful for diagnosis as well. We compared the utility of these two examination methods for patients with suspected extremity lymphoedema and for those in whom surgical treatment of lymphoedema was under consideration. A total of 169 extremities with lymphoedema secondary to lymph node dissection and 65 extremities with idiopathic oedema (suspected primary lymphoedema) were evaluated; the utility of indocyanine green lymphography for diagnosis was compared with lymphoscintigraphy. Regression analysis between lymphoscintigraphy type and indocyanine green lymphography stage was conducted in the secondary lymphoedema group. In secondary oedema, the sensitivity of indocyanine green lymphography, compared with lymphoscintigraphy, was 0.972, the specificity was 0.548 and the accuracy was 0.816. When patients with lymphoscintigraphy type I and indocyanine green lymphography stage I were regarded as negative, the sensitivity of the indocyanine green lymphography was 0.978, the specificity was 0.925 and the accuracy was 0.953. There was a significant positive correlation between the lymphoscintigraphy type and the indocyanine green lymphography stage. In idiopathic oedema, the sensitivity of indocyanine green lymphography was 0.974, the specificity was 0.778 and the accuracy was 0.892. In secondary lymphoedema, earlier and less severe dysfunction could be detected by indocyanine green lymphography. Indocyanine green lymphography is recommended to determine patients' suitability for lymphaticovenular anastomosis, because the diagnostic ability of the test and its evaluation capability for disease severity is similar to lymphoscintigraphy but with less invasiveness and a lower cost. To detect primary lymphoedema, indocyanine green lymphography should be used first as a screening examination; when the results are positive, lymphoscintigraphy is useful to obtain further information. Copyright © 2013 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
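The diagnostic-accuracy figures reported above follow the standard definitions; the small sketch below shows how sensitivity, specificity and accuracy are computed from a 2x2 confusion matrix (the counts used here are made up for illustration, not taken from the study).

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and overall accuracy from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives correctly ruled out
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Illustrative counts only
print(diagnostic_metrics(tp=140, fp=14, fn=4, tn=17))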
ULTRA-SHARP nonoscillatory convection schemes for high-speed steady multidimensional flow
NASA Technical Reports Server (NTRS)
Leonard, B. P.; Mokhtari, Simin
1990-01-01
For convection-dominated flows, classical second-order methods are notoriously oscillatory and often unstable. For this reason, many computational fluid dynamicists have adopted various forms of (inherently stable) first-order upwinding over the past few decades. Although it is now well known that first-order convection schemes suffer from serious inaccuracies attributable to artificial viscosity or numerical diffusion under high convection conditions, these methods continue to enjoy widespread popularity for numerical heat transfer calculations, apparently due to a perceived lack of viable high accuracy alternatives. But alternatives are available. For example, nonoscillatory methods used in gasdynamics, including currently popular TVD schemes, can be easily adapted to multidimensional incompressible flow and convective transport. This, in itself, would be a major advance for numerical convective heat transfer, for example. But, as is shown, second-order TVD schemes form only a small, overly restrictive, subclass of a much more universal, and extremely simple, nonoscillatory flux-limiting strategy which can be applied to convection schemes of arbitrarily high order accuracy, while requiring only a simple tridiagonal ADI line-solver, as used in the majority of general purpose iterative codes for incompressible flow and numerical heat transfer. The new universal limiter and associated solution procedures form the so-called ULTRA-SHARP alternative for high resolution nonoscillatory multidimensional steady state high speed convective modelling.
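To illustrate the general idea of flux limiting on which such schemes are built, here is a minimal sketch of a generic TVD flux-limited upwind scheme for 1-D linear advection; it is not the ULTRA-SHARP universal limiter itself, and the limiter, grid and Courant number are illustrative choices.

```python
import numpy as np

def van_leer(r):
    # A classical smooth flux limiter; the ULTRA-SHARP universal limiter differs.
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_limited(u, courant, nsteps):
    """1-D linear advection (positive velocity, periodic domain): first-order upwind
    plus a limited second-order (Lax-Wendroff-type) correction at cell faces."""
    for _ in range(nsteps):
        u_m1 = np.roll(u, 1)    # u_{i-1}
        u_p1 = np.roll(u, -1)   # u_{i+1}
        denom = u_p1 - u
        r = np.where(np.abs(denom) > 1e-12, (u - u_m1) / denom, 0.0)
        face = u + 0.5 * van_leer(r) * (1.0 - courant) * (u_p1 - u)  # value at face i+1/2
        u = u - courant * (face - np.roll(face, 1))                  # conservative update
    return u

# Advect a square pulse without spurious oscillations
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
u1 = advect_limited(u0.copy(), courant=0.4, nsteps=250)
```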
High-Precision Distribution of Highly Stable Optical Pulse Trains with 8.8 × 10−19 instability
Ning, B.; Zhang, S. Y.; Hou, D.; Wu, J. T.; Li, Z. B.; Zhao, J. Y.
2014-01-01
The high-precision distribution of optical pulse trains via fibre links has had a considerable impact in many fields. In most published work, the accuracy is still fundamentally limited by unavoidable noise sources, such as thermal and shot noise from conventional photodiodes and thermal noise from mixers. Here, we demonstrate a new high-precision timing distribution system that uses a highly precise phase detector to markedly reduce the effect of these limitations. Instead of using photodiodes and microwave mixers, we use several fibre Sagnac-loop-based optical-microwave phase detectors (OM-PDs) to achieve optical-electrical conversion and phase measurements, thereby suppressing these noise sources and achieving ultra-high accuracy. The results of a distribution experiment using a 10-km fibre link indicate that our system exhibits a residual instability of 2.0 × 10−15 at 1 s and 8.8 × 10−19 at 40,000 s and an integrated timing jitter as low as 3.8 fs in a bandwidth of 1 Hz to 100 kHz. This low instability and timing jitter make it possible for our system to be used in the distribution of optical-clock signals or in applications that require extremely accurate frequency/time synchronisation. PMID:24870442
Effect of high altitude on blood glucose meter performance.
Fink, Kenneth S; Christensen, Dale B; Ellsworth, Allan
2002-01-01
Participation in high-altitude wilderness activities may expose persons to extreme environmental conditions, and for those with diabetes mellitus, euglycemia is important to ensure safe travel. We conducted a field assessment of the precision and accuracy of seven commonly used blood glucose meters while mountaineering on Mount Rainier, located in Washington State (elevation 14,410 ft). At various elevations each climber-subject used the randomly assigned device to measure the glucose level of capillary blood and three different concentrations of standardized control solutions, and a venous sample was also collected for later glucose analysis. Ordinary least squares regression was used to assess the effect of elevation and of other environmental potential covariates on the precision and accuracy of blood glucose meters. Elevation affects glucometer precision (p = 0.08), but becomes less significant (p = 0.21) when adjusted for temperature and relative humidity. The overall effect of elevation was to underestimate glucose levels by approximately 1-2% (unadjusted) for each 1,000 ft gain in elevation. Blood glucose meter accuracy was affected by elevation (p = 0.03), temperature (p < 0.01), and relative humidity (p = 0.04) after adjustment for the other variables. The interaction between elevation and relative humidity had a meaningful but not statistically significant effect on accuracy (p = 0.07). Thus, elevation, temperature, and relative humidity affect blood glucose meter performance, and elevated glucose levels are more greatly underestimated at higher elevations. Further research will help to identify which blood glucose meters are best suited for specific environments.
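As a rough worked example of the reported unadjusted effect size (the numbers below simply apply the 1-2% per 1,000 ft figure; they are not measurements from the study):

```python
true_glucose = 100.0      # mg/dL, assumed true value
elevation_ft = 14000.0    # roughly the summit elevation studied
for pct_per_1000ft in (0.01, 0.02):
    bias = pct_per_1000ft * elevation_ft / 1000.0
    print(f"~{bias:.0%} underestimate -> meter reads about "
          f"{true_glucose * (1 - bias):.0f} mg/dL")
# prints roughly 86 mg/dL (1%/1,000 ft) and 72 mg/dL (2%/1,000 ft)
```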
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture extreme ultraviolet scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking. OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs that are more tolerant to mask errors.
The science of visual analysis at extreme scale
NASA Astrophysics Data System (ADS)
Nowell, Lucy T.
2011-01-01
Driven by market forces and spanning the full spectrum of computational devices, computer architectures are changing in ways that present tremendous opportunities and challenges for data analysis and visual analytic technologies. Leadership-class high performance computing systems will have as many as a million cores by 2020 and support 10 billion-way concurrency, while laptop computers are expected to have as many as 1,000 cores by 2015. At the same time, data of all types are increasing exponentially and automated analytic methods are essential for all disciplines. Many existing analytic technologies do not scale to make full use of current platforms and fewer still are likely to scale to the systems that will be operational by the end of this decade. Furthermore, on the new architectures and for data at extreme scales, validating the accuracy and effectiveness of analytic methods, including visual analysis, will be increasingly important.
Computer aided manual validation of mass spectrometry-based proteomic data.
Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M
2013-06-15
Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.
Automatic localization of cerebral cortical malformations using fractal analysis.
De Luca, A; Arrigoni, F; Romaniello, R; Triulzi, F M; Peruzzo, D; Bertoldo, A
2016-08-21
Malformations of cortical development (MCDs) encompass a variety of brain disorders affecting the normal development and organization of the brain cortex. The relatively low incidence and the extreme heterogeneity of these disorders hamper the application of classical group level approaches for the detection of lesions. Here, we present a geometrical descriptor for a voxel level analysis based on fractal geometry, then define two similarity measures to detect the lesions at single subject level. The pipeline was applied to 15 normal children and nine pediatric patients affected by MCDs following two criteria, maximum accuracy (WACC) and minimization of false positives (FPR), and proved that our lesion detection algorithm is able to detect and locate abnormalities of the brain cortex with high specificity (WACC = 85%, FPR = 96%), sensitivity (WACC = 83%, FPR = 63%) and accuracy (WACC = 85%, FPR = 90%). The combination of global and local features proves to be effective, making the algorithm suitable for the detection of both focal and diffused malformations. Compared to other existing algorithms, this method shows higher accuracy and sensitivity.
Automatic localization of cerebral cortical malformations using fractal analysis
NASA Astrophysics Data System (ADS)
De Luca, A.; Arrigoni, F.; Romaniello, R.; Triulzi, F. M.; Peruzzo, D.; Bertoldo, A.
2016-08-01
Malformations of cortical development (MCDs) encompass a variety of brain disorders affecting the normal development and organization of the brain cortex. The relatively low incidence and the extreme heterogeneity of these disorders hamper the application of classical group level approaches for the detection of lesions. Here, we present a geometrical descriptor for a voxel level analysis based on fractal geometry, then define two similarity measures to detect the lesions at single subject level. The pipeline was applied to 15 normal children and nine pediatric patients affected by MCDs following two criteria, maximum accuracy (WACC) and minimization of false positives (FPR), and proved that our lesion detection algorithm is able to detect and locate abnormalities of the brain cortex with high specificity (WACC = 85%, FPR = 96%), sensitivity (WACC = 83%, FPR = 63%) and accuracy (WACC = 85%, FPR = 90%). The combination of global and local features proves to be effective, making the algorithm suitable for the detection of both focal and diffused malformations. Compared to other existing algorithms, this method shows higher accuracy and sensitivity.
Radar QPE for hydrological design: Intensity-Duration-Frequency curves
NASA Astrophysics Data System (ADS)
Marra, Francesco; Morin, Efrat
2015-04-01
Intensity-duration-frequency (IDF) curves are widely used in flood risk management since they provide an easy link between the characteristics of a rainfall event and the probability of its occurrence. They are estimated by analyzing the extreme values of rainfall records, usually based on rain gauge data. This point-based approach raises two issues: first, hydrological design applications generally need IDF information for the entire catchment rather than for a point; second, the representativeness of point measurements decreases with the distance from the measurement location, especially in regions characterized by steep climatological gradients. Weather radar, providing high resolution distributed rainfall estimates over wide areas, has the potential to overcome these issues. Two objections usually restrain this approach: (i) the short length of data records and (ii) the reliability of quantitative precipitation estimation (QPE) of the extremes. This work explores the potential use of weather radar estimates for the identification of IDF curves by means of a long radar archive and a combined physical and quantitative adjustment of radar estimates. The Shacham weather radar, located in the eastern Mediterranean area (Tel Aviv, Israel), has archived data since 1990, providing rainfall estimates for 23 years over a region characterized by strong climatological gradients. Radar QPE is obtained by correcting the effects of pointing errors, ground echoes, beam blockage, attenuation and vertical variations of reflectivity. Quantitative accuracy is then ensured with a range-dependent bias adjustment technique, and the reliability of radar QPE is assessed by comparison with gauge measurements. IDF curves are derived from the radar data using the annual extremes method and compared with gauge-based curves. Results from 14 study cases will be presented, focusing on the effects of record length and QPE accuracy, exploring the potential application of radar IDF curves for ungauged locations and providing insights on the use of radar QPE for hydrological design studies.
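The annual extremes method mentioned above amounts to fitting an extreme-value distribution to the annual maxima for each duration and reading off quantiles; a minimal sketch (using a GEV fit, which is one common choice and an assumption here, with synthetic data) could look like:

```python
import numpy as np
from scipy.stats import genextreme

def idf_quantile(annual_maxima_mm, return_period_years):
    """Rainfall depth for a given return period from a GEV fit to annual maxima."""
    shape, loc, scale = genextreme.fit(annual_maxima_mm)
    non_exceedance = 1.0 - 1.0 / return_period_years
    return genextreme.ppf(non_exceedance, shape, loc=loc, scale=scale)

# Illustrative: 23 years of annual-maximum 1-h rainfall at one radar pixel (synthetic)
annual_max_1h = np.random.gumbel(loc=25.0, scale=8.0, size=23)
print(idf_quantile(annual_max_1h, return_period_years=50))
```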
NASA Astrophysics Data System (ADS)
Yoon, Sunkwon; Jang, Sangmin; Park, Kyungwon
2017-04-01
Extreme weather due to a changing climate is a major source of water-related disasters such as flooding and inundation, and the associated damage is expected to accelerate worldwide. To prevent water-related disasters and mitigate their damage in urban areas in the future, we developed a multi-sensor based real-time discharge forecasting system using remotely sensed data such as radar and satellite observations. We used the Communication, Ocean and Meteorological Satellite (COMS) and Korea Meteorological Administration (KMA) weather radar for quantitative precipitation estimation. The Automatic Weather System (AWS) and the McGill Algorithm for Precipitation Nowcasting by Lagrangian Extrapolation (MAPLE) were used for verification of rainfall accuracy. The Tropical Z-R relationship (Z = 32R^1.65) was applied as the optimal Z-R relation, and the accuracy was confirmed to improve for extreme rainfall events. In addition, the performance of the blended multi-sensor rainfall estimate improved for heavy rainfall of 60 mm/h and above. Moreover, urban discharge was forecast using the Storm Water Management Model (SWMM). Several statistical methods were used to assess the model simulation against observed discharge; in terms of the correlation coefficient and r-squared, observed and forecasted discharges were highly correlated. Based on this study, we demonstrated the feasibility of a real-time urban discharge forecasting system using remotely sensed data and its use for real-time flood warning. Acknowledgement: This research was supported by a grant (13AWMP-B066744-01) from the Advanced Water Management Research Program (AWMP) funded by the Ministry of Land, Infrastructure and Transport (MOLIT) of the Korean government.
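The quoted Tropical Z-R relation can be inverted to turn radar reflectivity into rain rate; a minimal sketch (assuming reflectivity is given in dBZ and converted to linear units first) is:

```python
def rain_rate_from_dbz(dbz, a=32.0, b=1.65):
    """Invert a Z-R power law Z = a * R**b (here the Tropical relation Z = 32 R^1.65).

    dbz : radar reflectivity in dBZ; returns rain rate R in mm/h.
    """
    z_linear = 10.0 ** (dbz / 10.0)       # dBZ -> linear reflectivity (mm^6 m^-3)
    return (z_linear / a) ** (1.0 / b)

print(rain_rate_from_dbz(45.0))  # roughly 65 mm/h for 45 dBZ with these coefficients
```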
Thermodynamics of Computational Copying in Biochemical Systems
NASA Astrophysics Data System (ADS)
Ouldridge, Thomas E.; Govern, Christopher C.; ten Wolde, Pieter Rein
2017-04-01
Living cells use readout molecules to record the state of receptor proteins, similar to measurements or copies in typical computational devices. But is this analogy rigorous? Can cells be optimally efficient, and if not, why? We show that, as in computation, a canonical biochemical readout network generates correlations; extracting no work from these correlations sets a lower bound on dissipation. For general input, the biochemical network cannot reach this bound, even with arbitrarily slow reactions or weak thermodynamic driving. It faces an accuracy-dissipation trade-off that is qualitatively distinct from and worse than implied by the bound, and more complex steady-state copy processes cannot perform better. Nonetheless, the cost remains close to the thermodynamic bound unless accuracy is extremely high. Additionally, we show that biomolecular reactions could be used in thermodynamically optimal devices under exogenous manipulation of chemical fuels, suggesting an experimental system for testing computational thermodynamics.
X-ray free-electron laser studies of dense plasmas
NASA Astrophysics Data System (ADS)
Vinko, Sam M.
2015-10-01
The high peak brightness of X-ray free-electron lasers (FELs), coupled with X-ray optics enabling the focusing of pulses down to sub-micron spot sizes, provides an attractive route to generating high energy-density systems on femtosecond time scales, via the isochoric heating of solid samples. Once created, the fundamental properties of these plasmas can be studied with unprecedented accuracy and control, providing essential experimental data needed to test and benchmark commonly used theoretical models and assumptions in the study of matter in extreme conditions, as well as to develop new predictive capabilities. Current advances in isochoric heating and spectroscopic plasma studies on X-ray FELs are reviewed and future research directions and opportunities discussed.
Material Behavior At The Extreme Cutting Edge In Bandsawing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarwar, Mohammed; Haider, Julfikar; Persson, Martin
2011-01-17
In recent years, bandsawing has been widely accepted as a favourite option for metal cut-off operations where accuracy of cut, good surface finish, low kerf loss, long tool life and high material removal rate are required. Material removal by multipoint cutting tools such as bandsaws is a complex mechanism owing to the geometry of the bandsaw tooth (e.g., limited gullet size, tooth setting etc.) and the layer of material removed (undeformed chip thickness or depth of cut, 5 μm-50 μm) being smaller than or equal to the cutting edge radius (5 μm-15 μm). This situation can lead to inefficient material removal in bandsawing. Most research work has concentrated on the mechanics of material removal by single-point cutting tools such as lathe tools; such efforts are very limited for multipoint cutting tools such as bandsaws. This paper presents a fundamental understanding of the material behaviour at the extreme cutting edge of a bandsaw tooth, which would help in designing and manufacturing blades with higher cutting performance and longer life. High speed photography has been carried out to analyse the material removal process at the extreme cutting edge of a bandsaw tooth. A geometric model of chip formation mechanisms, based on evidence found during the high speed photography and quick-stop experiments, is presented. Wear modes and mechanisms in bimetal and carbide-tipped bandsaw teeth are also presented.
Comparison of Satellite Surveying to Traditional Surveying Methods for the Resources Industry
NASA Astrophysics Data System (ADS)
Osborne, B. P.; Osborne, V. J.; Kruger, M. L.
Modern ground-based survey methods involve detailed survey, which provides three-space co-ordinates for surveyed points to a high level of accuracy. The instruments are operated by surveyors, who process the raw results to create survey location maps for the subject of the survey. Such surveys are conducted for a location or region and referenced to the Earth's global co-ordinate system with global positioning system (GPS) positioning. Due to this referencing, the survey is only as accurate as the GPS reference system. Satellite remote sensing surveys utilise satellite imagery that has been processed using commercial geographic information system software. Three-space co-ordinate maps are generated, with an accuracy determined by the datum position accuracy and the optical resolution of the satellite platform. This paper presents a case study which compares topographic surveying undertaken by traditional survey methods with satellite surveying for the same location. The purpose of this study is to assess the viability of satellite remote sensing for surveying in the resources industry. The case study involves a topographic survey of a dune field for a prospective mining project area in Pakistan. This site has been surveyed using modern surveying techniques and the results are compared to a satellite survey performed on the same area. Analysis of the results from the traditional survey and from the satellite survey involved a comparison of the derived spatial co-ordinates from each method. In addition, comparisons have been made of costs and turnaround time for both methods. The results of this application of remote sensing are of particular interest for surveys in areas with remote and extreme environments, weather extremes, political unrest, or poor travel links, which are commonly associated with mining projects. Such areas frequently suffer from language barriers, poor onsite technical support and limited resources.
Investigation on the Practicality of Developing Reduced Thermal Models
NASA Technical Reports Server (NTRS)
Lombardi, Giancarlo; Yang, Kan
2015-01-01
Throughout the spacecraft design and development process, detailed instrument thermal models are created to simulate their on-orbit behavior and to ensure that they do not exceed any thermal limits. These detailed models, while generating highly accurate predictions, can sometimes lead to long simulation run times, especially when integrated with a spacecraft observatory model. Therefore, reduced models containing less detail are typically produced in tandem with the detailed models so that results may be more readily available, albeit less accurate. In the current study, both reduced and detailed instrument models are integrated with their associated spacecraft bus models to examine the impact of instrument model reduction on run time and accuracy. Preexisting instrument and bus thermal model pairs from several projects were used to determine trends between detailed and reduced thermal models; namely, the Mirror Optical Bench (MOB) on the Gravity and Extreme Magnetism Small Explorer (GEMS) spacecraft, the Advanced Topographic Laser Altimeter System (ATLAS) on the Ice, Cloud, and land Elevation Satellite 2 (ICESat-2), and the Neutral Mass Spectrometer (NMS) on the Lunar Atmosphere and Dust Environment Explorer (LADEE). Hot and cold cases were run for each model to capture the behavior of the models at both thermal extremes. It was found that, though decreasing the number of nodes from a detailed to a reduced model reduced the run time, the time savings was not large, nor was the relationship between the percentage of nodes reduced and the time saved linear. However, significant losses in accuracy were observed with greater model reduction. It was found that while reduced models are useful in decreasing run time, there exists a threshold of reduction beyond which the loss in accuracy outweighs the benefit of reduced model runtime.
Autonomous Navigation With Ground Station One-Way Forward-Link Doppler Data
NASA Technical Reports Server (NTRS)
Horstkamp, G. M.; Niklewski, D. J.; Gramling, C. J.
1996-01-01
The National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) has spent several years developing operational onboard navigation systems (ONS's) to provide real time autonomous, highly accurate navigation products for spacecraft using NASA's space and ground communication systems. The highly successful Tracking and Data Relay Satellite (TDRSS) ONS (TONS) experiment on the Explorer Platform/Extreme Ultraviolet (EP/EUV) spacecraft, launched on June 7, 1992, flight demonstrated the ONS for high accuracy navigation using TDRSS forward link communication services. In late 1994, a similar ONS experiment was performed using EP/EUV flight hardware (the ultrastable oscillator and Doppler extractor card in one of the TDRSS transponders) and ground system software to demonstrate the feasibility of using an ONS with ground station forward link communication services. This paper provides a detailed evaluation of ground station-based ONS performance of data collected over a 20 day period. The ground station ONS (GONS) experiment results are used to project the expected performance of an operational system. The GONS processes Doppler data derived from scheduled ground station forward link services using a sequential estimation algorithm enhanced by a sophisticated process noise model to provide onboard orbit and frequency determination. Analysis of the GONS experiment performance indicates that real time onboard position accuracies of better than 125 meters (1 sigma) are achievable with two or more 5-minute contacts per day for the EP/EUV 525 kilometer altitude, 28.5 degree inclination orbit. GONS accuracy is shown to be a function of the fidelity of the onboard propagation model, the frequency/geometry of the tracking contacts, and the quality of the tracking measurements. GONS provides a viable option for using autonomous navigation to reduce operational costs for upcoming spacecraft missions with moderate position accuracy requirements.
Soldier Performance and Mood States Following a Strenuous Road March
1990-01-01
13) and the more intense the exercise, the greater the elevation (14). Reductions in heart rate through the use of beta-blockers can substantially...extreme physical fatigue. Shooting accuracy degraded severely under these conditions. An increase in body tremors due to fatigue or elevated post...exercise (9) and this may affect shooting accuracy. Muscle tremors increase after brief or prolonged muscular contractions (10, 11) and such tremors
Achieving accuracy in first-principles calculations at extreme temperature and pressure
NASA Astrophysics Data System (ADS)
Mattsson, Ann; Wills, John
2013-06-01
First-principles calculations are increasingly used to provide EOS data at pressures and temperatures where experimental data is difficult or impossible to obtain. The lack of experimental data, however, also precludes validation of the calculations in those regimes. Factors influencing the accuracy of first-principles data include theoretical approximations, and computational approximations used in implementing and solving the underlying equations. The first category includes approximate exchange-correlation functionals and wave equations simplifying the Dirac equation. In the second category are, e.g., basis completeness and pseudo-potentials. While the first category is extremely hard to assess without experimental data, inaccuracies of the second type should be well controlled. We are using two rather different electronic structure methods (VASP and RSPt) to make explicit the requirements for accuracy of the second type. We will discuss the VASP Projector Augmented Wave potentials, with examples for Li and Mo. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Force Myography to Control Robotic Upper Extremity Prostheses: A Feasibility Study
Cho, Erina; Chen, Richard; Merhi, Lukas-Karim; Xiao, Zhen; Pousett, Brittany; Menon, Carlo
2016-01-01
Advancement in assistive technology has led to the commercial availability of multi-dexterous robotic prostheses for the upper extremity. The relatively low performance of the currently used techniques to detect the intention of the user to control such advanced robotic prostheses, however, limits their use. This article explores the use of force myography (FMG) as a potential alternative to the well-established surface electromyography. Specifically, the use of FMG to control different grips of a commercially available robotic hand, Bebionic3, is investigated. Four male transradially amputated subjects participated in the study, and a protocol was developed to assess the prediction accuracy of 11 grips. Different combinations of grips were examined, ranging from 6 up to 11 grips. The results indicate that it is possible to classify six primary grips important in activities of daily living using FMG with an accuracy of above 70% in the residual limb. Additional strategies to increase classification accuracy, such as using the available modes on the Bebionic3, allowed results to improve up to 88.83 and 89.00% for opposed thumb and non-opposed thumb modes, respectively. PMID:27014682
Quasi-model free control for the post-capture operation of a non-cooperative target
NASA Astrophysics Data System (ADS)
She, Yuchen; Sun, Jun; Li, Shuang; Li, Wendan; Song, Ting
2018-06-01
This paper investigates a quasi-model free control (QMFC) approach for the post-capture control of a non-cooperative space object. The innovation of this paper lies in the following three aspects, which correspond to the three challenges presented in the mission scenario. First, an excitation-response mapping search strategy is developed based on the linearization of the system in terms of a set of parameters, which is efficient in handling the combined spacecraft with a high coupling effect on the inertia matrix. Second, a virtual coordinate system is proposed to efficiently compute the center of mass (COM) of the combined system, which improves the COM tracking efficiency for time-varying COM positions. Third, a linear online corrector is built to reduce the control error to further improve the control accuracy, which helps control the tracking mode within the combined system's time-varying inertia matrix. Finally, simulation analyses show that the proposed control framework is able to realize combined spacecraft post-capture control in extremely unfavorable conditions with high control accuracy.
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification
Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references. PMID:29581722
A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification.
Yu, Yunlong; Liu, Fuxian
2018-01-01
One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractor to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture gets a significant classification accuracy improvement over all state-of-the-art references.
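Since the classifier named above is an extreme learning machine (ELM), a minimal sketch of an ELM on top of pre-extracted (e.g., fused CNN) feature vectors may be useful; the hidden-layer size, regularization value and feature dimensions are illustrative assumptions, not the authors' settings.

```python
import numpy as np

class ELMClassifier:
    """Single-hidden-layer extreme learning machine: random input weights,
    output weights solved in closed form by regularized least squares."""

    def __init__(self, n_hidden=1000, reg=1e-3, seed=0):
        self.n_hidden, self.reg, self.rng = n_hidden, reg, np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y, n_classes):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        T = np.eye(n_classes)[y]                       # one-hot targets
        A = H.T @ H + self.reg * np.eye(self.n_hidden)
        self.beta = np.linalg.solve(A, H.T @ T)        # ridge solution for output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Illustrative use on fused deep features (here random stand-ins)
X_train, y_train = np.random.randn(200, 4096), np.random.randint(0, 21, 200)
clf = ELMClassifier().fit(X_train, y_train, n_classes=21)
pred = clf.predict(np.random.randn(10, 4096))
```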
Lefebvre, Francine; Gagnon, Marie-Michèle; Luu, Thuy Mai; Lupien, Geneviève; Dorval, Véronique
2016-03-01
Extremely preterm infants are at high-risk for neurodevelopmental disabilities. The Movement Assessment of Infants (MAI) and the Alberta Infant Motor Scale (AIMS) have been designed to predict outcome with modest accuracy with the Bayley-I or Bayley-II. To examine and compare the predictive validity of the MAI and AIMS in determining neurodevelopmental outcome with the Bayley-III. Retrospective cohort study of 160 infants born at ≤ 28 weeks gestation. At their corrected age, infants underwent the MAI at 4 months, the AIMS at 4 and 10-12 months, and the Bayley-III and neurological examination at 18 months. Sensitivity and specificity were calculated. Infants had a mean gestation of 26.3 ± 1.4 weeks and birth weight of 906 ± 207 g. A high-risk score (≥ 14) for adverse outcome was obtained by 57% of infants on the MAI. On the AIMS, a high-risk score (<5th percentile) was obtained by 56% at 4 months and 30% at 10-12 months. At 18 months, infants with low-risk scores on either the MAI or AIMS had higher cognitive, language, and motor Bayley-III scores than those with high-risk scores. They were less likely to have severe neurodevelopmental impairment. To predict Bayley-III scores <70, sensitivity and specificity were 91% and 49%, respectively, for the MAI and 78% and 48%, respectively, for the AIMS. Extremely preterm infants with low-risk MAI at 4 months or AIMS scores at 4 or 10-12 months had better outcomes than those with high-risk scores. However, both tests lack specificity to predict individual neurodevelopmental status at 18 months. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Focus drive mechanism for the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Devine, E. J.; Dennis, T. B., Jr.
1977-01-01
A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small (.0004 in.) and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low expansion quartz mirror with minimum distortion are also given.
Barausse, Enrico; Yunes, Nicolás; Chamberlain, Katie
2016-06-17
The aLIGO detection of the black-hole binary GW150914 opens a new era for probing extreme gravity. Many gravity theories predict the emission of dipole gravitational radiation by binaries. This is excluded to high accuracy in binary pulsars, but entire classes of theories predict this effect predominantly (or only) in binaries involving black holes. Joint observations of GW150914-like systems by aLIGO and eLISA will improve bounds on dipole emission from black-hole binaries by 6 orders of magnitude relative to current constraints, provided that eLISA is not dramatically descoped.
Production and detection of atomic hexadecapole at Earth's magnetic field.
Acosta, V M; Auzinsh, M; Gawlik, W; Grisins, P; Higbie, J M; Jackson Kimball, D F; Krzemien, L; Ledbetter, M P; Pustelny, S; Rochester, S M; Yashchuk, V V; Budker, D
2008-07-21
Optical magnetometers measure magnetic fields with extremely high precision and without cryogenics. However, at geomagnetic fields, important for applications from landmine removal to archaeology, they suffer from nonlinear Zeeman splitting, leading to systematic dependence on sensor orientation. We present experimental results on a method of eliminating this systematic error, using the hexadecapole atomic polarization moment. In particular, we demonstrate selective production of the atomic hexadecapole moment at Earth's magnetic field and verify its immunity to nonlinear Zeeman splitting. This technique promises to eliminate directional errors in all-optical atomic magnetometers, potentially improving their measurement accuracy by several orders of magnitude.
Optimal control theory (OWEM) applied to a helicopter in the hover and approach phase
NASA Technical Reports Server (NTRS)
Born, G. J.; Kai, T.
1975-01-01
A major difficulty in the practical application of linear-quadratic regulator theory is how to choose the weighting matrices in quadratic cost functions. A control system design with optimal weighting matrices was applied to a helicopter in the hover and approach phase. The weighting matrices were calculated to extremize the closed-loop total system damping subject to constraints on the determinants. The extremization is really a minimization of the effects of disturbances, and is interpreted as a compromise between the generalized system accuracy and the generalized system response speed. The trade-off between the accuracy and the response speed is adjusted by a single parameter, the ratio of determinants. By this approach an objective measure can be obtained for the design of a control system. The measure is to be determined by the system requirements.
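For readers less familiar with the underlying machinery, here is a minimal sketch of a standard continuous-time LQR design (choose Q and R, solve the Riccati equation, form the gain); it illustrates the role of the weighting matrices but is not the determinant-ratio procedure described above, and the system matrices are made up, not a helicopter model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy linear model dx/dt = A x + B u (illustrative values only)
A = np.array([[0.0, 1.0], [-0.5, -0.2]])
B = np.array([[0.0], [1.0]])

# Weighting matrices for the quadratic cost J = integral of (x'Qx + u'Ru) dt
Q = np.diag([10.0, 1.0])   # penalize state deviations
R = np.array([[0.1]])      # penalize control effort

P = solve_continuous_are(A, B, Q, R)   # algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal state-feedback gain, u = -K x
closed_loop_poles = np.linalg.eigvals(A - B @ K)
```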
An Evaluation of Attitude-Independent Magnetometer-Bias Determination Methods
NASA Technical Reports Server (NTRS)
Hashmall, J. A.; Deutschmann, Julie
1996-01-01
Although several algorithms now exist for determining three-axis magnetometer (TAM) biases without the use of attitude data, there are few studies on the effectiveness of these methods, especially in comparison with attitude dependent methods. This paper presents the results of a comparison of three attitude independent methods and an attitude dependent method for computing TAM biases. The comparisons are based on in-flight data from the Extreme Ultraviolet Explorer (EUVE), the Upper Atmosphere Research Satellite (UARS), and the Compton Gamma Ray Observatory (GRO). The effectiveness of an algorithm is measured by the accuracy of attitudes computed using biases determined with that algorithm. The attitude accuracies are determined by comparison with known, extremely accurate, star-tracker-based attitudes. In addition, the effect of knowledge of calibration parameters other than the biases on the effectiveness of all bias determination methods is examined.
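One common attitude-independent formulation (given here only as an illustrative sketch, not necessarily one of the specific algorithms compared in the study) exploits the fact that the magnitude of the measured field, once the bias is removed, should match the magnitude of the reference field from a geomagnetic model; treating |b|^2 as an extra unknown makes the problem linear.

```python
import numpy as np

def magnetometer_bias(B_meas, ref_magnitude):
    """Attitude-independent TAM bias estimate from |B_meas - b|^2 ~= |B_ref|^2.

    B_meas        : (N, 3) measured field vectors in the body frame
    ref_magnitude : (N,) magnitudes of the reference (model) field
    Treats s = |b|^2 as a fourth unknown so the least-squares problem is linear.
    """
    z = np.sum(B_meas**2, axis=1) - ref_magnitude**2
    A = np.hstack([2.0 * B_meas, -np.ones((B_meas.shape[0], 1))])
    x, *_ = np.linalg.lstsq(A, z, rcond=None)
    return x[:3]                      # estimated bias vector

# Synthetic check: recover a known bias from noisy data
rng = np.random.default_rng(1)
true_field = rng.normal(size=(500, 3)) * 30000.0          # nT, arbitrary directions
bias = np.array([300.0, -150.0, 80.0])
meas = true_field + bias + rng.normal(scale=5.0, size=(500, 3))
print(magnetometer_bias(meas, np.linalg.norm(true_field, axis=1)))
```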
Plotnikov, Nikolay V
2014-08-12
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
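The free-energy perturbation and linear-response averaging steps mentioned in the abstract refer to standard estimators; for reference, their textbook forms are written below (these are generic expressions, not the authors' exact multistep formulas).

```latex
% Standard single-step free-energy perturbation (Zwanzig) estimator and the
% linear response approximation (LRA) average often used in its place; these
% are the textbook forms underlying the multistep scheme described above.
\Delta A_{a\rightarrow b} \;=\; -\,k_{B}T\,
  \ln\Big\langle e^{-\left(U_{b}-U_{a}\right)/k_{B}T}\Big\rangle_{a}
\qquad
\Delta A_{a\rightarrow b}\;\approx\;\tfrac{1}{2}\Big(
  \big\langle U_{b}-U_{a}\big\rangle_{a}
 +\big\langle U_{b}-U_{a}\big\rangle_{b}\Big)
```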
2015-01-01
Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force. PMID:25136268
Absence of sex differences in mental rotation performance in autism spectrum disorder.
Rohde, Melanie S; Georgescu, Alexandra L; Vogeley, Kai; Fimmers, Rolf; Falter-Wagner, Christine M
2017-08-01
Mental rotation is one of the most investigated cognitive functions showing consistent sex differences. The 'Extreme Male Brain' hypothesis attributes the cognitive profile of individuals with autism spectrum disorder to an extreme version of the male cognitive profile. Previous investigations focused almost exclusively on males with autism spectrum disorder, with only limited implications for affected females. This study is the first to test a sample of 12 female adults with high-functioning autism spectrum disorder compared to 14 males with autism spectrum disorder, 12 typically developing females and 14 typically developing males, employing a computerised version of the mental rotation test. Reaction time and accuracy served as dependent variables. Their linear relationship with degree of rotation allows separation of rotational aspects of the task, indicated by slopes of the psychometric function, and non-rotational aspects, indicated by intercepts of the psychometric function. While the typical and expected sex difference for rotational task aspects was corroborated in typically developing individuals, no comparable sex difference was found in autism spectrum disorder individuals. Autism spectrum disorder and typically developing individuals did not differ in mental rotation performance. This finding does not support the extreme male brain hypothesis of autism.
Huang, Weilin; Wang, Runqiu; Li, Huijian; Chen, Yangkang
2017-09-20
The microseismic method is an essential technique for monitoring the dynamic status of hydraulic fracturing during the development of unconventional reservoirs. However, one of the challenges in microseismic monitoring is that the seismic signals generated by microseismicity have extremely low amplitude. We develop a methodology to unveil signals that are smeared in strong ambient noise and thus facilitate more accurate arrival-time picking, which ultimately improves localization accuracy. In the proposed technique, we decompose the recorded data into several morphological multi-scale components. In order to unveil the weak signal, we propose an orthogonalization operator which acts as a time-varying weighting in the morphological reconstruction. The orthogonalization operator is obtained through an inversion process. This orthogonalized morphological reconstruction can be interpreted as a projection of a higher-dimensional vector. We first test the proposed technique on a synthetic dataset. The technique is then applied to a field dataset recorded in a project in China, in which the signals induced by hydraulic fracturing are recorded by twelve three-component (3-C) geophones in a monitoring well. The result demonstrates that the orthogonalized morphological reconstruction can make extremely weak microseismic signals detectable.
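A minimal sketch of the reconstruction idea follows, assuming the multiscale components have already been computed: the weights here are obtained by a single global least-squares inversion against the recorded trace, whereas the actual method uses a time-varying (local) weighting.

```python
# Minimal sketch (assumed details): recombine precomputed multiscale components
# with weights found by a least-squares inversion against the recorded trace.
# The actual method uses a time-varying weighting; this global version only
# illustrates the projection idea.
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 4
components = rng.standard_normal((n, k))     # stand-in multiscale components
true_w = np.array([1.0, 0.0, 0.5, 0.0])
recorded = components @ true_w + 0.1 * rng.standard_normal(n)

# Least-squares weights: w = argmin ||recorded - components @ w||^2
w, *_ = np.linalg.lstsq(components, recorded, rcond=None)
reconstructed = components @ w
print(np.round(w, 2))
```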
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance - gsd) on temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time required (approximately 10 s) to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
Zhao, Xian-En; Lv, Tao; Zhu, Shuyun; Qu, Fei; Chen, Guang; He, Yongrui; Wei, Na; Li, Guoliang; Xia, Lian; Sun, Zhiwei; Zhang, Shijuan; You, Jinmao; Liu, Shu; Liu, Zhiqiang; Sun, Jing; Liu, Shuying
2016-03-11
This paper reports, for the first time, a rapid hyphenated technique of low-toxicity dual ultrasonic-assisted dispersive liquid-liquid microextraction (dual-UADLLME) coupled with microwave-assisted derivatization (MAD) for the simultaneous determination of 20(S)-protopanaxadiol (PPD) and 20(S)-protopanaxatriol (PPT). The developed method was based on ultra-high-performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS) detection using multiple-reaction monitoring (MRM) mode. A mass spectrometry sensitizing reagent, 4'-carboxy-substituted rosamine (CSR), with high reaction activity and ionization efficiency, was synthesized and used for the first time as the derivatization reagent. Parameters of the dual-UADLLME, MAD and UHPLC-MS/MS conditions were all optimized in detail. Low-toxicity brominated solvents were used as the extractant instead of traditional chlorinated solvents. Satisfactory linearity, recovery, repeatability, accuracy and precision, absence of matrix effect, and extremely low limits of detection (LODs, 0.010 and 0.015 ng/mL for PPD and PPT, respectively) were achieved. The main advantages of the method are that it is rapid, sensitive and environmentally friendly, with high selectivity, accuracy and favourable matrix-effect results. The proposed method was successfully applied to the pharmacokinetics of PPD and PPT in rat plasma. Copyright © 2016 Elsevier B.V. All rights reserved.
The diagnostic management of upper extremity deep vein thrombosis: A review of the literature.
Kraaijpoel, Noémie; van Es, Nick; Porreca, Ettore; Büller, Harry R; Di Nisio, Marcello
2017-08-01
Upper extremity deep vein thrombosis (UEDVT) accounts for 4% to 10% of all cases of deep vein thrombosis. UEDVT may present with localized pain, erythema, and swelling of the arm, but may also be detected incidentally by diagnostic imaging tests performed for other reasons. Prompt and accurate diagnosis is crucial to prevent pulmonary embolism and long-term complications such as the post-thrombotic syndrome of the arm. Unlike the diagnostic management of deep vein thrombosis (DVT) of the lower extremities, which is well established, the work-up of patients with clinically suspected UEDVT remains uncertain, with limited evidence from studies of small size and poor methodological quality. Currently, only one prospective study has evaluated the use of an algorithm, similar to the one used for DVT of the lower extremities, for the diagnostic work-up of clinically suspected UEDVT. The algorithm combined clinical probability assessment, D-dimer testing and ultrasonography, and appeared to safely and effectively exclude UEDVT. However, before recommending its use in routine clinical practice, external validation of this strategy and improvements in its efficiency are needed, especially in high-risk subgroups in whom the performance of the algorithm appeared to be suboptimal, such as hospitalized or cancer patients. In this review, we critically assess the accuracy and efficacy of current diagnostic tools and provide clinical guidance for the diagnostic management of clinically suspected UEDVT. Copyright © 2017 Elsevier Ltd. All rights reserved.
Closing the Gap: An Analysis of Options for Improving the USAF Fighter Fleet from 2015 to 2035
2015-10-01
...capacity. The CBO predicts an increase in capacity for both large (2,000 lbs class) weapons and small (either 500 lbs class or Small Diameter Bomb)... Laser Guided Bomb (LGB) designed to penetrate extremely hardened bunkers with extreme accuracy... Larger weapons can provide better standoff range... operate with impunity in low-intensity CAS scenarios. While survivability, with the exception of against small arms ground fire, is far less a...
A Metastatistical Approach to Satellite Estimates of Extreme Rainfall Events
NASA Astrophysics Data System (ADS)
Zorzetto, E.; Marani, M.
2017-12-01
The estimation of the average recurrence interval of intense rainfall events is a central issue for both hydrologic modeling and engineering design. These estimates require inference of the properties of the right tail of the statistical distribution of precipitation, a task often performed using the Generalized Extreme Value (GEV) distribution, estimated either from a sample of annual maxima (AM) or with a peaks-over-threshold (POT) approach. However, these approaches require long and homogeneous rainfall records, which often are not available, especially in the case of remotely sensed rainfall datasets. Here we use an alternative approach, tailored to remotely sensed rainfall estimates, based on the metastatistical extreme value distribution (MEVD), which produces estimates of rainfall extreme values from the probability distribution function (pdf) of all measured `ordinary' rainfall events. This methodology also accounts for the interannual variations observed in the pdf of daily rainfall by integrating over the sample space of its random parameters. We illustrate the application of this framework to the TRMM Multi-satellite Precipitation Analysis rainfall dataset, where the MEVD optimally exploits the relatively short records of satellite-sensed rainfall while taking full advantage of their high spatial resolution and quasi-global coverage. The accuracy of TRMM precipitation estimates and scale issues are investigated for a case study located in the Little Washita watershed, Oklahoma, using a dense network of rain gauges for independent ground validation. The methodology contributes to our understanding of the risk of extreme rainfall events, as it allows i) an optimal use of the TRMM datasets in estimating the tail of the probability distribution of daily rainfall, and ii) a global mapping of daily rainfall extremes and distributional tail properties, bridging the existing gaps in rain gauge networks.
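As a rough illustration of the MEVD computation described above (with assumed details: a Weibull fit to each year's wet-day totals and a simple average of the implied annual-maximum CDFs), a short sketch follows; the estimators and thresholds used in the cited work may differ.

```python
# Hedged sketch of the MEVD idea: fit a Weibull distribution to each year's
# "ordinary" wet-day totals, then average the implied annual-maximum CDF over
# the observed years. Details (estimators, thresholds) follow the cited work
# only loosely.
import numpy as np
from scipy import stats

def mev_cdf(x, yearly_samples):
    """yearly_samples: list of arrays of wet-day rainfall, one array per year."""
    terms = []
    for wet_days in yearly_samples:
        c, loc, scale = stats.weibull_min.fit(wet_days, floc=0.0)
        n_j = len(wet_days)                      # number of ordinary events in year j
        terms.append(stats.weibull_min.cdf(x, c, loc=0.0, scale=scale) ** n_j)
    return np.mean(terms, axis=0)

# Synthetic example: 15 years of ~100 wet days each.
rng = np.random.default_rng(1)
years = [rng.weibull(0.8, size=100) * 20.0 for _ in range(15)]
x = np.linspace(1.0, 300.0, 600)
F = mev_cdf(x, years)
# Daily rainfall amount with roughly a 20-year return period (F = 1 - 1/20).
print(x[np.searchsorted(F, 1 - 1 / 20)])
```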
SUMER: Solar Ultraviolet Measurements of Emitted Radiation
NASA Technical Reports Server (NTRS)
Wilhelm, K.; Axford, W. I.; Curdt, W.; Gabriel, A. H.; Grewing, M.; Huber, M. C. E.; Jordan, S. D.; Kuehne, M.; Lemaire, P.; Marsch, E.
1992-01-01
The experiment Solar Ultraviolet Measurements of Emitted Radiation (SUMER) is designed for the investigations of plasma flow characteristics, turbulence and wave motions, plasma densities and temperatures, structures and events associated with solar magnetic activity in the chromosphere, the transition zone and the corona. Specifically, SUMER will measure profiles and intensities of Extreme Ultraviolet (EUV) lines emitted in the solar atmosphere ranging from the upper chromosphere to the lower corona; determine line broadenings, spectral positions and Doppler shifts with high accuracy, provide stigmatic images of selected areas of the Sun in the EUV with high spatial, temporal and spectral resolution and obtain full images of the Sun and the inner corona in selectable EUV lines, corresponding to a temperature from 10,000 to more than 1,800,000 K.
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter.
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-04-10
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow-rate signal and efficiently converts the analog signal, upgrading the execution performance of the microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise. It is extremely steady and independent of fluctuations in the flow measurement. Moreover, it can calculate the current flow-rate signal immediately (m/s). The DSC algorithm can be applied to current general MCU firmware platforms without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification by hardware reduces the demand for ADC accuracy, which reduces the cost.
Dynamic Synchronous Capture Algorithm for an Electromagnetic Flowmeter
Fanjiang, Yong-Yi; Lu, Shih-Wei
2017-01-01
This paper proposes a dynamic synchronous capture (DSC) algorithm to calculate the flow rate for an electromagnetic flowmeter. The DSC algorithm accurately calculates the flow-rate signal and efficiently converts the analog signal, upgrading the execution performance of the microcontroller unit (MCU). Furthermore, it reduces interference from abnormal noise. It is extremely steady and independent of fluctuations in the flow measurement. Moreover, it can calculate the current flow-rate signal immediately (m/s). The DSC algorithm can be applied to current general MCU firmware platforms without using DSP (Digital Signal Processing) or a high-speed, high-end MCU platform, and signal amplification by hardware reduces the demand for ADC accuracy, which reduces the cost. PMID:28394306
Mix & match electron beam & scanning probe lithography for high throughput sub-10 nm lithography
NASA Astrophysics Data System (ADS)
Kaestner, Marcus; Hofer, Manuel; Rangelow, Ivo W.
2013-03-01
The successful demonstration of a technique able to produce features with single-nanometer (SN) resolution could guide the semiconductor industry into the desired beyond-CMOS era. In the lithographic community, immense efforts are being made to develop extreme ultraviolet lithography (EUVL) and multiple-e-beam direct-write systems as possible successors for next-generation lithography (NGL). However, patterning below 20 nm resolution with sub-10 nm overlay alignment accuracy becomes an extremely challenging quest. Herein, the combination of electron beam lithography (EBL) or EUVL with the outstanding capabilities of closed-loop scanning proximal probe nanolithography (SPL) reveals a promising way to improve both patterning resolution and reproducibility in combination with excellent overlay and placement accuracy. In particular, the imaging and lithographic resolution capabilities provided by scanning probe microscopy (SPM) methods reach the atomic level, which represents the theoretical limit for constructing nanoelectronic devices. Furthermore, the symbiosis between EBL (EUVL) and SPL expands the process window of EBL (EUVL) far beyond the state of the art, allowing SPL-based pre- and post-patterning of EBL (EUVL)-written features at the critical-dimension level with theoretically nanometer-precise pattern overlay alignment. Moreover, we can modify the EBL (EUVL) pattern before as well as after the development step. In this paper we demonstrate proof of concept using the ultra-high-resolution molecular glass resist calixarene. To this end, we applied a Gaussian e-beam lithography system operating at 10 keV and a home-developed SPL set-up. The introduced mix-and-match lithography strategy enables a powerful use of our SPL set-up, especially as a post-patterning tool for inspection and repair functions below the sub-10 nm critical-dimension level.
Automatic classification of tissue malignancy for breast carcinoma diagnosis.
Fondón, Irene; Sarmiento, Auxiliadora; García, Ana Isabel; Silvestre, María; Eloy, Catarina; Polónia, António; Aguiar, Paulo
2018-05-01
Breast cancer is the second leading cause of cancer death among women. Its early diagnosis is extremely important to prevent avoidable deaths. However, malignancy assessment of tissue biopsies is complex and dependent on observer subjectivity. Moreover, hematoxylin and eosin (H&E)-stained histological images exhibit a highly variable appearance, even within the same malignancy level. In this paper, we propose a computer-aided diagnosis (CAD) tool for automated malignancy assessment of breast tissue samples based on the processing of histological images. We provide four malignancy levels as the output of the system: normal, benign, in situ and invasive. The method is based on the calculation of three sets of features related to nuclei, colour regions and textures considering local characteristics and global image properties. By taking advantage of well-established image processing techniques, we build a feature vector for each image that serves as an input to an SVM (Support Vector Machine) classifier with a quadratic kernel. The method has been rigorously evaluated, first with a 5-fold cross-validation within an initial set of 120 images, second with an external set of 30 different images and third with images with artefacts included. Accuracy levels range from 75.8% when the 5-fold cross-validation was performed to 75% with the external set of new images and 61.11% when the extremely difficult images were added to the classification experiment. The experimental results indicate that the proposed method is capable of distinguishing between four malignancy levels with high accuracy. Our results are close to those obtained with recent deep learning-based methods. Moreover, it performs better than other state-of-the-art methods based on feature extraction, and it can help improve the CAD of breast cancer. Copyright © 2018 Elsevier Ltd. All rights reserved.
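A sketch of the classification stage alone is given below, assuming precomputed feature vectors; the nuclei, colour-region and texture features of the paper are replaced by stand-in data, and only the 4-class SVM with a quadratic (degree-2 polynomial) kernel and 5-fold cross-validation is illustrated.

```python
# Sketch of the classification stage only: a 4-class SVM with a quadratic
# (degree-2 polynomial) kernel on precomputed feature vectors. The nuclei /
# colour-region / texture feature extraction is not reproduced here.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 40))          # stand-in feature vectors
y = rng.integers(0, 4, size=120)            # normal / benign / in situ / invasive

clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, C=1.0))
scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validation, as in the paper
print(scores.mean())
```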
Keo, Hong H; Schilling, Marianne; Büchel, Roland; Gröchenig, Ernst; Engelberger, Rolf P; Willenberg, Torsten; Baumgartner, Iris; Gretener, Silvia B
2013-06-01
Fluorescence microlymphography (FML) is used to visualize the lymphatic capillaries. A maximum spread of the fluorescence dye of ≥ 12 mm has been suggested for the diagnosis of lymphedema. However, data on sensitivity and specificity are lacking. The aim of this study was to investigate the accuracy of FML for diagnosing lymphedema in patients with leg swelling. Patients with lower extremity swelling were clinically assessed and separated into lymphedema and non-lymphatic edema groups. FML was studied in all affected legs and the maximum spread of lymphatic capillaries was measured. Test accuracy and receiver operating characteristic (ROC) analyses were performed to assess possible threshold values that predict lymphedema. Between March 2008 and August 2011 a total of 171 patients (184 legs) with a median age of 43.5 (IQR 24, 54) years were assessed. Of those, 94 (51.1%) legs were diagnosed with lymphedema. The sensitivity, specificity, positive and negative likelihood ratio and positive and negative predictive value were 87%, 64%, 2.45, 0.20, 72% and 83% for the 12-mm cut-off level and 79%, 83%, 4.72, 0.26, 83% and 79% for the 14-mm cut-off level, respectively. The area under the ROC curve was 0.82 (95% CI: 0.76, 0.88). Sensitivity was higher in secondary versus primary lymphedema (95.0% vs 74.3%, p = 0.045). No major adverse events were observed. In conclusion, FML is a simple and safe technique for detecting lymphedema in patients with leg swelling. A cut-off level of ≥ 14-mm maximum spread has high sensitivity and high specificity for detecting lymphedema and should be chosen.
NASA Astrophysics Data System (ADS)
Žabkar, Rahela; Koračin, Darko; Rakovec, Jože
2013-10-01
A high ozone (O3) concentration episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors, as well as the impacts of model inputs on model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface-layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on model accuracy and O3 predictions were investigated. In spite of the large ensemble size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting, the ensemble spread improved results with respect to extremes compared to the reference run. Noticeable nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too-strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near-surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are also suggested by the model's insensitivity to variations in nitrogen oxides (NOx) and volatile organic compounds (VOC) emissions. Major factors in the underestimation of daytime O3 maxima at the Mediterranean and some rural sites include overestimation of PBL depths, a lack of information on forest fires, too-strong surface winds, and possible inaccuracies in biogenic emissions. This numerical experiment with ensemble runs also provided guidance on an optimum model setup and input data.
Improving the Accuracy of Estimation of Climate Extremes
NASA Astrophysics Data System (ADS)
Zolina, Olga; Detemmerman, Valery; Trenberth, Kevin E.
2010-12-01
Workshop on Metrics and Methodologies of Estimation of Extreme Climate Events; Paris, France, 27-29 September 2010; Climate projections point toward more frequent and intense weather and climate extremes such as heat waves, droughts, and floods, in a warmer climate. These projections, together with recent extreme climate events, including flooding in Pakistan and the heat wave and wildfires in Russia, highlight the need for improved risk assessments to help decision makers and the public. But accurate analysis and prediction of risk of extreme climate events require new methodologies and information from diverse disciplines. A recent workshop sponsored by the World Climate Research Programme (WCRP) and hosted at United Nations Educational, Scientific and Cultural Organization (UNESCO) headquarters in France brought together, for the first time, a unique mix of climatologists, statisticians, meteorologists, oceanographers, social scientists, and risk managers (such as those from insurance companies) who sought ways to improve scientists' ability to characterize and predict climate extremes in a changing climate.
Not looking yourself: The cost of self-selecting photographs for identity verification.
White, David; Burton, Amy L; Kemp, Richard I
2016-05-01
Photo-identification is based on the premise that photographs are representative of facial appearance. However, previous studies show that ratings of likeness vary across different photographs of the same face, suggesting that some images capture identity better than others. Two experiments were designed to examine the relationship between likeness judgments and face matching accuracy. In Experiment 1, we compared unfamiliar face matching accuracy for self-selected and other-selected high-likeness images. Surprisingly, images selected by previously unfamiliar viewers - after very limited exposure to a target face - were more accurately matched than self-selected images chosen by the target identity themselves. Results also revealed extremely low inter-rater agreement in ratings of likeness across participants, suggesting that perceptions of image resemblance are inherently unstable. In Experiment 2, we tested whether the cost of self-selection can be explained by this general disagreement in likeness judgments between individual raters. We find that averaging across rankings by multiple raters produces image selections that provide superior identification accuracy. However, the benefit of other-selection persisted for single raters, suggesting that inaccurate representations of self interfere with our ability to judge which images faithfully represent our current appearance. © 2015 The British Psychological Society.
Montreal Cognitive Assessment (MoCA): validation study for frontotemporal dementia.
Freitas, Sandra; Simões, Mário R; Alves, Lara; Duro, Diana; Santana, Isabel
2012-09-01
The Montreal Cognitive Assessment (MoCA) is a brief instrument developed for the screening of milder forms of cognitive impairment, having surpassed the well-known limitations of the Mini-Mental State Examination (MMSE). The aim of the present study was to validate the MoCA as a cognitive screening test for behavioral-variant frontotemporal dementia (bv-FTD) by examining its psychometric properties and diagnostic accuracy. Three matched subgroups of participants were considered: bv-FTD (n = 50), Alzheimer disease (n = 50), and a control group of healthy adults (n = 50). Compared with the MMSE, the MoCA demonstrated consistently superior psychometric properties and discriminant capacity, providing comprehensive information about the patients' cognitive profiles. The diagnostic accuracy of MoCA for bv-FTD was extremely high (area under the curve AUC [MoCA] = 0.934, 95% confidence interval [CI] = 0.866-0.974; AUC [MMSE] = 0.772, 95% CI = 0.677-0.850). With a cutoff below 17 points, the MoCA results for sensitivity, specificity, positive predictive value, negative predictive value, and classification accuracy were significantly superior to those of the MMSE. The MoCA is a sensitive and accurate instrument for screening the patients with bv-FTD and represents a better option than the MMSE.
NASA Astrophysics Data System (ADS)
Chetty, S.; Field, L. A.
2013-12-01
The Arctic Ocean's continuing decrease of summer-time ice is related to rapidly diminishing multi-year ice due to the effects of climate change. Ice911 Research aims to develop environmentally respectful materials that, when deployed, will increase the albedo, enhancing the formation and/or preservation of multi-year ice. Small-scale deployments using various materials have been done in Canada, California's Sierra Nevada Mountains and a pond in Minnesota to test the albedo performance and environmental characteristics of these materials. SWIMS is a sophisticated autonomous sensor system being developed to measure the albedo, weather, water temperature and other environmental parameters. The system (SWIMS) employs low-cost, high-accuracy/precision sensors, high-resolution cameras, and an extreme-environment command and data handling computer system using satellite and terrestrial wireless communication. The entire system is solar powered with redundant battery backup on a floating buoy platform engineered for low-temperature (-40 C) and high-wind conditions. The system also incorporates tilt sensors, sonar-based ice thickness sensors and a weather station. To keep costs low, each SWIMS unit measures incoming and reflected radiation from the four quadrants around the buoy. This allows data from four sets of sensors, cameras, the weather station and the water temperature probe to be collected and transmitted by a single on-board solar-powered computer. This presentation covers the technical, logistical and cost challenges in designing, developing and deploying these stations in remote, extreme environments. Figure captions: image of the setting sun at the SWIMS station captured by camera #3; one of the images captured by SWIMS camera #4.
Practical vision based degraded text recognition system
NASA Astrophysics Data System (ADS)
Mohammad, Khader; Agaian, Sos; Saleh, Hani
2011-02-01
Rapid growth and progress in the medical, industrial, security and technology fields mean more and more consideration for the use of camera-based optical character recognition (OCR). Applying OCR to scanned documents is quite mature, and there are many commercial and research products available on this topic. These products achieve acceptable recognition accuracy and reasonable processing times, especially with trained software and constrained text characteristics. Even though the application space for OCR is huge, it is quite challenging to design a single system that is capable of performing automatic OCR for text embedded in an image irrespective of the application. Challenges for OCR systems include images taken under natural real-world conditions, surface curvature, text orientation, font, size, lighting conditions, and noise. These and many other conditions make it extremely difficult to achieve reasonable character recognition, and performance of conventional OCR systems drops dramatically as the degradation level of the text image quality increases. In this paper, a new recognition method is proposed to recognize solid or dotted-line degraded characters. The degraded text string is localized and segmented using a new algorithm. The new method was implemented and tested using a development framework system that is capable of performing OCR on camera-captured images. The framework allows parameter tuning of the image-processing algorithm based on a training set of camera-captured text images. Novel methods were used for enhancement, text localization and segmentation, enabling a custom system capable of performing automatic OCR for different applications. The developed framework system includes new image enhancement, filtering, and segmentation techniques which enabled higher recognition accuracies, faster processing times, and lower energy consumption compared with the best state-of-the-art published techniques. The system produced impressive OCR accuracies (90% to 93%) using customized systems generated by our development framework in two industrial OCR applications: water-bottle label text recognition and concrete slab plate text recognition. The system was also trained for the Arabic alphabet and demonstrated extremely high recognition accuracy (99%) for Arabic license name plate text recognition with processing times of 10 seconds. The accuracy and run times of the system were compared with conventional and many state-of-the-art methods; the proposed system shows excellent results.
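As a hedged illustration of the kind of preprocessing and coarse text localization such a pipeline performs (not the authors' enhancement or segmentation algorithms), a generic OpenCV-style sketch follows; the input filename is hypothetical.

```python
# Generic sketch of camera-image text preprocessing and coarse localization
# (adaptive thresholding + contour bounding boxes). This is not the authors'
# enhancement/segmentation pipeline, only a common baseline for the same steps.
import cv2

img = cv2.imread("label_photo.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input
img = cv2.GaussianBlur(img, (3, 3), 0)                        # mild denoising
binary = cv2.adaptiveThreshold(img, 255,
                               cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY_INV, 31, 15)

contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
boxes = [cv2.boundingRect(c) for c in contours
         if cv2.contourArea(c) > 50]                          # drop tiny specks
print(len(boxes), "candidate character/word regions")
```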
Integrated Strategy Improves the Prediction Accuracy of miRNA in Large Dataset
Lipps, David; Devineni, Sree
2016-01-01
MiRNAs are short non-coding RNAs of about 22 nucleotides, which play critical roles in gene expression regulation. The biogenesis of miRNAs is largely determined by the sequence and structural features of their parental RNA molecules. Based on these features, multiple computational tools have been developed to predict whether RNA transcripts contain miRNAs or not. Although very successful, these predictors have started to face multiple challenges in recent years. Many predictors were optimized using datasets of hundreds of miRNA samples. The sizes of these datasets are much smaller than the number of known miRNAs. Consequently, the prediction accuracy of these predictors on large datasets becomes unknown and needs to be re-tested. In addition, many predictors were optimized for either high sensitivity or high specificity; these optimization strategies may bring serious limitations in applications. Moreover, to meet continuously rising expectations of these computational tools, improving the prediction accuracy becomes extremely important. In this study, a meta-predictor, mirMeta, was developed by integrating a set of non-linear transformations with a meta-strategy. More specifically, the outputs of five individual predictors were first preprocessed using non-linear transformations, and then fed into an artificial neural network to make the meta-prediction. The prediction accuracy of the meta-predictor was validated using both multi-fold cross-validation and an independent dataset. The final accuracy of the meta-predictor on the newly designed large dataset is improved by 7%, to 93%. The meta-predictor is also shown to be less dependent on the dataset, as well as to have a refined balance between sensitivity and specificity. This study is important in two ways: first, it shows that the combination of non-linear transformations and artificial neural networks improves the prediction accuracy of individual predictors; second, a new miRNA predictor with significantly improved prediction accuracy is developed for the community for identifying novel miRNAs and the complete set of miRNAs. Source code is available at: https://github.com/xueLab/mirMeta PMID:28002428
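A minimal sketch of the meta-strategy is shown below with assumed details: stand-in scores for five base predictors, an illustrative logit-style non-linear transformation, and a small neural network meta-classifier. mirMeta's actual transformations and network are in the linked repository.

```python
# Minimal sketch of the meta-prediction strategy: nonlinearly transform the
# scores of several base predictors, then feed them to a small neural network.
# The actual transformations/network of mirMeta are in the linked repository;
# this only illustrates the pattern.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_base = 2000, 5
base_scores = rng.uniform(0, 1, size=(n_samples, n_base))   # stand-in outputs of 5 predictors
labels = (base_scores.mean(axis=1) +
          0.1 * rng.standard_normal(n_samples) > 0.5).astype(int)

# Example non-linear transformation: a logit-style squashing of each score.
eps = 1e-6
features = np.log((base_scores + eps) / (1 - base_scores + eps))

meta = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
print(cross_val_score(meta, features, labels, cv=5).mean())
```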
GPS FOM Chimney Analysis using Generalized Extreme Value Distribution
NASA Technical Reports Server (NTRS)
Ott, Rick; Frisbee, Joe; Saha, Kanan
2004-01-01
An objective of a statistical analysis is often to estimate a limit value, such as a 3-sigma 95% confidence upper limit, from a data sample. The Generalized Extreme Value distribution method can be profitably employed in many situations for such an estimate. It is well known that, according to the Central Limit Theorem, the mean value of a large data set is normally distributed irrespective of the distribution of the data from which the mean value is derived. In a somewhat similar fashion, it is observed that the extreme value of a data set often has a distribution that can be formulated with a generalized distribution. In space shuttle entry with 3-string GPS navigation, the Figure Of Merit (FOM) value gives a measure of GPS navigated-state accuracy. A GPS navigated state with a FOM of 6 or higher is deemed unacceptable and is said to form a FOM 6-or-higher chimney. A FOM chimney is a period of time during which the FOM value stays higher than 5. A longer period of FOM values of 6 or higher causes the navigated state to accumulate more error for lack of a state update. For an acceptable landing it is imperative that the state error remains low; hence, at low altitude during entry, GPS data with FOM greater than 5 must not last more than 138 seconds. To test GPS performance, many entry test cases were simulated at the Avionics Development Laboratory. Only high-value FOM chimneys are consequential. The extreme value statistical technique is applied to analyze high-value FOM chimneys. The maximum likelihood method is used to determine parameters that characterize the GEV distribution, and then the limit value statistics are estimated.
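The analysis pattern described above can be illustrated with a short, hedged sketch: fit a GEV distribution to a sample of maxima by maximum likelihood and read off a high quantile. The data below are synthetic stand-ins, not the simulated chimney durations of the study.

```python
# Sketch of the described analysis pattern: fit a Generalized Extreme Value
# distribution to a sample of maxima by maximum likelihood, then estimate a
# high quantile (e.g., a 95th-percentile limit). Data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Stand-in "chimney duration" maxima, one per simulated entry case.
maxima = rng.gumbel(loc=60.0, scale=15.0, size=200)

shape, loc, scale = stats.genextreme.fit(maxima)      # maximum likelihood fit
q95 = stats.genextreme.ppf(0.95, shape, loc=loc, scale=scale)
print(round(q95, 1), "seconds at the 95th percentile")
```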
Kim, Jongin; Lee, Boreom
2018-05-07
Different modalities such as structural MRI, FDG-PET, and CSF have complementary information, which is likely to be very useful for diagnosis of AD and MCI. Therefore, it is possible to develop a more effective and accurate AD/MCI automatic diagnosis method by integrating complementary information of different modalities. In this paper, we propose the multi-modal sparse hierarchical extreme learning machine (MSH-ELM). We used volume and mean intensity extracted from 93 regions of interest (ROIs) as features of MRI and FDG-PET, respectively, and used p-tau, t-tau, and Aβ42 as CSF features. In detail, high-level representation was individually extracted from each of MRI, FDG-PET, and CSF using a stacked sparse extreme learning machine auto-encoder (sELM-AE). Then, another stacked sELM-AE was devised to acquire a joint hierarchical feature representation by fusing the high-level representations obtained from each modality. Finally, we classified the joint hierarchical feature representation using a kernel-based extreme learning machine (KELM). The results of MSH-ELM were compared with those of conventional ELM, single kernel support vector machine (SK-SVM), multiple kernel support vector machine (MK-SVM) and stacked auto-encoder (SAE). Performance was evaluated through 10-fold cross-validation. In the classification of the AD vs. HC and MCI vs. HC problems, the proposed MSH-ELM method showed mean balanced accuracies of 96.10% and 86.46%, respectively, which is much better than those of competing methods. In summary, the proposed algorithm exhibits consistently better performance than SK-SVM, ELM, MK-SVM and SAE in the two binary classification problems (AD vs. HC and MCI vs. HC). © 2018 Wiley Periodicals, Inc.
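For orientation, a compact sketch of the final-stage kernel ELM classifier alone is given below (output weights beta = (I/C + K)^-1 T with an RBF kernel, following the standard KELM formulation); the stacked sparse ELM auto-encoder fusion of MRI, FDG-PET and CSF features is not reproduced, and the data are stand-ins.

```python
# Compact sketch of the final-stage kernel ELM (KELM) classifier:
# beta = (I/C + K)^-1 T with an RBF kernel. The stacked sparse ELM auto-encoder
# fusion of MRI/FDG-PET/CSF features is not reproduced here.
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, y, C=10.0, gamma=0.1):
    T = np.eye(y.max() + 1)[y]                    # one-hot targets
    K = rbf_kernel(X, X, gamma)
    beta = np.linalg.solve(np.eye(len(X)) / C + K, T)
    return beta

def kelm_predict(X_train, beta, X_test, gamma=0.1):
    return rbf_kernel(X_test, X_train, gamma) @ beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))                # stand-in fused features
y = (X[:, 0] > 0).astype(int)                     # stand-in labels (e.g., AD vs. HC)
beta = kelm_fit(X, y)
pred = kelm_predict(X, beta, X).argmax(axis=1)
print((pred == y).mean())
```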
Shang, Qiang; Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data is used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust.
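A small sketch of the SSA filtering step alone follows, under standard assumptions (trajectory-matrix embedding, SVD, reconstruction from the leading components by anti-diagonal averaging); the window length and rank are placeholders, and the KELM predictor and GSA tuning of the paper are not reproduced.

```python
# Small sketch of the SSA filtering step alone: embed the series in a
# trajectory matrix, keep the leading singular components, and reconstruct by
# anti-diagonal averaging. The KELM predictor and GSA tuning are not shown.
import numpy as np

def ssa_denoise(x, window=24, rank=3):
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])   # window x k
    U, s, Vt = np.linalg.svd(traj, full_matrices=False)
    approx = (U[:, :rank] * s[:rank]) @ Vt[:rank]                 # low-rank trajectory
    # Anti-diagonal (Hankel) averaging back to a 1-D series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += approx[:, j]
        counts[j:j + window] += 1
    return out / counts

rng = np.random.default_rng(0)
t = np.arange(24 * 14)                                            # two weeks, hourly
flow = 300 + 120 * np.sin(2 * np.pi * t / 24) + 25 * rng.standard_normal(len(t))
smooth = ssa_denoise(flow)
print(np.round(smooth[:5]))
```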
Lin, Ciyun; Yang, Zhaosheng; Bing, Qichun; Zhou, Xiyang
2016-01-01
Short-term traffic flow prediction is one of the most important issues in the field of intelligent transport systems (ITS). Because of the uncertainty and nonlinearity, short-term traffic flow prediction is a challenging task. In order to improve the accuracy of short-term traffic flow prediction, a hybrid model (SSA-KELM) is proposed based on singular spectrum analysis (SSA) and kernel extreme learning machine (KELM). SSA is used to filter out the noise of the traffic flow time series. Then, the filtered traffic flow data is used to train the KELM model, the optimal input form of the proposed model is determined by phase space reconstruction, and parameters of the model are optimized by the gravitational search algorithm (GSA). Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The SSA-KELM model is then compared with several well-known prediction models, including support vector machine, extreme learning machine, and the single KELM model. The experimental results demonstrate that performance of the proposed model is superior to that of the comparison models. Apart from accuracy improvement, the proposed model is more robust. PMID:27551829
Optical proximity correction for anamorphic extreme ultraviolet lithography
NASA Astrophysics Data System (ADS)
Clifford, Chris; Lam, Michael; Raghunathan, Ananthan; Jiang, Fan; Fenger, Germain; Adam, Kostas
2017-10-01
The change from isomorphic to anamorphic optics in high numerical aperture (NA) extreme ultraviolet (EUV) scanners necessitates changes to the mask data preparation flow. The required changes for each step in the mask tape out process are discussed, with a focus on optical proximity correction (OPC). When necessary, solutions to new problems are demonstrated, and verified by rigorous simulation. Additions to the OPC model include accounting for anamorphic effects in the optics, mask electromagnetics, and mask manufacturing. The correction algorithm is updated to include awareness of anamorphic mask geometry for mask rule checking (MRC). OPC verification through process window conditions is enhanced to test different wafer scale mask error ranges in the horizontal and vertical directions. This work will show that existing models and methods can be updated to support anamorphic optics without major changes. Also, the larger mask size in the Y direction can result in better model accuracy, easier OPC convergence, and designs which are more tolerant to mask errors.
Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions.
Karasiev, Valentin V; Dufty, James W; Trickey, S B
2018-02-16
Realizing the potential for predictive density functional calculations of matter under extreme conditions depends crucially upon having an exchange-correlation (XC) free-energy functional accurate over a wide range of state conditions. Unlike the ground-state case, no such functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free-energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Its accuracy in the warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated T. Pressure shifts for hot electrons in compressed static fcc Al and for low-density Al demonstrate the combined magnitude of thermal and gradient effects handled well by this functional over a wide T range.
Landsat-8 Operational Land Imager (OLI) radiometric performance on-orbit
Morfitt, Ron; Barsi, Julia A.; Levy, Raviv; Markham, Brian L.; Micijevic, Esad; Ong, Lawrence; Scaramuzza, Pat; Vanderwerff, Kelly
2015-01-01
Expectations of the Operational Land Imager (OLI) radiometric performance onboard Landsat-8 have been met or exceeded. The calibration activities that occurred prior to launch provided calibration parameters that enabled ground processing to produce imagery that met most requirements when data were transmitted to the ground. Since launch, calibration updates have improved the image quality even more, so that all requirements are met. These updates range from detector gain coefficients to reduce striping and banding to alignment parameters to improve the geometric accuracy. This paper concentrates on the on-orbit radiometric performance of the OLI, excepting the radiometric calibration performance. Topics discussed in this paper include: signal-to-noise ratios that are an order of magnitude higher than previous Landsat missions; radiometric uniformity that shows little residual banding and striping, and continues to improve; a dynamic range that limits saturation to extremely high radiance levels; extremely stable detectors; slight nonlinearity that is corrected in ground processing; detectors that are stable and 100% operable; and few image artifacts.
Repair of localized defects in multilayer-coated reticle blanks for extreme ultraviolet lithography
Stearns, Daniel G [Los Altos, CA; Sweeney, Donald W [San Ramon, CA; Mirkarimi, Paul B [Sunol, CA
2004-11-23
A method is provided for repairing defects in a multilayer coating layered onto a reticle blank used in an extreme ultraviolet lithography (EUVL) system. Using high lateral spatial resolution, energy is deposited in the multilayer coating in the vicinity of the defect. This can be accomplished using a focused electron beam, focused ion beam or a focused electromagnetic radiation. The absorbed energy will cause a structural modification of the film, producing a localized change in the film thickness. The change in film thickness can be controlled with sub-nanometer accuracy by adjusting the energy dose. The lateral spatial resolution of the thickness modification is controlled by the localization of the energy deposition. The film thickness is adjusted locally to correct the perturbation of the reflected field. For example, when the structural modification is a localized film contraction, the repair of a defect consists of flattening a mound or spreading out the sides of a depression.
Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions
NASA Astrophysics Data System (ADS)
Karasiev, Valentin V.; Dufty, James W.; Trickey, S. B.
2018-02-01
Realizing the potential for predictive density functional calculations of matter under extreme conditions depends crucially upon having an exchange-correlation (XC) free-energy functional accurate over a wide range of state conditions. Unlike the ground-state case, no such functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free-energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Its accuracy in the warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated T. Pressure shifts for hot electrons in compressed static fcc Al and for low-density Al demonstrate the combined magnitude of thermal and gradient effects handled well by this functional over a wide T range.
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
The matrix exponential in transient structural analysis
NASA Technical Reports Server (NTRS)
Minnetyan, Levon
1987-01-01
The primary usefulness of the presented theory is its ability to represent the effects of high-frequency linear response accurately without requiring very small time steps in the analysis of dynamic response. The matrix exponential contains a series approximation to the dynamic model. However, unlike the usual analysis procedure, which truncates the high-frequency response, the approximation in the exponential matrix solution is in the time domain. If the series solution to the matrix exponential is truncated too early, the solution becomes inaccurate after a certain time; yet, up to that time the solution is extremely accurate, including all high-frequency effects. By taking finite time increments, the exponential matrix solution can compute the response very accurately. Use of the exponential matrix in structural dynamics is demonstrated by simulating the free-vibration response of multi-degree-of-freedom models of cantilever beams.
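A minimal sketch of this time-stepping idea is given below for a single-degree-of-freedom placeholder model: the state-transition matrix over a finite time increment is the matrix exponential of A*dt, which reproduces the linear model's response, including its high-frequency content, exactly at the sampled instants.

```python
# Minimal sketch of the idea described above: advance a linear structural model
# x_dot = A x over finite time increments with the matrix exponential, which
# is exact at the sampled instants for the linear model.
import numpy as np
from scipy.linalg import expm

# 1-DOF cantilever-like oscillator in first-order form (placeholder values).
m, c, k = 1.0, 0.02, 400.0
A = np.array([[0.0, 1.0],
              [-k / m, -c / m]])

dt = 0.01
Phi = expm(A * dt)                   # state-transition matrix for one step dt

x = np.array([0.01, 0.0])            # initial displacement, zero velocity
history = []
for _ in range(1000):                # 10 s of free vibration
    history.append(x[0])
    x = Phi @ x
print(round(max(history), 4), round(min(history), 4))
```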
Heddam, Salim; Kisi, Ozgur
2017-07-01
In this paper, several extreme learning machine (ELM) models, including standard extreme learning machine with sigmoid activation function (S-ELM), extreme learning machine with radial basis activation function (R-ELM), online sequential extreme learning machine (OS-ELM), and optimally pruned extreme learning machine (OP-ELM), are newly applied for predicting dissolved oxygen concentration with and without water quality variables as predictors. Firstly, using data from eight United States Geological Survey (USGS) stations located in different rivers basins, USA, the S-ELM, R-ELM, OS-ELM, and OP-ELM were compared against the measured dissolved oxygen (DO) using four water quality variables, water temperature, specific conductance, turbidity, and pH, as predictors. For each station, we used data measured at an hourly time step for a period of 4 years. The dataset was divided into a training set (70%) and a validation set (30%). We selected several combinations of the water quality variables as inputs for each ELM model and six different scenarios were compared. Secondly, an attempt was made to predict DO concentration without water quality variables. To achieve this goal, we used the year numbers, 2008, 2009, etc., month numbers from (1) to (12), day numbers from (1) to (31) and hour numbers from (00:00) to (24:00) as predictors. Thirdly, the best ELM models were trained using validation dataset and tested with the training dataset. The performances of the four ELM models were evaluated using four statistical indices: the coefficient of correlation (R), the Nash-Sutcliffe efficiency (NSE), the root mean squared error (RMSE), and the mean absolute error (MAE). Results obtained from the eight stations indicated that: (i) the best results were obtained by the S-ELM, R-ELM, OS-ELM, and OP-ELM models having four water quality variables as predictors; (ii) out of eight stations, the OP-ELM performed better than the other three ELM models at seven stations while the R-ELM performed the best at one station. The OS-ELM models performed the worst and provided the lowest accuracy; (iii) for predicting DO without water quality variables, the R-ELM performed the best at seven stations followed by the S-ELM in the second place and the OP-ELM performed the worst with low accuracy; (iv) for the final application where training ELM models with validation dataset and testing with training dataset, the OP-ELM provided the best accuracy using water quality variables and the R-ELM performed the best at all eight stations without water quality variables. Fourthly, and finally, we compared the results obtained from different ELM models with those obtained using multiple linear regression (MLR) and multilayer perceptron neural network (MLPNN). Results obtained using MLPNN and MLR models reveal that: (i) using water quality variables as predictors, the MLR performed the worst and provided the lowest accuracy in all stations; (ii) MLPNN was ranked in the second place at two stations, in the third place at four stations, and finally, in the fourth place at two stations, (iii) for predicting DO without water quality variables, MLPNN is ranked in the second place at five stations, and ranked in the third, fourth, and fifth places in the remaining three stations, while MLR was ranked in the last place with very low accuracy at all stations. Overall, the results suggest that the ELM is more effective than the MLPNN and MLR for modelling DO concentration in river ecosystems.
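For reference, a minimal sketch of the common ELM core behind the variants compared above (random hidden-layer weights, sigmoid activation, least-squares output weights) is given below with synthetic stand-in predictors; it is not the authors' implementation and omits the OS-ELM and OP-ELM refinements.

```python
# Minimal sketch of the common ELM core behind the variants compared above:
# random hidden-layer weights, sigmoid activation, and output weights obtained
# by a least-squares (pseudoinverse) solve. Not the authors' implementation.
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y                   # least-squares output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

rng = np.random.default_rng(1)
# Stand-in predictors: water temperature, specific conductance, turbidity, pH.
X = rng.standard_normal((500, 4))
y = 8.0 - 0.5 * X[:, 0] + 0.2 * X[:, 3] + 0.1 * rng.standard_normal(500)  # DO proxy
model = elm_train(X[:350], y[:350])
pred = elm_predict(model, X[350:])
print(round(float(np.corrcoef(pred, y[350:])[0, 1]), 3))
```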
NASA Technical Reports Server (NTRS)
Santanello, Joseph A.; Peters-Lidard, Christa D.; Kennedy, Aaron D.; Kumar, Sujay; Dong, Xiquan
2011-01-01
Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of land surface and planetary boundary layer (PBL) temperature and moisture states and fluxes. In turn, these interactions regulate the strength of the connection between surface moisture and precipitation in a coupled system. To address deficiencies in numerical weather prediction and climate models due to improper treatment of L-A interactions, recent studies have focused on development of diagnostics to quantify the strength and accuracy of the land-PBL coupling at the process level. In this study, a diagnosis of the nature and impacts of local land-atmosphere coupling (LoCo) during dry and wet extreme conditions is presented using a combination of models and observations during the summers of 2006-07 in the U.S. Southern Great Plains. Specifically, the Weather Research and Forecasting (WRF) model has been coupled to NASA's Land Information System (LIS), which provides a flexible and high-resolution representation and initialization of land surface physics and states. A range of diagnostics exploring the links and feedbacks between soil moisture and precipitation are examined for the dry/wet regimes of this region, along with the behavior and accuracy of different land-PBL scheme couplings under these conditions. Results demonstrate how LoCo diagnostics can be applied to coupled model components in the context of their integrated impacts on the process chain connecting the land surface to the PBL and in support of hydrological anomalies.
Kumar, Yogaprakash; Yen, Shih-Cheng; Tay, Arthur; Lee, Wangwei; Gao, Fan; Zhao, Ziyi; Li, Jingze; Hon, Benjamin; Tian-Ma Xu, Tim; Cheong, Angela; Koh, Karen; Ng, Yee-Sien; Chew, Effie; Koh, Gerald
2015-02-01
Range-of-motion (ROM) assessment is a critical assessment tool during the rehabilitation process. The conventional approach uses the goniometer which remains the most reliable instrument but it is usually time-consuming and subject to both intra- and inter-therapist measurement errors. An automated wireless wearable sensor system for the measurement of ROM has previously been developed by the current authors. Presented is the correlation and accuracy of the automated wireless wearable sensor system against a goniometer in measuring ROM in the major joints of upper (UEs) and lower extremities (LEs) in 19 healthy subjects and 20 newly disabled inpatients through intra (same) subject comparison of ROM assessments between the sensor system against goniometer measurements by physical therapists. In healthy subjects, ROM measurements using the new sensor system were highly correlated with goniometry, with 95% of differences < 20° and 10° for most movements in major joints of UE and LE, respectively. Among inpatients undergoing rehabilitation, ROM measurements using the new sensor system were also highly correlated with goniometry, with 95% of the differences being < 20° and 25° for most movements in the major joints of UE and LE, respectively.
Latest performance of ArF immersion scanner NSR-S630D for high-volume manufacturing for 7nm node
NASA Astrophysics Data System (ADS)
Funatsu, Takayuki; Uehara, Yusaku; Hikida, Yujiro; Hayakawa, Akira; Ishiyama, Satoshi; Hirayama, Toru; Kono, Hirotaka; Shirata, Yosuke; Shibazaki, Yuichi
2015-03-01
In order to achieve stable operation in cutting-edge semiconductor manufacturing, Nikon has developed the NSR-S630D with extremely accurate overlay while maintaining throughput under various conditions resembling a real production environment. In addition, the NSR-S630D has been equipped with enhanced capabilities to maintain long-term overlay stability and improved user interfaces, all enabled by our newly developed application software platform. In this paper, we describe the most recent S630D performance under various conditions similar to real production. In a production environment, superior overlay accuracy under high-dose conditions and high throughput are often required; therefore, we have performed several experiments under high-dose conditions to demonstrate the NSR's thermal aberration capabilities in order to achieve world-class overlay performance. Furthermore, we will introduce our new software that enables long-term overlay performance.
Black-hole Binaries: Life Begins at 40 keV
NASA Astrophysics Data System (ADS)
Belloni, Tomaso M.; Motta, Sara
2009-05-01
In the study of black-hole transients, an important question that still needs to be answered is how the high-energy part of the spectrum evolves from the low-hard to the high-soft state, given that they have very different properties. Recent observations with RXTE and INTEGRAL have given inconsistent results. With RXTE, we have found that the high-energy cutoff in GX 339-4 during the transition first decreases (during the low-hard state), then increases again across the Hard-Intermediate state, to become unmeasurable in the soft states (possibly because of statistical limitations). We show that Simbol-X will be able to determine the spectral shape with superb accuracy. As the high-energy part of the spectrum is less well known than the part below 20 keV, Simbol-X will provide important results that will help our understanding of the extreme physical conditions in the vicinity of a stellar-mass black hole.
Focus drive mechanism for the IUE scientific instrument
NASA Technical Reports Server (NTRS)
Devine, E. J.; Dennis, T. B., Jr.
1977-01-01
A compact, lightweight mechanism was developed for in-orbit adjustment of the position of the secondary mirror (focusing) of the International Ultraviolet Explorer telescope. This device is a linear drive with small and highly repeatable step increments. Extremely close tolerances are also held in tilt and decentering. The unique mechanization is described with attention to the design details that contribute to positional accuracy. Lubrication, materials, thermal considerations, sealing, detenting against launch loads, and other features peculiar to flight hardware are discussed. The methods employed for mounting the low-expansion quartz mirror with minimum distortion are also given. Results of qualification and acceptance testing are included.
Testing approximations for non-linear gravitational clustering
NASA Technical Reports Server (NTRS)
Coles, Peter; Melott, Adrian L.; Shandarin, Sergei F.
1993-01-01
The accuracy of various analytic approximations for following the evolution of cosmological density fluctuations into the nonlinear regime is investigated. The Zel'dovich approximation is found to be consistently the best approximation scheme. It is extremely accurate for power spectra characterized by n = -1 or less; when the approximation is 'enhanced' by truncating highly nonlinear Fourier modes the approximation is excellent even for n = +1. The performance of linear theory is less spectrum-dependent, but this approximation is less accurate than the Zel'dovich one for all cases because of the failure to treat dynamics. The lognormal approximation generally provides a very poor fit to the spatial pattern.
Notes on the uwainat oil rim development, Maydan Mahzam and Bul Hanine Fields, offshore Qatar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamam, K.A.
As a result of reservoir simulation studies of the Uwainat reservoirs (Maydan Mahzam and Bul Hanine Fields), drilling to the Uwainat oil rim target became very "tight" with a very limited vertical tolerance. Drilling to this tight target requires a precise position of the well at the top of the Lower Arab IV reservoir (a reliable marker) and an accurate isochore of the Lower Arab IV - Uwainat. The discussion shows that the level of accuracy needed in determining both the actual subsea well position and in constructing the depth contours of the reservoirs is extremely high.
Model wall and recovery temperature effects on experimental heat transfer data analysis
NASA Technical Reports Server (NTRS)
Throckmorton, D. A.; Stone, D. R.
1974-01-01
Basic analytical procedures are used to illustrate, both qualitatively and quantitatively, the relative impact upon heat transfer data analysis of certain factors which may affect the accuracy of experimental heat transfer data. Inaccurate knowledge of adiabatic wall conditions results in a corresponding inaccuracy in the measured heat transfer coefficient. The magnitude of the resulting error is extreme for data obtained at wall temperatures approaching the adiabatic condition. High model wall temperatures and wall temperature gradients affect the level and distribution of heat transfer to an experimental model. The significance of each of these factors is examined and its impact upon heat transfer data analysis is assessed.
Laboratory testing of Alcoscan saliva-alcohol test strips
DOT National Transportation Integrated Search
1986-10-01
This report describes a laboratory evaluation of Alcoscan saliva-alcohol test strips. The objectives of this work were: (1) to determine the precision and accuracy of the Alcoscan strips; and (2) to determine what effect extreme ambient temperatures ...
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji
2017-01-01
In the multi-dimensional space-time conservation element and solution element (CESE) method, triangular and tetrahedral mesh elements turn out to be the most natural building blocks for 2D and 3D spatial grids, respectively. As such, the CESE method is naturally compatible with the simplest 2D and 3D unstructured grids and thus can be easily applied to solve problems with complex geometries. However, (a) accurate solution of a high-Reynolds-number flow field near a solid wall requires that the grid intervals along the direction normal to the wall be much finer than those in a direction parallel to the wall, so the use of grid cells with extremely high aspect ratio (10³ to 10⁶) may become mandatory; and (b) unlike for quadrilateral/hexahedral grids, it is well known that the accuracy of gradient computations involving triangular/tetrahedral grids tends to deteriorate rapidly as cell aspect ratio increases. As a result, the use of triangular/tetrahedral grid cells near a solid wall has long been deemed impractical by CFD researchers. In view of (a) the critical role played by triangular/tetrahedral grids in the CESE development, and (b) the importance of accurate resolution of the high-Reynolds-number flow field near a solid wall, as will be presented in the main paper, a comprehensive and rigorous mathematical framework that clearly identifies the reasons behind the accuracy deterioration described above has been developed for the 2D case involving triangular cells. By avoiding the pitfalls identified by the 2D framework, and its 3D extension, it has been shown numerically.
Adaptive Spontaneous Transitions between Two Mechanisms of Numerical Averaging.
Brezis, Noam; Bronfman, Zohar Z; Usher, Marius
2015-06-04
We investigated the mechanism with which humans estimate numerical averages. Participants were presented with 4, 8 or 16 (two-digit) numbers, serially and rapidly (2 numerals/second) and were instructed to convey the sequence average. As predicted by a dual, but not a single-component account, we found a non-monotonic influence of set-size on accuracy. Moreover, we observed a marked decrease in RT as set-size increases and RT-accuracy tradeoff in the 4-, but not in the 16-number condition. These results indicate that in accordance with the normative directive, participants spontaneously employ analytic/sequential thinking in the 4-number condition and intuitive/holistic thinking in the 16-number condition. When the presentation rate is extreme (10 items/sec) we find that, while performance still remains high, the estimations are now based on intuitive processing. The results are accounted for by a computational model postulating population-coding underlying intuitive-averaging and working-memory-mediated symbolic procedures underlying analytical-averaging, with flexible allocation between the two.
NASA Astrophysics Data System (ADS)
Chen, Alvin U.; Basaran, Osman A.
2000-11-01
Drop formation from a capillary --- dripping mode --- or an ink jet nozzle --- drop-on-demand (DOD) mode --- falls into a class of scientifically challenging yet practically useful free surface flows that exhibit a finite time singularity, i.e. the breakup of an initially single liquid mass into two or more fragments. While computational tools to model such problems have been developed recently, they lack the accuracy needed to quantitatively predict all the dynamics observed in experiments. Here we present a new finite element method (FEM) based on a robust algorithm for elliptic mesh generation and remeshing to handle extremely large interface deformations. The new algorithm allows continuation of computations beyond the first singularity to track fates of both primary and any satellite drops. The accuracy of the computations is demonstrated by comparison of simulations with experimental measurements made possible with an ultra high-speed digital imager capable of recording 100 million frames per second.
Effects of coating rectangular microscopic electrophoresis chamber with methylcellulose
NASA Technical Reports Server (NTRS)
Plank, L. D.
1985-01-01
One of the biggest problems in obtaining high accuracy in microscopic electrophoresis is the parabolic flow of liquid in the chamber due to electroosmotic backflow during application of the electric field. In chambers with glass walls the source of polarization leading to electroosmosis is the negative charge of the silicate and other ions that form the wall structure. It was found by Hjerten, who used a rotating 3.0 mm capillary tube for free zone electrophoresis, that precisely neutralizing this charge was extremely difficult, but if a neutral polymer matrix (formaldehyde-fixed methylcellulose) was formed over the glass (quartz) wall, the double layer was displaced and the viscosity at the shear plane increased so that electroosmotic flow could be eliminated. Experiments were designed to determine the reliability with which methylcellulose coating of the Zeiss Cytopherometer chamber reduced electroosmotic backflow and the effect of coating on the accuracy of cell electrophoretic mobility (EPM) determinations. Fixed rat erythrocytes (RBC) were used as test particles.
Wang, Feng-Fei; Luo, A-Li; Zhao, Yong-Heng
2014-02-01
The radial velocity of a star is very important for the study of the dynamical structure and chemical evolution of the Milky Way, and is also a useful tool for looking for variable or special objects. In the present work, we focus on calculating the radial velocity for low-resolution stellar spectra of different spectral types by adopting a template matching method, so as to provide an effective and reliable reference for different aspects of scientific research. We chose high signal-to-noise ratio (SNR) stellar spectra of different spectral types from the Sloan Digital Sky Survey (SDSS) and added different levels of noise to simulate stellar spectra with different SNRs. We then obtained the radial velocity measurement accuracy for stellar spectra of different spectral types at different SNRs by employing a template matching method. Meanwhile, the radial velocity measurement accuracy of white dwarf stars was analyzed as well. We concluded that the accuracy of radial velocity measurements for early-type stars is much lower than for late-type ones. For example, the 1-sigma standard error of radial velocity measurements of A-type stars is 5-8 times as large as that of K-type and M-type stars. We discuss the reason and suggest that the very narrow lines of late-type stars ensure the accuracy of the radial velocity measurement, while early-type stars with very wide Balmer lines, such as A-type stars, are sensitive to noise and yield low radial velocity accuracy. For the spectra of white dwarf stars, the standard error of the radial velocity measurement can be over 50 km s⁻¹ because of their extremely wide Balmer lines. The above conclusions provide a good reference for stellar scientific studies.
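The core of the template matching step above can be illustrated with a minimal Python sketch. It Doppler-shifts a rest-frame template over a grid of trial velocities and picks the shift that minimizes the squared residuals against the observed spectrum; the Gaussian absorption line, the noise level, the velocity grid, and all names here are illustrative assumptions, not the study's actual templates or matching metric.

```python
import numpy as np

C_KM_S = 299792.458  # speed of light in km/s

def radial_velocity(wave, flux, t_wave, t_flux, v_grid):
    """Estimate RV by least-squares template matching on the observed wavelength grid."""
    chi2 = []
    for v in v_grid:
        shifted = t_wave * (1.0 + v / C_KM_S)        # Doppler-shift the template
        model = np.interp(wave, shifted, t_flux)     # resample onto observed grid
        chi2.append(np.sum((flux - model) ** 2))
    return v_grid[int(np.argmin(chi2))]

# Toy example: a single Gaussian absorption line shifted by +60 km/s plus noise.
rng = np.random.default_rng(1)
t_wave = np.linspace(4800.0, 4900.0, 2000)
t_flux = 1.0 - 0.6 * np.exp(-0.5 * ((t_wave - 4861.0) / 0.5) ** 2)
obs_wave = t_wave
obs_flux = np.interp(obs_wave, t_wave * (1 + 60.0 / C_KM_S), t_flux)
obs_flux += 0.02 * rng.normal(size=obs_flux.size)    # roughly SNR ~ 50

v_grid = np.arange(-300.0, 300.0, 1.0)
print("estimated RV [km/s]:", radial_velocity(obs_wave, obs_flux, t_wave, t_flux, v_grid))
```

For spectra dominated by broad features such as wide Balmer lines, the minimum of this residual curve becomes shallow, which gives an intuitive picture of why the RV scatter grows for early-type and white dwarf spectra as noise increases.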
Analysis of Xrage and Flag High Explosive Burn Models with PBX 9404 Cylinder Tests
NASA Astrophysics Data System (ADS)
Harrier, Danielle; Fessenden, Julianna; Ramsey, Scott
2016-11-01
High explosives are energetic materials that release their chemical energy in a short interval of time. They are able to generate extreme heat and pressure by a shock-driven chemical decomposition reaction, which makes them valuable tools that must be understood. This study investigated the accuracy and performance of two Los Alamos National Laboratory hydrodynamic codes, which are used to determine the behavior of explosives within a variety of systems: xRAGE, which utilizes an Eulerian mesh, and FLAG, which utilizes a Lagrangian mesh. Various programmed and reactive burn models within both codes were tested, using a copper cylinder expansion test. The test was based on a recent experimental setup containing the plastic-bonded explosive PBX 9404. Detonation velocity versus time curves for this explosive were obtained from the experimental velocity data collected using Photon Doppler Velocimetry (PDV). The modeled results from each of the burn models tested were then compared to one another and to the experimental results using the Jones-Wilkins-Lee (JWL) equation of state parameters that were determined and adjusted from the experimental tests. This study is important for validating the accuracy of our high explosive burn models and the calibrated EOS parameters, which are important for many research topics in the physical sciences.
Zhou, Hong; Liu, Jing; Xu, Jing-Juan; Zhang, Shu-Sheng; Chen, Hong-Yuan
2018-03-21
Modern optical detection technology plays a critical role in current clinical detection due to its high sensitivity and accuracy. However, higher requirements such as extremely high detection sensitivity have been put forward due to the clinical need for the early detection and diagnosis of malignant tumors, which is significant for tumor therapy. The technology of isothermal amplification with nucleic acids opens up avenues for meeting this requirement. Recent reports have shown that a nucleic acid amplification-assisted modern optical sensing interface has achieved satisfactory sensitivity and accuracy, high speed and specificity. Compared with isothermal amplification technology designed to work completely in a solution system, solid biosensing interfaces demonstrate better performance in stability and sensitivity due to their ease of separation from the reaction mixture and the better signal transduction on these optical nano-biosensing interfaces. Also, the flexibility and designability during the construction of these nano-biosensing interfaces provide a promising research topic for the ultrasensitive detection of cancer diseases. In this review, we describe the construction of the burgeoning number of optical nano-biosensing interfaces assisted by a nucleic acid amplification strategy, and provide insightful views on: (1) approaches to the smart fabrication of an optical nano-biosensing interface, (2) biosensing mechanisms via the nucleic acid amplification method, and (3) the newest strategies and future perspectives.
Plant pathogen nanodiagnostic techniques: forthcoming changes?
Khiyami, Mohammad A.; Almoammar, Hassan; Awad, Yasser M.; Alghuthaymi, Mousa A.; Abd-Elsalam, Kamel A.
2014-01-01
Plant diseases are among the major factors limiting crop productivity. A first step towards managing a plant disease under greenhouse and field conditions is to correctly identify the pathogen. Current technologies, such as quantitative polymerase chain reaction (Q-PCR), require a relatively large amount of target tissue and rely on multiple assays to accurately identify distinct plant pathogens. The common disadvantage of the traditional diagnostic methods is that they are time consuming and lack high sensitivity. Consequently, developing low-cost methods to improve the accuracy and rapidity of plant pathogen diagnosis is needed. Nanotechnology, nanoparticles and quantum dots (QDs) have emerged as essential tools for fast detection of a particular biological marker with extreme accuracy. Biosensors, QDs, nanostructured platforms, nanoimaging and nanopore DNA sequencing tools have the potential to increase the sensitivity, specificity and speed of pathogen detection, facilitate high-throughput analysis, and be used for high-quality monitoring and crop protection. Furthermore, nanodiagnostic kit equipment can easily and quickly detect potential serious plant pathogens, allowing experts to help farmers in the prevention of epidemic diseases. The current review deals with the application of nanotechnology for quicker, more cost-effective and precise diagnostic procedures of plant diseases. Such an accurate technology may help to design a proper integrated disease management system which may modify crop environments to adversely affect crop pathogens. PMID:26740775
Atmospheric and Fog Effects on Ultra-Wide Band Radar Operating at Extremely High Frequencies.
Balal, Nezah; Pinhasi, Gad A; Pinhasi, Yosef
2016-05-23
The wide band at extremely high frequencies (EHF) above 30 GHz is applicable for high resolution directive radars, resolving the lack of free frequency bands within the lower part of the electromagnetic spectrum. Utilization of ultra-wideband signals in this EHF band is of interest, since it covers a relatively large spectrum, which is free of users, resulting in better resolution in both the longitudinal and transverse dimensions. Noting that frequencies in the millimeter band are subjected to high atmospheric attenuation and dispersion effects, a study of the degradation in the accuracy and resolution is presented. The fact that solid-state millimeter and sub-millimeter radiation sources are producing low power, the method of continuous-wave wideband frequency modulation becomes the natural technique for remote sensing and detection. Millimeter wave radars are used as complementary sensors for the detection of small radar cross-section objects under bad weather conditions, when small objects cannot be seen by optical cameras and infrared detectors. Theoretical analysis for the propagation of a wide "chirped" Frequency-Modulated Continuous-Wave (FMCW) radar signal in a dielectric medium is presented. It is shown that the frequency-dependent (complex) refractivity of the atmospheric medium causes distortions in the phase of the reflected signal, introducing noticeable errors in the longitudinal distance estimations, and at some frequencies may also degrade the resolution.
Atmospheric and Fog Effects on Ultra-Wide Band Radar Operating at Extremely High Frequencies
Balal, Nezah; Pinhasi, Gad A.; Pinhasi, Yosef
2016-01-01
The wide band at extremely high frequencies (EHF) above 30 GHz is applicable for high resolution directive radars, resolving the lack of free frequency bands within the lower part of the electromagnetic spectrum. Utilization of ultra-wideband signals in this EHF band is of interest, since it covers a relatively large spectrum, which is free of users, resulting in better resolution in both the longitudinal and transverse dimensions. Noting that frequencies in the millimeter band are subjected to high atmospheric attenuation and dispersion effects, a study of the degradation in the accuracy and resolution is presented. The fact that solid-state millimeter and sub-millimeter radiation sources are producing low power, the method of continuous-wave wideband frequency modulation becomes the natural technique for remote sensing and detection. Millimeter wave radars are used as complementary sensors for the detection of small radar cross-section objects under bad weather conditions, when small objects cannot be seen by optical cameras and infrared detectors. Theoretical analysis for the propagation of a wide “chirped” Frequency-Modulated Continuous-Wave (FMCW) radar signal in a dielectric medium is presented. It is shown that the frequency-dependent (complex) refractivity of the atmospheric medium causes distortions in the phase of the reflected signal, introducing noticeable errors in the longitudinal distance estimations, and at some frequencies may also degrade the resolution. PMID:27223286
Turbulence and secondary motions in square duct flow
NASA Astrophysics Data System (ADS)
Pirozzoli, Sergio; Modesti, Davide; Orlandi, Paolo; Grasso, Francesco
2017-11-01
We study turbulent flows in pressure-driven ducts with square cross-section through DNS up to Reτ ≈ 1050. Numerical simulations are carried out over extremely long integration times to get adequate convergence of the flow statistics, and specifically high-fidelity representation of the secondary motions which arise. The intensity of the latter is found to be on the order of 1-2% of the bulk velocity, and unaffected by Reynolds number variations. The smallness of the mean convection terms in the streamwise vorticity equation points to a simple characterization of the secondary flows, which in the asymptotic high-Re regime are found to be approximated with good accuracy by eigenfunctions of the Laplace operator. Despite their effect of redistributing the wall shear stress along the duct perimeter, we find that secondary motions do not have a large influence on the mean velocity field, which can be characterized with good accuracy as that resulting from the concurrent effect of four independent flat walls, each controlling a quarter of the flow domain. As a consequence, we find that parametrizations based on the hydraulic diameter concept, and modifications thereof, are successful in predicting the duct friction coefficient. This research was carried out using resources from PRACE EU Grants.
Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images
Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro; Aoki, Hiroshi; Takeuchi, Ken; Suzuki, Yasuo
2017-01-01
Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis. Thus, to assist UC diagnosis, it is necessary to develop a new technology that can retrieve past cases similar to the diagnostic target image from a store of diagnosed images showing various symptoms of the colonic mucosa. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy. PMID:28255295
Image Retrieval Method for Multiscale Objects from Optical Colonoscopy Images.
Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro; Aoki, Hiroshi; Takeuchi, Ken; Suzuki, Yasuo
2017-01-01
Optical colonoscopy is the most common approach to diagnosing bowel diseases through direct colon and rectum inspections. Periodic optical colonoscopy examinations are particularly important for detecting cancers at early stages while still treatable. However, diagnostic accuracy is highly dependent on both the experience and knowledge of the medical doctor. Moreover, it is extremely difficult, even for specialist doctors, to detect the early stages of cancer when obscured by inflammations of the colonic mucosa due to intractable inflammatory bowel diseases, such as ulcerative colitis. Thus, to assist UC diagnosis, it is necessary to develop a new technology that can retrieve past cases similar to the diagnostic target image from a store of diagnosed images showing various symptoms of the colonic mucosa. In order to assist diagnoses with optical colonoscopy, this paper proposes a retrieval method for colonoscopy images that can cope with multiscale objects. The proposed method can retrieve similar colonoscopy images despite varying visible sizes of the target objects. Through three experiments conducted with real clinical colonoscopy images, we demonstrate that the method is able to retrieve objects of any visible size and any location at a high level of accuracy.
HITRAP: A Facility for Experiments with Trapped Highly Charged Ions
NASA Astrophysics Data System (ADS)
Quint, W.; Dilling, J.; Djekic, S.; Häffner, H.; Hermanspahn, N.; Kluge, H.-J.; Marx, G.; Moore, R.; Rodriguez, D.; Schönfelder, J.; Sikler, G.; Valenzuela, T.; Verdú, J.; Weber, C.; Werth, G.
2001-01-01
HITRAP is a planned ion trap facility for capturing and cooling of highly charged ions produced at GSI in the heavy-ion complex of the UNILAC-SIS accelerators and the ESR storage ring. In this facility heavy highly charged ions up to uranium will be available as bare nuclei, hydrogen-like ions or few-electron systems at low temperatures. The trap for receiving and studying these ions is designed for operation at extremely high vacuum by cooling to cryogenic temperatures. The stored highly charged ions can be investigated in the trap itself or can be extracted from the trap at energies up to about 10 keV/q. The proposed physics experiments are collision studies with highly charged ions at well-defined low energies (eV/u), high-accuracy measurements to determine the g-factor of the electron bound in a hydrogen-like heavy ion and the atomic binding energies of few-electron systems, laser spectroscopy of HFS transitions and X-ray spectroscopy.
Performance of a new test strip for freestyle blood glucose monitoring systems.
Lock, John Paul; Brazg, Ronald; Bernstein, Robert M; Taylor, Elizabeth; Patel, Mona; Ward, Jeanne; Alva, Shridhara; Chen, Ting; Welsh, Zoë; Amor, Walter; Bhogal, Claire; Ng, Ronald
2011-01-01
A new strip, designed to enhance the ease of use and minimize interference of non-glucose sugars, has been developed to replace the current FreeStyle (Abbott Diabetes Care, Alameda, CA) blood glucose test strip. We evaluated the performance of this new strip. Laboratory evaluation included precision, linearity, dynamic range, effects of operating temperature, humidity, altitude, hematocrit, interferents, and blood reapplication. System accuracy, lay user performance, and ease of use for finger capillary blood testing and accuracy for venous blood testing were evaluated at clinics. Lay users also compared the speed and ease of use between the new strip and the current FreeStyle strip. For glucose concentrations <75 mg/dL, 73%, 100%, and 100% of the individual capillary blood glucose results obtained by lay users fell within ± 5, 10, and 15 mg/dL, respectively, of the reference. For glucose concentrations ≥75 mg/dL, 68%, 95%, 99%, and 99% of the lay user results fell within ± 5%, 10%, 15%, and 20%, respectively, of the reference. Comparable accuracy was obtained in the venous blood study. Lay users found the new test strip easy to use and faster and easier to use than the current FreeStyle strip. The new strip maintained accuracy under various challenging conditions, including high concentrations of various interferents, sample reapplication up to 60 s, and extremes in hematocrit, altitude, and operating temperature and humidity. Our results demonstrated excellent accuracy of the new FreeStyle test strip and validated the improvements in minimizing interference and enhancing ease of use.
Bayesian Estimation of Combined Accuracy for Tests with Verification Bias
Broemeling, Lyle D.
2011-01-01
This presentation will emphasize the estimation of the combined accuracy of two or more tests when verification bias is present. Verification bias occurs when some of the subjects do not receive the gold standard test. The approach is Bayesian, where the estimation of test accuracy is based on the posterior distribution of the relevant parameter. Accuracy of two combined binary tests is estimated employing either the “believe the positive” or the “believe the negative” rule, and then the true and false positive fractions for each rule are computed for the two tests. In order to perform the analysis, the missing-at-random assumption is imposed, and an interesting example is provided by estimating the combined accuracy of CT and MRI to diagnose lung cancer. The Bayesian approach is extended to two ordinal tests when verification bias is present, and the accuracy of the combined tests is based on the ROC area of the risk function. An example involving mammography with two readers with extreme verification bias illustrates the estimation of the combined test accuracy for ordinal tests. PMID:26859487
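As a quick numerical illustration of the two combination rules named above, the sketch below computes combined sensitivity and specificity under the "believe the positive" (positive if either test is positive) and "believe the negative" (positive only if both tests are positive) rules. It assumes conditional independence of the two tests given disease status, which is an illustrative simplification rather than the paper's Bayesian model, and the example sensitivities and specificities are made up.

```python
def combined_accuracy(se1, sp1, se2, sp2):
    """Combined sensitivity/specificity of two binary tests under the
    'believe the positive' (BP) and 'believe the negative' (BN) rules,
    assuming conditional independence given disease status."""
    bp_se = 1 - (1 - se1) * (1 - se2)   # BP: positive if either test is positive
    bp_sp = sp1 * sp2
    bn_se = se1 * se2                   # BN: positive only if both tests are positive
    bn_sp = 1 - (1 - sp1) * (1 - sp2)
    return {"BP": (bp_se, bp_sp), "BN": (bn_se, bn_sp)}

# Illustrative values only (e.g., two imaging tests such as CT and MRI).
print(combined_accuracy(se1=0.85, sp1=0.80, se2=0.90, sp2=0.75))
```

The BP rule trades specificity for sensitivity and the BN rule does the reverse, which is why the choice of rule matters when the two tests are combined.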
Advanced optical 3D scanners using DMD technology
NASA Astrophysics Data System (ADS)
Muenstermann, P.; Godding, R.; Hermstein, M.
2017-02-01
Optical 3D measurement techniques are state-of-the-art for highly precise, non-contact surface scanners - not only in industrial development, but also in near-production and even in-line configurations. The need for automated systems with very high accuracy and clear implementation of national precision standards is growing rapidly due to expanding international quality guidelines, increasing production transparency and new concepts related to the demands of the fourth industrial revolution. The presentation gives an overview of the present technical concepts for optical 3D scanners and their benefits for customers and various applications - not only in quality control, but also in design centers or in medical applications. The advantages of DMD-based systems will be discussed and compared to other approaches. Looking at today's 3D scanner market, there is a confusing number of solutions, varying from low-price solutions to high-end systems. Many of them are linked to a very special target group or to special applications. The article will clarify the differences between the approaches and will discuss some key features which are necessary to render optical measurement systems suitable for industrial environments. The paper concludes with examples of DMD-based systems, e.g., RGB true-color systems with very high accuracy like the StereoScan neo of AICON 3D Systems. Typical applications and the benefits for customers using such systems are described.
Monsoon Forecasting based on Imbalanced Classification Techniques
NASA Astrophysics Data System (ADS)
Ribera, Pedro; Troncoso, Alicia; Asencio-Cortes, Gualberto; Vega, Inmaculada; Gallego, David
2017-04-01
Monsoonal systems are quasiperiodic processes of the climatic system that control seasonal precipitation over different regions of the world. The Western North Pacific Summer Monsoon (WNPSM) is one of those monsoons, and it is known to have a great impact both on the global climate and on the total precipitation of very densely populated areas. The interannual variability of the WNPSM over the last 50-60 years has been related to different climatic indices such as El Niño, El Niño Modoki, the Indian Ocean Dipole or the Pacific Decadal Oscillation. Recently, a new and longer series characterizing the monthly evolution of the WNPSM, the WNP Directional Index (WNPDI), has been developed, extending its previous length from about 50 years to more than 100 years (1900-2007). Imbalanced classification techniques have been applied to the WNPDI in order to check the capability of traditional climate indices to capture and forecast the evolution of the WNPSM. The problem of forecasting has been transformed into a binary classification problem, in which the positive class represents the occurrence of an extreme monsoon event. Given that the number of extreme monsoons is much lower than the number of non-extreme monsoons, the resultant classification problem is highly imbalanced. The complete dataset is composed of 1296 instances, where only 71 (5.47%) samples correspond to extreme monsoons. Twenty predictor variables based on the cited climatic indices have been proposed, and models based on trees, black-box models such as neural networks, support vector machines and nearest neighbors, and ensemble-based techniques such as random forests have been used in order to forecast the occurrence of extreme monsoons. It can be concluded that the methodology proposed here reports promising results according to the quality parameters evaluated and predicts extreme monsoons for a temporal horizon of a month with high accuracy. From a climatological point of view, models based on trees show that the El Niño Modoki index in the months previous to an extreme monsoon acts as its best predictor. In most cases, the value of the Indian Ocean Dipole index acts as a second-order classifier. The El Niño index (more frequently) and the Pacific Decadal Oscillation index (in only one case) also modulate the intensity of the WNPSM in some cases.
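To make the imbalanced-classification setup above concrete, the following minimal scikit-learn sketch trains a class-weighted random forest on a synthetic stand-in dataset of the same shape (1296 samples, 20 predictors, a rare positive class); the data-generating rule, the choice of a random forest, and the hyperparameters are illustrative assumptions, not the ensemble configurations actually evaluated in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic stand-in for the monsoon dataset: 1296 months, 20 climate-index
# predictors, a small fraction labelled as extreme-monsoon months.
rng = np.random.default_rng(0)
X = rng.normal(size=(1296, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.5 * rng.normal(size=1296) > 2.3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# class_weight='balanced' reweights the rare positive class so the forest does
# not simply predict "non-extreme" everywhere.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```

With such skewed classes, per-class recall and precision (as printed above) are far more informative than overall accuracy, which is the usual motivation for imbalanced-classification techniques.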
NASA Astrophysics Data System (ADS)
Fonseca, P. A. M.
2015-12-01
Bacterial diarrheal diseases have a high incidence rate during and after flooding episodes. In the Brazilian Amazon, extreme flood events have become more frequent, leading to high incidence rates for infant diarrhea. In this study we aimed to find a statistical association between rainfall, river levels and diarrheal diseases in children under 5, in the river Acre basin, in the State of Acre (Brazil). We also aimed to identify the time lag and annual season of extreme rainfall and flooding in different cities in the water basin. The results using Tropical Rainfall Measuring Mission (TRMM) satellite rainfall data show the robustness of these estimates against on-ground observational stations. The Pearson correlation coefficient results (highest 0.35) indicate a time lag of up to 4 days in three of the cities in the water basin. In addition, a correlation was also tested between monthly accumulated rainfall and the diarrheal incidence during the rainy season (DJF). Correlation results were higher, especially in Acrelândia (0.7) and Brasiléia and Epitaciolândia (0.5). The correlation between water level monthly averages and diarrheal disease incidence was 0.3 and 0.5 in Brasiléia and Epitaciolândia. The time-lag evidence found in this paper is critical to inform stakeholders, local populations and civil defense authorities about the time available for preventive and adaptation measures between extreme rainfall and flooding events in vulnerable cities. This study was part of a pilot application in the state of Acre of the PULSE-Brazil project (http://www.pulse-brasil.org/tool/), an interface of climate, environmental and health data to support climate adaptation. The next step of this research is to expand the analysis to other climate variables on diarrheal diseases across the whole Brazilian Amazon Basin and estimate the relative risk (RR) of a child getting sick. A statistical model will estimate RR based on the observed values, and seasonal forecasts (higher accuracy for the Amazon region) will be used so the government can be prepared for forecasted extreme climate events. It is expected that these results can be helpful during and after extreme events to improve health surveillance preparedness and better allocate available resources in adapting vulnerable cities to extreme climate events.
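A minimal sketch of the time-lag analysis described above: it computes the Pearson correlation between a daily rainfall series and a daily case-count series for a range of lags and reports the lag with the strongest correlation. The synthetic series, the built-in 4-day delay, and all names are illustrative assumptions, not the TRMM or surveillance data used in the study.

```python
import numpy as np

def lagged_pearson(rainfall, cases, max_lag=10):
    """Pearson correlation between daily rainfall and case counts for each
    lag, with cases shifted 'lag' days after the rainfall."""
    out = {}
    for lag in range(max_lag + 1):
        n = len(rainfall) - lag
        out[lag] = np.corrcoef(rainfall[:n], cases[lag:lag + n])[0, 1]
    return out

# Synthetic daily series: case counts that respond to rainfall with a 4-day delay.
rng = np.random.default_rng(0)
rain = rng.gamma(shape=2.0, scale=5.0, size=365)
cases = 0.3 * np.roll(rain, 4) + rng.normal(scale=2.0, size=365)

corr = lagged_pearson(rain, cases, max_lag=8)
best = max(corr, key=corr.get)
print(f"best lag: {best} days, r = {corr[best]:.2f}")
```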
NASA Astrophysics Data System (ADS)
Griesbaum, Luisa; Marx, Sabrina; Höfle, Bernhard
2017-07-01
In recent years, the number of people affected by flooding caused by extreme weather events has increased considerably. In order to provide support in disaster recovery or to develop mitigation plans, accurate flood information is necessary. Particularly pluvial urban floods, characterized by high temporal and spatial variations, are not well documented. This study proposes a new, low-cost approach to determining local flood elevation and inundation depth of buildings based on user-generated flood images. It first applies close-range digital photogrammetry to generate a geo-referenced 3-D point cloud. Second, based on estimated camera orientation parameters, the flood level captured in a single flood image is mapped to the previously derived point cloud. The local flood elevation and the building inundation depth can then be derived automatically from the point cloud. The proposed method is carried out once for each of 66 different flood images showing the same building façade. An overall accuracy of 0.05 m with an uncertainty of ±0.13 m for the derived flood elevation within the area of interest as well as an accuracy of 0.13 m ± 0.10 m for the determined building inundation depth is achieved. Our results demonstrate that the proposed method can provide reliable flood information on a local scale using user-generated flood images as input. The approach can thus allow inundation depth maps to be derived even in complex urban environments with relatively high accuracies.
Solving the Rational Polynomial Coefficients Based on L Curve
NASA Astrophysics Data System (ADS)
Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.
2018-05-01
The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in the field of photogrammetry and remote sensing. The least squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equations becomes singular, and the normal equations become ill-conditioned. The obtained solutions are then extremely unstable and even wrong. Tikhonov regularization can effectively improve and solve ill-conditioned equations. In this paper, we solve the ill-conditioned equations by the regularization method and determine the regularization parameter by the L-curve. The results of the experiments on aerial frame photos show that the first-order RPC with equal denominators has the highest accuracy. A high-order RPC model is not necessary when processing frame images, as the RPC model and the projective model are almost the same. The result shows that the first-order RPC model is basically consistent with the strict sensor model of photogrammetry. Orthorectification results from both the first-order RPC model and the Camera Model (ERDAS 9.2 platform) are similar to each other, and the maximum residuals of X and Y are 0.8174 feet and 0.9272 feet respectively. This result shows that the RPC model can be used as a replacement sensor model in aerial photogrammetric processing.
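The regularization step described above can be sketched in a few lines of Python: a Tikhonov-regularized solve of the normal equations plus a simple L-curve corner search that picks the regularization parameter at the point of maximum curvature of the (log residual norm, log solution norm) curve. The ill-conditioned toy system, the lambda grid, and the discrete curvature estimate are illustrative assumptions, not the paper's RPC design matrix or its exact corner-finding procedure.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lambdas):
    """Pick lambda at the L-curve corner, approximated as the point of maximum
    curvature of the parametric curve (log residual norm, log solution norm)."""
    rho, eta = [], []
    for lam in lambdas:
        x = tikhonov_solve(A, b, lam)
        rho.append(np.log(np.linalg.norm(A @ x - b)))
        eta.append(np.log(np.linalg.norm(x)))
    rho, eta = np.array(rho), np.array(eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    return lambdas[int(np.argmax(kappa))]

# Ill-conditioned toy system standing in for over-parameterized RPC normal equations.
rng = np.random.default_rng(0)
A = rng.normal(size=(100, 20))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=100)     # near-collinear columns
x_true = rng.normal(size=20)
b = A @ x_true + 0.01 * rng.normal(size=100)

lambdas = np.logspace(-8, 2, 200)
lam = l_curve_corner(A, b, lambdas)
x_reg = tikhonov_solve(A, b, lam)
print("chosen lambda:", lam,
      " relative error:", np.linalg.norm(x_reg - x_true) / np.linalg.norm(x_true))
```

The L-curve corner balances the two competing norms: too small a lambda lets the near-singular normal matrix amplify noise, while too large a lambda over-smooths the solution.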
2010-01-01
With the globalization of occupational health psychology, more and more researchers are interested in applying employee well-being like work engagement (i.e., a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption) to diverse populations. Accurate measurement contributes to our further understanding and to the generalizability of the concept of work engagement across different cultures. The present study investigated the measurement accuracy of the Japanese and the original Dutch versions of the Utrecht Work Engagement Scale (9-item version, UWES-9) and the comparability of this scale between both countries. Item Response Theory (IRT) was applied to the data from Japan (N = 2,339) and the Netherlands (N = 13,406). Reliability of the scale was evaluated at various levels of the latent trait (i.e., work engagement) based on the test information function (TIF) and the standard error of measurement (SEM). The Japanese version had difficulty in differentiating respondents with extremely low work engagement, whereas the original Dutch version had difficulty in differentiating respondents with high work engagement. The measurement accuracy of both versions was not similar. Suppression of positive affect among Japanese people and self-enhancement (the general sensitivity to positive self-relevant information) among Dutch people may have caused decreased measurement accuracy. Hence, we should be cautious when interpreting low engagement scores among Japanese as well as high engagement scores among western employees. PMID:21054839
Shimazu, Akihito; Schaufeli, Wilmar B; Miyanaka, Daisuke; Iwata, Noboru
2010-11-05
With the globalization of occupational health psychology, more and more researchers are interested in applying employee well-being like work engagement (i.e., a positive, fulfilling, work-related state of mind that is characterized by vigor, dedication, and absorption) to diverse populations. Accurate measurement contributes to our further understanding and to the generalizability of the concept of work engagement across different cultures. The present study investigated the measurement accuracy of the Japanese and the original Dutch versions of the Utrecht Work Engagement Scale (9-item version, UWES-9) and the comparability of this scale between both countries. Item Response Theory (IRT) was applied to the data from Japan (N = 2,339) and the Netherlands (N = 13,406). Reliability of the scale was evaluated at various levels of the latent trait (i.e., work engagement) based on the test information function (TIF) and the standard error of measurement (SEM). The Japanese version had difficulty in differentiating respondents with extremely low work engagement, whereas the original Dutch version had difficulty in differentiating respondents with high work engagement. The measurement accuracy of both versions was not similar. Suppression of positive affect among Japanese people and self-enhancement (the general sensitivity to positive self-relevant information) among Dutch people may have caused decreased measurement accuracy. Hence, we should be cautious when interpreting low engagement scores among Japanese as well as high engagement scores among western employees.
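The TIF/SEM relationship used in the study above can be illustrated with a small sketch. Because the UWES-9 items are polytomous, the study presumably used a polytomous IRT model; the sketch below uses the simpler two-parameter logistic (2PL) item information purely to show how item information sums to the test information function and how SEM = 1/sqrt(TIF) varies across the latent trait. The nine item parameters are hypothetical.

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item at latent trait value theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

# Hypothetical discrimination (a) and location (b) parameters for 9 items.
a = np.array([1.2, 1.5, 0.9, 1.8, 1.1, 1.4, 1.0, 1.6, 1.3])
b = np.array([-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0])

theta = np.linspace(-4, 4, 9)
tif = sum(item_information_2pl(theta, ai, bi) for ai, bi in zip(a, b))  # test information
sem = 1.0 / np.sqrt(tif)                                                # standard error of measurement

for t, i, s in zip(theta, tif, sem):
    print(f"theta={t:+.1f}  TIF={i:5.2f}  SEM={s:4.2f}")
```

A scale whose items cluster at one end of the trait continuum yields low TIF (and hence large SEM) at the opposite end, which is the mechanism behind the version-specific weaknesses reported above.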
Albadr, Musatafa Abbas Abbood; Tiun, Sabrina; Al-Dhief, Fahad Taha; Sammour, Mahmoud A M
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. The extraction of features for LID, based on the literature, is a mature process where the standard features for LID have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and ending with the i-vector based framework. However, the process of learning based on extracted features remains to be improved (i.e. optimised) to capture all embedded knowledge in the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful to train a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results of the study showed excellent superiority in the performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%.
Tiun, Sabrina; AL-Dhief, Fahad Taha; Sammour, Mahmoud A. M.
2018-01-01
Spoken Language Identification (LID) is the process of determining and classifying natural language from a given content and dataset. Typically, data must be processed to extract useful features to perform LID. The extraction of features for LID, based on the literature, is a mature process where the standard features for LID have already been developed using Mel-Frequency Cepstral Coefficients (MFCC), Shifted Delta Cepstral (SDC), the Gaussian Mixture Model (GMM) and ending with the i-vector based framework. However, the process of learning based on extracted features remains to be improved (i.e. optimised) to capture all embedded knowledge in the extracted features. The Extreme Learning Machine (ELM) is an effective learning model used to perform classification and regression analysis and is extremely useful to train a single hidden layer neural network. Nevertheless, the learning process of this model is not entirely effective (i.e. optimised) due to the random selection of weights within the input hidden layer. In this study, the ELM is selected as a learning model for LID based on standard feature extraction. One of the optimisation approaches of ELM, the Self-Adjusting Extreme Learning Machine (SA-ELM), is selected as the benchmark and improved by altering the selection phase of the optimisation process. The selection process is performed incorporating both the Split-Ratio and K-Tournament methods, and the improved SA-ELM is named the Enhanced Self-Adjusting Extreme Learning Machine (ESA-ELM). The results are generated based on LID with datasets created from eight different languages. The results of the study showed excellent superiority in the performance of the Enhanced Self-Adjusting Extreme Learning Machine LID (ESA-ELM LID) compared with the SA-ELM LID, with ESA-ELM LID achieving an accuracy of 96.25%, as compared to the accuracy of SA-ELM LID of only 95.00%. PMID:29672546
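The K-Tournament step named above is a standard selection operator from evolutionary optimisation; the abstract does not spell out how ESA-ELM applies it, so the sketch below only shows generic K-tournament selection over a toy population of candidate solutions (the population, the fitness values, and their interpretation as candidate hidden-layer weight vectors are assumptions for illustration).

```python
import numpy as np

def k_tournament_select(population, fitness, k, rng):
    """Pick one parent by K-tournament: sample k candidates at random and
    keep the fittest (higher fitness is better)."""
    idx = rng.choice(len(population), size=k, replace=False)
    return population[idx[np.argmax(fitness[idx])]]

# Toy population of candidate solutions and their fitness values
# (e.g., validation accuracy of the corresponding ELM) -- synthetic here.
rng = np.random.default_rng(0)
population = rng.normal(size=(20, 5))      # 20 candidates, 5 genes each
fitness = rng.uniform(size=20)

parent = k_tournament_select(population, fitness, k=3, rng=rng)
print("selected parent:", parent)
```

Larger k increases selection pressure toward the current best candidates, while smaller k preserves diversity; tuning this trade-off is typically the point of altering the selection phase.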
NASA Astrophysics Data System (ADS)
Murata, C. H.; Fernandes, D. C.; Lavínia, N. C.; Caldas, L. V. E.; Pires, S. R.; Medeiros, R. B.
2014-02-01
The performance of radiological equipment can be assessed using non-invasive methods and portable instruments that can analyze an X-ray beam with just one exposure. These instruments use either an ionization chamber or a solid-state detector (SSD) to evaluate X-ray beam parameters. In Brazil, no such instruments are currently being manufactured; consequently, these instruments come at a higher cost to users due to importation taxes. Additionally, quality control tests are time consuming and impose a high workload on the X-ray tubes when evaluating their performance parameters. The assessment of some parameters, such as the half-value layer (HVL), requires several exposures; however, this can be reduced by using an SSD that requires only a single exposure. One such SSD uses photodiodes designed for high X-ray sensitivity without the use of scintillation crystals. This sensitivity allows one electron-hole pair to be created per 3.63 eV of incident energy, resulting in extremely high and stable quantum efficiencies. These silicon photodiodes operate by absorbing photons and generating a flow of current that is proportional to the incident power. The aim of this study was to show the response of the solid sensor PIN RD100A detector in a multifunctional X-ray analysis system that is designed to evaluate the average peak voltage (kVp), exposure time, and HVL of radiological equipment. For this purpose, a prototype board that uses four SSDs was developed to measure kVp, exposure time, and HVL using a single exposure. The reproducibility and accuracy of the results were compared to those of different X-ray beam analysis instruments. The kVp reproducibility and accuracy results were 2% and 3%, respectively; the exposure time reproducibility and accuracy results were 2% and 1%, respectively; and the HVL accuracy was ±2%. The prototype's methodology was able to calculate these parameters with appropriate reproducibility and accuracy. Therefore, the prototype can be considered a multifunctional instrument that can appropriately evaluate the performance of radiological equipment.
Precision and accuracy of 3D lower extremity residua measurement systems
NASA Astrophysics Data System (ADS)
Commean, Paul K.; Smith, Kirk E.; Vannier, Michael W.; Hildebolt, Charles F.; Pilgram, Thomas K.
1996-04-01
Accurate and reproducible geometric measurement of lower extremity residua is required for custom prosthetic socket design. We compared spiral x-ray computed tomography (SXCT) and 3D optical surface scanning (OSS) with caliper measurements and evaluated the precision and accuracy of each system. Spiral volumetric CT scanned surface and subsurface information was used to make external and internal measurements, and finite element models (FEMs). SXCT and OSS were used to measure lower limb residuum geometry of 13 below knee (BK) adult amputees. Six markers were placed on each subject's BK residuum and corresponding plaster casts and distance measurements were taken to determine precision and accuracy for each system. Solid models were created from spiral CT scan data sets with the prosthesis in situ under different loads using p-version finite element analysis (FEA). Tissue properties of the residuum were estimated iteratively and compared with values taken from the biomechanics literature. The OSS and SXCT measurements were precise within 1% in vivo and 0.5% on plaster casts, and accuracy was within 3.5% in vivo and 1% on plaster casts compared with caliper measures. Three-dimensional optical surface and SXCT imaging systems are feasible for capturing the comprehensive 3D surface geometry of BK residua, and provide distance measurements statistically equivalent to calipers. In addition, SXCT can readily distinguish internal soft tissue and bony structure of the residuum. FEM can be applied to determine tissue material properties interactively using inverse methods.
Landenburger, L.; Lawrence, R.L.; Podruzny, S.; Schwartz, C.C.
2008-01-01
Moderate resolution satellite imagery traditionally has been thought to be inadequate for mapping vegetation at the species level. This has made comprehensive mapping of regional distributions of sensitive species, such as whitebark pine, either impractical or extremely time consuming. We sought to determine whether using a combination of moderate resolution satellite imagery (Landsat Enhanced Thematic Mapper Plus), extensive stand data collected by land management agencies for other purposes, and modern statistical classification techniques (boosted classification trees) could result in successful mapping of whitebark pine. Overall classification accuracies exceeded 90%, with similar individual class accuracies. Accuracies on a localized basis varied based on elevation. Accuracies also varied among administrative units, although we were not able to determine whether these differences related to inherent spatial variations or differences in the quality of available reference data.
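A minimal sketch of the classification step described above, using scikit-learn's GradientBoostingClassifier as a stand-in for the boosted classification trees mentioned; the synthetic reflectance-plus-elevation predictors, the labelling rule, and the hyperparameters are assumptions for illustration, not the study's actual Landsat ETM+ bands or model settings.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in: six band reflectances plus elevation as predictors,
# binary label = target species present / absent (toy decision rule).
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 7))
y = ((X[:, 3] - X[:, 2] > 0.1) & (X[:, 6] > 0.6)).astype(int)

clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=3)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.3f} +/- {acc.std():.3f}")
```

In practice the reference labels would come from the agency stand data, and per-class accuracies (not just the overall figure printed here) would be reported, as the abstract notes.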
Overcorrection for Social-Categorization Information Moderates Impact Bias in Affective Forecasting.
Lau, Tatiana; Morewedge, Carey K; Cikara, Mina
2016-10-01
Plural societies require individuals to forecast how others-both in-group and out-group members-will respond to gains and setbacks. Typically, correcting affective forecasts to include more relevant information improves their accuracy by reducing their extremity. In contrast, we found that providing affective forecasters with social-category information about their targets made their forecasts more extreme and therefore less accurate. In both political and sports contexts, forecasters across five experiments exhibited greater impact bias for both in-group and out-group members (e.g., a Democrat or Republican) than for unspecified targets when predicting experiencers' responses to positive and negative events. Inducing time pressure reduced the extremity of forecasts for group-labeled but not unspecified targets, which suggests that the increased impact bias was due to overcorrection for social-category information, not different intuitive predictions for identified targets. Finally, overcorrection was better accounted for by stereotypes than by spontaneous retrieval of extreme group exemplars.
NASA Technical Reports Server (NTRS)
Hock, R. A.; Woods, T. N.; Crotser, D.; Eparvier, F. G.; Woodraska, D. L.; Chamberlin, P. C.; Woods, E. C.
2010-01-01
The NASA Solar Dynamics Observatory (SDO), scheduled for launch in early 2010, incorporates a suite of instruments including the Extreme Ultraviolet Variability Experiment (EVE). EVE has multiple instruments including the Multiple Extreme ultraviolet Grating Spectrographs (MEGS) A, B, and P instruments, the Solar Aspect Monitor (SAM), and the Extreme ultraviolet SpectroPhotometer (ESP). The radiometric calibration of EVE, necessary to convert the instrument counts to physical units, was performed at the National Institute of Standards and Technology (NIST) Synchrotron Ultraviolet Radiation Facility (SURF III) located in Gaithersburg, Maryland. This paper presents the results and derived accuracy of this radiometric calibration for the MEGS A, B, P, and SAM instruments, while the calibration of the ESP instrument is addressed by Didkovsky et al. In addition, solar measurements that were taken on 14 April 2008, during the NASA 36.240 sounding-rocket flight, are shown for the prototype EVE instruments.
Assessment of a climate model to reproduce rainfall variability and extremes over Southern Africa
NASA Astrophysics Data System (ADS)
Williams, C. J. R.; Kniveton, D. R.; Layberry, R.
2010-01-01
It is increasingly accepted that any possible climate change will not only have an influence on mean climate but may also significantly alter climatic variability. A change in the distribution and magnitude of extreme rainfall events (associated with changing variability), such as droughts or flooding, may have a far greater impact on human and natural systems than a changing mean. This issue is of particular importance for environmentally vulnerable regions such as southern Africa. The sub-continent is considered especially vulnerable to and ill-equipped (in terms of adaptation) for extreme events, due to a number of factors including extensive poverty, famine, disease and political instability. Rainfall variability and the identification of rainfall extremes are a function of scale, so high spatial and temporal resolution data are preferred to identify extreme events and accurately predict future variability. The majority of previous climate model verification studies have compared model output with observational data at monthly timescales. In this research, the assessment of the ability of a state-of-the-art climate model to simulate climate at daily timescales is carried out using satellite-derived rainfall data from the Microwave Infrared Rainfall Algorithm (MIRA). This dataset covers the period from 1993 to 2002 and the whole of southern Africa at a spatial resolution of 0.1° longitude/latitude. This paper concentrates primarily on the ability of the model to simulate the spatial and temporal patterns of present-day rainfall variability over southern Africa and is not intended to discuss possible future changes in climate as these have been documented elsewhere. Simulations of current climate from the UK Meteorological Office Hadley Centre's climate model, in both regional and global mode, are firstly compared to the MIRA dataset at daily timescales. Secondly, the ability of the model to reproduce daily rainfall extremes is assessed, again by a comparison with extremes from the MIRA dataset. The results suggest that the model reproduces the number and spatial distribution of rainfall extremes with some accuracy, but that mean rainfall and rainfall variability are under-estimated (over-estimated) over wet (dry) regions of southern Africa.
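One simple way to carry out the extremes comparison described above is to count, in both the modelled and the observed daily series, the days exceeding a high quantile of the rainfall distribution. The sketch below does this for two synthetic daily series; the gamma-distributed stand-in data, the 95th-percentile threshold, and the per-grid-cell framing are assumptions for illustration, not the study's actual definition of an extreme event.

```python
import numpy as np

def extreme_day_counts(daily_rain, quantile=0.95):
    """Return the quantile threshold and the number of days exceeding it."""
    threshold = np.quantile(daily_rain, quantile)
    return threshold, int(np.sum(daily_rain > threshold))

# Synthetic daily series standing in for one observed grid cell and the
# matching model grid cell over a ten-year period (~3650 days).
rng = np.random.default_rng(0)
observed = rng.gamma(shape=0.4, scale=6.0, size=3650)
modelled = rng.gamma(shape=0.5, scale=4.0, size=3650)

for name, series in [("observed", observed), ("model", modelled)]:
    thr, n = extreme_day_counts(series)
    print(f"{name}: 95th-percentile threshold {thr:.1f} mm/day, {n} extreme days")
```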
Rapid induction of false memory for pictures.
Weinstein, Yana; Shanks, David R
2010-07-01
Recognition of pictures is typically extremely accurate, and it is thus unclear whether the reconstructive nature of memory can yield substantial false recognition of highly individuated stimuli. A procedure for the rapid induction of false memories for distinctive colour photographs is proposed. Participants studied a set of object pictures followed by a list of words naming those objects, but embedded in the list were names of unseen objects. When subsequently shown full-colour pictures of these unseen objects, participants consistently claimed that they had seen them, while discriminating with high accuracy between studied pictures and new pictures whose names did not appear in the misleading word list. These false memories were reported with high confidence and were accompanied by a feeling of recollection. This new procedure allows the investigation of factors that influence false memory reports with ecologically valid stimuli, and of the similarities and differences between true and false memories.
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator facility of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum, lower than 10^-11 mbar, to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required pressure, the benchmarked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. To ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK verified, the pressure profiles of the BRing are calculated for different parameters such as conductance, outgassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
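The kind of calculation performed by VAKTRAK or Molflow+ can be illustrated, in greatly simplified form, by the textbook one-dimensional molecular-flow balance for a uniformly outgassing pipe pumped at both ends. The sketch below is not either code; the conductance, outgassing rate and pumping speed are illustrative values, not BRing design parameters.

```python
import numpy as np

# Illustrative chamber parameters (not the actual BRing design).
L = 10.0    # chamber length between pumps [m]
c = 50.0    # specific conductance of the pipe [l·m/s]
a = 1e-9    # specific outgassing rate [mbar·l/(s·m)]
S = 500.0   # pumping speed of the pump at each end [l/s]
N = 201     # number of grid nodes
dz = L / (N - 1)

# Steady-state molecular-flow balance  c * d^2P/dz^2 + a = 0,
# with lumped pumps at both ends removing a throughput S*P_end.
A = np.zeros((N, N))
b = np.full(N, -a * dz**2 / c)          # interior outgassing source term
for i in range(1, N - 1):
    A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
# Boundary nodes: tube flux plus half-cell outgassing balances the pump throughput.
A[0, 0], A[0, 1] = -(c / dz + S), c / dz
b[0] = -a * dz / 2
A[-1, -1], A[-1, -2] = -(c / dz + S), c / dz
b[-1] = -a * dz / 2

P = np.linalg.solve(A, b)               # pressure profile [mbar]
print(f"P at pump: {P[0]:.3e} mbar, P at mid-chamber: {P[N // 2]:.3e} mbar")
```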
Using the transit of Venus to probe the upper planetary atmosphere.
Reale, Fabio; Gambino, Angelo F; Micela, Giuseppina; Maggio, Antonio; Widemann, Thomas; Piccioni, Giuseppe
2015-06-23
During a planetary transit, atoms with high atomic number absorb short-wavelength radiation in the upper atmosphere, and the planet should appear larger during a primary transit observed in high-energy bands than in the optical band. Here we measure the radius of Venus with subpixel accuracy during the transit in 2012 observed in the optical, ultraviolet and soft X-rays with Hinode and Solar Dynamics Observatory missions. We find that, while Venus's optical radius is about 80 km larger than the solid body radius (the top of clouds and haze), the radius increases further by >70 km in the extreme ultraviolet and soft X-rays. This measures the altitude of the densest ion layers of Venus's ionosphere (CO2 and CO), useful for planning missions in situ, and a benchmark case for detecting transits of exoplanets in high-energy bands with future missions, such as the ESA Athena.
Rakić, Aleksandar D; Taimre, Thomas; Bertling, Karl; Lim, Yah Leng; Dean, Paul; Indjin, Dragan; Ikonić, Zoran; Harrison, Paul; Valavanis, Alexander; Khanna, Suraj P; Lachab, Mohammad; Wilson, Stephen J; Linfield, Edmund H; Davies, A Giles
2013-09-23
The terahertz (THz) frequency quantum cascade laser (QCL) is a compact source of high-power radiation with a narrow intrinsic linewidth. As such, THz QCLs are extremely promising sources for applications including high-resolution spectroscopy, heterodyne detection, and coherent imaging. We exploit the remarkable phase-stability of THz QCLs to create a coherent swept-frequency delayed self-homodyning method for both imaging and materials analysis, using laser feedback interferometry. Using our scheme we obtain amplitude-like and phase-like images with minimal signal processing. We determine the physical relationship between the operating parameters of the laser under feedback and the complex refractive index of the target and demonstrate that this coherent detection method enables extraction of complex refractive indices with high accuracy. This establishes an ultimately compact and easy-to-implement THz imaging and materials analysis system, in which the local oscillator, mixer, and detector are all combined into a single laser.
Skin-like biosensor system via electrochemical channels for noninvasive blood glucose monitoring.
Chen, Yihao; Lu, Siyuan; Zhang, Shasha; Li, Yan; Qu, Zhe; Chen, Ying; Lu, Bingwei; Wang, Xinyan; Feng, Xue
2017-12-01
Currently, noninvasive glucose monitoring is not widely appreciated because of its uncertain measurement accuracy, weak blood glucose correlation, and inability to detect hyperglycemia/hypoglycemia during sleep. We present a strategy to design and fabricate a skin-like biosensor system for noninvasive, in situ, and highly accurate intravascular blood glucose monitoring. The system integrates an ultrathin skin-like biosensor with paper battery-powered electrochemical twin channels (ETCs). The designed subcutaneous ETCs drive intravascular blood glucose out of the vessel and transport it to the skin surface. The ultrathin (~3 μm) nanostructured biosensor, with high sensitivity (130.4 μA/mM), fully absorbs and measures the glucose, owing to its extreme conformability. We conducted in vivo human clinical trials. The noninvasive measurement results for intravascular blood glucose showed a high correlation (>0.9) with clinically measured blood glucose levels. The system opens up new prospects for clinical-grade noninvasive continuous glucose monitoring.
Skin-like biosensor system via electrochemical channels for noninvasive blood glucose monitoring
Chen, Yihao; Lu, Siyuan; Zhang, Shasha; Li, Yan; Qu, Zhe; Chen, Ying; Lu, Bingwei; Wang, Xinyan; Feng, Xue
2017-01-01
Currently, noninvasive glucose monitoring is not widely appreciated because of its uncertain measurement accuracy, weak blood glucose correlation, and inability to detect hyperglycemia/hypoglycemia during sleep. We present a strategy to design and fabricate a skin-like biosensor system for noninvasive, in situ, and highly accurate intravascular blood glucose monitoring. The system integrates an ultrathin skin-like biosensor with paper battery–powered electrochemical twin channels (ETCs). The designed subcutaneous ETCs drive intravascular blood glucose out of the vessel and transport it to the skin surface. The ultrathin (~3 μm) nanostructured biosensor, with high sensitivity (130.4 μA/mM), fully absorbs and measures the glucose, owing to its extreme conformability. We conducted in vivo human clinical trials. The noninvasive measurement results for intravascular blood glucose showed a high correlation (>0.9) with clinically measured blood glucose levels. The system opens up new prospects for clinical-grade noninvasive continuous glucose monitoring. PMID:29279864
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goncharov, A F; Zaug, J M; Crowhurst, J C
2005-01-27
We present here the summary of the results of our studies using the APS synchrotron beamline IDB Sector 16 (HPCAT). Optical calibration of pressure sensors for high pressures and temperatures: The high-pressure ruby scale for static measurements is well established to at least 100 GPa (about 5% accuracy); however, common use of this and other pressure scales at high temperature is clearly based upon unconfirmed assumptions, namely that high temperature does not affect observed room-temperature pressure derivatives. The establishment of a rigorous pressure scale, along with the identification of appropriate pressure gauges (i.e., stable in the high P-T environment and easy to use), is important for securing the absolute accuracy of fundamental experimental science, where results guide the development of our understanding of planetary sciences, geophysics, chemistry at extreme conditions, etc. X-ray diffraction in formic acid under high pressure: Formic acid (HCOOH) is common in the solar system; it is a potential component of the Galilean satellites. Despite this, formic acid has not been well studied at high temperatures and pressures. A phase diagram of formic acid at planetary interior pressures and temperatures will add to the understanding of planetary formation and the potential for life on Europa. Formic acid (unlike most simple organic acids) forms low-temperature crystal structures characterized by infinite hydrogen-bonded chains of molecules. The behavior of these hydrogen bonds at high pressure is of great interest. Our current research fills this need.
High contrast stellar observations within the diffraction limit at the Palomar Hale telescope
NASA Astrophysics Data System (ADS)
Mennesson, B.; Hanot, C.; Serabyn, E.; Martin, S. R.; Liewer, K.; Loya, F.; Mawet, D.
2010-07-01
We report on high-accuracy, high-resolution (<20 mas) stellar measurements obtained in the near infrared (2.2 microns) at the Palomar 200 inch telescope using two elliptical (3m x 1.5m) sub-apertures located 3.4m apart. Our interferometric coronagraph, known as the "Palomar Fiber Nuller" (PFN), is located downstream of the Palomar adaptive optics (AO) system and recombines the two separate beams into a common single-mode fiber. The AO system acts as a "fringe tracker", maintaining the optical path difference (OPD) between the beams around an adjustable value, which is set to the central dark interference fringe. AO correction ensures high efficiency and stable injection of the beams into the single-mode fiber. A chopper wheel and a fast photometer are used to record short (<50 ms per beam) interleaved sequences of background, individual beam and interferometric signals. In order to analyze these chopped null data sequences, we developed a new statistical method, baptized "Null Self-Calibration" (NSC), which provides astrophysical null measurements at the 0.001 level, with 1σ uncertainties as low as 0.0003. Such accuracy translates into a dynamic range greater than 1000:1 within the diffraction limit, demonstrating that the approach effectively bridges the traditional gap between regular coronagraphs, limited in angular resolution, and long baseline visibility interferometers, whose dynamic range is restricted to 100:1. As our measurements are extremely sensitive to the brightness distribution very close to the optical axis, we were able to constrain the stellar diameters and amounts of circumstellar emission for a sample of very bright stars. With the improvement expected when the PALM-3000 extreme AO system comes on-line at Palomar, the same instrument, now equipped with a state-of-the-art low-noise fast read-out near-IR camera, will yield 10^-4 to 10^-3 contrast as close as 30 mas for stars with K magnitude brighter than 6. Such a system will provide a unique and ideal tool for the detection of young (<100 Myr) self-luminous planets and hot debris disks in the immediate vicinity (0.1 to a few AUs) of nearby (<50 pc) stars.
Genomic Prediction Accounting for Residual Heteroskedasticity
Ou, Zhining; Tempelman, Robert J.; Steibel, Juan P.; Ernst, Catherine W.; Bates, Ronald O.; Bello, Nora M.
2015-01-01
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. PMID:26564950
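The predictive-ability metric used here, the correlation between predicted and observed phenotypes in a five-fold cross-validation, can be sketched with a simple ridge-regression (RR-BLUP-like) predictor standing in for the paper's hierarchical Bayesian WGP models. The marker data, effect sizes and homogeneous residual variance below are all assumed for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic marker data: 500 animals x 1000 SNPs coded 0/1/2, 50 causal markers.
n, p = 500, 1000
X = rng.integers(0, 3, size=(n, p)).astype(float)
beta = np.zeros(p)
beta[rng.choice(p, 50, replace=False)] = rng.normal(0, 0.5, 50)
y = X @ beta + rng.normal(0, 2.0, n)          # phenotype = genetic merit + residual

def ridge_fit(Xtr, ytr, lam=100.0):
    """Ridge marker effects: solves (X'X + lam*I) b = X'y."""
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(Xtr.shape[1]), Xtr.T @ ytr)

# Five-fold cross-validation: predictive ability = cor(predicted, observed phenotype).
folds = np.array_split(rng.permutation(n), 5)
accs = []
for k, test in enumerate(folds):
    train = np.setdiff1d(np.arange(n), test)
    mu_x, mu_y = X[train].mean(0), y[train].mean()
    b = ridge_fit(X[train] - mu_x, y[train] - mu_y)
    pred = (X[test] - mu_x) @ b
    accs.append(np.corrcoef(pred, y[test])[0, 1])
    print(f"fold {k}: predictive ability = {accs[-1]:.3f}")
print(f"mean predictive ability: {np.mean(accs):.3f}")
```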
NASA Astrophysics Data System (ADS)
Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua
2014-07-01
Parallel robots with SCARA (selective compliance assembly robot arm) motions are utilized widely in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots to a single link. Because such an error model fails to reflect the error characteristics of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on it is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus it can contain more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account the error factors which are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.
High-performance time-resolved fluorescence by direct waveform recording.
Muretta, Joseph M; Kyrychenko, Alexander; Ladokhin, Alexey S; Kast, David J; Gillispie, Gregory D; Thomas, David D
2010-10-01
We describe a high-performance time-resolved fluorescence (HPTRF) spectrometer that dramatically increases the rate at which precise and accurate subnanosecond-resolved fluorescence emission waveforms can be acquired in response to pulsed excitation. The key features of this instrument are an intense (1 μJ/pulse), high-repetition rate (10 kHz), and short (1 ns full width at half maximum) laser excitation source and a transient digitizer (0.125 ns per time point) that records a complete and accurate fluorescence decay curve for every laser pulse. For a typical fluorescent sample containing a few nanomoles of dye, a waveform with a signal/noise of about 100 can be acquired in response to a single laser pulse every 0.1 ms, at least 10^5 times faster than the conventional method of time-correlated single photon counting, with equal accuracy and precision in lifetime determination for lifetimes as short as 100 ps. Using standard single-lifetime samples, the detected signals are extremely reproducible, with waveform precision and linearity to within 1% error for single-pulse experiments. Waveforms acquired in 0.1 s (1000 pulses) with the HPTRF instrument were of sufficient precision to analyze two samples having different lifetimes, resolving minor components with high accuracy with respect to both lifetime and mole fraction. The instrument makes possible a new class of high-throughput time-resolved fluorescence experiments that should be especially powerful for biological applications, including transient kinetics, multidimensional fluorescence, and microplate formats.
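Extracting a lifetime from a digitized decay waveform of the kind this instrument records amounts to a nonlinear least-squares fit. The sketch below fits a single-exponential model to a synthetic waveform sampled at 0.125 ns and ignores the instrument response function, which a real analysis would convolve in; all numbers are assumed for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic decay waveform: 0.125 ns sampling, 100 ns window, 4 ns lifetime,
# ~100-count amplitude and ~1% noise, mimicking a single digitized record.
dt = 0.125
t = np.arange(0, 100, dt)
true_tau, true_amp = 4.0, 100.0
waveform = true_amp * np.exp(-t / true_tau)
waveform += np.random.default_rng(2).normal(0, 1.0, t.size)

def decay(t, amp, tau, bkg):
    """Single-exponential decay plus constant background."""
    return amp * np.exp(-t / tau) + bkg

popt, pcov = curve_fit(decay, t, waveform, p0=(50.0, 2.0, 0.0))
amp, tau, bkg = popt
tau_err = np.sqrt(np.diag(pcov))[1]
print(f"fitted lifetime: {tau:.3f} +/- {tau_err:.3f} ns (true {true_tau} ns)")
```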
A laboratory assessment of the measurement accuracy of weighing type rainfall intensity gauges
NASA Astrophysics Data System (ADS)
Colli, M.; Chan, P. W.; Lanza, L. G.; La Barbera, P.
2012-04-01
In recent years the WMO Commission for Instruments and Methods of Observation (CIMO) has fostered noticeable advancements in the accuracy of precipitation measurement by providing recommendations on the standardization of equipment and exposure, instrument calibration and data correction, as a consequence of various comparative campaigns involving manufacturers and national meteorological services from the participating countries (Lanza et al., 2005; Vuerich et al., 2009). Extreme event analysis has been shown to be highly affected by the on-site rainfall intensity (RI) measurement accuracy (see e.g. Molini et al., 2004), and the time resolution of the available RI series is another key factor in constructing hyetographs that are representative of real rain events. The OTT Pluvio2 weighing gauge (WG) and the GEONOR T-200 vibrating-wire precipitation gauge demonstrated very good performance under previous constant flow rate calibration efforts (Lanza et al., 2005). Although WGs do provide better performance than more traditional tipping-bucket rain gauges (TBRs) under continuous and constant reference intensity, dynamic effects seem to affect the accuracy of WG measurements under real-world, time-varying rainfall conditions (Vuerich et al., 2009). The most relevant is due to the response time of the acquisition system and the resulting systematic delay of the instrument in assessing the exact weight of the bin containing the accumulated precipitation. This delay assumes a relevant role when high-resolution rain intensity time series are sought from the instrument, as is the case in many hydrologic and meteo-climatic applications. This work reports the laboratory evaluation of Pluvio2 and T-200 rainfall intensity measurement accuracy. Tests are carried out by simulating different artificial precipitation events, namely non-stationary rainfall intensities, using a highly accurate dynamic rainfall generator. Time series measured by an Ogawa drop counter (DC) at a field test site located within the Hong Kong International Airport (HKIA) were aggregated at a 1-minute scale and used as reference for the artificial rain generation (Colli et al., 2012). The preliminary development and validation of the rainfall simulator for the generation of variable time-step reference intensities is also shown. The generator is characterized by a sufficiently short time response with respect to the expected weighing-gauge behavior in order to ensure effective comparison of the measured and reference intensity at very high resolution in time.
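The dynamic effect discussed here, a systematic delay due to the finite response time of the weighing gauge's acquisition chain, can be mimicked with a first-order lag model. In the sketch below the time constant is an assumed value for illustration, not a measured Pluvio2 or T-200 characteristic, and the simple inverse-filter correction follows directly from the assumed lag model.

```python
import numpy as np

dt = 10.0                      # sampling interval [s]
tau = 30.0                     # assumed response time of the acquisition chain [s]
t = np.arange(0, 1800, dt)

# Reference (non-stationary) rainfall intensity [mm/h]: a short Gaussian burst.
ri_true = 60.0 * np.exp(-((t - 600.0) / 180.0) ** 2)
w_true = np.cumsum(ri_true) * dt / 3600.0            # true accumulated depth [mm]

# First-order lag of the weighing system: dw_meas/dt = (w_true - w_meas)/tau.
alpha = dt / (tau + dt)
w_meas = np.zeros_like(w_true)
for i in range(1, t.size):
    w_meas[i] = w_meas[i - 1] + alpha * (w_true[i] - w_meas[i - 1])

ri_meas = np.gradient(w_meas, dt) * 3600.0            # intensity from the lagged weight
ri_corr = ri_meas + tau * np.gradient(ri_meas, dt)    # inverse-filter correction

for label, ri in [("true", ri_true), ("measured", ri_meas), ("corrected", ri_corr)]:
    print(f"{label:9s} peak = {ri.max():5.1f} mm/h at t = {t[np.argmax(ri)]:.0f} s")
```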
Bernecker, Samantha L; Rosellini, Anthony J; Nock, Matthew K; Chiu, Wai Tat; Gutierrez, Peter M; Hwang, Irving; Joiner, Thomas E; Naifeh, James A; Sampson, Nancy A; Zaslavsky, Alan M; Stein, Murray B; Ursano, Robert J; Kessler, Ronald C
2018-04-03
High rates of mental disorders, suicidality, and interpersonal violence early in the military career have raised interest in implementing preventive interventions with high-risk new enlistees. The Army Study to Assess Risk and Resilience in Servicemembers (STARRS) developed risk-targeting systems for these outcomes based on machine learning methods using administrative data predictors. However, administrative data omit many risk factors, raising the question whether risk targeting could be improved by adding self-report survey data to prediction models. If so, the Army may gain from routinely administering surveys that assess additional risk factors. The STARRS New Soldier Survey was administered to 21,790 Regular Army soldiers who agreed to have survey data linked to administrative records. As reported previously, machine learning models using administrative data as predictors found that small proportions of high-risk soldiers accounted for high proportions of negative outcomes. Other machine learning models using self-report survey data as predictors were developed previously for three of these outcomes: major physical violence and sexual violence perpetration among men and sexual violence victimization among women. Here we examined the extent to which this survey information increases prediction accuracy, over models based solely on administrative data, for those three outcomes. We used discrete-time survival analysis to estimate a series of models predicting first occurrence, assessing how model fit improved and concentration of risk increased when adding the predicted risk score based on survey data to the predicted risk score based on administrative data. The addition of survey data improved prediction significantly for all outcomes. In the most extreme case, the percentage of reported sexual violence victimization among the 5% of female soldiers with highest predicted risk increased from 17.5% using only administrative predictors to 29.4% adding survey predictors, a 67.9% proportional increase in prediction accuracy. Other proportional increases in concentration of risk ranged from 4.8% to 49.5% (median = 26.0%). Data from an ongoing New Soldier Survey could substantially improve accuracy of risk models compared to models based exclusively on administrative predictors. Depending upon the characteristics of interventions used, the increase in targeting accuracy from survey data might offset survey administration costs.
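The concentration-of-risk statistic reported here, the share of observed outcomes captured by the highest-risk 5% of soldiers, is straightforward to compute once predicted risk scores are available. The sketch below uses synthetic scores and outcomes; the generative assumption (survey information adding signal on top of administrative information) is purely illustrative and not taken from the study.

```python
import numpy as np

def concentration_of_risk(risk_score, outcome, top_frac=0.05):
    """Share of all observed outcomes captured by the top `top_frac` of predicted risk."""
    order = np.argsort(risk_score)[::-1]
    n_top = max(1, int(np.ceil(top_frac * len(risk_score))))
    top = order[:n_top]
    return outcome[top].sum() / outcome.sum(), outcome[top].mean()

rng = np.random.default_rng(3)
n = 20000
admin_score = rng.normal(size=n)                       # risk from administrative data only
survey_score = admin_score + rng.normal(size=n)        # adds independent survey information
p = 1 / (1 + np.exp(-(-4.0 + 0.8 * survey_score)))     # assumed true outcome probability
outcome = rng.binomial(1, p)

for name, score in [("admin only", admin_score), ("admin + survey", survey_score)]:
    share, rate = concentration_of_risk(score, outcome)
    print(f"{name:15s}: top-5% captures {100 * share:.1f}% of outcomes "
          f"(outcome rate in top 5% = {100 * rate:.1f}%)")
```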
High-temperature sensor instrumentation with a thin-film-based sapphire fiber.
Guo, Yuqing; Xia, Wei; Hu, Zhangzhong; Wang, Ming
2017-03-10
A novel sapphire fiber-optic high-temperature sensor has been designed and fabricated based on blackbody radiation theory. Metallic molybdenum has been used as the film material to develop the blackbody cavity, owing to its relatively high melting point compared to that of sapphire. More importantly, the fabrication process for the blackbody cavity is simple, efficient, and economical. Thermal radiation emitted from such a blackbody cavity is transmitted via optical fiber to a remote place for detection. The operating principle, the sensor structure, and the fabrication process are described here in detail. The developed high-temperature sensor was calibrated through a calibration blackbody furnace at temperatures from 900°C to 1200°C and tested by a sapphire crystal growth furnace up to 1880°C. The experimental results of our system agree well with those from a commercial Rayteck MR1SCCF infrared pyrometer, and the maximum residual is approximately 5°C, paving the way for high-accuracy temperature measurement especially for extremely harsh environments.
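The measurement principle, inferring temperature from the thermal radiance emitted by the blackbody cavity, reduces in the idealized single-wavelength case to inverting Planck's law. The sketch below assumes unit emissivity, a lossless fiber link and a detection wavelength of 900 nm, none of which are taken from the paper.

```python
import numpy as np

h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]
kB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wl, T):
    """Blackbody spectral radiance [W m^-3 sr^-1] at wavelength wl [m], temperature T [K]."""
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))

def temperature_from_radiance(wl, L):
    """Invert Planck's law at a single wavelength (ideal, unit-emissivity cavity)."""
    return (h * c / (wl * kB)) / np.log1p(2 * h * c**2 / (wl**5 * L))

wl = 900e-9                                      # assumed detection wavelength [m]
for T_true in (900 + 273.15, 1200 + 273.15, 1880 + 273.15):
    L = planck_radiance(wl, T_true)              # ideal radiance reaching the detector
    T_est = temperature_from_radiance(wl, L)
    print(f"T_true = {T_true:7.1f} K  ->  recovered {T_est:7.1f} K")
```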
Uniformly high-order accurate non-oscillatory schemes, 1
NASA Technical Reports Server (NTRS)
Harten, A.; Osher, S.
1985-01-01
The construction and analysis of non-oscillatory shock-capturing methods for the approximation of hyperbolic conservation laws were begun. These schemes share many desirable properties with total variation diminishing (TVD) schemes, but TVD schemes have at most first-order accuracy, in the sense of truncation error, at extrema of the solution. A uniformly second-order approximation was constructed, which is non-oscillatory in the sense that the number of extrema of the discrete solution is not increasing in time. This is achieved via a non-oscillatory piecewise-linear reconstruction of the solution from its cell averages, time evolution through an approximate solution of the resulting initial value problem, and averaging of this approximate solution over each cell.
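The basic ingredient, a slope-limited piecewise-linear reconstruction from cell averages that introduces no new extrema, can be sketched with the classical minmod limiter. Note that minmod-type TVD reconstructions are precisely the ones that degrade to first order at extrema; the paper's uniformly second-order non-oscillatory reconstruction is more elaborate than this sketch.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, or zero if the signs differ."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def reconstruct(u, dx):
    """Piecewise-linear reconstruction from cell averages u: left/right interface values."""
    du_minus = np.diff(u, prepend=u[0])          # backward differences
    du_plus = np.diff(u, append=u[-1])           # forward differences
    slope = minmod(du_minus, du_plus) / dx
    u_left = u - 0.5 * dx * slope                # value at the left face of each cell
    u_right = u + 0.5 * dx * slope               # value at the right face of each cell
    return u_left, u_right

# Cell averages of a step profile: the reconstruction stays monotone (no new extrema).
x = np.linspace(0, 1, 21)
dx = x[1] - x[0]
u = np.where(x < 0.5, 1.0, 0.0)
uL, uR = reconstruct(u, dx)
print("new extrema introduced:", bool((uR.max() > u.max()) or (uL.min() < u.min())))
```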
NASA Astrophysics Data System (ADS)
Naessens, Kris; Van Hove, An; Coosemans, Thierry; Verstuyft, Steven; Ottevaere, Heidi; Vanwassenhove, Luc; Van Daele, Peter; Baets, Roel G.
2000-06-01
Laser ablation is extremely well suited for rapid prototyping and proves to be a versatile technique delivering high-accuracy dimensioning and repeatability of features in a wide diversity of materials. In this paper, we present laser ablation as a fabrication method for the micromachining of arrays of precisely dimensioned U-grooves in dedicated polycarbonate and polymethylmethacrylate plates. The dependency of the performance on various parameters is discussed. The fabricated plates are used to hold optical fibers by means of a UV-curable adhesive. Stacking and gluing of the plates allows the assembly of a 2D connector of plastic optical fibers for short-distance optical interconnects.
Pen-based computers: Computers without keys
NASA Technical Reports Server (NTRS)
Conklin, Cheryl L.
1994-01-01
The National Space Transportation System (NSTS) comprises many diverse and highly complex systems incorporating the latest technologies. Data collection associated with ground processing of the various Space Shuttle system elements is extremely challenging due to the many separate processing locations where data are generated. This presents a significant problem when the timely collection, transfer, collation, and storage of data are required. This paper describes how new technology, referred to as pen-based computers, is being used to transform the data collection process at Kennedy Space Center (KSC). Pen-based computers have streamlined procedures, increased data accuracy, and now provide more complete information than previous methods. The end result is the elimination of Shuttle processing delays associated with data deficiencies.
Piezoelectric Polymers Actuators for Precise Shape Control of Large Scale Space Antennas
NASA Technical Reports Server (NTRS)
Chen, Qin; Natale, Don; Neese, Bret; Ren, Kailiang; Lin, Minren; Zhang, Q. M.; Pattom, Matthew; Wang, K. W.; Fang, Houfei; Im, Eastwood
2007-01-01
Extremely large, lightweight, in-space deployable active and passive microwave antennas are demanded by future space missions. This paper investigates the development of PVDF based piezopolymer actuators for controlling the surface accuracy of a membrane reflector. Uniaxially stretched PVDF films were poled using an electrodeless method which yielded high quality poled piezofilms required for this application. To further improve the piezoperformance of piezopolymers, several PVDF based copolymers were examined. It was found that one of them exhibits nearly three times improvement in the in-plane piezoresponse compared with PVDF and P(VDF-TrFE) piezopolymers. Preliminary experimental results indicate that these flexible actuators are very promising in controlling precisely the shape of the space reflectors.
Acceleration of short and long DNA read mapping without loss of accuracy using suffix array.
Tárraga, Joaquín; Arnau, Vicente; Martínez, Héctor; Moreno, Raul; Cazorla, Diego; Salavert-Torres, José; Blanquer-Espert, Ignacio; Dopazo, Joaquín; Medina, Ignacio
2014-12-01
HPG Aligner applies suffix arrays for DNA read mapping. This implementation produces a highly sensitive and extremely fast mapping of DNA reads that scales up almost linearly with read length. The approach presented here is faster (over 20× for long reads) and more sensitive (over 98% in a wide range of read lengths) than the current state-of-the-art mappers. HPG Aligner is not only an optimal alternative for current sequencers but also the only solution available to cope with longer reads and growing throughputs produced by forthcoming sequencing technologies. https://github.com/opencb/hpg-aligner. © The Author 2014. Published by Oxford University Press.
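The core suffix-array idea, locating every exact occurrence of a read by binary search over lexicographically sorted suffixes, can be shown in a few lines. This toy sketch is not the HPG Aligner implementation: the construction here is naive, there is no mismatch handling, and the reference is a made-up string.

```python
# Requires Python 3.10+ for the `key` argument of bisect.
from bisect import bisect_left, bisect_right

def build_suffix_array(text):
    """Naive O(n^2 log n) construction: fine for a sketch, far too slow for genomes."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def locate(read, text, sa):
    """All exact occurrences of `read` in `text` via binary search on the suffix array."""
    lo = bisect_left(sa, read, key=lambda i: text[i:i + len(read)])
    hi = bisect_right(sa, read, key=lambda i: text[i:i + len(read)])
    return sorted(sa[lo:hi])

reference = "ACGTACGTGACGATCGATCGACGTACGT"
sa = build_suffix_array(reference)
print(locate("ACGT", reference, sa))   # start positions of every exact match
```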
Navigation strategy and filter design for solar electric missions
NASA Technical Reports Server (NTRS)
Tapley, B. D.; Hagar, H., Jr.
1972-01-01
Methods which have been proposed to improve the navigation accuracy for low-thrust space vehicles include modifications to the standard sequential- and batch-type orbit determination procedures and the use of inertial measuring units (IMUs), which measure directly the acceleration applied to the vehicle. The navigation accuracy obtained using one of the more promising modifications, dynamic model compensation (DMC), is compared with that of a combined IMU-Standard Orbit Determination algorithm. The unknown accelerations are approximated as both first-order and second-order Gauss-Markov processes. The comparison is based on numerical results obtained in a study of the navigation requirements of a numerically simulated 152-day low-thrust mission to the asteroid Eros. The results obtained in the simulation indicate that the DMC algorithm will yield a significant improvement over the navigation accuracies achieved with previous estimation algorithms. In addition, the DMC algorithm will yield better navigation accuracies than the IMU-Standard Orbit Determination algorithm, except for extremely precise IMU measurements, i.e., gyro-platform alignment of 0.01 deg and accelerometer signal-to-noise ratio of 0.07. Unless these accuracies are achieved, the IMU navigation accuracies are generally unacceptable.
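The first-order Gauss-Markov model used here to approximate the unknown accelerations is an exponentially correlated random process that is easy to simulate in discrete time. The time constant and steady-state standard deviation below are assumed values for illustration only; a second-order version would add an integrated state.

```python
import numpy as np

# First-order Gauss-Markov model of an unmodeled acceleration a(t):
#   da/dt = -a/tau + w(t),  discretized as  a_{k+1} = phi * a_k + w_k,
# with phi = exp(-dt/tau) and Var(w_k) = sigma^2 * (1 - phi^2) so that the
# process stays stationary with standard deviation sigma.
def simulate_gauss_markov(sigma, tau, dt, n, seed=0):
    rng = np.random.default_rng(seed)
    phi = np.exp(-dt / tau)
    q = sigma * np.sqrt(1.0 - phi**2)
    a = np.zeros(n)
    for k in range(1, n):
        a[k] = phi * a[k - 1] + q * rng.normal()
    return a

a = simulate_gauss_markov(sigma=1e-7, tau=3600.0, dt=60.0, n=5000)
print(f"sample std = {a.std():.2e} (target 1e-07), lag-1 corr = "
      f"{np.corrcoef(a[:-1], a[1:])[0, 1]:.3f} (target {np.exp(-60 / 3600):.3f})")
```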
Blower, Sally; Go, Myong-Hyun
2011-07-19
Mathematical models are useful tools for understanding and predicting epidemics. A recent innovative modeling study by Stehle and colleagues addressed the issue of how complex models need to be to ensure accuracy. The authors collected data on face-to-face contacts during a two-day conference. They then constructed a series of dynamic social contact networks, each of which was used to model an epidemic generated by a fast-spreading airborne pathogen. Intriguingly, Stehle and colleagues found that increasing model complexity did not always increase accuracy. Specifically, the most detailed contact network and a simplified version of this network generated very similar results. These results are extremely interesting and require further exploration to determine their generalizability.
NASA Astrophysics Data System (ADS)
Luo, Hanjun; Ouyang, Zhengbiao; Liu, Qiang; Chen, Zhiliang; Lu, Hualan
2017-10-01
Cumulative pulse detection with an appropriate number of accumulated pulses and an appropriate threshold can improve the detection performance of a pulsed laser ranging system with a Geiger-mode avalanche photodiode (GM-APD). In this paper, based on Poisson statistics and the multi-pulse accumulation process, the cumulative detection probabilities and the factors that influence them are investigated. With the normalized probability distribution of each time bin, a theoretical model of the range accuracy and precision is established, and the factors limiting them are discussed. The results show that cumulative pulse detection can produce a higher target detection probability and a lower false alarm probability. However, for a heavy noise level and extremely weak echo intensity, the false alarm suppression performance of cumulative pulse detection deteriorates quickly. The range accuracy and precision are further important parameters for evaluating detection performance; the echo intensity and pulse width are their main influence factors, and higher range accuracy and precision are obtained with stronger echo intensity and narrower echo pulse width. For a 5-ns echo pulse width, when the echo intensity is larger than 10, a range accuracy and precision better than 7.5 cm can be achieved.
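The accumulation logic behind these results can be illustrated with a simplified model: on each laser shot the GM-APD fires in a given time bin with a Poisson-derived probability, and a detection is declared when the bin fires in at least a threshold number of shots. The sketch below neglects dead-time blocking by earlier bins, which the paper's full model accounts for, and the photon numbers are assumed values.

```python
from math import comb, exp

def per_pulse_prob(n_photo):
    """Probability that the GM-APD fires in a bin whose mean number of
    primary photo/dark electrons is n_photo (Poisson statistics)."""
    return 1.0 - exp(-n_photo)

def cumulative_detection(p1, n_pulses, threshold):
    """Probability that the bin fires in at least `threshold` of `n_pulses` shots."""
    return sum(comb(n_pulses, j) * p1**j * (1 - p1)**(n_pulses - j)
               for j in range(threshold, n_pulses + 1))

n_pulses, threshold = 20, 5
p_sig = per_pulse_prob(1.0 + 0.05)   # echo (1.0 mean electrons) plus noise (0.05)
p_noise = per_pulse_prob(0.05)       # noise-only bin
print(f"detection prob:   {cumulative_detection(p_sig, n_pulses, threshold):.4f}")
print(f"false-alarm prob: {cumulative_detection(p_noise, n_pulses, threshold):.6f}")
```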
Hannemann, S; van Duijn, E-J; Ubachs, W
2007-10-01
A narrow-band tunable injection-seeded pulsed titanium:sapphire laser system has been developed for application in high-resolution spectroscopic studies at the fundamental wavelengths in the near infrared as well as in the ultraviolet, deep ultraviolet, and extreme ultraviolet after upconversion. Special focus is on the quantitative assessment of the frequency characteristics of the oscillator-amplifier system on a pulse-to-pulse basis. Frequency offsets between continuous-wave seed light and the pulsed output are measured as well as linear chirps attributed mainly to mode pulling effects in the oscillator cavity. Operational conditions of the laser are found in which these offset and chirp effects are minimal. Absolute frequency calibration at the megahertz level of accuracy is demonstrated on various atomic and molecular resonance lines.
Efficient Ab initio Modeling of Random Multicomponent Alloys
Jiang, Chao; Uberuaga, Blas P.
2016-03-08
We present in this Letter a novel small set of ordered structures (SSOS) method that allows extremely efficient ab initio modeling of random multi-component alloys. Using inverse II-III spinel oxides and equiatomic quinary bcc (so-called high entropy) alloys as examples, we demonstrate that a SSOS can achieve the same accuracy as a large supercell or a well-converged cluster expansion, but with significantly reduced computational cost. In particular, because of this efficiency, a large number of quinary alloy compositions can be quickly screened, leading to the identification of several new possible high entropy alloy chemistries. Furthermore, the SSOS method developed here can be broadly useful for the rapid computational design of multi-component materials, especially those with a large number of alloying elements, a challenging problem for other approaches.
Subatomic deformation driven by vertical piezoelectricity from CdS ultrathin films.
Wang, Xuewen; He, Xuexia; Zhu, Hongfei; Sun, Linfeng; Fu, Wei; Wang, Xingli; Hoong, Lai Chee; Wang, Hong; Zeng, Qingsheng; Zhao, Wu; Wei, Jun; Jin, Zhong; Shen, Zexiang; Liu, Jie; Zhang, Ting; Liu, Zheng
2016-07-01
Driven by the development of high-performance piezoelectric materials, actuators have become an important tool for positioning objects with high accuracy down to the nanometer scale, and have been used in a wide variety of equipment, such as atomic force microscopes and scanning tunneling microscopes. However, positioning at the subatomic scale is still a great challenge. Ultrathin piezoelectric materials may pave the way to positioning an object with extreme precision. Using ultrathin CdS thin films, we demonstrate vertical piezoelectricity at the atomic scale (three to five space lattices). With in situ scanning Kelvin force microscopy and single and dual ac resonance tracking piezoelectric force microscopy, a vertical piezoelectric coefficient (d33) of up to 33 pm·V^-1 was determined for the CdS ultrathin films. These findings shed light on the design of next-generation sensors and microelectromechanical devices.
Zhou, Yangbo; Fox, Daniel S; Maguire, Pierce; O’Connell, Robert; Masters, Robert; Rodenburg, Cornelia; Wu, Hanchun; Dapor, Maurizio; Chen, Ying; Zhang, Hongzhou
2016-01-01
Two-dimensional (2D) materials usually have a layer-dependent work function, which requires fast and accurate detection for the evaluation of device performance. A detection technique with both high throughput and high spatial resolution has not yet been explored. Using a scanning electron microscope, we have developed and implemented a quantitative analytical technique which allows effective extraction of the work function of graphene. This technique uses the secondary electron contrast and has nanometre-resolved layer information. The measurement of few-layer graphene flakes shows the variation of work function between graphene layers with a precision of less than 10 meV. It is expected that this technique will prove extremely useful for researchers in a broad range of fields due to its revolutionary throughput and accuracy. PMID:26878907
An evaluation of methods for estimating decadal stream loads
NASA Astrophysics Data System (ADS)
Lee, Casey J.; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-11-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen - lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale's ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
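One of the flexible ratio methods evaluated here, Beale's bias-corrected ratio estimator, is compact enough to sketch directly. The form below is one common unstratified version, applied to synthetic daily flows and a sparse concentration sampling schedule; none of the values correspond to the study's sites.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily record for one year: discharge q [m^3/s] known every day,
# concentration c [mg/L] known only on sampled days.
n_days = 365
q_all = np.exp(rng.normal(2.0, 0.8, n_days))                 # skewed daily flows
c_all = 5.0 * q_all**0.3 * np.exp(rng.normal(0, 0.2, n_days))
load_all = 86.4 * c_all * q_all                              # daily loads [kg/day]

sampled = np.sort(rng.choice(n_days, 24, replace=False))     # ~2 samples per month
q_s, l_s = q_all[sampled], load_all[sampled]

def beale_ratio_load(q_s, l_s, q_all):
    """Beale bias-corrected ratio estimate of the mean daily load (one common form)."""
    n = len(q_s)
    mq, ml = q_s.mean(), l_s.mean()
    s_lq = np.cov(l_s, q_s, ddof=1)[0, 1]
    s_qq = np.var(q_s, ddof=1)
    correction = (1 + s_lq / (n * ml * mq)) / (1 + s_qq / (n * mq**2))
    return q_all.mean() * (ml / mq) * correction

est = beale_ratio_load(q_s, l_s, q_all) * n_days
print(f"true annual load {load_all.sum():10.0f} kg, Beale estimate {est:10.0f} kg")
```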
An evaluation of methods for estimating decadal stream loads
Lee, Casey; Hirsch, Robert M.; Schwarz, Gregory E.; Holtschlag, David J.; Preston, Stephen D.; Crawford, Charles G.; Vecchia, Aldo V.
2016-01-01
Effective management of water resources requires accurate information on the mass, or load of water-quality constituents transported from upstream watersheds to downstream receiving waters. Despite this need, no single method has been shown to consistently provide accurate load estimates among different water-quality constituents, sampling sites, and sampling regimes. We evaluate the accuracy of several load estimation methods across a broad range of sampling and environmental conditions. This analysis uses random sub-samples drawn from temporally-dense data sets of total nitrogen, total phosphorus, nitrate, and suspended-sediment concentration, and includes measurements of specific conductance which was used as a surrogate for dissolved solids concentration. Methods considered include linear interpolation and ratio estimators, regression-based methods historically employed by the U.S. Geological Survey, and newer flexible techniques including Weighted Regressions on Time, Season, and Discharge (WRTDS) and a generalized non-linear additive model. No single method is identified to have the greatest accuracy across all constituents, sites, and sampling scenarios. Most methods provide accurate estimates of specific conductance (used as a surrogate for total dissolved solids or specific major ions) and total nitrogen – lower accuracy is observed for the estimation of nitrate, total phosphorus and suspended sediment loads. Methods that allow for flexibility in the relation between concentration and flow conditions, specifically Beale’s ratio estimator and WRTDS, exhibit greater estimation accuracy and lower bias. Evaluation of methods across simulated sampling scenarios indicate that (1) high-flow sampling is necessary to produce accurate load estimates, (2) extrapolation of sample data through time or across more extreme flow conditions reduces load estimate accuracy, and (3) WRTDS and methods that use a Kalman filter or smoothing to correct for departures between individual modeled and observed values benefit most from more frequent water-quality sampling.
Ye, Qing; Pan, Hao; Liu, Changhua
2015-01-01
This research proposes a novel framework for final drive simultaneous failure diagnosis comprising feature extraction, training of paired diagnostic models, generation of a decision threshold, and recognition of simultaneous failure modes. In the feature extraction module, wavelet packet transform and fuzzy entropy are adopted to reduce noise interference and extract representative features of each failure mode. Single-failure samples are used to construct probability classifiers based on the paired sparse Bayesian extreme learning machine, which is trained only on single failure modes and inherits the high generalization and sparsity of the sparse Bayesian learning approach. To generate the optimal decision threshold, which converts the probability outputs obtained from the classifiers into the final simultaneous failure modes, this research uses samples containing both single and simultaneous failure modes together with a grid search method that is superior to traditional techniques in global optimization. Compared with other frequently used diagnostic approaches based on support vector machines and probability neural networks, experimental results based on the F1-measure verify that the diagnostic accuracy and efficiency of the proposed framework, which are crucial for simultaneous failure diagnosis, are superior to those of the existing approaches. PMID:25722717
Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions
Karasiev, Valentin V.; Dufty, James W.; Trickey, S. B.
2018-02-14
The potential for density functional calculations to predict the properties of matter under extreme conditions depends crucially upon having a non-empirical approximate free energy functional valid over a wide range of state conditions. Unlike the ground-state case, no such free-energy exchange-correlation (XC) functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Application in Kohn-Sham calculations for hot electrons in a static fcc Aluminum lattice demonstrates the combined magnitude of thermal and gradient effects handled by this functional. Its accuracy in the increasingly important warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated temperatures and by low density Al calculations over a wide T range.
Nonempirical Semilocal Free-Energy Density Functional for Matter under Extreme Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karasiev, Valentin V.; Dufty, James W.; Trickey, S. B.
The potential for density functional calculations to predict the properties of matter under extreme conditions depends crucially upon having a non-empirical approximate free energy functional valid over a wide range of state conditions. Unlike the ground-state case, no such free-energy exchange-correlation (XC) functional exists. We remedy that with systematic construction of a generalized gradient approximation XC free-energy functional based on rigorous constraints, including the free energy gradient expansion. The new functional provides the correct temperature dependence in the slowly varying regime and the correct zero-T, high-T, and homogeneous electron gas limits. Application in Kohn-Sham calculations for hot electrons in a static fcc Aluminum lattice demonstrates the combined magnitude of thermal and gradient effects handled by this functional. Its accuracy in the increasingly important warm dense matter regime is attested by excellent agreement of the calculated deuterium equation of state with reference path integral Monte Carlo results at intermediate and elevated temperatures and by low density Al calculations over a wide T range.
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification
Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods. PMID:27610128
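The final stage of DC-ELM, an extreme learning machine classifier with a random hidden layer and output weights obtained by regularized least squares, is simple enough to sketch on its own. The example below omits the convolution, pooling and stochastic-pooling layers entirely and uses a synthetic two-class problem in place of MNIST/USPS features.

```python
import numpy as np

def train_elm(X, y, n_hidden=200, seed=0):
    """Basic ELM: random hidden layer, output weights by regularized least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                      # random nonlinear feature map
    T = np.eye(y.max() + 1)[y]                  # one-hot targets
    beta = np.linalg.solve(H.T @ H + 1e-3 * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def predict_elm(X, model):
    W, b, beta = model
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# Toy two-class problem standing in for features after the convolution/pooling stages.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (300, 20)), rng.normal(1.5, 1, (300, 20))])
y = np.repeat([0, 1], 300)
idx = rng.permutation(600)
Xtr, ytr, Xte, yte = X[idx[:400]], y[idx[:400]], X[idx[400:]], y[idx[400:]]

model = train_elm(Xtr, ytr)
print(f"test accuracy: {(predict_elm(Xte, model) == yte).mean():.3f}")
```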
NASA Astrophysics Data System (ADS)
Knappe-Grueneberg, Silvia; Schnabel, Allard; Wuebbeler, Gerd; Burghoff, Martin
2008-04-01
The Berlin magnetically shielded room 2 (BMSR-2) features a magnetic residual field below 500pT and a field gradient level less than 0.5pT/mm, which are needed for very sensitive human biomagnetic recordings or low field NMR. Nevertheless, below 15Hz, signals are compromised by an additional noise contribution due to vibration forced sensor movements in the field gradient. Due to extreme shielding, the residual field and its homogeneity are determined mainly by the demagnetization results of the mumetal shells. Eight different demagnetization coil configurations can be realized, each results in a characteristic field pattern. The spatial dc flux density inside BMSR-2 is measured with a movable superconducting quantum interference device system with an accuracy better than 50pT. Residual field and field distribution of the current-driven coils fit well to an air-core coil model, if the high permeable core and the return lines outside of the shells are neglected. Finally, we homogenize the residual field by selecting a proper coil configuration.
Deep Convolutional Extreme Learning Machine and Its Application in Handwritten Digit Classification.
Pang, Shan; Yang, Xinyi
2016-01-01
In recent years, some deep learning methods have been developed and applied to image classification applications, such as convolutional neuron network (CNN) and deep belief network (DBN). However they are suffering from some problems like local minima, slow convergence rate, and intensive human intervention. In this paper, we propose a rapid learning method, namely, deep convolutional extreme learning machine (DC-ELM), which combines the power of CNN and fast training of ELM. It uses multiple alternate convolution layers and pooling layers to effectively abstract high level features from input images. Then the abstracted features are fed to an ELM classifier, which leads to better generalization performance with faster learning speed. DC-ELM also introduces stochastic pooling in the last hidden layer to reduce dimensionality of features greatly, thus saving much training time and computation resources. We systematically evaluated the performance of DC-ELM on two handwritten digit data sets: MNIST and USPS. Experimental results show that our method achieved better testing accuracy with significantly shorter training time in comparison with deep learning methods and other ELM methods.
Steep Hard-X-ray Spectra Indicate Extremely High Accretion Rates in Weak Emission-Line Quasars
NASA Astrophysics Data System (ADS)
Marlar, Andrea; Shemmer, Ohad; Anderson, Scott F.; Brandt, W. Niel; Diamond-Stanic, Aleksandar M.; Fan, Xiaohui; Luo, Bin; Plotkin, Richard; Richards, Gordon T.; Schneider, Donald P.; Wu, Jianfeng
2018-06-01
We present XMM-Newton imaging spectroscopy of ten weak emission-line quasars (WLQs) at 0.928 ≤ z ≤ 3.767, six of which are radio quiet and four which are radio intermediate. The new X-ray data enabled us to measure the hard-X-ray power-law photon index (Γ) in each source with relatively high accuracy. These measurements allowed us to confirm previous reports that WLQs have steeper X-ray spectra, therefore indicating higher accretion rates with respect to "typical" quasars. A comparison between the Γ values of our radio-quiet WLQs and those of a carefully-selected, uniform sample of 84 quasars shows that the first are significantly higher, at the ≥ 3σ level. Collectively, the four radio-intermediate WLQs have lower Γ values with respect to the six radio-quiet WLQs, as may be expected if the spectra of the first group are contaminated by X-ray emission from a jet. These results suggest that, in the absence of significant jet emission along our line of sight, WLQs constitute the extreme high end of the accretion rate distribution in quasars. We detect soft excess emission in our lowest-redshift radio-quiet WLQ, in agreement with previous findings suggesting that the prominence of this feature is associated with a high accretion rate. We have not detected signatures of Compton reflection, Fe Kα lines, or strong variability between two X-ray epochs in any of our WLQs.
NASA Astrophysics Data System (ADS)
Didkovsky, Leonid; Wieman, Seth; Woods, Thomas
2016-10-01
The Extreme ultraviolet Spectrophotometer (ESP), one of the channels of SDO's Extreme ultraviolet Variability Experiment (EVE), measures solar irradiance in several EUV and soft x-ray (SXR) bands isolated using thin-film filters and a transmission diffraction grating, and includes a quad-diode detector positioned at the grating zeroth-order to observe in a wavelength band from about 0.1 to 7.0 nm. The quad diode signal also includes some contribution from shorter wavelength in the grating's first-order and the ratio of zeroth-order to first-order signal depends on both source geometry, and spectral distribution. For example, radiometric calibration of the ESP zeroth-order at the NIST SURF BL-2 with a near-parallel beam provides a different zeroth-to-first-order ratio than modeled for solar observations. The relative influence of "uncalibrated" first-order irradiance during solar observations is a function of the solar spectral irradiance and the locations of large Active Regions or solar flares. We discuss how the "uncalibrated" first-order "solar" component and the use of variable solar reference spectra affect determination of absolute SXR irradiance which currently may be significantly overestimated during high solar activity.
Use of electrocardiographic-thallium exercise testing in clinical practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gitler, B.; Fishbach, M.; Steingart, R.M.
Although there is a great deal of data on the accuracy of combined electrocardiographic-thallium exercise testing, little is known about the use of these tests in clinical practice. A quantitative likelihood system was employed to characterize referral patterns for such testing, and the impact of test results on the likelihood of coronary artery disease was examined. Two hundred thirteen subjects consecutively referred for the purpose of establishing or excluding the presence of coronary artery disease were studied. No subject had a history of a prior myocardial infarction. By historical evaluation, 96 had a low likelihood of coronary disease (less than or equal to 0.20), 88 an intermediate likelihood (0.21 to 0.80) and 29 a high likelihood (greater than 0.80). As anticipated from theoretical analyses, testing produced the greatest shifts in disease likelihood in subjects with an intermediate pretest disease likelihood, and confirmed the historical evaluation in patients at the extremes of pretest disease likelihood. Therefore, although electrocardiographic-thallium stress testing is best suited for subjects with intermediate pretest disease likelihood, the majority of referrals had either a high or low likelihood. Clinicians appear to value confirmatory results in patients at the extremes of pretest disease likelihood. Electrocardiographic exercise testing would serve a similar purpose.
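The likelihood bookkeeping behind this analysis is ordinary Bayesian updating of a pretest probability by the test's likelihood ratio. The sensitivity and specificity in the sketch below are assumed round numbers, not values measured in the study; the output illustrates why the largest probability shifts occur at intermediate pretest likelihoods.

```python
def post_test_probability(pretest, sensitivity, specificity, positive=True):
    """Update the disease probability from a test result using Bayes' theorem."""
    if positive:
        lr = sensitivity / (1 - specificity)        # likelihood ratio of a positive test
    else:
        lr = (1 - sensitivity) / specificity        # likelihood ratio of a negative test
    odds = pretest / (1 - pretest) * lr
    return odds / (1 + odds)

# Illustrative operating characteristics for a combined ECG-thallium exercise test
# (assumed values, not those of the study).
sens, spec = 0.85, 0.85
for pretest in (0.10, 0.50, 0.90):
    pos = post_test_probability(pretest, sens, spec, positive=True)
    neg = post_test_probability(pretest, sens, spec, positive=False)
    print(f"pretest {pretest:.2f}:  post-test {pos:.2f} if positive, {neg:.2f} if negative")
```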
Segmental Dynamics of Forward Fall Arrests: System Identification Approach
Kim, Kyu-Jung; Ashton-Miller, James A.
2009-01-01
Background Fall-related injuries are multifaceted problems, necessitating thorough biodynamic simulation to identify critical biomechanical factors. Methods A 2-degree-of-freedom discrete impact model was constructed through system identification and validation processes using the experimental data to understand dynamic interactions of various biomechanical parameters in bimanual forward fall arrests. Findings The bimodal reaction force response from the identified models had small identification errors for the first and second force peaks less than 3.5% and high coherence between the measured and identified model responses (R2=0.95). Model validation with separate experimental data also demonstrated excellent validation accuracy and coherence, less than 7% errors and R2=0.87, respectively. The first force peak was usually greater than the second force peak and strongly correlated with the impact velocity of the upper extremity, while the second force peak was associated with the impact velocity of the body. The impact velocity of the upper extremity relative to the body could be a major risk factor to fall-related injuries as observed from model simulations that a 75% faster arm movement relative to the falling speed of the body alone could double the first force peak from soft landing, thereby readily exceeding the fracture strength of the distal radius. Interpretation Considering that the time-critical nature of falling often calls for a fast arm movement, the use of the upper extremity in forward fall arrests is not biomechanically justified unless sufficient reaction time and coordinated protective motion of the upper extremity are available. PMID:19250726
NASA Astrophysics Data System (ADS)
Yin, Yixing; Chen, Haishan; Xu, Chongyu; Xu, Wucheng; Chen, Changchun
2014-05-01
The regionalization methods which 'trade space for time' by including several at-site data records in the frequency analysis are an efficient tool to improve the reliability of extreme quantile estimates. With the main aims of improving the understanding of the regional frequency of extreme precipitation and providing scientific and practical background and assistance in formulating the regional development strategies for water resources management in one of the most developed and flood-prone regions in China, the Yangtze River Delta (YRD) region, in this paper, L-moment-based index-flood (LMIF) method, one of the popular regionalization methods, is used in the regional frequency analysis of extreme precipitation; attention was paid to inter-site dependence and its influence on the accuracy of quantile estimates, which hasn't been considered for most of the studies using LMIF method. Extensive data screening of stationarity, serial dependence and inter-site dependence was carried out first. The entire YRD region was then categorized into four homogeneous regions through cluster analysis and homogenous analysis. Based on goodness-of-fit statistic and L-moment ratio diagrams, Generalized extreme-value (GEV) and Generalized Normal (GNO) distributions were identified as the best-fit distributions for most of the sub regions. Estimated quantiles for each region were further obtained. Monte-Carlo simulation was used to evaluate the accuracy of the quantile estimates taking inter-site dependence into consideration. The results showed that the root mean square errors (RMSEs) were bigger and the 90% error bounds were wider with inter-site dependence than those with no inter-site dependence for both the regional growth curve and quantile curve. The spatial patterns of extreme precipitation with return period of 100 years were obtained which indicated that there are two regions with the highest precipitation extremes (southeastern coastal area of Zhejiang Province and the southwest part of Anhui Province) and a large region with low precipitation extremes in the north and middle parts of Zhejiang Province, Shanghai City and Jiangsu Province. However, the central areas with low precipitation extremes are the most developed and densely populated regions in the study area, thus floods will cause great loss of human life and property damage. These findings will contribute to formulating the regional development strategies for policymakers and stakeholders in water resource management against the menaces of frequently emerged floods.
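A single-site version of the L-moment fitting used in this analysis can be sketched with Hosking's closed-form approximations for the GEV parameters. The full index-flood procedure additionally rescales each site by its index value and fits the regional growth curve to pooled data from the homogeneous region; the annual-maximum series below is synthetic.

```python
import numpy as np
from math import gamma, log

def sample_l_moments(x):
    """First three sample L-moments via probability-weighted moments."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    j = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((j - 1) / (n - 1) * x) / n
    b2 = np.sum((j - 1) * (j - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                      # mean, L-scale, L-skewness

def gev_from_l_moments(l1, l2, t3):
    """Hosking's approximation for GEV parameters (location xi, scale alpha, shape k)."""
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c**2              # k near zero reduces to the Gumbel case
    alpha = l2 * k / ((1 - 2.0**(-k)) * gamma(1 + k))
    xi = l1 - alpha * (1 - gamma(1 + k)) / k
    return xi, alpha, k

def gev_quantile(xi, alpha, k, T):
    """Quantile (e.g. annual-maximum rainfall) for return period T years."""
    F = 1.0 - 1.0 / T
    return xi + alpha / k * (1.0 - (-log(F))**k)

# Synthetic annual-maximum daily rainfall series [mm] standing in for one gauge record.
rng = np.random.default_rng(6)
amax = 60 + 25 * rng.gumbel(size=60)
xi, alpha, k = gev_from_l_moments(*sample_l_moments(amax))
print(f"xi={xi:.1f} mm, alpha={alpha:.1f} mm, k={k:+.3f}, "
      f"100-year quantile = {gev_quantile(xi, alpha, k, 100):.1f} mm")
```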
Kroonblawd, Matthew P; Pietrucci, Fabio; Saitta, Antonino Marco; Goldman, Nir
2018-04-10
We demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Darmstadt, G L; Kumar, V; Shearer, J C; Misra, R; Mohanty, S; Baqui, A H; Coffey, P S; Awasthi, S; Singh, J V; Santosham, M
2007-10-01
To determine the accuracy and acceptability of a handheld scale prototype designed for nonliterate users to classify newborns into three weight categories (≥2,500 g; 2,000 to 2,499 g; and <2,000 g). Weights of 1,100 newborns in Uttar Pradesh, India, were measured on the test scale and validated against a gold standard. Mothers, family members and community health stakeholders were interviewed to assess the acceptability of the test scale. The test scale was highly sensitive and specific at classifying newborn weight (normal weight: 95.3 and 96.3%, respectively; low birth weight: 90.4 and 99.2%, respectively; very low birth weight: 91.7 and 98.4%, respectively). The community broadly agreed that the test scale was more practical and easier to interpret than the gold standard. The BIRTHweigh III scale accurately identifies low birth weight and very low birth weight newborns to target weight-specific interventions. The scale is extremely practical and useful for resource-poor settings, especially those with low levels of literacy.
Sequence-based heuristics for faster annotation of non-coding RNA families.
Weinberg, Zasha; Ruzzo, Walter L
2006-01-01
Non-coding RNAs (ncRNAs) are functional RNA molecules that do not code for proteins. Covariance Models (CMs) are a useful statistical tool to find new members of an ncRNA gene family in a large genome database, using both sequence and, importantly, RNA secondary structure information. Unfortunately, CM searches are extremely slow. Previously, we created rigorous filters, which provably sacrifice none of a CM's accuracy, while making searches significantly faster for virtually all ncRNA families. However, these rigorous filters make searches slower than heuristics could be. In this paper we introduce profile HMM-based heuristic filters. We show that their accuracy is usually superior to heuristics based on BLAST. Moreover, we compared our heuristics with those used in tRNAscan-SE, whose heuristics incorporate a significant amount of work specific to tRNAs, whereas our heuristics are generic to any ncRNA. Performance was roughly comparable, so we expect that our heuristics provide a high-quality solution that--unlike family-specific solutions--can scale to hundreds of ncRNA families. The source code is available under the GNU Public License at the supplementary web site.
Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco; ...
2018-03-15
Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Thorn, A S; Gathercole, S E
2001-06-01
Language differences in verbal short-term memory were investigated in two experiments. In Experiment 1, bilinguals with high competence in English and French and monolingual English adults with extremely limited knowledge of French were assessed on their serial recall of words and nonwords in both languages. In all cases recall accuracy was superior in the language with which individuals were most familiar, a first-language advantage that remained when variation due to differential rates of articulation in the two languages was taken into account. In Experiment 2, bilinguals recalled lists of English and French words with and without concurrent articulatory suppression. First-language superiority persisted under suppression, suggesting that the language differences in recall accuracy were not attributable to slower rates of subvocal rehearsal in the less familiar language. The findings indicate that language-specific differences in verbal short-term memory do not exclusively originate in the subvocal rehearsal process. It is suggested that one source of language-specific variation might relate to the use of long-term knowledge to support short-term memory performance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kroonblawd, Matthew P.; Pietrucci, Fabio; Saitta, Antonino Marco
Here, we demonstrate the capability of creating robust density functional tight binding (DFTB) models for chemical reactivity in prebiotic mixtures through force matching to short time scale quantum free energy estimates. Molecular dynamics using density functional theory (DFT) is a highly accurate approach to generate free energy surfaces for chemical reactions, but the extreme computational cost often limits the time scales and range of thermodynamic states that can feasibly be studied. In contrast, DFTB is a semiempirical quantum method that affords up to a thousandfold reduction in cost and can recover DFT-level accuracy. Here, we show that a force-matched DFTB model for aqueous glycine condensation reactions yields free energy surfaces that are consistent with experimental observations of reaction energetics. Convergence analysis reveals that multiple nanoseconds of combined trajectory are needed to reach a steady-fluctuating free energy estimate for glycine condensation. Predictive accuracy of force-matched DFTB is demonstrated by direct comparison to DFT, with the two approaches yielding surfaces with large regions that differ by only a few kcal mol⁻¹.
Routine preoperative colour Doppler duplex ultrasound scanning in anterolateral thigh flaps.
Lichte, Johanna; Teichmann, Jan; Loberg, Christina; Kloss-Brandstätter, Anita; Bartella, Alexander; Steiner, Timm; Modabber, Ali; Hölzle, Frank; Lethaus, Bernd
2016-10-01
The anterolateral thigh flap (ALT) is often used to reconstruct the head and neck and depends on one or more skin perforators, which often present with variable anatomy. The aim of this study was to localise and evaluate the precise position of these perforators preoperatively with colour Doppler duplex ultrasound scanning (US). We detected 74 perforators in 30 patients. The mean duration of examination with colour Doppler was 29 (range 13-51) minutes. Adequate perforators and their anatomical course could be detected preoperatively extremely accurately (p<0.001). The mean difference between the preoperatively marked, and the real, positions was 6.3 (range 0-16) mm. There was a highly significant correlation between the accuracy of the prediction and the body mass index of the patient (0.75; p<0.001). Neither the age nor the sex of the patient correlated with the accuracy of the prediction. Colour Doppler duplex US used preoperatively to localise perforators in ALT flaps is reliable and could be adopted as standard procedure. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Dean, J C; Wilcox, C H; Daniels, A U; Goodwin, R R; Van Wagoner, E; Dunn, H K
1991-01-01
A new experimental technique for measuring generalized three-dimensional motion of vertebral bodies during cyclic loading in vitro is presented. The system consists of an orthogonal array of three lasers mounted rigidly to one vertebra, and a set of three mutually orthogonal charge-coupled devices mounted rigidly to an adjacent vertebra. Each laser strikes a corresponding charge-coupled device screen. The mathematical model of the system is reduced to a linear set of equations, with the consequent matrix algebra allowing fast real-time data reduction during cyclic movements of the spine. The range and accuracy of the system are well suited for studying thoracolumbar motion segments. Distinct advantages of the system include miniaturization of the components, the elimination of the need for mechanical linkages between the bodies, and a high degree of accuracy which is not dependent on viewing volume, as found in photogrammetric systems. More generally, the spectrum of potential applications of systems of this type to the real-time measurement of the relative motion of two bodies is extremely broad.
Development of a low-cost multiple diode PIV laser for high-speed flow visualization
NASA Astrophysics Data System (ADS)
Bhakta, Raj; Hargather, Michael
2017-11-01
Particle imaging velocimetry (PIV) is an optical visualization technique that typically incorporates a single high-powered laser to illuminate seeded particles in a fluid flow. Standard PIV lasers are extremely costly and have low repetition rates that severely limit their capability in high-speed, time-resolved imaging. The development of a multiple-diode laser system consisting of continuous lasers allows for flexible high-speed imaging with a wider range of test parameters. The developed laser system was fabricated with off-the-shelf parts for approximately $500. A series of experimental tests were conducted to compare the laser apparatus to a standard Nd:YAG double-pulsed PIV laser. Steady and unsteady flows were processed to compare the two systems and validate the accuracy of the multiple-laser design. PIV results indicate good correlation between the two laser systems and verify the construction of a precise laser instrument. The key technical obstacle to this approach was laser calibration and positioning, which will be discussed. HDTRA1-14-1-0070.
NASA Astrophysics Data System (ADS)
Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao
2016-03-01
Highly accurate classification by an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal must be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor when a short signal, such as a hydraulic impact signal, is concerned. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extreme-center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnostic ability of the improved EEMD+SVM method is compared with the EEMD+SVM and EMD+SVM methods, and its diagnostic accuracy is distinctly higher than that of the other two methods whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM has a high ability for the diagnosis of hydraulic impact faults.
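The sketch below illustrates the general EEMD-plus-SVM pipeline described above (standard EEMD, not the authors' improved variant): each signal is decomposed into intrinsic mode functions (IMFs), an energy eigenvector is formed from the IMF energies, and an SVM is trained on those features. It assumes the PyEMD package (installed as EMD-signal) and scikit-learn; the signals and labels are synthetic placeholders.

```python
import numpy as np
from PyEMD import EEMD
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def energy_eigenvector(signal, max_imfs=6):
    """Normalized energies of the first few IMFs from an EEMD decomposition."""
    imfs = EEMD(trials=20).eemd(signal)[:max_imfs]     # small trial count to keep the sketch fast
    energies = np.array([np.sum(imf ** 2) for imf in imfs])
    padded = np.zeros(max_imfs)
    padded[:len(energies)] = energies
    return padded / (np.sum(padded) + 1e-12)

rng = np.random.default_rng(1)
# Placeholder "hydraulic impact" signals: two classes with different dominant bands.
signals = [np.sin(2 * np.pi * f * np.linspace(0, 1, 512)) + 0.2 * rng.standard_normal(512)
           for f in ([8] * 20 + [40] * 20)]
labels = np.array([0] * 20 + [1] * 20)

X = np.array([energy_eigenvector(s) for s in signals])
print("CV accuracy:", cross_val_score(SVC(kernel="rbf", C=10), X, labels, cv=5).mean())
```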
Improved Extreme Learning Machine based on the Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang
2018-03-01
Extreme learning machines and their improved variants have weaknesses, such as computational complexity and learning error. After in-depth analysis, and drawing on the importance of hidden nodes in SVM, a novel sensitivity analysis method is proposed that matches people's cognitive habits. On this basis, an improved ELM is proposed that can remove hidden nodes before the learning-error target is met and can efficiently manage the number of hidden nodes, so as to improve performance. Comparative tests show that it performs better in learning time, accuracy and other respects.
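For reference, the sketch below is a minimal numpy implementation of a basic ELM (random, fixed hidden weights with a least-squares output layer), not the sensitivity-based pruning method proposed above; the toy data and the number of hidden nodes are assumptions.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)            # random, fixed hidden layer
        self.beta = np.linalg.pinv(H) @ y           # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta

# Toy regression example.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(200)
model = ELM(n_hidden=40).fit(X, y)
print("train MSE:", np.mean((model.predict(X) - y) ** 2))
```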
Perceiving Facial and Vocal Expressions of Emotion in Individuals with Williams Syndrome
ERIC Educational Resources Information Center
Plesa-Skwerer, Daniela; Faja, Susan; Schofield, Casey; Verbalis, Alyssa; Tager-Flusberg, Helen
2006-01-01
People with Williams syndrome are extremely sociable, empathic, and expressive in communication. Some researchers suggest they may be especially sensitive to perceiving emotional expressions. We administered the Faces and Paralanguage subtests of the Diagnostic Analysis of Nonverbal Accuracy Scale (DANVA2), a standardized measure of emotion…
NASA Astrophysics Data System (ADS)
Salido-Monzú, David; Wieser, Andreas
2018-04-01
The intermode beats generated by direct detection of a mode-locked femtosecond laser represent inherent high-quality and high-frequency modulations suitable for electro-optical distance measurement (EDM). This approach has already been demonstrated as a robust alternative to standard long-distance EDM techniques. Here, we extend this idea to intermode beating of a wideband source obtained by spectral broadening of a femtosecond laser. We aim at establishing a technological basis for accurate and flexible multiwavelength distance measurement. Results are presented from experiments using beat notes at 1 GHz generated by two bandpass-filtered regions from both extremes of a coherent supercontinuum ranging from 550 to 1050 nm. The displacement measurements performed simultaneously on both colors on a short-distance setup show that the noise and coherence of the wideband laser are adequate for achieving accuracies of about 0.01 mm on each channel, with a potential improvement by accessing higher beat notes. Pointing and power instabilities have been identified as the dominant sources of systematic deviations. Nevertheless, the results demonstrate the basic feasibility of the proposed technique. We consider this a promising starting point for the further development of multiwavelength EDM enabling increased accuracy over long distances through dispersion-based integral refractivity compensation and for remote surface-material probing along with distance measurement in laser scanning.
Research of fundamental interactions with use of ultracold neutrons
NASA Astrophysics Data System (ADS)
Serebrov, A. P.
2017-01-01
Ultracold neutrons (UCN) offer unique opportunities for research into fundamental interactions in elementary particle physics. The search for the electric dipole moment of the neutron (EDM) aims to test models of CP violation. Precise measurement of the neutron lifetime is extremely important for cosmology and astrophysics. Considerable progress on these questions can be achieved with the ultracold-neutron supersource based on superfluid helium now under construction at PNPI NRC KI. This source will allow us to increase the density of ultracold neutrons by approximately a factor of 100 with respect to the best UCN source at the high-flux reactor of the Institut Laue-Langevin (Grenoble, France). The design and the basic elements of the source have been prepared, a full-scale model of the source has been tested, and the scientific program has been developed. An increase in the accuracy of neutron EDM measurements by an order of magnitude, down to the level of 10⁻²⁷ to 10⁻²⁸ e·cm, is planned, which is highly important for elementary particle physics. The accuracy of the neutron lifetime measurement can also be increased by an order of magnitude. Finally, once a UCN density of about 10³-10⁴ cm⁻³ is achieved, an experimental search for neutron-antineutron oscillations using UCN will become possible. The present status of the project and its scientific program will be discussed.
Random Forests for Global and Regional Crop Yield Predictions.
Jeong, Jig Han; Resop, Jonathan P; Mueller, Nathaniel D; Fleisher, David H; Yun, Kyungdahm; Butler, Ethan E; Timlin, Dennis J; Shim, Kyo-Moon; Gerber, James S; Reddy, Vangimalla R; Kim, Soo-Hyung
2016-01-01
Accurate predictions of crop yield are critical for developing effective agricultural and food policies at the regional and global scales. We evaluated a machine-learning method, Random Forests (RF), for its ability to predict crop yield responses to climate and biophysical variables at global and regional scales in wheat, maize, and potato in comparison with multiple linear regressions (MLR) serving as a benchmark. We used crop yield data from various sources and regions for model training and testing: 1) gridded global wheat grain yield, 2) maize grain yield from US counties over thirty years, and 3) potato tuber and maize silage yield from the northeastern seaboard region. RF was found highly capable of predicting crop yields and outperformed MLR benchmarks in all performance statistics that were compared. For example, the root mean square errors (RMSE) ranged between 6 and 14% of the average observed yield with RF models in all test cases whereas these values ranged from 14% to 49% for MLR models. Our results show that RF is an effective and versatile machine-learning method for crop yield predictions at regional and global scales for its high accuracy and precision, ease of use, and utility in data analysis. RF may result in a loss of accuracy when predicting the extreme ends or responses beyond the boundaries of the training data.
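The sketch below illustrates the kind of comparison described above, fitting a Random Forest and a multiple linear regression benchmark to synthetic yield-like data and reporting RMSE as a percentage of the mean observed yield. The predictor variables, data-generating process and hyperparameters are illustrative assumptions, not the study's datasets.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([rng.normal(20, 5, n),     # growing-season temperature (assumed)
                     rng.normal(500, 150, n),  # precipitation (assumed)
                     rng.normal(150, 40, n)])  # nitrogen input (assumed)
# Synthetic yield with a nonlinear precipitation response.
y = 3 * X[:, 0] - 0.002 * (X[:, 1] - 500) ** 2 + 0.05 * X[:, 2] + rng.normal(0, 5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("RF", RandomForestRegressor(n_estimators=300, random_state=0)),
                    ("MLR", LinearRegression())]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE = {rmse:.1f} ({100 * rmse / y_te.mean():.0f}% of mean yield)")
```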
Modeling of Turbulent Natural Convection in Enclosed Tall Cavities
NASA Astrophysics Data System (ADS)
Goloviznin, V. M.; Korotkin, I. A.; Finogenov, S. A.
2017-12-01
It was shown in our previous work (J. Appl. Mech. Tech. Phys. 57 (7), 1159-1171 (2016)) that the eddy-resolving parameter-free CABARET scheme as applied to two- and three-dimensional de Vahl Davis benchmark tests (thermal convection in a square cavity) yields numerical results on coarse (20 × 20 and 20 × 20 × 20) grids that agree surprisingly well with experimental data and highly accurate computations for Rayleigh numbers of up to 10¹⁴. In the present paper, the sensitivity of this phenomenon to the cavity shape (varying from cubical to highly elongated) is analyzed. Box-shaped computational domains with aspect ratios of 1:4, 1:10, and 1:28.6 are considered. The results produced by the CABARET scheme are compared with experimental data (aspect ratio of 1:28.6), DNS results (aspect ratio of 1:4), and an empirical formula (aspect ratio of 1:10). In all the cases, the CABARET-based integral parameters of the cavity flow agree well with the other authors' results. Notably coarse grids with mesh refinement toward the walls are used in the CABARET calculations. It is shown that acceptable numerical accuracy on extremely coarse grids is achieved for an aspect ratio of up to 1:10. For higher aspect ratios, the number of grid cells required for achieving prescribed accuracy grows significantly.
NASA Astrophysics Data System (ADS)
Jaensch, M.; Lampérth, M. U.
2007-04-01
This paper describes the design and performance testing of a micropositioning, vibration isolation and suppression system, which can be used to position a piece of equipment with sub-micrometre accuracy and stabilize it against various types of external disturbance. The presented demonstrator was designed as part of a novel, extremely open pre-polarization magnetic resonance imaging (MRI) scanner. The active control system utilizes six piezoelectric actuators, wide-bandwidth optical fibre displacement sensors and a very fast digital field programmable gate array (FPGA) controller. A PID feedback control algorithm with emphasis on a very high level of integral gain is employed. Because of the high external forces expected, the whole structure is designed to be as stiff as possible, including a novel hard-mount approach with parallel passive damping for the suspension of the payload. The performance of the system is studied theoretically and experimentally. The sensitive equipment can be positioned in six degrees of freedom with an accuracy of ±0.2 µm. External disturbances acting on the support structure or the equipment itself are attenuated in three degrees of freedom to below -20 dB within a bandwidth of 0-200 Hz. Excellent impulse rejection and input tracking are demonstrated as well.
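A minimal sketch of the control idea mentioned above follows: a discrete-time PID position loop with dominant integral action driving a single-axis mass on a stiff passive mount. The plant parameters and gains are illustrative assumptions, not the instrument's values.

```python
import numpy as np

dt, m, c, k = 1e-4, 2.0, 50.0, 1e5   # time step [s], payload mass, damping, mount stiffness (assumed)
Kp, Ki, Kd = 2e5, 1e7, 5e2           # gains with heavy integral action (assumed)

x = v = integral = prev_err = 0.0
target = 1e-6                        # 1 micrometre step command
history = []
for _ in range(20000):               # 2 s of simulated time
    err = target - x
    integral += err * dt
    force = Kp * err + Ki * integral + Kd * (err - prev_err) / dt
    prev_err = err
    a = (force - c * v - k * x) / m  # actuator force acting against the passive mount
    v += a * dt
    x += v * dt
    history.append(x)

print("steady-state error [um]:", abs(target - np.mean(history[-1000:])) * 1e6)
```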
NASA Astrophysics Data System (ADS)
Bonatto, Cristian; Endler, Antonio
2017-07-01
We investigate the occurrence of extreme and rare events, i.e., giant and rare light pulses, in a periodically modulated CO2 laser model. Due to nonlinear resonant processes, we show a scenario of interaction between chaotic bands of different orders, which may lead to the formation of extreme and rare events. We identify a crisis line in the modulation parameter space, and we show that, when the modulation amplitude increases, remaining in the vicinity of the crisis, some statistical properties of the laser pulses, such as the average and dispersion of amplitudes, do not change much, whereas the amplitude of extreme events grows enormously, giving rise to extreme events with much larger deviations than usually reported, with a significant probability of occurrence, i.e., with a long-tailed non-Gaussian distribution. We identify recurrent regular patterns, i.e., precursors, that anticipate the emergence of extreme and rare events, and we associate these regular patterns with unstable periodic orbits embedded in a chaotic attractor. We show that the precursors may or may not lead to the emergence of extreme events. Thus, we compute the probability of success or failure (false alarm) in the prediction of the extreme events, once a precursor is identified in the deterministic time series. We show that this probability depends on the accuracy with which the precursor is identified in the laser intensity time series.
NASA Astrophysics Data System (ADS)
Roman, Jacola; Knuteson, Robert; August, Thomas; Hultberg, Tim; Ackerman, Steve; Revercomb, Hank
2016-08-01
Satellite remote sensing of precipitable water vapor (PWV) is essential for monitoring moisture in real time for weather applications, as well as tracking the long-term changes in PWV for climate change trend detection. This study assesses the accuracies of the current satellite observing system, specifically the National Aeronautics and Space Administration (NASA) Atmospheric Infrared Sounder (AIRS) v6 PWV product and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) Infrared Atmospheric Sounding Interferometer (IASI) v6 PWV product, using the ground-based SuomiNet Global Positioning System (GPS) network as truth. Elevation-corrected collocated matchups to each SuomiNet GPS station in North America and around the world were created, and results were broken down by station, ARM region, climate zone, and latitude zone. The greatest difference, exceeding 5%, between IASI and AIRS retrievals occurred in the tropics. Generally, IASI and AIRS fall within a 5% error in the PWV range of 20-40 mm (a mean bias less than 2 mm), with a wet bias for extremely low PWV values (less than 5 mm) and a dry bias for extremely high PWV values (greater than 50 mm). The operational IR satellite products are able to capture the mean PWV but degrade in the extreme dry and wet regimes.
EUV Irradiance Inputs to Thermospheric Density Models: Open Issues and Path Forward
NASA Astrophysics Data System (ADS)
Vourlidas, A.; Bruinsma, S.
2018-01-01
One of the objectives of the NASA Living With a Star Institute on "Nowcasting of Atmospheric Drag for low Earth orbit (LEO) Spacecraft" was to investigate whether and how to increase the accuracy of atmospheric drag models by improving the quality of the solar forcing inputs, namely, extreme ultraviolet (EUV) irradiance information. In this focused review, we examine the status of and issues with EUV measurements and proxies, discuss recent promising developments, and suggest a number of ways to improve the reliability, availability, and forecast accuracy of EUV measurements in the next solar cycle.
NASA Technical Reports Server (NTRS)
Ko, William L.
1988-01-01
Accuracies of solutions (structural temperatures and thermal stresses) obtained from different thermal and structural FEMs set up for the Space Shuttle Orbiter (SSO) are compared and discussed. For studying the effect of element size on the solution accuracies of heat-transfer and thermal-stress analyses of the SSO, five SPAR thermal models and five NASTRAN structural models were set up for wing midspan bay 3. The structural temperature distribution over the wing skin (lower and upper) surface of one bay was dome shaped and induced more severe thermal stresses in the chordwise direction than in the spanwise direction. The induced thermal stresses were extremely sensitive to slight variation in structural temperature distributions. Both internal convection and internal radiation were found to have equal effects on the SSO.
Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua
2014-01-01
A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of Parkinson's disease (PD). In the proposed method, SCFW is used as a data preprocessing tool that aims at decreasing the variance in features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated against the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistics value. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far via a 10-fold cross-validation scheme, with a classification accuracy of 99.49%, a sensitivity of 100%, a specificity of 99.39%, an AUC of 99.69%, an f-measure value of 0.9964, and a kappa value of 0.9867. Promisingly, the proposed method might serve as a new candidate among powerful methods for the diagnosis of PD with excellent performance.
Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua
2014-01-01
A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of Parkinson's disease (PD). In the proposed method, SCFW is used as a data preprocessing tool that aims at decreasing the variance in features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail. The efficiency and effectiveness of the proposed method have been rigorously evaluated against the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic (ROC) curve (AUC), f-measure, and kappa statistics value. Experimental results have demonstrated that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based, and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far via a 10-fold cross-validation scheme, with a classification accuracy of 99.49%, a sensitivity of 100%, a specificity of 99.39%, an AUC of 99.69%, an f-measure value of 0.9964, and a kappa value of 0.9867. Promisingly, the proposed method might serve as a new candidate among powerful methods for the diagnosis of PD with excellent performance. PMID:25484912
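As a reference for the classifier named above, the sketch below implements a basic kernel extreme learning machine (KELM) with an RBF kernel: the output weights are obtained by solving a regularized linear system in the kernel matrix. The subtractive-clustering feature-weighting step is omitted, the data are random placeholders, and the C and gamma values are assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    def __init__(self, C=100.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        T = np.eye(y.max() + 1)[y]                           # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, T)
        return self

    def predict(self, Xnew):
        return np.argmax(rbf_kernel(Xnew, self.X, self.gamma) @ self.beta, axis=1)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(2, 1, (50, 4))])   # toy two-class data
y = np.array([0] * 50 + [1] * 50)
print("train accuracy:", (KELM().fit(X, y).predict(X) == y).mean())
```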
Automatic computational labeling of glomerular textural boundaries
NASA Astrophysics Data System (ADS)
Ginley, Brandon; Tomaszewski, John E.; Sarder, Pinaki
2017-03-01
The glomerulus, a specialized bundle of capillaries, is the blood-filtering unit of the kidney. Each human kidney contains about 1 million glomeruli. Structural damage in the glomerular micro-compartments gives rise to several renal conditions, the most severe of which is proteinuria, where excessive blood proteins flow freely into the urine. The sole way to confirm glomerular structural damage in renal pathology is by examining histopathological or immunofluorescence-stained needle biopsies under a light microscope. However, this method is extremely tedious and time consuming, and requires manual scoring of the number and volume of structures. Computational quantification of equivalent features promises to greatly ease this manual burden. The largest obstacle to computational quantification of renal tissue is the ability to recognize complex glomerular textural boundaries automatically. Here we present a computational pipeline to identify glomerular boundaries with high precision and accuracy. The pipeline employs an integrated approach composed of Gabor filtering, Gaussian blurring, statistical F-testing, and distance transform, and performs significantly better than the standard Gabor-based textural segmentation method. Our integrated approach provides mean accuracy/precision of 0.89/0.97 on n = 200 Hematoxylin and Eosin (HE) glomerulus images, and mean accuracy/precision of 0.88/0.94 on n = 200 Periodic Acid-Schiff (PAS) glomerulus images. The respective accuracy/precision of the Gabor filter bank based method is 0.83/0.84 for HE and 0.78/0.8 for PAS. Our method will simplify computational partitioning of glomerular micro-compartments hidden within dense textural boundaries. Automatic quantification of glomeruli will streamline structural analysis in the clinic, and can help realize real-time diagnoses and interventions.
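The sketch below illustrates the kind of texture-based boundary extraction described above (a small Gabor bank, Gaussian smoothing and a distance transform on a synthetic image). It is a simplified illustration rather than the authors' validated pipeline; in particular, the statistical F-testing step is replaced here by a simple Otsu threshold, and scikit-image and scipy are assumed.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gabor, threshold_otsu

rng = np.random.default_rng(0)
image = rng.normal(size=(256, 256))
yy, xx = np.mgrid[:256, :256]
inside = (yy - 128) ** 2 + (xx - 128) ** 2 < 80 ** 2
image[inside] += np.sin(0.9 * xx[inside])           # textured "glomerular" region (synthetic)

# Energy of Gabor responses over a small bank of orientations.
energy = np.zeros_like(image)
for theta in np.linspace(0, np.pi, 4, endpoint=False):
    real, imag = gabor(image, frequency=0.15, theta=theta)
    energy += real ** 2 + imag ** 2

smoothed = ndi.gaussian_filter(energy, sigma=8)     # suppress pixel-level noise
mask = smoothed > threshold_otsu(smoothed)          # candidate glomerular region
distance = ndi.distance_transform_edt(mask)         # distance to the textural boundary
print("estimated region area (px):", int(mask.sum()), "max interior depth:", distance.max())
```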
Genomic Prediction Accounting for Residual Heteroskedasticity.
Ou, Zhining; Tempelman, Robert J; Steibel, Juan P; Ernst, Catherine W; Bates, Ronald O; Bello, Nora M
2015-11-12
Whole-genome prediction (WGP) models that use single-nucleotide polymorphism marker information to predict genetic merit of animals and plants typically assume homogeneous residual variance. However, variability is often heterogeneous across agricultural production systems and may subsequently bias WGP-based inferences. This study extends classical WGP models based on normality, heavy-tailed specifications and variable selection to explicitly account for environmentally-driven residual heteroskedasticity under a hierarchical Bayesian mixed-models framework. WGP models assuming homogeneous or heterogeneous residual variances were fitted to training data generated under simulation scenarios reflecting a gradient of increasing heteroskedasticity. Model fit was based on pseudo-Bayes factors and also on prediction accuracy of genomic breeding values computed on a validation data subset one generation removed from the simulated training dataset. Homogeneous vs. heterogeneous residual variance WGP models were also fitted to two quantitative traits, namely 45-min postmortem carcass temperature and loin muscle pH, recorded in a swine resource population dataset prescreened for high and mild residual heteroskedasticity, respectively. Fit of competing WGP models was compared using pseudo-Bayes factors. Predictive ability, defined as the correlation between predicted and observed phenotypes in validation sets of a five-fold cross-validation was also computed. Heteroskedastic error WGP models showed improved model fit and enhanced prediction accuracy compared to homoskedastic error WGP models although the magnitude of the improvement was small (less than two percentage points net gain in prediction accuracy). Nevertheless, accounting for residual heteroskedasticity did improve accuracy of selection, especially on individuals of extreme genetic merit. Copyright © 2016 Ou et al.
Shamur, Eyal; Zilka, Miri; Hassner, Tal; China, Victor; Liberzon, Alex; Holzman, Roi
2016-06-01
Using videography to extract quantitative data on animal movement and kinematics constitutes a major tool in biomechanics and behavioral ecology. Advanced recording technologies now enable acquisition of long video sequences encompassing sparse and unpredictable events. Although such events may be ecologically important, analysis of sparse data can be extremely time-consuming and potentially biased; data quality is often strongly dependent on the training level of the observer and subject to contamination by observer-dependent biases. These constraints often limit our ability to study animal performance and fitness. Using long videos of foraging fish larvae, we provide a framework for the automated detection of prey acquisition strikes, a behavior that is infrequent yet critical for larval survival. We compared the performance of four video descriptors and their combinations against manually identified feeding events. For our data, the best single descriptor provided a classification accuracy of 77-95% and detection accuracy of 88-98%, depending on fish species and size. Using a combination of descriptors improved the accuracy of classification by ∼2%, but did not improve detection accuracy. Our results indicate that the effort required by an expert to manually label videos can be greatly reduced to examining only the potential feeding detections in order to filter false detections. Thus, using automated descriptors reduces the amount of manual work needed to identify events of interest from weeks to hours, enabling the assembly of an unbiased large dataset of ecologically relevant behaviors. © 2016. Published by The Company of Biologists Ltd.
Protein classification based on text document classification techniques.
Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith
2005-03-01
The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to the extreme diversity among its members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden Markov model (HMM) and support vector machine (SVM) using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0 and 92.4% in level I and level II subfamily classification respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification respectively. Copyright 2005 Wiley-Liss, Inc.
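The sketch below illustrates the document-classification analogy described above: character n-gram counts extracted from sequences, chi-square feature selection, and a Naive Bayes classifier, assembled with scikit-learn. The sequences and family labels are toy placeholders, not GPCR data, and the n-gram range and k are assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sequences = ["MKTAYIAKQR", "MKTAYLAKQK", "GGSLLWRRGG", "GGSLIWRKGG",
             "MKTAFIAKQR", "GGALLWRRGC"]                    # toy protein fragments
families  = ["A", "A", "B", "B", "A", "B"]

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(2, 3)),   # peptide n-gram counts
    SelectKBest(chi2, k=10),                                # chi-square feature selection
    MultinomialNB(),                                        # Naive Bayes classifier
)
model.fit(sequences, families)
print(model.predict(["MKTAYIAKQK", "GGSLLWRKGG"]))
```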
NASA Astrophysics Data System (ADS)
Krause, Keith Stuart
The change, reduction, or extinction of species is a major issue currently facing the Earth. Efforts are underway to measure, monitor, and protect habitats that contain high species diversity. Remote sensing technology has great value for monitoring species diversity by mapping ecosystems and using those land cover maps or other derived data as proxies for species number and distribution. The National Ecological Observatory Network (NEON) Airborne Observation Platform (AOP) consists of remote sensing instruments such as an imaging spectrometer, a full-waveform lidar, and a high-resolution color camera. AOP collected data over the Ordway-Swisher Biological Station (OSBS) in May 2014. A majority of the OSBS site is covered by the Sandhill ecosystem, which contains a very high diversity of vegetation species and is a native habitat for several threatened fauna species. The research presented here investigates ways to analyze the AOP data to map ecosystems at the OSBS site. It leverages the high spatial resolution data, studies the variability of the data within a ground-plot scale, and integrates data from the different sensors. Mathematical features are derived from the data and brought into a decision tree classification algorithm (rpart) to create an ecosystem map for the site. The hyperspectral and lidar features serve as proxies for chemical, functional, and structural differences in the vegetation types of each ecosystem. K-fold cross-validation shows a training accuracy of 91%, a validation accuracy of 78%, and a 66% accuracy using independent ground validation. The results presented here represent an important contribution to utilizing integrated hyperspectral and lidar remote sensing data for ecosystem mapping, by relating the spatial variability of the data within a ground-plot scale to a collection of vegetation types that make up a given ecosystem.
Estimating extreme river discharges in Europe through a Bayesian network
NASA Astrophysics Data System (ADS)
Paprotny, Dominik; Morales-Nápoles, Oswaldo
2017-06-01
Large-scale hydrological modelling of flood hazards requires adequate extreme discharge data. In practice, models based on physics are applied alongside those utilizing only statistical analysis. The former require enormous computational power, while the latter are mostly limited in accuracy and spatial coverage. In this paper we introduce an alternative, statistical approach based on Bayesian networks (BNs), a graphical model for dependent random variables. We use a non-parametric BN to describe the joint distribution of extreme discharges in European rivers and variables representing the geographical characteristics of their catchments. Annual maxima of daily discharges from more than 1800 river gauges (stations with catchment areas ranging from 1.4 to 807 000 km²) were collected, together with information on terrain, land use and local climate. The (conditional) correlations between the variables are modelled through copulas, with the dependency structure defined in the network. The results show that using this method, mean annual maxima and return periods of discharges could be estimated with an accuracy similar to existing studies using physical models for Europe and better than a comparable global statistical model. Performance of the model varies slightly between regions of Europe, but is consistent between different time periods, and remains the same in a split-sample validation. Though discharge prediction under climate change is not the main scope of this paper, the BN was applied, as an example, to a large domain covering all sizes of rivers in the continent for both present and future climate. Results show substantial variation in the influence of climate change on river discharges. The model can be used to provide quick estimates of extreme discharges at any location for the purpose of obtaining input information for hydraulic modelling.
NASA Astrophysics Data System (ADS)
Sugawara, Jun; Kamiya, Tomohiro; Mikashima, Bumpei
2017-09-01
The ultra-low-thermal-expansion ceramic NEXCERA™ is regarded as one of the potential candidate materials for the ultralightweight and thermally stable optical mirrors needed for space telescopes in future optical missions with extremely demanding observation specifications. To realize high-precision NEXCERA mirrors for space telescopes, it is important to develop a deterministic aspheric-shape polishing process and a precise figure-correction polishing method for NEXCERA. Magnetorheological finishing (MRF) was tested on a NEXCERA aspheric mirror starting from its best-fit sphere, because MRF is regarded as the process best suited to precise figure correction of ultralightweight mirrors with thin sheets, owing to its advantage of low normal-force polishing. Using the best combination of material and MR fluid, MRF performed high-precision figure correction and induced a hyperbolic shape from a conventionally polished 100 mm diameter sphere, achieving sufficiently high figure accuracy and high-quality surface roughness. In order to apply NEXCERA to a large-scale space mirror, as a next step a middle-size solid mirror, a 250 mm diameter concave parabola, was machined. It was roughly ground to the parabolic shape, then lapped and polished by a computer-controlled polishing machine using sub-aperture polishing tools. This resulted in a smooth surface of 0.6 nm RMS and a figure accuracy of λ/4, sufficient as a pre-MRF surface. Further study of NEXCERA space mirrors should proceed to figure correction using MRF on a lightweight mirror with a thin mirror sheet.
NASA Astrophysics Data System (ADS)
Apai, Dániel; Kasper, Markus; Skemer, Andrew; Hanson, Jake R.; Lagrange, Anne-Marie; Biller, Beth A.; Bonnefoy, Mickaël; Buenzli, Esther; Vigan, Arthur
2016-03-01
Time-resolved photometry is an important new probe of the physics of condensate clouds in extrasolar planets and brown dwarfs. Extreme adaptive optics systems can directly image planets, but precise brightness measurements are challenging. We present VLT/SPHERE high-contrast, time-resolved broad H-band near-infrared photometry for four exoplanets in the HR 8799 system, sampling changes from night to night over five nights with relatively short integrations. The photospheres of these four planets are often modeled by patchy clouds and may show large-amplitude rotational brightness modulations. Our observations provide high-quality images of the system. We present a detailed performance analysis of different data analysis approaches to accurately measure the relative brightnesses of the four exoplanets. We explore the information in satellite spots and demonstrate their use as a proxy for image quality. While the brightness variations of the satellite spots are strongly correlated, we also identify a second-order anti-correlation pattern between the different spots. Our study finds that KLIP reduction based on principal components analysis with satellite-spot-modulated artificial-planet-injection-based photometry leads to a significant (˜3×) gain in photometric accuracy over standard aperture-based photometry and reaches 0.1 mag per point accuracy for our data set, the signal-to-noise ratio of which is limited by small field rotation. Relative planet-to-planet photometry can be compared between nights, enabling observations spanning multiple nights to probe variability. Recent high-quality relative H-band photometry of the b-c planet pair agrees to about 1%.
Elleithy, Khaled; Elleithy, Abdelrahman
2018-01-01
An eye exam can be as efficacious as a physical one in identifying health concerns. Retina screening can provide the very first clue for detecting a variety of hidden health issues, including pre-diabetes and diabetes. In the process of clinical diagnosis and prognosis, ophthalmologists rely heavily on the binary segmented version of the retina fundus image, where the accuracy of the segmented vessels, optic disc, and abnormal lesions strongly affects the diagnostic accuracy, which in turn affects the subsequent clinical treatment steps. This paper proposes an automated retinal fundus image segmentation system composed of three segmentation subsystems that follow the same core segmentation algorithm. Despite broad differences in features and characteristics, retinal vessels, the optic disc, and exudate lesions are extracted by each subsystem without the need for texture analysis or synthesis. For the sake of compact diagnosis and complete clinical insight, our proposed system can detect these anatomical structures in one session with high accuracy, even in pathological retina images. The proposed system uses a robust hybrid segmentation algorithm that combines adaptive fuzzy thresholding and mathematical morphology. The proposed system is validated using four benchmark datasets: DRIVE and STARE (vessels), DRISHTI-GS (optic disc), and DIARETDB1 (exudate lesions). Competitive segmentation performance is achieved, outperforming a variety of up-to-date systems and demonstrating the capacity to deal with other heterogeneous anatomical structures. PMID:29888146
Accurate Mobile Urban Mapping via Digital Map-Based SLAM †
Roh, Hyunchul; Jeong, Jinyong; Cho, Younggun; Kim, Ayoung
2016-01-01
This paper presents accurate urban map generation using digital map-based Simultaneous Localization and Mapping (SLAM). Throughout this work, our main objective is generating a 3D and lane map aiming for sub-meter accuracy. In conventional mapping approaches, extremely high accuracy has been achieved by either (i) exploiting costly airborne sensors or (ii) surveying with a static mapping system on a stationary platform. Mobile scanning systems have recently gathered popularity but are mostly limited by the availability of the Global Positioning System (GPS). We focus on the fact that the availability of GPS and urban structures are both sporadic but complementary. By modeling both GPS and digital map data as measurements and integrating them with other sensor measurements, we leverage SLAM for an accurate mobile mapping system. Our proposed algorithm builds an efficient graph SLAM framework that runs in real time and targets sub-meter accuracy on a mobile platform. Integrated with the SLAM framework, we implement a motion-adaptive model for Inverse Perspective Mapping (IPM). Using motion estimation derived from SLAM, the experimental results show that the proposed approaches provide stable bird's-eye view images, even with significant motion during the drive. Our real-time map generation framework is validated via a long-distance urban test and evaluated at randomly sampled points using Real-Time Kinematic (RTK)-GPS. PMID:27548175
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict the weather with distinctive output, particularly a GIS-based mapping process with information about the current weather status at given coordinates in each region and the capability to forecast for the following seven days. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. The forecasting error is calculated as the mean square error (MSE). The error is 0.28 for minimum temperature and 0.15 for maximum temperature. Meanwhile, the error is 0.38 for minimum humidity and 0.04 for maximum humidity. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the higher the accuracy.
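The sketch below shows a highly simplified way of combining several model forecasts with likelihood-based weights, in the spirit of Bayesian Model Averaging; full BMA typically fits the weights and variances by EM, whereas this stand-in weights each model by its Gaussian likelihood on a training window. All data are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
truth_train = rng.normal(28, 2, 30)                        # past observed max temperature [C] (synthetic)
# Three competing models with different biases and spreads (synthetic).
forecasts_train = np.vstack([truth_train + rng.normal(b, s, 30)
                             for b, s in [(0.5, 1.0), (-1.0, 1.5), (0.0, 2.5)]])

sigma = (forecasts_train - truth_train).std(axis=1, keepdims=True)   # per-model error spread
log_lik = (-0.5 * np.sum(((forecasts_train - truth_train) / sigma) ** 2, axis=1)
           - 30 * np.log(sigma[:, 0]))
weights = np.exp(log_lik - log_lik.max())
weights /= weights.sum()

new_forecasts = np.array([29.1, 27.4, 30.2])               # today's three model outputs (placeholders)
print("weights:", np.round(weights, 3), "combined forecast:", float(weights @ new_forecasts))
```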
Schreckengaust, Richard; Littlejohn, Lanny; Zarow, Gregory J
2014-02-01
The lower extremity tourniquet failure rate remains significantly higher in combat than in preclinical testing, so we hypothesized that tourniquet placement accuracy, speed, and effectiveness would improve during training and decline during simulated combat. Navy Hospital Corpsmen (N = 89), enrolled in a Tactical Combat Casualty Care training course in preparation for deployment, applied the Combat Application Tourniquet (CAT) and the Special Operations Forces Tactical Tourniquet (SOFT-T) on day 1 and day 4 of classroom training, and then under simulated combat, wherein participants ran an obstacle course to apply a tourniquet while wearing full body armor and avoiding simulated small arms fire (paint balls). Application time and pulse elimination effectiveness improved from day 1 to day 4 (p < 0.005). Under simulated combat, application time slowed significantly (p < 0.001), whereas accuracy and effectiveness declined slightly. Pulse elimination was poor for CAT (25% failure) and SOFT-T (60% failure) even in classroom conditions following training. CAT was more quickly applied (p < 0.005) and more effective (p < 0.002) than SOFT-T. Training fostered fast and effective application of leg tourniquets, while performance declined under simulated combat. The inherent efficacy of tourniquet products contributes to high failure rates under combat conditions, pointing to the need for superior tourniquets and for rigorous deployment preparation training in simulated combat scenarios. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
NASA Technical Reports Server (NTRS)
Neeman, Binyamin U.; Ohring, George; Joseph, Joachim H.
1988-01-01
A vertically integrated formulation (VIF) model for sea ice/snow and land snow is discussed which can simulate the nonlinear effects of heat storage and transfer through the layers of snow and ice. The VIF demonstrates the accuracy of the multilayer formulation, while benefiting from the computational flexibility of linear formulations. In the second part, the model is implemented in a seasonal dynamic zonally averaged climate model. It is found that, in response to a change between extreme high and low summer insolation orbits, the winter orbital change dominates over the opposite summer change for sea ice. For snow over land, the shorter but more pronounced summer orbital change is shown to dominate.
First-principles definition and measurement of planetary electromagnetic-energy budget.
Mishchenko, Michael I; Lock, James A; Lacis, Andrew A; Travis, Larry D; Cairns, Brian
2016-06-01
The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this paper we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated notions of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.
Modulation of Kekulé adatom ordering due to strain in graphene
NASA Astrophysics Data System (ADS)
González-Árraga, L.; Guinea, F.; San-Jose, P.
2018-04-01
Intervalley scattering of carriers in graphene at "top" adatoms may give rise to a hidden Kekulé ordering pattern in the adatom positions. This ordering is the result of a rapid modulation in the electron-mediated interaction between adatoms at the wave vector K-K', which has been shown experimentally and theoretically to dominate their spatial distribution. Here we show that the adatom interaction is extremely sensitive to strain in the supporting graphene, which leads to a characteristic spatial modulation of the Kekulé order as a function of adatom distance. Our results suggest that the spatial distributions of adatoms could provide a way to measure the type and magnitude of strain in graphene and the associated pseudogauge field with high accuracy.
A white super-stable source for the metrology of astronomical photometers
NASA Astrophysics Data System (ADS)
Wildi, F. P.; Deline, A.; Chazelas, B.
2015-09-01
The testing of photometers, and in particular the testing of high-precision photometers for the detection of planetary transits, requires a light source whose photometric stability is on par with or better than the target stability of the photometer to be tested. In the frame of the CHEOPS mission, a comprehensive calibration bench has been developed. Aside from measuring the sensitivity of the CHEOPS payload to different environmental conditions, this bench will also be used to test the relative accuracy of the payload. A key element of this bench is an extremely stable light source that is used to create an artificial star which is then projected into the payload's telescope. We present here the development of this light source and the performance achieved.
First-principles definition and measurement of planetary electromagnetic-energy budget
NASA Astrophysics Data System (ADS)
Mishchenko, M. I.; James, L.; Lacis, A. A.; Travis, L. D.; Cairns, B.
2016-12-01
The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this talk we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated concepts of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.
First-Principles Definition and Measurement of Planetary Electromagnetic-Energy Budget
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Lock, James A.; Lacis, Andrew A.; Travis, Larry D.; Cairns, Brian
2016-01-01
The imperative to quantify the Earth's electromagnetic-energy budget with an extremely high accuracy has been widely recognized but has never been formulated in the framework of fundamental physics. In this paper we give a first-principles definition of the planetary electromagnetic-energy budget using the Poynting-vector formalism and discuss how it can, in principle, be measured. Our derivation is based on an absolute minimum of theoretical assumptions, is free of outdated notions of phenomenological radiometry, and naturally leads to the conceptual formulation of an instrument called the double hemispherical cavity radiometer (DHCR). The practical measurement of the planetary energy budget would require flying a constellation of several dozen planet-orbiting satellites hosting identical well-calibrated DHCRs.
Zhang, Mi; Wen, Xue Fa; Zhang, Lei Ming; Wang, Hui Min; Guo, Yi Wen; Yu, Gui Rui
2018-02-01
Extreme high temperature is one of the important extreme weather types that impact the forest ecosystem carbon cycle. In this study, using CO2 flux and routine meteorological data measured during 2003-2012, we examined the impacts of extreme high temperature and extreme high temperature events on net carbon uptake of a subtropical coniferous plantation in Qianyanzhou. Combining these data with wavelet analysis, we analyzed environmental controls on net carbon uptake at different temporal scales when extreme high temperature and extreme high temperature events occurred. The results showed that mean daily cumulative NEE decreased by 51% on days with daily maximum air temperature between 35 ℃ and 40 ℃, compared with days in the range 30 ℃ to 34 ℃. The effects of extreme high temperature and extreme high temperature events on monthly and annual NEE were related to the strength and duration of the event. In 2003, when a strong extreme high temperature event occurred, the sum of monthly cumulative NEE in July and August was only -11.64 g C·m⁻²·(2 month)⁻¹, a decrease of 90% compared with the multi-year average, and the relative variation of annual NEE reached -6.7%. In July and August, when extreme high temperature and extreme high temperature events occurred, air temperature (Ta) and vapor pressure deficit (VPD) were the dominant controls on the daily variation of NEE; the coherency between NEE and Ta and between NEE and VPD was 0.97 and 0.95, respectively. At 8-, 16-, and 32-day periods, Ta, VPD, soil water content at 5 cm depth (SWC), and precipitation (P) controlled NEE, with the coherency between NEE and SWC and between NEE and P higher than 0.8 at the monthly scale. The results indicated that atmospheric water deficit affected NEE at short temporal scales when extreme high temperature and extreme high temperature events occurred, while both atmospheric water deficit and soil drought stress affected NEE at longer temporal scales in this ecosystem.
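The wavelet coherency used in the study is not reproduced here; the sketch below only illustrates the kind of coherence statistic involved, using an ordinary Welch-based magnitude-squared coherence on synthetic stand-ins for daily NEE and air temperature.

```python
# Minimal sketch of estimating coherence between daily NEE and air temperature (Ta).
# The study used wavelet coherency; this simpler Welch-based magnitude-squared
# coherence only illustrates the co-variation statistic. The daily series below
# are synthetic stand-ins, not the Qianyanzhou flux data.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
days = np.arange(365)
ta = 20 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 1, days.size)   # deg C
nee = -2 - 0.3 * ta + rng.normal(0, 0.5, days.size)                           # g C m-2 d-1

# fs = 1 sample/day; nperseg sets the averaging segment length (64 days here)
freq, coh = coherence(nee, ta, fs=1.0, nperseg=64)
print("coherence near the 32-day period:", coh[np.argmin(np.abs(freq - 1 / 32))])
```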
NASA Astrophysics Data System (ADS)
Zhao, Lili; Yin, Jianping; Yuan, Lihuan; Liu, Qiang; Li, Kuan; Qiu, Minghui
2017-07-01
Automatic detection of abnormal cells from cervical smear images is in great demand for the annual diagnosis of women's cervical cancer. For this medical cell recognition problem, there are three different feature sections, namely cytology morphology, nuclear chromatin pathology and region intensity. The challenges of this problem lie in combining these features and performing classification accurately and efficiently. Thus, we propose an efficient abnormal cervical cell detection system based on a multi-instance extreme learning machine (MI-ELM) to deal with the above two questions in one unified framework. MI-ELM is one of the most promising supervised learning classifiers, able to handle several feature sections and realistic classification problems analytically. Experimental results on the Herlev dataset demonstrate that the proposed method outperforms three traditional methods for two-class classification in terms of higher accuracy and shorter runtime.
Non-numeric computation for high eccentricity orbits. [Earth satellite orbit perturbation
NASA Technical Reports Server (NTRS)
Sridharan, R.; Renard, M. L.
1975-01-01
Geocentric orbits of large eccentricity (e = 0.9 to 0.95) are significantly perturbed in cislunar space by the sun and moon. The time-history of the height of perigee, subsequent to launch, is particularly critical. The determination of 'launch windows' is mostly concerned with preventing the height of perigee from falling below its low initial value before the mission lifetime has elapsed. Between the extremes of high-accuracy digital integration of the equations of motion and using an approximate, but very fast, stability-criterion method, this paper is concerned with the development of a method of intermediate complexity using non-numeric computation. The computer is used as the theory generator to generalize Lidov's theory using six osculating elements. Symbolic integration is completely automated and the output is a set of condensed formulae well suited for repeated applications in launch window analysis. Examples of applications are given.
NASA Astrophysics Data System (ADS)
Ren, Changzhi; Li, Xiaoyan; Song, Xiaoli; Niu, Yong; Li, Aihua; Zhang, Zhenchao
2012-09-01
Direct drive technology is key to solving the motion-system requirements of future 30-m and larger telescopes, which must guarantee very high tracking accuracy in spite of unbalanced and sudden loads such as wind gusts, and in spite of a structure that, because of its size, cannot be infinitely stiff. However, this requires the design and realization of an unusually large torque motor whose torque slew rate must also be extremely steep. A conventional torque motor design appears inadequate. This paper explores a redundant-unit permanent magnet synchronous motor and its simulation bed for a 30-m class telescope. Because the drive system is a highly integrated electromechanical system, a combined electromechanical design method is adopted to improve the efficiency, reliability and quality of the system during the design and manufacturing cycle. This paper discusses the design and control of the precise tracking simulation bed in detail.
Bethel, EW; Bauer, A; Abbasi, H; ...
2016-06-10
The considerable interest in the high performance computing (HPC) community in analyzing and visualizing data without first writing to disk, i.e., in situ processing, is due to several factors. First is an I/O cost savings, where data is analyzed/visualized while being generated, without first storing to a filesystem. Second is the potential for increased accuracy, where fine temporal sampling of transient analysis might expose complex behavior missed in coarse temporal sampling. Third is the ability to use all available resources, CPUs and accelerators, in the computation of analysis products. This STAR paper brings together researchers, developers and practitioners using in situ methods in extreme-scale HPC with the goal of presenting existing methods, infrastructures, and a range of computational science and engineering applications using in situ analysis and visualization.
Subatomic deformation driven by vertical piezoelectricity from CdS ultrathin films
Wang, Xuewen; He, Xuexia; Zhu, Hongfei; Sun, Linfeng; Fu, Wei; Wang, Xingli; Hoong, Lai Chee; Wang, Hong; Zeng, Qingsheng; Zhao, Wu; Wei, Jun; Jin, Zhong; Shen, Zexiang; Liu, Jie; Zhang, Ting; Liu, Zheng
2016-01-01
Driven by the development of high-performance piezoelectric materials, actuators have become an important tool for positioning objects with high accuracy down to the nanometer scale, and have been used in a wide variety of equipment, such as atomic force microscopy and scanning tunneling microscopy. However, positioning at the subatomic scale is still a great challenge. Ultrathin piezoelectric materials may pave the way to positioning an object with extreme precision. Using ultrathin CdS thin films, we demonstrate vertical piezoelectricity at the atomic scale (three to five lattice spacings). With in situ scanning Kelvin force microscopy and single- and dual-AC resonance tracking piezoelectric force microscopy, a vertical piezoelectric coefficient (d33) of up to 33 pm·V−1 was determined for the CdS ultrathin films. These findings shed light on the design of next-generation sensors and microelectromechanical devices. PMID:27419234
Kringel, D; Ultsch, A; Zimmermann, M; Jansen, J-P; Ilias, W; Freynhagen, R; Griessinger, N; Kopf, A; Stein, C; Doehring, A; Resch, E; Lötsch, J
2017-01-01
Next-generation sequencing (NGS) provides unrestricted access to the genome, but it produces ‘big data’ exceeding in amount and complexity the classical analytical approaches. We introduce a bioinformatics-based classifying biomarker that uses emergent properties in genetics to separate pain patients requiring extremely high opioid doses from controls. Following precisely calculated selection of the 34 most informative markers in the OPRM1, OPRK1, OPRD1 and SIGMAR1 genes, pattern of genotypes belonging to either patient group could be derived using a k-nearest neighbor (kNN) classifier that provided a diagnostic accuracy of 80.6±4%. This outperformed alternative classifiers such as reportedly functional opioid receptor gene variants or complex biomarkers obtained via multiple regression or decision tree analysis. The accumulation of several genetic variants with only minor functional influences may result in a qualitative consequence affecting complex phenotypes, pointing at emergent properties in genetics. PMID:27139154
Kringel, D; Ultsch, A; Zimmermann, M; Jansen, J-P; Ilias, W; Freynhagen, R; Griessinger, N; Kopf, A; Stein, C; Doehring, A; Resch, E; Lötsch, J
2017-10-01
Next-generation sequencing (NGS) provides unrestricted access to the genome, but it produces 'big data' exceeding in amount and complexity the classical analytical approaches. We introduce a bioinformatics-based classifying biomarker that uses emergent properties in genetics to separate pain patients requiring extremely high opioid doses from controls. Following precisely calculated selection of the 34 most informative markers in the OPRM1, OPRK1, OPRD1 and SIGMAR1 genes, pattern of genotypes belonging to either patient group could be derived using a k-nearest neighbor (kNN) classifier that provided a diagnostic accuracy of 80.6±4%. This outperformed alternative classifiers such as reportedly functional opioid receptor gene variants or complex biomarkers obtained via multiple regression or decision tree analysis. The accumulation of several genetic variants with only minor functional influences may result in a qualitative consequence affecting complex phenotypes, pointing at emergent properties in genetics.
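A k-nearest-neighbor genotype classifier of the kind described can be sketched as follows; the marker matrix, labels, neighbor count and distance metric are illustrative placeholders, not the OPRM1/OPRK1/OPRD1/SIGMAR1 data or the exact settings used in the study.

```python
# Minimal sketch of a kNN classifier separating "extremely high opioid dose"
# patients from controls on a genotype matrix (rows = subjects, columns = markers
# coded 0/1/2). All data here are random placeholders, not the study's markers.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
n_subjects, n_markers = 200, 34
X = rng.integers(0, 3, size=(n_subjects, n_markers)).astype(float)
y = rng.integers(0, 2, size=n_subjects)          # 1 = high-dose patient, 0 = control

# Hamming distance counts the fraction of markers with differing genotypes
knn = KNeighborsClassifier(n_neighbors=5, metric="hamming")
acc = cross_val_score(knn, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.3f} ± {acc.std():.3f}")
```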
Comparing an FPGA to a Cell for an Image Processing Application
NASA Astrophysics Data System (ADS)
Rakvic, Ryan N.; Ngo, Hau; Broussard, Randy P.; Ives, Robert W.
2010-12-01
Modern advancements in configurable hardware, most notably Field-Programmable Gate Arrays (FPGAs), have provided an exciting opportunity to exploit the parallel nature of modern image processing algorithms. On the other hand, PlayStation3 (PS3) game consoles contain a multicore heterogeneous processor known as the Cell, which is designed to perform complex image processing algorithms with high performance. In this research project, our aim is to study the differences in performance of a modern image processing algorithm on these two hardware platforms. In particular, iris recognition systems have recently become an attractive identification method because of their extremely high accuracy. Iris matching, a repeatedly executed portion of a modern iris recognition algorithm, is parallelized on an FPGA system and a Cell processor. We demonstrate a 2.5 times speedup of the parallelized algorithm on the FPGA system when compared to a Cell processor-based version.
Field Performance of Photovoltaic Systems in the Tucson Desert
NASA Astrophysics Data System (ADS)
Orsburn, Sean; Brooks, Adria; Cormode, Daniel; Greenberg, James; Hardesty, Garrett; Lonij, Vincent; Salhab, Anas; St. Germaine, Tyler; Torres, Gabe; Cronin, Alexander
2011-10-01
At the Tucson Electric Power (TEP) solar test yard, over 20 different grid-connected photovoltaic (PV) systems are being tested. The goal at the TEP solar test yard is to measure and model real-world performance of PV systems and to benchmark new technologies such as holographic concentrators. By studying voltage and current produced by the PV systems as a function of incident irradiance, and module temperature, we can compare our measurements of field-performance (in a harsh desert environment) to manufacturer specifications (determined under laboratory conditions). In order to measure high-voltage and high-current signals, we designed and built reliable, accurate sensors that can handle extreme desert temperatures. We will present several benchmarks of sensors in a controlled environment, including shunt resistors and Hall-effect current sensors, to determine temperature drift and accuracy. Finally we will present preliminary field measurements of PV performance for several different PV technologies.
A Demonstration of GPS Landslide Monitoring Using Online Positioning User Service (OPUS)
NASA Astrophysics Data System (ADS)
Wang, G.
2011-12-01
Global Positioning System (GPS) technologies have been frequently applied to landslide study, both as a complement, and as an alternative to conventional surveying methods. However, most applications of GPS for landslide monitoring have been limited to the academic community for research purposes. High-accuracy GPS equipment has not been widely adopted by geotechnical companies and technicians. The main issue that limits the application of GPS in the practice of high-accuracy landslide monitoring is the complexity of GPS data processing. This study demonstrated an approach using the Online Positioning User Service (OPUS) (http://www.ngs.noaa.gov/OPUS) provided by the National Geodetic Survey (NGS) of the National Oceanic and Atmospheric Administration (NOAA) to process GPS data and conduct long-term landslide monitoring in the Puerto Rico and Virgin Islands region. Continuous GPS data collected at a creeping landslide site during two years were used to evaluate different scenarios for landslide surveying: continuous or campaign, long duration or short duration, morning or afternoon (different weather conditions). OPUS uses Continuously Operating Reference Stations (CORS) managed by NGS (http://www.ngs.noaa.gov/CORS/) as references and user data as a rover to solve a position. There are 19 CORS permanent GPS stations in the Puerto Rico and Virgin Islands region. The dense GPS network provides a precise and reliable reference frame for subcentimeter-accuracy landslide monitoring in this region. Our criterion for the accuracy was the root-mean-square (RMS) of OPUS solutions over a 2-year period with respect to the true landslide displacement time series over the same period. The true landslide displacements were derived from single-baseline (130 m) GPS processing using 24-hour continuous data. If continuous GPS surveying is performed in the field, then OPUS static processing can provide 0.6 cm horizontal and 1.1 cm vertical precision with few outliers. If repeated campaign-style surveying is performed in the field, then the choice of observation time window and duration is very important. In order to detect a suspected sliding mass and track the kinematics of a creeping landslide, sub-centimeter horizontal accuracy is often required. OPUS static solutions for sessions of 4 hours or longer and OPUS rapid-static solutions for sessions as short as 15 minutes can achieve accuracy at this level if data collection during extreme weather conditions, such as rainfall and storms, is avoided. This study also indicated that rainfall events can seriously degrade the performance of high-accuracy GPS. Field GPS landslide surveying should avoid periods of rainfall, which are usually accompanied by thunderstorms and the passage of weather fronts.
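The accuracy criterion described (RMS of OPUS solutions against a reference displacement series) reduces to a one-line computation; a minimal sketch, with synthetic position series standing in for the OPUS and single-baseline solutions:

```python
# Minimal sketch of the RMS accuracy criterion: root-mean-square of the differences
# between OPUS-derived positions and a reference ("true") displacement time series.
# The arrays below are synthetic placeholders, not the Puerto Rico GPS data.
import numpy as np

rng = np.random.default_rng(1)
true_east = np.cumsum(rng.normal(0.01, 0.002, 730))             # slow creep (m), daily for 2 years
opus_east = true_east + rng.normal(0.0, 0.006, true_east.size)  # OPUS scatter ~0.6 cm

rms = np.sqrt(np.mean((opus_east - true_east) ** 2))
print(f"horizontal RMS vs reference: {rms * 100:.2f} cm")
```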
Improved Personalized Recommendation Based on Causal Association Rule and Collaborative Filtering
ERIC Educational Resources Information Center
Lei, Wu; Qing, Fang; Zhou, Jin
2016-01-01
User evaluations of resources on a recommender system are usually limited, which results in an extremely sparse user rating matrix and greatly reduces the accuracy of personalized recommendation, especially for new users or new items. This paper presents a recommendation method based on rating prediction using causal association rules.…
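The abstract is truncated, so the causal-association-rule step cannot be reproduced; the sketch below shows only a conventional user-based collaborative-filtering prediction on a small sparse rating matrix, to make the sparsity problem being addressed concrete.

```python
# Minimal sketch of user-based collaborative filtering on a sparse rating matrix
# (0 = unrated). This is only the baseline the paper augments with causal
# association rules; the ratings below are toy values.
import numpy as np

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)

def predict(R, user, item):
    raters = np.where(R[:, item] > 0)[0]       # users who rated this item
    if raters.size == 0:
        return np.nan
    sims = []
    for other in raters:
        both = (R[user] > 0) & (R[other] > 0)  # co-rated items
        if both.any():
            u, v = R[user, both], R[other, both]
            sims.append(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
        else:
            sims.append(0.0)
    sims = np.array(sims)
    if sims.sum() == 0:
        return np.nan
    return sims @ R[raters, item] / sims.sum() # similarity-weighted average

print(f"predicted rating of user 0 for item 2: {predict(R, 0, 2):.2f}")
```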
Feasibility of the Precise Energy Calibration for Fast Neutron Spectrometers
NASA Astrophysics Data System (ADS)
Gaganov, V. V.; Usenko, P. L.; Kryzhanovskaja, M. A.
2017-12-01
Computational studies aimed at improving the accuracy of measurements made with neutron generators using a tritium target were carried out. A measurement design yielding an extremely narrow peak in the energy spectrum of DT neutrons was found. The presence of such a peak establishes the conditions for precise energy calibration of fast-neutron spectrometers.
Prototype of a laser guide star wavefront sensor for the Extremely Large Telescope
NASA Astrophysics Data System (ADS)
Patti, M.; Lombini, M.; Schreiber, L.; Bregoli, G.; Arcidiacono, C.; Cosentino, G.; Diolaiti, E.; Foppiani, I.
2018-06-01
The new class of large telescopes, like the future Extremely Large Telescope (ELT), are designed to work with a laser guide star (LGS) tuned to a resonance of atmospheric sodium atoms. This wavefront sensing technique presents complex issues when applied to big telescopes for many reasons, mainly linked to the finite distance of the LGS, the launching angle, tip-tilt indetermination and focus anisoplanatism. The implementation of a laboratory prototype for the LGS wavefront sensor (WFS) at the beginning of the phase study of MAORY (Multi-conjugate Adaptive Optics Relay) for ELT first light has been indispensable in investigating specific mitigation strategies for the LGS WFS issues. This paper presents the test results of the LGS WFS prototype under different working conditions. The accuracy within which the LGS images are generated on the Shack-Hartmann WFS has been cross-checked with the MAORY simulation code. The experiments show the effect of noise on centroiding precision, the impact of LGS image truncation on wavefront sensing accuracy as well as the temporal evolution of the sodium density profile and LGS image under-sampling.
Diagnosing the Nature of Land-Atmosphere Coupling: A Case Study of Dry/Wet Extremes
NASA Technical Reports Server (NTRS)
Santanello, Joseph A., Jr.; Peters-Lidard, Christa; Kennedy, Aaron D.
2012-01-01
Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of land surface and planetary boundary layer (PBL) temperature and moisture states and fluxes. In turn, these interactions regulate the strength of the connection between surface moisture and precipitation in a coupled system. To address deficiencies in numerical weather prediction and climate models due to improper treatment of L-A interactions, recent studies have focused on development of diagnostics to quantify the strength and accuracy of the land-PBL coupling at the process level. In this study, a diagnosis of the nature and impacts of local land-atmosphere coupling (LoCo) during dry and wet extreme conditions is presented using a combination of models and observations during the summers of 2006-7 in the U.S. Southern Great Plains. Specifically, the Weather Research and Forecasting (WRF) model has been coupled to NASA's Land Information System (LIS), which provides a flexible and high-resolution representation and initialization of land surface physics and states. A range of diagnostics exploring the links and feedbacks between soil moisture and precipitation are examined for the dry/wet regimes of this region, along with the behavior and accuracy of different land-PBL scheme couplings under these conditions. In addition, we examine the impact on the L-A coupling in WRF forecasts of improved specification of land surface states, anomalies, and fluxes obtained through the use of a new optimization and uncertainty module in LIS. Results demonstrate how LoCo diagnostics can be applied to coupled model components in the context of their integrated impacts on the process chain connecting the land surface to the PBL and support of hydrological anomalies.
NASA Astrophysics Data System (ADS)
Konduri, Aditya
Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs as well as their synchronization at these extreme scales take up a significant portion of the total simulation time and result in poor scalability of codes. This issue is likely to pose a bottleneck in scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that the average error always drops to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend, not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extend this method in solving complex multi-scale problems on Exascale machines.
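A minimal illustration of the relaxed-synchronization idea (not the asynchrony-tolerant schemes proposed in the work): a 1D heat-equation update in which the halo value from a neighboring subdomain may be several steps stale, mimicking a delayed message.

```python
# Minimal sketch of relaxed synchronization in a finite-difference solver:
# 1D heat equation u_t = alpha * u_xx split into two "subdomains", where the
# halo value received from the left subdomain may be delayed by a random
# number of steps. Parameters and setup are illustrative assumptions only.
import numpy as np

nx = 64
dx = 1.0 / (nx - 1)
alpha = 1.0
dt = 0.4 * dx**2 / alpha                   # stable for the synchronous scheme
steps = 200

x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)                      # initial condition, u(0) = u(1) = 0 held fixed

rng = np.random.default_rng(3)
mid = nx // 2                              # interface between the two "subdomains"
halo_history = [u[mid - 1]]                # values "sent" by the left subdomain

for _ in range(steps):
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    # the first point of the right subdomain sees a possibly stale halo value
    delay = rng.integers(0, 3)             # message is 0, 1 or 2 steps old
    stale = halo_history[max(len(halo_history) - 1 - delay, 0)]
    lap[mid] = u[mid + 1] - 2 * u[mid] + stale
    u = u + alpha * dt / dx**2 * lap
    halo_history.append(u[mid - 1])

print("max |u| after", steps, "steps:", float(np.abs(u).max()))
```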
Precision Column CO2 Measurement from Space Using Broad Band LIDAR
NASA Technical Reports Server (NTRS)
Heaps, William S.
2009-01-01
In order to better understand the budget of carbon dioxide in the Earth's atmosphere it is necessary to develop a global high precision understanding of the carbon dioxide column. To uncover the "missing sink" that is responsible for the large discrepancies in the budget as we presently understand it, calculation has indicated that a measurement accuracy of 1 ppm is necessary. Because typical column average CO2 has now reached 380 ppm, this represents a precision on the order of 0.25% for these column measurements. No species has ever been measured from space at such a precision. In recognition of the importance of understanding the CO2 budget to evaluate its impact on global warming, the National Research Council, in its decadal survey report to NASA, recommended planning for a laser-based total CO2 mapping mission in the near future. The extreme measurement accuracy requirements on this mission place very strong constraints on the laser system used for the measurement. This work presents an overview of the characteristics necessary in a laser system used to make this measurement. Consideration is given to the temperature dependence, pressure broadening, and pressure shift of the CO2 lines themselves and how these impact the laser system characteristics. We are examining the possibility of making precise measurements of atmospheric carbon dioxide using a broad band source of radiation. This means that many of the difficulties in wavelength control can be treated in the detector portion of the system rather than the laser source. It also greatly reduces the number of individual lasers required to make a measurement. Simplifications such as these are extremely desirable for systems designed to operate from space.
Ground-based telescope pointing and tracking optimization using a neural controller.
Mancini, D; Brescia, M; Schipani, P
2003-01-01
Neural network models (NN) have emerged as important components for applications of adaptive control theories. Their basic generalization capability, based on acquired knowledge, together with execution rapidity and the ability to correlate input stimuli, are basic attributes that make NNs an extremely powerful tool for the on-line control of complex systems. From a control-system point of view, not only accuracy and speed but also, in some cases, a high level of adaptation capability is required in order to match all working phases of the whole system during its lifetime. This is particularly relevant for a new-generation ground-based telescope control system. In fact, strong changes in system speed and instantaneous position error tolerance are necessary, especially in the case of trajectory disturbances induced by wind shake. The classical control scheme adopted in such systems is based on the proportional integral (PI) filter, already applied and implemented on a large number of new-generation telescopes and considered a standard in this technological environment. In this paper we introduce the concept of a new approach, the neural variable-structure proportional integral (NVSPI), related to the implementation of a standard multilayer perceptron network in new-generation ground-based Alt-Az telescope control systems. Its main purpose is to improve the adaptive capability of the variable-structure proportional integral model, an innovative control scheme recently introduced by the authors [Proc SPIE (1997)] based on a modified version of the classical PI control model, in terms of flexibility and accuracy of the dynamic response, also in the presence of wind noise effects. The realization of a powerful, well tested and validated telescope model simulation system made it possible to directly compare the performance of the two control schemes on simulated tracking trajectories, revealing extremely encouraging results in terms of NVSPI control robustness and reliability.
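For context, the classical PI tracking loop that the NVSPI scheme builds on can be sketched as follows; the gains, the simplified rate-commanded axis model and the ramp trajectory are illustrative assumptions, not the telescope model used by the authors.

```python
# Minimal sketch of a PI tracking loop: u(t) = Kp*e(t) + Ki*integral(e), driving
# a rate-commanded axis (position changes at the commanded rate). All values
# are illustrative placeholders, not the telescope dynamics discussed above.
dt, steps = 0.01, 2000
kp, ki = 8.0, 4.0
target = lambda t: 0.5 * t                 # constant-rate tracking trajectory (deg)

pos, integral = 0.0, 0.0
for k in range(steps):
    t = k * dt
    error = target(t) - pos
    integral += error * dt
    rate_cmd = kp * error + ki * integral  # PI control: commanded axis rate (deg/s)
    pos += rate_cmd * dt                   # simplified rate-commanded axis

print(f"final tracking error: {target(steps * dt) - pos:.4f} deg")
```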
Progress in Measurement of Carbon Dioxide Using a Broadband Lidar
NASA Technical Reports Server (NTRS)
Heaps, William S.
2010-01-01
In order to better understand the budget of carbon dioxide in the Earth's atmosphere it is necessary to develop a global high precision understanding of the carbon dioxide column. In order to uncover the 'missing sink' that is responsible for the large discrepancies in the budget as we presently understand it, calculation has indicated that measurement accuracy on the order of 1 ppm is necessary. Because typical column average CO2 has now reached 380 ppm, this represents a precision on the order of 0.25% for these column measurements. No species has ever been measured from space at such a precision. In recognition of the importance of understanding the CO2 budget in order to evaluate its impact on global warming, the National Research Council in its decadal survey report to NASA recommended planning for a laser-based total CO2 mapping mission in the near future. The extreme measurement accuracy requirements on this mission place very strong requirements on the laser system used for the measurement. This work presents an overview of the characteristics necessary in a laser system used to make this measurement. Consideration is given to the temperature dependence, pressure broadening, and pressure shift of the CO2 lines themselves and how these impact the laser system characteristics. We have been examining the possibility of making precise measurements of atmospheric carbon dioxide using a broadband source of radiation. This means that many of the difficulties in wavelength control can be treated in the detector portion of the system rather than the laser source. It also greatly reduces the number of individual lasers required to make a measurement. Simplifications such as these are extremely desirable for systems designed to operate from space.
Retrieval of the complex refractive index of aerosol droplets from optical tweezers measurements.
Miles, Rachael E H; Walker, Jim S; Burnham, Daniel R; Reid, Jonathan P
2012-03-07
The cavity enhanced Raman scattering spectrum recorded from an aerosol droplet provides a unique fingerprint of droplet radius and refractive index, assuming that the droplet is homogeneous in composition. Aerosol optical tweezers are used in this study to capture a single droplet, and a Raman fingerprint is recorded using the trapping laser as the source for the Raman excitation. We report here the retrieval of the real part of the refractive index with an uncertainty of ±0.0012 (better than ±0.11%), while simultaneously measuring the size of the micrometre-sized liquid droplet with a precision of better than 1 nm (< ±0.05% error). In addition, the equilibrium size of the droplet is shown to depend on the laser irradiance due to optical absorption, which elevates the droplet temperature above that of the ambient gas phase. Modulation of the illuminating laser power leads to a modulation in droplet size as the temperature elevation is altered. By measuring induced size changes of <1 nm, we show that the imaginary part of the refractive index can be retrieved even when it is less than 10 × 10⁻⁹, with an accuracy of better than ±0.5 × 10⁻⁹. The combination of these measurements allows the complex refractive index of a droplet to be retrieved with high accuracy, with the possibility of making extremely sensitive optical absorption measurements on aerosol samples and testing frequently used mixing rules for treating aerosol optical properties. More generally, this method provides an extremely sensitive approach for measuring refractive indices, particularly under solute supersaturation conditions that cannot be accessed by simple bulk-phase measurements.
NASA Astrophysics Data System (ADS)
Zhang, K.; Han, B.; Mansaray, L. R.; Xu, X.; Guo, Q.; Jingfeng, H.
2017-12-01
Synthetic aperture radar (SAR) instruments on board satellites are valuable for high-resolution wind field mapping, especially for coastal studies. Since the launch of Sentinel-1A on April 3, 2014, followed by Sentinel-1B on April 25, 2016, large amounts of C-band SAR data have been added to a growing accumulation of SAR datasets (ERS-1/2, RADARSAT-1/2, ENVISAT). These new developments are of great significance for a wide range of applications in coastal sea areas, especially for high spatial resolution wind resource assessment, in which the accuracy of retrieved wind fields is extremely crucial. Recently, it has been reported that wind speeds can also be retrieved from C-band cross-polarized SAR images, which is an important complement to wind speed retrieval from co-polarization. However, there is no consensus on the optimal resolution for wind speed retrieval from cross-polarized SAR images. This paper presents a comparison strategy for investigating the influence of spatial resolution on sea surface wind speed retrieval accuracy with cross-polarized SAR images. Firstly, for wind speeds retrieved from VV-polarized images, the optimal geophysical C-band model (CMOD) function was selected among four CMOD functions. Secondly, the most suitable C-band cross-polarized ocean (C-2PO) model was selected between two C-2POs for the VH-polarized image dataset. Then, the VH-wind speeds retrieved by the selected C-2PO were compared with the VV-polarized sea surface wind speeds retrieved using the optimal CMOD, which served as reference, at different spatial resolutions. Results show that the VH-polarized wind speed retrieval accuracy increases rapidly as the spatial resolution coarsens from 100 m to 1000 m, with a drop in RMSE of 42%. However, the improvement in wind speed retrieval accuracy levels off as the spatial resolution coarsens from 1000 m to 5000 m. This demonstrates that a pixel spacing of 1 km may be a reasonable compromise between spatial resolution and wind speed retrieval accuracy with cross-polarized images obtained from the RADARSAT-2 fine quad polarization mode. Fig. 1 illustrates the variation of the following statistical parameters as a function of spatial resolution: bias, correlation (Corr), R2, RMSE and STD.
A laboratory evaluation of the influence of weighing gauges performance on extreme events statistics
NASA Astrophysics Data System (ADS)
Colli, Matteo; Lanza, Luca
2014-05-01
The effects of inaccurate ground-based rainfall measurements on the information derived from rain records are not yet well documented in the literature. La Barbera et al. (2002) investigated the propagation of the systematic mechanical errors of tipping bucket rain gauges (TBR) into the most common statistics of rainfall extremes, e.g. in the assessment of the return period T (or the related non-exceedance probability) of short-duration/high-intensity events. Colli et al. (2012) and Lanza et al. (2012) extended the analysis to a 22-year long precipitation data set obtained from a virtual weighing-type gauge (WG). The artificial WG time series was obtained based on real precipitation data measured at the meteo-station of the University of Genova, modelling the weighing gauge output as a linear dynamic system. This approximation was previously validated with dedicated laboratory experiments and is based on the evidence that the accuracy of WG measurements under real-world/time-varying rainfall conditions is mainly affected by the dynamic response of the gauge (as revealed during the last WMO Field Intercomparison of Rainfall Intensity Gauges). The investigation is now completed by analyzing actual measurements performed by two common weighing gauges, the OTT Pluvio2 load-cell gauge and the GEONOR T-200 vibrating-wire gauge, since both instruments demonstrated very good performance in previous constant flow rate calibration efforts. A laboratory dynamic rainfall generation system has been arranged and validated in order to simulate a number of precipitation events with variable reference intensities. These artificial events were generated based on real-world rainfall intensity (RI) records obtained from the meteo-station of the University of Genova, so that the statistical structure of the time series is preserved. The influence of the WG RI measurement accuracy on the associated extreme events statistics is analyzed by comparing the original intensity-duration-frequency (IDF) curves with those obtained from measurements of the simulated rain events. References: Colli, M., L.G. Lanza, and P. La Barbera (2012). Weighing gauges measurement errors and the design rainfall for urban scale applications, 9th International Workshop on Precipitation in Urban Areas, 6-9 December 2012, St. Moritz, Switzerland. Lanza, L.G., M. Colli, and P. La Barbera (2012). On the influence of rain gauge performance on extreme events statistics: the case of weighing gauges, EGU General Assembly 2012, April 22, Wien, Austria. La Barbera, P., L.G. Lanza, and L. Stagi (2002). Influence of systematic mechanical errors of tipping-bucket rain gauges on the statistics of rainfall extremes. Water Sci. Techn., 45(2), 1-9.
NASA Astrophysics Data System (ADS)
Vansteenkiste, Thomas; Tavakoli, Mohsen; Ntegeka, Victor; De Smedt, Florimond; Batelaan, Okke; Pereira, Fernando; Willems, Patrick
2014-11-01
The objective of this paper is to investigate the effects of hydrological model structure and calibration on climate change impact results in hydrology. The uncertainty in the hydrological impact results is assessed by the relative change in runoff volumes and peak and low flow extremes from historical and future climate conditions. The effect of the hydrological model structure is examined through the use of five hydrological models with different spatial resolutions and process descriptions. These were applied to a medium sized catchment in Belgium. The models vary from the lumped conceptual NAM, PDM and VHM models over the intermediate detailed and distributed WetSpa model to the fully distributed MIKE SHE model. The latter model accounts for the 3D groundwater processes and interacts bi-directionally with a full hydrodynamic MIKE 11 river model. After careful and manual calibration of these models, accounting for the accuracy of the peak and low flow extremes and runoff subflows, and the changes in these extremes for changing rainfall conditions, the five models respond in a similar way to the climate scenarios over Belgium. Future projections on peak flows are highly uncertain with expected increases as well as decreases depending on the climate scenario. The projections on future low flows are more uniform; low flows decrease (up to 60%) for all models and for all climate scenarios. However, the uncertainties in the impact projections are high, mainly in the dry season. With respect to the model structural uncertainty, the PDM model simulates significantly higher runoff peak flows under future wet scenarios, which is explained by its specific model structure. For the low flow extremes, the MIKE SHE model projects significantly lower low flows in dry scenario conditions in comparison to the other models, probably due to its large difference in process descriptions for the groundwater component, the groundwater-river interactions. The effect of the model calibration was tested by comparing the manual calibration approach with automatic calibrations of the VHM model based on different objective functions. The calibration approach did not significantly alter the model results for peak flow, but the low flow projections were again highly influenced. Model choice as well as calibration strategy hence have a critical impact on low flows, more than on peak flows. These results highlight the high uncertainty in low flow modelling, especially in a climate change context.
An Extreme Learning Machine-Based Neuromorphic Tactile Sensing System for Texture Recognition.
Rasouli, Mahdi; Chen, Yi; Basu, Arindam; Kukreja, Sunil L; Thakor, Nitish V
2018-04-01
Despite significant advances in computational algorithms and development of tactile sensors, artificial tactile sensing is strikingly less efficient and capable than the human tactile perception. Inspired by efficiency of biological systems, we aim to develop a neuromorphic system for tactile pattern recognition. We particularly target texture recognition as it is one of the most necessary and challenging tasks for artificial sensory systems. Our system consists of a piezoresistive fabric material as the sensor to emulate skin, an interface that produces spike patterns to mimic neural signals from mechanoreceptors, and an extreme learning machine (ELM) chip to analyze spiking activity. Benefiting from intrinsic advantages of biologically inspired event-driven systems and massively parallel and energy-efficient processing capabilities of the ELM chip, the proposed architecture offers a fast and energy-efficient alternative for processing tactile information. Moreover, it provides the opportunity for the development of low-cost tactile modules for large-area applications by integration of sensors and processing circuits. We demonstrate the recognition capability of our system in a texture discrimination task, where it achieves a classification accuracy of 92% for categorization of ten graded textures. Our results confirm that there exists a tradeoff between response time and classification accuracy (and information transfer rate). A faster decision can be achieved at early time steps or by using a shorter time window. This, however, results in deterioration of the classification accuracy and information transfer rate. We further observe that there exists a tradeoff between the classification accuracy and the input spike rate (and thus energy consumption). Our work substantiates the importance of development of efficient sparse codes for encoding sensory data to improve the energy efficiency. These results have a significance for a wide range of wearable, robotic, prosthetic, and industrial applications.
Black, Bryan A; Griffin, Daniel; van der Sleen, Peter; Wanamaker, Alan D; Speer, James H; Frank, David C; Stahle, David W; Pederson, Neil; Copenheaver, Carolyn A; Trouet, Valerie; Griffin, Shelly; Gillanders, Bronwyn M
2016-07-01
High-resolution biogenic and geologic proxies in which one increment or layer is formed per year are crucial to describing natural ranges of environmental variability in Earth's physical and biological systems. However, dating controls are necessary to ensure temporal precision and accuracy; simple counts cannot ensure that all layers are placed correctly in time. Originally developed for tree-ring data, crossdating is the only such procedure that ensures all increments have been assigned the correct calendar year of formation. Here, we use growth-increment data from two tree species, two marine bivalve species, and a marine fish species to illustrate sensitivity of environmental signals to modest dating error rates. When falsely added or missed increments are induced at one and five percent rates, errors propagate back through time and eliminate high-frequency variability, climate signals, and evidence of extreme events while incorrectly dating and distorting major disturbances or other low-frequency processes. Our consecutive Monte Carlo experiments show that inaccuracies begin to accumulate in as little as two decades and can remove all but decadal-scale processes after as little as two centuries. Real-world scenarios may have even greater consequence in the absence of crossdating. Given this sensitivity to signal loss, the fundamental tenets of crossdating must be applied to fully resolve environmental signals, a point we underscore as the frontiers of growth-increment analysis continue to expand into tropical, freshwater, and marine environments. © 2016 John Wiley & Sons Ltd.
Towards a monitoring system of temperature extremes in Europe
NASA Astrophysics Data System (ADS)
Lavaysse, Christophe; Cammalleri, Carmelo; Dosio, Alessandro; van der Schrier, Gerard; Toreti, Andrea; Vogt, Jürgen
2018-01-01
Extreme-temperature anomalies such as heat and cold waves may have strong impacts on human activities and health. The heat waves in western Europe in 2003 and in Russia in 2010, or the cold wave in southeastern Europe in 2012, generated a considerable amount of economic loss and resulted in the death of several thousands of people. Providing an operational system to monitor extreme-temperature anomalies in Europe is thus of prime importance to help decision makers and emergency services to be responsive to an unfolding extreme event. In this study, the development and the validation of a monitoring system of extreme-temperature anomalies are presented. The first part of the study describes the methodology based on the persistence of events exceeding a percentile threshold. The method is applied to three different observational datasets, in order to assess the robustness and highlight uncertainties in the observations. The climatology of extreme events from the last 21 years is then analysed to highlight the spatial and temporal variability of the hazard, and discrepancies amongst the observational datasets are discussed. In the last part of the study, the products derived from this study are presented and discussed with respect to previous studies. The results highlight the accuracy of the developed index and the statistical robustness of the distribution used to calculate the return periods.
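The detection principle described (persistence of days exceeding a percentile threshold) can be sketched as follows; the 90th-percentile threshold, the three-day minimum duration and the synthetic temperature series are illustrative assumptions, not the exact configuration of the monitoring system (which would typically use a day-of-year percentile climatology).

```python
# Minimal sketch of percentile-threshold heat-wave detection: flag runs of at
# least `min_days` consecutive days with daily maximum temperature above the
# series' 90th percentile. Threshold, duration and data are illustrative only.
import numpy as np

rng = np.random.default_rng(7)
days = np.arange(365 * 21)                                    # 21 years, daily
tmax = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)

threshold = np.percentile(tmax, 90)
hot = tmax > threshold
min_days = 3

events = []
start = None
for i, flag in enumerate(hot):
    if flag and start is None:
        start = i                                             # run begins
    elif not flag and start is not None:
        if i - start >= min_days:
            events.append((start, i - 1))                     # run long enough
        start = None
if start is not None and len(hot) - start >= min_days:
    events.append((start, len(hot) - 1))

print(f"threshold = {threshold:.1f} °C, heat-wave events detected: {len(events)}")
```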
Modelling Precipitation and Temperature Extremes: The Importance of Horizontal Resolution
NASA Astrophysics Data System (ADS)
Shields, C. A.; Kiehl, J. T.; Meehl, G. A.
2013-12-01
Understanding Earth's water cycle on a warming planet is of critical importance in society's ability to adapt to climate change. Extreme weather events, such as floods, heat waves, and drought will likely change with the water cycle as greenhouse gases continue to rise. Location, duration, and intensity of extreme events can be studied using complex earth system models. Here, we employ the fully coupled Community Earth System Model (CESM1.0) to evaluate extreme event impacts for different possible future forcing scenarios. Simulations applying the Representative Concentration Pathway (RCP) scenarios 2.6 and 8.5 were chosen to bracket the range of model responses. Because extreme weather events happen on a regional scale, there is a tendency to favor using higher resolution models, i.e. models that can represent regional features with greater accuracy. Within the CESM1.0 framework, we evaluate both the standard 1 degree resolution (1 degree atmosphere/land coupled to 1 degree ocean/sea ice), and the higher 0.5 degree resolution version (0.5 degree atmosphere/land coupled to 1 degree ocean/sea ice), focusing on extreme precipitation events, heat waves, and droughts. We analyze a variety of geographical regions, but generally find that benefits from increased horizontal resolution are most significant on the regional scale.
SU-E-J-191: Motion Prediction Using Extreme Learning Machine in Image Guided Radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jia, J; Cao, R; Pei, X
Purpose: Real-time motion tracking is a critical issue in image guided radiotherapy due to the time latency caused by image processing and system response. It is therefore necessary to quickly and accurately predict the future position of the respiratory motion and the tumor location. Methods: The prediction of respiratory position was based on the positioning and tracking module in the ARTS-IGRT system, which was developed by the FDS Team (www.fds.org.cn). An approach involving the extreme learning machine (ELM) was adopted to predict the future respiratory position as well as the tumor's location by training on past trajectories. For the training process, a feed-forward neural network with a single hidden layer was used. First, the number of hidden nodes was determined for the single-hidden-layer feed-forward network (SLFN). Then the input weights and hidden layer biases of the SLFN were randomly assigned to calculate the hidden neuron output matrix. Finally, the predicted movement was obtained by applying the output weights and compared with the actual movement. Breathing movement acquired from external infrared markers was used to test the prediction accuracy, and implanted marker movement for prostate cancer was used to test the implementation of the tumor motion prediction. Results: The agreement between the predicted motion and the actual motion was tested for five volunteers with different breathing patterns. The average prediction time was 0.281 s. The standard deviation of prediction accuracy was 0.002 for the respiratory motion and 0.001 for the tumor motion. Conclusion: The extreme learning machine method can provide an accurate and fast prediction of the respiratory motion and the tumor location and therefore can meet the requirements of real-time tumor tracking in image guided radiotherapy.
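The ELM training steps listed above (fix the number of hidden nodes, randomly assign input weights and biases, form the hidden-layer output matrix, solve for the output weights) can be sketched for a generic one-step-ahead prediction; the breathing trace, window length and node count below are placeholders, not the ARTS-IGRT data.

```python
# Minimal sketch of extreme-learning-machine (ELM) prediction with a single
# hidden layer: random input weights/biases, hidden output matrix H, and output
# weights solved in closed form via a pseudo-inverse. Trace, window and node
# count are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.1)                    # 60 s of "breathing" sampled at 10 Hz
breathing = np.sin(2 * np.pi * t / 4.0) + 0.05 * rng.normal(size=t.size)

window, n_hidden = 20, 50
X = np.array([breathing[i:i + window] for i in range(breathing.size - window)])
y = breathing[window:]                       # next sample to predict

W = rng.normal(size=(window, n_hidden))      # random input weights (kept fixed)
b = rng.normal(size=n_hidden)                # random hidden biases (kept fixed)
H = np.tanh(X @ W + b)                       # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                 # output weights, closed-form solution

pred = H @ beta
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"training RMSE of one-step-ahead prediction: {rmse:.4f}")
```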
You, Zhu-Hong; Lei, Ying-Ke; Zhu, Lin; Xia, Junfeng; Wang, Bing
2013-01-01
Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although large amounts of PPI data for different species have been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. Focusing on dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. The ensembling of extreme learning machines removes the dependence of the results on the initial random weights and improves the prediction performance. When performed on the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art Support Vector Machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. In addition, PCA-EELM is faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance in less time.
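A compact sketch of the pipeline described (encoded feature vectors, PCA dimension reduction, several independently initialized extreme learning machines, majority voting); all data and dimensions are random placeholders, and the protein-sequence encoding step is omitted.

```python
# Minimal sketch of a PCA + ensemble-ELM classifier with majority voting.
# Feature vectors, dimensions and labels are random placeholders; the protein
# sequence encoding used in the paper is not reproduced here.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 400))             # stand-in for encoded protein-pair features
y = rng.integers(0, 2, size=1000)            # 1 = interacting pair, 0 = non-interacting

Z = PCA(n_components=50).fit_transform(X)    # dimension reduction

def elm_ensemble_vote(Z, y, n_models=9, n_hidden=100, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros((n_models, Z.shape[0]))
    for m in range(n_models):
        W = rng.normal(size=(Z.shape[1], n_hidden))   # random input weights
        b = rng.normal(size=n_hidden)                  # random hidden biases
        H = np.tanh(Z @ W + b)
        beta = np.linalg.pinv(H) @ y                   # least-squares output weights
        votes[m] = (H @ beta > 0.5).astype(int)
    return (votes.mean(axis=0) > 0.5).astype(int)      # majority vote

pred = elm_ensemble_vote(Z, y)
print("training accuracy of the ensemble:", float((pred == y).mean()))
```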
Deposition and characterization of B4C/CeO2 multilayers at 6.x nm extreme ultraviolet wavelengths
NASA Astrophysics Data System (ADS)
Sertsu, M. G.; Giglia, A.; Brose, S.; Park, D.; Wang, Z. S.; Mayer, J.; Juschkin, L.; Nicolosi, P.
2016-03-01
New multilayers of the boron carbide/cerium dioxide (B4C/CeO2) combination on a silicon (Si) substrate are manufactured as reflective-optics candidates for future lithography at 6.x nm wavelength. This is one of only a few attempts to make multilayers of this kind. The combination of several innovative experiments enables a detailed study of the optical properties, structural properties, and interface profiles of the multilayers, in order to open up room for further optimization of the manufacturing process. The interface profile is visualized by high-angle annular dark-field imaging, which provides contrast highly sensitive to atomic number. Synchrotron-based at-wavelength extreme ultraviolet (EUV) reflectance measurements near the boron (B) absorption edge allow derivation of optical parameters with high sensitivity to local atom interactions. X-ray reflectivity measurements at Cu-Kα (8 keV) determine the period of the multilayers with high in-depth resolution. By combining these measurements and choosing robust nonlinear curve fitting algorithms, the accuracy of the results has been significantly improved, enabling a comprehensive characterization of the multilayers. Interface diffusion is determined to be a major cause of the low reflectivity performance. Optical constants of the B4C and CeO2 layers are derived at EUV wavelengths. In addition, the optical properties and asymmetric thicknesses of inter-diffusion layers (interlayers) at EUV wavelengths near the boron edge are determined. Finally, the ideal reflectivity of the B4C/CeO2 combination is calculated using optical constants derived from the proposed measurements in order to evaluate the potential of the design.
Status of prototype of SG-III high-power solid-state laser
NASA Astrophysics Data System (ADS)
Yu, Haiwu; Jing, Feng; Wei, Xiaofeng; Zheng, Wanguo; Zhang, Xiaomin; Sui, Zhan; Li, Mingzhong; Hu, Dongxia; He, Shaobo; Peng, Zhitao; Feng, Bin; Zhou, Hai; Guo, Liangfu; Li, Xiaoqun; Su, Jingqin; Zhao, Runchang; Yang, Dong; Zheng, Kuixing; Yuan, Xiaodong
2008-10-01
We are currently developing a large-aperture neodymium-glass based high-power solid-state laser, Shenguang-III (SG-III), which will be used to provide extreme conditions for high-energy-density physics experiments in China. As a baseline design, SG-III will be composed of 48 beams arranged in 6 bundles, with each beam aperture of 40 cm × 40 cm. A prototype of SG-III, the Technical Integration experimental Line (TIL), was developed starting in 2000 and completed in 2007. TIL is composed of 8 beams (four vertically by two horizontally), each with a square aperture of 30 cm × 30 cm. After frequency tripling, TIL has delivered about 10 kJ at 0.351 μm with a 1 ns pulse width. As an operational laser facility, TIL has a beam divergence of 70 μrad (focal length of 2.2 m, i.e., 30 DL) and a pointing accuracy of 30 μm (RMS), and meets the requirements of the physics experiments.
Application of short-data methods on extreme surge levels
NASA Astrophysics Data System (ADS)
Feng, X.
2014-12-01
Tropical cyclone-induced storm surges are among the most destructive natural hazards that impact the United States. Unfortunately for academic research, the available time series for extreme surge analysis are very short. The limited data introduce uncertainty and affect the accuracy of statistical analyses of extreme surge levels. This study deals with techniques applicable to data sets shorter than 20 years, including simulation modelling and methods based on the parameters of the parent distribution. The verified water levels from water gauges spread along the Southwest and Southeast Florida Coast, as well as the Florida Keys, are used in this study. Methods to calculate extreme storm surges are described and reviewed, including 'classical' methods based on the generalized extreme value (GEV) distribution and the generalized Pareto distribution (GPD), and approaches designed specifically to deal with short data sets. Incorporating the influence of global warming, the statistical analysis reveals enhanced extreme surge magnitudes and frequencies during warm years, while reduced levels of extreme surge activity are observed in the same study domain during cold years. Furthermore, a non-stationary GEV distribution is applied to predict the extreme surge levels under warming sea surface temperatures. The non-stationary GEV distribution indicates that with 1 degree Celsius of warming in sea surface temperature from the baseline climate, the 100-year return surge level in Southwest and Southeast Florida will increase by up to 40 centimeters. The considered statistical approaches for extreme surge estimation based on short data sets will be valuable to coastal stakeholders, including urban planners, emergency managers, and hurricane and storm surge forecasting and warning systems.
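The 'classical' GEV approach mentioned amounts to fitting block maxima and reading off a quantile; a minimal sketch using scipy, with synthetic annual surge maxima standing in for the Florida gauge records (note that scipy's shape parameter c corresponds to −ξ in the usual GEV convention):

```python
# Minimal sketch of the "classical" extreme-value approach: fit a GEV to annual
# maximum surge levels and compute the 100-year return level as the quantile
# with annual exceedance probability 1/100. The surge maxima are synthetic
# placeholders, not the Florida tide-gauge records.
from scipy.stats import genextreme

# 40 synthetic "annual maximum" surge levels (metres)
annual_max_surge = genextreme.rvs(c=-0.1, loc=1.0, scale=0.3, size=40, random_state=11)

c, loc, scale = genextreme.fit(annual_max_surge)          # maximum-likelihood fit
z100 = genextreme.ppf(1 - 1 / 100, c, loc=loc, scale=scale)
print(f"estimated 100-year return surge level: {z100:.2f} m")
```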
Lower extremity EMG-driven modeling of walking with automated adjustment of musculoskeletal geometry
Meyer, Andrew J.; Patten, Carolynn
2017-01-01
Neuromusculoskeletal disorders affecting walking ability are often difficult to manage, in part due to limited understanding of how a patient’s lower extremity muscle excitations contribute to the patient’s lower extremity joint moments. To assist in the study of these disorders, researchers have developed electromyography (EMG) driven neuromusculoskeletal models utilizing scaled generic musculoskeletal geometry. While these models can predict individual muscle contributions to lower extremity joint moments during walking, the accuracy of the predictions can be hindered by errors in the scaled geometry. This study presents a novel EMG-driven modeling method that automatically adjusts surrogate representations of the patient’s musculoskeletal geometry to improve prediction of lower extremity joint moments during walking. In addition to commonly adjusted neuromusculoskeletal model parameters, the proposed method adjusts model parameters defining muscle-tendon lengths, velocities, and moment arms. We evaluated our EMG-driven modeling method using data collected from a high-functioning hemiparetic subject walking on an instrumented treadmill at speeds ranging from 0.4 to 0.8 m/s. EMG-driven model parameter values were calibrated to match inverse dynamic moments for five degrees of freedom in each leg while keeping musculoskeletal geometry close to that of an initial scaled musculoskeletal model. We found that our EMG-driven modeling method incorporating automated adjustment of musculoskeletal geometry predicted net joint moments during walking more accurately than did the same method without geometric adjustments. Geometric adjustments improved moment prediction errors by 25% on average and up to 52%, with the largest improvements occurring at the hip. Predicted adjustments to musculoskeletal geometry were comparable to errors reported in the literature between scaled generic geometric models and measurements made from imaging data. Our results demonstrate that with appropriate experimental data, joint moment predictions for walking generated by an EMG-driven model can be improved significantly when automated adjustment of musculoskeletal geometry is included in the model calibration process. PMID:28700708
Measurement Properties of Instruments for Measuring of Lymphedema: Systematic Review.
Hidding, Janine T; Viehoff, Peter B; Beurskens, Carien H G; van Laarhoven, Hanneke W M; Nijhuis-van der Sanden, Maria W G; van der Wees, Philip J
2016-12-01
Lymphedema is a common complication of cancer treatment, resulting in swelling and subjective symptoms. Reliable and valid measurement of this side effect of medical treatment is important. The purpose of this study was to provide best evidence regarding which measurement instruments are most appropriate in measuring lymphedema in its different stages. The PubMed and Web of Science databases were used, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Clinical studies on measurement instruments assessing lymphedema were reviewed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) scoring instrument for quality assessment. Data on reliability, concurrent validity, convergent validity, sensitivity, specificity, applicability, and costs were extracted. Pooled data showed good intrarater intraclass correlation coefficients (ICCs) (.89) for bioimpedance spectroscopy (BIS) in the lower extremities and high intrarater and interrater ICCs for water volumetry, tape measurement, and perometry (.98-.99) in the upper extremities. In the upper extremities, the standard error of measurement was 3.6% (σ=0.7%) for water volumetry, 5.6% (σ=2.1%) for perometry, and 6.6% (σ=2.6%) for tape measurement. Sensitivity of tape measurement in the upper extremities, using different cutoff points, varied from 0.73 to 0.90, and specificity values varied from 0.72 to 0.78. No uniform definition of lymphedema was available, and a gold standard as a reference test was lacking. Items concerning risk of bias were study design, patient selection, description of lymphedema, blinding of test outcomes, and number of included participants. Measurement instruments with evidence for good reliability and validity were BIS, water volumetry, tape measurement, and perometry, where BIS can detect alterations in extracellular fluid in stage 1 lymphedema and the other measurement instruments can detect alterations in volume starting from stage 2. In research, water volumetry is indicated as a reference test for measuring lymphedema in the upper extremities. © 2016 American Physical Therapy Association.
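For reference, the sensitivity and specificity figures quoted above follow directly from a 2×2 table of the index measurement against the reference classification; a minimal sketch with hypothetical counts:

```python
# Minimal sketch of the diagnostic-accuracy quantities reported above:
# sensitivity and specificity from a 2x2 table of a measurement instrument
# against a reference classification of lymphedema. Counts are illustrative only.
tp, fn, fp, tn = 73, 27, 22, 78        # hypothetical counts per 200 limbs

sensitivity = tp / (tp + fn)           # proportion of true lymphedema detected
specificity = tn / (tn + fp)           # proportion of unaffected limbs correctly cleared
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```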
Masini, Brendan D; Waterman, Scott M; Wenke, Joseph C; Owens, Brett D; Hsu, Joseph R; Ficke, James R
2009-04-01
Injuries are common during combat operations. The high costs of extremity injuries both in resource utilization and disability are well known in the civilian sector. We hypothesized that, similarly, combat-related extremity injuries, when compared with other injuries from the current conflicts in Iraq and Afghanistan, require the largest percentage of medical resources, account for the greatest number of disabled soldiers, and have greater costs of disability benefits. Descriptive epidemiologic study and cost analysis. The Department of Defense Medical Metrics (M2) database was queried for the hospital admissions and billing data of a previously published cohort of soldiers injured in Iraq and Afghanistan between October 2001 and January 2005 and identified from the Joint Theater Trauma Registry. The US Army Physical Disability Administration database was also queried for Physical Evaluation Board outcomes for these soldiers, allowing calculation of disability benefit cost. Primary body region injured was assigned using billing records that gave a primary diagnosis International Classification of Diseases Ninth Edition code, which was corroborated with Joint Theater Trauma Registry injury mechanisms and descriptions for accuracy. A total of 1333 soldiers had complete admission data and were included, drawn from the 1566 battle-injured soldiers not returned to duty among 3102 total casualties. Extremity-injured patients had the longest average inpatient stay at 10.7 days, accounting for 65% of the $65.3-million total inpatient resource utilization, 64% of the 464 patients found "unfit for duty," and 64% of the $170-million total projected disability benefit costs. Extrapolation of these data yields total disability costs for this conflict approaching $2 billion. Combat-related extremity injuries require the greatest utilization of resources for inpatient treatment in the initial postinjury period, cause the greatest number of disabled soldiers, and have the greatest projected disability benefit costs. This study highlights the need for continued or increased funding and support for military orthopaedic surgeons and extremity trauma research efforts.
Wichmann, Julian L; Gillott, Matthew R; De Cecco, Carlo N; Mangold, Stefanie; Varga-Szemes, Akos; Yamada, Ricardo; Otani, Katharina; Canstein, Christian; Fuller, Stephen R; Vogl, Thomas J; Todoran, Thomas M; Schoepf, U Joseph
2016-02-01
The aim of this study was to evaluate the impact of a noise-optimized virtual monochromatic imaging algorithm (VMI+) on image quality and diagnostic accuracy at dual-energy computed tomography angiography (CTA) of the lower extremity runoff. This retrospective Health Insurance Portability and Accountability Act-compliant study was approved by the local institutional review board. We evaluated dual-energy CTA studies of the lower extremity runoff in 48 patients (16 women; mean age, 63.3 ± 13.8 years) performed on a third-generation dual-source CT system. Images were reconstructed with standard linear blending (F_0.5), VMI+, and traditional monochromatic (VMI) algorithms at 40 to 120 keV in 10-keV intervals. Vascular attenuation and image noise in 18 artery segments were measured; signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Five-point scales were used to subjectively evaluate vascular attenuation and image noise. In a subgroup of 21 patients who underwent additional invasive catheter angiography, diagnostic accuracy for the detection of significant stenosis (≥50% lumen restriction) of F_0.5, 50-keV VMI+, and 60-keV VMI data sets was assessed. Objective image quality metrics were highest in the 40- and 50-keV VMI+ series (SNR: 20.2 ± 10.7 and 19.0 ± 9.5, respectively; CNR: 18.5 ± 10.3 and 16.8 ± 9.1, respectively) and were significantly (all P < 0.001) higher than in the corresponding VMI data sets (SNR: 8.7 ± 4.1 and 10.8 ± 5.0; CNR: 8.0 ± 4.0 and 9.6 ± 4.9) and F_0.5 series (SNR: 10.7 ± 4.4; CNR: 8.3 ± 4.1). Subjective assessment of attenuation was highest in the 40- and 50-keV VMI and VMI+ image series (range, 4.84-4.91), superior to F_0.5 (4.07; P < 0.001). Corresponding subjective noise assessment was superior for 50-keV VMI+ (4.71; all P < 0.001) compared with VMI (2.60) and F_0.5 (4.11). Sensitivity and specificity for detection of 50% or greater stenoses were highest in VMI+ reconstructions (92% and 95%, respectively), significantly higher compared with standard F_0.5 (87% and 90%; both P ≤ 0.02). Image reconstruction using low-kiloelectron volt VMI+ improves image quality and diagnostic accuracy compared with traditional VMI technique and standard linear blending for evaluation of the lower extremity runoff using dual-energy CTA.
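The SNR and CNR figures reported above follow the usual region-of-interest definitions; a minimal sketch with toy numbers (not the study's measurements):

```python
# Illustrative only: conventional ROI-based SNR and CNR for CT angiography.
# HU values and noise are toy numbers, not measurements from the study.
vessel_hu = 450.0        # mean attenuation in an arterial ROI (HU)
background_hu = 55.0     # mean attenuation in adjacent muscle (HU)
noise_sd = 22.0          # SD of attenuation in a homogeneous ROI (HU)

snr = vessel_hu / noise_sd
cnr = (vessel_hu - background_hu) / noise_sd
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```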
Assessment of the cPAS-based BGISEQ-500 platform for metagenomic sequencing.
Fang, Chao; Zhong, Huanzi; Lin, Yuxiang; Chen, Bing; Han, Mo; Ren, Huahui; Lu, Haorong; Luber, Jacob M; Xia, Min; Li, Wangsheng; Stein, Shayna; Xu, Xun; Zhang, Wenwei; Drmanac, Radoje; Wang, Jian; Yang, Huanming; Hammarström, Lennart; Kostic, Aleksandar D; Kristiansen, Karsten; Li, Junhua
2018-03-01
More extensive use of metagenomic shotgun sequencing in microbiome research relies on the development of high-throughput, cost-effective sequencing. Here we present a comprehensive evaluation of the performance of the new high-throughput sequencing platform BGISEQ-500 for metagenomic shotgun sequencing and compare its performance with that of 2 Illumina platforms. Using fecal samples from 20 healthy individuals, we evaluated the intra-platform reproducibility for metagenomic sequencing on the BGISEQ-500 platform in a setup comprising 8 library replicates and 8 sequencing replicates. Cross-platform consistency was evaluated by comparing 20 pairwise replicates on the BGISEQ-500 platform vs the Illumina HiSeq 2000 platform and the Illumina HiSeq 4000 platform. In addition, we compared the performance of the 2 Illumina platforms against each other. By a newly developed overall accuracy quality control method, an average of 82.45 million high-quality reads (96.06% of raw reads) per sample, with 90.56% of bases scoring Q30 and above, was obtained using the BGISEQ-500 platform. Quantitative analyses revealed extremely high reproducibility between BGISEQ-500 intra-platform replicates. Cross-platform replicates differed slightly more than intra-platform replicates, yet a high consistency was observed. Only a low percentage (2.02%-3.25%) of genes exhibited significant differences in relative abundance comparing the BGISEQ-500 and HiSeq platforms, with a bias toward genes with higher GC content being enriched on the HiSeq platforms. Our study provides the first set of performance metrics for human gut metagenomic sequencing data using BGISEQ-500. The high accuracy and technical reproducibility confirm the applicability of the new platform for metagenomic studies, though caution is still warranted when combining metagenomic data from different platforms.
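Metrics such as the fraction of bases at Q30 or above can be computed directly from FASTQ quality strings; a minimal sketch assuming Phred+33 encoding, independent of the authors' quality-control pipeline (the file name is hypothetical):

```python
# Illustrative only: fraction of bases with Phred quality >= 30 in a FASTQ file
# (Phred+33 encoding assumed). Not the authors' QC pipeline.
import gzip

def q30_fraction(fastq_gz_path):
    total = q30 = 0
    with gzip.open(fastq_gz_path, "rt") as handle:
        for i, line in enumerate(handle):
            if i % 4 == 3:                      # quality line of each 4-line record
                quals = [ord(c) - 33 for c in line.strip()]
                total += len(quals)
                q30 += sum(q >= 30 for q in quals)
    return q30 / total if total else 0.0

# print(q30_fraction("sample.fastq.gz"))        # hypothetical file name
```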
Astrophysics on the Edge: New Instrumental Developments at the ING
NASA Astrophysics Data System (ADS)
Santander-García, M.; Rodríguez-Gil, P.; Tulloch, S.; Rutten, R. G. M.
Present and future key instruments at the Isaac Newton Group of Telescopes (ING) are introduced, and their corresponding latest scientific highlights are presented. GLAS (Ground-layer Laser Adaptive optics System): The recently installed 515 nm laser, mounted on the WHT (William Herschel Telescope), produces a bright artificial star at a height of 15 km. This enables almost full-sky access to Adaptive Optics observations. Recent commissioning observations with the NAOMI+GLAS system showed that very significant improvement in image quality can be obtained, e.g. down to 0.16 arcsec in the H band. QUCAM2 and QUCAM3: Two Low Light Level (L3) CCD cameras for fast or faint-object spectroscopy with the twin-armed ISIS spectrograph at the WHT. Their use opens a new window of high time-frequency observations, as well as access to fainter objects. They are powerful instruments for research on compact objects such as white dwarfs, neutron stars or black holes, stellar pulsations, and compact binaries. HARPS-NEF (High-Accuracy Radial-velocity Planet Searcher of the New Earths Facility): An extremely stable, high-resolution (R ≈ 120,000) spectrograph for the WHT which is being constructed for commissioning in 2009-2010. Its radial velocity stability of < 1 m s⁻¹ may in the future be even further improved by using a Fabry-Perot laser-comb, a wavelength calibration unit capable of achieving an accuracy of 1 cm s⁻¹. This instrument will effectively allow searches for Earth-like exoplanets.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chubar O.; Berman, L; Chu, Y.S.
2012-04-04
Partially-coherent wavefront propagation calculations have proven to be feasible and very beneficial in the design of beamlines for 3rd and 4th generation Synchrotron Radiation (SR) sources. These types of calculations use the framework of classical electrodynamics for the description, on the same accuracy level, of the emission by relativistic electrons moving in magnetic fields of accelerators, and the propagation of the emitted radiation wavefronts through beamline optical elements. This enables accurate prediction of performance characteristics for beamlines exploiting high SR brightness and/or high spectral flux. Detailed analysis of radiation degree of coherence, offered by the partially-coherent wavefront propagation method, is of paramount importance for modern storage-ring based SR sources, which, thanks to extremely small sub-nanometer-level electron beam emittances, produce substantial portions of coherent flux in X-ray spectral range. We describe the general approach to partially-coherent SR wavefront propagation simulations and present examples of such simulations performed using 'Synchrotron Radiation Workshop' (SRW) code for the parameters of hard X-ray undulator based beamlines at the National Synchrotron Light Source II (NSLS-II), Brookhaven National Laboratory. These examples illustrate general characteristics of partially-coherent undulator radiation beams in low-emittance SR sources, and demonstrate advantages of applying high-accuracy physical-optics simulations to the optimization and performance prediction of X-ray optical beamlines in these new sources.
Higher-than-predicted saltation threshold wind speeds on Titan.
Burr, Devon M; Bridges, Nathan T; Marshall, John R; Smith, James K; White, Bruce R; Emery, Joshua P
2015-01-01
Titan, the largest satellite of Saturn, exhibits extensive aeolian, that is, wind-formed, dunes, features previously identified exclusively on Earth, Mars and Venus. Wind tunnel data collected under ambient and planetary-analogue conditions inform our models of aeolian processes on the terrestrial planets. However, the accuracy of these widely used formulations in predicting the threshold wind speeds required to move sand by saltation, or by short bounces, has not been tested under conditions relevant for non-terrestrial planets. Here we derive saltation threshold wind speeds under the thick-atmosphere, low-gravity and low-sediment-density conditions on Titan, using a high-pressure wind tunnel refurbished to simulate the appropriate kinematic viscosity for the near-surface atmosphere of Titan. The experimentally derived saltation threshold wind speeds are higher than those predicted by models based on terrestrial-analogue experiments, indicating the limitations of these models for such extreme conditions. The models can be reconciled with the experimental results by inclusion of the extremely low ratio of particle density to fluid density on Titan. Whereas the density ratio term enables accurate modelling of aeolian entrainment in thick atmospheres, such as those inferred for some extrasolar planets, our results also indicate that for environments with high density ratios, such as in jets on icy satellites or in tenuous atmospheres or exospheres, the correction for low-density-ratio conditions is not required.
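For context, terrestrial-analogue models typically express the threshold friction velocity in a Bagnold-type form, u*t = A sqrt(((ρp − ρf)/ρf) g D); the paper's point is that such formulations need an explicit correction at the very low particle-to-fluid density ratios found on Titan. A hedged sketch of the uncorrected form follows; the coefficient and Titan-like inputs are rough assumptions for illustration, not the wind-tunnel results or the corrected model discussed in the paper.

```python
# Illustrative only: a Bagnold-type threshold friction velocity,
# u* = A * sqrt(((rho_p - rho_f) / rho_f) * g * D). The coefficient A and the
# Titan-like inputs are rough assumptions, not the paper's measurements or model.
import math

def threshold_friction_velocity(rho_p, rho_f, g, d, a=0.1):
    return a * math.sqrt(((rho_p - rho_f) / rho_f) * g * d)

u_star = threshold_friction_velocity(
    rho_p=940.0,     # sand grain density (kg/m^3), assumed (water-ice-like)
    rho_f=5.3,       # near-surface atmospheric density on Titan (kg/m^3), approximate
    g=1.35,          # Titan surface gravity (m/s^2)
    d=300e-6,        # grain diameter (m), assumed
)
print(f"uncorrected threshold friction velocity ~ {u_star:.3f} m/s")
```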
Using Blood Indexes to Predict Overweight Statuses: An Extreme Learning Machine-Based Approach
Chen, Huiling; Yang, Bo; Liu, Dayou; Liu, Wenbin; Liu, Yanlong; Zhang, Xiuhua; Hu, Lufeng
2015-01-01
The number of overweight people continues to rise across the world. Studies have shown that being overweight can increase health risks, such as high blood pressure, diabetes mellitus, coronary heart disease, and certain forms of cancer. Therefore, identifying the overweight status in people is critical to prevent and decrease health risks. This study explores a new technique that uses blood and biochemical measurements to recognize the overweight condition. A new machine learning technique, an extreme learning machine, was developed to accurately detect the overweight status from a pool of 225 overweight and 251 healthy subjects. The group included 179 males and 297 females. The detection method was rigorously evaluated against the real-life dataset for accuracy, sensitivity, specificity, and AUC (area under the receiver operating characteristic (ROC) curve) criterion. Additionally, the feature selection was investigated to identify correlating factors for the overweight status. The results demonstrate that there are significant differences in blood and biochemical indexes between healthy and overweight people (p-value < 0.01). According to the feature selection, the most important correlated indexes are creatinine, hemoglobin, hematocrit, uric acid, red blood cells, high density lipoprotein, alanine transaminase, triglyceride, and γ-glutamyl transpeptidase. These are consistent with the results of Spearman test analysis. The proposed method holds promise as a new, accurate method for identifying the overweight status in subjects. PMID:26600199
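An extreme learning machine of the kind described here uses a single hidden layer with randomly fixed input weights and output weights obtained analytically by least squares; a minimal NumPy sketch on synthetic data (not the study's dataset or exact model):

```python
# Illustrative only: a minimal extreme learning machine (ELM) binary classifier.
# Hidden weights are random and fixed; output weights come from a least-squares solve.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(476, 9))                    # synthetic stand-in for 9 blood indexes
y = (X[:, 0] + 0.5 * X[:, 5] + rng.normal(scale=0.5, size=476) > 0).astype(float)

n_hidden = 50
W = rng.normal(size=(X.shape[1], n_hidden))      # random input weights (never trained)
b = rng.normal(size=n_hidden)                    # random biases
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # sigmoid hidden-layer outputs

beta = np.linalg.pinv(H) @ y                     # analytic output weights
accuracy = np.mean(((H @ beta) > 0.5) == y)
print(f"training accuracy on synthetic data: {accuracy:.2f}")
```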
Common polygenic variation enhances risk prediction for Alzheimer’s disease
Sims, Rebecca; Bannister, Christian; Harold, Denise; Vronskaya, Maria; Majounie, Elisa; Badarinarayan, Nandini; Morgan, Kevin; Passmore, Peter; Holmes, Clive; Powell, John; Brayne, Carol; Gill, Michael; Mead, Simon; Goate, Alison; Cruchaga, Carlos; Lambert, Jean-Charles; van Duijn, Cornelia; Maier, Wolfgang; Ramirez, Alfredo; Holmans, Peter; Jones, Lesley; Hardy, John; Seshadri, Sudha; Schellenberg, Gerard D.; Amouyel, Philippe
2015-01-01
The identification of subjects at high risk for Alzheimer’s disease is important for prognosis and early intervention. We investigated the polygenic architecture of Alzheimer’s disease and the accuracy of Alzheimer’s disease prediction models, including and excluding the polygenic component in the model. This study used genotype data from the powerful dataset comprising 17 008 cases and 37 154 controls obtained from the International Genomics of Alzheimer’s Project (IGAP). Polygenic score analysis tested whether the alleles identified to associate with disease in one sample set were significantly enriched in the cases relative to the controls in an independent sample. The disease prediction accuracy was investigated in a subset of the IGAP data, a sample of 3049 cases and 1554 controls (for whom APOE genotype data were available) by means of sensitivity, specificity, area under the receiver operating characteristic curve (AUC) and positive and negative predictive values. We observed significant evidence for a polygenic component enriched in Alzheimer’s disease (P = 4.9 × 10−26). This enrichment remained significant after APOE and other genome-wide associated regions were excluded (P = 3.4 × 10−19). The best prediction accuracy AUC = 78.2% (95% confidence interval 77–80%) was achieved by a logistic regression model with APOE, the polygenic score, sex and age as predictors. In conclusion, Alzheimer’s disease has a significant polygenic component, which has predictive utility for Alzheimer’s disease risk and could be a valuable research tool complementing experimental designs, including preventative clinical trials, stem cell selection and high/low risk clinical studies. In modelling a range of sample disease prevalences, we found that polygenic scores almost double case prediction from chance with increased prediction at polygenic extremes. PMID:26490334
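A polygenic score of the kind used here is, in essence, a weighted sum of risk-allele dosages across the selected variants, and its predictive value can be summarized with the AUC; a hedged sketch with simulated genotypes and effect sizes (not IGAP data):

```python
# Illustrative only: polygenic score as a weighted sum of risk-allele dosages,
# evaluated with AUC. Genotypes and effect sizes are simulated, not IGAP data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_people, n_snps = 2000, 500
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))    # allele dosages 0/1/2
log_odds = rng.normal(scale=0.05, size=n_snps)               # per-SNP effect sizes

prs = genotypes @ log_odds                                   # polygenic score
risk = 1 / (1 + np.exp(-(prs - prs.mean())))
case = rng.binomial(1, risk)                                 # simulated case/control status

print(f"AUC of the polygenic score alone: {roc_auc_score(case, prs):.2f}")
```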
NASA Astrophysics Data System (ADS)
Bižić, Milan B.; Petrović, Dragan Z.; Tomić, Miloš C.; Djinović, Zoran V.
2017-07-01
This paper presents the development of a unique method for experimental determination of wheel-rail contact forces and contact point position by using the instrumented wheelset (IWS). Solutions of key problems in the development of IWS are proposed, such as the determination of optimal locations, layout, number and way of connecting strain gauges as well as the development of an inverse identification algorithm (IIA). The base for the solution of these problems is the wheel model and results of FEM calculations, while IIA is based on the method of blind source separation using independent component analysis. In the first phase, the developed method was tested on a wheel model and a high accuracy was obtained (deviations of parameters obtained with IIA and really applied parameters in the model are less than 2%). In the second phase, experimental tests on the real object or IWS were carried out. The signal-to-noise ratio was identified as the main influential parameter on the measurement accuracy. The obtained results have shown that the developed method enables measurement of vertical and lateral wheel-rail contact forces Q and Y and their ratio Y/Q with estimated errors of less than 10%, while the estimated measurement error of contact point position is less than 15%. At flange contact and higher values of ratio Y/Q or Y force, the measurement errors are reduced, which is extremely important for the reliability and quality of experimental tests of safety against derailment of railway vehicles according to the standards UIC 518 and EN 14363. The obtained results have shown that the proposed method can be successfully applied in solving the problem of high accuracy measurement of wheel-rail contact forces and contact point position using IWS.
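The inverse identification step described above relies on blind source separation by independent component analysis; a generic sketch with FastICA on synthetic mixed signals (the wheel geometry, bridge wiring, and calibration details of the actual method are not reproduced here):

```python
# Illustrative only: blind source separation with FastICA on synthetic mixtures,
# standing in for strain-gauge bridge signals on an instrumented wheelset.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 4000)
sources = np.c_[np.sin(2 * np.pi * 1.0 * t),           # stand-in for vertical force Q
                np.sign(np.sin(2 * np.pi * 0.3 * t))]   # stand-in for lateral force Y
mixing = np.array([[1.0, 0.6], [0.4, 1.2]])             # unknown bridge sensitivity matrix
measurements = sources @ mixing.T + 0.02 * rng.normal(size=(len(t), 2))

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(measurements)             # estimated independent components
print("recovered component shape:", recovered.shape)
```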
Perspective. Extremely fine tuning of doping enabled by combinatorial molecular-beam epitaxy
Wu, J.; Bozovic, I.
2015-04-06
Chemical doping provides an effective method to control the electric properties of complex oxides. However, the state-of-the-art accuracy in controlling doping is limited to about 1%. This hampers elucidation of the precise doping dependences of physical properties and phenomena of interest, such as quantum phase transitions. Using combinatorial molecular beam epitaxy, we improve the accuracy in tuning the doping level by two orders of magnitude. We illustrate this novel method by two examples: a systematic investigation of the doping dependence of interface superconductivity, and a study of the competing ground states in the vicinity of the insulator-to-superconductor transition.
Sensitiveness of the colorimetric estimation of titanium
Wells, R.C.
1911-01-01
The accuracy of the colorimetric estimation of titanium is practically constant over concentrations ranging from the strongest down to those containing about 1.5 mg. TiO2 in 100 cc. The change in concentration required to produce a perceptible difference in intensity between two solutions, at favorable concentrations, was found to be about 6.5 per cent, which does not differ much from the results of others with chromium and copper solutions. With suitable precautions, such as comparing by substitution and taking the mean of several settings or of the two perceptibly different extremes, the accuracy of the colorimetric comparisons appears to be about 2 per cent.
NASA Astrophysics Data System (ADS)
Zhong, Xianyun; Fan, Bin; Wu, Fan
2017-10-01
Single-crystal calcium fluoride (CaF2) is an excellent transparent optical material, with very good transmission and a favorable refractive index from the 120 nm ultraviolet range to the 12 μm infrared range, and it is widely used in advanced optical instruments such as infrared (IR) optical systems, short-wavelength optical lithography (DUV) systems, and high-power UV laser systems. Nevertheless, the characteristics of CaF2, including low fracture toughness, low hardness, low thermal conductivity, and a high thermal expansion coefficient, mean that conventional pitch polishing techniques often suffer from problems such as subsurface damage, scratches, and digs. Single point diamond turning (SPDT) is a promising technology for machining this brittle material, but the residual surface textures and artifacts left by SPDT cause large scattering losses, and the resulting roughness falls far short of the requirements of short-wavelength optical systems. Advanced processing technologies that achieve the required shape accuracy, roughness, and surface quality simultaneously therefore need to be developed. In this paper, the authors investigate magnetorheological finishing (MRF) for high-precision processing of CaF2. Starting from an SPDT-machined concave asphere with a shape error of 0.7λ and a roughness of 17 nm, we achieve a surface accuracy of RMS λ/150 and a roughness of Rq 0.3 nm. This study of MRF techniques advances the processing of CaF2 toward the level required for state-of-the-art DUV lithography systems.
Study of multi-functional precision optical measuring system for large scale equipment
NASA Astrophysics Data System (ADS)
Jiang, Wei; Lao, Dabao; Zhou, Weihu; Zhang, Wenying; Jiang, Xingjian; Wang, Yongxi
2017-10-01
The effective application of high-performance measurement technology can greatly improve large-scale equipment manufacturing capability. Measurement of geometric parameters such as size, attitude, and position therefore requires a measurement system that is highly precise, multi-functional, and portable. However, existing measuring instruments such as the laser tracker, total station, and photogrammetry system are mostly single-function and require the station to be moved. A laser tracker must work with a cooperative target and can hardly meet measurement requirements in extreme environments. A total station is intended mainly for outdoor surveying and mapping, and it struggles to reach the accuracy demanded in industrial measurement. A photogrammetry system can cover a wide range of multi-point measurements, but its measuring range is limited and the station must be moved repeatedly. This paper presents a non-contact opto-electronic measuring instrument that can work both by scanning a measurement path and by tracking a cooperative target. The system is based on several key technologies: absolute distance measurement, two-dimensional angle measurement, automatic target recognition and accurate aiming, precision control, assembly of a complex mechanical system, and multi-functional 3D visualization software. Among these, the absolute distance measurement module ensures high-accuracy distance measurement, and the two-dimensional angle measurement module provides precise angle measurement. The system is suitable for non-contact measurement of large-scale equipment; it can help ensure the quality and performance of such equipment throughout the manufacturing process and improve the manufacturing capability of large-scale, high-end equipment.
Historical Underpinnings of Bipolar Disorder Diagnostic Criteria
Mason, Brittany L.; Brown, E. Sherwood; Croarkin, Paul E.
2016-01-01
Mood is the changing expression of emotion and can be described as a spectrum. The outermost ends of this spectrum highlight two states, the lowest low, melancholia, and the highest high, mania. These mood extremes have been documented repeatedly in human history, being first systematically described by Hippocrates. Nineteenth century contemporaries Falret and Baillarger described two forms of an extreme mood disorder, with the validity and accuracy of both debated. Regardless, the concept of a cycling mood disease was accepted before the end of the 19th century. Kraepelin then described “manic depressive insanity” and presented his description of a full spectrum of mood dysfunction which could be exhibited through single episodes of mania or depression or a complement of many episodes of each. It was this concept which was incorporated into the first DSM and carried out until DSM-III, in which the description of episodic mood dysfunction was used to build a diagnosis of bipolar disorder. Criticism of this approach is explored through discussion of the bipolar spectrum concept and some recent examinations of the clinical validity of these DSM diagnoses are presented. The concept of bipolar disorder in children is also explored. PMID:27429010
Western, David; Mose, Victor N; Worden, Jeffrey; Maitumo, David
2015-01-01
We monitored pasture biomass on 20 permanent plots over 35 years to gauge the reliability of rainfall and NDVI as proxy measures of forage shortfalls in a savannah ecosystem. Both proxies are reliable indicators of pasture biomass at the onset of dry periods but fail to predict shortfalls in prolonged dry spells. In contrast, grazing pressure predicts pasture deficits with a high degree of accuracy. Large herbivores play a primary role in determining the severity of pasture deficits and variation across habitats. Grazing pressure also explains oscillations in plant biomass unrelated to rainfall. Plant biomass has declined steadily and biomass per unit of rainfall has fallen by a third, corresponding to a doubling in grazing intensity over the study period. The rising probability of forage deficits fits local pastoral perceptions of an increasing frequency of extreme shortfalls. The decline in forage is linked to sedentarization, range loss and herbivore compression into drought refuges, rather than climate change. The results show that the decline in rangeland productivity and increasing frequency of pasture shortfalls can be ameliorated by better husbandry practices and reinforces the need for ground monitoring to complement remote sensing in forecasting pasture shortfalls.
Lamar, William L.; Goerlitz, Donald F.; Law, LeRoy M.
1965-01-01
Pesticides, in minute quantities, may affect the regimen of streams, and because they may concentrate in sediments, aquatic organisms, and edible aquatic foods, their detection and their measurement in the parts-per-trillion range are considered essential. In 1964 the U.S. Geological Survey at Menlo Park, Calif., began research on methods for monitoring pesticides in water. Two systems were selected--electron-capture gas chromatography and microcoulometric-titration gas chromatography. Studies on these systems are now in progress. This report provides current information on the development and application of an electron-capture gas chromatographic procedure. This method is a convenient and extremely sensitive procedure for the detection and measurement of organic pesticides having high electron affinities, notably the chlorinated organic pesticides. The electron-affinity detector is extremely sensitive to these substances but it is not as sensitive to many other compounds. By this method, the chlorinated organic pesticide may be determined on a sample of convenient size in concentrations as low as the parts-per-trillion range. To insure greater accuracy in the identifications, the pesticides reported were separated and identified by their retention times on two different types of gas chromatographic columns.
U.S., European ALMA Partners Award Prototype Antenna Contracts
NASA Astrophysics Data System (ADS)
2000-03-01
The U.S. and European partners in the Atacama Large Millimeter Array (ALMA) project have awarded contracts to U.S. and Italian firms, respectively, for two prototype antennas. ALMA is a planned telescope array, expected to consist of 64 millimeter-wave antennas with 12-meter diameter dishes. The array will be built at a high-altitude, extremely dry mountain site in Chile's Atacama desert, and is scheduled to be completed sometime in this decade. On February 22, 2000, Associated Universities Inc. (AUI) signed an approximately $6.2 million contract with Vertex Antenna Systems, of Santa Clara, Calif., for construction of one prototype ALMA antenna. AUI operates the U.S. National Radio Astronomy Observatory (NRAO) for the National Science Foundation under a cooperative agreement. The European partners contracted with the consortium of European Industrial Engineering and Costamasnaga, of Mestre, Italy, on February 21, 2000, for the production of another prototype. (Mestre is located on the inland side of Venice.) The two antennas must meet identical specifications, but will inherently be of different designs. This will ensure that the best possible technologies are incorporated into the final production antennas. Only one of the designs will be selected for final production. Several technical challenges must be met for the antennas to perform to ALMA specifications. Each antenna must have extremely high surface accuracy (25 micrometers, or one-third the diameter of a human hair, over the entire 12-meter diameter). This means that, when completed, the surface accuracy of the ALMA dishes will be 20 times greater than that of the Very Large Array (VLA) antennas, and about 50 times greater than dish antennas for communications or radar. The ALMA antennas must also have extremely high pointing accuracy (0.6 arcseconds). An additional challenge is that the antennas, when installed at the ALMA site in Chile, will be exposed to the ravages of weather at 16,500 feet (5000 meters) elevation. All previous millimeter-wavelength antennas that meet such exacting specifications for surface accuracy and pointing accuracy have been housed within telescope enclosures. The U.S. and European prototype antennas will be delivered to the NRAO VLA site, near Socorro, New Mexico, in October and November of 2001, respectively. Preparations for ALMA prototype testing are already underway at the VLA site. Three pads are being constructed for the antennas to rest on. An ALMA control room within the VLA control building is being established. About ten full-time ALMA staff will be involved in the testing. Additionally, ALMA project members from around the U.S. and the world will visit the VLA site to participate in the test program. The two prototype antennas will first be tested separately. Following that, the two will be linked together and tested as an interferometer. Millimeter-wave astronomy is the study of the universe in the spectral region between what is traditionally considered radio waves and infrared radiation. In this realm, ALMA will study the structure of the early universe and the evolution of galaxies; gather crucial data on the formation of stars, protoplanetary disks, and planets; and provide new insights on the familiar objects of our own solar system. ALMA is an international partnership between the United States (National Science Foundation) and Europe. 
European participants include the member states of the European Southern Observatory (Belgium, Denmark, France, Germany, Italy, the Netherlands, Sweden and Switzerland), the Centre National de la Recherche Scientifique (France), the Max-Planck Gesellschaft (Germany), the Netherlands Foundation for Research in Astronomy, the United Kingdom Particle Physics and Astronomy Research Council, the Oficina de Ciencia Y Tecnologia/Instituto Geografico Nacional OCYT/IGN (Spain), and the Swedish Natural Science Research Council (NFR). The project is currently in a Design and Development phase governed by a Memorandum of Understanding between the United States and Europe. It is hoped and expected that Japan will also join the project as a third equal partner. Negotiations are currently underway to add Canada to the United States team and Spain to the European team. The National Radio Astronomy Observatory is a facility of the National Science Foundation, operated under cooperative agreement by Associated Universities, Inc.
ALMA Partners Award Prototype Antenna Contracts in Europe and the USA
NASA Astrophysics Data System (ADS)
2000-03-01
The European and U.S. partners in the Atacama Large Millimeter Array (ALMA) project have awarded contracts to firms in Italy and the USA, respectively, for two prototype antennas. ALMA is a planned telescope array, expected to consist of 64 millimeter-wave antennas with 12-meter diameter dishes, cf. ESO Press Release 09/99 and ESO PR Video Clip 08/99. The array will be built at a high-altitude, extremely dry mountain site in Chile's Atacama desert, and is scheduled to be completed sometime in this decade. The European partners contracted with the consortium of European Industrial Engineering and Costamasnaga (Mestre, Italy), on February 21, 2000, for the production of one prototype ALMA antenna. On February 22, 2000, Associated Universities Inc. signed a contract with Vertex Antenna Systems (Santa Clara, California), for construction of another prototype antenna. The two antennas must meet identical specifications, but will inherently be of different designs. This will ensure that the best possible technologies are incorporated into the final production antennas. Several technical challenges must be met for the antennas to perform to ALMA specifications. Each antenna must have extremely high surface accuracy (25 µm, or one-third the diameter of a human hair, over the entire 12-meter diameter). This means that, when completed, the surface accuracy of the ALMA dishes will be 20 times greater than that of the Very Large Array (VLA) antennas near Socorro (New Mexico, USA), and about 50 times greater than dish antennas for communications or radar. The ALMA antennas must also have extremely high pointing accuracy (0.6 arcseconds). An additional challenge is that the antennas, when installed at the ALMA site in Chile, will be exposed to the ravages of weather at 5000 m elevation. All previous millimeter-wavelength antennas that meet such exacting specifications for surface accuracy and pointing accuracy have been housed within telescope enclosures. The U.S. and European prototype antennas will be delivered to the NRAO VLA site in October and November of 2001, respectively. Preparations for ALMA prototype testing are already underway at the VLA site. Three pads are being constructed for the antennas to rest on. An ALMA control room within the VLA control building is being established. About ten full-time ALMA staff will be involved in the testing. Additionally, ALMA project members from around the U.S. and the world will visit the VLA site to participate in the test program. The two prototype antennas will first be tested separately. Following that, the two will be linked together and tested as an interferometer. Millimeter-wave astronomy is the study of the universe in the spectral region between what is traditionally considered radio waves and infrared radiation. In this realm, ALMA will study the structure of the early universe and the evolution of galaxies; gather crucial data on the formation of stars, protoplanetary disks, and planets; and provide new insights on the familiar objects of our own solar system. ALMA is an international partnership between the United States (National Science Foundation) and Europe. 
European participants include the member states of the European Southern Observatory (Belgium, Denmark, France, Germany, Italy, the Netherlands, Sweden and Switzerland), the Centre National de la Recherche Scientifique (CNRS) in France, the Max-Planck Gesellschaft (Germany), the Netherlands Foundation for Research in Astronomy, the United Kingdom Particle Physics and Astronomy Research Council (PPARC), the Oficina de Ciencia Y Tecnologia/Instituto Geografico Nacional OCYT/IGN (Spain) and the Swedish Natural Science Research Council (NFR). The project is currently in a Design and Development phase governed by a Memorandum of Understanding between the United States and Europe. Negotiations are currently underway to add Canada to the United States team. Note [1] This Press Release is published simultaneously by the U.S. National Radio Astronomy Observatory (NRAO) , a facility of the National Science Foundation and operated under cooperative agreement by Associated Universities, Inc. ESO Video News Reel no. 5 with sequences related to the ALMA project is available to broadcasters on request.
Hierarchical extreme learning machine based reinforcement learning for goal localization
NASA Astrophysics Data System (ADS)
AlDahoul, Nouar; Zaw Htike, Zaw; Akmeliawati, Rini
2017-03-01
The objective of goal localization is to find the location of goals in noisy environments. Simple actions are performed to move the agent towards the goal. The goal detector should be capable of minimizing the error between the predicted locations and the true ones. Few regions need to be processed by the agent to reduce the computational effort and increase the speed of convergence. In this paper, a reinforcement learning (RL) method was utilized to find an optimal series of actions to localize the goal region. The visual data, a set of images, are high-dimensional and unstructured and need to be represented efficiently to obtain a robust detector. Different deep reinforcement learning models have already been used to localize a goal, but most of them take a long time to learn the model. This long learning time results from the iterative weight fine-tuning stage used to find an accurate model. A Hierarchical Extreme Learning Machine (H-ELM) was used as a fast deep model that does not fine-tune the weights; in other words, hidden weights are generated randomly and output weights are calculated analytically. The H-ELM algorithm was used in this work to find good features for an effective representation. This paper proposes a combination of Hierarchical Extreme Learning Machine and reinforcement learning to find an optimal policy directly from visual input. This combination outperforms other methods in terms of accuracy and learning speed. The simulations and results were analysed using MATLAB.
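The appeal of the H-ELM component described above is that the hidden representation stays random while only the output mapping is computed, so representation learning is cheap. A toy sketch combining fixed random features with Q-learning updates on a tiny 1-D goal-seeking task follows; it is purely illustrative (the environment, feature sizes, and hyperparameters are assumptions, not the authors' agent), and it is written in Python rather than the MATLAB used in the paper.

```python
# Illustrative only: fixed random (ELM-style) features feeding a linear Q-learning
# agent on a toy 1-D goal-localization task. Not the authors' H-ELM + RL system.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, goal = 10, 2, 7              # actions: 0 = left, 1 = right
W = rng.normal(size=(n_states, 16))               # random, never-trained feature map

def features(state):
    one_hot = np.zeros(n_states)
    one_hot[state] = 1.0
    return np.tanh(one_hot @ W)

theta = np.zeros((16, n_actions))                 # only the output weights are learned
alpha, gamma, epsilon = 0.1, 0.95, 0.1

for _ in range(2000):
    s = rng.integers(n_states)
    for _ in range(30):
        q = features(s) @ theta
        a = rng.integers(n_actions) if rng.random() < epsilon else int(np.argmax(q))
        s_next = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s_next == goal else -0.01
        target = r + gamma * np.max(features(s_next) @ theta)
        theta[:, a] += alpha * (target - q[a]) * features(s)
        s = s_next
        if s == goal:
            break

print("greedy action per state:", [int(np.argmax(features(s) @ theta)) for s in range(n_states)])
```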
Improved forecasts of winter weather extremes over midlatitudes with extra Arctic observations
NASA Astrophysics Data System (ADS)
Sato, Kazutoshi; Inoue, Jun; Yamazaki, Akira; Kim, Joo-Hong; Maturilli, Marion; Dethloff, Klaus; Hudson, Stephen R.; Granskog, Mats A.
2017-02-01
Recent cold winter extremes over Eurasia and North America have been considered to be a consequence of a warming Arctic. More accurate weather forecasts are required to reduce human and socioeconomic damages associated with severe winters. However, the sparse observing network over the Arctic brings errors in initializing a weather prediction model, which might impact accuracy of prediction results at midlatitudes. Here we show that additional Arctic radiosonde observations from the Norwegian young sea ICE expedition (N-ICE2015) drifting ice camps and existing land stations during winter improved forecast skill and reduced uncertainties of weather extremes at midlatitudes of the Northern Hemisphere. For two winter storms over East Asia and North America in February 2015, ensemble forecast experiments were performed with initial conditions taken from an ensemble atmospheric reanalysis in which the observation data were assimilated. The observations reduced errors in initial conditions in the upper troposphere over the Arctic region, yielding more precise prediction of the locations and strengths of upper troughs and surface synoptic disturbances. Errors and uncertainties of predicted upper troughs at midlatitudes would be brought with upper level high potential vorticity (PV) intruding southward from the observed Arctic region. This is because the PV contained a "signal" of the additional Arctic observations as it moved along an isentropic surface. This suggests that a coordinated sustainable Arctic observing network would be effective not only for regional weather services but also for reducing weather risks in locations distant from the Arctic.
Extreme ultraviolet interferometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Kenneth A.
EUV lithography is a promising and viable candidate for circuit fabrication with 0.1-micron critical dimension and smaller. In order to achieve diffraction-limited performance, all-reflective multilayer-coated lithographic imaging systems operating near 13-nm wavelength and 0.1 NA have system wavefront tolerances of 0.27 nm, or 0.02 waves RMS. Owing to the highly-sensitive resonant reflective properties of multilayer mirrors and extraordinarily tight tolerances set forth for their fabrication, EUV optical systems require at-wavelength EUV interferometry for final alignment and qualification. This dissertation discusses the development and successful implementation of high-accuracy EUV interferometric techniques. Proof-of-principle experiments with a prototype EUV point-diffraction interferometer for the measurement of Fresnel zoneplate lenses first demonstrated sub-wavelength EUV interferometric capability. These experiments spurred the development of the superior phase-shifting point-diffraction interferometer (PS/PDI), which has been implemented for the testing of an all-reflective lithographic-quality EUV optical system. Both systems rely on pinhole diffraction to produce spherical reference wavefronts in a common-path geometry. Extensive experiments demonstrate EUV wavefront-measuring precision beyond 0.02 waves RMS. EUV imaging experiments provide verification of the high-accuracy of the point-diffraction principle, and demonstrate the utility of the measurements in successfully predicting imaging performance. Complementary to the experimental research, several areas of theoretical investigation related to the novel PS/PDI system are presented. First-principles electromagnetic field simulations of pinhole diffraction are conducted to ascertain the upper limits of measurement accuracy and to guide selection of the pinhole diameter. Investigations of the relative merits of different PS/PDI configurations accompany a general study of the most significant sources of systematic measurement errors. To overcome a variety of experimental difficulties, several new methods in interferogram analysis and phase-retrieval were developed: the Fourier-Transform Method of Phase-Shift Determination, which uses Fourier-domain analysis to improve the accuracy of phase-shifting interferometry; the Fourier-Transform Guided Unwrap Method, which was developed to overcome difficulties associated with a high density of mid-spatial-frequency blemishes and which uses a low-spatial-frequency approximation to the measured wavefront to guide the phase unwrapping in the presence of noise; and, finally, an expedient method of Gram-Schmidt orthogonalization which facilitates polynomial basis transformations in wave-front surface fitting procedures.
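As a point of reference for the phase-shifting analysis mentioned above, the classical four-step algorithm recovers the wrapped phase from four interferograms shifted by 90 degrees; a generic NumPy sketch on synthetic fringes (the PS/PDI-specific Fourier-transform methods developed in the dissertation are not reproduced here):

```python
# Illustrative only: the classical four-step phase-shifting algorithm,
# phi = atan2(I4 - I2, I1 - I3), applied to synthetic interferograms.
import numpy as np

x, y = np.meshgrid(np.linspace(-1, 1, 256), np.linspace(-1, 1, 256))
true_phase = 6.0 * (x**2 + y**2)                          # synthetic defocus wavefront (rad)

frames = [1.0 + 0.8 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]
i1, i2, i3, i4 = frames

wrapped = np.arctan2(i4 - i2, i1 - i3)                    # wrapped phase in (-pi, pi]
print("wrapped phase range:", wrapped.min(), wrapped.max())
```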
NASA Technical Reports Server (NTRS)
Putnam, WilliamM.
2011-01-01
In 2008 the World Modeling Summit for Climate Prediction concluded that "climate modeling will need-and is ready-to move to fundamentally new high-resolution approaches to capitalize on the seamlessness of the weather-climate continuum." Following from this, experimentation with very high-resolution global climate modeling has gained enhanced priority within many modeling groups and agencies. The NASA Goddard Earth Observing System model (GEOS-5) has been enhanced to provide a capability for execution at the finest horizontal resolutions possible with a global climate model today. Using this high-resolution, non-hydrostatic version of GEOS-5, we have developed a unique capability to explore the intersection of weather and climate within a seamless prediction system. Week-long weather experiments to multiyear climate simulations at global resolutions ranging from 3.5 to 14 km have demonstrated the predictability of extreme events including severe storms along frontal systems, extra-tropical storms, and tropical cyclones. The primary benefits of high resolution global models will likely be in the tropics, with better predictions of the genesis stages of tropical cyclones and of the internal structure of their mature stages. Using satellite data we assess the accuracy of GEOS-5 in representing extreme weather phenomena, and their interaction within the global climate on seasonal time-scales. The impacts of convective parameterization and the frequency of coupling between the moist physics and dynamics are explored in terms of precipitation intensity and the representation of deep convection. We will also describe the seasonal variability of global tropical cyclone activity within a global climate model capable of representing the most intense category 5 hurricanes.
NASA Astrophysics Data System (ADS)
Zbijewski, W.; Sisniega, A.; Stayman, J. W.; Thawait, G.; Packard, N.; Yorkston, J.; Demehri, S.; Fritz, J.; Siewerdsen, J. H.
2015-03-01
Purpose: Arthritis and bone trauma are often accompanied by bone marrow edema (BME). BME is challenging to detect in CT due to the overlaying trabecular structure but can be visualized using dual-energy (DE) techniques to discriminate water and fat. We investigate the feasibility of DE imaging of BME on a dedicated flat-panel detector (FPD) extremities cone-beam CT (CBCT) with a unique x-ray tube with three longitudinally mounted sources. Methods: Simulations involved a digital BME knee phantom imaged with a 60 kVp low-energy beam (LE) and 105 kVp high-energy beam (HE) (+0.25 mm Ag filter). Experiments were also performed on a test-bench with a Varian 4030CB FPD using the same beam energies as the simulation study. A three-source configuration was implemented with x-ray sources distributed along the longitudinal axis and DE CBCT acquisition in which the superior and inferior sources operate at HE (and collect half of the projection angles each) and the central source operates at LE. Three-source DE CBCT was compared to a double-scan, single-source orbit. Experiments were performed with a wrist phantom containing a 50 mg/ml densitometry insert submerged in alcohol (simulating fat) with drilled trabeculae down to ~1 mm to emulate the trabecular matrix. Reconstruction-based three-material decomposition of fat, soft tissue, and bone was performed. Results: For a low-dose scan (36 mAs in the HE and LE data), DE CBCT achieved combined accuracy of ~0.80 for a pattern of BME spherical lesions ranging 2.5 - 10 mm diameter in the knee phantom. The accuracy increased to ~0.90 for a 360 mAs scan. Excellent DE discrimination of the base materials was achieved in the experiments. Approximately 80% of the alcohol (fat) voxels in the trabecular phantom was properly identified both for single and 3-source acquisitions, indicating the ability to detect edemous tissue (water-equivalent plastic in the body of the densitometry insert) from the fat inside the trabecular matrix (emulating normal trabecular bone with significant fraction of yellow marrow). Conclusion: Detection of BME and quantification of water and fat content were achieved in extremities DE CBCT with a longitudinal configuration of sources providing DE imaging in a single gantry rotation. The findings support the development of DE imaging capability for CBCT of the extremities in areas conventionally in the domain of MRI.
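The reconstruction-based three-material decomposition used above can be viewed, per voxel, as a small constrained linear problem: the low- and high-energy attenuation values are modeled as mixtures of the basis materials, with volume fractions summing to one. A hedged numerical sketch follows; the basis attenuation values are invented for illustration and are not calibration data from this system.

```python
# Illustrative only: per-voxel three-material (fat / soft tissue / bone) decomposition
# from dual-energy attenuation, with volume fractions constrained to sum to 1.
# Basis attenuation values (HU) are invented for illustration, not system calibration.
import numpy as np

basis_hu = np.array([[-100.0,   50.0, 1200.0],    # low-energy HU of fat, soft tissue, bone
                     [ -80.0,   45.0,  700.0]])   # high-energy HU of fat, soft tissue, bone

voxel = np.array([62.5, 40.3])                    # measured (low, high) HU in one voxel

A = np.vstack([basis_hu, np.ones(3)])             # add the sum-to-one constraint
b = np.append(voxel, 1.0)
fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
print("fat, soft tissue, bone fractions:", np.round(fractions, 2))   # ~0.30, 0.65, 0.05
```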
Zbijewski, W.; Sisniega, A.; Stayman, J. W.; Thawait, G.; Packard, N.; Yorkston, J.; Demehri, S.; Fritz, J.; Siewerdsen, J. H.
2015-01-01
Purpose Arthritis and bone trauma are often accompanied by bone marrow edema (BME). BME is challenging to detect in CT due to the overlaying trabecular structure but can be visualized using dual-energy (DE) techniques to discriminate water and fat. We investigate the feasibility of DE imaging of BME on a dedicated flat-panel detector (FPD) extremities cone-beam CT (CBCT) with a unique x-ray tube with three longitudinally mounted sources. Methods Simulations involved a digital BME knee phantom imaged with a 60 kVp low-energy beam (LE) and 105 kVp high-energy beam (HE) (+0.25 mm Ag filter). Experiments were also performed on a test-bench with a Varian 4030CB FPD using the same beam energies as the simulation study. A three-source configuration was implemented with x-ray sources distributed along the longitudinal axis and DE CBCT acquisition in which the superior and inferior sources operate at HE (and collect half of the projection angles each) and the central source operates at LE. Three-source DE CBCT was compared to a double-scan, single-source orbit. Experiments were performed with a wrist phantom containing a 50 mg/ml densitometry insert submerged in alcohol (simulating fat) with drilled trabeculae down to ~1 mm to emulate the trabecular matrix. Reconstruction-based three-material decomposition of fat, soft tissue, and bone was performed. Results For a low-dose scan (36 mAs in the HE and LE data), DE CBCT achieved combined accuracy of ~0.80 for a pattern of BME spherical lesions ranging 2.5 – 10 mm diameter in the knee phantom. The accuracy increased to ~0.90 for a 360 mAs scan. Excellent DE discrimination of the base materials was achieved in the experiments. Approximately 80% of the alcohol (fat) voxels in the trabecular phantom was properly identified both for single and 3-source acquisitions, indicating the ability to detect edemous tissue (water-equivalent plastic in the body of the densitometry insert) from the fat inside the trabecular matrix (emulating normal trabecular bone with significant fraction of yellow marrow). Conclusion Detection of BME and quantification of water and fat content were achieved in extremities DE CBCT with a longitudinal configuration of sources providing DE imaging in a single gantry rotation. The findings support the development of DE imaging capability for CBCT of the extremities in areas conventionally in the domain of MRI. PMID:26045631
Zbijewski, W; Sisniega, A; Stayman, J W; Thawait, G; Packard, N; Yorkston, J; Demehri, S; Fritz, J; Siewerdsen, J H
2015-02-21
Arthritis and bone trauma are often accompanied by bone marrow edema (BME). BME is challenging to detect in CT due to the overlaying trabecular structure but can be visualized using dual-energy (DE) techniques to discriminate water and fat. We investigate the feasibility of DE imaging of BME on a dedicated flat-panel detector (FPD) extremities cone-beam CT (CBCT) with a unique x-ray tube with three longitudinally mounted sources. Simulations involved a digital BME knee phantom imaged with a 60 kVp low-energy beam (LE) and 105 kVp high-energy beam (HE) (+0.25 mm Ag filter). Experiments were also performed on a test-bench with a Varian 4030CB FPD using the same beam energies as the simulation study. A three-source configuration was implemented with x-ray sources distributed along the longitudinal axis and DE CBCT acquisition in which the superior and inferior sources operate at HE (and collect half of the projection angles each) and the central source operates at LE. Three-source DE CBCT was compared to a double-scan, single-source orbit. Experiments were performed with a wrist phantom containing a 50 mg/ml densitometry insert submerged in alcohol (simulating fat) with drilled trabeculae down to ~1 mm to emulate the trabecular matrix. Reconstruction-based three-material decomposition of fat, soft tissue, and bone was performed. For a low-dose scan (36 mAs in the HE and LE data), DE CBCT achieved combined accuracy of ~0.80 for a pattern of BME spherical lesions ranging 2.5 - 10 mm diameter in the knee phantom. The accuracy increased to ~0.90 for a 360 mAs scan. Excellent DE discrimination of the base materials was achieved in the experiments. Approximately 80% of the alcohol (fat) voxels in the trabecular phantom was properly identified both for single and 3-source acquisitions, indicating the ability to detect edemous tissue (water-equivalent plastic in the body of the densitometry insert) from the fat inside the trabecular matrix (emulating normal trabecular bone with significant fraction of yellow marrow). Detection of BME and quantification of water and fat content were achieved in extremities DE CBCT with a longitudinal configuration of sources providing DE imaging in a single gantry rotation. The findings support the development of DE imaging capability for CBCT of the extremities in areas conventionally in the domain of MRI.
Muver, a computational framework for accurately calling accumulated mutations.
Burkholder, Adam B; Lujan, Scott A; Lavender, Christopher A; Grimm, Sara A; Kunkel, Thomas A; Fargo, David C
2018-05-09
Identification of mutations from next-generation sequencing data typically requires a balance between sensitivity and accuracy. This is particularly true of DNA insertions and deletions (indels), that can impart significant phenotypic consequences on cells but are harder to call than substitution mutations from whole genome mutation accumulation experiments. To overcome these difficulties, we present muver, a computational framework that integrates established bioinformatics tools with novel analytical methods to generate mutation calls with the extremely low false positive rates and high sensitivity required for accurate mutation rate determination and comparison. Muver uses statistical comparison of ancestral and descendant allelic frequencies to identify variant loci and assigns genotypes with models that include per-sample assessments of sequencing errors by mutation type and repeat context. Muver identifies maximally parsimonious mutation pathways that connect these genotypes, differentiating potential allelic conversion events and delineating ambiguities in mutation location, type, and size. Benchmarking with a human gold standard father-son pair demonstrates muver's sensitivity and low false positive rates. In DNA mismatch repair (MMR) deficient Saccharomyces cerevisiae, muver detects multi-base deletions in homopolymers longer than the replicative polymerase footprint at rates greater than predicted for sequential single-base deletions, implying a novel multi-repeat-unit slippage mechanism. Benchmarking results demonstrate the high accuracy and sensitivity achieved with muver, particularly for indels, relative to available tools. Applied to an MMR-deficient Saccharomyces cerevisiae system, muver mutation calls facilitate mechanistic insights into DNA replication fidelity.
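The statistical comparison of ancestral and descendant allelic frequencies that muver performs can be illustrated, in stripped-down form, with a per-locus contingency test on read counts. The sketch below uses a Fisher exact test purely for illustration; muver's actual genotyping model is more elaborate, and the read counts are toy values.

```python
# Illustrative only: compare ancestral vs descendant allele read counts at one locus.
# Muver's actual model is more elaborate; this just shows the idea of a per-locus
# statistical comparison of allelic frequencies.
from scipy.stats import fisher_exact

ancestral = {"ref": 48, "alt": 2}       # toy read counts in the ancestor
descendant = {"ref": 21, "alt": 29}     # toy read counts in the descendant

table = [[ancestral["ref"], ancestral["alt"]],
         [descendant["ref"], descendant["alt"]]]
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2e}")
```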
Fully accelerating quantum Monte Carlo simulations of real materials on GPU clusters
NASA Astrophysics Data System (ADS)
Esler, Kenneth
2011-03-01
Quantum Monte Carlo (QMC) has proved to be an invaluable tool for predicting the properties of matter from fundamental principles, combining very high accuracy with extreme parallel scalability. By solving the many-body Schrödinger equation through a stochastic projection, it achieves greater accuracy than mean-field methods and better scaling with system size than quantum chemical methods, enabling scientific discovery across a broad spectrum of disciplines. In recent years, graphics processing units (GPUs) have provided a high-performance and low-cost new approach to scientific computing, and GPU-based supercomputers are now among the fastest in the world. The multiple forms of parallelism afforded by QMC algorithms make the method an ideal candidate for acceleration in the many-core paradigm. We present the results of porting the QMCPACK code to run on GPU clusters using the NVIDIA CUDA platform. Using mixed precision on GPUs and MPI for intercommunication, we observe typical full-application speedups of approximately 10x to 15x relative to quad-core CPUs alone, while reproducing the double-precision CPU results within statistical error. We discuss the algorithm modifications necessary to achieve good performance on this heterogeneous architecture and present the results of applying our code to molecules and bulk materials. Supported by the U.S. DOE under Contract No. DOE-DE-FG05-08OR23336 and by the NSF under No. 0904572.
Measurement of aspheric mirror by nanoprofiler using normal vector tracing
NASA Astrophysics Data System (ADS)
Kitayama, Takao; Shiraji, Hiroki; Yamamura, Kazuya; Endo, Katsuyoshi
2016-09-01
High-accuracy aspheric or free-form optics are necessary in many fields, such as third-generation synchrotron radiation and extreme-ultraviolet lithography, so the demand for measurement methods that can characterize aspherical or free-form surfaces with nanometer accuracy is increasing. The purpose of our study is to develop a non-contact technology that measures aspheric or free-form surfaces directly and with high repeatability. To achieve this we have developed a three-dimensional Nanoprofiler that detects the normal vectors of the sample surface. The measurement principle is based on the straightness of laser light and the accurate motion of rotational goniometers. The machine consists of four rotational stages, one translational stage, and an optical head containing a quadrant photodiode (QPD) and a laser source. In this measurement method, we control the five stages so that the reflected beam retraces the incident beam, and we determine the normal vectors and the coordinates of the surface from the signals of the goniometers, the translational stage, and the QPD. A three-dimensional figure is then obtained from the normal vectors and their coordinates by a surface reconstruction algorithm. To evaluate the performance of the machine we measured a concave aspheric mirror 150 mm in diameter and succeeded in measuring the full 150 mm aperture. We also observed the influence of the machine's systematic errors, simulated this influence, and subtracted it from the measurement result.
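The final step described above, reconstructing a figure from measured normal vectors, amounts to integrating surface slopes. A one-dimensional illustrative sketch follows; the machine's actual reconstruction algorithm is two-dimensional and more sophisticated, and the mirror profile here is a toy assumption.

```python
# Illustrative only: recover a 1-D surface profile from measured normal vectors by
# converting normals to slopes and integrating them. The actual Nanoprofiler uses a
# 2-D reconstruction algorithm; this only shows the principle.
import numpy as np
from scipy.integrate import cumulative_trapezoid

x = np.linspace(-0.075, 0.075, 500)                    # 150 mm aperture (m)
true_surface = x**2 / (2 * 1.0)                        # toy concave profile, R = 1 m

slope = np.gradient(true_surface, x)                   # slope encoded by the normals
normals = np.c_[-slope, np.ones_like(slope)]           # (nx, nz), up to normalization
recovered_slope = -normals[:, 0] / normals[:, 1]

recovered = cumulative_trapezoid(recovered_slope, x, initial=0.0) + true_surface[0]
print("max reconstruction error (m):", np.max(np.abs(recovered - true_surface)))
```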
EEG channels reduction using PCA to increase XGBoost's accuracy for stroke detection
NASA Astrophysics Data System (ADS)
Fitriah, N.; Wijaya, S. K.; Fanany, M. I.; Badri, C.; Rezal, M.
2017-07-01
In Indonesia, based on the results of the Basic Health Research 2013, the number of stroke patients had increased from 8.3‰ (2007) to 12.1‰ (2013). These days, some researchers are using electroencephalography (EEG) results as another option to detect stroke, besides CT scan images as the gold standard. A previous study on data from stroke and healthy patients at the National Brain Center Hospital (RS PON) used the Brain Symmetry Index (BSI), Delta-Alpha Ratio (DAR), and Delta-Theta-Alpha-Beta Ratio (DTABR) as features for classification by an Extreme Learning Machine (ELM). That study achieved 85% accuracy with sensitivity above 86% for acute ischemic stroke detection. Using EEG data means dealing with many data dimensions, which can reduce the accuracy of a classifier (the curse of dimensionality). Principal Component Analysis (PCA) can reduce dimensionality and computation cost without decreasing classification accuracy. XGBoost, a scalable tree boosting classifier, can solve real-world scale problems (the Higgs Boson and Allstate datasets) using a minimal amount of resources. This paper reuses the same data from RS PON and the features from the previous research, preprocessed with PCA and classified with XGBoost, to increase the accuracy with fewer electrodes. The specific reduced electrode set improved the accuracy of stroke detection. Our future work will examine algorithms other than PCA to obtain higher accuracy with fewer channels.
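A minimal sketch of the PCA-plus-XGBoost pipeline described above, using scikit-learn and the xgboost package on synthetic stand-in features (the actual study uses BSI/DAR/DTABR features computed from RS PON EEG recordings; the array shapes and hyperparameters here are illustrative only):

```python
# Illustrative sketch, not the authors' code: PCA dimensionality reduction
# followed by an XGBoost classifier for stroke vs. healthy detection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 96))      # placeholder for EEG-derived features
y = rng.integers(0, 2, size=120)    # placeholder labels (0=healthy, 1=stroke)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(),
                      PCA(n_components=10),        # keep 10 principal components
                      XGBClassifier(n_estimators=200, max_depth=3,
                                    learning_rate=0.1))
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))
```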
Kasper, Jürgen; Heesen, Christoph; Köpke, Sascha; Mühlhauser, Ingrid; Lenz, Matthias
2011-01-01
Statistical health risk information has been shown to be confusing and difficult to understand. While existing research indicates that presenting risk information in frequency formats is superior to relative risk and probability formats, the optimal design of frequency formats is still unclear. The aim of this study was to compare the presentation of multi-figure pictographs in consecutive and random arrangements regarding accuracy of perception and vulnerability to cognitive bias. A total of 111 patients with multiple sclerosis were randomly assigned to two experimental conditions: patient information using 100-figure pictographs in 1) unsorted (UP group) or 2) consecutive arrangement (CP group). The study experiment was framed as patient information on how risks and benefits could be explained. The information comprised two scenarios of a treatment decision with varying levels of emotional relevance. The primary outcome measure was accuracy of information recall (errors made when recalling previously presented frequencies of benefits and side effects). Cognitive bias was measured as additional error appearing with higher emotional involvement. The uncertainty tolerance scale and a set of items to assess risk attribution were surveyed. The study groups did not differ in their accuracy of recalling benefits, but recall of side effects was more accurate in the CP group. Cognitive bias when recalling benefits was higher in the UP group than in the CP group and equal for side effects in both groups. Results were similar in subgroup analyses of patients 1) with highly irrational risk attribution, 2) with experience regarding the hypothetical contents, or 3) with experience regarding pictograph presentation of frequencies. Overall, benefit was overestimated by more than 100% and the variance of recall was extremely high. Consecutive arrangement, as commonly used, does not seem clearly superior to unsorted arrangement, which is closer to reality. Generally poor performance and the corresponding high variance of recall might have obscured existing effects of the arrangement types. More research is needed with varying proportions and other samples.
A novel technique for evaluating the volcanic cloud top altitude using GPS Radio Occultation data
NASA Astrophysics Data System (ADS)
Biondi, Riccardo; Corradini, Stefano; Guerrieri, Lorenzo; Merucci, Luca; Stelitano, Dario; Pugnaghi, Sergio
2017-04-01
Volcanic ash and sulfuric gases are major hazards to aviation because they can damage aircraft engines even at large distances from the eruption. Many challenges posed by explosive volcanic eruptions are still being discussed and several issues are far from being solved. The cloud top altitude can be detected with different techniques, but the accuracy is still quite coarse. This parameter is important for air traffic, to establish which altitudes can be ash free, and it assumes a key role in quantifying the contribution of an eruption to climate change. Moreover, the cloud top altitude is strictly related to the mass ejected by the eruption and represents a key parameter for ash and SO2 retrievals performed with several techniques. The Global Positioning System (GPS) Radio Occultation (RO) technique enables real-time measurement of the atmospheric density structure in any meteorological condition, in remote areas, and during extreme atmospheric events, with high vertical resolution and accuracy, which makes RO an interesting tool for this kind of study. In this study we have tracked the Eyjafjöll 2010 eruption using MODIS satellite measurements and retrieved the volcanic cloud top altitudes with two different procedures exploiting the thermal infrared CO2 absorption bands around 13.4 micrometers. The first approach is a modification of the standard CO2 slicing method, while the second is based on look-up table computations. We then selected all the RO profiles co-located with the volcanic cloud and implemented an algorithm based on the variation of the bending angle to detect the cloud top altitude with high accuracy. The results of the comparison between the MODIS and RO volcanic height retrievals are encouraging and suggest that, owing to their independence from weather conditions and their high vertical resolution, RO observations can contribute to improved detection and monitoring of volcanic clouds and support warning systems.
Jakubowicz, Jessica F; Bai, Shasha; Matlock, David N; Jones, Michelle L; Hu, Zhuopei; Proffitt, Betty; Courtney, Sherry E
2018-05-01
High electrode temperature during transcutaneous monitoring is associated with skin burns in extremely premature infants. We evaluated the accuracy and precision of CO2 and O2 measurements using transcutaneous electrode temperatures below 42°C. We enrolled 20 neonates. Two transcutaneous monitors were placed simultaneously on each neonate, with one electrode maintained at 42°C and the other randomized to temperatures of 38, 39, 40, 41, and 42°C. Arterial blood was collected twice at each temperature. At the time of arterial blood sampling, values for transcutaneously measured partial pressure of CO2 (PtcCO2) were not significantly different among test temperatures. There was no evidence of skin burning at any temperature. For PtcCO2, Bland-Altman analyses of all test temperatures versus 42°C showed good precision and low bias. Transcutaneously measured partial pressure of O2 (PtcO2) values trended arterial values but had a large negative bias. Transcutaneous electrode temperatures as low as 38°C allow an assessment of PtcCO2 as accurate as that with electrodes at 42°C. Copyright © 2018 by Daedalus Enterprises.
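For readers unfamiliar with the agreement statistics used above, a minimal Bland-Altman style calculation of bias, precision, and limits of agreement looks like the sketch below; the paired values are illustrative, not data from the study.

```python
# Minimal sketch of a Bland-Altman comparison of transcutaneous vs. arterial
# CO2 values (illustrative numbers only).
import numpy as np

def bland_altman(measured, reference):
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    diff = measured - reference
    bias = diff.mean()                               # systematic offset
    sd = diff.std(ddof=1)                            # precision of differences
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)       # 95% limits of agreement
    return bias, sd, loa

ptcco2 = [42, 47, 51, 38, 44, 55]                    # transcutaneous readings (mm Hg)
paco2 = [41, 45, 52, 37, 46, 53]                     # paired arterial blood gas values
print(bland_altman(ptcco2, paco2))
```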
Weather-based forecasts of California crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D B; Cahill, K N; Field, C B
2005-09-26
Crop yield forecasts provide useful information to a range of users. Yields for several crops in California are currently forecast based on field surveys and farmer interviews, while for many crops official forecasts do not exist. As broad-scale crop yields are largely dependent on weather, measurements from existing meteorological stations have the potential to provide a reliable, timely, and cost-effective means to anticipate crop yields. We developed weather-based models of state-wide yields for 12 major California crops (wine grapes, lettuce, almonds, strawberries, table grapes, hay, oranges, cotton, tomatoes, walnuts, avocados, and pistachios), and tested their accuracy using cross-validation over the 1980-2003 period. Many crops were forecast with high accuracy, as judged by the percent of yield variation explained by the forecast, the number of yields with correctly predicted direction of yield change, or the number of yields with correctly predicted extreme yields. The most successfully modeled crop was almonds, with 81% of yield variance captured by the forecast. Predictions for most crops relied on weather measurements well before harvest time, allowing for lead times that were longer than existing procedures in many cases.
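The general approach can be illustrated with a hedged sketch: regress state-wide yield on pre-harvest weather predictors and judge skill with leave-one-out cross-validation over years. The data, predictor names, and coefficients below are synthetic stand-ins, not the study's values.

```python
# Sketch of weather-based yield forecasting evaluated by leave-one-out CV.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
years = 24
weather = rng.normal(size=(years, 3))       # e.g. spring Tmin, summer Tmax, precip
yield_t = 2.0 + weather @ np.array([0.4, -0.3, 0.2]) + rng.normal(0, 0.1, years)

pred = cross_val_predict(LinearRegression(), weather, yield_t, cv=LeaveOneOut())
r2 = 1 - np.sum((yield_t - pred) ** 2) / np.sum((yield_t - yield_t.mean()) ** 2)
print("percent of yield variance captured:", round(100 * r2, 1))
```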
Development of a method for personal, spatiotemporal exposure assessment.
Adams, Colby; Riggs, Philip; Volckens, John
2009-07-01
This work describes the development and evaluation of a high-resolution, space- and time-referenced sampling method for personal exposure assessment of airborne particulate matter (PM). The method integrates continuous measures of personal PM levels with the corresponding location-activity (i.e., work/school, home, transit) of the subject. Monitoring equipment includes a small, portable global positioning system (GPS) receiver, a miniature aerosol nephelometer, and an ambient temperature monitor to estimate the location, time, and magnitude of personal exposure to particulate matter air pollution. The precision and accuracy of each component, as well as the performance of the integrated method, were tested in a combination of laboratory and field tests. Spatial data were apportioned into pre-determined location-activity categories (i.e., work/school, home, transit) with a simple, temporospatially based algorithm. The apportioning algorithm was extremely effective, with an overall accuracy of 99.6%. This method allows examination of an individual's estimated exposure through space and time, which may provide new insights into exposure-activity relationships not possible with traditional exposure assessment techniques (i.e., time-integrated, filter-based measurements). Furthermore, the method is applicable to any contaminant or stressor that can be measured on an individual with a direct-reading sensor.
Accurate Typing of Human Leukocyte Antigen Class I Genes by Oxford Nanopore Sequencing.
Liu, Chang; Xiao, Fangzhou; Hoisington-Lopez, Jessica; Lang, Kathrin; Quenzel, Philipp; Duffy, Brian; Mitra, Robi David
2018-04-03
Oxford Nanopore Technologies' MinION has expanded the current DNA sequencing toolkit by delivering long read lengths and extreme portability. The MinION has the potential to enable expedited point-of-care human leukocyte antigen (HLA) typing, an assay routinely used to assess the immunologic compatibility between organ donors and recipients, but the platform's high error rate makes it challenging to type alleles with accuracy. We developed and validated accurate typing of HLA by Oxford nanopore (Athlon), a bioinformatic pipeline that i) maps nanopore reads to a database of known HLA alleles, ii) identifies candidate alleles with the highest read coverage at different resolution levels that are represented as branching nodes and leaves of a tree structure, iii) generates consensus sequences by remapping the reads to the candidate alleles, and iv) calls the final diploid genotype by blasting consensus sequences against the reference database. Using two independent data sets generated on the R9.4 flow cell chemistry, Athlon achieved a 100% accuracy in class I HLA typing at the two-field resolution. Copyright © 2018 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
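As a conceptual sketch of step (ii) of such a pipeline, candidate alleles can be ranked by read coverage at a chosen field resolution; the allele names and helper function below are illustrative only and do not reproduce Athlon's code.

```python
# Sketch: rank candidate HLA alleles by read coverage at a given resolution.
from collections import Counter

def top_candidates(read_hits, field_resolution=2, n=2):
    """read_hits: list of allele names (e.g. 'A*02:01:01') hit by each read."""
    counts = Counter(":".join(a.split(":")[:field_resolution]) for a in read_hits)
    return counts.most_common(n)

hits = ["A*02:01:01", "A*02:01:02", "A*24:02:01", "A*02:01:01", "A*24:02:01"]
print(top_candidates(hits))   # e.g. [('A*02:01', 3), ('A*24:02', 2)]
```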
Photoelectric scanning-based method for positioning omnidirectional automatic guided vehicle
NASA Astrophysics Data System (ADS)
Huang, Zhe; Yang, Linghui; Zhang, Yunzhi; Guo, Yin; Ren, Yongjie; Lin, Jiarui; Zhu, Jigui
2016-03-01
Automatic guided vehicles (AGVs), a kind of mobile robot, have been widely used in many applications. To better adapt to complex working environments, more and more AGVs are designed to be omnidirectional by being equipped with Mecanum wheels, which increase their flexibility and maneuverability. However, because an AGV with this kind of wheel suffers from position errors, mainly due to frequent wheel slipping, measuring its position accurately in real time is an extremely important issue. Among the ways of achieving this, photoelectric scanning based on angle measurement is efficient. Hence, we propose a feasible method to improve the positioning process, which mainly integrates four photoelectric receivers and one laser transmitter. To verify its practicality and accuracy, actual experiments and computer simulations were conducted. In the simulation, the theoretical positioning error is less than 0.28 mm in a 10 m×10 m space. In the actual experiment, the stability, accuracy, and dynamic capability of the method were examined. The results demonstrate that the system works well and that the position measurement performance is sufficient for mainstream tasks.
NASA Astrophysics Data System (ADS)
Shibataki, Takuya; Takahashi, Yasuhito; Fujiwara, Koji
2018-04-01
This paper discusses a method for measuring the saturation magnetization of iron core materials using an electromagnet, which can apply an extremely large magnetic field strength to a specimen. Electrical steel sheets are considered to be completely saturated at such large magnetic field strengths, above about 100 kA/m. The saturation magnetization can be obtained by assuming that a completely saturated specimen shows a linear change of flux density with magnetic field strength, because the saturation magnetization is constant. In order to accurately evaluate the flux density in the specimen, the air flux between the specimen and the winding of the B-coil used to detect the flux density is compensated by exploiting the ideal condition that the incremental permeability of a saturated specimen is equal to the permeability of vacuum. An error in the magnetic field strength caused by the placement of the sensor does not affect the measurement accuracy of the saturation magnetization; the error is conveniently cancelled because the saturation magnetization is a function of the ratio of the magnetic field strength to its increment. It may be concluded that the saturation magnetization can be easily measured with high accuracy by the proposed method.
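The underlying idea can be sketched numerically: once the specimen is fully saturated, B varies linearly with H with slope μ0, so the intercept of a straight-line fit over the saturated range gives the saturation polarization. The field range, noise level, and "true" value below are synthetic assumptions for illustration.

```python
# Illustrative sketch: extract saturation polarization Js from a linear fit of
# B versus H in the fully saturated regime (B = mu0*H + Js).
import numpy as np

MU0 = 4e-7 * np.pi
H = np.linspace(100e3, 300e3, 20)            # applied field strength, A/m
Js_true = 2.03                               # assumed saturation polarization, T
B = MU0 * H + Js_true + np.random.default_rng(2).normal(0, 1e-4, H.size)

slope, intercept = np.polyfit(H, B, 1)
print("fitted slope / mu0:", slope / MU0)    # close to 1 when truly saturated
print("saturation polarization Js [T]:", intercept)
```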
Characterization and classification of South American land cover types using satellite data
NASA Technical Reports Server (NTRS)
Townshend, J. R. G.; Justice, C. O.; Kalb, V.
1987-01-01
Various methods are compared for carrying out land cover classifications of South America using multitemporal Advanced Very High Resolution Radiometer data. Fifty-two images of the normalized difference vegetation index (NDVI) from a 1-year period are used to generate multitemporal data sets. Three main approaches to land cover classification are considered, namely the use of the principal components transformed images, the use of a characteristic curves procedure based on NDVI values plotted against time, and finally application of the maximum likelihood rule to multitemporal data sets. Comparison of results from training sites indicates that the last approach yields the most accurate results. Despite the reliance on training site figures for performance assessment, the results are nevertheless extremely encouraging, with accuracies for several cover types exceeding 90 per cent.
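For illustration, a Gaussian maximum-likelihood classifier applied to multitemporal NDVI feature vectors can be sketched as below; QuadraticDiscriminantAnalysis is equivalent to per-class Gaussian maximum likelihood with full covariance matrices. The classes, NDVI values, and dimensionality are synthetic stand-ins for the 52-date AVHRR data.

```python
# Sketch: maximum-likelihood classification of multitemporal NDVI vectors.
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(3)
n_per_class, n_dates = 200, 12               # reduced dimensionality for the sketch
forest = rng.normal(0.7, 0.05, (n_per_class, n_dates))
savanna = rng.normal(0.45, 0.08, (n_per_class, n_dates))
X = np.vstack([forest, savanna])
y = np.array([0] * n_per_class + [1] * n_per_class)

clf = QuadraticDiscriminantAnalysis(store_covariance=True).fit(X, y)
print("training-site accuracy:", clf.score(X, y))
```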
Liu, Yanqiu; Lu, Huijuan; Yan, Ke; Xia, Haixia; An, Chunlin
2016-01-01
Embedding cost-sensitive factors into classifiers increases classification stability and reduces classification costs when classifying large-scale, redundant, and imbalanced datasets, such as gene expression data. In this study, we extend our previous work, the Dissimilar ELM (D-ELM), by introducing misclassification costs into the classifier. We name the proposed algorithm the cost-sensitive D-ELM (CS-D-ELM). Furthermore, we embed a rejection cost into the CS-D-ELM to increase the classification stability of the proposed algorithm. Experimental results show that the rejection-cost-embedded CS-D-ELM algorithm effectively reduces the average and overall cost of the classification process, while the classification accuracy remains competitive. The proposed method can be extended to classification problems of other redundant and imbalanced data.
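A generic way to embed misclassification costs into an ELM-style classifier is to weight each training sample by the cost of misclassifying its class in the least-squares solve for the output weights. The sketch below illustrates that idea only; it is not the CS-D-ELM implementation, and the cost values and data are made up.

```python
# Minimal sketch of a cost-weighted ELM output-weight solve.
import numpy as np

def train_cost_sensitive_elm(X, y, costs, n_hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))    # random input weights
    b = rng.normal(size=n_hidden)                  # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer outputs
    T = np.eye(int(y.max()) + 1)[y]                # one-hot targets
    w = np.sqrt(np.array([costs[label] for label in y]))[:, None]
    beta, *_ = np.linalg.lstsq(H * w, T * w, rcond=None)   # cost-weighted solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))
y = (X[:, 0] > 0).astype(int)                      # toy labels
W, b, beta = train_cost_sensitive_elm(X, y, costs={0: 1.0, 1: 5.0})
print("accuracy:", (predict_elm(X, W, b, beta) == y).mean())
```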
Azria, E; Tsatsaris, V; Moriette, G; Hirsch, E; Schmitz, T; Cabrol, D; Goffinet, F
2007-05-01
The long-term prognosis of extremely premature infants is becoming increasingly well known, and although a resuscitation procedure may be possible at birth, it does not guarantee survival, or survival free of disability. Uncertainty about the individual prognosis and outcome of these children remains considerable. In this field we are at the frontier of medical knowledge, and the answer to the question of how to decide on ante- and postnatal care is crucial. This work focuses on the problem of decision making in the context of extreme prematurity. It attempts to deconstruct this concept and to make its stakes explicit. Thus, drawing on the medical literature and on philosophical debates, we tried to build a decision-making procedure that complies with the ethical requirements of medical care, accuracy, justice and equity. This decision-making procedure is primarily concerned with the singularity of each decision situation and intends to link it closely to the notions of rationality and responsibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Apai, Dániel; Skemer, Andrew; Hanson, Jake R.
Time-resolved photometry is an important new probe of the physics of condensate clouds in extrasolar planets and brown dwarfs. Extreme adaptive optics systems can directly image planets, but precise brightness measurements are challenging. We present VLT/SPHERE high-contrast, time-resolved broad H-band near-infrared photometry for four exoplanets in the HR 8799 system, sampling changes from night to night over five nights with relatively short integrations. The photospheres of these four planets are often modeled by patchy clouds and may show large-amplitude rotational brightness modulations. Our observations provide high-quality images of the system. We present a detailed performance analysis of different data analysis approaches to accurately measure the relative brightnesses of the four exoplanets. We explore the information in satellite spots and demonstrate their use as a proxy for image quality. While the brightness variations of the satellite spots are strongly correlated, we also identify a second-order anti-correlation pattern between the different spots. Our study finds that KLIP reduction based on principal components analysis with satellite-spot-modulated artificial-planet-injection-based photometry leads to a significant (∼3×) gain in photometric accuracy over standard aperture-based photometry and reaches 0.1 mag per point accuracy for our data set, the signal-to-noise ratio of which is limited by small field rotation. Relative planet-to-planet photometry can be compared between nights, enabling observations spanning multiple nights to probe variability. Recent high-quality relative H-band photometry of the b–c planet pair agrees to about 1%.
Online sensing and control of oil in process wastewater
NASA Astrophysics Data System (ADS)
Khomchenko, Irina B.; Soukhomlinoff, Alexander D.; Mitchell, T. F.; Selenow, Alexander E.
2002-02-01
Industrial processes that must remove high concentrations of oil from their waste streams find it extremely difficult to measure and control the water purification process. Most oil separation processes involve chemical separation using highly corrosive caustics, acids, surfactants, and emulsifiers. The output of this chemical treatment process includes highly adhesive tar-like globules, emulsified and surface oils, and other emulsified chemicals, in addition to suspended solids. The oil/hydrocarbon concentration in the wastewater process may fluctuate from 1 ppm to 10,000 ppm, depending on the specifications of the industry and the level of water quality control. The authors have developed a sensing technology that provides the accuracy of scatter/absorption sensing in a contactless environment by combining these methodologies with reflective measurement. The sensitivity of the sensor may be modified by changing the fluid level control in the flow cell, allowing for a broad range of accurate measurement from 1 ppm to 10,000 ppm. Because this sensing system has been designed to work in a highly invasive environment, it can be placed close to the process source to allow for accurate real-time measurement and control.
NASA Astrophysics Data System (ADS)
Giesige, C.; Nava, E.
2016-12-01
In the midst of a changing climate we have seen extremes in natural events: lightning, wildfires, hurricanes, tornadoes, and earthquakes. All of these ride on an imbalance of the magnetic and electrical distribution about the Earth, including processes at both the atmospheric and geophysical levels. The Sun plays an important role in developing and feeding extreme weather events, and it also helps create a separation of charges on Earth that furthers climatic extremes. Focusing on North America and on how solar, atmospheric, and geophysical winds come together to produce lightning events, there are connections between energy distribution in the environment, lightning-caused wildfires, and extreme wildfire behavior. Lightning-caused wildfires and extreme fire behavior have been enhanced by changing climate conditions. Even with strong developments in wildfire science, there remains a lack of full understanding of the connections that create a lightning-caused wildfire event and a lack of monitoring advances for predicting extreme fire behavior. Several connections made in our research allow us to relate multiple facets of the environment to electric and magnetic influences on wildfires, among them irradiance, winds, pressure systems, humidity, and topology. These connections can be used to develop better wildfire detection systems, to establish with more accuracy the areas at highest risk for wildfire and extreme wildfire behavior, and to predict wildfire behavior. A platform found within the environment can also lead to further understanding and monitoring of other extreme weather events in the future.
Amisaki, Takashi; Toyoda, Shinjiro; Miyagawa, Hiroh; Kitamura, Kunihiro
2003-04-15
Evaluation of long-range Coulombic interactions still represents a bottleneck in the molecular dynamics (MD) simulations of biological macromolecules. Despite the advent of sophisticated fast algorithms, such as the fast multipole method (FMM), accurate simulations still demand a great amount of computation time due to the accuracy/speed trade-off inherently involved in these algorithms. Unless higher order multipole expansions, which are extremely expensive to evaluate, are employed, a large amount of the execution time is still spent in directly calculating particle-particle interactions within the nearby region of each particle. To reduce this execution time for pair interactions, we developed a computation unit (board), called MD-Engine II, that calculates nonbonded pairwise interactions using a specially designed hardware. Four custom arithmetic-processors and a processor for memory manipulation ("particle processor") are mounted on the computation board. The arithmetic processors are responsible for calculation of the pair interactions. The particle processor plays a central role in realizing efficient cooperation with the FMM. The results of a series of 50-ps MD simulations of a protein-water system (50,764 atoms) indicated that a more stringent setting of accuracy in FMM computation, compared with those previously reported, was required for accurate simulations over long time periods. Such a level of accuracy was efficiently achieved using the cooperative calculations of the FMM and MD-Engine II. On an Alpha 21264 PC, the FMM computation at a moderate but tolerable level of accuracy was accelerated by a factor of 16.0 using three boards. At a high level of accuracy, the cooperative calculation achieved a 22.7-fold acceleration over the corresponding conventional FMM calculation. In the cooperative calculations of the FMM and MD-Engine II, it was possible to achieve more accurate computation at a comparable execution time by incorporating larger nearby regions. Copyright 2003 Wiley Periodicals, Inc. J Comput Chem 24: 582-592, 2003
Modeling Extra-Long Tsunami Propagation: Assessing Data, Model Accuracy and Forecast Implications
NASA Astrophysics Data System (ADS)
Titov, V. V.; Moore, C. W.; Rabinovich, A.
2017-12-01
Detecting and modeling tsunamis that propagate tens of thousands of kilometers from their source is a formidable scientific challenge and may seem to satisfy only scientific curiosity. However, such analyses produce valuable insight into tsunami propagation dynamics and model accuracy, and they have important implications for tsunami forecasting. The Mw = 9.3 megathrust earthquake of December 26, 2004 off the coast of Sumatra generated a tsunami that devastated Indian Ocean coastlines and spread into the Pacific and Atlantic oceans. The tsunami was recorded by a great number of coastal tide gauges, including some located 15-25 thousand kilometers from the source area. To date, it is still the farthest instrumentally detected tsunami. The data from these instruments throughout the world's oceans enabled estimation of various statistical parameters and the energy decay of this event. High-resolution records of this tsunami from DARTs 32401 (offshore of northern Chile), 46405 and NeMO (both offshore of the US West Coast), combined with the mainland tide gauge measurements, enabled us to examine far-field characteristics of the 2004 tsunami in the Pacific Ocean and to compare the results of global numerical simulations with the observations. Despite their small heights (less than 2 cm at deep-ocean locations), the records demonstrated consistent spatial and temporal structure. The numerical model described well the frequency content, amplitudes, and general structure of the observed waves at deep-ocean and coastal gauges. We present analysis of the measurements and comparison with model data to discuss the implications for tsunami forecast accuracy. A model study at such extreme distances from the tsunami source and at extra-long times after the event is an attempt to find accuracy bounds for tsunami models and the accuracy limitations of model use for forecasting. We discuss the results in application to tsunami forecast modeling and tsunami modeling in general.
Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations
NASA Astrophysics Data System (ADS)
Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.
2013-12-01
There is growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made towards the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition and have played a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a ground footprint comparable to (or larger than) that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground-induced geoelectric field to such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.
Luo, Z; Li, X; Zhu, M; Tang, J; Li, Z; Zhou, X; Song, G; Liu, Z; Zhou, H; Zhang, W
2017-01-01
Essentials Required warfarin doses for mechanical heart valves vary greatly. A two-stage extreme phenotype design was used to identify novel warfarin dose-associated mutations. We identified a group of variants significantly associated with extreme warfarin dose. Four newly identified mutations account for 2.2% of warfarin dose discrepancies. Background The variation among patients in warfarin response complicates the management of warfarin therapy, and an improper therapeutic dose usually results in serious adverse events. Objective To use a two-stage extreme phenotype strategy to discover novel warfarin dose-associated mutations in heart valve replacement patients. Patients/method A total of 1617 stable-dose patients were enrolled and divided randomly into two cohorts. Stage I patients were genotyped into three groups on the basis of VKORC1-1639G>A and CYP2C9*3 polymorphisms; only patients with a therapeutic dose in the upper or lower 5% of each genotype group were selected as extreme-dose patients for resequencing of the targeted regions. Evaluation of the accuracy of the sequence data and the potential value of the significant mutations identified in stage I was conducted in a validation cohort of 420 subjects. Results A group of mutations were found to be significantly associated with extreme warfarin dose. The validation work finally identified four novel mutations, i.e. DNMT3A rs2304429 (24.74%), CYP1A1 rs3826041 (47.35%), STX1B rs72800847 (7.01%), and NQO1 rs10517 (36.11%), which independently and significantly contributed to the overall variability in warfarin dose. After addition of these four mutations, the estimated regression equation was able to account for 56.2% (adjusted R2 = 0.562) of the overall variability in the warfarin maintenance dose, with a predictive accuracy of 62.4%. Conclusion Our study provides evidence linking genetic variations in STX1B, DNMT3A and CYP1A1 to warfarin maintenance dose. The newly identified mutations together account for 2.2% of the warfarin dose discrepancy. © 2016 The Authors. Journal of Thrombosis and Haemostasis published by Wiley Periodicals, Inc. on behalf of International Society on Thrombosis and Haemostasis.
Magellan: Radar performance and data products
Pettengill, G.H.; Ford, P.G.; Johnson, W.T.K.; Raney, R.K.; Soderblom, L.A.
1991-01-01
The Magellan Venus orbiter carries only one scientific instrument: a 12.6-centimeter-wavelength radar system shared among three data-taking modes. The synthetic-aperture mode images radar echoes from the Venus surface at a resolution of between 120 and 300 meters, depending on spacecraft altitude. In the altimetric mode, relative height measurement accuracies may approach 5 meters, depending on the terrain's roughness, although orbital uncertainties place a floor of about 50 meters on the absolute uncertainty. In areas of extremely rough topography, accuracy is limited by the inherent line-of-sight radar resolution of about 88 meters. The maximum elevation observed to date, corresponding to a planetary radius of 6062 kilometers, lies within Maxwell Mons. When used as a thermal emission radiometer, the system can determine surface emissivities to an absolute accuracy of about 0.02. Mosaicked and archival digital data products will be released in compact disc (CD-ROM) format.
Speller, Nicholas C; Siraj, Noureen; Regmi, Bishnu P; Marzoughi, Hassan; Neal, Courtney; Warner, Isiah M
2015-01-01
Herein, we demonstrate an alternative strategy for creating QCM-based sensor arrays by use of a single sensor to provide multiple responses per analyte. The sensor, which simulates a virtual sensor array (VSA), was developed by depositing a thin film of ionic liquid, either 1-octyl-3-methylimidazolium bromide ([OMIm][Br]) or 1-octyl-3-methylimidazolium thiocyanate ([OMIm][SCN]), onto the surface of a QCM-D transducer. The sensor was exposed to 18 different organic vapors (alcohols, hydrocarbons, chlorohydrocarbons, nitriles) belonging to the same or different homologous series. The resulting frequency shifts (Δf) were measured at multiple harmonics and evaluated using principal component analysis (PCA) and discriminant analysis (DA) which revealed that analytes can be classified with extremely high accuracy. In almost all cases, the accuracy for identification of a member of the same class, that is, intraclass discrimination, was 100% as determined by use of quadratic discriminant analysis (QDA). Impressively, some VSAs allowed classification of all 18 analytes tested with nearly 100% accuracy. Such results underscore the importance of utilizing lesser exploited properties that influence signal transduction. Overall, these results demonstrate excellent potential of the virtual sensor array strategy for detection and discrimination of vapor phase analytes utilizing the QCM. To the best of our knowledge, this is the first report on QCM VSAs, as well as an experimental sensor array, that is based primarily on viscoelasticity, film thickness, and harmonics.
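Conceptually, the virtual-sensor-array idea treats the frequency shifts measured at several harmonics of a single coated crystal as one multi-dimensional response per analyte, which can then be classified with quadratic discriminant analysis. The sketch below uses synthetic responses and made-up analyte classes purely for illustration; it is not the reported data or code.

```python
# Illustrative sketch: QDA classification of multi-harmonic QCM responses.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(4)
harmonics = 7                                    # e.g. delta-f at 7 overtones

def responses(center, n=30):
    return center + rng.normal(0, 0.5, (n, harmonics))

X = np.vstack([responses(np.linspace(-20, -5, harmonics)),    # "analyte A"
               responses(np.linspace(-35, -8, harmonics)),    # "analyte B"
               responses(np.linspace(-12, -2, harmonics))])   # "analyte C"
y = np.repeat([0, 1, 2], 30)

scores = PCA(n_components=3).fit_transform(X)    # compress / visualize responses
qda = QuadraticDiscriminantAnalysis().fit(scores, y)
print("classification accuracy:", qda.score(scores, y))
```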
Monitoring global climate change using SLR data from LARES and other geodetic satellites
NASA Astrophysics Data System (ADS)
Paolozzi, Antonio; Paris, Claudio; Pavlis, Erricos C.; Sindoni, Giampiero; Ciufolini, Ignazio
2016-04-01
The Earth Orientation Parameters (EOP), which describe the orientation of the Earth's spin axis, are influenced by mass redistribution inside and on the surface of the Earth. On the Earth's surface, global ice melting, sea level change, and atmospheric circulation are the prime contributors. Recent studies have unraveled most of the mysteries behind the Chandler wobble, the annual motion, and the secular motion of the pole. The differences from the motion of the pole of a rigid Earth are indeed due to mass redistribution and the transfer of angular momentum among the atmosphere, the oceans, and the solid Earth. The technique of laser ranging and the use of laser-ranged satellites such as LARES, along with other techniques such as Very Long Baseline Interferometry (VLBI), allow the EOP to be measured with accuracies at the level of ~200 μas, which corresponds to a few millimeters at the Earth's surface, while the use of Global Navigation Satellite System (GNSS) data can reach an accuracy even below 100 μas. At these unprecedented levels of accuracy, even tiny anomalous behavior in the EOP can be observed and thus correlated with global environmental changes such as ice melting on Greenland and the polar caps, and with extreme events that involve strong ocean-atmosphere coupling interactions such as El Niño. The contribution of Satellite Laser Ranging (SLR) data, such as from the LARES mission and similar satellites, to this area is outlined in this paper.
Li, Yuancheng; Qiu, Rixuan; Jing, Sitong
2018-01-01
Advanced Metering Infrastructure (AMI), the core component of the smart grid, realizes two-way communication of electricity data by interconnecting with a computer network. At the same time, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequential Extreme Learning Machine (OS-ELM) is established, which is used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy.
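For reference, OS-ELM combines an initial batch least-squares solve with recursive least-squares updates as new records arrive, which is what makes it attractive for streaming traffic data. The numpy sketch below is a generic illustration of that update scheme under assumed toy data, not the paper's AMI detector.

```python
# Minimal numpy sketch of Online Sequential ELM (OS-ELM).
import numpy as np

class OSELM:
    def __init__(self, n_inputs, n_hidden=40, n_outputs=2, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))   # fixed random input weights
        self.b = rng.normal(size=n_hidden)               # fixed random biases
        self.beta = np.zeros((n_hidden, n_outputs))
        self.P = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit_initial(self, X, T):                         # batch initialization
        H = self._hidden(X)
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ T

    def update(self, X, T):                              # sequential learning step
        H = self._hidden(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(H.shape[0]) + H @ self.P @ H.T)
        self.P -= K @ H @ self.P
        self.beta += self.P @ H.T @ (T - H @ self.beta)

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# toy usage with synthetic "traffic records"
rng = np.random.default_rng(1)
X0, y0 = rng.normal(size=(200, 10)), rng.integers(0, 2, 200)
model = OSELM(n_inputs=10)
model.fit_initial(X0, np.eye(2)[y0])
model.update(rng.normal(size=(20, 10)), np.eye(2)[rng.integers(0, 2, 20)])
print(model.predict(X0[:5]))
```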
The Extreme Ultraviolet Explorer mission
NASA Technical Reports Server (NTRS)
Malina, R. F.; Battel, S. J.
1989-01-01
The Extreme Ultraviolet Explorer (EUVE) mission will be the first user of NASA's new Explorer platform. The instrumentation included on this mission consists of three grazing incidence scanning telescopes, a deep survey instrument, and an EUV spectrometer. The bandpass covered is 80 to 900 A. During the first six months of the mission, the scanning telescopes will be used to make all-sky maps in four bandpasses; astronomical sources will be detected and their positions determined to an accuracy of 0.1 deg. The deep survey instrument will survey the sky with higher sensitivity along the ecliptic in two bandpasses between 80 and 500 A. Engineering and design aspects of the science payload and features of the instrument design are described.
NASA Astrophysics Data System (ADS)
Schubert, J. E.; Gallien, T.; Shakeri Majd, M.; Sanders, B. F.
2012-12-01
Globally, over 20 million people currently reside below high tide levels and 200 million are below storm tide levels. Future climate change along with the pressures of urbanization will exacerbate flooding in low-lying coastal communities. In Southern California, coastal flooding is triggered by a combination of high tides, storm surge, and waves, and recent research suggests that a current 100 year flood event may be experienced on a yearly basis by 2050 due to sea level rise adding a positive offset to return levels. Currently, Southern California coastal communities mitigate the threat of beach overwash, and consequent backshore flooding, with a combination of planning and operational activities such as protective beach berm construction. These berms consist of temporary alongshore sand dunes constructed days or hours before an extreme tide or wave event. Hydraulic modeling in urbanized embayments has shown that coastal flooding predictions are extremely sensitive to the presence of coastal protective infrastructure, requiring parameterization of the hard infrastructure elevations at centimetric accuracy. Beach berms are an example of temporary dynamic structures which undergo severe erosion during extreme events and are typically not included in flood risk assessment. Currently, little is known about the erosion process and performance of these structures, which adds uncertainty to flood hazard delineation and flood forecasts. To develop a deeper understanding of beach berm erosion dynamics, three trapezoidal berms, approximately 35 m long and 1.5 m high, were constructed and their failure during rising tide conditions was observed using terrestrial laser scanning. Concurrently, real-time kinematic GPS, high-definition time-lapse photography, a local tide gauge, and wave climate data were collected. The result is a rich and unique observational dataset capturing berm erosion dynamics. This poster highlights the data collected and presents methods for processing and leveraging multi-sensor field observation data. The data obtained from this study will be used to support the development and validation of a numerical beach berm overtopping and overwash model that will allow for improved predictions of coastal flood damage during winter storms and large swells.
Abuassba, Adnan O M; Zhang, Dezheng; Luo, Xiong; Shaheryar, Ahmad; Ali, Hazrat
2017-01-01
Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden layer feedforward neural network (SLFN). It often has good generalization performance. However, there are chances that it might overfit the training data due to having more hidden nodes than needed. To address the generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function of increasing diversity and accuracy among the final ensemble. Finally, the class label of unseen data is predicted using majority vote approach. Splitting the training data into subsets and incorporation of heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets.
Abuassba, Adnan O. M.; Ali, Hazrat
2017-01-01
Extreme Learning Machine (ELM) is a fast-learning algorithm for a single-hidden layer feedforward neural network (SLFN). It often has good generalization performance. However, there are chances that it might overfit the training data due to having more hidden nodes than needed. To address the generalization performance, we use a heterogeneous ensemble approach. We propose an Advanced ELM Ensemble (AELME) for classification, which includes Regularized-ELM, L2-norm-optimized ELM (ELML2), and Kernel-ELM. The ensemble is constructed by training a randomly chosen ELM classifier on a subset of training data selected through random resampling. The proposed AELM-Ensemble is evolved by employing an objective function of increasing diversity and accuracy among the final ensemble. Finally, the class label of unseen data is predicted using majority vote approach. Splitting the training data into subsets and incorporation of heterogeneous ELM classifiers result in higher prediction accuracy, better generalization, and a lower number of base classifiers, as compared to other models (Adaboost, Bagging, Dynamic ELM ensemble, data splitting ELM ensemble, and ELM ensemble). The validity of AELME is confirmed through classification on several real-world benchmark datasets. PMID:28546808
Adam, Asrul; Ibrahim, Zuwairie; Mokhtar, Norrima; Shapiai, Mohd Ibrahim; Cumming, Paul; Mubin, Marizan
2016-01-01
Various peak models have been introduced to detect and analyze peaks in the time domain analysis of electroencephalogram (EEG) signals. In general, a peak model in the time domain analysis consists of a set of signal parameters, such as amplitude, width, and slope. Models including those proposed by Dumpala, Acir, Liu, and Dingle are routinely used to detect peaks in EEG signals acquired in clinical studies of epilepsy or eye blink. The optimal peak model is the one with the most reliable peak detection performance in a particular application. A fair measure of the performance of different models requires a common and unbiased platform. In this study, we evaluate the performance of the four different peak models using the extreme learning machine (ELM)-based peak detection algorithm. We found that the Dingle model gave the best performance, with 72% accuracy in the analysis of real EEG data. Statistical analysis confirmed that the Dingle model afforded significantly better mean testing accuracy than did the Acir and Liu models, which were in the range 37-52%. Meanwhile, the Dingle model showed no significant difference compared to the Dumpala model.
A Pathological Brain Detection System based on Extreme Learning Machine Optimized by Bat Algorithm.
Lu, Siyuan; Qiu, Xin; Shi, Jianping; Li, Na; Lu, Zhi-Hai; Chen, Peng; Yang, Meng-Meng; Liu, Fang-Yuan; Jia, Wen-Juan; Zhang, Yudong
2017-01-01
It is beneficial to classify brain images as healthy or pathological automatically, because 3D brain images contain so much information that manual analysis is time consuming and tedious. Among various 3D brain imaging techniques, magnetic resonance (MR) imaging is the most suitable for the brain, and it is now widely applied in hospitals because it is helpful in diagnosis, prognosis, and pre-surgical and post-surgical procedures. Automatic detection methods exist; however, they suffer from low accuracy. Therefore, we proposed a novel approach which employed the 2D discrete wavelet transform (DWT) and calculated the entropies of the subbands as features. Then, a bat algorithm optimized extreme learning machine (BA-ELM) was trained to identify pathological brains from healthy controls. A 10x10-fold cross validation was performed to evaluate the out-of-sample performance. The method achieved a sensitivity of 99.04%, a specificity of 93.89%, and an overall accuracy of 98.33% over 132 MR brain images. The experimental results suggest that the proposed approach is accurate and robust in pathological brain detection. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
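The feature-extraction stage described above can be sketched as follows: a 2D discrete wavelet transform of an MR slice followed by the Shannon entropy of each subband (the bat-algorithm/ELM training stage is omitted). The example uses PyWavelets on a synthetic image; wavelet family, decomposition level, and bin count are assumptions, not the paper's settings.

```python
# Sketch: 2D DWT subband entropies as features for brain-image classification.
import numpy as np
import pywt

def subband_entropies(image, wavelet="db4", level=2, bins=64):
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [b for triple in coeffs[1:] for b in triple]
    feats = []
    for band in bands:
        hist, _ = np.histogram(band.ravel(), bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        feats.append(-(p * np.log2(p)).sum())        # Shannon entropy of subband
    return np.array(feats)

slice_img = np.random.default_rng(5).normal(size=(128, 128))
print(subband_entropies(slice_img).shape)            # 1 approx + 6 detail subbands
```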
On the Development of Multi-Step Inverse FEM with Shell Model
NASA Astrophysics Data System (ADS)
Huang, Y.; Du, R.
2005-08-01
The inverse or one-step finite element approach is increasingly used in the sheet metal stamping industry to predict strain distribution and the initial blank shape in the preliminary design stage. Based on the existing theory, there are two types of method: one is based on the principle of virtual work and the other on the principle of extreme work. Much research has been conducted to improve the accuracy of simulation results. For example, based on the virtual work principle, Batoz et al. developed a new method using triangular DKT shell elements in which the bending and unbending effects are considered. Based on the principle of extreme work, Majlessi et al. proposed a multi-step inverse approach with membrane elements and applied it to an axisymmetric part, and Lee et al. presented an axisymmetric shell element model to solve a similar problem. In this paper, a new multi-step inverse method is introduced with no limitation on the workpiece shape. It is a shell element model based on the virtual work principle. The new method is validated by comparison with the commercial software system PAMSTAMP®. The comparison results indicate that the accuracy is good.
NASA Astrophysics Data System (ADS)
Adhi, H. A.; Wijaya, S. K.; Prawito; Badri, C.; Rezal, M.
2017-03-01
Stroke is a cerebrovascular disease caused by the obstruction of blood flow to the brain. Stroke is the leading cause of death in Indonesia and the second leading cause in the world; it is also a major cause of disability. Ischemic stroke accounts for most of all stroke cases. Obstruction of blood flow can cause tissue damage, which results in electrical changes in the brain that can be observed through the electroencephalogram (EEG). In this study, we present the results of automatic detection of ischemic stroke and normal subjects based on the scaling exponent of the EEG obtained through detrended fluctuation analysis (DFA), using an extreme learning machine (ELM) as the classifier. The signal processing was performed on 18 channels of EEG in the range of 0-30 Hz. The scaling exponents of the subjects were used as the input for the ELM to classify ischemic stroke. Detection performance was evaluated in terms of accuracy, sensitivity, and specificity. The results showed that the performance of the proposed method in classifying ischemic stroke was 84% accuracy, 82% sensitivity, and 87% specificity with 120 hidden neurons and sine as the activation function of the ELM.
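The scaling exponent used as the classifier input is the slope of log fluctuation versus log window size from DFA. A minimal, generic implementation for a single channel is sketched below (window sizes and the white-noise test signal are illustrative; this is not the authors' code).

```python
# Minimal sketch of detrended fluctuation analysis (DFA) for one EEG channel.
import numpy as np

def dfa_exponent(signal, window_sizes=(16, 32, 64, 128, 256)):
    profile = np.cumsum(signal - np.mean(signal))       # integrated series
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        f2 = []
        for i in range(n_windows):
            seg = profile[i * n:(i + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear detrend
            f2.append(np.mean((seg - trend) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return slope

eeg = np.random.default_rng(6).normal(size=4096)        # white noise -> exponent ~0.5
print("scaling exponent:", round(dfa_exponent(eeg), 2))
```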
Extreme scale multi-physics simulations of the tsunamigenic 2004 Sumatra megathrust earthquake
NASA Astrophysics Data System (ADS)
Ulrich, T.; Gabriel, A. A.; Madden, E. H.; Wollherr, S.; Uphoff, C.; Rettenberger, S.; Bader, M.
2017-12-01
SeisSol (www.seissol.org) is an open-source software package based on an arbitrary high-order derivative Discontinuous Galerkin method (ADER-DG). It solves spontaneous dynamic rupture propagation on pre-existing fault interfaces according to non-linear friction laws, coupled to seismic wave propagation with high-order accuracy in space and time (minimal dispersion errors). SeisSol exploits unstructured meshes to account for complex geometries, e.g. high-resolution topography and bathymetry, 3D subsurface structure, and fault networks. We present the largest (1500 km of faults) and longest (500 s) dynamic rupture simulation to date, modeling the 2004 Sumatra-Andaman earthquake. We demonstrate the need for end-to-end optimization and petascale performance of scientific software to realize realistic simulations on the extreme scales of subduction zone earthquakes: considering the full complexity of subduction zone geometries leads inevitably to huge differences in element sizes. The main code improvements include a cache-aware wave propagation scheme and optimizations of the dynamic rupture kernels using code generation. In addition, a novel clustered local-time-stepping scheme for dynamic rupture has been established. Finally, asynchronous output has been implemented to overlap I/O and compute time. We resolve the frictional sliding process on the curved megathrust and a system of splay faults, as well as the seismic wave field and seafloor displacement with frequency content up to 2.2 Hz. We validate the scenario against geodetic, seismological, and tsunami observations. The resulting rupture dynamics shed new light on the activation and importance of splay faults.
2016-08-09
This image shows the bare bones of the first prototype starshade by NASA's Jet Propulsion Laboratory, Pasadena, California. The prototype was shown at technology partner Astro Aerospace/Northrop Grumman's facility in Santa Barbara, California in 2013. In order for the petals of the starshade to diffract starlight away from the camera of a space telescope, they must be deployed with accuracy once the starshade reaches space. The four petals pictured in the image are being measured for this positional accuracy with a laser. As shown by this 66-foot (20-meter) model, starshades can come in many shapes and sizes. This design shows petals that are more extreme in shape, which properly diffract starlight for smaller telescopes. http://photojournal.jpl.nasa.gov/catalog/PIA20903
Full Spatial Resolution Infrared Sounding Application in the Preconvection Environment
NASA Astrophysics Data System (ADS)
Liu, C.; Liu, G.; Lin, T.
2013-12-01
Advanced infrared (IR) sounders such as the Atmospheric Infrared Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) provide atmospheric temperature and moisture profiles with high vertical resolution and high accuracy in preconvection environments. Atmospheric stability indices derived from advanced IR soundings, such as convective available potential energy (CAPE) and the lifted index (LI), can provide critical information 1-6 h before the development of severe convective storms. Three convective storms are selected to evaluate the application of AIRS full spatial resolution soundings and the derived products to providing warning information in preconvection environments. In the first case, the AIRS full spatial resolution soundings revealed locally extremely high atmospheric instability 3 h ahead of the convection on the leading edge of a frontal system, while the second case demonstrates that extremely high atmospheric instability is associated with the local development of a severe thunderstorm in the following hours. The third case is a local severe storm that occurred on 7-8 August 2010 in Zhou Qu, China, which caused more than 1400 deaths and left another 300 or more people missing. The AIRS full spatial resolution LI product shows the atmospheric instability 3.5 h before storm genesis. The CAPE and LI from AIRS full spatial resolution and operational AIRS/AMSU soundings, along with Geostationary Operational Environmental Satellite (GOES) Sounder derived product image (DPI) products, were analyzed and compared. The case studies show that full spatial resolution AIRS retrievals provide more useful warning information in preconvection environments for determining favorable locations for convective initiation (CI) than do the coarser spatial resolution operational soundings and lower spectral resolution GOES Sounder retrievals. The retrieved soundings are also tested in a regional WRF 3D-Var data assimilation system to evaluate their potential to assist the NWP model.
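For orientation, an instability index such as CAPE can be computed from a retrieved sounding by integrating parcel buoyancy over the layer where the parcel is warmer than the environment. The sketch below assumes the parcel and environmental virtual temperature profiles are already available (a toy analytic profile stands in for a real retrieval).

```python
# Hedged sketch: CAPE from given parcel and environmental virtual temperatures.
import numpy as np

def cape(z, tv_parcel, tv_env, g=9.81):
    buoyancy = g * (tv_parcel - tv_env) / tv_env
    positive = np.clip(buoyancy, 0.0, None)            # only positive area counts
    return np.trapz(positive, z)                       # J/kg

z = np.linspace(0, 12000, 60)                          # height, m
tv_env = 300 - 6.5e-3 * z                              # simple environmental lapse
tv_parcel = tv_env + 2.0 * np.exp(-((z - 5000) / 3000) ** 2)   # toy parcel excess
print("CAPE ~", round(cape(z, tv_parcel, tv_env)), "J/kg")
```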
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Sun-Ju; Lee, Chung-Uk; Koo, Jae-Rim, E-mail: sjchung@kasi.re.kr, E-mail: leecu@kasi.re.kr, E-mail: koojr@kasi.re.kr
2014-04-20
Even though the recently discovered high-magnification event MOA-2010-BLG-311 had complete coverage over its peak, confident planet detection did not happen due to extremely weak central perturbations (EWCPs, fractional deviations of ≲ 2%). For confident detection of planets in EWCP events, it is necessary to have both high cadence monitoring and high photometric accuracy better than those of current follow-up observation systems. The next-generation ground-based observation project, Korea Microlensing Telescope Network (KMTNet), satisfies these conditions. We estimate the probability of occurrence of EWCP events with fractional deviations of ≤2% in high-magnification events and the efficiency of detecting planets in the EWCP events using the KMTNet. From this study, we find that the EWCP events occur with a frequency of >50% in the case of ≲ 100 M_E planets with separations of 0.2 AU ≲ d ≲ 20 AU. We find that for main-sequence and sub-giant source stars, ≳ 1 M_E planets in EWCP events with deviations ≤2% can be detected with frequency >50% in a certain range that changes with the planet mass. However, it is difficult to detect planets in EWCP events of bright stars like giant stars because it is easy for KMTNet to be saturated around the peak of the events because of its constant exposure time. EWCP events are caused by close, intermediate, and wide planetary systems with low-mass planets and close and wide planetary systems with massive planets. Therefore, we expect that a much greater variety of planetary systems than those already detected, which are mostly intermediate planetary systems, regardless of the planet mass, will be significantly detected in the near future.
Simopoulos, Thomas T; Manchikanti, Laxmaiah; Gupta, Sanjeeva; Aydin, Steve M; Kim, Chong Hwan; Solanki, Daneshvari; Nampiaparampil, Devi E; Singh, Vijay; Staats, Peter S; Hirsch, Joshua A
2015-01-01
The sacroiliac joint is well known as a cause of low back and lower extremity pain. Prevalence estimates are 10% to 25% in patients with persistent axial low back pain without disc herniation, discogenic pain, or radiculitis based on multiple diagnostic studies and systematic reviews. However, at present there are no definitive management options for treating sacroiliac joint pain. To evaluate the diagnostic accuracy and therapeutic effectiveness of sacroiliac joint interventions. A systematic review of the diagnostic accuracy and therapeutic effectiveness of sacroiliac joint interventions. The available literature on diagnostic and therapeutic sacroiliac joint interventions was reviewed. The quality assessment criteria utilized were the Quality Appraisal of Reliability Studies (QAREL) checklist for diagnostic accuracy studies, Cochrane review criteria to assess sources of risk of bias, and Interventional Pain Management Techniques-Quality Appraisal of Reliability and Risk of Bias Assessment (IPM-QRB) criteria for randomized therapeutic trials and Interventional Pain Management Techniques-Quality Appraisal of Reliability and Risk of Bias Assessment for Nonrandomized Studies (IPM-QRBNR) for observational therapeutic assessments. The level of evidence was based on a best evidence synthesis with modified grading of qualitative evidence from Level I to Level V. Data sources included relevant literature published from 1966 through March 2015 that were identified through searches of PubMed and EMBASE, manual searches of the bibliographies of known primary and review articles, and all other sources. For the diagnostic accuracy assessment, and for the therapeutic modalities, the primary outcome measure of pain relief and improvement in functional status were utilized. A total of 11 diagnostic accuracy studies and 14 therapeutic studies were included. The evidence for diagnostic accuracy is Level II for dual diagnostic blocks with at least 70% pain relief as the criterion standard and Level III evidence for single diagnostic blocks with at least 75% pain relief as the criterion standard. The evidence for cooled radiofrequency neurotomy in managing sacroiliac joint pain is Level II to III. The evidence for conventional radiofrequency neurotomy, intraarticular steroid injections, and periarticular injections with steroids or botulinum toxin is limited: Level III or IV. The limitations of this systematic review include inconsistencies in diagnostic accuracy studies with a paucity of high quality, replicative, and consistent literature. The limitations for therapeutic interventions include variations in technique, variable diagnostic standards for inclusion criteria, and variable results. The evidence for the accuracy of diagnostic and therapeutic effectiveness of sacroiliac joint interventions varied from Level II to Level IV.
Fabrication of Single, Vertically Aligned Carbon Nanotubes in 3D Nanoscale Architectures
NASA Technical Reports Server (NTRS)
Kaul, Anupama B.; Megerian, Krikor G.; Von Allmen, Paul A.; Baron, Richard L.
2010-01-01
Plasma-enhanced chemical vapor deposition (PECVD) and high-throughput manufacturing techniques for integrating single, aligned carbon nanotubes (CNTs) into novel 3D nanoscale architectures have been developed. First, the PECVD growth technique ensures excellent alignment of the tubes, since the tubes align in the direction of the electric field in the plasma as they are growing. Second, the tubes generated with this technique are all metallic, so their chirality is predetermined, which is important for electronic applications. Third, a wafer-scale manufacturing process was developed that is high-throughput and low-cost, and yet enables the integration of just single, aligned tubes with nanoscale 3D architectures with unprecedented placement accuracy and does not rely on e-beam lithography. Such techniques should lend themselves to the integration of PECVD-grown tubes for applications ranging from interconnects, nanoelectromechanical systems (NEMS), sensors, and bioprobes to other 3D electronic devices. Chemically amplified polyhydroxystyrene-resin-based deep-UV resists were used in conjunction with excimer-laser-based (λ = 248 nm) step-and-repeat lithography to form Ni catalyst dots approximately 300 nm in diameter that nucleated single, vertically aligned tubes with high yield using dc PECVD growth. This is the first time such chemically amplified resists have been used, resulting in the nucleation of single, vertically aligned tubes. In addition, novel 3D nanoscale architectures have been created using top-down techniques that integrate single, vertically aligned tubes. These were enabled by implementing techniques that use deep-UV chemically amplified resists for small-feature-size resolution; optical lithography units that allow unprecedented control over layer-to-layer registration; and ICP (inductively coupled plasma) etching techniques that result in near-vertical, high-aspect-ratio, 3D nanoscale architectures, in conjunction with the use of materials that are structurally and chemically compatible with the high-temperature synthesis of the PECVD-grown tubes. The techniques offer a wafer-scale process solution for integrating single PECVD-grown nanotubes into novel architectures that should accelerate their integration in 3D electronics in general. NASA can directly benefit from this technology for its extreme-environment planetary missions. Current Si transistors are inherently more susceptible to high radiation, and do not tolerate extremes in temperature. These novel 3D nanoscale architectures can form the basis for NEMS switches that are inherently less susceptible to radiation or to thermal extremes.
NASA Astrophysics Data System (ADS)
Deguchi, T.; Rokugawa, S.; Matsushima, J.
2009-04-01
InSAR is an application technique of synthetic aperture radar and is now drawing attention as a methodology capable of measuring subtle surface deformation over a wide area with high spatial resolution. In this study, the authors applied a method of measuring long-term land subsidence, combining InSAR and time series analysis, to the Kanto Plain of Japan using 28 images of ENVISAT/ASAR data. In this measuring method, the value of land deformation is set as an unknown parameter and the optimal solution for the land deformation amount is derived by applying a smoothness-constrained inversion algorithm. The vicinity of the Kanto Plain started to subside in the 1910s and became exposed to extreme land subsidence, supposedly in connection with the reconstruction efforts after the Second World War and subsequent economic development activities. The main causes of the land subsidence include the intake of underground water for use in industry, agriculture, waterworks, and other fields; in the Kujukuri area, the exploitation of soluble natural gas also contributes. The Ministry of Environment reported in its documents created in fiscal 2006 that a total of 214 km2 in Tokyo and the six prefectures around the Plain had undergone a subsidence of 1 cm or more per year. From the analysis of long-term land subsidence over approximately five and a half years, from 13 January 2003 to 30 June 2008, unambiguous land deformation was detected in six areas: (i) Haneda Airport, (ii) Urayasu City, (iii) Kasukabe-Koshigaya, (iv) Southern Kanagawa, (v) Toride-Ryugasaki, and (vi) Kujukuri in Chiba Prefecture. In particular, the results for the Kujukuri area were compared with leveling data taken around the same area to verify the measuring accuracy. The comparative study revealed that the regression between the results obtained by time series analysis and those obtained by leveling can be expressed as a straight line with a gradient of approximately 1, though with a bias of about 10 mm. Moreover, the correlation coefficient between the two methods is extremely high, exceeding 0.85. In conclusion, the spatial pattern of land deformation derived by time series analysis is found to mirror, with high accuracy, the deformation captured by the leveling technique.
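The smoothness-constrained inversion mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical example rather than the authors' implementation: the design matrix A, data vector d, and regularization weight lam are placeholders, and the deformation time series x is simply found by augmenting the least-squares system with a finite-difference roughness operator.

```python
import numpy as np

def smoothness_constrained_inversion(A, d, lam):
    """Solve min ||A x - d||^2 + lam^2 ||D x||^2 for a deformation time series x.

    A   : (m, n) design matrix linking deformation increments to interferograms
    d   : (m,)   observed phase/deformation values
    lam : float  smoothness weight (larger -> smoother time series)
    """
    n = A.shape[1]
    # Second-difference operator penalising roughness of the time series.
    D = np.diff(np.eye(n), n=2, axis=0)
    A_aug = np.vstack([A, lam * D])
    d_aug = np.concatenate([d, np.zeros(D.shape[0])])
    x, *_ = np.linalg.lstsq(A_aug, d_aug, rcond=None)
    return x

# Toy usage with synthetic data (purely illustrative).
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 28))             # e.g., 28 acquisition epochs
x_true = np.cumsum(rng.normal(size=28))   # smooth-ish subsidence history
d = A @ x_true + 0.1 * rng.normal(size=40)
x_est = smoothness_constrained_inversion(A, d, lam=1.0)
```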
Stochastic analysis of 1D and 2D surface topography of x-ray mirrors
NASA Astrophysics Data System (ADS)
Tyurina, Anastasia Y.; Tyurin, Yury N.; Yashchuk, Valeriy V.
2017-08-01
The design and evaluation of the expected performance of new optical systems require sophisticated and reliable information about the surface topography of planned optical elements before they are fabricated. The problem is especially complex in the case of x-ray optics, particularly for the X-ray Surveyor under development and other missions. Modern x-ray source facilities rely on the availability of optics with unprecedented quality (surface slope accuracy < 0.1 μrad). The high angular resolution and throughput of future x-ray space observatories require hundreds of square meters of high-quality optics. The uniqueness of the optics and the limited number of proficient vendors make fabrication extremely time consuming and expensive, mostly due to the limitations in accuracy and measurement rate of the metrology used in fabrication. We discuss improvements in metrology efficiency via comprehensive statistical analysis of a compact volume of metrology data. The data are treated as stochastic, and a statistical model called the Invertible Time Invariant Linear Filter (InTILF), so far applied to 1D profiles, is now developed for 2D surface profiles to provide a compact description of the 2D data. The model captures faint patterns in the data and serves as a quality metric and as feedback to polishing processes, avoiding high-resolution metrology measurements over the entire optical surface. The modeling, implemented in our Beatmark software, allows simulating metrology data for optics made by the same vendor and technology. The forecast data are vital for setting reliable specifications for optical fabrication that are exactly adequate for the required system performance.
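The general idea of describing a measured profile with a compact, invertible linear filter can be illustrated with a simple autoregressive (AR) fit. This is only a rough sketch of that idea, not the InTILF algorithm or the Beatmark implementation; the order, profile, and units below are illustrative assumptions.

```python
import numpy as np

def fit_ar_profile(profile, order=4):
    """Fit an AR(order) model z[i] = sum_k a[k] * z[i-k-1] + e[i] to a 1D height profile.

    Returns the filter coefficients a and the residual (innovation) variance,
    which together give a compact stochastic description of the profile.
    """
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()
    rows = np.column_stack([z[order - k - 1:len(z) - k - 1] for k in range(order)])
    target = z[order:]
    a, *_ = np.linalg.lstsq(rows, target, rcond=None)
    resid = target - rows @ a
    return a, resid.var()

# Illustrative use on a synthetic "surface profile" (arbitrary units).
rng = np.random.default_rng(1)
profile = np.cumsum(rng.normal(size=2000)) * 1e-3
coeffs, innovation_var = fit_ar_profile(profile, order=4)
```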
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy is usually tested and evaluated with a set of check points that share the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, it is hard to obtain such a uniform set of check points, and therefore hard to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of spaceborne high-resolution remote sensing imagery and for expanding its scope of service. Therefore, this paper proposes a new method for testing the horizontal accuracy of orthophoto images. The method uses check points of different accuracy and reliability, drawn both from high-accuracy reference data and from field measurements. The new method solves the horizontal accuracy testing problem for orthophoto images of difficult areas and provides a basis for delivering reliable orthophoto images to users.
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-05-25
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between downward continuation altitude and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems.
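A minimal sketch of the semi-parametric idea follows, under the assumption that the systematic error is modelled by a low-order parametric trend along the flight line while the ill-posed continuation itself is stabilised by Tikhonov regularization. The operator, data, coordinates, and regularization parameter are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def semiparametric_downward_continuation(K, g_air, coords, alpha, trend_order=1):
    """Jointly estimate surface gravity g_surf and a parametric systematic error.

    K      : (m, n) downward-continuation operator (flight level -> surface)
    g_air  : (m,)   airborne gravity disturbances
    coords : (m,)   along-track coordinate used to build the trend (systematic part)
    alpha  : float  Tikhonov regularization parameter for the ill-posed continuation
    """
    m, n = K.shape
    # Parametric design matrix for the systematic (drift-like) error.
    T = np.vander(coords, N=trend_order + 1, increasing=True)
    # Augmented system: [K  T][g_surf; c] ~= g_air, with ||g_surf|| penalised.
    A_top = np.hstack([K, T])
    A_reg = np.hstack([alpha * np.eye(n), np.zeros((n, T.shape[1]))])
    A = np.vstack([A_top, A_reg])
    b = np.concatenate([g_air, np.zeros(n)])
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:n], sol[n:]   # regularized surface gravity, systematic-error coefficients
```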
Zhao, Qilong; Strykowski, Gabriel; Li, Jiancheng; Pan, Xiong; Xu, Xinyu
2017-01-01
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3–5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between downward continuation altitude and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems. PMID:28587086
NASA Astrophysics Data System (ADS)
Zhao, Q.
2017-12-01
Gravity data gaps in mountainous areas are nowadays often filled in with data from airborne gravity surveys. Errors caused by the airborne gravimeter sensors and by rough flight conditions cannot be completely eliminated, and the precision of the gravity disturbances generated by airborne gravimetry is around 3-5 mgal. A major obstacle in using airborne gravimetry is the error introduced by downward continuation. To improve the results, external high-accuracy gravity information, e.g., from surface data, can be used for high-frequency correction, while satellite information can be applied for low-frequency correction. Surface data may be used to reduce the systematic errors, while regularization methods can reduce the random errors in downward continuation. Airborne gravity surveys are sometimes conducted in mountainous areas, and the most extreme area of the world for this type of survey is the Tibetan Plateau. Since there are no high-accuracy surface gravity data available for this area, the above error minimization method involving external gravity data cannot be used. We propose a semi-parametric downward continuation method in combination with regularization to suppress the systematic error effect and the random error effect in the Tibetan Plateau, i.e., without the use of external high-accuracy gravity data. We use a Louisiana airborne gravity dataset from the USA National Oceanic and Atmospheric Administration (NOAA) to demonstrate that the new method works effectively. Furthermore, for the Tibetan Plateau we show that the numerical experiment is also successful using synthetic Earth Gravitational Model 2008 (EGM08)-derived gravity data contaminated with synthetic errors. The estimated systematic errors generated by the method are close to the simulated values. In addition, we study the relationship between downward continuation altitude and the error effect. The analysis results show that the proposed semi-parametric method combined with regularization is effective in addressing such modelling problems.
Combining quantitative and qualitative breast density measures to assess breast cancer risk.
Kerlikowske, Karla; Ma, Lin; Scott, Christopher G; Mahmoudzadeh, Amir P; Jensen, Matthew R; Sprague, Brian L; Henderson, Louise M; Pankratz, V Shane; Cummings, Steven R; Miglioretti, Diana L; Vachon, Celine M; Shepherd, John A
2017-08-22
Accurately identifying women with dense breasts (Breast Imaging Reporting and Data System [BI-RADS] heterogeneously or extremely dense) who are at high breast cancer risk will facilitate discussions of supplemental imaging and primary prevention. We examined the independent contribution of dense breast volume and BI-RADS breast density to predict invasive breast cancer and whether dense breast volume combined with Breast Cancer Surveillance Consortium (BCSC) risk model factors (age, race/ethnicity, family history of breast cancer, history of breast biopsy, and BI-RADS breast density) improves identifying women with dense breasts at high breast cancer risk. We conducted a case-control study of 1720 women with invasive cancer and 3686 control subjects. We calculated ORs and 95% CIs for the effect of BI-RADS breast density and Volpara™ automated dense breast volume on invasive cancer risk, adjusting for other BCSC risk model factors plus body mass index (BMI), and we compared C-statistics between models. We calculated BCSC 5-year breast cancer risk, incorporating the adjusted ORs associated with dense breast volume. Compared with women with BI-RADS scattered fibroglandular densities and second-quartile dense breast volume, women with BI-RADS extremely dense breasts and third- or fourth-quartile dense breast volume (75% of women with extremely dense breasts) had high breast cancer risk (OR 2.87, 95% CI 1.84-4.47, and OR 2.56, 95% CI 1.87-3.52, respectively), whereas women with extremely dense breasts and first- or second-quartile dense breast volume were not at significantly increased breast cancer risk (OR 1.53, 95% CI 0.75-3.09, and OR 1.50, 95% CI 0.82-2.73, respectively). Adding continuous dense breast volume to a model with BCSC risk model factors and BMI increased discriminatory accuracy compared with a model with only BCSC risk model factors (C-statistic 0.639, 95% CI 0.623-0.654, vs. C-statistic 0.614, 95% CI 0.598-0.630, respectively; P < 0.001). Women with dense breasts and fourth-quartile dense breast volume had a BCSC 5-year risk of 2.5%, whereas women with dense breasts and first-quartile dense breast volume had a 5-year risk ≤ 1.8%. Risk models with automated dense breast volume combined with BI-RADS breast density may better identify women with dense breasts at high breast cancer risk than risk models with either measure alone.
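The model-comparison step described above, adding a continuous density measure to a baseline risk model and comparing discriminatory accuracy, can be sketched as follows. The variable names and data are hypothetical, and this is not the BCSC model itself; in practice the C-statistics would be compared on held-out data or with a formal test rather than on the training sample.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def c_statistic(X, y):
    """Fit a logistic model and return its C-statistic (area under the ROC curve)."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

# Hypothetical case-control data: columns stand in for baseline risk factors plus BMI,
# and 'dense_volume' stands in for the continuous automated dense-breast-volume measure.
rng = np.random.default_rng(0)
n = 2000
baseline = rng.normal(size=(n, 5))
dense_volume = rng.normal(size=(n, 1))
y = rng.binomial(1, 1 / (1 + np.exp(-(baseline[:, 0] + 0.5 * dense_volume[:, 0]))))

auc_base = c_statistic(baseline, y)
auc_full = c_statistic(np.hstack([baseline, dense_volume]), y)
print(f"C-statistic baseline: {auc_base:.3f}, with dense volume: {auc_full:.3f}")
```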
Nordberg, Maj-Liz; Evertson, Joakim
2003-12-01
Vegetation cover-change analysis requires selection of an appropriate set of variables for measuring and characterizing change. Satellite sensors like Landsat TM offer the advantage of wide spatial coverage while providing land-cover information, which facilitates the monitoring of surface processes. This study discusses change detection in mountainous dry-heath communities in Jämtland County, Sweden, using satellite data. Landsat-5 TM and Landsat-7 ETM+ data from 1984, 1994 and 2000, respectively, were used. Different change detection methods were compared after the images had been radiometrically normalized, georeferenced and corrected for topographic effects. For detection of the change/no-change classes, the NDVI image differencing method was the most accurate, with an overall accuracy of 94% (K = 0.87). Additional change information was extracted from an alternative method, NDVI regression analysis, and vegetation change in three categories within mountainous dry-heath communities was detected. By applying a fuzzy set thresholding technique the overall accuracy was improved from 65% (K = 0.45) to 74% (K = 0.59). The methods used generate a change product showing the location of changed areas in sensitive mountainous heath communities, and they also indicate the extent of the change (high, moderate and unchanged vegetation cover decrease). A total of 17% of the dry and extremely dry-heath vegetation within the study area changed between 1984 and 2000. On average, 4% of the studied heath communities have been classified as high change, i.e. have experienced "high vegetation cover decrease" during the period. The results show that the low alpine zone of the southern part of the study area has the highest amount of "high vegetation cover decrease". The results also show that the main change occurred between 1994 and 2000.
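The NDVI image-differencing change detection used above can be sketched in a few lines. The band arrays, the threshold, and the scene dates are placeholders, and a real workflow would first apply the radiometric normalization, georeferencing, and topographic correction described in the abstract.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def change_no_change(nir_t1, red_t1, nir_t2, red_t2, k=2.0):
    """Binary change map from NDVI differencing, thresholded at k standard deviations."""
    diff = ndvi(nir_t2, red_t2) - ndvi(nir_t1, red_t1)
    return np.abs(diff - diff.mean()) > k * diff.std()   # True = change, False = no change

# Illustrative synthetic bands (e.g., two Landsat acquisitions of the same area).
rng = np.random.default_rng(2)
nir_1984, red_1984 = rng.uniform(0.3, 0.6, (100, 100)), rng.uniform(0.05, 0.15, (100, 100))
nir_2000, red_2000 = nir_1984 * 0.9, red_1984 * 1.1       # simulated vegetation decrease
change_map = change_no_change(nir_1984, red_1984, nir_2000, red_2000)
```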
NASA Astrophysics Data System (ADS)
Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.
2016-12-01
Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.
NASA Astrophysics Data System (ADS)
Shah, Abhay G.; Friedman, John L.; Whiting, Bernard F.
2014-03-01
We present a novel analytic extraction of high-order post-Newtonian (pN) parameters that govern quasicircular binary systems. Coefficients in the pN expansion of the energy of a binary system can be found from corresponding coefficients in an extreme-mass-ratio inspiral computation of the change ΔU in the redshift factor of a circular orbit at fixed angular velocity. Remarkably, by computing this essentially gauge-invariant quantity to accuracy greater than one part in 10^225, and by assuming that a subset of pN coefficients are rational numbers or products of π and a rational, we obtain the exact analytic coefficients. We find the previously unexpected result that the post-Newtonian expansions of ΔU (and of the change ΔΩ in the angular velocity at fixed redshift factor) have conservative terms at half-integral pN order beginning with a 5.5 pN term. This implies the existence of a corresponding 5.5 pN term in the expansion of the energy of a binary system. Coefficients in the pN series that do not belong to the subset just described are obtained to accuracy better than 1 part in 10^(265-23n) at nth pN order. We work in a radiation gauge, finding the radiative part of the metric perturbation from the gauge-invariant Weyl scalar ψ0 via a Hertz potential. We use mode-sum renormalization, and find high-order renormalization coefficients by matching a series in L = ℓ + 1/2 to the large-L behavior of the expression for ΔU. The nonradiative parts of the perturbed metric associated with changes in mass and angular momentum are calculated in the Schwarzschild gauge.
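As a schematic illustration of the structure described above (the coefficients and normalization here are generic placeholders, not values from the paper), the expansion of the redshift-factor change contains integer-order pN terms plus the half-integral conservative terms beginning at 5.5 pN, i.e., at the x^(13/2) term:

```latex
\Delta U(x) \;\simeq\; \sum_{n \ge 0} \alpha_{n}\, x^{\,n+1}
  \;+\; \alpha_{11/2}\, x^{\,13/2} \;+\; \alpha_{13/2}\, x^{\,15/2} \;+\; \cdots ,
\qquad x \equiv (M\Omega)^{2/3},
```

where the nth pN order corresponds to the x^(n+1) term and Ω is the orbital angular velocity.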
Evaluation of downscaled, gridded climate data for the conterminous United States
Behnke, Robert J.; Vavrus, Stephen J.; Allstadt, Andrew; Albright, Thomas P.; Thogmartin, Wayne E.; Radeloff, Volker C.
2016-01-01
Weather and climate affect many ecological processes, making spatially continuous yet fine-resolution weather data desirable for ecological research and predictions. Numerous downscaled weather data sets exist, but little attempt has been made to evaluate them systematically. Here we address this shortcoming by focusing on four major questions: (1) How accurate are downscaled, gridded climate data sets in terms of temperature and precipitation estimates?, (2) Are there significant regional differences in accuracy among data sets?, (3) How accurate are their mean values compared with extremes?, and (4) Does their accuracy depend on spatial resolution? We compared eight widely used downscaled data sets that provide gridded daily weather data for recent decades across the United States. We found considerable differences among data sets and between downscaled and weather station data. Temperature is represented more accurately than precipitation, and climate averages are more accurate than weather extremes. The data set exhibiting the best agreement with station data varies among ecoregions. Surprisingly, the accuracy of the data sets does not depend on spatial resolution. Although some inherent differences among data sets and weather station data are to be expected, our findings highlight how much different interpolation methods affect downscaled weather data, even for local comparisons with nearby weather stations located inside a grid cell. More broadly, our results highlight the need for careful consideration among different available data sets in terms of which variables they describe best, where they perform best, and their resolution, when selecting a downscaled weather data set for a given ecological application.
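A minimal sketch of the kind of station-versus-grid comparison described here follows; the arrays are placeholders, and the study's actual metrics and station-matching rules may differ.

```python
import numpy as np

def evaluate_gridded_product(grid_values, station_values):
    """Compare gridded daily values extracted at station locations with station observations."""
    grid = np.asarray(grid_values, dtype=float)
    obs = np.asarray(station_values, dtype=float)
    bias = np.mean(grid - obs)
    rmse = np.sqrt(np.mean((grid - obs) ** 2))
    corr = np.corrcoef(grid, obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "correlation": corr}

# Hypothetical daily maximum temperatures for one station and the enclosing grid cell.
station_tmax = np.array([31.2, 29.8, 33.4, 35.1, 30.0])
gridded_tmax = np.array([30.5, 30.1, 32.8, 34.0, 30.9])
print(evaluate_gridded_product(gridded_tmax, station_tmax))
```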
NASA Technical Reports Server (NTRS)
Hashmall, J.; Davis, W.; Harman, R.
1993-01-01
The science mission of the Extreme Ultraviolet Explorer (EUVE) requires attitude solutions with uncertainties of 27, 16.7, and 16.7 arcseconds (3 sigma) around the roll, pitch, and yaw axes, respectively. The primary input to the attitude determination process is provided by two NASA standard fixed-head star trackers (FHSTs) and a Teledyne dry rotor inertial reference unit (DRIRU) 2. The attitude determination requirements approach the limits attainable with the FHSTs and DRIRU. The Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC) designed and executed calibration procedures that far exceeded the extent and the data volume of any other FDF-supported mission. The techniques and results of this attempt to obtain attitude accuracies at the limit of sensor capability, and the analysis of the factors that limit attitude accuracy, are the primary subjects of this paper. The success of the calibration effort is judged by the resulting measurement residuals and by comparisons between ground- and onboard-determined attitudes. The FHST star position residuals have been reduced to less than 4 arcsec per axis, a value that appears to be limited by the sensor capabilities. The FDF ground system uses a batch least-squares estimator to determine attitude. The EUVE onboard computer (OBC) uses an extended Kalman filter. Currently, there are systematic differences between the two attitude solutions that occasionally exceed the mission requirements for 3 sigma attitude uncertainty. Attempts to understand and reduce these differences are continuing.
Raaben, Marco; Holtslag, Herman R; Leenen, Luke P H; Augustine, Robin; Blokhuis, Taco J
2018-01-01
Individuals with lower extremity fractures are often instructed on how much weight to bear on the affected extremity. Previous studies have shown limited therapy compliance in weight bearing during rehabilitation. In this study we investigated the effect of real-time visual biofeedback on weight bearing in individuals with lower extremity fractures in two conditions: full weight bearing and touch-down weight bearing. Eleven participants with full weight bearing and 12 participants with touch-down weight bearing after lower extremity fractures were measured with an ambulatory biofeedback system. The participants first walked 15 m with the biofeedback system used only to register the weight bearing. The same protocol was then repeated with real-time visual feedback during weight bearing, so that the participants could adapt their loading to the desired level and improve therapy compliance. In participants with full weight bearing, real-time visual biofeedback resulted in a significant increase in loading from 50.9±7.51% bodyweight (BW) without feedback to 63.2±6.74%BW with feedback (P=0.0016). In participants with touch-down weight bearing, the exerted lower extremity load decreased from 16.7±9.77 kg without feedback to 10.27±4.56 kg with feedback (P=0.0718). More importantly, the variance between individual steps significantly decreased after feedback (P=0.018). Ambulatory monitoring of weight bearing after lower extremity fractures showed that therapy compliance is low, both in full and touch-down weight bearing. Real-time visual biofeedback resulted in significantly higher peak loads in full weight bearing and increased accuracy of individual steps in touch-down weight bearing. Real-time visual biofeedback therefore results in improved therapy compliance after lower extremity fractures. Copyright © 2017 Elsevier B.V. All rights reserved.
Vorovencii, Iosif
2017-09-26
The desertification risk affects around 40% of the agricultural land in various regions of Romania. The purpose of this study is to analyse the risk of desertification in the south-west of Romania over the period 1984-2011 using the change vector analysis (CVA) technique and Landsat thematic mapper (TM) satellite images. CVA was applied to combinations of normalised difference vegetation index (NDVI)-albedo, NDVI-bare soil index (BI) and tasselled cap greenness (TCG)-tasselled cap brightness (TCB). The combination NDVI-albedo proved to be the best in assessing desertification risk, with an overall accuracy of 87.67%, identifying a desertification risk over 25.16% of the studied area. The classification of the maps was performed for the following classes: desertification risk, re-growing and persistence. Four degrees of desertification risk and re-growing were used: low, medium, high and extreme. Using the combination NDVI-albedo, 0.53% of the analysed surface was assessed as having an extreme degree of desertification risk, 3.93% a high degree, 8.72% a medium degree and 11.98% a low degree. The driving forces behind the risk of desertification include both anthropogenic and climatic causes. The anthropogenic causes include the destruction of the irrigation system, deforestation, the destruction of the forest shelterbelts, the fragmentation of agricultural land and its inefficient management. Climatic causes refer to increasing temperatures, frequent and prolonged droughts and a decline in the amount of precipitation.
Assessment of the accuracy and stability of frameless gamma knife radiosurgery.
Chung, Hyun-Tai; Park, Woo-Yoon; Kim, Tae Hoon; Kim, Yong Kyun; Chun, Kook Jin
2018-06-03
The aim of this study was to assess the accuracy and stability of frameless gamma knife radiosurgery (GKRS). The accuracies of the radiation isocenter and patient couch movement were evaluated by film dosimetry with a half-year cycle. Radiation isocenter assessment with a diode detector and cone-beam computed tomography (CBCT) image accuracy tests were performed daily with a vendor-provided tool for one and a half years after installation. CBCT image quality was examined twice a month with a phantom. The accuracy of image coregistration using CBCT images was studied using magnetic resonance (MR) and computed tomography (CT) images of another phantom. The overall positional accuracy was measured in whole procedure tests using film dosimetry with an anthropomorphic phantom. The positional errors of the radiation isocenter at the center and at an extreme position were both less than 0.1 mm. The three-dimensional deviation of the CBCT coordinate system was stable for one and a half years (mean 0.04 ± 0.02 mm). Image coregistration revealed a difference of 0.2 ± 0.1 mm between CT and CBCT images and a deviation of 0.4 ± 0.2 mm between MR and CBCT images. The whole procedure test of the positional accuracy of the mask-based irradiation revealed an accuracy of 0.5 ± 0.6 mm. The radiation isocenter accuracy, patient couch movement accuracy, and Gamma Knife Icon CBCT accuracy were all approximately 0.1 mm and were stable for one and a half years. The coordinate system assigned to MR images through coregistration was more accurate than the system defined by fiducial markers. Possible patient motion during irradiation should be considered when evaluating the overall accuracy of frameless GKRS. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Yang, Fanlin; Zhao, Chunxia; Zhang, Kai; Feng, Chengkai; Ma, Yue
2017-07-01
Acoustic seafloor classification with multibeam backscatter measurements is an attractive approach for mapping seafloor properties over a large area. However, artifacts in the multibeam backscatter measurements prevent accurate characterization of the seafloor. In particular, the backscatter level is extremely strong and highly variable in the near-nadir region due to the specular echo phenomenon. Consequently, striped artifacts emerge in the backscatter image, which can degrade the classification accuracy. This study focuses on the striped artifacts in multibeam backscatter images. To this end, a calibration algorithm based on equal mean-variance fitting is developed. By fitting the local shape of the angular response curve, the striped artifacts are compressed and shifted according to the relation between the mean and variance in the near-nadir and off-nadir regions. The algorithm utilizes the measured data of the near-nadir region and retains the basic shape of the response curve. The experimental results verify the high performance of the proposed method.
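The core idea of matching the near-nadir statistics to those of an off-nadir reference region can be sketched as a simple moment-matching adjustment. This is only an illustration of the principle, not the published algorithm; the angle ranges and arrays are placeholders, and the actual method additionally fits the local shape of the angular response curve.

```python
import numpy as np

def match_mean_variance(near_nadir_db, off_nadir_db):
    """Rescale near-nadir backscatter so its mean and variance equal those of the off-nadir reference."""
    near = np.asarray(near_nadir_db, dtype=float)
    ref = np.asarray(off_nadir_db, dtype=float)
    scale = ref.std() / (near.std() + 1e-12)
    return (near - near.mean()) * scale + ref.mean()

# Hypothetical backscatter levels (dB): near-nadir is stronger and more variable.
rng = np.random.default_rng(3)
near_nadir = rng.normal(-10.0, 4.0, size=500)
off_nadir = rng.normal(-25.0, 1.5, size=500)
corrected = match_mean_variance(near_nadir, off_nadir)
```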
A high-precision, distributed geodetic strainmeter based on dual coaxial cable Bragg gratings
NASA Astrophysics Data System (ADS)
Fu, J.; Wei, T.; Wei, M.; Shen, Y.
2014-12-01
Observations of surface deformation are essential for understanding a wide range of geophysical problems, including earthquakes, volcanoes, landslides, and glaciers. Current geodetic technologies, such as GPS, InSAR, and borehole and laser strainmeters, are costly and limited in their temporal or spatial resolution. Here we present a new type of strainmeter based on coaxial cable Bragg grating (CCBG) sensing technology that provides high-precision, distributed strain measurements at a moderate cost. The coaxial-cable-based strainmeter is designed to cover a long distance (~km) under harsh environmental conditions such as extreme temperatures. To minimize environmental noise, two CCBGs are introduced into the geodetic strainmeter: one is used to measure the strain applied to it, and the other acts as a reference that detects only the environmental noise. The environmental noise is removed by combining the signals from the strained CCBG and the reference CCBG in a frequency mixer. The test results show that the geodetic strainmeter with dual CCBGs has micro-strain accuracy in the lab.
Trimodal spectra for high discrimination of benign and malignant prostate tissue
NASA Astrophysics Data System (ADS)
Al Salhi, Mohamad; Masilamani, Vadivel; Trinka, Vijmasi; Rabah, Danny; Al Turki, Mohammed R.
2011-02-01
High false-positive rates and overdiagnosis are a major problem in the management of prostate cancer. A non-invasive or minimally invasive technique to accurately distinguish malignant prostate cancers from benign tumors would be extremely helpful in overcoming this problem. In this paper, we used three different fluorescence spectroscopy techniques, viz., the Fluorescence Emission Spectrum (FES), Stokes' Shift Spectrum (SSS) and Reflectance Spectrum (RS), to discriminate benign prostate tumor tissues (N=12) and malignant prostate cancer tissues (N=8). These fluorescence techniques were used to determine the relative concentrations of naturally occurring biomolecules such as tryptophan, elastin, NADH and flavin, which are found to be out of proportion in cancer tissues. Our studies show that, by combining all three techniques, benign and malignant prostate tissues could be classified with accuracy greater than 90%. This preliminary report is based on in vitro spectroscopy analysis. However, by employing fluorescence endoscopy techniques, this can be extended to in vivo analysis as well. This technique has the potential to identify malignant prostate tissues without surgery.
Analysis of xRAGE and flag high explosive burn models with PBX 9404 cylinder tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrier, Danielle; Andersen, Kyle Richard
High explosives are energetic materials that release their chemical energy in a short interval of time. They are able to generate extreme heat and pressure through a shock-driven chemical decomposition reaction, which makes them valuable tools that must be understood. This study investigated the accuracy and performance of two Los Alamos National Laboratory hydrodynamic codes, which are used to determine the behavior of explosives within a variety of systems: xRAGE, which utilizes an Eulerian mesh, and FLAG, which utilizes a Lagrangian mesh. Various programmed and reactive burn models within both codes were tested using a copper cylinder expansion test. The test was based on a recent experimental setup which contained the plastic bonded explosive PBX 9404. Detonation velocity versus time curves for this explosive were obtained using Photon Doppler Velocimetry (PDV). The modeled results from each of the burn models tested were then compared to one another and to the experimental results. This study validate…
Tackling The Dragon: Investigating Lensed Galaxy Structure
NASA Astrophysics Data System (ADS)
Fortenberry, Alexander; Livermore, Rachael
2018-01-01
Galaxies have been seen to show a rapid decrease in star formation beginning at a redshift of around 1-2 up to the present day. To understand the processes underpinning this change, we need to observe the inner structure of galaxies and understand where and how the stellar mass builds up. However, at high redshifts our observable resolution is limited, which hinders the accuracy of the data. The lack of resolution at high redshift can be counteracted with the use of gravitational lensing. The magnification provided by a gravitational lens between us and the galaxies in question enables us to see extreme detail within the galaxies. To begin fine-tuning this process, we used Hubble data of Abell 370, a galaxy cluster, which lenses a galaxy known as “The Dragon” at z=0.725. With the increased detail provided by the gravitational lens, we provide a detailed analysis of the galaxy’s spatially resolved star formation rate, stellar age, and masses.
Performance Evaluation of a UWB-RFID System for Potential Space Applications
NASA Technical Reports Server (NTRS)
Phan, Chan T.; Arndt, D.; Ngo, P.; Gross, J.; Ni, Jianjun; Rafford, Melinda
2006-01-01
This talk presents a brief overview of the ultra-wideband (UWB) RFID system with emphasis on the performance evaluation of a commercially available UWB-RFID system. There are many RFID systems available today, but many provide just basic identification for auditing and inventory tracking. For applications that require high precision real time tracking, UWB technology has been shown to be a viable solution. The use of extremely short bursts of RF pulses offers high immunity to interference from other RF systems, precise tracking due to sub-nanosecond time resolution, and robust performance in multipath environments. The UWB-RFID system Sapphire DART (Digital Active RFID & Tracking) will be introduced in this talk. Laboratory testing using Sapphire DART is performed to evaluate its capability such as coverage area, accuracy, ease of operation, and robustness. Performance evaluation of this system in an operational environment (a receiving warehouse) for inventory tracking is also conducted. Concepts of using the UWB-RFID technology to track astronauts and assets are being proposed for space exploration.
Accuracy, reliability, and timing of visual evaluations of decay in fresh-cut lettuce
Hayes, Ryan J.
2018-01-01
Visual assessments are used for evaluating the quality of food products, such as fresh-cut lettuce packaged in bags with modified atmosphere. We have compared the accuracy and reliability of visual evaluations of decay on fresh-cut lettuce performed by experienced and inexperienced raters. In addition, we have analyzed decay data from over 4.5 thousand bags to determine the optimum timing for evaluations to detect differences among accessions. Lin’s concordance coefficient (ρc), which takes into consideration both the closeness of the data and the conformance to the identity line, showed high repeatability (intra-rater reliability, ρc = 0.97), reproducibility (inter-rater reliability, ρc = 0.92), and accuracy (ρc = 0.96) for experienced raters. Inexperienced raters did not perform as well and their ratings showed decreased repeatability (ρc = 0.93), but an even larger reduction in reproducibility (ρc = 0.80) and accuracy (ρc = 0.90). We detected that 5.3% of ratings were outside of the 95% limits of agreement. These under- or overestimates were predominantly found for bags with intermediate levels of decay, which corresponds to the middle of the rating scale. This occurs because intermediate amounts of decay are more difficult to discriminate than extremes. The frequencies of aberrant ratings for experienced raters ranged from 0.6% to 4.4% (mean = 2.1%); for inexperienced raters the frequencies were substantially higher, ranging from 6.1% to 15.6% (mean = 9.4%). Therefore, we recommend that new raters receive training that includes practical examples in this range of decay, use of standard area diagrams, and continuing interaction with experienced raters (consultation during actual rating). Very high agreement among experienced raters indicates that visual ratings can be successfully used for evaluations of decay, until a more objective, rapid, and affordable method is developed. We recommend evaluating samples at multiple time points until 42 days after processing (about 80% decay on average) and then combining these individual ratings into the area under the decay progress stairs (AUDePS) score. Applying this approach, experienced evaluators can accurately detect differences among lettuce accessions and identify lettuce cultivars with reduced decay. PMID:29664945
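Two quantities used above, Lin's concordance correlation coefficient and the area under the decay progress stairs (AUDePS), can be computed as follows. This is a generic sketch: the exact AUDePS normalization used in the study may differ, and the sample ratings are made up.

```python
import numpy as np

def lins_concordance(x, y):
    """Lin's concordance correlation coefficient between two sets of ratings."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

def audeps(days, decay_percent):
    """Area under the decay progress stairs: step-wise sum of decay over the evaluation period."""
    days = np.asarray(days, float)
    decay = np.asarray(decay_percent, float)
    return float(np.sum(decay[:-1] * np.diff(days)))

# Hypothetical repeated ratings of the same bags by two raters, and one decay time course.
rater_a = [10, 30, 55, 80, 95]
rater_b = [12, 28, 60, 78, 97]
print(f"Lin's rho_c = {lins_concordance(rater_a, rater_b):.3f}")
print(f"AUDePS = {audeps([14, 21, 28, 35, 42], [5, 20, 45, 70, 85]):.1f} %·days")
```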
A Novel Method of High Accuracy, Wavefront Phase and Amplitude Correction for Coronagraphy
NASA Technical Reports Server (NTRS)
Bowers, Charles W.; Woodgate, Bruce E.; Lyon, Richard G.
2003-01-01
Detection of extra-solar, and especially terrestrial-like, planets using coronagraphy requires an extremely high level of wavefront correction. For example, the study of Woodruff et al. (2002) has shown that phase uniformity of order 10^-4 λ (rms) must be achieved over the critical range of spatial frequencies to produce the ~10^10 contrast needed for the Terrestrial Planet Finder (TPF) mission. Correction of wavefront phase errors to this level may be accomplished by using a very high precision deformable mirror (DM). However, not only phase but also amplitude uniformity of the same scale (~10^-4) and over the same spatial frequency range must be simultaneously obtained to remove all residual speckle in the image plane. We present a design for producing simultaneous wavefront phase and amplitude uniformity to high levels from an input wavefront of lower quality. The design uses a dual Michelson interferometer arrangement incorporating two DMs and a single, fixed mirror (all at pupils) and two beamsplitters: one with unequal (asymmetric) beam splitting and one with symmetric beam splitting. This design allows high precision correction of both phase and amplitude using DMs with relatively coarse steps and permits a simple correction algorithm.
Trace element analysis by EPMA in geosciences: detection limit, precision and accuracy
NASA Astrophysics Data System (ADS)
Batanova, V. G.; Sobolev, A. V.; Magnin, V.
2018-01-01
Use of the electron probe microanalyser (EPMA) for trace element analysis has increased over the last decade, mainly because of improved stability of spectrometers and the electron column when operated at high probe current; development of new large-area crystal monochromators and ultra-high count rate spectrometers; full integration of energy-dispersive / wavelength-dispersive X-ray spectrometry (EDS/WDS) signals; and the development of powerful software packages. For phases that are stable under a dense electron beam, the detection limit and precision can be decreased to the ppm level by using high acceleration voltage and beam current combined with long counting time. Data on 10 elements (Na, Al, P, Ca, Ti, Cr, Mn, Co, Ni, Zn) in olivine obtained on a JEOL JXA-8230 microprobe with tungsten filament show that the detection limit decreases in inverse proportion to the square root of counting time and probe current. For all elements equal to or heavier than phosphorus (Z = 15), the detection limit decreases with increasing accelerating voltage. The analytical precision for minor and trace elements analysed in olivine at 25 kV accelerating voltage and 900 nA beam current is 4 - 18 ppm (2 standard deviations of repeated measurements of the olivine reference sample) and is similar to the detection limits of the corresponding elements. To analyse trace elements accurately requires careful estimation of background, and consideration of sample damage under the beam and secondary fluorescence from phase boundaries. The development and use of matrix reference samples with well-characterised trace elements of interest is important for monitoring and improving accuracy. An evaluation of the accuracy of trace element analyses in olivine has been made by comparing EPMA data for new reference samples with data obtained by different in-situ and bulk analytical methods in six different laboratories worldwide. For all elements, the measured concentrations in the olivine reference sample were found to be identical (within internal precision) to reference values, suggesting that the achieved precision and accuracy are similar. The spatial resolution of EPMA in a silicate matrix, even at very extreme conditions (accelerating voltage 25 kV), does not exceed 7 - 8 μm and thus is still better than laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) or secondary ion mass spectrometry (SIMS) of similar precision. This makes the electron microprobe an indispensable method with applications in experimental petrology, geochemistry and cosmochemistry.
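The counting-statistics scaling quoted above can be written explicitly. This is the standard background-limited relation, stated here as an illustration rather than the exact expression used by the authors: the background count N_bkg and the sensitivity S (counts per unit concentration) both grow in proportion to the beam current and counting time, so the detection limit falls as their inverse square root:

```latex
\mathrm{DL} \;\propto\; \frac{\sqrt{N_{\mathrm{bkg}}}}{S}
\;\propto\; \frac{1}{\sqrt{I_{\mathrm{probe}}\, t_{\mathrm{count}}}} .
```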
Xu, Xu; McGorry, Raymond W
2015-07-01
The Kinect™ sensor released by Microsoft is a low-cost, portable, and marker-less motion tracking system for the video game industry. Since the first generation Kinect sensor was released in 2010, many studies have been conducted to examine the validity of this sensor when used to measure body movement in different research areas. In 2014, Microsoft released the computer-used second generation Kinect sensor with a better resolution for the depth sensor. However, very few studies have performed a direct comparison between all the Kinect sensor-identified joint center locations and their corresponding motion tracking system-identified counterparts, the result of which may provide some insight into the error of the Kinect-identified segment length, joint angles, as well as the feasibility of adapting inverse dynamics to Kinect-identified joint centers. The purpose of the current study is to first propose a method to align the coordinate system of the Kinect sensor with respect to the global coordinate system of a motion tracking system, and then to examine the accuracy of the Kinect sensor-identified coordinates of joint locations during 8 standing and 8 sitting postures of daily activities. The results indicate the proposed alignment method can effectively align the Kinect sensor with respect to the motion tracking system. The accuracy level of the Kinect-identified joint center location is posture-dependent and joint-dependent. For upright standing posture, the average error across all the participants and all Kinect-identified joint centers is 76 mm and 87 mm for the first and second generation Kinect sensor, respectively. In general, standing postures can be identified with better accuracy than sitting postures, and the identification accuracy of the joints of the upper extremities is better than for the lower extremities. This result may provide some information regarding the feasibility of using the Kinect sensor in future studies. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
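The coordinate-system alignment step described above (registering the Kinect frame to the motion tracking system's global frame) is commonly posed as a rigid Procrustes/Kabsch problem. The sketch below is a generic illustration under that assumption, not the authors' specific procedure, and the point arrays are placeholders.

```python
import numpy as np

def rigid_align(src, dst):
    """Find rotation R and translation t minimizing ||R @ src_i + t - dst_i|| (Kabsch algorithm).

    src, dst : (n, 3) matched 3D points (e.g., Kinect joint centers vs. mocap markers).
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Hypothetical calibration points seen by both systems.
rng = np.random.default_rng(4)
mocap_pts = rng.uniform(-1, 1, size=(10, 3))
theta = np.deg2rad(10)
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
true_t = np.array([0.1, -0.2, 0.05])
kinect_pts = (mocap_pts - true_t) @ true_R        # the same points in the Kinect's own frame
R, t = rigid_align(kinect_pts, mocap_pts)
aligned = kinect_pts @ R.T + t                    # Kinect joints expressed in the mocap frame
```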
NASA Astrophysics Data System (ADS)
Goehring, E. C.; Carlsen, W.; Larsen, J.; Simms, E.; Smith, M.
2007-12-01
From Local to EXtreme Environments (FLEXE) is an innovative new project of the GLOBE Program that involves middle and high school students in systematic, facilitated analyses and comparisons of real environmental data. Through FLEXE, students collect and analyze data from various sources, including the multi-year GLOBE database, deep-sea scientific research projects, and direct measurements of the local environment collected by students using GLOBE sampling protocols. Initial FLEXE materials and training have focused on student understanding of energy transfer through components of the Earth system, including a comparison of how local environmental conditions differ from those found at deep-sea hydrothermal vent communities. While the importance of data acquisition, accuracy and replication is emphasized, FLEXE is also uniquely structured to deepen students' understanding of multiple aspects of the process and nature of science, including written communication of results and on-line peer review. Analyses of data are facilitated through structured, web-based interactions and culminating activities with at-sea scientists through an online forum. The project benefits from the involvement of a professional evaluator, and as the model is tested and refined, it may serve as a template for the inclusion of additional "extreme" earth systems. FLEXE is a partnership of the international GLOBE web- based education program and the NSF Ridge 2000 mid-ocean ridge and hydrothermal vent research program, and includes the expertise of the Center for Science and the Schools at Penn State University. International collaborators also include the InterRidge and ChEss international research programs.
ERIC Educational Resources Information Center
Runnels, Judith
2016-01-01
Since its release in 1979 the TOEIC® (Test of English for International Communication) has been consistently and widely used by educational institutions and companies of Japan despite criticisms that it provides little useable information about language ability. In order to both reduce the extreme focus on and also aid with the practical…
NASA Technical Reports Server (NTRS)
Munasinghe, L.; Jun, T.; Rind, D. H.
2012-01-01
Consensus on global warming is the result of multiple and varying lines of evidence, and one key ramification is the increase in frequency of extreme climate events including record high temperatures. Here we develop a metric, called "record equivalent draws" (RED), based on record high (low) temperature observations, and show that changes in RED approximate changes in the likelihood of extreme high (low) temperatures. Since we also show that this metric is independent of the specifics of the underlying temperature distributions, RED estimates can be aggregated across different climates to provide a genuinely global assessment of climate change. Using data on monthly average temperatures across the global landmass we find that the frequency of extreme high temperatures increased 10-fold between the first three decades of the last century (1900-1929) and the most recent decade (1999-2008). A more disaggregated analysis shows that the increase in frequency of extreme high temperatures is greater in the tropics than in higher latitudes, a pattern that is not indicated by changes in mean temperature. Our RED estimates also suggest concurrent increases in the frequency of both extreme high and extreme low temperatures during 2002-2008, a period when we observe a plateauing of global mean temperature. Using daily extreme temperature observations, we find that the frequency of extreme high temperatures is greater in the daily minimum temperature time-series compared to the daily maximum temperature time-series. There is no such observable difference in the frequency of extreme low temperatures between the daily minimum and daily maximum.
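The record-based idea can be illustrated by counting record highs in a temperature series and comparing the count with the i.i.d. expectation (the expected number of records in n independent, identically distributed draws is the harmonic number H_n). The series below is synthetic, and this sketch is not the authors' RED estimator.

```python
import numpy as np

def count_records(series):
    """Number of record-high values in a time series (the first value counts as a record)."""
    running_max = -np.inf
    records = 0
    for value in series:
        if value > running_max:
            records += 1
            running_max = value
    return records

def expected_records_iid(n):
    """Expected number of records in n i.i.d. draws: the harmonic number H_n."""
    return float(np.sum(1.0 / np.arange(1, n + 1)))

# Synthetic monthly-mean temperature anomalies with a warming trend.
rng = np.random.default_rng(5)
n_years = 100
temps = rng.normal(0.0, 1.0, n_years) + 0.02 * np.arange(n_years)
print(count_records(temps), "records observed vs.",
      round(expected_records_iid(n_years), 2), "expected if stationary")
```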
Infrasound ray tracing models for real events
NASA Astrophysics Data System (ADS)
Averbuch, Gil; Applbaum, David; Price, Colin; Ben Horin, Yochai
2015-04-01
Ray tracing models for infrasound propagation require two atmospheric parameters: the speed of sound profile and the wind profile. The use of global atmospheric models for the speed of sound and wind profiles raises a fundamental question: can these models provide accurate results for modeling real events that have been detected by infrasound arrays? Moreover, can these models provide accurate results for events that occurred during extreme weather conditions? We use 2D and 3D ray tracing models based on a modified Hamiltonian for a moving medium. Radiosonde measurements enable us to update the first 20 km of both the speed of sound and wind profiles. The 2009 and 2011 Sayarim calibration experiments in Israel served as a test for the models. In order to answer the question regarding the accuracy of the model during extreme weather conditions, we simulate infrasound sprite signals that were detected by the infrasound array in Mt. Meron, Israel. The results from modeling the Sayarim experiments provided sufficient insight to conclude that ray tracing modeling can give accurate results for real events that occurred during fair weather conditions. We conclude that the time delay in the model of the 2009 experiment is due to a lack of accuracy in the wind and speed of sound profiles; perturbed profiles provide accurate results. Earlier arrivals in 2011 are a result of the assumption that the earth is flat (no topography) and of the use of local radiosonde measurements for the entire model. Using local radiosonde measurements only for part of the model and neglecting them in other parts prevents the early arrivals. We were able to determine which sprite was detected by the infrasound array, as well as to provide a height range for the sprite, or at least for its most energetic part. Even though atmospheric wind has a strong influence on infrasound wave propagation, our estimation is that for high-altitude sources, extreme weather in the troposphere below has low impact on the trajectories of the waves.
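As a brief aside on the quantities involved (an illustrative, standard formulation rather than the specific Hamiltonian used by the authors), infrasound propagation in a windy atmosphere is often summarized by an effective sound speed along the propagation azimuth:

```latex
c_{\mathrm{eff}}(z) \;=\; c(z) \;+\; \hat{\mathbf{n}} \cdot \mathbf{v}(z),
\qquad c(z) \;=\; \sqrt{\gamma\, R_{\mathrm{air}}\, T(z)},
```

where c(z) is the adiabatic sound speed from the temperature profile T(z) (with γ ≈ 1.4 and R_air ≈ 287 J kg⁻¹ K⁻¹), v(z) is the horizontal wind vector, and n̂ is the horizontal unit vector along the propagation direction.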
Ultrasonic technique for monitoring of liquid density variations
NASA Astrophysics Data System (ADS)
Kazys, R.; Rekuviene, R.; Sliteris, R.; Mazeika, L.; Zukauskas, E.
2015-01-01
A novel ultrasonic measurement technique for density measurement of different liquids in extreme conditions has been developed. The proposed density measurement method is based on transformation of the acoustic impedance of the measured liquid. Higher measurement accuracy is achieved by means of a λ/4 acoustic matching layer between the load and the ultrasonic waveguide transducer; introduction of the matching layer enhances the sensitivity of the measurement system. Sometimes the density measurements must be performed in very demanding conditions: high temperature (up to 200 °C), high pressure (up to 10 MPa), and high chemical activity of the medium under measurement. In this case, special-geometry metal waveguides are proposed in order to protect the piezoelectric transducer from the influence of high temperature. The experimental set-up was calibrated using reference liquids with different densities: ethyl ether, ethyl alcohol, distilled water, and sugar-water solutions of different concentrations (20%, 40%, and 60%). The uncertainty of the measurements is less than 1%. The proposed measurement method was verified in real conditions by monitoring the density of melted polypropylene during a manufacturing process.
Xu, Wenjun; Chen, Jie; Lau, Henry Y K; Ren, Hongliang
2017-09-01
Accurate motion control of flexible surgical manipulators is crucial in tissue manipulation tasks. The tendon-driven serpentine manipulator (TSM) is one of the most widely adopted flexible mechanisms in minimally invasive surgery because of its enhanced maneuverability in tortuous environments. The TSM, however, exhibits high nonlinearities, and a conventional analytical kinematics model is insufficient to achieve high accuracy. To account for the system nonlinearities, we applied a data-driven approach to encode the system inverse kinematics. Three regression methods, extreme learning machine (ELM), Gaussian mixture regression (GMR) and K-nearest neighbors regression (KNNR), were implemented to learn a nonlinear mapping from the robot's 3D position states to the control inputs. The performance of the three algorithms was evaluated both in simulation and in physical trajectory tracking experiments. KNNR performed the best in the tracking experiments, with the lowest RMSE of 2.1275 mm. The proposed inverse kinematics learning methods provide an alternative and efficient way to accurately model the tendon-driven flexible manipulator. Copyright © 2016 John Wiley & Sons, Ltd.
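The best-performing mapping above (KNNR from tip position to control inputs) can be sketched with a generic regressor. The data, dimensions, toy forward model, and hyperparameters below are placeholders, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training data: recorded tip positions (x, y, z) and the tendon
# displacements that produced them, collected by exercising the manipulator.
rng = np.random.default_rng(6)
tendon_cmds = rng.uniform(-5.0, 5.0, size=(1000, 2))          # two tendon inputs (mm)
tip_positions = np.column_stack([                              # toy nonlinear forward model
    20.0 * np.sin(0.1 * tendon_cmds[:, 0]),
    20.0 * np.sin(0.1 * tendon_cmds[:, 1]),
    40.0 * np.cos(0.05 * np.linalg.norm(tendon_cmds, axis=1)),
]) + rng.normal(0, 0.1, size=(1000, 3))

# Learn the inverse kinematics: desired tip position -> tendon commands.
ik_model = KNeighborsRegressor(n_neighbors=5, weights="distance")
ik_model.fit(tip_positions, tendon_cmds)

desired_tip = np.array([[5.0, -3.0, 39.0]])
predicted_cmds = ik_model.predict(desired_tip)
```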
Molecular Imprinting Technology in Quartz Crystal Microbalance (QCM) Sensors.
Emir Diltemiz, Sibel; Keçili, Rüstem; Ersöz, Arzu; Say, Rıdvan
2017-02-24
Molecularly imprinted polymers (MIPs) as artificial antibodies have received considerable scientific attention in recent years in the field of (bio)sensors, since they have unique features that distinguish them from natural antibodies, such as robustness, multiple binding sites, low cost, facile preparation, and high stability under extreme operating conditions (high pH and temperature values, etc.). On the other hand, the Quartz Crystal Microbalance (QCM) is an analytical tool based on the measurement of small mass changes on the sensor surface. QCM sensors are practical and convenient monitoring tools because of their specificity, sensitivity, high accuracy, stability and reproducibility. QCM devices are highly suitable for converting the recognition process achieved using MIP-based memories into a sensor signal. Therefore, the combination of a QCM and MIPs as synthetic receptors enhances sensitivity through multiplexed binding sites created by the imprinting process, in which molecular memories of the target compound's size, 3D shape and chemical functionality are built into the sensor system. This review aims to highlight and summarize the recent progress and studies in the field of (bio)sensor systems based on QCMs combined with molecular imprinting technology.
MRI-Compatible Pneumatic Robot for Transperineal Prostate Needle Placement.
Fischer, Gregory S; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Dimaio, Simon P; Tempany, Clare M; Hata, Nobuhiko; Fichtinger, Gabor
2008-06-01
Magnetic resonance imaging (MRI) can provide high-quality 3-D visualization of prostate and surrounding tissue, thus granting potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. However, the benefits cannot be readily harnessed for interventional procedures due to difficulties that surround the use of high-field (1.5T or greater) MRI. The inability to use conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intraprostatic needle placement inside closed high-field MRI scanners. MRI compatibility of the robot has been evaluated under 3T MRI using standard prostate imaging sequences and average SNR loss is limited to 5%. Needle alignment accuracy of the robot under servo pneumatic control is better than 0.94 mm rms per axis. The complete system workflow has been evaluated in phantom studies with accurate visualization and targeting of five out of five 1 cm targets. The paper explains the robot mechanism and controller design, the system integration, and presents results of preliminary evaluation of the system.
Zimmermann, Jan; Vazquez, Yuriria; Glimcher, Paul W; Pesaran, Bijan; Louie, Kenway
2016-09-01
Video-based noninvasive eye trackers are an extremely useful tool for many areas of research. Many open-source eye trackers are available, but current open-source systems are not designed to track eye movements with the temporal resolution required to investigate the mechanisms of oculomotor behavior. Commercial systems are available but employ closed-source hardware and software and are relatively expensive, limiting widespread use. Here we present Oculomatic, an open-source software and modular hardware solution to eye tracking for use in humans and non-human primates. Oculomatic features high temporal resolution (up to 600 Hz), real-time eye tracking with high spatial accuracy (<0.5°), and low system latency (∼1.8 ms, 0.32 ms STD) at a relatively low cost. Oculomatic compares favorably to our existing scleral search-coil system while being fully noninvasive. We propose that Oculomatic can support a wide range of research into the properties and neural mechanisms of oculomotor behavior. Copyright © 2016 Elsevier B.V. All rights reserved.
Detection of IL-6 by magnetic nanoparticles grown with the assistance of mid-infrared lighting.
Jiang, Xiufeng; Zhang, Ye; Miao, Xiaofei; Li, Zenghui; Bao, Zengtao; Wang, Tong
2013-01-01
Nanomedical systems have attracted considerable attention primarily due to their suitability in applications for specific cell selection through biomolecular targeting and for enhancing rare cell detection in a diverse, multicellular population. In the present study, magnetic nanoparticles were prepared for use in high-accuracy cell sensing. Magnetic nanoparticle growth was assisted by mid-infrared lighting. By this mechanism, a narrow window, estimated to be 2%, was achieved for the size distribution of the grown nanoparticles. Combined with silicon nanowire (SiNW) transistors, a sensor with ultra-high sensitivity for the detection of specific, potentially low-abundance biomarkers has been achieved and used to detect interleukin-6 (IL-6) at extremely low concentrations. A novel biosensor with high sensitivity has been fabricated and utilized in the detection of IL-6 at 75 fM to 50 pM. The system consists of an SiNW transistor and magnetic nanoparticles with an even size distribution. The novel sensor system is suitable for quantifying IL-6 at low concentrations in protein samples.
Enabling affordable and efficiently deployed location based smart home systems.
Kelly, Damian; McLoone, Sean; Dishongh, Terry
2009-01-01
With the obvious eldercare capabilities of smart environments it is a question of "when", rather than "if", these technologies will be routinely integrated into the design of future houses. In the meantime, health monitoring applications must be integrated into already complete home environments. However, there is significant effort involved in installing the hardware necessary to monitor the movements of an elder throughout an environment. Our work seeks to address the high infrastructure requirements of traditional location-based smart home systems by developing an extremely low infrastructure localisation technique. A study of the most efficient method of obtaining calibration data for an environment is conducted and different mobile devices are compared for localisation accuracy and cost trade-off. It is believed that these developments will contribute towards more efficiently deployed location-based smart home systems.
Smart mobility solution with multiple input Output interface.
Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya
2017-07-01
Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for input, lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as an input interface. An Emotiv EPOC+ is also used for motor imagery based input control, synced with facial expressions, in cases of extreme disability. Apart from traction, additional functions such as home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment. The results also indicate that the overall system achieves reasonable accuracy.
Limitations of bootstrap current models
Belli, Emily A.; Candy, Jefferey M.; Meneghini, Orso; ...
2014-03-27
We assess the accuracy and limitations of two analytic models of the tokamak bootstrap current: (1) the well-known Sauter model and (2) a recent modification of the Sauter model by Koh et al. For this study, we use simulations from the first-principles kinetic code NEO as the baseline to which the models are compared. Tests are performed using both theoretical parameter scans and core-to-edge scans of real DIII-D and NSTX plasma profiles. The effects of extreme aspect ratio, large impurity fraction, energetic particles, and high collisionality are studied. In particular, the error in neglecting cross-species collisional coupling, an approximation inherent to both analytic models, is quantified. Moreover, the implications of the corrections from kinetic NEO simulations on MHD equilibrium reconstructions are studied via integrated modeling with kinetic EFIT.
NASA Technical Reports Server (NTRS)
Stark, G.; Smith, P. L.; Ito, K.; Yoshino, K.
1992-01-01
Photodissociation following absorption of extreme-ultraviolet photons is an important factor in determining the abundance and isotopic fractionation of CO in diffuse and translucent interstellar clouds. The principal channel for destruction of CO-13 in such clouds begins with absorption in the (1,0) vibrational band of the E1Pi - X1Sigma(+) system; similarly, absorption in the (0,0) band begins a significant destruction channel for CO-12. Reliable modeling of the CO fractionation process depends critically upon the accuracy of the photoabsorption cross section for these bands. We have measured the cross sections for the relevant isotopic species and for the (1,0) band of CO-12. Our results, which are uncertain by about 10 percent, are for the most part larger than previous measurements.
Cryogenic Thermal Conductivity Measurements on Candidate Materials for Space Missions
NASA Technical Reports Server (NTRS)
Tuttle, Jim; Canavan, Ed; Jahromi, Amir
2017-01-01
Spacecraft and instruments on space missions are built using a wide variety of carefully-chosen materials. In addition to having mechanical properties appropriate for surviving the launch environment, these materials generally must have thermal conductivity values which meet specific requirements in their operating temperature ranges. Space missions commonly propose to include materials for which the thermal conductivity is not well known at cryogenic temperatures. We developed a test facility in 2004 at NASA's Goddard Space Flight Center to measure material thermal conductivity at temperatures between 4 and 300 Kelvin, and we have characterized many candidate materials since then. The measurement technique is not extremely complex, but proper care to details of the setup, data acquisition and data reduction is necessary for high precision and accuracy. We describe the thermal conductivity measurement process and present results for several materials.
Critical appraisal of the 1977 diagnostic criteria for Minamata disease.
Yorifuji, Takashi; Tsuda, Toshihide; Inoue, Sachiko; Takao, Soshi; Harada, Masazumi; Kawachi, Ichiro
2013-01-01
Large-scale food poisoning caused by methylmercury was identified in Minamata, Japan, in the 1950s (Minamata disease). Although the diagnostic criteria for the disease remain current, few studies have been carried out to assess the diagnostic accuracy of the criteria. From a 1971 population-based investigation, data from 2 villages were selected: Minamata (high-exposure area; n = 779) and Ariake (low-exposure area; n = 755). The authors examined the prevalence of neurologic signs characteristic of methylmercury poisoning and the validity of the criteria. A substantial number of residents in the exposed area exhibited neurologic signs even after excluding officially certified patients. Using paresthesia of the extremities as the gold standard of diagnosis, the criteria had a sensitivity of 66%. The current diagnostic criteria as well as the official certification system substantially underestimate the incidence of Minamata disease.
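A minimal sketch of the diagnostic-accuracy arithmetic referred to in the abstract: sensitivity is the fraction of gold-standard-positive residents whom the criteria classify as cases, and specificity the fraction of gold-standard-negative residents classified as non-cases. The counts below are invented placeholders, not study data.

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """Proportion of gold-standard positives correctly identified."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Proportion of gold-standard negatives correctly identified."""
    return true_neg / (true_neg + false_pos)

# Illustrative counts only: 66 of 100 gold-standard positives flagged by the criteria.
print(sensitivity(66, 34))   # 0.66, i.e. the 66% sensitivity reported above
```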
NASA Astrophysics Data System (ADS)
Yan, Feng-Gang; Cao, Bin; Rong, Jia-Jia; Shen, Yi; Jin, Ming
2016-12-01
A new technique is proposed to reduce the computational complexity of the multiple signal classification (MUSIC) algorithm for direction-of-arrival (DOA) estimation using a uniform linear array (ULA). The steering vector of the ULA is reconstructed as the Kronecker product of two other steering vectors, and a new cost function that exhibits spatial aliasing is derived. Thanks to the estimation ambiguity introduced by this spatial aliasing, mirror angles mathematically related to the true DOAs are generated, based on which the full spectral search involved in the MUSIC algorithm is compressed into a limited angular sector. Further complexity analysis and performance studies are conducted by computer simulations, which demonstrate that the proposed estimator requires a greatly reduced computational burden while showing accuracy similar to that of the standard MUSIC.
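For context, a hedged sketch of the standard full-search MUSIC spectrum for a ULA, i.e. the baseline whose spectral search the proposed Kronecker-product reformulation compresses. The array size, source angles, noise level, and grid are illustrative assumptions; the paper's reduced-search cost function is not reproduced here.

```python
import numpy as np

def music_spectrum(R, M, d_over_lambda, n_sources, angles_deg):
    """R: MxM sample covariance; M: number of sensors; n_sources: assumed source count."""
    eigvals, eigvecs = np.linalg.eigh(R)
    En = eigvecs[:, : M - n_sources]                     # noise subspace (smallest eigenvalues)
    spectrum = []
    for theta in np.deg2rad(angles_deg):
        a = np.exp(-2j * np.pi * d_over_lambda * np.arange(M) * np.sin(theta))
        denom = a.conj() @ En @ En.conj().T @ a          # projection onto noise subspace
        spectrum.append(1.0 / np.real(denom))
    return np.array(spectrum)

# Toy usage: two sources at -20 and 30 degrees, 8-element half-wavelength ULA.
M, N, doas = 8, 200, np.deg2rad([-20, 30])
A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(doas)))
S = (np.random.randn(2, N) + 1j * np.random.randn(2, N)) / np.sqrt(2)
X = A @ S + 0.1 * (np.random.randn(M, N) + 1j * np.random.randn(M, N))
R = X @ X.conj().T / N
grid = np.arange(-90, 90.5, 0.5)
P = music_spectrum(R, M, 0.5, 2, grid)
print(grid[np.argsort(P)[-2:]])   # crude peak pick; use a proper peak finder in practice
```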
Cryogenic thermal conductivity measurements on candidate materials for space missions
NASA Astrophysics Data System (ADS)
Tuttle, James; Canavan, Edgar; Jahromi, Amir
2017-12-01
Spacecraft and instruments on space missions are built using a wide variety of carefully-chosen materials. It is common for NASA engineers to propose new candidate materials which have not been totally characterized at cryogenic temperatures. In many cases a material's cryogenic thermal conductivity must be known before selecting it for a specific space-flight application. We developed a test facility in 2004 at NASA's Goddard Space Flight Center to measure the longitudinal thermal conductivity of materials at temperatures between 4 and 300 K, and we have characterized many candidate materials since then. The measurement technique is not extremely complex, but proper care to details of the setup, data acquisition and data reduction is necessary for high precision and accuracy. We describe the thermal conductivity measurement process and present results for ten engineered materials, including alloys, polymers, composites, and a ceramic.
SUMER: Solar Ultraviolet Measurements of Emitted Radiation
NASA Technical Reports Server (NTRS)
Wilhelm, K.; Axford, W. I.; Curdt, W.; Gabriel, A. H.; Grewing, M.; Huber, M. C. E.; Jordan, M. C. E.; Lemaire, P.; Marsch, E.; Poland, A. I.
1988-01-01
The SUMER (solar ultraviolet measurements of emitted radiation) experiment is described. It will study flows, turbulent motions, waves, temperatures and densities of the plasma in the upper atmosphere of the Sun. Structures and events associated with solar magnetic activity will be observed on various spatial and temporal scales. This will contribute to the understanding of coronal heating processes and the solar wind expansion. The instrument will take images of the Sun in EUV (extreme ultra violet) light with high resolution in space, wavelength and time. The spatial resolution and spectral resolving power of the instrument are described. Spectral shifts can be determined with subpixel accuracy. The wavelength range extends from 500 to 1600 angstroms. The integration time can be as short as one second. Line profiles, shifts and broadenings are studied. Ratios of temperature and density sensitive EUV emission lines are established.
Interferometric at-wavelength flare characterization of EUV optical systems
Naulleau, Patrick P.; Goldberg, Kenneth Alan
2001-01-01
The extreme ultraviolet (EUV) phase-shifting point diffraction interferometer (PS/PDI) provides the high-accuracy wavefront characterization critical to the development of EUV lithography systems. Enhancing the implementation of the PS/PDI can significantly extend its spatial-frequency measurement bandwidth. The enhanced PS/PDI is capable of simultaneously characterizing both wavefront and flare. The enhanced technique employs a hybrid spatial/temporal-domain point diffraction interferometer (referred to as the dual-domain PS/PDI) that is capable of suppressing the scattered-reference-light noise that hinders the conventional PS/PDI. Using the dual-domain technique in combination with a flare-measurement-optimized mask and an iterative calculation process for removing flare contribution caused by higher order grating diffraction terms, the enhanced PS/PDI can be used to simultaneously measure both figure and flare in optical systems.
Shadow Areas Robust Matching Among Image Sequence in Planetary Landing
NASA Astrophysics Data System (ADS)
Ruoyan, Wei; Xiaogang, Ruan; Naigong, Yu; Xiaoqing, Zhu; Jia, Lin
2017-01-01
In this paper, an approach for robust matching of shadow areas in autonomous visual navigation and planetary landing is proposed. The approach begins by detecting shadow areas, which are extracted using Maximally Stable Extremal Regions (MSER). Then, an affine normalization algorithm is applied to normalize the areas. Thirdly, a descriptor called Multiple Angles-SIFT (MA-SIFT), derived from SIFT, is proposed; the descriptor can extract more features of an area. Finally, to eliminate the influence of outliers, an improved RANSAC method based on the Skinner Operation Condition is proposed to extract inliers. A series of experiments was conducted to test the performance of the proposed approach; the results show that the approach can maintain matching accuracy at a high level even when the differences among the images are obvious and no attitude measurements are supplied.
Evaluation of a high-torque backlash-free roller actuator
NASA Technical Reports Server (NTRS)
Steinetz, Bruce M.
1988-01-01
The results are presented of a test program that evaluated the stiffness, accuracy, torque ripple, frictional losses, and torque holding capability of a 16:1 ratio, 430 N-m (320 ft-lb) planetary roller drive for a potential space vehicle actuator application. The drive's planet roller supporting structure and bearings were found to be the largest contributors to overall drive compliance, accounting for more than half the total. In comparison, the traction roller contacts themselves contributed only 9 percent of the drive's compliance based on an experimentally verified stiffness model. Torque ripple tests showed the drive to be extremely smooth, actually providing some damping of input torsional oscillations. The drive also demonstrated the ability to hold static torque with drifts of 7 arc sec or less over a 24-hour period at 35 percent of full load.
Li, Yuancheng; Jing, Sitong
2018-01-01
Advanced Metering Infrastructure (AMI), a core component of the smart grid, realizes two-way communication of electricity data by interconnecting with a computer network. At the same time, it brings many new security threats, and traditional intrusion detection methods cannot satisfy the security requirements of AMI. In this paper, an intrusion detection system based on the Online Sequential Extreme Learning Machine (OS-ELM) is established, which is used to detect attacks in AMI, and a comparative analysis with other algorithms is carried out. Simulation results show that, compared with other intrusion detection methods, the OS-ELM-based method is superior in detection speed and accuracy. PMID:29485990
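A hedged sketch of a basic batch extreme learning machine classifier, the model family underlying the detector described above; the online-sequential variant additionally updates the output weights recursively as new traffic records arrive, which is not shown here. Feature dimensions, hidden-layer size, and label encoding are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, T, n_hidden=200):
    """X: N x d feature matrix, T: N x c one-hot labels. Returns (W, b, beta)."""
    W = rng.standard_normal((n_hidden, X.shape[1]))   # random, untrained input weights
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))          # random hidden-layer activations
    beta = np.linalg.pinv(H) @ T                      # analytic least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return np.argmax(H @ beta, axis=1)                # predicted class, e.g. normal vs. attack
```

Because the hidden weights are fixed and only the output weights are solved for, training reduces to one linear least-squares problem, which is what makes ELM-type detectors fast compared with iteratively trained networks.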
Robust representation and recognition of facial emotions using extreme sparse learning.
Shojaeilangari, Seyedehsamaneh; Yau, Wei-Yun; Nandakumar, Karthik; Li, Jun; Teoh, Eam Khwang
2015-07-01
Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (set of basis elements) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework is able to achieve state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.
Advances in LEDs for automotive applications
NASA Astrophysics Data System (ADS)
Bhardwaj, Jy; Peddada, Rao; Spinger, Benno
2016-03-01
High power LEDs were introduced in automotive headlights in 2006-2007, for example as full LED headlights in the Audi R8 or low beam in Lexus. Since then, LED headlighting has become established in premium and volume automotive segments and is beginning to enable new compact form factors such as distributed low beam and new functions such as adaptive driving beam. New generations of highly versatile high power LEDs are emerging to meet these application needs. In this paper, we will detail ongoing advances in LED technology that enable revolutionary styling, performance and adaptive control in automotive headlights. As the standards which govern the necessary lumens on the road are well established, increasing luminance enables not only more design freedom but also headlight cost reduction with space and weight saving through more compact optics. Adaptive headlighting is based on LED pixelation and requires high-contrast, high-luminance, smaller LEDs with high packing density for pixelated Matrix Lighting sources. Matrix applications require an extremely tight tolerance not only on the X, Y placement accuracy, but also on the Z height of the LEDs, given the precision optics used to image the LEDs onto the road. A new generation of chip scale packaged (CSP) LEDs based on Wafer Level Packaging (WLP) has been developed to meet these needs, offering a form factor less than 20% larger than the LED emitter surface footprint. These miniature LEDs are surface mount devices compatible with automated tools for L2 board direct attach (without the need for an interposer or L1 substrate), meeting the high position accuracy as well as the optical and thermal performance requirements. To illustrate the versatility of the CSP LEDs, we will show the results of, firstly, a reflector-based distributed low beam using multiple individual cavities, each with only 20 mm height, and secondly, 3x4 to 3x28 Matrix arrays for adaptive full beam. A few key trends in rear lighting and their impact on LED light source technology are also discussed.
Improve accuracy for automatic acetabulum segmentation in CT images.
Liu, Hao; Zhao, Jianning; Dai, Ning; Qian, Hongbo; Tang, Yuehong
2014-01-01
Separation of the femoral head and acetabulum is one of the main difficulties in the diseased hip joint, due to deformed shapes and extreme narrowness of the joint space. Improving segmentation accuracy is the key challenge for existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) design a surface iteration process to obtain an optimized surface; (2) replace the ellipsoid fitting with a two-phase quadric surface fitting; (3) introduce a normal matching method and an optimized-region method to capture edge points for the fitted quadric surface. Furthermore, this paper used in vivo CT data sets of 40 patients (with 79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabula without severe malformation and less than 30% for acetabula with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
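A hedged sketch of one plausible ingredient of such a pipeline: fitting a general quadric surface to 3D edge points by linear least squares and scoring per-point residuals for outlier rejection. This is a generic formulation under stated assumptions, not the paper's exact two-phase procedure or point-capture strategy.

```python
import numpy as np

def fit_quadric(points):
    """points: N x 3 array of (x, y, z) edge points near the acetabular surface.
    Solves A q = 1 for the 9 coefficients of
    a*x^2 + b*y^2 + c*z^2 + d*xy + e*xz + f*yz + g*x + h*y + i*z = 1
    (assumes the surface does not pass through the origin)."""
    x, y, z = points.T
    A = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    q, *_ = np.linalg.lstsq(A, np.ones(len(points)), rcond=None)
    return q

def quadric_residual(points, q):
    """Per-point algebraic distance to the fitted quadric; large values flag outliers."""
    x, y, z = points.T
    A = np.column_stack([x*x, y*y, z*z, x*y, x*z, y*z, x, y, z])
    return np.abs(A @ q - 1.0)
```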
MEGADOCK: An All-to-All Protein-Protein Interaction Prediction System Using Tertiary Structure Data
Ohue, Masahito; Matsuzaki, Yuri; Uchikoga, Nobuyuki; Ishida, Takashi; Akiyama, Yutaka
2014-01-01
The elucidation of protein-protein interaction (PPI) networks is important for understanding cellular structure and function and structure-based drug design. However, the development of an effective method to conduct exhaustive PPI screening represents a computational challenge. We have been investigating a protein docking approach based on shape complementarity and physicochemical properties. We describe here the development of the protein-protein docking software package “MEGADOCK” that samples an extremely large number of protein dockings at high speed. MEGADOCK reduces the calculation time required for docking by using several techniques such as a novel scoring function called the real Pairwise Shape Complementarity (rPSC) score. We showed that MEGADOCK is capable of exhaustive PPI screening by completing docking calculations 7.5 times faster than the conventional docking software, ZDOCK, while maintaining an acceptable level of accuracy. When MEGADOCK was applied to a subset of a general benchmark dataset to predict 120 relevant interacting pairs from 120 x 120 = 14,400 combinations of proteins, an F-measure value of 0.231 was obtained. Further, we showed that MEGADOCK can be applied to a large-scale protein-protein interaction-screening problem with accuracy better than random. When our approach is combined with parallel high-performance computing systems, it is now feasible to search and analyze protein-protein interactions while taking into account three-dimensional structures at the interactome scale. MEGADOCK is freely available at http://www.bi.cs.titech.ac.jp/megadock. PMID:23855673
NASA Astrophysics Data System (ADS)
Wood, Michael J.; Aristizabal, Felipe; Coady, Matthew; Nielson, Kent; Ragogna, Paul J.; Kietzig, Anne-Marie
2018-02-01
The production of millimetric liquid droplets has importance in a wide range of applications both in the laboratory and industrially. As such, much effort has been put forth to devise methods to generate these droplets on command in a manner which results in high diameter accuracy and precision, well-defined trajectories followed by successive droplets and low oscillations in droplet shape throughout their descents. None of the currently employed methods of millimetric droplet generation described in the literature adequately addresses all of these desired droplet characteristics. The reported methods invariably involve the cohesive separation of the desired volume of liquid from the bulk supply in the same step that separates the single droplet from the solid generator. We have devised a droplet generation device which separates the desired volume of liquid within a tee-apparatus in a step prior to the generation of the droplet which has yielded both high accuracy and precision of the diameters of the final droplets produced. Further, we have engineered a generating tip with extreme antiwetting properties which has resulted in reduced adhesion forces between the liquid droplet and the solid tip. This has yielded the ability to produce droplets of low mass without necessitating different diameter generating tips or the addition of surfactants to the liquid, well-defined droplet trajectories, and low oscillations in droplet volume. The trajectories and oscillations of the droplets produced have been assessed and presented quantitatively in a manner that has been lacking in the current literature.
NASA Astrophysics Data System (ADS)
Li, Xingmin; Lu, Ling; Yang, Wenfeng; Cheng, Guodong
2012-07-01
Estimating surface evapotranspiration is extremely important for the study of water resources in arid regions. Data from the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer (NOAA/AVHRR), meteorological observations and data obtained from the Watershed Allied Telemetry Experimental Research (WATER) project in 2008 are applied to the evaporative fraction model to estimate evapotranspiration over the Heihe River Basin. The calculation method for the parameters used in the model and the evapotranspiration estimation results are analyzed and evaluated. The results observed within the oasis and the banks of the river suggest that more evapotranspiration occurs in the inland river basin in the arid region from May to September. Evapotranspiration values for the oasis, where the land surface types and vegetations are highly variable, are relatively small and heterogeneous. In the Gobi desert and other deserts with little vegetation, evapotranspiration remains at its lowest level during this period. These results reinforce the conclusion that rational utilization of water resources in the oasis is essential to manage the water resources in the inland river basin. In the remote sensing-based evapotranspiration model, the accuracy of the parameter estimate directly affects the accuracy of the evapotranspiration results; more accurate parameter values yield more precise values for evapotranspiration. However, when using the evaporative fraction to estimate regional evapotranspiration, better calculation results can be achieved only if evaporative fraction is constant in the daytime.
Silitonga, Arridina Susan; Hassan, Masjuki Haji; Ong, Hwai Chyuan; Kusumo, Fitranto
2017-11-01
The purpose of this study is to investigate the performance, emission and combustion characteristics of a four-cylinder common-rail turbocharged diesel engine fuelled with Jatropha curcas biodiesel-diesel blends. A kernel-based extreme learning machine (KELM) model is developed in this study using MATLAB software in order to predict the performance, combustion and emission characteristics of the engine. To acquire the data for training and testing the KELM model, the engine speed was selected as the input parameter, whereas the performance, exhaust emissions and combustion characteristics were chosen as the output parameters of the KELM model. The performance, emissions and combustion characteristics predicted by the KELM model were validated by comparing the predicted data with the experimental data. The results show that the coefficient of determination of the parameters is within a range of 0.9805-0.9991 for both the KELM model and the experimental data. The mean absolute percentage error is within a range of 0.1259-2.3838. This study shows that KELM modelling is a useful technique in biodiesel production since it enables scientists and researchers to predict the performance, exhaust emissions and combustion characteristics of internal combustion engines with high accuracy.
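A hedged sketch of kernel extreme learning machine regression in its common closed form, the model family named above, written in Python rather than the MATLAB used in the study. The RBF kernel, regularization constant, and data shapes (engine speed in, performance/emission metrics out) are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian kernel matrix between row-wise sample sets A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, gamma=1.0):
    """Closed-form KELM output weights: beta = (I/C + Omega)^-1 T."""
    Omega = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + Omega, T)

def kelm_predict(X_new, X_train, beta, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ beta

# Usage sketch: X = engine speeds (N x 1), T = measured outputs (N x m);
# accuracy would then be reported via R^2 and mean absolute percentage error as above.
```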
Three-Dimensional Lower Extremity Joint Loading in a Carved Ski and Snowboard Turn: A Pilot Study
Müller, Erich
2014-01-01
A large number of injuries to the lower extremity occur in skiing and snowboarding. Due to the difficulty of collecting 3D kinematic and kinetic data with high accuracy, a possible relationship between injury statistics and joint loading has not been studied. Therefore, the purpose of the current study was to compare ankle and knee joint loading at the steering leg between carved ski and snowboard turns. Kinetic data were collected using mobile force plates mounted under the toe and heel part of the binding on skis or snowboard (KISTLER). Kinematic data were collected with five synchronized, panning, tilting, and zooming cameras. An extended version of the Yeadon model was applied to calculate inertial properties of the segments. Ankle and knee joint forces and moments were calculated using inverse dynamic analysis. Results showed higher forces along the longitudinal axis in skiing and similar forces for skiing and snowboarding in the anterior-posterior and mediolateral directions. Joint moments were consistently greater during a snowboard turn, but more fluctuations were observed in skiing. Hence, when comparing joint loading between carved ski and snowboard turns, one should differentiate between forces and moments, including the direction of forces and moments and the turn phase. PMID:25317202
2015-01-01
We monitored pasture biomass on 20 permanent plots over 35 years to gauge the reliability of rainfall and NDVI as proxy measures of forage shortfalls in a savannah ecosystem. Both proxies are reliable indicators of pasture biomass at the onset of dry periods but fail to predict shortfalls in prolonged dry spells. In contrast, grazing pressure predicts pasture deficits with a high degree of accuracy. Large herbivores play a primary role in determining the severity of pasture deficits and variation across habitats. Grazing pressure also explains oscillations in plant biomass unrelated to rainfall. Plant biomass has declined steadily and biomass per unit of rainfall has fallen by a third, corresponding to a doubling in grazing intensity over the study period. The rising probability of forage deficits fits local pastoral perceptions of an increasing frequency of extreme shortfalls. The decline in forage is linked to sedentarization, range loss and herbivore compression into drought refuges, rather than climate change. The results show that the decline in rangeland productivity and increasing frequency of pasture shortfalls can be ameliorated by better husbandry practices and reinforces the need for ground monitoring to complement remote sensing in forecasting pasture shortfalls. PMID:26317512
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
NASA Technical Reports Server (NTRS)
Woods, T. N.; Eparvier, F. G.; Hock, R.; Jones, A. R.; Woodraska, D.; Judge, D.; Didkovsky, L.; Lean, J.; Mariska, J.; Warren, H.;
2010-01-01
The highly variable solar extreme ultraviolet (EUV) radiation is the major energy input to the Earth's upper atmosphere, strongly impacting the geospace environment, affecting satellite operations, communications, and navigation. The Extreme ultraviolet Variability Experiment (EVE) onboard the NASA Solar Dynamics Observatory (SDO) will measure the solar EUV irradiance from 0.1 to 105 nm with unprecedented spectral resolution (0.1 nm), temporal cadence (ten seconds), and accuracy (20%). EVE includes several irradiance instruments: The Multiple EUV Grating Spectrographs (MEGS)-A is a grazing-incidence spectrograph that measures the solar EUV irradiance in the 5 to 37 nm range with 0.1-nm resolution, and the MEGS-B is a normal-incidence, dual-pass spectrograph that measures the solar EUV irradiance in the 35 to 105 nm range with 0.1-nm resolution. To provide MEGS in-flight calibration, the EUV SpectroPhotometer (ESP) measures the solar EUV irradiance in broadbands between 0.1 and 39 nm, and a MEGS-Photometer measures the Sun's bright hydrogen emission at 121.6 nm. The EVE data products include a near real-time space-weather product (Level 0C), which provides the solar EUV irradiance in specific bands and also spectra in 0.1-nm intervals with a cadence of one minute and with a time delay of less than 15 minutes. The EVE higher-level products are Level 2 with the solar EUV irradiance at higher time cadence (0.25 seconds for photometers and ten seconds for spectrographs) and Level 3 with averages of the solar irradiance over a day and over each one-hour period. The EVE team also plans to advance existing models of solar EUV irradiance and to operationally use the EVE measurements in models of Earth's ionosphere and thermosphere. Improved understanding of the evolution of solar flares and extending the various models to incorporate solar flare events are high priorities for the EVE team.
Micro-Vibration Measurements on Thermally Loaded Multi-Layer Insulation Samples in Vacuum
NASA Technical Reports Server (NTRS)
Deutsch, Georg; Grillenbeck, Anton
2008-01-01
Some scientific missions require, to an extreme extent, the absence of any on-board micro-vibration. Recent projects dedicated to measuring the Earth's gravity field and modeling the geoid with extremely high accuracy are examples. Their missions demand an extremely low micro-vibration environment on orbit in order to (1) not disturb the measurement of Earth gravity effects with the installed gradiometer, or (2) not damage the very highly sensitive instruments. Based on evidence from ongoing missions, multi-layer insulation (MLI) type thermal control blankets have been identified as a structural element of spacecraft which may deform under the temperature variations caused by varying solar irradiation in orbit. Any such deformation exerts tiny forces which may cause small reactions resulting in micro-vibrations, in particular by exciting the spacecraft eigenmodes. The principle of the test set-up for the micro-vibration test was as follows. A real side wall panel of the spacecraft (size about 0.25 m2) was suspended with a low-frequency suspension in a thermal vacuum chamber. On one side of this panel, the MLI samples were fixed using the standard methods. In front of the MLI, an IR rig was installed which provided actively controlled IR radiation power of about 6 kW/m2 in order to heat the MLI surface. Cooling was passive, using the shroud temperature at a chamber pressure <1E-5 mbar. The resulting micro-vibrations due to MLI motion in the heating and cooling phases were measured via seismic accelerometers rigidly mounted to the panel. Video recording was used to correlate micro-vibration events with any visible MLI motion. Different MLI sample types were subjected to various thermal cycles in a temperature range between -60 C and +80 C. In this paper, the experience from these micro-vibration measurements is presented and the conclusions for future applications are discussed.
The Subaru Coronagraphic Extreme AO Project
NASA Astrophysics Data System (ADS)
Martinache, Frantz; Guyon, O.; Lozi, J.; Tamura, M.; Hodapp, K.; Suzuki, R.; Hayano, Y.; McElwain, M. W.
2009-01-01
While the existence of large numbers of extrasolar planets around solar type stars has been unambiguously demonstrated by radial velocity, transit and microlensing surveys, attempts at direct imaging with AO-equipped large telescopes remain unsuccessful. Because they supposedly offer more favorable contrast ratios, young systems constitute prime targets for imaging. Such observations will provide key insights on the formation and early evolution of planets and disks. Current surveys are limited by modest AO performance which limits the inner working angle to 0.2", and only reach maximum sensitivity outside 1". This translates into orbital distances greater than 10 AU even for the most nearby systems, while only 5% of the known exoplanets have a semimajor axis greater than 10 AU. This calls for a major change of approach in the techniques used for direct imaging of the immediate vicinity of stars. A sensible way to do the job is to combine coronagraphy and Extreme AO. Only accurate and fast control of the wavefront will permit the detection of high contrast planetary companions within 10 AU. The SCExAO system, currently under assembly, is an upgrade of the HiCIAO coronagraphic differential imaging camera, mounted behind the 188-actuator curvature AO system on the Subaru Telescope. This platform includes a 1000-actuator MEMS deformable mirror for high accuracy wavefront correction and a PIAA coronagraph which delivers high contrast at 0.05" from the star (5 AU at 100 pc). Key technologies have been validated in the laboratory: high performance wavefront sensing schemes, spider vanes and central obstruction removal, and lossless beam apodization. The project is designed to be highly flexible to continuously integrate new technologies with high scientific payoff. Planned upgrades include an integral field unit for spectral characterization of planets/disks and a non-redundant aperture mask to push the performance of the system toward separations less than lambda/D.
Sonar gas seepage characterization using high resolution systems at short ranges
NASA Astrophysics Data System (ADS)
Schneider von Deimling, J.; Lohrberg, A.; Mücke, I.
2017-12-01
Sonar is extremely sensitive for submarine remote sensing of free gas bubbles. Known reasons for this are (1) the high impedance contrast between water and gas, which holds true also at larger depths with higher hydrostatic pressures and thus greater mole density in a gas bubble; (2) resonating behavior at a specific depth-frequency-size/shape relation with highly non-linear behavior; (3) an overlooked property that is valuable for gas seepage detection and characterization: the movement of bubbles, whose overall trajectory is governed by buoyancy, upwelling effects, tides, eddies, and currents. Moving objects are an unusual seismo-acoustic target in solid earth geophysics, and most processors hardly consider such short-term movement. However, analyzing movement patterns over time and space greatly improves human and algorithmic bubble detection and helps mitigate false alarms, which are often caused by fish swim bladders. We optimized our sonar surveys for gas bubble trajectory analyses using calibrated split-beam and broadband/short-pulse multibeam systems to gather very high quality sonar images. We present sonar data patterns of gas seepage sites recorded at short ranges showing individual bubbles or groups of bubbles. Subsequent analyses of bubble trajectories and sonar strength can be used to quantify minor gas fluxes with high accuracy. Moreover, we analyzed strong gas bubble seepage sites with significant upwelling. Acoustic inversion of such major seep fluxes is extremely challenging, if not impossible, given uncertainties in bubble size spectra, upwelling velocities, and the position of targets within the beam geometry. Our 3D analyses of the water column multibeam data revealed that some major bubble flows follow spiral vortex trajectories. The phenomenon was first found at an abandoned well site in the North Sea, but our recent investigations confirm that such complex bubble trajectories exist at natural seeps, i.e. at the CO2 seep site Panarea (Italy). We hypothesize that accurate 3D analyses of plume shape and trajectory might help to estimate flux thresholds.
NASA Astrophysics Data System (ADS)
Maimaitijiang, Maitiniyazi; Ghulam, Abduwasit; Sidike, Paheding; Hartling, Sean; Maimaitiyiming, Matthew; Peterson, Kyle; Shavers, Ethan; Fishman, Jack; Peterson, Jim; Kadam, Suhas; Burken, Joel; Fritschi, Felix
2017-12-01
Estimating crop biophysical and biochemical parameters with high accuracy at low-cost is imperative for high-throughput phenotyping in precision agriculture. Although fusion of data from multiple sensors is a common application in remote sensing, less is known on the contribution of low-cost RGB, multispectral and thermal sensors to rapid crop phenotyping. This is due to the fact that (1) simultaneous collection of multi-sensor data using satellites are rare and (2) multi-sensor data collected during a single flight have not been accessible until recent developments in Unmanned Aerial Systems (UASs) and UAS-friendly sensors that allow efficient information fusion. The objective of this study was to evaluate the power of high spatial resolution RGB, multispectral and thermal data fusion to estimate soybean (Glycine max) biochemical parameters including chlorophyll content and nitrogen concentration, and biophysical parameters including Leaf Area Index (LAI), above ground fresh and dry biomass. Multiple low-cost sensors integrated on UASs were used to collect RGB, multispectral, and thermal images throughout the growing season at a site established near Columbia, Missouri, USA. From these images, vegetation indices were extracted, a Crop Surface Model (CSM) was advanced, and a model to extract the vegetation fraction was developed. Then, spectral indices/features were combined to model and predict crop biophysical and biochemical parameters using Partial Least Squares Regression (PLSR), Support Vector Regression (SVR), and Extreme Learning Machine based Regression (ELR) techniques. Results showed that: (1) For biochemical variable estimation, multispectral and thermal data fusion provided the best estimate for nitrogen concentration and chlorophyll (Chl) a content (RMSE of 9.9% and 17.1%, respectively) and RGB color information based indices and multispectral data fusion exhibited the largest RMSE 22.6%; the highest accuracy for Chl a + b content estimation was obtained by fusion of information from all three sensors with an RMSE of 11.6%. (2) Among the plant biophysical variables, LAI was best predicted by RGB and thermal data fusion while multispectral and thermal data fusion was found to be best for biomass estimation. (3) For estimation of the above mentioned plant traits of soybean from multi-sensor data fusion, ELR yields promising results compared to PLSR and SVR in this study. This research indicates that fusion of low-cost multiple sensor data within a machine learning framework can provide relatively accurate estimation of plant traits and provide valuable insight for high spatial precision in agriculture and plant stress assessment.
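A hedged sketch comparing two of the regressors named in the abstract (PLSR and SVR) for mapping fused image features to a single crop trait. The feature matrix, trait vector, split ratio, and hyperparameters are illustrative placeholders, not the study's data or settings, and the third method (ELR) is omitted for brevity.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

X = np.random.rand(120, 15)   # stand-in for vegetation indices, CSM height, thermal features
y = np.random.rand(120)       # stand-in for a measured trait, e.g. chlorophyll content

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, model in [("PLSR", PLSRegression(n_components=5)),
                    ("SVR", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_tr, y_tr)
    pred = np.ravel(model.predict(X_te))
    print(name, "RMSE:", np.sqrt(mean_squared_error(y_te, pred)))
```

In a real workflow the columns of X would be built separately from the RGB, multispectral, and thermal products, so that sensor combinations can be compared by swapping feature subsets in and out.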
NASA Astrophysics Data System (ADS)
Mohammed, K.; Islam, A. S.; Khan, M. J. U.; Das, M. K.
2017-12-01
With the large number of hydrologic models presently available along with the global weather and geographic datasets, streamflows of almost any river in the world can be easily modeled. And if a reasonable amount of observed data from that river is available, then simulations of high accuracy can sometimes be performed after calibrating the model parameters against those observed data through inverse modeling. Although such calibrated models can succeed in simulating the general trend or mean of the observed flows very well, more often than not they fail to adequately simulate the extreme flows. This causes difficulty in tasks such as generating reliable projections of future changes in extreme flows due to climate change, which is obviously an important task due to floods and droughts being closely connected to people's lives and livelihoods. We propose an approach where the outputs of a physically-based hydrologic model are used as an input to a machine learning model to try and better simulate the extreme flows. To demonstrate this offline-coupling approach, the Soil and Water Assessment Tool (SWAT) was selected as the physically-based hydrologic model, the Artificial Neural Network (ANN) as the machine learning model and the Ganges-Brahmaputra-Meghna (GBM) river system as the study area. The GBM river system, located in South Asia, is the third largest in the world in terms of freshwater generated and forms the largest delta in the world. The flows of the GBM rivers were simulated separately in order to test the performance of this proposed approach in accurately simulating the extreme flows generated by different basins that vary in size, climate, hydrology and anthropogenic intervention on stream networks. Results show that by post-processing the simulated flows of the SWAT models with ANN models, simulations of extreme flows can be significantly improved. The mean absolute errors in simulating annual maximum/minimum daily flows were minimized from 4967 cusecs to 1294 cusecs for Ganges, from 5695 cusecs to 2115 cusecs for Brahmaputra and from 689 cusecs to 321 cusecs for Meghna. Using this approach, simulations of hydrologic variables other than streamflow can also be improved given that a decent amount of observed data for that variable is available.
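A minimal sketch of the offline coupling described above: a small neural network is trained to map the physically-based model's simulated flows (plus simple temporal context) to observed flows, and is then used to correct new simulations. The predictor columns, network size, and placeholder arrays are assumptions for illustration; they are not the SWAT/ANN configuration used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Features built from the hydrologic model output, e.g. simulated daily flow,
# lagged simulated flows, and day of year; targets are observed flows.
X_train = np.random.rand(3000, 4)   # placeholder predictors from simulated output
y_train = np.random.rand(3000)      # placeholder observed flows

corrector = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0)
corrector.fit(X_train, y_train)

X_new = np.random.rand(365, 4)      # a new simulation year to be post-processed
corrected_flow = corrector.predict(X_new)
```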
Towards predictive many-body calculations of phonon-limited carrier mobilities in semiconductors
NASA Astrophysics Data System (ADS)
Poncé, Samuel; Margine, Elena R.; Giustino, Feliciano
2018-03-01
We probe the accuracy limit of ab initio calculations of carrier mobilities in semiconductors, within the framework of the Boltzmann transport equation. By focusing on the paradigmatic case of silicon, we show that fully predictive calculations of electron and hole mobilities require many-body quasiparticle corrections to band structures and electron-phonon matrix elements, the inclusion of spin-orbit coupling, and an extremely fine sampling of inelastic scattering processes in momentum space. By considering all these factors we obtain excellent agreement with experiment, and we identify the band effective masses as the most critical parameters to achieve predictive accuracy. Our findings set a blueprint for future calculations of carrier mobilities, and pave the way to engineering transport properties in semiconductors by design.
A Data-Driven Approach to Assess Coastal Vulnerability: Machine Learning from Hurricane Sandy
NASA Astrophysics Data System (ADS)
Foti, R.; Miller, S. M.; Montalto, F. A.
2015-12-01
As climate changes and population living along the coastlines continues to increase, an understanding of coastal risk and vulnerability to extreme events becomes increasingly important. With as many as 700,000 people living less than 3 m above the high tide line, New York City (NYC) represents one of the most threatened among major world cities. Recent events, most notably Hurricane Sandy, have put a tremendous pressure on the mosaic of economic, environmental, and social activities occurring in NYC at the interface between land and water. Using information on property damages collected by the Civil Air Patrol (CAP) after Hurricane Sandy, we developed a machine-learning based model able to identify the primary factors determining the occurrence and the severity of damages and intended to both assess and predict coastal vulnerability. The available dataset consists of categorical classifications of damages, ranging from 0 (not damaged) to 5 (damaged and flooded), and available for a sample of buildings in the NYC area. A set of algorithms, such as Logistic Regression, Gradient Boosting and Random Forest, were trained on 75% of the available dataset and tested on the remaining 25%, both training and test sets being picked at random. A combination of factors, including elevation, distance from shore, surge depth, soil type and proximity to key topographic features, such as wetlands and parks, were used as predictors. Trained algorithms were able to achieve over 85% prediction accuracy on both the training set and, most notably, the test set, with as few as six predictors, allowing a realistic depiction of the field of damage. Given their accuracy and robustness, we believe that these algorithms can be successfully applied to provide fields of coastal vulnerability for future extreme events, as well as to assess the consequences of changes, whether intended (e.g. land use change) or contingent (e.g. sea level rise), in the physical layout of NYC.
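A hedged sketch of the training/evaluation protocol described above: classifiers fit on a random 75% of buildings and scored on the held-out 25%. The predictor columns and damage labels are illustrative stand-ins for the CAP dataset, and the hyperparameters are not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Stand-in predictors: elevation, distance from shore, surge depth, soil type,
# proximity to wetlands, proximity to parks.
X = np.random.rand(5000, 6)
y = np.random.randint(0, 6, 5000)   # damage category 0 (undamaged) .. 5 (damaged and flooded)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=42)
for model in (RandomForestClassifier(n_estimators=300, random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "test accuracy:", model.score(X_te, y_te))
```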
NASA Technical Reports Server (NTRS)
Turpin, Jason B.
2004-01-01
One-dimensional water-hammer modeling involves the solution of two coupled non-linear hyperbolic partial differential equations (PDEs). These equations result from applying the principles of conservation of mass and momentum to flow through a pipe, and usually the assumption that the speed at which pressure waves propagate through the pipe is constant. In order to solve these equations for the interested quantities (i.e. pressures and flow rates), they must first be converted to a system of ordinary differential equations (ODEs) by either approximating the spatial derivative terms with numerical techniques or using the Method of Characteristics (MOC). The MOC approach is ideal in that no numerical approximation errors are introduced in converting the original system of PDEs into an equivalent system of ODEs. Unfortunately this resulting system of ODEs is bound by a time step constraint so that when integrating the equations the solution can only be obtained at fixed time intervals. If the fluid system to be modeled also contains dynamic components (i.e. components that are best modeled by a system of ODEs), it may be necessary to take extremely small time steps during certain points of the model simulation in order to achieve stability and/or accuracy in the solution. Coupled together, the fixed time step constraint invoked by the MOC, and the occasional need for extremely small time steps in order to obtain stability and/or accuracy, can greatly increase simulation run times. As one solution to this problem, a method for combining variable step integration (VSI) algorithms with the MOC was developed for modeling water-hammer in systems with highly dynamic components. A case study is presented in which reverse flow through a dual-flapper check valve introduces a water-hammer event. The predicted pressure responses upstream of the check-valve are compared with test data.
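A hedged sketch of a single interior-node MOC update for the water-hammer equations, using the classic fixed time step dt = dx / a mentioned above; boundary conditions and the variable-step coupling to dynamic components are omitted. Symbols (H head, Q flow rate, a wave speed, A pipe area, D diameter, f friction factor) follow the standard textbook form, not necessarily the author's notation.

```python
import numpy as np

def moc_interior_update(H, Q, a, g, A, f, D, dx):
    """Advance interior nodes of one pipe by a single MOC time step (dt = dx / a)."""
    B = a / (g * A)                       # characteristic impedance
    R = f * dx / (2.0 * g * D * A**2)     # friction coefficient per reach
    Hn, Qn = H.copy(), Q.copy()
    for i in range(1, len(H) - 1):
        # C+ compatibility equation carried from the upstream node
        CP = H[i - 1] + B * Q[i - 1] - R * Q[i - 1] * abs(Q[i - 1])
        # C- compatibility equation carried from the downstream node
        CM = H[i + 1] - B * Q[i + 1] + R * Q[i + 1] * abs(Q[i + 1])
        Hn[i] = 0.5 * (CP + CM)
        Qn[i] = (CP - CM) / (2.0 * B)
    return Hn, Qn
```

The fixed dt = dx / a constraint is exactly what the abstract's variable-step coupling works around when stiff components (such as the dual-flapper check valve) demand much smaller steps than the pipe discretization alone would require.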
Hirata, Aya; Sugiyama, Daisuke; Watanabe, Makoto; Tamakoshi, Akiko; Iso, Hiroyasu; Kotani, Kazuhiko; Kiyama, Masahiko; Yamada, Michiko; Ishikawa, Shizukiyo; Murakami, Yoshitaka; Miura, Katsuyuki; Ueshima, Hirotsugu; Okamura, Tomonori
2018-02-08
The effect of very high or extremely high levels of high-density lipoprotein cholesterol (HDL-C) on cardiovascular disease (CVD) is not well described. Although a few recent studies have reported the adverse effects of extremely high levels of HDL-C on CVD events, these did not show a statistically significant association between extremely high levels of HDL-C and cause-specific CVD mortality. In addition, Asian populations have not been studied. We examine the impact of extremely high levels of HDL-C on cause-specific CVD mortality using pooled data of Japanese cohort studies. We performed a large-scale pooled analysis of 9 Japanese cohorts including 43,407 participants aged 40-89 years, dividing the participants into 5 groups by HDL-C levels, including extremely high levels of HDL-C ≥2.33 mmol/L (≥90 mg/dL). We estimated the adjusted hazard ratio of each HDL-C category for all-cause death and cause-specific deaths compared with HDL-C 1.04-1.55 mmol/L (40-59 mg/dL) using a cohort-stratified Cox proportional hazards model. During a 12.1-year follow-up, 4995 all-cause deaths and 1280 deaths due to overall CVD were identified. Extremely high levels of HDL-C were significantly associated with increased risk of atherosclerotic CVD mortality (hazard ratio = 2.37, 95% confidence interval: 1.37-4.09 for total) and increased risk for coronary heart disease and ischemic stroke. In addition, the risk for extremely high HDL-C was more evident among current drinkers. We showed extremely high levels of HDL-C had an adverse effect on atherosclerotic CVD mortality in a pooled analysis of Japanese cohorts. Copyright © 2018 National Lipid Association. Published by Elsevier Inc. All rights reserved.
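A hedged sketch of the pooled survival analysis described above, using the lifelines package: a Cox model stratified by cohort with an indicator for extremely high HDL-C as the exposure. The DataFrame columns, covariate set, and randomly generated rows are illustrative assumptions, not the pooled Japanese cohort data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "followup_years": rng.exponential(10.0, n),        # time to death or censoring
    "cvd_death": rng.integers(0, 2, n),                # event indicator
    "hdl_extreme": rng.integers(0, 2, n),              # 1 if HDL-C >= 2.33 mmol/L (>= 90 mg/dL)
    "age": rng.integers(40, 90, n),
    "sex": rng.integers(0, 2, n),
    "cohort": rng.choice(["A", "B", "C"], n),          # study cohort used for stratification
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="cvd_death", strata=["cohort"])
cph.print_summary()   # hazard ratios estimated within cohort strata
```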
NASA Astrophysics Data System (ADS)
Hou, Zeyu; Lu, Wenxi
2018-05-01
Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
Radio Galaxy Zoo: compact and extended radio source classification with deep learning
NASA Astrophysics Data System (ADS)
Lukic, V.; Brüggen, M.; Banfield, J. K.; Wong, O. I.; Rudnick, L.; Norris, R. P.; Simmons, B.
2018-05-01
Machine learning techniques have been increasingly useful in astronomical applications over the last few years, for example in the morphological classification of galaxies. Convolutional neural networks have proven to be highly effective in classifying objects in image data. In the context of radio-interferometric imaging in astronomy, we looked for ways to identify multiple components of individual sources. To this effect, we design a convolutional neural network to differentiate between different morphology classes using sources from the Radio Galaxy Zoo (RGZ) citizen science project. In this first step, we focus on exploring the factors that affect the performance of such neural networks, such as the amount of training data, number and nature of layers, and the hyperparameters. We begin with a simple experiment in which we only differentiate between two extreme morphologies, using compact and multiple-component extended sources. We found that a three-convolutional layer architecture yielded very good results, achieving a classification accuracy of 97.4 per cent on a test data set. The same architecture was then tested on a four-class problem where we let the network classify sources into compact and three classes of extended sources, achieving a test accuracy of 93.5 per cent. The best-performing convolutional neural network set-up has been verified against RGZ Data Release 1 where a final test accuracy of 94.8 per cent was obtained, using both original and augmented images. The use of sigma clipping does not offer a significant benefit overall, except in cases with a small number of training images.
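As an illustration of the kind of three-convolutional-layer network described, the hedged sketch below builds a small image classifier in Keras; the layer widths, input size, and training settings are placeholders and do not reproduce the Radio Galaxy Zoo architecture.

import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(input_shape=(120, 120, 1), n_classes=2):
    # Three convolution/pooling stages followed by a small dense head.
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_cnn(n_classes=4)   # e.g. compact vs. three extended-source classes
# model.fit(train_images, train_labels, validation_split=0.1, epochs=30)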
Bhaduri, Aritra; Banerjee, Amitava; Roy, Subhrajit; Kar, Sougata; Basu, Arindam
2018-03-01
We present a neuromorphic current-mode implementation of a spiking neural classifier with a lumped square-law dendritic nonlinearity. It has been shown previously in software simulations that such a system with binary synapses can be trained with structural plasticity algorithms to achieve classification accuracy comparable to conventional algorithms while using fewer synaptic resources. We show that even in real analog systems with manufacturing imperfections (CV of 23.5% and 14.4% for dendritic branch gains and leaks respectively), this network is able to produce comparable results with fewer synaptic resources. The chip, fabricated in a [Formula: see text]m complementary metal oxide semiconductor process, has eight dendrites per cell and uses two opposing cells per class to cancel common-mode inputs. The chip can operate at supply voltages down to [Formula: see text] V and dissipates 19 nW of static power per neuronal cell and [Formula: see text] 125 pJ/spike. For two-class classification problems of high-dimensional rate-encoded binary patterns, the hardware achieves performance comparable to a software implementation of the same network, with only about a 0.5% reduction in accuracy. On two UCI data sets, the integrated circuit has classification accuracy comparable to standard machine learners like support vector machines and extreme learning machines while using two to five times fewer binary synapses. We also show that the system can operate on mean-rate-encoded spike patterns, as well as short bursts of spikes. To the best of our knowledge, this is the first attempt in hardware to perform classification exploiting dendritic properties and binary synapses.
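A hedged software sketch of the lumped square-law dendritic model underlying the chip is given below; the connectivity matrices, branch counts, and input coding are illustrative, and the structural-plasticity training used in the paper is not shown.

import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_dendrites, syn_per_dendrite = 100, 8, 5

# Binary synapses: each dendritic branch connects to a few input lines (hypothetical wiring).
conn_pos = rng.integers(0, n_inputs, size=(n_dendrites, syn_per_dendrite))
conn_neg = rng.integers(0, n_inputs, size=(n_dendrites, syn_per_dendrite))

def cell_output(x, conn):
    # Each branch sums its binary synaptic inputs, applies a square-law
    # nonlinearity, and the cell output is the sum over branches.
    branch_sums = x[conn].sum(axis=1)
    return (branch_sums ** 2).sum()

def classify(x):
    # Two opposing cells per class: the sign of their difference gives the label
    # and cancels common-mode input, as described in the abstract.
    return int(cell_output(x, conn_pos) > cell_output(x, conn_neg))

x = (rng.random(n_inputs) < 0.2).astype(float)   # a rate-encoded binary pattern
print(classify(x))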
MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites
2017-01-01
Quality control of MRI is essential for excluding problematic acquisitions and avoiding bias in subsequent image processing and analysis. Visual inspection is subjective and impractical for large-scale datasets. Although automated quality assessments have been demonstrated on single-site datasets, it is unclear whether such solutions can generalize to unseen data acquired at new sites. Here, we introduce the MRI Quality Control tool (MRIQC), a tool for extracting quality measures and fitting a binary (accept/exclude) classifier. Our tool can be run both locally and as a free online service via the OpenNeuro.org portal. The classifier is trained on a publicly available, multi-site dataset (17 sites, N = 1102). We perform model selection evaluating different normalization and feature exclusion approaches aimed at maximizing across-site generalization, and estimate an accuracy of 76%±13% on new sites using leave-one-site-out cross-validation. We confirm that result on a held-out dataset (2 sites, N = 265), also obtaining a 76% accuracy. Even though the performance of the trained classifier is statistically above chance, we show that it is susceptible to site effects and unable to account for artifacts specific to new sites. MRIQC performs with high accuracy in intra-site prediction, but performance on unseen sites leaves room for improvement, which might require more labeled data and new approaches to between-site variability. Overcoming these limitations is crucial for a more objective quality assessment of neuroimaging data, and to enable the analysis of extremely large and multi-site samples. PMID:28945803
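The leave-one-site-out evaluation described above can be sketched as follows; the feature files and the random-forest classifier are placeholders rather than MRIQC's actual feature set or model-selection pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

X = np.load("iqm_features.npy")      # hypothetical per-scan image-quality measures
y = np.load("ratings.npy")           # 1 = exclude, 0 = accept
sites = np.load("site_labels.npy")   # acquisition site of each scan

logo = LeaveOneGroupOut()            # each fold holds out all scans from one site
clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, groups=sites, cv=logo, scoring="accuracy")
print(f"across-site accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")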
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
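For illustration only, the toy sketch below shows the flavor of rule-based ascertainment from note text; the keyword list and naive negation window are hypothetical and far simpler than the knowledge-driven NLP algorithm evaluated in the study.

import re

PAD_TERMS = re.compile(
    r"\b(peripheral arterial disease|peripheral vascular disease|claudication|PAD)\b",
    re.IGNORECASE,
)
NEGATION = re.compile(r"\b(no|denies|without|negative for)\b[^.]{0,40}$", re.IGNORECASE)

def note_mentions_pad(note_text: str) -> bool:
    # Flag a note as a PAD case if it contains a non-negated mention of a PAD term.
    for match in PAD_TERMS.finditer(note_text):
        preceding = note_text[max(0, match.start() - 60):match.start()]
        if not NEGATION.search(preceding):
            return True
    return False

print(note_mentions_pad("Patient denies claudication."))              # False
print(note_mentions_pad("History of peripheral arterial disease."))   # True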
2013-01-01
Background Protein-protein interactions (PPIs) play crucial roles in the execution of various cellular processes and form the basis of biological mechanisms. Although a large amount of PPI data for different species has been generated by high-throughput experimental techniques, the PPI pairs obtained with experimental methods cover only a fraction of the complete PPI networks, and the experimental methods for identifying PPIs are both time-consuming and expensive. Hence, it is urgent and challenging to develop automated computational methods to efficiently and accurately predict PPIs. Results We present here a novel hierarchical PCA-EELM (principal component analysis-ensemble extreme learning machine) model to predict protein-protein interactions using only the information of protein sequences. In the proposed method, 11188 protein pairs retrieved from the DIP database were encoded into feature vectors by using four kinds of protein sequence information. For dimension reduction, an effective feature extraction method, PCA, was then employed to construct the most discriminative new feature set. Finally, multiple extreme learning machines were trained and then aggregated into a consensus classifier by majority voting. Ensembling the extreme learning machines removes the dependence of the results on the initial random weights and improves the prediction performance. Conclusions When applied to the PPI data of Saccharomyces cerevisiae, the proposed method achieved 87.00% prediction accuracy with 86.15% sensitivity at a precision of 87.59%. Extensive experiments were performed to compare our method with the state-of-the-art support vector machine (SVM) technique. Experimental results demonstrate that the proposed PCA-EELM outperforms the SVM method under 5-fold cross-validation. Besides, PCA-EELM runs faster than the PCA-SVM based method. Consequently, the proposed approach can be considered a promising and powerful tool for predicting PPIs with excellent performance in less time. PMID:23815620
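The sketch below, offered as a hedged illustration, wires together the three stages named in the abstract (PCA compression, several randomly initialized extreme learning machines, majority voting); the feature encoding of the protein sequences is omitted and all sizes are illustrative.

import numpy as np
from sklearn.decomposition import PCA

class ELMClassifier:
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        y = np.asarray(y, float)                     # binary labels in {0, 1}
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)             # random hidden layer
        Y = np.column_stack([1.0 - y, y])            # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y            # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

def pca_eelm_predict(X_train, y_train, X_test, n_components=100, n_members=9):
    pca = PCA(n_components=n_components).fit(X_train)
    Xtr, Xte = pca.transform(X_train), pca.transform(X_test)
    votes = np.stack([ELMClassifier(seed=s).fit(Xtr, y_train).predict(Xte)
                      for s in range(n_members)])
    return (votes.mean(axis=0) >= 0.5).astype(int)   # majority vote over the ensemble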
Predicting the Magnetic Properties of ICMEs: A Pragmatic View
NASA Astrophysics Data System (ADS)
Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.
2017-12-01
The southward component of the interplanetary magnetic field plays a crucial role in being able to successfully predict space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.
NASA Astrophysics Data System (ADS)
Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.
2005-08-01
The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme are investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest-size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.
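For context, the sketch below implements a basic global, perturbed-observation ensemble Kalman filter analysis step; it illustrates the kind of update the LEKF performs but is not the LEKF itself, which carries out this update independently in local patches of the model grid.

import numpy as np

def enkf_analysis(Xf, y, H, R, rng):
    """Xf: (n_state, n_ens) forecast ensemble; y: (n_obs,) observations;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs-error covariance."""
    n_ens = Xf.shape[1]
    A = Xf - Xf.mean(axis=1, keepdims=True)          # ensemble anomalies
    Pf_Ht = A @ (H @ A).T / (n_ens - 1)               # Pf H^T from the ensemble
    S = H @ Pf_Ht + R                                 # innovation covariance
    K = Pf_Ht @ np.linalg.inv(S)                      # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
    return Xf + K @ (Y - H @ Xf)                      # analysis ensemble

# rng = np.random.default_rng(0)
# Xa = enkf_analysis(Xf, y_obs, H, R, rng)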
Hu, Haixiang; Zhang, Xin; Ford, Virginia; Luo, Xiao; Qi, Erhui; Zeng, Xuefeng; Zhang, Xuejun
2016-11-14
Edge effect is regarded as one of the most difficult technical issues in a computer controlled optical surfacing (CCOS) process. Traditional opticians have to balance the consequences of the two following cases: operating CCOS in a large overhang condition affects the accuracy of material removal, while operating in a small overhang condition achieves more accurate removal but leaves a narrow rolled-up edge, which takes time and effort to remove. In order to control the edge residuals in the latter case, we present a new concept of the 'heterocercal' tool influence function (TIF). Generated from compound motion equipment, this type of TIF can 'transfer' material removal from the inner area to the edge, while maintaining the high accuracy and efficiency of CCOS. We call it the 'heterocercal' TIF because of the inspiration from the heterocercal tails of sharks, whose upper lobe provides most of the propulsive power. The heterocercal TIF was theoretically analyzed, and physically realized in CCOS facilities. Experimental and simulation results showed good agreement. It enables significant control of the edge effect and convergence of entire-surface errors in large tool-to-mirror size-ratio conditions. This improvement will greatly help manufacturing efficiency in some extremely large optical system projects, like the tertiary mirror of the Thirty Meter Telescope.
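For background, here is a hedged sketch of the standard CCOS removal model (predicted removal = tool influence function convolved with dwell time), which is the setting in which edge effects arise; the Gaussian TIF used here is only a placeholder, not the heterocercal TIF of the paper.

import numpy as np
from scipy.signal import fftconvolve

def gaussian_tif(size=31, sigma=5.0, peak_rate=1.0):
    # Placeholder rotationally symmetric tool influence function (removal rate map).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return peak_rate * np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

def predicted_removal(dwell_map, tif):
    # Inside the part this convolution model holds; near the edge the TIF overhangs
    # the part, which is exactly the edge effect the heterocercal TIF targets.
    return fftconvolve(dwell_map, tif, mode="same")

dwell = np.ones((200, 200))                     # uniform dwell time, arbitrary units
removal = predicted_removal(dwell, gaussian_tif())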
NASA Astrophysics Data System (ADS)
Kleemann, Bernd H.; Kurz, Julian; Hetzler, Jochen; Pomplun, Jan; Burger, Sven; Zschiedrich, Lin; Schmidt, Frank
2011-05-01
Finite element methods (FEM) for the rigorous electromagnetic solution of Maxwell's equations are known to be very accurate. They possess a high convergence rate for the determination of near-field and far-field quantities of scattering and diffraction processes of light by structures with feature sizes in the range of the light wavelength. We use FEM software for 3D scatterometric diffraction calculations, which allows the application of a brilliant and extremely fast solution method: the reduced basis method (RBM). The RBM constructs a reduced model of the scattering problem from precalculated snapshot solutions, guided self-adaptively by an error estimator. Using the RBM, we achieve an accuracy of about 10⁻⁴ compared to the direct problem, with only 35 precalculated snapshots as the reduced basis dimension. This speeds up the calculation of diffraction amplitudes by a factor of about 1000 compared to the conventional solution of Maxwell's equations by FEM. It allows us to reconstruct the three geometrical parameters of our phase grating from "measured" scattering data in a 3D parameter manifold online, within a minute, with the full FEM accuracy available. Additionally, a sensitivity analysis or the selection of robust measurement strategies, for example, can be performed online in a few minutes.
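To illustrate the flavor of reduced-order modelling, the hedged sketch below builds a snapshot basis for a toy parameterized linear system and then solves a small projected system for new parameter values; the actual RBM selects snapshots greedily with an a-posteriori error estimator and operates on the FEM discretization of Maxwell's equations, neither of which is reproduced here.

import numpy as np

def assemble(p, n=200):
    # Placeholder parameterized system A(p) x = b, standing in for the FEM problem.
    main = 2.0 + p * np.linspace(0, 1, n)
    A = np.diag(main) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
    return A, np.ones(n)

# Offline stage: full solves at a handful of parameter values, basis from SVD.
snapshots = np.column_stack([np.linalg.solve(*assemble(p)) for p in np.linspace(0.1, 2.0, 8)])
U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
B = U[:, :5]                                    # reduced basis of dimension 5

def reduced_solve(p):
    # Online stage: project, solve a 5x5 system, lift back to the full space.
    A, b = assemble(p)
    x_r = np.linalg.solve(B.T @ A @ B, B.T @ b)
    return B @ x_r

x_full = np.linalg.solve(*assemble(1.3))
print(np.linalg.norm(reduced_solve(1.3) - x_full) / np.linalg.norm(x_full))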
Enhanced auditory spatial localization in blind echolocators.
Vercillo, Tiziana; Milne, Jennifer L; Gori, Monica; Goodale, Melvyn A
2015-01-01
Echolocation is the extraordinary ability to represent the external environment by using reflected sound waves from self-generated auditory pulses. Blind human expert echolocators show extremely precise spatial acuity and high accuracy in determining the shape and motion of objects by using echoes. In the current study, we investigated whether or not the use of echolocation would improve the representation of auditory space, which is severely compromised in congenitally blind individuals (Gori et al., 2014). The performance of three blind expert echolocators was compared to that of 6 blind non-echolocators and 11 sighted participants. Two tasks were performed: (1) a space bisection task in which participants judged whether the second of a sequence of three sounds was closer in space to the first or the third sound and (2) a minimum audible angle task in which participants reported which of two sounds presented successively was located more to the right. The blind non-echolocating group showed a severe impairment only in the space bisection task compared to the sighted group. Remarkably, the three blind expert echolocators performed both spatial tasks with similar or even better precision and accuracy than the sighted group. These results suggest that echolocation may improve the general sense of auditory space, most likely through a process of sensory calibration. Copyright © 2014 Elsevier Ltd. All rights reserved.
Intra-Operative Frozen Sections for Ovarian Tumors – A Tertiary Center Experience
Arshad, Nur Zaiti Md; Ng, Beng Kwang; Paiman, Noor Asmaliza Md; Mahdy, Zaleha Abdullah; Noor, Rushdan Mohd
2018-01-01
Background: Accuracy of diagnosis with intra-operative frozen sections is extremely important in the evaluation of ovarian tumors so that appropriate surgical procedures can be selected. Study design: All patients who underwent intra-operative frozen sections for ovarian masses in a tertiary center over nine years from June 2008 until April 2017 were reviewed. Frozen section diagnosis and final histopathological reports were compared. Main outcome measures: Sensitivity, specificity, positive and negative predictive values of intra-operative frozen section as compared to final histopathological results for ovarian tumors. Results: A total of 92 cases were recruited for final evaluation. The frozen section diagnoses were comparable with the final histopathological reports in 83.7% of cases. The sensitivity, specificity, positive predictive value and negative predictive value for benign and malignant ovarian tumors were 95.6%, 85.1%, 86.0% and 95.2% and 69.2%, 100%, 100% and 89.2% respectively. For borderline ovarian tumors, the sensitivity and specificity were 76.2% and 88.7%, respectively; the positive predictive value was 66.7% and the negative predictive value was 92.7%. Conclusion: The accuracy of intra-operative frozen section diagnoses for ovarian tumors is high and this approach remains a reliable option in assessing ovarian masses intra-operatively. PMID:29373916
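The agreement statistics reported above follow from a 2x2 table of frozen-section calls against final histopathology; a minimal sketch with placeholder counts (not the study's data) is shown below.

def diagnostic_metrics(tp, fp, fn, tn):
    # Standard definitions from a 2x2 table of test result vs. reference standard.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

print(diagnostic_metrics(tp=43, fp=7, fn=2, tn=40))   # illustrative counts only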
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects.
Kim, Jinkwon; Min, Se Dong; Lee, Myoungho
2011-06-27
Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as a classifier in the proposed algorithm. A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject data shared between the training and evaluation datasets. It also significantly reduces the amount of intervention needed from physicians.
An arrhythmia classification algorithm using a dedicated wavelet adapted to different subjects
2011-01-01
Background Numerous studies have been conducted regarding heartbeat classification algorithms over the past several decades. However, many algorithms have also been studied to acquire robust performance, as biosignals have a large amount of variation among individuals. Various methods have been proposed to reduce the differences coming from personal characteristics, but these expand the differences caused by arrhythmia. Methods In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced the performance variation by using dedicated wavelets matched to the ECG morphologies of the subjects. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with a dedicated wavelet. A principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as a classifier in the proposed algorithm. Results A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%. Conclusions The proposed algorithm achieves better accuracy than other state-of-the-art algorithms with no intrasubject data shared between the training and evaluation datasets. It also significantly reduces the amount of intervention needed from physicians. PMID:21707989
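As a hedged sketch of the pipeline's overall shape (continuous wavelet transform per beat, then PCA and LDA compression), the code below uses a generic Morlet wavelet in place of the subject-adapted dedicated wavelet, assumes fixed-length beat windows, and leaves the final stage to a classifier such as the extreme learning machine sketched after the PCA-EELM abstract above.

import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def beat_features(beat, scales=np.arange(1, 33), wavelet="morl"):
    # Continuous wavelet transform of one fixed-length beat window; "morl" stands in
    # for the subject-specific dedicated wavelet of the paper.
    coeffs, _ = pywt.cwt(beat, scales, wavelet)      # shape (n_scales, n_samples)
    return coeffs.ravel()

def compress(beats, labels, n_pca=50):
    # PCA then LDA, as in the abstract; returns features for the classifier stage.
    X = np.stack([beat_features(b) for b in beats])
    pca = PCA(n_components=n_pca).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return lda.transform(pca.transform(X)), pca, lda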
Franklin, Daniel; O'Higgins, Paul; Oxnard, Charles E; Dadour, Ian
2006-12-01
The determination of sex is a critical component in forensic anthropological investigation. The literature attests to numerous metrical standards, each utilizing different skeletal elements, for sex determination in South African Blacks. Metrical standards are popular because they provide a high degree of expected accuracy and are less error-prone than subjective nonmetric visual techniques. We note, however, that there appear to be no established metric mandible discriminant function standards for sex determination in this population. We report here on a preliminary investigation designed to evaluate whether the mandible is a practical element for sex determination in South African Blacks. The sample analyzed comprises 40 nonpathological Zulu individuals drawn from the R.A. Dart Collection. Ten linear measurements, obtained from mathematically transformed three-dimensional landmark data, are analyzed using basic univariate statistics and discriminant function analyses. Seven of the 10 measurements examined are found to be sexually dimorphic; the dimensions of the ramus are most dimorphic. The sex classification accuracy of the discriminant functions ranged from 72.5 to 87.5% for the univariate method, 92.5% for the stepwise method, and 57.5 to 95% for the direct method. We conclude that the mandible is an extremely useful element for sex determination in this population.
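A discriminant function analysis of the kind reported can be sketched as below; the measurement file, column names, and cross-validation scheme are placeholders, not the Dart Collection protocol.

import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical table: ten linear mandibular measurements per individual plus sex.
df = pd.read_csv("mandible_measurements.csv")
X = df.drop(columns=["sex"]).to_numpy()
y = df["sex"].to_numpy()

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5, scoring="accuracy")
print(f"cross-validated sex classification accuracy: {acc.mean():.1%}")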
Status of the radio technique for cosmic-ray induced air showers
NASA Astrophysics Data System (ADS)
Schröder, Frank G.
2016-10-01
Radio measurements yield calorimetric information on the electromagnetic shower component around the clock. However, until recently it was not clear whether radio measurements can compete in accuracy with established night-time techniques like air-Cherenkov or air-fluorescence detection. Due to recent progress in the radio technique as well as in the understanding of the emission mechanisms, the performance of current radio experiments has significantly improved. Above 100 PeV, digital, state-of-the-art antenna arrays achieve a reconstruction accuracy for the energy similar to that of other techniques, and can provide an independent measurement of the absolute energy scale. Furthermore, radio measurements are sensitive to the mass composition of the primary particles: First, the position of the shower maximum can be reconstructed from the radio signal. Second, in combination with muon detectors the measurement of the electromagnetic component provides complementary information on the primary mass. Since the radio footprint is huge for inclined showers, and the radio signal does not suffer absorption in the atmosphere, future radio arrays either focus on inclined showers at the highest energy, or on ultra-high precision measurements with extremely dense arrays. This proceeding reviews the current status of radio experiments and simulations as well as future plans.
An automatic iris occlusion estimation method based on high-dimensional density estimation.
Li, Yung-Hui; Savvides, Marios
2013-04-01
Iris masks play an important role in iris recognition. They indicate which part of the iris texture map is useful and which part is occluded or contaminated by noisy image artifacts such as eyelashes, eyelids, eyeglass frames, and specular reflections. The accuracy of the iris mask is extremely important. The performance of the iris recognition system will decrease dramatically when the iris mask is inaccurate, even when the best recognition algorithm is used. Traditionally, rule-based algorithms have been used to estimate iris masks from iris images. However, the accuracy of the iris masks generated this way is questionable. In this work, we propose to use Figueiredo and Jain's Gaussian Mixture Models (FJ-GMMs) to model the underlying probabilistic distributions of both valid and invalid regions on iris images. We also explored possible features and found that a Gabor Filter Bank (GFB) provides the most discriminative information for our goal. Finally, we applied a Simulated Annealing (SA) technique to optimize the parameters of the GFB in order to achieve the best recognition rate. Experimental results show that the masks generated by the proposed algorithm increase the iris recognition rate on both the ICE2 and UBIRIS datasets, verifying the effectiveness and importance of our proposed method for iris occlusion estimation.
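The general idea (per-pixel Gabor-filter-bank features, one Gaussian mixture model each for valid and occluded regions) can be sketched as below; scikit-learn's standard GaussianMixture stands in for the FJ-GMM variant, the filter-bank parameters are illustrative, and the simulated-annealing optimization of those parameters is not shown.

import numpy as np
from skimage.filters import gabor
from sklearn.mixture import GaussianMixture

def gabor_features(img, frequencies=(0.1, 0.2, 0.3), thetas=(0, np.pi / 4, np.pi / 2)):
    # Magnitude responses of a small Gabor filter bank, one feature per filter per pixel.
    maps = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(img, frequency=f, theta=t)
            maps.append(np.hypot(real, imag))
    return np.stack(maps, axis=-1).reshape(-1, len(maps))   # (n_pixels, n_filters)

def fit_occlusion_model(img, valid_mask, n_components=3):
    # Fit one mixture to pixels labeled valid and one to pixels labeled occluded.
    X = gabor_features(img)
    labels = valid_mask.ravel().astype(bool)
    gmm_valid = GaussianMixture(n_components=n_components).fit(X[labels])
    gmm_occluded = GaussianMixture(n_components=n_components).fit(X[~labels])
    return gmm_valid, gmm_occluded

def predict_mask(img, gmm_valid, gmm_occluded):
    # A pixel is kept if the valid-region model explains it better than the occluded one.
    X = gabor_features(img)
    mask = gmm_valid.score_samples(X) > gmm_occluded.score_samples(X)
    return mask.reshape(img.shape)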
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations
NASA Astrophysics Data System (ADS)
Rudmin, Joseph W.
2001-04-01
The Parker-Sochacki Method--A Powerful New Method for Solving Systems of Differential Equations Joseph W. Rudmin (Physics Dept, James Madison University) A new method for solving systems of differential equations will be presented, which has been developed by J. Edgar Parker and James Sochacki of the James Madison University Mathematics Department. The method produces Maclaurin series solutions to systems of differential equations, with the coefficients in either algebraic or numerical form. The method yields high-degree solutions: 20th degree is easily obtainable. It is conceptually simple, fast, and extremely general. It has been applied to over a hundred systems of differential equations, some of which were previously unsolved, and has yet to fail to solve any system for which the Maclaurin series converges. The method is non-recursive: each coefficient in the series is calculated just once, in closed form, and its accuracy is limited only by the digital accuracy of the computer. Although the original differential equations may include any mathematical functions, the computational method includes ONLY the operations of addition, subtraction, and multiplication. Furthermore, it is perfectly suited to parallel-processing computer languages. Those who learn this method will never use Runge-Kutta or predictor-corrector methods again. Examples will be presented, including the classical many-body problem.
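A hedged, minimal illustration of the idea on a single toy equation, y' = y^2 with y(0) = 1 (exact solution 1/(1 - t)): each Maclaurin coefficient is computed once, in closed form, using only additions and multiplications via a Cauchy product. This is only a one-equation sketch, not the general multi-equation machinery of the method.

def parker_sochacki_y_squared(y0=1.0, degree=20):
    a = [y0]                                   # a[n] is the coefficient of t**n
    for n in range(degree):
        # Coefficient of t**n in y^2 is a Cauchy product of the series with itself.
        cauchy = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(cauchy / (n + 1))             # from (n+1) * a[n+1] = [t^n] y^2
    return a

def eval_series(a, t):
    return sum(c * t**k for k, c in enumerate(a))

coeffs = parker_sochacki_y_squared(degree=20)
print(eval_series(coeffs, 0.5), 1 / (1 - 0.5))   # series value vs. exact solution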
High temperature antenna pointing mechanism for BepiColombo mission
NASA Astrophysics Data System (ADS)
Mürer, Johan A.; Harper, Richard; Anderson, Mike
2005-07-01
This paper describes the two axis Antenna Pointing Mechanism (APM) with dual frequency (X-Ka bands) Rotary Joint (RJ) developed by Kongsberg Defence and Aerospace and BAE Systems, in the frame of the ESA BepiColombo mission to the planet Mercury. The extreme environmental conditions induced by Mercury's proximity to the Sun (up to 14,500 W/m2 direct solar flux, up to 5000 W/m2 infrared flux and up to 1200 W/m2 albedo shine from the planet surface) have dictated the need for a specific high temperature development of the pointing mechanism and of its integrated RF Rotary Joint. Global thermal analysis of the antenna predicts qualification temperatures for the elevation stage APM between 250°C and 295°C. In addition, the mechanism shall survive extreme cold temperatures during the interplanetary cruise phase. Besides the harsh environment, the stringent pointing accuracy required by the antenna's high frequency operations, and the extreme dimensional stability demanded by a radio science experiment (which uses the antenna for range and range rate measurements), have introduced additional, specific challenges to the mechanism design. Innovative solutions have been deemed necessary at system architecture level, in the design of the mechanism's critical areas and in the selection of high temperature compatible materials and processes. The very high working temperature of the mechanism ruled out the use of aluminium alloys, which were replaced by titanium alloy and stainless steels. Special heat treatments of the steel are applied for minimum loss of hardness. The structures are optimised for minimum mass. To handle thermal stresses and distortion, a very compact design of the APM was developed, integrating the bearings, position sensor and drive chain within a minimum structural length. The Rotary Joint is a unique design tailored to the APM using a common main bearing support. Special manufacturing processes have been tested and applied for manufacture of the very compact RJ, the first of its kind (dual X-Ka band) in European space development. The twin channels are arranged concentrically, permitting continuous 360° rotation. Maximum use of waveguide has been made to minimise the loss in the Ka-band frequency channel, and this leads to an unconventional design of the X-band channel. A specific effort and an extensive test program at ESTL in the UK have been put in place to identify suitable high temperature solutions for the lubrication of the RJ and APM bearings. The high temperature demands the use of a dry lubrication system. High working loads due to thermal stresses put an extra challenge on the life duration of the dry film lubrication. Lead lubrication was initially the preferred concept, but was later in the program substituted by an MoS2 film. A design life of 20,000 cycles at 250°C and elevated load has been demonstrated for the bearings with MoS2. Special attention has been paid to the materials in the stepper motor, using high temperature solder material and MoS2 dry lubrication in the bearings and gear train. The APM is designed for use with a high accuracy inductive position sensor with remote signal and amplifier electronics. Electrical signal transfer is via a high temperature Twist Capsule. The activity has included the design, manufacturing and testing in a representative environment of a breadboard model of the APM and of its integrated radio frequency RJ. The breadboard does not include a position sensor or the Twist Capsule.
The breadboard tests will include functional performance tests in air, vibration tests and thermal vacuum testing. The thermal vacuum test will include RF testing at high temperature combined with APM pointing performance tests.
Pan, Shin-Liang; Liang, Huey-Wen; Hou, Wen-Hsuan; Yeh, Tian-Shin
2014-11-01
To assess the responsiveness of one generic questionnaire, the Medical Outcomes Study Short Form-36 (SF-36), and one region-specific outcome measure, the Lower Extremity Functional Scale (LEFS), in patients with traumatic injuries of the lower extremities. A prospective, observational study of patients after traumatic injuries of the lower extremities. Assessments were performed at baseline and 3 months later. In-patients and out-patients in two university hospitals in Taiwan. A convenience sample of 109 subjects was evaluated and 94 (86%) were followed. Not applicable. Assessments of responsiveness with a distribution-based approach (effect size, standardized response mean [SRM], minimal detectable change) and an anchor-based approach (receiver operating characteristic [ROC] curve analysis). The LEFS and the physical component score (PCS) of the SF-36 were both responsive to global improvement, with fair-to-good accuracy in discriminating between participants with and without improvement. The area under the curve obtained by ROC analysis for the LEFS and the SF-36 PCS was similar (0.65 vs. 0.70, p=0.26). Our findings revealed comparable responsiveness of the LEFS and the PCS of the SF-36 in a sample of subjects with traumatic injuries of the lower limbs. Either type of functional measure would be suitable for use in clinical trials in which improvement in function is an endpoint of interest. Copyright © 2014 Elsevier Ltd. All rights reserved.
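The responsiveness statistics named above (effect size, standardized response mean, ROC area) can be computed as sketched below; the arrays are placeholders, not the study data.

import numpy as np
from sklearn.metrics import roc_auc_score

def responsiveness(baseline, followup, improved):
    baseline = np.asarray(baseline, float)
    followup = np.asarray(followup, float)
    change = followup - baseline
    effect_size = change.mean() / baseline.std(ddof=1)   # ES: mean change / SD at baseline
    srm = change.mean() / change.std(ddof=1)             # SRM: mean change / SD of change
    auc = roc_auc_score(improved, change)                 # discrimination of global improvement
    return effect_size, srm, auc

# es, srm, auc = responsiveness(lefs_baseline, lefs_3months, global_improvement_flag)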