Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets," caused by ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
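The report's specific nonlinear code is not given in the abstract, but the idea of matching a code to an asymmetric error channel can be sketched with a Berger code, a classic systematic code that detects all unidirectional errors (bit flips in one direction only); the function names below are illustrative:

```python
def berger_encode(bits):
    # Berger code: append the count of zeros in the data word, in binary.
    # Detects every unidirectional error pattern (e.g. only 1 -> 0 upsets),
    # which is the asymmetric case the abstract describes.
    k = len(bits)
    check_len = max(1, k.bit_length())   # enough bits to count up to k zeros
    zeros = bits.count(0)
    check = [(zeros >> i) & 1 for i in reversed(range(check_len))]
    return bits + check, check_len

def berger_check(word, check_len):
    data, check = word[:-check_len], word[-check_len:]
    zeros = data.count(0)
    expected = [(zeros >> i) & 1 for i in reversed(range(check_len))]
    return check == expected

data = [1, 0, 1, 1, 0, 1, 1, 1]
word, n = berger_encode(data)
assert berger_check(word, n)

# A 1 -> 0 upset in the data raises the zero count, but a unidirectional
# error cannot raise the stored checksum, so the mismatch is detected.
corrupted = word[:]
corrupted[0] = 0
assert not berger_check(corrupted, n)
```

The asymmetry pays off because the check field can be much shorter than the parity needed to detect arbitrary bidirectional errors of the same multiplicity.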
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease in response to an increase in BOX thickness for SOTB, while SERs in UTBB are independent of BOX thickness. We also discover that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
Overview of Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarthy, K. P.; Schwartz, H. R.; Smith, L. S.
1998-01-01
A fifth set of heavy ion single event effects (SEE) test data has been collected since the last IEEE publications (1,2,3,4) in the December issues for 1985, 1987, 1989, and 1991. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
Trends in Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarty, K. P.; Schwartz, H. R.; Swift, G. M.; Watson, R. K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.
1995-01-01
The sixth set of heavy ion single event effects (SEE) test data has been collected since the last IEEE publications in the December issues of the IEEE Transactions on Nuclear Science for 1985, 1987, 1989, and 1991, and the IEEE Workshop Record, 1993. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation, and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
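As a sketch of the checksum idea (the classic ABFT check for matrix-vector products, not the paper's New-Sum encoding itself), one maintains a column-sum row c = eᵀA alongside the matrix and verifies sum(Ax) = c·x after each product; all names below are illustrative:

```python
import copy

def matvec(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def encode_checksum(A):
    # Column-sum checksum row c = e^T A, computed once and kept with A.
    return [sum(col) for col in zip(*A)]

def checked_matvec(A, c, x, tol=1e-9):
    y = matvec(A, x)
    # ABFT invariant: sum(y) == c . x. A single arithmetic error in the
    # product, or a memory error in A or y, breaks the equality.
    if abs(sum(y) - sum(ci * xi for ci, xi in zip(c, x))) > tol:
        raise RuntimeError("soft error detected in matrix-vector product")
    return y

A = [[2.0, 1.0], [0.5, 3.0]]
c = encode_checksum(A)                  # encoded before any fault occurs
y = checked_matvec(A, c, [1.0, 2.0])    # -> [4.0, 6.5]

A_faulty = copy.deepcopy(A)
A_faulty[0][0] = 999.0                  # simulate a memory soft error
detected = False
try:
    checked_matvec(A_faulty, c, [1.0, 2.0])
except RuntimeError:
    detected = True
```

The paper's contribution is to decouple the checksum update from the computation and to make it robust inside iterative solvers; the invariant checked, however, is of this form.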
Latest trends in parts SEP susceptibility from heavy ions
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Smith, L. S.; Soli, George A.; Koga, R.; Kolasinski, W. A.
1989-01-01
JPL and Aerospace have collected a third set of heavy-ion single-event phenomena (SEP) test data since their last joint IEEE publications in December 1985 and December 1987. Trends in SEP susceptibility (e.g., soft errors and latchup) for state-of-the-art parts are presented. Results of the study indicate that hard technologies and unacceptably soft technologies can be flagged. In some instances, specific tested parts can be taken as candidates for key microprocessors or memories. As always with radiation test data, specific test data for qualified flight parts is recommended for critical applications.
Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J
2013-10-01
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). 
For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.
Update on parts SEE susceptibility from heavy ions [Single Event Effects]
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.
1991-01-01
JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
Accuracy study of a robotic system for MRI-guided prostate needle placement.
Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian
2013-09-01
Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom, and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. 
In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
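The "orthogonal error components" assumption in the two records above reduces to a Pythagorean decomposition, overall² = before² + due-to², so the reported due-to-insertion figure follows directly from the two measured errors:

```python
import math

# Indirect estimate of the needle-tissue interaction ("due-to-insertion")
# error, assuming the before-insertion and due-to-insertion components
# are orthogonal: overall^2 = before^2 + due_to^2.
overall = 2.5   # mm, average overall system error (from the abstract)
before = 1.3    # mm, average robotic-system (before-insertion) error
due_to = math.sqrt(overall**2 - before**2)
print(round(due_to, 2))  # ~2.14 mm, consistent with the ~2.13 mm reported
```

The small discrepancy with the quoted 2.13 mm presumably comes from rounding of the input averages.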
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpointing. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
An Investigation into Soft Error Detection Efficiency at Operating System Level
Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two forms: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at the hardware and software levels for their alleviation. However, these works share the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance. PMID:24574894
An investigation into soft error detection efficiency at operating system level.
Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. The transient errors, or soft errors, emerge in two forms: control flow errors (CFEs) and data errors. Valuable research results have already appeared in the literature at the hardware and software levels for their alleviation. However, these works share the basic assumption that the operating system is reliable, and the focus is on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application level components. Results show that soft errors in operating system components affect both operating system and application level components. Therefore, by providing endurance to operating system level components against soft errors, both operating system and application level components gain tolerance.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at a moderate increase in the computational cost.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience
NASA Astrophysics Data System (ADS)
Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.
2017-06-01
A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro for further enhanced soft-error resilience by integrating the guard-gate technique. The proposed design, as well as reference Quatro and regular flip-flops, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization results of their alpha and heavy ions soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watanabe, Y., E-mail: watanabe@aees.kyushu-u.ac.jp; Abe, S.
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors with decreasing critical charge. It is also found that the high energy component from 10 MeV up to several hundreds of MeV in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10^-8 to 10^-6 sq cm/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
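A back-of-the-envelope use of such a cross-section converts an orbital proton flux into an expected upset rate; only the 10^-8 to 10^-6 sq cm/proton range comes from the abstract, and the flux value below is an assumed illustration:

```python
# Upset-rate estimate from a measured per-device cross-section:
#   rate = sigma * flux
sigma = 1e-7   # cm^2 per proton, within the 1e-8..1e-6 range measured
flux = 1e4     # protons / cm^2 / s, assumed environment (illustrative)
upsets_per_second = sigma * flux   # ~1e-3 upsets per second per device
```

The spread of two orders of magnitude in measured cross-sections translates directly into a two-order-of-magnitude spread in predicted upset rates, which is why per-device flight qualification data is recommended.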
Thermodynamic properties of a hard/soft-magnetic bilayer model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taaev, T. A., E-mail: taaev89@mail.ru; Khizriev, K. Sh.; Murtazaev, A. K.
2016-05-15
A model for describing the thermodynamic properties of a hard/soft-magnetic bilayer is proposed and thoroughly studied using the Monte Carlo method. Temperature dependences of the heat capacity, total magnetization, magnetizations of the hard- and soft-magnetic layers, total magnetic susceptibility, and susceptibilities of the hard- and soft-magnetic layers have been calculated by this method in the framework of the proposed model. The obtained temperature dependences of the heat capacity and magnetic susceptibility display double maxima that result from the two phase transitions that take place in the system. The influence of system dimensions on the thermodynamic properties of the model has been considered.
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies, as predicted by Moore's Law (1965), which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power-supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or Soft Error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). 
The semiconductor industry has been struggling with SEEs and is taking necessary measures in order to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in the system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B (n,alpha) 7Li reaction products. Both of the particles produced have the capability of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner of the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in the available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers to the semiconductor memory architectures. This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. 
Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with the neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. NISC design considerations in this dissertation include the effects of device scaling, 10B content in the BPSG layer, incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device scaling simulations showed that any further increase in the thickness of the BPSG layer beyond 2 μm causes self-shielding of the incoming neutrons by the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 μm away from the depletion region in the node, there are no soft errors in the node, due to the fact that both of the reaction products have lower ranges in the silicon or any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location would be within the range of the alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 μm to increase the detection efficiency of the NISC. 
Incoming neutron energy was also investigated by simulations, and the results showed that NISC neutron detection efficiency is related to the neutron cross-sections of the 10B (n,alpha) 7Li reaction; e.g., the ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has poor efficiency due to the low neutron cross-sections, becomes almost impossible at higher altitudes where the cosmic ray fluxes and their energies are higher. NISCSAT simulations regarding the dependency of NISC soft errors on temperature and electromagnetic fields show that there are no significant effects on NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators, since the incoming neutrons scatter away before reaching the memory surface.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever-increasing due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot well balance critical-path delay, area and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by enhancing the self-checking logic in the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect particle-induced soft errors with up to 100% coverage and recover from nearly 95% of them; the remaining 5% enter a specific trap.
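SRDP's detect-and-retry behavior can be caricatured in software; this is an illustrative sketch of the DMR compare-and-rollback idea, not the paper's hardware design, and `faulty_step`, the retry loop, and the injected fault are all invented here:

```python
def dmr_execute(step, state, retries=5):
    # Dual modular redundancy in miniature: run each pipeline step twice
    # from the same saved state and compare the results. A mismatch
    # signals a transient soft error; recovery re-executes the step,
    # loosely mirroring SRDP's roll-back-and-retry scheme.
    for _ in range(retries):
        a = step(dict(state))
        b = step(dict(state))
        if a == b:
            return a
    raise RuntimeError("persistent mismatch: not a transient soft error")

calls = {"n": 0}
def faulty_step(state):
    calls["n"] += 1
    result = {"acc": state["acc"] + 1}
    if calls["n"] == 1:          # inject a single transient upset
        result["acc"] ^= 0b100
    return result

out = dmr_execute(faulty_step, {"acc": 0})
print(out)  # {'acc': 1} -- the transient error was detected and masked
```

In hardware, the two copies run concurrently in duplicated pipelines and the comparison happens at the stage registers, so detection costs area rather than the re-execution time this sketch pays.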
NASA Astrophysics Data System (ADS)
Yilmaz, Işik; Marschalko, Marian; Bednarik, Martin
2013-04-01
The paper presented herein compares and discusses the use of bivariate, multivariate and soft computing techniques for collapse susceptibility modelling. Conditional probability (CP), logistic regression (LR) and artificial neural networks (ANN) models, representing the bivariate, multivariate and soft computing techniques, were used in GIS based collapse susceptibility mapping in an area of the Sivas basin (Turkey). Collapse-related factors, directly or indirectly related to the causes of collapse occurrence, such as distance from faults, slope angle and aspect, topographical elevation, distance from drainage, topographic wetness index (TWI), stream power index (SPI), Normalized Difference Vegetation Index (NDVI) by means of vegetation cover, and distance from roads and settlements were used in the collapse susceptibility analyses. In the last stage of the analyses, collapse susceptibility maps were produced from the models and then compared by means of their validations. Although the Area Under Curve (AUC) values obtained from all three models showed that the map obtained from the soft computing (ANN) model appears more accurate than the other models, the accuracies of all three models can be considered relatively similar. The results also showed that conditional probability is a useful method for the preparation of collapse susceptibility maps and is highly compatible with GIS operating features.
Soft error evaluation and vulnerability analysis in the Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
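For reference, the figures of merit named above follow from standard constant-failure-rate formulas; the rates below are assumed illustrations, not values from the paper (which computed them with Isograph Reliability Workbench):

```python
# Constant-failure-rate reliability arithmetic for a simple fault tree.
lambda_cpu = 2e-6   # failures per hour (assumed value)
lambda_mem = 5e-6   # failures per hour (assumed value)

# Series (OR-gate) fault tree: the system fails if any block fails,
# so the block failure rates add.
lam_sys = lambda_cpu + lambda_mem
mttf = 1.0 / lam_sys                       # mean time to failure, hours

mu = 1e-2                                  # repair rate per hour (assumed)
unavailability = lam_sys / (lam_sys + mu)  # steady-state unavailability
print(round(mttf), round(unavailability, 6))
```

AND-gates (redundant blocks) multiply probabilities of failure instead, which is where the qualitative cut-set analysis of the fault tree comes in.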
Liu, J; Chen, Y; Bao, X M; Ling, X L; Ding, J P; Zhang, Z K
2017-05-23
Objective: To explore the diagnostic performance of susceptibility weighted imaging (SWI) in distinguishing benign from malignant soft tissue tumors, with pathological correlation. Methods: Sixty-eight patients with soft tissue tumors, who had received no previous treatment or invasive examination, underwent routine preoperative MRI and SWI scanning. The grade and distribution of intratumoral susceptibility signal intensity (ITSS) and the proportion of tumor volume it occupied were observed. The pathological results were included for comparative analysis. Results: Forty of the 68 patients had benign tumors and 28 had malignant tumors. 72.5% (29/40) of patients with benign soft tissue tumors were ITSS grade 1 or ITSS grade 3 (hemangioma). 89.3% (25/28) of patients with malignant soft tissue tumors were ITSS grade 2 or ITSS grade 3. The difference was statistically significant (P<0.01). The ITSS in patients with benign soft tissue tumors was dominated by peripheral distribution and diffuse distribution (hemangioma), accounting for 90.0% (36/40), whereas the ITSS in patients with malignant soft tissue tumors was mainly distributed in the central region, accounting for 78.6% (22/28). The difference was statistically significant (P<0.01). The proportion of tumor volume occupied by ITSS in benign soft tissue tumors was <1/3 or >2/3 (hemangioma), accounting for 90.0% (36/40), while in malignant soft tissue tumors it was predominantly <1/3, accounting for 82.1% (23/28). The difference was statistically significant (P<0.01). Conclusion: SWI is sensitive in displaying veins and blood metabolites in soft tissue lesions, which is helpful for the differential diagnosis of benign and malignant soft tissue tumors.
2009-01-01
Background Increasing reports of carbapenem-resistant Acinetobacter baumannii infections are of serious concern. Reliable susceptibility testing results remain a critical issue for the clinical outcome. Automated systems are increasingly used for species identification and susceptibility testing. This study was organized to evaluate the accuracy of three widely used automated susceptibility testing methods for testing the imipenem susceptibilities of A. baumannii isolates, by comparison with validated test methods. Methods A total of 112 selected clinical isolates of A. baumannii collected between January 2003 and May 2006 were tested to confirm imipenem susceptibility results. Strains were tested against imipenem by the reference broth microdilution (BMD), disk diffusion (DD), Etest, BD Phoenix, MicroScan WalkAway and Vitek 2 automated systems. Data were analysed by comparing the results from each test method to those produced by the reference BMD test. Results MicroScan correctly identified all A. baumannii strains, while Vitek 2 failed to identify one strain and Phoenix failed to identify two strains and misidentified two strains. Eighty-seven of the strains (78%) were resistant to imipenem by BMD. Etest, Vitek 2 and BD Phoenix produced acceptable error rates when tested against imipenem. Etest showed the best performance with only two minor errors (1.8%). Vitek 2 produced eight minor errors (7.2%). BD Phoenix produced three major errors (2.8%). DD produced two very major errors (1.8%) (slightly (0.3%) above the acceptable limit) and three major errors (2.7%). MicroScan showed the worst performance in susceptibility testing, with unacceptable error rates: 28 very major errors (25%) and 50 minor errors (44.6%). Conclusion Reporting errors for A. baumannii against imipenem do exist in susceptibility testing systems.
We suggest that clinical laboratories using the MicroScan system for routine testing consider using a second, independent antimicrobial susceptibility testing method to validate imipenem susceptibility. Etest, wherever available, may be used as an easy method to confirm imipenem susceptibility. PMID:19291298
2011-10-01
…Phoenix, and Vitek 2 systems). Discordant results were categorized as very major errors (VME), major errors (ME), and minor errors (mE). Report dated 01 Oct 2011: Carbapenem Susceptibility Testing Errors Using Three Automated Systems. The Vitek 2 method was the only automated susceptibility method in our study that satisfied the FDA standards required for device approval (11).
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF both corrected for attenuation using reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors between non-TOF and TOF MRAC reconstructions were compared with their reference CT-based attenuation correction reconstructions. The bias was locally and globally evaluated using volumes of interest (VOIs) defined on lesions and normal tissues and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). 
Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors.
It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local-memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
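The lockstep self-checking pair described above can be sketched as follows; the replica functions and the injected bit flip are illustrative toys, not the proposed FPGA design:

```python
def lockstep_step(cpu_a, cpu_b, inputs):
    """Run both replicas on identical inputs and compare their outputs.
    A mismatch raises the pair's error signal; the surrounding system
    would then roll back to the last checkpoint (the recovery cache)."""
    out_a = cpu_a(inputs)
    out_b = cpu_b(inputs)
    return out_a, out_a != out_b   # (output, error_signal)

# Toy replicas: the same function, one with an injected single-bit upset.
good = lambda x: x + 1
flipped = lambda x: (x + 1) ^ 0x4   # SEU flips bit 2 of the result

out, err = lockstep_step(good, good, 41)
assert (out, err) == (42, False)    # agreement: output may be committed
out, err = lockstep_step(good, flipped, 41)
assert err                          # mismatch detected: do not commit
```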
Modern Soft Tissue Pathology | Center for Cancer Research
This book comprehensively covers modern soft tissue pathology and includes both tumors and non-neoplastic entities. Soft tissues make up a large bulk of the human body, and they are susceptible to a wide range of diseases. Many soft-tissue tumors are biologically very aggressive, and the chance of them metastasizing to vital organs is quite high. In recent years, the outlook
NASA Astrophysics Data System (ADS)
Tien Bui, Dieu; Pradhan, Biswajeet; Nampak, Haleh; Bui, Quang-Thanh; Tran, Quynh-An; Nguyen, Quoc-Phi
2016-09-01
This paper proposes a new artificial intelligence approach based on neural fuzzy inference system and metaheuristic optimization for flood susceptibility modeling, namely MONF. In the new approach, the neural fuzzy inference system was used to create an initial flood susceptibility model and then the model was optimized using two metaheuristic algorithms, Evolutionary Genetic and Particle Swarm Optimization. A high-frequency tropical cyclone area of the Tuong Duong district in Central Vietnam was used as a case study. First, a GIS database for the study area was constructed. The database, which includes 76 historical flood inundated areas and ten flood influencing factors, was used to develop and validate the proposed model. Root Mean Square Error (RMSE), Mean Absolute Error (MAE), Receiver Operating Characteristic (ROC) curve, and area under the ROC curve (AUC) were used to assess the model performance and its prediction capability. Experimental results showed that the proposed model has high performance on both the training (RMSE = 0.306, MAE = 0.094, AUC = 0.962) and validation datasets (RMSE = 0.362, MAE = 0.130, AUC = 0.911). The usability of the proposed model was evaluated by comparing it with state-of-the-art benchmark soft computing techniques such as J48 Decision Tree, Random Forest, Multi-layer Perceptron Neural Network, Support Vector Machine, and Adaptive Neuro Fuzzy Inference System. The results show that the proposed MONF model outperforms the above benchmark models; we conclude that the MONF model is a new alternative tool that should be used in flood susceptibility mapping. The results of this study are useful for planners and decision makers for the sustainable management of flood-prone areas.
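The validation metrics used above (RMSE, MAE, AUC) can be computed as in the following sketch; the labels and susceptibility scores are hypothetical, not the study's data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and predicted values."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity:
    the probability that a random positive outranks a random negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 1]            # hypothetical flood / no-flood cells
scores = [0.9, 0.8, 0.3, 0.4, 0.7]  # model susceptibility scores
print(rmse(labels, scores), mae(labels, scores), auc(labels, scores))
```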
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
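The decomposition described in the abstract can be written compactly (notation follows the abstract):

```latex
\sigma_s^2 \;=\; \sigma_0^2 \,+\, \sigma_e^2
```

where σ_0² is the error under perfect knowledge of the channel impulse response and noise statistics, and σ_e² is the excess error contributed by channel-estimation errors.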
Increased ABA sensitivity results in higher seed dormancy in soft white spring wheat cultivar ‘Zak’
USDA-ARS?s Scientific Manuscript database
As a strategy to increase the seed dormancy of soft white wheat, mutants with increased sensitivity to the plant hormone abscisic acid (ABA) were identified in mutagenized grain of soft white spring wheat 'Zak'. Lack of seed dormancy is correlated with increased susceptibility to preharvest sprouti...
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with tens or hundreds of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited to detecting and correcting MPI data corruption. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% as compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
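The redundancy idea can be sketched as a majority vote over replica message payloads; this is an illustrative sketch of the voting principle, not RedMPI's actual consistency protocol:

```python
from collections import Counter

def vote(messages):
    """Compare redundant copies of one MPI message payload.

    With triple redundancy a single corrupted replica is both detected
    and corrected by majority; with double redundancy a mismatch can
    only be detected, not corrected."""
    if len(set(messages)) == 1:
        return messages[0], False               # agreement: no corruption seen
    winner, count = Counter(messages).most_common(1)[0]
    if count > len(messages) // 2:
        return winner, True                     # corrected by majority vote
    raise RuntimeError("corruption detected but not correctable")

assert vote(["payload", "payload", "payload"]) == ("payload", False)
assert vote(["payload", "payload", "corrupt"]) == ("payload", True)
```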
Practicality of Evaluating Soft Errors in Commercial sub-90 nm CMOS for Space Applications
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; LaBel, Kenneth A.
2010-01-01
The purpose of this presentation is to highlight the evolution of space memory evaluation, review recent developments regarding low-energy proton direct-ionization soft errors, assess current space memory evaluation challenges, including the increasing number of non-volatile technology choices, and discuss related testing and evaluation complexities.
Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.
Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin
2018-03-01
The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to faithfully recover the underlying susceptibility from phase data. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single-orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structural similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best-scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
21 CFR 522.56 - Amikacin sulfate injection.
Code of Federal Regulations, 2010 CFR
2010-04-01
...) caused by susceptible strains of Escherichia coli and Proteus spp. and skin and soft tissue infections caused by susceptible strains of Pseudomonas spp. and E. coli. (3) Limitations. The drug is administered...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, K.; Ohmi, K.; Tottori University Electronic Display Research Center, 101 Minami4-chome, Koyama-cho, Tottori-shi, Tottori 680-8551
With the increasing density of memory devices, the issue of soft errors generated by cosmic rays is becoming more and more serious. Therefore, the irradiation resistance of resistive random access memory (ReRAM) to cosmic radiation has to be elucidated for practical use. In this paper, we investigated the data retention characteristics of ReRAM with a Pt/NiO/ITO structure under ultraviolet irradiation. Soft errors were confirmed to be caused by ultraviolet irradiation in both the low- and high-resistance states. An analysis of the wavelength dependence of light irradiation on the data retention characteristics suggested that the errors were caused by electronic excitation from the valence band to the conduction band and to the energy levels generated by the introduction of oxygen vacancies. Based on statistically estimated soft error rates, the errors were suggested to be caused by the cohesion and dispersion of oxygen vacancies owing to the generation of electron-hole pairs and valence changes under the ultraviolet irradiation.
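The wavelength-dependence argument turns on whether the photon energy E = hc/λ exceeds the relevant excitation energy. A sketch, assuming a NiO band gap of roughly 3.7 eV (a commonly cited literature value, not taken from the abstract):

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = hc/lambda, returned in eV."""
    return H * C / (wavelength_nm * 1e-9) / EV

GAP_EV = 3.7  # assumed NiO band gap (illustrative)
for nm in (254, 365, 405):
    e = photon_energy_ev(nm)
    print(f"{nm} nm -> {e:.2f} eV, band-to-band excitation possible: {e >= GAP_EV}")
```

Sub-gap wavelengths can still cause errors via the vacancy-induced levels inside the gap, which is consistent with the abstract's interpretation.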
Morozovska, Anna N.; Vysochanskii, Yulian M.; Varenyk, Oleksandr V.; ...
2015-09-29
The impact of the flexoelectric effect on the generalized susceptibility and soft phonon dispersion is not well known in the long-range-ordered phases of ferroics. Within the Landau-Ginzburg-Devonshire approach, we obtained analytical expressions for the generalized susceptibility and phonon dispersion relations in the ferroelectric phase. The joint action of the static and dynamic flexoelectric effects induces nondiagonal components of the generalized susceptibility, whose amplitude is proportional to the convolution of the spontaneous polarization with the flexocoupling constants. The flexocoupling essentially broadens the k spectrum of the generalized susceptibility and leads to an additional “pushing away” of the optical and acoustic soft mode phonon branches. The degeneracy of the transverse optical and acoustic modes disappears in the ferroelectric phase, in comparison with the paraelectric phase, due to the joint action of flexoelectric coupling and ferroelectric nonlinearity. The results obtained may be important mainly for theoretical analyses of a broad spectrum of experimental data, including neutron and Brillouin scattering.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system was developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
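The Mahalanobis-distance test for model error can be sketched as below; the innovation covariance, innovation values, and chi-square threshold are hypothetical illustrations, not the paper's filter:

```python
import numpy as np

def model_error_detected(innovation, S, threshold):
    """Flag a contact-model error when the squared Mahalanobis distance
    of the filter innovation, d^2 = v^T S^{-1} v, exceeds a chi-square
    threshold. On a flag, the filter would inflate the predicted state
    covariance by a scaling factor (here omitted)."""
    d2 = float(innovation.T @ np.linalg.inv(S) @ innovation)
    return d2, d2 > threshold

S = np.diag([0.5, 0.5])            # hypothetical innovation covariance
nominal = np.array([[0.1], [0.2]])  # innovation under a good model
faulty = np.array([[3.0], [2.5]])   # innovation under model mismatch
thr = 5.99                          # chi-square 95% quantile, 2 dof

print(model_error_detected(nominal, S, thr))  # small distance: no flag
print(model_error_detected(faulty, S, thr))   # large distance: flag raised
```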
NASA Astrophysics Data System (ADS)
Lakshmi, B. V.; Gawali, Praveen B.; Deenadayalan, K.; Ramesh, D. S.
2017-04-01
Rock magnetic and anisotropy of magnetic susceptibility (AMS) studies of earthquake-affected soft sediments: examples from Shillong and Latur (Deccan Trap), India. Combined rock magnetism and AMS studies on earthquake-induced soft and non-soft sediments from Shillong and Latur, India, have yielded interesting results. The morphology of hysteresis loops, the pattern of isothermal remanent magnetization (IRM) acquisition, and the temperature dependence of susceptibility indicate that titanomagnetite/magnetite is the main magnetic carrier in these sediments. We also analyzed the AMS of liquefaction features within the seismically active Dauki fault, Shillong Plateau. We found that the host (non-liquefied) sediments are characterized by an oblate AMS ellipsoid, whereas the liquefied sediments are characterized by a triaxial AMS ellipsoid with a well-grouped maximum susceptibility axis K1 (NNW-SSE trend). Field evidence and AMS analysis indicate that most of these features were emplaced by injection, inferred to be due to seismically triggered fluidization. The AMS of deformed and undeformed unconsolidated clay samples from the Deccan Trap terrain at the 2000-year-old paleoearthquake site of Ther village, Maharashtra, India, was also studied. Such deposits are rare in the compact basaltic terrain, which makes the results acquired very important. The undeformed clay samples exhibit a typical sedimentary fabric with an oblate AMS ellipsoid, whereas the deformed samples are tightly grouped in the inferred compression direction, probably effected by an earthquake, exhibiting prolate as well as oblate AMS ellipsoids.
Rock magnetic and AMS methods can help in understanding the response of different sediments to the regional deformational processes active in the Himalayan region, and possibly to local deformational activity in the compact Deccan Trap region. The accumulating stress and strain directions can be delineated to infer the strike of the stress-accumulating forces. These studies can be used to build a chronology of past earthquakes.
Pan, Hong-Wei; Li, Wei; Li, Rong-Guo; Li, Yong; Zhang, Yi; Sun, En-Hua
2018-01-01
Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The correct rate of direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed a categorical agreement rate of 96.89%, with minor error, major error, and very major error rates of 2.63, 0.24, and 0.24%, respectively. Categorical agreement for antimicrobials against gram-positive bacteria was 92.81%, with minor error, major error, and very major error rates of 4.51, 1.22, and 1.46%, respectively. These results indicated that our direct antibiotic susceptibility analysis method worked well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
…benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). … an out-of-order core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience questions. … analysis of the effects of soft errors on application benchmarks provides a highly effective soft error resilience approach.
Alpha particle-induced soft errors in microelectronic devices. I
NASA Astrophysics Data System (ADS)
Redman, D. J.; Sega, R. M.; Joseph, R.
1980-03-01
The article provides a tutorial review and trend assessment of the problem of alpha particle-induced soft errors in VLSI memories. Attention is given to an analysis of the design evolution of modern ICs, and the characteristics of alpha particles and their origin in IC packaging are reviewed. Finally, the process of an alpha particle penetrating an IC is examined.
NASA Technical Reports Server (NTRS)
Carreno, Victor A.; Choi, G.; Iyer, R. K.
1990-01-01
A simulation study is described which predicts the susceptibility of an advanced control system to electrical transients resulting in logic errors, latched errors, error propagation, and digital upset. The system is based on a custom-designed microprocessor and it incorporates fault-tolerant techniques. The system under test and the method to perform the transient injection experiment are described. Results for 2100 transient injections are analyzed and classified according to charge level, type of error, and location of injection.
Gene expression and metabolism preceding soft scald, a chilling injury of 'Honeycrisp' apple fruit.
Leisso, Rachel S; Gapper, Nigel E; Mattheis, James P; Sullivan, Nathanael L; Watkins, Christopher B; Giovannoni, James J; Schaffer, Robert J; Johnston, Jason W; Hanrahan, Ines; Hertog, Maarten L A T M; Nicolaï, Bart M; Rudell, David R
2016-10-12
'Honeycrisp' is an apple cultivar that is susceptible to soft scald, a chilling injury expressed as necrotic patches on the peel. Improved understanding of the metabolism associated with the disorder would contribute to developing more effective management strategies for apple storage. It was expected that specific gene expression and specific metabolite levels in the peel would be linked with soft scald risk at harvest and/or at specific time points during cold storage. Fruit from nine 'Honeycrisp' apple orchards that would eventually develop different incidences of soft scald between 4 and 8 weeks of cold air storage were used to contrast and determine differential transcriptomic and metabolomic changes during storage. Untargeted metabolic profiling revealed changes in a number of distinct pathways preceding and concurrent with soft scald symptom development, including elevated γ-aminobutyric acid (GABA), 1-hexanol, acylated steryl glycosides, and free p-coumaryl acyl esters. At harvest, levels of sesquiterpenoid and triterpenoid acyl esters were relatively higher in the peel of fruit that did not later develop the disorder. RNA-seq-driven gene expression profiling highlighted the possible involvement of genes and associated metabolic processes in soft scald development. These included elevated expression of genes involved in lipid peroxidation and phenolic metabolism in fruit with soft scald, and in isoprenoid/brassinosteroid metabolism in fruit that did not develop soft scald. Expression of other stress-related genes in fruit that developed soft scald included chlorophyll catabolism, cell wall loosening, and lipid transport, while superoxide dismutases were up-regulated in fruit that did not develop the disorder. This study delineates the sequential transcriptomic and metabolomic changes preceding soft scald symptom development.
Changes differed depending on the susceptibility of fruit to the disorder and could be attributed to key stress-related and stress-mediating pathways.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
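One way to picture the separation between winner selection and error detection is a fixed-priority arbiter over parity-checked entries; the priority rule and parity scheme below are illustrative assumptions, not the patented circuit:

```python
def parity(bits: int) -> int:
    """Even parity of an integer's binary representation."""
    return bin(bits).count("1") % 2

def arbitrate(requests):
    """Select a winner over (id, info, parity_bit) entries.

    Winner selection (here: lowest id wins, an assumed priority rule)
    does not inspect soft errors; a separate checker flags entries
    whose stored parity no longer matches their info field."""
    winner = min(requests, key=lambda r: r[0])
    errors = [rid for rid, info, p in requests if parity(info) != p]
    return winner[0], errors

# Entry for id 1 has a stale parity bit, modelling a soft error.
reqs = [(2, 0b1011, 1), (0, 0b0110, 0), (1, 0b0111, 0)]
print(arbitrate(reqs))  # winner id 0; id 1 flagged by the checker
```

Decoupling the two paths keeps the selection logic off the error-check critical path, which matches the low-delay goal stated in the title.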
Wang, Peng; Bowler, Sarah L; Kantz, Serena F; Mettus, Roberta T; Guo, Yan; McElheny, Christi L; Doi, Yohei
2016-12-01
Treatment options for infections due to carbapenem-resistant Acinetobacter baumannii are extremely limited. Minocycline is a semisynthetic tetracycline derivative with activity against this pathogen. This study compared susceptibility testing methods that are used in clinical microbiology laboratories (Etest, disk diffusion, and Sensititre broth microdilution methods) for testing of minocycline, tigecycline, and doxycycline against 107 carbapenem-resistant A. baumannii clinical isolates. Susceptibility rates determined with the standard broth microdilution method using cation-adjusted Mueller-Hinton (MH) broth were 77.6% for minocycline and 29% for doxycycline, and 92.5% of isolates had tigecycline MICs of ≤2 μg/ml. Using MH agar from BD and Oxoid, susceptibility rates determined with the Etest method were 67.3% and 52.3% for minocycline, 21.5% and 18.7% for doxycycline, and 71% and 29.9% for tigecycline, respectively. With the disk diffusion method using MH agar from BD and Oxoid, susceptibility rates were 82.2% and 72.9% for minocycline and 34.6% and 34.6% for doxycycline, respectively, and rates of MICs of ≤2 μg/ml were 46.7% and 23.4% for tigecycline. In comparison with the standard broth microdilution results, very major error rates were low (∼2.8%) for all three drugs across the methods, but major error rates were higher (∼5.6%), especially with the Etest method. For minocycline, minor error rates ranged from 14% to 37.4%. For tigecycline, minor error rates ranged from 6.5% to 69.2%. The majority of minor errors were due to susceptible results being reported as intermediate. For minocycline susceptibility testing of carbapenem-resistant A. baumannii strains, very major errors are rare, but major and minor errors overcalling strains as intermediate or resistant occur frequently with susceptibility testing methods that are feasible in clinical laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
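The categorical error definitions used in the study above (very major, major, minor) can be computed mechanically from paired S/I/R classifications. A minimal sketch follows; the breakpoints and MIC values are hypothetical, for illustration only, and are not taken from the study.

```python
# Sketch: computing categorical agreement and error rates between a
# reference susceptibility method and a test method.
# Breakpoints and isolate MICs below are hypothetical.

def categorize(mic, susceptible_bp, resistant_bp):
    """Map an MIC (ug/ml) to S/I/R using CLSI-style breakpoints."""
    if mic <= susceptible_bp:
        return "S"
    if mic >= resistant_bp:
        return "R"
    return "I"

def error_rates(reference, test):
    """Very major (R->S), major (S->R), and minor (I involved) error rates."""
    n = len(reference)
    vme = sum(1 for r, t in zip(reference, test) if r == "R" and t == "S")
    me = sum(1 for r, t in zip(reference, test) if r == "S" and t == "R")
    minor = sum(1 for r, t in zip(reference, test)
                if r != t and "I" in (r, t))
    agree = sum(1 for r, t in zip(reference, test) if r == t)
    return {"agreement": agree / n, "VME": vme / n,
            "ME": me / n, "minor": minor / n}

# Hypothetical MICs for 8 isolates, breakpoints S <= 4 and R >= 16 ug/ml
ref_mics = [2, 4, 8, 16, 32, 2, 4, 8]
test_mics = [2, 8, 8, 4, 32, 2, 16, 16]
ref = [categorize(m, 4, 16) for m in ref_mics]
test = [categorize(m, 4, 16) for m in test_mics]
print(error_rates(ref, test))
```

Very major errors are weighted as the most dangerous because a resistant isolate reported as susceptible can lead directly to treatment failure.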
Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan
2011-01-01
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. 
We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
Multi-Spectral Solar Telescope Array. II - Soft X-ray/EUV reflectivity of the multilayer mirrors
NASA Technical Reports Server (NTRS)
Barbee, Troy W., Jr.; Weed, J. W.; Hoover, Richard B.; Allen, Maxwell J.; Lindblom, Joakim F.; O'Neal, Ray H.; Kankelborg, Charles C.; Deforest, Craig E.; Paris, Elizabeth S.; Walker, Arthur B. C., Jr.
1991-01-01
The Multispectral Solar Telescope Array is a rocket-borne observatory which encompasses seven compact soft X-ray/EUV, multilayer-coated, and two compact far-UV, interference film-coated, Cassegrain and Ritchey-Chretien telescopes. Extensive measurements are presented on the efficiency and spectral bandpass of the X-ray/EUV telescopes. Attention is given to systematic errors and measurement errors.
Soft tick sampling and collection
USDA-ARS?s Scientific Manuscript database
Several soft tick species in the genus Ornithodoros are vectors of African swine fever virus (ASFV) in nature, or known to be susceptible to infection. African swine fever (ASF) caused by ASFV is considered one of the most serious transboundary swine diseases because of its high lethality for pigs, ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana
The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in static RAM (SRAM), functional data are stored in dedicated Block RAM (BRAM), and functional state logic resides in flip-flops. Single event upsets (SEUs) due to the ionizing radiation of neutrons cause soft errors: unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and soft error repair, when possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
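The ECC-based detect-and-repair cycle described above ("scrubbing") can be illustrated with the simplest single-error-correcting code. The sketch below uses a Hamming(7,4) code as a stand-in for the principle; it is not the FPGA's actual SEM or BRAM ECC logic.

```python
# Sketch of the ECC idea behind memory scrubbing: a Hamming(7,4) code
# detects and corrects a single bit-flip (SEU) in a stored word.
# Illustration of the principle only, not the Xilinx SEM implementation.

def hamming74_encode(d):
    """Encode 4 data bits [d0..d3]; parity bits land at positions 1, 2, 4."""
    d0, d1, d2, d3 = d
    p0 = d0 ^ d1 ^ d3
    p1 = d0 ^ d2 ^ d3
    p2 = d1 ^ d2 ^ d3
    return [p0, p1, d0, p2, d1, d2, d3]  # 1-indexed positions 1..7

def hamming74_scrub(word):
    """Recompute the syndrome; if nonzero, flip the indicated bit in place."""
    syndrome = 0
    for i, bit in enumerate(word, start=1):
        if bit:
            syndrome ^= i
    if syndrome:  # nonzero syndrome = 1-indexed position of the flipped bit
        word[syndrome - 1] ^= 1
    return word

stored = hamming74_encode([1, 0, 1, 1])
stored[4] ^= 1                      # simulate an SEU: flip one stored bit
repaired = hamming74_scrub(stored)
print(repaired == hamming74_encode([1, 0, 1, 1]))  # True
```

Real BRAM protection typically adds a second parity bit (SECDED) so that double-bit errors are at least detected, since a scrubber that "corrects" a double error would silently corrupt data.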
Utilization of robotic-arm assisted total knee arthroplasty for soft tissue protection.
Sultan, Assem A; Piuzzi, Nicolas; Khlopas, Anton; Chughtai, Morad; Sodhi, Nipun; Mont, Michael A
2017-12-01
Despite the well-established success of total knee arthroplasty (TKA), iatrogenic ligamentous and soft tissue injuries are infrequent but potentially devastating complications that can compromise clinical outcomes. These injuries are often related to technical errors and excessive soft tissue manipulation, particularly during bony resections. Recently, robotic-arm assisted TKA was introduced and demonstrated promising results, with potential technical advantages over manual surgery in implant positioning and mechanical accuracy. Furthermore, soft tissue protection is an additional potential advantage offered by these systems, which can reduce the inadvertent human technical errors encountered during standard manual resections. Therefore, given the relative paucity of literature, we attempted to answer the following questions: 1) Does robotic-arm assisted TKA offer a technical advantage that allows enhanced soft tissue protection? 2) What is the available evidence on soft tissue protection? Recently introduced robotic-arm assisted TKA systems with advanced technology showed promising clinical outcomes and soft tissue protection at short- and mid-term follow-up, with results comparable or superior to manual TKA. In this review, we explore this dimension of robotics in TKA and examine the soft tissue related complications currently reported in the literature.
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for sub-130 nm Technologies
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Michael M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity and energy deposition pulse height analysis to study how it affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments as well as the October 1989 and July 2000 solar particle events.
Hard sphere perturbation theory for thermodynamics of soft-sphere model liquid
NASA Astrophysics Data System (ADS)
Mon, K. K.
2001-09-01
It is a long-standing consensus in the literature that hard sphere perturbation theory (HSPT) is not accurate for dense soft sphere model liquids interacting with repulsive r^-n pair potentials for small n. In this paper, we show that if the intrinsic error of HSPT for soft sphere model liquids is accounted for, then this is not completely true. We present results for n = 4, 6, 9, and 12 which indicate that even first-order variational HSPT can provide free energy upper bounds to within a few percent at densities near freezing, when corrected for the intrinsic error of the HSPT.
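The first-order variational bound referred to above is the Gibbs-Bogoliubov inequality with a hard-sphere reference system; a standard schematic form (not transcribed from the paper) is:

```latex
% The free energy of the soft-sphere system, with pair potential
% u(r) = \epsilon (\sigma/r)^n, is bounded above by the hard-sphere
% reference free energy plus the reference-ensemble average of the
% soft potential, minimized over the hard-sphere diameter d.
% (For a hard-sphere reference, u_{HS} vanishes on all configurations
% the reference ensemble can reach, so the perturbation average
% reduces to the average of u alone.)
F \;\le\; \min_{d}\,\Bigl[\, F_{\mathrm{HS}}(d)
      \;+\; \Bigl\langle \sum_{i<j} u(r_{ij}) \Bigr\rangle_{\mathrm{HS}(d)} \Bigr],
\qquad u(r) = \epsilon\,\bigl(\sigma/r\bigr)^{n}.
```

The "intrinsic error" the abstract refers to is the gap between this bound and the true free energy that remains even at the optimal d, which the paper argues should be separated out before judging the accuracy of HSPT.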
A system-level approach for embedded memory robustness
NASA Astrophysics Data System (ADS)
Mariani, Riccardo; Boschi, Gabriele
2005-11-01
New ultra-deep submicron technologies are bringing not only new advantages, such as extraordinary transistor densities and unforeseen performance, but also new uncertainties, such as soft-error susceptibility, modelling complexity, coupling effects, leakage contribution, and increased sensitivity to internal and external disturbances. Nowadays, embedded memories take advantage of such new technologies and are used more and more in systems; therefore, as robustness and reliability requirements increase, memory systems must be protected against different kinds of faults (permanent and transient), and that should be done in an efficient way. This means that reliability and costs, such as overhead and performance degradation, must be efficiently tuned based on the system and on the application. Moreover, the emerging standards for safety-critical applications, such as IEC 61508, require precise answers in terms of robustness also in the case of memory systems. In this paper, classical protection techniques for error detection and correction are enriched with a system-aware approach, where the memory system is analyzed based on its role in the application. A configurable memory protection system is presented, together with the results of its application to a proof-of-concept architecture. This work has been developed in the framework of the MEDEA+ T126 project called BLUEBERRIES.
Full temperature single event upset characterization of two microprocessor technologies
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Coss, James R.; Smith, L. S.; Rax, Bernard; Huebner, Mark
1988-01-01
Data for the 9450 I3L bipolar microprocessor and the 80C86 CMOS/epi (vintage 1985) microprocessor are presented, showing single-event soft errors for the full MIL-SPEC temperature range of -55 to 125 C. These data show for the first time that the soft-error cross sections continue to decrease with decreasing temperature at subzero temperatures. The temperature dependence of the two parts, however, is very different.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios. This test technique gains importance as modern receivers provide soft decision symbol synchronization, as radio links are challenged to push more data and more protocol overhead through noisier channels, and as software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
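The contrast between hard decisions and soft decisions drawn above can be made concrete: for BPSK in Gaussian noise, each received sample's log-likelihood ratio (LLR) encodes both the bit decision (its sign) and the confidence in that decision (its magnitude), while a hard decision keeps only the sign. The sample values and noise variance below are illustrative, not measurements from the SDA.

```python
# Sketch: why soft decisions carry more information than hard decisions.
# BPSK mapping: bit 0 -> +1, bit 1 -> -1; AWGN with variance noise_var.

def llr(sample, noise_var):
    """Log-likelihood ratio of bit 0 vs bit 1 for a BPSK sample in AWGN."""
    return 2.0 * sample / noise_var

received = [0.9, -1.1, 0.1, -0.05]   # illustrative noisy BPSK samples
noise_var = 0.5

hard = [0 if r >= 0 else 1 for r in received]
soft = [llr(r, noise_var) for r in received]

for r, h, s in zip(received, hard, soft):
    print(f"sample={r:+.2f}  hard={h}  LLR={s:+.2f}  confidence={abs(s):.2f}")
# The two weak samples (+0.10, -0.05) get near-zero LLRs: a soft-decision
# decoder can discount them, while a hard-decision decoder treats them as
# being just as trustworthy as the strong samples.
```

This is exactly the information that near-Shannon-limit codes (LDPC, turbo) consume, which is why instruments that expose the soft decision stream reveal implementation loss that a BER counter cannot.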
NASA Astrophysics Data System (ADS)
Yoshizawa, Masasumi; Nakamura, Yuuta; Ishiguro, Masataka; Moriya, Tadashi
2007-07-01
In this paper, we describe a method of compensating for the attenuation of ultrasound caused by soft tissue in the transducer vibration method for measuring the acoustic impedance of in vivo bone. In the in vivo measurement, the acoustic impedance of bone is measured through soft tissue; therefore, the amplitude of the ultrasound reflected from the bone is attenuated. This attenuation causes an error on the order of -20 to -30% when the acoustic impedance is determined from the measured signals. To compensate for the attenuation, the attenuation coefficient and length of the soft tissue are measured by the transducer vibration method. In an experiment using a phantom, this method allows the measurement of the acoustic impedance typically with an error as small as -8 to 10%.
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed, and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing-radiation particle strikes on reverse-biased semiconductor junctions. At the terrestrial level, these radiation-induced errors are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can invert the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions in combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clock edges, which increases the likelihood of latching these transients.
In older technology generations, the probability of errors in flip-flops due to latched SETs was much lower than that of direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower. The Intel Pentium II, for example, was fabricated in 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. There is therefore a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Since then, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily increased power density and the power consumption problem. Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, pervasive sensor networks, and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for digital products with diverse applications, from computing to communication to healthcare.
Thus, designers today are left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in area, speed, and, more importantly, power. The cost of protecting combinational logic with power-hungry mitigation approaches can thus disrupt the power budget significantly, so there is a strong need for techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical and Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
Quantitative susceptibility mapping of human brain at 3T: a multisite reproducibility study.
Lin, P-Y; Chao, T-C; Wu, M-L
2015-03-01
Quantitative susceptibility mapping of the human brain has demonstrated strong potential in examining iron deposition, which may help in investigating possible brain pathology. This study assesses the reproducibility of quantitative susceptibility mapping across different imaging sites. In this study, the susceptibility values of 5 regions of interest in the human brain were measured on 9 healthy subjects following calibration by using phantom experiments. Each of the subjects was imaged 5 times on 1 scanner with the same procedure repeated on 3 different 3T systems so that both within-site and cross-site quantitative susceptibility mapping precision levels could be assessed. Two quantitative susceptibility mapping algorithms, similar in principle, one by using iterative regularization (iterative quantitative susceptibility mapping) and the other with analytic optimal solutions (deterministic quantitative susceptibility mapping), were implemented, and their performances were compared. Results show that while deterministic quantitative susceptibility mapping had nearly 700 times faster computation speed, residual streaking artifacts seem to be more prominent compared with iterative quantitative susceptibility mapping. With quantitative susceptibility mapping, the putamen, globus pallidus, and caudate nucleus showed smaller imprecision on the order of 0.005 ppm, whereas the red nucleus and substantia nigra, closer to the skull base, had a somewhat larger imprecision of approximately 0.01 ppm. Cross-site errors were not significantly larger than within-site errors. Possible sources of estimation errors are discussed. The reproducibility of quantitative susceptibility mapping in the human brain in vivo is regionally dependent, and the precision levels achieved with quantitative susceptibility mapping should allow longitudinal and multisite studies such as aging-related changes in brain tissue magnetic susceptibility. © 2015 by American Journal of Neuroradiology.
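The within-site versus cross-site comparison described above reduces to a simple decomposition of repeated measurements: per-site scatter versus scatter of the per-site means. The sketch below illustrates the arithmetic with hypothetical susceptibility values (in ppm) for one region of interest; the numbers are not from the study.

```python
# Sketch: separating within-site from cross-site variability in repeated
# measurements. Hypothetical data: 3 sites x 5 repeats for one ROI (ppm).
from statistics import mean, stdev

measurements = {
    "site_A": [0.082, 0.084, 0.081, 0.083, 0.082],
    "site_B": [0.085, 0.086, 0.084, 0.085, 0.087],
    "site_C": [0.080, 0.082, 0.081, 0.083, 0.081],
}

# Within-site imprecision: average of the per-site standard deviations.
within = mean(stdev(v) for v in measurements.values())

# Cross-site imprecision: standard deviation of the per-site means.
cross = stdev(mean(v) for v in measurements.values())

print(f"within-site SD: {within:.4f} ppm, cross-site SD: {cross:.4f} ppm")
```

A full reproducibility analysis would use a variance-components (ANOVA) model rather than this two-line summary, but the quantities compared are the same.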
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance between different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper included in this report concerns the construction of multilevel concatenated block modulation codes, using a multilevel concatenation scheme, for the frequency non-selective Rayleigh fading channel.
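Maximum-likelihood soft-decision decoding of a binary linear block code amounts to picking the codeword with the highest correlation against the received soft samples. Practical decoders use trellises (as in the report) rather than enumeration; the brute-force sketch below, with a toy (4,2) code of my own choosing, only illustrates the decision rule.

```python
# Sketch: ML soft-decision decoding of a small binary linear block code
# by correlation. BPSK mapping: bit 0 -> +1, bit 1 -> -1.
from itertools import product

G = [[1, 0, 1, 1],   # generator matrix of a toy (4,2) linear code
     [0, 1, 1, 0]]

def codewords(G):
    k, n = len(G), len(G[0])
    for msg in product([0, 1], repeat=k):
        cw = [0] * n
        for bit, row in zip(msg, G):
            if bit:
                cw = [c ^ r for c, r in zip(cw, row)]
        yield msg, cw

def ml_decode(received, G):
    """Pick the (message, codeword) maximizing correlation with the samples."""
    return max(codewords(G),
               key=lambda mc: sum(r * (1 - 2 * b)
                                  for r, b in zip(received, mc[1])))

# Hard-slicing these samples gives [1, 0, 0, 1], which is equidistant from
# two codewords ([1, 0, 1, 1] and [1, 1, 0, 1]) -- a hard decoder must
# guess, but the soft magnitudes resolve the ambiguity.
received = [-0.8, 0.2, 0.3, -0.1]
msg, cw = ml_decode(received, G)
print(msg, cw)
```

The ~2 dB gain of soft over hard decoding on the Gaussian channel comes precisely from resolving such ties using the reliability of each sample.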
2010-04-01
the most active aminoglycoside (27.1% of isolates were susceptible). Disk diffusion and Etest tended to be more accurate than the Vitek 2, Phoenix, and MicroScan automated systems, but errors were noted with all methods. The Vitek 2 instrument incorrectly reported that more than one-third of the ... Acinetobacter, we have observed in clinical practice at the San Antonio Military Medical Center results of susceptibility to amikacin from the Vitek 2
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can reconcile the frequency requirements of wireless energy transmission and data communication. To achieve reliable wireless data communication based on high-order modulation for a visual prosthesis, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit based on differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex for hardware implementation, an improved phase soft demodulation algorithm is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which the combination of the Chase algorithm and hard decoding algorithms is used to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability from the product of bit reliabilities is derived, which reduces the number of test vectors for the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. In the MATLAB simulation, a biological channel attenuation model is included in the ECC circuit, and the data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that when demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER.
We then used a bit error rate analyzer to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances. The experimental results show that the RS ECC circuit has a BER about an order of magnitude lower than that of the demodulation circuit at the same coil distance, and therefore provides more reliable communication in the system. The improved phase soft demodulation algorithm and soft decoding algorithm proposed in this paper enable more reliable data communication than other demodulation systems and provide a useful reference for further study of visual prosthesis systems.
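The symbol-reliability idea described above (symbol reliability as the product of bit reliabilities, used to pick Chase test vectors) can be sketched in a few lines. The reliability values below are illustrative soft-demodulator outputs in [0, 1], not data from the paper, and the grouping of 4 bits per symbol is an assumption for the example.

```python
# Sketch: estimate each RS symbol's reliability as the product of its
# bits' reliabilities, then restrict Chase test vectors to the least
# reliable symbols. Values and symbol width are illustrative.

def symbol_reliabilities(bit_reliabilities, bits_per_symbol):
    """Group bit reliabilities into symbols and multiply within each group."""
    rel = []
    for i in range(0, len(bit_reliabilities), bits_per_symbol):
        r = 1.0
        for b in bit_reliabilities[i:i + bits_per_symbol]:
            r *= b
        rel.append(r)
    return rel

def least_reliable(rel, t):
    """Indices of the t least reliable symbols (Chase flip candidates)."""
    return sorted(range(len(rel)), key=lambda i: rel[i])[:t]

bit_rel = [0.99, 0.98, 0.97, 0.95,   # symbol 0: confident
           0.60, 0.99, 0.97, 0.55,   # symbol 1: two weak bits
           0.98, 0.99, 0.96, 0.97,   # symbol 2: confident
           0.70, 0.65, 0.99, 0.98]   # symbol 3: two weak bits
rel = symbol_reliabilities(bit_rel, 4)
print(least_reliable(rel, 2))   # Chase flips are tried on these symbols
```

Limiting the flips to t symbols keeps the number of hard-decoding trials at 2^t instead of growing with the codeword length, which is what makes the approach attractive for an implantable device.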
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and, consequently, soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
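The error-propagation insight above can be illustrated with a toy model: if addresses are generated incrementally, a bit-flip in the running address flows into every subsequent address, so a single closed-form check at loop exit catches it. The function and names below are illustrative, not PRESAGE's actual compiler transformation.

```python
# Sketch of the PRESAGE idea: incremental address generation propagates
# an injected bit-flip to the final address, where one detector at loop
# exit can catch it, instead of corrupting a single access and vanishing.

def walk_addresses(base, stride, count, flip_at=None, flip_bit=0):
    """Generate addresses base, base+stride, ... incrementally,
    optionally injecting a bit-flip into the running address."""
    addr = base
    visited = []
    for i in range(count):
        if i == flip_at:
            addr ^= (1 << flip_bit)   # simulated soft error
        visited.append(addr)
        addr += stride                # the error, once injected, flows forward
    return visited, addr

base, stride, count = 0x1000, 8, 16
expected_final = base + stride * count   # closed-form detector at loop exit

_, final_clean = walk_addresses(base, stride, count)
print(hex(final_clean), final_clean == expected_final)      # clean run passes

_, final_flip = walk_addresses(base, stride, count, flip_at=5, flip_bit=2)
print(hex(final_flip), final_flip == expected_final)        # corruption caught
```

Had each address been recomputed independently as base + i*stride, the flip would corrupt one access and leave the final address looking correct, which is exactly the "falsely appears to compute perfectly" failure mode the abstract describes.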
Arendrup, Maiken Cavling; Garcia-Effron, Guillermo; Lass-Flörl, Cornelia; Lopez, Alicia Gomez; Rodriguez-Tudela, Juan-Luis; Cuenca-Estrella, Manuel; Perlin, David S.
2010-01-01
This study compared nine susceptibility testing methods and 12 endpoints for anidulafungin, caspofungin, and micafungin with the same collection of blinded FKS hot spot mutant (n = 29) and wild-type isolates (n = 94). The susceptibility tests included EUCAST Edef 7.1, agar dilution, Etest, and disk diffusion with RPMI-1640 plus 2% glucose (2G) and IsoSensitest-2G media and CLSI M27A-3. Microdilution plates were read after 24 and 48 h. The following test parameters were evaluated: fks hot spot mutants overlapping the wild-type distribution, distance between the two populations, number of very major errors (VMEs; fks mutants misclassified as susceptible), and major errors (MEs; wild-type isolates classified as resistant) using a wild-type-upper-limit value (WT-UL) (two twofold-dilutions higher than the MIC50) as the susceptibility breakpoint. The methods with the lowest number of errors (given as VMEs/MEs) across the three echinocandins were CLSI (12%/1%), agar dilution with RPMI-2G medium (14%/0%), and Etest with RPMI-2G medium (8%/3%). The fewest errors overall were observed for anidulafungin (4%/1% for EUCAST, 4%/3% for CLSI, and 3%/9% for Etest with RPMI-2G). For micafungin, VME rates of 10 to 71% were observed. For caspofungin, agar dilution with either medium was superior (VMEs/MEs of 0%/1%), while CLSI, EUCAST with IsoSensitest-2G medium, and Etest were less optimal (VMEs of 7%, 10%, and 10%, respectively). Applying the CLSI breakpoint (S ≤ 2 μg/ml) for CLSI results, 89.2% fks hot spot mutants were classified as anidulafungin susceptible, 60.7% as caspofungin susceptible, and 92.9% as micafungin susceptible. In conclusion, no test was perfect, but anidulafungin susceptibility testing using the WT-UL to define susceptibility reliably identified fks hot spot mutants. PMID:19884370
Lear, Aaron; McCord, Gary; Peiffer, Jeffrey; Watkins, Richard R; Parikh, Arpan; Warrington, Steven
2011-01-01
Methicillin-resistant Staphylococcus aureus (MRSA) skin and soft tissue infections have been documented with increasing frequency in both team and individual sports in recent years. It also seems that the level of MRSA skin and soft tissue infections in the general population has increased. One hundred ninety athletes from 6 local high school football teams were recruited for this prospective observational study to document nasal colonization and the potential role this plays in skin and soft tissue infections in football players and, in particular, MRSA infections. Athletes had nasal swabs done before their season started, and they filled out questionnaires regarding potential risk factors for skin and soft tissue infections. Those enrolled in the study were then observed over the course of the season for skin and soft tissue infections. Those infected had data about their infections collected. One hundred ninety of 386 available student athletes enrolled in the study. Forty-four of the subjects had nasal colonization with methicillin-susceptible S. aureus, and none were colonized with MRSA. There were 10 skin and soft tissue infections (8 bacterial and 2 fungal) documented over the course of the season. All were treated as outpatients with oral or topical antibiotics, and none were considered serious. Survey data from the preseason questionnaire showed 21% with skin infection, 11% with methicillin-susceptible S. aureus, and none with MRSA infection during the past year. Three reported a remote history of MRSA infection. We documented an overall skin infection rate of 5.3% among high school football players over a single season. Our results suggest that skin and soft tissue infection may not be widespread among high school athletes in northeast Ohio.
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then by compensating for additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18 (2^1 × 3^4). Then, based on the Z-shrinkage error from IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the shrinkage profile closest to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than the other two cases. Moreover, actual IM experiments of the SM for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for the original design and the new design. Results of the optical performance measurements have verified the improvement of the compensated CL design. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascaded coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Some example schemes are evaluated; they seem to be quite suitable for satellite down-link error control.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Some example schemes are studied; they seem to be quite suitable for satellite down-link error control.
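The two records above describe the same idea: an inner code decoded with soft decisions and an outer code decoded with hard decisions. The abstracts do not specify the actual codes used, so the following is a deliberately tiny stand-in (3x repetition inner code, single-parity outer code) showing how the two decision types combine:

```python
def encode(bits):
    """Outer code: append an even-parity bit. Inner code: 3x repetition."""
    outer = bits + [sum(bits) % 2]
    return [b for bit in outer for b in (bit, bit, bit)]

def decode(samples):
    """Inner decode is soft: sum the three real-valued channel samples per
    bit rather than hard-slicing each sample. Outer decode is hard: the
    parity check flags residual errors."""
    outer = [1 if sum(samples[i:i + 3]) > 1.5 else 0
             for i in range(0, len(samples), 3)]
    data, parity = outer[:-1], outer[-1]
    return data, sum(data) % 2 == parity

# noiseless channel: bits map to samples 0.0 / 1.0
tx = encode([1, 0, 1, 1])
rx = [float(b) for b in tx]
data, ok = decode(rx)
assert data == [1, 0, 1, 1] and ok
# a degraded sample is averaged out by the soft inner decision
rx[0] = 0.2
data, ok = decode(rx)
assert data == [1, 0, 1, 1] and ok
```

Real cascaded schemes use far stronger inner/outer codes (e.g. convolutional plus Reed-Solomon), but the soft-inner/hard-outer division of labor is the same.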
Testing a Novel 3D Printed Radiographic Imaging Device for Use in Forensic Odontology.
Newcomb, Tara L; Bruhn, Ann M; Giles, Bridget; Garcia, Hector M; Diawara, Norou
2017-01-01
There are specific challenges related to forensic dental radiology and difficulties in aligning X-ray equipment to teeth of interest. Researchers used 3D printing to create a new device, the combined holding and aiming device (CHAD), to address the positioning limitations of current dental X-ray devices. Participants (N = 24) used the CHAD, soft dental wax, and a modified external aiming device (MEAD) to determine device preference, radiographer's efficiency, and technique errors. Each participant exposed six X-rays per device for a total of 432 X-rays scored. A significant difference was found at the 0.05 level between the three devices (p = 0.0015), with the MEAD having the least amount of total errors and soft dental wax taking the least amount of time. Total errors were highest when participants used soft dental wax-both the MEAD and the CHAD performed best overall. Further research in forensic dental radiology and use of holding devices is needed. © 2016 American Academy of Forensic Sciences.
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
NASA Astrophysics Data System (ADS)
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by the daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
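For context on the units involved, real-time SER results of this kind are conventionally reported in FIT/Mbit, i.e. failures per 10^9 device-hours normalized to one megabit under test. A minimal sketch of that normalization, with purely illustrative numbers rather than the paper's measurements:

```python
def ser_fit_per_mbit(n_errors, n_bits, hours):
    """Soft error rate in FIT/Mbit: failures in 1e9 device-hours,
    normalized to one megabit of memory under test."""
    return n_errors / ((n_bits / 1e6) * hours) * 1e9

# sanity check: 1 upset on 1 Mbit over 1e9 hours is, by definition, 1 FIT/Mbit
assert abs(ser_fit_per_mbit(1, 1e6, 1e9) - 1.0) < 1e-9
```

Long exposure times and large bit counts (here, 7 Gbit over several years) are needed precisely because natural-background upset counts are so small.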
Chaves, Sandra; Gadanho, Mário; Tenreiro, Rogério; Cabrita, José
1999-01-01
Metronidazole susceptibility of 100 Helicobacter pylori strains was assessed by determining the inhibition zone diameters by disk diffusion test and the MICs by agar dilution and PDM Epsilometer test (E test). Linear regression analysis was performed, allowing the definition of significant linear relations, and revealed correlations of disk diffusion results with both E-test and agar dilution results (r2 = 0.88 and 0.81, respectively). No significant differences (P = 0.84) were found between MICs defined by E test and those defined by agar dilution, taken as a standard. Reproducibility comparison between E-test and disk diffusion tests showed that they are equivalent and with good precision. Two interpretative susceptibility schemes (with or without an intermediate class) were compared by an interpretative error rate analysis method. The susceptibility classification scheme that included the intermediate category was retained, and breakpoints were assessed for diffusion assay with 5-μg metronidazole disks. Strains with inhibition zone diameters less than 16 mm were defined as resistant (MIC > 8 μg/ml), those with zone diameters equal to or greater than 16 mm but less than 21 mm were considered intermediate (4 μg/ml < MIC ≤ 8 μg/ml), and those with zone diameters of 21 mm or greater were regarded as susceptible (MIC ≤ 4 μg/ml). Error rate analysis applied to this classification scheme showed occurrence frequencies of 1% for major errors and 7% for minor errors, when the results were compared to those obtained by agar dilution. No very major errors were detected, suggesting that disk diffusion might be a good alternative for determining the metronidazole sensitivity of H. pylori strains. PMID:10203543
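The zone-diameter breakpoints proposed in the abstract translate directly into a classification rule. A sketch (the function name is ours; the thresholds and MIC correspondences are the study's):

```python
def classify_metronidazole(zone_mm):
    """Disk diffusion breakpoints proposed in the study (5-ug disk)."""
    if zone_mm < 16:
        return "resistant"       # MIC > 8 ug/ml
    if zone_mm < 21:
        return "intermediate"    # 4 ug/ml < MIC <= 8 ug/ml
    return "susceptible"         # MIC <= 4 ug/ml

assert classify_metronidazole(15) == "resistant"
assert classify_metronidazole(16) == "intermediate"
assert classify_metronidazole(21) == "susceptible"
```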
Gutiérrez, J. J.; Russell, James K.
2016-01-01
Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm in the foam and 1.7 mm in the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
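The dual-accelerometer idea can be sketched numerically: estimate each displacement from its acceleration spectrum (for sinusoidal motion at frequency f, displacement amplitude = acceleration amplitude / (2πf)^2) and subtract the back-side estimate from the chest estimate. This is a simplified single-tone version of the spectral analysis the abstract describes, with made-up amplitudes:

```python
import numpy as np

def displacement_from_accel(accel, fs):
    """Dominant-tone spectral estimate of displacement amplitude (metres)."""
    n = len(accel)
    spec = np.fft.rfft(accel) / n * 2
    freqs = np.fft.rfftfreq(n, 1 / fs)
    k = np.argmax(np.abs(spec[1:])) + 1          # strongest non-DC bin
    return np.abs(spec[k]) / (2 * np.pi * freqs[k]) ** 2

fs = 250.0                                       # sampling rate, Hz
t = np.arange(0, 2, 1 / fs)                      # one 2-second analysis window
f = 2.0                                          # 120 compressions per minute
chest = 0.055 * np.sin(2 * np.pi * f * t)        # chest accelerometer motion
mattress = 0.015 * np.sin(2 * np.pi * f * t)     # mattress sinking (back sensor)
a_chest = -(2 * np.pi * f) ** 2 * chest          # analytic accelerations
a_back = -(2 * np.pi * f) ** 2 * mattress

# actual sternal-spinal compression = chest motion minus mattress motion
depth = displacement_from_accel(a_chest, fs) - displacement_from_accel(a_back, fs)
```

With these amplitudes `depth` recovers the 40 mm sternal-spinal compression, while a single chest-mounted sensor would report 55 mm, the overestimation the abstract warns about.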
Resnick, C M; Dang, R R; Glick, S J; Padwa, B L
2017-03-01
Three-dimensional (3D) soft tissue prediction is replacing two-dimensional analysis in planning for orthognathic surgery. The accuracy of different computational models to predict soft tissue changes in 3D, however, is unclear. A retrospective pilot study was implemented to assess the accuracy of Dolphin 3D software in making these predictions. Seven patients who had a single-segment Le Fort I osteotomy and had preoperative (T0) and >6-month postoperative (T1) cone beam computed tomography (CBCT) scans and 3D photographs were included. The actual skeletal change was determined by subtracting the T0 from the T1 CBCT. 3D photographs were overlaid onto the T0 CBCT and virtual skeletal movements equivalent to the achieved repositioning were applied using Dolphin 3D planner. A 3D soft tissue prediction (TP) was generated and differences between the TP and T1 images (error) were measured at 14 points and at the nasolabial angle. A mean linear prediction error of 2.91 ± 2.16 mm was found. The mean error at the nasolabial angle was 8.1 ± 5.6°. In conclusion, the ability to accurately predict 3D soft tissue changes after Le Fort I osteotomy using Dolphin 3D software is limited. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
de Cueto, Marina; Ceballos, Esther; Martinez-Martinez, Luis; Perea, Evelio J.; Pascual, Alvaro
2004-01-01
In order to further decrease the time lapse between initial inoculation of blood culture media and the reporting of identification and antimicrobial susceptibility results for microorganisms causing bacteremia, we performed a prospective study in which specially processed fluid from positive aerobic blood culture bottles from the Bactec 9240 system (Becton Dickinson, Cockeysville, Md.) was directly inoculated into Vitek 2 system cards (bioMérieux, France). Organism identification and susceptibility results were compared with those obtained from cards inoculated with a standardized bacterial suspension obtained following subculture to agar; 100 consecutive positive monomicrobic blood cultures, consisting of 50 gram-negative rods and 50 gram-positive cocci, were included in the study. For gram-negative organisms, 31 of the 50 (62%) showed complete agreement with the standard method for species identification, while none of the 50 gram-positive cocci were correctly identified by the direct method. For gram-negative rods, there was 50% categorical agreement between the direct and standard methods for all drugs tested. The very major error rate was 2.4%, and the major error rate was 0.6%. The overall error rate for gram-negative rods was 6.6%. Complete agreement in clinical categories for all antimicrobial agents evaluated was obtained for 19 of 50 (38%) gram-positive cocci; the overall error rate was 8.4%, with 2.8% minor errors, 2.4% major errors, and 3.2% very major errors. These findings suggest that Vitek 2 cards inoculated directly from positive Bactec 9240 bottles do not provide acceptable bacterial identification or susceptibility testing in comparison with corresponding cards tested by the standard method. PMID:15297523
Addressing the unit of analysis in medical care studies: a systematic review.
Calhoun, Aaron W; Guyatt, Gordon H; Cabana, Michael D; Lu, Downing; Turner, David A; Valentine, Stacey; Randolph, Adrienne G
2008-06-01
We assessed the frequency with which patients are incorrectly used as the unit of analysis in studies of physicians' patient care behavior published in high-impact journals. We surveyed 30 high-impact journals across 6 medical fields for articles susceptible to unit of analysis errors published from 1994 to 2005. Three reviewers independently abstracted articles using previously published criteria to determine the presence of analytic errors. One hundred fourteen susceptible articles were found, published in 15 journals; 4 journals published the majority (71 of 114, or 62.3%) of the studies. Forty were intervention studies and 74 were noninterventional studies. The unit of analysis error was present in 19 (48%) of the intervention studies and 31 (42%) of the noninterventional studies (overall error rate 44%). The frequency of the error decreased between 1994-1999 (N = 38; 65% error) and 2000-2005 (N = 76; 33% error) (P = 0.001). Although the frequency of the error in published studies is decreasing, further improvement remains desirable.
Ligozzi, Marco; Bernini, Cinzia; Bonora, Maria Grazia; de Fatima, Maria; Zuliani, Jessica; Fontana, Roberta
2002-01-01
A study was conducted to evaluate the new VITEK 2 system (bioMérieux) for identification and antibiotic susceptibility testing of gram-positive cocci. Clinical isolates of Staphylococcus aureus (n = 100), coagulase-negative staphylococci (CNS) (n = 100), Enterococcus spp. (n = 89), Streptococcus agalactiae (n = 29), and Streptococcus pneumoniae (n = 66) were examined with the ID-GPC identification card and with the AST-P515 (for staphylococci), AST-P516 (for enterococci and S. agalactiae) and AST-P506 (for pneumococci) susceptibility cards. The identification comparison methods were the API Staph for staphylococci and the API 20 Strep for streptococci and enterococci; for antimicrobial susceptibility testing, the agar dilution method according to the procedure of the National Committee for Clinical Laboratory Standards (NCCLS) was used. The VITEK 2 system correctly identified to the species level (only one choice or after simple supplementary tests) 99% of S. aureus, 96.5% of S. agalactiae, 96.9% of S. pneumoniae, 92.7% of Enterococcus faecalis, 91.3% of Staphylococcus haemolyticus, and 88% of Staphylococcus epidermidis but was least able to identify Enterococcus faecium (71.4% correct). More than 90% of gram-positive cocci were identified within 3 h. According to the NCCLS breakpoints, antimicrobial susceptibility testing with the VITEK 2 system gave 96% correct category agreement, 0.82% very major errors, 0.17% major errors, and 2.7% minor errors. Antimicrobial susceptibility testing showed category agreement from 94 to 100% for S. aureus, from 90 to 100% for CNS, from 91 to 100% for enterococci, from 96 to 100% for S. agalactiae, and from 91 to 100% for S. pneumoniae. Microorganism-antibiotic combinations that gave very major errors were CNS-erythromycin, CNS-oxacillin, enterococci-teicoplanin, and enterococci-high-concentration gentamicin. Major errors were observed for CNS-oxacillin and S. agalactiae-tetracycline combinations. 
In conclusion, the results of this study indicate that the VITEK 2 system represents an accurate and acceptable means of performing identification and antibiotic susceptibility tests with medically relevant gram-positive cocci. PMID:11980942
NASA Astrophysics Data System (ADS)
Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas
2016-12-01
There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. 
The systematic comparisons of 12 models provided valuable evidence that the respective error-propagation was not only determined by the degree of positional inaccuracy inherent in the landslide data, but also by the spatial representation of landslides and the environment, landslide magnitude, the characteristics of the study area, the selected classification method and an interplay of predictors within multiple variable models. Based on the results, we deduced that a direct propagation of minor to moderate inventory-based positional errors into modelling results can be partly counteracted by adapting the modelling design (e.g. generalization of input data, opting for strongly generalizing classifiers). Since positional errors within landslide inventories are common and subsequent modelling and validation results are likely to be distorted, the potential existence of inventory-based positional inaccuracies should always be considered when assessing landslide susceptibility by means of empirical models.
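The error-propagation experiment described above can be mimicked in miniature: perturb event positions before sampling the predictor, refit, and observe how the modelled relationship is distorted. This synthetic sketch is ours, not the study's actual design (which used real terrain data, odds ratios, and spatial cross-validation):

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logistic(x, y, lr=0.1, steps=2000):
    """Plain gradient-descent logistic regression (one predictor + intercept)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1 / (1 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

# synthetic "terrain": a predictor (e.g. slope) varying smoothly with position
pos = rng.uniform(0, 100, 2000)
slope = np.sin(pos / 8)                          # smooth spatial field
y = (rng.random(2000) < 1 / (1 + np.exp(-3 * slope))).astype(float)

w_true, _ = fit_logistic(slope, y)               # accurate inventory
# positional error: displace each event before re-sampling the predictor
noisy_pos = pos + rng.normal(0, 20, 2000)
w_noisy, _ = fit_logistic(np.sin(noisy_pos / 8), y)
# the coefficient fitted from displaced positions is attenuated toward zero
```

The attenuation of `w_noisy` relative to `w_true` is the classic errors-in-variables effect, and illustrates why positional inaccuracy weakens modelled landslide-predictor relationships.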
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83/50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0/50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8/50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter. The error incidences were 0, 1, and 0/600 Gy at the isocenter for the newer model, and 0, 1, and 6/600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently at 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high-energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
Kan, Hirohito; Arai, Nobuyuki; Takizawa, Masahiro; Omori, Kazuyoshi; Kasai, Harumasa; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2018-06-11
We developed a non-regularized, variable kernel, sophisticated harmonic artifact reduction for phase data (NR-VSHARP) method to accurately estimate local tissue fields without regularization for quantitative susceptibility mapping (QSM). We then used a digital brain phantom to evaluate the accuracy of the NR-VSHARP method, and compared it with the VSHARP and iterative spherical mean value (iSMV) methods through in vivo human brain experiments. Our proposed NR-VSHARP method, which uses variable spherical mean value (SMV) kernels, minimizes L2 norms only within the volume of interest to reduce phase errors and save cortical information without regularization. In a numerical phantom study, relative local field and susceptibility map errors were determined using NR-VSHARP, VSHARP, and iSMV. Additionally, various background field elimination methods were used to image the human brain. In a numerical phantom study, the use of NR-VSHARP considerably reduced the relative local field and susceptibility map errors throughout a digital whole brain phantom, compared with VSHARP and iSMV. In the in vivo experiment, the NR-VSHARP-estimated local field could sufficiently achieve minimal boundary losses and phase error suppression throughout the brain. Moreover, the susceptibility map generated using NR-VSHARP minimized the occurrence of streaking artifacts caused by insufficient background field removal. Our proposed NR-VSHARP method yields minimal boundary losses and highly precise phase data. Our results suggest that this technique may facilitate high-quality QSM. Copyright © 2017. Published by Elsevier Inc.
Non Diphtheritic Corynebacteria: An Emerging Nosocomial Pathogen in Skin and Soft Tissue Infection.
Rudresh, Shoorashetty Manohar; Ravi, G S; Alex, Ann Mary; Mamatha, K R; Sunitha, L; Ramya, K Thangam
2015-12-01
Non-diphtheritic corynebacteria are normal inhabitants of skin and mucous membranes. When isolated from clinical specimens they are often considered contaminants, but recent reports suggest their role as emerging nosocomial pathogens. The aims were to speciate non-diphtheritic corynebacteria isolated from wound specimens, to correlate their clinical significance, and to determine their in vitro susceptibilities to 9 antimicrobial agents. Twenty-five non-diphtheritic corynebacteria from skin and soft tissue infections were selected for study. Isolates were identified by a battery of tests, and the minimum inhibitory concentration (MIC) was determined by the Clinical & Laboratory Standards Institute (CLSI) broth microdilution method. MICs were interpreted according to CLSI and British Society for Antimicrobial Chemotherapy (BSAC) guidelines. C. amycolatum was the predominant species (20%), followed by C. striatum (16%). Penicillin was least effective in vitro, followed by clindamycin and ciprofloxacin. Excellent activity was shown by vancomycin, linezolid, and imipenem. Multidrug resistance was found in all the species. Non-diphtheritic corynebacteria are potential nosocomial pathogens in acute/chronic complicated skin and soft tissue infections. Vancomycin or linezolid can be used empirically to treat such infections until in vitro susceptibility results are available.
Ghorbani, Mahdi; Salahshour, Fateme; Haghparast, Abbas; Knaup, Courtney
2014-01-01
Purpose: The aim of this study is to compare the dose in various soft tissues in brachytherapy with photon-emitting sources. Material and methods: 103Pd, 125I, 169Yb, and 192Ir brachytherapy sources were simulated with the MCNPX Monte Carlo code, and their dose rate constants and radial dose functions were compared with the published data. A spherical phantom with a 50 cm radius was simulated, and the dose at various radial distances was calculated in adipose tissue, breast tissue, 4-component soft tissue, brain (grey/white matter), muscle (skeletal), lung tissue, blood (whole), 9-component soft tissue, and water. The absolute dose and the relative dose difference with respect to 9-component soft tissue were obtained for the various materials, sources, and distances. Results: There was good agreement between the dosimetric parameters of the sources and the published data. Adipose tissue, breast tissue, 4-component soft tissue, and water showed the greatest difference in dose relative to 9-component soft tissue. The other soft tissues showed smaller dose differences. The dose difference was also higher for the 103Pd source than for the 125I, 169Yb, and 192Ir sources. Furthermore, greater distances from the source had higher relative dose differences, an effect attributable to the change in photon spectrum (softening or hardening) as photons traverse the phantom material. Conclusions: Ignoring soft tissue characteristics (density, composition, etc.), as treatment planning systems do, introduces a significant error in the dose delivered to the patient in brachytherapy with photon sources. The error depends on the type of soft tissue, the brachytherapy source, and the distance from the source. PMID:24790623
Rhodes, Nathaniel J.; Richardson, Chad L.; Heraty, Ryan; Liu, Jiajun; Malczynski, Michael; Qi, Chao
2014-01-01
While a lack of concordance is known between gold standard MIC determinations and Vitek 2, the magnitude of the discrepancy and its impact on treatment decisions for extended-spectrum-β-lactamase (ESBL)-producing Escherichia coli are not. Clinical isolates of ESBL-producing E. coli were collected from blood, tissue, and body fluid samples from January 2003 to July 2009. Resistance genotypes were identified by PCR. Primary analyses evaluated the discordance between Vitek 2 and gold standard methods using cefepime susceptibility breakpoint cutoff values of 8, 4, and 2 μg/ml. The discrepancies in MICs between the methods were classified per convention as very major, major, and minor errors. Sensitivity, specificity, and positive and negative predictive values for susceptibility classifications were calculated. A total of 304 isolates were identified; 59% (179) of the isolates carried blaCTX-M, 47% (143) carried blaTEM, and 4% (12) carried blaSHV. At a breakpoint MIC of 8 μg/ml, Vitek 2 produced a categorical agreement of 66.8% and exhibited very major, major, and minor error rates of 23% (20/87 isolates), 5.1% (8/157 isolates), and 24% (73/304), respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 8 μg/ml were 94.9%, 61.2%, 72.3%, and 91.8%, respectively. The sensitivity, specificity, and positive and negative predictive values for a susceptibility breakpoint of 2 μg/ml were 83.8%, 65.3%, 41%, and 93.3%, respectively. Vitek 2 results in unacceptably high error rates for cefepime compared to those of agar dilution for ESBL-producing E. coli. Clinicians should be wary of making treatment decisions on the basis of Vitek 2 susceptibility results for ESBL-producing E. coli. PMID:24752253
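The sensitivity, specificity, and predictive values quoted above follow from a standard 2x2 table of test calls against the gold standard (with "susceptible" as the positive call). A sketch with illustrative counts, not the study's data:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Standard 2x2 agreement measures for a test method vs. the gold
    standard; tp = susceptible by both, tn = non-susceptible by both."""
    return {
        "sensitivity": tp / (tp + fn),   # susceptible isolates called susceptible
        "specificity": tn / (tn + fp),   # non-susceptible isolates called so
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# illustrative counts only
s = diagnostic_stats(tp=90, fp=10, fn=5, tn=95)
assert round(s["sensitivity"], 3) == 0.947
assert round(s["ppv"], 2) == 0.9
```

In this setting a "very major error" corresponds to a false positive on the susceptibility call (a resistant isolate reported susceptible), which is why it is the clinically dangerous category.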
Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.
Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W
2016-12-01
The aim of this study was to evaluate the accuracy of novel software, CMF-preCADS, for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. Then, the predicted model was matched with the postoperative 3D stereophotograph for quantification of the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42 ± 1.56 mm for all cases. The accuracy of the prediction (mean absolute error ≤ 2 mm) was 87%. In the subjective assessment it was found that the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future. Copyright © 2016. Published by Elsevier Ltd.
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
Karlowsky, James A; Biedenbach, Douglas J; Bouchillon, Samuel K; Hackel, Meredith; Iaconis, Joseph P; Sahm, Daniel F
2016-10-01
The objective of this report was to document antimicrobial susceptibility testing surveillance data for ceftaroline and comparative agents from the AWARE (Assessing Worldwide Antimicrobial Resistance Evaluation) global surveillance program for bacterial pathogens causing skin and soft tissue and respiratory tract infections in African and Middle Eastern countries from 2012 through 2014. Pathogen identities were confirmed by MALDI-TOF, and antimicrobial susceptibility testing was performed by CLSI broth microdilution methodology in a central laboratory. All methicillin-susceptible Staphylococcus aureus (MSSA) (n = 923; MIC90, 0.25 μg/mL) and 91.8% of methicillin-resistant S. aureus (MRSA) (n = 1161; MIC90, 1 μg/mL) tested were susceptible to ceftaroline. The maximum ceftaroline MIC observed for isolates of MRSA was 2 μg/mL. All Streptococcus pyogenes (n = 174; MIC90, 0.008 μg/mL), Streptococcus agalactiae (n = 44; MIC90, 0.015 μg/mL), Streptococcus pneumoniae (n = 351; MIC90, 0.25 μg/mL), and Haemophilus influenzae (n = 84; MIC90, ≤0.015 μg/mL) isolates were susceptible to ceftaroline. Rates of susceptibility to ceftaroline among ESBL-negative Escherichia coli (n = 338), Klebsiella pneumoniae (n = 241), and Klebsiella oxytoca (n = 97) were 89.1% (MIC90, 1 μg/mL), 94.2% (MIC90, 0.5 μg/mL), and 99.0% (MIC90, 0.5 μg/mL), respectively. Copyright © 2016. Published by Elsevier Inc.
A device for characterising the mechanical properties of the plantar soft tissue of the foot.
Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C
2015-11-01
The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply a vertical compression to the plantar soft tissue whilst measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was evaluated for: accuracy of motion compared to input profiles; validation of the response measured for standard materials in compression; variability of force and displacement measures for consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement displayed an average error of 0.04 mm (range of 15 mm), and static load displayed an average error of 0.15 N (range of 250 N). Validation tests showed acceptable agreement compared to a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE < 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable for bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). The soft tissue response was found to be repeatable for intra-trial (CMC > 0.98) and inter-trial (CMC > 0.70) comparisons. The device has been shown to be capable of implementing complex loading patterns similar to gait, and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
Blöchliger, Nicolas; Keller, Peter M; Böttger, Erik C; Hombach, Michael
2017-09-01
The procedure for setting clinical breakpoints (CBPs) for antimicrobial susceptibility has been poorly standardized with respect to population data, pharmacokinetic parameters and clinical outcome. Tools to standardize CBP setting could result in improved antibiogram forecast probabilities. We propose a model to estimate probabilities for methodological categorization errors and defined zones of methodological uncertainty (ZMUs), i.e. ranges of zone diameters that cannot reliably be classified. The impact of ZMUs on methodological error rates was used for CBP optimization. The model distinguishes theoretical true inhibition zone diameters from observed diameters, which suffer from methodological variation. True diameter distributions are described with a normal mixture model. The model was fitted to observed inhibition zone diameters of clinical Escherichia coli strains. Repeated measurements for a quality control strain were used to quantify methodological variation. For 9 of 13 antibiotics analysed, our model predicted error rates of < 0.1% applying current EUCAST CBPs. Error rates were > 0.1% for ampicillin, cefoxitin, cefuroxime and amoxicillin/clavulanic acid. Increasing the susceptible CBP (cefoxitin) and introducing ZMUs (ampicillin, cefuroxime, amoxicillin/clavulanic acid) decreased error rates to < 0.1%. ZMUs contained low numbers of isolates for ampicillin and cefuroxime (3% and 6%), whereas the ZMU for amoxicillin/clavulanic acid contained 41% of all isolates and was considered not practical. We demonstrate that CBPs can be improved and standardized by minimizing methodological categorization error rates. ZMUs may be introduced if an intermediate zone is not appropriate for pharmacokinetic/pharmacodynamic or drug dosing reasons. Optimized CBPs will provide a standardized antibiotic susceptibility testing interpretation at a defined level of probability. © The Author 2017. 
Published by Oxford University Press on behalf of the British Society for Antimicrobial Chemotherapy. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
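The model above separates true zone diameters from methodological (measurement) variation. As a rough illustration of how a categorization error probability at a susceptible breakpoint can be computed under a normal measurement model (a simplification of the paper's mixture model; all parameter values are hypothetical):

```python
import math

def normal_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def misclassification_probability(mu_true, sd_method, breakpoint_mm):
    """P(observed zone diameter >= breakpoint) for an isolate whose true
    diameter is mu_true, with methodological spread sd_method. For a truly
    resistant isolate this is the probability of a categorization error at
    a susceptible breakpoint."""
    return 1.0 - normal_cdf(breakpoint_mm, mu_true, sd_method)

# Hypothetical: true diameter 14 mm, methodological SD 2 mm, breakpoint 20 mm.
p_err = misclassification_probability(14.0, 2.0, 20.0)
```

Sweeping the breakpoint over a range of diameters and flagging where this probability exceeds a tolerance (e.g. 0.1%) is one way to delimit a zone of methodological uncertainty.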
Boyanova, Lyudmila; Ilieva, Juliana; Gergova, Galina; Mitov, Ivan
2016-01-01
We compared the levofloxacin (1 μg/disk) disk diffusion method to the E test against 212 Helicobacter pylori strains. Using diameter breakpoints for susceptibility (≥15 mm) and resistance (≤9 mm), the very major error rate, major error rate, and categorical agreement were 0.0%, 0.6%, and 93.9%, respectively. The method may be useful in low-resource laboratories. Copyright © 2016 Elsevier Inc. All rights reserved.
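The very major/major/minor error terminology used in this and the other susceptibility-testing records follows the usual convention for comparing a test method against a reference method; a minimal sketch:

```python
def discrepancy_category(reference, test):
    """Classify a categorical disagreement per the usual convention.

    reference, test: 'S' (susceptible), 'I' (intermediate), or 'R' (resistant).
    Returns 'very major' (false susceptibility), 'major' (false resistance),
    'minor' (one method intermediate), or 'agreement'.
    """
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"   # resistant isolate reported as susceptible
    if reference == "S" and test == "R":
        return "major"        # susceptible isolate reported as resistant
    return "minor"            # any disagreement involving an intermediate call
```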
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
Machen, Alexandra; Drake, Tim; Wang, Yun F. (Wayne)
2014-01-01
Rapid identification and antimicrobial susceptibility testing of microorganisms causing bloodstream infections or sepsis have the potential to improve patient care. This proof-of-principle study evaluates the Lysis-Filtration Method for identification as well as antimicrobial susceptibility testing of bacteria directly from positive blood culture bottles in a clinical setting. A total of 100 non-duplicated positive blood cultures were tested and 1012 microorganism-antimicrobial combinations were assessed. An aliquot of non-charcoal blood culture broth was incubated with lysis buffer briefly before being filtered and washed. Microorganisms recovered from the filter membrane were first identified by using Matrix-Assisted Laser Desorption/Ionization Time-of-Flight VITEK® Mass Spectrometry (VITEK MS). After quick identification from VITEK MS, filtered microorganisms were inoculated to VITEK®2 system for full panel antimicrobial susceptibility testing analysis. Of 100 bottles tested, the VITEK MS resulted in 94.0% correct organism identification to the species level. Compared to the conventional antimicrobial susceptibility testing methods, direct antimicrobial susceptibility testing from VITEK®2 resulted in 93.5% (946/1012) category agreement of antimicrobials tested, with 3.6% (36/1012) minor error, 1.7% (7/1012) major error, and 1.3% (13/1012) very major error of antimicrobials. The average time to identification and antimicrobial susceptibility testing was 11.4 hours by using the Lysis-Filtration method for both VITEK MS and VITEK®2 compared to 56.3 hours by using conventional methods (p<0.00001). Thus, the same-day results of microorganism identification and antimicrobial susceptibility testing directly from positive blood culture can be achieved and can be used for appropriate antibiotic therapy and antibiotic stewardship. PMID:24551067
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors are used to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
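A minimal sketch of the Monte Carlo approach described above: perturb a synthetic symmetric susceptibility tensor with Gaussian noise and collect the empirical spread of its principal susceptibilities (the tensor and noise level below are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" symmetric susceptibility tensor (arbitrary units).
true_tensor = np.diag([1.10, 1.00, 0.90])

def perturbed_eigenvalues(tensor, noise, n_trials=2000):
    """Monte Carlo: add symmetric Gaussian noise to the tensor and collect
    the sorted principal susceptibilities (eigenvalues) of each realization."""
    eigs = np.empty((n_trials, 3))
    for i in range(n_trials):
        e = rng.normal(0.0, noise, size=(3, 3))
        sym = tensor + (e + e.T) / 2.0          # keep the tensor symmetric
        eigs[i] = np.sort(np.linalg.eigvalsh(sym))
    return eigs

eigs = perturbed_eigenvalues(true_tensor, noise=0.01)
std_errors = eigs.std(axis=0)   # empirical standard errors of the eigenvalues
```

Comparing these empirical standard errors (and the dispersion of the corresponding eigenvectors) against the analytical Hext/Jelinek values is the essence of the test performed in the paper.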
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is complex and non-linear, and many of its parameters are not easy to measure directly online, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine that can control the fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and better generalization with an appropriate v.
NASA Astrophysics Data System (ADS)
Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.
2017-02-01
The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (which increases by up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.
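A back-of-envelope check of the closing claim, assuming workload completion scales with performance and the failure rate scales with the cross section (the 5,000× and 8× figures come from the abstract; the proportionality assumptions are ours):

```python
# Relative Mean Workload Between Failures: workloads completed per unit time
# divided by failures per unit time. Assumes throughput scales with the
# performance gain and failure rate scales with the cross-section increase.

def mwbf_gain(performance_gain, cross_section_increase):
    return performance_gain / cross_section_increase

gain = mwbf_gain(5000.0, 8.0)  # -> 625.0: far more work completed per failure
```

Under these assumptions, even an 8× larger cross section leaves the accelerated design completing hundreds of times more work between failures, which is why the MWBF still improves.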
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed, including: (1) soft decision Viterbi decoding; (2) node synchronization for the soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) a convolutional encoder; (5) programs to investigate new convolutional codes; (6) a pseudo-noise sequence generator; (7) a soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) a percent complete indicator when a program is executed; (11) header documentation; and (12) a help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE-decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. RFI with several duty cycles exists on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one, which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
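Item (4) in the list above is a convolutional encoder, the component whose output a soft decision Viterbi decoder would later decode. A minimal sketch of a rate-1/2 encoder (the constraint length K = 3 and generator polynomials g0 = 111, g1 = 101 are illustrative choices, not taken from the report):

```python
def conv_encode(bits, g0=0b111, g1=0b101, k=3):
    """Rate-1/2 convolutional encoder: emits two output bits per input bit,
    each the parity of the shift-register state masked by a generator."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << k) - 1)   # K-bit shift register
        out.append(bin(state & g0).count("1") % 2)    # parity with generator 0
        out.append(bin(state & g1).count("1") % 2)    # parity with generator 1
    return out

encoded = conv_encode([1, 0, 1, 1])  # -> [1, 1, 1, 0, 0, 0, 0, 1]
```

Because each output bit depends on the current and previous K-1 input bits, the encoded stream has exactly the kind of memory the report's Markov Chain channel models are designed to capture.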
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattarivand, Mike; Summers, Clare; Robar, James
Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with a 6-degree-of-freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3 mm shift or a 3° rotation (in any of the 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5 ± 2.4 mm. Using the 3 mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients, who require soft tissue image registration from CBCT. The spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.
Optimization of a superconducting linear levitation system using a soft ferromagnet
NASA Astrophysics Data System (ADS)
Agramunt-Puig, Sebastia; Del-Valle, Nuria; Navau, Carles; Sanchez, Alvaro
2013-04-01
The use of guideways that combine permanent magnets and soft ferromagnetic materials is a common practice in magnetic levitation transport systems (maglevs) with bulk high-temperature superconductors. Theoretical tools to simulate in a realistic way both the behavior of all elements (permanent magnets, soft ferromagnet, and superconductor) and their mutual effects are helpful to optimize the designs of real systems. Here we present a systematic study of the levitation of a maglev with translational symmetry consisting of a superconducting bar and a guideway with two identical permanent magnets and a soft ferromagnetic material between them. The system is simulated with a numerical model based on the energy minimization method that allows analysis of the mutual interaction of the superconductor, assumed to be in the critical state, and a soft ferromagnet with infinite susceptibility. Results indicate that introducing a soft ferromagnet between the permanent magnets not only increases the levitation force but also improves the stability. In addition, an estimate of the relative sizes and shapes of the soft ferromagnet, permanent magnets, and superconductor required to obtain a large levitation force with full stability is provided.
Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.
Felt, Wyatt; Chin, Khai Yi; Remy, C David
2017-09-01
This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
Adaptive and Resilient Soft Tensegrity Robots.
Rieffel, John; Mouret, Jean-Baptiste
2018-04-17
Living organisms intertwine soft (e.g., muscle) and hard (e.g., bones) materials, giving them an intrinsic flexibility and resiliency often lacking in conventional rigid robots. The emerging field of soft robotics seeks to harness these same properties to create resilient machines. The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control-and up until now, the vast majority of gaits for soft robots have been hand-designed through empirical trial-and-error. This article describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage. Enabling this is the use of a machine learning algorithm able to discover effective gaits with a minimal number of physical trials. These results lend further credence to soft-robotic approaches that seek to harness the interaction of complex material dynamics to generate a wealth of dynamical behaviors.
Calibration of low-temperature ac susceptometers with a copper cylinder standard
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, D.-X.; Skumryev, V.
2010-02-15
A high-quality low-temperature ac susceptometer is calibrated by comparing the measured ac susceptibility of a copper cylinder with its accurately calculated eddy-current ac susceptibility. Different from conventional calibration techniques, which compare the measured results with the known property of a standard sample at certain fixed temperature T, field amplitude H_m, and frequency f to get a magnitude correction factor, here the electromagnetic properties of the copper cylinder are unknown and are determined during the calibration of the ac susceptometer over the entire T, H_m, and f range. It is shown that the maximum magnitude error and the maximum phase error of the susceptometer are less than 0.7% and 0.3 deg., respectively, in the region T = 5-300 K and f = 111-1111 Hz at H_m = 800 A/m, after a magnitude correction by a constant factor as done in a conventional calibration. However, the magnitude and phase errors can reach 2% and 4.3 deg. at 10 000 and 11 Hz, respectively. Since the errors are reproducible, a large portion of them may be further corrected after a calibration, the procedure for which is given. Conceptual discussions concerning the error sources, comparison with other calibration methods, and applications of ac susceptibility techniques are presented.
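The magnitude and phase errors quoted above compare a measured complex ac susceptibility against its calculated reference; a minimal sketch of that comparison (the susceptibility values below are hypothetical):

```python
import cmath
import math

def magnitude_phase_error(chi_measured, chi_calculated):
    """Fractional magnitude error and phase error (in degrees) between a
    measured complex ac susceptibility and its calculated reference value."""
    mag_err = abs(chi_measured) / abs(chi_calculated) - 1.0
    phase_err = math.degrees(cmath.phase(chi_measured) - cmath.phase(chi_calculated))
    return mag_err, phase_err

# Hypothetical: measurement 2% too large in magnitude, 0.3 deg. off in phase.
chi_calc = complex(-1.0, 0.1)
chi_meas = chi_calc * cmath.rect(1.02, math.radians(0.3))
mag_err, phase_err = magnitude_phase_error(chi_meas, chi_calc)
```

Dividing out a constant magnitude correction factor, as in a conventional calibration, removes a frequency-independent part of `mag_err` but not the frequency-dependent residuals the paper tabulates.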
Devitt, Aleea L.; Tippett, Lynette; Schacter, Daniel L.; Addis, Donna Rose
2016-01-01
Because of its reconstructive nature, autobiographical memory (AM) is subject to a range of distortions. One distortion involves the erroneous incorporation of features from one episodic memory into another, forming what are known as memory conjunction errors. Healthy aging has been associated with an enhanced susceptibility to conjunction errors for laboratory stimuli, yet it is unclear whether these findings translate to the autobiographical domain. We investigated the impact of aging on vulnerability to AM conjunction errors, and explored potential cognitive processes underlying the formation of these errors. An imagination recombination paradigm was used to elicit AM conjunction errors in young and older adults. Participants also completed a battery of neuropsychological tests targeting relational memory and inhibition ability. Consistent with findings using laboratory stimuli, older adults were more susceptible to AM conjunction errors than younger adults. However, older adults were not differentially vulnerable to the inflating effects of imagination. Individual variation in AM conjunction error vulnerability was attributable to inhibitory capacity. An inability to suppress the cumulative familiarity of individual AM details appears to contribute to the heightened formation of AM conjunction errors with age. PMID:27929343
NASA Astrophysics Data System (ADS)
Tan, Chee Ghuan; Chia, Wei Ting; Majid, Taksiah A.; Nazri, Fadzli Mohamed; Adiyanto, Mohd Irwan
2017-10-01
On 5 June 2015, a moderate earthquake of Mw 5.9 hit Ranau and damaged existing non-seismically designed buildings: 61 buildings, including mosques, schools, hospitals, and the Ranau police headquarters, suffered different levels of structural damage. Soft-storey irregularity is one of the main reasons for the building damage. This study investigates the soft-storey effect on the propagation path of plastic hinges in RC buildings under seismic excitation. The plastic hinge formation and seismic performance of five moment-resisting RC frames with different infill configurations are studied. The seismic performance of each building is evaluated by Incremental Dynamic Analysis (IDA). The open-ground soft-storey structure shows the lowest seismic resistance, collapsing at a peak ground acceleration (pga) of 0.55 g. The maximum interstorey drift ratio (IDRmax) in the soft-storey buildings ranges from 0.53% to 2.96%, far greater than in the bare frame, where it ranges from 0.095% to 0.69%. The presence of infill walls creates stiffer upper stories, concentrating moments at the soft storey, so that plastic hinge propagation is dominant in the soft-storey columns. Hence, buildings with a soft storey are very susceptible under earthquake load.
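The interstorey drift ratio used above is the relative displacement between adjacent floors divided by the storey height; a minimal sketch with hypothetical peak floor displacements:

```python
def max_interstorey_drift_ratio(floor_displacements_mm, storey_height_mm):
    """Largest relative displacement between adjacent floors, as a fraction
    of the storey height (multiply by 100 for a percentage)."""
    drifts = [abs(upper - lower) / storey_height_mm
              for lower, upper in zip(floor_displacements_mm,
                                      floor_displacements_mm[1:])]
    return max(drifts)

# Hypothetical peak displacements at ground, 1st, 2nd, 3rd floors (mm),
# with uniform 3000 mm storey heights. The large jump at the first storey
# mimics a soft ground storey.
idr = max_interstorey_drift_ratio([0.0, 45.0, 60.0, 68.0], 3000.0)  # -> 0.015
```

Here the drift concentrates in the ground storey (1.5%), which is within the 0.53-2.96% range the study reports for soft-storey frames.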
Non Diphtheritic Corynebacteria: An Emerging Nosocomial Pathogen in Skin and Soft Tissue Infection
Ravi, GS; Alex, Ann Mary; Mamatha, KR; Sunitha, L; Ramya, K Thangam
2015-01-01
Introduction Non-diphtheritic corynebacteria are normal inhabitants of the skin and mucous membranes. When isolated from clinical specimens they are often considered contaminants, but recent reports suggest their role as emerging nosocomial pathogens. Aim To speciate non-diphtheritic corynebacteria isolated from wound specimens, to correlate their clinical significance, and to determine their in vitro antimicrobial susceptibilities to 9 antimicrobial agents. Materials and Methods Twenty-five non-diphtheritic corynebacteria from skin and soft tissue infections were selected for study. Isolates were identified by a battery of tests, and the minimum inhibitory concentration (MIC) was determined by the Clinical & Laboratory Standards Institute (CLSI) broth microdilution method. MICs were interpreted according to CLSI and British Society for Antimicrobial Chemotherapy (BSAC) guidelines. Results C. amycolatum was the predominant species (20%), followed by C. striatum (16%). Penicillin was least effective in vitro, followed by clindamycin and ciprofloxacin. Excellent activity was shown by vancomycin, linezolid, and imipenem. Multidrug resistance was found in all the species. Conclusion Non-diphtheritic corynebacteria are potential nosocomial pathogens in acute/chronic complicated skin and soft tissue infections. Vancomycin or linezolid can be used empirically to treat such infections until in vitro susceptibility results are available. PMID:26816891
Sroussi, Herve Y; Epstein, Joel B; Bensadoun, Rene-Jean; Saunders, Deborah P; Lalla, Rajesh V; Migliorati, Cesar A; Heaivilin, Natalie; Zumsteg, Zachary S
2017-12-01
Patients undergoing radiation therapy for the head and neck are susceptible to a significant and often abrupt deterioration in their oral health. The oral morbidities of radiation therapy include, but are not limited to, an increased susceptibility to dental caries and periodontal disease. They also include profound and often permanent functional and sensory changes involving the oral soft tissue. These changes include oral mucositis experienced during and soon after treatment, opportunistic mucosal infections, neurosensory disorders, and tissue fibrosis. Many of the oral soft tissue changes following radiation therapy pose difficult challenges to patients and their caregivers and require life-long strategies to alleviate their deleterious effects on basic life functions and on quality of life. We discuss the presentation, prognosis, and management strategies of the dental structure and oral soft tissue morbidities resulting from the administration of therapeutic radiation in head and neck patients. A case for a collaborative and integrated multidisciplinary approach to the management of these patients is made, with a specific recommendation to include knowledgeable and experienced oral health care professionals in the treatment team. © 2017 The Authors. Cancer Medicine published by John Wiley & Sons Ltd.
Catalanotti, Piergiorgio; Lanza, Michele; Del Prete, Antonio; Lucido, Maria; Catania, Maria Rosaria; Gallè, Francesca; Boggia, Daniela; Perfetto, Brunella; Rossano, Fabio
2005-10-01
In recent years, an increase in ocular pathologies related to soft contact lenses has been observed. The most common infectious agents are Staphylococcus spp. Some strains produce an extracellular polysaccharide slime that can cause severe infections. Polysaccharide synthesis is under genetic control and involves a specific intercellular adhesion (ica) locus, in particular the icaA and icaD genes. Conjunctival swabs from 97 patients with presumably bacterial bilateral conjunctivitis, all wearers of soft contact lenses, were examined. We determined the ability of the staphylococci to produce slime, relating it to the presence of the icaA and icaD genes. We also investigated the antibiotic susceptibility and Pulsed Field Gel Electrophoresis (PFGE) patterns of the clinical isolates. We found that 74.1% of the S. epidermidis strains and 61.1% of the S. aureus strains isolated were slime producers and carried the icaA and icaD genes. Both S. epidermidis and S. aureus slime-producing strains exhibited more surface hydrophobicity than non-slime-producing strains. The PFGE patterns overlapped in S. epidermidis strains with high hydrophobicity, but similar PFGE patterns were not related to biofilm production. We found scarce matching among the Staphylococcus spp. studied between slime production, surface hydrophobicity and antibiotic susceptibility.
Dibb, Russell; Liu, Chunlei
2017-06-01
To develop a susceptibility-based MRI technique for probing the microstructure and fiber architecture of magnetically anisotropic tissues (such as central nervous system white matter, renal tubules, and myocardial fibers) in three dimensions using susceptibility tensor imaging (STI) tools. STI can probe tissue microstructure, but is limited by reconstruction artifacts caused by noise and by the absence of phase information outside the tissue. STI accuracy may be improved by estimating a joint eigenvector from mutually anisotropic susceptibility and relaxation tensors. Gradient-recalled echo image data were simulated using a numerical phantom and acquired from ex vivo mouse brain, kidney, and heart. Susceptibility tensor data were reconstructed using STI, regularized STI, and the proposed algorithm of mutually anisotropic and joint eigenvector STI (MAJESTI). Fiber map and tractography results from each technique were compared with diffusion tensor data. MAJESTI reduced the estimated susceptibility tensor orientation error by 30% in the phantom, 36% in brain white matter, 40% in the inner medulla of the kidney, and 45% in myocardium. This improved the continuity and consistency of susceptibility-based fiber tractography in each tissue. MAJESTI estimation of the susceptibility tensors yields lower orientation errors for susceptibility-based fiber mapping and tractography in the intact brain, kidney, and heart. Magn Reson Med 77:2331-2346, 2017. © 2016 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMRs) and the military's tactical LMRs is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs, namely the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes (Hamming, quadratic residue, hard-decision Golay and soft-decision Golay) are examined, tested on three FORTRAN-programmed channel simulations (INMARSAT, Gaussian and constant burst width), and compared and analyzed on the basis of bit error rates and the percentage of error-free super-frame runs so that the best code can be recommended. Of the four codes under study, the soft-decision Golay (24,12) code is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementing the algorithm.
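The advantage of soft-decision over hard-decision decoding, central to the thesis's conclusion, can be illustrated on a much simpler code than the Golay (24,12): a rate-1/3 repetition code over a Gaussian channel. This toy sketch (not the thesis's code or channel models) shows why keeping sample magnitudes instead of thresholding first lowers the bit error rate:

```python
# Toy comparison of hard vs. soft decisions on a 3x repetition code over an
# AWGN channel. Illustrative stand-in for the Golay (24,12) code, not that code.
import random

def transmit(bit, noise_sigma, rng):
    # BPSK: bit 0 -> +1, bit 1 -> -1, plus Gaussian noise on each repeat
    return [(1.0 if bit == 0 else -1.0) + rng.gauss(0, noise_sigma) for _ in range(3)]

def hard_decode(samples):
    # Threshold each sample first, then take a majority vote
    votes = sum(1 for s in samples if s < 0)
    return 1 if votes >= 2 else 0

def soft_decode(samples):
    # Correlation decision: uses the sample magnitudes, not just their signs
    return 1 if sum(samples) < 0 else 0

rng = random.Random(1)
trials = 20000
hard_err = soft_err = 0
for _ in range(trials):
    bit = rng.randint(0, 1)
    rx = transmit(bit, 1.0, rng)
    hard_err += hard_decode(rx) != bit
    soft_err += soft_decode(rx) != bit
print(f"hard BER {hard_err/trials:.4f}  soft BER {soft_err/trials:.4f}")
```

The soft decoder consistently wins because thresholding discards reliability information; the same effect, on a stronger code, underlies the thesis's preference for soft-decision Golay.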
Rapid Antimicrobial Susceptibility Testing Using Forward Laser Light Scatter Technology
Clinton, Lani K.; Hewitt, Carolyn; Koyamatsu, Terri; Sun, Yilun; Jamison, Ginger; Perkins, Rosalie; Tang, Li; Pounds, Stanley; Bankowski, Matthew J.
2016-01-01
The delayed reporting of antimicrobial susceptibility testing remains a limiting factor in clinical decision-making in the treatment of bacterial infection. This study evaluates the use of forward laser light scatter (FLLS) to measure bacterial growth for the early determination of antimicrobial susceptibility. Three isolates each (two clinical isolates and one reference strain) of Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa were tested in triplicate using two commercial antimicrobial testing systems, the Vitek2 and the MicroScan MIC panel, to challenge the BacterioScan FLLS. The BacterioScan FLLS showed a high degree of categorical concordance with the commercial methods. Pairwise comparison with each commercial system serving as a reference standard showed 88.9% agreement with MicroScan (two minor errors) and 72.2% agreement with Vitek (five minor errors). FLLS using the BacterioScan system shows promise as a novel method for the rapid and accurate determination of antimicrobial susceptibility. PMID:27558176
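Categorical agreement of the kind reported above is computed by binning each isolate's result into susceptible/intermediate/resistant for both methods and grading the disagreements. A minimal sketch using the standard CLSI error-grading convention, with invented isolate results rather than the study's data:

```python
# Categorical agreement and error grading between two susceptibility methods.
# Categories follow the CLSI convention: S, I (intermediate), R.
# The isolate results below are illustrative, not the study's data.

def grade(reference, test):
    if reference == test:
        return "agree"
    if {reference, test} == {"S", "R"}:
        # test S / reference R = "very major"; test R / reference S = "major"
        return "very major" if test == "S" else "major"
    return "minor"  # any shift involving the intermediate category

ref  = ["S", "S", "R", "I", "S", "R", "S", "S", "I"]
test = ["S", "I", "R", "S", "S", "R", "S", "R", "I"]
grades = [grade(r, t) for r, t in zip(ref, test)]
agreement = grades.count("agree") / len(grades)
print(f"{agreement:.1%} agreement, "
      f"{grades.count('minor')} minor, {grades.count('major')} major, "
      f"{grades.count('very major')} very major")
```

The "minor errors" reported for MicroScan and Vitek2 above are exactly this kind of shift into or out of the intermediate category.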
NASA Astrophysics Data System (ADS)
Alexandrou, Constantia; Athenodorou, Andreas; Cichy, Krzysztof; Constantinou, Martha; Horkel, Derek P.; Jansen, Karl; Koutsou, Giannis; Larkin, Conor
2018-04-01
We compare lattice QCD determinations of topological susceptibility using a gluonic definition from the gradient flow and a fermionic definition from the spectral-projector method. We use ensembles with dynamical light, strange and charm flavors of maximally twisted mass fermions. For both definitions of the susceptibility we employ ensembles at three values of the lattice spacing and several quark masses at each spacing. The data are fitted to chiral perturbation theory predictions with a discretization term to determine the continuum chiral condensate in the massless limit and estimate the overall discretization errors. We find that both approaches lead to compatible results in the continuum limit, but the gluonic ones are much more affected by cutoff effects. This finally yields a much smaller total error in the spectral-projector results. We show that there exists, in principle, a value of the spectral cutoff which would completely eliminate discretization effects in the topological susceptibility.
Khurana, Harpreet Kaur; Cho, Il Kyu; Shim, Jae Yong; Li, Qing X; Jun, Soojin
2008-02-13
Aspartame is a low-calorie sweetener commonly used in soft drinks; however, the maximum usage dose is limited by the U.S. Food and Drug Administration. Fourier transform infrared (FTIR) spectroscopy with an attenuated total reflectance sampling accessory and partial least-squares (PLS) regression was used for rapid determination of aspartame in soft drinks. On the basis of spectral characterization, the highest R2 value, and the lowest PRESS value, the spectral region between 1600 and 1900 cm⁻¹ was selected for quantitative estimation of aspartame. The potential of FTIR spectroscopy for aspartame quantification was examined and validated against the conventional HPLC method. Using the FTIR method, aspartame contents in four selected carbonated diet soft drinks were found to range from 0.43 to 0.50 mg/mL, with prediction errors of 2.4 to 5.7% when compared with HPLC measurements. The developed method also showed a high degree of accuracy because real samples were used for calibration, minimizing potential interference errors. The FTIR method can be suitably used for routine quality-control analysis of aspartame in the beverage-manufacturing sector.
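In the single-component case, a spectroscopic calibration of this kind reduces to regressing concentration on absorbance over the chosen band and then reporting the percent error against a reference method. A minimal least-squares sketch with made-up absorbance/concentration pairs (the paper itself uses multivariate PLS over the 1600-1900 cm⁻¹ band, not this one-dimensional fit):

```python
# Single-component calibration: least-squares fit of concentration vs. absorbance,
# then percent prediction error against a reference value. All numbers are
# illustrative, not the paper's spectra or HPLC results.

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Calibration standards: absorbance (a.u.) vs. aspartame concentration (mg/mL)
absorb = [0.10, 0.20, 0.30, 0.40, 0.50]
conc   = [0.11, 0.21, 0.29, 0.41, 0.50]
m, c = fit_line(absorb, conc)

predicted = m * 0.44 + c          # unknown sample with absorbance 0.44
reference = 0.45                  # e.g. an HPLC reference measurement
err_pct = abs(predicted - reference) / reference * 100
print(f"predicted {predicted:.3f} mg/mL, error vs reference {err_pct:.1f}%")
```

The 2.4-5.7% prediction errors quoted in the abstract are this same FTIR-vs-HPLC comparison, computed with the full PLS model.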
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper a low-complexity, energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10⁻⁷ and 10⁻¹², respectively, for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) with a 6-tap finite impulse response (FIR) equalizer results in optical power savings of 3, 5, 7, 9.5 and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
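Net coding gain figures like those above follow from the standard definition: the difference in required Q-factor between the post-FEC and pre-FEC bit error rates, plus a rate penalty term. A sketch that inverts the Gaussian BER-to-Q relation by bisection (the pre-FEC BER used below is an assumed illustrative value, not the paper's measurement):

```python
# Net coding gain (NCG) for a code of rate R that takes an input BER down to a
# reference output BER. Standard definition:
#   NCG = 20*log10 Q(BER_out) - 20*log10 Q(BER_in) + 10*log10 R,
# where BER = 0.5*erfc(Q/sqrt(2)). The pre-FEC BER below is illustrative.
import math

def q_factor(ber):
    # Invert ber = 0.5*erfc(q/sqrt(2)) by bisection on q
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if 0.5 * math.erfc(mid / math.sqrt(2)) > ber:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def net_coding_gain_db(ber_in, ber_out, rate):
    return (20 * math.log10(q_factor(ber_out))
            - 20 * math.log10(q_factor(ber_in))
            + 10 * math.log10(rate))

rate = 26803 / 32768                         # LDPC(32768, 26803)
ncg = net_coding_gain_db(2e-2, 1e-12, rate)  # hypothetical pre-FEC BER of 2e-2
print(f"NCG = {ncg:.2f} dB")
```

Note the 10·log10(R) term: a lower-rate code must overcome its own bandwidth expansion before any net gain is counted.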
Estimating patient-specific soft-tissue properties in a TKA knee.
Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A
2016-03-01
Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
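The 3.5° vs. 8.4° comparison above is a root-mean-square error between simulated and measured joint angles across a motion trial. A minimal sketch with invented angle series (not the study's kinematics):

```python
# RMSE between simulated and experimentally measured tibiofemoral angles
# (degrees). The angle series are illustrative, not the study's data.
import math

def rmse(sim, exp):
    return math.sqrt(sum((s - e) ** 2 for s, e in zip(sim, exp)) / len(sim))

experimental = [0.0, 2.1, 4.0, 5.8, 7.9]
optimized    = [0.5, 2.0, 4.6, 5.2, 8.3]   # patient-specific ligament properties
generic      = [3.0, 5.5, 8.0, 9.9, 12.0]  # generic literature ligament properties
print(f"optimized RMSE {rmse(optimized, experimental):.2f} deg, "
      f"generic RMSE {rmse(generic, experimental):.2f} deg")
```

The point of the parameter optimization is precisely to shrink this RMSE, as the patient-specific model does here relative to the generic one.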
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Aiming at the influence factors of round-grating dividing error and rolling-wheel eccentricity and surface-shape errors, this paper provides a rolling-wheel-based correction method that yields a composite error model including all of the influence factors above, and then corrects the non-circular angle-measurement error of the rolling wheel. We verify the method by software simulation and by experiment; the results indicate that the composite error correction method can improve the diameter-measurement accuracy of the rolling-wheel approach. It has wide application prospects for measurement accuracies better than 5 μm/m.
Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry
2014-01-01
Command File Errors (CFEs) are managed using standard risk-management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors with the goal of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System composed of the hardware, software and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. These multi-mission models are then customized to a sample interplanetary spacecraft.
Cutti, Andrea Giovanni; Cappello, Angelo; Davalli, Angelo
2006-01-01
Soft tissue artefact is the dominant error source for upper extremity motion analyses that use skin-mounted markers, especially in humeral axial rotation. A new in vivo technique is presented that is based on the definition of a humerus bone-embedded frame almost "artefact free" but influenced by the elbow orientation in the measurement of the humeral axial rotation, and on an algorithm designed to solve this kinematic coupling. The technique was validated in vivo in a study of six healthy subjects who performed five arm-movement tasks. For each task the similarity between a gold standard pattern and the axial rotation pattern before and after the application of the compensation algorithm was evaluated in terms of explained variance, gain, phase and offset. In addition the root mean square error between the patterns was used as a global similarity estimator. After the application, for four out of five tasks, patterns were highly correlated, in phase, with almost equal gain and limited offset; the root mean square error decreased from the original 9 degrees to 3 degrees . The proposed technique appears to help compensate for the soft tissue artefact affecting axial rotation. A further development is also proposed to make the technique effective also for the pure prono-supination task.
NASA Astrophysics Data System (ADS)
Zand, Ramtin; DeMara, Ronald F.
2017-12-01
In this paper, we have developed a radiation-hardened non-volatile lookup table (LUT) circuit utilizing spin Hall effect (SHE)-magnetic random access memory (MRAM) devices. The design is motivated by modeling the effect of radiation particles striking hybrid complementary metal oxide semiconductor/spin based circuits, and the resistive behavior of SHE-MRAM devices via established and precise physics equations. The models developed are leveraged in the SPICE circuit simulator to verify the functionality of the proposed design. The proposed hardening technique is based on using feedback transistors, as well as increasing the radiation capacity of the sensitive nodes. Simulation results show that our proposed LUT circuit can achieve multiple node upset (MNU) tolerance with more than 38% and 60% power-delay product improvement as well as 26% and 50% reduction in device count compared to the previous energy-efficient radiation-hardened LUT designs. Finally, we have performed a process variation analysis showing that the MNU immunity of our proposed circuit is realized at the cost of increased susceptibility to transistor and MRAM variations compared to an unprotected LUT design.
Black, Dolores Archuleta; Robinson, William H.; Wilcox, Ian Zachary; ...
2015-08-07
Single event effects (SEE) are a reliability concern for modern microelectronics. Bit corruptions can be caused by single event upsets (SEUs) in the storage cells or by sampling single event transients (SETs) from a logic path. Accordingly, accurate prediction of soft error susceptibility from SETs requires good models to convert collected charge into compact descriptions of the current injection process. This paper describes a simple, yet effective, method to model the current waveform resulting from a charge collection event for SET circuit simulations. The model uses two double-exponential current sources in parallel, and the results illustrate why a conventional model based on one double-exponential source can be incomplete. Furthermore, a small set of logic cells with varying input conditions, drive strength, and output loading is simulated to extract the parameters for the dual double-exponential current sources. As a result, the parameters are based upon both the node capacitance and the restoring current (i.e., drive strength) of the logic cell.
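The dual double-exponential model described above sums two classic double-exponential pulses with different time constants, one for prompt charge collection and one for the slower diffusion tail. A sketch of the waveform and the collected charge (all amplitudes and time constants are invented for illustration, not extracted from the paper):

```python
# Dual double-exponential SET current model: two parallel double-exponential
# sources with different time constants, summed. All parameters below are
# illustrative, not the paper's extracted values.
import math

def double_exp(t, i0, tau_rise, tau_fall):
    # Classic single-event current pulse shape
    return i0 * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def dual_double_exp(t):
    fast = double_exp(t, i0=1.2e-3, tau_rise=5e-12, tau_fall=50e-12)   # prompt collection
    slow = double_exp(t, i0=0.2e-3, tau_rise=50e-12, tau_fall=500e-12) # diffusion tail
    return fast + slow

# Collected charge = integral of I(t) dt (trapezoidal rule over 2 ns)
dt = 1e-12
ts = [k * dt for k in range(2000)]
currents = [dual_double_exp(t) for t in ts]
charge = sum((a + b) / 2 * dt for a, b in zip(currents, currents[1:]))
print(f"peak current {max(currents)*1e3:.3f} mA, collected charge {charge*1e15:.1f} fC")
```

A single double-exponential cannot reproduce both the sharp prompt peak and the long tail at once, which is the incompleteness the paper points out.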
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of the in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We construct a full-array image of a point-like source by extracting the pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the ray-tracing model with a half-power diameter of 1'.2 is consistent with the observed event distribution across pixels. The ratio between the Crab pulsar image and the ray-tracing model shows pixel-to-pixel scatter of 40% or less in all but one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding spread is 15% with an error of 6%.
Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V
2009-04-01
We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study, 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones previously collected at our clinic, were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded in human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings: standard soft tissue windows (window width 320 and window level 50), standard bone windows (window width 1120 and window level 300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study, 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium-based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and the paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows, with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification, were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm, p = 0.003, <0.001 and 0.0002, respectively).
When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi the measurement error was observed only in standard soft tissue window settings. In vivo 4.0x magnified bone windows was superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically distinct (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurements in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro the measurement error in calcium oxalate stones was greater than that in uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.
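The statistical comparison underlying these results is a paired t test on the per-stone differences between the CT measurement and the caliper gold standard. A minimal sketch with invented difference values (not the study's measurements):

```python
# Paired comparison of CT stone measurements against a caliper gold standard:
# mean difference and paired t statistic. The difference values are illustrative.
import math

def paired_t(diffs):
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return mean, t

# (CT measurement - caliper measurement), mm, for one window setting
diffs = [0.2, -0.1, 0.3, 0.0, 0.1, -0.2, 0.4, 0.1]
mean, t = paired_t(diffs)
print(f"mean difference {mean:.2f} mm, t = {t:.2f}")
```

A mean difference near zero with a small t statistic, as for the magnified bone windows above (p = 0.6), indicates no systematic measurement bias.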
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology for accomplishing this and techniques for deriving the expected product-level reliability of commercial memory products are provided. Competing-mechanism theory and the multiple-failure-mechanism model are applied to the experimental results for scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level to several scaled memory products to assess performance degradation and product reliability, and acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with a Weibull slope β = 1, and a main breakdown population with an increasing failure rate. Retention time soft error rates are calculated and a multiple-failure-mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits with each successive product generation.
A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm2 for several scaled SDRAM generations is presented, revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
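Normalizing a soft error rate to FIT/Gb, as above, means converting an observed error count into failures per 10⁹ device-hours and dividing by the device capacity. A minimal sketch with invented test counts (not the study's data):

```python
# Normalizing an observed soft-error count to FIT/Gb.
# 1 FIT = 1 failure per 1e9 device-hours. All counts below are illustrative.

def fit_per_gb(errors, device_hours, capacity_gb):
    fit = errors / device_hours * 1e9   # failures per 1e9 device-hours
    return fit / capacity_gb            # normalized per gigabit of capacity

# e.g. 3 retention-time soft errors over 500 devices x 2000 h of stress, 1-Gb parts
rate = fit_per_gb(errors=3, device_hours=500 * 2000, capacity_gb=1.0)
print(f"{rate:.0f} FIT/Gb")
```

Normalizing per gigabit (or per cm²) is what makes rates comparable across technology generations with different densities.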
Code of Federal Regulations, 2013 CFR
2013-04-01
... Staphylococcus aureus. For the treatment of genitourinary tract infections (cystitis) due to susceptible strains of Escherichia coli, Proteus mirabilis, and S. aureus. (ii) Cats. For the treatment of skin and soft... strains of Pasteurella multocida, S. aureus, Staphylococcus epidermidis, and Streptococcus spp. (3...
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase-shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between the SD-FEC and the NLEs over various nonlinear transmission scenarios was demonstrated by optimizing the NLE parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion.
Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
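At its simplest, position-based visual servoing of this kind feeds the measured head-position error back to the valve command at each camera frame. A toy discrete proportional-integral loop on a first-order bladder model, where all gains, the plant time constant and the setpoint are invented (not the system's identified values), reproduces the qualitative behavior of an overshoot that decays to zero steady-state error:

```python
# Toy position-based servo loop: PI control of a first-order "air bladder"
# displacement model, driving the head position to a setpoint. Gains, time
# constant and setpoint are invented, not the system's identified values.

def simulate(setpoint_mm, steps, dt=0.1, kp=0.8, ki=0.4, tau=1.5):
    pos, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        error = setpoint_mm - pos
        integral += error * dt
        command = kp * error + ki * integral
        # first-order plant: d(pos)/dt = (command - pos) / tau
        pos += (command - pos) / tau * dt
        history.append(pos)
    return history

traj = simulate(setpoint_mm=10.0, steps=400)
final_error = abs(10.0 - traj[-1])
print(f"final position {traj[-1]:.2f} mm (error {final_error:.3f} mm)")
```

The integral term is what drives the steady-state error to zero, mirroring the 0% steady-state error reported above; the pneumatic delay the authors observed would appear as a dead time this toy model omits.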
Colosi, Ioana A; Faure, Odile; Dessaigne, Bérangére; Bourdon, Cécile; Lebeau, Bernadette; Colosi, Horaţiu A; Pelloux, Hervé
2012-05-01
We compared the E-test method with the Neo-Sensitabs tablet diffusion assay for evaluating the in vitro susceptibility of 100 clinical isolates of filamentous fungi (Aspergillus spp., Fusarium spp., Scedosporium spp., zygomycetes and other molds) to amphotericin B, itraconazole, voriconazole, caspofungin, and posaconazole. We determined the categorical agreement between E-test minimum inhibitory concentrations (MICs) and tablet end-points, as opposed to the following disagreement parameters: very major error, resistant (R) by E-test and susceptible (S) by tablet; major error, S by E-test and R by tablet; minor error, shifts between S and susceptible dose-dependent (S-DD) or between S-DD and R. We also performed linear regression analyses and computed Pearson's correlation coefficients (R values) between the log transforms of the MICs and the inhibition zone diameters for the five antifungal agents studied. For itraconazole we obtained 97% categorical agreement and R = -0.727. Categorical agreement for caspofungin and voriconazole was 96%, with R = -0.821 and R = -0.789, respectively. For posaconazole the categorical agreement was 94% and R = -0.743. Amphotericin B exhibited a lower degree of agreement (76%, R = -0.672), especially for Aspergillus spp. Our results suggest a potential value of the Neo-Sensitabs assay for in vitro susceptibility testing of molds to itraconazole, voriconazole, caspofungin and posaconazole, while amphotericin B exhibited an overall lower degree of agreement.
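The negative R values above reflect the expected inverse relation between MIC and inhibition-zone diameter: the more resistant the isolate (higher MIC), the smaller the zone. A minimal Pearson-correlation sketch with invented paired values (not the study's isolates):

```python
# Pearson correlation between log2(MIC) and inhibition-zone diameter.
# A higher MIC (more resistant) should give a smaller zone, i.e. negative r.
# The paired values below are illustrative, not the study's data.
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

log2_mic = [math.log2(m) for m in [0.25, 0.5, 1, 2, 4, 8, 16]]
zone_mm  = [32, 30, 26, 22, 19, 14, 11]
r = pearson_r(log2_mic, zone_mm)
print(f"r = {r:.3f}")
```

The study's weaker correlation for amphotericin B (R = -0.672) corresponds to a noisier version of this relationship.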
Dynamic soft tissue deformation estimation based on energy analysis
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Yao, Bin
2016-10-01
The needle placement accuracy of millimeters is required in many needle-based surgeries. The tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions. It is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, where a geometry model is presented to quantitatively approximate the volume of tissue deformation. The energy-based method is presented to the dynamic process of needle insertion into soft tissue based on continuum mechanics, and the volume of the cone is exploited to quantitatively approximate the deformation on the surface of soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interactive process. The needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed while an image-based method for measuring the depth and radius of the soft tissue surface deformations is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is created based on the law of conservation of energy, with the volume of tissue deformation having been obtained using image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation, and the root mean squared errors of the fitting model and experimental data are 0.61 and 0.25 at the velocities 2.50 mm/s and 5.00 mm/s. 
The estimated parameters of the soft tissue surface deformation are shown to be useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.
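As a rough illustration of the geometry model described above, the sketch below (with made-up measurement values, not the paper's data) computes the cone approximation of the deformed-surface volume and the root-mean-squared error used to evaluate a fitted model:

```python
import math

def cone_volume(radius_mm, depth_mm):
    """Cone approximation of the deformed tissue surface: V = (1/3) * pi * r^2 * h."""
    return math.pi * radius_mm ** 2 * depth_mm / 3.0

def rmse(predicted, observed):
    """Root-mean-squared error between fitted-model predictions and measurements."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Hypothetical image-based (radius, depth) measurements in mm.
obs = [cone_volume(r, d) for r, d in [(4.0, 2.0), (4.5, 2.4), (5.0, 2.9)]]
pred = [33.5, 50.9, 75.9]  # placeholder outputs of an energy-based fitted model
print(round(rmse(pred, obs), 2))
```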
Blumenfeld, Philip; Hata, Nobuhiko; DiMaio, Simon; Zou, Kelly; Haker, Steven; Fichtinger, Gabor; Tempany, Clare M C
2007-09-01
To quantify needle placement accuracy of magnetic resonance image (MRI)-guided core needle biopsy of the prostate. A total of 10 biopsies were performed with an 18-gauge (G) core biopsy needle via a percutaneous transperineal approach. Needle placement error was assessed by comparing the coordinates of preplanned targets with the needle tip measured from the intraprocedural coherent gradient echo images. The source of these errors was subsequently investigated by measuring displacement caused by needle deflection and needle susceptibility artifact shift in controlled phantom studies. Needle placement error due to misalignment of the needle template guide was also evaluated. The mean and standard deviation (SD) of errors in targeted biopsies was 6.5 +/- 3.5 mm. Phantom experiments showed significant placement error due to needle deflection with an asymmetrically beveled tip (3.2-8.7 mm depending on tissue type) but significantly smaller error with a symmetrical bevel (0.6-1.1 mm). Needle susceptibility artifacts showed a shift of 1.6 +/- 0.4 mm from the true needle axis. Misalignment of the needle template guide contributed an error of 1.5 +/- 0.3 mm. Needle placement error was clinically significant in MRI-guided biopsy for diagnosis of prostate cancer. Needle placement error due to needle deflection was the most significant cause of error, especially for needles with an asymmetrical bevel. (c) 2007 Wiley-Liss, Inc.
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
Radiation test data submitted by many testers are collated to serve as a reference for engineers who are concerned with, and have some knowledge of, the effects of the natural radiation environment on microcircuits. Total dose damage information and single event upset cross sections, i.e., the probabilities of a soft error (bit flip) or a hard error (latchup), are presented.
75 FR 10165 - New Animal Drugs; Change of Sponsor
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-05
...: This rule is effective March 5, 2010. FOR FURTHER INFORMATION CONTACT: David R. Newkirk, Center for... skin and soft tissue infections including cellulitis, pyoderma, dermatitis, wound infections, and..., cellulitis, and dermatitis caused by susceptible strains of Pasteurella multocida, S. aureus, Staphylococcus...
21 CFR 520.1618 - Orbifloxacin suspension.
Code of Federal Regulations, 2011 CFR
2011-04-01
... spp., Klebsiella pneumoniae, E. coli, Enterobacter spp., Citrobacter spp., E. faecalis, β-hemolytic...) in cats caused by susceptible strains of S. aureus, E. coli, and P. multocida. [75 FR 26646, May 12... pseudintermedius, Proteus mirabilis, Escherichia coli, and Enterococcus faecalis and skin and soft tissue...
Code of Federal Regulations, 2014 CFR
2014-04-01
... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS IMPLANTATION OR INJECTABLE DOSAGE FORM NEW ANIMAL DRUGS § 522.56 Amikacin. (a... Escherichia coli and Proteus spp. and skin and soft tissue infections caused by susceptible strains of...
Code of Federal Regulations, 2012 CFR
2012-04-01
... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS IMPLANTATION OR INJECTABLE DOSAGE FORM NEW ANIMAL DRUGS § 522.56 Amikacin. (a... of Escherichia coli and Proteus spp. and skin and soft tissue infections caused by susceptible...
Code of Federal Regulations, 2013 CFR
2013-04-01
... FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) ANIMAL DRUGS, FEEDS, AND RELATED PRODUCTS IMPLANTATION OR INJECTABLE DOSAGE FORM NEW ANIMAL DRUGS § 522.56 Amikacin. (a... of Escherichia coli and Proteus spp. and skin and soft tissue infections caused by susceptible...
Szczotka-Flynn, Loretta B.; Imamura, Yoshifumi; Chandra, Jyotsna; Yu, Changping; Mukherjee, Pranab K.; Pearlman, Eric; Ghannoum, Mahmoud A.
2014-01-01
PURPOSE To determine if clinical and reference strains of Pseudomonas aeruginosa, Serratia marcescens, and Staphylococcus aureus form biofilms on silicone hydrogel contact lenses, and ascertain antimicrobial activities of contact lens care solutions. METHODS Clinical and American Type Culture Collection (ATCC) reference strains of Pseudomonas aeruginosa, Serratia marcescens, and Staphylococcus aureus were incubated with lotrafilcon A lenses under conditions that facilitate biofilm formation. Biofilms were quantified by quantitative culturing (colony forming units, CFUs), and gross morphology and architecture were evaluated using scanning electron microscopy (SEM) and confocal microscopy. Susceptibilities of the planktonic and biofilm growth phases of the bacteria to five common multipurpose contact lens care solutions and one hydrogen peroxide care solution were assessed. RESULTS P. aeruginosa, S. marcescens, and S. aureus reference and clinical strains formed biofilms on lotrafilcon A silicone hydrogel contact lenses, as dense networks of cells arranged in multiple layers with visible extracellular matrix. The biofilms were resistant to commonly used biguanide preserved multipurpose care solutions. P. aeruginosa and S. aureus biofilms were susceptible to a hydrogen peroxide and a polyquaternium preserved care solution, whereas S. marcescens biofilm was resistant to a polyquaternium preserved care solution but susceptible to hydrogen peroxide disinfection. In contrast, the planktonic forms were always susceptible. CONCLUSIONS P. aeruginosa, S. marcescens, and S. aureus form biofilms on lotrafilcon A contact lenses, which in contrast to planktonic cells, are resistant to the antimicrobial activity of several soft contact lens care products. PMID:19654521
Theoretical study on perpendicular magnetoelectric coupling in ferroelectromagnet system
NASA Astrophysics Data System (ADS)
Zhong, Chonggui; Jiang, Qing
2002-06-01
We apply the Heisenberg model for antiferromagnetic interaction and the Diffour model for ferroelectric interaction to analyze the magnetic, electric, and magnetoelectric properties of a system with spontaneous coexistence of the ferroelectric and antiferromagnetic orders below a certain temperature. Soft mode theory is used to calculate the on-site polarization, and mean field theory is applied to the on-site magnetization. We also present the perpendicular magnetoelectric susceptibility χme⊥ and the polarization susceptibility χp as functions of temperature, and discuss the effect of the inherent magnetoelectric coupling on them. In addition, an anomaly is found in the curve of the polarization susceptibility due to the coupling between the ferroelectric and antiferromagnetic orders.
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression, and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step, to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error, and root-mean-square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
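The ranking metrics named above are standard; a minimal sketch (hypothetical permeability values, with the log10 pre-processing the study describes):

```python
import math

def avg_abs_relative_error(y_true, y_pred):
    """Average absolute relative error (AARE), one metric used to rank the models."""
    return sum(abs((p - t) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return math.sqrt(sum((p - t) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Permeability is compared on a log10 scale, as in the study's pre-processing step.
k_true = [12.0, 150.0, 900.0]   # hypothetical permeabilities (mD)
k_pred = [10.5, 170.0, 840.0]
log_true = [math.log10(k) for k in k_true]
log_pred = [math.log10(k) for k in k_pred]
print(round(avg_abs_relative_error(log_true, log_pred), 4),
      round(rmse(log_true, log_pred), 4))
```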
Evidence for explosive chromospheric evaporation in a solar flare observed with SMM
NASA Technical Reports Server (NTRS)
Zarro, D. M.; Saba, J. L. R.; Strong, K. T.; Canfield, R. C.; Metcalf, T.
1986-01-01
SMM soft X-ray data and Sacramento Peak Observatory H-alpha observations are combined in a study of the impulsive phase of a solar flare. A blue asymmetry, indicative of upflow motions, was observed in the coronal Ca XIX line during the soft X-ray rise phase. H-alpha redshifts, indicative of downward motions, were observed simultaneously in bright flare kernels during the period of hard X-ray emission. It is shown that, to within observational errors, the impulsive phase momentum transported by the upflowing soft X-ray plasma is equivalent to that of the downward moving chromospheric material.
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% coverage against data corruption.
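The paper's detectors are not specified in this abstract; as a hedged illustration of the underlying idea (smooth HPC datasets evolve gradually between time steps, so a point whose change far exceeds the typical change is suspect), one might write:

```python
def detect_silent_corruption(prev_step, curr_step, factor=10.0):
    """Flag indices whose change between consecutive time steps is an outlier.

    A minimal, application-agnostic sketch, not the paper's actual detectors:
    compare each point's change against the median change across the dataset.
    """
    deltas = [abs(b - a) for a, b in zip(prev_step, curr_step)]
    typical = sorted(deltas)[len(deltas) // 2]  # median change across the dataset
    return [i for i, d in enumerate(deltas) if d > factor * (typical + 1e-12)]

clean = [1.00, 1.01, 1.02, 1.03, 1.04]
stepped = [1.01, 1.02, 9.99, 1.04, 1.05]  # index 2 hit by a simulated bit flip
print(detect_silent_corruption(clean, stepped))
```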
Collins, Malcolm; September, Alison V; Posthumus, Michael
2015-12-01
Evidence from familial and genetic association studies has shown that DNA sequence variants play an important role, together with non-genetic factors, in the aetiology of both exercise-associated and occupational-associated acute and chronic musculoskeletal soft tissue injuries. The associated variants identified to date may contribute to the interindividual variation in the structure and, by implication, mechanical properties of the collagen fibril and surrounding matrix within musculoskeletal soft tissues, as well as their response to mechanical loading and other stimuli. Future work should focus on the establishment of multidisciplinary international consortia for the identification of biologically relevant variants involved in modulating injury risk. These consortia will address the limitations of the published hypothesis-driven genetic association studies, since they will allow resources to be pooled in recruiting the large, well-characterised cohorts required for whole-genome screening. Finally, clinicians and coaches need to be aware that many direct-to-consumer companies are currently marketing genetic tests directly to athletes without the tests being requested by an appropriately qualified healthcare professional, and without interpretation alongside other clinical indicators or lifestyle factors. These specific genetic tests are premature and are not necessarily required to evaluate susceptibility to musculoskeletal soft tissue injury. Current practice should rather consider susceptibility through known risk factors such as a positive family history of a specific injury, a history of other tendon and/or ligament injuries, and participation in activities associated with the specific musculoskeletal injuries. Potentially susceptible athletes may then be individually managed to reduce their risk profile. Published by the BMJ Publishing Group Limited.
NASA Astrophysics Data System (ADS)
Collins, Jarrod A.; Heiselman, Jon S.; Weis, Jared A.; Clements, Logan W.; Simpson, Amber L.; Jarnagin, William R.; Miga, Michael I.
2017-03-01
In image-guided liver surgery (IGLS), sparse representations of the anterior organ surface may be collected intraoperatively to drive image-to-physical space registration. Soft tissue deformation represents a significant source of error for IGLS techniques. This work investigates the impact of surface data quality on current surface-based IGLS registration methods. In this work, we characterize the robustness of our IGLS registration methods to noise in organ surface digitization. We study this within a novel human-to-phantom data framework that allows a rapid evaluation of clinically realistic data and noise patterns on a fully characterized hepatic deformation phantom. Additionally, we implement a surface data resampling strategy that is designed to decrease the impact of differences in surface acquisition. For this analysis, n=5 cases of clinical intraoperative data consisting of organ surface and salient feature digitizations from open liver resection were collected and analyzed within our human-to-phantom validation framework. As expected, results indicate that increasing levels of noise in surface acquisition cause registration fidelity to deteriorate. With respect to rigid registration using the raw and resampled data at clinically realistic levels of noise (i.e., a magnitude of 1.5 mm), resampling improved TRE by 21%. In terms of nonrigid registration, registrations using resampled data outperformed the raw data result by 14% at clinically realistic levels and were less susceptible to noise across the range of noise investigated. These results demonstrate the types of analyses our novel human-to-phantom validation framework can provide and indicate the considerable benefits of resampling strategies.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
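The patent's exact code is not given in this abstract; Berger codes are a classic systematic code for exactly this asymmetric setting, detecting all unidirectional (e.g., 1→0-only) error patterns with only about log2(n) check bits. A minimal sketch:

```python
def berger_check(bits):
    """Berger check symbol: the number of 0s among the information bits."""
    return bits.count(0)

def encode(bits, check_width=4):
    """Append the check symbol in binary, MSB first (fixed width for this sketch)."""
    c = berger_check(bits)
    return bits + [(c >> i) & 1 for i in reversed(range(check_width))]

def is_valid(word, check_width=4):
    """Recompute the check from the information bits and compare to the stored one."""
    bits, check = word[:-check_width], word[-check_width:]
    stored = 0
    for b in check:
        stored = (stored << 1) | b
    return stored == berger_check(bits)

word = encode([1, 0, 1, 1, 0, 1])   # two 0s -> check symbol = 2
corrupted = word.copy()
corrupted[0] = 0                    # a single 1 -> 0 upset (the likely direction)
print(is_valid(word), is_valid(corrupted))
```

Any number of 1→0 flips in the information bits raises the recomputed zero count, while 1→0 flips in the check bits can only lower the stored value, so unidirectional errors never cancel out.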
Slowing dynamics in supercooled liquids and other soft materials
NASA Astrophysics Data System (ADS)
Yardimci, Hasan
The slow structural dynamics displayed by supercooled liquids and the transition to an out-of-equilibrium glass state that they engender are among the most challenging issues in condensed matter physics. This thesis reports experimental studies designed to elucidate central aspects of these slow dynamics and the nature of the glass state. The subjects of these studies include glass-forming molecular liquids and other soft materials that have been advanced as model glassy systems, such as clay suspensions and block copolymer micelle solutions. The main experimental techniques employed in these investigations have been dielectric susceptibility and neutron scattering. In the first half of this thesis, we report frequency-dependent dielectric susceptibility measurements characterizing the evolution in the dynamical properties, or aging, of two supercooled liquids, sorbitol and xylitol, quenched below their calorimetric glass transition temperatures, Tg. In addition to the alpha relaxation that tracks the structural dynamics, the susceptibilities of both liquids possess a secondary Johari-Goldstein relaxation at higher frequencies. Following a quench below Tg, the susceptibility slowly approaches equilibrium behavior. For both liquids, features of the Johari-Goldstein relaxation display a dependence on the time since the quench, or aging time, that is very similar to the age dependence of the alpha peak. Implications of these findings for aging in glasses and the nature of the Johari-Goldstein relaxation are discussed. Further investigation of the aging in sorbitol reveals that it displays memory strikingly similar to that of a variety of glassy materials, particularly spin glasses. During a temporary stop in cooling, the susceptibility changes with time due to aging. The memory is revealed upon reheating as the susceptibility retraces these changes.
To investigate the out-of-equilibrium state of the liquid as it displays this memory, we have employed a set of intricate thermal histories by interrupting the heating stage of the cycle and characterizing the subsequent aging. At temperatures above that of the original cooling stop, the liquid enters a state on heating with an effective age that is proportional to the duration of the stop, while at lower temperatures no effective age can be assigned and subtler behavior emerges. These results, which reveal differences with memory displayed by spin glasses, are discussed in the context of the liquid's energy landscape. In the second half of the thesis, we report neutron scattering measurements and dielectric studies on a set of disordered soft materials. (Abstract shortened by UMI.)
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit-error and frame-error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
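The outer/inner interaction described above can be sketched as a loop; the helper signatures below are assumptions for illustration, with toy stand-ins for the two decoders:

```python
def concatenated_decode(received, turbo_iter, rs_decode, max_iters=8):
    """Sketch of the outer/inner decoder interaction (helper signatures assumed).

    `turbo_iter(received)` runs one more inner turbo iteration and yields soft
    outputs; `rs_decode(soft)` attempts reliability-based soft-decision
    Reed-Solomon decoding and returns (ok, frame).
    """
    for _ in range(max_iters):
        soft = turbo_iter(received)   # inner decoder refines the soft information
        ok, frame = rs_decode(soft)   # outer decoder tries soft-decision decoding
        if ok:
            return frame              # outer success terminates inner iterations early
    return None                       # preset maximum reached; decoding failed

# Toy stand-ins: the "turbo decoder" just counts iterations, and the
# "RS decoder" succeeds once enough inner iterations have run.
state = {"iters": 0}
def toy_turbo(_):
    state["iters"] += 1
    return state["iters"]
def toy_rs(soft):
    return (soft >= 3, "decoded-frame" if soft >= 3 else None)

print(concatenated_decode(b"rx", toy_turbo, toy_rs), state["iters"])
```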
Rapid Antimicrobial Susceptibility Testing Using Forward Laser Light Scatter Technology.
Hayden, Randall T; Clinton, Lani K; Hewitt, Carolyn; Koyamatsu, Terri; Sun, Yilun; Jamison, Ginger; Perkins, Rosalie; Tang, Li; Pounds, Stanley; Bankowski, Matthew J
2016-11-01
The delayed reporting of antimicrobial susceptibility testing remains a limiting factor in clinical decision-making in the treatment of bacterial infection. This study evaluates the use of forward laser light scatter (FLLS) to measure bacterial growth for the early determination of antimicrobial susceptibility. Three isolates each (two clinical isolates and one reference strain) of Staphylococcus aureus, Escherichia coli, and Pseudomonas aeruginosa were tested in triplicate using two commercial antimicrobial testing systems, the Vitek2 and the MicroScan MIC panel, to challenge the BacterioScan FLLS. The BacterioScan FLLS showed a high degree of categorical concordance with the commercial methods. Pairwise comparison with each commercial system serving as a reference standard showed 88.9% agreement with MicroScan (two minor errors) and 72.2% agreement with Vitek (five minor errors). FLLS using the BacterioScan system shows promise as a novel method for the rapid and accurate determination of antimicrobial susceptibility. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Management Aspects of Software Maintenance.
1984-09-01
educated in the complex nature of software maintenance to be able to properly evaluate and manage the software maintenance effort. In this... maintenance and improvement may be called "software evolution". The software manager must be educated in the complex nature of software maintenance to be... complaint of error or request for modification is also studied in order to determine what action needs to be taken. 2. Define Objective and Approach:
Stephan, Carl N; Simpson, Ellie K
2008-11-01
With the ever increasing production of average soft tissue depth studies, data are becoming increasingly complex, less standardized, and more unwieldy. So far, no overarching review has been attempted to determine: the validity of continued data collection; the usefulness of the existing data subcategorizations; or if a synthesis is possible to produce a manageable soft tissue depth library. While a principal components analysis would provide the best foundation for such an assessment, this type of investigation is not currently possible because of a lack of easily accessible raw data (first, many studies are narrow; second, raw data are infrequently published and/or stored, and are not always shared). This paper provides an alternate means of investigation, using a hierarchical approach to review and compare the effects of single variables on published mean values for adults whilst acknowledging measurement errors and within-group variation. The results revealed: (i) no clear secular trends at frequently investigated landmarks; (ii) wide variation in soft tissue depth measures between different measurement techniques, irrespective of whether living persons or cadavers were considered; (iii) no clear clustering of non-Caucasoid data far from the Caucasoid means; and (iv) minor differences between males and females. Consequently, the data were pooled across studies using weighted means and standard deviations to cancel out random and opposing study-specific errors, and to produce a single soft tissue depth table with increased sample sizes (e.g., 6786 individuals at pogonion).
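The pooling step uses standard weighted-mean and group-combination formulas; a sketch with hypothetical per-study numbers (not the paper's data):

```python
import math

def pool_studies(studies):
    """Combine per-study (n, mean, sd) triples into one weighted mean and sd.

    Standard group-combination formulas; this is a sketch of the pooling idea,
    not necessarily the paper's exact procedure.
    """
    n_total = sum(n for n, _, _ in studies)
    mean = sum(n * m for n, m, _ in studies) / n_total
    # Combined sum of squares: within-study variation plus between-study spread.
    ss = sum((n - 1) * s ** 2 + n * (m - mean) ** 2 for n, m, s in studies)
    sd = math.sqrt(ss / (n_total - 1))
    return n_total, mean, sd

# Hypothetical depth data (mm) at one landmark from three studies.
n, m, s = pool_studies([(120, 10.2, 1.9), (80, 11.0, 2.3), (45, 10.5, 2.0)])
print(n, round(m, 2), round(s, 2))
```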
Effect of neoclassical toroidal viscosity on error-field penetration thresholds in tokamak plasmas.
Cole, A J; Hegna, C C; Callen, J D
2007-08-10
A model for field-error penetration is developed that includes nonresonant as well as the usual resonant field-error effects. The nonresonant components cause a neoclassical toroidal viscous torque that keeps the plasma rotating at a rate comparable to the ion diamagnetic frequency. The new theory is used to examine resonant error-field penetration threshold scaling in Ohmic tokamak plasmas. Compared to previous theoretical results, we find the plasma is less susceptible to error-field penetration and locking, by a factor that depends on the nonresonant error-field amplitude.
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-09-01
The effect of correcting static vision on sports vision is still not clear. To examine whether sports vision measures (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), with refractive error corrected with eyeglasses (Group B), and with uncorrected refractive error (Group C). A cross-sectional study was conducted. Soft tennis athletes aged 10-13 who had played soft tennis for 2-5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester; a smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. The chi-square test and Kruskal-Wallis test were used to compare the data among the three study groups. A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV also displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, or MV among the three study groups. Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether it was corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups.
Human Error and Commercial Aviation Accidents: A Comprehensive, Fine-Grained Analysis Using HFACS
2006-07-01
Factors (Figure 2. The HFACS framework.) ...practiced and seemingly automatic behaviors is that they are particularly susceptible to attention and/or memory... been included in most error frameworks, the third and final error form, perceptual errors, has received comparatively less attention. No less... operate safely. After all, just as not everyone can play linebacker for their favorite professional football team or be a concert pianist, not
Balasuriya, Lilanthi; Vyles, David; Bakerman, Paul; Holton, Vanessa; Vaidya, Vinay; Garcia-Filion, Pamela; Westdorp, Joan; Sanchez, Christine; Kurz, Rhonda
2017-09-01
An enhanced dose range checking (DRC) system was developed to evaluate prescription error rates in the pediatric intensive care unit and the pediatric cardiovascular intensive care unit. An enhanced DRC system incorporating "soft" and "hard" alerts was designed and implemented. Practitioner responses to alerts for patients admitted to the pediatric intensive care unit and the pediatric cardiovascular intensive care unit were retrospectively reviewed. Alert rates increased from 0.3% to 3.4% after "go-live" (P < 0.001). Before go-live, all alerts were soft alerts. In the period after go-live, 68% of alerts were soft alerts and 32% were hard alerts. Before go-live, providers reduced doses only 1 time for every 10 dose alerts. After implementation of the enhanced computerized physician order entry system, the practitioners responded to soft alerts by reducing doses to more appropriate levels in 24.7% of orders (70/283), compared with 10% (3/30) before go-live (P = 0.0701). The practitioners deleted orders in 9.5% of cases (27/283) after implementation of the enhanced DRC system, as compared with no cancelled orders before go-live (P = 0.0774). Medication orders that triggered a soft alert were submitted unmodified in 65.7% (186/283) as compared with 90% (27/30) of orders before go-live (P = 0.0067). After go-live, 28.7% of hard alerts resulted in a reduced dose, 64% resulted in a cancelled order, and 7.4% were submitted as written. Before go-live, alerts were often clinically irrelevant. After go-live, there was a statistically significant decrease in orders that were submitted unmodified and an increase in the number of orders that were reduced or cancelled.
Kulengowski, B; Ribes, J A; Burgess, D S
2018-04-16
Polymyxins have been revitalized to combat carbapenem-resistant Enterobacteriaceae (CRE). However, evaluating the activity of these agents by traditional broth dilution methods is not practical for busy clinical laboratories. We compared polymyxin B activity using two quantitative susceptibility testing methods, Etest® and broth microdilution (BMD), against CRE isolates from patients at an academic medical centre. Polymyxin B activity against 70 CRE clinical isolates was determined by Etest® according to the manufacturer and by BMD according to CLSI guidelines. Pseudomonas aeruginosa ATCC® 27853 and Escherichia coli NCTC 13846 served as quality control strains. The EUCAST colistin susceptibility breakpoint for Enterobacteriaceae (≤2 mg/L) was used. Essential agreement was the number of isolates with an MIC within one log2 dilution divided by total isolates. Categorical agreement was the number of isolates in the same susceptibility category (susceptible or resistant) divided by total isolates. Major and very major error rates were calculated using the number of susceptible and the number of resistant isolates, respectively, as the denominator. McNemar's test was used to determine a difference in susceptibility between methods. The CRE isolates were primarily Klebsiella spp. (49%) and Enterobacter spp. (36%). Polymyxin B susceptibility was significantly higher by Etest® compared with BMD (97% versus 77%; p < 0.0001). Categorical agreement was 80%, but essential agreement was low (10%). False non-susceptibility was never observed by Etest® (BMD reference), but the very major error rate was high (88%). Etest® reporting of false susceptibility may result in inappropriate antibiotic use and treatment failure clinically. We do not recommend using Etest® for polymyxin B susceptibility testing for routine patient care. Copyright © 2018 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
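The agreement definitions above translate directly into code; a sketch with hypothetical (Etest®, BMD) MIC pairs in mg/L:

```python
import math

def agreement_stats(mic_pairs, breakpoint_mg_l=2.0):
    """Essential/categorical agreement and very major error rate for MIC pairs.

    Each pair is (etest_mic, bmd_mic); BMD is the reference method, and MICs
    <= breakpoint are 'susceptible' (EUCAST colistin breakpoint, as above).
    """
    essential = sum(abs(math.log2(e) - math.log2(b)) <= 1 for e, b in mic_pairs)
    categorical = sum((e <= breakpoint_mg_l) == (b <= breakpoint_mg_l)
                      for e, b in mic_pairs)
    resistant = [(e, b) for e, b in mic_pairs if b > breakpoint_mg_l]
    very_major = sum(e <= breakpoint_mg_l for e, _ in resistant)  # false susceptibility
    n = len(mic_pairs)
    return (essential / n, categorical / n,
            very_major / len(resistant) if resistant else 0.0)

# Hypothetical MIC pairs; the last three simulate Etest® under-calling resistance.
pairs = [(0.5, 0.5), (1.0, 4.0), (2.0, 2.0), (0.25, 8.0), (1.0, 2.0)]
res = agreement_stats(pairs)
print(res)
```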
Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares
Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai
2013-01-01
Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
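The "dynamic" part of DPLS comes from stacking lagged inputs into the regressor matrix before fitting a PLS model; a minimal sketch of that windowing step (a PLS fit, e.g. scikit-learn's PLSRegression, would then be trained on the result):

```python
def lagged_matrix(inputs, lag):
    """Build the dynamic regressor matrix for a DPLS model.

    Row t stacks the current and the previous `lag` input vectors, so the
    regression can capture process dynamics; the humidity target is aligned
    with the last sample of each window.
    """
    rows = []
    for t in range(lag, len(inputs)):
        row = []
        for k in range(t - lag, t + 1):
            row.extend(inputs[k])
        rows.append(row)
    return rows

# Toy sequence with two measured variables per sample; lag=2 stacks 3 samples
# (6 regressors) per row. The paper selects lag=30 by comparing RMSE.
X = [[float(t), float(t) * 0.1] for t in range(6)]
Z = lagged_matrix(X, lag=2)
print(len(Z), len(Z[0]))
```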
Auxiliary variables for numerically solving nonlinear equations with softly broken symmetries.
Olum, Ken D; Masoumi, Ali
2017-06-01
General methods for solving simultaneous nonlinear equations work by generating a sequence of approximate solutions that successively improve a measure of the total error. However, if the total error function has a narrow curved valley, the available techniques tend to find the solution after a very large number of steps, if ever. The solver first converges rapidly to the valley, but once there it converges extremely slowly to the solution. In this paper we show that in the specific, physically important case where these valleys are the result of a softly broken symmetry, the solution can often be found much more quickly by adding the generators of the softly broken symmetry as auxiliary variables. This makes the number of variables greater than the number of equations, and hence there is a family of solutions, any one of which is acceptable. We present a procedure for finding solutions in this case and apply it to several simple examples and to an important problem in the physics of false vacuum decay. We also provide a Mathematica package that implements Powell's hybrid method with the generalization to allow more variables than equations.
In-flight performance of pulse-processing system of the ASTRO-H/Hitomi soft x-ray spectrometer
NASA Astrophysics Data System (ADS)
Ishisaki, Yoshitaka; Yamada, Shinya; Seta, Hiromi; Tashiro, Makoto S.; Takeda, Sawako; Terada, Yukikatsu; Kato, Yuka; Tsujimoto, Masahiro; Koyama, Shu; Mitsuda, Kazuhisa; Sawada, Makoto; Boyce, Kevin R.; Chiao, Meng P.; Watanabe, Tomomi; Leutenegger, Maurice A.; Eckart, Megan E.; Porter, Frederick Scott; Kilbourne, Caroline Anne
2018-01-01
We summarize the initial in-orbit performance of the pulse shape processor (PSP) of the soft x-ray spectrometer instrument onboard ASTRO-H (Hitomi). Event formats, kinds of telemetry, and the pulse-processing parameters are described, and the parameter settings in orbit are listed. The PSP was powered on 2 days after launch, and the event threshold was lowered in orbit. The PSP worked well in orbit, and there was neither a memory error nor a SpaceWire communication error until the break-up of the spacecraft. Time assignment, electrical crosstalk, and the event screening criteria are studied. It is confirmed that the event processing rate at 100% central processing unit load is ˜200 c/s/array, compliant with the requirement on the PSP.
21 CFR 520.88g - Amoxicillin trihydrate and clavulanate potassium film-coated tablets.
Code of Federal Regulations, 2011 CFR
2011-04-01
... spp., E. coli, and Pasteurella spp. Also, treatment of urinary tract infections (cystitis) due to susceptible strains of E. coli. (iii) Limitations. Skin and soft tissue infections: abscesses, cellulitis...-lactamase S. aureus, Staphylococcus spp., Streptococcus spp., and Escherichia coli. Treatment of periodontal...
Absolute method of measuring magnetic susceptibility
Thorpe, A.; Senftle, F.E.
1959-01-01
An absolute method of standardization and measurement of the magnetic susceptibility of small samples is presented which can be applied to most techniques based on the Faraday method. The fact that the susceptibility is a function of the area under the curve of sample displacement versus magnet-sample distance offers a simple method of measuring the susceptibility without recourse to a standard sample. Typical results on a few substances are compared with reported values, and an error of less than 2% can be achieved. © 1959 The American Institute of Physics.
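The area-under-the-curve relation lends itself to simple numerical integration. A hypothetical sketch (the displacement curve and the calibration constant below are invented for illustration; only the trapezoidal integration of displacement versus distance mirrors the method):

```python
import numpy as np

# Hypothetical displacement-vs-distance curve from a Faraday-type measurement
z = np.linspace(0.0, 10.0, 201)          # magnet-sample distance (mm)
d = np.exp(-((z - 5.0) ** 2) / 2.0)      # sample displacement (arbitrary units)

# Susceptibility is proportional to the area under this curve
area = np.sum((d[1:] + d[:-1]) * np.diff(z)) / 2.0   # trapezoidal rule
k_cal = 1.0e-6                           # illustrative proportionality constant
chi = k_cal * area
```

For this Gaussian-shaped curve the area should be close to sqrt(2*pi) ≈ 2.5066, giving a quick sanity check on the integration.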
Bronson, N R
1984-05-01
A new A-mode biometry system for determining axial length measurements of the eye has been developed that incorporates a soft-membrane transducer. The soft transducer decreases the risk of indenting the cornea with the probe, which would result in inaccurate measurements. A microprocessor evaluates echo patterns and determines whether or not axial alignment has been obtained, eliminating possible user error. The new A-scan requires minimal user skill and can be used successfully by both physician and technician.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.
2012-01-01
Modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter and representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.
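One statistic such closed-loop soft-decision measurements enable is an in-service SNR estimate taken directly from the receiver's soft symbols. A hedged sketch for BPSK: the moment-based estimator below is a common textbook technique, not necessarily the SDA's internal method, and the signals are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
true_snr_db = 6.0
n = 200_000

symbols = rng.integers(0, 2, n) * 2.0 - 1.0          # +/-1 BPSK symbols
sigma = 10 ** (-true_snr_db / 20)                    # noise std for the target SNR
soft = symbols + sigma * rng.standard_normal(n)      # receiver soft decisions

# Moment-based estimate: fold the constellation, then split signal/noise power
a = np.mean(np.abs(soft))              # estimated amplitude (biased low at low SNR)
noise_var = np.mean(soft**2) - a**2
est_snr_db = 10 * np.log10(a**2 / noise_var)
```

At moderate SNR the bias of the folded-mean estimator is a fraction of a dB; a closed-loop analyzer can correct it because the transmitted symbols are known.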
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40 mm. Such location errors could be reduced to less than 17 mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
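The direction part of these functional calibration techniques ultimately comes down to extracting an axis from estimated rotation matrices. A minimal, noise-free sketch of that kernel (Rodrigues' formula plus the skew-symmetric extraction); the full mean-FHA or SARA pipelines, and STA handling, are beyond this snippet:

```python
import numpy as np

def rot(axis, angle):
    """Rotation matrix about a unit axis (Rodrigues' formula)."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * K @ K

def axis_from_rotation(R):
    """Axis from the skew-symmetric part of R (valid away from angles 0 and pi)."""
    v = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return v / np.linalg.norm(v)

R = rot([1.0, 2.0, 2.0], 0.7)
a = axis_from_rotation(R)
```

With marker noise and STA, each frame's estimated rotation gives a slightly different axis; the functional methods differ precisely in how they average or jointly fit these per-frame estimates.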
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER = 1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
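The principle behind manipulating soft-decision data can be shown with a much simpler code: when the pulsed-RFI timing is known, zeroing (erasing) the affected log-likelihood ratios before decoding keeps the interferer from dominating the decoder's metric. The sketch below uses a 7x repetition code as a stand-in for the AR4JA LDPC code, with an assumed periodic RFI pulse train; it illustrates the erasure idea only, not the paper's specific reshuffle scheme:

```python
import numpy as np

rng = np.random.default_rng(2)
n_rep, n_bits = 7, 2000
bits = rng.integers(0, 2, n_bits)
tx = np.repeat(bits * 2.0 - 1.0, n_rep)            # BPSK, 7x repetition "code"

sigma = 0.8
rx = tx + sigma * rng.standard_normal(tx.size)

# Pulsed RFI: 3 hot samples out of every 10 (timing assumed known to the receiver)
rfi = (np.arange(tx.size) % 10) < 3
rx = rx + rfi * 5.0 * rng.standard_normal(tx.size)

llr = 2.0 * rx / sigma**2                          # soft decisions for the decoder

def decode(llr):
    """Repetition decoder: sum the per-sample LLRs and take the sign."""
    return (llr.reshape(n_bits, n_rep).sum(axis=1) > 0).astype(int)

err_naive = float(np.mean(decode(llr) != bits))
err_scrubbed = float(np.mean(decode(np.where(rfi, 0.0, llr)) != bits))
```

Erasure helps here because every codeword retains clean samples; a contiguous pulse covering whole codewords would instead call for interleaving before this step.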
Cognitive errors: thinking clearly when it could be child maltreatment.
Laskey, Antoinette L
2014-10-01
Cognitive errors have been studied in a broad array of fields, including medicine. The more that is understood about how the human mind processes complex information, the more it becomes clear that certain situations are particularly susceptible to less than optimal outcomes because of these errors. This article explores how some of the known cognitive errors may influence the diagnosis of child abuse, resulting in both false-negative and false-positive diagnoses. Suggested remedies for these errors are offered. Copyright © 2014 Elsevier Inc. All rights reserved.
Justification of Estimates for Fiscal Year 1983 Submitted to Congress.
1982-02-01
hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid- lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft error" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power
Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.
Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B
2018-02-01
Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study of simultaneous PET/MR imaging was then carried out in five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging.
This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017. Online supplemental material is available for this article.
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
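The syndrome-based localization step has a compact linear-algebra core: for an overcomplete expansion y = Fx, projecting the received vector onto the left null space of F yields a syndrome that depends only on the error, and a single impulse is located by matching the syndrome against the projector's columns. A toy sketch with a generic random frame (the paper's OFB structure, windowed hypothesis testing, and VLC/SISO decoders are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)
k, n = 4, 8                          # 4 source samples expanded to 8 (redundancy 2)
F = rng.standard_normal((n, k))      # generic frame standing in for the OFB
Fp = np.linalg.pinv(F)
P = np.eye(n) - F @ Fp               # projector onto the syndrome (left null) space

x = rng.standard_normal(k)
e = np.zeros(n)
e[5] = 3.0                           # single impulse error at position 5
r = F @ x + e                        # received, corrupted expansion

s = P @ r                            # syndrome: P @ (F @ x) == 0, so s == P @ e
scores = np.abs(P.T @ s) / np.linalg.norm(P, axis=0)   # match each column of P
loc = int(np.argmax(scores))
amp = (P[:, loc] @ s) / (P[:, loc] @ P[:, loc])        # least-squares amplitude
e_hat = np.zeros(n)
e_hat[loc] = amp
x_hat = Fp @ (r - e_hat)             # pseudoinverse receiver on corrected signal
```

By Cauchy-Schwarz, the normalized correlation is maximized at the true position whenever no two projector columns are parallel, which holds for a generic frame.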
Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.
1999-01-01
Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Jung, Sung-ah; Choi, Yoon Jeong; Lee, Dong-Won; Kim, Kyung-Ho; Chung, Chooryung J
2015-05-01
To investigate the prevalence of distinguishable soft tissue scarring after the removal of temporary anchorage devices (TADs) such as orthodontic miniscrews and to analyze the factors associated with scar formation. The prevalence of soft tissue scarring in 66 patients (202 miniscrew removal sites) was clinically investigated at least 1 year after miniscrew removal. To determine the clinical factors associated with soft tissue scar formation, miniscrew stability; host factors including age, gender, and gingival biotype; and miniscrew-related factors such as insertion site, vertical position, and insertion period were evaluated. The prevalence of a distinguishable scar remaining at least 1 year after miniscrew removal was 44.6%. Patients with flat gingiva showed a significantly higher prevalence of soft tissue scar formation than did those with pronounced scalloped gingiva (P < .05). Maxillary buccal removal sites showed a significantly higher prevalence of soft tissue scar formation than did those in the mandible or palatal slope (P < .05). Miniscrew sites at the alveolar mucosa showed a significantly lower prevalence of soft tissue scar formation than did those in the mucogingival junction or the attached gingiva (P < .01). The prevalence of distinguishable scarring after miniscrew removal was fairly high. On the basis of our results, patients with flat gingiva and buccal interdental gingival insertion sites are more susceptible to scar formation.
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end-condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates for perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple-burn orbital maneuvers. Moreover, the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard-constraint formulation is a numerically well-behaved subcase. As a result, convergence properties have been greatly improved.
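The soft-constraint idea, burn time plus a heavily weighted quadratic terminal penalty, can be seen in a scalar toy problem (all numbers below are illustrative; this is not the paper's multiburn formulation). As the weight w grows, the optimum approaches the hard-constraint burn time while the residual end-condition error shrinks like 1/(2wa):

```python
import numpy as np

# One burn at constant acceleration a must deliver a target delta-v.
a, target_dv, w = 0.5, 2.0, 100.0     # accel, required delta-v, penalty weight

def cost(T):
    err = a * T - target_dv           # end-condition (terminal) error
    return T + w * err**2             # burn time + quadratic soft-constraint penalty

T_grid = np.linspace(0.0, 10.0, 100001)
T_star = float(T_grid[np.argmin(cost(T_grid))])

# Stationarity: 1 + 2*w*a*(a*T - target_dv) = 0
T_exact = (target_dv - 1.0 / (2 * w * a)) / a
```

The hard-constraint answer is T = target_dv / a = 4.0; with w = 100 the soft optimum sits just below it, trading a tiny terminal error for burn time, which is the graceful-degradation behavior the abstract describes.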
Aigner, Maria; Erbeznik, Thomas; Gschwentner, Martin; Lass-Flörl, Cornelia
2017-08-01
Candida species were tested for susceptibility to caspofungin, anidulafungin, and micafungin in order to evaluate the roles of Etest and Sensititre YeastOne in antifungal susceptibility testing for daily routines and to survey resistance. A total of 104 Candida species isolates detected from blood cultures were investigated. With EUCAST broth microdilution as the reference method, essential agreement (EA), categorical agreement (CA), very major errors (VME), major errors (ME), and minor (MIN) errors were assessed by reading MICs at 18, 24, and 48 h. By use of EUCAST broth microdilution and species-specific clinical breakpoints (CBPs), echinocandin resistance was not detected during the study period. Using EUCAST CBPs, MIC readings at 24 h for the Etest and Sensititre YeastOne resulted in CA levels of 99% and 93% for anidulafungin and 99% and 97% for micafungin. Using revised CLSI CBPs for caspofungin, CA levels were 92% and 99% for Etest and Sensititre YeastOne. The Etest proved an excellent, easy-to-handle alternative method for testing susceptibility to anidulafungin and micafungin. Due to misclassifications, the Etest is less suitable for testing susceptibility to caspofungin (8% of isolates falsely tested resistant). The CA levels of Sensititre YeastOne were 93% and 97% for anidulafungin and micafungin (24 h) by use of EUCAST CBPs and increased to 100% for both antifungals if CLSI CBPs were applied and to 100% and 99% if Sensititre YeastOne epidemiological cutoff values (ECOFFs) were applied. No one echinocandin could be demonstrated to be superior to another in vitro. Since resistance was lacking among our Candida isolates, we cannot derive any recommendation from accurate resistance detection by the Etest and Sensititre YeastOne. Copyright © 2017 American Society for Microbiology.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to SEU errors in its main memory. The other is a delay-tolerant network (DTN) system simulator. It simulates the power dissipation of the wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU-induced failure rate of the embedded computer with 1 Gbit of MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11 of its original value. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
Shipton, Ceri; Roberts, Patrick; Archer, Will; Armitage, Simon J; Bita, Caesar; Blinkhorn, James; Courtney-Mustaphi, Colin; Crowther, Alison; Curtis, Richard; Errico, Francesco d'; Douka, Katerina; Faulkner, Patrick; Groucutt, Huw S; Helm, Richard; Herries, Andy I R; Jembe, Severinus; Kourampas, Nikos; Lee-Thorp, Julia; Marchant, Rob; Mercader, Julio; Marti, Africa Pitarch; Prendergast, Mary E; Rowson, Ben; Tengeza, Amini; Tibesasa, Ruth; White, Tom S; Petraglia, Michael D; Boivin, Nicole
2018-06-05
The originally published version of this Article contained an error in Fig. 3, whereby an additional unrelated graph was overlaid on top of the magnetic susceptibility plot. Furthermore, the Article title contained an error in the capitalisation of 'Stone Age'. Both of these errors have now been corrected in both the PDF and HTML versions of the Article.
Joyanes, Providencia; del Carmen Conejo, María; Martínez-Martínez, Luis; Perea, Evelio J.
2001-01-01
VITEK 2 is a new automatic system for the identification and susceptibility testing of the most clinically important bacteria. In the present study 198 clinical isolates, including Pseudomonas aeruginosa (n = 146), Acinetobacter baumannii (n = 25), and Stenotrophomonas maltophilia (n = 27) were evaluated. Reference susceptibility testing of cefepime, cefotaxime, ceftazidime, ciprofloxacin, gentamicin, imipenem, meropenem, piperacillin, tobramycin, levofloxacin (only for P. aeruginosa), co-trimoxazole (only for S. maltophilia), and ampicillin-sulbactam and tetracycline (only for A. baumannii) was performed by microdilution (NCCLS guidelines). The VITEK 2 system correctly identified 91.6, 100, and 76% of P. aeruginosa, S. maltophilia, and A. baumannii isolates, respectively, within 3 h. The respective percentages of essential agreement (to within 1 twofold dilution) for P. aeruginosa and A. baumannii were 89.0 and 88.0% (cefepime), 91.1 and 100% (cefotaxime), 95.2 and 96.0% (ceftazidime), 98.6 and 100% (ciprofloxacin), 88.4 and 100% (gentamicin), 87.0 and 92.0% (imipenem), 85.0 and 88.0% (meropenem), 84.2 and 96.0% (piperacillin), and 97.3 and 80% (tobramycin). The essential agreement for levofloxacin against P. aeruginosa was 86.3%. The percentages of essential agreement for ampicillin-sulbactam and tetracycline against A. baumannii were 88.0 and 100%, respectively. Very major errors for P. aeruginosa (resistant by the reference method, susceptible with the VITEK 2 system [resistant to susceptible]) were noted for cefepime (0.7%), cefotaxime (0.7%), gentamicin (0.7%), imipenem (1.4%), levofloxacin (2.7%), and piperacillin (2.7%) and, for one strain of A. baumannii, for imipenem. Major errors (susceptible to resistant) were noted only for P. aeruginosa and cefepime (2.0%), ceftazidime (0.7%), and piperacillin (3.4%). Minor errors ranged from 0.0% for piperacillin to 22.6% for cefotaxime against P. aeruginosa and from 0.0% for piperacillin and ciprofloxacin to 20.0% for cefepime against A. baumannii. The VITEK 2 system provided co-trimoxazole MICs only for S. maltophilia; no very major or major errors were obtained for co-trimoxazole against this species. It is concluded that the VITEK 2 system allows the rapid identification of S. maltophilia and most P. aeruginosa and A. baumannii isolates. The VITEK 2 system can perform reliable susceptibility testing of many of the antimicrobial agents used against P. aeruginosa and A. baumannii. It would be desirable if new versions of the VITEK 2 software were able to determine MICs and the corresponding clinical categories of agents active against S. maltophilia. PMID:11526158
Faghri, Jamshid; Zandi, Alireza; Peiman, Alireza; Fazeli, Hossein; Esfahani, Bahram Nasr; Safaei, Hajieh Ghasemian; Hosseini, Nafiseh Sadat; Mobasherizadeh, Sina; Sedighi, Mansour; Burbur, Samaneh; Oryan, Golfam
2016-03-01
The aim was to study antibiotic susceptibility and to identify coagulase-negative Staphylococcus (CoNS) species, based on tuf gene sequencing, from keratitis following soft contact lens use in Isfahan, Iran, in 2013. This study examined 77 keratitis cases. The samples were cultured, CoNS were isolated by phenotypic tests, and in vitro sensitivity testing was done by the Kirby-Bauer disk diffusion susceptibility method. Thirty-eight isolates were identified as CoNS. In this study, 27 (71.1%), 21 (55.3%), and 16 (42.1%) were resistant to penicillin, erythromycin, and tetracycline, respectively. One hundred percent of isolates were sensitive to gentamicin, and 36 (94.7%) and 33 (86.8%) of isolates were sensitive to chloramphenicol and ciprofloxacin, respectively. Also, 7 (18.4%) were resistant to cefoxitin. Analysis of the tuf gene proved discriminative and sensitive: all the isolates were identified with 99.0% similarity to reference strains, and Staphylococcus epidermidis had the highest prevalence among the species. Results of this study showed that CoNS are the most common agents causing contact lens-associated microbial keratitis and that tuf gene sequencing analysis is a reliable method for distinguishing CoNS species. Gentamicin, chloramphenicol, and ciprofloxacin were more effective than the other antibacterial agents against these bacteria.
TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.
2007-01-01
Initial total ionizing dose (TID) and single-event heavy ion test results are presented for an unhardened commercial flash memory fabricated with 63 nm technology. The parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10^-12 errors/bit-day, for the Adams Ten Percent Worst Case Environment.
Implementing smart infusion pumps with dose-error reduction software: real-world experiences.
Heron, Claire
2017-04-27
Intravenous (IV) drug administration, especially with 'smart pumps', is complex and susceptible to errors. Although errors can occur at any stage of the IV medication process, most errors occur during reconstitution and administration. Dose-error reduction software (DERS) loaded on to infusion pumps incorporates a drug library with predefined upper and lower drug dose limits and infusion rates, which can reduce IV infusion errors. Although this is an important advance for patient safety at the point of care, uptake is still relatively low. This article discusses the challenges and benefits of implementing DERS in clinical practice as experienced by three UK trusts.
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. A standard soft set can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. Meanwhile, these applications become worthless if the Boolean information system contains missing data due to error, security, or mishandling. Little research has focused on handling partially incomplete soft sets, and none of it achieves a high accuracy rate in predicting missing data. The data filling approach for incomplete soft sets (DFIS) has been shown to perform best among all previous approaches; however, accuracy remains its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach, which used probability, we focus on the reliability of associations among parameters in the soft set. Experimental results on small datasets, four UCI benchmark datasets, and causality workbench lung cancer (LUCAP2) data show that ADFIS achieves better accuracy than DFIS.
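The flavor of association-based filling can be conveyed in a few lines: for a missing Boolean entry, find the parameter (column) whose known values agree, directly or invertedly, most strongly with the incomplete parameter, and copy (or flip) its value in that row. This is a simplified, hypothetical illustration in the spirit of DFIS/ADFIS, not the published algorithm:

```python
import numpy as np

# Boolean-valued information system: rows = objects, columns = parameters; -1 = missing
U = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [0, 0, 1, 1],
    [1, -1, 0, 0],   # one missing entry in parameter 1
])

def fill_missing(U):
    U = U.copy()
    for i, j in zip(*np.where(U == -1)):
        known = U[:, j] != -1
        best, best_score, best_flip = None, -1.0, False
        for j2 in range(U.shape[1]):
            if j2 == j:
                continue
            m = known & (U[:, j2] != -1)
            agree = np.mean(U[m, j] == U[m, j2])   # association between parameters
            score = max(agree, 1 - agree)          # direct or inverted association
            if score > best_score:
                best, best_score, best_flip = j2, score, agree < 0.5
        v = U[i, best]
        U[i, j] = (1 - v) if best_flip else v      # copy, or flip if inverted
    return U

filled = fill_missing(U)
```

Here parameter 0 agrees perfectly with parameter 1 on the known rows, so the missing entry is filled from it.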
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
Advanced composite aircraft designs include fault-tolerant computer-based digital control systems with high reliability requirements for adverse as well as optimum operating environments. Since aircraft penetrate intense electromagnetic fields during thunderstorms, onboard computer systems may be subjected to field-induced transient voltages and currents resulting in functional error modes which are collectively referred to as digital system upset. A methodology was developed for assessing the upset susceptibility of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied via tests which involved the random input of analog transients, which model lightning-induced signals, onto interface lines of an 8080-based microcomputer, from which upset error data were recorded. The application of Markov modeling to upset susceptibility estimation is discussed and the development of a stochastic model is presented.
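Markov modeling of upset susceptibility can be schematized as a two-state chain, nominal versus upset, whose stationary distribution gives the long-run fraction of time spent in the upset state. The transition probabilities below are purely illustrative, not estimates from the 8080 experiments:

```python
import numpy as np

# States: 0 = nominal, 1 = upset. Per-step transition probabilities (illustrative)
p_upset, p_recover = 1e-4, 0.2
P = np.array([[1 - p_upset, p_upset],
              [p_recover, 1 - p_recover]])

# Stationary distribution: left eigenvector of P for eigenvalue 1
w, V = np.linalg.eig(P.T)
pi = np.real(V[:, np.argmax(np.real(w))])
pi = pi / pi.sum()

analytic = p_upset / (p_upset + p_recover)   # closed form for a 2-state chain
```

In a real susceptibility estimate, p_upset would itself be a function of the field intensity along the flight path, which is where the lightning-environment data enter.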
In vitro susceptibility of Pseudomonas species to carbenicillin and trimethoprim-sulfamethoxazole.
Hill, S F; Haldane, D J; Ngui-Yen, J H; Smith, J A
1985-01-01
We compared susceptibility tests of 47 Pseudomonas aeruginosa isolates and 40 Pseudomonas species to carbenicillin and trimethoprim-sulfamethoxazole by the MS-2 and Sceptor systems and agar dilution. The major and very major errors encountered in these tests in the MS-2 and Sceptor systems raise doubts about the accuracy of these methods for testing P. aeruginosa and confirm that they should not be used for testing the susceptibility of Pseudomonas species to the two drugs tested. PMID:3930567
A novel genome mutation in wheat increases Fusarium Head Blight resistance
USDA-ARS?s Scientific Manuscript database
We sought to validate an FHB resistance QTL reported to be on chromosome 2A in the soft red winter wheat cultivar Freedom by introducing it into the highly susceptible rapid maturing dwarf wheat Apogee. Marker-assisted backcrossing with an SSR marker reported to be associated with this QTL was under...
Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations
Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo
2016-01-01
In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of postoperative facial distortion. However, it is difficult to simulate the soft tissue behaviors affected by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the evaluation of the elastic parameters included in the RB model. Therefore, the soft facial tissue deformation can be predicted from biomechanical properties and the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. It also demonstrates that the more accurate biomechanical information the model has, the better prediction performance it can achieve. PMID:27717593
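A GRNN is, at its core, a Gaussian-kernel weighted average of training targets (Nadaraya-Watson regression). A self-contained sketch on synthetic one-dimensional data standing in for the stress-to-deformation mapping (the FEM stress fields and patient data are not reproduced here):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.05):
    """GRNN prediction = Gaussian-kernel weighted average of training targets."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma**2))
    return (w @ y_train) / w.sum(axis=1)

# Illustrative stand-ins: 1-D "stress" values mapped to "deformation"
rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, (200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.standard_normal(200)

Xq = np.array([[0.25], [0.75]])
pred = grnn_predict(X, y, Xq)
```

The kernel width sigma plays the role of the GRNN's single smoothing parameter; in the paper's setting the inputs would be multidimensional nodal stresses rather than a scalar.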
Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment
NASA Technical Reports Server (NTRS)
Hancock, Thomas M., III
1999-01-01
This paper details the scientific objectives, experiment design, data-collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form a soft-event upset can cause software exceptions, unexpected events, initiate spacecraft safing (ending data collection), or corrupt fault-protection and error-recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.
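Detecting a bitflip in a static memory experiment reduces to comparing pre- and post-flight memory images bit by bit. A hypothetical sketch (the abstract does not describe SCMIT's actual detection procedure):

```python
def count_bitflips(before: bytes, after: bytes) -> int:
    """Count single-bit upsets by XOR-ing the pre- and post-flight
    memory images and summing the set bits (population count)."""
    assert len(before) == len(after), "images must be the same size"
    return sum(bin(a ^ b).count("1") for a, b in zip(before, after))

# 0x0F vs 0x0E differ in one bit; 0xFF vs 0x00 differ in eight
flips = count_bitflips(bytes([0x0F, 0xFF]), bytes([0x0E, 0x00]))  # -> 9
```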
Vaccine-associated sarcomas in cats: a unique cancer model.
McNiel, E A
2001-01-01
Epidemiologic evidence supports a relationship between vaccination of cats against rabies and feline leukemia virus and the development of soft tissue sarcomas at the site of administration. These tumors are locally invasive and histologically aggressive. As with high-grade soft tissue sarcoma in humans, combination treatment with radiation therapy and surgery provides optimum tumor control. Feline vaccine-associated sarcoma has become a difficult issue for the veterinary profession for legal, ethical, and clinical reasons. Although most research efforts have focused on therapeutic intervention, this tumor has great potential to provide an informative model for carcinogenesis and genetic susceptibility applicable to cancer in all species, including humans.
Salazar-Ospina, Lorena; Jiménez, Judy Natalia
2017-09-21
Staphylococcus aureus is responsible for a large number of infections in the pediatric population; however, information about the behavior of such infections in this population is limited. The aim of the study was to describe the clinical, epidemiological, and molecular characteristics of infections caused by methicillin-susceptible and methicillin-resistant S. aureus (MSSA-MRSA) in a pediatric population. A cross-sectional descriptive study of patients from birth to 14 years of age from three high-complexity institutions was conducted (2008-2010). All patients infected with methicillin-resistant S. aureus and a representative sample of patients infected with methicillin-susceptible S. aureus were included. Clinical and epidemiological information was obtained from medical records, and molecular characterization included spa typing, pulsed-field gel electrophoresis (PFGE), and multilocus sequence typing (MLST). In addition, staphylococcal cassette chromosome mec (SCCmec) and virulence factor genes were detected. A total of 182 patients, 65 with methicillin-susceptible S. aureus infections and 117 with methicillin-resistant S. aureus infections, were included in the study, 41.4% of the patients being under 1 year of age. The most frequent infections were of the skin and soft tissues. Backgrounds such as stays in day care centers and previous use of antibiotics were more common in patients with methicillin-resistant S. aureus infections (p≤0.05). Sixteen clonal complexes were identified, and methicillin-susceptible S. aureus strains were more diverse. The most common cassette was SCCmec IVc (70.8%), which was linked to Panton-Valentine leukocidin (pvl). In contrast with other locations, a predominance of infections in children under 1 year of age was observed in the city; this emphasizes the importance of epidemiological knowledge at the local level. Copyright © 2017 Sociedade Brasileira de Pediatria. Published by Elsevier Editora Ltda. All rights reserved.
An undulator based soft x-ray source for microscopy on the Duke electron storage ring
NASA Astrophysics Data System (ADS)
Johnson, Lewis Elgin
1998-09-01
This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion-device and soft x-ray beamline physics and technology are discussed in detail. The Duke/NIST undulator is a 3.64-m long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall-probe measurements of the magnetic field distribution of the undulator revealed field errors of more than 0.80%. Initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications, we have re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K has served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from magnetic field data, we have computed the spectral output given the interaction of the Duke storage ring electron beam and the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety, and funding, we have also constructed a unique front-end beamline for the undulator. The front end has been designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation, and has an alternative mode of operation that transmits the 3rd and 5th harmonics. This compact system also allows for the extraction of some of the bend-magnet-produced synchrotron and transition radiation from the storage ring. As with any well-designed front-end system, it also provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and a scanning-wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator, and verified that it can be used without perturbing the orbit of the beam.
Thomas, Geethu E.; Geetha, Kiran A.; Augustine, Lesly; Mamiyil, Sabu; Thomas, George
2016-01-01
Mode of reproduction is generally considered to have long-range evolutionary implications for population survival. Because sexual reproduction produces genetically diverse genotypes, this mode of reproduction is predicted to positively influence the success potential of offspring in the evolutionary arms race with parasites (Red Queen), whereas, without segregation and recombination, obligate asexual multiplication may push a species into extinction through the steady accumulation of deleterious mutations (Muller's ratchet). However, the extent of linearity between reproductive strategies, genetic diversity, and population fitness, and the contributions of different breeding strategies to population fitness, are yet to be understood clearly. The genus Zingiber, belonging to the pan-tropic family Zingiberaceae, represents a good system for studying the contributions of different breeding behaviors to genetic diversity and population fitness, as this genus comprises species with contrasting breeding systems. In this study, we analyzed breeding behavior, amplified fragment length polymorphism diversity, and response to the soft-rot pathogen Pythium aphanidermatum in 18 natural populations of three wild Zingiber spp.: Z. neesanum, Z. nimmonii, and Z. zerumbet, together with the obligately asexual cultivated congener, ginger (Z. officinale). Ginger showed an exceptionally narrow genetic base, and, adding to this, all the tested cultivars were uniformly susceptible to soft-rot. Concordant with the postulates of Muller's ratchet, background selection may be continuously pushing ginger into the ancestral state, rendering it inefficient in host-pathogen coevolution. Z. neesanum and Z. nimmonii populations were sexual and genetically diverse; however, contrary to Red Queen expectations, the populations were highly susceptible to soft-rot. Z. zerumbet showed a hemiclonal breeding behavior. The populations inhabiting the forest understory were large and continuous, sexual and genetically diverse, but were susceptible, whereas populations inhabiting the revenue land were fragmented and monoclonal, but were resistant. It may be possible that, when genetic recombination comes at a premium due to the genetic constraints imparted by habitat fragmentation or pathogen pressure, Z. zerumbet PMID:28066470
Quantum error-correction failure distributions: Comparison of coherent and stochastic error models
NASA Astrophysics Data System (ADS)
Barnes, Jeff P.; Trout, Colin J.; Lucarelli, Dennis; Clader, B. D.
2017-06-01
We compare failure distributions of quantum error correction circuits for stochastic errors and coherent errors. We utilize a fully coherent simulation of fault-tolerant quantum error correcting circuits for the distance-3 (d = 3) Steane and surface codes. We find that the output distributions are markedly different for the two error models, showing that no simple mapping between them exists. Coherent errors create very broad, heavy-tailed failure distributions. This suggests that they are susceptible to outlier events and that mean statistics, such as pseudothreshold estimates, may not provide the key figure of merit. This provides further statistical insight into why coherent errors can be so harmful for quantum error correction. These output probability distributions may also provide a useful metric when optimizing quantum error correcting codes and decoding procedures for purely coherent errors.
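The qualitative difference between the two error models can be seen even for a single qubit: coherent over-rotations add in amplitude, while stochastic errors add in probability. A toy comparison, an illustrative sketch and not the paper's fault-tolerant simulation:

```python
import numpy as np

theta = 0.01                 # coherent over-rotation per gate (radians)
p = np.sin(theta / 2) ** 2   # per-gate error probability of the stochastic
                             # (Pauli-twirled) counterpart of that rotation
n = 100                      # gates applied in sequence

# Coherent: rotation angles add, so failure probability grows ~quadratically in n
fail_coherent = np.sin(n * theta / 2) ** 2
# Stochastic: independent errors; probability that at least one occurred (~n*p)
fail_stochastic = 1 - (1 - p) ** n
```

For these numbers the coherent failure probability is roughly two orders of magnitude larger, which mirrors the paper's observation that coherent errors produce far heavier failure tails than their stochastic counterparts.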
Measurement Error Calibration in Mixed-Mode Sample Surveys
ERIC Educational Resources Information Center
Buelens, Bart; van den Brakel, Jan A.
2015-01-01
Mixed-mode surveys are known to be susceptible to mode-dependent selection and measurement effects, collectively referred to as mode effects. The use of different data collection modes within the same survey may reduce selectivity of the overall response but is characterized by measurement errors differing across modes. Inference in sample surveys…
ERIC Educational Resources Information Center
Mirandola, C.; Paparella, G.; Re, A. M.; Ghetti, S.; Cornoldi, C.
2012-01-01
Enhanced semantic processing is associated with increased false recognition of items consistent with studied material, suggesting that children with poor semantic skills could produce fewer false memories. We examined whether memory errors differed in children with Attention Deficit/Hyperactivity Disorder (ADHD) and controls. Children viewed 18…
Genetic susceptibility to bone and soft tissue sarcomas: a field synopsis and meta-analysis.
Benna, Clara; Simioni, Andrea; Pasquali, Sandro; De Boni, Davide; Rajendran, Senthilkumar; Spiro, Giovanna; Colombo, Chiara; Virgone, Calogero; DuBois, Steven G; Gronchi, Alessandro; Rossi, Carlo Riccardo; Mocellin, Simone
2018-04-06
The genetic architecture of bone and soft tissue sarcomas susceptibility is yet to be elucidated. We aimed to comprehensively collect and meta-analyze the current knowledge on genetic susceptibility in these rare tumors. We conducted a systematic review and meta-analysis of the evidence on the association between DNA variation and risk of developing sarcomas through searching PubMed, The Cochrane Library, Scopus and Web of Science databases. To evaluate result credibility, summary evidence was graded according to the Venice criteria and false positive report probability (FPRP) was calculated to further validate result noteworthiness. Integrative analysis of genetic and eQTL (expression quantitative trait locus) data was coupled with network and pathway analysis to explore the hypothesis that specific cell functions are involved in sarcoma predisposition. We retrieved 90 eligible studies comprising 47,796 subjects (cases: 14,358, 30%) and investigating 1,126 polymorphisms involving 320 distinct genes. Meta-analysis identified 55 single nucleotide polymorphisms (SNPs) significantly associated with disease risk with a high (N=9), moderate (N=38) and low (N=8) level of evidence, findings being classified as noteworthy basically only when the level of evidence was high. The estimated joint population attributable risk for three independent SNPs (rs11599754 of ZNF365/EGR2, rs231775 of CTLA4, and rs454006 of PRKCG) was 37.2%. We also identified 53 SNPs significantly associated with sarcoma risk based on single studies. Pathway analysis enabled us to propose that sarcoma predisposition might be linked especially to germline variation of genes whose products are involved in the function of the DNA repair machinery. We built the first knowledgebase on the evidence linking DNA variation to sarcomas susceptibility, which can be used to generate mechanistic hypotheses and inform future studies in this field of oncology.
Watanabe, Shinichi; Ohnishi, Takamitsu; Yuasa, Akira; Kiyota, Hiroshi; Iwata, Satoshi; Kaku, Mitsuo; Watanabe, Akira; Sato, Junko; Hanaki, Hideaki; Manabe, Motomu; Suzuki, Tamio; Otsuka, Fujio; Aihara, Michiko; Iozumi, Ken; Tamaki, Takeshi; Funada, Yuichi; Shinozaki, Mikio; Kobayashi, Motoko; Okuda, Masaru; Kikyo, Go; Kikuchi, Kumi; Okada, Yoshitane; Takeshima, Masanori; Kaneko, Osamu; Ogawa, Natsuki; Ito, Rie; Okuyama, Ryuhei; Shimada, Shinji; Shimizu, Tadamichi; Hatta, Naohito; Manabu, Maeda; Tsutsui, Kiyohiro; Tanaka, Toshihiro; Miyachi, Yoshiki; Asada, Hideo; Furukawa, Fukumi; Kurokawa, Ichiro; Iwatsuki, Keiji; Hide, Michihiro; Muto, Masahiko; Yamamoto, Osamu; Niihara, Hiroyuki; Takagaki, Kenji; Kubota, Yasuo; Sayama, Koji; Sano, Shigetoshi; Furue, Masutaka; Kanekura, Takuro
2017-08-01
To investigate the trends of antimicrobial resistance in pathogens isolated from skin and soft-tissue infections (SSTI) at dermatology departments in Japan, a Japanese surveillance committee conducted the first nationwide survey in 2013. Three main organisms were collected from SSTI at 30 dermatology departments in medical centers and 10 dermatology clinics. A total of 860 strains - 579 of Staphylococcus aureus, 240 of coagulase-negative staphylococci, and 41 of Streptococcus pyogenes - were collected and shipped to a central laboratory for antimicrobial susceptibility testing. The patient profiles were also studied. Among all 579 strains of S. aureus, 141 (24.4%) were methicillin-resistant (MRSA). Among 97 Staphylococcus epidermidis strains, 54 (55.7%) were methicillin-resistant (MRSE). MRSA and MRSE were more frequently isolated from inpatients than from outpatients. Furthermore, these methicillin-resistant strains were also isolated more frequently from patients with histories of taking antibiotics within 4 weeks and hospitalization within 1 year than from those without. However, there were no significant differences in MIC values and susceptibility patterns of the MRSA strains between patients with a history of hospitalization within 1 year and those without. Therefore, most of the MRSA isolated at dermatology departments is not healthcare-acquired but community-acquired MRSA. S. pyogenes strains were susceptible to most antibiotics except macrolides. The information in this study is not only important in terms of local public health but will also contribute to an understanding of epidemic clones of pathogens from SSTI. Copyright © 2017 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-01-01
Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV],) were different among soft tennis adolescent athletes with normal vision (Group A), with refractive error and corrected with (Group B) and without eyeglasses (Group C). Setting and Design: A cross-section study was conducted. Soft tennis athletes aged 10–13 who played softball tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months were recruited. Materials and Methods: DPs were measured in an absolute deviation (mm) between a moving rod and fixing rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worse) to 10 (best) using ATHLEVISION software. Statistical Analysis: Chi-square test and Kruskal–Wallis test was used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant difference among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV displayed significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Conclusions: Significant better DP and PV were seen among soft tennis adolescent athletes with normal vision than those with refractive error regardless whether they had eyeglasses corrected. On the other hand, DVA, EM, and MV were similar among the three study groups. PMID:26632127
Laser as a Tool to Study Radiation Effects in CMOS
NASA Astrophysics Data System (ADS)
Ajdari, Bahar
Energetic particles from cosmic-ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as device technology advances and chip reliability becomes more important than ever. Particle-accelerator testing has been the standard method for characterizing the sensitivity of chips to single-event upsets (SEUs). However, because of its cost and limited availability, other techniques have been explored. The pulsed laser has been a successful tool for characterizing SEU behavior, but it has not yet been accepted as a method comparable to beam testing. In this thesis, I propose a methodology for correlating laser soft error rate (SER) with particle-beam data. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon, in which, owing to the close proximity of devices, a "weakening effect" in the ON state can be observed.
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wavefront as it propagates through the atmosphere can reduce link availability and may introduce burst errors, degrading system performance. We investigate the suitability of soft-computing (SC) tools for improving the performance of free-space optical (FSO) communications systems. The SC tools are used to predict key parameters of an FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) that forecasts future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams, particularly during periods of strong atmospheric turbulence. The parameter values predicted with the proposed tool show acceptable conformity with the original measurements.
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can degrade antenna performance, depending also on environmental conditions and the antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool, which provides guidance for hardware design. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic, or soft, failures are modeled with a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients; the phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half-power beamwidth, main-beam phase errors, sidelobe levels, and beam-pointing errors were studied as functions of amplifier and phase-shifter variations. General specifications for amplifier and phase-shifter tolerances in various architecture configurations at C band and S band were determined.
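The failure model described can be sketched as a small Monte Carlo experiment: hard failures zero an element's amplitude, soft failures draw it from a Gaussian about nominal, and phase-shifter errors are uniform. The element count, error spreads, and failure rate below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64                                    # array elements, uniform excitation

amp = rng.normal(1.0, 0.1, N)             # soft failures: Gaussian amplitude spread
amp[rng.random(N) < 0.05] = 0.0           # hard failures: element radiates no power
phase = rng.uniform(-np.radians(10), np.radians(10), N)   # phase-shifter errors

# Broadside array factor relative to an error-free uniform array (|sum| = N)
af = np.abs(np.sum(amp * np.exp(1j * phase)))
gain_loss_db = 20 * np.log10(af / N)      # negative: loss from failures and errors
```

Sweeping the error spreads and failure rate in such a loop is one way to derive tolerance specifications of the kind the study reports.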
Effects of Stopping Ions and LET Fluctuations on Soft Error Rate Prediction.
Weeden-Wright, S. L.; King, Michael Patrick; Hooten, N. C.; ...
2015-02-01
Variability in energy deposition from stopping ions and LET fluctuations is quantified for specific radiation environments. When compared to predictions using average LET via CREME96, LET fluctuations lead to an order-of-magnitude difference in effective flux and a nearly 4x decrease in predicted soft error rate (SER) in an example calculation performed on a commercial 65 nm SRAM. The large LET fluctuations reported here will be even greater for the smaller sensitive volumes that are characteristic of highly scaled technologies. End-of-range effects of stopping ions do not lead to significant inaccuracies in radiation environments with low solar activity unless the sensitive-volume thickness is 100 μm or greater. In contrast, end-of-range effects for stopping ions lead to significant inaccuracies for sensitive-volume thicknesses less than 10 μm in radiation environments with high solar activity.
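The central point, that fluctuations about the mean LET matter when an upset requires exceeding a critical threshold, can be shown with a toy calculation. The distribution shape and numbers here are invented for illustration; the paper's analysis uses CREME96 environments:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical environment: log-normal spread of event LET about a fixed mean;
# an upset occurs only when the LET exceeds a critical threshold.
let_mean = 5.0        # MeV*cm^2/mg (illustrative)
let_crit = 15.0
sigma = 0.8
# mu chosen so the log-normal has mean let_mean
samples = rng.lognormal(np.log(let_mean) - 0.5 * sigma**2, sigma, size=100_000)

ser_avg_let = float(let_mean > let_crit)   # average-LET model: predicts zero upsets
ser_fluct = np.mean(samples > let_crit)    # fluctuations push some events past threshold
```

The average-LET model predicts no upsets at all, while the fluctuating model yields a nonzero upset fraction, a simplified version of the order-of-magnitude discrepancies the paper quantifies.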
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method, mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods, including bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS) respectively. The results showed that BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Qcv2) (0.9998) and the coefficient of determination of test set (Q2test) (0.9989), which demonstrated that the mid infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential to conducting a component spectral analysis.
Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs
Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...
2015-12-17
Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy, called Frame-level Redundancy Scrubbing (FLR-scrubbing). The correction technique can be implemented in a coarse-grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study and was tested under neutron radiation and fault injection. Implementation results demonstrated minimal area and energy consumption overhead compared to other techniques. The time to repair a fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
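While the abstract does not detail the voter, frame-level redundancy repairs upsets the way TMR does: by a bitwise majority over three copies of a configuration frame. A generic sketch of such a vote (hypothetical, not the FLR-scrubbing implementation itself):

```python
def scrub_frames(f0: int, f1: int, f2: int) -> int:
    """Bitwise majority vote over three redundant configuration frames:
    each output bit takes the value held by at least two of the copies."""
    return (f0 & f1) | (f1 & f2) | (f0 & f2)

# One frame copy hit by an SEU: the corrupted bit is voted out
golden = 0b1011_0110
corrupted = golden ^ 0b0000_0100       # single bit flip
repaired = scrub_frames(golden, corrupted, golden)
```

A scrubber applying this vote frame by frame can repair accumulated single-frame upsets without stopping the design, which is the property the neutron tests above evaluate.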
Damage level prediction of non-reshaped berm breakwater using ANN, SVM and ANFIS models
NASA Astrophysics Data System (ADS)
Mandal, Sukomal; Rao, Subba; N., Harish; Lokesha
2012-06-01
The damage analysis of coastal structures is very important, as it involves many design parameters that must be considered for a better and safer design of the structure. In the present study, experimental data for a non-reshaped berm breakwater were collected from the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, NITK, Surathkal, India. Soft computing techniques such as Artificial Neural Network (ANN), Support Vector Machine (SVM), and Adaptive Neuro-Fuzzy Inference System (ANFIS) models are constructed using the experimental data sets to predict the damage level of the non-reshaped berm breakwater. The experimental data are used to train the ANN, SVM, and ANFIS models, and results are assessed in terms of statistical measures such as mean square error, root mean square error, correlation coefficient, and scatter index. The results show that soft computing techniques, i.e., ANN, SVM, and ANFIS, can be efficient tools for predicting damage levels of non-reshaped berm breakwaters.
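The statistical measures named above are straightforward to compute. A small sketch with toy data and standard definitions; the scatter index is taken here as RMSE normalized by the observed mean, a common convention that the abstract does not spell out:

```python
import numpy as np

def model_metrics(observed, predicted):
    """MSE, RMSE, correlation coefficient, and scatter index
    for comparing model predictions against observations."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    err = predicted - observed
    mse = np.mean(err ** 2)                        # mean square error
    rmse = np.sqrt(mse)                            # root mean square error
    cc = np.corrcoef(observed, predicted)[0, 1]    # correlation coefficient
    si = rmse / np.mean(observed)                  # scatter index (assumed convention)
    return mse, rmse, cc, si

mse, rmse, cc, si = model_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```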
NASA Astrophysics Data System (ADS)
Ouyang, S.; Song, L. J.; Liu, Y. H.; Huo, J. T.; Wang, J. Q.; Xu, W.; Li, J. L.; Wang, C. T.; Wang, X. M.; Li, R. W.
2018-06-01
The soft magnetic properties of Fe-based metallic glasses are degraded significantly by external and residual stresses, e.g., the susceptibility decreases and the coercivity increases, which severely limits their applications. Unraveling the micromechanism of how stress influences the soft magnetic properties would greatly help in enhancing the performance of Fe-based metallic glasses. In this work, we investigate the effect of viscoelastic heterogeneity on the motion of magnetic domain walls surrounding nanoindentations. Compared to the matrix, dissipation from the viscoelastic heterogeneity increases toward the nanoindentation. Meanwhile, the motion of the domain wall under an external magnetic field becomes more difficult toward the nanoindentations. A correlation between the viscoelastic dissipation and the mobility of magnetic domain walls is observed, which can be well fitted using magnetoelastic coupling theory. This suggests that manipulating the microscale viscoelastic heterogeneity is a promising strategy for enhancing the soft magnetic properties of metallic glasses.
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of the underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark as measured by the CAT & MAUS system and by the MCS alone varied from 8.32 mm to 16.87 mm in gait, indicating the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body-segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, reducing the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
Color reproduction for advanced manufacture of soft tissue prostheses.
Xiao, Kaida; Zardawi, Faraedon; van Noort, Richard; Yates, Julian M
2013-11-01
The objectives of this study were to develop a color reproduction system within advanced manufacturing technology for accurate and automatic processing of soft tissue prostheses. The manufacturing protocol was defined to effectively and consistently produce soft tissue prostheses using a 3D printing system. Within this protocol, printer color profiles were developed using a number of mathematical models for the proposed 3D color printing system based on 240 training colors. On this basis, the color reproduction system was established, and its system errors, including accuracy of color reproduction, performance of color repeatability, and color gamut, were evaluated using 14 known human skin shades. The printer color profile developed using third-order polynomial regression based on least-squares fitting provided the best model performance. The results demonstrated that with the proposed color reproduction system, 14 different skin colors could be reproduced with excellent color reproduction performance. Evaluation of the system's color repeatability revealed a demonstrable system error, highlighting the need for regular evaluation. The color gamut for the proposed 3D printing system was simulated, and it was demonstrated that the vast majority of skin colors can be reproduced, with the exception of extremely dark or light skin shades. This study demonstrated that the proposed color reproduction system can be effectively used to reproduce a range of human skin colors for application in the advanced manufacture of soft tissue prostheses. Copyright © 2013 Elsevier Ltd. All rights reserved.
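A third-order polynomial printer profile of the kind described is a linear least-squares fit over polynomial terms of the device coordinates. A sketch under stated assumptions: the exact term set, the synthetic target mapping, and the target color space are invented here; the paper fits 240 measured training colors:

```python
import numpy as np

def poly3_features(rgb):
    """Third-order polynomial terms of an RGB triplet (one common
    term set for printer characterization; the exact set is assumed)."""
    r, g, b = rgb
    return np.array([1, r, g, b, r*g, r*b, g*b, r*r, g*g, b*b,
                     r**3, g**3, b**3, r*g*b])

rng = np.random.default_rng(0)
train_rgb = rng.random((240, 3))                  # 240 training colors

def true_map(c):
    """Synthetic ground-truth device-to-color mapping (invented)."""
    return np.array([50*c[0] + 10*c[1]**2, 30*c[1] - 5*c[2], 20*c[2]*c[0]])

targets = np.array([true_map(c) for c in train_rgb])
A = np.array([poly3_features(c) for c in train_rgb])
coef, *_ = np.linalg.lstsq(A, targets, rcond=None)  # least-squares profile fit

pred = poly3_features([0.5, 0.5, 0.5]) @ coef       # profile applied to a new color
```

Because the synthetic mapping lies in the span of the polynomial terms, the fit recovers it essentially exactly; with real measured colors the residual of this fit is the profile accuracy the study evaluates.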
Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M
2016-12-01
Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered a vital tool, its accuracy is not well understood for two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software for class III patients treated with maxillary advancement and mandibular setback, and to validate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of ±2 mm along the X-axis at most landmarks. The lower-lip predictions were the most inaccurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, it should not be relied upon for precise treatment planning of surgical movements, and should be used with caution to prevent unrealistic patient expectations and dissatisfaction.
Auslander, Margeaux V; Thomas, Ayanna K; Gutchess, Angela H
2017-01-01
Background/Study Context: The present experiment investigated the role of confidence and control beliefs in susceptibility to the misinformation effect in young and older adults. Control beliefs are perceptions about one's abilities or competence and the extent to which one can influence performance outcomes. It was predicted that level of control beliefs would influence misinformation susceptibility and overall memory confidence. Fifty university students (ages 18-26) and 37 community-dwelling older adults (ages 62-86) were tested. Participants viewed a video, answered questions containing misinformation, and then completed a source-recognition test to determine whether the information presented was seen in the video, the questionnaire only, both, or neither. For each response, participants indicated their level of confidence. The relationship between control beliefs and memory performance was moderated by confidence. That is, individuals with lower control beliefs made more errors as confidence decreased. Additionally, the relationship between confidence and memory performance differed by age, with greater confidence related to more errors for young adults. Confidence is an important factor in how control beliefs and age are related to memory errors in the misinformation effect. This may have implications for the legal system, particularly with eyewitness testimony. The confidence of an individual should be considered if the eyewitness is a younger adult.
Spilker, R L; de Almeida, E S; Donzelli, P S
1992-01-01
This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which an independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach.
The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.
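The adaptive step above hinges on an error indicator: the L2 norm of the difference between the finite element solution and a projected (smoothed) solution. A one-dimensional sketch, assuming piecewise-constant element stresses and a simple nodal-averaging projection (the biphasic physics and quadtree meshing are omitted):

```python
import numpy as np

def error_indicator(nodes, elem_stress):
    """Per-element indicator in the spirit of the abstract: the L2 norm of
    the difference between the raw (piecewise-constant) finite element
    stress and a smoother field projected onto the nodes.

    nodes       : sorted 1-D node coordinates, shape (n+1,)
    elem_stress : constant stress per element, shape (n,)
    """
    # Project element stresses to nodes by averaging adjacent elements.
    nodal = np.empty(len(elem_stress) + 1)
    nodal[0], nodal[-1] = elem_stress[0], elem_stress[-1]
    nodal[1:-1] = 0.5 * (elem_stress[:-1] + elem_stress[1:])
    # Element-wise L2 norm of (projected - raw), midpoint rule.
    h = np.diff(nodes)
    mid = 0.5 * (nodal[:-1] + nodal[1:])
    return np.sqrt(h) * np.abs(mid - elem_stress)

# Hypothetical stress field sampled on a uniform 10-element mesh:
x = np.linspace(0.0, 1.0, 11)
sigma = np.exp(-5.0 * (0.5 * (x[:-1] + x[1:]) - 0.3) ** 2)
eta = error_indicator(x, sigma)
refine = eta > eta.mean()   # elements flagged for refinement
```

Elements whose indicator exceeds a threshold would be refined and the solution rezoned onto the new mesh, as the abstract describes.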
USDA-ARS?s Scientific Manuscript database
Sweet potato is a nutritional source worldwide. Soft rot caused by Rhizopus spp. is a major limiting factor in the storage of produce, rendering it potentially unsafe for human consumption. In this study, Rhizopus oryzae was used to develop a concept of postharvest disease control by weakening the p...
ERIC Educational Resources Information Center
Dunn, Rita; And Others
1985-01-01
Cites research showing individual reactions to bright and dim light in the classroom. Shows individual susceptibility to extreme negativism in inappropriate lighting conditions and suggests that students' predispositions for illumination be identified. Notes that restless, fidgety youngsters should be placed into softly lit sections, with the…
Quantitative Trait Loci for Endosperm Modification and Amino Acid Contents in Quality Protein Maize
USDA-ARS?s Scientific Manuscript database
The deficient protein quality of corn grain can be improved by replacing the normal Opaque2 (O2) alleles with non-functional mutant alleles o2. Unfortunately, o2 alleles are associated with a very soft endosperm texture, poor yield and susceptibility to diseases and insects. Plant breeders have been...
NASA Technical Reports Server (NTRS)
Belcastro, C. M.
1984-01-01
A methodology was developed to assess the upset susceptibility/reliability of a computer system onboard an aircraft flying through a lightning environment. Upset error modes in a general-purpose microprocessor were studied. The upset tests involved randomly injecting analog transients, which model lightning-induced signals, onto the interface lines of an 8080-based microcomputer, from which upset error data were recorded. The program code run on the microprocessor during the tests was designed to exercise all of the machine cycles and memory-addressing techniques implemented in the 8080 central processing unit. A statistical analysis is presented in which possible correlations are established between the probability of upset occurrence and transient signal inputs during specific processing states and operations. A stochastic upset susceptibility model for the 8080 microprocessor is presented. The susceptibility of this microprocessor to upset, once analog transients have entered the system, is determined analytically by calculating the state probabilities of the stochastic model.
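The final step above, calculating state probabilities of a stochastic upset model, can be sketched with a small discrete-time Markov chain. The 3-state chain below (normal, transient-disturbed, upset) and its transition probabilities are hypothetical, not the paper's actual 8080 model:

```python
import numpy as np

def state_probabilities(P, steps=500):
    """State-occupancy probabilities of a discrete-time Markov chain after
    repeated application of the transition matrix P, starting from the
    normal state."""
    p = np.zeros(P.shape[0])
    p[0] = 1.0
    for _ in range(steps):
        p = p @ P
    return p

# Hypothetical 3-state chain: normal, transient-disturbed, upset (absorbing).
P = np.array([[0.95, 0.04, 0.01],
              [0.60, 0.30, 0.10],
              [0.00, 0.00, 1.00]])
p = state_probabilities(P)
upset_prob = p[2]   # probability the processor has upset by this time
```

With an absorbing upset state, the occupancy of that state over time gives the analytically computed upset probability the abstract refers to.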
Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.
Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task for physiologically monitoring and controlling the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of biomass formation, are therefore smart alternatives. The reconciliation procedure uses mass and energy balances together with accuracy estimates of the measured conversion rates, which until now were chosen arbitrarily and held static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named the adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate its benefits: under industrially relevant conditions, the errors of the resulting estimated biomass formation rate and specific substrate consumption rate could be decreased by 43 and 64%, respectively, compared with traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the raw-signal accuracy required to obtain predefined accuracies of soft-sensor estimates, so that appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
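The core idea, propagating raw-signal accuracies into conversion-rate uncertainties, can be sketched with first-order (Gaussian) error propagation. The rate expression below, a gas conversion rate from a flow signal and two mole-fraction signals, is a hypothetical stand-in for the paper's reconciled turnover rates:

```python
import numpy as np

def propagate_rate_uncertainty(flow, x_in, x_out, sd_flow, sd_x):
    """First-order (Gaussian) propagation of raw-signal accuracies into a
    conversion rate r = flow * (x_out - x_in); x_in and x_out share the
    same accuracy sd_x and all signals are assumed independent."""
    r = flow * (x_out - x_in)
    dr_dflow = x_out - x_in        # sensitivity to the flow signal
    dr_dx = flow                   # sensitivity to each mole fraction
    var_r = (dr_dflow * sd_flow) ** 2 + 2.0 * (dr_dx * sd_x) ** 2
    return r, np.sqrt(var_r)

# Hypothetical off-gas example: inlet/outlet CO2 fractions and gas flow.
r, sd_r = propagate_rate_uncertainty(
    flow=10.0, x_in=0.0004, x_out=0.0150, sd_flow=0.05, sd_x=1e-4)
```

Running the propagation backwards, asking what sd_flow and sd_x keep sd_r below a target, is the essence of the workflow for selecting measurement devices.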
Examining the Angular Resolution of the Astro-H's Soft X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Sato, Toshiki; Iizuka, Ryo; Ishida, Manabu; Kikuchi, Naomichi; Maeda, Yoshitomo; Kurashima, Sho; Nakaniwa, Nozomi; Tomikawa, Kazuki; Hayashi, Takayuki; Mori, Hideyuki;
2016-01-01
The international x-ray observatory ASTRO-H was renamed Hitomi after launch. It covers a wide energy range from a few hundred eV to 600 keV. It is equipped with two soft x-ray telescopes (SXTs: SXT-I and SXT-S) for imaging the soft x-ray sky up to 12 keV, which focus images onto their respective focal-plane detectors: a CCD camera (SXI) and a calorimeter (SXS). The SXTs are fabricated in quadrant units. The angular resolution in half-power diameter (HPD) of each quadrant of the SXTs ranges between 1.1 and 1.4 arc min at 4.51 keV. It was also found that the HPD of one quadrant is energy dependent. We examine the angular resolution with spot-scan measurements. In order to understand the cause of the imaging-capability deterioration and to inform future telescope development, we carried out spot-scan measurements in which we illuminated the entire aperture of each quadrant with a square beam 8 mm on a side. Based on the scan results, we made maps of the image blurring and the focus position. The former and the latter reflect the figure error and positioning error, respectively, of the foils within the incident 8 mm x 8 mm beam. As a result, we estimated those errors in a quadrant to be approx. 0.9 to 1.0 and approx. 0.6 to 0.9 arc min, respectively. We found that the larger the positioning error in a quadrant is, the larger its HPD is. The HPD map, which manifests the local image blurring, is very similar from quadrant to quadrant, but the map of the focus position differs from location to location in each telescope. It is also found that the difference in local performance causes the energy dependence of the HPD.
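The half-power diameter (HPD) quoted above can be computed from a point-spread function as the diameter of the centroid-centered circle enclosing half the total flux. A minimal sketch on a synthetic Gaussian PSF (the sampling and units are assumptions, not the SXT calibration data):

```python
import numpy as np

def half_power_diameter(x, y, flux):
    """Diameter of the centroid-centered circle that encloses half of the
    total flux of a point-spread function."""
    cx = np.average(x, weights=flux)
    cy = np.average(y, weights=flux)
    r = np.hypot(x - cx, y - cy)
    order = np.argsort(r)
    cum = np.cumsum(flux[order])
    r_half = r[order][np.searchsorted(cum, 0.5 * cum[-1])]
    return 2.0 * r_half

# Synthetic PSF: a 2-D Gaussian; for sigma = 1 the HPD is
# 2 * sigma * sqrt(2 ln 2), about 2.355 in the same units.
rng = np.random.default_rng(2)
x, y = rng.normal(size=(2, 100_000))
hpd = half_power_diameter(x, y, np.ones_like(x))
```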
Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya
2003-01-01
The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and a soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error; this was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. The performance of the hybrid method was intermediate between the individual approximators. The error in the response variable is smaller than that shown in the figure because of a distortion scale factor.
The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.;
2012-01-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be 00:52:00, 00:54:00, ..., and 01:04:00. The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.; Blandford, R. D.; Bonamente, E.; Borgland, A. W.; Bregeon, J.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buehler, R.; Burgess, J. M.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Cutini, S.; Dennis, B. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Fortin, P.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grillo, L.; Grove, J. E.; Gruber, D.; Guiriec, S.; Hadasch, D.; Hayashida, M.; Hays, E.; Horan, D.; Iafrate, G.; Jóhannesson, G.; Johnson, A. S.; Johnson, W. N.; Kamae, T.; Kippen, R. M.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Meegan, C.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Murphy, R.; Naumann-Godo, M.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Paciesas, W. S.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Petrosian, V.; Pierbattista, M.; Piron, F.; Pivato, G.; Poon, H.; Porter, T. A.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sbarra, C.; Schwartz, R. A.; Sgrò, C.; Share, G. H.; Siskind, E. J.; Spinelli, P.; Takahashi, H.; Tanaka, T.; Tanaka, Y.; Thayer, J. B.; Tibaldo, L.; Tinivella, M.; Tolbert, A. K.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wilson-Hodge, C.; Wood, D. L.; Wood, K. S.; Yang, Z.
2012-04-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be "00:52:00," "00:54:00," ... , and "01:04:00." The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
Prospective Comparison of Cefoxitin and Cefazolin in Infections Caused by Aerobic Bacteria
Gurwith, Marc; Albritton, William; Lank, Beverley; Harding, Godfrey; Ronald, Allan
1978-01-01
Intravenous cefazolin and cefoxitin were compared in a prospective randomized trial in infections where the suspected pathogen was expected to be susceptible to both antibiotics. In the cefazolin group (12 patients) the diagnosis was pneumonia in 4, including 2 with pneumococcal bacteremia, soft tissue infection in 5, Staphylococcus aureus bacteremia in 1, acute pyelonephritis in 1, and disseminated gonococcal infection in 1. In the cefoxitin group (10 patients) the diagnosis was pneumonia in 4, including 2 with pneumococcal bacteremia, soft tissue infection in 4, acute pyelonephritis in 1, and disseminated gonococcal infection in 1. In the cefazolin group receiving an evaluable course of therapy, a good clinical response was seen in 10 of 11 patients, and a bacteriological response was seen in 5 of 7. Cefazolin failed to eradicate S. aureus bacteremia in 1 patient and S. aureus in a skin ulcer of another patient. All 10 cefoxitin patients had good clinical and bacteriological responses, but in 1 patient S. aureus colonization of a postoperative wound recurred after discontinuation of the drug. Side effects in both groups included skin rash, phlebitis, and elevation of the serum alkaline phosphatase. Both cefoxitin and cefazolin appeared effective in infections caused by susceptible aerobic pathogens with the possible exception of S. aureus, although all 11 strains of S. aureus isolated in this study were susceptible in vitro to both antibiotics. Cefoxitin appeared to be equivalent to cefazolin in efficacy and occurrence of side effects. PMID:348096
Investigating the impact of spatial priors on the performance of model-based IVUS elastography
Richards, M S; Doyley, M M
2012-01-01
This paper describes methods that provide prerequisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore, additional information is needed to produce useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded that of elastograms computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and that the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
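A sketch of how a spatial prior can act as a soft constraint in a regularized, model-based reconstruction: the prior estimate is pulled in through an extra penalty term rather than enforced exactly. The linear step and all operators below are hypothetical stand-ins, not the paper's elastography formulation:

```python
import numpy as np

def reconstruct_step(J, d, L, alpha, mu_prior=None, beta=0.0):
    """One linear, Tikhonov-regularized reconstruction step. J maps the
    unknown modulus update to predicted measurements d, and L penalizes
    roughness; a spatial prior mu_prior, when given, is pulled in softly
    with weight beta instead of being enforced exactly."""
    A = J.T @ J + alpha * (L.T @ L)
    b = J.T @ d
    if mu_prior is not None:
        A = A + beta * np.eye(J.shape[1])
        b = b + beta * mu_prior
    return np.linalg.solve(A, b)

# Tiny synthetic problem: 20 measurements, 8 unknowns.
rng = np.random.default_rng(3)
J = rng.normal(size=(20, 8))
mu_true = rng.normal(size=8)
d = J @ mu_true
L = np.eye(8)
mu_no_prior = reconstruct_step(J, d, L, alpha=1e-8)
mu_soft = reconstruct_step(J, d, L, alpha=1e-8, mu_prior=mu_true, beta=1.0)
```

The soft-prior term only biases the solution toward the prior, which is why, as the abstract argues, it degrades more gracefully than a hard constraint when the geometrical information is imperfect.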
Kulengowski, Brandon; Brignola, Matthew; Gallagher, Chanah; Rutter, W Cliff; Ribes, Julie A; Burgess, David S
2017-01-01
Abstract Background Polymyxins are being revitalized to combat carbapenem-resistant Enterobacteriaceae (CRE). However, evaluating the activity of these agents by traditional broth dilution methods is not practical for busy clinical laboratories. We compared polymyxin B (PMB) activity utilizing two quantitative susceptibility testing methods, Etest® and broth microdilution (BMD), against CRE isolates from patients at an academic medical center. Methods PMB activity against 70 recent CRE clinical isolates was determined by BMD and Etest® according to CLSI guidelines. P. aeruginosa ATCC® 27853 was used as a quality control strain. The CLSI PMB susceptibility breakpoint of non-fermenting gram-negative bacteria (<2 mg/L) was used. Essential agreement between methods was defined as an MIC measured within 1 log2 dilution. Categorical agreement was defined between methods as classification of isolates in the same susceptibility category (susceptible or resistant). Major and very major error rates were calculated, and McNemar’s test was used for determining a difference between methods. Results CRE isolates were primarily Enterobacter spp. (43%), followed by K. pneumoniae (41%) and E. coli (9%). Essential agreement between testing methods was low (9%), but categorical agreement was 81% (P = 0.0002). Although false non-susceptibility was never observed by Etest® (BMD as reference), the rate of very major errors by Etest® was high (19%). Etest® miscalled 87% of PMB-resistant CRE. Conclusion Etest® reporting of false susceptibility may result in inappropriate antibiotic utilization and treatment failure clinically. We do not recommend using Etest® for PMB susceptibility testing for routine patient care. Disclosures All authors: No reported disclosures.
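The agreement and error-rate definitions used above (essential agreement within 1 log2 dilution, categorical agreement, very major and major error rates) can be computed directly. The MIC pairs below are hypothetical; the breakpoint semantics (resistant when MIC > 2 mg/L on a log2 series) follow the abstract:

```python
from math import log2

def compare_ast_methods(ref_mics, test_mics, breakpoint=2.0):
    """Essential/categorical agreement and error rates for a susceptibility
    method comparison (reference = broth microdilution). Isolates are
    called resistant when MIC > breakpoint; MICs are assumed to lie on a
    log2 dilution series."""
    n = len(ref_mics)
    ea = sum(abs(log2(t) - log2(r)) <= 1
             for r, t in zip(ref_mics, test_mics))
    ref_res = [m > breakpoint for m in ref_mics]
    test_res = [m > breakpoint for m in test_mics]
    ca = sum(r == t for r, t in zip(ref_res, test_res))
    n_res, n_sus = sum(ref_res), n - sum(ref_res)
    # Very major error: resistant by reference, susceptible by test method.
    vme = sum(r and not t for r, t in zip(ref_res, test_res))
    # Major error: susceptible by reference, resistant by test method.
    me = sum(t and not r for r, t in zip(ref_res, test_res))
    return {"essential_agreement": ea / n,
            "categorical_agreement": ca / n,
            "vme_rate": vme / n_res if n_res else 0.0,
            "me_rate": me / n_sus if n_sus else 0.0}

# Hypothetical (reference, test) MIC pairs in mg/L:
stats = compare_ast_methods(ref_mics=[0.5, 1, 8, 16, 0.25, 4],
                            test_mics=[0.5, 2, 1, 16, 0.25, 8])
```

Note that, as in the abstract, very major errors are computed against the resistant denominator and major errors against the susceptible one, so a small number of resistant isolates can yield a large VME rate.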
Córcoles, A.D.; Magesan, Easwar; Srinivasan, Srikanth J.; Cross, Andrew W.; Steffen, M.; Gambetta, Jay M.; Chow, Jerry M.
2015-01-01
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code. PMID:25923200
Córcoles, A D; Magesan, Easwar; Srinivasan, Srikanth J; Cross, Andrew W; Steffen, M; Gambetta, Jay M; Chow, Jerry M
2015-04-29
The ability to detect and deal with errors when manipulating quantum systems is a fundamental requirement for fault-tolerant quantum computing. Unlike classical bits that are subject to only digital bit-flip errors, quantum bits are susceptible to a much larger spectrum of errors, for which any complete quantum error-correcting code must account. Whilst classical bit-flip detection can be realized via a linear array of qubits, a general fault-tolerant quantum error-correcting code requires extending into a higher-dimensional lattice. Here we present a quantum error detection protocol on a two-by-two planar lattice of superconducting qubits. The protocol detects an arbitrary quantum error on an encoded two-qubit entangled state via quantum non-demolition parity measurements on another pair of error syndrome qubits. This result represents a building block towards larger lattices amenable to fault-tolerant quantum error correction architectures such as the surface code.
The Forced Soft Spring Equation
ERIC Educational Resources Information Center
Fay, T. H.
2006-01-01
Through numerical investigations, this paper studies examples of the forced Duffing-type spring equation with [epsilon] negative. Trial-and-error numerical experiments demonstrate the existence of stability boundaries in the phase plane that indicate initial conditions yielding bounded solutions. Subharmonic boundaries are…
Once-Daily Amikacin Dosing in Burn Patients Treated with Continuous Venovenous Hemofiltration
2011-10-01
previously documented inaccuracies with the Vitek 2 instrument (1), all Acinetobacter baumannii isolates reported as susceptible to amikacin were confirmed...to the analysis. Bacteriology. Amikacin MICs were determined for Gram-negative blood- stream isolates using the Vitek 2 instrument (bioMérieux...al. 2010. Aminoglycoside resistance and susceptibility testing errors in Acinetobacter baumannii-calcoaceticus complex. J. Clin. Microbiol. 48:1132
Single event upset susceptibility testing of the Xilinx Virtex II FPGA
NASA Technical Reports Server (NTRS)
Yui, C.; Swift, G.; Carmichael, C.
2002-01-01
Heavy ion testing of the Xilinx Virtex II was conducted on the configuration, block RAM, and user flip-flop cells to determine their single event upset susceptibility using LETs of 1.2 to 60 MeV·cm²/mg. A software program specifically designed to count errors in the FPGA was used to reveal L1/e values and single-event functional interrupt failures.
Idelevich, Evgeny A.; Grunewald, Camilla M.; Wüllenweber, Jörg; Becker, Karsten
2014-01-01
Fungaemia is associated with high mortality rates and early appropriate antifungal therapy is essential for patient management. However, classical diagnostic workflow takes up to several days due to the slow growth of yeasts. Therefore, an approach for direct species identification and direct antifungal susceptibility testing (AFST) without prior time-consuming sub-culturing of yeasts from positive blood cultures (BCs) is urgently needed. Yeast cell pellets prepared using Sepsityper kit were used for direct identification by MALDI-TOF mass spectrometry (MS) and for direct inoculation of Vitek 2 AST-YS07 card for AFST. For comparison, MALDI-TOF MS and Vitek 2 testing were performed from yeast subculture. A total of twenty four positive BCs including twelve C. glabrata, nine C. albicans, two C. dubliniensis and one C. krusei isolate were processed. Applying modified thresholds for species identification (score ≥1.5 with two identical consecutive propositions), 62.5% of BCs were identified by direct MALDI-TOF MS. AFST results were generated for 72.7% of BCs directly tested by Vitek 2 and for 100% of standardized suspensions from 24 h cultures. Thus, AFST comparison was possible for 70 isolate-antifungal combinations. Essential agreement (minimum inhibitory concentration difference ≤1 double dilution step) was 88.6%. Very major errors (VMEs) (false-susceptibility), major errors (false-resistance) and minor errors (false categorization involving intermediate result) amounted to 33.3% (of resistant isolates), 1.9% (of susceptible isolates) and 1.4% providing 90.0% categorical agreement. All VMEs were due to fluconazole or voriconazole. This direct method saved on average 23.5 h for identification and 15.1 h for AFST, compared to routine procedures. However, performance for azole susceptibility testing was suboptimal and testing from subculture remains indispensable to validate the direct finding. PMID:25489741
Idelevich, Evgeny A; Grunewald, Camilla M; Wüllenweber, Jörg; Becker, Karsten
2014-01-01
Fungaemia is associated with high mortality rates and early appropriate antifungal therapy is essential for patient management. However, classical diagnostic workflow takes up to several days due to the slow growth of yeasts. Therefore, an approach for direct species identification and direct antifungal susceptibility testing (AFST) without prior time-consuming sub-culturing of yeasts from positive blood cultures (BCs) is urgently needed. Yeast cell pellets prepared using Sepsityper kit were used for direct identification by MALDI-TOF mass spectrometry (MS) and for direct inoculation of Vitek 2 AST-YS07 card for AFST. For comparison, MALDI-TOF MS and Vitek 2 testing were performed from yeast subculture. A total of twenty four positive BCs including twelve C. glabrata, nine C. albicans, two C. dubliniensis and one C. krusei isolate were processed. Applying modified thresholds for species identification (score ≥ 1.5 with two identical consecutive propositions), 62.5% of BCs were identified by direct MALDI-TOF MS. AFST results were generated for 72.7% of BCs directly tested by Vitek 2 and for 100% of standardized suspensions from 24 h cultures. Thus, AFST comparison was possible for 70 isolate-antifungal combinations. Essential agreement (minimum inhibitory concentration difference ≤ 1 double dilution step) was 88.6%. Very major errors (VMEs) (false-susceptibility), major errors (false-resistance) and minor errors (false categorization involving intermediate result) amounted to 33.3% (of resistant isolates), 1.9% (of susceptible isolates) and 1.4% providing 90.0% categorical agreement. All VMEs were due to fluconazole or voriconazole. This direct method saved on average 23.5 h for identification and 15.1 h for AFST, compared to routine procedures. However, performance for azole susceptibility testing was suboptimal and testing from subculture remains indispensable to validate the direct finding.
Development of Biological Acoustic Impedance Microscope and its Error Estimation
NASA Astrophysics Data System (ADS)
Hozumi, Naohiro; Nakano, Aiko; Terauchi, Satoshi; Nagao, Masayuki; Yoshida, Sachiko; Kobayashi, Kazuto; Yamamoto, Seiji; Saijo, Yoshifumi
This report deals with a scanning acoustic microscope for imaging the cross-sectional acoustic impedance of biological soft tissues. A focused acoustic beam is transmitted to the tissue object mounted on the "rear surface" of a plastic substrate. A rat cerebellum tissue and a reference material were observed at the same time under the same conditions. As the incidence is not vertical, not only a longitudinal wave but also a transversal wave is generated in the substrate. The error in acoustic impedance under the assumption of vertical incidence was estimated. It was proved that this error can be compensated precisely, provided the beam pattern and the acoustic parameters of the coupling medium and substrate are known.
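For context, the vertical-incidence estimate that the compensation corrects can be sketched from the standard normal-incidence reflection relation R = (Z - Zs)/(Z + Zs), calibrated against a reference material. The setup and numbers below are hypothetical; the paper's point is precisely that oblique incidence makes this baseline inexact:

```python
def impedance_from_echo(s_target, s_ref, z_sub, z_ref):
    """Tissue acoustic impedance from echo amplitudes under the
    vertical-incidence assumption: the known reference interface
    calibrates the measured signal, and the normal-incidence relation
    R = (Z - Zs) / (Z + Zs) is inverted for the target impedance Z."""
    r_ref = (z_ref - z_sub) / (z_ref + z_sub)   # reference reflectance
    r_t = (s_target / s_ref) * r_ref            # calibrated target reflectance
    return z_sub * (1.0 + r_t) / (1.0 - r_t)

# Hypothetical numbers (MRayl): plastic substrate 3.2, water reference 1.5.
z = impedance_from_echo(s_target=0.8463, s_ref=1.0, z_sub=3.2, z_ref=1.5)
```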
Prediction of human errors by maladaptive changes in event-related brain networks.
Eichele, Tom; Debener, Stefan; Calhoun, Vince D; Specht, Karsten; Engel, Andreas K; Hugdahl, Kenneth; von Cramon, D Yves; Ullsperger, Markus
2008-04-22
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve approximately 30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations.
Prediction of human errors by maladaptive changes in event-related brain networks
Eichele, Tom; Debener, Stefan; Calhoun, Vince D.; Specht, Karsten; Engel, Andreas K.; Hugdahl, Kenneth; von Cramon, D. Yves; Ullsperger, Markus
2008-01-01
Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we studied error-preceding brain activity on a trial-by-trial basis. We found a set of brain regions in which the temporal evolution of activation predicted performance errors. These maladaptive brain activity changes started to evolve ≈30 sec before the error. In particular, a coincident decrease of deactivation in default mode regions of the brain, together with a decline of activation in regions associated with maintaining task effort, raised the probability of future errors. Our findings provide insights into the brain network dynamics preceding human performance errors and suggest that monitoring of the identified precursor states may help in avoiding human errors in critical real-world situations. PMID:18427123
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
This databank is the collation of radiation test data submitted by many testers and serves as a reference for engineers who are concerned with and have some knowledge of the effects of the natural radiation environment on microcircuits. It contains radiation sensitivity results from ground tests and is divided into two sections. Section A lists total dose damage information, and Section B lists single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup).
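Where the databank reports a single event upset cross section, the expected in-service upset rate follows from multiplying that cross section by the particle flux and the number of bits. A minimal sketch with purely illustrative numbers (the function name and all values are assumptions, not databank entries):

```python
def upsets_per_day(cross_section_cm2_per_bit, flux_per_cm2_per_s, n_bits):
    """Expected single-event upsets per day: sigma * phi * bits * 86400 s."""
    return cross_section_cm2_per_bit * flux_per_cm2_per_s * n_bits * 86400.0

# Hypothetical 1-Mbit memory with a per-bit cross section of 1e-8 cm^2
# in a flux of 1e-3 particles/cm^2/s (both values are assumptions):
rate = upsets_per_day(1e-8, 1e-3, 2**20)
print(f"{rate:.3f} upsets/day")  # about 0.9 upsets/day
```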
Patient-specific polyetheretherketone facial implants in a computer-aided planning workflow.
Guevara-Rojas, Godoberto; Figl, Michael; Schicho, Kurt; Seemann, Rudolf; Traxler, Hannes; Vacariu, Apostolos; Carbon, Claus-Christian; Ewers, Rolf; Watzinger, Franz
2014-09-01
In the present study, we report an innovative workflow using polyetheretherketone (PEEK) patient-specific implants for esthetic corrections in the facial region through onlay grafting. The planning includes implant design according to virtual osteotomy and generation of a subtraction volume. The implant design was refined by stepwise changing the implant geometry according to soft tissue simulations. One patient was scanned using computed tomography. PEEK implants were interactively designed and manufactured using rapid prototyping techniques. Positioning intraoperatively was assisted by computer-aided navigation. Two months after surgery, a 3-dimensional surface model of the patient's face was generated using photogrammetry. Finally, the Hausdorff distance calculation was used to quantify the overall error, encompassing the failures in soft tissue simulation and implantation. The implant positioning process during surgery was satisfactory. The simulated soft tissue surface and the photogrammetry scan of the patient showed a high correspondence, especially where the skin covered the implants. The mean total error (Hausdorff distance) was 0.81 ± 1.00 mm (median 0.48, interquartile range 1.11). The spatial deviation remained less than 0.7 mm for the vast majority of points. The proposed workflow provides a complete computer-aided design, computer-aided manufacturing, and computer-aided surgery chain for implant design, allowing for soft tissue simulation, fabrication of patient-specific implants, and image-guided surgery to position the implants. Much of the surgical complexity resulting from osteotomies of the zygoma, chin, or mandibular angle might be transferred into the planning phase of patient-specific implants. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Zhang, Yang; De Stefano, Rosalba; Robine, Marie; Butelli, Eugenio; Bulling, Katharina; Hill, Lionel; Rejzek, Martin; Martin, Cathie; Schoonbeek, Henk-jan
2015-11-01
The shelf life of tomato (Solanum lycopersicum) fruit is determined by the processes of overripening and susceptibility to pathogens. Postharvest shelf life is one of the most important traits for commercially grown tomatoes. We compared the shelf life of tomato fruit that accumulate different flavonoids and found that delayed overripening is associated with increased total antioxidant capacity caused by the accumulation of flavonoids in the fruit. However, reduced susceptibility to Botrytis cinerea, a major postharvest fungal pathogen of tomato, is conferred by specific flavonoids only. We demonstrate an association between flavonoid structure, selective scavenging ability for different free radicals, and reduced susceptibility to B. cinerea. Our study provides mechanistic insight into how flavonoids influence the shelf life, information that could be used to improve the shelf life of tomato and, potentially, other soft fruit. © 2015 American Society of Plant Biologists. All Rights Reserved.
Moggia, Claudia; Graell, Jordi; Lara, Isabel; González, Guillermina; Lobos, Gustavo A.
2017-01-01
Fresh blueberries are very susceptible to mechanical damage, which limits postharvest life and firmness. Softening and susceptibility of cultivars “Duke” and “Brigitta” to developing internal browning (IB) after mechanical impact and subsequent storage was evaluated during a 2-year study (2011/2012, 2012/2013). On each season fruit were carefully hand-picked, segregated into soft (<1.60 N), medium (1.61–1.80 N), and firm (1.81–2.00 N) categories, and then either were dropped (32 cm) onto a hard plastic surface or remained non-dropped. All fruit were kept under refrigerated storage (0°C and 85–88% relative humidity) to assess firmness loss and IB after 7, 14, 21, 28, and 35 days. In general, regardless of cultivar or season, high variability in fruit firmness was observed within each commercial harvest, and significant differences in IB and softening rates were found. “Duke” exhibited high softening rates, as well as high and significant r2 between firmness and IB, but little differences for dropped vs. non-dropped fruit. “Brigitta,” having lesser firmness rates, exhibited almost no relationships between firmness and IB (especially for non-dropped fruit), but marked differences between dropping treatments. Firmness loss and IB development were related to firmness at harvest, soft and firm fruit being the most and least damaged, respectively. Soft fruit were characterized by greater IB development during storage along with high soluble solids/acid ratio, which could be used together with firmness to estimate harvest date and storage potential of fruit. Results of this work suggest that the differences in fruit quality traits at harvest could be related to the time that fruit stay on the plant after turning blue, soft fruit being more advanced in maturity. Finally, the observed differences between segregated categories reinforce the importance of analyzing fruit condition for each sorted group separately. PMID:28443123
CHROMagar Candida Medium for Direct Susceptibility Testing of Yeast from Blood Cultures
Tan, Grace L.; Peterson, Ellena M.
2005-01-01
An evaluation was performed on 95 blood cultures positive for Candida spp. to determine the correlation of direct susceptibility testing of fluconazole versus both standardized disk diffusion and MIC methods. For direct testing, an aliquot taken from BD BACTEC Plus and/or BD BACTEC Lytic/10 bottles (Becton Dickinson [BD], Sparks, MD) positive by Gram stain for yeast was subcultured to CHROMagar Candida (BD), and a 25-μg fluconazole disk (BD) was placed on the plate. The area of growth inhibition surrounding the disk was measured at 24 and 48 h. In addition, a subculture of the isolate was tested by a microdilution MIC using YeastOne (TREK Diagnostics Systems Inc., OH) and disk diffusion (NCCLS M44-A) using a standardized inoculum plated onto CHROMagar Candida as well as Mueller-Hinton agar to which 2% glucose and 0.5 μg/ml methylene blue dye were added (MH-GMB). The categorical interpretation derived from the MIC was used as the reference to which the disk diffusion results were compared. There were a total of 41 Candida albicans, 23 Candida glabrata, 20 Candida parapsilosis, 9 Candida tropicalis, and 1 each of Candida krusei and Candida lusitaniae tested. At 24 h there was full agreement among the methods for all C. albicans, C. tropicalis, C. lusitaniae, and C. krusei isolates. For the C. parapsilosis isolates at 24 h there was one very major discrepancy using the direct CHROMagar and one major error with the standardized MH-GMB. The majority of the errors were seen at 24 h with the C. glabrata isolates. Of the 23 C. glabrata isolates at 24 h by direct CHROMagar, there were 10 minor and 1 very major error; by MH-GMB there were 12 minor and 2 very major errors; and by standardized CHROMagar Candida there were 13 minor and 2 major errors. There were no very major errors with C. glabrata when all plates were read at 48 h. At 24 h by the direct and standardized CHROMagar the majority of C. glabrata isolates were more resistant, whereas by MH-GMB they were more susceptible than the reference MIC interpretation. In summary, subculturing yeast directly from blood cultures onto CHROMagar to which a fluconazole disk has been added may provide a presumptive identification at 24 h and, with the exception of C. glabrata, was able to predict the susceptibility to fluconazole with the majority of Candida isolates examined in this evaluation. PMID:15814992
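The categorical comparison described above (very major, major, and minor errors between a test method and the reference MIC interpretation) can be sketched as follows; the zone-diameter breakpoints used here are illustrative assumptions, not the exact criteria applied in the study:

```python
def categorize(zone_mm, s_min=19, r_max=14):
    """Map an inhibition-zone diameter (mm) to a category.

    Breakpoints are illustrative assumptions for a fluconazole disk.
    """
    if zone_mm >= s_min:
        return "S"
    if zone_mm <= r_max:
        return "R"
    return "SDD"  # susceptible, dose-dependent (intermediate)

def error_class(reference, test):
    """Classify a discrepancy against the reference MIC category."""
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"   # false susceptible
    if reference == "S" and test == "R":
        return "major"        # false resistant
    return "minor"            # any discrepancy involving SDD

print(error_class(categorize(25), categorize(12)))  # reference S, test R -> "major"
```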
2012-01-01
Background Although proton radiotherapy is a promising new approach for cancer patients, functional interference is a concern for patients with implantable cardioverter defibrillators (ICDs). The purpose of this study was to clarify the influence of secondary neutrons induced by proton radiotherapy on ICDs. Methods The experimental set-up simulated proton radiotherapy for a patient with an ICD. Four new ICDs were placed 0.3 cm laterally and 3 cm distally outside the radiation field in order to evaluate the influence of secondary neutrons. The cumulative in-field radiation dose was 107 Gy over 10 sessions of irradiation with a dose rate of 2 Gy/min and a field size of 10 × 10 cm². After each radiation fraction, interference with the ICD by the therapy was analyzed by an ICD programmer. The dose distributions of secondary neutrons were estimated by Monte Carlo simulation. Results The frequency of the power-on reset, the most serious soft error where the programmed pacing mode changes temporarily to a safety back-up mode, was 1 per approximately 50 Gy. The total number of soft errors logged in all devices was 29, which was a rate of 1 soft error per approximately 15 Gy. No permanent device malfunctions were detected. The calculated dose of secondary neutrons per 1 Gy proton dose in the phantom was approximately 1.3-8.9 mSv/Gy. Conclusions With the present experimental settings, the probability was approximately 1 power-on reset per 50 Gy, which was below the dose level (60-80 Gy) generally used in proton radiotherapy. Further quantitative analysis in various settings is needed to establish guidelines regarding proton radiotherapy for cancer patients with ICDs. PMID:22284700
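The per-device arithmetic behind the quoted soft-error rate can be reproduced directly from the abstract's own figures (29 soft errors logged across the four ICDs over a cumulative 107 Gy):

```python
def gy_per_soft_error(total_dose_gy, total_errors, n_devices):
    """Dose delivered per soft error, for a single device."""
    return total_dose_gy / (total_errors / n_devices)

# 29 errors over 4 devices and 107 Gy: about 1 soft error per 15 Gy per device.
print(round(gy_per_soft_error(107, 29, 4), 1))  # -> 14.8
```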
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-11-01
Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP) with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH) for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations with different climates in Iran were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and determination coefficient (R2) were employed to evaluate the performance of conventional/single MARS, BN and GEP, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than single MARS, BN and GEP models. Overall, MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of proposed methodology for precise modeling of precipitation.
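The four evaluation metrics named above (RMSE, RRMSE, MAE, R²) can be sketched as follows, using short illustrative observed/predicted series rather than the study's precipitation data:

```python
import math

def metrics(obs, pred):
    """Return RMSE, relative RMSE, MAE and the determination coefficient."""
    n = len(obs)
    errors = [p - o for o, p in zip(obs, pred)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_obs = sum(obs) / n
    rrmse = rmse / mean_obs                       # RMSE relative to the mean
    mae = sum(abs(e) for e in errors) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, rrmse, mae, r2

# Illustrative data only:
obs = [10.0, 20.0, 30.0, 40.0]
pred = [12.0, 18.0, 33.0, 39.0]
rmse, rrmse, mae, r2 = metrics(obs, pred)
```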
Idelevich, Evgeny A; Becker, Karsten; Schmitz, Janne; Knaack, Dennis; Peters, Georg; Köck, Robin
2016-01-01
Results of disk diffusion antimicrobial susceptibility testing depend on individual visual reading of inhibition zone diameters. Therefore, automated reading using camera systems might represent a useful tool for standardization. In this study, the ADAGIO automated system (Bio-Rad) was evaluated for reading disk diffusion tests of fastidious bacteria. 144 clinical isolates (68 β-haemolytic streptococci, 28 Streptococcus pneumoniae, 18 viridans group streptococci, 13 Haemophilus influenzae, 7 Moraxella catarrhalis, and 10 Campylobacter jejuni) were tested on Mueller-Hinton agar supplemented with 5% defibrinated horse blood and 20 mg/L β-NAD (MH-F, Oxoid) according to EUCAST. Plates were read manually with a ruler and automatically using the ADAGIO system. Inhibition zone diameters, indicated by the automated system, were visually controlled and adjusted, if necessary. Among 1548 isolate-antibiotic combinations, comparison of automated vs. manual reading yielded categorical agreement (CA) without visual adjustment of the automatically determined zone diameters in 81.4%. In 20% (309 of 1548) of tests it was deemed necessary to adjust the automatically determined zone diameter after visual control. After adjustment, CA was 94.8%; very major errors (false susceptible interpretation), major errors (false resistant interpretation) and minor errors (false categorization involving intermediate result), calculated according to the ISO 20776-2 guideline, accounted for 13.7% (13 of 95 resistant results), 3.3% (47 of 1424 susceptible results) and 1.4% (21 of 1548 total results), respectively, compared to manual reading. The ADAGIO system allowed for automated reading of disk diffusion testing in fastidious bacteria and, after visual validation of the automated results, yielded good categorical agreement with manual reading.
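Note that the ISO 20776-2 rates quoted above use a different denominator for each error class: very major errors are divided by the resistant reference results, major errors by the susceptible reference results, and minor errors by all results. Reproducing the abstract's own figures:

```python
def rate(errors, denominator):
    """Error rate as a percentage."""
    return 100.0 * errors / denominator

vme = rate(13, 95)      # very major: false susceptible / resistant results
me = rate(47, 1424)     # major: false resistant / susceptible results
minor = rate(21, 1548)  # minor: involving intermediate / all results
print(f"{vme:.1f}% {me:.1f}% {minor:.1f}%")  # -> 13.7% 3.3% 1.4%
```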
Idelevich, Evgeny A.; Schüle, Isabel; Grünastel, Barbara; Wüllenweber, Jörg; Peters, Georg
2014-01-01
Briefly incubated agar cultures from positive blood cultures were used for antimicrobial susceptibility testing (AST) by Vitek 2. The cultivation time until inoculation was 3.8 h for Gram-positive cocci and 2.4 h for Gram-negative rods. The error rates were low, providing early and reliable AST without additional time or cost expenditure. PMID:25165084
Haptic communication between humans is tuned by the hard or soft mechanics of interaction
Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne
2018-01-01
To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966
Lu, Min-Hua; Mao, Rui; Lu, Yin; Liu, Zheng; Wang, Tian-Fu; Chen, Si-Ping
2012-01-01
Indentation testing is a widely used approach to evaluate mechanical characteristics of soft tissues quantitatively. Young's modulus of soft tissue can be calculated from the force-deformation data with known tissue thickness and Poisson's ratio using Hayes' equation. Our group previously developed a noncontact indentation system using a water jet as a soft indenter as well as the coupling medium for the propagation of high-frequency ultrasound. The novel system has shown its ability to detect the early degeneration of articular cartilage. However, there is still lack of a quantitative method to extract the intrinsic mechanical properties of soft tissue from water jet indentation. The purpose of this study is to investigate the relationship between the loading-unloading curves and the mechanical properties of soft tissues to provide an imaging technique of tissue mechanical properties. A 3D finite element model of water jet indentation was developed with consideration of finite deformation effect. An improved Hayes' equation has been derived by introducing a new scaling factor which is dependent on Poisson's ratios v, aspect ratio a/h (the radius of the indenter/the thickness of the test tissue), and deformation ratio d/h. With this model, the Young's modulus of soft tissue can be quantitatively evaluated and imaged with the error no more than 2%. PMID:22927890
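Hayes' equation, as referenced above, recovers Young's modulus from the indentation force, indenter radius, deformation, Poisson's ratio, and a geometry-dependent scaling factor κ(a/h, ν). A minimal sketch with illustrative values; κ is tabulated as a function of aspect ratio and Poisson's ratio, and the value below is an assumption, not a tabulated entry:

```python
def youngs_modulus(P, a, delta, v, kappa):
    """Hayes' equation: E = P * (1 - v**2) / (2 * a * kappa * delta).

    P: indentation force (N), a: indenter radius (m), delta: deformation (m),
    v: Poisson's ratio, kappa: scaling factor dependent on a/h and v.
    """
    return P * (1.0 - v ** 2) / (2.0 * a * kappa * delta)

# Hypothetical indentation: 0.1 N force, 1 mm indenter radius, 0.2 mm
# deformation, nearly incompressible tissue (v = 0.5), assumed kappa = 1.5.
E = youngs_modulus(P=0.1, a=1e-3, delta=0.2e-3, v=0.5, kappa=1.5)
print(f"E = {E / 1000:.1f} kPa")  # -> E = 125.0 kPa
```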
Neurological soft signs in children with attention deficit hyperactivity disorder.
Patankar, V C; Sangle, J P; Shah, Henal R; Dave, M; Kamath, R M
2012-04-01
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder with wide repercussions. Since it is etiologically related to delayed maturation, neurological soft signs (NSS) could be a tool to assess this delay. Furthermore, correlating NSS with the severity and type of ADHD and with the presence of Specific Learning Disability (SLD) would give further insight into the disorder. To study neurological soft signs and risk factors (type, mode of delivery, and milestones) in children with ADHD and to correlate NSS with type and severity of ADHD and with co-morbid Specific Learning Disability. The study was carried out in the child care services of a tertiary teaching urban hospital. It was a cross-sectional single-interview study. 52 consecutive children diagnosed as having ADHD were assessed for the presence of neurological soft signs using the Revised Physical and Neurological Examination Soft Signs scale (PANESS). The ADHD was rated by parents using the ADHD parent rating scale. The data were analyzed using the chi-squared test and Pearson's correlational analysis. Neurological soft signs were present in 84% of the children. They were equally present in both the inattentive-hyperactive and impulsive-hyperactive types of ADHD. The presence of neurological soft signs in ADHD is independent of the presence of co-morbid SLD. Dysrhythmias and overflow with gait were typically seen in the impulsive-hyperactive type, and higher severity of ADHD was related to more errors.
Weiss, K; Laverdière, M; Rivest, R
1996-01-01
Corynebacterium species are increasingly being implicated in foreign-body infections and in immunocompromised-host infections. However, there are no specific recommendations on the method or the criteria to use in order to determine the in vitro activities of the antibiotics commonly used to treat Corynebacterium infections. The first aim of our study was to compare the susceptibilities of various species of Corynebacterium to vancomycin, erythromycin, and penicillin by using a broth microdilution method and a disk diffusion method. Second, the activity of penicillin against our isolates was assessed by using the interpretative criteria recommended by the National Committee for Clinical Laboratory Standards for the determination of the susceptibility of streptococci and Listeria monocytogenes to penicillin. Overall, 100% of the isolates were susceptible to vancomycin, while considerable variations in the activities of erythromycin and penicillin were noted for the different species tested, including the non-Corynebacterium jeikeium species. A good correlation in the susceptibilities of vancomycin and erythromycin between the disk diffusion and the microdilution methods was observed. However, a 5% rate of major or very major errors was detected with the Listeria criteria, while a high rate of minor errors (18%) was noted when the streptococcus criteria were used. Our findings indicate considerable variations in the activities of erythromycin and penicillin against the various species of Corynebacterium. Because of the absence of definite recommendations, important discrepancies were observed between the methods and the interpretations of the penicillin activity. PMID:8849254
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
Protocol processing for 100 Gbit/s wireless communication stresses every part of a communication system to its limits. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires rethinking the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra-high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with very low protocol processing overhead.
Novel intelligent real-time position tracking system using FPGA and fuzzy logic.
Soares dos Santos, Marco P; Ferreira, J A F
2014-03-01
The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in both architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx® Virtex-II FPGA (XC2v1000) and in a soft real-time platform, NI CompactRIO®-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and accelerated radiation testing system for a signal processing platform based on the field programmable gate array (FPGA) is presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533
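The two quantities reported, SFER and MTTF, are related through the particle flux. A minimal sketch with illustrative numbers (the error count, fluence, and flux below are assumptions, not the experiment's raw data):

```python
def sfer(n_errors, fluence_per_cm2):
    """System functional error rate: errors per particle per cm^2."""
    return n_errors / fluence_per_cm2

def mttf_hours(sfer_value, flux_per_cm2_per_h):
    """Mean time to failure: 1 / (SFER * flux)."""
    return 1.0 / (sfer_value * flux_per_cm2_per_h)

# Hypothetical accelerated test: 50 functional errors at a fluence of
# 5e4 particles/cm^2, then projected onto an assumed in-service flux.
s = sfer(n_errors=50, fluence_per_cm2=5e4)   # -> 1e-3 errors/particle/cm^2
m = mttf_hours(s, flux_per_cm2_per_h=9.0)    # -> about 111 h
```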
NASA Astrophysics Data System (ADS)
Bender, P.; Tschöpe, A.; Birringer, R.
2014-12-01
The local mechanical coupling of ferromagnetic nanorods in hydrogels was characterized by magnetization measurements. Nickel nanorods were synthesized by the AAO-template method and embedded in gelatine hydrogels with mechanically soft or hard matrix properties determined by the gelatine weight fraction. By applying a homogeneous magnetic field during gelation the nanorods were aligned along the field resulting in uniaxially textured ferrogels. The magnetization curves of the soft ferrogel exhibited not only important similarities but also characteristic differences as compared to the hard ferrogel. The hystereses measured in a field parallel to the texture axis were almost identical for both samples indicating effective coupling of the nanorods with the polymer network. By contrast, measurements in a magnetic field perpendicular to the texture axis revealed a much higher initial susceptibility of the soft as compared to the hard ferrogel. This difference was attributed to the additional rotation of the nanorods allowed by the reduced shear modulus in the soft ferrogel matrix. Two methods for data analysis were presented which enabled us to determine the shear modulus of the gelatine matrix which was interpreted as a local rather than macroscopic quantity in consideration of the nanoscale of the probe particles.
NASA Astrophysics Data System (ADS)
Álvarez, A.; Orfila, A.; Tintoré, J.
2004-03-01
Satellites are the only systems able to provide continuous information on the spatiotemporal variability of vast areas of the ocean. Relatively long-term time series of satellite data are nowadays available. These spatiotemporal time series of satellite observations can be employed to build empirical models, called satellite-based ocean forecasting (SOFT) systems, to forecast certain aspects of future ocean states. SOFT systems can predict satellite-observed fields at different timescales. The forecast skill of SOFT systems forecasting the sea surface temperature (SST) at monthly timescales has been extensively explored in previous works. In this work we study the performance of two SOFT systems forecasting, respectively, the SST and sea level anomaly (SLA) at weekly timescales, that is, providing forecasts of the weekly averaged SST and SLA fields with 1 week in advance. The SOFT systems were implemented in the Ligurian Sea (Western Mediterranean Sea). Predictions from the SOFT systems are compared with observations and with the predictions obtained from persistence models. Results indicate that the SOFT system forecasting the SST field is always superior in terms of predictability to persistence. Minimum prediction errors in the SST are obtained during winter and spring seasons. On the other hand, the biggest differences between the performance of SOFT and persistence models are found during summer and autumn. These changes in the predictability are explained on the basis of the particular variability of the SST field in the Ligurian Sea. Concerning the SLA field, no improvements with respect to persistence have been found for the SOFT system forecasting the SLA field.
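The persistence baseline against which the SOFT forecasts are judged simply carries the current field forward one week; the forecast system shows skill when its error beats this baseline. A minimal sketch with illustrative data (the series below is invented, not Ligurian Sea observations):

```python
import math

def persistence_forecast(series):
    """Predict x[t+1] = x[t]; forecasts align with series[1:]."""
    return series[:-1]

def rmse(obs, pred):
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Illustrative weekly SST anomalies (degrees C):
sst = [0.2, 0.5, 0.9, 0.7, 0.3]
baseline = rmse(sst[1:], persistence_forecast(sst))
# A SOFT system is "superior to persistence" when its RMSE is below this.
```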
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, maximum likelihood decoding (MLD) minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability; therefore, MLD is suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding, multistage decoding, and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
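For a small block code, the bitwise MAP rule the chapter develops can be carried out by brute force: weight every codeword by its channel likelihood and marginalize each bit position, which yields both the soft output (a log-likelihood ratio per bit) and hard decisions that minimize the bit error probability. The sketch below uses a hypothetical (3,1) repetition-code example over a BPSK/AWGN channel, not the chapter's trellis-based formulation:

```python
import numpy as np

def map_bit_llrs(y, codewords, sigma=1.0):
    """Brute-force bitwise MAP decoding for a small block code.
    Returns the log-likelihood ratio (soft output) of each code bit."""
    y = np.asarray(y, float)
    # Channel likelihood p(y|c) up to a constant, BPSK mapping 0 -> +1, 1 -> -1
    metrics = []
    for c in codewords:
        x = 1.0 - 2.0 * np.asarray(c, float)
        metrics.append(np.exp(-np.sum((y - x) ** 2) / (2 * sigma ** 2)))
    llrs = []
    for i in range(len(codewords[0])):
        p0 = sum(m for m, c in zip(metrics, codewords) if c[i] == 0)
        p1 = sum(m for m, c in zip(metrics, codewords) if c[i] == 1)
        llrs.append(np.log(p0 / p1))  # > 0 means bit i is more likely 0
    return np.array(llrs)

# (3,1) repetition code: codewords 000 and 111
codewords = [(0, 0, 0), (1, 1, 1)]
y = [0.9, -0.2, 0.7]            # noisy BPSK observations
llr = map_bit_llrs(y, codewords)
hard = (llr < 0).astype(int)    # bitwise MAP hard decisions
```

The soft values in `llr` are exactly the kind of reliability information a concatenated or iterative scheme would consume downstream.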
A Randomized Trial of Soft Multifocal Contact Lenses for Myopia Control: Baseline Data and Methods.
Walline, Jeffrey J; Gaume Giannoni, Amber; Sinnott, Loraine T; Chandler, Moriah A; Huang, Juan; Mutti, Donald O; Jones-Jordan, Lisa A; Berntsen, David A
2017-09-01
The Bifocal Lenses In Nearsighted Kids (BLINK) study is the first soft multifocal contact lens myopia control study to compare add powers and measure peripheral refractive error in the vertical meridian, so it will provide important information about the potential mechanism of myopia control. The BLINK study is a National Eye Institute-sponsored, double-masked, randomized clinical trial to investigate the effects of soft multifocal contact lenses on myopia progression. This article describes the subjects' baseline characteristics and study methods. Subjects were 7 to 11 years old, had a -0.75 to -5.00 diopter (D) spherical component and less than 1.00 D of astigmatism, and had 20/25 or better logMAR distance visual acuity with manifest refraction in each eye and with +2.50-D add soft bifocal contact lenses on both eyes. Children were randomly assigned to wear Biofinity single-vision, Biofinity Multifocal "D" with a +1.50-D add power, or Biofinity Multifocal "D" with a +2.50-D add power contact lenses. We examined 443 subjects at the baseline visits, and 294 (66.4%) subjects were enrolled. Of the enrolled subjects, 177 (60.2%) were female, and 200 (68%) were white. The mean (± SD) age was 10.3 ± 1.2 years, and 117 (39.8%) of the eligible subjects were younger than 10 years. The mean spherical equivalent refractive error, measured by cycloplegic autorefraction, was -2.39 ± 1.00 D. The best-corrected binocular logMAR visual acuity with glasses was +0.01 ± 0.06 (20/21) at distance and -0.03 ± 0.08 (20/18) at near. The BLINK study subjects are similar to patients who would routinely be eligible for myopia control in practice, so the results will provide clinical information about soft bifocal contact lens myopia control as well as information about the mechanism of the treatment effect, if one occurs.
Pinning Susceptibility at the Jamming Transition
NASA Astrophysics Data System (ADS)
Graves, Amy; Padgett, Elliot; Goodrich, Carl; Liu, Andrea
2013-03-01
Jamming in the presence of fixed or pinned obstacles, representing quenched disorder, is a situation of both practical and theoretical interest. We study the jamming of soft, bidisperse discs in which a subset of discs are pinned while the remaining particles equilibrate around them at a given volume fraction. The obstacles provide a supporting structure for the jammed configuration which not only lowers the jamming threshold, ϕJ, but also affects the coordination number and other parameters of interest as the critical point is approached. In the limit of low obstacle density, one can calculate a pinning susceptibility χP, analogous to the magnetic susceptibility, with obstacle density playing the role of the magnetic field. The pinning susceptibility is thus expected to diverge in the thermodynamic limit as χP ~ |ϕ - ϕJ|^(-γP). Finite-size scaling calculations allow us to confirm this and to calculate the critical exponent, γP. Acknowledgement is made to the Donors of the Petroleum Research Fund administered by the American Chemical Society, Swarthmore College's Eugene M. Lang Faculty Fellowship, NSF grant DMR-1062638, and DOE grant DE-FG02-05ER46199.
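A critical exponent such as γP can be extracted from susceptibility data by a log-log fit, since χP ~ |ϕ - ϕJ|^(-γP) is a straight line in log-log coordinates with slope -γP. A minimal sketch with synthetic data (the values of ϕJ, γP, and the amplitude are illustrative, not the paper's results):

```python
import numpy as np

# Synthetic pinning-susceptibility data obeying chi_P = A * |phi - phi_J|^(-gamma_P)
phi_J, gamma_P, A = 0.84, 1.3, 0.05
dphi = np.logspace(-4, -1, 20)          # distance from the jamming point
chi = A * dphi ** (-gamma_P)

# On a log-log plot the exponent is the negative of the slope
slope, intercept = np.polyfit(np.log(dphi), np.log(chi), 1)
gamma_est = -slope
```

In practice the fit window matters: finite-size effects cut off the divergence close to ϕJ, which is why the paper resorts to finite-size scaling rather than a raw fit.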
Borman, Andrew M.; Szekely, Adrien; Palmer, Michael D.; Johnson, Elizabeth M.
2012-01-01
Rapid, accurate identification of yeast isolates from clinical samples has always been important given their innately variable antifungal susceptibility profiles. Recently, this has become paramount with the proposed introduction of species-specific interpretive breakpoints for MICs obtained in yeast antifungal susceptibility tests (M. A. Pfaller, D. Andes, D. J. Diekema, A. Espinel–Ingroff, D. Sheehan, and CLSI Subcommittee for Antifungal Susceptibility Testing, Drug Resist. Updat. 13:180–195, 2010). Here, we present the results of a 12-month evaluation of the accuracy of identifications that accompany yeast isolates submitted to the Mycology Reference Laboratory (United Kingdom) for either confirmation of identity or susceptibility testing. In total, 1,781 yeast isolates were analyzed, and the robustness of prior identifications obtained in microbiology laboratories throughout the United Kingdom was assessed using a combination of culture on chromogenic agar, morphology on cornmeal agar, and molecular identification by pyrosequencing. Over 40% of isolates (755) were submitted without any suggested identification. Of those isolates with a prior identification, 100 (9.7%) were incorrectly identified. Error rates ranged from 5.2% (for organisms submitted for antifungal susceptibility testing) to 18.2% (for organisms requiring confirmation of identity) and varied in a strictly species-specific manner. At least 50% of identification errors would be likely to affect interpretation of MIC data, with a possible impact on patient management. In addition, 2.3% of submitted cultures were found to contain mixtures of at least two yeast species. The vast majority of mixtures had gone undetected in the referring laboratory and would have impacted the interpretation of antifungal susceptibility profiles and patient management. 
Some of the more common misidentifications are discussed according to the identification method employed, with suggestions for avoiding such misinterpretations. PMID:22649009
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction, followed by full correction and then by over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles was found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of the myopia progression reported with different types and levels of myopia correction.
NASA Astrophysics Data System (ADS)
Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.
2017-02-01
The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s, despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s. Images acquired with an assumed 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue. Since the radiation dose diminishes with distance from the pellet as 1/r^2, imaging uncertainty of a few millimeters can result in significant erroneous dose delivery. Likewise, modeling of power deposition and thermal dose accumulations from ablative sources is also prone to errors due to placement offsets from SOS errors. This work presents a method of mitigating needle placement error due to SOS variances without the need for ionizing radiation. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
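The size of the SOS-induced displacement is straightforward to estimate: the scanner converts echo time to depth using the assumed speed, so the apparent depth scales with the ratio of true to assumed SOS. A minimal sketch (the 1450 m/s tissue speed is an assumed value within the quoted in-vivo range, not a measurement from this work):

```python
def axial_depth_error_mm(apparent_depth_mm, c_assumed=1540.0, c_true=1450.0):
    """Axial placement error when an image is formed assuming c_assumed (m/s)
    but sound actually travels at c_true: depth scales with the SOS ratio."""
    true_depth = apparent_depth_mm * c_true / c_assumed
    return apparent_depth_mm - true_depth

# A target displayed at 50 mm depth in slower tissue is really ~3 mm shallower,
# comparable to the >3 mm displacements reported above
err = axial_depth_error_mm(50.0, c_true=1450.0)
```

With dose falling off as 1/r^2, an offset of this size around a brachytherapy pellet shifts a clinically relevant fraction of the delivered dose.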
Genetic susceptibility to bone and soft tissue sarcomas: a field synopsis and meta-analysis
Benna, Clara; Simioni, Andrea; Pasquali, Sandro; De Boni, Davide; Rajendran, Senthilkumar; Spiro, Giovanna; Colombo, Chiara; Virgone, Calogero; DuBois, Steven G.; Gronchi, Alessandro; Rossi, Carlo Riccardo; Mocellin, Simone
2018-01-01
Background: The genetic architecture of bone and soft tissue sarcoma susceptibility is yet to be elucidated. We aimed to comprehensively collect and meta-analyze the current knowledge on genetic susceptibility in these rare tumors. Methods: We conducted a systematic review and meta-analysis of the evidence on the association between DNA variation and the risk of developing sarcomas by searching the PubMed, The Cochrane Library, Scopus, and Web of Science databases. To evaluate result credibility, summary evidence was graded according to the Venice criteria, and the false positive report probability (FPRP) was calculated to further validate result noteworthiness. Integrative analysis of genetic and eQTL (expression quantitative trait locus) data was coupled with network and pathway analysis to explore the hypothesis that specific cell functions are involved in sarcoma predisposition. Results: We retrieved 90 eligible studies comprising 47,796 subjects (cases: 14,358, 30%) and investigating 1,126 polymorphisms involving 320 distinct genes. Meta-analysis identified 55 single nucleotide polymorphisms (SNPs) significantly associated with disease risk with a high (N=9), moderate (N=38), and low (N=8) level of evidence, findings being classified as noteworthy essentially only when the level of evidence was high. The estimated joint population attributable risk for three independent SNPs (rs11599754 of ZNF365/EGR2, rs231775 of CTLA4, and rs454006 of PRKCG) was 37.2%. We also identified 53 SNPs significantly associated with sarcoma risk based on single studies. Pathway analysis enabled us to propose that sarcoma predisposition might be linked especially to germline variation in genes whose products are involved in the function of the DNA repair machinery. Conclusions: We built the first knowledgebase on the evidence linking DNA variation to sarcoma susceptibility, which can be used to generate mechanistic hypotheses and inform future studies in this field of oncology. PMID:29719630
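A joint population attributable risk for independent risk factors is conventionally combined as 1 - Π(1 - PARi). A minimal sketch (the per-SNP PAR values below are hypothetical, chosen only to illustrate the formula; the abstract reports only the joint figure of 37.2% for its three SNPs):

```python
def joint_par(pars):
    """Joint population attributable risk for independent risk factors,
    combined as 1 - prod(1 - PAR_i)."""
    remaining = 1.0
    for p in pars:
        remaining *= (1.0 - p)
    return 1.0 - remaining

# Hypothetical per-SNP PARs; three modest individual contributions
# combine into a joint PAR in the range the study reports
combined = joint_par([0.15, 0.15, 0.12])
```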
The Single Event Effect Characteristics of the 486-DX4 Microprocessor
NASA Technical Reports Server (NTRS)
Kouba, Coy; Choi, Gwan
1996-01-01
This research describes the development of an experimental radiation testing environment to investigate the single event effect (SEE) susceptibility of the 486-DX4 microprocessor. SEE effects are caused by radiation particles that disrupt the logic state of an operating semiconductor, and include single event upsets (SEU) and single event latchup (SEL). The relevance of this work can be applied directly to digital devices that are used in spaceflight computer systems. The 486-DX4 is a powerful commercial microprocessor that is currently under consideration for use in several spaceflight systems. As part of its selection process, it must be rigorously tested to determine its overall reliability in the space environment, including its radiation susceptibility. The goal of this research is to experimentally test and characterize the single event effects of the 486-DX4 microprocessor using a cyclotron facility as the fault-injection source. The test philosophy is to focus on the "operational susceptibility," by executing real software and monitoring for errors while the device is under irradiation. This research encompasses both experimental and analytical techniques, and yields a characterization of the 486-DX4's behavior for different operating modes. Additionally, the test methodology can accommodate a wide range of digital devices, such as microprocessors, microcontrollers, ASICs, and memory modules, for future testing. The goals were achieved by testing with three heavy-ion species to provide different linear energy transfer rates, and a total of six microprocessor parts were tested from two different vendors. A consistent set of error modes was identified that indicates the manner in which the errors were detected in the processor. The upset cross-section curves were calculated for each error mode, and the SEU threshold and saturation levels were identified for each processor. 
Results show a distinct difference in the upset rate for different configurations of the on-chip cache, as well as proving that one vendor is superior to the other in terms of latchup susceptibility. Results from this testing were also used to provide a mean-time-between-failure estimate of the 486-DX4 operating in the radiation environment for the International Space Station.
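Upset cross-section curves with a threshold and a saturation level, as described above, are commonly parameterized by a Weibull form in heavy-ion SEE testing. A minimal sketch of that parameterization (all parameter values are illustrative, not measured 486-DX4 data):

```python
import math

def weibull_cross_section(let, sigma_sat, let_th, width, shape):
    """Weibull form commonly fitted to heavy-ion upset cross-section data:
    sigma(LET) = sigma_sat * (1 - exp(-((LET - LET_th)/W)^s)) above threshold,
    and zero at or below the LET threshold."""
    if let <= let_th:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-(((let - let_th) / width) ** shape)))

# Illustrative parameters: LET in MeV-cm^2/mg, cross-section in cm^2/device
sigma = weibull_cross_section(let=30.0, sigma_sat=1e-6, let_th=2.0, width=20.0, shape=1.5)
```

Fitting these four parameters per error mode is what yields the SEU threshold (LET_th) and saturation level (sigma_sat) quoted for each processor.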
NASA Astrophysics Data System (ADS)
Chertok, I. M.; Belov, A. V.
2018-03-01
Correction to: Solar Phys https://doi.org/10.1007/s11207-017-1169-1 We found an important error in the text of our article. On page 6, the second sentence of Section 3.2 "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± one Carrington rotation with a step of two rotations." should instead read "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± 2.5 Carrington rotations with a step of two rotations." We regret the inconvenience. The online version of the original article can be found at https://doi.org/10.1007/s11207-017-1169-1
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It has been known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it still suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noise involved in the system state and measurement data.
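The core of such a filtering formulation is the standard Kalman predict/update cycle, with the discretized deformation model serving as the state equation and measurements correcting the predicted nodal displacements. A minimal single-node sketch (the dynamics matrix and noise levels are illustrative assumptions, not the paper's Newmark-discretized model):

```python
import numpy as np

def kalman_step(x, P, A, Q, H, R, z):
    """One predict/update cycle: the deformation model propagates the state,
    and the measurement z corrects the predicted displacement."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy one-node "mesh": state is (displacement, velocity) of a single node
A = np.array([[1.0, 0.1], [0.0, 1.0]])        # illustrative discrete dynamics
H = np.array([[1.0, 0.0]])                    # only displacement is measured
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, A, Q, H, R, z=np.array([0.05]))
```

The estimate is pulled toward the measurement but not all the way, which is exactly how the filter suppresses measurement noise while tracking the FEM prediction.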
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are accomplished through the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves accuracy of the traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is determined by means of the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER=10-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between requirements on transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator
He, Bin; Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng
2018-01-01
A new type of soft actuator material—an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2—is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots. PMID:29853999
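For an incompressible uniaxial stretch λ, the strain invariants are I1 = λ² + 2/λ and I2 = 2λ + 1/λ², and the uniaxial stress follows from the partial derivatives of the Mooney-Rivlin energy with respect to the invariants. A sketch of that calculation for the five-parameter model (the coefficient values passed in are hypothetical placeholders, not the fitted ILG parameters):

```python
def mooney_rivlin5_uniaxial_stress(lam, c10, c01, c20, c11, c02):
    """Uniaxial engineering stress of an incompressible five-parameter
    Mooney-Rivlin solid at stretch ratio lam, via the invariant derivatives
    of W = sum c_ij (I1-3)^i (I2-3)^j."""
    I1 = lam ** 2 + 2.0 / lam
    I2 = 2.0 * lam + 1.0 / lam ** 2
    dW_dI1 = c10 + 2.0 * c20 * (I1 - 3.0) + c11 * (I2 - 3.0)
    dW_dI2 = c01 + c11 * (I1 - 3.0) + 2.0 * c02 * (I2 - 3.0)
    return 2.0 * (lam - lam ** -2) * (dW_dI1 + dW_dI2 / lam)

# Hypothetical coefficients (MPa); stress vanishes at lam = 1 as required
stress = mooney_rivlin5_uniaxial_stress(1.5, c10=0.02, c01=0.01, c20=0.0, c11=0.0, c02=0.0)
```

The nine-parameter model adds the cubic terms (c30, c21, c12, c03) to the two derivative expressions in the same pattern.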
Transient fault behavior in a microprocessor: A case study
NASA Technical Reports Server (NTRS)
Duba, Patrick
1989-01-01
An experimental analysis is described which studies the susceptibility of a microprocessor-based jet engine controller to upsets caused by current and voltage transients. A design automation environment which allows the run-time injection of transients and the tracing of their impact from the device level to the pin level is described. The resulting error data are categorized by the charge level of the injected transients, by location, and by their potential to cause logic upsets, latched errors, and pin errors. The results show a 3-picocoulomb threshold, below which the transients have little impact. An Arithmetic and Logic Unit transient is most likely to result in logic upsets and pin errors (i.e., to impact the external environment). The transients in the countdown unit are potentially serious since they can result in latched errors, thus causing latent faults. Suggestions to protect the processor against these errors, by incorporating internal error detection and transient suppression techniques, are also made.
Lionakis, M.S.; Hajishengallis, G.
2015-01-01
In recent years, the study of genetic defects arising from inborn errors in immunity has resulted in the discovery of new genes involved in the function of the immune system and in the elucidation of the roles of known genes whose importance was previously unappreciated. With the recent explosion in the field of genomics and the increasing number of genetic defects identified, the study of naturally occurring mutations has become a powerful tool for gaining mechanistic insight into the functions of the human immune system. In this concise perspective, we discuss emerging evidence that inborn errors in immunity constitute real-life models that are indispensable both for the in-depth understanding of human biology and for obtaining critical insights into common diseases, such as those affecting oral health. In the field of oral mucosal immunity, through the study of patients with select gene disruptions, the interleukin-17 (IL-17) pathway has emerged as a critical element in oral immune surveillance and susceptibility to inflammatory disease, with disruptions in the IL-17 axis now strongly linked to mucosal fungal susceptibility, whereas overactivation of the same pathways is linked to inflammatory periodontitis. PMID:25900229
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-06-01
The US General Accounting Office and executive agency Inspectors General have reported losses of millions of dollars in government funds resulting from fraud, waste and error. The Administration and the Congress have initiated determined efforts to eliminate such losses from government programs and activities. Primary emphasis in this effort is on the strengthening of accounting and administrative controls. Accordingly, the Office of Management and Budget (OMB) issued Circular No. A-123, Internal Control Systems, on October 28, 1981. The campaign to improve internal controls was endorsed by the Secretary of Energy in a memorandum to Heads of Departmental Components, dated March 13, 1981, Subject: Internal Control as a Deterrent to Fraud, Waste and Error. A vulnerability assessment is a review of the susceptibility of a program or function to unauthorized use of resources, errors in reports and information, and illegal or unethical acts. It is based on considerations of the environment in which the program or function is carried out, the inherent riskiness of the program or function, and a preliminary evaluation as to whether adequate safeguards exist and are functioning.
NASA Astrophysics Data System (ADS)
Long, Jeffrey K.
1989-09-01
This thesis developed computer models of two types of amplitude comparison monopulse processors using the Block Oriented System Simulation (BOSS) software package and determined the response of these models to impulsive input signals. This study was an effort to determine the susceptibility of monopulse tracking radars to impulsive jamming signals. Two types of amplitude comparison monopulse receivers were modeled, one using logarithmic amplifiers and the other using automatic gain control (AGC) for signal normalization. Simulations of both types of systems were run under various conditions of gain or frequency imbalance between the two receiver channels. The resulting errors from the imbalanced simulations were compared to the outputs of similar baseline simulations which had no electrical imbalances. The accuracy of both types of processors was directly affected by gain or frequency imbalances in their receiver channels. In most cases, it was possible to generate both positive and negative angular errors, depending on the type and degree of mismatch between the channels. The system most susceptible to induced errors was a frequency-imbalanced processor which used AGC circuitry. Any errors introduced will be a function of the degree of mismatch between the channels and would therefore be difficult to exploit reliably.
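The amplitude-comparison error signal at the heart of both modeled processors is the normalized difference of the two squinted-beam amplitudes, which is why a channel gain mismatch biases the indicated angle even for an on-axis target. A minimal sketch (the 10% mismatch is an illustrative value, not one of the thesis's simulated imbalances):

```python
def monopulse_error(a, b, k=1.0):
    """Normalized amplitude-comparison monopulse error signal.
    a and b are the two squinted-beam amplitudes; the sign of the
    result steers the antenna toward the stronger beam."""
    return k * (a - b) / (a + b)

# Balanced channels: an on-axis target yields zero error
on_axis = monopulse_error(1.0, 1.0)

# A 10% gain imbalance in one channel produces a false angular offset
imbalanced = monopulse_error(1.0 * 1.1, 1.0)
```

Because the error depends only on the amplitude ratio, any gain or frequency imbalance that alters one channel's amplitude maps directly into a pointing bias, positive or negative depending on which channel is favored.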
Ocular manifestations of sickle cell disease and genetic susceptibility for refractive errors
Shukla, Palak; Verma, Henu; Patel, Santosh; Patra, P. K.; Bhaskar, L. V. K. S.
2017-01-01
PURPOSE: Sickle cell disease (SCD) is the most common and serious form of an inherited blood disorder and leads to a higher risk of early mortality. SCD patients are at high risk for developing multiorgan acute and chronic complications linked with significant morbidity and mortality. Some of the ophthalmological complications of SCD include retinal changes, refractive errors, vitreous hemorrhage, and abnormalities of the cornea. MATERIALS AND METHODS: The present study includes 96 SCD patients. A dilated comprehensive eye examination was performed to determine the status of retinopathy. Refractive errors were measured in all patients. In patients >10 years of age, cycloplegia was not performed before autorefractometry. In a subset of fifty patients, genotyping was done for NOS3 27-base pair (bp) variable number of tandem repeat (VNTR) and IL4 intron-3 VNTR polymorphisms using polymerase chain reaction-electrophoresis. A Chi-square test was performed to test the association between the polymorphisms and refractive errors. RESULTS: The results of the present study revealed that 63.5% of patients have myopia, followed by 19.8% with hyperopia. NOS3 27-bp VNTR genotypes significantly deviated from Hardy–Weinberg equilibrium (P < 0.0001). Although IL4 70-bp VNTR increased the risk of developing refractive errors, the effect is not statistically significant. However, NOS3 27-bp VNTR significantly reduced the risk of development of myopia. CONCLUSION: In summary, our study documents the prevalence of refractive errors along with some retinal changes in Indian SCD patients. Further, this study demonstrates that the NOS3 VNTR contributes to the susceptibility to development of myopia in SCD cases. PMID:29018763
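The Hardy-Weinberg check reported above compares observed genotype counts with those expected from the estimated allele frequencies. A minimal chi-square sketch for a biallelic locus (the counts are hypothetical, not the study's NOS3 VNTR data, which would also need its repeat alleles collapsed to two classes):

```python
def hwe_chi_square(n_AA, n_Aa, n_aa):
    """Chi-square statistic for Hardy-Weinberg equilibrium from the three
    genotype counts of a biallelic locus (1 degree of freedom)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    q = 1.0 - p
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_Aa, n_aa)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts showing a heterozygote deficit relative to expectation
chi2 = hwe_chi_square(30, 15, 5)
```

A statistic this far above the 1-df critical value (3.84 at P = 0.05) is what "significantly deviated from Hardy-Weinberg equilibrium" means operationally.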
Programmable Numerical Function Generators: Architectures and Synthesis Method
2005-08-01
generates HDL (Hardware Description Language) code from the design specification described by Scilab [14], a MATLAB-like numerical calculation soft...cad.com/Error-NFG/. [14] Scilab 3.0, INRIA-ENPC, France, http://scilabsoft.inria.fr/ [15] M. J. Schulte and J. E. Stine, “Approximating elementary functions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.; Bzdusek, K.
Purpose: To introduce methods to analyze Deformable Image Registration (DIR) and identify regions of potential DIR errors. Methods: DIR Deformable Vector Fields (DVFs) quantifying patient anatomic changes were evaluated using the Jacobian determinant and the magnitude of DVF curl as functions of tissue density and tissue type. These quantities represent local relative deformation and rotation, respectively. Large values in dense tissues can potentially identify non-physical DVF errors. For multiple DVFs per patient, histograms and visualization of DVF differences were also considered. To demonstrate the capabilities of these methods, we computed multiple DVFs for each of five Head and Neck (H&N) patients (P1–P5) via a Fast-symmetric Demons (FSD) algorithm and via a Diffeomorphic Demons (DFD) algorithm, and show the potential to identify DVF errors. Results: Quantitative comparisons of the FSD and DFD registrations revealed <0.3 cm DVF differences in >99% of all voxels for P1, >96% for P2, and >90% of voxels for P3. While the FSD and DFD registrations were very similar for these patients, the Jacobian determinant was >50% in 9–15% of soft tissue and in 3–17% of bony tissue in each of these cases. The volumes of large soft tissue deformation were consistent for all five patients using the FSD algorithm (mean 15%±4% volume), whereas DFD reduced regions of large deformation by 10% volume (785 cm³) for P4 and by 14% volume (1775 cm³) for P5. The DFD registrations resulted in fewer regions of large DVF curl; >50% rotations in FSD registrations averaged 209±136 cm³ in soft tissue and 10±11 cm³ in bony tissue, but using DFD these values were reduced to 42±53 cm³ and 1.1±1.5 cm³, respectively. Conclusion: Analysis of the Jacobian determinant and curl as functions of tissue density can identify regions of potential DVF errors by identifying non-physical deformations and rotations.
Collaboration with Phillips Healthcare, as indicated in authorship.
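The two DVF quality metrics described above can be sketched on a voxel grid as follows. This is a minimal illustration under simple assumptions (unit voxel spacing, a toy displacement field), not the authors' implementation.

```python
# Jacobian determinant (local volume change) and curl magnitude (local
# rotation) of a deformation vector field u(x), sampled on a voxel grid.
import numpy as np

def jacobian_det_and_curl(dvf, spacing=(1.0, 1.0, 1.0)):
    """dvf: array of shape (3, nx, ny, nz), displacement per voxel."""
    # g[i][j] = d u_i / d x_j, finite differences along each axis
    g = [np.gradient(dvf[i], *spacing) for i in range(3)]
    # The mapping is x -> x + u(x), so its Jacobian matrix is I + grad(u)
    det = ((1 + g[0][0]) * ((1 + g[1][1]) * (1 + g[2][2]) - g[1][2] * g[2][1])
           - g[0][1] * (g[1][0] * (1 + g[2][2]) - g[1][2] * g[2][0])
           + g[0][2] * (g[1][0] * g[2][1] - (1 + g[1][1]) * g[2][0]))
    curl = np.stack([g[2][1] - g[1][2], g[0][2] - g[2][0], g[1][0] - g[0][1]])
    return det, np.linalg.norm(curl, axis=0)

# A uniform translation deforms nothing: det(J) = 1, |curl| = 0 everywhere
dvf = np.full((3, 8, 8, 8), 2.5)
det, curl_mag = jacobian_det_and_curl(dvf)
```

Voxels where det(J) deviates strongly from 1 in dense tissue, or where the curl magnitude is large, are the candidate non-physical regions the abstract describes.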
Stone, Nimalie D.; O'Hara, Caroline M.; Williams, Portia P.; McGowan, John E.; Tenover, Fred C.
2007-01-01
We compared the antimicrobial susceptibility testing results generated by disk diffusion and the VITEK 2 automated system with the results of the Clinical and Laboratory Standards Institute (CLSI) broth microdilution (BMD) reference method for 61 isolates of unusual species of Enterobacteriaceae. The isolates represented 15 genera and 26 different species, including Buttiauxella, Cedecea, Kluyvera, Leminorella, and Yokenella. Antimicrobial agents included aminoglycosides, carbapenems, cephalosporins, fluoroquinolones, penicillins, and trimethoprim-sulfamethoxazole. CLSI interpretative criteria for Enterobacteriaceae were used. Of the 12 drugs tested by BMD and disk diffusion, 10 showed >95% categorical agreement (CA). CA was lower for ampicillin (80.3%) and cefazolin (77.0%). There were 3 very major errors (all with cefazolin), 1 major error (also with cefazolin), and 26 minor errors. Of the 40 isolates (representing 12 species) that could be identified with the VITEK 2 database, 36 were identified correctly to species level, 1 was identified to genus level only, and 3 were reported as unidentified. VITEK 2 generated MIC results for 42 (68.8%) of 61 isolates, but categorical interpretations (susceptible, intermediate, and resistant) were provided for only 22. For the 17 drugs tested by both BMD and VITEK 2, essential agreement ranged from 80.9 to 100% and CA ranged from 68.2% (ampicillin) to 100%; thirteen drugs exhibited 100% CA. In summary, disk diffusion provides a reliable alternative to BMD for testing of unusual Enterobacteriaceae, some of which cannot be tested, or produce incorrect results, by automated methods. PMID:17135429
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods, namely the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed an RMS error of 0.89 mm. It turned out that the remaining triangulation errors cause only small inaccuracies in the tracking of a rigid object. Here, the tracking accuracy was given by an RMS translational error of 0.33 mm and an RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces.
Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
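The RMS figure of merit quoted above (0.16 mm for the laser scanner, 0.89 mm for the Kinect) can be computed as follows. The point coordinates here are illustrative only, not data from the study.

```python
# RMS error between a reconstructed point set and a reference point set,
# assuming corresponding points share the same ordering. Units: mm.
import numpy as np

def rms_error(reconstructed, reference):
    """Root mean square of per-point Euclidean distances."""
    d = np.linalg.norm(np.asarray(reconstructed) - np.asarray(reference), axis=1)
    return float(np.sqrt(np.mean(d ** 2)))

ref = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
rec = ref + np.array([[0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1]])  # 0.1 mm offsets
err = rms_error(rec, ref)  # each point off by 0.1 mm, so RMS = 0.1 mm
```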
Viallon, Magalie; Terraz, Sylvain; Roland, Joerg; Dumont, Erik; Becker, Christoph D; Salomir, Rares
2010-04-01
MR thermometry based on the proton resonance frequency shift (PRFS) is the most commonly used method for the monitoring of thermal therapies. As the chemical shift of water protons is temperature dependent, the local temperature variation (relative to an initial baseline) may be calculated from time-dependent phase changes in gradient-echo (GRE) MR images. Dynamic phase shift in GRE images is also produced by time-dependent changes in the magnetic bulk susceptibility of tissue. Gas bubbles (known as "white cavitation") are frequently visualized near the RF electrode in ultrasonography-guided radio frequency ablation (RFA). This study aimed to investigate the effects of RFA-induced cavitation by using simultaneous ultrasonography and MRI, both to visualize the cavitation and to quantify the subsequent magnetic susceptibility-mediated errors in concurrent PRFS MR thermometry (MRT), and to propose a first-order correction for these errors. RF heating in saline gels and in ex vivo tissues was performed with MR-compatible bipolar and monopolar electrodes inside a 1.5 T clinical MR scanner. Ultrasonography simultaneous with PRFS MRT was achieved using an MR-compatible phased-array ultrasonic transducer. PRFS MRT was performed interleaved in three orthogonal planes and compared to measurements from fluoroptic sensors, under low and high RFA power levels, respectively. Control experiments were performed to isolate the main source of errors in standard PRFS thermometry. Ultrasonography, MRI and digital camera pictures clearly demonstrated the generation of bubbles whenever the radio frequency equipment was operated at therapeutic powers (≥30 W). Simultaneous bimodal (ultrasonography and MRI) monitoring of high power RF heating demonstrated a correlation between the onset of the PRFS-thermometry errors and the appearance of bubbles around the applicator.
In an ex vivo study using a bipolar RF electrode at a low power level (5 W), the MR-measured temperature curves accurately matched the reference fluoroptic data. In similar ex vivo studies applying higher RFA power levels (30 W), the correlation plots of MR thermometry versus fluoroptic data showed large errors in PRFS-derived temperature (up to 45 degrees C absolute deviation, positive or negative), depending not only on the fluoroptic tip position but also on the RF electrode orientation relative to the B0 axis. Regions with apparent decreases in the PRFS-derived temperature maps of as much as 30 degrees C below the initial baseline were visualized during high power RFA application. Ex vivo data were corrected assuming a Gaussian dynamic source of susceptibility, centered in the anode/cathode gap of the bipolar RF electrode. After correction, the temperature maps recovered the revolution-symmetry pattern predicted by theory and matched the fluoroptic data to within a 4.5 degrees C mean offset. RFA induces dynamic changes in the magnetic bulk susceptibility of biological tissue, resulting in large and spatially dependent errors in phase-subtraction-only PRFS MRT and unexploitable thermal dose maps. These thermometry artifacts were strongly correlated with the appearance of transient cavitation. A first-order dynamic model of susceptibility provided a useful method for minimizing these artifacts in phantom and ex vivo experiments.
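The PRFS temperature calculation underlying the thermometry above can be sketched as follows: a phase change in a gradient-echo image maps to a temperature change via dT = d_phi / (gamma * alpha * B0 * TE). The constants are standard physical values; the echo time and the example phase are assumptions for illustration.

```python
# PRFS MR thermometry: temperature change from GRE phase change.
GAMMA = 2.675e8    # proton gyromagnetic ratio, rad s^-1 T^-1
ALPHA = -0.01e-6   # PRFS thermal coefficient, -0.01 ppm per deg C
B0 = 1.5           # field strength, T (as in the study)
TE = 0.010         # echo time, s (assumed)

def prfs_delta_t(delta_phi_rad):
    """Temperature change (deg C) inferred from a GRE phase change (rad)."""
    return delta_phi_rad / (GAMMA * ALPHA * B0 * TE)

# Round trip: the phase shift produced by a 10 deg C rise...
dphi = 10.0 * GAMMA * ALPHA * B0 * TE
recovered = prfs_delta_t(dphi)
# Any bubble-induced susceptibility change adds an extra phase term that
# this formula misreads as temperature -- the error mechanism described above.
```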
Approach to the Pediatric Prescription in a Community Pharmacy
Benavides, Sandra; Huynh, Donna; Morgan, Jill; Briars, Leslie
2011-01-01
Pediatric patients are more susceptible to medication errors for a variety of reasons including physical and social differences and the necessity for patient-specific dosing. As such, community pharmacists may feel uncomfortable in verifying or dispensing a prescription for a pediatric patient. However, the use of a systematic approach to the pediatric prescription can provide confidence to pharmacists and minimize the possibility of a medication error. The objective of this article is to provide the community pharmacist with an overview of the potential areas of medication errors in a prescription for a pediatric patient. Additionally, the article guides the community pharmacist through a pediatric prescription, highlighting common areas of medication errors. PMID:22768015
Effect of Preservative Pretreatment on the Biological Durability of Corn Straw Fiber/HDPE Composites
Xuan, Lihui; Hui, Dongxue; Cheng, Wanli; Wong, Andrew H. H.; Han, Guangping; Tan, Wei Khong; Tawi, Carlson A. D.
2017-01-01
The effects of alkaline copper quaternary (ACQ) and zinc borate (ZB) on the resistance of corn stalk fiber (CSF)-reinforced high-density polyethylene (HDPE) composites to biodegradation were examined. Both biocides could inhibit termites, mold fungi, and wood-decay fungi, even at high CSF formulations (i.e., 60%). Additionally, ACQ enhanced the resistance of the composite materials to certain biotic stresses better than ZB. The CSF/HDPE composites treated with ACQ at the 3.0% level exhibited a superior performance against termites, white rot fungi, and brown rot fungi. ACQ treatment at the 1% level was optimal for inhibiting soft rot fungi. Furthermore, mold growth was not observed on ACQ-treated CSF/HDPE samples. The untreated CSF/HDPE composites were more susceptible to mold infections and decay than the untreated poplar/HDPE composites, likely because of an incomplete removal of the pith. The chemical features of the corn stalk may also have influenced these differences, but this possibility will need to be explored in future investigations. Furthermore, the CSF component of CSF/HDPE composites is highly susceptible to fungal attacks, with the soft rot fungus inducing the largest mass losses, followed by the white rot fungus, and then the brown rot fungus. PMID:28773150
Accuracy of MRI-based Magnetic Susceptibility Measurements
NASA Astrophysics Data System (ADS)
Russek, Stephen; Erdevig, Hannah; Keenan, Kathryn; Stupic, Karl
Magnetic Resonance Imaging (MRI) is increasingly used to map tissue susceptibility to identify microbleeds associated with brain injury and pathologic iron deposits associated with neurologic diseases such as Parkinson's and Alzheimer's disease. Field distortions with a resolution of a few parts per billion can be measured using MRI phase maps. The field distortion map can be inverted to obtain a quantitative susceptibility map. To determine the accuracy of MRI-based susceptibility measurements, a set of phantoms with paramagnetic salts and nano-iron gels were fabricated. The shapes and orientations of features were varied. The measured susceptibility of a 1.0 mM GdCl3 solution in water as a function of temperature agreed well with theoretical predictions, assuming Gd3+ is spin 7/2. The MRI susceptibility measurements were compared with SQUID magnetometry. The paramagnetic susceptibility sits on top of the much larger diamagnetic susceptibility of water (-9.04 x 10^-6), which leads to errors in the SQUID measurements. To extract the paramagnetic contribution using standard magnetometry, measurements must be made down to low temperature (2 K). MRI-based susceptometry is shown to be as accurate as, or more accurate than, standard magnetometry and susceptometry techniques.
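The theoretical prediction referenced above follows from the Curie law. A minimal sketch for 1.0 mM GdCl3, assuming Gd3+ is spin 7/2 with g = 2 and room temperature (values are standard physical constants):

```python
# Curie-law paramagnetic susceptibility (SI volume susceptibility).
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T m / A
MU_B = 9.274e-24           # Bohr magneton, J / T
K_B = 1.381e-23            # Boltzmann constant, J / K
N_A = 6.022e23             # Avogadro constant, 1 / mol

def curie_susceptibility(conc_mol_per_m3, J, g=2.0, T=293.0):
    n = conc_mol_per_m3 * N_A                  # spins per m^3
    mu_eff_sq = g**2 * J * (J + 1) * MU_B**2   # effective moment squared
    return MU0 * n * mu_eff_sq / (3 * K_B * T)

chi = curie_susceptibility(1.0, J=3.5)  # 1.0 mM = 1.0 mol/m^3, spin 7/2
# chi is a few times 10^-7: small on top of water's diamagnetism
# (-9.04e-6), consistent with the SQUID difficulty noted above.
```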
Liu, Xiang; Effenberger, Frank; Chand, Naresh
2015-03-09
We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s, respectively, after transmission over a 20-km standard single-mode fiber without any optical amplification.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
DOE Office of Scientific and Technical Information (OSTI.GOV)
De Silva, T; Ketcha, M; Siewerdsen, J H
Purpose: In image-guided spine surgery, mapping 3D preoperative images to 2D intraoperative images via 3D-2D registration can provide valuable assistance in target localization. However, the presence of surgical instrumentation, hardware implants, and soft-tissue resection/displacement causes mismatches in image content, confounding existing registration methods. Manual/semi-automatic methods to mask such extraneous content are time-consuming, user-dependent, error-prone, and disruptive to clinical workflow. We developed and evaluated 2 novel similarity metrics within a robust registration framework to overcome such challenges in target localization. Methods: An IRB-approved retrospective study in 19 spine surgery patients included 19 preoperative 3D CT images and 50 intraoperative mobile radiographs in cervical, thoracic, and lumbar spine regions. A neuroradiologist provided truth definition of vertebral positions in CT and radiography. 3D-2D registration was performed using the CMA-ES optimizer with 4 gradient-based image similarity metrics: (1) gradient information (GI); (2) gradient correlation (GC); (3) a novel variant referred to as gradient orientation (GO); and (4) a second variant referred to as truncated gradient correlation (TGC). Registration accuracy was evaluated in terms of the projection distance error (PDE) of the vertebral levels. Results: Conventional similarity metrics were susceptible to gross registration error and failure modes associated with the presence of surgical instrumentation: for GI, the median PDE and interquartile range was 33.0±43.6 mm; similarly for GC, PDE = 23.0±92.6 mm. The robust metrics GO and TGC, on the other hand, demonstrated major improvement in PDE (7.6±9.4 mm and 8.1±18.1 mm, respectively) and elimination of gross failure modes. Conclusion: The proposed GO and TGC similarity measures improve registration accuracy and robustness to gross failure in the presence of strong image content mismatch.
Such registration capability could offer valuable assistance in target localization without disruption of clinical workflow. G. Kleinszig and S. Vogt are employees of Siemens Healthcare.
Effect of the mandible on mouthguard measurements of head kinematics.
Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B
2016-06-14
Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration, respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances, and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
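The spring-mass disturbance model mentioned above can be sketched with the textbook frequency response of an undamped single-degree-of-freedom system. The stiffness and mass values below are illustrative assumptions, not parameters from the paper.

```python
# Undamped spring-mass model: soft mouthguard material (spring k) couples
# the mandible (mass m) to the sensor, amplifying motion near resonance.
import math

def amplification(f_hz, k=5e4, m=0.08):
    """|H| for an undamped spring-mass system driven at frequency f_hz."""
    w = 2 * math.pi * f_hz
    wn = math.sqrt(k / m)               # natural frequency, rad/s
    return abs(1.0 / (1.0 - (w / wn) ** 2))

wn_hz = math.sqrt(5e4 / 0.08) / (2 * math.pi)  # resonance near 126 Hz here
# Well below resonance the sensor tracks the skull faithfully (|H| ~ 1);
# near resonance the mandible load strongly distorts the measurement.
```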
Large poroelastic deformation of a soft material
NASA Astrophysics Data System (ADS)
MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.
2014-11-01
Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
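The volume-conservation error discussed above can be made concrete with a one-line comparison: for an isotropic stretch 1 + e, the exact volume ratio is (1 + e)^3, while linear elasticity keeps only the first-order term 1 + 3e. This is a minimal arithmetic sketch, not the authors' model problems.

```python
# Volume-conservation error of linear elasticity at finite strain.
def volume_error(e):
    exact = (1.0 + e) ** 3    # finite-deformation volume ratio, det(F)
    linear = 1.0 + 3.0 * e    # linearized (small-strain) estimate
    return exact - linear     # neglected terms: 3*e**2 + e**3

# The error is quadratic in strain: already ~3% of volume at 10% strain,
# which the fluid-flow coupling can then compound and amplify.
errs = [volume_error(e) for e in (0.01, 0.1, 0.3)]
```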
NASA Astrophysics Data System (ADS)
Elfgen, S.; Franck, D.; Hameyer, K.
2018-04-01
Magnetic measurements are indispensable for the characterization of soft magnetic materials used, e.g., in electrical machines. Characteristic values are used for quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, the existing influencing factors in the characterization of soft magnetic materials are named and their resulting uncertainty contributions studied. The analysis of the resulting uncertainty contributions can serve the operator as an additional selection criterion for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard using a single sheet tester, and its impact on the identification of iron loss parameters is studied.
Local thermal energy as a structural indicator in glasses.
Zylberg, Jacques; Lerner, Edan; Bar-Sinai, Yohai; Bouchbinder, Eran
2017-07-11
Identifying heterogeneous structures in glasses-such as localized soft spots-and understanding structure-dynamics relations in these systems remain major scientific challenges. Here, we derive an exact expression for the local thermal energy of interacting particles (the mean local potential energy change caused by thermal fluctuations) in glassy systems by a systematic low-temperature expansion. We show that the local thermal energy can attain anomalously large values, inversely related to the degree of softness of localized structures in a glass, determined by a coupling between internal stresses-an intrinsic signature of glassy frustration-anharmonicity and low-frequency vibrational modes. These anomalously large values follow a fat-tailed distribution, with a universal exponent related to the recently observed universal [Formula: see text] density of states of quasilocalized low-frequency vibrational modes. When the spatial thermal energy field-a "softness field"-is considered, this power law tail manifests itself by highly localized spots, which are significantly softer than their surroundings. These soft spots are shown to be susceptible to plastic rearrangements under external driving forces, having predictive powers that surpass those of the normal modes-based approach. These results offer a general, system/model-independent, physical/observable-based approach to identify structural properties of quiescent glasses and relate them to glassy dynamics.
NASA Astrophysics Data System (ADS)
Sabdaningsih, A.; Cristianawati, O.; Sibero, M. T.; Nuryadi, H.; Radjasa, O. K.; Sabdono, A.; Trianto, A.
2017-02-01
Multidrug-resistant Staphylococcus haemolyticus is a Gram-positive bacterium and a member of the coagulase-negative staphylococci (CoNS) with the highest level of antimicrobial resistance. This nosocomial pathogen causes skin or soft tissue infections, bacteremia, septicemia, peritonitis, otitis media, meningitis and urinary tract infections. The ability to produce enterotoxins, hemolysins, biofilm, and cytotoxins could be an important characteristic for successful infection. Marine-derived fungi have potential as a continuous supply of bioactive compounds. The aim of this research was to screen crude extracts of marine-derived fungi associated with soft corals for antibacterial agents against MDR-S. haemolyticus. Among 23 isolates of marine-derived fungi isolated from 7 soft corals, 4 isolates were active against MDR-S. haemolyticus. The screening was conducted using the agar plug diffusion method. Isolate PPSC-27-A4 had the highest antibacterial activity, with a diameter of 23±9.6 mm. The crude extract from this isolate was subjected to an antibacterial susceptibility test, and the mycelial extract had the highest antibacterial activity, with a 12.2 mm inhibition zone at a concentration of 300 μg/ml. PPSC-27-A4 was characterized at the molecular level; based on sequence analysis of 18S rRNA, the PPSC-27-A4 isolate was identified as Trichoderma longibrachiatum.
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength and cross-track dependent biases, and consequently serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply a "soft calibration" to OMPS radiances. With the soft calibration the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from the standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with 1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals.
To assess if the quality of OMPS ozone retrievals could be acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present error analysis, and validate retrievals using a reference dataset. The useful information on the vertical distribution of ozone is limited below 40 km only from OMPS NM measurements due to the absence of Hartley ozone wavelength. This shortcoming will be improved with the joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
Consideration of species community composition in statistical ...
Diseases are increasing in marine ecosystems, and these increases have been attributed to a number of environmental factors including climate change, pollution, and overfishing. However, many studies pool disease prevalence into taxonomic groups, disregarding host species composition when comparing sites or assessing environmental impacts on patterns of disease presence. We used simulated data under a known environmental effect to assess the ability of standard statistical methods (binomial and linear regression, ANOVA) to detect a significant environmental effect on pooled disease prevalence with varying species abundance distributions and relative susceptibilities to disease. When one species was more susceptible to a disease and both species only partially overlapped in their distributions, models tended to produce a greater number of false positives (Type I error). Differences in disease risk between regions or along an environmental gradient tended to be underestimated, or even in the wrong direction, when highly susceptible taxa had reduced abundances in impacted sites, a situation likely to be common in nature. Including relative abundance as an additional variable in regressions improved model accuracy, but tended to be conservative, producing more false negatives (Type II error) when species abundance was strongly correlated with the environmental effect. Investigators should be cautious of underlying assumptions of species similarity in susceptibility.
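The pooling problem described above can be shown with a deterministic sketch: when species with different susceptibilities are surveyed at sites with different species compositions, pooled prevalence differs between sites even with no true environmental effect. All numbers below are hypothetical.

```python
# Confounding from pooled disease prevalence: per-species risk is
# identical at both sites, yet pooled prevalence differs sharply.

# Per-species disease probabilities (no environmental effect):
# species A is highly susceptible, species B is not.
p = {"A": 0.40, "B": 0.05}

# Species composition differs between sites (individuals surveyed)
site1 = {"A": 90, "B": 10}   # susceptible species dominates
site2 = {"A": 10, "B": 90}

def pooled_prevalence(site):
    """Expected pooled prevalence, ignoring species identity."""
    diseased = sum(site[sp] * p[sp] for sp in site)
    return diseased / sum(site.values())

prev1, prev2 = pooled_prevalence(site1), pooled_prevalence(site2)
# prev1 = 0.365 vs. prev2 = 0.085: a large apparent "site effect" from
# composition alone -- the Type I error mechanism described above.
```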
The importance of intra-hospital pharmacovigilance in the detection of medication errors
Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio
2018-01-01
Hospitalized patients are susceptible to medication errors, which rank between the fourth and sixth leading causes of death. The intra-hospital pharmacovigilance department intervenes throughout the medication process with the purpose of preventing, repairing, and assessing harm. Our aim was to analyze medication errors reported by the pharmacovigilance system of Mexico's Fundación Clínica Médica Sur and their impact on patients. This prospective study, carried out from 2012 to 2015, recorded the medication prescriptions given to patients. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. A total of 292,932 prescriptions for 56,368 patients were analyzed, and medication errors were identified in 8.9% of them. The treating physician was responsible for 83.32% of medication errors, residents for 6.71%, and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported to date. Copyright: © 2018 Secretaría de Salud.
Feuerstein, Marco; Reichl, Tobias; Vogel, Jakob; Traub, Joerg; Navab, Nassir
2009-06-01
Electromagnetic tracking is currently one of the most promising means of localizing flexible endoscopic instruments such as flexible laparoscopic ultrasound transducers. However, electromagnetic tracking is also susceptible to interference from ferromagnetic material, which distorts the magnetic field and leads to tracking errors. This paper presents new methods for real-time online detection and reduction of dynamic electromagnetic tracking errors when localizing a flexible laparoscopic ultrasound transducer. We use a hybrid tracking setup to combine optical tracking of the transducer shaft and electromagnetic tracking of the flexible transducer tip. A novel approach of modeling the poses of the transducer tip in relation to the transducer shaft allows us to reliably detect and significantly reduce electromagnetic tracking errors. For detecting errors of more than 5 mm, we achieved a sensitivity and specificity of 91% and 93%, respectively. An initial 3-D RMS error of 6.91 mm was reduced to 3.15 mm.
Estimating soft tissue thickness from light-tissue interactions––a simulation study
Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2013-01-01
Immobilization and marker-based motion tracking in radiation therapy often cause decreased patient comfort. However, the more comfortable alternative of optical surface tracking is highly inaccurate due to missing point-to-point correspondences between subsequent point clouds as well as elastic deformation of soft tissue. In this study, we present a proof of concept for measuring subcutaneous features with a laser scanner setup, focusing on skin thickness as additional input for high accuracy optical surface tracking. Using Monte-Carlo simulations for multi-layered tissue, we show that informative features can be extracted from the simulated tissue reflection by integrating intensities within concentric ROIs around the laser spot center. Training a regression model with a simulated data set identifies patterns that allow for predicting skin thickness with a root mean square error as low as 18 µm. Different approaches to compensate for varying observation angles were shown to yield errors still below 90 µm. Finally, this initial study provides a very promising proof of concept and encourages research towards a practical prototype. PMID:23847741
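The feature-extraction step described above, integrating intensities within concentric ROIs around the laser spot center, can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code; the Gaussian spot model and ring radii are assumptions.

```python
import numpy as np

def ring_features(img, center, radii):
    # Sum pixel intensities in concentric rings [r_{i-1}, r_i) around
    # the spot center; one feature value per ring.
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    bounds = [0.0] + list(radii)
    return [float(img[(r >= lo) & (r < hi)].sum())
            for lo, hi in zip(bounds[:-1], bounds[1:])]

# Synthetic Gaussian "laser spot" (purely illustrative).
y, x = np.indices((64, 64))
spot = np.exp(-((y - 32.0) ** 2 + (x - 32.0) ** 2) / (2 * 4.0 ** 2))
feats = ring_features(spot, (32, 32), [4, 8, 16])
```

Feature vectors of this form, computed per laser spot, would then feed the regression model that maps reflection patterns to skin thickness.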
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined to be the best model, and five equations obtained in prior research. The GP model, with the lowest error values (root mean square error (RMSE) of 0.0515), performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with ease using simple SPICE simulations. It accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross-section vs. frequency behavior and other subtle effects are also accurately predicted.
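The abstract does not spell out the SPICE-level strike model, but a common ingredient in such methodologies is injecting a double-exponential current pulse at the struck node. The sketch below shows that standard pulse shape with illustrative (not paper-derived) charge and time constants; integrating the pulse recovers the deposited charge, which is how such a source is usually sanity-checked before a SPICE run.

```python
import numpy as np

def seu_current(t, q, tau_f=2e-10, tau_r=5e-11):
    # Double-exponential ion-strike current pulse; tau_f (fall) and
    # tau_r (rise) are illustrative values, not taken from the paper.
    return (q / (tau_f - tau_r)) * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

q = 100e-15                        # assume 100 fC of collected charge
t = np.linspace(0.0, 5e-9, 20001)  # 5 ns window
i = seu_current(t, q)
# Trapezoidal integration of the pulse recovers the deposited charge.
collected = float(np.sum(0.5 * (i[1:] + i[:-1]) * np.diff(t)))
```

In an actual SER flow, this waveform would drive a piecewise-linear current source attached to the sensitive node of the SPICE netlist, with q swept to find the upset threshold.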
Hubert, G; Regis, D; Cheminet, A; Gatti, M; Lacoste, V
2014-10-01
Particles originating from primary cosmic radiation that hit the Earth's atmosphere give rise to a complex field of secondary particles, including neutrons, protons, muons, pions, etc. Since the 1980s it has been known that terrestrial cosmic rays can penetrate the natural shielding of buildings, equipment and circuit packages and induce soft errors in integrated circuits. Recently, research has shown that commercial static random access memories are now so small and sufficiently sensitive that single event upsets (SEUs) may be induced by the electronic stopping of a proton. With continued advancements in process size, this downward trend in sensitivity is expected to continue. Consequently, muon-induced soft errors have been predicted for nano-electronics. This paper describes these effects in specific cases, namely neutron-, proton- and muon-induced SEUs observed in complementary metal-oxide semiconductor devices. The results allow investigation of technology-node sensitivity along the scaling trend. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Nitta, Nariaki
1988-01-01
Hard X-ray spectra in solar flares obtained by the broadband spectrometers aboard Hinotori and SMM are compared. Within the uncertainty brought about by assuming the typical energy of the background X-rays, spectra from the Hinotori spectrometer are usually consistent with those from the SMM spectrometer for flares in 1981. By contrast, flares in 1982 persistently show 20-50 percent higher flux from Hinotori than from SMM. If this discrepancy is entirely attributable to errors in the calibration of energy ranges, the errors would be about 10 percent. Despite such a discrepancy in absolute flux, in the decay phase of one flare the spectra revealed a hard X-ray component (probably a 'superhot' component) that could be explained neither by emission from a plasma at about 2 x 10^7 K nor by a nonthermal power-law component. Imaging observations during this period show hard X-ray emission nearly cospatial with soft X-ray emission, in contrast with earlier times at which hard and soft X-rays come from different places.
Little, Catherine M; Chapman, Thomas W; Moreau, Debra L; Hillier, N Kirk
2017-01-01
Drosophila suzukii Matsumara has recently emerged as a major invasive pest of soft-skinned fruits and berries throughout North America and Europe. Its distribution has spread so rapidly that little is known of the extent of fruit susceptibility, particularly in boreal regions. Populations of D. suzukii increase dramatically in late summer in boreal regions, concurrent with the fruiting seasons of commercially and culturally significant fruits and berries. We tested fruit preference and susceptibility of lingonberry, blueberry, chokecherry, sea buckthorn and raspberry fruits to D. suzukii. Female D. suzukii attempted to oviposit on all fruit types tested. Fruits with lower brix and lower pH levels were preferred in choice tests. Undamaged lingonberries were relatively safe from infestation; however, bruised or frost-damaged fruits were easily penetrated. Sea buckthorn and raspberry fruits were highly preferred. Although blueberry growers have experienced severe economic crop losses due to D. suzukii, we found that blueberries were the least preferred of the fruits tested. This suggests that D. suzukii are largely opportunistic and highlights the importance of fruit phenology in fruit susceptibility. © 2016 Society of Chemical Industry.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Systematic encoding also provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods that require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
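The relation between bit and block error probability can be illustrated numerically. The sketch below is a simplified stand-in for the paper's setting, not its method: it uses hard-decision minimum-distance decoding (ML over a binary symmetric channel) of a systematic Hamming(7,4) code, a code chosen here purely for convenience, and shows that the per-information-bit error rate P_b falls below the block error rate P_s, as the (d_H/N) scaling predicts.

```python
import itertools
import numpy as np

# Systematic Hamming(7,4): G = [I | P]; over a BSC, ML decoding is
# minimum-Hamming-distance decoding against the 16-word codebook.
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
codebook = np.array([(np.array(m) @ G) % 2
                     for m in itertools.product([0, 1], repeat=4)])

rng = np.random.default_rng(1)
p, trials = 0.05, 20000          # BSC crossover probability, Monte Carlo size
block_err = bit_err = 0
for _ in range(trials):
    m = rng.integers(0, 2, size=4)
    c = (m @ G) % 2
    r = (c + (rng.random(7) < p)) % 2          # BSC: flip bits w.p. p
    best = codebook[np.argmin((codebook != r).sum(axis=1))]
    if not np.array_equal(best[:4], m):        # systematic: info = first 4
        block_err += 1
        bit_err += int((best[:4] != m).sum())

Ps = block_err / trials          # block (message) error rate
Pb = bit_err / (4 * trials)      # per-information-bit error rate
```

Because a block error corrupts only some of the four information bits, P_b is strictly smaller than P_s, which is the qualitative content of the approximation discussed in the abstract.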
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is essential both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by interposition of the soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the necessity of US image segmentation, offer real-time signal processing and require less invasive equipment. The system consists of a single optically tracked US transducer probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and for real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 = (log2(8))(8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
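The quoted spectral efficiency is simply bits per symbol times aggregate code rate, assuming Nyquist signalling; a one-line check of the abstract's figure:

```python
import math

def spectral_efficiency(mod_order, code_rate):
    # Information bit/s/Hz for M-ary modulation at an aggregate code
    # rate, assuming Nyquist signalling (one symbol per second per Hz).
    return math.log2(mod_order) * code_rate

eta = spectral_efficiency(8, 8 / 9)  # 8PSK with ensemble rate 8/9
```

This reproduces the 2.67 information bps/Hz stated for the LRB-RS 8PSK formats.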
Using Remotely Sensed Information for Near Real-Time Landslide Hazard Assessment
NASA Technical Reports Server (NTRS)
Kirschbaum, Dalia; Adler, Robert; Peters-Lidard, Christa
2013-01-01
The increasing availability of remotely sensed precipitation and surface products provides a unique opportunity to explore how landslide susceptibility and hazard assessment may be approached at larger spatial scales with higher resolution remote sensing products. A prototype global landslide hazard assessment framework has been developed to evaluate how landslide susceptibility and satellite-derived precipitation estimates can be used to identify potential landslide conditions in near-real time. Preliminary analysis of this algorithm suggests that forecasting errors are geographically variable due to the resolution and accuracy of the current susceptibility map and the application of satellite-based rainfall estimates. This research is currently working to improve the algorithm through considering higher spatial and temporal resolution landslide susceptibility information and testing different rainfall triggering thresholds, antecedent rainfall scenarios, and various surface products at regional and global scales.
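The abstract leaves the rainfall triggering thresholds unspecified; a common choice in such near-real-time frameworks is a global intensity-duration power law. The sketch below uses Caine's (1980) coefficients purely as a placeholder assumption, not as the thresholds of this algorithm:

```python
def exceeds_id_threshold(intensity_mm_h, duration_h, alpha=14.82, beta=-0.39):
    # Rainfall intensity-duration triggering threshold of the power-law
    # form I = alpha * D**beta. The default coefficients are Caine's
    # (1980) global curve, used here only for illustration.
    return intensity_mm_h > alpha * duration_h ** beta

# 10 mm/h sustained for 24 h exceeds the illustrative global curve,
# while 1 mm/h over the same duration does not.
hit = exceeds_id_threshold(10.0, 24.0)
miss = exceeds_id_threshold(1.0, 24.0)
```

In a framework like the one described, such a check would be evaluated per grid cell against satellite rainfall estimates, then intersected with the susceptibility map to flag potential landslide conditions.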
D'Allaire, S; DeRoth, L
1986-01-01
Ten pigs susceptible and ten resistant to malignant hyperthermia were used to observe the effects of exercise and ambient temperature on selected physiological parameters. Pigs were subjected to ten minutes of exercise on a treadmill operating at a speed of 1.8 km/h and inclined at 11 degrees. Exercise in the first group was at an ambient temperature of 14 degrees C, and in the second at 29 degrees C. The right carotid artery was previously cannulated for blood pressure measurements and for repeated blood sampling during exercise. Arterial pressure, heart rate, and rectal and cutaneous temperatures were recorded. Levels of cortisol, creatine kinase and its isoenzymes were measured. At 14 degrees C, exercise caused some physiological adjustments in susceptible animals; heart rate, skin temperature and cortisol levels increased (P less than 0.05). In resistant pigs, only the heart rate was elevated significantly following exertional stress at 14 degrees C. Exercise at 29 degrees C produced severe stress and marked physiological changes: heart rate, rectal and skin temperatures and cortisol levels increased significantly in both susceptible and resistant swine. At 29 degrees C, susceptible pigs also had higher levels of serum cortisol, total creatine kinase and MM isoenzyme (P less than 0.05) compared to resistant pigs. The results indicate that, following exertional or thermal stress, susceptible pigs undergo more extensive physiological changes than do resistant pigs. Similar levels of stress prior to slaughter may trigger physiological changes which in the susceptible pigs would likely result in pale, soft exudative myopathy. PMID:3742362
A Microfluidic Channel Method for Rapid Drug-Susceptibility Testing of Pseudomonas aeruginosa
Matsumoto, Yoshimi; Grushnikov, Andrey; Kikuchi, Kazuma; Noji, Hiroyuki; Yamaguchi, Akihito; Yagi, Yasushi
2016-01-01
The recent global increase in the prevalence of antibiotic-resistant bacteria and lack of development of new therapeutic agents emphasize the importance of selecting appropriate antimicrobials for the treatment of infections. However, to date, the development of completely accelerated drug susceptibility testing methods has not been achieved despite the availability of a rapid identification method. We proposed an innovative rapid method for drug susceptibility testing for Pseudomonas aeruginosa that provides results within 3 h. The drug susceptibility testing microfluidic (DSTM) device was prepared using soft lithography. It consisted of five sets of four microfluidic channels sharing one inlet slot, and the four channels are gathered in a small area, permitting simultaneous microscopic observation. Antimicrobials were pre-introduced into each channel and dried before use. Bacterial suspensions in cation-adjusted Mueller–Hinton broth were introduced from the inlet slot and incubated for 3 h. Susceptibilities were microscopically evaluated on the basis of differences in cell numbers and shapes between drug-treated and control cells, using dedicated software. The results of 101 clinically isolated strains of P. aeruginosa obtained using the DSTM method strongly correlated with results obtained using the ordinary microbroth dilution method. Ciprofloxacin, meropenem, ceftazidime, and piperacillin caused elongation in susceptible cells, while meropenem also induced spheroplast and bulge formation. Morphological observation could alternatively be used to determine the susceptibility of P. aeruginosa to these drugs, although amikacin had little effect on cell shape. The rapid determination of bacterial drug susceptibility using the DSTM method could also be applicable to other pathogenic species, and it could easily be introduced into clinical laboratories without the need for expensive instrumentation. PMID:26872134
Using Digital Image Correlation to Characterize Local Strains on Vascular Tissue Specimens.
Zhou, Boran; Ravindran, Suraj; Ferdous, Jahid; Kidane, Addis; Sutton, Michael A; Shazly, Tarek
2016-01-24
Characterization of the mechanical behavior of biological and engineered soft tissues is a central component of fundamental biomedical research and product development. Stress-strain relationships are typically obtained from mechanical testing data to enable comparative assessment among samples and in some cases identification of constitutive mechanical properties. However, errors may be introduced through the use of average strain measures, as significant heterogeneity in the strain field may result from geometrical non-uniformity of the sample and stress concentrations induced by mounting/gripping of soft tissues within the test system. When strain field heterogeneity is significant, accurate assessment of the sample mechanical response requires measurement of local strains. This study demonstrates a novel biomechanical testing protocol for calculating local surface strains using a mechanical testing device coupled with a high resolution camera and a digital image correlation technique. A series of sample surface images are acquired and then analyzed to quantify the local surface strain of a vascular tissue specimen subjected to ramped uniaxial loading. This approach can improve accuracy in experimental vascular biomechanics and has potential for broader use among other native soft tissues, engineered soft tissues, and soft hydrogel/polymeric materials. In the video, we demonstrate how to set up the system components and perform a complete experiment on native vascular tissue.
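Once DIC has produced a displacement field, local strains follow from its spatial gradient. The sketch below is a minimal illustration of that post-processing step (not the authors' software): it computes Green-Lagrange strain components by finite differences on a synthetic uniform stretch, for which the answer is known in closed form.

```python
import numpy as np

def green_lagrange_strain(ux, uy, dx=1.0):
    # Local Green-Lagrange strain components from a 2-D displacement
    # field sampled on a regular grid (as DIC produces), via central
    # differences. Returns the (Exx, Eyy, Exy) arrays.
    dux_dy, dux_dx = np.gradient(ux, dx)
    duy_dy, duy_dx = np.gradient(uy, dx)
    Exx = dux_dx + 0.5 * (dux_dx**2 + duy_dx**2)
    Eyy = duy_dy + 0.5 * (dux_dy**2 + duy_dy**2)
    Exy = 0.5 * (dux_dy + duy_dx + dux_dx * dux_dy + duy_dx * duy_dy)
    return Exx, Eyy, Exy

# Synthetic 10% uniaxial stretch along x: u_x = 0.1 * x, u_y = 0,
# for which Exx = 0.1 + 0.5 * 0.1**2 = 0.105 everywhere.
y, x = np.mgrid[0:50, 0:50].astype(float)
Exx, Eyy, Exy = green_lagrange_strain(0.1 * x, np.zeros_like(x))
```

On real DIC output the same computation exposes the strain-field heterogeneity near grips and geometric non-uniformities that the abstract identifies as the motivation for local, rather than average, strain measures.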
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems: their susceptibility to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme is able to check and correct errors in real time. In order to guarantee security, a fractional-order complex chaotic system with shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
Miraldi Utz, Virginia
2017-01-01
Myopia is the most common eye disorder and a major cause of visual impairment worldwide. As the incidence of myopia continues to rise, the need to further understand the complex roles of molecular and environmental factors controlling variation in refractive error is of increasing importance. Tkatchenko and colleagues applied a systematic approach using a combination of gene set enrichment analysis, genome-wide association studies, and functional analysis of a murine model to identify a myopia susceptibility gene, APLP2. Variation in refractive error was associated with time spent reading among those carrying low-frequency variants in this gene. This provides support for the longstanding hypothesis of gene-environment interactions in refractive error development.
Gündoğdu, Aycan; Kılıç, Hüseyin; Ulu Kılıç, Ayşegül; Kutateladze, Mzia
2016-04-01
Skin and soft tissue infections (SSTIs) represent a wide clinical spectrum, from cellulitis to necrotizing fasciitis with its high associated mortality. Limitations in therapy due to multiple drug resistance lead to increases in morbidity and mortality rates, especially in complicated SSTIs such as diabetic foot, decubitus, and surgical wound infections. Therefore, alternative treatment strategies other than antibiotics are needed in appropriate clinical conditions. "Bacteriophage therapy", an old method that has long been part of standard treatment in some countries such as Georgia and Russia, has again become popular worldwide. The aim of this study was to investigate the in vitro susceptibilities of multidrug-resistant (MDR) pathogens isolated from patients with complicated SSTIs against standard bacteriophage (phage) cocktails. The six different ready-made phage preparations used in this study [Pyophage, Intestiphage, ENKO, SES, Fersisi and Staphylococcal Bacteriophage (Sb)] were provided by the G. Eliava Institute, Georgia. Because no ready-made phage preparations exist for Acinetobacter baumannii and Klebsiella pneumoniae, Φ1-Φ7 and ΦKL1-ΦKL3 phages from the same institute's phage library were used, respectively. Isolation and identification of the pathogens from abscess and wound samples of patients with SSTIs were performed by conventional methods and the automated VITEK(®)-2 (bioMerieux, USA) system. Antimicrobial susceptibility testing was conducted in compliance with CLSI standards, and bacteria resistant to at least two different antibiotic groups were considered MDR. A total of 33 isolates were included in the study: nine E.coli (8 ESBL- and 1 ESBL + carbapenemase-positive), nine MDR P.aeruginosa, nine MDR A.baumannii, three methicillin-resistant Staphylococcus aureus (MRSA), and three K.pneumoniae (1 ESBL-, 1 carbapenemase- and 1 ESBL + carbapenemase-positive).
The phage susceptibilities of the pathogens were determined using the spot test. Of the 33 MDR pathogens, 29 (87.9%) were found to be susceptible to at least one of the tested phages/phage preparations. All MRSA strains (3/3) were susceptible to the ENKO, SES, Fersisi and Sb phage cocktails, while all A.baumannii isolates (9/9) were susceptible to the Φ5 and Φ7 phages. However, two E.coli, one K.pneumoniae and one P.aeruginosa strain were resistant to all of the phage preparations tested. Although the clinical use of phages has not yet been approved outside a few Eastern European countries, this study demonstrates the potential of topical bacteriophage therapy in the treatment of complicated SSTIs caused by MDR pathogens with limited treatment options, such as diabetic foot, decubitus, and surgical wound infections.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
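The burst and gap statistics the first objective refers to are run lengths of a per-byte error-flag stream. A sketch of that bookkeeping, assuming a simple binary flag sequence (the hardware's actual flag format is not described in the abstract):

```python
from itertools import groupby

def burst_gap_stats(error_flags):
    # Run-length statistics of a read channel's byte-error stream:
    # lengths of error bursts (runs of 1s) and of good-data gaps
    # (runs of 0s), in the order they occur.
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        (bursts if flag else gaps).append(sum(1 for _ in run))
    return bursts, gaps

# 2-byte burst, 3-byte gap, 1-byte burst, 4-byte gap:
bursts, gaps = burst_gap_stats([1, 1, 0, 0, 0, 1, 0, 0, 0, 0])
```

Histograms of these run lengths are exactly the inputs a channel model needs to estimate post-CIRC output error rates for a disc of measured quality.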
Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.
2010-01-01
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
A device for high-throughput monitoring of degradation in soft tissue samples.
Tzeranis, D S; Panagiotopoulos, I; Gkouma, S; Kanakaris, G; Georgiou, N; Vaindirlis, N; Vasileiou, G; Neidlin, M; Gkousioudi, A; Spitas, V; Macheras, G A; Alexopoulos, L G
2018-06-06
This work describes the design and validation of a novel device, the High-Throughput Degradation Monitoring Device (HDD), for monitoring the degradation of 24 soft tissue samples over incubation periods of several days inside a cell culture incubator. The device quantifies sample degradation by monitoring its deformation induced by a static gravity load. Initial instrument design and experimental protocol development focused on quantifying cartilage degeneration. Characterization of measurement errors, caused mainly by thermal transients and by translating the instrument sensor, demonstrated that HDD can quantify sample degradation with <6 μm precision and <10 μm temperature-induced errors. HDD capabilities were evaluated in a pilot study that monitored the degradation of fresh ex vivo human cartilage samples by collagenase solutions over three days. HDD could robustly resolve the effects of collagenase concentrations as small as 0.5 mg/ml. Careful sample preparation resulted in measurements that did not suffer from donor-to-donor variation (coefficient of variance <70%). Due to its unique combination of sample throughput, measurement precision, temporal sampling and experimental versatility, HDD provides a novel biomechanics-based experimental platform for quantifying the effects of proteins (cytokines, growth factors, enzymes, antibodies) or small molecules on the degradation of soft tissues or tissue engineering constructs. Thereby, HDD can complement established tools and in vitro models in important applications including drug screening and biomaterial development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Walden, Steven J; Evans, Sam L; Mulville, Jacqui
2017-01-01
The purpose of this study was to determine how the Vickers hardness (HV) of bone varies during soft tissue putrefaction. This has possible forensic applications, notably for determining the postmortem interval. Experimental porcine bone samples were decomposed in surface and burial deposition scenarios over a period of 6 months. Although the Vickers hardness varied widely, it was found that when transverse axial hardness was subtracted from longitudinal axial hardness, the difference showed correlations with three distinct phases of soft tissue putrefaction. The ratio of transverse axial hardness to longitudinal axial hardness showed a similar correlation. A difference of 10 or greater in HV, with soft tissue present and signs of minimal decomposition, was associated with a decomposition period of 250 cumulative cooling degree days or less. A difference of 10 (+/- standard error of mean at a 95% confidence interval) or greater in HV associated with marked decomposition indicated a decomposition period of 1450 cumulative cooling degree days or more. A difference of -7 to +8 (+/- standard error of mean at a 95% confidence interval) was thus associated with 250 to 1450 cumulative cooling degree days' decomposition. The ratio of transverse axial HV to longitudinal HV, ranging from 2.42 to 1.54, is a more reliable indicator in this context and is preferable to using negative integers. These differences may have potential as an indicator of postmortem interval and thus the time of body deposition in the forensic context. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. Part I: Theory and Single Error Correction" by H. Krishna, K.-Y. Lin, and J.-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan. 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in the datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation-hardening architecture for the datapath in radiation environments.
In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected through the redundant relationship among the residues at a correction module allocated at the end of the datapath. In the back-end implementation, the module isolation technique improves the soft error rate performance of RRNS by physically separating the operation channels of different moduli. The case studies show at least an order-of-magnitude decrease in the soft error rate (SER) compared to non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10⁻¹² to 10⁻¹⁷ when the datapath executes 10⁶ processing steps. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS reduces the operational complexity of the datapath.
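The core RRNS correction idea (drop one residue at a time, reconstruct by the Chinese Remainder Theorem, and accept a result that falls inside the legitimate range) can be sketched in Python. The moduli chosen here are an illustrative toy, not the paper's hardware configuration:

```python
from math import prod

def crt(residues, moduli):
    # Chinese Remainder Theorem reconstruction of x from its residues
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
    return x % M

MODULI = [3, 5, 7, 11, 13]   # last two residues are redundant
LEGIT = 3 * 5 * 7            # legitimate dynamic range: [0, 105)

def encode(x):
    return [x % m for m in MODULI]

def correct(residues):
    """Correct a single upset residue by dropping each residue in turn;
    a reconstruction that lands in the legitimate range is accepted."""
    for i in range(len(MODULI)):
        x = crt(residues[:i] + residues[i + 1:], MODULI[:i] + MODULI[i + 1:])
        if x < LEGIT:
            return x
    return None   # uncorrectable (more than one residue upset)
```

With two redundant residues, this drop-one search corrects any single upset residue, at a small modulus-dependent risk of miscorrection that real RRNS decoders bound more carefully.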
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft-tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft-tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC and MR-guided RTP. © 2018 International Society for Magnetic Resonance in Medicine.
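The segmentation-and-mapping step can be sketched with NumPy. The air and soft-tissue HU values follow the abstract (-1000 HU and +42 HU); the intensity thresholds and the bone-mapping line are illustrative placeholders, not the paper's fitted parameters:

```python
import numpy as np

def zte_to_pseudo_ct(img, air_thr=0.2, bone_thr=0.75,
                     bone_slope=-2000.0, bone_offset=1500.0):
    """Map bias-corrected, normalized ZTE intensities to Hounsfield units.

    Air and soft-tissue HU follow the paper; thresholds and the linear
    bone mapping here are hypothetical placeholder values.
    """
    hu = np.full(img.shape, 42.0)                  # default: soft tissue
    hu[img < air_thr] = -1000.0                    # air
    bone = (img >= air_thr) & (img < bone_thr)     # intermediate short-T2 bone signal
    hu[bone] = bone_slope * img[bone] + bone_offset  # denser bone, lower ZTE signal, higher HU
    return hu
```

The negative slope encodes the continuous bone mapping: lower ZTE intensity (denser bone) maps to higher attenuation, which is what lets the method account for bone density variations.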
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Perry B.; Geyer, Amy; Borrego, David
Purpose: To investigate the benefits and limitations of patient-phantom matching for determining organ dose during fluoroscopy guided interventions. Methods: In this study, 27 CT datasets representing patients of different sizes and genders were contoured and converted into patient-specific computational models. Each model was matched, based on height and weight, to computational phantoms selected from the UF hybrid patient-dependent series. In order to investigate the influence of phantom type on patient organ dose, Monte Carlo methods were used to simulate two cardiac projections (PA/left lateral) and two abdominal projections (RAO/LPO). Organ dose conversion coefficients were then calculated for each patient-specific and patient-dependent phantom and also for a reference stylized and reference hybrid phantom. The coefficients were subsequently analyzed for any correlation between patient-specificity and the accuracy of the dose estimate. Accuracy was quantified by calculating an absolute percent difference using the patient-specific dose conversion coefficients as the reference. Results: Patient-phantom matching was shown most beneficial for estimating the dose to heavy patients. In these cases, the improvement over using a reference stylized phantom ranged from approximately 50% to 120% for abdominal projections and for a reference hybrid phantom from 20% to 60% for all projections. For lighter individuals, patient-phantom matching was clearly superior to using a reference stylized phantom, but not significantly better than using a reference hybrid phantom for certain fields and projections. Conclusions: The results indicate two sources of error when patients are matched with phantoms: anatomical error, which is inherent due to differences in organ size and location, and error attributed to differences in the total soft tissue attenuation.
For small patients, differences in soft tissue attenuation are minimal and are exceeded by inherent anatomical differences. For large patients, differences in soft tissue attenuation can be large. In these cases, patient-phantom matching proves most effective as differences in soft tissue attenuation are mitigated. With increasing obesity rates, overweight patients will continue to make up a growing fraction of all patients undergoing medical imaging. Thus, phantoms that better represent this population constitute a considerable improvement over previous methods. In response to this study, additional phantoms representing heavier weight percentiles will be added to the UFHADM and UFHADF patient-dependent series.
Quality assurance of the international computerised 24 h dietary recall method (EPIC-Soft).
Crispim, Sandra P; Nicolas, Genevieve; Casagrande, Corinne; Knaze, Viktoria; Illner, Anne-Kathrin; Huybrechts, Inge; Slimani, Nadia
2014-02-01
The interview-administered 24 h dietary recall (24-HDR) EPIC-Soft® has a series of controls to guarantee the quality of dietary data across countries. These comprise all steps that are part of fieldwork preparation, data collection and data management; however, a complete characterisation of these quality controls is still lacking. The present paper describes in detail the quality controls applied in EPIC-Soft, which are, to a large extent, built on the basis of the EPIC-Soft error model and are applied in three phases: (1) before, (2) during and (3) after the 24-HDR interviews. Quality controls for consistency and harmonisation are implemented before the interviews while preparing the seventy databases constituting an EPIC-Soft version (e.g. pre-defined and coded foods and recipes). During the interviews, EPIC-Soft uses a cognitive approach by helping the respondent to recall the dietary intake information in a stepwise manner and includes controls for consistency (e.g. probing questions) as well as for completeness of the collected data (e.g. system calculation for some unknown amounts). After the interviews, a series of controls can be applied by dietitians and data managers to further guarantee data quality. For example, the interview-specific 'note files' that were created to track any problems or missing information during the interviews can be checked to clarify the information initially provided. Overall, the quality controls employed in the EPIC-Soft methodology are not always apparent to the user, but they contribute to its overall standardisation and possibly to the accuracy of the collected data.
Yurayart, Chompoonek; Nuchnoul, Noppawan; Moolkum, Pornsawan; Jirasuksiri, Supitcha; Niyomtham, Waree; Chindamporn, Ariya; Kajiwara, Susumu; Prapasarakul, Nuvee
2013-10-01
Malassezia pachydermatis and Candida parapsilosis are recognized as commensal yeasts on the skin of healthy dogs but also as causative agents of seborrheic dermatitis, especially in atopic dogs. We determined and compared the susceptibility levels of yeasts isolated from dogs with and without seborrheic dermatitis (SD) using the disk diffusion method (DD) for itraconazole (ITZ), ketoconazole (KTZ), nystatin (NYS), terbinafine (TERB) and 5-fluorocytosine (5-FC), and the broth microdilution method (BMD) for ITZ and KTZ. The reliability between the methods was assessed using an agreement analysis and linear regression. Forty-five M. pachydermatis and 28 C. parapsilosis isolates were identified based on physiological characteristics and an approved molecular analysis. By DD, all tested M. pachydermatis isolates were susceptible to ITZ, KTZ, NYS and TERB but resistant to 5-FC. Only 46-60% of the tested C. parapsilosis isolates were susceptible to KTZ, TERB and 5-FC, but ITZ and NYS were effective against all. By BMD, over 95% of M. pachydermatis isolates were susceptible to KTZ and ITZ, with MIC90 values of <0.03 and 0.12 μg/ml, respectively. The frequencies of KTZ- and ITZ-resistant C. parapsilosis were 29% and 7%, and the MIC90 values were 1 μg/ml and 0.5-1 μg/ml, respectively. In the agreement analysis, 2.2% minor errors were observed for M. pachydermatis and 0.2-1% very major errors occurred among C. parapsilosis. There were no significant differences in the yeast resistance rates between dogs with and without SD. KTZ and ITZ were still efficacious against M. pachydermatis, but a high rate of KTZ resistance was observed in C. parapsilosis.
Development of a self-healing soft pneumatic actuator: a first concept.
Terryn, Seppe; Mathijssen, Glenn; Brancart, Joost; Lefeber, Dirk; Assche, Guy Van; Vanderborght, Bram
2015-07-07
Inspired by the intrinsic softness and the corresponding embodied intelligence principles, soft pneumatic actuators (SPAs) have been developed that ensure safe interaction in unstructured, unknown environments. Due to their intrinsic softness, these actuators have the ability to resist large mechanical impacts. However, the soft materials used in these structures are in general susceptible to damage caused by sharp objects found in unstructured environments. This paper proposes to integrate a self-healing (SH) mechanism in SPAs, such that cuts, tears and perforations in the actuator can be self-healed. Diels-Alder (DA) polymers, covalent polymer network systems based on the thermoreversible DA reaction, were selected, and their mechanical as well as SH properties are described. To evaluate the feasibility of developing an SPA constructed out of SH material, a single-cell prototype, a SH soft pneumatic cell (SH-SPC), was constructed entirely out of DA polymers. Exploiting the SH property of the DA polymers, a completely new shaping process is presented in this paper, referred to as 'shaping through folding and self-healing'. 3D polygon structures, like the cubic SH-SPC, can be constructed by folding an SH polymer sheet. The sides of the structures can be sealed and made airtight using a SH procedure at relatively low temperatures (<90 °C). Both the (thermo)mechanical and SH properties of the SH-SPC prototype were experimentally validated and showed excellent performance. Macroscopic incisions in the prototype were completely healed using a SH procedure (<70 °C). Starting from this single-cell prototype, it is straightforward to develop a multi-cell prototype, the first SPA ever built completely out of SH polymers.
Height and weight errors in aeromedical certification data.
DOT National Transportation Integrated Search
1973-06-01
The Framingham Relative Weight Index (FRWI) of obesity was described in previous reports as a screening aid for detecting susceptibility to coronary heart disease (CHD). FRWI calculation requires measured values of height and weight but the height an...
NASA Astrophysics Data System (ADS)
Issachar, R.; Levi, T.; Lyakhovsky, V.; Marco, S.; Weinberger, R.
2016-07-01
This study examines the limitations of the method of low-temperature anisotropy of magnetic susceptibility (LT-AMS) measurements in air and presents technical improvements that significantly reduce the instrumental drift and measurement errors. We analyzed the temperature profile of a porous chalk core after cooling in liquid nitrogen and found that the average temperature of the sample during the LT-AMS measurement in air is higher than 77 K and close to 92 K. This analysis indicates that the susceptibility of the paramagnetic minerals is amplified by a factor of ~3.2 relative to that of room-temperature AMS (RT-AMS). In addition, liquid nitrogen absorbed by the samples during immersion contributed a diamagnetic component of ~-9 × 10⁻⁶ SI to the total mean susceptibility. We showed that a silicone sheet placed around and at the bottom of the measuring coil is an effective thermal protection, preventing instrument drift caused by the cold sample. In this way, the measurement errors of LT-AMS were reduced to the level of RT-AMS, allowing accurate comparison with standard AMS measurements. We examined the applicability of LT-AMS measurements on chalk samples that contain <5% (by weight) paramagnetic minerals and showed that the method efficiently enhances the paramagnetic fabric. The present study offers a practical approach, which can be applied to various types of rocks to better delineate the paramagnetic phase using conventional equipment.
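The ~3.2 amplification factor is what a simple Curie-law estimate (paramagnetic susceptibility proportional to 1/T) predicts, assuming a room temperature of about 295 K, which is an assumed value rather than one stated in the abstract:

```python
# Curie law: paramagnetic susceptibility scales as 1/T, so cooling from
# room temperature (assumed ~295 K) to the effective measurement
# temperature of 92 K amplifies the paramagnetic signal by T_room / T_eff.
T_room, T_eff = 295.0, 92.0
gain = T_room / T_eff   # ~3.2, consistent with the factor reported above
```

This also shows why the corrected effective temperature matters: assuming the nominal 77 K of liquid nitrogen instead would predict a gain of ~3.8 and bias any quantitative comparison with RT-AMS.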
NASA Astrophysics Data System (ADS)
Marsilius, Mie; Granzow, Torsten; Jones, Jacob L.
2011-02-01
The superior piezoelectric properties of all polycrystalline ferroelectrics are based on the extent of non-180° domain wall motion under electrical and mechanical poling loads. To distinguish between 180° and non-180° domain wall motion in a soft-doped and a hard-doped lead zirconate titanate (PZT) ceramic, domain texture measurements were performed using x-ray and neutron diffraction after different loading procedures. Comparing the results to measurements of the remanent strain and piezoelectric coefficient allowed the differentiation between different microstructural contributions to the macroscopic parameters. Both types of ceramic showed similar behavior under electric field, but the hard-doped material was more susceptible to mechanical load. A considerable fraction of the piezoelectric coefficient originated from poling by the preferred orientation of 180° domains.
Levels of immunity parameters underpin bleaching and disease susceptibility of reef corals.
Palmer, Caroline V; Bythell, John C; Willis, Bette L
2010-06-01
Immunity is a key life history trait that may explain hierarchies in the susceptibility of corals to disease and thermal bleaching, two of the greatest current threats to coral health and the persistence of tropical reefs. Despite their ongoing and rapid global decline, there have been few investigations into the immunity mechanisms of reef-building corals. Variables commonly associated with invertebrate immunity, including the presence of melanin, size of melanin-containing granular cells, and phenoloxidase (PO) activity, as well as concentrations of fluorescent proteins (FPs), were investigated in hard (Scleractinia) and soft (Alcyonacea) corals spanning 10 families from the Great Barrier Reef. Detectable levels of these indicators were present in all corals investigated, although relative investment differed among coral taxa. Overall levels of investment were inversely correlated to thermal bleaching and disease susceptibility. In addition, PO activity, melanin-containing granular cell size, and FP concentration were each found to be significant predictors of susceptibility and thus may play key roles in coral immunity. Correlative evidence that taxonomic (family-level) variation in the levels of these constituent immunity parameters underpins susceptibility to both thermal bleaching and disease indicates that baseline immunity underlies the vulnerability of corals to these two threats. This reinforces the necessity of a holistic approach to understanding bleaching and disease in order to accurately determine the resilience of coral reefs.
Rossi X-Ray Timing Explorer All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Bradt, Hale V.; Levine, Alan M.
1999-07-01
The fourth unambiguously identified soft gamma repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15. Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19") annulus. We present two bursts from this source observed by the All-Sky Monitor (ASM) on the Rossi X-Ray Timing Explorer. We use the ASM data to further constrain the source location to a 5' long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3 arcmin of the supernova remnant G337.0-0.1. The probability that a supernova remnant would fall so close to the error box purely by chance is ~5%.
RXTE All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, D. A.; Bradt, H. V.; Levine, A. M.
1999-09-01
The fourth unambiguously identified Soft Gamma Repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15 (Kouveliotou et al. 1998). Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19″) annulus (Hurley et al. 1999; Woods et al. 1998). We report on two bursts from this source observed by the All-Sky Monitor (ASM) on RXTE. We use the ASM data to further constrain the source location to a 5′ long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3′ of the supernova remnant (SNR) G337.0-0.1. The probability that a SNR would fall so close to the error box purely by chance is ~5%.
NASA Astrophysics Data System (ADS)
Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.
2016-09-01
Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h⁻¹ to 250 mm·h⁻¹) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a substantial deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R² > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R² > 0.7) and a T vs. 1/Q model (R² > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better-protected tipping bucket mechanism and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging, frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors.
The findings from this study will be useful for correcting field data from TBRs and may have major implications for field- and watershed-scale hydrologic studies.
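A minimal sketch of the intensity-based linear correction described above, with made-up calibration numbers standing in for the study's lab data (the actual fitted coefficients are instrument- and setting-specific):

```python
import numpy as np

# Hypothetical calibration pairs: reference intensity (mm/h) vs. raw TBR
# reading (mm/h); under-catch grows with intensity, as observed for TBRs.
ref = np.array([5.0, 25.0, 50.0, 100.0, 150.0, 250.0])
tbr = np.array([4.9, 24.2, 47.8, 94.0, 139.5, 228.0])

# Fit reference = a * reading + b, mirroring the simple linear
# regression corrections (R^2 > 0.98) developed in the study.
a, b = np.polyfit(tbr, ref, 1)

def correct_reading(reading):
    """Apply the linear calibration to a raw TBR intensity reading."""
    return a * reading + b
```

Because the fit is re-derived per gauge and per volumetric setting, the coefficients must be recomputed whenever the bucket calibration changes, which is the sensitivity the study highlights.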
Single Event Effect Testing of the Micron MT46V128M8
NASA Technical Reports Server (NTRS)
Stansberry, Scott; Campola, Michael; Wilcox, Ted; Seidleck, Christina; Phan, Anthony
2017-01-01
The Micron MT46V128M8 was tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in June of 2017. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI) and possible soft data errors classified as single event upsets (SEU).
Micromagnetic Study of Perpendicular Magnetic Recording Media
NASA Astrophysics Data System (ADS)
Dong, Yan
With increasing areal density in magnetic recording systems, perpendicular recording has successfully replaced longitudinal recording to mitigate the superparamagnetic limit. The extensive theoretical and experimental research associated with perpendicular magnetic recording media has contributed significantly to improving magnetic recording performance. Micromagnetic studies on perpendicular recording media, including aspects of the design of hybrid soft underlayers, media noise properties, inter-grain exchange characterization and ultra-high density bit patterned media recording, are presented in this dissertation. To improve the writability of recording media, one needs to reduce the head-to-keeper spacing while maintaining a good texture growth for the recording layer. A hybrid soft underlayer, consisting of a thin crystalline soft underlayer stacked above a non-magnetic seed layer and a conventional amorphous soft underlayer, provides an alternative approach for reducing the effective head-to-keeper spacing in perpendicular recording. Micromagnetic simulations indicate that the media using a hybrid soft underlayer helps enhance the effective field and the field gradient in comparison with conventional media that uses only an amorphous soft underlayer. The hybrid soft underlayer can support a thicker non-magnetic seed layer yet achieve an equivalent or better effective field and field gradient. A noise plateau for intermediate recording densities is observed for a recording layer of typical magnetization. Medium noise characteristics and transition jitter in perpendicular magnetic recording are explored using micromagnetic simulation. The plateau is replaced by a normal linear dependence of noise on recording density for a low magnetization recording layer. We show analytically that a source of the plateau is similar to that producing the Non-Linear-Transition-Shift of signal. 
In particular, magnetostatic effects are predicted to produce positive correlation of jitter and thus negative correlation of noise at the densities associated with the plateau. One focus for developing perpendicular recording media is on how to extract intergranular exchange coupling and intrinsic anisotropy field dispersion. A micromagnetic numerical technique is developed to effectively separate the effects of intergranular exchange coupling and anisotropy dispersion by finding their correlation to differentiated M-H curves with different initial magnetization states, even in the presence of thermal fluctuation. The validity of this method is investigated with a series of intergranular exchange couplings and anisotropy dispersions for different media thickness. This characterization method allows for an experimental measurement employing a vibrating sample magnetometer (VSM). Bit patterned media have been suggested to extend areal density beyond 1 Tbit/in². The feasibility of 4 Tbit/in² bit patterned recording is determined by aspects of write head design and media fabrication, and is estimated by the bit error rate. Micromagnetic specifications including 2.3:1 BAR bit patterned exchange coupled composite media, trailing shield, and side shields are proposed to meet the requirement of 3×10⁻⁴ bit error rate, 4 nm fly height, 5% switching field distribution, 5% timing and 5% jitter errors for 4 Tbit/in² bit-patterned recording. Demagnetizing field distribution is examined by studying the shielding effect of the side shields on the stray field from the neighboring dots. For recording self-assembled bit-patterned media, the head design writes two staggered tracks in a single pass and has maximum perpendicular field gradients of 580 Oe/nm along the down-track direction and 476 Oe/nm along the cross-track direction. The geometry demanded by self-assembly reduces recording density to 2.9 Tbit/in².
Estimation of Fetal Weight during Labor: Still a Challenge.
Barros, Joana Goulão; Reis, Inês; Pereira, Isabel; Clode, Nuno; Graça, Luís M
2016-01-01
To evaluate the accuracy of fetal weight prediction by ultrasonography in labor, employing a formula including the linear measurements of femur length (FL) and mid-thigh soft-tissue thickness (STT). We conducted a prospective study involving singleton uncomplicated term pregnancies within 48 hours of delivery. Only pregnancies with a cephalic fetus admitted to the labor ward for elective cesarean section, induction of labor or spontaneous labor were included. We excluded all non-Caucasian women, those previously diagnosed with gestational diabetes and those with evidence of ruptured membranes. Fetal weight estimates were calculated using a previously proposed formula [estimated fetal weight = 1687.47 + (54.1 x FL) + (76.68 x STT)]. The relationship between actual birth weight and estimated fetal weight was analyzed using Pearson's correlation. The formula's performance was assessed by calculating the signed and absolute errors. Mean weight difference and signed percentage error were calculated for birth weight divided into three subgroups: < 3000 g; 3000-4000 g; and > 4000 g. We included 145 cases for analysis and found a significant, yet weak, linear relationship between birth weight and estimated fetal weight (p < 0.001; R² = 0.197), with an absolute mean error of 10.6%. The lowest mean percentage error (0.3%) corresponded to the subgroup with birth weight between 3000 g and 4000 g. This study demonstrates a poor correlation between actual birth weight and the fetal weight estimated using a formula based on femur length and mid-thigh soft-tissue thickness, both linear parameters. Although avoidance of circumferential ultrasound measurements might prove beneficial, a fetal weight estimation formula that is both accurate and simple to perform has yet to be found.
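The formula above is simple enough to apply directly. A sketch in Python, where the measurement units are assumed to follow the original study's protocol (the abstract does not state them) and the error metric mirrors the signed percentage error used in the evaluation:

```python
def estimated_fetal_weight(fl, stt):
    """Estimated fetal weight (g) from femur length (FL) and mid-thigh
    soft-tissue thickness (STT), per the formula quoted above."""
    return 1687.47 + (54.1 * fl) + (76.68 * stt)

def signed_percent_error(efw, birth_weight):
    """Signed percentage error of the estimate against actual birth weight."""
    return 100.0 * (efw - birth_weight) / birth_weight
```

Note that both predictors enter linearly, which is precisely why the formula avoids circumferential measurements but also why its explanatory power is limited (R² = 0.197 in this cohort).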
Determining the Magnetic Properties of 1 kg Mass Standards
Davis, Richard S.
1995-01-01
Magnetic interactions may lead to errors in precision mass metrology. An analytical description of such magnetic errors is presented in which the roles of both the volume magnetic susceptibility and permanent magnetization are discussed. The same formalism is then used to describe in detail the calibration and operation of a susceptometer developed at the Bureau International des Poids et Mesures (BIPM). The device has been optimized for the determination of the magnetic properties of 1 kg mass standards. PMID:29151735
Training for Rapid Interpretation of Voluminous Multimodal Data
2008-04-01
determined after the research that the corresponding FAC for the randomness and oversensitivity biases could be reasonably construed as true incidents...and the low overall susceptibility to errors in Experiment 4 made control comparisons irrelevant. Further research might employ similar methodology
McAuliffe, Gary N.; Hennessy, Jann; Baird, Robert W.
2015-01-01
Vibrio, Aeromonas, Chromobacterium violaceum, and Shewanella (VACS) are water-associated Gram-negative organisms that can cause a variety of infections. The frequency, patient characteristics, and antimicrobial susceptibilities for 468 isolates from 442 patients from the Northern Territory were reviewed. Aeromonas spp. (312 of 468; 67%) were most commonly isolated followed by Vibrio spp. (71 of 468; 15%), Shewanella spp. (61 of 468; 13%), and C. violaceum (24 of 468; 5%). A strong male predominance was found (male to female ratio of 2.3:1). Skin and soft tissue isolations (373 of 468; 80%) from lower limb infections (222 of 371; 60%) were the most common clinical manifestation. The episodes were usually polymicrobial (281 of 468; 60%). Co-isolates included Staphylococcus aureus (137 of 468; 29%), β-hemolytic streptococci (74 of 468; 16%), Enterobacteriaceae (111 of 468; 24%), non-fermentative Gram-negative bacilli (35 of 468; 7%), and other VACS organisms (37 of 468; 8%). Antimicrobial resistance of VACS organisms to ciprofloxacin (0-4%), cefepime (0-3%), and gentamicin (0-0.8%), and of Vibrio spp., Aeromonas spp., and Shewanella to cotrimoxazole (0-3%), was rarely observed. For water-associated lower limb skin and soft tissue infections in the tropics, clinicians should consider empirical antimicrobial therapy with agents active against S. aureus and VACS organisms. PMID:25548380
Grinberg, Alex; Biggs, Patrick J; Zhang, Ji; Ritchie, Stephen; Oneroa, Zachary; O'Neill, Charlotte; Karkaba, Ali; Velathanthiri, Niluka S; Coombs, Geoffrey W
2017-10-01
Staphylococcus aureus skin and soft tissue infection (Sa-SSTI) places a significant burden on healthcare systems. New Zealand has a high incidence of Sa-SSTI, and here most morbidity is caused by a polyclonal methicillin-susceptible (MSSA) bacterial population. However, MSSA also asymptomatically colonise the cornified epithelia of approximately 20% of the population, and their divide between commensalism and pathogenicity is poorly understood. We aimed to see whether MSSA are genetically differentiated across colonisation and SSTI and, given the close interactions between people and pets, whether strains isolated from pets differ from human strains. We compared the genomes of contemporaneous colonisation and clinical MSSA isolates obtained in New Zealand from humans and pets. Core and accessory genome comparisons revealed a homogeneous bacterial population across colonisation, disease, humans, and pets. The rate of MSSA colonisation in dogs was comparatively low (5.4%). In New Zealand, most Sa-SSTI morbidity is caused by a random sample of the colonising MSSA population, consistent with the opportunistic infection model rather than the paradigm distinguishing strains according to their pathogenicity. Thus, studies of the factors determining colonisation and immune escape may be more beneficial than comparative virulence studies. Contact with household pets may pose low zoonotic risk. Copyright © 2017 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
Radiation damage studies of soft magnetic metallic glasses irradiated with high-energy heavy ions
NASA Astrophysics Data System (ADS)
Pavlovič, Márius; Miglierini, Marcel; Mustafin, Edil; Ensinger, Wolfgang; Šagátová, Andrea; Šoka, Martin
2015-01-01
Some soft magnetic metallic glasses are considered for use in magnetic cores of accelerator radio-frequency cavities. Due to losses of the circulating ion beam, they may be exposed to irradiation by different ions at different energies. This paper presents data and reviews the results of irradiation experiments concerning the influence of high-energy heavy ions on the magnetic susceptibility of VITROPERM®-type metallic glasses. Samples of the VITROPERM® magnetic ribbons were irradiated by Au, Xe and U ions at 11.1 MeV/A (per nucleon) and 5.9 MeV/A, respectively. Irradiation fluences from 1 × 10^11 up to 1 × 10^13 ions/cm^2 were applied. In the case of the Au and U ions, the total fluence was accumulated in one beamtime, whereas two separate beamtimes were used to accumulate the final fluence in the case of the Xe ions. The relative change in the samples' magnetic susceptibility (after versus before irradiation) was evaluated as a function of the irradiation fluence. The irradiation experiments were performed with the UNILAC accelerator at GSI Helmholtzzentrum für Schwerionenforschung GmbH. They were simulated in SRIM2010 in order to obtain ionization densities (electronic stopping, dE/dx) and dpa (displacements per atom) caused by the ion beams in the sample material. This paper focuses mainly on the results collected in experiments with the Xe ions and compares them with data obtained in earlier experiments using Au and U ions. The radiation hardness of VITROPERM® is compared with that of VITROVAC®, which was studied in previous experiments. The VITROPERM® samples showed a smaller drop in magnetic susceptibility than the VITROVAC® ones, and this drop occurred at higher fluences. This indicates the higher radiation hardness of VITROPERM® compared with VITROVAC®. In addition, heavier ions cause a larger change in magnetic susceptibility than lighter ones.
The effect can be roughly scaled with electronic stopping, which suggests that the main mechanism of radiation damage is associated with swift electrons generated in the material via ionization by primary heavy ions.
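The quantity tracked in these experiments, the relative change in magnetic susceptibility between the pre- and post-irradiation states, can be sketched as follows; the numerical readings are hypothetical, not values from the paper:

```python
def relative_susceptibility_change(chi_before, chi_after):
    """Relative change in magnetic susceptibility,
    (chi_after - chi_before) / chi_before.
    Negative values correspond to the drop observed after irradiation."""
    return (chi_after - chi_before) / chi_before

# Hypothetical readings for one ribbon sample at increasing fluence:
# the drop grows (more negative) as the accumulated fluence increases.
drops = [relative_susceptibility_change(100.0, after)
         for after in (99.5, 97.0, 85.0)]
```

Comparing such curves for two materials at matched fluences (and normalizing by electronic stopping dE/dx) is one simple way to express the "smaller drop at higher fluence" criterion for radiation hardness used above.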
Mazzariol, Annarita; Aldegheri, Marco; Ligozzi, Marco; Lo Cascio, Giuliana; Koncan, Raffaella; Fontana, Roberta
2008-01-01
A total of 78 isolates of Pseudomonas aeruginosa grouped according to the phenotype for ceftazidime and imipenem susceptibility/resistance were used to assess the accuracy of the Vitek 2 system in antimicrobial susceptibility testing. Comparisons were made with a MIC gradient test for piperacillin-tazobactam, ceftazidime, aztreonam, imipenem, meropenem, gentamicin, and ciprofloxacin. For the total of 546 isolate-antimicrobial combinations tested, the category agreement was 83.6%, with 2.0, 1.6, and 12.8% very major, major, and minor errors, respectively. Vitek 2 accuracy was influenced differently by the mechanism responsible for resistance, and interpretation of the results in relation to phenotype could improve the performance of the system. PMID:18434562
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 × 10^-14 ergs/sq cm/s. The closest source is approximately 3 arcmin away, and has an estimated unabsorbed flux of 1.5 × 10^-12 ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 degrees away. For these reasons, we believe that the first error box is more likely to be the correct one.
NASA Technical Reports Server (NTRS)
Spector, E.; LeBlanc, A.; Shackelford, L.
1995-01-01
This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and % fat of the arms and legs. In addition EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes. 
BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
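The precision figures quoted above are percent coefficients of variation. A minimal sketch of the %CV computation for one subject's repeated scans follows; note that the study may aggregate across subjects with a root-mean-square average, which is not shown here:

```python
import statistics

def percent_cv(values):
    """Short-term precision as percent coefficient of variation (%CV):
    sample standard deviation divided by the mean, times 100.
    `values` are repeated measurements of the same quantity
    (e.g. whole-body BMD from the three scan visits)."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)
```

For example, three repeated lean-mass values of 9, 10 and 11 kg give a %CV of 10%, while identical repeats give 0%.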
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-08-24
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
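The decoupling step described above maps magnetic field readings to force with a moving least squares fit. A minimal, hypothetical sketch of one such locally weighted linear fit; the Gaussian kernel width `h` and the linear basis are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def mls_predict(X, y, x_query, h=1.0):
    """Moving least squares: fit a locally weighted linear model around
    x_query and evaluate it there.
    X: (n, d) calibration inputs (e.g. magnetic field components)
    y: (n,)   calibration outputs (e.g. force along one axis)."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    x_query = np.asarray(x_query, float)
    # Gaussian distance weights centred on the query point
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * h ** 2))
    A = np.hstack([X, np.ones((len(X), 1))])          # linear basis [x, 1]
    # Weighted normal equations: (A^T W A) c = A^T W y
    coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return float(np.append(x_query, 1.0) @ coef)
```

On exactly linear calibration data the local fit reproduces the underlying line, which is why this scheme can remove cross-talk when applied per output axis.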
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-01-01
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design. PMID:27563908
Li, Lingyun; Zhang, Fuming; Hu, Min; Ren, Fuji; Chi, Lianli; Linhardt, Robert J.
2016-01-01
Low molecular weight heparins are complex polycomponent drugs that have recently become amenable to top-down analysis using liquid chromatography-mass spectrometry. Even using open source deconvolution software, DeconTools, and automatic structural assignment software, GlycReSoft, the comparison of two or more low molecular weight heparins is extremely time-consuming, taking about a week for an expert analyst and providing no guarantee of accuracy. Efficient data processing tools are required to improve analysis. This study uses the Microsoft Excel™ Visual Basic for Applications programming language to extend its standard functionality with macro functions and specific mathematical modules for mass spectrometric data processing. The program developed enables the comparison of top-down analytical glycomics data on two or more low molecular weight heparins. The current study describes a new program, GlycCompSoft, which has a low error rate with good time efficiency in the automatic processing of large data sets. The experimental results based on three lots of Lovenox®, Clexane® and three generic enoxaparin samples show that the run time of GlycCompSoft decreases from 11 to 2 seconds when the data processed decreases from 18000 to 1500 rows. PMID:27942011
Local thermal energy as a structural indicator in glasses
NASA Astrophysics Data System (ADS)
Zylberg, Jacques; Lerner, Edan; Bar-Sinai, Yohai; Bouchbinder, Eran
2017-07-01
Identifying heterogeneous structures in glasses—such as localized soft spots—and understanding structure-dynamics relations in these systems remain major scientific challenges. Here, we derive an exact expression for the local thermal energy of interacting particles (the mean local potential energy change caused by thermal fluctuations) in glassy systems by a systematic low-temperature expansion. We show that the local thermal energy can attain anomalously large values, inversely related to the degree of softness of localized structures in a glass, determined by a coupling between internal stresses—an intrinsic signature of glassy frustration—anharmonicity and low-frequency vibrational modes. These anomalously large values follow a fat-tailed distribution, with a universal exponent related to the recently observed universal
2013-01-01
Background Staphylococcus aureus is an opportunistic commensal bacterium that mostly colonizes the skin and soft tissues. The pathogenicity of S. aureus is due to both its ability to resist antibiotics, and the production of toxins. Here, we characterize a group of genes responsible for toxin production and antibiotic resistance of S. aureus strains isolated from skin, soft tissue, and bone related infections. Results A total of 136 S. aureus strains were collected from five different types of infection: furuncles, pyomyositis, abscesses, Buruli ulcers, and osteomyelitis, from hospital admissions and out-patients in Benin. All strains were resistant to benzyl penicillin, while 25% were resistant to methicillin, and all showed sensitivity to vancomycin. Panton-Valentine leukocidin (PVL) was the most commonly produced virulence factor (70%), followed by staphylococcal enterotoxin B (44%). Exfoliative toxin B was produced by 1.3% of the strains, and was only found in isolates from Buruli ulcers. The tsst-1, sec, and seh genes were rarely detected (≤1%). Conclusions This study provides new insight into the prevalence of toxin and antibiotic resistance genes in S. aureus strains responsible for skin, soft tissue, and bone infections. Our results showed that PVL was strongly associated with pyomyositis and osteomyelitis, and that there is a high prevalence of PVL-MRSA skin infections in Benin. PMID:23924370
Sina, Haziz; Ahoyo, Théodora A; Moussaoui, Wardi; Keller, Daniel; Bankolé, Honoré S; Barogui, Yves; Stienstra, Ymkje; Kotchoni, Simeon O; Prévost, Gilles; Baba-Moussa, Lamine
2013-08-08
Staphylococcus aureus is an opportunistic commensal bacterium that mostly colonizes the skin and soft tissues. The pathogenicity of S. aureus is due to both its ability to resist antibiotics, and the production of toxins. Here, we characterize a group of genes responsible for toxin production and antibiotic resistance of S. aureus strains isolated from skin, soft tissue, and bone related infections. A total of 136 S. aureus strains were collected from five different types of infection: furuncles, pyomyositis, abscesses, Buruli ulcers, and osteomyelitis, from hospital admissions and out-patients in Benin. All strains were resistant to benzyl penicillin, while 25% were resistant to methicillin, and all showed sensitivity to vancomycin. Panton-Valentine leukocidin (PVL) was the most commonly produced virulence factor (70%), followed by staphylococcal enterotoxin B (44%). Exfoliative toxin B was produced by 1.3% of the strains, and was only found in isolates from Buruli ulcers. The tsst-1, sec, and seh genes were rarely detected (≤1%). This study provides new insight into the prevalence of toxin and antibiotic resistance genes in S. aureus strains responsible for skin, soft tissue, and bone infections. Our results showed that PVL was strongly associated with pyomyositis and osteomyelitis, and that there is a high prevalence of PVL-MRSA skin infections in Benin.
Akbarzadeh, A; Ay, M R; Ahmadian, A; Alam, N Riahi; Zaidi, H
2013-02-01
Hybrid PET/MRI presents many advantages in comparison with its counterpart PET/CT in terms of improved soft-tissue contrast, decrease in radiation exposure, and truly simultaneous and multi-parametric imaging capabilities. However, the lack of well-established methodology for MR-based attenuation correction is hampering further development and wider acceptance of this technology. We assess the impact of ignoring bone attenuation and using different tissue classes for generation of the attenuation map on the accuracy of attenuation correction of PET data. This work was performed using simulation studies based on the XCAT phantom and clinical input data. For the latter, PET and CT images of patients were used as input for the analytic simulation model using realistic activity distributions where CT-based attenuation correction was utilized as reference for comparison. For both phantom and clinical studies, the reference attenuation map was classified into various numbers of tissue classes to produce three (air, soft tissue and lung), four (air, lungs, soft tissue and cortical bones) and five (air, lungs, soft tissue, cortical bones and spongeous bones) class attenuation maps. The phantom studies demonstrated that ignoring bone increases the relative error by up to 6.8% in the body and up to 31.0% for bony regions. Likewise, the simulated clinical studies showed that the mean relative error reached 15% for lesions located in the body and 30.7% for lesions located in bones, when neglecting bones. These results demonstrate an underestimation of about 30% of tracer uptake when neglecting bone, which in turn imposes substantial loss of quantitative accuracy for PET images produced by hybrid PET/MRI systems. Considering bones in the attenuation map will considerably improve the accuracy of MR-guided attenuation correction in hybrid PET/MR to enable quantitative PET imaging on hybrid PET/MR technologies.
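The tissue-class attenuation maps described above can be illustrated by thresholding CT values into classes and assigning each class a linear attenuation coefficient at 511 keV. The thresholds and coefficients below are rough illustrative values, not those used in the study; setting `use_bone=False` mimics the "ignore bone" condition the authors evaluate:

```python
def classify_voxel(hu):
    """Map a CT value (Hounsfield units) to one of four tissue classes.
    Thresholds are illustrative, not the study's."""
    if hu < -900:
        return "air"
    if hu < -200:
        return "lung"
    if hu < 200:
        return "soft"
    return "bone"

# Approximate 511 keV linear attenuation coefficients (cm^-1), illustrative
MU_511 = {"air": 0.0, "lung": 0.03, "soft": 0.096, "bone": 0.13}

def attenuation_map(hu_values, use_bone=True):
    """Build a 4-class mu-map; with use_bone=False, bone voxels fall back
    to soft tissue, reproducing the 'neglect bone' attenuation map."""
    classes = [classify_voxel(h) for h in hu_values]
    if not use_bone:
        classes = ["soft" if c == "bone" else c for c in classes]
    return [MU_511[c] for c in classes]
```

Because bone has the highest coefficient, replacing it with soft tissue systematically under-corrects attenuation there, which is the mechanism behind the ~30% underestimation reported for bony lesions.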
Lee, Dae-Hee; Park, Sung-Chul; Park, Hyung-Joon; Han, Seung-Beom
2016-12-01
Open-wedge high tibial osteotomy (HTO) cannot always accurately correct limb alignment, resulting in under- or over-correction. This study assessed the relationship between soft tissue laxity of the knee joint and alignment correction in open-wedge HTO. This prospective study involved 85 patients (86 knees) undergoing open-wedge HTO for primary medial osteoarthritis. The mechanical axis (MA), weight-bearing line (WBL) ratio, and joint line convergence angle (JLCA) were measured on radiographs preoperatively and after 6 months, and the differences between the pre- and post-surgery values were calculated. Post-operative WBL ratios of 57-67 % were classified as acceptable correction. WBL ratios <57 and >67 % were classified as under- and over-corrections, respectively. Preoperative JLCA correlated positively with differences in MA (r = 0.358, P = 0.001) and WBL ratio (P = 0.003). The difference in JLCA showed a stronger correlation than preoperative JLCA with differences in MA (P < 0.001) and WBL ratio (P < 0.001). The difference in JLCA was the only predictor of both the difference in MA (P < 0.001) and the difference in WBL ratio (P < 0.001). The difference between pre- and post-operative JLCA differed significantly between the under-correction, acceptable-correction, and over-correction groups (P = 0.033). Preoperative JLCA, however, did not differ significantly between the three groups. Neither preoperative JLCA nor the difference in JLCA correlated with change in posterior slope. The preoperative degree of soft tissue laxity in the knee joint was related to the degree of alignment correction, but not to alignment correction error, in open-wedge HTO. The change in soft tissue laxity around the knee from before to after open-wedge HTO correlated with both the amount of correction and the correction error. Therefore, an excessively large pre- to post-operative change in JLCA may reflect an overly large reduction in JLCA following osteotomy, suggesting alignment over-correction during surgery.
3D microwave tomography of the breast using prior anatomical information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golnabi, Amir H., E-mail: golnabia@montclair.edu; Meaney, Paul M.; Paulsen, Keith D.
2016-04-15
Purpose: The authors have developed a new 3D breast image reconstruction technique that utilizes the soft tissue spatial resolution of magnetic resonance imaging (MRI) and integrates the dielectric property differentiation from microwave imaging to produce a dual modality approach with the goal of augmenting the specificity of MR imaging, possibly without the need for nonspecific contrast agents. The integration is performed through the application of a soft prior regularization which imports segmented geometric meshes generated from MR exams and uses them to constrain the microwave tomography algorithm to recover nearly uniform property distributions within segmented regions with sharp delineation between these internal subzones. Methods: Previous investigations have demonstrated that this approach is effective in 2D simulation and phantom experiments and also in clinical exams. The current study extends the algorithm to 3D and provides a thorough analysis of the sensitivity and robustness to misalignment errors in size and location between the spatial prior information and the actual data. Results: Image results in 3D were not strongly dependent on reconstruction mesh density, and the changes of less than 30% in recovered property values arose from variations of more than 125% in target region size—an outcome which was more robust than in 2D. Similarly, changes of less than 13% occurred in the 3D image results from variations in target location of nearly 90% of the inclusion size. Permittivity and conductivity errors were about 5 times and 2 times smaller, respectively, with the 3D spatial prior algorithm in actual phantom experiments than those which occurred without priors. Conclusions: The presented study confirms that the incorporation of structural information in the form of a soft constraint can considerably improve the accuracy of the property estimates in predefined regions of interest.
These findings are encouraging and establish a strong foundation for using the soft prior technique in clinical studies, where their microwave imaging system and MRI can simultaneously collect breast exam data in patients.
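The soft prior regularization described above constrains property variation within MR-segmented regions. One common way to realize such a constraint, sketched here as an assumption rather than the authors' exact operator, is a matrix L whose rows subtract the mean over the other nodes sharing a region label, so that ||Lx||^2 penalizes only within-region variation:

```python
import numpy as np

def soft_prior_matrix(labels):
    """Build a regularization matrix L from per-node region labels
    (e.g. from an MR-segmented mesh). Row i computes
    x_i minus the mean of the other nodes in node i's region, so
    L @ x vanishes on any x that is uniform within each region."""
    n = len(labels)
    L = np.eye(n)
    for i, li in enumerate(labels):
        members = [j for j in range(n) if labels[j] == li and j != i]
        if members:
            for j in members:
                L[i, j] = -1.0 / len(members)
        else:
            L[i, i] = 0.0  # singleton region: nothing to constrain
    return L
```

Adding ||Lx||^2 to the data-misfit objective drives the reconstruction toward nearly uniform properties inside each segmented region while leaving the between-region contrast free, which is the behaviour the abstract describes.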
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high-field MRI - a topic for future investigations. The proposed dipole kernel can be incorporated straightforwardly into existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
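For reference, the continuous-Fourier dipole kernel that the paper's discrete formulation replaces is D(k) = 1/3 - kz^2/|k|^2. A minimal sketch of its construction on an FFT grid; the zero-frequency convention D(0) = 0 is a common choice, not necessarily the paper's:

```python
import numpy as np

def dipole_kernel_continuous(shape, voxel=(1.0, 1.0, 1.0)):
    """Continuous-formulation dipole kernel D(k) = 1/3 - kz^2 / |k|^2
    sampled on the FFT frequency grid of a volume of the given shape.
    Multiplying a susceptibility map's FFT by D gives the forward
    field perturbation (in ppm, z along the main field)."""
    ks = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel)]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[k2 == 0] = 0.0  # common convention at the k-space origin
    return D
```

The discrete variant discussed in the paper replaces the continuous frequency terms with discrete-operator spectra, which damps the high-frequency aliasing this continuous sampling exhibits.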
Enhancement of MS2D Bartington point measurement of soil magnetic susceptibility
NASA Astrophysics Data System (ADS)
Fabijańczyk, Piotr; Zawadzki, Jarosław
2015-04-01
Field magnetometry is a fast method used to assess potential soil pollution. The most popular device for measuring magnetic susceptibility at the soil surface is the MS2D Bartington sensor. A single MS2D reading of soil magnetic susceptibility takes little time but is often affected by considerable errors related to the instrument or to environmental and lithogenic factors. Typically, in order to calculate a reliable average value of soil magnetic susceptibility, a series of MS2D readings is performed at each sample point. As shown previously, this methodology makes it possible to significantly reduce the nugget effect of the variograms of soil magnetic susceptibility, which is related to micro-scale variance and measurement errors. The goal of this study was to optimize the process of taking a series of MS2D readings, whose average value constitutes a single measurement, so that micro-scale variations of soil magnetic susceptibility are properly accounted for. This was done using statistical and geostatistical analyses. The analyses were performed on field MS2D measurements carried out in a study area located in the direct vicinity of the Katowice agglomeration. At 150 sample points, 10 MS2D readings of soil magnetic susceptibility were taken. Using this data set, a series of experimental variograms was calculated and modeled: first using a single random MS2D reading for each sample point, and then increasing the data set by one MS2D reading at a time until their number reached 10. The variogram parameters (nugget effect, sill, and range of correlation) were used to determine the most suitable number of MS2D readings per sample point. The distributions of soil magnetic susceptibility at each sample point were also analyzed to determine the number of readings needed to calculate a reliable average soil magnetic susceptibility.
The research leading to these results has received funding from the Polish-Norwegian Research Programme operated by the National Centre for Research and Development under the Norwegian Financial Mechanism 2009-2014 in the frame of Project IMPACT - Contract No Pol-Nor/199338/45/2013. References: Zawadzki J., Magiera T., Fabijańczyk P., 2007. The influence of forest stand and organic horizon development on soil surface measurement of magnetic susceptibility. Polish Journal of Soil Science, XL(2), 113-124. Zawadzki J., Fabijańczyk P., Magiera T., Strzyszcz Z., 2010. Study of litter influence on magnetic susceptibility measurements of urban forest topsoils using the MS2D sensor. Environmental Earth Sciences, 61(2), 223-230.
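The core question in the study above, how many MS2D readings are enough for a stable point average, can be caricatured with a simple running-mean stopping rule; this is a stand-in for the geostatistical variogram criterion actually used, and the readings and tolerance are hypothetical:

```python
import statistics

def readings_needed(readings, rel_target=0.05):
    """Smallest number of MS2D readings whose running mean is within
    rel_target (as a fraction) of the mean of all available readings.
    A simple surrogate for the variogram-based criterion in the paper."""
    full_mean = statistics.mean(readings)
    for n in range(1, len(readings) + 1):
        running = statistics.mean(readings[:n])
        if abs(running - full_mean) <= rel_target * abs(full_mean):
            return n
    return len(readings)
```

With ten readings per point, this kind of rule answers whether, say, three or four readings would already have given an average indistinguishable from the full ten.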
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for earth remote sensors, while vibration of the remote sensor platforms is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology in image-motion prediction, and focuses on optimizing the image-motion prediction algorithm. Simulation results indicate that an improved lucky image-motion stabilization algorithm combining a Back Propagation neural network (BP NN) and a support vector machine (SVM) is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on the soft-sensor technology is below 5%, and the mathematical prediction model trains and computes fast enough for real-time image stabilization in aerial photography.
Self-sensing of dielectric elastomer actuator enhanced by artificial neural network
NASA Astrophysics Data System (ADS)
Ye, Zhihang; Chen, Zheng
2017-09-01
Dielectric elastomer (DE) is a type of soft actuating material whose shape can be changed under electrical voltage stimuli. DE materials have promising uses in future soft actuators and sensors, such as soft robotics, energy harvesters, and wearable sensors. In this paper, a stripe DE actuator with integrated sensing capability is designed, fabricated, and characterized. Since the strip actuator can be approximated as a compliant capacitor, it is possible to detect the actuator's displacement by analyzing the actuator's impedance change. An integrated sensing scheme that adds a high-frequency probing signal to the actuation signal is developed. Electrical impedance changes in the probing signal are extracted by a fast Fourier transform algorithm, and nonlinear data-fitting methods involving an artificial neural network are implemented to detect the actuator's displacement. A series of experiments shows that by improving the data processing and analysis methods, the integrated sensing method can achieve an error level lower than 1%.
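The probing-signal extraction described above can be sketched with a synthetic example: a slow actuation waveform plus a small high-frequency probing tone, with the tone's amplitude recovered from a single-sided FFT spectrum. All frequencies and amplitudes here are illustrative, not the paper's operating values:

```python
import numpy as np

fs, n = 10_000.0, 10_000                    # sample rate (Hz), samples (1 s)
t = np.arange(n) / fs
actuation = 2.0 * np.sin(2 * np.pi * 5 * t)     # slow actuation component
probe = 0.1 * np.sin(2 * np.pi * 1000 * t)      # high-frequency probing tone
signal = actuation + probe

# Single-sided amplitude spectrum: |rfft| scaled by 2/n recovers amplitudes
spec = np.abs(np.fft.rfft(signal)) * 2.0 / n
freqs = np.fft.rfftfreq(n, 1.0 / fs)
probe_amp = spec[np.argmin(np.abs(freqs - 1000.0))]
```

Tracking `probe_amp` over time tracks the actuator's impedance at the probe frequency, which the authors then map to displacement via neural-network fitting.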
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-08-06
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method that relies on the measurement of the received signal strength indicator (RSSI) from three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
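Range-based RSSI localization of the kind described above typically starts from the log-distance path-loss model, with the soft computing models then correcting its errors. A minimal sketch of the model inversion; the reference RSSI at 1 m and the path-loss exponent are assumed values, not parameters reported by the authors:

```python
def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n_path=2.0):
    """Invert the log-distance path-loss model:
        RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)
    =>  d = d0 * 10 ** ((RSSI(d0) - RSSI) / (10 * n))
    rssi_d0 is the (assumed) RSSI at reference distance d0 metres,
    n_path the (assumed) path-loss exponent."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10.0 * n_path))
```

In practice the exponent and reference power vary strongly between indoor and outdoor velodromes, which is why a trained ANFIS or hybrid ANN can beat the raw model.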
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-01-01
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method that relies on the measurement of the received signal strength indicator (RSSI) from three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
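The link-budget arithmetic behind the quoted distances is plain dB algebra; a sketch with hypothetical values follows (the water attenuation and fixed-loss figures below are illustrative assumptions, not the numbers used by the authors):

```python
def received_fraction(loss_db):
    """Fraction of transmitted photons surviving a given link loss in dB."""
    return 10 ** (-loss_db / 10.0)

def max_distance(budget_db, attenuation_db_per_m, fixed_loss_db=0.0):
    """Link distance (m) that exhausts a dB budget, given a per-metre
    water attenuation and fixed system losses (all values hypothetical)."""
    return (budget_db - fixed_loss_db) / attenuation_db_per_m
```

For instance, a 97.1 dB budget against a hypothetical 0.5 dB/m water attenuation and no fixed losses would support roughly 194 m; the paper's 148 m at 517 nm implies a somewhat higher effective attenuation plus system losses.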
Moore, Stephanie N; Hawley, Gregory D; Smith, Emily N; Mignemi, Nicholas A; Ihejirika, Rivka C; Yuasa, Masato; Cates, Justin M M; Liu, Xulei; Schoenecker, Jonathan G
2016-01-01
Soft tissue calcification, including both dystrophic calcification and heterotopic ossification, may occur following injury. These lesions have variable fates as they are either resorbed or persist. Persistent soft tissue calcification may result in chronic inflammation and/or loss of function of that soft tissue. The molecular mechanisms that result in the development and maturation of calcifications are uncertain. As a result, directed therapies that prevent or resorb soft tissue calcifications remain largely unsuccessful. Animal models of post-traumatic soft tissue calcification that allow for cost-effective, serial analysis of an individual animal over time are necessary to derive and test novel therapies. We have determined that a cardiotoxin-induced injury of the muscles in the posterior compartment of the lower extremity represents a useful model in which soft tissue calcification develops remote from adjacent bones, thereby allowing for serial analysis by plain radiography. The purpose of the study was to design and validate a method for quantifying soft tissue calcifications in mice longitudinally using plain radiographic techniques and an ordinal scoring system. Muscle injury was induced by injecting cardiotoxin into the posterior compartment of the lower extremity in mice susceptible to developing soft tissue calcification. Seven days following injury, radiographs were obtained under anesthesia. Multiple researchers applied methods designed to standardize post-image processing of digital radiographs (N = 4) and quantify soft tissue calcification (N = 6) in these images using an ordinal scoring system. Inter- and intra-observer agreement for both post-image processing and the scoring system used was assessed using weighted kappa statistics. Soft tissue calcification quantifications by the ordinal scale were compared to mineral volume measurements (threshold 450.7 mg HA/cm³) determined by μCT.
Finally, sample-size calculations necessary to discriminate between a 25%, 50%, 75%, and 100% difference in STiCSS score 7 days following burn/CTX-induced muscle injury were determined. Precision analysis demonstrated substantial to good agreement for both post-image processing (κ = 0.73 to 0.90) and scoring (κ = 0.88 to 0.93), with low inter- and intra-observer variability. Additionally, there was a strong correlation between quantification of soft tissue calcification by the ordinal system and mineral volume quantification by μCT (Spearman r = 0.83 to 0.89). The ordinal scoring system reliably quantified soft tissue calcification in a burn/CTX-induced soft tissue calcification model compared to non-injured controls (Mann-Whitney rank test: P = 0.0002). Sample size calculations revealed that 6 mice per group would be required to detect a 50% difference in STiCSS score with a power of 0.8. Finally, the STiCSS was demonstrated to reliably quantify soft tissue calcification (dystrophic calcification and heterotopic ossification) by radiographic analysis, independent of the histopathological state of the mineralization. Radiographic analysis can discriminate muscle injury-induced soft tissue calcification from adjacent bone and follow its clinical course over time without requiring the sacrifice of the animal. While the STiCSS cannot identify the specific type of soft tissue calcification present, it is still a useful and valid method by which to quantify the degree of soft tissue calcification. This methodology allows for longitudinal measurements of soft tissue calcification in a single animal, which is less expensive, less time-consuming, and exposes the animal to less radiation than in vivo μCT. Therefore, this high-throughput, longitudinal analytic method for quantifying soft tissue calcification is a viable alternative for the study of soft tissue calcification.
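The weighted kappa statistic used above to assess inter- and intra-observer agreement on an ordinal scale can be computed as follows. This is a generic sketch of Cohen's weighted kappa (quadratic weights shown), not the authors' code:

```python
def weighted_kappa(rater_a, rater_b, n_categories, quadratic=True):
    """Cohen's weighted kappa for two raters scoring on an ordinal
    scale 0..n_categories-1. Quadratic weights penalise disagreements
    between distant categories more heavily than adjacent ones."""
    n = len(rater_a)
    # Joint frequency matrix of the two raters' scores.
    obs = [[0.0] * n_categories for _ in range(n_categories)]
    for i, j in zip(rater_a, rater_b):
        obs[i][j] += 1.0 / n
    row = [sum(obs[i]) for i in range(n_categories)]
    col = [sum(obs[i][j] for i in range(n_categories)) for j in range(n_categories)]
    num = den = 0.0
    for i in range(n_categories):
        for j in range(n_categories):
            w = (i - j) ** 2 if quadratic else abs(i - j)
            num += w * obs[i][j]                # observed weighted disagreement
            den += w * row[i] * col[j]          # expected under independence
    return 1.0 - num / den

# Perfect agreement gives kappa = 1; chance-level agreement gives ~0.
print(weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # 1.0
```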
Toth, I K; Newton, J A; Hyman, L J; Lees, A K; Daykin, M; Ortori, C; Williams, P; Fray, R G
2004-08-01
Many gram-negative bacteria employ N-acylhomoserine lactones (AHL) to regulate diverse physiological processes in concert with cell population density (quorum sensing [QS]). In the plant pathogen Erwinia carotovora, the AHL synthesized via the carI/expI genes are responsible for regulating the production of secreted plant cell wall-degrading exoenzymes and the antibiotic 1-carbapen-2-em-3-carboxylic acid. We have previously shown that targeting the product of an AHL synthase gene (yenI) from Yersinia enterocolitica to the chloroplasts of transgenic tobacco plants caused the synthesis in planta of the cognate AHL signaling molecules N-(3-oxohexanoyl)-L-homoserine lactone (3-oxo-C6-HSL) and N-hexanoylhomoserine lactone (C6-HSL), which, in turn, were able to complement a carI QS mutant. In the present study, we demonstrate that transgenic potato plants containing the yenI gene are also able to express AHL and that the presence and level of these AHL in the plant increase susceptibility to infection by E. carotovora. Susceptibility is further affected by both the bacterial level and the plant tissue under investigation.
Liu, Jien-Wei; Ko, Wen-Chien; Huang, Cheng-Hua; Liao, Chun-Hsing; Lu, Chin-Te; Chuang, Yin-Ching; Tsao, Shih-Ming; Chen, Yao-Shen; Liu, Yung-Ching; Chen, Wei-Yu; Jang, Tsrang-Neng; Lin, Hsiu-Chen; Chen, Chih-Ming; Shi, Zhi-Yuan; Pan, Sung-Ching; Yang, Jia-Ling; Kung, Hsiang-Chi; Liu, Chun-Eng; Cheng, Yu-Jen; Chen, Yen-Hsu; Lu, Po-Liang; Sun, Wu; Wang, Lih-Shinn; Yu, Kwok-Woon; Chiang, Ping-Cherng; Lee, Ming-Hsun; Lee, Chun-Ming; Hsu, Gwo-Jong
2012-01-01
The Tigecycline In Vitro Surveillance in Taiwan (TIST) study, initiated in 2006, is a nationwide surveillance program designed to longitudinally monitor the in vitro activity of tigecycline against commonly encountered drug-resistant bacteria. This study compared the in vitro activity of tigecycline against 3,014 isolates of clinically important drug-resistant bacteria using the standard broth microdilution and disk diffusion methods. Species studied included methicillin-resistant Staphylococcus aureus (MRSA; n = 759), vancomycin-resistant Enterococcus faecium (VRE; n = 191), extended-spectrum β-lactamase (ESBL)-producing Escherichia coli (n = 602), ESBL-producing Klebsiella pneumoniae (n = 736), and Acinetobacter baumannii (n = 726) that had been collected from patients treated between 2008 and 2010 at 20 hospitals in Taiwan. MICs and inhibition zone diameters were interpreted according to the currently recommended U.S. Food and Drug Administration (FDA) criteria and the European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria. The MIC90 values of tigecycline against MRSA, VRE, ESBL-producing E. coli, ESBL-producing K. pneumoniae, and A. baumannii were 0.5, 0.125, 0.5, 2, and 8 μg/ml, respectively. The total error rates between the two methods using the FDA criteria were high: 38.4% for ESBL-producing K. pneumoniae and 33.8% for A. baumannii. Using the EUCAST criteria, the total error rate was also high (54.6%) for A. baumannii isolates. The total error rates between these two methods were <5% for MRSA, VRE, and ESBL-producing E. coli. For routine susceptibility testing of ESBL-producing K. pneumoniae and A. baumannii against tigecycline, the broth microdilution method should be used because of the poor correlation of results between these two methods. PMID:22155819
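The very major/major/minor error taxonomy behind the rates above compares each disk-diffusion call against the broth-microdilution reference. A sketch of the standard classification (S/I/R for susceptible/intermediate/resistant):

```python
def classify_discrepancy(reference, test):
    """Standard AST method-comparison error categories, with broth
    microdilution as the reference method and disk diffusion as the test.
    Calls are 'S', 'I', or 'R'."""
    if reference == test:
        return "agreement"
    if reference == "R" and test == "S":
        return "very major"   # false susceptibility: the most dangerous error
    if reference == "S" and test == "R":
        return "major"        # false resistance
    return "minor"            # any discrepancy involving an 'I' call

def error_rates(pairs):
    """Percentage of each category over (reference, test) call pairs."""
    counts = {"agreement": 0, "very major": 0, "major": 0, "minor": 0}
    for ref, tst in pairs:
        counts[classify_discrepancy(ref, tst)] += 1
    n = len(pairs)
    return {k: 100.0 * v / n for k, v in counts.items()}
```

The "total error rate" figures quoted in the abstract aggregate all non-agreement categories; a high value, as seen for A. baumannii, indicates the two methods cannot be used interchangeably.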
Detection of Methicillin-Resistant Coagulase-Negative Staphylococci by the Vitek 2 System
Johnson, Kristen N.; Andreacchio, Kathleen
2014-01-01
The accuracy of the Vitek 2 GP67 card for detecting methicillin-resistant coagulase-negative staphylococci (CoNS) has not been established. We prospectively determined the ability of the Vitek 2 GP67 card to accurately detect methicillin-resistant CoNS over a 4-month period in 2012, using mecA PCR results as the gold standard. Included in the study were 240 consecutively collected nonduplicate CoNS isolates. Cefoxitin susceptibility by disk diffusion testing was determined for all isolates. We found that the three tested systems, Vitek 2 oxacillin and cefoxitin testing and cefoxitin disk susceptibility testing, lacked specificity and, in some cases, sensitivity for detecting methicillin resistance. The Vitek 2 oxacillin and cefoxitin tests had very major error rates of 4% and 8%, respectively, and major error rates of 38% and 26%, respectively. Disk cefoxitin testing gave the best performance, with very major and major error rates of 2% and 24%, respectively. The test performances were species dependent, with the greatest errors found for Staphylococcus saprophyticus. While the 2014 CLSI guidelines recommend reporting isolates that test resistant by the oxacillin MIC or cefoxitin disk test as oxacillin resistant, following such guidelines produces erroneous results, depending on the test method and bacterial species tested. Vitek 2 cefoxitin testing is not an adequate substitute for cefoxitin disk testing. For critical-source isolates, mecA PCR, rather than Vitek 2 or cefoxitin disk testing, is required for optimal antimicrobial therapy. PMID:24951799
A protocol for monitoring soft tissue motion under compression garments during drop landings.
Mills, Chris; Scurr, Joanna; Wood, Louise
2011-06-03
This study used a single-subject design to establish a valid and reliable protocol for monitoring soft tissue motion under compression garments during drop landings. One male participant performed six 40 cm drop landings onto a force platform in three compression conditions (none, medium, high). Five reflective markers placed on the thigh under the compression garment and five over the garment were filmed using two cameras (1000 Hz). Following manual digitisation, marker coordinates were reconstructed, and their resultant displacements and the maximum change in separation distance between skin and garment markers were calculated. To determine the reliability of marker application, 35 markers were attached to the thigh over the high compression garment and filmed. Markers were then removed and re-applied on three occasions; marker separation and distance to the thigh centre of gravity were calculated. Results showed similar ground reaction forces across landing trials. Significant reductions in the maximum change in separation distance between markers from no compression to high compression landings were reported. Typical errors in marker movement under and over the garment were 0.1 mm in medium and high compression landings. Re-application of markers showed mean typical errors of 1 mm in marker separation and <3 mm relative to the thigh centre of gravity. This paper presents a novel protocol that demonstrates sufficient sensitivity to detect reductions in soft tissue motion during landings in high compression garments compared to no compression. Additionally, markers placed under or over the garment demonstrate low variance in movement, and the protocol reports good reliability in marker re-application. Copyright © 2011 Elsevier Ltd. All rights reserved.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
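The rms tracking accuracy figures quoted above (0.3 mm phantom, 1.0 mm patient) follow the usual root-mean-square definition; as a brief sketch:

```python
import math

def rms_error_mm(estimated_positions, reference_positions):
    """Root-mean-square tracking error, in the same units as the inputs
    (mm here): sqrt of the mean squared residual per frame."""
    residuals = [e - r for e, r in zip(estimated_positions, reference_positions)]
    return math.sqrt(sum(d * d for d in residuals) / len(residuals))

# A constant 1 mm offset across frames gives an rms error of exactly 1 mm.
print(rms_error_mm([1.0, 2.0, 3.0], [0.0, 1.0, 2.0]))  # 1.0
```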
The effect of dissolved oxygen on the susceptibility of blood.
Berman, Avery J L; Ma, Yuhan; Hoge, Richard D; Pike, G Bruce
2016-01-01
It has been predicted that, during hyperoxia, excess O2 dissolved in arterial blood will significantly alter the blood's magnetic susceptibility. This would confound the interpretation of the hyperoxia-induced blood oxygenation level-dependent signal as arising solely from changes in deoxyhemoglobin. This study, therefore, aimed to determine how dissolved O2 affects the susceptibility of blood. We present a comprehensive model for the effect of dissolved O2 on the susceptibility of blood and compare it with another recently published model, referred to here as the ideal gas model (IGM). For validation, distilled water and samples of bovine plasma were oxygenated over a range of hyperoxic O2 concentrations and their susceptibilities were determined using multiecho gradient echo phase imaging. In distilled water and plasma, the measured changes in susceptibility were very linear, with identical slopes of 0.062 ppb/mm Hg of O2. This change was dramatically less than previously predicted using the IGM and was close to that predicted by our model. The primary source of error in the IGM is the overestimation of the volume fraction occupied by dissolved O2. Under most physiological conditions, the susceptibility of dissolved O2 can be disregarded in MRI studies employing hyperoxia. © 2015 Wiley Periodicals, Inc.
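The measured slope of 0.062 ppb per mm Hg makes the hyperoxic susceptibility shift easy to estimate. A sketch; the normoxic baseline of 100 mm Hg is an illustrative assumption, not a value from the study:

```python
def o2_susceptibility_shift_ppb(po2_mmhg, baseline_po2_mmhg=100.0,
                                slope_ppb_per_mmhg=0.062):
    """Susceptibility change (ppb) from dissolved O2, using the linear
    slope measured in the study (0.062 ppb per mm Hg of O2). The normoxic
    baseline pO2 is an illustrative assumption."""
    return slope_ppb_per_mmhg * (po2_mmhg - baseline_po2_mmhg)

# Raising pO2 from ~100 mm Hg (normoxia) to ~500 mm Hg (hyperoxia):
print(round(o2_susceptibility_shift_ppb(500.0), 3))  # 24.8
```

A shift of a few tens of ppb is small next to deoxyhemoglobin-driven susceptibility changes, consistent with the conclusion that dissolved O2 can usually be disregarded.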
Susceptibility to distraction during reading in young, young-old, and old-old adults.
McGinnis, Debra
2012-01-01
BACKGROUND/STUDY CONTEXT: Susceptibility to distraction during reading may increase with age, resulting in comprehension errors. Neurological integrity and cognitive reserve are possible covariates of age-related distraction susceptibility. The current study investigated distraction susceptibility in three age groups (young, young-old, and old-old), and examined the covariation patterns of variable sets associated with neurological integrity and cognitive reserve. Participants responded to comprehension questions after reading stories that included semantically related or semantically unrelated distractors. Neurological integrity measures consisted of Mini-Mental State Examination, Selective Reminding, and Category Fluency. Cognitive reserve measures consisted of education and vocabulary. Old-old adults were more likely than young and young-old adults to select distractors when responding to comprehension questions (24.02%, 11.95%, 3.68%, respectively). Age-related distraction variance significantly overlapped neurological variance, and became more transparent after cognitive reserve variance was controlled. This study augments previous age-related distraction research by highlighting (a) the increase in distraction susceptibility in adults over 79, particularly when distractors are semantically related; (b) the influence of age-related neurological integrity on distraction; and (c) the possibility that education and verbal experience may decrease distraction susceptibility, consistent with cognitive reserve frameworks.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
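The residual-based stopping rule can be illustrated with a minimal sketch: a simplified Picard-type correction iteration on three Lobatto nodes for y' = f(y), not the production SDC scheme of the paper. An injected perturbation stands in for a soft error, and the iteration simply runs extra sweeps until the residual again drops below a tolerance relative to the first sweep's residual:

```python
import math

# Integration matrix for the three Lobatto nodes {0, 1/2, 1} of a unit step:
# row m holds the weights that integrate the quadratic interpolant of f
# from node 0 to node m.
S = [
    [0.0,   0.0,   0.0],
    [5/24,  8/24, -1/24],
    [1/6,   4/6,   1/6],
]

def resilient_collocation_step(f, y0, dt, max_sweeps=60, rel_tol=1e-10,
                               corrupt_at=None):
    """One time step of a Picard-type correction iteration with the
    resilience rule from the abstract: keep sweeping until the residual is
    small relative to the first sweep's residual, so a transient corruption
    (injected at sweep `corrupt_at`) just costs a few extra sweeps."""
    y = [y0, y0, y0]                      # provisional solution at the nodes
    first_res = None
    for sweep in range(1, max_sweeps + 1):
        if sweep == corrupt_at:
            y[1] += 1.0                   # simulated soft error in the data
        fy = [f(v) for v in y]
        y_new = [y0 + dt * sum(S[m][j] * fy[j] for j in range(3))
                 for m in range(3)]
        res = max(abs(a - b) for a, b in zip(y_new, y))
        y = y_new
        if first_res is None:
            first_res = max(res, 1e-300)  # residual of the first sweep
        elif res <= rel_tol * first_res:
            break
    return y[2], sweep

# y' = -y over dt = 0.5: both runs converge to ~exp(-0.5) = 0.6065...,
# but the corrupted run needs additional sweeps to recover.
y_clean, n_clean = resilient_collocation_step(lambda y: -y, 1.0, 0.5)
y_hit, n_hit = resilient_collocation_step(lambda y: -y, 1.0, 0.5, corrupt_at=3)
```

Because the correction iteration contracts toward the same collocation fixed point regardless of the perturbation, the corrupted run reaches essentially the same answer, only later — the essence of the resilience strategy.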
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example, it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
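The core Soft-Impute iteration (fill the missing entries, soft-threshold the SVD, repeat) is compact. A minimal sketch of the algorithm described above, without the warm starts or the structured-SVD speedups that make it scale:

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=200, tol=1e-7):
    """Minimal Soft-Impute: iteratively replace the unobserved entries of X
    (mask is True where X is observed) with those of a soft-thresholded SVD
    of the current completion. lam is the soft-threshold level; larger lam
    yields a lower-rank, more heavily shrunk solution."""
    Z = np.where(mask, X, 0.0)            # start from the zero-filled matrix
    for _ in range(n_iters):
        # Observed entries come from X, missing entries from the estimate Z.
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-threshold step
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1.0):
            return Z_new
        Z = Z_new
    return Z

# Recover a hidden entry of a rank-1 matrix (true value at (0, 0) is 1.0).
X_true = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])
mask = np.ones_like(X_true, dtype=bool)
mask[0, 0] = False
Z = soft_impute(X_true, mask, lam=0.01)
```

With a small lam the completion nearly interpolates the observed entries while filling the hidden one from the low-rank structure; the paper's contribution is making each soft-thresholded SVD cheap enough for matrices with millions of rows.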
Fais, Paolo; Viero, Alessia; Viel, Guido; Giordano, Renzo; Raniero, Dario; Kusstatscher, Stefano; Giraudo, Chiara; Cecchetto, Giovanni; Montisci, Massimo
2018-04-07
Necrotizing fasciitis (NF) is a life-threatening infection of soft tissues spreading along the fasciae to the surrounding musculature, subcutaneous fat and overlying skin, and it can rapidly lead to septic shock and death. Due to the worldwide increase in medical malpractice lawsuits, above all in Western countries, the forensic pathologist is frequently asked to investigate post-mortem cases of NF in order to determine the cause of death and to identify any related negligence and/or medical error. Herein, we review the medical literature dealing with cases of NF in a post-mortem setting, present a case series of seven NF fatalities and discuss the main ante-mortem and post-mortem diagnostic challenges of both clinical and forensic interest. In particular, we address the following issues: (1) origin of soft tissue infections, (2) micro-organisms involved, (3) time of progression of the infection to NF, (4) clinical and histological staging of NF, and (5) pros and cons of clinical and laboratory scores, as well as specific forensic issues related to the reconstruction of the ideal medical conduct and the evaluation of the causal value/link of any eventual medical error.
Soft-light overhead illumination systems improve laparoscopic task performance.
Takai, Akihiro; Takada, Yasutsugu; Motomura, Hideki; Teramukai, Satoshi
2014-02-01
The aim of this study was to evaluate the impact of attached shadow cues on laparoscopic task performance. We developed a soft-light overhead illumination system (SOIS) that produced attached shadows on objects. We compared results using the SOIS with those using a conventional illumination system with regard to laparoscopic experience and laparoscope-to-target distances (LTDs). Forty-two medical students and 23 surgeons participated in the study. A peg transfer task (LTD, 120 mm) for students and surgeons, and a suture removal task (LTD, 30 mm) for students were performed. Illumination systems were randomly assigned to each task. Endpoints were: total number of peg transfers; percentage of peg-dropping errors; and total execution time for suture removal. After the task, participants filled out a questionnaire on their preference for a particular illumination system. The total number of peg transfers was greater with the SOIS for both students and surgeons. The percentage of peg-dropping errors for surgeons was lower with the SOIS. Total execution time for suture removal was shorter with the SOIS. Forty-five participants (69% overall) rated task performance as easier with the SOIS. The present results confirm that the SOIS improves laparoscopic task performance, regardless of previous laparoscopic experience or LTD.
Serafini, Marta; Griglio, Alessia; Aprile, Silvio; Seiti, Fabio; Travelli, Cristina; Pattarino, Franco; Grosa, Giorgio; Sorba, Giovanni; Genazzani, Armando A; Gonzalez-Rodriguez, Sara; Butron, Laura; Devesa, Isabel; Fernandez-Carvajal, Asia; Pirali, Tracey; Ferrer-Montiel, Antonio
2018-05-24
Despite being an old molecule, capsaicin is still a hot topic in the scientific community, and the development of new capsaicinoids is a promising pharmacological approach in the management of skin disorders related to inflammation and pruritus. Here we report the synthesis and the evaluation of capsaicin soft drugs that undergo deactivation by the hydrolyzing activity of skin esterases. The implanting of an ester group in the lipophilic moiety of capsaicinoids by the Passerini multicomponent reaction affords both agonists and antagonists that retain transient receptor potential vanilloid 1 channel (TRPV1) modulating activity and, at the same time, are susceptible to hydrolysis. The most promising antagonist identified shows in vivo anti-nociceptive activity on pruritus and hyperalgesia without producing hyperthermia, thus validating it as novel treatment for dermatological conditions that implicate TRPV1 channel dysfunction.
High-frequency rotational losses in different soft magnetic composites
NASA Astrophysics Data System (ADS)
de la Barrière, O.; Appino, C.; Ragusa, C.; Fiorillo, F.; Mazaleyrat, F.; LoBue, M.
2014-05-01
The isotropic properties of Soft Magnetic Composites (SMC) favor the design of new machine topologies and their granular structure can induce a potential decrease of the dynamic loss component. This paper is devoted to the characterization of the broadband magnetic losses of different SMC types under alternating and circular induction. The investigated materials differ by their grain size, heat treatment, compaction rate, and binder type. It is shown that, up to peak polarization Jp = 1.25 T, the ratios between the rotational and the alternating loss components (classical, hysteresis, and excess) are quite independent of the material structural details, quite analogous to the known behavior of nonoriented steel laminations. On the contrary, at higher inductions, it is observed that the Jp value at which the rotational hysteresis loss attains its maximum, related to the progressive disappearance of the domain walls under increasing rotational fields, decreases with the material susceptibility.
Us, Ebru; Kutlu, Huseyin H; Tekeli, Alper; Ocal, Duygu; Cirpan, Sevilay; Memikoglu, Kemal O
2017-04-01
We describe a health care-associated Serratia marcescens outbreak of wound and soft tissue infection lasting approximately 11 months at Ankara University Ibni Sina Hospital. After identification of S marcescens strains from clinical and environmental samples, and antimicrobial susceptibility testing, pulsed-field gel electrophoresis (PFGE) was performed to detect molecular epidemiologic relationships among the isolates. The strains isolated from the saline bottles used for wound cleansing in the wound care unit were found by PFGE to be 100% related to the strains from the outbreak patients' samples. Reuse of emptied bottles has not been permitted since the outbreak occurred. In addition, more efficient and more frequent infection control training for hospital staff has been conducted. Copyright © 2017 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
Measurements of Surfactant Squeeze-out Using Magnetically-Levitated Liquid Bridges
NASA Technical Reports Server (NTRS)
Rosenblatt, Charles
2004-01-01
Liquid bridges are columns of liquid supported between two solid surfaces, generally opposing right circular cylinders in 0g. For a cylindrical bridge of length L and diameter d in zero gravity, the maximum slenderness ratio is Lambda = L/d = pi (the Rayleigh limit). In the presence of gravity, the cylindrical shape of an axisymmetric bridge tends to deform. Because a fluid has a volumetric magnetic susceptibility X, a magnetic field gradient can be used to levitate it and restore an effectively weightless configuration. Magnetic levitation has numerous applications in studies of fluids, "soft" and "hard" condensed matter physics, and biophysics.
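The two constraints just described can be made concrete. A minimal numerical sketch, with illustrative fluid properties for water (assumed example values, not from the abstract): the Rayleigh limit caps a stable cylindrical bridge at L = pi*d, and levitating the column against gravity requires a field-gradient product B*(dB/dz) satisfying (X/mu0)*B*(dB/dz) = rho*g.

```python
# Sketch (illustrative values, not from the abstract): Rayleigh stability
# limit of a liquid bridge and the magnetic levitation condition for a
# diamagnetic liquid such as water.
import math

MU0 = 4 * math.pi * 1e-7   # vacuum permeability, T*m/A

def max_stable_length(diameter_m):
    """Rayleigh limit: a cylindrical bridge is stable only while its
    slenderness ratio L/d stays below pi."""
    return math.pi * diameter_m

def levitation_gradient(chi_vol, density_kg_m3, g=9.81):
    """Field-gradient product B*(dB/dz) needed to balance gravity:
    (chi/mu0) * B * dB/dz = rho * g  (SI volumetric susceptibility)."""
    return MU0 * density_kg_m3 * g / abs(chi_vol)

# Water: chi ~ -9.0e-6 (SI), rho = 1000 kg/m^3
print(max_stable_length(0.01))               # ~0.0314 m for a 1 cm bridge
print(levitation_gradient(-9.0e-6, 1000.0))  # ~1370 T^2/m
```

For water the required product is roughly 1.4e3 T^2/m, which is why such levitation experiments rely on strong superconducting magnets.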
2006-09-01
second phase of the project was modified to incorporate a soft solution design. The plan consisted of construction of a wave diffraction mound to reduce...and local public property (that is considered as private) in the U.S. Court of Federal Claims. An RSM approach was followed in this study...representation of hard bottom (non-erodible areas), and hot-start option. M2D has been designed as a local-scale model that can be easily and quickly
The Effects of Various Levels of Ascorbic Acid Intake Upon Oral Wound Healing in Guinea Pigs.
1983-01-01
Alfano observed increased permeability of oral tissue with decreased vitamin C supplementation. Periodontal tissues are susceptible to...make a uniform wound in the midline of the oral mucosa, anterior to the maxillary premolars. (Figure 2). The trephine bur penetrated the tissues until...grids across the wound site and dividing by two. This area encompassed the edges of the bone wound or residual mature soft connective tissue. The
Spectrophotometric reading of EUCAST antifungal susceptibility testing of Aspergillus fumigatus.
Meletiadis, J; Leth Mortensen, K; Verweij, P E; Mouton, J W; Arendrup, M C
2017-02-01
Given the increasing number of antifungal drugs and the emergence of resistant Aspergillus isolates, objective, automated and high-throughput antifungal susceptibility testing is important. The EUCAST E.Def 9.3 reference method for MIC determination of Aspergillus species relies on visual reading. Spectrophotometric reading was not adopted because of concern that non-uniform filamentous growth might lead to unreliable and non-reproducible results. We therefore evaluated spectrophotometric reading for the determination of MICs of antifungal azoles against Aspergillus fumigatus. Eighty-eight clinical isolates of A. fumigatus were tested against four medical azoles (posaconazole, voriconazole, itraconazole, isavuconazole) and one agricultural azole (tebuconazole) with EUCAST E.Def 9.3. The visually determined MICs (complete inhibition of growth) were compared with spectrophotometrically determined MICs and essential (±1 twofold dilution) and categorical (susceptible/intermediate/resistant or wild-type/non-wild-type) agreement was calculated. Spectrophotometric data were analysed with regression analysis using the Emax model, and the effective concentration corresponding to 5% (EC5) was estimated. Using the 5% cut-off, high essential (92%-97%) and categorical (93%-99%) agreement (<6% errors) was found between spectrophotometric and visual MICs. The EC5 also correlated with the visually determined MICs with an essential agreement of 83%-96% and a categorical agreement of 90%-100% (<5% errors). Spectrophotometric determination of MICs of antifungal drugs may increase objectivity, and allow automation and high-throughput of EUCAST E.Def 9.3 antifungal susceptibility testing of Aspergillus species. Copyright © 2016 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
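The 5% growth cutoff and the ±1-dilution essential-agreement criterion used above can be sketched as follows; the OD readings, concentrations, and function names are hypothetical illustrations, not part of the E.Def 9.3 document.

```python
# Illustrative sketch (hypothetical data): deriving a spectrophotometric MIC
# as the lowest drug concentration whose optical density is <= 5% of the
# drug-free growth control, then checking essential agreement (within one
# twofold dilution) against a visually read MIC.
import math

def spectro_mic(concs, ods, od_growth_control, cutoff=0.05):
    """concs ascending; return lowest conc with relative growth <= cutoff."""
    for c, od in zip(concs, ods):
        if od / od_growth_control <= cutoff:
            return c
    return None  # off-scale: MIC above the highest tested concentration

def essential_agreement(mic_a, mic_b):
    """True if the two MICs differ by at most one twofold dilution."""
    return abs(math.log2(mic_a) - math.log2(mic_b)) <= 1

concs = [0.03, 0.06, 0.125, 0.25, 0.5, 1.0]      # mg/L, twofold series
ods = [0.90, 0.60, 0.30, 0.10, 0.03, 0.01]       # hypothetical readings
mic = spectro_mic(concs, ods, od_growth_control=1.0)
print(mic)                              # 0.5: first well at <= 5% growth
print(essential_agreement(mic, 0.25))   # True: one dilution from visual MIC
```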
Flexocoupling-induced soft acoustic modes and the spatially modulated phases in ferroelectrics
NASA Astrophysics Data System (ADS)
Morozovska, Anna N.; Glinchuk, Maya D.; Eliseev, Eugene A.; Vysochanskii, Yulian M.
2017-09-01
Using the Landau-Ginzburg-Devonshire theory and a one-component approximation, we examined the conditions for the appearance of the soft acoustic phonon mode (A-mode) in a ferroelectric (FE), depending on the magnitude of the flexoelectric coefficient f and temperature T. If the flexocoefficient f is equal to the temperature-dependent critical value fcr(T) at some temperature T = TIC, the A-mode frequency tends to zero at wave vector k = k0cr, and the spontaneous polarization becomes spatially modulated in the temperature range T
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
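The RMSEP reported above is the ordinary root-mean-square error over a held-out prediction set; a minimal sketch with made-up hardness values:

```python
# Minimal sketch of the RMSEP metric quoted in the abstract; the reference
# and predicted hardness values below are made-up illustrations.
import math

def rmse(y_true, y_pred):
    """Root mean square error between reference and predicted values."""
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

ref = [75.0, 80.0, 68.0, 72.0]      # hypothetical reference hardness values
pred = [74.0, 81.5, 67.0, 72.5]     # hypothetical LW-PLS predictions
print(round(rmse(ref, pred), 3))    # 1.061
```

RMSECV is the same statistic computed over cross-validation folds rather than an independent prediction set.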
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
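Systematic and random setup errors in such reviews are usually summarized with a standard convention (assumed here, not stated in the text): the population systematic error Sigma is the SD of per-patient mean errors, and the population random error sigma is the root mean square of per-patient SDs.

```python
# Hedged sketch (convention assumed, shift data hypothetical): population
# systematic error = SD of per-patient means; population random error =
# root mean square of per-patient SDs, per axis, in mm.
import statistics as st

def setup_errors(per_patient_shifts_mm):
    """per_patient_shifts_mm: one list of daily shifts per patient."""
    means = [st.mean(p) for p in per_patient_shifts_mm]
    sds = [st.stdev(p) for p in per_patient_shifts_mm]
    sigma_sys = st.stdev(means)                               # Sigma
    sigma_rand = (sum(s * s for s in sds) / len(sds)) ** 0.5  # sigma
    return sigma_sys, sigma_rand

shifts = [[1.0, 2.0, 1.5], [-0.5, 0.5, 0.0], [3.0, 2.0, 2.5]]  # hypothetical
sys_err, rand_err = setup_errors(shifts)
print(round(sys_err, 2), round(rand_err, 2))   # 1.26 0.5
```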
NASA Astrophysics Data System (ADS)
Lu, Shih-I.
2018-01-01
We use the discrete solvent reaction field model to evaluate the linear and second-order nonlinear optical susceptibilities of the 3-methyl-4-nitropyridine-1-oxide crystal. In this approach, the crystal environment is created by a supercell architecture. A self-consistent procedure is used to obtain charges and polarizabilities for the environmental atoms. The impact of atomic polarizabilities on the properties of interest is highlighted. This approach is shown to give second-order nonlinear optical susceptibilities within the error bar of experiment, as well as linear optical susceptibilities of the same order as experiment. Calculations of similar quality are also presented for the 4-N,N-dimethylamino-3-acetamidonitrobenzene and 2-methyl-4-nitroaniline crystals.
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory are dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question are verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
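The two detection techniques described above can be sketched in a few lines; the additive per-page check sum is an illustrative assumption, since the abstract does not specify the checksum algorithm used on orbit.

```python
# Illustrative sketch of the two on-orbit error-detection steps: a per-page
# check sum that flags an error within a 256-byte page, and a full
# dump-compare against the load file to locate the exact failing byte.
# The additive checksum is an assumption, not the flight implementation.
PAGE = 256

def page_checksums(memory):
    """Simple additive check sum per 256-byte page (mod 256)."""
    return [sum(memory[i:i + PAGE]) % 256 for i in range(0, len(memory), PAGE)]

def dump_compare(memory, load_file):
    """Full memory dump vs. load file: list (address, expected, found)."""
    return [(a, e, f) for a, (e, f) in enumerate(zip(load_file, memory)) if e != f]

load_file = bytearray(1024)             # pristine load image
memory = bytearray(load_file)
memory[300] ^= 0x04                     # inject a single-bit soft error

bad_pages = [i for i, (a, b) in enumerate(zip(page_checksums(memory),
                                              page_checksums(load_file))) if a != b]
print(bad_pages)                        # [1]: error is somewhere in page 1
print(dump_compare(memory, load_file))  # [(300, 0, 4)]: exact byte located
```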
Fast Quantitative Susceptibility Mapping with L1-Regularization and Automatic Parameter Selection
Bilgic, Berkin; Fan, Audrey P.; Polimeni, Jonathan R.; Cauley, Stephen F.; Bianciardi, Marta; Adalsteinsson, Elfar; Wald, Lawrence L.; Setsompop, Kawin
2014-01-01
Purpose To enable fast reconstruction of quantitative susceptibility maps with Total Variation penalty and automatic regularization parameter selection. Methods ℓ1-regularized susceptibility mapping is accelerated by variable-splitting, which allows closed-form evaluation of each iteration of the algorithm by soft thresholding and FFTs. This fast algorithm also renders automatic regularization parameter estimation practical. A weighting mask derived from the magnitude signal can be incorporated to allow edge-aware regularization. Results Compared to the nonlinear Conjugate Gradient (CG) solver, the proposed method offers 20× speed-up in reconstruction time. A complete pipeline including Laplacian phase unwrapping, background phase removal with SHARP filtering and ℓ1-regularized dipole inversion at 0.6 mm isotropic resolution is completed in 1.2 minutes using Matlab on a standard workstation compared to 22 minutes using the Conjugate Gradient solver. This fast reconstruction allows estimation of regularization parameters with the L-curve method in 13 minutes, which would have taken 4 hours with the CG algorithm. The proposed method also permits magnitude-weighted regularization, which prevents smoothing across edges identified on the magnitude signal. This more complicated optimization problem is solved 5× faster than the nonlinear CG approach. The utility of the proposed method is also demonstrated in functional BOLD susceptibility mapping, where processing of the massive time-series dataset would otherwise be prohibitive with the CG solver. Conclusion Online reconstruction of regularized susceptibility maps may become feasible with the proposed dipole inversion. PMID:24259479
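The closed-form per-iteration update mentioned above hinges on the soft-thresholding operator, the proximal map of the ℓ1 penalty; a minimal pure-Python sketch (the threshold value is arbitrary):

```python
# Minimal sketch of soft thresholding, the proximal operator of lam*||x||_1:
# each entry is shrunk toward zero by lam, and entries smaller than lam in
# magnitude are zeroed. This is what makes each variable-splitting iteration
# closed-form.
def soft_threshold(x, lam):
    """Element-wise shrinkage: sign(v) * max(|v| - lam, 0)."""
    return [(abs(v) - lam) * (1 if v > 0 else -1) if abs(v) > lam else 0.0
            for v in x]

print(soft_threshold([-2.0, -0.3, 0.0, 0.4, 1.5], 0.5))
# [-1.5, 0.0, 0.0, 0.0, 1.0]
```

In the full solver this shrinkage is applied to an auxiliary variable, while the data-consistency subproblem is diagonalized by FFTs.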
Nocardiosis in 132 patients with cancer: microbiological and clinical analyses.
Wang, Huan-Ling; Seo, Yiel-Hea; LaSala, P Rocco; Tarrand, Jeffery J; Han, Xiang Y
2014-10-01
To correlate the microbiological and clinical features of infections caused by Nocardia species. We determined the species and drug susceptibility of 138 Nocardia strains isolated from 132 patients at the University of Texas M. D. Anderson Cancer Center (Houston, TX) from 2002 through 2012 and analyzed the clinical features. The 132 patients included 82 men and 50 women with a mean age of 59.1 years. All except two had underlying cancer, and 47 (35.6%) also received a stem cell transplant. These patients experienced 136 episodes of Nocardia infection, including pulmonary infection, abscess of deep skin and soft tissue, bacteremia and dissemination, and brain abscess. The 138 Nocardia strains involved 27 species, of which 20 species have been described since 2000. Common species included Nocardia nova, Nocardia cyriacigeorgica, Nocardia farcinica, and Nocardia abscessus, together accounting for 59.4%. N nova caused most bacteremia cases, whereas N farcinica caused most of the skin and brain infections. Infections with a few recent species likely represented first confirmation or report of human infections. Antimicrobial susceptibility tests of 117 strains showed that they were all susceptible to trimethoprim-sulfamethoxazole and linezolid but variably susceptible to other drugs depending on species. Most patients who were treated for the infection showed improvement or resolution. Diverse Nocardia species can cause secondary infections in patients with cancer. Timely species identification and antimicrobial susceptibility tests may guide treatment. Copyright© by the American Society for Clinical Pathology.
Naimi, Haji Mohammad; Rasekh, Hamidullah; Noori, Ahmad Zia; Bahaduri, Mohammad Aman
2017-11-29
Staphylococcus aureus (S. aureus) is a major pathogen implicated in skin and soft tissue infections, abscess in deep organs, toxin mediated diseases, respiratory tract infections, urinary tract infections, post-surgical wound infections, meningitis and many other diseases. Irresponsible and over use of antibiotics has led to an increased presence of multidrug resistant organisms and especially methicillin resistant Staphylococcus aureus (MRSA) as a major public health concern in Afghanistan. As a result, there are many infections with many of them undiagnosed or improperly diagnosed. We aimed to establish a baseline of knowledge regarding the prevalence of MRSA in Kabul, Afghanistan, as well as S. aureus antimicrobial susceptibility to current available antimicrobials, while also determining those most effective to treat S. aureus infections. Samples were collected from patients at two main Health facilities in Kabul between September 2016 and February 2017. Antibiotic susceptibility profiles were determined by the disc diffusion method and studied using standard CLSI protocols. Out of 105 strains of S. aureus isolated from pus, urine, tracheal secretions, and blood, almost half (46; 43.8%) were methicillin-sensitive Staphylococcus aureus (MSSA) while 59 (56.2%) were Methicillin-resistant Staphylococcus aureus (MRSA). All strains were susceptible to vancomycin. 
In total, 100 (95.2%) strains were susceptible to rifampicin, 96 (91.4%) to clindamycin, 94 (89.5%) to imipenem, 83 (79.0%) to gentamicin, 81 (77.1%) to doxycycline, 77 (73.3%) to amoxicillin + clavulanic acid, 78 (74.3%) to cefazolin, 71 (67.6%) to tobramycin, 68 (64.8%) to chloramphenicol, 60 (57.1%) to trimethoprim-sulfamethoxazole, 47 (44.8%) to ciprofloxacin, 38 (36.2%) to azithromycin and erythromycin, 37 (35.2%) to ceftriaxone, and 11 (10.5%) to cefixime. Almost all (104; 99.05%) were resistant to penicillin G and only 1 (0.95%) was intermediate to penicillin G. Interestingly, 74.6% of MRSA strains were azithromycin resistant, with 8.5% of them clindamycin resistant. Ninety-six (91.4%) of the isolates were multidrug resistant. There was a high rate of methicillin resistance (56.2%) among S. aureus strains in the samples collected, and most (91.4%) were multidrug resistant. The most effective antibiotics to treat S. aureus infections were vancomycin, rifampicin, imipenem, clindamycin, amoxicillin-clavulanic acid, cefazolin, gentamicin and doxycycline. The least effective were azithromycin, ceftriaxone, cefixime and penicillin. We recommend that, where possible, in every case of S. aureus infection in Kabul, Afghanistan, antibiotic susceptibility testing (AST) should be performed and responsible use of antibiotics should be considered.
Schrider, Daniel R.; Mendes, Fábio K.; Hahn, Matthew W.; Kern, Andrew D.
2015-01-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. 
This general problem of “soft shoulders” underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. PMID:25716978
Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D
2015-05-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. 
This general problem of "soft shoulders" underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. Copyright © 2015 by the Genetics Society of America.
A fast and remote magnetonanothermometry for a liquid environment
NASA Astrophysics Data System (ADS)
He, Le; Liu, Wenzhong; Xie, Qingguo; Pi, Shiqiang; Morais, P. C.
2016-02-01
This study reports on a new approach for remote nanothermometry with short response time (milliseconds), aiming to operate in liquid media, using the AC susceptibility components of suspended magnetic nanoparticles governed by the Brownian relaxation mechanism. A simple, low cost, and accurate system was designed to measure AC susceptibility using an AC magnetic field of small amplitude (6 Oe) and frequency (5 kHz) superimposed on a weak DC magnetic field (up to 30 Oe). A model based on the AC susceptibility of magnetic nanoparticles (30 nm average diameter) was constructed to describe the temperature sensitivity of the dominant Brownian relaxation time. A new approach for remote nanothermometry was thereby achieved with the AC susceptibility measured by the designed system and the proposed model. Our experimental results show that our magnetonanothermometer allows temperature errors lower than 0.3 K, with standard deviations lower than 0.1 K, in the temperature range from 310 to 320 K.
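The physics underlying the thermometer is the Brownian relaxation time tau_B = 3*eta*V_h/(k_B*T) of a suspended particle, which ties the AC susceptibility phase to temperature; a sketch with an assumed water-like carrier viscosity (the sample values are illustrative, not from the paper):

```python
# Sketch of the Brownian relaxation time tau_B = 3*eta*V_h/(k_B*T); the
# carrier viscosity and particle size below are assumed example values.
# Temperature enters both directly and through eta(T), which is what makes
# the AC susceptibility phase a temperature probe.
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K

def brownian_tau(eta_pa_s, hydro_diameter_m, temp_k):
    """Brownian relaxation time for hydrodynamic volume V_h = pi*d^3/6."""
    v_h = math.pi * hydro_diameter_m ** 3 / 6
    return 3 * eta_pa_s * v_h / (K_B * temp_k)

# 30 nm particle in a water-like carrier (eta ~ 0.7 mPa*s near 310 K)
tau = brownian_tau(0.7e-3, 30e-9, 310.0)
print(tau)   # roughly 7e-6 s, so the phase lag at a few kHz tracks T
```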
Chemical Analysis of the Moon at the Surveyor VI Landing Site: Preliminary Results.
Turkevich, A L; Patterson, J H; Franzgrote, E J
1968-06-07
The alpha-scattering experiment aboard soft-landing Surveyor VI has provided a chemical analysis of the surface of the moon in Sinus Medii. The preliminary results indicate that, within experimental errors, the composition is the same as that found by Surveyor V in Mare Tranquillitatis. This finding suggests that large portions of the lunar maria resemble basalt in composition.
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
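The EDAC circuitry recommended above is typically built on a single-error-correcting Hamming code; a toy sketch over one data byte (real avionics EDAC is usually SEC-DED over wider words, so this only illustrates the principle):

```python
# Toy Hamming(12,8) single-error-correcting code: parity bits sit at
# power-of-two positions 1, 2, 4, 8 of a 1-indexed codeword, and the
# syndrome (XOR of the positions of all set bits) names the flipped bit.
def encode(byte):
    """Place the 8 data bits at non-power-of-two positions, then set each
    parity bit so its covered group has even parity."""
    bits = [0] * 13                       # index 0 unused (1-indexed)
    data_pos = [3, 5, 6, 7, 9, 10, 11, 12]
    for i, p in enumerate(data_pos):
        bits[p] = (byte >> i) & 1
    for p in (1, 2, 4, 8):
        bits[p] = sum(bits[i] for i in range(1, 13) if i & p) % 2
    return bits

def correct(bits):
    """Compute the syndrome; if nonzero, it is the position of the single
    upset bit, which is flipped back in place. Returns the syndrome."""
    syndrome = 0
    for i in range(1, 13):
        if bits[i]:
            syndrome ^= i
    if syndrome:
        bits[syndrome] ^= 1               # repair the soft error
    return syndrome

word = encode(0xA7)
word[6] ^= 1                              # single-event upset at position 6
print(correct(word))                      # 6: upset located and corrected
print(word == encode(0xA7))               # True: data restored
```

A SEC-DED variant adds one overall parity bit so that double-bit errors are detected rather than miscorrected.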
Jorgensen, James H.; Barry, Arthur L.; Traczewski, M. M.; Sahm, Daniel F.; McElmeel, M. Leticia; Crawford, Sharon A.
2000-01-01
The VITEK 2 is a new automated instrument for rapid organism identification and susceptibility testing. It has the capability of performing rapid susceptibility testing of Streptococcus pneumoniae with specially configured cards that contain enriched growth medium and antimicrobial agents relevant for this organism. The present study compared the results of testing of a group of 53 challenge strains of pneumococci with known resistance properties and a collection of clinical isolates examined in two study phases with a total of 402 and 416 isolates, respectively, with a prototype of the VITEK 2. Testing was conducted in three geographically separate laboratories; the challenge collection was tested by all three laboratories, and the unique clinical isolates were tested separately by the individual laboratories. The VITEK 2 results of tests with 10 antimicrobial agents were compared to the results generated by the National Committee for Clinical Laboratory Standards reference broth microdilution MIC test method. Excellent interlaboratory agreement was observed with the challenge strains. The overall agreement within a single twofold dilution of MICs defined by the VITEK 2 and reference method with the clinical isolates was 96.3%, although there were a number of off-scale MICs that could not be compared. The best agreement with the clinical isolates was achieved with ofloxacin and chloramphenicol (100%), and the lowest level of agreement among those drugs with sufficient on-scale MICs occurred with trimethoprim-sulfamethoxazole (89.7%). Overall there were 1.3% very major, 6.6% minor, and no major interpretive category errors encountered with the clinical isolates, although >80% of the minor interpretive errors involved only a single log2 dilution difference. The mean time for generation of susceptibility results with the clinical isolates was 8.1 h. 
The VITEK 2 provided rapid, reliable susceptibility category determinations with both the challenge and clinical isolates examined in this study. PMID:10921932
Rufai, Syed Beenish; Kumar, Parveen; Singh, Amit; Prajapati, Suneel; Balooni, Veena; Singh, Sarman
2014-06-01
The MTBDRplus line probe assay (LPA) and Xpert MTB/RIF have been endorsed by the World Health Organization for the rapid diagnosis of drug-resistant tuberculosis. However, there is no clarity regarding the superiority of one over the other. In a double-blinded prospective study, we evaluated the efficacy of the Xpert MTB/RIF on samples that were first tested by LPA under the revised national tuberculosis control program of India. A total of 405 sputum samples from suspected drug-resistant tuberculosis patients were included. Of these, 285 smear-positive samples were subjected to LPA. Seventy-two (25.8%) samples showed multidrug resistance, 62 (22.2%) showed rifampin monoresistance, 29 (10.3%) showed isoniazid monoresistance, and 116 (41.5%) were pan-susceptible. Six (2.1%) of the samples gave invalid results. Of the 62 rifampin-monoresistant samples by LPA, 38 (61.4%) showed rifampin resistance, while 21 (33.8%) were found susceptible to rifampin by Xpert MTB/RIF using cartridge version G4. Three (4.8%) samples gave an error. Of the 116 pan-susceptible samples, only 83 were available for Xpert MTB/RIF testing; 4 (5.1%) were rifampin resistant, 74 (94.8%) were susceptible, and 5 (6.0%) showed an error. The 25 discrepant samples were further subjected to MGIT960 drug susceptibility testing. The MGIT960 results showed 100% agreement with LPA results but only 64.4% agreement with Xpert MTB/RIF results. Sequencing analysis of discrepant samples showed 91.3% concordance with LPA but only 8.7% concordance with the Xpert MTB/RIF assay. These findings indicate that by using Xpert MTB/RIF testing we might be underestimating the burden of drug-resistant tuberculosis and indicate that country-specific probes need to be designed to increase the sensitivity of the Xpert MTB/RIF. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Rufai, Syed Beenish; Kumar, Parveen; Singh, Amit; Prajapati, Suneel; Balooni, Veena
2014-01-01
The MTBDRplus line probe assay (LPA) and Xpert MTB/RIF have been endorsed by the World Health Organization for the rapid diagnosis of drug-resistant tuberculosis. However, there is no clarity regarding the superiority of one over the other. In a double-blinded prospective study, we evaluated the efficacy of the Xpert MTB/RIF on samples that were first tested by LPA under the revised national tuberculosis control program of India. A total of 405 sputum samples from suspected drug-resistant tuberculosis patients were included. Of these, 285 smear-positive samples were subjected to LPA. Seventy-two (25.8%) samples showed multidrug resistance, 62 (22.2%) showed rifampin monoresistance, 29 (10.3%) showed isoniazid monoresistance, and 116 (41.5%) were pan-susceptible. Six (2.1%) of the samples gave invalid results. Of the 62 rifampin-monoresistant samples by LPA, 38 (61.4%) showed rifampin resistance, while 21 (33.8%) were found susceptible to rifampin by Xpert MTB/RIF using cartridge version G4. Three (4.8%) samples gave an error. Of the 116 pan-susceptible samples, only 83 were available for Xpert MTB/RIF testing; 4 (5.1%) were rifampin resistant, 74 (94.8%) were susceptible, and 5 (6.0%) showed an error. The 25 discrepant samples were further subjected to MGIT960 drug susceptibility testing. The MGIT960 results showed 100% agreement with LPA results but only 64.4% agreement with Xpert MTB/RIF results. Sequencing analysis of discrepant samples showed 91.3% concordance with LPA but only 8.7% concordance with the Xpert MTB/RIF assay. These findings indicate that by using Xpert MTB/RIF testing we might be underestimating the burden of drug-resistant tuberculosis and indicate that country-specific probes need to be designed to increase the sensitivity of the Xpert MTB/RIF. PMID:24648554
Park, Young Min; Kwon, Han Jo; Lee, Jong Soo
2015-03-01
To determine the bacteriological spectrum of removed therapeutic soft contact lenses (TSCLs) and to establish the efficacy of prophylactic antibiotics on TSCLs worn for 2 weeks in the treatment of patients with recurrent corneal erosion syndrome (RCES). This study included patients with idiopathic RCES treated using highly oxygen-permeable silicone hydrogel contact lenses (CLs) together with topical tobramycin 3% four times per day for 2 weeks. After the TSCLs had been worn for 2 weeks, a speculum was inserted, and the lenses were removed with sterile forceps and placed on blood agar with the inner face down. The TSCLs were analyzed for bacterial colonization, and antibiotic susceptibility tests were performed on the isolates using disk diffusion. Of the 40 lenses analyzed, 9 (22.5%) yielded positive cultures. Staphylococcus epidermidis was the most commonly isolated microorganism; there were five methicillin-sensitive coagulase-negative staphylococci and two methicillin-resistant coagulase-negative staphylococci. Furthermore, we found two lenses colonized by Enterobacter gergoviae and Citrobacter freundii. All cultured bacteria showed intermediate or complete sensitivity to ciprofloxacin, tigecycline, and tobramycin. Despite bacterial colonization of 9 CLs, no clinical signs of infectious keratitis were found in any of the patients receiving prophylactic topical tobramycin 3%. When TSCLs are worn for 2 weeks, tobramycin or ciprofloxacin may be useful as prophylactic topical antibiotics for preventing secondary corneal infections. Considering the currently growing incidence of ciprofloxacin-resistant ocular isolates, tobramycin seems to be a reasonable prophylactic topical antibiotic, covering a broad spectrum of the bacteria encountered in clinics.
Liquid crystal templating as an approach to spatially and temporally organise soft matter.
van der Asdonk, Pim; Kouwer, Paul H J
2017-10-02
Chemistry is quickly moving from a molecular science to a systems science. This requires spatial and temporal control over the organisation of molecules and molecular assemblies. Whilst Nature almost by default (transiently) organises her components at multiple different length scales, scientists struggle to realise even relatively straightforward patterns. In the past decades, supramolecular chemistry has taught us the rules to precisely engineer molecular assembly at the nanometre scale. At higher length scales, however, we are bound to top-down nanotechnology techniques to realise order. For soft, biological matter, many of these top-down techniques come with serious limitations, since the molecules generally show low susceptibilities to the applied stimuli. A new method is based on liquid crystal templating. In this hierarchical approach, a liquid crystalline host serves as the scaffold to order polymers or assemblies. Being a liquid crystal, the host material can be ordered at many different length scales and, on top of that, is highly susceptible to many external stimuli, which can even be used to manipulate the liquid crystal organisation in time. As a result, we anticipate large control over the organisation of the materials inside the liquid crystalline host. Recently, liquid crystal templating was also realised in water, which suddenly makes this tool highly applicable to organising more delicate biological materials or even small organisms. We review the scope and limitations of liquid crystal templating and look ahead to where the technique may lead us.
Guardabassi, Luca; Hedberg, Sandra; Jessen, Lisbeth Rem; Damborg, Peter
2015-10-26
Urinary tract infection (UTI) is a common reason for antimicrobial prescription in dogs and cats. The objective of this study was to optimize and evaluate a culture-based point-of-care test for detection, identification and antimicrobial susceptibility testing of bacterial uro-pathogens in veterinary practice. Seventy-two urine samples from dogs and cats with suspected UTI presenting to seven veterinary facilities were used by clinical staff and an investigator to estimate sensitivity and specificity of Flexicult Vet A compared to laboratory reference standards for culture and susceptibility testing. Subsequently, the test was modified by inclusion of an oxacillin-containing compartment for detection of methicillin-resistant staphylococci. The performance of the modified product (Flexicult Vet B) for susceptibility testing was evaluated in vitro using a collection of 110 clinical isolates. Bacteriuria was reported by the laboratory in 25 (35 %) samples from the field study. The sensitivity and specificity of Flexicult Vet A for detection of bacteriuria were 83 and 100 %, respectively. Bacterial species were correctly identified in 53 and 100 % of the positive samples by clinical staff and the investigator, respectively. The susceptibility results were interpreted correctly by clinical staff for 70 % of the 94 drug-strain combinations. Higher percentages of correct interpretation were observed when the results were interpreted by the investigator in both the field (76 %) and the in vitro study (94 %). The most frequent errors were false resistance to β-lactams (ampicillin, amoxicillin-clavulanate and cephalotin) in Escherichia coli for Flexicult Vet A, and false amoxicillin-clavulanate resistance in E. coli and false ampicillin susceptibility in Staphylococcus pseudintermedius for Flexicult Vet B. The latter error can be prevented by categorizing staphylococcal strains growing in the oxacillin compartment as resistant to all β-lactams. 
Despite the shortcomings regarding species identification by clinical staff and β-lactam susceptibility testing of E. coli, Flexicult Vet B (commercial name Flexicult(®) Vet) is a time- and cost-effective point-of-care test to guide antimicrobial choice and facilitate implementation of antimicrobial use guidelines for treatment of UTIs in small animals, provided that clinical staff is adequately trained to interpret the results and that clinics meet minimum standards to operate in-house culture.
The high accuracy data processing system of laser interferometry signals based on MSP430
NASA Astrophysics Data System (ADS)
Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong
2009-07-01
Single-frequency laser interferometers generally use two orthogonal signals for direction discrimination and electronic subdivision. However, the interference signals usually suffer from three errors: zero-offset error, unequal-amplitude error, and quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved on the basis of the Heydemann error-compensation algorithm. Because operation of the Heydemann model is computationally complicated, an improved algorithm is proposed that effectively decreases the calculation time, exploiting the special characteristic that only one item of data changes in each fitting iteration. A real-time, dynamic compensation circuit is then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals containing the three errors are digitized by an AD7862; after data processing with the improved algorithm, two ideal, error-free signals are output through an AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit, whose output pulses are counted by the microcontroller's timer. From the pulse count and software subdivision, the final result is displayed on an LED. The algorithm and circuit were used to test the performance of a laser interferometer with 8-fold optical path difference, and a measuring accuracy of 12-14 nm was achieved.
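The correction step of the Heydemann compensation described above can be sketched compactly. The model and parameter names below (offsets p and q, amplitude ratio R, quadrature error alpha) follow the standard textbook formulation, not the paper's implementation; in practice the parameters are obtained by a least-squares ellipse fit of the measured Lissajous figure, which is omitted here.

```python
import numpy as np

def heydemann_correct(u1, u2, p, q, R, alpha):
    """Correct quadrature interferometer signals for zero offsets (p, q),
    amplitude ratio R and quadrature phase error alpha (radians).

    Assumed signal model (hypothetical, illustrative):
        u1 = r*cos(phi) + p
        u2 = (r/R)*sin(phi - alpha) + q
    Returns the recovered interferometric phase phi in radians."""
    x = u1 - p
    # restore a true quadrature component: y = r*sin(phi)
    y = (x * np.sin(alpha) + R * (u2 - q)) / np.cos(alpha)
    return np.arctan2(y, x)
```

With the three errors known, the corrected pair (x, y) lies on a circle, so arctan2 yields an undistorted phase for subdivision.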
An engineer's view on genetic information and biological evolution.
Battail, Gérard
2004-01-01
We develop ideas on genome replication introduced in Battail [Europhys. Lett. 40 (1997) 343]. Starting with the hypothesis that the genome replication process uses error-correcting means, and the auxiliary one that nested codes are used to this end, we first review the concepts of redundancy and error-correcting codes. Then we show that these hypotheses imply that: distinct species exist with a hierarchical taxonomy, there is a trend of evolution towards complexity, and evolution proceeds by discrete jumps. At least the first two features above may be considered as biological facts so, in the absence of direct evidence, they provide an indirect proof in favour of the hypothesized error-correction system. The very high redundancy of genomes makes it possible. In order to explain how it is implemented, we suggest that soft codes and replication decoding, to be briefly described, are plausible candidates. Experimentally proven properties of long-range correlation of the DNA message substantiate this claim.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference made to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
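Two of the software techniques listed above, redundant execution with voting and encoding state variables to detect single-bit changes, are easy to illustrate. The sketch below is a generic illustration of those ideas in Python, not Galileo flight code; the majority vote and even-parity encoding are our own minimal stand-ins.

```python
def vote(a, b, c):
    """Majority vote over three redundant executions of the same
    procedure: a single corrupted result is outvoted."""
    return a if a == b or a == c else b

def encode_state(value):
    """Append an even-parity bit so any single-bit flip in the stored
    word is detectable on the next read."""
    parity = bin(value).count("1") & 1
    return (value << 1) | parity

def decode_state(word):
    """Return (value, ok); ok is False if a single-bit upset occurred."""
    value, parity = word >> 1, word & 1
    return value, (bin(value).count("1") & 1) == parity
```

A single parity bit only detects an odd number of flipped bits; detecting and correcting multiple upsets requires a stronger code, as the surrounding abstracts on error-correcting codes discuss.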
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit via the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which also has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved under high traffic load for a frame whose number of slots equals half the number of stations. The effects of feedback channel errors and some countermeasures are discussed.
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool-positioning technique, sensors embedded in the motion stages provide accurate tool-position information. In this paper, a machine-vision-based system and an image-processing technique are described for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled-device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors due to the machine vision system, calibration, environmental factors, etc. in lathe-tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show a better capability of AIS over PSO.
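Particle swarm optimization, one of the two soft computing techniques compared above, can be sketched in a few lines. The objective function, swarm size and coefficients below are illustrative defaults of a textbook PSO, not the paper's actual error model or tuning.

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0,
        w=0.7, c1=1.5, c2=1.5):
    """Minimise f over a dim-dimensional box with a basic PSO:
    inertia w, cognitive weight c1, social weight c2."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + pull toward personal/global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the paper's setting, f would be the measured positioning-error function; here any smooth test function (e.g. the sphere function) serves to exercise the optimizer.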
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft-computing-based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results: relative to the experimental data, it has an MRE% below 1.53% and 2.85% for training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.
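The four performance measures named above are standard and easy to reproduce. A minimal sketch (variable names are ours, not the paper's):

```python
import math

def metrics(y_true, y_pred):
    """Return (correlation coefficient r, MAE, MRE%, RMSE) for two
    equal-length sequences of observations and predictions."""
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    # mean relative error percentage; assumes no true value is zero
    mre = 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / n
    mt, mp = sum(y_true) / n, sum(y_pred) / n
    cov = sum((t - mt) * (p - mp) for t, p in zip(y_true, y_pred))
    r = cov / math.sqrt(sum((t - mt) ** 2 for t in y_true)
                        * sum((p - mp) ** 2 for p in y_pred))
    return r, mae, mre, rmse
```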
PREVALENCE OF REFRACTIVE ERRORS IN MADRASSA STUDENTS OF HARIPUR DISTRICT.
Atta, Zoia; Arif, Abdus Salam; Ahmed, Iftikhar; Farooq, Umer
2015-01-01
Visual impairment due to refractive errors is one of the most common problems among school-age children and the second leading cause of treatable blindness. The Right to Sight, a global initiative launched by a coalition of non-government organizations and the World Health Organization (WHO), aims to eliminate avoidable visual impairment and blindness at a global level. To achieve this goal it is important to know the prevalence of different refractive errors in a community. Children and teenagers are the groups most susceptible to refractive errors, so this population needs to be screened for the different types of refractive error. The objective of this study was to find the frequency of different types of refractive errors in madrassa students aged 5-20 years in Haripur. This cross-sectional study was done with 300 students aged 5-20 years in madrassas of Haripur. The students were screened for refractive errors and the types of error were noted; after screening, glasses were prescribed to the students. Myopia (52.6%) was the most frequent refractive error, followed by hyperopia (28.4%) and astigmatism (19%). This study showed that myopia is an important problem in the madrassa population. Females and males are almost equally affected. Spectacle correction of refractive errors is the cheapest and easiest solution to this problem.
Financial errors in dementia: Testing a neuroeconomic conceptual framework
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.
2013-01-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884
Integration of landslide susceptibility products in the environmental plans
NASA Astrophysics Data System (ADS)
Fiorucci, Federica; Reichenbach, Paola; Rossi, Mauro; Cardinali, Mauro; Guzzetti, Fausto
2015-04-01
Landslides are among the most destructive natural hazards causing damage to urban areas worldwide. Knowing where a landslide could occur is essential for the strategic management of the territory and for good urban planning. In this context, landslide susceptibility zoning (LSZ) is crucial to provide information on the degree to which an area can be affected by future slope movements. Although landslide susceptibility maps have been prepared extensively during the last decades, there are few examples of their application in environmental plans (EP). In this work we present a proposal for integrating the landslide inventory map with the following landslide susceptibility products: (i) the landslide susceptibility zonation, (ii) the associated error map, and (iii) the susceptibility uncertainty map. Moreover, we propose to incorporate detailed morphological studies for the evaluation of the landslide risk associated with local parceling plans. The integration of all this information is crucial for the management of landslide risk in urban expansion forecasts. Municipal, provincial and regional administrations are often unable to support the costs of landslide risk evaluation for extensive areas and should concentrate their financial resources on the specific hazardous and unsafe situations identified by integrating the landslide susceptibility products. Zonation and detailed morphological analysis should be performed taking into account the existing laws and regulations, and could become a starting point for discussing new regulations for landslide risk management.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
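The test-error-pattern idea described above, without the trellis-search stage, is closely related to Chase-type decoding. As a hedged illustration only (the paper's scheme is more elaborate), the sketch below flips the least reliable bits of a Hamming(7,4) hard decision in all combinations, runs an algebraic (syndrome) decoder on each trial, and keeps the candidate codeword best correlated with the soft inputs:

```python
import itertools

def syndrome_decode(bits):
    """Algebraic decoder for Hamming(7,4): column j of the parity-check
    matrix is the binary representation of j+1, so the syndrome value
    is directly the 1-based position of a single error."""
    s = 0
    for j, b in enumerate(bits):
        if b:
            s ^= j + 1
    out = list(bits)
    if s:
        out[s - 1] ^= 1
    return out

def chase_decode(llr, t=3):
    """Chase-2-style decoding: flip the t least reliable hard decisions
    in all 2**t combinations, algebraically decode each trial, and keep
    the candidate best correlated with the soft values (LLRs, with
    positive meaning bit 0 is more likely)."""
    hard = [1 if l < 0 else 0 for l in llr]
    weak = sorted(range(len(llr)), key=lambda i: abs(llr[i]))[:t]
    best, best_metric = None, float("-inf")
    for flips in itertools.product((0, 1), repeat=t):
        trial = list(hard)
        for f, i in zip(flips, weak):
            trial[i] ^= f
        cand = syndrome_decode(trial)
        # correlation between candidate (mapped to +/-1) and soft inputs
        metric = sum((1 - 2 * c) * l for c, l in zip(cand, llr))
        if metric > best_metric:
            best_metric, best = metric, cand
    return best
```

This captures the candidate-generation and reliability-based testing of the scheme; the correspondence replaces the exhaustive comparison with an optimality test and a purged-trellis Viterbi search.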
Soft tissue deformation for surgical simulation: a position-based dynamics approach.
Camara, Mafalda; Mayer, Erik; Darzi, Ara; Pratt, Philip
2016-06-01
To assist the rehearsal and planning of robot-assisted partial nephrectomy, a real-time simulation platform is presented that allows surgeons to visualise and interact with rapidly constructed patient-specific biomechanical models of the anatomical regions of interest. Coupled to a framework for volumetric deformation, the platform furthermore simulates intracorporeal 2D ultrasound image acquisition, using preoperative imaging as the data source. This not only facilitates the planning of optimal transducer trajectories and viewpoints, but can also act as a validation context for manually operated freehand 3D acquisitions and reconstructions. The simulation platform was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. In order to validate the model and determine material properties and other simulation parameter values, a porcine kidney with embedded fiducial beads was CT-scanned and segmented. Acquisitions for the rest position and three different levels of probe-induced deformation were collected. Optimal values of the cluster stiffness coefficients were determined for a range of different particle radii, where the objective function comprised the mean distance error between real and simulated fiducial positions over the sequence of deformations. The mean fiducial error at each deformation stage was found to be compatible with the level of ultrasound probe calibration error typically observed in clinical practice. Furthermore, the simulation exhibited unconditional stability on account of its use of clustered shape-matching constraints. A novel position-based dynamics implementation of soft tissue deformation has been shown to facilitate several desirable simulation characteristics: real-time performance, unconditional stability, rapid model construction enabling patient-specific behaviour and accuracy with respect to reference CT images.
Magnetic anisotropy of some phyllosilicates
NASA Astrophysics Data System (ADS)
Borradaile, Graham J.; Werner, Tomasz
1994-08-01
Magnetic susceptibility, anisotropy of susceptibility and hysteresis of single microcrystals of chlorite, biotite, phlogopite, muscovite, zinnwaldite and fuchsite were measured in low and high magnetic fields with an alternating gradient force magnetometer (Micromag). Their properties are sufficient to account for the low field susceptibility (AMS) of most micaceous rocks. Nearly all samples show some ferromagnetic contribution at low fields due to inclusions of pseudo-single-domain and multidomain magnetite. The paramagnetic contribution isolated at high fields usually exceeds the ferromagnetic contribution. The paramagnetic susceptibility is intrinsic to the silicate lattice and agrees with values predicted from chemical composition within the limits of error. The minimum susceptibility is nearly parallel to c, another axis is parallel to b and the third susceptibility (usually the maximum) is close to a. The paramagnetic susceptibility has a disk-shaped magnitude ellipsoid with strong anisotropy (P' < 2). The ferromagnetic contributions at low fields have more variably shaped ellipsoids with greater eccentricity (P' < 5). The silicate lattice does not constrain their orientation. Our technique cannot determine the principal axes of the ferromagnetic component. However, its principal values usually correspond with the paramagnetic principal susceptibilities in order of magnitude. Thus, the combined paramagnetic-ferromagnetic anisotropy recognised in routine studies of AMS should faithfully represent the petrofabric of most micaceous rocks. Nevertheless, nearly 10% of our samples have incompatible anisotropy ellipsoids for the silicate host and magnetite inclusions. These yield a net inverse AMS that does not correctly represent the orientation of the silicate lattice. Therefore, some caution is necessary in petrofabric-AMS studies of micaceous rocks.
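The anisotropy degree P' quoted above is Jelinek's corrected anisotropy degree. As a hedged illustration using the standard formulas (not code from the paper), it can be computed from the three principal susceptibilities, together with the shape parameter T that distinguishes disk-shaped (oblate, T > 0) from rod-shaped (prolate, T < 0) ellipsoids:

```python
import math

def jelinek_parameters(k1, k2, k3):
    """Corrected anisotropy degree P' and shape parameter T from the
    principal susceptibilities k1 >= k2 >= k3 (all > 0), after Jelinek:
        eta_i = ln(k_i),  eta_m = mean(eta_i)
        P' = exp( sqrt( 2 * sum_i (eta_i - eta_m)**2 ) )
        T  = (2*eta_2 - eta_1 - eta_3) / (eta_1 - eta_3)"""
    e1, e2, e3 = (math.log(k) for k in (k1, k2, k3))
    m = (e1 + e2 + e3) / 3.0
    p_prime = math.exp(math.sqrt(
        2.0 * ((e1 - m) ** 2 + (e2 - m) ** 2 + (e3 - m) ** 2)))
    # isotropic case: shape parameter is undefined, return 0 by convention
    t = (2.0 * e2 - e1 - e3) / (e1 - e3) if e1 != e3 else 0.0
    return p_prime, t
```

An isotropic grain gives P' = 1; the disk-shaped paramagnetic ellipsoids reported above correspond to P' > 1 with T > 0.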
Selective breeding for susceptibility to myopia reveals a gene-environment interaction.
Chen, Yen-Po; Hocking, Paul M; Wang, Ling; Povazay, Boris; Prashar, Ankush; To, Chi-Ho; Erichsen, Jonathan T; Feldkaemper, Marita; Hofer, Bernd; Drexler, Wolfgang; Schaeffel, Frank; Guggenheim, Jeremy A
2011-06-08
Purpose. To test whether the interanimal variability in susceptibility to visually induced myopia is genetically determined. Methods. Monocular deprivation of sharp vision (DSV) was induced in outbred White Leghorn chicks aged 4 days. After 4 days' DSV, myopia susceptibility was quantified by the relative changes in axial length and refraction. Chicks in the extreme tails of the distribution of susceptibility to DSV were kept and paired for breeding (high- and low-susceptibility lines). A second round of selection was then performed. The third generation of chicks, derived from the selected parents, was assessed after either monocular DSV (4 or 10 days) or lens wear. Results. After two rounds of selective breeding, the chicks from the high-susceptibility line developed approximately twice as much myopia in response to 4 days' DSV as did those from the low-susceptibility line (P < 0.001). All ocular component dimensions differed significantly (P < 0.001) between the two selected lines, both before treatment and in the responses of the treated eye. When DSV was conducted for 10 days, the relative changes in axial length and refractive error were still significantly different between the high and low lines (P < 0.001). The chicks bred for high or low susceptibility to DSV also showed significantly different responses to minus lens wear, but not to plus lens wear. Additive genetic effects explained ∼50% of the interanimal variability in response to DSV. Conclusions. Genes and environment interact to shape refractive development in chicks.
Controlling qubit drift by recycling error correction syndromes
NASA Astrophysics Data System (ADS)
Blume-Kohout, Robin
2015-03-01
Physical qubits are susceptible to systematic drift, above and beyond the stochastic Markovian noise that motivates quantum error correction. This parameter drift must be compensated - if it is ignored, error rates will rise to intolerable levels - but compensation requires knowing the parameters' current value, which appears to require halting experimental work to recalibrate (e.g. via quantum tomography). Fortunately, this is untrue. I show how to perform on-the-fly recalibration on the physical qubits in an error correcting code, using only information from the error correction syndromes. The algorithm for detecting and compensating drift is very simple - yet, remarkably, when used to compensate Brownian drift in the qubit Hamiltonian, it achieves a stabilized error rate very close to the theoretical lower bound. Against 1/f noise, it is less effective only because 1/f noise is (like white noise) dominated by high-frequency fluctuations that are uncompensatable. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE
Cognitive processes in anesthesiology decision making.
Stiegler, Marjorie Podraza; Tung, Avery
2014-01-01
The quality and safety of health care are under increasing scrutiny. Recent studies suggest that medical errors, practice variability, and guideline noncompliance are common, and that cognitive error contributes significantly to delayed or incorrect diagnoses. These observations have increased interest in understanding decision-making psychology. Many nonrational (i.e., not purely based in statistics) cognitive factors influence medical decisions and may lead to error. The most well-studied include heuristics, preferences for certainty, overconfidence, affective (emotional) influences, memory distortions, bias, and social forces such as fairness or blame. Although the extent to which such cognitive processes play a role in anesthesia practice is unknown, anesthesia care frequently requires rapid, complex decisions that are most susceptible to decision errors. This review will examine current theories of human decision behavior, identify effects of nonrational cognitive processes on decision making, describe characteristic anesthesia decisions in this context, and suggest strategies to improve decision making.
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoder supplemented with iterative decoding is proposed as an effective means of suppressing intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, thus preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Hashizume, Takuya; Kobayashi, Ikuo
2016-07-01
Our aim in this study is to derive an identification limit on a dosimeter for not disturbing a medical image when patients wear a small-type optically stimulated luminescence (OSL) dosimeter on their bodies during X-ray diagnostic imaging. For evaluation of the detection limit based on an analysis of X-ray spectra, we propose a new quantitative identification method. We performed experiments for which we used diagnostic X-ray equipment, a soft-tissue-equivalent phantom (1-20 cm), and a CdTe X-ray spectrometer assuming one pixel of the X-ray imaging detector. Then, with the following two experimental settings, corresponding X-ray spectra were measured with 40-120 kVp and 0.5-1000 mAs at a source-to-detector distance of 100 cm: (1) X-rays penetrating a soft-tissue-equivalent phantom with the OSL dosimeter attached directly on the phantom, and (2) X-rays penetrating only the soft-tissue-equivalent phantom. Next, the energy fluence and errors in the fluence were calculated from the spectra. When the energy fluence with errors concerning these two experimental conditions was estimated to be indistinctive, we defined the condition as the OSL dosimeter not being identified on the X-ray image. Based on our analysis, we determined the identification limit of the dosimeter. We then compared our results with those for the general irradiation conditions used in clinics. We found that the OSL dosimeter could not be identified under the irradiation conditions of abdominal and chest radiography, namely, one can apply the OSL dosimeter to measurement of the exposure dose in the irradiation field of X-rays without disturbing medical images.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-01-01
Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
Vinnicombe, S J; Whelehan, P; Thomson, K; McLean, D; Purdie, C A; Jordan, L B; Hubbard, S; Evans, A J
2014-04-01
Shear wave elastography (SWE) is a promising adjunct to greyscale ultrasound in differentiating benign from malignant breast masses. The purpose of this study was to characterise breast cancers which are not stiff on quantitative SWE, to elucidate potential sources of error in clinical application of SWE to evaluation of breast masses. Three hundred and two consecutive patients examined by SWE who underwent immediate surgery for breast cancer were included. Characteristics of 280 lesions with suspicious SWE values (mean stiffness >50 kPa) were compared with 22 lesions with benign SWE values (<50 kPa). Statistical significance of the differences was assessed using non-parametric goodness-of-fit tests. Pure ductal carcinoma in situ (DCIS) masses were more often soft on SWE than masses representing invasive breast cancer. Invasive cancers that were soft were more frequently: histological grade 1, tubular subtype, ≤10 mm invasive size and detected at screening mammography. No significant differences were found with respect to the presence of invasive lobular cancer, vascular invasion, hormone and HER-2 receptor status. Lymph node positivity was less common in soft cancers. Malignant breast masses classified as benign by quantitative SWE tend to have better prognostic features than those correctly classified as malignant. • Over 90 % of cancers assessable with ultrasound have a mean stiffness >50 kPa. • 'Soft' invasive cancers are frequently small (≤10 mm), low grade and screen-detected. • Pure DCIS masses are more often soft than invasive cancers (>40 %). • Large symptomatic masses are better evaluated with SWE than small clinically occult lesions. • When assessing small lesions, 'softness' should not raise the threshold for biopsy.
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; here δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient (R = 0.72) studies. Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking.
Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
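The frame-averaging scheme and the correlation measure used in this study can be sketched directly: averaging every n consecutive frames of the 12.87 Hz stream yields an effective rate of 12.87/n Hz (n = 3 gives the 4.29 Hz threshold quoted above), and the blur/noise-versus-error relationships are summarized with Pearson's R. Frames are represented here as flat pixel lists; this is an illustrative sketch, not the authors' implementation.

```python
def average_frames(frames, n):
    """Continuous n-frame averaging: each output frame is the
    pixel-wise mean of n consecutive input frames, reducing the
    effective frame rate by a factor of n (12.87 Hz / 3 = 4.29 Hz)."""
    return [
        [sum(col) / n for col in zip(*frames[i:i + n])]
        for i in range(0, len(frames) - n + 1, n)
    ]

def pearson_r(xs, ys):
    """Pearson correlation coefficient, e.g. between per-frame-rate
    motion blurring and tracking error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```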
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low-frame-rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; here δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient (R = 0.72) studies. Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively.
Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
Rapidly growing mycobacteria in Singapore, 2006-2011.
Tang, S S; Lye, D C; Jureen, R; Sng, L-H; Hsu, L Y
2015-03-01
Nontuberculous mycobacteria infection is a growing global concern, but data from Asia are limited. This study aimed to describe the distribution and antibiotic susceptibility profiles of rapidly growing mycobacterium (RGM) isolates in Singapore. Clinical RGM isolates with antibiotic susceptibility tests performed between 2006 and 2011 were identified using microbiology laboratory databases, and minimum inhibitory concentrations of amikacin, cefoxitin, clarithromycin, ciprofloxacin, doxycycline, imipenem, linezolid, moxifloxacin, sulfamethoxazole or trimethoprim-sulfamethoxazole, tigecycline and tobramycin were recorded. Regression analysis was performed to detect changes in antibiotic susceptibility patterns over time. A total of 427 isolates were included. Of these, 277 (65%) were from respiratory specimens, 42 (10%) were related to skin and soft tissue infections and 36 (8%) were recovered from blood specimens. The two most common species identified were Mycobacterium abscessus (73%) and the Mycobacterium fortuitum group (22%), with amikacin and clarithromycin being most active against the former, and quinolones and trimethoprim-sulfamethoxazole against the latter. Decreases in susceptibility of M. abscessus to linezolid by 8.8% per year (p = 0.001), and of the M. fortuitum group to imipenem by 9.5% per year (p = 0.023) and clarithromycin by 4.7% per year (p = 0.033), were observed. M. abscessus in respiratory specimens is the most common RGM identified in Singapore. Antibiotic options for treatment of RGM infections are increasingly limited. Copyright © 2014 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
Feichtenschlager, Christian; Gerwing, Martin; Failing, Klaus; Peppler, Christine; Kása, Andreas; Kramer, Martin; von Pückler, Kerstin H
2018-06-02
To determine the effectiveness of magnetic resonance imaging (MRI) in the evaluation of anatomical stifle structures with respect to implant positioning after tibial plateau levelling osteotomy (TPLO) using a titanium plate. Selected sagittal and dorsal sequences of pre- and postoperative MRI (1.0 T scanner) of 13 paired (n = 26) sound cadaveric stifle joints were evaluated. The effect of susceptibility artifact on adjacent anatomical stifle structures was graded from 0 to 5. The effect of implant positioning on the assessment score was calculated using Spearman's rank correlation coefficient. Sagittal turbo spin echo (TSE)-acquired images enabled interpretation of most soft tissue, osseous and cartilage structures without detrimental effect of susceptibility artifact distortions. In T2-weighted TSE images, the cranial cruciate ligament and caudal horn of the medial meniscus could be evaluated, independent of implant position, without any susceptibility artifact in all specimens. T2-weighted fast field echo, water selective, balanced fast field echo and short tau inversion recovery sequences were most markedly affected by susceptibility artifact. In selected TSE sequences, MRI allows evaluation of critical intra-articular structures after titanium TPLO plate implantation. Further investigations with confirmed stifle pathologies in dogs are required to evaluate the accuracy of MRI after TPLO in clinical cases in this context. Schattauer GmbH Stuttgart.
TOPICAL REVIEW: Anatomical imaging for radiotherapy
NASA Astrophysics Data System (ADS)
Evans, Philip M.
2008-06-01
The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. 
In ART a set of scans is taken on different days. Both allow planning to account for variability intrinsic to the patient. Treatment verification has been carried out using a variety of technologies, including MV portal imaging, kV portal/fluoroscopy, MVCT, cone-beam kVCT, ultrasound and optical surface imaging. The various methods have their pros and cons. The four x-ray methods involve an extra radiation dose to normal tissue. The portal methods may not generally be used to visualize soft tissue; consequently, they are often used in conjunction with implanted fiducial markers. The two CT-based methods allow measurement of inter-fraction variation only. Ultrasound allows soft-tissue measurement with zero dose but requires skilled interpretation, and there is evidence of systematic differences between ultrasound and other data sources, perhaps due to the effects of probe pressure. Optical imaging also involves zero dose but requires good correlation between the target and the external measurement and thus is often used in conjunction with an x-ray method. The use of anatomical imaging in radiotherapy allows treatment uncertainties to be determined. These include errors between the mean position at treatment and that at planning (the systematic error) and the day-to-day variation in treatment set-up (the random error). Positional variations may also be categorized in terms of inter- and intra-fraction errors. Various empirical treatment margin formulae and intervention approaches exist to determine the optimum strategies for treatment in the presence of these known errors. Other methods exist that aim to reduce error margins drastically, including the currently available breath-hold techniques and the tracking methods, which are largely in development. This paper will review anatomical imaging techniques in radiotherapy and how they are used to boost the therapeutic benefit of the treatment.
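The systematic/random error decomposition mentioned above feeds directly into the most widely quoted empirical margin recipe, the van Herk formula (2.5Σ + 0.7σ). The review does not single out this formula; it is given here as one concrete example of the "empirical treatment margin formulae" it refers to.

```python
def van_herk_margin(sigma_systematic_mm, sigma_random_mm):
    """CTV-to-PTV margin (mm) from the van Herk recipe:
    M = 2.5 * SD of systematic errors + 0.7 * SD of random errors.
    Under the model's assumptions this ensures ~90% of patients
    receive at least 95% of the prescribed dose to the CTV."""
    return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm
```

For example, 2 mm systematic and 3 mm random set-up uncertainty would call for roughly a 7 mm margin.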
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se
2016-01-21
Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling over the entire range of coupling constants, in both rigid and highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulation, and experiment. The resonance conditions and the efficiency of suppressing rf field errors are examined and compared to those of other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of analytical and numerical calculations. The technique benefits from dipolar resolution over a wider range of coupling constants than other state-of-the-art methods and is thus advantageous in studies of complex solids with a broad range of dynamic processes and degrees of molecular mobility.
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects as a real octopus arm does. Despite the simple design, the soft conical-shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network that learns the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an average position error of 8% relative to the total arm length.
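The learned mapping described above, from a desired 3D tip position to cable forces, reduces at run time to a single feed-forward pass, which is why it is fast. A minimal sketch with NumPy follows; the layer sizes, cable count, and random weights are placeholders standing in for the trained network, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical architecture: 3 inputs (x, y, z tip position),
# one hidden tanh layer, N_CABLES outputs (cable forces).
N_HIDDEN, N_CABLES = 16, 4
W1 = rng.normal(size=(N_HIDDEN, 3)); b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(size=(N_CABLES, N_HIDDEN)); b2 = np.zeros(N_CABLES)

def inverse_kinetics(tip_xyz):
    """Forward pass: desired tip position -> cable forces.
    A trained network would have W1, W2 fitted to sampled
    (tip position, cable force) pairs; here they are random."""
    h = np.tanh(W1 @ np.asarray(tip_xyz) + b1)
    return W2 @ h + b2

def relative_error(reached_xyz, desired_xyz, arm_length):
    """Tip-position error as a fraction of total arm length,
    the accuracy measure quoted in the abstract (8%)."""
    diff = np.asarray(reached_xyz) - np.asarray(desired_xyz)
    return np.linalg.norm(diff) / arm_length
```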
A Novel Soft Pneumatic Artificial Muscle with High-Contraction Ratio.
Han, Kwanghyun; Kim, Nam-Ho; Shin, Dongjun
2018-06-20
There is a growing interest in soft actuators for human-friendly robotic applications. However, it is very challenging for conventional soft actuators to achieve both a large working distance and high force. To address this problem, we present a high-contraction ratio pneumatic artificial muscle (HCRPAM), which has a novel actuation concept. The HCRPAM can contract substantially while generating a large force suitable for a wide range of robotic applications. Our proposed prototyping method allows for easy and quick fabrication, accommodating various design variables. We derived a mathematical model using the virtual work principle and validated the model experimentally. We conducted simulations for design optimization using this model. Our experimental results show that the HCRPAM has a 183.3% larger contraction ratio and 37.1% higher force output than the conventional pneumatic artificial muscle (McKibben muscle). Furthermore, the actuator achieves position tracking at 1.0 Hz and a relatively low hysteresis error of 4.8%. Finally, we discuss the controllable bending characteristics of the HCRPAM, which uses heterogeneous materials and has an asymmetrical structure to make it comfortable for a human to wear.
Zhu, Bin; Mirabello, Lisa; Chatterjee, Nilanjan
2018-06-22
In rare variant association studies, aggregating rare and/or low-frequency variants may increase statistical power for detection of the underlying susceptibility gene or region. However, it is unclear which variants, or which class of variants, in a gene contribute most to the association. We proposed a subregion-based burden test (REBET) to simultaneously select susceptibility genes and identify the important underlying subregions. The subregions are predefined by shared common biological characteristics, such as protein domain or functional impact. Based on a subset-based approach considering local correlations between combinations of test statistics of subregions, REBET is able to properly control the type I error rate while adjusting for multiple comparisons in a computationally efficient manner. Simulation studies show that REBET can achieve power competitive with alternative methods when rare variants cluster within subregions. In two case studies, REBET is able to identify known disease susceptibility genes and, more importantly, to pinpoint previously unreported most susceptible subregions, which represent protein domains essential for gene function. R package REBET is available at https://dceg.cancer.gov/tools/analysis/rebet. Published 2018. This article is a U.S. Government work and is in the public domain in the USA.
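The aggregation step underlying any burden-style test can be sketched simply: collapse the rare variants in a predefined subregion into one per-sample burden score, which is then tested for association with the phenotype. This is a generic burden-score sketch for illustration only, not the REBET subset-based procedure itself; the MAF threshold is an assumed convention.

```python
def burden_scores(genotypes, maf, maf_threshold=0.01):
    """Collapse rare variants into one burden score per sample.

    genotypes: samples x variants matrix of minor-allele counts (0/1/2).
    maf:       per-variant minor allele frequency.
    Only variants with MAF below the threshold (here 1%, a common but
    assumed convention) contribute; the score is the total count of
    rare alleles each sample carries in the subregion."""
    rare = [j for j, f in enumerate(maf) if f < maf_threshold]
    return [sum(g[j] for j in rare) for g in genotypes]
```

In a full test, these scores would enter a regression of phenotype on burden, with the subregion definitions (e.g. protein domains) deciding which variants are grouped together.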
Occurrence of risk factors for dental erosion in the population of young adults in Norway.
Asmyhr, Øyvind; Grytten, Jostein; Holst, Dorthe
2012-10-01
The aim of this study was to describe the occurrence of risk factors for dental erosion for a group of young adults who are particularly susceptible to erosion. Another aim was to describe the awareness of erosion and the distribution of risk factors according to the educational background of the parents. The sample (n = 2004 individuals) was randomly drawn from the population of 19-20-year-old Norwegians. The data were collected using telephone interviews. We measured awareness about erosion using the following question: 'Have you ever heard about dental erosion?' We obtained information about the frequency of intake of the following risk factors: soft drinks with and without sugar, and juice. Soft drinks with sugar included lemonade (Coca-Cola, Solo, Pepsi, Mozell and ice tea) and sport drinks (XL1, Maxim). Soft drinks without sugar included Cola light, Zero, PepsiMax, Solo light and ice tea light. Juice included orange, grapefruit, apple and kiwi juice. The parents' level of education was based on the Norwegian school system, which has three levels: compulsory schooling (10 years), upper secondary school education (up to 13 years) and university/college education. The data were analysed using logistic regression analyses. Awareness of erosion was high - 93.5% of respondents were aware of the problem. The majority of respondents believed that erosion can be prevented - altogether 84.9%. They also believed that soft drinks with and without sugar are equally important for the development of erosion. 17.5% of respondents drank soft drinks with sugar daily or several times a day. The corresponding figures for soft drinks without sugar and juice were 4.9% and 34.1%, respectively. Young adults with highly educated mothers drank soft drinks both with and without sugar less frequently than those whose mothers had low education. This pattern was the opposite for juice.
Consumption of soft drinks and juice is high, even though awareness and knowledge about the causes of erosion are widespread. This indicates the need for effective intervention strategies to reduce the level of consumption. These strategies should take into account the fact that the distribution of risk factors is skewed with respect to parents' level of education. © 2012 John Wiley & Sons A/S.
Lau, Billy T; Ji, Hanlee P
2017-09-21
RNA-Seq measures gene expression by counting sequence reads belonging to unique cDNA fragments. Molecular barcodes, commonly in the form of random nucleotides, were recently introduced to improve gene expression measures by detecting amplification duplicates, but they are susceptible to errors generated during PCR and sequencing. This results in false-positive counts, leading to inaccurate transcriptome quantification, especially at low-input and single-cell RNA amounts, where the total number of molecules present is minuscule. To address this issue, we demonstrated the systematic identification of molecular species using transposable error-correcting barcodes that are exponentially expanded to tens of billions of unique labels. We experimentally showed that random-mer molecular barcodes suffer from substantial and persistent errors that are difficult to resolve. To assess our method's performance, we applied it to the analysis of known reference RNA standards. By including an inline random-mer molecular barcode, we systematically characterized the presence of sequence errors in random-mer molecular barcodes. We observed that such errors are extensive and become more dominant at low input amounts. We described the first study to use transposable molecular barcodes and its use for studying random-mer molecular barcode errors. The extensive errors found in random-mer molecular barcodes may warrant the use of error-correcting barcodes for transcriptome analysis as input amounts decrease.
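Why error-correcting barcodes help can be shown with a toy sketch: with a designed codebook, an observed barcode within one substitution of a unique codeword is corrected back to it, whereas with random-mers every sequence is a "valid" barcode, so a single PCR or sequencing error silently creates a new apparent molecule (the false-positive counts described above). The codebook and distance bound here are illustrative assumptions, not the authors' transposable barcode design.

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def correct_barcode(observed, codebook, max_dist=1):
    """Map an observed barcode to the unique codeword within max_dist
    substitutions; return None if none or several codewords match.
    Random-mer barcodes have no codebook, so this correction step is
    impossible and errors inflate the molecule count."""
    hits = [c for c in codebook if hamming(observed, c) <= max_dist]
    return hits[0] if len(hits) == 1 else None
```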
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. The experiment implements an open-source microprocessor without interlocked pipeline stages (MIPS)-based processor softcore, a cached memory structure capable of accessing double data rate type three (DDR3) and secure digital card memories, an interface to the main satellite bus, and Xilinx's soft error mitigation softcore.
Radio frequency tags systems to initiate system processing
NASA Astrophysics Data System (ADS)
Madsen, Harold O.; Madsen, David W.
1994-09-01
This paper describes the automatic identification technology which has been installed at Applied Magnetic Corp. MR fab. World class manufacturing requires technology exploitation. This system combines (1) FluoroTrac cassette and operator tracking, (2) CELLworks cell controller software tools, and (3) Auto-Soft Inc. software integration services. The combined system eliminates operator keystrokes and errors during normal processing within a semiconductor fab. The methods and benefits of this system are described.
Studies Of Single-Event-Upset Models
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.
1988-01-01
Report presents latest in series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, SEU response of low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) observed during irradiation by Br and O ions in ranges of 100 to 240 and 20 to 100 MeV, respectively. Experimental data complete verification of computer model used to simulate SEU in this circuit.
Clean galena, contaminated lead, and soft errors in memory chips
NASA Astrophysics Data System (ADS)
Lykken, G. I.; Hustoft, J.; Ziegler, B.; Momcilovic, B.
2000-10-01
Lead (Pb) disks were exposed to a radon (Rn)-rich atmosphere and surface alpha particle emissions were detected over time. Cumulative 210Po alpha emission increased nearly linearly with time. Conversely, cumulative emission for each of 218Po and 214Po was constant after one and two hours, respectively. Processing of radiation-free Pb ore (galena) in inert atmospheres was compared with processing in ambient air. Galena processed within a flux heated in a graphite crucible while exposed to an inert atmosphere, resulted in lead contaminated with 210Po (Trial 1). A glove box was next used to prepare a baseline radiation-free flux sample in an alumina crucible that was heated in an oven with an inert atmosphere (Trials 2 and 3). Ambient air was thereafter introduced, in place of the inert atmosphere, to the radiation-free flux mixture during processing (Trial 4). Ambient air introduced Rn and its progeny (RAD) into the flux during processing so that the processed Pb contained Po isotopes. A typical coke used in lead smelting also emitted numerous alpha particles. We postulate that alpha particles from tin/lead solder bumps, a cause of computer chip memory soft errors, may originate from Rn and RAD in the ambient air and/or coke used as a reducing agent in the standard galena smelting procedure.
Methicillin resistant Staphylococcus aureus (MRSA) in India: prevalence & susceptibility pattern.
2013-02-01
Methicillin resistant Staphylococcus aureus (MRSA) is endemic in India and is a dangerous pathogen for hospital acquired infections. This study was conducted in 15 Indian tertiary care centres during a two-year period from January 2008 to December 2009 to determine the prevalence of MRSA and the susceptibility pattern of S. aureus isolates in India. All S. aureus isolates obtained during the study period in the participating centres were included in the study. Each centre compiled its data in a predefined template which included the antimicrobial susceptibility pattern, location of the patient and specimen type. The data in the submitted templates were collated and analysed. A total of 26310 isolates were included in the study. The overall prevalence of methicillin resistance during the study period was 41 per cent. Isolation rates for MRSA from outpatients, ward inpatients and ICU were 28, 42 and 43 per cent, respectively, in 2008 and 27, 49 and 47 per cent, respectively, in 2009. The majority of S. aureus isolates were obtained from patients with skin and soft tissue infections, followed by those suffering from blood stream infections and respiratory infections. Susceptibility to ciprofloxacin was low in both MSSA (53%) and MRSA (21%). MSSA isolates showed higher susceptibility to gentamicin, co-trimoxazole, erythromycin and clindamycin than MRSA isolates. No isolate was found resistant to vancomycin or linezolid. The study showed a high level of MRSA in our country. There is a need to study the epidemiology of such infections. Robust antimicrobial stewardship and strengthened infection control measures are required to prevent spread and reduce emergence of resistance.
Optical Assessment of Soft Contact Lens Edge-Thickness.
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P
2016-08-01
To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2-μm imaging resolution in three dimensions and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester, NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. Fifty cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. The results confirmed the ability of GD-OCM to provide high-definition images of soft contact lens edges. As a nondestructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance.
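Two of the quantitative steps above can be sketched compactly: OCT measures optical path length, so recovering physical thickness requires dividing by the medium's group refractive index (the "correction of the refraction error"), and the sub-0.2 μm precision is the spread of repeated measurements across frames. The index value in the example is an assumed placeholder for a hydrogel lens, not a figure from the study.

```python
def physical_thickness(optical_path_um, group_index):
    """OCT reports optical path length (physical thickness x group
    refractive index); divide by the index to recover physical
    thickness. The index of the lens material must be known."""
    return optical_path_um / group_index

def precision(measurements_um):
    """Measurement precision as the sample standard deviation of
    repeated thickness measurements (50 frames per configuration
    in the study)."""
    n = len(measurements_um)
    mean = sum(measurements_um) / n
    return (sum((x - mean) ** 2 for x in measurements_um) / (n - 1)) ** 0.5
```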
Optical Assessment of Soft Contact Lens Edge-Thickness
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P.
2016-01-01
Purpose To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2 μm imaging resolution in three dimensions, and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. Methods A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. Results The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. 50 cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. Conclusions The results confirmed the ability of GD-OCM to provide high definition images of soft contact lens edges. As a non-destructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in-vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance. PMID:27232902
Enhanced Auditory Arousal Increases Intake of Less Palatable and Healthier Foods
Privitera, Gregory J.; Diaz, Melissa; Haas, Meagan C.
2014-01-01
Two experiments were conducted to test a prediction of the arousal hypothesis that increased arousal will increase intake of less palatable and healthy foods. In both experiments, arousal was manipulated by adjusting the volume of a movie (soft, loud volume) while participants consumed foods. In Experiment 1, participants ate fresh (palatable) or stale (less palatable) popcorn during a 9-minute movie played at a soft or loud volume. Experiment 2 used the same procedures with healthier foods (carrot sticks and apple slices). Partial support for the arousal hypothesis in Experiment 1 showed that participants consumed more stale but not fresh popcorn in the loud (high arousal) versus soft (low arousal) volume group. These findings suggest that low but not high palatable foods are susceptible to manipulations of arousal. Consistent with this interpretation, Experiment 2 showed that high but not low environmental arousal increased intake of the fruits and vegetables, which are typically rated as lower in palatability compared to high fat foods. These results show that high arousal in an eating-typical environment increases intake of less palatable foods, and healthy foods (i.e., fruits and vegetables). Increasing the availability of healthier foods in a loud food environment can have a positive impact on increasing intake of fruits and vegetables in that environment. PMID:24762340
Ritchie, Stephen R; Fraser, John D; Libby, Eric; Morris, Arthur J; Rainey, Paul B; Thomas, Mark G
2011-04-15
To estimate the burden of skin and soft tissue infection caused by Staphylococcus aureus (S. aureus), and to determine the effects of ethnicity and age on the rate of skin and soft tissue infection due to MRSA in the Auckland community. We reviewed the culture and susceptibility results of all wound swabs processed by Auckland's only community microbiology laboratory in 2007. Demographic data for a random sample of 1000 people who had a wound swab collected, and for all people from whom a methicillin-resistant S. aureus (MRSA) strain was isolated, were obtained and compared with demographic data for the total population of Auckland. S. aureus was isolated from 23853/47047 (51%) wound swab cultures performed in 2007; the estimated annual incidence of S. aureus isolation from a wound swab was 1847/100,000 people, and the estimated annual incidence of MRSA isolation from a wound swab was 145/100,000 people. Maori and Pacific people had higher rates of non-multiresistant MRSA infection than New Zealand European and Asian people; elderly New Zealand European people had much higher rates of multiresistant MRSA infection than people from other ethnic groups. S. aureus is a very common cause of disease in the community, and the incidence of infection with MRSA subtypes varies with ethnicity.
A comparison study of different facial soft tissue analysis methods.
Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun
2014-07-01
The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: direct anthropometry, digitizer, 3D CT, 3D scanner, and the DI3D system. With these measurement methods, 10 measurement values representing facial width, height, and depth were determined twice, one week apart, by one examiner. The data were analyzed with the SPSS program. The positions derived from multidimensional scaling showed that direct anthropometry, 3D CT, the digitizer, and the 3D scanner produced relatively similar values, while the DI3D system produced slightly different values. All 5 methods demonstrated good accuracy, with a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured distance between the right and left medial canthus obtained with the DI3D system differed statistically significantly from that obtained with the digital caliper, digitizer, and laser scanner (p < 0.05), but the other measured values did not differ significantly. On evaluating the reproducibility of the measurement methods, two measurement values (Ls-Li, G-Pg) obtained by direct anthropometry, one (N'-Prn) obtained with the digitizer, and four (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained with the DI3D system were statistically significantly different. However, the mean measurement error of every method was low (<0.7 mm). None of the values obtained with the 3D CT and 3D scanner showed a statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence can be used in clinical practice and research studies.
Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Whittle, Rebecca; Peat, George; Belcher, John; Collins, Gary S; Riley, Richard D
2018-05-18
Measurement error in predictor variables may threaten the validity of clinical prediction models. We sought to evaluate the possible extent of the problem. A secondary objective was to examine whether predictors are measured at the intended moment of model use. A systematic search of Medline was used to identify a sample of articles reporting the development of a clinical prediction model published in 2015. After screening according to predefined inclusion criteria, information on predictors, strategies to control for measurement error, and the intended moment of model use was extracted. Susceptibility to measurement error for each predictor was classified as low or high risk. Thirty-three studies were reviewed, including 151 different predictors in the final prediction models. Fifty-one (33.7%) predictors were categorised as at high risk of error; however, this was not accounted for during model development. Only 8 (24.2%) studies explicitly stated the intended moment of model use and when the predictors were measured. Reporting of measurement error and the intended moment of model use is poor in prediction model studies. There is a need to identify circumstances where ignoring measurement error in prediction models is consequential, and whether accounting for the error will improve the predictions. Copyright © 2018. Published by Elsevier Inc.
New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; Label, Kenneth A.; Pellish, Jonathan
2016-01-01
It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures; consequently, fail-safe and mitigation strategies differ per FPGA type. This session presents several design approaches for safe systems and explores the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies, so that users can select appropriate mitigation and perform the required trades for system insertion. The presentation describes three types of FPGA devices and their susceptibilities in radiation environments.
New Developments in FPGA: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective
NASA Technical Reports Server (NTRS)
Berg, Melanie D.; LaBel, Kenneth; Pellish, Jonathan
2015-01-01
It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures; consequently, fail-safe and mitigation strategies differ per FPGA type. This session presents several design approaches for safe systems and explores the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies, so that users can select appropriate mitigation and perform the required trades for system insertion. The presentation describes three types of FPGA devices and their susceptibilities in radiation environments.
Read disturb errors in a CMOS static RAM chip [radiation hardened for spacecraft]
NASA Technical Reports Server (NTRS)
Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.
1989-01-01
Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard single-event-upset-resistant 4 x 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparency of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works on reconstructed images instead of projection data. It applies thresholds to separate low-attenuating (soft tissue) and high-attenuating (bone and contrast) material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and of the animals most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings of the correction algorithm for patient and animal studies, which depend upon the anatomy of the scanned subject. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
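The image-domain pipeline described above can be sketched, much simplified, in pure Python (the function name, the single-angle row-sum "projection", and the single quadratic error term are illustrative assumptions, not the authors' implementation):

```python
def correct_beam_hardening(image, bone_threshold, coeff, scale):
    """Simplified image-domain beam-hardening correction sketch.

    Following the abstract's pipeline on a 2D image stored as nested lists:
    1. threshold to isolate high-attenuating (bone/contrast) material,
    2. forward-project the segmented image (row sums stand in for a
       single projection angle),
    3. estimate the BH error in each projection with a polynomial (here
       one quadratic term, coeff * p**2),
    4. "back-project" the error uniformly along each row, and
    5. subtract the scaled error image from the original image.
    """
    ncols = len(image[0])
    # steps 1-2: segment and forward-project (one angle: row sums)
    projections = [sum(v for v in row if v >= bone_threshold) for row in image]
    # step 3: polynomial error model per projection
    errors = [coeff * p * p for p in projections]
    # steps 4-5: back-project the error and subtract the scaled result
    return [[v - scale * e / ncols for v in row]
            for row, e in zip(image, errors)]

img = [[0.0, 2.0, 0.0],
       [0.0, 0.0, 0.0]]
corrected = correct_beam_hardening(img, bone_threshold=1.0, coeff=0.1, scale=1.0)
print(corrected)
```

In the real method, the scale factor and polynomial coefficients are the tunable settings the phantom experiments optimize.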
Link Performance Analysis and monitoring - A unified approach to divergent requirements
NASA Astrophysics Data System (ADS)
Thom, G. A.
Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit error rate (BER) testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment, in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft-decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general-purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame sync errors, and data quality analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place, permitting a phased update of the user's unit in accordance with their needs.
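The "real-time BER derived from frame sync errors" idea can be sketched as follows (a hypothetical helper; the paper's exact formula is not given in the abstract): the fixed frame sync word is the only known bit content in live data, so bit errors counted within it are extrapolated to the whole stream.

```python
def derived_ber(sync_bit_errors, sync_words_checked, sync_word_bits):
    """Estimate real-time BER from bit errors observed in frame sync words.

    Only the sync pattern is known a priori during a live data flow, so
    the error rate measured over those known bits serves as an estimate
    of the bit error rate of the entire stream.
    """
    total_known_bits = sync_words_checked * sync_word_bits
    return sync_bit_errors / total_known_bits

# e.g. 3 bit errors observed across 10,000 frames with a 32-bit sync word
print(derived_ber(3, 10_000, 32))
```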
Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm
NASA Astrophysics Data System (ADS)
Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.
2017-01-01
This study proposed the application of a data mining technique, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicated as the location of a fire; in this study, hotspot detections are categorized as true alarms or false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed as a fuzzy inference system (FIS); the FIS implements a nonlinear mapping from its input space to the output space. The method classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided low training error (0.0093676) and low testing error (0.0093676). Distance to road was the most influential attribute on the probability of true versus false alarms, reflecting the higher level of human activity near roads. This classification model can be used to develop an early warning system for forest fires.
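The three-fold model-selection procedure can be sketched independently of the ANFIS learner itself (the helper names and the placeholder train_and_score callback are illustrative assumptions, not the study's code):

```python
import random

def three_fold_split(samples, seed=0):
    """Shuffle a dataset and split it into three folds, as in the
    abstract's model selection (the ANFIS learner is out of scope here)."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    k = 3
    return [shuffled[i::k] for i in range(k)]

def cross_validate(samples, train_and_score):
    """For each fold, train on the other two folds and record the test
    error; the fold with the lowest test error selects the final model."""
    folds = three_fold_split(samples)
    errors = []
    for i in range(3):
        test = folds[i]
        train = [s for j, fold in enumerate(folds) if j != i for s in fold]
        errors.append(train_and_score(train, test))
    return errors
```

In the study, train_and_score would fit an ANFIS model on the training folds and return its error on the held-out fold.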
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-07
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by simply adding a weighting factor equal to the square of angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to compute the helical axis explicitly, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation. Copyright © 2014 Elsevier Ltd. All rights reserved.
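The angular-velocity-squared weighting can be illustrated in the planar case (a simplified 2D sketch under noise-free assumptions, not the paper's full 3D formulation): each frame yields a center-of-rotation estimate, and frames are averaged with weights proportional to the square of angular velocity so that slow-rotation frames, where velocity noise dominates, contribute little.

```python
def poke_center_2d(samples):
    """Planar sketch of the POKE idea.

    Each sample is ((px, py), (vx, vy), w): marker position p, marker
    velocity v, and angular velocity w about the plane normal. For pure
    rotation about center c, v = w * perp(p - c) with perp(x, y) = (-y, x),
    so each frame gives the estimate c_i = p_i + perp(v_i) / w_i. Frames
    are combined with weights w_i**2.
    """
    sw = sx = sy = 0.0
    for (px, py), (vx, vy), w in samples:
        cx = px + (-vy) / w          # per-frame center estimate
        cy = py + vx / w
        sw += w * w                  # POKE weighting: angular velocity squared
        sx += w * w * cx
        sy += w * w * cy
    return (sx / sw, sy / sw)

# Noise-free check: markers rotating about (1.0, 2.0) with w = 2 rad/s.
# For p = (3, 2): p - c = (2, 0), so v = w * perp(p - c) = (0, 4).
print(poke_center_2d([((3.0, 2.0), (0.0, 4.0), 2.0),
                      ((1.0, 4.0), (-4.0, 0.0), 2.0)]))  # (1.0, 2.0)
```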
Classification Errors and Bias Regarding Research on Sexual Minority Youths
ERIC Educational Resources Information Center
Cimpian, Joseph R.
2017-01-01
Quantitative research on sexual minority youths (SMYs) has likely contributed to misperceptions about the risk and deviance of this population. In part because it often relies on self-reported data from population-based self-administered questionnaires, this research is susceptible to misclassification bias whereby youths who are not SMYs are…
Post-Modeling Histogram Matching of Maps Produced Using Regression Trees
Andrew J. Lister; Tonya W. Lister
2006-01-01
Spatial predictive models often use statistical techniques that in some way rely on averaging of values. Estimates from linear modeling are known to be susceptible to truncation of variance when the independent (predictor) variables are measured with error. A straightforward post-processing technique (histogram matching) for attempting to mitigate this effect is...
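A common form of the histogram-matching post-process referred to above is rank-based matching against a reference distribution (a generic sketch, not necessarily the authors' exact procedure): predictions compressed toward the mean by averaging are spread back to the reference distribution while preserving their ordering.

```python
def histogram_match(predicted, reference):
    """Rank-based histogram matching.

    Replace each predicted value with the reference value of the same
    rank, restoring the variance that averaging-based estimators truncate.
    Assumes the two samples have equal length.
    """
    order = sorted(range(len(predicted)), key=lambda i: predicted[i])
    ref_sorted = sorted(reference)
    out = [0.0] * len(predicted)
    for rank, idx in enumerate(order):
        out[idx] = ref_sorted[rank]
    return out

# Predictions clustered near the mean are mapped back onto the spread of
# the reference values, keeping their relative order.
print(histogram_match([5.1, 4.9, 5.0], [1.0, 9.0, 5.0]))  # [9.0, 1.0, 5.0]
```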
Single event upset susceptibilities of latchup immune CMOS process programmable gate arrays
NASA Astrophysics Data System (ADS)
Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.; Tsubota, T. K.
Single event upset (SEU) and latchup susceptibilities of complementary metal oxide semiconductor programmable gate arrays (CMOS PPGAs) were measured at the Lawrence Berkeley Laboratory 88-in. cyclotron facility with Xe (603 MeV), Cu (290 MeV), and Ar (180 MeV) ion beams. The PPGA devices tested were those which may be used in space. Most of the SEU measurements were taken with a newly constructed tester called the Bus Access Storage and Comparison System (BASACS), operating via a Macintosh II computer. When BASACS finds that an output does not match a prerecorded pattern, the state of all outputs, the position in the test cycle, and other necessary information are transmitted to and stored in the Macintosh. The upset rate was kept between 1 and 3 per second. After a sufficient number of errors have been stored, the test is stopped and the total particle fluence and total errors are recorded. The device power supply current was closely monitored to check for the occurrence of latchup. Results of the tests are presented, indicating that some of the PPGAs are good candidates for selected space applications.
NASA Astrophysics Data System (ADS)
Pokorny, Petr; Pokorny, Jiri; Chadima, Martin; Hrouda, Frantisek; Studynka, Jan; Vejlupek, Josef
2016-04-01
In addition to standard measurement of in-phase magnetic susceptibility and its anisotropy, the KLY5 Kappabridge is equipped for precise, calibrated measurement of out-of-phase susceptibility and its anisotropy. The phase angle is measured in "absolute" terms, i.e. without any residual phase error. The measured value of the out-of-phase susceptibility is independent of both the magnitude of the complex susceptibility and the intensity of the driving magnetic field. The precise decomposition of the complex susceptibility into in-phase and out-of-phase components is verified using the presumably zero out-of-phase susceptibility of pure gadolinium oxide. Outstanding sensitivity in the measurement of weak samples is achieved by a newly developed drift-compensation routine, in addition to the latest models of electronic devices. In rocks, soils, and environmental materials, in which it is usually due to viscous relaxation, the out-of-phase susceptibility can substitute for the more laborious frequency-dependent susceptibility routinely used in magnetic granulometry. Another new feature is measurement of the anisotropy of out-of-phase magnetic susceptibility (opAMS), which is performed simultaneously and automatically with the standard (in-phase) AMS measurement. The opAMS enables direct determination of the magnetic sub-fabrics of minerals that show non-zero out-of-phase susceptibility, whether due to viscous relaxation (ultrafine grains of magnetite or maghemite), weak-field hysteresis (titanomagnetite, hematite, pyrrhotite), or eddy currents (in conductive minerals). Using the 3D rotator, the instrument measures both the AMS and opAMS with only one insertion of the specimen into the specimen holder. In addition, fully automated measurement of the field variation of the AMS and opAMS is possible.
In conjunction with the CS-4 Furnace and CS-L Cryostat, the instrument can also measure the temperature variation of susceptibility.
Stereotype susceptibility narrows the gender gap in imagined self-rotation performance.
Wraga, Maryjane; Duncan, Lauren; Jacobs, Emily C; Helt, Molly; Church, Jessica
2006-10-01
Three studies examined the impact of stereotype messages on men's and women's performance of a mental rotation task involving imagined self-rotations. Experiment 1 established baseline differences between men and women; women made 12% more errors than did men. Experiment 2 found that exposure to a positive stereotype message enhanced women's performance in comparison with that of another group of women who received neutral information. In Experiment 3, men who were exposed to the same stereotype message emphasizing a female advantage made more errors than did male controls, and the magnitude of error was similar to that for women from Experiment 1. The results suggest that the gender gap in mental rotation performance is partially caused by experiential factors, particularly those induced by sociocultural stereotypes.
Samardzic, Dejan; Thamburaj, Krishnamoorthy
2015-01-01
To report the brain imaging features on magnetic resonance imaging (MRI) in inadvertent intrathecal gadolinium administration. A 67-year-old female with gadolinium encephalopathy from inadvertent high-dose intrathecal gadolinium administration during an epidural steroid injection was studied with multisequence 3T MRI. T1-weighted imaging shows a pseudo-T2 appearance with diffusion of gadolinium into the brain parenchyma, olivary bodies, and membranous labyrinth. Nulling of cerebrospinal fluid (CSF) signal is absent on fluid-attenuated inversion recovery (FLAIR) imaging. Susceptibility-weighted imaging (SWI) demonstrates features similar to subarachnoid hemorrhage. CT may demonstrate a pseudo-cerebral edema pattern given the high attenuation characteristics of gadolinium. Intrathecal gadolinium demonstrates characteristic imaging features on MRI of the brain and may mimic subarachnoid hemorrhage on susceptibility-weighted imaging. Identifying high-dose gadolinium within the CSF spaces on MRI is essential to avoid diagnostic and therapeutic errors. Copyright © 2013 by the American Society of Neuroimaging.
Fuchs, P C; Barry, A L; Thornsberry, C; Gavan, T L; Jones, R N
1983-01-01
Augmentin (Beecham Laboratories, Bristol, Tenn.), a combination drug consisting of two parts amoxicillin to one part clavulanic acid, a potent beta-lactamase inhibitor, was evaluated in vitro in comparison with ampicillin or amoxicillin or both for its inhibitory and bactericidal activities against selected clinical isolates. Regression analysis was performed and tentative disk diffusion susceptibility breakpoints were determined. A multicenter performance study of the disk diffusion test was conducted with three quality control organisms to determine tentative quality control limits. All methicillin-susceptible staphylococci and Haemophilus influenzae isolates were susceptible to Augmentin, although the minimal inhibitory concentrations for beta-lactamase-producing strains of both groups were, on average, fourfold higher than those for enzyme-negative strains. Among the Enterobacteriaceae, Augmentin exhibited significantly greater activity than ampicillin against Klebsiella pneumoniae, Citrobacter diversus, Proteus vulgaris, and about one-third of the Escherichia coli strains tested. Bactericidal activity usually occurred at the minimal inhibitory concentration. There was a slight inoculum concentration effect on the Augmentin minimal inhibitory concentrations. On the basis of regression and error-rate-bounded analyses, the suggested interpretive disk diffusion susceptibility breakpoints for Augmentin are: susceptible, greater than or equal to 18 mm, and resistant, less than or equal to 13 mm (gram-negative bacilli); and susceptible, greater than or equal to 20 mm (staphylococci and H. influenzae). The use of a beta-lactamase-producing organism, such as E. coli Beecham 1532, is recommended for quality assurance of Augmentin susceptibility testing. PMID:6625554
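The tentative breakpoints quoted above translate directly into an interpretation rule (a sketch; the "not susceptible" label for staphylococci/H. influenzae below a 20 mm zone is an assumption, since the abstract gives no resistant cutoff for that group):

```python
def interpret_zone(diameter_mm, organism_group):
    """Apply the tentative Augmentin disk diffusion breakpoints from the
    abstract: for gram-negative bacilli, susceptible >= 18 mm and
    resistant <= 13 mm (intermediate otherwise); for staphylococci and
    H. influenzae, susceptible >= 20 mm (no resistant cutoff is given,
    so anything smaller is labeled 'not susceptible' here).
    """
    if organism_group == "gram-negative":
        if diameter_mm >= 18:
            return "susceptible"
        if diameter_mm <= 13:
            return "resistant"
        return "intermediate"
    if organism_group == "staph/H. influenzae":
        return "susceptible" if diameter_mm >= 20 else "not susceptible"
    raise ValueError("unknown organism group")

print(interpret_zone(19, "gram-negative"))  # susceptible
print(interpret_zone(15, "gram-negative"))  # intermediate
```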
Petrungaro, Paul S; Gonzalez, Santiago; Villegas, Carlos
2018-02-01
As dental implants become more popular for the treatment of partial and total edentulism and of "terminal dentitions," techniques for the management of the atrophic posterior maxillae continue to evolve. Although dental implants carry a high long-term success rate, attention must be given to the growing number of revisions or retreatments of cases with previous dental implant treatment and/or advanced bone replacement procedures. Poor patient compliance, iatrogenic error, or poor quality of the pre-existing alveolar bone and/or soft tissues can leave such cases with large osseous defects, possibly with deficient soft-tissue volume. In the posterior maxillae, which contain the poorest-quality bone in the oral cavity, achieving regeneration of the alveolar bone and adequate soft tissue volume remains a complex procedure. It is made even more difficult by the loss of previously placed dental implants, the aggressive bone reduction required in various implant procedures, and/or residual sinus infections precluding proper closure of the oral wound margins. The purpose of this article is to outline a technique for the total closure of large oro-antral communications, with underlying osseous defects greater than 15 mm in width and 30 mm in length, for which multiple previous attempts at closure had failed, achieving not only reconstruction of adequate volume and quality of soft tissues in the area of the previous fistula, but also total regeneration of the osseous structures in the area of the large void.
NASA Astrophysics Data System (ADS)
Vidic, Nataša. J.; TenPas, Jeff D.; Verosub, Kenneth L.; Singer, Michael J.
2000-08-01
Magnetic susceptibility variations in the Chinese loess/palaeosol sequences have been used extensively for palaeoclimatic interpretations. The magnetic signal of these sequences must be divided into lithogenic and pedogenic components because the palaeoclimatic record is primarily reflected in the pedogenic component. In this paper we compare two methods for separating the pedogenic and lithogenic components of the magnetic susceptibility signal: the citrate-bicarbonate-dithionite (CBD) extraction procedure, and a mixing analysis. Both methods yield good estimates of the pedogenic component, especially for the palaeosols. The CBD procedure underestimates the lithogenic component and overestimates the pedogenic component. The magnitude of this effect is moderately high in loess layers but almost negligible in palaeosols. The mixing model overestimates the lithogenic component and underestimates the pedogenic component. Both methods can be adjusted to yield better estimates of both components. The lithogenic susceptibility, as determined by either method, suggests that palaeoclimatic interpretations based only on total susceptibility will be in error and that a single estimate of the average lithogenic susceptibility is not an accurate basis for adjusting the total susceptibility. A long-term decline in lithogenic susceptibility with depth in the section suggests more intense or prolonged periods of weathering associated with the formation of the older palaeosols. The CBD procedure provides the most comprehensive information on the magnitude of the components and magnetic mineralogy of loess and palaeosols. However, the mixing analysis provides a sensitive, rapid, and easily applied alternative to the CBD procedure. A combination of the two approaches provides the most powerful and perhaps the most accurate way of separating the magnetic susceptibility components.
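The mixing analysis mentioned above can be illustrated with a two-end-member linear unmixing sketch (the end-member susceptibility values below are hypothetical, chosen only to show the arithmetic):

```python
def mixing_fraction(total, end_lith, end_ped):
    """Two-end-member linear mixing sketch.

    Solve total = f * end_ped + (1 - f) * end_lith for the pedogenic
    mixing fraction f, then split the measured total susceptibility into
    its lithogenic and pedogenic components.
    """
    f = (total - end_lith) / (end_ped - end_lith)
    return f, (1 - f) * end_lith, f * end_ped

# Hypothetical values (arbitrary susceptibility units): a sample midway
# between a lithogenic end member (40) and a pedogenic end member (200).
f, lith, ped = mixing_fraction(total=120.0, end_lith=40.0, end_ped=200.0)
print(f, lith, ped)  # 0.5 20.0 100.0
```

The abstract's point that a single average lithogenic value is inadequate corresponds to end_lith varying with depth rather than being a constant.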
Propagation of landslide inventory errors on data driven landslide susceptibility models
NASA Astrophysics Data System (ADS)
Henriques, C. S.; Zezere, J. L.; Neves, M.; Garcia, R. A. C.; Oliveira, S. C.; Piedade, A.
2009-04-01
Research on landslide susceptibility assessment developed recently worldwide has shown that the quality and reliability of modelling results are more sensitive to the quality and consistency of the cartographic database than to the statistical tools used in the modelling process. The quality of the landslide inventory is of particularly crucial importance, because the data-driven models used for landslide susceptibility evaluation are based on the spatial correlation between past landslide occurrences and a data set of thematic layers representing independent landslide predisposing factors. Uncertainty within landslide inventorying may be very high and is usually related to: (i) the geological and geomorphological complexity of the study area; (ii) the dominant land use and the rhythm and magnitude of land use change; (iii) the conservation level of landslide evidence (e.g., topography, vegetation, drainage), both in the field and in aerial photographs; and (iv) the experience of the geomorphologist(s) who build the landslide inventory. Traditionally, landslide inventories have been made through aerial-photo interpretation and field surveying using standard geomorphological techniques. More recently, the interpretation of detailed geo-referenced digital orthophotomaps (pixel = 0.5 m), combined with accurate topography, has become an additional analytical tool for landslide identification at the regional scale. The present study was performed in a test site (256 km2) within Caldas da Rainha County, located in the central part of Portugal. Detailed geo-referenced digital orthophotomaps obtained in 2004 were used to build three different landslide inventories. Landslide inventory #1 was constructed by a single regularly trained geomorphologist using photo-interpretation; 408 probable slope movements were identified and geo-referenced by a point marked in the central part of the probable landslide rupture zone.
Landslide inventory #2 was obtained through the examination of landslide inventory #1 by a senior geomorphologist. This second phase of photographic and morphologic interpretation (pre-validation) allowed the selection of 204 probable slope movements from the first landslide inventory. Landslide inventory #3 was obtained by field verification of the total set of probable landslide zones (408 points), performed by 6 geomorphologists. This inventory has 193 validated slope movements and includes 101 "new landslides" that had not been recognized by orthophotomap interpretation. Additionally, the field work enabled the cartographic delimitation of the slope movement depletion and accumulation zones, and the definition of landslide type. Landslide susceptibility was assessed using the three landslide inventories with a single predictive model (logistic regression) and the same set of landslide predisposing factors to allow comparison of results. The uncertainty associated with landslide inventory errors and its propagation into landslide susceptibility results are evaluated and compared by the computation of success-rate and prediction-rate curves. The error derived from landslide inventorying is quantified by assessing the degree of overlap of the susceptible areas obtained from the different prediction models.
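The success-rate curves used to compare models built from the three inventories can be sketched generically (illustrative helper; the data values are hypothetical): terrain cells are ranked from most to least susceptible, and the curve accumulates the fraction of observed landslide cells captured as the classified area grows.

```python
def success_rate_curve(susceptibility, landslide_mask):
    """Compute a success-rate curve.

    susceptibility: model score per terrain cell; landslide_mask: 1 if the
    cell contains an observed landslide, else 0. curve[k] is the fraction
    of landslide cells captured by the k+1 most susceptible cells; steeper
    curves indicate a better fit, and comparing curves across inventories
    quantifies how inventory errors propagate into the model.
    """
    ranked = sorted(zip(susceptibility, landslide_mask), reverse=True)
    total = sum(landslide_mask)
    curve, captured = [], 0
    for _, is_landslide in ranked:
        captured += is_landslide
        curve.append(captured / total)
    return curve

print(success_rate_curve([0.9, 0.2, 0.7, 0.4], [1, 0, 1, 0]))  # [0.5, 1.0, 1.0, 1.0]
```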
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the city of Patiala, India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of coefficient of determination, mean square error and accuracy. 10-fold cross validation is performed to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data.
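The 10-fold cross-validation used to check the stability of the Random Forest model partitions the data into ten folds, training on nine and testing on the held-out fold in turn. A generic sketch of the fold construction (not the authors' code; names are assumed):

```python
def k_fold_splits(n, k=10):
    """Yield (train, test) index lists for k-fold cross-validation.

    The last fold absorbs any remainder when n is not divisible by k.
    """
    idx = list(range(n))
    fold = n // k
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold] if i < k - 1 else idx[(k - 1) * fold:]
        test_set = set(test)
        train = [j for j in idx if j not in test_set]
        yield train, test
```

Each observation appears in exactly one test fold, so the k per-fold scores give an estimate of how stable the model's accuracy is across resamplings of the data.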
Cai, Jian-Hua
2017-09-01
To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition (EMD) and the wavelet soft-threshold method. The principle and steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated from the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method alone. Experimental results show that the proposed combined method effectively completes pretreatment of the NIR spectrum and improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.
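The wavelet soft-threshold step of such a pretreatment shrinks detail coefficients toward zero and discards those below a noise-dependent threshold. A minimal sketch of the thresholding rule (the EMD stage and the wavelet transform itself are omitted; Donoho's universal threshold is shown as one common choice, not necessarily the one used in the paper):

```python
import math

def soft_threshold(coeffs, t):
    """Soft-thresholding: shrink each coefficient toward zero by t,
    zeroing those whose magnitude is below t."""
    return [math.copysign(abs(c) - t, c) if abs(c) > t else 0.0 for c in coeffs]

def universal_threshold(coeffs):
    """Donoho's universal threshold, sigma * sqrt(2 ln N), with the noise
    level sigma estimated from the median absolute coefficient."""
    med = sorted(abs(c) for c in coeffs)[len(coeffs) // 2]
    return (med / 0.6745) * math.sqrt(2.0 * math.log(len(coeffs)))
```

Unlike hard thresholding, the soft rule shifts surviving coefficients by t as well, which avoids discontinuities in the reconstructed spectrum.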
Coeuret, Valérie; Gueguen, Micheline; Vernoux, Jean Paul
2004-11-01
The aim was to select potentially probiotic lactobacilli from 88 strains isolated from unpasteurized milk and cheese products, and to incorporate these bacteria in a viable state into a soft cheese without changing its quality. The survival of these bacteria was assessed in acidic and bile conditions, after freezing at -80 degrees C. Four strains from unpasteurized Camembert, two Lactobacillus plantarum strains and two Lb. paracasei/casei strains, were identified and typed by PCR and PFGE and were found to display potentially probiotic characteristics in addition to resistance to low pH and bile. These characteristics were resistance to lysozyme, adhesion to CACO-2 cells, antimicrobial effects against common foodborne pathogens (Listeria monocytogenes, Staphylococcus aureus, Salmonella spp., Escherichia coli), innocuity following the ingestion of high doses by mice, and appropriate antibiotic susceptibility profiles. The potential of Lb. plantarum strain UCMA 3037 for incorporation into a soft cheese (Pont-l'Eveque registered designation of origin (RDO)) was investigated. This strain grew well and survived in sufficient numbers (more than 10^7 cfu/g throughout the shelf-life of the product) in the cheese. This strain did not change the quality score of the product until the best-before date (75 days after manufacture). Thus, unpasteurized Camembert is a natural source of potentially probiotic lactobacilli, which could be used as an additive in the development of potentially probiotic soft cheeses. Further work is required to demonstrate the persistence and efficacy of these strains in the human host upon ingestion.
Kackar, Siddharth; Suman, Ethel; Kotian, M Shashidhar
2017-01-01
Microbial biofilm formation on contact lenses and lens storage cases may be a risk factor for contact lens-associated corneal infections. Various types of contact lens care solutions are used to reduce microbial growth on lenses. The present study aimed at comparing the growth of biofilms on different contact lenses and lens cases. The study also aimed at determining the effect of lens care solutions and bacteriophage on these biofilms. One type of hard lens and two types of soft lenses were used for the study. The organisms used were Staphylococcus aureus ATCC 25923, Pseudomonas aeruginosa ATCC 27853, Candida albicans ATCC 60193 and Escherichia coli ATCC 25922. Biofilm production was performed by the modified O'Toole and Kolter method, and the effect of lens cleaning solutions and a crude coliphage on biofilms was also studied. Results were visualised using scanning electron microscopy and quantitated by the colony counting method and spectrophotometric measurement of optical density (OD). Statistical analysis was done with SPSS 11.5 using the Kruskal-Wallis test and Chi-square test. Soft lens cleaning solutions had a significant inhibitory effect (P = 0.020) on biofilm formation on soft lenses and also lens cases (P < 0.001). Soft lens cleaning solution 2 was more efficient than solution 1. However, no such inhibitory effect was observed for the hard lens cleaning solution, apart from a significant reduction in the OD values (P < 0.001). There was no significant inhibitory effect by bacteriophages. This study showed the importance of selecting the appropriate lens cleaning solution to prevent biofilm production on contact lenses.
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position, as well as the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI 316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented. The performance of the proposed estimator was confirmed with the simulation results. The SVR results were then compared with artificial neural networks and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR compared to the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global and multiobjective optimization rather than relying on a trial-and-error choice of starting point and the combination of multiple criteria into a single criterion.
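At the core of SVR is the epsilon-insensitive loss, which ignores residuals smaller than epsilon and penalizes larger ones linearly; this is what distinguishes it from least-squares estimators. A minimal illustration of the loss itself (not the authors' implementation; names and the value of eps are assumed):

```python
def epsilon_insensitive_loss(y_true, y_pred, eps=0.1):
    """Sum of SVR's epsilon-insensitive loss: residuals within eps cost
    nothing; larger residuals are penalized linearly."""
    return sum(max(0.0, abs(t - p) - eps) for t, p in zip(y_true, y_pred))
```

Minimizing this loss (plus a regularization term on the model weights) yields the flat, sparse regressors that make SVR attractive for noisy, non-linear cost estimation tasks like this one.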
Speech outcome after early repair of cleft soft palate using Furlow technique.
Abdel-Aziz, Mosaad
2013-01-01
The earlier a palatal cleft is closed, the better the speech outcome and the fewer the compensatory articulation errors; however, dissection on the hard palate may interfere with facial growth. In Furlow palatoplasty, dissection on the hard palate is not needed and surgery is usually limited to the soft palate, so the technique has no deleterious effect on facial growth. The aim of this study was to assess the efficacy of the Furlow palatoplasty technique on the speech of young infants with cleft soft palate. Twenty-one infants with cleft soft palate were included in this study; their ages ranged from 3 to 6 months. Their clefts were repaired using the Furlow technique. The patients were followed up for at least 4 years; at the end of the follow-up period they were subjected to flexible nasopharyngoscopy to assess velopharyngeal closure and to speech analysis using auditory perceptual assessment. Eighteen cases (85.7%) showed complete velopharyngeal closure, 1 case (4.8%) showed borderline competence, and 2 cases (9.5%) showed borderline incompetence. Normal resonance was attained in 18 patients (85.7%), and mild hypernasality in 3 patients (14.3%); no patients demonstrated nasal emission of air. Speech therapy was beneficial for cases with residual hypernasality; no cases needed secondary corrective surgery. Furlow palatoplasty at a younger age has a favorable speech outcome with no detectable morbidity.
Karuppanan, Udayakumar; Unni, Sujatha Narayanan; Angarai, Ganesan R
2017-01-01
Assessment of mechanical properties of soft matter is a challenging task in a purely noninvasive and noncontact environment. As tissue mechanical properties play a vital role in determining tissue health status, such noninvasive methods offer great potential in framing large-scale medical screening strategies. The digital speckle pattern interferometry (DSPI)-based image capture and analysis system described here is capable of extracting the deformation information from a single acquired fringe pattern. Such a method of analysis is required given the highly dynamic nature of speckle patterns derived from soft tissues under mechanical compression. Soft phantoms mimicking breast tissue optical and mechanical properties were fabricated and tested in the DSPI out-of-plane configuration setup. A Hilbert transform (HT)-based image analysis algorithm was developed to extract the phase, and the corresponding deformation of the sample, from a single acquired fringe pattern. The experimental fringe contours were found to correlate with numerically simulated deformation patterns of the sample using Abaqus finite element analysis software. The deformation extracted from the experimental fringe pattern using the HT-based algorithm is compared with the deformation value obtained using numerical simulation under similar loading conditions, and the results correlate with an average error of 10%. The proposed method is applied to breast phantoms fabricated with an included subsurface anomaly mimicking cancerous tissue, and the results are analyzed.
Wu, Jinpeng; Sallis, Shawn; Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; Dai, Kehua; Guo, Zixuan; Yang, Wanli
2018-04-17
Energy storage has increasingly become a limiting factor in today's sustainable energy applications, including electric vehicles and a green electric grid based on volatile solar and wind sources. The pressing demand for high-performance electrochemical energy storage solutions, i.e., batteries, relies on both fundamental understanding and practical developments from academia and industry. The formidable challenge of developing successful battery technology stems from the different requirements of different energy-storage applications. Energy density, power, stability, safety, and cost all have to be balanced in batteries to meet the requirements of different applications. Therefore, multiple battery technologies based on different materials and mechanisms need to be developed and optimized. Incisive tools that can directly probe the chemical reactions in various battery materials are becoming critical to advance the field beyond its conventional trial-and-error approach. Here, we present detailed protocols for soft X-ray absorption spectroscopy (sXAS), soft X-ray emission spectroscopy (sXES), and resonant inelastic X-ray scattering (RIXS) experiments, which are inherently element-sensitive probes of the transition-metal 3d and anion 2p states in battery compounds. We provide details on the experimental techniques and demonstrations revealing the key chemical states in battery materials through these soft X-ray spectroscopy techniques.
Bobenchik, April M.; Hindler, Janet A.; Giltner, Carmen L.; Saeki, Sandra
2014-01-01
Vitek 2 (bioMérieux, Inc., Durham, NC) is a widely used commercial antimicrobial susceptibility testing system. We compared MIC results obtained by Vitek 2 to those obtained by the Clinical and Laboratory Standards Institute (CLSI) broth microdilution (BMD) reference method for 134 staphylococcal and 84 enterococcal clinical isolates. Nineteen agents were evaluated, including all those available on Vitek 2 for testing staphylococci and enterococci. The resistance phenotypes tested included methicillin-resistant Staphylococcus aureus (MRSA) (n = 58), S. aureus with inducible clindamycin resistance (ICR) (n = 30), trimethoprim-sulfamethoxazole-resistant MRSA (n = 10), vancomycin-resistant Enterococcus (n = 37), high-level gentamicin-resistant Enterococcus (n = 15), linezolid-resistant Enterococcus (n = 5), and daptomycin-nonsusceptible Enterococcus faecalis (n = 6). For the staphylococci, there was 98.9% categorical agreement (CA). There was one very major error (VME) for gentamicin in a Staphylococcus hominis isolate, six VMEs for inducible clindamycin in S. aureus isolates, and two major errors (ME) for daptomycin in an S. aureus and a Staphylococcus epidermidis isolate. For enterococci, there was 97.3% CA. Two VMEs were observed for daptomycin in isolates of E. faecalis and 2 ME, 1 for high-level gentamicin resistance and 1 for nitrofurantoin, in E. faecium isolates. Overall, there was 98.3% CA and 99% essential agreement for the testing of staphylococci and enterococci by the Vitek 2. With the exception of detecting ICR in S. aureus, Vitek 2 performed reliably for antimicrobial susceptibility testing of staphylococci and enterococci. PMID:24478467
Elkady, Ahmed M; Sun, Hongfu; Wilman, Alan H
2016-05-01
Quantitative Susceptibility Mapping (QSM) is an emerging area of brain research with clear application to brain iron studies in deep gray matter. However, acquisition of standard whole-brain QSM can be time-consuming. One means to reduce scan time is to use a focal acquisition restricted to the regions of interest, such as deep gray matter. However, the non-local dipole field necessary for QSM reconstruction extends far beyond the structure of interest. We demonstrate the practical implications of these non-local fields for the choice of brain volume for QSM. In an illustrative numerical simulation and then in human brain experiments, we examine the effect on QSM of volume reduction in each dimension. For the globus pallidus, as an example of iron-rich deep gray matter, we demonstrate that substantial errors can arise even when the field-of-view far exceeds the physical structural boundaries. Thus, QSM reconstruction requires a non-local field-of-view prescription to ensure minimal errors. An axial QSM acquisition centered on the globus pallidus should encompass at least 76 mm in the superior-inferior direction to conserve susceptibility values from the globus pallidus. This dimension exceeds the physical coronal extent of this structure by at least five-fold. As QSM sees wider use in the neuroscience community, its unique requirement for an extended field-of-view needs to be considered.
Low signal-to-noise FDEM in-phase data: Practical potential for magnetic susceptibility modelling
NASA Astrophysics Data System (ADS)
Delefortrie, Samuël; Hanssens, Daan; De Smedt, Philippe
2018-05-01
In this paper, we consider the use of land-based frequency-domain electromagnetics (FDEM) for magnetic susceptibility modelling. FDEM data comprise both out-of-phase and in-phase components, which can be related to the electrical conductivity and magnetic susceptibility of the subsurface. Though applying the FDEM method to obtain information on subsurface conductivity is well established in various domains (e.g. through the low induction number approximation of subsurface apparent conductivity), the potential for susceptibility mapping is often overlooked. Especially given a subsurface with a low magnetite and maghemite content (e.g. most sedimentary environments), it is generally assumed that susceptibility is negligible. Nonetheless, the heterogeneity of the near surface and the impact of anthropogenic disturbances on the soil can cause sufficient variation in susceptibility for it to be detectable in a repeatable way. Unfortunately, it can be challenging to study the potential for susceptibility mapping due to systematic errors, an often low signal-to-noise ratio, and the intricacy of correlating in-phase responses with subsurface susceptibility and conductivity. Alongside use of an accurate forward model accounting for out-of-phase/in-phase coupling, any attempt at relating the in-phase response to subsurface susceptibility requires overcoming instrument-specific limitations that burden the real-world application of FDEM susceptibility mapping. Firstly, the often erratic and drift-sensitive nature of in-phase responses calls for relative data levelling. In addition, a correction for absolute levelling offsets may be equally necessary: ancillary (subsurface) susceptibility data can be used to assess the importance of absolute in-phase calibration, though this requires accurate in-situ data.
To assess both the importance of in-phase calibration and the potential of FDEM data for susceptibility modelling, we consider an experimental test case in which the in-phase responses of a multi-receiver FDEM instrument are calibrated using downhole susceptibility data. Our results show that, while it is possible to derive approximate susceptibility profiles from FDEM data, robust quantitative analysis hinges on appropriate calibration of the responses.
Intraoperative Radiation Therapy: Characterization and Application
1989-03-01
Carcinomas of the pancreas, stomach, colon, and rectum, and sarcomas of soft tissue are prime candidates for IORT (2:131).
Quantitative evaluation of phase processing approaches in susceptibility weighted imaging
NASA Astrophysics Data System (ADS)
Li, Ningzhi; Wang, Wen-Tung; Sati, Pascal; Pham, Dzung L.; Butman, John A.
2012-03-01
Susceptibility weighted imaging (SWI) takes advantage of the local variation in susceptibility between different tissues to enable highly detailed visualization of the cerebral venous system and sensitive detection of intracranial hemorrhages. Thus, it has been increasingly used in magnetic resonance imaging studies of traumatic brain injury as well as other intracranial pathologies. In SWI, magnitude information is combined with phase information to enhance the susceptibility-induced image contrast. Because of global susceptibility variations, the rate of phase accumulation varies widely across the image, resulting in phase wrapping artifacts that interfere with the local assessment of phase variation. Homodyne filtering is a common approach to eliminate this global phase variation. However, the filter size requires careful selection in order to preserve image contrast and avoid errors resulting from residual phase wraps. An alternative approach is to apply phase unwrapping prior to high-pass filtering. A suitable phase unwrapping algorithm guarantees no residual phase wraps but requires additional computational steps. In this work, we quantitatively evaluate these two phase processing approaches on both simulated and real data using different filters and cutoff frequencies. Our analysis leads to an improved understanding of the relationship between phase wraps, susceptibility effects, and acquisition parameters. Although homodyne filtering approaches are faster and more straightforward, phase unwrapping approaches perform more accurately in a wider variety of acquisition scenarios.
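The phase-unwrapping step evaluated here can be illustrated in one dimension: whenever the difference between neighbouring phase samples exceeds pi, a multiple of 2*pi is added or subtracted to restore continuity. A toy sketch of that rule (real SWI processing uses 2-D or 3-D unwrapping algorithms, which are considerably more involved):

```python
import math

def unwrap(phases):
    """1-D phase unwrapping: when the jump between neighbouring samples
    exceeds pi, shift all following samples by a multiple of 2*pi."""
    out, offset = [phases[0]], 0.0
    for prev, cur in zip(phases, phases[1:]):
        d = cur - prev
        if d > math.pi:
            offset -= 2.0 * math.pi
        elif d < -math.pi:
            offset += 2.0 * math.pi
        out.append(cur + offset)
    return out
```

Applied to a phase ramp wrapped into (-pi, pi], this recovers the original linear accumulation, which is the property that lets high-pass filtering then isolate the local susceptibility contrast.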
Near-Global Survey of Cloud Column Susceptibilities Using ISCCP Data
NASA Technical Reports Server (NTRS)
Han, Qingyuan; Rossow, William B.; Chou, Joyce; Welch, Ronald M.; Hansen, James E. (Technical Monitor)
2000-01-01
A new parameter, cloud column susceptibility, is introduced to study the aerosol indirect effect. There are several advantages of this new parameter in comparison with the traditional cloud susceptibility. First, no assumptions about constant liquid water content and cloud layer thickness are required in the calculations, so errors caused by these assumptions can be avoided. Second, no a priori knowledge of liquid water content is necessary in remote sensing, which makes a global survey by satellite data possible even though liquid water content may change significantly. Third, this new parameter can deal with variations of cloud geometrical thickness during cloud-aerosol interactions. Without assuming how cloud droplet size will respond to changes of number concentration, this new parameter describes the aerosol indirect effect more directly. It addresses the question of how cloud albedo changes with increasing column number concentrations of cloud droplets, which result from cloud-aerosol interactions. In this study, two approaches are used to retrieve cloud column susceptibility from satellite data. The results of both approaches show a striking contrast in cloud column susceptibilities between continental and maritime clouds. Between the two approaches, the one that uses no assumption of constant liquid water content leads to smaller, sometimes even negative, cloud column susceptibilities. This finding suggests that the aerosol indirect effect may be overestimated if the assumption of constant liquid water content is used in model studies.
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2015-11-05
Soft tissue artifact (STA) distorts marker-based knee kinematics measures and makes them difficult to use in clinical practice. None of the current methods designed to compensate for STA is fully suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS(®)). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, internal-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology.
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, well within the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue.
Drug error in paediatric anaesthesia: current status and where to go now.
Anderson, Brian J
2018-06-01
Medication errors in paediatric anaesthesia and the perioperative setting continue to occur despite widespread recognition of the problem and published advice for reduction of this predicament at international, national, local and individual levels. Current literature was reviewed to ascertain drug error rates and to appraise causes and proposed solutions to reduce these errors. The medication error incidence remains high. There is documentation of reduction through identification of causes with consequent education and application of safety analytics and quality improvement programs in anaesthesia departments. Children remain at higher risk than adults because of additional complexities such as drug dose calculations, increased susceptibility to some adverse effects and changes associated with growth and maturation. Major improvements are best made through institutional system changes rather than a commitment to do better on the part of each practitioner. Medication errors in paediatric anaesthesia represent an important risk to children and most are avoidable. There is now an understanding of the genesis of adverse drug events and this understanding should facilitate the implementation of known effective countermeasures. An institution-wide commitment and strategy are the basis for a worthwhile and sustained improvement in medication safety.
Huff, Mark J; Umanath, Sharda
2018-06-01
In 2 experiments, we assessed age-related suggestibility to additive and contradictory misinformation (i.e., remembering of false details from an external source). After reading a fictional story, participants answered questions containing misleading details that were either additive (misleading details that supplemented an original event) or contradictory (errors that changed original details). On a final test, suggestibility was greater for additive than contradictory misinformation, and older adults endorsed fewer false contradictory details than younger adults. To mitigate suggestibility in Experiment 2, participants were warned about potential errors, instructed to detect errors, or instructed to detect errors after exposure to examples of additive and contradictory details. Again, suggestibility to additive misinformation was greater than to contradictory misinformation, and older adults endorsed less contradictory misinformation. Only after detection instructions with misinformation examples were younger adults able to reduce contradictory misinformation effects, and they reduced these effects to the level of older adults. Additive misinformation, however, was immune to all warning and detection instructions. Thus, older adults were less susceptible to contradictory misinformation errors, and younger adults could match this misinformation rate when warning/detection instructions were strong.
Improved astigmatic focus error detection method
NASA Technical Reports Server (NTRS)
Bernacki, Bruce E.
1992-01-01
All easy-to-implement focus- and track-error detection methods presently used in magneto-optical (MO) disk drives using pre-grooved media suffer from a side effect known as feedthrough. Feedthrough is the unwanted focus error signal (FES) produced when the optical head is seeking a new track, and light refracted from the pre-grooved disk produces an erroneous FES. Some focus and track-error detection methods are more resistant to feedthrough, but tend to be complicated and/or difficult to keep in alignment as a result of environmental insults. The astigmatic focus/push-pull tracking method is an elegant, easy-to-align focus- and track-error detection method. Unfortunately, it is also highly susceptible to feedthrough when astigmatism is present, with the worst effects caused by astigmatism oriented such that the tangential and sagittal foci are at 45 deg to the track direction. This disclosure outlines a method to nearly completely eliminate the worst-case form of feedthrough due to astigmatism oriented 45 deg to the track direction. Feedthrough due to other primary aberrations is not improved, but performance is identical to the unimproved astigmatic method.
NASA Technical Reports Server (NTRS)
Belcastro, Celeste M.; Fischl, Robert; Kam, Moshe
1992-01-01
This paper presents a strategy for dynamically monitoring digital controllers in the laboratory for susceptibility to electromagnetic disturbances that compromise control integrity. The integrity of digital control systems operating in harsh electromagnetic environments can be compromised by upsets caused by induced transient electrical signals. Digital system upset is a functional error mode that involves no component damage, can occur simultaneously in all channels of a redundant control computer, and is software dependent. The motivation for this work is the need to develop tools and techniques that can be used in the laboratory to validate and/or certify critical aircraft controllers operating in electromagnetically adverse environments that result from lightning, high-intensity radiated fields (HIRF), and nuclear electromagnetic pulses (NEMP). The detection strategy presented in this paper provides dynamic monitoring of a given control computer for degraded functional integrity resulting from redundancy management errors, control calculation errors, and control correctness/effectiveness errors. In particular, this paper discusses the use of Kalman filtering, data fusion, and statistical decision theory in monitoring a given digital controller for control calculation errors.
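The residual-monitoring idea can be illustrated with a scalar Kalman filter: the innovation (the gap between a measurement and its prediction) is compared against its expected standard deviation, and samples exceeding a gate are flagged as possible upset-induced errors. This is a toy sketch of the general technique, not the monitoring system described in the paper; the noise parameters and gate value are assumed:

```python
def kalman_monitor(measurements, q=1e-4, r=0.01, gate=3.0):
    """Scalar Kalman filter tracking a slowly varying value.

    q: process noise variance, r: measurement noise variance.
    Returns one flag per measurement after the first: True when the
    innovation exceeds gate standard deviations (a candidate upset).
    """
    x, p = measurements[0], 1.0   # state estimate and its variance
    flags = []
    for z in measurements[1:]:
        p += q                    # predict: variance grows by process noise
        s = p + r                 # innovation variance
        nu = z - x                # innovation (measurement residual)
        flags.append(abs(nu) > gate * s ** 0.5)
        k = p / s                 # Kalman gain
        x += k * nu               # measurement update
        p *= (1.0 - k)
    return flags
```

A transient spike far outside the filter's innovation statistics is flagged, which is the basic mechanism behind detecting control calculation errors from otherwise nominal signal traces.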
Adult age differences in unconscious transference: source confusion or identity blending?
Perfect, Timothy J; Harris, Lucy J
2003-06-01
Eyewitnesses are known often to falsely identify a familiar but innocent bystander when asked to pick out a perpetrator from a lineup. Such unconscious transference errors have been attributed to either identity confusions at encoding or source retrieval errors. Three experiments contrasted younger and older adults in their susceptibility to such misidentifications. Participants saw photographs of perpetrators, then a series of mug shots of innocent bystanders. A week later, they saw lineups containing bystanders (and others containing perpetrators in Experiment 3) and were asked whether any of the perpetrators were present. When younger faces were used as stimuli (Experiments 1 and 3), older adults showed higher rates of transference errors. When older faces were used as stimuli (Experiments 2 and 3), no such age effects in rates of unconscious transference were apparent. In addition, older adults in Experiment 3 showed an own-age bias effect for correct identification of targets. Unconscious transference errors were found to be due to both source retrieval errors and identity confusions, but age-related increases were found only in the latter.
Error Correcting Optical Mapping Data.
Mukherjee, Kingshuk; Washimkar, Darshan; Muggli, Martin D; Salmela, Leena; Boucher, Christina
2018-05-26
Optical mapping is a unique system that is capable of producing high-resolution, high-throughput genomic map data that gives information about the structure of a genome [21]. Recently it has been used for scaffolding contigs and for assembly validation in large-scale sequencing projects, including the maize [32], goat [6], and Amborella [4] genomes. However, a major impediment to the use of this data is the variety and quantity of errors in the raw optical mapping data, which are called Rmaps. The challenges associated with using Rmap data are analogous to dealing with insertions and deletions in the alignment of long reads. Moreover, they are arguably harder to tackle since the data is numerical and susceptible to inaccuracy. We develop cOMET to error correct Rmap data, which, to the best of our knowledge, is the only optical mapping error correction method. Our experimental results demonstrate that cOMET has high precision and corrects 82.49% of insertion errors and 77.38% of deletion errors in Rmap data generated from the E. coli K-12 reference genome. Of the deletion errors corrected, 98.26% are true errors. Similarly, of the insertion errors corrected, 82.19% are true errors. cOMET also successfully scales to large genomes, improving the quality of 78% and 99% of the Rmaps in the plum and goat genomes, respectively. Lastly, we show the utility of error correction by demonstrating how it improves the assembly of Rmap data. Error-corrected Rmap data results in an assembly that is more contiguous and covers a larger fraction of the genome.
Song, Tianxiao; Wang, Xueyun; Liang, Wenwei; Xing, Li
2018-05-14
Benefiting from its frame structure, a rotational inertial navigation system (RINS) can improve navigation accuracy by modulating the inertial sensor errors with a proper rotation scheme. In the traditional motor control method, the measurements of the photoelectric encoder are adopted to drive the inertial measurement unit (IMU) to rotate. However, when the carrier performs heading motion, the inertial sensor errors may no longer be zero-mean in the navigation frame. Meanwhile, some high-speed carriers such as aircraft need to roll a certain angle to balance the centrifugal force during heading motion, which may result in non-negligible coupling errors caused by fiber optic gyroscope (FOG) installation errors and scale factor errors. Moreover, the error parameters of the FOG are susceptible to temperature and magnetic field, and pre-calibration is a time-consuming process that cannot completely suppress the FOG-related errors. In this paper, an improved motor control method using the measurements of the FOG is proposed to address these problems, with which the outer frame can insulate the carrier's roll motion while the inner frame simultaneously achieves rotary modulation on the basis of insulating the heading motion. The results of turntable experiments indicate that the navigation performance of the dual-axis RINS is significantly improved over the traditional method and is maintained even with large FOG installation errors and scale factor errors, proving that the proposed method can relax the requirements on the accuracy of the FOG-related error parameters.
On the error propagation of semi-Lagrange and Fourier methods for advection problems
Einkemmer, Lukas; Ostermann, Alexander
2015-01-01
In this paper we study the error propagation of numerical schemes for the advection equation in the case where high precision is desired. The numerical methods considered are based on the fast Fourier transform, polynomial interpolation (semi-Lagrangian methods using a Lagrange or spline interpolation), and a discontinuous Galerkin semi-Lagrangian approach (which is conservative and has to store more than a single value per cell). We demonstrate, by carrying out numerical experiments, that the worst case error estimates given in the literature provide a good explanation for the error propagation of the interpolation-based semi-Lagrangian methods. For the discontinuous Galerkin semi-Lagrangian method, however, we find that the characteristic property of semi-Lagrangian error estimates (namely the fact that the error increases proportionally to the number of time steps) is not observed. We provide an explanation for this behavior and conduct numerical simulations that corroborate the different qualitative features of the error in the two respective types of semi-Lagrangian methods. The method based on the fast Fourier transform is exact but, due to round-off errors, susceptible to a linear increase of the error in the number of time steps. We show how to modify the Cooley–Tukey algorithm in order to obtain an error growth that is proportional to the square root of the number of time steps. Finally, we show, for a simple model, that our conclusions hold true if the advection solver is used as part of a splitting scheme. PMID:25844018
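The round-off behavior of the Fourier approach can be illustrated with a toy advection solver. The sketch below uses a plain recursive Cooley-Tukey FFT and a sinusoidal initial condition; it compares the per-step phase-shift propagation discussed in the abstract against a single combined shift, but it does not implement the paper's modified Cooley-Tukey algorithm, and the grid size, advection speed and step count are arbitrary assumptions.

```python
import cmath, math

def fft(x, inverse=False):
    """Recursive radix-2 Cooley-Tukey FFT (length must be a power of two)."""
    n = len(x)
    if n == 1:
        return list(x)
    sign = 1.0 if inverse else -1.0
    even, odd = fft(x[0::2], inverse), fft(x[1::2], inverse)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * math.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def ifft(x):
    return [v / len(x) for v in fft(x, inverse=True)]

n, speed, dt, steps = 64, 1.0, 0.01, 500
xs = [2.0 * math.pi * j / n for j in range(n)]
U = fft([complex(math.sin(x)) for x in xs])
wavenum = [k if k <= n // 2 else k - n for k in range(n)]

# (a) one phase multiplication per time step: round-off accumulates over steps
U_step = list(U)
for _ in range(steps):
    U_step = [u * cmath.exp(-1j * kap * speed * dt)
              for u, kap in zip(U_step, wavenum)]
u_step = [v.real for v in ifft(U_step)]

# (b) a single combined phase shift covering the whole time interval
U_once = [u * cmath.exp(-1j * kap * speed * dt * steps)
          for u, kap in zip(U, wavenum)]
u_once = [v.real for v in ifft(U_once)]

exact = [math.sin(x - speed * dt * steps) for x in xs]
err_step = max(abs(p - q) for p, q in zip(u_step, exact))
err_once = max(abs(p - q) for p, q in zip(u_once, exact))
```

Both variants are exact up to round-off for this smooth initial condition; the repeated-multiplication variant (a) is the one whose round-off error grows with the number of time steps.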
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic ray induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma induced noise in CCD's, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry is considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.
Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce
2018-06-15
A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least-norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging and venography of full brain. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
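The Tikhonov-regularized least-norm principle behind LN-QSM can be shown in miniature: x = (AᵀA + λI)⁻¹Aᵀb shrinks the solution of an ill-posed system toward the minimum-norm candidate. The operator below is an arbitrary nearly rank-deficient toy matrix, not a dipole kernel, and the regularization weight is an assumed value rather than one chosen by the L-curve criterion.

```python
def tikhonov_2d(A, b, lam):
    """Tikhonov solution x = (A^T A + lam I)^-1 A^T b for a system
    with two unknowns, using a closed-form 2x2 inverse."""
    m00 = sum(r[0] * r[0] for r in A) + lam   # normal-equation matrix
    m01 = sum(r[0] * r[1] for r in A)
    m11 = sum(r[1] * r[1] for r in A) + lam
    t0 = sum(r[0] * y for r, y in zip(A, b))  # A^T b
    t1 = sum(r[1] * y for r, y in zip(A, b))
    det = m00 * m11 - m01 * m01
    return ((m11 * t0 - m01 * t1) / det, (m00 * t1 - m01 * t0) / det)

# nearly rank-deficient toy operator: second column almost equals the first
A = [[1.0, 1.0], [1.0, 1.0001], [1.0, 0.9999]]
b = [2.0, 2.0, 2.0]

x_ls = tikhonov_2d(A, b, lam=0.0)    # unregularized: ill-conditioned solve
x_ln = tikhonov_2d(A, b, lam=1e-3)   # regularized: least-norm solution
```

The unregularized solve lands on a large-norm answer near (2, 0), while the regularized solve returns the smaller-norm (1, 1) that fits the data essentially as well — the same stabilizing effect that the regularized single-step inversion exploits on the under-determined total-field problem.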
Khan, Zohaib Nisar; Sabino, Isabela Tomazini; de Souza Melo, Carina Guimarães; Martini, Tatiana; da Silva Pereira, Heloísa Aparecida Barbosa; Buzalaf, Marília Afonso Rabelo
2018-04-29
Appropriate doses of fluoride (F) have therapeutic action against dental caries, but higher levels can cause disturbances in soft and mineralized tissues. Interestingly, the susceptibility to the toxic effects of F is genetically determined. This study evaluated the effects of F on the liver proteome of mice susceptible (A/J) or resistant (129P3/J) to the effects of F. Weanling male A/J (n = 12) and 129P3/J (n = 12) mice were housed in pairs and assigned to two groups given low-F food and drinking water containing 15 or 50 ppm F for 6 weeks. Liver proteome profiles were examined using nano-LC-ESI-MS/MS. Differences in expression among the groups were determined using the PLGS software. Treatment with the lower F concentration provoked more pronounced fold changes in liver proteins than treatment with the higher F concentration. Interestingly, most of the proteins with fold change upon treatment with 15 ppm F were increased in the A/J mice compared with their 129P3/J counterparts, suggesting an attempt of the former to fight the deleterious effects of F. However, upon treatment with 50 ppm F, most proteins with fold change were decreased in the A/J mice compared with their 129P3/J counterparts, especially proteins related to oxidative stress and protein folding, which might be related to the higher susceptibility of the A/J animals to the deleterious effects of F. Our findings shed light on the mechanisms underlying genetic susceptibility to fluorosis.
SLIMS--a user-friendly sample operations and inventory management system for genotyping labs.
Van Rossum, Thea; Tripp, Ben; Daley, Denise
2010-07-15
We present the Sample-based Laboratory Information Management System (SLIMS), a powerful and user-friendly open source web application that provides all members of a laboratory with an interface to view, edit and create sample information. SLIMS aims to simplify common laboratory tasks with tools such as a user-friendly shopping cart for subjects, samples and containers that easily generates reports, shareable lists and plate designs for genotyping. Further key features include customizable data views, database change-logging and dynamically filled pre-formatted reports. Along with being feature-rich, SLIMS' power comes from being able to handle longitudinal data from multiple time-points and biological sources. This type of data is increasingly common from studies searching for susceptibility genes for common complex diseases that collect thousands of samples generating millions of genotypes and overwhelming amounts of data. LIMSs provide an efficient way to deal with this data while increasing accessibility and reducing laboratory errors; however, professional LIMS are often too costly to be practical. SLIMS gives labs a feasible alternative that is easily accessible, user-centrically designed and feature-rich. To facilitate system customization, and utilization for other groups, manuals have been written for users and developers. Documentation, source code and manuals are available at http://genapha.icapture.ubc.ca/SLIMS/index.jsp. SLIMS was developed using Java 1.6.0, JSPs, Hibernate 3.3.1.GA, DB2 and mySQL, Apache Tomcat 6.0.18, NetBeans IDE 6.5, Jasper Reports 3.5.1 and JasperSoft's iReport 3.5.1.
Implications of scaling on static RAM bit cell stability and reliability
NASA Astrophysics Data System (ADS)
Coones, Mary Ann; Herr, Norm; Bormann, Al; Erington, Kent; Soorholtz, Vince; Sweeney, John; Phillips, Michael
1993-01-01
In order to lower manufacturing costs and increase performance, static random access memory (SRAM) bit cells are scaled progressively toward submicron geometries. The reliability of an SRAM is highly dependent on the bit cell stability. Smaller memory cells with less capacitance and restoring current make the array more susceptible to failures from defectivity, alpha hits, and other instabilities and leakage mechanisms. Improving long term reliability while migrating to higher density devices makes the task of building in and improving reliability increasingly difficult. Reliability requirements for high density SRAMs are very demanding, with failure rates of less than 100 failures per billion device hours (100 FITs) being a common criterion. Design techniques for increasing bit cell stability and manufacturability must be implemented in order to build in this level of reliability. Several types of analyses are performed to benchmark the performance of the SRAM device. The analysis techniques presented here include DC parametric measurements of test structures, functional bit mapping of the circuit used to characterize the entire distribution of bits, electrical microprobing of weak and/or failing bits, and system and accelerated soft error rate measurements. These tests allow process and design improvements to be evaluated prior to implementation on the final product. The results are used to provide comprehensive bit cell characterization, which can then be compared to device models and adjusted accordingly to provide optimized cell stability versus cell size for a particular technology. The result is designed-in reliability, which can be accomplished during the early stages of product development.
Discretization effects in the topological susceptibility in lattice QCD
NASA Astrophysics Data System (ADS)
Hart, A.
2004-04-01
We study the topological susceptibility χ in QCD with two quark flavors using lattice field configurations that have been produced with an O(a)-improved clover quark action. We find clear evidence for the expected suppression at a small quark mass mq and examine the variation of χ with this mass and the lattice spacing a. A joint continuum and chiral extrapolation yields good agreement with theoretical expectations as a,mq→0. A moderate increase in autocorrelation is observed on the more chiral ensembles, but within large statistical errors. Finite volume effects are negligible for the Leutwyler-Smilga parameter xLS≳10, and no evidence for a nearby phase transition is observed.
CW Interference Effects on High Data Rate Transmission Through the ACTS Wideband Channel
NASA Technical Reports Server (NTRS)
Kerczewski, Robert J.; Ngo, Duc H.; Tran, Quang K.; Tran, Diepchi T.; Yu, John; Kachmar, Brian A.; Svoboda, James S.
1996-01-01
Satellite communications channels are susceptible to various sources of interference. Wideband channels have a proportionally greater probability of receiving interference than narrowband channels. NASA's Advanced Communications Technology Satellite (ACTS) includes a 900 MHz bandwidth hardlimiting transponder which has provided an opportunity for the study of interference effects of wideband channels. A series of interference tests using two independent ACTS ground terminals measured the effects of continuous-wave (CW) uplink interference on the bit-error rate of a 220 Mbps digitally modulated carrier. These results indicate the susceptibility of high data rate transmissions to CW interference and are compared to results obtained with a laboratory hardware-based system simulation and a computer simulation.
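The susceptibility of a digitally modulated carrier to CW interference can be sketched with a baseband Monte Carlo experiment. This is an illustrative BPSK model with an additive random-phase tone; it does not model the 220 Mbps modem or the hard-limiting ACTS transponder, and the amplitudes and noise level are assumed values.

```python
import math, random

def bpsk_ber(trials, noise_sigma, cw_amp=0.0, seed=1):
    """Monte Carlo bit-error rate of baseband BPSK with additive
    Gaussian noise and an optional random-phase CW interferer."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        bit = rng.choice((-1.0, 1.0))
        # the CW tone projects onto the symbol axis as amp * cos(phase)
        interference = cw_amp * math.cos(rng.uniform(0.0, 2.0 * math.pi))
        rx = bit + interference + rng.gauss(0.0, noise_sigma)
        if (rx >= 0.0) != (bit > 0.0):
            errors += 1
    return errors / trials

ber_clean = bpsk_ber(50_000, noise_sigma=0.5)
ber_cw = bpsk_ber(50_000, noise_sigma=0.5, cw_amp=0.5)
```

Even a tone well below the carrier power raises the error rate noticeably, because the phases that subtract from the symbol amplitude dominate the average bit-error probability.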
New Developments in FPGA Devices: SEUs and Fail-Safe Strategies from the NASA Goddard Perspective
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Kenneth; Pellish, Jonathan
2016-01-01
It has been shown that, when exposed to radiation environments, each Field Programmable Gate Array (FPGA) device has unique error signatures. Consequently, fail-safe and mitigation strategies differ per FPGA type. This session presents several design approaches for safe systems and explores the benefits and limitations of several mitigation techniques. The intention of the presentation is to provide information regarding FPGA types, their susceptibilities, and proven fail-safe strategies, so that users can select appropriate mitigation and perform the required trades for system insertion. The presentation describes three types of FPGA devices and their susceptibilities in radiation environments.
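One widely used FPGA mitigation technique of the kind such sessions compare is triple modular redundancy (TMR). The bitwise majority voter below is a generic sketch of the idea, not a description of any specific NASA Goddard fail-safe strategy.

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote over three redundant copies of a word:
    a bit upset in any single copy is outvoted by the other two."""
    return (a & b) | (b & c) | (a & c)

word = 0b1011_0010
upset = word ^ 0b0000_1000   # single-event upset flips one bit of one copy
recovered = tmr_vote(word, upset, word)
```

The voter masks any single-copy upset; it cannot mask simultaneous upsets of the same bit in two copies, which is why TMR is typically paired with configuration scrubbing.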
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the safety and efficacy of thermal therapies, leading to broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies have been proposed, with magnetic resonance imaging (MRI) regarded as the only one achieving acceptable temperature resolution for hyperthermia purposes. However, MRI's intrinsic characteristics (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU) instead. Among the different BSU features, temporal echo-shifts have received major attention. These shifts are due to changes in the speed of sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods), and temperature estimation in a three-layer phantom has not previously been reported in the literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder, in the amount of 2% of the water weight, to the above composition. The phantom was developed to have attenuation and speed of sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis-function neural networks (RBFNN) with structure optimized by the multi-objective genetic algorithm (MOGA).
In this work 40 operating conditions were considered, i.e. five 5-mm-spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0 W/cm(2). Models were trained and selected to estimate temperature at only four intensities; during the validation phase, the best-fitted models were then analyzed on data collected at all eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were achieved. The majority of them present an average maximum absolute error (MAE) below 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C across all 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) specified for hyperthermia was met independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. Like the majority of the preferable estimator models, the best one has 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models also have reduced computational complexity and can therefore be applied in real time. A non-invasive temperature estimation model, based on a soft-computing technique, was proposed for a three-layered phantom. The best-achieved estimator models presented appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the need for a priori knowledge of the temperature behavior: data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
Effect of acoustic similarity on short-term auditory memory in the monkey
Scott, Brian H.; Mishkin, Mortimer; Yin, Pingbo
2013-01-01
Recent evidence suggests that the monkey’s short-term memory in audition depends on a passively retained sensory trace as opposed to a trace reactivated from long-term memory for use in working memory. Reliance on a passive sensory trace could render memory particularly susceptible to confusion between sounds that are similar in some acoustic dimension. If so, then in delayed matching-to-sample, the monkey’s performance should be predicted by the similarity in the salient acoustic dimension between the sample and subsequent test stimulus, even at very short delays. To test this prediction and isolate the acoustic features relevant to short-term memory, we examined the pattern of errors made by two rhesus monkeys performing a serial, auditory delayed match-to-sample task with interstimulus intervals of 1 s. The analysis revealed that false-alarm errors did indeed result from similarity-based confusion between the sample and the subsequent nonmatch stimuli. Manipulation of the stimuli showed that removal of spectral cues was more disruptive to matching behavior than removal of temporal cues. In addition, the effect of acoustic similarity on false-alarm response was stronger at the first nonmatch stimulus than at the second one. This pattern of errors would be expected if the first nonmatch stimulus overwrote the sample’s trace, and suggests that the passively retained trace is not only vulnerable to similarity-based confusion but is also highly susceptible to overwriting. PMID:23376550