An investigation into soft error detection efficiency at operating system level.
Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. Transient errors occur significantly more often than permanent errors. Transient errors, or soft errors, emerge in two forms: control flow errors (CFEs) and data errors. Valuable research results for their alleviation have already appeared in the literature at the hardware and software levels. However, these works rest on the basic assumption that the operating system is reliable, and so they focus on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application-level components. Results show that soft errors in operating system components affect both operating system and application-level components. Therefore, by providing operating-system-level components with endurance against soft errors, both operating system and application-level components gain tolerance.
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view the application's vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
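The fault-injection methodology described in this abstract is easy to prototype. Below is a minimal, hypothetical Python sketch of such a campaign (not XRay's implementation): one random single-bit flip is injected per trial and the outcome is classified as benign, silent data corruption (SDC), or crash.

```python
import random

def flip_bit(value: int, bit: int, width: int = 64) -> int:
    """Flip one bit of an integer, emulating a soft fault in a register."""
    return value ^ (1 << (bit % width))

def run_campaign(workload, golden, trials: int = 1000) -> dict:
    """Inject one random single-bit fault per trial and classify the outcome."""
    outcomes = {"benign": 0, "sdc": 0, "crash": 0}
    for _ in range(trials):
        fault = lambda v: flip_bit(v, random.randrange(64))
        try:
            result = workload(fault)
        except Exception:
            outcomes["crash"] += 1      # detected / fatal error
            continue
        if result == golden:
            outcomes["benign"] += 1     # fault was masked
        else:
            outcomes["sdc"] += 1        # silent data corruption
    return outcomes

# Toy workload: sums integers, with the fault applied to one intermediate value.
def workload(fault):
    acc = 0
    for i in range(100):
        acc += i
    return fault(acc)

print(run_campaign(workload, golden=4950))
```

A tool like XRay additionally sizes `trials` so that the resulting outcome proportions carry tight statistical confidence intervals; the sketch above leaves that choice fixed.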
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft errors of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through qualitative and quantitative fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated.
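For reference, these are the standard reliability relations such tools evaluate (textbook formulas, not results from this record): for a component with constant failure rate λ and repair rate μ,

```latex
R(t) = e^{-\lambda t}, \qquad
\mathrm{MTTF} = \int_0^{\infty} R(t)\,dt = \frac{1}{\lambda}, \qquad
U_{\infty} = \frac{\lambda}{\lambda + \mu}
```

where U∞ is the steady-state unavailability; a fault tree combines such component-level terms through its AND/OR gate logic up to the top event.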
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ errors per particle/cm², while the MTTF is approximately 110.7 h.
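As a worked illustration of how the two reported quantities relate (all numbers hypothetical, chosen only to match the reported orders of magnitude):

```python
def sfer(n_errors: int, fluence_per_cm2: float) -> float:
    """System functional error rate: errors per unit fluence (particle/cm^2)."""
    return n_errors / fluence_per_cm2

def mttf_hours(sfer_value: float, flux_per_cm2_s: float) -> float:
    """Mean time to failure under a constant environment flux, assuming a constant rate."""
    error_rate_per_s = sfer_value * flux_per_cm2_s
    return 1.0 / error_rate_per_s / 3600.0

s = sfer(n_errors=50, fluence_per_cm2=5.0e4)     # -> 1e-3 errors/(particle/cm^2)
print(s, mttf_hours(s, flux_per_cm2_s=2.5e-3))   # -> ~111 h at this assumed flux
```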
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUE (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10¹⁸ floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
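The paper proposes class-specific detection and correction mechanisms; as one hedged illustration of the general idea (not the authors' actual scheme), a symmetric matrix such as a Fock matrix carries built-in redundancy that can both flag and repair a large upset:

```python
import numpy as np

def scrub(matrix: np.ndarray, bound: float) -> tuple[np.ndarray, int]:
    """Flag entries whose magnitude exceeds a physically plausible bound
    (large soft errors) and repair them from the symmetric counterpart."""
    corrupted = np.abs(matrix) > bound
    fixed = matrix.copy()
    # A symmetric matrix carries redundancy: M[i, j] should equal M[j, i].
    fixed[corrupted] = matrix.T[corrupted]
    return fixed, int(corrupted.sum())

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 4)); f = (f + f.T) / 2   # symmetric "Fock-like" matrix
f[1, 2] += 1e6                                   # inject a large upset in one copy
repaired, n = scrub(f, bound=1e3)
print(n, np.allclose(repaired, repaired.T))      # 1 corrected entry, symmetry restored
```

This mirrors the abstract's finding that only large-magnitude errors need explicit handling: small upsets fall below the bound and are absorbed by the iterative optimization itself.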
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S.; Peng, L.; Bronevetsky, G.
As HPC systems approach Exascale, their circuit features will shrink while their overall size will grow, all at a fixed power limit. These trends imply that soft faults in electronic circuits will become an increasingly significant problem for applications that run on these systems, causing them to occasionally crash or, worse, silently return incorrect results. This is motivating extensive work on application resilience to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and resilience techniques. Effective use of such techniques requires a detailed understanding of (1) which vulnerable parts of the application are most worth protecting and (2) the performance and resilience impact of fault resilience mechanisms on the application. This paper presents FaultTelescope, a tool that combines these two and generates actionable insights by presenting application vulnerabilities and the impact of fault resilience mechanisms on applications in an intuitive way.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as critical charge decreases. It is also found that the high-energy component of secondary cosmic-ray neutrons, from 10 MeV up to several hundreds of MeV, is the most significant source of soft errors regardless of design rule.
A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience
NASA Astrophysics Data System (ADS)
Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.
2017-06-01
A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro for further enhanced soft-error resilience by integrating the guard-gate technique. The proposed design, as well as reference Quatro and regular flip-flops, was implemented and manufactured in a 65-nm CMOS bulk technology. Experimental characterization results of their alpha and heavy ions soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.
Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10⁻⁸ to 10⁻⁶ cm²/proton. For individual devices, however, the soft error cross section consistently increased with beam energy from 18 to 130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
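The quoted cross section is simply the error count divided by the beam fluence; a minimal sketch with hypothetical numbers in the reported range:

```python
def error_cross_section_cm2(n_errors: int, fluence_per_cm2: float) -> float:
    """Soft-error cross section: sigma = N_errors / fluence (cm^2 per proton)."""
    return n_errors / fluence_per_cm2

# Hypothetical run: 120 upsets after 1e8 protons/cm^2 gives sigma = 1.2e-6 cm^2,
# at the upper end of the 1e-8 to 1e-6 cm^2/proton spread reported above.
print(error_cross_section_cm2(120, 1e8))
```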
NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease in response to an increase in BOX thickness for SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever-increasing due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient at combating the growing number of multi-bit errors or cannot well balance critical-path delay, area, and power penalties. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by enhancing the pipeline stage registers with self-checking logic. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.
Determination of pesticide residues in fruit-based soft drinks.
García-Reyes, Juan F; Gilbert-López, Bienvenida; Molina-Díaz, Antonio; Fernández-Alba, Amadeo R
2008-12-01
Here we report the first worldwide reconnaissance study of the presence and occurrence of pesticides in fruit-based soft drinks. While there are strict regulations and exhaustive controls for pesticides in fruits, vegetables, and drinking water, scarce attention has been paid to highly consumed derivative products that may contain these commodities as ingredients. In the case of the fruit-based soft drinks industry, there are no clear regulations relating to pesticides, even though there is significant consumption among vulnerable groups such as children. In this work, we have developed a screening method to search automatically for up to 100 pesticides in fruit-based soft drink extracts, based on the application of liquid chromatography-electrospray time-of-flight mass spectrometry (LC-TOF MS). The injected sample extracts were obtained by a preliminary sample treatment step based on solid-phase extraction using hydrophilic-lipophilic balanced polymer-based reverse phase cartridges and methanol as eluting solvent. Subsequent identification, confirmation, and quantitation were carried out by LC-TOF MS analysis: the confirmation of the target species was based on retention time matching and accurate mass measurements of protonated molecules ([M + H]+) and fragment ions (obtaining accuracy errors typically lower than 2 ppm). With the proposed method, we measured over 100 fruit-based soft drink samples, purchased in 15 different countries from companies with brands distributed worldwide, and found relatively large concentration levels of pesticides in most of the samples analyzed. The concentration levels detected were at the micrograms per liter level, low when compared with the European maximum residue levels (MRLs) set for fruits but very high (i.e., 300 times) when compared with the MRLs for drinking or bottled water. The detected pesticides (carbendazim, thiabendazole, imazalil and its main degradate, prochloraz and its main degradate, malathion, and iprodione) are mainly those applied to crops in the final stages of production (postharvest treatment); some of them contain chlorine atoms in their structures. Therefore, steps should be taken to remove any traces of pesticides from these products, in order to eliminate this source of pesticide exposure for the consumer, particularly for vulnerable groups with higher exposure, such as children.
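The sub-2-ppm accurate-mass criterion mentioned above is a simple relative error. A short sketch (the exact m/z of protonated carbendazim is a standard reference value; the measured value here is hypothetical):

```python
def mass_error_ppm(measured_mz: float, exact_mz: float) -> float:
    """Accurate-mass error in parts per million, as used for TOF-MS confirmation."""
    return (measured_mz - exact_mz) / exact_mz * 1e6

# Hypothetical reading for protonated carbendazim [M + H]+ (exact m/z 192.0768):
print(mass_error_ppm(192.0771, 192.0768))  # ~1.6 ppm, below the ~2 ppm criterion
```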
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
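The record does not name the code used; Berger codes are the textbook example of codes matched to asymmetric (unidirectional) errors, and serve here only as an illustration of the principle:

```python
def berger_encode(data_bits: str) -> str:
    """Append the count of zeros; detects all unidirectional (asymmetric) errors."""
    check_width = len(data_bits).bit_length()      # ceil(log2(n+1)) check bits
    zeros = data_bits.count("0")
    return data_bits + format(zeros, f"0{check_width}b")

def berger_check(word: str, data_len: int) -> bool:
    """Valid iff the zero count of the data still matches the check field."""
    data, check = word[:data_len], word[data_len:]
    return data.count("0") == int(check, 2)

w = berger_encode("1011010")       # 7 data bits -> 3 check bits
print(w, berger_check(w, 7))       # valid codeword -> True
bad = "0" + w[1:]                  # a 1->0 upset (the favored asymmetric error type)
print(berger_check(bad, 7))        # detected -> False
```

A 1-to-0 flip raises the data's zero count without raising the stored count, so every error in the "more likely" direction is caught, which is exactly the asymmetry the abstract describes exploiting.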
Phonons in two-dimensional soft colloidal crystals.
Chen, Ke; Still, Tim; Schoenholz, Samuel; Aptowicz, Kevin B; Schindler, Michael; Maggs, A C; Liu, Andrea J; Yodh, A G
2013-08-01
The vibrational modes of pristine and polycrystalline monolayer colloidal crystals composed of thermosensitive microgel particles are measured using video microscopy and covariance matrix analysis. At low frequencies, the Debye relation for two-dimensional harmonic crystals is observed in both crystal types; at higher frequencies, evidence for van Hove singularities in the phonon density of states is significantly smeared out by experimental noise and measurement statistics. The effects of these errors are analyzed using numerical simulations. We introduce methods to correct for these limitations, which can be applied to disordered systems as well as crystalline ones, and we show that application of the error correction procedure to the experimental data leads to more pronounced van Hove singularities in the pristine crystal. Finally, quasilocalized low-frequency modes in polycrystalline two-dimensional colloidal crystals are identified and demonstrated to correlate with structural defects such as dislocations, suggesting that quasilocalized low-frequency phonon modes may be used to identify local regions vulnerable to rearrangements in crystalline as well as amorphous solids.
Use of modeling to identify vulnerabilities to human error in laparoscopy.
Funk, Kenneth H; Bauer, James D; Doolen, Toni L; Telasha, David; Nicolalde, R Javier; Reeber, Miriam; Yodpijit, Nantakrit; Long, Myra
2010-01-01
This article describes an exercise to investigate the utility of modeling and human factors analysis in understanding surgical processes and their vulnerabilities to medical error. A formal method to identify error vulnerabilities was developed and applied to a test case of Veress needle insertion during closed laparoscopy. A team of 2 surgeons, a medical assistant, and 3 engineers used hierarchical task analysis and Integrated DEFinition language 0 (IDEF0) modeling to create rich models of the processes used in initial port creation. Using terminology from a standardized human performance database, detailed task descriptions were written for 4 tasks executed in the process of inserting the Veress needle. Key terms from the descriptions were used to extract from the database generic errors that could occur. Task descriptions with potential errors were translated back into surgical terminology. Referring to the process models and task descriptions, the team used a modified failure modes and effects analysis (FMEA) to consider each potential error for its probability of occurrence, its consequences if it should occur and be undetected, and its probability of detection. The resulting likely and consequential errors were prioritized for intervention. A literature-based validation study confirmed the significance of the top error vulnerabilities identified using the method. Ongoing work includes design and evaluation of procedures to correct the identified vulnerabilities and improvements to the modeling and vulnerability identification methods. Copyright 2010 AAGL. Published by Elsevier Inc. All rights reserved.
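The modified FMEA step described above prioritizes errors by occurrence, consequence, and detectability; the classic way to combine these three scores is a risk priority number (RPN). A small sketch with hypothetical scores, not the study's data:

```python
def risk_priority_number(occurrence: int, severity: int, detection: int) -> int:
    """Classic FMEA ranking: each factor scored 1-10; a higher detection score
    means the error is *less* likely to be caught before causing harm."""
    return occurrence * severity * detection

# Hypothetical scores for two Veress-needle error modes (illustration only):
errors = {"insertion too deep": (4, 9, 7), "wrong insufflation reading": (3, 5, 2)}
for name, scores in sorted(errors.items(), key=lambda kv: -risk_priority_number(*kv[1])):
    print(name, risk_priority_number(*scores))   # rank error modes for intervention
```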
Latin hypercube approach to estimate uncertainty in ground water vulnerability
Gurdak, J.J.; McCray, J.E.; Thyne, G.; Qi, S.L.
2007-01-01
A methodology is proposed to quantify prediction uncertainty associated with ground water vulnerability models that were developed through an approach that coupled multivariate logistic regression with a geographic information system (GIS). This method uses Latin hypercube sampling (LHS) to illustrate the propagation of input error and estimate uncertainty associated with the logistic regression predictions of ground water vulnerability. Central to the proposed method is the assumption that prediction uncertainty in ground water vulnerability models is a function of input error propagation from uncertainty in the estimated logistic regression model coefficients (model error) and the values of explanatory variables represented in the GIS (data error). Input probability distributions that represent both model and data error sources of uncertainty were simultaneously sampled using a Latin hypercube approach with logistic regression calculations of probability of elevated nonpoint source contaminants in ground water. The resulting probability distribution represents the prediction intervals and associated uncertainty of the ground water vulnerability predictions. The method is illustrated through a ground water vulnerability assessment of the High Plains regional aquifer. Results of the LHS simulations reveal significant prediction uncertainties that vary spatially across the regional aquifer. Additionally, the proposed method enables a spatial deconstruction of the prediction uncertainty that can lead to improved prediction of ground water vulnerability. © 2007 National Ground Water Association.
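A minimal sketch of the sampling idea (hypothetical coefficients and standard errors, not the study's fitted model), using SciPy's Latin hypercube engine:

```python
import numpy as np
from scipy.stats import norm, qmc

beta_mean = np.array([-1.0, 0.8, -0.5])   # hypothetical logistic coefficients
beta_se = np.array([0.2, 0.1, 0.15])      # hypothetical coefficient standard errors
x = np.array([1.0, 2.3, 0.7])             # explanatory variables for one grid cell

# Latin hypercube sample over the coefficient (model-error) distributions.
u = qmc.LatinHypercube(d=3, seed=1).random(1000)
betas = norm.ppf(u) * beta_se + beta_mean   # map uniform strata to normal draws

p = 1.0 / (1.0 + np.exp(-(betas @ x)))      # logistic vulnerability per draw
print(np.percentile(p, [2.5, 50.0, 97.5]))  # prediction interval for this cell
```

Repeating the calculation cell by cell yields the spatially varying prediction intervals the abstract describes; data error can be included the same way by also sampling the explanatory variables.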
Collective bargaining: a vulnerability assessment.
Fitzpatrick, M A
2001-02-01
When it comes to the "soft side" of health care, employees want to be informed, respected, and included in decisions that affect their ability to care for patients with pride and satisfaction. Union vulnerability is low when staff satisfaction and morale are high.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel estimate based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel estimate based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
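Reading the decomposition additively, which is the natural interpretation of the abstract (stated here as an assumption, not a quoted formula):

```latex
\sigma_s^2 \;=\; \sigma_0^2 + \sigma_e^2
```

so the excess error σ_e² isolates the penalty attributable purely to channel-estimation error, which is the term a robust DFE design aims to shrink.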
Single Event Effect microchip testing at the Texas A&M University Cyclotron Institute
NASA Astrophysics Data System (ADS)
Clark, Henry; Yennello, Sherry; Texas A&M University-Cyclotron Institute Team
2015-10-01
A Single Event Effect (SEE) is caused by a single energetic particle that deposits a sufficient amount of charge in a device as it traverses it and upsets its normal operation. Soft errors are non-destructive and normally appear as transient pulses in logic or support circuitry, or as bit flips in memory cells or registers. Hard errors usually result in a high operating current, above device specifications, and must be cleared by a power reset. Burnout errors are so destructive that the device becomes operationally dead. Spacecraft designers must be concerned with SEEs caused by protons and heavy ions, since commercial devices are typically chosen to reduce power, weight, volume, and cost while increasing functionality, and such devices are typically vulnerable to SEEs. As a result, all mission-critical devices must be tested. The TAMU K500 superconducting cyclotron has provided beams for space radiation testing since 1994. Starting at just 100 hours/year at inception, the demand has grown to 3000 hours/year. In recent years, most beam time has been for US defense system testing; nearly 15% has been provided for foreign agencies from Europe and Asia. An overview of the testing facility and future plans will be presented.
An Assessment of Vulnerabilities for Ship-based Control Systems
2009-09-01
Master's thesis by Richard Bensing, September 2009. Thesis Advisor: Karen Burke; Co-Advisor: George Dinolt. From the abstract: "... these systems are the soft underbelly. Computer-based control systems form the heart of the critical infrastructure, and these control systems are riddled with rampant ..."
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with tens or hundreds of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent in detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited to detecting and correcting corrupted MPI data. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption, with runtime overheads between 0% and 30% compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
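The core of the double/triple redundancy distinction can be sketched in a few lines (a hedged illustration of the voting principle, not RedMPI's actual protocol):

```python
from collections import Counter

def vote(messages: list[bytes]) -> tuple[bytes, bool]:
    """Majority vote over replica copies of one MPI message payload.
    With triple redundancy a single corrupted copy is out-voted (corrected);
    with double redundancy a mismatch can only be detected, not corrected."""
    winner, count = Counter(messages).most_common(1)[0]
    corrected = count > len(messages) // 2
    return winner, corrected

replicas = [b"\x01\x02\x03", b"\x01\x02\x03", b"\x01\xff\x03"]  # one bit-flipped copy
print(vote(replicas))                       # (b'\x01\x02\x03', True): corrected
print(vote([replicas[0], replicas[2]]))     # two-way mismatch: detection only (False)
```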
Practicality of Evaluating Soft Errors in Commercial sub-90 nm CMOS for Space Applications
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; LaBel, Kenneth A.
2010-01-01
The purpose of this presentation is to: highlight the evolution of space memory evaluation; review recent developments regarding low-energy proton direct ionization soft errors; assess current space memory evaluation challenges, including the increasing number of non-volatile technology choices; and discuss related testing and evaluation complexities.
Devitt, Aleea L.; Tippett, Lynette; Schacter, Daniel L.; Addis, Donna Rose
2016-01-01
Because of its reconstructive nature, autobiographical memory (AM) is subject to a range of distortions. One distortion involves the erroneous incorporation of features from one episodic memory into another, forming what are known as memory conjunction errors. Healthy aging has been associated with an enhanced susceptibility to conjunction errors for laboratory stimuli, yet it is unclear whether these findings translate to the autobiographical domain. We investigated the impact of aging on vulnerability to AM conjunction errors, and explored potential cognitive processes underlying the formation of these errors. An imagination recombination paradigm was used to elicit AM conjunction errors in young and older adults. Participants also completed a battery of neuropsychological tests targeting relational memory and inhibition ability. Consistent with findings using laboratory stimuli, older adults were more susceptible to AM conjunction errors than younger adults. However, older adults were not differentially vulnerable to the inflating effects of imagination. Individual variation in AM conjunction error vulnerability was attributable to inhibitory capacity. An inability to suppress the cumulative familiarity of individual AM details appears to contribute to the heightened formation of AM conjunction errors with age.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, K.; Ohmi, K.; Tottori University Electronic Display Research Center, 101 Minami4-chome, Koyama-cho, Tottori-shi, Tottori 680-8551
With increasing density of memory devices, the issue of soft errors generated by cosmic rays is becoming more and more serious. Therefore, the irradiation resistance of resistance random access memory (ReRAM) to cosmic radiation has to be elucidated for practical use. In this paper, we investigated the data retention characteristics of ReRAM with a Pt/NiO/ITO structure under ultraviolet irradiation. Soft errors were confirmed to be caused by ultraviolet irradiation in both the low- and high-resistance states. An analysis of the wavelength dependence of light irradiation on data retention characteristics suggested that the errors were caused by electronic excitation from the valence band to the conduction band and to the energy level generated by the introduction of oxygen vacancies. Based on statistically estimated soft error rates, the errors were suggested to be caused by the cohesion and dispersion of oxygen vacancies owing to the generation of electron-hole pairs and valence changes under the ultraviolet irradiation.
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality, avoiding the cumbersome computation of the Jacobian matrix; the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
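The detection step described above can be illustrated with a chi-square gate on the filter innovation (a generic sketch, not the paper's full algorithm; covariance values are hypothetical):

```python
import numpy as np
from scipy.stats import chi2

def model_error_detected(innovation: np.ndarray, S: np.ndarray,
                         alpha: float = 0.05) -> bool:
    """Flag contact-model error when the squared Mahalanobis distance of the
    filter innovation exceeds its chi-square gate (innovation ~ N(0, S))."""
    d2 = float(innovation @ np.linalg.solve(S, innovation))
    return d2 > chi2.ppf(1.0 - alpha, df=innovation.size)

S = np.diag([0.04, 0.09])                                # hypothetical innovation covariance
print(model_error_detected(np.array([0.05, -0.02]), S))  # small residual -> False
print(model_error_detected(np.array([1.0, 0.9]), S))     # large residual -> True
```

When the gate fires, the paper's method inflates the predicted state covariance by a scaling factor chosen via innovation orthogonality; the sketch shows only the detection half.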
Financial errors in dementia: Testing a neuroeconomic conceptual framework
Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.
2013-01-01
Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer's disease (AD, n = 100) and behavioral variant frontotemporal dementia (bvFTD, n = 50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p < 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p < 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention.
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
... benchmark analysis, also provides cost-effective solutions (~1% additional energy cost for the same 50× improvement). This paper addresses the ... core (OoO-core) [Wang 04], across 18 benchmarks. Such extensive exploration enables us to conclusively answer the above cross-layer resilience ... analysis of the effects of soft errors on application benchmarks, provides a highly effective soft error resilience approach. 3. The above ...
Alpha particle-induced soft errors in microelectronic devices. I
NASA Astrophysics Data System (ADS)
Redman, D. J.; Sega, R. M.; Joseph, R.
1980-03-01
The article provides a tutorial review and trend assessment of the problem of alpha particle-induced soft errors in VLSI memories. Attention is given to an analysis of the design evolution of modern ICs, and the characteristics of alpha particles and their origin in IC packaging are reviewed. Finally, the process of an alpha particle penetrating an IC is examined.
Smart structure with elastomeric contact surface for prosthetic fingertip sensitivity development
NASA Astrophysics Data System (ADS)
Gu, Chunxin; Liu, Weiting; Yu, Ping; Cheng, Xiaoying; Fu, Xin
2017-09-01
Current flexible/compliant tactile sensors suffer from low sensitivity and high hysteresis introduced by the essential viscosity of soft materials, used either as the compliant sensing element or as the flexible coverage. To overcome these disadvantages, this paper focuses on developing a tactile sensor with a smart hybrid structure to obtain comprehensive properties in terms of size, compliance, robustness, and pressure sensing ability, so as to meet the requirements of limited-space applications such as prosthetic fingertips. Employing a micro-fabricated tiny silicon-based pressure die as the sensing element makes it easy to achieve both small size and good mechanical performance. To protect the die from potential damage and maintain a compliant surface, a rigid base and a soft layer form a sealed chamber and encapsulate the fixed die together with fluid. The fluid serves as a highly efficient pressure propagation medium, transmitting the mechanical stimulus from the compliant skin to the pressure die without endangering the vulnerable connecting wires. To understand the pressure transmission mechanism, a simplified and concise analytic model of a spring system is proposed. Using simple fabrication technologies, a prototype of a 3 × 3 sensor array with total dimensions of 14 mm × 14 mm × 6.5 mm was developed. Based on the quasi-linear relationship between fluid volume and pressure, finite element modeling was developed to analyze the chamber deformation and pressure output of the sensor cell. Experimental tests of the sensor prototype were implemented. The results showed that the sensor cell had good sensing performance, with a sensitivity of 19.9 mV/N, linearity of 0.998, repeatability error of 3.41%, and hysteresis error of 3.34%. The force sensing range was from 5 mN to 1.6 N.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
NASA Technical Reports Server (NTRS)
Dismukes, Key; Berman, Ben; Loukopoulos, Loukisa
2007-01-01
Reviewed NTSB reports of the 19 U.S. airline accidents between 1991 and 2000 attributed primarily to crew error. Asked: why might any airline crew in the situation of the accident crew, knowing only what they knew, be vulnerable? We can never know with certainty why an accident crew made specific errors, but we can determine why the population of pilots is vulnerable. Considers the variability of expert performance as a function of the interplay of multiple factors.
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics result in sub-micrometer electronic technologies as predicted by Moore's Law, 1965, which states that the number of transistors in a given space would double every two years. The most available memory architectures today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half pitch size of 50 nm and Microprocessor Units (MPU) will have an average gate length of 30 nm over the period of 2008-2012. Decreases in the dimensions satisfy the producer and consumer requirements of low power consumption, more data storage for a given space, faster clock speed, and portability of integrated circuits (IC), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a micro-electronic device, it can cause a permanent or transient malfunction in the device. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or soft error. Contrary to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking the necessary measures to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of borophosphosilicate glass (BPSG) layers that are conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced are capable of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into semiconductor memory architectures.
This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories, to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. NISC design considerations in this dissertation include the effects of device scaling, 10B content in the BPSG layer, incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device scaling simulations showed that any increase in the thickness of the BPSG layer beyond 2 μm causes self-shielding of the incoming neutrons by the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 μm away from the depletion region of the node, there are no soft errors in the node, due to the fact that both of the reaction products have short ranges in silicon or in any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location would be within the range of the alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 μm to increase the detection efficiency of the NISC. Incoming neutron energy was also investigated by simulations, and the results showed that NISC neutron detection efficiency is related to the neutron cross-sections of the 10B(n,alpha)7Li reaction; e.g., the ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has poor efficiency due to the low neutron cross-sections, becomes almost impossible at higher altitudes, where the cosmic ray fluxes and their energies are higher. NISCSAT simulations regarding the soft error dependency of the NISC on temperature and electromagnetic fields show that there are no significant effects on NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators, since the incoming neutrons scatter away before reaching the memory surface.
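The ~15 fC critical-charge figure can be cross-checked against the standard silicon energy-to-charge conversion (about 3.6 eV per electron-hole pair, i.e. roughly 44.5 fC per MeV deposited); the 2.31 MeV value is the well-known total kinetic energy of the 10B(n,alpha)7Li products for the dominant branch. A short sketch:

```python
E_PAIR_SI_EV = 3.6        # average energy per electron-hole pair in silicon (eV)
Q_E = 1.602e-19           # elementary charge (C)

def deposited_charge_fC(energy_MeV: float) -> float:
    """Charge generated in silicon by a given deposited energy (~44.5 fC/MeV)."""
    pairs = energy_MeV * 1e6 / E_PAIR_SI_EV
    return pairs * Q_E * 1e15

# Full 2.31 MeV deposition would give ~103 fC; depositing only ~0.34 MeV in the
# sensitive volume corresponds to the ~15 fC mean charge reported above.
print(deposited_charge_fC(2.31), deposited_charge_fC(0.34))
```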
Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan
2011-01-01
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
Multi-Spectral Solar Telescope Array. II - Soft X-ray/EUV reflectivity of the multilayer mirrors
NASA Technical Reports Server (NTRS)
Barbee, Troy W., Jr.; Weed, J. W.; Hoover, Richard B.; Allen, Maxwell J.; Lindblom, Joakim F.; O'Neal, Ray H.; Kankelborg, Charles C.; Deforest, Craig E.; Paris, Elizabeth S.; Walker, Arthur B. C., Jr.
1991-01-01
The Multispectral Solar Telescope Array is a rocket-borne observatory which encompasses seven compact soft X-ray/EUV, multilayer-coated, and two compact far-UV, interference film-coated, Cassegrain and Ritchey-Chretien telescopes. Extensive measurements are presented on the efficiency and spectral bandpass of the X-ray/EUV telescopes. Attention is given to systematic errors and measurement errors.
Performance Data Errors in Air Carrier Operations: Causes and Countermeasures
NASA Technical Reports Server (NTRS)
Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.
2012-01-01
Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight operations, or the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana
The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I&C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in Static RAM (SRAM), functional data are stored in dedicated Block RAM (BRAM), and functional state logic is held in flip-flops. Single Event Upsets (SEUs) due to the ionizing radiation of neutrons cause soft errors, unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and the repair of soft errors, when possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with Error Correction Code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST in the neighborhood of Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories.
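For readers unfamiliar with the ECC mechanism such sensors rely on, here is the textbook single-error-correcting Hamming(7,4) scheme in Python (an illustration of the principle only; the actual Xilinx SEM controller and BRAM ECC use different, wider codes):

```python
def hamming74_encode(d: list[int]) -> list[int]:
    """Hamming(7,4): three parity bits allow single-bit error correction."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4               # covers codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4               # covers codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4               # covers codeword positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c: list[int]) -> list[int]:
    """Recompute parities; the syndrome gives the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1        # repair the flipped bit in place
    return [c[2], c[4], c[5], c[6]] # extract the four data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                        # inject a single-bit upset
print(hamming74_correct(word))      # -> [1, 0, 1, 1], upset repaired
```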
Utilization of robotic-arm assisted total knee arthroplasty for soft tissue protection.
Sultan, Assem A; Piuzzi, Nicolas; Khlopas, Anton; Chughtai, Morad; Sodhi, Nipun; Mont, Michael A
2017-12-01
Despite the well-established success of total knee arthroplasty (TKA), iatrogenic ligamentous and soft tissue injuries are infrequent but potentially devastating complications for clinical outcomes. These injuries are often related to technical errors and excessive soft tissue manipulation, particularly during bony resections. Recently, robotic-arm assisted TKA was introduced and has demonstrated promising results, with potential technical advantages over manual surgery in implant positioning and mechanical accuracy. Furthermore, soft tissue protection is an additional potential advantage offered by these systems, which can reduce the inadvertent human technical errors encountered during standard manual resections. Therefore, given the relative paucity of literature, we attempted to answer the following questions: (1) Does robotic-arm assisted TKA offer a technical advantage that allows enhanced soft tissue protection? (2) What is the available evidence about soft tissue protection? Recently introduced models of robotic-arm assisted TKA systems with advanced technology have shown promising clinical outcomes and soft tissue protection at short- and mid-term follow-up, with results comparable or superior to manual TKA. In this review, we explore this dimension of robotics in TKA and investigate the soft tissue related complications currently reported in the literature.
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for Sub-130 nm Technologies
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Thomas M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray tracing software to model various levels of spacecraft shielding complexity, together with energy deposition pulse height analysis, to study how shielding affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments, as well as the October 1989 and July 2000 solar particle events.
In vivo soft tissue differentiation by diffuse reflectance spectroscopy: preliminary results
NASA Astrophysics Data System (ADS)
Zam, Azhar; Stelzle, Florian; Tangermann-Gerk, Katja; Adler, Werner; Nkenke, Emeka; Neukam, Friedrich Wilhelm; Schmidt, Michael; Douplik, Alexandre
Remote laser surgery does not provide haptic feedback with which to operate layer by layer and preserve vulnerable anatomical structures such as nerve tissue or blood vessels. The aim of this study is the identification of soft tissue in vivo by diffuse reflectance spectroscopy, to set the base for a feedback control system to enhance nerve preservation in oral and maxillofacial laser surgery. Various soft tissues can be identified by diffuse reflectance spectroscopy in vivo. The results may set the base for a feedback system to prevent nerve damage during oral and maxillofacial laser surgery.
Hard sphere perturbation theory for thermodynamics of soft-sphere model liquid
NASA Astrophysics Data System (ADS)
Mon, K. K.
2001-09-01
It is a long-standing consensus in the literature that hard sphere perturbation theory (HSPT) is not accurate for dense soft sphere model liquids interacting with repulsive r^(-n) pair potentials for small n. In this paper, we show that if the intrinsic error of HSPT for soft sphere model liquids is accounted for, then this is not completely true. We present results for n = 4, 6, 9, 12 which indicate that even first-order variational HSPT can provide free energy upper bounds to within a few percent at densities near freezing, when corrected for the intrinsic error of the HSPT.
Neural and computational processes underlying dynamic changes in self-esteem.
Will, Geert-Jan; Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J
2017-10-24
Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an 'interpersonal vulnerability' dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability.
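As a minimal sketch of the kind of prediction-error update described above, assuming a simple delta rule with a single learning rate (the published model is considerably richer, with separate weights and baseline terms):

```python
def update_self_esteem(esteem, expected_feedback, received_feedback, lr=0.1):
    """One delta-rule step: esteem moves in the direction of the surprise."""
    spe = received_feedback - expected_feedback   # social prediction error
    return esteem + lr * spe                      # positive surprises raise esteem

esteem = 0.0
for expected, received in [(0.8, 0.2), (0.5, 0.9), (0.6, 0.1)]:
    esteem = update_self_esteem(esteem, expected, received)
```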
Full temperature single event upset characterization of two microprocessor technologies
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Coss, James R.; Smith, L. S.; Rax, Bernard; Huebner, Mark
1988-01-01
Data for the 9450 I3L bipolar microprocessor and the 80C86 CMOS/epi (vintage 1985) microprocessor are presented, showing single-event soft errors for the full MIL-SPEC temperature range of -55 to 125 C. These data show for the first time that the soft-error cross sections continue to decrease with decreasing temperature at subzero temperatures. The temperature dependence of the two parts, however, is very different.
SSL/TLS Vulnerability Detection Using Black Box Approach
NASA Astrophysics Data System (ADS)
Gunawan, D.; Sitorus, E. H.; Rahmat, R. F.; Hizriadi, A.
2018-03-01
Secure Sockets Layer (SSL) and Transport Layer Security (TLS) are cryptographic protocols that provide data encryption to secure communication over a network. However, in some cases vulnerabilities are found in SSL/TLS implementations because of weak cipher keys, certificate validation errors, or session handling errors. One of the most serious SSL/TLS bugs is Heartbleed. As security is essential in data communication, this research aims to build a scanner that detects SSL/TLS vulnerabilities using a black box approach, focusing on the Heartbleed case. In addition, this research gathers information about the existing SSL configuration of the server. The black box approach tests the output of a system without knowing the processes inside the system itself. For testing purposes, this research scanned websites and found that some of them still have SSL/TLS vulnerabilities. Thus, the black box approach can be used to detect such vulnerabilities without considering the source code and the internal workings of the application.
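The information-gathering part of such a scanner can be sketched with Python's standard ssl module; the Heartbleed probe itself (crafting raw heartbeat records) is omitted, and the host name and timeout below are illustrative.

```python
import socket
import ssl

def probe_tls(host, port=443):
    """Black-box view: observe only the negotiated protocol, cipher, and cert."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher(), tls.getpeercert()

version, cipher, cert = probe_tls("example.com")
print(version, cipher[0], cert.get("notAfter"))   # e.g. protocol, cipher name, expiry
```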
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover from soft errors in general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
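The flavor of checksum-protected matrix-vector multiplication can be conveyed with a classical column-checksum detector evaluated after the product; note this is a simplified stand-in rather than the paper's New-Sum encoding, and the tolerance is an assumption.

```python
import numpy as np

def checked_matvec(A, x, colsum=None, tol=1e-10):
    """y = A @ x with an ABFT-style detector evaluated after the computation."""
    if colsum is None:
        colsum = A.sum(axis=0)        # checksum row e^T A, reusable across iterations
    y = A @ x
    # invariant: sum(y) == (e^T A) @ x; a corrupted element of y (or of the
    # multiply) breaks it and is flagged here instead of propagating silently
    ref = colsum @ x
    if abs(y.sum() - ref) > tol * (1 + abs(ref)):
        raise RuntimeError("soft error detected in matrix-vector product")
    return y

A = np.random.default_rng(0).random((100, 100))
y = checked_matvec(A, np.ones(100))
```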
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
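To illustrate why soft decisions carry more diagnostic information than a hard-decision BER, consider a simulated BPSK link; the Eb/N0 value and the statistics printed are illustrative, not the SDA's actual measurement set.

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200_000)
tx = 1.0 - 2.0 * bits                        # BPSK mapping: 0 -> +1, 1 -> -1
ebn0 = 10 ** (4.0 / 10)                      # 4 dB
rx = tx + rng.normal(scale=np.sqrt(1 / (2 * ebn0)), size=bits.size)

ber = np.mean((rx < 0) != (bits == 1))       # traditional hard-decision statistic

# soft statistics expose impairments BER alone hides, e.g. amplitude bias
# or excess noise variance contributed by the receiver implementation
print(f"BER={ber:.2e}, mean |soft|={np.abs(rx).mean():.3f}, "
      f"noise var={np.var(rx - tx):.3f}")
```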
NASA Astrophysics Data System (ADS)
Yoshizawa, Masasumi; Nakamura, Yuuta; Ishiguro, Masataka; Moriya, Tadashi
2007-07-01
In this paper, we describe a method of compensating for the attenuation of ultrasound caused by soft tissue in the transducer vibration method for the measurement of the acoustic impedance of in vivo bone. In the in vivo measurement, the acoustic impedance of bone is measured through soft tissue; therefore, the amplitude of the ultrasound reflected from the bone is attenuated. This attenuation causes an error of the order of -20 to -30% when the acoustic impedance is determined from the measured signals. To compensate for the attenuation, the attenuation coefficient and length of the soft tissue are measured by the transducer vibration method. In an experiment using a phantom, this method allows the measurement of the acoustic impedance typically with an error as small as -8 to 10%.
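In outline, the compensation scales the measured echo by the round-trip loss through the soft tissue layer before the impedance is computed; the dB-domain form and the numbers below are assumptions, not the paper's exact equations.

```python
def compensate_echo(a_measured, alpha_db_per_cm, tissue_len_cm):
    """Undo the two-way (there and back) attenuation of the soft tissue layer."""
    loss_db = 2.0 * alpha_db_per_cm * tissue_len_cm
    return a_measured * 10 ** (loss_db / 20.0)

def acoustic_impedance(reflect_coeff, z_tissue=1.63e6):
    """Bone impedance from the corrected reflection coefficient R.
    z_tissue (in Rayl) is an illustrative soft tissue value."""
    return z_tissue * (1 + reflect_coeff) / (1 - reflect_coeff)

a_true = compensate_echo(a_measured=0.42, alpha_db_per_cm=1.0, tissue_len_cm=1.5)
z_bone = acoustic_impedance(a_true)
```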
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed, and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse biased semiconductor junctions. At terrestrial level, these radiation-induced errors are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures, such as SRAM memory cells or flip-flops, the strike can invert the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions of combinational logic gates, they produce transient voltage spikes or glitches called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clock edges, which increases the likelihood of latching these transients. In older technology generations, the probability of errors in flip-flops due to latched SETs was much lower than that of direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower. The Intel Pentium II, for example, was fabricated in a 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and the contribution of soft errors due to latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12] [Maha-11] [Shiv02] [Bu97]. Therefore, there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that the power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Since then, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily increased the power density and the power consumption problem.
Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, all-pervasive sensor networks, and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for digital products with diverse applications, from computing to communication to healthcare. Designers are thus left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area, speed, and, more importantly, power. The cost of protecting combinational logic with power-hungry mitigation approaches can therefore disrupt the power budget significantly, so there is a strong need for techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
Predicting plasticity with soft vibrational modes: from dislocations to glasses.
Rottler, Jörg; Schoenholz, Samuel S; Liu, Andrea J
2014-04-01
We show that quasilocalized low-frequency modes in the vibrational spectrum can be used to construct soft spots, or regions vulnerable to rearrangement, which serve as a universal tool for the identification of flow defects in solids. We show that soft spots not only encode spatial information, via their location, but also directional information, via directors for particles within each soft spot. Single crystals with isolated dislocations exhibit low-frequency phonon modes that localize at the core, and their polarization pattern predicts the motion of atoms during elementary dislocation glide in two and three dimensions in exquisite detail. Even in polycrystals and disordered solids, we find that the directors associated with particles in soft spots are highly correlated with the direction of particle displacements in rearrangements.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes, using a multilevel concatenation scheme, for the frequency non-selective Rayleigh fading channel.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, C; Kumarasiri, A; Chetvertkov, M
2015-06-15
Purpose: Accurate deformable image registration (DIR) between CT and CBCT in the head and neck (H&N) is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, a structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last-fraction CBCT, where the anatomy difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only; Step 2) DIR with additional use of a structure VOI and a rigidity penalty; and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest gradient optimization, and 4-level multi-resolution). For Step 2, the rigidity penalty was applied to bony anatomy and the brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed with our in-house software and erroneous areas were corrected via a local registration using the level-set motion algorithm. Results: After Step 1, there was a considerable amount of registration error in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found to be deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and rigidity penalty. The remaining local soft tissue errors could be corrected using the interactive software tool; the estimated interactive correction time was approximately 5 minutes. Conclusion: DIR using only the image pixel intensity was vulnerable to noise and body truncation, and a corrective action was inevitable to achieve good quality registrations. We found the proposed three-step hybrid method efficient and practical for CT/CBCT registration in H&N. My department receives grant support from industrial partners: (a) Varian Medical Systems, Palo Alto, CA, and (b) Philips HealthCare, Best, Netherlands.
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High order modulation and demodulation technology can resolve the frequency requirement between wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high order modulation technology for a visual prosthesis, this work proposed a Reed-Solomon (RS) error correcting code (ECC) circuit on the basis of differential amplitude and phase shift keying (DAPSK) soft demodulation. First, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex to implement in hardware, an improved phase soft demodulation algorithm is put forward to reduce hardware complexity. Based on this new algorithm, an improved RS soft decoding method is proposed, in which the combination of the Chase algorithm and hard decoding algorithms is used to achieve soft decoding. To meet the requirements of an implantable visual prosthesis, a method to calculate symbol-level reliability as the multiplication of bit reliabilities is derived, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experimental results. In the MATLAB simulation, a biological channel attenuation model is included in the ECC circuit; the data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that when demodulation errors occur with the wireless coils 3 cm apart, the system can correct them; the greater the distance, the higher the BER. We then used a bit error rate analyzer to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the experimental results show that the RS ECC circuit has about an order of magnitude lower BER than the demodulation circuit at the same coil distance. Therefore, the RS ECC circuit provides more reliable communication in the system. The improved phase soft demodulation algorithm and soft decoding algorithm proposed in this paper enable data communication that is more reliable than other demodulation systems, and provide a significant reference for further study of visual prosthesis systems.
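A sketch of the reliability computation described above: symbol-level reliability as a product of bit reliabilities, which then selects the few least reliable symbols to perturb in Chase decoding. The array shapes and the number of perturbed symbols are assumptions.

```python
from itertools import product
import numpy as np

def symbol_reliability(bit_llrs):
    """Symbol reliability as the product of its bits' reliabilities
    (the multiplicative rule described above; no divisions required)."""
    return np.prod(np.abs(bit_llrs), axis=1)

def chase_test_patterns(n_symbols, reliabilities, t=2):
    """Perturb only the t least reliable symbols, shrinking the search space."""
    weakest = np.argsort(reliabilities)[:t]
    for flips in product([0, 1], repeat=t):
        pattern = np.zeros(n_symbols, dtype=int)
        pattern[weakest] = flips        # 1 marks a symbol to perturb/erase
        yield pattern                   # each pattern feeds one hard RS decode

llrs = np.random.default_rng(0).normal(size=(15, 4))   # 15 symbols x 4 bits
rel = symbol_reliability(llrs)
patterns = list(chase_test_patterns(15, rel, t=2))      # 2^2 = 4 vectors, not 2^15
```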
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and, consequently, soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that propagates an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
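Stripped of the compiler machinery, the placement idea can be mimicked by hand: let index arithmetic carry any incurred fault forward, then check the index against its analytically known final value at loop exit. A toy sketch of that placement, not PRESAGE's actual transformation:

```python
def checked_sum(arr):
    """Sum via an explicitly incremented address, with a loop-exit detector."""
    idx = 0
    total = 0.0
    for _ in range(len(arr)):
        total += arr[idx]
        idx += 1          # a fault in idx propagates to every later access...
    if idx != len(arr):   # ...so one cheap check at loop exit catches it,
        raise RuntimeError("address-generation fault detected at loop exit")
    return total          # rather than one silently wrong array access
```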
ERIC Educational Resources Information Center
Hallin, Anna Eva; Reuterskiöld, Christina
2017-01-01
Purpose: The first aim of this study was to investigate if Swedish-speaking school-age children with language impairment (LI) show specific morphosyntactic vulnerabilities in error detection. The second aim was to investigate the effects of lexical frequency on error detection, an overlooked aspect of previous error detection studies. Method:…
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process, in order to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then compensating through additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18(2^1 x 3^4). Then, based on the Z-shrinkage error from the IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results from the IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the closest shrinkage profile to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than in the other two cases. Moreover, actual IM experiments of the SM for casting soft multifocal CLs have been performed, and final wet CLs have been produced for both the original design and the new design. Results of the optical performance have verified the improvement of the compensated CL design, and the feasibility of this compensating method has been proven by the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of soft multifocal CLs.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Shu, L.; Kasami, T.
1985-01-01
A cascaded coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error rate. Some example schemes are evaluated; they seem to be quite suitable for satellite down-link error control.
Testing a Novel 3D Printed Radiographic Imaging Device for Use in Forensic Odontology.
Newcomb, Tara L; Bruhn, Ann M; Giles, Bridget; Garcia, Hector M; Diawara, Norou
2017-01-01
There are specific challenges related to forensic dental radiology, including difficulties in aligning X-ray equipment to the teeth of interest. Researchers used 3D printing to create a new device, the combined holding and aiming device (CHAD), to address the positioning limitations of current dental X-ray devices. Participants (N = 24) used the CHAD, soft dental wax, and a modified external aiming device (MEAD) to determine device preference, radiographer efficiency, and technique errors. Each participant exposed six X-rays per device, for a total of 432 X-rays scored. A significant difference was found at the 0.05 level between the three devices (p = 0.0015), with the MEAD having the fewest total errors and soft dental wax taking the least amount of time. Total errors were highest when participants used soft dental wax; both the MEAD and the CHAD performed better overall. Further research in forensic dental radiology and the use of holding devices is needed. © 2016 American Academy of Forensic Sciences.
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
NASA Astrophysics Data System (ADS)
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in a CMOS bulk 40 nm technology were subjected to the natural radiation background. The intensity of the atmospheric neutron flux was continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been very accurately extracted from these measurements, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level evidence, for the first time, a possible correlation between the neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
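Results from such counting experiments are typically quoted in FIT per Mbit (failures per 10^9 device-hours, normalized to one Mbit). The conversion is a one-liner; the example numbers below are invented, not the paper's measurements.

```python
def ser_fit_per_mbit(n_errors, n_bits, hours):
    """Soft error rate in FIT/Mbit from a real-time counting experiment."""
    mbit_hours = hours * (n_bits / 1e6)      # Mbit-hours of exposure
    return n_errors * 1e9 / mbit_hours       # failures per 1e9 Mbit-hours

# e.g. 70 upsets over 3 years on 7 Gbit of SRAM (illustrative numbers only)
rate = ser_fit_per_mbit(n_errors=70, n_bits=7e9, hours=3 * 365 * 24)
print(f"{rate:.0f} FIT/Mbit")
```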
Gutiérrez, J. J.; Russell, James K.
2016-01-01
Background. Cardiopulmonary resuscitation (CPR) feedback devices are being increasingly used. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate, based on two accelerometers, in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second at the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm on the foam and 1.7 mm on the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without a backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
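The core of the two-accelerometer idea can be sketched as spectral double integration over each analysis window, followed by subtraction of the back (mattress) displacement from the chest displacement; the window taper, high-pass cutoff, and sampling rate below are assumptions.

```python
import numpy as np

def displacement(acc, fs, f_min=0.5):
    """Displacement by double integration in the frequency domain:
    X(f) = -A(f) / (2*pi*f)^2, with low frequencies suppressed against drift."""
    n = len(acc)
    A = np.fft.rfft(acc * np.hanning(n))
    f = np.fft.rfftfreq(n, d=1 / fs)
    X = np.zeros_like(A)
    keep = f >= f_min
    X[keep] = -A[keep] / (2 * np.pi * f[keep]) ** 2
    return np.fft.irfft(X, n)

def compression_depth(acc_chest, acc_back, fs=250):
    """Actual sternal-spinal displacement: chest motion minus mattress motion."""
    return displacement(acc_chest, fs) - displacement(acc_back, fs)
```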
Topological Vulnerability Analysis
NASA Astrophysics Data System (ADS)
Jajodia, Sushil; Noel, Steven
Traditionally, network administrators rely on labor-intensive processes for tracking network configurations and vulnerabilities. This requires a great deal of expertise, and is error prone because of the complexity of networks and associated security data. The interdependencies of network vulnerabilities make traditional point-wise vulnerability analysis inadequate. We describe a Topological Vulnerability Analysis (TVA) approach that analyzes vulnerability dependencies and shows all possible attack paths into a network. From models of the network vulnerabilities and potential attacker exploits, we compute attack graphs that convey the impact of individual and combined vulnerabilities on overall security. TVA finds potential paths of vulnerability through a network, showing exactly how attackers may penetrate a network. From this, we identify key vulnerabilities and provide strategies for protection of critical network assets.
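The attack-graph computation at the heart of TVA can be prototyped with a directed graph whose nodes are (host, privilege) states and whose edges are exploits; the hosts, privileges, and exploit labels below are hypothetical.

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge(("internet", "none"), ("web01", "user"), exploit="web CVE (hypothetical)")
g.add_edge(("web01", "user"), ("web01", "root"), exploit="local privesc (hypothetical)")
g.add_edge(("web01", "root"), ("db01", "root"), exploit="reused creds (hypothetical)")

start, asset = ("internet", "none"), ("db01", "root")
for path in nx.all_simple_paths(g, start, asset):
    print(" -> ".join(f"{host}:{priv}" for host, priv in path))

# edges that appear on every attack path are the key vulnerabilities to fix first
```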
Resnick, C M; Dang, R R; Glick, S J; Padwa, B L
2017-03-01
Three-dimensional (3D) soft tissue prediction is replacing two-dimensional analysis in planning for orthognathic surgery. The accuracy of different computational models to predict soft tissue changes in 3D, however, is unclear. A retrospective pilot study was implemented to assess the accuracy of Dolphin 3D software in making these predictions. Seven patients who had a single-segment Le Fort I osteotomy and had preoperative (T0) and >6-month postoperative (T1) cone beam computed tomography (CBCT) scans and 3D photographs were included. The actual skeletal change was determined by subtracting the T0 from the T1 CBCT. 3D photographs were overlaid onto the T0 CBCT and virtual skeletal movements equivalent to the achieved repositioning were applied using the Dolphin 3D planner. A 3D soft tissue prediction (TP) was generated and differences between the TP and T1 images (error) were measured at 14 points and at the nasolabial angle. A mean linear prediction error of 2.91±2.16mm was found. The mean error at the nasolabial angle was 8.1±5.6°. In conclusion, the ability to accurately predict 3D soft tissue changes after Le Fort I osteotomy using Dolphin 3D software is limited. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Markvicka, Eric J; Bartlett, Michael D; Huang, Xiaonan; Majidi, Carmel
2018-07-01
Large-area stretchable electronics are critical for progress in wearable computing, soft robotics and inflatable structures. Recent efforts have focused on engineering electronics from soft materials: elastomers, polyelectrolyte gels and liquid metal. While these materials enable elastic compliance and deformability, they are vulnerable to tearing, puncture and other mechanical damage modes that cause electrical failure. Here, we introduce a material architecture for soft and highly deformable circuit interconnects that are electromechanically stable under typical loading conditions, while exhibiting uncompromising resilience to mechanical damage. The material is composed of liquid metal droplets suspended in a soft elastomer; when damaged, the droplets rupture to form new connections with neighbours and re-route electrical signals without interruption. Since self-healing occurs spontaneously, these materials do not require manual repair or external heat. We demonstrate this unprecedented electronic robustness in a self-repairing digital counter and a self-healing soft robotic quadruped that continue to function after significant damage.
NASA Astrophysics Data System (ADS)
Ghosh, Rahul; Debbarma, Rama
2017-06-01
Setback structures are highly vulnerable during earthquakes due to their vertical geometric and mass irregularity, but the vulnerability becomes higher if the structures also have stiffness irregularity in elevation. The risk factor of such structures may increase further if the structure rests on sloping ground. In this paper, an attempt has been made to evaluate the seismic performance of setback structures resting on plain ground as well as on the slope of a hill, with a soft storey configuration. The analysis has been performed using three individual methods, the equivalent static force method, the response spectrum method and the time history method, and extreme responses have been recorded for the open-ground-storey setback building. To mitigate the soft storey effect and these extreme responses, three individual mitigation techniques have been adopted, and the best solution among the three is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83 /50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0 /50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8 /50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter; the error incidences were 0, 1, and 0 /600 Gy at the isocenter for the newer model, and 0, 1, and 6 /600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently with 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high-energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
Vulnerability of ground water to atrazine leaching in Kent County, Michigan
Holtschlag, D.J.; Luukkonen, C.L.
1997-01-01
A steady-state model of pesticide leaching through the unsaturated zone was used with readily available hydrologic, lithologic, and pesticide characteristics to estimate the vulnerability of the near-surface aquifer to atrazine contamination from non-point sources in Kent County, Michigan. The model-computed fraction of atrazine remaining at the water table, RM, was used as the vulnerability criterion; time of travel to the water table also was computed. Model results indicate that the average fraction of atrazine remaining at the water table was 0.039 percent; the fraction ranged from 0 to 3.6 percent. Time of travel of atrazine from the soil surface to the water table averaged 17.7 years and ranged from 2.2 to 118 years. Three maps were generated to present three views of the same atrazine vulnerability characteristics using different metrics (nonlinear transformations of the computed fractions remaining); the metrics were chosen because of the highly right-skewed distribution of computed fractions. The first metric, rm = RM^λ with λ = 0.0625, depicts a relatively uniform distribution of vulnerability across the county, with localized areas of high and low vulnerability visible. The second metric, with λ = 0.5, depicts about one-half the county at low vulnerability, with discontinuous patterns of high vulnerability evident. In the third metric, with λ = 1.0 (RM itself), more than 95 percent of the county appears to have low vulnerability; small, distinct areas of high vulnerability are present. Aquifer vulnerability estimates in the RM metric were used with a steady-state, uniform atrazine application rate to compute a potential concentration of atrazine in leachate reaching the water table. The average estimated potential atrazine concentration in leachate at the water table was 0.16 μg/L (micrograms per liter) in the model area; estimated potential concentrations ranged from 0 to 26 μg/L. About 2 percent of the model area had estimated potential atrazine concentrations in leachate at the water table that exceeded the USEPA (U.S. Environmental Protection Agency) maximum contaminant level of 3 μg/L. Uncertainty analyses were used to assess the effects of parameter uncertainty and spatial interpolation error on the variability of the estimated fractions of atrazine remaining at the water table. Results of Monte Carlo simulations indicate that parameter uncertainty is associated with a standard error of 0.0875 in the computed fractions (in the rm metric). Results of kriging analysis indicate that errors in spatial interpolation are associated with a standard error of 0.146 (in the rm metric). Thus, uncertainty in fractions remaining is primarily associated with spatial interpolation error, which can be reduced by increasing the density of points where the leaching model is applied. A sensitivity analysis indicated which of 13 hydrologic, lithologic, and pesticide characteristics were influential in determining fractions of atrazine remaining at the water table. Results indicate that fractions remaining are most sensitive to unit changes in pesticide half life and in organic-carbon content in soils and unweathered rocks, and least sensitive to infiltration rates. The leaching model applied in this report provides an estimate of the vulnerability of the near-surface aquifer in Kent County to contamination by atrazine. The vulnerability estimate is related to water-quality criteria developed by the USEPA to help assess potential risks from atrazine to the near-surface aquifer.
However, atrazine accounts for only 28 percent of the herbicide use in the county; additional potential for contamination exists from other pesticides and pesticide metabolites. Therefore, additional work is needed to develop a comprehensive understanding of the relative risks associated with specific pesticides. The modeling approach described in this report provides a technique for estimating relative vulnerabilities to specific pesticides and for helping to assess potential risks.
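The three map metrics above are simply power transformations of the same fraction remaining; a few lines make visible how a smaller exponent decompresses the right-skewed scale (the RM values are illustrative, not the study's data):

```python
import numpy as np

rm_fraction = np.array([0.0, 1e-8, 1e-5, 3.9e-4, 3.6e-2])  # RM, as fractions
for lam in (0.0625, 0.5, 1.0):
    # lam = 1.0 reproduces RM itself; lam = 0.0625 spreads the tiny values out
    print(lam, np.round(rm_fraction ** lam, 4))
```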
Ghorbani, Mahdi; Salahshour, Fateme; Haghparast, Abbas; Knaup, Courtney
2014-01-01
Purpose: The aim of this study is to compare the dose in various soft tissues in brachytherapy with photon-emitting sources. Material and methods: 103Pd, 125I, 169Yb, and 192Ir brachytherapy sources were simulated with the MCNPX Monte Carlo code, and their dose rate constants and radial dose functions were compared with published data. A spherical phantom with a 50 cm radius was simulated, and the dose at various radial distances in adipose tissue, breast tissue, 4-component soft tissue, brain (grey/white matter), muscle (skeletal), lung tissue, blood (whole), 9-component soft tissue, and water was calculated. The absolute dose and the relative dose difference with respect to 9-component soft tissue were obtained for the various materials, sources, and distances. Results: There was good agreement between the dosimetric parameters of the sources and the published data. Adipose tissue, breast tissue, 4-component soft tissue, and water showed the greatest difference in dose relative to the dose to the 9-component soft tissue; the other soft tissues showed smaller dose differences. The dose difference was also higher for the 103Pd source than for the 125I, 169Yb, and 192Ir sources. Furthermore, greater distances from the source had higher relative dose differences, an effect attributable to the change in photon spectrum (softening or hardening) as photons traverse the phantom material. Conclusions: Ignoring soft tissue characteristics (density, composition, etc.) in treatment planning systems introduces a significant error in dose delivery to the patient in brachytherapy with photon sources. The error depends on the type of soft tissue, the brachytherapy source, and the distance from the source. PMID:24790623
Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.
Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W
2016-12-01
The aim of this study was to evaluate the accuracy of novel software, CMF-preCADS, for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. Then, the predicted model was matched with the postoperative 3D stereophotograph for quantification of the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42±1.56mm for all cases. The accuracy of the prediction (mean absolute error ≤2mm) was 87%. In the subjective assessment, the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. The CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future. Copyright © 2016. Published by Elsevier Ltd.
A device for characterising the mechanical properties of the plantar soft tissue of the foot.
Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C
2015-11-01
The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply a vertical compression to the plantar soft tissue whilst measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was evaluated for: accuracy of motion compared to input profiles; validity of the response measured for standard materials in compression; variability of force and displacement measures over consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement displayed an average error of 0.04 mm (range of 15 mm), and static load displayed an average error of 0.15 N (range of 250 N). Validation tests showed acceptable agreement with a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE < 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable for bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). The soft tissue response was repeatable within trials (CMC > 0.98) and between trials (CMC > 0.70). The device has been shown to be capable of implementing complex loading patterns similar to gait, and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors arising from previously reported pediatric hydration effects. The phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy attenuation (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 months revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors, based on theoretical calculations, is small and may not be of clinical or research significance.
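In the two-compartment model, %fat follows from interpolating a pixel's R value between pure-fat and pure-lean reference values, which is why a hydration-shifted lean R value biases the estimate; the reference constants below are illustrative, not the study's calibration.

```python
def percent_fat(r_pixel, r_fat=1.21, r_lean=1.40):
    """Two-compartment DXA soft tissue model (reference R values assumed)."""
    return 100.0 * (r_lean - r_pixel) / (r_lean - r_fat)

# a hydration-inflated lean compartment lowers the effective lean R value,
# shifting the %fat estimate even though the tissue itself is unchanged
print(percent_fat(1.35))                # with the adult lean reference
print(percent_fat(1.35, r_lean=1.39))   # with a hydration-shifted reference
```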
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
Health, Maintenance, and Recovery of Soft Tissues around Implants.
Wang, Yulan; Zhang, Yufeng; Miron, Richard J
2016-06-01
The health of peri-implant soft tissues is one of the most important aspects of osseointegration, necessary for the long-term survival of dental implants. This article reviews the process of soft tissue healing around osseointegrated implants and discusses the maintenance requirements as well as the possible shortcomings of peri-implant soft tissue integration. A literature search on the processes involved in osseointegration, soft tissue healing, and currently available treatment modalities was performed, and a brief description of each process is provided. The peri-implant interface has been shown to be less effective than natural teeth in resisting bacterial invasion, because gingival fiber alignment and reduced vascular supply make it more vulnerable to subsequent peri-implant disease and future bone loss around implants. We summarize common procedures that have been shown to be effective in preventing the progression of peri-implant disease, as well as clinical techniques used to regenerate soft tissues with bone loss in advanced cases of peri-implantitis. Due to the differences between the peri-implant interface and natural teeth, clinicians and patients should pay more attention to the maintenance and recovery of soft tissues around implants. © 2015 Wiley Periodicals, Inc.
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is complex and non-linear, and many parameters are not easy to measure directly on line, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of the fed-batch fermentation process. v-SVR is a novel type of learning machine that can control the fitting accuracy and prediction error by adjusting the parameter v. An on-line training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and better generalization with an appropriate v.
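scikit-learn's NuSVR exposes this same ν parameter; a minimal (batch, not on-line) soft-sensor sketch follows, with placeholder data and assumed feature meanings.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR

# X: variables measurable on line (e.g. temperature, feed rate, DO, pH)
# y: the hard-to-measure quality variable the soft sensor must infer
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = X @ np.array([0.4, -0.2, 0.7, 0.1]) + 0.05 * rng.normal(size=200)

# nu trades off the fraction of support vectors against the fitting error
model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=10.0, kernel="rbf"))
model.fit(X[:150], y[:150])
rmse = np.sqrt(np.mean((model.predict(X[150:]) - y[150:]) ** 2))
```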
NASA Astrophysics Data System (ADS)
Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.
2017-02-01
The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs that can serve the needs of safety-critical applications requiring both high performance and high reliability. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed, including: (1) soft decision Viterbi decoding; (2) node synchronization for the soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) a convolutional encoder; (5) programs to investigate new convolutional codes; (6) a pseudo-noise sequence generator; (7) a soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov chain channel modeling; (10) a percent-complete indicator shown when a program is executed; (11) header documentation; and (12) a help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links, including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on the effects of errors on RICE-decompressed data. The Markov chain modeling programs allow channels with memory to be simulated; memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders, and many other satellite system processes. Besides the development of the simulator, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI sources with several duty cycles for the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers, except possibly the one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
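Channels with memory of the kind CLEAN models are commonly represented by a two-state Gilbert-Elliott Markov chain; the sketch below and its transition and error probabilities are illustrative, not CLEAN's actual parameterization.

```python
import numpy as np

def gilbert_elliott(n, p_g2b=0.01, p_b2g=0.2, e_good=1e-6, e_bad=1e-2, seed=0):
    """Two-state Markov channel: bit errors cluster while in the Bad state."""
    rng = np.random.default_rng(seed)
    bad = False
    errors = np.zeros(n, dtype=bool)
    for i in range(n):
        # stay Bad with prob 1 - p_b2g; leave Good with prob p_g2b
        bad = (rng.random() > p_b2g) if bad else (rng.random() < p_g2b)
        errors[i] = rng.random() < (e_bad if bad else e_good)
    return errors

burst_mask = gilbert_elliott(1_000_000)   # XOR with a bit stream to corrupt it
```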
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattarivand, Mike; Summers, Clare; Robar, James
Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with a 6-degree-of-freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3 mm shift or 3° rotation (in any of the 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4mm. Using the 3mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients, who require soft tissue image registration from CBCT. The spine can be a poor surrogate for lung SBRT patient alignment, even for proximal tumor volumes.
Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.
Felt, Wyatt; Chin, Khai Yi; Remy, C David
2017-09-01
This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
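As a sketch of the closed-loop idea (a flexible contraction sensor feeding a simple controller), the following toy PI loop drives a hypothetical first-order actuator model toward a target contraction; the gains, plant dynamics, and units are assumptions for illustration, not the paper's controller:

```python
def pi_step(target, measured, integral, kp=2.0, ki=0.5, dt=0.01):
    """One PI update on the contraction error; returns (command, integral)."""
    error = target - measured
    integral += error * dt
    return kp * error + ki * integral, integral

# Toy first-order actuator standing in for a sensed McKibben muscle.
contraction, integral = 0.0, 0.0
for step in range(5000):
    u, integral = pi_step(30.0, contraction, integral)   # feedback from the sensor
    contraction += 0.01 * (u - contraction)              # hypothetical plant dynamics
print(f"steady-state contraction: {contraction:.2f} mm (target 30 mm)")
```

The integral term is what removes the steady-state offset that a proportional-only controller would leave when the actuator connections are elastic.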
Adaptive and Resilient Soft Tensegrity Robots.
Rieffel, John; Mouret, Jean-Baptiste
2018-04-17
Living organisms intertwine soft (e.g., muscle) and hard (e.g., bones) materials, giving them an intrinsic flexibility and resiliency often lacking in conventional rigid robots. The emerging field of soft robotics seeks to harness these same properties to create resilient machines. The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control-and up until now, the vast majority of gaits for soft robots have been hand-designed through empirical trial-and-error. This article describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage. Enabling this is the use of a machine learning algorithm able to discover effective gaits with a minimal number of physical trials. These results lend further credence to soft-robotic approaches that seek to harness the interaction of complex material dynamics to generate a wealth of dynamical behaviors.
Error and attack tolerance of complex networks
NASA Astrophysics Data System (ADS)
Albert, Réka; Jeong, Hawoong; Barabási, Albert-László
2000-07-01
Many complex systems display a surprising degree of tolerance against errors. For example, relatively simple organisms grow, persist and reproduce despite drastic pharmaceutical or environmental interventions, an error tolerance attributed to the robustness of the underlying metabolic network. Complex communication networks display a surprising degree of robustness: although key components regularly malfunction, local failures rarely lead to the loss of the global information-carrying ability of the network. The stability of these and other complex systems is often attributed to the redundant wiring of the functional web defined by the systems' components. Here we demonstrate that error tolerance is not shared by all redundant systems: it is displayed only by a class of inhomogeneously wired networks, called scale-free networks, which include the World-Wide Web, the Internet, social networks and cells. We find that such networks display an unexpected degree of robustness, the ability of their nodes to communicate being unaffected even by unrealistically high failure rates. However, error tolerance comes at a high price in that these networks are extremely vulnerable to attacks (that is, to the selection and removal of a few nodes that play a vital role in maintaining the network's connectivity). Such error tolerance and attack vulnerability are generic properties of communication networks.
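The random-failure-versus-targeted-attack contrast is easy to reproduce on a synthetic scale-free graph; a minimal sketch with networkx, in which the graph size, removal fraction, and degree-based attack heuristic are illustrative choices:

```python
import random
import networkx as nx

def giant_fraction(G):
    """Fraction of remaining nodes lying in the largest connected component."""
    return max(len(c) for c in nx.connected_components(G)) / G.number_of_nodes()

def remove_and_measure(G, fraction=0.05, targeted=False, seed=1):
    G = G.copy()
    n = int(fraction * G.number_of_nodes())
    if targeted:   # attack: delete the most connected nodes first
        victims = [v for v, _ in sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:n]]
    else:          # error: delete nodes at random
        victims = random.Random(seed).sample(list(G.nodes), n)
    G.remove_nodes_from(victims)
    return giant_fraction(G)

G = nx.barabasi_albert_graph(2000, 2, seed=0)   # synthetic scale-free network
print("after random failures:", remove_and_measure(G, targeted=False))
print("after targeted attack:", remove_and_measure(G, targeted=True))
```

Removing the same number of nodes fragments the network far more when the highest-degree hubs are selected, which is the asymmetry the abstract describes.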
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background: Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials: The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results: The average overall system error in the phantom study was 2.5 mm (STD=1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD=0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus having the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD=0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions: The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
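The "assuming orthogonal error components" step is a root-sum-of-squares decomposition; a two-line check of the reported figures:

```python
import math

overall, before_insertion = 2.5, 1.3     # mm, as reported in the phantom study
due_to_insertion = math.sqrt(overall**2 - before_insertion**2)
print(f"due-to-insertion error ~ {due_to_insertion:.2f} mm")
# ~2.14 mm; the abstract's 2.13 mm presumably reflects rounding of the inputs
```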
Accuracy study of a robotic system for MRI-guided prostate needle placement.
Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian
2013-09-01
Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of a MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
Neural and computational processes underlying dynamic changes in self-esteem
Rutledge, Robb B; Moutoussis, Michael; Dolan, Raymond J
2017-01-01
Self-esteem is shaped by the appraisals we receive from others. Here, we characterize neural and computational mechanisms underlying this form of social influence. We introduce a computational model that captures fluctuations in self-esteem engendered by prediction errors that quantify the difference between expected and received social feedback. Using functional MRI, we show these social prediction errors correlate with activity in ventral striatum/subgenual anterior cingulate cortex, while updates in self-esteem resulting from these errors co-varied with activity in ventromedial prefrontal cortex (vmPFC). We linked computational parameters to psychiatric symptoms using canonical correlation analysis to identify an ‘interpersonal vulnerability’ dimension. Vulnerability modulated the expression of prediction error responses in anterior insula and insula-vmPFC connectivity during self-esteem updates. Our findings indicate that updating of self-evaluative beliefs relies on learning mechanisms akin to those used in learning about others. Enhanced insula-vmPFC connectivity during updating of those beliefs may represent a marker for psychiatric vulnerability. PMID:29061228
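A minimal delta-rule sketch of the kind of model described, in which a social prediction error both updates self-esteem and adapts future expectations; the weights and the binary feedback coding are illustrative assumptions, not the paper's fitted parameters:

```python
def update_self_esteem(esteem, expected, received, w=0.4, lr=0.3):
    """Delta-rule sketch: a social prediction error nudges self-esteem."""
    prediction_error = received - expected      # expected vs received feedback
    esteem += w * prediction_error              # esteem tracks the surprise
    expected += lr * prediction_error           # expectations also adapt
    return esteem, expected

esteem, expected = 0.0, 0.5
for feedback in [1, 1, 0, 1, 0, 0]:             # 1 = approval, 0 = rejection
    esteem, expected = update_self_esteem(esteem, expected, feedback)
    print(f"feedback={feedback}  expectation={expected:.2f}  esteem={esteem:+.2f}")
```

Note that the same approval moves self-esteem less once it has come to be expected, which is the signature of prediction-error learning.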
Checklists and Monitoring in the Cockpit: Why Crucial Defenses Sometimes Fail
NASA Technical Reports Server (NTRS)
Dismukes, R. Key; Berman, Ben
2010-01-01
Checklists and monitoring are two essential defenses against equipment failures and pilot errors. Problems with checklist use and pilots' failures to monitor adequately have a long history in aviation accidents. This study was conducted to explore why checklists and monitoring sometimes fail to catch errors and equipment malfunctions as intended. Flight crew procedures were observed from the cockpit jumpseat during normal airline operations in order to: 1) collect data on monitoring and checklist use in cockpit operations in typical flight conditions; 2) provide a plausible cognitive account of why deviations from formal checklist and monitoring procedures sometimes occur; 3) lay a foundation for identifying ways to reduce vulnerability to inadvertent checklist and monitoring errors; 4) compare checklist and monitoring execution in normal flights with performance issues uncovered in accident investigations; and 5) suggest ways to improve the effectiveness of checklists and monitoring. Cognitive explanations for deviations from prescribed procedures are provided, along with suggestions for countermeasures against vulnerability to error.
Understanding Risk Tolerance and Building an Effective Safety Culture
NASA Technical Reports Server (NTRS)
Loyd, David
2018-01-01
Estimates suggest that 65-90 percent of catastrophic mishaps are due to human error; at NASA, human-factors-related causes are estimated to account for approximately 75 percent of mishaps. As much as we'd like to error-proof our work environment, even the most automated and complex technical endeavors require human interaction... and are vulnerable to human frailty. Industry and government are focusing not only on human factors integration into hazardous work environments, but also on practical approaches to cultivating a strong Safety Culture that diminishes risk. Industry and government organizations have recognized the value of monitoring leading indicators to identify potential risk vulnerabilities. NASA has adapted this approach to assess risk controls associated with hazardous, critical, and complex facilities. NASA's facility risk assessments integrate commercial loss control, OSHA (Occupational Safety and Health Administration) Process Safety, API (American Petroleum Institute) Performance Indicator Standard, and NASA Operational Readiness Inspection concepts to identify risk control vulnerabilities.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
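The hard-versus-soft-decision gain can be illustrated without implementing the Golay (24,12) code itself; a sketch using a rate-1/3 repetition code over an additive white Gaussian noise channel (the signal amplitude and bit count are arbitrary choices) shows soft combining outperforming majority voting on sliced bits:

```python
import random

def simulate(n_bits=20000, amp=0.8, seed=2):
    """Compare hard-decision (majority vote) and soft-decision (sum) decoding."""
    rng = random.Random(seed)
    hard_err = soft_err = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        tx = [amp if bit else -amp] * 3                  # rate-1/3 repetition code
        rx = [s + rng.gauss(0, 1) for s in tx]           # AWGN channel
        hard = sum(r > 0 for r in rx) >= 2               # vote on sliced bits
        soft = sum(rx) > 0                               # decide on summed soft values
        hard_err += hard != bool(bit)
        soft_err += soft != bool(bit)
    return hard_err / n_bits, soft_err / n_bits

h, s = simulate()
print(f"hard-decision BER: {h:.4f}   soft-decision BER: {s:.4f}")
```

Retaining the analog reliability of each sample before deciding is the same mechanism that makes the soft-decision Golay decoder the best performer in the thesis.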
Khurana, Harpreet Kaur; Cho, Il Kyu; Shim, Jae Yong; Li, Qing X; Jun, Soojin
2008-02-13
Aspartame is a low-calorie sweetener commonly used in soft drinks; however, the maximum usage dose is limited by the U.S. Food and Drug Administration. Fourier transform infrared (FTIR) spectroscopy with an attenuated total reflectance sampling accessory and partial least-squares regression (PLS) was used for rapid determination of aspartame in soft drinks. On the basis of spectral characterization, the highest R² value, and the lowest PRESS value, the spectral region between 1600 and 1900 cm⁻¹ was selected for quantitative estimation of aspartame. The potential of FTIR spectroscopy for aspartame quantification was examined and validated by the conventional HPLC method. Using the FTIR method, aspartame contents in four selected carbonated diet soft drinks were found to average from 0.43 to 0.50 mg/mL, with prediction errors ranging from 2.4 to 5.7% when compared with HPLC measurements. The developed method also showed a high degree of accuracy because real samples were used for calibration, thus minimizing potential interference errors. The FTIR method developed can be suitably used for routine quality control analysis of aspartame in the beverage-manufacturing sector.
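A minimal sketch of the PLS workflow on synthetic spectra, assuming a hypothetical absorption band inside the selected 1600-1900 cm⁻¹ region; scikit-learn's PLSRegression stands in for the chemometrics software, and the data are simulated, not the study's measurements:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wavenumbers = np.linspace(1600, 1900, 120)           # cm^-1, the selected region
conc = rng.uniform(0.3, 0.6, size=40)                # synthetic aspartame (mg/mL)
band = np.exp(-((wavenumbers - 1737) / 25) ** 2)     # hypothetical absorption band
X = conc[:, None] * band[None, :] + rng.normal(0, 0.01, (40, 120))

pls = PLSRegression(n_components=3).fit(X[:30], conc[:30])   # calibration set
pred = pls.predict(X[30:]).ravel()                           # validation set
rel_err = np.abs(pred - conc[30:]) / conc[30:] * 100
print(f"mean relative prediction error: {rel_err.mean():.1f}%")
```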
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits a net coding gain of 7.06 and 9.62 dB for post-forward-error-correction bit error rates of 10⁻⁷ and 10⁻¹², respectively, for the long block length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC), in conjunction with equalization schemes. At a bit rate of 15 Gb/s, using RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) with a 6-tap finite impulse response (FIR) equalizer will result in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
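A 2-bit flash-type soft-decision front-end effectively maps each received amplitude to one of four confidence levels for the LDPC decoder; a minimal sketch, where the threshold value is an assumption:

```python
def two_bit_soft_decision(sample, t=0.5):
    """Map an analog sample to one of four confidence levels (2-bit flash ADC)."""
    if sample > t:
        return +2        # strong 1
    if sample > 0:
        return +1        # weak 1
    if sample > -t:
        return -1        # weak 0
    return -2            # strong 0

for s in (-0.9, -0.2, 0.1, 1.3):
    print(f"sample {s:+.1f} -> soft level {two_bit_soft_decision(s):+d}")
```

The two extra "weak" levels are what carry the reliability information that gives soft-decision LDPC decoding its coding-gain advantage over a single-threshold slicer.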
Prefrontal vulnerabilities and whole brain connectivity in aging and depression.
Lamar, Melissa; Charlton, Rebecca A; Ajilore, Olusola; Zhang, Aifeng; Yang, Shaolin; Barrick, Thomas R; Rhodes, Emma; Kumar, Anand
2013-07-01
Studies exploring the underpinnings of age-related neurodegeneration suggest fronto-limbic alterations that are increasingly vulnerable in the presence of disease including late life depression. Less work has assessed the impact of this specific vulnerability on widespread brain circuitry. Seventy-nine older adults (healthy controls=45; late life depression=34) completed translational tasks shown in non-human primates to rely on fronto-limbic networks involving dorsolateral (Self-Ordered Pointing Task) or orbitofrontal (Object Alternation Task) cortices. A sub-sample of participants also completed diffusion tensor imaging for white matter tract quantification (uncinate and cingulum bundle; n=58) and whole brain tract-based spatial statistics (n=62). Despite task associations to specific white matter tracts across both groups, only healthy controls demonstrated significant correlations between widespread tract integrity and cognition. Thus, increasing Object Alternation Task errors were associated with decreasing fractional anisotropy in the uncinate in late life depression; however, only in healthy controls was the uncinate incorporated into a larger network of white matter vulnerability associating fractional anisotropy with Object Alternation Task errors using whole brain tract-based spatial statistics. It appears that the whole brain impact of specific fronto-limbic vulnerabilities in aging may be eclipsed in the presence of disease-specific neuropathology like that seen in late life depression. Copyright © 2013 Elsevier Ltd. All rights reserved.
Error-Based Design Space Windowing
NASA Technical Reports Server (NTRS)
Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman
2002-01-01
Windowing of design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to the wrong region to zoom in on. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.
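A one-dimensional illustration of the error-based idea: fit a low-order RS, estimate the pointwise squared error, and keep only the high-response region that the RS actually models well; the test function, percentile cutoff, and median cutoff are illustrative choices, not the paper's eigenvalue measure:

```python
import numpy as np

def true_response(x):                      # stand-in for an expensive simulation
    return np.sin(3 * x) + 0.5 * x

x = np.linspace(0, 2, 25)
y = true_response(x)
coef = np.polyfit(x, y, 2)                 # low-order (quadratic) response surface
rs = np.polyval(coef, x)
sq_err = (rs - y) ** 2                     # pointwise squared-error measure

# Window on high predicted response, rejecting points the RS models poorly.
candidate = (rs >= np.percentile(rs, 60)) & (sq_err <= np.median(sq_err))
if candidate.any():
    print(f"window: x in [{x[candidate].min():.2f}, {x[candidate].max():.2f}]")
else:
    print("no admissible window; refit with a smaller region or higher order")
```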
General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae).
Camacho, Ernesto Robayo; Chong, Juang-Horng
We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance.
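The degree-day models mentioned above accumulate heat units over a base temperature to time crawler scouting and treatment; a minimal simple-average sketch, in which the base temperature, daily temperatures, and any emergence threshold are hypothetical:

```python
def degree_days(t_min, t_max, t_base=10.0):
    """Simple-average daily degree-day contribution above a base temperature."""
    return max(0.0, (t_min + t_max) / 2 - t_base)

daily = [(8, 16), (10, 20), (12, 24), (13, 25), (15, 27)]   # (min, max) in deg C
total = 0.0
for t_min, t_max in daily:
    total += degree_days(t_min, t_max)
print(f"accumulated degree-days: {total:.1f}")   # compare with a crawler threshold
```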
Estimating patient-specific soft-tissue properties in a TKA knee.
Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A
2016-03-01
Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
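A minimal sketch of the parameter-optimization idea: fit a ligament stiffness so that a toy knee laxity model reproduces measured torque-rotation data; the one-parameter linear model and synthetic measurements are stand-ins for the paper's navigated experiment and multibody simulation:

```python
import numpy as np
from scipy.optimize import least_squares

applied_torque = np.linspace(1, 10, 10)    # N*m, stability-device loads
measured_angle = applied_torque / 4.0 + np.random.default_rng(1).normal(0, 0.1, 10)

def simulate_laxity(stiffness, torque):
    """Toy knee model: rotation (deg) = torque / ligament stiffness."""
    return torque / stiffness

def residuals(params):
    return simulate_laxity(params[0], applied_torque) - measured_angle

fit = least_squares(residuals, x0=[1.0])
rmse = np.sqrt(np.mean(residuals(fit.x) ** 2))
print(f"estimated stiffness: {fit.x[0]:.2f} N*m/deg, RMS error: {rmse:.2f} deg")
```

Replacing generic literature properties with values fitted this way is what dropped the kinematic RMS error from 8.4° to 3.5° in the study.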
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
Aiming at the influence of circular grating dividing error and of rolling-wheel eccentricity and surface shape errors, this paper provides a correction method based on the rolling wheel: a composite error model including all of the above influence factors is derived and then used to correct the non-circular angle measurement error of the rolling wheel. Software simulation and experiments indicate that the composite error correction method can improve diameter measurement accuracy under rolling-wheel theory. It has wide application prospects for measurement accuracies better than 5 μm/m.
Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry
2014-01-01
Command File Errors (CFE) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, composed of the hardware, software, and procedures used for the generation, verification & validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system, and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.
Cutti, Andrea Giovanni; Cappello, Angelo; Davalli, Angelo
2006-01-01
Soft tissue artefact is the dominant error source for upper extremity motion analyses that use skin-mounted markers, especially in humeral axial rotation. A new in vivo technique is presented that is based on the definition of a humerus bone-embedded frame that is almost "artefact free" but influenced by the elbow orientation in the measurement of humeral axial rotation, and on an algorithm designed to solve this kinematic coupling. The technique was validated in vivo in a study of six healthy subjects who performed five arm-movement tasks. For each task the similarity between a gold standard pattern and the axial rotation pattern before and after the application of the compensation algorithm was evaluated in terms of explained variance, gain, phase and offset. In addition, the root mean square error between the patterns was used as a global similarity estimator. After the application, for four out of five tasks, patterns were highly correlated, in phase, with almost equal gain and limited offset; the root mean square error decreased from the original 9 degrees to 3 degrees. The proposed technique appears to help compensate for the soft tissue artefact affecting axial rotation. A further development is proposed to make the technique effective also for the pure prono-supination task.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the ray-tracing model with a half-power diameter of 1'.2 is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the ray-tracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V
2009-04-01
We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study, 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones, that had been previously collected at our clinic were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded into human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings, including standard soft tissue windows (window width 320 and window level 50), standard bone windows (window width 1120 and window level 300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study, 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium-based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and the paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows, with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification, were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm, p = 0.003, <0.001 and 0.0002, respectively). When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi the measurement error was observed only in standard soft tissue window settings. In vivo, 4.0x magnified bone windows were superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically distinct (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurement in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro, the measurement error in calcium oxalate stones was greater than that in uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.
Characterization of Errors Inherent in System EMP Vulnerability Assessment Programs,
1980-10-01
Systems assessed include the Patriot system, the B-1 aircraft, the E-3A airborne warning and control system aircraft, the PRC-77 radio, the Lance missile system, and the Safeguard ABM system. ...carefully, or the offset will create large frequency-domain error. Frequency-tying, too, can improve f-domain data. Of the various recording systems studied...
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly-reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability on commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level of several scaled memory products to assess the performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main population breakdown with an increasing failure rate. Retention time soft error rates are calculated and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of the memory data retention time in FIT/Gb and FIT/cm² for several scaled SDRAM generations is presented, revealing a power-law relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
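A sketch of the two-population retention-time picture: a small weak-bit population with Weibull slope β = 1 combined with a main wearout population with β > 1; all parameters below are illustrative assumptions, not the measured SDRAM values:

```python
import numpy as np

def weibull_cdf(t, eta, beta):
    """Cumulative failure fraction of a Weibull population."""
    return 1.0 - np.exp(-(t / eta) ** beta)

t = np.logspace(0, 4, 5)                      # stress time, arbitrary units
weak_frac = 1e-3                              # randomly distributed weak bits
f_weak = weak_frac * weibull_cdf(t, eta=50.0, beta=1.0)        # early, beta = 1
f_main = (1 - weak_frac) * weibull_cdf(t, eta=5e3, beta=3.0)   # wearout, beta > 1
for ti, f in zip(t, f_weak + f_main):
    print(f"t = {ti:8.0f}   cumulative fail fraction = {f:.3e}")
```

Early in life the constant-rate weak-bit term dominates; later the increasing-failure-rate main population takes over, which is the shape the two-population model is meant to capture.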
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual-polarization quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and NLEs over various nonlinear transmission conditions was demonstrated by optimization of the parameters for the NLEs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
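In miniature, position-based visual servoing closes the loop from a camera measurement to a valve command; the toy integral-style controller and first-order bladder response below are assumptions for illustration, not the paper's control law:

```python
def camera_measure(true_height_mm):
    """Stand-in for the depth-camera head-position measurement."""
    return true_height_mm          # noise-free for simplicity

target_mm, height_mm, pressure = 20.0, 0.0, 0.0
for step in range(200):
    error = target_mm - camera_measure(height_mm)   # vision-based position error
    pressure += 0.05 * error                        # integral-style valve command
    height_mm = 0.9 * height_mm + 0.1 * pressure    # hypothetical bladder response
print(f"final height: {height_mm:.2f} mm (target {target_mm} mm)")
```

The slow first-order plant term plays the role of the pneumatic capacitance that produced the 2-second actuation delay reported above.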
Dynamic soft tissue deformation estimation based on energy analysis
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Yao, Bin
2016-10-01
Needle placement accuracy on the order of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions, so it is necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, where a geometry model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is exploited to quantitatively approximate the deformation on the surface of soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interactive process. A needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformations is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is created based on the law of conservation of energy, with the volume of tissue deformation having been obtained using image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation, with root mean squared errors between the fitted model and the experimental data of 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformations are shown to be useful for compensating the needle-targeting error in the rigid needle insertion procedure, especially for percutaneous needle insertion into organs.
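The geometry model approximates the dimpled surface as a cone, and the energy bookkeeping splits external work into the four terms named above; a minimal sketch with purely illustrative numbers (none are taken from the paper):

```python
import math

def cone_volume(radius_mm, depth_mm):
    """Cone approximation of the dimpled tissue surface during insertion."""
    return math.pi * radius_mm**2 * depth_mm / 3.0

# Energy bookkeeping: external work splits into the terms in the abstract.
work_external = 12.0                                  # mJ, illustrative
e_kinetic, e_dissipated, e_potential = 1.5, 6.0, 0.5  # mJ, assumed values
e_strain = work_external - (e_kinetic + e_dissipated + e_potential)
print(f"strain energy: {e_strain:.1f} mJ")
print(f"deformed volume (r=6 mm, d=4 mm): {cone_volume(6, 4):.1f} mm^3")
```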
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
Radiation test data submitted by many testers are collated to serve as a reference for engineers who are concerned with, and have some knowledge of, the effects of the natural radiation environment on microcircuits. Total dose damage information and single event upset cross sections, i.e., the probabilities of a soft error (bit flip) or of a hard error (latchup), are presented.
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression, and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error, and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
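A minimal sketch of the workflow: an 80/20 split, a feed-forward network, and a log-transformed permeability target that is back-transformed for error reporting; the data and network size are synthetic placeholders, using scikit-learn rather than the paper's tools:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = rng.uniform(0, 1, (n, 3))        # stand-ins for porosity, density, Thomeer term
log_k = 2 * X[:, 0] + 0.5 * X[:, 1] - X[:, 2] + rng.normal(0, 0.05, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X_tr, y_tr)                # target is log10(permeability)
pred_k = 10 ** model.predict(X_te)   # back-transform to permeability units
true_k = 10 ** y_te
print(f"mean abs relative error: {np.mean(np.abs(pred_k - true_k) / true_k) * 100:.1f}%")
```

Training on the log scale, as the paper does, keeps the loss from being dominated by the highest-permeability samples.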
Evidence for explosive chromospheric evaporation in a solar flare observed with SMM
NASA Technical Reports Server (NTRS)
Zarro, D. M.; Saba, J. L. R.; Strong, K. T.; Canfield, R. C.; Metcalf, T.
1986-01-01
SMM soft X-ray data and Sacramento Peak Observatory H-alpha observations are combined in a study of the impulsive phase of a solar flare. A blue asymmetry, indicative of upflow motions, was observed in the coronal Ca XIX line during the soft X-ray rise phase. H-alpha redshifts, indicative of downward motions, were observed simultaneously in bright flare kernels during the period of hard X-ray emission. It is shown that, to within observational errors, the impulsive phase momentum transported by the upflowing soft X-ray plasma is equivalent to that of the downward moving chromospheric material.
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% coverage against data corruption.
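One behavior-based detector in this spirit predicts each new value from the dataset's recent behavior and flags implausible jumps; a minimal sketch, where the step-size model and margin are assumptions rather than the paper's detectors:

```python
def detect_corruption(series, margin=4.0):
    """Flag points that jump far outside the recent step-size behavior."""
    flagged = []
    typical_step = None
    for i in range(1, len(series)):
        step = abs(series[i] - series[i - 1])
        if typical_step is not None and step > margin * max(typical_step, 1e-12):
            flagged.append(i)        # suspected silent corruption
            continue                 # do not let the spike pollute the model
        typical_step = step if typical_step is None else 0.9 * typical_step + 0.1 * step
    return flagged

smooth = [0.1 * t for t in range(50)]
smooth[30] += 7.5                    # inject a bit-flip-like spike
print("flagged indices:", detect_corruption(smooth))
```

A single corrupted point flags both the jump into and out of it, which is why such detectors localize corruption well on smoothly evolving physics datasets.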
The Effects of Observation Errors on the Attack Vulnerability of Complex Networks
2012-11-01
more detail, to construct a true network we select a topology (Erdős-Rényi (Erdős & Rényi, 1959), scale-free (Barabási & Albert, 1999), small world)... Efficiency of Scale-Free Networks: Error and Attack Tolerance. Physica A, Volume 320, pp. 622-642. 6. Erdős, P. & Rényi, A., 1959. On Random Graphs, I
Quality Control of an OSCE Using Generalizability Theory and Many-Faceted Rasch Measurement
ERIC Educational Resources Information Center
Iramaneerat, Cherdsak; Yudkowsky, Rachel; Myford, Carol M.; Downing, Steven M.
2008-01-01
An Objective Structured Clinical Examination (OSCE) is an effective method for evaluating competencies. However, scores obtained from an OSCE are vulnerable to many potential measurement errors that cases, items, or standardized patients (SPs) can introduce. Monitoring these sources of errors is an important quality control mechanism to ensure…
Single event upset vulnerability of selected 4K and 16K CMOS static RAM's
NASA Technical Reports Server (NTRS)
Kolasinski, W. A.; Koga, R.; Blake, J. B.; Brucker, G.; Pandya, P.; Petersen, E.; Price, W.
1982-01-01
Upset thresholds for bulk CMOS and CMOS/SOS RAMs were deduced after bombardment of the devices with 140 MeV Kr, 160 MeV Ar, and 33 MeV O beams in a cyclotron. The trials were performed to test prototype devices intended for space applications, to relate feature size to the critical upset charge, and to check the validity of computer simulation models. The tests were run on 4 K and 16 K memories built from six-transistor cells, in either hardened or unhardened configurations. The upset cross sections were calculated to determine the critical charge for upset from the soft errors observed in the irradiated cells. Computer simulations of the critical charge were found to deviate from the experimentally observed variation of the critical charge as the square of the feature size. Modeled values of series resistors decoupling the inverter pairs of memory cells showed that, above some minimum resistance value, a small increase in resistance produces a large increase in the critical charge; the experimental data showed this to be of questionable validity unless the value is made dependent on the maximum allowed read-write time.
Two Cultures in Modern Science and Technology: For Safety and Validity Does Medicine Have to Update?
Becker, Robert E
2016-01-11
Two different scientific cultures go unreconciled in modern medicine. Each culture accepts that scientific knowledge and technologies are vulnerable to and easily invalidated by methods and conditions of acquisition, interpretation, and application. How these vulnerabilities are addressed separates the 2 cultures and potentially explains medicine's difficulties eradicating errors. A traditional culture, dominant in medicine, leaves error control in the hands of individual and group investigators and practitioners. A competing modern scientific culture accepts errors as inevitable, pernicious, and pervasive sources of adverse events throughout medical research and patient care, too malignant for individuals or groups to control. Error risks to the validity of scientific knowledge and safety in patient care require systemwide programming able to support a culture in medicine grounded in tested, continually updated, widely promulgated, and uniformly implemented standards of practice for research and patient care. Experiences from successes in other sciences and industries strongly support the need for leadership from the Institute of Medicine's recommended Center for Patient Safety within the Federal Executive branch of government.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error correcting code.
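A classic family of codes for asymmetric (unidirectional) errors is the Berger code, which appends the count of zeros in the data word so that 1-to-0 upsets can never go unnoticed; this sketch illustrates the general idea and is not claimed to be the patented scheme:

```python
def berger_encode(bits):
    """Append the zero count; detects any number of 1 -> 0 errors."""
    zeros = bits.count(0)
    width = max(1, len(bits)).bit_length()          # bits needed for the count
    check = [(zeros >> i) & 1 for i in reversed(range(width))]
    return bits + check

def berger_check(word, k):
    """Verify that the stored zero count matches the data bits."""
    data, check = word[:k], word[k:]
    return data.count(0) == int("".join(map(str, check)), 2)

word = berger_encode([1, 0, 1, 1, 0, 1])
print("intact:", berger_check(word, 6))     # True
word[0] = 0                                 # radiation-style 1 -> 0 upset
print("upset :", berger_check(word, 6))     # False: zero count no longer matches
```

The code works because 1-to-0 errors in the data only increase the zero count while 1-to-0 errors in the check bits only decrease the stored count, so the two effects can never cancel.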
General Biology and Current Management Approaches of Soft Scale Pests (Hemiptera: Coccidae)
Camacho, Ernesto Robayo; Chong, Juang-Horng
2015-01-01
We summarize the economic importance, biology, and management of soft scales, focusing on pests of agricultural, horticultural, and silvicultural crops in outdoor production systems and urban landscapes. We also provide summaries on voltinism, crawler emergence timing, and predictive models for crawler emergence to assist in developing soft scale management programs. Phloem-feeding soft scale pests cause direct (e.g., injuries to plant tissues and removal of nutrients) and indirect damage (e.g., reduction in photosynthesis and aesthetic value by honeydew and sooty mold). Variations in life cycle, reproduction, fecundity, and behavior exist among congenerics due to host, environmental, climatic, and geographical variations. Sampling of soft scale pests involves sighting the insects or their damage, and assessing their abundance. Crawlers of most univoltine species emerge in the spring and the summer. Degree-day models and plant phenological indicators help determine the initiation of sampling and treatment against crawlers (the life stage most vulnerable to contact insecticides). The efficacy of cultural management tactics, such as fertilization, pruning, and irrigation, in reducing soft scale abundance is poorly documented. A large number of parasitoids and predators attack soft scale populations in the field; therefore, natural enemy conservation by using selective insecticides is important. Systemic insecticides provide greater flexibility in application method and timing, and have longer residual longevity than contact insecticides. Application timing of contact insecticides that coincides with crawler emergence is most effective in reducing soft scale abundance. PMID:26823990
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
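The decoder interaction reduces to a simple control loop: run an inner iteration, attempt outer decoding, and stop early on success; the skeleton below uses hypothetical stub decoders (turbo_iteration, rs_decode) purely to show the control flow, not real codec implementations:

```python
MAX_ITERATIONS = 8

def turbo_iteration(soft_info):
    """Hypothetical stub: one inner turbo pass refines the soft information."""
    return [llr * 1.5 for llr in soft_info]        # confidence grows each pass

def rs_decode(soft_info, threshold=3.0):
    """Hypothetical stub: outer RS decoding succeeds once confidence suffices."""
    return min(abs(llr) for llr in soft_info) > threshold

soft_info = [0.4, -0.9, 1.1, -0.6]                 # initial channel LLRs
for iteration in range(1, MAX_ITERATIONS + 1):
    soft_info = turbo_iteration(soft_info)
    if rs_decode(soft_info):                       # outer decoder signals success...
        print(f"stopped after {iteration} inner iterations")
        break                                      # ...terminating inner iterations early
else:
    print("declared frame failure at the iteration cap")
```

Stopping as soon as the outer code succeeds is what yields the reduced decoding delay claimed above.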
Management Aspects of Software Maintenance.
1984-09-01
educated in the complex nature of software maintenance to be able to properly evaluate and manage the software maintenance effort. In this... maintenance and improvement may be called "software evolution". The software manager must be educated in the complex nature of software maintenance to be... complaint of error or request for modification is also studied in order to determine what action needs to be taken. 2. Define Objective and Approach:
Stephan, Carl N; Simpson, Ellie K
2008-11-01
With the ever-increasing production of average soft tissue depth studies, data are becoming increasingly complex, less standardized, and more unwieldy. So far, no overarching review has been attempted to determine: the validity of continued data collection; the usefulness of the existing data subcategorizations; or whether a synthesis is possible to produce a manageable soft tissue depth library. While a principal components analysis would provide the best foundation for such an assessment, this type of investigation is not currently possible because of a lack of easily accessible raw data (first, many studies are narrow; second, raw data are infrequently published and/or stored and are not always shared by some authors). This paper provides an alternative means of investigation, using a hierarchical approach to review and compare the effects of single variables on published mean values for adults whilst acknowledging measurement errors and within-group variation. The results revealed: (i) no clear secular trends at frequently investigated landmarks; (ii) wide variation in soft tissue depth measures between different measurement techniques, irrespective of whether living persons or cadavers were considered; (iii) no clear clustering of non-Caucasoid data far from the Caucasoid means; and (iv) minor differences between males and females. Consequently, the data were pooled across studies using weighted means and standard deviations to cancel out random and opposing study-specific errors, and to produce a single soft tissue depth table with increased sample sizes (e.g., 6786 individuals at pogonion).
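Pooling across studies with sample-size weights is straightforward; a minimal sketch with hypothetical study summaries, where the pooled variance combines within-study spread with between-study mean offsets:

```python
import math

# (sample size, mean depth mm, SD) for one landmark across hypothetical studies
studies = [(120, 10.2, 2.1), (85, 11.0, 2.6), (300, 10.5, 2.3)]

n_total = sum(n for n, _, _ in studies)
pooled_mean = sum(n * m for n, m, _ in studies) / n_total
# Grouped-data variance: within-study spread plus squared mean offsets.
pooled_var = sum(n * (sd**2 + (m - pooled_mean)**2) for n, m, sd in studies) / n_total
print(f"pooled: {pooled_mean:.2f} mm +/- {math.sqrt(pooled_var):.2f} (n={n_total})")
```

Weighting by sample size lets large, careful studies dominate while opposing study-specific biases partially cancel, which is the rationale given for the single synthesized table.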
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-09-01
The effect of correcting static vision on sports vision is still not clear. The aim was to examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), with refractive error corrected with eyeglasses (Group B), and with refractive error not corrected (Group C). A cross-sectional study was conducted. Soft tennis athletes aged 10-13 who had played soft tennis for 2-5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. The chi-square test and Kruskal-Wallis test were used to compare the data among the three study groups. A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether they had eyeglass correction. On the other hand, DVA, EM, and MV were similar among the three study groups.
Balasuriya, Lilanthi; Vyles, David; Bakerman, Paul; Holton, Vanessa; Vaidya, Vinay; Garcia-Filion, Pamela; Westdorp, Joan; Sanchez, Christine; Kurz, Rhonda
2017-09-01
An enhanced dose range checking (DRC) system was developed to evaluate prescription error rates in the pediatric intensive care unit and the pediatric cardiovascular intensive care unit. An enhanced DRC system incorporating "soft" and "hard" alerts was designed and implemented. Practitioner responses to alerts for patients admitted to the pediatric intensive care unit and the pediatric cardiovascular intensive care unit were retrospectively reviewed. Alert rates increased from 0.3% to 3.4% after "go-live" (P < 0.001). Before go-live, all alerts were soft alerts. In the period after go-live, 68% of alerts were soft alerts and 32% were hard alerts. Before go-live, providers reduced doses only 1 time for every 10 dose alerts. After implementation of the enhanced computerized physician order entry system, the practitioners responded to soft alerts by reducing doses to more appropriate levels in 24.7% of orders (70/283), compared with 10% (3/30) before go-live (P = 0.0701). The practitioners deleted orders in 9.5% of cases (27/283) after implementation of the enhanced DRC system, as compared with no cancelled orders before go-live (P = 0.0774). Medication orders that triggered a soft alert were submitted unmodified in 65.7% (186/283) as compared with 90% (27/30) of orders before go-live (P = 0.0067). After go-live, 28.7% of hard alerts resulted in a reduced dose, 64% resulted in a cancelled order, and 7.4% were submitted as written. Before go-live, alerts were often clinically irrelevant. After go-live, there was a statistically significant decrease in orders that were submitted unmodified and an increase in the number of orders that were reduced or cancelled.
2014-04-01
laparoscopic ventral hernia repair. Additional simulation stations were added to the standards, and purchases (including a motion tracking system) were... framework for laparoscopic ventral hernia; incorporation of error-based simulators into an exit assessment of chief surgical residents; development of... simulating a laparoscopic ventral hernia (LVH) repair. Based on collected data, the lab worked to finalize the incorporation of error-based simulators.
Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares
Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai
2013-01-01
Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
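Since the record above describes the method only at a high level, here is a minimal Python sketch of the lag-selection step for a DPLS-style soft sensor: the input matrix is augmented with lagged copies of the measurements and the lag with the lowest held-out RMSE is kept. The synthetic data, the 70/30 split, and the helper `make_lagged` are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

def make_lagged(X, lag):
    """Stack X(t), X(t-1), ..., X(t-lag) column-wise."""
    rows = X.shape[0] - lag
    return np.hstack([X[i:i + rows] for i in range(lag + 1)])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # available process measurements
y = np.convolve(X[:, 0], np.ones(20) / 20, mode="same") + 0.05 * rng.normal(size=500)

best = None
for lag in (0, 10, 20, 30, 40):
    Xl, yl = make_lagged(X, lag), y[lag:]
    n_train = int(0.7 * len(yl))               # simple chronological split
    pls = PLSRegression(n_components=3).fit(Xl[:n_train], yl[:n_train])
    rmse = np.sqrt(mean_squared_error(yl[n_train:], pls.predict(Xl[n_train:])))
    if best is None or rmse < best[1]:
        best = (lag, rmse)
print("selected time lag:", best[0], "RMSE:", round(best[1], 4))
```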
Interspecific song imitation by a Prairie Warbler
Bruce E. Byers; Brodie A. Kramer; Michael E. Akresh; David I. King
2013-01-01
Song development in oscine songbirds relies on imitation of adult singers and thus leaves developing birds vulnerable to potentially costly errors caused by imitation of inappropriate models, such as the songs of other species. In May and June 2012, we recorded the songs of a bird that made such an error: a male Prairie Warbler (Setophaga discolor)...
When Errors Count: An EEG Study on Numerical Error Monitoring under Performance Pressure
ERIC Educational Resources Information Center
Schillinger, Frieder L.; De Smedt, Bert; Grabner, Roland H.
2016-01-01
In high-stake tests, students often display lower achievements than expected based on their skill level--a phenomenon known as choking under pressure. This imposes a serious problem for many students, especially for test-anxious individuals. Among school subjects, mathematics has been shown to be particularly vulnerable to choking. To succeed in a…
A new nonpenetrating ballistic injury.
Carroll, A W; Soderstrom, C A
1978-12-01
A new, nonpenetrating ballistic injury mechanism involving individuals protected by soft body armor is described. Experimental studies using laboratory animals have demonstrated that despite stopping missile penetration, the heart, liver, spleen, and spinal cord are vulnerable to injury. The rapid jolting force of an impacting bullet is contrasted with the usually encountered mechanisms producing blunt trauma injury. The experimental methodology used to assess a 20% increase in survival probability and an 80% decrease in the need for surgical intervention with a new soft body armor is reviewed. Five cases of ballistic assaults on law enforcement personnel protected by soft body armor are presented. Four emphasize the potentially lifesaving qualities of the armor, while the fifth indicates the need for a torso-encircling design. Hospitalization should follow all assaults, regardless of the innocuous appearance of the skin lesion and the apparent well-being of the assaulted individual. Therapeutic guidelines for patient management are suggested.
A new nonpenetrating ballistic injury.
Carroll, A W; Soderstrom, C A
1978-01-01
A new, nonpenetrating ballistic injury mechanism involving individuals protected by soft body armor is described. Experimental studies using laboratory animals have demonstrated that despite stopping missile penetration, the heart, liver, spleen, and spinal cord are vulnerable to injury. The rapid jolting force of an impacting bullet is contrasted with the usually encountered mechanisms producing blunt trauma injury. The experimental methodology used to assess a 20% increase in survival probability and an 80% decrease in the need for surgical intervention with a new soft body armor is reviewed. Five cases of ballistic assaults on law enforcement personnel protected by soft body armor are presented. Four emphasize the potentially lifesaving qualities of the armor, while the fifth indicates the need for a torso-encircling design. Hospitalization should follow all assaults, regardless of the innocuous appearance of the skin lesion and the apparent well-being of the assaulted individual. Therapeutic guidelines for patient management are suggested. PMID:736653
Auxiliary variables for numerically solving nonlinear equations with softly broken symmetries.
Olum, Ken D; Masoumi, Ali
2017-06-01
General methods for solving simultaneous nonlinear equations work by generating a sequence of approximate solutions that successively improve a measure of the total error. However, if the total error function has a narrow curved valley, the available techniques tend to find the solution after a very large number of steps, if ever. The solver first converges rapidly to the valley, but once there it converges extremely slowly to the solution. In this paper we show that in the specific physically important case where these valleys are the result of a softly broken symmetry, the solution can often be found much more quickly by adding the generators of the softly broken symmetry as auxiliary variables. This makes the number of variables greater than the number of equations, and hence there will be a family of solutions, any one of which would be acceptable. We present a procedure for finding solutions in this case and apply it to several simple examples and an important problem in the physics of false vacuum decay. We also provide a Mathematica package that implements Powell's hybrid method with the generalization to allow more variables than equations.
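A toy version of the auxiliary-variable idea can be written in a few lines. The first equation below is exactly invariant under rotations and the second breaks that symmetry softly; adding the rotation angle as an extra unknown gives the solver a cheap direction along the valley. This sketch uses SciPy's trust-region least-squares solver rather than the authors' generalized Powell hybrid (their Mathematica package), and eps and the starting point are assumed toy values.

```python
import numpy as np
from scipy.optimize import least_squares

eps = 1e-6  # strength of the soft symmetry breaking (assumed)

def residuals(v):
    x, y, theta = v
    xr = np.cos(theta) * x - np.sin(theta) * y   # act with the symmetry generator
    yr = np.sin(theta) * x + np.cos(theta) * y
    return [xr**2 + yr**2 - 1.0,                 # exactly symmetric equation
            eps * (xr - 1.0)]                    # softly breaking equation

# 2 equations, 3 unknowns: any member of the solution family is acceptable.
sol = least_squares(residuals, [0.3, 0.8, 0.0], method="trf")
print(sol.x, sol.cost)
```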
In-flight performance of pulse-processing system of the ASTRO-H/Hitomi soft x-ray spectrometer
NASA Astrophysics Data System (ADS)
Ishisaki, Yoshitaka; Yamada, Shinya; Seta, Hiromi; Tashiro, Makoto S.; Takeda, Sawako; Terada, Yukikatsu; Kato, Yuka; Tsujimoto, Masahiro; Koyama, Shu; Mitsuda, Kazuhisa; Sawada, Makoto; Boyce, Kevin R.; Chiao, Meng P.; Watanabe, Tomomi; Leutenegger, Maurice A.; Eckart, Megan E.; Porter, Frederick Scott; Kilbourne, Caroline Anne
2018-01-01
We summarize results of the initial in-orbit performance of the pulse shape processor (PSP) of the soft x-ray spectrometer instrument onboard ASTRO-H (Hitomi). Event formats, kinds of telemetry, and the pulse-processing parameters are described, and the parameter settings in orbit are listed. The PSP was powered on 2 days after launch, and the event threshold was lowered in orbit. The PSP worked well in orbit, and there was neither a memory error nor a SpaceWire communication error until the break-up of the spacecraft. Time assignment, electrical crosstalk, and the event screening criteria are studied. It is confirmed that the event processing rate at 100% central processing unit load is ~200 counts/s/array, compliant with the requirement on the PSP.
Bronson, N R
1984-05-01
A new A-mode biometry system for determining axial length measurements of the eye has been developed that incorporates a soft-membrane transducer. The soft transducer decreases the risk of indenting the cornea with the probe, which can result in inaccurate measurements. A microprocessor evaluates echo patterns and determines whether or not axial alignment has been obtained, eliminating possible user error. The new A-scan requires minimal user skill and can be used successfully by both physician and technician.
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.
2012-01-01
Modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter and representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40mm. Such location errors could be reduced to less than 17mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques of pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle, which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
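The abstract does not spell out the exact reshuffle scheme; the sketch below shows one plausible instance of pre-decoder soft-decision conditioning, in which log-likelihood ratios of symbols coinciding with a known radar pulse are zeroed (treated as erasures) before the LDPC decoder runs. The frame length matches the AR4JA (2048, 1024) code, but the SNR, pulse timing, and interference level are invented for illustration, and this may differ from the paper's actual scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2048                                   # codeword length of AR4JA (2048, 1024)
bits = rng.integers(0, 2, n)
snr_lin = 10 ** (2.0 / 10)                 # 2 dB Es/N0, assumed
noise = rng.normal(0, np.sqrt(1 / (2 * snr_lin)), n)
rx = (1 - 2 * bits) + noise                # BPSK over AWGN

pulse = np.zeros(n, dtype=bool)
pulse[500:700] = True                      # symbols hit by the radar pulse
rx[pulse] += rng.normal(0, 3.0, pulse.sum())   # strong pulsed interference

llr = 4 * snr_lin * rx                     # channel LLRs for BPSK/AWGN
llr[pulse] = 0.0                           # erase RFI-hit soft decisions
# `llr` would now feed the LDPC decoder in place of the raw soft decisions.
```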
Justification of Estimates for Fiscal Year 1983 Submitted to Congress.
1982-02-01
hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid-lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft errors" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power
Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.
Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B
2018-02-01
Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study was carried out for utilizing simultaneous PET/MR imaging for five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017 Online supplemental material is available for this article.
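The class-wise Dice coefficient used above to score the pseudo-CT is straightforward to compute; the sketch below does so on synthetic label volumes standing in for coregistered air/soft-tissue/bone masks.

```python
import numpy as np

def dice(pred, ref, label):
    """Dice overlap of one tissue class between two label volumes."""
    p, r = (pred == label), (ref == label)
    denom = p.sum() + r.sum()
    return 2.0 * np.logical_and(p, r).sum() / denom if denom else 1.0

rng = np.random.default_rng(0)
ref = rng.integers(0, 3, size=(8, 64, 64))         # 0=air, 1=soft tissue, 2=bone
pred = ref.copy()
flip = rng.random(ref.shape) < 0.05                # corrupt 5% of voxels
pred[flip] = rng.integers(0, 3, size=flip.sum())
for lab, name in enumerate(("air", "soft tissue", "bone")):
    print(name, round(dice(pred, ref, lab), 3))
```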
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-square sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
Shi, Weifang; Zeng, Weihua
2013-01-01
Reducing human vulnerability to chemical hazards in the industrialized city is a matter of great urgency. Vulnerability mapping is an alternative approach for providing vulnerability-reducing interventions in a region. This study presents a method for mapping human vulnerability to chemical hazards by using clustering analysis for effective vulnerability reduction. Taking the city of Shanghai as the study area, we measure human exposure to chemical hazards by using the proximity model while additionally considering the toxicity of hazardous substances, and we capture the sensitivity and coping capacity with corresponding indicators. We perform an improved k-means clustering approach based on a genetic algorithm, using a 500 m × 500 m geographical grid as the basic spatial unit. The sum of squared errors and the silhouette coefficient are combined to measure the quality of clustering and to determine the optimal number of clusters. The clustering result reveals a set of six typical human vulnerability patterns that show distinct vulnerability dimension combinations. The vulnerability mapping of the study area reflects cluster-specific vulnerability characteristics and their spatial distribution. Finally, we suggest specific points that can provide new insights in rationally allocating the limited funds for the vulnerability reduction of each cluster. PMID:23787337
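The cluster-number selection described above combines the sum of squared errors with the silhouette coefficient. A minimal sketch of that step, using plain k-means on synthetic grid indicators (the paper's genetic-algorithm refinement of the centroids is omitted):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# synthetic stand-in for per-grid-cell exposure/sensitivity/coping indicators
X = np.vstack([rng.normal(c, 0.4, size=(200, 3)) for c in (0, 2, 4, 6)])

for k in range(2, 9):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # inertia_ is the sum of squared errors; higher silhouette is better
    print(k, round(km.inertia_, 1), round(silhouette_score(X, km.labels_), 3))
```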
Grevesse, Thomas; Dabiri, Borna E; Parker, Kevin Kit; Gabriele, Sylvain
2015-03-30
Although pathological changes in axonal morphology have emerged as important features of traumatic brain injury (TBI), the mechanical vulnerability of the axonal microcompartment relative to the cell body is not well understood. We hypothesized that soma and neurite microcompartments exhibit distinct mechanical behaviors, rendering axons more sensitive to a mechanical injury. In order to test this assumption, we combined protein micropatterns with magnetic tweezer rheology to probe the viscoelastic properties of neuronal microcompartments. Creep experiments revealed two opposite rheological behaviors within cortical neurons: the cell body was soft and characterized by a solid-like response, whereas the neurite compartment was stiffer and viscous-like. By using pharmacological agents, we demonstrated that the nucleus is responsible for the solid-like behavior and the stress-stiffening response of the soma, whereas neurofilaments have a predominant contribution in the viscous behavior of the neurite. Furthermore, we found that the neurite is a mechanosensitive compartment that becomes softer and adopts a pronounced viscous state on soft matrices. Together, these findings highlight the importance of the regionalization of mechanical and rigidity-sensing properties within neuron microcompartments in the preferential damage of axons during traumatic brain injury, and offer insight into potential mechanisms of axonal outgrowth after injury.
NASA Astrophysics Data System (ADS)
Grevesse, Thomas; Dabiri, Borna E.; Parker, Kevin Kit; Gabriele, Sylvain
2015-03-01
Although pathological changes in axonal morphology have emerged as important features of traumatic brain injury (TBI), the mechanical vulnerability of the axonal microcompartment relative to the cell body is not well understood. We hypothesized that soma and neurite microcompartments exhibit distinct mechanical behaviors, rendering axons more sensitive to a mechanical injury. In order to test this assumption, we combined protein micropatterns with magnetic tweezer rheology to probe the viscoelastic properties of neuronal microcompartments. Creep experiments revealed two opposite rheological behaviors within cortical neurons: the cell body was soft and characterized by a solid-like response, whereas the neurite compartment was stiffer and viscous-like. By using pharmacological agents, we demonstrated that the nucleus is responsible for the solid-like behavior and the stress-stiffening response of the soma, whereas neurofilaments have a predominant contribution in the viscous behavior of the neurite. Furthermore, we found that the neurite is a mechanosensitive compartment that becomes softer and adopts a pronounced viscous state on soft matrices. Together, these findings highlight the importance of the regionalization of mechanical and rigidity-sensing properties within neuron microcompartments in the preferential damage of axons during traumatic brain injury, and offer insight into potential mechanisms of axonal outgrowth after injury.
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end-condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple-burn orbital maneuvers. Moreover, the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard constraint formulation is a numerically well-behaved subcase. As a result, convergence properties have been greatly improved.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to its main memory SEU error. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
Market mechanisms protect the vulnerable brain.
Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel; Denburg, Natalie L
2011-07-01
Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets would help or hurt individuals with decision-making deficits, as are frequently encountered in the case of cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei in the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, by using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (difference between their prediction and the actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counterintuitive to the traditional public perception of the market as a fickle, risky proposition where vulnerable traders are most surely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. Copyright © 2011 Elsevier Ltd. All rights reserved.
Market mechanisms protect the vulnerable brain
Ramchandran, Kanchna; Nayakankuppam, Dhananjay; Berg, Joyce; Tranel, Daniel
2011-01-01
Markets are mechanisms of social exchange, intended to facilitate trading. However, the question remains as to whether markets would help or hurt individuals with decision-making deficits, as are frequently encountered in the case of cognitive aging. Essential for predicting future gains and losses in monetary and social domains, the striatal nuclei in the brain undergo structural, neurochemical, and functional decline with age. We correlated the efficacy of market mechanisms with dorsal striatal decline in an aging population, by using market-based trading in the context of the 2008 U.S. Presidential Elections (primary cycle). Impaired decision-makers displayed higher prediction error (difference between their prediction and the actual outcome). Lower in vivo caudate volume was also associated with higher prediction error. Importantly, market-based trading protected older adults with lower caudate volume to a greater extent from their own poorly calibrated predictions. Counterintuitive to the traditional public perception of the market as a fickle, risky proposition where vulnerable traders are most surely to be burned, we suggest that market-based mechanisms protect individuals with brain-based decision-making vulnerabilities. PMID:21600226
NASA Astrophysics Data System (ADS)
Low, W. W.; Wong, K. S.; Lee, J. L.
2018-04-01
With the growth of the economy and population, there is an increase in infrastructure construction projects. As such, it is unavoidable to have construction projects on soft soil. Without a proper risk management plan, construction projects are vulnerable to different types of risks which will have a negative impact on a project's time, cost and quality. A literature review showed that little or no research has focused on risk assessment for infrastructure projects on soft soil. Hence, the aim of this research is to propose a risk assessment framework for infrastructure projects on soft soil during the construction stage. This research focused on the impact of risks on project time and on internal risk factors. The research method was the Analytical Hierarchy Process, and the sample population consisted of experienced industry experts with experience in infrastructure projects. The analysis showed that, for internal factors, the five most significant risks affecting project time are lack of special equipment, potential contractual disputes and claims, shortage of skilled workers, delay/lack of materials supply, and insolvency of the contractor/sub-contractor. The results indicated that resource risk factors play a critical role in the project time frame of infrastructure projects on soft soil during the construction stage.
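The core Analytical Hierarchy Process computation behind such a ranking is a priority vector derived from a pairwise comparison matrix, plus a consistency check. A minimal sketch with an invented 3x3 judgment matrix (not the study's survey data):

```python
import numpy as np

A = np.array([[1.0, 3.0, 5.0],     # pairwise judgments on Saaty's 1-9 scale
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)            # principal eigenvalue lambda_max
w = np.abs(vecs[:, i].real)
w /= w.sum()                        # priority weights of the risk factors

n = A.shape[0]
ci = (vals.real[i] - n) / (n - 1)   # consistency index
cr = ci / 0.58                      # random index for n=3 is 0.58
print("weights:", w.round(3), "consistency ratio:", round(cr, 3))
```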
TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.
2007-01-01
Initial total ionizing dose (TID) and single event heavy ion test results are presented for an unhardened commercial flash memory fabricated with 63 nm technology. The parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10(exp -12) errors/bit-day for the Adams Ten Percent Worst Case Environment.
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. As a standard soft set can be represented as a Boolean-valued information system, it has been used in hundreds of useful applications. Meanwhile, these applications become worthless if the Boolean information system contains missing data due to error, security or mishandling. Little research has focused on handling partially incomplete soft sets, and none of it has achieved a high accuracy rate in the prediction performance of handling missing data. It has been shown that the data filling approach for incomplete soft sets (DFIS) has the best performance among all previous approaches. However, in reviewing DFIS, accuracy is still its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus more on the reliability of association among parameters in the soft set. Experimental results on small, four UCI benchmark, and causality workbench lung cancer (LUCAP2) data sets show that ADFIS performs with better accuracy as compared to DFIS.
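The abstract gives only the flavor of ADFIS (reliability of association among parameters), so the sketch below is an illustrative association-based filling for one missing cell of a Boolean-valued soft set, not the exact ADFIS procedure: the column whose known values agree (or disagree) most consistently with the incomplete column supplies the fill value.

```python
import numpy as np

U = np.array([[1, 1, 0, 1],
              [0, 0, 1, 0],
              [1, 1, 0, None],       # missing entry in column 3
              [0, 0, 1, 0],
              [1, 1, 0, 1]], dtype=object)

col = 3
row = next(r for r in range(U.shape[0]) if U[r, col] is None)
known = [r for r in range(U.shape[0]) if U[r, col] is not None]

best, best_assoc, best_dir = None, 0.0, True
for c in range(U.shape[1]):
    if c == col:
        continue
    agree = np.mean([U[r, c] == U[r, col] for r in known])
    assoc = max(agree, 1 - agree)    # strength of direct or inverse association
    if assoc > best_assoc:
        best, best_assoc, best_dir = c, assoc, agree >= 0.5

U[row, col] = U[row, best] if best_dir else 1 - U[row, best]
print("filled value:", U[row, col], "taken from column", best)
```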
NASA Astrophysics Data System (ADS)
Li, Yan; Jing, Joseph C.; Qu, Yueqiao; Miao, Yusi; Ma, Teng; Yu, Mingyue; Zhou, Qifa; Chen, Zhongping
2017-02-01
The rupture of atherosclerotic plaques is the leading cause of acute coronary events, so accurate assessment of plaque is critical. A large lipid pool, thin fibrous cap, and inflammatory reaction are the crucial characteristics for identifying vulnerable plaques. In our study, a tri-modality imaging system for intravascular imaging was designed and implemented. The tri-modality imaging system, with a 1-mm probe diameter, is able to simultaneously acquire optical coherence tomography (OCT), intravascular ultrasound (IVUS), and fluorescence imaging. Moreover, for fluorescence imaging, we used the FDA-approved indocyanine green (ICG) dye as the contrast agent to target lipid-loaded macrophages. Firstly, IVUS is used as the first step for identifying plaque, since IVUS enables the visualization of the layered structures of the artery wall. Due to low soft-tissue contrast, IVUS only provides an initial identification of the lipid plaque. Then, OCT is used for differentiating fibrosis and lipid pool, based on its relatively higher soft tissue contrast and high sensitivity/specificity. Last, fluorescence imaging is used for identifying the inflammatory reaction to further confirm whether the plaque is vulnerable or not. An ex vivo experiment on a male New Zealand white rabbit aorta was performed to validate the performance of our tri-modality system. H&E histology results of the rabbit aorta are also presented to check assessment accuracy. The miniature tri-modality probe, together with the use of ICG dye, suggests that the system is of great potential for providing a more accurate assessment of vulnerable plaques in clinical applications.
Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations
Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo
2016-01-01
In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of facial distortion postoperatively. However, it is difficult to simulate the soft tissue behaviors affected by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the evaluation of the elastic parameters included in the RB model. Therefore, the soft facial tissue deformation can be predicted from the biomechanical properties and the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. It also demonstrates that the more accurate biomechanical information the model has, the better prediction performance it can achieve. PMID:27717593
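A GRNN of the kind used above is essentially Nadaraya-Watson kernel regression and fits in a dozen lines; the inputs, targets, and smoothing factor sigma below are synthetic placeholders, not the study's stress/displacement data.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    # Gaussian kernel weight of every training sample for every query point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    # kernel-weighted average of the training targets
    return (w @ y_train) / w.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                    # nodal stress features
y = X @ rng.normal(size=(5, 3)) + 0.1 * rng.normal(size=(200, 3))  # 3D displacements
pred = grnn_predict(X, y, X[:10])
print(pred.shape)                                # (10, 3)
```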
Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment
NASA Technical Reports Server (NTRS)
Hancock, Thomas M., III
1999-01-01
This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip". In its mildest form, a soft-event upset can cause software exceptions or unexpected events, trigger spacecraft safing (ending data collection), or corrupt fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.
Pressing the Approach: A NASA Study of 19 Recent Accidents Yields a New Perspective on Pilot Error
NASA Technical Reports Server (NTRS)
Berman, Benjamin A.; Dismukes, R. Key
2007-01-01
This article begins with a review of two sample airplane accidents that were caused by pilot error. The analysis of these and 17 other accidents suggested that almost any experienced pilot, operating in the same environment in which the accident crews were operating and knowing only what the accident crews knew at each moment of the flight, would be vulnerable to making similar decisions and similar errors. Whether a particular crew in a given situation makes errors depends on a somewhat random interaction of factors. Two themes that seem to be prevalent in these cases are plan continuation bias and snowballing workload.
An undulator based soft x-ray source for microscopy on the Duke electron storage ring
NASA Astrophysics Data System (ADS)
Johnson, Lewis Elgin
1998-09-01
This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion device and soft x-ray beamline physics and technology are all discussed in detail. The Duke/NIST undulator is a 3.64-m long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall probe measurements on the magnetic field distribution of the undulator revealed field errors of more than 0.80%. Initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications, we have re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K has served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from magnetic field data, we have computed the spectral output given the interaction of the Duke storage ring electron beam and the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety and funding, we have also constructed a unique front end beamline for the undulator. The front end has been designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation. The front end has an alternative mode of operation which transmits the 3rd and 5th harmonics. This compact system also allows for the extraction of some of the bend-magnet-produced synchrotron and transition radiation from the storage ring. As with any well designed front end system, it also provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and a scanning wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator, and verified that it can be used without perturbing the orbit of the beam.
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-01-01
Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) was different among soft tennis adolescent athletes with normal vision (Group A), with refractive error and corrected with (Group B) and without eyeglasses (Group C). Setting and Design: A cross-sectional study was conducted. Soft tennis athletes aged 10–13 who had played softball tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. Materials and Methods: DPs were measured as an absolute deviation (mm) between a moving rod and a fixing rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. Statistical Analysis: The Chi-square test and Kruskal–Wallis test were used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Conclusions: Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether they were corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups. PMID:26632127
Naik, Aanand Dinkar; Rao, Raghuram; Petersen, Laura Ann
2008-01-01
Diagnostic errors are poorly understood despite being a frequent cause of medical errors. Recent efforts have aimed to advance the "basic science" of diagnostic error prevention by tracing errors to their most basic origins. Although a refined theory of diagnostic error prevention will take years to formulate, we focus on communication breakdown, a major contributor to diagnostic errors and an increasingly recognized preventable factor in medical mishaps. We describe a comprehensive framework that integrates the potential sources of communication breakdowns within the diagnostic process and identifies vulnerable steps in the diagnostic process where various types of communication breakdowns can precipitate error. We then discuss potential information technology-based interventions that may have efficacy in preventing one or more forms of these breakdowns. These possible intervention strategies include using new technologies to enhance communication between health providers and health systems, improve patient involvement, and facilitate management of information in the medical record. PMID:18373151
Microanatomy of the cochlear hook
NASA Astrophysics Data System (ADS)
Kwan, Changyow Claire; Tan, Xiaodong; Stock, Stuart R.; Soriano, Carmen; Xiao, Xianghui; Richter, Claus-Peter
2017-09-01
Communication among humans occurs through coding and decoding of acoustic information. The inner ear or cochlea acts as a frequency analyzer and divides the acoustic signal into small frequency bands, which are processed at different sites along the cochlea. The mechano-electrical conversion is accomplished by the soft tissue structures in the cochlea. While the anatomy of most of the cochlea has been well described, a detailed description of the very high frequency and vulnerable cochlear hook region is missing. To study the cochlear hook, mouse cochleae were imaged with synchrotron radiation, and high-resolution reconstructions were made from the tomographic scans. This is the first detailed description of the bony and soft tissues of the hook region of the mammalian cochlea.
Triage: an investigation of the process and potential vulnerabilities.
Hitchcock, Maree; Gillespie, Brigid; Crilly, Julia; Chaboyer, Wendy
2014-07-01
To explore and describe the triage process in the Emergency Department to identify problems and potential vulnerabilities that may affect the triage process. Triage is the first step in the patient journey in the Emergency Department and is often the front line in reducing the potential for errors and mistakes. A fieldwork study was conducted to provide an in-depth appreciation and understanding of the triage process. Fieldwork included unstructured observer-only observation, field notes, and informal and formal interviews conducted over the months of June, July and August 2012. Over 170 hours of observation were performed covering day, evening and night shifts, 7 days of the week. Sixty episodes of triage were observed; 31 informal interviews and 14 formal interviews were completed. Thematic analysis was used. Three themes were identified from the analysis of the data: 'negotiating patient flow and care delivery through the Emergency Department'; 'interdisciplinary team communicating and collaborating to provide appropriate and safe care to patients'; and 'varying levels of competence of the triage nurse'. In these themes, the vulnerabilities and problems described included over- and under-triage, extended time to triage assessment, triage errors, multiple patients arriving simultaneously, and emergency department and hospital overcrowding. Findings suggest that vulnerabilities in the triage process may cause disruptions to patient flow and compromise care, thus potentially impacting nurses' ability to provide safe and effective care. © 2013 John Wiley & Sons Ltd.
Laser as a Tool to Study Radiation Effects in CMOS
NASA Astrophysics Data System (ADS)
Ajdari, Bahar
Energetic particles from cosmic ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as device technology advances, and chip reliability has become more important than ever. Particle accelerator testing has been the standard method to characterize the sensitivity of chips to single event upsets (SEUs). However, because of its cost and limited availability, other techniques have been explored. The pulsed laser has been a successful tool for characterization of SEU behavior, but to this day the laser has not been recognized as a method comparable to beam testing. In this thesis, I propose a methodology for correlating laser soft error rate (SER) to data gathered with particle beams. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon, where due to the close proximity of devices a "weakening effect" in the ON state can be observed.
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce the link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of utilizing soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC based tools are used for the prediction of key parameters of a FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the accuracy with which the antenna tracks data beams. This is particularly essential during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can cause degraded antenna performance, depending also on the environmental conditions and the antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool, which provides guidance for hardware implementation. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic or soft failures are modeled as a modified Gaussian distribution. The resulting amplitude characteristics then determine the array excitation coefficients. The phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half-power beamwidth, mainbeam phase errors, sidelobe levels, and beam pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
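The kind of Monte Carlo pattern study described above can be sketched directly: perturb element amplitudes with a truncated-Gaussian soft-failure model plus random hard failures, draw uniform phase-shifter errors, and compare the degraded array factor with the ideal one. The element count, error levels, and failure rate are assumed values, and this is a generic linear-array sketch rather than the DISTAR model itself.

```python
import numpy as np

n, d = 32, 0.5                        # elements, spacing in wavelengths
theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
k = 2 * np.pi                         # wavenumber in units of wavelengths
rng = np.random.default_rng(2)

amp = np.clip(rng.normal(1.0, 0.1, n), 0, None)   # soft failures: truncated Gaussian
amp[rng.random(n) < 0.05] = 0.0                   # 5% hard failures: no output
phase = rng.uniform(-np.pi / 18, np.pi / 18, n)   # +/-10 deg phase-shifter error

pos = np.arange(n) * d
steer = np.exp(1j * k * pos[:, None] * np.sin(theta)[None, :])
af_ideal = np.abs(steer.sum(axis=0))
af_err = np.abs(((amp * np.exp(1j * phase))[:, None] * steer).sum(axis=0))
print("peak gain loss (dB):", round(20 * np.log10(af_err.max() / af_ideal.max()), 2))
```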
Effects of Stopping Ions and LET Fluctuations on Soft Error Rate Prediction.
Weeden-Wright, S. L.; King, Michael Patrick; Hooten, N. C.; ...
2015-02-01
Variability in energy deposition from stopping ions and LET fluctuations is quantified for specific radiation environments. When compared to predictions using average LET via CREME96, LET fluctuations lead to an order-of-magnitude difference in effective flux and a nearly 4x decrease in predicted soft error rate (SER) in an example calculation performed on a commercial 65 nm SRAM. The large LET fluctuations reported here will be even greater for the smaller sensitive volumes that are characteristic of highly scaled technologies. End-of-range effects of stopping ions do not lead to significant inaccuracies in radiation environments with low solar activity unless the sensitive-volume thickness is 100 μm or greater. In contrast, end-of-range effects for stopping ions lead to significant inaccuracies for sensitive-volume thicknesses less than 10 μm in radiation environments with high solar activity.
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method based on mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Qcv2) (0.9998) and coefficient of determination of the test set (Q2test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting a component spectral analysis.
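All four selection methods above ultimately score candidate wavelength subsets by cross-validated error. A minimal version of that scoring step, on synthetic spectra, looks like this; a full BOSS/CARS/GA-PLS/MCUVE search would call such a routine repeatedly.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def rmsecv(X, y, wavelengths, n_components=5, cv=10):
    """Cross-validated RMSE of a PLS model restricted to a wavelength subset."""
    y_hat = cross_val_predict(PLSRegression(n_components), X[:, wavelengths], y, cv=cv)
    return float(np.sqrt(np.mean((y - y_hat.ravel()) ** 2)))

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 400))                 # 80 spectra, 400 wavelengths
y = X[:, 120:125].sum(axis=1) + 0.05 * rng.normal(size=80)

print(rmsecv(X, y, np.arange(400)))            # full spectrum
print(rmsecv(X, y, np.arange(110, 140)))       # informative band scores lower
```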
Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs
Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...
2015-12-17
Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
Damage level prediction of non-reshaped berm breakwater using ANN, SVM and ANFIS models
NASA Astrophysics Data System (ADS)
Mandal, Sukomal; Rao, Subba; N., Harish; Lokesha
2012-06-01
The damage analysis of coastal structures is very important as it involves many design parameters that must be considered for a better and safer design of the structure. In the present study, experimental data for a non-reshaped berm breakwater are collected from the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, NITK, Surathkal, India. Soft computing techniques like Artificial Neural Network (ANN), Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) models are constructed using experimental data sets to predict the damage level of the non-reshaped berm breakwater. The experimental data are used to train the ANN, SVM and ANFIS models, and results are evaluated in terms of statistical measures like mean square error, root mean square error, correlation coefficient and scatter index. The results show that soft computing techniques, i.e., ANN, SVM and ANFIS, can be efficient tools for predicting damage levels of non-reshaped berm breakwaters.
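The four agreement statistics named above are simple to reproduce; the sketch below computes them for synthetic observed/predicted damage levels standing in for the experimental data.

```python
import numpy as np

def scores(obs, pred):
    """Mean square error, RMSE, correlation coefficient, and scatter index."""
    err = pred - obs
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    cc = np.corrcoef(obs, pred)[0, 1]
    si = rmse / np.mean(obs)          # scatter index: RMSE normalized by the mean
    return mse, rmse, cc, si

rng = np.random.default_rng(0)
obs = rng.uniform(0.5, 2.0, 60)       # measured damage levels (synthetic)
pred = obs + rng.normal(0, 0.1, 60)   # model output (synthetic)
print([round(v, 3) for v in scores(obs, pred)])
```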
Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo
Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1982-06-01
The US General Accounting Office and executive agency Inspectors General have reported losses of millions of dollars in government funds resulting from fraud, waste and error. The Administration and the Congress have initiated determined efforts to eliminate such losses from government programs and activities. Primary emphasis in this effort is on the strengthening of accounting and administrative controls. Accordingly, the Office of Management and Budget (OMB) issued Circular No. A-123, Internal Control Systems, on October 28, 1981. The campaign to improve internal controls was endorsed by the Secretary of Energy in a memorandum to Heads of Departmental Components, dated March 13, 1981, Subject: Internal Control as a Deterrent to Fraud, Waste and Error. A vulnerability assessment is a review of the susceptibility of a program or function to unauthorized use of resources, errors in reports and information, and illegal or unethical acts. It is based on considerations of the environment in which the program or function is carried out, the inherent riskiness of the program or function, and a preliminary evaluation as to whether adequate safeguards exist and are functioning.
Andreyeva, Tatiana; Kelly, Inas Rashad; Harris, Jennifer L
2011-07-01
There is insufficient research on the direct effects of food advertising on children's diet and diet-related health, particularly in non-experimental settings. We employ a nationally-representative sample from the Early Childhood Longitudinal Survey-Kindergarten Cohort (ECLS-K) and the Nielsen Company data on spot television advertising of cereals, fast food restaurants and soft drinks to children across the top 55 designated-market areas to estimate the relation between exposure to food advertising on television and children's food consumption and body weight. Our results suggest that soft drink and fast food television advertising is associated with increased consumption of soft drinks and fast food among elementary school children (Grade 5). Exposure to 100 incremental TV ads for sugar-sweetened carbonated soft drinks during 2002-2004 was associated with a 9.4% rise in children's consumption of soft drinks in 2004. The same increase in exposure to fast food advertising was associated with a 1.1% rise in children's consumption of fast food. There was no detectable link between advertising exposure and average body weight, but fast food advertising was significantly associated with body mass index for overweight and obese children (≥85th BMI percentile), revealing detectable effects for a vulnerable group of children. Exposure to advertising for calorie-dense nutrient-poor foods may increase overall consumption of unhealthy food categories. Copyright © 2011 Elsevier B.V. All rights reserved.
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by each of the CAT & MAUS system and the MCS alone varied from 8.32mm to 16.87mm in gait. This indicates the discrepancy between the MCS measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
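Both comparisons described above (per-landmark Euclidean distance, and Procrustes analysis of segment shape deformation) can be sketched on synthetic landmark sets; the error magnitudes below are invented, not the study's measurements.

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(0)
bone_mcs = rng.normal(size=(6, 3)) * 50         # marker-based landmark estimates (mm)
bone_us = bone_mcs + rng.normal(0, 8, (6, 3))   # ultrasound-tracked positions (mm)

dist = np.linalg.norm(bone_mcs - bone_us, axis=1)   # per-landmark Euclidean error
print("per-landmark error (mm):", dist.round(1))

shape_a = shape = rng.normal(size=(6, 3))            # marker-modeled shape, pose A
shape_b = shape_a + rng.normal(0, 0.05, (6, 3))      # deformed by soft tissue, pose B
_, _, disparity = procrustes(shape_a, shape_b)       # residual after best-fit alignment
print("Procrustes disparity:", round(disparity, 4))
```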
Color reproduction for advanced manufacture of soft tissue prostheses.
Xiao, Kaida; Zardawi, Faraedon; van Noort, Richard; Yates, Julian M
2013-11-01
The objectives of this study were to develop a color reproduction system in advanced manufacture technology for accurate and automatic processing of soft tissue prostheses. The manufacturing protocol was defined to effectively and consistently produce soft tissue prostheses using a 3D printing system. Within this protocol, printer color profiles were developed using a number of mathematical models for the proposed 3D color printing system based on 240 training colors. On this basis, the color reproduction system was established, and its system errors, including color reproduction accuracy, color repeatability, and color gamut, were evaluated using 14 known human skin shades. The printer color profile developed using the third-order polynomial regression based on least-squares fitting provided the best model performance. The results demonstrated that by using the proposed color reproduction system, 14 different skin colors could be reproduced and excellent color reproduction performance was achieved. Evaluation of the system's color repeatability revealed a demonstrable system error, and this highlighted the need for regular evaluation. The color gamut for the proposed 3D printing system was simulated and it was demonstrated that the vast majority of skin colors can be reproduced with the exception of extremely dark or light skin color shades. This study demonstrated that the proposed color reproduction system can be effectively used to reproduce a range of human skin colors for application in advanced manufacture of soft tissue prostheses. Copyright © 2013 Elsevier Ltd. All rights reserved.
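The profile-building step, a third-order polynomial regression fitted by least squares from device colors to measured colors, can be sketched as follows (the RGB-to-CIELAB direction and the random training data are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def poly3_features(rgb):
    """All monomials r^p * g^q * b^s with p+q+s <= 3 for an (n, 3) RGB array."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cols = [np.ones_like(r)]
    for total in range(1, 4):
        for p in range(total + 1):
            for q in range(total - p + 1):
                s = total - p - q
                cols.append(r**p * g**q * b**s)
    return np.stack(cols, axis=1)

# Hypothetical training set: 240 device RGB values and measured target colors.
rng = np.random.default_rng(0)
rgb_train = rng.random((240, 3))
lab_train = rng.random((240, 3)) * 100.0   # placeholder CIELAB measurements

# Least-squares fit of the profile matrix, then application to the colors.
M, *_ = np.linalg.lstsq(poly3_features(rgb_train), lab_train, rcond=None)
lab_pred = poly3_features(rgb_train) @ M
print(lab_pred.shape)
```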
Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M
2016-12-01
Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered to be a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback and to validate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of ± 2 mm in the X-axis at most landmarks. The lower lip predictions were most inaccurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be used for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.
Spilker, R L; de Almeida, E S; Donzelli, P S
1992-01-01
This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.
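The adaptive meshing step above hinges on an element-level error indicator. As described, it is the L2 norm of the difference between the finite element total stress and its projected (nodally smoothed) counterpart; a plausible rendering in symbols (notation mine, not the authors'):

```latex
\eta_e = \left\| \sigma^{*} - \sigma^{h} \right\|_{L^2(\Omega_e)}
       = \left( \int_{\Omega_e} \left( \sigma^{*} - \sigma^{h} \right)^{2} \, d\Omega \right)^{1/2},
\qquad \sigma = \sigma^{s} + \sigma^{f},
```

where σ^h is the computed total stress (solid plus fluid phase), σ* its projection, and Ω_e the element domain; elements with large η_e are refined and those with small η_e are coarsened.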
Agribusiness: Industry Study Final Report, AY 2003-2004, Seminar 1
2004-01-01
vulnerability of our nation’s food supply to a potential biological attack, primarily due to inadequate governmental oversight. He states, “The Food and Drug...national laboratory system to assist with chemical, biological, and radiological agent identification and analysis. Following the passage of the...Genetically Modified Foods,” June 2003. Chalk, Peter, “Hitting America’s Soft Underbelly: The Potential Threat of Deliberate Biological Attacks”
Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.
Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task to physiologically monitor and control the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of the biomass formation, are smart alternatives. For the reconciliation procedure, mass and energy balances are used together with accuracy estimations of measured conversion rates, which were so far arbitrarily chosen and static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: For industrially relevant conditions, hereby the error of the resulting estimated biomass formation rate and specific substrate consumption rate could be decreased by 43 and 64 %, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the required raw signal accuracy to obtain predefined accuracies of soft-sensor estimations. Thereby, appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be additionally and substantially increased.
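The propagation step, pushing raw-signal uncertainty through to a conversion rate, can be sketched with first-order (linearized) error propagation; the conversion-rate formula and all numbers below are made up for illustration:

```python
import numpy as np

def propagate(f, x, cov, eps=1e-6):
    """First-order propagation of a measurement covariance through f,
    using a forward-difference Jacobian."""
    x = np.asarray(x, dtype=float)
    y0 = np.atleast_1d(f(x))
    J = np.zeros((y0.size, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.atleast_1d(f(x + dx)) - y0) / eps
    return y0, J @ cov @ J.T

# Made-up conversion rate: mmol of gas per hour from flow F [L/h] and an
# off-gas mole fraction x_gas, via the molar volume 22.4 L/mol at STP.
rate = lambda v: v[0] * v[1] * 1000.0 / 22.4
y, var = propagate(rate, x=[60.0, 0.02], cov=np.diag([0.5**2, 0.0005**2]))
print(y[0], np.sqrt(var[0, 0]))   # rate estimate and its standard deviation
```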
Examining the Angular Resolution of the Astro-H's Soft X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Sato, Toshiki; Iizuka, Ryo; Ishida, Manabu; Kikuchi, Naomichi; Maeda, Yoshitomo; Kurashima, Sho; Nakaniwa, Nozomi; Tomikawa, Kazuki; Hayashi, Takayuki; Mori, Hideyuki;
2016-01-01
The international x-ray observatory ASTRO-H was renamed Hitomi after launch. It covers a wide energy range from a few hundred eV to 600 keV. It is equipped with two soft x-ray telescopes (SXTs: SXT-I and SXT-S) for imaging the soft x-ray sky up to 12 keV, which focus an image onto the respective focal-plane detectors: a CCD camera (SXI) and a calorimeter (SXS). The SXTs are fabricated in quadrant units. The angular resolution in half-power diameter (HPD) of each quadrant of the SXTs ranges between 1.1 and 1.4 arc min at 4.51 keV. It was also found that one quadrant shows an energy dependence of the HPD. In order to understand the cause of this image-quality degradation and to inform future telescope development, we carried out spot scan measurements in which we illuminated the full aperture of each quadrant with a square beam 8 mm on a side. Based on the scan results, we made maps of image blurring and focus position. The former and the latter reflect the figure error and positioning error, respectively, of the foils that are within the incident 8 mm × 8 mm beam. As a result, we estimated those errors in a quadrant to be approx. 0.9 to 1.0 and approx. 0.6 to 0.9 arc min, respectively. We found that the larger the positioning error in a quadrant is, the larger its HPD is. The HPD map, which manifests the local image blurring, is very similar from quadrant to quadrant, but the map of the focus position is different from location to location in each telescope. It is also found that the difference in local performance causes the energy dependence of the HPD.
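For reference, the half-power diameter is the diameter enclosing half of the focused power, read off the normalized encircled-energy curve; a toy computation on a synthetic radial profile:

```python
import numpy as np

# Synthetic radial point-spread profile: surface brightness vs. radius (arcmin).
radius = np.linspace(0.01, 5.0, 500)
profile = np.exp(-radius / 0.5)                  # made-up PSF shape
annulus_counts = profile * 2 * np.pi * radius    # weight by annulus area

encircled = np.cumsum(annulus_counts)
encircled /= encircled[-1]                       # normalized encircled energy

hpd = 2.0 * np.interp(0.5, encircled, radius)    # diameter enclosing half the power
print(f"HPD = {hpd:.2f} arcmin")
```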
Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya
2003-01-01
The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
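A regression surrogate of the kind described, a cheap polynomial response surface standing in for the full analysis code, can be sketched as follows (two variables and one response, with synthetic data, purely for illustration):

```python
import numpy as np

# Hypothetical: fit a quadratic response surface mapping two scaled design
# variables (e.g., wing area, thrust) to one response (e.g., weight).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))                        # design samples
y = 3.0 + 0.5*X[:, 0] - 1.2*X[:, 1] + 0.8*X[:, 0]*X[:, 1]   # stand-in analysis output

# Basis: [1, x1, x2, x1*x2, x1^2, x2^2], fitted by least squares.
A = np.column_stack([np.ones(len(X)), X, X[:, :1]*X[:, 1:], X**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

x_new = np.array([[0.2, -0.4]])
a_new = np.column_stack([np.ones(1), x_new, x_new[:, :1]*x_new[:, 1:], x_new**2])
print(a_new @ coef)   # cheap "soft analysis" in place of the full code run
```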
NASA Technical Reports Server (NTRS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.;
2012-01-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be 00:52:00, 00:54:00,..., and 01:04:00. The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.; Blandford, R. D.; Bonamente, E.; Borgland, A. W.; Bregeon, J.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buehler, R.; Burgess, J. M.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Cutini, S.; Dennis, B. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Fortin, P.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grillo, L.; Grove, J. E.; Gruber, D.; Guiriec, S.; Hadasch, D.; Hayashida, M.; Hays, E.; Horan, D.; Iafrate, G.; Jóhannesson, G.; Johnson, A. S.; Johnson, W. N.; Kamae, T.; Kippen, R. M.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Meegan, C.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Murphy, R.; Naumann-Godo, M.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Paciesas, W. S.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Petrosian, V.; Pierbattista, M.; Piron, F.; Pivato, G.; Poon, H.; Porter, T. A.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sbarra, C.; Schwartz, R. A.; Sgrò, C.; Share, G. H.; Siskind, E. J.; Spinelli, P.; Takahashi, H.; Tanaka, T.; Tanaka, Y.; Thayer, J. B.; Tibaldo, L.; Tinivella, M.; Tolbert, A. K.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wilson-Hodge, C.; Wood, D. L.; Wood, K. S.; Yang, Z.
2012-04-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be "00:52:00," "00:54:00," ... , and "01:04:00." The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
Tissue velocity imaging of coronary artery by rotating-type intravascular ultrasound.
Saijo, Yoshifumi; Tanaka, Akira; Owada, Naoki; Akino, Yoshihisa; Nitta, Shinichi
2004-04-01
Intravascular ultrasound (IVUS) provides not only the dimensions of the coronary artery but also information on tissue components. In the catheterization laboratory, soft and hard plaques are classified by visual inspection of echo intensity. A so-called soft plaque contains a lipid core or thrombus, and it is believed to be more vulnerable than a hard plaque. However, it is not simple to analyze the echo signals quantitatively. When we look at a reflection signal, the intensity is affected by the distance of the object, the medium between the transducer and the object, and the fluctuation caused by rotation of the IVUS probe. The time of flight is also affected by the sound speed of the medium and the Doppler shift caused by tissue motion, but usually those can be neglected. Thus, time-domain analysis of the RF signal can be more quantitative than analysis of its intensity. In the present study, a novel imaging technique called "intravascular tissue velocity imaging" was developed for searching for a vulnerable plaque. The radio-frequency (RF) signal from a clinically used IVUS apparatus was digitized at 500 MSa/s and stored in a workstation. First, non-uniform rotation was corrected by maximizing the correlation coefficient of the circumferential RF signal distribution in two consecutive frames. Then, the correlation and displacement were calculated by analyzing the radial difference of the RF signal. Tissue velocity was determined from the displacement and the frame rate. The correlation image of normal and atherosclerotic coronary arteries clearly showed the internal and external borders of the arterial wall. Soft plaque with a low-echo area in the intima showed high velocity, while the calcified lesion showed very low tissue velocity. This technique provides important information on the tissue character of the coronary artery.
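The displacement estimation described, finding the radial shift that maximizes the correlation of RF segments between consecutive frames, can be sketched as follows (synthetic RF data; the frame rate is an assumed value, and the sound speed and two-way range conversion are standard pulse-echo assumptions):

```python
import numpy as np

fs = 500e6          # 500 MSa/s digitization rate, as in the study
c = 1540.0          # assumed sound speed in tissue, m/s
frame_rate = 30.0   # assumed IVUS frame rate, Hz

rng = np.random.default_rng(1)
rf1 = rng.standard_normal(256)       # RF segment along one radial line, frame k
true_shift = 3                       # radial motion between frames, in samples
rf2 = np.roll(rf1, true_shift)       # same segment in frame k+1

# Displacement is the lag maximizing the correlation between the two segments.
lags = np.arange(-10, 11)
cc = [np.corrcoef(rf1, np.roll(rf2, -lag))[0, 1] for lag in lags]
best = lags[int(np.argmax(cc))]

displacement = best * c / (2 * fs)   # pulse-echo: one sample = c/(2*fs) of range
velocity = displacement * frame_rate
print(best, displacement, velocity)
```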
Investigating the impact of spatial priors on the performance of model-based IVUS elastography
Richards, M S; Doyley, M M
2012-01-01
This paper describes methods that provide prerequisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore additional information is needed to provide useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded those computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
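The soft versus hard prior distinction can be made concrete as a regularized least-squares reconstruction; a generic form consistent with the description (symbols mine):

```latex
\hat{E} = \arg\min_{E} \left\| \mathbf{u}^{m} - \mathbf{u}(E) \right\|_{2}^{2}
        + \alpha \left\| \mathbf{L}\,E \right\|_{2}^{2},
```

where u^m is the measured displacement field, u(E) the displacement predicted by the forward model for modulus distribution E, and L an operator built from the segmented geometry. A hard prior forces E to be piecewise constant on the segmented regions (removing L and reducing the unknowns to one modulus per region), while a soft prior only penalizes intra-region modulus variation, which is why it tolerates errors in the geometrical information better.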
Single-Event Upset Characterization of Common First- and Second-Order All-Digital Phase-Locked Loops
NASA Astrophysics Data System (ADS)
Chen, Y. P.; Massengill, L. W.; Kauppila, J. S.; Bhuva, B. L.; Holman, W. T.; Loveless, T. D.
2017-08-01
The single-event upset (SEU) vulnerability of common first- and second-order all-digital-phase-locked loops (ADPLLs) is investigated through field-programmable gate array-based fault injection experiments. SEUs in the highest order pole of the loop filter and fraction-based phase detectors (PDs) may result in the worst case error response, i.e., limit cycle errors, often requiring system restart. SEUs in integer-based linear PDs may result in loss-of-lock errors, while SEUs in bang-bang PDs only result in temporary-frequency errors. ADPLLs with the same frequency tuning range but fewer bits in the control word exhibit better overall SEU performance.
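A toy illustration of this kind of fault injection, flipping one bit of the integral accumulator (the highest-order pole) of a simple PI-type ADPLL and watching the frequency error, is sketched below. This is my own minimal model, not the paper's FPGA setup; all gains and word lengths are arbitrary:

```python
import numpy as np

def adpll_freq_error(flip_bit=None, n=2000, kp=0.05, ki=0.001, f_ref=0.25):
    """Toy PI-type ADPLL; optionally flip one bit of the integral
    accumulator (viewed as 16-bit fixed point) halfway through the run."""
    integ, phase_ref, phase_dco, f_word = 0.0, 0.0, 0.0, 0.0
    err = []
    for k in range(n):
        phase_ref = (phase_ref + f_ref) % 1.0
        phase_dco = (phase_dco + f_word) % 1.0
        pd = ((phase_ref - phase_dco + 0.5) % 1.0) - 0.5   # wrapped phase detector
        integ += ki * pd
        if flip_bit is not None and k == n // 2:
            q = int(integ * 2**16) ^ (1 << flip_bit)       # SEU: XOR one bit
            integ = q / 2**16
        f_word = kp * pd + integ                           # loop filter -> DCO word
        err.append(f_ref - f_word)
    return np.array(err)

e_clean = adpll_freq_error()
e_seu = adpll_freq_error(flip_bit=14)
print(abs(e_clean[-100:]).max(), abs(e_seu[1000:1100]).max())  # locked vs. post-upset
```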
Liévanos, Raoul S
2018-04-16
The California Community Environmental Health Screening Tool (CalEnviroScreen) advances research and policy pertaining to environmental health vulnerability. However, CalEnviroScreen departs from its historical foundations and comparable screening tools by no longer considering racial status as an indicator of environmental health vulnerability and predictor of cumulative pollution burden. This study used conceptual frameworks and analytical techniques from environmental health and inequality literature to address the limitations of CalEnviroScreen, especially its inattention to race-based environmental health vulnerabilities. It developed an adjusted measure of cumulative pollution burden from the CalEnviroScreen 2.0 data that facilitates multivariate analyses of the effect of neighborhood racial composition on cumulative pollution burden, net of other indicators of population vulnerability, traffic density, industrial zoning, and local and regional clustering of pollution burden. Principal component analyses produced three new measures of population vulnerability, including Latina/o cumulative disadvantage that represents the spatial concentration of Latinas/os, economic disadvantage, limited English-speaking ability, and health vulnerability. Spatial error regression analyses demonstrated that concentrations of Latinas/os, followed by Latina/o cumulative disadvantage, are the strongest demographic determinants of adjusted cumulative pollution burden. Findings have implications for research and policy pertaining to cumulative impacts and race-based environmental health vulnerabilities within and beyond California.
2018-01-01
The California Community Environmental Health Screening Tool (CalEnviroScreen) advances research and policy pertaining to environmental health vulnerability. However, CalEnviroScreen departs from its historical foundations and comparable screening tools by no longer considering racial status as an indicator of environmental health vulnerability and predictor of cumulative pollution burden. This study used conceptual frameworks and analytical techniques from environmental health and inequality literature to address the limitations of CalEnviroScreen, especially its inattention to race-based environmental health vulnerabilities. It developed an adjusted measure of cumulative pollution burden from the CalEnviroScreen 2.0 data that facilitates multivariate analyses of the effect of neighborhood racial composition on cumulative pollution burden, net of other indicators of population vulnerability, traffic density, industrial zoning, and local and regional clustering of pollution burden. Principal component analyses produced three new measures of population vulnerability, including Latina/o cumulative disadvantage that represents the spatial concentration of Latinas/os, economic disadvantage, limited English-speaking ability, and health vulnerability. Spatial error regression analyses demonstrated that concentrations of Latinas/os, followed by Latina/o cumulative disadvantage, are the strongest demographic determinants of adjusted cumulative pollution burden. Findings have implications for research and policy pertaining to cumulative impacts and race-based environmental health vulnerabilities within and beyond California. PMID:29659481
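The spatial error regression referred to above is the standard specification from spatial econometrics (symbols mine):

```latex
\mathbf{y} = \mathbf{X}\boldsymbol{\beta} + \mathbf{u}, \qquad
\mathbf{u} = \lambda \mathbf{W}\mathbf{u} + \boldsymbol{\varepsilon}, \qquad
\boldsymbol{\varepsilon} \sim N(\mathbf{0}, \sigma^{2}\mathbf{I}),
```

where y is the adjusted cumulative pollution burden across tracts, X holds the vulnerability, traffic, and zoning covariates, W is a spatial weights matrix encoding neighborhood adjacency, and λ captures the local and regional clustering of pollution burden in the disturbances.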
Dubuy, Veerle; De Cocker, Katrien; De Bourdeaudhuij, Ilse; Maes, Lea; Seghers, Jan; Lefevre, Johan; De Martelaer, Kristine; Brooke, Hannah; Cardon, Greet
2014-05-16
The increasing rates of obesity among children and adolescents, especially in those from lower socio-economic backgrounds, emphasise the need for interventions promoting a healthy diet and physical activity. The present study aimed to examine the effectiveness of the 'Health Scores!' program, which combined professional football player role models with a school-based program to promote a healthy diet and physical activity to socially vulnerable children and adolescents. The intervention was implemented in two settings: professional football clubs and schools. Socially vulnerable children and adolescents (n = 165 intervention group, n = 440 control group, aged 10-14 years) provided self-reported data on dietary habits and physical activity before and after the four-month intervention. Intervention effects were evaluated using repeated measures analysis of variance. In addition, a process evaluation was conducted. No intervention effects were found for several dietary behaviours, including consumption of breakfast, fruit, soft drinks or sweet and savoury snacks. Positive intervention effects were found for self-efficacy for having a daily breakfast (p < 0.01), positive attitude towards vegetable consumption (p < 0.01) and towards lower soft drink consumption (p < 0.001). A trend towards significance (p < 0.10) was found for self-efficacy for reaching the physical activity guidelines. For sports participation no significant intervention effect was found. In total, 92 pupils completed the process evaluation; the feedback was largely positive. The 'Health Scores!' intervention was successful in increasing psychosocial correlates of a healthy diet and physical activity. The use of professional football players as a credible source for health promotion was appealing to socially vulnerable children and adolescents.
The Forced Soft Spring Equation
ERIC Educational Resources Information Center
Fay, T. H.
2006-01-01
Through numerical investigations, this paper studies examples of the forced Duffing type spring equation with [epsilon] negative. By performing trial-and-error numerical experiments, the existence is demonstrated of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions. Subharmonic boundaries are…
The Cannabis Pathway to Non-Affective Psychosis may Reflect Less Neurobiological Vulnerability
Løberg, Else-Marie; Helle, Siri; Nygård, Merethe; Berle, Jan Øystein; Kroken, Rune A.; Johnsen, Erik
2014-01-01
There is a high prevalence of cannabis use reported in non-affective psychosis. Early prospective longitudinal studies conclude that cannabis use is a risk factor for psychosis, and neurochemical studies on cannabis have suggested potential mechanisms for this effect. Recent advances in the field of neuroscience and genetics may have important implications for our understanding of this relationship. Importantly, we need to better understand the vulnerability × cannabis interaction to shed light on the mediators of cannabis as a risk factor for psychosis. Thus, the present study reviews recent literature on several variables relevant for understanding the relationship between cannabis and psychosis, including age of onset, cognition, brain functioning, family history, genetics, and neurological soft signs (NSS) in non-affective psychosis. Compared with non-using non-affective psychosis, the present review shows that there seem to be fewer stable cognitive deficits in patients with cannabis use and psychosis, in addition to fewer NSS and possibly more normalized brain functioning, indicating less neurobiological vulnerability for psychosis. There are, however, some familial and genetic vulnerabilities present in the cannabis psychosis group, which may influence the cannabis pathway to psychosis by increasing sensitivity to cannabis. Furthermore, an earlier age of onset suggests a different pathway to psychosis in the cannabis-using patients. Two alternative vulnerability models are presented to integrate these seemingly paradoxical findings. PMID:25477825
Development of Biological Acoustic Impedance Microscope and its Error Estimation
NASA Astrophysics Data System (ADS)
Hozumi, Naohiro; Nakano, Aiko; Terauchi, Satoshi; Nagao, Masayuki; Yoshida, Sachiko; Kobayashi, Kazuto; Yamamoto, Seiji; Saijo, Yoshifumi
This report deals with the scanning acoustic microscope for imaging cross-sectional acoustic impedance of biological soft tissues. A focused acoustic beam was transmitted to the tissue object mounted on the "rear surface" of a plastic substrate. A rat cerebellum tissue and a reference material were observed at the same time under the same conditions. As the incidence is not vertical, not only a longitudinal wave but also a transversal wave is generated in the substrate. The error in acoustic impedance assuming vertical incidence was estimated. It was shown that the error can be precisely compensated if the beam pattern and the acoustic parameters of the coupling medium and substrate are known.
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
This databank is the collation of radiation test data submitted by many testers and serves as a reference for engineers who are concerned with and have some knowledge of the effects of the natural radiation environment on microcircuits. It contains radiation sensitivity results from ground tests and is divided into two sections. Section A lists total dose damage information, and section B lists single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup).
CrossTalk: The Journal of Defense Software Engineering. Volume 19, Number 9
2006-09-01
it does. Several freely downloadable methodologies have emerged to support the developer in modeling threats to applications and other soft...SECURIS. Model-Driven Development and Analysis of Secure Information Systems <www.sintef.no/content/page1_1824.aspx>. 10. The SECURIS Project...By applying these methods to the SDLC, we can actively reduce the number of known vulnerabilities in software as it is developed.
Patient-specific polyetheretherketone facial implants in a computer-aided planning workflow.
Guevara-Rojas, Godoberto; Figl, Michael; Schicho, Kurt; Seemann, Rudolf; Traxler, Hannes; Vacariu, Apostolos; Carbon, Claus-Christian; Ewers, Rolf; Watzinger, Franz
2014-09-01
In the present study, we report an innovative workflow using polyetheretherketone (PEEK) patient-specific implants for esthetic corrections in the facial region through onlay grafting. The planning includes implant design according to virtual osteotomy and generation of a subtraction volume. The implant design was refined by stepwise changing the implant geometry according to soft tissue simulations. One patient was scanned using computed tomography. PEEK implants were interactively designed and manufactured using rapid prototyping techniques. Positioning intraoperatively was assisted by computer-aided navigation. Two months after surgery, a 3-dimensional surface model of the patient's face was generated using photogrammetry. Finally, the Hausdorff distance calculation was used to quantify the overall error, encompassing the failures in soft tissue simulation and implantation. The implant positioning process during surgery was satisfactory. The simulated soft tissue surface and the photogrammetry scan of the patient showed a high correspondence, especially where the skin covered the implants. The mean total error (Hausdorff distance) was 0.81 ± 1.00 mm (median 0.48, interquartile range 1.11). The spatial deviation remained less than 0.7 mm for the vast majority of points. The proposed workflow provides a complete computer-aided design, computer-aided manufacturing, and computer-aided surgery chain for implant design, allowing for soft tissue simulation, fabrication of patient-specific implants, and image-guided surgery to position the implants. Much of the surgical complexity resulting from osteotomies of the zygoma, chin, or mandibular angle might be transferred into the planning phase of patient-specific implants. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
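The overall-error metric named here, the Hausdorff distance between the simulated surface and the photogrammetry scan, can be computed with SciPy. The point sets below are synthetic stand-ins; note the study reports the mean and spread of surface distances, whereas the symmetric Hausdorff distance shown is the worst-case variant:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(3)
simulated = rng.random((500, 3)) * 100.0             # simulated face surface, mm
scanned = simulated + rng.normal(0, 0.5, (500, 3))   # photogrammetry points, mm

# Symmetric Hausdorff distance: worst-case mismatch between the two surfaces.
d = max(directed_hausdorff(simulated, scanned)[0],
        directed_hausdorff(scanned, simulated)[0])
print(f"Hausdorff distance = {d:.2f} mm")
```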
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cappelli, M.; Gadomski, A. M.; Sepiellis, M.
In the field of nuclear power plant (NPP) safety modeling, the perception of the role of socio-cognitive engineering (SCE) is continuously increasing. Today, the focus is especially on the identification of human and organizational decisional errors caused by operators and managers under high-risk conditions, as is evident from analyzing reports on nuclear incidents that occurred in the past. At present, the engineering and social safety requirements need to enlarge their domain of interest in such a way as to include all possible loss-generating events that could be the consequence of an abnormal state of an NPP. Socio-cognitive modeling of Integrated Nuclear Safety Management (INSM) using the TOGA meta-theory has been discussed during the ICCAP 2011 Conference. In this paper, more detailed aspects of the cognitive decision-making and its possible human errors and organizational vulnerability are presented. The formal TOGA-based network model for cognitive decision-making enables the indication and analysis of nodes and arcs in which plant operator and manager errors may appear. TOGA's multi-level IPK (Information, Preferences, Knowledge) model of abstract intelligent agents (AIAs) is applied. In the NPP context, a super-safety approach is also discussed, taking into consideration unexpected events and managing them from a systemic perspective. As the nature of human errors depends on the specific properties of the decision-maker and the decisional context of operation, a classification of decision-making using IPK is suggested. Several types of initial situations of decision-making useful for the diagnosis of NPP operator and manager errors are considered. The developed models can be used as a basis for applications to NPP educational or engineering simulators to be used for training the NPP executive staff. (authors)
2012-01-01
Background Although proton radiotherapy is a promising new approach for cancer patients, functional interference is a concern for patients with implantable cardioverter defibrillators (ICDs). The purpose of this study was to clarify the influence of secondary neutrons induced by proton radiotherapy on ICDs. Methods The experimental set-up simulated proton radiotherapy for a patient with an ICD. Four new ICDs were placed 0.3 cm laterally and 3 cm distally outside the radiation field in order to evaluate the influence of secondary neutrons. The cumulative in-field radiation dose was 107 Gy over 10 sessions of irradiation with a dose rate of 2 Gy/min and a field size of 10 × 10 cm². After each radiation fraction, interference with the ICD by the therapy was analyzed by an ICD programmer. The dose distributions of secondary neutrons were estimated by Monte-Carlo simulation. Results The frequency of the power-on reset, the most serious soft error where the programmed pacing mode changes temporarily to a safety back-up mode, was 1 per approximately 50 Gy. The total number of soft errors logged in all devices was 29, which was a rate of 1 soft error per approximately 15 Gy. No permanent device malfunctions were detected. The calculated dose of secondary neutrons per 1 Gy proton dose in the phantom was approximately 1.3-8.9 mSv/Gy. Conclusions With the present experimental settings, the probability was approximately 1 power-on reset per 50 Gy, which was below the dose level (60-80 Gy) generally used in proton radiotherapy. Further quantitative analysis in various settings is needed to establish guidelines regarding proton radiotherapy for cancer patients with ICDs. PMID:22284700
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-11-01
Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of the precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP) with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH) for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations with different climates in Iran were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and determination coefficient (R²) were employed to evaluate the performance of conventional/single MARS, BN and GEP, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than single MARS, BN and GEP models. Overall, MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of proposed methodology for precise modeling of precipitation.
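The hybrid idea, adding a GARCH model of the stochastic residual to a deterministic soft-computing fit, can be sketched as follows; the seasonal stand-in for the MARS/BN/GEP part and all GARCH(1,1) coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(600)
precip = 50 + 20*np.sin(t * 2*np.pi/12) + rng.normal(0, 5, 600)   # synthetic monthly series

deterministic = 50 + 20*np.sin(t * 2*np.pi/12)   # stand-in for the soft-computing fit
resid = precip - deterministic                   # stochastic part to model with GARCH

omega, alpha, beta = 1.0, 0.1, 0.85              # assumed GARCH(1,1) coefficients
h = np.empty_like(resid)
h[0] = resid.var()
for k in range(1, len(resid)):
    h[k] = omega + alpha*resid[k-1]**2 + beta*h[k-1]   # conditional variance recursion

h_next = omega + alpha*resid[-1]**2 + beta*h[-1]
print(deterministic[-1], np.sqrt(h_next))   # point forecast and residual std. dev.
```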
Haptic communication between humans is tuned by the hard or soft mechanics of interaction
Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne
2018-01-01
To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966
Lu, Min-Hua; Mao, Rui; Lu, Yin; Liu, Zheng; Wang, Tian-Fu; Chen, Si-Ping
2012-01-01
Indentation testing is a widely used approach to evaluate mechanical characteristics of soft tissues quantitatively. Young's modulus of soft tissue can be calculated from the force-deformation data with known tissue thickness and Poisson's ratio using Hayes' equation. Our group previously developed a noncontact indentation system using a water jet as a soft indenter as well as the coupling medium for the propagation of high-frequency ultrasound. The novel system has shown its ability to detect the early degeneration of articular cartilage. However, there is still lack of a quantitative method to extract the intrinsic mechanical properties of soft tissue from water jet indentation. The purpose of this study is to investigate the relationship between the loading-unloading curves and the mechanical properties of soft tissues to provide an imaging technique of tissue mechanical properties. A 3D finite element model of water jet indentation was developed with consideration of finite deformation effect. An improved Hayes' equation has been derived by introducing a new scaling factor which is dependent on Poisson's ratios v, aspect ratio a/h (the radius of the indenter/the thickness of the test tissue), and deformation ratio d/h. With this model, the Young's modulus of soft tissue can be quantitatively evaluated and imaged with the error no more than 2%. PMID:22927890
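The underlying relation is Hayes' solution for indentation of a thin elastic layer bonded to a rigid substrate; in a common form (symbols mine):

```latex
E = \frac{P\,(1-\nu^{2})}{2\,a\,d\,\kappa},
```

where P is the indentation force, d the deformation, a the indenter radius, ν Poisson's ratio, and κ the scaling factor. In Hayes' original tables κ = κ(ν, a/h) for tissue thickness h; the improvement described above replaces this with a κ that also depends on the deformation ratio d/h to account for the finite deformation effect.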
Neurological soft signs in children with attention deficit hyperactivity disorder.
Patankar, V C; Sangle, J P; Shah, Henal R; Dave, M; Kamath, R M
2012-04-01
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder with wide repercussions. Since it is etiologically related to delayed maturation, neurological soft signs (NSS) could be a tool to assess this. Further, the correlation of NSS with severity and type of ADHD and the presence of Specific Learning Disability (SLD) would give further insight into it. To study neurological soft signs and risk factors (type, mode of delivery, and milestones) in children with ADHD and to correlate NSS with type and severity of ADHD and with co-morbid Specific Learning Disability. The study was carried out in the child care services of a tertiary teaching urban hospital. It was a cross-sectional single interview study. 52 consecutive children diagnosed as having ADHD were assessed for the presence of neurological soft signs using the Revised Physical and Neurological Examination Soft Signs scale (PANESS). The ADHD was rated by parents using the ADHD parent rating scale. The data were analyzed using the chi-squared test and Pearson's correlational analysis. Neurological soft signs are present in 84% of children. They are equally present in both the inattentive-hyperactive and impulsive-hyperactive types of ADHD. The presence of neurological soft signs in ADHD is independent of the presence of co-morbid SLD. Dysrhythmias and overflow with gait were typically seen for the impulsive-hyperactive type, and higher severity of ADHD is related to more errors.
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
100 Gbit/s wireless communication protocol processing stresses all parts of a communication system to the utmost. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires the rethinking of the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
Novel intelligent real-time position tracking system using FPGA and fuzzy logic.
Soares dos Santos, Marco P; Ferreira, J A F
2014-03-01
The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in these architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx® Virtex-II FPGA (XC2v1000) and in a soft real-time platform NI CompactRIO®-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
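The interaction described, with the outer decoder stopping the inner iterations on success, reduces to a simple control loop; a skeleton with stubbed-out decoders (any real system would supply actual turbo and Reed-Solomon implementations):

```python
# Skeleton of the interactive decoding loop described above; decoders are stubs.
MAX_ITERS = 8

def turbo_iteration(llrs, state):
    """One inner turbo iteration: returns updated soft outputs (stub)."""
    return llrs, state

def rs_soft_decode(soft_bits):
    """Reliability-based RS decoding: returns (success, codeword) (stub)."""
    return True, soft_bits

def decode(channel_llrs):
    soft, state = channel_llrs, None
    for it in range(MAX_ITERS):
        soft, state = turbo_iteration(soft, state)   # inner decoder pass
        ok, word = rs_soft_decode(soft)              # outer decoder attempt
        if ok:                                       # outer success terminates
            return word, it + 1                      # the inner iterations early
    return word, MAX_ITERS                           # give up at the preset cap

word, iters = decode([0.9, -1.2, 0.4])
print(iters)
```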
Bias in the Counseling Process: How to Recognize and Avoid It.
ERIC Educational Resources Information Center
Morrow, Kelly A.; Deidan, Cecilia T.
1992-01-01
Notes that counselors' vulnerability to inferential bias during counseling process may result in misdiagnosis and improper interventions. Discusses these inferential biases: availability and representativeness heuristics; fundamental attribution error; anchoring, prior knowledge, and labeling; confirmatory hypothesis testing; and reconstructive…
NASA Astrophysics Data System (ADS)
ÁLvarez, A.; Orfila, A.; Tintoré, J.
2004-03-01
Satellites are the only systems able to provide continuous information on the spatiotemporal variability of vast areas of the ocean. Relatively long-term time series of satellite data are nowadays available. These spatiotemporal time series of satellite observations can be employed to build empirical models, called satellite-based ocean forecasting (SOFT) systems, to forecast certain aspects of future ocean states. SOFT systems can predict satellite-observed fields at different timescales. The forecast skill of SOFT systems forecasting the sea surface temperature (SST) at monthly timescales has been extensively explored in previous works. In this work we study the performance of two SOFT systems forecasting, respectively, the SST and sea level anomaly (SLA) at weekly timescales, that is, providing forecasts of the weekly averaged SST and SLA fields with 1 week in advance. The SOFT systems were implemented in the Ligurian Sea (Western Mediterranean Sea). Predictions from the SOFT systems are compared with observations and with the predictions obtained from persistence models. Results indicate that the SOFT system forecasting the SST field is always superior in terms of predictability to persistence. Minimum prediction errors in the SST are obtained during winter and spring seasons. On the other hand, the biggest differences between the performance of SOFT and persistence models are found during summer and autumn. These changes in the predictability are explained on the basis of the particular variability of the SST field in the Ligurian Sea. Concerning the SLA field, no improvements with respect to persistence have been found for the SOFT system forecasting the SLA field.
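The persistence baseline used here simply predicts that next week's field equals the current one; a toy comparison of forecast RMSEs on a synthetic weekly series (both series are made up):

```python
import numpy as np

rng = np.random.default_rng(4)
sst = 20 + 3*np.sin(np.arange(200) * 2*np.pi/52) + rng.normal(0, 0.3, 200)

persistence = sst[:-1]                               # forecast for week t+1 = value at t
soft_forecast = sst[1:] + rng.normal(0, 0.2, 199)    # stand-in for a SOFT-system output
truth = sst[1:]

rmse = lambda a, b: np.sqrt(np.mean((a - b)**2))
print(rmse(persistence, truth), rmse(soft_forecast, truth))   # skill vs. persistence
```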
Latest trends in parts SEP susceptibility from heavy ions
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Smith, L. S.; Soli, George A.; Koga, R.; Kolasinski, W. A.
1989-01-01
JPL and Aerospace have collected a third set of heavy-ion single-event phenomena (SEP) test data since their last joint IEEE publications in December 1985 and December 1987. Trends in SEP susceptibility (e.g., soft errors and latchup) for state-of-the-art parts are presented. Results of the study indicate that hard technologies and unacceptably soft technologies can be flagged. In some instances, specific tested parts can be taken as candidates for key microprocessors or memories. As always with radiation test data, specific test data for qualified flight parts is recommended for critical applications.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
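The bitwise MAP rule sketched above decides each bit from its a posteriori probability given the whole received sequence r, and the corresponding log-likelihood ratio is the soft output passed to later stages (standard form):

```latex
\hat{u}_{i} = \arg\max_{u \in \{0,1\}} P\left(u_{i}=u \mid \mathbf{r}\right), \qquad
L(u_{i}) = \log \frac{P(u_{i}=1 \mid \mathbf{r})}{P(u_{i}=0 \mid \mathbf{r})},
```

so the sign of L(u_i) gives the hard decision and its magnitude the reliability, which is exactly the information lost by a hard-output ML decoder.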
A Randomized Trial of Soft Multifocal Contact Lenses for Myopia Control: Baseline Data and Methods.
Walline, Jeffrey J; Gaume Giannoni, Amber; Sinnott, Loraine T; Chandler, Moriah A; Huang, Juan; Mutti, Donald O; Jones-Jordan, Lisa A; Berntsen, David A
2017-09-01
The Bifocal Lenses In Nearsighted Kids (BLINK) study is the first soft multifocal contact lens myopia control study to compare add powers and measure peripheral refractive error in the vertical meridian, so it will provide important information about the potential mechanism of myopia control. The BLINK study is a National Eye Institute-sponsored, double-masked, randomized clinical trial to investigate the effects of soft multifocal contact lenses on myopia progression. This article describes the subjects' baseline characteristics and study methods. Subjects were 7 to 11 years old, had a spherical component of -0.75 to -5.00 diopters (D) and less than 1.00 D astigmatism, and had 20/25 or better logMAR distance visual acuity with manifest refraction in each eye and with +2.50-D add soft bifocal contact lenses on both eyes. Children were randomly assigned to wear Biofinity single-vision, Biofinity Multifocal "D" with a +1.50-D add power, or Biofinity Multifocal "D" with a +2.50-D add power contact lenses. We examined 443 subjects at the baseline visits, and 294 (66.4%) subjects were enrolled. Of the enrolled subjects, 177 (60.2%) were female, and 200 (68%) were white. The mean (± SD) age was 10.3 ± 1.2 years, and 117 (39.8%) of the eligible subjects were younger than 10 years. The mean spherical equivalent refractive error, measured by cycloplegic autorefraction, was -2.39 ± 1.00 D. The best-corrected binocular logMAR visual acuity with glasses was +0.01 ± 0.06 (20/21) at distance and -0.03 ± 0.08 (20/18) at near. The BLINK study subjects are similar to patients who would routinely be eligible for myopia control in practice, so the results will provide clinical information about soft bifocal contact lens myopia control as well as information about the mechanism of the treatment effect, if one occurs.
Mitigate Soft Target’s Vulnerability and Prevent Crime Through Biometrics
2013-12-01
Identifying a known criminal or terrorist, and providing...these questions.
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction followed by full correction and then by over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles were found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of myopia progression reported with different types and levels of myopia correction.
NASA Astrophysics Data System (ADS)
Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.
2017-02-01
The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water, 1540 m/s [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulations from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need of ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
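The displacement mechanism is simple: the scanner converts echo time to depth with an assumed SOS, so the displayed depth scales by c_true/c_assumed. A worked example with an illustrative depth:

```python
# Range error from a speed-of-sound mismatch: depth = time * c / 2, so a wrong
# assumed c rescales every displayed depth by c_true / c_assumed.
c_assumed = 1540.0      # m/s, water-calibrated default
c_true = 1613.0         # m/s, upper end of reported soft-tissue SOS
depth_displayed = 50.0  # mm, an illustrative target depth (not from the paper)

depth_true = depth_displayed * c_true / c_assumed
print(f"offset = {depth_true - depth_displayed:.2f} mm")   # ~2.4 mm at 50 mm depth
```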
NASA Astrophysics Data System (ADS)
Chertok, I. M.; Belov, A. V.
2018-03-01
Correction to: Solar Phys https://doi.org/10.1007/s11207-017-1169-1 We found an important error in the text of our article. On page 6, the second sentence of Section 3.2 "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± one Carrington rotation with a step of two rotations." should instead read "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± 2.5 Carrington rotations with a step of two rotations." We regret the inconvenience. The online version of the original article can be found at https://doi.org/10.1007/s11207-017-1169-1
Safeguarding Databases Basic Concepts Revisited.
ERIC Educational Resources Information Center
Cardinali, Richard
1995-01-01
Discusses issues of database security and integrity, including computer crime and vandalism, human error, computer viruses, employee and user access, and personnel policies. Suggests some precautions to minimize system vulnerability such as careful personnel screening, audit systems, passwords, and building and software security systems. (JKP)
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It is well known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining just as accurate. The normalized root-mean-square error of the proposed KF-FEM in reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noise present in the system state and measurement data.
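The filtering formulation described above is straightforward to prototype. Below is a minimal sketch of one linear Kalman predict/update cycle in Python with NumPy; it is not the authors' KF-FEM code. In their formulation, the transition matrix A would come from the Newmark discretization of the FEM equilibrium equations and the measurement z from real-time displacement data; all shapes and names here are illustrative.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and its covariance
    z    : measurement vector (e.g. tracked surface displacements)
    A, H : state-transition and observation matrices
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate the state through the (Newmark-derived) dynamics.
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update: blend prediction and measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```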
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed from a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are computed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters from laser line imaging, and the approximation networks compute the three-dimensional vision from the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is assessed via the calibration accuracy and the micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, which can all be combined with SISO decoding, are presented. The three detection methods are investigated at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density-parity-check (LDPC) coded DP-QDB systems by simulations. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When the LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off between transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator
Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng
2018-01-01
A new type of soft actuator material—an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2—is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots. PMID:29853999
A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator.
He, Bin; Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng
2018-01-01
A new type of soft actuator material, an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2, is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots.
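For readers who want to reproduce the uniaxial stress calculation described in the two records above, the standard incompressible Mooney-Rivlin relations take the following textbook form (this is the generic polynomial model, not necessarily the authors' exact notation):

```latex
W = \sum_{i+j \ge 1} c_{ij}\,(I_1 - 3)^i (I_2 - 3)^j, \qquad
\sigma_{\text{uniaxial}} = 2\left(\lambda^2 - \lambda^{-1}\right)
\left(\frac{\partial W}{\partial I_1} + \frac{1}{\lambda}\frac{\partial W}{\partial I_2}\right),
```

with I_1 = λ² + 2/λ and I_2 = 2λ + 1/λ² for an incompressible uniaxial stretch λ. Truncating the sum at i + j ≤ 2 yields the five-parameter model (c10, c01, c20, c11, c02); extending it to i + j ≤ 3 yields the nine-parameter model.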
Lost in Translation: the Case for Integrated Testing
NASA Technical Reports Server (NTRS)
Young, Aaron
2017-01-01
The building of a spacecraft is complex and often involves multiple suppliers and companies that have their own designs and processes. Standards have been developed across the industries to reduce the chances for critical flight errors at the system level, but the spacecraft is still vulnerable to the introduction of critical errors during integration of these systems. Critical errors can occur at any time during the process, and in many cases human reliability analysis (HRA) identifies human error as a risk driver. Most programs have a test plan in place that is intended to catch these errors, but it is not uncommon for schedule and cost stress to result in less testing than initially planned. Therefore, integrated testing, or "testing as you fly," is essential as a final check on the design and assembly to catch any errors prior to the mission. This presentation will outline the unique benefits of integrated testing in catching critical flight errors that can otherwise go undetected, discuss the HRA methods used to identify opportunities for human error, and review lessons learned and challenges over ownership of testing.
Programmable Numerical Function Generators: Architectures and Synthesis Method
2005-08-01
generates HDL (Hardware Description Language) code from the design specification described by Scilab [14], a MATLAB-like numerical calculation soft...cad.com/Error-NFG/. [14] Scilab 3.0, INRIA-ENPC, France, http://scilabsoft.inria.fr/ [15] M. J. Schulte and J. E. Stine, "Approximating elementary functions
Overview of Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarthy, K. P.; Schwartz, H. R.; Smith, L. S.
1998-01-01
A fifth set of heavy ion single event effects (SEE) test data have been collected since the last IEEE publications (1,2,3,4) in December issues for 1985, 1987, 1989, and 1991. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.; Bzdusek, K.
Purpose: To introduce methods to analyze Deformable Image Registration (DIR) and identify regions of potential DIR errors. Methods: DIR Deformable Vector Fields (DVFs) quantifying patient anatomic changes were evaluated using the Jacobian determinant and the magnitude of DVF curl as functions of tissue density and tissue type. These quantities represent local relative deformation and rotation, respectively. Large values in dense tissues can potentially identify non-physical DVF errors. For multiple DVFs per patient, histograms and visualization of DVF differences were also considered. To demonstrate the capabilities of these methods, we computed multiple DVFs for each of five Head and Neck (H&N) patients (P1–P5) via a Fast-symmetric Demons (FSD) algorithm and via a Diffeomorphic Demons (DFD) algorithm, and show the potential to identify DVF errors. Results: Quantitative comparisons of the FSD and DFD registrations revealed <0.3 cm DVF differences in >99% of all voxels for P1, >96% for P2, and >90% of voxels for P3. While the FSD and DFD registrations were very similar for these patients, the Jacobian determinant was >50% in 9–15% of soft tissue and in 3–17% of bony tissue in each of these cases. The volumes of large soft tissue deformation were consistent for all five patients using the FSD algorithm (mean 15%±4% volume), whereas DFD reduced regions of large deformation by 10% volume (785 cm³) for P4 and by 14% volume (1775 cm³) for P5. The DFD registrations resulted in fewer regions of large DVF curl; >50% rotations in FSD registrations averaged 209±136 cm³ in soft tissue and 10±11 cm³ in bony tissue, but using DFD these values were reduced to 42±53 cm³ and 1.1±1.5 cm³, respectively. Conclusion: Analysis of the Jacobian determinant and curl as functions of tissue density can identify regions of potential DVF errors by identifying non-physical deformations and rotations. Collaboration with Philips Healthcare, as indicated in authorship.
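The two DVF quality metrics used above, the Jacobian determinant and the curl magnitude, are simple to compute from a displacement field on a regular grid. A minimal NumPy sketch (the array layout and voxel spacing are assumptions, not details from the abstract):

```python
import numpy as np

def dvf_metrics(u, spacing=(1.0, 1.0, 1.0)):
    """Jacobian determinant and curl magnitude of a displacement field.

    u: array of shape (3, nz, ny, nx); u[i] is the i-th displacement
    component on a regular grid with the given voxel spacing."""
    # g[i][j] = d(u_i)/d(x_j), with axes ordered (z, y, x).
    g = np.array([np.gradient(u[i], *spacing) for i in range(3)])
    # Deformation gradient F = I + du/dx; det(F) is the local volume ratio.
    F = g + np.eye(3)[:, :, None, None, None]
    jac = np.linalg.det(np.moveaxis(F, (0, 1), (-2, -1)))
    # Curl components from the antisymmetric part of the gradient.
    curl = np.stack([g[2][1] - g[1][2],
                     g[0][2] - g[2][0],
                     g[1][0] - g[0][1]])
    return jac, np.linalg.norm(curl, axis=0)
```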
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods. These methods are given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed a RMS error of 0.89 mm. It turned out that the remaining triangulation errors only cause small inaccuracies for the tracking of a rigid object. Here, the tracking accuracy was given by a RMS translational error of 0.33 mm and a RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
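The Linear-Eigen method referred to above is the standard homogeneous linear triangulation (solve A X = 0 in the least-squares sense). A minimal two-view NumPy sketch; the camera matrices and pixel coordinates are assumed inputs, not values from the paper:

```python
import numpy as np

def triangulate_linear(P1, P2, x1, x2):
    """Linear-Eigen style triangulation of one 3-D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: (u, v) pixel observations of the same point.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Minimizer of ||A X|| with ||X|| = 1: right singular vector
    # associated with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # dehomogenize
```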
Holmes, Avram J; Bogdan, Ryan; Pizzagalli, Diego A
2010-04-01
A variable number of tandem repeats (short (S) vs long (L)) in the promoter region of the serotonin transporter gene (5-HTTLPR) and a functional variant of a single-nucleotide polymorphism (rs25531) in 5-HTTLPR have recently been associated with increased risk for major depressive disorder (MDD). In particular, relative to L/L or L(A) homozygotes (hereafter referred to as L' participants), S carriers or L(g)-allele carriers (S' participants) have been found to have a higher probability of developing depression after stressful life events, although inconsistencies abound. Previous research indicates that patients with MDD are characterized by executive dysfunction and abnormal activation within the anterior cingulate cortex (ACC), particularly in situations requiring adaptive behavioral adjustments following errors and response conflict (action monitoring). The goal of this study was to test whether psychiatrically healthy S' participants would show abnormalities similar to those of MDD subjects. To this end, 19 S' and 14 L' participants performed a modified Flanker task known to induce errors, response conflict, and activations in various ACC subdivisions during functional magnetic resonance imaging. As hypothesized, relative to L' participants, S' participants showed (1) impaired post-error and post-conflict behavioral adjustments; (2) larger error-related rostral ACC activation; and (3) lower conflict-related dorsal ACC activation. As similar behavioral and neural dysfunctions have recently been described in MDD patient samples, the current results raise the possibility that impaired action monitoring and associated ACC dysregulation may represent risk factors for increased vulnerability to depression.
Modelling social vulnerability in sub-Saharan West Africa using a geographical information system
Arokoyu, Samuel B.
2015-01-01
In recent times, disasters and risk management have gained significant attention, especially with increasing awareness of the risks and increasing impact of natural and other hazards, particularly in the developing world. Vulnerability, the potential for loss of life or property from disaster, has biophysical or social dimensions. Social vulnerability relates to societal attributes which have negative impacts on disaster outcomes. This study sought to develop a spatially explicit index of social vulnerability, thus addressing the dearth of research in this area in sub-Saharan Africa. Nineteen variables were identified covering various aspects. Descriptive analysis of these variables revealed high heterogeneity across the South West region of Nigeria for both the state and the local government areas (LGAs). Feature identification using correlation analysis identified six important variables. Factor analysis identified two dimensions, namely accessibility and socioeconomic conditions, from this subset. A social vulnerability index (SoVI) showed that Ondo and Ekiti have more vulnerable LGAs than other states in the region. About 50% of the LGAs in Osun and Ogun have a relatively low social vulnerability. Distribution of the SoVI shows that there are great differences within states as well as across regions. Scores of population density, disability and poverty have a high margin of error in relation to mean state scores. The study showed that with a geographical information system there are opportunities to model social vulnerability and monitor its evolution and dynamics across the continent.
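The SoVI construction summarized above (standardize the indicators, reduce them with factor analysis, combine the retained component scores) can be sketched in a few lines. PCA stands in here for the factor analysis, and the sign handling is simplified; all data handling is hypothetical:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def sovi_scores(indicators, n_components=2):
    """Toy SoVI: rows are LGAs, columns are vulnerability indicators.

    Real SoVI work flips component signs so that larger scores always
    mean higher vulnerability; that step is omitted here."""
    z = StandardScaler().fit_transform(indicators)
    components = PCA(n_components=n_components).fit_transform(z)
    return components.sum(axis=1)   # additive index over retained factors
```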
Future Short Range Ground-Based Air Defence: System Drivers, Characteristics and Architectures
2001-03-01
vulnerable being on the right. Although for completeness the defended asset characteristics shown in Table 1 are based upon a conventional armoured formation... [Table 1 residue, characteristics graded roughly from most to least vulnerable: Camouflage: scrimmed / draped / visual / full thermal; EMCON: 4 / 3 / 2 / 1; Visibility: line of sight / occulting/obscured / non line of sight; Contact: static FLOT / fluid / confused melee; Armour: soft / semi-hard / hard / defensive aids; Protection (digging): in open / under cover / dug in / full o/h protection; AD: none / AAAD / CAD / fully...]
Clinical decision-making: heuristics and cognitive biases for the ophthalmologist.
Hussain, Ahsen; Oestreicher, James
Diagnostic errors have a significant impact on health care outcomes and patient care. The underlying causes and development of diagnostic error are complex with flaws in health care systems, as well as human error, playing a role. Cognitive biases and a failure of decision-making shortcuts (heuristics) are human factors that can compromise the diagnostic process. We describe these mechanisms, their role with the clinician, and provide clinical scenarios to highlight the various points at which biases may emerge. We discuss strategies to modify the development and influence of these processes and the vulnerability of heuristics to provide insight and improve clinical outcomes. Copyright © 2017 Elsevier Inc. All rights reserved.
A modified JPEG-LS lossless compression method for remote sensing images
NASA Astrophysics Data System (ADS)
Deng, Lihua; Huang, Zhenghua
2015-12-01
Like many variable-length source coders, JPEG-LS is highly vulnerable to channel errors, which occur in the transmission of remote sensing images. Error diffusion is one of the important factors that affect its robustness. The common method of improving the error resilience of JPEG-LS is dividing the image into many strips or blocks and then coding each of them independently, but this method reduces the coding efficiency. In this paper, a block-based JPEG-LS lossless compression method with an adaptive parameter is proposed. In the modified scheme, the threshold parameter RESET is adapted to the image, and the compression efficiency is close to that of conventional JPEG-LS.
Stenemo, Fredrik; Jarvis, Nicholas
2007-09-01
A simulation tool for site-specific vulnerability assessments of pesticide leaching to groundwater was developed, based on the pesticide fate and transport model MACRO, parameterized using pedotransfer functions and reasonable worst-case parameter values. The effects of uncertainty in the pedotransfer functions on simulation results were examined for 48 combinations of soils, pesticides and application timings, by sampling pedotransfer function regression errors and propagating them through the simulation model in a Monte Carlo analysis. An uncertainty factor, f(u), was derived, defined as the ratio between the concentration simulated with no errors, c(sim), and the 80th percentile concentration for the scenario. The pedotransfer function errors caused a large variation in simulation results, with f(u) ranging from 1.14 to 1440, with a median of 2.8. A non-linear relationship was found between f(u) and c(sim), which can be used to account for parameter uncertainty by correcting the simulated concentration, c(sim), to an estimated 80th percentile value. For fine-textured soils, the predictions were most sensitive to errors in the pedotransfer functions for two parameters regulating macropore flow (the saturated matrix hydraulic conductivity, K(b), and the effective diffusion pathlength, d) and two water retention function parameters (van Genuchten's N and alpha parameters). For coarse-textured soils, the model was also sensitive to errors in the exponent in the degradation water response function and the dispersivity, in addition to K(b), but showed little sensitivity to d. To reduce uncertainty in model predictions, improved pedotransfer functions for K(b), d, N and alpha would therefore be most useful. 2007 Society of Chemical Industry
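The uncertainty-factor procedure lends itself to a small Monte Carlo sketch. The leaching model below is a hypothetical stand-in (the study runs MACRO), and f_u is taken as the 80th percentile concentration divided by the error-free simulation, which is consistent with the reported values above one:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_leaching(params):
    """Hypothetical stand-in for one MACRO run: returns a leachate
    concentration for a given parameter vector."""
    return float(np.exp(params.sum()))

base = np.array([0.10, -0.20, 0.05, 0.30])   # pedotransfer point estimates
sigma = np.array([0.20, 0.15, 0.10, 0.25])   # regression error SDs (assumed)

c_sim = simulate_leaching(base)              # error-free simulation
draws = [simulate_leaching(base + rng.normal(0.0, sigma))
         for _ in range(1000)]               # propagate regression errors
f_u = np.percentile(draws, 80) / c_sim       # uncertainty factor
c_corrected = f_u * c_sim                    # estimated 80th percentile value
```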
Liu, Xiang; Effenberger, Frank; Chand, Naresh
2015-03-09
We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking, and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
Supporting Patient Provider Communication across Medical Settings
ERIC Educational Resources Information Center
Nordness, Amy S.; Beukelman, David R.
2017-01-01
Medical errors and poor patient care have been impacted by communication failures despite mandates for effective patient-provider communication (The Joint Commission, 2010). There are a high number of communication vulnerable individuals in medical situations due to new medical conditions, preexisting conditions, and language differences, to name…
Effect of the mandible on mouthguard measurements of head kinematics.
Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B
2016-06-14
Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration, respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances and future validation testing should include unconstrained mandibles to ensure proper accuracy. Copyright © 2016 Elsevier Ltd. All rights reserved.
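The spring-mass disturbance model mentioned above can be simulated directly. In this SciPy sketch the mouthguard material near the sensor acts as the spring/damper and the mandible as the mass; the impact profile and all parameter values are illustrative, not from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

m, c, k = 0.2, 5.0, 2.0e4   # kg, N*s/m, N/m (hypothetical values)

def skull_accel(t):
    """Idealized half-sine head impact: 10 ms duration, ~100 g peak."""
    return 981.0 * np.sin(np.pi * t / 0.01) * (t < 0.01)

def rhs(t, y):
    # x is the sensor-to-skull relative displacement (base excitation).
    x, v = y
    return [v, -(c * v + k * x) / m - skull_accel(t)]

sol = solve_ivp(rhs, (0.0, 0.05), [0.0, 0.0], max_step=1e-4)
# What the sensor reads: true skull acceleration plus the mass-spring artifact.
measured = skull_accel(sol.t) + np.gradient(sol.y[1], sol.t)
```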
Large poroelastic deformation of a soft material
NASA Astrophysics Data System (ADS)
MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.
2014-11-01
Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
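For reference, the linear (Biot) poroelastic coupling invoked above, Darcy's law combined with linear elasticity, takes the standard textbook form below; this is the generic theory, not the paper's specific large-deformation framework:

```latex
\sigma_{ij} = 2G\,\varepsilon_{ij} + \lambda\,\varepsilon_{kk}\,\delta_{ij}
            - \alpha\,p\,\delta_{ij},
\qquad
q_i = -\frac{k}{\mu}\,\frac{\partial p}{\partial x_i},
\qquad
\frac{\partial}{\partial t}\!\left(\alpha\,\varepsilon_{kk} + \frac{p}{M}\right)
  + \frac{\partial q_i}{\partial x_i} = 0,
```

where G and λ are the drained elastic moduli, α the Biot coefficient, M the Biot modulus, k the permeability and μ the fluid viscosity. The linear volumetric strain ε_kk only approximates the true volume change to first order in the strain, which is exactly the error source the abstract highlights.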
NASA Astrophysics Data System (ADS)
Elfgen, S.; Franck, D.; Hameyer, K.
2018-04-01
Magnetic measurements are indispensable for the characterization of soft magnetic materials used, e.g., in electrical machines. Characteristic values are used for quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, the influencing factors in the characterization of soft magnetic materials are named and their resulting uncertainty contributions studied. The analysis of the resulting uncertainty contributions can serve the operator as an additional selection criterion for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard, using a single sheet tester, and its impact on the identification of iron loss parameters is studied.
NASA Astrophysics Data System (ADS)
Adib, A.; Afzal, P.; Heydarzadeh, K.
2015-01-01
The aim of this study is to classify the site effect using the concentration-area (C-A) fractal model in Meybod city, central Iran, based on microtremor data analysis. Log-log plots of the frequency, amplification and vulnerability index (k-g) indicate a multifractal nature for the parameters in the area. The results obtained from the C-A fractal modelling reveal that proper soil types are located around the central city. The results derived via the fractal modelling were utilized to improve the Nogoshi and Igarashi (1970, 1971) classification results in Meybod city. The resulting categories are: (1) hard soil and weak rock with frequency of 6.2 to 8 Hz, (2) stiff soil with frequency of about 4.9 to 6.2 Hz, (3) moderately soft soil with the frequency of 2.4 to 4.9 Hz, and (4) soft soil with the frequency lower than 2.4 Hz.
Site effect classification based on microtremor data analysis using concentration-area fractal model
NASA Astrophysics Data System (ADS)
Adib, A.; Afzal, P.; Heydarzadeh, K.
2014-07-01
The aim of this study is to classify the site effect using the concentration-area (C-A) fractal model in Meybod city, Central Iran, based on microtremor data analysis. Log-log plots of the frequency, amplification and vulnerability index (k-g) indicate a multifractal nature for the parameters in the area. The results obtained from the C-A fractal modeling reveal that proper soil types are located around the central city. The results derived via the fractal modeling were utilized to improve the Nogoshi's classification results in Meybod city. The resulting categories are: (1) hard soil and weak rock with frequency of 6.2 to 8 Hz, (2) stiff soil with frequency of about 4.9 to 6.2 Hz, (3) moderately soft soil with the frequency of 2.4 to 4.9 Hz, and (4) soft soil with the frequency lower than 2.4 Hz.
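The four frequency bands reported in the two records above translate directly into a classification rule; a minimal sketch with the band edges copied from the text:

```python
def site_class(f_hz):
    """Map a site's fundamental frequency (Hz) to the four reported classes."""
    if f_hz < 2.4:
        return "soft soil"
    elif f_hz < 4.9:
        return "moderately soft soil"
    elif f_hz < 6.2:
        return "stiff soil"
    elif f_hz <= 8.0:
        return "hard soil / weak rock"
    return "outside reported range"
```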
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system (see figure). The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
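The dual lock-step comparison with checkpoint rollback described above can be illustrated with a toy software model; the proposed design does this in FPGA hardware, so everything below (the fault model, the step function, the checkpoint interval) is a behavioral sketch only:

```python
import random

def step(state):
    """One deterministic program step, with a rare injected SEU bit flip."""
    out = {"i": state["i"] + 1, "acc": state["acc"] + state["i"]}
    if random.random() < 0.01:                  # transient upset
        out["acc"] ^= 1 << random.randrange(16)
    return out

def run_lockstep(state, n_steps, interval=10):
    checkpoint = dict(state)                    # recovery cache contents
    done = 0
    while done < n_steps:
        a, b = step(state), step(state)         # two CPUs in lock step
        if a != b:                              # output comparison fires
            state = dict(checkpoint)            # hardware-style rollback
            done = (done // interval) * interval
            continue
        state, done = a, done + 1
        if done % interval == 0:
            checkpoint = dict(state)            # commit a new checkpoint
    return state
```

Note that a fault striking both copies identically would escape the comparison; real designs add coding, such as the SEC/DED memory functions mentioned above, to cover such cases.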
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal-estimation-based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements well compared to derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength and cross-track dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration the amplitude of fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes where the amplitude of fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals for maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% in the low/mid latitudes, with 1 degree of freedom for signal for the tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess if the quality of OMPS ozone retrievals could be acceptable for scientific use, we will characterize OMPS ozone profile retrievals, present error analysis, and validate retrievals using a reference dataset. The useful information on the vertical distribution of ozone is limited below 40 km from OMPS NM measurements alone due to the absence of the Hartley ozone band wavelengths. This shortcoming will be improved with the joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
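The "soft calibration" step is, in essence, an empirical wavelength- and cross-track-dependent bias correction built from averaged fitting residuals. A minimal sketch of that idea (the array shapes and the multiplicative form of the correction are assumptions, not the OMPS processing code):

```python
import numpy as np

def soft_calibration(radiance, modeled):
    """Derive and apply an empirical radiance correction.

    radiance, modeled: arrays of shape (n_scenes, n_xtrack, n_wavelengths)
    holding measured and forward-model radiances for reference scenes."""
    rel_residual = (radiance - modeled) / modeled
    correction = 1.0 + rel_residual.mean(axis=0)   # per xtrack and wavelength
    return radiance / correction                   # corrected radiances
```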
Goldman-Mellor, Sidra; Caspi, Avshalom; Arseneault, Louise; Ajala, Nifemi; Ambler, Antony; Danese, Andrea; Fisher, Helen; Hucker, Abigail; Odgers, Candice; Williams, Teresa; Wong, Chloe; Moffitt, Terrie E
2016-02-01
Labour market disengagement among youths has lasting negative economic and social consequences, yet is poorly understood. We compared four types of work-related self-perceptions, as well as vulnerability to mental health and substance abuse problems, among youths not in education, employment or training (NEET) and among their peers. Participants were from the Environmental Risk (E-Risk) longitudinal study, a nationally representative UK cohort of 2,232 twins born in 1994-1995. We measured commitment to work, job-search effort, professional/technical skills, 'soft' skills (e.g. teamwork, decision-making, communication), optimism about getting ahead, and mental health and substance use disorders at age 18. We also examined childhood mental health. At age 18, 11.6% of participants were NEET. NEET participants reported themselves as committed to work and searching for jobs with greater diligence than their non-NEET peers. However, they reported fewer 'soft' skills (B = -0.98, p < .001) and felt less optimistic about their likelihood of getting ahead in life (B = -2.41, p < .001). NEET youths also had higher rates of concurrent mental health and substance abuse problems, but these did not explain the relationship with work-related self-perceptions. Nearly 60% of NEET (vs. 35% of non-NEET) youths had already experienced ≥1 mental health problem in childhood/adolescence. Associations of NEET status with concurrent mental health problems were independent of pre-existing mental health vulnerability. Our findings indicate that while NEET is clearly an economic and mental health issue, it does not appear to be a motivation issue. Alongside skills, work-related self-perceptions and mental health problems may be targets for intervention and service provision among this high-risk population. © 2015 Association for Child and Adolescent Mental Health.
Kraemer, Sara; Carayon, Pascale
2007-03-01
This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio-taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.
Trends in Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarty, K. P.; Schwartz, H. R.; Swift, G. M.; Watson, R. K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.
1995-01-01
The sixth set of heavy ion single event effects (SEE) test data has been collected since the last IEEE publications in the December issues of the IEEE Transactions on Nuclear Science for 1985, 1987, 1989, and 1991, and the IEEE Workshop Record, 1993. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
Estimating soft tissue thickness from light-tissue interactions: a simulation study
Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2013-01-01
Immobilization and marker-based motion tracking in radiation therapy often cause decreased patient comfort. However, the more comfortable alternative of optical surface tracking is highly inaccurate due to missing point-to-point correspondences between subsequent point clouds as well as elastic deformation of soft tissue. In this study, we present a proof of concept for measuring subcutaneous features with a laser scanner setup focusing on the skin thickness as additional input for high accuracy optical surface tracking. Using Monte-Carlo simulations for multi-layered tissue, we show that informative features can be extracted from the simulated tissue reflection by integrating intensities within concentric ROIs around the laser spot center. Training a regression model with a simulated data set identifies patterns that allow for predicting skin thickness with a root mean square error of down to 18 µm. Different approaches to compensate for varying observation angles were shown to yield errors still below 90 µm. Finally, this initial study provides a very promising proof of concept and encourages research towards a practical prototype. PMID:23847741
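The feature-plus-regression pipeline described above, integrating intensities in concentric ROIs around the spot center and then training a regression model on simulated reflections, might be sketched as follows. The feature extractor, the choice of ridge regression, and the synthetic data are all assumptions for illustration, not the authors' model:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

def ring_features(image, center, radii):
    """Integrate intensity in concentric ROIs around the laser spot center."""
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return np.array([image[(r >= a) & (r < b)].sum()
                     for a, b in zip(radii[:-1], radii[1:])])

# Stand-ins for Monte-Carlo simulated reflections: X plays the role of the
# ring features, y the known skin thickness used to train the regressor.
X = rng.random((500, 8))
y = X @ rng.random(8) + 0.01 * rng.standard_normal(500)
model = Ridge(alpha=1.0).fit(X[:400], y[:400])
rmse = mean_squared_error(y[400:], model.predict(X[400:])) ** 0.5
```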
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP program, determined to be the best model, and five equations obtained in prior research. The GP model with the lowest error values (root mean square error (RMSE) of 0.0515) performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new methodology handles the problem with simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
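SPICE-level SEE analysis of this kind conventionally injects the ion strike as a double-exponential current pulse at the struck node. The abstract does not give its pulse model, so the classic compact form below is an assumption, and the parameter values are illustrative:

```python
import numpy as np

def ion_strike_current(t, q_coll=150e-15, tau_r=5e-12, tau_f=200e-12):
    """Double-exponential single-event current pulse (classic compact model).

    q_coll: collected charge [C]; tau_r, tau_f: rise/fall time constants [s].
    Normalized so the pulse integrates to q_coll."""
    return (q_coll / (tau_f - tau_r)) * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0.0, 2e-9, 20001)
i = ion_strike_current(t)
print(np.trapz(i, t))   # ~150e-15 C: sanity check on the collected charge
```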
Hubert, G; Regis, D; Cheminet, A; Gatti, M; Lacoste, V
2014-10-01
Particles originating from primary cosmic radiation that hit the Earth's atmosphere give rise to a complex field of secondary particles. These particles include neutrons, protons, muons, pions, etc. Since the 1980s it has been known that terrestrial cosmic rays can penetrate the natural shielding of buildings, equipment and circuit packages and induce soft errors in integrated circuits. Recently, research has shown that commercial static random access memories are now so small and sufficiently sensitive that single event upsets (SEUs) may be induced by the electronic stopping of a proton. With continued advancements in process size, this downward trend in sensitivity is expected to continue. Muon soft errors have accordingly been predicted for nano-electronics. This paper describes the effects in specific cases such as neutron-, proton- and muon-induced SEUs observed in complementary metal-oxide semiconductor devices. The results allow investigation of technology-node sensitivity along the scaling trend. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Nitta, Nariaki
1988-01-01
Hard X-ray spectra in solar flares obtained by the broadband spectrometers aboard Hinotori and SMM are compared. Within the uncertainty brought about by assuming the typical energy of the background X-rays, spectra from the Hinotori spectrometer are usually consistent with those from the SMM spectrometer for flares in 1981. By contrast, flares in 1982 persistently show 20-50-percent higher flux by Hinotori than by SMM. If this discrepancy is entirely attributable to errors in the calibration of energy ranges, the errors would be about 10 percent. Despite such a discrepancy in absolute flux, in the decay phase of one flare, spectra revealed a hard X-ray component (probably a 'superhot' component) that could be explained neither by emission from a plasma at about 2 × 10^7 K nor by a nonthermal power-law component. Imaging observations during this period show hard X-ray emission nearly cospatial with soft X-ray emission, in contrast with earlier times at which hard and soft X-rays come from different places.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional approximation at high SNR, P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Detection of anatomical landmarks and bone surfaces is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the invasiveness of the intervention and can be influenced by the interposed soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing and require less invasive equipment. The system consists of an optically tracked single-transducer US probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) vs. channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 (= log2(8) × 8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra vs. the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
[Description of clinical thinking by the dual-process theory].
Peña G, Luis
2012-06-01
Clinical thinking is a very complex process that can be described by the dual-process theory, it has an intuitive part (that recognizes patterns) and an analytical part (that tests hypotheses). It is vulnerable to cognitive bias that professionals must be aware of, to minimize diagnostic errors.
ERIC Educational Resources Information Center
Razwick, Jeff
2007-01-01
Many schools are almost entirely reliant on alarms and sprinklers for their fire protection. As these devices need to be triggered and supplied with power or water to work properly, they are vulnerable to errors. To provide adequate safety, a good fire-protection program must have three primary elements: fire protection and suppression, and…
Using Digital Image Correlation to Characterize Local Strains on Vascular Tissue Specimens.
Zhou, Boran; Ravindran, Suraj; Ferdous, Jahid; Kidane, Addis; Sutton, Michael A; Shazly, Tarek
2016-01-24
Characterization of the mechanical behavior of biological and engineered soft tissues is a central component of fundamental biomedical research and product development. Stress-strain relationships are typically obtained from mechanical testing data to enable comparative assessment among samples and in some cases identification of constitutive mechanical properties. However, errors may be introduced through the use of average strain measures, as significant heterogeneity in the strain field may result from geometrical non-uniformity of the sample and stress concentrations induced by mounting/gripping of soft tissues within the test system. When strain field heterogeneity is significant, accurate assessment of the sample mechanical response requires measurement of local strains. This study demonstrates a novel biomechanical testing protocol for calculating local surface strains using a mechanical testing device coupled with a high resolution camera and a digital image correlation technique. A series of sample surface images are acquired and then analyzed to quantify the local surface strain of a vascular tissue specimen subjected to ramped uniaxial loading. This approach can improve accuracy in experimental vascular biomechanics and has potential for broader use among other native soft tissues, engineered soft tissues, and soft hydrogel/polymeric materials. In the video, we demonstrate how to set up the system components and perform a complete experiment on native vascular tissue.
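At its core, digital image correlation tracks small image subsets between a reference and a deformed frame by maximizing a correlation score. A minimal integer-pixel sketch of that building block follows; real DIC codes add subpixel interpolation and subset shape functions, and all names and sizes here are illustrative:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation of two equally sized subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def track_subset(ref, cur, top_left, size=21, search=10):
    """Find the integer-pixel displacement of one square subset between a
    reference and a deformed image by exhaustive ZNCC search."""
    y0, x0 = top_left
    subset = ref[y0:y0 + size, x0:x0 + size]
    best, best_dv = -2.0, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y0 + dy:y0 + dy + size, x0 + dx:x0 + dx + size]
            if cand.shape != subset.shape:
                continue                      # window fell off the image
            score = zncc(subset, cand)
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv, best
```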
Le Floc’h, Simon; Cloutier, Guy; Finet, Gérard; Tracqui, Philippe; Pettigrew, Roderic I.; Ohayon, Jacques
2016-01-01
Peak cap stress amplitude is recognized as a good indicator of vulnerable plaque (VP) rupture. However, such stress evaluation strongly relies on a precise, but still lacking, knowledge of the mechanical properties exhibited by the plaque components. As a first response to this limitation, our group recently developed, in a previous theoretical study, an original approach, called iMOD, which reconstructs elasticity maps (or modulograms) of atheroma plaques from the estimation of strain fields. In the present in vitro experimental study, conducted on PVA-C arterial phantoms, we investigate the benefit of coupling the iMOD procedure with the acquisition of intravascular ultrasound (IVUS) measurements for detection of VP. Our results show that the combined iMOD-IVUS strategy: (1) successfully detected and quantified soft inclusion contours with high positive predictive values and sensitivities of 89.7 ± 3.9% and 81.5 ± 8.8%, respectively; (2) reasonably estimated cap thicknesses larger than ~300 µm, but underestimated thinner caps; and (3) satisfactorily quantified the Young's modulus of the hard medium (mean value of 109.7 ± 23.7 kPa instead of 145.4 ± 31.8 kPa), but overestimated the stiffness of soft inclusions (mean Young's moduli of 31.4 ± 9.7 kPa instead of 17.6 ± 3.4 kPa). Altogether, these results demonstrate a promising benefit of the new iMOD-IVUS clinical imaging method for in vivo VP detection. PMID:20826899
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.
2010-01-01
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
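The spectral elements described above place nodes at Gauss-Lobatto-Legendre (GLL) points: the endpoints ±1 plus the roots of the derivative of the Legendre polynomial P_N. A short NumPy sketch for computing them (a standard construction, not the authors' code):

```python
import numpy as np
from numpy.polynomial import legendre as leg

def gll_nodes(n):
    """Gauss-Lobatto-Legendre nodes of order n (n + 1 points on [-1, 1]).

    Interior nodes are the roots of P'_n; the endpoints are -1 and +1.
    Requires n >= 1."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0                      # Legendre-basis coefficients of P_n
    interior = leg.legroots(leg.legder(coeffs))
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))
```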
A device for high-throughput monitoring of degradation in soft tissue samples.
Tzeranis, D S; Panagiotopoulos, I; Gkouma, S; Kanakaris, G; Georgiou, N; Vaindirlis, N; Vasileiou, G; Neidlin, M; Gkousioudi, A; Spitas, V; Macheras, G A; Alexopoulos, L G
2018-06-06
This work describes the design and validation of a novel device, the High-Throughput Degradation Monitoring Device (HDD), for monitoring the degradation of 24 soft tissue samples over incubation periods of several days inside a cell culture incubator. The device quantifies sample degradation by monitoring its deformation induced by a static gravity load. Initial instrument design and experimental protocol development focused on quantifying cartilage degeneration. Characterization of measurement errors, caused mainly by thermal transients and by translating the instrument sensor, demonstrated that HDD can quantify sample degradation with <6 μm precision and <10 μm temperature-induced errors. HDD capabilities were evaluated in a pilot study that monitored the degradation of fresh ex vivo human cartilage samples by collagenase solutions over three days. HDD could robustly resolve the effects of collagenase concentration as small as 0.5 mg/ml. Careful sample preparation resulted in measurements that did not suffer from donor-to-donor variation (coefficient of variance <70%). Due to its unique combination of sample throughput, measurement precision, temporal sampling and experimental versality, HDD provides a novel biomechanics-based experimental platform for quantifying the effects of proteins (cytokines, growth factors, enzymes, antibodies) or small molecules on the degradation of soft tissues or tissue engineering constructs. Thereby, HDD can complement established tools and in vitro models in important applications including drug screening and biomaterial development. Copyright © 2018 Elsevier Ltd. All rights reserved.
Walden, Steven J; Evans, Sam L; Mulville, Jacqui
2017-01-01
The purpose of this study was to determine how the Vickers hardness (HV) of bone varies during soft tissue putrefaction. This has possible forensic applications, notably for determining the postmortem interval. Experimental porcine bone samples were decomposed in surface and burial deposition scenarios over a period of 6 months. Although the Vickers hardness varied widely, it was found that when transverse axial hardness was subtracted from longitudinal axial hardness, the difference showed correlations with three distinct phases of soft tissue putrefaction. The ratio of transverse axial hardness to longitudinal axial hardness showed a similar correlation. A difference of 10 or greater in HV, with soft tissue present and signs of minimal decomposition, was associated with a decomposition period of 250 cumulative cooling degree days or less. A difference of 10 (± standard error of the mean at a 95% confidence interval) or greater in HV associated with marked decomposition indicated a decomposition period of 1450 cumulative cooling degree days or more. A difference of -7 to +8 (± standard error of the mean at a 95% confidence interval) was thus associated with 250 to 1450 cumulative cooling degree days' decomposition. The ratio of transverse axial HV to longitudinal HV, ranging from 2.42 to 1.54, is a more reliable indicator in this context and is preferable to using negative integers. These differences may have potential as an indicator of the postmortem interval and thus the time of body deposition in the forensic context. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
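The reported decision rule can be summarized as a small worked example; the thresholds come directly from the values quoted above, while the function name, inputs, and return strings are illustrative:

```python
def cooling_degree_days_estimate(hv_longitudinal: float, hv_transverse: float,
                                 marked_decomposition: bool) -> str:
    """Map the longitudinal-minus-transverse Vickers hardness difference to a
    cumulative cooling degree day (CDD) range, per the reported thresholds."""
    diff = hv_longitudinal - hv_transverse
    if diff >= 10:
        # The same difference maps to different CDD ranges depending on the
        # observed decomposition stage at the scene.
        return ">= 1450 CDD" if marked_decomposition else "<= 250 CDD"
    if -7 <= diff <= 8:
        return "250-1450 CDD"
    return "outside the reported ranges"
```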
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M. U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K-Y. Lin, and J-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme which combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in the datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism and error correction, are exploited to establish the radiation-hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected using the redundant relationship of the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) compared to the non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10^-12 to 10^-17 when the number of processing steps in the datapath is 10^6. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS reduces the operational complexity of the datapath.
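To make the RRNS error-correction idea concrete, here is a hedged sketch with illustrative moduli (not those of the paper): residues are computed over information plus redundant moduli, and a single corrupted residue is located by finding a moduli subset whose Chinese-remainder reconstruction falls back inside the legitimate range:

```python
# Requires Python 3.8+ for math.prod and pow(x, -1, m).
from math import prod
from itertools import combinations

MODULI = (7, 11, 13, 15, 16)   # pairwise coprime; first K are information moduli
K = 3                          # legitimate range: x < 7 * 11 * 13 = 1001

def crt(residues, moduli):
    """Chinese remainder reconstruction of x from its residues."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

def encode(x):
    return tuple(x % m for m in MODULI)

def correct(residues):
    """Return (x, bad_index) assuming at most one residue is in error."""
    legit = prod(MODULI[:K])
    for keep in combinations(range(len(MODULI)), len(MODULI) - 1):
        x = crt([residues[i] for i in keep], [MODULI[i] for i in keep])
        if x < legit:                  # consistent reconstruction found
            bad = (set(range(len(MODULI))) - set(keep)).pop()
            return x, (bad if encode(x) != tuple(residues) else None)
    return None

r = list(encode(614))
r[2] = (r[2] + 5) % MODULI[2]          # inject a single soft error
print(correct(r))                      # recovers 614 and flags residue index 2
```

Because each residue channel is processed independently, a fault in one channel perturbs only that residue, which is exactly what the consistency check exploits.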
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images and demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft-tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft-tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC, and MR-guided RTP. © 2018 International Society for Magnetic Resonance in Medicine.
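A minimal sketch of the final mapping step described above, assuming an upstream label map already produced by the thresholding and morphological refinements; the bone-mapping slope and intercept are illustrative assumptions, not the study's fitted values:

```python
import numpy as np

def pseudo_ct_from_labels(zte: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """labels: 0 = air, 1 = soft tissue, 2 = bone; zte: bias-corrected,
    normalized ZTE intensities (soft tissue ~ 1.0). Returns HU values."""
    hu = np.where(labels == 0, -1000.0, 42.0)   # fixed HU for air / soft tissue
    bone = labels == 2
    hu[bone] = -2000.0 * zte[bone] + 2000.0     # illustrative continuous linear bone mapping
    return hu
```

The continuous mapping is what lets the pseudo-CT reflect bone density variations rather than a single fixed bone value.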
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Perry B.; Geyer, Amy; Borrego, David
Purpose: To investigate the benefits and limitations of patient-phantom matching for determining organ dose during fluoroscopy guided interventions. Methods: In this study, 27 CT datasets representing patients of different sizes and genders were contoured and converted into patient-specific computational models. Each model was matched, based on height and weight, to computational phantoms selected from the UF hybrid patient-dependent series. In order to investigate the influence of phantom type on patient organ dose, Monte Carlo methods were used to simulate two cardiac projections (PA/left lateral) and two abdominal projections (RAO/LPO). Organ dose conversion coefficients were then calculated for each patient-specific and patient-dependent phantom and also for a reference stylized and reference hybrid phantom. The coefficients were subsequently analyzed for any correlation between patient-specificity and the accuracy of the dose estimate. Accuracy was quantified by calculating an absolute percent difference using the patient-specific dose conversion coefficients as the reference. Results: Patient-phantom matching was shown most beneficial for estimating the dose to heavy patients. In these cases, the improvement over using a reference stylized phantom ranged from approximately 50% to 120% for abdominal projections and for a reference hybrid phantom from 20% to 60% for all projections. For lighter individuals, patient-phantom matching was clearly superior to using a reference stylized phantom, but not significantly better than using a reference hybrid phantom for certain fields and projections. Conclusions: The results indicate two sources of error when patients are matched with phantoms: anatomical error, which is inherent due to differences in organ size and location, and error attributed to differences in the total soft tissue attenuation. For small patients, differences in soft tissue attenuation are minimal and are exceeded by inherent anatomical differences. For large patients, differences in soft tissue attenuation can be large. In these cases, patient-phantom matching proves most effective as differences in soft tissue attenuation are mitigated. With increasing obesity rates, overweight patients will continue to make up a growing fraction of all patients undergoing medical imaging. Thus, having phantoms that better represent this population represents a considerable improvement over previous methods. In response to this study, additional phantoms representing heavier weight percentiles will be added to the UFHADM and UFHADF patient-dependent series.
Quality assurance of the international computerised 24 h dietary recall method (EPIC-Soft).
Crispim, Sandra P; Nicolas, Genevieve; Casagrande, Corinne; Knaze, Viktoria; Illner, Anne-Kathrin; Huybrechts, Inge; Slimani, Nadia
2014-02-01
The interview-administered 24 h dietary recall (24-HDR) EPIC-Soft® has a series of controls to guarantee the quality of dietary data across countries. These comprise all steps that are part of fieldwork preparation, data collection and data management; however, a complete characterisation of these quality controls is still lacking. The present paper describes in detail the quality controls applied in EPIC-Soft, which are, to a large extent, built on the basis of the EPIC-Soft error model and are present in three phases: (1) before, (2) during and (3) after the 24-HDR interviews. Quality controls for consistency and harmonisation are implemented before the interviews while preparing the seventy databases constituting an EPIC-Soft version (e.g. pre-defined and coded foods and recipes). During the interviews, EPIC-Soft uses a cognitive approach by helping the respondent to recall the dietary intake information in a stepwise manner and includes controls for consistency (e.g. probing questions) as well as for completeness of the collected data (e.g. system calculation for some unknown amounts). After the interviews, a series of controls can be applied by dietitians and data managers to further guarantee data quality. For example, the interview-specific 'note files' that were created to track any problems or missing information during the interviews can be checked to clarify the information initially provided. Overall, the quality controls employed in the EPIC-Soft methodology are not always perceivable, but prove to be of assistance for its overall standardisation and possibly for the accuracy of the collected data.
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology in which TOF offers great potential to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF, each corrected for attenuation using both the reference CT-based attenuation correction and the resulting 4-class MRAC maps. The relative errors of the non-TOF and TOF MRAC reconstructions were compared against their reference CT-based attenuation correction reconstructions. The bias was evaluated locally and globally using volumes of interest (VOIs) defined on lesions and normal tissues, and CT-derived tissue classes containing all voxels in a given tissue, respectively. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
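For concreteness, a sketch of how a standard 4-class attenuation map is assembled from a tissue-class image, with typical 511 keV linear attenuation coefficients as placeholder values (the study's exact predefined coefficients are not given in the abstract):

```python
import numpy as np

# Placeholder 511 keV linear attenuation coefficients (cm^-1); illustrative
# literature-style values, not necessarily those used in the study.
MU_511 = [0.0,     # 0: background air
          0.018,   # 1: lung
          0.100,   # 2: soft tissue
          0.086]   # 3: fat

def four_class_mu_map(class_map: np.ndarray) -> np.ndarray:
    """class_map: integer array of tissue labels (0-3) -> mu map in cm^-1."""
    return np.asarray(MU_511)[class_map]
```

Note that bone is absent from the 4-class map (it is absorbed into soft tissue), which is precisely the source of the bone-region bias the study quantifies.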
Rossi X-Ray Timing Explorer All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Bradt, Hale V.; Levine, Alan M.
1999-07-01
The fourth unambiguously identified soft gamma repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15. Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19") annulus. We present two bursts from this source observed by the All-Sky Monitor (ASM) on the Rossi X-Ray Timing Explorer. We use the ASM data to further constrain the source location to a 5' long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3 arcmin of the supernova remnant G337.0-0.1. The probability that a supernova remnant would fall so close to the error box purely by chance is ~5%.
RXTE All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, D. A.; Bradt, H. V.; Levine, A. M.
1999-09-01
The fourth unambiguously identified Soft Gamma Repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15 (Kouveliotou et al. 1998). Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19") annulus (Hurley et al. 1999; Woods et al. 1998). We report on two bursts from this source observed by the All-Sky Monitor (ASM) on RXTE. We use the ASM data to further constrain the source location to a 5' long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3' of the supernova remnant (SNR) G337.0-0.1. The probability that a SNR would fall so close to the error box purely by chance is ~5%.
Crosby, Richard; Salazar, Laura F.; DiClemente, Ralph J.; Yarber, William L.; Caliendo, Angela M.; Staples-Horne, Michelle
2009-01-01
Objectives To identify the prevalence of condom use errors among detained female teens and to test two inter-related hypotheses concerning condom failure. Methods A cross-sectional survey of 134 female teens recruited within eight detention facilities. Measures were collected using audio-computer assisted self-interviewing. Assessment for the presence of C. trachomatis and N. gonorrhoeae was also conducted. Results Five forms of condom use errors/problems were common: not discussing condom use with the partner (34.3%), not having a condom when one was desired (48.5%), starting sex before application (21.6%), removing condoms before sex concludes (26.9%), and breakage (32.8%). Significant associations were found between condom errors/problems and drug/alcohol use. Errors/problems with condom use were significantly higher among teens diagnosed with an STD (P=.039 for an index measure; P=.022 for a single-item measure). Conclusions Findings suggest that detained female teens may have experienced multiple condom use errors and problems, thereby increasing their vulnerability to STD acquisition. PMID:18082855
Single Event Effect Testing of the Micron MT46V128M8
NASA Technical Reports Server (NTRS)
Stansberry, Scott; Campola, Michael; Wilcox, Ted; Seidleck, Christina; Phan, Anthony
2017-01-01
The Micron MT46V128M8 was tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in June 2017. Testing revealed a sensitivity to device hang-ups classified as single event functional interrupts (SEFI) and possible soft data errors classified as single event upsets (SEU).
Micromagnetic Study of Perpendicular Magnetic Recording Media
NASA Astrophysics Data System (ADS)
Dong, Yan
With increasing areal density in magnetic recording systems, perpendicular recording has successfully replaced longitudinal recording to mitigate the superparamagnetic limit. The extensive theoretical and experimental research associated with perpendicular magnetic recording media has contributed significantly to improving magnetic recording performance. Micromagnetic studies on perpendicular recording media, including aspects of the design of hybrid soft underlayers, media noise properties, inter-grain exchange characterization and ultra-high density bit patterned media recording, are presented in this dissertation. To improve the writability of recording media, one needs to reduce the head-to-keeper spacing while maintaining good texture growth for the recording layer. A hybrid soft underlayer, consisting of a thin crystalline soft underlayer stacked above a non-magnetic seed layer and a conventional amorphous soft underlayer, provides an alternative approach for reducing the effective head-to-keeper spacing in perpendicular recording. Micromagnetic simulations indicate that media using a hybrid soft underlayer enhance the effective field and the field gradient in comparison with conventional media that use only an amorphous soft underlayer. The hybrid soft underlayer can support a thicker non-magnetic seed layer yet achieve an equivalent or better effective field and field gradient. A noise plateau for intermediate recording densities is observed for a recording layer of typical magnetization. Medium noise characteristics and transition jitter in perpendicular magnetic recording are explored using micromagnetic simulation. The plateau is replaced by a normal linear dependence of noise on recording density for a low magnetization recording layer. We show analytically that a source of the plateau is similar to that producing the Non-Linear-Transition-Shift of the signal. In particular, magnetostatic effects are predicted to produce positive correlation of jitter and thus negative correlation of noise at the densities associated with the plateau. One focus for developing perpendicular recording media is on how to extract intergranular exchange coupling and intrinsic anisotropy field dispersion. A micromagnetic numerical technique is developed to effectively separate the effects of intergranular exchange coupling and anisotropy dispersion by finding their correlation to differentiated M-H curves with different initial magnetization states, even in the presence of thermal fluctuation. The validity of this method is investigated with a series of intergranular exchange couplings and anisotropy dispersions for different media thicknesses. This characterization method allows for an experimental measurement employing a vibrating sample magnetometer (VSM). Bit patterned media have been suggested to extend areal density beyond 1 Tbit/in². The feasibility of 4 Tbit/in² bit patterned recording is determined by aspects of write head design and media fabrication, and is estimated by the bit error rate. Micromagnetic specifications including 2.3:1 BAR bit patterned exchange coupled composite media, a trailing shield, and side shields are proposed to meet the requirements of a 3×10^-4 bit error rate, 4 nm fly height, 5% switching field distribution, and 5% timing and 5% jitter errors for 4 Tbit/in² bit-patterned recording. The demagnetizing field distribution is examined by studying the shielding effect of the side shields on the stray field from the neighboring dots. For recording self-assembled bit-patterned media, the head design writes two staggered tracks in a single pass and has maximum perpendicular field gradients of 580 Oe/nm along the down-track direction and 476 Oe/nm along the cross-track direction. The geometry demanded by self-assembly reduces the recording density to 2.9 Tbit/in².
Estimation of Fetal Weight during Labor: Still a Challenge.
Barros, Joana Goulão; Reis, Inês; Pereira, Isabel; Clode, Nuno; Graça, Luís M
2016-01-01
To evaluate the accuracy of fetal weight prediction by ultrasonography during labor, employing a formula including the linear measurements of femur length (FL) and mid-thigh soft-tissue thickness (STT). We conducted a prospective study involving singleton uncomplicated term pregnancies within 48 hours of delivery. Only pregnancies with a cephalic fetus admitted to the labor ward for elective cesarean section, induction of labor or spontaneous labor were included. We excluded all non-Caucasian women, those previously diagnosed with gestational diabetes and those with evidence of ruptured membranes. Fetal weight estimates were calculated using a previously proposed formula [estimated fetal weight = 1687.47 + (54.1 × FL) + (76.68 × STT)]. The relationship between actual birth weight and estimated fetal weight was analyzed using Pearson's correlation. The formula's performance was assessed by calculating the signed and absolute errors. Mean weight difference and signed percentage error were calculated for birth weight divided into three subgroups: < 3000 g; 3000-4000 g; and > 4000 g. We included 145 cases for analysis and found a significant, yet weak, linear relationship between birth weight and estimated fetal weight (p < 0.001; R² = 0.197), with an absolute mean error of 10.6%. The lowest mean percentage error (0.3%) corresponded to the subgroup with birth weight between 3000 g and 4000 g. This study demonstrates a poor correlation between actual birth weight and the fetal weight estimated using a formula based on femur length and mid-thigh soft-tissue thickness, both linear parameters. Although the avoidance of circumferential ultrasound measurements might prove to be beneficial, a fetal weight estimation formula that is both accurate and simple to perform has yet to be found.
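The quoted formula is simple enough to state directly in code; the units for FL and STT follow the formula's original derivation (the abstract does not state them), and the helper names are ours:

```python
def estimated_fetal_weight(fl: float, stt: float) -> float:
    """EFW = 1687.47 + (54.1 x FL) + (76.68 x STT); output in grams,
    per the birth-weight context of the study."""
    return 1687.47 + 54.1 * fl + 76.68 * stt

def signed_percent_error(efw: float, birth_weight: float) -> float:
    """Signed percentage error of the estimate against actual birth weight."""
    return 100.0 * (efw - birth_weight) / birth_weight
```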
Electrodermal lability as an indicator for subjective sleepiness during total sleep deprivation.
Michael, Lars; Passmann, Sven; Becker, Ruth
2012-08-01
The present study addresses the suitability of electrodermal lability as an indicator of individual vulnerability to the effects of total sleep deprivation. During two complete circadian cycles, the effects of 48 h of total sleep deprivation on physiological measures (electrodermal activity and body temperature), subjective sleepiness (measured by a visual analogue scale and a tiredness symptom scale) and task performance (reaction time and errors in a go/no-go task) were investigated. Analyses of variance with repeated measures revealed substantial decreases in the number of skin conductance responses and body temperature, and increases in subjective sleepiness, reaction time and error rates. For all changes, strong circadian oscillations could be observed as well. The electrodermally more labile subgroup reported higher subjective sleepiness compared with electrodermally more stable participants, but showed no differences in the time courses of body temperature and task performance. Therefore, electrodermal lability seems to be a specific indicator for the changes in subjective sleepiness due to total sleep deprivation and circadian oscillations, but not a suitable indicator for vulnerability to the effects of sleep deprivation per se. © 2011 European Sleep Research Society.
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 × 10^-14 erg/cm²/s. The closest source is approximately 3 arcmin away and has an estimated unabsorbed flux of 1.5 × 10^-12 erg/cm²/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6° away. For these reasons, we believe that the first error box is more likely to be the correct one.
Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben
2011-03-01
Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCARs) for characteristics of verbal communication errors between hospital staff from an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and errors during teamwork. Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of, or contributing factors to, severe patient safety incidents. The RCARs' rich descriptions of the incidents revealed the organisational factors and needs related to these errors.
NASA Technical Reports Server (NTRS)
Spector, E.; LeBlanc, A.; Shackelford, L.
1995-01-01
This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and % fat of the arms and legs. In addition EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes. BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
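A sketch of the short-term precision metric used here, the percent coefficient of variation (%CV) across repeated scans; the RMS pooling across subjects is an assumption, since the abstract reports %CV without specifying the estimator:

```python
import numpy as np

def percent_cv(repeats: np.ndarray) -> float:
    """repeats: shape (n_subjects, n_scans) of a measurement (e.g., BMD)."""
    sd = repeats.std(axis=1, ddof=1)            # per-subject SD over repeat scans
    cv = sd / repeats.mean(axis=1) * 100.0      # per-subject %CV
    return float(np.sqrt(np.mean(cv ** 2)))    # RMS-pooled %CV across subjects

scans = np.array([[1.021, 1.018, 1.025],        # illustrative BMD repeats (g/cm^2)
                  [0.987, 0.991, 0.984]])
print(percent_cv(scans))
```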
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-08-24
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
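A hedged sketch of the moving least squares decoupling step: for each raw 3-axis magnetic reading, a locally weighted affine fit over calibration pairs maps field to force, which is one way to remove the non-linearity and cross-talk described above. The kernel choice, its width, and the affine basis are assumptions:

```python
import numpy as np

def mls_force(b_query: np.ndarray, B_cal: np.ndarray, F_cal: np.ndarray,
              h: float = 0.2) -> np.ndarray:
    """B_cal: (n, 3) calibration field readings; F_cal: (n, 3) applied forces.
    Returns the force predicted for the (3,) query reading b_query."""
    w = np.exp(-np.sum((B_cal - b_query) ** 2, axis=1) / (2.0 * h * h))
    X = np.hstack([np.ones((len(B_cal), 1)), B_cal])   # affine basis [1, Bx, By, Bz]
    sw = np.sqrt(w)[:, None]                           # weighted least squares
    beta, *_ = np.linalg.lstsq(sw * X, sw * F_cal, rcond=None)
    return np.concatenate(([1.0], b_query)) @ beta
```

Because the fit is recomputed around each query point, the mapping adapts locally instead of forcing a single global calibration polynomial.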
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-01-01
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design. PMID:27563908
Li, Lingyun; Zhang, Fuming; Hu, Min; Ren, Fuji; Chi, Lianli; Linhardt, Robert J.
2016-01-01
Low molecular weight heparins are complex polycomponent drugs that have recently become amenable to top-down analysis using liquid chromatography-mass spectrometry. Even using the open-source deconvolution software DeconTools and the automatic structural assignment software GlycReSoft, the comparison of two or more low molecular weight heparins is extremely time-consuming, taking about a week for an expert analyst, and provides no guarantee of accuracy. Efficient data processing tools are required to improve analysis. This study uses Microsoft Excel™ Visual Basic for Applications to extend Excel's standard functionality with macro functions and specific mathematical modules for mass spectrometric data processing. The program developed enables the comparison of top-down analytical glycomics data on two or more low molecular weight heparins. The current study describes a new program, GlycCompSoft, which has a low error rate and good time efficiency in the automatic processing of large data sets. The experimental results, based on three lots of Lovenox®, Clexane® and three generic enoxaparin samples, show that the run time of GlycCompSoft decreases from 11 to 2 seconds when the data processed decreases from 18000 to 1500 rows. PMID:27942011
Akbarzadeh, A; Ay, M R; Ahmadian, A; Alam, N Riahi; Zaidi, H
2013-02-01
Hybrid PET/MRI presents many advantages in comparison with its counterpart PET/CT in terms of improved soft-tissue contrast, decrease in radiation exposure, and truly simultaneous and multi-parametric imaging capabilities. However, the lack of well-established methodology for MR-based attenuation correction is hampering further development and wider acceptance of this technology. We assess the impact of ignoring bone attenuation and using different tissue classes for generation of the attenuation map on the accuracy of attenuation correction of PET data. This work was performed using simulation studies based on the XCAT phantom and clinical input data. For the latter, PET and CT images of patients were used as input for the analytic simulation model using realistic activity distributions where CT-based attenuation correction was utilized as reference for comparison. For both phantom and clinical studies, the reference attenuation map was classified into various numbers of tissue classes to produce three (air, soft tissue and lung), four (air, lungs, soft tissue and cortical bones) and five (air, lungs, soft tissue, cortical bones and spongeous bones) class attenuation maps. The phantom studies demonstrated that ignoring bone increases the relative error by up to 6.8% in the body and up to 31.0% for bony regions. Likewise, the simulated clinical studies showed that the mean relative error reached 15% for lesions located in the body and 30.7% for lesions located in bones, when neglecting bones. These results demonstrate an underestimation of about 30% of tracer uptake when neglecting bone, which in turn imposes substantial loss of quantitative accuracy for PET images produced by hybrid PET/MRI systems. Considering bones in the attenuation map will considerably improve the accuracy of MR-guided attenuation correction in hybrid PET/MR to enable quantitative PET imaging on hybrid PET/MR technologies.
Lee, Dae-Hee; Park, Sung-Chul; Park, Hyung-Joon; Han, Seung-Beom
2016-12-01
Open-wedge high tibial osteotomy (HTO) cannot always accurately correct limb alignment, resulting in under- or over-correction. This study assessed the relationship between soft tissue laxity of the knee joint and alignment correction in open-wedge HTO. This prospective study involved 85 patients (86 knees) undergoing open-wedge HTO for primary medial osteoarthritis. The mechanical axis (MA), weight-bearing line (WBL) ratio, and joint line convergence angle (JLCA) were measured on radiographs preoperatively and after 6 months, and the differences between the pre- and post-surgery values were calculated. Post-operative WBL ratios of 57-67% were classified as acceptable correction. WBL ratios <57% and >67% were classified as under- and over-corrections, respectively. Preoperative JLCA correlated positively with the differences in MA (r = 0.358, P = 0.001) and WBL ratio (P = 0.003). The difference in JLCA showed a stronger correlation than preoperative JLCA with the differences in MA (P < 0.001) and WBL ratio (P < 0.001). The difference in JLCA was the only predictor of both the difference in MA (P < 0.001) and the difference in WBL ratio (P < 0.001). The difference between pre- and post-operative JLCA differed significantly between the under-correction, acceptable-correction, and over-correction groups (P = 0.033). Preoperative JLCA, however, did not differ significantly between the three groups. Neither preoperative JLCA nor the difference in JLCA correlated with change in posterior slope. The preoperative degree of soft tissue laxity in the knee joint was related to the degree of alignment correction, but not to alignment correction error, in open-wedge HTO. Change in soft tissue laxity around the knee from before to after open-wedge HTO correlated with both correction amount and correction error. Therefore, an excessively large change in JLCA from before to after open-wedge osteotomy may be due to an overly large reduction in JLCA following osteotomy, suggesting alignment over-correction during surgery. Level of evidence: II.
3D microwave tomography of the breast using prior anatomical information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golnabi, Amir H., E-mail: golnabia@montclair.edu; Meaney, Paul M.; Paulsen, Keith D.
2016-04-15
Purpose: The authors have developed a new 3D breast image reconstruction technique that utilizes the soft tissue spatial resolution of magnetic resonance imaging (MRI) and integrates the dielectric property differentiation from microwave imaging to produce a dual-modality approach, with the goal of augmenting the specificity of MR imaging, possibly without the need for nonspecific contrast agents. The integration is performed through the application of a soft prior regularization which imports segmented geometric meshes generated from MR exams and uses them to constrain the microwave tomography algorithm to recover nearly uniform property distributions within segmented regions, with sharp delineation between these internal subzones. Methods: Previous investigations have demonstrated that this approach is effective in 2D simulation and phantom experiments and also in clinical exams. The current study extends the algorithm to 3D and provides a thorough analysis of the sensitivity and robustness to misalignment errors in size and location between the spatial prior information and the actual data. Results: Image results in 3D were not strongly dependent on reconstruction mesh density, and changes of less than 30% in recovered property values arose from variations of more than 125% in target region size, an outcome which was more robust than in 2D. Similarly, changes of less than 13% occurred in the 3D image results from variations in target location of nearly 90% of the inclusion size. Permittivity and conductivity errors were about 5 times and 2 times smaller, respectively, with the 3D spatial prior algorithm in actual phantom experiments than those which occurred without priors. Conclusions: The presented study confirms that the incorporation of structural information in the form of a soft constraint can considerably improve the accuracy of the property estimates in predefined regions of interest. These findings are encouraging and establish a strong foundation for using the soft prior technique in clinical studies, where their microwave imaging system and MRI can simultaneously collect breast exam data in patients.
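A sketch of one common form of such a soft-prior constraint: a regularization matrix built from the MR-segmented labels that penalizes deviation from each region's mean property, leaving inter-region jumps free. The exact matrix used by the authors may differ:

```python
import numpy as np

def soft_prior_matrix(labels: np.ndarray) -> np.ndarray:
    """labels: length-n region label per reconstruction node. The penalty
    ||L x||^2 drives properties toward near-uniform values within regions."""
    n = labels.size
    L = np.eye(n)
    for region in np.unique(labels):
        idx = np.flatnonzero(labels == region)
        L[np.ix_(idx, idx)] -= 1.0 / idx.size   # subtract the region-mean operator
    return L  # would enter the update as lambda * L.T @ L in a regularized solver
```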
Jacobsen, Anna L; Pratt, R Brandon
2012-06-01
Vulnerability to cavitation curves are used to estimate xylem cavitation resistance and can be constructed using multiple techniques. It was recently suggested that a technique that relies on centrifugal force to generate negative xylem pressures may be susceptible to an open vessel artifact in long-vesselled species. Here, we used custom centrifuge rotors to measure different sample lengths of 1-yr-old stems of grapevine to examine the influence of open vessels on vulnerability curves, thus testing the hypothesized open vessel artifact. These curves were compared with a dehydration-based vulnerability curve. Although samples differed significantly in the number of open vessels, there was no difference in the vulnerability to cavitation measured on 0.14- and 0.271-m-long samples of Vitis vinifera. Dehydration and centrifuge-based curves showed a similar pattern of declining xylem-specific hydraulic conductivity (K(s)) with declining water potential. The percentage loss in hydraulic conductivity (PLC) differed between dehydration and centrifuge curves and it was determined that grapevine is susceptible to errors in estimating maximum K(s) during dehydration because of the development of vessel blockages. Our results from a long-vesselled liana do not support the open vessel artifact hypothesis. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for earth remote sensors, while vibration of the remote sensor platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes to utilize soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a Back Propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the image-motion prediction based on soft-sensor technology is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Self-sensing of dielectric elastomer actuator enhanced by artificial neural network
NASA Astrophysics Data System (ADS)
Ye, Zhihang; Chen, Zheng
2017-09-01
Dielectric elastomer (DE) is a type of soft actuating material whose shape can be changed under electrical voltage stimuli. DE materials have promising uses in future soft actuators and sensors, such as soft robotics, energy harvesters, and wearable sensors. In this paper, a strip DE actuator with integrated sensing capability is designed, fabricated, and characterized. Since the strip actuator can be approximated as a compliant capacitor, it is possible to detect the actuator's displacement by analyzing the actuator's impedance change. An integrated sensing scheme that adds a high-frequency probing signal to the actuation signal is developed. Electrical impedance changes in the probing signal are extracted by a fast Fourier transform algorithm, and nonlinear data fitting methods involving an artificial neural network are implemented to detect the actuator's displacement. A series of experiments shows that by improving data processing and analysis methods, the integrated sensing method can achieve an error level below 1%.
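A toy sketch of the integrated sensing idea: a small high-frequency probing sine rides on the slow actuation voltage, and the probing component's amplitude (which tracks the compliant capacitor's impedance) is recovered by FFT. The frequencies and amplitudes below are illustrative assumptions:

```python
import numpy as np

fs = 50_000.0                                       # sample rate (Hz)
t = np.arange(0.0, 1.0, 1.0 / fs)                   # 1 s record
f_probe = 1_000.0                                   # probing frequency (Hz)
signal = (3000.0 * np.sin(2 * np.pi * 1.0 * t)      # slow actuation component
          + 5.0 * np.sin(2 * np.pi * f_probe * t))  # small probing component

spec = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
k = int(np.argmin(np.abs(freqs - f_probe)))
amp = 2.0 * np.abs(spec[k]) / t.size                # amplitude at the probing frequency
print(f"recovered probing amplitude: {amp:.3f} (true 5.0)")
```

In the actual scheme, changes in this recovered amplitude over time would be fed to the nonlinear fitting stage (e.g., the neural network) to infer displacement.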
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-08-06
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neural Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization accuracy and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
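A sketch of the range-based step that underlies such soft-computing models: converting a ZigBee RSSI reading to distance with a log-distance path-loss model. The paper learns this mapping with ANFIS/ANN instead; the parameters here (RSSI at 1 m, path-loss exponent) are illustrative assumptions:

```python
def rssi_to_distance(rssi_dbm: float, rssi_1m_dbm: float = -45.0,
                     n: float = 2.2) -> float:
    """Log-distance path-loss model: d = 10 ** ((RSSI_1m - RSSI) / (10 n))."""
    return 10 ** ((rssi_1m_dbm - rssi_dbm) / (10 * n))

print(rssi_to_distance(-67.0))  # ~10 m with the assumed parameters
```

The appeal of the learned (ANFIS/ANN) mapping is precisely that indoor multipath makes fixed parameters like n unreliable.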
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-01-01
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neural Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization accuracy and distance estimation accuracy. The hybrid GSA-ANN achieves a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively. PMID:27509495
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
High Reliability Organizations--Medication Safety.
Yip, Luke; Farmer, Brenna
2015-06-01
High reliability organizations (HROs), such as the aviation industry, successfully engage in high-risk endeavors and have a low incidence of adverse events. HROs have a preoccupation with failure and errors. They analyze each event to effect system-wide change in an attempt to mitigate the occurrence of similar errors. The healthcare industry can adapt HRO practices, specifically with regard to teamwork and communication. Crew resource management concepts can be adapted to healthcare with the use of certain tools, such as checklists and the sterile cockpit, to reduce medication errors. HROs also use the Swiss Cheese Model to evaluate risk and look for vulnerabilities in multiple protective barriers, instead of focusing on one failure. This model can be used in medication safety to evaluate medication management in addition to using the teamwork and communication tools of HROs.
A protocol for monitoring soft tissue motion under compression garments during drop landings.
Mills, Chris; Scurr, Joanna; Wood, Louise
2011-06-03
This study used a single-subject design to establish a valid and reliable protocol for monitoring soft tissue motion under compression garments during drop landings. One male participant performed six 40 cm drop landings onto a force platform in three compression conditions (none, medium, high). Five reflective markers placed on the thigh under the compression garment and five over the garment were filmed using two cameras (1000 Hz). Following manual digitisation, marker coordinates were reconstructed and their resultant displacements and the maximum change in separation distance between skin and garment markers were calculated. To determine the reliability of marker application, 35 markers were attached to the thigh over the high compression garment and filmed. Markers were then removed and re-applied on three occasions; marker separation and distance to the thigh centre of gravity were calculated. Results showed similar ground reaction forces during landing trials. Significant reductions in the maximum change in separation distance between markers from no compression to high compression landings were reported. Typical errors in marker movement under and over the garment were 0.1 mm in medium and high compression landings. Re-application of markers showed mean typical errors of 1 mm in marker separation and <3 mm relative to the thigh centre of gravity. This paper presents a novel protocol that demonstrates sufficient sensitivity to detect reductions in soft tissue motion during landings in high compression garments compared with no compression. Additionally, markers placed under or over the garment demonstrate low variance in movement, and the protocol reports good reliability in marker re-application. Copyright © 2011 Elsevier Ltd. All rights reserved.
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
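The localization algorithm itself is in-house and not specified in the abstract; as a generic stand-in, the following sketch locates a soft-tissue template in each BEV frame by normalized cross-correlation using scikit-image (a real API; the tracking pipeline around it is an assumption):

```python
import numpy as np
from skimage.feature import match_template

def locate(frame: np.ndarray, template: np.ndarray) -> tuple:
    """Return the (row, col) of the best template match in a BEV frame."""
    # pad_input=True makes the correlation map the same shape as the frame,
    # so the peak index corresponds to the template center position.
    ncc = match_template(frame, template, pad_input=True)
    return np.unravel_index(int(np.argmax(ncc)), ncc.shape)
```

At the reported 12.8 Hz frame rate, each such localization would have to complete in well under 78 ms to keep up with acquisition.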
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Facial injuries following hyena attack in rural eastern Ethiopia.
Fell, M J; Ayalew, Y; McClenaghan, F C; McGurk, M
2014-12-01
Hyenas are effective hunters and will consider humans as potential prey if the need and opportunity arise. This study describes the circumstances of hyena attacks, the patterns of injuries sustained, and reconstruction in a resource-poor setting. As part of a charitable surgical mission to Ethiopia in 2012, 45 patients with facial deformities were reviewed, of whom four were victims of hyena attacks. A semi-structured interview was performed to ascertain the circumstances of the attack and the subsequent consequences. The age of the victims at the time of attack varied from 5 to 50 years. The attacks occurred when the victims were alone and vulnerable and took place in outdoor open spaces, during the evening or at night. The initial lunge was made to the facial area; if the jaws closed on the facial bones they were crushed, but in all cases the soft tissues were grasped and torn from the underlying bone. Reconstruction was dictated by the extent of soft tissue loss but could normally be obtained by use of local or regional flaps. Hyenas have been shown to attack humans in a predictable way and cause injuries that typically involve the soft tissues of the face. Copyright © 2014 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Gender and age differences in lean soft tissue mass and sarcopenia among healthy elderly.
Kirchengast, Sylvia; Huber, Johannes
2009-06-01
Sarcopenia, the age-related decline in skeletal muscle mass, has dramatic consequences. It leads to impaired performance, increased vulnerability, frailty and an increased risk of falls. Various extrinsic and intrinsic factors contribute to the aetiology of sarcopenia. The aims of the present study were to analyse gender differences in the prevalence of sarcopenia and to document gender differences in lean soft tissue mass in healthy elderly. 139 healthy subjects aged between 59 and 92 years (mean = 71.5 ± 7.8), 77 females and 64 males, were enrolled in the study. Body composition was measured by means of dual energy X-ray absorptiometry. Additionally, appendicular skeletal muscle mass (ASM) was calculated. While no linear decrease in lean soft tissue mass was found in either sex, the prevalence of sarcopenia increased significantly with increasing age in females as well as in males. Significant gender differences in the prevalence of sarcopenia were found for people younger than 70 years and those older than 80 years. In the youngest age group (< 70 years) sarcopenia was found more frequently among women, while in the oldest age group (> 80 years) the opposite was true. It can be concluded that the prevalence of sarcopenia differs between the two genders; however, these differences are influenced by age.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
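The recovery criterion described in this abstract reduces to a compact loop. The sketch below (the `sweep` callback, tolerance names and values are illustrative assumptions, not taken from the paper) shows how continuing sweeps until the residual is both small relative to the first sweep and slowly varying absorbs a transient fault without any dedicated detection logic:

```python
def sdc_sweeps_with_resilience(sweep, u0, rel_tol=1e-8, stall_tol=1e-2, max_sweeps=50):
    """Iterate SDC correction sweeps until the residual is small relative to
    that of the first sweep AND changes slowly between successive sweeps.
    `sweep(u)` is a hypothetical callback performing one deferred-correction
    sweep and returning (updated solution, residual norm)."""
    u, r_first = sweep(u0)        # first sweep sets the reference residual
    r_prev = r_first
    for _ in range(max_sweeps):
        u, r = sweep(u)
        small = r < rel_tol * r_first              # small relative residual
        stalled = abs(r - r_prev) < stall_tol * r  # slow change between sweeps
        # A soft error inflates the residual, so these tests fail and extra
        # sweeps run automatically until the solution is recovered.
        if small and stalled:
            break
        r_prev = r
    return u
```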
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
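The Soft-Impute iteration itself is only a few lines. The following dense NumPy sketch is a minimal illustration of the idea; it does not exploit the sparse-plus-low-rank structure that gives the paper its linear-complexity SVD, and the parameter names are ours:

```python
import numpy as np

def soft_impute(X, mask, lam, n_iters=100, tol=1e-4):
    """Iteratively replace missing entries (mask == False) with values from a
    soft-thresholded SVD of the completed matrix."""
    Z = np.where(mask, X, 0.0)               # initial fill of missing entries
    for _ in range(n_iters):
        # SVD of the matrix completed with observed data and current imputations
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        Z_new = (U * np.maximum(s - lam, 0.0)) @ Vt   # soft-threshold spectrum
        if np.linalg.norm(Z_new - Z) <= tol * max(np.linalg.norm(Z), 1e-12):
            return Z_new
        Z = Z_new
    return Z
```

A regularization path with warm starts would simply re-enter this loop at each smaller value of lam, keeping the current Z as the starting fill instead of zeros.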
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-01-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
Fais, Paolo; Viero, Alessia; Viel, Guido; Giordano, Renzo; Raniero, Dario; Kusstatscher, Stefano; Giraudo, Chiara; Cecchetto, Giovanni; Montisci, Massimo
2018-04-07
Necrotizing fasciitis (NF) is a life-threatening infection of soft tissues spreading along the fasciae to the surrounding musculature, subcutaneous fat and overlying skin areas that can rapidly lead to septic shock and death. Due to the pandemic increase of medical malpractice lawsuits, above all in Western countries, the forensic pathologist is frequently asked to investigate post-mortem cases of NF in order to determine the cause of death and to identify any related negligence and/or medical error. Herein, we review the medical literature dealing with cases of NF in a post-mortem setting, present a case series of seven NF fatalities and discuss the main ante-mortem and post-mortem diagnostic challenges of both clinical and forensic interests. In particular, we address the following issues: (1) origin of soft tissue infections, (2) micro-organisms involved, (3) time of progression of the infection to NF, (4) clinical and histological staging of NF and (5) pros and cons of clinical and laboratory scores, as well as specific forensic issues related to the reconstruction of the ideal medical conduct and the evaluation of the causal value/link of any eventual medical error.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray; Kolla, Hemanth; Minion, Michael
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
Soft-light overhead illumination systems improve laparoscopic task performance.
Takai, Akihiro; Takada, Yasutsugu; Motomura, Hideki; Teramukai, Satoshi
2014-02-01
The aim of this study was to evaluate the impact of attached shadow cues for laparoscopic task performance. We developed a soft-light overhead illumination system (SOIS) that produced attached shadows on objects. We compared results using the SOIS with those using a conventional illumination system with regard to laparoscopic experience and laparoscope-to-target distances (LTDs). Forty-two medical students and 23 surgeons participated in the study. A peg transfer task (LTD, 120 mm) for students and surgeons, and a suture removal task (LTD, 30 mm) for students were performed. Illumination systems were randomly assigned to each task. Endpoints were: total number of peg transfers; percentage of peg-dropping errors; and total execution time for suture removal. After the task, participants filled out a questionnaire on their preference for a particular illumination system. Total number of peg transfers was greater with the SOIS for both students and surgeons. Percentage of peg-dropping errors for surgeons was lower with the SOIS. Total execution time for suture removal was shorter with the SOIS. Forty-five participants (69% of the total) rated the SOIS as the easier system for task performance. The present results confirm that the SOIS improves laparoscopic task performance, regardless of previous laparoscopic experience or LTD.
Application of the SEIPS Model to Analyze Medication Safety in a Crisis Residential Center.
Steele, Maria L; Talley, Brenda; Frith, Karen H
2018-02-01
Medication safety and error reduction has been studied in acute and long-term care settings, but little research is found in the literature regarding mental health settings. Because mental health settings are complex, medication administration is vulnerable to a variety of errors from transcription to administration. The purpose of this study was to analyze critical factors related to a mental health work system structure and processes that threaten safe medication administration practices. The Systems Engineering Initiative for Patient Safety (SEIPS) model provides a framework to analyze factors affecting medication safety. The model approach analyzes the work system concepts of technology, tasks, persons, environment, and organization to guide the collection of data. In the study, the Lean methodology tools were used to identify vulnerabilities in the system that could be targeted later for improvement activities. The project director completed face-to-face interviews, asked nurses to record disruptions in a log, and administered a questionnaire to nursing staff. The project director also conducted medication chart reviews and recorded medication errors using a standardized taxonomy for errors that allowed categorization of the prevalent types of medication errors. Results of the study revealed disruptions during the medication process, pharmacology training needs, and documentation processes as the primary opportunities for improvement. The project engaged nurses to identify sustainable quality improvement strategies to improve patient safety. The mental health setting carries challenges for safe medication administration practices. Through analysis of the structure, process, and outcomes of medication administration, opportunities for quality improvement and sustainable interventions were identified, including minimizing the number of distractions during medication administration, training nurses on psychotropic medications, and improving the documentation system. A task force was created to analyze the descriptive data and to establish objectives aimed at improving efficiency of the work system and care process involved in medication administration at the end of the project. Copyright © 2017 Elsevier Inc. All rights reserved.
Goldstein, Harland L.; Breit, George N.; Yount, James C.; Reynolds, Richard L.; Reheis, Marith C.; Skipp, Gary L.; Fisher, Eric M.; Lamothe, Paul J.
2011-01-01
This report presents data and describes the methods used to determine the physical attributes, as well as the chemical and mineralogical composition of surficial deposits; groundwater levels; and water composition in the area of Franklin Lake playa and Ash Meadows, California and Nevada. The results support studies that examine (1) the interaction between groundwater and the ground surface, and the transport of solutes through the unsaturated zone; (2) the potential for the accumulation of metals and metalloids in surface crusts; (3) emission of dust from metal-rich salt crust; and (4) the effects of metal-rich dusts on human and ecosystem health. The evaporation of shallow (<3 to 4 m) groundwater in saline, arid environments commonly results in the accumulation of salt in the subsurface and (or) the formation of salt crusts at the ground surface. Ground-surface characteristics such as hardness, electrical conductivity, and mineralogy depend on the types and forms of these salt crusts. In the study area, salt crusts range from hard and bedded to soft and loose (Reynolds and others, 2009). Depending on various factors such as the depth and composition of groundwater and sediment characteristics of the unsaturated zone, salt crusts may accumulate relatively high contents of trace elements. Soft, loose salt crusts are highly vulnerable to wind erosion and transport. These vulnerable crusts, which may contain high contents of potentially toxic trace elements, can travel as atmospheric dust and affect human and ecosystem health at local to regional scales.
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
Nilsson, Robert; Mićić, Mileva; Filipović, Jelena; Šobot, Ana Valenta; Drakulić, Dunja; Stanojlović, Miloš; Joksiċ, Gordana
2016-04-01
The aim of this study was to identify palatable additives which have a significant protective action against soft tissue changes in the oral cavity caused by Swedish smokeless tobacco ("snus"), and that satisfy existing legal requirements. Although the cancer risk from snus is extremely low, long term use may result in highly undesirable keratotic lesions and associated epithelial abnormalities in the oral cavity. The rat forestomach, which is vulnerable to the irritative action of non-genotoxic compounds like butylated hydroxyanisole, propionic acid as well as snus, was chosen as an experimental model. Studied toxicological endpoints included histopathology and cellular proliferation based on DNA incorporation of bromodeoxyuridine. After 6 weeks' exposure, blueberries (bilberries) and an extract from the common milk thistle were found to exert a highly significant inhibition of cell proliferation induced by snus in the rat forestomach epithelium, indicating a potential protection with respect to soft tissue changes in the human oral cavity. Copyright © 2016 Elsevier Inc. All rights reserved.
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Gesch, Dean B.
2013-01-01
The accuracy with which coastal topography has been mapped directly affects the reliability and usefulness of elevation-based sea-level rise vulnerability assessments. Recent research has shown that the qualities of the elevation data must be well understood to properly model potential impacts. The cumulative vertical uncertainty has contributions from elevation data error, water level data uncertainties, and vertical datum and transformation uncertainties. The concepts of minimum sea-level rise increment and minimum planning timeline, important parameters for an elevation-based sea-level rise assessment, are used in recognition of the inherent vertical uncertainty of the underlying data. These concepts were applied to conduct a sea-level rise vulnerability assessment of the Mobile Bay, Alabama, region based on high-quality lidar-derived elevation data. The results that detail the area and associated resources (land cover, population, and infrastructure) vulnerable to a 1.18-m sea-level rise by the year 2100 are reported as a range of values (at the 95% confidence level) to account for the vertical uncertainty in the base data. Examination of the tabulated statistics about land cover, population, and infrastructure in the minimum and maximum vulnerable areas shows that these resources are not uniformly distributed throughout the overall vulnerable zone. The methods demonstrated in the Mobile Bay analysis provide an example of how to consider and properly account for vertical uncertainty in elevation-based sea-level rise vulnerability assessments, and the advantages of doing so.
Pickering, Brian W; Hurley, Killian; Marsh, Brian
2009-11-01
To use a handover assessment tool for identifying patient information corruption and objectively evaluating interventions designed to reduce handover errors and improve medical decision making. The continuous monitoring, intervention, and evaluation of the patient in modern intensive care unit practice generates large quantities of information, the platform on which medical decisions are made. Information corruption, defined as errors of distortion/omission compared with the medical record, may result in medical judgment errors. Identifying these errors may lead to quality improvements in intensive care unit care delivery and safety. Handover assessment instrument development study divided into two phases by the introduction of a handover intervention. Closed, 17-bed, university-affiliated mixed surgical/medical intensive care unit. Senior and junior medical members of the intensive care unit team. Electronic handover page. Study subjects were asked to recall clinical information commonly discussed at handover on individual patients. The handover score measured the percentage of information correctly retained for each individual doctor-patient interaction. The clinical intention score, a subjective measure of medical judgment, was graded (1-5) by three blinded intensive care unit experts. A total of 137 interactions were scored. Median (interquartile range) handover scores for phases 1 and 2 were 79.07% (67.44-84.50) and 83.72% (76.16-88.37), respectively. Score variance was reduced by the handover intervention (p < .05). Increasing median handover scores, 68.60 to 83.72, were associated with increases in clinical intention scores from 1 to 5 (chi-square = 23.59, df = 4, p < .0001). When asked to recall clinical information discussed at handover, medical members of the intensive care unit team provide data that are significantly corrupted compared with the medical record. Low subjective clinical judgment scores are significantly associated with low handover scores. The handover/clinical intention scores may, therefore, be useful screening tools for intensive care unit system vulnerability to medical error. Additionally, handover instruments can identify interventions that reduce system vulnerability to error and may be used to guide quality improvements in handover practice.
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module, used in the subsystem containing the RAMs, consists of three printed circuit cards, with each card containing eight 2K byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single bit and some double bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error or to determine the exact location of the bit error within the page the entire contents of the memory is dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question is verified in order to determine if the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray induced latchup.
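For concreteness, the two detection techniques described here can be sketched as follows (the byte-wise sum checksum is an assumption for illustration, since the abstract does not specify the flight software's checksum):

```python
PAGE = 256  # bytes per memory page

def page_checksums(mem):
    """Per-page check sums: a changed sum flags a bit error somewhere in the
    page. A simple byte sum catches all single-bit and some double-bit errors,
    consistent with the behaviour described above."""
    return [sum(mem[i:i + PAGE]) & 0xFF for i in range(0, len(mem), PAGE)]

def dump_compare(mem, load_file):
    """Full memory dump compared against the load file: pinpoints the exact
    address and bit of each error so the correct value can be reloaded."""
    errors = []
    for addr, (got, expect) in enumerate(zip(mem, load_file)):
        diff = got ^ expect
        for bit in range(8):
            if diff & (1 << bit):
                errors.append((addr, bit))
    return errors
```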
Lauria, V; Garofalo, G; Fiorentino, F; Massi, D; Milisenda, G; Piraino, S; Russo, T; Gristina, M
2017-08-14
Deep-sea coral assemblages are key components of marine ecosystems that generate habitats for fish and invertebrate communities and act as marine biodiversity hot spots. Because of their life history traits, deep-sea corals are highly vulnerable to human impacts such as fishing. They are an indicator of vulnerable marine ecosystems (VMEs), therefore their conservation is essential to preserve marine biodiversity. In the Mediterranean Sea deep-sea coral habitats are associated with commercially important crustaceans, consequently their abundance has dramatically declined due to the effects of trawling. Marine spatial planning is required to ensure that the conservation of these habitats is achieved. Species distribution models were used to investigate the distribution of two critically endangered octocorals (Funiculina quadrangularis and Isidella elongata) in the central Mediterranean as a function of environmental and fisheries variables. Results show that both species exhibit species-specific habitat preferences and spatial patterns in response to environmental variables, but the impact of trawling on their distribution differed. In particular F. quadrangularis can overlap with fishing activities, whereas I. elongata occurs exclusively where fishing is low or absent. This study represents the first attempt to identify key areas for the protection of soft and compact mud VMEs in the central Mediterranean Sea.
Microseismicity of Blawan hydrothermal complex, Bondowoso, East Java, Indonesia
NASA Astrophysics Data System (ADS)
Maryanto, S.
2018-03-01
Peak Ground Acceleration (PGA), hypocentres, and epicentres of the Blawan hydrothermal complex have been analysed in order to investigate its seismicity. PGA was determined based on the Fukushima-Tanaka method, and the microseismic source locations were estimated using the particle motion method. PGA ranged between 0.095 and 0.323 g and tends to be higher in formations containing uncompacted rocks. The seismic vulnerability index analysis indicated that zones with high PGA also have a high seismic vulnerability index. This is because the rocks making up these zones are predominantly soft, low-density rocks. The epicentres and hypocentres of seismic sources around the area were estimated based on the single-station seismic particle motion method. The stations used in this study were mobile stations identified as BL01, BL02, BL03, BL05, BL06, BL07 and BL08. The particle motion analysis yielded 44 epicentre locations, with source depths of about 15–110 meters below the ground surface.
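The abstract does not reproduce the attenuation relation; for reference, a commonly cited form of the Fukushima-Tanaka (1990) relation for mean PGA is sketched below. Treat the exact coefficients as an assumption on our part rather than the paper's implementation:

```python
import math

def pga_fukushima_tanaka(m, r_km):
    """Mean peak ground acceleration from magnitude m and shortest distance to
    the fault r_km, returned in units of g. Assumed form:
        log10(A[cm/s^2]) = 0.41*m - log10(r + 0.032*10**(0.41*m))
                           - 0.0034*r + 1.30
    """
    log_a = (0.41 * m - math.log10(r_km + 0.032 * 10 ** (0.41 * m))
             - 0.0034 * r_km + 1.30)
    return 10 ** log_a / 980.665   # cm/s^2 -> g, matching the 0.095-0.323 g range
```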
Schrider, Daniel R.; Mendes, Fábio K.; Hahn, Matthew W.; Kern, Andrew D.
2015-01-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This general problem of “soft shoulders” underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. PMID:25716978
Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D
2015-05-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This general problem of "soft shoulders" underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. Copyright © 2015 by the Genetics Society of America.
Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J
2013-10-01
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.
Application of the epidemiological model in studying human error in aviation
NASA Technical Reports Server (NTRS)
Cheaney, E. S.; Billings, C. E.
1981-01-01
An epidemiological model is described in conjunction with the analytical process through which aviation occurrence reports are composed into the events and factors pertinent to it. The model represents a process in which disease, emanating from environmental conditions, manifests itself in symptoms that may lead to fatal illness, recoverable illness, or no illness depending on individual circumstances of patient vulnerability, preventive actions, and intervention. In the aviation system the analogy of the disease process is the predilection for error of human participants. This arises from factors in the operating or physical environment and results in errors of commission or omission that, again depending on the individual circumstances, may lead to accidents, system perturbations, or harmless corrections. A discussion of the previous investigations, each of which manifests the application of the epidemiological method, exemplifies its use and effectiveness.
Chemical Analysis of the Moon at the Surveyor VI Landing Site: Preliminary Results.
Turkevich, A L; Patterson, J H; Franzgrote, E J
1968-06-07
The alpha-scattering experiment aboard soft-landing Surveyor VI has provided a chemical analysis of the surface of the moon in Sinus Medii. The preliminary results indicate that, within experimental errors, the composition is the same as that found by Surveyor V in Mare Tranquillitatis. This finding suggests that large portions of the lunar maria resemble basalt in composition.
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
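As an illustration of the kind of EDAC circuitry suggested, the sketch below implements a Hamming(7,4) single-error-correcting code. Production avionics memories typically use wider SEC-DED codes over full memory words, so this is a toy-scale stand-in rather than the scheme the paper proposes:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit codeword (positions 1..7, parity bits
    at positions 1, 2 and 4)."""
    c = [0] * 8                     # index 0 unused; positions 1..7
    c[3], c[5], c[6], c[7] = d
    c[1] = c[3] ^ c[5] ^ c[7]
    c[2] = c[3] ^ c[6] ^ c[7]
    c[4] = c[5] ^ c[6] ^ c[7]
    return c[1:]

def hamming74_correct(code):
    """Recompute the parities; the syndrome value is the position of a single
    flipped bit (0 means no error). Returns the corrected codeword."""
    c = [0] + list(code)
    syndrome = ((c[1] ^ c[3] ^ c[5] ^ c[7])
                + (c[2] ^ c[3] ^ c[6] ^ c[7]) * 2
                + (c[4] ^ c[5] ^ c[6] ^ c[7]) * 4)
    if syndrome:
        c[syndrome] ^= 1            # flip the erroneous bit back
    return c[1:]
```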
Public health consequences on vulnerable populations from acute chemical releases.
Ruckart, Perri Zeitz; Orr, Maureen F
2008-07-09
Data from a large, multi-state surveillance system on acute chemical releases were analyzed to describe the type of events that are potentially affecting vulnerable populations (children, elderly and hospitalized patients) in order to better prevent and plan for these types of incidents in the future. During 2003-2005, there were 231 events where vulnerable populations were within ¼ mile of the event and the area of impact was greater than 200 feet from the facility/point of release. Most events occurred on a weekday during times when day care centers or schools were likely to be in session. Equipment failure and human error caused a majority of the releases. Agencies involved in preparing for and responding to chemical emergencies should work with hospitals, nursing homes, day care centers, and schools to develop policies and procedures for initiating appropriate protective measures and managing the medical needs of patients. Chemical emergency response drills should involve the entire community to protect those that may be more susceptible to harm.
Public Health Consequences on Vulnerable Populations from Acute Chemical Releases
Ruckart, Perri Zeitz; Orr, Maureen F.
2008-01-01
Data from a large, multi-state surveillance system on acute chemical releases were analyzed to describe the type of events that are potentially affecting vulnerable populations (children, elderly and hospitalized patients) in order to better prevent and plan for these types of incidents in the future. During 2003–2005, there were 231 events where vulnerable populations were within ¼ mile of the event and the area of impact was greater than 200 feet from the facility/point of release. Most events occurred on a weekday during times when day care centers or schools were likely to be in session. Equipment failure and human error caused a majority of the releases. Agencies involved in preparing for and responding to chemical emergencies should work with hospitals, nursing homes, day care centers, and schools to develop policies and procedures for initiating appropriate protective measures and managing the medical needs of patients. Chemical emergency response drills should involve the entire community to protect those that may be more susceptible to harm. PMID:21572842
Spatial serial order processing in schizophrenia.
Fraser, David; Park, Sohee; Clark, Gina; Yohanna, Daniel; Houk, James C
2004-10-01
The aim of this study was to examine serial order processing deficits in 21 schizophrenia patients and 16 age- and education-matched healthy controls. In a spatial serial order working memory task, one to four spatial targets were presented in a randomized sequence. Subjects were required to remember the locations and the order in which the targets were presented. Patients showed a marked deficit in ability to remember the sequences compared with controls. Increasing the number of targets within a sequence resulted in poorer memory performance for both control and schizophrenia subjects, but the effect was much more pronounced in the patients. Targets presented at the end of a long sequence were more vulnerable to memory error in schizophrenia patients. Performance deficits were not attributable to motor errors, but to errors in target choice. The results support the idea that the memory errors seen in schizophrenia patients may be due to saturating the working memory network at relatively low levels of memory load.
Cohen, Trevor; Blatter, Brett; Almeida, Carlos; Patel, Vimla L.
2007-01-01
Objective Contemporary error research suggests that the quest to eradicate error is misguided. Error commission, detection, and recovery are an integral part of cognitive work, even at the expert level. In collaborative workspaces, the perception of potential error is directly observable: workers discuss and respond to perceived violations of accepted practice norms. As perceived violations are captured and corrected preemptively, they do not fit Reason’s widely accepted definition of error as “failure to achieve an intended outcome.” However, perceived violations suggest the aversion of potential error, and consequently have implications for error prevention. This research aims to identify and describe perceived violations of the boundaries of accepted procedure in a psychiatric emergency department (PED), and how they are resolved in practice. Design Clinical discourse from fourteen PED patient rounds was audio-recorded. Excerpts from recordings suggesting perceived violations or incidents of miscommunication were extracted and analyzed using qualitative coding methods. The results are interpreted in relation to prior research on vulnerabilities to error in the PED. Results Thirty incidents of perceived violations or miscommunication are identified and analyzed. Of these, only one medication error was formally reported. Other incidents would not have been detected by a retrospective analysis. Conclusions The analysis of perceived violations expands the data available for error analysis beyond occasional reported adverse events. These data are prospective: responses are captured in real time. This analysis supports a set of recommendations to improve the quality of care in the PED and other critical care contexts. PMID:17329728
The high accuracy data processing system of laser interferometry signals based on MSP430
NASA Astrophysics Data System (ADS)
Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong
2009-07-01
Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually suffer from three errors: zero offset error, unequal amplitude error and quadrature phase shift error. These three errors have a serious impact on subdivision precision. Compensation of the three errors is achieved based on the Heydemann error compensation algorithm. Because the Heydemann model is computationally demanding, an improved algorithm is proposed that effectively decreases the computation time by exploiting the special characteristic that only one item of data changes in each fitting iteration. A real-time, dynamic compensation circuit is then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals containing the three errors are digitized by the AD7862. After data processing in line with the improved algorithm, two ideal error-free signals are output by the AD7225. At the same time, the two original signals are converted into square waves and fed into the direction-discrimination circuit. The pulses output by the direction-discrimination circuit are counted by the microcontroller's timer. Based on the pulse count and software subdivision, the final result is displayed on an LED. The algorithm and the circuit were used to test the capability of a laser interferometer with 8 times optical path difference, and a measuring accuracy of 12-14 nm was achieved.
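Once the ellipse-fit parameters (offsets p and q, amplitude ratio r, phase error alpha) are known, the compensation itself is a short computation. The sketch below assumes one standard form of the Heydemann measurement model; the paper's exact sign and ratio conventions may differ:

```python
import numpy as np

def heydemann_correct(u1, u2, p, q, r, alpha):
    """Correct quadrature signals modelled as
        u1 = A*cos(theta) + p
        u2 = (A/r)*sin(theta + alpha) + q
    for zero offset, unequal amplitude and quadrature phase shift errors.
    Returns the ideal cos/sin components and the unwrapped phase used for
    fringe subdivision."""
    x = np.asarray(u1, float) - p                # remove zero offsets
    y = (r * (np.asarray(u2, float) - q) - x * np.sin(alpha)) / np.cos(alpha)
    theta = np.unwrap(np.arctan2(y, x))          # phase for subdivision
    return x, y, theta
```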
An engineer's view on genetic information and biological evolution.
Battail, Gérard
2004-01-01
We develop ideas on genome replication introduced in Battail [Europhys. Lett. 40 (1997) 343]. Starting with the hypothesis that the genome replication process uses error-correcting means, and the auxiliary one that nested codes are used to this end, we first review the concepts of redundancy and error-correcting codes. Then we show that these hypotheses imply that: distinct species exist with a hierarchical taxonomy, there is a trend of evolution towards complexity, and evolution proceeds by discrete jumps. At least the first two features above may be considered as biological facts so, in the absence of direct evidence, they provide an indirect proof in favour of the hypothesized error-correction system. The very high redundancy of genomes makes it possible. In order to explain how it is implemented, we suggest that soft codes and replication decoding, to be briefly described, are plausible candidates. Experimentally proven properties of long-range correlation of the DNA message substantiate this claim.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event-upsets (SEUs) due to cosmic rays are a significant source of single bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of Linear Energy Transfer (LET), is discussed with reference made to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
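Two of the software techniques listed here, redundant execution and encoded state variables, are simple enough to sketch directly (illustrative code; the names and state values are ours, not the Galileo flight software's):

```python
def redundant(fn, *args):
    """Execute a procedure twice and compare the results; a mismatch signals
    a soft error before the bad value can propagate."""
    a, b = fn(*args), fn(*args)
    if a != b:
        raise RuntimeError("soft error detected: redundant runs disagree")
    return a

# State codes chosen pairwise at Hamming distance 2, so any single bit flip
# produces an invalid code rather than another legal state.
STATE_IDLE, STATE_RUN, STATE_SAFE = 0b0000, 0b0011, 0b0101

def check_state(state):
    if state not in (STATE_IDLE, STATE_RUN, STATE_SAFE):
        raise RuntimeError("soft error detected: invalid state encoding")
    return state
```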
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that a high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft-decision decoding, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots being equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
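The adaptive Hybrid-II/RCPC mechanic, sending only incremental redundancy after each failed decoding attempt, can be sketched as follows. The decoder interface and the representation of puncturing tables as index sets are our assumptions for illustration:

```python
def hybrid_ii_arq(puncture_tables, mother_bits, try_decode):
    """Transmit the most heavily punctured (highest-rate) subset of the mother
    code first; after each failed decoding attempt, send only the extra bits
    of the next lower rate. Rate compatibility means each table is a superset
    of the previous one, so earlier transmissions are never wasted.
    `try_decode` is a hypothetical soft decoder returning the message or None.
    """
    received = {}
    for table in puncture_tables:                 # ordered high rate -> low rate
        for i in table:
            received.setdefault(i, mother_bits[i])   # incremental redundancy only
        msg = try_decode(received)
        if msg is not None:
            return msg
    return None                                   # all rates exhausted
```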
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In the conventional tool positioning technique, sensors embedded in the motion stages provide accurate tool position information. In this paper, a machine vision based system and an image processing technique for motion measurement of a lathe tool from two-dimensional sequential images, captured using a charge coupled device camera having a resolution of 250 microns, are described. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Optimization of the errors due to the machine vision system, calibration, environmental factors, etc. in lathe tool movement was carried out using two soft computing techniques, namely artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
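Of the two soft computing techniques compared, PSO is the more standard; a minimal global-best variant applied to an error objective f might look like the sketch below (hyperparameters, bounds and the objective are illustrative, not the paper's):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                 bounds=(-1.0, 1.0), seed=0):
    """Global-best particle swarm optimization of a scalar error function f."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()               # global best so far
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```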
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft computing based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison to the experimental data, the proposed ANFIS model has MRE% <1.53% and 2.85% for training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR in the IR-IECF device.
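The four performance measures are standard; assuming their usual definitions (the abstract does not spell them out), they can be computed as:

```python
import numpy as np

def performance_measures(y_true, y_pred):
    """Correlation coefficient, MAE, MRE% and RMSE between measured and
    predicted values."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    err = y_true - y_pred
    return {
        "correlation_coefficient": np.corrcoef(y_true, y_pred)[0, 1],
        "mean_absolute_error": np.mean(np.abs(err)),
        "mean_relative_error_pct": 100.0 * np.mean(np.abs(err / y_true)),
        "root_mean_square_error": np.sqrt(np.mean(err ** 2)),
    }
```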
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords one at a time using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from the optimal maximum-likelihood decoding with a significant reduction in decoding complexity compared with the Viterbi decoding based on the full trellis diagram of the codes.
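The candidate-generation stage, building test error patterns from the least reliable symbols and handing each to an algebraic decoder, is Chase-like and easy to sketch. The fragment below omits the paper's optimality test and trellis search, uses an assumed LLR sign convention (positive favours bit 0), and `algebraic_decode` is a hypothetical hard-decision decoder:

```python
import numpy as np
from itertools import combinations

def chase_like_decode(llr, algebraic_decode, n_flips=2):
    """Generate candidate codewords from test error patterns over the least
    reliable positions; keep the candidate best correlated with the soft
    values. `algebraic_decode` returns a 0/1 codeword array or None."""
    hard = (llr < 0).astype(int)              # hard decisions from soft values
    order = np.argsort(np.abs(llr))           # least reliable positions first
    best, best_metric = None, -np.inf
    for k in range(n_flips + 1):
        for pos in combinations(order[:n_flips + 2], k):
            trial = hard.copy()
            trial[list(pos)] ^= 1             # apply the test error pattern
            cw = algebraic_decode(trial)
            if cw is None:
                continue
            metric = np.sum((1 - 2 * cw) * llr)   # soft correlation metric
            if metric > best_metric:
                best, best_metric = cw, metric
    return best
```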
Soft tissue deformation for surgical simulation: a position-based dynamics approach.
Camara, Mafalda; Mayer, Erik; Darzi, Ara; Pratt, Philip
2016-06-01
To assist the rehearsal and planning of robot-assisted partial nephrectomy, a real-time simulation platform is presented that allows surgeons to visualise and interact with rapidly constructed patient-specific biomechanical models of the anatomical regions of interest. Coupled to a framework for volumetric deformation, the platform furthermore simulates intracorporeal 2D ultrasound image acquisition, using preoperative imaging as the data source. This not only facilitates the planning of optimal transducer trajectories and viewpoints, but can also act as a validation context for manually operated freehand 3D acquisitions and reconstructions. The simulation platform was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. In order to validate the model and determine material properties and other simulation parameter values, a porcine kidney with embedded fiducial beads was CT-scanned and segmented. Acquisitions for the rest position and three different levels of probe-induced deformation were collected. Optimal values of the cluster stiffness coefficients were determined for a range of different particle radii, where the objective function comprised the mean distance error between real and simulated fiducial positions over the sequence of deformations. The mean fiducial error at each deformation stage was found to be compatible with the level of ultrasound probe calibration error typically observed in clinical practice. Furthermore, the simulation exhibited unconditional stability on account of its use of clustered shape-matching constraints. A novel position-based dynamics implementation of soft tissue deformation has been shown to facilitate several desirable simulation characteristics: real-time performance, unconditional stability, rapid model construction enabling patient-specific behaviour and accuracy with respect to reference CT images.
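The clustered shape-matching constraint at the heart of this kind of position-based dynamics can be sketched compactly. The following is a generic single-cluster projection in the style of Müller-type shape matching, not FleX's actual internals:

```python
import numpy as np

def shape_match_project(x, x0, stiffness):
    """Pull current particle positions x (n x 3) toward the best rigid fit of
    their rest positions x0. The optimal rotation comes from an SVD of the
    cross-covariance; `stiffness` in (0, 1] blends toward the goal positions,
    which is what keeps the constraint unconditionally stable."""
    c, c0 = x.mean(axis=0), x0.mean(axis=0)
    A = (x - c).T @ (x0 - c0)                # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(A)
    if np.linalg.det(U @ Vt) < 0:            # guard against reflections
        U[:, -1] *= -1
    R = U @ Vt
    goal = c + (x0 - c0) @ R.T               # rigidly transformed rest shape
    return x + stiffness * (goal - x)
```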
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Hashizume, Takuya; Kobayashi, Ikuo
2016-07-01
Our aim in this study is to derive the identification limit at which a small optically stimulated luminescence (OSL) dosimeter worn on the patient's body does not disturb the medical image during X-ray diagnostic imaging. For evaluation of the detection limit based on an analysis of X-ray spectra, we propose a new quantitative identification method. We performed experiments using diagnostic X-ray equipment, a soft-tissue-equivalent phantom (1-20 cm), and a CdTe X-ray spectrometer standing in for one pixel of the X-ray imaging detector. Then, for the following two experimental settings, the corresponding X-ray spectra were measured at 40-120 kVp and 0.5-1000 mAs at a source-to-detector distance of 100 cm: (1) X-rays penetrating the soft-tissue-equivalent phantom with the OSL dosimeter attached directly to the phantom, and (2) X-rays penetrating only the soft-tissue-equivalent phantom. Next, the energy fluence and the errors in the fluence were calculated from the spectra. When the energy fluences with errors for these two experimental conditions were estimated to be indistinguishable, we defined the condition as the OSL dosimeter not being identifiable on the X-ray image. Based on our analysis, we determined the identification limit of the dosimeter. We then compared our results with the general irradiation conditions used in clinics. We found that the OSL dosimeter could not be identified under the irradiation conditions of abdominal and chest radiography; that is, the OSL dosimeter can be applied to measurement of the exposure dose in the X-ray irradiation field without disturbing medical images.
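The abstract does not spell out the statistical test, so the following is a minimal sketch of one plausible indistinguishability criterion, assuming Poisson counting errors per spectral bin; all names and the coverage factor are assumptions:

```python
# Compare energy fluences (with propagated counting errors) measured with and
# without the dosimeter in the beam; "identifiable" only if they differ beyond
# the combined error. Spectra and bin energies are illustrative stand-ins.
import numpy as np

def energy_fluence(counts, energies):
    """Energy-fluence proxy: counts weighted by photon energy, with Poisson
    (sqrt-N) counting error propagated per bin."""
    psi = np.sum(counts * energies)
    err = np.sqrt(np.sum(counts * energies ** 2))
    return psi, err

def dosimeter_identifiable(spec_with, spec_without, energies, k=2.0):
    psi1, e1 = energy_fluence(spec_with, energies)
    psi2, e2 = energy_fluence(spec_without, energies)
    return abs(psi1 - psi2) > k * np.hypot(e1, e2)   # k-sigma criterion
```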
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-01-01
Purpose: To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. Methods: 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Results: Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. Conclusions: The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time. PMID:24007146
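The in-house localization algorithm is not public; a generic stand-in using normalized cross-correlation (scikit-image) illustrates markerless template tracking on a beam's-eye-view frame. The frame source and template are assumed:

```python
# Markerless soft-tissue tracking by template matching: find the location in
# the BEV frame that best correlates with a reference tumor template.
import numpy as np
from skimage.feature import match_template

def track(frame, template):
    """Return (row, col) of the best template match in a BEV frame."""
    score = match_template(frame, template, pad_input=True)
    return np.unravel_index(np.argmax(score), score.shape)
```

Running `track` on each grabbed frame at 12.8 Hz would yield the per-frame displacement estimates described above.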
Vinnicombe, S J; Whelehan, P; Thomson, K; McLean, D; Purdie, C A; Jordan, L B; Hubbard, S; Evans, A J
2014-04-01
Shear wave elastography (SWE) is a promising adjunct to greyscale ultrasound in differentiating benign from malignant breast masses. The purpose of this study was to characterise breast cancers which are not stiff on quantitative SWE, to elucidate potential sources of error in clinical application of SWE to evaluation of breast masses. Three hundred and two consecutive patients examined by SWE who underwent immediate surgery for breast cancer were included. Characteristics of 280 lesions with suspicious SWE values (mean stiffness >50 kPa) were compared with 22 lesions with benign SWE values (<50 kPa). Statistical significance of the differences was assessed using non-parametric goodness-of-fit tests. Pure ductal carcinoma in situ (DCIS) masses were more often soft on SWE than masses representing invasive breast cancer. Invasive cancers that were soft were more frequently: histological grade 1, tubular subtype, ≤10 mm invasive size and detected at screening mammography. No significant differences were found with respect to the presence of invasive lobular cancer, vascular invasion, hormone and HER-2 receptor status. Lymph node positivity was less common in soft cancers. Malignant breast masses classified as benign by quantitative SWE tend to have better prognostic features than those correctly classified as malignant. • Over 90 % of cancers assessable with ultrasound have a mean stiffness >50 kPa. • 'Soft' invasive cancers are frequently small (≤10 mm), low grade and screen-detected. • Pure DCIS masses are more often soft than invasive cancers (>40 %). • Large symptomatic masses are better evaluated with SWE than small clinically occult lesions. • When assessing small lesions, 'softness' should not raise the threshold for biopsy.
Using the lead vehicle as preview sensor in convoy vehicle active suspension control
NASA Astrophysics Data System (ADS)
Rahman, Mustafizur; Rideout, Geoff
2012-12-01
Both ride quality and roadholding of actively suspended vehicles can be improved by sensing the road ahead of the vehicle and using this information in a preview controller. Previous applications have used look-ahead sensors mounted on the front bumper to measure terrain beneath. Such sensors are vulnerable, potentially confused by water, snow, or other soft obstacles and offer a fixed preview time. For convoy vehicle applications, this paper proposes using the overall response of the preceding vehicle(s) to generate preview controller information for follower vehicles. A robust observer is used to estimate the states of a quarter-car vehicle model, from which road profile is estimated and passed on to the follower vehicle(s) to generate a preview function. The preview-active suspension, implemented in discrete time using a shift register approach to improve simulation time, reduces sprung mass acceleration and dynamic tyre deflection peaks by more than 50% and 40%, respectively. Terrain can change from one vehicle to the next if a loose obstacle is dislodged, or if the vehicle paths are sufficiently different so that one vehicle misses a discrete road event. The resulting spurious preview information can give suspension performance worse than that of a passive or conventional active system. In this paper, each vehicle can effectively estimate the road profile based on its own state trajectory. By comparing its own road estimate with the preview information, preview errors can be detected and suspension control quickly switched from preview to conventional active control to preserve performance improvements compared to passive suspensions.
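A minimal sketch of the shift-register preview idea, including the paper's fallback to the follower's own road estimate when the preview looks spurious; the buffer length, tolerance, and names are assumptions, not the authors' design:

```python
# Shift-register preview buffer: the lead vehicle's road-profile estimates are
# queued and replayed to the follower after the inter-vehicle travel delay.
from collections import deque

class PreviewBuffer:
    def __init__(self, preview_steps):
        self.buf = deque([0.0] * preview_steps, maxlen=preview_steps)

    def push(self, lead_road_estimate):
        """Shift in the newest road-height estimate from the lead vehicle."""
        self.buf.append(lead_road_estimate)

    def preview(self, own_estimate, tol=0.05):
        """Oldest sample is the road now reaching the follower; fall back to
        the follower's own estimate when preview disagrees (spurious preview)."""
        ahead = self.buf[0]
        return ahead if abs(ahead - own_estimate) <= tol else own_estimate
```

With time step `dt`, `preview_steps` would be the inter-vehicle headway time divided by `dt`.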
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = -0.58 and -0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging; δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
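A small sketch of the analysis pipeline implied above: emulate lower frame rates by averaging groups of k frames, then correlate an image metric with the tracking error. Arrays and names are illustrative stand-ins:

```python
# Frame averaging to emulate reduced acquisition rates, plus the Pearson
# correlation used to relate blur/noise metrics to tracking error.
import numpy as np

def average_frames(frames, k):
    """Emulate a frame rate of (native_rate / k) by averaging k-frame groups;
    e.g., k = 3 turns 12.87 Hz into the 4.29 Hz threshold reported above."""
    n = (len(frames) // k) * k
    return frames[:n].reshape(-1, k, *frames.shape[1:]).mean(axis=1)

def pearson_r(metric, tracking_error):
    """Pearson correlation coefficient R between two 1D measurement series."""
    return np.corrcoef(metric, tracking_error)[0, 1]
```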
TOPICAL REVIEW: Anatomical imaging for radiotherapy
NASA Astrophysics Data System (ADS)
Evans, Philip M.
2008-06-01
The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of scans is taken on different days. Both allow planning to account for variability intrinsic to the patient. Treatment verification has been carried out using a variety of technologies including: MV portal imaging, kV portal/fluoroscopy, MVCT, conebeam kVCT, ultrasound and optical surface imaging. The various methods have their pros and cons. The four x-ray methods involve an extra radiation dose to normal tissue. The portal methods may not generally be used to visualize soft tissue, consequently they are often used in conjunction with implanted fiducial markers. The two CT-based methods allow measurement of inter-fraction variation only. Ultrasound allows soft-tissue measurement with zero dose but requires skilled interpretation, and there is evidence of systematic differences between ultrasound and other data sources, perhaps due to the effects of the probe pressure. Optical imaging also involves zero dose but requires good correlation between the target and the external measurement and thus is often used in conjunction with an x-ray method. The use of anatomical imaging in radiotherapy allows treatment uncertainties to be determined. These include errors between the mean position at treatment and that at planning (the systematic error) and the day-to-day variation in treatment set-up (the random error). Positional variations may also be categorized in terms of inter- and intra-fraction errors. 
Various empirical treatment margin formulae and intervention approaches exist to determine the optimum strategies for treatment in the presence of these known errors. Other methods exist that aim to reduce error margins drastically, including the currently available breath-hold techniques and the tracking methods that are largely in development. This paper reviews anatomical imaging techniques in radiotherapy and how they are used to boost the therapeutic benefit of the treatment.
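One widely used instance of such an empirical margin formula, quoted here for illustration (the review itself surveys several), is the van Herk recipe combining the systematic error Σ and the random error σ into a CTV-to-PTV margin:

```latex
% van Herk margin recipe: CTV-to-PTV margin ensuring a minimum CTV dose of
% 95% of the prescription for 90% of patients, with Sigma the quadrature sum
% of systematic errors and sigma that of random errors.
M_{\mathrm{PTV}} = 2.5\,\Sigma + 0.7\,\sigma
```

The large weight on Σ reflects the point made above: systematic (planning-to-treatment) errors are more damaging than day-to-day random variation.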
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se
2016-01-21
Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling over the entire range of coupling constants, in both rigid and highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulation, and experiment. The resonance conditions and the efficiency of suppressing rf field errors are examined and compared to those of other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of analytical and numerical calculations. The technique benefits from dipolar resolution over a wider range of coupling constants than other state-of-the-art methods and is thus advantageous in studies of complex solids with a broad range of dynamic processes and degrees of molecular mobility.
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability to twirl around objects as a real octopus arm does. Despite the simple design, the cable-driven soft conical manipulator is described by nonlinear differential equations that are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network that learns the inverse kinetics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and the forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with an average relative error of 8% with respect to the total arm length.
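A minimal sketch of the learning step with a generic feed-forward network (scikit-learn's MLPRegressor rather than the authors' network); the training data, which in the paper come from the physical cable-driven arm, are assumed:

```python
# Learn the inverse kinetics map: desired 3D tip position -> cable forces.
# Architecture and iteration budget are illustrative choices.
from sklearn.neural_network import MLPRegressor

def fit_inverse_kinetics(tip_positions, cable_forces):
    """tip_positions: (N, 3) measured tips; cable_forces: (N, n_cables)."""
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000)
    net.fit(tip_positions, cable_forces)   # forward data, inverse mapping
    return net

# At run time, one cheap forward pass replaces Jacobian-based iteration:
# forces = net.predict([[x, y, z]])
```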
A Novel Soft Pneumatic Artificial Muscle with High-Contraction Ratio.
Han, Kwanghyun; Kim, Nam-Ho; Shin, Dongjun
2018-06-20
There is growing interest in soft actuators for human-friendly robotic applications. However, it is very challenging for conventional soft actuators to achieve both a large working distance and a high force. To address this problem, we present a high-contraction-ratio pneumatic artificial muscle (HCRPAM) based on a novel actuation concept. The HCRPAM can contract substantially while generating a large force, making it suitable for a wide range of robotic applications. Our proposed prototyping method allows for easy and quick fabrication while accommodating various design variables. We derived a mathematical model using the virtual work principle and validated the model experimentally. We conducted simulations for design optimization using this model. Our experimental results show that the HCRPAM has a 183.3% larger contraction ratio and 37.1% higher force output than the conventional pneumatic artificial muscle (McKibben muscle). Furthermore, the actuator shows comparable position-tracking performance at 1.0 Hz and a relatively low hysteresis error of 4.8%. Finally, we discuss the controllable bending characteristics of the HCRPAM, which uses heterogeneous materials and has an asymmetrical structure to make it comfortable for a human to wear.
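The authors' HCRPAM model is not reproduced in the abstract; for orientation, the virtual-work principle applied to the classic McKibben muscle (the comparison actuator named above) gives the familiar force-pressure relation:

```latex
% Virtual-work statics of the classic McKibben muscle (Chou & Hannaford):
% at gauge pressure P the axial force follows from the volume change with
% length, F = -P dV/dL. For a braid of thread length b wound in n turns at
% braid angle theta, this evaluates to
F = -P\,\frac{dV}{dL}
  = \frac{P\,b^{2}}{4\pi n^{2}}\left(3\cos^{2}\theta - 1\right)
```

The same energy-balance argument, applied to the HCRPAM's geometry, is what the paper's mathematical model builds on.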
Tollånes, Mette C; Strandberg-Larsen, Katrine; Eichelberger, Kacey Y; Moster, Dag; Lie, Rolv Terje; Brantsæter, Anne Lise; Meltzer, Helle Margrete; Stoltenberg, Camilla; Wilcox, Allen J
2016-09-01
Postnatal administration of caffeine may reduce the risk of cerebral palsy (CP) in vulnerable low-birth-weight neonates. The effect of antenatal caffeine exposure remains unknown. We investigated the association of intake of caffeine by pregnant women and risk of CP in their children. The study was based on The Norwegian Mother and Child Cohort Study, comprising >100,000 live-born children, of whom 222 were subsequently diagnosed with CP. Mothers reported their caffeine consumption in questionnaires completed around pregnancy week 17 (102,986 mother-child pairs), week 22 (87,987 mother-child pairs), and week 30 (94,372 mother-child pairs). At week 17, participants were asked about present and prepregnancy consumption. We used Cox regression models to estimate associations between exposure [daily servings (1 serving = 125 mL) of caffeinated coffee, tea, and soft drinks and total caffeine consumption] and CP in children, with nonconsumers as the reference group. Models included adjustment for maternal age and education, medically assisted reproduction, and smoking, and for each source of caffeine, adjustments were made for the other sources. Total daily caffeine intake before and during pregnancy was not associated with CP risk. High consumption (≥6 servings/d) of caffeinated soft drinks before pregnancy was associated with an increased CP risk (HR: 1.9; 95% CI: 1.2, 3.1), and children of women consuming 3-5 daily servings of caffeinated soft drinks during pregnancy weeks 13-30 also had an increased CP risk (HR: 1.7; 95% CI: 1.1, 2.8). A mean daily consumption of 51-100 mg caffeine from soft drinks during the first half of pregnancy was associated with a 1.9-fold increased risk of CP in children (HR: 1.9; 95% CI: 1.1, 3.6). Maternal total daily caffeine consumption before and during pregnancy was not associated with CP risk in children. The observed increased risk with caffeinated soft drinks warrants further investigation. © 2016 American Society for Nutrition.
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. The implemented configurable fault tolerant system experiment comprises an open-source microprocessor without interlocked pipeline stages (MIPS)-based processor softcore, a cached memory structure capable of accessing double data rate type three and secure digital card memories, an interface to the main satellite bus, and Xilinx's soft error mitigation softcore.
Radio frequency tags systems to initiate system processing
NASA Astrophysics Data System (ADS)
Madsen, Harold O.; Madsen, David W.
1994-09-01
This paper describes the automatic identification technology which has been installed at Applied Magnetic Corp. MR fab. World class manufacturing requires technology exploitation. This system combines (1) FluoroTrac cassette and operator tracking, (2) CELLworks cell controller software tools, and (3) Auto-Soft Inc. software integration services. The combined system eliminates operator keystrokes and errors during normal processing within a semiconductor fab. The methods and benefits of this system are described.
Studies Of Single-Event-Upset Models
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.
1988-01-01
Report presents the latest in a series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, the SEU response of a low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) was observed during irradiation by Br and O ions in the ranges of 100 to 240 and 20 to 100 MeV, respectively. The experimental data complete the verification of a computer model used to simulate SEU in this circuit.
Clean galena, contaminated lead, and soft errors in memory chips
NASA Astrophysics Data System (ADS)
Lykken, G. I.; Hustoft, J.; Ziegler, B.; Momcilovic, B.
2000-10-01
Lead (Pb) disks were exposed to a radon (Rn)-rich atmosphere and surface alpha particle emissions were detected over time. Cumulative 210Po alpha emission increased nearly linearly with time. Conversely, cumulative emission for each of 218Po and 214Po was constant after one and two hours, respectively. Processing of radiation-free Pb ore (galena) in inert atmospheres was compared with processing in ambient air. Galena processed within a flux heated in a graphite crucible while exposed to an inert atmosphere, resulted in lead contaminated with 210Po (Trial 1). A glove box was next used to prepare a baseline radiation-free flux sample in an alumina crucible that was heated in an oven with an inert atmosphere (Trials 2 and 3). Ambient air was thereafter introduced, in place of the inert atmosphere, to the radiation-free flux mixture during processing (Trial 4). Ambient air introduced Rn and its progeny (RAD) into the flux during processing so that the processed Pb contained Po isotopes. A typical coke used in lead smelting also emitted numerous alpha particles. We postulate that alpha particles from tin/lead solder bumps, a cause of computer chip memory soft errors, may originate from Rn and RAD in the ambient air and/or coke used as a reducing agent in the standard galena smelting procedure.
Optical Assessment of Soft Contact Lens Edge-Thickness.
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P
2016-08-01
To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2-μm imaging resolution in three dimensions and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester, NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. Fifty cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. The results confirmed the ability of GD-OCM to provide high-definition images of soft contact lens edges. As a nondestructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance.
Optical Assessment of Soft Contact Lens Edge-Thickness
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P.
2016-01-01
Purpose To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2 μm imaging resolution in three dimensions, and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. Methods A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. Results The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. 50 cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. Conclusions The results confirmed the ability of GD-OCM to provide high definition images of soft contact lens edges. As a non-destructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in-vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance. PMID:27232902
A Dynamic Game on Network Topology for Counterinsurgency Applications
2015-03-26
... scenario. This study creates a dynamic game on network topology to provide insight into the effectiveness of offensive targeting strategies determined by ... focused upon the diffusion of thoughts and innovations throughout complex social networks. Coleman et al. (1966) and Ryan & Gross (1950) investigated ... scale-free networks make them extremely resilient against errors but very vulnerable to attack. Most interestingly, a determined attacker can remove well ...
A comparison study of different facial soft tissue analysis methods.
Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun
2014-07-01
The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: Direct anthropometry, Digitizer, 3D CT, 3D scanner, and DI3D system. With these measurement methods, 10 measurement values representing the facial width, height, and depth were determined twice with a one week interval by one examiner. These data were analyzed with the SPSS program. The position created based on multi-dimensional scaling showed that direct anthropometry, 3D CT, digitizer, 3D scanner demonstrated relatively similar values, while the DI3D system showed slightly different values. All 5 methods demonstrated good accuracy and had a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained by using the DI3D system was statistically significantly different from that obtained by using the digital caliper, digitizer and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of measurement methods, two measurement values (Ls-Li, G-Pg) obtained by using direct anthropometry, one measurement value (N'-Prn) obtained by using the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained by using the DI3D system, were statistically significantly different. However, the mean measurement error in every measurement method was low (<0.7 mm). All measurement values obtained by using the 3D CT and 3D scanner did not show any statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies. Copyright © 2013 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
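The "technical error" quoted above is conventionally computed with Dahlberg's formula from repeated measurements; it is stated here for reference (the abstract does not give its formula explicitly):

```latex
% Dahlberg's technical error of measurement (TEM) for n objects each measured
% twice, with d_i the difference between the paired measurements of object i:
\mathrm{TEM} = \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}}
```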
Read disturb errors in a CMOS static RAM chip [radiation hardened for spacecraft]
NASA Technical Reports Server (NTRS)
Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.
1989-01-01
Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard single-event-upset-resistant 4 x 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparency of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low (soft tissue) and high (bone and contrast) attenuating material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and animals that are most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings, which depend upon the anatomy of the scanned subject, of the correction algorithm for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
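A compact sketch of the described image-domain correction, using scikit-image's radon/iradon as the forward and back projectors; the threshold, polynomial coefficients, and scale factor are illustrative tuning knobs, not the paper's calibrated values:

```python
# Image-domain beam-hardening correction: segment dense material, forward-
# project it, model the BH error per projection with a polynomial, reconstruct
# the error image, and subtract a scaled version from the original image.
import numpy as np
from skimage.transform import radon, iradon

def bh_correct(img, hu_threshold=300.0, poly=(0.0, 0.0, 1e-6), scale=1.0):
    theta = np.linspace(0.0, 180.0, max(img.shape), endpoint=False)
    dense = np.where(img > hu_threshold, img, 0.0)   # bone/contrast only
    sino = radon(dense, theta=theta)                 # forward-project segmentation
    bh_err = np.polyval(poly[::-1], sino)            # polynomial BH error model
    err_img = iradon(bh_err, theta=theta,            # reconstruct error image
                     output_size=img.shape[0])
    return img - scale * err_img                     # subtract scaled error image
```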
Link Performance Analysis and monitoring - A unified approach to divergent requirements
NASA Astrophysics Data System (ADS)
Thom, G. A.
Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also present from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with their needs.
Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm
NASA Astrophysics Data System (ADS)
Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.
2017-01-01
This study proposed the application of a data mining technique, the adaptive neuro-fuzzy inference system (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicated as the location of a fire. In this study, hotspot detections are categorized as true alarms and false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. This study classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided a low training error (error = 0.0093676) and also a low testing error (error = 0.0093676). The attribute of distance to road is the most determining factor influencing the probability of true and false alarms, as human activity levels are higher near roads. This classification model can be used to develop an early warning system for forest fires.
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-07
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring׳s equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated to small and medium deformations, but it was successfully reduced by the POKE estimation. Copyright © 2014 Elsevier Ltd. All rights reserved.
Megaquakes, prograde surface waves and urban evolution
NASA Astrophysics Data System (ADS)
Lomnitz, C.; Castaños, H.
2013-05-01
Cities grow according to evolutionary principles. They move away from soft-ground conditions and avoid vulnerable types of structures. A megaquake generates prograde surface waves that produce unexpected damage in modern buildings. The examples (Figs. 1 and 2) were taken from the 1985 Mexico City and the 2010 Concepción, Chile megaquakes. About 400 structures built under supervision according to modern building codes were destroyed in the Mexican earthquake. All were sited on soft ground. A Rayleigh wave will cause surface particles to move as ellipses in a vertical plane. Building codes assume that this motion will be retrograde, as on a homogeneous elastic halfspace, but soft soils are intermediate materials between a solid and a liquid. When Poisson's ratio tends to ν → 0.5, the particle motion turns prograde, as it would on a homogeneous fluid halfspace. Building codes assume that the tilt of the ground is not in phase with the acceleration, but we show that structures on soft ground tilt into the direction of the horizontal ground acceleration. The combined effect of gravity and acceleration may destabilize a structure when it is in resonance with its eigenfrequency. References: Castaños, H., and C. Lomnitz (2013). Charles Darwin and the 1835 Chile earthquake. Seismol. Res. Lett. 84, 19-23. Lomnitz, C. (1990). Mexico 1985: the case for gravity waves. Geophys. J. Int. 102, 569-572. Malischewsky, P. G., et al. (2008). The domain of existence of prograde Rayleigh-wave particle motion. Wave Motion 45, 556-564.
Figure 1: 1985 Mexico megaquake, overturned 15-story apartment building in Mexico City. Figure 2: 2010 Chile megaquake, overturned 15-story R-C apartment building in Concepción.
NASA Astrophysics Data System (ADS)
Aghababaei, Sajjad; Saeedi, Gholamreza; Jalalifar, Hossein
2016-05-01
The floor failure at a longwall face decreases productivity and safety, increases operating costs, and causes other serious problems. In the Parvadeh-I coal mine, timber is used to prevent the puncture of the powered support base into the floor. In this paper, a rock engineering system (RES)-based model is presented to evaluate the risk of floor failure mechanisms at the longwall faces of the E2 and W1 panels. The presented model is used to determine the most probable floor failure mechanism, the effective factors, the damaged regions, and remedial actions. From the analyzed results, it is found that soft floor failure is the dominant floor failure mechanism at the Parvadeh-I coal mine. The average vulnerability index (VI) for the soft, buckling, and compressive floor failure mechanisms was estimated at 52, 43, and 30 for both panels, respectively. With the critical VI for the soft floor failure mechanism set at 54, the percentage of regions with VIs beyond the critical VI in the E2 and W1 panels is 65.5 and 30, respectively. The percentage of damaged regions showed that the excess amount of timber used to prevent the puncture of the weak floor below the powered support base is equal to 4,180,739 kg. The RES outputs and analyzed results showed that the setting and yielding load of the powered supports, the length of the face, water present at the face, the geometry of the powered supports, changing the cutting pattern at the longwall face, and limiting the panels to damaged regions with supercritical VIs could be considered to control soft floor failure in this mine. The results of this research could be used as a useful tool to identify damaged regions prior to mining operations at longwall panels under similar conditions.
Haller, U; Welti, S; Haenggi, D; Fink, D
2005-06-01
The number of liability cases, and also the size of individual claims due to alleged treatment errors, are increasing steadily. Spectacular verdicts, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore, risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means of reporting errors is time-consuming and calls for a change in attitude, because in many places the necessary base of trust has to be created first. CIR is not meant to identify and punish the guilty but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS), in order to create the largest database possible and thereby allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.
Patient safety in the care of mentally ill people in Switzerland: Action plan 2016
Richard, Aline; Mascherek, Anna C; Schwappach, David L B
2017-01-01
Background: Patient safety in mental healthcare has not yet attracted great attention, although the burden and the prevalence of mental diseases are high. The risk of errors with the potential to harm patients, such as aggression against self and others or non-drug treatment errors, is particularly high in this vulnerable group. Aim: To develop priority topics and strategies for action to foster patient safety in mental healthcare. Method: The Swiss patient safety foundation, together with experts, conducted round table discussions and a Delphi questionnaire to define topics along the treatment pathway and to prioritise these topics. Finally, fields of action were developed. Results: An action plan was developed, including the definition and prioritisation of 9 topics where errors may occur. A global rating task revealed errors concerning diagnostics and structural errors as the most important. This led to the development of 4 fields of action (awareness raising, research, implementation, and education and training), including practice-oriented potential starting points to enhance patient safety. Conclusions: The action plan highlights issues of high concern for patient safety in mental healthcare. It serves as a starting point for the development of strategies for action as well as concrete activities.
Translating Climate Projections for Bridge Engineering
NASA Astrophysics Data System (ADS)
Anderson, C.; Takle, E. S.; Krajewski, W.; Mantilla, R.; Quintero, F.
2015-12-01
A bridge vulnerability pilot study was conducted by Iowa Department of Transportation (IADOT) as one of nineteen pilots supported by the Federal Highway Administration Climate Change Resilience Pilots. Our pilot study team consisted of the IADOT senior bridge engineer who is the preliminary design section leader as well as climate and hydrological scientists. The pilot project culminated in a visual graphic designed by the bridge engineer (Figure 1), and an evaluation framework for bridge engineering design. The framework has four stages. The first two stages evaluate the spatial and temporal resolution needed in climate projection data in order to be suitable for input to a hydrology model. The framework separates streamflow simulation error into errors from the streamflow model and from the coarseness of input weather data series. In the final two stages, the framework evaluates credibility of climate projection streamflow simulations. Using an empirically downscaled data set, projection streamflow is generated. Error is computed in two time frames: the training period of the empirical downscaling methodology, and an out-of-sample period. If large errors in projection streamflow were observed during the training period, it would indicate low accuracy and, therefore, low credibility. If large errors in streamflow were observed during the out-of-sample period, it would mean the approach may not include some causes of change and, therefore, the climate projections would have limited credibility for setting expectations for changes. We address uncertainty with confidence intervals on quantiles of streamflow discharge. The results show the 95% confidence intervals have significant overlap. Nevertheless, the use of confidence intervals enabled engineering judgement. In our discussions, we noted the consistency in direction of change across basins, though the flood mechanism was different across basins, and the high bound of bridge lifetime period quantiles exceeded that of the historical period. This suggested the change was not isolated, and it systemically altered the risk profile. One suggestion to incorporate engineering judgement was to consider degrees of vulnerability using the median discharge of the historical period and the upper bound discharge for the bridge lifetime period.
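A small sketch of the confidence-interval step described above: bootstrap confidence intervals on a discharge quantile for two periods, followed by an overlap check; the series names, quantile, and bootstrap settings are illustrative:

```python
# Bootstrap CI on a streamflow discharge quantile, then an overlap test
# between historical and bridge-lifetime (projection) periods.
import numpy as np

def quantile_ci(flows, q=0.99, n_boot=5000, alpha=0.05, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    boots = [np.quantile(rng.choice(flows, size=len(flows), replace=True), q)
             for _ in range(n_boot)]
    return np.quantile(boots, [alpha / 2, 1 - alpha / 2])

# lo_h, hi_h = quantile_ci(historical_flows)
# lo_p, hi_p = quantile_ci(projected_flows)
# overlap = (lo_p <= hi_h) and (lo_h <= hi_p)  # engineering judgement applies
```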
Petrungaro, Paul S; Gonzalez, Santiago; Villegas, Carlos
2018-02-01
As dental implants become more popular for the treatment of partial and total edentulism and treatment of "terminal dentitions," techniques for the management of the atrophic posterior maxillae continue to evolve. Although dental implants carry a high success rate long term, attention must be given to the growing numbers of revisions or retreatment of cases that have had previous dental implant treatment and/or advanced bone replacement procedures that, due to either poor patient compliance, iatrogenic error, or poor quality of the pre-existing alveolar and/or soft tissues, have led to large osseous defects, possibly with deficient soft-tissue volume. In the posterior maxillae, where the poorest quality of bone in the oral cavity exists, achieving regeneration of the alveolar bone and adequate volume of soft tissue remains a complex procedure. This is made even more difficult when dealing with loss of dental implants previously placed, aggressive bone reduction required in various implant procedures, and/or residual sinus infections precluding proper closure of the oral wound margins. The purpose of this article is to outline a technique for the total closure of large oro-antral communications, with underlying osseous defects greater than 15 mm in width and 30 mm in length, for which multiple previous attempts at closure had failed, to achieve not only the reconstruction of adequate volume and quality of soft tissues in the area of the previous fistula, but also total regeneration of the osseous structures in the area of the large void.
NASA Astrophysics Data System (ADS)
Palaseanu, M.; Thatcher, C.; Danielson, J.; Gesch, D. B.; Poppenga, S.; Kottermair, M.; Jalandoni, A.; Carlson, E.
2016-12-01
Coastal topographic and bathymetric (topobathymetric) data with high spatial resolution (1-meter or better) and high vertical accuracy are needed to assess the vulnerability of Pacific Islands to climate change impacts, including sea level rise. According to the Intergovernmental Panel on Climate Change reports, low-lying atolls in the Pacific Ocean are extremely vulnerable to king tide events, storm surge, tsunamis, and sea-level rise. The lack of coastal topobathymetric data has been identified as a critical data gap for climate vulnerability and adaptation efforts in the Republic of the Marshall Islands (RMI). For Majuro Atoll, home to the largest city of RMI, the only elevation dataset currently available is the Shuttle Radar Topography Mission data which has a 30-meter spatial resolution and 16-meter vertical accuracy (expressed as linear error at 90%). To generate high-resolution digital elevation models (DEMs) in the RMI, elevation information and photographic imagery have been collected from field surveys using GNSS/total station and unmanned aerial vehicles for Structure-from-Motion (SfM) point cloud generation. Digital Globe WorldView II imagery was processed to create SfM point clouds to fill in gaps in the point cloud derived from the higher resolution UAS photos. The combined point cloud data is filtered and classified to bare-earth and georeferenced using the GNSS data acquired on roads and along survey transects perpendicular to the coast. A total station was used to collect elevation data under tree canopies where heavy vegetation cover blocked the view of GNSS satellites. A subset of the GPS / total station data was set aside for error assessment of the resulting DEM.
Whitelock, Claire F; Agyepong, Heather NAO; Patterson, Karalyn; Ersche, Karen D
2015-01-01
Almost one-third of the participants in a neuropsychological study signed the consent form below the given line. The relationship between a signature position on or below the line and participants’ cognitive function was investigated. Fifty drug-dependent individuals, 50 of their siblings, and 50 unrelated control participants completed a battery of neuropsychological tests using the Cambridge Neuropsychological Test Automated Battery (CANTAB). Individuals signing below, rather than on, the line performed more poorly on tests of visuospatial memory, but no differently on other cognitive tests. Signature positioning may be a soft sign for impairment of the mechanisms involved in visuospatial memory. PMID:24313358
Enhancing autonomy in paid surrogacy.
Damelio, Jennifer; Sorensen, Kelly
2008-06-01
The gestational surrogate--and her economic and educational vulnerability in particular--is the focus of many of the most persistent worries about paid surrogacy. Those who employ her, and those who broker and organize her services, usually have an advantage over her in resources and information. That asymmetry exposes her to the possibility of exploitation and abuse. Accordingly, some argue for banning paid surrogacy. Others defend legal permission on grounds of surrogate autonomy, but often retain concerns about the surrogate. In response to the dilemma of a ban versus bald permission, we propose a 'soft law' approach: states should require several hours of education of surrogates--education aimed at informing and enhancing surrogate autonomy.
Rehabilitation Medicine Approaches to Pain Management.
Cheville, Andrea L; Smith, Sean R; Basford, Jeffrey R
2018-06-01
Rehabilitation medicine offers strategies that reduce musculoskeletal pain, targeted approaches to alleviate movement-related pain, and interventions to optimize patients' function despite the persistence of pain. These approaches fall into four categories: modulating nociception, stabilizing and unloading painful structures, influencing pain perception, and alleviating soft tissue musculotendinous pain. Incorporating these interventions into individualized, comprehensive pain management programs offers the potential to empower patients and limit pain associated with mobility and required daily activities. A rehabilitative approach may be particularly helpful for patients with refractory movement-associated pain and functional vulnerability, and for those who do not wish for, or cannot tolerate, pharmacoanalgesia. Copyright © 2018 Elsevier Inc. All rights reserved.
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the Patiala city in India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of the coefficient of determination, mean square error, and accuracy. Ten-fold cross-validation is performed to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
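A minimal sketch of the best-performing setup reported (a random forest with 10-fold cross-validation, via scikit-learn); the column order and hyperparameters are assumptions:

```python
# Predict hourly Leq from traffic volume, heavy-vehicle share, and average
# speed, scoring stability with 10-fold cross-validation.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def fit_leq_model(X, y):
    """X columns: vehicles/hour, % heavy vehicles, average speed; y: Leq."""
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    scores = cross_val_score(model, X, y, cv=10, scoring='r2')
    return model.fit(X, y), scores.mean()   # fitted model and 10-fold R^2
```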
Cai, Jian-Hua
2017-09-01
To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy for gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method alone. Experimental results show that the proposed combined method is effective for pretreatment of the NIR spectrum and improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.
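A sketch of the combined pretreatment under stated assumptions: the PyEMD and PyWavelets packages, a db4 wavelet, the universal threshold, and the choice that the first intrinsic mode functions (IMFs) carry most of the noise; none of these specifics come from the abstract:

```python
# EMD + wavelet soft-threshold denoising: decompose the spectrum into IMFs,
# soft-threshold the wavelet coefficients of the noisiest IMFs, recombine.
import numpy as np
import pywt
from PyEMD import EMD

def denoise_spectrum(spectrum, n_noisy_imfs=2, wavelet='db4'):
    imfs = EMD()(spectrum)                               # empirical mode decomposition
    for k in range(min(n_noisy_imfs, len(imfs))):
        coeffs = pywt.wavedec(imfs[k], wavelet)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # robust noise estimate
        thr = sigma * np.sqrt(2 * np.log(len(spectrum))) # universal threshold
        coeffs = [pywt.threshold(c, thr, mode='soft') for c in coeffs]
        imfs[k] = pywt.waverec(coeffs, wavelet)[:len(imfs[k])]
    return np.sum(imfs, axis=0)                          # denoised spectrum
```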
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position as well as the workpiece material. In this article, the process factors investigated were: laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI316L has been investigated. The main goal was to analyze the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. The intelligent soft computing scheme of support vector regression (SVR) was implemented. The performance of the proposed estimator was confirmed by simulation results. The SVR results are then compared with those of an artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR compared with the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
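A minimal sketch of the SVR estimator with standard feature scaling (scikit-learn); the hyperparameters and feature order are assumptions, not the article's tuned values:

```python
# Relate laser-cutting operating cost to process parameters with support
# vector regression; scaling the features is standard practice for SVR.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_cost_model(X, y):
    """X columns: laser power, cutting speed, air pressure, focal point
    position, material thickness; y: operating cost."""
    model = make_pipeline(StandardScaler(),
                          SVR(kernel='rbf', C=100.0, epsilon=0.01))
    return model.fit(X, y)
```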
Speech outcome after early repair of cleft soft palate using Furlow technique.
Abdel-Aziz, Mosaad
2013-01-01
The earlier a palatal cleft is closed, the better the speech outcome and the fewer the compensatory articulation errors; however, dissection on the hard palate may interfere with facial growth. In Furlow palatoplasty, dissection on the hard palate is not needed and surgery is usually limited to the soft palate, so the technique has no deleterious effect on facial growth. The aim of this study was to assess the efficacy of the Furlow palatoplasty technique on the speech of young infants with cleft soft palate. Twenty-one infants with cleft soft palate were included in this study; their ages ranged from 3 to 6 months. Their clefts were repaired using the Furlow technique. The patients were followed up for at least 4 years; at the end of the follow-up period they were subjected to flexible nasopharyngoscopy to assess the velopharyngeal closure and to speech analysis using auditory perceptual assessment. Eighteen cases (85.7%) showed complete velopharyngeal closure, 1 case (4.8%) showed borderline competence, and 2 cases (9.5%) showed borderline incompetence. Normal resonance was attained in 18 patients (85.7%), and mild hypernasality in 3 patients (14.3%); no patient demonstrated nasal emission of air. Speech therapy was beneficial for cases with residual hypernasality; no cases needed secondary corrective surgery. Furlow palatoplasty at a younger age has a favorable speech outcome with no detectable morbidity. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Karuppanan, Udayakumar; Unni, Sujatha Narayanan; Angarai, Ganesan R
2017-01-01
Assessment of mechanical properties of soft matter is a challenging task in a purely noninvasive and noncontact environment. As tissue mechanical properties play a vital role in determining tissue health status, such noninvasive methods offer great potential in framing large-scale medical screening strategies. The digital speckle pattern interferometry (DSPI)-based image capture and analysis system described here is capable of extracting the deformation information from a single acquired fringe pattern. Such a method of analysis is required given the highly dynamic nature of speckle patterns derived from soft tissues under mechanical compression. Soft phantoms mimicking breast tissue optical and mechanical properties were fabricated and tested in the DSPI out-of-plane configuration setup. A Hilbert transform (HT)-based image analysis algorithm was developed to extract the phase and corresponding deformation of the sample from a single acquired fringe pattern. The experimental fringe contours were found to correlate with numerically simulated deformation patterns of the sample using Abaqus finite element analysis software. The deformation extracted from the experimental fringe pattern using the HT-based algorithm is compared with the deformation value obtained using numerical simulation under similar loading conditions, and the results correlate with an average error of 10%. The proposed method is applied on breast phantoms fabricated with an included subsurface anomaly mimicking cancerous tissue and the results are analyzed.
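The single-fringe phase-extraction step can be illustrated in one dimension with SciPy's analytic-signal Hilbert transform (a toy fringe row; the background removal and filtering a real DSPI pipeline needs are omitted):

    import numpy as np
    from scipy.signal import hilbert

    x = np.linspace(0.0, 1.0, 2048)
    true_phase = 6.0 + 50.0 * x            # synthetic, monotonically increasing phase
    fringe = np.cos(true_phase)            # one row of an ideal fringe pattern

    recovered = np.unwrap(np.angle(hilbert(fringe)))
    offset = np.mean(recovered - true_phase)        # unwrapping fixes phase up to a constant
    print(np.std(recovered - true_phase - offset))  # small residual except near the edges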
Wu, Jinpeng; Sallis, Shawn; Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; Dai, Kehua; Guo, Zixuan; Yang, Wanli
2018-04-17
Energy storage has increasingly become a limiting factor of today's sustainable energy applications, including electric vehicles and a green electric grid based on volatile solar and wind sources. The pressing demand for high-performance electrochemical energy storage solutions, i.e., batteries, relies on both fundamental understanding and practical developments from both academia and industry. The formidable challenge of developing successful battery technology stems from the different requirements for different energy-storage applications. Energy density, power, stability, safety, and cost parameters all have to be balanced in batteries to meet the requirements of different applications. Therefore, multiple battery technologies based on different materials and mechanisms need to be developed and optimized. Incisive tools that can directly probe the chemical reactions in various battery materials are becoming critical to advance the field beyond its conventional trial-and-error approach. Here, we present detailed protocols for soft X-ray absorption spectroscopy (sXAS), soft X-ray emission spectroscopy (sXES), and resonant inelastic X-ray scattering (RIXS) experiments, which are inherently elemental-sensitive probes of the transition-metal 3d and anion 2p states in battery compounds. We provide the details on the experimental techniques and demonstrations revealing the key chemical states in battery materials through these soft X-ray spectroscopy techniques.
[Tracing the map of medication errors outside the hospital environment in the Madrid Community].
Taravilla-Cerdán, Belén; Larrubia-Muñoz, Olga; de la Corte-García, María; Cruz-Martos, Encarnación
2011-12-01
Preparation of a map of medication errors reported by health professionals outside hospitals within the framework of Medication Errors Reporting for the Community of Madrid during the period 2008-2009. Retrospective observational study. Notification database of medication errors in the Community of Madrid. Notifications sent to the web page: Safe Use of Medicines and Health Products of the Community of Madrid. Information on the originator of the report, date of incident, shift, type of error and causes, outcome, patient characteristics, stage, place where it was produced and detected, whether the medication was administered, lot number, expiry date and the general nature of the drug, and a brief description of the incident. There were 5470 medication errors analysed, of which 3412 came from outside hospitals (62%), occurring mainly in the prescription stage (56.92%) and reported most often by pharmacists. No harm was done in 92.9% of cases, harm occurred in 4.8%, and in 2.3% there was an error that could not be followed up. The centralization of information has confirmed that the prescription is a vulnerable point in the chain of drug therapy. Cleaning up prescription databases, preventing the marketing of commercial presentations that give rise to confusion, enhancing information to professionals and patients, establishing standardised procedures, and avoiding ambiguous, illegible, or abbreviated prescriptions are useful strategies to try to minimise these errors. Copyright © 2010 Elsevier España, S.L. All rights reserved.
Aquatic habitat mapping with an acoustic Doppler current profiler: Considerations for data quality
Gaeuman, David; Jacobson, Robert B.
2005-01-01
When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but such errors have not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.
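The compass-misalignment mechanism can be reproduced numerically: the boat velocity is removed using a heading that is off by delta, leaving a spurious velocity that grows with instrument speed. A hedged sketch (GPS-referenced boat velocity assumed; function name hypothetical):

    import numpy as np

    def spurious_velocity(v_water, v_boat, delta_deg):
        # Water velocity estimated with a compass misaligned by delta, assuming the
        # boat velocity itself comes from an error-free reference (e.g. GPS)
        vw, vb = np.asarray(v_water, float), np.asarray(v_boat, float)
        d = np.radians(delta_deg)
        R = np.array([[np.cos(d), -np.sin(d)], [np.sin(d), np.cos(d)]])
        return R @ (vw - vb) + vb - vw      # estimated minus true water velocity

    err = spurious_velocity(v_water=[0.3, 0.0], v_boat=[1.0, 0.5], delta_deg=5.0)
    print(np.linalg.norm(err))   # equals 2*sin(delta/2)*|v_water - v_boat|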
Intraoperative Radiation Therapy: Characterization and Application
1989-03-01
difficult to obtain. Notably, carcinomas of the pancreas, stomach, colon, and rectum, and sarcomas of soft tissue are prime candidates for IORT (2:131) ... Their pioneering efforts served as the basis for all my work. Mr. John Brohas of the AFIT Model Fabrication Shop aided my efforts considerably by ... fabricated to set the collimator jaws to the required 10 cm x 10 cm aperture. The necessary parts are available from Varian. This will help eliminate errors ...
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2015-11-05
Soft tissue artifact (STA) distorts marker-based knee kinematics measures and makes them difficult to use in clinical practice. None of the current methods designed to compensate for STA is suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS(®)). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, internal-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, well within the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient-specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
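The surrogate-modeling idea, learn the FEM response offline and predict instantly online, can be sketched as follows (synthetic stand-ins for the pre-computed FEM samples; scikit-learn assumed):

    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # One row per pre-computed FEM run: force magnitude, position (x, y, z), inclination
    X = rng.uniform(-1.0, 1.0, size=(500, 5))
    # Stand-in for the FEM displacement (dx, dy, dz) of one tumour-mesh node
    Y = np.column_stack([
        0.3 * X[:, 0] * np.exp(-X[:, 1] ** 2),
        0.2 * X[:, 0] * X[:, 2],
        0.1 * X[:, 0],
    ])

    model = MultiOutputRegressor(SVR(kernel="rbf", C=10.0, epsilon=1e-3)).fit(X, Y)
    print(model.predict(X[:1]))   # instantaneous prediction at operation time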
Social class variation in risk: a comparative analysis of the dynamics of economic vulnerability.
Whelan, Christopher T; Maître, Bertrand
2008-12-01
A joint concern with multidimensionality and dynamics is a defining feature of the pervasive use of the terminology of social exclusion in the European Union. The notion of social exclusion focuses attention on economic vulnerability in the sense of exposure to risk and uncertainty. Sociological concern with these issues has been associated with the thesis that risk and uncertainty have become more pervasive and extend substantially beyond the working class. This paper combines features of recent approaches to statistical modelling of poverty dynamics and multidimensional deprivation in order to develop our understanding of the dynamics of economic vulnerability. An analysis involving nine countries and covering the first five waves of the European Community Household Panel shows that, across nations and time, it is possible to identify an economically vulnerable class. This class is characterized by heightened risk of falling below a critical resource level, exposure to material deprivation and experience of subjective economic stress. Cross-national differentials in persistence of vulnerability are wider than in the case of income poverty and less affected by measurement error. Economic vulnerability profiles vary across welfare regimes in a manner broadly consistent with our expectations. Variation in the impact of social class within and across countries provides no support for the argument that its role in structuring such risk has become much less important. Our findings suggest that it is possible to accept the importance of the emergence of new forms of social risk and acknowledge the significance of efforts to develop welfare states policies involving a shift of opportunities and decision making on to individuals without accepting the 'death of social class' thesis.
2017-01-01
Purpose: This study evaluated the changes in nutritional status based on quality of life (QoL) item-level analysis to determine whether individual QoL responses might facilitate personal clinical impact. Materials and Methods: This study retrospectively evaluated QoL data obtained by the European Organisation for Research and Treatment of Cancer (EORTC) Quality of Life Questionnaire-Core 30 (QLQ-C30) and Quality of Life Questionnaire-Stomach (QLQ-STO22) as well as metabolic-nutritional data obtained by bioelectrical impedance analysis and blood tests. Patients were assessed preoperatively and at the 5-year follow-up. QoL was analyzed at the level of the constituent items. The patients were categorized into vulnerable and non-vulnerable QoL groups for each scale based on their responses to the QoL items, and changes in the metabolic-nutritional indices were compared. Results: Multiple shortcomings in the metabolic-nutritional indices were observed in the vulnerable groups for nausea/vomiting (waist-hip ratio, degree of obesity), dyspnea (hemoglobin, iron), constipation (body fat mass, percent body fat), dysphagia (body fat mass, percent body fat), reflux (body weight, hemoglobin), dry mouth (percent body fat, waist-hip ratio), and taste (body weight, total body water, soft lean mass, body fat mass). Shortcomings in a single index were observed in the vulnerable groups for emotional functioning and pain (EORTC QLQ-C30) and for eating restrictions (EORTC QLQ-STO22). Conclusions: Long-term postoperative QoL deterioration in emotional functioning, nausea/vomiting, pain, dyspnea, constipation, dysphagia, reflux, eating restrictions, dry mouth, and taste was associated with nutritional shortcomings. QoL item-level analysis, instead of scale-level analysis, may help to facilitate personalized treatment for individual QoL respondents. PMID:29302374
Safety in the operating theatre--part 1: interpersonal relationships and team performance
NASA Technical Reports Server (NTRS)
Schaefer, H. G.; Helmreich, R. L.; Scheidegger, D.
1995-01-01
The authors examine the application of interpersonal human factors training on operating room (OR) personnel. Mortality studies of OR deaths and critical incident studies of anesthesia are examined to determine the role of human error in OR incidents. Theoretical models of system vulnerability to accidents are presented with emphasis on a systems approach to OR performance. Input, process, and outcome factors are discussed in detail.
Officer Career Development: Reactions of Two Unrestricted Line Communities to Detailers
1987-08-01
self-esteem scale (Rosenberg, 1979) (Cronbach alpha = .82). Evaluation of Job History (Box 2): 1. "What is your evaluation of the following ... rating scales, which are vulnerable to "leniency error" (Kerlinger, 1965). That is, constituents may have evaluated detailers more favorably than they ... communication in bargaining. Human Communication Research, 8, 262-280. Rosenberg, M. (1979). Conceiving the self. New York: Basic Books. Turnbull, A. A
Al-Ghorabaie, Fateme Moin; Noferesti, Azam; Fadaee, Mahdi; Ganji, Nima
2016-08-01
The purpose of this study was to assess cognitive vulnerability and response style in clinical and normal individuals. A sample of 90 individuals was selected for each of the 3 groups of Generalized Anxiety Disorder (GAD), Dysthymic disorder and normal individuals. They completed the MCQ and RSQ. Results analyzed by MANOVA and post hoc tests showed significant differences among groups. The Dysthymic and GAD groups reported higher scores on cognitive confidence compared to the normal group. Individuals with GAD showed highly negative beliefs about the need to control thought, compared to the other groups, but in cognitive self-consciousness they showed no differences from the normal group. In regard to uncontrollability, danger and positive beliefs, the GAD group had higher levels than the other groups. Although the normal and GAD groups did not show any significant differences in response style, there was a significant difference between the Dysthymic group and the other groups in all response styles. Beliefs and meta-cognitive strategies can distinguish between clinical and non-clinical individuals. Also, the findings support the Self-Regulatory Executive Function model.
Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi
2017-02-21
The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collect information on the landslides that have caused casualties in China, and established the Landslides Casualties Inventory of China. 100 landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by the occurrence of the event. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the estimated casualties of the threshold curve were in good agreement with the real casualties, with a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research.
Lin, Qigen; Wang, Ying; Liu, Tianxue; Zhu, Yingqi; Sui, Qi
2017-01-01
The lack of a detailed landslide inventory makes research on the vulnerability of people to landslides highly limited. In this paper, the authors collect information on the landslides that have caused casualties in China, and established the Landslides Casualties Inventory of China. 100 landslide cases from 2003 to 2012 were utilized to develop an empirical relationship between the volume of a landslide event and the casualties caused by the occurrence of the event. Error bars were used to describe the uncertainty of casualties resulting from landslides and to establish a threshold curve of casualties caused by landslides in China. The threshold curve was then applied to the landslide cases that occurred in 2013 and 2014. The validation results show that the estimated casualties of the threshold curve were in good agreement with the real casualties, with a small deviation. Therefore, the threshold curve can be used for estimating potential casualties and landslide vulnerability, which is meaningful for emergency rescue operations after landslides occur and for risk assessment research. PMID:28230810
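The abstract does not give the functional form of the volume-casualty relationship; assuming, purely for illustration, a power law N = a·V^b, such an empirical fit with parameter uncertainties could be set up as follows (hypothetical data; SciPy assumed):

    import numpy as np
    from scipy.optimize import curve_fit

    volume = np.array([1e3, 5e3, 2e4, 1e5, 5e5])   # hypothetical landslide volumes, m^3
    deaths = np.array([2, 4, 9, 21, 60])           # hypothetical casualty counts

    def power_law(v, a, b):
        return a * np.power(v, b)

    params, cov = curve_fit(power_law, volume, deaths, p0=(0.1, 0.5))
    perr = np.sqrt(np.diag(cov))                   # 1-sigma parameter errors
    print(params, perr)                            # basis for an error-bar threshold curve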
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, S; Chao, C; Columbia University, NY, NY
2014-06-01
Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to the positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cmx37cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which the propagated error was estimated for positioning errors from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on the beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagated error increases almost linearly with respect to the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that the positioning error is not handled by the current commercial calibration algorithm of MapCheck. Particularly, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on the beam type and the conventional beam is less affected by the positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect potential calibration errors due to inaccurate positioning. This work was partially supported by DOD Grant No. W81XWH1010862.
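The reported propagation behavior can be illustrated with a toy model in which the local multiplicative sensitivity bias compounds across a chain of detectors during iterative calibration (a sketch of the stated error model, not the vendor's algorithm; the chain length is hypothetical):

    import numpy as np

    def edge_error(local_bias, n_detectors):
        # A local multiplicative sensitivity bias compounds across a chain of
        # detectors, so the edge error grows roughly linearly for small biases
        return (1.0 + local_bias) ** n_detectors - 1.0

    # Local biases reported for a 1 cm positioning error; 12-step chain is hypothetical
    for beam, local in [("conventional", 0.0039), ("FFF", 0.0524)]:
        print(beam, f"{100 * edge_error(local, 12):.1f}% at the array edge")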
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic ray induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma induced noise in CCDs, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation resistant optical fibers. Energy deposition and dosimetry is considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the security and efficacy of thermal therapies, and these points would lead to a broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies have been proposed, with magnetic resonance imaging (MRI) identified as the only one achieving acceptable temperature resolution for hyperthermia purposes. However, MRI's intrinsic characteristics (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU). Among the different BSU features, temporal echo-shifts have received major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods), and its application to temperature estimation in a three-layer phantom is also not reported in the literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder, in the amount of 2% of the water weight, to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis-function neural networks (RBFNN) with structure optimized by a multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0 W/cm(2). Models were trained and selected to estimate temperature at only four intensities; then, during the validation phase, the best-fitted models were analyzed on data collected at all eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were obtained. The majority of them present an average maximum absolute error (MAE) below 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C across all 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) specified for hyperthermia was met independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. Like the majority of the preferable estimator models, the best one presents 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models also present reduced computational complexity and hence the possibility of being applied in real time. A non-invasive temperature estimation model, based on a soft-computing technique, was thus proposed for a three-layered phantom.
The best-achieved estimator models presented an appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the necessity of a priori knowledge of the temperature behaviour. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
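A minimal radial-basis-function network of the kind used in this study can be written in a few lines (centres picked by k-means, output weights by linear least squares; the MOGA structure search is beyond this sketch, and the data are synthetic):

    import numpy as np
    from sklearn.cluster import KMeans

    def rbf_design(X, centers, width):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(200, 6))   # e.g. cumulative echo-shifts plus past temperatures
    y = X[:, :3].sum(axis=1) + 0.01 * rng.normal(size=200)   # synthetic target

    centers = KMeans(n_clusters=11, n_init=10, random_state=0).fit(X).cluster_centers_
    Phi = rbf_design(X, centers, width=0.5)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output-layer weights, least squares
    print(np.max(np.abs(Phi @ w - y)))            # cf. the 0.5 degrees C MAE criterion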
NASA Astrophysics Data System (ADS)
Xuan, Yue
Background. Soft materials such as polymers and soft tissues have diverse applications in bioengineering, medical care, and industry. Quantitative mechanical characterization of soft materials at multiple scales is required to ensure that appropriate mechanical properties are present to support normal material function. The indentation test has been widely used to characterize soft materials; however, the measurement of the in situ contact area is always difficult. Method of Approach. A transparent indenter method was introduced to characterize the nonlinear behaviors of soft materials under large deformation. This approach made direct measurement of the contact area and local deformation possible. A microscope was used to capture the contact area evolution as well as the surface deformation. Based on this transparent indenter method, a novel transparent indentation measurement system has been built and multiple soft materials, including polymers and pericardial tissue, have been characterized. Seven different indenters have been used to study the strain distribution on the contact surface, inner layer and vertical layer. Finite element models have been built to simulate the hyperelastic and anisotropic material behaviors. Proper material constants were obtained by fitting the experimental results. Results. Homogeneous and anisotropic silicone rubber and porcine pericardial tissue have been examined. Contact area and local deformation were measured by real-time imaging of the contact interface. The experimental results were compared with the predictions from the Hertzian equations. The accurate measurement of contact area results in a more reliable Young's modulus, which is critical for soft materials. For the fiber-reinforced anisotropic silicone rubber, the projected contact area under a hemispherical indenter exhibited an elliptical shape. The local surface deformation under the indenter was mapped using a digital image correlation program. A punch test has been applied to thin films of silicone rubber and porcine pericardial tissue and the results were analyzed using the same method. Conclusions. The transparent indenter testing system can effectively reduce the error in material property measurement by directly measuring the contact radii. The contact shape can provide valuable information on the anisotropic properties of the material. Local surface deformation, including the contact surface, inner layer and vertical plane, can be accurately tracked and mapped to study the strain distribution. The potential use of the transparent indenter measurement system to investigate biological materials and biomaterials was verified. The experimental data, including the real-time contact area, combined with finite element simulation would be a powerful tool to study the mechanical properties of soft materials and their relation to microstructure, which has potential in pathology studies such as tissue repair and surgical planning. Key words: transparent indenter, large deformation, soft material, anisotropic.
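The comparison with Hertzian predictions rests on the classical spherical-contact relation a^3 = 3FR/(4E*); with the contact radius measured directly through the transparent indenter, the reduced modulus follows immediately. A sketch of that back-calculation (hypothetical numbers):

    import numpy as np

    def reduced_modulus(force, sphere_radius, contact_radius):
        # Hertz spherical contact: a^3 = 3 F R / (4 E*)  =>  E* = 3 F R / (4 a^3)
        return 3.0 * force * sphere_radius / (4.0 * contact_radius ** 3)

    def youngs_modulus(E_star, nu_sample, E_indenter=np.inf, nu_indenter=0.0):
        # 1/E* = (1 - nu_s^2)/E_s + (1 - nu_i^2)/E_i ; rigid indenter by default
        inv_sample = 1.0 / E_star - (1.0 - nu_indenter ** 2) / E_indenter
        return (1.0 - nu_sample ** 2) / inv_sample

    E_star = reduced_modulus(force=0.05, sphere_radius=5e-3, contact_radius=1.2e-3)
    print(youngs_modulus(E_star, nu_sample=0.49))   # Pa; near-incompressible soft solid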
Multiple description distributed image coding with side information for mobile wireless transmission
NASA Astrophysics Data System (ADS)
Wu, Min; Song, Daewon; Chen, Chang Wen
2005-03-01
Multiple description coding (MDC) is a source coding technique that involves coding the source information into multiple descriptions and then transmitting them over different channels in a packet network or an error-prone wireless environment, to achieve graceful degradation if parts of the descriptions are lost at the receiver. In this paper, we propose a multiple description distributed wavelet zero tree image coding system for mobile wireless transmission. We provide two innovations to achieve excellent error resilience. First, when MDC is applied to wavelet subband based image coding, it is possible to introduce correlation between the descriptions in each subband. We consider using such correlation, as well as a potentially error-corrupted description, as side information in the decoding, formulating the MDC decoding as a Wyner-Ziv decoding problem. If only part of the descriptions is lost, their correlation information is still available, and the proposed Wyner-Ziv decoder can recover the lost description by using the correlation information and the error-corrupted description as side information. Secondly, in each description, single-bitstream wavelet zero tree coding is very vulnerable to channel errors: the first bit error may cause the decoder to discard all subsequent bits, whether or not the subsequent bits are correctly received. Therefore, we integrate multiple description scalar quantization (MDSQ) with the multiple wavelet tree image coding method to reduce error propagation. We first group wavelet coefficients into multiple trees according to parent-child relationships and then code them separately by the SPIHT algorithm to form multiple bitstreams. Such decomposition is able to reduce error propagation and therefore improve the error-correcting capability of the Wyner-Ziv decoder. Experimental results show that the proposed scheme not only exhibits excellent error resilience but also demonstrates graceful degradation over the packet loss rate.
Verifying Stability of Dynamic Soft-Computing Systems
NASA Technical Reports Server (NTRS)
Wen, Wu; Napolitano, Marcello; Callahan, John
1997-01-01
Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness in building intelligent systems that are flexible and robust. Although recent research has shown that a certain class of neuro-fuzzy controllers can be proven bounded and stable, those results are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt a trial-and-error approach to system validation or resort to exhaustive testing using prototypes. In this paper, we describe our ongoing research towards establishing the necessary theoretical foundations as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic non-linear system control theory and recent results of its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'.
Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in depth analysis of both absolute and relative error was performed in a subsample of individuals with replicated digitization by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, as those that are becoming increasingly common in the 'era of big data'. PMID:29787586
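Inter-operator error of the kind quantified in these studies can be approximated with an ordinary Procrustes superimposition of replicate digitizations (SciPy assumed; a two-operator toy with ten hypothetical 3D landmarks):

    import numpy as np
    from scipy.spatial import procrustes

    rng = np.random.default_rng(0)
    truth = rng.normal(size=(10, 3))                        # 10 hypothetical 3D landmarks
    op1 = truth + rng.normal(scale=0.01, size=truth.shape)  # operator 1 digitization
    op2 = truth + rng.normal(scale=0.03, size=truth.shape)  # operator 2, noisier

    # procrustes removes location, scale and rotation; the remaining disparity
    # is the residual shape difference attributable to operator error
    _, _, disparity = procrustes(op1, op2)
    print(f"Procrustes disparity between operators: {disparity:.4f}")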
Li, Qiang; Zhou, Xu; Wang, Yue; Qian, Jin; Zhang, Qingguo
2018-05-15
Although facial paralysis is a fundamental feature of hemifacial microsomia, the frequency and distribution of nerve abnormalities in patients with hemifacial microsomia remain unclear. In this study, the authors classified 1125 cases with microtia (including 339 patients with hemifacial microsomia and 786 with isolated microtia) according to the Orbital Distortion, Mandibular Hypoplasia, Ear Anomaly, Nerve Involvement, Soft Tissue Deficiency (OMENS) scheme. The authors then performed an independent analysis to describe the distribution of nerve abnormalities and to reveal possible relationships between facial paralysis and the other 4 fundamental features in the OMENS system. Results revealed that facial paralysis is present in 23.9% of patients with hemifacial microsomia. The frontal-temporal branch is the most vulnerable branch in the total 1125 cases with microtia. The occurrence of facial paralysis is positively correlated with mandibular hypoplasia and soft tissue deficiency, both in the total 1125 cases and in the hemifacial microsomia patients. Orbital asymmetry is related to facial paralysis only in the total microtia cases, and ear deformity is related to facial paralysis only in hemifacial microsomia patients. No significant association was found between the severity of facial paralysis and any of the other 4 OMENS anomalies. These data suggest that the occurrence of facial paralysis may be associated with other OMENS abnormalities. The presence of serious mandibular hypoplasia or soft tissue deficiency should alert the clinician to a high possibility, but not a high severity, of facial paralysis.
Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters
2015-07-01
was too low to measure was on the order of 5 hours. Because the research team did not have access to the site between 1700 and 0500 hours, the ... Basic for Applications (VBA). The objective function was the root mean square (RMS) error between the modeled and measured heating load and the modeled ...
Update on parts SEE susceptibility from heavy ions. [Single Event Effects]
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.
1991-01-01
JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.
Temperature-based estimation of global solar radiation using soft computing methodologies
NASA Astrophysics Data System (ADS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak
2016-07-01
Precise knowledge of solar radiation is essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potential of soft computing techniques is evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation between three methodologies of adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly) is performed. Five combinations of Tmax, Tmin, and Tavg are served as inputs to develop ANFIS, SVR-rbf, and SVR-poly models. The attained results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the highest accuracies are achieved by models (5) using Tmax - Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. The survey results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
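The three statistics used to rank the models are straightforward to reproduce (a sketch with hypothetical observed and estimated DHGSR values):

    import numpy as np

    def mabe(obs, est):   # mean absolute bias error
        return np.mean(np.abs(est - obs))

    def rmse(obs, est):   # root mean square error
        return np.sqrt(np.mean((est - obs) ** 2))

    def corr(obs, est):   # correlation coefficient
        return np.corrcoef(obs, est)[0, 1]

    obs = np.array([18.2, 21.4, 25.0, 27.3])   # hypothetical DHGSR, MJ/m^2
    est = np.array([19.0, 20.8, 24.1, 28.0])
    print(mabe(obs, est), rmse(obs, est), corr(obs, est))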
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optics communication systems, fixed forward error correction (FEC) and constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with soft-decision decoding (SDD) algorithms, rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique, and demonstrate its flexibility and good performance, exhibiting no error floor at BER down to 10^-15 over the entire code rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber optical communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered as one of the promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400Gb/s optical communication systems and beyond.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement Maximum Likelihood Decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research into the trellis structure of block codes, by contrast, remained inactive for a long time. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all of the linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence, that they were not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computational complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder.
The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
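The Viterbi algorithm at the heart of these chapters can be stated compactly for any trellis: per trellis section, keep for each state only the best metric over all paths reaching it. A minimal soft-decision sketch over a hypothetical two-state, rate-1/2 trellis with a correlation metric (an illustrative encoder, not a code from the text):

    import numpy as np

    # Toy 2-state trellis: state = last input bit; each branch emits 2 BPSK symbols.
    # branch[(state, input)] -> (next_state, output symbols)
    branch = {(s, u): (u, np.array([1 - 2 * u, 1 - 2 * (u ^ s)]))
              for s in (0, 1) for u in (0, 1)}

    def viterbi(received):                      # received: soft values, shape (T, 2)
        metrics = {0: 0.0, 1: -np.inf}          # start in state 0
        paths = {0: [], 1: []}
        for r in received:
            new_m, new_p = {0: -np.inf, 1: -np.inf}, {0: [], 1: []}
            for s in (0, 1):
                for u in (0, 1):
                    ns, out = branch[(s, u)]
                    m = metrics[s] + float(out @ r)   # correlation (soft-decision) metric
                    if m > new_m[ns]:
                        new_m[ns], new_p[ns] = m, paths[s] + [u]
            metrics, paths = new_m, new_p
        return paths[max(metrics, key=metrics.get)]

    tx_bits, state, symbols = [1, 0, 1, 1], 0, []
    for u in tx_bits:                           # encode, then add channel noise
        state, out = branch[(state, u)]
        symbols.append(out)
    rx = np.array(symbols) + 0.4 * np.random.default_rng(2).normal(size=(4, 2))
    print(viterbi(rx))                          # should usually recover tx_bits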
Technology utilization to prevent medication errors.
Forni, Allison; Chu, Hanh T; Fanikos, John
2010-01-01
Medication errors have been increasingly recognized as a major cause of iatrogenic illness, and system-wide improvements have been the focus of prevention efforts. Critically ill patients are particularly vulnerable to injury resulting from medication errors because of the severity of illness, the need for high-risk medications with a narrow therapeutic index, and the frequent use of intravenous infusions. Health information technology has been identified as a method to reduce medication errors as well as to improve the efficiency and quality of care; however, few studies regarding the impact of health information technology have focused on patients in the intensive care unit. Computerized physician order entry and clinical decision support systems can play a crucial role in decreasing errors in the ordering stage of the medication use process by improving the completeness and legibility of orders, alerting physicians to medication allergies and drug interactions, and providing a means for standardization of practice. Electronic surveillance, reminders and alerts identify patients susceptible to an adverse event, communicate critical changes in a patient's condition, and facilitate timely and appropriate treatment. Bar code technology, intravenous infusion safety systems, and electronic medication administration records can target prevention of errors in medication dispensing and administration where other technologies would not be able to intercept a preventable adverse event. Systems integration and compliance are vital components in the implementation of health information technology and the achievement of a safe medication use process.
A Tree Locality-Sensitive Hash for Secure Software Testing
2017-09-14
errors, or to look for vulnerabilities that could allow a nefarious actor to use our software against us. Ultimately, all testing is designed to find ... and an equivalent number of feasible paths discovered by Klee. 1.5 Summary This document presents the Tree Locality-Sensitive Hash (TLSH), a locality-sensitive ... performing two groups of tests that verify the accuracy and usefulness of TLSH. Chapter 5 summarizes the contents of the dissertation and lists avenues
Resource Public Key Infrastructure Extension
2012-01-01
tests for checking compliance with the RFC 3779 extensions that are used in the RPKI. These tests also were used to identify an error in the OPENSSL ... rsync, OpenSSL, Cryptlib, and MySQL/ODBC. We assume that the adversaries can exploit any publicly known vulnerability in this software. • Server ... NULL, set FLAG_NOCHAIN in Ctemp, defer verification. T = P Use OpenSSL to verify certificate chain S using trust anchor T, checking signature and
The role of learning-related dopamine signals in addiction vulnerability.
Huys, Quentin J M; Tobler, Philippe N; Hasler, Gregor; Flagel, Shelly B
2014-01-01
Dopaminergic signals play a mathematically precise role in reward-related learning, and variations in dopaminergic signaling have been implicated in vulnerability to addiction. Here, we provide a detailed overview of the relationship between theoretical, mathematical, and experimental accounts of phasic dopamine signaling, with implications for the role of learning-related dopamine signaling in addiction and related disorders. We describe the theoretical and behavioral characteristics of model-free learning based on errors in the prediction of reward, including step-by-step explanations of the underlying equations. We then use recent insights from an animal model that highlights individual variation in learning during a Pavlovian conditioning paradigm to describe overlapping aspects of incentive salience attribution and model-free learning. We argue that this provides a computationally coherent account of some features of addiction. © 2014 Elsevier B.V. All rights reserved.
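The prediction-error equations the review walks through reduce, in their simplest form, to the temporal-difference update delta = r + gamma*V(s') - V(s), with delta playing the role of the phasic dopamine signal. A minimal sketch on a hypothetical two-state Pavlovian task:

    import numpy as np

    alpha, gamma = 0.1, 0.95
    V = np.zeros(2)   # V[0]: cue state, V[1]: pre-reward state

    for _ in range(200):
        # one trial: cue -> pre-reward (no reward) -> terminal (reward = 1)
        for s, s_next, r in [(0, 1, 0.0), (1, None, 1.0)]:
            v_next = 0.0 if s_next is None else V[s_next]
            delta = r + gamma * v_next - V[s]   # reward prediction error ("phasic dopamine")
            V[s] += alpha * delta

    print(V)   # value has propagated back from the reward to the predictive cue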
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise model (CBGN) to analyze the impact of correlations among the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain are attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
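The benefit of exploiting quadrature correlation enters through the decoder's log-likelihood ratios: under the CBGN model they can be computed from the full bivariate density instead of the AWGN product form. A sketch for QPSK (SciPy assumed; covariance and received point hypothetical):

    import numpy as np
    from scipy.stats import multivariate_normal

    symbols = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) / np.sqrt(2)  # QPSK points
    bits = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])                      # bit labels

    cov = np.array([[0.20, 0.12], [0.12, 0.20]])   # correlated quadratures: elliptic noise
    r = np.array([0.55, 0.31])                     # received point

    pdf = np.array([multivariate_normal.pdf(r, mean=s, cov=cov) for s in symbols])
    for k in range(2):
        llr = np.log(pdf[bits[:, k] == 0].sum() / pdf[bits[:, k] == 1].sum())
        print(f"LLR of bit {k}: {llr:+.2f}")       # feeds the soft-decision decoder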
Biomineralization of a Self-assembled, Soft-Matrix Precursor: Enamel
NASA Astrophysics Data System (ADS)
Snead, Malcolm L.
2015-04-01
Enamel is the bioceramic covering of teeth, a composite tissue composed of hierarchically organized hydroxyapatite crystallites fabricated by cells under physiologic pH and temperature. Enamel's material properties resist wear and fracture over a lifetime of chewing. Understanding the cellular and molecular mechanisms of enamel formation may allow a biology-inspired approach to material fabrication based on self-assembling proteins that control form and function. A genetic understanding of human disease offers insight from nature's errors by exposing critical fabrication events, which can be validated experimentally and duplicated in mice using genetic engineering to phenocopy the human disease so that it can be explored in detail. This approach led to an assessment of amelogenin protein self-assembly that, when altered, disrupts fabrication of the soft enamel protein matrix. A misassembled protein matrix precursor results in loss of the cell-to-matrix contacts essential to fabrication and mineralization.
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code achieves more than 1 dB of gain compared with the conventional four-ary modulation code.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors (a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches) to construct an efficient JPEG soft decoding algorithm. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
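To make the smoothness prior concrete, here is a minimal sketch, assuming a 4-neighbour patch topology and a made-up Gaussian weight kernel (the paper's actual graph construction may differ), of building a random-walk graph Laplacian for a small patch and evaluating the quadratic smoothness measure:

```python
import numpy as np

def random_walk_laplacian(patch, h=10.0):
    # Build a 4-neighbour similarity graph over patch pixels with Gaussian
    # edge weights, then form the random-walk Laplacian L_rw = I - D^{-1} W.
    n_rows, n_cols = patch.shape
    n = patch.size
    W = np.zeros((n, n))
    idx = lambda r, c: r * n_cols + c
    for r in range(n_rows):
        for c in range(n_cols):
            for dr, dc in ((0, 1), (1, 0)):
                rr, cc = r + dr, c + dc
                if rr < n_rows and cc < n_cols:
                    w = np.exp(-((patch[r, c] - patch[rr, cc]) ** 2) / h**2)
                    W[idx(r, c), idx(rr, cc)] = W[idx(rr, cc), idx(r, c)] = w
    D = np.diag(W.sum(axis=1))
    return np.eye(n) - np.linalg.inv(D) @ W

# Piecewise-smooth toy patch: a sharp intensity edge between two flat regions
patch = np.array([[0., 0., 100.], [0., 0., 100.], [0., 0., 100.]])
L = random_walk_laplacian(patch)
x = patch.ravel()
print(x @ L @ x)  # smoothness measure: small for piecewise-smooth signals
```

Because edge weights across the intensity discontinuity are nearly zero, the quadratic form stays small for piecewise-smooth signals, which is the behavior the prior rewards.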
Disentangling AGN and Star Formation in Soft X-Rays
NASA Technical Reports Server (NTRS)
LaMassa, Stephanie M.; Heckman, T. M.; Ptak, A.
2012-01-01
We have explored the interplay of star formation and active galactic nucleus (AGN) activity in soft X-rays (0.5-2 keV) in two samples of Seyfert 2 galaxies (Sy2s). Using a combination of low-resolution CCD spectra from Chandra and XMM-Newton, we modeled the soft emission of 34 Sy2s using power-law and thermal models. For the 11 sources with high signal-to-noise Chandra imaging of the diffuse host galaxy emission, we estimate the luminosity due to star formation by removing the AGN and fitting the residual emission. The AGN and star formation contributions to the soft X-ray luminosity (i.e., $L_{x,\rm AGN}$ and $L_{x,\rm SF}$) for the remaining 24 Sy2s were estimated from the power-law and thermal luminosities derived from spectral fitting. These luminosities were scaled based on a template derived from XSINGS analysis of normal star-forming galaxies. To account for errors in the luminosities derived from spectral fitting and the spread in the scaling factor, we estimated $L_{x,\rm AGN}$ and $L_{x,\rm SF}$ from Monte Carlo simulations. These simulated luminosities agree with $L_{x,\rm AGN}$ and $L_{x,\rm SF}$ derived from Chandra imaging analysis within a 3σ confidence level. Using the infrared [Ne II] 12.8 μm and [O IV] 26 μm lines as proxies of star formation and AGN activity, respectively, we independently disentangle the contributions of these two processes to the total soft X-ray emission. This decomposition generally agrees with $L_{x,\rm SF}$ and $L_{x,\rm AGN}$ at the 3σ level. In the absence of resolvable nuclear emission, our decomposition method provides a reasonable estimate of emission due to star formation in galaxies hosting type 2 AGNs.
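The Monte Carlo step described above amounts to propagating the spectral-fit errors and the scaling-factor spread into a distribution for the star-formation luminosity. A minimal sketch, with entirely made-up luminosities and spreads (not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative Monte Carlo propagation: draw the thermal luminosity with its
# fit error and multiply by a scaling factor with its template spread.
n = 100_000
L_thermal = rng.normal(1.0e41, 0.2e41, n)   # erg/s, from a spectral fit
scale = rng.normal(1.0, 0.3, n)             # star-formation scaling template
L_x_SF = L_thermal * scale

lo, med, hi = np.percentile(L_x_SF, [16, 50, 84])
print(f"L_x,SF = {med:.2e} (+{hi - med:.2e} / -{med - lo:.2e}) erg/s")
```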
Pokhai, Gabriel G; Oliver, Michele L; Gordon, Karen D
2009-09-01
Determination of the biomechanical properties of soft tissues such as tendons and ligaments is dependent on the accurate measurement of their cross-sectional area (CSA). Measurement methods, which involve contact with the specimen, are problematic because soft tissues are easily deformed. Noncontact measurement methods are preferable in this regard, but may experience difficulty in dealing with the complex cross-sectional shapes and glistening surfaces seen in soft tissues. Additionally, existing CSA measurement systems are separated from the materials testing machine, resulting in the inability to measure CSA during testing. Furthermore, CSA measurements are usually made in a different orientation, and with a different preload, prior to testing. To overcome these problems, a noncontact laser reflectance system (LRS) was developed. Designed to fit in an Instron 8872 servohydraulic test machine, the system measures CSA by orbiting a laser transducer in a circular path around a soft tissue specimen held by tissue clamps. CSA measurements can be conducted before and during tensile testing. The system was validated using machined metallic specimens of various shapes and sizes, as well as different sizes of bovine tendons. The metallic specimens could be measured to within 4% accuracy, and the tendons to within an average error of 4.3%. Statistical analyses showed no significant differences between the measurements of the LRS and those of the casting method, an established measurement technique. The LRS was successfully used to measure the changing CSA of bovine tendons during uniaxial tensile testing. The LRS developed in this work represents a simple, quick, and accurate way of reconstructing complex cross-sectional profiles and calculating cross-sectional areas. In addition, the LRS represents the first system capable of automatically measuring changing CSA of soft tissues during tensile testing, facilitating the calculation of more accurate biomechanical properties.
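Conceptually, an orbiting distance sensor yields a polar profile of the specimen surface, from which the cross-sectional area follows directly. A minimal sketch under assumed geometry (a hypothetical angular sampling and a synthetic, slightly elliptical specimen; not the LRS's actual processing pipeline):

```python
import numpy as np

def csa_from_profile(thetas, radii):
    # Convert the polar surface profile to Cartesian boundary points and
    # compute the enclosed area with the shoelace formula.
    x = radii * np.cos(thetas)
    y = radii * np.sin(thetas)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

thetas = np.linspace(0, 2 * np.pi, 360, endpoint=False)
radii = 3.0 + 0.5 * np.cos(2 * thetas)  # slightly elliptical specimen (mm)
print(f"CSA = {csa_from_profile(thetas, radii):.2f} mm^2")
```

Because the boundary is reconstructed point by point, the same computation handles the complex, non-convex cross-sections mentioned above without any shape assumption.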
Achieving High Reliability in Histology: An Improvement Series to Reduce Errors.
Heher, Yael K; Chen, Yigu; Pyatibrat, Sergey; Yoon, Edward; Goldsmith, Jeffrey D; Sands, Kenneth E
2016-11-01
Despite sweeping medical advances in other fields, histology processes have by and large remained constant over the past 175 years. Patient label identification errors are a known liability in the laboratory and can be devastating, resulting in incorrect diagnoses and inappropriate treatment. The objective of this study was to identify vulnerable steps in the histology workflow and reduce the frequency of labeling errors (LEs). In this 36-month study period, a numerical step key (SK) was developed to capture LEs. The two most prevalent root causes were targeted for Lean workflow redesign: manual slide printing and microtome cutting. The numbers and rates of LEs before and after interventions were compared to evaluate the effectiveness of interventions. Following the adoption of a barcode-enabled laboratory information system, the error rate decreased from a baseline of 1.03% (794 errors in 76,958 cases) to 0.28% (107 errors in 37,880 cases). After the implementation of an innovative ice tool box, allowing single-piece workflow for histology microtome cutting, the rate came down to 0.22% (119 errors in 54,342 cases). The study pointed out the importance of tracking and understanding LEs by using a simple numerical SK and quantified the effectiveness of two customized Lean interventions. Overall, a 78.64% reduction in LEs and a 35.28% reduction in time spent on rework have been observed since the study began. © American Society for Clinical Pathology, 2016. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com
Torres-Ruiz, José M; Sperry, John S; Fernández, José E
2012-10-01
Xylem hydraulic conductivity (K) is typically defined as K = F/(P/L), where F is the flow rate through a xylem segment associated with an applied pressure gradient (P/L) along the segment. This definition assumes a linear flow-pressure relationship with a flow intercept (F(0)) of zero. While linearity is typically the case, there is often a non-zero F(0) that persists in the absence of leaks or evaporation and is caused by passive uptake of water by the sample. In this study, we determined the consequences of failing to account for non-zero F(0) for both K measurements and the use of K to estimate the vulnerability to xylem cavitation. We generated vulnerability curves for olive root samples (Olea europaea) by the centrifuge technique, measuring a maximally accurate reference K(ref) as the slope of a four-point F vs P/L relationship. The K(ref) was compared with three more rapid ways of estimating K. When F(0) was assumed to be zero, K was significantly under-estimated (average of -81.4 ± 4.7%), especially when K(ref) was low. Vulnerability curves derived from these under-estimated K values overestimated the vulnerability to cavitation. When non-zero F(0) was taken into account, whether it was measured or estimated, more accurate K values (relative to K(ref)) were obtained, and vulnerability curves indicated greater resistance to cavitation. We recommend accounting for non-zero F(0) for obtaining accurate estimates of K and cavitation resistance in hydraulic studies. Copyright © Physiologia Plantarum 2012.
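The slope-based measurement described above can be reproduced in a few lines: fit F = K·(P/L) + F0 over the four points so that a non-zero F0 does not contaminate K. The numbers below are illustrative, not data from the study:

```python
import numpy as np

# Four-point flow vs pressure-gradient measurement with a non-zero intercept
grad = np.array([0.00, 0.02, 0.04, 0.06])   # P/L (pressure gradient)
flow = np.array([0.8, 2.9, 5.1, 7.0])       # F, with passive uptake F0 != 0

K_ref, F0 = np.polyfit(grad, flow, 1)       # slope = K_ref, intercept = F0
K_naive = flow[-1] / grad[-1]               # single-point estimate, F0 ignored
print(K_ref, F0, K_naive)  # the single-point estimate is biased when F0 != 0
```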
Helmholtz and parabolic equation solutions to a benchmark problem in ocean acoustics.
Larsson, Elisabeth; Abrahamsson, Leif
2003-05-01
The Helmholtz equation (HE) describes wave propagation in applications such as acoustics and electromagnetics. For realistic problems, solving the HE is often too expensive. Instead, approximations like the parabolic wave equation (PE) are used. For low-frequency shallow-water environments, one persistent problem is to assess the accuracy of the PE model. In this work, a recently developed HE solver that can handle a smoothly varying bathymetry, variable material properties, and layered materials, is used for an investigation of the errors in PE solutions. In the HE solver, a preconditioned Krylov subspace method is applied to the discretized equations. The preconditioner combines domain decomposition and fast transform techniques. A benchmark problem with upslope-downslope propagation over a penetrable lossy seamount is solved. The numerical experiments show that, for the same bathymetry, a soft and slow bottom gives very similar HE and PE solutions, whereas the PE model is far from accurate for a hard and fast bottom. A first attempt to estimate the error is made by computing the relative deviation from the energy balance for the PE solution. This measure gives an indication of the magnitude of the error, but cannot be used as a strict error bound.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk; Tse, Teddy K.H.; Ho, Cola L.M.
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
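The comparison metric above is straightforward to compute. A minimal sketch of MAPE against Monte Carlo reference doses (the dose values are placeholders, not measurements from the study):

```python
import numpy as np

def mape(dose_algo, dose_mc):
    # Mean absolute percentage error of an algorithm's doses vs MC reference
    return 100.0 * np.mean(np.abs((dose_algo - dose_mc) / dose_mc))

d_mc  = np.array([200.0, 180.0, 150.0, 95.0])   # cGy at reference points
d_aaa = np.array([204.0, 172.0, 148.0, 99.0])   # hypothetical AAA doses
print(f"MAPE = {mape(d_aaa, d_mc):.1f}%")
```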
Karuppanan, Udayakumar; Unni, Sujatha Narayanan; Angarai, Ganesan R.
2017-01-01
Abstract. Assessment of mechanical properties of soft matter is a challenging task in a purely noninvasive and noncontact environment. As tissue mechanical properties play a vital role in determining tissue health status, such noninvasive methods offer great potential in framing large-scale medical screening strategies. The digital speckle pattern interferometry (DSPI)-based image capture and analysis system described here is capable of extracting the deformation information from a single acquired fringe pattern. Such a method of analysis is required given the highly dynamic nature of speckle patterns derived from soft tissues under mechanical compression. Soft phantoms mimicking breast tissue optical and mechanical properties were fabricated and tested in the DSPI out-of-plane configuration setup. A Hilbert transform (HT)-based image analysis algorithm was developed to extract the phase, and the corresponding deformation of the sample, from a single acquired fringe pattern. The experimental fringe contours were found to correlate with numerically simulated deformation patterns of the sample using Abaqus finite element analysis software. The deformation extracted from the experimental fringe pattern using the HT-based algorithm is compared with the deformation value obtained using numerical simulation under similar loading conditions, and the results are found to correlate with an average error of 10%. The proposed method is applied to breast phantoms fabricated with an included subsurface anomaly mimicking cancerous tissue, and the results are analyzed. PMID:28180134
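The core of single-fringe phase extraction is forming the analytic signal via the Hilbert transform and unwrapping its angle. A 1-D sketch under assumed conditions (a synthetic fringe with a known carrier and deformation phase; real DSPI fringes additionally need background removal and 2-D unwrapping):

```python
import numpy as np
from scipy.signal import hilbert

# Synthetic fringe: a 40-cycle carrier modulated by a deformation phase
x = np.linspace(0, 1, 1024)
phase_true = 2 * np.pi * (3 * x**2)            # deformation-induced phase
fringe = np.cos(2 * np.pi * 40 * x + phase_true)

analytic = hilbert(fringe)                     # fringe + i * H{fringe}
phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * 40 * x
print(np.max(np.abs(phase - phase_true)))      # residual error (largest at edges)
```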
Technical Note: Deep learning based MRAC using rapid ultra-short echo time imaging.
Jang, Hyungseok; Liu, Fang; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B
2018-05-15
In this study, we explore the feasibility of a novel framework for MR-based attenuation correction for PET/MR imaging based on deep learning via convolutional neural networks, which enables fully automated and robust estimation of a pseudo CT image based on ultrashort echo time (UTE), fat, and water images obtained by a rapid MR acquisition. MR images for MRAC are acquired using dual echo ramped hybrid encoding (dRHE), where both UTE and out-of-phase echo images are obtained within a short single acquisition (35 sec). Tissue labeling of air, soft tissue, and bone in the UTE image is accomplished via a deep learning network that was pre-trained with T1-weighted MR images. UTE images are used as input to the network, which was trained using labels derived from co-registered CT images. The tissue labels estimated by deep learning are refined by a conditional random field based correction. The soft tissue labels are further separated into fat and water components using the two-point Dixon method. The estimated bone, air, fat, and water images are then assigned appropriate Hounsfield units, resulting in a pseudo CT image for PET attenuation correction. To evaluate the proposed MRAC method, PET/MR imaging of the head was performed on 8 human subjects, where Dice similarity coefficients of the estimated tissue labels and relative PET errors were evaluated through comparison to a registered CT image. Dice coefficients for air (within the head), soft tissue, and bone labels were 0.76±0.03, 0.96±0.006, and 0.88±0.01. In PET quantification, the proposed MRAC method produced relative PET errors less than 1% within most brain regions. The proposed MRAC method utilizing deep learning with transfer learning and an efficient dRHE acquisition enables reliable PET quantification with accurate and rapid pseudo CT generation. This article is protected by copyright. All rights reserved.
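The evaluation metric quoted above, the Dice similarity coefficient between an estimated tissue-label mask and a CT-derived reference mask, is a two-line computation. A sketch with toy masks (not study data):

```python
import numpy as np

def dice(a, b):
    # Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

est = np.array([[1, 1, 0], [1, 0, 0], [0, 0, 0]])  # estimated bone mask
ref = np.array([[1, 1, 0], [0, 0, 0], [0, 0, 0]])  # CT-derived reference
print(f"Dice = {dice(est, ref):.2f}")
```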
The development and role of megavoltage cone beam computerized tomography in radiation oncology
NASA Astrophysics Data System (ADS)
Morin, Olivier
External beam radiation therapy now has the ability to deliver doses that conform tightly to a tumor volume. The steep dose gradients planned in these treatments make it increasingly important to reproduce the patient position and anatomy at each treatment fraction. For this reason, considerable research now focuses on in-room three-dimensional imaging. This thesis describes the first clinical megavoltage cone beam computed tomography (MVCBCT) system, which utilizes a conventional linear accelerator equipped with an amorphous silicon flat panel detector. The document covers the system development and investigation of its clinical applications over the last 4-5 years. The physical performance of the system was evaluated and optimized for soft-tissue contrast resolution, leading to recommendations of imaging protocols to use for specific clinical applications and body sites. MVCBCT images can resolve differences of 5% in electron density for a mean dose of 9 cGy. Hence, the image quality of this system is sufficient to differentiate some soft-tissue structures. The absolute positioning accuracy with MVCBCT is better than 1 mm. The accuracy of isodose lines calculated using MVCBCT images of head and neck patients is within 3% and 3 mm. The system shows excellent stability in image quality, CT# calibration, radiation exposure, and absolute positioning over a period of 8 months. A procedure for MVCBCT quality assurance was developed. In our clinic, MVCBCT has been used to detect nonrigid spinal cord distortions, to position a patient with a paraspinous tumor close to metallic hardware, to position prostate cancer patients using gold markers or soft-tissue landmarks, to monitor head and neck anatomical changes and their dosimetric consequences, and to complement the conventional CT for treatment planning in the presence of metallic implants. MVCBCT imaging is changing the clinical practice of our department by increasingly revealing patient-specific errors. New verification protocols are being developed to minimize those errors, thus moving the practice of radiation therapy one step closer to personalized medicine.
Leynes, Andrew P; Yang, Jaewon; Wiesinger, Florian; Kaushik, Sandeep S; Shanbhag, Dattesh D; Seo, Youngho; Hope, Thomas A; Larson, Peder E Z
2018-05-01
Accurate quantification of uptake on PET images depends on accurate attenuation correction in reconstruction. Current MR-based attenuation correction methods for body PET use a fat and water map derived from a 2-echo Dixon MRI sequence in which bone is neglected. Ultrashort-echo-time or zero-echo-time (ZTE) pulse sequences can capture bone information. We propose the use of patient-specific multiparametric MRI consisting of Dixon MRI and proton-density-weighted ZTE MRI to directly synthesize pseudo-CT images with a deep learning model: we call this method ZTE and Dixon deep pseudo-CT (ZeDD CT). Methods: Twenty-six patients were scanned using an integrated 3-T time-of-flight PET/MRI system. Helical CT images of the patients were acquired separately. A deep convolutional neural network was trained to transform ZTE and Dixon MR images into pseudo-CT images. Ten patients were used for model training, and 16 patients were used for evaluation. Bone and soft-tissue lesions were identified, and the SUV max was measured. The root-mean-squared error (RMSE) was used to compare the MR-based attenuation correction with the ground-truth CT attenuation correction. Results: In total, 30 bone lesions and 60 soft-tissue lesions were evaluated. The RMSE in PET quantification was reduced by a factor of 4 for bone lesions (10.24% for Dixon PET and 2.68% for ZeDD PET) and by a factor of 1.5 for soft-tissue lesions (6.24% for Dixon PET and 4.07% for ZeDD PET). Conclusion: ZeDD CT produces natural-looking and quantitatively accurate pseudo-CT images and reduces error in pelvic PET/MRI attenuation correction compared with standard methods. © 2018 by the Society of Nuclear Medicine and Molecular Imaging.
Khalifé, Maya; Fernandez, Brice; Jaubert, Olivier; Soussan, Michael; Brulon, Vincent; Buvat, Irène; Comtat, Claude
2017-09-21
In brain PET/MR applications, accurate attenuation maps are required for accurate PET image quantification. An implemented attenuation correction (AC) method for brain imaging is the single-atlas approach that estimates an AC map from an averaged CT template. As an alternative, we propose to use a zero echo time (ZTE) pulse sequence to segment bone, air and soft tissue. A linear relationship between histogram normalized ZTE intensity and measured CT density in Hounsfield units (HU) in bone has been established thanks to a CT-MR database of 16 patients. Continuous AC maps were computed based on the segmented ZTE by setting a fixed linear attenuation coefficient (LAC) to air and soft tissue and by using the linear relationship to generate continuous μ values for the bone. Additionally, for the purpose of comparison, four other AC maps were generated: a ZTE derived AC map with a fixed LAC for the bone, an AC map based on the single-atlas approach as provided by the PET/MR manufacturer, a soft-tissue only AC map and, finally, the CT derived attenuation map used as the gold standard (CTAC). All these AC maps were used with different levels of smoothing for PET image reconstruction with and without time-of-flight (TOF). The subject-specific AC map generated by combining ZTE-based segmentation and linear scaling of the normalized ZTE signal into HU was found to be a good substitute for the measured CTAC map in brain PET/MR when used with a Gaussian smoothing kernel of 4 mm corresponding to the PET scanner intrinsic resolution. As expected, TOF reduces AC error regardless of the AC method. The continuous ZTE-AC performed better than the other alternative MR derived AC methods, reducing the quantification error between the MRAC corrected PET image and the reference CTAC corrected PET image.
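The continuous bone mapping described above is a per-voxel linear conversion applied inside the bone segmentation. A minimal sketch in which the slope and intercept are placeholders, not the published calibration:

```python
import numpy as np

# Hypothetical linear calibration from normalized ZTE intensity to HU in bone
SLOPE, INTERCEPT = -2000.0, 2000.0

def zte_to_hu(zte_norm, bone_mask, air_mask=None, hu_soft=0.0, hu_air=-1000.0):
    # Fixed values for soft tissue and air, continuous linear values for bone
    hu = np.full(zte_norm.shape, hu_soft)
    if air_mask is not None:
        hu[air_mask] = hu_air
    hu[bone_mask] = SLOPE * zte_norm[bone_mask] + INTERCEPT
    return hu

zte = np.array([0.55, 0.70, 0.90])              # normalized ZTE, bone voxels
print(zte_to_hu(zte, bone_mask=np.array([True, True, True])))
```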
Thrust at N$^3$LL with power corrections and a precision global fit for $\alpha_s(m_Z)$
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abbate, Riccardo; Stewart, Iain W.; Fickinger, Michael
2011-04-01
We give a factorization formula for the $e^+e^-$ thrust distribution $d\sigma/d\tau$ with $\tau = 1 - T$ based on the soft-collinear effective theory. The result is applicable for all $\tau$, i.e. in the peak, tail, and far-tail regions. The formula includes $O(\alpha_s^3)$ fixed-order QCD results, resummation of singular partonic $\alpha_s^j \ln^k(\tau)/\tau$ terms with N$^3$LL accuracy, hadronization effects from fitting a universal nonperturbative soft function defined with field theory, bottom quark mass effects, QED corrections, and the dominant top mass dependent terms from the axial anomaly. We do not rely on Monte Carlo generators to determine nonperturbative effects since they are not compatible with higher order perturbative analyses. Instead our treatment is based on fitting nonperturbative matrix elements in field theory, which are moments $\Omega_i$ of a nonperturbative soft function. We present a global analysis of all available thrust data measured at center-of-mass energies Q = 35-207 GeV in the tail region, where a two-parameter fit to $\alpha_s(m_Z)$ and the first moment $\Omega_1$ suffices. We use a short-distance scheme to define $\Omega_1$, called the R-gap scheme, thus ensuring that the perturbative $d\sigma/d\tau$ does not suffer from an $O(\Lambda_{\rm QCD})$ renormalon ambiguity. We find $\alpha_s(m_Z) = 0.1135 \pm (0.0002)_{\rm expt} \pm (0.0005)_{\rm hadr} \pm (0.0009)_{\rm pert}$, with $\chi^2/{\rm dof} = 0.91$, where the displayed 1σ errors are the total experimental error, the hadronization uncertainty, and the perturbative theory uncertainty, respectively. The hadronization uncertainty in $\alpha_s$ is significantly decreased compared to earlier analyses by our two-parameter fit, which determines $\Omega_1 = 0.323$ GeV with 16% uncertainty.
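For reference, the thrust event shape used above has the standard textbook definition (stated here as background, not reproduced from the paper):

```latex
T = \max_{\hat n} \frac{\sum_i |\vec p_i \cdot \hat n|}{\sum_i |\vec p_i|},
\qquad \tau = 1 - T ,
```

so the dijet limit corresponds to $T \to 1$, i.e. $\tau \to 0$, which is the peak region of the distribution.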
Swift J1822.3-1606: Optical spectroscopy of the counterpart candidates from the 10.4m GTC
NASA Astrophysics Data System (ADS)
de Ugarte Postigo, A.; Munoz-Darias, T.
2011-07-01
We have performed optical spectroscopy of the two objects (S1 and S2; ATEL #3496, #3502) present within the Swift/XRT error circle of the Soft Gamma-ray Repeater (SGR) candidate, Swift J1822.3-1606 (ATEL #3488, #3489, #3490, #3491, #3493, #3501, #3503). Observations were performed on July 20, 2011 using the OSIRIS spectrograph at the 10.4m Gran Telescopio de Canarias (GTC) telescope in La Palma, Spain.
Investigation of the Use of Erasures in a Concatenated Coding Scheme
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Marriott, Philip J.
1997-01-01
A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate-1/2, K = 7 convolutional code and the (255, 223) Reed-Solomon code. Errors-and-erasures Reed-Solomon decoding is used. The proposed erasure method uses a soft output Viterbi algorithm and information provided by decoded Reed-Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum amount of decoding trials.
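The reason erasure declarations pay off for the outer code follows from the Reed-Solomon decoding bound: with n - k = 32 redundant symbols, decoding succeeds whenever 2·errors + erasures ≤ 32, so a correctly declared erasure costs half the redundancy of an undetected error. A sketch using the code parameters quoted above (the example counts are hypothetical):

```python
# (255, 223) Reed-Solomon code: n - k = 32 redundant symbols
n, k = 255, 223

def decodable(errors, erasures):
    # Errors-and-erasures decoding succeeds iff 2e + s <= n - k
    return 2 * errors + erasures <= n - k

print(decodable(16, 0))   # True: classic 16-error correction
print(decodable(10, 12))  # True: trading errors for declared erasures
print(decodable(17, 0))   # False: beyond the error-only bound
```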
Psycho-Motor and Error Enabled Simulations: Modeling Vulnerable Skills in the Pre-Mastery Phase
2016-04-01
participants. Multiple abstracts and posters were created for surgical conferences attended. These works concentrated on data from pre and post ...analyzed to give every participant a perspective of the smallest difference in stiffness they could differentiate. Based on the results the tests were...camera was affixed to a post mounted to this station’s table to capture a close-up view of the participant’s placement of needles on the simulation
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Health, Education, and Human Services Div.
The District of Columbia Public Schools (DCPS) is one of the largest public school districts in the United States. Since 1989-90, there have been questions about several aspects of DCPS's enrollment-count process. A valid enrollment-count process and an accurate count are critical to DCPS's district- and school-level planning, staffing, funding,…
The Implications of Self-Reporting Systems for Maritime Domain Awareness
2006-12-01
SIA), offrent des avantages significatifs comparativement à la poursuite des navires par détecteur ordinaire et que la disponibilité de l’information...reporting system for sea-going vessels that originated in Sweden in the early 1990s. It was designed primarily for safety of life at sea (SOLAS) and...report information is prone to human error and potential malicious altering and the system itself was not designed with these vulnerabilities in mind
Herrera-Rangel, Aline; Aranda-Moreno, Catalina; Mantilla-Ochoa, Teresa; Zainos-Saucedo, Lylia; Jáuregui-Renaud, Kathrine
2014-01-01
To assess the influence of peripheral neuropathy, gender, and obesity on the postural stability of patients with type 2 diabetes mellitus. 151 patients with no history of otological, neurological, orthopaedic, or balance disorders agreed to participate in the study. After a clinical interview and neuropathy assessment, postural stability was evaluated by static posturography (eyes open/closed on hard/soft surface) and the "Up & Go" test. During static posturography, on a hard surface, the length of sway was related to peripheral neuropathy, gender, age, and obesity; on a soft surface, the length of sway was related to peripheral neuropathy, gender, and age; the influence of neuropathy was larger in males than in females, and closing the eyes further increased the difference between genders. The mean time to perform the "Up & Go" test was 11.6 ± 2.2 sec, with influence of peripheral neuropathy, gender, and age. In order to preserve control of static upright posture under conditions with deficient sensory input, male patients with type 2 diabetes mellitus with no history of balance disorders may be more vulnerable than females, and obesity may decrease static postural control in both males and females.
Ng, K; Phillips, M R; Borges, P; Thomas, T; August, P; Calado, H; Veloso-Gomes, F
2014-05-15
Traditional hard engineering structures and recently emerging soft engineering alternatives have been employed to protect vulnerable coastlines. Despite negative publicity, they have ensured community survival where socio-economic benefits outweigh adverse impacts. This is especially true for Small Islands (SI), where increasing sea levels and storm intensities threaten already limited land availability. This paper presents coastal vulnerability in São Miguel Island (the Azores SI archipelago) and considers SI issues with regard to coastal land loss. Regional wave statistics using the 1998 to 2011 wind record showed: periods ranging from 7 to 13 s (circa 83%); wave heights between 1 and 3 m (circa 60%); and increasing trends in westerly (p=0.473), easterly (p=0.632) and southeasterly (p=0.932) waves. Sea level analyses between 1978 and 2007 indicated a statistically significant rising trend (2.5 ± 0.4 mm yr⁻¹; p=0.000), while between 1996 and 2007 it was 3.3 ± 1.5 mm yr⁻¹ (p=0.025), agreeing with other global sea level studies. Based on 2001 and 2008 population data and using zonal statistics, circa 60% of the Island's population was found to reside within 1 km of the sea, and the percentage of total population was linearly correlated with distance from the shoreline (r² = 99%). Three case studies show hard coastal engineering solutions preserved the Azorean coastal lifestyle and had little or no observed negative impacts on their environs. Although hard engineering is likely to remain a valuable and feasible coastal protection option, an inventory of São Miguel's population distribution, surf breaks, bathymetry and coastal erosion rates showed the potential of using multifunctional artificial reefs as a soft engineering solution. These offshore submerged breakwaters offer coastal protection while providing additional benefits such as surfing amenity and beach widening. Consequently, findings of this work can inform other SI communities. Copyright © 2014 Elsevier B.V. All rights reserved.
Context sensitivity and ambiguity in component-based systems design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bespalko, S.J.; Sindt, A.
1997-10-01
Designers of component-based, real-time systems need to guarantee the correctness of software and its output. Complexity of a system, and thus the propensity for error, is best characterized by the number of states a component can encounter. In many cases, large numbers of states arise where the processing is highly dependent on context. In these cases, states are often missed, leading to errors. The following are proposals for compactly specifying system states which allow the factoring of complex components into a control module and a semantic processing module. Further, the need for methods that allow for the explicit representation of ambiguity and uncertainty in the design of components is discussed. Presented herein are examples of real-world problems which are highly context-sensitive or are inherently ambiguous.
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
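As a first-order illustration of the masking effect quantified above, one common simplification (an assumption here, not the paper's measurement methodology) treats a flip-flop upset as latched downstream only if it occurs early enough in the clock cycle to propagate through the combinational logic before the next edge:

```python
# First-order temporal-masking model: an upset arriving uniformly in time is
# captured by the next stage only within the fraction of the clock period not
# consumed by combinational-logic delay.

def latched_fraction(t_clk_ns, t_logic_ns):
    """Fraction of flip-flop upsets that can still be latched downstream."""
    return max(0.0, t_clk_ns - t_logic_ns) / t_clk_ns

for t_logic in (0.2, 0.5, 0.8):
    f = latched_fraction(t_clk_ns=1.0, t_logic_ns=t_logic)
    print(f"logic delay {t_logic} ns -> latched fraction {f:.2f}")
```

Longer logic delay shrinks the exposed window, consistent with the experimental trend reported above for low-LET particles.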
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N., E-mail: sergei.molotkov@gmail.com
2012-05-15
The fundamental quantum mechanics prohibitions on the measurability of quantum states allow secure key distribution between spatially remote users to be performed. Experimental and commercial implementations of quantum cryptography systems, however, use components that exist at the current technology level, in particular, one-photon avalanche photodetectors. These detectors are subject to the blinding effect. It was shown that all the known basic quantum key distribution protocols and systems based on them are vulnerable to attacks with blinding of photodetectors. In such attacks, an eavesdropper knows all of the transferred key, does not produce errors at the reception side, and remains undetected. Three quantum key distribution protocols that are robust against such attacks are suggested. The security of keys and detection of eavesdropping attempts are guaranteed by the internal structure of the protocols themselves rather than by additional technical improvements.
Legal, ethical and practical considerations in research involving nurses with dyslexia.
Gillin, Nicola
2015-09-01
To discuss the legal, ethical and practical considerations in UK studies involving nurses with dyslexia and medication administration errors (MAEs). Nurses with dyslexia are a vulnerable population as they are susceptible to misrepresentation in research, especially that which involves a sensitive topic such as MAEs. Nurses with dyslexia may be particularly vulnerable to research that could exploit, implicate or attribute unsafe practice to them and their disability. Special consideration should be exercised when researching this population. Despite the potential for legal, ethical and practical issues, MAEs and nurses with dyslexia are under-researched areas and warrant further research. Benefits can be gained, not only by participants but also those with a vested interest in how best to support dyslexic nurses in clinical practice. Through effective design, risks can be identified and minimised, and the research made viable, ethically sound and ultimately beneficial to all those involved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Duan, Sisi; Li, Yun; Levitt, Karl N.
Consensus is a fundamental approach to implementing fault-tolerant services through replication, where there exists a tradeoff between cost and resilience. For instance, Crash Fault Tolerant (CFT) protocols have a low cost but can only handle crash failures, while Byzantine Fault Tolerant (BFT) protocols handle arbitrary failures but have a higher cost. Hybrid protocols enjoy the benefits of both high performance without failures and high resiliency under failures by switching among different subprotocols. However, it is challenging to determine which subprotocols should be used. We propose a moving target approach to switch among protocols according to the existing system and network vulnerability. At the core of our approach is a formalized cost model that evaluates the vulnerability and performance of consensus protocols based on real-time Intrusion Detection System (IDS) signals. Based on the evaluation results, we demonstrate that a safe, cheap, and unpredictable protocol is always used and a high IDS error rate can be tolerated.
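The switching decision at the core of this approach can be sketched as a simple policy over IDS signals. The protocol names, costs, and scoring rule below are hypothetical stand-ins for the paper's formalized cost model:

```python
# Minimal moving-target sketch: choose a consensus subprotocol by weighing
# cost against vulnerability inferred from IDS alert rates.
PROTOCOLS = {
    "CFT": {"cost": 1.0, "tolerates_byzantine": False},
    "BFT": {"cost": 3.0, "tolerates_byzantine": True},
}

def choose_protocol(ids_alert_rate, byz_threshold=0.1):
    # Under low alert rates, prefer the cheap crash-fault-tolerant protocol;
    # switch to BFT when the IDS suggests possible arbitrary (Byzantine) faults.
    return "BFT" if ids_alert_rate > byz_threshold else "CFT"

for rate in (0.02, 0.25):
    print(rate, "->", choose_protocol(rate))
```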
Wilson, Ander; Chiu, Yueh-Hsiu Mathilda; Hsu, Hsiao-Hsien Leon; Wright, Robert O; Wright, Rosalind J; Coull, Brent A
2017-07-01
Epidemiological research supports an association between maternal exposure to air pollution during pregnancy and adverse children's health outcomes. Advances in exposure assessment and statistics allow for estimation of both critical windows of vulnerability and exposure effect heterogeneity. Simultaneous estimation of windows of vulnerability and effect heterogeneity can be accomplished by fitting a distributed lag model (DLM) stratified by subgroup. However, this can provide an incomplete picture of how effects vary across subgroups because it does not allow for subgroups to have the same window but different within-window effects or to have different windows but the same within-window effect. Because the timing of some developmental processes are common across subpopulations of infants while for others the timing differs across subgroups, both scenarios are important to consider when evaluating health risks of prenatal exposures. We propose a new approach that partitions the DLM into a constrained functional predictor that estimates windows of vulnerability and a scalar effect representing the within-window effect directly. The proposed method allows for heterogeneity in only the window, only the within-window effect, or both. In a simulation study we show that a model assuming a shared component across groups results in lower bias and mean squared error for the estimated windows and effects when that component is in fact constant across groups. We apply the proposed method to estimate windows of vulnerability in the association between prenatal exposures to fine particulate matter and each of birth weight and asthma incidence, and estimate how these associations vary by sex and maternal obesity status in a Boston-area prospective pre-birth cohort study. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
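To make the distributed lag structure concrete, here is a simulated sketch (the exposures, window, and effect size are invented for illustration; the estimator proposed above additionally constrains the lag curve into a window component and a scalar within-window effect):

```python
import numpy as np

# Simulated distributed lag model (DLM): weekly exposures act on the outcome
# only inside a window of vulnerability, with one within-window effect.
rng = np.random.default_rng(1)
n, L = 2000, 37                                # subjects, gestational weeks
X = rng.normal(10, 3, (n, L))                  # e.g., weekly PM2.5 exposure

window = np.zeros(L); window[10:20] = 1.0      # true window: weeks 10-19
beta = -0.05                                   # within-window effect
y = X @ (beta * window / window.sum()) + rng.normal(0, 0.2, n)

# Unconstrained DLM fit (one coefficient per week) by least squares:
w_hat, *_ = np.linalg.lstsq(X - X.mean(0), y - y.mean(), rcond=None)
print(np.round(w_hat[8:22], 4))                # coefficients dip in weeks 10-19
```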
SU-G-BRB-16: Vulnerabilities in the Gamma Metric
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neal, B; Siebers, J
Purpose: To explore vulnerabilities in the gamma index metric that undermine its wide use as a radiation therapy quality assurance tool. Methods: 2D test field pairs (images) are created specifically to achieve high gamma passing rates, but to also include gross errors by exploiting the distance-to-agreement and percent-passing components of the metric. The first set has no requirement of clinical practicality, but is intended to expose vulnerabilities. The second set exposes clinically realistic vulnerabilities. To circumvent limitations inherent to user-specific tuning of prediction algorithms to match measurements, digital test cases are manually constructed, thereby mimicking high-quality image prediction. Results: With a 3 mm distance-to-agreement metric, changing field size by ±6 mm results in a gamma passing rate over 99%. For a uniform field, a lattice of passing points spaced 5 mm apart results in a passing rate of 100%. Exploiting the percent-passing component, a 10×10 cm² field can have a 95% passing rate when an 8 cm² (2.8×2.8 cm²) highly out-of-tolerance (e.g. zero dose) square is missing from the comparison image. For clinically realistic vulnerabilities, an arc plan for which a 2D image is created can have a >95% passing rate solely due to agreement in the lateral spillage, with the failing 5% in the critical target region. A field with an integrated boost (e.g. whole brain plus small metastases) could neglect the metastases entirely, yet still pass with a 95% threshold. All the failure modes described would be visually apparent on a gamma-map image. Conclusion: The %γ<1 metric has significant vulnerabilities. High passing rates can obscure critical faults in hypothetical and delivered radiation doses. Great caution should be used with gamma as a QA metric; users should inspect the gamma-map. Visual analysis of gamma-maps may be impractical for cine acquisition.
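The distance-to-agreement exploit is easy to reproduce in one dimension. A minimal sketch (the 3%/3 mm criteria and toy profiles are illustrative assumptions): a narrow 10% dose error passes everywhere because nearby in-tolerance points absorb the disagreement.

```python
import numpy as np

def gamma_1d(dose_ref, dose_eval, dx_mm, dd_pct=3.0, dta_mm=3.0):
    # For each reference point, gamma is the minimum combined dose-difference /
    # distance measure over the evaluated profile.
    x = np.arange(len(dose_ref)) * dx_mm
    norm = dose_ref.max() * dd_pct / 100.0
    gam = np.empty(len(dose_ref))
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        gam[i] = np.min(np.sqrt(((x - xi) / dta_mm) ** 2 +
                                ((dose_eval - di) / norm) ** 2))
    return gam

ref = np.ones(100) * 100.0
ev = ref.copy(); ev[40:43] = 90.0             # local 10% dose error, 3 mm wide
g = gamma_1d(ref, ev, dx_mm=1.0)
print(f"passing rate = {100 * np.mean(g <= 1):.1f}%")  # high despite the fault
```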
NASA Astrophysics Data System (ADS)
Parisi, Alessandro; Argentiero, Ilenia; Fidelibus, Maria Dolores; Pellicani, Roberta; Spilotro, Giuseppe
2017-04-01
Considering a natural system without human-induced modifications, its resilience can be altered by many natural drivers (e.g. geological characteristics, climate) and their spatial modifications over time. Therefore, natural hazardous phenomena could shift a natural system over tipping points more or less easily. So long as a natural system does not involve human settlements or transport infrastructure, risk assessment for that system may not be a pressing topic. Nowadays, human activities have modified many natural systems, forming, as a result, hybrid systems (both human and natural), in which natural and human-induced drivers modify the vulnerability of hybrid systems so as to decrease or increase their resilience: scientists call this new age the Anthropocene. In this context, dynamic risk assessment of hybrid systems is required in order to avoid disaster when hazardous phenomena occur, but it is a quite complex issue. In fact, the emerging signals of a soft crisis are difficult to identify because of wrong risk perception and lack of communication. Furthermore, natural and human-induced modifications are rarely registered and supervised by governments, so it is fairly difficult to define how system resilience changes over time. Inhabitants of Ginosa (Taranto, South of Italy) have modified many old rock dwellings over a thousand years, since the Middle Ages. Indeed, they built three-storey houses on three hypogeum levels of rock dwellings along the ravine. The Matrice street collapse in Ginosa is an example of how natural and human-induced spatial modifications over time led a soft crisis to evolve into a disaster, fortunately without fatalities. The aim of this research is to revisit the events preceding the Matrice street collapse of 21 January 2014, and to define the relationship between hybrid-system resilience and the evolution of the soft crisis over time, and how human and natural drivers were involved in the shift.
Donn, Steven M; McDonnell, William M
2012-01-01
The Institute of Medicine has recommended a change in culture from "name and blame" to patient safety. This will require system redesign to identify and address errors, establish performance standards, and set safety expectations. This approach, however, is at odds with the present medical malpractice (tort) system. The current system is outcomes-based, meaning that health care providers and institutions are often sued despite providing appropriate care. Nevertheless, the focus should remain to provide the safest patient care. Effective peer review may be hindered by the present tort system. Reporting of medical errors is a key piece of peer review and education, and both anonymous reporting and confidential reporting of errors have potential disadvantages. Diagnostic and treatment errors continue to be the leading sources of allegations of malpractice in pediatrics, and the neonatal intensive care unit is uniquely vulnerable. Most errors result from systems failures rather than human error. Risk management can be an effective process to identify, evaluate, and address problems that may injure patients, lead to malpractice claims, and result in financial losses. Risk management identifies risk or potential risk, calculates the probability of an adverse event arising from a risk, estimates the impact of the adverse event, and attempts to control the risk. Implementation of a successful risk management program requires a positive attitude, sufficient knowledge base, and a commitment to improvement. Transparency in the disclosure of medical errors and a strategy of prospective risk management in dealing with medical errors may result in a substantial reduction in medical malpractice lawsuits, lower litigation costs, and a more safety-conscious environment. Thieme Medical Publishers, Inc.
Classifying nursing errors in clinical management within an Australian hospital.
Tran, D T; Johnson, M
2010-12-01
Although many classification systems relating to patient safety exist, no taxonomy was identified that classified nursing errors in clinical management. To develop a classification system for nursing errors relating to clinical management (NECM taxonomy) and to describe contributing factors and consequences for patients. We analysed 241 (11%) self-reported incidents relating to clinical management in nursing in a metropolitan hospital. Descriptive analysis of numeric data and content analysis of text data were undertaken to derive the NECM taxonomy, contributing factors and consequences for patients. Clinical management incidents represented 1.63 incidents per 1000 occupied bed days. The four themes of the NECM taxonomy were nursing care process (67%), communication (22%), administrative process (5%), and knowledge and skill (6%). Half of the incidents did not cause any patient harm. Contributing factors (n=111) included the following: patient clinical and social conditions and behaviours (27%); resources (22%); environment and workload (18%); other health professionals (15%); communication (13%); and the nurse's knowledge and experience (5%). The NECM taxonomy provides direction to clinicians and managers on the areas of clinical management that are most vulnerable to error, and therefore on priorities for system change management. Nurses who wish to classify nursing errors relating to clinical management could use this taxonomy. This study informs further research into risk management behaviour and self-assessment tools for clinicians. Globally, nurses need to continue to monitor and act upon patient safety issues. © 2010 The Authors. International Nursing Review © 2010 International Council of Nurses.
Rudolf, Martin; Clark, Mark E; Chimento, Melissa F; Li, Chuan-Ming; Medeiros, Nancy E; Curcio, Christine A
2008-03-01
Macular drusen are hallmarks of age-related maculopathy (ARM), but these focal extracellular lesions also appear with age in the peripheral retina. The present study was conducted to determine regional differences in morphology that contribute to the higher vulnerability of the macula to advanced disease. Drusen from the macula (n = 133) and periphery (n = 282) were isolated and concentrated from nine ARM-affected eyes. A semiquantitative light microscopic evaluation of 1-μm-thick sections included 12 parameters. Significant differences were found between the macula and periphery in ease of isolation, distribution of druse type, composition qualities, and substructures. On harvesting, macular drusen were friable, with liquefied or crystallized contents. Peripheral drusen were resilient and never crystallized. On examination, soft drusen appeared in the macula only, had homogeneous content without significant substructures, and had abundant basal laminar deposits (BlamD). Several substructures, previously postulated as signatures of druse biogenesis, were found primarily in hard drusen. Specific to hard drusen, which appeared everywhere, were central subregions and reduced RPE coverage. Macular hard drusen with a rich substructure profile differed from primarily homogeneous peripheral hard drusen. Compound drusen, found in the periphery only, exhibited a composition profile that was not intermediate between hard and soft. The data confirm regional differences in druse morphology, composition, and physical properties, most likely based on different formative mechanisms that may contribute to macular susceptibility for ARM progression. Two other reasons that only the macula is at high risk despite having relatively few drusen are the exclusive presence of soft drusen and the abundant BlamD in this region.
Neutron-Star Radius from a Population of Binary Neutron Star Mergers.
Bose, Sukanta; Chakravarti, Kabir; Rezzolla, Luciano; Sathyaprakash, B S; Takami, Kentaro
2018-01-19
We show how gravitational-wave observations with advanced detectors of tens to several tens of neutron-star binaries can measure the neutron-star radius with an accuracy of several to a few percent, for mass and spatial distributions that are realistic, and with none of the sources located within 100 Mpc. We achieve such an accuracy by combining measurements of the total mass from the inspiral phase with those of the compactness from the postmerger oscillation frequencies. For estimating the measurement errors of these frequencies, we utilize analytical fits to postmerger numerical relativity waveforms in the time domain, obtained here for the first time, for four nuclear-physics equations of state and a couple of values for the mass. We further exploit quasiuniversal relations to derive errors in compactness from those frequencies. Measuring the average radius to well within 10% is possible for a sample of 100 binaries distributed uniformly in volume between 100 and 300 Mpc, so long as the equation of state is not too soft or the binaries are not too heavy. We also give error estimates for the Einstein Telescope.
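The combination underlying this measurement can be written compactly. The block below states the standard definitions assumed here (background, not equations quoted from the paper): the compactness and the total mass give the radius, and independent fractional errors add in quadrature:

```latex
C \equiv \frac{G M}{R c^{2}}
\quad\Longrightarrow\quad
R = \frac{G M}{C c^{2}},
\qquad
\frac{\delta R}{R} \simeq
\sqrt{\left(\frac{\delta M}{M}\right)^{2}
    + \left(\frac{\delta C}{C}\right)^{2}} .
```

This is why combining the inspiral mass with the compactness inferred from postmerger frequencies tightens the radius estimate as more binaries are stacked.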
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible utilizing applique hardware in conjunction with the hard-decision decoder. Expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10⁻⁵ bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
Jayaram, Natalie; Spertus, John A; Kennedy, Kevin F; Vincent, Robert; Martin, Gerard R; Curtis, Jeptha P; Nykanen, David; Moore, Phillip M; Bergersen, Lisa
2017-11-21
Risk standardization for adverse events after congenital cardiac catheterization is needed to equitably compare patient outcomes among different hospitals as a foundation for quality improvement. The goal of this project was to develop a risk-standardization methodology to adjust for patient characteristics when comparing major adverse outcomes in the NCDR's (National Cardiovascular Data Registry) IMPACT Registry (Improving Pediatric and Adult Congenital Treatment). Between January 2011 and March 2014, 39 725 consecutive patients within IMPACT undergoing cardiac catheterization were identified. Given the heterogeneity of interventional procedures for congenital heart disease, new procedure-type risk categories were derived with empirical data and expert opinion, as were markers of hemodynamic vulnerability. A multivariable hierarchical logistic regression model to identify patient and procedural characteristics predictive of a major adverse event or death after cardiac catheterization was derived in 70% of the cohort and validated in the remaining 30%. The rate of major adverse event or death was 7.1% and 7.2% in the derivation and validation cohorts, respectively. Six procedure-type risk categories and 6 independent indicators of hemodynamic vulnerability were identified. The final risk adjustment model included procedure-type risk category, number of hemodynamic vulnerability indicators, renal insufficiency, single-ventricle physiology, and coagulation disorder. The model had good discrimination, with a C-statistic of 0.76 and 0.75 in the derivation and validation cohorts, respectively. Model calibration in the validation cohort was excellent, with a slope of 0.97 (standard error, 0.04; P value [for difference from 1] =0.53) and an intercept of 0.007 (standard error, 0.12; P value [for difference from 0] =0.95). The creation of a validated risk-standardization model for adverse outcomes after congenital cardiac catheterization can support reporting of risk-adjusted outcomes in the IMPACT Registry as a foundation for quality improvement. © 2017 American Heart Association, Inc.
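The calibration statistics quoted above (slope near 1, intercept near 0) can be checked by regressing observed outcomes on the logit of the predicted risk. A simulated sketch with toy data, not registry data, using the statsmodels package:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
p_pred = rng.uniform(0.01, 0.3, 5000)   # model-predicted risks
y = rng.binomial(1, p_pred)             # outcomes drawn to be well calibrated

# Logistic recalibration: fit y on logit(p_pred); a slope near 1 and an
# intercept near 0 indicate good calibration.
logit = np.log(p_pred / (1 - p_pred))
fit = sm.GLM(y, sm.add_constant(logit),
             family=sm.families.Binomial()).fit()
print(fit.params)   # [intercept ~ 0, slope ~ 1]
```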
Low-power laser treatment of musculoskeletal disorders and body measurements of the equine athlete
NASA Astrophysics Data System (ADS)
Antikas, Theo G.
1990-09-01
This field report presents and analyzes results on 1 cases of musculoskeletal disorders of equine athletes treated either with a Soft Laser 632 device (Worldwide Lasers International, Geneva) or with an Omega Biotherapy infrared multiprobe, multiwavelength device (Omega Labs, London). It proposes a codification of low-power laser forms of treatment in the field and suggests modalities of such treatment(s). The therapeutic effects of low-power laser beams, as well as their postulated modes of action, are discussed. Further, a new technique utilizing a low-power laser device (Technosynthese AG, Zurich) for the accurate measurement of the height of ponies and horses is described. After testing in over 500 equines, the apparatus and the technique were found accurate, with an error factor not exceeding 1.2 mm (1/20 inch), whereas the ancient "standard stick" method was found to produce a constant significant error in all animals measured. MATERIALS AND METHODS. Soft Laser 632 device: portable 25 mW helium-neon laser device emitting a visible red band of 632.8 nm either through a "window" or through an optic fiber probe. Omega Biotherapy device: portable 50 mW infrared laser device with two probes and a multiprobe emitting four wavelength laser bands. Pony Metre: portable helium-neon device with two incorporated levels, sliding through a rotating "head" placed at the top of a tripod that can move on either the vertical (x) or horizontal (y) axis. RESULTS
Marwah, Nikhil
2016-01-01
Objective: The aim of our study is to use cone beam computed tomography (CBCT) to assess the dimensional changes in the nasopharyngeal soft-tissue characteristics in children of Indian origin with repaired cleft lip and palate (CLP) and to compare the results with patients with ideal occlusion. Materials and methods: A sample of 20 children (10 girls, 10 boys) with repaired CLP was selected. Cone beam computed tomography scans were taken to measure the nasopharyngeal airway changes in terms of linear measurements and sagittal cross-sectional areas. Error analysis was performed to prevent systematic or random errors. Independent means t-tests and Pearson correlation analysis were used to evaluate sex differences and the correlations among the variables. Results: Nasopharyngeal soft-tissue characteristics were different in the control and the study groups. Subjects with repaired CLP had smaller lower aerial width, lower adenoidal width, and lower airway width; the upper airway width was also significantly smaller. The retropalatal and the total airway area were significantly greater in the control group. Conclusion: The narrow pharyngeal airway in patients with CLP might result in functional impairment of breathing. Further investigations are necessary to clarify the relationship between pharyngeal structure and airway function in patients with CLP. How to cite this article: Agarwal A, Marwah N. Assessment of the Airway Characteristics in Children with Cleft Lip and Palate using Cone Beam Computed Tomography. Int J Clin Pediatr Dent 2016;9(1):5-9. PMID:27274147
NASA Astrophysics Data System (ADS)
Saenz, Daniel L.; Kim, Hojin; Chen, Josephine; Stathakis, Sotirios; Kirby, Neil
2016-09-01
The primary purpose of the study was to determine how detailed deformable image registration (DIR) phantoms need to be in order to adequately simulate human anatomy and accurately assess the quality of DIR algorithms. In particular, how many distinct tissues are required in a phantom to simulate complex human anatomy? Pelvis and head-and-neck patient CT images were used for this study as virtual phantoms. Two data sets from each site were analyzed. The virtual phantoms were warped to create two pairs consisting of undeformed and deformed images. Otsu's method was employed to create additional segmented image pairs of n distinct soft tissue CT number ranges (fat, muscle, etc.). A realistic noise image was added to each image. Deformations were applied in MIM Software (MIM) and Velocity deformable multi-pass (DMP) and compared with the known warping. Images with more simulated tissue levels exhibit more contrast, enabling more accurate results. Deformation error (the magnitude of the vector difference between known and predicted deformation) was used as a metric to evaluate how many CT number gray levels are needed for a phantom to serve as a realistic patient proxy. Stabilization of the mean deformation error was reached by three soft tissue levels for Velocity DMP and MIM, though MIM exhibited a persisting difference in accuracy between the discrete images and the unprocessed image pair. A minimum detail of three levels allows a realistic patient proxy for use with Velocity and MIM deformation algorithms.
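The deformation-error metric quoted above is straightforward to compute once the known and predicted deformation fields are on a common voxel grid; a minimal sketch, with array shapes assumed for illustration:

```python
import numpy as np

def deformation_error(known, predicted):
    """Per-voxel magnitude of the (known - predicted) displacement vectors."""
    return np.linalg.norm(known - predicted, axis=-1)

rng = np.random.default_rng(0)
known = rng.normal(size=(32, 64, 64, 3))                     # (z, y, x, vector)
predicted = known + rng.normal(scale=0.5, size=known.shape)  # imperfect DIR
err = deformation_error(known, predicted)
print("mean deformation error:", err.mean(), "max:", err.max())
```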
2014-03-22
consideration for enlisted airmen, has largely become a non-factor due to over-inflated scores, with other factors such as specialty knowledge test scores, time...appraisal. Secondly, an Artificial Neural Network (ANN) classifier will be applied to the large sample data to confirm that the values solicited to...jobs, employees make themselves vulnerable to the organization when they expend effort. If extra effort is expended to reduce errors or defects, or
2015-01-01
self-esteem. • Vulnerability to flattery or the promise of a better job: Often coupled with anger/revenge or adventure/thrill. • Ingratiation: A...conduct a self-assessment or access an on-site assessment that DHS cybersecurity professionals facilitate. To learn more about the CRR or to download...procedures to handle communication errors. 5 In 2012, workgroup member, the Petroleum Safety Authority of Norway, released a self-assessment schema for
Transitions of care and rehabilitation after fragility fractures.
Eslami, Michelle; Tran, Hong-Phuc
2014-05-01
Transitions in care are a vulnerable time period for patients during which unintended errors may occur. This article discusses potential risks that could occur during care transitions, suggested improvements, and the transition from hospital to skilled nursing facilities for patients needing rehabilitation after their discharge from the hospital. Different rehabilitation settings and their reimbursement are reviewed. Common potential medical conditions arising in patients undergoing rehabilitation, rehabilitation goals, and secondary prevention also are discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
2016-12-01
repair, bowel anastomosis, central venous catheterization, and bladder catheterization. We performed a multivariate analysis of variance (MANOVA) to...DC, Gould MK. Preventing complications of central venous catheterization. N Engl J Med 2003;348:1123–33. 15. Maithel S, Sierra R, Korndorffer J, et...subclavian-vein catheterization. N Engl J Med. 1994;331(26):1735-1738. 2. Taylor RW, Palagiri AV. Central venous catheterization. Crit Care Med. 2007;35(5
The "erotic transference": some technical and countertransferential difficulties.
Book, H E
1995-01-01
This paper highlights dynamics that may interfere with the therapist's identifying and addressing the erotic transference: (1) deficient training; (2) theoretical orientations that devalue the transference while espousing a "real" relationship including self-disclosure; (3) countertransference responses to the erotic transference; and (4) clinical errors of focusing on the manifest erotic transference while overlooking significant but latent pre-oedipal, oedipal, aggressive, or selfobject issues. Inattention to these dynamics may render the therapist vulnerable to sexual acting out with his patient.
Grazing Incidence Wavefront Sensing and Verification of X-Ray Optics Performance
NASA Technical Reports Server (NTRS)
Saha, Timo T.; Rohrbach, Scott; Zhang, William W.
2011-01-01
Evaluation of interferometrically measured mirror metrology data and characterization of a telescope wavefront can be powerful tools in understanding the image characteristics of an x-ray optical system. In the development of the soft x-ray telescope for the International X-Ray Observatory (IXO), we have developed new approaches to support the telescope development process. Interferometric measurement of the optical components over all relevant spatial frequencies can be used to evaluate and predict the performance of an x-ray telescope. Typically, the mirrors are measured using a mount that minimizes mount- and gravity-induced errors. In the assembly and mounting process, the shape of the mirror segments can change dramatically. We have developed wavefront sensing techniques suitable for x-ray optical components to aid us in the characterization and evaluation of these changes. Hartmann sensing of a telescope and its components is a simple method that can be used to evaluate low-order mirror surface errors and alignment errors. Phase retrieval techniques can also be used to assess and estimate the low-order axial errors of the primary and secondary mirror segments. In this paper we describe the mathematical foundation of our Hartmann and phase retrieval sensing techniques. We show how these techniques can be used in the evaluation and performance prediction process of x-ray telescopes.
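For readers unfamiliar with how low-order surface and alignment errors are extracted from Hartmann-style data, the following is a minimal, hedged sketch: fit piston, tilt, and defocus terms to sampled wavefront heights by linear least squares. The basis and units are assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, (2, 400))             # normalized aperture samples
true = 0.3 * x - 0.1 * y + 0.05 * (2 * (x**2 + y**2) - 1)
w = true + rng.normal(scale=0.01, size=x.size)  # "measured" wavefront + noise

# Design matrix: piston, x-tilt, y-tilt, defocus (low-order Zernike-like terms).
A = np.column_stack([np.ones_like(x), x, y, 2 * (x**2 + y**2) - 1])
coef, *_ = np.linalg.lstsq(A, w, rcond=None)
print("piston, tilt-x, tilt-y, defocus:", np.round(coef, 3))
```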
Dynamic simulation of the effect of soft toric contact lenses movement on retinal image quality.
Niu, Yafei; Sarver, Edwin J; Stevenson, Scott B; Marsack, Jason D; Parker, Katrina E; Applegate, Raymond A
2008-04-01
To report the development of a tool designed to dynamically simulate the effect of soft toric contact lens movement on retinal image quality, initial findings in three eyes, and the next steps to be taken to improve the utility of the tool. Three eyes of two subjects wearing soft toric contact lenses were cyclopleged with 1% cyclopentolate and 2.5% phenylephrine. Four hundred wavefront aberration measurements over a 5-mm pupil were recorded during soft contact lens wear at 30 Hz using a complete ophthalmic analysis system aberrometer. Each wavefront error measurement was input into Visual Optics Laboratory (version 7.15, Sarver and Associates, Inc.) to generate a retinal simulation of a high-contrast logMAR visual acuity chart. The individual simulations were combined into a single dynamic movie using a custom MATLAB Psychtoolbox program. Visual acuity was measured for each eye reading the movie with best cycloplegic spectacle correction through a 3-mm artificial pupil to minimize the influence of the eyes' uncorrected aberrations. Comparison of the simulated acuity was made to values recorded while the subject read unaberrated charts with contact lenses through a 5-mm artificial pupil. For one study eye, average acuity was the same as in the natural contact lens viewing condition. For the other two study eyes, visual acuity of the best simulation was more than one line worse than under natural viewing conditions. Dynamic simulation of retinal image quality, although not yet perfect, is a promising technique for visually illustrating the optical effects on image quality caused by the movement of alignment-sensitive corrections.
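A hedged sketch of the core optical step described above (not the Visual Optics Laboratory implementation): derive a point-spread function from one wavefront-error frame via the pupil function and an FFT, then convolve it with a chart image to obtain one frame of the retinal simulation.

```python
import numpy as np
from scipy.signal import fftconvolve

N = 256
yy, xx = np.mgrid[-1:1:N*1j, -1:1:N*1j]
pupil = (xx**2 + yy**2) <= 1.0                 # normalized circular pupil
wavefront = 0.3 * (2 * (xx**2 + yy**2) - 1)    # toy defocus term, in waves

field = pupil * np.exp(2j * np.pi * wavefront) # generalized pupil function
psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
psf /= psf.sum()

chart = np.ones((N, N)); chart[100:156, 60:200] = 0.0  # toy "letter" on a chart
retinal = fftconvolve(chart, psf, mode="same")         # one blurred frame
print("min/max of simulated retinal image:", retinal.min(), retinal.max())
```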
Phantom experiments using soft-prior regularization EIT for breast cancer imaging.
Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J
2017-06-01
A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected from a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement to these standard approaches, and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information of suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-regularization technique found average absolute tumor/inclusion errors of 0.015 S m -1 for the cylindrical test and 0.055 S m -1 and 0.080 S m -1 for the breast-shaped tank for 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT has for providing clinically relevant information. The ability to obtain accurate conductivity values of a suspicious lesion (>1.8 cm) detected from another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.
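The soft-prior idea can be illustrated on a toy linear inverse problem: relax the regularization penalty inside the region flagged by the second imaging modality so the reconstruction may deviate from the background there. The forward operator, conductivities, and mask below are synthetic assumptions, not an EIT solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                            # unknown conductivity pixels (1-D for brevity)
x_true = np.full(n, 0.2)           # background conductivity, S/m
x_true[40:50] = 0.8                # "tumor" region
A = rng.normal(size=(60, n))       # underdetermined linear forward operator
b = A @ x_true + rng.normal(scale=0.05, size=60)

x0 = np.full(n, 0.2)               # reference (background) estimate
mask = np.zeros(n); mask[40:50] = 1.0          # prior lesion location

for lam_inside, label in [(10.0, "no prior"), (0.1, "soft prior")]:
    w = np.where(mask == 1.0, lam_inside, 10.0)  # per-pixel penalty weight
    W = np.diag(w)
    # minimize ||A x - b||^2 + (x - x0)^T W (x - x0)
    x_hat = np.linalg.solve(A.T @ A + W, A.T @ b + W @ x0)
    print(f"{label}: recovered lesion mean = {x_hat[40:50].mean():.3f} S/m")
```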
Zakaria, Rozalina; Sheng, Ong Yong; Wern, Kam; Shamshirband, Shahaboddin; Wahab, Ainuddin Wahid Abdul; Petković, Dalibor; Saboohi, Hadi
2014-05-01
A soft-computing methodology has been applied to tapered plastic multimode sensors. The study used tapered plastic multimode fiber [polymethyl methacrylate (PMMA)] optics as a sensor. The tapered PMMA fiber was fabricated using an etching method involving deionized water and acetone to achieve a waist diameter and length of 0.45 and 10 mm, respectively. In addition, a tapered PMMA probe coated with silver film was fabricated and demonstrated using a calcium hypochlorite (G70) solution. The working mechanism of such a device is based on the observed increase in the transmission of the sensor when it is immersed in solutions of increasing concentration. As the concentration was varied from 0 to 6 ppm, the output voltage of the sensor increased linearly. The silver film coating increased the sensitivity of the proposed sensor because the effective cladding refractive index increases with the coating and thus allows more light to be transmitted through the tapered fiber. In this study, the polynomial and radial basis function (RBF) were applied as the kernel function of support vector regression (SVR) to estimate and predict the output voltage response of the sensors with and without silver film according to experimental tests. Instead of minimizing the observed training error, SVR_poly and SVR_rbf were used in an attempt to minimize the generalization error bound so as to achieve generalized performance. An adaptive neuro-fuzzy inference system (ANFIS) approach was also investigated for comparison. The experimental results showed that improvements in predictive accuracy and capacity for generalization can be achieved by the SVR_poly approach in comparison with the SVR_rbf methodology. The same testing errors were found for the SVR_poly approach and the ANFIS approach.
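A minimal sketch of the kernel comparison described above, using scikit-learn's SVR with polynomial and RBF kernels on a synthetic stand-in for the 0-6 ppm voltage response (the ANFIS comparison is omitted):

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.svm import SVR

rng = np.random.default_rng(0)
conc = rng.uniform(0, 6, 80).reshape(-1, 1)             # concentration, ppm
voltage = 1.0 + 0.15 * conc.ravel() + rng.normal(scale=0.01, size=80)

for name, model in [("SVR_poly", SVR(kernel="poly", degree=2, C=10.0)),
                    ("SVR_rbf", SVR(kernel="rbf", C=10.0, gamma="scale"))]:
    model.fit(conc[:60], voltage[:60])                  # train split
    pred = model.predict(conc[60:])                     # held-out test split
    rmse = mean_squared_error(voltage[60:], pred) ** 0.5
    print(name, "test RMSE:", round(rmse, 4))
```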
Belley, Matthew D.; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J.; Chen, Benny J.; Dewhirst, Mark W.; Yoshizumi, Terry T.
2014-01-01
Purpose: Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Methods: Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Results: Average doses in soft-tissue organs were found to vary by as much as 23%–32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. Conclusions: This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution, however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs. PMID:24593746
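The size of the error introduced by single-dose-rate prescription can be illustrated with simple arithmetic; the per-organ dose rates below are invented for illustration only:

```python
# Hypothetical per-organ dose rates (Gy/min) for one filter.
organ_dose_rate = {"liver": 1.10, "lung": 1.32, "kidney": 1.05, "brain": 0.98}
single_rate = sum(organ_dose_rate.values()) / len(organ_dose_rate)

for organ, rate in organ_dose_rate.items():
    error_pct = 100 * (single_rate - rate) / rate
    print(f"{organ}: single-rate prescription misstates dose by {error_pct:+.1f}%")
```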
NASA Astrophysics Data System (ADS)
Wodzinski, Marek; Skalski, Andrzej; Ciepiela, Izabela; Kuszewski, Tomasz; Kedzierawski, Piotr; Gajda, Janusz
2018-02-01
Knowledge about tumor bed localization and its shape analysis is a crucial factor in preventing irradiation of healthy tissues during supportive radiotherapy and, as a result, cancer recurrence. The localization process is especially hard for tumors placed near soft tissues, which undergo complex, nonrigid deformations. Among them, breast cancer can be considered the most representative example. A natural approach to improving tumor bed localization is the use of image registration algorithms. However, this involves two unusual aspects which are not common in typical medical image registration: the real deformation field is discontinuous, and there is no direct correspondence between the cancer and its bed in the source and the target 3D images, respectively, because the tumor no longer exists during radiotherapy planning. Therefore, a traditional evaluation approach based on known, smooth deformations and target registration error is not directly applicable. In this work, we propose alternative artificial deformations which model the tumor bed creation process. We perform a comprehensive evaluation of the most commonly used deformable registration algorithms: B-Splines free-form deformations (B-Splines FFD), different variants of the Demons, and TV-L1 optical flow. The evaluation procedure includes quantitative assessment on the dedicated artificial deformations, target registration error calculation, 3D contour propagation, and visual judgment by medical experts. The results demonstrate that the registration methods currently applied in practice (rigid registration and B-Splines FFD) are not able to correctly reconstruct discontinuous deformation fields. We show that the symmetric Demons provide the most accurate soft-tissue alignment in terms of the ability to reconstruct the deformation field, target registration error, and relative tumor volume change, while B-Splines FFD and TV-L1 optical flow are not an appropriate choice for the breast tumor bed localization problem, even though the visual alignment seems better than for the Demons algorithm. However, no algorithm could recover the deformation field with sufficient accuracy in terms of vector length and rotation angle differences.
Navigation system for minimally invasive esophagectomy: experimental study in a porcine model.
Nickel, Felix; Kenngott, Hannes G; Neuhaus, Jochen; Sommer, Christof M; Gehrig, Tobias; Kolb, Armin; Gondan, Matthias; Radeleff, Boris A; Schaible, Anja; Meinzer, Hans-Peter; Gutt, Carsten N; Müller-Stich, Beat-Peter
2013-10-01
Navigation systems potentially facilitate minimally invasive esophagectomy and improve patient outcome by improving intraoperative orientation, position estimation of instruments, and identification of lymph nodes and resection margins. The authors' self-developed navigation system is highly accurate in static environments. This study aimed to test the overall accuracy of the navigation system in a realistic operating room scenario and to identify the different sources of error altering accuracy. To simulate a realistic environment, a porcine model (n = 5) was used with endoscopic clips in the esophagus as navigation targets. Computed tomography imaging was followed by image segmentation and target definition with the medical imaging interaction toolkit software. Optical tracking was used for registration and localization of animals and navigation instruments. Intraoperatively, the instrument was displayed relative to segmented organs in real time. The target registration error (TRE) of the navigation system was defined as the distance between the target and the navigation instrument tip. The TRE was measured on skin targets with the animal in the 0° supine and 25° anti-Trendelenburg position and on the esophagus during laparoscopic transhiatal preparation. On skin targets, the TRE was significantly higher in the 25° position, at 14.6 ± 2.7 mm, compared with the 0° position, at 3.2 ± 1.3 mm. The TRE on the esophagus was 11.2 ± 2.4 mm. The main source of error was soft tissue deformation caused by intraoperative positioning, pneumoperitoneum, surgical manipulation, and tissue dissection. The navigation system obtained acceptable accuracy with a minimally invasive transhiatal approach to the esophagus in a realistic experimental model. Thus the system has the potential to improve intraoperative orientation, identification of lymph nodes and adequate resection margins, and visualization of risk structures. Compensation methods for soft tissue deformation may lead to an even more accurate navigation system in the future.
Clarke, D L; Kong, V Y; Naidoo, L C; Furlong, H; Aldous, C
2013-01-01
Acute surgical patients are particularly vulnerable to human error. The Acute Physiological Support Team (APST) was created with the twin objectives of identifying high-risk acute surgical patients in the general wards and reducing both the incidence and the impact of error in these patients. A number of error taxonomies were used to understand the causes of human error, and a simple risk stratification system was adopted to identify patients who are particularly at risk of error. During the period November 2012-January 2013, a total of 101 surgical patients were cared for by the APST at Edendale Hospital. The average age was forty years. There were 36 females and 65 males. There were 66 general surgical patients and 35 trauma patients. Fifty-six patients were referred on the day of their admission. The average length of stay in the APST was four days. Eleven patients were haemodynamically unstable on presentation and twelve were clinically septic. The reasons for referral were sepsis (4), respiratory distress (3), acute kidney injury (AKI) (38), post-operative monitoring (39), pancreatitis (3), ICU down-referral (7), hypoxia (5), low GCS (1), and coagulopathy (1). The mortality rate was 13%. A total of thirty-six patients experienced 56 errors. A total of 143 interventions were initiated by the APST, including institution or adjustment of intravenous fluids (101), blood transfusion (12), antibiotics (9), management of neutropenic sepsis (1), central line insertion (3), optimization of oxygen therapy (7), correction of electrolyte abnormality (8), and correction of coagulopathy (2). CONCLUSION: Our intervention combined current taxonomies of error with a simple risk stratification system and is a variant of the defence-in-depth strategy of error reduction. We effectively identified and corrected a significant number of human errors in high-risk acute surgical patients. This audit has helped us understand the common sources of error in the general surgical wards and will inform ongoing error-reduction initiatives. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
Noncontact diffuse correlation spectroscopy for noninvasive deep tissue blood flow measurement
NASA Astrophysics Data System (ADS)
Lin, Yu; He, Lian; Shang, Yu; Yu, Guoqiang
2012-01-01
A noncontact diffuse correlation spectroscopy (DCS) probe has been developed using two separated optical paths for the source and detector. This unique design avoids interference between the source and detector and allows large source-detector separations for deep tissue blood flow measurements. The noncontact probe has been calibrated against a contact probe in a tissue-like phantom solution and in human muscle tissues; flow changes concurrently measured by the two probes are highly correlated in both phantom (R² = 0.89, p < 10⁻⁵) and real-tissue (R² = 0.77, p < 10⁻⁵, n = 9) tests. The noncontact DCS holds promise for measuring blood flow in vulnerable (e.g., pressure ulcer) and soft (e.g., breast) tissues without distorting tissue hemodynamic properties.
McKinney, Tim S.; Anning, David W.
2012-01-01
This product "Digital spatial data for observed, predicted, and misclassification errors for observations in the training dataset for nitrate and arsenic concentrations in basin-fill aquifers in the Southwest Principal Aquifers study area" is a 1:250,000-scale point spatial dataset developed as part of a regional Southwest Principal Aquifers (SWPA) study (Anning and others, 2012). The study examined the vulnerability of basin-fill aquifers in the southwestern United States to nitrate contamination and arsenic enrichment. Statistical models were developed by using the random forest classifier algorithm to predict concentrations of nitrate and arsenic across a model grid that represents local- and basin-scale measures of source, aquifer susceptibility, and geochemical conditions.
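A hedged sketch of the modeling approach named above: a random forest classifier trained on basin-scale source, susceptibility, and geochemistry predictors, with out-of-bag and cross-validated accuracy as misclassification checks. Predictor names, classes, and data are placeholders, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
X = np.column_stack([
    rng.uniform(0, 100, n),   # e.g., % agricultural land use (source)
    rng.uniform(0, 50, n),    # e.g., depth to water table (susceptibility)
    rng.uniform(5, 9, n),     # e.g., groundwater pH (geochemical conditions)
])
# Placeholder class label, e.g., "concentration above a threshold".
y = (0.02 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(size=n)) > 0.5

clf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
clf.fit(X, y)
print("out-of-bag accuracy:", round(clf.oob_score_, 3))
print("5-fold CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```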
NASA Technical Reports Server (NTRS)
Foyle, David C.; Goodman, Allen; Hooley, Becky L.
2003-01-01
An overview is provided of the Human Performance Modeling (HPM) element within the NASA Aviation Safety Program (AvSP). Two separate model development tracks for performance modeling of real-world aviation environments are described: the first focuses on the advancement of cognitive modeling tools for system design, while the second centers on a prescriptive engineering model of activity tracking for error detection and analysis. A progressive implementation strategy for both tracks is discussed in which increasingly more complex, safety-relevant applications are undertaken to extend the state-of-the-art, as well as to reveal potential human-system vulnerabilities in the aviation domain. Of particular interest is the ability to predict the precursors to error and to assess potential mitigation strategies associated with the operational use of future flight deck technologies.
Evaluation of the 3dMDface system as a tool for soft tissue analysis.
Hong, C; Choi, K; Kachroo, Y; Kwon, T; Nguyen, A; McComb, R; Moon, W
2017-06-01
To evaluate the accuracy of three-dimensional stereophotogrammetry by comparing values obtained from direct anthropometry and the 3dMDface system. To achieve a more comprehensive evaluation of the reliability of 3dMD, both linear and surface measurements were examined. UCLA Section of Orthodontics. Mannequin head as model for anthropometric measurements. Image acquisition and analysis were carried out on a mannequin head using 16 anthropometric landmarks and 21 measured parameters for linear and surface distances. 3D images were acquired with the 3dMDface system at 0, 1, and 24 hours and at 1, 2, 3, and 4 weeks. Error magnitude statistics used included the mean absolute difference, standard deviation of error, relative error magnitude, and root mean square error. Intra-observer agreement for all measurements was attained. Overall mean errors were lower than 1.00 mm for both linear and surface parameter measurements, except in 5 of the 21 measurements. The three longest parameter distances showed increased variation compared with shorter distances. No systematic errors were observed for all performed paired t tests (P<.05). Agreement values between two observers ranged from 0.91 to 0.99. Measurements on a mannequin confirmed the accuracy of all landmarks and parameters analysed in this study using the 3dMDface system. Results indicated that the 3dMDface system is an accurate tool for linear and surface measurements, with potentially broad-reaching applications in orthodontics, surgical treatment planning, and treatment evaluation. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
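The four error-magnitude statistics named in this abstract are easy to reproduce for paired direct-anthropometry and 3dMD measurements; a minimal sketch with synthetic values (the relative-error-magnitude definition, MAD as a percentage of the mean measurement, is an assumption):

```python
import numpy as np

direct = np.array([35.2, 52.1, 48.7, 30.5, 64.9])   # mm, direct anthropometry
threed = np.array([35.0, 52.6, 48.2, 30.9, 65.4])   # mm, 3dMDface

diff = threed - direct
mad  = np.mean(np.abs(diff))              # mean absolute difference
sd   = np.std(diff, ddof=1)               # standard deviation of error
rem  = 100 * mad / np.mean(direct)        # relative error magnitude (%)
rmse = np.sqrt(np.mean(diff ** 2))        # root mean square error
print(f"MAD={mad:.3f} mm  SD={sd:.3f} mm  REM={rem:.2f}%  RMSE={rmse:.3f} mm")
```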
Wake topology of under-actuated rajiform batoid robots
NASA Astrophysics Data System (ADS)
Valdivia Y Alvarado, Pablo; Weymouth, Gabriel; Thekoodan, Dilip; Patrikalakis, Nicholas
2011-11-01
Under-actuated continuous soft robots are designed to have modes of vibration that match desired body motions using minimal actuation. The desired modes of vibration are enabled by flexible continuous bodies with heterogenous material distributions. Errors or intentional approximations in the manufactured material distributions alter the achieved body motions and influence the resulting locomotion performance. An under-actuated continuous soft robot designed to mimic rajiform batoids such as stingrays is used to investigate the influence that fin kinematics variations have on wake topology, and the trade-offs that simplifying the body material structure has on achievable swimming performance. Pectoral fin kinematics in rajiform batoids are defined by traveling waves along the fin cord with particular amplitude envelopes along both the fin cord and span. Digital particle image velocimetry (DPIV) analysis of a prototype's wake structure and immersed-boundary numerical simulations are used to clarify the role of traveling wave wavelength, fin flapping frequency, and amplitude envelope characteristics on the resulting wake topology and swimming performance.
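A common parameterization consistent with the fin kinematics described above is a traveling wave modulated by amplitude envelopes along the cord and span; the specific envelopes below are assumptions, not the prototype's measured kinematics.

```python
import numpy as np

def fin_displacement(x, s, t, A0=0.1, wavelength=0.8, freq=2.0):
    """Vertical fin displacement at normalized cord position x, span position s.

    Traveling wave along the cord (wavelength in cord lengths) with assumed
    amplitude envelopes growing toward the trailing edge and the fin margin.
    """
    k = 2 * np.pi / wavelength       # wavenumber along the cord
    omega = 2 * np.pi * freq         # flapping frequency (rad/s)
    env = x * s                      # assumed cord and span amplitude envelopes
    return A0 * env * np.sin(k * x - omega * t)

x, s = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 25))
print("max displacement at t = 0.1 s:", fin_displacement(x, s, 0.1).max())
```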
Human Activity Recognition by Combining a Small Number of Classifiers.
Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin
2016-09-01
We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the output of the sensors. The models are based on a soft outputs combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifiers combination models against a single classifier that employs all the signals from the sensors. Our models exhibit consistently a reduction of the error rate and an increase of robustness against sensor failures. Our models also outperform other classifiers combination models that do not consider soft outputs and an Markovian structure of the human activities.
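A minimal sketch of the two ingredients named above, not the paper's exact Bayesian model: average the per-sensor class posteriors (soft-output combination), then decode the activity sequence under a first-order Markov chain with a Viterbi pass. All probabilities are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, S = 6, 3, 2                      # time steps, activities, sensors
sensor_probs = rng.dirichlet(np.ones(K), size=(S, T))  # per-sensor soft outputs

emission = sensor_probs.mean(axis=0)   # soft combination across sensors, (T, K)

stay = 0.9                             # assumed self-transition probability
trans = np.full((K, K), (1 - stay) / (K - 1))
np.fill_diagonal(trans, stay)

# Viterbi decoding in log space with a uniform initial distribution.
logd = np.log(emission[0] / K)
back = np.zeros((T, K), dtype=int)
for t in range(1, T):
    scores = logd[:, None] + np.log(trans) + np.log(emission[t])[None, :]
    back[t] = scores.argmax(axis=0)    # best previous state for each state
    logd = scores.max(axis=0)
path = [int(logd.argmax())]
for t in range(T - 1, 0, -1):
    path.append(int(back[t, path[-1]]))
print("decoded activity sequence:", path[::-1])
```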
Impact of jammer side information on the performance of anti-jam systems
NASA Astrophysics Data System (ADS)
Lim, Samuel
1992-03-01
The Chernoff bound parameter, D, provides a performance measure for all coded communication systems. D can be used to determine upper-bounds on bit error probabilities (BEPs) of Viterbi decoded convolutional codes. The impact on BEP bounds of channel measurements that provide additional side information can also be evaluated with D. This memo documents the results of a Chernoff bound parameter evaluation in optimum partial-band noise jamming (OPBNJ) for both BPSK and DPSK modulation schemes. Hard and soft quantized receivers, with and without jammer side information (JSI), were examined. The results of this analysis indicate that JSI does improve decoding performance. However, a knowledge of jammer presence alone achieves a performance level comparable to soft decision decoding with perfect JSI. Furthermore, performance degradation due to the lack of JSI can be compensated for by increasing the number of levels of quantization. Therefore, an anti-jam system without JSI can be made to perform almost as well as a system with JSI.
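The standard use of the Chernoff bound parameter mentioned above is a union bound over the code's distance spectrum, P_b <= (1/k) * sum_d B_d * D^d; a small sketch follows, using widely tabulated bit-weight terms of the rate-1/2, K=7 convolutional code as illustrative inputs.

```python
def bep_union_bound(D, spectrum, k=1):
    """Upper bound on bit error probability from Chernoff parameter D:
    P_b <= (1/k) * sum over d of B_d * D**d."""
    return sum(B_d * D ** d for d, B_d in spectrum.items()) / k

# Leading bit-weight spectrum terms {d: B_d} of the rate-1/2, K=7 code
# (d_free = 10); illustrative inputs, truncated after four terms.
spectrum = {10: 36, 12: 211, 14: 1404, 16: 11633}

for D in (0.1, 0.2, 0.3):
    print(f"D = {D}: P_b <= {bep_union_bound(D, spectrum):.3e}")
```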
Simulations on false gain in recombination-pumped soft-X-ray lasers
NASA Astrophysics Data System (ADS)
Ozaki, T.; Kuroda, H.
1997-10-01
Numerical investigations are performed on false gain due to axial plasma expansion, which is expected to be important in initial proof-of-principle studies of recombination-pumped soft-X-ray lasers with extended capabilities. Modelling calculations of experiments with slab boron nitride targets reveal large false gain coefficients approaching 20 cm-1 in the case of plasmas with short active medium lengths. The false gain in the case of fiber targets is found to be of equal magnitude to that for slabs in the case of plasmas with less than 0.1 cm active medium lengths. Calculations for slab targets predict that adopting a tolerance of ǃ cm-1 for gain will severely restrict the time and the active medium length of the plasma that can be used for error-free observations, while the conditions for fiber targets are found to be considerably relaxed. The effect of false gain in the 54.2 Å Na10+ Balmer-α laser is also investigated, again revealing the importance of this phenomenon under optimum gain conditions.
Improving soft FEC performance for higher-order modulations via optimized bit channel mappings.
Häger, Christian; Amat, Alexandre Graell I; Brännström, Fredrik; Alvarado, Alex; Agrell, Erik
2014-06-16
Soft forward error correction with higher-order modulations is often implemented in practice via the pragmatic bit-interleaved coded modulation paradigm, where a single binary code is mapped to a nonbinary modulation. In this paper, we study the optimization of the mapping of the coded bits to the modulation bits for a polarization-multiplexed fiber-optical system without optical inline dispersion compensation. Our focus is on protograph-based low-density parity-check (LDPC) codes which allow for an efficient hardware implementation, suitable for high-speed optical communications. The optimization is applied to the AR4JA protograph family, and further extended to protograph-based spatially coupled LDPC codes assuming a windowed decoder. Full field simulations via the split-step Fourier method are used to verify the analysis. The results show performance gains of up to 0.25 dB, which translate into a possible extension of the transmission reach by roughly up to 8%, without significantly increasing the system complexity.
Micro/nano-particle decorated metal wire for cutting soft matter
NASA Astrophysics Data System (ADS)
Zhang, Wei; Feng, Liang-liang; Wu, Fan; Zhang, Run-run; Wu, Cheng-wei
2016-09-01
To cut soft materials such as biological tissues with minimal damage and reduced positional error is highly desired in medical surgery and biomechanics. After years of natural selection and evolution, mosquitoes have acquired the ability to insert their proboscises into human skin with astonishingly tiny forces. This can be associated with the unique structure of their proboscises, with micro/nano sawteeth, and the distinctive insertion manner: high frequency reciprocating saw cutting. Inspired by these, this communication describes the successful implantation of metal oxide particles onto molybdenum wire surfaces through a sol-calcination process, to form a biomimetic sawblade with a high density of micro/nano saw teeth, where the acidification is essential in terms of generating active anchoring sites on the wire. When used as a sawblade in conjunction with reciprocating action to cut the viscoelastic gel, both the cut-in force and cut-in displacement could be decreased substantially. The cutting speed and frequency of reciprocating action are important operating parameters influencing cut-in force.
Sani, Susan Raouf Hadadi; Tabibi, Zahra; Fadardi, Javad Salehi; Stavrinos, Despina
2017-12-01
The present study explored whether aggression, emotional regulation, cognitive inhibition, and attentional bias towards emotional stimuli were related to risky driving behavior (driving errors and driving violations). A total of 117 applicants for taxi driver positions (89% male; M age = 36.59 years, SD = 9.39; age range 24-62 years) participated in the study. Measures included the Ahwaz Aggression Inventory, the Difficulties in Emotion Regulation Questionnaire, the emotional Stroop task, the Go/No-go task, and the Driving Behavior Questionnaire. Correlation and regression analyses showed that aggression and emotional regulation predicted risky driving behavior. Difficulties in emotion regulation, the obstinacy and revengefulness component of aggression, attentional bias toward emotional stimuli, and cognitive inhibition predicted driving errors. Aggression was the only significant predictor of driving violations. In conclusion, aggression and difficulties in regulating emotions may exacerbate risky driving behaviors. Deficits in cognitive inhibition and attentional bias toward negative emotional stimuli can increase driving errors. A predisposition to aggression has a strong effect on making one vulnerable to violation of traffic rules and crashes. Copyright © 2017 Elsevier Ltd. All rights reserved.
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite-geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit-flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
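Gallager's hard-decision bit-flipping decoder mentioned above is compact enough to sketch directly; here it is run on the incidence matrix of the projective plane PG(2,2) (the Fano plane), the smallest instance of the finite-geometry construction. The toy single-error experiment is illustrative only.

```python
import numpy as np

def bit_flip_decode(H, r, max_iter=20):
    """Gallager hard-decision bit flipping: repeatedly flip the bit that
    participates in the largest number of unsatisfied parity checks."""
    x = r.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():          # all parity checks satisfied
            break
        fails = H.T @ syndrome          # unsatisfied-check count per bit
        x[np.argmax(fails)] ^= 1
    return x

# Point-line incidence matrix of the Fano plane: 7 checks x 7 bits,
# row and column weight 3.
H = np.array([[1,1,1,0,0,0,0],
              [1,0,0,1,1,0,0],
              [1,0,0,0,0,1,1],
              [0,1,0,1,0,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,1,0,0,1],
              [0,0,1,0,1,1,0]])
codeword = np.array([0,0,0,1,1,1,1])        # satisfies H @ c % 2 == 0
received = codeword.copy(); received[0] ^= 1  # inject a single bit error
print("decoded:", bit_flip_decode(H, received))  # recovers the codeword
```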
Prediction of BP reactivity to talking using hybrid soft computing approaches.
Kaur, Gurmanik; Arora, Ajat Shatru; Jain, Vijender Kumar
2014-01-01
High blood pressure (BP) is associated with an increased risk of cardiovascular diseases. Therefore, optimal precision in the measurement of BP is appropriate in clinical and research studies. In this work, anthropometric characteristics including age, height, weight, body mass index (BMI), and arm circumference (AC) were used as independent predictor variables for the prediction of BP reactivity to talking. Principal component analysis (PCA) was fused with an artificial neural network (ANN), an adaptive neuro-fuzzy inference system (ANFIS), and a least-squares support vector machine (LS-SVM) model to remove the multicollinearity effect among the anthropometric predictor variables. Statistical tests in terms of the coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) revealed that the PCA-based LS-SVM (PCA-LS-SVM) model produced a more efficient prediction of BP reactivity than the other models. This assessment presents the importance and advantages posed by PCA-fused prediction models for the prediction of biological variables.
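A hedged sketch of the PCA-fused pipeline: decorrelate the anthropometric predictors with PCA, then regress BP reactivity on the leading components. Scikit-learn's RBF-kernel SVR stands in for LS-SVM as a close analogue, and the data are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 200
height = rng.normal(165, 10, n)
weight = rng.normal(70, 12, n)
bmi = weight / (height / 100) ** 2              # collinear with height/weight
age = rng.uniform(20, 70, n)
arm = 0.15 * weight + rng.normal(25, 2, n)      # arm circumference
X = np.column_stack([age, height, weight, bmi, arm])
y = 5 + 0.1 * age + 0.2 * bmi + rng.normal(scale=2, size=n)  # BP reactivity

model = make_pipeline(StandardScaler(), PCA(n_components=3), SVR(kernel="rbf"))
r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
print("cross-validated R^2:", round(r2, 3))
```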
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
Constitutive Modeling of Porcine Liver in Indentation Using 3D Ultrasound Imaging
Jordan, P.; Socrate, S.; Zickler, T.E.; Howe, R.D.
2009-01-01
In this work we present an inverse finite-element modeling framework for constitutive modeling and parameter estimation of soft tissues using full-field volumetric deformation data obtained from 3D ultrasound. The finite-element model is coupled to full-field visual measurements by regularization springs attached at nodal locations. The free ends of the springs are displaced according to the locally estimated tissue motion and the normalized potential energy stored in all springs serves as a measure of model-experiment agreement for material parameter optimization. We demonstrate good accuracy of estimated parameters and consistent convergence properties on synthetically generated data. We present constitutive model selection and parameter estimation for perfused porcine liver in indentation and demonstrate that a quasilinear viscoelastic model with shear modulus relaxation offers good model-experiment agreement in terms of indenter displacement (0.19 mm RMS error) and tissue displacement field (0.97 mm RMS error). PMID:19627823
NASA Technical Reports Server (NTRS)
Mcdonald, M. W.
1982-01-01
A frequency-modulated continuous-wave (FMCW) radar system was developed. The system operates in the 35 gigahertz frequency range and provides millimeter-accuracy range and range-rate measurements. This level of range resolution allows soft docking for the proposed teleoperator maneuvering system (TMS) or other autonomous or robotic space vehicles. Sources of error in the operation of the system which tend to limit its range resolution capabilities are identified. Alternative signal processing techniques are explored, with emphasis on determining the effects of inserting various signal filtering circuits into the system. The identification and elimination of an extraneous low-frequency signal component, created as a result of zero-range immediate reflection of radar energy from the surface of the antenna dish back into the mixer of the system, is described.
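The basic FMCW relationship behind such a system relates beat frequency to range: f_b = 2RB/(cT) for sweep bandwidth B and sweep time T. A small sketch with illustrative parameters (not those of the documented 35 GHz system) follows; millimeter-level accuracy like that reported additionally requires phase-based processing beyond this coarse beat-frequency estimate.

```python
c = 3.0e8      # speed of light, m/s
B = 150e6      # sweep bandwidth, Hz (illustrative)
T = 1e-3       # sweep duration, s (illustrative)

def range_from_beat(f_b):
    """Target range from measured beat frequency: R = c * f_b * T / (2 * B)."""
    return c * f_b * T / (2 * B)

for f_b in (1e3, 10e3, 100e3):          # beat frequencies, Hz
    print(f"f_b = {f_b/1e3:.0f} kHz -> range = {range_from_beat(f_b):.2f} m")

print("coarse range resolution c/(2B):", c / (2 * B), "m")
```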
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
Activation and Protection of Dendritic Cells in the Prostate Cancer Environment
2010-10-01
median survival time (in days) ± standard error of the mean (SEM). Skin graft survived for 11.0±0.7 days in control group and 15.8±1.1 days in the...potentiates allogeneic skin graft rejection and induces syngeneic graft rejection. Transplantation. 1998;65:1436-1446. 6. Jemal A, Siegel R, Xu J, Ward E...injections of ETA receptor inhibitor BQ-123 (for 10 days). Skin graft is soft and viable. 22 P.I.: Georgi Guruli; Award # W81XWH-05-1-0181
Third branchial cleft anomaly presenting as a retropharyngeal abscess.
Huang, R Y; Damrose, E J; Alavi, S; Maceri, D R; Shapiro, N L
2000-08-31
Branchial cleft anomalies are congenital developmental defects that typically present as a soft fluctuant mass or fistulous tract along the anterior border of the sternocleidomastoid muscle. However, branchial anomalies can manifest atypically, presenting diagnostic and therapeutic challenges. Error or delay in diagnosis can lead to complications, recurrences, and even life-threatening emergencies. We describe a case of an infected branchial cleft cyst that progressed to a retropharyngeal abscess in a 5-week-old female patient. The clinical, radiographic, and histologic findings of this rare presentation of branchial cleft cyst are discussed.
Neutron beam irradiation study of workload dependence of SER in a microprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalak, Sarah E; Graves, Todd L; Hong, Ted
It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions, from idle, performing no operations, to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure (MTFIF, defined in Section II) may be different for some of the applications.
Impact of Scaled Technology on Radiation Testing and Hardening
NASA Technical Reports Server (NTRS)
LaBel, Kenneth A.; Cohn, Lewis M.
2005-01-01
This presentation gives a brief overview of some of the radiation challenges facing emerging scaled digital technologies, with implications for using consumer-grade electronics and next-generation hardening schemes. Commercial semiconductor manufacturers are recognizing some of these issues as relevant to terrestrial performance and are looking at means of dealing with soft errors. Thinned oxides have indicated improved TID tolerance in commercial products hardened by "serendipity," which neither guarantees hardness nor indicates whether the trend will continue. The presentation also focuses on the reliability implications of thinned oxides.
Wang, L; Turaka, A; Meyer, J; Spoka, D; Jin, L; Fan, J; Ma, C
2012-06-01
To assess the reliability of soft-tissue alignment by comparing pre- and post-treatment cone-beam CT (CBCT) for image guidance in stereotactic body radiotherapy (SBRT) of lung cancers. Our lung SBRT procedure requires that all patients undergo a 4D CT scan in order to obtain patient-specific target motion information through reconstructed 4D data using the maximum-intensity projection (MIP) algorithm. The internal target volume (ITV) was outlined directly on the MIP images, and a 3-5 mm margin expansion was then applied to the ITV to create the PTV. Conformal treatment planning was performed on the helical images, to which the MIP images were fused. Prior to each treatment, CBCT was used for image guidance by comparison with the simulation CT and for patient relocalization based on the bony anatomy. Any displacement of the patient's bony structure was considered a setup error and was corrected by couch shifts. Theoretically, as the PTV definition included target internal motion, no further shifts other than setup corrections should be needed. However, it is our practice to have treating physicians further check target localization within the PTV. Whenever the shifts based on soft-tissue alignment (that is, target alignment) exceeded a certain value (e.g., 5 mm), a post-treatment CBCT was carried out to ensure that the tissue alignment was reliable, by comparison between pre- and post-treatment CBCT. Pre- and post-treatment CBCT has been performed for 7 patients so far who had shifts beyond 5 mm despite bony alignment. For all patients, post-treatment CBCT confirmed that the visualized target position remained the same as before treatment after adjusting for soft-tissue alignment. For the patient population studied, it is shown that soft-tissue alignment is necessary and reliable in lung SBRT for individual cases. © 2012 American Association of Physicists in Medicine.
Measurement of the refractive index of soft contact lenses during wear.
Varikooty, Jalaiah; Keir, Nancy; Woods, Craig A; Fonn, Desmond
2010-01-01
To determine whether the refractive index (RI) of a soft contact lens can be evaluated using refractometry while the lens remains on the eye and to compare this with more traditional ex vivo RI measurements. A slitlamp apparatus was modified to incorporate a customized Atago hand refractometer. With a double-masked study design, nine adapted symptomatic soft contact lens wearers wore a contact lens in each eye (lotrafilcon B and etafilcon A) in a randomized order. In vivo RI was determined from the relative Brix scale measurements immediately after lens insertion and after 1 and 10 hr of lens wear. Ex vivo refractometry was performed after 10 hr of lens wear for comparison. Means +/- standard errors of the means are reported. In vivo RI values at baseline were 1.422 +/- 0.0004 (lotrafilcon B) and 1.405 +/- 0.0021 (etafilcon A); after 1 hr of lens wear, values were 1.423 +/- 0.0006 and 1.408 +/- 0.0007, respectively; and after 10 hr of lens wear, values were 1.424 +/- 0.0004 and 1.411 +/- 0.0010, respectively. Ex vivo RI values at the end of the 10 hr wearing period were 1.424 +/- 0.0003 (lotrafilcon B) and 1.412 +/- 0.0017 (etafilcon A). The change in in vivo RI across the day was statistically significant for the etafilcon A lens (repeated-measures analysis of variance, P<0.01) but not for the lotrafilcon B lens (P>0.05). This novel adaptation of refractometry was able to measure the RI of soft contact lenses during wear (without lens removal). End of day RI measurements using in vivo and ex vivo refractometry were comparable with each other. Future work is required to determine whether this in vivo method can improve our understanding of the relationships between soft contact lens RI, hydration, on-eye lens performance, and symptomology.
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication with the potential to cause harm. Three organizations--the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), the American Society of Health-System Pharmacists, and the National Advisory Group--have published guidelines for ordering, transcribing, compounding, and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, and administration guidelines, and the error rate, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated into the CPOE program, resulting in practices that were compliant with the A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with published literature on error and harm rates and cost reductions to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors/84,503 PN prescriptions, or 0.27%, compared with national data in which 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that the meaningful cost reduction and the lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) can be ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.
Sahay, Ashlyn; Hutchinson, Marie; East, Leah
2015-05-01
Despite the growing awareness of the benefits of positive workplace climates, unsupportive and disruptive workplace behaviours are widespread in health care organisations. Recent graduate nurses, who are often new to a workplace, are particularly vulnerable in unsupportive climates, and are also recognised to be at higher risk for medication errors. Investigate the association between workplace supports and relationships and safe medication practice among graduate nurses. Exploratory study using quantitative survey with a convenience sample of 58 nursing graduates in two Australian States. Online survey focused on graduates' self-reported medication errors, safe medication practice and the nature of workplace supports and relationships. Spearman's correlations identified that unsupportive workplace relationships were inversely related to graduate nurse medication errors and erosion of safe medication practices, while supportive Nurse Unit Manager and supportive work team relationships positively influenced safe medication practice among graduates. Workplace supports and relationships are potentially both the cause and solution to graduate nurse medication errors and safe medication practices. The findings develop further understanding about the impact of unsupportive and disruptive behaviours on patient safety and draw attention to the importance of undergraduate and continuing education strategies that promote positive workplace behaviours and graduate resilience. Copyright © 2015 Elsevier Ltd. All rights reserved.
Han, Georges; Helm, Jonathan; Iucha, Cornelia; Zahn-Waxler, Carolyn; Hastings, Paul D.; Klimes-Dougan, Bonnie
2015-01-01
Background The central objective of the current study was to evaluate how executive functions (EF), and specifically cognitive flexibility, were concurrently and predictively associated with anxiety and depressive symptoms in adolescence. Method Adolescents (N = 220) and their parents participated in this longitudinal investigation. Adolescents' EF was assessed by the Wisconsin Card Sorting Test (WCST) during the initial assessment, and symptoms of depressive and anxiety disorders were reported by mothers and youths concurrently and two years later. Results Correlational analyses suggested that youths who made more total errors (TE), including both perseverative errors (PE) and non-perseverative errors (NPE), concurrently exhibited significantly more depressive symptoms. Adolescents who made more TE and those who made more NPE tended to have more anxiety symptoms two years later. SEM analyses accounting for key explanatory variables (e.g., IQ, disruptive behavior disorders, and attention deficit hyperactivity disorder) showed that TE was concurrently associated with parent reports of adolescent depressive symptoms. Discussion The results suggest internalizing psychopathology is associated with global (TE) and nonspecific (NPE) EF difficulties, but not robustly associated with cognitive inflexibility (PE). Future research with the WCST should consider different sources of errors, which are posited to reflect divergent underlying neural mechanisms conferring differential vulnerability for emerging mental health problems. PMID:26042358
[Prospective assessment of medication errors in critically ill patients in a university hospital].
Salazar L, Nicole; Jirón A, Marcela; Escobar O, Leslie; Tobar, Eduardo; Romero, Carlos
2011-11-01
Critically ill patients are especially vulnerable to medication errors (ME) due to their severe clinical condition and the complexities of their management. The aim of this study was to determine the frequency and characteristics of ME and to identify shortcomings in the processes of medication management in an Intensive Care Unit. Over a 3-month period, a prospective, randomized observational study was carried out in the ICU of a university hospital. Every step of patients' medication management (prescription, transcription, dispensation, preparation and administration) was evaluated by an external trained professional. The steps with the highest frequency of ME, and the therapeutic groups involved, were identified. Medication errors were classified according to the National Coordinating Council for Medication Error Reporting and Prevention. In 52 of 124 patients evaluated, 66 ME were found among 194 drugs prescribed. For 34% of prescribed drugs, there was at least 1 ME during their use. Half of the ME occurred during medication administration, mainly due to problems with infusion rates and schedule times. Antibacterial drugs had the highest rate of ME. We found a 34% rate of ME per drug prescribed, which is in concordance with international reports. The identification of the steps most prone to ME in the ICU will allow the implementation of an intervention program to improve the quality and safety of medication management.
Assessing uncertainty in high-resolution spatial climate data across the US Northeast.
Bishop, Daniel A; Beier, Colin M
2013-01-01
Local and regional-scale knowledge of climate change is needed to model ecosystem responses, assess vulnerabilities and devise effective adaptation strategies. High-resolution gridded historical climate (GHC) products address this need, but come with multiple sources of uncertainty that are typically not well understood by data users. To better understand this uncertainty in a region with a complex climatology, we conducted a ground-truthing analysis of two 4 km GHC temperature products (PRISM and NRCC) for the US Northeast, using 51 Cooperative Observer Program (COOP) weather stations utilized by both GHC products. We estimated GHC prediction error for monthly temperature means and trends (1980-2009) across the US Northeast and evaluated landscape effects (e.g., elevation, distance from coast) on those prediction errors. Results indicated that station-based prediction errors for the two GHC products were similar in magnitude, but on average the NRCC product predicted cooler than observed temperature means and trends, while PRISM was cooler for means and warmer for trends. We found no evidence of systematic sources of uncertainty across the US Northeast, although errors were largest at high elevations. Errors in the coarse-scale (4 km) digital elevation models used by each product were correlated with temperature prediction errors, more so for NRCC than PRISM. In summary, uncertainty in spatial climate data has many sources, and we recommend that data users develop an understanding of uncertainty at the appropriate scales for their purposes. To this end, we demonstrate a simple method for utilizing weather stations to assess local GHC uncertainty and inform decisions among alternative GHC products.
Cockpit Interruptions and Distractions: Effective Management Requires a Careful Balancing Act
NASA Technical Reports Server (NTRS)
Dismukes, R. K.; Young, Grant E.; Sumwalt, Robert L., III; Null, Cynthia H. (Technical Monitor)
1998-01-01
Managing several tasks concurrently is an everyday part of cockpit operations. For the most part, crews handle concurrent task demands efficiently, yet crew preoccupation with one task to the detriment of performing other tasks is one of the more common forms of error in the cockpit. Most pilots are familiar with the December 1972 L1011 crash that occurred when the crew became preoccupied with a landing gear light malfunction and failed to notice that someone had inadvertently bumped off the autopilot. More recently a DC-9 landed gear-up in Houston when the crew, preoccupied with an unstabilized approach, failed to recognize that the gear was not down because they had not switched the hydraulic pumps to high. We have recently started a research project to study why crews are vulnerable to these sorts of errors. As part of that project we reviewed NTSB reports of accidents attributed to crew error; we concluded that nearly half of these accidents involved lapses of attention associated with interruptions, distractions, or preoccupation with one task to the exclusion of another task. We have also analyzed 107 ASRS reports involving competing tasks; we present here some of our conclusions from those ASRS reports. These 107 reports involved 21 different types of routine tasks crews neglected at a critical moment while attending to another task. Sixty-nine percent of the neglected tasks involved either failure to monitor the current status or position of the aircraft or failure to monitor the actions of the pilot flying or taxiing. Thirty-four different types of competing activities distracted or preoccupied the pilots. Ninety percent of these competing activities fell into one of four broad categories: communication (e.g., discussion among crew or radio communication), heads-down work (e.g., programming the FMS or reviewing approach plates), responding to abnormals, or searching for VMC traffic. We will discuss examples of each of these four categories and suggest things crews can do to reduce their vulnerability to these and similar situations.
Groenendijk, Piet; Heinen, Marius; Klammler, Gernot; Fank, Johann; Kupfersberger, Hans; Pisinaras, Vassilios; Gemitzi, Alexandra; Peña-Haro, Salvador; García-Prats, Alberto; Pulido-Velazquez, Manuel; Perego, Alessia; Acutis, Marco; Trevisan, Marco
2014-11-15
The agricultural sector faces the challenge of ensuring food security without an excessive burden on the environment. Simulation models provide excellent instruments for researchers to gain more insight into relevant processes and best agricultural practices, and provide tools for planners to support decision making. The extent to which models are capable of reliable extrapolation and prediction is important for exploring new farming systems or assessing the impacts of future land and climate changes. A performance assessment was conducted by testing six detailed state-of-the-art models for the simulation of nitrate leaching (ARMOSA, COUPMODEL, DAISY, EPIC, SIMWASER/STOTRASIM, SWAP/ANIMO) against lysimeter data from the Wagna experimental field station in Eastern Austria, where the soil is highly vulnerable to nitrate leaching. Three consecutive phases were distinguished to gain insight into the predictive power of the models: 1) a blind test for 2005-2008 in which only soil hydraulic characteristics, meteorological data and information about the agricultural management were accessible; 2) a calibration for the same period in which essential information on field observations was additionally available to the modellers; and 3) a validation for 2009-2011 with the same type of data available as for the blind test. A set of statistical metrics (mean absolute error, root mean squared error, index of agreement, model efficiency, root relative squared error, Pearson's linear correlation coefficient) was applied for testing the results and comparing the models. None of the models performed well on all of the statistical metrics. Models designed for nitrate leaching in high-input farming systems had difficulties in accurately predicting leaching in low-input farming systems that are strongly influenced by the retention of nitrogen in catch crops and nitrogen fixation by legumes. An accurate calibration does not guarantee a good predictive power of the model. Nevertheless, all models were able to identify years and crops with high and low leaching rates. Copyright © 2014 Elsevier B.V. All rights reserved.
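The metric suite named above is standard; purely as an illustration, a minimal sketch of how such a score set could be computed (NumPy; the function and array names are ours, not the study's code):

```python
import numpy as np

def evaluate(obs: np.ndarray, sim: np.ndarray) -> dict:
    """Common goodness-of-fit metrics for simulated vs. observed leaching."""
    err = sim - obs
    mae = np.mean(np.abs(err))                      # mean absolute error
    rmse = np.sqrt(np.mean(err ** 2))               # root mean squared error
    var_obs = np.sum((obs - obs.mean()) ** 2)
    nse = 1.0 - np.sum(err ** 2) / var_obs          # model efficiency (Nash-Sutcliffe form)
    rrse = np.sqrt(np.sum(err ** 2) / var_obs)      # root relative squared error
    d = 1.0 - np.sum(err ** 2) / np.sum(            # index of agreement (Willmott)
        (np.abs(sim - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    r = np.corrcoef(obs, sim)[0, 1]                 # Pearson correlation coefficient
    return {"MAE": mae, "RMSE": rmse, "NSE": nse, "RRSE": rrse, "d": d, "r": r}
```

Model efficiency is written here in its Nash-Sutcliffe form and the index of agreement in Willmott's form; these are the usual definitions of the metrics listed, though the paper's exact formulations are not quoted.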
Female pelvic synthetic CT generation based on joint intensity and shape analysis
NASA Astrophysics Data System (ADS)
Liu, Lianli; Jolly, Shruti; Cao, Yue; Vineberg, Karen; Fessler, Jeffrey A.; Balter, James M.
2017-04-01
Using MRI for radiotherapy treatment planning and image guidance is appealing as it provides superior soft tissue information over CT scans and avoids possible systematic errors introduced by aligning MR to CT images. This study presents a method that generates Synthetic CT (MRCT) volumes by performing probabilistic tissue classification of voxels from MRI data using a single imaging sequence (T1 Dixon). The intensity overlap between different tissues on MR images, a major challenge for voxel-based MRCT generation methods, is addressed by adding bone shape information to an intensity-based classification scheme. A simple pelvic bone shape model, built from principal component analysis of pelvis shape from 30 CT image volumes, is fitted to the MR volumes. The shape model generates a rough bone mask that excludes air and covers bone along with some surrounding soft tissues. Air regions are identified and masked out from the tissue classification process by intensity thresholding outside the bone mask. A regularization term is added to the fuzzy c-means classification scheme that constrains voxels outside the bone mask from being assigned memberships in the bone class. MRCT image volumes are generated by multiplying the probability of each voxel being represented in each class with assigned attenuation values of the corresponding class and summing the result across all classes. The MRCT images presented intensity distributions similar to CT images with a mean absolute error of 13.7 HU for muscle, 15.9 HU for fat, 49.1 HU for intra-pelvic soft tissues, 129.1 HU for marrow and 274.4 HU for bony tissues across 9 patients. Volumetric modulated arc therapy (VMAT) plans were optimized using MRCT-derived electron densities, and doses were recalculated using corresponding CT-derived density grids. Dose differences to planning target volumes were small with mean/standard deviation of 0.21/0.42 Gy for D0.5cc and 0.29/0.33 Gy for D99%. The results demonstrate the accuracy of the method and its potential in supporting MRI only radiotherapy treatment planning.
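The attenuation-assignment step described above reduces to a probability-weighted sum per voxel. A minimal sketch under assumed class attenuation values (the memberships and HU numbers below are illustrative, not the paper's):

```python
import numpy as np

# Fuzzy memberships per voxel: shape (n_voxels, n_classes), rows sum to 1.
memberships = np.array([[0.7, 0.2, 0.1, 0.0],
                        [0.1, 0.1, 0.3, 0.5]])
# Assumed attenuation value (HU) assigned to each tissue class:
# air, fat, soft tissue, bone (illustrative values only).
class_hu = np.array([-1000.0, -100.0, 40.0, 700.0])

# Synthetic CT: probability-weighted sum of class attenuations per voxel.
mrct = memberships @ class_hu
print(mrct)  # one HU value per voxel
```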
Charrier, Guillaume; Chuine, Isabelle; Bonhomme, Marc; Améglio, Thierry
2018-05-01
Frost damage develops when exposure exceeds frost hardiness. Frost risk assessment therefore needs dynamic simulation of frost hardiness using temperature and photoperiod in interaction with developmental stage. Two models, including or not the effect of photoperiod, were calibrated using five years of frost hardiness monitoring (2007-2012), in two locations (low and high elevation) for three walnut genotypes with contrasted phenology and maximum hardiness (Juglans regia cv Franquette, J. regia × nigra 'Early' and 'Late'). The photothermal model predicted more accurate values for all genotypes (efficiency = 0.879; root mean squared error of prediction (RMSEP) = 2.55 °C) than the thermal model (efficiency = 0.801; RMSEP = 3.24 °C). Predicted frost damage was strongly correlated with the minimum temperature of the freezing events (ρ = -0.983) rather than with actual frost hardiness (ρ = -0.515) or the ratio of phenological stage completion (ρ = 0.336). Higher frost risks are consequently predicted during winter at high elevation, whereas spring is only risky at low elevation in early genotypes exhibiting a faster dehardening rate. However, early frost damage, although smaller, may negatively affect fruit production in the subsequent year (R² = 0.381, P = 0.057). These results highlight the interacting pattern between frost exposure and vulnerability at different scales and the necessity of intra-organ studies to understand the time course of frost vulnerability in flower buds over the winter. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Handley, Sean J.; Willis, Trevor J.; Cole, Russell G.; Bradley, Anna; Cairney, Daniel J.; Brown, Stephen N.; Carter, Megan E.
2014-02-01
Trawling and dredge fisheries remove vulnerable fauna, homogenise sediments and assemblages, and break down biogenic habitats, but the full extent of these effects can be difficult to quantify in the absence of adequate control sites. Our study utilised rare control sites containing biogenic habitat, the Separation Point exclusion zone, formally protected for 28 years, as the basis for assessing the degree of change experienced by adjacent areas subject to benthic fishing. Sidescan sonar surveys verified that intensive trawling and dredging occurred in areas adjacent to, but not inside, the exclusion area. We compared sediment composition, biogenic cover, macrofaunal assemblages, biomass, and productivity of the benthos inside and outside the exclusion zone. Disturbed sites were dominated by fine mud, with little or no shell-gravel, a reduced number of species, and a loss of large-bodied animals, with concomitant reductions in biomass and productivity. At protected sites, large, rarer molluscs were more abundant and contributed the most to size-based estimates of productivity and biomass. Functional changes in fished assemblages were consistent with previously reported relative increases in scavengers, predators and deposit feeders at the expense of filter feeders and a grazer. We propose that the colonisation of biogenic species in protected sites was contingent on the presence of shell-gravel atop these soft sediments. The process of sediment homogenisation by bottom fishing and the elimination of shell-gravels from surficial sediments appeared to have occurred over decades - a ‘shifting baseline’. Therefore, benchmarking historical sediment structure at control sites like the Separation Point exclusion zone is necessary to determine the full extent of physical habitat change wrought by contact gears on sheltered soft sediment habitats, to better underpin appropriate conservation, restoration or fisheries management goals.
Computer vision and soft computing for automatic skull-face overlay in craniofacial superimposition.
Campomanes-Álvarez, B Rosario; Ibáñez, O; Navarro, F; Alemán, I; Botella, M; Damas, S; Cordón, O
2014-12-01
Craniofacial superimposition can provide evidence to support that some human skeletal remains do or do not belong to a missing person. It involves the process of overlaying a skull with a number of ante mortem images of an individual and the analysis of their morphological correspondence. Within the craniofacial superimposition process, the skull-face overlay stage focuses solely on achieving the best possible overlay of the skull and a single ante mortem image of the suspect. Although craniofacial superimposition has been in use for over a century, skull-face overlay is still applied by means of a trial-and-error approach without an automatic method. Practitioners finish the process once they consider that a good enough overlay has been attained. Hence, skull-face overlay is a very challenging, subjective, error-prone, and time-consuming part of the whole process. Although an objective numerical assessment of overlay quality has not yet been achieved, computer vision and soft computing are powerful tools for automating the task, dramatically reducing the time taken by the expert and obtaining an unbiased overlay result. In this manuscript, we justify and analyze the use of these techniques to properly model the skull-face overlay problem. We also present the automatic technical procedure we have developed using these computational methods and show the four overlays obtained in two craniofacial superimposition cases. This automatic procedure can thus be considered as a tool to aid forensic anthropologists in performing the skull-face overlay, automating the most tedious task within craniofacial superimposition and reducing its subjectivity. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Voisin, Guillaume; Mottez, Fabrice; Bonazzola, Silvano
2018-02-01
Electron-positron pair production by collision of photons is investigated in view of application to pulsar physics. We compute the absorption rate of individual gamma-ray photons by an arbitrary anisotropic distribution of softer photons, and the energy and angular spectrum of the outgoing leptons. We work analytically within the approximation that 1 ≫ mc²/E > ε/E, with E and ε the gamma-ray and soft-photon maximum energies and mc² the electron rest-mass energy. We give results at leading order in these small parameters. For practical purposes, we provide expressions in the form of Laurent series which give correct reaction rates in the isotropic case within an average error of ~7 per cent. We apply this formalism to gamma-rays flying downward or upward from a hot neutron star thermally radiating at a uniform temperature of 10⁶ K. Other temperatures can be easily deduced using the relevant scaling laws. We find differences in absorption between these two extreme directions of almost two orders of magnitude, much larger than our error estimate. The magnetosphere appears completely opaque to downward gamma-rays, while there are up to ~10 per cent chances of absorbing an upward gamma-ray. We provide energy and angular spectra for both upward and downward gamma-rays. Energy spectra show a typical double peak, with larger separation at larger gamma-ray energies. Angular spectra are very narrow, with an opening angle ranging from 10⁻³ to 10⁻⁷ radians with increasing gamma-ray energies.
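For orientation, the standard two-photon pair-production threshold that underlies the quoted energy ordering can be written as follows (a textbook kinematic relation, not quoted from the paper):

```latex
% Photons of energies E and \epsilon colliding at angle \theta can produce
% an e^+e^- pair only above threshold:
E \,\epsilon\, (1 - \cos\theta) \;\ge\; 2\,(m c^2)^2 ,
\qquad \text{in the paper's regime } 1 \gg \frac{m c^2}{E} > \frac{\epsilon}{E}.
```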
Soft-core processor study for node-based architectures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Houten, Jonathan Roger; Jarosz, Jason P.; Welch, Benjamin James
2008-09-01
Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs, size, weight, and power and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA) based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary with varying degrees of mission-specific performance requirements on these nodes. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems--two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric. The second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations. FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. Processor caches carry a penalty; cache error mitigation is necessary when operating in a radiation environment.
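Dhrystone and Whetstone are C benchmarks; purely to illustrate the fixed-point versus floating-point distinction and the loop-count/timing methodology described, here is a toy harness (a Python sketch of ours, unrelated to the study's actual benchmark code):

```python
import time

def fixed_point_kernel(loops: int) -> int:
    """Dhrystone-like toy: integer arithmetic only."""
    acc = 0
    for i in range(loops):
        acc = (acc + i * 3) % 65_536
    return acc

def floating_point_kernel(loops: int) -> float:
    """Whetstone-like toy: floating-point arithmetic only."""
    acc = 0.0
    for i in range(loops):
        acc += (i * 0.5) ** 0.5
    return acc

# Time each kernel at a fixed loop count, as in a benchmark trial.
for kernel in (fixed_point_kernel, floating_point_kernel):
    t0 = time.perf_counter()
    kernel(1_000_000)
    print(kernel.__name__, time.perf_counter() - t0, "s")
```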
Cilla, M; Pérez-Rey, I; Martínez, M A; Peña, Estefania; Martínez, Javier
2018-06-23
Motivated by the search for new strategies for fitting a material model, a new approach is explored in the present work. The use of machine learning techniques such as support vector machines for regression, bagged decision trees and artificial neural networks is proposed for solving the parameter identification of constitutive laws for soft biological tissues. First, the mathematical tools were trained with analytical uniaxial data (circumferential and longitudinal directions) as inputs and the corresponding material parameters of the Gasser-Ogden-Holzapfel strain energy function (SEF) as outputs. The training and test errors show that the training process was highly effective in finding correlations between inputs and outputs; moreover, the correlation coefficients were very close to 1. Second, the tool was validated with unseen observations of analytical circumferential and longitudinal uniaxial data. The results show an excellent agreement between the predictions of the material parameters of the SEF and the analytical curves. Finally, data from real circumferential and longitudinal uniaxial tests on different cardiovascular tissues were fitted, and the material model of these tissues was thereby predicted. We found that the method was able to consistently identify model parameters, and we believe that the use of these numerical tools could lead to an improvement in the characterization of soft biological tissues. This article is protected by copyright. All rights reserved.
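A minimal sketch of this regression setup (scikit-learn; the placeholder data and names are ours — in the paper the inputs were analytical circumferential and longitudinal uniaxial curves and the outputs the SEF material parameters):

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for analytical uniaxial responses: each row concatenates sampled
# circumferential and longitudinal stress values; each target row holds the
# material parameters of a strain energy function (3 dummy parameters here).
X = rng.random((500, 40))   # placeholder curves (500 samples, 40 stress points)
y = rng.random((500, 3))    # placeholder parameters (e.g. mu, k1, k2)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MultiOutputRegressor(BaggingRegressor(random_state=0))  # bagged trees
model.fit(X_tr, y_tr)
params_hat = model.predict(X_te)  # predicted material parameters per curve
```

Bagged decision trees are shown; the same interface accepts sklearn.svm.SVR or a neural-network regressor, the other two techniques named in the abstract.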
NASA Astrophysics Data System (ADS)
Razuc, Mariela; Garrido, Mariano; Caro, Yamile S.; Teglia, Carla M.; Goicoechea, Héctor C.; Fernández Band, Beatriz S.
2013-04-01
A simple and fast on-line spectrophotometric method combined with hybrid hard- and soft-modelling multivariate curve resolution (HS-MCR) was proposed for monitoring the photodegradation reaction of ciprofloxacin under UV radiation. The conditions studied attempt to emulate the effect of sunlight on these antibiotics, which may eventually be present in the environment. The continuous flow system made it possible to study ciprofloxacin degradation at different pH values in near real time, avoiding errors that could arise from typical batch monitoring of the reaction. On the basis of the concentration profiles obtained by a previous pure soft-modelling approach, reaction pathways were proposed for the parent compound and its photoproducts at different pH values. These kinetic models were used as a constraint in the HS-MCR analysis. The kinetic profiles and the corresponding pure response profiles (UV-Vis spectra) of ciprofloxacin and its main degradation products were recovered after applying HS-MCR analysis to the spectra recorded throughout the reaction. The observed behavior showed good agreement with the photodegradation studies reported in the bibliography. In addition, the photodegradation reaction was studied by high-performance liquid chromatography coupled with a UV-Vis diode array detector (HPLC-DAD). The spectra recorded during the chromatographic analysis presented a good correlation with those recovered by the UV-Vis/HS-MCR method.
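As an illustration of the kind of kinetic constraint used in HS-MCR, a minimal first-order photodegradation step A → B (an illustrative model of ours; the actual pathways identified for ciprofloxacin are pH-dependent and not reproduced here):

```python
import numpy as np

def first_order_profiles(t: np.ndarray, k: float, c0: float = 1.0):
    """Concentration profiles for A -> B with first-order rate constant k."""
    ca = c0 * np.exp(-k * t)   # parent compound decays exponentially
    cb = c0 - ca               # photoproduct accumulates by mass balance
    return ca, cb

t = np.linspace(0.0, 60.0, 200)   # irradiation time (e.g. minutes)
ca, cb = first_order_profiles(t, k=0.05)
```

In HS-MCR, profiles of this form replace the freely varying concentration profiles of pure soft modelling, forcing the resolved components to obey the postulated pathway.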
Kujawa, Autumn; Weinberg, Anna; Bunford, Nora; Fitzgerald, Kate D.; Hanna, Gregory L.; Monk, Christopher S.; Kennedy, Amy E.; Klumpp, Heide; Hajcak, Greg; Phan, K. Luan
2016-01-01
Increased error monitoring, as measured by the error-related negativity (ERN), has been shown to persist after treatment for obsessive-compulsive disorder in youth and adults; however, no previous studies have examined the ERN following treatment for related anxiety disorders. We used a flanker task to elicit the ERN in 28 youth and young adults (8–26 years old) with primary diagnoses of generalized anxiety disorder (GAD) or social anxiety disorder (SAD) and 35 healthy controls. Patients were assessed before and after treatment with cognitive-behavioral therapy (CBT) or selective serotonin reuptake inhibitors (SSRI), and healthy controls were assessed at a comparable interval. The ERN increased across assessments in the combined sample. Patients with SAD exhibited an enhanced ERN relative to healthy controls prior to and following treatment, even when analyses were limited to SAD patients who responded to treatment. Patients with GAD did not significantly differ from healthy controls at either assessment. Results provide preliminary evidence that enhanced error monitoring persists following treatment for SAD in youth and young adults, and support conceptualizations of increased error monitoring as a trait-like vulnerability that may contribute to risk for recurrence and impaired functioning later in life. Future work is needed to further evaluate the ERN in GAD across development, including whether an enhanced ERN develops in adulthood or is most apparent when worries focus on internal sources of threat. PMID:27495356
Ketamine Effects on Memory Reconsolidation Favor a Learning Model of Delusions
Gardner, Jennifer M.; Piggot, Jennifer S.; Turner, Danielle C.; Everitt, Jessica C.; Arana, Fernando Sergio; Morgan, Hannah L.; Milton, Amy L.; Lee, Jonathan L.; Aitken, Michael R. F.; Dickinson, Anthony; Everitt, Barry J.; Absalom, Anthony R.; Adapa, Ram; Subramanian, Naresh; Taylor, Jane R.; Krystal, John H.; Fletcher, Paul C.
2013-01-01
Delusions are the persistent and often bizarre beliefs that characterise psychosis. Previous studies have suggested that their emergence may be explained by disturbances in prediction-error-dependent learning. Here we set up complementary studies to examine whether such a disturbance also modulates memory reconsolidation and hence explains their remarkable persistence. First, we quantified individual brain responses to prediction error in a causal learning task in 18 human subjects (8 female). Next, a placebo-controlled within-subjects study of the impact of ketamine was conducted with the same individuals. We determined the influence of this NMDA receptor antagonist (previously shown to induce aberrant prediction error signals and lead to transient alterations in perception and belief) on the evolution of a fear memory over a 72-hour period: subjects initially underwent Pavlovian fear conditioning; 24 hours later, during ketamine or placebo administration, the conditioned stimulus (CS) was presented once, without reinforcement; memory strength was then tested again 24 hours later. Re-presentation of the CS under ketamine led to a stronger subsequent memory than under placebo. Moreover, the degree of strengthening correlated with individual vulnerability to ketamine's psychotogenic effects and with the prediction error brain signal. This finding was partially replicated in an independent sample with an appetitive learning procedure (in 8 human subjects, 4 female). These results suggest a link between altered prediction error, memory strength and psychosis. They point to a core disruption that may explain not only the emergence of delusional beliefs but also their persistence. PMID:23776445
No Substitute for Going to the Field: Correcting Lidar DEMs in Salt Marshes
NASA Astrophysics Data System (ADS)
Renken, K.; Morris, J. T.; Lynch, J.; Bayley, H.; Neil, A.; Rasmussen, S.; Tyrrell, M.; Tanis, M.
2016-12-01
Models that forecast the response of salt marshes to current and future trends in sea level rise increasingly are used to guide management of these vulnerable ecosystems. Lidar-derived DEMs serve as the foundation for modeling landform change. However, caution is advised when using these DEMs as the starting point for models of salt marsh evolution. While broad vegetation class (i.e., young forest, old forest, grasslands, desert, etc.) has proven to be a significant predictor of vertical displacement error in terrestrial environments, differentiating error among different species or community types within the same ecosystem has received less attention. Salt marshes are dominated by monocultures of grass species and thus are an ideal environment to examine the within-species effect on lidar DEM error. We analyzed error of lidar DEMs using elevations from real-time kinematic (RTK) surveys in salt marshes in multiple national parks and wildlife refuge areas from the mouth of the Chesapeake Bay to Massachusetts. Error of the lidar DEMs was sometimes large, on the order of 0.25 m, and varied significantly between sites because vegetation cover varies seasonally and lidar data were not always collected in the same season for each park. Vegetation cover and composition were used to explain differences between RTK elevations and lidar DEMs. This research underscores the importance of collecting RTK elevation data and vegetation cover data coincident with lidar data to produce correction factors specific to individual salt marsh sites.
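A minimal sketch of the kind of site-specific correction the abstract argues for (our illustration; the elevations and class names below are made up): average the lidar-minus-RTK residual within each vegetation class, then subtract that bias from the DEM.

```python
import numpy as np

# Paired observations at RTK survey points (illustrative values, metres).
rtk_z = np.array([0.81, 0.84, 0.62, 0.65])     # RTK ground elevation
lidar_z = np.array([1.02, 1.08, 0.70, 0.74])   # lidar DEM elevation at same points
veg = np.array(["spartina", "spartina", "mudflat", "mudflat"])

# Per-class vertical bias of the DEM (positive = DEM reads too high,
# e.g. because lidar returns come off the vegetation canopy, not the ground).
corrections = {v: float(np.mean(lidar_z[veg == v] - rtk_z[veg == v]))
               for v in np.unique(veg)}
print(corrections)   # {'mudflat': 0.085, 'spartina': 0.225}
```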
Deferm, Julie T; Schreurs, Ruud; Baan, Frank; Bruggink, Robin; Merkx, Matthijs A W; Xi, Tong; Bergé, Stefaan J; Maal, Thomas J J
2018-04-01
The purpose of this study was to assess the feasibility of 3D intraoral scanning for documentation of palatal soft tissue by evaluating the accuracy of shape, color, and curvature. Intraoral scans of ten participants' upper dentition and palate were acquired with the TRIOS® 3D intraoral scanner by two observers. Conventional impressions were taken and digitized as a gold standard. The resulting surface models were aligned using an Iterative Closest Point approach. The absolute distance measurements between the intraoral models and the digitized impressions were used to quantify the trueness and precision of intraoral scanning. The mean color of the palatal soft tissue was extracted in HSV (hue, saturation, value) format to establish the color precision. Finally, the mean curvature of the surface models was calculated and used as a measure of surface irregularity. The mean average distance error between the conventional impression models and the intraoral models was 0.02 ± 0.07 mm (p = 0.30). The mean interobserver color difference was -0.08 ± 1.49° (p = 0.864), 0.28 ± 0.78% (p = 0.286), and 0.30 ± 1.14% (p = 0.426) for hue, saturation, and value, respectively. The interobserver differences for overall and maximum surface irregularity were 0.01 ± 0.03 and 0.00 ± 0.05 mm. This study supports the hypothesis that intraoral scanning can provide 3D documentation of palatal soft tissue in terms of shape, color, and curvature. An intraoral scanner can be an objective tool, adjunctive to the clinical examination of the palatal tissue.
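Once the two surface models are aligned, the reported distance error reduces to nearest-neighbour distances between point sets; a minimal sketch (SciPy; the point arrays are assumed to be already ICP-aligned, and the function name is ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_surface_error(scan_pts: np.ndarray, reference_pts: np.ndarray) -> float:
    """Mean nearest-neighbour distance (mm) from scan points to the reference."""
    tree = cKDTree(reference_pts)          # index the gold-standard surface points
    distances, _ = tree.query(scan_pts)    # closest reference point per scan point
    return float(distances.mean())

# Toy usage with two small (n x 3) point clouds in mm.
reference = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
scan = reference + np.array([0.0, 0.0, 0.02])   # scan offset by 0.02 mm in z
print(mean_surface_error(scan, reference))       # ~0.02
```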
Eyewitness identification evidence and innocence risk.
Clark, Steven E; Godfrey, Ryan D
2009-02-01
It is well known that the frailties of human memory and vulnerability to suggestion lead to eyewitness identification errors. However, variations in different aspects of the eyewitnessing conditions produce different kinds of errors that are related to wrongful convictions in very different ways. We present a review of the eyewitness identification literature, organized around underlying cognitive mechanisms, memory, similarity, and decision processes, assessing the effects on both correct and mistaken identification. In addition, we calculate a conditional probability we call innocence risk, which is the probability that the suspect is innocent, given that the suspect was identified. Assessment of innocence risk is critical to the theoretical development of eyewitness identification research, as well as to legal decision making and policy evaluation. Our review shows a complex relationship between misidentification and innocence risk, sheds light on some areas of controversy, and suggests that some issues thought to be resolved are in need of additional research.
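Innocence risk, as defined above, is a direct application of Bayes' rule; a minimal sketch with purely hypothetical rates (the numbers below are illustrative and do not come from the review):

```python
def innocence_risk(p_innocent: float, p_id_given_innocent: float,
                   p_id_given_guilty: float) -> float:
    """P(suspect innocent | suspect identified), by Bayes' rule."""
    p_guilty = 1.0 - p_innocent
    p_identified = (p_id_given_innocent * p_innocent
                    + p_id_given_guilty * p_guilty)
    return p_id_given_innocent * p_innocent / p_identified

# Hypothetical: 20% of lineups contain an innocent suspect,
# mistaken-identification rate 0.10, correct-identification rate 0.55.
print(innocence_risk(0.20, 0.10, 0.55))  # ~0.043
```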
Multiple Embedded Processors for Fault-Tolerant Computing
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy
2005-01-01
A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.
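The two-core-plus-comparator scheme can be illustrated in miniature (a behavioural sketch in Python, not HDL; with only two cores a mismatch can be detected but not corrected, which matches the prototype's error-detection-only design):

```python
def lockstep_step(core_a_output: int, core_b_output: int) -> int:
    """Compare redundant core outputs; raise on mismatch (detected upset)."""
    if core_a_output != core_b_output:
        raise RuntimeError("single-event upset detected: core outputs disagree")
    return core_a_output

# Matching outputs pass through; a bit flip in either core is caught.
print(lockstep_step(0x2A, 0x2A))   # 42
# lockstep_step(0x2A, 0x2B)        # would raise: core outputs disagree
```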
Computing the apparent centroid of radar targets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.E.
1996-12-31
A high-frequency multibounce radar scattering code was used as a simulation platform for demonstrating an algorithm to compute the apparent radar centroid (ARC) of specific radar targets. To illustrate this simulation process, several target models were used. Simulation results for a sphere model were used to determine the errors of approximation associated with the simulation, verifying the process. The severity of glint-induced tracking errors was also illustrated using a model of an F-15 aircraft. It was shown, in a deterministic manner, that the ARC of a target can fall well outside its physical extent. Finally, the apparent radar centroid simulation based on a ray casting procedure is well suited for use on most massively parallel computing platforms and could lead to the development of a near real-time radar tracking simulation for applications such as endgame fuzing, survivability, and vulnerability analyses using specific radar targets and fuze algorithms.
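A common way to compute an apparent radar centroid is as a power-weighted average of scattering-centre positions; a minimal sketch under that assumption (ours — the report's exact weighting, and the phase interference that lets glint push the ARC outside the target's physical extent, are not reproduced here):

```python
import numpy as np

def apparent_radar_centroid(positions: np.ndarray, powers: np.ndarray) -> np.ndarray:
    """Power-weighted centroid of scattering-centre positions (n x 3 array)."""
    weights = powers / powers.sum()
    return weights @ positions

# Three illustrative scattering centres (metres) and their relative returns.
scatterers = np.array([[0.0, 0.0, 0.0],
                       [10.0, 0.0, 0.0],
                       [5.0, 2.0, 0.0]])
returns = np.array([1.0, 4.0, 1.0])
print(apparent_radar_centroid(scatterers, returns))  # [7.5, 0.333, 0.0]
```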
One Step Quantum Key Distribution Based on EPR Entanglement.
Li, Jian; Li, Na; Li, Lei-Lei; Wang, Tao
2016-06-30
A novel quantum key distribution protocol is presented, based on entanglement and dense coding and allowing asymptotically secure key distribution. Considering the storage time limit of quantum bits, a grouping quantum key distribution protocol is proposed, which overcomes the vulnerability of the first protocol and improves its practicality. Moreover, a security analysis is given: a simple type of eavesdropping attack would introduce an error rate of at least 46.875%. Compared with the two-step "Ping-pong" protocol, the proposed protocol does not need to store the qubit and involves only one step.