NASA Astrophysics Data System (ADS)
Zhang, Kuiyuan; Umehara, Shigehiro; Yamaguchi, Junki; Furuta, Jun; Kobayashi, Kazutoshi
2016-08-01
This paper analyzes how body bias and BOX region thickness affect soft error rates in 65-nm SOTB (Silicon on Thin BOX) and 28-nm UTBB (Ultra Thin Body and BOX) FD-SOI processes. Soft errors are induced by alpha-particle and neutron irradiation, and the results are then analyzed by Monte Carlo-based simulation using PHITS-TCAD. The alpha-particle-induced single event upset (SEU) cross-section and neutron-induced soft error rate (SER) obtained by simulation are consistent with measurement results. We clarify that SERs decrease as BOX thickness increases in SOTB, while SERs in UTBB are independent of BOX thickness. We also find that SOTB develops a higher tolerance to soft errors when reverse body bias is applied, while UTBB becomes more susceptible.
A Quatro-Based 65-nm Flip-Flop Circuit for Soft-Error Resilience
NASA Astrophysics Data System (ADS)
Li, Y.-Q.; Wang, H.-B.; Liu, R.; Chen, L.; Nofal, I.; Shi, S.-T.; He, A.-L.; Guo, G.; Baeg, S. H.; Wen, S.-J.; Wong, R.; Chen, M.; Wu, Q.
2017-06-01
A flip-flop circuit hardened against soft errors is presented in this paper. This design is an improved version of Quatro with further enhanced soft-error resilience obtained by integrating the guard-gate technique. The proposed design, as well as a reference Quatro and a regular flip-flop, was implemented and manufactured in a 65-nm bulk CMOS technology. Experimental characterization of their alpha-particle and heavy-ion soft-error rates verified the superior hardening performance of the proposed design over the other two circuits.
NASA Astrophysics Data System (ADS)
Watanabe, Y.; Abe, S.
2014-06-01
Terrestrial neutron-induced soft errors in MOSFETs from a 65 nm down to a 25 nm design rule are analyzed by means of multi-scale Monte Carlo simulation using the PHITS-HyENEXSS code system. Nuclear reaction models implemented in the PHITS code are validated by comparisons with experimental data. From the analysis of calculated soft error rates, it is clarified that secondary He and H ions have a major impact on soft errors as critical charge decreases. It is also found that the high-energy component, from 10 MeV up to several hundreds of MeV, in secondary cosmic-ray neutrons is the most significant source of soft errors regardless of design rule.
Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on-chip
NASA Astrophysics Data System (ADS)
Du, Xuecheng; He, Chaohui; Liu, Shuhuan; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang
2016-09-01
Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft error rate of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, several parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability, and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and system reliability were evaluated through qualitative and quantitative analysis.
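As a back-of-the-envelope companion to the reliability figures above, here is a minimal sketch of the standard constant-failure-rate relations behind MTTF and unavailability; the numbers are invented placeholders, not values from the Zynq-7010 study:

```python
# Illustrative constant-failure-rate reliability arithmetic.
# Both rates below are hypothetical, not measured Zynq-7010 values.
failure_rate = 2.0e-6      # lambda, failures per hour (assumed)
repair_rate = 0.5          # mu, repairs per hour (assumed)

mttf = 1.0 / failure_rate                                  # mean time to failure, h
unavailability = failure_rate / (failure_rate + repair_rate)

print(f"MTTF = {mttf:.3e} h, unavailability = {unavailability:.3e}")
```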
An Investigation into Soft Error Detection Efficiency at Operating System Level
Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan
2014-01-01
Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation, which gives rise to permanent and transient errors in microelectronic components. The occurrence rate of transient errors is significantly higher than that of permanent errors. Transient errors, or soft errors, emerge in two forms: control flow errors (CFEs) and data errors. Valuable research results on their alleviation have already appeared in the literature at the hardware and software levels. However, these works rest on the basic assumption that the operating system is reliable, and they focus on other system levels. In this paper, we investigate the effects of soft errors on operating system components and compare their vulnerability with that of application-level components. Results show that soft errors in operating system components affect both operating system and application-level components. Therefore, by hardening operating system components against soft errors, both operating system and application-level components gain tolerance. PMID:24574894
Impact of Spacecraft Shielding on Direct Ionization Soft Error Rates for Sub-130 nm Technologies
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; Xapsos, Michael A.; Stauffer, Craig A.; Jordan, Thomas M.; Sanders, Anthony B.; Ladbury, Raymond L.; Oldham, Timothy R.; Marshall, Paul W.; Heidel, David F.; Rodbell, Kenneth P.
2010-01-01
We use ray-tracing software to model various levels of spacecraft shielding complexity, together with energy deposition pulse height analysis, to study how shielding affects the direct ionization soft error rate of microelectronic components in space. The analysis incorporates the galactic cosmic ray background, trapped proton, and solar heavy ion environments, as well as the October 1989 and July 2000 solar particle events.
Gutiérrez, J. J.; Russell, James K.
2016-01-01
Background. Cardiopulmonary resuscitation (CPR) feedback devices are being used increasingly. However, current accelerometer-based devices overestimate chest displacement when CPR is performed on soft surfaces, which may lead to insufficient compression depth. Aim. To assess the performance of a new algorithm for measuring compression depth and rate based on two accelerometers in a simulated resuscitation scenario. Materials and Methods. Compressions were provided to a manikin on two mattresses, foam and sprung, with and without a backboard. One accelerometer was placed on the chest and the second on the manikin's back. Chest displacement and mattress displacement were calculated from the spectral analysis of the corresponding acceleration every 2 seconds and subtracted to compute the actual sternal-spinal displacement. Compression rate was obtained from the chest acceleration. Results. Median unsigned error in depth was 2.1 mm (4.4%). Error was 2.4 mm on the foam mattress and 1.7 mm on the sprung mattress (p < 0.001). Error was 3.1/2.0 mm and 1.8/1.6 mm with/without a backboard for foam and sprung, respectively (p < 0.001). Median error in rate was 0.9 cpm (1.0%), with no significant differences between test conditions. Conclusion. The system provided accurate feedback on chest compression depth and rate on soft surfaces. Our solution compensated for mattress displacement, avoiding overestimation of compression depth when CPR is performed on soft surfaces. PMID:27999808
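A minimal numpy sketch of the spectral method described above: displacement is obtained by dividing each acceleration spectral component by (2πf)², and the back-accelerometer displacement is subtracted from the chest one. The sampling rate, window length, and sinusoidal test signals are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def displacement_from_acceleration(acc, fs):
    """Peak-to-peak displacement (m) of one analysis window, obtained by
    double integration in the frequency domain: each spectral component
    of the acceleration is divided by (2*pi*f)**2. Illustrative sketch."""
    n = len(acc)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.fft.rfft(acc - acc.mean())
    spec[1:] /= (2.0 * np.pi * freqs[1:]) ** 2   # a(f) -> d(f)
    spec[0] = 0.0                                # discard DC drift
    disp = np.fft.irfft(spec, n)
    return disp.max() - disp.min()

fs, f0 = 250.0, 2.0                      # assumed sampling rate; 120 cpm
t = np.arange(0, 2.0, 1.0 / fs)          # one 2 s analysis window
w = 2.0 * np.pi * f0
chest_acc = -w**2 * 0.0275 * np.sin(w * t)   # 55 mm p-p chest motion
back_acc = -w**2 * 0.0050 * np.sin(w * t)    # 10 mm p-p mattress motion

depth = (displacement_from_acceleration(chest_acc, fs)
         - displacement_from_acceleration(back_acc, fs))
print(f"sternal-spinal compression depth ~ {depth * 1000:.1f} mm")  # ~45 mm
```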
PRESAGE: Protecting Structured Address Generation against Soft Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently to soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors for faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.
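A toy Python illustration of the PRESAGE insight (the actual tool is a compiler transformation, not runtime Python): letting an address error flow through a redundant recurrence allows a single check at the loop exit to catch it. The function and variable names are hypothetical:

```python
# Toy model of detector placement at a loop exit (illustrative only).
def saxpy_checked(a, x, y, base=0, stride=1):
    idx = base
    shadow = base                      # redundant address recurrence
    for _ in range(len(x)):
        y[idx] += a * x[idx]
        idx += stride                  # a corrupted idx stays corrupted...
        shadow += stride               # ...so the exit check can see it
    if idx != shadow or idx != base + stride * len(x):
        raise RuntimeError("soft error detected in address generation")
    return y
```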
Real-time soft error rate measurements on bulk 40 nm SRAM memories: a five-year dual-site experiment
NASA Astrophysics Data System (ADS)
Autran, J. L.; Munteanu, D.; Moindjie, S.; Saad Saoud, T.; Gasiot, G.; Roche, P.
2016-11-01
This paper reports five years of real-time soft error rate experimentation conducted with the same setup, at mountain altitude for three years and then at sea level for two years. More than 7 Gbit of SRAM memories manufactured in a CMOS bulk 40 nm technology have been subjected to the natural radiation background. The intensity of the atmospheric neutron flux has been continuously measured on site during these experiments using dedicated neutron monitors. As a result, the neutron and alpha components of the soft error rate (SER) have been extracted from these measurements very accurately, refining the first SER estimations performed in 2012 for this SRAM technology. Data obtained at sea level show, for the first time, a possible correlation between the neutron flux changes induced by daily atmospheric pressure variations and the measured SER. Finally, all of the experimental data are compared with results obtained from accelerated tests and numerical simulation.
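For scale, the SER bookkeeping behind such a life test reduces to simple arithmetic; the counts below are assumed for illustration, not the paper's measurements:

```python
# Hypothetical numbers, for illustration only (not the measured results).
bits = 7e9             # ~7 Gbit under test
hours = 5 * 365 * 24   # five years of exposure
upsets = 350           # assumed observed upset count

ser_fit_per_mbit = upsets / (bits / 1e6) / hours * 1e9   # FIT/Mbit
print(f"SER ~ {ser_fit_per_mbit:.0f} FIT/Mbit")
```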
Comparisons of single event vulnerability of GaAs SRAMS
NASA Astrophysics Data System (ADS)
Weatherford, T. R.; Hauser, J. R.; Diehl, S. E.
1986-12-01
A GaAs MESFET/JFET model incorporated into SPICE has been used to accurately describe C-EJFET, E/D MESFET and D MESFET/resistor GaAs memory technologies. These cells have been evaluated for critical charges due to gate-to-drain and drain-to-source charge collection. Low gate-to-drain critical charges limit conventional GaAs SRAM soft error rates to approximately 1E-6 errors/bit-day. SEU hardening approaches including decoupling resistors, diodes, and FETs have been investigated. Results predict GaAs RAM cell critical charges can be increased to over 0.1 pC. Soft error rates in such hardened memories may approach 1E-7 errors/bit-day without significantly reducing memory speed. Tradeoffs between hardening level, performance and fabrication complexity are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimura, K.; Ohmi, K.
With the increasing density of memory devices, the issue of soft errors generated by cosmic rays is becoming more and more serious. Therefore, the tolerance of resistance random access memory (ReRAM) to cosmic radiation has to be elucidated for practical use. In this paper, we investigated the data retention characteristics of ReRAM with a Pt/NiO/ITO structure under ultraviolet irradiation. Soft errors were confirmed to be caused by ultraviolet irradiation in both the low- and high-resistance states. An analysis of the wavelength dependence of light irradiation on the data retention characteristics suggested that the errors were caused by electronic excitation from the valence band to the conduction band and to the energy level generated by the introduction of oxygen vacancies. Based on statistically estimated soft error rates, the errors were suggested to be caused by the cohesion and dispersion of oxygen vacancies owing to the generation of electron-hole pairs and valence changes under ultraviolet irradiation.
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.; Kasami, T.
1985-01-01
A cascaded coding scheme for error control is investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even at a high channel bit-error rate. Some example schemes are evaluated; they appear to be quite suitable for satellite down-link error control.
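A worked example of why proper inner/outer code choice yields extremely high reliability: assume the inner decoder leaves a residual symbol error rate, and let a generic outer RS(255,239) code (an assumption for illustration, not necessarily the paper's code) clean up the remainder via the binomial tail:

```python
from math import comb

# Illustrative concatenated-code arithmetic (generic, assumed parameters).
p_sym = 1e-3        # assumed residual symbol error rate after inner decoding
n, t = 255, 8       # outer RS(255, 239) corrects up to t = 8 symbol errors

# Outer decoder fails only if more than t of the n symbols are in error.
p_block = sum(comb(n, k) * p_sym**k * (1 - p_sym)**(n - k)
              for k in range(t + 1, n + 1))
print(f"outer-block failure probability ~ {p_block:.2e}")   # ~1e-11
```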
A cascaded coding scheme for error control
NASA Technical Reports Server (NTRS)
Kasami, T.; Lin, S.
1985-01-01
A cascaded coding scheme for error control was investigated. The scheme employs a combination of hard and soft decisions in decoding. Error performance is analyzed. If the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit-error-rate. Some example schemes are studied which seem to be quite suitable for satellite down-link error control.
Yang, Yuan; Quan, Nannan; Bu, Jingjing; Li, Xueping; Yu, Ningmei
2016-09-26
High-order modulation and demodulation technology can resolve the frequency conflict between wireless energy transmission and data communication. In order to achieve reliable wireless data communication based on high-order modulation technology for visual prostheses, this work proposes a Reed-Solomon (RS) error correcting code (ECC) circuit on the basis of differential amplitude and phase shift keying (DAPSK) soft demodulation. Firstly, recognizing that the traditional division-based DAPSK soft demodulation algorithm is complex for hardware implementation, an improved phase soft demodulation algorithm that reduces hardware complexity is put forward for visual prostheses. Based on this new algorithm, an improved RS soft decoding method is then proposed, in which soft decoding is achieved by combining the Chase algorithm with hard decoding. In order to meet the requirements of an implantable visual prosthesis, a method is derived to calculate symbol-level reliability from the multiplication of bit reliabilities, which reduces the number of test vectors of the Chase algorithm. The proposed algorithms are verified by MATLAB simulation and FPGA experiments. During MATLAB simulation, a biological channel attenuation model is added to the ECC circuit. The data rate is 8 Mbps in both the MATLAB simulation and the FPGA experiments. MATLAB simulation results show that the improved phase soft demodulation algorithm proposed in this paper saves hardware resources without losing bit error rate (BER) performance. Compared with the traditional demodulation circuit, the coding gain of the ECC circuit is improved by about 3 dB at the same BER of [Formula: see text]. The FPGA experimental results show that the system can correct demodulation errors with the wireless coils 3 cm apart; the greater the distance, the higher the BER. A bit error rate analyzer was then used to measure the BER of the demodulation circuit and the RS ECC circuit at different coil distances, and the results show that the RS ECC circuit achieves about an order of magnitude lower BER than the demodulation circuit alone at the same coil distance. Therefore, the RS ECC circuit provides more reliable communication in the system. The improved phase soft demodulation and soft decoding algorithms proposed in this paper enable data communication that is more reliable than other demodulation systems, and they also provide a significant reference for further study of visual prosthesis systems.
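A sketch of the symbol-reliability idea described above, computing symbol reliability as the product of bit reliabilities (avoiding division) and selecting the least reliable symbols as Chase test positions; array shapes and values are illustrative assumptions:

```python
import numpy as np

def least_reliable_symbols(bit_reliability, bits_per_symbol, count):
    """Symbol reliability as the product of its bits' reliabilities
    (in place of the division-based traditional approach); returns the
    indices of the `count` least reliable symbols for Chase testing."""
    sym_rel = np.prod(bit_reliability.reshape(-1, bits_per_symbol), axis=1)
    return np.argsort(sym_rel)[:count]

# e.g. 8 symbols of 4 bits each, bit reliabilities in [0.5, 1] (made up)
rng = np.random.default_rng(0)
rel = rng.uniform(0.5, 1.0, size=32)
print(least_reliable_symbols(rel, 4, count=3))
```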
Clover: Compiler directed lightweight soft error resilience
Liu, Qingrui; Lee, Dongyoon; Jung, Changhee; ...
2015-05-01
This paper presents Clover, a compiler-directed soft error detection and recovery scheme for lightweight soft error resilience. The compiler carefully generates soft error tolerant code based on idempotent processing without explicit checkpoints. During program execution, Clover relies on a small number of acoustic wave detectors deployed in the processor to identify soft errors by sensing the wave made by a particle strike. To cope with DUEs (detected unrecoverable errors) caused by the sensing latency of error detection, Clover leverages a novel selective instruction duplication technique called tail-DMR (dual modular redundancy). Once a soft error is detected by either the sensor or the tail-DMR, Clover takes care of the error as in the case of exception handling. To recover from the error, Clover simply redirects program control to the beginning of the code region where the error is detected. Lastly, the experimental results demonstrate that the average runtime overhead is only 26%, which is a 75% reduction compared to that of the state-of-the-art soft error resilience technique.
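A toy model of the recovery control flow (illustrative only; Clover is a compiler pass paired with hardware acoustic detectors, not Python): on a detector signal, control is simply redirected to the start of the idempotent region:

```python
# Toy model of Clover-style recovery; the detector signal is modeled
# here as an exception, and the region is assumed idempotent.
def run_with_recovery(region, max_retries=3):
    """Re-execute an idempotent region from its start whenever a
    detector signals a soft error."""
    for _ in range(max_retries):
        try:
            return region()        # idempotent: safe to re-run
        except RuntimeError:       # stands in for the detector signal
            continue
    raise RuntimeError("unrecoverable after retries")
```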
Detection and Correction of Silent Data Corruption for Large-Scale High-Performance Computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fiala, David J; Mueller, Frank; Engelmann, Christian
Faults have become the norm rather than the exception for high-end computing on clusters with tens or hundreds of thousands of cores. Exacerbating this situation, some of these faults remain undetected, manifesting themselves as silent errors that corrupt memory while applications continue to operate and report incorrect results. This paper studies the potential for redundancy to both detect and correct soft errors in MPI message-passing applications. Our study investigates the challenges inherent to detecting soft errors within MPI applications while providing transparent MPI redundancy. By assuming a model wherein corruption in application data manifests itself by producing differing MPI message data between replicas, we study the protocols best suited for detecting and correcting corrupted MPI data. To experimentally validate our proposed detection and correction protocols, we introduce RedMPI, an MPI library which resides in the MPI profiling layer. RedMPI is capable of both online detection and correction of soft errors that occur in MPI applications, without requiring any modifications to the application source, by utilizing either double or triple redundancy. Our results indicate that our most efficient consistency protocol can successfully protect applications experiencing even high rates of silent data corruption with runtime overheads between 0% and 30% compared to unprotected applications without redundancy. Using our fault injector within RedMPI, we observe that even a single soft error can have profound effects on running applications, causing a cascading pattern of corruption that in most cases spreads to all other processes. RedMPI's protection has been shown to successfully mitigate the effects of soft errors while allowing applications to complete with correct results even in the face of errors.
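A toy sketch of the detection/correction idea, outside of MPI: replicas' message buffers are reduced to digests and compared, with a majority vote correcting under triple redundancy. The helper names are hypothetical, and this is not RedMPI's actual implementation:

```python
import hashlib

def message_digest(buf: bytes) -> bytes:
    return hashlib.sha256(buf).digest()

def check_replicas(buffers):
    """Given the same logical message from each replica, flag silent data
    corruption when digests disagree; a majority vote corrects it under
    triple redundancy (double redundancy can only detect)."""
    digests = [message_digest(b) for b in buffers]
    if len(set(digests)) == 1:
        return buffers[0]                       # all replicas agree
    for d in digests:
        if digests.count(d) > len(digests) // 2:
            return buffers[digests.index(d)]    # majority wins
    raise RuntimeError("silent data corruption detected, no majority")
```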
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlesinger, Adam M.
2013-01-01
We describe the benefit of using closed-loop measurements for a radio receiver paired with a counterpart transmitter. We show that real-time analysis of the soft decision output of a receiver can provide rich and relevant insight far beyond the traditional hard-decision bit error rate (BER) test statistic. We describe a Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in live-time during the development of software defined radios. This test technique gains importance as modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more protocol overhead through noisier channels, and software-defined radios (SDRs) use error-correction codes that approach Shannon's theoretical limit of performance.
Scaled CMOS Technology Reliability Users Guide
NASA Technical Reports Server (NTRS)
White, Mark
2010-01-01
The desire to assess the reliability of emerging scaled microelectronics technologies through faster reliability trials and more accurate acceleration models is the precursor for further research and experimentation in this relevant field. The effect of semiconductor scaling on microelectronics product reliability is an important aspect for the high-reliability application user. From the perspective of a customer or user, who in many cases must deal with very limited, if any, manufacturer's reliability data to assess the product for a highly reliable application, product-level testing is critical in the characterization and reliability assessment of advanced nanometer semiconductor scaling effects on microelectronics reliability. A methodology on how to accomplish this and techniques for deriving the expected product-level reliability of commercial memory products are provided. Competing mechanism theory and the multiple failure mechanism model are applied to the experimental results of scaled SDRAM products. Accelerated stress testing at multiple conditions is applied at the product level to several scaled memory products to assess performance degradation and product reliability. Acceleration models are derived for each case. For several scaled SDRAM products, retention time degradation is studied, and two distinct soft error populations are observed with each technology generation: early breakdown, characterized by randomly distributed weak bits with Weibull slope β = 1, and a main breakdown population with an increasing failure rate. Retention time soft error rates are calculated, and a multiple failure mechanism acceleration model with parameters is derived for each technology. Defect densities are calculated and reflect a decreasing trend in the percentage of random defective bits for each successive product generation. A normalized soft error failure rate of memory data retention time in FIT/Gb and FIT/cm² is presented for several scaled SDRAM generations, revealing a power relationship. General models describing the soft error rates across scaled product generations are presented. The analysis methodology may be applied to other scaled microelectronic products and their key parameters.
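A minimal sketch of the two-population competing-mechanism picture described above, with a β = 1 early-failure population alongside a wear-out population; all parameters are made up, not fitted SDRAM data:

```python
import numpy as np

# Illustrative two-population retention-failure model (assumed parameters).
def weibull_cdf(t, eta, beta):
    return 1.0 - np.exp(-(t / eta) ** beta)

t = np.logspace(0, 4, 50)                           # stress time, a.u.
f_early = 0.02 * weibull_cdf(t, eta=5e2, beta=1.0)  # random weak bits, beta = 1
f_main = weibull_cdf(t, eta=8e3, beta=3.0)          # wear-out population
f_total = f_early + (1.0 - f_early) * f_main        # competing mechanisms
print(f"fraction failed at t = {t[-1]:.0f}: {f_total[-1]:.3f}")
```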
Propagation of measurement accuracy to biomass soft-sensor estimation and control quality.
Steinwandter, Valentin; Zahel, Thomas; Sagmeister, Patrick; Herwig, Christoph
2017-01-01
In biopharmaceutical process development and manufacturing, the online measurement of biomass and derived specific turnover rates is a central task for physiological monitoring and control of the process. However, hard-type sensors such as dielectric spectroscopy, broth fluorescence, or permittivity measurement harbor various disadvantages. Therefore, soft-sensors, which use measurements of the off-gas stream and substrate feed to reconcile turnover rates and provide an online estimate of biomass formation, are smart alternatives. The reconciliation procedure uses mass and energy balances together with accuracy estimations of the measured conversion rates, which have so far been chosen arbitrarily and kept static over the entire process. In this contribution, we present a novel strategy within the soft-sensor framework (named adaptive soft-sensor) to propagate uncertainties from measurements to conversion rates and demonstrate the benefits: for industrially relevant conditions, the error of the resulting estimated biomass formation rate and specific substrate consumption rate could be decreased by 43% and 64%, respectively, compared to traditional soft-sensor approaches. Moreover, we present a generic workflow to determine the raw-signal accuracy required to obtain predefined accuracies of soft-sensor estimations, whereby appropriate measurement devices and maintenance intervals can be selected. Furthermore, using this workflow, we demonstrate that the estimation accuracy of the soft-sensor can be increased additionally and substantially.
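A minimal sketch of first-order uncertainty propagation from raw signals to a derived rate, in the spirit of the adaptive soft-sensor; the toy off-gas balance, sensor accuracies, and values are placeholder assumptions, not the paper's model:

```python
import numpy as np

# First-order propagation: var(r) ~ sum_i (df/dm_i)^2 * var(m_i).
def carbon_evolution_rate(f_gas, x_co2_out, x_co2_in):
    return f_gas * (x_co2_out - x_co2_in)    # mol/h, toy off-gas balance

m = np.array([10.0, 0.030, 0.0004])          # flow, CO2 out, CO2 in (assumed)
sigma = np.array([0.10, 0.0005, 0.00002])    # sensor accuracies (assumed)

eps = 1e-6                                    # numerical partial derivatives
grad = np.array([
    (carbon_evolution_rate(*(m + eps * np.eye(3)[i]))
     - carbon_evolution_rate(*m)) / eps
    for i in range(3)
])
var_r = np.sum((grad * sigma) ** 2)
print(f"CER = {carbon_evolution_rate(*m):.4f} +/- {np.sqrt(var_r):.4f} mol/h")
```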
Design Techniques for Power-Aware Combinational Logic SER Mitigation
NASA Astrophysics Data System (ADS)
Mahatme, Nihaar N.
The history of modern semiconductor devices and circuits suggests that technologists have been able to maintain scaling at the rate predicted by Moore's Law [Moor-65]. Along with improved performance, speed, and lower area, technology scaling has also exacerbated reliability issues such as soft errors. Soft errors are transient errors that occur in microelectronic circuits due to ionizing radiation particle strikes on reverse-biased semiconductor junctions. These radiation-induced errors at the terrestrial level are caused by (1) alpha particles emitted as decay products of packaging material, (2) cosmic rays that produce energetic protons and neutrons, and (3) thermal neutrons [Dodd-03], [Srou-88], and more recently muons and electrons [Ma-79] [Nara-08] [Siew-10] [King-10]. In the space environment, radiation-induced errors are a much bigger threat and are mainly caused by cosmic heavy ions, protons, etc. The effects of radiation exposure on circuits and measures to protect against them have been studied extensively for the past 40 years, especially for parts operating in space. Radiation particle strikes can affect memory as well as combinational logic. Typically, when these particles strike semiconductor junctions of transistors that are part of feedback structures such as SRAM memory cells or flip-flops, they can invert the cell content. Such a failure is formally called a bit-flip or single-event upset (SEU). When such particles strike sensitive junctions in combinational logic gates, they produce transient voltage spikes, or glitches, called single-event transients (SETs) that can be latched by receiving flip-flops. As circuits are clocked faster, there are more clocking edges, which increases the likelihood of latching these transients. In older technology generations, the probability of errors in flip-flops due to latched SETs was much lower than that of direct strikes on flip-flops or SRAMs leading to SEUs, mainly because operating frequencies were much lower. The Intel Pentium II, for example, was fabricated in a 0.35 μm technology and operated between 200 and 330 MHz. With technology scaling, however, operating frequencies have increased tremendously, and latched SETs from combinational logic could account for a significant proportion of the chip-level soft error rate [Sief-12][Maha-11][Shiv02][Bu97]. Therefore, there is a need to systematically characterize the problem of combinational logic single-event effects (SEE) and to understand the various factors that affect the combinational logic single-event error rate. Just as scaling has led to soft errors emerging as a reliability-limiting failure mode for modern digital ICs, the problem of increasing power consumption has arguably been a bigger bane of scaling. While Moore's Law loftily states the blessing of technology scaling to be smaller and faster transistors, it fails to highlight that the power density increases exponentially with every technology generation. The power density problem was partially solved in the 1970s and 1980s by moving from bipolar and GaAs technologies to full-scale silicon CMOS technologies. Since then, however, the technology miniaturization that enabled high-speed, multicore, and parallel computing has steadily increased power density and power consumption.
Today, minimizing power consumption is as critical for power-hungry server farms as it is for portable devices, all-pervasive sensor networks, and future eco-bio-sensors. Low power consumption is now regularly part of design philosophies for digital products with diverse applications, from computing to communication to healthcare. Designers in today's world are thus left grappling with both a "power wall" and a "reliability wall". Unfortunately, when it comes to improving reliability through soft error mitigation, most approaches are invariably saddled with overheads in terms of area, speed, and, more importantly, power. The cost of protecting combinational logic through power-hungry mitigation approaches can therefore disrupt the power budget significantly. There is thus a strong need to develop techniques that provide both power minimization and combinational logic soft error mitigation. This dissertation advances hitherto untapped opportunities to jointly reduce power consumption and deliver soft-error-resilient designs. Circuit as well as architectural approaches are employed to achieve this objective, and the advantages of cross-layer optimization for power and soft error reliability are emphasized.
A Case for Soft Error Detection and Correction in Computational Chemistry.
van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A
2013-09-10
High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures becomes so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern, as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms, using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution, and may therefore intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitude but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at a moderate increase in computational cost.
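One illustrative instance of a per-data-structure detection/correction mechanism of the kind the paper proposes (a hypothetical bound check, not the authors' exact scheme): out-of-range matrix entries are flagged and replaced with values from the previous iteration:

```python
import numpy as np

def detect_and_correct(current, previous, bound):
    """Flag entries whose magnitude exceeds a known physical bound and
    replace them with the previous iteration's values (toy stand-in for
    the paper's data-structure-specific mechanisms)."""
    bad = np.abs(current) > bound
    corrected = np.where(bad, previous, current)
    return corrected, int(bad.sum())

P_prev = np.eye(3) * 0.5
P_curr = P_prev.copy()
P_curr[1, 2] = 1e8                       # injected bit-flip-like value
P_fixed, n_bad = detect_and_correct(P_curr, P_prev, bound=2.0)
print(n_bad, P_fixed[1, 2])              # 1 corrected entry
```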
Addressing the Hard Factors for Command File Errors by Probabilistic Reasoning
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Bryant, Larry
2014-01-01
Command File Errors (CFEs) are managed using standard risk management approaches at the Jet Propulsion Laboratory. Over the last few years, more emphasis has been placed on the collection, organization, and analysis of these errors for the purpose of reducing CFE rates. More recently, probabilistic modeling techniques have been used for more in-depth analysis of the perceived error rates of the DAWN mission and for managing the soft factors in the upcoming phases of the mission. We broadly classify the factors that can lead to CFEs as soft factors, which relate to the cognition of the operators, and hard factors, which relate to the Mission System, which is composed of the hardware, software, and procedures used for the generation, verification and validation, and execution of commands. The focus of this paper is to use probabilistic models that represent multiple missions at JPL to determine the root causes and sensitivities of the various components of the mission system and to develop recommendations and techniques for addressing them. The customization of these multi-mission models to a sample interplanetary spacecraft is done for this purpose.
45 Gb/s low complexity optical front-end for soft-decision LDPC decoders.
Sakib, Meer Nazmus; Moayedi, Monireh; Gross, Warren J; Liboiron-Ladouceur, Odile
2012-07-30
In this paper, a low-complexity and energy-efficient 45 Gb/s soft-decision optical front-end to be used with soft-decision low-density parity-check (LDPC) decoders is demonstrated. The results show that the optical front-end exhibits net coding gains of 7.06 and 9.62 dB at post-forward-error-correction bit error rates of 10^-7 and 10^-12, respectively, for the long-block-length LDPC(32768,26803) code. The gain over a hard-decision front-end is 1.9 dB for this code. It is shown that the soft-decision circuit can also be used as a 2-bit flash-type analog-to-digital converter (ADC) in conjunction with equalization schemes. At a bit rate of 15 Gb/s, RS(255,239), LDPC(672,336), (672,504), (672,588), and (1440,1344) used with a 6-tap finite impulse response (FIR) equalizer result in optical power savings of 3, 5, 7, 9.5, and 10.5 dB, respectively. The 2-bit flash ADC consumes only 2.71 W at 32 GSamples/s. At 45 GSamples/s the power consumption is estimated to be 4.95 W.
NASA Astrophysics Data System (ADS)
Celik, Cihangir
Advances in microelectronics have resulted in sub-micrometer electronic technologies, as predicted by Moore's Law of 1965, which states that the number of transistors in a given space doubles every two years. Most memory architectures available today have sub-micrometer transistor dimensions. The International Technology Roadmap for Semiconductors (ITRS), a continuation of Moore's Law, predicts that Dynamic Random Access Memory (DRAM) will have an average half-pitch size of 50 nm and Microprocessor Units (MPU) an average gate length of 30 nm over the period of 2008-2012. Decreases in dimensions satisfy the producer and consumer requirements of low power consumption, more data storage in a given space, faster clock speeds, and portability of integrated circuits (ICs), particularly memories. On the other hand, these properties also lead to a higher susceptibility of IC designs to temperature, magnetic interference, power supply and environmental noise, and radiation. Radiation can directly or indirectly affect device operation. When a single energetic particle strikes a sensitive node in a microelectronic device, it can cause a permanent or transient malfunction. This behavior is called a Single Event Effect (SEE). SEEs are mostly transient errors that generate an electric pulse which alters the state of a logic node in the memory device without having a permanent effect on the functionality of the device. This is called a Single Event Upset (SEU) or soft error. In contrast to SEUs, Single Event Latchup (SEL), Single Event Gate Rupture (SEGR), and Single Event Burnout (SEB) have permanent effects on device operation, and a system reset or recovery is needed to return to proper operation. The rate at which a device or system encounters soft errors is defined as the Soft Error Rate (SER). The semiconductor industry has been struggling with SEEs and is taking the necessary measures to continue to improve system designs in nano-scale technologies. Prevention of SEEs has been studied and applied in the semiconductor industry by including radiation protection precautions in the system architecture or by using corrective algorithms in system operation. Decreasing the 10B content (20% of natural boron) in the natural boron of the borophosphosilicate glass (BPSG) layers conventionally used in the fabrication of semiconductor devices was one of the major radiation protection approaches for the system architecture. Neutron interaction in the BPSG layer was the origin of the SEEs because of the 10B(n,alpha)7Li reaction products. Both of the particles produced are capable of ionization in the silicon substrate region, whose thickness is comparable to the ranges of these particles. Using the soft error phenomenon in exactly the opposite manner from the semiconductor industry can provide a new neutron detection system based on the SERs in semiconductor memories. By investigating the soft error mechanisms in available semiconductor memories and enhancing the soft error occurrences in these devices, one can convert all memory-using intelligent systems into portable, power-efficient, direction-dependent neutron detectors. The Neutron Intercepting Silicon Chip (NISC) project aims to achieve this goal by introducing 10B-enriched BPSG layers into semiconductor memory architectures.
This research addresses the development of a simulation tool, the NISC Soft Error Analysis Tool (NISCSAT), for soft error modeling and analysis in semiconductor memories, to provide basic design considerations for the NISC. NISCSAT performs particle transport and calculates the soft error probabilities, or SER, depending on the energy depositions of the particles in a given memory node model of the NISC. Soft error measurements were performed with commercially available, off-the-shelf semiconductor memories and microprocessors to observe soft error variations with neutron flux and memory supply voltage. Measurement results show that soft errors in the memories increase proportionally with the neutron flux, whereas they decrease with increasing supply voltage. NISC design considerations in this dissertation include the effects of device scaling, the 10B content of the BPSG layer, the incoming neutron energy, and the critical charge of the node. NISCSAT simulations were performed with various memory node models to account for these effects. Device scaling simulations showed that any increase in the thickness of the BPSG layer beyond 2 μm causes self-shielding of the incoming neutrons by the BPSG layer and results in lower detection efficiencies. Moreover, if the BPSG layer is located more than 4 μm from the depletion region of the node, there are no soft errors in the node, because both of the reaction products have short ranges in silicon or any possible node layers. Calculation results regarding the critical charge indicated that the mean charge deposition of the reaction products in the sensitive volume of the node is about 15 fC. It is evident that the NISC design should have a memory architecture with a critical charge of 15 fC or less to obtain higher detection efficiencies. Moreover, the sensitive volume should be placed in close proximity to the BPSG layers so that its location is within the range of the alpha and 7Li particles. Results showed that the distance between the BPSG layer and the sensitive volume should be less than 2 μm to increase the detection efficiency of the NISC. Incoming neutron energy was also investigated by simulation, and the results showed that the NISC neutron detection efficiency follows the neutron cross-sections of the 10B(n,alpha)7Li reaction, e.g., the ratio of the thermal (0.0253 eV) to fast (2 MeV) neutron detection efficiencies is approximately 8000:1. Environmental conditions and their effects on NISC performance were also studied in this research. Cosmic rays were modeled and simulated via NISCSAT to investigate the detection reliability of the NISC. Simulation results show that cosmic rays account for less than 2% of the soft errors for thermal neutron detection. On the other hand, fast neutron detection by the NISC, which already has a poor efficiency due to the low neutron cross-sections, becomes almost impossible at higher altitudes, where the cosmic ray fluxes and their energies are higher. NISCSAT simulations of the soft error dependency of the NISC on temperature and electromagnetic fields show no significant effects on NISC detection efficiency. Furthermore, the detection efficiency of the NISC decreases with both air humidity and the use of moderators, since the incoming neutrons scatter away before reaching the memory surface.
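As a sanity check on the quoted thermal-to-fast efficiency ratio: the 10B(n,alpha)7Li cross-section is roughly 1/v at low energies, and a naive 1/v extrapolation from thermal to 2 MeV lands near the simulated 8000:1 (the true 2 MeV cross-section deviates from 1/v, so this is only an order-of-magnitude check):

```python
from math import sqrt

# 1/v law: sigma ~ 1/v ~ 1/sqrt(E), so the efficiency ratio scales as
# sqrt(E_fast / E_thermal). Order-of-magnitude estimate only.
print(sqrt(2.0e6 / 0.0253))   # ~8.9e3, close to the simulated 8000:1
```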
Research on On-Line Modeling of Fed-Batch Fermentation Process Based on v-SVR
NASA Astrophysics Data System (ADS)
Ma, Yongjun
The fermentation process is very complex and non-linear, and many parameters are not easy to measure directly online, so soft sensor modeling is a good solution. This paper introduces v-support vector regression (v-SVR) for soft sensor modeling of a fed-batch fermentation process. v-SVR is a novel type of learning machine: it can control the fitting accuracy and prediction error by adjusting the parameter v. An online training algorithm is discussed in detail to reduce the training complexity of v-SVR. The experimental results show that v-SVR has a low error rate and good generalization with an appropriate v.
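An offline sketch of v-SVR using scikit-learn's NuSVR, showing the role of the parameter v (nu); the paper's contribution is an online training algorithm, which this does not reproduce, and the synthetic data are assumptions:

```python
import numpy as np
from sklearn.svm import NuSVR

# Synthetic stand-in for fermentation measurements (assumed features).
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 3))          # e.g. feed rate, pH, DO
y = 0.5 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.standard_normal(200)

# nu bounds the fraction of support vectors / margin errors.
model = NuSVR(nu=0.5, C=10.0, kernel="rbf")
model.fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))
```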
Proton upsets in LSI memories in space
NASA Technical Reports Server (NTRS)
Mcnulty, P. J.; Wyatt, R. C.; Filz, R. C.; Rothwell, P. L.; Farrell, G. E.
1980-01-01
Two types of large scale integrated dynamic random access memory devices were tested and found to be subject to soft errors when exposed to protons incident at energies between 18 and 130 MeV. These errors are shown to differ significantly from those induced in the same devices by alphas from an Am-241 source. There is considerable variation among devices in their sensitivity to proton-induced soft errors, even among devices of the same type. For protons incident at 130 MeV, the soft error cross sections measured in these experiments varied from 10^-8 to 10^-6 cm²/proton. For individual devices, however, the soft error cross section consistently increased with beam energy over 18-130 MeV. Analysis indicates that the soft errors induced by energetic protons result from spallation interactions between the incident protons and the nuclei of the atoms comprising the device. Because energetic protons are the most numerous of both the galactic and solar cosmic rays and form the inner radiation belt, proton-induced soft errors have potentially serious implications for many electronic systems flown in space.
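The cross-section bookkeeping behind such measurements is simple; a sketch with assumed counts (not the paper's data):

```python
# Illustrative device cross-section arithmetic (all numbers assumed).
upsets = 42            # observed soft errors in one run
fluence = 3.0e7        # protons/cm^2 delivered during the run
bits = 16384           # 16-kbit DRAM (assumed device size)

sigma_device = upsets / fluence            # cm^2/proton, device-level
sigma_bit = sigma_device / bits            # cm^2/proton, per bit
print(f"sigma_device = {sigma_device:.2e} cm^2")   # ~1.4e-6, in range
```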
Least Reliable Bits Coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Wagner, Paul; Budinger, James
1992-01-01
An analysis and discussion of a bandwidth-efficient multi-level/multi-stage block coded modulation technique called Least Reliable Bits Coding (LRBC) is presented. LRBC uses simple multi-level component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Further, soft-decision multi-stage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Using analytical expressions and tight performance bounds, it is shown that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of Binary Phase Shift Keying (BPSK). Bit error rates (BER) versus channel bit energy with Additive White Gaussian Noise (AWGN) are given for a set of LRB Reed-Solomon (RS) encoded 8PSK modulation formats with an ensemble rate of 8/9. All formats exhibit a spectral efficiency of 2.67 (= log2(8) x 8/9) information bps/Hz. Bit-by-bit coded and uncoded error probabilities with soft-decision information are determined. These are traded off against code rate to determine parameters that achieve good performance. The relative simplicity of Galois field algebra versus the Viterbi algorithm and the availability of high-speed commercial Very Large Scale Integration (VLSI) for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
A burst-mode photon counting receiver with automatic channel estimation and bit rate detection
NASA Astrophysics Data System (ADS)
Rao, Hemonth G.; DeVoe, Catherine E.; Fletcher, Andrew S.; Gaschits, Igor D.; Hakimi, Farhad; Hamilton, Scott A.; Hardy, Nicholas D.; Ingwersen, John G.; Kaminsky, Richard D.; Moores, John D.; Scheinbart, Marvin S.; Yarnall, Timothy M.
2016-04-01
We demonstrate a multi-rate burst-mode photon-counting receiver for undersea communication at data rates up to 10.416 Mb/s over a 30-foot water channel. To the best of our knowledge, this is the first demonstration of burst-mode photon-counting communication. With added attenuation, the maximum link loss is 97.1 dB at λ=517 nm. In clear ocean water, this equates to link distances up to 148 meters. For λ=470 nm, the achievable link distance in clear ocean water is 450 meters. The receiver incorporates soft-decision forward error correction (FEC) based on a product code of an inner LDPC code and an outer BCH code. The FEC supports multiple code rates to achieve error-free performance. We have selected a burst-mode receiver architecture to provide robust performance with respect to unpredictable channel obstructions. The receiver is capable of on-the-fly data rate detection and adapts to changing levels of signal and background light. The receiver updates its phase alignment and channel estimates every 1.6 ms, allowing for rapid changes in water quality as well as motion between transmitter and receiver. We demonstrate on-the-fly rate detection, channel BER within 0.2 dB of theory across all data rates, and error-free performance within 1.82 dB of soft-decision capacity across all tested code rates. All signal processing is done in FPGAs and runs continuously in real time.
Effects of Stopping Ions and LET Fluctuations on Soft Error Rate Prediction.
Weeden-Wright, S. L.; King, Michael Patrick; Hooten, N. C.; ...
2015-02-01
Variability in energy deposition from stopping ions and LET fluctuations is quantified for specific radiation environments. When compared to predictions using average LET via CREME96, LET fluctuations lead to an order-of-magnitude difference in effective flux and a nearly 4x decrease in predicted soft error rate (SER) in an example calculation performed on a commercial 65 nm SRAM. The large LET fluctuations reported here will be even greater for the smaller sensitive volumes that are characteristic of highly scaled technologies. End-of-range effects of stopping ions do not lead to significant inaccuracies in radiation environments with low solar activity unless the sensitive-volume thickness is 100 μm or greater. In contrast, end-of-range effects of stopping ions lead to significant inaccuracies for sensitive-volume thicknesses less than 10 μm in radiation environments with high solar activity.
Multi-bits error detection and fast recovery in RISC cores
NASA Astrophysics Data System (ADS)
Jing, Wang; Xing, Yang; Yuanfu, Zhao; Weigong, Zhang; Jiao, Shen; Keni, Qiu
2015-11-01
Particle-induced soft errors are a major threat to the reliability of microprocessors. Even worse, multi-bit upsets (MBUs) are ever more frequent due to the rapidly shrinking feature size of ICs. Several architecture-level mechanisms have been proposed to protect microprocessors from soft errors, such as dual and triple modular redundancy (DMR and TMR). However, most of them are inefficient against the growing number of multi-bit errors or cannot balance critical-path delay, area, and power penalties well. This paper proposes a novel architecture, self-recovery dual-pipeline (SRDP), to effectively provide soft error detection and recovery at low cost for general RISC structures. We focus on the following three aspects. First, an advanced DMR pipeline is devised to detect soft errors, especially MBUs. Second, SEU/MBU errors can be located by adding self-checking logic to the pipeline stage registers. Third, a recovery scheme is proposed with a recovery cost of 1 or 5 clock cycles. Our evaluation of a prototype implementation shows that SRDP can detect up to 100% of particle-induced soft errors and recover from nearly 95% of them; the remaining 5% enter a specific trap.
TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.
2007-01-01
Initial total ionizing dose (TID) and single event heavy ion test results are presented for an unhardened commercial flash memory fabricated in 63 nm technology. The results are that the parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10^-12 errors/bit-day for the Adams Ten Percent Worst Case Environment.
Balasuriya, Lilanthi; Vyles, David; Bakerman, Paul; Holton, Vanessa; Vaidya, Vinay; Garcia-Filion, Pamela; Westdorp, Joan; Sanchez, Christine; Kurz, Rhonda
2017-09-01
An enhanced dose range checking (DRC) system was developed to evaluate prescription error rates in the pediatric intensive care unit and the pediatric cardiovascular intensive care unit. An enhanced DRC system incorporating "soft" and "hard" alerts was designed and implemented. Practitioner responses to alerts for patients admitted to the pediatric intensive care unit and the pediatric cardiovascular intensive care unit were retrospectively reviewed. Alert rates increased from 0.3% to 3.4% after "go-live" (P < 0.001). Before go-live, all alerts were soft alerts. In the period after go-live, 68% of alerts were soft alerts and 32% were hard alerts. Before go-live, providers reduced doses only 1 time for every 10 dose alerts. After implementation of the enhanced computerized physician order entry system, the practitioners responded to soft alerts by reducing doses to more appropriate levels in 24.7% of orders (70/283), compared with 10% (3/30) before go-live (P = 0.0701). The practitioners deleted orders in 9.5% of cases (27/283) after implementation of the enhanced DRC system, as compared with no cancelled orders before go-live (P = 0.0774). Medication orders that triggered a soft alert were submitted unmodified in 65.7% (186/283) as compared with 90% (27/30) of orders before go-live (P = 0.0067). After go-live, 28.7% of hard alerts resulted in a reduced dose, 64% resulted in a cancelled order, and 7.4% were submitted as written. Before go-live, alerts were often clinically irrelevant. After go-live, there was a statistically significant decrease in orders that were submitted unmodified and an increase in the number of orders that were reduced or cancelled.
Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests
He, Wei; Wang, Yueke; Xing, Kefei; Deng, Wei; Zhang, Zelong
2016-01-01
A method for evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results with different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10^-3 errors/(particle/cm²), while the MTTF is approximately 110.7 h. PMID:27583533
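The SFER arithmetic reduces to errors divided by accumulated fluence; a sketch with assumed counts chosen only to land near the reported order of magnitude:

```python
# Illustrative SFER bookkeeping (assumed counts, not the reported test data).
functional_errors = 27      # system-level functional errors observed
fluence = 2.6e4             # particles/cm^2 accumulated over the runs

sfer = functional_errors / fluence
print(f"SFER ~ {sfer:.1e} errors/(particle/cm^2)")   # ~1e-3, as reported
```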
Asymmetric Memory Circuit Would Resist Soft Errors
NASA Technical Reports Server (NTRS)
Buehler, Martin G.; Perlman, Marvin
1990-01-01
Some nonlinear error-correcting codes are more efficient in the presence of asymmetry. A combination of circuit-design and coding concepts is expected to make integrated-circuit random-access memories more resistant to "soft" errors (temporary bit errors, also called "single-event upsets", due to ionizing radiation). An integrated circuit of the new type is made deliberately more susceptible to one kind of bit error than to the other, and the associated error-correcting code is adapted to exploit this asymmetry in error probabilities.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs, namely the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard-decision Golay, and soft-decision Golay), tested on three FORTRAN-programmed channel simulations (INMARSAT, Gaussian, and constant burst width), and compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that the best code can be recommended. Of the four codes under study, the soft-decision Golay (24,12) code is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementing the algorithm.
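The hard- versus soft-decision comparison at the heart of the thesis can be illustrated with a far simpler code than the Golay (24,12): a rate-1/3 repetition code on a Gaussian channel, where hard decision slices each chip and majority-votes while soft decision sums received amplitudes before slicing. The sketch below (BPSK signaling, NumPy; illustrative only, not the thesis's Golay code or channel simulations) shows the soft decoder's lower BER at the same Eb/N0:

```python
import numpy as np

rng = np.random.default_rng(0)
n_bits, reps, ebn0_db = 100_000, 3, 4.0

bits = rng.integers(0, 2, n_bits)
symbols = 1 - 2 * np.repeat(bits, reps)        # BPSK: 0 -> +1, 1 -> -1
# Scale noise so Eb/N0 is fixed; each bit's energy is spread over `reps` chips
ebn0 = 10 ** (ebn0_db / 10)
sigma = np.sqrt(reps / (2 * ebn0))
rx = symbols + sigma * rng.standard_normal(symbols.size)

# Hard decision: slice each chip, then take a majority vote per bit
hard = (rx < 0).astype(int).reshape(-1, reps)
hard_bits = (hard.sum(axis=1) > reps // 2).astype(int)

# Soft decision: sum the received amplitudes before slicing
soft_bits = (rx.reshape(-1, reps).sum(axis=1) < 0).astype(int)

print("hard-decision BER:", np.mean(hard_bits != bits))
print("soft-decision BER:", np.mean(soft_bits != bits))
```

At this operating point the soft decoder's BER is roughly half the hard decoder's, reflecting the ~2 dB soft-decision advantage that motivates the thesis's preference for soft-decision Golay decoding.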
Ghorbani, Mahdi; Salahshour, Fateme; Haghparast, Abbas; Knaup, Courtney
2014-01-01
Purpose The aim of this study is to compare the dose in various soft tissues in brachytherapy with photon-emitting sources. Material and methods 103Pd, 125I, 169Yb, and 192Ir brachytherapy sources were simulated with the MCNPX Monte Carlo code, and their dose rate constants and radial dose functions were compared with the published data. A spherical phantom with a 50 cm radius was simulated, and the dose at various radial distances was calculated in adipose tissue, breast tissue, 4-component soft tissue, brain (grey/white matter), muscle (skeletal), lung tissue, blood (whole), 9-component soft tissue, and water. The absolute dose and the relative dose difference with respect to 9-component soft tissue were obtained for the various materials, sources, and distances. Results There was good agreement between the dosimetric parameters of the sources and the published data. Adipose tissue, breast tissue, 4-component soft tissue, and water showed the greatest difference in dose relative to the dose to 9-component soft tissue. The other soft tissues showed smaller dose differences. The dose difference was also higher for the 103Pd source than for the 125I, 169Yb, and 192Ir sources. Furthermore, greater distances from the source had higher relative dose differences, an effect attributable to the change in the photon spectrum (softening or hardening) as photons traverse the phantom material. Conclusions Ignoring soft tissue characteristics (density, composition, etc.) in treatment planning systems introduces a significant error in the dose delivered to the patient in brachytherapy with photon sources. The error depends on the type of soft tissue, the brachytherapy source, and the distance from the source. PMID:24790623
Protocol Processing for 100 Gbit/s and Beyond - A Soft Real-Time Approach in Hardware and Software
NASA Astrophysics Data System (ADS)
Büchner, Steffen; Lopacinski, Lukasz; Kraemer, Rolf; Nolte, Jörg
2017-09-01
100 Gbit/s wireless communication protocol processing stresses all parts of a communication system until the outermost. The efficient use of upcoming 100 Gbit/s and beyond transmission technology requires the rethinking of the way protocols are processed by the communication endpoints. This paper summarizes the achievements of the project End2End100. We will present a comprehensive soft real-time stream processing approach that allows the protocol designer to develop, analyze, and plan scalable protocols for ultra high data rates of 100 Gbit/s and beyond. Furthermore, we will present an ultra-low power, adaptable, and massively parallelized FEC (Forward Error Correction) scheme that detects and corrects bit errors at line rate with an energy consumption between 1 pJ/bit and 13 pJ/bit. The evaluation results discussed in this publication show that our comprehensive approach allows end-to-end communication with a very low protocol processing overhead.
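As a quick sanity check on what those efficiency figures imply at line rate, the FEC power envelope follows directly from energy per bit times bit rate (a back-of-the-envelope calculation, not a figure from the paper):

```python
# Power implied by a given FEC energy efficiency at line rate.
line_rate_bps = 100e9                  # 100 Gbit/s
for pj_per_bit in (1.0, 13.0):         # efficiency range reported for the FEC scheme
    watts = line_rate_bps * pj_per_bit * 1e-12
    print(f"{pj_per_bit:5.1f} pJ/bit at 100 Gbit/s -> {watts:.2f} W")
```

That is, the reported 1-13 pJ/bit range corresponds to an error-correction power budget between roughly 0.1 W and 1.3 W at full line rate.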
The impact of cine EPID image acquisition frame rate on markerless soft-tissue tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Rottmann, Joerg; Berbeco, Ross
2014-06-15
Purpose: Although reduction of the cine electronic portal imaging device (EPID) acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor autotracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz with an amorphous silicon portal imager (AS1000, Varian Medical Systems, Palo Alto, CA). The maximum frame rate of 12.87 Hz is imposed by the EPID. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for autotracking. The difference between the programmed and autotracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at 11 field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the autotracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusions: Cine EPID image acquisition at a frame rate of at least 4.29 Hz is recommended. Motion blurring in images with frame rates below 4.29 Hz can significantly reduce the accuracy of autotracking.
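The frame-averaging operation described above is simple to express directly; the sketch below (NumPy/SciPy, with hypothetical correlation data — the study's measured values are only the quoted R coefficients) shows both the averaging that lowers the effective frame rate from 12.87 Hz to 4.29 Hz and the Pearson correlation step:

```python
import numpy as np
from scipy import stats

def average_frames(frames: np.ndarray, n_avg: int) -> np.ndarray:
    """Average groups of n_avg consecutive cine frames; the effective frame
    rate drops from 12.87 Hz to 12.87 / n_avg Hz."""
    usable = (len(frames) // n_avg) * n_avg
    return frames[:usable].reshape(-1, n_avg, *frames.shape[1:]).mean(axis=1)

frames = np.random.default_rng(0).random((128, 384, 512))  # stand-in cine stack
low_rate = average_frames(frames, 3)                       # 12.87 Hz -> 4.29 Hz

# Correlating a motion-blurring surrogate with tracking error (values hypothetical):
blurring = np.array([1.0, 1.4, 2.1, 3.5, 5.2])     # e.g., area change of phantom circle
track_err_mm = np.array([0.4, 0.5, 0.9, 2.0, 3.1])  # tracking error at each averaging level
r, p = stats.pearsonr(blurring, track_err_mm)
print(f"Pearson R = {r:.2f} (p = {p:.3f})")
```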
In-flight performance of pulse-processing system of the ASTRO-H/Hitomi soft x-ray spectrometer
NASA Astrophysics Data System (ADS)
Ishisaki, Yoshitaka; Yamada, Shinya; Seta, Hiromi; Tashiro, Makoto S.; Takeda, Sawako; Terada, Yukikatsu; Kato, Yuka; Tsujimoto, Masahiro; Koyama, Shu; Mitsuda, Kazuhisa; Sawada, Makoto; Boyce, Kevin R.; Chiao, Meng P.; Watanabe, Tomomi; Leutenegger, Maurice A.; Eckart, Megan E.; Porter, Frederick Scott; Kilbourne, Caroline Anne
2018-01-01
We summarize results of the initial in-orbit performance of the pulse shape processor (PSP) of the soft x-ray spectrometer instrument onboard ASTRO-H (Hitomi). Event formats, kinds of telemetry, and the pulse-processing parameters are described, and the parameter settings in orbit are listed. The PSP was powered on 2 days after launch, and the event threshold was lowered in orbit. The PSP worked well in orbit, and there was neither a memory error nor a SpaceWire communication error until the break-up of the spacecraft. Time assignment, electrical crosstalk, and the event screening criteria are studied. It is confirmed that the event processing rate at 100% central processing unit load is ~200 counts/s/array, compliant with the requirement on the PSP.
SU-E-J-112: The Impact of Cine EPID Image Acquisition Frame Rate On Markerless Soft-Tissue Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yip, S; Rottmann, J; Berbeco, R
2014-06-01
Purpose: Although reduction of the cine EPID acquisition frame rate through multiple frame averaging may reduce hardware memory burden and decrease image noise, it can hinder the continuity of soft-tissue motion, leading to poor auto-tracking results. The impact of motion blurring and image noise on the tracking performance was investigated. Methods: Phantom and patient images were acquired at a frame rate of 12.87 Hz on an AS1000 portal imager. Low frame rate images were obtained by continuous frame averaging. A previously validated tracking algorithm was employed for auto-tracking. The difference between the programmed and auto-tracked positions of a Las Vegas phantom moving in the superior-inferior direction defined the tracking error (δ). Motion blurring was assessed by measuring the area change of the circle with the greatest depth. Additionally, lung tumors on 1747 frames acquired at eleven field angles from four radiotherapy patients were manually and automatically tracked with varying frame averaging. δ was defined by the position difference of the two tracking methods. Image noise was defined as the standard deviation of the background intensity. Motion blurring and image noise were correlated with δ using the Pearson correlation coefficient (R). Results: For both phantom and patient studies, the auto-tracking errors increased at frame rates lower than 4.29 Hz. Above 4.29 Hz, changes in errors were negligible, with δ < 1.60 mm. Motion blurring and image noise were observed to increase and decrease with frame averaging, respectively. Motion blurring and tracking errors were significantly correlated for the phantom (R = 0.94) and patient studies (R = 0.72). Moderate to poor correlation was found between image noise and tracking error, with R = −0.58 and −0.19 for the two studies, respectively. Conclusion: An image acquisition frame rate of at least 4.29 Hz is recommended for cine EPID tracking. Motion blurring in images with frame rates below 4.29 Hz can substantially reduce the accuracy of auto-tracking. This work is supported in part by Varian Medical Systems, Inc.
NASA Astrophysics Data System (ADS)
Chang, Chun; Huang, Benxiong; Xu, Zhengguang; Li, Bin; Zhao, Nan
2018-02-01
Three soft-input-soft-output (SISO) detection methods for dual-polarized quadrature duobinary (DP-QDB), including maximum-logarithmic-maximum-a-posteriori-probability-algorithm (Max-log-MAP)-based detection, soft-output-Viterbi-algorithm (SOVA)-based detection, and a proposed SISO detection, all of which can be combined with SISO decoding, are presented. The three detection methods are investigated by simulation at 128 Gb/s in five-channel wavelength-division-multiplexing uncoded and low-density parity-check (LDPC) coded DP-QDB systems. Max-log-MAP-based detection needs the returning-to-initial-states (RTIS) process despite having the best performance. When an LDPC code with a code rate of 0.83 is used, the detecting-and-decoding scheme with the proposed SISO detection does not need RTIS and has better bit error rate (BER) performance than the scheme with SOVA-based detection. The former can reduce the optical signal-to-noise ratio (OSNR) requirement (at BER = 10^-5) by 2.56 dB relative to the latter. The application of the SISO iterative detection in LDPC-coded DP-QDB systems makes a good trade-off among transmission efficiency, OSNR requirement, and transmission distance, compared with the other two SISO methods.
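The approximation that distinguishes Max-log-MAP from exact MAP detection is the replacement of the Jacobian logarithm log Σ exp(·) by a simple maximum when forming bit log-likelihood ratios. A minimal sketch with toy log-domain symbol metrics (not the paper's DP-QDB trellis):

```python
import numpy as np

def log_sum_exp(x):
    """Exact Jacobian logarithm: log(sum(exp(x))), computed stably."""
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

def max_log(x):
    """Max-log approximation used by Max-log-MAP: log(sum(exp(x))) ~= max(x)."""
    return np.max(x)

# LLR of one bit from per-symbol metrics (toy numbers)
metrics_bit0 = np.array([-1.2, -3.0])   # candidate symbols carrying bit = 0
metrics_bit1 = np.array([-2.1, -0.8])   # candidate symbols carrying bit = 1
llr_exact = log_sum_exp(metrics_bit0) - log_sum_exp(metrics_bit1)
llr_maxlog = max_log(metrics_bit0) - max_log(metrics_bit1)
print(f"exact LLR = {llr_exact:.3f}, max-log LLR = {llr_maxlog:.3f}")
```

The max-log shortcut trades a small LLR bias for much lower complexity, which is why Max-log-MAP retains the best performance among the three methods while the cheaper SOVA and proposed SISO detectors avoid its RTIS requirement.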
Neurological soft signs in children with attention deficit hyperactivity disorder.
Patankar, V C; Sangle, J P; Shah, Henal R; Dave, M; Kamath, R M
2012-04-01
Attention deficit hyperactivity disorder (ADHD) is a common neurodevelopmental disorder with wide repercussions. Since it is etiologically related to delayed maturation, neurological soft signs (NSS) could be a tool to assess this. Further, the correlation of NSS with the severity and type of ADHD and the presence of Specific Learning Disability (SLD) would give further insight into it. To study neurological soft signs and risk factors (type, mode of delivery, and milestones) in children with ADHD and to correlate NSS with type and severity of ADHD and with co-morbid Specific Learning Disability. The study was carried out in the child care services of a tertiary teaching urban hospital. It was a cross-sectional single-interview study. 52 consecutive children diagnosed with ADHD were assessed for the presence of neurological soft signs using the Revised Physical and Neurological Examination Soft Signs (PANESS) scale. The ADHD was rated by parents using the ADHD parent rating scale. The data were analyzed using the chi-squared test and Pearson's correlational analysis. Neurological soft signs are present in 84% of children. They are equally present in both the inattentive-hyperactive and impulsive-hyperactive types of ADHD. The presence of neurological soft signs in ADHD is independent of the presence of co-morbid SLD. Dysrhythmias and overflow with gait were typically seen for the impulsive-hyperactive type, and higher severity of ADHD is related to more errors.
A forward error correction technique using a high-speed, high-rate single chip codec
NASA Astrophysics Data System (ADS)
Boyd, R. W.; Hartman, W. F.; Jones, Robert E.
The authors describe an error-correction coding approach that allows operation in either burst or continuous modes at data rates of multiple hundreds of megabits per second. Bandspreading is low since the code rate is 7/8 or greater, which is consistent with high-rate link operation. The encoder, along with a hard-decision decoder, fits on a single application-specific integrated circuit (ASIC) chip. Soft-decision decoding is possible using applique hardware in conjunction with the hard-decision decoder. The expected coding gain is a function of the application and is approximately 2.5 dB for hard-decision decoding at a 10^-5 bit-error rate with phase-shift-keying modulation and additive white Gaussian noise interference. The principal use envisioned for this technique is to achieve a modest amount of coding gain on high-data-rate, bandwidth-constrained channels. Data rates of up to 300 Mb/s can be accommodated by the codec chip. The major objective is burst-mode communications, where code words are composed of 32n data bits followed by 32 overhead bits.
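To put the quoted 2.5 dB coding gain in context, one can work backwards from the uncoded BPSK/PSK requirement at a 10^-5 bit-error rate (a standard textbook calculation assuming SciPy; the ~9.6 dB uncoded figure is not from the paper):

```python
import math
from scipy.special import erfcinv

# For BPSK on AWGN, Pb = 0.5 * erfc(sqrt(Eb/N0)).
pb = 1e-5
ebn0_lin = erfcinv(2 * pb) ** 2              # invert for the required Eb/N0
ebn0_db = 10 * math.log10(ebn0_lin)          # ~9.6 dB uncoded
print(f"uncoded Eb/N0 at Pb = 1e-5: {ebn0_db:.1f} dB")
print(f"with ~2.5 dB coding gain:   {ebn0_db - 2.5:.1f} dB")
```

In other words, the codec would let a link close with roughly 7.1 dB Eb/N0 where an uncoded PSK link needs about 9.6 dB.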
FPGA implementation of advanced FEC schemes for intelligent aggregation networks
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
In state-of-the-art fiber-optic communication systems, fixed forward error correction (FEC) and a fixed constellation size are employed. While it is important to closely approach the Shannon limit by using turbo product codes (TPC) and low-density parity-check (LDPC) codes with a soft-decision decoding (SDD) algorithm, rate-adaptive techniques, which enable increased information rates over short links and reliable transmission over long links, are likely to become more important with ever-increasing network traffic demands. In this invited paper, we describe a rate-adaptive non-binary LDPC coding technique and demonstrate its flexibility and good performance, exhibiting no error floor at BERs down to 10^-15 over the entire code-rate range, by FPGA-based emulation, making it a viable solution for next-generation high-speed intelligent aggregation networks.
NASA Astrophysics Data System (ADS)
Fulkerson, David E.
2010-02-01
This paper describes a new methodology for characterizing the electrical behavior and soft error rate (SER) of CMOS and SiGe HBT integrated circuits that are struck by ions. A typical engineering design problem is to calculate the SER of a critical path that commonly includes several circuits such as an input buffer, several logic gates, logic storage, clock tree circuitry, and an output buffer. Using multiple 3D TCAD simulations to solve this problem is too costly and time-consuming for general engineering use. The new and simple methodology handles the problem with ease by simple SPICE simulations. The methodology accurately predicts the measured threshold linear energy transfer (LET) of a bulk CMOS SRAM. It solves for circuit currents and voltage spikes that are close to those predicted by expensive 3D TCAD simulations. It accurately predicts the measured event cross-section vs. LET curve of an experimental SiGe HBT flip-flop. The experimental cross section vs. frequency behavior and other subtle effects are also accurately predicted.
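The SPICE-level ion-strike models that this kind of methodology relies on typically inject a double-exponential current pulse at the struck node. A sketch of such a pulse (the parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def seu_current(t, q_coll=150e-15, tau_f=200e-12, tau_r=50e-12):
    """Double-exponential single-event current pulse (A).

    q_coll: collected charge (C); tau_f / tau_r: fall / rise time constants (s).
    The pulse integrates to approximately q_coll.
    """
    return (q_coll / (tau_f - tau_r)) * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

t = np.linspace(0.0, 2e-9, 2001)                 # 2 ns window, 1 ps steps
i = seu_current(t)
charge = np.sum(i) * (t[1] - t[0])               # simple Riemann sum
print(f"peak current: {i.max()*1e3:.2f} mA, collected charge: {charge*1e15:.0f} fC")
```

Sweeping q_coll in such a SPICE current source until the circuit upsets gives the critical charge, which is then mapped to a threshold LET — the quantity the paper validates against measured SRAM and SiGe HBT flip-flop data.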
2012-01-01
Background Although proton radiotherapy is a promising new approach for cancer patients, functional interference is a concern for patients with implantable cardioverter defibrillators (ICDs). The purpose of this study was to clarify the influence of secondary neutrons induced by proton radiotherapy on ICDs. Methods The experimental set-up simulated proton radiotherapy for a patient with an ICD. Four new ICDs were placed 0.3 cm laterally and 3 cm distally outside the radiation field in order to evaluate the influence of secondary neutrons. The cumulative in-field radiation dose was 107 Gy over 10 sessions of irradiation with a dose rate of 2 Gy/min and a field size of 10 × 10 cm2. After each radiation fraction, interference with the ICD by the therapy was analyzed with an ICD programmer. The dose distributions of secondary neutrons were estimated by Monte Carlo simulation. Results The frequency of the power-on reset, the most serious soft error, in which the programmed pacing mode changes temporarily to a safety back-up mode, was 1 per approximately 50 Gy. The total number of soft errors logged in all devices was 29, a rate of 1 soft error per approximately 15 Gy. No permanent device malfunctions were detected. The calculated dose of secondary neutrons per 1 Gy proton dose in the phantom was approximately 1.3-8.9 mSv/Gy. Conclusions With the present experimental settings, the probability was approximately 1 power-on reset per 50 Gy, which is below the dose level (60-80 Gy) generally used in proton radiotherapy. Further quantitative analysis in various settings is needed to establish guidelines regarding proton radiotherapy for cancer patients with ICDs. PMID:22284700
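The quoted rate of roughly one soft error per 15 Gy follows directly from the reported counts, assuming all four devices were exposed for the full 107 Gy course:

```python
# Reported: 29 soft errors logged across 4 devices, each present for the
# cumulative in-field dose of 107 Gy.
errors, devices, dose_gy = 29, 4, 107
device_gy = devices * dose_gy                 # total device-dose = 428 Gy
print(f"~1 soft error per {device_gy / errors:.1f} Gy")   # ~14.8 Gy -> "per ~15 Gy"
```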
Laser as a Tool to Study Radiation Effects in CMOS
NASA Astrophysics Data System (ADS)
Ajdari, Bahar
Energetic particles from cosmic-ray or terrestrial sources can strike sensitive areas of CMOS devices and cause soft errors. Understanding the effects of such interactions is crucial as device technology advances and chip reliability becomes more important than ever. Particle accelerator testing has been the standard method to characterize the sensitivity of chips to single event upsets (SEUs). However, because of cost and availability limitations, other techniques have been explored. The pulsed laser has been a successful tool for characterizing SEU behavior, but to this day laser testing has not been recognized as a method comparable to beam testing. In this thesis, I propose a methodology for correlating laser soft error rate (SER) to particle-beam data. Additionally, results are presented showing a temperature dependence of SER and the "neighbor effect" phenomenon, where, due to the close proximity of devices, a "weakening effect" in the ON state can be observed.
NASA Astrophysics Data System (ADS)
Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko
2017-08-01
We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to SEU errors in its main memory. The other is a delay tolerant network (DTN) system simulator. It simulates the power dissipation of the system's wireless sensor network nodes using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on the embedded computer with 1 Gbit MRAM-based working memory is less than 1 failure in time (FIT, i.e., one failure per 10^9 device-hours). We also demonstrated that the energy consumption of the DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.
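Since FIT is defined as failures per 10^9 device-hours, a simulated or measured upset count converts to a FIT rate as below (the numbers here are hypothetical, not the paper's):

```python
def fit_rate(n_failures: float, n_devices: float, hours: float) -> float:
    """FIT = failures per 1e9 device-hours."""
    return n_failures / (n_devices * hours) * 1e9

# Hypothetical: 2 memory-induced failures observed over 1000 units in one year
print(f"{fit_rate(2, 1000, 8760):.2f} FIT")   # ~0.23 FIT, i.e., well under 1 FIT
```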
An alternative data filling approach for prediction of missing data in soft sets (ADFIS).
Sadiq Khan, Muhammad; Al-Garadi, Mohammed Ali; Wahab, Ainuddin Wahid Abdul; Herawan, Tutut
2016-01-01
Soft set theory is a mathematical approach that provides a solution for dealing with uncertain data. As a standard soft set, it can be represented as a Boolean-valued information system, and hence it has been used in hundreds of useful applications. Meanwhile, these applications become worthless if the Boolean information system contains missing data due to error, security, or mishandling. Few studies have focused on handling partially incomplete soft sets, and none of them achieves a high accuracy rate in predicting missing data. The data filling approach for incomplete soft sets (DFIS) has been shown to have the best performance among all previous approaches; however, accuracy remains its main problem. In this paper, we propose an alternative data filling approach for prediction of missing data in soft sets, namely ADFIS. The novelty of ADFIS is that, unlike the previous approach that used probability, we focus more on the reliability of associations among parameters in the soft set. Experimental results on small datasets, four UCI benchmark datasets, and the causality workbench lung cancer (LUCAP2) dataset show that ADFIS achieves better accuracy than DFIS.
Register file soft error recovery
Fleischer, Bruce M.; Fox, Thomas W.; Wait, Charles D.; Muff, Adam J.; Watson, III, Alfred T.
2013-10-15
Register file soft error recovery including a system that includes a first register file and a second register file that mirrors the first register file. The system also includes an arithmetic pipeline for receiving data read from the first register file, and error detection circuitry to detect whether the data read from the first register file includes corrupted data. The system further includes error recovery circuitry to insert an error recovery instruction into the arithmetic pipeline in response to detecting the corrupted data. The inserted error recovery instruction replaces the corrupted data in the first register file with a copy of the data from the second register file.
Preisig, James C
2005-07-01
Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ_s²) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ_0²) and the excess error (σ_e²). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments and motivates the implementation of a DFE that is robust with respect to channel estimation errors.
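In this notation, the decomposition stated above can be written compactly; the following simply restates the definitions rather than adding a new result:

```latex
% Soft decision error decomposition for a channel-estimate-based equalizer
\sigma_s^2 \;=\;
\underbrace{\sigma_0^2}_{\substack{\text{minimum achievable error}\\ \text{(perfect channel knowledge)}}}
\;+\;
\underbrace{\sigma_e^2}_{\substack{\text{excess error from}\\ \text{channel-estimation errors}}}
```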
Practicality of Evaluating Soft Errors in Commercial sub-90 nm CMOS for Space Applications
NASA Technical Reports Server (NTRS)
Pellish, Jonathan A.; LaBel, Kenneth A.
2010-01-01
The purpose of this presentation is to highlight the evolution of space memory evaluation, review recent developments regarding low-energy proton direct-ionization soft errors, assess current space memory evaluation challenges, including the increasing number of non-volatile technology choices, and discuss related testing and evaluation complexities.
Multi-stage decoding of multi-level modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Costello, Daniel J., Jr.
1991-01-01
Various types of multi-stage decoding for multi-level modulation codes are investigated. It is shown that if the component codes of a multi-level modulation code and the types of decoding at the various stages are chosen properly, high spectral efficiency and large coding gain can be achieved with reduced decoding complexity. In particular, it is shown that the difference in performance between suboptimum multi-stage soft-decision maximum likelihood decoding of a modulation code and single-stage optimum soft-decision decoding of the code is very small: only a fraction of a dB loss in signal-to-noise ratio at a bit error rate (BER) of 10^-6.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques for pulsed radio frequency interference (RFI) on a Low-Density Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to a space vehicle that might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes whose decoding performance approaches the Shannon limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS), and it has been chosen for some spacecraft designs. Even though this code is designed as a powerful FEC code for the additive white Gaussian noise channel, simulation data and test results show that the performance of the LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis (through modeling and simulation) was conducted to evaluate the impact of the pulsed RFI, and a few implementation techniques were investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance in terms of codeword error rate (CWER) under pulsed RFI can be improved by up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor in LDPC decoding performance appears around CWER = 10^-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
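One simple way to act on the soft-decision data ahead of the LDPC decoder, in the spirit of the reshuffle scheme summarized above (the presentation's exact scheme is not reproduced here), is to treat samples that coincide with a known RFI pulse as erasures by zeroing their log-likelihood ratios:

```python
import numpy as np

def blank_rfi_llrs(llrs: np.ndarray, pulse_mask: np.ndarray) -> np.ndarray:
    """Treat samples hit by a known RFI pulse as erasures (LLR = 0).

    A zero LLR tells the decoder 'no information', preventing the large,
    wrong-signed LLRs produced during the pulse from poisoning decoding.
    """
    out = llrs.copy()
    out[pulse_mask] = 0.0
    return out

llrs = np.random.default_rng(1).normal(3.0, 1.0, 2048)  # healthy soft-decision data
mask = np.zeros(2048, dtype=bool)
mask[500:620] = True                                    # assumed pulse location
clean = blank_rfi_llrs(llrs, mask)                      # pass `clean` to the decoder
```

This works because an LDPC decoder recovers erased positions from parity constraints far more readily than it overcomes confidently wrong soft inputs.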
Impact of Temporal Masking of Flip-Flop Upsets on Soft Error Rates of Sequential Circuits
NASA Astrophysics Data System (ADS)
Chen, R. M.; Mahatme, N. N.; Diggins, Z. J.; Wang, L.; Zhang, E. X.; Chen, Y. P.; Liu, Y. N.; Narasimham, B.; Witulski, A. F.; Bhuva, B. L.; Fleetwood, D. M.
2017-08-01
Reductions in single-event (SE) upset (SEU) rates for sequential circuits due to temporal masking effects are evaluated. The impacts of supply voltage, combinational-logic delay, flip-flop (FF) SEU performance, and particle linear energy transfer (LET) values are analyzed for SE cross sections of sequential circuits. Alpha particles and heavy ions with different LET values are used to characterize the circuits fabricated at the 40-nm bulk CMOS technology node. Experimental results show that increasing the delay of the logic circuit present between FFs and decreasing the supply voltage are two effective ways of reducing SE error rates for sequential circuits for particles with low LET values due to temporal masking. SEU-hardened FFs benefit less from temporal masking than conventional FFs. Circuit hardening implications for SEU-hardened and unhardened FFs are discussed.
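A first-order way to see why longer combinational delay helps is a window-of-vulnerability model: a flip-flop upset propagates only if it occurs early enough in the clock cycle for the corrupted value to traverse the downstream logic before the next capturing edge. The sketch below is a simplified illustration of this reasoning, not the paper's measurement methodology:

```python
def temporal_derating(t_clk_ps: float, t_logic_ps: float) -> float:
    """Fraction of the clock period during which a flip-flop upset can still
    be captured by the next stage (simplified window-of-vulnerability model).

    Upsets striking less than t_logic before the next edge are masked because
    the corrupted value cannot propagate through the logic in time.
    """
    window = max(t_clk_ps - t_logic_ps, 0.0)
    return window / t_clk_ps

# Longer combinational delay -> smaller capture window -> lower error rate
for t_logic_ps in (100.0, 400.0, 800.0):
    print(f"{t_logic_ps:5.0f} ps logic delay -> derating {temporal_derating(1000.0, t_logic_ps):.2f}")
```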
Random Weighting, Strong Tracking, and Unscented Kalman Filter for Soft Tissue Characterization.
Shin, Jaehyun; Zhong, Yongmin; Oetomo, Denny; Gu, Chengfan
2018-05-21
This paper presents a new nonlinear filtering method based on the Hunt-Crossley model for online nonlinear soft tissue characterization. This method overcomes the problem of performance degradation in the unscented Kalman filter due to contact model error. It adopts the concept of Mahalanobis distance to identify contact model error, and further incorporates a scaling factor in the predicted state covariance to compensate for the identified model error. This scaling factor is determined according to the principle of innovation orthogonality to avoid the cumbersome computation of the Jacobian matrix, and the random weighting concept is adopted to improve the estimation accuracy of the innovation covariance. A master-slave robotic indentation system is developed to validate the performance of the proposed method. Simulation and experimental results, as well as comparison analyses, demonstrate the efficacy of the proposed method for online characterization of soft tissue parameters in the presence of contact model error.
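A minimal sketch of the Mahalanobis-distance gate described above (the scaling-factor compensation and random weighting are omitted, and the chi-square threshold is an assumed design choice), using the filter innovation and its predicted covariance:

```python
import numpy as np

def model_error_detected(innovation: np.ndarray, s_cov: np.ndarray,
                         threshold: float) -> bool:
    """Flag contact-model error when the squared Mahalanobis distance of the
    filter innovation exceeds a chi-square gate."""
    d2 = innovation @ np.linalg.solve(s_cov, innovation)
    return d2 > threshold

# Toy 2D innovation with its predicted covariance; gate from chi2.ppf(0.99, df=2)
nu = np.array([0.8, -2.9])
S = np.array([[1.0, 0.1],
              [0.1, 0.5]])
print(model_error_detected(nu, S, 9.21))   # True -> trigger model-error compensation
```

When the test fires, the paper's method inflates the predicted state covariance by a scaling factor rather than trusting the mismatched contact model.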
Single event upset in avionics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taber, A.; Normand, E.
1993-04-01
Data from military/experimental flights and laboratory testing indicate that typical non-radiation-hardened 64K and 256K static random access memories (SRAMs) can experience a significant soft upset rate at aircraft altitudes due to energetic neutrons created by cosmic ray interactions in the atmosphere. It is suggested that error detection and correction (EDAC) circuitry be considered for all avionics designs containing large amounts of semiconductor memory.
Link Performance Analysis and monitoring - A unified approach to divergent requirements
NASA Astrophysics Data System (ADS)
Thom, G. A.
Link performance analysis and real-time monitoring are generally covered by a wide range of equipment. Bit error rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general-purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, derived real-time BER from frame sync errors, and data quality analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with his needs.
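Of the listed techniques, deriving a real-time BER from frame sync errors is the most self-contained: the sync word is the only overhead whose transmitted bits are known exactly, so its observed error rate serves as a proxy for the stream BER. A sketch with hypothetical numbers:

```python
def derived_ber(sync_bit_errors: int, frames: int, sync_len_bits: int) -> float:
    """Estimate link BER from errors observed in the known frame-sync pattern.

    Because the sync word's transmitted bits are known a priori, every
    mismatched sync bit is an unambiguous channel error.
    """
    return sync_bit_errors / (frames * sync_len_bits)

# e.g., 42 errored sync bits over 1e6 frames with a 32-bit sync word
print(f"{derived_ber(42, 1_000_000, 32):.2e}")   # ~1.3e-06
```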
Real-time soft tissue motion estimation for lung tumors during radiotherapy delivery.
Rottmann, Joerg; Keall, Paul; Berbeco, Ross
2013-09-01
To provide real-time lung tumor motion estimation during radiotherapy treatment delivery without the need for implanted fiducial markers or additional imaging dose to the patient. 2D radiographs from the therapy beam's-eye-view (BEV) perspective are captured at a frame rate of 12.8 Hz with a frame grabber allowing direct RAM access to the image buffer. An in-house developed real-time soft tissue localization algorithm is utilized to calculate soft tissue displacement from these images in real-time. The system is tested with a Varian TX linear accelerator and an AS-1000 amorphous silicon electronic portal imaging device operating at a resolution of 512 × 384 pixels. The accuracy of the motion estimation is verified with a dynamic motion phantom. Clinical accuracy was tested on lung SBRT images acquired at 2 fps. Real-time lung tumor motion estimation from BEV images without fiducial markers is successfully demonstrated. For the phantom study, a mean tracking error <1.0 mm [root mean square (rms) error of 0.3 mm] was observed. The tracking rms accuracy on BEV images from a lung SBRT patient (≈20 mm tumor motion range) is 1.0 mm. The authors demonstrate for the first time real-time markerless lung tumor motion estimation from BEV images alone. The described system can operate at a frame rate of 12.8 Hz and does not require prior knowledge to establish traceable landmarks for tracking on the fly. The authors show that the geometric accuracy is similar to (or better than) previously published markerless algorithms not operating in real-time.
CLEAR: Cross-Layer Exploration for Architecting Resilience
2017-03-01
Cross-layer exploration of soft-error resilience, based on analysis of the effects of soft errors on 18 application benchmarks running on an out-of-order core (OoO-core) [Wang 04], provides cost-effective solutions (~1% additional energy cost for the same 50× resilience improvement); this extensive exploration enables the cross-layer resilience question to be answered conclusively and yields a highly effective soft error resilience approach.
Alpha particle-induced soft errors in microelectronic devices. I
NASA Astrophysics Data System (ADS)
Redman, D. J.; Sega, R. M.; Joseph, R.
1980-03-01
The article provides a tutorial review and trend assessment of the problem of alpha particle-induced soft errors in VLSI memories. Attention is given to an analysis of the design evolution of modern ICs, and the characteristics of alpha particles and their origin in IC packaging are reviewed. Finally, the process of an alpha particle penetrating an IC is examined.
Low delay and area efficient soft error correction in arbitration logic
Sugawara, Yutaka
2013-09-10
There is provided an arbitration logic device for controlling access to a shared resource. The arbitration logic device comprises at least one storage element, a winner selection logic device, and an error detection logic device. The storage element stores a plurality of requestors' information. The winner selection logic device selects a winner requestor among the requestors based on the requestors' information received from the plurality of requestors. The winner selection logic device selects the winner requestor without checking whether there is a soft error in the winner requestor's information.
Implementation Of The Configurable Fault Tolerant System Experiment On NPSAT 1
2016-03-01
Master's thesis. The Configurable Fault Tolerant System experiment on NPSAT1 comprises an open-source microprocessor without interlocked pipeline stages (MIPS) based processor softcore, a cached memory structure capable of accessing double data rate type three (DDR3) and secure digital card memories, an interface to the main satellite bus, and Xilinx's soft error mitigation softcore.
Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan
2011-01-01
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
Multi-Spectral Solar Telescope Array. II - Soft X-ray/EUV reflectivity of the multilayer mirrors
NASA Technical Reports Server (NTRS)
Barbee, Troy W., Jr.; Weed, J. W.; Hoover, Richard B.; Allen, Maxwell J.; Lindblom, Joakim F.; O'Neal, Ray H.; Kankelborg, Charles C.; Deforest, Craig E.; Paris, Elizabeth S.; Walker, Arthur B. C., Jr.
1991-01-01
The Multispectral Solar Telescope Array is a rocket-borne observatory which encompasses seven compact soft X-ray/EUV, multilayer-coated, and two compact far-UV, interference film-coated, Cassegrain and Ritchey-Chretien telescopes. Extensive measurements are presented on the efficiency and spectral bandpass of the X-ray/EUV telescopes. Attention is given to systematic errors and measurement errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batista, Antonio J. N.; Santos, Bruno; Fernandes, Ana
The data acquisition and control instrumentation cubicles room of the ITER tokamak will be irradiated with neutrons during fusion reactor operation. A Virtex-6 FPGA from Xilinx (XC6VLX365T-1FFG1156C) is used on the ATCA-IO-PROCESSOR board, included in the ITER Catalog of I and C products - Fast Controllers. The Virtex-6 is a re-programmable logic device where the configuration is stored in static RAM (SRAM), functional data are stored in dedicated block RAM (BRAM), and functional state logic resides in flip-flops. Single event upsets (SEUs) due to the ionizing radiation of neutrons cause soft errors: unintended changes (bit-flips) to the values stored in state elements of the FPGA. SEU monitoring and soft-error repair, where possible, were explored in this work. An FPGA built-in Soft Error Mitigation (SEM) controller detects and corrects soft errors in the FPGA configuration memory. Novel SEU sensors with an error correction code (ECC) detect and repair the BRAM memories. Proper management of SEUs can increase the reliability and availability of control instrumentation hardware for nuclear applications. The results of the tests performed using the SEM controller and the BRAM SEU sensors are presented for a Virtex-6 FPGA (XC6VLX240T-1FFG1156C) irradiated with neutrons from the Portuguese Research Reactor (RPI), a 1 MW nuclear fission reactor operated by IST near Lisbon. Results show that the proposed SEU mitigation technique is able to repair the majority of the detected SEU errors in the configuration and BRAM memories. (authors)
Utilization of robotic-arm assisted total knee arthroplasty for soft tissue protection.
Sultan, Assem A; Piuzzi, Nicolas; Khlopas, Anton; Chughtai, Morad; Sodhi, Nipun; Mont, Michael A
2017-12-01
Despite the well-established success of total knee arthroplasty (TKA), iatrogenic ligamentous and soft tissue injuries are infrequent but potentially devastating complications for clinical outcomes. These injuries are often related to technical errors and excessive soft tissue manipulation, particularly during bony resections. Recently, robotic-arm assisted TKA was introduced and has demonstrated promising results, with potential technical advantages over manual surgery in implant positioning and mechanical accuracy. Furthermore, soft tissue protection is an additional potential advantage offered by these systems, which can reduce the inadvertent human technical errors encountered during standard manual resections. Therefore, given the relative paucity of literature, we attempted to answer the following questions: 1) Does robotic-arm assisted TKA offer a technical advantage that allows enhanced soft tissue protection? 2) What is the available evidence on soft tissue protection? Recently introduced robotic-arm assisted TKA systems with advanced technology have shown promising clinical outcomes and soft tissue protection at short- and mid-term follow-up, with results comparable or superior to manual TKA. In this review, we explore this dimension of robotics in TKA and examine the soft tissue related complications currently reported in the literature.
Hard sphere perturbation theory for thermodynamics of soft-sphere model liquid
NASA Astrophysics Data System (ADS)
Mon, K. K.
2001-09-01
It is a long-standing consensus in the literature that hard sphere perturbation theory (HSPT) is not accurate for dense soft sphere model liquids interacting with repulsive r^-n pair potentials for small n. In this paper, we show that if the intrinsic error of HSPT for soft sphere model liquids is accounted for, then this is not completely true. We present results for n = 4, 6, 9, 12 which indicate that even first-order variational HSPT can provide free energy upper bounds to within a few percent at densities near freezing when corrected for the intrinsic error of the HSPT.
Space vehicle Viterbi decoder. [data converters, algorithms
NASA Technical Reports Server (NTRS)
1975-01-01
The design and fabrication of an extremely low-power, constraint-length 7, rate 1/3 Viterbi decoder brassboard capable of operating at information rates of up to 100 kb/s is presented. The brassboard is partitioned to facilitate a later transition to an LSI version requiring even less power. The effect of soft-decision thresholds, path memory lengths, and output selection algorithms on the bit error rate is evaluated. A branch synchronization algorithm is compared with a more conventional approach. The implementation of the decoder and its test set (including all-digital noise source) are described along with the results of various system tests and evaluations. Results and recommendations are presented.
Compact disk error measurements
NASA Technical Reports Server (NTRS)
Howe, D.; Harriman, K.; Tehranchi, B.
1993-01-01
The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high-resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard-decision (i.e., 1-bit error flags) and soft-decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
Full temperature single event upset characterization of two microprocessor technologies
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Coss, James R.; Smith, L. S.; Rax, Bernard; Huebner, Mark
1988-01-01
Data for the 9450 I3L bipolar microprocessor and the 80C86 CMOS/epi (vintage 1985) microprocessor are presented, showing single-event soft errors for the full MIL-SPEC temperature range of -55 to 125 C. These data show for the first time that the soft-error cross sections continue to decrease with decreasing temperature at subzero temperatures. The temperature dependence of the two parts, however, is very different.
New-Sum: A Novel Online ABFT Scheme For General Iterative Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tao, Dingwen; Song, Shuaiwen; Krishnamoorthy, Sriram
Emerging high-performance computing platforms, with large component counts and lower power margins, are anticipated to be more susceptible to soft errors in both logic circuits and memory subsystems. We present an online algorithm-based fault tolerance (ABFT) approach to efficiently detect and recover from soft errors for general iterative methods. We design a novel checksum-based encoding scheme for matrix-vector multiplication that is resilient to both arithmetic and memory errors. Our design decouples the checksum updating process from the actual computation and allows adaptive checksum overhead control. Building on this new encoding mechanism, we propose two online ABFT designs that can effectively recover from errors when combined with a checkpoint/rollback scheme.
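The core of such a checksum encoding for matrix-vector multiplication can be sketched in a few lines (a minimal illustration of the invariant being checked; the paper's decoupled, adaptively controlled design is more elaborate):

```python
import numpy as np

def checked_matvec(a: np.ndarray, x: np.ndarray, tol: float = 1e-8) -> np.ndarray:
    """Checksum-protected y = A @ x.

    The checksum row c = 1^T A is computed once when A is encoded. After the
    multiply, 1^T y must equal c @ x, so any mismatch flags a soft error in
    the computation or in the memory holding A, x, or y.
    """
    c = a.sum(axis=0)            # encoding step (done once, offline)
    y = a @ x                    # the protected computation
    ref = c @ x
    if abs(y.sum() - ref) > tol * max(1.0, abs(ref)):
        raise RuntimeError("soft error detected in matrix-vector product")
    return y

a = np.random.default_rng(0).normal(size=(64, 64))
y = checked_matvec(a, np.ones(64))
```

On detection, an iterative solver built this way would roll back to the last checkpoint rather than propagate the corrupted iterate.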
NASA Astrophysics Data System (ADS)
Yoshizawa, Masasumi; Nakamura, Yuuta; Ishiguro, Masataka; Moriya, Tadashi
2007-07-01
In this paper, we describe a method of compensating for the attenuation of ultrasound caused by soft tissue in the transducer vibration method for measuring the acoustic impedance of in vivo bone. In the in vivo measurement, the acoustic impedance of bone is measured through soft tissue; therefore, the amplitude of the ultrasound reflected from the bone is attenuated. This attenuation causes an error on the order of -20 to -30% when the acoustic impedance is determined from the measured signals. To compensate for the attenuation, the attenuation coefficient and length of the soft tissue are measured by the transducer vibration method. In an experiment using a phantom, this method allowed measurement of the acoustic impedance with an error typically as small as -8 to 10%.
NASA Astrophysics Data System (ADS)
Li, Lei; Hu, Jianhao
2010-12-01
Notice of Violation of IEEE Publication Principles: "Joint Redundant Residue Number Systems and Module Isolation for Mitigating Single Event Multiple Bit Upsets in Datapath" by Lei Li and Jianhao Hu, in the IEEE Transactions on Nuclear Science, vol. 57, no. 6, Dec. 2010, pp. 3779-3786. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains substantial duplication of original text from the papers cited below. The original text was copied without attribution (including appropriate references to the original author(s) and/or paper title) and without permission. Due to the nature of this violation, reasonable effort should be made to remove all past references to this paper, and future references should be made to the following articles: "Multiple Error Detection and Correction Based on Redundant Residue Number Systems" by Vik Tor Goh and M.U. Siddiqi, in the IEEE Transactions on Communications, vol. 56, no. 3, March 2008, pp. 325-330; "A Coding Theory Approach to Error Control in Redundant Residue Number Systems. I: Theory and Single Error Correction" by H. Krishna, K.-Y. Lin, and J.-D. Sun, in the IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 39, no. 1, Jan 1992, pp. 8-17. In this paper, we propose a joint scheme that combines redundant residue number systems (RRNS) with module isolation (MI) for mitigating single event multiple bit upsets (SEMBUs) in a datapath. The proposed hardening scheme employs redundant residues to improve the fault tolerance of the datapath and module spacings to guarantee that SEMBUs caused by charge sharing do not propagate among the operation channels of different moduli. The features of RRNS, such as independence, parallelism, and error correction, are exploited to establish the radiation hardening architecture for the datapath in radiation environments. In the proposed scheme, all of the residues can be processed independently, and most of the soft errors in the datapath can be corrected through the redundant relationship among the residues at the correction module, which is allocated at the end of the datapath. In the back-end implementation, the module isolation technique is used to improve the soft error rate performance of RRNS by physically separating the operation channels of different moduli. The case studies show at least an order of magnitude decrease in the soft error rate (SER) compared to non-RHBD designs, and demonstrate that RRNS+MI can reduce the SER from 10^-12 to 10^-17 when the number of datapath processing steps is 10^6. The proposed scheme can even achieve lower area and latency overheads than the design without radiation hardening, since RRNS reduces the operational complexity of the datapath.
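The consistency check that RRNS error detection rests on can be demonstrated with tiny moduli: a value encoded in both information and redundant residue channels reconstructs outside the legitimate dynamic range when any single residue is corrupted. The moduli and injected error below are illustrative assumptions, not the paper's design:

```python
def to_residues(x: int, moduli: list[int]) -> list[int]:
    return [x % m for m in moduli]

def crt(residues: list[int], moduli: list[int]) -> int:
    """Chinese remainder reconstruction over pairwise-coprime moduli."""
    prod = 1
    for m in moduli:
        prod *= m
    total = 0
    for r, m in zip(residues, moduli):
        p = prod // m
        total += r * pow(p, -1, m) * p   # modular inverse (Python 3.8+)
    return total % prod

info_mod = [3, 5, 7]          # legitimate dynamic range: 0..104
red_mod = [11, 13]            # redundant moduli extend the range for checking
moduli = info_mod + red_mod

x = 97
res = to_residues(x, moduli)
res[1] = (res[1] + 2) % 5     # inject a single soft error in one residue channel

# A corrupted codeword reconstructs outside [0, 3*5*7) with high probability.
x_hat = crt(res, moduli)
print("error detected:", x_hat >= 3 * 5 * 7)   # True
```

Because each residue channel is independent, the module-isolation spacing in the paper ensures a charge-sharing event corrupts at most one channel, which is exactly the error pattern this consistency check catches.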
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
Low-density parity-check codes for volume holographic memory systems.
Pishro-Nik, Hossein; Rahnavard, Nazanin; Ha, Jeongseok; Fekri, Faramarz; Adibi, Ali
2003-02-10
We investigate the application of low-density parity-check (LDPC) codes in volume holographic memory (VHM) systems. We show that a carefully designed irregular LDPC code has a very good performance in VHM systems. We optimize high-rate LDPC codes for the nonuniform error pattern in holographic memories to reduce the bit error rate extensively. The prior knowledge of noise distribution is used for designing as well as decoding the LDPC codes. We show that these codes have a superior performance to that of Reed-Solomon (RS) codes and regular LDPC counterparts. Our simulation shows that we can increase the maximum storage capacity of holographic memories by more than 50 percent if we use irregular LDPC codes with soft-decision decoding instead of conventionally employed RS codes with hard-decision decoding. The performance of these LDPC codes is close to the information theoretic capacity.
Soft-decision decoding techniques for linear block codes and their error performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu
1996-01-01
The first paper presents a new minimum-weight trellis-based soft-decision iterative decoding algorithm for binary linear block codes. The second paper derives an upper bound on the probability of block error for multilevel concatenated codes (MLCC); the bound evaluates the difference in performance for different decompositions of some codes. The third paper investigates the bit error probability for maximum likelihood decoding of binary linear codes. The fourth and final paper in this report concerns the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel.
MacKay, Mark; Anderson, Collin; Boehme, Sabrina; Cash, Jared; Zobell, Jeffery
2016-04-01
The Institute for Safe Medication Practices has stated that parenteral nutrition (PN) is considered a high-risk medication with the potential to cause harm. Three organizations--the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.), the American Society of Health-System Pharmacists, and the National Advisory Group--have published guidelines for ordering, transcribing, compounding, and administering PN. These national organizations have published data on compliance with the guidelines and the risk of errors. The purpose of this article is to compare total compliance with ordering, transcription, compounding, and administration, and the error rate, at a large pediatric institution. A computerized prescriber order entry (CPOE) program was developed that incorporates dosing with soft- and hard-stop recommendations while simultaneously eliminating the need for paper transcription. A CPOE team prioritized and identified issues, then developed solutions and integrated innovative CPOE and automated compounding device (ACD) technologies and practice changes to minimize opportunities for medication errors in PN prescription, transcription, preparation, and administration. Thirty developmental processes were identified and integrated into the CPOE program, resulting in practices compliant with A.S.P.E.N. safety consensus recommendations. Data from 7 years of development and implementation were analyzed and compared with the published literature on error rates, harm rates, and cost reductions to determine whether our process showed lower error rates than national outcomes. The CPOE program developed was in total compliance with the A.S.P.E.N. guidelines for PN. The frequency of PN medication errors at our hospital over the 7 years was 230 errors per 84,503 PN prescriptions (0.27%), compared with national data reporting that 74 of 4730 prescriptions (1.6%) over 1.5 years were associated with a medication error. Errors were categorized by steps in the PN process: prescribing, transcription, preparation, and administration. There were no transcription errors, and most (95%) errors occurred during administration. We conclude that PN practices conferring a meaningful cost reduction and a lower error rate (2.7/1000 PN) than reported in the literature (15.6/1000 PN) can be ascribed to the development and implementation of practices that conform to national PN guidelines and recommendations. Electronic ordering and compounding programs eliminated all transcription and related opportunities for errors. © 2015 American Society for Parenteral and Enteral Nutrition.
NASA Technical Reports Server (NTRS)
Davarian, F.
1994-01-01
The LOOP computer program was written to simulate the Automatic Frequency Control (AFC) subsystem of a Differential Minimum Shift Keying (DMSK) receiver with a bit rate of 2400 baud. The AFC simulated by LOOP is a first order loop configuration with a first order R-C filter. NASA has been investigating the concept of mobile communications based on low-cost, low-power terminals linked via geostationary satellites. Studies have indicated that low bit rate transmission is suitable for this application, particularly from the frequency and power conservation point of view. A bit rate of 2400 BPS is attractive due to its applicability to the linear predictive coding of speech. Input to LOOP includes the following: 1) the initial frequency error; 2) the double-sided loop noise bandwidth; 3) the filter time constants; 4) the amount of intersymbol interference; and 5) the bit energy to noise spectral density. LOOP output includes: 1) the bit number and the frequency error of that bit; 2) the computed mean of the frequency error; and 3) the standard deviation of the frequency error. LOOP is written in MS SuperSoft FORTRAN 77 for interactive execution and has been implemented on an IBM PC operating under PC DOS with a memory requirement of approximately 40K of 8 bit bytes. This program was developed in 1986.
Belley, Matthew D; Wang, Chu; Nguyen, Giao; Gunasingha, Rathnayaka; Chao, Nelson J; Chen, Benny J; Dewhirst, Mark W; Yoshizumi, Terry T
2014-03-01
Accurate dosimetry is essential when irradiating mice to ensure that functional and molecular endpoints are well understood for the radiation dose delivered. Conventional methods of prescribing dose in mice involve the use of a single dose rate measurement and assume a uniform average dose throughout all organs of the entire mouse. Here, the authors report the individual average organ dose values for the irradiation of a 12, 23, and 33 g mouse on a 320 kVp x-ray irradiator and calculate the resulting error from using conventional dose prescription methods. Organ doses were simulated in the Geant4 application for tomographic emission toolkit using the MOBY mouse whole-body phantom. Dosimetry was performed for three beams utilizing filters A (1.65 mm Al), B (2.0 mm Al), and C (0.1 mm Cu + 2.5 mm Al), respectively. In addition, simulated x-ray spectra were validated with physical half-value layer measurements. Average doses in soft-tissue organs were found to vary by as much as 23%-32% depending on the filter. Compared to filters A and B, filter C provided the hardest beam and had the lowest variation in soft-tissue average organ doses across all mouse sizes, with a difference of 23% for the median mouse size of 23 g. This work suggests a new dose prescription method in small animal dosimetry: it presents a departure from the conventional approach of assigning a single dose value for irradiation of mice to a more comprehensive approach of characterizing individual organ doses to minimize the error and uncertainty. In human radiation therapy, clinical treatment planning establishes the target dose as well as the dose distribution; however, this has generally not been done in small animal research. These results suggest that organ dose errors will be minimized by calibrating the dose rates for all filters, and using different dose rates for different organs.
Li, Lingyun; Zhang, Fuming; Hu, Min; Ren, Fuji; Chi, Lianli; Linhardt, Robert J.
2016-01-01
Low molecular weight heparins are complex polycomponent drugs that have recently become amenable to top-down analysis using liquid chromatography-mass spectrometry. Even using the open source deconvolution software DeconTools and the automatic structural assignment software GlycReSoft, the comparison of two or more low molecular weight heparins is extremely time-consuming, taking about a week for an expert analyst, and provides no guarantee of accuracy. Efficient data processing tools are required to improve analysis. This study uses Microsoft Excel™ Visual Basic for Applications to extend Excel's standard functionality with macro functions and specific mathematical modules for mass spectrometric data processing. The program developed enables the comparison of top-down analytical glycomics data on two or more low molecular weight heparins. The current study describes a new program, GlycCompSoft, which has a low error rate and good time efficiency in the automatic processing of large data sets. The experimental results based on three lots of Lovenox®, Clexane® and three generic enoxaparin samples show that the run time of GlycCompSoft decreases from 11 to 2 seconds when the data processed decreases from 18000 to 1500 rows. PMID:27942011
Random access to mobile networks with advanced error correction
NASA Technical Reports Server (NTRS)
Dippold, Michael
1990-01-01
A random access scheme for unreliable data channels is investigated in conjunction with an adaptive Hybrid-II Automatic Repeat Request (ARQ) scheme using Rate Compatible Punctured Codes (RCPC) for Forward Error Correction (FEC). A simple scheme with fixed frame length and equal slot sizes is chosen, and reservation is implicit by the first packet transmitted randomly in a free slot, similar to Reservation Aloha. This allows the further transmission of redundancy if the last decoding attempt failed. Results show that high channel utilization and superior throughput can be achieved with this scheme, which has quite low implementation complexity. For the example of an interleaved Rayleigh channel with soft decision, utilization and mean delay are calculated. A utilization of 40 percent may be achieved for a frame with the number of slots equal to half the station number under high traffic load. The effects of feedback channel errors and some countermeasures are discussed.
NASA Astrophysics Data System (ADS)
Adineh-Vand, A.; Torabi, M.; Roshani, G. H.; Taghipour, M.; Feghhi, S. A. H.; Rezaei, M.; Sadati, S. M.
2013-09-01
This paper presents a soft computing based artificial intelligence technique, the adaptive neuro-fuzzy inference system (ANFIS), to predict the neutron production rate (NPR) of the IR-IECF device over wide discharge current and voltage ranges. A hybrid learning algorithm consisting of back-propagation and least-squares estimation is used for training the ANFIS model. The performance of the proposed ANFIS model is tested against the experimental data using four performance measures: correlation coefficient, mean absolute error, mean relative error percentage (MRE%) and root mean square error. The obtained results show that the proposed ANFIS model achieves good agreement with the experimental results. In comparison to the experimental data, the proposed ANFIS model has an MRE% of less than 1.53% and 2.85% for training and testing data, respectively. Therefore, this model can be used as an efficient tool to predict the NPR of the IR-IECF device.
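The four performance measures named above have standard definitions; a quick sketch follows, assuming the usual formulas (the paper may normalize MRE% slightly differently).

```python
import numpy as np

# The four measures from the abstract, under their common definitions.
def metrics(y_true, y_pred):
    err = y_pred - y_true
    r = np.corrcoef(y_true, y_pred)[0, 1]                 # correlation coeff.
    mae = np.mean(np.abs(err))                            # mean absolute error
    mre = 100 * np.mean(np.abs(err) / np.abs(y_true))     # mean relative error %
    rmse = np.sqrt(np.mean(err ** 2))                     # root mean square error
    return r, mae, mre, rmse

print(metrics(np.array([1.0, 2.0, 4.0]), np.array([1.1, 1.9, 4.2])))
```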
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2016-02-01
Multiresolution analysis techniques including continuous wavelet transform, empirical mode decomposition, and variational mode decomposition are tested in the context of interest rate next-day variation prediction. In particular, multiresolution analysis techniques are used to decompose the actual interest rate variation, and a feedforward neural network is used for training and prediction. A particle swarm optimization technique is adopted to optimize the network's initial weights. For comparison purposes, an autoregressive moving average model, a random walk process, and the naive model are used as the main reference models. In order to show the feasibility of the presented hybrid models that combine multiresolution analysis techniques and a feedforward neural network optimized by particle swarm optimization, we used a set of six illustrative interest rates: Moody's seasoned Aaa corporate bond yield, Moody's seasoned Baa corporate bond yield, 3-month, 6-month and 1-year treasury bills, and the effective federal funds rate. The forecasting results show that all multiresolution-based prediction systems outperform the conventional reference models on the criteria of mean absolute error, mean absolute deviation, and root mean-squared error. Therefore, it is advantageous to adopt hybrid multiresolution techniques and soft computing models to forecast interest rate daily variations, as they provide good forecasting performance.
Vu, Lien T; Chen, Chao-Chang A; Lee, Chia-Cheng; Yu, Chia-Wei
2018-04-20
This study aims to develop a compensating method to minimize the shrinkage error of the shell mold (SM) in the injection molding (IM) process to obtain uniform optical power in the central optical zone of soft axially symmetric multifocal contact lenses (CL). The Z-shrinkage error along the Z (axial) axis of the anterior SM, corresponding to the anterior surface of a dry contact lens in the IM process, can be minimized by optimizing IM process parameters and then by compensating for additional (Add) powers in the central zone of the original lens design. First, the shrinkage error is minimized by optimizing three levels of four IM parameters, including mold temperature, injection velocity, packing pressure, and cooling time, in 18 IM simulations based on an orthogonal array L18(2^1 × 3^4). Then, based on the Z-shrinkage error from IM simulation, three new contact lens designs are obtained by increasing the Add power in the central zone of the original multifocal CL design to compensate for the optical power errors. Results obtained from IM process simulations and the optical simulations show that the new CL design with a 0.1 D increase in Add power has the closest shrinkage profile to the original anterior SM profile, with a 55% reduction in absolute Z-shrinkage error and more uniform power in the central zone than in the other two cases. Moreover, actual experiments of IM of SM for casting soft multifocal CLs have been performed. The final product of wet CLs has been completed for the original design and the new design. Results of the optical performance have verified the improvement of the compensated design of CLs. The feasibility of this compensating method has been proven based on the measurement results of the produced soft multifocal CLs of the new design. Results of this study can be further applied to predict or compensate for the total optical power errors of the soft multifocal CLs.
Investigation of the Use of Erasures in a Concatenated Coding Scheme
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Marriott, Philip J.
1997-01-01
A new method for declaring erasures in a concatenated coding scheme is investigated. This method is used with the rate 1/2, K = 7 convolutional code and the (255, 223) Reed-Solomon code. Errors-and-erasures Reed-Solomon decoding is used. The proposed erasure method uses a soft output Viterbi algorithm and information provided by decoded Reed-Solomon codewords in a deinterleaving frame. The results show that a gain of 0.3 dB is possible using a minimum number of decoding trials.
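The erasure-declaration strategy pays off because errors-and-erasures Reed-Solomon decoding trades one error for two erasures. A back-of-the-envelope check under standard RS theory (not the paper's code):

```python
# Errors-and-erasures Reed-Solomon decoding succeeds whenever
# 2*errors + erasures < d_min. For the (255, 223) code, d_min = 33.
def rs_decodable(n_errors, n_erasures, n=255, k=223):
    d_min = n - k + 1                      # MDS property of Reed-Solomon codes
    return 2 * n_errors + n_erasures < d_min

print(rs_decodable(10, 12))   # True:  2*10 + 12 = 32 < 33
print(rs_decodable(10, 13))   # False: 2*10 + 13 = 33
```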
Testing a Novel 3D Printed Radiographic Imaging Device for Use in Forensic Odontology.
Newcomb, Tara L; Bruhn, Ann M; Giles, Bridget; Garcia, Hector M; Diawara, Norou
2017-01-01
There are specific challenges related to forensic dental radiology and difficulties in aligning X-ray equipment to teeth of interest. Researchers used 3D printing to create a new device, the combined holding and aiming device (CHAD), to address the positioning limitations of current dental X-ray devices. Participants (N = 24) used the CHAD, soft dental wax, and a modified external aiming device (MEAD) to determine device preference, radiographer efficiency, and technique errors. Each participant exposed six X-rays per device for a total of 432 X-rays scored. A significant difference was found at the 0.05 level between the three devices (p = 0.0015), with the MEAD having the least amount of total errors and soft dental wax taking the least amount of time. Total errors were highest when participants used soft dental wax; both the MEAD and the CHAD performed best overall. Further research in forensic dental radiology and use of holding devices is needed. © 2016 American Academy of Forensic Sciences.
Micromagnetic Study of Perpendicular Magnetic Recording Media
NASA Astrophysics Data System (ADS)
Dong, Yan
With increasing areal density in magnetic recording systems, perpendicular recording has successfully replaced longitudinal recording to mitigate the superparamagnetic limit. The extensive theoretical and experimental research associated with perpendicular magnetic recording media has contributed significantly to improving magnetic recording performance. Micromagnetic studies on perpendicular recording media, including aspects of the design of hybrid soft underlayers, media noise properties, inter-grain exchange characterization and ultra-high density bit patterned media recording, are presented in this dissertation. To improve the writability of recording media, one needs to reduce the head-to-keeper spacing while maintaining a good texture growth for the recording layer. A hybrid soft underlayer, consisting of a thin crystalline soft underlayer stacked above a non-magnetic seed layer and a conventional amorphous soft underlayer, provides an alternative approach for reducing the effective head-to-keeper spacing in perpendicular recording. Micromagnetic simulations indicate that the media using a hybrid soft underlayer helps enhance the effective field and the field gradient in comparison with conventional media that uses only an amorphous soft underlayer. The hybrid soft underlayer can support a thicker non-magnetic seed layer yet achieve an equivalent or better effective field and field gradient. A noise plateau for intermediate recording densities is observed for a recording layer of typical magnetization. Medium noise characteristics and transition jitter in perpendicular magnetic recording are explored using micromagnetic simulation. The plateau is replaced by a normal linear dependence of noise on recording density for a low magnetization recording layer. We show analytically that a source of the plateau is similar to that producing the Non-Linear-Transition-Shift of signal. In particular, magnetostatic effects are predicted to produce positive correlation of jitter and thus negative correlation of noise at the densities associated with the plateau. One focus for developing perpendicular recording media is on how to extract intergranular exchange coupling and intrinsic anisotropy field dispersion. A micromagnetic numerical technique is developed to effectively separate the effects of intergranular exchange coupling and anisotropy dispersion by finding their correlation to differentiated M-H curves with different initial magnetization states, even in the presence of thermal fluctuation. The validity of this method is investigated with a series of intergranular exchange couplings and anisotropy dispersions for different media thickness. This characterization method allows for an experimental measurement employing a vibrating sample magnetometer (VSM). Bit patterned media have been suggested to extend areal density beyond 1 Tbit/in2. The feasibility of 4 Tbit/in2 bit patterned recording is determined by aspects of write head design and media fabrication, and is estimated by the bit error rate. Micromagnetic specifications including 2.3:1 BAR bit patterned exchange coupled composite media, trailing shield, and side shields are proposed to meet the requirement of a 3×10^-4 bit error rate, 4 nm fly height, 5% switching field distribution, 5% timing and 5% jitter errors for 4 Tbit/in2 bit-patterned recording. Demagnetizing field distribution is examined by studying the shielding effect of the side shields on the stray field from the neighboring dots.
For recording self-assembled bit-patterned media, the head design writes two staggered tracks in a single pass and has maximum perpendicular field gradients of 580 Oe/nm along the down-track direction and 476 Oe/nm along the cross-track direction. The geometry demanded by self-assembly reduces recording density to 2.9 Tbit/in2.
Springer, Mark S; Emerling, Christopher A; Meredith, Robert W; Janečka, Jan E; Eizirik, Eduardo; Murphy, William J
2017-01-01
The explosive, long fuse, and short fuse models represent competing hypotheses for the timing of placental mammal diversification. Support for the explosive model, which posits both interordinal and intraordinal diversification after the KPg mass extinction, derives from morphological cladistic studies that place Cretaceous eutherians outside of crown Placentalia. By contrast, most molecular studies favor the long fuse model wherein interordinal cladogenesis occurred in the Cretaceous followed by intraordinal cladogenesis after the KPg boundary. Phillips (2016) proposed a soft explosive model that allows for the emergence of a few lineages (Xenarthra, Afrotheria, Euarchontoglires, Laurasiatheria) in the Cretaceous, but otherwise agrees with the explosive model in positing the majority of interordinal diversification after the KPg mass extinction. Phillips (2016) argues that rate transference errors associated with large body size and long lifespan have inflated previous estimates of interordinal divergence times, and further suggests that most interordinal divergences are positioned after the KPg boundary when rate transference errors are avoided through the elimination of calibrations in large-bodied and/or long lifespan clades. Here, we show that rate transference errors can also occur in the opposite direction and drag forward estimated divergence dates when calibrations in large-bodied/long lifespan clades are omitted. This dragging forward effect results in the occurrence of more than half a billion years of 'zombie lineages' on Phillips' preferred timetree. By contrast with ghost lineages, which are a logical byproduct of an incomplete fossil record, zombie lineages occur when estimated divergence dates are younger than the minimum age of the oldest crown fossils. We also present the results of new timetree analyses that address the rate transference problem highlighted by Phillips (2016) by deleting taxa that exceed thresholds for body size and lifespan. These analyses recover all interordinal divergence times in the Cretaceous and are consistent with the long fuse model of placental diversification. Finally, we outline potential problems with morphological cladistic analyses of higher-level relationships among placental mammals that may account for the perceived discrepancies between molecular and paleontological estimates of placental divergence times. Copyright © 2016 Elsevier Inc. All rights reserved.
Resnick, C M; Dang, R R; Glick, S J; Padwa, B L
2017-03-01
Three-dimensional (3D) soft tissue prediction is replacing two-dimensional analysis in planning for orthognathic surgery. The accuracy of different computational models to predict soft tissue changes in 3D, however, is unclear. A retrospective pilot study was implemented to assess the accuracy of Dolphin 3D software in making these predictions. Seven patients who had a single-segment Le Fort I osteotomy and had preoperative (T0) and >6-month postoperative (T1) cone beam computed tomography (CBCT) scans and 3D photographs were included. The actual skeletal change was determined by subtracting the T0 from the T1 CBCT. 3D photographs were overlaid onto the T0 CBCT and virtual skeletal movements equivalent to the achieved repositioning were applied using the Dolphin 3D planner. A 3D soft tissue prediction (TP) was generated and differences between the TP and T1 images (error) were measured at 14 points and at the nasolabial angle. A mean linear prediction error of 2.91±2.16 mm was found. The mean error at the nasolabial angle was 8.1±5.6°. In conclusion, the ability to accurately predict 3D soft tissue changes after Le Fort I osteotomy using Dolphin 3D software is limited. Copyright © 2016 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Iterative decoding of SOVA and LDPC product code for bit-patterned media recording
NASA Astrophysics Data System (ADS)
Jeong, Seongkwon; Lee, Jaejin
2018-05-01
The demand for high-density storage systems has increased due to the exponential growth of data. Bit-patterned media recording (BPMR) is one of the promising technologies to achieve a density of 1 Tbit/in2 and higher. To increase the areal density in BPMR, the spacing between islands needs to be reduced, yet this aggravates inter-symbol interference and inter-track interference and degrades the bit error rate performance. In this paper, we propose a decision feedback scheme using low-density parity check (LDPC) product code for BPMR. This scheme can improve the decoding performance using an iterative approach in which extrinsic information and log-likelihood ratio values are exchanged between the iterative soft output Viterbi algorithm and the LDPC product code. Simulation results show that the proposed LDPC product code can offer 1.8 dB and 2.3 dB gains over a single LDPC code at densities of 2.5 and 3 Tb/in2, respectively, at a bit error rate of 10^-6.
Neutron beam irradiation study of workload dependence of SER in a microprocessor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michalak, Sarah E; Graves, Todd L; Hong, Ted
It is known that workloads are an important factor in soft error rates (SER), but it is proving difficult to find differentiating workloads for microprocessors. We have performed neutron beam irradiation studies of a commercial microprocessor under a wide variety of workload conditions, from idle, performing no operations, to very busy workloads resembling real HPC, graphics, and business applications. There is evidence that the mean times to first indication of failure (MTFIF, defined in Section II) may be different for some of the applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hashii, Haruko, E-mail: haruko@pmrc.tsukuba.ac.jp; Hashimoto, Takayuki; Okawa, Ayako
2013-03-01
Purpose: Radiation therapy for cancer may be required for patients with implantable cardiac devices. However, the influence of secondary neutrons or scattered irradiation from high-energy photons (≥10 MV) on implantable cardioverter-defibrillators (ICDs) is unclear. This study was performed to examine this issue in 2 ICD models. Methods and Materials: ICDs were positioned around a water phantom under conditions simulating clinical radiation therapy. The ICDs were not irradiated directly. A control ICD was positioned 140 cm from the irradiation isocenter. Fractional irradiation was performed with 18-MV and 10-MV photon beams to give cumulative in-field doses of 600 Gy and 1600 Gy, respectively. Errors were checked after each fraction. Soft errors were defined as severe (change to safety back-up mode), moderate (memory interference, no changes in device parameters), and minor (slight memory change, undetectable by computer). Results: Hard errors were not observed. For the older ICD model, the incidences of severe, moderate, and minor soft errors at 18 MV were 0.75, 0.5, and 0.83/50 Gy at the isocenter. The corresponding data for 10 MV were 0.094, 0.063, and 0/50 Gy. For the newer ICD model at 18 MV, these data were 0.083, 2.3, and 5.8/50 Gy. Moderate and minor errors occurred at 18 MV in control ICDs placed 140 cm from the isocenter. The error incidences were 0, 1, and 0/600 Gy at the isocenter for the newer model, and 0, 1, and 6/600 Gy for the older model. At 10 MV, no errors occurred in control ICDs. Conclusions: ICD errors occurred more frequently at 18 MV irradiation, which suggests that the errors were mainly caused by secondary neutrons. Soft errors of ICDs were observed with high energy photon beams, but most were not critical in the newer model. These errors may occur even when the device is far from the irradiation field.
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code has more than 1 dB gain compared with the conventional four-ary modulation code.
Accuracy of three-dimensional facial soft tissue simulation in post-traumatic zygoma reconstruction.
Li, P; Zhou, Z W; Ren, J Y; Zhang, Y; Tian, W D; Tang, W
2016-12-01
The aim of this study was to evaluate the accuracy of novel software, CMF-preCADS, for the prediction of soft tissue changes following repositioning surgery for zygomatic fractures. Twenty patients who had sustained an isolated zygomatic fracture accompanied by facial deformity and who were treated with repositioning surgery participated in this study. Cone beam computed tomography (CBCT) scans and three-dimensional (3D) stereophotographs were acquired preoperatively and postoperatively. The 3D skeletal model from the preoperative CBCT data was matched with the postoperative one, and the fractured zygomatic fragments were segmented and aligned to the postoperative position for prediction. Then, the predicted model was matched with the postoperative 3D stereophotograph for quantification of the simulation error. The mean absolute error in the zygomatic soft tissue region between the predicted model and the real one was 1.42±1.56 mm for all cases. The accuracy of the prediction (mean absolute error ≤2 mm) was 87%. In the subjective assessment it was found that the majority of evaluators considered the predicted model and the postoperative model to be 'very similar'. CMF-preCADS software can provide a realistic, accurate prediction of the facial soft tissue appearance after repositioning surgery for zygomatic fractures. The reliability of this software for other types of repositioning surgery for maxillofacial fractures should be validated in the future. Copyright © 2016. Published by Elsevier Ltd.
Evaluating Application Resilience with XRay
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Sui; Bronevetsky, Greg; Li, Bin
2015-05-07
The rising count and shrinking feature size of transistors within modern computers is making them increasingly vulnerable to various types of soft faults. This problem is especially acute in high-performance computing (HPC) systems used for scientific computing, because these systems include many thousands of compute cores and nodes, all of which may be utilized in a single large-scale run. The increasing vulnerability of HPC applications to errors induced by soft faults is motivating extensive work on techniques to make these applications more resilient to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and tolerance techniques. Effective use of such techniques requires a detailed understanding of how a given application is affected by soft faults to ensure that (i) efforts to improve application resilience are spent in the code regions most vulnerable to faults and (ii) the appropriate resilience technique is applied to each code region. This paper presents XRay, a tool to view the application vulnerability to soft errors, and illustrates how XRay can be used in the context of a representative application. In addition to providing actionable insights into application behavior, XRay automatically selects the number of fault injection experiments required to provide an informative view of application behavior, ensuring that the information is statistically well-grounded without performing unnecessary experiments.
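One common way to size a fault-injection campaign, which may or may not match XRay's internal rule, is the normal-approximation sample-size formula for estimating a proportion. A sketch with assumed margin and confidence parameters:

```python
import math

# Choose n so a proportion estimate (e.g., the fraction of injections causing
# silent data corruption) has margin of error d at a given confidence level,
# via the normal approximation n = z^2 * p*(1-p) / d^2. p_guess = 0.5 is the
# worst case and therefore a safe default.
def injections_needed(p_guess=0.5, margin=0.01, z=1.96):   # 95% confidence
    return math.ceil(z ** 2 * p_guess * (1 - p_guess) / margin ** 2)

print(injections_needed())            # 9604 injections for +/-1% at 95%
```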
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Perry B.; Geyer, Amy; Borrego, David
Purpose: To investigate the benefits and limitations of patient-phantom matching for determining organ dose during fluoroscopy guided interventions. Methods: In this study, 27 CT datasets representing patients of different sizes and genders were contoured and converted into patient-specific computational models. Each model was matched, based on height and weight, to computational phantoms selected from the UF hybrid patient-dependent series. In order to investigate the influence of phantom type on patient organ dose, Monte Carlo methods were used to simulate two cardiac projections (PA/left lateral) and two abdominal projections (RAO/LPO). Organ dose conversion coefficients were then calculated for each patient-specific and patient-dependent phantom and also for a reference stylized and reference hybrid phantom. The coefficients were subsequently analyzed for any correlation between patient-specificity and the accuracy of the dose estimate. Accuracy was quantified by calculating an absolute percent difference using the patient-specific dose conversion coefficients as the reference. Results: Patient-phantom matching was shown most beneficial for estimating the dose to heavy patients. In these cases, the improvement over using a reference stylized phantom ranged from approximately 50% to 120% for abdominal projections and for a reference hybrid phantom from 20% to 60% for all projections. For lighter individuals, patient-phantom matching was clearly superior to using a reference stylized phantom, but not significantly better than using a reference hybrid phantom for certain fields and projections. Conclusions: The results indicate two sources of error when patients are matched with phantoms: Anatomical error, which is inherent due to differences in organ size and location, and error attributed to differences in the total soft tissue attenuation. For small patients, differences in soft tissue attenuation are minimal and are exceeded by inherent anatomical differences. For large patients, difference in soft tissue attenuation can be large. In these cases, patient-phantom matching proves most effective as differences in soft tissue attenuation are mitigated. With increasing obesity rates, overweight patients will continue to make up a growing fraction of all patients undergoing medical imaging. Thus, having phantoms that better represent this population represents a considerable improvement over previous methods. In response to this study, additional phantoms representing heavier weight percentiles will be added to the UFHADM and UFHADF patient-dependent series.
A device for characterising the mechanical properties of the plantar soft tissue of the foot.
Parker, D; Cooper, G; Pearson, S; Crofts, G; Howard, D; Busby, P; Nester, C
2015-11-01
The plantar soft tissue is a highly functional viscoelastic structure involved in transferring load to the human body during walking. A Soft Tissue Response Imaging Device was developed to apply a vertical compression to the plantar soft tissue whilst measuring the mechanical response via a combined load cell and ultrasound imaging arrangement. The device was evaluated for: accuracy of motion compared to input profiles; validation of the response measured for standard materials in compression; variability of force and displacement measures for consecutive compressive cycles; and implementation in vivo with five healthy participants. Static displacement displayed an average error of 0.04 mm (range of 15 mm), and static load displayed an average error of 0.15 N (range of 250 N). Validation tests showed acceptable agreement with a Hounsfield tensometer for both displacement (CMC > 0.99, RMSE < 0.18 mm) and load (CMC > 0.95, RMSE < 4.86 N). Device motion was highly repeatable for bench-top tests (ICC = 0.99) and participant trials (CMC = 1.00). Soft tissue response was found repeatable for intra (CMC > 0.98) and inter trials (CMC > 0.70). The device has been shown to be capable of implementing complex loading patterns similar to gait, and of capturing the compressive response of the plantar soft tissue for a range of loading conditions in vivo. Copyright © 2015. Published by Elsevier Ltd.
Least reliable bits coding (LRBC) for high data rate satellite communications
NASA Technical Reports Server (NTRS)
Vanderaar, Mark; Budinger, James; Wagner, Paul
1992-01-01
LRBC, a bandwidth efficient multilevel/multistage block-coded modulation technique, is analyzed. LRBC uses simple multilevel component codes that provide increased error protection on increasingly unreliable modulated bits in order to maintain an overall high code rate that increases spectral efficiency. Soft-decision multistage decoding is used to make decisions on unprotected bits through corrections made on more protected bits. Analytical expressions and tight performance bounds are used to show that LRBC can achieve increased spectral efficiency and maintain equivalent or better power efficiency compared to that of BPSK. The relative simplicity of Galois field algebra versus the Viterbi algorithm and the availability of high-speed commercial VLSI for block codes indicate that LRBC using block codes is a desirable method for high data rate implementations.
Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B
2000-12-01
Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that pixel ratios of low to high energy (R values) are a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors based on theoretical calculations is small and may not be of clinical or research significance.
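A simplified sketch of the two-compartment idea (an illustration only; the reference R values below are invented placeholders, not the paper's calibration): each soft-tissue pixel's %fat comes from interpolating its R value between pure-fat and pure-lean references, so a hydration-shifted lean R value propagates directly into a %fat offset.

```python
# Hypothetical reference R values for illustration; real calibrations differ.
R_FAT, R_LEAN_ADULT = 1.21, 1.40

def percent_fat(r_measured, r_lean=R_LEAN_ADULT, r_fat=R_FAT):
    # Linear two-compartment mixing: R interpolates between fat and lean.
    return 100.0 * (r_lean - r_measured) / (r_lean - r_fat)

r_pixel = 1.33
adult = percent_fat(r_pixel)                         # adult lean calibration
infant = percent_fat(r_pixel, r_lean=1.41)           # higher hydration shifts R
print(round(adult, 1), round(infant - adult, 1))     # %fat and its offset
```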
A Strategy to Use Soft Data Effectively in Randomized Controlled Clinical Trials.
ERIC Educational Resources Information Center
Kraemer, Helena Chmura; Thiemann, Sue
1989-01-01
Sees soft data, measures having substantial intrasubject variability due to errors of measurement or response inconsistency, as important measures of response in randomized clinical trials. Shows that using intensive design and slope of response on time as outcome measure maximizes sample retention and decreases within-group variability, thus…
NASA Astrophysics Data System (ADS)
Voisin, Guillaume; Mottez, Fabrice; Bonazzola, Silvano
2018-02-01
Electron-positron pair production by collision of photons is investigated in view of application to pulsar physics. We compute the absorption rate of individual gamma-ray photons by an arbitrary anisotropic distribution of softer photons, and the energy and angular spectrum of the outgoing leptons. We work analytically within the approximation that 1 ≫ mc^2/E > ɛ/E, with E and ɛ the gamma-ray and soft-photon maximum energy and mc^2 the electron mass energy. We give results at leading order in these small parameters. For practical purposes, we provide expressions in the form of Laurent series which give correct reaction rates in the isotropic case within an average error of ~7 per cent. We apply this formalism to gamma-rays flying downward or upward from a hot neutron star thermally radiating at a uniform temperature of 10^6 K. Other temperatures can be easily deduced using the relevant scaling laws. We find differences in absorption between these two extreme directions of almost two orders of magnitude, much larger than our error estimate. The magnetosphere appears completely opaque to downward gamma-rays while there are up to ~10 per cent chances of absorbing an upward gamma-ray. We provide energy and angular spectra for both upward and downward gamma-rays. Energy spectra show a typical double peak, with larger separation at larger gamma-ray energies. Angular spectra are very narrow, with an opening angle ranging from 10^-3 to 10^-7 radians with increasing gamma-ray energies.
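For reference, the kinematic threshold that gates this absorption process is standard two-photon pair-production kinematics (stated here in the abstract's notation, with θ the angle between the two photon momenta):

```latex
% Pair-production threshold for gamma + soft photon -> e+ e-:
E \, \varepsilon \, (1 - \cos\theta) \;\ge\; 2\,(m c^2)^2
```

Head-on collisions (θ = π) minimize the required soft-photon energy, which is one reason the anisotropy of the thermal photon field produces such different opacities for upward versus downward gamma-rays.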
NASA Astrophysics Data System (ADS)
Tambara, Lucas Antunes; Tonfat, Jorge; Santos, André; Kastensmidt, Fernanda Lima; Medina, Nilberto H.; Added, Nemitala; Aguiar, Vitor A. P.; Aguirre, Fernando; Silveira, Marcilei A. G.
2017-02-01
The increasing system complexity of FPGA-based hardware designs and the shortening of time-to-market have motivated the adoption of new design methodologies focused on addressing the current need for high-performance circuits. High-Level Synthesis (HLS) tools can generate Register Transfer Level (RTL) designs from high-level software programming languages. These tools have evolved significantly in recent years, providing optimized RTL designs, which can serve the needs of safety-critical applications that require both high performance and high reliability levels. However, a reliability evaluation of HLS-based designs under soft errors has not yet been presented. In this work, the trade-offs of different HLS-based designs in terms of reliability, resource utilization, and performance are investigated by analyzing their behavior under soft errors and comparing them to a standard processor-based implementation in an SRAM-based FPGA. Results obtained from fault injection campaigns and radiation experiments show that it is possible to increase the performance of a processor-based system up to 5,000 times by changing its architecture, with a small impact on the cross section (an increase of up to 8 times), while still increasing the Mean Workload Between Failures (MWBF) of the system.
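The abstract's headline numbers imply the MWBF arithmetic fairly directly; a rough sketch of one reading (not necessarily the authors' exact model):

```python
# If failures arrive at a rate proportional to (cross section x flux) and
# each workload takes t_exec, then MWBF ~ 1 / (sigma * flux * t_exec):
# running 5000x faster while the cross section grows 8x still wins.
speedup = 5000          # HLS design completes the workload 5000x faster
sigma_growth = 8        # cross section grows by up to 8x
mwbf_gain = speedup / sigma_growth
print(mwbf_gain)        # ~625x more workloads completed between failures
```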
The Communication Link and Error ANalysis (CLEAN) simulator
NASA Technical Reports Server (NTRS)
Ebel, William J.; Ingels, Frank M.; Crowe, Shane
1993-01-01
During the period July 1, 1993 through December 30, 1993, significant developments to the Communication Link and Error ANalysis (CLEAN) simulator were completed and include: (1) soft decision Viterbi decoding; (2) node synchronization for the soft decision Viterbi decoder; (3) insertion/deletion error programs; (4) convolutional encoder; (5) programs to investigate new convolutional codes; (6) pseudo-noise sequence generator; (7) soft decision data generator; (8) RICE compression/decompression (integration of RICE code generated by Pen-Shu Yeh at Goddard Space Flight Center); (9) Markov Chain channel modeling; (10) percent complete indicator when a program is executed; (11) header documentation; and (12) help utility. The CLEAN simulation tool is now capable of simulating a very wide variety of satellite communication links including the TDRSS downlink with RFI. The RICE compression/decompression schemes allow studies to be performed on error effects on RICE decompressed data. The Markov Chain modeling programs allow channels with memory to be simulated. Memory results from filtering, forward error correction encoding/decoding, differential encoding/decoding, channel RFI, nonlinear transponders and from many other satellite system processes. Besides the development of the simulation, a study was performed to determine whether the PCI provides a performance improvement for the TDRSS downlink. There exist RFI sources with several duty cycles on the TDRSS downlink. We conclude that the PCI does not improve performance for any of these interferers except possibly one which occurs for the TDRS East. Therefore, the usefulness of the PCI is a function of the time spent transmitting data to the WSGT through the TDRS East transponder.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sattarivand, Mike; Summers, Clare; Robar, James
Purpose: To evaluate the validity of using the spine as a surrogate for tumor positioning with ExacTrac stereoscopic imaging in lung stereotactic body radiation therapy (SBRT). Methods: Using the Novalis ExacTrac x-ray system, 39 lung SBRT patients (182 treatments) were aligned before treatment with a 6 degrees of freedom (6D) couch (3 translations, 3 rotations) based on spine matching on stereoscopic images. The couch was shifted to the treatment isocenter and pre-treatment CBCT was performed based on a soft tissue match around the tumor volume. The CBCT data were used to measure residual errors following ExacTrac alignment. The thresholds for re-aligning the patients based on CBCT were a 3 mm shift or 3° rotation (in any of the 6D). In order to evaluate the effect of tumor location on residual errors, correlations between tumor distance from the spine and individual residual errors were calculated. Results: Residual errors were up to 0.5±2.4 mm. Using the 3 mm/3° thresholds, 80/182 (44%) of the treatments required re-alignment based on CBCT soft tissue matching following ExacTrac spine alignment. Most mismatches were in the sup-inf, ant-post, and roll directions, which had larger standard deviations. No correlation was found between tumor distance from the spine and individual residual errors. Conclusion: ExacTrac stereoscopic imaging offers a quick pre-treatment patient alignment. However, bone matching based on the spine is not reliable for aligning lung SBRT patients, who require soft tissue image registration from CBCT. The spine can be a poor surrogate for lung SBRT patient alignment even for proximal tumor volumes.
Fine figure correction and other applications using novel MRF fluid designed for ultra-low roughness
NASA Astrophysics Data System (ADS)
Maloney, Chris; Oswald, Eric S.; Dumas, Paul
2015-10-01
An increasing number of technologies require ultra-low roughness (ULR) surfaces. Magnetorheological Finishing (MRF) is one of the options for meeting the roughness specifications for high-energy laser, EUV and X-ray applications. A novel MRF fluid, called C30, has been developed to finish surfaces to ULR. This novel MRF fluid is able to achieve <1.5 Å RMS roughness on fused silica and other materials, but has a lower material removal rate with respect to other MRF fluids. As a result of these properties, C30 can also be used for applications in addition to finishing ULR surfaces. These applications include fine figure correction, figure correcting extremely soft materials, and removing cosmetic defects. The effectiveness of these new applications is explored through experimental data. The low removal rate of C30 gives MRF the capability to fine figure correct low amplitude errors that are usually difficult to correct with higher removal rate fluids. The ability to figure correct extremely soft materials opens up MRF to a new realm of materials that are difficult to polish. C30 also offers the ability to remove cosmetic defects that often lead to failure during visual quality inspections. These new applications for C30 expand the niche in which MRF is typically used.
Smart Braid Feedback for the Closed-Loop Control of Soft Robotic Systems.
Felt, Wyatt; Chin, Khai Yi; Remy, C David
2017-09-01
This article experimentally investigates the potential of using flexible, inductance-based contraction sensors in the closed-loop motion control of soft robots. Accurate motion control remains a highly challenging task for soft robotic systems. Precise models of the actuation dynamics and environmental interactions are often unavailable. This renders open-loop control impossible, while closed-loop control suffers from a lack of suitable feedback. Conventional motion sensors, such as linear or rotary encoders, are difficult to adapt to robots that lack discrete mechanical joints. The rigid nature of these sensors runs contrary to the aspirational benefits of soft systems. As truly soft sensor solutions are still in their infancy, motion control of soft robots has so far relied on laboratory-based sensing systems such as motion capture, electromagnetic (EM) tracking, or Fiber Bragg Gratings. In this article, we used embedded flexible sensors known as Smart Braids to sense the contraction of McKibben muscles through changes in inductance. We evaluated closed-loop control on two systems: a revolute joint and a planar, one degree of freedom continuum manipulator. In the revolute joint, our proposed controller compensated for elasticity in the actuator connections. The Smart Braid feedback allowed motion control with a steady-state root-mean-square (RMS) error of 1.5°. In the continuum manipulator, Smart Braid feedback enabled tracking of the desired tip angle with a steady-state RMS error of 1.25°. This work demonstrates that Smart Braid sensors can provide accurate position feedback in closed-loop motion control suitable for field applications of soft robotic systems.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection.
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-15
In order to improve the performance of the non-binary low-density parity check (LDPC) hard decision decoding algorithm and to reduce the complexity of decoding, a sum of the magnitude for hard decision decoding algorithm based on loop update detection is proposed. This will also ensure the reliability, stability and high transmission rate of 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the sum of the variable nodes' (VN) magnitude is excluded for computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and the loop update detection algorithm is introduced. The bits corresponding to the erroneous code word are flipped multiple times, searching in the order of most likely error probability to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10^-5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced.
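A binary bit-flipping analogue (a simplification: the abstract's algorithm is non-binary, weights flips with channel soft information, and adds loop update detection) shows the core hard-decision mechanic of flipping the bits that sit in the most unsatisfied parity checks. The toy parity-check matrix is an assumption.

```python
import numpy as np

# Gallager-style bit flipping on a toy binary code.
H = np.array([[1, 1, 0, 1, 0, 0],            # toy parity-check matrix
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]])

def bit_flip_decode(word, max_iter=10):
    word = word.copy()
    for _ in range(max_iter):
        syndrome = H @ word % 2
        if not syndrome.any():
            return word                       # all checks satisfied
        votes = H.T @ syndrome                # unsatisfied checks per bit
        word[votes == votes.max()] ^= 1       # flip the worst offenders
    return word

print(bit_flip_decode(np.array([1, 0, 1, 1, 0, 1])))
```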
NASA Astrophysics Data System (ADS)
Sabir, Zeeshan; Babar, M. Inayatullah; Shah, Syed Waqar
2012-12-01
Mobile adhoc network (MANET) refers to an arrangement of wireless mobile nodes that have the tendency of dynamically and freely self-organizing into temporary and arbitrary network topologies. Orthogonal frequency division multiplexing (OFDM) is the foremost choice for MANET system designers at the physical layer due to its inherent property of high data rate transmission, which corresponds to its lofty spectrum efficiency. The downside of OFDM includes its sensitivity to synchronization errors (frequency offsets and symbol time). Most present-day techniques employing OFDM for data transmission support mobility as one of the primary features. This mobility causes small frequency offsets due to the production of Doppler frequencies. It results in intercarrier interference (ICI), which degrades the signal quality due to crosstalk between the subcarriers of the OFDM symbol. An efficient frequency-domain block-type pilot-assisted ICI mitigation scheme is proposed in this article which nullifies the effect of channel frequency offsets on the received OFDM symbols. The second problem addressed in this article is the noise induced into the received symbol by different sources, increasing its bit error rate and making it unsuitable for many applications. Forward-error-correcting turbo codes have been employed in the proposed model, which add redundant bits to the system that are later used for error detection and correction. At the receiver end, the maximum a posteriori (MAP) decoding algorithm is implemented using two component MAP decoders. These decoders exchange interleaved extrinsic soft information with each other in the form of log likelihood ratios, improving the previous estimate of each decoded bit in every iteration.
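The pilot-assisted offset-correction idea can be sketched with a Moose-style estimate (a simplification; the article's frequency-domain scheme differs in detail): a residual carrier frequency offset rotates every sample by a constant increment, so correlating two identical pilot symbols exposes the offset as a phase angle.

```python
import numpy as np

N = 64                                   # subcarriers per OFDM symbol
eps = 0.03                               # true offset, in subcarrier spacings
rng = np.random.default_rng(1)
pilot = rng.choice([1, -1], N)           # known BPSK pilot, time domain
n = np.arange(2 * N)
rx = np.tile(pilot, 2) * np.exp(2j * np.pi * eps * n / N)   # 2 repeats + CFO

corr = np.sum(rx[N:] * np.conj(rx[:N]))  # phase advance over one symbol
eps_hat = np.angle(corr) / (2 * np.pi)   # estimated offset
print(round(eps_hat, 4))                 # close to 0.03
rx_corrected = rx * np.exp(-2j * np.pi * eps_hat * n / N)   # de-rotation
```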
Adaptive and Resilient Soft Tensegrity Robots.
Rieffel, John; Mouret, Jean-Baptiste
2018-04-17
Living organisms intertwine soft (e.g., muscle) and hard (e.g., bones) materials, giving them an intrinsic flexibility and resiliency often lacking in conventional rigid robots. The emerging field of soft robotics seeks to harness these same properties to create resilient machines. The nature of soft materials, however, presents considerable challenges to aspects of design, construction, and control, and up until now, the vast majority of gaits for soft robots have been hand-designed through empirical trial-and-error. This article describes an easy-to-assemble tensegrity-based soft robot capable of highly dynamic locomotive gaits and demonstrating structural and behavioral resilience in the face of physical damage. Enabling this is the use of a machine learning algorithm able to discover effective gaits with a minimal number of physical trials. These results lend further credence to soft-robotic approaches that seek to harness the interaction of complex material dynamics to generate a wealth of dynamical behaviors.
Monte Carlo simulation of particle-induced bit upsets
NASA Astrophysics Data System (ADS)
Wrobel, Frédéric; Touboul, Antoine; Vaillé, Jean-Roch; Boch, Jérôme; Saigné, Frédéric
2017-09-01
We investigate the issue of radiation-induced failures in electronic devices by developing a Monte Carlo tool called MC-Oracle. It is able to transport particles in the device, to calculate the energy deposited in the sensitive region of the device, and to calculate the transient current induced by the primary particle and the secondary particles produced during nuclear reactions. We compare our simulation results with experiments on SRAMs irradiated with neutrons, protons and ions. The agreement is very good and shows that it is possible to predict the soft error rate (SER) for a given device in a given environment.
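A toy Monte Carlo in the spirit of this SER estimate (not MC-Oracle itself; the deposition spectrum and critical charge below are invented placeholders): sample energy depositions in the sensitive volume, convert to collected charge at roughly 44.5 fC per MeV deposited in silicon, and count events that exceed the critical charge.

```python
import numpy as np

rng = np.random.default_rng(2)
N_EVENTS = 1_000_000
q_crit_fc = 1.5                               # critical charge, fC (assumed)

e_dep_mev = rng.exponential(0.5, N_EVENTS)    # stand-in deposition spectrum
charge_fc = e_dep_mev * 44.5                  # ~44.5 fC per MeV in silicon
upsets = np.count_nonzero(charge_fc > q_crit_fc)
upset_prob = upsets / N_EVENTS                # upsets per simulated particle
print(upset_prob)                             # scale by flux x bits for SER
```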
Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement
Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian
2013-01-01
Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom and different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus having the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
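The indirect approximation of the due-to-insertion error follows from the stated orthogonality assumption, and the arithmetic checks out against the reported 2.13 mm:

```python
import math

# Orthogonal error components: overall^2 = before^2 + due_to_insertion^2.
overall, before_insertion = 2.5, 1.3                      # mm, from the study
due_to_insertion = math.sqrt(overall**2 - before_insertion**2)
print(round(due_to_insertion, 2))                         # 2.14 mm (~2.13 mm)
```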
Accuracy study of a robotic system for MRI-guided prostate needle placement.
Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian
2013-09-01
Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the extent possible. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize the different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.
Experimental research of adaptive OFDM and OCT precoding with a high SE for VLLC system
NASA Astrophysics Data System (ADS)
Liu, Shuang-ao; He, Jing; Chen, Qinghui; Deng, Rui; Zhou, Zhihua; Chen, Shenghai; Chen, Lin
2017-09-01
In this paper, an adaptive orthogonal frequency division multiplexing (OFDM) modulation scheme with 128/64/32/16-quadrature amplitude modulation (QAM) and orthogonal circulant matrix transform (OCT) precoding is proposed and experimentally demonstrated for a visible laser light communication (VLLC) system with a cost-effective 450-nm blue-light laser diode (LD). The performance of OCT precoding is compared with the conventional adaptive Discrete Fourier Transform-spread (DFT-spread) OFDM scheme, the 32-QAM OCT precoding OFDM scheme, the 64-QAM OCT precoding OFDM scheme, and the adaptive OCT precoding OFDM scheme. The experimental results show that OCT precoding can achieve a relatively flat signal-to-noise ratio (SNR) curve, and it can provide a performance improvement in bit error rate (BER). Furthermore, the BER of the proposed OFDM signal with a raw bit rate of 5.04 Gb/s after 5-m free space transmission is below the 20% overhead soft-decision forward error correction (SD-FEC) threshold of 2.4 × 10^-2, and a spectral efficiency (SE) of 4.2 bit/s/Hz can be successfully achieved.
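The orthogonal circulant matrix transform spreads every data symbol over all subcarriers, so each symbol effectively sees the average channel SNR instead of the worst subcarrier, which is what flattens the SNR curve. A minimal sketch of that mechanism, using my own construction of a unitary circulant precoder (not necessarily the matrix used by the authors):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(1)
N = 64  # subcarriers

# A circulant matrix is unitary iff the DFT of its first column is
# unit-modulus. Pick random unit-modulus eigenvalues (assumption).
eigs = np.exp(1j * 2 * np.pi * rng.random(N))
C = circulant(np.fft.ifft(eigs))             # unitary circulant precoder
assert np.allclose(C.conj().T @ C, np.eye(N), atol=1e-10)

# QPSK symbols on N subcarriers.
bits = rng.integers(0, 2, (2, N))
sym = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)

x = C @ sym                                   # OCT precoding
# Frequency-selective channel: per-subcarrier gain + AWGN (assumed model).
h = 0.5 + rng.random(N)
y = h * x + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))

sym_hat = C.conj().T @ (y / h)                # ZF equalization, inverse OCT
print("I-rail error fraction:",
      np.mean(np.sign(sym_hat.real) != np.sign(sym.real)))
```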
Petrungaro, Paul S; Gonzalez, Santiago; Villegas, Carlos
2018-02-01
As dental implants become more popular for the treatment of partial and total edentulism and the treatment of "terminal dentitions," techniques for the management of the atrophic posterior maxillae continue to evolve. Although dental implants carry a high long-term success rate, attention must be given to the growing number of revisions or retreatments of cases that have had previous dental implant treatment and/or advanced bone replacement procedures. Due to poor patient compliance, iatrogenic error, or poor quality of the pre-existing alveolar bone and/or soft tissues, such cases can be left with large osseous defects, possibly with deficient soft-tissue volume. In the posterior maxillae, which contain the poorest-quality bone in the oral cavity, achieving regeneration of the alveolar bone and an adequate volume of soft tissue remains a complex procedure. This is made even more difficult when dealing with the loss of previously placed dental implants, the aggressive bone reduction required in various implant procedures, and/or residual sinus infections precluding proper closure of the oral wound margins. The purpose of this article is to outline a technique for the total closure of large oro-antral communications, with underlying osseous defects greater than 15 mm in width and 30 mm in length, for which multiple previous attempts at closure had failed, to achieve not only the reconstruction of adequate volume and quality of soft tissues in the area of the previous fistula, but also total regeneration of the osseous structures in the area of the large void.
Falat, Lukas; Marcek, Dusan; Durisova, Maria
2016-01-01
This paper deals with the application of quantitative soft computing prediction models to the financial domain, as reliable and accurate prediction models can be very helpful in the management decision-making process. The authors suggest a new hybrid neural network that combines a standard RBF neural network, a genetic algorithm, and a moving average. The moving average is intended to enhance the outputs of the network using the error part of the original neural network. The authors test the suggested model on high-frequency time series data of USD/CAD and examine the ability to forecast exchange rate values for a horizon of one day. To determine the forecasting efficiency, they perform a comparative statistical out-of-sample analysis of the tested model against autoregressive models and the standard neural network. They also incorporate a genetic algorithm as an optimizing technique for adapting the parameters of the ANN, which is then compared with standard backpropagation and backpropagation combined with a K-means clustering algorithm. Finally, the authors find that their suggested hybrid neural network is able to produce more accurate forecasts than the standard models and can help reduce the risk of making a bad decision in the decision-making process. PMID:26977450
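A compact sketch of the described hybrid follows: a forecaster whose output is corrected by a moving average of its own recent errors. Kernel ridge regression with an RBF kernel stands in for the RBF network, the window length and synthetic series are assumptions, and the genetic-algorithm tuning step is omitted.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
# Synthetic stand-in for a USD/CAD-like series (assumption).
t = np.arange(600)
series = np.sin(t / 25.0) + 0.05 * rng.standard_normal(600)

LAGS, Q = 5, 10  # embedding dimension and moving-average window (assumed)
X = np.column_stack([series[i:i - LAGS] for i in range(LAGS)])
y = series[LAGS:]
split = 500
model = KernelRidge(kernel="rbf", gamma=1.0, alpha=1e-3)
model.fit(X[:split], y[:split])

# One-step-ahead forecasts, corrected by a moving average of recent errors.
errors = list(y[:split] - model.predict(X[:split]))
sq_plain, sq_hybrid = 0.0, 0.0
for i in range(split, len(y)):
    base = model.predict(X[i:i + 1])[0]
    hybrid = base + np.mean(errors[-Q:])   # MA error correction
    sq_plain += (y[i] - base) ** 2
    sq_hybrid += (y[i] - hybrid) ** 2
    errors.append(y[i] - base)             # update with the realized error
n_test = len(y) - split
print("RMSE plain :", np.sqrt(sq_plain / n_test))
print("RMSE hybrid:", np.sqrt(sq_hybrid / n_test))
```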
Low Density Parity Check Codes Based on Finite Geometries: A Rediscovery and More
NASA Technical Reports Server (NTRS)
Kou, Yu; Lin, Shu; Fossorier, Marc
1999-01-01
Low density parity check (LDPC) codes with iterative decoding based on belief propagation achieve astonishing error performance close to the Shannon limit. No algebraic or geometric method for constructing these codes has been reported, and they are largely generated by computer search. As a result, encoding of long LDPC codes is in general very complex. This paper presents two classes of high-rate LDPC codes whose constructions are based on finite Euclidean and projective geometries, respectively. These classes of codes are cyclic and have good constraint parameters and minimum distances. The cyclic structure allows the use of linear feedback shift registers for encoding. These finite geometry LDPC codes achieve very good error performance with either soft-decision iterative decoding based on belief propagation or Gallager's hard-decision bit flipping algorithm. These codes can be punctured or extended to obtain other good LDPC codes. A generalization of these codes is also presented.
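Gallager's hard-decision bit-flipping algorithm mentioned above is short enough to sketch directly. This illustration uses a small Hamming parity-check matrix as a stand-in for an actual finite-geometry LDPC code:

```python
import numpy as np

def bit_flip_decode(H, word, max_iter=50):
    """Gallager hard-decision bit flipping: repeatedly flip the bits
    that participate in the largest number of unsatisfied checks."""
    x = word.copy()
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, True           # all parity checks satisfied
        fails = H.T @ syndrome       # unsatisfied-check count per bit
        x[fails == fails.max()] ^= 1
    return x, False

# (7,4) Hamming code as a toy stand-in for a finite-geometry code.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)    # all-zero codeword is valid
received[2] ^= 1                     # inject a single hard error
decoded, ok = bit_flip_decode(H, received)
print(ok, decoded)
```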
NASA Technical Reports Server (NTRS)
Mcdonald, M. W.
1982-01-01
A frequency modulated continuous wave radar system was developed. The system operates in the 35 gigahertz frequency range and provides millimeter-accuracy range and range rate measurements. This level of range resolution allows soft docking for the proposed teleoperator maneuvering system (TMS) or other autonomous or robotic space vehicles. Sources of error in the operation of the system which tend to limit its range resolution capabilities are identified. Alternative signal processing techniques are explored, with emphasis on determining the effects of inserting various signal filtering circuits in the system. The identification and elimination of an extraneous low frequency signal component, created by the immediate zero-range reflection of radar energy from the surface of the antenna dish back into the mixer of the system, is described.
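In an FMCW radar, range follows from the beat frequency between the transmitted chirp and its echo, R = c f_b T / (2B). A minimal numerical sketch; the sweep parameters are assumptions, not the 35-GHz system's actual values:

```python
C = 3.0e8  # speed of light, m/s

def fmcw_range(beat_hz, sweep_bw_hz, sweep_time_s):
    """Range from beat frequency: R = c * f_b * T / (2 * B)."""
    return C * beat_hz * sweep_time_s / (2.0 * sweep_bw_hz)

# Assumed sweep: 150 MHz bandwidth over 1 ms.
B, T = 150e6, 1e-3
print(fmcw_range(beat_hz=10e3, sweep_bw_hz=B, sweep_time_s=T))  # 10.0 m
# The raw range resolution is c / (2B), here 1 m; millimeter accuracy
# requires interpolating the beat-frequency estimate far below one FFT bin.
```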
Khurana, Harpreet Kaur; Cho, Il Kyu; Shim, Jae Yong; Li, Qing X; Jun, Soojin
2008-02-13
Aspartame is a low-calorie sweetener commonly used in soft drinks; however, the maximum usage dose is limited by the U.S. Food and Drug Administration. Fourier transform infrared (FTIR) spectroscopy with an attenuated total reflectance sampling accessory and partial least-squares regression (PLS) was used for rapid determination of aspartame in soft drinks. On the basis of spectral characterization, the highest R2 value, and the lowest PRESS value, the spectral region between 1600 and 1900 cm(-1) was selected for quantitative estimation of aspartame. The potential of FTIR spectroscopy for aspartame quantification was examined and validated by the conventional HPLC method. Using the FTIR method, average aspartame contents in four selected carbonated diet soft drinks ranged from 0.43 to 0.50 mg/mL, with prediction errors ranging from 2.4 to 5.7% when compared with HPLC measurements. The developed method also showed a high degree of accuracy because real samples were used for calibration, thus minimizing potential interference errors. The FTIR method developed can be suitably used for routine quality control analysis of aspartame in the beverage-manufacturing sector.
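In outline, the calibration step (PLS regression restricted to the selected 1600-1900 cm(-1) window) looks like the following; the synthetic spectra, wavenumber grid, and component count are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)
wavenumbers = np.arange(900, 3000, 4.0)          # cm^-1 grid (assumed)
n_samples = 40
conc = rng.uniform(0.3, 0.6, n_samples)          # aspartame, mg/mL

# Synthetic absorbance: a concentration-scaled band near 1700 cm^-1 + noise.
band = np.exp(-((wavenumbers - 1700) / 40.0) ** 2)
spectra = conc[:, None] * band \
    + 0.01 * rng.standard_normal((n_samples, band.size))

# Restrict to the 1600-1900 cm^-1 window selected in the study.
mask = (wavenumbers >= 1600) & (wavenumbers <= 1900)
pls = PLSRegression(n_components=3)              # component count assumed
pls.fit(spectra[:30, mask], conc[:30])
pred = pls.predict(spectra[30:, mask]).ravel()
rel_err = np.abs(pred - conc[30:]) / conc[30:] * 100
print(f"mean relative prediction error: {rel_err.mean():.2f}%")
```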
Estimating patient-specific soft-tissue properties in a TKA knee.
Ewing, Joseph A; Kaufman, Michelle K; Hutter, Erin E; Granger, Jeffrey F; Beal, Matthew D; Piazza, Stephen J; Siston, Robert A
2016-03-01
Surgical technique is one factor that has been identified as critical to success of total knee arthroplasty. Researchers have shown that computer simulations can aid in determining how decisions in the operating room generally affect post-operative outcomes. However, to use simulations to make clinically relevant predictions about knee forces and motions for a specific total knee patient, patient-specific models are needed. This study introduces a methodology for estimating knee soft-tissue properties of an individual total knee patient. A custom surgical navigation system and stability device were used to measure the force-displacement relationship of the knee. Soft-tissue properties were estimated using a parameter optimization that matched simulated tibiofemoral kinematics with experimental tibiofemoral kinematics. Simulations using optimized ligament properties had an average root mean square error of 3.5° across all tests while simulations using generic ligament properties taken from literature had an average root mean square error of 8.4°. Specimens showed large variability among ligament properties regardless of similarities in prosthetic component alignment and measured knee laxity. These results demonstrate the importance of soft-tissue properties in determining knee stability, and suggest that to make clinically relevant predictions of post-operative knee motions and forces using computer simulations, patient-specific soft-tissue properties are needed. © 2015 Orthopaedic Research Society. Published by Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wang, Biao; Yu, Xiaofen; Li, Qinzhao; Zheng, Yu
2008-10-01
To address the factors that influence measurement accuracy, namely circular grating dividing error, rolling-wheel eccentricity, and surface shape errors, this paper provides a rolling-wheel-based correction method: a composite error model that includes all of the above influence factors is built and then used to correct the angle measurement error caused by non-circularity of the rolling wheel. Software simulation and experiments were carried out; the results indicate that the composite error correction method can improve the diameter measurement accuracy of the rolling-wheel approach. The method has wide application prospects for measurement accuracies better than 5 μm/m.
Human Activity Recognition by Combining a Small Number of Classifiers.
Nazabal, Alfredo; Garcia-Moreno, Pablo; Artes-Rodriguez, Antonio; Ghahramani, Zoubin
2016-09-01
We consider the problem of daily human activity recognition (HAR) using multiple wireless inertial sensors, and specifically, HAR systems with a very low number of sensors, each one providing an estimation of the performed activities. We propose new Bayesian models to combine the outputs of the sensors. The models are based on a soft-output combination of individual classifiers to deal with the small number of sensors. We also incorporate the dynamic nature of human activities as a first-order homogeneous Markov chain. We develop both inductive and transductive inference methods for each model, to be employed in supervised and semisupervised situations, respectively. Using different real HAR databases, we compare our classifier-combination models against a single classifier that employs all the signals from the sensors. Our models consistently exhibit a reduction of the error rate and an increase of robustness against sensor failures. Our models also outperform other classifier-combination models that do not consider soft outputs and a Markovian structure of the human activities.
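A minimal version of the combination idea: multiply the per-sensor soft outputs (class posteriors) at each time step and smooth the result through a first-order Markov chain with a forward recursion. The transition matrix and sensor outputs below are synthetic assumptions:

```python
import numpy as np

def fuse_and_filter(sensor_probs, A, prior):
    """sensor_probs: (T, S, K) soft outputs of S classifiers over K classes.
    A: (K, K) Markov transitions, A[i, j] = P(z_t = j | z_{t-1} = i)."""
    T, _, K = sensor_probs.shape
    alpha = prior.copy()
    path = np.empty(T, dtype=int)
    for t in range(T):
        like = sensor_probs[t].prod(axis=0)   # naive product of soft outputs
        alpha = (alpha @ A) * like            # forward step: predict, update
        alpha /= alpha.sum()
        path[t] = alpha.argmax()
    return path

rng = np.random.default_rng(4)
K, S, T = 3, 2, 8                 # 3 activities, 2 sensors, 8 time steps
A = np.full((K, K), 0.1) + 0.7 * np.eye(K)    # sticky activities (assumed)
probs = rng.dirichlet(np.ones(K), size=(T, S))
print(fuse_and_filter(probs, A, prior=np.full(K, 1.0 / K)))
```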
Cutti, Andrea Giovanni; Cappello, Angelo; Davalli, Angelo
2006-01-01
Soft tissue artefact is the dominant error source for upper extremity motion analyses that use skin-mounted markers, especially in humeral axial rotation. A new in vivo technique is presented that is based on the definition of a humerus bone-embedded frame that is almost "artefact free" but influenced by the elbow orientation in the measurement of humeral axial rotation, and on an algorithm designed to solve this kinematic coupling. The technique was validated in vivo in a study of six healthy subjects who performed five arm-movement tasks. For each task, the similarity between a gold standard pattern and the axial rotation pattern before and after the application of the compensation algorithm was evaluated in terms of explained variance, gain, phase, and offset. In addition, the root mean square error between the patterns was used as a global similarity estimator. After the application, for four out of five tasks, patterns were highly correlated and in phase, with almost equal gain and limited offset; the root mean square error decreased from the original 9 degrees to 3 degrees. The proposed technique appears to help compensate for the soft tissue artefact affecting axial rotation. A further development is also proposed to make the technique effective for the pure prono-supination task as well.
In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function
NASA Astrophysics Data System (ADS)
Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir
2018-03-01
We present results of in-flight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1.2 arcmin half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1 σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.
Eisner, Brian H; Kambadakone, Avinash; Monga, Manoj; Anderson, James K; Thoreson, Andrew A; Lee, Hang; Dretler, Stephen P; Sahani, Dushyant V
2009-04-01
We determined the most accurate method of measuring urinary stones on computerized tomography. For the in vitro portion of the study, 24 calculi, including 12 calcium oxalate monohydrate and 12 uric acid stones, that had been previously collected at our clinic were measured manually with hand calipers as the gold standard measurement. The calculi were then embedded into human kidney-sized potatoes and scanned using 64-slice multidetector computerized tomography. Computerized tomography measurements were performed at 4 window settings, including standard soft tissue windows (window width 320 and window level 50), standard bone windows (window width 1120 and window level 300), 5.13x magnified soft tissue windows and 5.13x magnified bone windows. Maximum stone dimensions were recorded. For the in vivo portion of the study, 41 patients with distal ureteral stones who underwent noncontrast computerized tomography and subsequently spontaneously passed the stones were analyzed. All analyzed stones were 100% calcium oxalate monohydrate or mixed, calcium based stones. Stones were prospectively collected at the clinic and the largest diameter was measured with digital calipers as the gold standard. This was compared to computerized tomography measurements using 4.0x magnified soft tissue windows and 4.0x magnified bone windows. Statistical comparisons were performed using Pearson's correlation and the paired t test. In the in vitro portion of the study the most accurate measurements were obtained using 5.13x magnified bone windows, with a mean 0.13 mm difference from caliper measurement (p = 0.6). Measurements performed in the soft tissue window with and without magnification, and in the bone window without magnification, were significantly different from hand caliper measurements (mean difference 1.2, 1.9 and 1.4 mm, p = 0.003, <0.001 and 0.0002, respectively). When comparing measurement errors between stones of different composition in vitro, the error for calcium oxalate calculi was significantly different from the gold standard for all methods except bone window settings with magnification. For uric acid calculi, measurement error was observed only in standard soft tissue window settings. In vivo, 4.0x magnified bone windows were superior to 4.0x magnified soft tissue windows in measurement accuracy. Magnified bone window measurements were not statistically different from digital caliper measurements (mean underestimation vs digital caliper 0.3 mm, p = 0.4), while magnified soft tissue windows were statistically distinct (mean underestimation 1.4 mm, p = 0.001). In this study magnified bone windows were the most accurate method of stone measurement in vitro and in vivo. Therefore, we recommend the routine use of magnified bone windows for computerized tomography measurement of stones. In vitro, the measurement error in calcium oxalate stones was greater than that in uric acid stones, suggesting that stone composition may be responsible for measurement inaccuracies.
Co-operation of digital nonlinear equalizers and soft-decision LDPC FEC in nonlinear transmission.
Tanimura, Takahito; Oda, Shoichiro; Hoshida, Takeshi; Aoki, Yasuhiko; Tao, Zhenning; Rasmussen, Jens C
2013-12-30
We experimentally and numerically investigated the characteristics of 128 Gb/s dual polarization - quadrature phase shift keying signals received with two types of nonlinear equalizers (NLEs) followed by soft-decision (SD) low-density parity-check (LDPC) forward error correction (FEC). Successful co-operation between SD-FEC and the NLEs over various nonlinear transmission conditions was demonstrated by optimizing the NLE parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
Real-time fetal ECG system design using embedded microprocessors
NASA Astrophysics Data System (ADS)
Meyer-Baese, Uwe; Muddu, Harikrishna; Schinhaerl, Sebastian; Kumm, Martin; Zipf, Peter
2016-05-01
The emphasis of this project lies in the development and evaluation of new robust and high fidelity fetal electrocardiogram (FECG) systems to determine the fetal heart rate (FHR). Recently, several powerful algorithms have been suggested to improve FECG fidelity. Until now it has been unknown whether these algorithms allow real-time processing, whether they can be used in mobile (low-power) systems, and which algorithm produces the best error rate for a given system configuration. In this work we have developed high performance, low power microprocessor-based biomedical systems that allow a fair comparison of proposed, state-of-the-art FECG algorithms. We will evaluate different soft-core microprocessors and compare these solutions to other commercial off-the-shelf (COTS) hardcore solutions in terms of price, size, power, and speed.
Dynamic soft tissue deformation estimation based on energy analysis
NASA Astrophysics Data System (ADS)
Gao, Dedong; Lei, Yong; Yao, Bin
2016-10-01
Needle placement accuracy of millimeters is required in many needle-based surgeries. Tissue deformation, especially that occurring on the surface of organ tissue, affects the needle-targeting accuracy of both manual and robotic needle insertions. It is therefore necessary to understand the mechanism of tissue deformation during needle insertion into soft tissue. In this paper, soft tissue surface deformation is investigated on the basis of continuum mechanics, and a geometric model is presented to quantitatively approximate the volume of tissue deformation. An energy-based method is applied to the dynamic process of needle insertion into soft tissue, and the volume of a cone is used to quantitatively approximate the deformation on the surface of the soft tissue. The external work is converted into potential, kinetic, dissipated, and strain energies during the dynamic rigid needle-tissue interaction process. The needle insertion experimental setup, consisting of a linear actuator, force sensor, needle, tissue container, and a light, is constructed, and an image-based method for measuring the depth and radius of the soft tissue surface deformation is introduced to obtain the experimental data. The relationship between the changed volume of tissue deformation and the insertion parameters is established based on the law of conservation of energy, with the volume of tissue deformation obtained using image-based measurements. The experiments are performed on phantom specimens, and an energy-based analytical fitted model is presented to estimate the volume of tissue deformation. The experimental results show that the energy-based analytical fitted model can predict the volume of soft tissue deformation; the root mean squared errors between the fitted model and the experimental data are 0.61 and 0.25 at velocities of 2.50 mm/s and 5.00 mm/s, respectively. The estimated parameters of the soft tissue surface deformation are shown to be useful for compensating the needle-targeting error in rigid needle insertion procedures, especially for percutaneous needle insertion into organs.
Ciambella, J; Paolone, A; Vidoli, S
2014-09-01
We report on the experimental identification of viscoelastic constitutive models for frequencies ranging within 0-10 Hz. Dynamic moduli data are fitted for several materials of interest to medical applications: liver tissue (Chatelin et al., 2011), bioadhesive gel (Andrews et al., 2005), spleen tissue (Nicolle et al., 2012) and synthetic elastomer (Osanaiye, 1996). These materials represent a rather wide class of soft viscoelastic materials which are usually subjected to low-frequency deformations. We also provide prescriptions for the correct extrapolation of the material behavior at higher frequencies. Indeed, while experimental tests are more easily carried out at low frequency, the identified viscoelastic models are often used outside the frequency range of the actual test. We consider two different classes of models according to their relaxation function: Debye models, whose kernel decays exponentially fast, and fractional models, including Cole-Cole, Davidson-Cole, Nutting and Havriliak-Negami, characterized by a slower decay rate of the material memory. Candidate constitutive models are then rated according to the accuracy of the identification and to their robustness under extrapolation. It is shown that all kernels whose decay rate is too fast lead to a poor fit and to high errors when the material behavior is extrapolated to broader frequency ranges. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
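Identifying such a model reduces to complex least squares on the dynamic moduli. A sketch for a Cole-Cole-type complex modulus, with synthetic data standing in for the published measurements; the model form, parameter values, and starting guesses are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def cole_cole(params, omega):
    """Cole-Cole-type modulus: G* = Ginf + (G0 - Ginf)/(1 + (i w tau)^a)."""
    g0, ginf, tau, alpha = params
    return ginf + (g0 - ginf) / (1.0 + (1j * omega * tau) ** alpha)

def residuals(params, omega, g_meas):
    diff = cole_cole(params, omega) - g_meas
    return np.concatenate([diff.real, diff.imag])  # storage + loss moduli

# Synthetic "measured" moduli over the 0-10 Hz experimental range.
omega = 2 * np.pi * np.linspace(0.1, 10, 30)
true = (1000.0, 100.0, 0.05, 0.7)
g_meas = cole_cole(true, omega)

fit = least_squares(residuals, x0=(500.0, 50.0, 0.1, 0.5),
                    args=(omega, g_meas),
                    bounds=([0, 0, 1e-4, 0.05], [1e5, 1e4, 10, 1.0]))
print("recovered parameters:", fit.x)
# Extrapolation robustness can then be checked by evaluating the fitted
# kernel well above the 10 Hz range used for identification.
```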
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
Radiation test data submitted by many testers is collated to serve as a reference for engineers who are concerned with and have some knowledge of the effects of the natural radiation environment on microcircuits. Total dose damage information and single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup) are presented.
Sum of the Magnitude for Hard Decision Decoding Algorithm Based on Loop Update Detection
Meng, Jiahui; Zhao, Danfeng; Tian, Hai; Zhang, Liang
2018-01-01
In order to improve the performance of the hard-decision decoding algorithm for non-binary low-density parity-check (LDPC) codes and to reduce the complexity of decoding, a sum-of-the-magnitude hard-decision decoding algorithm based on loop update detection is proposed, supporting the reliability, stability, and high transmission rates required by 5G mobile communication. The algorithm is based on the hard decision decoding algorithm (HDA) and uses the soft information from the channel to calculate the reliability, while the magnitude sum of the variable nodes (VNs) is excluded when computing the reliability of the parity checks. At the same time, the reliability information of the variable node is considered and a loop update detection algorithm is introduced. Bits of the erroneous code word are flipped multiple times, searched in order of decreasing error probability, to finally find the correct code word. Simulation results show that the performance of one of the improved schemes is better than the weighted symbol flipping (WSF) algorithm under different hexadecimal numbers by about 2.2 dB and 2.35 dB at a bit error rate (BER) of 10−5 over an additive white Gaussian noise (AWGN) channel, respectively. Furthermore, the average number of decoding iterations is significantly reduced. PMID:29342963
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques are recently becoming very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression, and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained using mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step and to show better correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error, and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
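The workflow summarized above (log-transform the permeability target, an 80/20 split, a feed-forward network on porosity, grain density, and Thomeer inputs, relative-error statistics) can be expressed in outline as follows. Synthetic data stands in for the Arab D dataset, and the network size is an assumption:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n = 400
X = np.column_stack([rng.uniform(0.05, 0.3, n),   # air porosity
                     rng.uniform(2.6, 2.9, n),    # grain density
                     rng.uniform(0.1, 5.0, n)])   # a Thomeer-like parameter
# Synthetic log10-permeability with noise (assumed relationship).
log_k = 2.0 + 8.0 * X[:, 0] - 0.5 * X[:, 2] + 0.1 * rng.standard_normal(n)

X_tr, X_te, y_tr, y_te = train_test_split(X, log_k, test_size=0.2,
                                          random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 8),
                                   max_iter=3000, random_state=0))
model.fit(X_tr, y_tr)

k_pred, k_true = 10 ** model.predict(X_te), 10 ** y_te
aare = np.mean(np.abs(k_pred - k_true) / k_true) * 100
print(f"average absolute relative error: {aare:.1f}%")
```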
Evidence for explosive chromospheric evaporation in a solar flare observed with SMM
NASA Technical Reports Server (NTRS)
Zarro, D. M.; Saba, J. L. R.; Strong, K. T.; Canfield, R. C.; Metcalf, T.
1986-01-01
SMM soft X-ray data and Sacramento Peak Observatory H-alpha observations are combined in a study of the impulsive phase of a solar flare. A blue asymmetry, indicative of upflow motions, was observed in the coronal Ca XIX line during the soft X-ray rise phase. H-alpha redshifts, indicative of downward motions, were observed simultaneously in bright flare kernels during the period of hard X-ray emission. It is shown that, to within observational errors, the impulsive phase momentum transported by the upflowing soft X-ray plasma is equivalent to that of the downward moving chromospheric material.
Detecting Silent Data Corruption for Extreme-Scale Applications through Data Mining
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bautista-Gomez, Leonardo; Cappello, Franck
Supercomputers allow scientists to study natural phenomena by means of computer simulations. Next-generation machines are expected to have more components and, at the same time, consume several times less energy per operation. These trends are pushing supercomputer construction to the limits of miniaturization and energy-saving strategies. Consequently, the number of soft errors is expected to increase dramatically in the coming years. While mechanisms are in place to correct or at least detect some soft errors, a significant percentage of those errors pass unnoticed by the hardware. Such silent errors are extremely damaging because they can make applications silently produce wrong results. In this work we propose a technique that leverages certain properties of high-performance computing applications in order to detect silent errors at the application level. Our technique detects corruption solely based on the behavior of the application datasets and is completely application-agnostic. We propose multiple corruption detectors, and we couple them to work together in a fashion transparent to the user. We demonstrate that this strategy can detect the majority of the corruptions, while incurring negligible overhead. We show that with the help of these detectors, applications can have up to 80% coverage against data corruption.
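The general strategy (predict each dataset value from its recent behavior and flag observations that violate an adaptive bound) can be illustrated with the simplest such detector, a last-value predictor with a range-scaled threshold. This is a hedged sketch of the idea, not the authors' detectors; the 10% threshold factor is an assumption:

```python
import numpy as np

def detect_sdc(snapshots, rel_threshold=0.1):
    """Flag grid points whose change between consecutive snapshots exceeds
    a bound scaled to the dataset's recent dynamic range (assumed 10%)."""
    flagged = []
    for t in range(1, len(snapshots)):
        prev, cur = snapshots[t - 1], snapshots[t]
        bound = rel_threshold * (prev.max() - prev.min())  # adaptive bound
        bad = np.abs(cur - prev) > bound    # last-value prediction error
        if bad.any():
            flagged.append((t, np.flatnonzero(bad)))
    return flagged

# Smoothly evolving 1-D "simulation" field with one silently corrupted cell.
x = np.linspace(0, 1, 100)
snaps = [np.sin(2 * np.pi * (x - 0.01 * t)) for t in range(10)]
snaps[6][42] += 0.8    # inject a silent, bit-flip-like corruption
print(detect_sdc(np.array(snaps)))
```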
Channel modeling, signal processing and coding for perpendicular magnetic recording
NASA Astrophysics Data System (ADS)
Wu, Zheng
With the increasing areal density in magnetic recording systems, perpendicular recording has replaced longitudinal recording to overcome the superparamagnetic limit. Studies on perpendicular recording channels including aspects of channel modeling, signal processing and coding techniques are presented in this dissertation. To optimize a high density perpendicular magnetic recording system, one needs to know the tradeoffs between various components of the system including the read/write transducers, the magnetic medium, and the read channel. We extend the work by Chaichanavong on the parameter optimization for systems via design curves. Different signal processing and coding techniques are studied. Information-theoretic tools are utilized to determine the acceptable region for the channel parameters when optimal detection and linear coding techniques are used. Our results show that a considerable gain can be achieved by the optimal detection and coding techniques. The read-write process in perpendicular magnetic recording channels includes a number of nonlinear effects. Nonlinear transition shift (NLTS) is one of them. The signal distortion induced by NLTS can be reduced by write precompensation during data recording. We numerically evaluate the effect of NLTS on the read-back signal and examine the effectiveness of several write precompensation schemes in combating NLTS in a channel characterized by both transition jitter noise and additive white Gaussian electronics noise. We also present an analytical method to estimate the bit-error-rate and use it to help determine the optimal write precompensation values in multi-level precompensation schemes. We propose a mean-adjusted pattern-dependent noise predictive (PDNP) detection algorithm for use on the channel with NLTS. We show that this detector can offer significant improvements in bit-error-rate (BER) compared to conventional Viterbi and PDNP detectors. Moreover, the system performance can be further improved by combining the new detector with a simple write precompensation scheme. Soft-decision decoding for algebraic codes can improve performance for magnetic recording systems. In this dissertation, we propose two soft-decision decoding methods for tensor-product parity codes. We also present a list decoding algorithm for generalized error locating codes.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, and that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error correcting code.
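For cells whose upsets are strongly biased in one direction, codes designed for unidirectional errors are the natural fit. As an illustration of the principle (not the circuit claimed in this patent), a Berger code detects any number of 1-to-0 errors by appending a count of the zeros:

```python
def berger_encode(data_bits):
    """Append the zero count; any purely 1->0 corruption raises the actual
    zero count above the stored count, so it is always detected."""
    zeros = data_bits.count(0)
    width = max(1, len(data_bits)).bit_length()
    check = [int(b) for b in format(zeros, f"0{width}b")]
    return data_bits + check

def berger_check(word, n_data):
    data, check = word[:n_data], word[n_data:]
    return data.count(0) == int("".join(map(str, check)), 2)

word = berger_encode([1, 0, 1, 1, 0, 1, 1, 1])
print(berger_check(word, 8))        # True: intact word
corrupted = word.copy()
corrupted[0] = 0                    # a 1 -> 0 soft error in the data
print(berger_check(corrupted, 8))   # False: corruption detected
```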
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performance. The outer code decoder helps the inner turbo code decoder to terminate its decoding iterations, while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between the outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
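The decoder interaction reduces to a simple control loop: run one inner turbo pass, hand hard decisions to the outer Reed-Solomon decoder, and stop as soon as the outer decode succeeds. The sketch below shows only that control flow; turbo_iteration and rs_decode are hypothetical stubs, not real component decoders:

```python
def turbo_iteration(llrs):
    """Hypothetical stub: a real turbo pass would refine these LLRs."""
    return [2.0 * llr for llr in llrs]

def rs_decode(hard_bits):
    """Hypothetical stub: a real RS decoder returns (success, data)."""
    return (sum(hard_bits) % 2 == 0, hard_bits)

def decode_concatenated(channel_llrs, max_iterations=20):
    """Outer RS decoding terminates the inner turbo iterations early."""
    soft = channel_llrs
    for iteration in range(1, max_iterations + 1):
        soft = turbo_iteration(soft)            # one inner decoding pass
        hard = [1 if llr < 0 else 0 for llr in soft]
        ok, data = rs_decode(hard)              # outer decode attempt
        if ok:
            return data, iteration              # early stop reduces delay
    return None, max_iterations                 # report decoding failure

print(decode_concatenated([0.9, -1.3, 0.4, -0.2]))
```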
Management Aspects of Software Maintenance.
1984-09-01
educated in the complex nature of software maintenance to be able to properly evaluate and manage the software maintenance effort. In this... maintenance and improvement may be called "software evolution". The software manager must be educated in the complex nature of software maintenance to be... complaint of error or request for modification is also studied in order to determine what action needs to be taken. 2. Define Objective and Approach:
Stephan, Carl N; Simpson, Ellie K
2008-11-01
With the ever increasing production of average soft tissue depth studies, data are becoming increasingly complex, less standardized, and more unwieldy. So far, no overarching review has been attempted to determine: the validity of continued data collection; the usefulness of the existing data subcategorizations; or whether a synthesis is possible to produce a manageable soft tissue depth library. While a principal components analysis would provide the best foundation for such an assessment, this type of investigation is not currently possible because of a lack of easily accessible raw data (first, many studies are narrow; second, raw data are infrequently published and/or stored, and are not always shared by some authors). This paper provides an alternative means of investigation, using a hierarchical approach to review and compare the effects of single variables on published mean values for adults while acknowledging measurement errors and within-group variation. The results revealed: (i) no clear secular trends at frequently investigated landmarks; (ii) wide variation in soft tissue depth measures between different measurement techniques, irrespective of whether living persons or cadavers were considered; (iii) no clear clustering of non-Caucasoid data far from the Caucasoid means; and (iv) minor differences between males and females. Consequently, the data were pooled across studies using weighted means and standard deviations to cancel out random and opposing study-specific errors, and to produce a single soft tissue depth table with increased sample sizes (e.g., 6786 individuals at pogonion).
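Pooling across studies with weights proportional to sample size is a standard computation: a sample-size-weighted mean, and a pooled standard deviation that combines within-study variance with the between-study spread of means. A sketch with made-up example numbers:

```python
import numpy as np

def pool_studies(ns, means, sds):
    """Sample-size-weighted pooled mean and SD across studies."""
    ns, means, sds = map(np.asarray, (ns, means, sds))
    n_tot = ns.sum()
    m = (ns * means).sum() / n_tot
    # Combine within-study variance and between-study spread of means.
    ss = ((ns - 1) * sds**2 + ns * (means - m) ** 2).sum()
    return m, np.sqrt(ss / (n_tot - 1))

# Hypothetical pogonion depth summaries (mm) from three studies.
print(pool_studies(ns=[120, 300, 85],
                   means=[10.1, 10.8, 9.7],
                   sds=[1.9, 2.3, 2.0]))
```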
Cheng, Tzu-Han; Tsai, Chen-Gia
2016-01-01
Although music and the emotion it conveys unfold over time, little is known about how listeners respond to shifts in musical emotions. A special technique in heavy metal music utilizes dramatic shifts between loud and soft passages. Loud passages are penetrated by distorted sounds conveying aggression, whereas soft passages are often characterized by a clean, calm singing voice and light accompaniment. The present study used heavy metal songs and soft sea sounds to examine how female listeners' respiration rates and heart rates responded to the arousal changes associated with auditory stimuli. The high-frequency power of heart rate variability (HF-HRV) was used to assess cardiac parasympathetic activity. The results showed that the soft passages of heavy metal songs and soft sea sounds expressed lower arousal and induced significantly higher HF-HRVs than the loud passages of heavy metal songs. Listeners' respiration rate was determined by the arousal level of the present music passage, whereas the heart rate was dependent on both the present and preceding passages. Compared with soft sea sounds, the loud music passage led to greater deceleration of the heart rate at the beginning of the following soft music passage. The sea sounds delayed the heart rate acceleration evoked by the following loud music passage. The data provide evidence that sound-induced parasympathetic activity affects listeners' heart rate in response to the following music passage. These findings have potential implications for future research on the temporal dynamics of musical emotions.
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-09-01
The effect of correcting static vision on sports vision is still not clear. The aim was to examine whether sports vision measures (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), athletes with refractive error corrected with eyeglasses (Group B), and athletes with uncorrected refractive error (Group C). A cross-sectional study was conducted. Soft tennis athletes aged 10-13 who had played soft tennis for 2-5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. The chi-square test and the Kruskal-Wallis test were used to compare the data among the three study groups. A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether the error was corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups.
Suess, D.; Fuger, M.; Abert, C.; Bruckner, F.; Vogler, C.
2016-01-01
We report two effects that lead to a significant reduction of the switching field distribution in exchange spring media. The first effect relies on a subtle mechanism of the interplay between exchange coupling between soft and hard layers and anisotropy that allows significant reduction of the switching field distribution in exchange spring media. This effect reduces the switching field distribution by about 30% compared to single-phase media. A second effect is that due to the improved thermal stability of exchange spring media over single-phase media, the jitter due to thermal fluctuation is significantly smaller for exchange spring media than for single-phase media. The influence of this overall improved switching field distribution on the transition jitter in granular recording and the bit error rate in bit-patterned magnetic recording is discussed. The transition jitter in granular recording for a distribution of Khard values of 3% in the hard layer, taking into account thermal fluctuations during recording, is estimated to be a = 0.78 nm, which is similar to the best reported calculated jitter in optimized heat-assisted recording media. PMID:27245287
A soft kinetic data structure for lesion border detection.
Kockara, Sinan; Mete, Mutlu; Yip, Vincent; Lee, Brendan; Aydin, Kemal
2010-06-15
Medical imaging and image processing techniques, ranging from microscopic to macroscopic, have become main components of diagnostic procedures to assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection on dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field, mainly because of inter- and intra-observer variations in human interpretation. In this study, a novel approach, the graph spanner, for automatic border detection in dermoscopic images is proposed. In this approach, a proximity graph representation of dermoscopic images is used to detect regions and borders in skin lesions. The graph spanner approach is examined on a set of 100 dermoscopic images whose manually drawn borders, provided by a dermatologist, are used as the ground truth. Error rates, false positives, and false negatives, along with true positives and true negatives, are quantified by digitally comparing results with the manually determined borders from the dermatologist. The results show that the highest precision and recall rates obtained in determining lesion boundaries are 100%. However, the accuracy of assessment averages 97.72%, and the mean border error is 2.28% over the whole dataset.
Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares
Long, Rong; Chen, Qihong; Zhang, Liyan; Ma, Longhua; Quan, Shuhai
2013-01-01
Online monitoring of humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would therefore be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction in a PEM fuel cell. In order to obtain humidity data and test the feasibility of the proposed DPLS-based soft sensor, a hardware-in-the-loop (HIL) test system is constructed. The time lag of the DPLS-based soft sensor is selected as 30 by comparing the root-mean-square error at different time lags. The performance of the proposed DPLS-based soft sensor is demonstrated by experimental results. PMID:24453923
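Dynamic PLS augments the regressor matrix with time-lagged copies of the measured inputs before an ordinary PLS fit, so a selected time lag of 30 means each prediction sees the last 30 input samples. A minimal sketch; the lag, component count, and synthetic signals are assumptions:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def lagged_matrix(u, lag):
    """Stack lag consecutive samples of u into one row per time window."""
    rows = [u[i:len(u) - lag + i + 1] for i in range(lag)]
    return np.column_stack(rows)

rng = np.random.default_rng(6)
T, LAG = 500, 30
current = rng.uniform(10, 60, T)        # a measurable input (assumed)
# Synthetic "humidity": a slow filtered response to the input + noise.
kernel = np.exp(-np.arange(LAG) / 8.0)
kernel /= kernel.sum()
humidity = np.convolve(current, kernel, mode="valid") \
    + 0.2 * rng.standard_normal(T - LAG + 1)

X = lagged_matrix(current, LAG)         # shape (T - LAG + 1, LAG)
pls = PLSRegression(n_components=5).fit(X[:400], humidity[:400])
pred = pls.predict(X[400:]).ravel()
rmse = np.sqrt(np.mean((pred - humidity[400:]) ** 2))
print(f"test RMSE: {rmse:.3f}")
```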
NASA Astrophysics Data System (ADS)
Xiao, Fei; Liu, Bo; Zhang, Lijia; Xin, Xiangjun; Zhang, Qi; Tian, Qinghua; Tian, Feng; Wang, Yongjun; Rao, Lan; Ullah, Rahat; Zhao, Feng; Li, Deng'ao
2018-02-01
A rate-adaptive multilevel coded modulation (RA-MLC) scheme based on a fixed code length, together with a corresponding decoding scheme, is proposed. The RA-MLC scheme combines multilevel coding and modulation technology with binary linear block codes at the transmitter. Bit division, coding, optional interleaving, and modulation are carried out according to a preset rule, and the signal is then transmitted through a standard single-mode fiber span of 100 km. The receiver improves decoding accuracy by passing soft information between the different layers, which enhances performance. Simulations are carried out in an intensity modulation-direct detection optical communication system using MATLAB®. Results show that the RA-MLC scheme can achieve a bit error rate of 1E-5 when the optical signal-to-noise ratio is 20.7 dB. It also reduced the number of decoders by 72% and realized 22 rate-adaptation levels without significantly increasing the computing time. The coding gain is increased by 7.3 dB at a BER of 1E-3.
Auxiliary variables for numerically solving nonlinear equations with softly broken symmetries.
Olum, Ken D; Masoumi, Ali
2017-06-01
General methods for solving simultaneous nonlinear equations work by generating a sequence of approximate solutions that successively improve a measure of the total error. However, if the total error function has a narrow curved valley, the available techniques tend to find the solution only after a very large number of steps, if ever. The solver first converges rapidly to the valley, but once there it converges extremely slowly to the solution. In this paper we show that in the specific, physically important case where these valleys are the result of a softly broken symmetry, the solution can often be found much more quickly by adding the generators of the softly broken symmetry as auxiliary variables. This makes the number of variables greater than the number of equations, and hence there is a family of solutions, any one of which is acceptable. We present a procedure for finding solutions in this case and apply it to several simple examples and to an important problem in the physics of false vacuum decay. We also provide a Mathematica package that implements Powell's hybrid method with the generalization to allow more variables than equations.
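A toy version of the trick: the system x^2 + y^2 - 1 = 0 together with the weak condition eps*(x - 0.8) = 0 has an almost-exact rotation symmetry, producing a narrow curved valley along the circle. Adding the rotation generator as an auxiliary angle lets the solver slide along the valley cheaply. This construction is my illustration, not the paper's Mathematica package; SciPy's trust-region least squares is used because it accepts more variables than residuals:

```python
import numpy as np
from scipy.optimize import least_squares

EPS = 1e-3  # strength of the soft symmetry breaking (assumed)

def residuals(v):
    x, y, theta = v
    # Apply the softly broken symmetry's generator (a rotation) as an
    # auxiliary variable: the solver can move along the valley via theta.
    xr = np.cos(theta) * x - np.sin(theta) * y
    yr = np.sin(theta) * x + np.cos(theta) * y
    return [xr**2 + yr**2 - 1.0,    # exactly rotation-invariant equation
            EPS * (xr - 0.8)]       # soft breaking picks a circle point

# Three variables, two equations: any member of the solution family is fine.
sol = least_squares(residuals, x0=[0.1, 0.9, 0.0], method="trf")
x, y, theta = sol.x
print("residual norm:", np.linalg.norm(residuals(sol.x)))
print("solution point:", np.cos(theta) * x - np.sin(theta) * y,
      np.sin(theta) * x + np.cos(theta) * y)
```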
Bronson, N R
1984-05-01
A new A-mode biometry system for determining axial length measurements of the eye has been developed that incorporates a soft-membrane transducer. The soft transducer decreases the risk of indenting the cornea with the probe, which would result in inaccurate measurements. A microprocessor evaluates echo patterns and determines whether or not axial alignment has been obtained, eliminating possible user error. The new A-scan requires minimal user skill and can be used successfully by both physicians and technicians.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^-6. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
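The "general formula which applies to arbitrary modulations" is the standard exact per-bit LLR: for each bit position, sum the Gaussian likelihoods of all constellation points whose label carries a 0 in that position, do the same for 1, and take the log ratio. A sketch for an arbitrary labeled constellation, with 8-PSK and a Gray-style labeling as assumed examples:

```python
import numpy as np
from scipy.special import logsumexp

def bit_llrs(r, points, labels, n0):
    """Exact per-bit LLRs from one received complex symbol r.

    points: complex constellation points; labels: (M, m) bit labels;
    n0: complex-noise variance. LLR_i = log P(b_i=0|r) - log P(b_i=1|r).
    """
    metric = -np.abs(r - points) ** 2 / n0       # log-likelihood per symbol
    llrs = []
    for i in range(labels.shape[1]):
        zero, one = labels[:, i] == 0, labels[:, i] == 1
        llrs.append(logsumexp(metric[zero]) - logsumexp(metric[one]))
    return np.array(llrs)

# 8-PSK with an assumed Gray-style labeling.
M = 8
points = np.exp(1j * 2 * np.pi * np.arange(M) / M)
gray = np.arange(M) ^ (np.arange(M) >> 1)
labels = (gray[:, None] >> np.arange(2, -1, -1)) & 1   # (8, 3) bit matrix

rng = np.random.default_rng(7)
tx = points[3]
r = tx + 0.2 * (rng.standard_normal() + 1j * rng.standard_normal())
print(bit_llrs(r, points, labels, n0=2 * 0.2**2))
```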
Closed-Loop Analysis of Soft Decisions for Serial Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Steele, Glen F.; Zucha, Joan P.; Schlensinger, Adam M.
2012-01-01
Modern receivers are providing soft decision symbol synchronization as radio links are challenged to push more data and more overhead through noisier channels, and software-defined radios use error-correction techniques that approach Shannon's theoretical limit of performance. The authors describe the benefit of closed-loop measurements for a receiver when paired with a counterpart transmitter under representative channel conditions. We also describe a real-time Soft Decision Analyzer (SDA) implementation for closed-loop measurements on single- or dual- (orthogonal) channel serial data communication links. The analyzer has been used to identify, quantify, and prioritize contributors to implementation loss in real time during the development of software-defined radios.
Lim, Glendale; Lin, Guo-Hao; Monje, Alberto; Chan, Hsun-Liang; Wang, Hom-Lay
The rate of soft tissue complications accompanying guided bone regeneration (GBR) procedures varies widely, from 0% to 45%. The present review was conducted to investigate this rate for resorbable versus nonresorbable membranes and the timing of soft tissue complications. Electronic and manual literature searches were conducted by two independent reviewers using several databases, including MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, and Cochrane Oral Health Group Trials Register, for articles published through July 2015, with no language restriction. Articles were included if they were clinical trials aimed at demonstrating the incidence of soft tissue complications following GBR procedures. Overall, 21 and 15 articles were included in the qualitative and quantitative syntheses, respectively. Taking membrane exposure, soft tissue dehiscence, and acute infection/abscess into the calculation, the weighted rate of overall soft tissue complications was 16.8% (95% CI = 10.6% to 25.4%). When considering the complication rate by membrane type, resorbable membranes were associated with a weighted complication rate of 18.3% (95% CI: 10.4% to 30.4%) and nonresorbable membranes with a rate of 17.6% (95% CI: 10.0% to 29.3%). Moreover, soft tissue lesions were reported as early as 1 week and as late as 6 months in the included studies. Soft tissue complications after GBR are common (16.8%). Membrane type did not appear to significantly affect the complication rate, based on the limited data retrieved in this study. Technique sensitivity (i.e., soft tissue management) may still be regarded as the main component in avoiding soft tissue complications and, hence, in influencing the success of bone regenerative therapy.
De Rosario, Helios; Page, Álvaro; Besa, Antonio
2017-09-06
The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40 mm. Such location errors could be reduced to less than 17 mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.
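For readers unfamiliar with the finite-helical-axis idea behind two of the techniques above, a minimal sketch of one step (a Kabsch rigid fit between two marker clouds, then axis extraction) follows; the function names and noise-free test data are hypothetical, and this is not the authors' full mean-FHA or SARA pipeline:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with Q ~ P @ R.T + t (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def helical_axis(R, t):
    """Direction n and a point c on the finite helical axis of x -> R x + t."""
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    n = w / np.linalg.norm(w)           # rotation axis direction
    d = n @ t                           # translation along the axis
    # A point on the axis solves (I - R) c = t - d n (rank-2 system, use lstsq)
    c, *_ = np.linalg.lstsq(np.eye(3) - R, t - d * n, rcond=None)
    return n, c

# Example: markers rotated 30 degrees about a known axis through (1, 0, 0)
th = np.radians(30)
Rz = np.array([[np.cos(th), -np.sin(th), 0], [np.sin(th), np.cos(th), 0], [0, 0, 1]])
P = np.random.default_rng(0).normal(size=(6, 3)) + [3, 0, 0]
Q = (P - [1, 0, 0]) @ Rz.T + [1, 0, 0]
R, t = rigid_fit(P, Q)
print(helical_axis(R, t))   # direction ~ (0, 0, 1), point near x = 1, y = 0
```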
Justification of Estimates for Fiscal Year 1983 Submitted to Congress.
1982-02-01
hierarchies to aid software production; completion of the components of an adaptive suspension vehicle including a storage energy unit, hydraulics, laser...and corrosion (long storage times), and radiation-induced breakdown. Solid-lubricated main engine bearings for cruise missile engines would offer...environments will cause "soft errors" (computational and memory storage errors) in advanced microelectronic circuits. Research on high-speed, low-power
NASA Astrophysics Data System (ADS)
Abdallah, J.; Abreu, P.; Adam, W.; Adzic, P.; Albrecht, T.; Alemany-Fernandez, R.; Allmendinger, T.; Allport, P. P.; Amaldi, U.; Amapane, N.; Amato, S.; Anashkin, E.; Andreazza, A.; Andringa, S.; Anjos, N.; Antilogus, P.; Apel, W.-D.; Arnoud, Y.; Ask, S.; Asman, B.; Augustin, J. E.; Augustinus, A.; Baillon, P.; Ballestrero, A.; Bambade, P.; Barbier, R.; Bardin, D.; Barker, G. J.; Baroncelli, A.; Battaglia, M.; Baubillier, M.; Becks, K.-H.; Begalli, M.; Behrmann, A.; Ben-Haim, E.; Benekos, N.; Benvenuti, A.; Berat, C.; Berggren, M.; Bertrand, D.; Besancon, M.; Besson, N.; Bloch, D.; Blom, M.; Bluj, M.; Bonesini, M.; Boonekamp, M.; Booth, P. S. L.; Borisov, G.; Botner, O.; Bouquet, B.; Bowcock, T. J. V.; Boyko, I.; Bracko, M.; Brenner, R.; Brodet, E.; Bruckman, P.; Brunet, J. M.; Buschbeck, B.; Buschmann, P.; Calvi, M.; Camporesi, T.; Canale, V.; Carena, F.; Castro, N.; Cavallo, F.; Chapkin, M.; Charpentier, Ph.; Checchia, P.; Chierici, R.; Chliapnikov, P.; Chudoba, J.; Chung, S. U.; Cieslik, K.; Collins, P.; Contri, R.; Cosme, G.; Cossutti, F.; Costa, M. J.; Crennell, D.; Cuevas, J.; D'Hondt, J.; da Silva, T.; da Silva, W.; Della Ricca, G.; de Angelis, A.; de Boer, W.; de Clercq, C.; de Lotto, B.; de Maria, N.; de Min, A.; de Paula, L.; di Ciaccio, L.; di Simone, A.; Doroba, K.; Drees, J.; Eigen, G.; Ekelof, T.; Ellert, M.; Elsing, M.; Espirito Santo, M. C.; Fanourakis, G.; Fassouliotis, D.; Feindt, M.; Fernandez, J.; Ferrer, A.; Ferro, F.; Flagmeyer, U.; Foeth, H.; Fokitis, E.; Fulda-Quenzer, F.; Fuster, J.; Gandelman, M.; Garcia, C.; Gavillet, Ph.; Gazis, E.; Gokieli, R.; Golob, B.; Gomez-Ceballos, G.; Goncalves, P.; Graziani, E.; Grosdidier, G.; Grzelak, K.; Guy, J.; Haag, C.; Hallgren, A.; Hamacher, K.; Hamilton, K.; Haug, S.; Hauler, F.; Hedberg, V.; Hennecke, M.; Hoffman, J.; Holmgren, S.-O.; Holt, P. J.; Houlden, M. A.; Jackson, J. N.; Jarlskog, G.; Jarry, P.; Jeans, D.; Johansson, E. K.; Jonsson, P.; Joram, C.; Jungermann, L.; Kapusta, F.; Katsanevas, S.; Katsoufis, E.; Kernel, G.; Kersevan, B. P.; Kerzel, U.; King, B. T.; Kjaer, N. J.; Kluit, P.; Kokkinias, P.; Kourkoumelis, C.; Kouznetsov, O.; Krumstein, Z.; Kucharczyk, M.; Lamsa, J.; Leder, G.; Ledroit, F.; Leinonen, L.; Leitner, R.; Lemonne, J.; Lepeltier, V.; Lesiak, T.; Liebig, W.; Liko, D.; Lipniacka, A.; Lopes, J. H.; Lopez, J. M.; Loukas, D.; Lutz, P.; Lyons, L.; MacNaughton, J.; Malek, A.; Maltezos, S.; Mandl, F.; Marco, J.; Marco, R.; Marechal, B.; Margoni, M.; Marin, J.-C.; Mariotti, C.; Markou, A.; Martinez-Rivero, C.; Masik, J.; Mastroyiannopoulos, N.; Matorras, F.; Matteuzzi, C.; Mazzucato, F.; Mazzucato, M.; Mc Nulty, R.; Meroni, C.; Migliore, E.; Mitaroff, W.; Mjoernmark, U.; Moa, T.; Moch, M.; Moenig, K.; Monge, R.; Montenegro, J.; Moraes, D.; Moreno, S.; Morettini, P.; Mueller, U.; Muenich, K.; Mulders, M.; Mundim, L.; Murray, W.; Muryn, B.; Myatt, G.; Myklebust, T.; Nassiakou, M.; Navarria, F.; Nawrocki, K.; Nemecek, S.; Nicolaidou, R.; Nikolenko, M.; Oblakowska-Mucha, A.; Obraztsov, V.; Olshevski, A.; Onofre, A.; Orava, R.; Osterberg, K.; Ouraou, A.; Oyanguren, A.; Paganoni, M.; Paiano, S.; Palacios, J. P.; Palka, H.; Papadopoulou, Th. D.; Pape, L.; Parkes, C.; Parodi, F.; Parzefall, U.; Passeri, A.; Passon, O.; Peralta, L.; Perepelitsa, V.; Perrotta, A.; Petrolini, A.; Piedra, J.; Pieri, L.; Pierre, F.; Pimenta, M.; Piotto, E.; Podobnik, T.; Poireau, V.; Pol, M. 
E.; Polok, G.; Pozdniakov, V.; Pukhaeva, N.; Pullia, A.; Radojicic, D.; Rebecchi, P.; Rehn, J.; Reid, D.; Reinhardt, R.; Renton, P.; Richard, F.; Ridky, J.; Rivero, M.; Rodriguez, D.; Romero, A.; Ronchese, P.; Roudeau, P.; Rovelli, T.; Ruhlmann-Kleider, V.; Ryabtchikov, D.; Sadovsky, A.; Salmi, L.; Salt, J.; Sander, C.; Savoy-Navarro, A.; Schwickerath, U.; Sekulin, R.; Siebel, M.; Sisakian, A.; Smadja, G.; Smirnova, O.; Sokolov, A.; Sopczak, A.; Sosnowski, R.; Spassov, T.; Stanitzki, M.; Stocchi, A.; Strauss, J.; Stugu, B.; Szczekowski, M.; Szeptycka, M.; Szumlak, T.; Tabarelli, T.; Tegenfeldt, F.; Timmermans, J.; Tkatchev, L.; Tobin, M.; Todorovova, S.; Tome, B.; Tonazzo, A.; Tortosa, P.; Travnicek, P.; Treille, D.; Tristram, G.; Trochimczuk, M.; Troncon, C.; Turluer, M.-L.; Tyapkin, I. A.; Tyapkin, P.; Tzamarias, S.; Uvarov, V.; Valenti, G.; van Dam, P.; van Eldik, J.; van Remortel, N.; van Vulpen, I.; Vegni, G.; Veloso, F.; Venus, W.; Verdier, P.; Verzi, V.; Vilanova, D.; Vitale, L.; Vrba, V.; Wahlen, H.; Washbrook, A. J.; Weiser, C.; Wicke, D.; Wickens, J.; Wilkinson, G.; Winter, M.; Witek, M.; Yushchenko, O.; Zalewska, A.; Zalewski, P.; Zavrtanik, D.; Zhuravlov, V.; Zimin, N. I.; Zintchenko, A.; Zupan, M.; DELPHI Collaboration
2010-06-01
An analysis of the direct soft photon production rate as a function of the parent jet characteristics is presented, based on hadronic events collected by the DELPHI experiment at LEP1. The dependences of the photon rates on the jet kinematic characteristics (momentum, mass, etc.) and on the jet charged, neutral and total hadron multiplicities are reported. Up to a scale factor of about four, which characterizes the overall value of the soft photon excess, a similarity of the observed soft photon behavior to that of the inner hadronic bremsstrahlung predictions is found for the momentum, mass, and jet charged multiplicity dependences. However, for the dependence of the soft photon rate on the jet neutral and total hadron multiplicities a prominent difference is found for the observed soft photon signal as compared to the expected bremsstrahlung from final state hadrons. The observed linear increase of the soft photon production rate with the jet total hadron multiplicity and its strong dependence on the jet neutral multiplicity suggest that the rate is proportional to the number of quark pairs produced in the fragmentation process, with the neutral pairs radiating more effectively than the charged ones.
Deep Learning MR Imaging-based Attenuation Correction for PET/MR Imaging.
Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Bradshaw, Tyler; McMillan, Alan B
2018-02-01
Purpose To develop and evaluate the feasibility of deep learning approaches for magnetic resonance (MR) imaging-based attenuation correction (AC) (termed deep MRAC) in brain positron emission tomography (PET)/MR imaging. Materials and Methods A PET/MR imaging AC pipeline was built by using a deep learning approach to generate pseudo computed tomographic (CT) scans from MR images. A deep convolutional auto-encoder network was trained to identify air, bone, and soft tissue in volumetric head MR images coregistered to CT data for training. A set of 30 retrospective three-dimensional T1-weighted head images was used to train the model, which was then evaluated in 10 patients by comparing the generated pseudo CT scan to an acquired CT scan. A prospective study utilizing simultaneous PET/MR imaging was carried out in five subjects by using the proposed approach. Analysis of covariance and paired-sample t tests were used for statistical analysis to compare PET reconstruction error with deep MRAC and two existing MR imaging-based AC approaches with CT-based AC. Results Deep MRAC provides an accurate pseudo CT scan with a mean Dice coefficient of 0.971 ± 0.005 for air, 0.936 ± 0.011 for soft tissue, and 0.803 ± 0.021 for bone. Furthermore, deep MRAC provides good PET results, with average errors of less than 1% in most brain regions. Significantly lower PET reconstruction errors were realized with deep MRAC (-0.7% ± 1.1) compared with Dixon-based soft-tissue and air segmentation (-5.8% ± 3.1) and anatomic CT-based template registration (-4.8% ± 2.2). Conclusion The authors developed an automated approach that allows generation of discrete-valued pseudo CT scans (soft tissue, bone, and air) from a single high-spatial-resolution diagnostic-quality three-dimensional MR image and evaluated it in brain PET/MR imaging. This deep learning approach for MR imaging-based AC provided reduced PET reconstruction error relative to a CT-based standard within the brain compared with current MR imaging-based AC approaches. © RSNA, 2017 Online supplemental material is available for this article.
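The per-class Dice coefficients quoted above compare predicted and reference segmentation masks; a minimal sketch of the metric on hypothetical label volumes:

```python
import numpy as np

def dice(pred, ref):
    """Dice similarity 2|A intersect B| / (|A| + |B|) for boolean masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum())

# Example: per-class Dice for a pseudo-CT labeled {0: air, 1: soft tissue, 2: bone}
rng = np.random.default_rng(1)
pseudo_ct = rng.integers(0, 3, size=(16, 16, 16))
true_ct = pseudo_ct.copy()
true_ct[0] = rng.integers(0, 3, size=(16, 16))   # perturb one slice
for c, name in enumerate(["air", "soft tissue", "bone"]):
    print(name, dice(pseudo_ct == c, true_ct == c))
```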
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
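A minimal sketch of the amplitude-estimation step described above, solving the syndrome equations in the least-squares sense for a hypothesized set of impulse positions; the random frame operator and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 12, 8                        # frame length, signal length (oversampled by n/k)
F = rng.normal(size=(n, k))         # synthesis (frame) operator, x -> y = F x
# Parity-check matrix: rows span the left null space of F, so H @ F = 0
U, s, _ = np.linalg.svd(F)
H = U[:, k:].T                      # (n-k) x n syndrome operator

x = rng.normal(size=k)
e = np.zeros(n); e[[2, 7]] = [3.0, -1.5]   # impulse-noise errors
y = F @ x + e

syndrome = H @ y                    # equals H @ e, independent of the signal x
positions = [2, 7]                  # hypothesis from the localization step
amps, *_ = np.linalg.lstsq(H[:, positions], syndrome, rcond=None)
print(amps)                         # ~ [3.0, -1.5]
```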
Multi-petascale highly efficient parallel supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.
A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaflop-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC). The ASIC nodes are interconnected by a five-dimensional torus network that optimally maximizes the throughput of packet communications between nodes and minimizes latency. The network implements a collective network and a global asynchronous network that provides global barrier and notification functions. Integrated into the node design is a list-based prefetcher. The memory system implements transactional memory, thread-level speculation, and a multiversioning cache that improves the soft error rate while supporting DMA functionality, allowing for parallel message-passing.
Real-time minimal-bit-error probability decoding of convolutional codes
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1974-01-01
A recursive procedure is derived for decoding of rate R = 1/n binary convolutional codes which minimizes the probability of error of the individual decoding decisions for each information bit, subject to the constraint that the decoding delay be limited to Delta branches. This new decoding algorithm is similar to, but somewhat more complex than, the Viterbi decoding algorithm. A real-time, i.e., fixed decoding delay, version of the Viterbi algorithm is also developed and used for comparison to the new algorithm on simulated channels. It is shown that the new algorithm offers advantages over Viterbi decoding in soft-decision applications, such as in the inner coding system for concatenated coding.
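For reference alongside the fixed-delay algorithms discussed above, here is a minimal full-traceback soft-decision Viterbi sketch for the rate-1/2, constraint-length-3 code with generators (7, 5) octal, a common textbook choice and not necessarily the code used in the report:

```python
import numpy as np

G = [0b111, 0b101]                  # generators (7, 5) octal, K = 3
NSTATES = 4                         # state = last two input bits

def step(state, bit):
    reg = (bit << 2) | state        # register contents (b_t, b_{t-1}, b_{t-2})
    out = [bin(reg & g).count("1") & 1 for g in G]
    return ((state >> 1) | (bit << 1)), out

def viterbi(soft):
    """soft: (T, 2) received values, BPSK mapping bit 0 -> +1, bit 1 -> -1."""
    T = len(soft)
    metric = np.full(NSTATES, -np.inf); metric[0] = 0.0
    prev = np.zeros((T, NSTATES), int); inp = np.zeros((T, NSTATES), int)
    for t in range(T):
        new = np.full(NSTATES, -np.inf)
        for s in range(NSTATES):
            if metric[s] == -np.inf:
                continue
            for b in (0, 1):
                ns, out = step(s, b)
                # Branch metric: correlation of soft values with BPSK symbols
                m = metric[s] + sum(r * (1 - 2 * c) for r, c in zip(soft[t], out))
                if m > new[ns]:
                    new[ns], prev[t, ns], inp[t, ns] = m, s, b
        metric = new
    s = int(np.argmax(metric)); bits = []
    for t in range(T - 1, -1, -1):
        bits.append(inp[t, s]); s = prev[t, s]
    return bits[::-1]

# Example: encode, add Gaussian noise, decode
rng = np.random.default_rng(2)
msg = rng.integers(0, 2, 20); state, tx = 0, []
for b in msg:
    state, out = step(state, int(b)); tx.append([1 - 2 * c for c in out])
rx = np.array(tx, float) + rng.normal(0, 0.6, (len(msg), 2))
print((viterbi(rx) == msg).all())
```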
Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMs
NASA Astrophysics Data System (ADS)
Diehl, S. E.; Ochoa, A., Jr.; Dressendorfer, P. V.; Koga, P.; Kolasinski, W. A.
1982-12-01
Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.
Real-time optimal guidance for orbital maneuvering.
NASA Technical Reports Server (NTRS)
Cohen, A. O.; Brown, K. R.
1973-01-01
A new formulation for soft-constraint trajectory optimization is presented as a real-time optimal feedback guidance method for multiburn orbital maneuvers. Control is always chosen to minimize burn time plus a quadratic penalty for end-condition errors, weighted so that early in the mission (when controllability is greatest) terminal errors are held negligible. Eventually, as controllability diminishes, the method partially relaxes but effectively still compensates perturbations in whatever subspace remains controllable. Although the soft-constraint concept is well known in optimal control, the present formulation is novel in addressing the loss of controllability inherent in multiple-burn orbital maneuvers. Moreover, the necessary conditions usually obtained from a Bolza formulation are modified in this case so that the fully hard-constraint formulation is a numerically well-behaved subcase. As a result, convergence properties have been greatly improved.
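Schematically, the soft-constraint cost described above has the form below (illustrative notation only, not the paper's):

```latex
% Soft-constraint guidance cost: burn time plus a quadratic penalty on the
% terminal-condition error vector e, with weight W chosen large early in the
% mission so terminal errors stay negligible while controllability is greatest.
\[
  J \;=\; t_{\mathrm{burn}} \;+\; \mathbf{e}^{\mathsf{T}} W\, \mathbf{e}
\]
```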
Improved Rubin-Bodner Model for the Prediction of Soft Tissue Deformations
Zhang, Guangming; Xia, James J.; Liebschner, Michael; Zhang, Xiaoyan; Kim, Daeseung; Zhou, Xiaobo
2016-01-01
In craniomaxillofacial (CMF) surgery, a reliable way of simulating the soft tissue deformation resulting from skeletal reconstruction is vitally important for preventing the risk of postoperative facial distortion. However, it is difficult to simulate soft tissue behavior affected by different types of CMF surgery. This study presents an integrated biomechanical and statistical learning model to improve the accuracy and reliability of predictions of soft facial tissue behavior. The Rubin-Bodner (RB) model is initially used to describe the biomechanical behavior of the soft facial tissue. Subsequently, a finite element model (FEM) computes the stress at each node of the soft facial tissue mesh resulting from bone displacement. Next, the Generalized Regression Neural Network (GRNN) method is implemented to obtain the relationship between the facial soft tissue deformation and the stress distribution corresponding to different CMF surgical types, and to improve the estimation of the elastic parameters included in the RB model. The soft facial tissue deformation can therefore be predicted from the biomechanical properties together with the statistical model. Leave-one-out cross-validation is used on eleven patients. As a result, the average prediction error of our model (0.7035 mm) is lower than those resulting from other approaches. It also demonstrates that the more accurate the biomechanical information available to the model, the better the prediction performance it can achieve. PMID:27717593
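A minimal sketch of the GRNN regression step named above (a Nadaraya-Watson style kernel average over training cases; the data, feature choice, and bandwidth are invented):

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Generalized Regression Neural Network: kernel-weighted average of
    training targets, weights exp(-d^2 / (2 sigma^2)) on Euclidean distances."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Example: stress features per case in, nodal displacement out (synthetic)
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))
y = X @ [0.2, -0.1, 0.4, 0.05] + rng.normal(0, 0.01, 50)
print(grnn_predict(X, y, X[:3]))
```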
Results from the First Two Flights of the Static Computer Memory Integrity Testing Experiment
NASA Technical Reports Server (NTRS)
Hancock, Thomas M., III
1999-01-01
This paper details the scientific objectives, experiment design, data collection method, and post-flight analysis following the first two flights of the Static Computer Memory Integrity Testing (SCMIT) experiment. SCMIT is designed to detect soft-event upsets in passive magnetic memory. A soft-event upset is a change in the logic state of active or passive forms of magnetic memory, commonly referred to as a "bitflip." In its mildest form, a soft-event upset can cause software exceptions, unexpected events, spacecraft safing (ending data collection), or corrupted fault protection and error recovery capabilities. In its most severe form, loss of the mission or spacecraft can occur. Analysis after the first flight (in 1991 during STS-40) identified possible soft-event upsets in 25% of the experiment detectors. Post-flight analysis after the second flight (in 1997 on STS-87) failed to find any evidence of soft-event upsets. The SCMIT experiment is currently scheduled for a third flight in December 1999 on STS-101.
An undulator based soft x-ray source for microscopy on the Duke electron storage ring
NASA Astrophysics Data System (ADS)
Johnson, Lewis Elgin
1998-09-01
This dissertation describes the design, development, and installation of an undulator-based soft x-ray source on the Duke Free Electron Laser laboratory electron storage ring. Insertion device and soft x-ray beamline physics and technology are all discussed in detail. The Duke/NIST undulator is a 3.64-m long hybrid design constructed by the Brobeck Division of Maxwell Laboratories. Originally built for an FEL project at the National Institute of Standards and Technology, the undulator was acquired by Duke in 1992 for use as a soft x-ray source for the FEL laboratory. Initial Hall probe measurements on the magnetic field distribution of the undulator revealed field errors of more than 0.80%. Initial phase errors for the device were more than 11 degrees. Through a series of in situ and off-line measurements and modifications we have re-tuned the magnetic field structure of the device to produce strong spectral characteristics through the 5th harmonic. A low operating K has served to reduce the effects of magnetic field errors on the harmonic spectral content. Although rms field errors remained at 0.75%, we succeeded in reducing phase errors to less than 5 degrees. Using trajectory simulations from magnetic field data, we have computed the spectral output given the interaction of the Duke storage ring electron beam and the NIST undulator. Driven by a series of concerns and constraints over maximum utility, personnel safety and funding, we have also constructed a unique front-end beamline for the undulator. The front end has been designed for maximum throughput of the 1st harmonic around 40 Å in its standard mode of operation. The front end has an alternative mode of operation which transmits the 3rd and 5th harmonics. This compact system also allows for the extraction of some of the bend-magnet-produced synchrotron and transition radiation from the storage ring. As with any well designed front-end system, it also provides excellent protection to personnel and to the storage ring. A diagnostic beamline consisting of a transmission grating spectrometer and scanning wire beam profile monitor was constructed to measure the spatial and spectral characteristics of the undulator radiation. Tests of the system with a circulating electron beam have confirmed the magnetic and focusing properties of the undulator, and verified that it can be used without perturbing the orbit of the beam.
Hard-Soft Composite Carbon as a Long-Cycling and High-Rate Anode for Potassium-Ion Batteries
Jian, Zelang; Hwang, Sooyeon; Li, Zhifei; ...
2017-05-05
There exist tremendous needs for sustainable storage solutions for intermittent renewable energy sources, such as solar and wind energy. Thus, systems based on Earth-abundant elements deserve much attention. Potassium-ion batteries represent a promising candidate because of the abundance of potassium resources. As for the choices of anodes, graphite exhibits encouraging potassium-ion storage properties; however, it suffers limited rate capability and poor cycling stability. Here in this paper, nongraphitic carbons as K-ion anodes with sodium carboxymethyl cellulose as the binder are systematically investigated. Compared to hard carbon and soft carbon, a hard–soft composite carbon with 20 wt% soft carbon distributed in the matrix phase of hard carbon microspheres exhibits highly amenable performance: high capacity, high rate capability, and very stable long-term cycling. In contrast, pure hard carbon suffers limited rate capability, while the capacity of pure soft carbon fades more rapidly.
Chriqui, Jamie F; Eidson, Shelby S; Bates, Hannalori; Kowalczyk, Shelly; Chaloupka, Frank J
2008-07-01
Junk food consumption is associated with rising obesity rates in the United States. While a "junk food"-specific tax is a potential public health intervention, a majority of states already impose sales taxes on certain junk food and soft drinks. This study reviews the state sales tax variance for soft drinks and selected snack products sold through grocery stores and vending machines as of January 2007. Sales taxes vary by state, intended retail location (grocery store vs. vending machine), and product. Vended snacks and soft drinks are generally taxed at a higher rate than grocery items and other food products, indicative of a "disfavored" tax status attributed to vended items. Soft drinks, candy, and gum are taxed at higher rates than are other items examined. Similar tax schemes in other countries and the potential implications of these findings relative to the relationship between price and consumption are discussed.
Chang, Shih-Tsun; Liu, Yen-Hsiu; Lee, Jiahn-Shing; See, Lai-Chu
2015-01-01
Background: The effect of correcting static vision on sports vision is still not clear. Aim: To examine whether sports vision parameters (depth perception [DP], dynamic visual acuity [DVA], eye movement [EM], peripheral vision [PV], and momentary vision [MV]) differed among soft tennis adolescent athletes with normal vision (Group A), with refractive error corrected with eyeglasses (Group B), and with uncorrected refractive error (Group C). Setting and Design: A cross-sectional study was conducted. Soft tennis athletes aged 10–13 who had played soft tennis for 2–5 years, and who were without any ocular diseases and without visual training for the past 3 months, were recruited. Materials and Methods: DP was measured as the absolute deviation (mm) between a moving rod and a fixed rod (approaching at 25 mm/s, receding at 25 mm/s, approaching at 50 mm/s, receding at 50 mm/s) using an electric DP tester. A smaller deviation represented better DP. DVA, EM, PV, and MV were measured on a scale from 1 (worst) to 10 (best) using ATHLEVISION software. Statistical Analysis: The chi-square test and Kruskal–Wallis test were used to compare the data among the three study groups. Results: A total of 73 athletes (37 in Group A, 8 in Group B, 28 in Group C) were enrolled in this study. All four items of DP showed significant differences among the three study groups (P = 0.0051, 0.0004, 0.0095, 0.0021). PV also displayed a significant difference among the three study groups (P = 0.0044). There was no significant difference in DVA, EM, and MV among the three study groups. Conclusions: Significantly better DP and PV were seen among soft tennis adolescent athletes with normal vision than among those with refractive error, regardless of whether it was corrected with eyeglasses. On the other hand, DVA, EM, and MV were similar among the three study groups. PMID:26632127
Reliable Fusion of Stereo Matching and Depth Sensor for High Quality Dense Depth Maps
Liu, Jing; Li, Chunpeng; Fan, Xuefeng; Wang, Zhaoqi
2015-01-01
Depth estimation is a classical problem in computer vision, which typically relies on either a depth sensor or stereo matching alone. The depth sensor provides real-time estimates in repetitive and textureless regions where stereo matching is not effective. However, stereo matching can obtain more accurate results in rich texture regions and object boundaries where the depth sensor often fails. We fuse stereo matching and the depth sensor using their complementary characteristics to improve the depth estimation. Here, texture information is incorporated as a constraint to restrict the pixel's scope of potential disparities and to reduce noise in repetitive and textureless regions. Furthermore, a novel pseudo-two-layer model is used to represent the relationship between disparities in different pixels and segments. It is more robust to luminance variation by treating information obtained from a depth sensor as prior knowledge. Segmentation is viewed as a soft constraint to reduce ambiguities caused by under- or over-segmentation. Compared to the average error rate of 3.27% for the previous state-of-the-art methods, our method provides an average error rate of 2.61% on the Middlebury datasets, which shows that our method performs almost 20% better than other "fused" algorithms in terms of precision. PMID:26308003
[Correlation analysis of hearing level and soft palate movement after palatoplasty].
Lou, Qun; Ma, Xiaoran; Ma, Lian; Luo, Yi; Zhu, Hongping; Zhou, Zhibo
2015-10-01
To explore the relationship between hearing level and soft palate movement after palatoplasty, and to verify the importance of recovering soft palate movement function for improving middle ear function and reducing hearing loss. A total of 64 non-syndromic cleft palate patients were selected and lateral cephalometric radiographs were taken. The patients' hearing level was evaluated by pure tone hearing threshold examination. This study also analyzed the correlations between the post-palatoplasty hearing threshold and the soft palate elevation angle and the velopharyngeal closure rate, respectively. Kendall correlation analysis revealed that the correlation coefficient between hearing threshold and soft palate elevation angle after palatoplasty was -0.339 (r = -0.339, P < 0.01), a negative correlation: the hearing threshold decreased as the soft palate elevation angle increased. After palatoplasty, the correlation coefficient between the hearing threshold and the velopharyngeal closure rate was -0.277 (r = -0.277, P < 0.01), also a negative correlation: the hearing threshold decreased as the velopharyngeal closure rate increased. The hearing threshold was thus correlated with both the soft palate elevation angle and the velopharyngeal closure rate. The movement of the soft palate and velopharyngeal closure function after palatoplasty both affect the patient's hearing level, with the movement of the soft palate having the greater impact.
Kazaura, Kamugisha; Omae, Kazunori; Suzuki, Toshiji; Matsumoto, Mitsuji; Mutafungwa, Edward; Korhonen, Timo O; Murakami, Tadaaki; Takahashi, Koichi; Matsumoto, Hideki; Wakamori, Kazuhiko; Arimoto, Yoshinori
2006-06-12
The deterioration and deformation of a free-space optical beam wave-front as it propagates through the atmosphere can reduce link availability and may introduce burst errors, thus degrading the performance of the system. We investigate the suitability of soft-computing (SC) based tools for improving the performance of free-space optical (FSO) communications systems. The SC-based tools are used for the prediction of key parameters of an FSO communications system. Measured data collected from an experimental FSO communication system are used as training and testing data for a proposed multi-layer neural network predictor (MNNP) used to predict future parameter values. The predicted parameters are essential for reducing transmission errors by improving the antenna's accuracy in tracking data beams, which is particularly essential during periods of strong atmospheric turbulence. The parameter values predicted using the proposed tool show acceptable conformity with the original measurements.
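A minimal sketch of this kind of one-step-ahead neural predictor, using a scikit-learn MLP on lagged samples of a measured channel parameter; the synthetic series and network size are assumptions, not the authors' MNNP architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
t = np.arange(2000)
series = np.sin(2 * np.pi * t / 200) + 0.1 * rng.normal(size=t.size)  # e.g. received power

lags = 8
X = np.column_stack([series[i:i - lags] for i in range(lags)])  # past samples
y = series[lags:]                                               # next sample
split = 1500
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("test RMSE:", np.sqrt(np.mean((pred - y[split:]) ** 2)))
```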
Distributed phased array architecture study
NASA Technical Reports Server (NTRS)
Bourgeois, Brian
1987-01-01
Variations in amplifiers and phase shifters can cause degraded antenna performance, depending also on the environmental conditions and the antenna array architecture. The implementation of distributed phased array hardware was studied with the aid of the DISTAR computer program as a simulation tool; the simulation provides guidance for hardware design. Both hard and soft failures of the amplifiers in the T/R modules are modeled. Hard failures are catastrophic: no power is transmitted to the antenna elements. Noncatastrophic (soft) failures are modeled as a modified Gaussian distribution; the resulting amplitude characteristics then determine the array excitation coefficients, while the phase characteristics take on a uniform distribution. Pattern characteristics such as antenna gain, half-power beamwidth, mainbeam phase errors, sidelobe levels, and beam-pointing errors were studied as functions of amplifier and phase shifter variations. General specifications for amplifier and phase shifter tolerances in various architecture configurations for C band and S band were determined.
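A minimal Monte-Carlo sketch of the failure modeling described above (uniform linear array; soft failures as Gaussian amplitude perturbations, hard failures as zeroed elements, uniform-style phase errors); the geometry, tolerances, and failure rate are invented:

```python
import numpy as np

N, d = 32, 0.5                        # elements, spacing in wavelengths
rng = np.random.default_rng(5)
theta = np.radians(np.linspace(-90, 90, 1801))

amp = np.ones(N) + 0.1 * rng.normal(size=N)      # soft failures: gain spread
amp = np.clip(amp, 0, None)
amp[rng.random(N) < 0.05] = 0.0                  # hard failures: dead modules
phase = np.radians(5) * rng.normal(size=N)       # phase-shifter errors

k = 2 * np.pi * d * np.arange(N)
af = np.abs(np.exp(1j * (np.outer(np.sin(theta), k) + phase)) @ amp)
af_db = 20 * np.log10(af / af.max())

main = np.argmax(af)
side = np.abs(theta - theta[main]) > np.radians(5)   # exclude the mainbeam
print("pointing error (deg):", np.degrees(theta[main]))
print("peak sidelobe (dB):", af_db[side].max())
```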
NASA Astrophysics Data System (ADS)
Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng
2018-02-01
A novel method based on mid-infrared (MIR) spectroscopy, which enables the determination of chlorantraniliprole in abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS), and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross validation (RMSECV) (0.0245) and root mean squared error of prediction (RMSEP) (0.0271), as well as the highest coefficient of determination of cross-validation (Q2cv) (0.9998) and coefficient of determination of the test set (Q2test) (0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect chlorantraniliprole in abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting such a component spectral analysis.
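A minimal sketch of the RMSECV/Q2 bookkeeping used above, with a plain PLS model standing in for the wavelength-selection pipelines (BOSS, MCUVE, GA-PLS, and CARS are research codes not reproduced here); all data are synthetic:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(6)
X = rng.normal(size=(80, 200))                 # MIR spectra (samples x wavelengths)
y = X[:, 40] * 0.7 + X[:, 120] * 0.3 + rng.normal(0, 0.05, 80)  # concentration

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
q2_cv = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"RMSECV = {rmsecv:.4f}, Q2cv = {q2_cv:.4f}")
```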
Analyzing the effectiveness of a frame-level redundancy scrubbing technique for SRAM-based FPGAs
Tonfat, Jorge; Lima Kastensmidt, Fernanda; Rech, Paolo; ...
2015-12-17
Radiation effects such as soft errors are the major threat to the reliability of SRAM-based FPGAs. This work analyzes the effectiveness in correcting soft errors of a novel scrubbing technique using internal frame redundancy called Frame-level Redundancy Scrubbing (FLR-scrubbing). This correction technique can be implemented in a coarse grain TMR design. The FLR-scrubbing technique was implemented on a mid-size Xilinx Virtex-5 FPGA device used as a case study. The FLR-scrubbing technique was tested under neutron radiation and fault injection. Implementation results demonstrated minimum area and energy consumption overhead when compared to other techniques. The time to repair the fault is also improved by using the Internal Configuration Access Port (ICAP). Lastly, neutron radiation test results demonstrated that the proposed technique is suitable for correcting accumulated SEUs and MBUs.
Damage level prediction of non-reshaped berm breakwater using ANN, SVM and ANFIS models
NASA Astrophysics Data System (ADS)
Mandal, Sukomal; Rao, Subba; N., Harish; Lokesha
2012-06-01
The damage analysis of coastal structures is very important as it involves many design parameters that must be considered for a better and safer design of the structure. In the present study, experimental data for a non-reshaped berm breakwater are collected from the Marine Structures Laboratory, Department of Applied Mechanics and Hydraulics, NITK, Surathkal, India. Soft computing techniques like Artificial Neural Network (ANN), Support Vector Machine (SVM) and Adaptive Neuro-Fuzzy Inference System (ANFIS) models are constructed using experimental data sets to predict the damage level of the non-reshaped berm breakwater. The experimental data are used to train the ANN, SVM and ANFIS models, and results are determined in terms of statistical measures like mean square error, root mean square error, correlation coefficient and scatter index. The results show that soft computing techniques, i.e., ANN, SVM and ANFIS, can be efficient tools in predicting damage levels of non-reshaped berm breakwaters.
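The statistical measures listed above are straightforward to compute; a minimal sketch, taking the scatter index as RMSE normalized by the observed mean (a common convention, assumed here):

```python
import numpy as np

def damage_metrics(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    mse = np.mean((observed - predicted) ** 2)
    rmse = np.sqrt(mse)
    cc = np.corrcoef(observed, predicted)[0, 1]   # correlation coefficient
    si = rmse / observed.mean()                   # scatter index
    return {"MSE": mse, "RMSE": rmse, "CC": cc, "SI": si}

print(damage_metrics([2.1, 3.4, 1.0, 4.2], [2.0, 3.1, 1.3, 4.6]))
```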
Jia, Rui; Monk, Paul; Murray, David; Noble, J Alison; Mellon, Stephen
2017-09-06
Optoelectronic motion capture systems are widely employed to measure the movement of human joints. However, there can be a significant discrepancy between the data obtained by a motion capture system (MCS) and the actual movement of underlying bony structures, which is attributed to soft tissue artefact. In this paper, a computer-aided tracking and motion analysis with ultrasound (CAT & MAUS) system with an augmented globally optimal registration algorithm is presented to dynamically track the underlying bony structure during movement. The augmented registration part of CAT & MAUS was validated with a high system accuracy of 80%. The Euclidean distance between the marker-based bony landmark and the bony landmark tracked by CAT & MAUS was calculated to quantify the measurement error of an MCS caused by soft tissue artefact during movement. The average Euclidean distance between the target bony landmark measured by each of the CAT & MAUS system and the MCS alone varied from 8.32 mm to 16.87 mm during gait. This indicates the discrepancy between the MCS-measured bony landmark and the actual underlying bony landmark. Moreover, Procrustes analysis was applied to demonstrate that CAT & MAUS reduces the deformation of the body segment shape modeled by markers during motion. The augmented CAT & MAUS system shows its potential to dynamically detect and locate actual underlying bony landmarks, which reduces the MCS measurement error caused by soft tissue artefact during movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
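The Procrustes comparison mentioned above is available directly in SciPy; a minimal sketch on hypothetical marker coordinates:

```python
import numpy as np
from scipy.spatial import procrustes

rng = np.random.default_rng(7)
markers_t0 = rng.normal(size=(6, 3))                  # marker cluster, reference frame
markers_t1 = markers_t0 + 0.05 * rng.normal(size=(6, 3)) + [0.1, 0.0, 0.2]
# ... rigid translation plus a small soft-tissue-style distortion

# disparity: residual shape difference after optimal translation/rotation/scale
_, _, disparity = procrustes(markers_t0, markers_t1)
print("shape deformation (Procrustes disparity):", disparity)
```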
Color reproduction for advanced manufacture of soft tissue prostheses.
Xiao, Kaida; Zardawi, Faraedon; van Noort, Richard; Yates, Julian M
2013-11-01
The objectives of this study were to develop a color reproduction system using advanced manufacturing technology for accurate and automatic production of soft tissue prostheses. A manufacturing protocol was defined to effectively and consistently produce soft tissue prostheses using a 3D printing system. Within this protocol, printer color profiles were developed using a number of mathematical models for the proposed 3D color printing system, based on 240 training colors. On this basis, the color reproduction system was established, and its performance, including color reproduction accuracy, color repeatability, and color gamut, was evaluated using 14 known human skin shades. The printer color profile developed using third-order polynomial regression based on least-squares fitting provided the best model performance. The results demonstrated that, by using the proposed color reproduction system, 14 different skin colors could be reproduced and excellent color reproduction performance achieved. Evaluation of the system's color repeatability revealed a demonstrable system error, highlighting the need for regular evaluation. The color gamut for the proposed 3D printing system was simulated, and it was demonstrated that the vast majority of skin colors can be reproduced, with the exception of extremely dark or light skin shades. This study demonstrated that the proposed color reproduction system can be effectively used to reproduce a range of human skin colors for application in the advanced manufacture of soft tissue prostheses. Copyright © 2013 Elsevier Ltd. All rights reserved.
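A minimal sketch of a third-order polynomial printer profile of the kind described above, fit by least squares from device RGB to measured color coordinates; the data, dimensions, and color spaces are invented for illustration:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
rgb_train = rng.random((240, 3))                     # 240 training colors
lab_train = rgb_train @ rng.normal(size=(3, 3)) + 0.1 * rgb_train ** 2 @ rng.normal(size=(3, 3))

# Third-order polynomial regression via least squares
profile = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
profile.fit(rgb_train, lab_train)

skin_rgb = rng.random((14, 3))                       # 14 skin shades to reproduce
print(profile.predict(skin_rgb)[:2])                 # predicted color coordinates
```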
Peterman, Robert J; Jiang, Shuying; Johe, Rene; Mukherjee, Padma M
2016-12-01
Dolphin® visual treatment objective (VTO) prediction software is routinely utilized by orthodontists during the treatment planning of orthognathic cases to help predict post-surgical soft tissue changes. Although surgical soft tissue prediction is considered a vital tool, its accuracy is not well understood in two-jaw surgical procedures. The objective of this study was to quantify the accuracy of Dolphin Imaging's VTO soft tissue prediction software on class III patients treated with maxillary advancement and mandibular setback, and to validate the efficacy of the software in such complex cases. This retrospective study analyzed the records of 14 patients treated with comprehensive orthodontics in conjunction with two-jaw orthognathic surgery. Pre- and post-treatment radiographs were traced and superimposed to determine the actual skeletal movements achieved in surgery. This information was then used to simulate surgery in the software and generate a final soft tissue patient profile prediction. Prediction images were then compared to the actual post-treatment profile photos to determine differences. Dolphin Imaging's software was determined to be accurate within an error range of ±2 mm in the X-axis at most landmarks. The lower lip predictions were the most inaccurate. Clinically, the observed error suggests that the VTO may be used for demonstration and communication with a patient or consulting practitioner. However, Dolphin should not be relied on for precise treatment planning of surgical movements. This program should be used with caution to prevent unrealistic patient expectations and dissatisfaction.
Proton irradiation effects on advanced digital and microwave III-V components
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hash, G.L.; Schwank, J.R.; Shaneyfelt, M.R.
1994-09-01
A wide range of advanced III-V components suitable for use in high-speed satellite communication systems were evaluated for displacement damage and single-event effects in high-energy, high-fluence proton environments. Transistors and integrated circuits (both digital and MMIC) were irradiated with protons at energies from 41 to 197 MeV and at fluences from 10^10 to 2×10^14 protons/cm^2. Large soft-error rates were measured for digital GaAs MESFET (3×10^-5 errors/bit-day) and heterojunction bipolar circuits (10^-5 errors/bit-day). No transient signals were detected from MMIC circuits. The largest degradation in transistor response caused by displacement damage was observed for 1.0-µm depletion- and enhancement-mode MESFET transistors. Shorter gate length MESFET transistors and HEMT transistors exhibited less displacement-induced damage. These results show that memory-intensive GaAs digital circuits may cause significant system degradation due to single-event upset in natural and man-made space environments. However, displacement damage effects should not be a limiting factor for fluence levels up to 10^14 protons/cm^2 [equivalent to total doses in excess of 10 Mrad(GaAs)].
Spilker, R L; de Almeida, E S; Donzelli, P S
1992-01-01
This chapter addresses computationally demanding numerical formulations in the biomechanics of soft tissues. The theory of mixtures can be used to represent soft hydrated tissues in the human musculoskeletal system as a two-phase continuum consisting of an incompressible solid phase (collagen and proteoglycan) and an incompressible fluid phase (interstitial water). We first consider the finite deformation of soft hydrated tissues in which the solid phase is represented as hyperelastic. A finite element formulation of the governing nonlinear biphasic equations is presented based on a mixed-penalty approach and derived using the weighted residual method. Fluid and solid phase deformation, velocity, and pressure are interpolated within each element, and the pressure variables within each element are eliminated at the element level. A system of nonlinear, first-order differential equations in the fluid and solid phase deformation and velocity is obtained. In order to solve these equations, the contributions of the hyperelastic solid phase are incrementally linearized, a finite difference rule is introduced for temporal discretization, and an iterative scheme is adopted to achieve equilibrium at the end of each time increment. We demonstrate the accuracy and adequacy of the procedure using a six-node, isoparametric axisymmetric element, and we present an example problem for which independent numerical solution is available. Next, we present an automated, adaptive environment for the simulation of soft tissue continua in which the finite element analysis is coupled with automatic mesh generation, error indicators, and projection methods. Mesh generation and updating, including both refinement and coarsening, for the two-dimensional examples examined in this study are performed using the finite quadtree approach. The adaptive analysis is based on an error indicator which is the L2 norm of the difference between the finite element solution and a projected finite element solution. Total stress, calculated as the sum of the solid and fluid phase stresses, is used in the error indicator. To allow the finite difference algorithm to proceed in time using an updated mesh, solution values must be transferred to the new nodal locations. This rezoning is accomplished using a projected field for the primary variables. The accuracy and effectiveness of this adaptive finite element analysis is demonstrated using a linear, two-dimensional, axisymmetric problem corresponding to the indentation of a thin sheet of soft tissue. The method is shown to effectively capture the steep gradients and to produce solutions in good agreement with independent, converged, numerical solutions.
Examining the Angular Resolution of the Astro-H's Soft X-Ray Telescopes
NASA Technical Reports Server (NTRS)
Sato, Toshiki; Iizuka, Ryo; Ishida, Manabu; Kikuchi, Naomichi; Maeda, Yoshitomo; Kurashima, Sho; Nakaniwa, Nozomi; Tomikawa, Kazuki; Hayashi, Takayuki; Mori, Hideyuki;
2016-01-01
The international x-ray observatory ASTRO-H was renamed Hitomi after launch. It covers a wide energy range from a few hundred eV to 600 keV. It is equipped with two soft x-ray telescopes (SXTs: SXT-I and SXT-S) for imaging the soft x-ray sky up to 12 keV, which focus an image onto the respective focal-plane detectors: a CCD camera (SXI) and a calorimeter (SXS). The SXTs are fabricated in quadrant units. The angular resolution in half-power diameter (HPD) of each quadrant of the SXTs ranges between 1.1 and 1.4 arc min at 4.51 keV. It was also found that one quadrant has an energy dependence of the HPD. We examine the angular resolution with spot scan measurements. In order to understand the cause of the imaging capability deterioration and to feed the results back into future telescope development, we carried out spot scan measurements, in which we illuminated the entire aperture of each quadrant with a square beam 8 mm on a side. Based on the scan results, we made maps of image blurring and focus position. The former and the latter reflect figure error and positioning error, respectively, of the foils that are within the incident 8 mm × 8 mm beam. As a result, we estimated those errors in a quadrant to be approx. 0.9 to 1.0 and approx. 0.6 to 0.9 arc min, respectively. We found that the larger the positioning error in a quadrant is, the larger its HPD is. The HPD map, which manifests the local image blurring, is very similar from quadrant to quadrant, but the map of the focus position differs from location to location in each telescope.
Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya
2003-01-01
The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error. This was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamic analysis, engine cycle analysis, propulsion data interpolation, mission performance, airfield length for landing and takeoff, noise footprint, and others.
NASA Astrophysics Data System (ADS)
Ackermann, M.; Ajello, M.; Allafort, A.; Atwood, W. B.; Baldini, L.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.; Bhat, P. N.; Blandford, R. D.; Bonamente, E.; Borgland, A. W.; Bregeon, J.; Briggs, M. S.; Brigida, M.; Bruel, P.; Buehler, R.; Burgess, J. M.; Buson, S.; Caliandro, G. A.; Cameron, R. A.; Casandjian, J. M.; Cecchi, C.; Charles, E.; Chekhtman, A.; Chiang, J.; Ciprini, S.; Claus, R.; Cohen-Tanugi, J.; Connaughton, V.; Conrad, J.; Cutini, S.; Dennis, B. R.; de Palma, F.; Dermer, C. D.; Digel, S. W.; Silva, E. do Couto e.; Drell, P. S.; Drlica-Wagner, A.; Dubois, R.; Favuzzi, C.; Fegan, S. J.; Ferrara, E. C.; Fortin, P.; Fukazawa, Y.; Fusco, P.; Gargano, F.; Germani, S.; Giglietto, N.; Giordano, F.; Giroletti, M.; Glanzman, T.; Godfrey, G.; Grillo, L.; Grove, J. E.; Gruber, D.; Guiriec, S.; Hadasch, D.; Hayashida, M.; Hays, E.; Horan, D.; Iafrate, G.; Jóhannesson, G.; Johnson, A. S.; Johnson, W. N.; Kamae, T.; Kippen, R. M.; Knödlseder, J.; Kuss, M.; Lande, J.; Latronico, L.; Longo, F.; Loparco, F.; Lott, B.; Lovellette, M. N.; Lubrano, P.; Mazziotta, M. N.; McEnery, J. E.; Meegan, C.; Mehault, J.; Michelson, P. F.; Mitthumsiri, W.; Monte, C.; Monzani, M. E.; Morselli, A.; Moskalenko, I. V.; Murgia, S.; Murphy, R.; Naumann-Godo, M.; Nuss, E.; Nymark, T.; Ohno, M.; Ohsugi, T.; Okumura, A.; Omodei, N.; Orlando, E.; Paciesas, W. S.; Panetta, J. H.; Parent, D.; Pesce-Rollins, M.; Petrosian, V.; Pierbattista, M.; Piron, F.; Pivato, G.; Poon, H.; Porter, T. A.; Preece, R.; Rainò, S.; Rando, R.; Razzano, M.; Razzaque, S.; Reimer, A.; Reimer, O.; Ritz, S.; Sbarra, C.; Schwartz, R. A.; Sgrò, C.; Share, G. H.; Siskind, E. J.; Spinelli, P.; Takahashi, H.; Tanaka, T.; Tanaka, Y.; Thayer, J. B.; Tibaldo, L.; Tinivella, M.; Tolbert, A. K.; Tosti, G.; Troja, E.; Uchiyama, Y.; Usher, T. L.; Vandenbroucke, J.; Vasileiou, V.; Vianello, G.; Vitale, V.; von Kienlin, A.; Waite, A. P.; Wilson-Hodge, C.; Wood, D. L.; Wood, K. S.; Yang, Z.
2012-04-01
Due to an error at the publisher, the times given for the major tick marks in the X-axis in Figure 1 of the published article are incorrect. The correctly labeled times should be "00:52:00," "00:54:00," ... , and "01:04:00." The correct version of Figure 1 and its caption is shown below. IOP Publishing sincerely regrets this error.
Investigating the impact of spatial priors on the performance of model-based IVUS elastography
Richards, M S; Doyley, M M
2012-01-01
This paper describes methods that provide pre-requisite information for computing circumferential stress in modulus elastograms recovered from vascular tissue—information that could help cardiologists detect life-threatening plaques and predict their propensity to rupture. The modulus recovery process is an ill-posed problem; therefore additional information is needed to provide useful elastograms. In this work, prior geometrical information was used to impose hard or soft constraints on the reconstruction process. We conducted simulation and phantom studies to evaluate and compare modulus elastograms computed with soft and hard constraints versus those computed without any prior information. The results revealed that (1) the contrast-to-noise ratio of modulus elastograms achieved using the soft prior and hard prior reconstruction methods exceeded that of those computed without any prior information; (2) the soft prior and hard prior reconstruction methods could tolerate up to 8% measurement noise; and (3) the performance of soft and hard prior modulus elastograms degraded when incomplete spatial priors were employed. This work demonstrates that including spatial priors in the reconstruction process should improve the performance of model-based elastography, and the soft prior approach should enhance the robustness of the reconstruction process to errors in the geometrical information. PMID:22037648
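A minimal sketch of the difference between soft and hard spatial priors in a generic linear inverse step (a Tikhonov-style toy problem, not the authors' nonlinear elastography solver; every quantity is illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
A = rng.normal(size=(40, 20))                 # forward/sensitivity operator
mu_true = np.ones(20); mu_true[8:14] = 3.0    # stiff inclusion
b = A @ mu_true + 0.05 * rng.normal(size=40)

mu_prior = np.ones(20); mu_prior[8:14] = 3.0  # geometry known from imaging

# Soft prior: penalize deviation from the prior; lam controls how much it is trusted
lam = 1.0
mu_soft = np.linalg.solve(A.T @ A + lam * np.eye(20),
                          A.T @ b + lam * mu_prior)

# Hard prior: force one modulus value per region (background / inclusion)
regions = np.zeros(20, int); regions[8:14] = 1
B = np.eye(2)[regions]                        # 20 x 2 region indicator matrix
theta, *_ = np.linalg.lstsq(A @ B, b, rcond=None)
mu_hard = B @ theta
print(mu_soft[:4], mu_hard[:4])
```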
Angular photogrammetric analysis of the soft tissue profile in 12-year-old southern Chinese.
Leung, Cindi Sy; Yang, Yanqi; Wong, Ricky Wk; Hägg, Urban; Lo, John; McGrath, Colman
2014-12-24
To quantify average angular measurements that define the soft tissue profiles of 12-year-old southern Chinese children and to determine gender differences. A random population sample of 514 12-year-old children was recruited (about 10% of a Hong Kong Chinese birth cohort). Photographs were taken in natural head posture, and 12 soft tissue landmarks were located on the photos to measure 12 angular measurements using ImageJ (V1.45s) for Windows. Approximately 10% of photographs were reanalyzed and the method error was calculated. Angular norm values for the 12 parameters were determined, and gender differences were assessed using a two-sample t-test with a 95% confidence interval. The response rate was 54.1% (278/514). Norm values for the 12 angular measurements were generated. The greatest variability was found for the nasolabial (Cm-Sn-Ls) and labiomental (Li-Sm-Pg) angles. Gender differences were found in 4 angular parameters: vertical nasal angle (N-Prn/TV) (p < 0.05), cervicomental angle (G-Pg/C-Me) (p < 0.001), facial convexity angle (G-Sn-Pg) (p < 0.01), and total facial convexity angle (G-Prn-Pg) (p < 0.01). Norm values for 12 angular measurements among 12-year-old southern Chinese children were provided and some variability noted. Gender differences were apparent in several angular measurements. This study has implications in developing norm values for southern Chinese and for comparison with other ethnic groups.
The Forced Soft Spring Equation
ERIC Educational Resources Information Center
Fay, T. H.
2006-01-01
Through numerical investigations, this paper studies examples of the forced Duffing-type spring equation with ε negative. By performing trial-and-error numerical experiments, the existence is demonstrated of stability boundaries in the phase plane indicating initial conditions yielding bounded solutions. Subharmonic boundaries are…
Development of Biological Acoustic Impedance Microscope and its Error Estimation
NASA Astrophysics Data System (ADS)
Hozumi, Naohiro; Nakano, Aiko; Terauchi, Satoshi; Nagao, Masayuki; Yoshida, Sachiko; Kobayashi, Kazuto; Yamamoto, Seiji; Saijo, Yoshifumi
This report deals with a scanning acoustic microscope for imaging the cross-sectional acoustic impedance of biological soft tissues. A focused acoustic beam was transmitted to the tissue object mounted on the "rear surface" of a plastic substrate. A cerebellum tissue of rat and a reference material were observed at the same time under the same conditions. As the incidence is not vertical, not only a longitudinal wave but also a transversal wave is generated in the substrate. The error in acoustic impedance under the assumption of vertical incidence was estimated. It was shown that the error can be precisely compensated if the beam pattern and the acoustic parameters of the coupling medium and substrate are known.
Microcircuit radiation effects databank
NASA Technical Reports Server (NTRS)
1983-01-01
This databank is a collation of radiation test data submitted by many testers and serves as a reference for engineers who are concerned with, and have some knowledge of, the effects of the natural radiation environment on microcircuits. It contains radiation sensitivity results from ground tests and is divided into two sections. Section A lists total dose damage information, and section B lists single event upset cross sections, i.e., the probability of a soft error (bit flip) or of a hard error (latchup).
The effect of multifocal soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen A
2013-07-01
To compare changes in peripheral refraction with single-vision (SV) and multifocal (MF) correction of distance central refraction with commercially available SV and MF soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere and Proclear Multifocal SCLs to correct their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with the two different types of SCL correction. At baseline, refraction was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared with center at 30 and 35 degrees in the temporal visual field (VF) in low myopes, and at 30 and 35 degrees in the temporal VF, and 10, 30, and 35 degrees in the nasal VF in moderate myopes. Single-vision and MF distance correction with Proclear Sphere and Proclear Multifocal SCLs, respectively, caused a hyperopic shift in refraction at all locations in the horizontal VF. Compared with SV correction, MF SCL correction caused a significant relative myopic shift at all locations in the nasal VF in both low and moderate myopes and also at 35 degrees in the temporal VF in moderate myopes. Correction of central refractive error with SV and MF SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. Single-vision SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction to experience an absolute hyperopic defocus. Multifocal SCL correction resulted in a relative myopic shift in peripheral refraction compared with SV SCL correction. This myopic shift may explain recent reports of reduced myopia progression rates with MF SCL correction.
NASA Astrophysics Data System (ADS)
Wang, Lunche; Kisi, Ozgur; Zounemat-Kermani, Mohammad; Li, Hui
2017-01-01
Pan evaporation (Ep) plays an important role in agricultural water resources management. One of the basic challenges is modeling Ep using limited climatic parameters, because a number of factors affect the evaporation rate. This study investigated the abilities of six different soft computing methods, namely multi-layer perceptron (MLP), generalized regression neural network (GRNN), fuzzy genetic (FG), least square support vector machine (LSSVM), multivariate adaptive regression spline (MARS), and adaptive neuro-fuzzy inference systems with grid partition (ANFIS-GP), and of two regression methods, multiple linear regression (MLR) and the Stephens and Stewart model (SS), in predicting monthly Ep. Long-term climatic data at various sites spanning a wide range of climates during 1961-2000 are used for model development and validation. The results showed that the models have different accuracies in different climates; the MLP model performed better than the other models in predicting monthly Ep at most stations using local input combinations (for example, the mean absolute error (MAE), root mean square error (RMSE), and determination coefficient (R2) are 0.314 mm/day, 0.405 mm/day and 0.988, respectively, for the HEB station), while the GRNN model performed better on the Tibetan Plateau (MAE, RMSE and R2 of 0.459 mm/day, 0.592 mm/day and 0.932, respectively). The accuracies of the above models ranked as: MLP, GRNN, LSSVM, FG, ANFIS-GP, MARS and MLR. The overall results indicated that the soft computing techniques generally performed better than the regression methods, but the MLR and SS models may be preferred in some climatic zones instead of complex nonlinear models, for example at the BJ (Beijing), CQ (Chongqing) and HK (Haikou) stations. Therefore, it can be concluded that Ep could be successfully predicted using the above models in hydrological modeling studies.
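The ranking above rests on the three reported skill metrics. A minimal sketch of how MAE, RMSE and R2 are typically computed from observed and predicted monthly Ep series (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def evaluation_metrics(observed, predicted):
    """MAE, RMSE and determination coefficient (R2) for a pair of series."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    error = predicted - observed
    mae = np.mean(np.abs(error))                          # mean absolute error
    rmse = np.sqrt(np.mean(error ** 2))                   # root mean square error
    ss_res = np.sum(error ** 2)                           # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2
```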
Patient-specific polyetheretherketone facial implants in a computer-aided planning workflow.
Guevara-Rojas, Godoberto; Figl, Michael; Schicho, Kurt; Seemann, Rudolf; Traxler, Hannes; Vacariu, Apostolos; Carbon, Claus-Christian; Ewers, Rolf; Watzinger, Franz
2014-09-01
In the present study, we report an innovative workflow using polyetheretherketone (PEEK) patient-specific implants for esthetic corrections in the facial region through onlay grafting. The planning includes implant design according to virtual osteotomy and generation of a subtraction volume. The implant design was refined by stepwise changing the implant geometry according to soft tissue simulations. One patient was scanned using computed tomography. PEEK implants were interactively designed and manufactured using rapid prototyping techniques. Positioning intraoperatively was assisted by computer-aided navigation. Two months after surgery, a 3-dimensional surface model of the patient's face was generated using photogrammetry. Finally, the Hausdorff distance calculation was used to quantify the overall error, encompassing the failures in soft tissue simulation and implantation. The implant positioning process during surgery was satisfactory. The simulated soft tissue surface and the photogrammetry scan of the patient showed a high correspondence, especially where the skin covered the implants. The mean total error (Hausdorff distance) was 0.81 ± 1.00 mm (median 0.48, interquartile range 1.11). The spatial deviation remained less than 0.7 mm for the vast majority of points. The proposed workflow provides a complete computer-aided design, computer-aided manufacturing, and computer-aided surgery chain for implant design, allowing for soft tissue simulation, fabrication of patient-specific implants, and image-guided surgery to position the implants. Much of the surgical complexity resulting from osteotomies of the zygoma, chin, or mandibular angle might be transferred into the planning phase of patient-specific implants. Copyright © 2014 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
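The overall error metric used above is the Hausdorff distance between the simulated soft tissue surface and the photogrammetry scan. A minimal sketch of a symmetric point-cloud version, assuming both surfaces are sampled as N x 3 coordinate arrays (a SciPy-based illustration, not the authors' implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_errors(points_a, points_b):
    """Symmetric Hausdorff distance and mean point-to-surface error
    between two 3-D point clouds given as (N, 3) arrays."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # nearest-neighbour distances A -> B
    d_ba, _ = cKDTree(points_a).query(points_b)  # nearest-neighbour distances B -> A
    hausdorff = max(d_ab.max(), d_ba.max())      # worst-case deviation
    mean_error = np.concatenate([d_ab, d_ba]).mean()
    return hausdorff, mean_error
```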
Demehri, S; Muhit, A; Zbijewski, W; Stayman, J W; Yorkston, J; Packard, N; Senn, R; Yang, D; Foos, D; Thawait, G K; Fayad, L M; Chhabra, A; Carrino, J A; Siewerdsen, J H
2015-06-01
To assess visualization tasks using cone-beam CT (CBCT) compared to multi-detector CT (MDCT) for musculoskeletal extremity imaging. Ten cadaveric hands and ten knees were examined using a dedicated CBCT prototype and a clinical MDCT scanner with nominal protocols (80 kVp / 108 mAs for CBCT; 120 kVp / 300 mAs for MDCT). Soft tissue and bone visualization tasks were assessed by four radiologists using five-point satisfaction (for CBCT and MDCT individually) and five-point preference (side-by-side CBCT versus MDCT image quality comparison) rating tests. Ratings were analyzed using Kruskal-Wallis and Wilcoxon signed-rank tests, and observer agreement was assessed using the kappa statistic. Knee CBCT images were rated "excellent" or "good" (median scores 5 and 4) for "bone" and "soft tissue" visualization tasks. Hand CBCT images were rated "excellent" or "adequate" (median scores 5 and 3) for "bone" and "soft tissue" visualization tasks. Preference tests rated CBCT equivalent or superior to MDCT for bone visualization and favoured MDCT for soft tissue visualization tasks. Intraobserver agreement for CBCT satisfaction tests was fair to almost perfect (κ ~ 0.26-0.92), and interobserver agreement was fair to moderate (κ ~ 0.27-0.54). CBCT provided excellent image quality for bone visualization and adequate image quality for soft tissue visualization tasks. • CBCT provided adequate image quality for diagnostic tasks in extremity imaging. • CBCT images were "excellent" for "bone" and "good/adequate" for "soft tissue" visualization tasks. • CBCT image quality was equivalent/superior to MDCT for bone visualization tasks.
NASA Astrophysics Data System (ADS)
Mehdizadeh, Saeid; Behmanesh, Javad; Khalili, Keivan
2017-11-01
Precipitation plays an important role in determining the climate of a region. Precise estimation of precipitation is required to manage and plan water resources, as well as other related applications such as hydrology, climatology, meteorology and agriculture. Time series of hydrologic variables such as precipitation are composed of deterministic and stochastic parts. Despite this fact, the stochastic part of the precipitation data is not usually considered in modeling of precipitation process. As an innovation, the present study introduces three new hybrid models by integrating soft computing methods including multivariate adaptive regression splines (MARS), Bayesian networks (BN) and gene expression programming (GEP) with a time series model, namely generalized autoregressive conditional heteroscedasticity (GARCH) for modeling of the monthly precipitation. For this purpose, the deterministic (obtained by soft computing methods) and stochastic (obtained by GARCH time series model) parts are combined with each other. To carry out this research, monthly precipitation data of Babolsar, Bandar Anzali, Gorgan, Ramsar, Tehran and Urmia stations with different climates in Iran were used during the period of 1965-2014. Root mean square error (RMSE), relative root mean square error (RRMSE), mean absolute error (MAE) and determination coefficient (R2) were employed to evaluate the performance of conventional/single MARS, BN and GEP, as well as the proposed MARS-GARCH, BN-GARCH and GEP-GARCH hybrid models. It was found that the proposed novel models are more precise than single MARS, BN and GEP models. Overall, MARS-GARCH and BN-GARCH models yielded better accuracy than GEP-GARCH. The results of the present study confirmed the suitability of proposed methodology for precise modeling of precipitation.
Analytical functions to predict cosmic-ray neutron spectra in the atmosphere.
Sato, Tatsuhiko; Niita, Koji
2006-09-01
Estimation of cosmic-ray neutron spectra in the atmosphere has been an essential issue in the evaluation of aircrew doses and the soft-error rates of semiconductor devices. We therefore performed Monte Carlo simulations for estimating neutron spectra using the PHITS code with the JENDL High-Energy nuclear data library. Excellent agreement was observed between the calculated and measured spectra over a wide altitude range, even at ground level. Based on a comprehensive analysis of the simulation results, we propose analytical functions that can predict the cosmic-ray neutron spectra for any location in the atmosphere at altitudes below 20 km, considering the influences of local geometries such as the ground and aircraft on the spectra. The accuracy of the analytical functions was well verified by various experimental data.
Haptic communication between humans is tuned by the hard or soft mechanics of interaction
Usai, Francesco; Ganesh, Gowrishankar; Sanguineti, Vittorio; Burdet, Etienne
2018-01-01
To move a hard table together, humans may coordinate by following the dominant partner’s motion [1–4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner’s muscular effort. This suggests that the worse partner followed the skilled one’s lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort. PMID:29565966
Gehrke, Peter; Lobert, Markus; Dhom, Günter
2008-01-01
The pink esthetic score (PES) evaluates the esthetic outcome of soft tissue around implant-supported single crowns in the anterior zone by awarding points for seven variables: the mesial papilla, distal papilla, soft-tissue level, soft-tissue contour, soft-tissue color, soft-tissue texture, and alveolar process deficiency. The aim of this study was to measure the reproducibility of the PES and assess the influence exerted by the examiner's degree of dental specialization. Fifteen examiners (three general dentists, three oral maxillofacial surgeons, three orthodontists, three postgraduate students in implant dentistry, and three lay people) applied the PES to 30 implant-supported single restorations twice at an interval of 4 weeks. Using a 0-1-2 scoring system, 0 being the lowest and 2 the highest value, the maximum achievable PES was 14. At the second assessment, the photographs were scored in reverse order. Differences between the two assessments were evaluated with Spearman's rank correlation coefficient (R). The Wilcoxon signed-rank test was used for comparisons of differences between the ratings. A significance level of p < 0.05 was chosen for both tests. Observer results indicated that the agreement between the first and second rating for all occupational groups was 70.5%, with a broad correlation between the two ratings and high statistical significance (Spearman's R = 0.58, p = 0; Wilcoxon T = 163,182, Z = 3.383599, p = 0.000716). The highest agreement between the first and second rating was obtained by orthodontists with 73.5% (R = 0.67), and the lowest by lay people with 65.9% (R = 0.50). Very poor and very esthetic restorations showed the smallest deviations. Orthodontists were found to have assigned significantly poorer ratings than any other group. The assessments of postgraduate students and laypersons were the most favorable. The PES allows for a more objective appraisal of the esthetic short- and long-term results of various surgical and prosthetic implant procedures. It reproducibly evaluates the peri-implant soft tissue around single-implant restorations and results in good intra-examiner agreement. However, an effect of observer specialization on rating soft-tissue esthetics can be shown.
Lu, Min-Hua; Mao, Rui; Lu, Yin; Liu, Zheng; Wang, Tian-Fu; Chen, Si-Ping
2012-01-01
Indentation testing is a widely used approach to quantitatively evaluate the mechanical characteristics of soft tissues. Young's modulus of soft tissue can be calculated from force-deformation data with known tissue thickness and Poisson's ratio using Hayes' equation. Our group previously developed a noncontact indentation system using a water jet as a soft indenter as well as the coupling medium for the propagation of high-frequency ultrasound. The novel system has shown its ability to detect the early degeneration of articular cartilage. However, there is still a lack of a quantitative method to extract the intrinsic mechanical properties of soft tissue from water jet indentation. The purpose of this study is to investigate the relationship between the loading-unloading curves and the mechanical properties of soft tissues to provide an imaging technique for tissue mechanical properties. A 3D finite element model of water jet indentation was developed with consideration of the finite deformation effect. An improved Hayes' equation has been derived by introducing a new scaling factor which is dependent on the Poisson's ratio v, the aspect ratio a/h (the radius of the indenter/the thickness of the test tissue), and the deformation ratio d/h. With this model, the Young's modulus of soft tissue can be quantitatively evaluated and imaged with an error of no more than 2%. PMID:22927890
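For context, the classical Hayes indentation solution referenced above expresses Young's modulus as E = F(1 - v^2) / (2 a kappa w), where the scaling factor kappa is tabulated as a function of the aspect ratio a/h and Poisson's ratio v (and, in the paper's improved equation, also of the deformation ratio d/h). A minimal sketch, treating kappa as a supplied input rather than reproducing the paper's derivation:

```python
def youngs_modulus_hayes(force, depth, radius, poisson, kappa):
    """Hayes' indentation solution E = F (1 - v^2) / (2 a kappa w).

    force   -- indentation force F (N)
    depth   -- indentation depth w (m)
    radius  -- indenter radius a (m)
    poisson -- Poisson's ratio v of the tissue
    kappa   -- scaling factor kappa(a/h, v), or the improved
               kappa(a/h, v, d/h) from the study above
    """
    return force * (1.0 - poisson ** 2) / (2.0 * radius * kappa * depth)
```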
Novel intelligent real-time position tracking system using FPGA and fuzzy logic.
Soares dos Santos, Marco P; Ferreira, J A F
2014-03-01
The main aim of this paper is to test if FPGAs are able to achieve better position tracking performance than software-based soft real-time platforms. For comparison purposes, the same controller design was implemented in these architectures. A Multi-state Fuzzy Logic controller (FLC) was implemented both in a Xilinx® Virtex-II FPGA (XC2v1000) and in a soft real-time platform NI CompactRIO®-9002. The same sampling time was used. The comparative tests were conducted using a servo-pneumatic actuation system. Steady-state errors lower than 4 μm were reached for an arbitrary vertical positioning of a 6.2 kg mass when the controller was embedded into the FPGA platform. Performance gains up to 16 times in the steady-state error, up to 27 times in the overshoot and up to 19.5 times in the settling time were achieved by using the FPGA-based controller over the software-based FLC controller. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
NASA Astrophysics Data System (ADS)
Álvarez, A.; Orfila, A.; Tintoré, J.
2004-03-01
Satellites are the only systems able to provide continuous information on the spatiotemporal variability of vast areas of the ocean. Relatively long-term time series of satellite data are nowadays available. These spatiotemporal time series of satellite observations can be employed to build empirical models, called satellite-based ocean forecasting (SOFT) systems, to forecast certain aspects of future ocean states. SOFT systems can predict satellite-observed fields at different timescales. The forecast skill of SOFT systems forecasting the sea surface temperature (SST) at monthly timescales has been extensively explored in previous works. In this work we study the performance of two SOFT systems forecasting, respectively, the SST and sea level anomaly (SLA) at weekly timescales, that is, providing forecasts of the weekly averaged SST and SLA fields one week in advance. The SOFT systems were implemented in the Ligurian Sea (Western Mediterranean Sea). Predictions from the SOFT systems are compared with observations and with the predictions obtained from persistence models. Results indicate that the SOFT system forecasting the SST field is always superior to persistence in terms of predictability. Minimum prediction errors in the SST are obtained during the winter and spring seasons. On the other hand, the biggest differences between the performance of the SOFT and persistence models are found during summer and autumn. These changes in predictability are explained on the basis of the particular variability of the SST field in the Ligurian Sea. Concerning the SLA field, no improvement with respect to persistence was found for the SOFT system forecasting the SLA field.
Discovery of a Transient Magnetar: XTE J1810-197
NASA Technical Reports Server (NTRS)
Ibrahim, Alaa I.; Markwardt, Craig B.; Swank, Jean H.; Ransom, Scott; Roberts, Mallory; Kaspi, Victoria; Woods, Peter M.; Safi-Harb, Samar; Balman, Solen; Parke, William C.
2004-01-01
We report the discovery of a new X-ray pulsar, XTE J1810-197, that was serendipitously discovered on 2003 July 15 by the Rossi X-Ray Timing Explorer (RXTE) while observing the soft gamma repeater SGR 1806-20. The pulsar has a 5.54 s spin period, a soft X-ray spectrum (with a photon index of ≈4), and is detectable in earlier RXTE observations back to 2003 January but not before. These show that a transient outburst began between 2002 November 17 and 2003 January 23 and that the source's persistent X-ray flux has been declining since then. The pulsar exhibits a high spin-down rate Ṗ ≈ 10⁻¹¹ s/s with no evidence of Doppler shifts due to a binary companion. The rapid spin-down rate and slow spin period imply a supercritical characteristic magnetic field B ≈ 3 × 10¹⁴ G and a young age τ ≤ 7600 yr. Follow-up Chandra observations provided an accurate position of the source. Within its error radius, the 1.5 m Russian-Turkish Optical Telescope found a limiting magnitude R_c = 21.5. All such properties are strikingly similar to those of anomalous X-ray pulsars and soft gamma repeaters, providing strong evidence that the source is a new magnetar. However, archival ASCA and ROSAT observations found the source nearly 2 orders of magnitude fainter. This transient behavior and the observed long-term flux variability of the source in the absence of SGR-like burst activity make it the first confirmed transient magnetar and suggest that other neutron stars that share the properties of XTE J1810-197 during its inactive phase may be unidentified transient magnetars awaiting detection via similar activity. This implies a larger population of magnetars than previously surmised and a possible evolutionary connection between magnetars and other neutron star families. Subject headings: pulsars: general - pulsars: individual (XTE J1810-197) - stars: magnetic fields
Latest trends in parts SEP susceptibility from heavy ions
NASA Technical Reports Server (NTRS)
Nichols, Donald K.; Smith, L. S.; Soli, George A.; Koga, R.; Kolasinski, W. A.
1989-01-01
JPL and Aerospace have collected a third set of heavy-ion single-event phenomena (SEP) test data since their last joint IEEE publications in December 1985 and December 1987. Trends in SEP susceptibility (e.g., soft errors and latchup) for state-of-the-art parts are presented. Results of the study indicate that hard technologies and unacceptably soft technologies can be flagged. In some instances, specific tested parts can be taken as candidates for key microprocessors or memories. As always with radiation test data, specific test data for qualified flight parts is recommended for critical applications.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes, multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
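To make the distinction concrete, here is a brute-force sketch of bitwise MAP decoding for a small block code over an AWGN channel; unlike MLD it decides each bit from its posterior probability and also emits per-bit log-likelihood ratios as soft output. This toy enumeration is illustrative only (practical decoders use trellis-based algorithms such as BCJR rather than listing all codewords):

```python
import numpy as np

def map_bit_decode(received, codewords, sigma):
    """Bitwise MAP decoding over AWGN with BPSK mapping (0 -> +1, 1 -> -1).

    received  -- length-n array of channel outputs
    codewords -- (M, n) array of all codewords of the code
    sigma     -- noise standard deviation
    Returns hard bit decisions minimizing bit error probability, plus LLRs.
    """
    codewords = np.asarray(codewords)
    symbols = 1.0 - 2.0 * codewords                        # BPSK mapping
    # channel log-likelihood of each codeword (equiprobable signaling)
    loglik = -np.sum((received - symbols) ** 2, axis=1) / (2.0 * sigma ** 2)
    w = np.exp(loglik - loglik.max())[:, None]             # posterior weight per codeword
    p0 = (w * (codewords == 0)).sum(axis=0)                # mass on bit = 0, per position
    p1 = (w * (codewords == 1)).sum(axis=0)                # mass on bit = 1, per position
    llr = np.log((p0 + 1e-300) / (p1 + 1e-300))            # soft output
    return (llr < 0).astype(int), llr

# e.g. the length-3 repetition code:
bits, llr = map_bit_decode(np.array([0.9, -0.2, 0.4]),
                           np.array([[0, 0, 0], [1, 1, 1]]), sigma=0.5)
```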
A Randomized Trial of Soft Multifocal Contact Lenses for Myopia Control: Baseline Data and Methods.
Walline, Jeffrey J; Gaume Giannoni, Amber; Sinnott, Loraine T; Chandler, Moriah A; Huang, Juan; Mutti, Donald O; Jones-Jordan, Lisa A; Berntsen, David A
2017-09-01
The Bifocal Lenses In Nearsighted Kids (BLINK) study is the first soft multifocal contact lens myopia control study to compare add powers and measure peripheral refractive error in the vertical meridian, so it will provide important information about the potential mechanism of myopia control. The BLINK study is a National Eye Institute-sponsored, double-masked, randomized clinical trial to investigate the effects of soft multifocal contact lenses on myopia progression. This article describes the subjects' baseline characteristics and study methods. Subjects were 7 to 11 years old, had -0.75 to -5.00 spherical component and less than 1.00 diopter (D) astigmatism, and had 20/25 or better logMAR distance visual acuity with manifest refraction in each eye and with +2.50-D add soft bifocal contact lenses on both eyes. Children were randomly assigned to wear Biofinity single-vision, Biofinity Multifocal "D" with a +1.50-D add power, or Biofinity Multifocal "D" with a +2.50-D add power contact lenses. We examined 443 subjects at the baseline visits, and 294 (66.4%) subjects were enrolled. Of the enrolled subjects, 177 (60.2%) were female, and 200 (68%) were white. The mean (± SD) age was 10.3 ± 1.2 years, and 117 (39.8%) of the eligible subjects were younger than 10 years. The mean spherical equivalent refractive error, measured by cycloplegic autorefraction was -2.39 ± 1.00 D. The best-corrected binocular logMAR visual acuity with glasses was +0.01 ± 0.06 (20/21) at distance and -0.03 ± 0.08 (20/18) at near. The BLINK study subjects are similar to patients who would routinely be eligible for myopia control in practice, so the results will provide clinical information about soft bifocal contact lens myopia control as well as information about the mechanism of the treatment effect, if one occurs.
The lepton+jets Selection and Determination of the Lepton Fake Rate with the Full RunIIb Data Set
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meister, Daniel
2013-01-01
This thesis presents the combined single top and $t\bar{t}$ lepton+jets selection for the full RunIIb dataset of the DØ detector. The selection uses the newest software versions including all standard central object identifications and corrections, and has various additions and improvements compared to the previous 7.3 fb$^{-1}$ $t\bar{t}$ selection and the previous single top selection in order to accommodate even more different analyses. The lepton fake rate $\epsilon_{\rm QCD}$ and the real lepton efficiency $\epsilon_{\rm sig}$ are estimated using the matrix method, and different variations are considered in order to determine the systematic errors. The calculation has to be done for each run period and every set of analysis cuts separately. In addition, the values for the exclusive jet bins and for the new single top analysis cuts have been derived, and the thesis shows numerous control plots to demonstrate the excellent agreement between data and Monte Carlo.
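For reference, the matrix method mentioned above relates the loose and tight event counts to the real-lepton and fake-lepton yields through the efficiencies $\epsilon_{\rm sig}$ and $\epsilon_{\rm QCD}$. A minimal sketch of the standard 2x2 formulation (the interface and names are illustrative, not taken from the thesis):

```python
import numpy as np

def matrix_method(n_loose, n_tight, eps_sig, eps_qcd):
    """Solve the 2x2 matrix method:
        n_loose = N_sig + N_qcd
        n_tight = eps_sig * N_sig + eps_qcd * N_qcd
    and return the real and fake lepton yields in the tight sample."""
    a = np.array([[1.0, 1.0],
                  [eps_sig, eps_qcd]])
    n_sig, n_qcd = np.linalg.solve(a, [n_loose, n_tight])
    return eps_sig * n_sig, eps_qcd * n_qcd
```

Propagating variations of eps_sig and eps_qcd through this solve, per run period and per set of analysis cuts, is what yields the systematic errors described above.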
Effect of single vision soft contact lenses on peripheral refraction.
Kang, Pauline; Fan, Yvonne; Oh, Kelly; Trac, Kevin; Zhang, Frank; Swarbrick, Helen
2012-07-01
To investigate changes in peripheral refraction with under-, full, and over-correction of central refraction with commercially available single vision soft contact lenses (SCLs) in young myopic adults. Thirty-four myopic adult subjects were fitted with Proclear Sphere SCLs to under-correct (+0.75 DS), fully correct, and over-correct (-0.75 DS) their manifest central refractive error. Central and peripheral refraction were measured with no lens wear and subsequently with different levels of SCL central refractive error correction. The uncorrected refractive error was myopic at all locations along the horizontal meridian. Peripheral refraction was relatively hyperopic compared to center at 30 and 35° in the temporal visual field (VF) in low myopes and at 30 and 35° in the temporal VF and 10, 30, and 35° in the nasal VF in moderate myopes. All levels of SCL correction caused a hyperopic shift in refraction at all locations in the horizontal VF. The smallest hyperopic shift was demonstrated with under-correction followed by full correction and then by over-correction of central refractive error. An increase in relative peripheral hyperopia was measured with full correction SCLs compared with no correction in both low and moderate myopes. However, no difference in relative peripheral refraction profiles were found between under-, full, and over-correction. Under-, full, and over-correction of central refractive error with single vision SCLs caused a hyperopic shift in both central and peripheral refraction at all positions in the horizontal meridian. All levels of SCL correction caused the peripheral retina, which initially experienced absolute myopic defocus at baseline with no correction, to experience absolute hyperopic defocus. This peripheral hyperopia may be a possible cause of myopia progression reported with different types and levels of myopia correction.
NASA Astrophysics Data System (ADS)
Samboju, Vishal; Adams, Matthew; Salgaonkar, Vasant; Diederich, Chris J.; Cunha, J. Adam M.
2017-02-01
The speed of sound (SOS) for ultrasound devices used for imaging soft tissue is often calibrated to water (1540 m/s) [1], despite in-vivo soft tissue SOS varying from 1450 to 1613 m/s [2]. Images acquired with 1540 m/s and used in conjunction with stereotactic external coordinate systems can thus result in displacement errors of several millimeters. Ultrasound imaging systems are routinely used to guide interventional thermal ablation and cryoablation devices, or radiation sources for brachytherapy [3]. Brachytherapy uses small radioactive pellets, inserted interstitially with needles under ultrasound guidance, to eradicate cancerous tissue [4]. Since the radiation dose diminishes with distance from the pellet as 1/r², imaging uncertainty of a few millimeters can result in significant erroneous dose delivery [5,6]. Likewise, modeling of power deposition and thermal dose accumulation from ablative sources is also prone to errors due to placement offsets from SOS errors [7]. This work presents a method of mitigating needle placement error due to SOS variances without the need for ionizing radiation [2,8]. We demonstrate the effects of changes in dosimetry in a prostate brachytherapy environment due to patient-specific SOS variances and the ability to mitigate dose delivery uncertainty. Electromagnetic (EM) sensors embedded in the brachytherapy ultrasound system provide information regarding the 3D position and orientation of the ultrasound array. Algorithms using data from these two modalities are used to correct B-mode images to account for SOS errors. While ultrasound localization resulted in >3 mm displacements, EM resolution was verified to <1 mm precision using custom-built phantoms with various SOS, showing 1% accuracy in SOS measurement.
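The displacement error described above arises because the scanner converts echo time to depth using the assumed SOS. A minimal first-order sketch of the axial correction (the function name and the numbers in the comment are illustrative):

```python
def corrected_depth(measured_depth_mm, sos_actual, sos_assumed=1540.0):
    """First-order axial correction for a speed-of-sound mismatch.
    The scanner computes depth as sos_assumed * t / 2 from echo time t,
    so the true depth scales by the ratio of actual to assumed SOS."""
    return measured_depth_mm * (sos_actual / sos_assumed)

# e.g. a target displayed at 50 mm in tissue with an actual SOS of 1450 m/s
# lies near 50 * 1450 / 1540 = 47.1 mm, an offset of roughly 3 mm,
# consistent with the >3 mm displacements reported above.
```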
NASA Astrophysics Data System (ADS)
Chertok, I. M.; Belov, A. V.
2018-03-01
Correction to: Solar Phys https://doi.org/10.1007/s11207-017-1169-1 We found an important error in the text of our article. On page 6, the second sentence of Section 3.2 "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± one Carrington rotation with a step of two rotations." should instead read "We studied the variations in soft X-ray flare characteristics in more detail by averaging them within the running windows of ± 2.5 Carrington rotations with a step of two rotations." We regret the inconvenience. The online version of the original article can be found at https://doi.org/10.1007/s11207-017-1169-1
Soft tissue deformation estimation by spatio-temporal Kalman filter finite element method.
Yarahmadian, Mehran; Zhong, Yongmin; Gu, Chengfan; Shin, Jaehyun
2018-01-01
Soft tissue modeling plays an important role in the development of surgical training simulators as well as in robot-assisted minimally invasive surgeries. It is well known that while the traditional Finite Element Method (FEM) promises accurate modeling of soft tissue deformation, it suffers from a slow computational process. This paper presents a Kalman filter finite element method (KF-FEM) to model soft tissue deformation in real time without sacrificing the traditional FEM accuracy. The proposed method employs the FEM equilibrium equation and formulates it as a filtering process to estimate soft tissue behavior using real-time measurement data. The model is temporally discretized using the Newmark method and further formulated as the system state equation. Simulation results demonstrate that the computational time of KF-FEM is approximately 10 times shorter than that of the traditional FEM while remaining as accurate. The normalized root-mean-square error of the proposed KF-FEM with reference to the traditional FEM is computed as 0.0116. It is concluded that the proposed method significantly improves the computational performance of the traditional FEM without sacrificing FEM accuracy. The proposed method also filters noise involved in the system state and measurement data.
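A minimal sketch of the filtering step at the core of such a KF-FEM scheme, assuming the Newmark-discretized equilibrium equation has been arranged into a linear state-space form; the matrix names follow textbook Kalman notation, not the paper's:

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter.
    Here x would stack the nodal displacements (and velocities) of the
    Newmark-discretized FEM system, F the resulting state-transition
    matrix, and z the real-time surface measurements."""
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with measurement z
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

The filtering view is what buys the speed-up: each time step costs a few matrix products on the precomputed system matrices instead of a full FEM solve, while the update stage simultaneously suppresses measurement noise.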
Microscope self-calibration based on micro laser line imaging and soft computing algorithms
NASA Astrophysics Data System (ADS)
Apolinar Muñoz Rodríguez, J.
2018-06-01
A technique to perform microscope self-calibration via a micro laser line and soft computing algorithms is presented. In this technique, the microscope vision parameters are computed by means of soft computing algorithms based on laser line projection. To implement the self-calibration, a microscope vision system is constructed by means of a CCD camera and a 38 μm laser line. From this arrangement, the microscope vision parameters are represented via Bezier approximation networks, which are computed from the laser line position. In this procedure, a genetic algorithm determines the microscope vision parameters by means of laser line imaging. Also, the approximation networks compute the three-dimensional vision by means of the laser line position. Additionally, the soft computing algorithms re-calibrate the vision parameters when the microscope vision system is modified during the vision task. The proposed self-calibration improves on the accuracy of traditional microscope calibration, which is accomplished via external references to the microscope system. The capability of the self-calibration based on soft computing algorithms is assessed in terms of calibration accuracy and micro-scale measurement error. This contribution is corroborated by an evaluation based on the accuracy of the traditional microscope calibration.
A Computing Method to Determine the Performance of an Ionic Liquid Gel Soft Actuator
He, Bin; Zhang, Chenghong; Zhou, Yanmin; Wang, Zhipeng
2018-01-01
A new type of soft actuator material—an ionic liquid gel (ILG) that consists of BMIMBF4, HEMA, DEAP, and ZrO2—is polymerized into a gel state under ultraviolet (UV) light irradiation. In this paper, we first propose that the ILG conforms to the assumptions of hyperelastic theory and that the Mooney-Rivlin model can be used to study the properties of the ILG. Under the five-parameter and nine-parameter Mooney-Rivlin models, the formulas for the calculation of the uniaxial tensile stress, plane uniform tensile stress, and 3D directional stress are deduced. The five-parameter and nine-parameter Mooney-Rivlin models of the ILG with a ZrO2 content of 3 wt% were obtained by uniaxial tensile testing, and the parameters are denoted as c10, c01, c20, c11, and c02 and c10, c01, c20, c11, c02, c30, c21, c12, and c03, respectively. Through the analysis and comparison of the uniaxial tensile stress between the calculated and experimental data, the error between the stress data calculated from the five-parameter Mooney-Rivlin model and the experimental data is less than 0.51%, and the error between the stress data calculated from the nine-parameter Mooney-Rivlin model and the experimental data is no more than 8.87%. Hence, our work presents a feasible and credible formula for the calculation of the stress of the ILG. This work opens a new path to assess the performance of a soft actuator composed of an ILG and will contribute to the optimized design of soft robots. PMID:29853999
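For illustration, the uniaxial result for the five-parameter model can be written compactly using the standard incompressible relations I1 = λ² + 2/λ and I2 = 2λ + 1/λ². A minimal sketch of the textbook form, which may differ in detail from the paper's derivation:

```python
def mr5_uniaxial_stress(stretch, c10, c01, c20, c11, c02):
    """Nominal uniaxial stress of an incompressible five-parameter
    Mooney-Rivlin solid with strain energy
      W = c10*(I1-3) + c01*(I2-3) + c20*(I1-3)**2
          + c11*(I1-3)*(I2-3) + c02*(I2-3)**2.
    stretch is the uniaxial stretch ratio (lambda)."""
    lam = stretch
    i1 = lam ** 2 + 2.0 / lam            # first invariant, uniaxial incompressible
    i2 = 2.0 * lam + 1.0 / lam ** 2      # second invariant
    dw_di1 = c10 + 2.0 * c20 * (i1 - 3.0) + c11 * (i2 - 3.0)
    dw_di2 = c01 + c11 * (i1 - 3.0) + 2.0 * c02 * (i2 - 3.0)
    # standard result: P = 2 (lambda - lambda^-2) (dW/dI1 + dW/dI2 / lambda)
    return 2.0 * (lam - lam ** -2) * (dw_di1 + dw_di2 / lam)
```

Fitting the c coefficients to the uniaxial tensile data, as in the study above, reduces to a least-squares fit of this function to the measured stress-stretch curve.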
Swaile, D F; Elstun, L T; Benzing, K W
2012-03-01
Individuals with axillary hyperhidrosis have much higher than average sweat rates and are often prescribed anhydrous aluminum chloride (AlCl3) solutions. Topical application of these solutions can be irritating to the skin, resulting in poor compliance and lower than desired efficacy. To demonstrate the efficacy of an over-the-counter "clinical strength" soft-solid antiperspirant using a night-time application regimen and compare it to a prescription aluminum chloride (6.5%) antiperspirant using male panelists. Gravimetric hot room efficacy testing (100 °F and 35% humidity) was performed comparing an over-the-counter soft-solid antiperspirant to placebo in a single test. Two separate gravimetric tests were conducted comparing a prescription aluminum chloride (6.5%) antiperspirant to the same soft-solid product using an intent-to-treat model. Skin irritation was assessed daily by a trained grader. Placebo testing resulted in 85% of panelists having a reduction in sweating rate greater than 50%. Comparison testing showed the over-the-counter soft solid reduced sweat rate by an average of 34% more than the prescription product while causing significantly less skin irritation. Over-the-counter "clinical strength" soft-solid antiperspirants can be considered as an alternative to aluminum chloride antiperspirants for the treatment of heavy sweating. © 2012 The Author. BJD © 2012 British Association of Dermatologists.
High strain-rate soft material characterization via inertial cavitation
NASA Astrophysics Data System (ADS)
Estrada, Jonathan B.; Barajas, Carlos; Henann, David L.; Johnsen, Eric; Franck, Christian
2018-03-01
Mechanical characterization of soft materials at high strain-rates is challenging due to their high compliance, slow wave speeds, and non-linear viscoelasticity. Yet, knowledge of their material behavior is paramount across a spectrum of biological and engineering applications from minimizing tissue damage in ultrasound and laser surgeries to diagnosing and mitigating impact injuries. To address this significant experimental hurdle and the need to accurately measure the viscoelastic properties of soft materials at high strain-rates (10³-10⁸ s⁻¹), we present a minimally invasive, local 3D microrheology technique based on inertial microcavitation. By combining high-speed time-lapse imaging with an appropriate theoretical cavitation framework, we demonstrate that this technique has the capability to accurately determine the general viscoelastic material properties of soft matter as compliant as a few kilopascals. Similar to commercial characterization algorithms, we provide the user with significant flexibility in evaluating several constitutive laws to determine the most appropriate physical model for the material under investigation. Given its straightforward implementation into most current microscopy setups, we anticipate that this technique can be easily adopted by anyone interested in characterizing soft material properties at high loading rates including hydrogels, tissues and various polymeric specimens.
Programmable Numerical Function Generators: Architectures and Synthesis Method
2005-08-01
generates HDL (Hardware Description Language) code from the design specification described by Scilab [14], a MATLAB-like numerical calculation soft…cad.com/Error-NFG/. [14] Scilab 3.0, INRIA-ENPC, France, http://scilabsoft.inria.fr/ [15] M. J. Schulte and J. E. Stine, "Approximating elementary functions…"
Overview of Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarthy, K. P.; Schwartz, H. R.; Smith, L. S.
1998-01-01
A fifth set of heavy ion single event effects (SEE) test data has been collected since the last IEEE publications (1,2,3,4) in the December issues for 1985, 1987, 1989, and 1991. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are evaluated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watkins, W.T.; Siebers, J.V.; Bzdusek, K.
Purpose: To introduce methods to analyze Deformable Image Registration (DIR) and identify regions of potential DIR errors. Methods: DIR Deformable Vector Fields (DVFs) quantifying patient anatomic changes were evaluated using the Jacobian determinant and the magnitude of DVF curl as functions of tissue density and tissue type. These quantities represent local relative deformation and rotation, respectively. Large values in dense tissues can potentially identify non-physical DVF errors. For multiple DVFs per patient, histograms and visualization of DVF differences were also considered. To demonstrate the capabilities of the methods, we computed multiple DVFs for each of five Head and Neck (H&N) patients (P1-P5) via a Fast-symmetric Demons (FSD) algorithm and via a Diffeomorphic Demons (DFD) algorithm, and show the potential to identify DVF errors. Results: Quantitative comparisons of the FSD and DFD registrations revealed <0.3 cm DVF differences in >99% of all voxels for P1, >96% for P2, and >90% of voxels for P3. While the FSD and DFD registrations were very similar for these patients, the Jacobian determinant was >50% in 9-15% of soft tissue and in 3-17% of bony tissue in each of these cases. The volumes of large soft tissue deformation were consistent for all five patients using the FSD algorithm (mean 15% ± 4% volume), whereas DFD reduced regions of large deformation by 10% volume (785 cm³) for P4 and by 14% volume (1775 cm³) for P5. The DFD registrations resulted in fewer regions of large DVF curl; 50% rotations in FSD registrations averaged 209 ± 136 cm³ in soft tissue and 10 ± 11 cm³ in bony tissue, but using DFD these values were reduced to 42 ± 53 cm³ and 1.1 ± 1.5 cm³, respectively. Conclusion: Analysis of the Jacobian determinant and curl as functions of tissue density can identify regions of potential DVF errors by identifying non-physical deformations and rotations. Collaboration with Phillips Healthcare, as indicated in authorship.
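A minimal sketch of the two voxel-wise quantities used above, assuming the DVF is stored as a (3, nx, ny, nz) displacement array with known voxel spacing (a NumPy illustration, not the authors' implementation):

```python
import numpy as np

def dvf_jacobian_and_curl(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant and curl magnitude of a DVF.

    dvf     -- displacement field of shape (3, nx, ny, nz)
    spacing -- voxel size along each axis
    """
    # grads[i][j] = d u_i / d x_j
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]
    # deformation gradient F = I + grad(u); det(F) < 0 flags folding,
    # det(F) far from 1 flags large local volume change
    jac = np.zeros(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    jac_det = np.linalg.det(jac)
    # curl components quantify local rotation of the field
    curl = np.stack([grads[2][1] - grads[1][2],
                     grads[0][2] - grads[2][0],
                     grads[1][0] - grads[0][1]])
    curl_mag = np.linalg.norm(curl, axis=0)
    return jac_det, curl_mag
```

Masking jac_det and curl_mag by tissue density, as described above, then isolates the dense-tissue voxels where large deformations or rotations are physically implausible.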
Correcting for sequencing error in maximum likelihood phylogeny inference.
Kuhner, Mary K; McGill, James
2014-11-04
Accurate phylogenies are critical to taxonomy as well as studies of speciation processes and other evolutionary patterns. Accurate branch lengths in phylogenies are critical for dating and rate measurements. Such accuracy may be jeopardized by unacknowledged sequencing error. We use simulated data to test a correction for DNA sequencing error in maximum likelihood phylogeny inference. Over a wide range of data polymorphism and true error rate, we found that correcting for sequencing error improves recovery of the branch lengths, even if the assumed error rate is up to twice the true error rate. Low error rates have little effect on recovery of the topology. When error is high, correction improves topological inference; however, when error is extremely high, using an assumed error rate greater than the true error rate leads to poor recovery of both topology and branch lengths. The error correction approach tested here was proposed in 2004 but has not been widely used, perhaps because researchers do not want to commit to an estimate of the error rate. This study shows that correction with an approximate error rate is generally preferable to ignoring the issue. Copyright © 2014 Kuhner and McGill.
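For intuition, the correction tested here amounts to replacing the usual 0/1 tip assignments in Felsenstein's pruning algorithm with observation probabilities under an assumed error rate, so the likelihood sums over the unknown true base at each tip. A minimal sketch with a uniform miscall model (illustrative, not the authors' code):

```python
import numpy as np

BASES = "ACGT"

def tip_partial_likelihoods(observed, error_rate):
    """Tip conditional likelihoods P(observed base | true base) under a
    uniform sequencing-error model: the base is read correctly with
    probability 1 - e and miscalled as each other base with probability e/3.
    Without error correction this vector would be 1 for the observed base
    and 0 elsewhere."""
    probs = np.full(4, error_rate / 3.0)
    probs[BASES.index(observed)] = 1.0 - error_rate
    return probs

# e.g. an observed 'A' with an assumed 1% error rate:
# array([0.99, 0.00333..., 0.00333..., 0.00333...])
```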
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods, namely the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted on the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root mean square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed an RMS error of 0.89 mm. It turned out that the remaining triangulation errors cause only small inaccuracies in the tracking of a rigid object. Here, the tracking accuracy was given by an RMS translational error of 0.33 mm and an RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
Estimated Probability of a Cervical Spine Injury During an ISS Mission
NASA Technical Reports Server (NTRS)
Brooker, John E.; Weaver, Aaron S.; Myers, Jerry G.
2013-01-01
Introduction: The Integrated Medical Model (IMM) utilizes historical data, cohort data, and external simulations as input factors to provide estimates of crew health, resource utilization and mission outcomes. The Cervical Spine Injury Module (CSIM) is an external simulation designed to provide the IMM with parameter estimates for 1) a probability distribution function (PDF) of the incidence rate, 2) the mean incidence rate, and 3) the standard deviation associated with the mean resulting from injury/trauma of the neck. Methods: An injury mechanism based on an idealized low-velocity blunt impact to the superior posterior thorax of an ISS crewmember was used as the simulated mission environment. As a result of this impact, the cervical spine is inertially loaded from the mass of the head producing an extension-flexion motion deforming the soft tissues of the neck. A multibody biomechanical model was developed to estimate the kinematic and dynamic response of the head-neck system from a prescribed acceleration profile. Logistic regression was performed on a dataset containing AIS1 soft tissue neck injuries from rear-end automobile collisions with published Neck Injury Criterion values producing an injury transfer function (ITF). An injury event scenario (IES) was constructed such that crew 1 is moving through a primary or standard translation path transferring large volume equipment impacting stationary crew 2. The incidence rate for this IES was estimated from in-flight data and used to calculate the probability of occurrence. The uncertainty in the model input factors were estimated from representative datasets and expressed in terms of probability distributions. A Monte Carlo Method utilizing simple random sampling was employed to propagate both aleatory and epistemic uncertain factors. Scatterplots and partial correlation coefficients (PCC) were generated to determine input factor sensitivity. CSIM was developed in the SimMechanics/Simulink environment with a Monte Carlo wrapper (MATLAB) used to integrate the components of the module. Results: The probability of generating an AIS1 soft tissue neck injury from the extension/flexion motion induced by a low-velocity blunt impact to the superior posterior thorax was fitted with a lognormal PDF with mean 0.26409, standard deviation 0.11353, standard error of mean 0.00114, and 95% confidence interval [0.26186, 0.26631]. Combining the probability of an AIS1 injury with the probability of IES occurrence was fitted with a Johnson SI PDF with mean 0.02772, standard deviation 0.02012, standard error of mean 0.00020, and 95% confidence interval [0.02733, 0.02812]. The input factor sensitivity analysis in descending order was IES incidence rate, ITF regression coefficient 1, impactor initial velocity, ITF regression coefficient 2, and all others (equipment mass, crew 1 body mass, crew 2 body mass) insignificant. Verification and Validation (V&V): The IMM V&V, based upon NASA STD 7009, was implemented which included an assessment of the data sets used to build CSIM. The documentation maintained includes source code comments and a technical report. The software code and documentation is under Subversion configuration management. Kinematic validation was performed by comparing the biomechanical model output to established corridors.
Jiménez-Sotelo, Paola; Hernández-Martínez, Maylet; Osorio-Revilla, Guillermo; Meza-Márquez, Ofelia Gabriela; García-Ochoa, Felipe; Gallardo-Velázquez, Tzayhrí
2016-07-01
Avocado oil is a high-value nutraceutical oil whose authentication is very important, since the addition of low-cost oils could lower its beneficial properties. Mid-FTIR spectroscopy combined with chemometrics was used to detect and quantify adulteration of avocado oil with sunflower and soybean oils in a ternary mixture. Thirty-seven laboratory-prepared adulterated samples and 20 pure avocado oil samples were evaluated. The adulterant oil amount ranged from 2% to 50% (w/w) in avocado oil. A soft independent modelling of class analogy (SIMCA) model was developed to discriminate between pure and adulterated samples. The model showed recognition and rejection rates of 100% and proper classification in external validation. A partial least squares (PLS) algorithm was used to estimate the percentage of adulteration. The PLS model showed values of R² > 0.9961, standard errors of calibration (SEC) in the range of 0.3963-0.7881, standard errors of prediction (SEP, estimated) between 0.6483 and 0.9707, and good prediction performance in external validation. The results showed that mid-FTIR spectroscopy can be an accurate and reliable technique for qualitative and quantitative analysis of avocado oil in ternary mixtures.
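A minimal sketch of the PLS quantification step, assuming `spectra` holds the mid-FTIR absorbances as an (n_samples, n_wavenumbers) array and `adulteration` the known adulterant percentages (w/w); the scikit-learn calls are an illustrative stand-in for the paper's chemometrics software, and the number of latent variables would in practice be chosen by cross-validation:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

# hold out samples for external validation
X_train, X_test, y_train, y_test = train_test_split(
    spectra, adulteration, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=10)   # latent variables: tune by cross-validation
pls.fit(X_train, y_train)

y_pred = pls.predict(X_test).ravel()
sep = np.sqrt(np.mean((y_test - y_pred) ** 2))   # standard error of prediction
print(f"SEP = {sep:.4f} % adulterant")
```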
Performance analysis of LDPC codes on OOK terahertz wireless channels
NASA Astrophysics Data System (ADS)
Chun, Liu; Chang, Wang; Jun-Cheng, Cao
2016-02-01
Atmospheric absorption, scattering, and scintillation are the major causes of deterioration in the transmission quality of terahertz (THz) wireless communications. An error control coding scheme based on low-density parity check (LDPC) codes with a soft decision decoding algorithm is proposed to improve the bit-error-rate (BER) performance of an on-off keying (OOK) modulated THz signal through an atmospheric channel. The THz wave propagation characteristics and a channel model in the atmosphere are set up. Numerical simulations validate the strong performance of LDPC codes against atmospheric fading and demonstrate their huge potential for future ultra-high-speed (beyond Gbps) THz communications. Project supported by the National Key Basic Research Program of China (Grant No. 2014CB339803), the National High Technology Research and Development Program of China (Grant No. 2011AA010205), the National Natural Science Foundation of China (Grant Nos. 61131006, 61321492, and 61204135), the Major National Development Project of Scientific Instrument and Equipment (Grant No. 2011YQ150021), the National Science and Technology Major Project (Grant No. 2011ZX02707), the International Collaboration and Innovation Program on High Mobility Materials Engineering of the Chinese Academy of Sciences, and the Shanghai Municipal Commission of Science and Technology (Grant No. 14530711300).
Adsorption-desorption kinetics of soft particles onto surfaces
NASA Astrophysics Data System (ADS)
Osberg, Brendan; Gerland, Ulrich
A broad range of physical, chemical, and biological systems feature processes in which particles randomly adsorb on a substrate. Theoretical models usually assume "hard" (mutually impenetrable) particles, but in soft matter physics the adsorbing particles can be effectively compressible, implying "soft" interaction potentials. We recently studied the kinetics of such soft particles adsorbing onto one-dimensional substrates, identifying three novel phenomena: (i) a gradual density increase, or "cramming", replaces the usual jamming behavior of hard particles; (ii) a density overshoot can occur (only for soft particles) on a time scale set by the desorption rate; and (iii) relaxation rates of soft particles increase with particle size (on a lattice), while hard particles show the opposite trend. The latter occurs because unjamming requires desorption and many-body reorganization to equilibrate, a process that is generally very slow. Here we extend this analysis to a two-dimensional substrate, focusing on the question of whether the adsorption-desorption kinetics of particles in two dimensions is similarly enriched by the introduction of soft interactions. Application to experiments, for example the adsorption of fibrinogen on two-dimensional surfaces, will be discussed.
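A toy one-dimensional version of soft adsorption-desorption kinetics (the abstract's extension is to two dimensions): adsorption attempts that overlap already-adsorbed particles are accepted with a Boltzmann factor rather than rejected outright, which is what distinguishes soft from hard particles. The event scheme, rates, and softness parameter are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)
L, size, eps, p_des, steps = 200, 4, 2.0, 0.002, 100_000
occ = np.zeros(L, dtype=int)      # layer count per site; >1 means soft overlap
particles = []                    # stored left edges of adsorbed particles

def sites(x):                     # periodic footprint of a particle at x
    return (x + np.arange(size)) % L

for _ in range(steps):            # one event per step (crude scheme for illustration)
    if particles and rng.random() < p_des:            # desorption of a random particle
        x = particles.pop(rng.integers(len(particles)))
        occ[sites(x)] -= 1
    else:                                             # adsorption attempt
        x = rng.integers(L)
        overlap = occ[sites(x)].sum()                 # how much it compresses others
        if rng.random() < np.exp(-eps * overlap):     # soft (Boltzmann) acceptance
            occ[sites(x)] += 1
            particles.append(x)

print(f"steady-state density ≈ {len(particles) * size / L:.2f}")
```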
Liu, Xiang; Effenberger, Frank; Chand, Naresh
2015-03-09
We demonstrate a flexible modulation and detection scheme for upstream transmission in passive optical networks using pulse position modulation at the optical network unit, facilitating burst-mode detection with automatic decision threshold tracking and DSP-enabled soft-combining at the optical line terminal. Adaptive receiver sensitivities of -33.1 dBm, -36.6 dBm and -38.3 dBm at a bit error ratio of 10^-4 are respectively achieved for 2.5 Gb/s, 1.25 Gb/s and 625 Mb/s after transmission over a 20-km standard single-mode fiber without any optical amplification.
NASA Astrophysics Data System (ADS)
Huo, Ming-Xia; Li, Ying
2017-12-01
Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor increasing with the code distance.
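A sketch of the estimation idea under stated assumptions: given a time series of error rates inferred from past error-correction rounds, a Gaussian process regressor smooths the history and predicts the current rate. The kernel choice and synthetic data are illustrative, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
t = np.linspace(0, 10, 80)[:, None]                      # time (arbitrary units)
true_rate = 1e-3 * (1 + 0.5 * np.sin(0.6 * t.ravel()))   # slowly drifting error rate
obs = rng.poisson(true_rate * 5000) / 5000               # rates estimated from 5000 rounds each

gp = GaussianProcessRegressor(RBF(2.0) + WhiteKernel(1e-9), normalize_y=True)
gp.fit(t, obs)
t_next = np.array([[10.5]])                              # predict the next rate
mean, std = gp.predict(t_next, return_std=True)
print(f"predicted rate = {mean[0]:.2e} ± {std[0]:.1e}")
```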
Effect of the mandible on mouthguard measurements of head kinematics.
Kuo, Calvin; Wu, Lyndia C; Hammoor, Brad T; Luck, Jason F; Cutcliffe, Hattie C; Lynall, Robert C; Kait, Jason R; Campbell, Kody R; Mihalik, Jason P; Bass, Cameron R; Camarillo, David B
2016-06-14
Wearable sensors are becoming increasingly popular for measuring head motions and detecting head impacts. Many sensors are worn on the skin or in headgear and can suffer from motion artifacts introduced by the compliance of soft tissue or decoupling of headgear from the skull. The instrumented mouthguard is designed to couple directly to the upper dentition, which is made of hard enamel and anchored in a bony socket by stiff ligaments. This gives the mouthguard superior coupling to the skull compared with other systems. However, multiple validation studies have yielded conflicting results with respect to the mouthguard's head kinematics measurement accuracy. Here, we demonstrate that imposing different constraints on the mandible (lower jaw) can alter mouthguard kinematic accuracy in dummy headform testing. In addition, post mortem human surrogate tests utilizing the worst-case unconstrained mandible condition yield 40% and 80% normalized root mean square error in angular velocity and angular acceleration, respectively. These errors can be modeled using a simple spring-mass system in which the soft mouthguard material near the sensors acts as a spring and the mandible as a mass. However, the mouthguard can be designed to mitigate these disturbances by isolating sensors from mandible loads, improving accuracy to below 15% normalized root mean square error in all kinematic measures. Thus, while current mouthguards would suffer from measurement errors in the worst-case unconstrained mandible condition, future mouthguards should be designed to account for these disturbances, and future validation testing should include unconstrained mandibles to ensure proper accuracy.
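A minimal sketch of the spring-mass disturbance picture: the mandible as a mass coupled to the skull through the soft mouthguard material (spring plus damper), so a mandible-loaded sensor reads the mass motion instead of the skull motion. All parameter values are assumed for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

m, k, c = 0.25, 5e4, 20.0                             # kg, N/m, N·s/m (assumed)
x_s = lambda t: 0.002 * np.sin(2 * np.pi * 100 * t)   # skull motion, 100 Hz (toy)

def rhs(t, y):                    # mandible mass driven through the mouthguard spring
    x, v = y
    return [v, (-k * (x - x_s(t)) - c * v) / m]

sol = solve_ivp(rhs, (0, 0.05), [0.0, 0.0], max_step=1e-5)
sensor = sol.y[0]                 # what a mandible-loaded sensor would report
err = np.sqrt(np.mean((sensor - x_s(sol.t)) ** 2)) / np.ptp(x_s(sol.t))
print(f"normalized RMS error ≈ {100 * err:.0f}%")
```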
Large poroelastic deformation of a soft material
NASA Astrophysics Data System (ADS)
MacMinn, Christopher W.; Dufresne, Eric R.; Wettlaufer, John S.
2014-11-01
Flow through a porous material will drive mechanical deformation when the fluid pressure becomes comparable to the stiffness of the solid skeleton. This has applications ranging from hydraulic fracture for recovery of shale gas, where fluid is injected at high pressure, to the mechanics of biological cells and tissues, where the solid skeleton is very soft. The traditional linear theory of poroelasticity captures this fluid-solid coupling by combining Darcy's law with linear elasticity. However, linear elasticity is only volume-conservative to first order in the strain, which can become problematic when damage, plasticity, or extreme softness lead to large deformations. Here, we compare the predictions of linear poroelasticity with those of a large-deformation framework in the context of two model problems. We show that errors in volume conservation are compounded and amplified by coupling with the fluid flow, and can become important even when the deformation is small. We also illustrate these results with a laboratory experiment.
NASA Astrophysics Data System (ADS)
Elfgen, S.; Franck, D.; Hameyer, K.
2018-04-01
Magnetic measurements are indispensable for the characterization of soft magnetic materials used, e.g., in electrical machines. Characteristic values are used for quality control during production and for the parametrization of material models. Uncertainties and errors in the measurements are reflected directly in the parameters of the material models. This can result in over-dimensioning and inaccuracies in simulations for the design of electrical machines. Therefore, the factors influencing the characterization of soft magnetic materials are named and their resulting uncertainty contributions studied. The analysis of the resulting uncertainty contributions can serve the operator as an additional selection criterion for different measuring sensors. The investigation is performed for measurements within and outside the currently prescribed standard using a single sheet tester, and its impact on the identification of iron loss parameters is studied.
FPGA-Based, Self-Checking, Fault-Tolerant Computers
NASA Technical Reports Server (NTRS)
Some, Raphael; Rennels, David
2004-01-01
A proposed computer architecture would exploit the capabilities of commercially available field-programmable gate arrays (FPGAs) to enable computers to detect and recover from bit errors. The main purpose of the proposed architecture is to enable fault-tolerant computing in the presence of single-event upsets (SEUs). [An SEU is a spurious bit flip (also called a soft error) caused by a single impact of ionizing radiation.] The architecture would also enable recovery from some soft errors caused by electrical transients and, to some extent, from intermittent and permanent (hard) errors caused by aging of electronic components. A typical FPGA of the current generation contains one or more complete processor cores, memories, and high-speed serial input/output (I/O) channels, making it possible to shrink a board-level processor node to a single integrated-circuit chip. Custom, highly efficient microcontrollers, general-purpose computers, custom I/O processors, and signal processors can be rapidly and efficiently implemented by use of FPGAs. Unfortunately, FPGAs are susceptible to SEUs. Prior efforts to mitigate the effects of SEUs have yielded solutions that degrade performance of the system and require support from external hardware and software. In comparison with other fault-tolerant-computing architectures (e.g., triple modular redundancy), the proposed architecture could be implemented with less circuitry and lower power demand. Moreover, the fault-tolerant computing functions would require only minimal support from circuitry outside the central processing units (CPUs) of computers, would not require any software support, and would be largely transparent to software and to other computer hardware. There would be two types of modules: a self-checking processor module and a memory system. The self-checking processor module would be implemented on a single FPGA and would be capable of detecting its own internal errors. It would contain two CPUs executing identical programs in lock step, with comparison of their outputs to detect errors. It would also contain various cache and local memory circuits, communication circuits, and configurable special-purpose processors that would use self-checking checkers. (The basic principle of the self-checking checker method is to utilize logic circuitry that generates error signals whenever there is an error in either the checker or the circuit being checked.) The memory system would comprise a main memory and a hardware-controlled check-pointing system (CPS) based on a buffer memory denoted the recovery cache. The main memory would contain random-access memory (RAM) chips and FPGAs that would, in addition to everything else, implement double-error-detecting and single-error-correcting memory functions to enable recovery from single-bit errors.
Raman-shifted alexandrite laser for soft tissue ablation in the 6- to 7-µm wavelength range
Kozub, John; Ivanov, Borislav; Jayasinghe, Aroshan; Prasad, Ratna; Shen, Jin; Klosner, Marc; Heller, Donald; Mendenhall, Marcus; Piston, David W.; Joos, Karen; Hutson, M. Shane
2011-01-01
Prior work with free-electron lasers (FELs) showed that wavelengths in the 6- to 7-µm range could ablate soft tissues efficiently with little collateral damage; however, FELs proved too costly and too complex for widespread surgical use. Several alternative 6- to 7-µm laser systems have demonstrated the ability to cut soft tissues cleanly, but at rates that were much too low for surgical applications. Here, we present initial results with a Raman-shifted, pulsed alexandrite laser that is tunable from 6 to 7 µm and cuts soft tissues cleanly (approximately 15 µm of thermal damage surrounding ablation craters in cornea), and does so with volumetric ablation rates of 2-5 × 10^-3 mm^3/s. These rates are comparable to those attained in prior successful surgical trials using the FEL for optic nerve sheath fenestration. PMID:21559139
Ozone Profile Retrievals from the OMPS on Suomi NPP
NASA Astrophysics Data System (ADS)
Bak, J.; Liu, X.; Kim, J. H.; Haffner, D. P.; Chance, K.; Yang, K.; Sun, K.; Gonzalez Abad, G.
2017-12-01
We verify and correct the Ozone Mapping and Profiler Suite (OMPS) Nadir Mapper (NM) L1B v2.0 data with the aim of producing accurate ozone profile retrievals using an optimal estimation based inversion method in the 302.5-340 nm fitting window. The evaluation of available slit functions demonstrates that preflight-measured slit functions represent OMPS measurements better than derived Gaussian slit functions. Our OMPS fitting residuals contain significant wavelength- and cross-track-dependent biases, and thereby serious cross-track striping errors are found in preliminary retrievals, especially in the troposphere. To eliminate the systematic component of the fitting residuals, we apply "soft calibration" to OMPS radiances. With the soft calibration, the amplitude of the fitting residuals decreases from 1% to 0.2% over low/mid latitudes, and thereby the consistency of tropospheric ozone retrievals between OMPS and the Ozone Monitoring Instrument (OMI) is substantially improved. A common mode correction is implemented for additional radiometric calibration, which improves retrievals especially at high latitudes, where the amplitude of the fitting residuals decreases by a factor of 2. We estimate the floor noise error of OMPS measurements from standard deviations of the fitting residuals. The derived error in the Huggins band (~0.1%) is 2 times smaller than the OMI floor noise error and 2 times larger than the OMPS L1B measurement error. The OMPS floor noise errors better constrain our retrievals, maximizing measurement information and stabilizing our fitting residuals. The final precision of the fitting residuals is less than 0.1% at low/mid latitudes, with ~1 degree of freedom for signal for tropospheric ozone, so that we meet the general requirements for successful tropospheric ozone retrievals. To assess whether the quality of OMPS ozone retrievals is acceptable for scientific use, we characterize OMPS ozone profile retrievals, present an error analysis, and validate the retrievals against a reference dataset. Useful information on the vertical distribution of ozone is limited to below 40 km from OMPS NM measurements alone, due to the absence of the Hartley-band ozone wavelengths. This shortcoming will be improved with joint ozone profile retrieval using Nadir Profiler (NP) measurements covering the 250 to 310 nm range.
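A hedged sketch of the soft-calibration idea: estimate the systematic wavelength- and cross-track-dependent residual pattern from an ensemble of scenes, then divide it out of the measured radiances. Array shapes and magnitudes are placeholders, not OMPS specifics.

```python
import numpy as np

rng = np.random.default_rng(5)
# Fitting residuals from many scenes: (scene, cross-track position, wavelength)
residuals = rng.normal(0.004, 0.002, size=(500, 36, 160))
systematic = residuals.mean(axis=0)        # wavelength/cross-track bias pattern
print(f"systematic residual amplitude ≈ {np.abs(systematic).mean():.4f}")

radiance = np.ones((36, 160))              # toy L1B radiances for one new scene
radiance_soft_calibrated = radiance / (1.0 + systematic)
```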
X-ray Variations at the Orbital Period from Cygnus X-1 IN the High/Soft State
NASA Astrophysics Data System (ADS)
Boroson, Bram; Vrtilek, Saeqa Dil
2010-02-01
Orbital variability has been found in the X-ray hardness of the black hole candidate Cygnus X-1 during the soft/high X-ray state using light curves provided by the Rossi X-ray Timing Explorer's All-Sky Monitor. We are able to set broad limits on how the mass-loss rate and X-ray luminosity vary between the hard and soft states. The folded light curve shows diminished flux in the soft X-ray band at φ = 0 (defined as the time of the superior conjunction of the X-ray source). Models of the orbital variability provide slightly superior fits when the absorbing gas is concentrated in neutral clumps and better explain the strong variability in hardness. In combination with the previously established hard/low state dips, our observations give a lower limit to the mass-loss rate in the soft state (Ṁ < 2 × 10^-6 M_⊙ yr^-1) than the limit in the hard state (Ṁ < 4 × 10^-6 M_⊙ yr^-1). Without a change in the wind structure between X-ray states, the greater mass-loss rate during the low/hard state would be inconsistent with the increased flaring seen during the high/soft state.
A Soft Sensor for Bioprocess Control Based on Sequential Filtering of Metabolic Heat Signals
Paulsson, Dan; Gustavsson, Robert; Mandenius, Carl-Fredrik
2014-01-01
Soft sensors are the combination of robust on-line sensor signals with mathematical models for deriving additional process information. Here, we apply this principle to a microbial recombinant protein production process in a bioreactor by exploiting bio-calorimetric methodology. Temperature sensor signals from the cooling system of the bioreactor were used for estimating the metabolic heat of the microbial culture, and from that the specific growth rate and active biomass concentration were derived. By applying sequential digital signal filtering, the soft sensor was made more robust for industrial practice with cultures generating low metabolic heat in environments with a high noise level. The estimated specific growth rate signal obtained from the three-stage sequential filter allowed controlled feeding of substrate during the fed-batch phase of the production process. The biomass and growth rate estimates from the soft sensor were also compared with an alternative sensor probe and a capacitance on-line sensor for the same variables. The comparison showed similar or better sensitivity and lower variability for the metabolic heat soft sensor, suggesting that using the permanent temperature sensors of a bioreactor is a realistic and inexpensive alternative for monitoring and control. However, both alternatives are easy to implement in a soft sensor, alone or in parallel. PMID:25264951
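An illustrative three-stage sequential filter on a synthetic metabolic-heat signal; the particular stages (median, moving average, exponential smoothing) and all constants are assumptions standing in for the paper's filter design.

```python
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(6)
t = np.arange(0, 20, 0.05)                       # time (h)
heat = 0.5 * np.exp(0.25 * t)                    # W, exponential growth phase (toy)
noisy = heat + rng.normal(0, 0.4, t.size) + (rng.random(t.size) < 0.01) * 5

stage1 = medfilt(noisy, 9)                                    # spike rejection
stage2 = np.convolve(stage1, np.ones(21) / 21, mode="same")   # moving average
mu = np.gradient(np.log(np.clip(stage2, 1e-3, None)), t)      # specific growth rate
stage3 = np.zeros_like(mu)                                    # exponential smoothing
for i in range(1, mu.size):
    stage3[i] = 0.98 * stage3[i - 1] + 0.02 * mu[i]
print(f"estimated mu at t=15 h: {stage3[t.searchsorted(15)]:.3f} 1/h (true 0.25)")
```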
Trends in Device SEE Susceptibility from Heavy Ions
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Coss, J. R.; McCarty, K. P.; Schwartz, H. R.; Swift, G. M.; Watson, R. K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.
1995-01-01
The sixth set of heavy ion single event effects (SEE) test data has been collected since the last IEEE publications in the December issues of the IEEE Transactions on Nuclear Science for 1985, 1987, 1989, 1991, and the IEEE Workshop Record, 1993. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art devices are evaluated.
Estimating soft tissue thickness from light-tissue interactions: a simulation study
Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2013-01-01
Immobilization and marker-based motion tracking in radiation therapy often cause decreased patient comfort. However, the more comfortable alternative of optical surface tracking is highly inaccurate due to missing point-to-point correspondences between subsequent point clouds as well as elastic deformation of soft tissue. In this study, we present a proof of concept for measuring subcutaneous features with a laser scanner setup, focusing on skin thickness as an additional input for high accuracy optical surface tracking. Using Monte-Carlo simulations for multi-layered tissue, we show that informative features can be extracted from the simulated tissue reflection by integrating intensities within concentric ROIs around the laser spot center. Training a regression model with a simulated data set identifies patterns that allow for predicting skin thickness with a root mean square error as low as 18 µm. Different approaches to compensate for varying observation angles were shown to yield errors still below 90 µm. Finally, this initial study provides a very promising proof of concept and encourages research towards a practical prototype. PMID:23847741
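A sketch of the feature-extraction idea, assuming intensities integrated within concentric rings around the laser spot center and a simple regressor; the image model below is a toy stand-in for the Monte-Carlo tissue simulations.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(7)
yy, xx = np.mgrid[-32:32, -32:32]
r = np.hypot(xx, yy)
edges = np.arange(0, 33, 4)                       # concentric ROI boundaries (px)

def ring_features(img):
    """Integrated intensity within each concentric ring around the spot center."""
    return np.array([img[(r >= a) & (r < b)].sum()
                     for a, b in zip(edges[:-1], edges[1:])])

thickness = rng.uniform(0.5, 2.5, 300)            # mm, training labels (toy)
X = np.stack([ring_features(np.exp(-r / (5 + 3 * d)) +   # toy reflection profile
                            rng.normal(0, 0.01, r.shape)) for d in thickness])
model = Ridge(alpha=1.0).fit(X, thickness)
rmse = np.sqrt(np.mean((model.predict(X) - thickness) ** 2))
print(f"training RMSE: {rmse * 1000:.0f} µm")
```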
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2016-01-01
Two new soft computing models, namely genetic programming (GP) and a genetic artificial algorithm (GAA) neural network (a combination of modified genetic algorithm and artificial neural network methods), were developed in order to predict the percentage of shear force in a rectangular channel with non-homogeneous roughness. The ability of these methods to estimate the percentage of shear force was investigated. Moreover, the independent parameters' effectiveness in predicting the percentage of shear force was determined using sensitivity analysis. According to the results, the GP model demonstrated superior performance to the GAA model. A comparison was also made between the GP model, determined to be the best, and five equations obtained in prior research. The GP model with the lowest error values (root mean square error (RMSE) of 0.0515) performed best compared with the other equations presented for rough and smooth channels as well as smooth ducts. The equation proposed for rectangular channels with rough boundaries (RMSE of 0.0642) outperformed the prior equations for smooth boundaries.
Hubert, G; Regis, D; Cheminet, A; Gatti, M; Lacoste, V
2014-10-01
Particles originating from primary cosmic radiation that hit the Earth's atmosphere give rise to a complex field of secondary particles, including neutrons, protons, muons, pions, etc. Since the 1980s it has been known that terrestrial cosmic rays can penetrate the natural shielding of buildings, equipment and circuit packages and induce soft errors in integrated circuits. Recently, research has shown that commercial static random access memories are now so small and sufficiently sensitive that single event upsets (SEUs) may be induced by the electronic stopping of a proton. With continued advancements in process size, this downward trend in sensitivity is expected to continue, and muon-induced soft errors have been predicted for nano-electronics. This paper describes these effects in specific cases of neutron-, proton- and muon-induced SEUs observed in complementary metal-oxide semiconductor devices. The results allow investigation of the technology node sensitivity along the scaling trend.
NASA Technical Reports Server (NTRS)
Nitta, Nariaki
1988-01-01
Hard X-ray spectra in solar flares obtained by the broadband spectrometers aboard Hinotori and SMM are compared. Within the uncertainty brought about by assuming the typical energy of the background X-rays, spectra from the Hinotori spectrometer are usually consistent with those from the SMM spectrometer for flares in 1981. In contrast, flares in 1982 persistently show 20-50 percent higher flux by Hinotori than by SMM. If this discrepancy is entirely attributable to errors in the calibration of energy ranges, the errors would be about 10 percent. Despite such a discrepancy in absolute flux, in the decay phase of one flare, spectra revealed a hard X-ray component (probably a 'superhot' component) that could be explained neither by emission from a plasma at about 2 × 10^7 K nor by a nonthermal power-law component. Imaging observations during this period show hard X-ray emission nearly cospatial with soft X-ray emission, in contrast with earlier times at which hard and soft X-rays come from different places.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)·P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft decision decoding, equivalent schemes that reduce the bit error probability are discussed.
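A worked check of the quoted high-SNR rule of thumb for a small systematic code, the (7,4) Hamming code with its known weight enumerator, using union bounds over a BPSK/AWGN channel; the numbers are illustrative only.

```python
import numpy as np
from scipy.stats import norm

N, K, d_H = 7, 4, 3
weights = {3: 7, 4: 7, 7: 1}          # nonzero-codeword weight spectrum of (7,4) Hamming
R = K / N

def Q(x):                             # Gaussian tail function
    return norm.sf(x)

for ebn0_db in (4, 6, 8):
    ebn0 = 10 ** (ebn0_db / 10)
    # Union bound on block error probability over BPSK/AWGN
    P_s = sum(A * Q(np.sqrt(2 * w * R * ebn0)) for w, A in weights.items())
    print(f"Eb/N0={ebn0_db} dB:  P_s≈{P_s:.2e}  (d_H/N)·P_s≈{d_H / N * P_s:.2e}")
```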
NASA Astrophysics Data System (ADS)
De Lorenzo, Danilo; De Momi, Elena; Beretta, Elisa; Cerveri, Pietro; Perona, Franco; Ferrigno, Giancarlo
2009-02-01
Computer Assisted Orthopaedic Surgery (CAOS) systems improve the results and the standardization of surgical interventions. Anatomical landmark and bone surface detection is needed both to register the surgical space with the pre-operative imaging space and to compute biomechanical parameters for prosthesis alignment. Surface point acquisition increases the intervention invasiveness and can be influenced by the interposition of the soft tissue layer (7-15 mm localization errors). This study is aimed at evaluating the accuracy of a custom-made A-mode ultrasound (US) system for non-invasive detection of anatomical landmarks and surfaces. A-mode solutions eliminate the need for US image segmentation, offer real-time signal processing, and require less invasive equipment. The system consists of an optically tracked single-transducer US probe, a pulser/receiver, an FPGA-based board responsible for logic control command generation and for real-time signal processing, and three custom-made boards (signal acquisition, blanking and synchronization). We propose a new calibration method for the US system. The experimental validation was then performed by measuring the length of known-shape polymethylmethacrylate boxes filled with pure water and by acquiring bone surface points on a bovine bone phantom covered with soft-tissue-mimicking materials. Measurement errors were computed through MR and CT image acquisitions of the phantom. Point acquisition on the bone surface with the US system demonstrated lower errors (1.2 mm) than standard pointer acquisition (4.2 mm).
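A minimal sketch of the A-mode principle the system relies on: the interface depth follows from the echo round-trip time and an assumed soft-tissue speed of sound. All values are illustrative.

```python
import numpy as np

c_tissue = 1540.0                     # m/s, conventional soft-tissue value
fs = 100e6                            # Hz, assumed ADC sampling rate

def depth_from_echo(rf_line):
    """Return depth (mm) of the strongest echo in a digitized RF line."""
    envelope = np.abs(rf_line)        # crude envelope (no Hilbert transform)
    i_peak = int(np.argmax(envelope))
    t_round_trip = i_peak / fs
    return 1e3 * c_tissue * t_round_trip / 2.0

rf = np.zeros(4096)
rf[1950] = 1.0                        # synthetic echo at sample 1950
print(f"depth ≈ {depth_from_echo(rf):.2f} mm")   # ≈ 15.0 mm
```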
Automated palpation for breast tissue discrimination based on viscoelastic biomechanical properties.
Tsukune, Mariko; Kobayashi, Yo; Miyashita, Tomoyuki; Fujie, G Masakatsu
2015-05-01
Accurate, noninvasive methods are sought for breast tumor detection and diagnosis. In particular, a need for noninvasive techniques that measure both the nonlinear elastic and viscoelastic properties of breast tissue has been identified. For diagnostic purposes, it is important to select a nonlinear viscoelastic model with a small number of parameters that highly correlate with histological structure. However, the combination of conventional viscoelastic models with nonlinear elastic models requires a large number of parameters. A nonlinear viscoelastic model of breast tissue based on a simple equation with few parameters was developed and tested. The nonlinear viscoelastic properties of soft tissues in porcine breast were measured experimentally using fresh ex vivo samples. Robotic palpation was used for measurements employed in a finite element model. These measurements were used to calculate nonlinear viscoelastic parameters for fat, fibroglandular breast parenchyma and muscle. The ability of these parameters to distinguish the tissue types was evaluated in a two-step statistical analysis that included Holm's pairwise test. The discrimination error rate of a set of parameters was evaluated by the Mahalanobis distance. Ex vivo testing in porcine breast revealed significant differences in the nonlinear viscoelastic parameters among combinations of three tissue types. The discrimination error rate was low among all tested combinations of three tissue types. Although tissue discrimination was not achieved using only a single nonlinear viscoelastic parameter, a set of four nonlinear viscoelastic parameters was able to reliably and accurately discriminate fat, breast fibroglandular tissue and muscle.
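A sketch of the discrimination step under stated assumptions: classify a four-parameter vector by its Mahalanobis distance to each tissue class, with class statistics estimated from synthetic stand-in data rather than the measured viscoelastic fits.

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

rng = np.random.default_rng(8)
classes = {"fat": 0.0, "fibroglandular": 2.0, "muscle": 4.0}
data = {name: rng.normal(mu, 0.7, size=(40, 4)) for name, mu in classes.items()}
# Per-class mean and inverse covariance, as Mahalanobis distance requires
stats_ = {name: (x.mean(0), np.linalg.inv(np.cov(x.T))) for name, x in data.items()}

def classify(v):
    return min(stats_, key=lambda n: mahalanobis(v, stats_[n][0], stats_[n][1]))

sample = rng.normal(2.0, 0.7, 4)          # unknown specimen's parameter vector
print("assigned tissue:", classify(sample))
```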
Using Digital Image Correlation to Characterize Local Strains on Vascular Tissue Specimens.
Zhou, Boran; Ravindran, Suraj; Ferdous, Jahid; Kidane, Addis; Sutton, Michael A; Shazly, Tarek
2016-01-24
Characterization of the mechanical behavior of biological and engineered soft tissues is a central component of fundamental biomedical research and product development. Stress-strain relationships are typically obtained from mechanical testing data to enable comparative assessment among samples and in some cases identification of constitutive mechanical properties. However, errors may be introduced through the use of average strain measures, as significant heterogeneity in the strain field may result from geometrical non-uniformity of the sample and stress concentrations induced by mounting/gripping of soft tissues within the test system. When strain field heterogeneity is significant, accurate assessment of the sample mechanical response requires measurement of local strains. This study demonstrates a novel biomechanical testing protocol for calculating local surface strains using a mechanical testing device coupled with a high resolution camera and a digital image correlation technique. A series of sample surface images are acquired and then analyzed to quantify the local surface strain of a vascular tissue specimen subjected to ramped uniaxial loading. This approach can improve accuracy in experimental vascular biomechanics and has potential for broader use among other native soft tissues, engineered soft tissues, and soft hydrogel/polymeric materials. In the video, we demonstrate how to set up the system components and perform a complete experiment on native vascular tissue.
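A short illustration of the post-processing such a protocol implies: local Green-Lagrange strains computed from the gradients of a (here synthetic) DIC displacement field.

```python
import numpy as np

y, x = np.mgrid[0:50, 0:50] * 0.1          # mm grid (0.1 mm spacing)
u_x = 0.02 * x + 0.005 * x * y / 5.0       # toy heterogeneous displacement field (mm)
u_y = -0.008 * y

dux_dy, dux_dx = np.gradient(u_x, 0.1)     # np.gradient returns d/drow, d/dcol
duy_dy, duy_dx = np.gradient(u_y, 0.1)

# Green-Lagrange strain components from the displacement gradients
E_xx = dux_dx + 0.5 * (dux_dx**2 + duy_dx**2)
E_yy = duy_dy + 0.5 * (dux_dy**2 + duy_dy**2)
E_xy = 0.5 * (dux_dy + duy_dx + dux_dx * dux_dy + duy_dx * duy_dy)
print(f"E_xx range: {E_xx.min():.4f} .. {E_xx.max():.4f}")
```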
Dynamic soft variable structure control of singular systems
NASA Astrophysics Data System (ADS)
Liu, Yunlong; Zhang, Caihong; Gao, Cunchen
2012-08-01
The dynamic soft variable structure control (VSC) of singular systems is discussed in this paper. The definition of soft VSC and the design of its controller modes are given. The stability of singular systems under dynamic soft VSC is analyzed. The dynamic soft variable structure controller is designed, and a concrete algorithm for the dynamic soft VSC is given. The dynamic soft VSC of singular systems, developed to intentionally preclude chattering, achieve high regulation rates, and shorten settling times, enhances the dynamic quality of the systems. The feasibility and validity of the proposed strategy are illustrated by a simulation example, and an outlook on its further development is presented.
Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.
2010-01-01
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
A device for high-throughput monitoring of degradation in soft tissue samples.
Tzeranis, D S; Panagiotopoulos, I; Gkouma, S; Kanakaris, G; Georgiou, N; Vaindirlis, N; Vasileiou, G; Neidlin, M; Gkousioudi, A; Spitas, V; Macheras, G A; Alexopoulos, L G
2018-06-06
This work describes the design and validation of a novel device, the High-Throughput Degradation Monitoring Device (HDD), for monitoring the degradation of 24 soft tissue samples over incubation periods of several days inside a cell culture incubator. The device quantifies sample degradation by monitoring its deformation induced by a static gravity load. Initial instrument design and experimental protocol development focused on quantifying cartilage degeneration. Characterization of measurement errors, caused mainly by thermal transients and by translating the instrument sensor, demonstrated that HDD can quantify sample degradation with <6 μm precision and <10 μm temperature-induced errors. HDD capabilities were evaluated in a pilot study that monitored the degradation of fresh ex vivo human cartilage samples by collagenase solutions over three days. HDD could robustly resolve the effects of collagenase concentrations as small as 0.5 mg/ml. Careful sample preparation resulted in measurements that did not suffer from donor-to-donor variation (coefficient of variation <70%). Due to its unique combination of sample throughput, measurement precision, temporal sampling and experimental versatility, HDD provides a novel biomechanics-based experimental platform for quantifying the effects of proteins (cytokines, growth factors, enzymes, antibodies) or small molecules on the degradation of soft tissues or tissue engineering constructs. Thereby, HDD can complement established tools and in vitro models in important applications including drug screening and biomaterial development.
Walden, Steven J; Evans, Sam L; Mulville, Jacqui
2017-01-01
The purpose of this study was to determine how the Vickers hardness (HV) of bone varies during soft tissue putrefaction, which has possible forensic applications, notably for determining the postmortem interval. Experimental porcine bone samples were decomposed in surface and burial deposition scenarios over a period of 6 months. Although the Vickers hardness varied widely, it was found that when transverse axial hardness was subtracted from longitudinal axial hardness, the difference showed correlations with three distinct phases of soft tissue putrefaction. The ratio of transverse axial hardness to longitudinal axial hardness showed a similar correlation. A difference of 10 or greater in HV with soft tissue present and signs of minimal decomposition was associated with a decomposition period of 250 cumulative cooling degree days or less. A difference of 10 (+/- standard error of the mean at a 95% confidence interval) or greater in HV associated with marked decomposition indicated a decomposition period of 1450 cumulative cooling degree days or more. A difference of -7 to +8 (+/- standard error of the mean at a 95% confidence interval) was thus associated with 250 to 1450 cumulative cooling degree days' decomposition. The ratio of transverse axial HV to longitudinal HV, ranging from 2.42 to 1.54, is a more reliable indicator in this context and is preferable to using negative integers. These differences may have potential as an indicator of postmortem interval and thus the time of body deposition in the forensic context.
Wiesinger, Florian; Bylund, Mikael; Yang, Jaewon; Kaushik, Sandeep; Shanbhag, Dattesh; Ahn, Sangtae; Jonsson, Joakim H; Lundman, Josef A; Hope, Thomas; Nyholm, Tufve; Larson, Peder; Cozzini, Cristina
2018-02-18
To describe a method for converting Zero TE (ZTE) MR images into X-ray attenuation information in the form of pseudo-CT images, and to demonstrate its performance for (1) attenuation correction (AC) in PET/MR and (2) dose planning in MR-guided radiation therapy planning (RTP). Proton density-weighted ZTE images were acquired as input for MR-based pseudo-CT conversion, providing (1) efficient capture of short-lived bone signals, (2) flat soft-tissue contrast, and (3) fast and robust 3D MR imaging. After bias correction and normalization, the images were segmented into bone, soft tissue, and air by means of thresholding and morphological refinements. Fixed Hounsfield replacement values were assigned for air (-1000 HU) and soft tissue (+42 HU), whereas continuous linear mapping was used for bone. The obtained ZTE-derived pseudo-CT images accurately resembled the true CT images (i.e., Dice coefficient for bone overlap of 0.73 ± 0.08 and mean absolute error of 123 ± 25 HU evaluated over the whole head, including errors from residual registration mismatches in the neck and mouth regions). The linear bone mapping accounted for bone density variations. Averaged across five patients, ZTE-based AC demonstrated a PET error of -0.04 ± 1.68% relative to CT-based AC. Similarly, for RTP assessed in eight patients, the absolute dose difference over the target volume was found to be 0.23 ± 0.42%. The described method enables MR to pseudo-CT image conversion for the head in an accurate, robust, and fast manner without relying on anatomical prior knowledge. Potential applications include PET/MR-AC and MR-guided RTP.
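A hedged sketch of the conversion logic: threshold a bias-corrected, normalized ZTE volume into air, soft tissue, and bone, assign the fixed HU values quoted above, and map bone linearly. The thresholds and the bone line are placeholders, and the paper's calibration and morphological refinements are omitted.

```python
import numpy as np

def zte_to_pseudo_ct(zte):
    """zte: normalized ZTE intensities in [0, 1]. Returns pseudo-CT in HU."""
    pct = np.full_like(zte, 42.0)          # default: soft tissue (+42 HU)
    pct[zte < 0.2] = -1000.0               # low signal: air (assumed threshold)
    bone = (zte >= 0.2) & (zte < 0.75)     # intermediate signal: bone (assumed)
    pct[bone] = 2000.0 - 2400.0 * zte[bone]   # assumed linear bone mapping:
    return pct                                # lower ZTE signal -> denser bone

zte = np.random.default_rng(9).uniform(0, 1, (4, 4, 4))
print(zte_to_pseudo_ct(zte)[0, 0])
```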
A Minimum Variance Algorithm for Overdetermined TOA Equations with an Altitude Constraint.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Romero, Louis A; Mason, John J.
We present a direct (non-iterative) method for solving for the location of a radio frequency (RF) emitter, or an RF navigation receiver, using four or more time of arrival (TOA) measurements and an assumed altitude above an ellipsoidal earth. Both the emitter tracking problem and the navigation application are governed by the same equations, but with slightly different interpretations of several variables. We treat the assumed altitude as a soft constraint, with a specified noise level, just as the TOA measurements are handled, with their respective noise levels. With 4 or more TOA measurements and the assumed altitude, the problem is overdetermined and is solved in the weighted least squares sense for the 4 unknowns, the 3-dimensional position and time. We call the new technique the TAQMV (TOA Altitude Quartic Minimum Variance) algorithm, and it achieves the minimum possible error variance for given levels of TOA and altitude estimate noise. The method algebraically produces four solutions: the least-squares solution, and potentially three other low-residual solutions, if they exist. In the lightly overdetermined cases where multiple local minima in the residual error surface are more likely to occur, this algebraic approach can produce all of the minima even when an iterative approach fails to converge. Algorithm performance in terms of solution error variance and divergence rate for baseline (iterative) and proposed approaches are given in tables.
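The paper's TAQMV solution is direct and algebraic; as a hedged illustration of the weighted least-squares problem it solves, the sketch below runs a generic Gauss-Newton iteration on four TOA measurements plus a soft altitude constraint. The spherical earth and all geometry/noise values are toy assumptions.

```python
import numpy as np

c, R_e = 299_792_458.0, 6.378e6
sensors = np.array([[0, 0, 6.6e6], [1e6, 0, 6.5e6],
                    [0, 1e6, 6.5e6], [7e5, 7e5, 6.55e6]])
u = np.array([0.05, 0.07, 1.0]); u /= np.linalg.norm(u)
p_true, t_true, h_assumed = (R_e + 1e4) * u, 0.0, 1e4     # emitter at 10 km altitude
toa = np.linalg.norm(sensors - p_true, axis=1) / c + t_true
sigma_toa, sigma_alt = 1e-8, 50.0                          # noise levels (s, m)

x = np.array([0.0, 0.0, R_e, 0.0])          # initial guess: (x, y, z, t)
for _ in range(10):                          # Gauss-Newton on the WLS problem
    d = np.linalg.norm(sensors - x[:3], axis=1)
    r = np.concatenate([(toa - d / c - x[3]) / sigma_toa,   # whitened residuals
                        [(h_assumed - (np.linalg.norm(x[:3]) - R_e)) / sigma_alt]])
    J_toa = np.hstack([-(x[:3] - sensors) / (c * d[:, None]),
                       -np.ones((4, 1))]) / sigma_toa
    J_alt = np.append(-x[:3] / np.linalg.norm(x[:3]), 0.0) / sigma_alt
    J = np.vstack([J_toa, J_alt])
    x = x + np.linalg.lstsq(J, -r, rcond=None)[0]
print(f"position error: {np.linalg.norm(x[:3] - p_true):.2e} m")
```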
Quality assurance of the international computerised 24 h dietary recall method (EPIC-Soft).
Crispim, Sandra P; Nicolas, Genevieve; Casagrande, Corinne; Knaze, Viktoria; Illner, Anne-Kathrin; Huybrechts, Inge; Slimani, Nadia
2014-02-01
The interview-administered 24 h dietary recall (24-HDR) EPIC-Soft® has a series of controls to guarantee the quality of dietary data across countries. These comprise all steps that are part of fieldwork preparation, data collection and data management; however, a complete characterisation of these quality controls is still lacking. The present paper describes in detail the quality controls applied in EPIC-Soft, which are, to a large extent, built on the basis of the EPIC-Soft error model and are present in three phases: (1) before, (2) during and (3) after the 24-HDR interviews. Quality controls for consistency and harmonisation are implemented before the interviews while preparing the seventy databases constituting an EPIC-Soft version (e.g. pre-defined and coded foods and recipes). During the interviews, EPIC-Soft uses a cognitive approach by helping the respondent to recall the dietary intake information in a stepwise manner and includes controls for consistency (e.g. probing questions) as well as for completeness of the collected data (e.g. system calculation for some unknown amounts). After the interviews, a series of controls can be applied by dietitians and data managers to further guarantee data quality. For example, the interview-specific 'note files' that were created to track any problems or missing information during the interviews can be checked to clarify the information initially provided. Overall, the quality controls employed in the EPIC-Soft methodology are not always perceivable, but prove to be of assistance for its overall standardisation and possibly for the accuracy of the collected data.
The Couples Emotion Rating Form: Psychometric Properties and Theoretical Associations
ERIC Educational Resources Information Center
Sanford, Keith
2007-01-01
The Couples Emotion Rating Form assesses 3 types of negative emotion that are salient during times of relationship conflict. Hard emotion includes feeling angry and aggravated, soft emotion includes feeling hurt and sad, and flat emotion includes feeling bored and indifferent. In Study 1, scales measuring hard and soft emotion were validated by…
Can Soft Drink Taxes Reduce Population Weight?
Fletcher, Jason M; Frisvold, David; Tefft, Nathan
2010-01-01
Soft drink consumption has been hypothesized as one of the major factors in the growing rates of obesity in the US. Nearly two-thirds of all states currently tax soft drinks, using excise taxes, sales taxes, or exclusions of soft drinks from the food exemptions to sales taxes, in order to reduce consumption of this product, raise revenue, and improve public health. In this paper, we evaluate the impact of changes in state soft drink taxes on body mass index (BMI), obesity, and overweight. Our results suggest that soft drink taxes influence BMI, but that the impact is small in magnitude.
Impact of time-of-flight PET on quantification errors in MR imaging-based attenuation correction.
Mehranian, Abolfazl; Zaidi, Habib
2015-04-01
Time-of-flight (TOF) PET/MR imaging is an emerging imaging technology with great capabilities offered by TOF to improve image quality and lesion detectability. We assessed, for the first time, the impact of TOF image reconstruction on PET quantification errors induced by MR imaging-based attenuation correction (MRAC) using simulation and clinical PET/CT studies. Standard 4-class attenuation maps were derived by segmentation of CT images of 27 patients undergoing PET/CT examinations into background air, lung, soft-tissue, and fat tissue classes, followed by the assignment of predefined attenuation coefficients to each class. For each patient, 4 PET images were reconstructed: non-TOF and TOF, each corrected for attenuation using both the reference CT-based attenuation correction and the resulting 4-class MRAC maps. The errors of the non-TOF and TOF MRAC reconstructions were evaluated relative to their respective CT-based attenuation correction reconstructions. The bias was evaluated locally, using volumes of interest (VOIs) defined on lesions and normal tissues, and globally, using CT-derived tissue classes containing all voxels in a given tissue. The impact of TOF on reducing the errors induced by metal-susceptibility and respiratory-phase mismatch artifacts was also evaluated using clinical and simulation studies. Our results show that TOF PET can remarkably reduce attenuation correction artifacts and quantification errors in the lungs and bone tissues. Using classwise analysis, it was found that the non-TOF MRAC method results in an error of -3.4% ± 11.5% in the lungs and -21.8% ± 2.9% in bones, whereas its TOF counterpart reduced the errors to -2.9% ± 7.1% and -15.3% ± 2.3%, respectively. The VOI-based analysis revealed that the non-TOF and TOF methods resulted in an average overestimation of 7.5% and 3.9% in or near lung lesions (n = 23) and underestimation of less than 5% for soft tissue and in or near bone lesions (n = 91). Simulation results showed that as TOF resolution improves, artifacts and quantification errors are substantially reduced. TOF PET substantially reduces artifacts and significantly improves the quantitative accuracy of standard MRAC methods. Therefore, MRAC should be less of a concern on future TOF PET/MR scanners with improved timing resolution.
ERIC Educational Resources Information Center
Chamorro-Premuzic, Tomas; Arteche, Adriane; Bremner, Andrew J.; Greven, Corina; Furnham, Adrian
2010-01-01
Three UK studies on the relationship between a purpose-built instrument to assess the importance and development of 15 "soft skills" are reported. "Study 1" (N = 444) identified strong latent components underlying these soft skills, such that differences "between-skills" were over-shadowed by differences…
Soft Drink Vending Machines in Schools: A Clear and Present Danger
ERIC Educational Resources Information Center
Price, James; Murnan, Judy; Moore, Bradene
2006-01-01
This paper examines the availability of soft drinks in schools ("pouring rights contracts") and its effects on the growing nutritional problems of American youth. Of special concern is the prevalence of overweight youth, which has been increasing at alarming rates. There has been a direct relationship found between soft drink consumption and…
The role of radiology in paediatric soft tissue sarcomas
van Rijn, R.; McHugh, K.
2008-01-01
Paediatric soft tissue sarcomas (STS) are a group of malignant tumours that originate from primitive mesenchymal tissue and account for 7% of all childhood tumours. Rhabdomyosarcomas (RMS) and undifferentiated sarcomas account for approximately 50% of soft tissue sarcomas in children and non-rhabdomyomatous soft tissue sarcomas (NRSTS) the remainder. The prognosis and biology of STS tumours vary greatly depending on the age of the patient, the primary site, tumour size, tumour invasiveness, histologic grade, depth of invasion, and extent of disease at diagnosis. Over recent years, there has been a marked improvement in survival rates in children and adolescents with soft tissue sarcoma, and ongoing international studies continue to aim to improve these survival rates whilst attempting to reduce the morbidity associated with treatment. Radiology plays a crucial role in the initial diagnosis and staging of STS, in the long term follow-up and in the assessment of many treatment related complications. We review the epidemiology, histology, clinical presentation, staging and prognosis of soft tissue sarcomas and discuss the role of radiology in their management. PMID:18442956
Rossi X-Ray Timing Explorer All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, Donald A.; Bradt, Hale V.; Levine, Alan M.
1999-07-01
The fourth unambiguously identified soft gamma repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15. Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19") annulus. We present two bursts from this source observed by the All-Sky Monitor (ASM) on the Rossi X-Ray Timing Explorer. We use the ASM data to further constrain the source location to a 5' long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3 arcmin of the supernova remnant G337.0-0.1. The probability that a supernova remnant would fall so close to the error box purely by chance is ~5%.
RXTE All-Sky Monitor Localization of SGR 1627-41
NASA Astrophysics Data System (ADS)
Smith, D. A.; Bradt, H. V.; Levine, A. M.
1999-09-01
The fourth unambiguously identified Soft Gamma Repeater (SGR), SGR 1627-41, was discovered with the BATSE instrument on 1998 June 15 (Kouveliotou et al. 1998). Interplanetary Network (IPN) measurements and BATSE data constrained the location of this new SGR to a 6° segment of a narrow (19″) annulus (Hurley et al. 1999; Woods et al. 1998). We report on two bursts from this source observed by the All-Sky Monitor (ASM) on RXTE. We use the ASM data to further constrain the source location to a 5′ long segment of the BATSE/IPN error box. The ASM/IPN error box lies within 0.3′ of the supernova remnant (SNR) G337.0-0.1. The probability that a SNR would fall so close to the error box purely by chance is ~5%.
Zhang, Lu; Hong, Xuezhi; Pang, Xiaodan; Ozolins, Oskars; Udalcovs, Aleksejs; Schatz, Richard; Guo, Changjian; Zhang, Junwei; Nordwall, Fredrik; Engenhardt, Klaus M; Westergren, Urban; Popov, Sergei; Jacobsen, Gunnar; Xiao, Shilin; Hu, Weisheng; Chen, Jiajia
2018-01-15
We experimentally demonstrate the transmission of a 200 Gbit/s discrete multitone (DMT) signal at the soft forward error correction limit in an intensity-modulation direct-detection system with a single C-band packaged distributed feedback laser and traveling-wave electroabsorption modulator (DFB-TWEAM), digital-to-analog converter and photodiode. The bit- and power-loaded DMT signal is transmitted over 1.6 km of standard single-mode fiber with a net rate of 166.7 Gbit/s, achieving an effective electrical spectral efficiency of 4.93 bit/s/Hz. Meanwhile, net rates of 174.2 Gbit/s and 179.5 Gbit/s are also demonstrated over 0.8 km of SSMF and in an optical back-to-back case, respectively. The characteristics of the packaged DFB-TWEAM are presented. The nonlinearity-aware digital signal processing algorithm for channel equalization is mathematically described; it improves the signal-to-noise ratio by up to 3.5 dB.
Single Event Effect Testing of the Micron MT46V128M8
NASA Technical Reports Server (NTRS)
Stansberry, Scott; Campola, Michael; Wilcox, Ted; Seidleck, Christina; Phan, Anthony
2017-01-01
The Micron MT46V128M8 was tested for single event effects (SEE) at the Texas A&M University Cyclotron Facility (TAMU) in June of 2017. Testing revealed a sensitivity to device hang-ups, classified as single event functional interrupts (SEFI), and possible soft data errors, classified as single event upsets (SEU).
Portrayals of branded soft drinks in popular American movies: a content analysis.
Cassady, Diana; Townsend, Marilyn; Bell, Robert A; Watnik, Mitchell
2006-03-09
This study examines the portrayals of soft drinks in popular American movies as a potential vehicle for global marketing and an indicator of covert product placement. We conducted a content analysis of America's top-ten grossing films from 1991 through 2000 that included portrayals of beverages (95 movies total). Coding reliabilities were assessed with Cohen's kappa, and exceeded 0.80. If there was at least one instance of branding for a beverage, the film was considered to have branded beverages. Fisher's exact test was used to determine if soft drink portrayals were related to audience rating or genre. Data on the amount of time soft drinks appeared onscreen were log transformed to satisfy the assumption of normality, and analyzed using a repeated measures ANOVA model. McNemar's test of agreement was used to test whether branded soft drinks are as likely to appear or to be actor-endorsed compared with other branded beverages. Rating was not associated with portrayals of branded soft drinks, but comedies were most likely to include a branded soft drink (p = 0.0136). Branded soft drinks appeared more commonly than other branded non-alcoholic beverages (p = 0.0001), branded beer (p = 0.0004), and other branded alcoholic beverages (p = 0.0006). Actors consumed branded soft drinks in five times the number of movies compared with their consumption of other branded non-alcoholic beverages (p = 0.0126). About half the revenue from the films with portrayals of branded soft drinks comes from film sales outside the U.S. The frequent appearance of branded soft drinks provides indirect evidence that product placement is a common practice for American-produced films shown in the U.S. and other countries.
NASA Technical Reports Server (NTRS)
Marshall, Cheryl J.; Marshall, Paul W.
1999-01-01
This portion of the Short Course is divided into two segments to separately address the two major proton-related effects confronting satellite designers: ionization effects and displacement damage effects. While both of these topics are deeply rooted in "traditional" descriptions of space radiation effects, there are several factors at play to cause renewed concern for satellite systems being designed today. For example, emphasis on Commercial Off-The-Shelf (COTS) technologies in both commercial and government systems increases both Total Ionizing Dose (TID) and Single Event Effect (SEE) concerns. Scaling trends exacerbate the problems, especially with regard to SEEs where protons can dominate soft error rates and even cause destructive failure. In addition, proton-induced displacement damage at fluences encountered in natural space environments can cause degradation in modern bipolar circuitry as well as in many emerging electronic and opto-electronic technologies.
Estimating Single-Event Logic Cross Sections in Advanced Technologies
NASA Astrophysics Data System (ADS)
Harrington, R. C.; Kauppila, J. S.; Warren, K. M.; Chen, Y. P.; Maharrey, J. A.; Haeffner, T. D.; Loveless, T. D.; Bhuva, B. L.; Bounasser, M.; Lilja, K.; Massengill, L. W.
2017-08-01
Reliable estimation of logic single-event upset (SEU) cross section is becoming increasingly important for predicting the overall soft error rate. As technology scales and single-event transient (SET) pulse widths shrink to widths on the order of the setup-and-hold time of flip-flops, the probability of latching an SET as an SEU must be reevaluated. In this paper, previous assumptions about the relationship of SET pulsewidth to the probability of latching an SET are reconsidered and a model for transient latching probability has been developed for advanced technologies. A method using the improved transient latching probability and SET data is used to predict logic SEU cross section. The presented model has been used to estimate combinational logic SEU cross sections in 32-nm partially depleted silicon-on-insulator (SOI) technology given experimental heavy-ion SET data. Experimental SEU data show good agreement with the model presented in this paper.
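A toy rendering of the latching-probability idea, using a common first-order model in which an SET of width w arriving uniformly within a clock period T is captured only if it overlaps the flip-flop's setup-and-hold window W, giving P ≈ max(0, w - W)/T. The model form and all numbers are assumptions, not the paper's calibrated expression.

```python
import numpy as np

T = 1.0e-9          # clock period (1 GHz), assumed
W = 50e-12          # setup-and-hold window, assumed

def p_latch(w):
    """First-order probability that an SET of width w is latched as an SEU."""
    return np.clip((w - W) / T, 0.0, 1.0)

# Weight a measured SET width spectrum by the latching probability (toy data)
set_widths = np.array([20e-12, 60e-12, 150e-12, 400e-12])     # s
set_cross_sections = np.array([4.0, 3.0, 2.0, 0.5]) * 1e-10   # cm^2/bit
seu_cross_section = np.sum(set_cross_sections * p_latch(set_widths))
print(f"estimated logic SEU cross section ≈ {seu_cross_section:.2e} cm^2/bit")
```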
Estimation of Fetal Weight during Labor: Still a Challenge.
Barros, Joana Goulão; Reis, Inês; Pereira, Isabel; Clode, Nuno; Graça, Luís M
2016-01-01
To evaluate the accuracy of fetal weight prediction by ultrasonography during labor, employing a formula that includes the linear measurements of femur length (FL) and mid-thigh soft-tissue thickness (STT). We conducted a prospective study involving singleton uncomplicated term pregnancies within 48 hours of delivery. Only pregnancies with a cephalic fetus admitted to the labor ward for elective cesarean section, induction of labor or spontaneous labor were included. We excluded all non-Caucasian women, those previously diagnosed with gestational diabetes, and those with evidence of ruptured membranes. Fetal weight estimates were calculated using a previously proposed formula: estimated fetal weight = 1687.47 + (54.1 x FL) + (76.68 x STT). The relationship between actual birth weight and estimated fetal weight was analyzed using Pearson's correlation. The formula's performance was assessed by calculating the signed and absolute errors. Mean weight difference and signed percentage error were calculated for birth weight divided into three subgroups: < 3000 g; 3000-4000 g; and > 4000 g. We included 145 cases for analysis and found a significant, yet weak, linear relationship between birth weight and estimated fetal weight (p < 0.001; R² = 0.197), with an absolute mean error of 10.6%. The lowest mean percentage error (0.3%) corresponded to the subgroup with birth weight between 3000 g and 4000 g. This study demonstrates a poor correlation between actual birth weight and the estimated fetal weight using a formula based on femur length and mid-thigh soft-tissue thickness, both linear parameters. Although avoidance of circumferential ultrasound measurements might prove to be beneficial, a fetal weight estimation formula that is both accurate and simple to perform has yet to be found.
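The quoted formula is straightforward to evaluate; the sketch below (with hypothetical measurements, since the abstract does not state units) also computes the signed percentage error used to assess performance.

```python
def estimated_fetal_weight(fl: float, stt: float) -> float:
    """Formula quoted in the abstract: EFW = 1687.47 + 54.1*FL + 76.68*STT
    (units of FL and STT assumed to match the original study)."""
    return 1687.47 + 54.1 * fl + 76.68 * stt

birth_weight_g = 3400.0                           # hypothetical actual birth weight (g)
efw_g = estimated_fetal_weight(fl=7.4, stt=16.0)  # hypothetical measurements
signed_pct_error = 100.0 * (efw_g - birth_weight_g) / birth_weight_g
print(f"EFW = {efw_g:.0f} g, signed error = {signed_pct_error:+.1f}%")
```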
Obesity does not affect survival outcomes in extremity soft tissue sarcoma.
Alamanda, Vignesh K; Moore, David C; Song, Yanna; Schwartz, Herbert S; Holt, Ginger E
2014-09-01
Obesity is a growing epidemic and has been associated with an increased frequency of complications after various surgical procedures. Studies also have shown adipose tissue to promote a microenvironment favorable for tumor growth. However, the relationship between obesity and the prognosis of soft tissue sarcomas has yet to be evaluated. We sought to assess (1) whether obesity affects survival outcomes (local recurrence, distant metastasis, and death attributable to disease) in patients with extremity soft tissue sarcomas; and (2) whether obesity affected wound healing and other surgical complications after treatment. A BMI of 30 kg/m² or greater was used to define obesity. Querying our prospective database between 2001 and 2008, we identified 397 patients for the study; 154 were obese and 243 were not obese. Mean followup was 4.5 years (SD, 3.1 years) in the obese group and 3.9 years (SD, 3.2 years) in the nonobese group; the group with a BMI of 30 kg/m² or greater had a higher proportion of patients with followups of at least 2 years compared with the group with a BMI less than 30 kg/m² (76% versus 62%). Outcomes, including local recurrence, distant metastasis, and overall survival, were analyzed after patients were stratified by BMI. Multivariable survival models were used to identify independent predictors of survival outcomes. The Wilcoxon rank sum test was used to compare continuous variables. Based on the accrual interval of 8 years, the additional followup of 5 years after data collection, and the median survival time of 3 years for the patients with a BMI less than 30 kg/m², we were able to detect true median survival times of 2.2 years or less in the patients with a BMI of 30 kg/m² or greater, with 80% power and a type I error rate of 0.05. Patients who were obese had similar survival outcomes and wound complication rates when compared with their nonobese counterparts. Patients who were obese were more likely to have lower-grade tumors (31% versus 20%; p = 0.021) and additional comorbidities including diabetes mellitus (26% versus 7%; p < 0.001), hypertension (63% versus 38%; p < 0.001), and smoking (49% versus 37%; p = 0.027). Regression analysis confirmed that even after accounting for certain tumor characteristics and comorbidities, obesity did not serve as an independent risk factor affecting survival outcomes. Although the prevalence of obesity continues to increase and leads to many negative health consequences, it does not appear to adversely affect survival, local recurrence, or wound complication rates for patients with extremity soft tissue sarcomas. Level III, therapeutic study. See the Instructions for Authors for a complete description of levels of evidence.
USDA-ARS?s Scientific Manuscript database
This study evaluated the potential of giving formulated feed to juvenile lake sturgeon (Acipenser fulvescens) and determined the optimal feeding rate of a soft-moist feed on the growth performance and whole-body composition of this fish. Six feeding rates (% body weight per day: % BW/d) of a soft-mo...
Total mesorectal excision training in soft cadaver: feasibility and clinical application.
Tantiphlachiva, Kasaya; Suansawan, Channarong
2006-09-01
The major problem in the treatment of rectal cancer is local recurrence. After the introduction of total mesorectal excision (TME), the recurrence rate decreased from 100% to around 10%. The purpose of the present study was to evaluate the quality of organ and tissue plane preservation in soft cadavers and to assess the feasibility of performing the procedure (mobilization of the colon and rectum, total mesorectal excision and stapler anastomosis) in soft cadavers. Colorectal Division, Department of Surgery, and Surgical Training Center, Department of Anatomy, Faculty of Medicine, Chulalongkorn University. Prospective descriptive study. Seven soft cadavers were used for total mesorectal excision (TME) training. The procedures were performed by 21 participants (1 soft cadaver for every 3 participants) under the supervision of experienced colorectal surgeons. Success, satisfaction in performing the procedure, and the quality of organ preservation were evaluated using standardized questionnaires. Participants were satisfied with TME training in soft cadavers (mean 8.24-8.71) and rated the soft cadavers as good in terms of internal organ and tissue plane preservation (mean 7.19-8.19) (0 = extremely unsatisfied, 10 = extremely satisfied). Training of TME in soft cadavers is feasible. The similarity in tissue quality (texture, consistency, color) of the preserved organs to that of the living, and the good feel of performing the procedure, help trainees better understand the techniques and improve their skills.
Epidemiologic study of soft tissue rheumatism in Shantou and Taiyuan, China.
Zeng, Qing-yu; Zang, Chang-hai; Lin, Ling; Chen, Su-biao; Li, Xiao-feng; Xiao, Zheng-yu; Dong, Hai-yuan; Zhang, Ai-lian; Chen, Ren
2010-08-05
Soft tissue rheumatism is a group of common rheumatic disorders reported in many countries. To investigate the prevalence rate of soft tissue rheumatism in different populations in China, we carried out a population study in a rural area of Shantou and an urban area of Taiyuan. Samples of 3915 adults in an urban area of Taiyuan, Shanxi Province, and 2350 adults in a rural area of Shantou, Guangdong Province, were surveyed. The modified International League of Associations for Rheumatology (ILAR)-Asia Pacific League of Associations for Rheumatology (APLAR) Community Oriented Program for Control of Rheumatic Diseases (COPCORD) core questionnaire was used as the screening tool. All positive responders were then examined by rheumatologists. The prevalence rate of soft tissue rheumatism was 2.0% in Taiyuan and 5.3% in Shantou. Rotator cuff (shoulder) tendinitis, adhesive capsulitis (frozen shoulder), lateral epicondylitis (tennis elbow), and digital flexor tenosynovitis (trigger finger) were the most commonly seen forms of soft tissue rheumatism in both areas. Metatarsalgia, plantar fasciitis, and De Quervain's tenosynovitis were more commonly seen in Shantou than in Taiyuan. Only 1 case of fibromyalgia was found in Taiyuan and 2 cases in Shantou. The prevalence of soft tissue rheumatism varied with age, sex and occupation. Soft tissue rheumatism is common in Taiyuan and Shantou, China. Its prevalence differed considerably with geographic, environmental, and socioeconomic conditions, and varied with age, sex, and occupation. The prevalence of fibromyalgia was low in the present survey.
ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14
NASA Technical Reports Server (NTRS)
Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.
1997-01-01
The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 × 10^-14 ergs/sq cm/s. The closest source is approximately 3 arcminutes away and has an estimated unabsorbed flux of 1.5 × 10^-12 ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 degrees away. For these reasons, we believe that the first error box is more likely to be the correct one.
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V
2017-03-01
A mechanistic model-based soft sensor is developed and validated for 550 L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor comprises a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed, using standard, available on-line measurements. This parameter estimation block is coupled to a mechanistic process model, which solves the current states of biomass, product, substrate, dissolved oxygen and mass, as well as other process parameters including kLa, viscosity and the partial pressure of CO2. State estimation at this scale requires a robust mass model including evaporation, a factor not often considered at smaller scales of operation. The model is developed using a historical data set of 11 batches from the fermentation pilot plant (550 L) at Novozymes A/S. The model is then implemented on-line in 550 L fermentation processes at Novozymes A/S in order to validate the state estimator on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared error (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches indicates a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data, and a parameter estimation uncertainty analysis is also carried out. The application of this on-line state estimator allows for on-line monitoring of pilot-scale batches, including real-time estimates of multiple parameters that cannot be monitored on-line directly. The successful application of a soft sensor at this scale allows for improved process monitoring, as well as opening up further possibilities for on-line control algorithms utilizing these on-line model outputs. Biotechnol. Bioeng. 2017;114: 589-599. © 2016 Wiley Periodicals, Inc.
NASA Technical Reports Server (NTRS)
Spector, E.; LeBlanc, A.; Shackelford, L.
1995-01-01
This study reports on the short-term in vivo precision and absolute measurements of three combinations of whole-body scan modes and analysis software using a Hologic QDR 2000 dual-energy X-ray densitometer. A group of 21 normal, healthy volunteers (11 male and 10 female) were scanned six times, receiving one pencil-beam and one array whole-body scan on three occasions approximately 1 week apart. The following combinations of scan modes and analysis software were used: pencil-beam scans analyzed with Hologic's standard whole-body software (PB scans); the same pencil-beam analyzed with Hologic's newer "enhanced" software (EPB scans); and array scans analyzed with the enhanced software (EA scans). Precision values (% coefficient of variation, %CV) were calculated for whole-body and regional bone mineral content (BMC), bone mineral density (BMD), fat mass, lean mass, %fat and total mass. In general, there was no significant difference among the three scan types with respect to short-term precision of BMD and only slight differences in the precision of BMC. Precision of BMC and BMD for all three scan types was excellent: < 1% CV for whole-body values, with most regional values in the 1%-2% range. Pencil-beam scans demonstrated significantly better soft tissue precision than did array scans. Precision errors for whole-body lean mass were: 0.9% (PB), 1.1% (EPB) and 1.9% (EA). Precision errors for whole-body fat mass were: 1.7% (PB), 2.4% (EPB) and 5.6% (EA). EPB precision errors were slightly higher than PB precision errors for lean, fat and %fat measurements of all regions except the head, although these differences were significant only for the fat and % fat of the arms and legs. In addition EPB precision values exhibited greater individual variability than PB precision values. Finally, absolute values of bone and soft tissue were compared among the three combinations of scan and analysis modes. BMC, BMD, fat mass, %fat and lean mass were significantly different between PB scans and either of the EPB or EA scans. Differences were as large as 20%-25% for certain regional fat and BMD measurements. Additional work may be needed to examine the relative accuracy of the scan mode/software combinations and to identify reasons for the differences in soft tissue precision with the array whole-body scan mode.
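The short-term precision metric used here, percent coefficient of variation across repeat scans, can be sketched as follows; the three lean-mass values are invented for illustration.

```python
import numpy as np

def percent_cv(repeat_scans) -> float:
    """%CV for one subject's repeated measurements: 100 * SD / mean
    (a sketch of the short-term precision metric; pooling across the
    21 subjects would typically use a root-mean-square of per-subject CVs)."""
    x = np.asarray(repeat_scans, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

print(f"%CV = {percent_cv([55.2, 55.8, 55.5]):.2f}")   # hypothetical lean mass (kg)
```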
Simultaneous Control of Error Rates in fMRI Data Analysis
Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David
2015-01-01
The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate, and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulated (global) Type I error rate is also small. This solution is achieved by employing the Likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations showing that the likelihood approach is viable, leading to 'cleaner' looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive-control-related activation in the prefrontal cortex of the human brain. PMID:26272730
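A minimal sketch of the voxel-wise likelihood-ratio idea, under assumed simple normal hypotheses and an illustrative evidence threshold (the paper's actual model and thresholds may differ):

```python
import numpy as np
from scipy.stats import norm

def voxel_likelihood_ratio(y, mu0=0.0, mu1=1.0, sigma=1.0) -> float:
    """Likelihood ratio L(H1)/L(H0) for one voxel's effect samples under two
    simple normal hypotheses; both error rates shrink as samples accumulate."""
    return float(np.exp(norm.logpdf(y, mu1, sigma).sum()
                        - norm.logpdf(y, mu0, sigma).sum()))

rng = np.random.default_rng(0)
y = rng.normal(0.8, 1.0, size=20)            # hypothetical voxel effect samples
lr = voxel_likelihood_ratio(y)
print(f"LR = {lr:.1f} -> {'strong evidence for H1' if lr >= 32 else 'weak evidence'}")
```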
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors.
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-08-24
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design.
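The moving least squares decoupling step can be sketched as a locally weighted linear fit from field readings to forces; the Gaussian kernel, its bandwidth, and the synthetic calibration data below are assumptions for illustration.

```python
import numpy as np

def mls_predict(query, X, Y, h=0.5):
    """Moving least squares sketch: fit a linear map from magnetic-field
    readings X to tri-axis forces Y, weighting calibration samples near the
    query point (bandwidth h is a hypothetical tuning parameter)."""
    w = np.exp(-np.sum((X - query) ** 2, axis=1) / (2.0 * h ** 2))
    A = np.hstack([X, np.ones((len(X), 1))])        # linear basis plus offset
    sw = np.sqrt(w)[:, None]                        # weighted least squares scaling
    coef, *_ = np.linalg.lstsq(A * sw, Y * sw, rcond=None)
    return np.append(query, 1.0) @ coef

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (200, 3))                # synthetic field readings
Y = X @ np.array([[2.0, 0.1, 0.0],                  # synthetic coupled response
                  [0.1, 2.0, 0.0],
                  [0.0, 0.0, 1.5]])
print(mls_predict(np.array([0.2, -0.1, 0.3]), X, Y))  # decoupled force estimate
```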
Design Methodology for Magnetic Field-Based Soft Tri-Axis Tactile Sensors
Wang, Hongbo; de Boer, Greg; Kow, Junwai; Alazmani, Ali; Ghajari, Mazdak; Hewson, Robert; Culmer, Peter
2016-01-01
Tactile sensors are essential if robots are to safely interact with the external world and to dexterously manipulate objects. Current tactile sensors have limitations restricting their use, notably being too fragile or having limited performance. Magnetic field-based soft tactile sensors offer a potential improvement, being durable, low cost, accurate and high bandwidth, but they are relatively undeveloped because of the complexities involved in design and calibration. This paper presents a general design methodology for magnetic field-based three-axis soft tactile sensors, enabling researchers to easily develop specific tactile sensors for a variety of applications. All aspects (design, fabrication, calibration and evaluation) of the development of tri-axis soft tactile sensors are presented and discussed. A moving least square approach is used to decouple and convert the magnetic field signal to force output to eliminate non-linearity and cross-talk effects. A case study of a tactile sensor prototype, MagOne, was developed. This achieved a resolution of 1.42 mN in normal force measurement (0.71 mN in shear force), good output repeatability and has a maximum hysteresis error of 3.4%. These results outperform comparable sensors reported previously, highlighting the efficacy of our methodology for sensor design. PMID:27563908
ERIC Educational Resources Information Center
Brill, Robert T.; Gilfoil, David M.; Doll, Kristen
2014-01-01
Academic researchers have often touted the growing importance of "soft skills" for modern day business leaders, especially leadership and communication skills. Despite this growing interest and attention, relatively little work has been done to develop and validate tools to assess soft skills. Forty graduate students from nine MBA…
Can Soft Drink Taxes Reduce Population Weight?
Fletcher, Jason M.; Frisvold, David
2009-01-01
Soft drink consumption has been hypothesized as one of the major factors in the growing rates of obesity in the US. Nearly two-thirds of all states currently tax soft drinks, using excise taxes, sales taxes, or special exceptions to food exemptions from sales taxes, in order to reduce consumption of this product, raise revenue, and improve public health. In this paper, we evaluate the impact of changes in state soft drink taxes on body mass index (BMI), obesity, and overweight. Our results suggest that soft drink taxes influence BMI, but that the impact is small in magnitude. PMID:20657817
Neurological soft signs in individuals with schizotypal personality features.
Chan, Raymond C K; Wang, Ya; Zhao, Qing; Yan, Chao; Xu, Ting; Gong, Qi-Yong; Manschreck, Theo C
2010-09-01
The current study examined the prevalence of neurological soft signs and their relationships with schizotypal traits in individuals with psychometrically defined schizotypal personality disorder (SPD) features. Sixty-four individuals with SPD-proneness and 51 without SPD-proneness were recruited. The soft signs subscales of the Cambridge Neurological Inventory and the Schizotypal Personality Questionnaire (SPQ) were administered to all participants. The SPD-prone participants demonstrated a significantly higher prevalence of soft signs than those without SPD-proneness. SPQ subscales were significantly associated with ratings of motor coordination, sensory integration and total soft signs. These findings suggest that neurological soft signs are trait markers of schizophrenia.
Akbarzadeh, A; Ay, M R; Ahmadian, A; Alam, N Riahi; Zaidi, H
2013-02-01
Hybrid PET/MRI presents many advantages in comparison with its counterpart PET/CT in terms of improved soft-tissue contrast, decrease in radiation exposure, and truly simultaneous and multi-parametric imaging capabilities. However, the lack of well-established methodology for MR-based attenuation correction is hampering further development and wider acceptance of this technology. We assess the impact of ignoring bone attenuation and using different tissue classes for generation of the attenuation map on the accuracy of attenuation correction of PET data. This work was performed using simulation studies based on the XCAT phantom and clinical input data. For the latter, PET and CT images of patients were used as input for the analytic simulation model using realistic activity distributions where CT-based attenuation correction was utilized as reference for comparison. For both phantom and clinical studies, the reference attenuation map was classified into various numbers of tissue classes to produce three (air, soft tissue and lung), four (air, lungs, soft tissue and cortical bones) and five (air, lungs, soft tissue, cortical bones and spongeous bones) class attenuation maps. The phantom studies demonstrated that ignoring bone increases the relative error by up to 6.8% in the body and up to 31.0% for bony regions. Likewise, the simulated clinical studies showed that the mean relative error reached 15% for lesions located in the body and 30.7% for lesions located in bones, when neglecting bones. These results demonstrate an underestimation of about 30% of tracer uptake when neglecting bone, which in turn imposes substantial loss of quantitative accuracy for PET images produced by hybrid PET/MRI systems. Considering bones in the attenuation map will considerably improve the accuracy of MR-guided attenuation correction in hybrid PET/MR to enable quantitative PET imaging on hybrid PET/MR technologies.
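The tissue-class construction can be illustrated by thresholding a reference μ-map into 3-, 4-, or 5-class versions; the thresholds and 511 keV attenuation coefficients below are assumed round numbers, not the study's values.

```python
import numpy as np

# Hypothetical linear attenuation coefficients at 511 keV (cm^-1)
MU = {"air": 0.0, "lung": 0.025, "soft": 0.096, "spongeous": 0.110, "cortical": 0.151}

def class_map(mu_ref, n_classes=4):
    """Collapse a reference mu-map into n tissue classes (thresholds assumed)."""
    out = np.full_like(mu_ref, MU["soft"])
    out[mu_ref < 0.005] = MU["air"]
    out[(mu_ref >= 0.005) & (mu_ref < 0.060)] = MU["lung"]
    if n_classes >= 4:
        out[mu_ref >= 0.120] = MU["cortical"]
    if n_classes >= 5:
        out[(mu_ref >= 0.100) & (mu_ref < 0.120)] = MU["spongeous"]
    return out

mu_ref = np.array([0.000, 0.020, 0.096, 0.110, 0.160])   # hypothetical voxels
print(class_map(mu_ref, 3))   # bone voxels collapse to soft tissue -> underestimation
print(class_map(mu_ref, 5))   # bone classes retained
```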
Lee, Dae-Hee; Park, Sung-Chul; Park, Hyung-Joon; Han, Seung-Beom
2016-12-01
Open-wedge high tibial osteotomy (HTO) cannot always accurately correct limb alignment, resulting in under- or over-correction. This study assessed the relationship between soft tissue laxity of the knee joint and alignment correction in open-wedge HTO. This prospective study involved 85 patients (86 knees) undergoing open-wedge HTO for primary medial osteoarthritis. The mechanical axis (MA), weight-bearing line (WBL) ratio, and joint line convergence angle (JLCA) were measured on radiographs preoperatively and after 6 months, and the differences between the pre- and post-surgery values were calculated. Post-operative WBL ratios of 57-67% were classified as acceptable correction; WBL ratios < 57% and > 67% were classified as under- and over-corrections, respectively. Preoperative JLCA correlated positively with the differences in MA (r = 0.358, P = 0.001) and WBL ratio (P = 0.003). The difference in JLCA showed a stronger correlation than preoperative JLCA with the differences in MA (P < 0.001) and WBL ratio (P < 0.001), and was the only predictor of both the difference in MA (P < 0.001) and the difference in WBL ratio (P < 0.001). The difference between pre- and post-operative JLCA differed significantly between the under-correction, acceptable-correction, and over-correction groups (P = 0.033); preoperative JLCA, however, did not differ significantly between the three groups. Neither preoperative JLCA nor the difference in JLCA correlated with the change in posterior slope. The preoperative degree of soft tissue laxity in the knee joint was related to the degree of alignment correction, but not to alignment correction error, in open-wedge HTO. The change in soft tissue laxity around the knee from before to after open-wedge HTO correlated with both the correction amount and the correction error. Therefore, a very large change in JLCA from before to after open-wedge osteotomy may be due to an overly large reduction in JLCA following osteotomy, suggesting alignment over-correction during surgery. Level of evidence: II.
3D microwave tomography of the breast using prior anatomical information
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golnabi, Amir H., E-mail: golnabia@montclair.edu; Meaney, Paul M.; Paulsen, Keith D.
2016-04-15
Purpose: The authors have developed a new 3D breast image reconstruction technique that utilizes the soft tissue spatial resolution of magnetic resonance imaging (MRI) and integrates the dielectric property differentiation from microwave imaging to produce a dual-modality approach, with the goal of augmenting the specificity of MR imaging, possibly without the need for nonspecific contrast agents. The integration is performed through the application of a soft prior regularization, which imports segmented geometric meshes generated from MR exams and uses them to constrain the microwave tomography algorithm to recover nearly uniform property distributions within segmented regions, with sharp delineation between these internal subzones. Methods: Previous investigations have demonstrated that this approach is effective in 2D simulation and phantom experiments and also in clinical exams. The current study extends the algorithm to 3D and provides a thorough analysis of the sensitivity and robustness to misalignment errors in size and location between the spatial prior information and the actual data. Results: Image results in 3D were not strongly dependent on reconstruction mesh density, and changes of less than 30% in recovered property values arose from variations of more than 125% in target region size, an outcome more robust than in 2D. Similarly, changes of less than 13% occurred in the 3D image results from variations in target location of nearly 90% of the inclusion size. Permittivity and conductivity errors were about 5 times and 2 times smaller, respectively, with the 3D spatial prior algorithm in actual phantom experiments than those which occurred without priors. Conclusions: The presented study confirms that the incorporation of structural information in the form of a soft constraint can considerably improve the accuracy of the property estimates in predefined regions of interest. These findings are encouraging and establish a strong foundation for using the soft prior technique in clinical studies, where their microwave imaging system and MRI can simultaneously collect breast exam data in patients.
Turaga, Kiran K.; Beasley, Georgia M.; Kane, John M.; Delman, Keith A.; Grobmyer, Stephen R.; Gonzalez, Ricardo J.; Letson, G. Douglas; Cheong, David; Tyler, Douglas S.; Zager, Jonathan S.
2015-01-01
Objective To demonstrate the efficacy of isolated limb infusion (ILI) in limb preservation for patients with locally advanced soft-tissue sarcomas and nonmelanoma cutaneous malignant neoplasms. Background Locally advanced nonmelanoma cutaneous and soft-tissue malignant neoplasms, including soft-tissue sarcomas of the extremities, can pose significant treatment challenges. We report our experience, including responses and limb preservation rates, using ILI in cutaneous and soft-tissue malignant neoplasms. Methods We identified 22 patients with cutaneous and soft-tissue malignant neoplasms who underwent 26 ILIs with melphalan and actinomycin from January 1, 2004, through December 31, 2009, at 5 institutions. Outcome measures included limb preservation and in-field response rates. Toxicity was measured using the Wieberdink scale and serum creatine phosphokinase levels. Results The median age was 70 years (range, 19-92 years), and 12 patients (55%) were women. Fourteen patients (64%) had sarcomas, 7 (32%) had Merkel cell carcinoma, and 1 (5%) had squamous cell carcinoma. The median length of stay was 5.5 days (interquartile range, 4-8 days). Twenty-five of the 26 ILIs (96%) resulted in Wieberdink grade III or less toxicity, and 1 patient (4%) developed grade IV toxicity. The median serum creatine phosphokinase level was 127 U/L for upper extremity ILIs and 93 U/L for lower extremity ILIs. Nineteen of 22 patients (86%) underwent successful limb preservation. The 3-month in-field response rate was 79% (21% complete and 58% partial), and the median follow-up was 8.6 months (range, 1-63 months). Five patients underwent resection of disease after an ILI, of whom 80% were disease free at a median of 8.6 months. Conclusions Isolated limb infusion provides an attractive alternative therapy for regional disease control and limb preservation in patients with limb-threatening cutaneous and soft-tissue malignant neoplasms. Short-term response rates appear encouraging, but the durability of response is unknown. PMID:21768436
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
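The last part's error analysis rests on a union bound over the Euclidean weight distribution; a sketch, with an invented weight distribution and noise level, is:

```python
import math

def q_func(x: float) -> float:
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def union_bound(weight_dist, n0: float) -> float:
    """Union bound on soft-decision ML decoding error over AWGN:
    sum over squared Euclidean distances d2 of A_d2 * Q(sqrt(d2 / (2*N0)))."""
    return sum(a * q_func(math.sqrt(d2 / (2.0 * n0)))
               for d2, a in weight_dist.items())

# Hypothetical Euclidean weight distribution {d^2: multiplicity}
weights = {4.0: 12, 8.0: 64, 12.0: 230}
print(f"P(codeword error) <= {union_bound(weights, n0=0.25):.3e}")
```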
Roper, D Keith; Berry, Keith R; Dunklin, Jeremy R; Chambers, Caitlyn; Bejugam, Vinith; Forcherio, Gregory T; Lanier, Megan
2018-06-12
Embedding soft matter with nanoparticles (NPs) can provide electromagnetic tunability at sub-micron scales for a growing number of applications in healthcare, sustainable energy, and chemical processing. However, the use of NP-embedded soft material in temperature-sensitive applications has been constrained by difficulties in validating the prediction of rates for energy dissipation from thermally insulating to conducting behavior. This work improved the embedment of monodisperse NPs to stably decrease the inter-NP spacings in polydimethylsiloxane (PDMS) to nano-scale distances. Lumped-parameter and finite element analyses were refined to apportion the effects of the structure and composition of the NP-embedded soft polymer on the rates for conductive, convective, and radiative heat dissipation. These advances allowed for the rational selection of PDMS size and NP composition to optimize measured rates of internal (conductive) and external (convective and radiative) heat dissipation. Stably reducing the distance between monodisperse NPs to nano-scale intervals increased the overall heat dissipation rate by up to 29%. Refined fabrication of NP-embedded polymer enabled the tunability of the dynamic thermal response (the ratio of internal to external dissipation rate) by a factor of 3.1 to achieve a value of 0.091, the largest reported to date. Heat dissipation rates simulated a priori were consistent with 130 μm resolution thermal images across 2- to 15-fold changes in the geometry and composition of NP-PDMS. The Nusselt number was observed to increase with the fourth root of the Rayleigh number across thermally insulative and conductive regimes, further validating the approach. These developments support the model-informed design of soft media embedded with nano-scale-spaced NPs to optimize the heat dissipation rates for evolving temperature-sensitive diagnostic and therapeutic modalities, as well as emerging uses in flexible bioelectronics, cell and tissue culture, and solar-thermal heating.
The lucky image-motion prediction for simple scene observation based soft-sensor technology
NASA Astrophysics Data System (ADS)
Li, Yan; Su, Yun; Hu, Bin
2015-08-01
High resolution is important for Earth remote sensing, but vibration of the sensor platform is a major factor restricting high-resolution imaging. Image-motion prediction and real-time compensation are key technologies for solving this problem. Because the traditional autocorrelation image algorithm cannot meet the demands of simple-scene image stabilization, this paper proposes using soft-sensor technology for image-motion prediction and focuses on algorithm optimization for imaging image-motion prediction. Simulation results indicate that the improved lucky image-motion stabilization algorithm, combining a back-propagation neural network (BP NN) and a support vector machine (SVM), is the most suitable for simple-scene image stabilization. The relative error of the soft-sensor-based image-motion prediction is below 5%, and the training speed of the mathematical prediction model is fast enough for real-time image stabilization in aerial photography.
Self-sensing of dielectric elastomer actuator enhanced by artificial neural network
NASA Astrophysics Data System (ADS)
Ye, Zhihang; Chen, Zheng
2017-09-01
Dielectric elastomer (DE) is a type of soft actuating material whose shape can be changed under electrical voltage stimuli. DE materials have promising applications in future soft actuators and sensors, such as soft robotics, energy harvesters, and wearable sensors. In this paper, a strip DE actuator with integrated sensing capability is designed, fabricated, and characterized. Since the strip actuator can be approximated as a compliant capacitor, it is possible to detect the actuator's displacement by analyzing the actuator's impedance change. An integrated sensing scheme that adds a high-frequency probing signal to the actuation signal is developed. Electrical impedance changes in the probing signal are extracted by a fast Fourier transform algorithm, and nonlinear data fitting methods involving an artificial neural network are implemented to detect the actuator's displacement. A series of experiments shows that, by improving the data processing and analysis methods, the integrated sensing method can achieve an error level below 1%.
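A sketch of the probing-signal idea: superimpose a small high-frequency tone on the actuation voltage and track the probe-band amplitude of the measured current with an FFT. All waveform and capacitance parameters below are invented, and a real implementation would use short-time FFTs to follow displacement over time.

```python
import numpy as np

fs = 10_000                                        # sample rate (Hz), assumed
t = np.arange(0, 1.0, 1.0 / fs)
actuation = 3000.0 * np.sin(2 * np.pi * 1.0 * t)   # slow actuation voltage
probe = 5.0 * np.sin(2 * np.pi * 500.0 * t)        # high-frequency probing tone

# Hypothetical compliant-capacitor response: capacitance drifts with stretch.
c_t = 1e-9 * (1.0 + 0.2 * np.sin(2 * np.pi * 1.0 * t))
current = c_t * np.gradient(actuation + probe, 1.0 / fs)   # i ~ C * dV/dt

spectrum = np.abs(np.fft.rfft(current * np.hanning(len(current))))
freqs = np.fft.rfftfreq(len(current), 1.0 / fs)
probe_amp = spectrum[np.argmin(np.abs(freqs - 500.0))]
print(f"probe-band amplitude ~ {probe_amp:.3e} (tracks impedance, hence displacement)")
```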
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-08-06
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy, achieving a mean absolute distance estimation error of 0.02 m for outdoor and 0.2 m for indoor velodromes.
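For context, the classical range-based baseline that such soft-computing models refine is the log-distance path-loss inversion of RSSI; the reference power and path-loss exponent below are hypothetical calibration values, not the paper's.

```python
def rssi_to_distance(rssi_dbm: float, p0_dbm: float = -40.0, n: float = 2.7) -> float:
    """Log-distance path-loss model inverted for range: RSSI = P0 - 10*n*log10(d).
    p0_dbm (RSSI at 1 m) and exponent n are assumed calibration constants."""
    return 10.0 ** ((p0_dbm - rssi_dbm) / (10.0 * n))

print(f"estimated range ~ {rssi_to_distance(-63.0):.2f} m")
```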
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-01-01
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from the three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy, achieving a mean absolute distance estimation error of 0.02 m for outdoor and 0.2 m for indoor velodromes. PMID:27509495
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Fujiwara, Tohru; Takata, Toyoo
1986-01-01
A coding scheme is investigated for error control in data communication systems. The scheme is obtained by cascading two error-correcting codes, called the inner and outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate ε < 1/2. It is shown that if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes, with inner codes ranging from high rates to very low rates and Reed-Solomon codes as outer codes, are considered and their error probabilities evaluated. They all provide extremely high reliability even for very high bit error rates. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
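A sketch of why cascading works on a BSC: the inner code's block-failure probability becomes the symbol-error probability seen by the outer code, and the outer correction drives the overall error rate down sharply. The code parameters below are hypothetical, and inner-block failures are assumed independent.

```python
from math import comb

def block_error(n: int, t: int, eps: float) -> float:
    """P(more than t bit errors in an n-bit block) on a BSC with crossover eps."""
    return sum(comb(n, i) * eps**i * (1 - eps) ** (n - i) for i in range(t + 1, n + 1))

def cascaded_error(n_in: int, t_in: int, eps: float, n_out: int, t_out: int) -> float:
    """Inner-block failures act as symbol errors for the outer code
    (independence assumed; parameters hypothetical)."""
    p_sym = block_error(n_in, t_in, eps)
    return sum(comb(n_out, j) * p_sym**j * (1 - p_sym) ** (n_out - j)
               for j in range(t_out + 1, n_out + 1))

# Hypothetical cascade: t=3 inner code of length 63, t=16 outer code of length 255
print(f"P(decoded block error) ~ {cascaded_error(63, 3, 0.01, 255, 16):.2e}")
```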
USDA-ARS?s Scientific Manuscript database
The purpose of this research was to study the functional differences between straight grade (75% extraction rate) and patent (60% extraction rate) flour blends from 28 genetically pure soft white and club wheat grain lots, as evidenced by variation in sugar snap cookie and Japanese sponge cake quali...
Abundance and physiology of dominant soft corals linked to water quality in Jakarta Bay, Indonesia
Januar, Indra; Wild, Christian; Kunzmann, Andreas
2016-01-01
Declining water quality is one of the main causes of coral reef degradation in the Thousand Islands off the megacity Jakarta, Indonesia. Shifts in benthic community composition towards higher soft coral abundance have been reported for many degraded reefs throughout the Indo-Pacific. However, it is not clear to what extent soft coral abundance and physiology are influenced by water quality. In this study, live benthic cover and water quality (i.e. dissolved inorganic nutrients (DIN), turbidity (NTU), and sedimentation) were assessed at three sites (< 20 km north of Jakarta) in Jakarta Bay (JB) and five sites along the outer Thousand Islands (20–60 km north of Jakarta). This was supplemented by measurements of photosynthetic yield and, for the first time, respiratory electron transport system (ETS) activity of two dominant soft coral genera, Sarcophyton spp. and Nephthea spp. Findings revealed highly eutrophic water conditions in JB compared to the outer Thousand Islands, with a 44% higher DIN load (7.65 μmol/L), 67% higher turbidity (1.49 NTU) and a 47% higher sedimentation rate (30.4 g m−2 d−1). Soft corals were the dominant type of coral cover within the bay (2.4% hard and 12.8% soft coral cover) compared to the outer Thousand Islands (28.3% hard and 6.9% soft coral cover). Soft coral abundance, photosynthetic yield, and ETS activity were highly correlated with key water quality parameters, particularly DIN and sedimentation rates. The findings suggest that water quality controls the relative abundance and physiology of dominant soft corals in JB and may thus contribute to phase shifts from hard to soft coral dominance, highlighting the need to better manage water quality in order to prevent or reverse such phase shifts. PMID:27904802
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 45 Public Welfare 1 2013-10-01 2013-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 45 Public Welfare 1 2014-10-01 2014-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
45 CFR 98.100 - Error Rate Report.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 45 Public Welfare 1 2011-10-01 2011-10-01 false Error Rate Report. 98.100 Section 98.100 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.100 Error Rate Report. (a) Applicability—The requirements of this subpart...
A protocol for monitoring soft tissue motion under compression garments during drop landings.
Mills, Chris; Scurr, Joanna; Wood, Louise
2011-06-03
This study used a single-subject design to establish a valid and reliable protocol for monitoring soft tissue motion under compression garments during drop landings. One male participant performed six 40 cm drop landings onto a force platform in three compression conditions (none, medium, high). Five reflective markers placed on the thigh under the compression garment and five over the garment were filmed using two cameras (1000 Hz). Following manual digitisation, marker coordinates were reconstructed, and their resultant displacements and the maximum change in separation distance between skin and garment markers were calculated. To determine the reliability of marker application, 35 markers were attached to the thigh over the high compression garment and filmed. Markers were then removed and re-applied on three occasions; marker separation and distance to the thigh centre of gravity were calculated. Results showed similar ground reaction forces across landing trials. Significant reductions in the maximum change in separation distance between markers from no-compression to high-compression landings were reported. Typical errors in marker movement under and over the garment were 0.1 mm in medium and high compression landings. Re-application of markers showed mean typical errors of 1 mm in marker separation and < 3 mm relative to the thigh centre of gravity. This paper presents a novel protocol that demonstrates sufficient sensitivity to detect reductions in soft tissue motion during landings in high compression garments compared to no compression. Additionally, markers placed under or over the garment demonstrate low variance in movement, and the protocol reports good reliability in marker re-application. Copyright © 2011 Elsevier Ltd. All rights reserved.
Quantifying exchange between groundwater and surface water in rarely measured organic sediments
NASA Astrophysics Data System (ADS)
Rosenberry, D. O.; Cavas, M.; Keith, D.; Gefell, M. J.; Jones, P. M.
2016-12-01
Transfer of water and chemicals between poorly competent organic sediments and surface water in low-energy riverine and lentic settings depends on several factors, including rate and direction of flow, redox state, number and type of benthic invertebrates, and chemical gradients at and near the sediment-water interface. In spite of their commonly large areal extent, direct measurements of flow in soft, organic sediments are rarely made and little is known about flux direction, rate, or heterogeneity. Commonly used monitoring wells are difficult to install and suffer from slow response to changing hydraulic head due to the low permeability of these sediments. Seepage meters can directly quantify seepage flux if several challenges can be overcome. Meters are difficult to install and operate where water is deep, visibility is poor, and the position of the sediment-water interface is not readily apparent. Soft, easily eroded sediment can be displaced during meter installation, creating bypass flow beneath the bottom of the seepage cylinder. Poorly competent sediments often cannot support the weight of the meters; they slowly sink into the bed and displace water inside the seepage cylinder, which leads to the interpretation of large upward flow. Decaying organic material within the sediment generates gas that can displace water and corrupt seepage-meter measurements. Several inexpensive modifications to a standard seepage meter, as well as precautions during installation and operation, can minimize these sources of error. Underwater video cameras can be mounted to the meter to remotely observe sediment disturbance during sensor installation and monitor the stability of the meter insertion depth during the period of deployment. Anchor rods can be driven a meter or more into the sediment until refusal, firmly anchoring the seepage meter at a constant sediment insertion depth. Data collected from modified seepage meters installed in Minnesota and New York demonstrate the importance of quantifying flows in these challenging settings where biogeochemistry is complex and seepage rates commonly have been assumed to be insignificantly small.
Recent advances in coding theory for near error-free communications
NASA Technical Reports Server (NTRS)
Cheung, K.-M.; Deutsch, L. J.; Dolinar, S. J.; Mceliece, R. J.; Pollara, F.; Shahshahani, M.; Swanson, L.
1991-01-01
Channel and source coding theories are discussed. The following subject areas are covered: large constraint length convolutional codes (the Galileo code); decoder design (the big Viterbi decoder); Voyager's and Galileo's data compression scheme; current research in data compression for images; neural networks for soft decoding; neural networks for source decoding; finite-state codes; and fractals for data compression.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
Grout, Ray; Kolla, Hemanth; Minion, Michael; ...
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual of the first correction iteration and changes slowly between successive iterations. Here, we demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
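The stopping/recovery logic can be illustrated with a much-simplified Picard-type correction loop on y' = -y (real SDC uses spectral collocation nodes and low-order sub-stepping); the fault injection point and tolerances below are illustrative assumptions.

```python
import numpy as np

lam, y0 = -1.0, 1.0
t = np.linspace(0.0, 0.5, 9)
y = np.full_like(t, y0)                       # initial approximation at the nodes

def sweep(y):
    """One correction sweep: y <- y0 + cumulative trapezoid integral of f(y)."""
    areas = np.diff(t) * 0.5 * (lam * y[1:] + lam * y[:-1])
    return y0 + np.concatenate(([0.0], np.cumsum(areas)))

def residual(y):
    return np.linalg.norm(y - sweep(y))

r_prev = r0 = residual(y)
for k in range(60):
    y = sweep(y)
    if k == 3:
        y[4] += 0.1                           # injected soft (transient) fault
    r = residual(y)
    if r < 1e-10 * r0 and abs(r - r_prev) < 1e-12:
        break                                 # residual small and slowly changing
    r_prev = r
print(f"sweeps = {k + 1}, error vs exact = {abs(y[-1] - np.exp(lam * t[-1])):.2e}")
```

Extra sweeps after the injected perturbation drive the solution back toward the fixed point, which is the essence of the resilience strategy described above.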
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
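The core iteration is compact enough to sketch directly from the description above; the synthetic data, fixed λ, and iteration count are assumptions (the paper computes an entire regularization path with warm starts and a specialized low-rank SVD).

```python
import numpy as np

def soft_impute(x_obs, mask, lam, n_iters=200):
    """Soft-Impute sketch: alternately soft-threshold the SVD of the current
    completion and restore the observed entries."""
    z = np.where(mask, x_obs, 0.0)
    low_rank = z
    for _ in range(n_iters):
        u, s, vt = np.linalg.svd(z, full_matrices=False)
        low_rank = (u * np.maximum(s - lam, 0.0)) @ vt   # soft-thresholded SVD
        z = np.where(mask, x_obs, low_rank)              # keep observed entries fixed
    return low_rank

rng = np.random.default_rng(0)
truth = rng.normal(size=(30, 4)) @ rng.normal(size=(4, 20))   # synthetic rank-4 matrix
mask = rng.random(truth.shape) < 0.6                          # ~60% entries observed
est = soft_impute(truth, mask, lam=1.0)
rel_err = np.linalg.norm(est - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error = {rel_err:.3f}")
```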
Spectral Regularization Algorithms for Learning Large Incomplete Matrices
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-01-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques. PMID:21552465
Fais, Paolo; Viero, Alessia; Viel, Guido; Giordano, Renzo; Raniero, Dario; Kusstatscher, Stefano; Giraudo, Chiara; Cecchetto, Giovanni; Montisci, Massimo
2018-04-07
Necrotizing fasciitis (NF) is a life-threatening infection of soft tissues spreading along the fasciae to the surrounding musculature, subcutaneous fat and overlying skin areas, and it can rapidly lead to septic shock and death. Due to the worldwide increase in medical malpractice lawsuits, above all in Western countries, the forensic pathologist is frequently asked to investigate post-mortem cases of NF in order to determine the cause of death and to identify any related negligence and/or medical error. Herein, we review the medical literature dealing with cases of NF in a post-mortem setting, present a case series of seven NF fatalities, and discuss the main ante-mortem and post-mortem diagnostic challenges of both clinical and forensic interest. In particular, we address the following issues: (1) the origin of soft tissue infections; (2) the micro-organisms involved; (3) the time of progression of the infection to NF; (4) the clinical and histological staging of NF; and (5) the pros and cons of clinical and laboratory scores, together with the specific forensic issues related to reconstructing the ideal medical conduct and evaluating the causal link of any medical error.
Achieving algorithmic resilience for temporal integration through spectral deferred corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grout, Ray; Kolla, Hemanth; Minion, Michael
2017-05-08
Spectral deferred corrections (SDC) is an iterative approach for constructing higher-order-accurate numerical approximations of ordinary differential equations. SDC starts with an initial approximation of the solution defined at a set of Gaussian or spectral collocation nodes over a time interval and uses an iterative application of lower-order time discretizations applied to a correction equation to improve the solution at these nodes. Each deferred correction sweep increases the formal order of accuracy of the method up to the limit inherent in the accuracy defined by the collocation points. In this paper, we demonstrate that SDC is well suited to recovering from soft (transient) hardware faults in the data. A strategy where extra correction iterations are used to recover from soft errors and provide algorithmic resilience is proposed. Specifically, in this approach the iteration is continued until the residual (a measure of the error in the approximation) is small relative to the residual on the first correction iteration and changes slowly between successive iterations. We demonstrate the effectiveness of this strategy for both canonical test problems and a comprehensive situation involving a mature scientific application code that solves the reacting Navier-Stokes equations for combustion research.
Soft-light overhead illumination systems improve laparoscopic task performance.
Takai, Akihiro; Takada, Yasutsugu; Motomura, Hideki; Teramukai, Satoshi
2014-02-01
The aim of this study was to evaluate the impact of attached shadow cues on laparoscopic task performance. We developed a soft-light overhead illumination system (SOIS) that produced attached shadows on objects. We compared results using the SOIS with those using a conventional illumination system with regard to laparoscopic experience and laparoscope-to-target distances (LTDs). Forty-two medical students and 23 surgeons participated in the study. A peg transfer task (LTD, 120 mm) for students and surgeons, and a suture removal task (LTD, 30 mm) for students were performed. Illumination systems were randomly assigned to each task. Endpoints were: total number of peg transfers; percentage of peg-dropping errors; and total execution time for suture removal. After the task, participants filled out a questionnaire on their preference for a particular illumination system. Total number of peg transfers was greater with the SOIS for both students and surgeons. Percentage of peg-dropping errors for surgeons was lower with the SOIS. Total execution time for suture removal was shorter with the SOIS. Forty-five participants (69% of the total) rated the SOIS as the easier system for task performance. The present results confirm that the SOIS improves laparoscopic task performance, regardless of previous laparoscopic experience or LTD.
An educational and audit tool to reduce prescribing error in intensive care.
Thomas, A N; Boxall, E M; Laha, S K; Day, A J; Grundy, D
2008-10-01
To reduce prescribing errors in an intensive care unit by providing prescriber education in tutorials, ward-based teaching and feedback in 3-monthly cycles with each new group of trainee medical staff. Prescribing audits were conducted three times in each 3-month cycle: once pretraining, once post-training and a final audit after 6 weeks. The audit information was fed back to prescribers with their correct prescribing rates, rates for individual error types and total error rates, together with anonymised information about other prescribers' error rates. The percentage of prescriptions with errors decreased over each 3-month cycle (pretraining: 25%, 19% (one missing data point); post-training: 23%, 6%, 11%; final audit: 7%, 3%, 5%; p<0.0005). The total number of prescriptions and error rates varied widely between trainees (data collection one, cycle two: range of prescriptions written 1-61, median 18; error rate 0-100%, median 15%). Prescriber education and feedback reduce manual prescribing errors in intensive care.
A Six Sigma Trial For Reduction of Error Rates in Pathology Laboratory.
Tosuner, Zeynep; Gücin, Zühal; Kiran, Tuğçe; Büyükpinarbaşili, Nur; Turna, Seval; Taşkiran, Olcay; Arici, Dilek Sema
2016-01-01
A major target of quality assurance is the minimization of error rates in order to enhance patient safety. Six Sigma is a method used in industry that targets near-zero error (3.4 errors per million events). The five main principles of Six Sigma are definition, measurement, analysis, improvement and control. Using this methodology, the causes of errors can be examined and process improvement strategies can be identified. The aim of our study was to evaluate the utility of Six Sigma methodology in error reduction in our pathology laboratory. The errors encountered between April 2014 and April 2015 were recorded by the pathology personnel. Error follow-up forms were examined by the quality control supervisor, administrative supervisor and the head of the department. Using Six Sigma methodology, the rate of errors was measured monthly and the distribution of errors across the preanalytic, analytic and postanalytic phases was analysed. Improvement strategies were discussed in the monthly intradepartmental meetings, and the units with high error rates were placed under closer control. Fifty-six (52.4%) of the 107 recorded errors were at the pre-analytic phase. Forty-five errors (42%) were recorded as analytic and 6 errors (5.6%) as post-analytic. Two of the 45 errors were major irrevocable errors. The error rate was 6.8 per million in the first half of the year and 1.3 per million in the second half, a decrease of 79.77%. The Six Sigma trial in our pathology laboratory achieved a reduction of the error rates, mainly in the pre-analytic and analytic phases.
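For reference, the per-million figures quoted above are conventionally computed as defects per million opportunities; a minimal sketch with made-up counts (the study's actual event counts are not given in the abstract):

    def errors_per_million(errors, opportunities):
        # the usual Six Sigma defect-rate metric (DPMO)
        return errors / opportunities * 1_000_000

    # hypothetical: 34 errors among 5,000,000 events in a half year
    print(errors_per_million(34, 5_000_000))  # -> 6.8 per million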
Data Analysis & Statistical Methods for Command File Errors
NASA Technical Reports Server (NTRS)
Meshkat, Leila; Waggoner, Bruce; Bryant, Larry
2014-01-01
This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
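A minimal sketch of the kind of multiple regression described here, using ordinary least squares on hypothetical per-period mission variables (the actual JPL dataset and variable definitions are not public in this abstract):

    import numpy as np

    # hypothetical data: one row per reporting period
    files    = np.array([12., 20.,  8., 15., 25.])      # files radiated
    commands = np.array([340., 610., 150., 420., 800.]) # commands in the files
    workload = np.array([2.0, 3.5, 1.0, 2.5, 4.0])      # subjective score
    errors   = np.array([1., 3., 0., 2., 4.])           # command file errors

    # design matrix with an intercept column
    X = np.column_stack([np.ones_like(files), files, commands, workload])
    beta, *_ = np.linalg.lstsq(X, errors, rcond=None)   # fitted coefficients

    predicted = X @ beta
    r2 = 1 - ((errors - predicted) ** 2).sum() / ((errors - errors.mean()) ** 2).sum()
    print(beta, r2)  # candidate drivers and variance explained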
Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals
NASA Astrophysics Data System (ADS)
Goswami, S.; Flury, J.
2016-12-01
Efforts to reach the accuracy of the GRACE baseline, predicted earlier from the design simulations, have been ongoing for a decade. The GRACE error budget is dominated by noise from sensors, dealiasing models and modeling errors. GRACE range-rate residuals contain these errors, so their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differences in pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. Correlations between ranging frequency noise and range-rate residuals are also seen.
NASA Technical Reports Server (NTRS)
Simon, Marvin; Valles, Esteban; Jones, Christopher
2008-01-01
This paper addresses the carrier-phase estimation problem under low SNR conditions as are typical of turbo- and LDPC-coded applications. In previous publications by the first author, closed-loop carrier synchronization schemes for error-correction coded BPSK and QPSK modulation were proposed that were based on feeding back hard data decisions at the input of the loop, the purpose being to remove the modulation prior to attempting to track the carrier phase as opposed to the more conventional decision-feedback schemes that incorporate such feedback inside the loop. In this paper, we consider an alternative approach wherein the extrinsic soft information from the iterative decoder of turbo or LDPC codes is instead used as the feedback.
Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena
2015-04-15
It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method. Copyright © 2014 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zeb Gul, Jahan; Yang, Bong-Su; Yang, Young Jin; Chang, Dong Eui; Choi, Kyung Hyun
2016-11-01
Compared with rigid mechanical robots, soft bots can adopt intricate postures and fit into complex shapes. This paper presents a unique in situ UV-curing three-dimensional (3D) printed multi-material tri-legged soft bot with a spider-mimicking multi-step dynamic forward gait, using commercial bio metal filament (BMF) as an actuator. The printed soft bot can produce controllable forward motion in response to external signals. The fundamental properties of BMF, including output force, contractions at different frequencies, initial loading rate, and displacement rate, are verified. The tri-pedal soft bot CAD model is designed inspired by the spider's leg structure, and its locomotion is assessed by simulating strain and displacement using finite element analysis. A customized rotational multi-head 3D printing system assisted by multiple-wavelength curing lasers is used for in situ fabrication of the tri-pedal soft bot from two flexible materials (epoxy and polyurethane) in three layered steps. The tri-pedal soft bot is 80 mm in diameter, and each pedal is 5 mm wide and 5 mm deep. The maximum forward speed achieved is 2.7 mm s-1 at 5 Hz with an input of 3 V and 250 mA on a smooth surface. The fabricated tri-pedal soft bot demonstrated power efficiency and controllable locomotion at three input signal frequencies (1, 2 and 5 Hz).
Soft black hole absorption rates as conservation laws
Avery, Steven G.; Schwab, Burkhard U. W.
2017-04-10
The absorption rate of low-energy, or soft, electromagnetic radiation by spherically symmetric black holes in arbitrary dimensions is shown to be fixed by conservation of energy and large gauge transformations. Here, we interpret this result as the explicit realization of the Hawking-Perry-Strominger Ward identity for large gauge transformations in the background of a non-evaporating black hole. Along the way we rederive and extend our previous analytic results regarding the absorption rate for the minimal scalar and the photon.
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature relating to the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across a number of studies reviewed. The common registration methods used when registering CBCT scans with planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationships between the setup errors detected and methods of registration were observed from this review. Further studies are needed to assess the benefit of CBCT over electronic portal image, as CBCT remains unproven to be of wide benefit in breast RT.
On-orbit observations of single event upset in Harris HM-6508 1K RAMs, reissue A
NASA Astrophysics Data System (ADS)
Blake, J. B.; Mandel, R.
1987-02-01
The Harris HM-6508 1K x 1 RAMs are part of a subsystem of a satellite in a low, polar orbit. The memory module used in the subsystem containing the RAMs consists of three printed circuit cards, with each card containing eight 2K-byte memory hybrids, for a total of 48K bytes. Each memory hybrid contains 16 HM-6508 RAM chips. On a regular basis, all but 256 bytes of the 48K bytes are examined for bit errors. Two different techniques were used for detecting bit errors. The first technique, a memory check sum, was capable of automatically detecting all single-bit and some double-bit errors which occurred within a page of memory. A memory page consists of 256 bytes. Memory check sum tests are performed approximately every 90 minutes. To detect a multiple error, or to determine the exact location of a bit error within the page, the entire contents of the memory are dumped and compared to the load file. Memory dumps are normally performed once a month, or immediately after the check sum routine detects an error. Once the exact location of the error is found, the correct value is reloaded into memory. After the memory is reloaded, the contents of the memory location in question are verified in order to determine whether the error was a soft error generated by an SEU or a hard error generated by a part failure or cosmic-ray-induced latchup.
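The check-sum-plus-dump procedure can be pictured with a short Python sketch (the page layout and check-sum choice are illustrative assumptions; the flight implementation is not described beyond the abstract):

    PAGE_SIZE = 256  # bytes per memory page, as in the abstract

    def page_checksum(page):
        # simple additive check sum over one 256-byte page
        return sum(page) & 0xFF

    def scrub(memory, load_file):
        # every ~90 minutes: compare per-page check sums; on a mismatch,
        # locate the exact bytes via a full dump compare and reload them
        for base in range(0, len(memory), PAGE_SIZE):
            page = memory[base:base + PAGE_SIZE]
            good = load_file[base:base + PAGE_SIZE]
            if page_checksum(page) != page_checksum(good):
                for i, (got, want) in enumerate(zip(page, good)):
                    if got != want:
                        memory[base + i] = want  # correct the soft error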
A cascaded coding scheme for error control and its performance analysis
NASA Technical Reports Server (NTRS)
Lin, S.
1986-01-01
A coding scheme for error control in data communication systems is investigated. The scheme is obtained by cascading two error-correcting codes, called the inner and the outer codes. The error performance of the scheme is analyzed for a binary symmetric channel with bit error rate epsilon < 1/2. It is shown that, if the inner and outer codes are chosen properly, extremely high reliability can be attained even for a high channel bit error rate. Various specific example schemes with inner codes ranging from high rates to very low rates and Reed-Solomon outer codes are considered, and their error probabilities are evaluated. They all provide extremely high reliability even for very high bit error rates, say 0.1 to 0.01. Several example schemes are being considered by NASA for satellite and spacecraft downlink error control.
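The reliability gain from cascading can be illustrated numerically. Below is a sketch under simplifying assumptions (the inner code corrects up to t1 bit errors per block, each inner-decoder failure becomes one symbol error for an outer Reed-Solomon code correcting t2 of them, and all failures are independent; the parameters are hypothetical, not the paper's):

    from math import comb

    def block_fail(n, t, p):
        # P(more than t errors in an n-symbol block), each symbol wrong w.p. p
        ok = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t + 1))
        return 1 - ok

    def cascade_fail(n1, t1, eps, n2, t2):
        p_inner = block_fail(n1, t1, eps)   # inner block decoding failure
        return block_fail(n2, t2, p_inner)  # outer word decoding failure

    # e.g. a (63,45) inner code correcting 3 errors, an outer RS(255,223)
    # correcting 16 symbol errors, channel bit error rate 0.01
    print(cascade_fail(63, 3, 0.01, 255, 16))

Even with the channel bit error rate as high as 0.01, the cascaded word error probability is astronomically small, which is the effect the abstract describes.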
A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading
NASA Astrophysics Data System (ADS)
Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo
A simple exact error rate analysis is presented for random binary direct-sequence code division multiple access (DS-CDMA) considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression of the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression of the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).
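Although the abstract does not reproduce it, the CF method of error rate analysis conventionally rests on the Gil-Pelaez inversion formula; writing \Phi_Z(\omega) for the characteristic function of the decision variable Z conditioned on a transmitted +1, the bit error probability takes the form

    P_e = \Pr\{Z < 0\} = \frac{1}{2} - \frac{1}{\pi} \int_{0}^{\infty} \frac{\operatorname{Im}\{\Phi_Z(\omega)\}}{\omega}\, d\omega ,

so an exact, tractable expression for \Phi_Z(\omega) immediately yields an exact error rate by a one-dimensional numerical integration.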
Effect of bar-code technology on the safety of medication administration.
Poon, Eric G; Keohane, Carol A; Yoon, Catherine S; Ditmore, Matthew; Bane, Anne; Levtzion-Korach, Osnat; Moniz, Thomas; Rothschild, Jeffrey M; Kachalia, Allen B; Hayes, Judy; Churchill, William W; Lipsitz, Stuart; Whittemore, Anthony D; Bates, David W; Gandhi, Tejal K
2010-05-06
Serious medication errors are common in hospitals and often occur during order transcription or administration of medication. To help prevent such errors, technology has been developed to verify medications by incorporating bar-code verification technology within an electronic medication-administration system (bar-code eMAR). We conducted a before-and-after, quasi-experimental study in an academic medical center that was implementing the bar-code eMAR. We assessed rates of errors in order transcription and medication administration on units before and after implementation of the bar-code eMAR. Errors that involved early or late administration of medications were classified as timing errors and all others as nontiming errors. Two clinicians reviewed the errors to determine their potential to harm patients and classified those that could be harmful as potential adverse drug events. We observed 14,041 medication administrations and reviewed 3082 order transcriptions. Observers noted 776 nontiming errors in medication administration on units that did not use the bar-code eMAR (an 11.5% error rate) versus 495 such errors on units that did use it (a 6.8% error rate), a 41.4% relative reduction in errors (P<0.001). The rate of potential adverse drug events (other than those associated with timing errors) fell from 3.1% without the use of the bar-code eMAR to 1.6% with its use, representing a 50.8% relative reduction (P<0.001). The rate of timing errors in medication administration fell by 27.3% (P<0.001), but the rate of potential adverse drug events associated with timing errors did not change significantly. Transcription errors occurred at a rate of 6.1% on units that did not use the bar-code eMAR but were completely eliminated on units that did use it. Use of the bar-code eMAR substantially reduced the rate of errors in order transcription and in medication administration as well as potential adverse drug events, although it did not eliminate such errors. Our data show that the bar-code eMAR is an important intervention to improve medication safety. (ClinicalTrials.gov number, NCT00243373.) 2010 Massachusetts Medical Society
Development and implementation of a human accuracy program in patient foodservice.
Eden, S H; Wood, S M; Ptak, K M
1987-04-01
For many years, industry has utilized the concept of human error rates to monitor and minimize human errors in the production process. A consistent quality-controlled product increases consumer satisfaction and repeat purchase of the product. Administrative dietitians have applied the concept of human error rates (the number of errors divided by the number of opportunities for error) at four hospitals, with a total bed capacity of 788, within a tertiary-care medical center. The human error rate was used to monitor and evaluate trayline employee performance and to evaluate the layout and tasks of trayline stations, in addition to evaluating employees in patient service areas. Long-term employees initially opposed the error rate system with some hostility and resentment, while newer employees accepted the system. All employees now believe that the constant feedback given by supervisors enhances their self-esteem and productivity. Employee error rates are monitored daily and are used to counsel employees when necessary; they are also utilized during annual performance evaluation. Average daily error rates for a facility staffed by new employees decreased from 7% to an acceptable 3%. In a facility staffed by long-term employees, the error rate increased, reflecting improper error documentation. Patient satisfaction surveys reveal that satisfaction with tray accuracy increased from 88% to 92% in the facility staffed by long-term employees and has remained above the 90% standard in the facility staffed by new employees.
Wang, Jie-Sheng; Han, Shuang
2015-01-01
For predicting the key technology indicators (concentrate grade and tailings recovery rate) of flotation process, a feed-forward neural network (FNN) based soft-sensor model optimized by the hybrid algorithm combining particle swarm optimization (PSO) algorithm and gravitational search algorithm (GSA) is proposed. Although GSA has better optimization capability, it has slow convergence velocity and is easy to fall into local optimum. So in this paper, the velocity vector and position vector of GSA are adjusted by PSO algorithm in order to improve its convergence speed and prediction accuracy. Finally, the proposed hybrid algorithm is adopted to optimize the parameters of FNN soft-sensor model. Simulation results show that the model has better generalization and prediction accuracy for the concentrate grade and tailings recovery rate to meet the online soft-sensor requirements of the real-time control in the flotation process. PMID:26583034
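The described PSO adjustment of GSA's velocity and position updates matches the general shape of the known PSOGSA hybrid; a minimal one-dimensional sketch of such an update step (the paper's exact variant and constants may differ):

    import random

    def pso_gsa_step(x, v, accel, gbest, w=0.7, c1=0.5, c2=1.5):
        # GSA's gravitational acceleration `accel` drives local search,
        # while PSO's global best `gbest` adds a social pull that speeds
        # convergence and helps escape local optima
        r1, r2 = random.random(), random.random()
        v_new = w * v + c1 * r1 * accel + c2 * r2 * (gbest - x)
        return x + v_new, v_new

The best position found by such iterations would then be decoded into the weights of the FNN soft-sensor model.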
Decomposition of fuzzy soft sets with finite value spaces.
Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad
2014-01-01
The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.
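For orientation, the t-level soft sets driving this decomposition are commonly defined as follows (standard notation, not quoted from the paper): for a fuzzy soft set (F, A) over a universe U and a threshold t in [0, 1], the t-level soft set is (F_t, A) with

    F_t(a) = \{\, x \in U : F(a)(x) \ge t \,\} \quad \text{for all } a \in A ,

so each parameterized fuzzy membership function is cut into an ordinary (crisp) soft set at level t.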
Triggering soft bombs at the LHC
NASA Astrophysics Data System (ADS)
Knapen, Simon; Griso, Simone Pagan; Papucci, Michele; Robinson, Dean J.
2017-08-01
Very high multiplicity, spherically-symmetric distributions of soft particles, with p T ˜ few×100 MeV, may be a signature of strongly-coupled hidden valleys that exhibit long, efficient showering windows. With traditional triggers, such `soft bomb' events closely resemble pile-up and are therefore only recorded with minimum bias triggers at a very low efficiency. We demonstrate a proof-of-concept for a high-level triggering strategy that efficiently separates soft bombs from pile-up by searching for a `belt of fire': a high density band of hits on the innermost layer of the tracker. Seeding our proposed high-level trigger with existing jet, missing transverse energy or lepton hardware-level triggers, we show that net trigger efficiencies of order 10% are possible for bombs of mass several × 100 GeV. We also consider the special case that soft bombs are the result of an exotic decay of the 125 GeV Higgs. The fiducial rate for `Higgs bombs' triggered in this manner is marginally higher than the rate achievable by triggering directly on a hard muon from associated Higgs production.
Ternary NiFeX as soft biasing film in a magnetoresistive sensor
NASA Astrophysics Data System (ADS)
Chen, Mao-Min; Gharsallah, Neila; Gorman, Grace L.; Latimer, Jacquie
1991-04-01
The properties of NiFeX ternary films (X being Al, Au, Nb, Pd, Pt, Si, and Zr) have been studied for soft-film biasing of the magnetoresistive (MR) trilayer sensor. In general, the addition of the element X into the NiFe alloy film decreases the saturation magnetization Bs and the magnetoresistance coefficient of the film, while increasing the film's electrical resistivity ρ. One of the desirable properties of a soft film for biasing is high sheet resistance for minimum current flow. A figure of merit Bsρ that takes into account both the rate of decrease in Bs and the rate of increase in ρ when adding the element X was derived to compare the effectiveness of various X elements in reducing the current shunting through the soft-film layer. Using this criterion, NiFeNb and NiFeZr emerge as good soft-film materials having a maximum sheet resistance relative to the MR layer. Other critical properties such as the magnetoresistance coefficient, magnetostriction, coercivity, and anisotropy field were also examined and are discussed in this paper.
Payne, Christopher J; Wamala, Isaac; Abah, Colette; Thalhofer, Thomas; Saeed, Mossab; Bautista-Salinas, Daniel; Horvath, Markus A; Vasilyev, Nikolay V; Roche, Ellen T; Pigula, Frank A; Walsh, Conor J
2017-09-01
Soft robotic devices have significant potential for medical device applications that warrant safe synergistic interaction with humans. This article describes the optimization of an implantable soft robotic system for heart failure whereby soft actuators wrapped around the ventricles are programmed to contract and relax in synchrony with the beating heart. Elastic elements integrated into the soft actuators provide recoiling function so as to aid refilling during the diastolic phase of the cardiac cycle. Improved synchronization with the biological system is achieved by incorporating the native ventricular pressure into the control system to trigger assistance and synchronize the device with the heart. A three-state electro-pneumatic valve configuration allows the actuators to contract at different rates to vary contraction patterns. An in vivo study was performed to test three hypotheses relating to mechanical coupling and temporal synchronization of the actuators and heart. First, that adhesion of the actuators to the ventricles improves cardiac output. Second, that there is a contraction-relaxation ratio of the actuators which generates optimal cardiac output. Third, that the rate of actuator contraction is a factor in cardiac output.
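The pressure-triggered, three-state valve logic described here can be caricatured in a few lines of Python (the state names, trigger threshold and timings are illustrative assumptions, not the device's published parameters):

    import time

    def assist_loop(read_lv_pressure, set_valve, trigger_mmHg=15.0,
                    contract_s=0.20, hold_s=0.10):
        # three-state electro-pneumatic logic: PRESSURIZE (contract),
        # HOLD, and VENT (relax, letting the elastic elements recoil)
        while True:
            if read_lv_pressure() > trigger_mmHg:  # native systole detected
                set_valve("PRESSURIZE")            # contract with the heart
                time.sleep(contract_s)             # contraction phase
                set_valve("HOLD")
                time.sleep(hold_s)
                set_valve("VENT")                  # recoil aids diastolic refilling

Varying contract_s and hold_s is one way to realize the different contraction rates and contraction-relaxation ratios tested in vivo.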
Schrider, Daniel R; Mendes, Fábio K; Hahn, Matthew W; Kern, Andrew D
2015-05-01
Characterizing the nature of the adaptive process at the genetic level is a central goal for population genetics. In particular, we know little about the sources of adaptive substitution or about the number of adaptive variants currently segregating in nature. Historically, population geneticists have focused attention on the hard-sweep model of adaptation in which a de novo beneficial mutation arises and rapidly fixes in a population. Recently more attention has been given to soft-sweep models, in which alleles that were previously neutral, or nearly so, drift until such a time as the environment shifts and their selection coefficient changes to become beneficial. It remains an active and difficult problem, however, to tease apart the telltale signatures of hard vs. soft sweeps in genomic polymorphism data. Through extensive simulations of hard- and soft-sweep models, here we show that indeed the two might not be separable through the use of simple summary statistics. In particular, it seems that recombination in regions linked to, but distant from, sites of hard sweeps can create patterns of polymorphism that closely mirror what is expected to be found near soft sweeps. We find that a very similar situation arises when using haplotype-based statistics that are aimed at detecting partial or ongoing selective sweeps, such that it is difficult to distinguish the shoulder of a hard sweep from the center of a partial sweep. While knowing the location of the selected site mitigates this problem slightly, we show that stochasticity in signatures of natural selection will frequently cause the signal to reach its zenith far from this site and that this effect is more severe for soft sweeps; thus inferences of the target as well as the mode of positive selection may be inaccurate. In addition, both the time since a sweep ends and biologically realistic levels of allelic gene conversion lead to errors in the classification and identification of selective sweeps. This general problem of "soft shoulders" underscores the difficulty in differentiating soft and partial sweeps from hard-sweep scenarios in molecular population genomics data. The soft-shoulder effect also implies that the more common hard sweeps have been in recent evolutionary history, the more prevalent spurious signatures of soft or partial sweeps may appear in some genome-wide scans. Copyright © 2015 by the Genetics Society of America.
The influence of the structure and culture of medical group practices on prescription drug errors.
Kralewski, John E; Dowd, Bryan E; Heaton, Alan; Kaissi, Amer
2005-08-01
This project was designed to identify the magnitude of prescription drug errors in medical group practices and to explore the influence of the practice structure and culture on those error rates. Seventy-eight practices serving an upper Midwest managed care (Care Plus) plan during 2001 were included in the study. Using Care Plus claims data, prescription drug error rates were calculated at the enrollee level and then were aggregated to the group practice that each enrollee selected to provide and manage their care. Practice structure and culture data were obtained from surveys of the practices. Data were analyzed using multivariate regression. Both the culture and the structure of these group practices appear to influence prescription drug error rates. Seeing more patients per clinic hour, more prescriptions per patient, and being cared for in a rural clinic were all strongly associated with more errors. Conversely, having a case manager program is strongly related to fewer errors in all of our analyses. The culture of the practices clearly influences error rates, but the findings are mixed. Practices with cohesive cultures have lower error rates but, contrary to our hypothesis, cultures that value physician autonomy and individuality also have lower error rates than those with a more organizational orientation. Our study supports the contention that there are a substantial number of prescription drug errors in the ambulatory care sector. Even by the strictest definition, there were about 13 errors per 100 prescriptions for Care Plus patients in these group practices during 2001. Our study demonstrates that the structure of medical group practices influences prescription drug error rates. In some cases, this appears to be a direct relationship, such as the effects of having a case manager program on fewer drug errors, but in other cases the effect appears to be indirect through the improvement of drug prescribing practices. An important aspect of this study is that it provides insights into the relationships between the structure and culture of medical group practices and prescription drug errors and provides direction for future research. Research focused on the factors influencing the high error rates in rural areas and on how the interaction of practice structural and cultural attributes influences error rates would add important insight to our findings. For medical practice directors, our data show that they should focus on patient care coordination to reduce errors.
Bezrukov, Ilja; Schmidt, Holger; Mantlik, Frédéric; Schwenzer, Nina; Brendle, Cornelia; Schölkopf, Bernhard; Pichler, Bernd J
2013-10-01
Hybrid PET/MR systems have recently entered clinical practice. Thus, the accuracy of MR-based attenuation correction in simultaneously acquired data can now be investigated. We assessed the accuracy of 4 methods of MR-based attenuation correction in lesions within soft tissue, bone, and MR susceptibility artifacts: 2 segmentation-based methods (SEG1, provided by the manufacturer, and SEG2, a method with atlas-based susceptibility artifact correction); an atlas- and pattern recognition-based method (AT&PR), which also used artifact correction; and a new method combining AT&PR and SEG2 (SEG2wBONE). Attenuation maps were calculated for the PET/MR datasets of 10 patients acquired on a whole-body PET/MR system, allowing for simultaneous acquisition of PET and MR data. Eighty percent iso-contour volumes of interest were placed on lesions in soft tissue (n = 21), in bone (n = 20), near bone (n = 19), and within or near MR susceptibility artifacts (n = 9). Relative mean volume-of-interest differences were calculated with CT-based attenuation correction as a reference. For soft-tissue lesions, none of the methods revealed a significant difference in PET standardized uptake value relative to CT-based attenuation correction (SEG1, -2.6% ± 5.8%; SEG2, -1.6% ± 4.9%; AT&PR, -4.7% ± 6.5%; SEG2wBONE, 0.2% ± 5.3%). For bone lesions, underestimation of PET standardized uptake values was found for all methods, with minimized error for the atlas-based approaches (SEG1, -16.1% ± 9.7%; SEG2, -11.0% ± 6.7%; AT&PR, -6.6% ± 5.0%; SEG2wBONE, -4.7% ± 4.4%). For lesions near bone, underestimations of lower magnitude were observed (SEG1, -12.0% ± 7.4%; SEG2, -9.2% ± 6.5%; AT&PR, -4.6% ± 7.8%; SEG2wBONE, -4.2% ± 6.2%). For lesions affected by MR susceptibility artifacts, quantification errors could be reduced using the atlas-based artifact correction (SEG1, -54.0% ± 38.4%; SEG2, -15.0% ± 12.2%; AT&PR, -4.1% ± 11.2%; SEG2wBONE, 0.6% ± 11.1%). For soft-tissue lesions, none of the evaluated methods showed statistically significant errors. For bone lesions, significant underestimations of -16% and -11% occurred for methods in which bone tissue was ignored (SEG1 and SEG2). In the present attenuation correction schemes, uncorrected MR susceptibility artifacts typically result in reduced attenuation values, potentially leading to highly reduced PET standardized uptake values, rendering lesions indistinguishable from background. While AT&PR and SEG2wBONE show accurate results in both soft tissue and bone, SEG2wBONE uses a two-step approach for tissue classification, which increases the robustness of prediction and can be applied retrospectively if more precision in bone areas is needed.
Fiber-reinforced scaffolds in soft tissue engineering
Wang, Wei; Fan, Yubo; Wang, Xiumei; Watari, Fumio
2017-01-01
Soft tissue engineering has been developed as a new strategy for repairing damaged or diseased soft tissues and organs to overcome the limitations of current therapies. Since most soft tissues in the human body are supported by collagen fibers forming a three-dimensional microstructure, fiber-reinforced scaffolds have the advantage of mimicking the structure, mechanical and biological environment of natural soft tissues, which benefits their regeneration and remodeling. This article reviews and discusses the latest research advances on the design and manufacture of novel fiber-reinforced scaffolds for soft tissue repair and how fiber addition affects their structural characteristics, mechanical strength and biological activities in vitro and in vivo. In general, the concept of fiber-reinforced scaffolds with adjustable microstructures, mechanical properties and degradation rates can provide an effective platform and promising method for developing satisfactory biomechanically functional implants for soft tissue engineering or regenerative medicine. PMID:28798872
Chemical Analysis of the Moon at the Surveyor VI Landing Site: Preliminary Results.
Turkevich, A L; Patterson, J H; Franzgrote, E J
1968-06-07
The alpha-scattering experiment aboard soft-landing Surveyor VI has provided a chemical analysis of the surface of the moon in Sinus Medii. The preliminary results indicate that, within experimental errors, the composition is the same as that found by Surveyor V in Mare Tranquillitatis. This finding suggests that large portions of the lunar maria resemble basalt in composition.
Emergency department discharge prescription errors in an academic medical center
Belanger, April; Devine, Lauren T.; Lane, Aaron; Condren, Michelle E.
2017-01-01
This study described discharge prescription medication errors written for emergency department patients. This study used content analysis in a cross-sectional design to systematically categorize prescription errors found in a report of 1000 discharge prescriptions submitted in the electronic medical record in February 2015. Two pharmacy team members reviewed the discharge prescription list for errors. Open-ended data were coded by an additional rater for agreement on coding categories. Coding was based upon majority rule. Descriptive statistics were used to address the study objective. Categories evaluated were patient age, provider type, drug class, and type and time of error. The discharge prescription error rate out of 1000 prescriptions was 13.4%, with “incomplete or inadequate prescription” being the most commonly detected error (58.2%). The adult and pediatric error rates were 11.7% and 22.7%, respectively. The antibiotics reviewed had the highest number of errors. The highest within-class error rates were with antianginal medications, antiparasitic medications, antacids, appetite stimulants, and probiotics. Emergency medicine residents wrote the highest percentage of prescriptions (46.7%) and had an error rate of 9.2%. Residents of other specialties wrote 340 prescriptions and had an error rate of 20.9%. Errors occurred most often between 10:00 am and 6:00 pm. PMID:28405061
Panisset, J C; Pailhé, R; Schlatterer, B; Sigwalt, L; Sonnery-Cottet, B; Lutz, C; Lustig, S; Batailler, C; Bertiaux, S; Ehkirch, F P; Colombet, P; Steltzlen, C; Louis, M L; D'ingrado, P; Dalmay, F; Imbert, P; Saragaglia, D
2017-12-01
Lateral tenodesis (LT) is performed to limit the risk of iterative tear following anterior cruciate ligament (ACL) reconstruction in at-risk patients. By adding an extra procedure to an isolated ACL graft, LT increases operating time and may complicate the postoperative course. The objective of the present study was to evaluate the rate of early complications. The study hypothesis was that associating anterolateral ligament (ALL) reconstruction with ACL reconstruction does not increase the complications rate found with isolated ACL reconstruction. A prospective multicenter study included 392 patients (70% male; mean age, 29.9 years) treated by associated ACL and LT reconstruction. All adverse events were inventoried. Mean hospital stay was 2 days, with 46% day-surgery. Walking was resumed at a mean of 27 days, with an advantage for patients treated by the hamstring technique. The early postoperative complications rate was 12%, with 1.7% specifically implicating the LT reconstruction: pain, hematoma, stiffness in flexion and extension, and infection. There was a 5% rate of surgical revision during the first year, predominantly comprising arthrolysis for extension deficit. The 1-year recurrence rate was 2.8%. The complications rate for combined intra- and extra-articular reconstruction was no higher than for isolated intra-articular ACL reconstruction, with no increase in infection or stiffness rates. The rate of complications specific to the ALL reconstruction was low, at 1.7%, and mainly involved fixation error causing lateral soft-tissue impingement. IV, prospective multicenter study. Copyright © 2017. Published by Elsevier Masson SAS.
X-ray variability of Cygnus X-1 in its soft state
NASA Technical Reports Server (NTRS)
Cui, W.; Zhang, S. N.; Jahoda, K.; Focke, W.; Swank, J.; Heindl, W. A.; Rothschild, R. E.
1997-01-01
Observations from the Rossi X-ray Timing Explorer (RXTE) of Cyg X-1 in the soft state and during the soft-to-hard transition are examined. The results of this analysis confirm previous conclusions that for this source there is a settling period following the transition from the hard to the soft state, during which the low-energy spectrum varies significantly while the high-energy portion changes little, before the source reaches nominal soft-state brightness. This behavior can be characterized by a soft low-energy spectrum and significant low-frequency 1/f noise and white noise in the power density spectrum, which becomes softer upon reaching the true soft state. The low-frequency 1/f noise is not observed when Cyg X-1 is in the hard state, and therefore appears to be positively correlated with the disk mass accretion rate. The difference in the observed spectral and timing properties between the hard and soft states is qualitatively consistent with a fluctuating corona model.
Prediction of embankment settlement over soft soils.
DOT National Transportation Integrated Search
2009-06-01
The objective of this project was to review and verify the current design procedures used by TxDOT to estimate the total consolidation settlement, and its rate, for embankments constructed on soft soils. Methods to improve the settlement predictions ...
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-01-01
Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. PMID:27193033
Dynamic deformation of soft soil media: Experimental studies and mathematical modeling
NASA Astrophysics Data System (ADS)
Balandin, V. V.; Bragov, A. M.; Igumnov, L. A.; Konstantinov, A. Yu.; Kotov, V. L.; Lomunov, A. K.
2015-05-01
A complex experimental-theoretical approach to studying the problem of high-rate strain of soft soil media is presented. This approach combines the following contemporary dynamic testing methods: the modified Hopkinson-Kolsky method applied to medium specimens contained in holders, and the method of plane-wave shock experiments. The following dynamic characteristics of sandy soils are obtained: shock adiabatic curves, bulk compressibility curves, and shear resistance curves. The obtained experimental data are used to study the high-rate strain process in a split pressure bar system, and the constitutive relations of Grigoryan's mathematical model of a soft soil medium are verified by comparing computational results with physical impact and penetration experiments.
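For context, the Hopkinson-Kolsky analysis referred to here conventionally recovers the specimen's strain rate and stress from the reflected and transmitted bar strains (standard split-bar relations, not taken from the abstract):

    \dot{\varepsilon}(t) = -\frac{2 c_0}{L_0}\, \varepsilon_r(t), \qquad
    \sigma(t) = E\, \frac{A_b}{A_s}\, \varepsilon_t(t),

where c_0 and E are the bar wave speed and elastic modulus, A_b and A_s are the bar and specimen cross-sections, and L_0 is the specimen length.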
NASA Astrophysics Data System (ADS)
Kang, Wonmo; Chen, YungChia; Bagchi, Amit; O'Shaughnessy, Thomas J.
2017-12-01
The material response of biologically relevant soft materials, e.g., extracellular matrix or cell cytoplasm, at high-rate loading conditions is becoming increasingly important for emerging medical implications, including the potential of cavitation-induced brain injury or cavitation created by medical devices, whether intentional or not. However, accurately probing soft samples remains challenging due to their delicate nature, which often excludes the use of conventional techniques requiring direct contact with a sample-loading frame. We present a drop-tower-based method, integrated with a unique sample holder and a series of effective springs and dampers, for testing soft samples with an emphasis on high-rate loading conditions. Our theoretical studies on the transient dynamics of the system show that well-controlled impacts between a movable mass and the sample holder can be used as a means to rapidly load soft samples. To demonstrate the integrated system, we experimentally quantify the critical acceleration that corresponds to the onset of cavitation nucleation for pure water and 7.5% gelatin samples. This study reveals that 7.5% gelatin has a significantly higher critical acceleration, approximately double that of pure water. Finally, we have also demonstrated a non-optical method of detecting cavitation in soft materials by correlating cavitation collapse with structural resonance of the sample container.
Modelling hard and soft states of Cygnus X-1 with propagating mass accretion rate fluctuations
NASA Astrophysics Data System (ADS)
Rapisarda, S.; Ingram, A.; van der Klis, M.
2017-12-01
We present a timing analysis of three Rossi X-ray Timing Explorer observations of the black hole binary Cygnus X-1 with the propagating mass accretion rate fluctuations model PROPFLUC. The model simultaneously predicts power spectra, time lags and coherence of the variability as a function of energy. The observations cover the soft and hard states of the source, and the transition between the two. We find good agreement between model predictions and data in the hard and soft states. Our analysis suggests that in the soft state the fluctuations propagate in an optically thin hot flow extending up to large radii above and below a stable optically thick disc. In the hard state, our results are consistent with a truncated disc geometry, where the hot flow extends radially inside the inner radius of the disc. In the transition from soft to hard state, the characteristics of the rapid variability are too complex to be successfully described with PROPFLUC. The surface density profile of the hot flow predicted by our model and the lack of quasi-periodic oscillations in the soft and hard states suggest that the spin of the black hole is aligned with the inner accretion disc and therefore probably with the rotational axis of the binary system.
The high accuracy data processing system of laser interferometry signals based on MSP430
NASA Astrophysics Data System (ADS)
Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong
2009-07-01
Generally speaking, two orthogonal signals are used in a single-frequency laser interferometer for direction discrimination and electronic subdivision. However, the interference signals usually carry three errors: a zero-offset error, an unequal-amplitude error and a quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Based on the Heydemann error compensation algorithm, it is proposed to compensate all three errors. Because the Heydemann model is computationally demanding, an improved algorithm is advanced to decrease the calculating time effectively, exploiting the special characteristic that only one item of data changes in each fitting operation. A real-time, dynamic compensation circuit is then designed. With the MSP430 microcontroller as the core of the hardware system, the two input signals carrying the three errors are digitized by the AD7862. After data processing with the improved algorithm, two ideal error-free signals are output by the AD7225. At the same time, the two original signals are converted into corresponding square waves and fed to the direction-discrimination circuit. The pulses exported from the direction-discrimination circuit are counted by the timer of the microcontroller. From the pulse count and the software subdivision, the final result is shown on the LED display. The algorithm and the circuit were adopted to test the capability of a laser interferometer with 8 times optical path difference, and a measuring accuracy of 12-14 nm was achieved.
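A minimal Python sketch of the Heydemann-style correction implied here (the offsets p, q, amplitudes A, B and phase error alpha are assumed to come from a prior ellipse fit; the paper's fast incremental refit is not reproduced):

    import math

    def heydemann_correct(u, v, p, q, A, B, alpha):
        # assumed signal model: u = p + A*cos(phi), v = q + B*sin(phi + alpha)
        c = (u - p) / A                                            # cos(phi)
        s = ((v - q) / B - c * math.sin(alpha)) / math.cos(alpha)  # sin(phi)
        return math.atan2(s, c)  # corrected phase for subdivision

Each corrected phase sample then feeds the subdivision and up/down counting exactly as an ideal quadrature pair would.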
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.03). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels and abdomen (P < 0.004), but a lower image quality for the trachea, segmental bronchi, lung parenchyma, and skeleton (P < 0.001). The mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
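The statistical workflow described (Friedman omnibus test with Wilcoxon post hoc comparisons) can be sketched with SciPy on hypothetical ratings; note that SciPy's wilcoxon is the paired signed-rank test, standing in here for the paper's rank-sum variant:

    from scipy.stats import friedmanchisquare, wilcoxon

    # hypothetical Likert ratings of one structure under three kernels
    soft  = [3, 4, 3, 4, 3, 4, 4, 3]
    hard  = [2, 3, 2, 3, 2, 3, 3, 2]
    mixed = [4, 4, 5, 4, 4, 5, 4, 4]

    stat, p = friedmanchisquare(soft, hard, mixed)  # omnibus test
    if p < 0.05:                                    # pairwise post hoc tests
        for a, b in [(mixed, soft), (mixed, hard), (soft, hard)]:
            print(wilcoxon(a, b).pvalue)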
An engineer's view on genetic information and biological evolution.
Battail, Gérard
2004-01-01
We develop ideas on genome replication introduced in Battail [Europhys. Lett. 40 (1997) 343]. Starting with the hypothesis that the genome replication process uses error-correcting means, and the auxiliary one that nested codes are used to this end, we first review the concepts of redundancy and error-correcting codes. Then we show that these hypotheses imply that: distinct species exist with a hierarchical taxonomy, there is a trend of evolution towards complexity, and evolution proceeds by discrete jumps. At least the first two features above may be considered as biological facts so, in the absence of direct evidence, they provide an indirect proof in favour of the hypothesized error-correction system. The very high redundancy of genomes makes it possible. In order to explain how it is implemented, we suggest that soft codes and replication decoding, to be briefly described, are plausible candidates. Experimentally proven properties of long-range correlation of the DNA message substantiate this claim.
Computing in the presence of soft bit errors. [caused by single event upset on spacecraft
NASA Technical Reports Server (NTRS)
Rasmussen, R. D.
1984-01-01
It is shown that single-event upsets (SEUs) due to cosmic rays are a significant source of single-bit errors in spacecraft computers. The physical mechanism of SEU, electron-hole generation by means of linear energy transfer (LET), is discussed with reference to the results of a study of the environmental effects on computer systems of the Galileo spacecraft. Techniques for making software more tolerant of cosmic ray effects are considered, including: reducing the number of registers used by the software; continuity testing of variables; redundant execution of major procedures for error detection; and encoding state variables to detect single-bit changes. Attention is also given to design modifications which may reduce the cosmic ray exposure of on-board hardware. These modifications include: shielding components operating in LEO; removing low-power Schottky parts; and the use of CMOS diodes. The SEU parameters of different electronic components are listed in a table.
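Two of the software techniques listed above, redundant execution with comparison and encoded state variables, can be illustrated with a short sketch. The helper names and the single-bit fault model below are hypothetical, not taken from the report.

```python
# Hedged sketch of two software SEU-tolerance techniques: redundant
# execution with comparison, and parity-encoded state variables.
# (Helper names and the fault model are illustrative assumptions.)

def redundant_execute(fn, *args):
    """Run fn twice; a mismatch signals a transient (soft) error."""
    a, b = fn(*args), fn(*args)
    if a != b:
        raise RuntimeError("SEU suspected: redundant results disagree")
    return a

def encode_state(value):
    """Append an even-parity bit so any single-bit flip is detectable."""
    parity = bin(value).count("1") & 1
    return (value << 1) | parity

def check_state(word):
    if bin(word).count("1") & 1:       # total parity must stay even
        raise RuntimeError("SEU detected: parity violated")
    return word >> 1

print(redundant_execute(sum, [1, 2, 3]))
state = encode_state(0b10110010)
state ^= 1 << 5                        # simulate a single-bit upset
try:
    check_state(state)
except RuntimeError as e:
    print(e)
```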
NASA Astrophysics Data System (ADS)
Mahapatra, Prasant Kumar; Sethi, Spardha; Kumar, Amod
2015-10-01
In conventional tool-positioning techniques, sensors embedded in the motion stages provide accurate tool position information. This paper describes a machine-vision system and an image processing technique for measuring the motion of a lathe tool from two-dimensional sequential images captured using a charge-coupled-device camera with a resolution of 250 microns. An algorithm was developed to calculate the observed distance travelled by the tool from the captured images. As expected, error was observed in the value of the distance traversed by the tool calculated from these images. Errors in lathe tool movement due to the machine vision system, calibration, environmental factors, etc., were minimized using two soft computing techniques, namely, artificial immune system (AIS) and particle swarm optimization (PSO). The results show the better capability of AIS over PSO.
Relation between the location of elements in the periodic table and various organ-uptake rates.
Ando, A; Ando, I; Hiraki, T; Hisada, K
1989-01-01
Fifty-four elements and 65 radioactive compounds were examined to determine the organ uptake rates for rats 3, 24 and 48 h after i.v. injection of these compounds. They were prepared as carrier-free nuclides, or containing a small amount of stable nuclide. Generally speaking, the behaviors of K, Rb, Cs and Tl in all the organs were very similar to one another, but they differed from that of Na. Bivalent hard acids were avidly taken up into bone; therefore, uptake rates in soft tissues were very small. Hard acids of tri-, quadri- and pentavalence which were taken up into the soft tissue organs decreased more slowly from these organs than other ions. Soft acids such as Hg2+ were bound very firmly to the component in the kidney. Anions (with few exceptions), GeCl4 and SbCl3 were rapidly excreted in urine, so that the uptake rates in organs were low.
Dispensing error rate after implementation of an automated pharmacy carousel system.
Oswald, Scott; Caldwell, Richard
2007-07-01
A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had initially increased; by the postimplementation data collection these rates had decreased, with automated dispensing cabinet fill showing the largest decrease in errors. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.
Soft-Decision Decoding of Binary Linear Block Codes Based on an Iterative Search Algorithm
NASA Technical Reports Server (NTRS)
Lin, Shu; Kasami, Tadao; Moorthy, H. T.
1997-01-01
This correspondence presents a suboptimum soft-decision decoding scheme for binary linear block codes based on an iterative search algorithm. The scheme uses an algebraic decoder to iteratively generate a sequence of candidate codewords, one at a time, using a set of test error patterns that are constructed based on the reliability information of the received symbols. When a candidate codeword is generated, it is tested based on an optimality condition. If it satisfies the optimality condition, then it is the most likely (ML) codeword and the decoding stops. If it fails the optimality test, a search for the ML codeword is conducted in a region which contains the ML codeword. The search region is determined by the current candidate codeword and the reliability of the received symbols. The search is conducted through a purged trellis diagram for the given code using the Viterbi algorithm. If the search fails to find the ML codeword, a new candidate is generated using a new test error pattern, and the optimality test and search are renewed. The process of testing and search continues until either the ML codeword is found or all the test error patterns are exhausted and the decoding process is terminated. Numerical results show that the proposed decoding scheme achieves either practically optimal performance or a performance only a fraction of a decibel away from optimal maximum-likelihood decoding, with a significant reduction in decoding complexity compared with Viterbi decoding based on the full trellis diagram of the codes.
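The reliability-based candidate-generation stage can be sketched as follows, using a (7,4) Hamming code as the algebraic decoder and a correlation metric for the final choice. This is a Chase-type simplification for illustration only; the optimality test and the purged-trellis Viterbi search of the actual scheme are omitted, and all parameters are assumptions.

```python
import itertools
import numpy as np

# Hedged sketch of reliability-based candidate generation with a (7,4)
# Hamming code as the algebraic decoder; optimality test and trellis
# search of the actual scheme are omitted.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def syndrome_decode(hard):
    """Algebraic decoder: flip the single bit addressed by the syndrome."""
    s = (H @ hard) % 2
    pos = s[0] + 2 * s[1] + 4 * s[2]       # columns of H encode 1..7
    out = hard.copy()
    if pos:
        out[pos - 1] ^= 1
    return out

def chase_decode(r, n_flip=2):
    hard = (r < 0).astype(int)             # BPSK mapping: 0 -> +1, 1 -> -1
    least_reliable = np.argsort(np.abs(r))[:n_flip]
    best, best_metric = None, -np.inf
    for k in range(n_flip + 1):
        for flips in itertools.combinations(least_reliable, k):
            test = hard.copy()
            test[list(flips)] ^= 1         # apply a test error pattern
            cand = syndrome_decode(test)
            metric = np.sum(r * (1 - 2 * cand))   # correlation with received
            if metric > best_metric:
                best, best_metric = cand, metric
    return best

r = np.array([0.9, 1.1, -0.1, 0.8, 0.05, -1.2, 1.0])   # noisy BPSK samples
print(chase_decode(r))
```

Each test error pattern flips some of the least reliable hard decisions before algebraic decoding, which is the same candidate-generation principle the correspondence builds on.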
Soft tissue deformation for surgical simulation: a position-based dynamics approach.
Camara, Mafalda; Mayer, Erik; Darzi, Ara; Pratt, Philip
2016-06-01
To assist the rehearsal and planning of robot-assisted partial nephrectomy, a real-time simulation platform is presented that allows surgeons to visualise and interact with rapidly constructed patient-specific biomechanical models of the anatomical regions of interest. Coupled to a framework for volumetric deformation, the platform furthermore simulates intracorporeal 2D ultrasound image acquisition, using preoperative imaging as the data source. This not only facilitates the planning of optimal transducer trajectories and viewpoints, but can also act as a validation context for manually operated freehand 3D acquisitions and reconstructions. The simulation platform was implemented within the GPU-accelerated NVIDIA FleX position-based dynamics framework. In order to validate the model and determine material properties and other simulation parameter values, a porcine kidney with embedded fiducial beads was CT-scanned and segmented. Acquisitions for the rest position and three different levels of probe-induced deformation were collected. Optimal values of the cluster stiffness coefficients were determined for a range of different particle radii, where the objective function comprised the mean distance error between real and simulated fiducial positions over the sequence of deformations. The mean fiducial error at each deformation stage was found to be compatible with the level of ultrasound probe calibration error typically observed in clinical practice. Furthermore, the simulation exhibited unconditional stability on account of its use of clustered shape-matching constraints. A novel position-based dynamics implementation of soft tissue deformation has been shown to facilitate several desirable simulation characteristics: real-time performance, unconditional stability, rapid model construction enabling patient-specific behaviour and accuracy with respect to reference CT images.
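A minimal sketch of a position-based dynamics step (predict positions, iteratively project constraints, recompute velocities) is given below using simple distance constraints. The platform itself relies on NVIDIA FleX clustered shape-matching constraints, so this stand-in only illustrates the general solver pattern, and all parameter values are assumptions.

```python
import numpy as np

# Minimal position-based dynamics sketch with distance constraints; the
# platform uses FleX clustered shape matching, which this does not model.
def pbd_step(x, v, edges, rest, dt=1/60, iters=10, stiffness=0.9):
    p = x + dt * v                                  # predict positions
    for _ in range(iters):                          # Gauss-Seidel projection
        for (i, j), d0 in zip(edges, rest):
            delta = p[j] - p[i]
            dist = np.linalg.norm(delta)
            if dist < 1e-9:
                continue
            corr = stiffness * 0.5 * (dist - d0) * delta / dist
            p[i] += corr                            # equal masses assumed
            p[j] -= corr
    v = 0.98 * (p - x) / dt                         # velocity update + damping
    return p, v

# Two particles joined by one constraint, initially stretched to 1.5.
x = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
v = np.zeros_like(x)
edges, rest = [(0, 1)], [1.0]
for _ in range(30):
    x, v = pbd_step(x, v, edges, rest)
print(np.linalg.norm(x[1] - x[0]))                  # settles near rest length
```

Because constraints are enforced directly on positions rather than through stiff penalty forces, the update cannot blow up for any time step, which is the unconditional stability the authors exploit.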
Monopolar soft-mode coagulation using hemostatic forceps for peptic ulcer bleeding.
Yamasaki, Yasushi; Takenaka, Ryuta; Nunoue, Tomokazu; Kono, Yoshiyasu; Takemoto, Koji; Taira, Akihiko; Tsugeno, Hirofumi; Fujiki, Shigeatsu
2014-01-01
Upper gastrointestinal hemorrhage from a bleeding peptic ulcer is sometimes difficult to treat by conventional endoscopic methods. Recently, monopolar electrocoagulation using a soft-coagulation system and hemostatic forceps (soft coagulation) has been used to prevent bleeding during endoscopic submucosal dissection. The aim of this study was to assess the safety and efficacy of soft coagulation in the treatment of bleeding peptic ulcer. A total of 39 patients with peptic ulcers were treated using soft coagulation at our hospital between January 2005 and March 2010. Emergency treatment employed an ERBE soft-mode coagulation system using hemostatic forceps. Second-look endoscopy was performed to evaluate the efficacy of prior therapy. Initial hemostasis was defined as hemostasis accomplished by soft coagulation, with or without other endoscopic therapy prior to soft coagulation. The rates of initial hemostasis, rebleeding, and ultimate hemostasis were retrospectively analyzed. The study subjects were 31 men and 8 women with a mean age of 68.3±13.7 years, with 29 gastric ulcers and 10 duodenal ulcers. Initial hemostasis was achieved in 37 patients (95%). During follow-up, bleeding recurred in two patients, who were retreated with soft coagulation. Monopolar soft coagulation is feasible and safe for treating bleeding peptic ulcers.
Djordjevic, Ivan B; Vasic, Bane
2006-05-29
A maximum a posteriori probability (MAP) symbol decoding supplemented with iterative decoding is proposed as an effective means for suppression of intrachannel nonlinearities. The MAP detector, based on the Bahl-Cocke-Jelinek-Raviv algorithm, operates on the channel trellis, a dynamical model of intersymbol interference, and provides soft-decision outputs that are processed further in an iterative decoder. A dramatic performance improvement is demonstrated. The main reason is that the conventional maximum-likelihood sequence detector based on the Viterbi algorithm provides hard-decision outputs only, hence preventing soft iterative decoding. The proposed scheme operates very well in the presence of strong intrachannel intersymbol interference, when other advanced forward error correction schemes fail, and it is also suitable for a 40 Gb/s upgrade over existing 10 Gb/s infrastructure.
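The forward-backward (BCJR) recursion on a channel trellis can be sketched for a toy two-tap intersymbol-interference channel as follows; the tap values, block length, and noise level are illustrative assumptions, not the optical channel model of the paper.

```python
import numpy as np

# Hedged BCJR sketch on the trellis of a toy two-tap ISI channel
# y_k = x_k + 0.5*x_{k-1} + n_k with BPSK symbols; all parameters are
# illustrative assumptions, not the paper's channel.
def bcjr_llr(y, sigma=0.5, h=(1.0, 0.5)):
    S = [+1, -1]                          # trellis state = previous symbol
    n = len(y)

    def gamma(k, s, x):                   # branch metric p(y_k | s, x)
        m = h[0] * x + h[1] * s
        return np.exp(-(y[k] - m) ** 2 / (2 * sigma ** 2))

    alpha = np.ones((n + 1, 2)) / 2       # forward recursion
    for k in range(n):
        for j, x in enumerate(S):         # next state equals the input x
            alpha[k + 1, j] = sum(alpha[k, i] * gamma(k, s, x)
                                  for i, s in enumerate(S))
        alpha[k + 1] /= alpha[k + 1].sum()

    beta = np.ones((n + 1, 2))            # backward recursion
    for k in range(n - 1, -1, -1):
        for i, s in enumerate(S):
            beta[k, i] = sum(beta[k + 1, j] * gamma(k, s, x)
                             for j, x in enumerate(S))
        beta[k] /= beta[k].sum()

    llr = np.empty(n)                     # per-symbol soft output
    for k in range(n):
        p = [sum(alpha[k, i] * gamma(k, s, x) * beta[k + 1, j]
                 for i, s in enumerate(S))
             for j, x in enumerate(S)]
        llr[k] = np.log(p[0] / p[1])
    return llr

rng = np.random.default_rng(1)
x = rng.choice([+1, -1], 20)
y = x + 0.5 * np.r_[0, x[:-1]] + rng.normal(0, 0.5, 20)
print(np.mean((bcjr_llr(y) > 0) == (x > 0)))   # hard-decision accuracy
```

The per-symbol log-likelihood ratios are exactly the soft outputs an iterative decoder would consume, which is what distinguishes this detector from a hard-output Viterbi detector.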
Takegami, Kazuki; Hayashi, Hiroaki; Okino, Hiroki; Kimoto, Natsumi; Maehata, Itsumi; Kanazawa, Yuki; Okazaki, Tohru; Hashizume, Takuya; Kobayashi, Ikuo
2016-07-01
Our aim in this study is to derive the identification limit below which a small optically stimulated luminescence (OSL) dosimeter worn on the patient's body does not disturb the medical image during X-ray diagnostic imaging. For evaluation of the detection limit based on an analysis of X-ray spectra, we propose a new quantitative identification method. We performed experiments using diagnostic X-ray equipment, a soft-tissue-equivalent phantom (1-20 cm), and a CdTe X-ray spectrometer standing in for one pixel of the X-ray imaging detector. Then, with the following two experimental settings, corresponding X-ray spectra were measured with 40-120 kVp and 0.5-1000 mAs at a source-to-detector distance of 100 cm: (1) X-rays penetrating the soft-tissue-equivalent phantom with the OSL dosimeter attached directly to the phantom, and (2) X-rays penetrating only the soft-tissue-equivalent phantom. Next, the energy fluence and the errors in the fluence were calculated from the spectra. When the energy fluences, with their errors, for these two experimental conditions were estimated to be indistinguishable, we defined the condition as one in which the OSL dosimeter is not identified on the X-ray image. Based on our analysis, we determined the identification limit of the dosimeter. We then compared our results with those for the general irradiation conditions used in clinics. We found that the OSL dosimeter could not be identified under the irradiation conditions of abdominal and chest radiography; namely, one can apply the OSL dosimeter to measurement of the exposure dose in the irradiation field of X-rays without disturbing medical images.
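The decision rule described above, comparing energy fluences with their counting errors, can be sketched as below; the synthetic spectra and the simple band-overlap criterion are assumptions for illustration, not the authors' measured data.

```python
import numpy as np

# Sketch of the decision rule: compute the energy fluence and a Poisson
# error for two spectra and call the dosimeter unidentifiable when the
# error bands overlap. The spectra below are synthetic assumptions.
E = np.linspace(10, 120, 111)                               # keV bin centers
counts_phantom = np.random.default_rng(5).poisson(200, E.size)
counts_dosim = np.random.default_rng(6).poisson(196, E.size)

def energy_fluence(counts):
    psi = np.sum(E * counts)                # energy fluence (arb. units)
    err = np.sqrt(np.sum(E ** 2 * counts))  # Poisson error propagation
    return psi, err

p1, e1 = energy_fluence(counts_phantom)
p2, e2 = energy_fluence(counts_dosim)
print("indistinct" if abs(p1 - p2) < e1 + e2 else "identifiable")
```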
Vinnicombe, S J; Whelehan, P; Thomson, K; McLean, D; Purdie, C A; Jordan, L B; Hubbard, S; Evans, A J
2014-04-01
Shear wave elastography (SWE) is a promising adjunct to greyscale ultrasound in differentiating benign from malignant breast masses. The purpose of this study was to characterise breast cancers which are not stiff on quantitative SWE, to elucidate potential sources of error in clinical application of SWE to evaluation of breast masses. Three hundred and two consecutive patients examined by SWE who underwent immediate surgery for breast cancer were included. Characteristics of 280 lesions with suspicious SWE values (mean stiffness >50 kPa) were compared with 22 lesions with benign SWE values (<50 kPa). Statistical significance of the differences was assessed using non-parametric goodness-of-fit tests. Pure ductal carcinoma in situ (DCIS) masses were more often soft on SWE than masses representing invasive breast cancer. Invasive cancers that were soft were more frequently: histological grade 1, tubular subtype, ≤10 mm invasive size and detected at screening mammography. No significant differences were found with respect to the presence of invasive lobular cancer, vascular invasion, hormone and HER-2 receptor status. Lymph node positivity was less common in soft cancers. Malignant breast masses classified as benign by quantitative SWE tend to have better prognostic features than those correctly classified as malignant. • Over 90 % of cancers assessable with ultrasound have a mean stiffness >50 kPa. • 'Soft' invasive cancers are frequently small (≤10 mm), low grade and screen-detected. • Pure DCIS masses are more often soft than invasive cancers (>40 %). • Large symptomatic masses are better evaluated with SWE than small clinically occult lesions. • When assessing small lesions, 'softness' should not raise the threshold for biopsy.
Differential detection in quadrature-quadrature phase shift keying (Q2PSK) systems
NASA Astrophysics Data System (ADS)
El-Ghandour, Osama M.; Saha, Debabrata
1991-05-01
A generalized quadrature-quadrature phase shift keying (Q2PSK) signaling format is considered for differential encoding and differential detection. Performance in the presence of additive white Gaussian noise (AWGN) is analyzed. The symbol error rate is found to be approximately twice the symbol error rate of a quaternary DPSK system operating at the same Eb/N0. However, the bandwidth efficiency of differential Q2PSK is substantially higher than that of quaternary DPSK. When the errors are due to AWGN, the ratio of the double-error rate to the single-error rate can be very high, although it approaches zero at high SNR. To improve the error rate, differential detection through maximum-likelihood decoding based on multiple or N-symbol observations is considered. If N and the SNR are large, this decoding gives a 3-dB advantage in error rate over conventional N = 2 differential detection, fully recovering the energy loss (as compared to coherent detection) if the observation is extended to a large number of symbol durations.
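As a point of reference for the quoted factor-of-two result, the quaternary DPSK baseline with conventional two-symbol differential detection can be simulated directly; the Monte-Carlo sketch below uses arbitrary parameters and does not model the Q2PSK format itself.

```python
import numpy as np

# Monte-Carlo sketch of the quaternary DPSK baseline with conventional
# two-symbol differential detection over AWGN; block length and Eb/N0
# are arbitrary, and the Q2PSK format itself is not modeled.
rng = np.random.default_rng(2)
n, EbN0_dB = 200_000, 8
N0 = 10 ** (-EbN0_dB / 10)                  # with Eb = 1 (Es = 2)
sym = rng.integers(0, 4, n)                 # phase increments of k*pi/2
phase = np.concatenate(([0.0], np.cumsum(sym * np.pi / 2)))
r = np.sqrt(2.0) * np.exp(1j * phase)
r = r + np.sqrt(N0 / 2) * (rng.normal(size=n + 1)
                           + 1j * rng.normal(size=n + 1))
d = r[1:] * np.conj(r[:-1])                 # two-symbol differential detector
est = np.round(np.angle(d) / (np.pi / 2)).astype(int) % 4
print(f"simulated DQPSK SER at Eb/N0 = {EbN0_dB} dB: {np.mean(est != sym):.4f}")
```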
Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes
NASA Astrophysics Data System (ADS)
Jing, Lin; Brun, Todd; Quantum Research Team
Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster, and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
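The quasi-cyclic structure itself, a parity-check matrix tiled from cyclically shifted identity blocks, is easy to sketch; the block size and shift table below are illustrative and do not reproduce the Hagiwara et al. construction or the CSS orthogonality condition between Hc and Hd.

```python
import numpy as np

# Sketch of the quasi-cyclic structure: a parity-check matrix tiled from
# circulant permutation matrices (cyclically shifted identities). The
# block size and shift table are illustrative assumptions.
def circulant(p, shift):
    return np.roll(np.eye(p, dtype=int), shift, axis=1)

def qc_parity_check(p, shifts):
    """shifts: J x L table; entry s places the identity shifted by s."""
    return np.block([[circulant(p, s) for s in row] for row in shifts])

H = qc_parity_check(5, [[0, 1, 2],
                        [0, 2, 4]])
print(H.shape)           # (10, 15): one p x p block per table entry
print(H.sum(axis=0))     # every column has weight J = 2
```

The circulant block structure is what permits compact storage (one shift value per block) and the efficient decoder implementations the abstract refers to.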
Zogheib, T; Jacobs, R; Bornstein, M M; Agbaje, J O; Anumendem, D; Klazen, Y; Politis, C
2018-01-01
Three-dimensional facial scanning is an innovation that provides an opportunity for digital data acquisition, smile analysis, and communication of the treatment plan and outcome with patients. The aim was to assess the applicability of 3D facial scanning as compared to 2D clinical photography. The sample consisted of thirty Caucasians aged between 25 and 50 years, without any dentofacial deformities. Fifteen soft-tissue facial landmarks were identified twice by 3 observers on 2D and 3D images of the 30 subjects. Five linear proportions and nine angular measurements were established in the orbital, nasal and oral regions. These data were compared to anthropometric norms of young Caucasians. Furthermore, a questionnaire was completed by 14 other observers, according to their personal judgment of the 2D and 3D images. Quantitatively, proportions linking the three facial regions in 3D were closer to the clinical standard (error rates of 3.3% for 2D and 1.8% for 3D). Qualitatively, in 67% of the cases, observers were as confident about 3D as they were about 2D. The intraclass correlation coefficient (ICC) revealed better agreement between observers in 3D for the questions related to facial form, lip step and chin posture. Laser facial scanning could be a useful and reliable tool to analyze the circumoral region for orthodontic and orthognathic treatments as well as for plastic surgery planning and outcome.
Continuous quality improvement using intelligent infusion pump data analysis.
Breland, Burnis D
2010-09-01
The use of continuous quality-improvement (CQI) processes in the implementation of intelligent infusion pumps in a community teaching hospital is described. After the decision was made to implement intelligent i.v. infusion pumps in a 413-bed, community teaching hospital, drug libraries for use in the safety software had to be created. Before drug libraries could be created, it was necessary to determine the epidemiology of medication use in various clinical care areas. Standardization of medication administration was performed through the CQI process, using practical knowledge of clinicians at the bedside and evidence-based drug safety parameters in the scientific literature. Post-implementation, CQI allowed refinement of clinically important safety limits while minimizing inappropriate, meaningless soft limit alerts on a few select agents. Assigning individual clinical care areas (CCAs) to individual patient care units facilitated customization of drug libraries and identification of specific CCA compliance concerns. Between June 2007 and June 2008, there were seven library updates. These involved drug additions and deletions, customization of individual CCAs, and alterations of limits. Overall compliance with safety software use rose over time, from 33% in November 2006 to over 98% in December 2009. Many potentially clinically significant dosing errors were intercepted by the safety software, prompting edits by end users. Only 4-6% of soft limit alerts resulted in edits. Compliance rates for use of infusion pump safety software varied among CCAs over time. Education, auditing, and refinement of drug libraries led to improved compliance in most CCAs.
TOPICAL REVIEW: Anatomical imaging for radiotherapy
NASA Astrophysics Data System (ADS)
Evans, Philip M.
2008-06-01
The goal of radiation therapy is to achieve maximal therapeutic benefit expressed in terms of a high probability of local control of disease with minimal side effects. Physically this often equates to the delivery of a high dose of radiation to the tumour or target region whilst maintaining an acceptably low dose to other tissues, particularly those adjacent to the target. Techniques such as intensity modulated radiotherapy (IMRT), stereotactic radiosurgery and computer planned brachytherapy provide the means to calculate the radiation dose delivery to achieve the desired dose distribution. Imaging is an essential tool in all state of the art planning and delivery techniques: (i) to enable planning of the desired treatment, (ii) to verify the treatment is delivered as planned and (iii) to follow-up treatment outcome to monitor that the treatment has had the desired effect. Clinical imaging techniques can be loosely classified into anatomic methods which measure the basic physical characteristics of tissue such as their density and biological imaging techniques which measure functional characteristics such as metabolism. In this review we consider anatomical imaging techniques. Biological imaging is considered in another article. Anatomical imaging is generally used for goals (i) and (ii) above. Computed tomography (CT) has been the mainstay of anatomical treatment planning for many years, enabling some delineation of soft tissue as well as radiation attenuation estimation for dose prediction. Magnetic resonance imaging is fast becoming widespread alongside CT, enabling superior soft-tissue visualization. Traditionally scanning for treatment planning has relied on the use of a single snapshot scan. Recent years have seen the development of techniques such as 4D CT and adaptive radiotherapy (ART). In 4D CT raw data are encoded with phase information and reconstructed to yield a set of scans detailing motion through the breathing, or cardiac, cycle. In ART a set of scans is taken on different days. Both allow planning to account for variability intrinsic to the patient. Treatment verification has been carried out using a variety of technologies including: MV portal imaging, kV portal/fluoroscopy, MVCT, conebeam kVCT, ultrasound and optical surface imaging. The various methods have their pros and cons. The four x-ray methods involve an extra radiation dose to normal tissue. The portal methods may not generally be used to visualize soft tissue, consequently they are often used in conjunction with implanted fiducial markers. The two CT-based methods allow measurement of inter-fraction variation only. Ultrasound allows soft-tissue measurement with zero dose but requires skilled interpretation, and there is evidence of systematic differences between ultrasound and other data sources, perhaps due to the effects of the probe pressure. Optical imaging also involves zero dose but requires good correlation between the target and the external measurement and thus is often used in conjunction with an x-ray method. The use of anatomical imaging in radiotherapy allows treatment uncertainties to be determined. These include errors between the mean position at treatment and that at planning (the systematic error) and the day-to-day variation in treatment set-up (the random error). Positional variations may also be categorized in terms of inter- and intra-fraction errors. 
Various empirical treatment margin formulae and intervention approaches exist to determine the optimum strategies for treatment in the presence of these known errors. Other methods exist to try to minimize error margins drastically including the currently available breath-hold techniques and the tracking methods which are largely in development. This paper will review anatomical imaging techniques in radiotherapy and how they are used to boost the therapeutic benefit of the treatment.
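One widely cited example of such an empirical margin recipe is the formula of van Herk and colleagues, which weights systematic errors far more heavily than random ones; the numerical values in the sketch below are illustrative assumptions only.

```python
# One widely cited empirical recipe (van Herk et al.): CTV-to-PTV margin
# M = 2.5*Sigma + 0.7*sigma, with Sigma the standard deviation of the
# systematic errors and sigma that of the random errors. The numerical
# values below are illustrative assumptions.
Sigma = 3.0   # mm, systematic error SD
sigma = 2.5   # mm, random error SD
M = 2.5 * Sigma + 0.7 * sigma
print(f"suggested CTV-to-PTV margin: {M:.1f} mm")
```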
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se
2016-01-21
Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling over the entire range of coupling constants, both in rigid and highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulation, and experiment. The resonance conditions and the efficiency of suppressing rf field errors are examined and compared to those for other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of analytical and numerical calculations. The technique benefits from dipolar resolution over a wider range of coupling constants than other state-of-the-art methods and thus is advantageous in studies of complex solids with a broad range of dynamic processes and degrees of molecular mobility.
Learning the inverse kinetics of an octopus-like manipulator in three-dimensional space.
Giorelli, M; Renda, F; Calisti, M; Arienti, A; Ferri, G; Laschi, C
2015-05-13
This work addresses the inverse kinematics problem of a bioinspired octopus-like manipulator moving in three-dimensional space. The bioinspired manipulator has a conical soft structure that confers the ability of twirling around objects as a real octopus arm does. Despite the simple design, the soft conical shape manipulator driven by cables is described by nonlinear differential equations, which are difficult to solve analytically. Since exact solutions of the equations are not available, the Jacobian matrix cannot be calculated analytically and the classical iterative methods cannot be used. To overcome the intrinsic problems of methods based on the Jacobian matrix, this paper proposes a neural network learning the inverse kinematics of a soft octopus-like manipulator driven by cables. After the learning phase, a feed-forward neural network is able to represent the relation between manipulator tip positions and forces applied to the cables. Experimental results show that a desired tip position can be achieved in a short time, since heavy computations are avoided, with a degree of accuracy of 8% relative average error with respect to the total arm length.
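The learning scheme, a feed-forward network trained to map tip positions back to actuation, can be sketched on a toy two-link planar arm standing in for the cable-driven soft manipulator; the network size, data generation, angle ranges, and unit link lengths are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hedged sketch: a feed-forward network learns the inverse kinematics map
# tip position -> actuation on a toy two-link planar arm; all parameters
# are assumptions, not the soft-arm model of the paper.
rng = np.random.default_rng(3)
q = rng.uniform(0, np.pi / 2, (5000, 2))            # joint angles

def fk(q):                                          # forward kinematics
    return np.column_stack([np.cos(q[:, 0]) + np.cos(q[:, 0] + q[:, 1]),
                            np.sin(q[:, 0]) + np.sin(q[:, 0] + q[:, 1])])

tip = fk(q)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                   random_state=0).fit(tip, q)      # learn the inverse map
q_hat = net.predict(tip[:5])
print(np.abs(fk(q_hat) - tip[:5]).max())            # forward-check solutions
```

Restricting both joint angles to [0, pi/2] keeps the inverse map single-valued in this toy, sidestepping the multiple-solution issue a redundant soft arm would raise; once trained, evaluation is a single forward pass, which is why the paper's approach avoids heavy online computation.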
A Novel Soft Pneumatic Artificial Muscle with High-Contraction Ratio.
Han, Kwanghyun; Kim, Nam-Ho; Shin, Dongjun
2018-06-20
There is a growing interest in soft actuators for human-friendly robotic applications. However, it is very challenging for conventional soft actuators to achieve both a large working distance and high force. To address this problem, we present a high-contraction-ratio pneumatic artificial muscle (HCRPAM), which has a novel actuation concept. The HCRPAM can contract substantially while generating a large force suitable for a wide range of robotic applications. Our proposed prototyping method allows for easy and quick fabrication, accommodating various design variables. We derived a mathematical model using the virtual work principle, and validated the model experimentally. We conducted simulations for the design optimization using this model. Our experimental results show that the HCRPAM has a 183.3% larger contraction ratio and 37.1% higher force output than the conventional pneumatic artificial muscle (McKibben muscle). Furthermore, the actuator shows comparable position-tracking performance at 1.0 Hz and a relatively low hysteresis error of 4.8%. Finally, we discussed the controllable bending characteristics of the HCRPAM, which uses heterogeneous materials and has an asymmetrical structure to make it comfortable for a human to wear.
A Soft-Start Circuit for Arcjet Ignition
NASA Technical Reports Server (NTRS)
Hamley, John A.; Sankovic, John M.
1993-01-01
The reduced propellant flow rates associated with high-performance arcjets have placed new emphasis on electrode erosion, especially at startup. A soft-start current profile was defined which limited current overshoot during the initial 30 to 50 ms of operation and maintained the current significantly below the nominal arc current for the first eight seconds of operation. A 2-5 kW arcjet power processing unit (PPU) was modified to provide this current profile, and a 500-cycle test using simulated fully decomposed hydrazine was conducted to determine the electrode erosion during startup. Electrode geometry and mass flow rates were selected based on requirements for a 600-second-specific-impulse mission-average arcjet system. The flow rate was varied throughout the test to simulate the blowdown of a flight propellant system. Electrode damage was negligible at flow rates above 33 mg/s, and minor chamfering of the constrictor occurred at flow rates of 33 to 30 mg/s, corresponding to flow rates expected in the last 40 percent of the mission. The constrictor diameter remained unchanged and the thruster remained operable at the completion of the test. The soft-start current profile significantly reduced electrode damage when compared to state-of-the-art starting techniques.
Executive Council lists and general practitioner files
Farmer, R. D. T.; Knox, E. G.; Cross, K. W.; Crombie, D. L.
1974-01-01
An investigation of the accuracy of general practitioner and Executive Council files was approached by a comparison of the two. High error rates were found, including both file errors and record errors. On analysis it emerged that file error rates could not be satisfactorily expressed except in a time-dimensioned way, and we were unable to do this within the context of our study. Record error rates and field error rates were expressible as proportions of the number of records on both the lists; 79.2% of all records exhibited non-congruencies and particular information fields had error rates ranging from 0.8% (assignation of sex) to 68.6% (assignation of civil state). Many of the errors, both field errors and record errors, were attributable to delayed updating of mutable information. It is concluded that the simple transfer of Executive Council lists to a computer filing system would not solve all the inaccuracies and would not in itself permit Executive Council registers to be used for any health care applications requiring high accuracy. For this it would be necessary to design and implement a purpose-designed health care record system which would include, rather than depend upon, the general practitioner remuneration system. PMID:4816588
A finite nonlinear hyper-viscoelastic model for soft biological tissues.
Panda, Satish Kumar; Buist, Martin Lindsay
2018-03-01
Soft tissues exhibit highly nonlinear rate- and time-dependent stress-strain behaviour. Strain and strain-rate dependencies are often modelled using a hyperelastic model and a discrete (standard linear solid) or continuous-spectrum (quasi-linear) viscoelastic model, respectively. However, these models are unable to properly capture the material characteristics because hyperelastic models are unsuited for time-dependent events, whereas the common viscoelastic models are insufficient for the nonlinear and finite-strain viscoelastic tissue responses. Convolution-integral-based models can demonstrate a finite viscoelastic response; however, their derivations are not consistent with the laws of thermodynamics. The aim of this work was to develop a three-dimensional finite hyper-viscoelastic model for soft tissues using a thermodynamically consistent approach. In addition, a nonlinear function, dependent on strain and strain rate, was adopted to capture the nonlinear variation of viscosity during a loading process. To demonstrate the efficacy and versatility of this approach, the model was used to recreate the experimental results performed on different types of soft tissues. In all the cases, the simulation results were well matched (R² ≥ 0.99) with the experimental data.
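As a hedged one-dimensional illustration of the idea, the sketch below integrates a standard-linear-solid overstress whose viscosity depends on both strain and strain rate; the functional form, constants, and loading are assumptions and do not reproduce the three-dimensional model of the paper.

```python
import numpy as np

# One-dimensional illustrative sketch: a standard-linear-solid overstress
# integrated with a viscosity that depends on strain and strain rate. The
# functional form and all constants are assumptions, not the paper's model.
def stress_response(strain, dt, E0=10.0, E1=40.0, a=5.0, b=2.0):
    s_v, out = 0.0, []
    for k in range(1, len(strain)):
        rate = (strain[k] - strain[k - 1]) / dt
        eta = a * (1 + b * strain[k] ** 2) * (1 + abs(rate))  # assumed form
        tau = eta / E1                       # current relaxation time
        decay = np.exp(-dt / tau)            # exact update for constant rate
        s_v = s_v * decay + E1 * rate * tau * (1 - decay)
        out.append(E0 * strain[k] + s_v)     # equilibrium + viscous branch
    return np.array(out)

t = np.linspace(0, 1, 1001)
print(stress_response(0.2 * t, t[1] - t[0])[-1])     # ramp-loading response
```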
Westbrook, Johanna I.; Li, Ling; Lehnbom, Elin C.; Baysari, Melissa T.; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O.
2015-01-01
Objectives To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Design Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as ‘clinically important’. Setting Two major academic teaching hospitals in Sydney, Australia. Main Outcome Measures Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. Results A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6–1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0–253.8), but only 13.0/1000 (95% CI: 3.4–22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4–28.4%) contained ≥1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Conclusions Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. PMID:25583702
Histologic effects of a high-repetition pulsed Nd:YAG laser on intraoral soft tissue
NASA Astrophysics Data System (ADS)
White, Joel M.; Goodis, Harold E.; Yessik, Michael J.; Myers, Terry D.
1995-05-01
High-repetition rate, fiberoptic-delivered Nd:YAG lasers have increased oral soft tissue laser applications. This study focused on three parameters: the temperature rise occurring in deeper tissue during excision, the histology of thermal coagulation during excision of oral tissue, and effects of accidental exposure to adjacent hard tissue. Thermocouples were placed 5.0 +/- 0.5 mm in bone below fresh bovine gingiva and at the same depth in tongue; temperatures in the underlying tissue were measured during laser excision. An Nd:YAG laser with 100 microsecond(s) pulse duration was used to excise the tissue using a 200 or 300 micrometers diameter fiber in contact with the tissue. The soft tissue was excised using constant force and rate with laser powers of 1.5, 3, 5, and 10 W, and a variety of pulse rates. The tissue was bioprepared, sectioned and stained with hematoxylin and eosin. The width and depth of the tissue removed as well as lateral and deep thermal coagulation were measured in histologic sections with a measuring microscope (10x). Multifactor randomized ANOVA showed that probe diameter and repetition rates were not significant variables (p
Radio frequency tags systems to initiate system processing
NASA Astrophysics Data System (ADS)
Madsen, Harold O.; Madsen, David W.
1994-09-01
This paper describes the automatic identification technology which has been installed at Applied Magnetic Corp. MR fab. World class manufacturing requires technology exploitation. This system combines (1) FluoroTrac cassette and operator tracking, (2) CELLworks cell controller software tools, and (3) Auto-Soft Inc. software integration services. The combined system eliminates operator keystrokes and errors during normal processing within a semiconductor fab. The methods and benefits of this system are described.
Studies Of Single-Event-Upset Models
NASA Technical Reports Server (NTRS)
Zoutendyk, J. A.; Smith, L. S.; Soli, G. A.
1988-01-01
Report presents latest in series of investigations of "soft" bit errors known as single-event upsets (SEU). In this investigation, the SEU response of a low-power, Schottky-diode-clamped, transistor/transistor-logic (TTL) static random-access memory (RAM) was observed during irradiation by Br and O ions in the ranges of 100 to 240 and 20 to 100 MeV, respectively. The experimental data complete the verification of the computer model used to simulate SEU in this circuit.
Clean galena, contaminated lead, and soft errors in memory chips
NASA Astrophysics Data System (ADS)
Lykken, G. I.; Hustoft, J.; Ziegler, B.; Momcilovic, B.
2000-10-01
Lead (Pb) disks were exposed to a radon (Rn)-rich atmosphere and surface alpha particle emissions were detected over time. Cumulative 210Po alpha emission increased nearly linearly with time. Conversely, cumulative emission for each of 218Po and 214Po was constant after one and two hours, respectively. Processing of radiation-free Pb ore (galena) in inert atmospheres was compared with processing in ambient air. Galena processed within a flux heated in a graphite crucible while exposed to an inert atmosphere resulted in lead contaminated with 210Po (Trial 1). A glove box was next used to prepare a baseline radiation-free flux sample in an alumina crucible that was heated in an oven with an inert atmosphere (Trials 2 and 3). Ambient air was thereafter introduced, in place of the inert atmosphere, to the radiation-free flux mixture during processing (Trial 4). Ambient air introduced Rn and its progeny (RAD) into the flux during processing, so that the processed Pb contained Po isotopes. A typical coke used in lead smelting also emitted numerous alpha particles. We postulate that alpha particles from tin/lead solder bumps, a cause of computer chip memory soft errors, may originate from Rn and RAD in the ambient air and/or the coke used as a reducing agent in the standard galena smelting procedure.
Tirumani, Sree Harsha; Wagner, Andrew J; Tirumani, Harika; Shinagare, Atul B; Jagannathan, Jyothi P; Hornick, Jason L; George, Suzanne; Ramaiya, Nikhil H
2015-06-01
To define the various CT densities of the nonlipomatous component of dedifferentiated liposarcoma (DDLPS) and to determine whether the rate of growth varies with density. This study identified 60 patients with DDLPS (38 men, 22 women; mean age at diagnosis 59 years; range 35-82 years) who had one or more resections. CT scans obtained immediately before the surgical resection (presurgery) and up to a maximum of one year before the surgery (baseline) were reviewed by two radiologists to note the density of the nonlipomatous elements and the rate of growth during that period. Clinical and histopathological data were extracted from electronic medical records. Rates of growth of the various densities were compared using the Kruskal-Wallis test. Three distinct densities of the nonlipomatous component were noted: soft tissue density (SD), fluid density (FD), and mixed density (MD). Of 109 lesions on the presurgery scan (SD = 78; MD = 22; FD = 9), scans at baseline were available for 72/109 lesions (SD = 49; MD = 14; FD = 9). Median growth rates/month without treatment, with chemotherapy, and with radiotherapy were 40%, 24%, and 62%, respectively, for SD lesions and 28%, 61%, and 52% for MD lesions. For FD lesions, they were 72% and 35%, respectively, without treatment and with chemotherapy. There was no statistical difference in the rate of growth of the various densities. Density changed over time in 8/72 (11%) lesions, including 2/49 SD lesions (to MD), 1/14 MD lesions (to SD), and 5/9 FD lesions (to SD). DDLPS has three distinct CT densities, of which soft tissue density is the most common. Despite not being statistically significant, fluid density lesions had a rapid growth rate and often converted to soft tissue density in our study.
Optical Assessment of Soft Contact Lens Edge-Thickness.
Tankam, Patrice; Won, Jungeun; Canavesi, Cristina; Cox, Ian; Rolland, Jannick P
2016-08-01
To assess the edge shape of soft contact lenses using Gabor-Domain Optical Coherence Microscopy (GD-OCM) with a 2-μm imaging resolution in three dimensions and to generate edge-thickness profiles at different distances from the edge tip of soft contact lenses. A high-speed custom-designed GD-OCM system was used to produce 3D images of the edge of an experimental soft contact lens (Bausch + Lomb, Rochester, NY) in four different configurations: in air, submerged into water, submerged into saline with contrast agent, and placed onto the cornea of a porcine eyeball. An algorithm to compute the edge-thickness was developed and applied to cross-sectional images. The proposed algorithm includes the accurate detection of the interfaces between the lens and the environment, and the correction of the refraction error. The sharply defined edge tip of a soft contact lens was visualized in 3D. Results showed precise thickness measurement of the contact lens edge profile. Fifty cross-sectional image frames for each configuration were used to test the robustness of the algorithm in evaluating the edge-thickness at any distance from the edge tip. The precision of the measurements was less than 0.2 μm. The results confirmed the ability of GD-OCM to provide high-definition images of soft contact lens edges. As a nondestructive, precise, and fast metrology tool for soft contact lens measurement, the integration of GD-OCM in the design and manufacturing of contact lenses will be beneficial for further improvement in edge design and quality control. In the clinical perspective, the in vivo evaluation of the lens fitted onto the cornea will advance our understanding of how the edge interacts with the ocular surface. The latter will provide insights into the impact of long-term use of contact lenses on the visual performance.
Application of artificial neural networks to chemostratigraphy
NASA Astrophysics Data System (ADS)
Malmgren, BjöRn A.; Nordlund, Ulf
1996-08-01
Artificial neural networks, a branch of artificial intelligence, are computer systems formed by a number of simple, highly interconnected processing units that have the ability to learn a set of target vectors from a set of associated input signals. Neural networks learn by self-adjusting a set of parameters, using some pertinent algorithm to minimize the error between the desired output and the network output. We explore the potential of this approach in solving a problem involving classification of geochemical data. The data, taken from the literature, are derived from four late Quaternary zones of volcanic ash of basaltic and rhyolitic origin from the Norwegian Sea. These ash layers span the oxygen isotope zones 1, 5, 7, and 11, respectively (last 420,000 years). The data consist of nine geochemical variables (oxides) determined in each of 183 samples. We employed a three-layer back-propagation neural network to assess its efficiency in optimally differentiating samples from the four ash zones on the basis of their geochemical composition. For comparison, three statistical pattern recognition techniques, linear discriminant analysis, the k-nearest neighbor (k-NN) technique, and SIMCA (soft independent modeling of class analogy), were applied to the same data. All of these showed considerably higher error rates than the artificial neural network, indicating that the back-propagation network was indeed more powerful in correctly classifying the ash particles to the appropriate zone on the basis of their geochemical composition.
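The comparison can be sketched in a few lines with a three-layer back-propagation classifier on synthetic stand-in data (nine variables, four zones, roughly the same sample count); the class means and network size are assumptions, not the Norwegian Sea oxide dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hedged sketch of the comparison: a three-layer back-propagation network
# classifying samples into four zones from nine variables. The synthetic
# class means and sizes are stand-ins for the Norwegian Sea oxide data.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(m, 1.0, (46, 9)) for m in (0.0, 1.5, 3.0, 4.5)])
y = np.repeat([0, 1, 2, 3], 46)                     # four ash zones
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
print(f"hold-out error rate: {1 - net.score(Xte, yte):.3f}")
```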
[Velopharyngeal closure pattern and speech performance among submucous cleft palate patients].
Heng, Yin; Chunli, Guo; Bing, Shi; Yang, Li; Jingtao, Li
2017-06-01
To characterize the velopharyngeal closure patterns and speech performance among submucous cleft palate patients. Patients with submucous cleft palate visiting the Department of Cleft Lip and Palate Surgery, West China Hospital of Stomatology, Sichuan University between 2008 and 2016 were reviewed. Outcomes of subjective speech evaluation (velopharyngeal function and consonant articulation) and of objective nasopharyngeal endoscopy (mobility of the soft palate and pharyngeal walls) were retrospectively analyzed. A total of 353 cases were retrieved in this study, among which 138 (39.09%) demonstrated velopharyngeal competence, 176 (49.86%) velopharyngeal incompetence, and 39 (11.05%) marginal velopharyngeal incompetence. A total of 268 cases were subjected to nasopharyngeal endoscopy examination, of which 167 (62.31%) demonstrated a circular closure pattern, 89 (33.21%) a coronal pattern, and 12 (4.48%) a sagittal pattern. Passavant's ridge existed in 45.51% (76/167) of patients with circular closure and 13.48% (12/89) of patients with coronal closure. Among the 353 patients included in this study, 137 (38.81%) presented normal articulation, 124 (35.13%) consonant elimination, 51 (14.45%) compensatory articulation, 36 (10.20%) consonant weakening, 25 (7.08%) consonant replacement, and 36 (10.20%) multiple articulation errors. Circular closure was the most prevalent velopharyngeal closure pattern among patients with submucous cleft palate, and high-pressure consonant elimination was the most common articulation abnormality.
McCormack, Joshua R.; Underwood, Frank B.; Slaven, Emily J.; Cappaert, Thomas A.
2016-01-01
Background: Eccentric exercise is commonly used in the management of Achilles tendinopathy (AT) but its effectiveness for insertional AT has been questioned. Soft tissue treatment (Astym) combined with eccentric exercise could result in better outcomes than eccentric exercise alone. Hypothesis: Soft tissue treatment (Astym) plus eccentric exercise will be more effective than eccentric exercise alone for subjects with insertional AT. Study Design: Prospective randomized controlled trial. Level of Evidence: Level 2. Methods: Sixteen subjects were randomly assigned to either a soft tissue treatment (Astym) and eccentric exercise group or an eccentric exercise–only group. Intervention was completed over a 12-week period, with outcomes assessed at baseline, 4, 8, 12, 26, and 52 weeks. Outcomes included the Victorian Institute of Sport Assessment Achilles-Specific Questionnaire (VISA-A), the numeric pain rating scale (NPRS), and the global rating of change (GROC). Results: Significantly greater improvements on the VISA-A were noted in the soft tissue treatment (Astym) group over the 12-week intervention period, and these differences were maintained at the 26- and 52-week follow-ups. Both groups experienced a similar statistically significant improvement in pain over the short and long term. A significantly greater number of subjects in the soft tissue treatment (Astym) group achieved a successful outcome at 12 weeks. Conclusion: Soft tissue treatment (Astym) plus eccentric exercise was more effective than eccentric exercise only at improving function during both short- and long-term follow-up periods. Clinical Relevance: Soft tissue treatment (Astym) plus eccentric exercise appears to be a beneficial treatment program that clinicians should consider incorporating into the management of their patients with insertional AT. PMID:26893309
Colen, David L; Carney, Martin J; Shubinets, Valeriy; Lanni, Michael A; Liu, Tiffany; Levin, L Scott; Lee, Gwo-Chin; Kovach, Stephen J
2018-04-01
Total knee arthroplasty is a common orthopedic procedure in the United States and complications can be devastating. Soft-tissue compromise or joint infection may cause failure of prosthesis requiring knee fusion or amputation. The role of a plastic surgeon in total knee arthroplasty is critical for cases requiring optimization of the soft-tissue envelope. The purpose of this study was to elucidate factors associated with total knee arthroplasty salvage following complications and clarify principles of reconstruction to optimize outcomes. A retrospective review of patients requiring soft-tissue reconstruction performed by the senior author after total knee arthroplasty over 8 years was completed. Logistic regression and Fisher's exact tests determined factors associated with the primary outcome, prosthesis salvage versus knee fusion or amputation. Seventy-three knees in 71 patients required soft-tissue reconstruction (mean follow-up, 1.8 years), with a salvage rate of 61.1 percent, mostly using medial gastrocnemius flaps. Patients referred to our institution with complicated periprosthetic wounds were significantly more likely to lose their knee prosthesis than patients treated only within our system. Patients with multiple prior knee operations before definitive soft-tissue reconstruction had significantly decreased rates of prosthesis salvage and an increased risk of amputation. Knee salvage significantly decreased with positive joint cultures (Gram-negative greater than Gram-positive organisms) and particularly at the time of definitive reconstruction, which also trended toward an increased risk of amputation. In revision total knee arthroplasty, prompt soft-tissue reconstruction improves the likelihood of success, and protracted surgical courses and contamination increase failure and amputations. The authors show a benefit to involving plastic surgeons early in the course of total knee arthroplasty complications to optimize genicular soft tissues. Therapeutic, III.
Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L
2017-05-01
Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors.
A comparison study of different facial soft tissue analysis methods.
Kook, Min-Suk; Jung, Seunggon; Park, Hong-Ju; Oh, Hee-Kyun; Ryu, Sun-Youl; Cho, Jin-Hyoung; Lee, Jae-Seo; Yoon, Suk-Ja; Kim, Min-Soo; Shin, Hyo-Keun
2014-07-01
The purpose of this study was to evaluate several different facial soft tissue measurement methods. After marking 15 landmarks in the facial area of 12 mannequin heads of different sizes and shapes, facial soft tissue measurements were performed by the following 5 methods: Direct anthropometry, Digitizer, 3D CT, 3D scanner, and DI3D system. With these measurement methods, 10 measurement values representing the facial width, height, and depth were determined twice with a one week interval by one examiner. These data were analyzed with the SPSS program. The position created based on multi-dimensional scaling showed that direct anthropometry, 3D CT, digitizer, 3D scanner demonstrated relatively similar values, while the DI3D system showed slightly different values. All 5 methods demonstrated good accuracy and had a high coefficient of reliability (>0.92) and a low technical error (<0.9 mm). The measured value of the distance between the right and left medial canthus obtained by using the DI3D system was statistically significantly different from that obtained by using the digital caliper, digitizer and laser scanner (p < 0.05), but the other measured values were not significantly different. On evaluating the reproducibility of measurement methods, two measurement values (Ls-Li, G-Pg) obtained by using direct anthropometry, one measurement value (N'-Prn) obtained by using the digitizer, and four measurement values (EnRt-EnLt, AlaRt-AlaLt, ChRt-ChLt, Sn-Pg) obtained by using the DI3D system, were statistically significantly different. However, the mean measurement error in every measurement method was low (<0.7 mm). All measurement values obtained by using the 3D CT and 3D scanner did not show any statistically significant difference. The results of this study show that all 3D facial soft tissue analysis methods demonstrate favorable accuracy and reproducibility, and hence they can be used in clinical practice and research studies.
Corrosion of aluminium in soft drinks.
Seruga, M; Hasenay, D
1996-04-01
The corrosion of aluminium (Al) in several brands of soft drinks (cola- and citrate-based drinks) has been studied using an electrochemical method, namely potentiodynamic polarization. The results show that the corrosion of Al in soft drinks is a very slow, time-dependent and complex process, strongly influenced by passivation, complexation and adsorption processes. The corrosion of Al in these drinks occurs principally due to the presence of acids: citric acid in citrate-based drinks and orthophosphoric acid in cola-based drinks. The corrosion rate of Al rose with an increase in the acidity of the soft drinks, i.e. with an increase in total acid content. The corrosion rates are much higher in cola-based drinks than in citrate-based drinks, because: (1) orthophosphoric acid is more corrosive to Al than citric acid, and (2) a quite different passive oxide layer (with different properties) is formed on Al depending on whether the drink is cola or citrate based. Potentiodynamic polarization was shown to be very suitable for studying the corrosion of Al in soft drinks, especially when combined with a non-electrochemical method, e.g. graphite furnace atomic absorption spectrometry (GFAAS).
Szenczi-Cseh, J; Horváth, Zs; Ambrus, Á
2017-12-01
We tested the applicability of the EPIC-SOFT food picture series used in the context of a Hungarian food consumption survey gathering data for exposure assessment, and investigated errors in food portion estimation resulting from visual perception and from conceptualisation-memory. Sixty-two participants in three age groups (10 to <74 years) were presented with three different portion sizes of five foods. The results were considered acceptable if the relative difference between the average estimated and actual weight obtained through the perception method was ≤25%, and the relative standard deviation of the individual weight estimates was <30% after compensating for the effect of potential outliers by winsorisation. Picture series for all five food items were rated acceptable. Small portion sizes tended to be overestimated, and large ones tended to be underestimated. Portions of boiled potato were consistently overestimated, while portions of creamed spinach were consistently underestimated. Recalling the portion sizes resulted in overestimation, with larger differences (up to 60.7%).
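A minimal sketch of the acceptance criteria described above, under stated assumptions: the 5% winsorisation limits are not given in the abstract and are placeholders, as is the example data.

    import numpy as np
    from scipy.stats.mstats import winsorize

    def acceptable(estimates, actual_weight, limits=0.05):
        """Series passes if mean estimate is within 25% of the actual weight
        and the winsorised relative standard deviation is below 30%."""
        estimates = np.asarray(estimates, dtype=float)
        rel_diff = abs(estimates.mean() - actual_weight) / actual_weight
        w = winsorize(estimates, limits=(limits, limits))  # damp outliers
        rsd = w.std(ddof=1) / w.mean()                     # relative std deviation
        return rel_diff <= 0.25 and rsd < 0.30

    # Example: ten estimates of a 150 g portion
    print(acceptable([140, 155, 160, 150, 145, 170, 130, 152, 148, 158], 150.0))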
NASA Astrophysics Data System (ADS)
Allured, Ryan; Okajima, Takashi; Soufli, Regina; Fernández-Perea, Mónica; Daly, Ryan O.; Marlowe, Hannah; Griffiths, Scott T.; Pivovaroff, Michael J.; Kaaret, Philip
2012-10-01
The Bragg Reflection Polarimeter (BRP) on the NASA Gravity and Extreme Magnetism Small Explorer Mission is designed to measure the linear polarization of astrophysical sources in a narrow band centered at about 500 eV. X-rays are focused by Wolter I mirrors through a 4.5 m focal length to a time projection chamber (TPC) polarimeter, sensitive between 2 and 10 keV. In this optical path lies the BRP multilayer reflector at a nominal 45 degree incidence angle. The reflector reflects soft X-rays to the BRP detector and transmits hard X-rays to the TPC. As the spacecraft rotates about the optical axis, the reflected count rate will vary depending on the polarization of the incident beam. However, false polarization signals may be produced due to misalignments and spacecraft pointing wobble. Monte-Carlo simulations have been carried out, showing that the false modulation is below the statistical uncertainties for the expected focal plane offsets of < 2 mm.
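An illustrative toy version of this comparison (not the mission simulation): fit a rotation modulation curve to Poisson counts containing a small assumed systematic, and compare the fitted (false) modulation amplitude against the statistical floor, roughly sqrt(2/N) for N total counts. All numbers here are placeholders.

    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(0)
    phi = np.linspace(0.0, 2 * np.pi, 72, endpoint=False)   # rotation-angle bins
    mu_sys = 0.005                                          # assumed wobble-induced systematic
    counts = rng.poisson(200.0 * (1.0 + mu_sys * np.cos(2 * phi)))

    def model(phi, r0, mu, phi0):
        # R(phi) = R0 * (1 + mu * cos(2*(phi - phi0)))
        return r0 * (1.0 + mu * np.cos(2.0 * (phi - phi0)))

    (r0, mu, phi0), _ = curve_fit(model, phi, counts, p0=(200.0, 0.01, 0.0))
    n_total = counts.sum()
    print(f"fitted false modulation:      {abs(mu):.4f}")
    print(f"statistical floor sqrt(2/N):  {np.sqrt(2.0 / n_total):.4f}")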
Self-replication with magnetic dipolar colloids
NASA Astrophysics Data System (ADS)
Dempster, Joshua M.; Zhang, Rui; Olvera de la Cruz, Monica
2015-10-01
Colloidal self-replication represents an exciting research frontier in soft matter physics. Currently, all reported self-replication schemes involve coating colloidal particles with stimuli-responsive molecules to allow switchable interactions. In this paper, we introduce a scheme using ferromagnetic dipolar colloids and preprogrammed external magnetic fields to create an autonomous self-replication system. Interparticle dipole-dipole forces and periodically varying weak-strong magnetic fields cooperate to drive colloid monomers from the solute onto templates, bind them into replicas, and dissolve template complexes. We present three general design principles for autonomous linear replicators, derived from a focused study of a minimalist sphere-dimer magnetic system in which single binding sites allow formation of dimeric templates. We show via statistical models and computer simulations that our system exhibits nonlinear growth of templates and produces nearly exponential growth (low error rate) upon adding an optimized competing electrostatic potential. We devise experimental strategies for constructing the required magnetic colloids based on documented laboratory techniques. We also present qualitative ideas about building more complex self-replicating structures utilizing magnetic colloids.
Read disturb errors in a CMOS static RAM chip [radiation hardened for spacecraft]
NASA Technical Reports Server (NTRS)
Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.
1989-01-01
Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard, single-event-upset-resistant 4 x 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparency of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.
Beam hardening correction in CT myocardial perfusion measurement
NASA Astrophysics Data System (ADS)
So, Aaron; Hsieh, Jiang; Li, Jian-Ying; Lee, Ting-Yim
2009-05-01
This paper presents a method for correcting beam hardening (BH) in cardiac CT perfusion imaging. The proposed algorithm works with reconstructed images instead of projection data. It applies thresholds to separate low-attenuating (soft tissue) and high-attenuating (bone and contrast) material in a CT image. The BH error in each projection is estimated by a polynomial function of the forward projection of the segmented image. The error image is reconstructed by back-projection of the estimated errors. A BH-corrected image is then obtained by subtracting a scaled error image from the original image. Phantoms were designed to simulate the BH artifacts encountered in cardiac CT perfusion studies of humans and of the animals most commonly used in cardiac research. These phantoms were used to investigate whether BH artifacts can be reduced with our approach and to determine the optimal settings of the correction algorithm, which depend upon the anatomy of the scanned subject, for patient and animal studies. The correction algorithm was also applied to correct BH in a clinical study to further demonstrate the effectiveness of our technique.
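A minimal sketch of the image-domain pipeline just described, using scikit-image's radon/iradon for the forward-projection and back-projection steps. The threshold, polynomial coefficients, and scale factor are placeholder assumptions to be tuned per anatomy, as the paper notes; this is not the authors' implementation.

    import numpy as np
    from skimage.transform import radon, iradon

    def bh_correct(image, hi_thresh=300.0, coeffs=(0.0, 0.0, 1e-6), scale=1.0):
        """Image-domain beam-hardening correction sketch (square image assumed)."""
        theta = np.linspace(0.0, 180.0, image.shape[0], endpoint=False)
        high = np.where(image >= hi_thresh, image, 0.0)       # bone + contrast segment
        p_high = radon(high, theta=theta)                     # forward-project the segment
        err_proj = np.polyval(coeffs[::-1], p_high)           # polynomial BH error estimate
        err_img = iradon(err_proj, theta=theta, filter_name=None)  # unfiltered back-projection
        return image - scale * err_img                        # subtract scaled error image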
Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm
NASA Astrophysics Data System (ADS)
Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.
2017-01-01
This study applied a data mining technique, the Adaptive Neuro-Fuzzy Inference System (ANFIS), to forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point indicating the location of a fire. In this study, hotspot detections are categorized as true alarms or false alarms. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The study classified hotspots as target objects by correlating spatial attribute data, using three folds of the ANFIS algorithm to obtain the best model. The best model, obtained from the 3rd fold, achieved a low training error (0.0093676) and an equally low testing error (0.0093676). Distance to the nearest road was the most influential attribute for discriminating true from false alarms, as the level of human activity is higher along roads. This classification model can be used to develop an early warning system for forest fires.
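ANFIS itself is not available in scikit-learn, so as a stand-in this sketch reproduces only the evaluation protocol described above (train on each of three folds, keep the model with the lowest error) with a generic classifier; the data are synthetic placeholders for the spatial attributes.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import StratifiedKFold

    rng = np.random.default_rng(0)
    X = rng.random((300, 4))                 # stand-in spatial attributes
    y = rng.integers(0, 2, 300)              # true alarm (1) vs false alarm (0)

    best_err, best_model = np.inf, None
    folds = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
    for train, test in folds.split(X, y):
        model = GradientBoostingClassifier().fit(X[train], y[train])
        err = 1.0 - model.score(X[test], y[test])   # misclassification rate
        if err < best_err:
            best_err, best_model = err, model
    print(f"best fold error: {best_err:.4f}")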
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-07
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by simply adding a weighting factor equal to the square of the angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to calculate the helical axis explicitly, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation.
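For concreteness, Woltring's pivot estimator can be written as a weighted least-squares point closest to the instantaneous helical axes; this reconstruction from the description above assumes the standard PIHA form:

$$\hat{\mathbf{c}} = \left[ \sum_i w_i \left( \mathbf{I} - \hat{\mathbf{n}}_i \hat{\mathbf{n}}_i^{\top} \right) \right]^{-1} \sum_i w_i \left( \mathbf{I} - \hat{\mathbf{n}}_i \hat{\mathbf{n}}_i^{\top} \right) \mathbf{s}_i,$$

where $\hat{\mathbf{n}}_i$ is the unit direction of the $i$-th helical axis, $\mathbf{s}_i$ is a point on that axis, and $w_i = 1$ gives the unweighted PIHA. The POKE estimate corresponds to choosing $w_i = \omega_i^2$, the squared angular velocity, although, as the abstract notes, in practice it can be computed without forming the helical axes explicitly.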
Weichman, B M; Chau, T T; Rona, G
1987-04-01
Histopathologic evaluation of hindpaws from control rats with established adjuvant arthritis showed severe alterations in soft tissue and bone, as well as progressive, moderate-to-severe articular changes. Following treatment with etodolac for 28 days, soft tissue and articular changes were rated mild, and bone changes were rated moderate, but with remodeling. These findings indicate that etodolac partially reversed the joint damage in these rats.
Propagation mode of Portevin-Le Chatelier plastic instabilities in an aluminium-magnesium alloy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zeghloul, A.; Mliha-Touati, M.; Bakir, S.
1996-11-01
The Portevin-Le Chatelier (PLC) effect is characterized by the appearance of serrations in load (hard tensile machine for constant strain rate tests) or of steps (soft tensile machine for constant stress rate tests) on the stress-strain curves. It is now widely accepted that the PLC propagative instability stems from the dynamic interaction between diffusing solute atoms and mobile dislocations in the temperature and strain rate ranges where dynamic strain ageing (DSA) takes place. This interaction results in a negative strain-rate sensitivity. However, in some alloys, such as concentrated solid solutions, shearing of precipitates accompanied by their dissolution and subsequent reprecipitation during a tensile test may also lead to a negative strain-rate sensitivity. In view of the renewed theoretical interest in propagative instabilities, it is important that the experimental features of band propagation be well characterized. In this work the authors present experimental results obtained from the investigation of the PLC bands associated with discontinuous yielding. These results show that the band strain, the band velocity and the propagation mode of the bands depend on the stress rate when the test is carried out on a soft tensile machine.
Classification based upon gene expression data: bias and precision of error rates.
Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L
2007-06-01
Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
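The paper's own code is in R (PAMR); as a language-consistent sketch under that caveat, the following shows two-level (nested) cross-validation with scikit-learn: the inner loop tunes the classifier, the outer loop estimates error on data never used in tuning, which avoids the optimization bias discussed above. Data here are synthetic.

    import numpy as np
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.random((60, 500))                 # stand-in "expression" matrix
    y = rng.integers(0, 2, 60)                # stand-in class labels

    inner = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]}, cv=5)   # level 1: tuning
    outer_scores = cross_val_score(inner, X, y, cv=5)            # level 2: assessment
    print(f"nested-CV error estimate: {1.0 - outer_scores.mean():.3f}")

    # Permutation check: with shuffled labels the estimate should sit near the
    # 50% baseline; a markedly lower value signals leakage/optimization bias.
    perm_scores = cross_val_score(inner, X, rng.permutation(y), cv=5)
    print(f"permuted-label error:     {1.0 - perm_scores.mean():.3f}")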
Do Errors on Classroom Reading Tasks Slow Growth in Reading? Technical Report No. 404.
ERIC Educational Resources Information Center
Anderson, Richard C.; And Others
A pervasive finding from research on teaching and classroom learning is that a low rate of error on classroom tasks is associated with large year-to-year gains in achievement, particularly for reading in the primary grades. The finding of a negative relationship between error rate, especially the rate of oral reading errors, and gains in reading…
Patients with Advanced, Rare Sarcoma Respond to Cediranib | Center for Cancer Research
Alveolar soft part sarcomas (ASPS) are highly vascular tumors that usually affect adolescents and young adults. Comprising less than one percent of soft tissue sarcomas, ASPS can be cured with surgery. However, its tendency to metastasize and its lack of response to standard soft tissue sarcoma chemotherapy regimens make ASPS a particularly lethal cancer, with a five-year survival rate of 20 percent in patients with metastatic disease who are not candidates for surgery.
Tailoring Adjuvant Endocrine Therapy for Premenopausal Breast Cancer.
Francis, Prudence A; Pagani, Olivia; Fleming, Gini F; Walley, Barbara A; Colleoni, Marco; Láng, István; Gómez, Henry L; Tondini, Carlo; Ciruelos, Eva; Burstein, Harold J; Bonnefoi, Hervé R; Bellet, Meritxell; Martino, Silvana; Geyer, Charles E; Goetz, Matthew P; Stearns, Vered; Pinotti, Graziella; Puglisi, Fabio; Spazzapan, Simon; Climent, Miguel A; Pavesi, Lorenzo; Ruhstaller, Thomas; Davidson, Nancy E; Coleman, Robert; Debled, Marc; Buchholz, Stefan; Ingle, James N; Winer, Eric P; Maibach, Rudolf; Rabaglio-Poretti, Manuela; Ruepp, Barbara; Di Leo, Angelo; Coates, Alan S; Gelber, Richard D; Goldhirsch, Aron; Regan, Meredith M
2018-06-04
Background In the Suppression of Ovarian Function Trial (SOFT) and the Tamoxifen and Exemestane Trial (TEXT), the 5-year rates of recurrence of breast cancer were significantly lower among premenopausal women who received the aromatase inhibitor exemestane plus ovarian suppression than among those who received tamoxifen plus ovarian suppression. The addition of ovarian suppression to tamoxifen did not result in significantly lower recurrence rates than those with tamoxifen alone. Here, we report the updated results from the two trials. Methods Premenopausal women were randomly assigned to receive 5 years of tamoxifen, tamoxifen plus ovarian suppression, or exemestane plus ovarian suppression in SOFT and to receive tamoxifen plus ovarian suppression or exemestane plus ovarian suppression in TEXT. Randomization was stratified according to the receipt of chemotherapy. Results In SOFT, the 8-year disease-free survival rate was 78.9% with tamoxifen alone, 83.2% with tamoxifen plus ovarian suppression, and 85.9% with exemestane plus ovarian suppression (P=0.009 for tamoxifen alone vs. tamoxifen plus ovarian suppression). The 8-year rate of overall survival was 91.5% with tamoxifen alone, 93.3% with tamoxifen plus ovarian suppression, and 92.1% with exemestane plus ovarian suppression (P=0.01 for tamoxifen alone vs. tamoxifen plus ovarian suppression); among the women who remained premenopausal after chemotherapy, the rates were 85.1%, 89.4%, and 87.2%, respectively. Among the women with cancers that were negative for HER2 who received chemotherapy, the 8-year rate of distant recurrence with exemestane plus ovarian suppression was lower than the rate with tamoxifen plus ovarian suppression (by 7.0 percentage points in SOFT and by 5.0 percentage points in TEXT). Grade 3 or higher adverse events were reported in 24.6% of the tamoxifen-alone group, 31.0% of the tamoxifen-ovarian suppression group, and 32.3% of the exemestane-ovarian suppression group. Conclusions Among premenopausal women with breast cancer, the addition of ovarian suppression to tamoxifen resulted in significantly higher 8-year rates of both disease-free and overall survival than tamoxifen alone. The use of exemestane plus ovarian suppression resulted in even higher rates of freedom from recurrence. The frequency of adverse events was higher in the two groups that received ovarian suppression than in the tamoxifen-alone group. (Funded by Pfizer and others; SOFT and TEXT ClinicalTrials.gov numbers, NCT00066690 and NCT00066703, respectively.)
Chen, Qi-Zhi; Liang, Shu-Ling; Wang, Jiang; Simon, George P
2011-11-01
Poly(glycerol sebacate) (PGS) is a promising elastomer for use in soft tissue engineering. However, it is difficult to achieve with PGS a satisfactory balance of mechanical compliance and degradation rate that meets the requirements of soft tissue engineering. In this work, we have synthesised a new PGS nanocomposite system filled with halloysite nanotubes, and the mechanical properties, as well as the related chemical characteristics, of the nanocomposites were investigated. It was found that the addition of nanotubular halloysite did not compromise the extensibility of the material compared with the pure PGS counterpart; instead, the elongation at rupture increased from 110% (pure PGS) to 225% (20 wt% composite). Second, the Young's modulus and resilience of the 3-5 wt% composites were ∼0.8 MPa and >94%, respectively, remaining close to the level of pure PGS, which is desirable for applications in soft tissue engineering. Third, an important feature of the 1-5 wt% composites was their stable mechanical properties over an extended period, which could allow the provision of reliable mechanical support to damaged tissues during the lag phase of the healing process. Finally, the in vitro study indicated that the addition of halloysite slowed down the degradation rate of the composites. In conclusion, the good compliance, enhanced stretchability, stable mechanical behavior over an extended period, and reduced degradation rates make the 3-5 wt% composites promising candidates for application in soft tissue engineering.
Estimating genotype error rates from high-coverage next-generation sequence data.
Wall, Jeffrey D; Tang, Ling Fung; Zerbe, Brandon; Kvale, Mark N; Kwok, Pui-Yan; Schaefer, Catherine; Risch, Neil
2014-11-01
Exome and whole-genome sequencing studies are becoming increasingly common, but little is known about the accuracy of the genotype calls made by the commonly used platforms. Here we use replicate high-coverage sequencing of blood and saliva DNA samples from four European-American individuals to estimate lower bounds on the error rates of Complete Genomics and Illumina HiSeq whole-genome and whole-exome sequencing. Error rates for nonreference genotype calls range from 0.1% to 0.6%, depending on the platform and the depth of coverage. Additionally, we found (1) no difference in the error profiles or rates between blood and saliva samples; (2) Complete Genomics sequences had substantially higher error rates than Illumina sequences had; (3) error rates were higher (up to 6%) for rare or unique variants; (4) error rates generally declined with genotype quality (GQ) score, but in a nonlinear fashion for the Illumina data, likely due to loss of specificity of GQ scores greater than 60; and (5) error rates increased with increasing depth of coverage for the Illumina data. These findings, especially (3)-(5), suggest that caution should be taken in interpreting the results of next-generation sequencing-based association studies, and even more so in clinical application of this technology in the absence of validation by other more robust sequencing or genotyping methods.
Speech Errors across the Lifespan
ERIC Educational Resources Information Center
Vousden, Janet I.; Maylor, Elizabeth A.
2006-01-01
Dell, Burger, and Svec (1997) proposed that the proportion of speech errors classified as anticipations (e.g., "moot and mouth") can be predicted solely from the overall error rate, such that the greater the error rate, the lower the anticipatory proportion (AP) of errors. We report a study examining whether this effect applies to changes in error…
Computer calculated dose in paediatric prescribing.
Kirk, Richard C; Li-Meng Goh, Denise; Packia, Jeya; Min Kam, Huey; Ong, Benjamin K C
2005-01-01
Medication errors are an important cause of hospital-based morbidity and mortality. However, only a few medication error studies have been conducted in children. These have mainly quantified errors in the inpatient setting; there is very little data available on paediatric outpatient and emergency department medication errors and none on discharge medication. This deficiency is of concern because medication errors are more common in children and it has been suggested that the risk of an adverse drug event as a consequence of a medication error is higher in children than in adults. The aims of this study were to assess the rate of medication errors in predominantly ambulatory paediatric patients and the effect of computer calculated doses on the medication error rates of two commonly prescribed drugs. This was a prospective cohort study performed in a paediatric unit in a university teaching hospital between March 2003 and August 2003. The hospital's existing computer clinical decision support system was modified so that doctors could choose the traditional prescription method or the enhanced method of computer calculated dose when prescribing paracetamol (acetaminophen) or promethazine. All prescriptions issued to children (<16 years of age) at the outpatient clinic, the emergency department and at discharge from the inpatient service were analysed. A medication error was defined to have occurred if there was an underdose (below the agreed value), an overdose (above the agreed value), no frequency of administration specified, no dose given or an excessive total daily dose. The medication error rates and the factors influencing medication error rates were determined using SPSS version 12. From March to August 2003, 4281 prescriptions were issued. Seven prescriptions (0.16%) were excluded, hence 4274 prescriptions were analysed. Most prescriptions were issued by paediatricians (including neonatologists and paediatric surgeons) and/or junior doctors. The error rate in the children's emergency department was 15.7%, for outpatients it was 21.5% and for discharge medication it was 23.6%. Most errors were the result of an underdose (64%; 536/833). The computer calculated dose error rate was 12.6%, compared with the traditional prescription error rate of 28.2%. Logistic regression analysis showed that computer calculated dose was an important and independent variable influencing the error rate (adjusted relative risk = 0.436, 95% CI 0.336, 0.520, p < 0.001). Other important independent variables were the seniority and paediatric training of the prescriber and the type of drug prescribed. Medication error, especially underdose, is common in outpatient, emergency department and discharge prescriptions. Computer calculated doses can significantly reduce errors, but other risk factors have to be concurrently addressed to achieve maximum benefit.
Angular rate optimal design for the rotary strapdown inertial navigation system.
Yu, Fei; Sun, Qian
2014-04-22
Owing to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. Its core technology, the rotating scheme, has been studied by numerous researchers. As one of the key design parameters, the rotating angular rate strongly influences the effectiveness of error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail in this paper, based on the Laplace transform and the inverse Laplace transform. The analysis showed that the velocity error of the RSINS depends not only on the sensor error but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS is also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method.
Bulik, Catharine C.; Fauntleroy, Kathy A.; Jenkins, Stephen G.; Abuali, Mayssa; LaBombardi, Vincent J.; Nicolau, David P.; Kuti, Joseph L.
2010-01-01
We describe the levels of agreement between broth microdilution, Etest, Vitek 2, Sensititre, and MicroScan methods to accurately define the meropenem MIC and categorical interpretation of susceptibility against carbapenemase-producing Klebsiella pneumoniae (KPC). A total of 46 clinical K. pneumoniae isolates with KPC genotypes, all modified Hodge test and blaKPC positive, collected from two hospitals in NY were included. Results obtained by each method were compared with those from broth microdilution (the reference method), and agreement was assessed based on MICs and Clinical Laboratory Standards Institute (CLSI) interpretative criteria using 2010 susceptibility breakpoints. Based on broth microdilution, 0%, 2.2%, and 97.8% of the KPC isolates were classified as susceptible, intermediate, and resistant to meropenem, respectively. Results from MicroScan demonstrated the most agreement with those from broth microdilution, with 95.6% agreement based on the MIC and 2.2% classified as minor errors, and no major or very major errors. Etest demonstrated 82.6% agreement with broth microdilution MICs, a very major error rate of 2.2%, and a minor error rate of 2.2%. Vitek 2 MIC agreement was 30.4%, with a 23.9% very major error rate and a 39.1% minor error rate. Sensititre demonstrated MIC agreement for 26.1% of isolates, with a 3% very major error rate and a 26.1% minor error rate. Application of FDA breakpoints had little effect on minor error rates but increased very major error rates to 58.7% for Vitek 2 and Sensititre. Meropenem MIC results and categorical interpretations for carbapenemase-producing K. pneumoniae differ by methodology. Confirmation of testing results is encouraged when an accurate MIC is required for antibiotic dosing optimization. PMID:20484603
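The categorical bookkeeping behind the rates quoted above can be sketched as follows. Following the abstract, rates here are expressed over all isolates; note that CLSI denominator conventions for very major and major errors are sometimes restricted to resistant and susceptible isolates respectively, so this is a simplifying assumption.

    def error_rates(reference, test):
        """Categorical agreement between a reference and a test method.
        Categories: 'S' (susceptible), 'I' (intermediate), 'R' (resistant)."""
        n = len(reference)
        pairs = list(zip(reference, test))
        vme = sum(r == "R" and t == "S" for r, t in pairs)     # very major: false susceptible
        me = sum(r == "S" and t == "R" for r, t in pairs)      # major: false resistant
        minor = sum(r != t and "I" in (r, t) for r, t in pairs)  # any I-involved mismatch
        agree = sum(r == t for r, t in pairs)
        return {"agreement": agree / n, "very_major": vme / n,
                "major": me / n, "minor": minor / n}

    # Toy example with four isolates
    print(error_rates(["R", "R", "I", "R"], ["R", "S", "R", "R"]))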
EFFECT OF ENDOSPERM HARDNESS ON AN ETHANOL PROCESS USING A GRANULAR STARCH HYDROLYZING ENZYME
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, P; W Liu, D B; Johnston, K D
Granular starch hydrolyzing enzymes (GSHE) can hydrolyze starch at low temperature (32°C). The dry grind process using GSHE (GSH process) has fewer unit operations and no changes in process conditions (pH 4.0 and 32°C) compared to the conventional process, because it dispenses with the cooking and liquefaction step. In this study, the effects of endosperm hardness, protease, urea, and GSHE levels on the GSH process were evaluated. Ground corn, soft endosperm, and hard endosperm were processed using two GSHE levels (0.1 and 0.4 mL per 100 g ground material) and four treatments of protease and urea addition. Soft and hard endosperm materials were obtained by grinding and sifting flaking grits from a dry milling pilot plant; classifications were confirmed using scanning electron microscopy. During 72 h of simultaneous granular starch hydrolysis and fermentation (GSHF), ethanol and glucose profiles were determined using HPLC. Soft endosperm resulted in higher final ethanol concentrations compared to ground corn or hard endosperm. Addition of urea increased final ethanol concentrations for soft and hard endosperm. Protease addition increased ethanol concentrations and fermentation rates for soft endosperm, hard endosperm, and ground corn. The effect of protease addition on ethanol concentrations and fermentation rates was most pronounced for soft endosperm, less for hard endosperm, and least for ground corn. Samples (soft endosperm, hard endosperm, or corn) with protease resulted in higher (1.0% to 10.5% v/v) ethanol concentrations compared to samples with urea. The GSH process with protease requires little or no urea addition. For fermentation of soft endosperm, the GSHE dose can be reduced. Due to nutrients (lipids, minerals, and soluble proteins) present in corn that enhance yeast growth, ground corn fermented faster at the beginning than hard and soft endosperm.
Soft tissue augmentation around osseointegrated and uncovered dental implants: a systematic review.
Bassetti, Renzo G; Stähli, Alexandra; Bassetti, Mario A; Sculean, Anton
2017-01-01
The aim was to compile the current knowledge about the efficacy of different soft tissue correction methods around osseointegrated, already uncovered and/or loaded (OU/L) implants with insufficient soft tissue conditions. Procedures to increase peri-implant keratinized mucosa (KM) width and/or soft tissue volume were considered. Screening of two databases, MEDLINE (PubMed) and EMBASE (OVID), and a manual search of articles were performed. Human studies reporting on soft tissue augmentation/correction methods around OU/L implants up to June 30, 2016, were considered. Quality assessment of the selected full-text articles to weigh the risk of bias was performed using the Cochrane collaboration's tool. Overall, four randomized controlled trials (risk of bias = high/low) and five prospective studies (risk of bias = high) were included. Depending on the surgical techniques and graft materials, the enlargement of keratinized tissue (KT) ranged between 1.15 ± 0.81 and 2.57 ± 0.50 mm. The apically positioned partial thickness flap (APPTF), in combination with a free gingival graft (FGG), a subepithelial connective tissue graft (SCTG), or a xenogeneic graft material (XCM), was most effective. A coronally advanced flap (CAF) combined with SCTG in three studies, CAF combined with allogenic graft materials (AMDA) in one study, and a split thickness flap (STF) combined with SCTG in another study showed mean soft tissue recession coverage rates from 28 to 96.3%. STF combined with XCM failed to improve peri-implant soft tissue coverage. The three APPTF techniques combined with FGG, SCTG, or XCM achieved comparable enlargements of peri-implant KT. Further, both STF and CAF, each in combination with SCTG, are equivalent regarding recession coverage rates. STF + XCM and CAF + AMDA did not reach significant coverage. In case of soft tissue deficiency around OU/L dental implants, the selection of both an appropriate surgical technique and a suitable soft tissue graft material is of utmost clinical relevance.
Soft Skills in the Technology Education Classroom: What Do Students Need?
ERIC Educational Resources Information Center
Harris, Kara S.; Rogers, George E.
2008-01-01
In this article, the authors examine which nontechnical competencies or soft skills related to technology education should be developed by high school students. Results clearly indicate that university-level engineering and engineering technology professors rate students' interpersonal, communication, and work ethic competencies as desired…
Kim, Myoung-Soo; Kim, Jung-Soon; Jung, In Sook; Kim, Young Hae; Kim, Ho Jung
2007-03-01
The purpose of this study was to develop and evaluate an error-reporting promoting program (ERPP) to systematically reduce the incidence rate of nursing errors in the operating room. A non-equivalent control group, non-synchronized design was used. Twenty-six operating room nurses from one university hospital in Busan participated in this study. They were stratified into four groups according to their operating room experience and were allocated to the experimental and control groups using a matching method. The Mann-Whitney U test was used to analyze the differences between the two groups in pre- and post-intervention incidence rates of nursing errors. The incidence rate of nursing errors decreased significantly in the experimental group, from 28.4% pre-test to 15.7% post-test. By domain, the incidence rate decreased significantly in three domains ("compliance with aseptic technique", "document management", and "environmental management") in the experimental group, while it decreased in the control group, to which the ordinary error-reporting method was applied. An error-reporting system makes it possible to share errors and to learn from them. The ERPP was effective in reducing errors in recognition-related nursing activities. For more effective error prevention, this program should be applied together with risk-management efforts across the whole healthcare system.
Validation Relaxation: A Quality Assurance Strategy for Electronic Data Collection.
Kenny, Avi; Gordon, Nicholas; Griffiths, Thomas; Kraemer, John D; Siedner, Mark J
2017-08-18
The use of mobile devices for data collection in developing world settings is becoming increasingly common and may offer advantages in data collection quality and efficiency relative to paper-based methods. However, mobile data collection systems can hamper many standard quality assurance techniques due to the lack of a hardcopy backup of data. Consequently, mobile health data collection platforms have the potential to generate datasets that appear valid, but are susceptible to unidentified database design flaws, areas of miscomprehension by enumerators, and data recording errors. We describe the design and evaluation of a strategy for estimating data error rates and assessing enumerator performance during electronic data collection, which we term "validation relaxation." Validation relaxation involves the intentional omission of data validation features for select questions to allow for data recording errors to be committed, detected, and monitored. We analyzed data collected during a cluster sample population survey in rural Liberia using an electronic data collection system (Open Data Kit). We first developed a classification scheme for types of detectable errors and validation alterations required to detect them. We then implemented the following validation relaxation techniques to enable data error conduct and detection: intentional redundancy, removal of "required" constraint, and illogical response combinations. This allowed for up to 11 identifiable errors to be made per survey. The error rate was defined as the total number of errors committed divided by the number of potential errors. We summarized crude error rates and estimated changes in error rates over time for both individuals and the entire program using logistic regression. The aggregate error rate was 1.60% (125/7817). Error rates did not differ significantly between enumerators (P=.51), but decreased for the cohort with increasing days of application use, from 2.3% at survey start (95% CI 1.8%-2.8%) to 0.6% at day 45 (95% CI 0.3%-0.9%; OR=0.969; P<.001). The highest error rate (84/618, 13.6%) occurred for an intentional redundancy question for a birthdate field, which was repeated in separate sections of the survey. We found low error rates (0.0% to 3.1%) for all other possible errors. A strategy of removing validation rules on electronic data capture platforms can be used to create a set of detectable data errors, which can subsequently be used to assess group and individual enumerator error rates, their trends over time, and categories of data collection that require further training or additional quality control measures. This strategy may be particularly useful for identifying individual enumerators or systematic data errors that are responsive to enumerator training and is best applied to questions for which errors cannot be prevented through training or software design alone. Validation relaxation should be considered as a component of a holistic data quality assurance strategy.
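A sketch of the two summary analyses described above: a crude error rate (errors committed divided by error opportunities) and a logistic regression of error probability on days of application use. The column names and data are hypothetical, and statsmodels stands in for whatever software the authors used.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per error opportunity: 1 if the error was committed, 0 otherwise.
    df = pd.DataFrame({"error": [1, 0, 0, 0, 1, 0, 0, 0],
                       "day":   [1, 1, 10, 10, 20, 20, 45, 45]})
    print("crude error rate:", df["error"].mean())

    fit = smf.logit("error ~ day", data=df).fit(disp=False)
    print(fit.params)  # a negative 'day' coefficient means errors decline over time
    print("odds ratio per day of use:", float(np.exp(fit.params["day"])))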
NASA Technical Reports Server (NTRS)
Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.;
2006-01-01
A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day^-1) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day^-1, with proportionate reductions in latent heating sampling errors.
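A minimal sketch of the database-search compositing step described above (not the operational retrieval code): each candidate cloud-model profile is weighted by how well its simulated radiances match the observation, and the retrieved quantity is the weighted composite. The Gaussian weight form, the covariance S, and all numbers are illustrative assumptions.

    import numpy as np

    def bayesian_retrieval(y_obs, y_db, x_db, S):
        """y_obs: observed radiances (k,); y_db: simulated radiances (n, k);
        x_db: rain rates of the database profiles (n,); S: (k, k) radiance
        error covariance. Returns the weighted-composite rain rate."""
        S_inv = np.linalg.inv(S)
        d = y_db - y_obs
        # w_i = exp(-0.5 * (y_i - y)^T S^-1 (y_i - y))
        w = np.exp(-0.5 * np.einsum("ij,jk,ik->i", d, S_inv, d))
        return np.sum(w * x_db) / np.sum(w)

    rng = np.random.default_rng(1)
    y_db = rng.normal(250.0, 10.0, size=(5000, 4))   # brightness temperatures (K)
    x_db = rng.gamma(2.0, 2.0, size=5000)            # database rain rates (mm/h)
    print(bayesian_retrieval(y_db[0] + 1.0, y_db, x_db, np.eye(4) * 4.0))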
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arai, T; Nofiele, J; Sawant, A
2015-06-15
Purpose: Rapid MRI is an attractive, non-ionizing tool for soft-tissue-based monitoring of respiratory motion in thoracic and abdominal radiotherapy. One big challenge is to achieve high temporal resolution while maintaining adequate spatial resolution. K-t BLAST, a sparse-sampling and reconstruction sequence based on a-priori information, represents a potential solution. In this work, we investigated how much "true" motion information is lost as a-priori information is progressively added for faster imaging. Methods: Lung tumor motions in the superior-inferior direction obtained from ten individuals were replayed on an in-house, MRI-compatible, programmable motion platform (50 Hz refresh and 100 micron precision). Six water-filled 1.5 ml tubes were placed on it as fiducial markers. Dynamic marker motion within a coronal slice (FOV: 32×32 cm², resolution: 0.67×0.67 mm², slice thickness: 5 mm) was collected on a 3.0 T body scanner (Ingenia, Philips). Balanced-FFE (TE/TR: 1.3 ms/2.5 ms, flip angle: 40 degrees) was used in conjunction with k-t BLAST. Each motion was repeated four times as four k-t acceleration factors, 1, 2, 5, and 16 (corresponding frame rates of 2.5, 4.7, 9.8, and 19.1 Hz, respectively), were compared. For each image set, one average motion trajectory was computed from the six marker displacements. Root mean square (RMS) error was used as a metric of spatial accuracy, with measured trajectories compared to the original data. Results: Tumor motion was approximately 10 mm. The mean (standard deviation) of respiratory rates over the ten patients was 0.28 (0.06) Hz. Cumulative distributions of tumor motion frequency spectra (0-25 Hz) obtained from the patients showed that 90% of motion fell at 3.88 Hz or less. Therefore, the frame rate must be double that or higher for accurate monitoring. The RMS errors over patients for k-t factors of 1, 2, 5, and 16 were 0.10 (0.04), 0.17 (0.04), 0.21 (0.06) and 0.26 (0.06) mm, respectively. Conclusions: A k-t factor of 5 or higher can cover the high-frequency component of tumor respiratory motion, while the estimated error of spatial accuracy was approximately 0.2 mm.
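The sampling-rate argument above is the Nyquist criterion, worked through with the reported numbers: since 90% of the measured spectral content fell at or below $f_{\max} = 3.88\,\mathrm{Hz}$, the frame rate must satisfy

$$f_{\mathrm{frame}} \ge 2 f_{\max} = 7.76\,\mathrm{Hz},$$

which among the tested settings is first met by the k-t factor of 5 (9.8 Hz); the factors of 1 and 2 (2.5 and 4.7 Hz) undersample the motion.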
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in the city of Patiala in India. The input variables include the traffic volume per hour, the percentage of heavy vehicles and the average speed of vehicles. The performance of the four models is compared on the basis of the performance criteria of coefficient of determination, mean square error and accuracy. 10-fold cross-validation is performed to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model to the field data.
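A sketch (not the authors' code) of the best-performing approach above: a Random Forest regressor predicting Leq from the three stated inputs, evaluated with 10-fold cross-validation. The data-generating formula here is a synthetic stand-in for the field measurements.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    volume = rng.uniform(100, 3000, 500)      # vehicles per hour
    heavy = rng.uniform(0, 30, 500)           # % heavy vehicles
    speed = rng.uniform(20, 80, 500)          # average speed, km/h
    leq = 40 + 8*np.log10(volume) + 0.1*heavy + 0.05*speed + rng.normal(0, 1, 500)

    X = np.column_stack([volume, heavy, speed])
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    r2 = cross_val_score(model, X, leq, cv=10, scoring="r2")
    print(f"10-fold R^2: {r2.mean():.3f} +/- {r2.std():.3f}")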
Cai, Jian-Hua
2017-09-01
To eliminate the random error of the derivative near-IR (NIR) spectrum and to improve model stability and the prediction accuracy of the gluten protein content, a combined method is proposed for pretreatment of the NIR spectrum based on both empirical mode decomposition and the wavelet soft-threshold method. The principle and the steps of the method are introduced and the denoising effect is evaluated. The wheat gluten protein content is calculated based on the denoised spectrum, and the results are compared with those of the nine-point smoothing method and the wavelet soft-threshold method. Experimental results show that the proposed combined method is effective in completing pretreatment of the NIR spectrum, and the proposed method improves the accuracy of detection of wheat gluten protein content from the NIR spectrum.
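A sketch of the combined pretreatment under stated assumptions: EMD (here via the third-party PyEMD package) splits the spectrum into intrinsic mode functions (IMFs), and the noise-dominated first IMFs are denoised by wavelet soft-thresholding (pywt) before reconstruction. How many IMFs to treat and which threshold rule to use are choices the abstract does not fix; the universal threshold below is one common default. Depending on the PyEMD version, the residue may need to be recovered separately via EMD.get_imfs_and_residue().

    import numpy as np
    import pywt
    from PyEMD import EMD

    def emd_wavelet_denoise(spectrum, n_noisy_imfs=2, wavelet="db4"):
        imfs = EMD().emd(spectrum)
        out = np.zeros_like(spectrum)
        for i, imf in enumerate(imfs):
            if i < n_noisy_imfs:                 # treat only high-frequency IMFs
                coeffs = pywt.wavedec(imf, wavelet)
                sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # robust noise estimate
                thr = sigma * np.sqrt(2 * np.log(len(imf)))     # universal threshold
                coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft")
                                        for c in coeffs[1:]]
                imf = pywt.waverec(coeffs, wavelet)[: len(spectrum)]
            out += imf
        return out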
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant, closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for the performance of the two commonly used holding devices, are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established, along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
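As context for how such a criterion links error to sampling interval (a standard bound, not necessarily the report's exact formulation): for a zero-order hold, the worst-case inter-sample reconstruction error is bounded by the signal's maximum slew rate times the sampling interval $T$, so an allowable relative error $\varepsilon$ on a signal of range $X$ implies

$$e_{\max} \le T \cdot \max_t |\dot{x}(t)| \le \varepsilon X \quad\Longrightarrow\quad T \le \frac{\varepsilon X}{\max_t |\dot{x}(t)|}.$$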
Estimation of the laser cutting operating cost by support vector regression methodology
NASA Astrophysics Data System (ADS)
Jović, Srđan; Radović, Aleksandar; Šarkoćević, Živče; Petković, Dalibor; Alizamir, Meysam
2016-09-01
Laser cutting is a popular manufacturing process utilized to cut various types of materials economically. The operating cost is affected by laser power, cutting speed, assist gas pressure, nozzle diameter and focus point position, as well as the workpiece material. In this article, the process factors investigated were laser power, cutting speed, air pressure and focal point position. The aim of this work is to relate the operating cost to the process parameters mentioned above. CO2 laser cutting of stainless steel of medical grade AISI 316L has been investigated, analyzing the operating cost through the laser power, cutting speed, air pressure, focal point position and material thickness. Since estimating the laser operating cost is a complex, non-linear task, soft computing optimization algorithms can be used. An intelligent soft computing scheme, support vector regression (SVR), was implemented. The performance of the proposed estimator was confirmed by the simulation results. The SVR results are then compared with those of an artificial neural network and genetic programming. According to the results, a greater improvement in estimation accuracy can be achieved through SVR compared to the other soft computing methodologies. The new optimization methods benefit from the soft computing capabilities of global optimization and multiobjective optimization, rather than choosing a starting point by trial and error and combining multiple criteria into a single criterion.
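A sketch of the SVR estimator described above, using the four investigated process factors as inputs; the synthetic cost function and hyperparameters are assumptions, not the study's measurements or settings.

    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    rng = np.random.default_rng(2)
    n = 400
    power = rng.uniform(1.0, 4.0, n)       # laser power, kW
    speed = rng.uniform(0.5, 5.0, n)       # cutting speed, m/min
    pressure = rng.uniform(5.0, 15.0, n)   # assist gas pressure, bar
    focus = rng.uniform(-3.0, 1.0, n)      # focal point position, mm
    cost = 2.0*power + 1.5/speed + 0.2*pressure + 0.3*focus**2 + rng.normal(0, 0.2, n)

    X = np.column_stack([power, speed, pressure, focus])
    X_tr, X_te, y_tr, y_te = train_test_split(X, cost, random_state=0)
    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
    model.fit(X_tr, y_tr)
    print("test RMSE:", mean_squared_error(y_te, model.predict(X_te)) ** 0.5)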
Speech outcome after early repair of cleft soft palate using Furlow technique.
Abdel-Aziz, Mosaad
2013-01-01
The earlier a palatal cleft is closed, the better the speech outcome and the fewer the compensatory articulation errors; however, dissection on the hard palate may interfere with facial growth. In Furlow palatoplasty, dissection on the hard palate is not needed and surgery is usually limited to the soft palate, so the technique has no deleterious effect on facial growth. The aim of this study was to assess the efficacy of the Furlow palatoplasty technique on the speech of young infants with cleft soft palate. Twenty-one infants with cleft soft palate were included in this study; their ages ranged from 3 to 6 months. Their clefts were repaired using the Furlow technique. The patients were followed up for at least 4 years; at the end of the follow-up period they underwent flexible nasopharyngoscopy to assess velopharyngeal closure and speech analysis using auditory perceptual assessment. Eighteen cases (85.7%) showed complete velopharyngeal closure, 1 case (4.8%) showed borderline competence, and 2 cases (9.5%) showed borderline incompetence. Normal resonance was attained in 18 patients (85.7%), and mild hypernasality in 3 patients (14.3%); no patients demonstrated nasal emission of air. Speech therapy was beneficial for cases with residual hypernasality; no cases needed secondary corrective surgery. Furlow palatoplasty at a younger age has a favorable speech outcome with no detectable morbidity.
Karuppanan, Udayakumar; Unni, Sujatha Narayanan; Angarai, Ganesan R
2017-01-01
Assessment of the mechanical properties of soft matter is a challenging task in a purely noninvasive and noncontact environment. As tissue mechanical properties play a vital role in determining tissue health status, such noninvasive methods offer great potential in framing large-scale medical screening strategies. The digital speckle pattern interferometry (DSPI)-based image capture and analysis system described here is capable of extracting the deformation information from a single acquired fringe pattern. Such a method of analysis is required given the highly dynamic nature of speckle patterns derived from soft tissues under mechanical compression. Soft phantoms mimicking breast tissue optical and mechanical properties were fabricated and tested in the out-of-plane DSPI configuration. A Hilbert transform (HT)-based image analysis algorithm was developed to extract the phase, and hence the deformation, of the sample from a single acquired fringe pattern. The experimental fringe contours were found to correlate with deformation patterns of the sample simulated numerically using Abaqus finite element analysis software. The deformation extracted from the experimental fringe pattern using the HT-based algorithm is compared with the deformation value obtained from numerical simulation under similar loading conditions, and the results are found to correlate with an average percentage error of 10%. The proposed method was applied to breast phantoms fabricated with an included subsurface anomaly mimicking cancerous tissue, and the results were analyzed.
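A sketch of single-frame phase extraction via the Hilbert transform: the analytic signal of the background-suppressed fringe pattern gives the wrapped phase, which is unwrapped and scaled to out-of-plane deformation by lambda/(4*pi), the usual sensitivity of an out-of-plane DSPI geometry. Applying the transform row-wise, the DC-removal step, and the HeNe wavelength are simplifying assumptions; real fringe data need carrier and background handling that this sketch omits.

    import numpy as np
    from scipy.signal import hilbert

    def fringe_to_deformation(fringes, wavelength=632.8e-9):
        rows = fringes - fringes.mean(axis=1, keepdims=True)   # suppress DC bias
        phase = np.unwrap(np.angle(hilbert(rows, axis=1)), axis=1)
        return phase * wavelength / (4.0 * np.pi)              # meters of deformation

    x = np.linspace(0, 1, 512)
    test = np.cos(2*np.pi*20*x + 3*np.sin(2*np.pi*x))[None, :]  # synthetic fringe row
    print(fringe_to_deformation(test).shape)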
Wu, Jinpeng; Sallis, Shawn; Qiao, Ruimin; Li, Qinghao; Zhuo, Zengqing; Dai, Kehua; Guo, Zixuan; Yang, Wanli
2018-04-17
Energy storage has increasingly become a limiting factor in today's sustainable energy applications, including electric vehicles and a green electric grid based on volatile solar and wind sources. The pressing demand for high-performance electrochemical energy storage solutions, i.e., batteries, relies on both fundamental understanding and practical developments from both academia and industry. The formidable challenge of developing successful battery technology stems from the different requirements of different energy-storage applications. Energy density, power, stability, safety, and cost all have to be balanced in batteries to meet the requirements of different applications. Therefore, multiple battery technologies based on different materials and mechanisms need to be developed and optimized. Incisive tools that can directly probe the chemical reactions in various battery materials are becoming critical to advancing the field beyond its conventional trial-and-error approach. Here, we present detailed protocols for soft X-ray absorption spectroscopy (sXAS), soft X-ray emission spectroscopy (sXES), and resonant inelastic X-ray scattering (RIXS) experiments, which are inherently element-sensitive probes of the transition-metal 3d and anion 2p states in battery compounds. We provide details on the experimental techniques and demonstrations revealing the key chemical states in battery materials through these soft X-ray spectroscopy techniques.
Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O
2015-02-01
To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 errors; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or the underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation.
Experimental investigation of false positive errors in auditory species occurrence surveys
Miller, David A.W.; Weir, Linda A.; McClintock, Brett T.; Grant, Evan H. Campbell; Bailey, Larissa L.; Simons, Theodore R.
2012-01-01
False positive errors are a significant component of many ecological data sets, which, in combination with false negative errors, can lead to severe biases in conclusions about ecological systems. We present results of a field experiment where observers recorded observations for known combinations of electronically broadcast calling anurans under conditions mimicking field surveys to determine species occurrence. Our objectives were to characterize false positive error probabilities for auditory methods based on a large number of observers, to determine if targeted instruction could be used to reduce false positive error rates, and to establish useful predictors of among-observer and among-species differences in error rates. We recruited 31 observers, ranging in ability from novice to expert, who recorded detections for 12 species during 180 calling trials (66,960 total observations). All observers made multiple false positive errors, and on average 8.1% of recorded detections in the experiment were false positives. Additional instruction had only minor effects on error rates. After instruction, false positive error probabilities decreased by 16% for treatment individuals compared to controls, with a broad confidence interval overlapping 0 (95% CI: -46 to 30%). This coincided with an increase in false negative errors due to the treatment (26%; -3 to 61%). Differences among observers in false positive and false negative error rates were best predicted by scores from an online test and a self-assessment of observer ability completed prior to the field experiment. In contrast, years of experience conducting call surveys was a weak predictor of error rates. False positive errors were also more common for species that were played more frequently, but were not related to the dominant spectral frequency of the call. Our results corroborate other work demonstrating that false positives are a significant component of species occurrence data collected by auditory methods. Instructing observers to report only detections they are completely certain are correct is not sufficient to eliminate errors. As a result, analytical methods that account for false positive errors will be needed, and independent testing of observer ability is a useful predictor of among-observer variation in observation error rates.
Intraoperative Radiation Therapy: Characterization and Application
1989-03-01
difficult to obtain. Notably, carcinomas of the pancreas, stomach, colon, and rectum, and sarcomas of soft tissue are prime candidates for IORT (2:131...Their pioneering efforts served as the basis for all my work. Mr. John Brohas of the AFIT Model Fabrication Shop aided my efforts considerably by... fabricated to set the collimator jaws to the required 10 cm x 10 cm aperture. The necessary parts are available from Varian. This will help eliminate errors
Clément, Julien; Dumas, Raphaël; Hagemeister, Nicola; de Guise, Jaques A
2015-11-05
Soft tissue artifact (STA) distorts marker-based knee kinematics measures and makes them difficult to use in clinical practice. None of the current methods designed to compensate for STA is suitable, but multi-body optimization (MBO) has demonstrated encouraging results and can be improved. The goal of this study was to develop and validate the performance of knee joint models, with anatomical and subject-specific kinematic constraints, used in MBO to reduce STA errors. Twenty subjects were recruited: 10 healthy and 10 osteoarthritis (OA) subjects. Subject-specific knee joint models were evaluated by comparing dynamic knee kinematics recorded by a motion capture system (KneeKG™) and optimized with MBO to quasi-static knee kinematics measured by a low-dose, upright, biplanar radiographic imaging system (EOS(®)). Errors due to STA ranged from 1.6° to 22.4° for knee rotations and from 0.8 mm to 14.9 mm for knee displacements in healthy and OA subjects. Subject-specific knee joint models were most effective in compensating for STA in terms of abduction-adduction, internal-external rotation and antero-posterior displacement. Root mean square errors with subject-specific knee joint models ranged from 2.2±1.2° to 6.0±3.9° for knee rotations and from 2.4±1.1 mm to 4.3±2.4 mm for knee displacements in healthy and OA subjects, respectively. Our study shows that MBO can be improved with subject-specific knee joint models, and that the quality of the motion capture calibration is critical. Future investigations should focus on more refined knee joint models to reproduce specific OA knee geometry and physiology. Copyright © 2015 Elsevier Ltd. All rights reserved.
Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong
2017-07-01
Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer an accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied onto the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, surpassing the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
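The core idea, replacing an expensive FEM solve with a regression model trained on pre-computed simulations, can be sketched in a few lines. The sketch below uses scikit-learn's SVR on synthetic data standing in for FEM results; the input features (load magnitude, position, inclination) and the response function are invented for illustration, and a real pipeline would fit one regressor per node and displacement component.

    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    # Synthetic stand-in for pre-computed FEM runs: load magnitude, position
    # (x, y) and inclination angle -> displacement of one mesh node.
    X = rng.uniform([0.1, -20.0, -20.0, 0.0], [5.0, 20.0, 20.0, 90.0], size=(500, 4))
    y = 0.1 * X[:, 0] * np.cos(np.radians(X[:, 3])) + 0.01 * rng.normal(size=500)

    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
    # At run time a prediction costs microseconds, which is the point of
    # replacing the FEM solve with a learned surrogate.
    print(model.predict([[2.5, 0.0, 5.0, 30.0]]))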
Technological Advancements and Error Rates in Radiation Therapy Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Margalit, Danielle N., E-mail: dmargalit@partners.org; Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA; Chen, Yu-Hui
2011-11-15
Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique. There was a lower error rate with IMRT compared with 3D/conventional RT, highlighting the need for sustained vigilance against errors common to more traditional treatment techniques.
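Fisher's exact test on a 2x2 error/no-error table is easy to reproduce. The per-technique fraction counts below are assumptions chosen only to be consistent with the reported rates (0.03% for IMRT vs. 0.07% for 3D/conventional); the paper does not state the split of the 241,546 fractions.

    from scipy.stats import fisher_exact

    imrt_err, imrt_frac = 30, 100_000      # assumed: ~0.03% error rate
    conv_err, conv_frac = 99, 141_546      # assumed: ~0.07% error rate
    table = [[imrt_err, imrt_frac - imrt_err],
             [conv_err, conv_frac - conv_err]]
    odds_ratio, p = fisher_exact(table)
    print(odds_ratio, p)   # p well below 0.05, consistent with the reported p = 0.001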
Quantitative dynamic ¹⁸FDG-PET and tracer kinetic analysis of soft tissue sarcomas.
Rusten, Espen; Rødal, Jan; Revheim, Mona E; Skretting, Arne; Bruland, Oyvind S; Malinen, Eirik
2013-08-01
To study soft tissue sarcomas using dynamic positron emission tomography (PET) with the glucose analog tracer [(18)F]fluoro-2-deoxy-D-glucose ((18)FDG), to investigate correlations between derived PET image parameters and clinical characteristics, and to discuss implications of dynamic PET acquisition (D-PET). D-PET images of 11 patients with soft tissue sarcomas were analyzed voxel-by-voxel using a compartment tracer kinetic model providing estimates of transfer rates between the vascular, non-metabolized, and metabolized compartments. Furthermore, standard uptake values (SUVs) in the early (2 min p.i.; SUVE) and late (45 min p.i.; SUVL) phases of the PET acquisition were obtained. The derived transfer rates K1, k2 and k3, along with the metabolic rate of (18)FDG (MRFDG) and the vascular fraction νp, were fused with the computed tomography (CT) images for visual interpretation. Correlations between D-PET imaging parameters and clinical parameters, i.e. tumor size, grade and clinical status, were calculated with a significance level of 0.05. The temporal uptake pattern of (18)FDG in the tumor varied considerably from patient to patient. The SUVE peak was higher than the SUVL peak for four patients. The images of the rate constants showed a systematic pattern, often with elevated intensity in the tumors compared to surrounding tissue. Significant correlations were found between SUVE/L and some of the rate parameters. Dynamic (18)FDG-PET may provide additional valuable information on soft tissue sarcomas not obtainable from conventional (18)FDG-PET. The prognostic role of dynamic imaging should be investigated.
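A standard irreversible two-tissue compartment model of the kind used for (18)FDG kinetics can be integrated compactly with SciPy, as sketched below. The rate constants and the analytic plasma input function are illustrative values, not fits from the paper; MRFDG is proportional to the net influx constant Ki = K1*k3/(k2 + k3).

    import numpy as np
    from scipy.integrate import odeint

    def two_tissue(y, t, K1, k2, k3, Cp):
        # Free/non-metabolized (C1) and metabolized (C2) tissue compartments
        C1, C2 = y
        dC1 = K1 * Cp(t) - (k2 + k3) * C1
        dC2 = k3 * C1
        return [dC1, dC2]

    Cp = lambda t: t * np.exp(-t / 2.0)    # toy plasma input function
    K1, k2, k3 = 0.10, 0.30, 0.05          # illustrative rate constants (1/min)
    t = np.linspace(0.0, 45.0, 200)
    C1, C2 = odeint(two_tissue, [0.0, 0.0], t, args=(K1, k2, k3, Cp)).T

    print("Ki =", K1 * k3 / (k2 + k3))     # net influx constant, proportional to MRFDG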
Error Rate Comparison during Polymerase Chain Reaction by DNA Polymerase
McInerney, Peter; Adams, Paul; Hadi, Masood Z.
2014-01-01
As larger-scale cloning projects become more prevalent, there is an increasing need for comparisons among high fidelity DNA polymerases used for PCR amplification. All polymerases marketed for PCR applications are tested for fidelity properties (i.e., error rate determination) by vendors, and numerous literature reports have addressed PCR enzyme fidelity. Nonetheless, it is often difficult to make direct comparisons among different enzymes due to numerous methodological and analytical differences from study to study. We have measured the error rates for 6 DNA polymerases commonly used in PCR applications, including 3 polymerases typically used for cloning applications requiring high fidelity. Error rate measurement values reported here were obtained by direct sequencing of cloned PCR products. The strategy employed here allows interrogation of error rate across a very large DNA sequence space, since 94 unique DNA targets were used as templates for PCR cloning. Of the six enzymes included in the study (Taq polymerase, AccuPrime-Taq High Fidelity, KOD Hot Start, cloned Pfu polymerase, Phusion Hot Start, and Pwo polymerase), we find the lowest error rates with Pfu, Phusion, and Pwo polymerases. Error rates are comparable for these 3 enzymes and are >10x lower than the error rate observed with Taq polymerase. Mutation spectra are reported, with the 3 high fidelity enzymes displaying broadly similar types of mutations. For these enzymes, transition mutations predominate, with little bias observed for type of transition.
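A common convention reports polymerase fidelity as errors per base per template doubling, dividing observed mutations by the bases sequenced and by the effective number of doublings. The sketch below implements that arithmetic; the counts are invented for illustration and are not the paper's data.

    import math

    def pcr_error_rate(mutations, bases_sequenced, fold_amplification):
        # Errors per base per template doubling
        doublings = math.log2(fold_amplification)
        return mutations / (bases_sequenced * doublings)

    # Assumed example: 120 mutations in 2.0 Mb of sequenced clones after a
    # 10^6-fold amplification (about 20 doublings).
    print(pcr_error_rate(120, 2_000_000, 1e6))   # ~3e-6 errors/base/doubling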
NASA Astrophysics Data System (ADS)
Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.
2018-02-01
Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. In fact, extreme weather due to global warming can lead to drought, flood, hurricanes and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process with information about current weather status at certain coordinates of each region and the capability to forecast seven days ahead. Data used in this research are retrieved in real time from the openweathermap server and BMKG. In order to obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is measured by the mean square error (MSE). The error value for minimum temperature is 0.28 and for maximum temperature 0.15, while the error value for minimum humidity is 0.38 and for maximum humidity 0.04. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error, the higher the accuracy.
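Full BMA fits member weights and variances by expectation-maximization; as a rough stand-in, the sketch below weights each ensemble member by its inverse training-period MSE and combines new forecasts accordingly. All numbers are invented, and this shows only the combination step, not the paper's calibration.

    import numpy as np

    def combine_forecasts(members, obs, new_members):
        # Simplified BMA-style point forecast: inverse-MSE weights from a
        # training period (a quick surrogate for EM-fitted BMA weights).
        mse = np.mean((members - obs[:, None]) ** 2, axis=0)
        w = (1.0 / mse) / np.sum(1.0 / mse)
        return new_members @ w

    members = np.array([[21.0, 22.5, 20.4],    # rows: days, columns: models
                        [18.2, 19.0, 18.5],
                        [25.1, 24.0, 24.8]])
    obs = np.array([21.5, 18.4, 24.9])          # verifying temperatures
    print(combine_forecasts(members, obs, np.array([23.0, 23.8, 23.2])))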
Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F
2016-01-01
In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
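The inflation mechanism is easy to reproduce by simulation: regress a skewed trait on rare genotypes with no true effect and count how often p falls below the threshold. The sketch below is a minimal version with invented sample sizes; as the abstract notes, the inflation grows as the MAF and the significance threshold shrink.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n, reps, maf = 2000, 5000, 0.005            # rare SNV, MAF = 0.5%

    hits, tested = 0, 0
    for _ in range(reps):
        g = rng.binomial(2, maf, size=n)        # genotypes under HWE
        y = rng.gamma(shape=1.0, size=n)        # skewed trait, no true effect
        if g.std() == 0:                        # skip monomorphic draws
            continue
        tested += 1
        hits += stats.linregress(g, y).pvalue < 0.05
    print("empirical type I error:", hits / tested)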
Estimating Rain Rates from Tipping-Bucket Rain Gauge Measurements
NASA Technical Reports Server (NTRS)
Wang, Jianxin; Fisher, Brad L.; Wolff, David B.
2007-01-01
This paper describes the cubic-spline-based operational system for the generation of the TRMM one-minute rain rate product 2A-56 from Tipping Bucket (TB) gauge measurements. Methodological issues associated with applying the cubic spline to TB gauge rain rate estimation are closely examined. A simulated TB gauge from a Joss-Waldvogel (JW) disdrometer is employed to evaluate the effects of time scales and rain event definitions on errors in the rain rate estimation. The comparison between rain rates measured by the JW disdrometer and those estimated from the simulated TB gauge shows good overall agreement; however, the TB gauge suffers sampling problems, resulting in errors in the rain rate estimation. These errors are very sensitive to the time scale of the rain rates. One-minute rain rates suffer substantial errors, especially at low rain rates. When one-minute rain rates are averaged to 4-7-minute or longer time scales, the errors reduce dramatically. The rain event duration is very sensitive to the event definition, but the event rain total is rather insensitive, provided that events with less than 1 millimeter rain totals are excluded. Estimated lower rain rates are sensitive to the event definition whereas the higher rates are not. The median relative absolute errors are about 22% and 32% for 1-minute TB rain rates higher and lower than 3 mm per hour, respectively. These errors decrease to 5% and 14% when TB rain rates are used at the 7-minute scale. The radar reflectivity-rain rate (Ze-R) distributions drawn from a large number of 7-minute TB rain rates and radar reflectivity data are mostly insensitive to the event definition.
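The spline idea is: fit a cubic spline to cumulative rainfall as a function of tip time, then differentiate it to get an instantaneous rain rate. A minimal sketch with SciPy and invented tip times (one tip = 0.254 mm) follows; a production system would also handle event boundaries and clip small negative overshoot of the derivative to zero, as done here.

    import numpy as np
    from scipy.interpolate import CubicSpline

    tip_times = np.array([0.0, 2.1, 3.0, 3.6, 4.1, 4.9, 6.4, 9.8])  # minutes (invented)
    cum_rain = 0.254 * np.arange(1, len(tip_times) + 1)             # mm, one tip each

    spline = CubicSpline(tip_times, cum_rain)
    minutes = np.linspace(0.0, 9.8, 11)
    rate = np.clip(spline(minutes, 1), 0.0, None)   # 1st derivative of cumulative rain
    print(60.0 * rate)                              # rain rates in mm/h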
Approximation of Bit Error Rates in Digital Communications
2007-06-01
and Technology Organisation DSTO—TN—0761 ABSTRACT This report investigates the estimation of bit error rates in digital communications, motivated by...recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase
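The report concerns bounds for differentially coherent QPSK, which has no simple closed form; as a reference point, the sketch below computes the textbook AWGN bit error rate for coherent BPSK, Pb = Q(sqrt(2 Eb/N0)), which any bound-based estimate can be sanity-checked against.

    import math

    def q_function(x):
        # Gaussian tail probability Q(x)
        return 0.5 * math.erfc(x / math.sqrt(2.0))

    def ber_bpsk(eb_n0_db):
        # Exact BER for coherent BPSK over AWGN
        eb_n0 = 10.0 ** (eb_n0_db / 10.0)
        return q_function(math.sqrt(2.0 * eb_n0))

    for snr_db in (0, 4, 8, 12):
        print(snr_db, "dB ->", ber_bpsk(snr_db))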
NASA Astrophysics Data System (ADS)
Brost, Alexander; Bourier, Felix; Wimmer, Andreas; Koch, Martin; Kiraly, Atilla; Liao, Rui; Kurzidim, Klaus; Hornegger, Joachim; Strobel, Norbert
2012-02-01
Atrial fibrillation (AFib) has been identified as a major cause of stroke. Radiofrequency catheter ablation has become an increasingly important treatment option, especially when drug therapy fails. Navigation under X-ray can be enhanced by using augmented fluoroscopy. It renders overlay images from pre-operative 3-D data sets which are then fused with X-ray images to provide more details about the underlying soft-tissue anatomy. Unfortunately, these fluoroscopic overlay images are compromised by respiratory and cardiac motion. Various methods to deal with motion have been proposed. To meet clinical demands, they have to be fast. Methods providing a processing frame rate of 3 frames-per-second (fps) are considered suitable for interventional electrophysiology catheter procedures if an acquisition frame rate of 2 fps is used. Unfortunately, when working at a processing rate of 3 fps, the delay until the actual motion compensated image can be displayed is about 300 ms. More recent algorithms can achieve frame rates of up to 20 fps, which reduces the lag to 50 ms. By using a novel approach involving a 3-D catheter model, catheter segmentation and a distance transform, we can speed up motion compensation to 25 fps which results in a display delay of only 40 ms on a standard workstation for medical applications. Our method uses a constrained 2-D/3-D registration to perform catheter tracking, and it obtained a 2-D tracking error of 0.61 mm.
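The role of the distance transform in such a cost function can be shown in a toy example: precompute, for every pixel, the distance to the nearest catheter pixel, then score a candidate 2-D/3-D registration by summing that map under the projected catheter model. The 8x8 segmentation below is invented, and the paper's actual model projection and optimizer are not reproduced.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    seg = np.zeros((8, 8), dtype=int)   # toy binary catheter segmentation
    seg[2:6, 3] = 1
    # Distance from every pixel to the nearest catheter pixel
    dist = distance_transform_edt(seg == 0)

    def registration_cost(model_pts):
        # Sum of distances from projected 3-D model points to the segmentation
        r, c = np.round(model_pts).astype(int).T
        return dist[r, c].sum()

    print(registration_cost(np.array([[2, 3], [3, 4], [4, 4], [5, 3]])))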
Arba-Mosquera, Samuel; Aslanides, Ioannis M.
2012-01-01
Purpose To analyze the effects of eye-tracker performance on pulse positioning errors during refractive surgery. Methods A comprehensive model has been developed that directly considers eye movements (including saccades, vestibular, optokinetic, vergence, and miniature movements), as well as eye-tracker acquisition rate, eye-tracker latency time, scanner positioning time, laser firing rate, and laser trigger delay. Results Eye-tracker acquisition rates below 100 Hz correspond to pulse positioning errors above 1.5 mm. Eye-tracker latency times of up to about 15 ms correspond to pulse positioning errors of up to 3.5 mm. Scanner positioning times of up to about 9 ms correspond to pulse positioning errors of up to 2 mm. Laser firing rates faster than the eye-tracker acquisition rate essentially double pulse-positioning errors. Laser trigger delays of up to about 300 μs have minor to no impact on pulse-positioning errors. Conclusions The proposed model can be used for comparison of laser systems used for ablation processes. Due to the pseudo-random nature of eye movements, positioning errors of single pulses are much larger than the decentrations observed in clinical settings. There is no single parameter that alone minimizes the positioning error; it is the optimal combination of the several parameters that minimizes the error. The results of this analysis are important for understanding the limitations of correcting very irregular ablation patterns.
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
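The k-out-of-n comparison has a simple closed form when machines are assumed independent with a common up-probability p: availability is the binomial tail P(X >= k). The sketch below evaluates it for a 7-machine cluster; p = 0.95 is an assumption for illustration, since the thesis derives its reward rates from measured failure data rather than a fixed p.

    from math import comb

    def k_out_of_n_availability(n, k, p):
        # Probability that at least k of n machines are up
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    p = 0.95                              # assumed per-machine up-probability
    for k in (7, 6, 5, 4, 3):
        print(f"{k}-out-of-7:", round(k_out_of_n_availability(7, k, p), 6))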
Angular Rate Optimal Design for the Rotary Strapdown Inertial Navigation System
Yu, Fei; Sun, Qian
2014-01-01
Due to its high precision over long durations, the rotary strapdown inertial navigation system (RSINS) has been widely used in submarines and surface ships. The core technology, the rotating scheme, has been studied by numerous researchers. It is well known that, as one of the key parameters, the rotating angular rate strongly influences the effectiveness of the error modulation. In order to design the optimal rotating angular rate of the RSINS, the relationship between the rotating angular rate and the velocity error of the RSINS was analyzed in detail based on the Laplace transform and the inverse Laplace transform in this paper. The analysis results showed that the velocity error of the RSINS depends not only on the sensor error, but also on the rotating angular rate. In order to minimize the velocity error, the rotating angular rate of the RSINS should match the sensor error. An optimal design method for the rotating rate of the RSINS was also proposed in this paper. Simulation and experimental results verified the validity and superiority of this optimal design method for the rotating rate of the RSINS. PMID:24759115
Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.
Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D
2016-10-01
Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Zhang, Jiayu; Li, Jie; Zhang, Xi; Che, Xiaorui; Huang, Yugang; Feng, Kaiqiang
2018-05-04
The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on the inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including acceleration-deceleration process, and instability of the angular rate on the navigation accuracy of RSSINS is deduced and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2013 CFR
2013-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2014 CFR
2014-10-01
....102 Public Welfare Department of Health and Human Services GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2012 CFR
2012-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
45 CFR 98.102 - Content of Error Rate Reports.
Code of Federal Regulations, 2011 CFR
2011-10-01
....102 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES GENERAL ADMINISTRATION CHILD CARE AND DEVELOPMENT FUND Error Rate Reporting § 98.102 Content of Error Rate Reports. (a) Baseline Submission Report... payments by the total dollar amount of child care payments that the State, the District of Columbia or...
Impact of an antiretroviral stewardship strategy on medication error rates.
Shea, Katherine M; Hobbs, Athena Lv; Shumake, Jason D; Templet, Derek J; Padilla-Tolentino, Eimeira; Mondy, Kristin E
2018-05-02
The impact of an antiretroviral stewardship strategy on medication error rates was evaluated. This single-center, retrospective, comparative cohort study included patients at least 18 years of age infected with human immunodeficiency virus (HIV) who were receiving antiretrovirals and admitted to the hospital. A multicomponent approach was developed and implemented and included modifications to the order-entry and verification system, pharmacist education, and a pharmacist-led antiretroviral therapy checklist. Pharmacists performed prospective audits using the checklist at the time of order verification. To assess the impact of the intervention, a retrospective review was performed before and after implementation to assess antiretroviral errors. Totals of 208 and 24 errors were identified before and after the intervention, respectively, resulting in a significant reduction in the overall error rate (p < 0.001). In the postintervention group, significantly lower medication error rates were found in both patient admissions containing at least 1 medication error (p < 0.001) and those with 2 or more errors (p < 0.001). Significant reductions were also identified in each error type, including incorrect/incomplete medication regimen, incorrect dosing regimen, incorrect renal dose adjustment, incorrect administration, and the presence of a major drug-drug interaction. A regression tree selected ritonavir as the only specific medication that best predicted more errors preintervention (p < 0.001); however, no antiretrovirals reliably predicted errors postintervention. An antiretroviral stewardship strategy for hospitalized HIV patients including prospective audit by staff pharmacists through use of an antiretroviral medication therapy checklist at the time of order verification decreased error rates. Copyright © 2018 by the American Society of Health-System Pharmacists, Inc. All rights reserved.
A homeostatic-driven turnover remodelling constitutive model for healing in soft tissues.
Comellas, Ester; Gasser, T Christian; Bellomo, Facundo J; Oller, Sergio
2016-03-01
Remodelling of soft biological tissue is characterized by interacting biochemical and biomechanical events, which change the tissue's microstructure, and, consequently, its macroscopic mechanical properties. Remodelling is a well-defined stage of the healing process, and aims at recovering or repairing the injured extracellular matrix. Like other physiological processes, remodelling is thought to be driven by homeostasis, i.e. it tends to re-establish the properties of the uninjured tissue. However, homeostasis may never be reached, such that remodelling may also appear as a continuous pathological transformation of diseased tissues during aneurysm expansion, for example. A simple constitutive model for soft biological tissues that regards remodelling as homeostatic-driven turnover is developed. Specifically, the recoverable effective tissue damage, whose rate is the sum of a mechanical damage rate and a healing rate, serves as a scalar internal thermodynamic variable. In order to integrate the biochemical and biomechanical aspects of remodelling, the healing rate is, on the one hand, driven by mechanical stimuli, but, on the other hand, subjected to simple metabolic constraints. The proposed model is formulated in accordance with continuum damage mechanics within an open-system thermodynamics framework. The numerical implementation in an in-house finite-element code is described, particularized for Ogden hyperelasticity. Numerical examples illustrate the basic constitutive characteristics of the model and demonstrate its potential in representing aspects of remodelling of soft tissues. Simulation results are verified for their plausibility, but also validated against reported experimental data. © 2016 The Author(s).
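The constitutive idea, an effective damage variable whose rate combines a mechanical damage term and a homeostasis-seeking healing term, can be illustrated with a scalar toy integration. The relaxation form below (healing drives D toward a target D_target at rate eta) is an assumption for illustration, not the paper's stress-coupled continuum damage mechanics formulation.

    import numpy as np

    def simulate_turnover(t_end=100.0, dt=0.01, eta=0.05, D_target=0.2, D0=0.6):
        # Forward-Euler integration of dD/dt = mech_rate - heal_rate, with the
        # healing rate pulling the effective damage D toward a homeostatic target.
        n = int(t_end / dt)
        D = np.empty(n)
        D[0] = D0                          # injured initial state
        for i in range(1, n):
            mech_rate = 0.0                # no further mechanical damage
            heal_rate = eta * (D[i - 1] - D_target)
            D[i] = D[i - 1] + dt * (mech_rate - heal_rate)
        return D

    D = simulate_turnover()
    print(D[0], D[-1])   # decays from 0.6 toward the homeostatic target 0.2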
Jou, Judy; Techakehakij, Win
2012-09-01
Sugar-sweetened beverage (SSB) taxation is becoming of increasing interest as a policy aimed at addressing the rising prevalence of obesity in many countries. Preliminary evidence indicates its potential to not only reduce obesity prevalence, but also generate public revenue. However, differences in country-specific contexts create uncertainties in its possible outcomes. This paper urges careful consideration of country-specific characteristics by suggesting three points in particular that may influence the effectiveness of a volume-based soft drink excise tax: population obesity prevalence, soft drink consumption levels, and existing baseline tax rates. Data from 19 countries are compared with regard to each point. The authors suggest that SSB or soft drink taxation policy may be more effective in reducing obesity prevalence where existing obesity prevalence and soft drink consumption levels are high. Conversely, in countries where the baseline tax rate is already considered high, SSB taxation may not have a noticeable impact on consumption patterns or obesity prevalence, and may incur negative feedback from the beverage industry or the general public. Thorough evaluation of these points is recommended prior to adopting SSB or soft drink taxation as an obesity reduction measure in any given country. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Czerny, Bozena; Zycki, Piotr T.
1994-01-01
The broad-band ROSAT/EXOSAT X-ray spectra of six Seyfert 1 galaxies are fitted by a model consisting of a direct power law and a component due to reflection/reprocessing from a partially ionized, optically thick medium. The reflected spectrum contains emission features from various elements in the soft X-ray range. In all objects but one (Mrk 335), the fit is satisfactory, and no additional soft X-ray excess is required by the data. This means that in most sources there is no need for the thermal 'big blue bumps' to extend into soft X-rays, and the soft X-ray excesses reported previously can be explained by reflection/reprocessing. Satisfactory fits are obtained for a medium ionized by a source radiating at ≲ 15% of the Eddington rate. The fits require that the reflection be enhanced relative to an isotropically emitting source above a flat disk. The necessary high effectiveness of reflection in the soft X-ray band requires strong soft thermal flux dominating over hard X-rays.
Taxing soft drinks and restricting access to vending machines to curb child obesity.
Fletcher, Jason M; Frisvold, David; Tefft, Nathan
2010-05-01
One of the largest drivers of the current obesity epidemic is thought to be excessive consumption of sugar-sweetened beverages. Some have proposed vending machine restrictions and taxing soft drinks to curb children's consumption of soft drinks; to a large extent, these policies have not been evaluated empirically. We examine these policies using two nationally representative data sets and find no evidence that, as currently practiced, either is effective at reducing children's weight. We conclude by outlining changes that may increase their effectiveness, such as implementing comprehensive restrictions on access to soft drinks in schools and imposing higher tax rates than are currently in place in many jurisdictions.
Soft pair excitations and double-log divergences due to carrier interactions in graphene
NASA Astrophysics Data System (ADS)
Lewandowski, Cyprian; Levitov, L. S.
2018-03-01
Interactions between charge carriers in graphene lead to logarithmic renormalization of observables mimicking the behavior known in (3+1)-dimensional quantum electrodynamics (QED). Here we analyze soft electron-hole (e-h) excitations generated as a result of fast charge dynamics, a direct analog of the signature QED effect of multiple soft photons produced by the QED vacuum shakeup. We show that such excitations are generated in photon absorption, when a photogenerated high-energy e-h pair cascades down in energy and gives rise to multiple soft e-h excitations. This fundamental process is manifested in a double-log divergence in the emission rate of soft pairs and a characteristic power-law divergence in their energy spectrum of the form 1/ω ln(ω/Δ). Strong carrier-carrier interactions make pair production a prominent pathway in the photoexcitation cascade.
NASA Technical Reports Server (NTRS)
Tasca, D. M.
1981-01-01
Single event upset phenomena are discussed, taking into account cosmic-ray-induced errors in IIL microprocessors and logic devices, single event upsets in NMOS microprocessors, a prediction model for bipolar RAMs in a high energy ion/proton environment, the search for neutron-induced hard errors in VLSI structures, soft errors due to protons in the radiation belt, and the use of an ion microbeam to study single event upsets in microcircuits. Basic mechanisms in materials and devices are examined, giving attention to gamma-induced noise in CCDs, the annealing of MOS capacitors, an analysis of photobleaching techniques for the radiation hardening of fiber optic data links, a hardened field insulator, the simulation of radiation damage in solids, and the manufacturing of radiation-resistant optical fibers. Energy deposition and dosimetry are considered along with SGEMP/IEMP, radiation effects in devices, space radiation effects and spacecraft charging, EMP/SREMP, and aspects of fabrication, testing, and hardness assurance.
Spencer, Bruce D
2012-06-01
Latent class models are increasingly used to assess the accuracy of medical diagnostic tests and other classifications when no gold standard is available and the true state is unknown. When the latent class is treated as the true class, the latent class models provide measures of components of accuracy including specificity and sensitivity and their complements, type I and type II error rates. The error rates according to the latent class model differ from the true error rates, however, and empirical comparisons with a gold standard suggest the true error rates often are larger. We investigate conditions under which the true type I and type II error rates are larger than those provided by the latent class models. Results from Uebersax (1988, Psychological Bulletin 104, 405-416) are extended to accommodate random effects and covariates affecting the responses. The results are important for interpreting the results of latent class analyses. An error decomposition is presented that incorporates an error component from invalidity of the latent class model. © 2011, The International Biometric Society.
Han, Mira V; Thomas, Gregg W C; Lugo-Martinez, Jose; Hahn, Matthew W
2013-08-01
Current sequencing methods produce large amounts of data, but genome assemblies constructed from these data are often fragmented and incomplete. Incomplete and error-filled assemblies result in many annotation errors, especially in the number of genes present in a genome. This means that methods attempting to estimate rates of gene duplication and loss often will be misled by such errors and that rates of gene family evolution will be consistently overestimated. Here, we present a method that takes these errors into account, allowing one to accurately infer rates of gene gain and loss among genomes even with low assembly and annotation quality. The method is implemented in the newest version of the software package CAFE, along with several other novel features. We demonstrate the accuracy of the method with extensive simulations and reanalyze several previously published data sets. Our results show that errors in genome annotation do lead to higher inferred rates of gene gain and loss but that CAFE 3 sufficiently accounts for these errors to provide accurate estimates of important evolutionary parameters.
Derivation of an analytic expression for the error associated with the noise reduction rating
NASA Astrophysics Data System (ADS)
Murphy, William J.
2005-04-01
Hearing protection devices are assessed using the Real Ear Attenuation at Threshold (REAT) measurement procedure for the purpose of estimating the amount of noise reduction provided when worn by a subject. The rating number provided on the protector label is a function of the mean and standard deviation of the REAT results achieved by the test subjects. If a group of subjects have a large variance, then it follows that the certainty of the rating should be correspondingly lower. No estimate of the error of a protector's rating is given by existing standards or regulations. Propagation of errors was applied to the Noise Reduction Rating to develop an analytic expression for the hearing protector rating error term. Comparison of the analytic expression for the error to the standard deviation estimated from Monte Carlo simulation of subject attenuations yielded a linear relationship across several protector types and assumptions for the variance of the attenuations.
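A single-band stand-in makes the propagation-of-errors idea concrete: take the rating as R = mean - 2*SD of the subject attenuations, so Var(R) ≈ Var(mean) + 4*Var(SD) = s²/n + 4·s²/(2(n-1)), and compare against a Monte Carlo resampling of subjects. The real NRR combines octave-band data, so this is only an illustration of the method, not the standard's formula.

    import numpy as np

    def rating_and_error(atten, n_boot=20000, seed=0):
        # Simplified one-band rating R = mean - 2*SD and its uncertainty
        a = np.asarray(atten, dtype=float)
        n, m, s = len(a), a.mean(), a.std(ddof=1)
        var_R = s**2 / n + 4.0 * s**2 / (2.0 * (n - 1))   # propagation of errors
        rng = np.random.default_rng(seed)                 # Monte Carlo check
        boot = rng.choice(a, size=(n_boot, n), replace=True)
        R_boot = boot.mean(axis=1) - 2.0 * boot.std(axis=1, ddof=1)
        return m - 2.0 * s, np.sqrt(var_R), R_boot.std()

    atten = np.random.default_rng(1).normal(30.0, 6.0, size=20)  # REAT results, dB
    print(rating_and_error(atten))   # analytic and Monte Carlo errors should agree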
Errors in laboratory medicine: practical lessons to improve patient safety.
Howanitz, Peter J
2005-10-01
Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors are presented. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification, specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Each of the 8 performance measures has proven practical, useful, and important for patient care, and taken together they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.
On the possibility of non-invasive multilayer temperature estimation using soft-computing methods.
Teixeira, C A; Pereira, W C A; Ruano, A E; Ruano, M Graça
2010-01-01
This work reports original results on the possibility of non-invasive temperature estimation (NITE) in a multilayered phantom by applying soft-computing methods. The existence of reliable non-invasive temperature estimator models would improve the safety and efficacy of thermal therapies, which would lead to a broader acceptance of this kind of therapy. Several approaches based on medical imaging technologies have been proposed, with magnetic resonance imaging (MRI) identified as the only one achieving acceptable temperature resolution for hyperthermia purposes. However, the intrinsic characteristics of MRI (e.g., high instrumentation cost) led us to use backscattered ultrasound (BSU) instead. Among the different BSU features, temporal echo-shifts have received major attention. These shifts are due to changes of speed-of-sound and expansion of the medium. The originality of this work involves two aspects: the estimator model itself is original (based on soft-computing methods), and the application to temperature estimation in a three-layer phantom is also not reported in the literature. In this work a three-layer (non-homogeneous) phantom was developed. The two external layers were composed of (in % of weight): 86.5% degassed water, 11% glycerin and 2.5% agar-agar. The intermediate layer was obtained by adding graphite powder in the amount of 2% of the water weight to the above composition. The phantom was developed to have attenuation and speed-of-sound similar to in vivo muscle, according to the literature. BSU signals were collected and cumulative temporal echo-shifts computed. These shifts and the past temperature values were then considered as possible estimator inputs. A soft-computing methodology was applied to look for appropriate multilayered temperature estimators. The methodology involves radial-basis-function neural networks (RBFNN) with structure optimized by a multi-objective genetic algorithm (MOGA). In this work 40 operating conditions were considered, i.e. five 5-mm-spaced spatial points and eight therapeutic intensities (I(SATA)): 0.3, 0.5, 0.7, 1.0, 1.3, 1.5, 1.7 and 2.0 W/cm(2). Models were trained and selected to estimate temperature at only four intensities; then, during the validation phase, the best-fitted models were analyzed on data collected at all eight intensities. This procedure leads to a more realistic evaluation of the generalisation level of the best-obtained structures. At the end of the identification phase, 82 (preferable) estimator models were obtained. The majority of them present an average maximum absolute error (MAE) below 0.5 degrees C. The best-fitted estimator presents a MAE of only 0.4 degrees C across all 40 operating conditions. This means that the gold-standard maximum error (0.5 degrees C) specified for hyperthermia was met independently of the intensity and spatial position considered, showing the improved generalisation capacity of the identified estimator models. Like the majority of the preferable estimator models, the best one has 6 inputs and 11 neurons. In addition to the appropriate error performance, the estimator models also have reduced computational complexity and can therefore be applied in real time. A non-invasive temperature estimation model, based on a soft-computing technique, was thus proposed for a three-layered phantom.
The best-achieved estimator models presented appropriate error performance regardless of the spatial point considered (inside or at the interface of the layers) and of the intensity applied. Other methodologies published so far estimate temperature only in homogeneous media. The main drawback of the proposed methodology is the need for a priori knowledge of the temperature behavior. Data used for training and optimisation should be representative, i.e., they should cover all possible physical situations of the estimation environment.
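The estimator family used here, an RBF neural network with a fixed hidden layer and linear output weights, reduces to least squares once the centers and widths are fixed; the MOGA structure search is not reproduced. The sketch below uses invented inputs (a cumulative echo-shift and a past temperature) and borrows the 11-neuron sizing only loosely, as illustration.

    import numpy as np

    def rbf_design(X, centers, width):
        # Gaussian RBF features: one column per hidden neuron (center)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * width**2))

    rng = np.random.default_rng(0)
    X = rng.uniform(0.0, 1.0, size=(200, 2))                 # invented inputs
    y = 37.0 + 5.0 * X[:, 0] + 0.5 * np.sin(6.0 * X[:, 1])   # invented target, deg C

    centers = X[rng.choice(200, size=11, replace=False)]     # 11 hidden neurons
    Phi = rbf_design(X, centers, width=0.3)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # linear output weights
    print("max abs training error:", np.abs(Phi @ w - y).max())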
NASA Astrophysics Data System (ADS)
Xuan, Yue
Background. Soft materials such as polymers and soft tissues have diverse applications in bioengineering, medical care, and industry. Quantitative mechanical characterization of soft materials at multiple scales is required to assure that appropriate mechanical properties are present to support normal material function. The indentation test has been widely used to characterize soft materials; however, the measurement of the in situ contact area is always difficult. Method of Approach. A transparent indenter method was introduced to characterize the nonlinear behaviors of soft materials under large deformation. This approach made direct measurement of the contact area and local deformation possible. A microscope was used to capture the contact area evolution as well as the surface deformation. Based on this transparent indenter method, a novel transparent indentation measurement system has been built and multiple soft materials including polymers and pericardial tissue have been characterized. Seven different indenters have been used to study the strain distribution on the contact surface, inner layer and vertical layer. Finite element models have been built to simulate the hyperelastic and anisotropic material behaviors. Proper material constants were obtained by fitting the experimental results. Results. Homogeneous and anisotropic silicone rubber and porcine pericardial tissue have been examined. Contact area and local deformation were measured by real-time imaging of the contact interface. The experimental results were compared with the predictions from the Hertzian equations. The accurate measurement of contact area results in a more reliable Young's modulus, which is critical for soft materials. For the fiber-reinforced anisotropic silicone rubber, the projected contact area under a hemispherical indenter exhibited an elliptical shape. The local surface deformation under the indenter was mapped using a digital image correlation program. Punch tests were applied to thin films of silicone rubber and porcine pericardial tissue and results were analyzed using the same method. Conclusions. The transparent indenter testing system can effectively reduce measurement error in material properties by directly measuring the contact radii. The contact shape can provide valuable information on the anisotropic properties of the material. Local surface deformation, including the contact surface, inner layer and vertical plane, can be accurately tracked and mapped to study the strain distribution. The potential of the transparent indenter measurement system for investigating biological materials and biomaterials was verified. The experimental data, including the real-time contact area, combined with finite element simulation would be a powerful tool to study the mechanical properties of soft materials and their relation to microstructure, which has potential in pathology studies such as tissue repair and surgical planning. Key words: transparent indenter, large deformation, soft material, anisotropic.
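For the spherical-indenter case, Hertz theory links load, indenter radius, and measured contact radius through a³ = 3FR/(4E*), so a directly imaged contact radius yields the reduced modulus without assuming the contact area. The numbers below are invented for a soft elastomer, not measurements from the thesis.

    def reduced_modulus(force_N, sphere_radius_m, contact_radius_m):
        # Hertzian sphere-on-flat: a^3 = 3 F R / (4 E*), solved for E*
        return 3.0 * force_N * sphere_radius_m / (4.0 * contact_radius_m**3)

    F, R, a = 0.05, 2e-3, 0.8e-3     # 50 mN load, 2 mm sphere, 0.8 mm contact radius
    E_star = reduced_modulus(F, R, a)
    # For a rigid indenter on a nearly incompressible solid (nu ~ 0.5):
    # E = E_star * (1 - nu**2) ~ 0.75 * E_star
    print(E_star, 0.75 * E_star)     # ~146 kPa reduced modulus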
The statistical validity of nursing home survey findings.
Woolley, Douglas C
2011-11-01
The Medicare nursing home survey is a high-stakes process whose findings greatly affect nursing homes, their current and potential residents, and the communities they serve. Therefore, survey findings must achieve high validity. This study looked at the validity of one key assessment made during a nursing home survey: the observation of the rate of errors in administration of medications to residents (med-pass). Statistical analysis of the case under study and of alternative hypothetical cases. A skilled nursing home affiliated with a local medical school. The nursing home administrators and the medical director. Observational study. The probability that state nursing home surveyors make a Type I or Type II error in observing med-pass error rates, based on the current case and on a series of postulated med-pass error rates. In the common situation, such as our case, where med-pass errors occur at slightly above a 5% rate after 50 observations and therefore trigger a citation, the chance that the true rate remains above 5% after a large number of observations is just above 50%. If the true med-pass error rate were as high as 10%, and the survey team wished to achieve 75% accuracy in determining that a citation was appropriate, they would have to make more than 200 med-pass observations. In the more common situation where med-pass errors are closer to 5%, the team would have to observe more than 2000 med-passes to achieve even a modest 75% accuracy in their determinations. In settings where error rates are low, large numbers of observations of an activity must be made to reach acceptable validity of estimates for the true rates of errors. In observing key nursing home functions with current methodology, the state Medicare nursing home survey process does not adhere to well-known principles of valid error determination. Alternate approaches to survey methodology are discussed. Copyright © 2011 American Medical Directors Association. Published by Elsevier Inc. All rights reserved.
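The sampling problem is binomial: with n observed med-passes and a 5% citation threshold, the chance of crossing the threshold depends strongly on n. The sketch below computes that probability for a few true error rates and sample sizes; it illustrates the statistical point rather than reproducing the paper's exact calculations.

    from scipy.stats import binom

    def p_cited(true_rate, n_obs, threshold=0.05):
        # Probability that the observed error fraction exceeds the threshold
        k_min = int(threshold * n_obs) + 1
        return 1.0 - binom.cdf(k_min - 1, n_obs, true_rate)

    # At a true 5% rate, 50 observations cross the 5% threshold ~46% of the
    # time, so a single short survey is close to a coin flip.
    for p in (0.05, 0.10):
        for n in (50, 200, 2000):
            print(p, n, round(p_cited(p, n), 3))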
Linking resources with demography to understand resource limitation for bears
Reynolds-Hogland, M. J.; Pacifici, L.B.; Mitchell, M.S.
2007-01-01
1. Identifying the resources that limit growth of animal populations is essential for effective conservation; however, resource limitation is difficult to quantify. Recent advances in geographical information systems (GIS) and resource modelling can be combined with demographic modelling to yield insights into resource limitation. 2. Using long-term data on a population of black bears Ursus americanus, we evaluated competing hypotheses about whether availability of hard mast (acorns and nuts) or soft mast (fleshy fruits) limited bears in the southern Appalachians, USA, during 1981-2002. The effects of clearcutting on habitat quality were also evaluated. Annual survival, recruitment and population growth rate were estimated using capture-recapture data from 101 females. The availability of hard mast, soft mast and clearcuts was estimated with a GIS, as each changed through time as a result of harvest and succession, and then availabilities were incorporated as covariates for each demographic parameter. 3. The model with the additive availability of hard mast and soft mast across the landscape predicted survival and population growth rate. Availability of young clearcuts predicted recruitment, but not population growth or survival. 4. Availability of hard mast stands across the landscape and availability of soft mast across the landscape were more important than hard mast production and availability of soft mast in young clearcuts, respectively. 5. Synthesis and applications. Our results indicate that older stands, which support high levels of hard mast and moderate levels of soft mast, should be maintained to sustain population growth of bears in the southern Appalachians. Simultaneously, the acreage of intermediate aged stands (10-25 years), which support very low levels of both hard mast and soft mast, should be minimized. The approach used in this study has broad application for wildlife management and conservation. State and federal wildlife agencies often possess long-term data on both resource availability and capture-recapture for wild populations. Combined, these two data types can be used to estimate survival, recruitment, population growth, elasticities of vital rates and the effects of resource availability on demographic parameters. Hence data that are traditionally used to understand population trends can be used to evaluate how and why demography changes over time. © 2007 The Authors.
Discovery of Photon Index Saturation in the Black Hole Binary GRS 1915+105
NASA Technical Reports Server (NTRS)
Titarchuk, Lev; Seifina, Elena
2009-01-01
We present a study of the correlations between spectral and timing properties and mass accretion rate observed in X-rays from the Galactic black hole (BH) binary GRS 1915+105 during the transition between hard and soft states. We analyze all transition episodes from this source observed with the Rossi X-ray Timing Explorer (RXTE), coordinated with Ryle Radio Telescope (RT) observations. We show that the broad-band energy spectra of GRS 1915+105 during all these spectral states can be adequately represented by two Bulk Motion Comptonization (BMC) components: a hard component (BMC1, photon index Γ1 = 1.7-3.0) with a turnover at high energies, a soft thermal component (BMC2, Γ2 = 2.7-4.2) with characteristic color temperature ≤ 1 keV, and a red-skewed iron line (LAOR) component. We also present observable correlations between the index and the normalization of the disk "seed" component. The use of the "seed" disk normalization, which is presumably proportional to the mass accretion rate in the disk, is crucial to establish the index saturation effect during the transition to the soft state. We discovered photon index saturation of the soft and hard spectral components at values of ≲ 4.2 and 3, respectively. We present a physical model which explains the index-seed photon normalization correlations. We argue that the index saturation effect of the hard component (BMC1) is due to soft photon Comptonization in the converging inflow close to the BH, and that of the soft component is due to matter accumulation in the transition layer when the mass accretion rate increases. Furthermore, we demonstrate a strong correlation between the equivalent width of the iron line and the radio flux in GRS 1915+105. In addition to our spectral model components, we also find a strong "blackbody-like" bump whose color temperature is about 4.5 keV in eight observations of the intermediate and soft states. We discuss a possible origin of this "blackbody-like" emission.
How does aging affect the types of error made in a visual short-term memory ‘object-recall’ task?
Sapkota, Raju P.; van der Linde, Ian; Pardhan, Shahina
2015-01-01
This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (mean = 23.3 years, SD = 3.76) and 17 normally aging older (mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real-world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. The non-target error rate was higher than the non-memory error rate in both age groups, indicating that VSTM may more often than not have been populated with partial traces of previously presented items. At high memory load, the non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.
Clinical biochemistry laboratory rejection rates due to various types of preanalytical errors.
Atay, Aysenur; Demir, Leyla; Cuhadar, Serap; Saglam, Gulcan; Unal, Hulya; Aksun, Saliha; Arslan, Banu; Ozkan, Asuman; Sutcu, Recep
2014-01-01
Preanalytical errors, arising anywhere in the process from the test request to the admission of the specimen to the laboratory, cause the rejection of samples. The aim of this study was to better explain the reasons for sample rejection and the rejection rates in particular test groups in our laboratory. This preliminary study examined the samples rejected over a one-year period, by rate and type of inappropriateness. Test requests and blood samples of the clinical chemistry, immunoassay, hematology, glycated hemoglobin, coagulation, and erythrocyte sedimentation rate test units were evaluated. Types of inappropriateness were as follows: improperly labelled samples, hemolysed specimens, clotted specimens, insufficient specimen volume, and total request errors. A total of 5,183,582 test requests from 1,035,743 blood collection tubes were considered. The total rejection rate was 0.65%. The rejection rate of the coagulation group (2.28%) was significantly higher than that of the other test groups (P < 0.001), with an insufficient-specimen-volume error rate of 1.38%. Hemolysis, clotted specimens, and insufficient specimen volume accounted for 8%, 24%, and 34% of rejections, respectively. Total request errors, particularly unintelligible requests, made up 32% of the total for inpatients. The errors were especially attributable to unintelligible or inappropriate test requests, improperly labelled samples for inpatients, and blood drawing errors, notably insufficient specimen volume in the coagulation test group. Further studies should be performed after corrective and preventive actions to detect a possible decrease in rejected samples.
Heterogeneous activation in 2D colloidal glass-forming liquids classified by machine learning
NASA Astrophysics Data System (ADS)
Ma, Xiaoguang; Davidson, Zoey; Still, Tim; Ivancic, Robert; Schoenholz, Sam S.; Sussman, Daniel M.; Liu, A. J.; Yodh, A. G.
The trajectories of particles in colloidal glass-forming liquids are often characterized by long periods of "in-cage" fluctuations and rapid "cage-breaking" rearrangements. We study the rate of such rearrangements and its connection with local cage structure in a 2D binary mixture of poly(N-isopropyl acrylamide) spheres. We use the hopping function, P_hop(t), to identify rearrangements within particle trajectories. We then obtain distributions of the residence time t_R between consecutive rearrangements. The mean residence time t_R(S) is found to correlate with the local configurations of the rearranging particles, characterized by 70 radial structural features and the softness S, which ranks structural similarity with respect to rearranging particles. Furthermore, t_R(S) for particles with similar softness decays monotonically with increasing softness, indicating a correlation between rearrangement rates and softness S. Finally, we find that the conditional and full probability distribution functions, P(t_R | S) and P(t_R), are well explained by a thermal activation model. We acknowledge financial support from NSF-MRSEC DMR11-20901, NSF DMR16-07378, and NASA NNX08AO0G.
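The analysis pipeline sketched in this abstract (threshold P_hop to find rearrangements, collect residence times, relate their mean to softness) can be illustrated compactly. The sketch below uses synthetic data; the threshold, the trajectory stand-ins, and the exponential-in-softness fit form are assumptions, not values from the paper.

```python
# Illustrative residence-time analysis on synthetic hopping-function data.
import numpy as np

rng = np.random.default_rng(0)
p_hop = rng.random((100, 5000))          # stand-in for P_hop(t), particles x frames
events = p_hop > 0.995                   # rearrangement frames (assumed threshold)

def residence_times(event_row):
    """Gaps (in frames) between consecutive rearrangement events of one particle."""
    t = np.flatnonzero(event_row)
    return np.diff(t)

t_r = np.concatenate([residence_times(row) for row in events])
print("mean residence time:", t_r.mean())

# Activated model: t_R(S) ~ exp(a + b*S); fit log of the per-particle mean against S.
softness = rng.normal(size=100)                       # stand-in for per-particle S
mean_tr = np.array([residence_times(r).mean() for r in events])
b, a = np.polyfit(softness, np.log(mean_tr), 1)       # slope b < 0 means faster at high S
print("fitted slope d log t_R / dS:", b)
```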
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk
Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations of the most viable dosimeter position provided by the AEDA in a data driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, hence the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that the AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.
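The AEDA decision logic described above lends itself to a schematic sketch: compare the measured dose-rate trace against calculated traces for candidate dosimeter positions, and uphold the error only if no candidate explains the measurements. The candidate dictionary, the relative-RMS discrepancy measure, and the tolerance below are illustrative assumptions, not the published algorithm's internals.

```python
# Schematic "most viable dosimeter position" matching (illustrative assumptions).
import numpy as np

rng = np.random.default_rng(4)

def aeda_classify(measured, calculated_by_position, tol=0.10):
    """Return ('false error'|'true error', best_position).

    measured: (T,) dose-rate samples; calculated_by_position: dict mapping a
    candidate dosimeter position to its (T,) calculated dose-rate trace."""
    def rel_rms(calc):
        return np.sqrt(np.mean((measured - calc) ** 2)) / np.mean(calc)
    best = min(calculated_by_position, key=lambda p: rel_rms(calculated_by_position[p]))
    verdict = "false error" if rel_rms(calculated_by_position[best]) < tol else "true error"
    return verdict, best

t = np.linspace(0.5, 5.0, 50)                                     # sampling times (a.u.)
candidates = {d: 100.0 / (t + d) ** 2 for d in (1.0, 1.5, 2.0)}   # inverse-square traces
measured = candidates[1.5] * (1 + rng.normal(0, 0.02, t.size))    # dosimeter really at 1.5
print(aeda_classify(measured, candidates))                        # -> ('false error', 1.5)
```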
Verifying Stability of Dynamic Soft-Computing Systems
NASA Technical Reports Server (NTRS)
Wen, Wu; Napolitano, Marcello; Callahan, John
1997-01-01
Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness in building intelligent systems that are flexible and robust. Although recent research has shown that certain classes of neuro-fuzzy controllers can be proven bounded and stable, these proofs are implementation dependent and difficult to apply to the design and validation process. Many practitioners adopt a trial-and-error approach for system validation or resort to exhaustive testing of prototypes. In this paper, we describe our ongoing research towards establishing the necessary theoretical foundations as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic nonlinear control theory and recent results on its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency-decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention-allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention-shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
Daboul, Amro; Ivanovska, Tatyana; Bülow, Robin; Biffar, Reiner; Cardini, Andrea
2018-01-01
Using 3D anatomical landmarks from adult human head MRIs, we assessed the magnitude of inter-operator differences in Procrustes-based geometric morphometric analyses. An in-depth analysis of both absolute and relative error was performed in a subsample of individuals whose digitization was replicated by three different operators. The effect of inter-operator differences was also explored in a large sample of more than 900 individuals. Although absolute error was not unusual for MRI measurements, including bone landmarks, shape was particularly affected by differences among operators, with up to more than 30% of sample variation accounted for by this type of error. The magnitude of the bias was such that it dominated the main pattern of bone and total (all landmarks included) shape variation, largely surpassing the effect of sex differences between hundreds of men and women. In contrast, however, we found higher reproducibility in soft-tissue nasal landmarks, despite relatively larger errors in estimates of nasal size. Our study exemplifies the assessment of measurement error using geometric morphometrics on landmarks from MRIs and stresses the importance of relating it to total sample variance within the specific methodological framework being used. In summary, precise landmarks may not necessarily imply negligible errors, especially in shape data; indeed, size and shape may be differentially impacted by measurement error, and different types of landmarks may have relatively larger or smaller errors. Importantly, and consistently with other recent studies using geometric morphometrics on digital images (which, however, were not specific to MRI data), this study showed that inter-operator biases can be a major source of error in the analysis of large samples, such as those becoming increasingly common in the 'era of big data'.
7 CFR 275.23 - Determination of State agency program performance.
Code of Federal Regulations, 2011 CFR
2011-01-01
... NUTRITION SERVICE, DEPARTMENT OF AGRICULTURE FOOD STAMP AND FOOD DISTRIBUTION PROGRAM PERFORMANCE REPORTING... section, the adjusted regressed payment error rate shall be calculated to yield the State agency's payment error rate. The adjusted regressed payment error rate is given by r1″ + r2″. (ii) If FNS determines...
Derks, E M; Zwinderman, A H; Gamazon, E R
2017-05-01
Population divergence impacts the degree of population stratification in genome-wide association studies. We aim to: (i) investigate the type-I error rate as a function of population divergence (F_ST) in multi-ethnic (admixed) populations; (ii) evaluate statistical power and effect size estimates; and (iii) investigate the impact of population stratification on the results of gene-based analyses. Quantitative phenotypes were simulated. The type-I error rate was investigated for single nucleotide polymorphisms (SNPs) with varying levels of F_ST between the ancestral European and African populations. The type-II error rate was investigated for a SNP characterized by a high value of F_ST. In all tests, genomic MDS components were included to correct for population stratification. Type-I and type-II error rates were adequately controlled in a population that included two distinct ethnic populations but not in admixed samples. Statistical power was reduced in the admixed samples. Gene-based tests showed no residual inflation in the type-I error rate.
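The stratification effect studied above can be demonstrated with a small simulation. The sketch below draws subpopulation allele frequencies under the Balding-Nichols model at a chosen F_ST, creates a null SNP in a sample with ancestry-linked phenotype differences, and compares the type-I error rate with and without an ancestry covariate (a stand-in for the genomic MDS components). Sample sizes and effect values are illustrative assumptions.

```python
# Type-I error under population stratification (Balding-Nichols allele frequencies).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

def type1_rate(fst, n=400, reps=1000, alpha=0.05, adjust=True):
    hits = 0
    for _ in range(reps):
        p = 0.3                                          # ancestral allele frequency
        a, b = p * (1 - fst) / fst, (1 - p) * (1 - fst) / fst
        p1, p2 = rng.beta(a, b, size=2)                  # Balding-Nichols subpop frequencies
        anc = rng.integers(0, 2, n)                      # ancestry label per subject
        geno = rng.binomial(2, np.where(anc == 1, p2, p1))
        y = 0.5 * anc + rng.normal(size=n)               # null phenotype, shifted by ancestry
        X = np.column_stack([geno, anc]) if adjust else geno[:, None]
        fit = sm.OLS(y, sm.add_constant(X)).fit()
        hits += fit.pvalues[1] < alpha                   # p-value of the SNP term
    return hits / reps

print("unadjusted:", type1_rate(0.1, adjust=False))      # inflated above 0.05
print("adjusted:  ", type1_rate(0.1, adjust=True))       # near the nominal 0.05
```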
Energy Performance Measurement and Simulation Modeling of Tactical Soft-Wall Shelters
2015-07-01
The period during which the load was too low to measure was on the order of 5 hours, because the research team did not have access to the site between 1700 and 0500 hours. The calibration was implemented in Visual Basic for Applications (VBA); the objective function was the root mean square (RMS) error between the modeled and measured heating loads.
Update on parts SEE susceptibility from heavy ions. [Single Event Effects]
NASA Technical Reports Server (NTRS)
Nichols, D. K.; Smith, L. S.; Schwartz, H. R.; Soli, G.; Watson, K.; Koga, R.; Crain, W. R.; Crawford, K. B.; Hansel, S. J.; Lau, D. D.
1991-01-01
JPL and the Aerospace Corporation have collected a fourth set of heavy ion single event effects (SEE) test data. Trends in SEE susceptibility (including soft errors and latchup) for state-of-the-art parts are displayed. All data are conveniently divided into two tables: one for MOS devices, and one for a shorter list of recently tested bipolar devices. In addition, a new table of data for latchup tests only (invariably CMOS processes) is given.
Temperature-based estimation of global solar radiation using soft computing methodologies
NASA Astrophysics Data System (ADS)
Mohammadi, Kasra; Shamshirband, Shahaboddin; Danesh, Amir Seyed; Abdullah, Mohd Shahidan; Zamani, Mazdak
2016-07-01
Precise knowledge of solar radiation is essential in different technological and scientific applications of solar energy. Temperature-based estimation of global solar radiation is appealing owing to the broad availability of measured air temperatures. In this study, the potential of soft computing techniques is evaluated to estimate daily horizontal global solar radiation (DHGSR) from measured maximum, minimum, and average air temperatures (Tmax, Tmin, and Tavg) in an Iranian city. For this purpose, a comparative evaluation of three methodologies, adaptive neuro-fuzzy inference system (ANFIS), radial basis function support vector regression (SVR-rbf), and polynomial basis function support vector regression (SVR-poly), is performed. Five combinations of Tmax, Tmin, and Tavg serve as inputs to develop the ANFIS, SVR-rbf, and SVR-poly models. The results show that all ANFIS, SVR-rbf, and SVR-poly models provide favorable accuracy. For all techniques, the highest accuracies are achieved by models (5), which use Tmax - Tmin and Tmax as inputs. According to the statistical results, SVR-rbf outperforms SVR-poly and ANFIS. For SVR-rbf (5), the mean absolute bias error, root mean square error, and correlation coefficient are 1.1931 MJ/m2, 2.0716 MJ/m2, and 0.9380, respectively. These results confirm that SVR-rbf can be used efficiently to estimate DHGSR from air temperatures.
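As a concrete illustration of the SVR-rbf approach evaluated above, the sketch below fits an RBF-kernel support vector regressor to the inputs of model (5), Tmax - Tmin and Tmax, and reports the abstract's three error metrics. The synthetic Hargreaves-style data and the hyperparameters are assumptions, since the measured Iranian data are not available here.

```python
# SVR-rbf estimation of daily global solar radiation from air temperatures (sketch).
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n = 1000
tmax = rng.uniform(5, 40, n)
tmin = tmax - rng.uniform(4, 16, n)
# Hargreaves-style synthetic target: radiation grows with sqrt(Tmax - Tmin).
dhgsr = 0.16 * np.sqrt(tmax - tmin) * 30 + rng.normal(0, 1.5, n)

X = np.column_stack([tmax - tmin, tmax])      # the inputs of "model (5)" above
Xtr, Xte, ytr, yte = train_test_split(X, dhgsr, test_size=0.3, random_state=0)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(Xtr, ytr)
pred = model.predict(Xte)

mabe = np.mean(np.abs(pred - yte))            # mean absolute bias error
rmse = np.sqrt(np.mean((pred - yte) ** 2))    # root mean square error
r = np.corrcoef(pred, yte)[0, 1]              # correlation coefficient
print(f"MABE={mabe:.3f} RMSE={rmse:.3f} r={r:.3f}")
```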
Hu, Jingwen; Klinich, Kathleen D; Miller, Carl S; Nazmi, Giseli; Pearlman, Mark D; Schneider, Lawrence W; Rupp, Jonathan D
2009-11-13
Motor-vehicle crashes are the leading cause of fetal deaths resulting from maternal trauma in the United States, and placental abruption is the most common cause of these deaths. To minimize this injury, new assessment tools, such as crash-test dummies and computational models of pregnant women, are needed to evaluate vehicle restraint systems with respect to reducing the risk of placental abruption. Developing these models requires accurate material properties for tissues in the pregnant abdomen under the dynamic loading conditions that can occur in crashes. A method has been developed for determining dynamic material properties of human soft tissues that combines results from uniaxial tensile tests, specimen-specific finite-element models based on laser scans that accurately capture non-uniform tissue-specimen geometry, and optimization techniques. The current study applies this method to characterizing material properties of placental tissue. For 21 placenta specimens tested at a strain rate of 12/s, the mean failure strain is 0.472 ± 0.097 and the mean failure stress is 34.80 ± 12.62 kPa. A first-order Ogden material model with ground-state shear modulus (μ) of 23.97 ± 5.52 kPa and exponent (α1) of 3.66 ± 1.90 best fits the test results. The new method provides a nearly 40% error reduction (p < 0.001) compared to traditional curve-fitting methods by considering detailed specimen geometry, loading conditions, and dynamic effects from high-speed loading. The proposed method can be applied to determine mechanical properties of other soft biological tissues.
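For readers wanting to use the fitted constants above, the first-order Ogden response in uniaxial tension can be evaluated directly. The sketch below assumes the common (2μ/α²) strain-energy convention and incompressibility; whether the paper's failure stress is Cauchy or nominal is an assumption here, so both are printed.

```python
# First-order Ogden model in uniaxial tension:
#   sigma_cauchy(lam) = (2*mu/alpha) * (lam**alpha - lam**(-alpha/2)),
# with nominal (engineering) stress sigma_cauchy / lam.
import numpy as np

mu, alpha = 23.97, 3.66            # kPa and dimensionless, mean values from the abstract

def ogden_uniaxial(strain):
    lam = 1.0 + strain                                   # stretch from engineering strain
    cauchy = (2 * mu / alpha) * (lam**alpha - lam**(-alpha / 2))
    return cauchy, cauchy / lam                          # (Cauchy, nominal) in kPa

cauchy, nominal = ogden_uniaxial(0.472)                  # mean failure strain from the abstract
print(f"Cauchy {cauchy:.1f} kPa, nominal {nominal:.1f} kPa")
# The nominal value lands near the ~34.8 kPa mean failure stress reported above.
```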
Xu, Tong; Ducote, Justin L; Wong, Jerry T; Molloi, Sabee
2011-02-21
Dual-energy chest radiography has the potential to provide better diagnosis of lung disease by removing the bone signal from the image. Dynamic dual-energy radiography is now possible with the introduction of digital flat-panel detectors. The purpose of this study is to evaluate the feasibility of using dynamic dual-energy chest radiography for functional lung imaging and tumor motion assessment. The dual-energy system used in this study can acquire up to 15 frames of dual-energy images per second. A swine animal model was mechanically ventilated and imaged using the dual-energy system. Sequences of soft-tissue images were obtained using dual-energy subtraction. Time-subtracted soft-tissue images were shown to be able to provide information on regional ventilation. Motion tracking of a lung anatomic feature (a branch of the pulmonary artery) was performed based on an image cross-correlation algorithm. The tracking precision was found to be better than 1 mm. An adaptive correlation model was established between the above tracked motion and an external surrogate signal (temperature within the tracheal tube). This model is used to predict lung feature motion using the continuous surrogate signal and low frame rate dual-energy images (0.1-3.0 frames per second). The average RMS error of the prediction was (1.1 ± 0.3) mm. Dynamic dual-energy imaging was shown to be potentially useful for functional lung imaging such as regional ventilation and kinetic studies. It can also be used for lung tumor motion assessment and prediction during radiation therapy.
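The image cross-correlation tracking mentioned above can be sketched with plain normalized cross-correlation. The implementation below is a bare-bones illustration, not the authors' algorithm; the search-window size and the synthetic arrays are assumptions.

```python
# Template tracking by normalized cross-correlation over a local search window.
import numpy as np

def track_feature(frame, template, center, search=20):
    """Return the (row, col) in `frame` where `template` best matches, searching
    within +/- `search` pixels of `center`."""
    th, tw = template.shape
    t = (template - template.mean()) / template.std()
    best, best_pos = -np.inf, center
    r0, c0 = center
    for r in range(max(r0 - search, 0), min(r0 + search, frame.shape[0] - th)):
        for c in range(max(c0 - search, 0), min(c0 + search, frame.shape[1] - tw)):
            patch = frame[r:r + th, c:c + tw]
            s = patch.std()
            if s == 0:
                continue
            ncc = np.mean(t * (patch - patch.mean()) / s)   # normalized cross-correlation
            if ncc > best:
                best, best_pos = ncc, (r, c)
    return best_pos

rng = np.random.default_rng(5)
frame0 = rng.normal(size=(128, 128))
template = frame0[60:76, 60:76].copy()            # feature template from frame 0
frame1 = np.roll(frame0, (3, -2), axis=(0, 1))    # feature moved by (3, -2) pixels
print(track_feature(frame1, template, (60, 60)))  # -> (63, 58)
```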
Zhu, Yuan O; Aw, Pauline P K; de Sessions, Paola Florez; Hong, Shuzhen; See, Lee Xian; Hong, Lewis Z; Wilm, Andreas; Li, Chen Hao; Hue, Stephane; Lim, Seng Gee; Nagarajan, Niranjan; Burkholder, William F; Hibberd, Martin
2017-10-27
Viral populations are complex, dynamic, and fast evolving. The evolution of groups of closely related viruses in a competitive environment is termed quasispecies. To fully understand the role that quasispecies play in viral evolution, characterizing the trajectories of viral genotypes in an evolving population is the key. In particular, long-range haplotype information for thousands of individual viruses is critical; yet generating this information is non-trivial. Popular deep sequencing methods generate relatively short reads that do not preserve linkage information, while third generation sequencing methods have higher error rates that make detection of low frequency mutations a bioinformatics challenge. Here we applied BAsE-Seq, an Illumina-based single-virion sequencing technology, to eight samples from four chronic hepatitis B (CHB) patients - once before antiviral treatment and once after viral rebound due to resistance. With single-virion sequencing, we obtained 248-8796 single-virion sequences per sample, which allowed us to find evidence for both hard and soft selective sweeps. We were able to reconstruct population demographic history that was independently verified by clinically collected data. We further verified four of the samples independently through PacBio SMRT and Illumina Pooled deep sequencing. Overall, we showed that single-virion sequencing yields insight into viral evolution and population dynamics in an efficient and high throughput manner. We believe that single-virion sequencing is widely applicable to the study of viral evolution in the context of drug resistance and host adaptation, allows differentiation between soft or hard selective sweeps, and may be useful in the reconstruction of intra-host viral population demographic history.
López-Gatius, F
2011-07-01
During the periovulatory period in dairy cattle, the largest ovarian follicle can be felt by palpation per rectum as a firm/soft follicle (young preovulatory follicle), a very soft follicle separating it from the remainder of the ovary (mature preovulatory follicle), or an evacuated follicle (follicle associated with ovulation). Because any one of these three follicle types may be present at the time of artificial insemination, the objective of this study was to identify possible differences between the effects of a firm/soft, very soft, or evacuated ovarian follicle on fertility. Out of a study sample of 2365 inseminations, very soft, firm/soft, and evacuated follicles were recorded in 1689 (71%), 593 (25%), and 83 (3.5%) inseminations, respectively. Logistic regression analysis indicated no significant effects of largest follicle type, vaginal discharge, season, days in milk, parity, synchronized or natural estrus, or semen-providing bull on the pregnancy rate. The only variable included in the final logistic regression model was the season-by-follicle-type interaction. This interaction determined that the likelihood of pregnancy decreased significantly by factors of 0.86 or 0.82 in cows with a firm/soft follicle inseminated during the cool or warm period, respectively, and by a factor of 0.09 in cows with evacuated follicles inseminated during the warm period, using as reference cows with a very soft follicle inseminated during the cool period (which yielded the highest pregnancy rate). Overall, the state of the periovulatory follicle at insemination was clearly related to fertility and masked the effects of factors commonly affecting fertility, such as parity, days in milk at AI, and inseminating bull. More importantly, these findings suggest that by including ovarian follicle checks in artificial insemination routines, the success of this procedure could be improved.
Newman, Craig G J; Bevins, Adam D; Zajicek, John P; Hodges, John R; Vuillermoz, Emil; Dickenson, Jennifer M; Kelly, Denise S; Brown, Simona; Noad, Rupert F
2018-01-01
Ensuring reliable administration and reporting of cognitive screening tests is fundamental to good clinical practice and research. This study captured the rate and type of errors in clinical practice using the Addenbrooke's Cognitive Examination-III (ACE-III), and then the reduction in error rate using a computerized alternative, the ACEmobile app. In study 1, we evaluated ACE-III assessments completed in National Health Service (NHS) clinics (n = 87) for administrator error. In study 2, ACEmobile and ACE-III were then evaluated for their ability to capture accurate measurement. In study 1, 78% of clinically administered ACE-IIIs were either scored incorrectly or contained arithmetical errors. In study 2, error rates seen with the ACE-III were reduced by 85%-93% using ACEmobile. Errors are ubiquitous in routine clinical use of cognitive screening tests such as the ACE-III. ACEmobile provides a framework for reducing administration, scoring, and arithmetical errors during cognitive screening.
FPGA implementation of high-performance QC-LDPC decoder for optical communications
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2015-01-01
Forward error correction is one of the key technologies enabling next-generation high-speed fiber-optic communications. Quasi-cyclic (QC) low-density parity-check (LDPC) codes have been considered among the most promising candidates due to their large coding gain and low implementation complexity. In this paper, we present our designed QC-LDPC code with girth 10 and 25% overhead based on pairwise balanced design. By FPGA-based emulation, we demonstrate that the 5-bit soft-decision LDPC decoder can achieve 11.8 dB net coding gain with no error floor at a BER of 10^-15, without using any outer code or post-processing method. We believe that the proposed single QC-LDPC code is a promising solution for 400 Gb/s optical communication systems and beyond.
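The coding-gain figure above can be sanity-checked with the standard net-coding-gain (NCG) formula for binary transmission over AWGN. In the sketch below, the pre-FEC BER threshold is an assumed illustrative value, not one reported in the paper.

```python
# Net coding gain at output BER B_out for a rate-R code with pre-FEC threshold B_in:
#   NCG(dB) = 20*log10(erfcinv(2*B_out)) - 20*log10(erfcinv(2*B_in)) + 10*log10(R)
import numpy as np
from scipy.special import erfcinv

def ncg_db(ber_in, ber_out=1e-15, rate=0.8):   # 25% overhead -> code rate R = 0.8
    return (20 * np.log10(erfcinv(2 * ber_out))
            - 20 * np.log10(erfcinv(2 * ber_in))
            + 10 * np.log10(rate))

print(f"{ncg_db(3.4e-2):.1f} dB")   # ~11.8 dB for an assumed threshold near 3.4e-2
```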
Alexander, John H; Levy, Elliott; Lawrence, Jack; Hanna, Michael; Waclawski, Anthony P; Wang, Junyuan; Califf, Robert M; Wallentin, Lars; Granger, Christopher B
2013-09-01
In ARISTOTLE, apixaban resulted in a 21% reduction in stroke, a 31% reduction in major bleeding, and an 11% reduction in death. However, approval of apixaban was delayed to investigate a statement in the clinical study report that "7.3% of subjects in the apixaban group and 1.2% of subjects in the warfarin group received, at some point during the study, a container of the wrong type." Rates of study medication dispensing error were characterized through reviews of study medication container tear-off labels in 6,520 participants from randomly selected study sites. The potential effect of dispensing errors on study outcomes was statistically simulated in sensitivity analyses in the overall population. The rate of medication dispensing error resulting in treatment error was 0.04%. Rates of participants receiving at least 1 incorrect container were 1.04% (34/3,273) in the apixaban group and 0.77% (25/3,247) in the warfarin group. Most of the originally reported errors were data entry errors in which the correct medication container was dispensed but the wrong container number was entered into the case report form. Sensitivity simulations in the overall trial population showed no meaningful effect of medication dispensing error on the main efficacy and safety outcomes. Rates of medication dispensing error were low and balanced between treatment groups. The initially reported dispensing error rate was the result of data recording and data management errors and not true medication dispensing errors. These analyses confirm the previously reported results of ARISTOTLE.
Biogenic disturbance determines invasion success in a subtidal soft-sediment system.
Lohrer, Andrew M; Chiaroni, Luca D; Hewitt, Judi E; Thrush, Simon F
2008-05-01
Theoretically, disturbance and diversity can influence the success of invasive colonists if (1) resource limitation is a prime determinant of invasion success and (2) disturbance and diversity affect the availability of required resources. However, resource limitation is not of overriding importance in all systems, as exemplified by marine soft sediments, one of Earth's most widespread habitat types. Here, we tested the disturbance-invasion hypothesis in a marine soft-sediment system by altering rates of biogenic disturbance and tracking the natural colonization of plots by invasive species. Levels of sediment disturbance were controlled by manipulating densities of burrowing spatangoid urchins, the dominant biogenic sediment mixers in the system. Colonization success by two invasive species (a gobiid fish and a semelid bivalve) was greatest in plots with sediment disturbance rates < 500 cm³ m⁻² d⁻¹, at the low end of the experimental disturbance gradient (0 to > 9000 cm³ m⁻² d⁻¹). Invasive colonization declined with increasing levels of sediment disturbance, counter to the disturbance-invasion hypothesis. Increased sediment disturbance by the urchins also reduced the richness and diversity of native macrofauna (particularly small, sedentary, surface feeders), though there was no evidence of increased availability of resources with increased disturbance that would have facilitated invasive colonization: sediment food resources (chlorophyll a and organic matter content) did not increase, and space and access to overlying water were not limited (low invertebrate abundance). Thus, our study revealed the importance of biogenic disturbance in promoting invasion resistance in a marine soft-sediment community, providing further evidence of the valuable role of bioturbation in soft-sediment systems (bioturbation also affects carbon processing, nutrient recycling, oxygen dynamics, benthic community structure, and so on). Bioturbation rates are influenced by the presence and abundance of large burrowing species (like spatangoid urchins). Therefore, mass mortalities of large bioturbators could inflate invasion risk and alter other aspects of ecosystem performance in marine soft-sediment habitats.
Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1998-01-01
A code trellis is a graphical representation of a code, block or convolutional, in which every path represents a codeword (or a code sequence for a convolutional code). This representation makes it possible to implement maximum likelihood decoding (MLD) of a code with reduced decoding complexity. The most well known trellis-based MLD algorithm is the Viterbi algorithm. The trellis representation was first introduced and used for convolutional codes [23]. This representation, together with the Viterbi decoding algorithm, has resulted in a wide range of applications of convolutional codes for error control in digital communications over the last two decades. Research on the trellis structure of block codes, by contrast, was long inactive. There are two major reasons for this inactive period of research in this area. First, most coding theorists at that time believed that block codes did not have simple trellis structure like convolutional codes, and that maximum likelihood decoding of linear block codes using the Viterbi algorithm was practically impossible, except for very short block codes. Second, since almost all linear block codes are constructed algebraically or based on finite geometries, it was the belief of many coding theorists that algebraic decoding was the only way to decode these codes. These two reasons seriously hindered the development of efficient soft-decision decoding methods for linear block codes and their applications to error control in digital communications. This led to a general belief that block codes are inferior to convolutional codes and hence not useful. Chapter 2 gives a brief review of linear block codes. The goal is to provide the essential background material for the development of trellis structure and trellis-based decoding algorithms for linear block codes in the later chapters. Chapters 3 through 6 present the fundamental concepts, finite-state machine model, state space formulation, basic structural properties, state labeling, construction procedures, complexity, minimality, and sectionalization of trellises. Chapter 7 discusses trellis decomposition and subtrellises for low-weight codewords. Chapter 8 first presents well known methods for constructing long powerful codes from short component codes or component codes of smaller dimensions, and then provides methods for constructing their trellises, which include Shannon and Cartesian product techniques. Chapter 9 deals with convolutional codes, puncturing, zero-tail termination and tail-biting. Chapters 10 through 13 present various trellis-based decoding algorithms, old and new. Chapter 10 first discusses the application of the well known Viterbi decoding algorithm to linear block codes, optimum sectionalization of a code trellis to minimize computation complexity, and design issues for IC (integrated circuit) implementation of a Viterbi decoder. Then it presents a new decoding algorithm for convolutional codes, named the Differential Trellis Decoding (DTD) algorithm. Chapter 12 presents a suboptimum reliability-based iterative decoding algorithm with a low-weight trellis search for the most likely codeword. This decoding algorithm provides a good trade-off between error performance and decoding complexity. All the decoding algorithms presented in Chapters 10 through 12 are devised to minimize word error probability. Chapter 13 presents decoding algorithms that minimize bit error probability and provide the corresponding soft (reliability) information at the output of the decoder. The decoding algorithms presented are the MAP (maximum a posteriori probability) decoding algorithm and the Soft-Output Viterbi Algorithm (SOVA). Finally, the minimization of bit error probability in trellis-based MLD is discussed.
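Since the text surveys trellis-based decoding at length, a minimal working example helps fix ideas. The sketch below implements hard-decision Viterbi decoding for the classic rate-1/2, constraint-length-3 convolutional code with octal generators (7, 5); this toy code is an illustration of the trellis search, not a code from the text, and block-code decoding proceeds the same way once a trellis is built.

```python
# Hard-decision Viterbi decoding for the (7, 5) rate-1/2 convolutional code.
import itertools

G = (0b111, 0b101)                     # generator taps, constraint length K = 3

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state                           # (b_t, b_{t-1}, b_{t-2})
        out += [bin(reg & g).count("1") & 1 for g in G]  # parity of tapped bits
        state = reg >> 1                                 # newest bit enters the state
    return out

def viterbi(received):
    n_states = 4
    metric = [0] + [float("inf")] * (n_states - 1)       # start in the all-zero state
    paths = [[] for _ in range(n_states)]
    for i in range(0, len(received), 2):
        r = received[i:i + 2]
        new_metric = [float("inf")] * n_states
        new_paths = [None] * n_states
        for state, b in itertools.product(range(n_states), (0, 1)):
            reg = (b << 2) | state
            nxt = reg >> 1
            branch = sum(rb != (bin(reg & g).count("1") & 1) for rb, g in zip(r, G))
            m = metric[state] + branch
            if m < new_metric[nxt]:    # add-compare-select: keep the best path
                new_metric[nxt] = m    # into each trellis state
                new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=metric.__getitem__)]

msg = [1, 0, 1, 1, 0, 0]
rx = encode(msg)
rx[3] ^= 1                             # inject a single channel bit error
assert viterbi(rx) == msg              # corrected by the trellis search
```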
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and to non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
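A Monte Carlo caricature of the propagation described above is easy to write down. The sketch below combines rating-curve parameter samples (standing in for a Bayesian posterior) with one systematic stage offset per series and independent per-reading noise; the power-law rating form Q = a(h - b)^c and all numerical values are illustrative assumptions.

```python
# Monte Carlo propagation of stage and rating-curve uncertainty to streamflow.
import numpy as np

rng = np.random.default_rng(3)
h_obs = np.array([0.8, 1.1, 1.6, 2.3])           # observed stages (m)
n_sim = 5000

a = rng.normal(12.0, 0.8, n_sim)                 # parametric rating-curve uncertainty
b = rng.normal(0.2, 0.02, n_sim)
c = rng.normal(1.6, 0.05, n_sim)
sys_err = rng.normal(0.0, 0.01, n_sim)           # one gauge-calibration offset per series
non_sys = rng.normal(0.0, 0.005, (n_sim, h_obs.size))   # per-reading noise (waves etc.)

h = h_obs + sys_err[:, None] + non_sys           # perturbed stage series
q = a[:, None] * np.clip(h - b[:, None], 0, None) ** c[:, None]

lo, hi = np.percentile(q, [2.5, 97.5], axis=0)   # 95% streamflow uncertainty band
print(np.round(lo, 1), np.round(hi, 1))
```

Keeping the systematic offset constant within each simulated series, rather than redrawing it per reading, is exactly what makes it matter for long-term flow averages, echoing the distinction the abstract emphasizes.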
The Advanced Light Source (ALS) Slicing Undulator Beamline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heimann, P. A.; Glover, T. E.; Plate, D.
2007-01-19
A beamline optimized for the bunch-slicing technique has been constructed at the Advanced Light Source (ALS). This beamline includes an in-vacuum undulator, soft and hard x-ray beamlines, and a femtosecond laser system. The soft x-ray beamline may operate in spectrometer mode, where an entire absorption spectrum is accumulated at one time, or in monochromator mode. The femtosecond laser system has a high repetition rate of 20 kHz to improve the average slicing flux. The performance of the soft x-ray branch of the ALS slicing undulator beamline will be presented.
Tully, Mary P; Buchan, Iain E
2009-12-01
Objective: to investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. Setting: an 880-bed university teaching hospital in North-west England. Method: data about prescribing errors identified by pharmacists (median 9 pharmacists collecting data per day, range 4-17) when conducting routine work were prospectively recorded on 38 randomly selected days over 18 months. Main outcome measures: proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward, or stage of patient admission. Results: 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at patient admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Conclusion: pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.
Analysis of soft-decision FEC on non-AWGN channels.
Cho, Junho; Xie, Chongjin; Winzer, Peter J
2012-03-26
Soft-decision forward error correction (SD-FEC) schemes are typically designed for additive white Gaussian noise (AWGN) channels. In a fiber-optic communication system, noise may be neither circularly symmetric nor Gaussian, thus violating an important assumption underlying SD-FEC design. This paper quantifies the impact of non-AWGN noise on SD-FEC performance for such optical channels. We use a conditionally bivariate Gaussian noise (CBGN) model to analyze the impact of correlations between the signal's two quadrature components, and assess the effect of CBGN on SD-FEC performance using the density evolution of low-density parity-check (LDPC) codes. On a CBGN channel generating severely elliptic noise clouds, it is shown that more than 3 dB of coding gain is attainable by utilizing correlation information. Our analyses also give insights into potential improvements of the detection performance for fiber-optic transmission systems assisted by SD-FEC.
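The penalty of ignoring noise correlation can be made concrete at the LLR level. The sketch below computes max-log LLRs for Gray-mapped QPSK under a full 2x2 noise covariance and under a circularly symmetric approximation of the same total power; the constellation, covariance, and received sample are illustrative assumptions, not values from the paper.

```python
# Max-log LLRs for QPSK under correlated (elliptic) vs circular Gaussian noise.
import numpy as np

symbols = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)  # Gray-mapped QPSK
bits = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])                    # bit labels per symbol

def llrs(r, cov):
    """Approximate (max-log) LLRs of the two QPSK bits for sample r, noise covariance cov."""
    icov = np.linalg.inv(cov)
    x = np.column_stack([np.real(r - symbols), np.imag(r - symbols)])  # (4, 2) error vectors
    d = np.einsum("ni,ij,nj->n", x, icov, x)          # Mahalanobis distances to symbols
    return [0.5 * (d[bits[:, k] == 1].min() - d[bits[:, k] == 0].min()) for k in range(2)]

r = 0.3 - 0.1j
elliptic = np.array([[0.30, 0.12], [0.12, 0.05]])     # correlated I/Q noise (assumed)
circular = np.eye(2) * elliptic.trace() / 2           # circular-AWGN assumption, same power
print("full covariance:", np.round(llrs(r, elliptic), 2))
print("circular approx:", np.round(llrs(r, circular), 2))
```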
Biomineralization of a Self-assembled, Soft-Matrix Precursor: Enamel
NASA Astrophysics Data System (ADS)
Snead, Malcolm L.
2015-04-01
Enamel is the bioceramic covering of teeth, a composite tissue composed of hierarchically organized hydroxyapatite crystallites fabricated by cells under physiologic pH and temperature. Enamel's material properties resist wear and fracture to serve a lifetime of chewing. Understanding the cellular and molecular mechanisms of enamel formation may allow a biology-inspired approach to material fabrication based on self-assembling proteins that control form and function. A genetic understanding of human diseases draws insight from nature's errors by exposing critical fabrication events that can be validated experimentally and duplicated in mice, using genetic engineering to phenocopy the human disease so that it can be explored in detail. This approach led to an assessment of amelogenin protein self-assembly that, when altered, disrupts fabrication of the soft enamel protein matrix. A misassembled protein matrix precursor results in the loss of cell-to-matrix contacts essential to fabrication and mineralization.
Random Walk Graph Laplacian-Based Smoothness Prior for Soft Decoding of JPEG Images.
Liu, Xianming; Cheung, Gene; Wu, Xiaolin; Zhao, Debin
2017-02-01
Given the prevalence of joint photographic experts group (JPEG) compressed images, optimizing image reconstruction from the compressed format remains an important problem. Instead of simply reconstructing a pixel block from the centers of indexed discrete cosine transform (DCT) coefficient quantization bins (hard decoding), soft decoding reconstructs a block by selecting appropriate coefficient values within the indexed bins with the help of signal priors. The challenge thus lies in how to define suitable priors and apply them effectively. In this paper, we combine three image priors to construct an efficient JPEG soft decoding algorithm: a Laplacian prior for DCT coefficients, a sparsity prior, and a graph-signal smoothness prior for image patches. Specifically, we first use the Laplacian prior to compute a minimum mean square error initial solution for each code block. Next, we show that while the sparsity prior can reduce block artifacts, limiting the size of the overcomplete dictionary (to lower computation) would lead to poor recovery of high DCT frequencies. To alleviate this problem, we design a new graph-signal smoothness prior (the desired signal has mainly low graph frequencies) based on the left eigenvectors of the random walk graph Laplacian matrix (LERaG). Compared with previous graph-signal smoothness priors, LERaG has desirable image filtering properties with low computation overhead. We demonstrate how LERaG can facilitate recovery of high DCT frequencies of a piecewise smooth signal via an interpretation of low graph frequency components as relaxed solutions to normalized cut in spectral clustering. Finally, we construct a soft decoding algorithm using the three signal priors with appropriate prior weights. Experimental results show that our proposal noticeably outperforms state-of-the-art soft decoding algorithms in both objective and subjective evaluations.
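The LERaG construction above rests on the random-walk graph Laplacian L_rw = I - D^{-1}W. The sketch below builds it for a small pixel patch with exponential intensity-difference edge weights, a standard choice that is an assumption here; the paper's exact weighting and its use of left eigenvectors may differ.

```python
# Random-walk graph Laplacian of a 4-connected pixel patch, used as a smoothness prior.
import numpy as np

def random_walk_laplacian(patch, sigma=10.0):
    """patch: (h, w) grayscale block. Graph: 4-connected pixel grid."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):            # right and down neighbours
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    a, b = i * w + j, ii * w + jj
                    wgt = np.exp(-((patch[i, j] - patch[ii, jj]) ** 2) / (2 * sigma**2))
                    W[a, b] = W[b, a] = wgt
    D_inv = np.diag(1.0 / W.sum(axis=1))
    return np.eye(n) - D_inv @ W                       # L_rw = I - D^{-1} W

patch = np.arange(16, dtype=float).reshape(4, 4)       # smooth ramp patch
L = random_walk_laplacian(patch)
x_smooth = patch.ravel()
x_rough = ((np.indices((4, 4)).sum(axis=0) % 2) * 15.0).ravel()   # checkerboard signal
print("smooth x^T L x:", float(x_smooth @ L @ x_smooth))  # small: low graph frequencies
print("rough  x^T L x:", float(x_rough @ L @ x_rough))    # large: high graph frequencies
```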
Disentangling AGN and Star Formation in Soft X-Rays
NASA Technical Reports Server (NTRS)
LaMassa, Stephanie M.; Heckman, T. M.; Ptak, A.
2012-01-01
We have explored the interplay of star formation and active galactic nucleus (AGN) activity in soft X-rays (0.5-2 keV) in two samples of Seyfert 2 galaxies (Sy2s). Using a combination of low-resolution CCD spectra from Chandra and XMM-Newton, we modeled the soft emission of 34 Sy2s using power-law and thermal models. For the 11 sources with high signal-to-noise Chandra imaging of the diffuse host galaxy emission, we estimate the luminosity due to star formation by removing the AGN and fitting the residual emission. The AGN and star formation contributions to the soft X-ray luminosity (i.e., L_x,AGN and L_x,SF) for the remaining 24 Sy2s were estimated from the power-law and thermal luminosities derived from spectral fitting. These luminosities were scaled based on a template derived from XSINGS analysis of normal star-forming galaxies. To account for errors in the luminosities derived from spectral fitting and the spread in the scaling factor, we estimated L_x,AGN and L_x,SF from Monte Carlo simulations. These simulated luminosities agree with L_x,AGN and L_x,SF derived from the Chandra imaging analysis within a 3σ confidence level. Using the infrared [Ne II] 12.8 μm and [O IV] 26 μm lines as proxies of star formation and AGN activity, respectively, we independently disentangle the contributions of these two processes to the total soft X-ray emission. This decomposition generally agrees with L_x,SF and L_x,AGN at the 3σ level. In the absence of resolvable nuclear emission, our decomposition method provides a reasonable estimate of the emission due to star formation in galaxies hosting type 2 AGNs.
Pokhai, Gabriel G; Oliver, Michele L; Gordon, Karen D
2009-09-01
Determination of the biomechanical properties of soft tissues such as tendons and ligaments is dependent on the accurate measurement of their cross-sectional area (CSA). Measurement methods, which involve contact with the specimen, are problematic because soft tissues are easily deformed. Noncontact measurement methods are preferable in this regard, but may experience difficulty in dealing with the complex cross-sectional shapes and glistening surfaces seen in soft tissues. Additionally, existing CSA measurement systems are separated from the materials testing machine, resulting in the inability to measure CSA during testing. Furthermore, CSA measurements are usually made in a different orientation, and with a different preload, prior to testing. To overcome these problems, a noncontact laser reflectance system (LRS) was developed. Designed to fit in an Instron 8872 servohydraulic test machine, the system measures CSA by orbiting a laser transducer in a circular path around a soft tissue specimen held by tissue clamps. CSA measurements can be conducted before and during tensile testing. The system was validated using machined metallic specimens of various shapes and sizes, as well as different sizes of bovine tendons. The metallic specimens could be measured to within 4% accuracy, and the tendons to within an average error of 4.3%. Statistical analyses showed no significant differences between the measurements of the LRS and those of the casting method, an established measurement technique. The LRS was successfully used to measure the changing CSA of bovine tendons during uniaxial tensile testing. The LRS developed in this work represents a simple, quick, and accurate way of reconstructing complex cross-sectional profiles and calculating cross-sectional areas. In addition, the LRS represents the first system capable of automatically measuring changing CSA of soft tissues during tensile testing, facilitating the calculation of more accurate biomechanical properties.
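The area computation implied by an orbiting laser transducer can be illustrated directly: each reading gives a boundary radius at a known angle, and the enclosed area follows from the shoelace formula. The elliptical test specimen below is an illustrative assumption, loosely echoing the machined-specimen validation described above.

```python
# Cross-sectional area from an orbiting-transducer radius profile (shoelace formula).
import numpy as np

def csa_from_profile(radii, angles):
    """Area enclosed by boundary points (r_i, theta_i)."""
    x, y = radii * np.cos(angles), radii * np.sin(angles)
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)   # one transducer orbit
a_mm, b_mm = 6.0, 4.0                                     # assumed elliptical specimen
r = a_mm * b_mm / np.sqrt((b_mm * np.cos(theta))**2 + (a_mm * np.sin(theta))**2)

print("measured CSA:", round(csa_from_profile(r, theta), 2), "mm^2")
print("true pi*a*b: ", round(np.pi * a_mm * b_mm, 2))     # agreement well under 4%
```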